
Let $\mu$ be a probability measure on $\mathbb{R}^d$ and $\mu_N$ its empirical measure with sample size $N$. We prove a concentration inequality for the optimal transport cost between $\mu$ and $\mu_N$ for cost functions with polynomial local growth that may nonetheless have superpolynomial global growth. This result generalizes and improves upon estimates of Fournier and Guillin. The proof combines ideas from empirical process theory with known concentration rates for compactly supported $\mu$. By partitioning $\mathbb{R}^d$ into annuli, we derive a global estimate from local estimates on the annuli: the global bound decomposes into a sum of the local estimates and a mean-deviation probability, for which efficient bounds are known.
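Schematically, with dyadic annuli as one concrete choice (the paper's exact partition and weights may differ), the decomposition behind the proof reads:

\[
A_0=\{x:\lVert x\rVert\le 1\},\qquad A_k=\{x:2^{k-1}<\lVert x\rVert\le 2^k\},\quad k\ge 1,
\]
\[
\mathcal{T}_c(\mu,\mu_N)\ \lesssim\ \sum_{k\ge 0}\mu(A_k)\,\mathcal{T}_c\big(\mu(\cdot\mid A_k),\,\mu_N(\cdot\mid A_k)\big)\ +\ \big(\text{deviation of the empirical masses }\mu_N(A_k)\text{ from }\mu(A_k)\big),
\]

where each conditioned measure is compactly supported, so the known local concentration rates apply on every annulus.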

Related content

Procedural content generation (PCG) is a growing field with numerous applications in the video game industry and great potential to help create better games at a fraction of the cost of manual creation. However, much of the work in PCG focuses on generating relatively straightforward levels in simple games, as it is challenging to design an optimisable objective function for complex settings. This limits the applicability of PCG to more complex and modern titles, hindering its adoption in industry. Our work addresses this limitation by introducing a compositional level generation method that recursively composes simple low-level generators into large and complex creations. This approach allows for easily optimisable objectives and for designing a complex structure in an interpretable way by referencing lower-level components. We empirically demonstrate that our method outperforms a non-compositional baseline by more accurately satisfying a designer's functional requirements in several tasks. Finally, we provide a qualitative showcase (in Minecraft) illustrating the large and complex, yet still coherent, structures generated from simple base generators.
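A minimal sketch of the recursive composition idea in Python; all names here (`Generator`, `compose`, the toy `Level` dict) are illustrative stand-ins, not the paper's API:

```python
from dataclasses import dataclass
from typing import Callable, List

Level = dict  # a level fragment, e.g. {"kind": ..., "children": [...]}

@dataclass
class Generator:
    """Wraps a low-level generator with its own easily optimisable objective."""
    produce: Callable[[], Level]

def compose(parts: List[Generator],
            layout: Callable[[List[Level]], Level]) -> Generator:
    """A composite generator is itself a Generator, so composition recurses."""
    return Generator(produce=lambda: layout([p.produce() for p in parts]))

# Example: rooms -> a corridor of rooms -> a floor made of corridors.
room = Generator(produce=lambda: {"kind": "room", "children": []})

def side_by_side(levels: List[Level]) -> Level:
    return {"kind": "composite", "children": levels}

corridor = compose([room] * 3, side_by_side)
floor = compose([corridor] * 2, side_by_side)
print(floor.produce())  # a nested, interpretable structure built from simple parts
```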

We develop a statistical inference method, with uniform confidence bands, for the optimal transport map between distributions on the real line. Optimal transport (OT) measures distances between distributions, and OT maps realize these distances. OT has been applied in many fields in recent years, and its statistical properties have attracted much interest. In particular, since the OT map is a function, inference in the uniform norm is important for visualization and interpretation. In this study, we derive the limit distribution of the uniform norm of the estimation error of the OT map, and develop a uniform confidence band based on it. In addition to the limit theorem, we develop a smoothed bootstrap method, validate it, and guarantee the asymptotic coverage probability of the confidence band. Our proof is based on the functional delta method and the representation of OT maps on the reals.
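On the real line, the OT map for convex costs is the monotone rearrangement $T=F_\nu^{-1}\circ F_\mu$ (target quantile function composed with source CDF). A plug-in sketch of its empirical estimate, the kind of estimator such inference targets (our illustration, not the paper's exact construction):

```python
import numpy as np

def empirical_ot_map(source: np.ndarray, target: np.ndarray):
    """Plug-in estimate of the 1-D OT map T = F_target^{-1} o F_source."""
    src = np.sort(source)
    tgt = np.sort(target)
    def T(x):
        # empirical CDF of the source, then empirical quantile of the target
        u = np.searchsorted(src, x, side="right") / len(src)
        return np.quantile(tgt, np.clip(u, 0.0, 1.0))
    return T

rng = np.random.default_rng(0)
T = empirical_ot_map(rng.normal(0, 1, 500), rng.normal(2, 1, 500))
print(T(0.0))  # roughly 2.0: the map shifts mass from N(0,1) toward N(2,1)
```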

Storage codes on graphs are an instance of \emph{codes with locality}, which are used in distributed storage schemes to provide local repairability. Specifically, the nodes of the graph correspond to storage servers, and the neighbourhood of each server constitutes the set of servers it can query to repair its stored data in the event of a failure. A storage code on a graph with $n$ vertices is a set of length-$n$ codewords over $\mathbb{F}_q$ where the $i$th codeword symbol is stored on server $i$, and it can be recovered by querying the neighbours of server $i$ according to the underlying graph. In this work, we study binary storage codes whose repair function is the parity check, and characterise the tradeoff between the locality of the code and its rate. Specifically, we show that the maximum rate of a code on $n$ vertices with locality $r$ is exactly $1-\frac{1}{n}\left\lceil \frac{n}{r+1}\right\rceil$. The lower bound on the rate is derived by constructing an explicit family of graphs with locality $r$, while the matching upper bound is obtained via a lower bound on the $\mathbb{F}_2$-rank of a class of symmetric binary matrices. Our upper bound on the maximal rate of a storage code matches the upper bound for the larger class of codes with locality derived by Tamo and Barg. As a corollary, we obtain the following asymptotic separation result: given a sequence $r(n)$, $n\geq 1$, there exists a sequence of graphs on $n$ vertices with storage codes of rate $1-o(1)$ if and only if $r(n)=\omega(1)$.
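A small sketch of how the rate of a parity-check storage code can be computed: codewords satisfy $c_i=\sum_{j\sim i}c_j$, i.e. $(A+I)c=0$ over $\mathbb{F}_2$ with $A$ the adjacency matrix, so the rate is the nullity of $A+I$ divided by $n$. This is our illustration of the definition; the paper's extremal graph family is not reproduced here.

```python
import numpy as np

def gf2_rank(M: np.ndarray) -> int:
    """Rank of a 0/1 integer matrix over F_2, by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]  # move the pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]              # clear the rest of the column
        rank += 1
    return rank

def storage_code_rate(A: np.ndarray) -> float:
    """Rate of the parity-check storage code on a graph with adjacency A:
    codewords satisfy (A + I)c = 0 over F_2, so the rate is the
    nullity of A + I divided by n."""
    n = A.shape[0]
    return (n - gf2_rank((A + np.eye(n, dtype=int)) % 2)) / n

# Complete graph K_3 (locality r = 2): rate 2/3, which equals
# 1 - (1/3) * ceil(3 / (2 + 1)), the maximum stated in the abstract.
K3 = np.ones((3, 3), dtype=int) - np.eye(3, dtype=int)
print(storage_code_rate(K3))  # 0.666...
```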

A new variant of Newton's method for empirical risk minimization is studied, where at each iteration of the optimization algorithm, the gradient and Hessian of the objective function are replaced by robust estimators taken from the existing literature on robust mean estimation for multivariate data. After proving a general theorem about the convergence of successive iterates to a small ball around the population-level minimizer, consequences of the theory in generalized linear models are studied when data are generated from Huber's epsilon-contamination model and/or heavy-tailed distributions. An algorithm for obtaining robust Newton directions based on the conjugate gradient method is also proposed, which may be more appropriate for high-dimensional settings, and conjectures about the convergence of the resulting algorithm are offered. Compared to robust gradient descent, the proposed algorithm enjoys the faster rates of convergence for successive iterates often achieved by second-order algorithms for convex problems, i.e., quadratic convergence in a neighborhood of the optimum, with a step size that may be chosen adaptively via backtracking line search.
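A hedged sketch of one robust Newton step, with coordinatewise median-of-means standing in for the multivariate robust mean estimators the paper draws on (which may differ):

```python
import numpy as np

def median_of_means(X: np.ndarray, n_blocks: int = 10) -> np.ndarray:
    """Coordinatewise median-of-means: one simple robust mean estimator
    (a stand-in; the paper uses estimators from the robust-statistics literature)."""
    blocks = np.array_split(np.random.permutation(X), n_blocks)
    return np.median([b.mean(axis=0) for b in blocks], axis=0)

def robust_newton_step(theta, grad_samples, hess_samples, damping=1e-6):
    """One Newton update with the sample gradient/Hessian replaced by robust
    estimates. grad_samples: (n, d) per-sample gradients; hess_samples:
    (n, d, d) per-sample Hessians, e.g. outer(x_i, x_i) for squared loss."""
    g = median_of_means(grad_samples)                    # robust gradient, shape (d,)
    H = median_of_means(hess_samples.reshape(len(hess_samples), -1))
    H = H.reshape(g.size, g.size) + damping * np.eye(g.size)
    return theta - np.linalg.solve(H, g)                 # unit step; backtracking in practice
```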

We design a novel algorithm for optimal transport by drawing from the entropic optimal transport, mirror descent, and conjugate gradients literatures. Our algorithm computes optimal transport costs to arbitrary accuracy without running into numerical stability issues. The algorithm is implemented efficiently on GPUs and is shown empirically to converge more quickly than traditional algorithms such as Sinkhorn's algorithm, both in the number of iterations and in wall-clock time, in many cases. We pay particular attention to the entropy of the marginal distributions and show that high-entropy marginals make for harder optimal transport problems, for which our algorithm is a good fit. We provide a careful ablation analysis with respect to algorithm and problem parameters, and present benchmarking on the MNIST dataset. The results suggest that our algorithm can be a useful addition to the practitioner's optimal transport toolkit. Our code is open-sourced at //github.com/adaptive-agents-lab/MDOT-PNCG .
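For context, a minimal sketch of the Sinkhorn baseline named in the abstract (not the paper's MDOT-PNCG algorithm); the exponential kernel below is exactly where small regularisation causes the numerical instability the paper avoids:

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.05, iters=1000):
    """Entropic OT baseline. C: cost matrix; a, b: marginal weights;
    eps: entropic regularisation strength."""
    K = np.exp(-C / eps)             # Gibbs kernel; underflows for small eps
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # transport plan with marginals ~ (a, b)
    return (P * C).sum()             # regularised OT cost estimate

n = 64
a = np.full(n, 1 / n)
b = np.full(n, 1 / n)
x = np.linspace(0, 1, n)
print(sinkhorn((x[:, None] - x[None, :]) ** 2, a, b))
```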

We formulate standard and multilevel Monte Carlo methods for the $k$th moment $\mathbb{M}^k_\varepsilon[\xi]$ of a Banach space valued random variable $\xi\colon\Omega\to E$, interpreted as an element of the $k$-fold injective tensor product space $\otimes^k_\varepsilon E$. For the standard Monte Carlo estimator of $\mathbb{M}^k_\varepsilon[\xi]$, we prove the $k$-independent convergence rate $1-\frac{1}{p}$ in the $L_q(\Omega;\otimes^k_\varepsilon E)$-norm, provided that (i) $\xi\in L_{kq}(\Omega;E)$ and (ii) $q\in[p,\infty)$, where $p\in[1,2]$ is the Rademacher type of $E$. By using the fact that Rademacher averages are dominated by Gaussian sums combined with a version of Slepian's inequality for Gaussian processes due to Fernique, we moreover derive corresponding results for multilevel Monte Carlo methods, including a rigorous error estimate in the $L_q(\Omega;\otimes^k_\varepsilon E)$-norm and the optimization of the computational cost for a given accuracy. Whenever the type of the Banach space $E$ is $p=2$, our findings coincide with known results for Hilbert space valued random variables. We illustrate the abstract results by three model problems: second-order elliptic PDEs with random forcing or random coefficient, and stochastic evolution equations. In these cases, the solution processes naturally take values in non-Hilbertian Banach spaces. Further applications, where physical modeling constraints impose a setting in Banach spaces of type $p<2$, are indicated.
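In outline (our notation; constants omitted), the standard Monte Carlo estimator and the rate stated above read:

\[
\mathbb{M}^k_\varepsilon[\xi]=\mathbb{E}\big[\xi^{\otimes k}\big]\approx \frac{1}{N}\sum_{i=1}^{N}\xi_i^{\otimes k},\qquad
\Big\|\mathbb{M}^k_\varepsilon[\xi]-\frac{1}{N}\sum_{i=1}^{N}\xi_i^{\otimes k}\Big\|_{L_q(\Omega;\otimes^k_\varepsilon E)}\le C\,N^{-(1-\frac{1}{p})}\,\|\xi\|_{L_{kq}(\Omega;E)}^{k},
\]

where $\xi_1,\dots,\xi_N$ are i.i.d. copies of $\xi$, $q\in[p,\infty)$, and $p\in[1,2]$ is the Rademacher type of $E$.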

This work concerns the use of Gaussian surrogate models for Bayesian inverse problems associated with linear partial differential equations. A particular focus is the regime where only a small amount of training data is available. In this regime, the type of Gaussian prior used is of critical importance for how well the surrogate model performs in Bayesian inversion. We extend the framework of Raissi et al. (2017) to construct PDE-informed Gaussian priors, which we then use to build different approximate posteriors. A number of numerical experiments illustrate the superiority of the PDE-informed Gaussian priors over more traditional priors.
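In outline, the construction of Raissi et al. for a linear PDE $\mathcal{L}u=f$: a Gaussian process prior on $u$ induces, by linearity, a joint GP on $(u,f)$ (stated here schematically; the priors above extend this framework):

\[
u\sim\mathcal{GP}\big(0,k(x,x')\big)\ \Longrightarrow\
\operatorname{Cov}\big(f(x),f(x')\big)=\mathcal{L}_x\mathcal{L}_{x'}k(x,x'),\qquad
\operatorname{Cov}\big(f(x),u(x')\big)=\mathcal{L}_x k(x,x'),
\]

so that observations of the data $f$ constrain the posterior on the solution $u$ through the PDE itself.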

In this article, we propose two kinds of neural networks, inspired by the power method and the inverse power method, to solve linear eigenvalue problems. These neural networks share similar ideas with the traditional methods, with the differential operator realized by automatic differentiation. The eigenfunction of the eigenvalue problem is learned by the neural network, and the iterative algorithms are implemented by optimizing specially defined loss functions. The largest positive eigenvalue, the smallest eigenvalue, and interior eigenvalues (given prior knowledge) can be computed efficiently. We examine the applicability and accuracy of our methods in numerical experiments in one, two, and higher dimensions. Numerical results show that our methods yield accurate approximations of both eigenvalues and eigenfunctions.
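The classical iterations the networks emulate, sketched on a matrix (in the paper, the eigenfunction is a neural network and the operator acts via automatic differentiation):

```python
import numpy as np

def power_method(A, iters=200):
    """Classical power iteration: converges to the dominant eigenpair."""
    v = np.random.default_rng(0).normal(size=A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)   # normalise, as the loss does implicitly
    return v @ A @ v             # Rayleigh quotient -> dominant eigenvalue

def inverse_power_method(A, shift=0.0, iters=200):
    """Inverse iteration targets the eigenvalue nearest `shift`
    (interior eigenvalues given prior knowledge, as in the abstract)."""
    n = A.shape[0]
    v = np.random.default_rng(1).normal(size=n)
    B = A - shift * np.eye(n)
    for _ in range(iters):
        v = np.linalg.solve(B, v)
        v /= np.linalg.norm(v)
    return v @ A @ v

A = np.diag([3.0, 1.0, 0.5])
print(power_method(A))                    # ~3.0, the largest eigenvalue
print(inverse_power_method(A, shift=0.9)) # ~1.0, the eigenvalue nearest 0.9
```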

In the kernelized bandit problem, a learner aims to sequentially compute the optimum of a function lying in a reproducing kernel Hilbert space, given only noisy evaluations at sequentially chosen points. In particular, the learner aims to minimize regret, a measure of the suboptimality of the choices made. Arguably the most popular algorithm is Gaussian Process Upper Confidence Bound (GP-UCB), which acts based on a simple linear estimator of the unknown function. Despite its popularity, existing analyses of GP-UCB give a suboptimal regret rate that fails to be sublinear for many commonly used kernels, such as the Mat\'ern kernel. This has led to a longstanding open question: are existing regret analyses for GP-UCB tight, or can bounds be improved using more sophisticated analytical techniques? In this work, we resolve this open question and show that GP-UCB enjoys nearly optimal regret. In particular, our results directly imply sublinear regret rates for the Mat\'ern kernel, improving over the state-of-the-art analyses and partially resolving a COLT open problem posed by Vakili et al. Our improvements rely on two key technical results. First, we use modern supermartingale techniques to construct a novel, self-normalized concentration inequality that greatly simplifies existing approaches. Second, we address the importance of regularizing in proportion to the smoothness of the underlying kernel $k$. Together, these new technical tools enable a simplified, tighter analysis of the GP-UCB algorithm.
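A generic sketch of the GP-UCB rule, acting at the maximiser of posterior mean plus scaled standard deviation; the confidence width schedule and the smoothness-dependent regularisation from the analysis above are omitted, and `rbf` is just an example kernel:

```python
import numpy as np

def rbf(A, B, ell=0.5):
    """Example RBF kernel on 1-D inputs."""
    return np.exp(-((A[:, None] - B[None, :]) ** 2) / (2 * ell ** 2))

def gp_posterior(X, y, Xq, kernel, reg=1.0):
    """GP posterior mean/std at query points Xq from noisy evaluations (X, y).
    `reg` is the regulariser; the analysis ties its size to the kernel's smoothness."""
    K = kernel(X, X) + reg * np.eye(len(X))
    Kq = kernel(Xq, X)
    mean = Kq @ np.linalg.solve(K, y)
    var = kernel(Xq, Xq).diagonal() - np.einsum("ij,ji->i", Kq, np.linalg.solve(K, Kq.T))
    return mean, np.sqrt(np.maximum(var, 0.0))

def ucb_choice(X, y, Xq, kernel, beta=2.0):
    """GP-UCB rule: act at the point maximising mean + sqrt(beta) * std."""
    mean, std = gp_posterior(X, y, Xq, kernel)
    return Xq[np.argmax(mean + np.sqrt(beta) * std)]

X = np.array([0.1, 0.5, 0.9])
y = np.sin(2 * np.pi * X)
print(ucb_choice(X, y, np.linspace(0, 1, 101), rbf))  # next evaluation point
```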

When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.
