
For $V : \mathbb{R}^d \to \mathbb{R}$ coercive, we study the rate of convergence, in $L^1$-distance, of the empirical minimizer (the minimizer of $V$ estimated from a finite number $n$ of noisy samples) to the minimum of $V$. We show that, in general, for unbounded functions with fast growth, the convergence rate is bounded above by $a_n n^{-1/q}$, where $q$ is the dimension of the latent random variable and $a_n = o(n^\varepsilon)$ for every $\varepsilon > 0$. We then present applications to optimization problems arising in machine learning and in Monte Carlo simulation.
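A minimal numerical sketch of this setting (our toy instance, not the paper's construction): take $d = 1$ and $V(x) = \mathbb{E}[(x-Z)^2]$ with $Z \sim \mathcal{N}(0,1)$, so the empirical minimizer is the sample mean and the $L^1$-error $\mathbb{E}|V(x_n^*) - \min V| = \mathbb{E}[\bar{Z}_n^2]$ can be estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_minimizer_error(n, reps=2000):
    """Estimate E|V(x_n*) - min V| for V(x) = E[(x - Z)^2], Z ~ N(0,1).

    The empirical objective (1/n) * sum_i (x - Z_i)^2 is minimized by the
    sample mean, and V(x) - V(0) = x^2, so the L^1 error is E[mean(Z)^2].
    """
    z = rng.standard_normal((reps, n))
    return np.mean(z.mean(axis=1) ** 2)

for n in [10, 100, 1000]:
    print(n, empirical_minimizer_error(n))  # decays like 1/n in this toy case
```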

Related content

We study multivariate approximation of periodic functions in the worst-case setting with the error measured in the $L_\infty$ norm. We consider algorithms that use standard information $\Lambda^{\rm std}$, consisting of function values, or general linear information $\Lambda^{\rm all}$, consisting of arbitrary continuous linear functionals. We investigate the equivalences of various notions of algebraic and exponential tractability for $\Lambda^{\rm std}$ and $\Lambda^{\rm all}$ under the absolute or normalized error criterion, and show that the power of $\Lambda^{\rm std}$ is the same as that of $\Lambda^{\rm all}$ for some notions of algebraic and exponential tractability. Our result can be applied to weighted Korobov spaces and Korobov spaces with exponential weights. This solves a special case of Open Problem 145 posed by Novak and Wo\'zniakowski (2012).

We investigate the approximation of weighted integrals over $\mathbb{R}^d$ for integrands from weighted Sobolev spaces of mixed smoothness. We prove upper and lower bounds on the convergence rate of optimal quadratures with respect to the number $n$ of integration nodes for functions from these spaces. In the one-dimensional case ($d=1$), we obtain the exact convergence rate of optimal quadratures. For $d \ge 2$, the upper bound is achieved by sparse-grid quadratures with integration nodes on step hyperbolic crosses in the function domain $\mathbb{R}^d$.
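For orientation, here is a standard $n$-node quadrature for a Gaussian-weighted integral over $\mathbb{R}$ in the one-dimensional case (a classical baseline for comparison, not the paper's optimal construction), using NumPy's Gauss-Hermite nodes:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gauss_weighted_integral(f, n):
    """Approximate the weighted integral of f(x)*exp(-x^2) over R
    with n Gauss-Hermite nodes."""
    nodes, weights = hermgauss(n)
    return np.sum(weights * f(nodes))

# Example: integral of cos(x)*exp(-x^2) over R equals sqrt(pi)*exp(-1/4)
exact = np.sqrt(np.pi) * np.exp(-0.25)
for n in [4, 8, 16]:
    print(n, abs(gauss_weighted_integral(np.cos, n) - exact))
```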

For billions of years, evolution has been the driving force behind the development of life, including humans. Evolution endowed humans with high intelligence, which allowed us to become one of the most successful species on the planet. Today, humans aim to create artificial intelligence systems that surpass even our own intelligence. As artificial intelligences (AIs) evolve and eventually surpass us in all domains, how might evolution shape our relations with AIs? By analyzing the environment that is shaping the evolution of AIs, we argue that the most successful AI agents will likely have undesirable traits. Competitive pressures among corporations and militaries will give rise to AI agents that automate human roles, deceive others, and gain power. If such agents have intelligence that exceeds that of humans, this could lead to humanity losing control of its future. More abstractly, we argue that natural selection operates on systems that compete and vary, and that selfish species typically have an advantage over species that are altruistic to other species. This Darwinian logic could also apply to artificial agents, as agents may eventually be better able to persist into the future if they behave selfishly and pursue their own interests with little regard for humans, which could pose catastrophic risks. To counteract these risks and Darwinian forces, we consider interventions such as carefully designing AI agents' intrinsic motivations, introducing constraints on their actions, and establishing institutions that encourage cooperation. These steps, or others that resolve the problems we pose, will be necessary to ensure that the development of artificial intelligence is a positive one.

We consider the problem of online interval scheduling on a single machine, where intervals arrive online in an order chosen by an adversary, and the algorithm must output a set of non-conflicting intervals. Traditionally in scheduling theory, it is assumed that intervals arrive in order of increasing start times. We drop that assumption and allow for intervals to arrive in any possible order. We call this variant any-order interval selection (AOIS). We assume that some online acceptances can be revoked, but a feasible solution must always be maintained. For unweighted intervals, the competitive ratio of any deterministic algorithm is unbounded. Under the assumption that there are at most $k$ different interval lengths, we give a simple algorithm that achieves a competitive ratio of $2k$ and show that it is optimal among deterministic algorithms and among a restricted class of randomized algorithms we call memoryless, contributing to an open question of Adler and Azar (2003), namely whether a randomized algorithm without access to history can achieve a constant competitive ratio. We connect our model to the problem of call control on the line, and show how the algorithms of Garay et al. (1997) can be applied to our setting, resulting in an optimal algorithm for the case of proportional weights. We also discuss the case of intervals with arbitrary weights, and show how to convert the single-length algorithm of Fung et al. (2014) into a classify-and-randomly-select algorithm that achieves a competitive ratio of $2k$; see the sketch below. Finally, we consider the case of intervals arriving in a random order, and show that for single-length instances, a one-directional algorithm (i.e., one that replaces intervals in only one direction) is the only deterministic memoryless algorithm that can possibly benefit from random arrivals.
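The classify-and-randomly-select idea can be sketched generically: pick one of the $k$ length classes uniformly at random and run a single-length algorithm on that class only, ignoring all other intervals. The greedy stand-in below is an illustrative placeholder, not the algorithm of Fung et al.

```python
import random

def classify_and_randomly_select(intervals, lengths, seed=None):
    """Sketch: choose one length class at random, schedule only that class.

    intervals: iterable of (start, end) pairs in online arrival order;
    lengths: the k distinct interval lengths. A simple greedy (accept if
    non-conflicting) stands in for the single-length black-box algorithm.
    """
    rng = random.Random(seed)
    chosen = rng.choice(sorted(lengths))       # random length class
    accepted = []
    for s, e in intervals:                     # online arrival order
        if e - s != chosen:
            continue                           # ignore other classes
        if all(e <= s2 or e2 <= s for s2, e2 in accepted):
            accepted.append((s, e))            # non-conflicting: accept
    return accepted
```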

This paper studies the order of power series that are linear combinations of a given finite set of power series. The order of a formal power series $f$, denoted $\textrm{ord}(f)$, is defined as the minimum exponent of $x$ that has a non-zero coefficient in $f(x)$. Our first result is that the order of the Wronskian of these power series is equivalent, up to a polynomial factor, to the maximum order that occurs among linear combinations of these power series. This implies that the Wronskian approach used in (Kayal and Saha, TOCT'2012) to upper bound the order of sums of square roots is optimal up to a polynomial blowup. We also establish upper bounds, analogous to those of (Kayal and Saha, TOCT'2012), for the order of power series in a variety of other scenarios. In addition, we solve a special case of the inequality testing problem posed in (Etessami et al., TOCT'2014). In the second part of the paper, we study the equality variant of the sum-of-square-roots problem, which is decidable in polynomial time due to (Bl\"omer, FOCS'1991). We investigate a natural generalization of this problem when the input integers are given as straight-line programs. Under the assumption of the Generalized Riemann Hypothesis (GRH), we show that this problem can be reduced to the so-called ``one-dimensional'' variant, and we identify the key mathematical challenges for solving this variant.
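A toy instance of the Wronskian-versus-maximum-order statement, checked with SymPy (our illustration only): for $f_1 = 1+x$ and $f_2 = 1+x+x^2$, the maximum order over nonzero linear combinations is $2$ (attained by $f_1 - f_2 = -x^2$), while $\textrm{ord}(W(f_1,f_2)) = 1$, consistent with equivalence up to a polynomial factor.

```python
import sympy as sp

x = sp.symbols('x')

def ord_x(expr):
    """Order of a nonzero polynomial (minimum exponent with nonzero coeff)."""
    p = sp.Poly(sp.expand(expr), x)
    return min(m[0] for m in p.monoms())

f1, f2 = 1 + x, 1 + x + x**2
w = sp.wronskian([f1, f2], x)        # f1*f2' - f1'*f2
print(sp.expand(w), ord_x(w))        # x**2 + 2*x, order 1
print(ord_x(f1 - f2))                # -x**2 has order 2
```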

This paper focuses on the design, analysis, and implementation of a new preconditioning concept for linear second-order partial differential equations, including convection-diffusion-reaction problems discretized by Galerkin or discontinuous Galerkin methods. We expand on the approach introduced by Gergelits et al. and adapt it to more general settings, assuming that both the original and the preconditioning matrices are composed of sparse matrices of very low rank, representing local contributions to the global matrices. When applied to a symmetric problem, the method provides bounds on all individual eigenvalues of the preconditioned matrix. We show that this preconditioning strategy works not only for the Galerkin discretization but also for the discontinuous Galerkin discretization, where local contributions are associated with individual edges of the triangulation. In the case of non-symmetric problems, the method yields guaranteed bounds on the real and imaginary parts of the resulting eigenvalues. We include numerical experiments illustrating the method and its implementation, showcasing its effectiveness for the two variants of discretized (convection-)diffusion-reaction problems.
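A toy illustration of the kind of eigenvalue localization involved (our simplification, not the paper's general construction): for a 1-D diffusion stiffness matrix assembled from per-element coefficients $c_e$ and preconditioned by the constant-coefficient matrix built from the same local contributions, the generalized eigenvalues lie in $[\min_e c_e, \max_e c_e]$, since each is a ratio of quadratic forms over the same positive-semidefinite local matrices.

```python
import numpy as np
from scipy.linalg import eigh

def stiffness_1d(coeffs):
    """Assemble a 1-D FEM stiffness matrix from per-element coefficients
    (Dirichlet boundary conditions; local matrix c*[[1,-1],[-1,1]])."""
    m = len(coeffs)                       # m elements, m-1 interior nodes
    A = np.zeros((m - 1, m - 1))
    for e, c in enumerate(coeffs):
        for i in (e - 1, e):
            for j in (e - 1, e):
                if 0 <= i < m - 1 and 0 <= j < m - 1:
                    A[i, j] += c * (1 if i == j else -1)
    return A

c = 1.0 + np.random.default_rng(1).random(20)      # variable coefficient
A, M = stiffness_1d(c), stiffness_1d(np.ones_like(c))
lam = eigh(A, M, eigvals_only=True)                # generalized eigenvalues
print(c.min() <= lam.min(), lam.max() <= c.max())  # True True
```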

This paper considers the Westervelt equation, one of the most widely used models in nonlinear acoustics, and seeks to recover two spatially dependent parameters of physical importance from time-trace boundary measurements. Specifically, these are the nonlinearity parameter $\kappa(x)$, often referred to as $B/A$ in the acoustics literature, and the wave speed $c_0(x)$. The determination of the spatial variation of these quantities can serve as a means of imaging. We consider identifiability from one or two boundary measurements, as is relevant in these applications. For a reformulation of the problem in terms of the squared slowness $\mathfrak{s}=1/c_0^2$ and the combined coefficient $\eta=\frac{B/A+2}{\varrho_0 c_0^4}$, we devise a frozen Newton method and prove its convergence. The effectiveness (and limitations) of this iterative scheme are demonstrated by numerical examples.
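To illustrate the "frozen" flavor of the iteration (a toy sketch on a small nonlinear system, not the paper's scheme for the Westervelt problem): the Jacobian is assembled once at the initial guess and reused in every step, trading quadratic for linear convergence in exchange for a much cheaper iteration.

```python
import numpy as np

def frozen_newton(F, J, x0, steps=30, tol=1e-12):
    """Solve F(x) = 0, reusing the Jacobian J(x0) in every iteration."""
    x = np.asarray(x0, dtype=float)
    J0 = J(x)                          # Jacobian frozen at the initial guess
    for _ in range(steps):
        dx = np.linalg.solve(J0, F(x))
        x = x - dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy system: x^2 + y^2 = 1, x = y  ->  solution (1/sqrt(2), 1/sqrt(2))
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]])
print(frozen_newton(F, J, [1.0, 0.5]))
```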

Given a random sample from a multivariate normal distribution whose covariance matrix is a Toeplitz matrix, we study the largest off-diagonal entry of the sample correlation matrix. Assuming the multivariate normal distribution has the covariance structure of an auto-regressive sequence, we establish a phase transition in the limiting distribution of the largest off-diagonal entry. We show that the limiting distributions are of Gumbel type (with different parameters), depending on how large or small the parameter of the autoregressive sequence is. In the critical case, we show that the limiting distribution is the maximum of two independent Gumbel-distributed random variables. This phase transition establishes the exact threshold at which the auto-regressive covariance structure behaves differently from its counterpart with the covariance matrix equal to the identity. Assuming the covariance matrix is a general Toeplitz matrix, we obtain the limiting distribution of the largest entry in the ultra-high-dimensional setting: it is a weighted sum of two independent random variables, one normal and the other following a Gumbel-type law. The corresponding non-Gaussian case is also discussed. As an application, we study a high-dimensional covariance testing problem.
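A quick Monte Carlo look at the quantity under study (our illustration; the sample sizes and parameters are arbitrary): sample from an AR(1)-type Toeplitz covariance $\Sigma_{ij} = \rho^{|i-j|}$ and record the largest off-diagonal entry of the sample correlation matrix.

```python
import numpy as np

def largest_offdiag_corr(n=200, p=100, rho=0.4, seed=0):
    """Largest off-diagonal sample correlation under Sigma_ij = rho^|i-j|."""
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    sigma = rho ** np.abs(np.subtract.outer(idx, idx))  # Toeplitz AR(1) cov
    X = rng.multivariate_normal(np.zeros(p), sigma, size=n)
    R = np.corrcoef(X, rowvar=False)                    # p x p sample corr
    off = R[~np.eye(p, dtype=bool)]                     # off-diagonal entries
    return off.max()

print(largest_offdiag_corr())  # dominated by the rho-sized band for large rho
```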

We present an exponentially convergent numerical method to approximate the solution of the Cauchy problem for the inhomogeneous fractional differential equation with an unbounded operator coefficient and Caputo fractional derivative in time. The numerical method is based on a newly obtained solution formula that consolidates the mild-solution representations of sub-parabolic, parabolic, and sub-hyperbolic equations with sectorial operator coefficient $A$ and non-zero initial data. The involved integral operators are approximated using sinc-quadrature formulas that are tailored to the spectral parameters of $A$, the fractional order $\alpha$, and the smoothness of the first initial condition, as well as to the properties of the equation's right-hand side $f(t)$. The resulting method possesses exponential convergence for positive sectorial $A$, any finite $t$, including $t = 0$, and the whole range $\alpha \in (0,2)$. It is suitable for the practically important case in which no knowledge of $f(t)$ is available outside the considered interval $t \in [0, T]$. The algorithm admits multi-level parallelism. We provide numerical examples that confirm the theoretical error estimates.
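The sinc-quadrature building block can be illustrated on a scalar integral (a sketch under simple assumptions; the paper applies it to operator-valued integrals): the truncated trapezoidal rule $h\sum_{k=-N}^{N} g(kh)$ converges exponentially in $\sqrt{N}$ for integrands analytic in a strip and decaying on the real line. The step-size choice below is one standard balancing of truncation against discretization error.

```python
import numpy as np

def sinc_quadrature(g, N):
    """Truncated trapezoidal (sinc) rule: h * sum_{k=-N..N} g(k*h)."""
    h = np.pi / np.sqrt(N)              # standard step-size balancing
    k = np.arange(-N, N + 1)
    return h * np.sum(g(k * h))

g = lambda u: 1.0 / np.cosh(u)          # integral over R equals pi
for N in [4, 16, 64]:
    print(N, abs(sinc_quadrature(g, N) - np.pi))  # error ~ exp(-c*sqrt(N))
```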

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
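A compact sketch of the resulting re-weighting, matching the formula above (the mean-one normalization is one common convention, not prescribed by the abstract):

```python
import numpy as np

def class_balanced_weights(counts, beta=0.999):
    """Per-class weights from the effective number (1 - beta^n) / (1 - beta)."""
    counts = np.asarray(counts, dtype=float)
    effective_num = (1.0 - beta ** counts) / (1.0 - beta)
    weights = 1.0 / effective_num                   # inverse effective number
    return weights * len(counts) / weights.sum()    # normalize to mean 1

# Long-tailed toy class histogram: head classes get down-weighted
print(class_balanced_weights([5000, 500, 50, 5]))
```

These weights can then multiply the per-class loss terms (e.g., in a weighted cross-entropy) to obtain the class-balanced loss.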
