
In $d$ dimensions, approximating an arbitrary function oscillating with frequency $\lesssim k$ requires $\sim k^d$ degrees of freedom. A numerical method for solving the Helmholtz equation (with wavenumber $k$) suffers from the pollution effect if, as $k\to \infty$, the total number of degrees of freedom needed to maintain accuracy grows faster than this natural threshold. While the $h$-version of the finite element method (FEM) (where accuracy is increased by decreasing the meshwidth $h$ and keeping the polynomial degree $p$ fixed) suffers from the pollution effect, the celebrated papers [Melenk, Sauter 2010], [Melenk, Sauter 2011], [Esterhazy, Melenk 2012], and [Melenk, Parsania, Sauter 2013] showed that the $hp$-FEM (where accuracy is increased by decreasing the meshwidth $h$ and increasing the polynomial degree $p$) applied to a variety of constant-coefficient Helmholtz problems does not suffer from the pollution effect. The heart of the proofs of these results is a PDE result splitting the solution of the Helmholtz equation into "high" and "low" frequency components. In this expository paper we prove this splitting for the constant-coefficient Helmholtz equation in full space (i.e., in $\mathbb{R}^d$) using only integration by parts and elementary properties of the Fourier transform; this is in contrast to the proof for this set-up in [Melenk, Sauter 2010] which uses somewhat-involved bounds on Bessel and Hankel functions. The proof in this paper is motivated by the recent proof in [Lafontaine, Spence, Wunsch 2022] of this splitting for the variable-coefficient Helmholtz equation in full space; indeed, the proof in [Lafontaine, Spence, Wunsch 2022] uses more-sophisticated tools that reduce to the elementary ones above for constant coefficients.
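To fix ideas, the splitting in question can be written schematically via a Fourier cutoff (the sharp indicator and the cutoff parameter $\mu$ below are illustrative choices, not necessarily the precise ones used in the papers above):
$$
u = u_{\mathrm{low}} + u_{\mathrm{high}}, \qquad u_{\mathrm{low}} := \mathcal{F}^{-1}\big(\mathbf{1}_{\{|\xi|\le \mu k\}}\,\widehat{u}\,\big), \qquad u_{\mathrm{high}} := \mathcal{F}^{-1}\big(\mathbf{1}_{\{|\xi|> \mu k\}}\,\widehat{u}\,\big),
$$
with $\mu>1$ fixed. Roughly speaking, the low-frequency component is smooth with $k$-explicit bounds on all of its derivatives, while the high-frequency component satisfies elliptic-type bounds with no loss in $k$; this is the structure the $hp$-FEM analysis exploits.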

Related content

We provide rigorous theoretical bounds for Anderson acceleration (AA) that allow for efficient approximate residual calculations in AA, which in turn reduce computational time and memory storage while maintaining convergence. Specifically, we propose a reduced variant of AA, which consists in projecting the least-squares problem used to compute the Anderson mixing onto a subspace of reduced dimension. The dimensionality of this subspace adapts dynamically at each iteration, as prescribed by computable heuristic quantities guided by rigorous theoretical error bounds. The use of heuristics to monitor the error introduced by approximate calculations, combined with a check on the monotonicity of the convergence, ensures the convergence of the numerical scheme within a prescribed tolerance threshold on the residual. We numerically assess the performance of AA with approximate calculations on: (i) linear deterministic fixed-point iterations arising from Richardson's scheme to solve linear systems with open-source benchmark matrices and various preconditioners, and (ii) non-linear deterministic fixed-point iterations arising from non-linear time-dependent Boltzmann equations.
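For orientation, the following is a minimal Python sketch of plain (unprojected) Anderson acceleration AA($m$) with mixing parameter $\beta=1$; it is not the reduced variant proposed above, whose point is precisely to project this least-squares problem onto an adaptively chosen low-dimensional subspace.

```python
import numpy as np

def anderson_acceleration(g, x0, m=5, max_iter=200, tol=1e-10):
    """Plain Anderson acceleration AA(m) for the fixed-point problem x = g(x).

    Hedged sketch for orientation: the full (unprojected) least-squares variant
    with mixing parameter beta = 1. The paper's reduced variant replaces this
    least-squares problem by a projection onto an adaptively sized subspace;
    none of that machinery is reproduced here.
    """
    x = np.asarray(x0, dtype=float)
    f = g(x) - x                                   # fixed-point residual
    xs, fs = [x.copy()], [f.copy()]
    for _ in range(max_iter):
        if len(xs) == 1:
            x = x + f                              # first step: plain Picard update
        else:
            # Differences of iterates/residuals over the memory window
            dX = np.column_stack([xs[j + 1] - xs[j] for j in range(len(xs) - 1)])
            dF = np.column_stack([fs[j + 1] - fs[j] for j in range(len(fs) - 1)])
            # Least-squares problem defining the Anderson mixing coefficients
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + f - (dX + dF) @ gamma
        f = g(x) - x
        xs.append(x.copy()); fs.append(f.copy())
        if len(xs) > m + 1:                        # keep at most m difference columns
            xs.pop(0); fs.pop(0)
        if np.linalg.norm(f) < tol:
            break
    return x

# Example: a Richardson-type fixed point g(x) = x + omega * (b - A @ x) for a
# symmetric positive definite A (the names A, b, omega are illustrative).
```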

We describe a `discretize-then-relax' strategy to globally minimize integral functionals over functions $u$ in a Sobolev space satisfying prescribed Dirichlet boundary conditions. The strategy applies whenever the integral functional depends polynomially on $u$ and its derivatives, even if it is nonconvex. The `discretize' step uses a bounded finite-element scheme to approximate the integral minimization problem with a convergent hierarchy of polynomial optimization problems over a compact feasible set, indexed by the decreasing size $h$ of the finite-element mesh. The `relax' step employs sparse moment-SOS relaxations to approximate each polynomial optimization problem with a hierarchy of convex semidefinite programs, indexed by an increasing relaxation order $\omega$. We prove that, as $\omega\to\infty$ and $h\to 0$, solutions of such semidefinite programs provide approximate minimizers that converge in $L^p$ to the global minimizer of the original integral functional if this is unique. We also report computational experiments that show our numerical strategy works well even when technical conditions required by our theoretical analysis are not satisfied.
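For orientation, the class of problems targeted can be written (with notation that is illustrative rather than the paper's) as
$$
\min_{u \in W^{1,p}(\Omega)} \int_{\Omega} f\big(x,\, u(x),\, \nabla u(x)\big)\,\mathrm{d}x \quad \text{subject to} \quad u = g \ \text{on}\ \partial\Omega,
$$
where the integrand $f$ is polynomial (possibly nonconvex) in $(u,\nabla u)$. The `discretize' step replaces $u$ by a finite-element function on a mesh of size $h$, so that the integral becomes a polynomial in the nodal values, and the `relax' step approximates the resulting polynomial optimization problem with a sparse moment-SOS semidefinite program of order $\omega$.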

Over the last decade, approximating functions in infinite dimensions from samples has gained increasing attention in computational science and engineering, especially in computational uncertainty quantification. This is primarily due to the relevance of functions that are solutions to parametric differential equations in various fields, e.g., chemistry, economics, engineering, and physics. While acquiring accurate and reliable approximations of such functions is inherently difficult, current benchmark methods exploit the fact that such functions often belong to certain classes of holomorphic functions to obtain algebraic convergence rates in infinite dimensions with respect to the number of (potentially adaptive) samples $m$. Our work focuses on providing theoretical approximation guarantees for the class of $(\boldsymbol{b},\varepsilon)$-holomorphic functions, demonstrating that these algebraic rates are the best possible for Banach-valued functions in infinite dimensions. We establish lower bounds using a reduction to a discrete problem in combination with the theory of $m$-widths, Gelfand widths, and Kolmogorov widths. We study two cases, known and unknown anisotropy, in which the relative importance of the variables is known and unknown, respectively. A key conclusion of our paper is that in the latter setting, approximation from finite samples is impossible without some inherent ordering of the variables, even if the samples are chosen adaptively. Finally, in both cases we demonstrate near-optimal, non-adaptive (random) sampling and recovery strategies that achieve rates close to those of the lower bounds.
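For reference (conventions vary slightly between authors), the Kolmogorov and Gelfand $m$-widths of a set $K$ in a normed space $X$ are
$$
d_m(K)_X = \inf_{\substack{V \subset X \\ \dim V \le m}} \sup_{f \in K} \inf_{v \in V} \|f - v\|_X, \qquad d^m(K)_X = \inf_{\substack{L \subset X \\ \operatorname{codim} L \le m}} \sup_{f \in K \cap L} \|f\|_X.
$$
Lower bounds on such widths for the class of $(\boldsymbol{b},\varepsilon)$-holomorphic functions translate into lower bounds on what any sampling-and-recovery scheme using $m$ samples, adaptive or not, can achieve.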

In this paper, we present a numerical approach to solving the McKean-Vlasov equations, which are distribution-dependent stochastic differential equations, under non-globally Lipschitz conditions on both the drift and diffusion coefficients. We establish a propagation of chaos result, based on which the McKean-Vlasov equation is approximated by an interacting particle system. A truncated Euler scheme is then proposed for the interacting particle system, allowing for a Khasminskii-type condition on the coefficients. To reduce the computational cost, the random batch approximation proposed in [Jin et al., J. Comput. Phys., 400(1), 2020] is extended to the interacting particle system, where the interaction may also appear in the diffusion term. Convergence of order almost one half is proved in the $L^p$ sense. Numerical tests are performed to verify the theoretical results.
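The following is a minimal Python sketch of the random batch idea for a one-dimensional interacting particle system; it uses the plain (untruncated) Euler-Maruyama scheme and keeps the interaction in the drift only, so it omits both the truncation and the diffusion-term interaction handled by the scheme above.

```python
import numpy as np

def rbm_euler_maruyama(x0, b, K, sigma, T, dt, batch_size=2, seed=None):
    """Random-batch Euler-Maruyama for a 1-D interacting particle system.

    Hedged sketch: the interaction kernel K enters the drift only, the diffusion
    coefficient sigma is constant, and the time stepping is the plain
    (untruncated) Euler-Maruyama scheme.

    Approximates  dX_i = [ b(X_i) + (1/(N-1)) * sum_{j != i} K(X_i - X_j) ] dt
                          + sigma dW_i
    by restricting, at every step, the interaction sum to a small random batch.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)                  # shape (N,): one scalar per particle
    n = x.shape[0]
    for _ in range(int(round(T / dt))):
        drift = b(x)
        perm = rng.permutation(n)                  # reshuffle particles into batches
        for start in range(0, n, batch_size):
            batch = perm[start:start + batch_size]
            if batch.size < 2:
                continue
            diffs = x[batch][:, None] - x[batch][None, :]   # pairwise X_i - X_j
            kvals = K(diffs)
            np.fill_diagonal(kvals, 0.0)           # drop the self-interaction term
            drift[batch] += kvals.sum(axis=1) / (batch.size - 1)
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return x
```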

Symbolic Regression (SR) algorithms attempt to learn analytic expressions which fit data accurately and in a highly interpretable manner. Conventional SR suffers from two fundamental issues which we address here. First, these methods search the space stochastically (typically using genetic programming) and hence do not necessarily find the best function. Second, the criteria used to select the equation optimally balancing accuracy with simplicity have been variable and subjective. To address these issues we introduce Exhaustive Symbolic Regression (ESR), which systematically and efficiently considers all possible equations -- made with a given basis set of operators and up to a specified maximum complexity -- and is therefore guaranteed to find the true optimum (if parameters are perfectly optimised) and a complete function ranking subject to these constraints. We implement the minimum description length principle as a rigorous method for combining these preferences into a single objective. To illustrate the power of ESR we apply it to a catalogue of cosmic chronometers and the Pantheon+ sample of supernovae to learn the Hubble rate as a function of redshift, finding $\sim$40 functions (out of 5.2 million trial functions) that fit the data more economically than the Friedmann equation. These low-redshift data therefore do not uniquely prefer the expansion history of the standard model of cosmology. We make our code and full equation sets publicly available.
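Schematically (the exact codelength used by ESR differs in its bookkeeping of operators and of parameter precision), the minimum-description-length objective trades off fit against complexity as
$$
L(D) \;=\; \underbrace{-\log \mathcal{L}(\hat{\boldsymbol{\theta}})}_{\text{residuals, given the model}} \;+\; \underbrace{L(\text{functional form})}_{\text{which operators, arranged how}} \;+\; \underbrace{L(\hat{\boldsymbol{\theta}})}_{\text{fitted constants, to finite precision}},
$$
and every generated function is ranked by this single number rather than by a subjective accuracy/simplicity trade-off.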

Since control of the Lipschitz constant has a great impact on the training stability, generalization, and robustness of neural networks, estimating this value is now a genuine scientific challenge. In this paper we introduce a precise, fast, and differentiable upper bound for the spectral norm of convolutional layers using circulant matrix theory and a new alternative to the power iteration, called the Gram iteration, which exhibits superlinear convergence. First, we show through a comprehensive set of experiments that our approach outperforms other state-of-the-art methods in terms of precision, computational cost, and scalability. Then, we show that it is highly effective for the Lipschitz regularization of convolutional neural networks, with results that are competitive with concurrent approaches. Code is available at //github.com/blaisedelattre/lip4conv.
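A minimal NumPy sketch of the Gram-iteration idea for a dense matrix is given below; the circulant/FFT treatment needed for convolutional layers, and the exact implementation in the repository above, are not reproduced here.

```python
import numpy as np

def gram_iteration_upper_bound(W, n_iter=6):
    """Upper bound on the spectral norm sigma_max(W) via repeated Gram products.

    Hedged sketch of the Gram-iteration idea for a plain dense matrix:
    the update G <- G^T G squares the singular values, so ||G_k||_F^(1 / 2^k)
    is a monotonically decreasing upper bound on sigma_max(W) that converges
    superlinearly. Rescaling by the Frobenius norm at every step avoids
    overflow; the accumulated log-scale is folded back in at the end.
    """
    G = np.asarray(W, dtype=np.float64)
    log_scale = 0.0
    for _ in range(n_iter):
        G = G.T @ G                              # singular values are squared
        fro = np.linalg.norm(G)                  # Frobenius norm >= largest singular value
        log_scale = 2.0 * log_scale + np.log(fro)
        G = G / fro                              # keep entries at unit Frobenius norm
    return float(np.exp(log_scale / 2.0 ** n_iter))

# Quick check against the exact value on a random matrix:
# W = np.random.default_rng(0).standard_normal((64, 32))
# print(gram_iteration_upper_bound(W), np.linalg.norm(W, 2))
```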

Given a closed simple polygon $P$, we say two points $p,q$ see each other if the segment $pq$ is fully contained in $P$. The art gallery problem seeks a minimum-size set $G\subset P$ of guards that sees $P$ completely. The only currently known correct algorithm to solve the art gallery problem exactly uses algebraic methods and is attributed to Sharir. As the art gallery problem is $\exists\mathbb{R}$-complete, it seems unlikely that algebraic methods can be avoided without additional assumptions. In this paper, we introduce the notion of vision stability. To describe vision stability, consider an enhanced guard that can see "around the corner" by an angle of $\delta$, or a diminished guard whose vision is "blocked" by an angle of $\delta$ by reflex vertices. A polygon $P$ has vision stability $\delta$ if the optimal number of enhanced guards needed to guard $P$ is the same as the optimal number of diminished guards. We argue that most relevant polygons are vision stable. We describe a one-shot algorithm that computes an optimal guard set for vision-stable polygons using polynomial time and the solution of one integer program; it is guaranteed to find the optimal solution for every vision-stable polygon. We also implemented an iterative vision-stable algorithm and show that its practical performance is slower than, but comparable with, other state-of-the-art algorithms. Our iterative algorithm is inspired by and closely follows the one-shot algorithm, but delays several steps and only computes them when deemed necessary. Given a chord $c$ of a polygon, we denote by $n(c)$ the number of vertices visible from $c$. The chord-width of a polygon is the maximum of $n(c)$ over all possible chords $c$. The set of vision-stable polygons admits an FPT algorithm when parameterized by the chord-width. Furthermore, the one-shot algorithm runs in FPT time when parameterized by the number of reflex vertices.

Elitism, which constructs the new population by preserving the best solutions from among the old population and the newly generated solutions, has been the default way to perform the population update since its introduction into multi-objective evolutionary algorithms (MOEAs) in the late 1990s. In this paper, we take the opposite perspective and conduct the population update in MOEAs by simply discarding elitism. That is, we treat the newly generated solutions directly as the new population (so that all selection pressure comes from mating selection). We propose a simple non-elitist MOEA (called NE-MOEA) that only uses Pareto dominance sorting to compare solutions, without involving any diversity-related selection criterion. Preliminary experimental results show that NE-MOEA can compete with well-known elitist MOEAs (NSGA-II, SMS-EMOA and NSGA-III) on several combinatorial problems. Lastly, we discuss limitations of the proposed non-elitist algorithm and suggest possible future research directions.
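A minimal Python sketch of one non-elitist generation in this spirit is given below; the variation operator and the binary rank-based tournament are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

def pareto_ranks(objs):
    """Non-dominated sorting ranks (rank 0 = first front); minimization assumed.
    objs: array of shape (n, n_obj)."""
    n = objs.shape[0]
    ranks = np.full(n, -1)
    remaining = np.arange(n)
    rank = 0
    while remaining.size > 0:
        front = []
        for i in remaining:
            dominated = any(
                np.all(objs[j] <= objs[i]) and np.any(objs[j] < objs[i])
                for j in remaining if j != i
            )
            if not dominated:
                front.append(i)
        ranks[front] = rank
        remaining = np.setdiff1d(remaining, front)
        rank += 1
    return ranks

def non_elitist_generation(pop, objs, mutate, evaluate, seed=None):
    """One generation of a non-elitist MOEA: mating selection is a binary
    tournament on Pareto rank only (no diversity criterion), and the offspring
    replace the population wholesale, i.e. elitism is discarded."""
    rng = np.random.default_rng(seed)
    n = len(pop)
    ranks = pareto_ranks(objs)
    offspring = []
    for _ in range(n):
        a, b = rng.integers(0, n, size=2)
        parent = pop[a] if ranks[a] <= ranks[b] else pop[b]  # rank-based tournament
        offspring.append(mutate(parent, rng))
    new_objs = np.array([evaluate(child) for child in offspring])
    return offspring, new_objs        # the offspring ARE the new population
```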

In this paper, a computational method is developed to find an approximate solution of the stochastic Volterra-Fredholm integral equation using the Walsh function approximation and its operational matrix. Moreover, the convergence and error analysis of the method are carried out to strengthen its validity. Furthermore, the method is numerically compared to the block pulse function method and the Haar wavelet method on some non-trivial examples.
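As background, the following Python sketch generates Walsh functions (in Hadamard ordering, via the Sylvester construction) and computes a truncated Walsh series; the operational matrix and the treatment of the stochastic integral terms used by the method itself are not reproduced here.

```python
import numpy as np

def hadamard_sylvester(n):
    """Sylvester-type Hadamard matrix of size n (a power of 2); its rows are the
    Walsh functions in natural (Hadamard) ordering sampled on n subintervals."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def walsh_series_approx(f, n=16):
    """Approximate f on [0, 1) by a truncated Walsh series with n terms.

    Hedged sketch: textbook Walsh expansion from midpoint samples, in Hadamard
    ordering (sequency ordering would need an extra row permutation)."""
    H = hadamard_sylvester(n)
    t_mid = (np.arange(n) + 0.5) / n      # midpoints of the n subintervals
    samples = f(t_mid)                    # f is assumed vectorized
    coeffs = H @ samples / n              # H is orthogonal: H H^T = n I
    reconstruction = H.T @ coeffs         # piecewise-constant approximation of f
    return coeffs, reconstruction
```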

We present efficient algorithms for Quantile Join Queries, abbreviated as %JQ. A %JQ asks for the answer at a specified relative position (e.g., 50% for the median) under some ordering over the answers to a Join Query (JQ). Our goal is to avoid materializing the set of all join answers and to achieve quasilinear time in the size of the database, regardless of the total number of answers. A recent dichotomy result rules out the existence of such an algorithm for a general family of queries and orders. Specifically, for acyclic JQs without self-joins, the problem becomes intractable for ordering by sum whenever we join more than two relations (and these joins are not trivial intersections). Moreover, even for basic ranking functions beyond sum, such as min or max over different attributes, it has so far not been known whether there is any nontrivial tractable %JQ. In this work, we develop a new approach to solving %JQ. Our solution uses two subroutines: the first selects what we call a "pivot answer"; the second partitions the space of query answers according to this pivot and continues searching in one partition, which is represented as a new %JQ over a new database. For pivot selection, we develop an algorithm that works for a large class of ranking functions that are appropriately monotone. The second subroutine requires a customized construction for the specific ranking function at hand. We show the benefit and generality of our approach by using it to establish several new complexity results. First, we prove the tractability of min and max for all acyclic JQs, thereby resolving the above question. Second, we extend the previous %JQ dichotomy for sum to all partial sums. Third, we handle the intractable cases of sum by devising a deterministic approximation scheme that applies to every acyclic JQ.
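The pivot-and-partition recursion can be illustrated as follows over a materialized answer list (purely for readability; the algorithms above run the same recursion on the query and the database without ever materializing the answers):

```python
def quantile_by_pivot(answers, rank_fn, phi):
    """Answer at relative position phi (e.g. 0.5 for the median) under rank_fn.

    Hedged illustration of the two-subroutine recursion from the abstract,
    written over a materialized list; the paper's machinery for doing this
    symbolically over a join is not reproduced here.
    """
    target = int(phi * (len(answers) - 1))         # 0-based rank we are after
    pool = list(answers)
    while True:
        pivot = pool[len(pool) // 2]               # subroutine 1: pick a pivot answer
        key = rank_fn(pivot)
        smaller = [a for a in pool if rank_fn(a) < key]
        equal = [a for a in pool if rank_fn(a) == key]
        larger = [a for a in pool if rank_fn(a) > key]
        if target < len(smaller):                  # subroutine 2: keep exactly one part
            pool = smaller
        elif target < len(smaller) + len(equal):
            return pivot
        else:
            target -= len(smaller) + len(equal)
            pool = larger

# Example: median answer under ordering by sum of attributes.
# quantile_by_pivot([(1, 2), (3, 1), (2, 2)], rank_fn=lambda t: sum(t), phi=0.5)
```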
