
We analyse parallel overlapping Schwarz domain decomposition methods for the Helmholtz equation, where the subdomain problems satisfy first-order absorbing (impedance) transmission conditions, and exchange of information between subdomains is achieved using a partition of unity. We provide a novel analysis of this method at the PDE level (without discretization). First, we formulate the method as a fixed point iteration, and show (in dimensions 1,2,3) that it is well-defined in a tensor product of appropriate local function spaces, each with $L^2$ impedance boundary data. Given this, we then obtain a bound on the norm of the fixed point operator in terms of the local norms of certain impedance-to-impedance maps arising from local interactions between subdomains. These bounds provide conditions under which (some power of) the fixed point operator is a contraction. In 2-d, for rectangular domains and strip-wise domain decompositions (with each subdomain only overlapping its immediate neighbours), we present two techniques for verifying the assumptions on the impedance-to-impedance maps which ensure power contractivity of the fixed point operator. The first is through semiclassical analysis, which gives rigorous estimates valid as the frequency tends to infinity. These results verify the required assumptions for sufficiently large overlap. For more realistic domain decompositions, we directly compute the norms of the impedance-to-impedance maps by solving certain canonical (local) eigenvalue problems. We give numerical experiments that illustrate the theory. These also show that the iterative method remains convergent and/or provides a good preconditioner in cases not covered by the theory, including for general domain decompositions, such as those obtained via automatic graph-partitioning software.
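The ingredients of the method (local impedance solves, impedance transmission data, and a partition of unity to glue the local solutions) can be sketched in one dimension, where the fixed point iteration converges rapidly. The following is a minimal, hypothetical finite-difference illustration, not taken from the paper (which treats the 2-d case); the wavenumber, grid, subdomains, and partition of unity are all illustrative choices:

```python
import numpy as np

k = 10.0                    # wavenumber (illustrative)
N = 400                     # grid intervals on (0, 1)
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
f = np.exp(-200.0 * (x - 0.5) ** 2)     # smooth source

def assemble(n0, n1):
    """FD matrix for u'' + k^2 u on nodes n0..n1; both boundary rows impose
    the impedance condition du/dn - i*k*u = g by one-sided differences."""
    m = n1 - n0 + 1
    A = np.zeros((m, m), dtype=complex)
    j = np.arange(1, m - 1)
    A[j, j - 1] = 1.0 / h**2
    A[j, j] = -2.0 / h**2 + k**2
    A[j, j + 1] = 1.0 / h**2
    A[0, 0], A[0, 1] = 1.0 / h - 1j * k, -1.0 / h        # outward normal -x
    A[-1, -1], A[-1, -2] = 1.0 / h - 1j * k, -1.0 / h    # outward normal +x
    return A

# reference: one global solve with homogeneous impedance data
rhs = f.astype(complex); rhs[0] = rhs[-1] = 0.0
u_ref = np.linalg.solve(assemble(0, N), rhs)

# two overlapping subdomains [0, 0.6], [0.4, 1] and a partition of unity
na, nb = int(0.4 * N), int(0.6 * N)
A1, A2 = assemble(0, nb), assemble(na, N)
chi1 = np.clip((x[nb] - x) / (x[nb] - x[na]), 0.0, 1.0)  # 1 left of 0.4, 0 at 0.6
chi2 = 1.0 - chi1

u = np.zeros(N + 1, dtype=complex)
for _ in range(30):
    # impedance traces of the current iterate (same stencil as the BC rows)
    g1 = (u[nb] - u[nb - 1]) / h - 1j * k * u[nb]        # data for subdomain 1
    g2 = -(u[na + 1] - u[na]) / h - 1j * k * u[na]       # data for subdomain 2
    r1 = f[:nb + 1].astype(complex); r1[0], r1[-1] = 0.0, g1
    r2 = f[na:].astype(complex);     r2[0], r2[-1] = g2, 0.0
    u1, u2 = np.linalg.solve(A1, r1), np.linalg.solve(A2, r2)
    u = np.zeros(N + 1, dtype=complex)
    u[:nb + 1] += chi1[:nb + 1] * u1     # glue with the partition of unity
    u[na:] += chi2[na:] * u2

rel_err = np.max(np.abs(u - u_ref)) / np.max(np.abs(u_ref))
print(rel_err)
```

Because the impedance traces are extracted with the same one-sided stencil used in the boundary rows, the restriction of the global discrete solution is an exact fixed point of the iteration, and the error contracts at each step.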

Related content

Using the calculus of variations, we prove the following structure theorem for noise stable partitions: a partition of $n$-dimensional Euclidean space into $m$ disjoint sets of fixed Gaussian volumes that maximize their noise stability must be $(m-1)$-dimensional, if $m-1\leq n$. In particular, the maximum noise stability of a partition of $m$ sets in $\mathbb{R}^{n}$ of fixed Gaussian volumes is constant for all $n$ satisfying $n\geq m-1$. From this result, we obtain: (i) A proof of the Plurality is Stablest Conjecture for $3$ candidate elections, for all correlation parameters $\rho$ satisfying $0<\rho<\rho_{0}$, where $\rho_{0}>0$ is a fixed constant (that does not depend on the dimension $n$), when each candidate has an equal chance of winning. (ii) A variational proof of Borell's Inequality (corresponding to the case $m=2$). The structure theorem answers a question of De-Mossel-Neeman and of Ghazi-Kamath-Raghavendra. Item (i) is the first proof of any case of the Plurality is Stablest Conjecture of Khot-Kindler-Mossel-O'Donnell (2005) for fixed $\rho$, with the case $\rho\to1^{-}$ being solved recently. Item (i) is also the first evidence for the optimality of the Frieze-Jerrum semidefinite program for solving MAX-3-CUT, assuming the Unique Games Conjecture. Without the assumption that each candidate has an equal chance of winning in (i), the Plurality is Stablest Conjecture is known to be false.
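For intuition on item (ii), Borell's inequality (the case $m=2$) can be checked by simulation: among sets of fixed Gaussian measure, a halfspace maximizes noise stability. A minimal Monte Carlo sketch, where the correlation $\rho=1/2$, the sample size, and the competing centred ball are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.5, 400_000

# rho-correlated standard Gaussian pairs in R^2
X = rng.standard_normal((n, 2))
Y = rho * X + np.sqrt(1.0 - rho**2) * rng.standard_normal((n, 2))

def in_half(p):                       # halfspace of Gaussian measure 1/2
    return p[:, 0] <= 0.0

r2 = 2.0 * np.log(2.0)                # centred ball of measure 1/2 in R^2
def in_ball(p):
    return (p**2).sum(axis=1) <= r2

stab_half = np.mean(in_half(X) & in_half(Y))
stab_ball = np.mean(in_ball(X) & in_ball(Y))
# Sheppard's formula: stability of the halfspace is 1/4 + arcsin(rho)/(2*pi),
# which equals 1/3 for rho = 1/2; the ball's stability falls below it
print(stab_half, stab_ball)
```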

Whereas quantum complexity theory has traditionally been concerned with problems arising from classical complexity theory (such as computing boolean functions), it also makes sense to study the complexity of inherently quantum operations such as constructing quantum states or performing unitary transformations. With this motivation, we define models of interactive proofs for synthesizing quantum states and unitaries, where a polynomial-time quantum verifier interacts with an untrusted quantum prover, and a verifier who accepts also outputs an approximation of the target state (for the state synthesis problem) or the result of the target unitary applied to the input state (for the unitary synthesis problem); furthermore there should exist an "honest" prover which the verifier accepts with probability 1. Our main result is a "state synthesis" analogue of the inclusion PSPACE $\subseteq$ IP: any sequence of states computable by a polynomial-space quantum algorithm (which may run for exponential time) admits an interactive protocol of the form described above. Leveraging this state synthesis protocol, we also give a unitary synthesis protocol for polynomial space-computable unitaries that act nontrivially on only a polynomial-dimensional subspace.

We consider discrete Schr\"odinger operators with aperiodic potentials given by a Sturmian word, a natural generalisation of the Fibonacci Hamiltonian. We introduce the finite section method, which is often used to solve operator equations approximately, and apply it first to periodic Schr\"odinger operators. It turns out that for integer-valued potentials the applicability of the method is guaranteed whenever the operator is invertible. Using periodic approximations, we find a necessary and sufficient condition for the applicability of the finite section method to aperiodic Schr\"odinger operators, together with a numerical method for checking it.
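To illustrate the finite section idea in this setting, the sketch below truncates a discrete Schrödinger operator with a Fibonacci (Sturmian) potential and solves the truncated systems; since the chosen spectral parameter lies outside the spectrum, the truncated solutions stabilise as the section grows. All parameter choices are illustrative, not from the paper:

```python
import numpy as np

alpha = (np.sqrt(5.0) - 1.0) / 2.0     # Fibonacci rotation number
lam = 1.0                              # coupling constant

def potential(n):
    # Sturmian word: V(n) = lam * chi_[1-alpha,1)(n*alpha mod 1)
    return lam * (((n * alpha) % 1.0) >= 1.0 - alpha)

def finite_section_solve(m, z=-3.0):
    """Solve the finite section (H - z) x = e_0 on indices -m..m,
    where (Hu)(n) = u(n-1) + u(n+1) + V(n) u(n)."""
    idx = np.arange(-m, m + 1)
    H = (np.diag(potential(idx))
         + np.diag(np.ones(2 * m), 1) + np.diag(np.ones(2 * m), -1))
    b = np.zeros(2 * m + 1)
    b[m] = 1.0                         # right-hand side e_0
    return np.linalg.solve(H - z * np.eye(2 * m + 1), b)

# z = -3 lies outside the spectrum (contained in [-2, 2 + lam]), so the
# sections are uniformly invertible and their central entries stabilise
x100 = finite_section_solve(100)
x200 = finite_section_solve(200)
diff = np.max(np.abs(x100 - x200[100:301]))
print(diff)
```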

We propose an Extended Hybrid High-Order scheme for the Poisson problem with solution possessing weak singularities. Some general assumptions are stated on the nature of this singularity and the remaining part of the solution. The method is formulated by enriching the local polynomial spaces with appropriate singular functions. Via a detailed error analysis, the method is shown to converge optimally in both discrete and continuous energy norms. Some tests are conducted in two dimensions for singularities arising from irregular geometries in the domain. The numerical simulations illustrate the established error estimates, and show the method to be a significant improvement over a standard Hybrid High-Order method.

The Bregman proximal point algorithm (BPPA), one of the centerpieces of the optimization toolbox, has been witnessing emerging applications. Despite its simple and easy-to-implement update rule, the algorithm bears several compelling intuitions for its empirical success, yet rigorous justifications remain largely unexplored. We study the computational properties of BPPA through classification tasks with separable data, and demonstrate provable algorithmic regularization effects associated with BPPA. We show that BPPA attains a non-trivial margin, which closely depends on the condition number of the distance-generating function inducing the Bregman divergence. We further demonstrate that the dependence on the condition number is tight for a class of problems, showing the importance of the divergence in determining the quality of the obtained solutions. In addition, we extend our findings to mirror descent (MD), for which we establish similar connections between the margin and the Bregman divergence; in particular, we show through a concrete example that BPPA/MD converges in direction to the maximal-margin solution with respect to the Mahalanobis distance. Our theoretical findings are among the first to demonstrate the benign learning properties of BPPA/MD, and also provide corroboration for a careful choice of divergence in algorithmic design.
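As a hypothetical illustration of the MD side of the story, the sketch below runs mirror descent with a quadratic distance-generating function $\psi(w)=\tfrac{1}{2}w^{\top}Qw$ on the exponential loss over separable data and reports the normalized margin in the Mahalanobis norm $\|w\|_{Q}$; the data, $Q$, step size, and iteration count are illustrative choices, not from the paper:

```python
import numpy as np

X = np.array([[2.0, 1.0], [1.5, -0.5], [-1.0, -2.0], [-2.0, 0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])        # linearly separable data

Q = np.diag([1.0, 4.0])                     # dgf psi(w) = 0.5 w^T Q w, cond = 4
Qinv = np.linalg.inv(Q)

w = np.zeros(2)
eta = 0.1
for _ in range(5000):
    # gradient of the exponential loss sum_i exp(-y_i x_i^T w)
    grad = -(y[:, None] * X * np.exp(-y * (X @ w))[:, None]).sum(axis=0)
    w -= eta * (Qinv @ grad)    # MD step: grad psi(w+) = grad psi(w) - eta grad L

# normalized margin with respect to the Mahalanobis norm ||w||_Q
margin = np.min(y * (X @ w)) / np.sqrt(w @ Q @ w)
print(margin, w)
```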

Peskin's Immersed Boundary (IB) model and method are among the most popular modelling tools and numerical methods for fluid-structure interaction. The IB method is known to be first-order accurate in the velocity, yet almost no rigorous proof of this can be found in the literature for Stokes equations with a prescribed velocity boundary condition. In this paper, we show that the pressure of the Stokes equation converges with order $O(\sqrt{h})$ in the $L^2$ norm, while the velocity converges with order $O(h)$ in the infinity norm, in two dimensions (2D). The proofs are based on the idea of the immersed interface method and the convergence proof of the IB method for elliptic interface problems \cite{li:mathcom}. The proof is intuitive, and the conclusion applies to different boundary conditions as long as the problem is well-posed. The proof also provides an efficient way to decouple the system into three Helmholtz/Poisson equations without affecting the accuracy. A non-trivial numerical example is provided to confirm the theoretical analysis.

We design a Hybrid High-Order (HHO) scheme for the Poisson problem that is fully robust on polytopal meshes in the presence of small edges/faces. We state general assumptions on the stabilisation terms involved in the scheme, under which optimal error estimates (in discrete and continuous energy norms, as well as the $L^2$-norm) are established with multiplicative constants that do not depend on the maximum number of faces in each element, or on the relative size between an element and its faces. We illustrate the error estimates through numerical simulations in 2D and 3D on meshes designed by agglomeration techniques (such meshes naturally have elements with a very large number of faces, and very small faces).

We propose a unified frequency domain cross-validation (FDCV) method to obtain a heteroskedasticity and autocorrelation consistent (HAC) standard error. Our proposed method allows for model/tuning-parameter selection across parametric and nonparametric spectral estimators simultaneously. Our candidate class consists of restricted maximum likelihood (REML) based autoregressive spectral estimators and lag-weights estimators with the Parzen kernel. We provide a method for efficiently computing the REML estimators of the autoregressive models. In simulations, we demonstrate the reliability of our FDCV method compared with the popular HAC estimators of Andrews-Monahan and Newey-West. Supplementary material for the article is available online.
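One half of the candidate class, a lag-weights long-run variance estimator with the Parzen kernel, is straightforward to sketch; the bandwidth, sample size, and the AR(1) test process below are illustrative choices, not from the article:

```python
import numpy as np

def parzen(t):
    """Parzen kernel k(t): support |t| <= 1."""
    a = abs(t)
    if a <= 0.5:
        return 1.0 - 6.0 * a**2 + 6.0 * a**3
    if a <= 1.0:
        return 2.0 * (1.0 - a) ** 3
    return 0.0

def hac_lrv(u, S):
    """Lag-weights estimate of the long-run variance with bandwidth S:
    sum over |j| < S of k(j/S) * gamma_hat_j."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    T = len(u)
    lrv = np.dot(u, u) / T                       # gamma_hat_0
    for j in range(1, T):
        w = parzen(j / S)
        if w == 0.0:
            break
        lrv += 2.0 * w * np.dot(u[:-j], u[j:]) / T
    return lrv

rng = np.random.default_rng(0)
T = 20_000
e = rng.standard_normal(T)
u = np.empty(T)
u[0] = e[0]
for t in range(1, T):
    u[t] = 0.5 * u[t - 1] + e[t]     # AR(1): true long-run variance = 4

lrv = hac_lrv(u, S=30)
se = np.sqrt(lrv / T)                # HAC standard error of the sample mean
print(lrv, se)
```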

Distributionally robust optimization (DRO) is a worst-case framework for stochastic optimization under uncertainty that has drawn a fast-growing literature in recent years. When the underlying probability distribution is unknown and must be inferred from data, DRO prescribes computing the worst-case distribution within a so-called uncertainty set that captures the involved statistical uncertainty. In particular, DRO with an uncertainty set constructed as a statistical-divergence neighborhood ball has been shown to provide a tool for constructing valid confidence intervals for nonparametric functionals, and bears a duality with empirical likelihood (EL). In this paper, we show how adjusting the ball size of such a DRO can reduce higher-order coverage errors, similarly to the Bartlett correction. Our correction, which applies to general von Mises differentiable functionals, is more general than the existing EL literature, which focuses only on smooth function models or $M$-estimation. Moreover, we demonstrate a higher-order "self-normalizing" property of DRO regardless of the choice of divergence. Our approach builds on a higher-order expansion of DRO, obtained through an asymptotic analysis of a fixed point equation arising from the Karush-Kuhn-Tucker conditions.
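For a concrete instance of computing a worst-case quantity over a divergence ball, the sketch below takes the worst-case mean over a Kullback-Leibler ball around the empirical distribution, using the standard convex dual $\sup_{KL(Q\|P_n)\le\rho} E_Q[X] = \inf_{\lambda>0}\{\lambda\rho + \lambda\log E_{P_n}[e^{X/\lambda}]\}$, minimized here by a simple grid search; the data, radius $\rho$, and grid are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(500)       # i.i.d. sample (illustrative data)
rho = 0.05                         # radius of the KL ball

def dual(lam):
    # lam*rho + lam*log E_Pn[exp(x/lam)], written with log-sum-exp
    m = x.max()
    return lam * rho + m + lam * np.log(np.mean(np.exp((x - m) / lam)))

lams = np.linspace(0.05, 20.0, 2000)     # grid search for the 1-d convex dual
worst = min(dual(l) for l in lams)
# the worst-case mean sits strictly between the sample mean and the maximum
print(np.mean(x), worst, x.max())
```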

Methods that align distributions by minimizing an adversarial distance between them have recently achieved impressive results. However, these approaches are difficult to optimize with gradient descent and they often do not converge well without careful hyperparameter tuning and proper initialization. We investigate whether turning the adversarial min-max problem into an optimization problem by replacing the maximization part with its dual improves the quality of the resulting alignment and explore its connections to Maximum Mean Discrepancy. Our empirical results suggest that using the dual formulation for the restricted family of linear discriminators results in a more stable convergence to a desirable solution when compared with the performance of a primal min-max GAN-like objective and an MMD objective under the same restrictions. We test our hypothesis on the problem of aligning two synthetic point clouds on a plane and on a real-image domain adaptation problem on digits. In both cases, the dual formulation yields an iterative procedure that gives more stable and monotonic improvement over time.
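For reference, the MMD objective mentioned above can be sketched in a few lines: a biased estimate of squared MMD with a Gaussian kernel clearly separates a shifted point cloud from a matched one. The clouds, shift, and kernel bandwidth below are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD with a Gaussian kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 2))
B = rng.standard_normal((300, 2)) + 1.0      # shifted cloud
C = rng.standard_normal((300, 2))            # matched cloud

ab, ac = mmd2(A, B), mmd2(A, C)
print(ab, ac)                                # ab well above ac
```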
