
We establish optimal error bounds for the exponential wave integrator (EWI) applied to the nonlinear Schr\"odinger equation (NLSE) with an $ L^\infty $-potential and/or a locally Lipschitz nonlinearity, under the assumption of an $ H^2 $-solution of the NLSE. For the semi-discretization in time by the first-order Gautschi-type EWI, we prove an optimal $ L^2 $-error bound of $ O(\tau) $, with $ \tau>0 $ the time step size, together with a uniform $ H^2 $-bound of the numerical solution. For the full discretization obtained by using the Fourier spectral method in space, we prove an optimal $ L^2 $-error bound of $ O(\tau + h^2) $ without any coupling condition between $ \tau $ and $ h $, where $ h>0 $ is the mesh size. In addition, for a $ W^{1, 4} $-potential and slightly stronger regularity of the nonlinearity, under the assumption of an $ H^3 $-solution, we obtain an optimal $ H^1 $-error bound. Furthermore, when the potential is of low regularity but the nonlinearity is sufficiently smooth, we propose an extended Fourier pseudospectral method which attains the same error bound as the Fourier spectral method at a computational cost comparable to that of the standard Fourier pseudospectral method. Our new error bounds greatly improve the existing results for the NLSE with low regularity potential and/or nonlinearity. Extensive numerical results are reported to confirm our error estimates and to demonstrate that they are sharp.
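For readers who want to see the time-stepping structure concretely, here is a minimal sketch of a first-order Gautschi-type EWI combined with a Fourier method in space, applied to the 1D cubic NLSE $ i\partial_t u = -\partial_{xx} u + V(x)u + \beta|u|^2u $ on a periodic domain. The cubic nonlinearity, the smooth stand-in potential, and all numerical parameters are illustrative assumptions, not the low-regularity setting analyzed in the paper.

    import numpy as np

    # Sketch of a first-order Gautschi-type EWI with a Fourier method in space for
    #     i u_t = -u_xx + V(x) u + beta |u|^2 u   on [0, 2*pi), periodic.
    # One step: u^{n+1} = e^{i*tau*dxx} u^n - i*tau * phi1(i*tau*dxx) G(u^n),
    # with G(u) = V u + beta |u|^2 u frozen over the step (first order in time).

    N, tau, beta = 256, 1e-3, 1.0                  # modes, time step, nonlinearity
    x = 2 * np.pi * np.arange(N) / N
    mu = np.fft.fftfreq(N, d=1.0 / N)              # integer wavenumbers on [0, 2*pi)
    V = np.cos(x)                                  # smooth stand-in potential
    u = np.exp(-2 * (x - np.pi) ** 2).astype(complex)

    def phi1(z):
        # phi_1(z) = (e^z - 1)/z, with the removable singularity filled at z = 0
        out = np.ones_like(z, dtype=complex)
        nz = z != 0
        out[nz] = (np.exp(z[nz]) - 1.0) / z[nz]
        return out

    E = np.exp(-1j * tau * mu ** 2)                # symbol of e^{i*tau*dxx}
    P = phi1(-1j * tau * mu ** 2)                  # symbol of phi1(i*tau*dxx)

    for _ in range(1000):
        G = V * u + beta * np.abs(u) ** 2 * u
        u = np.fft.ifft(E * np.fft.fft(u) - 1j * tau * P * np.fft.fft(G))

    print("mass after integration:", np.sum(np.abs(u) ** 2) * 2 * np.pi / N)

Because the linear flow is integrated exactly in Fourier space, the step size is restricted only by accuracy, not stability, which is the structural reason EWI-type schemes avoid CFL-type couplings between $ \tau $ and $ h $.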

Related content

Our objective is to compute the derivatives of data corrupted by noise. This is a challenging task, as even small amounts of noise can cause significant errors in the computation, mainly because the randomness of the noise produces high-frequency fluctuations. To overcome this challenge, we propose approximating the data by eliminating the high-frequency terms from its Fourier expansion with respect to the polynomial-exponential basis. This truncation regularizes the problem, while the polynomial-exponential basis ensures accuracy in the computation. We demonstrate the effectiveness of our approach through numerical examples in one and two dimensions.
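The following toy sketch conveys the truncation idea using the ordinary trigonometric Fourier basis instead of the paper's polynomial-exponential basis (a simplifying assumption): discarding all modes above a cutoff before differentiating suppresses the noise-induced high-frequency fluctuations.

    import numpy as np

    # Differentiate noisy periodic samples by truncating the (trigonometric)
    # Fourier expansion before applying the spectral derivative. The cutoff K
    # is the regularization parameter.

    N = 512
    x = 2 * np.pi * np.arange(N) / N
    f_clean = np.sin(3 * x) + 0.5 * np.cos(x)
    rng = np.random.default_rng(0)
    f_noisy = f_clean + 0.05 * rng.standard_normal(N)

    K = 10                                        # retain only |frequency| <= K
    freqs = np.fft.fftfreq(N, d=1.0 / N)
    coeffs = np.fft.fft(f_noisy)
    coeffs[np.abs(freqs) > K] = 0.0               # truncation step (regularization)
    df = np.fft.ifft(1j * freqs * coeffs).real    # differentiate the truncated series

    df_exact = 3 * np.cos(3 * x) - 0.5 * np.sin(x)
    print("max derivative error:", np.max(np.abs(df - df_exact)))

Without the truncation, the spectral derivative amplifies the noise in mode $k$ by a factor $|k|$, which is what makes naive differentiation of noisy data unstable.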

In this paper, we construct a quadrature scheme to numerically solve the nonlocal diffusion equation $(\mathcal{A}^\alpha+b\mathcal{I})u=f$ with $\mathcal{A}^\alpha$ the $\alpha$-th power of the regularly accretive operator $\mathcal{A}$. Rigorous error analysis is carried out and sharp error bounds (up to some negligible constants) are obtained. The error estimates include a wide range of cases in which the regularity index and spectral angle of $\mathcal{A}$, the smoothness of $f$, the size of $b$ and $\alpha$ are all involved. The quadrature scheme is exponentially convergent with respect to the step size and is root-exponentially convergent with respect to the number of solves. Some numerical tests are presented in the last section to verify the sharpness of our estimates. Furthermore, both the scheme and the error bounds can be utilized directly to solve and analyze time-dependent problems.
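As a concrete instance of the kind of resolvent-based quadrature the abstract refers to, the sketch below applies a sinc quadrature to the Balakrishnan integral $\mathcal{A}^{-\alpha} = \frac{\sin(\pi\alpha)}{\pi}\int_0^\infty t^{-\alpha}(t\mathcal{I}+\mathcal{A})^{-1}\,dt$ after the substitution $t = e^y$, which is exponentially convergent in the step size. The SPD matrix stands in for the regularly accretive operator, and the sketch approximates $\mathcal{A}^{-\alpha}f$ rather than solving $(\mathcal{A}^\alpha+b\mathcal{I})u=f$ directly, so it is a simplified illustration of the building block, not the paper's scheme.

    import numpy as np

    # Sinc quadrature for A^{-alpha} f via the Balakrishnan formula, after the
    # change of variables t = e^y (trapezoid rule on a uniform grid in y).
    # Each quadrature node costs one resolvent solve (t I + A)^{-1} f.

    n, alpha = 50, 0.5
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD 1D Laplacian stencil
    f = np.ones(n)

    def frac_inverse_apply(A, f, alpha, k=0.25, M=100):
        # Approximate A^{-alpha} f with 2M+1 resolvent solves and step size k.
        out = np.zeros_like(f)
        for j in range(-M, M + 1):
            t = np.exp(j * k)
            out += np.exp((1 - alpha) * j * k) * np.linalg.solve(t * np.eye(len(f)) + A, f)
        return (np.sin(np.pi * alpha) * k / np.pi) * out

    u = frac_inverse_apply(A, f, alpha)

    w, Q = np.linalg.eigh(A)                               # reference by diagonalization
    u_ref = Q @ ((Q.T @ f) / w ** alpha)
    print("quadrature error:", np.linalg.norm(u - u_ref))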

The rotation-two-component Camassa--Holm system, which possesses strongly nonlinear coupled terms and high-order differential terms, tends to have continuous nonsmooth solitary wave solutions, such as peakons, stumpons, composite waves and even chaotic waves. In this paper an accurate semi-discrete conservative difference scheme for the system is derived by taking advantage of its Hamiltonian invariants. We show that the semi-discrete scheme preserves at least three discrete conservation laws: mass, momentum and energy. Furthermore, a fully discrete finite difference scheme is proposed without sacrificing any of these conservation laws. By combining a nonlinear iteration process with an efficient threshold strategy, the accuracy of the numerical scheme can be guaranteed. Meanwhile, the difference scheme can capture the formation and propagation of solitary wave solutions with satisfactory long-time behavior under smooth/nonsmooth initial data. The numerical results reveal a new type of asymmetric wave-breaking phenomenon under a nonzero rotational parameter.
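The Camassa--Holm system itself is beyond a short sketch, but the core idea of conservative discretization can be illustrated on a toy Hamiltonian system: the implicit midpoint rule preserves quadratic invariants (the discrete analogue of mass and energy conservation) exactly. Everything below is a generic stand-in, not the paper's scheme.

    import numpy as np

    # Implicit midpoint rule for a linear Hamiltonian system z' = J z with J
    # skew-symmetric. The update matrix is a Cayley transform, hence orthogonal,
    # so the quadratic invariant H = |z|^2 / 2 is preserved to machine precision.

    def midpoint_step(z, dt, J):
        # Solve z_next = z + dt * J @ (z + z_next) / 2 for z_next.
        I = np.eye(len(z))
        return np.linalg.solve(I - 0.5 * dt * J, z + 0.5 * dt * (J @ z))

    J = np.array([[0.0, 1.0], [-1.0, 0.0]])    # harmonic oscillator in (q, p)
    z, dt = np.array([1.0, 0.0]), 0.1
    energies = []
    for _ in range(10000):
        z = midpoint_step(z, dt, J)
        energies.append(0.5 * (z @ z))

    print("energy drift over 10000 steps:", max(energies) - min(energies))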

Functional principal component analysis (FPCA) has played an important role in the development of functional time series analysis. This note investigates how FPCA can be used to analyze cointegrated functional time series and proposes a modification of FPCA as a novel statistical tool. Our modified FPCA not only provides an asymptotically more efficient estimator of the cointegrating vectors, but also leads to novel FPCA-based tests for examining essential properties of cointegrated functional time series.
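As background, the sketch below runs the standard (unmodified) FPCA procedure on a synthetic sample of curves: center the data, discretize the covariance operator on the grid, and take its leading eigenfunctions. The two-factor synthetic data are an illustrative assumption; the paper's modification for cointegrated series is not shown.

    import numpy as np

    # Standard FPCA on curves observed on a common grid: eigendecomposition of
    # the discretized sample covariance operator.

    rng = np.random.default_rng(1)
    T, G = 200, 101                                 # number of curves, grid points
    t = np.linspace(0, 1, G)
    scores = rng.standard_normal((T, 2)) @ np.diag([3.0, 1.0])
    basis = np.vstack([np.sin(np.pi * t), np.cos(2 * np.pi * t)])
    X = scores @ basis + 0.1 * rng.standard_normal((T, G))   # synthetic sample

    Xc = X - X.mean(axis=0)                         # center the curves
    h = 1.0 / (G - 1)                               # quadrature weight of the grid
    C = (Xc.T @ Xc) / T * h                         # discretized covariance operator
    w, V = np.linalg.eigh(C)
    order = np.argsort(w)[::-1]
    phi_hat = V[:, order[0]] / np.sqrt(h)           # leading eigenfunction, int phi^2 = 1

    align = abs(np.sum(phi_hat * np.sqrt(2) * np.sin(np.pi * t)) * h)
    print("explained-variance ratios:", (w[order[:2]] / w.sum()).round(3))
    print("alignment of leading eigenfunction with first factor:", round(align, 3))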

A partition $\mathcal{P}$ of $\mathbb{R}^d$ is called a $(k,\varepsilon)$-secluded partition if, for every $\vec{p} \in \mathbb{R}^d$, the ball $\overline{B}_{\infty}(\varepsilon, \vec{p})$ intersects at most $k$ members of $\mathcal{P}$. A goal in designing such secluded partitions is to minimize $k$ while making $\varepsilon$ as large as possible. This partition problem has connections to a diverse range of topics, including deterministic rounding schemes, pseudodeterminism, replicability, as well as Sperner/KKM-type results. In this work, we establish near-optimal relationships between $k$ and $\varepsilon$. We show that, for any partition whose members have bounded measure and for any $d\geq 1$, it must be that $k\geq(1+2\varepsilon)^d$. Thus, when $k=k(d)$ is restricted to ${\rm poly}(d)$, it follows that $\varepsilon=\varepsilon(d)\in O\left(\frac{\ln d}{d}\right)$. This bound is tight up to log factors, as it is known that there exist secluded partitions with $k(d)=d+1$ and $\varepsilon(d)=\frac{1}{2d}$. We also provide new constructions of secluded partitions that work for a broad spectrum of $k(d)$ and $\varepsilon(d)$ parameters. Specifically, we prove that, for any $f:\mathbb{N}\rightarrow\mathbb{N}$, there is a secluded partition with $k(d)=(f(d)+1)^{\lceil\frac{d}{f(d)}\rceil}$ and $\varepsilon(d)=\frac{1}{2f(d)}$. These new partitions are optimal up to $O(\log d)$ factors for various choices of $k(d)$ and $\varepsilon(d)$. Based on the lower bound result, we establish a new neighborhood version of Sperner's lemma over hypercubes, which is of independent interest. In addition, we prove a no-free-lunch theorem about the limitations of rounding schemes in the context of pseudodeterministic/replicable algorithms.
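A quick numerical illustration of the gap these bounds address, using the trivial unit-cube grid partition of $\mathbb{R}^d$ (an illustrative baseline, not one of the paper's constructions): a closed $\ell_\infty$-ball of radius $\varepsilon<\frac{1}{2}$ can intersect up to $2^d$ grid cells, whereas the lower bound only forces $(1+2\varepsilon)^d$.

    import numpy as np

    # For the grid partition into cells [n_1, n_1+1) x ... x [n_d, n_d+1), the
    # closed infinity-ball of radius eps around p meets exactly
    #     prod_i (floor(p_i + eps) - floor(p_i - eps) + 1)
    # cells. Compare the worst case over random centers with (1 + 2*eps)^d.

    def cells_hit(p, eps):
        lo, hi = np.floor(p - eps), np.floor(p + eps)
        return int(np.prod(hi - lo + 1))

    d, eps = 8, 0.3
    rng = np.random.default_rng(2)
    worst = max(cells_hit(rng.uniform(0, 1, d), eps) for _ in range(10000))
    print("worst k for the grid partition:", worst)          # typically 2^d = 256
    print("lower bound (1 + 2*eps)^d:", (1 + 2 * eps) ** d)  # ~ 43.0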

It is well documented in the literature that solving Maxwell equations is very sensitive to the mesh structure, space conformity and solution regularity. Roughly speaking, for almost all methods in the literature, optimal convergence for low-regularity solutions relies heavily on conforming spaces and highly regular simplicial meshes. This is a significant limitation in the case of inhomogeneous media: the discontinuity of the electromagnetic parameters can lead to quite low regularity of solutions near media interfaces, potentially worsened by geometric singularities, which makes many popular methods based on broken spaces, non-conforming spaces or polytopal meshes particularly challenging to apply. In this article, we present a virtual element method for solving an indefinite time-harmonic Maxwell equation in 2D inhomogeneous media on quite arbitrary polytopal meshes, where the media interface is allowed to have geometric singularities that cause low regularity. There are two key novelties: (i) the proposed method is theoretically guaranteed to achieve robust optimal convergence for solutions with merely $\mathbf{H}^{\theta}$ regularity, $\theta\in(1/2,1]$; (ii) the polytopal elements can be highly anisotropic and shrinking, and an explicit formula is established to describe the relationship between the shape regularity and the solution regularity. Extensive numerical experiments are presented to demonstrate the effectiveness of the proposed method.

Solving the ground state and the ground-state properties of quantum many-body systems is generically a hard task for classical algorithms. For a family of Hamiltonians defined on an $m$-dimensional space of physical parameters, the ground state and its properties at an arbitrary parameter configuration can be predicted via a machine learning protocol up to a prescribed prediction error $\varepsilon$, provided that a sample set (of size $N$) of the states can be efficiently prepared and measured. In a recent work [Huang et al., Science 377, eabk3333 (2022)], a rigorous guarantee for such a generalization was proved. Unfortunately, an exponential scaling, $N = m^{ {\cal{O}} \left(\frac{1}{\varepsilon} \right) }$, was found to be universal for generic gapped Hamiltonians. This result suits the situation where the dimension of the parameter space is large while the scaling with the prediction error is not the pressing concern, and it does not enter the regime of highly precise learning and prediction. In this work, we consider the alternative scenario where $m$ is a finite, not necessarily large, constant while the scaling with the prediction error becomes the central concern. By exploiting physical constraints and positive good kernels for predicting the density matrix, we rigorously obtain an exponentially improved sample complexity, $N = \mathrm{poly} \left(\varepsilon^{-1}, n, \log \frac{1}{\delta}\right)$, where $\mathrm{poly}$ denotes a polynomial function, $n$ is the number of qubits in the system, and $1-\delta$ is the probability of success. Moreover, if restricted to learning ground-state properties with strong locality assumptions, the number of samples can be further reduced to $N = \mathrm{poly} \left(\varepsilon^{-1}, \log \frac{n}{\delta}\right)$. This provably rigorous result represents a significant improvement and an indispensable extension of the existing work.
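To make the learning protocol concrete, here is a toy version on a single qubit: sample exact ground-state expectation values at a few parameter points, then predict at new parameters with kernel ridge regression (an RBF kernel standing in for the paper's positive good kernels). The Hamiltonian family $H(m) = mZ + X$ and all hyperparameters are illustrative assumptions.

    import numpy as np

    # Learn the map m -> <Z> in the ground state of H(m) = m*Z + X from samples.

    def ground_state_sz(m):
        H = np.array([[m, 1.0], [1.0, -m]])      # m*Z + X for a single qubit
        w, V = np.linalg.eigh(H)
        g = V[:, 0]                              # ground state (lowest eigenvalue)
        return g[0] ** 2 - g[1] ** 2             # <Z> = |g_0|^2 - |g_1|^2

    rng = np.random.default_rng(3)
    m_train = rng.uniform(-2, 2, 40)             # N = 40 sampled parameter points
    y_train = np.array([ground_state_sz(m) for m in m_train])

    def rbf(a, b, ell=0.3):                      # Gaussian (RBF) kernel matrix
        return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell ** 2))

    lam = 1e-6                                   # ridge regularization strength
    alpha = np.linalg.solve(rbf(m_train, m_train) + lam * np.eye(40), y_train)

    m_test = np.linspace(-2, 2, 9)
    pred = rbf(m_test, m_train) @ alpha
    truth = np.array([ground_state_sz(m) for m in m_test])
    print("max prediction error:", np.max(np.abs(pred - truth)))

The prediction is accurate here because the gap of $H(m)$ never closes, so the ground-state property is a smooth function of the parameter, which is the regime in which kernel methods generalize well.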

Submodular optimization generalizes many classic problems in combinatorial optimization and has recently found a wide range of applications in machine learning (e.g., feature engineering and active learning). For many large-scale optimization problems, we are often concerned with the adaptivity complexity of an algorithm, which quantifies the number of sequential rounds in which polynomially many independent function evaluations can be executed in parallel. While low adaptivity is ideal, it is not sufficient for a distributed algorithm to be efficient, since in many practical applications of submodular optimization the number of function evaluations becomes prohibitively expensive. Motivated by these applications, we study the adaptivity and query complexity of adaptive submodular optimization. Our main result is a distributed algorithm for maximizing a monotone submodular function with cardinality constraint $k$ that achieves a $(1-1/e-\varepsilon)$-approximation in expectation. This algorithm runs in $O(\log(n))$ adaptive rounds and makes $O(n)$ calls to the function evaluation oracle in expectation. The approximation guarantee and query complexity are optimal, and the adaptivity is nearly optimal. Moreover, the number of queries is substantially less than in previous works. Lastly, we extend our results to the submodular cover problem to demonstrate the generality of our algorithm and techniques.
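For contrast with the low-adaptivity algorithm, the sketch below is the fully sequential baseline: the classic $(1-1/e)$ greedy for monotone submodular maximization under a cardinality constraint, which needs $O(nk)$ oracle calls and $k$ adaptive rounds. The random coverage instance is an illustrative assumption.

    import random

    # Classic greedy on a coverage function (monotone and submodular):
    # repeatedly add the element with the largest marginal gain.

    random.seed(4)
    universe, n = range(200), 50
    sets = [set(random.sample(universe, 20)) for _ in range(n)]

    def coverage(S):
        covered = set()
        for i in S:
            covered |= sets[i]
        return len(covered)                      # oracle: one function evaluation

    def greedy(k):
        S = []
        for _ in range(k):                       # k fully sequential (adaptive) rounds
            gain, best = max((coverage(S + [i]) - coverage(S), i)
                             for i in range(n) if i not in S)
            if gain == 0:
                break
            S.append(best)
        return S

    S = greedy(10)
    print("coverage of greedy solution:", coverage(S))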

Nonsmooth composite optimization with orthogonality constraints has a broad spectrum of applications in statistical learning and data science. However, this problem is generally challenging to solve due to its non-convex and non-smooth nature. Existing solutions are limited by one or more of the following restrictions: (i) they are full gradient methods that require high computational costs in each iteration; (ii) they are not capable of solving general nonsmooth composite problems; (iii) they are infeasible methods and can only achieve feasibility of the solution at the limit point; (iv) they lack rigorous convergence guarantees; (v) they only obtain weak optimality at critical points. In this paper, we propose \textit{\textbf{OBCD}}, a new Block Coordinate Descent method for solving general nonsmooth composite problems under Orthogonality constraints. \textit{\textbf{OBCD}} is a feasible method with a low per-iteration computational footprint. In each iteration, our algorithm updates $k$ rows of the solution matrix ($k\geq2$ is a parameter) so as to preserve the constraints, and then solves a small-sized nonsmooth composite optimization problem under orthogonality constraints either exactly or approximately. We demonstrate that any exact block-$k$ stationary point is always an approximate block-$k$ stationary point, which is equivalent to a critical point. We are particularly interested in the case $k=2$, as the resulting subproblem reduces to a one-dimensional nonconvex problem. We propose a breakpoint searching method and a fifth-order iterative method to solve this subproblem efficiently and effectively. We also propose two novel greedy strategies for finding a good working set, to further accelerate the convergence of \textit{\textbf{OBCD}}. Finally, we conduct extensive experiments on several tasks to demonstrate the superiority of our approach.
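The $k=2$ update has a simple geometric form that the sketch below illustrates: rotating two rows of a matrix with orthonormal columns by a planar rotation keeps it feasible, so each iteration is a one-dimensional search over the rotation angle. The quadratic objective, the uniform random working set, and the grid search over the angle are illustrative stand-ins for the paper's composite objective, greedy working-set strategies, and breakpoint/fifth-order subproblem solvers.

    import numpy as np

    # Block coordinate descent over X with X^T X = I: update two rows at a time
    # by the best planar rotation found on a grid of angles. Left-multiplying by
    # an orthogonal matrix preserves X^T X = I, so every iterate is feasible.

    rng = np.random.default_rng(5)
    n, r = 10, 3
    B = rng.standard_normal((n, r))
    X, _ = np.linalg.qr(rng.standard_normal((n, r)))    # feasible start

    def f(X):
        return 0.5 * np.linalg.norm(X - B) ** 2         # smooth stand-in objective

    for _ in range(500):
        i, j = rng.choice(n, size=2, replace=False)     # random working set (rows)
        rows = X[[i, j], :].copy()
        best_theta, best_val = 0.0, f(X)                # theta = 0 keeps X unchanged
        for th in np.linspace(0, 2 * np.pi, 256, endpoint=False):
            c, s = np.cos(th), np.sin(th)
            Xt = X.copy()
            Xt[[i, j], :] = np.array([[c, -s], [s, c]]) @ rows
            if f(Xt) < best_val:
                best_theta, best_val = th, f(Xt)
        c, s = np.cos(best_theta), np.sin(best_theta)
        X[[i, j], :] = np.array([[c, -s], [s, c]]) @ rows

    print("objective:", f(X), "feasibility error:", np.linalg.norm(X.T @ X - np.eye(r)))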

Given a convex function $f$ on $\mathbb{R}^n$ with an integer minimizer, we show how to find an exact minimizer of $f$ using $O(n^2 \log n)$ calls to a separation oracle and $O(n^4 \log n)$ time. The previous best polynomial time algorithm for this problem, given in [Jiang, SODA 2021, JACM 2022], achieves $\widetilde{O}(n^2)$ oracle complexity. However, the overall runtime of Jiang's algorithm is at least $\widetilde{\Omega}(n^8)$, due to expensive sub-routines such as the Lenstra-Lenstra-Lov\'asz (LLL) algorithm [Lenstra, Lenstra, Lov\'asz, Math. Ann. 1982] and the random walk based cutting plane method [Bertsimas, Vempala, JACM 2004]. Our significant speedup is obtained by a nontrivial combination of a faster version of the LLL algorithm due to [Neumaier, Stehl\'e, ISSAC 2016] that gives similar guarantees, the volumetric center cutting plane method (CPM) of [Vaidya, FOCS 1989], and its fast implementation given in [Jiang, Lee, Song, Wong, STOC 2020]. For the special case of submodular function minimization (SFM), our result implies a strongly polynomial time algorithm for this problem using $O(n^3 \log n)$ calls to an evaluation oracle and $O(n^4 \log n)$ additional arithmetic operations. Both the oracle complexity and the number of arithmetic operations of our more general algorithm improve upon the previous best-known algorithms for this specific problem, given in [Lee, Sidford, Wong, FOCS 2015] and [Dadush, V\'egh, Zambelli, SODA 2018, MOR 2021].
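A one-dimensional toy conveys the oracle model: when a convex function has an integer minimizer, the sign of a subgradient (here obtained from one finite difference) acts as a separation oracle, and binary search finds the exact minimizer in logarithmically many calls. The full algorithm works in $\mathbb{R}^n$ and needs the cutting-plane and lattice (LLL-type) machinery described above; none of that is captured here.

    # Binary search for the integer minimizer of a 1D convex function, using the
    # sign of the discrete subgradient f(mid+1) - f(mid) as a separation oracle.

    def minimize_convex_integer(f, lo, hi):
        calls = 0
        while lo < hi:
            mid = (lo + hi) // 2
            calls += 1
            if f(mid + 1) - f(mid) >= 0:      # minimizer lies at mid or to the left
                hi = mid
            else:                             # minimizer lies strictly to the right
                lo = mid + 1
        return lo, calls

    f = lambda x: (x - 137) ** 2 + abs(x - 137)   # convex, integer minimizer at 137
    xstar, calls = minimize_convex_integer(f, -10**9, 10**9)
    print("minimizer:", xstar, "oracle calls:", calls)   # about log2(2e9) = 31 calls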
