
We consider the Cauchy problem for a first-order evolution equation with memory in a finite-dimensional Hilbert space, where the integral term involves the time derivative of the solution. The main difficulty in the approximate solution of such nonlocal problems is the need to work with the approximate solution at all previous time moments. We propose a transformation of the first-order integrodifferential equation into a system of local evolutionary equations. We use an approach known from the theory of Volterra integral equations, in which the difference kernel is approximated by a sum of exponentials. We formulate a local problem for a weakly coupled system of equations with additional ordinary differential equations. We give stability estimates for the solution of the corresponding Cauchy problem with respect to the initial data and the right-hand side. Primary attention is paid to constructing and investigating the stability of two-level difference schemes, which are convenient for computational implementation. A numerical solution of a two-dimensional model problem for a first-order evolution equation, in which the dependence on the spatial variables is governed by the Laplace operator, is presented.
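
The key step above, replacing the memory term by a few auxiliary variables once the difference kernel is written as a sum of exponentials, can be illustrated with a small self-contained sketch. The kernel weights and decay rates, the test function standing in for $u'(t)$, and the backward Euler time stepping below are illustrative assumptions, not the scheme analyzed in the abstract.

```python
import numpy as np

# Hypothetical difference kernel written exactly as a short sum of exponentials,
# k(t) = sum_j a_j * exp(-b_j * t); in practice a_j, b_j would be fitted.
a = np.array([0.6, 0.3, 0.1])      # illustrative weights (assumption)
b = np.array([1.0, 5.0, 25.0])     # illustrative decay rates (assumption)
kernel = lambda s: (a[:, None] * np.exp(-b[:, None] * s[None, :])).sum(axis=0)

T, N = 4.0, 2000
t = np.linspace(0.0, T, N + 1); dt = t[1] - t[0]
g = np.cos(t)                      # stand-in for the time derivative u'(t)

def trap(y, x):                    # small trapezoid-rule helper
    return 0.0 if len(x) < 2 else float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Nonlocal evaluation: I(t_n) = int_0^{t_n} k(t_n - s) g(s) ds needs the whole history.
I_nonlocal = np.array([trap(kernel(t[n] - t[:n + 1]) * g[:n + 1], t[:n + 1])
                       for n in range(N + 1)])

# Local reformulation: w_j' = -b_j w_j + g(t), w_j(0) = 0, I(t) = sum_j a_j w_j(t),
# advanced here with backward Euler -- only the previous time level is needed.
w = np.zeros_like(a)
I_local = np.zeros(N + 1)
for n in range(1, N + 1):
    w = (w + dt * g[n]) / (1.0 + dt * b)
    I_local[n] = float(a @ w)

print("max |nonlocal - local| =", np.abs(I_nonlocal - I_local).max())  # ~O(dt)
```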


We consider a class of statistical estimation problems in which we are given a random data matrix ${\boldsymbol X}\in {\mathbb R}^{n\times d}$ (and possibly some labels ${\boldsymbol y}\in{\mathbb R}^n$) and would like to estimate a coefficient vector ${\boldsymbol \theta}\in{\mathbb R}^d$ (or possibly a constant number of such vectors). Special cases include low-rank matrix estimation and regularized estimation in generalized linear models (e.g., sparse regression). First order methods proceed by iteratively multiplying current estimates by ${\boldsymbol X}$ or its transpose. Examples include gradient descent or its accelerated variants. Celentano, Montanari, and Wu proved that for any constant number of iterations (matrix-vector multiplications), the optimal first order algorithm is a specific approximate message passing algorithm (known as `Bayes AMP'). The error of this estimator can be characterized in the high-dimensional asymptotics $n,d\to\infty$, $n/d\to\delta$, and provides a lower bound on the estimation error of any first order algorithm. Here we present a simpler proof of the same result and generalize it to broader classes of data distributions and of first order algorithms, including algorithms with non-separable nonlinearities. Most importantly, the new proof technique does not require constructing an equivalent tree-structured estimation problem and therefore lends itself to a broader range of applications.
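
To make the notion of a first order method concrete, the following minimal sketch runs one member of the class, plain gradient descent for ridge regression, accessing the data only through matrix-vector products with ${\boldsymbol X}$ and its transpose. The dimensions, penalty, and step size are illustrative assumptions, and this is not the Bayes AMP algorithm itself.

```python
import numpy as np

# A "first order method" in the sense used above: each iteration touches the data
# only through one product with X and one with X^T, interleaved with (here linear,
# separable) updates. Bayes AMP is another member of the same class.
rng = np.random.default_rng(0)
n, d = 2000, 500
X = rng.standard_normal((n, d)) / np.sqrt(n)
theta_true = rng.standard_normal(d)
y = X @ theta_true + 0.1 * rng.standard_normal(n)

lam, step, iters = 0.1, 0.5, 50            # illustrative choices (assumptions)
theta = np.zeros(d)
for _ in range(iters):
    resid = y - X @ theta                  # one multiplication by X
    grad = -X.T @ resid + lam * theta      # one multiplication by X^T
    theta = theta - step * grad            # memoryless separable update
print("relative error:", np.linalg.norm(theta - theta_true) / np.linalg.norm(theta_true))
```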

We introduce a new numerical method for solving time-harmonic acoustic scattering problems. The main focus is on plane waves scattered by smoothly varying material inhomogeneities. The proposed method works for any frequency $\omega$, but is especially efficient for high-frequency problems. It is based on a time-domain approach and consists of three steps: \emph{i)} computation of a suitable incoming plane wavelet with compact support in the propagation direction; \emph{ii)} solving a scattering problem in the time domain for the incoming plane wavelet; \emph{iii)} reconstruction of the time-harmonic solution from the time-domain solution via a Fourier transform in time. An essential ingredient of the new method is a front-tracking mesh adaptation algorithm for solving the problem in \emph{ii)}. By exploiting the limited support of the wave front, this allows us to make the number of degrees of freedom required to reach a given accuracy significantly less dependent on the frequency $\omega$. We also present a new algorithm for computing the Fourier transform in \emph{iii)} that exploits the reduced number of degrees of freedom corresponding to the adapted meshes. Numerical examples demonstrate the advantages of the proposed method and show that it can also be applied to problems with external source terms such as point sources and sound-soft scatterers. The gained efficiency, however, is limited in the presence of trapping modes.
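
Step \emph{iii)} above amounts to a quadrature of a Fourier integral in time. A minimal sketch, assuming a scalar signal with a single known frequency and integrating over whole periods with the trapezoidal rule, is given below; it is not the adapted-mesh algorithm of the paper.

```python
import numpy as np

# Recover a time-harmonic amplitude from time-domain samples by a Fourier
# transform in time: A = (2/L) * int_0^L u(t) exp(i*omega*t) dt over whole periods.
omega = 40.0                          # target angular frequency (assumption)
A_true = 0.7 - 0.2j                   # complex amplitude to be recovered (assumption)
periods = 8                           # integrate over whole periods
L = periods * 2.0 * np.pi / omega
t = np.linspace(0.0, L, 4001)
u = np.real(A_true * np.exp(-1j * omega * t))       # time-domain samples

# Trapezoidal quadrature of the Fourier integral in time.
integrand = u * np.exp(1j * omega * t)
A_rec = (2.0 / L) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
print("recovered amplitude:", A_rec, " error:", abs(A_rec - A_true))
```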

This paper studies efficient estimation of causal effects when treatment is (quasi-) randomly rolled out to units at different points in time. We solve for the most efficient estimator in a class of estimators that nests two-way fixed effects models and other popular generalized difference-in-differences methods. A feasible plug-in version of the efficient estimator is asymptotically unbiased with efficiency (weakly) dominating that of existing approaches. We provide both $t$-based and permutation-test based methods for inference. We illustrate the performance of the plug-in efficient estimator in simulations and in an application to \citet{wood_procedural_2020}'s study of the staggered rollout of a procedural justice training program for police officers. We find that confidence intervals based on the plug-in efficient estimator have good coverage and can be as much as eight times shorter than confidence intervals based on existing state-of-the-art methods. As an empirical contribution of independent interest, our application provides the most precise estimates to date on the effectiveness of procedural justice training programs for police officers.
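
As a point of reference for the class of estimators mentioned above, the following sketch runs a plain two-way fixed effects regression on a simulated staggered rollout. It is only a baseline member of that class, not the plug-in efficient estimator of the paper, and the simulated design and effect size are assumptions.

```python
import numpy as np

# Two-way fixed effects (TWFE) regression on a simulated staggered rollout with a
# constant treatment effect tau; illustrative baseline only.
rng = np.random.default_rng(1)
n_units, n_periods, tau = 60, 8, 1.5
adopt = rng.integers(2, n_periods, size=n_units)        # staggered adoption times
unit_fe = rng.standard_normal(n_units)
time_fe = rng.standard_normal(n_periods)

rows = []
for i in range(n_units):
    for s in range(n_periods):
        d = 1.0 if s >= adopt[i] else 0.0
        y = unit_fe[i] + time_fe[s] + tau * d + 0.5 * rng.standard_normal()
        rows.append((i, s, d, y))
rows = np.array(rows)

# Design matrix: intercept, treatment dummy, unit dummies, time dummies (drop one of each).
D = rows[:, 2:3]
U = (rows[:, 0:1] == np.arange(1, n_units)).astype(float)
S = (rows[:, 1:2] == np.arange(1, n_periods)).astype(float)
Xmat = np.hstack([np.ones((len(rows), 1)), D, U, S])
beta, *_ = np.linalg.lstsq(Xmat, rows[:, 3], rcond=None)
print("TWFE estimate of the treatment effect:", beta[1])   # close to tau = 1.5
```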

In backward error analysis, an approximate solution to an equation is compared to the exact solution to a nearby "modified" equation. In numerical ordinary differential equations, the two agree up to any power of the step size. If the differential equation has a geometric property then the modified equation may share it. In this way, known properties of differential equations can be applied to the approximation. But for partial differential equations, the known modified equations are of higher order, limiting applicability of the theory. Therefore, we study symmetric solutions of discretized partial differential equations that arise from a discrete variational principle. These symmetric solutions obey infinite-dimensional functional equations. We show that these equations admit second-order modified equations which are Hamiltonian and also possess first-order Lagrangians in modified coordinates. The modified equation and its associated structures are computed explicitly for the case of rotating travelling waves in the nonlinear wave equation.
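
A classical finite-dimensional illustration of the idea, outside the PDE setting of this paper, is the symplectic Euler method applied to the harmonic oscillator: the numerical map exactly conserves a modified Hamiltonian that differs from the original one at first order in the step size. The sketch below checks this numerically; the step size and initial data are arbitrary assumptions.

```python
# Backward error analysis, ODE illustration: for H = (p^2 + q^2)/2, symplectic Euler
#   p_{n+1} = p_n - h q_n,   q_{n+1} = q_n + h p_{n+1}
# exactly conserves the modified Hamiltonian H_h = (p^2 + q^2)/2 - (h/2) p q,
# while the original H only oscillates at O(h).
h, steps = 0.1, 10000
p, q = 1.0, 0.0
H_mod = lambda p, q: 0.5 * (p * p + q * q) - 0.5 * h * p * q
H0, Hm0 = 0.5 * (p * p + q * q), H_mod(p, q)
drift_H, drift_Hm = 0.0, 0.0
for _ in range(steps):
    p = p - h * q
    q = q + h * p
    drift_H = max(drift_H, abs(0.5 * (p * p + q * q) - H0))
    drift_Hm = max(drift_Hm, abs(H_mod(p, q) - Hm0))
print("max drift of H  :", drift_H)    # O(h)
print("max drift of H_h:", drift_Hm)   # near machine precision
```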

In this paper we study some theoretical and numerical issues of the Boussinesq/Full dispersion system. This is a three-parameter system of PDEs that models the propagation of internal waves along the interface of two fluid layers with a rigid-lid condition for the upper layer, under a Boussinesq regime for the upper layer and a full dispersion regime for the lower layer. We first discretize in space the periodic initial-value problem with a Fourier-Galerkin spectral method and prove error estimates for several ranges of values of the parameters. Solitary waves of the model systems are then studied numerically in several ways. The numerical generation is analyzed by approximating the ODE system with periodic boundary conditions for the solitary-wave profiles with a Fourier spectral scheme, implemented in collocation form, and solving iteratively the corresponding algebraic system in Fourier space with the Petviashvili method accelerated with the minimal polynomial extrapolation technique. Motivated by the numerical results, a new result on the existence of solitary waves is proved. In the last part of the paper, the dynamics of these solitary waves is studied computationally. To this end, the semidiscrete systems obtained from the Fourier-Galerkin discretization in space are integrated numerically in time by a fourth-order Runge-Kutta composition method. The fully discrete scheme is used to explore numerically the stability of solitary waves, their collisions, and the resolution of other initial conditions into solitary waves.
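
The Petviashvili iteration mentioned above can be illustrated on a much simpler scalar model than the Boussinesq/Full dispersion system: the solitary-wave profile equation $cu - u_{xx} = u^2$, whose exact solution is $u(x)=\tfrac{3c}{2}\,\mathrm{sech}^2(\sqrt{c}\,x/2)$. The sketch below uses a periodic Fourier collocation grid and the standard stabilizing factor, without the extrapolation acceleration; the grid size, speed, and tolerance are illustrative assumptions.

```python
import numpy as np

# Petviashvili iteration for c*u - u_xx = u^2 on a periodic Fourier grid.
N, Lhalf, c = 512, 40.0, 1.0
x = np.linspace(-Lhalf, Lhalf, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * Lhalf / N)
Lhat = c + k ** 2                                    # symbol of c - d^2/dx^2

u = np.exp(-x ** 2)                                  # initial guess
for it in range(200):
    Nhat = np.fft.fft(u ** 2)
    uhat = np.fft.fft(u)
    # stabilizing factor S = <L u, u> / <N(u), u>, squared since N is quadratic
    S = np.real(np.vdot(uhat, Lhat * uhat)) / np.real(np.vdot(uhat, Nhat))
    u_new = np.real(np.fft.ifft(S ** 2 * Nhat / Lhat))
    if np.max(np.abs(u_new - u)) < 1e-12:
        u = u_new
        break
    u = u_new

exact = 1.5 * c / np.cosh(0.5 * np.sqrt(c) * x) ** 2
print("iterations:", it + 1, " max error vs exact profile:", np.max(np.abs(u - exact)))
```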

Dynamic mode decomposition (DMD) is an emerging methodology that has recently attracted computational scientists working on nonintrusive reduced order modeling. One of the major strengths of DMD is that it has solid theoretical roots in Koopman approximation theory. Indeed, DMD may be viewed as the data-driven realization of the famous Koopman operator. Nonetheless, the stable implementation of DMD requires computing the singular value decomposition of the input data matrix. This, in turn, makes the process computationally demanding for high dimensional systems. In order to alleviate this burden, we develop a framework based on sketching methods, wherein a sketch of a matrix is simply another matrix which is significantly smaller, but still sufficiently approximates the original system. Such sketching or embedding is performed by applying random transformations, with certain properties, on the input matrix to yield a compressed version of the initial system. Hence, many of the expensive computations can be carried out on the smaller matrix, thereby accelerating the solution of the original problem. We conduct numerical experiments using the spherical shallow water equations as a prototypical model in the context of geophysical flows. The performance of several sketching approaches is evaluated for capturing the range and co-range of the data matrix. The proposed sketching-based framework can accelerate various portions of the DMD algorithm, compared to classical methods that operate directly on the raw input data. This eventually leads to substantial computational gains that are vital for digital twinning of high dimensional systems.
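
A minimal sketch of the sketching idea on top of exact DMD is given below: a Gaussian test matrix compresses the snapshots, the SVD and the reduced operator are computed in the compressed coordinates, and the modes are lifted back at the end. The toy snapshot data, sketch size, and rank are illustrative assumptions, not the spherical shallow water experiments of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, sketch = 4000, 200, 20                       # state dim, snapshots, sketch size
xs = np.linspace(0.0, 1.0, n)[:, None]
ts = np.arange(m)[None, :]
freqs = [0.2, 0.5, 0.9]
X = np.zeros((n, m))
for j, w in enumerate(freqs):                      # exactly linear, rank-6 dynamics
    X += np.sin((2 * j + 1) * np.pi * xs) * np.cos(w * ts) \
       + np.sin((2 * j + 2) * np.pi * xs) * np.sin(w * ts)
X1, X2 = X[:, :-1], X[:, 1:]                       # standard DMD snapshot pairs
r = 2 * len(freqs)

# Range sketch: orthonormal basis Q for the range of X1 * Omega.
Omega = rng.standard_normal((m - 1, sketch))
Q, _ = np.linalg.qr(X1 @ Omega)

# Classical exact-DMD steps, but on the compressed snapshots Q^T X.
Y1, Y2 = Q.T @ X1, Q.T @ X2
U, s, Vt = np.linalg.svd(Y1, full_matrices=False)
U, s, Vt = U[:, :r], s[:r], Vt[:r, :]
Atilde = U.T @ Y2 @ Vt.T @ np.diag(1.0 / s)
eigvals, W = np.linalg.eig(Atilde)
modes = Q @ (Y2 @ Vt.T @ np.diag(1.0 / s) @ W)     # lifted DMD modes
print("mode matrix shape:", modes.shape)
print("DMD frequencies:", np.sort(np.abs(np.angle(eigvals))))   # ~ 0.2, 0.5, 0.9 (twice each)
```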

In the $(1+\varepsilon,r)$-approximate near-neighbor problem for curves (ANNC) under some distance measure $\delta$, the goal is to construct a data structure for a given set $\mathcal{C}$ of curves that supports approximate near-neighbor queries: Given a query curve $Q$, if there exists a curve $C\in\mathcal{C}$ such that $\delta(Q,C)\le r$, then return a curve $C'\in\mathcal{C}$ with $\delta(Q,C')\le(1+\varepsilon)r$. There exists an efficient reduction from the $(1+\varepsilon)$-approximate nearest-neighbor problem to ANNC, where in the former problem the answer to a query is a curve $C\in\mathcal{C}$ with $\delta(Q,C)\le(1+\varepsilon)\cdot\delta(Q,C^*)$, where $C^*$ is the curve of $\mathcal{C}$ closest to $Q$. Given a set $\mathcal{C}$ of $n$ curves, each consisting of $m$ points in $d$ dimensions, we construct a data structure for ANNC that uses $n\cdot O(\frac{1}{\varepsilon})^{md}$ storage space and has $O(md)$ query time (for a query curve of length $m$), where the similarity between two curves is their discrete Fr\'echet or dynamic time warping distance. Our method is simple to implement, deterministic, and results in an exponential improvement in both query time and storage space compared to all previous bounds. Further, we also consider the asymmetric version of ANNC, where the length of the query curves is $k \ll m$, and obtain essentially the same storage and query bounds as above, except that $m$ is replaced by $k$. Finally, we apply our method to a version of approximate range counting for curves and achieve similar bounds.
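
To fix the query semantics, the sketch below computes the discrete Fréchet distance by the standard dynamic program and answers approximate near-neighbor queries by brute force; it illustrates only the $(1+\varepsilon,r)$ guarantee and is not the grid-based data structure constructed in the paper. The curves, $r$, and $\varepsilon$ are assumptions.

```python
import numpy as np

def discrete_frechet(P, Q):
    # Standard O(|P||Q|) dynamic program for the discrete Frechet distance.
    p, q = len(P), len(Q)
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    F = np.full((p, q), np.inf)
    F[0, 0] = D[0, 0]
    for i in range(p):
        for j in range(q):
            if i == 0 and j == 0:
                continue
            prev = min(F[i - 1, j] if i > 0 else np.inf,
                       F[i, j - 1] if j > 0 else np.inf,
                       F[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            F[i, j] = max(prev, D[i, j])
    return F[-1, -1]

def annc_query(curves, Q, r, eps):
    # Return some curve within (1+eps)*r of Q if one within r exists, else None.
    for C in curves:
        if discrete_frechet(C, Q) <= (1.0 + eps) * r:
            return C
    return None

rng = np.random.default_rng(3)
curves = [np.cumsum(rng.standard_normal((20, 2)), axis=0) for _ in range(50)]
Q = curves[7] + 0.05 * rng.standard_normal((20, 2))             # perturbed copy
hit = annc_query(curves, Q, r=0.2, eps=0.5)
print("found a near neighbor:", hit is not None)
```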

In this article, we propose a higher-order approximation to the Caputo fractional (C-F) derivative using a graded mesh, together with standard central difference approximations for the space derivatives, in order to obtain the approximate solution of time-fractional partial differential equations (TFPDEs). The proposed approximation for the C-F derivative tackles the singularity at the origin effectively and is easily applicable to diverse problems. The stability analysis and truncation error bounds of the proposed scheme are discussed, along with the required regularity of the solution. A few numerical examples are presented to support the theory.
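
For orientation, the sketch below evaluates the Caputo derivative on a graded mesh with the standard first-order L1 weights, which shows how grading clusters the points near the singularity at the origin; the abstract's approximation is of higher order, and the test function and grading exponent here are illustrative assumptions.

```python
import numpy as np
from math import gamma

# L1 approximation of the Caputo derivative of order alpha on a graded mesh.
alpha, T, N, grade = 0.5, 1.0, 200, 2.0
t = T * (np.arange(N + 1) / N) ** grade          # graded mesh, dense near t = 0
u = t ** 2                                       # test function
exact = 2.0 * t[-1] ** (2.0 - alpha) / gamma(3.0 - alpha)   # D^alpha t^2 at t = T

# D^alpha u(t_N) ~ 1/Gamma(1-alpha) * sum_k (u_{k+1}-u_k)/(t_{k+1}-t_k)
#                  * [ (t_N - t_k)^{1-alpha} - (t_N - t_{k+1})^{1-alpha} ] / (1-alpha)
acc = 0.0
for k in range(N):
    slope = (u[k + 1] - u[k]) / (t[k + 1] - t[k])
    acc += slope * ((t[-1] - t[k]) ** (1 - alpha) - (t[-1] - t[k + 1]) ** (1 - alpha))
approx = acc / (gamma(1.0 - alpha) * (1.0 - alpha))
print("L1 on graded mesh:", approx, " exact:", exact, " error:", abs(approx - exact))
```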

In order to avoid the curse of dimensionality, frequently encountered in Big Data analysis, there has been vast development in the field of linear and nonlinear dimension reduction techniques in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lies on a lower dimensional manifold, thus the high dimensionality problem can be overcome by learning the lower dimensional behavior. However, in real life applications, data is often very noisy. In this work, we propose a method to approximate $\mathcal{M}$, a $d$-dimensional $C^{m+1}$ smooth submanifold of $\mathbb{R}^n$ ($d \ll n$), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located "near" the lower dimensional manifold and suggest a non-linear moving least-squares projection onto an approximating $d$-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (i.e., $O(h^{m+1})$, where $h$ is the fill distance and $m$ is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension $n$. Furthermore, the approximating manifold can serve as a framework to perform operations directly on the high dimensional data in a computationally efficient manner. This way, the preparatory step of dimension reduction, which induces distortions to the data, can be avoided altogether.
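
A minimal sketch of a moving least-squares-type projection, for the simplest case of a one-dimensional manifold ($d=1$) embedded in $\mathbb{R}^2$, is given below: locality weights define a weighted mean and a local frame, and a weighted polynomial fit from tangent to normal coordinates supplies the projected point. The weight scale, polynomial degree, and data are illustrative assumptions, not the full algorithm analyzed above.

```python
import numpy as np

rng = np.random.default_rng(4)
theta = rng.uniform(0.0, 2.0 * np.pi, 2000)
data = np.c_[np.cos(theta), np.sin(theta)] + 0.03 * rng.standard_normal((2000, 2))

def mls_project(q, pts, h=0.25, degree=2):
    w = np.exp(-np.sum((pts - q) ** 2, axis=1) / h ** 2)        # locality weights
    mu = (w[:, None] * pts).sum(axis=0) / w.sum()               # weighted mean
    C = (w[:, None, None] * (pts - mu)[:, :, None] * (pts - mu)[:, None, :]).sum(axis=0)
    evals, evecs = np.linalg.eigh(C)
    tangent, normal = evecs[:, -1], evecs[:, 0]                 # local frame (d = 1)
    s = (pts - mu) @ tangent                                    # tangent coordinates
    v = (pts - mu) @ normal                                     # normal coordinates
    V = np.vander(s, degree + 1)                                # weighted poly fit v ~ g(s)
    coef = np.linalg.lstsq(V * np.sqrt(w)[:, None], v * np.sqrt(w), rcond=None)[0]
    s_q = (q - mu) @ tangent
    return mu + s_q * tangent + np.polyval(coef, s_q) * normal

q_noisy = np.array([1.1, 0.15])                                 # noisy query point
q_proj = mls_project(q_noisy, data)
print("projected point:", q_proj, " |radius - 1| =", abs(np.linalg.norm(q_proj) - 1.0))
```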

In two-phase image segmentation, convex relaxation has allowed global minimisers to be computed for a variety of data fitting terms. Many efficient approaches exist to compute a solution quickly. However, we consider whether the nature of the data fitting in this formulation allows reasonable assumptions to be made about the solution that can improve the computational performance further. In particular, we employ a well-known dual formulation of this problem and solve the corresponding equations in a restricted domain. We present experimental results that explore the dependence of the solution on this restriction and quantify improvements in the computational performance. This approach can be extended straightforwardly to analogous methods and could provide an efficient alternative for problems of this type.
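
For context, the sketch below solves the convex-relaxed two-phase model on the full domain with a standard primal-dual (Chambolle-Pock) iteration; this is the kind of baseline that a restricted-domain solve would accelerate, not the restricted-domain method itself. The synthetic image, fidelity weight, and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64
yy, xx = np.mgrid[0:n, 0:n]
truth = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 4) ** 2).astype(float)
img = truth + 0.3 * rng.standard_normal((n, n))

c1, c2, lam = 1.0, 0.0, 1.5
f = (img - c1) ** 2 - (img - c2) ** 2         # pointwise data-fitting term

def grad(u):                                  # forward differences, Neumann boundary
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):                              # negative adjoint of grad
    d = np.zeros_like(px)
    d[:, 0] += px[:, 0];  d[:, 1:] += px[:, 1:] - px[:, :-1]
    d[0, :] += py[0, :];  d[1:, :] += py[1:, :] - py[:-1, :]
    return d

u = 0.5 * np.ones((n, n)); ubar = u.copy()
px = np.zeros((n, n)); py = np.zeros((n, n))
sigma = tau = 0.25                            # sigma * tau * ||grad||^2 <= 1
for _ in range(300):
    gx, gy = grad(ubar)
    px, py = px + sigma * gx, py + sigma * gy
    scale = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2) / lam)
    px, py = px / scale, py / scale           # project onto |p| <= lam
    u_old = u
    u = np.clip(u + tau * (div(px, py) - f), 0.0, 1.0)
    ubar = 2.0 * u - u_old
print("misclassified pixels after thresholding:", np.sum((u > 0.5) != (truth > 0.5)))
```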
