
We present a novel numerical method, accurate and provably stable, for solving the anisotropic diffusion equation in magnetic fields confined to a periodic box. We derive energy estimates for the solution of the continuous initial boundary value problem. A discrete formulation is presented using operator splitting in time, with a summation-by-parts finite difference approximation of the spatial derivatives for the perpendicular diffusion operator. Weak penalty procedures are derived for implementing both the boundary conditions and the parallel diffusion operator obtained by field line tracing. We prove that the fully discrete approximation is unconditionally stable. The discrete energy estimates are shown to match the continuous energy estimate given the correct choice of penalty parameters. A nonlinear penalty parameter is shown to provide an effective means of tuning the parallel diffusion penalty and significantly reduces rounding errors. Several numerical experiments, using manufactured solutions, the ``NIMROD benchmark'' problem and a single-island problem, are presented to verify the numerical accuracy, convergence, and asymptotic-preserving properties of the method. Finally, we present a magnetic field with chaotic regions and islands and show that contours of the solution of the anisotropic diffusion equation reproduce key features of the field.
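To make the summation-by-parts (SBP) structure concrete, the Python sketch below builds the standard second-order accurate SBP first-derivative operator $D = H^{-1}Q$ on a uniform grid and checks the SBP property $Q + Q^\top = \mathrm{diag}(-1,0,\dots,0,1)$, which is what allows discrete energy estimates to mimic the continuous ones. This is a textbook construction shown for illustration, not the operators used in the paper.

```python
import numpy as np

def sbp_first_derivative(n, h):
    """Standard 2nd-order SBP first-derivative operator D = H^{-1} Q
    on n grid points with spacing h (a textbook construction, not the
    paper's operators)."""
    # Diagonal norm (quadrature) matrix H.
    Hd = np.full(n, h)
    Hd[0] = Hd[-1] = h / 2.0
    H = np.diag(Hd)
    # Q is skew-symmetric except for its two boundary entries.
    Q = np.zeros((n, n))
    for i in range(n - 1):
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0] = -0.5
    Q[-1, -1] = 0.5
    return np.linalg.inv(H) @ Q, H, Q

n, h = 11, 0.1
D, H, Q = sbp_first_derivative(n, h)
B = Q + Q.T  # SBP property: Q + Q^T = diag(-1, 0, ..., 0, 1)
assert np.allclose(B, np.diag([-1.0] + [0.0] * (n - 2) + [1.0]))
x = np.linspace(0.0, 1.0, n)
print(np.max(np.abs(D @ x - 1.0)))  # exact for linear functions
```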

Related content

For boundary value problems of elliptic equations with variable coefficients describing the physical field distribution in inhomogeneous media, the Levi function can represent the solution in terms of volume and surface potentials, with the drawback that the volume potential appearing in the solution expression incurs heavy computational costs, as does solving the integral equations for the density pair. We introduce a modified integral expression for the solution to an elliptic equation in divergence form under the Levi function framework. The well-posedness of the linear integral system with respect to the density functions to be determined is rigorously proved. Based on a singularity decomposition for the Levi function, we propose two schemes to deal with the volume integrals so that the density functions can be computed efficiently. One is an adaptive discretization scheme (ADS) for computing the integrals with continuous integrands, leading to uniform accuracy of the integrals in the whole domain and consequently to efficient computation of the density functions. The other is the dual reciprocity method (DRM), a meshless approach that converts the volume integrals into equivalent boundary integrals by expressing the volume density as a combination of radial basis functions determined by the interior grids. The proposed schemes are shown numerically to have satisfactory computational costs. Numerical examples in two and three dimensions are presented to show the validity of the proposed schemes.
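To illustrate the expansion step underlying the dual reciprocity method, the following Python sketch fits a volume density on scattered interior nodes as a combination of radial basis functions; the multiquadric basis, shape parameter and node set are illustrative choices for the example, not the paper's.

```python
import numpy as np

def rbf_fit(nodes, f_vals, c=0.5):
    """Fit f(x) ~ sum_j alpha_j * phi(|x - x_j|) on scattered nodes,
    using multiquadric RBFs phi(r) = sqrt(r^2 + c^2). This is the
    DRM-style expansion of a volume density; basis and shape
    parameter c are illustrative choices."""
    r = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    A = np.sqrt(r**2 + c**2)        # interpolation matrix phi(|x_i - x_j|)
    return np.linalg.solve(A, f_vals)

def rbf_eval(x, nodes, alpha, c=0.5):
    r = np.linalg.norm(x[None, :] - nodes, axis=-1)
    return np.sqrt(r**2 + c**2) @ alpha

rng = np.random.default_rng(0)
nodes = rng.uniform(-1.0, 1.0, size=(200, 2))   # interior grid points
f = lambda p: np.exp(-p[:, 0]**2 - p[:, 1]**2)  # sample volume density
alpha = rbf_fit(nodes, f(nodes))
x = np.array([0.2, -0.3])
print(rbf_eval(x, nodes, alpha), f(x[None, :])[0])  # fit vs. exact value
```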

We present new second-kind integral-equation formulations of the interior and exterior Dirichlet problems for Laplace's equation. The operators in these formulations are both continuous and coercive on general Lipschitz domains in $\mathbb{R}^d$, $d\geq 2$, in the space $L^2(\Gamma)$, where $\Gamma$ denotes the boundary of the domain. These properties of continuity and coercivity immediately imply that (i) the Galerkin method converges when applied to these formulations; and (ii) the Galerkin matrices are well-conditioned as the discretisation is refined, without the need for operator preconditioning (and we prove a corresponding result about the convergence of GMRES). The main significance of these results is that it was recently proved (see Chandler-Wilde and Spence, Numer. Math., 150(2):299-371, 2022) that there exist 2- and 3-d Lipschitz domains and 3-d starshaped Lipschitz polyhedra for which the operators in the standard second-kind integral-equation formulations for Laplace's equation (involving the double-layer potential and its adjoint) $\textit{cannot}$ be written as the sum of a coercive operator and a compact operator in the space $L^2(\Gamma)$. Therefore there exist 2- and 3-d Lipschitz domains and 3-d starshaped Lipschitz polyhedra for which Galerkin methods in $L^2(\Gamma)$ do $\textit{not}$ converge when applied to the standard second-kind formulations, but $\textit{do}$ converge for the new formulations.
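For context, standard arguments link these two properties to the stated conclusions: writing $A$ for the integral operator, $\gamma$ for the coercivity constant (of the real part of the sesquilinear form) and $C$ for the continuity constant on $L^2(\Gamma)$, Céa's lemma gives quasi-optimality of the Galerkin solution $u_N$,
\[
  \|u - u_N\|_{L^2(\Gamma)} \;\le\; \frac{C}{\gamma}\, \min_{v_N \in V_N} \|u - v_N\|_{L^2(\Gamma)},
\]
and an Elman-type estimate bounds the GMRES residuals,
\[
  \frac{\|r_k\|}{\|r_0\|} \;\le\; \Bigl(1 - \frac{\gamma^2}{C^2}\Bigr)^{k/2}.
\]
These are generic textbook bounds, not necessarily the sharpest constants proved in the paper.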

Approximating invariant subspaces of generalized eigenvalue problems (GEPs) is a fundamental computational problem at the core of machine learning and scientific computing. It is, for example, the root of Principal Component Analysis (PCA) for dimensionality reduction, data visualization, and noise filtering, and of Density Functional Theory (DFT), arguably the most popular method for calculating the electronic structure of materials. For a Hermitian definite GEP $HC=SC\Lambda$, let $\Pi_k$ be the true spectral projector on the invariant subspace that is associated with the $k$ smallest (or largest) eigenvalues. Given $H$, $S$, an integer $k$, and accuracy $\varepsilon\in(0,1)$, we show that we can compute a matrix $\widetilde\Pi_k$ such that $\lVert\Pi_k-\widetilde\Pi_k\rVert_2\leq \varepsilon$, in $O\left( n^{\omega+\eta}\mathrm{polylog}(n,\varepsilon^{-1},\kappa(S),\mathrm{gap}_k^{-1}) \right)$ bit operations in the floating point model with probability $1-1/n$. Here, $\eta>0$ is arbitrarily small, $\omega\lesssim 2.372$ is the matrix multiplication exponent, $\kappa(S)=\lVert S\rVert_2\lVert S^{-1}\rVert_2$, and $\mathrm{gap}_k$ is the gap between eigenvalues $k$ and $k+1$. To the best of our knowledge, this is the first end-to-end analysis achieving such "forward-error" approximation guarantees with nearly $O(n^{\omega+\eta})$ bit complexity, improving on classical $\widetilde O(n^3)$ eigensolvers, even in the regular case $(S=I)$. Our methods rely on a new $O(n^{\omega+\eta})$ stability analysis for the Cholesky factorization and a new smoothed analysis for computing spectral gaps, both of which may be of independent interest. Ultimately, we obtain new matrix multiplication-type bit complexity upper bounds for PCA problems, including classical PCA and (randomized) low-rank approximation.
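As a dense $O(n^3)$ reference for the object being approximated (not the paper's fast algorithm), the Python sketch below computes a spectral projector of a Hermitian definite pencil from the $S$-orthonormal eigenvectors returned by scipy.linalg.eigh; the normalization $\Pi_k = C_k C_k^{*} S$ is one common convention and may differ from the paper's.

```python
import numpy as np
from scipy.linalg import eigh

def spectral_projector(H, S, k):
    """Dense reference computation of the spectral projector onto the
    invariant subspace of the pencil (H, S) associated with the k
    smallest eigenvalues. eigh returns S-orthonormal eigenvectors
    (C^* S C = I), so Pi_k = C_k C_k^* S satisfies Pi_k^2 = Pi_k.
    This is an O(n^3) baseline, not the paper's fast algorithm."""
    _, C = eigh(H, S)          # eigenvalues in ascending order
    Ck = C[:, :k]
    return Ck @ Ck.conj().T @ S

rng = np.random.default_rng(1)
n, k = 50, 5
A = rng.standard_normal((n, n)); H = (A + A.T) / 2            # Hermitian
B = rng.standard_normal((n, n)); S = B @ B.T + n * np.eye(n)  # SPD
Pi = spectral_projector(H, S, k)
print(np.linalg.norm(Pi @ Pi - Pi))  # ~ 0: Pi is indeed a projector
```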

We develop further the theory of monoidal bicategories by introducing and studying bicategorical counterparts of the notions of a linear exponential comonad, as considered in the study of linear logic, and of a codereliction transformation, introduced to study differential linear logic via differential categories. As an application, we extend the differential calculus of Joyal's analytic functors to analytic functors between presheaf categories, just as ordinary calculus extends from a single variable to many variables.

Bounds on the smallest eigenvalue of the neural tangent kernel (NTK) are a key ingredient in the analysis of neural network optimization and memorization. However, existing results require distributional assumptions on the data and are limited to a high-dimensional setting, where the input dimension $d_0$ scales at least logarithmically in the number of samples $n$. In this work we remove both of these requirements and instead provide bounds in terms of a measure of the collinearity of the data: notably, these bounds hold with high probability even when $d_0$ is held constant as $n$ grows. We prove our results through a novel application of the hemisphere transform.
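For concreteness, the numpy sketch below assembles the empirical NTK Gram matrix of a two-layer ReLU network (differentiating with respect to the hidden-layer weights only) and reports its smallest eigenvalue; the architecture, scaling and data here are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def ntk_gram(X, W, a):
    """Empirical NTK Gram matrix of the two-layer ReLU network
    f(x) = (1/sqrt(m)) * sum_r a_r * relu(w_r . x), with respect to
    the hidden-layer weights W (m x d0). Since
    grad_{w_r} f(x) = (1/sqrt(m)) * a_r * 1{w_r.x > 0} * x, we get
    K_ij = (x_i.x_j / m) * sum_r a_r^2 1{w_r.x_i>0} 1{w_r.x_j>0}."""
    act = (X @ W.T > 0).astype(float)          # n x m activation patterns
    return (X @ X.T) * ((act * a**2) @ act.T) / W.shape[0]

rng = np.random.default_rng(0)
n, d0, m = 100, 5, 2000                        # d0 fixed and small vs n
X = rng.standard_normal((n, d0))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm inputs
W = rng.standard_normal((m, d0))
a = rng.choice([-1.0, 1.0], size=m)
K = ntk_gram(X, W, a)
print(np.linalg.eigvalsh(K)[0])  # smallest NTK eigenvalue
```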

This work proposes a novel variational approximation of partial differential equations on moving geometries determined by explicit boundary representations. The benefit of the proposed formulation is the ability to handle large displacements of explicitly represented domain boundaries without body-fitted meshes or remeshing techniques. For the space discretization, we use a background mesh and an unfitted method that relies on integration on cut cells only. We perform this intersection by using clipping algorithms. To deal with the mesh movement, we pull back the equations to a reference configuration (the spatial mesh at the initial time slab times the time interval) that is constant in time. This way, the geometrical intersection algorithm is only required in 3D, another key property of the proposed scheme. At the end of each time slab, we compute the deformed mesh, intersect the deformed boundary with the background mesh, and apply an exact transfer operator between meshes to compute the jump terms in the discontinuous Galerkin time integration. The transfer is also computed using geometrical intersection algorithms. We demonstrate the applicability of the method to fluid problems around rotating (2D and 3D) geometries described by oriented boundary meshes. We also provide a set of numerical experiments that show the optimal convergence of the method.
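The cut-cell intersection step can be illustrated with the classical Sutherland-Hodgman algorithm for clipping a polygon against a convex background cell; this is a generic 2D clipping routine shown for illustration, not necessarily the clipping machinery used by the authors.

```python
def clip_polygon(subject, clip):
    """Sutherland-Hodgman: clip a polygon against a convex polygon
    (both given as CCW vertex lists). Illustrates the cell/boundary
    intersection step of unfitted methods; the paper's actual clipping
    machinery may differ."""
    def inside(p, a, b):  # p lies left of the directed edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0.0
    def intersect(p, q, a, b):  # segment pq with the infinite line ab
        d1 = (p[0]-q[0], p[1]-q[1]); d2 = (a[0]-b[0], a[1]-b[1])
        c1 = p[0]*q[1] - p[1]*q[0]; c2 = a[0]*b[1] - a[1]*b[0]
        det = d1[0]*d2[1] - d1[1]*d2[0]
        return ((c1*d2[0] - d1[0]*c2) / det, (c1*d2[1] - d1[1]*c2) / det)
    out = subject
    for a, b in zip(clip, clip[1:] + clip[:1]):
        inp, out = out, []
        for p, q in zip(inp, inp[1:] + inp[:1]):
            if inside(q, a, b):
                if not inside(p, a, b):
                    out.append(intersect(p, q, a, b))
                out.append(q)
            elif inside(p, a, b):
                out.append(intersect(p, q, a, b))
    return out

cell = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # background cell
tri = [(0.5, -0.5), (1.5, 0.5), (0.5, 1.5)]              # domain piece
print(clip_polygon(tri, cell))  # vertices of the resulting "cut cell"
```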

We propose a simple, explicit new numerical scheme for time-homogeneous stochastic differential equations. The scheme is based on sampling increments at each time step from a skew-symmetric probability distribution, with the level of skewness determined by the drift and volatility of the underlying process. We show that, as the step-size decreases, the scheme converges weakly to the diffusion of interest. We then consider the problem of simulating from the limiting distribution of an ergodic diffusion process using the numerical scheme with a fixed step-size. We establish conditions under which the numerical scheme converges to equilibrium at a geometric rate, and quantify the bias between the equilibrium distributions of the scheme and of the true diffusion process. Notably, our results do not require a global Lipschitz assumption on the drift, in contrast to those required for the Euler--Maruyama scheme for long-time simulation at fixed step-sizes. Our weak convergence result relies on an extension of the theory of Milstein \& Tretyakov to stochastic differential equations with non-Lipschitz drift, which could also be of independent interest. We support our theoretical results with numerical simulations.
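For reference, here is a minimal Euler-Maruyama integrator, the comparison scheme named above; the proposed scheme instead draws its increments from a skew-symmetric distribution whose construction is given in the paper and is not reproduced here.

```python
import numpy as np

def euler_maruyama(b, sigma, x0, h, n_steps, rng):
    """Euler-Maruyama for dX_t = b(X_t) dt + sigma(X_t) dW_t.
    This is the classical baseline mentioned in the abstract, not the
    proposed skew-symmetric scheme."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.standard_normal() * np.sqrt(h)
        x[i + 1] = x[i] + b(x[i]) * h + sigma(x[i]) * dw
    return x

# Ornstein-Uhlenbeck test case: b(x) = -x, sigma = 1 (globally Lipschitz
# drift, so Euler-Maruyama is well behaved at fixed step-sizes here).
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: -x, lambda x: 1.0, 0.0, 0.01, 10_000, rng)
print(path.mean(), path.var())  # equilibrium is N(0, 1/2) for this OU process
```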

This paper addresses the approximation of the mean curvature flow of thin structures for which classical phase field methods are not suitable. By thin structures, we mean surfaces that are not domain boundaries, typically higher-codimension objects such as 1D curves in 3D, i.e. filaments, or soap films spanning a boundary curve. To approximate the mean curvature flow of such surfaces, we consider a small thickening of them and apply to the thickened set an evolution model that combines the classical Allen-Cahn equation with a penalty term that takes on larger values around the skeleton of the set. The novelty of our approach lies in the definition of this penalty term, which guarantees a minimal thickness of the evolving set and prevents it from disappearing unexpectedly. We prove a few theoretical properties of our model, provide examples showing the connection with higher-codimension mean curvature flow, and introduce a quasi-static numerical scheme with explicit integration of the penalty term. We illustrate the numerical efficiency of the model with accurate approximations of filament structures evolving by mean curvature flow, and we also illustrate its ability to find complex 3D approximations of solutions to the Steiner problem or the Plateau problem.
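The classical Allen-Cahn building block of the model can be sketched as follows (explicit time stepping on a periodic 2D grid); the skeleton-based penalty term, which is the paper's contribution, is deliberately omitted.

```python
import numpy as np

def allen_cahn_step(u, eps, dt, h):
    """One explicit step of the classical Allen-Cahn equation
    u_t = Laplacian(u) - W'(u)/eps^2 with double well W(u) = (1-u^2)^2/4,
    on a periodic grid with spacing h. The paper's skeleton-based penalty
    term is omitted; this is only the standard building block."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2
    return u + dt * (lap - (u**3 - u) / eps**2)

n, eps = 128, 0.04
h = 1.0 / n
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.sqrt((X - 0.5)**2 + (Y - 0.5)**2)
u = np.tanh((0.25 - r) / (np.sqrt(2.0) * eps))  # phase-field disk profile
dt = 0.1 * h**2                                  # explicit stability limit
for _ in range(2000):
    u = allen_cahn_step(u, eps, dt, h)
# the interface (u = 0 level set) shrinks, consistent with curvature flow
```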

Characteristic formulae give a complete logical description of the behaviour of processes modulo some chosen notion of behavioural semantics. They allow one to reduce equivalence or preorder checking to model checking, and are exactly the formulae in the modal logics characterizing classic behavioural equivalences and preorders for which model checking can be reduced to equivalence or preorder checking. This paper studies the complexity of determining whether a formula is characteristic for some finite, loop-free process in each of the logics providing modal characterizations of the simulation-based semantics in van Glabbeek's branching-time spectrum. Since characteristic formulae in each of those logics are exactly the consistent and prime ones, it presents complexity results for the satisfiability and primality problems, and investigates the boundary between modal logics for which those problems can be solved in polynomial time and those for which they become computationally hard. Amongst other contributions, this article also studies the complexity of constructing characteristic formulae in the modal logics characterizing simulation-based semantics, both when such formulae are presented in explicit form and via systems of equations.

Multivariate probabilistic verification is concerned with the evaluation of joint probability distributions of vector quantities, such as a weather variable at multiple locations or a wind vector. The logarithmic score is a proper score that is useful in this context. In order to apply this score to ensemble forecasts, a choice of density is required. Here, we are interested in the specific case where the density is multivariate normal, with mean and covariance given by the ensemble mean and ensemble covariance, respectively. Under the assumptions of multivariate normality and exchangeability of the ensemble members, a relationship is derived which describes how the logarithmic score depends on ensemble size. It permits estimation of the score in the limit of infinite ensemble size from a small ensemble, and thus yields a fair logarithmic score for multivariate ensemble forecasts under the assumption of normality. This generalises a study from 2018 which derived the ensemble-size adjustment of the logarithmic score in the univariate case. An application to medium-range forecasts examines the usefulness of the ensemble-size adjustments when multivariate normality is only an approximation. Predictions of vectors consisting of several different combinations of upper-air variables are considered. Logarithmic scores are calculated for these vectors using ECMWF's daily extended-range forecasts, which consist of a 100-member ensemble. The probabilistic forecasts of these vectors are verified against operational ECMWF analyses in the Northern mid-latitudes in autumn 2023. Scores are computed for ensemble sizes from 8 to 100. The fair logarithmic scores of ensembles with different cardinalities are very close, in contrast to the unadjusted scores, which decrease considerably with ensemble size. This provides evidence for the practical usefulness of the derived relationships.
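The score being adjusted is the negative log predictive density of a multivariate normal fitted from the ensemble, as in the minimal Python sketch below; the fair, ensemble-size-adjusted version follows the relationship derived in the paper and is not reproduced here.

```python
import numpy as np

def mvn_log_score(ens, obs):
    """Logarithmic score -log p(obs) for a multivariate normal whose
    mean and covariance are the ensemble mean and (unbiased) ensemble
    covariance. ens: (m, d) array of m members; obs: (d,) verification.
    The ensemble-size adjustment derived in the paper is not included."""
    m, d = ens.shape
    mu = ens.mean(axis=0)
    cov = np.cov(ens, rowvar=False)      # (d, d), divides by m - 1
    diff = obs - mu
    _, logdet = np.linalg.slogdet(cov)
    maha = diff @ np.linalg.solve(cov, diff)
    return 0.5 * (d * np.log(2.0 * np.pi) + logdet + maha)

rng = np.random.default_rng(0)
d, m = 4, 100                            # e.g. several upper-air variables
truth = rng.standard_normal(d)
ens = truth + 0.3 * rng.standard_normal((m, d))
print(mvn_log_score(ens, truth))
```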
