
An implicit variable-step BDF2 scheme is established for solving the space-fractional Cahn-Hilliard equation, involving the fractional Laplacian, which derives from a gradient flow in the negative-order Sobolev space $H^{-\alpha}$, $\alpha\in(0,1)$. The Fourier pseudo-spectral method is applied for the spatial approximation. The proposed scheme inherits the energy dissipation law in the form of a modified discrete energy under a sufficient restriction on the time-step ratios. The convergence of the fully discrete scheme is rigorously established using a newly proved discrete embedding-type convolution inequality for the fractional Laplacian. Besides, mass conservation and unique solvability are also theoretically guaranteed. Numerical experiments are carried out to demonstrate both the accuracy and the energy dissipation for various interface widths. In particular, the multiple-time-scale evolution of the solution is captured by an adaptive time-stepping strategy in short-to-long time simulations.
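
For orientation, a common form of the variable-step BDF2 operator and of the $H^{-\alpha}$ gradient-flow formulation is sketched below, assuming a standard Ginzburg-Landau double-well potential; the paper's precise formulation, notation, and step-ratio threshold may differ. With variable steps $\tau_n$ and ratios $r_n=\tau_n/\tau_{n-1}$,
\[
D_2 u^n = \frac{1+2r_n}{1+r_n}\,\frac{u^n-u^{n-1}}{\tau_n} - \frac{r_n^2}{1+r_n}\,\frac{u^{n-1}-u^{n-2}}{\tau_n},
\]
so that one time step of the implicit scheme reads
\[
D_2 u^n = -(-\Delta)^{\alpha}\mu^n, \qquad \mu^n = f(u^n) - \epsilon^2 \Delta u^n,
\]
where $f$ is the derivative of the double-well potential and $(-\Delta)^{\alpha}$ is diagonalized by the Fourier pseudo-spectral discretization.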

Related content

Discrete latent space models have recently achieved performance on par with their continuous counterparts in deep variational inference. While they still face various implementation challenges, these models offer the opportunity for a better interpretation of latent spaces, as well as a more direct representation of naturally discrete phenomena. Most recent approaches propose to separately train very high-dimensional prior models on the discrete latent data, which is a challenging task in its own right. In this paper, we introduce a latent data model where the discrete state is a Markov chain, which allows fast end-to-end training. The performance of our generative model is assessed on a building management dataset and on the publicly available Electricity Transformer Dataset.
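
To make the prior concrete, here is a minimal PyTorch sketch of a Markov-chain prior over a discrete latent sequence, assuming a $K$-state chain and one-hot (or relaxed) latent states; names such as `MarkovPrior` are illustrative, not from the paper.

```python
# Minimal sketch of a Markov-chain prior over a discrete latent sequence.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MarkovPrior(nn.Module):
    def __init__(self, K):
        super().__init__()
        self.init_logits = nn.Parameter(torch.zeros(K))      # pi(z_1)
        self.trans_logits = nn.Parameter(torch.zeros(K, K))  # A(z_{t-1}, z_t)

    def log_prob(self, z_onehot):
        # z_onehot: (batch, T, K) one-hot (or relaxed) latent states
        log_pi = F.log_softmax(self.init_logits, dim=-1)
        log_A = F.log_softmax(self.trans_logits, dim=-1)
        first = (z_onehot[:, 0] * log_pi).sum(-1)
        # (expected) log transition probability between consecutive states
        pair = torch.einsum('btk,kl,btl->bt',
                            z_onehot[:, :-1], log_A, z_onehot[:, 1:])
        return first + pair.sum(-1)

# The KL term of the ELBO contrasts the encoder's categorical posterior
# with this prior, so the prior and the VAE can train jointly end to end.
prior = MarkovPrior(K=8)
z = F.one_hot(torch.randint(0, 8, (4, 16)), 8).float()  # (batch=4, T=16)
print(prior.log_prob(z).shape)  # torch.Size([4])
```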

We introduce and analyze a symmetric low-regularity scheme for the nonlinear Schr\"odinger (NLS) equation that goes beyond classical Fourier-based techniques. We show fractional convergence of the scheme in the $L^2$-norm, from first up to second order, both on the torus $\mathbb{T}^d$ and on a smooth bounded domain $\Omega \subset \mathbb{R}^d$, $d\le 3$, equipped with homogeneous Dirichlet boundary conditions. The new scheme allows for a symmetric approximation to the NLS equation in a more general setting than classical splitting methods, exponential integrators, and low-regularity schemes (i.e. under lower regularity assumptions, on more general domains, and with fractional rates). We motivate and illustrate our findings through numerical experiments, where we observe better structure-preserving properties and an improved error constant in low-regularity regimes.
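
As a point of reference, the cubic NLS equation (our illustrative assumption; the scheme may cover more general nonlinearities) reads
\[
i\,\partial_t u = -\Delta u + \mu\,|u|^2 u, \qquad u(0,\cdot) = u_0,
\]
and a one-step method $\Phi_\tau$ is called symmetric when $\Phi_\tau^{-1} = \Phi_{-\tau}$, a property that typically yields error expansions in even powers of $\tau$ and improved long-time structure preservation.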

This work is the first exploration of proof-theoretic semantics for a substructural logic. It focuses on the base-extension semantics (B-eS) for intuitionistic multiplicative linear logic (IMLL). The starting point is a review of Sandqvist's B-eS for intuitionistic propositional logic (IPL), for which we propose an alternative treatment of conjunction that takes the form of the generalized elimination rule for the connective. The resulting semantics is shown to be sound and complete. This motivates our main contribution, a B-eS for IMLL, in which the definitions of the logical constants all take the form of their elimination rule and for which soundness and completeness are established.
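
For readers unfamiliar with the generalized form, the contrast (written here in sequent-style natural deduction) is between the usual projection rules and the generalized elimination rule for conjunction, which is the shape adopted in the semantics:
\[
\frac{\Gamma \vdash A \wedge B}{\Gamma \vdash A}\;(\wedge E_1)
\qquad
\frac{\Gamma \vdash A \wedge B}{\Gamma \vdash B}\;(\wedge E_2)
\qquad\text{versus}\qquad
\frac{\Gamma \vdash A \wedge B \qquad \Gamma, A, B \vdash C}{\Gamma \vdash C}\;(\wedge E).
\]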

We develop the no-propagate algorithm for sampling the linear response of random dynamical systems, that is, non-uniformly hyperbolic deterministic systems perturbed by noise with smooth density. We first derive a Monte Carlo-type formula and then the algorithm, which differs from the ensemble (stochastic gradient), finite-element, and fast-response algorithms: it does not involve the propagation of vectors or covectors, and only the density of the noise is differentiated, so the formula is not cursed by gradient explosion, dimensionality, or non-hyperbolicity. We demonstrate our algorithm on a tent map perturbed by noise and on a chaotic neural network with 51 layers $\times$ 9 neurons. By itself, this algorithm approximates the linear response of non-hyperbolic deterministic systems, with an additional error proportional to the noise. We also discuss the potential of using this algorithm as part of a larger algorithm with smaller error.
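
The flavour of such an estimator can be conveyed by a generic likelihood-ratio (score-function) sketch for a noisy tent map, in which only the Gaussian noise density is differentiated and no tangent vectors are propagated. This is a hedged stand-in, not the paper's no-propagate formula; the truncation window `W`, the observable, and the mod-1 wrapping (which the score ignores, valid only for small noise) are arbitrary choices.

```python
# Likelihood-ratio estimate of d<Phi>/dgamma for x_{n+1} = f_gamma(x_n) + xi_n.
import numpy as np

rng = np.random.default_rng(0)
sigma, gamma, W, N = 0.1, 1.9, 20, 200_000

def f(x, g):            # tent map with slope parameter g
    return g * np.minimum(x, 1.0 - x)

def df_dgamma(x):       # derivative of the map w.r.t. the parameter gamma
    return np.minimum(x, 1.0 - x)

x = np.empty(N); xi = np.empty(N - 1)
x[0] = 0.3
for n in range(N - 1):  # one long trajectory, recording the innovations xi_n
    xi[n] = sigma * rng.standard_normal()
    x[n + 1] = (f(x[n], gamma) + xi[n]) % 1.0

Phi = x**2                                  # observable
score = xi / sigma**2 * df_dgamma(x[:-1])   # d/dgamma log p(xi_n), Gaussian p
# correlate the observable with past scores over a decorrelation window W
resp = sum(np.mean(Phi[W:] * score[W - k : N - k]) for k in range(1, W + 1))
print("estimated d<Phi>/dgamma ~", resp)
```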

We derive a Bernstein-von Mises theorem in the context of misspecified, non-i.i.d., hierarchical models parametrized by a finite-dimensional parameter of interest. We apply our results to hierarchical models containing non-linear operators, including the squared integral operator, and to PDE-constrained inverse problems. More specifically, we consider the elliptic, time-independent Schr\"odinger equation with parametric boundary condition, and general parabolic PDEs with parametric potential and boundary constraints. Our theoretical results are complemented by numerical experiments on synthetic data sets, considering both the squared integral operator and the Schr\"odinger equation.
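
As one concrete instance, a time-independent Schr\"odinger problem with parametric boundary condition can be written as
\[
-\Delta u + q\,u = 0 \ \text{ in } \Omega, \qquad u = g_\theta \ \text{ on } \partial\Omega,
\]
where the boundary datum $g_\theta$ depends on the finite-dimensional parameter of interest $\theta$; this schematic form is our reading of the abstract, and the paper's exact setup may differ.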

We prove closed-form equations for the exact high-dimensional asymptotics of a family of first-order gradient-based methods that learn an estimator (e.g. an M-estimator or a shallow neural network) from observations of Gaussian data via empirical risk minimization. This includes widely used algorithms such as stochastic gradient descent (SGD) and Nesterov acceleration. The obtained equations match those resulting from the discretization of the dynamical mean-field theory (DMFT) equations from statistical physics when applied to gradient flow. Our proof method allows us to give an explicit description of how memory kernels build up in the effective dynamics and to include non-separable update functions, allowing for datasets with non-identity covariance matrices. Finally, we provide numerical implementations of the equations for SGD with generic extensive batch size and constant learning rate.
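
Schematically, DMFT reduces the high-dimensional dynamics to an effective single-coordinate process with memory, of the generic form
\[
\partial_t \theta(t) = -\lambda\,\theta(t) + \int_0^t M_R(t,s)\,\theta(s)\,\mathrm{d}s + \eta(t), \qquad \mathbb{E}\big[\eta(t)\,\eta(s)\big] = M_C(t,s),
\]
where the memory kernel $M_R$ and the noise covariance $M_C$ must be determined self-consistently from the statistics of the process itself. This continuous-time schematic is indicative only; the paper works with discrete-time analogues appropriate for SGD.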

Making inference with spatial extremal dependence models can be computationally burdensome since they involve intractable and/or censored likelihoods. Building on recent advances in likelihood-free inference with neural Bayes estimators, that is, neural networks that approximate Bayes estimators, we develop highly efficient estimators for censored peaks-over-threshold models that encode censoring information in the neural network architecture. Our new method provides a paradigm shift that challenges traditional censored likelihood-based inference methods for spatial extremal dependence models. Our simulation studies highlight significant gains in both computational and statistical efficiency, relative to competing likelihood-based approaches, when applying our novel estimators to make inference with popular extremal dependence models, such as max-stable, $r$-Pareto, and random scale mixture process models. We also illustrate that it is possible to train a single neural Bayes estimator for a general censoring level, precluding the need to retrain the network when the censoring level changes. We illustrate the efficacy of our estimators by making fast inference on hundreds of thousands of high-dimensional spatial extremal dependence models to assess extreme concentrations of particulate matter 2.5 microns or less in diameter (PM2.5) over the whole of Saudi Arabia.
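
A toy sketch of the general recipe follows: simulate (parameter, data) pairs from the prior, censor the data below a randomly drawn level, and feed the censored field, the censoring indicators, and the level itself to a network trained to minimize Bayes risk, so that a single estimator covers general censoring levels. The simulator and architecture here are illustrative stand-ins, not the paper's.

```python
# Censoring-aware neural Bayes estimator: a minimal PyTorch sketch.
import torch
import torch.nn as nn

def simulate(batch, n):
    theta = torch.rand(batch, 1) * 2.0            # prior draw (toy parameter)
    z = torch.randn(batch, n) * theta             # toy "spatial" data
    tau = torch.rand(batch, 1)                    # random censoring level
    thresh = torch.quantile(z, tau.squeeze(1), dim=1).diag().unsqueeze(1)
    cens = (z < thresh).float()                   # censoring indicators
    z_c = torch.where(cens.bool(), thresh.expand_as(z), z)
    return theta, torch.cat([z_c, cens, tau], dim=1)

net = nn.Sequential(nn.Linear(2 * 64 + 1, 128), nn.ReLU(),
                    nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(1000):                          # Bayes risk minimization
    theta, x = simulate(256, 64)
    loss = ((net(x) - theta) ** 2).mean()         # squared loss: posterior mean
    opt.zero_grad(); loss.backward(); opt.step()
```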

Given samples of a real- or complex-valued function on a set of distinct nodes, the traditional linear Chebyshev approximation problem is to compute the best minimax approximation from a prescribed linear functional space. Lawson's iteration is a classical and well-known method for this task. However, Lawson's iteration converges only linearly, and in many cases the convergence is very slow. In this paper, using the duality theory of linear programming, we first provide an elementary and self-contained proof of the well-known Alternation Theorem in the real case. Relying upon Lagrange duality, we further establish an $L_q$-weighted dual programming formulation of the linear Chebyshev approximation problem. Within this framework, we revisit the convergence of Lawson's iteration and, moreover, propose a Newton-type iteration, an interior-point method, to solve the $L_2$-weighted dual programming problem. Numerical experiments are reported to demonstrate its fast convergence and its capability of finding the reference points that characterize the unique minimax approximation.
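
For context, here is a minimal NumPy sketch of the classical Lawson iteration (the slow baseline, not the paper's Newton/interior-point method): an iteratively reweighted least-squares loop in which the weights are inflated by the current residual magnitudes, driving the weighted fit toward the minimax solution.

```python
# Classical Lawson iteration for discrete minimax polynomial approximation.
import numpy as np

def lawson(x, f, deg, iters=200):
    A = np.polynomial.chebyshev.chebvander(x, deg)   # Chebyshev basis on nodes
    w = np.full(len(x), 1.0 / len(x))                # initial uniform weights
    for _ in range(iters):
        Ws = np.sqrt(w)
        c, *_ = np.linalg.lstsq(Ws[:, None] * A, Ws * f, rcond=None)
        r = f - A @ c                                # residual on the nodes
        w *= np.abs(r)                               # Lawson weight update
        w /= w.sum()                                 # renormalize
    return c, np.max(np.abs(r))

x = np.linspace(-1, 1, 400)
c, err = lawson(x, np.abs(x), deg=10)
print("approx minimax error:", err)  # residual equioscillates at the optimum
```

At convergence, the surviving positive weights concentrate on the reference points where the residual attains its extremal value with alternating signs, in line with the Alternation Theorem.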

In the context of adaptive remeshing, the virtual element method provides significant advantages over the finite element method. The attractive features of the virtual element method, such as permitting arbitrary element geometries and seamlessly accommodating 'hanging' nodes, have inspired many works concerning error estimation and adaptivity. However, these works have primarily focused on adaptive refinement techniques, with little attention paid to the adaptive coarsening (i.e. de-refinement) techniques that are required for the development of fully adaptive remeshing procedures. In this work, novel indicators are proposed for the identification of patches/clusters of elements to be coarsened, along with a novel procedure to perform the coarsening. The indicators are computed over prospective patches of elements, rather than over individual elements, to identify the most suitable combinations of elements to coarsen. The coarsening procedure is robust and suitable for meshes of structured and unstructured/Voronoi elements. Numerical results demonstrate the high degree of efficacy of the proposed coarsening procedures and sensible mesh evolution during the coarsening process. It is demonstrated that critical mesh geometries, such as non-convex corners and holes, are preserved during coarsening, and that meshes remain fine in regions of interest to engineers, such as near singularities.
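
The following toy sketch illustrates the general idea of patch-based selection: indicators are evaluated over prospective patches (an element and its neighbours) rather than per element, and the lowest-error disjoint patches are merged first. The indicator used here (a root-sum-square of per-element estimates) and all names are illustrative stand-ins, not the paper's indicators.

```python
# Patch-based coarsening selection: a schematic sketch.
import numpy as np

def select_coarsening_patches(eta, adjacency, tol):
    """eta: per-element error estimates; adjacency: dict elem -> neighbours."""
    candidates = []
    for e, nbrs in adjacency.items():
        patch = [e, *nbrs]
        eta_patch = np.sqrt(sum(eta[i] ** 2 for i in patch))  # combined error
        if eta_patch < tol:
            candidates.append((eta_patch, patch))
    candidates.sort()                      # coarsen lowest-error patches first
    merged, used = [], set()
    for _, patch in candidates:
        if not used.intersection(patch):   # keep selected patches disjoint
            merged.append(patch)
            used.update(patch)
    return merged                          # each patch becomes one VEM element

eta = np.array([0.01, 0.02, 0.9, 0.015, 0.012])
adj = {0: [1], 1: [0, 3], 2: [3], 3: [1, 2, 4], 4: [3]}
print(select_coarsening_patches(eta, adj, tol=0.05))  # e.g. [[4, 3], [0, 1]]
```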

Kleene's computability theory, based on the S1-S9 computation schemes, constitutes a model for computing with objects of any finite type and extends Turing's 'machine model', which formalises computing with real numbers. A fundamental distinction in Kleene's framework is between normal and non-normal functionals, where the former compute the associated Kleene quantifier $\exists^n$ and the latter do not. Historically, the focus was on normal functionals, but recently new non-normal functionals have been studied based on well-known theorems, the weakest among which seems to be the uncountability of the reals. These new non-normal functionals are fundamentally different from historical examples like Tait's fan functional: the latter is computable from $\exists^2$, while the former are computable in $\exists^3$ but not in weaker oracles. Of course, there is a great divide or abyss separating $\exists^2$ and $\exists^3$, and we identify slight variations of our new non-normal functionals that are again computable in $\exists^2$, i.e. fall on different sides of this abyss. Our examples are based on mainstream mathematical notions, like quasi-continuity, Baire classes, bounded variation, and semi-continuity from real analysis.
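
For concreteness, the Kleene quantifier $\exists^2$ is the type-two functional
\[
\exists^2(f) =
\begin{cases}
1 & \text{if } (\exists n \in \mathbb{N})\, f(n) = 0,\\
0 & \text{otherwise},
\end{cases}
\qquad f \in \mathbb{N}^{\mathbb{N}},
\]
and $\exists^3$ is the type-three analogue that decides the corresponding existence question for functionals on Baire space, i.e. with the quantifier ranging over $\mathbb{N}^{\mathbb{N}}$ rather than $\mathbb{N}$.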
