In this paper we study semi-discrete and fully discrete evolving surface finite element schemes for the Cahn-Hilliard equation with a logarithmic potential. Specifically, we consider linear finite elements for the spatial discretisation and the backward Euler method for the time discretisation. Our analysis relies on a specific geometric assumption on the evolution of the surface. Our main results are $L^2_{H^1}$ error bounds for both the semi-discrete and fully discrete schemes, and we provide some numerical results.
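
As a minimal illustration of the logarithmic potential term, the following sketch evaluates the standard Flory-Huggins form and its derivative; the parameter values theta = 1, theta_c = 2 are hypothetical, chosen only so that theta_c > theta and the potential has a double-well shape.

```python
import numpy as np

# Flory-Huggins logarithmic potential, defined for u in (-1, 1):
#   F(u) = (theta/2) * ((1+u)ln(1+u) + (1-u)ln(1-u)) - (theta_c/2) * u^2
# For theta_c > theta the potential is a double well, which drives phase
# separation in the Cahn-Hilliard equation.
def log_potential(u, theta=1.0, theta_c=2.0):
    u = np.asarray(u, dtype=float)
    return 0.5 * theta * ((1 + u) * np.log1p(u) + (1 - u) * np.log1p(-u)) \
        - 0.5 * theta_c * u ** 2

def log_potential_deriv(u, theta=1.0, theta_c=2.0):
    # F'(u) = (theta/2) * ln((1+u)/(1-u)) - theta_c * u
    u = np.asarray(u, dtype=float)
    return 0.5 * theta * (np.log1p(u) - np.log1p(-u)) - theta_c * u
```

The singular derivative as u approaches +/-1 is what makes this potential harder to handle than the smooth quartic approximation.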

This paper proves a homomorphism between extensional formal semantics and distributional vector space semantics, demonstrating structural compatibility. Formal semantics models meaning as reference, using logical structures to map linguistic expressions to truth conditions, while distributional semantics represents meaning through word vectors derived from contextual usage. By constructing injective mappings that preserve semantic relationships, we show that every semantic function in an extensional model corresponds to a compatible vector space operation. This result respects compositionality and extends to function compositions, constant interpretations, and $n$-ary relations. Rather than pursuing unification, we highlight a mathematical foundation for hybrid cognitive models that integrate symbolic and sub-symbolic reasoning and semantics. These findings support multimodal language processing, aligning `meaning as reference' (Frege, Tarski) with `meaning as use' (Wittgenstein, Firth).
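
A toy instance of such a structure-preserving mapping (the two-entity model below is hypothetical, not taken from the paper): entities are embedded injectively as one-hot vectors, and a unary semantic function becomes a matrix acting on that space, so that function application commutes with the embedding.

```python
import numpy as np

# Hypothetical extensional model: two entities and a unary function
# given by its graph.
entities = ["alice", "bob"]
best_friend_of = {"alice": "bob", "bob": "alice"}

# Injective embedding: entity -> one-hot vector in R^2.
def embed(e):
    v = np.zeros(len(entities))
    v[entities.index(e)] = 1.0
    return v

# The compatible vector-space operation: a matrix F satisfying
# F @ embed(x) == embed(best_friend_of[x]) for every entity x.
F = np.column_stack([embed(best_friend_of[e]) for e in entities])

for e in entities:
    assert np.allclose(F @ embed(e), embed(best_friend_of[e]))
```

Composing matrices then mirrors composing semantic functions, which is the compositionality property the homomorphism preserves.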

This paper introduces a novel approach to learning sparsity-promoting regularizers for solving linear inverse problems. We develop a bilevel optimization framework to select an optimal synthesis operator, denoted as $B$, which regularizes the inverse problem while promoting sparsity in the solution. The method leverages statistical properties of the underlying data and incorporates prior knowledge through the choice of $B$. We establish the well-posedness of the optimization problem, provide theoretical guarantees for the learning process, and present sample complexity bounds. The approach is demonstrated through examples, including compact perturbations of a known operator and the problem of learning the mother wavelet, showcasing its flexibility in incorporating prior knowledge into the regularization framework. This work extends previous efforts in Tikhonov regularization by addressing non-differentiable norms and proposing a data-driven approach for sparse regularization in infinite dimensions.
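
The synthesis formulation can be sketched with a plain proximal-gradient (ISTA) solver for min_x 0.5*||ABx - y||^2 + lam*||x||_1, recovering u = Bx. This generic solver and the random test problem are illustrative assumptions, not the paper's bilevel method for learning B.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_synthesis(A, B, y, lam, n_iter=500):
    """Minimise 0.5*||A B x - y||^2 + lam*||x||_1; return u = B x and x."""
    M = A @ B
    step = 1.0 / np.linalg.norm(M, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(M.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * M.T @ (M @ x - y), step * lam)
    return B @ x, x

# Illustrative random problem (seeded for reproducibility).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 30))
B = rng.standard_normal((30, 50))        # synthesis operator
x_true = np.zeros(50); x_true[[3, 17]] = [2.0, -1.5]
y = A @ B @ x_true
u, x = ista_synthesis(A, B, y, lam=0.1)
```

In the paper's setting, B itself is the optimisation variable of an outer (bilevel) problem; the sketch above corresponds only to the inner reconstruction step for a fixed B.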

In this paper, we develop bound-preserving techniques for the Runge--Kutta (RK) discontinuous Galerkin (DG) method with compact stencils (cRKDG method) for hyperbolic conservation laws. The cRKDG method was recently introduced in [Q. Chen, Z. Sun, and Y. Xing, SIAM J. Sci. Comput., 46: A1327--A1351, 2024]. It enhances the compactness of the standard RKDG method, resulting in reduced data communication, simplified boundary treatments, and improved suitability for local time marching. This work improves the robustness of the cRKDG method by enforcing desirable physical bounds while preserving its compactness, local conservation, and high-order accuracy. Our method is extended from the seminal work of [X. Zhang and C.-W. Shu, J. Comput. Phys., 229: 3091--3120, 2010]. We prove that the cell average of the cRKDG method at each RK stage preserves the physical bounds by expressing it as a convex combination of three types of forward-Euler solutions. A scaling limiter is then applied after each RK stage to enforce pointwise bounds. Additionally, we explore RK methods with less restrictive time step sizes. Because the cRKDG method does not rely on strong-stability-preserving RK time discretization, it avoids its order barriers, allowing us to construct a four-stage, fourth-order bound-preserving cRKDG method. Numerical tests on challenging benchmarks are provided to demonstrate the performance of the proposed method.
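
The Zhang-Shu scaling limiter can be sketched as follows: point values within a cell are compressed linearly toward the (already bound-preserving) cell average until they all satisfy the bounds, which leaves the cell average, and hence local conservation, intact. The equal-weight average below is a simplifying assumption; in practice the average is taken with the quadrature weights of the scheme.

```python
import numpy as np

def scaling_limiter(node_vals, lo, hi):
    """Scale nodal values toward their cell average so they lie in [lo, hi].

    Assumes the cell average itself is already in [lo, hi], as guaranteed
    at each RK stage by the convex-combination argument.
    """
    v = np.asarray(node_vals, dtype=float)
    avg = v.mean()                      # equal weights for illustration only
    vmax, vmin = v.max(), v.min()
    theta = 1.0
    if vmax > avg:
        theta = min(theta, (hi - avg) / (vmax - avg))
    if vmin < avg:
        theta = min(theta, (avg - lo) / (avg - vmin))
    return avg + theta * (v - avg)      # linear scaling preserves the mean

vals = np.array([1.3, -0.2, 0.5, 0.4])  # overshoots the bounds [0, 1]
limited = scaling_limiter(vals, 0.0, 1.0)
```

Because the map is linear in (v - avg), the limited values keep the same cell average while being pulled inside the admissible interval, and high-order accuracy is retained for smooth solutions.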

We consider estimators obtained by iterates of the conjugate gradient (CG) algorithm applied to the normal equation of prototypical statistical inverse problems. Stopping the CG algorithm early induces regularisation, and optimal convergence rates of prediction and reconstruction error are established in wide generality for an ideal oracle stopping time. Based on this insight, a fully data-driven early stopping rule $\tau$ is constructed, which also attains optimal rates, provided the error in estimating the noise level is not dominant. The error analysis of CG under statistical noise is subtle due to its nonlinear dependence on the observations. We provide an explicit error decomposition and identify two terms in the prediction error, which share important properties of classical bias and variance terms. Together with a continuous interpolation between CG iterates, this paves the way for a comprehensive error analysis of early stopping. In particular, a general oracle-type inequality is proved for the prediction error at $\tau$. For bounding the reconstruction error, a more refined probabilistic analysis, based on concentration of self-normalised Gaussian processes, is developed. The methodology also provides some new insights into early stopping for CG in deterministic inverse problems. A numerical study for standard examples shows good results in practice for early stopping at $\tau$.
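
A minimal sketch of CG applied to the normal equations with a residual-based (discrepancy-style) early stopping rule; the concrete rule ||Ax - y|| <= kappa * delta is a simple stand-in for the data-driven rule tau analysed in the paper, and the small Gaussian test problem is illustrative.

```python
import numpy as np

def cgls_early_stop(A, y, delta, kappa=1.0, max_iter=1000):
    """CGLS for A^T A x = A^T y, stopped once ||A x - y|| <= kappa * delta."""
    x = np.zeros(A.shape[1])
    r = y.copy()                  # residual y - A x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for k in range(max_iter):
        if np.linalg.norm(r) <= kappa * delta:
            return x, k           # early stopping acts as regularisation
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x, max_iter

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
y = A @ x_true                    # noiseless data, so delta is taken tiny
x_hat, n_iter = cgls_early_stop(A, y, delta=1e-8)
```

With noisy data the threshold would use an estimate of the noise level delta; the paper's point is that the rule still attains optimal rates provided that estimate is not too poor.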

In a Jacobi--Davidson (JD) type method for singular value decomposition (SVD) problems, called JDSVD, a large symmetric and generally indefinite correction equation is solved iteratively at each outer iteration, which constitutes the inner iterations and dominates the overall efficiency of JDSVD. In this paper, a convergence analysis is made on the minimal residual (MINRES) method for the correction equation. Motivated by the results obtained, at each outer iteration a new correction equation is derived that extracts useful information from current subspaces to construct effective preconditioners for the correction equation, and it is proven to retain the same convergence of the outer iterations of JDSVD. The resulting method is called the inner-preconditioned JDSVD (IPJDSVD) method; it is also a new JDSVD method, and any viable preconditioner for the correction equations in JDSVD is straightforwardly applicable to those in IPJDSVD. Convergence results show that MINRES for the new correction equation can converge much faster when there is a cluster of singular values closest to a given target. A new thick-restart IPJDSVD algorithm with deflation and purgation is proposed that simultaneously accelerates the outer and inner convergence of the standard thick-restart JDSVD and computes several singular triplets. Numerical experiments justify the theory, illustrate the considerable superiority of IPJDSVD over JDSVD, and demonstrate that a similar two-stage IPJDSVD algorithm substantially outperforms the state-of-the-art PRIMME\_SVDS software for computing the smallest singular triplets.

This work presents a numerical analysis of a Discontinuous Galerkin (DG) method for a transformed master equation modeling an open quantum system: a quantum sub-system interacting with a noisy environment. It is shown that, for the general case of non-harmonic potentials, the transformed master equation has a reduced computational cost under DG schemes in comparison to a Wigner-Fokker-Planck model of the same system. We present the specifics of a DG numerical scheme adequate for the system of convection-diffusion equations obtained from our Lindblad master equation in the position basis, which allows us to computationally solve the transformed system modeling our open quantum system problem. The benchmark case of a harmonic potential is presented first, for which the numerical results are compared against the analytical steady-state solution. Two non-harmonic cases, the linear and quartic potentials, are then modeled via our DG framework, and the corresponding numerical results are shown.

The continental plates of Earth are known to drift over a geophysical timescale, and their interactions have led to some of the most spectacular geoformations of our planet while also causing natural disasters such as earthquakes and volcanic activity. Understanding the dynamics of interacting continental plates is thus significant. In this work, we present a fluid mechanical investigation of plate motion, interaction, and dynamics. Through numerical experiments, we examine the coupling between a convective fluid and plates floating on top of it. With physical modeling, we show that the coupling is both mechanical and thermal, leading to the thermal blanket effect: the floating plate is not only transported by the fluid flow beneath it, but also prevents heat from leaving the fluid, inducing a convective flow that further affects the plate motion. By adding several plates to this coupled fluid-structure interaction, we also investigate how floating plates interact with each other and show that, under proper conditions, small plates can converge into a supercontinent.

In this manuscript we present the tensor-train reduced basis method, a novel projection-based reduced-order model for the efficient solution of parameterized partial differential equations. Despite their popularity and considerable computational advantages with respect to their full order counterparts, reduced-order models are typically characterized by a considerable offline computational cost. The proposed approach addresses this issue by efficiently representing high dimensional finite element quantities with the tensor train format. This method entails numerous benefits, namely, the smaller number of operations required to compute the reduced subspaces, the cheaper hyper-reduction strategy employed to reduce the complexity of the PDE residual and Jacobian, and the decreased dimensionality of the projection subspaces for a fixed accuracy. We provide a posteriori estimates that demonstrate the accuracy of the proposed method, and we test its computational performance for the heat equation and transient linear elasticity on three-dimensional Cartesian geometries.
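
The tensor-train format itself can be illustrated with the standard TT-SVD construction, which builds the cores by sequential truncated SVDs of unfoldings; this is a generic sketch of the format, not the paper's reduced-basis pipeline.

```python
import numpy as np

def tt_svd(T, eps=1e-12):
    """Decompose a full tensor into TT cores via sequential truncated SVDs."""
    shape = T.shape
    d = len(shape)
    cores, r = [], 1
    C = np.asarray(T, dtype=float)
    for k in range(d - 1):
        C = C.reshape(r * shape[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        rank = max(1, int(np.sum(S > eps * S[0])))   # relative truncation
        cores.append(U[:, :rank].reshape(r, shape[k], rank))
        C = S[:rank, None] * Vt[:rank]
        r = rank
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into a full tensor."""
    T = cores[0]
    for core in cores[1:]:
        T = np.tensordot(T, core, axes=([-1], [0]))
    return T.reshape(T.shape[1:-1])

# A rank-1 test tensor compresses to all TT ranks equal to 1.
a, b, c = np.arange(1, 4.0), np.arange(1, 5.0), np.arange(1, 6.0)
T = np.einsum('i,j,k->ijk', a, b, c)
cores = tt_svd(T)
```

The storage drops from the product of the mode sizes to a sum of small core sizes whenever the TT ranks stay moderate, which is the source of the offline savings the paper exploits.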

Gradient Descent (GD) and Conjugate Gradient (CG) methods are among the most effective iterative algorithms for solving unconstrained optimization problems, particularly in machine learning and statistical modeling, where they are employed to minimize cost functions. In these algorithms, tunable parameters, such as step sizes or conjugate parameters, play a crucial role in determining key performance metrics, like runtime and solution quality. In this work, we introduce a framework that models algorithm selection as a statistical learning problem, so that the learning complexity can be estimated by the pseudo-dimension of the algorithm group. We first propose a new cost measure for unconstrained optimization algorithms, inspired by the concept of the primal-dual integral in mixed-integer linear programming. Based on the new cost measure, we derive an improved upper bound for the pseudo-dimension of the gradient descent algorithm group by discretizing the set of step size configurations. Moreover, we generalize our findings from the gradient descent algorithm group to the conjugate gradient algorithm group for the first time, and prove the existence of a learning algorithm capable of probabilistically identifying the optimal algorithm with a sufficiently large sample size.
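
The discretisation idea can be sketched by empirical risk minimisation over a finite grid of step sizes: draw sample problems, run GD with each candidate step, and select the one with the smallest average cost. The random quadratic sample problems and the final-objective cost used below are illustrative stand-ins for the paper's primal-integral-style cost measure.

```python
import numpy as np

def gd_final_cost(Q, b, step, n_iter=50):
    """Run GD on f(x) = 0.5 x^T Q x - b^T x; return the final objective."""
    x = np.zeros(len(b))
    for _ in range(n_iter):
        x = x - step * (Q @ x - b)
    return 0.5 * x @ Q @ x - b @ x

rng = np.random.default_rng(2)

def sample_problem():
    # Random convex quadratic (illustrative problem distribution).
    M = rng.standard_normal((5, 5))
    Q = M @ M.T / 5 + 0.5 * np.eye(5)
    return Q, rng.standard_normal(5)

problems = [sample_problem() for _ in range(20)]
grid = np.linspace(0.01, 0.5, 25)        # discretised step-size configurations
avg_cost = [np.mean([gd_final_cost(Q, b, s) for Q, b in problems])
            for s in grid]
best_step = grid[int(np.argmin(avg_cost))]
```

The pseudo-dimension bounds in the paper control how many sample problems suffice for the empirically best configuration to be near-optimal on the whole problem distribution.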

We propose a procedure for the numerical approximation of invariance equations arising in the moment matching technique associated with reduced-order modeling of high-dimensional dynamical systems. The Galerkin residual method is employed to find an approximate solution to the invariance equation using a Newton iteration on the coefficients of a monomial basis expansion of the solution. These solutions to the invariance equations can then be used to construct reduced-order models. We assess the ability of the method to solve the invariance PDE system as well as to achieve moment matching and recover a system's steady-state behaviour for linear and nonlinear signal generators with system dynamics up to $n=1000$ dimensions.
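
For a linear system x' = Ax + Bu driven by a linear signal generator w' = Sw, u = Lw, the invariance equation reduces to the Sylvester equation A Pi + B L = Pi S, which a small instance can solve directly by vectorisation. This linear special case is an illustrative baseline; the paper's Galerkin-residual Newton machinery targets the nonlinear setting where no such closed-form reduction exists.

```python
import numpy as np

def solve_invariance_sylvester(A, B, L, S):
    """Solve A @ Pi + B @ L = Pi @ S for Pi via Kronecker vectorisation."""
    n, nu = A.shape[0], S.shape[0]
    # vec(A Pi - Pi S) = (I ⊗ A - S^T ⊗ I) vec(Pi) = -vec(B L), column-major vec
    K = np.kron(np.eye(nu), A) - np.kron(S.T, np.eye(n))
    vec_Pi = np.linalg.solve(K, -(B @ L).flatten(order='F'))
    return vec_Pi.reshape(n, nu, order='F')

# Small stable system driven by a harmonic-oscillator signal generator.
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
S = np.array([[0.0, 1.0], [-1.0, 0.0]])  # eigenvalues +/- i, disjoint from spec(A)
L = np.array([[1.0, 0.0]])
Pi = solve_invariance_sylvester(A, B, L, S)
```

The solution Pi parameterises the invariant manifold on which the reduced-order model matches the moments of the full system at the interpolation frequencies encoded in S.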
