
Miura surfaces are the solutions of a constrained nonlinear elliptic system of equations. This system is derived by homogenization from the Miura fold, a type of origami fold with multiple applications in engineering. A previous study gave suboptimal conditions for the existence of solutions and proposed an $H^2$-conformal finite element method to approximate them. In this paper, the existence of Miura surfaces is studied using a mixed formulation. It is also proved that the constraints propagate from the boundary to the interior of the domain for well-chosen boundary conditions. Then, a numerical method based on a least-squares formulation, Taylor--Hood finite elements and a Newton method is introduced to approximate Miura surfaces. The numerical method is proved to converge at order one in space, and numerical tests demonstrate its robustness.

Related content


Causal representation learning algorithms discover lower-dimensional representations of data that admit a decipherable interpretation of cause and effect; as achieving such interpretable representations is challenging, many causal representation learning algorithms incorporate prior information, such as (linear) structural causal models, interventional data, or weak supervision. Unfortunately, in exploratory causal representation learning, such prior information may not be available or warranted. Alternatively, scientific datasets often have multiple modalities or physics-based constraints, and the use of such scientific, multimodal data has been shown to improve disentanglement in fully unsupervised settings. Consequently, we introduce a causal representation learning algorithm (causalPIMA) that can use multimodal data and known physics to discover important features with causal relationships. Our algorithm utilizes a new differentiable parametrization to learn a directed acyclic graph (DAG) together with a latent space of a variational autoencoder in an end-to-end differentiable framework via a single, tractable evidence lower bound loss function. We place a Gaussian mixture prior on the latent space and identify each of the mixtures with an outcome of the DAG nodes; this novel identification enables feature discovery with causal relationships. Tested against a synthetic and a scientific dataset, our results demonstrate the capability of learning an interpretable causal structure while simultaneously discovering key features in a fully unsupervised setting.
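Differentiable DAG learning usually works by penalizing a smooth function of the weighted adjacency matrix that vanishes exactly on acyclic graphs. A minimal sketch of one standard such penalty (the polynomial variant of the NOTEARS trace-exponential constraint, not necessarily the parametrization used in causalPIMA):

```python
import numpy as np

def acyclicity(A):
    """Polynomial acyclicity penalty h(A) = tr((I + (A*A)/d)^d) - d.
    h(A) = 0 iff the weighted adjacency matrix A encodes a DAG;
    h is differentiable in A, so it can be used as a training loss."""
    d = A.shape[0]
    M = np.eye(d) + (A * A) / d          # Hadamard square keeps entries >= 0
    return np.trace(np.linalg.matrix_power(M, d)) - d
```

A strictly upper-triangular matrix (a DAG under a topological order) gives $h = 0$, while any cycle makes some diagonal entry of a power of $M$ exceed its identity contribution, so $h > 0$.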

We construct a Convolution Quadrature (CQ) scheme for the quasilinear subdiffusion equation and supply it with a fast and oblivious implementation. In particular, we find a condition for the CQ to be admissible and discretize the spatial part of the equation with the Finite Element Method. We prove the unconditional stability and convergence of the scheme and find a bound on the error. As a passing result, we also obtain a discrete Gronwall inequality for the CQ, which is a crucial ingredient of our convergence proof based on the energy method. The paper is concluded with numerical examples verifying convergence and the computation-time reduction achieved by the fast and oblivious quadrature.
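Convolution quadrature replaces the fractional-derivative convolution by a discrete convolution whose weights are Taylor coefficients of the generating function of the underlying time stepper. As a minimal sketch (the backward-Euler case only, which reproduces the classical Grünwald--Letnikov weights; the paper's admissibility analysis is more general):

```python
import numpy as np

def cq_weights_bdf1(alpha, n):
    """First n convolution-quadrature weights for a fractional
    derivative of order alpha, generated by backward Euler: the
    Taylor coefficients of (1 - z)^alpha (Grunwald-Letnikov weights),
    computed by the stable two-term recurrence."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    return w
```

Note that for $\alpha = 1$ the weights collapse to $(1, -1, 0, \dots)$, recovering the plain backward difference, which is a quick sanity check on the recurrence.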

We consider the problem of obtaining effective representations for the solutions of linear, vector-valued stochastic differential equations (SDEs) driven by non-Gaussian pure-jump L\'evy processes, and we show how such representations lead to efficient simulation methods. The processes considered constitute a broad class of models that find application across the physical and biological sciences, mathematics, finance and engineering. Motivated by important relevant problems in statistical inference, we derive new, generalised shot-noise simulation methods whenever a normal variance-mean (NVM) mixture representation exists for the driving L\'evy process, including the generalised hyperbolic, normal-Gamma, and normal tempered stable cases. Simple, explicit conditions are identified for the convergence of the residual of a truncated shot-noise representation to a Brownian motion in the case of the pure L\'evy process, and to a Brownian-driven SDE in the case of the L\'evy-driven SDE. These results provide Gaussian approximations to the small jumps of the process under the NVM representation. The resulting representations are of particular importance in state inference and parameter estimation for L\'evy-driven SDE models, since the resulting conditionally Gaussian structures can be readily incorporated into latent variable inference methods such as Markov chain Monte Carlo (MCMC), Expectation-Maximisation (EM), and sequential Monte Carlo.
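The NVM construction itself is simple to state: conditionally on a mixing variable $W$, the variate is Gaussian with mean and variance both driven by $W$. A minimal sketch for the normal-Gamma (variance-gamma) case, with all parameter names chosen here for illustration (this is the marginal mixture draw, not the paper's shot-noise series for the full Lévy process):

```python
import numpy as np

def sample_nvm(n, mu=0.0, beta=0.5, shape=2.0, scale=1.0, seed=0):
    """Draw n variates from a normal variance-mean (NVM) mixture
    X = mu + beta*W + sqrt(W)*Z with Z ~ N(0,1).  A Gamma-distributed
    mixing variable W gives the variance-gamma (normal-Gamma) case."""
    rng = np.random.default_rng(seed)
    W = rng.gamma(shape, scale, size=n)   # subordinator increment / mixing variable
    Z = rng.standard_normal(n)
    return mu + beta * W + np.sqrt(W) * Z
```

Conditioning on $W$ makes the sample Gaussian, which is exactly the conditionally Gaussian structure the abstract exploits for MCMC, EM, and sequential Monte Carlo inference.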

Block majorization-minimization (BMM) is a simple iterative algorithm for nonconvex constrained optimization that sequentially minimizes majorizing surrogates of the objective function in each block coordinate while the other coordinates are held fixed. BMM entails a large class of optimization algorithms such as block coordinate descent and its proximal-point variant, expectation-minimization, and block projected gradient descent. We establish that for general constrained nonconvex optimization, BMM with strongly convex surrogates can produce an $\epsilon$-stationary point within $O(\epsilon^{-2}(\log \epsilon^{-1})^{2})$ iterations and asymptotically converges to the set of stationary points. Furthermore, we propose a trust-region variant of BMM that can handle surrogates that are only convex and still obtain the same iteration complexity and asymptotic stationarity. These results hold robustly even when the convex sub-problems are inexactly solved as long as the optimality gaps are summable. As an application, we show that a regularized version of the celebrated multiplicative update algorithm for nonnegative matrix factorization by Lee and Seung has iteration complexity of $O(\epsilon^{-2}(\log \epsilon^{-1})^{2})$. The same result holds for a wide class of regularized nonnegative tensor decomposition algorithms as well as the classical block projected gradient descent algorithm. These theoretical results are validated through various numerical experiments.
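The multiplicative update algorithm mentioned in the abstract is short enough to state concretely. A minimal sketch of the classical (unregularized) Lee--Seung updates for the Frobenius-norm NMF objective, framed as BMM steps; the small constant `eps` is an implementation guard, not part of the analysis:

```python
import numpy as np

def nmf_multiplicative(V, r, iters=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for nonnegative matrix
    factorization V ~ W @ H under the Frobenius-norm objective.
    Each update is a BMM step: it minimizes a majorizing surrogate
    of the objective in one block (W or H) with the other fixed,
    so the objective is monotonically nonincreasing."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # block update of H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # block update of W
    return W, H
```

The ratios are elementwise, so nonnegativity of `W` and `H` is preserved automatically, which is the practical appeal of the multiplicative form.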

A new linear relaxation system for nonconservative hyperbolic systems is introduced, in which a nonlocal source term accounts for the nonconservative product of the original system. Using an asymptotic analysis, the relaxation limit and its stability are investigated. It is shown that the path-conservative Lax-Friedrichs scheme arises from a discrete limit of an implicit-explicit scheme for the relaxation system. The relaxation approach is further employed to couple two nonconservative systems at a static interface. A coupling strategy motivated by conservative Kirchhoff conditions is introduced and a corresponding Riemann solver is provided. A fully discrete scheme for coupled nonconservative products is derived and studied in terms of path-conservation. Numerical experiments applying the approach to a coupled model of vascular blood flow are presented.
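For orientation, here is a minimal sketch of the classical Lax-Friedrichs update that the path-conservative scheme generalizes; in the nonconservative setting the flux difference below is replaced by a path integral of the nonconservative product, which is not shown here:

```python
import numpy as np

def lax_friedrichs_step(u, flux, dt, dx):
    """One classical Lax-Friedrichs step for u_t + f(u)_x = 0 with
    periodic boundaries: average of the neighbors plus a centered
    flux difference."""
    up = np.roll(u, -1)   # u_{j+1}
    um = np.roll(u, 1)    # u_{j-1}
    return 0.5 * (up + um) - 0.5 * (dt / dx) * (flux(up) - flux(um))
```

With periodic boundaries the centered flux differences telescope, so the scheme conserves the discrete total mass exactly; it is precisely this telescoping that a nonconservative product destroys and a path-conservative formulation restores in a generalized sense.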

Nonlinear differential equations exhibit rich phenomena in many fields but are notoriously challenging to solve. Recently, Liu et al. [1] demonstrated the first efficient quantum algorithm for dissipative quadratic differential equations under the condition $R < 1$, where $R$ measures the ratio of nonlinearity to dissipation using the $\ell_2$ norm. Here we develop an efficient quantum algorithm based on [1] for reaction-diffusion equations, a class of nonlinear partial differential equations (PDEs). To achieve this, we improve upon the Carleman linearization approach introduced in [1] to obtain a faster convergence rate under the condition $R_D < 1$, where $R_D$ measures the ratio of nonlinearity to dissipation using the $\ell_{\infty}$ norm. Since $R_D$ is independent of the number of spatial grid points $n$ while $R$ increases with $n$, the criterion $R_D<1$ is significantly milder than $R<1$ for high-dimensional systems and can stay convergent under grid refinement for approximating PDEs. As applications of our quantum algorithm we consider the Fisher-KPP and Allen-Cahn equations, which have interpretations in classical physics. In particular, we show how to estimate the mean square kinetic energy in the solution by postprocessing the quantum state that encodes it to extract derivative information.
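Carleman linearization is easiest to see in the scalar case. As a minimal sketch (a scalar quadratic ODE with a simple explicit Euler integrator, not the paper's quantum algorithm or its PDE setting):

```python
import numpy as np

def carleman_solve(a, b, u0, T, N=8, steps=20000):
    """Truncated Carleman linearization of u' = a*u + b*u^2.
    The monomials y_k = u^k satisfy y_k' = k*a*y_k + k*b*y_{k+1};
    truncating at k = N gives a finite linear system y' = C y,
    integrated here with explicit Euler, and u(T) is read off as y_1."""
    C = np.zeros((N, N))
    for k in range(1, N + 1):
        C[k - 1, k - 1] = k * a          # linear (dissipative) part
        if k < N:
            C[k - 1, k] = k * b          # coupling to the next monomial
    y = np.array([u0 ** k for k in range(1, N + 1)])
    dt = T / steps
    for _ in range(steps):
        y = y + dt * (C @ y)
    return y[0]
```

For a dissipative choice such as $a = -1$, $b = 0.5$, the truncation error decays with $N$ because higher monomials are damped; this is the scalar shadow of the $R_D < 1$ convergence condition discussed in the abstract.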

Using diffusion models to solve inverse problems is a growing field of research. Current methods assume the degradation to be known and provide impressive results in terms of restoration quality and diversity. In this work, we leverage the efficiency of those models to jointly estimate the restored image and unknown parameters of the degradation model, such as the blur kernel. In particular, we design an algorithm based on the well-known Expectation-Maximization (EM) estimation method and diffusion models. Our method alternates between approximating the expected log-likelihood of the inverse problem using samples drawn from a diffusion model and a maximization step to estimate unknown model parameters. For the maximization step, we also introduce a novel blur kernel regularization based on a Plug \& Play denoiser. Diffusion models are slow to run, so we also provide a fast version of our algorithm. Extensive experiments on blind image deblurring demonstrate the effectiveness of our method when compared to other state-of-the-art approaches.
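The alternation described above is a Monte-Carlo EM scheme. As a toy sketch of that structure only (a scalar linear-Gaussian model where the posterior can be sampled exactly; in the paper the E-step samples come from a diffusion model and the unknown is a blur kernel, and every name below is chosen here for illustration):

```python
import numpy as np

def mc_em_scale(y, sigma, k0=1.0, iters=50, n_samples=200, seed=0):
    """Toy Monte-Carlo EM for y_i = k*x_i + noise, with latent
    x_i ~ N(0,1) and noise ~ N(0, sigma^2), sigma known.
    E-step: sample the (Gaussian) posterior of each x_i.
    M-step: maximize the expected log-likelihood over k
    (a least-squares update from the posterior samples)."""
    rng = np.random.default_rng(seed)
    k = k0
    for _ in range(iters):
        # E-step: posterior x_i | y_i is Gaussian
        var = sigma**2 / (k**2 + sigma**2)
        mean = k * y / (k**2 + sigma**2)
        xs = mean + np.sqrt(var) * rng.standard_normal((n_samples, y.size))
        # M-step: closed-form update of k from the samples
        k = np.sum(xs * y) / np.sum(xs * xs)
    return k
```

The fixed point of this iteration is the maximum-likelihood estimate $\hat{k}^2 = \operatorname{var}(y) - \sigma^2$, which is the behavior EM is supposed to deliver when the E-step samples are accurate.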

Owing to the singularity of the solution of linear subdiffusion problems, most time-stepping methods on uniform meshes can only attain $O(\tau)$ accuracy, where $\tau$ denotes the time step. The present work aims to discover why a certain type of Crank-Nicolson scheme (the averaging Crank-Nicolson scheme) for subdiffusion can only yield $O(\tau^\alpha)$ ($\alpha<1$) accuracy, which is much lower than desired. The existing, well-developed error analysis for subdiffusion, which has been successfully applied to many time-stepping methods such as the fractional BDF-$p$ ($1\leq p\leq 6$), requires the singular points to lie outside the path of the contour integrals involved. The averaging Crank-Nicolson scheme considered in this work is quite natural but fails to meet this requirement. By resorting to the residue theorem, a novel sharp error analysis is developed in this study, upon which correction methods are further designed to obtain the optimal $O(\tau^2)$ accuracy. All results are verified by numerical tests.
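Claims like "$O(\tau^\alpha)$ rather than $O(\tau^2)$" are typically verified numerically by measuring the observed order from errors at successively halved steps. A minimal sketch of that diagnostic, demonstrated on backward Euler for $u' = -u$ (a first-order reference problem, not the subdiffusion schemes of the paper):

```python
import numpy as np

def observed_order(solver, taus, exact):
    """Empirical convergence orders p ~ log2(e(tau) / e(tau/2))
    from errors at successively halved time steps."""
    errors = [abs(solver(tau) - exact) for tau in taus]
    return [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]

def backward_euler(tau, T=1.0):
    """Backward Euler for u' = -u, u(0) = 1: u_n = (1 + tau)^(-n)."""
    n = int(round(T / tau))
    u = 1.0
    for _ in range(n):
        u = u / (1.0 + tau)
    return u
```

Applied to the averaging Crank-Nicolson scheme for subdiffusion, the same diagnostic would report orders near $\alpha$ before correction and near $2$ after, which is exactly what the abstract's numerical tests confirm.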

Implicit models for magnetic coenergy have been proposed by Pera et al. to describe the anisotropic nonlinear material behavior of electrical steel sheets. This approach aims at predicting the magnetic response for any direction of excitation by interpolating measured B--H curves in the rolling and transverse directions. In an analogous manner, an implicit model for magnetic energy is proposed. We highlight some mathematical properties of these implicit models, discuss their numerical realization, outline the computation of magnetic material laws via implicit differentiation, and discuss the potential use for finite element analysis in the context of nonlinear magnetostatics.
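The two computational ingredients mentioned, evaluating an implicitly defined material law and differentiating it, can be sketched in a scalar toy setting. The implicit relation $g(B, H) = B + B^3 - H = 0$ below is a hypothetical stand-in for the coenergy-based models, chosen only because it is monotone and has a convenient closed-form derivative:

```python
import numpy as np

def solve_implicit(H, g, dg_dB, B0=0.0, tol=1e-12, max_iter=50):
    """Evaluate an implicit material law g(B, H) = 0 for B at a given
    field value H, using Newton's method."""
    B = B0
    for _ in range(max_iter):
        r = g(B, H)
        if abs(r) < tol:
            break
        B -= r / dg_dB(B, H)
    return B

def differential_permeability(B, H, dg_dB, dg_dH):
    """Implicit differentiation: g(B(H), H) = 0 implies
    dB/dH = -(dg/dH) / (dg/dB)."""
    return -dg_dH(B, H) / dg_dB(B, H)
```

The same pattern scales to the vector-valued case needed in finite element analysis, where `dg_dB` becomes a Jacobian and the implicit-function formula yields the differential reluctivity tensor.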

The notion of a lacunary infinite numerical sequence is introduced. It is shown that, for an arbitrary linear difference operator $L$ with coefficients belonging to the set $R$ of infinite numerical sequences, a criterion (i.e., a necessary and sufficient condition) for the infinite dimensionality of its space $V_L$ of solutions belonging to $R$ is the presence of a lacunary sequence in $V_L$.
