
We propose and analyse numerical schemes for a system of quasilinear, degenerate evolution equations modelling biofilm growth as well as other processes such as flow through porous media and the spreading of wildfires. The first equation in the system is parabolic and exhibits degenerate and singular diffusion, while the second is either uniformly parabolic or an ordinary differential equation. First, we introduce a semi-implicit time discretisation that has the benefit of decoupling the equations. We prove the positivity, boundedness, and convergence of the time-discrete solutions to the time-continuous solution. Then, we introduce an iterative linearisation scheme to solve the resulting nonlinear time-discrete problems. Under weak assumptions on the time-step size, we prove that the scheme converges irrespective of the space discretisation and mesh. Moreover, if the problem is non-degenerate, the convergence becomes faster as the time-step size decreases. Finally, employing the finite element method for the spatial discretisation, we study the behaviour of the scheme, and compare its performance to other commonly used schemes. These tests confirm that the proposed scheme is robust and fast.
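
To make the flavour of such an iterative linearisation concrete, the following sketch (a hypothetical toy problem, not the authors' scheme) applies an L-scheme-type iteration to a one-dimensional problem $\partial_t b(u) - \partial_{xx} u = 0$ with a monotone nonlinearity $b$, using finite differences in space; the stabilisation constant, the nonlinearity, and the mesh are illustrative assumptions.

    import numpy as np

    # Toy problem: d/dt b(u) - u_xx = 0 on (0,1), u = 0 on the boundary (illustrative only).
    b = lambda u: u + u**3              # monotone nonlinearity (assumed for the sketch)
    Lstab = 4.0                         # stabilisation constant, L >= sup b' on the range of u
    nx, tau, tol = 99, 1e-2, 1e-10
    h = 1.0 / (nx + 1)
    x = np.linspace(h, 1 - h, nx)
    A = (np.diag(2 * np.ones(nx)) - np.diag(np.ones(nx - 1), 1) - np.diag(np.ones(nx - 1), -1)) / h**2
    M = Lstab * np.eye(nx) + tau * A    # one fixed linear operator for every iteration
    u_old = np.sin(np.pi * x)           # initial condition

    for n in range(10):                 # time steps
        u = u_old.copy()
        for j in range(200):            # linearisation iterations: only linear solves
            rhs = Lstab * u - b(u) + b(u_old)
            u_new = np.linalg.solve(M, rhs)
            done = np.linalg.norm(u_new - u) < tol
            u = u_new
            if done:
                break
        u_old = u

In this sketch the linear operator is fixed across iterations, one reason such linearisations are attractive compared with Newton-type methods, whose Jacobian changes at every iteration.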

Related content

We advocate for a new paradigm of cosmological likelihood-based inference, leveraging recent developments in machine learning and its underlying technology, to accelerate Bayesian inference in high-dimensional settings. Specifically, we combine (i) emulation, where a machine learning model is trained to mimic cosmological observables, e.g. CosmoPower-JAX; (ii) differentiable and probabilistic programming, e.g. JAX and NumPyro, respectively; (iii) scalable Markov chain Monte Carlo (MCMC) sampling techniques that exploit gradients, e.g. Hamiltonian Monte Carlo; and (iv) decoupled and scalable Bayesian model selection techniques that compute the Bayesian evidence purely from posterior samples, e.g. the learned harmonic mean implemented in harmonic. This paradigm allows us to carry out a complete Bayesian analysis, including both parameter estimation and model selection, in a fraction of the time of traditional approaches. First, we demonstrate the application of this paradigm on a simulated cosmic shear analysis for a Stage IV survey in 37- and 39-dimensional parameter spaces, comparing $\Lambda$CDM and a dynamical dark energy model ($w_0w_a$CDM). We recover posterior contours and evidence estimates that are in excellent agreement with those computed by the traditional nested sampling approach while reducing the computational cost from 8 months on 48 CPU cores to 2 days on 12 GPUs. Second, we consider a joint analysis between three simulated next-generation surveys, each performing a 3x2pt analysis, resulting in 157- and 159-dimensional parameter spaces. Standard nested sampling techniques are simply not feasible in this high-dimensional setting, requiring a projected 12 years of compute time on 48 CPU cores; on the other hand, the proposed approach only requires 8 days of compute time on 24 GPUs. All packages used in our analyses are publicly available.
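
As a hedged, minimal sketch of the gradient-based sampling ingredient (a toy two-parameter model with a stand-in for an emulator; the parameter spaces, likelihoods, and emulators of the actual analyses are far richer):

    import jax.numpy as jnp
    import jax.random as random
    import numpyro
    import numpyro.distributions as dist
    from numpyro.infer import MCMC, NUTS

    # Hypothetical stand-in for an emulator mapping parameters to a data vector.
    def toy_emulator(theta):
        return jnp.array([theta[0] + theta[1], theta[0] - theta[1], theta[0] * theta[1]])

    observed = jnp.array([1.0, 0.2, 0.3])   # mock data vector
    sigma = 0.05                            # assumed Gaussian noise level

    def model():
        theta = numpyro.sample("theta", dist.Normal(jnp.zeros(2), 2.0 * jnp.ones(2)))
        numpyro.sample("data", dist.Normal(toy_emulator(theta), sigma), obs=observed)

    mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000)
    mcmc.run(random.PRNGKey(0))
    samples = mcmc.get_samples()["theta"]   # posterior samples, from which the evidence
                                            # could subsequently be estimated (e.g. with harmonic)

Because NUTS uses gradients obtained by automatic differentiation through the (emulated) likelihood, the same pattern scales to the high-dimensional settings described above.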

The simulation of nanophotonic structures relies on electromagnetic solvers, which play a crucial role in understanding their behavior. However, these solvers often come with a significant computational cost, making their application in design tasks, such as optimization, impractical. To address this challenge, machine learning techniques have been explored for accurate and efficient modeling and design of photonic devices. Deep neural networks, in particular, have gained considerable attention in this field. They can be used to create both forward and inverse models. An inverse modeling approach avoids the need to couple a forward model with an optimizer and directly predicts the optimal design parameter values. In this paper, we propose an inverse modeling method for nanophotonic structures based on a mixture density network model enhanced by transfer learning. Mixture density networks can predict multiple possible solutions at a time, including their respective importance, as Gaussian distributions. However, mixture density network models face several challenges. An important one is that an upper bound on the number of possible simultaneous solutions must be specified in advance. Another is that all model parameters must be jointly optimized, which can be computationally expensive. Moreover, optimizing all parameters simultaneously can be numerically unstable and can lead to degenerate predictions. The proposed approach overcomes these limitations through transfer learning-based techniques, while preserving high accuracy in predicting design solutions from a given optical response. A dimensionality reduction step is also explored. Numerical results validate the proposed method.
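
As a rough, hedged sketch of the mixture density network ingredient alone (the architecture, transfer-learning stages, and dimensionality reduction of the proposed method are not reproduced here), a network head can output mixture weights, means, and scales for $K$ candidate designs and be trained on the mixture negative log-likelihood:

    import torch
    import torch.nn as nn

    class MDN(nn.Module):
        # Maps an optical response to a K-component Gaussian mixture over design parameters.
        def __init__(self, in_dim, out_dim, K=5, hidden=64):
            super().__init__()
            self.K, self.out_dim = K, out_dim
            self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
            self.logits = nn.Linear(hidden, K)                  # mixture weights
            self.means = nn.Linear(hidden, K * out_dim)         # component means
            self.log_scales = nn.Linear(hidden, K * out_dim)    # component scales (log)

        def forward(self, x):
            h = self.trunk(x)
            means = self.means(h).view(-1, self.K, self.out_dim)
            scales = self.log_scales(h).view(-1, self.K, self.out_dim).exp()
            mix = torch.distributions.Categorical(logits=self.logits(h))
            comp = torch.distributions.Independent(torch.distributions.Normal(means, scales), 1)
            return torch.distributions.MixtureSameFamily(mix, comp)

    # One training step on toy (response, design) pairs with illustrative dimensions.
    model = MDN(in_dim=100, out_dim=4)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x, y = torch.randn(32, 100), torch.randn(32, 4)
    loss = -model(x).log_prob(y).mean()
    opt.zero_grad(); loss.backward(); opt.step()

The number of components $K$ is exactly the upper bound mentioned above, and the jointly optimized heads are the parameters whose training transfer learning is meant to stabilise.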

This work develops a functional analytic framework for making computer-assisted arguments involving transverse heteroclinic connecting orbits between hyperbolic periodic solutions of ordinary differential equations. We exploit a Fourier-Taylor approximation of the local stable/unstable manifold of the periodic orbit, combined with a numerical method for solving two-point boundary value problems via Chebyshev series approximations. The a posteriori analysis developed provides mathematically rigorous bounds on all approximation errors, yielding both abstract existence results and quantitative information about the true heteroclinic solution. Example calculations are given for both the dissipative Lorenz system and the Hamiltonian Hill Restricted Four Body Problem.
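
As an illustration of the Chebyshev ingredient alone (a toy linear two-point boundary value problem, not the nonlinear connecting-orbit problem treated in the paper), a truncated Chebyshev series can be collocated as follows:

    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Toy BVP: u'' + u = 1 on [-1, 1] with u(-1) = u(1) = 0, solved by collocation.
    N = 20                                                # truncation of the Chebyshev series
    e = np.eye(N)                                         # coefficient vectors of T_0, ..., T_{N-1}
    xc = np.cos(np.pi * np.arange(1, N - 1) / (N - 1))    # interior collocation points
    rows, rhs = [], []
    for x in xc:                                          # residual u'' + u - 1 = 0 at each point
        rows.append([C.chebval(x, C.chebder(e[k], 2)) + C.chebval(x, e[k]) for k in range(N)])
        rhs.append(1.0)
    for x in (-1.0, 1.0):                                 # boundary conditions
        rows.append([C.chebval(x, e[k]) for k in range(N)])
        rhs.append(0.0)
    coeffs = np.linalg.solve(np.array(rows), np.array(rhs))
    u = lambda x: C.chebval(x, coeffs)                    # approximate solution as a Chebyshev series

The rapid decay of the computed coefficients is the kind of information an a posteriori analysis can exploit to bound the truncation error rigorously.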

Generative diffusion models apply the concept of Langevin dynamics in physics to machine learning, attracting significant interest from industrial applications, but a complete picture of their inherent mechanisms is still lacking. In this paper, we provide a transparent physics analysis of diffusion models, deriving the fluctuation theorem, entropy production, and Franz-Parisi potential to understand the intrinsic phase transitions discovered recently. Our analysis is rooted in non-equilibrium physics and concepts from equilibrium physics, i.e., treating both the forward and backward dynamics as Langevin dynamics, and treating the reverse diffusion generative process as statistical inference, where the time-dependent state variables serve as quenched disorder as studied in spin glass theory. This unified principle is expected to guide machine learning practitioners to design better algorithms and theoretical physicists to link machine learning to non-equilibrium thermodynamics.
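
As a toy, hedged illustration of this forward/backward Langevin picture (a one-dimensional Gaussian example in which the score is known in closed form; it is not the analysis of the paper):

    import numpy as np

    # Forward OU process dx = -x dt + sqrt(2) dW drives data N(mu0, s0^2) toward N(0, 1).
    mu0, s0, T, nsteps = 2.0, 0.5, 4.0, 400
    dt = T / nsteps

    def score(x, t):
        # Exact score of the forward marginal p_t = N(mu0*e^{-t}, s0^2*e^{-2t} + 1 - e^{-2t}).
        m = mu0 * np.exp(-t)
        v = s0**2 * np.exp(-2 * t) + 1.0 - np.exp(-2 * t)
        return -(x - m) / v

    rng = np.random.default_rng(0)
    x = rng.standard_normal(100_000)          # start the reverse process from the prior N(0, 1)
    for k in range(nsteps, 0, -1):            # reverse-time Euler-Maruyama / Langevin steps
        t = k * dt
        drift = -x - 2.0 * score(x, t)        # reverse drift: f(x) - g^2 * score
        x = x - drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(x.size)

    print(x.mean(), x.std())                  # should approach mu0 and s0

Replacing the exact score by a learned one turns this reverse Langevin dynamics into the generative sampler whose thermodynamic cost the quantities above are meant to quantify.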

A new model is presented to predict hydrogen-assisted fatigue. The model combines a phase field description of fracture and fatigue, stress-assisted hydrogen diffusion, and a toughness degradation formulation with cyclic and hydrogen contributions. Hydrogen-assisted fatigue crack growth predictions exhibit excellent agreement with experiments over all the scenarios considered, spanning multiple load ratios, H2 pressures and loading frequencies. These are obtained without any calibration against hydrogen-assisted fatigue data, taking as input only mechanical and hydrogen transport material properties, the material's fatigue characteristics (from a single test in air), and the sensitivity of fracture toughness to hydrogen content. Furthermore, the model is used to determine (i) suitable test loading frequencies for obtaining conservative data, and (ii) the degree of underestimation incurred when samples are not pre-charged. The model can handle both laboratory specimens and large-scale engineering components, enabling the Virtual Testing paradigm in infrastructure exposed to hydrogen environments and cyclic loading.
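
For orientation only, a commonly used form of the stress-assisted hydrogen transport and toughness degradation ingredients (standard in this class of models; the paper's exact formulation may differ) is
\[
\frac{\partial C}{\partial t} = \nabla \cdot \left( D \nabla C \right) - \nabla \cdot \left( \frac{D\, C\, \bar{V}_H}{R\, T} \nabla \sigma_h \right), \qquad G_c(\theta_H) = g(\theta_H)\, G_c(0),
\]
where $C$ is the lattice hydrogen concentration, $D$ the diffusivity, $\bar{V}_H$ the partial molar volume of hydrogen, $\sigma_h$ the hydrostatic stress, and $g(\theta_H)$ a degradation function of the hydrogen surface coverage that lowers the fracture toughness entering the phase field description.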

In this manuscript, we present a collective multigrid algorithm to efficiently solve the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty, and develop a novel convergence analysis of collective smoothers and collective two-level methods. The multigrid algorithm is based on a collective smoother that at each iteration sweeps over the nodes of the computational mesh, solving a reduced saddle-point system whose size is proportional to the number $N$ of samples used to discretize the probability space. We show that this reduced system can be solved with optimal $O(N)$ complexity. The multigrid method is tested both as a stationary method and as a preconditioner for GMRES on three problems: a linear-quadratic problem, possibly with a local or a boundary control, for which the multigrid method is used to solve the linear optimality system directly; a nonsmooth problem with box constraints and $L^1$-norm penalization on the control, in which the multigrid scheme is used as an inner solver within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, where the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits excellent performance and robustness with respect to the parameters of interest.
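
To fix ideas, a sample-based discretisation of a linear-quadratic model problem, $\min \frac{1}{2N}\sum_{k} \| y_k - y_d \|_M^2 + \frac{\beta}{2} \| u \|_M^2$ subject to $A_k y_k = f_k + B u$ for $k = 1, \dots, N$, leads (schematically; the weighting and setting of the paper may differ) to the coupled optimality system
\[
\frac{1}{N} M (y_k - y_d) + A_k^{\top} p_k = 0, \qquad \beta M u - \sum_{k=1}^{N} B^{\top} p_k = 0, \qquad A_k y_k - B u = f_k,
\]
and the collective smoother restricts this system to the degrees of freedom attached to a single mesh node, giving local saddle-point problems whose size grows linearly with $N$.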

We analyze a bilinear optimal control problem for the Stokes--Brinkman equations: the control variable enters the state equations as a coefficient. In two- and three-dimensional Lipschitz domains, we perform a complete continuous analysis that includes the existence of solutions and first- and second-order optimality conditions. We also develop two finite element methods that differ fundamentally in whether the admissible control set is discretized or not. For each of the proposed methods, we perform a convergence analysis and derive a priori error estimates; the latter under the assumption that the domain is convex. Finally, assuming that the domain is Lipschitz, we develop an a posteriori error estimator for each discretization scheme and obtain a global reliability bound.
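
Schematically (with the precise functional setting and admissible set as in the paper), the problem is of the type
\[
\min_{q \in Q_{ad}} \; \frac{1}{2} \int_{\Omega} | \mathbf{y} - \mathbf{y}_d |^2 \, dx + \frac{\alpha}{2} \int_{\Omega} q^2 \, dx
\quad \text{subject to} \quad
- \nu \Delta \mathbf{y} + q\, \mathbf{y} + \nabla p = \mathbf{f}, \quad \nabla \cdot \mathbf{y} = 0 \ \text{in } \Omega,
\]
with homogeneous boundary conditions on $\mathbf{y}$, so that the control $q$ acts as a (nonnegative) reaction-type coefficient in the Stokes--Brinkman system.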

We study the logical structure of the Teichmüller-Tukey lemma, a maximality principle equivalent to the axiom of choice, and show that it corresponds to the generalisation to arbitrary cardinals of update induction, a well-foundedness principle from constructive mathematics classically equivalent to the axiom of dependent choice. From there, we state general forms of maximality and well-foundedness principles equivalent to the axiom of choice, including a variant of Zorn's lemma. A comparison with the general class of choice and bar induction principles given by Brede and the first author is initiated.
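
For reference, a family $\mathcal{F}$ of sets has finite character when a set belongs to $\mathcal{F}$ exactly if all of its finite subsets do, and the Teichmüller-Tukey lemma states that every nonempty family of finite character has an inclusion-maximal element:
\[
\mathcal{F} \neq \emptyset \ \wedge\ \big( \forall X,\ X \in \mathcal{F} \iff \forall Y \subseteq X \ (Y \text{ finite} \Rightarrow Y \in \mathcal{F}) \big) \ \Longrightarrow\ \exists M \in \mathcal{F}\ \forall X \in \mathcal{F}\ ( M \subseteq X \Rightarrow M = X ).
\]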

Maximal regularity is a class of a priori estimates for parabolic-type equations that plays an important role in the theory of nonlinear differential equations. The aim of this paper is to investigate the temporally discrete counterpart of maximal regularity for the discontinuous Galerkin (DG) time-stepping method. We establish such an estimate, without a logarithmic factor, over a quasi-uniform temporal mesh. To show the main result, we introduce a temporally regularized Green's function and then reduce the discrete maximal regularity to a weighted error estimate for its DG approximation. Our results should be useful for the investigation of DG approximations of nonlinear parabolic problems.
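
Recall that, in the time-continuous setting, maximal $L^p$-regularity for the abstract parabolic problem $u'(t) + A u(t) = f(t)$, $u(0) = 0$, is the estimate
\[
\| u' \|_{L^p(0,T;X)} + \| A u \|_{L^p(0,T;X)} \le C \, \| f \|_{L^p(0,T;X)};
\]
the paper establishes a discrete counterpart of such bounds for the DG time-stepping method, without a logarithmic factor, on quasi-uniform temporal meshes.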

We present a subspace method based on neural networks for solving the partial differential equation in weak form with high accuracy. The basic idea of our method is to use functions constructed from neural networks as basis functions to span a subspace, and then find an approximate solution in this subspace. Training the basis functions and finding an approximate solution can be decoupled: different methods can be used to train the basis functions, and different methods can be used to compute the approximate solution. In this paper, we find an approximate solution of the partial differential equation in weak form. Our method can achieve high accuracy at a low training cost. Numerical examples show that the cost of training these basis functions is low: only one hundred to two thousand epochs are needed for most tests. The error of our method can fall below the level of $10^{-7}$ for some tests. The proposed method performs well in terms of both accuracy and computational cost.
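
A minimal, hedged sketch of the subspace idea (fixed, randomly initialised tanh features standing in for trained networks, applied to a one-dimensional Poisson problem in weak form; the training stage and the actual architecture of the paper are omitted):

    import numpy as np

    # Galerkin solve of -u'' = f on (0,1), u(0) = u(1) = 0, in a subspace spanned by
    # fixed random tanh "network" features; x*(1-x) enforces the boundary conditions.
    rng = np.random.default_rng(1)
    m = 30                                        # number of basis functions
    w, b = 10 * rng.standard_normal(m), rng.standard_normal(m)

    def phi(x):                                   # basis functions evaluated at points x
        return np.outer(x * (1 - x), np.ones(m)) * np.tanh(np.outer(x, w) + b)

    def dphi(x):                                  # their first derivatives
        t = np.tanh(np.outer(x, w) + b)
        return np.outer(1 - 2 * x, np.ones(m)) * t + np.outer(x * (1 - x), w) * (1 - t**2)

    f = lambda x: np.pi**2 * np.sin(np.pi * x)    # manufactured right-hand side
    xq = np.linspace(0, 1, 2001)                  # quadrature grid (trapezoidal rule)
    wq = np.full_like(xq, xq[1] - xq[0]); wq[[0, -1]] *= 0.5

    D = dphi(xq)
    K = D.T @ (wq[:, None] * D)                   # stiffness matrix  K_ij = int phi_i' phi_j'
    F = phi(xq).T @ (wq * f(xq))                  # load vector       F_i  = int f phi_i
    c = np.linalg.lstsq(K, F, rcond=None)[0]      # least squares handles ill-conditioning
    u_h = phi(xq) @ c                             # approximate solution in the subspace
    print(np.max(np.abs(u_h - np.sin(np.pi * xq))))   # error against the exact solution

Training the features, rather than drawing them at random as in this sketch, is what the paper relies on to make the spanned subspace accurate for the problem at hand.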
