
In this paper, we propose high-order numerical methods to solve a 2D advection-diffusion equation in the highly oscillatory regime. We use an integrator strategy that allows the construction of arbitrarily high-order schemes, leading to an accurate approximation of the solution without any time step-size restriction. This paper focuses on the multiscale challenges in time of the problem, which come from the velocity, an $\varepsilon$-periodic function whose expression is explicitly known. $\varepsilon$-uniform, third-order-in-time numerical approximations are obtained. For the space discretization, this strategy is combined with high-order finite difference schemes. Numerical experiments show that the proposed methods achieve the expected order of accuracy, as validated by several tests across diverse domains and boundary conditions. The novelty of the paper lies in introducing a numerical scheme that is high-order accurate in both space and time, with particular attention to the dependence on a small parameter in the time scale. The high order in space is obtained by enlarging the interpolation stencil already established in [44] and further refined in [46], with special emphasis on the square boundary, especially when a Dirichlet condition is assigned. In that case, we compute an \textit{ad hoc} Taylor expansion of the solution to ensure that there is no degradation of the accuracy order at the boundary. On the other hand, the high accuracy in time is obtained by extending the work proposed in [19]. The combination of high-order accuracy in both space and time is particularly significant due to the presence of two small parameters, $\delta$ and $\varepsilon$, in space and time, respectively.
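
To make the multiscale time dependence concrete, the following toy sketch integrates a 1D periodic advection-diffusion equation $u_t + a(t/\varepsilon)\,u_x = \nu\,u_{xx}$ with an $\varepsilon$-periodic velocity, using fourth-order central differences in space and the classical RK4 method in time. It is only a baseline illustration, not the paper's scheme: the velocity profile, $\varepsilon$, $\nu$, and grid sizes are assumptions, and this explicit integrator needs $\Delta t = O(\varepsilon)$ to resolve the oscillations, which is exactly the time step-size restriction that the proposed uniformly accurate schemes remove.

import numpy as np

# Toy 1D advection-diffusion u_t + a(t/eps) u_x = nu * u_xx on [0, 2*pi), periodic BCs.
# Fourth-order central differences in space, classical RK4 in time.
# Illustrative only: a, eps, nu and the grid are assumptions, and dt must scale with eps.

eps, nu = 1e-2, 1e-2
N = 128
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]

def a(tau):
    # epsilon-periodic velocity, evaluated at tau = t / eps
    return 1.0 + 0.5 * np.cos(2.0 * np.pi * tau)

def dudx4(u):
    # fourth-order central first derivative (periodic)
    return (-np.roll(u, -2) + 8*np.roll(u, -1) - 8*np.roll(u, 1) + np.roll(u, 2)) / (12*dx)

def d2udx4(u):
    # fourth-order central second derivative (periodic)
    return (-np.roll(u, -2) + 16*np.roll(u, -1) - 30*u + 16*np.roll(u, 1) - np.roll(u, 2)) / (12*dx**2)

def rhs(t, u):
    return -a(t / eps) * dudx4(u) + nu * d2udx4(u)

u = np.sin(x)                      # smooth initial datum
T, dt = 0.5, 0.1 * eps             # explicit scheme: dt must resolve the epsilon scale
t = 0.0
while t < T - 1e-14:
    k1 = rhs(t, u)
    k2 = rhs(t + 0.5*dt, u + 0.5*dt*k1)
    k3 = rhs(t + 0.5*dt, u + 0.5*dt*k2)
    k4 = rhs(t + dt, u + dt*k3)
    u += (dt/6.0) * (k1 + 2*k2 + 2*k3 + k4)
    t += dt

print("max |u(T)| =", np.abs(u).max())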

Related Content


In this paper, a highly parallel and derivative-free martingale neural network learning method is proposed to solve Hamilton-Jacobi-Bellman (HJB) equations arising from stochastic optimal control problems (SOCPs), as well as general quasilinear parabolic partial differential equations (PDEs). In both cases, the PDEs are reformulated into a martingale formulation so that the loss functions do not require the computation of the gradient or Hessian matrix of the PDE solution, while the implementation can be parallelized in both the time and space domains. Moreover, the martingale conditions for the PDEs are enforced using a Galerkin method in conjunction with adversarial learning techniques, eliminating the need for direct computation of the conditional expectations associated with the martingale property. For SOCPs, a derivative-free implementation of the maximum principle for optimal controls is also introduced. The numerical results demonstrate the effectiveness and efficiency of the proposed method, which is capable of solving HJB and quasilinear parabolic PDEs accurately in dimensions as high as 10,000.
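
Schematically, for a semilinear parabolic PDE $\partial_t u + \mu\cdot\nabla u + \tfrac{1}{2}\operatorname{Tr}(\sigma\sigma^\top \nabla^2 u) + f(t,x,u) = 0$, a martingale reformulation of the kind described above rests on It\^o's formula (this is the generic mechanism; the paper's precise construction for quasilinear PDEs and HJB equations may differ): along the diffusion

\[
dX_t = \mu(t, X_t)\,dt + \sigma(t, X_t)\,dW_t, \qquad
M_t := u(t, X_t) + \int_0^t f\bigl(s, X_s, u(s, X_s)\bigr)\,ds,
\]

the process $M_t$ is a martingale exactly when $u$ solves the PDE, and enforcing the weak (Galerkin-type) condition $\mathbb{E}\bigl[(M_{t+\Delta t} - M_t)\,\varphi(X_t)\bigr] = 0$ for a family of test functions $\varphi$ requires neither $\nabla u$ nor $\nabla^2 u$, which is what makes the loss derivative-free and amenable to adversarial (min-max) training over the test functions.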

This paper introduces a novel approach to learning sparsity-promoting regularizers for solving linear inverse problems. We develop a bilevel optimization framework to select an optimal synthesis operator, denoted as $B$, which regularizes the inverse problem while promoting sparsity in the solution. The method leverages statistical properties of the underlying data and incorporates prior knowledge through the choice of $B$. We establish the well-posedness of the optimization problem, provide theoretical guarantees for the learning process, and present sample complexity bounds. The approach is demonstrated through examples, including compact perturbations of a known operator and the problem of learning the mother wavelet, showcasing its flexibility in incorporating prior knowledge into the regularization framework. This work extends previous efforts in Tikhonov regularization by addressing non-differentiable norms and proposing a data-driven approach for sparse regularization in infinite dimensions.
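
For orientation, the sketch below addresses only the lower-level reconstruction problem for a fixed synthesis operator: in a finite-dimensional toy setting, sparse synthesis regularization solves $\min_x \tfrac{1}{2}\|ABx - y\|_2^2 + \lambda\|x\|_1$ and reconstructs $u = Bx$, here with plain ISTA. The bilevel selection of $B$, the infinite-dimensional analysis, and the sample complexity bounds of the paper are not reproduced; $A$, $B$, $y$, $\lambda$, and the iteration count are illustrative assumptions.

import numpy as np

# Toy synthesis-sparsity regularization: min_x 0.5*||A @ B @ x - y||^2 + lam*||x||_1,
# reconstruct u = B @ x.  Plain ISTA, purely illustrative: the paper works in infinite
# dimensions and *learns* B via bilevel optimization, which is not shown here.

rng = np.random.default_rng(0)
m, n, k = 40, 80, 120                              # measurements, signal dim, synthesis coefficients
A = rng.standard_normal((m, n)) / np.sqrt(m)       # forward operator (assumed known)
B = rng.standard_normal((n, k)) / np.sqrt(n)       # synthesis operator (fixed here)
x_true = np.zeros(k)
x_true[rng.choice(k, 5, replace=False)] = 1.0
y = A @ B @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.01
M = A @ B
step = 1.0 / np.linalg.norm(M, 2) ** 2             # 1 / Lipschitz constant of the smooth part

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(k)
for _ in range(500):
    grad = M.T @ (M @ x - y)
    x = soft_threshold(x - step * grad, step * lam)

u_rec = B @ x                                      # reconstructed signal in the original space
print("nonzeros in x:", np.count_nonzero(np.abs(x) > 1e-8))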

Measurement-based quantum computation (MBQC) offers a fundamentally unique paradigm to design quantum algorithms. Indeed, due to the inherent randomness of quantum measurements, the natural operations in MBQC are not deterministic and unitary, but are rather augmented with probabilistic byproducts. Yet, the main algorithmic use of MBQC so far has been to completely counteract this probabilistic nature in order to simulate unitary computations expressed in the circuit model. In this work, we propose designing MBQC algorithms that embrace this inherent randomness and treat the random byproducts in MBQC as a resource for computation. As a natural application where randomness can be beneficial, we consider generative modeling, a task in machine learning centered around generating complex probability distributions. To address this task, we propose a variational MBQC algorithm equipped with control parameters that allow one to directly adjust the degree of randomness to be admitted in the computation. Our algebraic and numerical findings indicate that this additional randomness can lead to significant gains in expressivity and learning performance, respectively, for certain generative modeling tasks. These results highlight the potential advantages of exploiting the inherent randomness of MBQC and motivate further research into MBQC-based algorithms.
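
The random byproducts in question already appear in the elementary MBQC building block, one-qubit teleportation on a two-qubit cluster: the output state carries a byproduct operator $X^s$ that depends on the random measurement outcome $s$. The sketch below simulates this textbook step numerically (it is background only, not the paper's variational algorithm; the input state and measurement angle are arbitrary choices).

import numpy as np

# Elementary MBQC step: qubit 0 carries |psi>, qubit 1 is |+>; apply CZ, then measure
# qubit 0 in the basis (|0> +/- e^{i*theta}|1>)/sqrt(2).  The post-measurement state of
# qubit 1 is X^s H Rz(-theta)|psi> up to a global phase, where s is the random outcome,
# i.e. the probabilistic byproduct discussed in the abstract.  Textbook MBQC only.

rng = np.random.default_rng(3)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1, 1, 1, -1]).astype(complex)

def rz(theta):
    return np.diag([1.0, np.exp(1j * theta)])        # Rz up to a global phase

psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
psi /= np.linalg.norm(psi)
theta = 0.7

plus = np.array([1, 1]) / np.sqrt(2)
state = CZ @ np.kron(psi, plus)                      # entangle the input with |+>

basis = [np.array([1,  np.exp(1j * theta)]) / np.sqrt(2),   # measurement basis on qubit 0
         np.array([1, -np.exp(1j * theta)]) / np.sqrt(2)]

probs, outputs = [], []
for s in (0, 1):
    proj = np.kron(np.conj(basis[s]).reshape(1, 2), np.eye(2))  # <s_theta| on qubit 0
    out = proj @ state                                          # unnormalized qubit-1 state
    probs.append(np.linalg.norm(out) ** 2)
    outputs.append(out / np.linalg.norm(out))

s = rng.choice(2, p=np.real(probs))                  # sample the random outcome
expected = np.linalg.matrix_power(X, s) @ H @ rz(-theta) @ psi
overlap = abs(np.vdot(expected, outputs[s]))         # should be 1 up to rounding
print("outcome s =", s, " probabilities =", np.round(probs, 3), " |overlap| =", round(overlap, 6))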

In this paper, we present the numerical analysis and simulations of a multi-dimensional memristive device model. Memristive devices and memtransistors based on two-dimensional (2D) materials have demonstrated promising potential as components for next-generation artificial intelligence (AI) hardware and information technology. Our charge transport model describes the drift-diffusion of electrons, holes, and ionic defects self-consistently in an electric field. We incorporate two types of boundary models: ohmic and Schottky contacts. The coupled drift-diffusion partial differential equations are discretized using a physics-preserving Voronoi finite volume method. It relies on an implicit time-stepping scheme and the excess chemical potential flux approximation. We demonstrate that the fully discrete nonlinear scheme is unconditionally stable, preserving the free-energy structure of the continuous system and ensuring the non-negativity of carrier densities. Novel discrete entropy-dissipation inequalities for both boundary condition types in multiple dimensions allow us to prove the existence of discrete solutions. We perform multi-dimensional simulations to understand the impact of electrode configurations and device geometries, focusing on the hysteresis behavior in lateral 2D memristive devices. Three electrode configurations -- side, top, and mixed contacts -- are compared numerically for different geometries and boundary conditions. These simulations reveal the conditions under which a simplified one-dimensional electrode geometry can well represent the three electrode configurations. This work lays the foundations for developing accurate, efficient simulation tools for 2D memristive devices and memtransistors, offering tools and guidelines for their design and optimization in future applications.
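
For context, drift-diffusion charge-transport models of this type generically take a van Roosbroeck-like form coupling electrons $n$, holes $p$, and (here, positively charged with charge number $z$) ionic defects $c$ to the electrostatic potential $\psi$. The system below is only a schematic template with Boltzmann statistics and standard sign conventions; the paper's model (excess-chemical-potential fluxes, its specific boundary models, and 2D-material specifics) differs in its details:

\[
-\nabla\cdot(\varepsilon_s \nabla\psi) = q\,(p - n + z\,c + C_{\mathrm{dop}}),
\]
\[
\partial_t n = \nabla\cdot\bigl(D_n \nabla n - \mu_n n \nabla\psi\bigr) - R, \qquad
\partial_t p = \nabla\cdot\bigl(D_p \nabla p + \mu_p p \nabla\psi\bigr) - R, \qquad
\partial_t c = \nabla\cdot\bigl(D_c \nabla c + z\,\mu_c c \nabla\psi\bigr).
\]

The physics-preserving Voronoi finite volume discretization mentioned above is designed so that the discrete system inherits the free-energy dissipation structure of such equations while keeping the carrier densities non-negative.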

In this paper, we study the embedded feature selection problem in linear Support Vector Machines (SVMs), in which a cardinality constraint is employed, leading to an interpretable classification model. The problem is NP-hard due to the presence of the cardinality constraint, even though the original linear SVM amounts to a problem solvable in polynomial time. To handle the hard problem, we first introduce two mixed-integer formulations for which novel semidefinite relaxations are proposed. Exploiting the sparsity pattern of the relaxations, we decompose the problems and obtain equivalent relaxations in a much smaller cone, making the conic approaches scalable. To make the best use of the decomposed relaxations, we propose heuristics that use the information from their optimal solutions. Moreover, an exact procedure is proposed by solving a sequence of mixed-integer decomposed semidefinite optimization problems. Numerical results on classical benchmarking datasets are reported, showing the efficiency and effectiveness of our approach.
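
As a point of reference, one standard big-M mixed-integer formulation of the cardinality-constrained linear SVM (the paper's two formulations and their semidefinite relaxations may differ in detail) is

\[
\begin{aligned}
\min_{w,\,b,\,\xi,\,z}\quad & \tfrac{1}{2}\|w\|_2^2 + C \sum_{i=1}^{n} \xi_i \\
\text{s.t.}\quad & y_i\,(w^\top x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0, \qquad i = 1,\dots,n,\\
& -M z_j \le w_j \le M z_j,\ \ z_j \in \{0,1\}, \qquad j = 1,\dots,d,\\
& \textstyle\sum_{j=1}^{d} z_j \le k,
\end{aligned}
\]

where $z_j$ indicates whether feature $j$ is selected and $k$ is the cardinality budget; it is the binary variables $z$ that make the problem NP-hard and that motivate the relaxations and decomposition described above.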

In this manuscript we present the tensor-train reduced basis method, a novel projection-based reduced-order model for the efficient solution of parameterized partial differential equations. Despite their popularity and considerable computational advantages with respect to their full order counterparts, reduced-order models are typically characterized by a considerable offline computational cost. The proposed approach addresses this issue by efficiently representing high dimensional finite element quantities with the tensor train format. This method entails numerous benefits, namely, the smaller number of operations required to compute the reduced subspaces, the cheaper hyper-reduction strategy employed to reduce the complexity of the PDE residual and Jacobian, and the decreased dimensionality of the projection subspaces for a fixed accuracy. We provide a posteriori estimates that demonstrate the accuracy of the proposed method, and we test its computational performance for the heat equation and transient linear elasticity on three-dimensional Cartesian geometries.
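
As background on the format itself (not on the authors' reduced-basis pipeline), the classical TT-SVD algorithm compresses a full tensor into tensor-train cores by a sequence of truncated SVDs. The minimal sketch below uses an illustrative low-rank test tensor; the tolerance and tensor sizes are assumptions.

import numpy as np

# Classical TT-SVD: compress a full tensor into tensor-train cores via sequential
# truncated SVDs.  Illustrative background only; the paper applies the TT format to
# finite element snapshot and operator data, which is not reproduced here.

def tt_svd(tensor, tol=1e-8):
    dims = tensor.shape
    cores, r_prev = [], 1
    C = tensor.copy()
    for k in range(len(dims) - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))       # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

rng = np.random.default_rng(1)
# low-rank test tensor: outer product of smooth vectors plus tiny noise
a = np.sin(np.linspace(0, 1, 20))
b = np.cos(np.linspace(0, 2, 30))
c = np.linspace(0, 1, 40)
T = np.einsum('i,j,k->ijk', a, b, c) + 1e-10 * rng.standard_normal((20, 30, 40))

cores = tt_svd(T)
print("TT ranks:", [core.shape[0] for core in cores[1:]])
print("relative error:", np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T))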

In this work we build optimal experimental designs for the precise estimation of the functional coefficient of a function-on-function linear regression model, where both the response and the factors are continuous functions of time. After obtaining the variance-covariance matrix of the estimator of the functional coefficient that minimizes the integrated sum of squared errors, we extend the classical definition of optimal design to this estimator and provide the expressions of the A-optimal and D-optimal designs. Examples of optimal designs for dynamic experimental factors are then computed through a suitable algorithm, and we discuss different scenarios in terms of the set of basis functions used for their representation. Finally, we present an example with simulated data to illustrate the feasibility of our methodology.
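
Schematically, writing $\mathrm{Var}(\hat\beta;\xi)$ for the variance-covariance matrix of the functional-coefficient estimator under a design $\xi$ (the paper derives its exact form for the function-on-function model), the two criteria take the familiar forms

\[
\xi^*_A = \arg\min_{\xi}\ \operatorname{tr}\bigl(\mathrm{Var}(\hat\beta;\xi)\bigr),
\qquad
\xi^*_D = \arg\min_{\xi}\ \det\bigl(\mathrm{Var}(\hat\beta;\xi)\bigr),
\]

so that the A-optimal design minimizes the average variance of the estimated coefficients, while the D-optimal design minimizes their generalized variance.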

Not accounting for competing events in survival analysis can lead to biased estimates, as individuals who die from other causes do not have the opportunity to develop the event of interest. Formal definitions and considerations for causal effects in the presence of competing risks have been published, but not for the mediation analysis setting. We propose, for the first time, an approach based on the path-specific effects framework to account for competing risks in longitudinal mediation analysis with time-to-event outcomes. We do so by considering the pathway through the competing event as another mediator, which is nested within our longitudinal mediator of interest. We provide a theoretical formulation and related definitions of the effects of interest based on the mediational g-formula, as well as a detailed description of the algorithm. We also present an application of our algorithm to data from the Strong Heart Study, a prospective cohort of American Indian adults. In this application, we evaluated the mediating role of the blood pressure trajectory (measured over three visits) on the association of arsenic and cadmium (in separate models) with time to cardiovascular disease, accounting for the competing risk of death. Identifying the effects through the different paths enables us to evaluate more transparently the impact of the metals on the outcome of interest, as well as their impact through the competing risk.

We propose a procedure for the numerical approximation of invariance equations arising in the moment matching technique associated with reduced-order modeling of high-dimensional dynamical systems. The Galerkin residual method is employed to find an approximate solution to the invariance equation using a Newton iteration on the coefficients of a monomial basis expansion of the solution. These solutions to the invariance equations can then be used to construct reduced-order models. We assess the ability of the method to solve the invariance PDE system as well as to achieve moment matching and recover a system's steady-state behaviour for linear and nonlinear signal generators with system dynamics up to $n=1000$ dimensions.
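
As a concrete special case, for a linear system $\dot{x} = Ax + Bu$ driven by a linear signal generator $\dot{\omega} = S\omega$, $u = L\omega$, the invariance equation for the mapping $x = \Pi\omega$ reduces to the Sylvester equation $A\Pi + BL = \Pi S$, which can be solved directly and serves as a baseline against which a Galerkin-Newton approximation can be checked. The snippet below solves only this linear baseline with random illustrative matrices; it is not the monomial-basis Newton iteration of the paper.

import numpy as np
from scipy.linalg import solve_sylvester

# Linear baseline for moment matching: the invariance equation for x = Pi @ omega
# reduces to the Sylvester equation  A @ Pi + B @ L = Pi @ S.
# solve_sylvester(a, b, q) solves a @ X + X @ b = q, so we pass (A, -S, -B @ L).
# Matrices below are random illustrative data, not taken from the paper's examples.

rng = np.random.default_rng(2)
n, v = 20, 4                                        # system and signal-generator dimensions
A = rng.standard_normal((n, n)) - 6.0 * np.eye(n)   # shift to keep spectra of A and S disjoint
B = rng.standard_normal((n, 1))
L = rng.standard_normal((1, v))
S = np.array([[0.0, 1.0, 0.0, 0.0],                 # harmonic signal generator:
              [-1.0, 0.0, 0.0, 0.0],                # eigenvalues +/- i and +/- 2i
              [0.0, 0.0, 0.0, 2.0],
              [0.0, 0.0, -2.0, 0.0]])

Pi = solve_sylvester(A, -S, -B @ L)
print("invariance residual:", np.linalg.norm(A @ Pi + B @ L - Pi @ S))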

We prove, for stably computably enumerable formal systems, direct analogues of the first and second incompleteness theorems of G\"odel. A typical stably computably enumerable set is the set of Diophantine equations with no integer solutions, and in particular such sets are generally not computably enumerable. This gives the first extension of the second incompleteness theorem to formal systems that are not classically computable. Let us motivate this with a somewhat physical application. Let $\mathcal{H}$ be the suitable infinite-time limit (stabilization in the sense of the paper) of the mathematical output of humanity, specializing to first-order sentences in the language of arithmetic (for simplicity), and understood as a formal system. Suppose that all the relevant physical processes in the formation of $\mathcal{H}$ are Turing computable. Then, as defined, $\mathcal{H}$ may \emph{not} be computably enumerable, but it is stably computably enumerable. Thus, the classical G\"odel disjunction applied to $\mathcal{H}$ is meaningless, but applying our incompleteness theorems to $\mathcal{H}$ we obtain a sharper version of G\"odel's disjunction: assuming $\mathcal{H} \vdash PA$, either $\mathcal{H}$ is not stably computably enumerable, or $\mathcal{H}$ is not 1-consistent (in particular, not sound), or $\mathcal{H}$ cannot prove a certain true statement of arithmetic (and cannot disprove it if in addition $\mathcal{H}$ is 2-consistent).
