
We present a subspace method based on neural networks (SNN) for solving partial differential equations with high accuracy. The basic idea of our method is to use functions generated by neural networks as basis functions to span a subspace, and then to find an approximate solution in this subspace. We design two algorithms for the strong form of the partial differential equation: one enforces the equation and the initial and boundary conditions at a set of collocation points, and the other enforces the $L^2$-norm of the residual of the equation and of the initial and boundary conditions to vanish. Our method achieves high accuracy at low training cost and is free of parameters that need to be tuned by hand. Numerical examples show that the cost of training the basis functions of the subspace is low: only one hundred to two thousand epochs are needed for most tests. The error of our method can even fall below the level of $10^{-10}$ for some tests, and it significantly surpasses PINN and DGM in both accuracy and computational cost.
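
A minimal sketch of the collocation variant, assuming a 1D Poisson problem $-u''=f$ on $[0,1]$ with homogeneous Dirichlet data; the frozen random-feature tanh layer below stands in for the trained neural-network basis functions, and all names and sizes are illustrative:

```python
# Collocation with a neural-network-style basis: enforce the PDE at interior
# points and the boundary conditions at the endpoints, then solve a single
# least-squares problem for the subspace coefficients.
import numpy as np

rng = np.random.default_rng(0)
M = 60                                  # number of basis functions
w = rng.normal(scale=4.0, size=M)       # hidden-layer weights (illustrative)
b = rng.uniform(-4.0, 4.0, size=M)      # hidden-layer biases  (illustrative)

def phi(x):      # basis functions phi_j(x) = tanh(w_j x + b_j)
    return np.tanh(np.outer(x, w) + b)

def phi_xx(x):   # second derivative: tanh'' = -2 tanh (1 - tanh^2)
    t = np.tanh(np.outer(x, w) + b)
    return (w**2) * (-2.0 * t * (1.0 - t**2))

f = lambda x: np.pi**2 * np.sin(np.pi * x)   # manufactured right-hand side

xc = np.linspace(0.0, 1.0, 101)[1:-1]        # interior collocation points
A = np.vstack([-phi_xx(xc), phi(np.array([0.0, 1.0]))])
rhs = np.concatenate([f(xc), [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)  # subspace coefficients

xt = np.linspace(0.0, 1.0, 401)
print("max error vs exact sin(pi x):", np.max(np.abs(phi(xt) @ c - np.sin(np.pi * xt))))
```

Once the basis is fixed, finding the approximate solution reduces to this single linear least-squares solve, which is what keeps the cost after training low.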

Related content

We revisit the general framework introduced by Fazlyab et al. (SIAM J. Optim. 28, 2018) to construct Lyapunov functions for optimization algorithms in discrete and continuous time. For smooth, strongly convex objective functions, we relax the requirements necessary for such a construction. As a result, we are able to prove, for Polyak's ordinary differential equation and for a two-parameter family of Nesterov algorithms, rates of convergence that improve on those available in the literature. We analyse the interpretation of Nesterov algorithms as discretizations of the Polyak equation. We show that the algorithms are instances of Additive Runge-Kutta integrators and discuss the reasons why most discretizations of the differential equation do not result in optimization algorithms with acceleration. We also introduce a modification of Polyak's equation and study its convergence properties. Finally, we extend the general framework to the stochastic scenario and consider an application to randomized algorithms with acceleration for overparameterized models; again we are able to prove convergence rates that improve on those in the literature.
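
For reference, the continuous and discrete objects being compared are commonly written as follows (the paper's exact parametrization of the two-parameter Nesterov family may differ):
\[
\ddot X(t) + a\,\dot X(t) + \nabla f\big(X(t)\big) = 0,
\qquad
\begin{cases}
y_k = x_k + \beta\,(x_k - x_{k-1}),\\
x_{k+1} = y_k - h\,\nabla f(y_k),
\end{cases}
\]
with damping coefficient $a>0$ and parameters $(h,\beta)$; the discretization question is which choices of $(h,\beta)$ recover accelerated rates from the ODE.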

Cardiocirculatory mathematical models serve as valuable tools for investigating physiological and pathological conditions of the circulatory system. To investigate the clinical condition of an individual, cardiocirculatory models need to be personalized by means of calibration methods. In this study we propose a new calibration method for a lumped-parameter cardiocirculatory model. The method utilizes the correlation matrix between parameters and model outputs to calibrate the parameters according to the data. We test this calibration method, and its combination with L-BFGS-B (Limited-memory Broyden-Fletcher-Goldfarb-Shanno with Bound constraints), against L-BFGS-B alone. We show that the correlation matrix calibration method and the combined method effectively reduce the loss function of the associated optimization problem. In the case of in silico generated data, we show that the two new calibration methods are robust with respect to the initial guess of the parameters and to the presence of noise in the data. Notably, the correlation matrix calibration method achieves the best results in estimating the parameters in the case of noisy data, and it is faster than both the combined calibration method and L-BFGS-B. Finally, we present a real test case in which the two new calibration methods yield results comparable to those obtained using L-BFGS-B in terms of minimizing the loss function and fitting the clinical data. This highlights the effectiveness of the new calibration methods for clinical applications.
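
A minimal sketch of the L-BFGS-B leg of such a calibration, assuming a toy exponential pressure decay $p(t) = p_0 e^{-t/\tau}$ as a stand-in for the full lumped-parameter model; the parameter names, bounds, and synthetic data are illustrative:

```python
# Bound-constrained least-squares calibration of a toy two-parameter model
# against noisy data, using SciPy's L-BFGS-B optimizer.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(1)
data = 100.0 * np.exp(-t / 0.96) + rng.normal(0.0, 0.5, t.size)  # synthetic "clinical" data

def loss(theta):
    p0, tau = theta
    return np.sum((p0 * np.exp(-t / tau) - data) ** 2)  # least-squares misfit

# Bound constraints keep the parameters in a plausible physiological range.
res = minimize(loss, x0=[50.0, 0.5], method="L-BFGS-B",
               bounds=[(1.0, 500.0), (0.05, 5.0)])
print("estimated (p0, tau):", res.x, " final loss:", res.fun)
```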

We propose a particle method for numerically solving the Landau equation, inspired by the score-based transport modeling (SBTM) method for the Fokker-Planck equation. The method preserves important physical properties of the Landau equation, such as the conservation of mass, momentum, and energy, and the decay of the estimated entropy. We prove that matching the gradient of the logarithm of the approximate solution is enough to recover the true solution of the Landau equation with Maxwellian molecules. Several numerical experiments in low and moderately high dimensions are performed, with particular emphasis on comparing the proposed method with the traditional particle or blob method.
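
A minimal sketch of the kernel score estimate underlying the blob-type particle methods that the proposed method is compared against; the bandwidth and test density are illustrative:

```python
# Kernel ("blob") estimate of the score grad log(rho) at each particle,
# via a Gaussian kernel density estimate over the particle ensemble.
import numpy as np

def kde_score(x, eps=0.05):
    """Estimate s(x_i) ~ d/dx log rho(x_i) for a 1D particle cloud x."""
    diff = x[:, None] - x[None, :]          # pairwise differences x_i - x_j
    K = np.exp(-diff**2 / (2.0 * eps))      # Gaussian kernel matrix
    rho = K.mean(axis=1)                    # density estimate (up to a constant)
    drho = (-diff / eps * K).mean(axis=1)   # its spatial derivative
    return drho / rho

rng = np.random.default_rng(2)
x = rng.normal(size=1000)                   # particles from a standard normal
# The exact score of a standard normal is -x; check the estimator against it.
print("mean abs score error:", np.mean(np.abs(kde_score(x) + x)))
```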

Quantum hypothesis testing (QHT) has been traditionally studied from the information-theoretic perspective, wherein one is interested in the optimal decay rate of error probabilities as a function of the number of samples of an unknown state. In this paper, we study the sample complexity of QHT, wherein the goal is to determine the minimum number of samples needed to reach a desired error probability. By making use of the wealth of knowledge that already exists in the literature on QHT, we characterize the sample complexity of binary QHT in the symmetric and asymmetric settings, and we provide bounds on the sample complexity of multiple QHT. In more detail, we prove that the sample complexity of symmetric binary QHT depends logarithmically on the inverse error probability and inversely on the negative logarithm of the fidelity. As a counterpart of the quantum Stein's lemma, we also find that the sample complexity of asymmetric binary QHT depends logarithmically on the inverse type II error probability and inversely on the quantum relative entropy, provided that the type II error probability is sufficiently small. We then provide lower and upper bounds on the sample complexity of multiple QHT; improving these bounds remains an intriguing open question. The final part of our paper outlines and reviews how the sample complexity of QHT is relevant to a broad swathe of research areas and can enhance understanding of many fundamental concepts, including quantum algorithms for simulation and search, quantum learning and classification, and foundations of quantum mechanics. As such, we view our paper as an invitation to researchers coming from different communities to study and contribute to the problem of sample complexity of QHT, and we outline a number of open directions for future research.
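
In symbols, the two binary characterizations read as follows, with $F$ the fidelity, $D$ the quantum relative entropy, and the asymmetric statement holding for sufficiently small type II error probability $\varepsilon$ (constants and precise regimes are as in the paper):
\[
n^{*}_{\mathrm{sym}}(\varepsilon) = \Theta\!\left(\frac{\log(1/\varepsilon)}{-\log F(\rho,\sigma)}\right),
\qquad
n^{*}_{\mathrm{asym}}(\varepsilon) = \Theta\!\left(\frac{\log(1/\varepsilon)}{D(\rho\,\|\,\sigma)}\right).
\]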

It is well-known that randomly initialized, push-forward, fully-connected neural networks weakly converge to isotropic Gaussian processes, in the limit where the width of all layers goes to infinity. In this paper, we propose to use the angular power spectrum of the limiting field to characterize the complexity of the network architecture. In particular, we define sequences of random variables associated with the angular power spectrum, and provide a full characterization of the network complexity in terms of the asymptotic distribution of these sequences as the depth diverges. On this basis, we classify neural networks as low-disorder, sparse, or high-disorder; we show how this classification highlights a number of distinct features for standard activation functions, and in particular, sparsity properties of ReLU networks. Our theoretical results are also validated by numerical simulations.
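A minimal numerical check of the Gaussian-process limit quoted in the first sentence; the widths, depth, and tanh activation are illustrative choices, not taken from the paper:

```python
# Outputs of randomly initialized fully-connected networks at one fixed input
# should become approximately Gaussian as the width grows; the excess
# kurtosis of the output distribution is used as a simple Gaussianity check.
import numpy as np

rng = np.random.default_rng(3)

def random_net_output(width, depth, x):
    h = x
    for _ in range(depth):
        W = rng.normal(size=(width, h.size)) / np.sqrt(h.size)  # 1/sqrt(fan-in) scaling
        h = np.tanh(W @ h)
    return (rng.normal(size=width) / np.sqrt(width)) @ h

x = rng.normal(size=10)   # one fixed input vector
for width in (4, 32, 256):
    s = np.array([random_net_output(width, 3, x) for _ in range(2000)])
    z = (s - s.mean()) / s.std()
    print(f"width={width:4d}  excess kurtosis={np.mean(z**4) - 3.0:+.3f}")  # -> 0 if Gaussian
```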

This paper presents the double-activation neural network (DANN), a novel network architecture designed for solving parabolic equations with time delay. In DANN, each neuron is equipped with two activation functions to augment the network's nonlinear expressive capacity. Additionally, a new parameter is introduced in the construction of the quadratic terms in one of the two activation functions, which further enhances the network's ability to capture complex nonlinear relationships. To address the low fitting accuracy caused by the discontinuity of the solution's derivative, a piecewise fitting approach is proposed that divides the global solution domain into several subdomains. The convergence of the loss function is proven. Numerical results demonstrate the superior accuracy and faster convergence of DANN compared to the traditional physics-informed neural network (PINN).
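
A hypothetical sketch of a double-activation neuron consistent with the description above: two activations share the pre-activation, and an extra learnable parameter $a$ scales a quadratic term built from the second activation. This is one plausible reading of the abstract, not the authors' exact formulation:

```python
# Hypothetical double-activation layer: tanh plus a learnable quadratic term
# of a second activation (here sin), sharing the same pre-activation z.
import numpy as np

def dann_layer(x, W, b, a):
    """x: (batch, d_in); W: (d_in, d_out); b, a: (d_out,)."""
    z = x @ W + b
    return np.tanh(z) + a * np.sin(z) ** 2   # activation 1 + quadratic term of activation 2

rng = np.random.default_rng(4)
x = rng.normal(size=(5, 3))
out = dann_layer(x, 0.5 * rng.normal(size=(3, 8)), np.zeros(8), 0.1 * rng.normal(size=8))
print(out.shape)   # (5, 8)
```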

In this paper we propose an approach for solving systems of nonlinear equations without computing function derivatives. Motivated by the application area of tomographic absorption spectroscopy, a highly nonlinear problem with coupled variables, we consider a situation where a straightforward translation to a fixed point problem is not possible because the operators that represent the relevant systems of nonlinear equations are not self-mappings, i.e., they operate between spaces of different dimensions. To overcome this difficulty we suggest an "alternating common fixed points algorithm" that acts alternatingly on the different vector variables. This translates the original problem to a common fixed point problem, for which iterative algorithms abound, and offers a viable alternative to translation to an optimization problem, which usually requires derivative information. However, applying any of these iterative algorithms requires ascertaining the conditions that appear in their convergence theorems. To circumvent the need to verify such conditions, we propose and motivate a derivative-free algorithm that better suits the tomographic absorption spectroscopy problem at hand, and we improve it further by applying the superiorization approach. We present this along with experimental results that demonstrate our approach.
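
A minimal sketch of the alternating idea on a toy system whose two operators map between spaces of different dimensions ($\mathbb{R}\to\mathbb{R}^2$ and $\mathbb{R}^2\to\mathbb{R}$), so neither is a self-mapping; the maps are illustrative contractions, not the spectroscopy operators:

```python
# Alternate on two variable blocks: update x from y, then y from the fresh x,
# until both blocks stop moving. No derivatives are ever computed.
import numpy as np

def g1(y):   # updates x in R^2 from the current y in R
    return 0.5 * np.array([np.cos(y), np.sin(y)])

def g2(x):   # updates y in R from the freshly updated x in R^2
    return 0.5 * np.tanh(x[0] + x[1])

x, y = np.zeros(2), 0.3
for k in range(100):
    x_new = g1(y)
    y_new = g2(x_new)
    if np.abs(x_new - x).max() < 1e-12 and abs(y_new - y) < 1e-12:
        break
    x, y = x_new, y_new
print(f"converged after {k} sweeps: x={x}, y={y:.6f}")
```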

We present a subspace method based on neural networks for solving the partial differential equation in weak form with high accuracy. The basic idea of our method is to use functions generated by neural networks as basis functions to span a subspace, and then to find an approximate solution in this subspace. Training the basis functions and finding an approximate solution can be separated; that is, different methods can be used to train the basis functions, and different methods can also be used to find an approximate solution. In this paper, we find an approximate solution of the partial differential equation in the weak form. Our method can achieve high accuracy with a low training cost. Numerical examples show that the cost of training these basis functions is low: only one hundred to two thousand epochs are needed for most tests. The error of our method can fall below the level of $10^{-7}$ for some tests. The proposed method thus offers better performance in terms of accuracy and computational cost.
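
A minimal sketch of the weak-form solve with the same kind of frozen random-feature tanh basis used earlier, assuming $-u''=f$ on $[0,1]$; multiplying by the envelope $x(1-x)$ makes the basis satisfy the homogeneous Dirichlet data exactly, which is an illustrative choice:

```python
# Galerkin projection onto a neural-network-style subspace: assemble the weak
# form a(phi_i, phi_j) = int phi_i' phi_j' dx by quadrature and solve for the
# coefficients.
import numpy as np

rng = np.random.default_rng(5)
M = 60
w = rng.normal(scale=4.0, size=M)              # frozen "network" weights (illustrative)
b = rng.uniform(-4.0, 4.0, size=M)

xq = np.linspace(0.0, 1.0, 801)                # quadrature grid
wq = np.full(xq.size, xq[1] - xq[0])           # trapezoid-rule weights
wq[0] = wq[-1] = 0.5 * wq[1]

env, denv = xq * (1.0 - xq), 1.0 - 2.0 * xq
t = np.tanh(np.outer(xq, w) + b)
phi = env[:, None] * t                         # basis x(1-x)tanh(wx+b): zero at 0 and 1
dphi = denv[:, None] * t + env[:, None] * (w * (1.0 - t**2))
f = np.pi**2 * np.sin(np.pi * xq)

A = dphi.T @ (wq[:, None] * dphi)              # stiffness matrix
rhs = phi.T @ (wq * f)                         # load vector
c = np.linalg.lstsq(A, rhs, rcond=None)[0]
print("max error:", np.max(np.abs(phi @ c - np.sin(np.pi * xq))))
```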

In this paper, we study numerical approximations for stochastic differential equations (SDEs) that use adaptive step sizes. In particular, we consider a general setting where decisions to reduce step sizes are allowed to depend on the future trajectory of the underlying Brownian motion. Since these adaptive step sizes may not be previsible, the standard mean squared error analysis cannot be directly applied to show that the numerical method converges to the solution of the SDE. Building upon the pioneering work of Gaines and Lyons, we shall instead use rough path theory to establish convergence for a wide class of adaptive numerical methods on general Stratonovich SDEs (with sufficiently smooth vector fields). To our knowledge, this is the first convergence guarantee that applies to standard solvers, such as the Milstein and Heun methods, with non-previsible step sizes. In our analysis, we require the sequence of adaptive step sizes to be nested and the SDE solver to have unbiased "L\'evy area" terms in its Taylor expansion. We conjecture that for adaptive SDE solvers more generally, convergence is still possible provided the method does not introduce "L\'evy area bias". We present a simple example where the step size control can skip over previously considered times, resulting in the numerical method converging to an incorrect limit (i.e. not the Stratonovich SDE). Finally, we conclude with an experiment demonstrating the accuracy of Heun's method and a newly introduced Splitting Path-based Runge-Kutta scheme (SPaRK) when used with adaptive step sizes.
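A minimal sketch of nested adaptive stepping with Heun's method on a scalar Stratonovich SDE (scalar noise has no L\'evy area, so it sidesteps the bias issue discussed above); the step-doubling error estimate and tolerances are illustrative, and this is not the paper's SPaRK scheme:

```python
# Adaptive Heun steps on dX = f(X) dt + g(X) o dW. When a step is rejected,
# the Brownian increment is split by sampling the bridge midpoint, so refined
# increments stay nested within (and consistent with) the coarser ones.
import numpy as np

rng = np.random.default_rng(6)
f = lambda x: -x              # illustrative drift
g = lambda x: 0.4 * x         # illustrative diffusion

def heun(x, dt, dw):
    xp = x + f(x) * dt + g(x) * dw                    # Euler predictor
    return x + 0.5 * (f(x) + f(xp)) * dt + 0.5 * (g(x) + g(xp)) * dw

def adaptive_heun(x0, T, dt0, tol=1e-4):
    n0 = int(round(T / dt0))
    # Stack of (step length, Brownian increment), in time order when popped.
    stack = [(dt0, rng.normal(scale=np.sqrt(dt0))) for _ in range(n0)][::-1]
    x = x0
    while stack:
        dt, dw = stack.pop()
        one = heun(x, dt, dw)                              # one coarse step
        dwl = 0.5 * dw + rng.normal(scale=0.5 * np.sqrt(dt))  # bridge midpoint
        dwr = dw - dwl                                     # halves sum to dw: nesting
        two = heun(heun(x, 0.5 * dt, dwl), 0.5 * dt, dwr)  # two half steps
        if abs(one - two) > tol and dt > 1e-6:
            stack += [(0.5 * dt, dwr), (0.5 * dt, dwl)]    # refine this interval
        else:
            x = two                                        # accept the finer value
    return x

print(adaptive_heun(1.0, T=1.0, dt0=0.1))
```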

We study the numerical approximation of the stochastic heat equation with a distributional reaction term $b$. Under a condition on the Besov regularity of $b$, it was recently proven that a strong solution exists and is unique in the pathwise sense, within a class of H\"older continuous processes. For a suitable choice of sequence $(b^k)_{k\in \mathbb{N}}$ approximating $b$, we prove that the error between the solution $u$ of the SPDE with reaction term $b$ and its tamed Euler finite-difference scheme with mollified drift $b^k$ converges to $0$ in $L^m(\Omega)$ with a rate that depends on the Besov regularity of $b$. Two interesting cases arise: first, even when $b$ is only a (finite) measure, a rate of convergence is obtained; second, when $b$ is a bounded measurable function, the (almost) optimal rates of convergence $\frac{1}{2}-\varepsilon$ in space and $\frac{1}{4}-\varepsilon$ in time are achieved. Stochastic sewing techniques are used in the proofs, in particular to deduce new regularising properties of the discrete Ornstein-Uhlenbeck process.
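
A minimal sketch of the tamed Euler finite-difference scheme, assuming the bounded measurable case $b=\mathrm{sign}$ with $b^k(u)=\tanh(ku)$ as an illustrative mollification, Dirichlet boundary conditions, and the usual $\sqrt{\Delta t/\Delta x}$ scaling of the space-time white noise:

```python
# Explicit finite-difference step for du = (u_xx + b(u)) dt + dW on [0, 1],
# with a mollified-and-tamed drift and discretized space-time white noise.
import numpy as np

rng = np.random.default_rng(7)
J, dx = 100, 1.0 / 100
dt, steps = 5e-5, 2000        # explicit scheme: dt <= dx^2 / 2 for stability
k = 50                        # mollification parameter of b^k

u = np.sin(np.pi * np.linspace(0.0, 1.0, J + 1))        # initial condition
for _ in range(steps):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    bk = np.tanh(k * u)                                 # mollified drift b^k
    tamed = bk / (1.0 + dt * np.abs(bk))                # taming bounds the drift step
    noise = np.sqrt(dt / dx) * rng.normal(size=u.size)  # space-time white noise
    u = u + dt * (lap + tamed) + noise
    u[0] = u[-1] = 0.0                                  # Dirichlet boundary values
print("u(T=0.1): mean %.3f, std %.3f" % (u.mean(), u.std()))
```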
