
We study the Bayesian inverse problem of inferring the log-normal slowness function of the eikonal equation from noisy observations of its solution at a set of spatial points. We approximate the posterior probability measure by solving, with the Fast Marching Method, a truncated eikonal equation whose slowness retains only a finite number of terms of the Karhunen-Loeve expansion. The error of this approximation in the Hellinger metric is bounded in terms of the truncation level of the slowness and the grid size of the Fast Marching Method. Since the plain Markov Chain Monte Carlo procedure for sampling the posterior probability is well known to be prohibitively expensive, we develop a Multilevel Markov Chain Monte Carlo method and justify its convergence. Using the heap-sort procedure in the Fast Marching solution of the forward eikonal equation, our Multilevel Markov Chain Monte Carlo method achieves a prescribed level of accuracy for approximating the posterior expectation of quantities of interest, requiring only an essentially optimal level of complexity. Numerical examples confirm the theoretical results.
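
To make the forward solver concrete, the sketch below implements a minimal first-order Fast Marching Method on a uniform 2D grid, with Python's heapq module supplying the heap that the complexity analysis relies on. The grid setup, function names, and the standard upwind quadratic update are illustrative choices, not details taken from the paper.

```python
import heapq
import numpy as np

def fast_marching(slowness, source, h=1.0):
    """First-order Fast Marching Method on a uniform 2D grid.

    Solves the eikonal equation |grad T| = slowness with T = 0 at the
    grid point `source`, processing points in increasing T via a min-heap.
    """
    n, m = slowness.shape
    T = np.full((n, m), np.inf)
    accepted = np.zeros((n, m), dtype=bool)
    T[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if accepted[i, j]:
            continue
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            p, q = i + di, j + dj
            if not (0 <= p < n and 0 <= q < m) or accepted[p, q]:
                continue
            # smallest accepted neighbour along each axis (inf if none)
            a = min((T[p + d, q] for d in (-1, 1)
                     if 0 <= p + d < n and accepted[p + d, q]), default=np.inf)
            b = min((T[p, q + d] for d in (-1, 1)
                     if 0 <= q + d < m and accepted[p, q + d]), default=np.inf)
            f = h * slowness[p, q]
            if abs(a - b) < f:   # two-sided upwind quadratic update
                t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
            else:                # one-sided update
                t_new = min(a, b) + f
            if t_new < T[p, q]:
                T[p, q] = t_new
                heapq.heappush(heap, (t_new, (p, q)))
    return T
```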

Related Content

In this paper, we develop a new change detection algorithm for detecting a change in the Markov kernel over a metric space when the post-change kernel is unknown. Under the assumption that the pre- and post-change Markov kernels are geometrically ergodic, we derive an upper bound on the mean delay and a lower bound on the mean time between false alarms.
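
For orientation, the sketch below shows a classical CUSUM stopping rule for Markov observations with a known post-change kernel; the paper's detector handles the harder unknown-post-change setting, so this is only an illustrative baseline, and `log_lr` is a hypothetical user-supplied transition log-likelihood ratio.

```python
def cusum_alarm(samples, log_lr, threshold):
    """Classical CUSUM for Markov data (illustrative baseline only).

    log_lr(x_prev, x) is the log-likelihood ratio of the post- vs.
    pre-change Markov kernel for the transition x_prev -> x. Raising
    `threshold` lengthens the mean time between false alarms at the
    price of a larger mean detection delay.
    """
    W = 0.0
    for k in range(1, len(samples)):
        W = max(0.0, W + log_lr(samples[k - 1], samples[k]))
        if W >= threshold:
            return k        # alarm time
    return None             # no alarm raised
```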

Structural identifiability is a property of a differential model with parameters that allows the parameters to be determined from the model equations in the absence of noise. The method of input-output equations is one approach to verifying structural identifiability. It stands out in importance because the additional insight it provides can be used to analyze and improve models. However, its complete theoretical grounds and applicability are still to be established. A subtle point, and the key to this method working correctly, is whether the coefficients of these equations are themselves identifiable. In this paper, to address this, we prove identifiability of the coefficients of input-output equations for classes of differential models that often appear in practice, such as linear models with one output and linear compartment models in which, from each compartment, one can reach either a leak or an input. This shows that checking identifiability via input-output equations is legitimate for these models and, as we prove, that the field of identifiable functions is generated by the coefficients of the input-output equations. Finally, we exploit a connection between input-output equations and the transfer function matrix to show that, for a linear compartment model with an input and a strongly connected graph, the field of all identifiable functions is generated by the coefficients of the transfer function matrix even when the initial conditions are generic.
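
As a small worked example of the transfer-function route, the SymPy sketch below computes the transfer function of a hypothetical two-compartment linear model with a leak, an input into the first compartment, and output from the first compartment; its numerator and denominator coefficients are the candidate generators of the field of identifiable functions. The model is illustrative, not one analyzed in the paper.

```python
import sympy as sp

a01, a12, a21, s = sp.symbols('a01 a12 a21 s')
# x' = A x + B u, y = x1: two compartments, leak a01 from compartment 1,
# input into compartment 1, output from compartment 1 (a hypothetical example).
A = sp.Matrix([[-(a01 + a21), a12],
               [a21, -a12]])
B = sp.Matrix([1, 0])
C = sp.Matrix([[1, 0]])

H = sp.cancel((C * (s * sp.eye(2) - A).inv() * B)[0, 0])
num, den = sp.fraction(H)
# Coefficients of the transfer function: generators of the field of
# identifiable functions for such models.
print(sp.Poly(num, s).all_coeffs())  # [1, a12]
print(sp.Poly(den, s).all_coeffs())  # [1, a01 + a12 + a21, a01*a12]
```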

We study the computational complexity of the zigzag sampling algorithm for strongly log-concave distributions. The zigzag process has the advantages that its implementation requires no time discretization and that each proposed bouncing event requires only one evaluation of a partial derivative of the potential, while its convergence rate is dimension-independent. Using these properties, we prove that the zigzag sampling algorithm achieves $\varepsilon$ error in chi-square divergence at a computational cost equivalent to $O\bigl(\kappa^2 d^\frac{1}{2}(\log\frac{1}{\varepsilon})^{\frac{3}{2}}\bigr)$ gradient evaluations in the regime $\kappa \ll \frac{d}{\log d}$ under a warm start assumption, where $\kappa$ is the condition number and $d$ is the dimension.
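
The absence of time discretization is easiest to see in one dimension. For a standard Gaussian potential $U(x) = x^2/2$, the switching rate of the zigzag process is piecewise linear in time, so event times can be drawn exactly by inverting the integrated rate; the sketch below does this, and is a toy illustration rather than the paper's algorithm.

```python
import numpy as np

def zigzag_gaussian_1d(T_final, x0=0.0, v0=1.0, rng=None):
    """Zigzag process for a standard 1D Gaussian (potential U(x) = x^2/2).

    The bounce rate lambda(t) = max(0, v * (x + v * t)) is piecewise
    linear, so each switching time is obtained in closed form by
    inverting the integrated rate against an Exp(1) draw.
    Returns the event skeleton (times, positions, velocities).
    """
    rng = rng or np.random.default_rng()
    t, x, v = 0.0, x0, v0
    times, xs, vs = [t], [x], [v]
    while t < T_final:
        e = rng.exponential()
        a = v * x
        if a >= 0:
            tau = -a + np.sqrt(a * a + 2 * e)   # rate positive from the start
        else:
            tau = -a + np.sqrt(2 * e)           # rate is zero until t = -a
        t += tau
        x += v * tau
        v = -v                                  # bounce: flip the velocity
        times.append(t); xs.append(x); vs.append(v)
    return np.array(times), np.array(xs), np.array(vs)
```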

Ensemble Kalman inversion (EKI) is a technique for the numerical solution of inverse problems. A great advantage of EKI's ensemble approach is that its implementation requires no derivatives. Theoretically, however, EKI's ensemble size needs to surpass the dimension of the problem, because of EKI's "subspace property": the EKI solution is a linear combination of the initial ensemble. We show that the ensemble can break out of this initial subspace when "localization" is applied. In essence, localization enforces an assumed correlation structure onto the problem and is heavily used in ensemble Kalman filtering and data assimilation. We describe and analyze how to apply localization to the EKI, and how localization helps the EKI ensemble break out of the initial subspace. Specifically, we show that the localized EKI (LEKI) ensemble will collapse to a single point (as intended) and that the LEKI ensemble mean will converge to the global optimum at a sublinear rate. Under strict assumptions on the localization procedure and observation process, we further show that the data misfit decays uniformly. We illustrate our ideas and theoretical developments with numerical examples on simplified toy problems, a Lorenz model, and an inversion of electromagnetic data, where some of our mathematical assumptions may only be approximately valid.
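
A minimal sketch of one localized EKI update is given below, with localization applied as a Schur (element-wise) product taper on the empirical covariances, which is how localization typically enforces an assumed correlation structure in ensemble Kalman methods; the interface and taper matrices are hypothetical, not the paper's exact scheme.

```python
import numpy as np

def leki_step(U, G, y, Gamma, L_ug, L_gg, rng=None):
    """One step of localized ensemble Kalman inversion (a sketch).

    U          : (d, J) parameter ensemble
    G          : forward map, u -> predicted data vector of length m
    y, Gamma   : observed data and observation-noise covariance
    L_ug, L_gg : localization tapers applied as Schur products
    """
    rng = rng or np.random.default_rng()
    d, J = U.shape
    GU = np.column_stack([G(U[:, j]) for j in range(J)])
    dU = U - U.mean(axis=1, keepdims=True)
    dG = GU - GU.mean(axis=1, keepdims=True)
    C_ug = L_ug * (dU @ dG.T) / (J - 1)   # localized cross-covariance (d x m)
    C_gg = L_gg * (dG @ dG.T) / (J - 1)   # localized data covariance (m x m)
    # perturbed observations, one noisy copy of y per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), Gamma, J).T
    return U + C_ug @ np.linalg.solve(C_gg + Gamma, Y - GU)
```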

Motivated by applications in reinforcement learning (RL), we study a nonlinear stochastic approximation (SA) algorithm under Markovian noise and establish its finite-sample convergence bounds under various stepsizes. Specifically, we show that when using a constant stepsize (i.e., $\alpha_k\equiv \alpha$), the algorithm achieves exponentially fast convergence to a neighborhood (with radius $O(\alpha\log(1/\alpha))$) around the desired limit point. When using diminishing stepsizes with an appropriate decay rate, the algorithm converges at rate $O(\log(k)/k)$. Our proof is based on Lyapunov drift arguments, and to handle the Markovian noise we exploit the fast mixing of the underlying Markov chain. To demonstrate the generality of our theoretical results on Markovian SA, we use them to derive finite-sample bounds for the popular $Q$-learning with linear function approximation algorithm, under a condition on the behavior policy. Importantly, we do not assume that the samples are i.i.d., and we do not require an artificial projection step in the algorithm to maintain the boundedness of the iterates. Numerical simulations corroborate our theoretical results.
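
As an example of this Markovian SA setting, the sketch below implements Q-learning with linear function approximation along a single trajectory generated by a fixed behavior policy; note the absence of any i.i.d. sampling or projection step. The environment interface (`reset()`, `step(a)` returning the next state and reward) and the feature map `phi` are hypothetical.

```python
import numpy as np

def q_learning_lfa(env, phi, num_actions, alpha=0.1, gamma=0.95,
                   steps=10_000, rng=None):
    """Q-learning with linear function approximation (a sketch).

    Approximates Q(s, a) = phi(s, a) @ w from one Markovian trajectory
    under a fixed (uniformly random) behavior policy, with a constant
    stepsize and no projection of the iterates.
    """
    rng = rng or np.random.default_rng()
    s = env.reset()
    w = np.zeros_like(phi(s, 0), dtype=float)
    for _ in range(steps):
        a = int(rng.integers(num_actions))        # behavior policy
        s_next, r = env.step(a)                   # hypothetical interface
        q_next = max(phi(s_next, b) @ w for b in range(num_actions))
        td_error = r + gamma * q_next - phi(s, a) @ w
        w += alpha * td_error * phi(s, a)         # SA update
        s = s_next
    return w
```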

We study the problem of constructing the control driving a controlled differential equation from discrete observations of the response. By restricting the control to the space of piecewise linear paths, we identify the assumptions that ensure uniqueness. The main contribution of this paper is a novel numerical algorithm for constructing the piecewise linear control that converges uniformly in time. Uniform convergence is needed for many applications, and it is achieved by approaching the problem through the signature representation of the paths, which allows us to work with the whole path simultaneously.
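
To illustrate the signature representation, the sketch below computes the first two signature levels of a piecewise linear path via Chen's identity (each linear segment with increment $D$ contributes $D$ at level one and $D \otimes D / 2$ at level two); this is a hypothetical helper, not the reconstruction algorithm itself.

```python
import numpy as np

def signature_level2(path):
    """Levels 1 and 2 of the signature of a piecewise linear path.

    path : (n, d) array of vertices. On each linear segment with
    increment D, the segment signature is D at level 1 and
    outer(D, D) / 2 at level 2; segments are combined with
    Chen's identity S2(X*Y) = S2(X) + S2(Y) + S1(X) (x) S1(Y).
    """
    d = path.shape[1]
    s1 = np.zeros(d)
    s2 = np.zeros((d, d))
    for k in range(len(path) - 1):
        D = path[k + 1] - path[k]
        s2 += np.outer(s1, D) + 0.5 * np.outer(D, D)  # Chen's identity
        s1 += D                                        # update level 1 last
    return s1, s2
```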

Numerically solving differential equations with fractional derivatives requires eliminating the singularity inherent in the standard definition of fractional derivatives. The method of integration by parts for eliminating this singularity is well known. It allows one to solve some equations but increases the order of the equation and sometimes leads to wrong numerical results or instability. We suggest another approach: eliminating the singularity by substitution. It does not increase the order of the equation, and its numerical implementation provides the opportunity to define the fractional derivative as the limit of a discretization. We present a sufficient condition for the substitution-generated difference approximation to be well-conditioned. We demonstrate how some equations can be solved using this method with full confidence that the solution is accurate to at least second order of approximation.
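
One standard substitution of this kind, for the Caputo derivative of order $\alpha \in (0,1)$, is $u = (t-s)^{1-\alpha}$, which turns the singular kernel $(t-s)^{-\alpha}$ into a constant factor. The sketch below applies a plain midpoint rule to the transformed integral; this substitution and the crude central-difference derivative are illustrative choices, not necessarily those of the paper.

```python
import numpy as np
from math import gamma

def caputo_by_substitution(f, t, alpha, n=1000, eps=1e-6):
    """Caputo derivative D^alpha f(t), alpha in (0, 1), via substitution.

    D^alpha f(t) = 1/Gamma(1-alpha) * int_0^t f'(s) (t-s)^(-alpha) ds.
    With u = (t-s)^(1-alpha) the integral becomes
    1/(1-alpha) * int_0^{t^(1-alpha)} f'(t - u^(1/(1-alpha))) du,
    whose integrand is bounded, so the midpoint rule converges cleanly.
    """
    b = t ** (1.0 - alpha)
    u = (np.arange(n) + 0.5) * (b / n)            # midpoint nodes in u
    s = t - u ** (1.0 / (1.0 - alpha))
    fp = (f(s + eps) - f(s - eps)) / (2.0 * eps)  # central-difference f'
    return fp.sum() * (b / n) / ((1.0 - alpha) * gamma(1.0 - alpha))
```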

Determining process-structure-property linkages is one of the key objectives in materials science, and uncertainty quantification plays a critical role in understanding both process-structure and structure-property linkages. In this work, we seek to learn a distribution of microstructure parameters that is consistent in the sense that the forward propagation of this distribution through a crystal plasticity finite element model (CPFEM) matches a target distribution on materials properties. This stochastic inversion formulation infers a distribution of acceptable/consistent microstructures, as opposed to a deterministic solution, which expands the range of feasible designs in a probabilistic manner. To solve this stochastic inverse problem, we employ a recently developed uncertainty quantification (UQ) framework based on push-forward probability measures, which combines techniques from measure theory and Bayes' rule to define a unique and numerically stable solution. This approach requires making an initial prediction using an initial guess for the distribution on model inputs and solving a stochastic forward problem. To reduce the computational burden of solving both the stochastic forward and stochastic inverse problems, we combine this approach with a machine learning (ML) Bayesian regression model based on Gaussian processes and demonstrate the proposed methodology on two representative case studies in structure-property linkages.
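
The core of the push-forward framework can be sketched as a rejection sampler: the initial distribution is pushed through the quantity-of-interest map, the push-forward density is estimated, and initial samples are accepted in proportion to the ratio of the observed density to the push-forward density. The helper names below and the KDE estimate of the push-forward are hypothetical simplifications of the framework the work employs.

```python
import numpy as np
from scipy.stats import gaussian_kde

def data_consistent_samples(prior_samples, Q, obs_density, rng=None):
    """Rejection sampler for the push-forward (data-consistent) update.

    prior_samples : (N, k) array of draws from the initial distribution
    Q             : map from a parameter vector to a scalar QoI
    obs_density   : callable, density of the target (observed) QoI

    Accepts a sample lam with probability proportional to
    obs_density(Q(lam)) / predicted_density(Q(lam)).
    """
    rng = rng or np.random.default_rng()
    q = np.array([Q(lam) for lam in prior_samples])
    predicted = gaussian_kde(q)          # push-forward of the prior through Q
    r = obs_density(q) / predicted(q)    # density ratio on the QoI
    accept = rng.uniform(size=len(r)) < r / r.max()
    return prior_samples[accept]
```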

We present a method for solving linear and nonlinear PDEs based on the variable projection (VarPro) framework and artificial neural networks (ANN). For linear PDEs, enforcing the boundary/initial value problem on the collocation points leads to a separable nonlinear least squares problem for the network coefficients. We reformulate this problem by the VarPro approach to eliminate the linear output-layer coefficients, leading to a reduced problem in the hidden-layer coefficients only. The reduced problem is solved first, by the nonlinear least squares method, to determine the hidden-layer coefficients; the output-layer coefficients are then computed by the linear least squares method. For nonlinear PDEs, enforcing the boundary/initial value problem on the collocation points leads to a nonlinear least squares problem that is not separable, which precludes the VarPro strategy for such problems. To enable the VarPro approach for nonlinear PDEs, we first linearize the problem with a Newton iteration, using a particular form of linearization. The linearized system is solved by the VarPro framework together with ANNs. Upon convergence of the Newton iteration, the network coefficients provide the representation of the solution field to the original nonlinear problem. We present ample numerical examples with linear and nonlinear PDEs to demonstrate the performance of the method. For smooth field solutions, the errors of the current method decrease exponentially as the number of collocation points or the number of output-layer coefficients increases. We compare the current method with the ELM method from a previous work. Under identical conditions and network configurations, the current method exhibits accuracy significantly superior to the ELM method.
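
The elimination of the linear output-layer coefficients can be illustrated on a generic separable least squares problem: for fixed hidden-layer parameters $\theta$, the optimal linear coefficients solve a linear least squares problem, and substituting them back leaves a reduced nonlinear problem in $\theta$ alone. The sketch below (with a hypothetical `features` callable standing in for the hidden-layer output matrix) shows this reduced formulation; it omits the PDE collocation details and the Jacobian refinements of full VarPro.

```python
import numpy as np
from scipy.optimize import least_squares

def varpro_fit(features, target, theta0):
    """Variable projection for a separable least squares problem.

    Model: target ~ features(theta) @ c. The linear coefficients c are
    eliminated by solving a linear least squares problem inside the
    residual, leaving a reduced nonlinear problem in theta only.
    """
    def reduced_residual(theta):
        Phi = features(theta)                         # n x m feature matrix
        c, *_ = np.linalg.lstsq(Phi, target, rcond=None)
        return Phi @ c - target                       # projected residual

    sol = least_squares(reduced_residual, theta0)     # nonlinear LSQ in theta
    Phi = features(sol.x)
    c, *_ = np.linalg.lstsq(Phi, target, rcond=None)  # recover linear coeffs
    return sol.x, c
```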

Implicit probabilistic models are models defined naturally in terms of a sampling procedure; they often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, yet can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.
