
We analyze the finite element discretization of distributed elliptic optimal control problems with variable energy regularization, where the usual $L^2(\Omega)$ norm regularization term with a constant regularization parameter $\varrho$ is replaced by a suitable representation of the energy norm in $H^{-1}(\Omega)$ involving a variable, mesh-dependent regularization parameter $\varrho(x)$. It turns out that the error between the computed finite element state $\widetilde{u}_{\varrho h}$ and the desired state $\bar{u}$ (target) is optimal in the $L^2(\Omega)$ norm provided that $\varrho(x)$ behaves like the local mesh size squared. This is especially important when adaptive meshes are used in order to approximate discontinuous target functions. The adaptive scheme can be driven by the computable and localizable error norm $\| \widetilde{u}_{\varrho h} - \bar{u}\|_{L^2(\Omega)}$ between the finite element state $\widetilde{u}_{\varrho h}$ and the target $\bar{u}$. The numerical results not only illustrate our theoretical findings, but also show that the iterative solvers for the discretized reduced optimality system are very efficient and robust.
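The following is a minimal one-dimensional finite-difference sketch of the idea, not the paper's finite element scheme: on a uniform mesh the variable choice $\varrho(x) \sim h(x)^2$ collapses to the constant $\varrho = h^2$, and minimizing $\frac{1}{2}\|u-\bar u\|_{L^2}^2 + \frac{1}{2}\varrho\|u\|_{H^{-1}}^2$ with the discrete $H^{-1}$ norm realized via the inverse of the discrete Laplacian $K$ leads to the linear system $(K + \varrho I)u = K\bar u$. All problem data below are our own illustrative choices.

```python
import numpy as np

# 1D finite-difference analogue of energy regularization (a sketch, not the
# paper's FEM scheme): minimize 0.5*||u - u_bar||^2 + 0.5*rho*||u||_{H^-1}^2,
# whose optimality condition is (K + rho*I) u = K u_bar, with K the discrete
# negative Laplacian realizing the discrete H^-1 inner product via K^{-1}.
n = 200                      # interior grid points on (0, 1)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Tridiagonal discrete Laplacian with homogeneous Dirichlet conditions.
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

u_bar = np.sin(np.pi * x)    # smooth target for this sanity check
rho = h**2                   # the paper's scaling: rho ~ (local mesh size)^2

u = np.linalg.solve(K + rho * np.eye(n), K @ u_bar)
err = np.sqrt(h) * np.linalg.norm(u - u_bar)   # discrete L2 norm of the error
```

For this smooth target the error behaves like $\varrho = h^2$, consistent with the optimal $L^2$ rate claimed above.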


In practical applications, data is used to make decisions in two steps: estimation and optimization. First, a machine learning model estimates parameters for a structural model relating decisions to outcomes. Second, a decision is chosen to optimize the structural model's predicted outcome as if its parameters were correctly estimated. Due to its flexibility and simple implementation, this ``estimate-then-optimize'' approach is often used for data-driven decision-making. Errors in the estimation step can lead estimate-then-optimize to sub-optimal decisions that result in regret, i.e., a difference in value between the decision made and the best decision available with knowledge of the structural model's parameters. We provide a novel bound on this regret for smooth and unconstrained optimization problems. Using this bound, in settings where estimated parameters are linear transformations of sub-Gaussian random vectors, we provide a general procedure for experimental design to minimize the regret resulting from estimate-then-optimize. We demonstrate our approach on simple examples and a pandemic control application.
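The two-step pipeline and the resulting regret can be sketched in a toy scalar problem (the structural model, noise level, and sample size below are our own illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy estimate-then-optimize pipeline: estimate a structural parameter from
# data, then optimize as if the estimate were exact, and measure the regret.
theta_true = 2.0                                         # unknown parameter
samples = theta_true + 0.5 * rng.standard_normal(1000)   # estimation data

# Step 1 (estimate): fit theta by its sample mean.
theta_hat = samples.mean()

# Step 2 (optimize): maximize the predicted outcome f(x) = -(x - theta_hat)^2,
# acting as if theta_hat were correct; the maximizer is x = theta_hat.
decision = theta_hat

# Regret: value gap versus the best decision under the true parameter.
best_value = -(theta_true - theta_true) ** 2             # = 0
regret = best_value - (-(decision - theta_true) ** 2)    # = (theta_hat - theta_true)^2
```

In this smooth unconstrained case the regret is exactly the squared estimation error, which is the kind of quantity the bound above controls.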

We apply the Monte Carlo method to solve the Dirichlet problem for linear parabolic equations with the fractional Laplacian. This method exploits the idea of weak approximation of the related stochastic differential equations driven by the symmetric stable L\'evy process with jumps. We utilize the jump-adapted scheme to approximate the L\'evy process, which gives the exact exit time to the boundary. When the solution has low regularity, we establish a numerical scheme by removing the small jumps of the L\'evy process and then show the convergence order. When the solution has higher regularity, we build a higher-order numerical scheme by replacing small jumps with a simple process and then display the higher convergence order. Finally, numerical experiments, including ten- and one-hundred-dimensional cases, are presented, which confirm the theoretical estimates and show the numerical efficiency of the proposed schemes for high-dimensional parabolic equations.
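The probabilistic representation behind such methods can be sanity-checked in the classical case $\alpha = 2$ (Brownian motion, ordinary Laplacian) rather than the paper's jump-adapted stable scheme: the harmonic function $u$ on $(0,1)$ with $u(0)=0$, $u(1)=1$ satisfies $u(x) = \mathbb{P}_x(\text{exit through } 1) = x$, which a crude Euler-Maruyama simulation recovers. All parameters below are our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo for a Dirichlet problem in the classical alpha = 2 case:
# estimate u(x0) = P_x0(Brownian motion exits (0,1) through 1), which
# equals x0 for the harmonic function with u(0) = 0, u(1) = 1.
def mc_dirichlet(x0, n_paths=5000, dt=1e-3, max_steps=100000):
    x = np.full(n_paths, x0)
    alive = np.ones(n_paths, dtype=bool)
    exit_right = np.zeros(n_paths, dtype=bool)
    for _ in range(max_steps):
        if not alive.any():
            break
        x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        exit_right |= alive & (x >= 1.0)      # absorbed at the right boundary
        alive &= (x > 0.0) & (x < 1.0)        # still inside the domain
    return exit_right.mean()                  # Monte Carlo estimate of u(x0)

u_hat = mc_dirichlet(0.3)
```

The discrete monitoring of the exit time introduces an $O(\sqrt{dt})$ bias, which is exactly the kind of boundary error the jump-adapted scheme above avoids by hitting the exit time exactly.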

Spatially inhomogeneous functions, which may be smooth in some regions and rough in other regions, are modelled naturally in a Bayesian manner using so-called Besov priors which are given by random wavelet expansions with Laplace-distributed coefficients. This paper studies theoretical guarantees for such prior measures - specifically, we examine their frequentist posterior contraction rates in the setting of non-linear inverse problems with Gaussian white noise. Our results are first derived under a general local Lipschitz assumption on the forward map. We then verify the assumption for two non-linear inverse problems arising from elliptic partial differential equations, the Darcy flow model from geophysics as well as a model for the Schr\"odinger equation appearing in tomography. In the course of the proofs, we also obtain novel concentration inequalities for penalized least squares estimators with $\ell^1$ wavelet penalty, which have a natural interpretation as maximum a posteriori (MAP) estimators. The true parameter is assumed to belong to some spatially inhomogeneous Besov class $B^{\alpha}_{11}$, $\alpha>0$. In a setting with direct observations, we complement these upper bounds with a lower bound on the rate of contraction for arbitrary Gaussian priors. An immediate consequence of our results is that while Laplace priors can achieve minimax-optimal rates over $B^{\alpha}_{11}$-classes, Gaussian priors are limited to a (by a polynomial factor) slower contraction rate. This gives information-theoretical justification for the intuition that Laplace priors are more compatible with $\ell^1$ regularity structure in the underlying parameter.
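A draw from such a prior is easy to sketch with the Haar basis; the level-wise scaling exponent below is one common normalization for $B^{\alpha}_{11}$-type priors in $d = 1$ and should be treated as an assumption, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal draw from a Haar-based Besov prior: a random wavelet expansion
# with i.i.d. Laplace coefficients scaled by 2^{-j*s} across levels j,
# where s = alpha + 1/2 - 1/p with p = 1 is one common choice.
def sample_besov(J=8, alpha=1.0):
    n = 2 ** J
    u = rng.laplace() * np.ones(n)          # coarse-scale (father wavelet) term
    for j in range(J):
        xi = rng.laplace(size=2 ** j)       # level-j Laplace coefficients
        scale = 2.0 ** (-j * (alpha - 0.5)) # 2^{-j(alpha + 1/2 - 1/p)}, p = 1
        width = n // 2 ** j
        for k in range(2 ** j):
            psi = np.zeros(n)               # L2-normalized Haar wavelet psi_{jk}
            half = width // 2
            psi[k * width : k * width + half] = 2.0 ** (j / 2)
            psi[k * width + half : (k + 1) * width] = -2.0 ** (j / 2)
            u += scale * xi[k] * psi
    return u

u = sample_besov()
```

The heavy tails of the Laplace coefficients are what allow occasional large wavelet coefficients, producing the locally rough, locally smooth samples described above.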

Tyler's M-estimator is a well-known procedure for robust and heavy-tailed covariance estimation. Tyler himself suggested an iterative fixed-point algorithm for computing his estimator; however, it requires super-linear (in the size of the data) runtime per iteration, which may be prohibitive at large scale. In this work we propose, to the best of our knowledge, the first Frank-Wolfe-based algorithms for computing Tyler's estimator. One variant uses standard Frank-Wolfe steps, the second also considers \textit{away-steps} (AFW), and the third is a \textit{geodesic} version of AFW (GAFW). AFW provably requires, up to a log factor, only linear time per iteration, while GAFW runs in linear time (up to a log factor) in a large $n$ (number of data-points) regime. All three variants are shown to provably converge to the optimal solution with sublinear rate, under standard assumptions, despite the fact that the underlying optimization problem is not convex nor smooth. Under an additional fairly mild assumption, that holds with probability 1 when the (normalized) data-points are i.i.d. samples from a continuous distribution supported on the entire unit sphere, AFW and GAFW are proved to converge with linear rates. Importantly, all three variants are parameter-free and use adaptive step-sizes.
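For reference, the classical fixed-point iteration mentioned above (the baseline, not the proposed Frank-Wolfe variants) is short to state; note the $O(np^2 + p^3)$ cost per iteration that motivates the linear-time alternatives. The data-generating setup is our own.

```python
import numpy as np

rng = np.random.default_rng(3)

# Tyler's classical fixed-point iteration: reweight each sample by the
# inverse Mahalanobis form x_i^T S^{-1} x_i, then renormalize the trace
# (Tyler's estimator is defined only up to scale).
def tyler(X, n_iter=100):
    n, p = X.shape
    S = np.eye(p)
    for _ in range(n_iter):
        w = 1.0 / np.einsum('ij,jk,ik->i', X, np.linalg.inv(S), X)
        S = (p / n) * (X * w[:, None]).T @ X
        S *= p / np.trace(S)                 # fix the scale: trace(S) = p
    return S

n, p = 500, 5
A = rng.standard_normal((p, p))
X = rng.standard_normal((n, p)) @ A.T        # Gaussian data, covariance A A^T
S = tyler(X)
```

Each iteration touches every data point through the quadratic form, which is the super-linear per-iteration cost the abstract refers to.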

This paper is concerned with low-rank matrix optimization, which has found a wide range of applications in machine learning. This problem in the special case of matrix sensing has been studied extensively through the notion of Restricted Isometry Property (RIP), leading to a wealth of results on the geometric landscape of the problem and the convergence rate of common algorithms. However, the existing results can handle the problem with a general objective function and noisy data only when the RIP constant is close to 0. In this paper, we develop a new mathematical framework to solve the above-mentioned problem with a far less restrictive RIP constant. We prove that as long as the RIP constant of the noiseless objective is less than $1/3$, any spurious local solution of the noisy optimization problem must be close to the ground truth solution. By working through the strict saddle property, we also show that an approximate solution can be found in polynomial time. We characterize the geometry of the spurious local minima of the problem in a local region around the ground truth in the case when the RIP constant is greater than $1/3$. Compared to the existing results in the literature, this paper offers the strongest RIP bound and provides a complete theoretical analysis on the global and local optimization landscapes of general low-rank optimization problems under random corruptions from any finite-variance family.
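A tiny matrix-sensing instance illustrates the setting (our own toy setup, noiseless and initialized near the ground truth to show benign local behavior; it is not the paper's framework): rank-1 $M^* = u^* u^{*\top}$, Gaussian measurements $b_i = \langle A_i, M^*\rangle$, and gradient descent on the factored objective.

```python
import numpy as np

rng = np.random.default_rng(4)

# Gradient descent on f(U) = (1/2m) sum_i (<A_i, U U^T> - b_i)^2 for a
# rank-1 ground truth, with symmetric Gaussian sensing matrices A_i.
d, m = 5, 100
u_star = rng.standard_normal(d)
u_star /= np.linalg.norm(u_star)
A = rng.standard_normal((m, d, d))
A = (A + A.transpose(0, 2, 1)) / 2                 # symmetrize the A_i
b = np.einsum('mij,i,j->m', A, u_star, u_star)     # exact measurements

U = u_star[:, None] + 0.3 * rng.standard_normal((d, 1))  # start near the truth
eta, losses = 0.02, []
for _ in range(2000):
    r = np.einsum('mij,ik,jk->m', A, U, U) - b     # residuals <A_i, UU^T> - b_i
    losses.append(0.5 * np.mean(r ** 2))
    grad = 2.0 * np.einsum('m,mij,jk->ik', r, A, U) / m  # uses symmetry of A_i
    U -= eta * grad
```

With $m$ comfortably above the degrees of freedom, the measurement operator concentrates (a RIP-like effect) and plain gradient descent drives the loss to zero from this local initialization.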

Informative interim adaptations lead to random sample sizes. The random sample size becomes a component of the sufficient statistic and estimation based solely on observed samples or on the likelihood function does not use all available statistical evidence. The total Fisher Information (FI) is decomposed into the design FI and a conditional-on-design FI. The FI unspent by the interim adaptation is used to determine the lower mean squared error in post-adaptation estimation. Theoretical results are illustrated with simple normal samples collected according to a two-stage design with a possibility of early stopping.
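The core point, that an informative random sample size makes naive estimation misleading, can be seen in a small simulation (the design, threshold, and sample sizes below are our own illustration, not the paper's two-stage design):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy two-stage design: collect n1 normal samples, stop early if their mean
# exceeds a threshold, otherwise collect n2 more. The naive pooled mean
# ignores that the realized sample size carries information about the data
# and is therefore biased under the true mean of 0.
n1, n2, c, reps = 10, 10, 0.5, 20000
estimates = np.empty(reps)
for i in range(reps):
    stage1 = rng.standard_normal(n1)          # true mean is 0
    if stage1.mean() > c:                     # informative early stopping
        estimates[i] = stage1.mean()
    else:
        stage2 = rng.standard_normal(n2)
        estimates[i] = np.concatenate([stage1, stage2]).mean()

bias = estimates.mean()                        # positive: naive estimator is biased
```

Accounting for the design-level Fisher information, as the abstract describes, is what corrects for exactly this kind of selection effect.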

We consider energy stable summation by parts finite difference methods (SBP-FD) for the homogeneous and piecewise homogeneous dynamic beam equation (DBE). Previously the constant coefficient problem has been solved with SBP-FD together with penalty terms (SBP-SAT) to impose boundary conditions. In this work we revisit this problem and compare SBP-SAT to the projection method (SBP-P). We also consider the DBE with discontinuous coefficients and present novel SBP-SAT, SBP-P and hybrid SBP-SAT-P discretizations for imposing interface conditions. Numerical experiments show that all methods considered are similar in terms of accuracy, but that SBP-P can be more computationally efficient (less restrictive time step requirement for explicit time integration methods) for both the constant and piecewise constant coefficient problems.
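The SBP property itself is easy to check numerically. The beam equation requires fourth-derivative SBP operators, so the snippet below uses only the simplest illustration: the standard second-order accurate first-derivative operator $D = H^{-1}Q$ with $Q + Q^\top = \mathrm{diag}(-1, 0, \dots, 0, 1)$, for which discrete integration by parts holds exactly.

```python
import numpy as np

rng = np.random.default_rng(6)

# Verify the SBP identity u^T H (D v) + (D u)^T H v = u_N v_N - u_0 v_0
# for the classical second-order SBP first-derivative operator.
n, h = 21, 1.0 / 20
H = h * np.diag(np.r_[0.5, np.ones(n - 2), 0.5])   # diagonal norm (quadrature)
D = np.zeros((n, n))
D[0, :2] = [-1.0, 1.0]                             # one-sided boundary stencils
D[-1, -2:] = [-1.0, 1.0]
for i in range(1, n - 1):
    D[i, i - 1], D[i, i + 1] = -0.5, 0.5           # central interior stencil
D /= h

u, v = rng.standard_normal(n), rng.standard_normal(n)
lhs = u @ H @ (D @ v) + (D @ u) @ H @ v
rhs = u[-1] * v[-1] - u[0] * v[0]
```

This exact mimicry of integration by parts is what lets SBP-SAT and SBP-P discretizations deliver provable energy stability once boundary and interface conditions are imposed.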

We propose and analyze exact and inexact regularized Newton-type methods for finding a global saddle point of a \textit{convex-concave} unconstrained min-max optimization problem. Compared to their first-order counterparts, investigations of second-order methods for min-max optimization are relatively limited, as obtaining global rates of convergence with second-order information is much more involved. In this paper, we highlight how second-order information can be used to speed up the dynamics of dual extrapolation methods despite inexactness. Specifically, we show that the proposed algorithms generate iterates that remain within a bounded set and the averaged iterates converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a gap function. Our algorithms match the theoretically established lower bound in this context and our analysis provides a simple and intuitive convergence analysis for second-order methods without requiring any compactness assumptions. Finally, we present a series of numerical experiments on synthetic and real data that demonstrate the efficiency of the proposed algorithms.
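As a first-order point of comparison (not the paper's second-order method), the extragradient scheme, a close relative of the dual extrapolation dynamics mentioned above, already shows why extrapolation matters on the bilinear saddle problem $\min_x \max_y\, xy$, where plain gradient descent-ascent diverges:

```python
import numpy as np

# Extragradient on min_x max_y x*y, whose unique saddle point is the origin.
# F is the monotone operator (grad_x f, -grad_y f) for f(x, y) = x*y.
def F(z):
    x, y = z
    return np.array([y, -x])

z, eta = np.array([1.0, 1.0]), 0.5
for _ in range(300):
    z_half = z - eta * F(z)     # extrapolation (prediction) step
    z = z - eta * F(z_half)     # correction step at the predicted point
gap = np.linalg.norm(z)         # distance to the saddle point (0, 0)
```

Each iteration contracts the distance to the saddle point by a fixed factor here; the second-order methods in the abstract accelerate this kind of dynamics to the optimal $O(\epsilon^{-2/3})$ rate.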

The existing discrete variational derivative method is only second-order accurate and fully implicit. In this paper, we propose a framework to construct an arbitrarily high-order implicit (original) energy stable scheme and a second-order semi-implicit (modified) energy stable scheme. Combined with the Runge--Kutta process, we can build an arbitrarily high-order and unconditionally (original) energy stable scheme based on the discrete variational derivative method. The new energy stable scheme is implicit and leads to a large sparse nonlinear algebraic system at each time step, which can be efficiently solved by using an inexact Newton type algorithm. To avoid solving nonlinear algebraic systems, we then present a relaxed discrete variational derivative method, which can construct second-order, linear, and unconditionally (modified) energy stable schemes. Several numerical simulations are performed to investigate the efficiency, stability, and accuracy of the newly proposed schemes.
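The discrete variational derivative idea can be shown in a minimal scalar example (our own illustration, not the paper's high-order scheme): for the gradient flow $u' = -E'(u)$ with $E(u) = u^4/4$, replacing $E'(u)$ by the discrete gradient $dE(w,u) = (E(w)-E(u))/(w-u) = (w^3 + w^2 u + w u^2 + u^3)/4$ yields $E(u_{n+1}) - E(u_n) = -\tau\, dE^2 \le 0$ for any step size $\tau$.

```python
# Unconditionally energy-stable discrete-gradient step for u' = -u^3,
# i.e. the gradient flow of E(u) = u^4/4.
def energy(u):
    return 0.25 * u ** 4

def step(u, tau):
    # Solve (w - u)/tau + (w^3 + w^2*u + w*u^2 + u^3)/4 = 0 by bisection;
    # for our data the residual is negative at w = 0 and positive at w = u.
    def g(w):
        return (w - u) / tau + (w ** 3 + w ** 2 * u + w * u ** 2 + u ** 3) / 4
    lo, hi = 0.0, u
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

u, tau = 2.0, 0.5                      # deliberately large time step
energies = [energy(u)]
for _ in range(20):
    u = step(u, tau)
    energies.append(energy(u))
```

The energy decreases monotonically at every step despite the large $\tau$; each step requires a scalar nonlinear solve, mirroring the nonlinear algebraic systems discussed above.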

Modified Patankar-Runge-Kutta (MPRK) methods preserve the positivity as well as conservativity of a production-destruction system (PDS) of ordinary differential equations for all time step sizes. As a result, higher order MPRK schemes do not belong to the class of general linear methods, i.e. the iterates are generated by a nonlinear map $\mathbf g$ even when the PDS is linear. Moreover, due to the conservativity of the method, the map $\mathbf g$ possesses non-hyperbolic fixed points. Recently, a new theorem for the investigation of stability properties of non-hyperbolic fixed points of a nonlinear iteration map was developed. We apply this theorem to understand the stability properties of a family of second order MPRK methods when applied to a nonlinear PDS of ordinary differential equations. It is shown that the fixed points are stable for all time step sizes and members of the MPRK family. Finally, experiments are presented to numerically support the theoretical claims.
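The Patankar weighting is easiest to see in the first-order Modified Patankar-Euler scheme, a lower-order relative of the second-order MPRK family above, applied to the linear PDS $u' = -au + bv$, $v' = au - bv$ (our own illustrative example): production and destruction terms are weighted by the ratios $u_{n+1}/u_n$, $v_{n+1}/v_n$, turning each step into a linear solve that is unconditionally positive and exactly conservative.

```python
import numpy as np

# Modified Patankar-Euler for the linear production-destruction system
#   u' = -a*u + b*v,   v' = a*u - b*v.
# The Patankar weights make the update
#   u_new = u + dt*(b*v*(v_new/v) - a*u*(u_new/u))
#   v_new = v + dt*(a*u*(u_new/u) - b*v*(v_new/v)),
# i.e. a linear system with an M-matrix, hence positivity for any dt.
def mpe_step(u, v, a, b, dt):
    M = np.array([[1.0 + dt * a, -dt * b],
                  [-dt * a, 1.0 + dt * b]])
    return np.linalg.solve(M, np.array([u, v]))

a, b, dt = 5.0, 1.0, 10.0            # deliberately huge time step
u, v = 0.9, 0.1
for _ in range(50):
    u, v = mpe_step(u, v, a, b, dt)

total = u + v                        # conserved quantity (= 1 up to round-off)
```

Summing the two update equations shows $u_{n+1} + v_{n+1} = u_n + v_n$ exactly, and the nonlinear dependence of the update map on $(u_n, v_n)$ even for this linear PDS is precisely why MPRK schemes fall outside the class of general linear methods.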
