
We study Tikhonov regularization for possibly nonlinear inverse problems with weighted $\ell^1$-penalization. The forward operator, mapping from a sequence space to an arbitrary Banach space, typically an $L^2$-space, is assumed to satisfy a two-sided Lipschitz condition with respect to a weighted $\ell^2$-norm and the norm of the image space. We show that in this setting approximation rates of arbitrarily high H\"older-type order in the regularization parameter can be achieved, and we characterize maximal subspaces of sequences on which these rates are attained. On these subspaces the method also converges with optimal rates in terms of the noise level with the discrepancy principle as parameter choice rule. Our analysis includes the case that the penalty term is not finite at the exact solution ('oversmoothing'). As a standard example we discuss wavelet regularization in Besov spaces $B^r_{1,1}$. In this setting we demonstrate in numerical simulations for a parameter identification problem in a differential equation that our theoretical results correctly predict improved rates of convergence for piecewise smooth unknown coefficients.
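
For orientation, the weighted $\ell^1$-Tikhonov estimator studied in this setting takes the generic form (notation is illustrative; the paper fixes its own weights and spaces)
\[
\hat{x}_\alpha^\delta \in \operatorname*{argmin}_{x} \; \big\|F(x) - y^\delta\big\|_{\mathcal{Y}}^{2} + \alpha \sum_{j} w_j |x_j|,
\]
where $F$ denotes the forward operator, $y^\delta$ the noisy data with noise level $\delta$, $(w_j)$ the weight sequence, and $\alpha > 0$ the regularization parameter chosen, e.g., by the discrepancy principle.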

Related content

We investigate minimax testing for detecting local signals or linear combinations of such signals when only indirect data is available. Naturally, in the presence of noise, signals that are too small cannot be reliably detected. In a Gaussian white noise model, we discuss upper and lower bounds for the minimal size of the signal such that testing with small error probabilities is possible. In certain situations we are able to characterize the asymptotic minimax detection boundary. Our results are applied to inverse problems such as numerical differentiation, deconvolution and the inversion of the Radon transform.
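
As an illustration of the setting (one standard formulation, not necessarily the exact model of the paper), indirect observations in the Gaussian white noise model may be written as
\[
dY(t) = (Tf)(t)\,dt + \sigma\, dW(t), \qquad t \in [0,1],
\]
where $T$ is the forward operator (for instance integration, convolution, or the Radon transform). The testing problem asks whether $f = 0$ can be distinguished from alternatives with $\|f\|$ at least some threshold $\rho(\sigma)$; the minimax detection boundary is the smallest such $\rho$ for which the sum of type I and type II error probabilities can be kept below a prescribed level.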

Confidence intervals are a standard technique for analyzing data. When applied to time series, confidence intervals are computed for each time point separately. Alternatively, we can compute confidence bands, where we are required to find the smallest area enveloping $k$ time series, where $k$ is a user parameter. Confidence bands can then be used to detect abnormal time series, not just individual observations within a time series. We show that, although the problem is NP-hard, it is possible to find an optimal confidence band for some values of $k$. We do this by considering a different problem: discovering regularized bands, where we minimize the envelope area minus the number of included time series weighted by a parameter $\alpha$. Unlike the standard confidence band problem, this problem can be solved exactly using a minimum cut. By varying $\alpha$ we can obtain solutions for various $k$. If we have a constraint $k$ for which we cannot find an appropriate $\alpha$, we demonstrate a simple algorithm that yields an $O(\sqrt{n})$ approximation guarantee by connecting the problem to the minimum $k$-union problem. This connection also implies that we cannot approximate the problem better than $O(n^{1/4})$ under some (mild) assumptions. Finally, we consider a variant where instead of minimizing the area we minimize the maximum width. Here, we demonstrate a simple 2-approximation algorithm and show that no better approximation guarantee can be achieved.
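
In symbols (notation chosen here for illustration; the paper defines its own), the regularized band problem described above reads
\[
\min_{S \subseteq \{1,\dots,n\}} \; \operatorname{area}\!\big(\operatorname{env}(S)\big) - \alpha\,|S|,
\]
where $\operatorname{env}(S)$ is the smallest band enveloping the time series in $S$; sweeping over $\alpha$ then yields optimal confidence bands for those cardinalities $k = |S|$ that are attained by some regularized solution.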

In this article, we present a local convergence analysis of a multi-step seventh-order method for solving nonlinear equations. A noteworthy point is that our analysis requires only a weak hypothesis, namely that the Fr\'echet derivative of the nonlinear operator satisfies a $\psi$-continuity condition, and therefore extends the applicability of the method to cases where both the Lipschitz and the H\"{o}lder conditions fail. Convergence is established under hypotheses on the first-order derivative alone, without involving higher-order derivatives. A strategy is devised to find a subset of the original convergence domain; as a result, the new Lipschitz constants are at least as tight as the old ones, allowing for a more precise local convergence analysis. Some numerical examples are provided to show the performance of the proposed method over some existing schemes.
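
For concreteness, one common form of such a $\psi$-continuity condition (stated here only for illustration) is
\[
\|F'(x) - F'(y)\| \le \psi(\|x - y\|) \qquad \text{for all } x, y \in D,
\]
with $\psi$ continuous, nondecreasing and $\psi(0) = 0$; the Lipschitz and H\"{o}lder conditions correspond to the special choices $\psi(t) = Lt$ and $\psi(t) = Lt^{q}$ with $0 < q \le 1$.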

Spatially inhomogeneous functions, which may be smooth in some regions and rough in other regions, are modelled naturally in a Bayesian manner using so-called Besov priors which are given by random wavelet expansions with Laplace-distributed coefficients. This paper studies theoretical guarantees for such prior measures - specifically, we examine their frequentist posterior contraction rates in the setting of non-linear inverse problems with Gaussian white noise. Our results are first derived under a general local Lipschitz assumption on the forward map. We then verify the assumption for two non-linear inverse problems arising from elliptic partial differential equations, the Darcy flow model from geophysics as well as a model for the Schr\"odinger equation appearing in tomography. In the course of the proofs, we also obtain novel concentration inequalities for penalized least squares estimators with $\ell^1$ wavelet penalty, which have a natural interpretation as maximum a posteriori (MAP) estimators. The true parameter is assumed to belong to some spatially inhomogeneous Besov class $B^{\alpha}_{11}$, $\alpha>0$. In a setting with direct observations, we complement these upper bounds with a lower bound on the rate of contraction for arbitrary Gaussian priors. An immediate consequence of our results is that while Laplace priors can achieve minimax-optimal rates over $B^{\alpha}_{11}$-classes, Gaussian priors are limited to a (by a polynomial factor) slower contraction rate. This gives information-theoretical justification for the intuition that Laplace priors are more compatible with $\ell^1$ regularity structure in the underlying parameter.
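
One standard construction of such a Besov prior, given here only as an illustration since scaling conventions vary, is the random wavelet series in dimension $d$
\[
f = \sum_{j \ge 0} \sum_{k} 2^{-j\left(\alpha + \frac{d}{2} - d\right)} \xi_{jk}\, \psi_{jk}, \qquad \xi_{jk} \overset{\text{i.i.d.}}{\sim} \mathrm{Laplace}(1),
\]
where $\{\psi_{jk}\}$ is a sufficiently regular wavelet basis.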

This article is concerned with the nonconforming finite element method for distributed elliptic optimal control problems with pointwise constraints on the control and on the gradient of the state variable. We reduce the minimization problem to a purely state-constrained minimization problem, whose solution can be characterized by a fourth-order elliptic variational inequality of the first kind. To discretize the control problem we use the bubble-enriched Morley finite element method. To ensure the existence of a solution to the discrete problem, three bubble functions corresponding to the edge means are added to the discrete space. We derive an error estimate for the state variable in an $H^2$-type energy norm. Numerical results are presented to illustrate our analytical findings.
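
Abstractly (with illustrative notation), a fourth-order elliptic variational inequality of the first kind as mentioned above has the form: find $u \in K$ such that
\[
a(u, v - u) \ge \ell(v - u) \qquad \text{for all } v \in K,
\]
where $a(\cdot,\cdot)$ is a coercive bilinear form on (a subspace of) $H^2$ associated with a fourth-order operator, $\ell$ is a bounded linear functional, and $K$ is a closed convex set encoding the pointwise constraints.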

Stochastic gradient descent with momentum (SGDM) is the dominant algorithm in many optimization scenarios, including convex optimization instances and non-convex neural network training. Yet, in the stochastic setting, momentum interferes with gradient noise, often requiring specific step size and momentum choices to guarantee convergence, let alone acceleration. Proximal point methods, on the other hand, have gained much attention due to their numerical stability and robustness against imperfect tuning. Their accelerated stochastic variants, however, have received limited attention: how momentum interacts with the stability of (stochastic) proximal point methods remains largely unstudied. To address this, we focus on the convergence and stability of the stochastic proximal point algorithm with momentum (SPPAM), and show that, under proper hyperparameter tuning, SPPAM achieves faster linear convergence to a neighborhood than the stochastic proximal point algorithm (SPPA), with a better contraction factor. In terms of stability, we show that SPPAM depends on problem constants more favorably than SGDM, allowing a wider range of step sizes and momentum parameters that lead to convergence.
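
As a hypothetical illustration of how a momentum-augmented stochastic proximal step can be implemented, shown here for a least-squares objective with a closed-form prox (the exact SPPAM update analyzed in the paper may differ), consider the following Python sketch.

    import numpy as np

    def prox_least_squares(z, a, b, eta):
        # closed-form prox of f_i(x) = 0.5*(a @ x - b)**2 with step size eta
        r = (a @ z - b) / (1.0 + eta * (a @ a))
        return z - eta * r * a

    def sppam(A, b, eta=0.5, beta=0.5, n_iters=500, seed=0):
        # assumed update: momentum extrapolation followed by a stochastic proximal step
        rng = np.random.default_rng(seed)
        n, d = A.shape
        x_prev = x = np.zeros(d)
        for _ in range(n_iters):
            i = rng.integers(n)                 # sample one component f_i
            z = x + beta * (x - x_prev)         # momentum extrapolation
            x_prev, x = x, prox_least_squares(z, A[i], b[i], eta)
        return x

    # usage: recover x_true from noisy linear measurements
    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 20))
    x_true = rng.standard_normal(20)
    b = A @ x_true + 0.01 * rng.standard_normal(500)
    print(np.linalg.norm(sppam(A, b) - x_true))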

The pressure-correction scheme is combined with an interior penalty discontinuous Galerkin method to solve the time-dependent Navier-Stokes equations. Optimal error estimates are derived for the velocity in the $L^2$ norm in time and in space. Error bounds for the discrete time derivative of the velocity and for the pressure are also established.
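
For orientation, a classical (non-incremental) pressure-correction splitting, given here only as a generic template since the scheme analyzed in the paper may be an incremental or rotational variant combined with the interior penalty discretization, advances from $t^{n}$ to $t^{n+1}$ in two steps:
\[
\frac{\tilde{u}^{n+1} - u^{n}}{\tau} - \nu \Delta \tilde{u}^{n+1} + (u^{n}\cdot\nabla)\tilde{u}^{n+1} = f^{n+1},
\qquad
\frac{u^{n+1} - \tilde{u}^{n+1}}{\tau} + \nabla p^{n+1} = 0, \quad \nabla\cdot u^{n+1} = 0,
\]
so that the second step reduces to a pressure Poisson problem followed by the correction $u^{n+1} = \tilde{u}^{n+1} - \tau \nabla p^{n+1}$.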

Focusing on hybrid dynamics that involve continuous evolution as well as discrete events, this article investigates explicit approximations for nonlinear switching diffusion systems modulated by a Markov chain. Several easily implementable explicit schemes are proposed to approximate the dynamical behavior of switching diffusion systems with locally Lipschitz continuous drift and diffusion coefficients, on both finite and infinite time intervals. Without additional restrictions beyond those guaranteeing that the exact solutions possess the relevant dynamical properties, the numerical solutions converge strongly to the exact solutions on finite horizons and, moreover, reproduce long-time dynamical properties including moment boundedness, stability and ergodicity. Some simulations and examples are provided to support the theoretical results and demonstrate the validity of the approach.
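
As a minimal sketch of one easily implementable explicit scheme (a plain Euler-Maruyama step with Markov-chain switching; the schemes in the paper use modified or truncated steps to cope with locally Lipschitz coefficients), consider the following Python example.

    import numpy as np

    def switching_euler(x0, r0, drift, diffusion, Q, T, dt, seed=0):
        # explicit Euler-Maruyama for dX_t = b(X_t, r_t) dt + sigma(X_t, r_t) dW_t,
        # where r_t is a continuous-time Markov chain with generator Q
        rng = np.random.default_rng(seed)
        m = Q.shape[0]
        x, r = np.array(x0, dtype=float), r0
        for _ in range(int(T / dt)):
            dW = np.sqrt(dt) * rng.standard_normal(x.shape)
            x = x + drift(x, r) * dt + diffusion(x, r) * dW
            # first-order approximation of the one-step switching probabilities
            probs = np.clip(Q[r] * dt, 0.0, 1.0)
            probs[r] = 1.0 - np.sum(np.delete(probs, r))
            r = rng.choice(m, p=probs)
        return x, r

    # usage: a mean-reverting diffusion whose rate and volatility switch with r_t
    Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
    drift = lambda x, r: (-1.0 if r == 0 else -3.0) * x
    diffusion = lambda x, r: 0.5 if r == 0 else 1.0
    print(switching_euler(np.array([1.0]), 0, drift, diffusion, Q, T=1.0, dt=1e-3))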

Many physical and mathematical models involve random fields in their input data. Examples are ordinary differential equations, partial differential equations and integro-differential equations with uncertainties in the coefficient functions described by random fields; random fields also play a dominant role in problems in machine learning. In this article, we do not assume knowledge of the moments or expansion terms of the random fields; instead, we are only given discretized samples of them. We therefore model a measurement process for this discrete information and then approximate the covariance operator of the original random field. Of course, the true covariance operator is of infinite rank, and hence we cannot expect an accurate approximation from a finite number of spatially discretized observations. On the other hand, smoothness of the true (unknown) covariance function yields effective low-rank approximations to the true covariance operator. We derive explicit error estimates that involve the finite-rank approximation error of the covariance operator, the Monte Carlo-type error for sampling in the stochastic domain and the numerical discretization error in the physical domain. This allows us to give sufficient conditions on the three discretization parameters to guarantee that an error below a prescribed accuracy $\varepsilon$ is achieved.
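
A minimal sketch of the general idea, assuming the samples are given on a common spatial grid (the paper's estimator additionally accounts for the measurement process), is the following low-rank truncation of the discretized sample covariance.

    import numpy as np

    def low_rank_covariance(samples, rank):
        # rank-`rank` approximation of the covariance estimated from
        # spatially discretized samples (rows = samples, columns = grid points)
        X = samples - samples.mean(axis=0)           # center the samples
        C = (X.T @ X) / (samples.shape[0] - 1)       # discretized sample covariance
        w, V = np.linalg.eigh(C)                     # spectral decomposition
        idx = np.argsort(w)[::-1][:rank]             # keep the leading eigenpairs
        return (V[:, idx] * w[idx]) @ V[:, idx].T    # low-rank reconstruction

    # usage: samples of a smooth random field on a grid of 200 points
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 200)
    samples = np.array([sum(rng.standard_normal() * np.sin((k + 1) * np.pi * t) / (k + 1)**2
                            for k in range(20)) for _ in range(500)])
    C_r = low_rank_covariance(samples, rank=5)
    print(C_r.shape)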

In this paper, we propose a dual-mixed formulation for stationary viscoplastic flows with yield, such as the Bingham or the Herschel-Bulkley flow. The approach is based on a Huber regularization of the viscosity term and a twofold saddle point nonlinear operator equation for the resulting weak formulation. We prove uniqueness of solutions for the continuous formulation and propose a discrete scheme based on Arnold-Falk-Winther finite elements. The discretization yields a system of slantly differentiable nonlinear equations, for which a semismooth Newton algorithm is proposed and implemented. Local superlinear convergence of the method is also proved. Finally, we perform several numerical experiments in two and three dimensions to investigate the behavior and efficiency of the method.
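
For illustration, one standard scalar form of the Huber regularization, which smooths the nondifferentiable yield contribution $g\,|t|$ (the paper applies the regularization to the corresponding tensorial term), is
\[
\varphi_\gamma(t) =
\begin{cases}
\dfrac{\gamma}{2}\,t^{2}, & |t| \le \dfrac{g}{\gamma},\\[6pt]
g\,|t| - \dfrac{g^{2}}{2\gamma}, & |t| > \dfrac{g}{\gamma},
\end{cases}
\]
which is continuously differentiable and recovers $g\,|t|$ as $\gamma \to \infty$.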
