Ensemble Kalman inversion (EKI) is a derivative-free optimizer for solving inverse problems, motivated by the celebrated ensemble Kalman filter. The purpose of this article is to introduce adaptive Tikhonov strategies for EKI. This work builds upon Tikhonov EKI (TEKI), which was proposed for a fixed regularization constant. By adaptively learning the regularization parameter, this procedure is known to improve the recovery of the underlying unknown. For the analysis, we consider a continuous-time setting where we extend known results, such as well-posedness and the convergence of various loss functions, to the case of noisy observations. Furthermore, we allow a time-varying noise and regularization covariance in our presented convergence result, which mimics adaptive regularization schemes. In turn, we present three adaptive regularization schemes, drawn from both the deterministic and Bayesian approaches to inverse problems: bilevel optimization, the MAP formulation, and covariance learning. We numerically test these schemes and the theory on linear and nonlinear partial differential equations, where they outperform the non-adaptive TEKI and EKI.
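
To make the TEKI mechanism concrete, here is a minimal sketch (not the paper's implementation) of one Tikhonov-EKI step with perturbed observations: the data are augmented with the prior mean, so the regularization enters as an extra observation block whose covariance is scaled by the regularization parameter. All names and the update form below are illustrative; adapting `lam` between steps, for instance by a discrepancy-type rule, plays the role of the adaptive schemes discussed above.

```python
import numpy as np

def teki_step(U, G, y, Gamma, C0, m0, lam, rng):
    """One Tikhonov-EKI step on the augmented system z = (y, m0),
    F(u) = (G(u), u), noise covariance diag(Gamma, C0 / lam)."""
    J, d = U.shape
    m = len(y)
    F = np.hstack([np.apply_along_axis(G, 1, U), U])           # (J, m + d)
    z = np.concatenate([y, m0])
    Sigma = np.block([[Gamma, np.zeros((m, d))],
                      [np.zeros((d, m)), C0 / lam]])           # lam scales the prior block
    Cuf = (U - U.mean(0)).T @ (F - F.mean(0)) / (J - 1)        # cross-covariance (d, m+d)
    Cff = (F - F.mean(0)).T @ (F - F.mean(0)) / (J - 1)        # data covariance (m+d, m+d)
    Z = z + rng.multivariate_normal(np.zeros(m + d), Sigma, size=J)  # perturbed observations
    return U + np.linalg.solve(Cff + Sigma, (Z - F).T).T @ Cuf.T
```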

Related content

We study a variant of the classical $k$-median problem known as diversity-aware $k$-median (introduced by Thejaswi et al. 2021), where we are given a collection of facility subsets, and a solution must contain at least a specified number of facilities from each subset. We investigate the fixed-parameter tractability of this problem and show several negative hardness and inapproximability results, even when we afford exponential running time with respect to some parameters of the problem. Motivated by these results, we present a fixed-parameter approximation algorithm with approximation ratio $(1 + \frac{2}{e} +\epsilon)$, and argue that this ratio is essentially tight assuming the Gap Exponential Time Hypothesis. We also present a simple, practical local-search algorithm that gives a bicriteria $(2k, 3+\epsilon)$ approximation with better running time bounds.
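
As an illustration of the local-search idea, here is a hedged sketch under simplifying assumptions (Euclidean points, groups given as sets of facility indices, a feasible instance): single swaps are accepted only when they keep every group quota satisfied and improve the cost by a significant factor. The helper names and the improvement threshold are illustrative, not the paper's algorithm.

```python
import numpy as np
from itertools import product

def cost(clients, centers):
    # k-median objective: total distance from each client to its nearest center
    d = np.linalg.norm(clients[:, None, :] - centers[None, :, :], axis=2)
    return d.min(axis=1).sum()

def diverse_local_search(clients, fac, groups, req, k, eps=0.05):
    """Single-swap local search for diversity-aware k-median: every swap
    must keep each group quota satisfied. Assumes a feasible instance."""
    S = set()
    for g, r in zip(groups, req):                       # feasible start
        S |= set(sorted(g - S)[:max(0, r - len(S & g))])
    S = list(S) + [f for f in range(len(fac)) if f not in S][:k - len(S)]
    c_S = cost(clients, fac[S])
    improved = True
    while improved:
        improved = False
        for out, inn in product(list(S), range(len(fac))):
            if inn in S:
                continue
            T = [inn if f == out else f for f in S]
            if any(sum(f in g for f in T) < r for g, r in zip(groups, req)):
                continue                                # swap breaks a quota
            c_T = cost(clients, fac[T])
            if c_T < (1 - eps / k) * c_S:               # significant improvement only
                S, c_S, improved = T, c_T, True
                break
    return S, c_S
```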

We present a novel sampling-based method for estimating probabilities of rare or failure events. Our approach is founded on the ensemble Kalman filter (EnKF) for inverse problems. We therefore reformulate the rare-event problem as an inverse problem and apply the EnKF to generate failure samples. To estimate the probability of failure, we use the final EnKF samples to fit a distribution model and apply Importance Sampling with respect to the fitted distribution. This leads to an unbiased estimator if the density of the fitted distribution is positive on the whole failure domain. To handle multi-modal failure domains, we localise the covariance matrices in the EnKF update step around each particle and fit a mixture distribution model in the Importance Sampling step. For affine linear limit-state functions, we investigate the continuous-time limit and large-time properties of the EnKF update. We prove that the mean of the particles converges to a convex combination of the most likely failure point and the mean of the optimal Importance Sampling density if the EnKF is applied without noise. We provide numerical experiments to compare the performance of the EnKF with Sequential Importance Sampling.
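
The pipeline can be sketched end to end under simplifying assumptions (standard Gaussian input, a single Gaussian fit rather than the localized mixture, and an illustrative step size standing in for the continuous-time limit); all function and parameter names below are placeholders, not the paper's:

```python
import numpy as np
from scipy import stats

def enkf_failure_probability(g, d, J=500, steps=40, dt=0.25, N=10_000, seed=0):
    """Estimate P[g(X) <= 0] for X ~ N(0, I_d): (i) noise-free EnKF steps
    that treat y = 0 as an observation of g and drift the ensemble toward
    the failure domain; (ii) Importance Sampling under a Gaussian fitted
    to the final ensemble (a mixture fit would handle multi-modality)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((J, d))                              # prior ensemble
    for _ in range(steps):
        Gv = np.apply_along_axis(g, 1, X)
        Cxg = (X - X.mean(0)).T @ (Gv - Gv.mean()) / (J - 1)     # cross-covariance
        X = X - np.outer(Gv, Cxg) / (Gv.var(ddof=1) + 1.0 / dt)  # EnKF step, Gamma = 1/dt
    q = stats.multivariate_normal(X.mean(0), np.cov(X.T) + 1e-9 * np.eye(d))
    Y = q.rvs(size=N, random_state=seed + 1)
    logw = stats.multivariate_normal(np.zeros(d), np.eye(d)).logpdf(Y) - q.logpdf(Y)
    fail = np.apply_along_axis(g, 1, Y) <= 0.0
    return float(np.mean(np.exp(logw) * fail))   # unbiased if q > 0 on the failure set
```

For an affine limit state such as `g = lambda x: 3.5 - x.sum() / np.sqrt(len(x))`, the estimate can be checked against the exact value Φ(-3.5).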

We formulate and analyze a goal-oriented adaptive finite element method (GOAFEM) for a semilinear elliptic PDE and a linear goal functional. The strategy involves the finite element solution of a linearized dual problem, where the linearization is part of the adaptive strategy. Linear convergence and optimal algebraic convergence rates are shown.
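
The estimate driving such loops is the classical goal-oriented duality bound; the schematic form below is standard (the estimators for the semilinear setting in the paper are more involved):

```latex
% goal error controlled by the product of primal and dual estimators
\[
  |G(u) - G(u_\ell)| \;\lesssim\; \eta_\ell(u_\ell)\,\zeta_\ell(z_\ell),
\]
% where \eta_\ell and \zeta_\ell are residual error estimators for the primal
% solution u_\ell and the linearized dual solution z_\ell. Marking elements so
% that the product of estimators contracts (e.g., Doerfler marking on the
% combined indicators) yields linear convergence of the goal error.
```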

Spatially inhomogeneous functions, which may be smooth in some regions and rough in other regions, are modelled naturally in a Bayesian manner using so-called Besov priors, which are given by random wavelet expansions with Laplace-distributed coefficients. This paper studies theoretical guarantees for such prior measures: specifically, we examine their frequentist posterior contraction rates in the setting of non-linear inverse problems with Gaussian white noise. Our results are first derived under a general local Lipschitz assumption on the forward map. We then verify the assumption for two non-linear inverse problems arising from elliptic partial differential equations, the Darcy flow model from geophysics as well as a model for the Schr\"odinger equation appearing in tomography. In the course of the proofs, we also obtain novel concentration inequalities for penalized least squares estimators with $\ell^1$ wavelet penalty, which have a natural interpretation as maximum a posteriori (MAP) estimators. The true parameter is assumed to belong to some spatially inhomogeneous Besov class $B^{\alpha}_{11}$, $\alpha>0$. In a setting with direct observations, we complement these upper bounds with a lower bound on the rate of contraction for arbitrary Gaussian priors. An immediate consequence of our results is that while Laplace priors can achieve minimax-optimal rates over $B^{\alpha}_{11}$-classes, Gaussian priors are limited to a contraction rate that is slower by a polynomial factor. This gives information-theoretic justification for the intuition that Laplace priors are more compatible with $\ell^1$ regularity structure in the underlying parameter.
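
For intuition, a draw from such a prior can be sketched as a random wavelet expansion with Laplace coefficients; the 1-D Haar-based sampler below is illustrative (the paper's setting is more general):

```python
import numpy as np

def sample_besov_11(alpha, J=10, rng=None):
    """Draw from a 1-D Besov B^alpha_{11} prior on [0, 1] as a Haar wavelet
    series with Laplace coefficients:
        u = xi_0 + sum_{j<J, k} 2^{-j(alpha - 1/2)} xi_{jk} psi_{jk},
    with xi i.i.d. Laplace(1) and psi_{jk} the L^2-normalized Haar wavelets."""
    rng = rng or np.random.default_rng()
    n = 2 ** J
    t = (np.arange(n) + 0.5) / n                      # dyadic evaluation grid
    u = np.full(n, rng.laplace())                     # scaling-function coefficient
    for j in range(J):
        xi = rng.laplace(size=2 ** j)
        scale = 2.0 ** (-j * (alpha - 0.5))           # p = q = 1, d = 1
        for k in range(2 ** j):
            left = (t >= k / 2 ** j) & (t < (k + 0.5) / 2 ** j)
            right = (t >= (k + 0.5) / 2 ** j) & (t < (k + 1) / 2 ** j)
            psi = np.where(left, 1.0, 0.0) - np.where(right, 1.0, 0.0)
            u += scale * xi[k] * 2 ** (j / 2) * psi   # L^2-normalized Haar atom
    return t, u
```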

In this paper, we develop a Monte Carlo algorithm named Frozen Gaussian Sampling (FGS) to solve the semiclassical Schr\"odinger equation based on the frozen Gaussian approximation. Due to the highly oscillatory structure of the wave function, traditional mesh-based algorithms suffer from "the curse of dimensionality", which imposes an increasingly severe computational burden as the semiclassical parameter \(\varepsilon\) becomes small. The FGS outperforms existing algorithms in that it is mesh-free in computing the physical observables and is suitable for high-dimensional problems. In this work, we provide detailed procedures to implement the FGS for both Gaussian and WKB initial data, where the sampling strategies on phase space balance the need for variance reduction against sampling convenience. Moreover, we rigorously prove that, to reach a given accuracy, the number of samples needed for the FGS is independent of the semiclassical parameter \(\varepsilon\). Furthermore, the complexity of the FGS algorithm scales sublinearly with respect to the microscopic degrees of freedom and, in particular, is insensitive to the dimension. The performance of the FGS is validated through several typical numerical experiments, including simulating scattering by a barrier potential, the formation of caustics, and the computation of high-dimensional physical observables without a mesh.
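
The method's Monte Carlo reading can be summarized schematically (normalization conventions for the frozen Gaussian ansatz vary; the phase \(\Theta\), amplitude \(a\), and Gaussian wave packets \(g_{q,p}\) are those of the frozen Gaussian approximation):

```latex
% frozen Gaussian ansatz and its importance-sampling estimator (schematic)
\[
  u^{\mathrm{FGA}}(x,t)
  \;\propto\; \iint a(t,q,p)\, e^{i\Theta(t,x,q,p)/\varepsilon}\,
              \langle g_{q,p}, u_0\rangle \,\mathrm{d}q\,\mathrm{d}p
  \;\approx\; \frac{1}{M}\sum_{m=1}^{M}
              \frac{a(t,q_m,p_m)\, e^{i\Theta(t,x,q_m,p_m)/\varepsilon}\,
                    \langle g_{q_m,p_m}, u_0\rangle}{\pi_0(q_m,p_m)},
  \qquad (q_m,p_m) \overset{\mathrm{iid}}{\sim} \pi_0,
\]
% where the phase-space density \pi_0 is chosen to balance variance reduction
% against sampling convenience, as in the Gaussian and WKB cases above.
```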

We propose a novel and unified framework for change-point estimation in multivariate time series. The proposed method is fully nonparametric, enjoys effortless tuning, and is robust to temporal dependence. One salient and distinct feature of the proposed method is its versatility: it allows change-point detection for a broad class of parameters (such as mean, variance, correlation, and quantile) in a unified fashion. At the core of our method, we couple self-normalization (SN) based tests with a novel nested local-window segmentation algorithm, which appears to be new in the growing literature on change-point analysis. Due to the presence of an inconsistent long-run variance estimator in the SN test, non-standard theoretical arguments are further developed to derive the consistency and convergence rate of the proposed SN-based change-point detection method. Extensive numerical experiments and relevant real data analyses are conducted to illustrate the effectiveness and broad applicability of our proposed method in comparison with state-of-the-art approaches in the literature.
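
The self-normalization ingredient can be made concrete for a single mean shift; the sketch below follows the classical SN statistic of Shao and Zhang (2010), whereas the paper nests such tests inside local windows and covers general parameters:

```python
import numpy as np

def sn_change_point(x):
    """SN test for a single mean shift: contrast of segment means divided
    by a CUSUM-based self-normalizer (no long-run variance estimation)."""
    x = np.asarray(x, float)
    n = len(x)
    c = np.concatenate(([0.0], np.cumsum(x)))        # c[i] = x_1 + ... + x_i
    best_k, best_T = None, -np.inf
    for k in range(2, n - 1):
        num = (k * (n - k) / n ** 1.5
               * (c[k] / k - (c[n] - c[k]) / (n - k))) ** 2
        i = np.arange(1, k + 1)                      # left-segment CUSUMs
        left = ((c[i] - i / k * c[k]) ** 2).sum()
        j = np.arange(k + 1, n + 1)                  # right-segment CUSUMs
        right = (((c[n] - c[j - 1])
                  - (n - j + 1) / (n - k) * (c[n] - c[k])) ** 2).sum()
        T = num / ((left + right) / n ** 2)
        if T > best_T:
            best_k, best_T = k, T
    return best_k, best_T    # compare best_T to an SN critical value
```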

In the present study, we consider sparse representations of solutions to Dirichlet and heat equation problems with random boundary or initial conditions. To analyze the random signals, two types of sparse representations are developed, namely stochastic pre-orthogonal adaptive Fourier decompositions 1 and 2 (SPOAFD1 and SPOAFD2). Owing to the adaptive parameter selection of the SPOAFDs at each step, we obtain analytic sparse solutions of the SPDE problems with fast convergence.
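
The greedy mechanism behind pre-orthogonal adaptive Fourier decomposition can be sketched as a matching-pursuit loop with on-the-fly Gram-Schmidt over a parameterized dictionary; this skeleton is illustrative and is not the SPOAFD1/SPOAFD2 algorithms themselves:

```python
import numpy as np

def poafd(f, dictionary, n_atoms=10):
    """Greedy pre-orthogonal pursuit: at each step, orthogonalize every
    candidate atom against the chosen basis, pick the one capturing the
    most residual energy (maximal-selection principle), and deflate.
    dictionary: (K, n) array of atoms sampled on the same grid as f."""
    basis, coefs, picks = [], [], []
    r = f.astype(complex).copy()
    for _ in range(n_atoms):
        cand = dictionary.astype(complex).copy()
        for b in basis:                              # pre-orthogonalization
            cand -= (cand @ b.conj())[:, None] * b
        norms = np.linalg.norm(cand, axis=1)
        scores = np.abs(cand @ r.conj()) / np.where(norms > 1e-12, norms, np.inf)
        k = int(np.argmax(scores))                   # best atom for this residual
        b = cand[k] / norms[k]
        coef = r @ b.conj()
        r -= coef * b
        basis.append(b); coefs.append(coef); picks.append(k)
    return picks, coefs, basis, r                    # f ~ sum_j coefs[j] * basis[j]
```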

Sampling methods (e.g., node-wise, layer-wise, or subgraph) have become an indispensable strategy for speeding up the training of large-scale Graph Neural Networks (GNNs). However, existing sampling methods are mostly based on graph structural information and ignore the dynamics of optimization, which leads to high variance in estimating the stochastic gradients. The high-variance issue can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization. In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method can be decomposed into \textit{embedding approximation variance} in the forward stage and \textit{stochastic gradient variance} in the backward stage, and that both types of variance must be mitigated to obtain a faster convergence rate. We propose a decoupled variance reduction strategy that employs (approximate) gradient information to adaptively sample nodes with minimal variance, and explicitly reduces the variance introduced by embedding approximation. We show theoretically and empirically that the proposed method, even with smaller mini-batch sizes, enjoys a faster convergence rate and better generalization than existing methods.
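
A hedged sketch of the adaptive-sampling ingredient: neighbors are drawn with probabilities proportional to a cached per-node score (for instance, approximate gradient or embedding norms), and messages are importance-weighted so the sampled aggregation stays close to unbiased. The names and the weighting scheme are illustrative, not the paper's estimator:

```python
import numpy as np

def adaptive_neighbor_sample(neighbors, scores, fanout, rng):
    """Draw a fanout-size subset of `neighbors` with probabilities
    proportional to `scores` and return importance weights 1 / (n * p_i),
    so a weighted sum approximates the full-neighborhood mean aggregator
    (only approximately unbiased under sampling without replacement)."""
    p = scores[neighbors] + 1e-12
    p = p / p.sum()
    idx = rng.choice(len(neighbors), size=min(fanout, len(neighbors)),
                     replace=False, p=p)
    w = 1.0 / (len(neighbors) * p[idx])    # importance weights for the messages
    return neighbors[idx], w
```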

Ensembles over neural network weights trained from different random initializations, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter-efficient. In this paper, we design ensembles not only over weights but also over hyperparameters to improve the state of the art in both settings. For best performance independent of budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter-efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than those of typical ensembles. On image classification tasks, with MLP, LeNet, and Wide ResNet 28-10 architectures, our methodology improves upon both deep and batch ensembles.
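
A minimal sketch of the recipe (random hyperparameter search stratified over random initializations, then greedy ensemble selection on validation negative log-likelihood), using scikit-learn MLPs as stand-in models; the hyperparameter ranges and sizes are illustrative and details differ from the paper:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import log_loss

def hyper_deep_ensemble(Xtr, ytr, Xval, yval, n_search=20, n_init=3, size=5, seed=0):
    rng = np.random.default_rng(seed)
    pool = []
    for _ in range(n_search):                           # random hyperparameter search
        hp = dict(alpha=10 ** rng.uniform(-6, -2),      # L2 strength
                  learning_rate_init=10 ** rng.uniform(-4, -2))
        for init in range(n_init):                      # stratify over random inits
            model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300,
                                  random_state=init, **hp).fit(Xtr, ytr)
            pool.append(model.predict_proba(Xval))
    ensemble, probs = [], None
    for _ in range(size):                               # greedy selection on val NLL
        m = len(ensemble)
        def nll(i):
            avg = pool[i] if probs is None else (probs * m + pool[i]) / (m + 1)
            return log_loss(yval, avg)
        best = min(range(len(pool)), key=nll)
        probs = pool[best] if probs is None else (probs * m + pool[best]) / (m + 1)
        ensemble.append(best)
    return ensemble, probs                              # member indices, averaged probs
```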
