
The transmission eigenvalue problem arising from inverse scattering theory is of great importance in the theory of qualitative methods and in practical applications. In this paper, we study the transmission eigenvalue problem for anisotropic inhomogeneous media in $\Omega\subset \mathbb{R}^d$ ($d=2,3$). Using T-coercivity and spectral approximation theory, we derive an a posteriori error estimator of residual type and prove its efficiency and reliability for the eigenfunctions. In addition, we prove the reliability of the estimator for the transmission eigenvalues. The numerical experiments indicate that our method is efficient and attains the optimal order $\mathrm{DoF}^{-2m/d}$ for real eigenvalues when piecewise polynomials of degree $m$ are used.
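
The optimal-order claim can be checked numerically. The sketch below (assuming NumPy/SciPy) is a hypothetical stand-in: it uses the 1D Dirichlet Laplacian, whose exact eigenvalues $(k\pi)^2$ are known, rather than the transmission eigenvalue problem itself, and verifies the $\mathrm{DoF}^{-2m/d}$ rate with $m=1$, $d=1$:

```python
# Toy check of the optimal convergence order DoF^{-2m/d} on the 1D Dirichlet
# Laplacian with P1 elements (m = 1, d = 1, so the expected rate is DoF^{-2}).
import numpy as np
from scipy.linalg import eigh

def fem_smallest_eigenvalue(n):
    """P1 FEM for -u'' = lam*u on (0,1), u(0)=u(1)=0, with n interior nodes."""
    h = 1.0 / (n + 1)
    # Tridiagonal stiffness matrix (2, -1)/h and consistent mass matrix (4, 1)*h/6
    K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h
    M = (np.diag(4.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) * h / 6.0
    return eigh(K, M, eigvals_only=True)[0]   # smallest generalized eigenvalue

exact = np.pi ** 2
for n in [16, 32, 64, 128]:
    err = abs(fem_smallest_eigenvalue(n) - exact)
    print(f"DoF = {n:4d}  error = {err:.3e}  err*DoF^2 = {err * n**2:.3f}")
```

The printed product $\mathrm{err}\cdot\mathrm{DoF}^2$ settling at a constant is the signature of the optimal second-order rate.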

Related Content

Solving inverse problems is central to a variety of important applications, such as biomedical image reconstruction and non-destructive testing. These problems are characterized by the sensitivity of direct solution methods with respect to data perturbations. To stabilize the reconstruction process, regularization methods have to be employed. Well-known regularization methods are based on frame expansions, such as the wavelet-vaguelette decomposition (WVD), which are well adapted to the underlying signal class and the forward model and furthermore allow efficient implementation. However, it is well known that the lack of translational invariance of wavelets and related systems leads to specific artifacts in the reconstruction. To overcome this problem, in this paper we introduce and analyze the translation-invariant diagonal frame decomposition (TI-DFD) of linear operators as a novel concept generalizing the SVD. We characterize ill-posedness via the TI-DFD and prove that a TI-DFD combined with a regularizing filter leads to a convergent regularization method with optimal convergence rates. As an illustrative example, we construct a wavelet-based TI-DFD for one-dimensional integration, where we also investigate our approach numerically. The results indicate that filtered TI-DFDs eliminate the typical wavelet artifacts when using standard wavelets and provide a fast, accurate, and stable solution scheme for inverse problems.
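
To see the role of the regularizing filter in miniature, here is a toy sketch using a plain SVD rather than a TI-DFD, on an invented discretization of 1D integration; it shows how a Tikhonov-type filter stabilizes the inversion against noise:

```python
# Filtered (Tikhonov) SVD regularization of discrete 1D integration -- the
# ordinary-SVD analogue of the filtered TI-DFD; sizes and noise are illustrative.
import numpy as np

n = 200
h = 1.0 / n
A = h * np.tril(np.ones((n, n)))                 # discrete integration operator
x_true = np.sin(2 * np.pi * np.linspace(0, 1, n))
rng = np.random.default_rng(0)
y = A @ x_true + 1e-3 * rng.standard_normal(n)   # noisy data

U, s, Vt = np.linalg.svd(A)
for alpha in [0.0, 1e-6, 1e-4]:
    filt = s / (s**2 + alpha)                    # Tikhonov filter; alpha=0 is naive inversion
    x_rec = Vt.T @ (filt * (U.T @ y))
    print(f"alpha = {alpha:.0e}  rel. error = "
          f"{np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true):.3f}")
```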

The classical algorithms used in tabular reinforcement learning (Value Iteration and Policy Iteration) have been shown to converge linearly with a rate given by the discount factor $\gamma$ of a discounted Markov Decision Process. Recently, there has been an increased interest in the study of gradient-based methods. In this work, we show that the dimension-free linear $\gamma$-rate of classical reinforcement learning algorithms can be achieved by a general family of unregularised Policy Mirror Descent (PMD) algorithms under an adaptive step size. We also provide a matching worst-case lower bound that demonstrates that the $\gamma$-rate is optimal for PMD methods. Our work offers a novel perspective on the convergence of PMD. We avoid the use of the performance difference lemma beyond establishing the monotonic improvement of the iterates, which leads to a simple analysis that may be of independent interest. We also extend our analysis to the inexact setting and establish the first dimension-free $\varepsilon$-optimal sample complexity for unregularised PMD under a generative model, improving upon the best-known result.
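
For reference, the classical $\gamma$-rate that PMD is shown to match can be observed directly for Value Iteration; the following sketch (a small random MDP with arbitrary sizes, not from the paper) prints contraction by roughly a factor $\gamma$ per iteration:

```python
# The classical gamma-rate of Value Iteration on a small random MDP.
import numpy as np

rng = np.random.default_rng(1)
nS, nA, gamma = 20, 4, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] is a distribution over s'
R = rng.random((nS, nA))

def bellman_opt(V):
    return (R + gamma * P @ V).max(axis=1)      # optimal Bellman operator

# Approximate the fixed point V* by running the contraction to convergence
V_star = np.zeros(nS)
for _ in range(2000):
    V_star = bellman_opt(V_star)

V = np.zeros(nS)
for k in range(5):
    V = bellman_opt(V)
    print(f"iter {k+1}: ||V - V*||_inf = {np.linalg.norm(V - V_star, np.inf):.2e}")
# Successive errors shrink by roughly a factor gamma = 0.9 per iteration.
```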

The recently emerged spectral clustering surpasses conventional clustering methods by detecting clusters of any shape without requiring a convexity assumption. Unfortunately, with a computational complexity of $O(n^3)$, it is infeasible for many real applications where $n$ is large. This has stimulated researchers to propose approximate spectral clustering (ASC). However, most ASC methods assume that the number of clusters $k$ is known. In practice, setting $k$ manually can be subjective or time-consuming. The proposed algorithm introduces two relevance metrics for estimating $k$ in two vital steps of ASC: one for selecting the eigenvectors that span the embedding space, and the other for discovering the number of clusters in that space. The algorithm uses a growing neural gas (GNG) approximation, since GNG is superior in preserving the topology of the input data. The experimental results demonstrate the efficiency of the proposed algorithm and its ability to compete with similar methods in which $k$ is set manually.
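
As a point of comparison for estimating $k$, a minimal sketch of the classical eigengap heuristic on the normalized graph Laplacian is given below; the paper's GNG-based relevance metrics are more elaborate, and the data here are synthetic blobs:

```python
# Eigengap heuristic for estimating the number of clusters k from the
# spectrum of the normalized graph Laplacian (illustrative synthetic data).
import numpy as np

rng = np.random.default_rng(2)
# Three well-separated 2-D blobs, 50 points each
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in [(0, 0), (4, 0), (2, 4)]])

# Gaussian similarity and normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * 0.5 ** 2))
np.fill_diagonal(W, 0.0)
Dm = 1.0 / np.sqrt(W.sum(1))
L = np.eye(len(X)) - Dm[:, None] * W * Dm[None, :]

evals = np.sort(np.linalg.eigvalsh(L))
gaps = np.diff(evals[:10])
k_est = int(np.argmax(gaps)) + 1          # largest gap sits after the k-th eigenvalue
print("first eigenvalues:", np.round(evals[:6], 3))
print("estimated k =", k_est)             # expected: 3
```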

In this work, we study the problem of robustly estimating the mean/location parameter of distributions without moment bounds. For a large class of distributions satisfying natural symmetry constraints, we give a sequence of algorithms that can efficiently estimate the location without incurring dimension-dependent factors in the error. Concretely, suppose an adversary can arbitrarily corrupt an $\varepsilon$-fraction of the observed samples. For every $k \in \mathbb{N}$, we design an estimator using time and samples $\tilde{O}({d^k})$ such that the dependence of the error on the corruption level $\varepsilon$ is an additive factor of $O(\varepsilon^{1-\frac{1}{2k}})$. The dependence on other problem parameters is also nearly optimal. Our class contains products of arbitrary symmetric one-dimensional distributions as well as elliptical distributions, a vast generalization of the Gaussian distribution. Examples include product Cauchy distributions and multivariate $t$-distributions; in particular, even the first moment might not exist. We provide the first efficient algorithms for this class of distributions. Previously, such results were only known under boundedness assumptions on the moments of the distribution and, in particular, are provably impossible in the absence of symmetry [KSS18, CTBJ22]. For the class of distributions we consider, all previous estimators either require exponential time or incur error depending on the dimension. Our algorithms are based on a generalization of the filtering technique [DK22]. We show how this machinery can be combined with a Huber-loss-based approach to work with projections of the noise. Moreover, we show how sum-of-squares proofs can be used to obtain algorithmic guarantees even for distributions without a first moment. We believe that this approach may find other applications in future works.
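
A minimal sketch of the basic filtering idea that the paper generalizes is shown below; note this plain variant implicitly assumes bounded covariance, which is exactly the assumption the paper removes via symmetry, and the data are synthetic:

```python
# Basic filtering for robust mean estimation [DK22-style]: repeatedly project
# onto the top variance direction and down-weight the most extreme points.
import numpy as np

rng = np.random.default_rng(3)
d, n, eps = 20, 2000, 0.1
X = rng.standard_normal((n, d))                  # inliers, true mean = 0
X[: int(eps * n)] += 8.0                         # adversarial shift on an eps-fraction
w = np.ones(n)

for _ in range(15):
    mu = (w[:, None] * X).sum(0) / w.sum()       # weighted mean
    C = ((w[:, None] * (X - mu)).T @ (X - mu)) / w.sum()
    lam, v = np.linalg.eigh(C)
    if lam[-1] < 1.5:                            # covariance near identity: stop
        break
    scores = ((X - mu) @ v[:, -1]) ** 2          # outlyingness along top direction
    w *= 1.0 - scores / scores.max()             # soft filter: damp extreme points

print("naive mean error   :", np.linalg.norm(X.mean(0)))
print("filtered mean error:", np.linalg.norm(mu))
```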

In this work we investigate the numerical identification of the diffusion coefficient in elliptic and parabolic problems using neural networks. The numerical scheme is based on the standard output least-squares formulation, where the Galerkin finite element method (FEM) is employed to approximate the state and neural networks (NNs) act as a smoothness prior to approximate the unknown diffusion coefficient. A projection operation is applied to the NN approximation in order to preserve the physical box constraint on the unknown coefficient. The hybrid approach enjoys both the rigorous mathematical foundation of the FEM and the inductive bias and approximation properties of NNs. We derive \textsl{a priori} error estimates in the standard $L^2(\Omega)$ norm for the numerical reconstruction, under a positivity condition which can be verified for a large class of problem data. The error bounds depend explicitly on the noise level, the regularization parameter, and the discretization parameters (e.g., spatial mesh size, time step size, and the depth, upper bound, and number of nonzero parameters of the NNs). We also provide extensive numerical experiments, indicating that the hybrid method is very robust for large noise when compared with the pure FEM approximation.
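
The projection step is simple to state concretely. The sketch below is hypothetical (a toy two-layer network, not the paper's architecture or training loop) and only illustrates clipping the NN output into an assumed box $[c_0, c_1]$:

```python
# Pointwise projection of an NN-parameterized diffusion coefficient onto a
# physical box [c0, c1]; network sizes and the box are illustrative assumptions.
import numpy as np

c0, c1 = 0.5, 2.0                       # assumed box constraint on the coefficient

def project(a):
    """P_[c0,c1](a) applied pointwise, as in the output least-squares scheme."""
    return np.clip(a, c0, c1)

rng = np.random.default_rng(4)
W1, b1 = rng.standard_normal((16, 1)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((1, 16)), rng.standard_normal(1)

def nn_coefficient(x):
    """Unconstrained NN output a_theta(x); may leave [c0, c1] before projection."""
    return (W2 @ np.tanh(W1 @ x[None, :] + b1[:, None]) + b2).ravel()

x = np.linspace(0, 1, 5)
print("raw NN output :", np.round(nn_coefficient(x), 2))
print("projected     :", np.round(project(nn_coefficient(x)), 2))
```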

We study the problem of optimizing the decisions of a preemption-capable transmitter to minimize the Age of Incorrect Information (AoII) when the communication channel has a random delay. We consider a slotted-time system in which a transmitter observes a Markovian source and makes decisions based on the system status. In each time slot, the transmitter decides whether to preempt or skip when the channel is busy. When the channel is idle, the transmitter decides whether to send a new update. A remote receiver estimates the state of the Markovian source based on the updates it receives. We consider a generic transmission delay and assume that the transmission delay is independent and identically distributed for each update. This paper aims to optimize the transmitter's decision in each time slot to minimize the AoII under generic time penalty functions. To this end, we first use the Markov decision process to formulate the optimization problem and derive the analytical expressions of the expected AoII achieved by each of two canonical preemptive policies. Then, we prove the existence of the optimal policy and provide a feasible value iteration algorithm to approximate the optimal policy. However, the value iteration algorithm becomes computationally expensive if we want considerable confidence in the approximation. Therefore, we analyze the system characteristics under two canonical delay distributions and theoretically obtain the corresponding optimal policies using the policy improvement theorem. Finally, numerical results are presented to illustrate the performance improvements brought about by the preemption capability.
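
To make the AoII dynamics concrete, here is a hedged simulation sketch for a symmetric two-state source under an always-transmit policy with a deterministic one-slot delay and a linear time penalty; all parameters are illustrative rather than the paper's:

```python
# AoII simulation: symmetric two-state Markov source, always-transmit policy,
# deterministic one-slot delay, linear penalty f(t) = t. Parameters are toy.
import numpy as np

rng = np.random.default_rng(5)
p, T = 0.1, 200_000                  # flip probability per slot, horizon
source, estimate, in_flight = 0, 0, None
aoii, total = 0, 0
for _ in range(T):
    if in_flight is not None:        # last slot's update arrives (unit delay)
        estimate = in_flight
    in_flight = source               # always transmit on the (now idle) channel
    if rng.random() < p:             # Markovian source may flip its state
        source ^= 1
    aoii = 0 if estimate == source else aoii + 1   # AoII grows while incorrect
    total += aoii
print(f"empirical average AoII: {total / T:.3f}")
```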

The model-X conditional randomization test is a generic framework for conditional independence testing, unlocking new possibilities to discover features that are conditionally associated with a response of interest while controlling type-I error rates. An appealing advantage of this test is that it can work with any machine learning model to design powerful test statistics. In turn, the common practice in the model-X literature is to form a test statistic using machine learning models trained to maximize predictive accuracy, in the hope of attaining a test with good power. However, the ideal goal here is to drive the model (during training) to maximize the power of the test, not merely the predictive accuracy. In this paper, we bridge this gap by introducing, for the first time, novel model-fitting schemes that are designed to explicitly improve the power of model-X tests. This is done by introducing a new cost function that aims at maximizing the test statistic used to measure violations of conditional independence. Using synthetic and real data sets, we demonstrate that the combination of our proposed loss function with various base predictive models (lasso, elastic net, and deep neural networks) consistently increases the number of correct discoveries obtained, while keeping type-I error rates under control.
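
A minimal sketch of the vanilla model-X CRT (without the paper's power-oriented training) is given below; the conditional law of $X_j$ given the rest is taken to be a known standard Gaussian, and the statistic is a simple OLS coefficient rather than a tuned model:

```python
# Model-X conditional randomization test with a plain OLS-coefficient statistic.
import numpy as np

rng = np.random.default_rng(6)
n, d, j = 300, 10, 0
X = rng.standard_normal((n, d))
y = 0.5 * X[:, j] + rng.standard_normal(n)       # feature j is truly relevant

def statistic(X, y, j):
    """|OLS coefficient of feature j| as a simple stand-in test statistic."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return abs(beta[j])

t_obs = statistic(X, y, j)
K = 500
t_null = np.empty(K)
for k in range(K):
    Xk = X.copy()
    Xk[:, j] = rng.standard_normal(n)            # resample X_j | X_{-j} (here: N(0,1))
    t_null[k] = statistic(Xk, y, j)
p_value = (1 + (t_null >= t_obs).sum()) / (K + 1)
print(f"CRT p-value for feature {j}: {p_value:.4f}")
```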

In many bandit problems, the maximal reward achievable by a policy is often unknown in advance. We consider the problem of estimating the optimal policy value in the sublinear data regime before the optimal policy is even learnable. We refer to this as $V^*$ estimation. It was recently shown that fast $V^*$ estimation is possible but only in disjoint linear bandits with Gaussian covariates. Whether this is possible for more realistic context distributions has remained an open and important question for tasks such as model selection. In this paper, we first provide lower bounds showing that this general problem is hard. However, under stronger assumptions, we give an algorithm and analysis proving that $\widetilde{\mathcal{O}}(\sqrt{d})$ sublinear estimation of $V^*$ is indeed information-theoretically possible, where $d$ is the dimension. We then present a more practical, computationally efficient algorithm that estimates a problem-dependent upper bound on $V^*$ that holds for general distributions and is tight when the context distribution is Gaussian. We prove our algorithm requires only $\widetilde{\mathcal{O}}(\sqrt{d})$ samples to estimate the upper bound. We use this upper bound and the estimator to obtain novel and improved guarantees for several applications in bandit model selection and testing for treatment effects.
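
To fix ideas about the target quantity, the sketch below estimates $V^* = \mathbb{E}_x[\max_a \langle \theta_a, x\rangle]$ for a disjoint linear bandit with Gaussian contexts by brute-force Monte Carlo; the point of the paper is to approximate such quantities with only $\widetilde{\mathcal{O}}(\sqrt{d})$ samples, which this naive estimator does not do:

```python
# Brute-force Monte Carlo estimate of V* for a disjoint linear bandit with
# Gaussian contexts; sizes and parameter scales are illustrative.
import numpy as np

rng = np.random.default_rng(7)
d, n_actions = 50, 5
theta = rng.standard_normal((n_actions, d)) / np.sqrt(d)   # per-action parameters
X = rng.standard_normal((100_000, d))                      # Gaussian contexts
v_star = (X @ theta.T).max(axis=1).mean()                  # E[max_a <theta_a, x>]
print(f"Monte Carlo V* estimate: {v_star:.3f}")
```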

We establish optimal error bounds for the exponential wave integrator (EWI) applied to the nonlinear Schr\"odinger equation (NLSE) with $ L^\infty $-potential and/or locally Lipschitz nonlinearity under the assumption of $ H^2 $-solution of the NLSE. For the semi-discretization in time by the first-order Gautschi-type EWI, we prove an optimal $ L^2 $-error bound at $ O(\tau) $ with $ \tau>0 $ being the time step size, together with a uniform $ H^2 $-bound of the numerical solution. For the full-discretization scheme obtained by using the Fourier spectral method in space, we prove an optimal $ L^2 $-error bound at $ O(\tau + h^2) $ without any coupling condition between $ \tau $ and $ h $, where $ h>0 $ is the mesh size. In addition, for $ W^{1, 4} $-potential and slightly stronger regularity of the nonlinearity, under the assumption of $ H^3 $-solution, we obtain an optimal $ H^1 $-error bound. Furthermore, when the potential is of low regularity but the nonlinearity is sufficiently smooth, we propose an extended Fourier pseudospectral method which has the same error bound as the Fourier spectral method while its computational cost is comparable to that of the standard Fourier pseudospectral method. Our new error bounds greatly improve the existing results for the NLSE with low-regularity potential and/or nonlinearity. Extensive numerical results are reported to confirm our error estimates and to demonstrate that they are sharp.
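
A minimal sketch of a first-order Gautschi-type EWI with Fourier spectral discretization for the 1D cubic NLSE is shown below; the potential, initial data, and step sizes are illustrative choices, not the paper's test cases:

```python
# First-order exponential wave integrator for i u_t = -u_xx + V u + |u|^2 u
# on a torus, using the phi_1 weight from Duhamel's formula. Toy parameters.
import numpy as np

N, L, tau, steps = 256, 2 * np.pi, 1e-3, 1000
x = L * np.arange(N) / N
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # Fourier wavenumbers
V = np.cos(x)                                    # smooth illustrative potential
u = (np.exp(np.sin(x)) / np.e).astype(complex)   # smooth initial data

def phi1(z):
    """(e^z - 1)/z with the removable singularity at z = 0 handled."""
    out = np.ones_like(z)
    nz = np.abs(z) > 1e-12
    out[nz] = (np.exp(z[nz]) - 1.0) / z[nz]
    return out

E = np.exp(-1j * tau * k**2)                     # free Schroedinger propagator
P = tau * phi1(-1j * tau * k**2)                 # phi_1 quadrature weight
for _ in range(steps):
    f = (V + np.abs(u) ** 2) * u                 # potential plus cubic nonlinearity
    u = np.fft.ifft(E * np.fft.fft(u) - 1j * P * np.fft.fft(f))
print(f"discrete mass (approximately conserved): {(np.abs(u)**2).sum() * L / N:.6f}")
```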

When factorized approximations are used for variational inference (VI), they tend to underestimate the uncertainty -- as measured in various ways -- of the distributions they are meant to approximate. We consider two popular ways to measure the uncertainty deficit of VI: (i) the degree to which it underestimates the componentwise variance, and (ii) the degree to which it underestimates the entropy. To better understand these effects, and the relationship between them, we examine an informative setting where they can be explicitly (and elegantly) analyzed: the approximation of a Gaussian,~$p$, with a dense covariance matrix, by a Gaussian,~$q$, with a diagonal covariance matrix. We prove that $q$ always underestimates both the componentwise variance and the entropy of $p$, \textit{though not necessarily to the same degree}. Moreover, we demonstrate that the entropy of $q$ is determined by the trade-off of two competing forces: it is decreased by the shrinkage of its componentwise variances (our first measure of uncertainty) but increased by the factorized approximation, which delinks the nodes in the graphical model of $p$. We study various manifestations of this trade-off, notably one where, as the dimension of the problem grows, the per-component entropy gap between $p$ and $q$ becomes vanishingly small even though $q$ underestimates every componentwise variance by a constant multiplicative factor. We also use the shrinkage-delinkage trade-off to bound the entropy gap in terms of the problem dimension and the condition number of the correlation matrix of $p$. Finally, we present empirical results on both Gaussian and non-Gaussian targets, the former to validate our analysis and the latter to explore its limitations.
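
The shrinkage effect admits a short numerical check. Assuming the standard fact that the KL$(q\,\|\,p)$-optimal factorized Gaussian matches the diagonal of $p$'s precision matrix, the sketch below exhibits both componentwise variance shrinkage and a nonnegative entropy gap:

```python
# Variance shrinkage and entropy gap for factorized Gaussian VI: the optimal
# diagonal q has variances 1/Lambda_ii <= Sigma_ii, and H(q) <= H(p).
import numpy as np

rng = np.random.default_rng(8)
d = 5
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)                  # dense covariance of p
Lam = np.linalg.inv(Sigma)                       # precision matrix of p
var_q = 1.0 / np.diag(Lam)                       # optimal factorized variances

print("true variances :", np.round(np.diag(Sigma), 3))
print("VI variances   :", np.round(var_q, 3))    # componentwise shrinkage

H_p = 0.5 * np.linalg.slogdet(2 * np.pi * np.e * Sigma)[1]
H_q = 0.5 * np.sum(np.log(2 * np.pi * np.e * var_q))
print(f"entropy gap H(p) - H(q) = {H_p - H_q:.4f} (>= 0)")
```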
