
We study approximation of periodic functions by arbitrary linear combinations of $n$ translates of a single function. We construct linear methods of this approximation for univariate functions in the class induced by convolution with a single function, and prove upper bounds on the $L^p$-approximation convergence rate of these methods as $n \to \infty$, for $1 \leq p \leq \infty$. We also generalize these results to classes of multivariate functions defined by convolution with the tensor product of a single function. In the case $p=2$, for this class, we also prove a lower bound on the quantity characterizing the best approximation by arbitrary linear combinations of $n$ translates of an arbitrary function.
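
As a concrete illustration of the approximation scheme (not the linear methods constructed in the paper), the sketch below fits a 2π-periodic target by a least-squares linear combination of n equispaced translates of a single periodic kernel; the kernel, the nodes, and the target function are illustrative assumptions.

```python
import numpy as np

# Approximate a 2*pi-periodic target f by sum_j c_j * phi(x - x_j),
# where phi is a single fixed periodic kernel and x_j = 2*pi*j/n.
# Illustrative sketch only: kernel, nodes, and target are arbitrary choices.

def phi(x):
    # a smooth 2*pi-periodic kernel (von Mises-style bump)
    return np.exp(np.cos(x))

def f(x):
    # target periodic function
    return np.sin(x) + 0.3 * np.cos(3 * x)

n = 16                                    # number of translates
nodes = 2 * np.pi * np.arange(n) / n      # translation points x_j
xs = np.linspace(0, 2 * np.pi, 400, endpoint=False)

# design matrix: column j is phi(x - x_j) sampled on the grid
A = np.stack([phi(xs - xj) for xj in nodes], axis=1)
c, *_ = np.linalg.lstsq(A, f(xs), rcond=None)

err = np.max(np.abs(A @ c - f(xs)))       # sup-norm error on the grid
print(f"n = {n}, max grid error = {err:.2e}")
```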

Related content

Nash equilibrium is a central concept in game theory. Several Nash solvers exist, yet none scale to normal-form games with many actions and many players, especially those with payoff tensors too big to be stored in memory. In this work, we propose an approach that iteratively improves an approximation to a Nash equilibrium through joint play. It accomplishes this by tracing a previously established homotopy that defines a continuum of equilibria for the game regularized with decaying levels of entropy. This continuum asymptotically approaches the limiting logit equilibrium, proven by McKelvey and Palfrey (1995) to be unique in almost all games, thereby partially circumventing the well-known equilibrium selection problem of many-player games. To encourage iterates to remain near this path, we efficiently minimize average deviation incentive via stochastic gradient descent, intelligently sampling entries in the payoff tensor as needed. Monte Carlo estimates of the stochastic gradient from joint play are biased due to the appearance of a nonlinear max operator in the objective, so we introduce additional innovations to the algorithm to alleviate gradient bias. The descent process can also be viewed as repeatedly constructing and reacting to a polymatrix approximation to the game. In these ways, our proposed approach, average deviation incentive descent with adaptive sampling (ADIDAS), is most similar to three classical approaches, namely homotopy-type, Lyapunov, and iterative polymatrix solvers. The lack of local convergence guarantees for biased gradient descent prevents guaranteed convergence to Nash; however, we demonstrate through extensive experiments the ability of this approach to approximate a unique Nash in normal-form games with as many as seven players and twenty-one actions (several billion outcomes) that are orders of magnitude larger than those possible with prior algorithms.
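
As a minimal full-information illustration of the quantity being minimized (ADIDAS itself samples payoff-tensor entries and handles many-player games), the sketch below evaluates an entropy-regularized average deviation incentive for a small two-player normal-form game; the game, temperature, and strategies are arbitrary placeholders.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

# Random 2-player normal-form game; U[i] is player i's payoff matrix over
# (player-1 action, player-2 action).  Full-information sketch: ADIDAS
# samples tensor entries instead of enumerating them.
m, k = 4, 5
U = [rng.standard_normal((m, k)), rng.standard_normal((m, k))]

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

def adi(x, y, tau):
    """Average deviation incentive of the joint strategy (x, y) in the
    entropy-regularized game with temperature tau."""
    u1 = U[0] @ y          # player 1's payoff for each of its actions
    u2 = x @ U[1]          # player 2's payoff for each of its actions
    gaps = []
    for u, p in ((u1, x), (u2, y)):
        br_value = tau * logsumexp(u / tau)     # value of the softmax best response
        cur_value = p @ u + tau * entropy(p)    # regularized value of the current strategy
        gaps.append(br_value - cur_value)
    return np.mean(gaps)

x = np.full(m, 1.0 / m)
y = np.full(k, 1.0 / k)
print("ADI at uniform strategies:", adi(x, y, tau=0.1))
```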

Recent advances in quantized compressed sensing and high-dimensional estimation have shown that signal recovery is even feasible under strong non-linear distortions in the observation process. An important characteristic of associated guarantees is uniformity, i.e., recovery succeeds for an entire class of structured signals with a fixed measurement ensemble. However, despite significant results in various special cases, a general understanding of uniform recovery from non-linear observations is still missing. This paper develops a unified approach to this problem under the assumption of i.i.d. sub-Gaussian measurement vectors. Our main result shows that a simple least-squares estimator with any convex constraint can serve as a universal recovery strategy, which is outlier robust and does not require explicit knowledge of the underlying non-linearity. Based on empirical process theory, a key technical novelty is an approximative increment condition that can be implemented for all common types of non-linear models. This flexibility allows us to apply our approach to a variety of problems in non-linear compressed sensing and high-dimensional statistics, leading to several new and improved guarantees. Each of these applications is accompanied by a conceptually simple and systematic proof, which does not rely on any deeper properties of the observation model. On the other hand, known local stability properties can be incorporated into our framework in a plug-and-play manner, thereby implying near-optimal error bounds.
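
A minimal sketch of the recovery strategy in one special case, assuming Gaussian measurements, 1-bit observations y = sign(A x0), and the unit Euclidean ball as the convex constraint; the solver is plain projected gradient descent and all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-bit observations y = sign(A x0) with Gaussian measurements: a simple
# instance of a non-linear observation model.  Recovery below is plain
# constrained least squares (projected gradient onto the unit l2-ball);
# dimensions and step size are illustrative assumptions.
n, m = 50, 800
x0 = rng.standard_normal(n)
x0 /= np.linalg.norm(x0)
A = rng.standard_normal((m, n))
y = np.sign(A @ x0)

def project_unit_ball(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2    # safe step for the quadratic loss
for _ in range(500):
    grad = A.T @ (A @ x - y)
    x = project_unit_ball(x - step * grad)

cosine = x @ x0 / np.linalg.norm(x)
print(f"cosine similarity with the true direction: {cosine:.3f}")
```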

We introduce a new method for Estimation of Signal Parameters based on Iterative Rational Approximation (ESPIRA) for sparse exponential sums. Our algorithm uses the AAA algorithm for rational approximation of the discrete Fourier transform of the given equidistant signal values. We show that ESPIRA can be interpreted as a matrix pencil method applied to Loewner matrices. These Loewner matrices are closely connected with the Hankel matrices which are usually employed for signal recovery. Due to the construction of the Loewner matrices via an adaptive selection of index sets, the matrix pencil method is stabilized. ESPIRA achieves similar recovery results for exact data as ESPRIT and the matrix pencil method but with less computational effort. Moreover, ESPIRA strongly outperforms ESPRIT and the matrix pencil method for noisy data and for signal approximation by short exponential sums.
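
The sketch below is not ESPIRA itself but the classical matrix pencil method on Hankel matrices that the abstract uses as the point of comparison, run on exact data; the nodes, coefficients, and pencil parameter are illustrative choices.

```python
import numpy as np
from scipy.linalg import hankel, pinv

# Baseline for comparison, not ESPIRA: the classical matrix pencil method
# on Hankel matrices for recovering a sparse exponential sum
# f(k) = sum_j c_j * z_j**k from equidistant samples (exact data).
M = 3
z_true = np.exp(2j * np.pi * np.array([0.12, 0.31, 0.47]))   # unknown nodes
c_true = np.array([1.0, 0.5, 2.0])
N = 40
k = np.arange(N)
f = (c_true[None, :] * z_true[None, :] ** k[:, None]).sum(axis=1)

L = N // 2                               # pencil parameter, M <= L <= N - M
H = hankel(f[: N - L], f[N - L - 1 :])   # (N-L) x (L+1) Hankel matrix, H[i, j] = f[i + j]
H0, H1 = H[:, :-1], H[:, 1:]
eigvals = np.linalg.eigvals(pinv(H0) @ H1)
# keep the M eigenvalues of largest modulus: these recover the nodes z_j
z_est = eigvals[np.argsort(-np.abs(eigvals))[:M]]
print(np.sort_complex(z_est))
print(np.sort_complex(z_true))
```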

In this paper, we consider an information update system in which a wireless sensor sends timely updates to the destination over an erasure channel, powered by harvested energy together with reliable backup energy. The Age of Information (AoI) metric is adopted to measure the timeliness of the received updates at the destination. We aim to find the optimal information updating policy that minimizes the time-average weighted sum of the AoI and the cost of the reliable backup energy, by formulating an infinite-state Markov decision process (MDP). The optimal information updating policy is proved to have a threshold structure. Based on this special structure, an algorithm for efficiently computing the optimal policy is proposed. Numerical results show that the proposed optimal updating policy outperforms baseline policies.
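
A much-simplified sketch of the kind of average-cost MDP involved, assuming the state is just the (truncated) AoI, the action is to idle or to transmit over an erasure channel at a fixed backup-energy cost, and energy harvesting is ignored; relative value iteration recovers a threshold policy of the kind described in the paper. All constants are placeholders.

```python
import numpy as np

# Simplified model: state = current AoI (truncated at A_MAX); action 0 = idle,
# action 1 = transmit over an erasure channel (success prob P) at backup-energy
# cost W * C_TX.  Relative value iteration for the average-cost criterion.
A_MAX, P, W, C_TX = 50, 0.6, 1.0, 4.0
states = np.arange(1, A_MAX + 1)

h = np.zeros(A_MAX)                              # relative values, indexed by AoI - 1
for _ in range(5000):
    nxt = np.minimum(states + 1, A_MAX) - 1      # AoI index after idling or a failed transmission
    q_idle = states + h[nxt]
    q_send = states + W * C_TX + P * h[0] + (1 - P) * h[nxt]
    h_new = np.minimum(q_idle, q_send)
    h_new -= h_new[0]                            # normalize at the reference state AoI = 1
    if np.max(np.abs(h_new - h)) < 1e-10:
        h = h_new
        break
    h = h_new

policy = (q_send < q_idle).astype(int)           # 1 = transmit
threshold = states[policy.argmax()] if policy.any() else None
print("transmit iff AoI >=", threshold)
```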

We introduce Variational State-Space Filters (VSSF), a new method for unsupervised learning, identification, and filtering of latent Markov state-space models from raw pixels. We present a theoretically sound framework for latent state-space inference under heterogeneous sensor configurations. The resulting model can integrate an arbitrary subset of the sensor measurements used during training, enabling the learning of semi-supervised state representations and thus enforcing that certain components of the learned latent state space agree with interpretable measurements. From this framework we derive L-VSSF, an explicit instantiation of this model with linear latent dynamics and Gaussian distribution parameterizations. We experimentally demonstrate L-VSSF's ability to filter in latent space beyond the sequence length of the training dataset across several different test environments.
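
L-VSSF learns its latent dynamics and sensor models from pixels; the sketch below only illustrates the linear-Gaussian predict/update step with an arbitrary subset of sensors observed, assuming the matrices A, C, Q, R are already given. All values are placeholders.

```python
import numpy as np

def predict(mu, S, A, Q):
    """Linear-Gaussian latent prediction: z_t | z_{t-1} ~ N(A z_{t-1}, Q)."""
    return A @ mu, A @ S @ A.T + Q

def update(mu, S, y, C, R, mask):
    """Kalman-style update that uses only the observed sensor rows
    (mask[i] == True means sensor i reported a measurement)."""
    Cm, Rm, ym = C[mask], R[np.ix_(mask, mask)], y[mask]
    K = S @ Cm.T @ np.linalg.inv(Cm @ S @ Cm.T + Rm)   # Kalman gain
    mu_new = mu + K @ (ym - Cm @ mu)
    S_new = (np.eye(len(mu)) - K @ Cm) @ S
    return mu_new, S_new

# toy example with a 2-D latent state and 3 sensors, one missing this step
A = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
R = 0.1 * np.eye(3)
mu, S = np.zeros(2), np.eye(2)
y = np.array([0.5, np.nan, 0.7])          # sensor 1 unavailable
mask = ~np.isnan(y)

mu, S = predict(mu, S, A, Q)
mu, S = update(mu, S, y, C, R, mask)
print("posterior mean:", mu)
```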

Optimization under uncertainty and risk is indispensable in many practical situations. Our paper addresses the stability of optimization problems using composite risk functionals that are subject to measure perturbations. Our main focus is the asymptotic behavior of data-driven formulations with empirical or smoothing estimators, such as kernels or wavelets, applied to some or all of the functions in the composition. We analyze the properties of the new estimators and establish a strong law of large numbers, consistency, and bias-reduction potential under fairly general assumptions. Our results are germane to risk-averse optimization and to data science in general.
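
As a small illustration of the kind of estimators being compared (not the paper's general composite setting), the sketch below computes plug-in estimates of one composite risk functional, the mean-upper-semideviation, once from the raw empirical measure and once from a Gaussian-kernel-smoothed empirical measure; the sample distribution, bandwidth, and risk parameter are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Plug-in estimation of the composite risk functional
#   rho(X) = E[X] + kappa * E[(X - E[X])_+]
# from a sample, with and without Gaussian kernel smoothing (bandwidth h).
x = rng.lognormal(mean=0.0, sigma=0.8, size=2000)
kappa, h = 0.5, 0.2

def rho_empirical(x, kappa):
    m = x.mean()
    return m + kappa * np.maximum(x - m, 0.0).mean()

def rho_smoothed(x, kappa, h):
    # smoothing X_i -> X_i + h*Z leaves the mean unchanged and gives the
    # closed form E[(a + h*Z)_+] = a*Phi(a/h) + h*phi(a/h) with a = X_i - m
    m = x.mean()
    t = (x - m) / h
    semidev = (x - m) * norm.cdf(t) + h * norm.pdf(t)
    return m + kappa * semidev.mean()

print("empirical estimator:", rho_empirical(x, kappa))
print("smoothed estimator :", rho_smoothed(x, kappa, h))
```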

We give two approximation algorithms solving the Stochastic Boolean Function Evaluation (SBFE) problem for symmetric Boolean functions. The first is an $O(\log n)$-approximation algorithm, based on the submodular goal-value approach of Deshpande, Hellerstein and Kletenik. Our second algorithm, which is simple, is based on the algorithm solving the SBFE problem for $k$-of-$n$ functions, due to Salloum, Breuer, and Ben-Dov. It achieves a $(B-1)$ approximation factor, where $B$ is the number of blocks of 0's and 1's in the standard vector representation of the symmetric Boolean function. As part of the design of the first algorithm, we prove that the goal value of any symmetric Boolean function is less than $n(n+1)/2$. Finally, we give an example showing that for symmetric Boolean functions, minimum expected verification cost and minimum expected evaluation cost are not necessarily equal. This contrasts with a previous result, given by Das, Jafarpour, Orlitsky, Pan and Suresh, which showed that equality holds in the unit-cost case.
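
The block count that determines the second algorithm's approximation factor is easy to make concrete; the sketch below counts the maximal constant blocks in the value vector of a symmetric Boolean function (the example functions are illustrative).

```python
from itertools import groupby

# A symmetric Boolean function of n variables is determined by its value
# vector v[0..n], where v[k] is the output when exactly k inputs are 1.
# The second algorithm's approximation factor is B - 1, where B is the
# number of maximal blocks of 0's and 1's in this vector.

def num_blocks(value_vector):
    """Number of maximal constant blocks in the value vector."""
    return sum(1 for _ in groupby(value_vector))

# examples: a k-of-n (threshold) function has B = 2; parity on n bits has B = n + 1
n = 6
threshold_3_of_6 = [1 if k >= 3 else 0 for k in range(n + 1)]
parity = [k % 2 for k in range(n + 1)]
print(num_blocks(threshold_3_of_6))   # 2 -> approximation factor 1
print(num_blocks(parity))             # 7 -> approximation factor 6
```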

We investigate the quality of space approximation of a class of stochastic integral equations of convolution type with Gaussian noise. Such equations arise, for example, when considering mild solutions of stochastic fractional-order partial differential equations, but also when considering mild solutions of classical stochastic partial differential equations. The key requirement on the equations is a smoothing property of the deterministic evolution operator, which is typical of parabolic-type problems. We show that if one has access to nonsmooth-data estimates for the deterministic error operator of a space discretization procedure, together with its derivative, then one obtains error estimates in pathwise H\"older norms with rates that can be read off the deterministic error rates. We illustrate the main result by considering a class of stochastic fractional-order partial differential equations and space approximations performed by spectral Galerkin methods and finite elements. We also improve an existing result on the stochastic heat equation.
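
A minimal sketch of a spectral Galerkin space approximation for the stochastic heat equation, the example mentioned at the end of the abstract, assuming Dirichlet boundary conditions on (0, π), noise that is white in the sine basis, and an exact Ornstein-Uhlenbeck step per mode; the truncation level and time grid are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Spectral Galerkin sketch for the stochastic heat equation
#   dX = X_xx dt + dW(t)   on (0, pi), Dirichlet boundary conditions,
# truncated to the first N sine modes.  Each mode coefficient solves
#   d x_k = -k^2 x_k dt + d beta_k
# and is propagated with its exact one-step Ornstein-Uhlenbeck transition.
N, T, steps = 64, 1.0, 200
dt = T / steps
lam = np.arange(1, N + 1) ** 2           # eigenvalues of -d^2/dx^2
x = np.zeros(N)                          # mode coefficients at t = 0
for _ in range(steps):
    decay = np.exp(-lam * dt)
    std = np.sqrt((1.0 - decay ** 2) / (2.0 * lam))
    x = decay * x + std * rng.standard_normal(N)

# evaluate the Galerkin approximation on a spatial grid at time T
grid = np.linspace(0, np.pi, 200)
basis = np.sqrt(2 / np.pi) * np.sin(np.outer(grid, np.arange(1, N + 1)))
X_T = basis @ x
print("max |X(T, x)| on the grid:", np.abs(X_T).max())
```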

We study the off-policy evaluation (OPE) problem in reinforcement learning with linear function approximation, which aims to estimate the value function of a target policy based on the offline data collected by a behavior policy. We propose to incorporate the variance information of the value function to improve the sample efficiency of OPE. More specifically, for time-inhomogeneous episodic linear Markov decision processes (MDPs), we propose an algorithm, VA-OPE, which uses the estimated variance of the value function to reweight the Bellman residual in Fitted Q-Iteration. We show that our algorithm achieves a tighter error bound than the best-known result. We also provide a fine-grained characterization of the distribution shift between the behavior policy and the target policy. Extensive numerical experiments corroborate our theory.
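
As a rough sketch of the reweighting idea (not the VA-OPE algorithm itself), the code below solves one variance-weighted ridge regression of regression targets on linear features, the kind of step used to re-fit the Q-function when the Bellman residual is reweighted by estimated variances; all data, variances, and the regularization parameter are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)

# One variance-weighted regression step: given features phi_i, targets t_i
# (reward plus estimated next-state value under the target policy), and
# estimated variances sigma2_i, solve a weighted ridge regression for the
# Q-function parameters.
d, n_samples, lam = 8, 500, 1.0
phi = rng.standard_normal((n_samples, d))
theta_true = rng.standard_normal(d)
sigma2 = rng.uniform(0.1, 2.0, size=n_samples)             # per-sample variance estimates
targets = phi @ theta_true + np.sqrt(sigma2) * rng.standard_normal(n_samples)

W = 1.0 / sigma2                                           # precision weights
Lambda = phi.T @ (W[:, None] * phi) + lam * np.eye(d)      # weighted Gram matrix
theta_hat = np.linalg.solve(Lambda, phi.T @ (W * targets)) # weighted ridge solution
print("parameter error:", np.linalg.norm(theta_hat - theta_true))
```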

Due to the falling costs of data acquisition and storage, researchers and industry analysts often want to find all instances of rare events in large datasets. For instance, scientists can cheaply capture thousands of hours of video, but are limited by the need to manually inspect long videos to identify relevant objects and events. To reduce this cost, recent work proposes to use cheap proxy models, such as image classifiers, to identify an approximate set of data points satisfying a data selection filter. Unfortunately, this recent work does not provide the statistical accuracy guarantees necessary in scientific and production settings. In this work, we introduce novel algorithms for approximate selection queries with statistical accuracy guarantees. Namely, given a limited number of exact identifications from an oracle, often a human or an expensive machine learning model, our algorithms meet a minimum precision or recall target with high probability. In contrast, existing approaches can catastrophically fail in satisfying these recall and precision targets. We show that our algorithms can improve query result quality by up to 30x for both the precision and recall targets in both real and synthetic datasets.
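
An illustrative sketch (not the paper's algorithms) of how a precision target can be met with a limited oracle budget: estimate precision on the oracle-labeled sample above each candidate proxy-score threshold and accept the lowest threshold whose Hoeffding lower confidence bound clears the target; the data, prevalence, and budget are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic rare-event data with noisy proxy scores and a limited budget of
# exact oracle labels; pick a threshold meeting a precision target w.h.p.
n, budget, target, delta = 100_000, 5_000, 0.9, 0.05
truth = rng.random(n) < 0.05                      # hidden rare-event labels
proxy = 0.7 * truth + 0.3 * rng.random(n)         # cheap proxy-model scores

labeled = rng.choice(n, size=budget, replace=False)   # oracle-labeled sample

best_threshold = None
for thr in np.linspace(0.0, 1.0, 101):
    sample = labeled[proxy[labeled] >= thr]
    if len(sample) == 0:
        continue
    prec_hat = truth[sample].mean()
    lcb = prec_hat - np.sqrt(np.log(1 / delta) / (2 * len(sample)))   # Hoeffding bound
    if lcb >= target:
        best_threshold = thr                      # lowest threshold meeting the target
        break

assert best_threshold is not None, "no threshold met the precision target"
selected = np.flatnonzero(proxy >= best_threshold)
print(f"threshold = {best_threshold:.2f}, selected {len(selected)} records")
```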
