
This paper investigates the problem of best arm identification in $\textit{contaminated}$ stochastic multi-armed bandits. In this setting, the rewards obtained from any arm are replaced by samples from an adversarial model with probability $\varepsilon$. A fixed-confidence (infinite-horizon) setting is considered, where the goal of the learner is to identify the arm with the largest mean. Owing to the adversarial contamination of the rewards, each arm's mean is only partially identifiable. This paper proposes two algorithms for best arm identification in sub-Gaussian bandits: a gap-based algorithm and one based on successive elimination. Both algorithms use mean estimators that asymptotically achieve the optimal error guarantee on the deviation of the estimate from the true mean. Furthermore, both algorithms asymptotically achieve the optimal sample complexity: for the gap-based algorithm, the sample complexity is asymptotically optimal up to constant factors, while for the successive-elimination-based algorithm it is optimal up to logarithmic factors. Finally, numerical experiments are provided to illustrate the gains of the proposed algorithms over existing baselines.
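
To make the elimination-style approach concrete, here is a minimal Python sketch of successive elimination with a trimmed-mean estimator under an assumed contamination level $\varepsilon$. The trimming rule, the $2\varepsilon\sigma$ bias term added to the confidence radius, and the names (`pull`, `trimmed_mean`) are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

def trimmed_mean(samples, eps):
    """Trimmed mean: drop the eps-fraction of smallest and largest samples.
    One standard robust estimator under eps-contamination (illustrative choice)."""
    s = np.sort(np.asarray(samples, dtype=float))
    k = int(np.floor(eps * len(s)))
    return s[k:len(s) - k].mean() if len(s) > 2 * k else s.mean()

def successive_elimination(pull, n_arms, eps, delta, sigma=1.0, max_rounds=10_000):
    """Sample every active arm once per round, estimate means robustly, and
    eliminate arms whose upper confidence bound falls below the largest lower
    confidence bound.  The radius includes a 2*eps*sigma term reflecting the
    partial identifiability of the means under contamination (an assumption)."""
    active = list(range(n_arms))
    samples = {a: [] for a in active}
    for n in range(1, max_rounds + 1):
        for a in active:
            samples[a].append(pull(a))
        rad = sigma * np.sqrt(2 * np.log(4 * n_arms * n**2 / delta) / n) + 2 * eps * sigma
        means = {a: trimmed_mean(samples[a], eps) for a in active}
        best_lcb = max(means[a] - rad for a in active)
        active = [a for a in active if means[a] + rad >= best_lcb]
        if len(active) == 1:
            return active[0]
    return max(active, key=lambda a: trimmed_mean(samples[a], eps))

# Usage with a toy contaminated Gaussian bandit (true means 0.0, 0.2, 0.6).
rng = np.random.default_rng(0)
true_means, eps = [0.0, 0.2, 0.6], 0.02
def pull(a):
    if rng.random() < eps:                      # contaminated (adversarial) sample
        return rng.normal(5.0, 1.0)
    return rng.normal(true_means[a], 1.0)
print(successive_elimination(pull, n_arms=3, eps=eps, delta=0.05))
```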

Related content

Arm Holdings, also known as ARM, is a multinational semiconductor design and software company headquartered in Cambridge, England, United Kingdom. Its main product is the design of ARM-architecture processors, which it licenses to customers in the form of intellectual property; it also provides software development tools.

In this paper, we prove a local limit theorem for the chi-square distribution with $r > 0$ degrees of freedom and noncentrality parameter $\lambda \geq 0$. We use it to develop refined normal approximations for the survival function. Our maximal errors go down to an order of $r^{-2}$, which is significantly smaller than the maximal error bounds of order $r^{-1/2}$ recently found by Horgan & Murphy (2013) and Seri (2015). Our results allow us to drastically reduce the number of observations required to obtain negligible errors in the energy detection problem, from $250$, as recommended in the seminal work of Urkowitz (1967), to only $8$ here with our new approximations. We also obtain upper bounds on several probability metrics between the central and noncentral chi-square distributions and the standard normal distribution, and we obtain an approximation for the median that improves the lower bound previously obtained by Robert (1990).
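
As a point of reference, the sketch below compares the exact noncentral chi-square survival function with the basic two-moment normal approximation (mean $r + \lambda$, variance $2(r + 2\lambda)$). This is the first-order baseline that refined approximations improve upon; the paper's $O(r^{-2})$ corrections themselves are not reproduced here.

```python
import numpy as np
from scipy.stats import ncx2, norm

def sf_normal_approx(x, r, lam):
    """First-order normal approximation to the noncentral chi-square survival
    function, matching the first two moments: mean r + lam, variance 2(r + 2*lam).
    (Baseline approximation only; refined higher-order corrections are omitted.)"""
    mu = r + lam
    sigma = np.sqrt(2.0 * (r + 2.0 * lam))
    return norm.sf((x - mu) / sigma)

# Compare with the exact survival function on a small example.
r, lam, x = 8.0, 2.0, 14.0
print("exact :", ncx2.sf(x, r, lam))
print("approx:", sf_normal_approx(x, r, lam))
```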

User dissatisfaction due to buffering pauses during streaming is a significant cost to the system, which we model as a non-decreasing function of the frequency of buffering pauses. Minimizing the total user dissatisfaction in a multi-channel cellular network leads to a non-convex problem. Utilizing a combinatorial structure in this problem, we first propose a polynomial-time joint admission control and channel allocation algorithm which is provably (almost) optimal. This scheme assumes that the base station (BS) knows the frame statistics of the streams. In a more practical setting, where these statistics are not available a priori at the BS, a learning-based scheme with provable guarantees is developed. This learning-based scheme is related to regret minimization in multi-armed bandits with non-i.i.d. and delayed rewards (costs). All these algorithms require no or minimal feedback from the user equipment to the base station regarding the state of the media player buffer at the application layer, and hence are of practical interest.

In this paper we analyze the behavior of Oja's algorithm for online/streaming principal component subspace estimation. We prove that, with high probability, it achieves an efficient, gap-free, global convergence rate for approximating a principal component subspace of any sub-Gaussian distribution. Moreover, this is the first result showing that the convergence rate, namely the upper bound on the approximation error, matches the lower bound achieved by offline/classical PCA up to a constant factor.
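
For readers unfamiliar with the update, a minimal Python sketch of Oja's iteration for a k-dimensional subspace is given below. The constant step size and the QR re-orthonormalization are illustrative choices; the analysis in the paper may rely on a different step-size schedule.

```python
import numpy as np

def oja_subspace(stream, d, k, eta=0.01):
    """Minimal Oja update for a k-dimensional principal subspace.

    `stream` yields data vectors x in R^d; the estimate W (d x k, orthonormal
    columns) is updated as W <- QR(W + eta * x x^T W).  The fixed step size
    `eta` is an illustrative choice."""
    rng = np.random.default_rng(0)
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))
    for x in stream:
        x = x.reshape(-1, 1)
        W = W + eta * x @ (x.T @ W)
        W, _ = np.linalg.qr(W)          # re-orthonormalize
    return W

# Usage: estimate the top-2 subspace of a toy sub-Gaussian stream.
rng = np.random.default_rng(1)
cov_half = np.linalg.qr(rng.standard_normal((10, 10)))[0] @ np.diag(np.linspace(3, 0.1, 10))
stream = (cov_half @ rng.standard_normal(10) for _ in range(5000))
W_hat = oja_subspace(stream, d=10, k=2)
```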

In this paper we propose a deep learning based numerical scheme for strongly coupled FBSDEs stemming from stochastic control. It is a modification of the deep BSDE method in which the initial value of the backward equation is not a free parameter, and with a new loss function given by the weighted sum of the cost of the control problem and a variance term that coincides with the mean squared error in the terminal condition. We show by a numerical example that a direct extension of the classical deep BSDE method to FBSDEs fails for a simple linear-quadratic control problem, and motivate why the new method works. Under regularity and boundedness assumptions on the exact controls of the time-continuous and time-discrete control problems, we provide an error analysis for our method. We show empirically that the method converges for three different problems, one of them being the one for which the direct extension of the deep BSDE method fails.

We consider stochastic differential equations (SDEs) driven by small L\'evy noise with some unknown parameters, and propose a new type of least squares estimators based on discrete samples from the SDEs. To approximate the increments of a process from the SDEs, we use not the usual Euler method but the Adams method, a well-known numerical approximation to the solution of the ordinary differential equation appearing in the small-noise limit of the SDE. We show the consistency of the proposed estimators as well as their asymptotic distribution under a suitable observation scheme. We also show that our estimators can outperform the usual least squares estimator based on the Euler method in finite-sample performance.
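
A minimal illustration of the idea, under stated assumptions: the contrast below replaces the Euler increment with a two-step Adams-Bashforth approximation of the drift, and the toy data use Gaussian noise as a stand-in for the small Lévy noise. The paper's exact contrast function, observation scheme, and asymptotics are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lse_adams(X, h, drift):
    """Least-squares contrast built from a two-step Adams-Bashforth
    approximation of the drift increment (illustrative):
        X_{k+1} - X_k  ~  h * ( 3/2 * b(X_k, theta) - 1/2 * b(X_{k-1}, theta) ),
    where `drift(x, theta)` is the parametric drift b(x, theta)."""
    def contrast(theta):
        pred = h * (1.5 * drift(X[1:-1], theta) - 0.5 * drift(X[:-2], theta))
        resid = X[2:] - X[1:-1] - pred
        return np.sum(resid**2)
    return minimize_scalar(contrast, bounds=(-10, 10), method="bounded").x

# Toy example: dX_t = -theta * X_t dt + eps * dW_t (Gaussian stand-in for small noise).
rng = np.random.default_rng(0)
theta_true, eps, h, n = 1.5, 0.05, 0.01, 2000
X = np.empty(n); X[0] = 1.0
for k in range(n - 1):
    X[k + 1] = X[k] - theta_true * X[k] * h + eps * np.sqrt(h) * rng.standard_normal()
print(lse_adams(X, h, drift=lambda x, th: -th * x))
```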

We study the non-stationary stochastic multi-armed bandit problem, where the reward statistics of each arm may change several times during the course of learning. The performance of a learning algorithm is evaluated in terms of its dynamic regret, defined as the difference between the expected cumulative reward of an agent that chooses the optimal arm in every round and the cumulative reward of the learning algorithm. One way to measure the hardness of such environments is to consider how many times the identity of the optimal arm can change. We propose a method that achieves, in $K$-armed bandit problems, a near-optimal $\widetilde O(\sqrt{K N(S+1)})$ dynamic regret, where $N$ is the number of rounds and $S$ is the number of times the identity of the optimal arm changes, without prior knowledge of $S$ and $N$. Previous works on this problem obtain regret bounds that scale with the number of changes (or the amount of change) in the reward functions, which can be much larger, or assume prior knowledge of $S$ to achieve similar bounds.
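
To make the dynamic-regret criterion concrete, the sketch below runs a simple sliding-window UCB baseline on a piecewise-stationary Bernoulli bandit and accumulates the per-round gap to the currently optimal arm. The window size is an arbitrary choice and this baseline is not the paper's near-optimal method.

```python
import numpy as np

def sliding_window_ucb(means_over_time, window=200, seed=0):
    """Sliding-window UCB baseline on a piecewise-stationary Bernoulli bandit,
    reporting dynamic regret: the sum over rounds of (best mean at that round
    minus the mean of the arm actually played)."""
    rng = np.random.default_rng(seed)
    N, K = means_over_time.shape
    pulls, rewards, regret = [], [], 0.0
    for t in range(N):
        recent_a = np.array(pulls[-window:])
        recent_r = np.array(rewards[-window:])
        ucb = np.empty(K)
        for a in range(K):
            mask = recent_a == a
            n_a = mask.sum()
            ucb[a] = np.inf if n_a == 0 else (
                recent_r[mask].mean() + np.sqrt(2 * np.log(min(t + 1, window)) / n_a))
        a_t = int(np.argmax(ucb))
        r_t = float(rng.random() < means_over_time[t, a_t])
        pulls.append(a_t); rewards.append(r_t)
        regret += means_over_time[t].max() - means_over_time[t, a_t]
    return regret

# Two arms whose optimal identity switches once (S = 1).
N = 4000
means = np.where(np.arange(N)[:, None] < N // 2, [0.7, 0.4], [0.3, 0.6])
print(sliding_window_ucb(means))
```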

In this paper, we revisit the problem of Differentially Private Stochastic Convex Optimization (DP-SCO) and provide excess population risk bounds for some special classes of functions that are faster than the previous results for general convex and strongly convex functions. In the first part of the paper, we study the case where the population risk function satisfies the Tsybakov Noise Condition (TNC) with some parameter $\theta>1$. Specifically, we first show that under some mild assumptions on the loss functions, there is an algorithm whose output achieves an upper bound of $\tilde{O}((\frac{1}{\sqrt{n}}+\frac{\sqrt{d\log \frac{1}{\delta}}}{n\epsilon})^\frac{\theta}{\theta-1})$ for $(\epsilon, \delta)$-DP when $\theta\geq 2$, where $n$ is the sample size and $d$ is the dimension of the space. Then we address the inefficiency issue, improve the upper bounds by $\text{Poly}(\log n)$ factors, and extend to the case where $\theta\geq \bar{\theta}>1$ for some known $\bar{\theta}$. Next we show that the excess population risk of population risk functions satisfying the TNC with parameter $\theta\geq 2$ is always lower bounded by $\Omega((\frac{d}{n\epsilon})^\frac{\theta}{\theta-1})$ and $\Omega((\frac{\sqrt{d\log \frac{1}{\delta}}}{n\epsilon})^\frac{\theta}{\theta-1})$ for $\epsilon$-DP and $(\epsilon, \delta)$-DP, respectively. In the second part, we focus on a special case where the population risk function is strongly convex. Unlike previous studies, here we assume the loss function is {\em non-negative} and {\em the optimal value of the population risk is sufficiently small}. With these additional assumptions, we propose a new method whose output achieves an upper bound of $O(\frac{d\log\frac{1}{\delta}}{n^2\epsilon^2}+\frac{1}{n^{\tau}})$ for any $\tau\geq 1$ in the $(\epsilon,\delta)$-DP model if the sample size $n$ is sufficiently large.
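
For orientation only, here is a generic noisy-gradient sketch in the spirit of DP-SGD (per-step gradient clipping plus Gaussian noise). The noise multiplier is treated as a given input, the privacy accounting from (clip, sigma, steps) to an $(\epsilon,\delta)$ guarantee is omitted, and this is not the paper's algorithm for the TNC or strongly convex regimes.

```python
import numpy as np

def noisy_gd(grad, x0, n_steps, lr, clip, sigma, seed=0):
    """Generic noisy gradient descent: clip each gradient to norm `clip`, add
    Gaussian noise with scale sigma * clip, and take a step.  The mapping from
    (clip, sigma, n_steps) to a DP guarantee is assumed to be done separately."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        g = grad(x)
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))   # clip
        g = g + sigma * clip * rng.standard_normal(g.shape)    # Gaussian noise
        x = x - lr * g
    return x

# Toy strongly convex objective: f(x) = 0.5 * ||x - b||^2, so grad(x) = x - b.
b = np.array([1.0, -2.0, 0.5])
print(noisy_gd(lambda x: x - b, x0=np.zeros(3), n_steps=500, lr=0.05, clip=1.0, sigma=0.5))
```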

This paper studies a new variant of the stochastic multi-armed bandits problem where auxiliary information about the arm rewards is available in the form of control variates. In many applications like queuing and wireless networks, the arm rewards are functions of some exogenous variables. The mean values of these variables are known a priori from historical data and can be used as control variates. Leveraging the theory of control variates, we obtain mean estimates with smaller variance and tighter confidence bounds. We develop an upper confidence bound based algorithm named UCB-CV and characterize the regret bounds in terms of the correlation between rewards and control variates when they follow a multivariate normal distribution. We also extend UCB-CV to other distributions using resampling methods like Jackknifing and Splitting. Experiments on synthetic problem instances validate the performance guarantees of the proposed algorithms.
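
The variance-reduction step underlying such estimates can be sketched in a few lines: given rewards Y and control variates W with known mean, regress Y on W and subtract the correction. The function name below is illustrative, and the confidence-width construction used by UCB-CV itself is not reproduced here.

```python
import numpy as np

def cv_mean_estimate(y, w, mu_w):
    """Control-variate mean estimate: y_bar - beta_hat * (w_bar - mu_w), where
    beta_hat is the regression coefficient of y on w.  For correlation rho, the
    variance shrinks by roughly a factor (1 - rho^2)."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    beta = np.cov(y, w)[0, 1] / np.var(w, ddof=1)
    return y.mean() - beta * (w.mean() - mu_w)

# Toy check: rewards correlated with a control variate of known mean 0.
rng = np.random.default_rng(0)
w = rng.standard_normal(500)
y = 1.0 + 0.8 * w + 0.3 * rng.standard_normal(500)
print("plain mean :", y.mean())
print("CV estimate:", cv_mean_estimate(y, w, mu_w=0.0))
```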

We develop an \textit{a posteriori} error analysis for the time of the first occurrence of an event, specifically, the time at which a functional of the solution to a partial differential equation (PDE) first achieves a threshold value on a given time interval. This novel quantity of interest (QoI) differs from classical QoIs which are modeled as bounded linear (or nonlinear) functionals. Taylor's theorem and an adjoint-based \textit{a posteriori} analysis is used to derive computable and accurate error estimates for semi-linear parabolic and hyperbolic PDEs. The accuracy of the error estimates is demonstrated through numerical solutions of the one-dimensional heat equation and linearized shallow water equations (SWE), representing parabolic and hyperbolic cases, respectively.

We propose a new method of estimation in topic models that is not a variation on the existing simplex-finding algorithms, and that estimates the number of topics K from the observed data. We derive new finite-sample minimax lower bounds for the estimation of A, as well as new upper bounds for our proposed estimator. We describe the scenarios where our estimator is minimax adaptive. Our finite-sample analysis is valid for any number of documents (n), individual document length (N_i), dictionary size (p) and number of topics (K), and both p and K are allowed to increase with n, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than the current ones, even though it starts with the computational and theoretical disadvantage of not knowing the correct number of topics K, while the competing methods are provided with the correct value in our simulations.
