
In this paper, we study sequential testing problems with \emph{overlapping} hypotheses. We first focus on the simple problem of assessing whether the mean $\mu$ of a Gaussian distribution is smaller or larger than a fixed $\epsilon>0$; if $\mu\in(-\epsilon,\epsilon)$, both answers are considered to be correct. Then, we consider PAC-best arm identification in a bandit model: given $K$ probability distributions on $\mathbb{R}$ with means $\mu_1,\dots,\mu_K$, we derive the asymptotic complexity of identifying, with risk at most $\delta$, an index $I\in\{1,\dots,K\}$ such that $\mu_I\geq \max_i\mu_i -\epsilon$. We provide non-asymptotic bounds on the error of a parallel General Likelihood Ratio Test, which can also be used for more general testing problems. We further propose lower bounds on the number of observations needed to identify a correct hypothesis. These lower bounds rely on information-theoretic arguments, and specifically on two versions of a change of measure lemma (a high-level form and a low-level form) whose relative merits are discussed.
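
As a rough illustration of the first problem (unit-variance Gaussian observations), the sketch below runs the two likelihood-ratio tests in parallel and stops as soon as the evidence against one of the two overlapping hypotheses exceeds a threshold. The stopping threshold $\log((n+1)/\delta)$ and the function names are simple illustrative choices, not the calibrated quantities analyzed in the paper.

```python
import numpy as np

def sequential_eps_test(sample, eps=0.1, delta=0.05, max_n=100_000):
    """Parallel GLR sketch for unit-variance Gaussian observations.
    Answer "mu < eps" is correct whenever mu < eps, answer "mu > -eps" is
    correct whenever mu > -eps; both are correct when mu lies in (-eps, eps).
    The threshold log((n+1)/delta) is an illustrative choice."""
    total, n = 0.0, 0
    while n < max_n:
        total += sample(); n += 1
        mu_hat = total / n
        thresh = np.log((n + 1) / delta)
        # GLR evidence against "mu >= eps" (positive only when mu_hat < eps).
        against_large = 0.5 * n * max(eps - mu_hat, 0.0) ** 2
        # GLR evidence against "mu <= -eps" (positive only when mu_hat > -eps).
        against_small = 0.5 * n * max(mu_hat + eps, 0.0) ** 2
        if against_large > thresh:
            return "mu < eps", n
        if against_small > thresh:
            return "mu > -eps", n
    return "undecided", n

rng = np.random.default_rng(0)
print(sequential_eps_test(lambda: rng.normal(0.3, 1.0), eps=0.1))
```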

Related content

We propose the homotopic policy mirror descent (HPMD) method for solving discounted, infinite horizon MDPs with finite state and action space, and study its policy convergence. We report three properties that seem to be new in the literature of policy gradient methods: (1) The policy first converges linearly, then superlinearly with order $\gamma^{-2}$ to the set of optimal policies, after $\mathcal{O}(\log(1/\Delta^*))$ number of iterations, where $\Delta^*$ is defined via a gap quantity associated with the optimal state-action value function; (2) HPMD also exhibits last-iterate convergence, with the limiting policy corresponding exactly to the optimal policy with the maximal entropy for every state. No regularization is added to the optimization objective and hence the second observation arises solely as an algorithmic property of the homotopic policy gradient method. (3) For the stochastic HPMD method, we demonstrate a better than $\mathcal{O}(|\mathcal{S}| |\mathcal{A}| / \epsilon^2)$ sample complexity for small optimality gap $\epsilon$, when assuming a generative model for policy evaluation.
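
For readers unfamiliar with the underlying update, the following sketch implements plain tabular policy mirror descent with KL geometry, $\pi_{k+1}(\cdot|s)\propto \pi_k(\cdot|s)\exp(\eta_k Q^{\pi_k}(s,\cdot))$, on a toy MDP. The increasing stepsize schedule, the toy transition kernel, and exact policy evaluation by a linear solve are illustrative assumptions; this is not the paper's homotopic schedule or its stochastic variant.

```python
import numpy as np

def q_pi(P, R, pi, gamma):
    """Exact Q^pi for a tabular MDP. P[a, s, t] transition probs, R[s, a] rewards."""
    S, A = R.shape
    r_pi = (pi * R).sum(axis=1)                          # expected reward per state
    P_pi = np.einsum("sa,ast->st", pi, P)                # state transition matrix under pi
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)  # V^pi via linear solve
    return R + gamma * np.einsum("ast,t->sa", P, V)      # Q^pi[s, a]

def policy_mirror_descent(P, R, gamma=0.9, iters=100, eta=0.5):
    """Sketch of unregularized policy mirror descent with KL Bregman divergence."""
    S, A = R.shape
    pi = np.full((S, A), 1.0 / A)                        # uniform initial policy
    for k in range(iters):
        Q = q_pi(P, R, pi, gamma)
        eta_k = eta * (k + 1)                            # increasing stepsizes (illustrative)
        # Multiplicative update pi <- pi * exp(eta_k * Q), stabilized per row.
        logits = np.log(pi + 1e-300) + eta_k * (Q - Q.max(axis=1, keepdims=True))
        logits -= logits.max(axis=1, keepdims=True)
        pi = np.exp(logits)
        pi /= pi.sum(axis=1, keepdims=True)
    return pi

# Toy 2-state, 2-action MDP (made-up numbers, for illustration only).
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(2), size=(2, 2))               # P[a, s, :]
R = rng.random((2, 2))                                   # R[s, a]
print(policy_mirror_descent(P, R).round(3))
```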

The saddlepoint approximation gives an approximation to the density of a random variable in terms of its moment generating function. When the underlying random variable is itself the sum of $n$ unobserved i.i.d. terms, the basic classical result is that the relative error in the density is of order $1/n$. If instead the approximation is interpreted as a likelihood and maximised as a function of model parameters, the result is an approximation to the maximum likelihood estimate (MLE) that can be much faster to compute than the true MLE. This paper proves the analogous basic result for the approximation error between the saddlepoint MLE and the true MLE: subject to certain explicit identifiability conditions, the error has asymptotic size $O(1/n^2)$ for some parameters, and $O(1/n^{3/2})$ or $O(1/n)$ for others. In all three cases, the approximation errors are asymptotically negligible compared to the inferential uncertainty. The proof is based on a factorisation of the saddlepoint likelihood into an exact and approximate term, along with an analysis of the approximation error in the gradient of the log-likelihood. This factorisation also gives insight into alternatives to the saddlepoint approximation, including a new and simpler saddlepoint approximation, for which we derive analogous error bounds. As a corollary of our results, we also obtain the asymptotic size of the MLE error approximation when the saddlepoint approximation is replaced by the normal approximation.
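
A minimal sketch of the classical density approximation underlying these results (not of the saddlepoint MLE itself): for a sum of $n$ i.i.d. Exp(1) variables the exact density is Gamma($n$,1), so the $O(1/n)$ relative error is easy to check numerically. The choice of distribution and of the evaluation point are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import gamma

# Saddlepoint approximation to the density of S_n = X_1 + ... + X_n, X_i ~ Exp(1).
def K(t):   return -np.log1p(-t)          # CGF of Exp(1), valid for t < 1
def Kp(t):  return 1.0 / (1.0 - t)
def Kpp(t): return 1.0 / (1.0 - t) ** 2

def saddlepoint_density(x, n):
    # Solve n * K'(s) = x for the saddlepoint s (a closed form exists here,
    # but we use a root-finder to mimic the general recipe).
    s_hat = brentq(lambda s: n * Kp(s) - x, -50.0, 1.0 - 1e-12)
    return np.exp(n * K(s_hat) - s_hat * x) / np.sqrt(2 * np.pi * n * Kpp(s_hat))

for n in (5, 50, 500):
    x = float(n)                           # evaluate at the mean of S_n
    rel_err = saddlepoint_density(x, n) / gamma.pdf(x, a=n) - 1.0
    print(f"n={n:4d}  relative error = {rel_err:.5f}   (1/(12n) = {1/(12*n):.5f})")
```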

We propose Streaming Bandits, a Restless Multi-Armed Bandit (RMAB) framework in which heterogeneous arms may arrive and leave the system after staying on for a finite lifetime. Streaming Bandits naturally capture the health intervention planning problem, where health workers must manage the health outcomes of a patient cohort while new patients join and existing patients leave the cohort each day. Our contributions are as follows: (1) We derive conditions under which our problem satisfies indexability, a precondition that guarantees the existence and asymptotic optimality of the Whittle Index solution for RMABs. We establish these conditions using a polynomial-time reduction of the Streaming Bandit setup to regular RMABs. (2) We further prove a phenomenon that we call index decay, whereby the Whittle index values are low for short residual lifetimes; this observation drives the intuition underpinning our algorithm. (3) We propose a novel and efficient algorithm to compute the index-based solution for Streaming Bandits. Unlike previous methods, our algorithm does not rely on solving the costly finite-horizon problem on each arm of the RMAB, thereby lowering the computational complexity compared to existing methods. (4) Finally, we evaluate our approach via simulations run on real-world data sets from a tuberculosis patient monitoring task and an intervention planning task for improving maternal healthcare, in addition to other synthetic domains. Across the board, our algorithm achieves a two-orders-of-magnitude speed-up over existing methods while maintaining the same solution quality.
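
For context, the sketch below computes the Whittle index of a single two-state restless arm by the textbook recipe of bisecting on the passivity subsidy, with value iteration as the inner solver. The transition matrices and rewards are made-up toy values, and this standard stationary-arm computation is shown only to make the index concrete; it is not the paper's streaming algorithm.

```python
import numpy as np

def whittle_index(state, P0, P1, r, beta=0.95, lo=-5.0, hi=5.0, tol=1e-6):
    """Whittle index of `state` for one restless arm, via bisection on the
    passivity subsidy. P0/P1 are passive/active transition matrices, r[s] is
    the state reward, beta the discount factor."""

    def action_gap(lam, iters=500):
        # Value iteration for the subsidy-lambda MDP, then compare the two
        # action values at `state`.
        V = np.zeros(len(r))
        for _ in range(iters):
            q_pass = lam + r + beta * P0 @ V
            q_act  =       r + beta * P1 @ V
            V = np.maximum(q_pass, q_act)
        q_pass = lam + r[state] + beta * P0[state] @ V
        q_act  =       r[state] + beta * P1[state] @ V
        return q_pass - q_act

    # The index is the subsidy at which both actions are equally attractive.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if action_gap(mid) < 0:          # active still preferred: raise subsidy
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy two-state arm (illustrative numbers only): state 1 is the "good" state.
P0 = np.array([[0.9, 0.1], [0.4, 0.6]])   # passive: tends to decay
P1 = np.array([[0.3, 0.7], [0.1, 0.9]])   # active: tends to improve
r  = np.array([0.0, 1.0])
print([round(whittle_index(s, P0, P1, r), 4) for s in (0, 1)])
```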

We present and investigate a new type of implicit fractional linear multistep method of order two for fractional initial value problems. The method is obtained from the second-order superconvergence of the Gr\"unwald-Letnikov approximation of the fractional derivative at a non-integer shift point. The proposed method is consistent of order two and coincides with the backward difference method of order two for classical initial value problems when the order of the derivative is one. The weight coefficients of the proposed method are obtained from the Gr\"unwald weights and are hence computationally more efficient than those of the fractional backward difference formula of order two. The stability properties are analyzed, and it is shown that the stability region of the method is larger than those of the fractional Adams-Moulton method of order two and the fractional trapezoidal method. Numerical results and illustrations are presented to justify the analytical theory.
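
To make the shifted Grünwald-Letnikov idea concrete, the sketch below compares the unshifted approximation with the non-integer shift $p=\alpha/2$ on an illustrative smooth test function $f(t)=t^4$ (an assumption made for the demo); the weights come from the usual recurrence $w_0=1$, $w_k = w_{k-1}\,(1-(\alpha+1)/k)$. This is a quick numerical check of the superconvergence the method builds on, not the implicit multistep scheme itself.

```python
import numpy as np
from math import gamma as Gamma

# Grünwald-Letnikov approximation of the Riemann-Liouville derivative D^alpha f
# at x, with a (possibly non-integer) shift p:
#   D^alpha f(x) ~ h^{-alpha} * sum_k w_k * f(x - (k - p) h).
# With p = alpha/2 the truncation error improves from O(h) to O(h^2).

def gl_derivative(f, x, alpha, h, p=0.0):
    N = int(np.floor(x / h + p))           # keep evaluation points in [0, x + p*h]
    w, total = 1.0, 0.0
    for k in range(N + 1):
        total += w * f(x - (k - p) * h)
        w *= 1.0 - (alpha + 1.0) / (k + 1)  # Grünwald weight recurrence
    return total / h ** alpha

alpha = 0.6
f     = lambda t: t ** 4 if t >= 0 else 0.0
exact = lambda t: 24.0 / Gamma(5 - alpha) * t ** (4 - alpha)

x = 1.0
for h in (0.1, 0.05, 0.025):
    e0 = abs(gl_derivative(f, x, alpha, h, p=0.0)       - exact(x))
    e1 = abs(gl_derivative(f, x, alpha, h, p=alpha / 2) - exact(x))
    print(f"h={h:6.3f}   no shift: {e0:.2e}   shift alpha/2: {e1:.2e}")
```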

Let $P$ be a polyhedron defined by a system $A x \leq b$, where $A \in \mathbb{Z}^{m \times n}$, $\mathrm{rank}(A) = n$, and $b \in \mathbb{Z}^{m}$. In the Integer Feasibility Problem, we need to decide whether $P \cap \mathbb{Z}^n = \emptyset$ or to find some $x \in P \cap \mathbb{Z}^n$ in the opposite case. The current state-of-the-art algorithm for this problem, due to \cite{DadushDis,DadushFDim} (see also \cite{Convic,ConvicComp,DConvic} for more general formulations), has the complexity bound $O(n)^n \cdot \mathrm{poly}(\phi)$, where $\phi = \mathrm{size}(A,b)$. It is a long-standing open problem to break the $O(n)^n$ dimension-dependence in the complexity of ILP algorithms. We show that if the matrix $A$ has a small $l_1$ or $l_\infty$ norm, or $A$ is sparse and has bounded elements, then the integer feasibility problem can be solved faster. More precisely, we give the following complexity bounds \begin{gather*} \min\{\|A\|_{\infty}, \|A\|_1\}^{5 n} \cdot 2^n \cdot \mathrm{poly}(\phi), \\ \bigl( \|A\|_{\max} \bigr)^{5 n} \cdot \min\{cs(A),rs(A)\}^{3 n} \cdot 2^n \cdot \mathrm{poly}(\phi). \end{gather*} Here $\|A\|_{\max}$ denotes the maximal absolute value of the elements of $A$, and $cs(A)$ and $rs(A)$ denote the maximal number of nonzero elements in the columns and rows of $A$, respectively. We present similar results for the integer linear counting and optimization problems. Additionally, we apply the last result to multipacking and multicover problems on graphs and hypergraphs, where we need to choose a minimal/maximal multiset of vertices that covers/packs the edges a prescribed number of times. For example, we show that the stable multiset and vertex multicover problems on simple graphs admit FPT algorithms with the complexity bound $2^{O(|V|)} \cdot \mathrm{poly}(\phi)$, where $V$ is the vertex set of a given graph.

The satisfaction probability $\sigma(\phi) := \Pr_{\beta:\mathrm{vars}(\phi) \to \{0,1\}}[\beta\models \phi]$ of a propositional formula $\phi$ is the likelihood that a random assignment $\beta$ makes the formula true. We study the complexity of the problem $k$sat-prob$_{>\delta} = \{ \phi$ is a $k\mathrm{cnf}$ formula $\mid \sigma(\phi) > \delta\}$ for fixed $k$ and $\delta$. While 3sat-prob$_{>0}$ = 3sat is NP-complete and sat-prob$_{>1/2}$ is PP-complete, Akmal and Williams recently showed that 3sat-prob$_{>1/2}$ lies in P and that 4sat-prob$_{>1/2}$ is NP-complete; but the methods used to prove these striking results stay silent about, say, 4sat-prob$_{>1/3}$, leaving the computational complexity of $k$sat-prob$_{>\delta}$ open for most $k$ and $\delta$. In the present paper we give a complete characterization in the form of a trichotomy: $k$sat-prob$_{>\delta}$ lies in AC$^0$, is NL-complete, or is NP-complete; and given $k$ and $\delta$ we can decide which of the three applies. The proof of the trichotomy hinges on a new order-theoretic insight: every set of $k$cnf formulas contains a formula of maximum satisfaction probability. This deceptively simple result allows us to (1) kernelize $k$sat-prob$_{\ge \delta}$, (2) show that the variables of the kernel form a strong backdoor set when the trichotomy states membership in AC$^0$ or NL, and (3) prove a new locality property for the models of second-order formulas that describe problems like $k$sat-prob$_{\ge \delta}$. The locality property allows us to prove a conjecture of Akmal and Williams: the majority-of-majority satisfaction problem for $k$cnfs lies in P for all $k$.
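
To fix notation, $\sigma(\phi)$ can be computed exactly for small formulas by enumerating all assignments; the snippet below does this for a DIMACS-style clause list. The encoding and example formula are illustrative and not tied to the paper's algorithms.

```python
from itertools import product

def satisfaction_probability(clauses, n_vars):
    """sigma(phi) for a CNF formula by brute force over all 2^n assignments
    (only feasible for small n; meant to make the quantity concrete).
    A clause is a tuple of non-zero ints: literal v means variable v is true,
    -v means variable v is false (DIMACS-style)."""
    sat = 0
    for bits in product((False, True), repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            sat += 1
    return sat / 2 ** n_vars

# Example 2-CNF: (x1 or x2) and (not x1 or x3)
phi = [(1, 2), (-1, 3)]
print(satisfaction_probability(phi, 3))   # 4/8 = 0.5
```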

We consider the fixed-budget best arm identification problem in two-armed Gaussian bandits with unknown variances. The tightest lower bound on the complexity and an algorithm whose performance guarantee matches the lower bound have long been open problems when the variances are unknown and when the algorithm is agnostic to the optimal proportion of the arm draws. In this paper, we propose a strategy comprising a sampling rule with randomized sampling (RS) following the estimated target allocation probabilities of arm draws and a recommendation rule using the augmented inverse probability weighting (AIPW) estimator, which is often used in the causal inference literature. We refer to our strategy as the RS-AIPW strategy. In the theoretical analysis, we first derive a large deviation principle for martingales, which can be used when the second moment converges in mean, and apply it to our proposed strategy. Then, we show that the proposed strategy is asymptotically optimal in the sense that the probability of misidentification achieves the lower bound by Kaufmann et al. (2016) when the sample size becomes infinitely large and the gap between the two arms goes to zero.
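
The sketch below gives a rough two-armed version of an RS-AIPW-style strategy: sample according to an estimated target allocation and recommend the arm with the larger AIPW estimate at the end of the budget. The Neyman-style allocation proportional to estimated standard deviations and the forced-exploration floor are illustrative choices, not the paper's exact specification.

```python
import numpy as np

def rs_aipw_two_arms(reward_fns, T, seed=0):
    """Fixed-budget best-arm identification sketch: randomized sampling (RS)
    from an estimated allocation plus an AIPW estimate of each arm's mean."""
    rng = np.random.default_rng(seed)
    K = 2
    sums, sqs, counts = np.zeros(K), np.zeros(K), np.zeros(K)
    aipw = np.zeros(K)

    for t in range(T):
        # Plug-in mean/variance estimates from past data only.
        means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
        varis = np.where(counts > 1, sqs / np.maximum(counts, 1) - means ** 2, 1.0)
        stds  = np.sqrt(np.maximum(varis, 1e-12))

        # RS: estimated target allocation with a forced-exploration floor.
        pi = stds / stds.sum()
        pi = np.clip(pi, 0.1, 0.9); pi /= pi.sum()

        a = rng.choice(K, p=pi)
        x = reward_fns[a](rng)

        # AIPW increment: inverse-propensity-weighted residual plus plug-in mean.
        for k in range(K):
            aipw[k] += ((a == k) * (x - means[k]) / pi[k] + means[k]) / T

        sums[a] += x; sqs[a] += x ** 2; counts[a] += 1

    return int(np.argmax(aipw)), aipw

arms = [lambda g: g.normal(0.0, 1.0), lambda g: g.normal(0.2, 2.0)]
best, est = rs_aipw_two_arms(arms, T=5000)
print("recommended arm:", best, "AIPW estimates:", est.round(3))
```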

This paper studies the spectral estimation problem of estimating the locations of a fixed number of point sources given multiple snapshots of Fourier measurements collected by a uniform array of sensors. We prove novel stability bounds for MUSIC and ESPRIT as a function of the noise standard deviation, number of snapshots, source amplitudes, and support. Our most general result is a perturbation bound of the signal space in terms of the minimum singular value of Fourier matrices. When the point sources are located in several separated clumps, we provide an explicit upper bound on the noise-space correlation perturbation error in MUSIC and the support error in ESPRIT in terms of a Super-Resolution Factor (SRF). The upper bound for ESPRIT is then compared with a new Cram\'er-Rao lower bound for the clumps model. As a result, we show that the performance of ESPRIT is comparable to that of the optimal unbiased estimator(s) in terms of the dependence on the noise, the number of snapshots, and the SRF. As a byproduct of our analysis, we discover several fundamental differences between the single-snapshot and multi-snapshot problems. Our theory is validated by numerical experiments.
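
For concreteness, the snippet below runs a generic multi-snapshot MUSIC estimator on synthetic uniform-linear-array data; the frequencies, amplitudes, snapshot count, and noise level are made-up values chosen only to make the measurement model concrete, and this is not the specific analysis of the paper.

```python
import numpy as np

# Minimal MUSIC sketch: M sensors, L snapshots, S point sources on [0, 1).
rng = np.random.default_rng(0)
M, L, sigma = 32, 200, 0.05
freqs = np.array([0.20, 0.25, 0.60])                      # true source locations
S = len(freqs)

A = np.exp(2j * np.pi * np.outer(np.arange(M), freqs))    # M x S steering matrix
X = A @ (rng.standard_normal((S, L)) + 1j * rng.standard_normal((S, L)))
X += sigma * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))

# Noise subspace from the eigendecomposition of the sample covariance.
R = X @ X.conj().T / L
eigvals, eigvecs = np.linalg.eigh(R)                      # ascending eigenvalues
noise_space = eigvecs[:, : M - S]

# MUSIC pseudospectrum: large where steering vectors are nearly orthogonal
# to the noise subspace.
grid = np.linspace(0.0, 1.0, 2000, endpoint=False)
steer = np.exp(2j * np.pi * np.outer(np.arange(M), grid))
pseudo = 1.0 / np.linalg.norm(noise_space.conj().T @ steer, axis=0) ** 2

# Pick the S largest local maxima of the (periodic) pseudospectrum.
peaks = np.where((pseudo > np.roll(pseudo, 1)) & (pseudo > np.roll(pseudo, -1)))[0]
top = peaks[np.argsort(pseudo[peaks])[-S:]]
print("estimated:", np.sort(grid[top]).round(4), "  true:", freqs)
```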

We present a framework for a controlled Markov chain in which the state of the chain is observed only at chosen observation times, and each observation incurs a cost. Optimal strategies therefore involve the choice of observation times as well as the subsequent control values. We show that the corresponding value function satisfies a dynamic programming principle, which leads to a system of quasi-variational inequalities (QVIs). Next, we give an extension where the model parameters are not known a priori but are inferred from the costly observations by Bayesian updates. We then prove a comparison principle for a larger class of QVIs, which implies uniqueness of solutions to our proposed problem. We utilise penalty methods to obtain arbitrarily accurate solutions. Finally, we perform numerical experiments on three applications that illustrate our framework.

In this paper we study the frequentist convergence rate for Latent Dirichlet Allocation (Blei et al., 2003) topic models. We show that the maximum likelihood estimator converges to one of the finitely many equivalent parameters in the Wasserstein distance at a rate of $n^{-1/4}$, without assuming separability or non-degeneracy of the underlying topics and/or the existence of more than three words per document, thus generalizing the previous works of Anandkumar et al. (2012, 2014) from an information-theoretic perspective. We also show that the $n^{-1/4}$ convergence rate is optimal in the worst case.
