
We study the task of $(\epsilon, \delta)$-differentially private online convex optimization (OCO). In the online setting, the release of each distinct decision or iterate carries with it the potential for privacy loss. This problem has a long history of research, starting with Jain et al. [2012], and the best known results for the regime of very small $\epsilon$ are presented in Agarwal et al. [2023]. In this paper we improve upon the results of Agarwal et al. [2023] in terms of the dimension factors and remove the requirement of smoothness. Our results are now the best known rates for DP-OCO in this regime. Our algorithm builds upon the work of Asi et al. [2023], which introduced the idea of explicitly limiting the number of switches via rejection sampling. The main innovation in our algorithm is sampling from a strongly log-concave density, which allows us to trade off the dimension factors more effectively, leading to improved results.
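
To make the switch-limiting idea concrete, below is a minimal Python sketch of a correlated-sampling step in the spirit of Asi et al. [2023]: the learner keeps its previous iterate with probability given by a density ratio, and only occasionally redraws from a (here Gaussian, hence strongly log-concave) sampling distribution. The function names and the Gaussian choice are illustrative assumptions, and the rejection step shown is a simplification of the exact residual-resampling scheme.

```python
import numpy as np

def log_density(w, theta, lam):
    """Unnormalized log-density of the sampling distribution: a Gaussian
    exp(-lam * ||w - theta||^2 / 2) centred at theta, i.e. a strongly
    log-concave density.  (Illustrative choice, not the paper's exact one.)"""
    return -0.5 * lam * np.linalg.norm(w - theta) ** 2

def switch_limited_step(w_prev, theta_prev, theta_new, lam, rng):
    """Correlated-sampling step: keep the previous iterate with probability
    min(1, p_new(w_prev) / p_prev(w_prev)); otherwise redraw from p_new.
    When consecutive densities are close, switches are rare, so few distinct
    iterates are released.  NOTE: the exact scheme resamples rejections from
    a residual distribution; redrawing from p_new is a simplification."""
    log_ratio = log_density(w_prev, theta_new, lam) - log_density(w_prev, theta_prev, lam)
    if np.log(rng.uniform()) <= min(0.0, log_ratio):
        return w_prev, False          # no switch
    w_new = theta_new + rng.normal(size=theta_new.shape) / np.sqrt(lam)
    return w_new, True                # switch to a fresh sample from p_new
```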

Related Content

This paper presents a recipe for deriving new PAC-Bayes generalisation bounds based on the $(f, \Gamma)$-divergence and, in addition, PAC-Bayes generalisation bounds that interpolate between a series of probability divergences (including, but not limited to, KL, Wasserstein, and total variation), making the best out of many worlds depending on the posterior distribution's properties. We explore the tightness of these bounds and connect them to earlier results from statistical learning, which arise as special cases. We also instantiate our bounds as training objectives, yielding non-trivial guarantees and practical performance.
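
For orientation, a classical bound of the kind such interpolation frameworks aim to recover as a special case is the KL-based PAC-Bayes bound of McAllester (in Maurer's form); this is a standard statement, not quoted from the paper. Here $R$ is the population risk, $\widehat{R}_n$ the empirical risk on $n$ i.i.d. samples, $\pi$ a data-free prior, and $\rho$ any posterior:

```latex
% With probability at least 1 - \delta over the sample, simultaneously
% for all posteriors \rho:
\mathbb{E}_{h \sim \rho}\bigl[R(h)\bigr]
  \;\le\;
\mathbb{E}_{h \sim \rho}\bigl[\widehat{R}_n(h)\bigr]
  + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\bigl(2\sqrt{n}/\delta\bigr)}{2n}} .
```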

Let $X$ be a $d$-dimensional simplicial complex. A function $F\colon X(k)\to \{0,1\}^k$ is said to be a direct product function if there exists a function $f\colon X(1)\to \{0,1\}$ such that $F(\sigma) = (f(\sigma_1), \ldots, f(\sigma_k))$ for each $k$-face $\sigma$. In an effort to simplify components of the PCP theorem, Goldreich and Safra introduced the problem of direct product testing, which asks whether one can test if $F\colon X(k)\to \{0,1\}^k$ is correlated with a direct product function by querying $F$ on only $2$ inputs. Dinur and Kaufman conjectured that there exist bounded degree complexes with a direct product test in the small soundness regime. We resolve their conjecture by showing that for all $\delta>0$, there exists a family of high-dimensional expanders with degree $O_{\delta}(1)$ and a $2$-query direct product tester with soundness $\delta$. We use the characterization given by a subset of the authors and independently by Dikstein and Dinur, who showed that some form of non-Abelian coboundary expansion (which they called "Unique-Games coboundary expansion") is a necessary and sufficient condition for a complex to admit such direct product testers. Our main technical contribution is a general technique for showing coboundary expansion of complexes with coefficients in a non-Abelian group. This allows us to prove that the high-dimensional expanders constructed by Chapman and Lubotzky satisfy the necessary conditions, thus admitting a 2-query direct product tester with small soundness.
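
The 2-query test itself is easy to state in code. The Python sketch below checks the natural consistency condition: query $F$ on two overlapping $k$-faces and accept iff the answers agree on the shared vertices. The uniform sampling of the second face and the data layout (`F` as a dict from face to vertex-labelled bits) are toy assumptions; actual testers sample pairs via the complex's structure.

```python
import random

def direct_product_test(F, faces, trials=1000, rng=random):
    """Accept iff F is consistent on `trials` random overlapping pairs of
    k-faces.  `faces` is a list of k-faces (tuples of vertices); F maps a
    face to a dict {vertex: bit}.  If F is a direct product (f, ..., f),
    the test always accepts; an inconsistency is a witness against it."""
    for _ in range(trials):
        A = rng.choice(faces)
        overlapping = [B for B in faces if B != A and set(A) & set(B)]
        if not overlapping:
            continue
        B = rng.choice(overlapping)       # toy sampler; real testers sample
        common = set(A) & set(B)          # via the complex's structure
        if any(F[A][v] != F[B][v] for v in common):
            return False
    return True
```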

The frame scaling problem is: given vectors $U := \{u_{1}, ..., u_{n} \} \subseteq \mathbb{R}^{d}$, marginals $c \in \mathbb{R}^{n}_{++}$, and precision $\varepsilon > 0$, find left and right scalings $L \in \mathbb{R}^{d \times d}, r \in \mathbb{R}^n$ such that $(v_1,\dots,v_n) := (Lu_1 r_1,\dots,Lu_nr_n)$ simultaneously satisfies $\sum_{i=1}^n v_i v_i^{\mathsf{T}} = I_d$ and $\|v_{j}\|_{2}^{2} = c_{j}, \forall j \in [n]$, up to error $\varepsilon$. This problem has appeared in a variety of fields throughout linear algebra and computer science. In this work, we give a strongly polynomial algorithm for frame scaling with $\log(1/\varepsilon)$ convergence. This answers a question of Diakonikolas, Tzamos and Kane (STOC 2023), who gave the first strongly polynomial randomized algorithm with poly$(1/\varepsilon)$ convergence for the special case $c = \frac{d}{n} 1_{n}$. Our algorithm is deterministic, applies for general $c \in \mathbb{R}^{n}_{++}$, and requires $O(n^{3} \log(n/\varepsilon))$ iterations as compared to $O(n^{5} d^{11}/\varepsilon^{5})$ iterations of DTK. By lifting the framework of Linial, Samorodnitsky and Wigderson (Combinatorica 2000) for matrix scaling to frames, we are able to simplify both the algorithm and analysis. Our main technical contribution is to generalize the potential analysis of LSW to the frame setting and compute an update step in strongly polynomial time that achieves geometric progress in each iteration. In fact, we can adapt our results to give an improved analysis of strongly polynomial matrix scaling, reducing the $O(n^{5} \log(n/\varepsilon))$ iteration bound of LSW to $O(n^{3} \log(n/\varepsilon))$. Additionally, we prove a novel bound on the size of approximate frame scaling solutions, involving the condition measure $\bar{\chi}$ studied in the linear programming literature, which may be of independent interest.
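
As a point of reference, the natural alternating (Sinkhorn-style) iteration for frame scaling is easy to implement: alternately rescale each vector to match its marginal $c_j$, then whiten by the inverse square root of the frame operator. The Python sketch below is this plain baseline under the feasibility assumption $\sum_j c_j = d$; it is not the strongly polynomial algorithm of the paper.

```python
import numpy as np

def frame_scaling_baseline(U, c, eps=1e-8, max_iter=10000):
    """Alternating scaling baseline: right step matches the marginals
    ||v_j||^2 = c_j exactly, left step whitens the frame operator
    sum_j v_j v_j^T toward I_d.  U: (n, d) with rows u_1..u_n;
    c: (n,) positive marginals, feasibility requires sum(c) = d."""
    n, d = U.shape
    L = np.eye(d)
    r = np.ones(n)
    for _ in range(max_iter):
        V = (U @ L.T) * r[:, None]          # rows v_j = L u_j r_j
        r *= np.sqrt(c) / np.linalg.norm(V, axis=1)   # right step
        V = (U @ L.T) * r[:, None]
        M = V.T @ V                          # frame operator
        if np.linalg.norm(M - np.eye(d)) < eps:
            return L, r
        w, Q = np.linalg.eigh(M)
        L = (Q * w ** -0.5) @ Q.T @ L        # left step: L <- M^{-1/2} L
    return L, r
```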

On an orientable surface $S$, consider a collection $\Gamma$ of closed curves. The (geometric) intersection number $i_S(\Gamma)$ is the minimum number of self-intersections that a collection $\Gamma'$ can have, where $\Gamma'$ results from a continuous deformation (homotopy) of $\Gamma$. We provide algorithms that compute $i_S(\Gamma)$ and such a $\Gamma'$, assuming that $\Gamma$ is given by a collection of closed walks of length $n$ in a graph $M$ cellularly embedded on $S$, in $O(n \log n)$ time when $M$ and $S$ are fixed. The state of the art is a paper of Despr\'e and Lazarus [SoCG 2017, J. ACM 2019], who compute $i_S(\Gamma)$ in $O(n^2)$ time, and $\Gamma'$ in $O(n^4)$ time if $\Gamma$ is a single closed curve. Our result is more general since we can put an arbitrary number of closed curves in minimal position. Also, our algorithms are quasi-linear in $n$ instead of quadratic and quartic, and our proofs are simpler and shorter. We use techniques from two-dimensional topology and from the theory of hyperbolic surfaces. Most notably, we prove a new property of the reducing triangulations introduced by Colin de Verdi\`ere, Despr\'e, and Dubois [SODA 2024], reducing our problem to the case of surfaces with boundary. As a key subroutine, we rely on an algorithm of Fulek and T\'oth [JCO 2020].

We consider a high-dimensional stochastic contextual linear bandit problem when the parameter vector is $s_{0}$-sparse and the decision maker is subject to privacy constraints under both central and local models of differential privacy. We present PrivateLASSO, a differentially private LASSO bandit algorithm. PrivateLASSO is based on two sub-routines: (i) a sparse hard-thresholding-based privacy mechanism and (ii) an episodic thresholding rule for identifying the support of the parameter vector $\theta$. We prove minimax private lower bounds and establish privacy and utility guarantees for PrivateLASSO in the central model under standard assumptions.
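
To illustrate the flavor of sub-routine (i), here is a hedged Python sketch of a noisy hard-thresholding mechanism: perturb an estimate with Gaussian noise calibrated to an assumed $\ell_2$-sensitivity, then keep the $s_0$ largest coordinates (a post-processing step, so privacy is preserved). The function name, the Gaussian-mechanism choice, and the sensitivity parameter are illustrative assumptions, not PrivateLASSO's exact mechanism.

```python
import numpy as np

def noisy_hard_threshold(theta_hat, s0, sensitivity, eps, delta, rng):
    """Add Gaussian noise calibrated to the estimate's L2-sensitivity
    (standard Gaussian mechanism for (eps, delta)-DP), then keep the s0
    largest coordinates in magnitude.  The top-s0 selection is
    post-processing of the private vector, so the privacy guarantee
    carries over."""
    d = theta_hat.shape[0]
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    noisy = theta_hat + rng.normal(scale=sigma, size=d)
    support = np.argsort(np.abs(noisy))[-s0:]       # estimated support
    out = np.zeros(d)
    out[support] = noisy[support]
    return out, np.sort(support)
```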

Attention computation takes both $O(n^2)$ time and $O(n^2)$ space simultaneously, which makes deploying Large Language Models (LLMs) in streaming applications with long contexts demand substantial computational resources. At the recent OpenAI DevDay (Nov 6, 2023), OpenAI released a new model able to support a 128K-token document; in this paper, we focus on the memory-efficiency issue when the context length $n$ is much greater than 128K ($n \gg 2^d$). Considering a single-layer self-attention with Query, Key, and Value matrices $Q, K, V \in \mathbb{R}^{n \times d}$, the polynomial method approximates the attention output $T \in \mathbb{R}^{n \times d}$. It accomplishes this by constructing $U_1, U_2 \in \mathbb{R}^{n \times t}$ to expedite the computation of attention ${\sf Attn}(Q, K, V)$ within $n^{1+o(1)}$ time. Despite this, computing the approximated attention matrix $U_1U_2^\top \in \mathbb{R}^{n \times n}$ still necessitates $O(n^2)$ space, leading to significant memory usage. In response to these challenges, we introduce a new algorithm that reads the data in a single streaming pass. This method employs sublinear space $o(n)$ to store three sketch matrices, alleviating the need to store $K$ and $V$ exactly. Notably, our algorithm exhibits exceptional memory-efficient performance with super-long tokens: as the token length $n$ increases, our error guarantee diminishes while the memory usage remains nearly constant. This unique attribute underscores the potential of our technique for efficiently handling LLMs in streaming applications.
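
The one-pass pattern can be illustrated with a small feature-map sketch: maintain running sums $S = \sum_i \phi(k_i) v_i^\top$ and $z = \sum_i \phi(k_i)$, and answer a query $q$ as $\phi(q)^\top S / \phi(q)^\top z$, so memory is $O(td)$ regardless of stream length $n$. The Python sketch below uses generic positive random features as a stand-in for the paper's polynomial-method sketches; the class name and feature map are assumptions.

```python
import numpy as np

class StreamingAttention:
    """One-pass, stream-length-independent attention sketch: stream
    (k_i, v_i) pairs, maintain S = sum_i phi(k_i) v_i^T and
    z = sum_i phi(k_i), and answer queries as phi(q)^T S / (phi(q)^T z).
    phi is a positive random feature map approximating the softmax
    kernel exp(q.k); memory is O(t*d), independent of n."""
    def __init__(self, d, t, rng):
        self.W = rng.normal(size=(t, d)) / np.sqrt(d)  # random projections
        self.S = np.zeros((t, d))
        self.z = np.zeros(t)

    def _phi(self, x):
        return np.exp(self.W @ x - np.dot(x, x) / 2)   # positive features

    def update(self, k, v):                            # one streamed token
        f = self._phi(k)
        self.S += np.outer(f, v)
        self.z += f

    def query(self, q):
        f = self._phi(q)
        return (f @ self.S) / (f @ self.z)  # approx. softmax attention row
```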

We consider approximating so-called tame functions, a class of nonsmooth, nonconvex functions, with piecewise polynomial functions. Tame functions appear in a wide range of applications: functions encountered in the training of deep neural networks with all common activations, value functions of mixed-integer programs, and wave functions of small molecules. We bound the quality of approximation of a tame function by a piecewise polynomial function with a given number of segments on any full-dimensional cube. We also present the first mixed-integer programming formulation of piecewise polynomial regression. Together, these can be used to estimate tame functions. We demonstrate promising computational results.
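
One standard way to write piecewise polynomial regression as a mixed-integer program uses assignment binaries and big-$M$ constraints. The formulation below is an illustrative sketch of this pattern and may differ from the paper's formulation; in practice the absolute value is split into two linear inequalities, and constraints ordering the segments along the domain are omitted here.

```latex
% Data (x_i, y_i), i = 1..N; K segments, each with polynomial
% coefficients a_k = (a_{k0}, ..., a_{km}); binaries z_{ik} assign
% point i to segment k; M is a sufficiently large constant.
\begin{aligned}
\min_{a,\,z,\,e}\ & \sum_{i=1}^{N} e_i \\
\text{s.t.}\ & \Bigl|\, y_i - \sum_{j=0}^{m} a_{kj}\, x_i^{\,j} \Bigr|
  \le e_i + M\,(1 - z_{ik}) && \forall\, i, k, \\
& \sum_{k=1}^{K} z_{ik} = 1, \quad z_{ik} \in \{0,1\}, \quad e_i \ge 0
  && \forall\, i, k.
\end{aligned}
```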

We propose a Riemannian gradient descent with the Poincar\'e metric to compute the order-$\alpha$ Augustin information, a widely used quantity for characterizing exponential error behaviors in information theory. We prove that the algorithm converges to the optimum at a rate of $\mathcal{O}(1 / T)$. To the best of our knowledge, this is the first algorithm with a non-asymptotic optimization error guarantee for all positive orders. Numerical experiments demonstrate the empirical efficiency of the algorithm. Our result is based on a novel hybrid analysis of Riemannian gradient descent for functions that are geodesically convex in one Riemannian metric and geodesically smooth in another.
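
For contrast with the proposed method, a generic first-order baseline for the same problem is entropic mirror descent (exponentiated gradient) on the simplex, using the closed-form gradient of the Augustin objective $f(q) = \sum_x p(x)\, D_\alpha(W(\cdot|x)\,\|\,q)$. The Python sketch below is that baseline, not the paper's Poincar\'e-metric Riemannian descent; the step size and iteration count are arbitrary.

```python
import numpy as np

def augustin_objective(q, W, p, alpha):
    """f(q) = sum_x p(x) D_alpha(W(.|x) || q), with rows of W the
    conditional distributions; valid for alpha > 0, alpha != 1."""
    Z = (W ** alpha) @ (q ** (1.0 - alpha))
    return (p @ np.log(Z)) / (alpha - 1.0)

def augustin_mirror_descent(W, p, alpha, eta=0.1, iters=2000):
    """Entropic mirror descent (exponentiated gradient) on the simplex
    for the Augustin objective -- a generic first-order baseline."""
    n = W.shape[1]
    q = np.full(n, 1.0 / n)
    for _ in range(iters):
        Z = (W ** alpha) @ (q ** (1.0 - alpha))
        # closed form: df/dq_y = -sum_x p_x W(y|x)^alpha q_y^(-alpha) / Z_x
        grad = -(((W ** alpha) / Z[:, None]).T @ p) * q ** (-alpha)
        q = q * np.exp(-eta * grad)
        q /= q.sum()                       # project back to the simplex
    return q
```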

A popular and flexible time series model for counts is the generalized integer autoregressive process of order $p$, GINAR($p$). These Markov processes are defined using thinning operators evaluated on past values of the process along with a discretely-valued innovation process. This class includes the commonly used INAR($p$) process, defined with binomial thinning and Poisson innovations. GINAR processes can be used in a variety of settings, including modeling time series with low counts, and allow for more general mean-variance relationships, capturing both over- and under-dispersion. While there are many thinning operators and innovation processes in the literature, less attention has been paid to comparing statistical inference and forecasting procedures across different choices of GINAR process. We provide an extensive study of exact and approximate inference and forecasting methods that can be applied to a wide class of GINAR($p$) processes with general thinning and innovation parameters. We discuss the challenges of exact estimation when $p$ is large. We summarize and extend asymptotic results for estimators of process parameters, and present simulations to compare small-sample performance, highlighting how different methods compare. We illustrate this methodology by fitting GINAR processes to a disease surveillance series.
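
The simplest member of this class, the INAR(1) process with binomial thinning and Poisson innovations, is easy to simulate, as the Python sketch below shows; the function name and parameter values are illustrative.

```python
import numpy as np

def simulate_inar1(alpha, lam, T, rng):
    """Simulate the classical INAR(1) special case of a GINAR process:
    X_t = alpha o X_{t-1} + eps_t, where 'o' is binomial thinning
    (each of the X_{t-1} counts survives independently w.p. alpha)
    and eps_t ~ Poisson(lam) are the innovations."""
    X = np.zeros(T, dtype=int)
    X[0] = rng.poisson(lam)
    for t in range(1, T):
        survivors = rng.binomial(X[t - 1], alpha)    # binomial thinning
        X[t] = survivors + rng.poisson(lam)          # add innovation
    return X

# usage: X = simulate_inar1(alpha=0.6, lam=2.0, T=500,
#                           rng=np.random.default_rng(0))
# the stationary mean is lam / (1 - alpha) = 5 here
```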

We introduce a test for the conditional independence of random variables $X$ and $Y$ given a random variable $Z$, specifically by sampling from the joint distribution $(X,Y,Z)$, binning the support of the distribution of $Z$, and conducting multiple $p$-Wasserstein two-sample tests. Under a $p$-Wasserstein Lipschitz assumption on the conditional distributions $\mathcal{L}_{X|Z}$, $\mathcal{L}_{Y|Z}$, and $\mathcal{L}_{(X,Y)|Z}$, we show that it is possible to control the Type I and Type II error of this test, and give examples of explicit finite-sample error bounds in the case where the distribution of $Z$ has compact support.
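
A toy instantiation of the bin-and-compare recipe for scalar $X, Y, Z$ might look like the Python sketch below: bin $Z$ by quantiles, and within each bin compare observed pairs against decoupled pairs using a 1-D sliced Wasserstein statistic calibrated by permutation, with a Bonferroni correction across bins. The slicing trick, thresholds, and bin counts are all assumptions standing in for the paper's $p$-Wasserstein two-sample tests.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def ci_test_sketch(X, Y, Z, n_bins=8, n_perm=200, level=0.05, seed=0):
    """Reject 'X independent of Y given Z' if any Z-bin shows dependence.
    Within each quantile bin of Z, observed pairs (X, Y) are compared to
    decoupled pairs (X, permuted Y) -- an empirical proxy for the product
    of the conditional marginals -- via a 1-D sliced Wasserstein statistic
    with a permutation null and a Bonferroni correction across bins."""
    rng = np.random.default_rng(seed)
    edges = np.quantile(Z, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, Z, side="right") - 1, 0, n_bins - 1)
    theta = rng.normal(size=2)
    theta /= np.linalg.norm(theta)          # random 1-D slice of (X, Y)

    def stat(x, y):
        coupled = theta[0] * x + theta[1] * y
        decoupled = theta[0] * x + theta[1] * rng.permutation(y)
        return wasserstein_distance(coupled, decoupled)

    pvals = []
    for b in range(n_bins):
        x, y = X[bins == b], Y[bins == b]
        if len(x) < 20:                     # skip underpopulated bins
            continue
        obs = stat(x, y)
        null = [stat(x, rng.permutation(y)) for _ in range(n_perm)]
        pvals.append((1 + sum(s >= obs for s in null)) / (n_perm + 1))
    return any(p < level / max(len(pvals), 1) for p in pvals)
```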
