
A multi-user private data compression problem is studied. A server has access to a database of $N$ files, $(Y_1,\ldots,Y_N)$, each of size $F$ bits, and is connected to an encoder. The encoder is connected through an unsecured link to a user. We assume that each file $Y_i$ is arbitrarily correlated with a private attribute $X$, which is assumed to be accessible by the encoder. Moreover, an adversary is assumed to have access to the link. The user and the encoder have access to a shared secret key $W$. We assume that at each time $i$ the user asks for a file $Y_{d_i}$, where $(d_1,\ldots,d_K)$ corresponds to the demand vector. The goal is to design the delivered message $\mathcal{C}=(\mathcal{C}_1,\ldots,\mathcal{C}_K)$, produced after the user sends his demands to the encoder, such that the average length of $\mathcal{C}$ is minimized while satisfying: i. the message $\mathcal{C}$ does not reveal any information about $X$, i.e., $X$ and $\mathcal{C}$ are independent, which corresponds to the perfect privacy constraint; ii. the user is able to decode his demands, $Y_{d_i}$, using $\mathcal{C}$ and the shared key $W$. Here, the encoder sequentially encodes each demand $Y_{d_i}$ at time $i$, using the shared key and the previously encoded messages. We propose a variable-length coding scheme that uses privacy-aware compression techniques. We study the proposed upper and lower bounds on the average length of $\mathcal{C}$ in an example. Finally, we study an application to cache-aided networks.
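As a minimal sketch of why a shared key gives perfect privacy (not the paper's construction), a one-time pad over an already-compressed file makes the transmitted bits uniform and hence independent of $X$; the closing comment notes the length-leakage issue that makes variable-length private compression genuinely harder. All identifiers below are illustrative:

```python
import secrets

def encode(file_bits: bytes, key: bytes) -> bytes:
    """XOR the compressed file with a fresh portion of the shared key
    (one-time pad): for a fixed length, the output bits are uniform and
    independent of the plaintext, hence of any correlated attribute X."""
    assert len(key) >= len(file_bits)
    return bytes(b ^ k for b, k in zip(file_bits, key))

def decode(cipher: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(cipher, key))

# toy demand: the user asks for file Y_d; the encoder pads it with key W
Y_d = b"compressed-representation-of-Y_d"   # stand-in for a prefix-free codeword
W = secrets.token_bytes(len(Y_d))           # shared secret key
C = encode(Y_d, W)
assert decode(C, W) == Y_d
# Caveat: with variable-length codes the *length* of C can still depend on
# the demand and attribute, which is why the paper needs privacy-aware
# compression rather than a plain one-time pad.
```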

Related Content


We study a federated linear bandits model, where $M$ clients communicate with a central server to solve a linear contextual bandits problem with finite adversarial action sets that may be different across clients. To address the unique challenges of adversarial finite action sets, we propose the FedSupLinUCB algorithm, which extends the principles of SupLinUCB and OFUL algorithms in linear contextual bandits. We prove that FedSupLinUCB achieves a total regret of $\tilde{O}(\sqrt{d T})$, where $T$ is the total number of arm pulls from all clients, and $d$ is the ambient dimension of the linear model. This matches the minimax lower bound and thus is order-optimal (up to polylog terms). We study both asynchronous and synchronous cases and show that the communication cost can be controlled as $O(d M^2 \log(d)\log(T))$ and $O(\sqrt{d^3 M^3} \log(d))$, respectively. The FedSupLinUCB design is further extended to two scenarios: (1) variance-adaptive, where a total regret of $\tilde{O} (\sqrt{d \sum \nolimits_{t=1}^{T} \sigma_t^2})$ can be achieved with $\sigma_t^2$ being the noise variance of round $t$; and (2) adversarial corruption, where a total regret of $\tilde{O}(\sqrt{dT} + d C_p)$ can be achieved with $C_p$ being the total corruption budget. Experimental results corroborate the theoretical analysis and demonstrate the effectiveness of FedSupLinUCB on both synthetic and real-world datasets.
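For orientation, here is a minimal single-client version of the LinUCB/OFUL selection step that SupLinUCB-style algorithms build on; in FedSupLinUCB, the $(A, b)$ sufficient statistics below are what clients would synchronize with the server, under a rule that controls communication. Names and parameters are illustrative, not the paper's code:

```python
import numpy as np

def linucb_choose(A, b, arms, alpha):
    """One OFUL/LinUCB-style step: ridge estimate plus confidence width.
    A: d x d Gram matrix (lambda*I + sum of x x^T), b: d-vector (sum of r*x),
    arms: this round's (possibly adversarial) candidate feature vectors."""
    A_inv = np.linalg.inv(A)
    theta = A_inv @ b                                # ridge estimate of the model
    ucb = [x @ theta + alpha * np.sqrt(x @ A_inv @ x) for x in arms]
    return int(np.argmax(ucb))

# toy usage: d = 5, one round with 10 candidate actions
rng = np.random.default_rng(0)
d = 5
A, b = np.eye(d), np.zeros(d)
arms = [rng.normal(size=d) for _ in range(10)]
i = linucb_choose(A, b, arms, alpha=1.0)
x, r = arms[i], 0.3                                  # observed reward
A += np.outer(x, x); b += r * x                      # client-side statistics
# A federated variant would ship these statistics to the server only when
# local information has grown enough, which is what bounds communication.
```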

We revisit the noisy binary search model of Karp and Kleinberg, in which we have $n$ coins with unknown probabilities $p_i$ that we can flip. The coins are sorted by increasing $p_i$, and we would like to find where the probability crosses a target value $\tau$, to within $\varepsilon$. This generalizes the fixed-noise model of Burnashev and Zigangirov, in which $p_i = \frac{1}{2} \pm \varepsilon$, to a setting where coins near the target may be indistinguishable from it. Karp and Kleinberg showed that $\Theta(\frac{1}{\varepsilon^2} \log n)$ samples are necessary and sufficient for this task. We produce a practical algorithm by solving two theoretical challenges: high-probability behavior and sharp constants. We give an algorithm that succeeds with probability $1-\delta$ from \[ \frac{1}{C_{\tau, \varepsilon}} \cdot \left(\lg n + O\left(\log^{2/3} n \log^{1/3} \frac{1}{\delta} + \log \frac{1}{\delta}\right)\right) \] samples, where $C_{\tau, \varepsilon}$ is the optimal such constant achievable. For $\delta > n^{-o(1)}$ this is within $1 + o(1)$ of optimal, and for $\delta \ll 1$ it is the first bound within constant factors of optimal.
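A simple variant of noisy binary search is easy to sketch (it attains the $\Theta(\frac{1}{\varepsilon^2}\log n)$ rate but not the paper's sharp constant $C_{\tau,\varepsilon}$): each comparison repeats the middle coin enough times that its empirical mean lands on the correct side of $\tau$ with high probability. A toy version with made-up parameters:

```python
import math, random

def noisy_binary_search(flip, n, tau, eps, delta=1e-3):
    """Return an index i with p_i within ~eps of tau, assuming p is sorted.
    Simple O(eps^-2 log n) variant, not the paper's sharp-constant algorithm:
    by Hoeffding, m samples put the empirical mean within eps of p_mid with
    probability 1 - delta/rounds, so each comparison is correct whenever the
    mid coin is more than eps away from tau."""
    lo, hi = 0, n - 1
    rounds = math.ceil(math.log2(n)) + 1
    m = math.ceil(2 / eps**2 * math.log(2 * rounds / delta))
    while lo < hi:
        mid = (lo + hi) // 2
        p_hat = sum(flip(mid) for _ in range(m)) / m
        if p_hat < tau:
            lo = mid + 1
        else:
            hi = mid
    return lo

# toy instance: p_i increases linearly, target tau = 0.5
n = 1000
p = [i / n for i in range(n)]
idx = noisy_binary_search(lambda i: random.random() < p[i], n, tau=0.5, eps=0.05)
assert abs(p[idx] - 0.5) < 0.1
```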

Fine-tuning language models (LMs) has yielded success on diverse downstream tasks, but as LMs grow in size, backpropagation requires a prohibitively large amount of memory. Zeroth-order (ZO) methods can in principle estimate gradients using only two forward passes but are theorized to be catastrophically slow for optimizing large models. In this work, we propose a memory-efficient zeroth-order optimizer (MeZO), adapting the classical ZO-SGD method to operate in-place, thereby fine-tuning LMs with the same memory footprint as inference. For example, with a single A100 80GB GPU, MeZO can train a 30-billion-parameter model, whereas fine-tuning with backpropagation can train only a 2.7B LM with the same budget. We conduct comprehensive experiments across model types (masked and autoregressive LMs), model scales (up to 66B), and downstream tasks (classification, multiple-choice, and generation). Our results demonstrate that (1) MeZO significantly outperforms in-context learning and linear probing; (2) MeZO achieves comparable performance to fine-tuning with backpropagation across multiple tasks, with up to 12x memory reduction and up to 2x GPU-hour reduction in our implementation; (3) MeZO is compatible with both full-parameter and parameter-efficient tuning techniques such as LoRA and prefix tuning; (4) MeZO can effectively optimize non-differentiable objectives (e.g., maximizing accuracy or F1). We support our empirical findings with theoretical insights, highlighting how adequate pre-training and task prompts enable MeZO to fine-tune huge models, despite classical ZO analyses suggesting otherwise.
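The core trick is compact enough to sketch: two forward passes along a random direction give a finite-difference gradient estimate, and regenerating that direction from its seed means it is never stored. A toy numpy version under these assumptions, not the authors' implementation:

```python
import numpy as np

def mezo_step(theta, loss_fn, lr=1e-3, eps=1e-3, seed=0):
    """One in-place MeZO (SPSA-style) step: two forward passes, no gradient
    storage. The perturbation z is regenerated from its seed instead of
    being kept in memory, so the footprint matches inference."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(theta.shape)
    theta += eps * z                             # perturb in place
    loss_plus = loss_fn(theta)
    theta -= 2 * eps * z                         # opposite perturbation
    loss_minus = loss_fn(theta)
    theta += eps * z                             # restore parameters
    g = (loss_plus - loss_minus) / (2 * eps)     # projected gradient estimate
    rng = np.random.default_rng(seed)            # regenerate z from the seed
    theta -= lr * g * rng.standard_normal(theta.shape)
    return theta

# toy check: minimize a quadratic; the iterate should approach zero
theta = np.ones(10) * 5.0
for t in range(2000):
    theta = mezo_step(theta, lambda w: float(w @ w), lr=1e-2, seed=t)
print(float(theta @ theta))
```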

Tensor decomposition is an important tool for multiway data analysis. In practice, the data is often sparse yet associated with rich temporal information. Existing methods, however, often under-use the time information and ignore the structural knowledge within the sparsely observed tensor entries. To overcome these limitations and to better capture the underlying temporal structure, we propose Dynamic EMbedIngs fOr dynamic Tensor dEcomposition (DEMOTE). We develop a neural diffusion-reaction process to estimate dynamic embeddings for the entities in each tensor mode. Specifically, based on the observed tensor entries, we build a multi-partite graph to encode the correlation between the entities. We construct a graph diffusion process to co-evolve the embedding trajectories of the correlated entities and use a neural network to construct a reaction process for each individual entity. In this way, our model can capture both the commonalities and personalities during the evolution of the embeddings for different entities. We then use a neural network to model the entry value as a nonlinear function of the embedding trajectories. For model estimation, we combine ODE solvers to develop a stochastic mini-batch learning algorithm. We propose a stratified sampling method to balance the cost of processing each mini-batch so as to improve the overall efficiency. We show the advantage of our approach in both simulation studies and real-world applications. The code is available at //github.com/wzhut/Dynamic-Tensor-Decomposition-via-Neural-Diffusion-Reaction-Processes.
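As a rough illustration of the diffusion-reaction idea (not the DEMOTE code; the reaction network $g$ and the graph here are made up), one explicit Euler step of $\mathrm{d}U/\mathrm{d}t = -\alpha L U + g(U)$ on a graph Laplacian looks like:

```python
import numpy as np

def diffusion_reaction_step(U, A, W1, W2, alpha=0.5, dt=0.01):
    """One Euler step of dU/dt = -alpha * L U + g(U): graph diffusion couples
    the embeddings of correlated entities, while a small per-entity network g
    adds an individual reaction term. U: (num_entities, dim) embeddings,
    A: adjacency of the multi-partite graph built from observed entries."""
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                          # combinatorial graph Laplacian
    reaction = np.tanh(U @ W1) @ W2               # toy two-layer reaction network
    return U + dt * (-alpha * (L @ U) + reaction)

rng = np.random.default_rng(0)
n, d, h = 6, 4, 8                                 # entities, embedding dim, hidden
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T                    # symmetric, no self-loops
U = rng.normal(size=(n, d))
W1, W2 = rng.normal(size=(d, h)) * 0.1, rng.normal(size=(h, d)) * 0.1
for _ in range(100):                              # integrate the trajectory
    U = diffusion_reaction_step(U, A, W1, W2)
```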

In a typical formulation of the private information retrieval (PIR) problem, a single user wishes to retrieve one out of $K$ files from $N$ servers without revealing the demanded file index to any server. This paper formulates an extended model of PIR, referred to as multi-message private computation (MM-PC), where instead of retrieving a single file, the user wishes to retrieve $P>1$ linear combinations of files while preserving the privacy of the demand information. The MM-PC problem is a generalization of the private computation (PC) problem (where the user requests one linear combination of the files) and the multi-message private information retrieval (MM-PIR) problem (where the user requests $P>1$ files). A baseline achievable scheme either repeats the optimal PC scheme by Sun and Jafar $P$ times, or treats each possible demanded linear combination as an independent file and then uses the near-optimal MM-PIR scheme by Banawan and Ulukus. In this paper, we propose an achievable MM-PC scheme that significantly improves upon the baseline scheme. To do so, we design the queries inspired by the structure of the cache-aided scalar linear function retrieval scheme by Wan et al., which leverages the dependency between messages to reduce the amount of communication. To ensure the decodability of our scheme, we propose a new method to benefit from the existing dependency, referred to as the sign assignment step. In the end, we use Maximum Distance Separable matrices to code the queries, which reduces the download from the servers while preserving privacy.
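The role of the MDS coding step can be illustrated in isolation: with a Vandermonde (MDS) matrix, any $P$ of the $N$ coded server responses determine the $P$ demanded combinations, so the user need not download all of them. A toy real-valued sketch (actual PIR schemes work over finite fields; all values here are invented):

```python
import numpy as np

# Toy MDS (Vandermonde) coding over the reals: any P of the N coded symbols
# suffice to recover the P originals, the property used to cut download.
P, N = 3, 5
rng = np.random.default_rng(0)
msgs = rng.normal(size=P)                        # P demanded linear combinations
V = np.vander(np.arange(1, N + 1), P, increasing=True).astype(float)  # N x P
coded = V @ msgs                                 # one coded symbol per server
servers = [4, 1, 3]                              # responses from any P servers
rec = np.linalg.solve(V[servers], coded[servers])
assert np.allclose(rec, msgs)                    # exact recovery from P answers
```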

Federated learning (FL) aims at keeping client data local to preserve privacy. Instead of gathering the data itself, the server only collects aggregated gradient updates from clients. Following the popularity of FL, there has been a considerable amount of work revealing the vulnerability of FL approaches by reconstructing the input data from gradient updates. Yet, most existing works assume an FL setting with an unrealistically small batch size, and yield poor image quality when the batch size is large. Other works modify the neural network architectures or parameters to the point of being suspicious, and thus can be detected by clients. Moreover, most of them can only reconstruct one sample input from a large batch. To address these limitations, we propose a novel and completely analytical approach, referred to as maximum knowledge orthogonality reconstruction (MKOR), to reconstruct clients' input data. Our proposed method reconstructs images of mathematically proven high quality from large batches. MKOR only requires the server to send secretly modified parameters to clients and can efficiently and inconspicuously reconstruct the input images from clients' gradient updates. We evaluate MKOR's performance on the MNIST, CIFAR-100, and ImageNet datasets and compare it with state-of-the-art works. The results show that MKOR outperforms the existing approaches and draws attention to a pressing need for further research on the privacy protection of FL, so that comprehensive defense approaches can be developed.
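For background on why gradients leak inputs at all: for a fully connected layer $y = Wx + b$, $\partial L/\partial W = (\partial L/\partial y)\, x^{\top}$ and $\partial L/\partial b = \partial L/\partial y$, so dividing a row of the weight gradient by the corresponding bias gradient recovers $x$ exactly for a single-sample batch. The sketch below shows only this classic building block, not MKOR's batched analytical attack:

```python
import numpy as np

# Single-sample leakage from a fully connected layer y = W x + b:
# dL/dW = (dL/dy) x^T and dL/db = dL/dy, so any row i with (dL/db)_i != 0
# gives x = (dL/dW)_i / (dL/db)_i exactly. MKOR's contribution is making
# batched reconstruction work analytically; this only shows the basic leak.
rng = np.random.default_rng(0)
x = rng.normal(size=16)                   # client's private input
dL_dy = rng.normal(size=4)                # upstream gradient at the layer output
grad_W = np.outer(dL_dy, x)               # what the server sees in the update
grad_b = dL_dy
x_rec = grad_W[0] / grad_b[0]             # reconstruct the input from gradients
assert np.allclose(x_rec, x)
```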

A private compression design problem is studied, where an encoder observes useful data $Y$ and wishes to compress it using a variable-length code and communicate it through an unsecured channel. Since $Y$ is correlated with a private attribute $X$, the encoder uses a private compression mechanism to design the encoded message $\mathcal{C}$ and sends it over the channel. An adversary is assumed to have access to the output of the encoder, i.e., $\mathcal{C}$, and tries to estimate $X$. Furthermore, it is assumed that both the encoder and the decoder have access to a shared secret key $W$. The design goal is to encode the message $\mathcal{C}$ with the minimum possible average length while satisfying a perfect privacy constraint. To do so, we first consider two different privacy mechanism design problems and find upper bounds on the entropy of the optimizers by solving a linear program. We use the obtained optimizers to design $\mathcal{C}$. In two cases we strengthen the existing bounds: 1. $|\mathcal{X}|\geq |\mathcal{Y}|$; 2. The realization of $(X,Y)$ follows a specific joint distribution. In particular, in the second case we use two-part construction coding to achieve the upper bounds. Furthermore, in a numerical example we study the obtained bounds and show that they can improve the existing results.
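To make the linear-programming flavor concrete, here is a toy LP (not the paper's program, which bounds the entropy of the optimizers): it picks a randomized mechanism $p(u|y)$ maximizing the probability of revealing $Y$ subject to perfect privacy, in a $|\mathcal{X}| < |\mathcal{Y}|$ regime where nontrivial utility survives. The distributions are invented for the demo:

```python
import numpy as np
from scipy.optimize import linprog

P_y_x = np.array([[0.6, 0.3, 0.1],      # P(y | x = 0)
                  [0.1, 0.3, 0.6]])     # P(y | x = 1)
p_y = 0.5 * P_y_x[0] + 0.5 * P_y_x[1]   # marginal of Y under uniform X
nu = ny = 3                              # variable p(u|y) sits at index u*ny + y

c = np.zeros(nu * ny)
for y in range(ny):
    c[y * ny + y] = -p_y[y]              # maximize P(U = Y) = sum_y P(y) p(y|y)

A_eq, b_eq = [], []
for y in range(ny):                      # each p(.|y) must be a distribution
    row = np.zeros(nu * ny)
    row[[u * ny + y for u in range(nu)]] = 1.0
    A_eq.append(row); b_eq.append(1.0)
diff = P_y_x[0] - P_y_x[1]
for u in range(nu):                      # perfect privacy: P(u|x=0) = P(u|x=1)
    row = np.zeros(nu * ny)
    row[[u * ny + y for y in range(ny)]] = diff
    A_eq.append(row); b_eq.append(0.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1))
print("best P(U = Y) under perfect privacy:", -res.fun)   # 0.65 for this demo
```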

A private compression design problem is studied, where an encoder observes useful data $Y$ and wishes to compress it using a variable-length code and communicate it through an unsecured channel. Since $Y$ is correlated with private data $X$, the encoder uses a private compression mechanism to design the encoded message $\mathcal{C}$ and sends it over the channel. An adversary is assumed to have access to the output of the encoder, i.e., $\mathcal{C}$, and tries to estimate $X$. Furthermore, it is assumed that both the encoder and the decoder have access to a shared secret key $W$. In this work, we generalize the perfect privacy (secrecy) assumption and consider a non-zero leakage between the private data $X$ and the encoded message $\mathcal{C}$. The design goal is to encode the message $\mathcal{C}$ with the minimum possible average length that satisfies non-perfect privacy constraints. We find upper and lower bounds on the average length of the encoded message using different privacy metrics and study them in special cases. For the achievability we use two-part construction coding and extended versions of the Functional Representation Lemma. Lastly, in an example we show that the bounds can be asymptotically tight.
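Non-perfect privacy constraints in this line of work are typically of the form $I(X;\mathcal{C}) \leq \epsilon$ for some leakage budget $\epsilon$. A small helper for evaluating such a constraint on a toy joint distribution (illustrative numbers only):

```python
import numpy as np

def mutual_information(p_xc):
    """I(X; C) in bits from a joint pmf p_xc[x, c]; used here to check a
    non-perfect privacy constraint I(X; C) <= eps on a toy mechanism."""
    px = p_xc.sum(axis=1, keepdims=True)
    pc = p_xc.sum(axis=0, keepdims=True)
    mask = p_xc > 0
    return float((p_xc[mask] * np.log2(p_xc[mask] / (px @ pc)[mask])).sum())

# toy joint distribution of private X and encoded message C
p = np.array([[0.30, 0.20],
              [0.20, 0.30]])
leak = mutual_information(p)
print(f"I(X;C) = {leak:.4f} bits")        # > 0: non-zero but bounded leakage
assert leak <= 0.05                        # example privacy budget eps
```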

In the theory of lossy compression, the rate-distortion (R-D) function $R(D)$ describes how much a data source can be compressed (in bit-rate) at any given level of fidelity (distortion). Obtaining $R(D)$ for a given data source establishes the fundamental performance limit for all compression algorithms. We propose a new method to estimate $R(D)$ from the perspective of optimal transport. Unlike the classic Blahut--Arimoto algorithm which fixes the support of the reproduction distribution in advance, our Wasserstein gradient descent algorithm learns the support of the optimal reproduction distribution by moving particles. We prove its local convergence and analyze the sample complexity of our R-D estimator based on a connection to entropic optimal transport. Experimentally, we obtain comparable or tighter bounds than state-of-the-art neural network methods on low-rate sources while requiring considerably less tuning and computational effort. We also highlight a connection to maximum-likelihood deconvolution and introduce a new class of sources that can be used as test cases with known solutions to the R-D problem.
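A minimal 1-D sketch of the particle idea, assuming squared-error distortion and uniform particle weights: descend the R-D Lagrangian objective $L(y) = -\mathbb{E}_x \log \frac{1}{m}\sum_j e^{-\lambda (x - y_j)^2}$ by moving the reproduction particles $y_j$, rather than fixing their support as Blahut--Arimoto does. Not the paper's implementation:

```python
import numpy as np

def rd_particle_descent(x, m=16, lam=4.0, step=0.1, iters=500, seed=0):
    """Gradient descent on the R-D Lagrangian over particle locations:
    dL/dy_j = -E_x[ w_j(x) * 2*lam*(x - y_j) ], where w_j(x) is the softmax
    responsibility of particle j for source sample x."""
    rng = np.random.default_rng(seed)
    y = rng.choice(x, size=m)                        # init particles on data
    for _ in range(iters):
        d2 = (x[:, None] - y[None, :]) ** 2          # (N, m) squared distances
        w = np.exp(-lam * d2)
        w /= w.sum(axis=1, keepdims=True)            # responsibilities
        grad = -(w * 2 * lam * (x[:, None] - y)).mean(axis=0)
        y -= step * grad                             # move the particles
    return y

x = np.random.default_rng(1).normal(size=2000)       # Gaussian source samples
y = rd_particle_descent(x)
print(np.sort(y))                                    # learned reproduction points
```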

We consider maximizing a monotone, submodular set function $f: 2^{[n]} \rightarrow [0,1]$ under stochastic bandit feedback. Specifically, $f$ is unknown to the learner, but at each time $t=1,\dots,T$ the learner chooses a set $S_t \subset [n]$ with $|S_t| \leq k$ and receives reward $f(S_t) + \eta_t$, where $\eta_t$ is mean-zero sub-Gaussian noise. The objective is to minimize the learner's regret over $T$ rounds with respect to a $(1-e^{-1})$-approximation of the maximum $f(S_*)$ with $|S_*| = k$, obtained through greedy maximization of $f$. To date, the best regret bound in the literature scales as $k n^{1/3} T^{2/3}$; by trivially treating every set as a unique arm, one deduces that $\sqrt{\binom{n}{k} T}$ is also achievable. In this work, we establish the first minimax lower bound for this setting, which scales like $\min_{i \le k}\left(i n^{1/3}T^{2/3} + \sqrt{n^{k-i}T}\right)$. Moreover, we propose an algorithm that matches this lower bound.
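The greedy benchmark and the $k n^{1/3} T^{2/3}$-style baseline are easy to sketch together: an explore-then-commit greedy that, for each of the $k$ slots, samples every candidate's marginal value $m$ times and keeps the empirically best arm. A toy version on a noisy coverage function (all parameters illustrative):

```python
import numpy as np

def explore_then_greedy(noisy_f, n, k, m, seed=0):
    """Greedy maximization of a monotone submodular f under bandit feedback:
    for each of the k slots, pull every candidate set S + {a} m times and
    keep the empirically best arm. This is the classic baseline the paper's
    lower bound is measured against, not the paper's matching algorithm."""
    S = []
    for _ in range(k):
        cands = [a for a in range(n) if a not in S]
        means = [np.mean([noisy_f(S + [a]) for _ in range(m)]) for a in cands]
        S.append(cands[int(np.argmax(means))])
    return S

# toy submodular function: weighted coverage, with Gaussian reward noise
rng = np.random.default_rng(2)
cover = [set(rng.choice(50, size=8, replace=False)) for _ in range(20)]
f = lambda S: len(set().union(*[cover[a] for a in S])) / 50 if S else 0.0
noisy = lambda S: f(S) + rng.normal(scale=0.05)
print(explore_then_greedy(noisy, n=20, k=4, m=30))
```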
