
This work addresses the problem of revenue maximization in a repeated, unlimited supply item-pricing auction while preserving buyer privacy. We present a novel algorithm that provides differential privacy with respect to the buyer's input pair: item selection and bid. Notably, our algorithm is the first to offer a sublinear $O(\sqrt{T}\log{T})$ regret with a privacy guarantee. Our method is based on an exponential weights meta-algorithm, and we mitigate the issue of discontinuities in revenue functions via small random perturbations. As a result of its structural similarity to the exponential mechanism, our method inherently secures differential privacy. We also extend our algorithm to accommodate scenarios where buyers strategically bid over successive rounds. The inherent differential privacy allows us to adapt our algorithm with minimal modification to ensure a sublinear regret in this setting.
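As a rough illustration of the exponential-weights pricing loop with perturbations described above, the following Python sketch posts prices sampled in proportion to exponentiated cumulative revenue (structurally an exponential mechanism) and perturbs prices and bids slightly to smooth revenue discontinuities. The price grid, learning rate, and update rule are our own simplifications for exposition, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_posted_pricing(bids, price_grid, eta=0.5, perturb=0.01):
    """Sketch of an exponential-weights pricing loop with small price perturbations.

    bids:       per-round buyer bids (revealed after each round in this toy setup).
    price_grid: candidate posted prices.
    eta:        learning rate; in an exponential-mechanism reading it also
                controls the privacy level of each sampling step.
    """
    price_grid = np.asarray(price_grid, dtype=float)
    cum_rev = np.zeros(len(price_grid))          # cumulative counterfactual revenue
    total = 0.0
    for b in bids:
        # Sample a price with probability proportional to exp(eta * cumulative revenue);
        # this sampling step is structurally the exponential mechanism.
        w = np.exp(eta * (cum_rev - cum_rev.max()))
        p = price_grid[rng.choice(len(price_grid), p=w / w.sum())]
        posted = p + rng.uniform(-perturb, perturb)   # smooths the revenue discontinuity
        total += posted if b >= posted else 0.0
        # Credit every candidate price with the revenue it would have earned
        # against a slightly perturbed bid (full-information style update).
        b_tilde = b + rng.uniform(-perturb, perturb)
        cum_rev += np.where(price_grid <= b_tilde, price_grid, 0.0)
    return total
```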

Related content

Federated Learning (FL) allows machine learning models to train locally on individual mobile devices, synchronizing model updates via a shared server. This approach safeguards user privacy; however, it also generates a heterogeneous training environment due to the varying performance capabilities across devices. As a result, straggler devices with lower performance often dictate the overall training time in FL. In this work, we aim to alleviate this performance bottleneck due to stragglers by dynamically balancing the training load across the system. We introduce Invariant Dropout, a method that extracts a sub-model based on the weight update threshold, thereby minimizing potential impacts on accuracy. Building on this dropout technique, we develop an adaptive training framework, Federated Learning using Invariant Dropout (FLuID). FLuID offers a lightweight sub-model extraction to regulate computational intensity, thereby reducing the load on straggler devices without affecting model quality. Our method leverages neuron updates from non-straggler devices to construct a tailored sub-model for each straggler based on client performance profiling. Furthermore, FLuID can dynamically adapt to changes in stragglers as runtime conditions shift. We evaluate FLuID using five real-world mobile clients. The evaluations show that Invariant Dropout maintains baseline model accuracy while alleviating the performance bottleneck of stragglers through a dynamic, runtime approach.
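The following Python sketch illustrates the core idea of a threshold-based sub-model: keep the neurons whose aggregated update magnitude is largest and drop the (approximately invariant) rest. The quantile-based threshold, the `keep_fraction` parameter, and the layer-slicing helper are hypothetical simplifications, not FLuID's actual extraction procedure.

```python
import numpy as np

def invariant_dropout_mask(global_update, keep_fraction):
    """Build a sub-model mask from aggregated non-straggler weight updates.

    Neurons whose update magnitude falls below a threshold are treated as
    (approximately) invariant and dropped for the straggler, keeping roughly
    `keep_fraction` of the neurons. keep_fraction would come from client
    performance profiling (hypothetical here).
    """
    magnitudes = np.abs(global_update)                    # per-neuron update magnitude
    threshold = np.quantile(magnitudes, 1.0 - keep_fraction)
    return magnitudes >= threshold                        # True = keep neuron

def extract_submodel(layer_weights, mask):
    """Slice a dense layer's rows down to the kept neurons for a straggler device."""
    return layer_weights[mask]

# Hypothetical usage: keep the 60% most-changing neurons for a slow device.
update = np.random.randn(512)
mask = invariant_dropout_mask(update, keep_fraction=0.6)
sub_weights = extract_submodel(np.random.randn(512, 128), mask)
```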

We study the measure of order-competitive ratio introduced by Ezra et al. [2023] for online algorithms in Bayesian combinatorial settings. In our setting, a decision-maker observes a sequence of elements associated with stochastic rewards drawn from known priors, but revealed one by one in an online fashion. The decision-maker needs to decide upon the arrival of each element whether to select it or discard it (according to some feasibility constraint), and receives the associated rewards of the selected elements. The order-competitive ratio is defined as the worst-case ratio (over all distribution sequences) between the performance of the best order-unaware and order-aware algorithms, and quantifies the loss incurred due to the lack of knowledge of the arrival order. Ezra et al. [2023] showed how to design algorithms that achieve better approximations with respect to the new benchmark (order-competitive ratio) in the single-choice setting, which raises the natural question of whether the same can be achieved in combinatorial settings. In particular, whether it is possible to achieve a constant approximation with respect to the best online algorithm for downward-closed feasibility constraints, whether an $\omega(1/n)$-approximation is achievable for general (non-downward-closed) feasibility constraints, or whether a convergence rate to $1$ of $o(1/\sqrt{k})$ is achievable for the multi-unit setting. We show, by devising novel constructions that may be of independent interest, that for all three scenarios the asymptotic lower bounds with respect to the old benchmark also hold with respect to the new benchmark.
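In symbols (notation ours, a paraphrase of the verbal definition above rather than the exact formulation of Ezra et al. [2023]): for a sequence of priors $\vec{D}$ and an arrival order $\sigma$, an order-unaware algorithm $\mathrm{ALG}$ attains order-competitive ratio

$$\Gamma(\mathrm{ALG}) \;=\; \inf_{\vec{D},\,\sigma}\; \frac{\mathbb{E}\!\left[\mathrm{ALG}(\vec{D},\sigma)\right]}{\sup_{\mathrm{ALG}'\ \text{order-aware}} \mathbb{E}\!\left[\mathrm{ALG}'(\vec{D},\sigma)\right]},$$

and the benchmark asks for the order-unaware algorithm maximizing $\Gamma$.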

Observational studies are frequently used to estimate the effect of an exposure or treatment on an outcome. To obtain an unbiased estimate of the treatment effect, it is crucial to measure the exposure accurately. A common type of exposure misclassification is recall bias, which occurs in retrospective cohort studies when study subjects may inaccurately recall their past exposure. Specifically, differential recall bias can be problematic when examining the effect of a self-reported binary exposure since the magnitude of recall bias can differ between groups. In this paper, we provide the following contributions: 1) we derive bounds for the average treatment effect (ATE) in the presence of recall bias; 2) we develop several estimation approaches under different identification strategies; 3) we conduct simulation studies to evaluate their performance under several scenarios of model misspecification; 4) we propose a sensitivity analysis method that can examine the robustness of our results with respect to different assumptions; and 5) we apply the proposed framework to an observational study, estimating the effect of childhood physical abuse on adulthood mental health.
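For concreteness (notation ours; the specific bounds and identification assumptions are those developed in the paper): with potential outcomes $Y(1), Y(0)$, true binary exposure $A$, and self-reported exposure $A^*$, the estimand is

$$\tau \;=\; \mathbb{E}[Y(1)] - \mathbb{E}[Y(0)],$$

while differential recall bias means that $\Pr(A^* = 1 \mid A = a, Y = y)$ may depend on the outcome $y$, so $\tau$ is in general only partially identified from data on $(A^*, Y)$ and covariates, which is what motivates the bounds and sensitivity analysis above.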

We establish a framework of random inverse problems with real-time observations over graphs, and present a decentralized online learning algorithm based on online data streams, which unifies distributed parameter estimation in Hilbert space and the least mean square problem in reproducing kernel Hilbert space (RKHS-LMS). We transform the algorithm convergence into the asymptotic stability of randomly time-varying difference equations in Hilbert space with L2-bounded martingale difference terms and develop the L2-asymptotic stability theory. It is shown that if the network graph is connected and the sequence of forward operators satisfies the infinite-dimensional spatio-temporal persistence of excitation condition, then the estimates of all nodes are mean square and almost surely strongly consistent. By equivalently transferring the distributed learning problem in RKHS to the random inverse problem over graphs, we propose a decentralized online learning algorithm in RKHS based on non-stationary and non-independent online data streams, and prove that the algorithm is mean square and almost surely strongly consistent if the operators induced by the random input data satisfy the infinite-dimensional spatio-temporal persistence of excitation condition.
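A finite-dimensional, Euclidean-space caricature of one consensus-plus-innovation step of this kind is sketched below in Python (the paper works with random time-varying operators in Hilbert space/RKHS; the variable names and the specific step rule are ours).

```python
import numpy as np

def decentralized_lms_step(estimates, W, H, y, step):
    """One consensus-plus-innovation update for distributed parameter estimation.

    estimates: (n_nodes, d) current estimates x_i of the unknown parameter
    W:         (n_nodes, n_nodes) weight matrix of the (connected) communication graph
    H:         (n_nodes, d) current regression/forward vectors H_{i,t}
    y:         (n_nodes,) noisy local observations y_{i,t} ~ <H_{i,t}, x*> + noise
    step:      decaying step size a_t
    """
    # Consensus term: sum_j W_ij (x_j - x_i) for every node i.
    consensus = W @ estimates - estimates * W.sum(axis=1, keepdims=True)
    # Innovation term: local LMS correction H_{i,t} (y_{i,t} - <H_{i,t}, x_i>).
    residual = y - np.einsum("id,id->i", H, estimates)
    innovation = H * residual[:, None]
    return estimates + step * (consensus + innovation)
```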

We consider the problem of private computation (PC) in a distributed storage system. In such a setting a user wishes to compute a function of $f$ messages replicated across $n$ noncolluding databases, while revealing no information about the desired function to the databases. We provide an information-theoretic achievable PC rate, defined as the ratio of the smallest desired amount of information to the total amount of downloaded information, for the scenario of nonlinear computation. For a large message size the rate equals the PC capacity, i.e., the maximum achievable PC rate, when the candidate functions are the $f$ independent messages and one arbitrary nonlinear function of these. When the number of messages grows, the PC rate approaches an outer bound on the PC capacity. As a special case, we consider private monomial computation (PMC) and numerically compare the achievable PMC rate to the outer bound for a finite number of messages.
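Written out (symbols ours), the PC rate of a scheme is

$$R_{\mathrm{PC}} \;=\; \frac{\min_{v} H\!\big(X^{(v)}\big)}{D},$$

the ratio between the smallest entropy among the candidate functions $X^{(1)}, X^{(2)}, \dots$ and the total amount of downloaded information $D$; the PC capacity is the supremum of achievable rates.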

Federated online learning to rank (FOLTR) aims to preserve user privacy by not sharing their searchable data and search interactions, while guaranteeing high search effectiveness, especially in contexts where individual users have scarce training data and interactions. For this, FOLTR trains learning to rank models in an online manner -- i.e. by exploiting users' interactions with the search systems (queries, clicks), rather than labels -- and federatively -- i.e. by not aggregating interaction data in a central server for training purposes, but by training instances of a model on each user device on their own private data, and then sharing the model updates, not the data, across a set of users that have formed the federation. Existing FOLTR methods build upon advances in federated learning. While federated learning methods have been shown effective at training machine learning models in a distributed way without the need for data sharing, they can be susceptible to attacks that target either the system's security or its overall effectiveness. In this paper, we consider attacks on FOLTR systems that aim to compromise their search effectiveness. Within this scope, we experiment with and analyse data and model poisoning attack methods to showcase their impact on FOLTR search effectiveness. We also explore the effectiveness of defense methods designed to counteract attacks on FOLTR systems. We contribute an understanding of the effect of attack and defense methods for FOLTR systems, and identify the key factors influencing their effectiveness.
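To make the model-poisoning threat concrete, the Python sketch below shows two illustrative ways a malicious client could transform its ranker update before sending it to the server, and the plain federated averaging step such an attack targets. These attack shapes are hypothetical examples; the paper's experiments define their own attack and defense methods.

```python
import numpy as np

def poison_update(honest_update, mode="flip", scale=10.0):
    """Illustrative model-poisoning transformations for a federated OLTR client.

    mode="flip":  send the negated update, pushing the global ranker away from
                  the direction honest clients agree on.
    mode="scale": send an amplified update so the malicious contribution
                  dominates the server-side average.
    """
    if mode == "flip":
        return -honest_update
    if mode == "scale":
        return scale * honest_update
    raise ValueError(f"unknown mode: {mode}")

def fedavg(updates, weights=None):
    """Plain federated averaging of ranker updates (the aggregation being attacked)."""
    updates = np.asarray(updates)
    weights = np.ones(len(updates)) if weights is None else np.asarray(weights)
    return np.average(updates, axis=0, weights=weights)
```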

Let $G$ be an undirected graph, and $s,t$ distinguished vertices of $G$. A minimal $s,t$-separator is an inclusion-wise minimal vertex set whose removal places $s$ and $t$ in distinct connected components. We present an algorithm for listing the minimal $s,t$-separators of a graph in non-decreasing order of cardinality with polynomial delay. This problem finds applications in various algorithms parameterized by treewidth, which include query evaluation in relational databases, probabilistic inference, and many more. In the process, we prove several results that are of independent interest. We establish a new island of tractability for the intensively studied 2-disjoint connected subgraphs problem, which is NP-complete even for restricted graph classes that include planar graphs, and we prove new characterizations of minimal $s,t$-separators. Ours is the first ranked enumeration algorithm for minimal separators whose delay is polynomial in the size of the input graph.
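For intuition about the objects being enumerated, the Python sketch below computes one minimal $s,t$-separator via the standard construction: start from the separator $N(s)$, take the connected component $C_t$ of $t$ in $G - N(s)$, and return $N(C_t)$, which is inclusion-wise minimal. This is a textbook building block, not the paper's ranked enumeration algorithm.

```python
from collections import deque

def minimal_st_separator(adj, s, t):
    """Return one minimal s,t-separator of a graph given as an adjacency dict.

    Assumes s and t are non-adjacent. N(s) separates s from t; the vertices of
    N(s) with a neighbour in the component of t in G - N(s) form a minimal separator.
    """
    assert t not in adj[s], "s and t must be non-adjacent"
    S = set(adj[s])                          # N(s) is an s,t-separator
    component, queue = {t}, deque([t])       # BFS inside G - S to collect C_t
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in S and v not in component:
                component.add(v)
                queue.append(v)
    return {u for u in S if any(v in component for v in adj[u])}

# Hypothetical example: the 4-cycle s-a-t-b-s has the unique minimal separator {a, b}.
adj = {"s": {"a", "b"}, "a": {"s", "t"}, "b": {"s", "t"}, "t": {"a", "b"}}
print(minimal_st_separator(adj, "s", "t"))   # {'a', 'b'}
```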

Recent data search platforms use ML task-based utility measures rather than metadata-based keywords, to search large dataset corpora. Requesters submit a training dataset and these platforms search for augmentations (join or union compatible datasets) that, when used to augment the requester's dataset, most improve model (e.g., linear regression) performance. Although effective, providers that manage personally identifiable data demand differential privacy (DP) guarantees before granting these platforms data access. Unfortunately, making data search differentially private is nontrivial, as a single search can involve training and evaluating datasets hundreds or thousands of times, quickly depleting privacy budgets. We present Saibot, a differentially private data search platform that employs Factorized Privacy Mechanism (FPM), a novel DP mechanism, to calculate sufficient semi-ring statistics for ML over different combinations of datasets. These statistics are privatized once, and can be freely reused for the search. This allows Saibot to scale to arbitrary numbers of datasets and requests, while minimizing the amount that DP noise affects search results. We optimize the sensitivity of FPM for common augmentation operations, and analyze its properties with respect to linear regression. Specifically, we develop an unbiased estimator for many-to-many joins, prove its bounds, and develop an optimization to redistribute DP noise to minimize the impact on the model. Our evaluation on a real-world dataset corpus of 329 datasets demonstrates that Saibot can return augmentations that achieve model accuracy within 50 to 90% of non-private search, while the leading alternative DP mechanisms (TPM, APM, shuffling) are several orders of magnitude worse.
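The Python sketch below conveys the reuse idea behind privatizing sufficient statistics once: for linear regression, $(X^\top X, X^\top y)$ can be released with additive noise and then combined and re-solved for any number of candidate augmentations at no further privacy cost. This is only the generic statistic-reuse pattern under assumed Gaussian noise, not Saibot's Factorized Privacy Mechanism, and the noise scale here is illustrative rather than calibrated to a sensitivity analysis.

```python
import numpy as np

def privatize_stats(X, y, noise_std):
    """Release the sufficient statistics of linear regression once, with noise.

    noise_std would be calibrated to the statistics' sensitivity and the
    provider's (epsilon, delta) budget; here it is just a parameter.
    """
    d = X.shape[1]
    xtx = X.T @ X + np.random.normal(0.0, noise_std, (d, d))
    xty = X.T @ y + np.random.normal(0.0, noise_std, d)
    return xtx, xty

def fit_from_stats(xtx, xty, ridge=1e-3):
    """Solve the (noisy) normal equations; reusable for every candidate augmentation."""
    d = xtx.shape[0]
    return np.linalg.solve(xtx + ridge * np.eye(d), xty)
```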

Effective resistances are ubiquitous in graph algorithms and network analysis. In this work, we study sublinear time algorithms to approximate the effective resistance of an adjacent pair $s$ and $t$. We consider the classical adjacency list model for local algorithms. While recent works have provided sublinear time algorithms for expander graphs, we prove several lower bounds for general graphs of $n$ vertices and $m$ edges: (1) it requires $\Omega(n)$ queries to obtain a $1.01$-approximation of the effective resistance of an adjacent pair $s$ and $t$, even for graphs in which every vertex except $s$ and $t$ has degree at most 3; (2) for graphs of degree at most $d$ and any parameter $\ell$, it requires $\Omega(m/\ell)$ queries to obtain a $c \cdot \min\{d, \ell\}$-approximation, where $c>0$ is a universal constant. Moreover, we complement the first lower bound by providing a sublinear time $(1+\epsilon)$-approximation algorithm for graphs in which every vertex except $s$ and $t$ has degree 2. One of our technical ingredients is to bound the expansion of a graph in terms of the smallest non-trivial eigenvalue of its Laplacian matrix after removing edges. We derive a new lower bound on the eigenvalues of perturbed graphs (resp. perturbed matrices) by incorporating the effective resistance of the removed edge (resp. the leverage scores of the removed rows), which may be of independent interest.
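For reference (this is the standard definition, not something specific to this paper): for a connected graph with Laplacian $L$ and indicator vectors $e_s, e_t$, the effective resistance between $s$ and $t$ is

$$R_{\mathrm{eff}}(s,t) \;=\; (e_s - e_t)^{\top} L^{+} (e_s - e_t),$$

where $L^{+}$ is the Moore-Penrose pseudoinverse of $L$, and an $\alpha$-approximation is an estimate within a multiplicative factor $\alpha$ of this quantity.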

This paper studies an intelligent reflecting surface (IRS)-aided multi-antenna simultaneous wireless information and power transfer (SWIPT) system where an $M$-antenna access point (AP) serves $K$ single-antenna information users (IUs) and $J$ single-antenna energy users (EUs) with the aid of an IRS with phase errors. We explicitly concentrate on overloaded scenarios where $K + J > M$ and $K \geq M$. Our goal is to maximize the minimum throughput among all the IUs by optimizing the allocation of resources (including time, transmit beamforming at the AP, and reflect beamforming at the IRS), while guaranteeing the minimum amount of harvested energy at each EU. Towards this goal, we propose two user grouping (UG) schemes, namely, the non-overlapping UG scheme and the overlapping UG scheme, where the difference lies in whether an IU can appear in multiple groups. Different IU groups are served in orthogonal time dimensions, while the IUs in the same group are served simultaneously with all the EUs via spatial multiplexing. The two problems corresponding to the two UG schemes are mixed-integer non-convex optimization problems that are difficult to solve optimally. We propose efficient algorithms for these two problems based on the big-M formulation, the penalty method, block coordinate descent, and successive convex approximation. Simulation results show that: 1) the non-robust counterparts of the proposed robust designs are unsuitable for practical IRS-aided SWIPT systems with phase errors since the energy harvesting constraints cannot be satisfied; 2) the proposed UG strategies significantly improve the max-min throughput over benchmark schemes that use no UG or adopt random UG; 3) the overlapping UG scheme performs much better than its non-overlapping counterpart when the absolute difference between $K$ and $M$ is small and the energy harvesting (EH) constraints are not stringent.
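Schematically (notation ours; the exact variables and constraints are those of the two UG formulations in the paper), the design problem has the max-min form

$$\max_{\{t_g\},\ \{\mathbf{w}\},\ \boldsymbol{\theta}}\ \min_{k \in \{1,\dots,K\}} R_k \quad \text{s.t.}\quad E_j \ge E_j^{\min}\ \ \forall j, \qquad \sum_{g} t_g \le 1, \qquad \text{binary group-membership variables},$$

where $t_g$ is the time share of IU group $g$, $\mathbf{w}$ and $\boldsymbol{\theta}$ are the AP transmit beamformers and IRS phase shifts, $R_k$ is the throughput IU $k$ accumulates over the slots of its groups, and $E_j$ is the energy harvested by EU $j$ across all slots; the binary membership variables are what make the problem mixed-integer.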
