We study the problem of allocating m indivisible items to n agents with additive utilities. It is desirable for the allocation to be both fair and efficient, which we formalize through the notions of envy-freeness and Pareto-optimality. While envy-free and Pareto-optimal allocations may not exist for arbitrary utility profiles, previous work has shown that such allocations exist with high probability assuming that all agents' values for all items are drawn independently from a common distribution. In this paper, we consider a generalization of this model with asymmetric agents, where an agent's utilities for the items are drawn independently from a distribution specific to that agent. We show that envy-free and Pareto-optimal allocations are likely to exist in this asymmetric model when $m=\Omega\left(n\, \log n\right)$, matching the best bound known for the symmetric setting. Empirically, an algorithm based on Maximum Nash Welfare obtains envy-free and Pareto-optimal allocations even for small numbers of items.
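For concreteness, the Nash welfare objective can be maximized by exhaustive search on small instances; below is a minimal sketch (brute force over all $n^m$ allocations, not necessarily the solver used in the experiments), where `U[i, j]` is agent $i$'s value for item $j$:

```python
import itertools
import numpy as np

def max_nash_welfare(U):
    """Brute-force Maximum Nash Welfare: return the allocation
    (one owner per item) maximizing the product of agents' utilities.
    U is an (n, m) array with U[i, j] = agent i's value for item j.
    Only feasible for small m."""
    n, m = U.shape
    best, best_nw = None, -1.0
    for owners in itertools.product(range(n), repeat=m):
        util = np.zeros(n)
        for item, agent in enumerate(owners):
            util[agent] += U[agent, item]
        nw = np.prod(util)
        if nw > best_nw:
            best, best_nw = owners, nw
    return best

# Asymmetric agents: each row of U is drawn from its own distribution.
rng = np.random.default_rng(0)
U = np.vstack([rng.uniform(0, 1, 8), rng.beta(2, 5, 8), rng.exponential(1.0, 8)])
print(max_nash_welfare(U))
```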
We give a direct product theorem for the entanglement-assisted interactive quantum communication complexity of an $l$-player predicate $\mathsf{V}$. In particular, we show that for a distribution $p$ that is product across the input sets of the $l$ players, the success probability of any entanglement-assisted quantum communication protocol for computing $n$ copies of $\mathsf{V}$, whose communication is $o(\log(\mathrm{eff}^*(\mathsf{V},p))\cdot n)$, goes down exponentially in $n$. Here $\mathrm{eff}^*(\mathsf{V}, p)$ is a distributional version of the quantum efficiency or partition bound introduced by Laplante, Lerays and Roland (2014), which is a lower bound on the distributional quantum communication complexity of computing a single copy of $\mathsf{V}$ with respect to $p$. As an application of our result, we show that it is possible to do device-independent quantum key distribution (DIQKD) without the assumption that devices do not leak any information after inputs are provided to them. We analyze the DIQKD protocol given by Jain, Miller and Shi (2017), and show that when the protocol is carried out with devices that are compatible with $n$ copies of the Magic Square game, it is possible to extract $\Omega(n)$ bits of key from it, even in the presence of $O(n)$ bits of leakage. Our security proof is parallel, i.e., the honest parties can enter all their inputs into their devices at once, and works for a leakage model that is arbitrarily interactive, i.e., the devices of the honest parties Alice and Bob can exchange information with each other and with the eavesdropper Eve in any number of rounds, as long as the total number of bits or qubits communicated is bounded.
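For reference, the single-copy game underlying the key-distribution application is the Magic Square (Mermin–Peres) game; a minimal sketch of the referee's winning condition under its standard rules (this encodes the predicate only, not anything specific to the paper's protocol):

```python
def magic_square_win(r, c, a, b):
    """Referee check for one round of the Magic Square game.
    Alice gets row r in {0,1,2} and outputs 3 bits a of even parity;
    Bob gets column c in {0,1,2} and outputs 3 bits b of odd parity;
    they win iff their answers agree on the shared cell (r, c)."""
    return sum(a) % 2 == 0 and sum(b) % 2 == 1 and a[c] == b[r]

print(magic_square_win(r=1, c=2, a=(0, 1, 1), b=(1, 1, 1)))  # True
```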
We propose a deterministic Kaczmarz algorithm for solving linear systems $A\mathbf{x}=\mathbf{b}$. Unlike previous Kaczmarz algorithms, ours uses a reflection in each step of the iteration. This generates a sequence of points that lie, in a structured pattern, on a sphere centered at a solution. First, we prove that taking the average of $O(\eta/\epsilon)$ points yields an approximation of the solution up to relative error $\epsilon$, where $\eta$ is a parameter depending on $A$ that can be bounded above by the square of the condition number. We also show how to select these points efficiently. In numerical tests, our Kaczmarz algorithm usually converges more quickly than the (block) randomized Kaczmarz algorithms. Second, when the linear system is consistent, the algorithm returns the solution that has the minimal distance to the initial vector; this gives a method for solving the least-norm problem. Finally, we prove that our Kaczmarz algorithm in fact solves the linear system $A^T W^{-1} A \mathbf{x} = A^T W^{-1} \mathbf{b}$, where $W$ is the lower-triangular matrix such that $W+W^T=2AA^T$. The relationship between this linear system and the original one is studied.
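A minimal sketch of the reflection step and iterate averaging (cyclic row order, a fixed sweep count, and averaging every iterate are illustrative simplifications; the paper selects the averaged points more carefully):

```python
import numpy as np

def reflective_kaczmarz(A, b, x0, sweeps=200):
    """Kaczmarz iteration with reflections: each step reflects the
    current point across the hyperplane a_i^T x = b_i instead of
    projecting onto it, so all iterates stay on a sphere centered at
    a solution; the running average is returned as the estimate."""
    x = x0.astype(float).copy()
    avg = np.zeros_like(x)
    count = 0
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x -= 2.0 * (a_i @ x - b_i) / (a_i @ a_i) * a_i  # reflection
            avg += x
            count += 1
    return avg / count

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
x_hat = reflective_kaczmarz(A, A @ x_true, np.zeros(10))
print(np.linalg.norm(x_hat - x_true))  # shrinks as sweeps grow
```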
We are interested in the optimization of convex domains under a PDE constraint. Due to the difficulty of approximating convex domains in $\mathbb{R}^3$, we restrict to rotationally symmetric domains, which reduces the shape optimization problems to a two-dimensional setting. For the optimization of an eigenvalue arising in a problem of optimal insulation, we prove the existence of an optimal domain. We propose an algorithm that can be applied to general shape optimization problems under the geometric constraints of convexity and rotational symmetry, and discuss the approximate optimal domains it produces for the eigenvalue problem in optimal insulation.
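A solid of revolution $\{(x,y,z) : \sqrt{x^2+y^2} \le r(z)\}$ is convex exactly when the profile $r \ge 0$ is concave, so in the reduced two-dimensional setting the convexity constraint becomes a concavity condition on the profile; a minimal sketch of the induced discrete constraint (hypothetical discretization, not the paper's scheme):

```python
import numpy as np

def is_discretely_concave(r, tol=1e-12):
    """Convexity surrogate for a rotationally symmetric body: the
    radial profile r, sampled on a uniform grid in z, must have
    non-positive second differences at every interior node."""
    d2 = r[:-2] - 2.0 * r[1:-1] + r[2:]
    return bool(np.all(d2 <= tol))

z = np.linspace(-1, 1, 50)
print(is_discretely_concave(np.sqrt(1 - z**2)))  # unit-ball profile: True
```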
We study the problem of \emph{dynamic regret minimization} in $K$-armed Dueling Bandits under non-stationary (time-varying) preferences. This is an online learning setup where, at each round, the agent chooses a pair of items and observes only a relative binary `win-loss' feedback for this pair, sampled from an underlying preference matrix at that round. We first study the problem of static-regret minimization for adversarial preference sequences and design an efficient algorithm with $O(\sqrt{KT})$ high-probability regret. We then use similar algorithmic ideas to propose an efficient and provably optimal algorithm for dynamic-regret minimization under two notions of non-stationarity. In particular, we establish $\widetilde{O}(\sqrt{SKT})$ and $\widetilde{O}({V_T^{1/3}K^{1/3}T^{2/3}})$ dynamic-regret guarantees, where $S$ is the total number of `effective switches' in the underlying preference relations and $V_T$ is a measure of `continuous-variation' non-stationarity. The complexity of these problems had not been studied prior to this work, despite the prevalence of non-stationary environments in real-world systems. We justify the optimality of our algorithms by proving matching lower bounds under both of the above notions of non-stationarity. Finally, we corroborate our results with extensive simulations comparing our algorithms against state-of-the-art baselines.
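The win-loss feedback is the learner's only access to the preferences; a minimal sketch of one round of the environment (the preference matrix shown is illustrative):

```python
import numpy as np

def duel(P, i, j, rng):
    """One round of dueling-bandit feedback: the learner proposes the
    pair (i, j) and observes only a binary outcome, True iff arm i
    beats arm j, drawn from the round's preference matrix P."""
    return rng.random() < P[i, j]

rng = np.random.default_rng(2)
K = 4
P = np.full((K, K), 0.5)
P[0, 1:], P[1:, 0] = 0.7, 0.3  # arm 0 is currently the strongest
print(duel(P, 0, 2, rng))      # in a non-stationary model, P drifts over rounds
```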
We study the problem of multi-compression and reconstruction of a stochastic signal observed by several independent sensors (or compressors) that transmit compressed information to a fusion center. The key aspect of this problem is to find models of the sensors and fusion center that are optimized in the sense of error minimization under a certain criterion, such as the mean square error (MSE). We develop a novel technique to solve this problem. The novelty is as follows. First, the compressors are non-linear and modeled by second-degree polynomials. This may increase the accuracy of the signal estimation, since the optimization is carried out over a higher-dimensional parameter space than in the linear case. Second, the required models are determined by a method based on a combination of the second-degree transform (SDT) with the maximum block improvement (MBI) method and the generalized rank-constrained matrix approximation. This allows us to exploit the advantages of these known methods to further increase the estimation accuracy of the source signal. Third, the proposed method is justified in terms of pseudo-inverse matrices, so the models of the compressors and fusion center always exist and are numerically stable. In other words, the proposed models may provide compression, de-noising and reconstruction of distributed signals in cases when known methods either are not applicable or produce larger associated errors.
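A minimal single-sensor sketch of the shape of such a degree-2 model (a fixed random compressor and a pseudo-inverse-based linear fusion estimator fit on samples; the paper instead optimizes all of these matrices jointly via SDT/MBI, and handles several sensors):

```python
import numpy as np

def quad_features(x):
    """Second-degree feature map: a compressor of the form
    B @ quad_features(x) is a second-degree polynomial in x."""
    return np.concatenate([x, np.outer(x, x).ravel()])

rng = np.random.default_rng(3)
d, r, N = 6, 4, 500
S = rng.standard_normal((d, N))              # source signal samples
X = S + 0.3 * rng.standard_normal((d, N))    # noisy sensor observations
Z = np.stack([quad_features(x) for x in X.T], axis=1)
B = rng.standard_normal((r, Z.shape[0]))     # toy degree-2 compressor
Y = B @ Z                                    # compressed transmissions
F = S @ np.linalg.pinv(Y)                    # fusion center via pseudo-inverse
print(np.linalg.norm(S - F @ Y) / np.linalg.norm(S))  # relative error
```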
We consider a stochastic bandit problem with a possibly infinite number of arms. We write $p^*$ for the proportion of optimal arms and $\Delta$ for the minimal mean-gap between optimal and sub-optimal arms. We characterize the optimal learning rates both in the cumulative regret setting and in the best-arm identification setting in terms of the problem parameters $T$ (the budget), $p^*$ and $\Delta$. For the objective of minimizing the cumulative regret, we provide a lower bound of order $\Omega(\log(T)/(p^*\Delta))$ and a UCB-style algorithm with a matching upper bound up to a factor of $\log(1/\Delta)$. Our algorithm needs $p^*$ to calibrate its parameters, and we prove that this knowledge is necessary, since adapting to $p^*$ in this setting is impossible. For best-arm identification we also provide a lower bound of order $\Omega(\exp(-cT\Delta^2 p^*))$ on the probability of outputting a sub-optimal arm, where $c>0$ is an absolute constant. We also provide an elimination algorithm with an upper bound matching the lower bound up to a factor of order $\log(T)$ in the exponential, and that does not need $p^*$ or $\Delta$ as parameters. Our results apply directly to the three related problems of competing against the $j$-th best arm, identifying an $\epsilon$-good arm, and finding an arm with mean larger than a quantile of a known order.
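A minimal sketch of how $p^*$ calibrates a regret-minimization strategy: draw enough arms from the reservoir that at least one is optimal with high probability, then run a UCB rule on that finite subset (the arm count and exploration bonus below are illustrative, not the paper's exact calibration):

```python
import numpy as np

def ucb_on_sampled_arms(draw_arm, pull, p_star, T):
    """draw_arm() samples a fresh arm id from the reservoir; pull(a)
    returns a reward in [0, 1]. Sampling ~log(T)/p* arms makes it
    likely that an optimal arm is in the subset; then play UCB1."""
    K = int(np.ceil(np.log(T) / p_star))
    arms = [draw_arm() for _ in range(K)]
    counts, sums = np.zeros(K), np.zeros(K)
    for t in range(1, T + 1):
        if t <= K:                      # play each sampled arm once
            i = t - 1
        else:
            i = int(np.argmax(sums / counts + np.sqrt(2 * np.log(t) / counts)))
        sums[i] += pull(arms[i])
        counts[i] += 1
    return arms, counts

rng = np.random.default_rng(5)
draw = lambda: 0.9 if rng.random() < 0.1 else rng.uniform(0.0, 0.5)  # p* = 0.1
arms, counts = ucb_on_sampled_arms(draw, lambda m: float(rng.random() < m), 0.1, 5000)
print(arms[int(np.argmax(counts))])  # most-played arm is likely optimal (mean 0.9)
```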
The common way to optimize auction and pricing systems is to set aside a small fraction of the traffic to run experiments. This leads to the question: how can we learn the most with the smallest amount of data? For truthful auctions, this is the \emph{sample complexity} problem. For posted price auctions, we no longer have access to samples. Instead, the algorithm is allowed to choose a price $p_t$; then, for a fresh sample $v_t \sim \mathcal{D}$, we learn the sign $s_t = \mathrm{sign}(p_t - v_t) \in \{-1,+1\}$. How many pricing queries are needed to estimate a given parameter of the underlying distribution? We give tight upper and lower bounds on the number of pricing queries required to find an approximately optimal reserve price for general, regular and MHR distributions. Interestingly, for regular distributions, the pricing query and sample complexities match. But for general and MHR distributions, we show a strict separation between them. All known results on sample complexity for revenue optimization follow from variants of a single approach: use the optimal reserve price of the empirical distribution. In the pricing query complexity setting, we show that learning the entire distribution within an error of $\epsilon$ in Lévy distance requires strictly more pricing queries than estimating the reserve price. Instead, our algorithm uses a new property we identify, called \emph{relative flatness}, to quickly zoom into the right region of the distribution and obtain the optimal pricing query complexity.
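The basic primitive is that a pricing query reveals one bit of the CDF: posting price $p$ against a fresh sample tells us whether $v \le p$, so repeated queries at the same price estimate $F(p)$; a minimal sketch (the repetition count is illustrative):

```python
import numpy as np

def cdf_at_price(p, fresh_value, k):
    """Estimate F(p) = Pr[v <= p] from k pricing queries at price p.
    Each query observes only the sign of (p - v_t) for a fresh sample
    v_t ~ D, i.e., the indicator of a sale at price p."""
    sales = sum(fresh_value() <= p for _ in range(k))
    return sales / k

rng = np.random.default_rng(4)
print(cdf_at_price(1.0, lambda: rng.exponential(1.0), 2000))  # about 1 - e^{-1}
```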
We show that for the problem of testing if a matrix $A \in F^{n \times n}$ has rank at most $d$, or requires changing an $\epsilon$-fraction of entries to have rank at most $d$, there is a non-adaptive query algorithm making $\widetilde{O}(d^2/\epsilon)$ queries. Our algorithm works for any field $F$. This improves upon the previous $O(d^2/\epsilon^2)$ bound (SODA'03), and bypasses an $\Omega(d^2/\epsilon^2)$ lower bound of (KDD'14) which holds if the algorithm is required to read a submatrix. Our algorithm is the first such algorithm which does not read a submatrix, and instead reads a carefully selected non-adaptive pattern of entries in rows and columns of $A$. We complement our algorithm with a matching query complexity lower bound for non-adaptive testers over any field. We also give tight bounds of $\widetilde{\Theta}(d^2)$ queries in the sensing model, in which query access comes in the form of $\langle X_i, A\rangle:=\mathrm{tr}(X_i^\top A)$; perhaps surprisingly, these bounds do not depend on $\epsilon$. We next develop a novel property testing framework for testing numerical properties of a real-valued matrix $A$ more generally, including the stable rank, Schatten-$p$ norms, and SVD entropy. Specifically, we propose a bounded entry model, where $A$ is required to have entries bounded by $1$ in absolute value. We give upper and lower bounds for a wide range of problems in this model, and discuss connections to the sensing model above.
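For intuition, the baseline submatrix-reading tester that the $\Omega(d^2/\epsilon^2)$ lower bound applies to can be sketched as follows (real field and constants are illustrative; the paper's $\widetilde{O}(d^2/\epsilon)$ algorithm instead reads a structured non-submatrix pattern of rows and columns):

```python
import numpy as np

def submatrix_rank_test(query, n, d, eps, rng):
    """Baseline tester: read a random O(d/eps) x O(d/eps) submatrix via
    the entry oracle query(i, j) and accept iff its rank is at most d.
    Matrices far from rank d make the submatrix rank exceed d w.h.p."""
    k = min(n, int(np.ceil(2 * d / eps)))
    rows = rng.choice(n, size=k, replace=False)
    cols = rng.choice(n, size=k, replace=False)
    M = np.array([[query(i, j) for j in cols] for i in rows])
    return np.linalg.matrix_rank(M) <= d

rng = np.random.default_rng(6)
n, d = 100, 3
A = rng.standard_normal((n, d)) @ rng.standard_normal((d, n))  # exactly rank d
print(submatrix_rank_test(lambda i, j: A[i, j], n, d, eps=0.1, rng=rng))  # True
```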
In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm, called multi-step primal-dual (MSPD), and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm, called distributed randomized smoothing (DRS), based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
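The smoothing behind DRS can be sketched with the standard Gaussian-smoothing identity $\nabla f_\gamma(x) = \frac{1}{\gamma}\,\mathbb{E}\left[f(x+\gamma Z)Z\right]$ for $f_\gamma(x) = \mathbb{E}[f(x+\gamma Z)]$, $Z \sim \mathcal{N}(0, I_d)$; a minimal single-node sketch of the resulting gradient estimator (the baseline subtraction and sample count are illustrative):

```python
import numpy as np

def smoothed_grad(f, x, gamma, K, rng):
    """Unbiased estimate of the gradient of the Gaussian smoothing
    f_gamma(x) = E[f(x + gamma Z)], which is smooth even when f is
    only Lipschitz. Subtracting f(x) (a zero-mean control variate)
    keeps the estimate unbiased while reducing its variance."""
    g = np.zeros_like(x)
    fx = f(x)
    for _ in range(K):
        z = rng.standard_normal(x.shape[0])
        g += (f(x + gamma * z) - fx) / gamma * z
    return g / K

rng = np.random.default_rng(7)
f = lambda v: np.abs(v).sum()  # non-smooth, 1-Lipschitz objective
print(smoothed_grad(f, np.array([1.0, -2.0]), gamma=0.1, K=4000, rng=rng))
```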
In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improved dependence on the condition numbers.
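A minimal sketch of the mechanism on a toy instance: with quadratic local functions and the consensus constraint $L\mathbf{x} = 0$ encoded by a graph Laplacian $L$, each step of Nesterov's accelerated gradient on the dual requires one multiplication by $L$, i.e., one round of neighbor-to-neighbor communication (the step size and momentum schedule below are illustrative):

```python
import numpy as np

def accelerated_dual_consensus(c, L, steps=300):
    """Nesterov's accelerated gradient ascent on the dual of
        min_x sum_i 0.5 * (x_i - c_i)^2  s.t.  L x = 0,
    whose dual gradient at lam is L x(lam), with x(lam) = c - L lam.
    The dual is smooth with constant lam_max(L)^2 (each f_i is
    1-strongly convex), which sets the step size."""
    lam = np.zeros_like(c)
    lam_prev = lam.copy()
    eta = 1.0 / max(np.linalg.eigvalsh(L)) ** 2
    for t in range(1, steps + 1):
        y = lam + (t - 1) / (t + 2) * (lam - lam_prev)  # momentum
        x = c - L @ y                                   # local primal updates
        lam_prev, lam = lam, y + eta * (L @ x)          # one communication round
    return c - L @ lam

# Path graph on 4 nodes; the constrained optimum is consensus on mean(c) = 3.
L = np.array([[1., -1, 0, 0], [-1, 2, -1, 0], [0, -1, 2, -1], [0, 0, -1, 1]])
print(accelerated_dual_consensus(np.array([1., 2., 3., 6.]), L))
```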