
Given a set $P$ of $n$ points and a set $S$ of $m$ weighted disks in the plane, the disk coverage problem asks for a subset of disks of minimum total weight that covers all points of $P$. The problem is NP-hard. In this paper, we consider a line-constrained version in which all disks are centered on a line $L$ (while the points of $P$ can be anywhere in the plane). We present an $O((m+n)\log(m+n)+\kappa\log m)$ time algorithm for the problem, where $\kappa$ is the number of pairs of disks that intersect. Alternatively, we can solve the problem in $O(nm\log(m+n))$ time. For the unit-disk case, where all disks have the same radius, the running time can be reduced to $O((n+m)\log(m+n))$. In addition, we solve in $O((m+n)\log(m+n))$ time the $L_{\infty}$ and $L_1$ cases of the problem, in which the disks are squares and diamonds, respectively. As a by-product, the 1D version of the problem, where all points of $P$ are on $L$ and the disks are line segments on $L$, is also solved in $O((m+n)\log(m+n))$ time. We also show that the problem has an $\Omega((m+n)\log (m+n))$ time lower bound even for the 1D case. We further demonstrate that our techniques can be used to solve other geometric coverage problems. For example, given a set $P$ of $n$ points and a set $S$ of $n$ weighted half-planes in the plane, we solve in $O(n^4\log n)$ time the problem of finding a minimum-weight subset of half-planes that covers $P$. This improves the previous best $O(n^5)$-time algorithm by almost a linear factor. If all half-planes are lower half-planes, our algorithm runs in $O(n^2\log n)$ time, improving the previous best $O(n^4)$-time algorithm by almost a quadratic factor.
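To make the setting concrete, here is a minimal sketch of the 1D special case (all points of $P$ and all "disks" are segments on the line $L$), solved by a plain $O(nm)$ dynamic program rather than the $O((m+n)\log(m+n))$-time algorithm described above; the function name and the tiny example instance are illustrative only.

```python
import bisect

def min_weight_segment_cover(points, segments):
    """1D case: cover all points on a line with weighted segments.

    points:   list of x-coordinates
    segments: list of (left, right, weight) tuples
    Returns the minimum total weight of a cover, or None if none exists.
    A plain O(n*m) dynamic program for illustration only; the paper's
    algorithm runs in O((m+n) log(m+n)) time.
    """
    pts = sorted(points)
    n = len(pts)
    INF = float("inf")
    dp = [0.0] + [INF] * n          # dp[j] = min weight covering the first j points
    for j in range(1, n + 1):
        x = pts[j - 1]
        for (l, r, w) in segments:
            if l <= x <= r:
                i = bisect.bisect_left(pts, l) + 1   # first point index covered by this segment
                dp[j] = min(dp[j], dp[i - 1] + w)
    return dp[n] if dp[n] < INF else None

# Example: three points and two segments; the cheapest cover needs both segments.
print(min_weight_segment_cover([1, 4, 9], [(0, 5, 2.0), (3, 10, 3.0)]))  # 5.0
```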

Related Content

Some of the hardest problems in deep learning can be solved with the combined effort of many independent parties, as is the case for volunteer computing and federated learning. These setups rely on large numbers of peers to provide computational resources or train on decentralized datasets. Unfortunately, participants in such systems are not always reliable. Any single participant can jeopardize the entire training run by sending incorrect updates, whether deliberately or by mistake. Training in the presence of such peers requires specialized distributed training algorithms with Byzantine tolerance. These algorithms often sacrifice efficiency by introducing redundant communication or passing all updates through a trusted server. As a result, it can be infeasible to apply such algorithms to large-scale distributed deep learning, where models can have billions of parameters. In this work, we propose a novel protocol for secure (Byzantine-tolerant) decentralized training that emphasizes communication efficiency. We rigorously analyze this protocol: in particular, we provide theoretical bounds for its resistance against Byzantine and Sybil attacks and show that it incurs only marginal communication overhead. To demonstrate its practical effectiveness, we conduct large-scale experiments on image classification and language modeling in the presence of Byzantine attackers.
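The abstract does not spell out the protocol itself, so the sketch below is only a generic illustration of what Byzantine-tolerant aggregation means: a centralized coordinate-wise trimmed mean, a standard baseline, and explicitly not the decentralized, communication-efficient protocol proposed in the paper.

```python
import numpy as np

def robust_aggregate(updates, max_byzantine):
    """Aggregate peer gradient updates with a coordinate-wise trimmed mean.

    This is NOT the paper's protocol; it is a common centralized baseline
    shown only to illustrate Byzantine-tolerant aggregation.
    updates: array of shape (num_peers, dim); max_byzantine: peers trimmed per side.
    """
    u = np.sort(np.asarray(updates), axis=0)
    f = max_byzantine
    trimmed = u[f:len(u) - f] if f > 0 else u
    return trimmed.mean(axis=0)

# 8 honest peers report noisy copies of the true gradient; 2 attackers send huge values.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 4))
byzantine = np.full((2, 4), 1e6)
print(robust_aggregate(np.vstack([honest, byzantine]), max_byzantine=2))  # close to all-ones
```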

There has been a recent surge of interest in nonparametric bandit algorithms based on subsampling. One drawback of these approaches, however, is the additional complexity required by random subsampling and the storage of the full history of rewards. Our first contribution is to show that a simple deterministic subsampling rule, proposed in the recent work of Baudry et al. (2020) under the name of ''last-block subsampling'', is asymptotically optimal in one-parameter exponential families. In addition, we prove that these guarantees also hold when the algorithm's memory is limited to a polylogarithmic function of the time horizon. These findings open up new perspectives, in particular for non-stationary scenarios in which the arm distributions evolve over time. We propose a variant of the algorithm in which only the most recent observations are used for subsampling, achieving optimal regret guarantees under the assumption of a known number of abrupt changes. Extensive numerical simulations highlight the merits of this approach, particularly when the changes affect more than just the means of the rewards.
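As a rough illustration of the subsampling-duel idea, the hypothetical sketch below compares the most-sampled arm against each challenger using only its most recent block of observations; it is one plausible reading of "last-block subsampling", not a faithful reproduction of the algorithm of Baudry et al. (2020), and the arm distributions are invented for the example.

```python
import numpy as np

def last_block_duel_bandit(arms, horizon):
    """Simplified subsampling-duel bandit with a deterministic 'last block' rule.

    The most-sampled arm (the leader) is challenged by every other arm using the
    leader's most recent block of observations, whose size matches the challenger's
    sample count; challengers that win the duel are pulled, otherwise the leader is.
    arms: list of callables, each returning one reward sample.
    """
    history = [[arm()] for arm in arms]           # one initial pull per arm
    t = len(arms)
    while t < horizon:
        leader = max(range(len(arms)), key=lambda k: len(history[k]))
        winners = []
        for k in range(len(arms)):
            if k == leader:
                continue
            block = history[leader][-len(history[k]):]      # last-block subsample
            if np.mean(block) <= np.mean(history[k]):
                winners.append(k)
        for k in (winners if winners else [leader]):
            history[k].append(arms[k]())
            t += 1
    return [len(h) for h in history]

counts = last_block_duel_bandit(
    [lambda: np.random.binomial(1, 0.4), lambda: np.random.binomial(1, 0.6)],
    horizon=2000)
print(counts)   # the 0.6 arm should receive the large majority of the pulls
```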

We study the problem of selling $n$ heterogeneous items to a single buyer whose values for different items are dependent. Under arbitrary dependence, Hart and Nisan show that no simple mechanism can achieve a non-negligible fraction of the optimal revenue, even with only two items. We consider the setting where the buyer's type is drawn from a correlated distribution that can be captured by a Markov Random Field (MRF), one of the most prominent frameworks for modeling high-dimensional distributions with structure. If the buyer's valuation is additive or unit-demand, we extend the result to all MRFs and show that max(SRev, BRev) can achieve an $\Omega\left(\frac{1}{e^{O(\Delta)}}\right)$-fraction of the optimal revenue, where $\Delta$ is a parameter of the MRF determined by how much the value of an item can be influenced by the values of the other items. We further show that the exponential dependence on $\Delta$ is unavoidable for our approach, and that a polynomial dependence on $\Delta$ is unavoidable for any approach. When the buyer has an XOS valuation, we show that max(SRev, BRev) achieves at least an $\Omega\left(\frac{1}{e^{O(\Delta)}+\frac{1}{\sqrt{n\gamma}}}\right)$-fraction of the optimal revenue, where $\gamma$ is the spectral gap of the Glauber dynamics of the MRF. Note that the values of $\Delta$ and $\frac{1}{n\gamma}$ increase as the dependence between items strengthens. In the special case of independently distributed items, $\Delta=0$ and $\frac{1}{n\gamma}\geq 1$, and our results recover the known constant-factor approximations for an XOS buyer. We further extend our parametric approximation to several other well-studied dependency measures, such as the Dobrushin coefficient and the inverse temperature. Our results are based on the duality framework of Cai et al. and a new concentration inequality for XOS functions over dependent random variables.
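For a finite, explicitly given type distribution, SRev (selling every item separately at its optimal posted price) and BRev (selling the grand bundle at a single price) can be computed by brute force; the sketch below does so for an additive buyer, with a small two-item correlated distribution invented for the example.

```python
import numpy as np

def best_posted_price(values, probs):
    """Optimal single posted-price revenue against a one-dimensional value distribution."""
    return max(p * sum(q for v, q in zip(values, probs) if v >= p) for p in set(values))

def srev_brev(support, probs):
    """SRev and BRev for an additive buyer with a finite joint value distribution.

    support: list of value vectors (one per buyer type), probs: their probabilities.
    A brute-force illustration of max(SRev, BRev); the paper bounds how far this
    simple mechanism can be from the optimal revenue under an MRF prior.
    """
    support = np.asarray(support, dtype=float)
    srev = sum(best_posted_price(support[:, i].tolist(), probs)
               for i in range(support.shape[1]))
    brev = best_posted_price(support.sum(axis=1).tolist(), probs)
    return srev, brev

# Two items with positively correlated values.
support = [(1, 1), (1, 4), (4, 1), (4, 4)]
probs = [0.4, 0.1, 0.1, 0.4]
print(srev_brev(support, probs))   # (4.0, 3.2): selling separately is better here
```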

The Fisher market is one of the most fundamental models for resource allocation problems in economic theory, wherein agents spend a budget of currency to buy goods that maximize their utilities, while producers sell capacity-constrained goods in exchange for currency. However, the consideration of only two types of constraints, i.e., budgets of individual buyers and capacities of goods, makes Fisher markets less amenable to resource allocation settings in which agents have additional linear constraints, e.g., knapsack and proportionality constraints. In this work, we introduce a modified Fisher market in which each agent may have additional linear constraints, and show that this modification fundamentally alters the properties of both the market equilibrium and the optimal allocations. These properties prompt us to introduce a budget-perturbed social optimization problem (BP-SOP) and to set prices based on the dual variables of BP-SOP's capacity constraints. To compute the budget perturbations, we develop a fixed-point iterative scheme and validate its convergence through numerical experiments. Since this fixed-point scheme involves solving a centralized problem at each step, we also propose a new class of distributed algorithms to compute equilibrium prices. In particular, we develop an Alternating Direction Method of Multipliers (ADMM) algorithm with strong convergence guarantees for Fisher markets with homogeneous linear constraints as well as for classical Fisher markets. In this algorithm, prices are updated based on the tatonnement process, with a step size that is completely independent of the utilities of individual agents. Thus, our mechanism, both theoretically and computationally, overcomes a fundamental limitation of classical Fisher markets, which only consider capacity and budget constraints.
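For reference, a textbook tatonnement loop for an unconstrained Fisher market with Cobb-Douglas buyers is sketched below; it is not the paper's ADMM algorithm or its constrained-market variant, only an illustration of prices being updated in proportion to excess demand.

```python
import numpy as np

def tatonnement_cobb_douglas(alpha, budgets, supply, step=0.5, iters=500):
    """Tatonnement price updates for a Fisher market with Cobb-Douglas buyers.

    alpha[i, j] is buyer i's normalized utility exponent for good j; such a buyer
    spends the fraction alpha[i, j] of her budget on good j, so her demand for it
    is alpha[i, j] * budgets[i] / prices[j].  Prices rise on goods in excess demand.
    """
    prices = np.ones(alpha.shape[1])
    for _ in range(iters):
        spending = (alpha * budgets[:, None]).sum(axis=0)   # money sent to each good
        excess = spending / prices - supply                 # aggregate excess demand
        prices = np.maximum(prices * (1 + step * excess / supply), 1e-9)
    return prices

alpha = np.array([[0.7, 0.3],
                  [0.2, 0.8]])
budgets = np.array([1.0, 2.0])
supply = np.array([1.0, 1.0])
print(tatonnement_cobb_douglas(alpha, budgets, supply))   # ~[1.1, 1.9], the equilibrium prices
```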

Recent advances in adiabatic quantum computing and the closely related field of quantum annealing have centered around using more advanced and novel Hamiltonian representations to solve optimization problems. One such advance is the development of driver Hamiltonians that commute with the constraints of an optimization problem, offering an alternative way of satisfying those constraints instead of imposing a penalty term for each of them. In particular, this approach can embed several practical problems on quantum devices with sparser connectivity than other common practices. However, designing driver Hamiltonians that commute with several constraints has largely relied on problem-specific intuition, with no simple general algorithm for generating them from arbitrary constraints. In this work, we develop a simple and intuitive algebraic framework for reasoning about the commutation of Hamiltonians with linear constraints, one that allows us to classify the problem of finding a driver Hamiltonian for an arbitrary set of constraints as NP-Complete. Because unitary operators are exponentials of Hermitian operators, these results also apply to the construction of mixers in the Quantum Alternating Operator Ansatz (QAOA) framework.
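The commutation requirement itself is easy to check numerically for small instances. The sketch below verifies that the standard XY (ring) mixer commutes with a Hamming-weight constraint operator while the plain transverse-field mixer does not; it only illustrates the kind of condition the framework reasons about, not the paper's classification result or any general construction algorithm.

```python
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_at(ops, n):
    """Tensor product over n qubits with the given single-qubit operators at their positions."""
    return reduce(np.kron, [ops.get(q, I) for q in range(n)])

n = 3
# Hamming-weight (linear) constraint encoded as the operator sum_q Z_q.
constraint = sum(kron_at({q: Z}, n) for q in range(n))
# XY ring mixer: sum over edges of X_a X_b + Y_a Y_b; it preserves Hamming weight.
edges = [(0, 1), (1, 2), (2, 0)]
driver = sum(kron_at({a: X, b: X}, n) + kron_at({a: Y, b: Y}, n) for a, b in edges)

print(np.allclose(driver @ constraint - constraint @ driver, 0))          # True: they commute
# A transverse-field mixer sum_q X_q does not commute with the constraint.
transverse = sum(kron_at({q: X}, n) for q in range(n))
print(np.allclose(transverse @ constraint - constraint @ transverse, 0))  # False
```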

The Bayesian persuasion paradigm of strategic communication models the interaction between a privately informed agent, called the sender, and an ignorant but rational agent, called the receiver. The goal is typically to design a (near-)optimal communication (or signaling) scheme for the sender, which enables the sender to disclose information to the receiver in a way that incentivizes her to take an action preferred by the sender. Finding the optimal signaling scheme is known to be computationally difficult in general. This hardness is further exacerbated when there is also a constraint on the size of the message space, leading to NP-hardness of approximating the optimal sender utility within any constant factor. In this paper, we show that in several natural and prominent cases the optimization problem is tractable even when the message space is limited. In particular, we study signaling under a symmetry or an independence assumption on the distribution of utility values for the actions. For symmetric distributions, we provide a novel characterization of the optimal signaling scheme, which yields a polynomial-time algorithm to compute an optimal scheme for many compactly represented symmetric distributions. In the independent case, we design a constant-factor approximation algorithm, which stands in marked contrast to the hardness of approximation in the general case.
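When the message space is unrestricted, the optimal scheme for a single receiver can be computed by a standard linear program over direct action recommendations with obedience constraints; the sketch below shows that baseline (the paper's focus is the harder limited-message setting), evaluated on the classic prosecutor-judge example of Kamenica and Gentzkow.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_signaling(prior, u_sender, u_receiver):
    """Optimal sender utility via the standard persuasion LP with unrestricted messages.

    By the revelation principle an optimal scheme can directly recommend actions, so we
    optimize the joint probabilities x[s, a] = prior[s] * P(recommend a | state s) under
    obedience constraints.  u_sender, u_receiver: arrays of shape (num_states, num_actions).
    """
    S, A = u_sender.shape
    c = -u_sender.flatten()                       # maximize expected sender utility
    A_eq = np.zeros((S, S * A))                   # recommendations in state s sum to prior[s]
    for s in range(S):
        A_eq[s, s * A:(s + 1) * A] = 1.0
    rows = []                                     # obedience: recommended a beats any deviation b
    for a in range(A):
        for b in range(A):
            if a == b:
                continue
            row = np.zeros(S * A)
            for s in range(S):
                row[s * A + a] = u_receiver[s, b] - u_receiver[s, a]
            rows.append(row)
    res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                  A_eq=A_eq, b_eq=np.asarray(prior, dtype=float), bounds=(0, None))
    return -res.fun, res.x.reshape(S, A)

# States: innocent, guilty; actions: acquit, convict.  The sender always prefers conviction,
# the receiver prefers the correct verdict.  With prior guilt 0.3 the optimal value is 0.6.
value, scheme = optimal_signaling([0.7, 0.3],
                                  np.array([[0., 1.], [0., 1.]]),
                                  np.array([[1., 0.], [0., 1.]]))
print(value)
```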

This work proposes a novel approach to evaluating and analyzing the behavior of multi-population parallel genetic algorithms (PGAs) when running on a cluster of multi-core processors. In particular, we study their numerical and computational behavior in depth by proposing mathematical models that represent the observed performance curves. From these models we derive mathematical descriptions of PGA performance, instead of, e.g., isolated individual results subject to visual inspection, for a better understanding of the effects of the number of cores used (scalability), the migration policy (the migration gap, in this paper), and the features of the solved problem (type of encoding and problem size). The conclusions based on the measured figures and the numerical models fitted to them offer a fresh way of understanding speed-up, running time, and numerical effort, allowing comparisons based on a few meaningful numeric parameters. This yields conclusions that go beyond the usual textual lessons found in past works on PGAs, and it can be used as a tool for estimating the future performance of the algorithms and for finding out their limitations.
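As a purely hypothetical illustration of fitting a mathematical performance model to measured curves, the sketch below fits an Amdahl-style speedup model with nonlinear least squares; both the model and the "measured" numbers are invented for the example and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def amdahl_speedup(cores, serial_fraction):
    """Amdahl-style speedup model: one illustrative choice of performance model."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Hypothetical speedups of a PGA measured on 1..32 cores (invented data).
cores = np.array([1, 2, 4, 8, 16, 32], dtype=float)
measured = np.array([1.0, 1.9, 3.5, 6.1, 9.3, 12.4])

params, _ = curve_fit(amdahl_speedup, cores, measured, p0=[0.05], bounds=(0, 1))
print(f"fitted serial fraction: {params[0]:.3f}")
print("predicted speedup on 64 cores:", amdahl_speedup(64.0, params[0]))
```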

One of the most popular methods for explaining machine learning predictions is the SHapley Additive exPlanations (SHAP) method. Imprecise SHAP, a modification of the original SHAP, is proposed for cases in which the class probability distributions are imprecise and represented by sets of distributions. The first idea behind imprecise SHAP is a new approach for computing the marginal contribution of a feature which fulfils the important efficiency property of Shapley values. The second idea is a general approach to computing and reducing interval-valued Shapley values, similar in spirit to the idea of reachable probability intervals in imprecise probability theory. A simple special implementation of the general approach, formulated as linear optimization problems, is proposed, based on the Kolmogorov-Smirnov distance and imprecise contamination models. Numerical examples with synthetic and real data illustrate imprecise SHAP.
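For context, the sketch below computes exact Shapley values of a small cooperative game by enumerating coalitions, which makes the efficiency property mentioned above explicit (the values sum to the worth of the grand coalition); the paper's imprecise SHAP, which replaces the single value function with lower and upper bounds, is not reproduced here.

```python
from itertools import combinations
from math import factorial

def shapley_values(n_features, value_fn):
    """Exact Shapley values by enumerating all coalitions (exponential, for tiny games).

    value_fn maps a frozenset of feature indices to a real number, e.g. the model's
    expected prediction when only those features are known.
    """
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [p for p in range(n_features) if p != i]
        for size in range(len(others) + 1):
            for coalition in combinations(others, size):
                S = frozenset(coalition)
                weight = (factorial(len(S)) * factorial(n_features - len(S) - 1)
                          / factorial(n_features))
                phi[i] += weight * (value_fn(S | {i}) - value_fn(S))
    return phi

# Toy game: each feature contributes its own weight, plus a bonus when 0 and 1 co-occur.
weights = {0: 1.0, 1: 2.0, 2: 0.5}
def v(S):
    return sum(weights[i] for i in S) + (0.7 if {0, 1} <= S else 0.0)

phi = shapley_values(3, v)
print(phi)        # [1.35, 2.35, 0.5]
print(sum(phi))   # 4.2 = v({0,1,2}) - v(empty): the efficiency property
```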

In this paper we deal with a practical problem that arises in military situations. The problem is to plan a path for one (or more) agents to reach a target without being detected by enemy sensors. Agents are not passive, rather they can (within limits) initiate actions which aid evasion, namely knockout (completely disable sensors) and confusion (reduce sensor detection probabilities). Agent actions are path dependent and time limited. Here by path dependent we mean that an agent needs to be sufficiently close to a sensor to knock it out. By time limited we mean that a limit is imposed on how long a sensor is knocked out or confused before it reverts back to its original operating state. The approach adopted breaks the continuous space in which agents move into a discrete space. This enables the problem to be represented (formulated) mathematically as a zero-one integer program with linear constraints. The advantage of representing the problem in this manner is that powerful commercial software optimisation packages exist to solve the problem to proven global optimality. Computational results are presented for a number of randomly generated test problems.
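A much simplified version of the discretise-and-formulate idea fits in a few lines: the sketch below (using the open-source PuLP/CBC stack as a stand-in for a commercial solver) routes an agent from a start cell to a target cell on a small grid while minimising the detection probability accumulated on entered cells. The knockout and confusion actions and the time dimension of the paper's model are omitted; only a flow-based zero-one path formulation is shown, and the grid and sensor placement are invented for the example.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

rows, cols = 4, 4
# Detection probability of each cell: a single sensor footprint around cell (1, 2).
detect = {(r, c): 0.05 * ((r - 1) ** 2 + (c - 2) ** 2 <= 1)
          for r in range(rows) for c in range(cols)}
start, target = (0, 0), (3, 3)

def neighbours(cell):
    r, c = cell
    for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        if 0 <= r + dr < rows and 0 <= c + dc < cols:
            yield (r + dr, c + dc)

cells = list(detect)
arcs = [(u, v) for u in cells for v in neighbours(u)]
x = {a: LpVariable(f"arc_{i}", cat=LpBinary) for i, a in enumerate(arcs)}  # 1 if arc is used

prob = LpProblem("evasion_path", LpMinimize)
prob += lpSum(detect[v] * x[(u, v)] for (u, v) in arcs)   # detection incurred on entered cells
for cell in cells:                                        # unit-flow conservation start -> target
    outflow = lpSum(x[(cell, v)] for v in neighbours(cell))
    inflow = lpSum(x[(u, cell)] for u in cells if cell in neighbours(u))
    prob += outflow - inflow == (1 if cell == start else (-1 if cell == target else 0))

prob.solve()
used = [(u, v) for (u, v) in arcs if x[(u, v)].value() > 0.5]
print("detection cost:", sum(detect[v] for (u, v) in used))   # 0.0: the path skirts the sensor
```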

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
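The local smoothing that DRS builds on can be made concrete with the classical Gaussian randomized-smoothing gradient estimator; the sketch below shows that estimator on a non-smooth function, while the distributed primal-dual machinery and the communication scheme of the paper are not reproduced.

```python
import numpy as np

def smoothed_gradient(f, x, gamma=0.1, num_samples=1000, seed=0):
    """Monte Carlo gradient estimate of the Gaussian smoothing f_gamma(x) = E[f(x + gamma*u)].

    Uses the standard estimator (f(x + gamma*u) - f(x)) / gamma * u averaged over
    Gaussian directions u; f only needs to be evaluable, not differentiable.
    """
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.standard_normal(x.shape[0])
        grad += (f(x + gamma * u) - f(x)) / gamma * u
    return grad / num_samples

# Non-smooth example: f(x) = ||x||_1; away from the kinks the smoothed gradient
# approaches sign(x) as gamma shrinks.
f = lambda x: np.abs(x).sum()
x = np.array([1.0, -2.0, 0.5])
print(smoothed_gradient(f, x, gamma=0.05, num_samples=5000))   # roughly [1, -1, 1]
```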
