
We consider the problem of maximizing the Nash social welfare when allocating a set $G$ of indivisible goods to a set $N$ of agents. We study instances in which all agents have 2-value additive valuations: the value of a good $g \in G$ for an agent $i \in N$ is either $1$ or $s$, where $s$ is an odd multiple of $\frac{1}{2}$ larger than one. We show that the problem is solvable in polynomial time. Akrami et al. showed that this problem is solvable in polynomial time if $s$ is integral and is NP-hard whenever $s = \frac{p}{q}$ with $p, q \in \mathbb{N}$ co-prime and $p > q \ge 3$. For the latter case, they also gave an approximation algorithm with approximation ratio at most $1.0345$. Moreover, the problem is APX-hard, with a lower bound of $1.000015$ achieved at $\frac{p}{q} = \frac{5}{4}$. The case $q = 2$ and odd $p$ was left open. In the case of integral $s$, the problem is separable in the sense that the optimal allocation of the heavy goods (= value $s$ for some agent) is independent of the number of light goods (= value $1$ for all agents). This leads to an algorithm that first computes an optimal allocation of the heavy goods and then adds the light goods greedily. This separation no longer holds for $s = \frac{3}{2}$; a simple example is given in the introduction. Thus an algorithm has to consider heavy and light goods together, which complicates matters considerably. Our algorithm is based on a collection of improvement rules that transform any allocation into an optimal allocation and exploits a connection to matchings with parity constraints.
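The failure of separability for $s = \frac{3}{2}$ can be checked by brute force on a small instance. The instance below is an illustration constructed for this summary (not necessarily the example from the paper's introduction): two agents, four heavy goods worth $\frac{3}{2}$ to agent 1 and $1$ to agent 2, and light goods worth $1$ to both. With no light goods the NSW-optimal allocation splits the heavy goods 2/2, but once two light goods are added the optimal heavy split shifts to 3/1, so the heavy goods cannot be allocated independently of the light ones.

```python
def best_split(num_heavy, num_light, s=1.5):
    """Brute-force the NSW-maximizing allocation for 2 agents.

    Heavy goods are worth s to agent 1 and 1 to agent 2;
    light goods are worth 1 to both agents.
    Returns (best NSW product, number of heavy goods given to agent 1).
    """
    best = (-1.0, None)
    for h in range(num_heavy + 1):       # heavy goods given to agent 1
        for l in range(num_light + 1):   # light goods given to agent 1
            u1 = s * h + l
            u2 = (num_heavy - h) + (num_light - l)
            if u1 * u2 > best[0]:
                best = (u1 * u2, h)
    return best

print(best_split(4, 0))  # (6.0, 2): without light goods, heavy goods split 2/2
print(best_split(4, 2))  # (13.5, 3): with two light goods, the heavy split becomes 3/1
```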

Related content

We consider the problem of finding a maximal independent set (MIS) in the shared blackboard communication model with vertex-partitioned inputs. There are $n$ players corresponding to vertices of an undirected graph, and each player sees the edges incident on its vertex -- this way, each edge is known by both its endpoints and is thus shared by two players. The players communicate in simultaneous rounds by posting their messages on a shared blackboard visible to all players, with the goal of computing an MIS of the graph. While the MIS problem is well studied in other distributed models, and while shared blackboard is, perhaps, the simplest broadcast model, lower bounds for our problem were only known against one-round protocols. We present a lower bound on the round-communication tradeoff for computing an MIS in this model. Specifically, we show that when $r$ rounds of interaction are allowed, at least one player needs to communicate $\Omega(n^{1/20^{r+1}})$ bits. In particular, with logarithmic bandwidth, finding an MIS requires $\Omega(\log\log{n})$ rounds. This lower bound can be compared with the algorithm of Ghaffari, Gouleakis, Konrad, Mitrovi\'c, and Rubinfeld [PODC 2018] that solves MIS in $O(\log\log{n})$ rounds but with a logarithmic bandwidth for an average player. Additionally, our lower bound further extends to the closely related problem of maximal bipartite matching. To prove our results, we devise a new round elimination framework, which we call partial-input embedding, that may also be useful in future work for proving round-sensitive lower bounds in the presence of edge-sharing between players. Finally, we discuss several implications of our results to multi-round (adaptive) distributed sketching algorithms, broadcast congested clique, and to the welfare maximization problem in two-sided matching markets.
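As a sanity check on the bandwidth corollary, the $\Omega(\log\log n)$ round bound follows from the stated per-player communication bound by straightforward arithmetic; the chain below suppresses constants and is only a back-of-the-envelope reading of the theorem.

```latex
% If every player posts at most O(log n) bits per round, the lower bound
% Omega(n^{1/20^{r+1}}) on some player's communication forces:
\[
  n^{1/20^{r+1}} \le O(\log n)
  \;\Longrightarrow\;
  \frac{\log n}{20^{r+1}} \le O(\log\log n)
  \;\Longrightarrow\;
  20^{r+1} \ge \Omega\!\Bigl(\tfrac{\log n}{\log\log n}\Bigr)
  \;\Longrightarrow\;
  r \ge \Omega(\log\log n).
\]
```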

In the $d$-dimensional cow-path problem, a cow living in $\mathbb{R}^d$ must locate a $(d - 1)$-dimensional hyperplane $H$ whose location is unknown. The only way the cow can find $H$ is to roam $\mathbb{R}^d$ until it intersects $H$. If the cow travels a total distance $s$ to locate a hyperplane $H$ whose distance from the origin is $r \ge 1$, then the cow is said to achieve competitive ratio $s / r$. It is a classic result that, in $\mathbb{R}^2$, the optimal (deterministic) competitive ratio is $9$. In $\mathbb{R}^3$, the optimal competitive ratio is known to be at most $\approx 13.811$. But in higher dimensions, the asymptotic relationship between $d$ and the optimal competitive ratio remains an open question. The best known upper and lower bounds, due to Antoniadis et al., are $O(d^{3/2})$ and $\Omega(d)$, leaving a gap of roughly $\sqrt{d}$. In this note, we achieve a stronger lower bound of $\tilde{\Omega}(d^{3/2})$.

The significant presence of demand charges in electric bills motivates large-load customers to utilize energy storage to reduce the peak procurement from the grid. We herein study the problem of energy storage allocation for peak minimization, under the online setting where irrevocable decisions are sequentially made without knowing future demands. The problem is uniquely challenging due to (i) the coupling of online decisions across time imposed by the inventory constraints and (ii) the noncumulative nature of the peak procurement. We apply the CR-Pursuit framework and address the challenges unique to our minimization problem to design an online algorithm achieving the optimal competitive ratio (CR) among all online algorithms. We show that the optimal CR can be computed in polynomial time by solving a linear number of linear-fractional problems. More importantly, we generalize our approach to develop an \emph{anytime-optimal} online algorithm that achieves the best possible CR at any epoch, given the inputs and online decisions so far. The algorithm retains the optimal worst-case performance and attains adaptive average-case performance. Trace-driven simulations show that our algorithm can decrease the peak demand by an extra 19% compared to baseline alternatives under typical settings.
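The offline benchmark in such peak-shaving problems, the smallest peak attainable when all demands are known and a storage budget can be used to shave them, reduces to a one-dimensional feasibility search: a peak level $m$ is feasible iff the total demand above $m$ is at most the storage budget. The sketch below is a simplified illustration under assumptions not taken from the abstract (a single lossless, discharge-only battery with no rate limits); it computes only this offline benchmark and is not the paper's CR-Pursuit algorithm.

```python
def offline_peak(demands, capacity, iters=60):
    """Minimum achievable peak grid draw when a storage budget `capacity`
    can shave demand, assuming a single lossless, discharge-only battery.
    A peak level m is feasible iff the energy above m is at most `capacity`."""
    lo, hi = 0.0, max(demands)
    for _ in range(iters):                # bisection on the peak level
        mid = (lo + hi) / 2
        shaved = sum(max(0.0, d - mid) for d in demands)
        if shaved <= capacity:
            hi = mid                      # feasible: try a lower peak
        else:
            lo = mid                      # infeasible: need a higher peak
    return hi

demands = [3.0, 1.0, 4.0, 6.0, 2.0, 5.0]
print(max(demands))                          # 6.0, peak with no storage
print(round(offline_peak(demands, 4.0), 3))  # ~3.667, offline-optimal peak
```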

As a special infinite-order vector autoregressive (VAR) model, the vector autoregressive moving average (VARMA) model can capture much richer temporal patterns than the widely used finite-order VAR model. However, its practicality has long been hindered by its non-identifiability, computational intractability, and relative difficulty of interpretation. This paper introduces a novel infinite-order VAR model which, with only a little sacrifice of generality, inherits the essential temporal patterns of the VARMA model but avoids all of the above drawbacks. As another attractive feature, the temporal and cross-sectional dependence structures of this model can be interpreted separately, since they are characterized by different sets of parameters. For high-dimensional time series, this separation motivates us to impose sparsity on the parameters determining the cross-sectional dependence. As a result, greater statistical efficiency and interpretability can be achieved, while no loss of temporal information is incurred by the imposed sparsity. We introduce an $\ell_1$-regularized estimator for the proposed model and derive the corresponding nonasymptotic error bounds. An efficient block coordinate descent algorithm and a consistent model order selection method are developed. The merit of the proposed approach is supported by simulation studies and a real-world macroeconomic data analysis.
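For intuition on the role of $\ell_1$ regularization in vector autoregression, the sketch below fits an ordinary sparse finite-order VAR by running a lasso regression of each coordinate on the lagged observations. This is only a generic illustration under its own simulated data; it is not the proposed infinite-order model, whose parameterization separates temporal and cross-sectional dependence and is estimated with a dedicated block coordinate descent algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulate a small VAR(1): y_t = A y_{t-1} + noise, with a sparse transition matrix A.
d, T = 5, 400
A = np.zeros((d, d))
A[np.arange(d), np.arange(d)] = 0.5
A[0, 1] = 0.3
Y = np.zeros((T, d))
for t in range(1, T):
    Y[t] = A @ Y[t - 1] + 0.1 * rng.standard_normal(d)

# Lasso regression of y_t on y_{t-1}, one equation per coordinate.
X, Z = Y[:-1], Y[1:]
A_hat = np.vstack([Lasso(alpha=0.01, fit_intercept=False).fit(X, Z[:, j]).coef_
                   for j in range(d)])
print(np.round(A_hat, 2))   # the sparsity pattern of A is approximately recovered
```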

In this paper we study estimation of Generalized Linear Models (GLMs) in the setting where the agents (individuals) are strategic or self-interested and are concerned about the privacy of the data they report. Compared with the classical setting, here we aim to design mechanisms that both incentivize most agents to truthfully report their data and preserve the privacy of individuals' reports, while the output should also be close to the underlying parameter. In the first part of the paper, we consider the case where the covariates are sub-Gaussian and the responses are heavy-tailed, having only finite fourth moments. First, motivated by the stationarity condition of the maximizer of the likelihood function, we derive a novel private and closed-form estimator. Based on this estimator, and via an appropriate design of the computation and payment scheme, we propose a mechanism for several canonical models, such as linear regression, logistic regression, and Poisson regression, with the following properties: (1) the mechanism is $o(1)$-jointly differentially private (with probability at least $1-o(1)$); (2) it is an $o(\frac{1}{n})$-approximate Bayes Nash equilibrium for a $(1-o(1))$-fraction of agents to truthfully report their data, where $n$ is the number of agents; (3) the output achieves an error of $o(1)$ with respect to the underlying parameter; (4) the mechanism is individually rational for a $(1-o(1))$-fraction of agents; (5) the payment budget required from the analyst to run the mechanism is $o(1)$. In the second part, we consider the linear regression model under a more general setting where both covariates and responses are heavy-tailed and only have finite fourth moments. By using an $\ell_4$-norm shrinkage operator, we propose a private estimator and payment scheme with properties similar to those in the sub-Gaussian case.
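One plausible reading of an $\ell_4$-norm shrinkage operator is a map that rescales each observation so its $\ell_4$ norm is bounded by a threshold, thereby capping the influence of heavy-tailed samples before a closed-form estimate is formed. The snippet below is only that generic reading, with a hypothetical threshold `tau`; the paper's exact operator, threshold calibration, and privacy noise are not reproduced here.

```python
import numpy as np

def l4_shrink(x, tau):
    """Scale x toward the origin so that its l4 norm is at most tau.
    This caps the influence of heavy-tailed observations."""
    norm4 = np.sum(np.abs(x) ** 4) ** 0.25
    if norm4 <= tau:
        return x
    return x * (tau / norm4)

rng = np.random.default_rng(1)
x = rng.standard_t(df=3, size=5) * 10   # a heavy-tailed sample
print(l4_shrink(x, tau=2.0))            # shrunk so that ||.||_4 <= 2
```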

While numerous sequential algorithms have been developed to estimate community structure in networks, there is little guidance on what significance level or stopping parameter to use in these sequential testing procedures. Most algorithms rely on prespecifying the number of communities or use an arbitrary stopping rule. We provide a principled approach to selecting a nominal significance level for sequential community detection procedures by controlling the tolerance ratio, defined as the ratio of the probabilities of underfitting and overfitting the number of clusters when fitting a network. We introduce an algorithm for specifying this significance level from a user-specified tolerance ratio, and demonstrate its utility with a sequential modularity maximization approach in a stochastic block model framework. We evaluate the performance of the proposed algorithm through extensive simulations and demonstrate its utility in controlling the tolerance ratio when clustering single-cell RNA sequencing data by cell type and when clustering a congressional voting network.

Much of the literature on optimal design of bandit algorithms is based on minimization of expected regret. It is well known that designs that are optimal over certain exponential families can achieve expected regret that grows logarithmically in the number of arm plays, at a rate governed by the Lai-Robbins lower bound. In this paper, we show that when one uses such optimized designs, the regret distribution of the associated algorithms necessarily has a very heavy tail, specifically, that of a truncated Cauchy distribution. Furthermore, for $p>1$, the $p$-th moment of the regret distribution grows much faster than poly-logarithmically, in particular as a power of the total number of arm plays. We show that optimized UCB bandit designs are also fragile in an additional sense: when the problem is even slightly mis-specified, the regret can grow much faster than the conventional theory suggests. Our arguments are based on standard change-of-measure ideas, and indicate that the most likely way for regret to become larger than expected is when the optimal arm returns below-average rewards in the first few arm plays, causing the algorithm to believe that the arm is sub-optimal. To alleviate the fragility issues exposed, we show that UCB algorithms can be modified so as to ensure a desired degree of robustness to mis-specification. In doing so, we also provide a sharp trade-off between the amount of UCB exploration and the tail exponent of the resulting regret distribution.
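An empirical way to see the heavy-tailed regret phenomenon is to run a standard UCB1 rule many times on a fixed two-armed Gaussian instance and compare the mean regret with its upper quantiles. The snippet below is a plain illustration with arbitrary horizon, gap, and number of runs chosen for this summary; it is not the paper's analysis or its modified algorithm.

```python
import numpy as np

def ucb1_regret(means, horizon, rng):
    """Run UCB1 on Gaussian arms (unit variance) and return the pseudo-regret."""
    k = len(means)
    counts, sums, pulls = np.zeros(k), np.zeros(k), []
    for t in range(horizon):
        if t < k:                         # pull each arm once first
            arm = t
        else:
            ucb = sums / counts + np.sqrt(2.0 * np.log(t + 1) / counts)
            arm = int(np.argmax(ucb))
        counts[arm] += 1
        sums[arm] += rng.normal(means[arm], 1.0)
        pulls.append(arm)
    return float(np.sum(max(means) - np.array(means)[pulls]))

rng = np.random.default_rng(0)
regrets = np.array([ucb1_regret([0.5, 0.0], 2000, rng) for _ in range(500)])
# The mean is modest, but the upper quantiles are far larger, hinting at a heavy tail.
print(regrets.mean(), np.quantile(regrets, [0.5, 0.9, 0.99]))
```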

Designing efficient algorithms to compute Nash equilibria poses considerable challenges in Algorithmic Game Theory and Optimization. In this work, we employ integer programming techniques to compute Nash equilibria in Integer Programming Games, a class of simultaneous and non-cooperative games where each player solves a parametrized integer program. We introduce ZERO Regrets, a general and efficient cutting plane algorithm to compute, enumerate, and select Nash equilibria. Our framework leverages the concept of equilibrium inequality, an inequality valid for any Nash equilibrium, and the associated equilibrium separation oracle. We evaluate our algorithmic framework on a wide range of practical and methodological problems from the literature, providing a solid benchmark against the existing approaches.
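To give a feel for the equilibrium separation step, which asks whether any player can profitably deviate from a candidate profile and cuts the profile off if so, the toy sketch below enumerates the pure profiles of a tiny finite game and uses a best-response check as the separation test. It is only a schematic analogue written for this summary: in ZERO Regrets the candidate profile comes from an integer program built from equilibrium inequalities and the deviation check is itself a parametrized integer program, neither of which is reproduced here, and the payoffs below are arbitrary.

```python
from itertools import product

# A tiny 2-player game with strategy sets {0, 1, 2}; payoffs are arbitrary toy functions.
strategies = [range(3), range(3)]

def payoff(i, profile):
    x1, x2 = profile
    return [x1 * x2 - x1 * x1, x1 + x2 - x2 * x2][i]

def separation_oracle(profile):
    """Return a profitable deviation (player, strategy) if one exists, else None."""
    for i, options in enumerate(strategies):
        current = payoff(i, profile)
        for s in options:
            deviated = list(profile)
            deviated[i] = s
            if payoff(i, tuple(deviated)) > current + 1e-9:
                return i, s
    return None

# Keep only the profiles that the oracle cannot cut off: the pure Nash equilibria.
equilibria = [p for p in product(*strategies) if separation_oracle(p) is None]
print(equilibria)   # e.g. [(0, 0), (0, 1), (1, 1)] for these toy payoffs
```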

Processor failures in a multiprocessor system have a negative impact on its distributed computing efficiency. With the rapid expansion of multiprocessor systems, the importance of fault diagnosis is becoming increasingly prominent. The $h$-component diagnosability of $G$, denoted by $ct_{h}(G)$, is the maximum number of nodes in a faulty set $F$ that can be correctly identified by the system, provided that the number of components in $G-F$ is at least $h$. In this paper, we determine the $(h+1)$-component diagnosability of general networks under the PMC model and the MM$^{*}$ model. As applications, the component diagnosability is explored for some well-known networks, including complete cubic networks, hierarchical cubic networks, generalized exchanged hypercubes, dual-cube-like networks, hierarchical hypercubes, Cayley graphs generated by transposition trees (except star graphs), and DQcube. Furthermore, we provide some comparison results between the component diagnosability and other fault diagnosabilities.

We consider the problem of partitioning a line segment into two subsets so that $n$ finite measures all have the same ratio of values between the two subsets. Letting $\alpha\in[0,1]$ denote the desired ratio, this generalises the PPA-complete consensus-halving problem, in which $\alpha=\frac{1}{2}$. Stromquist and Woodall showed that for any $\alpha$, there exists a solution using $2n$ cuts of the segment. They also showed that if $\alpha$ is irrational, this upper bound is almost optimal. In this work, we elaborate the bounds for rational values of $\alpha$. For $\alpha = \frac{\ell}{k}$, we show a lower bound of $\frac{k-1}{k} \cdot 2n - O(1)$ cuts; we also obtain almost matching upper bounds for a large subset of rational $\alpha$. On the computational side, we explore how the complexity of the problem depends on the number of cuts available. More specifically: 1. when we are required to use the minimal number of cuts for each instance, the problem is NP-hard for any $\alpha$; 2. for a large subset of rational $\alpha = \frac{\ell}{k}$, when $\frac{k-1}{k} \cdot 2n$ cuts are available, the problem is in PPA-$k$ under Turing reductions; 3. when $2n$ cuts are allowed, the problem belongs to PPA for any $\alpha$; more generally, the problem belongs to PPA-$p$ for any prime $p$ if $2(p-1)\cdot \frac{\lceil p/2 \rceil}{\lfloor p/2 \rfloor} \cdot n$ cuts are available.
