
This paper presents theoretical and practical results for the bin packing problem with scenarios, a generalization of the classical bin packing problem that accounts for uncertain scenarios, only one of which is realized. For this problem, we propose an absolute approximation algorithm whose ratio is bounded by the square root of the number of scenarios times the approximation ratio of an algorithm for the vector bin packing problem. We also show how an asymptotic polynomial-time approximation scheme can be derived when the number of scenarios is constant. As a practical study of the problem, we present a branch-and-price algorithm to solve an exponential-size model and a variable neighborhood search heuristic. To speed up the convergence of the exact algorithm, we also consider lower bounds based on dual feasible functions. Computational results show that the branch-and-price algorithm obtains optimal solutions for about 59% of the considered instances, while the heuristic combined with branch-and-price solves 62% of them to optimality.
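
To make the setting concrete, here is a minimal first-fit sketch under the common formulation in which each item carries a size and a set of scenarios in which it is active, and the bin capacity must be respected within every scenario separately. The item data below are illustrative assumptions; this is not the branch-and-price algorithm or the VNS heuristic of the paper.

```python
# Minimal first-fit sketch for bin packing with scenarios (illustrative only).
# Assumed formulation: every item has a size and a set of scenarios in which it
# is active; a packing is feasible if, for each scenario, the items of that
# scenario placed in a single bin fit within the capacity.
from dataclasses import dataclass, field

@dataclass
class Bin:
    capacity: float
    loads: dict = field(default_factory=dict)  # load per scenario in this bin

    def fits(self, size, scenarios):
        return all(self.loads.get(s, 0.0) + size <= self.capacity for s in scenarios)

    def add(self, size, scenarios):
        for s in scenarios:
            self.loads[s] = self.loads.get(s, 0.0) + size

def first_fit_scenarios(items, capacity=1.0):
    """items: list of (size, scenarios) pairs; returns a list of Bins."""
    bins = []
    for size, scenarios in items:
        for b in bins:
            if b.fits(size, scenarios):
                b.add(size, scenarios)
                break
        else:
            b = Bin(capacity)
            b.add(size, scenarios)
            bins.append(b)
    return bins

# Example: three items, two scenarios; the third item is active in both.
items = [(0.6, {1}), (0.6, {2}), (0.5, {1, 2})]
print(len(first_fit_scenarios(items)))  # 2 bins suffice here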

Related content

We consider a set reconciliation setting in which two parties hold similar sets that they would like to reconcile. In particular, we focus on set reconciliation based on invertible Bloom lookup tables (IBLTs), a probabilistic data structure inspired by Bloom filters but allowing for more complex operations. IBLT-based set reconciliation schemes have the advantage of low complexity; however, the schemes available in the literature are known to be far from optimal in terms of communication complexity (overhead). The inefficiency of IBLT-based set reconciliation can be attributed to two facts. First, it requires an estimate of the cardinality of the set difference between the sets, which increases the overhead. Second, to cope with errors in this estimate, IBLT schemes in the literature make a worst-case assumption and oversize the data structures, thus further increasing the overhead. In this work, we present a novel IBLT-based set reconciliation protocol that does not require estimating the cardinality of the set difference. The proposed scheme relies on what we term multi-edge-type (MET) IBLTs. Simulation results show that the novel scheme outperforms previous IBLT-based approaches to set reconciliation.
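
As background for readers unfamiliar with the data structure, the following is a minimal Python sketch of a plain IBLT used for set reconciliation (insert, cell-wise subtraction, and peeling). The number of cells, the hash choices, and the sizing are arbitrary illustrative assumptions; this is not the MET construction proposed in the paper.

```python
# Plain IBLT sketch for set reconciliation (illustrative only, not MET IBLTs).
import hashlib

def _h(x, salt):
    return int.from_bytes(hashlib.sha256(f"{salt}:{x}".encode()).digest()[:8], "big")

class IBLT:
    def __init__(self, m=64, k=3):          # m cells, k hash functions (assumed sizing)
        self.m, self.k = m, k
        self.count = [0] * m
        self.key_sum = [0] * m
        self.hash_sum = [0] * m

    def _cells(self, key):
        return [_h(key, i) % self.m for i in range(self.k)]

    def insert(self, key, sign=+1):
        kh = _h(key, "chk")
        for c in self._cells(key):
            self.count[c] += sign
            self.key_sum[c] ^= key
            self.hash_sum[c] ^= kh

    def subtract(self, other):
        """Cell-wise difference; decodes to the symmetric set difference."""
        out = IBLT(self.m, self.k)
        for i in range(self.m):
            out.count[i] = self.count[i] - other.count[i]
            out.key_sum[i] = self.key_sum[i] ^ other.key_sum[i]
            out.hash_sum[i] = self.hash_sum[i] ^ other.hash_sum[i]
        return out

    def decode(self):
        """Peel pure cells; returns (only_in_A, only_in_B) or None on failure."""
        a_minus_b, b_minus_a = set(), set()
        progress = True
        while progress:
            progress = False
            for i in range(self.m):
                if self.count[i] in (+1, -1) and self.hash_sum[i] == _h(self.key_sum[i], "chk"):
                    key, sign = self.key_sum[i], self.count[i]
                    (a_minus_b if sign == 1 else b_minus_a).add(key)
                    self.insert(key, -sign)      # remove the recovered key
                    progress = True
        if any(self.count) or any(self.key_sum):
            return None                          # decoding failed; the IBLT was undersized
        return a_minus_b, b_minus_a

# Alice and Bob hold similar sets and exchange only their IBLTs.
A, B = set(range(100)), set(range(3, 103))
ia, ib = IBLT(), IBLT()
for x in A: ia.insert(x)
for x in B: ib.insert(x)
print(ia.subtract(ib).decode())  # expected ({0, 1, 2}, {100, 101, 102}) with high probability
```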

A matching $M$ in a graph $G$ is an \emph{acyclic matching} if the subgraph of $G$ induced by the endpoints of the edges of $M$ is a forest. Given a graph $G$ and a positive integer $\ell$, Acyclic Matching asks whether $G$ has an acyclic matching of size (i.e., the number of edges) at least $\ell$. In this paper, we first prove that assuming $\mathsf{W[1]\nsubseteq FPT}$, there does not exist any $\mathsf{FPT}$-approximation algorithm for Acyclic Matching that approximates it within a constant factor when the parameter is the size of the matching. Our reduction is general in the sense that it also asserts $\mathsf{FPT}$-inapproximability for Induced Matching and Uniquely Restricted Matching as well. We also consider three below-guarantee parameters for Acyclic Matching, viz. $\frac{n}{2}-\ell$, $\mathsf{MM(G)}-\ell$, and $\mathsf{IS(G)}-\ell$, where $n$ is the number of vertices in $G$, $\mathsf{MM(G)}$ is the matching number of $G$, and $\mathsf{IS(G)}$ is the independence number of $G$. Furthermore, we show that Acyclic Matching does not exhibit a polynomial kernel with respect to vertex cover number (or vertex deletion distance to clique) plus the size of the matching unless $\mathsf{NP}\subseteq\mathsf{coNP}\slash\mathsf{poly}$.
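
The definition above is easy to operationalize. The small checker below (the helper names are my own, plain union-find, no libraries) tests whether a given matching of a graph is acyclic in the stated sense.

```python
# Checker for the definition above: a matching M is acyclic if the subgraph
# induced by the endpoints of M is a forest. Illustrative sketch only.

def is_matching(edges, M):
    ends = [v for e in M for v in e]
    return set(M) <= set(map(frozenset, edges)) and len(ends) == len(set(ends))

def is_acyclic_matching(edges, M):
    """edges: iterable of 2-sets of vertices; M: iterable of 2-sets (the matching)."""
    M = [frozenset(e) for e in M]
    if not is_matching(edges, M):
        return False
    V = {v for e in M for v in e}                       # endpoints of M
    induced = [tuple(e) for e in map(frozenset, edges) if e <= V]
    # The induced subgraph is a forest iff it has no cycle: union-find check.
    parent = {v: v for v in V}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in induced:
        ru, rw = find(u), find(w)
        if ru == rw:
            return False                                # cycle among matched endpoints
        parent[ru] = rw
    return True

# C4 = a-b-c-d-a: the matching {ab, cd} saturates all four vertices and the
# induced subgraph is the whole cycle, so it is *not* an acyclic matching.
C4 = [{"a","b"}, {"b","c"}, {"c","d"}, {"d","a"}]
print(is_acyclic_matching(C4, [{"a","b"}, {"c","d"}]))  # False
print(is_acyclic_matching(C4, [{"a","b"}]))             # True
```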

In the spanning tree congestion problem, given a connected graph $G$, the objective is to compute a spanning tree $T$ in $G$ that minimizes its maximum edge congestion, where the congestion of an edge $e$ of $T$ is the number of edges in $G$ for which the unique path in $T$ between their endpoints traverses $e$. The problem is known to be $\mathbb{NP}$-hard, but its approximability is still poorly understood. In the decision version of this problem, denoted $K-\textsf{STC}$, we need to determine if $G$ has a spanning tree with congestion at most $K$. It is known that $K-\textsf{STC}$ is $\mathbb{NP}$-complete for $K\ge 8$. On the other hand, $3-\textsf{STC}$ can be solved in polynomial time, with the complexity status of this problem for $K\in \{4,5,6,7\}$ remaining open. We substantially improve the earlier hardness results by proving that $K-\textsf{STC}$ is $\mathbb{NP}$-complete for $K\ge 5$. This leaves only the case $K=4$ open, and improves the lower bound on the approximation ratio to $1.2$. Motivated by evidence that minimizing congestion is hard even for graphs of small constant radius, we consider $K-\textsf{STC}$ restricted to graphs of radius $2$, and we prove that this variant is $\mathbb{NP}$-complete for all $K\ge 6$. Exploring further in this direction, we also examine the variant, denoted $K-\textsf{STC}D$, where the objective is to determine if the graph has a depth-$D$ spanning tree of congestion at most $K$. We prove that $6-\textsf{STC}2$ is $\mathbb{NP}$-complete even for bipartite graphs. For bipartite graphs we establish a tight bound, by also proving that $5-\textsf{STC}2$ is polynomial-time solvable. Additionally, we complement this result with polynomial-time algorithms for two special cases that involve bipartite graphs and restrictions on vertex degrees.
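
The congestion objective can be evaluated directly from its definition. The brute-force sketch below (illustrative only, not an algorithm from the paper) computes the congestion of a given spanning tree by routing every graph edge along its unique tree path.

```python
# Brute-force evaluation of spanning tree congestion: the congestion of a tree
# edge e is the number of G-edges whose unique T-path uses e; the tree's
# congestion is the maximum over tree edges. Illustrative sketch only.
from collections import defaultdict

def tree_path(parent, u, v):
    """Edges on the unique u-v path in the tree, given parent pointers."""
    def ancestors(x):
        chain = [x]
        while parent[x] is not None:
            x = parent[x]
            chain.append(x)
        return chain
    au, av = ancestors(u), ancestors(v)
    common = set(au) & set(av)
    lca = next(x for x in au if x in common)
    edges = []
    for chain in (au, av):
        for x in chain:
            if x == lca:
                break
            edges.append(frozenset((x, parent[x])))
    return edges

def spanning_tree_congestion(G_edges, T_edges):
    T_adj = defaultdict(list)
    for a, b in T_edges:
        T_adj[a].append(b); T_adj[b].append(a)
    root = next(iter(T_adj))
    parent, stack = {root: None}, [root]
    while stack:                                 # build parent pointers by DFS
        x = stack.pop()
        for y in T_adj[x]:
            if y not in parent:
                parent[y] = x
                stack.append(y)
    cong = defaultdict(int)
    for u, v in G_edges:
        for e in tree_path(parent, u, v):
            cong[e] += 1
    return max(cong.values())

# K4 with a star spanning tree centered at 1: each star edge carries its own
# G-edge plus two detoured edges, so the congestion is 3.
K4 = [(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)]
star = [(1,2),(1,3),(1,4)]
print(spanning_tree_congestion(K4, star))  # 3
```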

A query game is a pair of a set $Q$ of queries and a set $\mathcal{F}$ of functions, or codewords, $f:Q\rightarrow \mathbb{Z}$. We think of this as a two-player game. One player, Codemaker, picks a hidden codeword $f\in \mathcal{F}$. The other player, Codebreaker, then tries to determine $f$ by asking a sequence of queries $q\in Q$, after each of which Codemaker must respond with the value $f(q)$. The goal of Codebreaker is to uniquely determine $f$ using as few queries as possible. Two classical examples of such games are coin-weighing with a spring scale and Mastermind, which are of interest both as recreational games and for their connection to information theory. In this paper, we present a general framework for finding short solutions to query games. As applications, we give new self-contained proofs of the query complexity of variants of the coin-weighing problem, and we prove that the deterministic query complexity of Mastermind with $n$ positions and $k$ colors is $\Theta(n \log k/ \log n + k)$ if only black-peg information is provided, and $\Theta(n \log k / \log n + k/n)$ if both black- and white-peg information is provided. In the deterministic setting, these are the first solutions to Mastermind known to be optimal up to constant factors for any $k\geq n^{1-o(1)}$.
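
For concreteness, the two feedback notions used in the Mastermind bounds can be written out directly; this is the standard definition of black and white pegs, not part of the paper's framework, and the example parameters are the classical board game values.

```python
# Black pegs: positions guessed correctly. White pegs: additional correct
# colors in wrong positions. Standard definitions, shown for reference.
from collections import Counter

def black_pegs(secret, guess):
    return sum(s == g for s, g in zip(secret, guess))

def black_and_white_pegs(secret, guess):
    black = black_pegs(secret, guess)
    # total color agreement irrespective of position, minus the exact matches
    color_overlap = sum((Counter(secret) & Counter(guess)).values())
    return black, color_overlap - black

# n = 4 positions, k = 6 colors (the classical board game parameters)
secret = (1, 2, 3, 4)
print(black_pegs(secret, (1, 3, 3, 5)))            # 2
print(black_and_white_pegs(secret, (1, 3, 2, 4)))  # (2, 2)
```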

It is well known that the Euler method for approximating the solutions of a random ordinary differential equation $\mathrm{d}X_t/\mathrm{d}t = f(t, X_t, Y_t)$ driven by a stochastic process $\{Y_t\}_t$ with $\theta$-H\"older sample paths is estimated to be of strong order $\theta$ with respect to the time step, provided $f=f(t, x, y)$ is sufficiently regular and satisfies suitable bounds. Here, it is proved that, in many typical cases, further conditions on the noise can be exploited so that the strong convergence is actually of order 1, regardless of the H\"older regularity of the sample paths. This applies for instance to additive or multiplicative It\^o process noises (such as Wiener, Ornstein-Uhlenbeck, and geometric Brownian motion processes); to point-process noises (such as Poisson point processes and Hawkes self-exciting processes, which even have jump-type discontinuities); and to transport-type processes with sample paths of bounded variation. The result is based on a novel approach, estimating the global error as an iterated integral over both large and small mesh scales, and switching the order of integration to move the critical regularity to the large scale. The work is complemented with numerical simulations illustrating the strong order 1 convergence in those cases, and with an example with fractional Brownian motion noise with Hurst parameter $0 < H < 1/2$ for which the order of convergence is $H + 1/2$, hence lower than the attained order 1 in the examples above, but still higher than the order $H$ of convergence expected from previous works.
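
The claimed order-1 behaviour for additive Wiener noise can be probed numerically in the spirit of the simulations mentioned above. The sketch below uses an equation, parameters, and sample sizes of my own choosing, and compares Euler solutions at several step sizes against a fine-grid reference; it is an illustration, not the paper's experiments.

```python
# Numerical probe of the strong order for the RODE dX/dt = -X + Y_t with Y a
# Wiener process (additive noise), solved by Euler at several step sizes
# against a fine-grid reference. Illustrative sketch; parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
T, n_fine, n_paths = 1.0, 2**12, 200
dt_fine = T / n_fine

def euler(Y, n_steps):
    """Euler for dX/dt = -X + Y_t, X_0 = 1, sampling Y on the coarse grid."""
    h, stride = T / n_steps, n_fine // n_steps
    x = 1.0
    for i in range(n_steps):
        x += h * (-x + Y[i * stride])
    return x

errors = {n: [] for n in (2**4, 2**5, 2**6, 2**7)}
for _ in range(n_paths):
    # Brownian path on the fine grid, shared by all resolutions of this sample
    Y = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(dt_fine), n_fine))])
    x_ref = euler(Y, n_fine)                     # fine-grid reference solution
    for n in errors:
        errors[n].append(abs(euler(Y, n) - x_ref))

for n, errs in errors.items():
    print(f"h = {T/n:.4f}   strong error ~ {np.mean(errs):.2e}")
# Halving h should roughly halve the error if the strong order is 1.
```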

Consensus clustering (or clustering aggregation) inputs $k$ partitions of a given ground set $V$, and seeks to create a single partition that minimizes disagreement with all input partitions. State-of-the-art algorithms for consensus clustering are based on correlation clustering methods like the popular Pivot algorithm. Unfortunately, these methods have not proved practical for consensus clustering instances where either $k$ or $|V|$ is large. In this paper we provide practical run time improvements for correlation clustering solvers when $|V|$ is large. We reduce the time complexity of Pivot from $O(|V|^2 k)$ to $O(|V| k)$, and its space complexity from $O(|V|^2)$ to $O(|V| k)$ -- a significant savings since in practice $k$ is much less than $|V|$. We also analyze a sampling method for these algorithms when $k$ is large, bridging the gap between running Pivot on the full set of input partitions (an expected 1.57-approximation) and choosing a single input partition at random (an expected 2-approximation). We show experimentally that algorithms like Pivot obtain quality clustering results in practice even on small samples of input partitions.
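
As a baseline for the discussion, here is the plain Pivot routine specialized to consensus clustering, where a pair of elements is treated as positive when at least half of the $k$ input partitions co-cluster them. This is the naive $O(|V|^2 k)$ variant with my own data layout, not the $O(|V| k)$ refinement or the sampling scheme studied in the paper.

```python
# Plain Pivot for consensus clustering: elements are represented by their k
# input labels; a pair is "positive" when at least half of the input
# partitions agree. Naive O(|V|^2 k) baseline, illustrative only.
import random

def pivot_consensus(labels, seed=0):
    """labels: dict mapping element -> tuple of k cluster labels (one per input partition)."""
    rng = random.Random(seed)
    k = len(next(iter(labels.values())))
    unclustered = set(labels)
    clusters = []
    while unclustered:
        p = rng.choice(sorted(unclustered))          # random pivot
        cluster = {p}
        for v in list(unclustered - {p}):
            agree = sum(a == b for a, b in zip(labels[p], labels[v]))
            if 2 * agree >= k:                       # co-clustered in at least half the inputs
                cluster.add(v)
        clusters.append(cluster)
        unclustered -= cluster
    return clusters

# Three input partitions of {a, b, c, d}; a and b are always together.
labels = {"a": (1, 1, 2), "b": (1, 1, 2), "c": (2, 1, 1), "d": (2, 2, 1)}
print(pivot_consensus(labels))  # two clusters: {a, b} and {c, d}
```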

We consider the problem of uncertainty quantification in change point regressions, where the signal can be piecewise polynomial of arbitrary but fixed degree. That is, we seek disjoint intervals which, uniformly at a given confidence level, must each contain a change point location. We propose a procedure based on performing local tests at a number of scales and locations on a sparse grid, which adapts to the choice of grid in the sense that by choosing a sparser grid one explicitly pays a lower price for multiple testing. The procedure is fast, as its computational complexity is always of the order $\mathcal{O}(n \log(n))$ where $n$ is the length of the data, and optimal in the sense that under certain mild conditions every change point is detected with high probability and the widths of the intervals returned match the minimax localisation rates for the associated change point problem up to log factors. A detailed simulation study shows that our procedure is competitive against state-of-the-art algorithms for similar problems. Our procedure is implemented in the R package ChangePointInference, which is available via //github.com/gaviosha/ChangePointInference.
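
To illustrate the general principle of local testing on a sparse grid (not the authors' procedure or the ChangePointInference API), the toy sketch below flags dyadic windows whose two halves differ by more than a noise-calibrated threshold and keeps a disjoint subfamily of the flagged intervals; the threshold constant and the piecewise-constant test signal are my own assumptions.

```python
# Toy illustration of local testing on a dyadic grid for piecewise-constant
# signals: flag a window when its two halves differ by more than a
# noise-calibrated threshold. Sketch of the principle only.
import numpy as np

def flag_intervals(y, sigma, alpha_factor=2.0):
    n = len(y)
    flagged = []
    h = 2
    while h <= n:
        thresh = alpha_factor * sigma * np.sqrt(2 * np.log(n) / h)
        for s in range(0, n - 2 * h + 1, h):          # windows on a coarse grid
            left, right = y[s:s + h].mean(), y[s + h:s + 2 * h].mean()
            if abs(left - right) > thresh:
                flagged.append((s, s + 2 * h))        # must contain a change point
        h *= 2
    # keep a disjoint subfamily, preferring the narrowest (most informative) intervals
    flagged.sort(key=lambda iv: iv[1] - iv[0])
    chosen = []
    for iv in flagged:
        if all(iv[1] <= c[0] or iv[0] >= c[1] for c in chosen):
            chosen.append(iv)
    return sorted(chosen)

rng = np.random.default_rng(1)
signal = np.concatenate([np.zeros(200), 2 * np.ones(200)])   # one change at t = 200
y = signal + rng.normal(0, 1, len(signal))
print(flag_intervals(y, sigma=1.0))   # e.g. one narrow interval around 200
```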

Cook and Reckhow (1979) pointed out that NP is not closed under complementation iff there is no propositional proof system that admits polynomial-size proofs of all tautologies. The theory of proof complexity generators aims at constructing sets of tautologies hard for strong, and possibly for all, proof systems. We focus on a conjecture from K.2004 in the foundations of the theory that there is a proof complexity generator hard for all proof systems. This can be equivalently formulated (for p-time generators) without reference to proof complexity notions as follows: * There exists a p-time function $g$ stretching each input by one bit such that its range intersects all infinite NP sets. We consider several facets of this conjecture, including its links to bounded arithmetic (witnessing and independence results), to time-bounded Kolmogorov complexity, to the feasible disjunction property of propositional proof systems, and to the complexity of proof search. We argue that a specific gadget generator from K.2009 is a good candidate for $g$. We define a new hardness property of generators, the $\bigvee$-hardness, and show that one specific gadget generator is the $\bigvee$-hardest (w.r.t. any sufficiently strong proof system). We define the class of feasibly infinite NP sets and show, assuming a hypothesis from circuit complexity, that the conjecture holds for all feasibly infinite NP sets.

We present an information-theoretic approach to lower bound the oracle complexity of nonsmooth black box convex optimization, unifying previous lower bounding techniques by identifying a combinatorial problem, namely string guessing, as a single source of hardness. As a measure of complexity we use distributional oracle complexity, which subsumes randomized oracle complexity as well as worst-case oracle complexity. We obtain strong lower bounds on distributional oracle complexity for the box $[-1,1]^n$, as well as for the $L^p$-ball for $p \geq 1$ (for both low-scale and large-scale regimes), matching worst-case upper bounds; hence we close the gap between distributional complexity (and in particular randomized complexity) and worst-case complexity. Furthermore, the bounds remain essentially the same for high-probability and bounded-error oracle complexity, and even for the combination of the two, i.e., bounded-error high-probability oracle complexity. This considerably extends the applicability of known bounds.

We present an efficient matrix-free geometric multigrid method for the elastic Helmholtz equation, together with a suitable discretization. Many discretization methods have been considered in the literature for the Helmholtz equation, as well as many solvers and preconditioners, some of which have been adapted to the elastic version of the equation. However, very little work considers the interplay between the discretization and the solver. In this work, we aim to bridge this gap. By choosing an appropriate stencil for the re-discretization of the equation on the coarse grid, we develop a multigrid method that can easily be implemented as matrix-free, relying on stencils rather than sparse matrices. This is crucial for efficient implementation on modern hardware. Using two-grid local Fourier analysis, we validate the compatibility of our discretization with our solver, and tune the stencil weights so that the convergence rate of the multigrid cycle is optimal. The result is a scalable multigrid preconditioner that can tackle large real-world 3D scenarios.
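
The matrix-free idea the method relies on can be illustrated in a few lines: the sketch below applies a scalar 2D Helmholtz stencil directly to a grid function, with no sparse matrix ever assembled. The elastic operator and the tuned coarse-grid stencils of the paper follow the same pattern but are not reproduced here; the grid size and wavenumber below are arbitrary.

```python
# Matrix-free application of a scalar 2D Helmholtz stencil (5-point Laplacian
# plus k^2 mass term) to a grid function. Illustration of the principle only,
# not the elastic operator or the multigrid hierarchy of the paper.
import numpy as np

def apply_helmholtz(u, h, k):
    """Return (Laplacian + k^2 I) u on the interior, zero Dirichlet boundary."""
    out = np.zeros_like(u)
    out[1:-1, 1:-1] = (
        (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
         - 4.0 * u[1:-1, 1:-1]) / h**2
        + k**2 * u[1:-1, 1:-1]
    )
    return out

n, k = 129, 20.0
h = 1.0 / (n - 1)
u = np.random.default_rng(0).standard_normal((n, n))
r = apply_helmholtz(u, h, k)          # one operator application, O(n^2) work and memory
print(r.shape, float(np.abs(r).max()))
```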
