The fractional knapsack problem is one of the classical problems in combinatorial optimization, which is well understood in the offline setting. However, the corresponding online setting has been handled only briefly in the theoretical computer science literature so far, although it appears in several applications. Even the previously best known guarantee for the competitive ratio was worse than the best known for the integral problem in the popular random order model. We show that there is an algorithm for the online fractional knapsack problem that admits a competitive ratio of 4.39. Our result significantly improves over the previously best known competitive ratio of 9.37 and surpasses the current best 6.65-competitive algorithm for the integral case. Moreover, our algorithm is deterministic in contrast to the randomized algorithms achieving the results mentioned above.
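
For context, the offline version of this problem has a textbook greedy solution: take items in decreasing order of value-to-weight density and split only the last item fractionally. The following minimal Python sketch illustrates that offline baseline only; it is not the 4.39-competitive online algorithm of the paper, and the function name and toy instance are purely illustrative.

```python
def fractional_knapsack(items, capacity):
    """Offline greedy for the fractional knapsack problem.

    items: list of (value, weight) pairs with positive weights.
    Returns the optimal total value when items may be taken fractionally.
    """
    total = 0.0
    remaining = capacity
    # Process items by decreasing value-to-weight density.
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if remaining <= 0:
            break
        take = min(weight, remaining)          # whole item, or the fraction that still fits
        total += value * (take / weight)
        remaining -= take
    return total

# Toy instance: capacity 10, three items given as (value, weight).
print(fractional_knapsack([(60, 5), (50, 10), (14, 2)], capacity=10))  # 89.0
```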

Related content

We consider the problem of deterministically enumerating all minimum $k$-cut-sets in a given hypergraph for any fixed $k$. The input here is a hypergraph $G = (V, E)$ with non-negative hyperedge costs. A subset $F$ of hyperedges is a $k$-cut-set if the number of connected components in $G - F$ is at least $k$ and it is a minimum $k$-cut-set if it has the least cost among all $k$-cut-sets. For fixed $k$, we refer to the problem of finding a minimum $k$-cut-set as Hypergraph-$k$-Cut and to the problem of enumerating all minimum $k$-cut-sets as Enum-Hypergraph-$k$-Cut. The special cases of Hypergraph-$k$-Cut and Enum-Hypergraph-$k$-Cut restricted to graph inputs are well-known to be solvable in (randomized as well as deterministic) polynomial time. In contrast, it is only recently that polynomial-time algorithms for Hypergraph-$k$-Cut were developed. The randomized polynomial-time algorithm for Hypergraph-$k$-Cut that was designed in 2018 (Chandrasekaran, Xu, and Yu, SODA 2018) showed that the number of minimum $k$-cut-sets in a hypergraph is $O(n^{2k-2})$, where $n$ is the number of vertices in the input hypergraph, and that they can all be enumerated in randomized polynomial time, thus resolving Enum-Hypergraph-$k$-Cut in randomized polynomial time. A deterministic polynomial-time algorithm for Hypergraph-$k$-Cut was subsequently designed in 2020 (Chandrasekaran and Chekuri, FOCS 2020), but it is not guaranteed to enumerate all minimum $k$-cut-sets. In this work, we give the first deterministic polynomial-time algorithm to solve Enum-Hypergraph-$k$-Cut (this is non-trivial even for $k = 2$). Our algorithms are based on new structural results that allow for efficient recovery of all minimum $k$-cut-sets by solving minimum $(S,T)$-terminal cut problems. Our techniques give new structural insights even for enumerating all minimum cut-sets (i.e., minimum 2-cut-sets) in a given hypergraph.
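
To make the definitions concrete, the short sketch below (all names hypothetical) checks whether a set $F$ of hyperedges is a $k$-cut-set by counting the connected components of $G - F$ with a union-find structure over the surviving hyperedges. It only illustrates the definition and is not the enumeration algorithm of the paper.

```python
def components_after_removal(n, hyperedges, removed):
    """Number of connected components of the hypergraph on vertices {0, ..., n-1}
    after deleting the hyperedges whose indices appear in `removed`."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x

    for i, edge in enumerate(hyperedges):
        if i in removed or not edge:
            continue
        vertices = list(edge)
        root = find(vertices[0])
        for v in vertices[1:]:
            parent[find(v)] = root             # a hyperedge merges all vertices it touches
    return len({find(v) for v in range(n)})

def is_k_cut_set(n, hyperedges, removed, k):
    """F is a k-cut-set iff its removal leaves at least k connected components."""
    return components_after_removal(n, hyperedges, removed) >= k

# Toy hypergraph on 4 vertices: removing the large hyperedge isolates vertex 0.
edges = [{0, 1, 2}, {2, 3}, {1, 3}]
print(is_k_cut_set(4, edges, removed={0}, k=2))  # True
```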

In this work, we consider the optimization formulation for symmetric tensor decomposition recently introduced in the Subspace Power Method (SPM) of Kileel and Pereira. Unlike popular alternative functionals for tensor decomposition, the SPM objective function has the desirable properties that its maximal value is known in advance, and its global optima are exactly the rank-1 components of the tensor when the input is sufficiently low-rank. We analyze the non-convex optimization landscape associated with the SPM objective. Our analysis accounts for working with noisy tensors. We derive quantitative bounds such that any second-order critical point with SPM objective value exceeding the bound must equal a tensor component in the noiseless case, and must approximate a tensor component in the noisy case. For decomposing tensors of size $D^{\times m}$, we obtain a near-global guarantee up to rank $\widetilde{o}(D^{\lfloor m/2 \rfloor})$ under a random tensor model, and a global guarantee up to rank $\mathcal{O}(D)$ assuming deterministic frame conditions. This implies that SPM with suitable initialization is a provable, efficient, robust algorithm for low-rank symmetric tensor decomposition. We conclude with numerical experiments showing that, in practice, the SPM functional is preferable to a more established counterpart.
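
As an illustration of the objects involved (not of the SPM algorithm itself), the sketch below builds a low-rank symmetric tensor as a weighted sum of rank-1 components $a_i^{\otimes m}$, which are exactly the global optima the SPM objective is designed to recover; the helper names and toy sizes are assumptions made here.

```python
import numpy as np

def symmetric_rank1(a, m):
    """The rank-1 symmetric tensor obtained by taking the outer product of a with itself m times."""
    t = a
    for _ in range(m - 1):
        t = np.tensordot(t, a, axes=0)     # outer product, one mode at a time
    return t

def low_rank_symmetric_tensor(components, weights, m):
    """Weighted sum of rank-1 symmetric tensors: a tensor of rank at most len(components)."""
    return sum(w * symmetric_rank1(a, m) for w, a in zip(weights, components))

rng = np.random.default_rng(0)
D, m, R = 5, 3, 2
components = [rng.standard_normal(D) for _ in range(R)]
T = low_rank_symmetric_tensor(components, weights=[1.0, 0.5], m=m)
print(T.shape)  # (5, 5, 5)
```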

We consider the problem of approximating the arboricity of a graph $G = (V,E)$, which we denote by $\mathsf{arb}(G)$, in sublinear time, where the arboricity of a graph is the minimal number of forests required to cover its edges. An algorithm for this problem may perform degree and neighbor queries, and is allowed a small error probability. We design an algorithm that outputs an estimate $\hat{\alpha}$, such that with probability $1-1/\textrm{poly}(n)$, $\mathsf{arb}(G)/(c\log^2 n) \leq \hat{\alpha} \leq \mathsf{arb}(G)$, where $n=|V|$ and $c$ is a constant. The expected query complexity and running time of the algorithm are $O(n/\mathsf{arb}(G))\cdot \textrm{poly}(\log n)$, and this upper bound also holds with high probability. This bound is optimal for such an approximation up to a $\textrm{poly}(\log n)$ factor.
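
For reference, the quantity being approximated admits an exact (but exponential-time) characterization via the Nash-Williams formula, $\mathsf{arb}(G) = \max_{S \subseteq V,\, |S| \geq 2} \lceil e(S)/(|S|-1) \rceil$, where $e(S)$ counts edges with both endpoints in $S$. The brute-force sketch below only illustrates this definition; it is unrelated to the sublinear-time algorithm of the paper.

```python
from itertools import combinations
from math import ceil

def arboricity(n, edges):
    """Exact arboricity via the Nash-Williams formula.

    n: number of vertices (labeled 0..n-1); edges: list of (u, v) pairs.
    Enumerates all vertex subsets, so this is exponential in n.
    """
    best = 0
    for size in range(2, n + 1):
        for subset in combinations(range(n), size):
            s = set(subset)
            e = sum(1 for u, v in edges if u in s and v in s)
            best = max(best, ceil(e / (size - 1)))
    return best

# K4 has arboricity 2: it has 6 edges and any spanning forest uses at most 3 of them.
k4_edges = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(arboricity(4, k4_edges))  # 2
```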

Online allocation problems with resource constraints have a rich history in operations research. In this paper, we introduce the \emph{regularized online allocation problem}, a variant that includes a non-linear regularizer acting on the total resource consumption. In this problem, requests repeatedly arrive over time and, for each request, a decision maker needs to take an action that generates a reward and consumes resources. The objective is to simultaneously maximize additively separable rewards and the value of a non-separable regularizer subject to the resource constraints. Our primary motivation is allowing decision makers to trade off separable objectives such as the economic efficiency of an allocation with ancillary, non-separable objectives such as the fairness or equity of an allocation. We design an algorithm that is simple, fast, and attains good performance with both stochastic i.i.d. and adversarial inputs. In particular, our algorithm is asymptotically optimal under stochastic i.i.d. input models and attains a fixed competitive ratio that depends on the regularizer when the input is adversarial. Furthermore, the algorithm and analysis do not require convexity or concavity of the reward function and the consumption function, which allows more model flexibility. Numerical experiments confirm the effectiveness of the proposed algorithm and of regularization in an internet advertising application.
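
As a toy illustration of the problem structure only (not of the authors' algorithm), the sketch below evaluates the offline value of a candidate allocation: additively separable rewards plus a non-separable regularizer applied to the total resource consumption, with budget violations treated as infeasible. All names, the data, and the fairness-style regularizer are hypothetical.

```python
import numpy as np

def regularized_objective(rewards, consumptions, budgets, regularizer):
    """Separable rewards plus a regularizer of total consumption, or -inf if infeasible.

    rewards: length-T array of realized rewards
    consumptions: T x K array of per-request resource usage
    budgets: length-K budget vector
    regularizer: function mapping the total-consumption vector to a real number
    """
    total = consumptions.sum(axis=0)
    if np.any(total > budgets + 1e-9):
        return -np.inf                      # budget constraint violated
    return rewards.sum() + regularizer(total)

# Hypothetical fairness-style regularizer: penalize imbalanced resource consumption.
balance = lambda c: -5.0 * (c.max() - c.min())

rng = np.random.default_rng(1)
rewards = rng.random(10)
consumptions = rng.random((10, 3))
budgets = np.array([6.0, 6.0, 6.0])
print(regularized_objective(rewards, consumptions, budgets, balance))
```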

As we are entering the era of real-world small quantum computers, finding applications for these limited devices is a key challenge. In this vein, it was recently shown that a hybrid classical-quantum method can help provide polynomial speed-ups to classical divide-and-conquer algorithms, even when only given access to a quantum computer much smaller than the problem itself. In this work we study the hybrid divide-and-conquer method in the context of tree search algorithms, and extend it by including quantum backtracking, which allows better results than previous Grover-based methods. Further, we give general criteria for polynomial speed-ups in the tree search context, and provide a number of examples where polynomial speed-ups, using arbitrarily smaller quantum computers, can still be obtained. We study possible speed-ups for the well-known DPLL algorithm and prove threshold-free speed-ups for the tree search subroutines of the so-called PPSZ algorithm, which is the core of the fastest exact Boolean satisfiability solver, for certain classes of formulas. We also provide a simple example where speed-ups can be obtained in an algorithm-independent fashion, under certain well-studied complexity-theoretical assumptions. Finally, we briefly discuss the fundamental limitations of hybrid methods in providing speed-ups for larger problems.

We revisit the problem of computing an optimal partial cover of points by intervals. We show that the greedy algorithm computes a permutation $\Pi = \pi_1, \pi_2,\ldots$ of the intervals that is $3/4$-competitive for any prefix of $k$ intervals. That is, for any $k$, the union $\pi_1 \cup \cdots \cup \pi_k$ covers at least a $3/4$-fraction of the points covered by the optimal solution using $k$ intervals. We also provide an approximation algorithm that, in $O(n + m/\varepsilon)$ time, computes a cover by $(1+\varepsilon)k$ intervals that is as good as the optimal solution using $k$ intervals, where $n$ is the number of input points, and $m$ is the number of intervals (we assume here the input is presorted). Finally, we show a counterexample illustrating that the optimal solutions for set cover do not have the diminishing-returns property -- that is, the marginal benefit from using more sets is not monotonically decreasing. Fortunately, the diminishing-returns property does hold for intervals.
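
A minimal sketch of a natural greedy order for partial cover by intervals, assuming the greedy rule is the standard one that repeatedly picks an interval covering the most not-yet-covered points (the paper's guarantee is that every length-$k$ prefix of such an order is $3/4$-competitive against the best $k$ intervals). The quadratic-time implementation and names below are illustrative only.

```python
def greedy_interval_order(points, intervals):
    """Return a permutation of interval indices, chosen greedily by marginal coverage."""
    remaining = set(points)
    unused = list(range(len(intervals)))
    order = []
    while unused and remaining:
        # Pick the interval covering the most still-uncovered points.
        best = max(unused, key=lambda i: sum(1 for p in remaining
                                             if intervals[i][0] <= p <= intervals[i][1]))
        order.append(best)
        unused.remove(best)
        lo, hi = intervals[best]
        remaining -= {p for p in remaining if lo <= p <= hi}
    return order + unused                       # leftovers appended in arbitrary order

points = [1, 2, 3, 5, 6, 9]
intervals = [(1, 3), (5, 6), (2, 5), (9, 9)]
print(greedy_interval_order(points, intervals))  # [0, 1, 3, 2]
```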

Makespan minimization on identical machines is a fundamental problem in online scheduling. The goal is to assign a sequence of jobs to $m$ identical parallel machines so as to minimize the maximum completion time of any job. Already in the 1960s, Graham showed that Greedy is $(2-1/m)$-competitive. The best deterministic online algorithm currently known achieves a competitive ratio of 1.9201. No deterministic online strategy can obtain a competitiveness smaller than 1.88. In this paper, we study online makespan minimization in the popular random-order model, where the jobs of a given input arrive as a random permutation. It is known that Greedy does not attain a competitive factor asymptotically smaller than 2 in this setting. We present the first improved performance guarantees. Specifically, we develop a deterministic online algorithm that achieves a competitive ratio of 1.8478. The result relies on a new analysis approach. We identify a set of properties that a random permutation of the input jobs satisfies with high probability. Then we conduct a worst-case analysis of our algorithm for the respective class of permutations. The analysis implies that the stated competitiveness holds not only in expectation but with high probability. Moreover, it provides mathematical evidence that job sequences leading to higher performance ratios are extremely rare, pathological inputs. We complement these results with lower bounds for the random-order model. We show that no deterministic online algorithm can achieve a competitive ratio smaller than 4/3. Moreover, no deterministic online algorithm can attain a competitiveness smaller than 3/2 with high probability.
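
The Greedy strategy of Graham cited above is simply list scheduling: each arriving job goes on a currently least-loaded machine. A short sketch, together with a classical style of instance (many small jobs followed by one long job) on which its ratio approaches $2 - 1/m$:

```python
import heapq

def greedy_makespan(jobs, m):
    """Graham's list scheduling: place each job on a least-loaded machine; return the makespan."""
    loads = [(0.0, i) for i in range(m)]        # min-heap of (current load, machine id)
    heapq.heapify(loads)
    for p in jobs:
        load, i = heapq.heappop(loads)
        heapq.heappush(loads, (load + p, i))
    return max(load for load, _ in loads)

# For m = 3: six unit jobs followed by one job of length 3.
# Greedy achieves makespan 5, while the optimum is 3, giving ratio 5/3 = 2 - 1/3.
print(greedy_makespan([1] * 6 + [3], m=3))  # 5.0
```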

This paper studies Makespan Minimization in the secretary model. Formally, jobs, specified by their processing times, are presented in a uniformly random order. An online algorithm has to assign each job permanently and irrevocably to one of m parallel and identical machines such that the expected time it takes to process them all, the makespan, is minimized. We give two deterministic algorithms. First, a straightforward adaptation of the semi-online strategy LightLoad provides a very simple algorithm retaining its competitive ratio of 1.75. A new and sophisticated algorithm is 1.535-competitive. These competitive ratios are not only obtained in expectation but, in fact, for all but a very tiny fraction of job orders. Classically, online makespan minimization only considers the worst-case order; there, no competitive ratio below 1.885 is possible for deterministic algorithms, and none below 1.581 even using randomization. The best randomized algorithm so far is 1.916-competitive. Our results show that classical worst-case orders are quite rare and pessimistic for many applications. They also demonstrate the power of randomization when compared to much stronger deterministic reordering models. We complement our results by providing the first lower bounds. A competitive ratio obtained on nearly all possible job orders must be at least 1.257. This implies a lower bound of 1.043 for both deterministic and randomized algorithms in the general model.

The Quantum Approximate Optimization Algorithm (QAOA) finds approximate solutions to combinatorial optimization problems. Its performance monotonically improves with its depth $p$. We apply the QAOA to MaxCut on large-girth $D$-regular graphs. We give an iterative formula to evaluate performance for any $D$ at any depth $p$. Looking at random $D$-regular graphs, at optimal parameters and as $D$ goes to infinity, we find that the $p=11$ QAOA beats all classical algorithms (known to the authors) that are free of unproven conjectures. While the iterative formula for these $D$-regular graphs is derived by looking at a single tree subgraph, we prove that it also gives the ensemble-averaged performance of the QAOA on the Sherrington-Kirkpatrick (SK) model. Our iteration is a compact procedure, but its computational complexity grows as $O(p^2 4^p)$. This iteration is more efficient than the previous procedure for analyzing QAOA performance on the SK model, and we are able to numerically go to $p=20$. Encouraged by our findings, we make the optimistic conjecture that the QAOA, as $p$ goes to infinity, will achieve the Parisi value. We analyze the performance of the quantum algorithm, but one needs to run it on a quantum computer to produce a string with the guaranteed performance.

We propose a new method of estimation in topic models that is not a variation on the existing simplex-finding algorithms and that estimates the number of topics K from the observed data. We derive new finite sample minimax lower bounds for the estimation of A, as well as new upper bounds for our proposed estimator. We describe the scenarios where our estimator is minimax adaptive. Our finite sample analysis is valid for any number of documents (n), individual document length (N_i), dictionary size (p) and number of topics (K), and both p and K are allowed to increase with n, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than the current ones, even though it starts out with the computational and theoretical disadvantage of not knowing the correct number of topics K, while the competing methods are provided with the correct value in our simulations.
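
For concreteness, here is a hedged sketch of the standard topic-model data-generating process that such estimators target, where A is taken to be the p x K matrix whose columns are topic word-distributions (an illustrative modeling convention assumed here; the paper's estimator itself is not reproduced).

```python
import numpy as np

def simulate_corpus(A, doc_lengths, rng):
    """Sample word-count vectors from a topic model.

    For document j, topic proportions w_j are drawn from the simplex and
    N_j words are sampled from the mixture distribution A @ w_j.
    Returns an n x p matrix of word counts.
    """
    p, K = A.shape
    docs = []
    for N_j in doc_lengths:
        w = rng.dirichlet(np.ones(K))       # topic proportions for this document
        word_probs = A @ w                  # mixture of the K topic word-distributions
        docs.append(rng.multinomial(N_j, word_probs))
    return np.array(docs)

rng = np.random.default_rng(0)
p, K, n = 30, 3, 100
A = rng.dirichlet(np.ones(p), size=K).T     # p x K; each column sums to 1
X = simulate_corpus(A, doc_lengths=[200] * n, rng=rng)
print(X.shape)  # (100, 30)
```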
