
Given $(a_1, \dots, a_n, t) \in \mathbb{Z}_{\geq 0}^{n + 1}$, the Subset Sum problem ($\mathsf{SSUM}$) is to decide whether there exists $S \subseteq [n]$ such that $\sum_{i \in S} a_i = t$. A close variant of $\mathsf{SSUM}$ is the $\mathsf{Subset~Product}$ problem: given positive integers $a_1, \dots, a_n$ and a target integer $t$, decide whether there exists a subset $S \subseteq [n]$ such that $\prod_{i \in S} a_i = t$. A pseudopolynomial time dynamic programming algorithm, due to Bellman (1957), solves $\mathsf{SSUM}$ and $\mathsf{Subset~Product}$ in $O(nt)$ time and $O(t)$ space. In the first part, we present {\em search} algorithms for variants of the Subset Sum problem. Our algorithms are parameterized by $k$, a given upper bound on the number of realisable sets (i.e., the number of solutions summing exactly to $t$). We show that $\mathsf{SSUM}$ with a unique solution is already NP-hard under randomized reductions, which makes the regime of algorithms parameterized by $k$ very interesting. Subsequently, we present an $\tilde{O}(k\cdot (n+t))$ time deterministic algorithm that finds the Hamming weights of all realisable sets for a subset sum instance, and a poly$(knt)$-time, $O(\log(knt))$-space deterministic algorithm that finds all the realisable sets themselves. In the second part, we present a simple and elegant randomized $\tilde{O}(n + t)$ time algorithm for $\mathsf{Subset~Product}$, as well as a poly$(nt)$-time, $O(\log^2 (nt))$-space deterministic algorithm for the same problem. We also study these problems in the unbounded setting. Our algorithms use multivariate FFT, power series, and number-theoretic techniques introduced by Jin and Wu (SOSA'19) and Kane (2010).
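
For concreteness, here is a minimal Python sketch of Bellman's dynamic program for $\mathsf{SSUM}$, using a boolean table of size $t+1$; the instance at the end is a hypothetical example of ours.

```python
def subset_sum(a, t):
    """Bellman's pseudopolynomial DP for Subset Sum: O(n*t) time, O(t) space.

    reachable[s] is True iff some subset of the items seen so far sums to s.
    """
    reachable = [False] * (t + 1)
    reachable[0] = True  # the empty subset sums to 0
    for x in a:
        # iterate downwards so each item is used at most once
        for s in range(t, x - 1, -1):
            if reachable[s - x]:
                reachable[s] = True
    return reachable[t]

# hypothetical example instance
print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True: 4 + 5 = 9
```

Iterating the inner loop upwards instead of downwards allows each item to be reused, which handles the unbounded setting mentioned above.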

Related content

Heged\H{u}s's lemma is the following combinatorial statement about polynomials over finite fields. Over a field $\mathbb{F}$ of characteristic $p > 0$ and for $q$ a power of $p$, the lemma says that any multilinear polynomial $P\in \mathbb{F}[x_1,\ldots,x_n]$ of degree less than $q$ that vanishes at all points in $\{0,1\}^n$ of Hamming weight $k\in [q,n-q]$ must also vanish at all points in $\{0,1\}^n$ of weight $k + q$. This lemma was used by Heged\H{u}s (2009) to solve \emph{Galvin's problem}, an extremal problem about set systems; by Alon, Kumar and Volk (2018) to improve the best-known multilinear circuit lower bounds; and by Hrube\v{s}, Ramamoorthy, Rao and Yehudayoff (2019) to prove optimal lower bounds against depth-$2$ threshold circuits computing certain symmetric functions. In this paper, we formulate a robust version of Heged\H{u}s's lemma. Informally, this version says that if a polynomial of degree $o(q)$ vanishes at most points of weight $k$, then it vanishes at many points of weight $k+q$. We prove this lemma and give three different applications.
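
As a quick sanity check, here is a brute-force Python verification of the smallest nontrivial case $p = q = 2$, $n = 4$, $k = 2$; the parameter choices are ours, purely for illustration.

```python
from itertools import combinations, product

# Brute-force check of Hegedus's lemma in the smallest case p = q = 2,
# n = 4, k = 2: every polynomial over F_2 of degree < 2 (i.e., affine)
# vanishing on all weight-2 points of {0,1}^4 also vanishes on the
# (single) weight-4 point.
n, k, q = 4, 2, 2

def points_of_weight(w):
    return [tuple(1 if i in s else 0 for i in range(n))
            for s in combinations(range(n), w)]

def evaluate(coeffs, x):  # affine polynomial a_0 + a_1 x_1 + ... over F_2
    return (coeffs[0] + sum(a * xi for a, xi in zip(coeffs[1:], x))) % 2

for coeffs in product([0, 1], repeat=n + 1):
    if all(evaluate(coeffs, x) == 0 for x in points_of_weight(k)):
        assert all(evaluate(coeffs, x) == 0 for x in points_of_weight(k + q))
print("lemma holds for all 32 affine polynomials over F_2 in 4 variables")
```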

We consider random-design linear prediction and related questions on the lower tail of random matrices. It is known that, under boundedness constraints, the minimax risk is of order $d/n$ in dimension $d$ with $n$ samples. Here, we study the minimax expected excess risk over the full linear class, as a function of the distribution of covariates. First, we show that the least squares estimator is exactly minimax optimal in the well-specified case, for every distribution of covariates. We express the minimax risk in terms of the distribution of the statistical leverage scores of individual samples, and deduce a minimax lower bound of $d/(n-d+1)$ for any covariate distribution, nearly matching the risk for Gaussian design. We then obtain sharp nonasymptotic upper bounds for covariates that satisfy a "small ball"-type regularity condition, in both the well-specified and misspecified cases. Our main technical contribution is the study of the lower tail of the smallest singular value of empirical covariance matrices at small values. We establish a lower bound on this tail, valid for any distribution in dimension $d \geq 2$, together with a matching upper bound under a necessary regularity condition. Our proof relies on the PAC-Bayes technique for controlling empirical processes, and extends an analysis of Oliveira devoted to a different part of the lower tail.
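
A small Monte Carlo sketch (our own illustration, not taken from the paper) comparing the empirical excess risk of least squares under isotropic Gaussian design with the $d/(n-d+1)$ lower bound; the dimensions, noise level, and trial count are arbitrary choices.

```python
import numpy as np

# Monte Carlo illustration: excess risk of OLS under isotropic Gaussian
# design, compared with the d/(n-d+1) minimax lower bound quoted above.
rng = np.random.default_rng(0)
d, n, sigma, trials = 5, 50, 1.0, 2000
theta = np.zeros(d)  # true parameter; the excess risk does not depend on it

risks = []
for _ in range(trials):
    X = rng.standard_normal((n, d))
    y = X @ theta + sigma * rng.standard_normal(n)
    theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    # for isotropic covariates, excess risk = ||theta_hat - theta||^2
    risks.append(np.sum((theta_hat - theta) ** 2))

print("empirical excess risk:        ", np.mean(risks))
print("lower bound sigma^2*d/(n-d+1):", sigma**2 * d / (n - d + 1))
```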

The segment number of a planar graph $G$ is the smallest number of line segments needed for a planar straight-line drawing of $G$. Dujmovi\'c, Eppstein, Suderman, and Wood [CGTA'07] introduced this measure for the visual complexity of graphs. There are optimal algorithms for trees and worst-case optimal algorithms for outerplanar graphs, 2-trees, and planar 3-trees. It is known that every cubic triconnected planar $n$-vertex graph (except $K_4$) has segment number $n/2+3$, which is the only known universal lower bound for a meaningful class of planar graphs. We show that every triconnected planar 4-regular graph can be drawn using at most $n+3$ segments. This bound is tight up to an additive constant, improves a previous upper bound of $7n/4+2$ implied by a more general result of Dujmovi\'c et al., and supplements the result for cubic graphs. We also give a simple optimal algorithm for cactus graphs, generalizing the above-mentioned result for trees. We prove the first linear universal lower bounds for outerpaths, maximal outerplanar graphs, 2-trees, and planar 3-trees. This shows that the existing algorithms for these graph classes are constant-factor approximations. For maximal outerpaths, our bound is best possible and can be generalized to circular arcs.
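
As a side note on the tree case: to our recollection, the optimal algorithm for trees draws a tree with exactly $\eta/2$ segments, where $\eta$ is its number of odd-degree vertices; this formula is our paraphrase of the known result, not stated in the abstract. A minimal Python sketch of that count:

```python
def tree_segment_count(adj):
    """Segment number of a tree, assuming the characterization eta/2,
    where eta = number of odd-degree vertices (our recollection of the
    result for trees; hedged, not taken from the abstract).

    adj: dict mapping each vertex to the list of its neighbours."""
    eta = sum(1 for v in adj if len(adj[v]) % 2 == 1)
    return eta // 2

# hypothetical examples: a path on 4 vertices (1 segment),
# and a star with 3 leaves (2 segments)
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(tree_segment_count(path), tree_segment_count(star))  # 1 2
```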

Stochastic dual dynamic programming (SDDP) is a cutting plane type algorithm for multi-stage stochastic optimization that originated about 30 years ago. In spite of its popularity in practice, no analysis of its convergence rate has been available. In this paper, we first establish the number of iterations, i.e., the iteration complexity, required by a basic dynamic cutting plane method for solving relatively simple multi-stage optimization problems, by introducing novel mathematical tools including the saturation of search points. We then refine these basic tools and establish the iteration complexity of both deterministic and stochastic dual dynamic programming methods for solving more general multi-stage stochastic optimization problems under the standard stage-wise independence assumption. Our results indicate that the complexity of some deterministic variants of these methods increases only mildly with the number of stages $T$, in fact linearly in $T$ for discounted problems. They are therefore efficient for strategic decision making, which involves a large number of stages but a relatively small number of decision variables in each stage. Without explicitly discretizing the state and action spaces, these methods might also be pertinent to the related areas of reinforcement learning and stochastic control.
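
For intuition about the cutting plane machinery, here is a minimal single-stage sketch in Python (Kelley's method for a 1-D convex function). This is only the basic building block that SDDP applies to cost-to-go functions stage by stage, not SDDP itself, and the objective and grid-based model minimizer are our own toy choices.

```python
import numpy as np

# Minimal sketch of a cutting plane (Kelley) method for min f(x) on [lo, hi].
f = lambda x: (x - 1.3) ** 2          # example convex objective (our choice)
grad = lambda x: 2 * (x - 1.3)
lo, hi = -2.0, 3.0
grid = np.linspace(lo, hi, 10001)     # crude 1-D stand-in for an LP solver
cuts = []                             # list of (x_i, f(x_i), f'(x_i))

x = lo
for it in range(20):
    cuts.append((x, f(x), grad(x)))
    # lower model: pointwise max of the linear cuts f(x_i) + g_i (x - x_i)
    model = np.max([fx + g * (grid - xi) for xi, fx, g in cuts], axis=0)
    x = grid[np.argmin(model)]        # next search point = model minimizer
    if f(x) - model.min() < 1e-6:     # gap between upper and lower bounds
        break
print("approximate minimizer:", x)
```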

We consider the complexity of computing Chow forms and their generalization to multiprojective spaces. We develop a deterministic algorithm using resultants, and obtain a single exponential complexity upper bound. Earlier computational results for Chow forms were in the arithmetic complexity model; our result represents the first Boolean complexity bound. We also extend our algorithm to multiprojective Chow forms and obtain the first computational result in this setting. The motivation for our work comes from incidence geometry where intriguing problems for computational algebraists remain open.

We present a new generalization of the bin covering problem, which is known to be strongly NP-hard. In our generalization there is a positive constant $\Delta$, and we are given a set of items, each with a positive size. We would like to find a partition of the items into bins. We say that a bin is near exact covered if the total size of the items packed into it is between $1$ and $1+\Delta$. Our goal is to maximize the number of near exact covered bins. If $\Delta=0$, or if $\Delta>0$ is given as part of the input, we show that the problem admits no approximation algorithm with a bounded asymptotic approximation ratio (assuming that $P\neq NP$). However, when $\Delta>0$ is a fixed constant, we present an asymptotic fully polynomial time approximation scheme (AFPTAS), which is our main contribution.
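
To make the objective concrete, here is a toy greedy heuristic in Python; this is our own illustration, not the paper's AFPTAS, which is far more involved.

```python
def greedy_near_exact_cover(items, delta):
    """Toy greedy heuristic (not the paper's AFPTAS): take items in
    decreasing order of size and close a bin as soon as its total
    reaches 1. Returns the number of near exact covered bins, i.e.,
    bins whose total size lies in [1, 1 + delta]."""
    bins, current = [], 0.0
    for size in sorted(items, reverse=True):
        current += size
        if current >= 1.0:
            bins.append(current)
            current = 0.0
    return sum(1 for total in bins if total <= 1.0 + delta)

# hypothetical instance
print(greedy_near_exact_cover([0.6, 0.5, 0.45, 0.4, 0.3, 0.25], delta=0.1))
```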

In machine learning, correlation clustering is an important problem whose goal is to partition individuals into groups that agree as much as possible with their pairwise similarities. In this work, we revisit correlation clustering under differential privacy constraints. In particular, we improve previous results and achieve an $\tilde{O}(n^{1.5})$ additive error, compared to the optimal cost in expectation, on general graphs. For unweighted complete graphs, we improve the results further and propose a more involved algorithm that achieves $\tilde{O}(n \sqrt{\Delta^*})$ additive error, where $\Delta^*$ is the maximum positive degree, i.e., the maximum number of positive edges incident to any node.

Gossip algorithms and their accelerated versions have been studied exclusively in discrete time on graphs. In this work, we take a different approach and consider the scaling limit of gossip algorithms as both the graph size and the number of iterations grow large. These limits lead to well-known partial differential equations (PDEs) with insightful properties. On lattices, we prove that the non-accelerated gossip algorithm of Boyd et al. [2006] converges to the heat equation, and that the accelerated Jacobi polynomial iteration of Berthier et al. [2020] converges to the Euler-Poisson-Darboux (EPD) equation, a damped wave equation. Remarkably, with appropriate parameters, the fundamental solution of the EPD equation exhibits the ideal gossip behaviour: a uniform density over an ellipsoid whose radius increases at a rate proportional to $t$, the fastest possible rate for locally communicating gossip algorithms. This is in contrast with the heat equation, where the density spreads on a typical scale of $\sqrt{t}$. Additionally, we provide simulations demonstrating that the gossip algorithms are accurately approximated by their limiting PDEs.
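
As a small illustration of the non-accelerated regime, here is a numpy sketch of gossip averaging on a cycle, where the mass profile spreads on the $\sqrt{t}$ scale of the heat equation; the lattice size, averaging weights, and iteration counts are arbitrary choices of ours.

```python
import numpy as np

# Sketch (our own illustration): non-accelerated gossip averaging on a
# 1-D cycle. A unit mass placed at one node spreads over a typical scale
# of sqrt(t), the heat-equation behaviour described above.
n = 201
x = np.zeros(n)
x[n // 2] = 1.0                      # initial mass at the centre node

def gossip_step(x):
    # each node averages with its two lattice neighbours
    return (np.roll(x, 1) + x + np.roll(x, -1)) / 3.0

for t in [10, 100, 1000]:
    y = x.copy()
    for _ in range(t):
        y = gossip_step(y)
    positions = np.arange(n) - n // 2
    spread = np.sqrt(np.sum(positions**2 * y))  # std dev of the mass profile
    print(f"t={t:5d}  spread={spread:6.2f}  spread/sqrt(t)={spread/np.sqrt(t):.3f}")
```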

We introduce HighDist, the problem of determining whether the mode of the output distribution of a quantum circuit (given as a black-box) is larger than a given threshold, and a similar problem based on the absolute values of the amplitudes, named HighAmp. We design quantum algorithms for promise versions of these problems whose space complexities are logarithmic in the size of the domain of the distribution, while their query complexities are independent of it. Using these, we further design algorithms to estimate the largest probability and the largest amplitude in the output distribution of a quantum black-box. These results allow us to improve the query complexity of a few recently studied problems, namely $k$-distinctness and its gapped version, estimating the largest frequency in an array, estimating the min-entropy of a distribution, and estimating the non-linearity of a Boolean function, in the $\tilde{O}(1)$-qubits scenario. The time complexities of almost all of our algorithms have only a small overhead over their query complexities, making them efficiently implementable on currently available quantum backends.

We prove that for any $\varepsilon > 0$ it is $\textsf{NP}$-hard to approximate the non-commutative Grothendieck problem to within a factor $1/2 + \varepsilon$, which matches the approximation ratio of the algorithm of Naor, Regev, and Vidick (STOC'13). Our proof uses an embedding of $\ell_2$ into the space of matrices endowed with the trace norm, with the property that the images of the standard basis vectors are longer than those of unit vectors with no large coordinates. We also observe that one can obtain a tight $\textsf{NP}$-hardness result for the commutative Little Grothendieck problem; previously, this was only known based on the Unique Games Conjecture (Khot and Naor, Mathematika 2009).
