
Given a finite set $A \subseteq \mathbb{R}^2$ and a subset $B \subseteq A$, the \emph{MST-ratio} is the combined length of the minimum spanning trees of $B$ and $A \setminus B$ divided by the length of the minimum spanning tree of $A$. The supremum, over all sets $A$, of the maximum MST-ratio, over all subsets $B$, is related to the Steiner ratio, and we prove that this sup-max is between $2.154$ and $2.427$. Restricting ourselves to $2$-dimensional lattices, we prove that the sup-max is $2.0$, while the inf-max is $1.25$. By some margin, the most difficult of these results is the upper bound for the inf-max, which we prove by showing that the hexagonal lattice cannot have MST-ratio larger than $1.25$.
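
To fix the definition, here is a minimal computational sketch of the MST-ratio (assuming scipy; the helper names are ours, not from the paper):

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_length(points: np.ndarray) -> float:
    """Total edge length of a Euclidean minimum spanning tree."""
    if len(points) < 2:
        return 0.0
    dists = distance_matrix(points, points)
    return minimum_spanning_tree(dists).sum()

def mst_ratio(A: np.ndarray, mask: np.ndarray) -> float:
    """MST-ratio: (|MST(B)| + |MST(A \\ B)|) / |MST(A)| for B = A[mask]."""
    return (mst_length(A[mask]) + mst_length(A[~mask])) / mst_length(A)

# Example: the unit square, with two opposite corners as B.
A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
mask = np.array([True, False, False, True])
print(mst_ratio(A, mask))  # (sqrt(2) + sqrt(2)) / 3 ~ 0.943
```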


We give a new algorithm for learning mixtures of $k$ Gaussians (with identity covariance in $\mathbb{R}^n$) to TV error $\varepsilon$, with quasi-polynomial ($O(n^{\text{poly log}\left(\frac{n+k}{\varepsilon}\right)})$) time and sample complexity, under a minimum weight assumption. Unlike previous approaches, most of which are algebraic in nature, our approach is analytic and relies on the framework of diffusion models. Diffusion models are a modern paradigm for generative modeling that typically relies on learning the score function (gradient of the log-pdf) along a process transforming a pure noise distribution, in our case a Gaussian, to the data distribution. Despite their dazzling performance in tasks such as image generation, there are few end-to-end theoretical guarantees that they can efficiently learn nontrivial families of distributions; we give some of the first such guarantees. We proceed by deriving higher-order Gaussian noise sensitivity bounds for the score functions of a Gaussian mixture to show that they can be inductively learned using piecewise polynomial regression (up to poly-logarithmic degree), and we combine this with known convergence results for diffusion models. Our results extend to continuous mixtures of Gaussians where the mixing distribution is supported on a union of $k$ balls of constant radius. In particular, this applies to the case of Gaussian convolutions of distributions on low-dimensional manifolds, or more generally sets with small covering number.
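
The central analytic object is the score of a Gaussian mixture, which has a closed form; a minimal numerically stable sketch (assuming numpy/scipy; the function names are ours):

```python
import numpy as np
from scipy.special import softmax

def mixture_score(x, means, log_weights):
    """Exact score (gradient of the log-density) of a mixture of
    identity-covariance Gaussians sum_i w_i N(mu_i, I) at point x.

    Uses grad log p(x) = sum_i r_i(x) (mu_i - x), where r_i are the
    posterior responsibilities, computed stably via a softmax.
    """
    # Log of the unnormalized posteriors: log w_i - ||x - mu_i||^2 / 2.
    logits = log_weights - 0.5 * np.sum((x - means) ** 2, axis=1)
    r = softmax(logits)        # responsibilities r_i(x)
    return r @ (means - x)     # sum_i r_i(x) (mu_i - x)

# Two well-separated components in R^2 with equal weights.
means = np.array([[5.0, 0.0], [-5.0, 0.0]])
log_w = np.log(np.array([0.5, 0.5]))
print(mixture_score(np.array([4.0, 0.0]), means, log_w))  # ~ (1, 0): pulled toward the nearest mean
```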

A dominating set of a graph $G=(V,E)$ is a subset of vertices $S\subseteq V$ such that every vertex $v\in V\setminus S$ has at least one neighbor in $S$. The corresponding optimization problem is known to be NP-hard. The best known polynomial-time approximation algorithm for the problem proceeds in two stages: it first applies a fast greedy algorithm to obtain an initial dominating set, and then uses an iterative procedure to reduce (purify) this dominating set. The purification stage turned out to be practically efficient. Here we further strengthen the purification stage by presenting four new purification algorithms. All four purification procedures outperform the earlier purification procedure. The algorithms were tested on over 1300 benchmark problem instances. Compared to the known upper bounds, the obtained solutions were about 7 times better. Remarkably, for the 500 benchmark instances for which the optimum is known, the optimal solutions were obtained for 46.33\% of the tested instances, whereas the average error for the remaining instances was about 1.01.
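
A minimal sketch of the generic two-stage pipeline (a textbook greedy stage followed by a simple purification pass; this is ours and is not one of the paper's four new procedures):

```python
def greedy_dominating_set(adj: dict) -> set:
    """Greedy stage: repeatedly add the vertex that dominates the most
    not-yet-dominated vertices (a vertex dominates itself and its neighbors)."""
    undominated = set(adj)
    S = set()
    while undominated:
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        S.add(v)
        undominated -= {v} | adj[v]
    return S

def purify(adj: dict, S: set) -> set:
    """Purification stage: drop any vertex whose removal keeps S dominating."""
    S = set(S)
    for v in sorted(S, key=lambda u: len(adj[u])):  # try low-degree vertices first
        T = S - {v}
        if all(u in T or adj[u] & T for u in adj):
            S = T
    return S

# Example: a path on five vertices; the result is a minimum dominating set {'b', 'd'}.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c', 'e'}, 'e': {'d'}}
print(purify(adj, greedy_dominating_set(adj)))
```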

Exponential histograms, with bins of the form $\left\{ \left(\rho^{k-1},\rho^{k}\right]\right\} _{k\in\mathbb{Z}}$, for $\rho>1$, straightforwardly summarize the quantiles of streaming data sets (Masson et al. 2019). While they guarantee the relative accuracy of their estimates, they appear to use only $\log n$ values to summarize $n$ inputs. We study four aspects of exponential histograms -- size, accuracy, occupancy, and largest gap size -- when inputs are i.i.d. $\mathrm{Exp}\left(\lambda\right)$ or i.i.d. $\mathrm{Pareto}\left(\nu,\beta\right)$, taking $\mathrm{Exp}\left(\lambda\right)$ (or, $\mathrm{Pareto}\left(\nu,\beta\right)$) to represent all light- (or, heavy-) tailed distributions. We show that, in these settings, size grows like $\log n$ and takes on a Gumbel distribution as $n$ grows large. We bound the missing mass to the right of the histogram and the mass of its final bin and show that occupancy grows apace with size. Finally, we approximate the size of the largest number of consecutive, empty bins. Our study gives a deeper and broader view of this low-memory approach to quantile estimation.
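
A sketch of the data structure itself, as we read the construction above (bin indices $k=\lceil \log_\rho x\rceil$; the class and method names are ours):

```python
import math
import random
from collections import Counter

class ExpHistogram:
    """Exponential histogram with bins (rho^(k-1), rho^k], k in Z,
    for positive inputs; only nonempty bin counts are stored."""

    def __init__(self, rho: float):
        assert rho > 1.0
        self.rho = rho
        self.bins = Counter()
        self.n = 0

    def add(self, x: float) -> None:
        assert x > 0
        k = math.ceil(math.log(x) / math.log(self.rho))  # x in (rho^(k-1), rho^k]
        self.bins[k] += 1
        self.n += 1

    def quantile(self, q: float) -> float:
        """Return rho^k for the bin holding the q-quantile; the true
        quantile lies within a factor rho of the returned value."""
        rank = max(1, math.ceil(q * self.n))
        cum = 0
        for k in sorted(self.bins):
            cum += self.bins[k]
            if cum >= rank:
                return self.rho ** k

h = ExpHistogram(rho=1.1)
for _ in range(10_000):
    h.add(random.expovariate(1.0))       # i.i.d. Exp(1) stream
print(len(h.bins), h.quantile(0.5))      # a few dozen bins; median near ln 2
```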

An aperiodic binary sequence of length $\ell$ is a doubly infinite sequence $f=\ldots,f_{-1},f_0,f_1,\ldots$ with $f_j \in \{-1,1\}$ when $0 \leq j < \ell$ and $f_j=0$ otherwise. Various problems in engineering and natural science demand binary sequences that do not resemble translates of themselves. The autocorrelation of $f$ at shift $s$ is the dot product of $f$ with the sequence obtained by translating $f$ by $s$ places. The demerit factor of $f$ is the sum of the squares of the autocorrelations at all nonzero shifts for the sequence obtained by normalizing $f$ to unit Euclidean norm. Low demerit factor therefore indicates low self-similarity under translation. We endow the $2^\ell$ binary sequences of length $\ell$ with uniform probability measure and consider the distribution of their demerit factors. Earlier works used combinatorial techniques to find exact formulas for the mean, variance, skewness, and kurtosis of the distribution as a function of $\ell$. These revealed that for $\ell \geq 4$, the $p$th central moment of this distribution is strictly positive for every $p \geq 2$. This article shows that every $p$th central moment is a quasi-polynomial function of $\ell$ with rational coefficients divided by $\ell^{2 p}$. It also shows that, in the limit as $\ell$ tends to infinity, the $p$th standardized moment is the same as that of the standard normal distribution.
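
These definitions translate directly into code; a short sketch (ours) that reproduces the classical demerit factor $10/121$ of the length-$11$ Barker sequence:

```python
import numpy as np

def demerit_factor(f: np.ndarray) -> float:
    """Demerit factor of a +/-1 sequence f of length ell: the sum of squared
    aperiodic autocorrelations at all nonzero shifts, after normalizing f
    to unit Euclidean norm (i.e., sum_{s != 0} C(s)^2 / ell^2)."""
    ell = len(f)
    # np.correlate in 'full' mode gives C(s) for s = -(ell-1), ..., ell-1,
    # with the zero-shift value C(0) = ell at the middle index.
    c = np.correlate(f, f, mode="full").astype(float)
    c[ell - 1] = 0.0                      # drop the s = 0 term
    return float(np.sum(c ** 2)) / ell ** 2

# The length-11 Barker sequence is a classical low-demerit-factor example.
barker11 = np.array([1, 1, 1, -1, -1, -1, 1, -1, -1, 1, -1])
print(demerit_factor(barker11))           # 10/121 ~ 0.0826
```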

Let $S$ be a set of $n$ points in general position in $\mathbb{R}^d$. The order-$k$ Voronoi diagram of $S$, $V_k(S)$, is a subdivision of $\mathbb{R}^d$ into cells whose points have the same $k$ nearest points of $S$. Sibson, in his seminal paper from 1980 (A vector identity for the Dirichlet tessellation), gives a formula to express a point $Q$ of $S$ as a convex combination of other points of $S$ by using ratios of volumes of the intersection of cells of $V_2(S)$ and the cell of $Q$ in $V_1(S)$. The natural neighbour interpolation method is based on Sibson's formula. We generalize his result to express $Q$ as a convex combination of other points of $S$ by using ratios of volumes from Voronoi diagrams of any given order.
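
In the notation above, writing $\mathrm{cell}_1(Q)$ for the cell of $Q$ in $V_1(S)$ and $\mathrm{cell}_2(\{Q,P\})$ for the cell of $V_2(S)$ whose points have $Q$ and $P$ as their two nearest points of $S$, Sibson's formula as described here reads (our paraphrase, not Sibson's original notation):

\[
  Q \;=\; \sum_{P \in S \setminus \{Q\}}
  \frac{\operatorname{vol}\bigl(\mathrm{cell}_2(\{Q,P\}) \cap \mathrm{cell}_1(Q)\bigr)}
       {\operatorname{vol}\bigl(\mathrm{cell}_1(Q)\bigr)} \; P .
\]

The weights are nonnegative and sum to $1$ because the sets $\mathrm{cell}_2(\{Q,P\}) \cap \mathrm{cell}_1(Q)$ partition $\mathrm{cell}_1(Q)$, so the right-hand side is indeed a convex combination.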

Goemans and Rothvoss (SODA'14) gave a framework for solving, in time $enc(P)^{2^{O(N)}}enc(Q)^{O(1)}$, problems that can be described as finding a point in $\text{int.cone}(P\cap\mathbb{Z}^N)\cap Q$, where $P,Q\subset\mathbb{R}^N$ are (bounded) polyhedra. This framework can be used to solve various scheduling problems, but the encoding length $enc(P)$ usually involves large parameters like the makespan. We describe three tools to improve the framework by Goemans and Rothvoss: problem-specific preprocessing, LP relaxation techniques, and a new bound for the number of vertices of the integer hull. In particular, applied to the classical scheduling problem $P||C_{\max}$, these tools each improve the running time from $(\log(C_{\max}))^{2^{O(d)}} enc(I)^{O(1)}$ to the possibly much better $(\log(p_{\max}))^{2^{O(d)}}enc(I)^{O(1)}$. Here, $p_{\max}$ is the largest processing time, $d$ is the number of different processing times, $C_{\max}$ is the makespan, and $enc(I)$ is the encoding length of the instance. This running time is FPT w.r.t. parameter $d$ if $p_{\max}$ is given in unary. We obtain similar results for various other problems. Moreover, we show how a balancing result by Govzmann et al. can be used to speed up an additive approximation scheme by Buchem et al. (ICALP'21) in the high-multiplicity setting. On the complexity side, we use reductions from the literature to provide new parameterized lower bounds for $P||C_{\max}$ and to show that the improved running time of the additive approximation algorithm is probably optimal. Finally, we show that the big open question asked by Mnich and van Bevern (Comput. Oper. Res. '18), whether $P||C_{\max}$ is FPT w.r.t. the number of job types $d$, has the same answer as the question whether $Q||C_{\max}$ is FPT w.r.t. the number of job and machine types $d+\tau$ (all in high-multiplicity encoding). The same holds for objective $C_{\min}$.
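
To make the connection to $\text{int.cone}(P\cap\mathbb{Z}^N)\cap Q$ concrete for $P||C_{\max}$: with $d$ distinct processing times $p_1,\dots,p_d$, job multiplicity vector $n=(n_1,\dots,n_d)$, $m$ machines, and a candidate makespan $C_{\max}$, a machine configuration is an integer point of the knapsack polytope, and checking the makespan amounts to deciding whether $n$ is a sum of at most $m$ such configurations, one per machine (a sketch of the usual encoding, not necessarily the paper's exact formulation):

\[
  n \;\in\; \operatorname{int.cone}\bigl(P \cap \mathbb{Z}^d\bigr),
  \qquad
  P = \Bigl\{ x \in \mathbb{R}^d_{\geq 0} : \textstyle\sum_{i=1}^d p_i x_i \le C_{\max} \Bigr\}.
\]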

An additive code over $\mathbb{F}_{q^m}$ is an $\mathbb{F}_q$-linear subspace of $\mathbb{F}_{q^m}^n$ that is not a linear subspace over $\mathbb{F}_{q^m}$. Linear complementary pairs (LCPs) of codes play important roles in cryptography, such as increasing the speed and capacity of digital communication and strengthening security by improving the resistance of encryption schemes to cryptanalytic attacks. This paper studies the algebraic structure of additive complementary pairs (ACPs) of codes over $\mathbb{F}_{q^m}$. Further, we characterize an ACP of codes in terms of their generator matrices and parity-check matrices. Additionally, we identify a necessary condition for an ACP of codes. Besides, we present some constructions of an ACP of codes over $\mathbb{F}_{q^m}$ from LCPs of codes over $\mathbb{F}_{q^m}$ and also from an LCP of codes over $\mathbb{F}_q$. Finally, we study constacyclic ACPs of codes over $\mathbb{F}_{q^m}$ and count the constacyclic ACPs of codes. As an application of our study, we consider a class of quantum codes called entanglement-assisted quantum error-correcting codes (EAQEC codes). As a consequence, we derive some EAQEC codes.
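
Throughout, the complementary-pair condition is the usual one (our phrasing): additive codes $C, D \subseteq \mathbb{F}_{q^m}^n$ form an ACP of codes when

\[
  C + D = \mathbb{F}_{q^m}^{n}
  \quad\text{and}\quad
  C \cap D = \{0\},
\]

that is, $\mathbb{F}_{q^m}^{n} = C \oplus D$ as $\mathbb{F}_q$-linear spaces.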

We propose an exact algorithm for the Graph Burning Problem ($\texttt{GBP}$), an NP-hard optimization problem that models the spread of influence on social networks. Given a graph $G$ with vertex set $V$, the objective is to find a sequence of $k$ vertices in $V$, namely, $v_1, v_2, \dots, v_k$, such that $k$ is minimum and $\bigcup_{i = 1}^{k} \{u\! \in\! V\! : d(u, v_i) \leq k - i\} = V$, where $d(u,v)$ denotes the distance between $u$ and $v$. We formulate the problem as a set covering integer programming model and design a row generation algorithm for the $\texttt{GBP}$. Our method exploits the fact that a very small number of covering constraints is often sufficient for solving the integer model, allowing the corresponding rows to be generated on demand. To date, the most efficient exact algorithm for the $\texttt{GBP}$, denoted here by $\texttt{GDCA}$, is able to obtain optimal solutions for graphs with up to 14,000 vertices within two hours of execution. In comparison, our algorithm finds provably optimal solutions approximately 236 times faster, on average, than $\texttt{GDCA}$. For larger graphs, memory space becomes a limiting factor for $\texttt{GDCA}$. Our algorithm, however, solves real-world instances with almost 200,000 vertices in less than 35 seconds, increasing the size of graphs for which optimal solutions are known by a factor of 14.
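
For concreteness, verifying the covering condition above for a candidate sequence is straightforward; a small sketch (ours; the row-generation solver itself is not shown):

```python
from collections import deque

def is_burning_sequence(adj: dict, seq: list) -> bool:
    """Check the GBP condition: with seq = [v_1, ..., v_k], every vertex u
    must satisfy d(u, v_i) <= k - i for some i."""
    k = len(seq)
    burned = set()
    for i, v in enumerate(seq, start=1):
        # BFS ball of radius k - i around v_i.
        radius = k - i
        dist = {v: 0}
        queue = deque([v])
        while queue:
            u = queue.popleft()
            if dist[u] < radius:
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        queue.append(w)
        burned.update(dist)
    return burned == set(adj)

# Example: a path on four vertices burns in 2 rounds via (b, d).
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
print(is_burning_sequence(adj, ['b', 'd']))  # True: ball(b,1) + ball(d,0) = V
```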

We prove that Sherali-Adams with polynomially bounded coefficients requires proofs of size $n^{\Omega(d)}$ to rule out the existence of an $n^{\Theta(1)}$-clique in Erd\H{o}s-R\'{e}nyi random graphs whose maximum clique is of size $d\leq 2\log n$. This lower bound is tight up to the multiplicative constant in the exponent. We obtain this result by introducing a technique inspired by pseudo-calibration which may be of independent interest. The technique involves defining a measure on monomials that precisely captures the contribution of a monomial to a refutation. This measure intuitively captures progress and should have further applications in proof complexity.
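
For context, the clique formulation being refuted is the standard one (our rendering, with $\kappa$ the claimed clique size): Boolean variables $x_v$ select vertices, non-edges exclude pairs, and a counting axiom requests $\kappa$ selections:

\[
  x_v^2 = x_v \ \ (v \in V), \qquad
  x_u x_v = 0 \ \ (\{u,v\} \notin E), \qquad
  \sum_{v \in V} x_v \;\ge\; \kappa .
\]

A Sherali-Adams refutation certifies that this system is infeasible.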

Let $G=(V, E)$ be a graph and suppose each vertex of $G$ has a lamp and a button. Each button is of $\sigma^+$-type or $\sigma$-type. Assume that initially some lamps are on and others are off. If the button on vertex $x$ is of $\sigma^+$-type (respectively, $\sigma$-type), pressing it changes the lamp states on $x$ and on its neighbors in $G$ (respectively, on the neighbors of $x$ only). Assume that there is a set $X\subseteq V$ such that pressing the buttons on the vertices of $X$ lights all lamps of $G$. In particular, such a set is known to exist when initially all lamps are off and all buttons are of $\sigma^+$-type. Finding such a set $X$ of smallest size is NP-hard, even if initially all lamps are off and all buttons are of $\sigma^+$-type. Using a linear-algebraic approach, we design a polynomial-time approximation algorithm for the problem such that the set $X$ it constructs satisfies $|X|\le \min\{r,(|V|+{\rm opt})/2\},$ where $r$ is the rank of a (modified) adjacency matrix of $G$ and ${\rm opt}$ is the size of an optimal solution to the problem. To the best of our knowledge, this is the first polynomial-time approximation algorithm for the problem with a nontrivial approximation guarantee.
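
The linear-algebraic view can be made concrete: presses commute and double-presses cancel, so over $\mathbb{F}_2$ a valid press set is a $0/1$ solution of $Mx=b$, where column $x$ of $M$ indicates the lamps toggled by the button on $x$ and $b$ indicates the lamps that must change state. A minimal sketch (ours; the paper's algorithm additionally minimizes $|X|$ within the guarantee stated above):

```python
import numpy as np

def solve_gf2(M: np.ndarray, b: np.ndarray):
    """Solve M x = b over GF(2) by Gaussian elimination.
    Returns one 0/1 solution, or None if the system is infeasible."""
    A = np.concatenate([M % 2, (b % 2).reshape(-1, 1)], axis=1).astype(np.uint8)
    m, n = M.shape
    pivots = []
    row = 0
    for col in range(n):
        hits = np.nonzero(A[row:, col])[0]
        if len(hits) == 0:
            continue
        A[[row, row + hits[0]]] = A[[row + hits[0], row]]  # swap pivot up
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]          # eliminate col in all other rows
        pivots.append(col)
        row += 1
        if row == m:
            break
    if any(A[r, n] and not A[r, :n].any() for r in range(row, m)):
        return None                     # a row reads 0 = 1: infeasible
    x = np.zeros(n, dtype=np.uint8)
    for r, col in enumerate(pivots):
        x[col] = A[r, n]                # free variables stay 0
    return x

# Toggle matrix for a triangle with all sigma^+ buttons: column x flips
# x and its neighbors, i.e. M = adjacency + identity (mod 2).
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
M = (adj + np.eye(3, dtype=int)) % 2    # the all-ones matrix for a triangle
b = np.ones(3, dtype=int)               # all lamps off -> all must turn on
print(solve_gf2(M, b))                  # e.g. [1 0 0]: one press lights all
```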
