
In a seminal paper, Kannan and Lov\'asz (1988) considered a quantity $\mu_{KL}(\Lambda,K)$, the best volume-based lower bound on the covering radius $\mu(\Lambda,K)$ of a convex body $K$ with respect to a lattice $\Lambda$. Kannan and Lov\'asz proved that $\mu(\Lambda,K) \leq n \cdot \mu_{KL}(\Lambda,K)$, and the Subspace Flatness Conjecture of Dadush (2012) claims that an $O(\log n)$ factor suffices, which would match the lower bound from the work of Kannan and Lov\'asz. We settle this conjecture up to a constant in the exponent by proving that $\mu(\Lambda,K) \leq O(\log^{7}(n)) \cdot \mu_{KL} (\Lambda,K)$. Our proof is based on the Reverse Minkowski Theorem due to Regev and Stephens-Davidowitz (2017). Following the work of Dadush (2012, 2019), we obtain a $(\log n)^{O(n)}$-time randomized algorithm to solve integer programs in $n$ variables. Another implication of our main result is a near-optimal flatness constant of $O(n \log^{8}(n))$.
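
For orientation, here is one standard way of writing the two quantities (the notation follows the abstract; the precise normalization of $\mu_{KL}$ in the paper may differ by constants):

```latex
% Covering radius: the smallest scaling of K whose lattice translates cover space.
\[
  \mu(\Lambda,K) \;=\; \min\bigl\{ s \ge 0 \;:\; \Lambda + sK = \mathbb{R}^n \bigr\}.
\]
% Volume-based lower bound: if \Lambda + \mu K = R^n, then projecting onto a
% d-dimensional subspace W (along a direction for which \pi(\Lambda) remains a
% lattice) and comparing volumes gives \mu^d vol_d(\pi(K)) >= det(\pi(\Lambda)).
% Maximizing over such projections yields
\[
  \mu_{KL}(\Lambda,K) \;=\; \max_{\pi}
    \left( \frac{\det\bigl(\pi(\Lambda)\bigr)}{\mathrm{vol}_d\bigl(\pi(K)\bigr)} \right)^{1/d}
  \;\le\; \mu(\Lambda,K).
\]
```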

Related Content

We present quantitative logics with two-step semantics based on the framework of quantitative logics introduced by Arenas et al. (2020) and the two-step semantics defined in the context of weighted logics by Gastin & Monmege (2018). We show that some of the fragments of our logics augmented with a least fixed point operator capture interesting classes of counting problems. Specifically, we answer an open question in the area of descriptive complexity of counting problems by providing logical characterizations of two subclasses of #P, namely SpanL and TotP, that play a significant role in the study of approximable counting problems. Moreover, we define logics that capture FPSPACE and SpanPSPACE, which are counting versions of PSPACE.

Many variations of the classical graph coloring model have been intensively studied due to their numerous applications; scheduling problems and aircraft assignments, for instance, motivate the robust coloring problem. This model captures natural constraints of those optimization problems by combining the information provided by two colorings: a vertex coloring of a graph and the induced edge coloring on a subgraph of its complement. The goal is to minimize, among all proper colorings of the graph with a fixed number of colors, the number of edges in the subgraph whose endpoints receive the same color. Because the problem is NP-hard when at least three colors are used, the study of the robust coloring model has focused on heuristics, and little progress has been made in other directions. We present a new approach to the problem and obtain the first collection of non-heuristic results for general graphs; among them, we prove that robust coloring is the model that best approximates an equitable partition of the vertex set, even when the graph does not admit a so-called \emph{equitable coloring}. We also show the NP-completeness of its decision problem in the previously unsolved case of two colors, obtain bounds on the associated robust coloring parameter, and solve a conjecture on paths that illustrates the complexity of studying this coloring model.
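
To make the objective concrete, here is a minimal brute-force sketch (function names and the edge-list interface are ours, for illustration only): among all proper $k$-colorings of $G$, it minimizes the number of edges of $H$ (a subgraph of the complement of $G$) whose endpoints share a color.

```python
from itertools import product

def robust_coloring(n, G_edges, H_edges, k):
    """Exhaustive search over all k**n colorings of vertices 0..n-1;
    feasible only for tiny instances, but it pins down the model:
    minimize monochromatic H-edges over proper k-colorings of G."""
    best_cost, best_col = None, None
    for col in product(range(k), repeat=n):
        if any(col[u] == col[v] for u, v in G_edges):
            continue  # not a proper coloring of G
        cost = sum(col[u] == col[v] for u, v in H_edges)
        if best_cost is None or cost < best_cost:
            best_cost, best_col = cost, col
    return best_cost, best_col

# Path on 4 vertices with two colors; H holds two complement edges.
print(robust_coloring(4, [(0, 1), (1, 2), (2, 3)], [(0, 2), (1, 3)], 2))
```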

We consider problems of minimizing functionals $\mathcal{F}$ of probability measures on Euclidean space. To design an accelerated gradient descent algorithm for such problems, we consider the gradient flow of transport maps that give push-forward measures of an initial measure. We then propose a deterministic accelerated algorithm by extending Nesterov's momentum-based acceleration technique. This algorithm does not rely on the Wasserstein geometry. Furthermore, to estimate the convergence rate of the accelerated algorithm, we introduce new notions of convexity and smoothness for $\mathcal{F}$ based on transport maps. As a result, we show that the accelerated algorithm converges faster than the standard gradient descent algorithm. Numerical experiments support this theoretical result.
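
A minimal sketch of the acceleration scheme in an empirical-measure discretization (the particle representation, step size, and momentum schedule below are our assumptions; the paper works with transport maps of a general initial measure):

```python
import numpy as np

def accelerated_measure_descent(grad_first_variation, X0, step, iters):
    """Nesterov-style momentum applied to N particles in R^d that
    represent the measure; grad_first_variation(X) returns an (N, d)
    array of Euclidean gradients of F's first variation at each particle."""
    X = X0.copy()          # current particle positions
    Y = X0.copy()          # lookahead (extrapolated) positions
    for t in range(iters):
        X_next = Y - step * grad_first_variation(Y)  # gradient step at lookahead
        beta = t / (t + 3)                           # standard momentum schedule
        Y = X_next + beta * (X_next - X)             # momentum extrapolation
        X = X_next
    return X

# Example: F(mu) = E_mu[|x|^2 / 2], whose particle-wise gradient is x itself.
X = accelerated_measure_descent(lambda X: X, np.random.randn(100, 2), 0.1, 200)
```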

Discrepancy theory provides powerful tools for producing higher-quality objects which "beat the union bound" in fundamental settings throughout combinatorics and computer science. However, this quality has often come at the price of more expensive algorithms. We introduce a new framework for bridging this gap, by allowing for the efficient implementation of discrepancy-theoretic primitives. Our framework repeatedly solves regularized optimization problems to low accuracy to approximate the partial coloring method of [Rot17], and simplifies and generalizes recent work of [JSS23] on fast algorithms for Spencer's theorem. In particular, our framework only requires that the discrepancy body of interest has exponentially large Gaussian measure and is expressible as a sublevel set of a symmetric, convex function. We combine this framework with new tools for proving Gaussian measure lower bounds to give improved algorithms for a variety of sparsification and coloring problems. As a first application, we use our framework to obtain an $\widetilde{O}(m \cdot \epsilon^{-3.5})$ time algorithm for constructing an $\epsilon$-approximate spectral sparsifier of an $m$-edge graph, matching the sparsity of [BSS14] up to constant factors and improving upon the $\widetilde{O}(m \cdot \epsilon^{-6.5})$ runtime of [LeeS17]. We further give a state-of-the-art algorithm for constructing graph ultrasparsifiers and an almost-linear time algorithm for constructing linear-sized degree-preserving sparsifiers via discrepancy theory; in the latter case, such sparsifiers were not known to exist previously. We generalize these results to their analogs in sparsifying isotropic sums of positive semidefinite matrices. Finally, to demonstrate the versatility of our technique, we obtain a nearly-input-sparsity time constructive algorithm for Spencer's theorem (where we recover a recent result of [JSS23]).
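
As a toy illustration of the partial coloring primitive that the framework implements (this random-walk sketch is ours and omits the regularized-optimization solves and constraint-subspace projections that the actual algorithms rely on):

```python
import numpy as np

def toy_partial_coloring(n, steps=20000, gamma=0.01, seed=0):
    """Start from the fractional coloring x = 0, take small Gaussian
    steps inside [-1, 1]^n, and freeze any coordinate that reaches +/-1;
    a partial coloring is obtained once half the coordinates are frozen."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    alive = np.ones(n, dtype=bool)
    for _ in range(steps):
        g = gamma * rng.standard_normal(n)
        g[~alive] = 0.0                  # frozen coordinates stop moving
        x = np.clip(x + g, -1.0, 1.0)
        alive &= np.abs(x) < 1.0
        if alive.sum() <= n // 2:
            break
    return x  # at least half the entries are +/-1 (if steps sufficed)

print(np.sum(np.abs(toy_partial_coloring(64)) == 1.0))  # number of colored entries
```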

The online weighted matching problem is fundamental in machine learning due to its numerous applications. Despite many efforts in this area, existing algorithms are either too slow or do not take the $\mathrm{deadline}$ (the longest time a node can wait to be matched) into account. In this paper, we first introduce a market model with deadlines. Next, we present two optimized algorithms (\textsc{FastGreedy} and \textsc{FastPostponedGreedy}) and give theoretical proofs of their time complexity and correctness. In the \textsc{FastGreedy} algorithm, it is known in advance whether each node is a buyer or a seller, whereas in the \textsc{FastPostponedGreedy} algorithm the status of each node is initially unknown. We then apply a sketching matrix to run both the original algorithms and ours on real and synthetic data sets. Let $\epsilon \in (0,0.1)$ denote the relative error of the real weight of each edge. The competitive ratios of the original \textsc{Greedy} and \textsc{PostponedGreedy} are $\frac{1}{2}$ and $\frac{1}{4}$, respectively. Building on these two algorithms, we propose \textsc{FastGreedy} and \textsc{FastPostponedGreedy}, whose competitive ratios are $\frac{1 - \epsilon}{2}$ and $\frac{1 - \epsilon}{4}$, respectively. At the same time, our algorithms run faster than the original two. Given $n$ nodes in $\mathbb{R}^d$, we decrease the time complexity from $O(nd)$ to $\widetilde{O}(\epsilon^{-2} \cdot (n + d))$.
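
A hedged sketch of a deadline-aware online greedy rule (the event format and the `weight` oracle below are our assumptions, not the paper's exact interface): each arriving node is matched to the best currently-alive unmatched node with positive weight, and otherwise waits until its deadline passes.

```python
def greedy_with_deadlines(events, weight):
    """events: iterable of (time, node, deadline) in arrival order;
    weight(u, v): oracle returning the edge weight between nodes u, v.
    Returns the matched pairs with their weights."""
    alive = {}       # unmatched node -> time at which its deadline expires
    matching = []
    for time, node, deadline in events:
        alive = {u: exp for u, exp in alive.items() if exp > time}
        best, best_w = None, 0.0
        for u in alive:
            w = weight(u, node)
            if w > best_w:
                best, best_w = u, w
        if best is not None:
            matching.append((best, node, best_w))
            del alive[best]
        else:
            alive[node] = time + deadline  # wait for a future partner
    return matching

# Toy run with a constant-weight oracle: 'a' and 'b' match, 'c' waits.
print(greedy_with_deadlines([(0, 'a', 5), (1, 'b', 5), (2, 'c', 5)],
                            lambda u, v: 1.0))
```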

We propose an algorithm whose inputs are parameters $k$ and $r$ and a hypergraph $H$ of rank at most $r$. The algorithm either returns a tree decomposition of $H$ of generalized hypertree width at most $4k$, or 'NO'; in the latter case, it is guaranteed that the hypertree width of $H$ is greater than $k$. Most importantly, the runtime of the algorithm is \emph{FPT} in $k$ and $r$. The approach extends to fractional hypertree width with a slightly worse approximation ($4k+1$ instead of $4k$). We hope that the results of this paper will give rise to a new research direction aimed at the design of FPT algorithms for the computation and approximation of hypertree width parameters for restricted classes of hypergraphs.
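
For readers outside the area, a reminder of the quantity being approximated (standard definitions; the paper's formal setup may differ in details):

```latex
% A tree decomposition (T, (B_t)_{t \in T}) of a hypergraph H has generalized
% hypertree width at most k if every bag B_t is covered by at most k hyperedges:
\[
  \mathrm{ghw}(H) \;=\; \min_{(T,(B_t))} \; \max_{t \in T} \; \rho_H(B_t),
\]
% where \rho_H(B) is the minimum number of hyperedges of H whose union contains B.
% Hypertree width hw(H) adds a further "special condition" on the covers, and
% ghw(H) <= hw(H) always holds, which is why a 'NO' answer can certify hw(H) > k.
```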

The weighted $3$-Set Packing problem is defined as follows: as input, we are given a collection $\mathcal{S}$ of sets, each of cardinality at most $3$ and equipped with a positive weight. The task is to find a disjoint sub-collection of maximum total weight. Already the special case of unit weights is known to be NP-hard, and the state-of-the-art algorithms are $\left(\frac{4}{3}+\epsilon\right)$-approximations by Cygan and by F\"urer and Yu. In this paper, we study the $2$-$3$-Set Packing problem, a generalization of the unweighted $3$-Set Packing problem in which the collection may contain sets of cardinality $3$ and weight $2$ as well as sets of cardinality $2$ and weight $1$. Building upon the state-of-the-art works in the unit-weight setting, we provide a $\left(\frac{4}{3}+\epsilon\right)$-approximation for the more general $2$-$3$-Set Packing problem as well. We believe that this result can be a good starting point for identifying classes of weight functions to which the techniques used for unit weights can be generalized. Using a reduction by Fernandes and Lintzmayer, our result further implies a $\left(\frac{4}{3}+\epsilon\right)$-approximation for the Maximum Leaf Spanning Arborescence problem (MLSA) in rooted directed acyclic graphs, improving on the previously best known $\frac{7}{5}$-approximation by Fernandes and Lintzmayer. By exploiting additional structural properties of the instance constructed in their reduction, we further reduce the approximation guarantee for the MLSA to $\frac{4}{3}$. The MLSA has applications in broadcasting, where a message needs to be transferred from a source node to all other nodes along the arcs of an arborescence in a given network.
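
A tiny exact baseline makes the objective concrete (exhaustive search, ours for illustration only; the paper's algorithms are of course not of this form). In a $2$-$3$-Set Packing instance, triples carry weight $2$ and pairs weight $1$:

```python
from itertools import combinations

def max_weight_packing(sets, weights):
    """Exact maximum-weight disjoint sub-collection by exhaustive search
    over all index subsets; exponential time, tiny instances only."""
    best_w, best_sel = 0, ()
    for r in range(1, len(sets) + 1):
        for sel in combinations(range(len(sets)), r):
            chosen = [sets[i] for i in sel]
            if sum(len(s) for s in chosen) == len(set().union(*chosen)):
                w = sum(weights[i] for i in sel)  # sets in sel are disjoint
                if w > best_w:
                    best_w, best_sel = w, sel
    return best_w, best_sel

# 2-3-Set Packing: cardinality-3 sets have weight 2, pairs have weight 1.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 5}]
weights = [2 if len(s) == 3 else 1 for s in sets]
print(max_weight_packing(sets, weights))  # -> (4, (0, 2))
```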

We develop a numerical method for computing with orthogonal polynomials that are orthogonal on multiple, disjoint intervals for which analytical formulae are currently unknown. Our approach exploits the Fokas--Its--Kitaev Riemann--Hilbert representation of the orthogonal polynomials to produce an $\text{O}(N)$ method to compute the first $N$ recurrence coefficients. The method can also be used for pointwise evaluation of the polynomials and their Cauchy transforms throughout the complex plane. The method encodes the singularity behavior of weight functions using weighted Cauchy integrals of Chebyshev polynomials. This greatly improves the efficiency of the method, outperforming other available techniques. We demonstrate the fast convergence of our method and present applications to integrable systems and approximation theory.
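
For contrast, the classical discretized Stieltjes procedure below computes recurrence coefficients from a quadrature discretization of the measure; it is a standard baseline of the kind the paper compares against, not the Riemann--Hilbert method itself (the grid, weight, and crude Riemann-sum quadrature are our choices):

```python
import numpy as np

def stieltjes_coefficients(x, w, N):
    """First N recurrence coefficients a_k, b_k of the monic orthogonal
    polynomials p_{k+1} = (x - a_k) p_k - b_k p_{k-1} for the discrete
    inner product <f, g> = sum(w * f(x) * g(x))."""
    a, b = np.zeros(N), np.zeros(N)
    p_prev, p = np.zeros_like(x), np.ones_like(x)
    norm_prev = 1.0
    for k in range(N):
        norm = np.sum(w * p * p)
        a[k] = np.sum(w * x * p * p) / norm
        if k > 0:
            b[k] = norm / norm_prev
        p_prev, p = p, (x - a[k]) * p - b[k] * p_prev
        norm_prev = norm
    return a, b

# Weight 1 on two disjoint intervals [-2, -1] and [1, 2], Riemann-sum nodes.
x = np.concatenate([np.linspace(-2, -1, 2000), np.linspace(1, 2, 2000)])
w = np.full_like(x, 1.0 / 2000)
a, b = stieltjes_coefficients(x, w, 10)
print(a)  # ~0 by the symmetry of the measure
```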

Inspired by certain regularization techniques for linear inverse problems, in this work we investigate the convergence properties of the Levenberg-Marquardt method using singular scaling matrices. Under a completeness condition, we show that the method is well-defined and establish its local quadratic convergence under an error bound assumption. We also prove that the search directions are gradient-related, allowing us to show that limit points of the sequence generated by a line-search version of the method are stationary for the sum-of-squares function. The usefulness of the method is illustrated with examples of parameter identification in heat conduction problems, for which specific singular scaling matrices can be used to improve the quality of approximate solutions.
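
A minimal sketch of the step computation (variable names ours; the line-search version, the completeness condition, and the update rule for the damping parameter are not reproduced here):

```python
import numpy as np

def lm_step_singular_scaling(J, r, S, mu):
    """Levenberg-Marquardt direction with scaling matrix S:
    solve (J^T J + mu * S^T S) d = -J^T r. S may be singular; a
    completeness-type condition on J and S is what guarantees that
    the coefficient matrix is nonsingular in that case."""
    A = J.T @ J + mu * (S.T @ S)
    return np.linalg.solve(A, -(J.T @ r))

# Toy linear residual with a singular scaling matrix S: the coefficient
# matrix J^T J + mu S^T S is still nonsingular for this (J, S) pair.
J = np.array([[1.0, 0.0], [1.0, 1.0]])
r = np.array([-1.0, 0.0])
S = np.array([[1.0, 0.0], [0.0, 0.0]])
print(lm_step_singular_scaling(J, r, S, mu=1e-2))
```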

This paper is concerned with the multi-frequency factorization method for imaging the support of a wave-number-dependent source function. The source function is assumed to be given by the Fourier transform of some time-dependent source with an a priori given radiating period. Using the multi-frequency far-field data at a fixed observation direction, we provide a computational criterion for characterizing the smallest strip containing the support and perpendicular to the observation direction. The far-field data from sparse observation directions can be used to recover a $\Theta$-convex polygon of the support. The inversion algorithm is proven valid even with multi-frequency near-field data in three dimensions. The connections to time-dependent inverse source problems are discussed in the near-field case. We also comment on possible extensions to source functions with two disconnected supports. Numerical tests in both two and three dimensions are implemented to show the effectiveness and feasibility of the approach. The paper thus provides numerical analysis for a frequency-domain approach to recovering the support of an admissible class of time-dependent sources.
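
As a hedged orientation (the notation is ours), the standing assumption relates the wave-number-dependent source to a time-dependent one radiating over a known period:

```latex
% S(x, kappa): wave-number-dependent source; f(x, t): time-dependent
% source supported in the a priori known radiating period [0, T].
\[
  S(x,\kappa) \;=\; \int_{0}^{T} f(x,t)\, e^{i\kappa t}\, \mathrm{d}t ,
\]
% Multi-frequency far-field data are then collected over a band of wave
% numbers kappa at one fixed (or a few sparse) observation directions.
```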
