
Following the breakthrough work of Tardos in the bit-complexity model, Vavasis and Ye gave the first exact algorithm for linear programming in the real model of computation with running time depending only on the constraint matrix. For solving a linear program (LP) $\max\, c^\top x,\: Ax = b,\: x \geq 0,\: A \in \mathbb{R}^{m \times n}$, Vavasis and Ye developed a primal-dual interior point method using a 'layered least squares' (LLS) step, and showed that $O(n^{3.5} \log (\bar{\chi}_A+n))$ iterations suffice to solve (LP) exactly, where $\bar{\chi}_A$ is a condition measure controlling the size of solutions to linear systems related to $A$. Monteiro and Tsuchiya, noting that the central path is invariant under rescalings of the columns of $A$ and $c$, asked whether there exists an LP algorithm depending instead on the measure $\bar{\chi}^*_A$, defined as the minimum $\bar{\chi}_{AD}$ value achievable by a column rescaling $AD$ of $A$, and gave strong evidence that this should be the case. We resolve this open question affirmatively. Our first main contribution is an $O(m^2 n^2 + n^3)$ time algorithm which works on the linear matroid of $A$ to compute a nearly optimal diagonal rescaling $D$ satisfying $\bar{\chi}_{AD} \leq n(\bar{\chi}^*_A)^3$. This algorithm also allows us to approximate the value of $\bar{\chi}_A$ up to a factor $n (\bar{\chi}^*_A)^2$. As our second main contribution, we develop a scaling invariant LLS algorithm, together with a refined potential function based analysis for LLS algorithms in general. With this analysis, we derive an improved $O(n^{2.5} \log n\log (\bar{\chi}^*_A+n))$ iteration bound for optimally solving (LP) using our algorithm. The same argument also yields a factor $n/\log n$ improvement on the iteration complexity bound of the original Vavasis-Ye algorithm.
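For readers unfamiliar with the condition measure, one standard formulation (following Vavasis and Ye, and assuming $A$ has full row rank) is sketched below; $\mathcal{D}$ denotes the set of positive diagonal matrices, and the rescaled measure is the best value achievable by column rescaling.

```latex
% Standard definitions (assuming A has full row rank); \mathcal{D} is the
% set of n x n positive diagonal matrices.
\bar{\chi}_A = \sup_{D \in \mathcal{D}} \left\| A^\top \left( A D A^\top \right)^{-1} A D \right\|,
\qquad
\bar{\chi}^*_A = \inf_{D \in \mathcal{D}} \bar{\chi}_{AD} .
```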

Related content

We present a deterministic algorithm for solving a wide range of dynamic programming problems in trees in $O(\log D)$ rounds in the massively parallel computation model (MPC), with $O(n^\delta)$ words of local memory per machine, for any given constant $0 < \delta < 1$. Here $D$ is the diameter of the tree and $n$ is the number of nodes--we emphasize that our running time is independent of $n$. Our algorithm can solve many classical graph optimization problems such as maximum weight independent set, maximum weight matching, minimum weight dominating set, and minimum weight vertex cover. It can also be used to solve many accumulation tasks in which some aggregate information is propagated upwards or downwards in the tree--this includes, for example, computing the sum, minimum, or maximum of the input labels in each subtree, as well as many inference tasks commonly solved with belief propagation. Our algorithm can also solve any locally checkable labeling (LCL) problem in trees. Our algorithm works for any reasonable representation of the input tree; for example, the tree can be represented as a list of edges or as a string with nested parentheses or tags. The running time of $O(\log D)$ rounds is also known to be necessary, assuming the widely-believed $2$-cycle conjecture. Our algorithm strictly improves on two prior algorithms: (i) Bateni, Behnezhad, Derakhshan, Hajiaghayi, and Mirrokni [ICALP'18] solve problems of these flavors in $O(\log n)$ rounds, while our algorithm is much faster in low-diameter trees. Furthermore, their algorithm also uses randomness, while our algorithm is deterministic. (ii) Balliu, Latypov, Maus, Olivetti, and Uitto [SODA'23] solve only locally checkable labeling problems in $O(\log D)$ rounds, while our algorithm can be applied to a much broader family of problems.
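To make the class of dynamic programs concrete, here is a minimal sequential sketch (edge-list input is an assumption) of one problem the framework covers, maximum weight independent set on a tree; the MPC algorithm evaluates such DPs in $O(\log D)$ rounds instead of by this linear-time traversal.

```python
from collections import defaultdict, deque

def max_weight_independent_set(n, edges, weight, root=0):
    """Classic two-state tree DP: take[v] is the best weight in v's subtree
    with v included, skip[v] the best with v excluded."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent = {root: None}
    order = []
    queue = deque([root])
    while queue:                      # BFS to orient the tree away from the root
        u = queue.popleft()
        order.append(u)
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                queue.append(w)
    take = [0.0] * n
    skip = [0.0] * n
    for u in reversed(order):         # process children before parents
        take[u] = weight[u]
        for w in adj[u]:
            if w != parent[u]:
                take[u] += skip[w]
                skip[u] += max(take[w], skip[w])
    return max(take[root], skip[root])

# A path 0-1-2 with weights 2, 3, 2: the optimum picks nodes 0 and 2.
print(max_weight_independent_set(3, [(0, 1), (1, 2)], [2.0, 3.0, 2.0]))  # 4.0
```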

We consider \emph{Gibbs distributions}, which are families of probability distributions over a discrete space $\Omega$ with probability mass function of the form $\mu^\Omega_\beta(\omega) \propto e^{\beta H(\omega)}$ for $\beta$ in an interval $[\beta_{\min}, \beta_{\max}]$ and $H( \omega ) \in \{0 \} \cup [1, n]$. The \emph{partition function} is the normalization factor $Z(\beta)=\sum_{\omega \in\Omega}e^{\beta H(\omega)}$. Two important parameters of these distributions are the log partition ratio $q = \log \tfrac{Z(\beta_{\max})}{Z(\beta_{\min})}$ and the counts $c_x = |H^{-1}(x)|$. These are correlated with system parameters in a number of physical applications and sampling algorithms. Our first main result is to estimate the counts $c_x$ using roughly $\tilde O( \frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and $\tilde O( \frac{n^2}{\varepsilon^2} )$ samples for integer-valued distributions (ignoring some second-order terms and parameters), and we show this is optimal up to logarithmic factors. We illustrate with improved algorithms for counting connected subgraphs, independent sets, and perfect matchings. As a key subroutine, we also develop algorithms to compute the partition function $Z$ using $\tilde O(\frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and using $\tilde O(\frac{n^2}{\varepsilon^2})$ samples for integer-valued distributions.
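As context for the partition-function subroutine, below is a naive telescoping sketch built on the identity $Z(\beta')/Z(\beta) = \mathbb{E}_{\omega \sim \mu_\beta}\left[e^{(\beta'-\beta)H(\omega)}\right]$. The sampling oracle `sample_at` and the schedule `betas` are assumptions, and this baseline does not achieve the paper's sample complexity; the schedule must be fine enough for each ratio to have bounded variance.

```python
import math

def log_partition_ratio(sample_at, H, betas, samples_per_step=1000):
    """Telescoping estimate of q = log(Z(beta_max) / Z(beta_min)).
    sample_at(b) is an assumed oracle returning one sample from mu_b;
    H maps a sample to its energy value; betas increases from
    beta_min to beta_max."""
    log_ratio = 0.0
    for b, b_next in zip(betas, betas[1:]):
        # Monte Carlo estimate of Z(b_next) / Z(b) at the current level.
        vals = [math.exp((b_next - b) * H(sample_at(b)))
                for _ in range(samples_per_step)]
        log_ratio += math.log(sum(vals) / len(vals))
    return log_ratio
```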

We describe a practical algorithm for computing normal forms for semigroups and monoids with finite presentations satisfying so-called small overlap conditions. Small overlap conditions are natural conditions on the relations in a presentation, which were introduced by J. H. Remmers and subsequently studied extensively by M. Kambites. Presentations satisfying these conditions are ubiquitous; Kambites showed that a randomly chosen finite presentation satisfies the $C(4)$ condition with probability tending to 1 as the sum of the lengths of relation words tends to infinity. Kambites also showed that several key problems for finitely presented semigroups and monoids are tractable in $C(4)$ monoids: the word problem is solvable in $O(\min\{|u|, |v|\})$ time in the size of the input words $u$ and $v$; the uniform word problem for $\langle A|R\rangle$ is solvable in $O(N^2 \min\{|u|, |v|\})$ time, where $N$ is the sum of the lengths of the words in $R$; and a normal form for any given word $u$ can be found in $O(|u|)$ time. Although Kambites' algorithm for solving the word problem in $C(4)$ monoids is highly practical, it appears that the coefficients in the linear time algorithm for computing normal forms are too large in practice. In this paper, we present an algorithm for computing normal forms in $C(4)$ monoids that has time complexity $O(|u|^2)$ for input word $u$, but where the coefficients are sufficiently small to allow for practical computation. Additionally, we show that the uniform word problem for small overlap monoids can be solved in $O(N \min\{|u|, |v|\})$ time.
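For illustration of the underlying decision problem only, here is a brute-force bounded search for the word problem in an arbitrary finite presentation $\langle A|R\rangle$; it exploits none of the small overlap structure and is nothing like the linear- and quadratic-time $C(4)$ algorithms above.

```python
from collections import deque

def equal_words(u, v, relations, max_steps=100_000):
    """Bounded BFS over single relation applications (in both directions),
    checking whether v is reachable from u. Only a bounded semi-decision
    procedure: a False answer may just mean the budget ran out."""
    rules = [(l, r) for l, r in relations] + [(r, l) for l, r in relations]
    seen = {u}
    queue = deque([u])
    steps = 0
    while queue and steps < max_steps:
        w = queue.popleft()
        steps += 1
        if w == v:
            return True
        for lhs, rhs in rules:
            start = w.find(lhs)
            while start != -1:        # rewrite every occurrence of lhs
                w2 = w[:start] + rhs + w[start + len(lhs):]
                if w2 not in seen:
                    seen.add(w2)
                    queue.append(w2)
                start = w.find(lhs, start + 1)
    return v in seen

# In <a, b | ab = ba> (the free commutative monoid), "ab" equals "ba".
print(equal_words("ab", "ba", [("ab", "ba")]))  # True
```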

We propose an extension of input-output feedback linearization for a class of multivariate systems that are not input-output linearizable in the classical manner. The key observation is that the usual input-output linearization problem can be interpreted as the problem of solving simultaneous linear equations associated with the input gain matrix: thus, even at points where the input gain matrix becomes singular, it is still possible to solve a subset of the linear equations, by which a subset of the input-output relations is made linear or close to linear. Based on this observation, we adopt a task priority-based approach to the input-output linearization problem. First, we generalize the classical Byrnes-Isidori normal form to a prioritized normal form having a triangular structure, so that the singularity of a subblock of the input gain matrix related to lower-priority tasks does not directly propagate to higher-priority tasks. Next, we present a prioritized input-output linearization via multi-objective optimization with lexicographical ordering, resulting in a prioritized semilinear form that establishes input-output relations whose higher-priority subset is linear or close to linear. Finally, a Lyapunov analysis of ultimate boundedness and task achievement is provided, particularly when the proposed prioritized input-output linearization is applied to the output tracking problem. This work introduces a new control framework for complex systems with both critical and noncritical control objectives, assigning higher priority to the critical ones.
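As a rough sketch of the key observation, written for the relative-degree-one case (the Lie-derivative notation $L_f h$, $L_g h$ is standard but assumed, not taken from the abstract):

```latex
% Differentiating the output y = h(x) along \dot{x} = f(x) + g(x)u gives a
% linear system in u; classical linearization requires L_g h(x) invertible.
\dot{y} = L_f h(x) + L_g h(x)\, u
\quad \Longrightarrow \quad
L_g h(x)\, u = v - L_f h(x), \qquad \text{yielding } \dot{y} = v .
% When L_g h(x) is singular, the rows of this system can still be solved in
% priority order (lexicographic least squares), so only the lower-priority
% input-output relations deviate from exact linearity.
```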

Score-based generative modeling (SGM) is a highly successful approach for learning a probability distribution from data and generating further samples. We prove the first polynomial convergence guarantees for the core mechanic behind SGM: drawing samples from a probability density $p$ given a score estimate (an estimate of $\nabla \ln p$) that is accurate in $L^2(p)$. Compared to previous works, we do not incur error that grows exponentially in time or that suffers from a curse of dimensionality. Our guarantee works for any smooth distribution and depends polynomially on its log-Sobolev constant. Using our guarantee, we give a theoretical analysis of score-based generative modeling, which transforms white-noise input into samples from a learned data distribution given score estimates at different noise scales. Our analysis gives theoretical grounding to the observation that an annealed procedure is required in practice to generate good samples, as our proof depends essentially on using annealing to obtain a warm start at each step. Moreover, we show that a predictor-corrector algorithm gives better convergence than using either portion alone.
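Below is a minimal sketch of the predictor-corrector loop under a variance-exploding noise model; `score(x, sigma)` is an assumed oracle approximating $\nabla \ln p_\sigma(x)$, `sigmas` is a decreasing noise schedule, and the step-size rule is a simplification.

```python
import numpy as np

def pc_sampler(score, dim, sigmas, corrector_snr=0.15, corrector_steps=3, rng=None):
    """Predictor: Euler step of the reverse-time SDE between noise levels.
    Corrector: a few Langevin MCMC steps at the current noise level."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(size=dim) * sigmas[0]        # start from scaled white noise
    for i in range(len(sigmas) - 1):
        s, s_next = sigmas[i], sigmas[i + 1]
        g2 = s * s - s_next * s_next            # discretized diffusion term
        x = x + g2 * score(x, s) + np.sqrt(g2) * rng.normal(size=dim)
        for _ in range(corrector_steps):        # Langevin corrector
            eps = (corrector_snr * s_next) ** 2
            x = x + eps * score(x, s_next) + np.sqrt(2 * eps) * rng.normal(size=dim)
    return x
```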

Our work concerns algorithms for an unweighted variant of Maximum Flow. In the All-Pairs Connectivity (APC) problem, we are given a graph $G$ on $n$ vertices and $m$ edges, and are tasked with computing the maximum number of edge-disjoint paths from $s$ to $t$ (equivalently, the size of a minimum $(s,t)$-cut) in $G$, for all pairs of vertices $(s,t)$. Although over undirected graphs APC can be solved in essentially optimal $n^{2+o(1)}$ time, the true time complexity of APC over directed graphs remains open: this problem can be solved in $\tilde{O}(m^\omega)$ time, where $\omega \in [2, 2.373)$ is the exponent of matrix multiplication, but no matching conditional lower bound is known. We study a variant of APC called the $k$-Bounded All Pairs Connectivity ($k$-APC) problem. In this problem, we are given an integer $k$ and graph $G$, and are tasked with reporting the size of a minimum $(s,t)$-cut only for pairs $(s,t)$ of vertices with a minimum cut size less than $k$ (if the minimum $(s,t)$-cut has size at least $k$, we just report it is "large" instead of computing the exact value). We present an algorithm solving $k$-APC in directed graphs in $\tilde{O}((kn)^\omega)$ time. This runtime is $\tilde O(n^\omega)$ for all $k$ polylogarithmic in $n$, which is essentially optimal under popular conjectures from fine-grained complexity. Previously, this runtime was only known for $k\le 2$ [Georgiadis et al., ICALP 2017]. We also study a variant of $k$-APC, the $k$-Bounded All-Pairs Vertex Connectivity ($k$-APVC) problem, which considers internally vertex-disjoint paths instead of edge-disjoint paths. We present an algorithm solving $k$-APVC in directed graphs in $\tilde{O}(k^2n^\omega)$ time. Previous work solved an easier version of the $k$-APVC problem in $\tilde O((kn)^\omega)$ time [Abboud et al., ICALP 2019].
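To pin down the output format, here is the trivial per-pair baseline: unit-capacity augmenting-path max-flow capped at $k$ augmentations. It matches the $k$-APC semantics for a single pair but, run over all pairs, is far slower than the paper's algebraic algorithm.

```python
from collections import defaultdict, deque

def k_bounded_connectivity(edges, s, t, k):
    """Report the exact min (s,t)-cut size if it is below k, else "large".
    Standard Ford-Fulkerson on the unit-capacity digraph, stopped early."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        cap[(u, v)] += 1
        adj[u].add(v)
        adj[v].add(u)                 # residual arcs may carry flow back
    flow = 0
    while flow < k:
        parent = {s: None}            # BFS for an augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow               # exact value, since it is < k
        v = t
        while parent[v] is not None:  # augment along the path found
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
    return "large"                    # min (s,t)-cut has size >= k
```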

This article introduces new multiplicative updates for nonnegative matrix factorization with the $\beta$-divergence and sparse regularization of one of the two factors (say, the activation matrix). It is well known that the norm of the other factor (the dictionary matrix) needs to be controlled in order to avoid an ill-posed formulation. Standard practice consists in constraining the columns of the dictionary to have unit norm, which leads to a nontrivial optimization problem. Our approach leverages a reparametrization of the original problem into the optimization of an equivalent scale-invariant objective function. From there, we derive block-descent majorization-minimization algorithms that result in simple multiplicative updates for either $\ell_{1}$-regularization or the more "aggressive" log-regularization. In contrast with other state-of-the-art methods, our algorithms are universal in the sense that they can be applied to any $\beta$-divergence (i.e., any value of $\beta$) and that they come with convergence guarantees. We report numerical comparisons with existing heuristic and Lagrangian methods using various datasets: face images, an audio spectrogram, hyperspectral data, and song play counts. We show that our methods obtain solutions of similar quality at convergence (similar objective values) but with significantly reduced CPU times.
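For flavor, the classical heuristic baseline looks roughly as follows: multiplicative updates for $\beta$-divergence NMF with an $\ell_1$ penalty on the activations and post hoc normalization of the dictionary columns. The paper's reparametrized updates differ from this sketch and, unlike it, come with convergence guarantees.

```python
import numpy as np

def beta_nmf_l1_heuristic(V, rank, beta=1.0, lam=0.1, iters=200, rng=None):
    """Multiplicative updates for D_beta(V | WH) + lam * ||H||_1, with the
    common normalize-after-update heuristic on W's columns (the very step
    that makes this baseline lack convergence guarantees)."""
    rng = rng or np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    eps = 1e-12
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + lam)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
        W /= np.maximum(W.sum(axis=0, keepdims=True), eps)  # unit l1 columns
    return W, H
```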

We study the time complexity of the discrete $k$-center problem and related (exact) geometric set cover problems when $k$ or the size of the cover is small. We obtain a plethora of new results:

- We give the first subquadratic algorithm for rectilinear discrete 3-center in 2D, running in $\widetilde{O}(n^{3/2})$ time.
- We prove a lower bound of $\Omega(n^{4/3-\delta})$ for rectilinear discrete 3-center in 4D, for any constant $\delta>0$, under a standard hypothesis about triangle detection in sparse graphs.
- Given $n$ points and $n$ weighted axis-aligned unit squares in 2D, we give the first subquadratic algorithm for finding a minimum-weight cover of the points by 3 unit squares, running in $\widetilde{O}(n^{8/5})$ time. We also prove a lower bound of $\Omega(n^{3/2-\delta})$ for the same problem in 2D, under the well-known APSP Hypothesis. For arbitrary axis-aligned rectangles in 2D, our upper bound is $\widetilde{O}(n^{7/4})$.
- We prove a lower bound of $\Omega(n^{2-\delta})$ for Euclidean discrete 2-center in 13D, under the Hyperclique Hypothesis. This lower bound nearly matches the straightforward upper bound of $\widetilde{O}(n^\omega)$, if the matrix multiplication exponent $\omega$ is equal to 2.
- We similarly prove an $\Omega(n^{k-\delta})$ lower bound for Euclidean discrete $k$-center in $O(k)$ dimensions for any constant $k\ge 3$, under the Hyperclique Hypothesis. This lower bound again nearly matches known upper bounds if $\omega=2$.
- We also prove an $\Omega(n^{2-\delta})$ lower bound for the problem of finding 2 boxes to cover the largest number of points, given $n$ points and $n$ boxes in 12D. This matches the straightforward near-quadratic upper bound.
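For reference, here is the trivial exhaustive algorithm for discrete $k$-center under the $L_\infty$ (rectilinear) metric, the kind of brute force the bounds above are measured against:

```python
from itertools import combinations

def discrete_k_center(points, k):
    """Choose k centers from the input points minimizing the maximum
    L_inf distance from any point to its nearest center. Runs in
    roughly O(n^(k+1)) time, far above the bounds discussed above."""
    def dist(p, q):
        return max(abs(a - b) for a, b in zip(p, q))
    best = float("inf")
    for centers in combinations(points, k):
        radius = max(min(dist(p, c) for c in centers) for p in points)
        best = min(best, radius)
    return best

# Corners of the unit square with k = 2: optimal radius 1 under L_inf.
print(discrete_k_center([(0, 0), (0, 1), (1, 0), (1, 1)], 2))  # 1
```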

The paper introduces a novel non-parametric Riemannian regression method using Isometric Riemannian Manifolds (IRMs). The proposed technique, Intrinsic Local Polynomial Regression on IRMs (ILPR-IRMs), enables global data mapping between Riemannian manifolds while preserving the underlying geometries. The ILPR method is generalized to handle multivariate covariates on any Riemannian manifold and isometry. Specifically, for manifolds equipped with Euclidean Pullback Metrics (EPMs), a closed analytical formula is derived for the multivariate ILPR (ILPR-EPM). Asymptotic statistical properties of the ILPR-EPM for the multivariate local linear case are established, including a formula for the asymptotic bias, which establishes consistency of the estimator. The paper showcases possible applications of the method by focusing on a group of Riemannian metrics on the Symmetric Positive Definite (SPD) manifold, which arises in machine learning and neuroscience. It is demonstrated that several metrics on the SPD manifold are EPMs, resulting in a closed analytical expression for the multivariate ILPR estimator on the SPD manifold. The paper evaluates the ILPR estimator's performance under two specific EPMs, Log-Cholesky and Log-Euclidean, on simulated data on the SPD manifold and compares it with extrinsic LPR under the Affine-Invariant metric as the manifold and covariate dimensions scale. The results show that the ILPR using the Log-Cholesky metric is computationally faster and provides a better trade-off between error and time than the other metrics. Finally, the Log-Cholesky metric on the SPD manifold is employed to implement an efficient and intrinsic version of Rie-SNE for visualizing high-dimensional SPD data. The code for implementing ILPR-EPMs and other relevant calculations is available on the accompanying GitHub page.
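To make the two benchmarked metrics concrete, here is a small sketch of the Log-Euclidean and Log-Cholesky distances on SPD matrices, using the standard formulas (e.g., Lin's Log-Cholesky geometry); this is an illustration, not the paper's code.

```python
import numpy as np
from scipy.linalg import cholesky, logm

def log_euclidean_dist(X, Y):
    """d(X, Y) = ||logm(X) - logm(Y)||_F, a Euclidean pullback metric."""
    return np.linalg.norm(logm(X) - logm(Y), "fro")

def log_cholesky_dist(X, Y):
    """Compare Cholesky factors, applying log only to their diagonals;
    cheaper than log_euclidean_dist since it avoids matrix logarithms."""
    L, M = cholesky(X, lower=True), cholesky(Y, lower=True)
    strict = np.linalg.norm(np.tril(L, -1) - np.tril(M, -1), "fro") ** 2
    diag = np.linalg.norm(np.log(np.diag(L)) - np.log(np.diag(M))) ** 2
    return np.sqrt(strict + diag)

# Tiny check on two random SPD matrices.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)); X = A @ A.T + 4 * np.eye(4)
B = rng.normal(size=(4, 4)); Y = B @ B.T + 4 * np.eye(4)
print(log_euclidean_dist(X, Y), log_cholesky_dist(X, Y))
```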

Non-convex optimization is ubiquitous in modern machine learning. Researchers devise non-convex objective functions and optimize them using off-the-shelf optimizers such as stochastic gradient descent and its variants, which leverage the local geometry and update iteratively. Even though solving non-convex functions is NP-hard in the worst case, the optimization quality in practice is often not an issue -- optimizers are largely believed to find approximate global minima. Researchers hypothesize a unified explanation for this intriguing phenomenon: most of the local minima of the practically-used objectives are approximately global minima. We rigorously formalize this hypothesis for concrete instances of machine learning problems.
