
Given a bipartite graph $G(V= (A \cup B),E)$ with $n$ vertices and $m$ edges and a function $b \colon V \to \mathbb{Z}_+$, a $b$-matching is a subset of edges such that every vertex $v \in V$ is incident to at most $b(v)$ edges in the subset. When we are also given edge weights, the Max Weight $b$-Matching problem is to find a $b$-matching of maximum weight, a fundamental combinatorial optimization problem with many applications. Extending the recent work of Zheng and Henzinger (IPCO, 2023) on standard bipartite matching problems, we develop a simple auction algorithm to approximately solve Max Weight $b$-Matching. Specifically, we present a multiplicative auction algorithm that gives a $(1 - \varepsilon)$-approximation in $O(m \varepsilon^{-1} \log \varepsilon^{-1} \log \beta)$ worst-case time, where $\beta$ is the maximum $b$-value. Although this is a $\log \beta$ factor slower than the current best approximation algorithm by Huang and Pettie (Algorithmica, 2022), our algorithm is considerably simpler to present, analyze, and implement.
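
As a concrete illustration of the problem (not of the paper's auction algorithm), here is a minimal Python sketch of the standard greedy baseline for weighted $b$-matching, which is roughly a $1/2$-approximation; the $(w, u, v)$ edge encoding and the capacity map are assumed conventions.

```python
from collections import Counter

def greedy_b_matching(edges, b):
    """Greedy baseline for Max Weight b-Matching (illustrative only, NOT
    the paper's multiplicative auction algorithm): scan edges in order of
    decreasing weight and keep an edge whenever both endpoints still have
    residual capacity.  `edges` is a list of (weight, u, v) triples and
    `b` maps each vertex to its capacity b(v).
    """
    used = Counter()
    chosen = []
    for w, u, v in sorted(edges, key=lambda e: e[0], reverse=True):
        if used[u] < b[u] and used[v] < b[v]:
            chosen.append((w, u, v))
            used[u] += 1
            used[v] += 1
    return chosen
```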

Related content

Given functions $f$ and $g$ defined on the subset lattice of order $n$, their min-sum subset convolution, defined for all $S \subseteq [n]$ as \[ (f \star g)(S) = \min_{T \subseteq S}\:\big(f(T) + g(S \setminus T)\big), \] lies at the heart of several NP-hard optimization problems, such as minimum-cost $k$-coloring, the prize-collecting Steiner tree, and many others in computational biology. Despite its importance, its na\"ive $O(3^n)$-time evaluation remains the fastest known, the only alternative being an $\tilde O(2^n M)$-time algorithm for instances where the input functions have a bounded integer range $\{-M, \ldots, M\}$. We study for the first time the $(1 + \varepsilon)$-approximate min-sum subset convolution and present both a weakly- and a strongly-polynomial approximation algorithm, running in time $\tilde O(2^n \log M / \varepsilon)$ and $\tilde O(2^{3n/2} / \sqrt{\varepsilon})$, respectively. To demonstrate the applicability of our work, we present the first exponential-time $(1 + \varepsilon)$-approximation schemes for the above optimization problems. Our algorithms lie at the intersection of two lines of research that have so far been considered separately: $\textit{sequence}$ and $\textit{subset}$ convolutions in semi-rings. We also extend the recent framework of Bringmann, K\"unnemann, and W\k{e}grzycki [STOC 2019] to the context of subset convolutions.
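
For reference, the naive $O(3^n)$-time evaluation mentioned above is straightforward to implement. A minimal Python sketch follows, with functions stored as length-$2^n$ arrays indexed by subset bitmasks (an assumed encoding); the `T = (T - 1) & S` trick enumerates all submasks of $S$, for a total of $3^n$ iterations over all $S$.

```python
def min_sum_subset_convolution(f, g, n):
    """Naive evaluation of (f * g)(S) = min over T subset of S of
    f(T) + g(S \ T), for every S.  f and g are lists of length 2**n
    indexed by bitmask; returns the result in the same encoding.
    """
    h = [float("inf")] * (1 << n)
    for S in range(1 << n):
        T = S
        while True:  # enumerate every submask T of S, including 0 and S
            h[S] = min(h[S], f[T] + g[S ^ T])
            if T == 0:
                break
            T = (T - 1) & S
    return h
```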

An $\mathsf{F}_{d}$ upper bound for the reachability problem in vector addition systems with states (VASS) in fixed dimension is given, where $\mathsf{F}_d$ is the $d$-th level of the Grzegorczyk hierarchy of complexity classes. The new algorithm combines the idea of the linear path scheme characterization of reachability in $2$-dimensional VASSes with the general decomposition algorithm of Mayr, Kosaraju, and Lambert. The result improves the $\mathsf{F}_{d + 4}$ upper bound due to Leroux and Schmitz (LICS 2019).
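
To fix the model: a VASS configuration is a control state paired with a vector of nonnegative counters, and each transition adds a fixed integer vector to the counters. The Python sketch below (with an assumed transition encoding) checks that a proposed run is valid, which is exactly the kind of witness the reachability problem asks about.

```python
def vass_run_ok(start, transitions, run):
    """Check that `run` (a list of transition indices) is a valid VASS run
    from configuration `start` = (state, counters): every step must leave
    from the current control state and keep all counters nonnegative.
    Transitions are (state_from, state_to, delta_vector) triples.
    """
    state, counters = start
    for idx in run:
        src, dst, delta = transitions[idx]
        if src != state:
            return False
        counters = tuple(c + d for c, d in zip(counters, delta))
        if any(c < 0 for c in counters):  # counters may never go negative
            return False
        state = dst
    return True
```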

We give a simple, greedy $O(n^{\omega+0.5})=O(n^{2.872})$-time algorithm to list-decode planted cliques in a semirandom model introduced in [CSV17] (following [FK01]) that succeeds whenever the size of the planted clique is $k\geq O(\sqrt{n} \log^2 n)$. In the model, the edges touching the vertices in the planted $k$-clique are drawn independently with probability $p=1/2$, while the edges not touching the planted clique are chosen by an adversary in response to the random choices. Our result shows that the computational threshold in the semirandom setting is within an $O(\log^2 n)$ factor of the information-theoretic one [Ste17], thus resolving an open question of Steinhardt. This threshold also essentially matches the conjectured computational threshold for the well-studied special case of fully random planted clique. All previous algorithms [CSV17, MMT20, BKS23] in this model are based on rather sophisticated rounding schemes for entropy-constrained semidefinite programming relaxations and their sum-of-squares strengthenings, and the best known guarantee is an $n^{O(1/\epsilon)}$-time algorithm to list-decode planted cliques of size $k \geq \tilde{O}(n^{1/2+\epsilon})$. In particular, this guarantee trivializes to quasi-polynomial time if the planted clique is of size $O(\sqrt{n} \operatorname{polylog} n)$. Our algorithm achieves an almost optimal guarantee with a surprisingly simple greedy procedure. The prior state-of-the-art algorithmic result above is based on a reduction to certifying bounds on the size of unbalanced bicliques in random graphs -- closely related to certifying the restricted isometry property (RIP) of certain random matrices and known to be hard in the low-degree polynomial model. Our key idea is a new approach that relies on the truth of -- but not efficient certificates for -- RIP of a new class of matrices built from the input graphs.
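
The Python sketch below shows a generic greedy clique-extraction heuristic, included only to convey the flavor of greedy approaches to planted clique; it is not the paper's algorithm and carries none of its list-decoding guarantees.

```python
def greedy_clique_candidate(adj, v):
    """Generic greedy heuristic (illustrative only, NOT the paper's
    list-decoding algorithm): start from v's closed neighborhood and
    repeatedly discard a vertex with the fewest neighbors inside the
    current candidate set, until what remains is a clique.  `adj` maps
    each vertex to the set of its neighbors (no self-loops).
    """
    cand = set(adj[v]) | {v}
    while True:
        deg = {u: len(adj[u] & cand) for u in cand}
        worst = min(cand, key=deg.get)
        if deg[worst] >= len(cand) - 1:  # every vertex adjacent to all others
            return cand
        cand.remove(worst)
```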

Given a simple weighted directed graph $G = (V, E, \omega)$ on $n$ vertices as well as two designated terminals $s, t\in V$, our goal is to compute the shortest path from $s$ to $t$ avoiding any pair of presumably failed edges $f_1, f_2\in E$, a natural generalization of the classical replacement path problem, which considers single edge failures only. This dual failure replacement paths problem was recently studied by Vassilevska Williams, Woldeghebriel and Xu [FOCS 2022], who designed a cubic-time algorithm for general weighted digraphs which is conditionally optimal; in the same paper, for unweighted graphs where $\omega \equiv 1$, the authors presented an algebraic algorithm with runtime $\tilde{O}(n^{2.9146})$, as well as a conditional lower bound of $n^{8/3-o(1)}$ against combinatorial algorithms. However, their work left open whether fast matrix multiplication is necessary for a subcubic runtime in unweighted digraphs. As our primary result, we present the first truly subcubic combinatorial algorithm for dual failure replacement paths in unweighted digraphs, with runtime $\tilde{O}(n^{3-1/18})$. We also study algebraic algorithms for digraphs with small integer edge weights from $\{-M, -M+1, \ldots, M-1, M\}$. As our secondary result, we obtain a runtime of $\tilde{O}(Mn^{2.8716})$, which is faster than the previous bound of $\tilde{O}(M^{2/3}n^{2.9144} + Mn^{2.8716})$ from [Vassilevska Williams, Woldeghebriel and Xu, 2022].
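
For concreteness, the naive per-query baseline in the unweighted case is a single BFS that skips the two failed edges, as in the Python sketch below (the adjacency-list encoding is an assumption); the paper's contribution is answering all such queries far faster than rerunning this search for every pair of edges.

```python
from collections import deque

def replacement_dist(adj, s, t, f1, f2):
    """Shortest s-to-t distance in an unweighted digraph avoiding the two
    failed edges f1 and f2, by plain BFS (the naive baseline, not the
    paper's subcubic algorithm).  `adj` maps a vertex to its list of
    out-neighbors; edges are (u, v) pairs.
    """
    banned = {f1, f2}
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if (u, v) not in banned and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return float("inf")  # t unreachable once f1, f2 are removed
```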

An $(m,n,R)$-de Bruijn covering array (dBCA) is a doubly periodic $M \times N$ array over an alphabet of size $q$ such that the set of all its $m \times n$ windows forms a covering code with radius $R$. An upper bound on the smallest array area of an $(m,n,R)$-dBCA is provided using a probabilistic technique similar to the one used to upper-bound the length of a de Bruijn covering sequence. A folding technique to construct a dBCA from a de Bruijn covering sequence or a de Bruijn covering sequences code is presented. Several new constructions that yield shorter de Bruijn covering sequences and $(m,n,R)$-dBCAs with smaller areas are also provided. These constructions are mainly based on sequences derived from cyclic codes, self-dual sequences, primitive polynomials, an interleaving technique, folding, and mutual shifts of sequences with the same covering radius. Finally, constructions of de Bruijn covering sequences codes are also discussed.
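
The defining property is easy to state operationally: every $m \times n$ matrix over the alphabet must lie within Hamming distance $R$ of some window of the array. A brute-force Python check (feasible only for tiny parameters, and assuming row-major window flattening) is sketched below.

```python
import itertools

def is_dbca(A, m, n, q, R):
    """Brute-force check that the doubly periodic M x N array A (a list
    of lists over {0, ..., q-1}) is an (m, n, R)-dBCA: every m x n word
    is within Hamming distance R of some cyclic window of A.  Exponential
    in m*n, so only usable for very small parameters.
    """
    M, N = len(A), len(A[0])
    windows = [
        tuple(A[(i + r) % M][(j + c) % N] for r in range(m) for c in range(n))
        for i in range(M) for j in range(N)
    ]
    for word in itertools.product(range(q), repeat=m * n):
        if all(sum(a != b for a, b in zip(word, w)) > R for w in windows):
            return False  # this word is not covered by any window
    return True
```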

We give a non-adaptive algorithm that makes $2^{\tilde{O}(\sqrt{k\log(1/(\varepsilon_2 - \varepsilon_1))})}$ queries to a Boolean function $f:\{\pm 1\}^n \rightarrow \{\pm 1\}$ and distinguishes between $f$ being $\varepsilon_1$-close to some $k$-junta versus $\varepsilon_2$-far from every $k$-junta. At the heart of our algorithm is a local mean estimation procedure for Boolean functions that may be of independent interest. We complement our upper bound with a matching lower bound, improving a recent lower bound obtained by Chen et al. We thus obtain the first tight bounds for a natural property of Boolean functions in the tolerant testing model.
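
As a generic illustration of local mean estimation (not the paper's exact procedure or its guarantees), one can estimate the mean of $f$ over a subcube by pinning some coordinates and sampling the rest uniformly, as in this Python sketch.

```python
import random

def local_mean(f, fixed, n, samples=1000):
    """Monte Carlo estimate of the mean of f over the subcube of
    {-1, +1}^n where the coordinates in `fixed` (a dict index -> +/-1)
    are pinned and all other coordinates are uniform.  A generic sketch
    of 'local mean estimation'; accuracy scales as 1/sqrt(samples).
    """
    total = 0
    for _ in range(samples):
        x = [fixed.get(i, random.choice((-1, 1))) for i in range(n)]
        total += f(x)
    return total / samples
```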

We present a simple semi-streaming algorithm for $(1-\epsilon)$-approximation of bipartite matching in $O(\log{\!(n)}/\epsilon)$ passes. This matches the performance of state-of-the-art "$\epsilon$-efficient" algorithms -- the ones with much better dependence on $\epsilon$, albeit with some mild dependence on $n$ -- while being considerably simpler. The algorithm relies on a direct application of the multiplicative weight update method with a self-contained primal-dual analysis that can be of independent interest. To showcase this, we use the same ideas, alongside standard tools from matching theory, to present an equally simple semi-streaming algorithm for $(1-\epsilon)$-approximation of weighted matchings in general (not necessarily bipartite) graphs, again in $O(\log{\!(n)}/\epsilon)$ passes.
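
For readers unfamiliar with the tool, the textbook multiplicative weight update template (the general method the abstract invokes, not its matching-specific instantiation from the paper) looks as follows in Python.

```python
def multiplicative_weights(loss_rounds, n_experts, eta=0.1):
    """Generic multiplicative weight update template: maintain one weight
    per expert, play the normalized weights each round, then shrink each
    expert's weight by a factor (1 - eta * loss).  Losses are assumed to
    lie in [0, 1].  Returns the distribution played in each round.
    """
    w = [1.0] * n_experts
    history = []
    for losses in loss_rounds:  # one loss value per expert per round
        total = sum(w)
        history.append([wi / total for wi in w])
        w = [wi * (1 - eta * l) for wi, l in zip(w, losses)]
    return history
```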

We discuss two types of discrete inf-sup conditions for the Taylor-Hood family $Q_k$-$Q_{k-1}$ for all $k\in \mathbb{N}$ with $k\ge 2$ in 2D and 3D. While in 2D all results hold for a general class of quadrilateral meshes, the results in 3D are restricted to meshes of parallelepipeds. The analysis is based on an element-wise technique, as opposed to the widely used macroelement technique. This leads to inf-sup conditions on each element of the subdivision as well as to inf-sup conditions on the whole computational domain.
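
For reference, the discrete inf-sup (LBB) condition for a velocity space $V_h$ and pressure space $Q_h$ on a domain $\Omega$ reads \[ \inf_{0 \neq q_h \in Q_h}\ \sup_{0 \neq \mathbf{v}_h \in V_h} \frac{(q_h, \nabla \cdot \mathbf{v}_h)_{L^2(\Omega)}}{\|q_h\|_{L^2(\Omega)}\, \|\mathbf{v}_h\|_{H^1(\Omega)}} \ \ge\ \beta > 0, \] with a constant $\beta$ independent of the mesh size; the element-wise variants discussed here restrict the corresponding integrals to a single element of the subdivision.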

We consider minimizing a perturbed function $F(W) = \mathbb{E}_{U}[f(W + U)]$, given a function $f: \mathbb{R}^d \rightarrow \mathbb{R}$ and a random sample $U$ from a distribution $\mathcal{P}$ with mean zero. When $\mathcal{P}$ is the isotropic Gaussian, $F(W)$ is roughly equal to $f(W)$ plus a penalty on the trace of $\nabla^2 f(W)$, scaled by the variance of $\mathcal{P}$. This penalty on the Hessian has the benefit of improving generalization, through PAC-Bayes analysis. It is useful in low-sample regimes, for instance, when a (large) pre-trained model is fine-tuned on a small data set. One way to minimize $F$ is by adding $U$ to $W$ and then running SGD. We observe, empirically, that this noise injection does not provide significant gains over SGD in our fine-tuning experiments on three image classification data sets. We design a simple, practical algorithm that adds noise along both $U$ and $-U$, with the option of adding several perturbations and taking their average. We analyze the convergence of this algorithm, showing tight rates on the norm of the output's gradient. We provide a comprehensive empirical analysis of our algorithm, first showing that in an over-parameterized matrix sensing problem, it can find solutions with lower test loss than naive noise injection. Then, we compare our algorithm with four sharpness-reducing training methods (such as Sharpness-Aware Minimization (Foret et al., 2021)). We find that our algorithm can outperform them by up to 1.8% test accuracy, for fine-tuning ResNet on six image classification data sets. It leads to a 17.7% (and 12.8%) reduction in the trace (and largest eigenvalue) of the Hessian matrix of the loss surface. This form of regularization on the Hessian is compatible with $\ell_2$ weight decay (and data augmentation), in the sense that combining both can lead to improved empirical performance.
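
A minimal Python sketch of one step of the two-sided scheme described above follows; the function and parameter names are assumptions, and `grad` stands in for a stochastic gradient oracle. Each of the $k$ perturbations contributes the average of the gradients at $W+U$ and $W-U$.

```python
import random

def two_sided_noise_sgd_step(w, grad, lr=0.01, sigma=0.01, k=2):
    """One SGD step with two-sided noise injection (sketch): sample k
    Gaussian perturbations U, average the gradients taken at both w + U
    and w - U, then move against the averaged gradient.  `w` is a list
    of floats; `grad` maps a point to its gradient (same shape as w).
    """
    d = len(w)
    g = [0.0] * d
    for _ in range(k):
        u = [random.gauss(0.0, sigma) for _ in range(d)]
        g_plus = grad([wi + ui for wi, ui in zip(w, u)])
        g_minus = grad([wi - ui for wi, ui in zip(w, u)])
        # average over the 2k gradient evaluations
        g = [gi + (gp + gm) / (2 * k) for gi, gp, gm in zip(g, g_plus, g_minus)]
    return [wi - lr * gi for wi, gi in zip(w, g)]
```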

Given an increasing sequence of integers $x_1,\ldots,x_n$ from a universe $\{0,\ldots,u-1\}$, the monotone minimal perfect hash function (MMPHF) for this sequence is a data structure that answers the following rank queries: $rank(x) = i$ if $x = x_i$, for $i\in \{1,\ldots,n\}$, and $rank(x)$ is arbitrary otherwise. Assadi, Farach-Colton, and Kuszmaul recently presented at SODA'23 a proof of the lower bound $\Omega(n \min\{\log\log\log u, \log n\})$ on the bits of space required by an MMPHF, provided $u \ge n 2^{2^{\sqrt{\log\log n}}}$, which is tight since there is a data structure for MMPHF that attains this space bound (and answers queries in $O(\log u)$ time). In this paper, we close the remaining gap by proving that, for $u \ge (1+\epsilon)n$, where $\epsilon > 0$ is any constant, the tight lower bound is $\Omega(n \min\{\log\log\log \frac{u}{n}, \log n\})$, which is also attainable; we observe that, for all reasonable cases when $n < u < (1+\epsilon)n$, known facts imply tight bounds, which virtually settles the problem. Along the way, we substantially simplify the proof of Assadi et al., replacing part of their heavy combinatorial machinery with trivial observations. However, an important part of the proof still remains complicated. This part of our paper repeats the arguments of Assadi et al. and is not novel. Nevertheless, we include it for completeness, offering a somewhat different perspective on these arguments.
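
To make the query semantics concrete, here is a trivial, deliberately non-succinct Python illustration; a real MMPHF achieves the space bounds above precisely because it need not answer correctly on keys outside the sequence, whereas this sketch stores everything.

```python
import bisect

def make_rank(xs):
    """Return a rank-query function for the increasing sequence xs:
    rank(x) = i whenever x == xs[i-1] (1-indexed, as in the abstract);
    on any x not in the sequence the answer may be arbitrary -- here it
    is whatever binary search happens to produce.
    """
    def rank(x):
        return bisect.bisect_left(xs, x) + 1
    return rank
```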
