
We define a notion called the leftmost separator of size at most $k$. A leftmost separator of size at most $k$ is a minimal separator $S$ of two given vertex sets $X$ and $Y$ that "cannot be moved closer to $X$" while keeping $|S|$ below the threshold $k$. One motivation for this notion is that leftmost separators can be used to improve the time complexity of treewidth approximation. Treewidth approximation is known to admit an FPT algorithm that is linear in the input size and single-exponential in the parameter, the treewidth. It is not known whether this result can be improved theoretically. However, the coefficient of the parameter $k$ (the treewidth) in the exponent is large, so our goal is to decrease this coefficient and obtain a more practical algorithm. To this end, we trade the linear-time algorithm for an $\mathcal{O}(n \log n)$-time one. The previously known $\mathcal{O}(f(k)\, n \log n)$-time algorithms have parameter dependencies of $2^{24k}k!$, $2^{8.766k}k^2$ (a sharper analysis shows $2^{7.671k}k^2$), and higher. In this paper, we present an algorithm for treewidth approximation that runs in time $\mathcal{O}(2^{6.755k}\, n \log n)$. Furthermore, we count the leftmost separators and give a tight upper bound on their number: the number of leftmost separators of size $\leq k$ is at most the $(k-1)$-st Catalan number $C_{k-1}$. Finally, we present an algorithm that outputs all leftmost separators in time $\mathcal{O}(\frac{4^k}{\sqrt{k}}n)$.
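To make the "leftmost" idea concrete, below is a minimal Python sketch (assuming networkx; this is not the paper's $\mathcal{O}(\frac{4^k}{\sqrt{k}}n)$ enumeration algorithm). By Menger's theorem, a minimum $X$-$Y$ vertex separator can be computed via max flow on a vertex-split graph, and the min cut whose source side is the residual-reachable set is exactly the one that cannot be moved any closer to $X$.

```python
# Sketch: leftmost minimum X-Y vertex separator via max flow.
# Assumes X and Y are disjoint and non-adjacent (otherwise no
# separator exists and networkx raises NetworkXUnbounded).
import networkx as nx

def leftmost_min_separator(G, X, Y):
    D, INF = nx.DiGraph(), float("inf")
    for v in G.nodes:
        # unit capacity on internal arcs makes cuts count vertices;
        # terminals get infinite capacity so they are never cut
        cap = INF if v in X or v in Y else 1
        D.add_edge((v, "in"), (v, "out"), capacity=cap)
    for u, v in G.edges:
        D.add_edge((u, "out"), (v, "in"), capacity=INF)
        D.add_edge((v, "out"), (u, "in"), capacity=INF)
    for x in X:
        D.add_edge("SRC", (x, "in"), capacity=INF)
    for y in Y:
        D.add_edge((y, "out"), "SNK", capacity=INF)
    # minimum_cut's source side is the residual-reachable set,
    # i.e. the min cut pushed as far towards X as possible
    _, (src_side, _) = nx.minimum_cut(D, "SRC", "SNK")
    return {v for v in G.nodes
            if (v, "in") in src_side and (v, "out") not in src_side}
```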

Related content

We study the classical expander codes, introduced by Sipser and Spielman \cite{SS96}. Given any constants $0< \alpha, \varepsilon < 1/2$, and an arbitrary bipartite graph with $N$ vertices on the left, $M < N$ vertices on the right, and left degree $D$ such that any left subset $S$ of size at most $\alpha N$ has at least $(1-\varepsilon)|S|D$ neighbors, we show that the corresponding linear code given by parity checks on the right has distance at least roughly $\frac{\alpha N}{2 \varepsilon }$. This is strictly better than the best previously known result of $2(1-\varepsilon ) \alpha N$ \cite{Sudan2000note, Viderman13b} whenever $\varepsilon < 1/2$, and improves on it significantly when $\varepsilon$ is small. Furthermore, we show that this distance is tight in general, thus providing a complete characterization of the distance of general expander codes. Next, we provide several efficient decoding algorithms, which vastly improve previous results in terms of the fraction of errors corrected, whenever $\varepsilon < \frac{1}{4}$. Finally, we also give a bound on the list-decoding radius of general expander codes, which beats the classical Johnson bound in certain situations (e.g., when the graph is almost regular and the code has a high rate). Our techniques exploit novel combinatorial properties of bipartite expander graphs. In particular, we establish a new size-expansion tradeoff, which may be of independent interest.
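As a quick illustrative arithmetic check (not from the paper), the snippet below compares the new distance bound $\frac{\alpha N}{2\varepsilon}$ with the previous bound $2(1-\varepsilon)\alpha N$; their ratio is $\frac{1}{4\varepsilon(1-\varepsilon)}$, which exceeds $1$ for every $\varepsilon < 1/2$ and blows up as $\varepsilon \to 0$.

```python
# Compare the two distance lower bounds for expander codes.
def distance_bounds(alpha, eps, N):
    new = alpha * N / (2 * eps)        # this paper
    old = 2 * (1 - eps) * alpha * N    # previous best
    return new, old

print(distance_bounds(0.1, 0.05, 10**6))  # (1000000.0, 190000.0)
```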

We propose a new estimation method for the spatial blind source separation model. The new estimation is based on an eigenanalysis of a positive definite matrix defined in terms of multiple spatial local covariance matrices, and, therefore, can handle moderately high-dimensional random fields. The consistency of the estimated mixing matrix is established with explicit error rates even when the eigen-gap decays to zero slowly. The proposed method is illustrated via both simulation and a real data example.
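The following is a schematic numpy sketch of the eigenanalysis step; the whitening, the kernel/weight-matrix choices, and the exact way the local covariances are pooled are placeholders (summing $M M^\top$ is one standard positive semi-definite pooling), not the paper's specification.

```python
# Schematic spatial blind source separation via joint eigenanalysis.
import numpy as np

def local_cov(Z, W):
    """Symmetrized local covariance: Z is the (n, p) whitened field
    matrix, W an (n, n) spatial weight matrix for one kernel."""
    M = Z.T @ W @ Z / Z.shape[0]
    return (M + M.T) / 2

def sbss_unmixing(Z, weight_mats):
    # pool all local covariances into one positive semi-definite matrix
    P = sum(M @ M.T for M in (local_cov(Z, W) for W in weight_mats))
    _, V = np.linalg.eigh(P)   # eigenvectors give the unmixing directions
    return Z @ V               # columns = estimated latent fields
```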

There are many applications of max flow with capacities that depend on one or more parameters. Many of these applications fall into the "Source-Sink Monotone" framework, a special case of Topkis's monotonic optimization framework, which implies that the parametric min cuts are nested. When there is a single parameter, this property implies that the number of distinct min cuts is linear in the number of nodes, which is quite useful for constructing algorithms to identify all possible min cuts. When there are multiple Source-Sink Monotone parameters and the parameter vectors are ordered in the usual componentwise sense, the resulting min cuts are still nested. However, the number of distinct min cuts was an open question. We show that even with only two parameters, the number of distinct min cuts can be exponential in the number of nodes.
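The toy sketch below (hypothetical capacities, assuming networkx) illustrates the single-parameter case: capacities on source-adjacent arcs are nondecreasing in the parameter and those on sink-adjacent arcs nonincreasing, and the source sides of the resulting min cuts come out nested.

```python
# Source-Sink Monotone parametric min cut with one parameter.
import networkx as nx

def min_cut_source_side(lam):
    G = nx.DiGraph()
    G.add_edge("s", "a", capacity=1 + lam)   # nondecreasing in lam
    G.add_edge("s", "b", capacity=2 + lam)
    G.add_edge("a", "t", capacity=3 - lam)   # nonincreasing in lam
    G.add_edge("b", "t", capacity=2 - lam)
    G.add_edge("a", "b", capacity=1)
    _, (S, _) = nx.minimum_cut(G, "s", "t")  # source side of a min cut
    return S

sides = [min_cut_source_side(l) for l in (0.0, 0.5, 1.0, 1.5)]
assert all(p <= q for p, q in zip(sides, sides[1:]))  # nested chain
```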

In this work, we study the following problem, which we refer to as Low Rank column-wise Compressive Sensing (LRcCS): how to recover an $n \times q$ rank-$r$ matrix, $X^* = [x^*_1, x^*_2, \ldots, x^*_q]$, from $m$ independent linear projections of each of its $q$ columns, i.e., from $y_k := A_k x^*_k$, $k \in [q]$, where each $y_k$ is an $m$-length vector. The matrices $A_k$ are known and mutually independent for different $k$. The regime of interest is low-rank, i.e., $r \ll \min(n,q)$, and undersampled measurements, i.e., $m < n$. Even though many LR recovery problems have been extensively studied in the last decade, this particular problem has received little attention so far in terms of methods with provable guarantees. We introduce a novel gradient descent (GD) based solution called altGDmin. We show that, if all entries of all $A_k$s are i.i.d. Gaussian, and if the right singular vectors of $X^*$ satisfy the incoherence assumption, then $\epsilon$-accurate recovery of $X^*$ is possible with $mq > C (n+q) r^2 \log(1/\epsilon)$ total samples and $O( mq nr \log (1/\epsilon))$ time. To the best of our knowledge, this is the fastest existing solution and, for $\epsilon < 1/\sqrt{r}$, it also has the best sample complexity. Moreover, we show that a simple extension of our approach also solves LR Phase Retrieval (LRPR), the magnitude-only generalization of LRcCS: it involves recovering $X^*$ from the magnitudes of the entries of the $y_k$. We show that altGDmin-LRPR has matching sample complexity and better time complexity than the best existing solution for LRPR.
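Below is a condensed numpy sketch of the altGDmin idea; the spectral initialization, sample splitting, and step-size choice of the actual algorithm are simplified placeholders.

```python
# altGDmin sketch: alternate a "min" step (per-column least squares)
# with a gradient step on the shared column basis U, where X ~= U B.
import numpy as np

def altgdmin(As, ys, r, eta=0.4, iters=50):
    """As: list of (m, n) matrices A_k; ys: list of (m,) vectors y_k."""
    def min_step(U):
        return np.column_stack([np.linalg.lstsq(A @ U, y, rcond=None)[0]
                                for A, y in zip(As, ys)])
    q = len(ys)
    # crude initialization from back-projected measurements
    X0 = np.column_stack([A.T @ y / len(y) for A, y in zip(As, ys)])
    U = np.linalg.svd(X0, full_matrices=False)[0][:, :r]
    for _ in range(iters):
        B = min_step(U)
        grad = sum(np.outer(A.T @ (A @ U @ B[:, k] - y), B[:, k])
                   for k, (A, y) in enumerate(zip(As, ys)))
        U = np.linalg.qr(U - (eta / q) * grad)[0]  # re-orthonormalize
    return U @ min_step(U)   # the estimate of X*
```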

The problem of linear prediction has been extensively studied over the past century under fairly general frameworks. Recent advances in the robust statistics literature allow us to analyze robust versions of classical linear models through the prism of Median of Means (MoM). Combining these approaches in a piecemeal way might lead to ad-hoc procedures, and the restricted theoretical conclusions that underpin each individual contribution may no longer be valid. To meet these challenges coherently, in this study we offer a unified robust framework that includes a broad variety of linear prediction problems on a Hilbert space, coupled with a generic class of loss functions. Notably, we do not require any assumptions on the distribution of the outlying data points ($\mathcal{O}$) nor the compactness of the support of the inlying ones ($\mathcal{I}$). Under mild conditions on the dual norm, we show that for misspecification level $\epsilon$, these estimators achieve an error rate of $O(\max\left\{|\mathcal{O}|^{1/2}n^{-1/2}, |\mathcal{I}|^{1/2}n^{-1} \right\}+\epsilon)$, matching the best-known rates in the literature. This rate is slightly slower than the classical rate of $O(n^{-1/2})$, indicating that we need to pay a price in terms of error rates to obtain robust estimates. Additionally, we show that this rate can be improved to achieve so-called ``fast rates'' under additional assumptions.
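For intuition, here is a minimal numpy sketch of the Median-of-Means device at the core of such estimators: split the sample into blocks, average within each block, and take the median of the block means, so that a minority of outlying blocks cannot move the estimate far.

```python
import numpy as np

def median_of_means(x, n_blocks, seed=0):
    """MoM estimate of the mean of a 1-d sample x."""
    x = np.random.default_rng(seed).permutation(x)      # random blocks
    return np.median([b.mean() for b in np.array_split(x, n_blocks)])
```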

For a graph class ${\cal H}$, the graph parameters elimination distance to ${\cal H}$ (denoted by ${\bf ed}_{\cal H}$) [Bulian and Dawar, Algorithmica, 2016] and ${\cal H}$-treewidth (denoted by ${\bf tw}_{\cal H}$) [Eiben et al. JCSS, 2021] aim to minimize the treedepth and treewidth, respectively, of the "torso" of the graph induced on a modulator to the graph class ${\cal H}$. Here, the torso of a vertex set $S$ in a graph $G$ is the graph with vertex set $S$ and an edge between two vertices $u, v \in S$ if there is a path between $u$ and $v$ in $G$ whose internal vertices all lie outside $S$. In this paper, we show that from the perspective of (non-uniform) fixed-parameter tractability (FPT), the three parameters described above (the size of a smallest modulator, ${\bf ed}_{\cal H}$, and ${\bf tw}_{\cal H}$) give equally powerful parameterizations for every hereditary graph class ${\cal H}$ that satisfies mild additional conditions. In fact, we show that for every such class, with the exception of ${\bf tw}_{\cal H}$ parameterized by ${\bf ed}_{\cal H}$, for every pair of these parameters, computing one parameterized by itself or any of the others is FPT-equivalent to the standard vertex-deletion (to ${\cal H}$) problem. As an example, we prove that an FPT algorithm for the vertex-deletion problem implies a non-uniform FPT algorithm for computing ${\bf ed}_{\cal H}$ and ${\bf tw}_{\cal H}$. Since non-uniform FPT algorithms are somewhat unsatisfactory, we essentially prove that if ${\cal H}$ is hereditary, union-closed, and CMSO-definable, and (a) the canonical equivalence relation (or any refinement thereof) for membership in the class can be efficiently computed, or (b) the class admits a "strong irrelevant vertex rule", then there exists a uniform FPT algorithm for ${\bf ed}_{\cal H}$.
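Since both parameters are defined through the torso, here is a direct sketch of that construction (assuming networkx and an undirected graph), following the definition above: keep $G[S]$, then for every component $C$ of $G - S$ turn $N(C) \cap S$ into a clique, since any two such vertices are joined by a path through $C$.

```python
import networkx as nx
from itertools import combinations

def torso(G, S):
    T = G.subgraph(S).copy()
    for C in nx.connected_components(G.subgraph(set(G) - set(S))):
        boundary = {u for c in C for u in G[c] if u in S}
        T.add_edges_from(combinations(boundary, 2))
    return T
```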

We develop novel methods for using persistent homology to infer the homology of an unknown Riemannian manifold $(M, g)$ from a point cloud sampled from an arbitrary smooth probability density function. Standard distance-based filtered complexes, such as the \v{C}ech complex, often have trouble distinguishing noise from features that are simply small. We address this problem by defining a family of "density-scaled filtered complexes" that includes a density-scaled \v{C}ech complex and a density-scaled Vietoris--Rips complex. We show that the density-scaled \v{C}ech complex is homotopy-equivalent to $M$ for filtration values in an interval whose starting point converges to $0$ in probability as the number of points $N \to \infty$ and whose ending point approaches infinity as $N \to \infty$. By contrast, the standard \v{C}ech complex may only be homotopy-equivalent to $M$ for a very small range of filtration values. The density-scaled filtered complexes also have the property that they are invariant under conformal transformations, such as scaling. We implement a filtered complex $\widehat{DVR}$ that approximates the density-scaled Vietoris--Rips complex, and we empirically test the performance of our implementation. As examples, we use $\widehat{DVR}$ to identify clusters that have different densities, and we apply $\widehat{DVR}$ to a time-delay embedding of the Lorenz dynamical system. Our implementation is stable (under conditions that are almost surely satisfied) and designed to handle outliers in the point cloud that do not lie on $M$.
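One plausible discretization of the density-scaling idea (illustrative only, not the paper's exact $\widehat{DVR}$ construction) is to rescale pairwise distances by a $k$-nearest-neighbor density estimate before building a Vietoris--Rips filtration, mimicking the conformal rescaling of the metric.

```python
# Rescale pairwise distances by a crude kNN density estimate.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def density_scaled_distances(points, k=10):
    n, d = points.shape
    D = squareform(pdist(points))
    rk = np.sort(D, axis=1)[:, k]       # distance to k-th neighbor
    f = k / (n * rk**d)                 # density estimate, up to a constant
    scale = (f[:, None]**(1 / d) + f[None, :]**(1 / d)) / 2
    return D * scale   # feed into any Vietoris--Rips implementation
```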

This article fits into the area of research that investigates the application of topological duality methods to problems in theoretical computer science. One of the eventual goals of this approach is to derive results in computational complexity theory by studying appropriate topological objects that characterize them. The link relating these two seemingly separate fields is logic, more precisely a subdomain of finite model theory known as logic on words. It allows for a description of complexity classes as certain families of languages, possibly non-regular, over a finite alphabet. Very little is known about the duality theory relative to fragments of first-order logic on words that lie outside the scope of regular languages. The contribution of our work is a detailed study of such a fragment. Fixing an integer $k \geq 1$, we consider the Boolean algebra $\mathcal{B}\Sigma_1[\mathcal{N}^{u}_k]$. It corresponds to the fragment of logic on words consisting of Boolean combinations of sentences defined by using a block of at most $k$ existential quantifiers, letter predicates, and uniform numerical predicates of arity $l \in \{1, \ldots, k\}$. We give a detailed study of the dual space of this Boolean algebra, for any $k \geq 1$, and provide several characterizations of its points. In the particular case $k=1$, we construct a family of ultrafilter equations that characterize the Boolean algebra $\mathcal{B} \Sigma_1[\mathcal{N}^{u}_1]$, and we use topological methods to prove that these equations are sound and complete with respect to it.

One of the key steps in Neural Architecture Search (NAS) is to estimate the performance of candidate architectures. Existing methods either directly use the validation performance or learn a predictor to estimate the performance. However, these methods can be either computationally expensive or very inaccurate, which may severely affect the search efficiency and performance. Moreover, as it is very difficult to annotate architectures with accurate performance on specific tasks, learning a promising performance predictor is often non-trivial due to the lack of labeled data. In this paper, we argue that it may not be necessary to estimate the absolute performance for NAS. Instead, we may only need to know whether an architecture is better than a baseline one. However, how to exploit this comparison information as the reward and how to make good use of the limited labeled data remain two key challenges. In this paper, we propose a novel Contrastive Neural Architecture Search (CTNAS) method that performs architecture search by taking the comparison results between architectures as the reward. Specifically, we design and learn a Neural Architecture Comparator (NAC) to compute the probability of a candidate architecture being better than a baseline one. Moreover, we present a baseline updating scheme that improves the baseline iteratively in a curriculum learning manner. More critically, we theoretically show that learning NAC is equivalent to optimizing the ranking over architectures. Extensive experiments in three search spaces demonstrate the superiority of our CTNAS over existing methods.
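A bare-bones PyTorch sketch of such a comparator is given below; the MLP encoder over fixed-length architecture encodings is a stand-in for the paper's graph-based encoder, and all dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

class NAC(nn.Module):
    """P(architecture a1 is better than architecture a2)."""
    def __init__(self, enc_dim, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(enc_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, a1, a2):
        h = torch.cat([self.encoder(a1), self.encoder(a2)], dim=-1)
        return torch.sigmoid(self.head(h))

# one training step on dummy comparison pairs
nac = NAC(enc_dim=32)
opt = torch.optim.Adam(nac.parameters(), lr=1e-3)
a1, a2 = torch.randn(16, 32), torch.randn(16, 32)
label = torch.randint(0, 2, (16, 1)).float()   # 1 iff a1 wins
loss = nn.functional.binary_cross_entropy(nac(a1, a2), label)
opt.zero_grad(); loss.backward(); opt.step()
```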

In this paper, we study from a theoretical perspective how powerful graph neural networks (GNNs) can be for learning approximation algorithms for combinatorial problems. To this end, we first establish a new class of GNNs that can solve a strictly wider variety of problems than existing GNNs. Then, we bridge the gap between GNN theory and the theory of distributed local algorithms to show that the most powerful GNN can learn approximation algorithms for the minimum dominating set problem and the minimum vertex cover problem with certain approximation ratios, and that no GNN can perform better than these ratios. This paper is the first to elucidate approximation ratios of GNNs for combinatorial problems. Furthermore, we prove that adding a coloring or weak coloring to each node feature improves these approximation ratios. This indicates that preprocessing and feature engineering theoretically strengthen model capabilities.
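As a small illustration of the feature-engineering step this theory motivates (a sketch assuming networkx; greedy coloring is just one convenient choice of coloring), the snippet below appends a coloring as a one-hot node feature before running any GNN.

```python
import networkx as nx
import numpy as np

def add_coloring_features(G, X):
    """X: (n, d) node-feature matrix, rows ordered like sorted(G.nodes)."""
    coloring = nx.greedy_color(G)                  # node -> color id
    colors = [coloring[v] for v in sorted(G.nodes)]
    onehot = np.eye(max(colors) + 1)[colors]
    return np.hstack([X, onehot])
```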
