
The CSP dichotomy conjecture has been recently established, but a number of other dichotomy questions remain open, including the dichotomy classification of list homomorphism problems for signed graphs. Signed graphs arise naturally in many contexts, including for instance nowhere-zero flows for graphs embedded in non-orientable surfaces. For a fixed signed graph $\widehat{H}$, the list homomorphism problem asks whether an input signed graph $\widehat{G}$ with lists $L(v) \subseteq V(\widehat{H}), v \in V(\widehat{G}),$ admits a homomorphism $f$ to $\widehat{H}$ with all $f(v) \in L(v), v \in V(\widehat{G})$. Usually, a dichotomy classification is easier to obtain for list homomorphisms than for homomorphisms, but in the context of signed graphs a structural classification of the complexity of list homomorphism problems has not even been conjectured, even though the classification of the complexity of homomorphism problems is known. Kim and Siggers have conjectured a structural classification in the special case of "weakly balanced" signed graphs. We confirm their conjecture for reflexive and irreflexive signed graphs; this generalizes previous results on weakly balanced signed trees, and weakly balanced separable signed graphs. In the reflexive case, the result was first presented in a paper of Kim and Siggers, where the proof relies on a result in this paper. The irreflexive result is new, and its proof depends on first deriving a theorem on extensions of min orderings of (unsigned) bipartite graphs, which is interesting on its own.
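To make the problem statement concrete, here is a small brute-force sketch of the list homomorphism test for ordinary (unsigned) graphs; it deliberately ignores signs and switching, and the graph encodings and names below are illustrative rather than taken from the paper.

from itertools import product

def list_hom_exists(G_edges, H_edges, lists):
    """Check whether some map f with f(v) in lists[v] sends every
    edge of G to an edge of H (unsigned graphs, brute force)."""
    H = set(H_edges) | {(b, a) for (a, b) in H_edges}   # treat H as undirected
    vertices = list(lists)
    for choice in product(*(lists[v] for v in vertices)):
        f = dict(zip(vertices, choice))
        if all((f[u], f[v]) in H for (u, v) in G_edges):
            return True
    return False

# Example: map a path a-b-c into H = K_2 with restricted lists.
G_edges = [("a", "b"), ("b", "c")]
H_edges = [(0, 1)]
lists = {"a": [0], "b": [0, 1], "c": [0, 1]}
print(list_hom_exists(G_edges, H_edges, lists))  # True: a->0, b->1, c->0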

Related Content

The classical Cheeger inequality relates the edge conductance $\phi$ of a graph and the second smallest eigenvalue $\lambda_2$ of the Laplacian matrix. Recently, Olesker-Taylor and Zanetti discovered a Cheeger-type inequality $\psi^2 / \log |V| \lesssim \lambda_2^* \lesssim \psi$ connecting the vertex expansion $\psi$ of a graph $G=(V,E)$ and the maximum reweighted second smallest eigenvalue $\lambda_2^*$ of the Laplacian matrix. In this work, we first improve their result to $\psi^2 / \log d \lesssim \lambda_2^* \lesssim \psi$, where $d$ is the maximum degree in $G$, which is optimal assuming the small-set expansion conjecture. The improved result also holds for weighted vertex expansion, answering an open question by Olesker-Taylor and Zanetti. Building on this connection, we then develop a new spectral theory for vertex expansion. We discover that several interesting generalizations of Cheeger inequalities relating edge conductances and eigenvalues have close analogs relating vertex expansions and reweighted eigenvalues. These include an analog of Trevisan's result on bipartiteness, an analog of the higher-order Cheeger inequality, and an analog of the improved Cheeger inequality. Finally, inspired by this connection, we present negative evidence for the $0/1$-polytope edge expansion conjecture by Mihail and Vazirani. We construct $0/1$-polytopes whose graphs have very poor vertex expansion. This implies that the fastest mixing time to the uniform distribution on the vertices of these $0/1$-polytopes is almost linear in the graph size. This does not provide a counterexample to the conjecture, but it stands in contrast with known positive results, which proved poly-logarithmic mixing times to the uniform distribution on the vertices of subclasses of $0/1$-polytopes.
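As a point of reference, the following is a small brute-force sketch of unweighted vertex expansion, understood here as the minimum over sets $S$ with $|S| \leq |V|/2$ of $|N(S) \setminus S| / |S|$; the exact normalization used in the paper may differ, so treat this only as an illustration.

from itertools import combinations

def vertex_expansion(adj):
    """Brute-force vertex expansion of a graph given as {v: set(neighbors)}."""
    V = list(adj)
    best = float("inf")
    for k in range(1, len(V) // 2 + 1):
        for S in combinations(V, k):
            S = set(S)
            boundary = set().union(*(adj[v] for v in S)) - S
            best = min(best, len(boundary) / len(S))
    return best

# Example: a 6-cycle; the minimizer is a 3-vertex arc with expansion 2/3.
cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(vertex_expansion(cycle))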

The Strong Exponential Time Hypothesis (SETH) asserts that for every $\varepsilon>0$ there exists $k$ such that $k$-SAT requires time $(2-\varepsilon)^n$. The field of fine-grained complexity has leveraged SETH to prove quite tight conditional lower bounds for dozens of problems in various domains and complexity classes, including Edit Distance, Graph Diameter, Hitting Set, Independent Set, and Orthogonal Vectors. Yet, it has been repeatedly asked in the literature whether SETH-hardness results can be proven for other fundamental problems such as Hamiltonian Path, Independent Set, Chromatic Number, MAX-$k$-SAT, and Set Cover. In this paper, we show that fine-grained reductions implying even $\lambda^n$-hardness of these problems from SETH for any $\lambda>1$ would imply new circuit lower bounds: super-linear lower bounds for Boolean series-parallel circuits or polynomial lower bounds for arithmetic circuits (each of which is a four-decade open question). We also extend this barrier result to the class of parameterized problems. Namely, for every $\lambda>1$ we conditionally rule out fine-grained reductions implying SETH-based lower bounds of $\lambda^k$ for a number of problems parameterized by the solution size $k$. Our main technical tool is a new concept called polynomial formulations. In particular, we show that many problems can be represented by relatively succinct low-degree polynomials, and that any problem with such a representation cannot be proven SETH-hard (without proving new circuit lower bounds).

We study the rank of sub-matrices arising out of kernel functions, $F(\pmb{x},\pmb{y}): \mathbb{R}^d \times \mathbb{R}^d \mapsto \mathbb{R}$, where $\pmb{x},\pmb{y} \in \mathbb{R}^d$, that have a singularity along $\pmb{x}=\pmb{y}$. Such kernel functions are frequently encountered in a wide range of applications such as $N$-body problems, Green's functions, integral equations, geostatistics, kriging, Gaussian processes, etc. One of the challenges in dealing with these kernel functions is that the matrix associated with these kernels is large and dense, and the computational cost of matrix operations is therefore high. In this article, we prove new theorems bounding the numerical rank of sub-matrices arising out of these kernel functions. Under reasonably mild assumptions, we prove that certain sub-matrices are rank-deficient in finite precision. The numerical rank depends on the dimension of the ambient space and also on the type of interaction between the hyper-cubes containing the corresponding sets of particles. This rank structure can be leveraged to reduce the computational cost of certain matrix operations such as matrix-vector products, solving linear systems, etc. We also present numerical results on the growth of rank of certain sub-matrices in $1$D, $2$D, $3$D and $4$D, which, not surprisingly, agree with the theoretical results.
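The flavor of such rank bounds can be checked numerically: for two well-separated clusters of points, the off-diagonal kernel block has rapidly decaying singular values, so its numerical rank at a fixed tolerance is small. The kernel, point sets, and tolerance below are illustrative choices, not the ones analyzed in the article.

import numpy as np

def numerical_rank(A, tol=1e-10):
    """Number of singular values above tol times the largest singular value."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))   # points in one box
Y = rng.uniform(3.0, 4.0, size=(200, 2))   # points in a well-separated box
D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
K = 1.0 / D   # kernel 1/|x - y|; the singularity is never hit since the boxes are disjoint
print(K.shape, numerical_rank(K))           # a 200 x 200 block with small numerical rank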

The asymptotic study of the partition function $p(n)$ began with the work of Hardy and Ramanujan. Later, Rademacher obtained a convergent series for $p(n)$, and an error bound was given by Lehmer. Despite this, a full asymptotic expansion for $p(n)$ with an explicit error bound is not known. Recently, O'Sullivan studied the asymptotic expansion of $p^{k}(n)$ (partitions into $k$th powers), initiated by Wright, and consequently obtained an asymptotic expansion for $p(n)$ along with a concise description of the coefficients involved in the expansion, but without any estimate of the error term. Here we carry out a detailed and comprehensive analysis of the error term obtained by truncating the asymptotic expansion for $p(n)$ at any positive integer $n$. This gives rise to an infinite family of inequalities for $p(n)$, which finally answers a question posed by Chen. Our error-term estimation relies predominantly on applications of algorithmic methods from symbolic summation.
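For context, the leading-order asymptotic that these expansions refine is the Hardy--Ramanujan formula $p(n) \sim \frac{1}{4n\sqrt{3}}\, e^{\pi\sqrt{2n/3}}$ as $n \to \infty$; the full expansion adds lower-order correction terms, and the contribution here is an explicit bound on the error made by truncating it.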

This work considers Gaussian process interpolation with a periodized version of the Matérn covariance function (Stein, 1999, Section 6.7) with Fourier coefficients $\phi(\alpha^2 + j^2)^{-\nu - 1/2}$. Convergence rates are studied for the joint maximum likelihood estimation of $\nu$ and $\phi$ when the data is sampled according to the model. The mean integrated squared error is also analyzed with fixed and estimated parameters, showing that maximum likelihood estimation yields asymptotically the same error as if the ground truth were known. Finally, the case where the observed function is a "deterministic" element of a continuous Sobolev space is also considered, suggesting that bounding assumptions on some parameters can lead to different estimates.
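A minimal sketch of the periodized covariance implied by those Fourier coefficients, on the unit interval with a finite truncation of the series; the truncation level and parameter values below are arbitrary illustrations, not the paper's setup.

import numpy as np

def periodized_matern(t, phi=1.0, alpha=1.0, nu=1.5, J=500):
    # k(t) = sum_{j in Z} phi * (alpha^2 + j^2)^(-nu - 1/2) * exp(2*pi*i*j*t),
    # truncated at |j| <= J; the coefficients are even in j, so the sum is real.
    t = np.asarray(t, dtype=float)
    j = np.arange(1, J + 1)
    coef = phi * (alpha**2 + j**2) ** (-nu - 0.5)
    k0 = phi * alpha ** (-2.0 * nu - 1.0)          # the j = 0 term
    return k0 + 2.0 * np.sum(coef * np.cos(2.0 * np.pi * j * t[..., None]), axis=-1)

lags = np.linspace(0.0, 1.0, 5)
print(periodized_matern(lags))   # periodic: the value at lag 0 equals the value at lag 1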

Classical results in general equilibrium theory assume divisible goods and convex preferences of market participants. In many real-world markets, participants have non-convex preferences and the allocation problem needs to consider complex constraints. Electricity markets are a prime example. In such markets, Walrasian prices generally do not exist, and heuristic pricing rules based on the dual of the relaxed allocation problem are used in practice. However, these rules have been criticized for high side-payments and inadequate congestion signals. We show that existing pricing heuristics optimize specific design goals that can be conflicting. The trade-offs can be substantial, and we establish that the design of pricing rules is fundamentally a multi-objective optimization problem addressing different incentives. In addition to traditional multi-objective optimization techniques using a weighting of individual objectives, we introduce a novel parameter-free pricing rule that minimizes incentives for market participants to deviate locally. Our findings show how the new pricing rule capitalizes on the upsides of existing pricing rules under scrutiny today. It leads to prices that incur low make-whole payments while providing adequate congestion signals and low lost opportunity costs. Our suggested pricing rule does not require a weighting of objectives, is computationally scalable, and balances trade-offs in a principled manner, addressing an important policy issue in electricity markets.

We give an efficient algorithm that transforms any bounded degree expander graph into another that achieves almost optimal (namely, near-quadratic, $d \leq 1/\lambda^{2+o(1)}$) trade-off between (any desired) spectral expansion $\lambda$ and degree $d$. Furthermore, the algorithm is local: every vertex can compute its new neighbors as a subset of its original neighborhood of radius $O(\log(1/\lambda))$. The optimal quadratic trade-off is known as the Ramanujan bound, so our construction gives almost Ramanujan expanders from arbitrary expanders. The locality of the transformation preserves structural properties of the original graph, and thus has many consequences. Applied to Cayley graphs, our transformation shows that any expanding finite group has almost Ramanujan expanding generators. Similarly, one can obtain almost optimal explicit constructions of quantum expanders, dimension expanders, monotone expanders, etc., from existing (suboptimal) constructions of such objects. Another consequence is a "derandomized" random walk on the original (suboptimal) expander with almost optimal convergence rate. Our transformation also applies when the degree is not bounded or the expansion is not constant. We obtain our results by a generalization of Ta-Shma's technique in his breakthrough paper [STOC 2017], used to obtain explicit almost optimal binary codes. Specifically, our spectral amplification extends Ta-Shma's analysis of bias amplification from scalars to matrices of arbitrary dimension in a very natural way. Curiously, while Ta-Shma's explicit bias amplification derandomizes a well-known probabilistic argument (underlying the Gilbert--Varshamov bound), there seems to be no known probabilistic (or other existential) way of achieving our explicit ("high-dimensional") spectral amplification.
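For reference, the quadratic barrier comes from the Alon--Boppana bound: a $d$-regular graph has normalized second eigenvalue at least $\frac{2\sqrt{d-1}}{d} - o(1) \approx \frac{2}{\sqrt{d}}$, so a $(d,\lambda)$-expander needs $d \gtrsim 1/\lambda^2$; graphs meeting this bound are the Ramanujan graphs, and the construction above gets within a $\lambda^{o(1)}$ factor of it.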

Recent years have witnessed a resurgence of knowledge engineering, featured by the fast growth of knowledge graphs. However, most existing knowledge graphs are represented with pure symbols, which hurts a machine's capability to understand the real world. The multi-modalization of knowledge graphs is an inevitable key step towards the realization of human-level machine intelligence. The results of this endeavor are Multi-modal Knowledge Graphs (MMKGs). In this survey of MMKGs constructed from texts and images, we first give the definitions of MMKGs, followed by preliminaries on multi-modal tasks and techniques. We then systematically review the challenges, progress, and opportunities in the construction and application of MMKGs, respectively, with detailed analyses of the strengths and weaknesses of different solutions. We conclude this survey with open research problems relevant to MMKGs.

Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power in processing graph-structured data. A typical GCN and its variants work under a homophily assumption (i.e., nodes with the same class tend to connect to each other), while ignoring the heterophily that exists in many real-world networks (i.e., nodes with different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining the immediate representations, which introduces noise and irrelevant information into the result. However, these methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs. This makes it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a novel propagation mechanism which can automatically change the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measurements of homophily degree between node pairs, learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end manner, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms the state-of-the-art methods under heterophily or low homophily, and gains competitive performance under homophily.
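The following is a schematic numpy sketch of the general idea of homophily-aware propagation, i.e., weighting each neighbor's message by an estimated pairwise homophily degree; the scoring function (a simple cosine similarity) and the absence of learned parameters are simplifications of my own, not the paper's actual model.

import numpy as np

def homophily_aware_propagation(A, H):
    # Pairwise cosine similarity of current representations, used here as a
    # stand-in for the learned topology/attribute-based homophily degrees.
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)
    S = Hn @ Hn.T
    # Keep only actual edges; positive scores pull neighbors together,
    # negative scores (estimated heterophily) push representations apart.
    W = A * S
    deg = np.abs(W).sum(axis=1, keepdims=True) + 1e-12
    return H + (W / deg) @ H

# Toy graph: node 0 is connected to a similar node 1 and a dissimilar node 2.
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
H = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.2]])
print(homophily_aware_propagation(A, H))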

Graph Convolutional Networks (GCNs) and their variants have received significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity by successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields a speedup of up to two orders of magnitude over FastGCN.
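A minimal sketch of the collapsed linear model described above: propagate features with $K$ powers of the normalized adjacency matrix, then fit an ordinary linear (here logistic-regression) classifier. The self-loop normalization and the choice $K = 2$ are common conventions, stated here as assumptions rather than the paper's exact setup.

import numpy as np
from sklearn.linear_model import LogisticRegression

def simplified_gcn_features(A, X, K=2):
    """Precompute S^K X with S = D^{-1/2} (A + I) D^{-1/2}; with the nonlinearities
    removed, the remaining model is just a linear classifier on these features."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt
    Z = X
    for _ in range(K):
        Z = S @ Z
    return Z

# Tiny toy graph: 4 nodes, 2 features, 2 classes.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0], [0.8, 0.1], [0.1, 0.9], [0.0, 1.0]])
y = np.array([0, 0, 1, 1])
Z = simplified_gcn_features(A, X)
clf = LogisticRegression().fit(Z, y)
print(clf.predict(Z))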
