In 1943, Hadwiger conjectured that every graph with no $K_t$ minor is $(t-1)$-colorable for every $t\ge 1$. In the 1980s, Kostochka and Thomason independently proved that every graph with no $K_t$ minor has average degree $O(t\sqrt{\log t})$ and hence is $O(t\sqrt{\log t})$-colorable. Recently, Norin, Song and the second author showed that every graph with no $K_t$ minor is $O(t(\log t)^{\beta})$-colorable for every $\beta > 1/4$, making the first improvement on the order of magnitude of the $O(t\sqrt{\log t})$ bound. The first main result of this paper is that every graph with no $K_t$ minor is $O(t\log\log t)$-colorable. This is a corollary of our main technical result that the chromatic number of a $K_t$-minor-free graph is bounded by $O(t(1+f(G,t)))$, where $f(G,t)$ is the maximum of $\frac{\chi(H)}{a}$ over all $a\ge \frac{t}{\sqrt{\log t}}$ and $K_a$-minor-free subgraphs $H$ of $G$ that are small (i.e., have $O(a\log^4 a)$ vertices). This has a number of interesting corollaries. First, as mentioned, using the current best-known bounds on coloring small $K_t$-minor-free graphs, we show that $K_t$-minor-free graphs are $O(t\log\log t)$-colorable. Second, it shows that proving Linear Hadwiger's Conjecture (that $K_t$-minor-free graphs are $O(t)$-colorable) reduces to proving it for small graphs. Third, we prove that $K_t$-minor-free graphs with clique number at most $\sqrt{\log t}/(\log\log t)^2$ are $O(t)$-colorable. This implies our final corollary that Linear Hadwiger's Conjecture holds for $K_r$-free graphs for every fixed $r$. One key to proving the main theorem is a new standalone result that every $K_t$-minor-free graph of average degree $d=\Omega(t)$ has a subgraph on $O(t \log^3 t)$ vertices with average degree $\Omega(d)$.
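For reference, the main technical result can be written as a single display; the formulation below simply restates the quantities already defined in the abstract, with $H$ ranging over subgraphs of $G$:

\[
  \chi(G) \;=\; O\!\bigl(t\,(1+f(G,t))\bigr),
  \qquad
  f(G,t) \;=\; \max\Bigl\{\, \tfrac{\chi(H)}{a} \;:\; a \ge \tfrac{t}{\sqrt{\log t}},\ H \subseteq G \text{ is } K_a\text{-minor-free},\ |V(H)| = O(a\log^4 a) \,\Bigr\}.
\]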

Related Content

In this paper we consider the filtering of a class of partially observed piecewise deterministic Markov processes (PDMPs). In particular, we assume that an ordinary differential equation (ODE) drives the deterministic element and can only be solved numerically via a time discretization. We develop, based upon the approach in [20], a new particle and multilevel particle filter (MLPF) in order to approximate the filter associated to the discretized ODE. We provide a bound on the mean square error associated to the MLPF which provides guidance on setting the simulation parameter of that algorithm and implies that significant computational gains can be obtained versus using a particle filter. Our theoretical claims are confirmed in several numerical examples.
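As a point of reference for the setting, the sketch below is a minimal single-level bootstrap particle filter in which the deterministic motion between observations is an Euler-discretized ODE with step size h. It only illustrates the discretized filtering problem; it is not the paper's multilevel estimator, and the drift, jump mechanism and observation model are placeholder assumptions.

```python
import numpy as np

# Minimal single-level bootstrap particle filter for a latent state whose
# deterministic motion is an Euler-discretized ODE (step size h).  This is an
# illustrative sketch only -- the paper's multilevel particle filter couples
# two such filters at successive discretization levels; the ODE, jump rate
# and observation model below are placeholder assumptions.

def euler_flow(x, dt, h, drift=lambda x: -0.5 * x):
    """Advance the deterministic ODE x' = drift(x) over time dt with step h."""
    n_steps = max(1, int(round(dt / h)))
    for _ in range(n_steps):
        x = x + h * drift(x)
    return x

def bootstrap_pf(observations, dt, h, n_particles=500,
                 jump_rate=0.3, jump_scale=1.0, obs_std=0.5, rng=None):
    rng = np.random.default_rng(rng)
    x = rng.normal(size=n_particles)          # initial particle cloud
    means = []
    for y in observations:
        # 1. Propagate: deterministic flow plus random jumps (PDMP-style).
        x = euler_flow(x, dt, h)
        jumps = rng.random(n_particles) < 1.0 - np.exp(-jump_rate * dt)
        x = np.where(jumps, x + jump_scale * rng.normal(size=n_particles), x)
        # 2. Weight by the observation likelihood y | x ~ N(x, obs_std^2).
        logw = -0.5 * ((y - x) / obs_std) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))
        # 3. Multinomial resampling.
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return means

# Example: filter a short synthetic observation sequence at step size h = 0.01.
obs = [0.8, 0.3, -0.1, 0.4]
print(bootstrap_pf(obs, dt=1.0, h=0.01))
```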

Neural temporal point processes (TPPs) have shown promise for modeling continuous-time event sequences. However, capturing the interactions between events is challenging yet critical for performing inference tasks like forecasting on event sequence data. Existing TPP models have focused on parameterizing the conditional distribution of future events but struggle to model event interactions. In this paper, we propose a novel approach that leverages Neural Relational Inference (NRI) to learn a relation graph that infers interactions while simultaneously learning the dynamics patterns from observational data. Our approach, the Contrastive Relational Inference-based Hawkes Process (CRIHP), reasons about event interactions under a variational inference framework. It utilizes intensity-based learning to search for prototype paths that contrast relationship constraints. Extensive experiments on three real-world datasets demonstrate the effectiveness of our model in capturing event interactions for event sequence modeling tasks.
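For readers unfamiliar with point-process intensities, the snippet below evaluates the classical multivariate Hawkes conditional intensity with an exponential kernel; the excitation matrix plays the role of the pairwise event interactions that the proposed model infers as a latent relation graph. All names and parameter values are illustrative, not part of CRIHP.

```python
import numpy as np

# Classical (non-neural) Hawkes conditional intensity with an exponential
# kernel, shown only to fix notation: lambda_k(t) = mu_k + sum over past
# events (t_i, k_i) of A[k, k_i] * exp(-beta * (t - t_i)).  The excitation
# matrix A plays the role of the pairwise event interactions that a neural
# TPP would learn; all parameter values here are made up.

def hawkes_intensity(t, history, mu, A, beta):
    """history: list of (time, type) pairs with time < t."""
    lam = mu.copy()
    for t_i, k_i in history:
        lam += A[:, k_i] * np.exp(-beta * (t - t_i))
    return lam

mu = np.array([0.2, 0.1])                 # base rates for 2 event types
A = np.array([[0.5, 0.0],                 # type 0 excites itself,
              [0.3, 0.4]])                # type 0 also excites type 1
history = [(0.5, 0), (1.2, 1), (2.0, 0)]
print(hawkes_intensity(2.5, history, mu, A, beta=1.0))
```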

A $k$-stack layout (or $k$-page book embedding) of a graph consists of a total order of the vertices and a partition of the edges into $k$ sets of non-crossing edges with respect to the vertex order. The stack number of a graph is the minimum $k$ such that it admits a $k$-stack layout. In this paper we study a long-standing problem regarding the stack number of planar directed acyclic graphs (DAGs), for which the vertex order has to respect the orientation of the edges. We investigate upper and lower bounds on the stack number of several families of planar DAGs: we improve the constant upper bounds on the stack number of single-source and monotone outerplanar DAGs and of outerpath DAGs, and improve the constant upper bound for upward planar 3-trees. Further, we provide computer-aided lower bounds for upward (outer-) planar DAGs.
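The definitions above are easy to check mechanically; the sketch below verifies that a given vertex order and page assignment form a valid stack layout of a DAG (the order is topological and no two same-page edges interleave). The toy graph and page assignment are arbitrary examples, not taken from the paper.

```python
# A small checker for the definitions above: given a vertex order and an
# assignment of edges to stacks (pages), verify that no two edges on the
# same stack cross, and that the order respects edge orientations as
# required for DAGs.  The example graph is an arbitrary toy DAG.

def is_stack_layout(order, directed_edges, page_of):
    pos = {v: i for i, v in enumerate(order)}
    # The order must be a topological order of the DAG.
    if any(pos[u] >= pos[v] for u, v in directed_edges):
        return False
    # Two edges on the same page must not interleave in the vertex order.
    for i, (u1, v1) in enumerate(directed_edges):
        for u2, v2 in directed_edges[i + 1:]:
            if page_of[(u1, v1)] != page_of[(u2, v2)]:
                continue
            a, b = sorted((pos[u1], pos[v1]))
            c, d = sorted((pos[u2], pos[v2]))
            if a < c < b < d or c < a < d < b:
                return False
    return True

edges = [(0, 1), (0, 2), (2, 3), (1, 3)]
pages = {(0, 1): 0, (0, 2): 0, (2, 3): 0, (1, 3): 1}
print(is_stack_layout([0, 1, 2, 3], edges, pages))   # True: a 2-stack layout
```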

We study the differential privacy (DP) of a core ML problem, linear ordinary least squares (OLS), a.k.a. $\ell_2$-regression. Our key result is that the approximate LS algorithm (ALS) (Sarlos, 2006), a randomized solution to the OLS problem primarily used to improve performance on large datasets, also preserves privacy. ALS achieves a better privacy/utility tradeoff, without modifications or further noising, when compared to alternative private OLS algorithms which modify and/or noise OLS. We give the first {\em tight} DP-analysis for the ALS algorithm and the standard Gaussian mechanism (Dwork et al., 2014) applied to OLS. Our methodology directly improves the privacy analysis of (Blocki et al., 2012) and (Sheffet, 2019) and introduces new tools which may be of independent interest: (1) the exact spectrum of $(\epsilon, \delta)$-DP parameters (``DP spectrum'') for mechanisms whose output is a $d$-dimensional Gaussian, and (2) an improved DP spectrum for random projection (compared to (Blocki et al., 2012) and (Sheffet, 2019)). All methods for private OLS (including ours) assume, often implicitly, restrictions on the input database, such as bounds on leverage and residuals. We prove that such restrictions are necessary. Hence, computing the privacy of mechanisms such as ALS requires estimating these database parameters, which can be infeasible in big datasets. For more complex ML models, DP bounds may not even be tractable. There is a need for blackbox DP-estimators (Lu et al., 2022) which empirically estimate data-dependent privacy. We demonstrate the effectiveness of such a DP-estimator by empirically recovering a DP spectrum that matches our theory for OLS. This validates the DP-estimator in a nontrivial ML application, opening the door to its use in more complex nonlinear ML settings where theory is unavailable.
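To make the mechanism under analysis concrete, the following is a minimal sketch-and-solve least-squares routine in the spirit of ALS (Sarlos, 2006): compress $(X, y)$ with a random Gaussian projection and solve OLS on the compressed data. The projection dimension, the synthetic data, and the choice of a Gaussian sketch are illustrative assumptions; the paper's contribution, the privacy analysis of this mechanism, is not reproduced here.

```python
import numpy as np

# Sketch-and-solve least squares in the spirit of the ALS algorithm
# (Sarlos, 2006): compress (X, y) with a random Gaussian projection and
# solve OLS on the compressed data.  This sketch only illustrates the
# mechanism whose output the paper analyzes for differential privacy;
# the projection dimension k and the data are arbitrary choices here.

def approximate_ls(X, y, k, rng=None):
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    S = rng.normal(scale=1.0 / np.sqrt(k), size=(k, n))   # random projection
    beta, *_ = np.linalg.lstsq(S @ X, S @ y, rcond=None)
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
beta_true = np.arange(1.0, 6.0)
y = X @ beta_true + 0.1 * rng.normal(size=1000)
print(approximate_ls(X, y, k=100, rng=1))   # close to beta_true
```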

Multilevel lattice codes, such as those associated to Constructions $C$, $\overline{D}$, $D$ and $D'$, have relevant applications in communications. In this paper, we investigate some properties of lattices obtained via Constructions $D$ and $D'$ from $q$-ary linear codes. Connections with Construction A, generator matrices, expressions and bounds for the lattice volume and minimum distances are derived. Extensions of previous results regarding construction and decoding of binary and $p$-ary linear codes ($p$ prime) are also presented.
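As a reminder of the baseline the constructions build on, the standard Construction A lattice of a $q$-ary linear code $C \subseteq \mathbb{F}_q^n$ of dimension $k$ (identifying codewords with integer vectors in $\{0,\dots,q-1\}^n$) is given below; Constructions $D$ and $D'$ are obtained analogously from a chain of nested codes, following conventions that are not reproduced here.

\[
  \Lambda_A(C) \;=\; \{\, v \in \mathbb{Z}^n : v \bmod q \in C \,\} \;=\; C + q\,\mathbb{Z}^n,
  \qquad
  \operatorname{vol}\bigl(\mathbb{R}^n/\Lambda_A(C)\bigr) \;=\; q^{\,n-k}.
\]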

We investigate the decidability of the ${0,\infty}$ fragment of Timed Propositional Temporal Logic (TPTL). We show that the satisfiability checking of TPTL$^{0,\infty}$ is PSPACE-complete. Moreover, even its 1-variable fragment (1-TPTL$^{0,\infty}$) is strictly more expressive than Metric Interval Temporal Logic (MITL), for which satisfiability checking is EXPSPACE-complete. Hence, we have a strictly more expressive logic with computationally easier satisfiability checking. To the best of our knowledge, TPTL$^{0,\infty}$ is the first multi-variable fragment of TPTL for which satisfiability checking is decidable without imposing any bounds/restrictions on the timed words (e.g., bounded variability, bounded time). The membership in PSPACE is obtained by a reduction to the emptiness checking problem for a new "non-punctual" subclass of Alternating Timed Automata with multiple clocks called Unilateral Very Weak Alternating Timed Automata (VWATA$^{0,\infty}$), which we prove to be in PSPACE. We show this by constructing a simulation-equivalent non-deterministic timed automaton whose number of clocks is polynomial in the size of the given VWATA$^{0,\infty}$.

Collecting supporting evidence from large corpora of text (e.g., Wikipedia) is a major challenge for open-domain Question Answering (QA). In particular, for multi-hop open-domain QA, scattered pieces of evidence must be gathered together to support answer extraction. In this paper, we propose a new retrieval target, the hop, to collect the hidden reasoning evidence from Wikipedia for complex question answering. Specifically, the hop in this paper is defined as the combination of a hyperlink and the corresponding outbound link document. The hyperlink is encoded as a mention embedding, which models the structured knowledge of how the outbound link entity is mentioned in the textual context, and the corresponding outbound link document is encoded as a document embedding representing the unstructured knowledge within it. Accordingly, we build HopRetriever, which retrieves hops over Wikipedia to answer complex questions. Experiments on the HotpotQA dataset demonstrate that HopRetriever outperforms previously published evidence retrieval methods by large margins. Moreover, our approach also yields quantifiable interpretations of the evidence collection process.
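To fix ideas, the toy sketch below scores a single hop by concatenating a mention embedding (the structured knowledge of how the outbound entity is mentioned) with the corresponding document embedding (the unstructured knowledge) and comparing the pair against a question embedding. The bilinear scorer and the dimensions are assumptions made for illustration, not the exact HopRetriever architecture.

```python
import numpy as np

# Illustrative scoring of a "hop" as described above: a hop pairs a hyperlink
# (encoded as a mention embedding) with its outbound document (encoded as a
# document embedding), and the retriever scores the concatenated pair against
# the question.  The bilinear scorer and dimensions below are assumptions for
# the sketch, not the paper's exact architecture.

rng = np.random.default_rng(0)
d = 8                                    # embedding size (toy value)

question_emb = rng.normal(size=d)
mention_emb = rng.normal(size=d)         # structured: how the entity is mentioned
document_emb = rng.normal(size=d)        # unstructured: the linked document text

hop_emb = np.concatenate([mention_emb, document_emb])      # the hop representation
W = rng.normal(size=(d, 2 * d))                            # learned in practice
score = question_emb @ (W @ hop_emb)                       # relevance of this hop
print(score)
```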

Translational distance-based knowledge graph embedding has shown progressive improvements on the link prediction task, from TransE to the latest state-of-the-art RotatE. However, N-1, 1-N and N-N predictions still remain challenging. In this work, we propose a novel translational distance-based approach for knowledge graph link prediction. The proposed method is two-fold: first, we extend RotatE from the 2D complex domain to a high-dimensional space with orthogonal transforms to model relations, for greater modeling capacity. Second, the graph context is explicitly modeled via two directed context representations. These context representations are used as part of the distance scoring function to measure the plausibility of the triples during training and inference. The proposed approach effectively improves prediction accuracy on the difficult N-1, 1-N and N-N cases for the knowledge graph link prediction task. The experimental results show that it achieves better performance on two benchmark data sets compared to the baseline RotatE, especially on the data set with many high in-degree nodes (FB15k-237).
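The scoring functions involved can be sketched in a few lines: RotatE rotates the head entity in the complex plane, and the extension described here replaces that 2D rotation with a higher-dimensional orthogonal transform per relation. The random orthogonal matrix below (obtained from a QR factorization) is only a stand-in for the learned, parameterized transform, and the graph-context term is omitted.

```python
import numpy as np

# Translational-distance scoring sketch: the RotatE score rotates the head
# entity in the complex plane, and the extension described above replaces the
# 2-D rotation with a higher-dimensional orthogonal transform per relation.
# The orthogonal matrix here comes from a QR factorization purely for
# illustration; in the model it would be a learned, parameterized transform.

def rotate_score(h, r_phase, t):
    """RotatE: entities are complex vectors, relations are unit-modulus rotations."""
    return -np.linalg.norm(h * np.exp(1j * r_phase) - t, ord=1)

def orthogonal_score(h, Q, t):
    """Generalization: apply an orthogonal transform Q (Q^T Q = I) to the head."""
    return -np.linalg.norm(Q @ h - t, ord=1)

rng = np.random.default_rng(0)
d = 6
h = rng.normal(size=d) + 1j * rng.normal(size=d)
t = rng.normal(size=d) + 1j * rng.normal(size=d)
print(rotate_score(h, rng.uniform(0, 2 * np.pi, size=d), t))

Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # a random orthogonal matrix
print(orthogonal_score(rng.normal(size=d), Q, rng.normal(size=d)))
```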

Graph Neural Networks (GNNs) have recently been used for node and graph classification tasks with great success, but GNNs model dependencies among the attributes of nearby neighboring nodes rather than dependencies among observed node labels. In this work, we consider the task of inductive node classification using GNNs in supervised and semi-supervised settings, with the goal of incorporating label dependencies. Because current GNNs are not universal (i.e., most-expressive) graph representations, we propose a general collective learning approach to increase the representation power of any existing GNN. Our framework combines ideas from collective classification with self-supervised learning, and uses a Monte Carlo approach to sampling embeddings for inductive learning across graphs. We evaluate performance on five real-world network datasets and demonstrate consistent, significant improvement in node classification accuracy, for a variety of state-of-the-art GNNs.
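As a toy illustration of the collective idea, the sketch below augments each node's features with Monte Carlo samples of its neighbors' predicted labels and re-runs a stand-in one-layer mean aggregator, so that label dependencies feed back into the predictions. The aggregator, the random weights, and the sampling counts are assumptions for the sketch, not the paper's framework.

```python
import numpy as np

# A minimal illustration of the collective idea above: augment each node's
# input with Monte Carlo samples of its neighbors' predicted labels and
# re-run the (here, trivial) node classifier, so label dependencies feed back
# into the representation.  The "GNN" below is a one-layer mean aggregator
# with random weights -- a stand-in, not the paper's architecture.

def collective_step(A, X, W, label_probs, rng, n_samples=10):
    n, c = label_probs.shape
    sampled = np.zeros((n, c))
    for _ in range(n_samples):                       # Monte Carlo label samples
        y = np.array([rng.choice(c, p=p) for p in label_probs])
        sampled += np.eye(c)[y]
    neighbor_labels = (A @ (sampled / n_samples)) / np.maximum(A.sum(1, keepdims=True), 1)
    features = np.hstack([X, neighbor_labels])       # augment features with label info
    logits = (A @ features) @ W                      # one round of mean-style propagation
    e = np.exp(logits - logits.max(1, keepdims=True))
    return e / e.sum(1, keepdims=True)

rng = np.random.default_rng(0)
n, d, c = 6, 4, 3
A = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(A, 0)
X = rng.normal(size=(n, d))
W = rng.normal(size=(d + c, c))
probs = np.full((n, c), 1.0 / c)                     # start from uniform predictions
for _ in range(3):
    probs = collective_step(A, X, W, probs, rng)
print(probs.round(2))
```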

Visual Question Answering (VQA) models have so far struggled to count objects in natural images. We identify the use of soft attention in these models as a fundamental cause. To circumvent this problem, we propose a neural network component that allows robust counting from object proposals. Experiments on a toy task show the effectiveness of this component, and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced-pair metric, the component improves counting over a strong baseline by a substantial 6.6%.
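The failure mode attributed to soft attention can be demonstrated in a few lines: because softmax weights always sum to one, the attended feature for one relevant object proposal is essentially identical to that for several duplicated relevant proposals, so count information is lost. The feature vectors below are toy values.

```python
import numpy as np

# Why soft attention loses count information, as noted above: with softmax
# normalization the attention weights always sum to one, so the attended
# feature for one relevant object is (nearly) identical to that for several
# duplicated relevant objects.  The feature vectors below are toy values.

def attended_feature(scores, features):
    w = np.exp(scores - scores.max())
    w /= w.sum()                       # weights sum to 1 regardless of object count
    return w @ features

cat = np.array([1.0, 0.0, 0.5])        # feature of a "cat" proposal
bg = np.array([0.0, 1.0, 0.2])         # feature of a background proposal

one_cat = attended_feature(np.array([5.0, 0.0]), np.stack([cat, bg]))
two_cats = attended_feature(np.array([5.0, 5.0, 0.0]), np.stack([cat, cat, bg]))
print(one_cat, two_cats)               # (nearly) identical attended features
```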
