
A temporal graph is a graph in which each edge is assigned a time label. A node $u$ of a temporal graph is temporally connected to a node $v$ if there exists a path from $u$ to $v$ with increasing edge time labels. We consider the problem of assigning time labels to the edges of a digraph in order to maximize the total reachability of the resulting temporal graph (that is, the number of ordered pairs of nodes $(u, v)$ such that $u$ is temporally connected to $v$). In particular, we prove that this problem is NP-hard. We then conjecture that the problem is approximable within a constant approximation ratio. This conjecture is a consequence of the following graph-theoretic conjecture: any strongly connected directed graph with $n$ nodes admits an out-arborescence and an in-arborescence that are edge-disjoint, have the same root, and each spans $\Omega(n)$ nodes.
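
For concreteness, here is a minimal Python sketch of evaluating the total reachability of a given labeling, under the simplifying assumption of pairwise-distinct time labels (so a single pass over the edges in increasing label order is sound); it illustrates the objective, not the paper's techniques:

```python
def temporal_reachability(nodes, labeled_edges):
    """Count ordered pairs (u, v), u != v, such that u reaches v along a path
    with increasing edge time labels. Assumes all time labels are distinct."""
    # reach[v]: nodes with an increasing-time path to v among edges seen so far
    reach = {v: {v} for v in nodes}
    for u, v, t in sorted(labeled_edges, key=lambda e: e[2]):
        # any increasing-time path into u (labels < t) extends through (u, v)
        reach[v] |= reach[u]
    return sum(len(s) - 1 for s in reach.values())

edges = [(0, 1, 1), (1, 2, 2), (2, 0, 3)]
print(temporal_reachability(range(3), edges))  # 5: every pair except 2 -> 1
```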

Related content

In this paper I propose a concept of a correct loss function in a generative model of supervised learning for an input space $\mathcal{X}$ and a label space $\mathcal{Y}$, which are measurable spaces. A correct loss function in a generative model of supervised learning must correctly measure the discrepancy between elements of a hypothesis space $\mathcal{H}$ of possible predictors and the supervisor operator, which may not belong to $\mathcal{H}$. To define correct loss functions, I propose a characterization of a regular conditional probability measure $\mu_{\mathcal{Y}|\mathcal{X}}$ for a probability measure $\mu$ on $\mathcal{X} \times \mathcal{Y}$ relative to the projection $\Pi_{\mathcal{X}}: \mathcal{X}\times\mathcal{Y}\to \mathcal{X}$ as a solution of a linear operator equation. If $\mathcal{Y}$ is a separable metrizable topological space with the Borel $\sigma$-algebra $\mathcal{B}(\mathcal{Y})$, I propose another characterization of a regular conditional probability measure $\mu_{\mathcal{Y}|\mathcal{X}}$ as a minimizer of a mean square error on the space of Markov kernels, called probabilistic morphisms, from $\mathcal{X}$ to $\mathcal{Y}$, using kernel mean embeddings. Using these results, and using inner measure to quantify the generalizability of a learning algorithm, I generalize a result due to Cucker and Smale concerning the learnability of a regression model to the setting of a conditional probability estimation problem. I also give a variant of Vapnik's regularization method for solving stochastic ill-posed problems using inner measure, and present its applications.
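
To fix ideas, the second characterization can be pictured schematically as follows; here $k$ is a kernel on $\mathcal{Y}$ with RKHS $\mathcal{H}_k$, $\mathfrak{M}$ is the associated kernel mean embedding, and the notation is ours, eliding the paper's precise hypotheses:

```latex
% Schematic: \mu_{Y|X} as a mean-square-error minimizer over Markov kernels
\[
  \mu_{\mathcal{Y}|\mathcal{X}}
    \;\in\; \operatorname*{arg\,min}_{T\,:\,\mathcal{X}\rightsquigarrow\mathcal{Y}}
    \;\mathbb{E}_{(x,y)\sim\mu}
    \bigl\| k(y,\cdot) - \mathfrak{M}\bigl(T(x)\bigr) \bigr\|_{\mathcal{H}_k}^{2},
  \qquad
  \mathfrak{M}(\nu) \;=\; \int_{\mathcal{Y}} k(y,\cdot)\,\mathrm{d}\nu(y),
\]
where $T$ ranges over Markov kernels (probabilistic morphisms) from $\mathcal{X}$ to $\mathcal{Y}$.
```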

We show that the \textsc{Maximum Weight Independent Set} problem (\textsc{MWIS}) can be solved in quasi-polynomial time on $H$-free graphs (graphs excluding a fixed graph $H$ as an induced subgraph) for every $H$ whose every connected component is a path or a subdivided claw (i.e., a tree with at most three leaves). This completes the dichotomy of the complexity of \textsc{MWIS} in $\mathcal{F}$-free graphs for any finite set $\mathcal{F}$ of graphs into NP-hard cases and cases solvable in quasi-polynomial time, and corroborates the conjecture that the cases not known to be NP-hard are actually polynomial-time solvable. The key graph-theoretic ingredient in our result is as follows. Fix an integer $t \geq 1$. Let $S_{t,t,t}$ be the graph created from three paths on $t$ edges by identifying one endpoint of each path into a single vertex. We show that, given a graph $G$, one can in polynomial time find either an induced $S_{t,t,t}$ in $G$, or a balanced separator consisting of $O(\log |V(G)|)$ vertex neighborhoods in $G$, or an extended strip decomposition of $G$ (a decomposition almost as useful for recursion for \textsc{MWIS} as a partition into connected components) with each particle of weight multiplicatively smaller than the weight of $G$. This is a strengthening of a result of Majewski et al.\ [ICALP~2022] which provided such an extended strip decomposition only after the deletion of $O(\log |V(G)|)$ vertex neighborhoods. To reach the final result, we employ an involved branching strategy that relies on the structural lemma presented above.
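
The spider $S_{t,t,t}$ is simple to construct explicitly; a small sketch of the definition (assuming the networkx library, purely for illustration):

```python
import networkx as nx

def spider(t):
    """S_{t,t,t}: three paths on t edges each, glued at a common endpoint."""
    G = nx.Graph()
    for leg in range(3):
        prev = "center"                 # the identified endpoint
        for i in range(1, t + 1):
            G.add_edge(prev, (leg, i))
            prev = (leg, i)
    return G

G = spider(2)
print(G.number_of_nodes(), G.number_of_edges())  # 7 6 (3t + 1 vertices, 3t edges)
```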

We investigate swarms of autonomous mobile robots in the Euclidean plane. Each robot has a target function that determines a destination point from the robots' positions. Conventionally, all robots in a swarm take the same target function. We allow the robots in a swarm to take different target functions, and investigate the effects of the number of distinct target functions on the swarm's problem-solving ability. Specifically, regarding the target function as a resource for solving a problem, like time and messages, we ask how many distinct target functions are necessary and sufficient to solve some known problems that are not solvable when all robots take the same target function. The number of distinct target functions necessary and sufficient to solve a problem $\Pi$ is called the minimum algorithm size (MAS) for $\Pi$. (The MAS is $\infty$ if $\Pi$ is not solvable even when every robot takes a distinct target function.) We establish the MASs for solving the gathering and related problems from any initial configuration, i.e., in a self-stabilizing manner. Our results include the following. There is a family of scattering problems $c$SCT ($1 \leq c \leq n$), where $n$ is the size of the swarm, such that the MAS for $c$SCT is $c$. The MAS for the gathering problem is 2. It is 3 for the problem of gathering all non-faulty robots at a single point, regardless of the number ($< n$) of crash failures, but it is $\infty$ for the problem of gathering all robots at a single point in the presence of at most one crash failure.
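
As a minimal sketch of the model (with hypothetical target functions, for illustration only; this is not one of the paper's algorithms), a fully synchronous round in which robots may carry distinct target functions can be written as:

```python
import numpy as np

def go_to_centroid(positions):
    """Hypothetical target function: move to the centroid of all positions."""
    return positions.mean(axis=0)

def go_to_leftmost(positions):
    """Hypothetical target function: move to the lexicographically smallest point."""
    return positions[np.lexsort(positions.T[::-1])[0]]

def round_step(positions, target_functions):
    """One synchronous round: robot i moves to target_functions[i](positions)."""
    return np.array([f(positions) for f in target_functions])

pts = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]])
fns = [go_to_centroid, go_to_centroid, go_to_leftmost]  # two distinct functions
print(round_step(pts, fns))
```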

While there has been progress in establishing the unprovability of complexity statements in lower fragments of bounded arithmetic, understanding the limits of Je\v{r}\'abek's theory $APC_1$ (2007) and of higher levels of Buss's hierarchy $S^i_2$ (1986) has been a more elusive task. Even in the more restricted setting of Cook's theory PV (1975), known results often rely on a less natural formalization that encodes a complexity statement using a collection of sentences instead of a single sentence. This is done to reduce the quantifier complexity of the resulting sentences so that standard witnessing results can be invoked. In this work, we establish unprovability results for stronger theories and for sentences of higher quantifier complexity. In particular, we unconditionally show that $APC_1$ cannot prove strong complexity lower bounds separating the third level of the polynomial hierarchy. In more detail, we consider non-uniform average-case separations, and establish that $APC_1$ cannot prove a sentence stating that, for every $n \ge n_0$, there exists $f_n \in \Pi_{3}$-$SIZE[n^d]$ that is $(1/n)$-far from every $\Sigma_{3}$-$SIZE[2^{n^{\delta}}]$ circuit. This is a consequence of a much more general result showing that, for every $i \geq 1$, strong separations for $\Pi_{i}$-$SIZE[poly(n)]$ versus $\Sigma_{i}$-$SIZE[2^{n^{\Omega(1)}}]$ cannot be proved in the theory $T_{PV}^i$ consisting of all true $\forall \Sigma^b_{i-1}$-sentences in the language of Cook's theory PV. Our argument employs a convenient game-theoretic witnessing result that can be applied to sentences of arbitrary quantifier complexity. We combine it with extensions of a technique introduced by Kraj\'{\i}\v{c}ek (2011) that was recently employed by Pich and Santhanam (2021) to establish the unprovability of lower bounds in PV (i.e., the case $i=1$ above, but under a weaker formalization) and in a fragment of $APC_1$.
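
Under one natural reading (our rendering; the paper's exact formalization inside the theory may differ), the sentence in question asserts:

```latex
\[
  \forall n \ge n_0 \;\; \exists f_n \in \Pi_{3}\text{-}SIZE[n^d] \;\;
  \forall C \in \Sigma_{3}\text{-}SIZE[2^{n^{\delta}}] :\;
  \Pr_{x \in \{0,1\}^n}\bigl[\, f_n(x) \neq C(x) \,\bigr] \;\ge\; \frac{1}{n},
\]
% i.e., f_n is (1/n)-far from every such circuit.
```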

The moment-sum-of-squares (moment-SOS) hierarchy is one of the most celebrated and widely applied methods for approximating the minimum of an $n$-variate polynomial over a feasible region defined by polynomial (in)equalities. A key feature of the hierarchy is that, at a fixed level, it can be formulated as a semidefinite program of size polynomial in the number of variables $n$. Although this suggests that it can be computed in polynomial time, that is not necessarily the case. Indeed, as O'Donnell (2017) and later Raghavendra & Weitz (2017) showed, there exist examples where the SOS representations used in the hierarchy have exponential bit-complexity. We study the computational complexity of the moment-SOS hierarchy, complementing and expanding upon earlier work of Raghavendra & Weitz (2017). In particular, we establish algebraic and geometric conditions under which polynomial-time computation is guaranteed to be possible.
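
For context, at level $r$ the SOS side of the hierarchy for minimizing $f$ over $\{x : g_1(x) \ge 0, \dots, g_m(x) \ge 0\}$ is the standard program below; at fixed $r$, the Gram matrices involved are indexed by the $\binom{n+r}{r} = O(n^r)$ monomials of degree at most $r$, hence the polynomial-size semidefinite program:

```latex
\[
  \mathrm{sos}_r(f) \;=\; \sup\Bigl\{\, \lambda \in \mathbb{R} \;:\;
    f - \lambda \;=\; \sigma_0 + \sum_{j=1}^{m} \sigma_j g_j, \;\;
    \sigma_j \text{ sums of squares}, \;\;
    \deg(\sigma_0),\, \deg(\sigma_j g_j) \le 2r \,\Bigr\}.
\]
```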

Inferring causal structures from time series data is the central interest of many scientific inquiries. A major barrier to such inference is subsampling, i.e., the situation where the frequency of measurement is much lower than that of causal influence. To overcome this problem, numerous methods have been proposed, yet they are either limited to the linear case or fail to achieve identifiability. In this paper, we propose a constraint-based algorithm that can identify the entire causal structure from subsampled time series, without any parametric constraint. Our observation is that the challenge of subsampling arises mainly from the hidden variables at the unobserved time steps. Meanwhile, thanks to the temporal structure, every hidden variable has an observed proxy: essentially the variable itself at some observable time step in the future. Based on these observations, we can leverage the proxies to remove the bias induced by the hidden variables and hence achieve identifiability. Following this intuition, we propose a proxy-based causal discovery algorithm. Our algorithm is nonparametric and achieves full causal identification. These theoretical advantages are reflected in experiments on synthetic and real-world data.
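
A toy simulation of the subsampling setup (illustrative only, not the proposed algorithm): the latent process evolves at every step, but measurements are taken every $k$-th step, so the skipped values are exactly the hidden variables, and each one later reappears as an observation, its proxy.

```python
import numpy as np

rng = np.random.default_rng(0)
k, T = 3, 300                              # subsampling factor, latent length
X = np.zeros(T)
for t in range(1, T):
    X[t] = 0.8 * X[t - 1] + rng.normal()   # causal influence at every time step
observed = X[::k]                          # measured at 1/k of the causal frequency
# X[1] and X[2] are hidden; the observed X[3] = observed[1] acts as a proxy,
# being the same variable at the next observable time step.
```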

Let $A(n, d)$ denote the maximum size of a binary code of length $n$ and minimum Hamming distance $d$. Studying $A(n, d)$, including efforts to determine it as well as to derive bounds on $A(n, d)$ for large $n$, is one of the most fundamental subjects in coding theory. In this paper, we explore new lower and upper bounds on $A(n, d)$ in the large-minimum-distance regime, in particular, when $d = n/2 - \Omega(\sqrt{n})$. We first provide a new construction of cyclic codes, by carefully selecting specific roots in the binary extension field for the check polynomial, with length $n = 2^m - 1$, distance $d \geq n/2 - 2^{c-1}\sqrt{n}$, and size $n^{c+1/2}$, for any $m \geq 4$ and any integer $c$ with $0 \leq c \leq m/2 - 1$. These code parameters are slightly worse than those of the Delsarte--Goethals (DG) codes, which provide the previously best known lower bound in the large-minimum-distance regime. However, using a similar and extended code construction technique, we show a sequence of cyclic codes that improve upon DG codes and provide the best lower bound in a narrower range of the minimum distance $d$, in particular, when $d = n/2 - \Omega(n^{2/3})$. Furthermore, by leveraging a Fourier-analytic view of Delsarte's linear program, we obtain upper bounds on $A(n, n/2 - \rho\sqrt{n})$ with $\rho \in (0.5, 9.5)$ that scale polynomially in $n$. To the best of the authors' knowledge, the upper bound due to Barg and Nogin \cite{barg2006spectral} is the only previously known upper bound that scales polynomially in $n$ in this regime. We numerically demonstrate that our upper bound improves upon the Barg--Nogin upper bound in the specified high-minimum-distance regime.
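
As a toy sanity check in the spirit of this construction (a textbook example, not one of the paper's codes): choosing the primitive polynomial $x^4 + x + 1$ as the check polynomial of a length-15 cyclic code yields the $[15, 4]$ simplex code, whose minimum distance $8 \ge n/2$ can be confirmed by brute force:

```python
from itertools import product

def poly_mul(a, b):
    """Multiply binary polynomials over GF(2); coefficient lists, LSB first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def min_distance(g, n):
    """Brute-force minimum weight of the length-n cyclic code generated by g."""
    k = n - (len(g) - 1)
    return min(sum(poly_mul(list(m), g))
               for m in product([0, 1], repeat=k) if any(m))

# Generator = (x^15 + 1) / (x^4 + x + 1), i.e. the product of the remaining
# irreducible factors of x^15 + 1 over GF(2).
g = [1]
for f in ([1, 1], [1, 1, 1], [1, 0, 0, 1, 1], [1, 1, 1, 1, 1]):
    g = poly_mul(g, f)       # (x+1)(x^2+x+1)(x^4+x^3+1)(x^4+x^3+x^2+x+1)
print(min_distance(g, 15))   # 8, i.e. d >= n/2 for the [15, 4] simplex code
```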

Hierarchical clustering is a popular unsupervised machine learning method with decades of history and numerous applications. We initiate the study of differentially private approximation algorithms for hierarchical clustering under the rigorous framework introduced by Dasgupta (2016). We show a strong lower bound for the problem: any $\epsilon$-DP algorithm must exhibit $\Omega(|V|^2/\epsilon)$ additive error for an input dataset $V$. Then, we exhibit a polynomial-time approximation algorithm with $O(|V|^{2.5}/\epsilon)$ additive error, and an exponential-time algorithm that meets the lower bound. To circumvent the lower bound, we focus on the stochastic block model, a popular model of graphs, and, under a separation assumption on the blocks, propose a private $(1+o(1))$-approximation algorithm that also recovers the blocks exactly. Finally, we perform an empirical study of our algorithms and validate their performance.
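
For context, Dasgupta's objective, against which the additive-error guarantees above are measured, assigns to a hierarchy tree $T$ over a weighted graph $G$ the following cost (this formula is standard; Dasgupta, 2016):

```latex
\[
  \mathrm{cost}_G(T) \;=\; \sum_{\{i,j\} \in E(G)} w_{ij}\,
  \bigl|\,\mathrm{leaves}\bigl(T[\,i \vee j\,]\bigr)\bigr|,
\]
% where i \vee j is the least common ancestor of leaves i and j in T,
% and T[i \vee j] is the subtree rooted there.
```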

To alleviate the practical constraints of deploying deep neural networks (DNNs) on edge devices, quantization is widely regarded as a promising technique. It reduces the resource requirements for computational power and storage space by quantizing the weights and/or activation tensors of a DNN into lower bit-width fixed-point numbers, resulting in quantized neural networks (QNNs). While quantization has been shown empirically to introduce only minor accuracy loss, critical verified properties of a DNN might become invalid once it is quantized. Existing verification methods focus either on individual neural networks (DNNs or QNNs) or on quantization error bounds for partial quantization. In this work, we propose a quantization error bound verification method, named QEBVerif, for the setting where both weights and activation tensors are quantized. QEBVerif consists of two parts: a differential reachability analysis (DRA) and a mixed-integer linear programming (MILP) based verification method. DRA performs a layer-by-layer difference analysis between the DNN and its quantized counterpart to compute a tight quantization error interval efficiently. If DRA fails to prove the error bound, we encode the verification problem into an equivalent MILP problem, which can be solved by off-the-shelf solvers. Thus, QEBVerif is sound, complete, and reasonably efficient. We implement QEBVerif and conduct extensive experiments demonstrating its effectiveness and efficiency.
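
As a toy illustration of the difference analysis underlying DRA (a hypothetical uniform quantizer and a single linear layer; QEBVerif itself computes sound symbolic error intervals rather than sampling):

```python
import numpy as np

def quantize(w, bits=8):
    """Hypothetical uniform (fake) quantizer: round weights to a fixed grid."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 16))          # float layer weights
Wq = quantize(W)                       # quantized counterpart
x = rng.uniform(-1, 1, size=16)        # one concrete input from the input region
err = Wq @ x - W @ x                   # per-neuron output difference, this layer
print(np.abs(err).max())               # empirical error; DRA bounds it soundly
```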

Many learning tasks require dealing with graph data, which contains rich relational information among its elements. Modeling physics systems, learning molecular fingerprints, predicting protein interfaces, and classifying diseases all require a model that learns from graph inputs. In other domains, such as learning from non-structural data like texts and images, reasoning over extracted structures, like the dependency trees of sentences and the scene graphs of images, is an important research topic that also needs graph reasoning models. Graph neural networks (GNNs) are connectionist models that capture the dependence structure of graphs via message passing between the nodes of a graph. Unlike standard neural networks, graph neural networks retain a state that can represent information from a node's neighborhood at arbitrary depth. Although the primitive graph neural networks proved difficult to train to a fixed point, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful learning with them. In recent years, systems based on the graph convolutional network (GCN) and the gated graph neural network (GGNN) have demonstrated ground-breaking performance on many of the tasks mentioned above. In this survey, we provide a detailed review of existing graph neural network models, systematically categorize their applications, and propose four open problems for future research.
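
To make the message-passing picture concrete, here is a minimal NumPy sketch of one GCN-style propagation step, the standard rule $H' = \sigma(\hat{D}^{-1/2}(A+I)\hat{D}^{-1/2} H W)$; it is an illustration, not any specific system surveyed:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
H = np.eye(3)                                 # one-hot initial node states
W = np.random.default_rng(0).normal(size=(3, 4))
print(gcn_layer(A, H, W).shape)               # (3, 4): updated state per node
```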
