
Monadic Second-Order Logic (MSO) extends First-Order Logic (FO) with variables ranging over sets and quantifications over those variables. We introduce and study Monadic Tree Logic (MTL), a fragment of MSO interpreted on infinite-tree models, where the sets over which the variables range are arbitrary subtrees of the original model. We analyse the expressiveness of MTL compared with variants of MSO and MPL, namely MSO with quantifications over paths. We also discuss the connections with temporal logics, by providing non-trivial fragments of the Graded {\mu}-Calculus that can be embedded into MTL and by showing that MTL is enough to encode temporal logics for reasoning about strategies with FO-definable goals.
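As a standard textbook illustration of the extra power that set quantification brings (this example is not taken from the paper), the MSO formula below defines reachability of a node $t$ from a node $s$ over an edge relation $E$, a property that is not FO-definable:

```latex
% Reachability is MSO-definable: t is reachable from s iff every set X that
% contains s and is closed under the edge relation E also contains t.
\forall X \, \bigl( s \in X \;\wedge\; \forall y\, \forall z\, ( y \in X \wedge E(y,z) \rightarrow z \in X ) \;\rightarrow\; t \in X \bigr)
```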

Related content

Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, exploiting the commonalities and differences across tasks. Compared with training separate models, this can improve both learning efficiency and prediction accuracy for the task-specific models. Multi-task learning is a form of inductive transfer: it improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. This is achieved by learning the tasks in parallel through a shared representation, so that what is learned for each task can help the other tasks be learned better.
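A minimal sketch of the "shared representation" idea in code (hard parameter sharing; the architecture, layer sizes, and task heads below are illustrative choices, not from any particular paper):

```python
# Hard parameter sharing: two task heads on top of one shared trunk.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_classes=10):
        super().__init__()
        # Shared representation: both tasks backpropagate through this trunk,
        # so each task's training signal acts as an inductive bias for the other.
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_cls = nn.Linear(hidden, n_classes)  # task 1: classification
        self.head_reg = nn.Linear(hidden, 1)          # task 2: regression

    def forward(self, x):
        h = self.trunk(x)
        return self.head_cls(h), self.head_reg(h)

model = MultiTaskNet()
x = torch.randn(8, 32)
y_cls = torch.randint(0, 10, (8,))
y_reg = torch.randn(8, 1)
logits, pred = model(x)
# Joint objective: an (here equally) weighted sum of the per-task losses.
loss = nn.functional.cross_entropy(logits, y_cls) + nn.functional.mse_loss(pred, y_reg)
loss.backward()
```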

Control variates are variance reduction tools for Monte Carlo estimators. They can provide significant variance reduction, but usually require a large number of samples, which can be prohibitive when sampling or evaluating the integrand is computationally expensive. Furthermore, there are many scenarios where we need to compute multiple related integrals simultaneously or sequentially, which can further exacerbate computational costs. In this paper, we propose vector-valued control variates, an extension of control variates which can be used to reduce the variance of multiple Monte Carlo estimators jointly. This allows for the transfer of information across integration tasks, and hence reduces the need for a large number of samples. We focus on control variates based on kernel interpolants and our novel construction is obtained through a generalised Stein identity and the development of novel matrix-valued Stein reproducing kernels. We demonstrate our methodology on a range of problems including multifidelity modelling, Bayesian inference for dynamical systems, and model evidence computation through thermodynamic integration.
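For contrast with the vector-valued construction, here is the standard scalar control variate that it generalizes (a minimal sketch; the integrand and the control variate are textbook choices, not the paper's kernel-based ones):

```python
# Estimate E[exp(U)] for U ~ Uniform(0, 1), using g(U) = U with known
# mean 1/2 as the control variate.
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=10_000)
f, g = np.exp(u), u

# Near-optimal coefficient beta* = Cov(f, g) / Var(g), estimated from the samples.
beta = np.cov(f, g)[0, 1] / np.var(g)
cv_estimate = np.mean(f - beta * (g - 0.5))

print(np.mean(f), cv_estimate)                      # both approximate e - 1
print(np.var(f), np.var(f - beta * (g - 0.5)))      # variance drops sharply
```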

Graphical models with heavy-tailed factors can be used to model extremal dependence or causality between extreme events. In a Bayesian network, variables are recursively defined in terms of their parents according to a directed acyclic graph (DAG). We focus on max-linear graphical models with respect to a special type of graph, which we call a tree of transitive tournaments. The latter are block graphs combining in a tree-like structure a finite number of transitive tournaments, each of which is a DAG in which any two nodes are connected. We study the limit of the joint tails of the max-linear model conditionally on the event that a given variable exceeds a high threshold. Under a suitable condition, the limiting distribution involves the factorization into independent increments along the shortest trail between two variables, thereby imitating the behavior of a Markov random field. We are also interested in the identifiability of the model parameters in the case where some variables are latent and only a subvector is observed. It turns out that the parameters are identifiable under a criterion on the nodes carrying the latent variables that is easy and quick to check.
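For concreteness, the recursive max-linear model on a DAG is commonly written as follows (standard form from the max-linear Bayesian network literature; notation may differ slightly from the paper's):

```latex
% Each variable is a weighted maximum of its parents and an independent
% heavy-tailed innovation Z_v, with positive edge weights c_{uv}:
X_v \;=\; \bigvee_{u \in \mathrm{pa}(v)} c_{uv}\, X_u \;\vee\; c_{vv}\, Z_v,
\qquad v \in V.
```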

We consider the problem of continuous-time policy evaluation. This consists of learning, from observations, the value function associated with an uncontrolled continuous-time stochastic dynamics and a reward function. We propose two original variants of the well-known TD(0) method using vanishing time steps. One is model-free and the other is model-based. For both methods, we prove theoretical convergence rates that we subsequently verify through numerical simulations. Alternatively, these methods can be interpreted as novel reinforcement learning approaches for approximating solutions of linear PDEs (partial differential equations) or linear BSDEs (backward stochastic differential equations).
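A schematic model-free variant in code (illustrative only: the dynamics, reward, discretization, and step sizes below are toy choices, and the paper's precise schemes and step-size schedules are not reproduced):

```python
# Tabular TD(0) with a small time step dt on an Ornstein-Uhlenbeck process
# dX = -X dt + sigma dW, reward r(x) = x^2, discount rate beta, so that
# V(x) approximates E[ integral of e^{-beta t} r(X_t) dt | X_0 = x ].
import numpy as np

rng = np.random.default_rng(1)
dt, beta, sigma, alpha = 0.01, 1.0, 0.5, 0.05
bins = np.linspace(-2.0, 2.0, 41)   # discretized state space
V = np.zeros(len(bins))             # tabular value estimates

def idx(x):
    return int(np.clip(np.searchsorted(bins, x), 0, len(bins) - 1))

x = 0.0
for _ in range(200_000):
    x_next = x - x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    # TD(0) target over one step of length dt, discounted by e^{-beta*dt}.
    target = (x ** 2) * dt + np.exp(-beta * dt) * V[idx(x_next)]
    V[idx(x)] += alpha * (target - V[idx(x)])
    x = x_next
```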

Deciding formulas mixing arithmetic and uninterpreted predicates is of practical interest, notably for applications in verification. Some decision procedures consist in building by structural induction an automaton that recognizes the set of models of the formula under analysis, and then testing whether this automaton accepts a non-empty language. A drawback is that universal quantification is usually handled by a reduction to existential quantification and complementation. For logical formalisms in which models are encoded as infinite words, this hinders the practical use of this method due to the difficulty of complementing infinite-word automata. The contribution of this paper is to introduce an algorithm for directly computing the effect of universal first-order quantifiers on automata recognizing sets of models, for formulas involving natural numbers encoded in unary notation. This makes it possible to apply the automata-based approach to obtain implementable decision procedures for various arithmetic theories.
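The final step of the approach, testing non-emptiness, is simple in the finite-word case; a minimal sketch follows (the states and transitions are hypothetical; Büchi non-emptiness for infinite words additionally requires finding an accepting cycle):

```python
# Language non-emptiness for a finite-word NFA reduces to graph reachability:
# the language is non-empty iff some accepting state is reachable from the
# initial state.
from collections import deque

def nonempty(initial, accepting, transitions):
    """transitions: dict mapping a state to an iterable of successor states."""
    seen, queue = {initial}, deque([initial])
    while queue:
        q = queue.popleft()
        if q in accepting:
            return True
        for q2 in transitions.get(q, ()):
            if q2 not in seen:
                seen.add(q2)
                queue.append(q2)
    return False

print(nonempty(0, {2}, {0: [1], 1: [2]}))  # True: an accepting run exists
```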

Answering complex logical queries on incomplete knowledge graphs is a challenging task, and has been widely studied. Embedding-based methods require training on complex queries, and cannot generalize well to out-of-distribution query structures. Recent work frames this task as an end-to-end optimization problem, and it only requires a pretrained link predictor. However, due to the exponentially large combinatorial search space, the optimal solution can only be approximated, limiting the final accuracy. In this work, we propose QTO (Query Computation Tree Optimization) that can efficiently find the exact optimal solution. QTO finds the optimal solution by a forward-backward propagation on the tree-like computation graph, i.e., query computation tree. In particular, QTO utilizes the independence encoded in the query computation tree to reduce the search space, where only local computations are involved during the optimization procedure. Experiments on 3 datasets show that QTO obtains state-of-the-art performance on complex query answering, outperforming previous best results by an average of 22%. Moreover, QTO can interpret the intermediate solutions for each of the one-hop atoms in the query with over 90% accuracy. The code of our paper is at //github.com/bys0318/QTO.
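A heavily simplified sketch of forward propagation on a tree-shaped computation graph (an illustration of the general idea only, not the authors' QTO implementation; the entity count, the random score matrix `P_r`, and the product t-norm are toy stand-ins for a pretrained link predictor and the paper's actual operators):

```python
# Scores live on entities; each tree node combines its children locally,
# so the cost is per-node rather than over the joint assignment space.
import numpy as np

n = 4                                                 # number of entities
P_r = np.random.default_rng(2).uniform(size=(n, n))   # P_r[i, j] ~ score of r(i, j)

def project(scores, P):
    # Relation projection: best way to reach each target entity through r.
    return (scores[:, None] * P).max(axis=0)

def intersect(*branch_scores):
    # Conjunction: combine branch scores with a product t-norm.
    out = branch_scores[0].copy()
    for s in branch_scores[1:]:
        out = out * s
    return out

anchor = np.zeros(n); anchor[0] = 1.0   # one-hot score vector for the anchor entity
answer_scores = intersect(project(anchor, P_r), project(anchor, P_r.T))
```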

We present representative sets-style statements for linear delta-matroids, which are set systems that generalize matroids, with important connections to matching theory and graph embeddings. Furthermore, our proof uses a new approach of sieving polynomial families, which generalizes the linear algebra approach of the representative sets lemma to a setting of bounded-degree polynomials. The representative sets statements for linear delta-matroids then follow by analyzing the Pfaffian of the skew-symmetric matrix representing the delta-matroid. Applying the same framework to the determinant instead of the Pfaffian recovers the representative sets lemma for linear matroids. Altogether, this significantly extends the toolbox available for kernelization. As an application, we show an exact sparsification result for Mader networks: Let $G=(V,E)$ be a graph and $\mathcal{T}$ a partition of a set of terminals $T \subseteq V(G)$, $|T|=k$. A $\mathcal{T}$-path in $G$ is a path with endpoints in distinct parts of $\mathcal{T}$ and internal vertices disjoint from $T$. In polynomial time, we can derive a graph $G'=(V',E')$ with $T \subseteq V(G')$, such that for every subset $S \subseteq T$ there is a packing of $\mathcal{T}$-paths with endpoints $S$ in $G$ if and only if there is one in $G'$, and $|V(G')|=O(k^3)$. This generalizes the (undirected version of the) cut-covering lemma, which corresponds to the case that $\mathcal{T}$ contains only two blocks. To prove the Mader network sparsification result, we furthermore define the class of Mader delta-matroids, and show that they have linear representations. This should be of independent interest.
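The classical identity behind moving between the Pfaffian and determinant settings (standard linear algebra, stated here for context): for a skew-symmetric matrix $A$ of even order $2n$,

```latex
\det(A) \;=\; \operatorname{pf}(A)^2,
\qquad
\operatorname{pf}(A) \;=\; \sum_{M} \operatorname{sgn}(M)
  \prod_{\substack{\{i,j\} \in M \\ i < j}} a_{ij},
% where M ranges over the perfect matchings of the index set {1, ..., 2n}.
```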

Motivated by the manifold hypothesis, which states that data with a high extrinsic dimension may yet have a low intrinsic dimension, we develop refined statistical bounds for entropic optimal transport that are sensitive to the intrinsic dimension of the data. Our bounds involve a robust notion of intrinsic dimension, measured at only a single distance scale depending on the regularization parameter, and show that it is only the minimum of these single-scale intrinsic dimensions which governs the rate of convergence. We call this the Minimum Intrinsic Dimension scaling (MID scaling) phenomenon, and establish MID scaling with no assumptions on the data distributions so long as the cost is bounded and Lipschitz, and for various entropic optimal transport quantities beyond just values, with stronger analogs when one distribution is supported on a manifold. Our results significantly advance the theoretical state of the art by showing that MID scaling is a generic phenomenon, and provide the first rigorous interpretation of the statistical effect of entropic regularization as a distance scale.
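For reference, the entropic optimal transport quantities in question are typically computed with Sinkhorn's algorithm; a minimal sketch follows (the data, cost, and regularization parameter `eps` are arbitrary toy choices):

```python
# Entropic OT: min_P <P, C> + eps * KL(P || a x b), solved by Sinkhorn
# fixed-point iterations on the scalings u, v of the Gibbs kernel K.
import numpy as np

rng = np.random.default_rng(3)
x, y = rng.normal(size=(50, 2)), rng.normal(size=(60, 2))
a, b = np.full(50, 1 / 50), np.full(60, 1 / 60)       # uniform marginals
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)    # squared-distance cost
eps = 0.1                                             # regularization = distance scale
K = np.exp(-C / eps)

u = np.ones(50)
for _ in range(500):                                  # Sinkhorn loop
    v = b / (K.T @ u)
    u = a / (K @ v)

P = u[:, None] * K * v[None, :]                       # entropic transport plan
ent_cost = (P * C).sum()                              # linear part of the entropic OT value
```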

The matrix factor model is drawing growing attention for the simultaneous two-way dimension reduction of well-structured matrix-valued observations. This paper focuses on robust statistical inference for the matrix factor model in the ``diverging dimension'' regime. We derive the convergence rates of the robust estimators for loadings, factors and common components under a finite second moment assumption on the idiosyncratic errors. In addition, the asymptotic distributions of the estimators are derived under mild conditions. We propose a rank minimization method and an eigenvalue-ratio method to estimate the pair of factor numbers consistently. Numerical studies confirm that the iterative Huber regression algorithm is a practical and reliable approach for estimating the matrix factor model, especially in the presence of heavy-tailed idiosyncratic errors. We illustrate the practical usefulness of the proposed methods on two real datasets, one on financial portfolios and one on macroeconomic indices of China.
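A schematic one-way version of the eigenvalue-ratio idea (illustrative only; the paper's estimator for the pair of factor numbers in the matrix model, and its robust Huber-based estimation, are more involved):

```python
# Pick the factor number at the largest gap in the sample covariance spectrum,
# i.e. the index maximizing the ratio of consecutive eigenvalues.
import numpy as np

rng = np.random.default_rng(4)
T, p, k_true = 500, 30, 3
F = rng.normal(size=(T, k_true))
L = rng.normal(size=(p, k_true))
X = F @ L.T + 0.5 * rng.normal(size=(T, p))       # vector factor model with noise

eigvals = np.linalg.eigvalsh(X.T @ X / T)[::-1]   # spectrum, descending
ratios = eigvals[:-1] / eigvals[1:]
k_hat = int(np.argmax(ratios[: p // 2])) + 1      # largest spectral gap
print(k_hat)                                       # typically recovers k_true = 3
```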

We consider a first-order logic for the integers with addition. This logic extends classical first-order logic by modulo-counting, threshold-counting and exact-counting quantifiers, all applied to tuples of variables (here, residues are given as terms while moduli and thresholds are given explicitly). Our main result shows that satisfiability for this logic is decidable in two-fold exponential space. If only threshold- and exact-counting quantifiers are allowed, we prove an upper bound of alternating two-fold exponential time with linearly many alternations. This latter result almost matches Berman's exact complexity of first-order logic without counting quantifiers. To obtain these results, we first translate threshold- and exact-counting quantifiers into classical first-order logic in polynomial time (which already proves the second result). To handle the remaining modulo-counting quantifiers for tuples, we first reduce them in doubly exponential time to modulo-counting quantifiers for single elements. For these quantifiers, we provide a quantifier elimination procedure similar to Reddy and Loveland's procedure for first-order logic and analyse the growth of coefficients, constants, and moduli appearing in this process. The bounds obtained this way allow us to restrict quantification in the original formula to integers of bounded size, which then implies the first result mentioned above. Our logic is incomparable with the logic considered by Chistikov et al. in 2022. They allow more general counting operations in quantifiers, but only unary quantifiers. The move from unary to non-unary quantifiers is non-trivial, since, e.g., the non-unary version of the H\"artig quantifier results in an undecidable theory.
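Schematically, the three kinds of counting quantifiers can be read as follows (illustrative notation, defined when the witness set is finite; the paper's concrete syntax, with residues given as terms, is not reproduced):

```latex
\exists^{\ge t} \bar{x}\, \varphi
  \;\Longleftrightarrow\; \bigl|\{\bar{a} : \varphi(\bar{a})\}\bigr| \ge t,
\qquad
\exists^{= t} \bar{x}\, \varphi
  \;\Longleftrightarrow\; \bigl|\{\bar{a} : \varphi(\bar{a})\}\bigr| = t,
\qquad
\exists^{\,q \ (\mathrm{mod}\ m)} \bar{x}\, \varphi
  \;\Longleftrightarrow\; \bigl|\{\bar{a} : \varphi(\bar{a})\}\bigr| \equiv q \pmod{m}.
```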

The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate that this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
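The "residual networks are discretisations" observation in one sketch: an explicit Euler step of $dx/dt = f_\theta(x)$ is exactly a residual block (illustrative; a practical neural ODE would use an adaptive solver and adjoint or reversible backpropagation, both of which the thesis surveys):

```python
# A neural vector field integrated with fixed-step Euler; each step
# x <- x + dt * f(x) has the form of a residual block.
import torch
import torch.nn as nn

class VectorField(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, x):
        return self.net(x)

def odeint_euler(f, x0, t0=0.0, t1=1.0, steps=20):
    x, dt = x0, (t1 - t0) / steps
    for _ in range(steps):
        x = x + dt * f(x)   # one Euler step == one residual block
    return x

f = VectorField()
x0 = torch.randn(8, 16)
x1 = odeint_euler(f, x0)    # differentiable end state; train with any loss on x1
```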
