
Machine learning with Semantic Web ontologies follows several strategies, one of which involves projecting ontologies into graph structures and applying graph embeddings or graph-based machine learning methods to the resulting graphs. Several methods have been developed that project ontology axioms into graphs. However, these methods are limited in the type of axioms they can project (totality), whether they are invertible (injectivity), and how they exploit semantic information. These limitations restrict the kinds of tasks to which they can be applied. Category-theoretical semantics of logic languages formalizes interpretations using categories instead of sets, and categories have a graph-like structure. We developed CatE, which uses the category-theoretical formulation of the semantics of the Description Logic $\mathcal{ALC}$ to generate a graph representation for ontology axioms. The CatE projection is total and injective, and therefore overcomes limitations of other graph-based ontology embedding methods, which are generally not invertible. We apply CatE to a number of different tasks, including deductive and inductive reasoning, and we demonstrate that CatE improves over state-of-the-art ontology embedding methods. Furthermore, we show that CatE can also outperform model-theoretic ontology embedding methods in machine learning tasks in the biomedical domain.
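To make the totality/injectivity distinction concrete, the following is a minimal sketch (not the CatE construction itself) of a projection of a tiny DL fragment into labeled edges together with its inverse; the axiom encoding and edge labels such as "subclassOf" are hypothetical choices for illustration.

```python
# Illustrative sketch (not the actual CatE projection): a total, injective
# projection of a tiny DL fragment into labeled edges, with its inverse.
# Axioms are tuples; the edge labels below are made-up conventions.

def project(axioms):
    """Map each axiom to exactly one labeled edge. On this fragment the map
    is total (every axiom gets an edge) and injective (distinct axiom sets
    give distinct edge sets), so it can be inverted."""
    edges = set()
    for ax in axioms:
        if ax[0] == "subclass":                # C ⊑ D
            _, c, d = ax
            edges.add((c, "subclassOf", d))
        elif ax[0] == "exists":                # C ⊑ ∃R.D
            _, c, r, d = ax
            edges.add((c, f"exists:{r}", d))
    return edges

def invert(edges):
    """Recover the original axiom set from the edge set."""
    axioms = set()
    for h, label, t in edges:
        if label == "subclassOf":
            axioms.add(("subclass", h, t))
        elif label.startswith("exists:"):
            axioms.add(("exists", h, label.split(":", 1)[1], t))
    return axioms
```

A non-injective projection would, by contrast, map different axioms to the same edge, making the round trip `invert(project(axioms))` lossy.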

Related content

We consider a collection of categorical random variables. Of special interest is the causal effect on an outcome variable following an intervention on another variable. Conditionally on a Directed Acyclic Graph (DAG), we assume that the joint law of the random variables can be factorized according to the DAG, where each term is a categorical distribution for the node-variable given a configuration of its parents. The graph is equipped with a causal interpretation through the notion of interventional distribution and the allied "do-calculus". From a modeling perspective, the likelihood is decomposed into a product over nodes and parents of DAG-parameters, on which a suitably specified collection of Dirichlet priors is assigned. The overall joint distribution on the ensemble of DAG-parameters is then constructed using global and local independence. We account for DAG-model uncertainty and propose a reversible jump Markov Chain Monte Carlo (MCMC) algorithm which targets the joint posterior over DAGs and DAG-parameters; from the output we are able to recover a full posterior distribution of any causal effect coefficient of interest, possibly summarized by a Bayesian Model Averaging (BMA) point estimate. We validate our method through extensive simulation studies, wherein comparisons with alternative state-of-the-art procedures reveal superior estimation accuracy. Finally, we analyze a dataset from a study on depression and anxiety in undergraduate students.
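The interventional distribution underlying the do-calculus can be sketched for a fixed DAG with categorical variables: intervening on a node deletes its conditional term from the factorization (the g-formula), which generally differs from conditioning. The DAG Z → X → Y with Z → Y and all probability tables below are made up for illustration.

```python
import numpy as np

# Minimal sketch of an interventional ("do") distribution for categorical
# variables on a toy DAG Z -> X -> Y with Z -> Y; all numbers are made up.
pz = np.array([0.6, 0.4])                      # P(Z)
px_z = np.array([[0.7, 0.3],                   # P(X | Z): rows z, cols x
                 [0.2, 0.8]])
py1_xz = np.array([[0.1, 0.5],                 # P(Y=1 | X, Z): rows z, cols x
                   [0.3, 0.9]])

# Truncated factorization (g-formula):
# P(Y=1 | do(X=x)) = sum_z P(z) * P(Y=1 | x, z)
p_do = pz @ py1_xz                             # vector indexed by x

# Observational P(Y=1 | X=x) = sum_z P(z | x) * P(Y=1 | x, z), which
# differs in general because conditioning reweights Z.
px = pz @ px_z                                 # P(X)
pz_x = (pz[:, None] * px_z) / px               # P(Z | X) by Bayes' rule
p_see = (pz_x * py1_xz).sum(axis=0)
```

With these numbers, P(Y=1 | do(X=1)) = 0.66 while P(Y=1 | X=1) = 0.756, illustrating why causal effects cannot be read off conditional distributions when a parent of the treatment also affects the outcome.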

The focus of this article is on shape and topology optimization of transient vibroacoustic problems. The main contribution is a transient problem formulation that enables optimization over wide ranges of frequencies with complex signals, which are often of interest in industry. The work employs time-domain methods to realize wide-band optimization in the frequency domain. To this end, the objective function is defined in the frequency domain, where the frequency response of the system is obtained through a fast Fourier transform (FFT) algorithm applied to the transient response of the system. The work utilizes a parametric level set approach to implicitly define the geometry, in which the zero level describes the interface between acoustic and structural domains. A cut element method is used to capture the geometry on a fixed background mesh through utilization of a special integration scheme that accurately resolves the interface. This allows for accurate solutions to strongly coupled vibroacoustic systems without having to re-mesh at each design update. The present work relies on efficient gradient-based optimizers, where the discrete adjoint method is used to calculate the sensitivities of objective and constraint functions. A thorough explanation of the consistent sensitivity calculation is given, involving the FFT operation needed to define the objective function in the frequency domain. Finally, the developed framework is applied to various vibroacoustic filter designs and the optimization results are verified using commercial finite element software with a steady-state time-harmonic formulation.
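The core FFT step can be sketched in a few lines: a transient response is simulated (here the analytic impulse response of a single damped oscillator stands in for a FEM solution), the frequency response is recovered with an FFT, and a frequency-domain objective is evaluated over a target band. All parameters are illustrative, not from the paper.

```python
import numpy as np

# Sketch: obtain a frequency response from a transient signal via FFT.
f0, zeta = 5.0, 0.02                 # natural frequency [Hz], damping ratio
fs, T = 200.0, 20.0                  # sampling rate [Hz], duration [s]
t = np.arange(0.0, T, 1.0 / fs)
wn = 2 * np.pi * f0
wd = wn * np.sqrt(1 - zeta**2)       # damped natural frequency
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / wd   # impulse response

H = np.fft.rfft(h) / fs              # frequency response estimate
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
f_peak = freqs[np.argmax(np.abs(H))]  # should sit near the resonance at f0

# A wide-band objective can then be defined on |H| over a target band,
# e.g. a simple integrated magnitude (rectangle rule):
band = (freqs > 2.0) & (freqs < 8.0)
objective = np.sum(np.abs(H[band])) * (freqs[1] - freqs[0])
```

In the paper's setting the sensitivities of such an objective must be propagated back through the FFT and the transient solve via the discrete adjoint method; the sketch only shows the forward evaluation.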

Nurmuhammad et al. developed Sinc-Nystr\"{o}m methods for initial value problems whose solutions exhibit exponential decay end behavior. In these methods, the Single-Exponential (SE) transformation or the Double-Exponential (DE) transformation is combined with the Sinc approximation. Hara and Okayama improved those transformations so that a better convergence rate could be attained, which was afterward supported by theoretical error analyses. However, due to a special function included in the basis functions, the methods have a computational drawback. To address this issue, Okayama and Hara proposed Sinc-collocation methods, which do not include any special function in the basis functions. This study gives error analyses for those methods.
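The basic Sinc approximation underlying these methods uses only elementary basis functions. A minimal sketch, assuming a function that decays exponentially on the real line and a standard SE-type mesh-size choice (the constants below are illustrative, not those of the cited papers):

```python
import numpy as np

# Sinc approximation on the real line:
#   f(x) ≈ sum_{k=-N}^{N} f(kh) * S(k,h)(x),  S(k,h)(x) = sinc((x - kh)/h),
# where np.sinc is the normalized sinc, sin(pi x)/(pi x). For f decaying
# like exp(-alpha |x|), choosing h ~ sqrt(pi d / (alpha N)) yields
# O(exp(-c sqrt(N))) convergence.
def sinc_approx(f, x, N, h):
    k = np.arange(-N, N + 1)
    return np.sum(f(k * h) * np.sinc((x - k * h) / h))

f = lambda x: 1.0 / np.cosh(x)          # sech decays like 2 exp(-|x|)
N = 50
h = np.pi / np.sqrt(2 * N)              # SE-type mesh (alpha = 1, d = pi/2)
err = abs(sinc_approx(f, 0.5, N, h) - f(0.5))
```

The point of the Sinc-collocation variants is that the basis stays this elementary; no special function needs to be evaluated.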

We consider the Sobolev embedding operator $E_s : H^s(\Omega) \to L_2(\Omega)$ and its role in the solution of inverse problems. In particular, we collect various properties and investigate different characterizations of its adjoint operator $E_s^*$, which is a common component in both iterative and variational regularization methods. These include variational representations and connections to boundary value problems, Fourier and wavelet representations, as well as connections to spatial filters. Moreover, we consider characterizations in terms of Fourier series, singular value decompositions and frame decompositions, as well as representations in finite dimensional settings. While many of these results are already known to researchers from different fields, a detailed and general overview or reference work containing rigorous mathematical proofs is still missing. Hence, in this paper we aim to fill this gap by collecting, introducing and generalizing a large number of characterizations of $E_s^*$, and we discuss their use in regularization methods for solving inverse problems. The resulting compilation can serve both as a reference and as a useful guide for its efficient numerical implementation in practice.
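One of the simplest characterizations can be sketched numerically in the Fourier-series setting. Assuming the periodic Sobolev inner product $\langle u, v\rangle_{H^s} = \sum_k (1+k^2)^s \hat u_k \overline{\hat v_k}$, the adjoint of the embedding acts on Fourier coefficients as division by the weight, $(E_s^* g)_k = \hat g_k / (1+k^2)^s$, and the defining identity $\langle E_s u, g\rangle_{L_2} = \langle u, E_s^* g\rangle_{H^s}$ can be checked directly on random coefficient vectors:

```python
import numpy as np

# Verify <E_s u, g>_{L2} = <u, E_s^* g>_{H^s} in Fourier coefficients,
# where (E_s^* g)_k = g_k / (1 + k^2)^s. Coefficients are random; s = 1.5.
rng = np.random.default_rng(0)
K = 16
k = np.arange(-K, K + 1)
w = (1.0 + k**2) ** 1.5                         # Sobolev weights (1+k^2)^s

u = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
g = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)

adj_g = g / w                                   # coefficients of E_s^* g

lhs = np.vdot(g, u)                             # <E_s u, g>_{L2} = sum u_k conj(g_k)
rhs = np.sum(w * u * np.conj(adj_g))            # <u, E_s^* g>_{H^s}
```

This coefficient-wise division is also what makes $E_s^*$ act as a smoothing (low-pass) spatial filter, which connects the Fourier characterization to the filtering viewpoint mentioned above.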

Knowledge graphs (KGs), as structured representations of real world facts, are intelligent databases incorporating human knowledge that can help machines imitate human problem solving. However, KGs are usually huge and inevitably contain missing facts, thus undermining applications such as question answering and recommender systems that are based on knowledge graph reasoning. Link prediction for knowledge graphs is the task of completing missing facts by reasoning over the existing knowledge. Two main streams of research are widely studied: one learns low-dimensional embeddings for entities and relations that can capture latent patterns, and the other gains good interpretability by mining logical rules. Unfortunately, the heterogeneity of modern KGs, which involve entities and relations of various types, is not well considered in previous studies. In this paper, we propose DegreEmbed, a model that combines embedding-based learning and logic rule mining for inference on KGs. Specifically, we study the problem of predicting missing links in heterogeneous KGs from the perspective of node degree. Experimentally, we demonstrate that our DegreEmbed model outperforms state-of-the-art methods on real world datasets, and the rules mined by our model are of high quality and interpretability.

We revisit the seeds algorithm for exploring the semigroup tree. First, an equivalent definition of seed is presented, which seems easier to manage. Second, we determine the seeds of semigroups with at most three left elements. And third, we find the great-grandchildren of any numerical semigroup in terms of its seeds. The RGD algorithm is currently the fastest known algorithm. But if one compares the original seeds algorithm with the RGD algorithm, one observes that the seeds algorithm uses more elaborate mathematical tools, while the RGD algorithm uses data structures that are better adapted to the final C implementations. For genera up to around one half of the maximum size of native integers, the newly defined seeds algorithm performs significantly better than the RGD algorithm. For future compilers allowing larger native integer sizes, this may constitute a powerful tool to explore the semigroup tree up to genera never explored before. The new seeds algorithm uses bitwise integer operations, the knowledge of the seeds of semigroups with at most three left elements and of the great-grandchildren of any numerical semigroup, apart from techniques such as parallelization and depth first search as wisely introduced in this context by Fromentin and Hivert. The algorithm has been used to prove that there are no Eliahou semigroups of genus $66$, hence proving the Wilf conjecture for genus up to $66$. We also found three Eliahou semigroups of genus $67$. One of these semigroups is neither of Eliahou-Fromentin type, nor of Delgado's type. However, it is a member of a new family suggested by Shalom Eliahou.
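The semigroup tree itself is easy to sketch: the root is $\mathbb{N}$, and the children of a semigroup are obtained by removing one minimal generator larger than the Frobenius number, which increases the genus by one. The naive breadth-first exploration below (set-based, without the bitwise and seed machinery of the paper) reproduces the well-known counts of numerical semigroups by genus (OEIS A007323).

```python
import itertools

def children(gaps):
    """Children in the semigroup tree: remove one minimal generator
    greater than the Frobenius number. A semigroup is represented by
    its (frozen) set of gaps; genus = number of gaps."""
    f = max(gaps) if gaps else -1                             # Frobenius number
    m = next(i for i in itertools.count(1) if i not in gaps)  # multiplicity
    cands = set(range(f + 1, f + m + 1))   # generators > f lie in (f, f+m]
    if m > f:                              # edge case at the root (S = N)
        cands.add(m)
    kids = []
    for c in sorted(cands):
        if c < 1:
            continue
        # c is a minimal generator iff it is not a sum of two nonzero elements
        if all(a in gaps or (c - a) in gaps for a in range(1, c)):
            kids.append(gaps | {c})
    return kids

def count_by_genus(gmax):
    """Breadth-first count of numerical semigroups of each genus."""
    counts, level = [], [frozenset()]       # root: N, with no gaps
    for g in range(gmax + 1):
        counts.append(len(level))
        if g < gmax:
            level = [kid for s in level for kid in children(s)]
    return counts
```

The seeds and RGD algorithms explore this same tree; their speed comes from replacing the generator test above with precomputed seed data and bitwise operations on machine words.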

We consider the problem of recovering conditional independence relationships between $p$ jointly distributed Hilbertian random elements given $n$ realizations thereof. We operate in the sparse high-dimensional regime, where $n \ll p$ and no element is related to more than $d \ll p$ other elements. In this context, we propose an infinite-dimensional generalization of the graphical lasso. We prove model selection consistency under natural assumptions and extend many classical results to infinite dimensions. In particular, we do not require finite truncation or additional structural restrictions. The plug-in nature of our method makes it applicable to any observational regime, whether sparse or dense, and indifferent to serial dependence. Importantly, our method can be understood as naturally arising from a coherent maximum likelihood philosophy.
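The finite-dimensional principle being generalized can be sketched directly: for a Gaussian vector, two coordinates are conditionally independent given the rest exactly when the corresponding entry of the precision (inverse covariance) matrix vanishes, which is the sparsity the graphical lasso penalizes. The numbers below are hand-picked for illustration.

```python
import numpy as np

# Conditional independence <-> zeros of the precision matrix (Gaussian case).
Theta = np.array([[2.0, 0.6, 0.0, 0.0],      # sparse precision: a chain graph
                  [0.6, 2.0, 0.6, 0.0],      # 0 - 1 - 2 - 3
                  [0.0, 0.6, 2.0, 0.6],
                  [0.0, 0.0, 0.6, 2.0]])
Sigma = np.linalg.inv(Theta)                 # covariance: generally dense

# Marginally, X0 and X2 are dependent (nonzero covariance) ...
marg_cov = Sigma[0, 2]
# ... but conditionally on the remaining coordinates they are independent:
prec_02 = np.linalg.inv(Sigma)[0, 2]         # recovers Theta[0, 2] = 0
```

Estimating the sparse `Theta` from $n \ll p$ samples by $\ell_1$-penalized likelihood is the graphical lasso; the paper lifts this picture to Hilbertian (infinite-dimensional) coordinates.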

Causal effect estimation has been studied by many researchers when only observational data is available. Sound and complete algorithms have been developed for pointwise estimation of identifiable causal queries. For non-identifiable causal queries, researchers developed polynomial programs to estimate tight bounds on causal effects. However, these are computationally difficult to optimize for variables with large support sizes. In this paper, we analyze the effect of "weak confounding" on causal estimands. More specifically, under the assumption that the unobserved confounders that render a query non-identifiable have small entropy, we propose an efficient linear program to derive upper and lower bounds on the causal effect. We show that our bounds are consistent in the sense that as the entropy of the unobserved confounders goes to zero, the gap between the upper and lower bound vanishes. Finally, we conduct synthetic and real data simulations to compare our bounds with the bounds obtained by existing work that cannot incorporate such entropy constraints, and show that our bounds are tighter in the setting with weak confounders.
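A linear-programming formulation of causal bounds can be sketched in the simplest binary case. The sketch below omits the paper's entropy constraint entirely, so it recovers only the loose, assumption-free bounds over "response types" (deterministic maps from treatment to outcome); the observed joint is made up.

```python
import numpy as np
from scipy.optimize import linprog

# Bound P(Y=1 | do(X=1)) for binary X, Y with an arbitrary discrete
# confounder. LP variables: q[rY, x] = P(response type rY, X = x), where
# rY = (y if x=0, y if x=1) ranges over the 4 deterministic maps.
p_xy = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}  # observed P(x, y)

types = [(0, 0), (0, 1), (1, 0), (1, 1)]
idx = {key: i for i, key in enumerate((rY, x) for rY in types for x in (0, 1))}

# Consistency with the data: sum over types with rY[x] = y of q[rY, x] = P(x, y)
A_eq, b_eq = [], []
for (x, y), p in p_xy.items():
    row = np.zeros(len(idx))
    for rY in types:
        if rY[x] == y:
            row[idx[(rY, x)]] = 1.0
    A_eq.append(row)
    b_eq.append(p)

# Objective: P(Y=1 | do(X=1)) = sum over rY with rY[1] = 1, over both x
c = np.zeros(len(idx))
for rY in types:
    if rY[1] == 1:
        for x in (0, 1):
            c[idx[(rY, x)]] = 1.0

lower = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
upper = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
```

With these numbers the LP returns the classical natural bounds $[P(x{=}1, y{=}1),\; P(x{=}1, y{=}1) + P(x{=}0)] = [0.4, 0.7]$; the paper's contribution is an additional linear constraint encoding low confounder entropy, which tightens this interval.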

Knowledge graph embedding (KGE) is an increasingly popular technique that aims to embed entities and relations of knowledge graphs into low-dimensional semantic spaces for a wide spectrum of applications such as link prediction, knowledge reasoning and knowledge completion. In this paper, we provide a systematic review of existing KGE techniques based on representation spaces. Particularly, we build a fine-grained classification to categorise the models based on three mathematical perspectives of the representation spaces: (1) Algebraic perspective, (2) Geometric perspective, and (3) Analytical perspective. We introduce the rigorous definitions of fundamental mathematical spaces before diving into KGE models and their mathematical properties. We further discuss different KGE methods over the three categories, as well as summarise how the advantages of each space serve different embedding needs. By collating the experimental results from downstream tasks, we also explore the advantages of mathematical spaces in different scenarios and the reasons behind them. We further state some promising research directions from a representation space perspective, with which we hope to inspire researchers to design their KGE models, as well as their related applications, with more consideration of the properties of the underlying mathematical space.
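A canonical example from the algebraic perspective is TransE, which models a relation as a translation in $\mathbb{R}^d$ and scores a triple $(h, r, t)$ by $-\|\mathbf{h} + \mathbf{r} - \mathbf{t}\|$. The toy sketch below trains it with the standard margin loss on a made-up graph; all hyperparameters are illustrative.

```python
import numpy as np

# Toy TransE: relations as translations, margin-based ranking loss.
rng = np.random.default_rng(0)
n_ent, n_rel, dim, gamma, lr = 5, 2, 16, 1.0, 0.05
E = rng.normal(scale=0.1, size=(n_ent, dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(n_rel, dim))   # relation embeddings
triples = [(0, 0, 1), (1, 0, 2), (2, 1, 3), (3, 1, 4)]  # made-up graph

def epoch_loss():
    """One pass of SGD on max(0, gamma + d(h,r,t) - d(h,r,t')) and
    return the total hinge loss for the epoch."""
    loss = 0.0
    for h, r, t in triples:
        t_neg = (t + 1) % n_ent                      # simple tail corruption
        d_pos = np.linalg.norm(E[h] + R[r] - E[t])
        d_neg = np.linalg.norm(E[h] + R[r] - E[t_neg])
        margin = gamma + d_pos - d_neg
        if margin > 0:
            loss += margin
            g_pos = (E[h] + R[r] - E[t]) / (d_pos + 1e-9)
            g_neg = (E[h] + R[r] - E[t_neg]) / (d_neg + 1e-9)
            E[h] -= lr * (g_pos - g_neg)
            R[r] -= lr * (g_pos - g_neg)
            E[t] += lr * g_pos
            E[t_neg] -= lr * g_neg
    return loss

losses = [epoch_loss() for _ in range(200)]
```

Models in the geometric and analytical categories replace this Euclidean translation with, for example, rotations, hyperbolic operations, or function-space constructions, while keeping the same triple-scoring template.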

Knowledge Graph Embedding (KGE) aims to learn representations for entities and relations. Many KGE models have achieved great success, especially in extrapolation scenarios. Specifically, given an unseen triple (h, r, t), a trained model can still correctly predict t from (h, r, ?) or h from (?, r, t); such extrapolation ability is impressive. However, most existing KGE works focus on the design of delicate triple modeling functions, which mainly tell us how to measure the plausibility of observed triples, but offer limited explanation of why the methods can extrapolate to unseen data and which factors are important in helping KGE extrapolate. Therefore, in this work we study KGE extrapolation through two problems: 1. How does KGE extrapolate to unseen data? 2. How can we design a KGE model with better extrapolation ability? For problem 1, we first discuss the factors that affect extrapolation and, at the relation, entity and triple levels respectively, propose three Semantic Evidences (SEs), which can be observed from the training set and provide important semantic information for extrapolation. Then we verify the effectiveness of SEs through extensive experiments on several typical KGE methods. For problem 2, to make better use of the three levels of SE, we propose a novel GNN-based KGE model, called Semantic Evidence aware Graph Neural Network (SE-GNN). In SE-GNN, each level of SE is modeled explicitly by the corresponding neighbor pattern and merged sufficiently by multi-layer aggregation, which contributes to obtaining more extrapolative knowledge representations. Finally, through extensive experiments on the FB15k-237 and WN18RR datasets, we show that SE-GNN achieves state-of-the-art performance on the Knowledge Graph Completion task and exhibits better extrapolation ability.
