
Bayesian nonparametric hierarchical priors provide flexible models for sharing information within and across groups. We focus on latent feature allocation models, where the data structures correspond to multisets or unbounded sparse matrices. The fundamental development in this regard is the hierarchical Indian buffet process (HIBP), devised by Thibaux and Jordan (2007). However, little is known in terms of explicit, tractable descriptions of the joint, marginal, posterior and predictive distributions of the HIBP. We provide novel explicit descriptions of these quantities, in the Bernoulli HIBP and general spike-and-slab HIBP settings, which allow for exact sampling and simpler practical implementation. We then extend these results to the more complex setting of hierarchies of general HIBPs (HHIBP). The generality of our framework allows one to recognize important structure that may otherwise be masked in the Bernoulli setting, and involves characterizations via dynamic mixed Poisson random count matrices. Our analysis shows that the standard choice of hierarchical Beta processes for modeling across-group sharing is not ideal in the classic Bernoulli HIBP setting proposed by Thibaux and Jordan (2007), or in other spike-and-slab HIBP settings, and we thus indicate tractable alternative priors.
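The HIBP builds on the standard Indian buffet process. As background, here is a minimal sketch of the well-known one-parameter IBP generative scheme, not the paper's hierarchical construction or its posterior characterizations; the function and parameter names are illustrative.

```python
import math
import random

def _poisson(lam, rng):
    """Sample from Poisson(lam) via Knuth's method (fine for small lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def sample_ibp(n_customers, alpha, seed=0):
    """One draw from the one-parameter Indian buffet process.

    Customer i (1-indexed) takes existing dish k with probability m_k / i,
    where m_k counts previous customers who took dish k, then samples a
    Poisson(alpha / i) number of brand-new dishes.  Returns the binary
    feature matrix as a list of equal-length 0/1 rows.
    """
    rng = random.Random(seed)
    m = []                        # m[k]: how many customers took dish k
    rows = []
    for i in range(1, n_customers + 1):
        row = [1 if rng.random() < mk / i else 0 for mk in m]
        for k, took in enumerate(row):
            m[k] += took
        for _ in range(_poisson(alpha / i, rng)):
            row.append(1)         # new dish, taken by its creator
            m.append(1)
        rows.append(row)
    width = len(m)
    return [r + [0] * (width - len(r)) for r in rows]
```

The resulting matrix has a random, unbounded number of columns, matching the "unbounded sparse matrices" the abstract refers to.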


Bergsma (2006) proposed a covariance $\kappa(X,Y)$ between random variables X and Y, along with two estimates of it, and derived their asymptotic distributions under the null hypothesis of independence between X and Y. The non-null (dependent) case does not seem to have been studied in the literature. We derive several alternative expressions for $\kappa$. One of them leads us to a very intuitive estimator of $\kappa(X,Y)$ that is a nice function of four naturally arising U-statistics. We derive the exact finite-sample relation between all three estimates. The asymptotic distribution of our estimator, and hence also of the other two estimators, in the non-null (dependent) case is then obtained by using the central limit theorem for U-statistics. For specific parametric bivariate distributions, the value of $\kappa$ can be derived in terms of the natural dependence parameters of these distributions. In particular, we derive the formula for $\kappa$ when (X,Y) follows Gumbel's bivariate exponential distribution. We bring out various aspects of these estimators through extensive simulations from several prominent bivariate distributions. In particular, we investigate the empirical relationship between $\kappa$ and the dependence parameters, the distributional properties of the estimators, and their accuracy. We also investigate the power of these measures for testing independence, and compare them among themselves and with other well-known measures. Based on these exercises, the proposed estimator seems as good as or better than its competitors in terms of both power and computing efficiency.
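The abstract does not reproduce the definition of $\kappa$ or its four U-statistics, so the following is only a generic illustration of the U-statistic machinery it relies on: Kendall's tau written as an order-2 U-statistic, i.e. an average of a symmetric kernel over all pairs of observations, the same pattern behind the estimator described above.

```python
from itertools import combinations

def kendall_tau_u(xs, ys):
    """Kendall's tau as an order-2 U-statistic: the average of the kernel
    sign((x_i - x_j)(y_i - y_j)) over all unordered pairs i < j."""
    def sign(v):
        return (v > 0) - (v < 0)
    pairs = list(combinations(range(len(xs)), 2))
    return sum(sign((xs[i] - xs[j]) * (ys[i] - ys[j]))
               for i, j in pairs) / len(pairs)
```

Because this is an average of a fixed kernel over pairs, its non-null limiting distribution follows from the U-statistics central limit theorem, exactly the route the abstract takes for $\kappa$.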

Causal mapping of the functional organisation of the human brain requires evidence of \textit{necessity} available at adequate scale only from pathological lesions of natural origin. This demands inferential models with sufficient flexibility to capture both the observable distribution of pathological damage and the unobserved distribution of the neural substrate. Current model frameworks -- both mass-univariate and multivariate -- either ignore distributed lesion-deficit relations or do not model them explicitly, relying on featurization incidental to a predictive task. Here we initiate the application of deep generative neural network architectures to the task of lesion-deficit inference, formulating it as the estimation of an expressive hierarchical model of the joint lesion and deficit distributions conditioned on a latent neural substrate. We implement such deep lesion-deficit inference with variational convolutional volumetric auto-encoders. We introduce a comprehensive framework for lesion-deficit model comparison, incorporating diverse candidate substrates, forms of substrate interactions, sample sizes, noise corruption, and population heterogeneity. Drawing on 5500 volume images of ischaemic stroke, we show that our model outperforms established methods by a substantial margin across all simulation scenarios, including comparatively small-scale and noisy data regimes. Our analysis justifies the widespread adoption of this approach, for which we provide an open-source implementation: https://github.com/guilherme-pombo/vae_lesion_deficit

We introduce an approach which allows inferring causal relationships between variables for which the time evolution is available. Our method builds on the ideas of Granger Causality and Transfer Entropy, but overcomes most of their limitations. Specifically, our approach tests whether the predictability of a putative driven system Y can be improved by incorporating information from a potential driver system X, without making assumptions on the underlying dynamics and without the need to compute probability densities of the dynamic variables. Causality is assessed by a rigorous variational scheme based on the Information Imbalance of distance ranks, a recently developed statistical test capable of inferring the relative information content of different distance measures. This framework makes causality detection possible even for high-dimensional systems where only a few of the variables are known or measured. Benchmark tests on coupled dynamical systems demonstrate that our approach outperforms other model-free causality detection methods, successfully handling both unidirectional and bidirectional couplings, and is capable of detecting the arrow of time when present. We also show that the method can be used to robustly detect causality in electroencephalography data in humans.
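A minimal sketch of the Information Imbalance statistic itself may help: following its published definition, $\Delta(X \to Y) = (2/N)\,\langle \text{rank in } Y \text{ of the nearest neighbour found in } X \rangle$, with values near 0 when distances in X predict neighbourhoods in Y and near 1 when they carry no information. The brute-force $O(N^2)$ search and Euclidean metric below are simplifying choices for illustration, not the paper's variational scheme.

```python
def information_imbalance(X, Y):
    """Delta(X -> Y): (2/N) times the mean rank, in space Y, of each
    point's nearest neighbour in space X.  X and Y are equal-length
    lists of coordinate tuples describing the same N samples."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    n = len(X)
    total = 0
    for i in range(n):
        # nearest neighbour of sample i under the X distances
        j = min((k for k in range(n) if k != i), key=lambda k: d2(X[i], X[k]))
        # rank of j among i's neighbours under the Y distances (1 = nearest)
        dij = d2(Y[i], Y[j])
        rank = 1 + sum(1 for k in range(n) if k != i and d2(Y[i], Y[k]) < dij)
        total += rank
    return 2 * total / (n * n)
```

Comparing $\Delta(X \to Y)$ against $\Delta(Y \to X)$ is what makes the approach directional, and hence usable for inferring drivers.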

Generating desirable molecular structures in 3D is a fundamental problem for drug discovery. Despite the considerable progress we have achieved, existing methods usually generate molecules at atom resolution and ignore intrinsic local structures such as rings, which leads to poor quality in generated structures, especially when generating large molecules. Fragment-based molecule generation is a promising strategy; however, it is nontrivial to adapt it for 3D non-autoregressive generation because of the underlying combinatorial optimization problem. In this paper, we utilize a coarse-to-fine strategy to tackle this problem, in which a Hierarchical Diffusion-based model (i.e.~HierDiff) is proposed to preserve the validity of local segments without relying on autoregressive modeling. Specifically, HierDiff first generates coarse-grained molecule geometries via an equivariant diffusion process, where each coarse-grained node reflects a fragment in a molecule. Then the coarse-grained nodes are decoded into fine-grained fragments by a message-passing process and a newly designed iterative refined sampling module. Lastly, the fine-grained fragments are assembled to derive a complete atomic molecular structure. Extensive experiments demonstrate that HierDiff consistently improves the quality of molecule generation over existing methods.

Background: Despite a growing body of literature on the impact of software bots on open source software development teams, their effects on team communication, coordination, and collaboration practices are not well understood. Bots can have negative consequences, such as producing information overload or reducing interactions between developers. Objective: The objective of this study is to investigate the effect of specific GitHub Actions on the collaboration networks of open source software teams, using a network-analytic approach. The study will focus on code review bots, one of the most frequently implemented types of bots. Method: Fine-grained, time-stamped data of co-editing networks, developer-file contribution networks, as well as workflow runs and git commit logs will be obtained from a large sample of GitHub repositories. This will allow us to study how bots affect the collaboration networks of developers over time. By using a more representative sample of GitHub repositories than previous studies, one that includes projects whose sizes span the whole range of open source communities, this study will provide generalizable results and updated findings on the general usage and distribution of GitHub Actions. With this study, we aim to contribute to advancing our knowledge of human-bot interaction and the effects of support tools on software engineering teams.

The angular measure on the unit sphere characterizes the first-order dependence structure of the components of a random vector in extreme regions and is defined in terms of standardized margins. Its statistical recovery is an important step in learning problems involving observations far away from the center. In the common situation that the components of the vector have different distributions, the rank transformation offers a convenient and robust way of standardizing data in order to build an empirical version of the angular measure based on the most extreme observations. We provide a functional asymptotic expansion for the empirical angular measure in the bivariate case based on the theory of weak convergence in the space of bounded functions. From the expansion, not only can the known asymptotic distribution of the empirical angular measure be recovered, but it also becomes possible to find expansions and weak limits for other statistics based on the associated empirical process or its quantile version.
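The rank-based construction behind the empirical angular measure can be sketched as follows. This is the textbook recipe (unit-Pareto standardization via ranks, then L1 pseudo-polar coordinates), with function and variable names chosen for illustration rather than taken from the paper.

```python
def empirical_angular_measure(xs, ys, k):
    """Empirical angular measure from rank-standardised bivariate data.

    Each margin is standardised to the unit-Pareto scale via ranks,
    u_i = (n + 1) / (n + 1 - R_i).  In pseudo-polar coordinates,
    r = u + v is the radius and w = u / (u + v) the angle in [0, 1];
    the function returns the sorted angles of the k observations with
    the largest radius, i.e. the points used to estimate the measure.
    """
    n = len(xs)
    def pareto(values):
        order = sorted(range(n), key=lambda i: values[i])
        rank = [0] * n
        for pos, i in enumerate(order, start=1):
            rank[i] = pos
        return [(n + 1) / (n + 1 - r) for r in rank]
    u, v = pareto(xs), pareto(ys)
    radius = [a + b for a, b in zip(u, v)]
    angle = [a / (a + b) for a, b in zip(u, v)]
    top = sorted(range(n), key=lambda i: radius[i], reverse=True)[:k]
    return sorted(angle[i] for i in top)
```

Comonotone data concentrate the angles at 1/2, while countermonotone data push them toward the endpoints 0 and 1, which is the first-order dependence information the angular measure encodes.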

During operation, aerial manipulation systems are affected by various disturbances. Among them is the gravitational torque caused by the weight of the robotic arm. Common propeller-based actuation is ineffective against such disturbances because of possible overheating and high power consumption. To overcome this issue, in this paper we propose a winch-based actuation for a crane-stationed, cable-suspended aerial manipulator. Three winch-controlled suspension rigging cables produce a desired cable tension distribution to generate a wrench that reduces the effect of the gravitational torque. In order to coordinate the robotic arm and the winch-based actuation, a model-based hierarchical whole-body controller is adapted. It resolves two tasks: keeping the robotic arm end-effector at the desired pose and shifting the system center of mass to the location with zero gravitational torque. The performance of both the introduced actuation system and the control strategy is validated through experimental studies.

There is increasing interest in modeling high-dimensional longitudinal outcomes in applications such as developmental neuroimaging research. The growth curve model offers a useful tool to capture both the mean growth pattern across individuals and the dynamic changes of outcomes over time within each individual. However, when the number of outcomes is large, it becomes challenging and often infeasible to tackle the large covariance matrix of the random effects involved in the model. In this article, we propose a high-dimensional response growth curve model with three novel components: a low-rank factor model structure that substantially reduces the number of parameters in the large covariance matrix, a re-parameterization formulation coupled with a sparsity penalty that selects important fixed and random effect terms, and a computational trick that turns the inversion of a large matrix into the inversion of a stack of small matrices and thus considerably speeds up the computation. We develop an efficient expectation-maximization type estimation algorithm, and demonstrate the competitive performance of the proposed method through both simulations and a longitudinal study of brain structural connectivity in association with human immunodeficiency virus.
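The matrix-inversion trick is described only at a high level in the abstract. As a toy analogue, the rank-one Sherman-Morrison identity (the simplest member of the Woodbury family that such tricks build on) shows how a low-rank update lets one solve against a large matrix while only ever inverting its cheap diagonal part; this is an illustration under that assumption, not the paper's actual computation.

```python
def sherman_morrison_solve(a_diag, u, v, c, b):
    """Solve (A + c * u v^T) x = b with A = diag(a_diag), using
    (A + c u v^T)^{-1} b
        = A^{-1} b - c * (v^T A^{-1} b) / (1 + c v^T A^{-1} u) * A^{-1} u.
    Only elementwise divisions by the diagonal are performed, so the cost
    is O(n) instead of the O(n^3) of a dense solve."""
    ainv_b = [bi / ai for bi, ai in zip(b, a_diag)]
    ainv_u = [ui / ai for ui, ai in zip(u, a_diag)]
    num = sum(vi * x for vi, x in zip(v, ainv_b))
    den = 1 + c * sum(vi * x for vi, x in zip(v, ainv_u))
    s = c * num / den
    return [x - s * y for x, y in zip(ainv_b, ainv_u)]
```

The same idea generalizes: a low-rank-plus-structured covariance never needs to be inverted densely, only its small core and its cheap structured part.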

In structure learning, the output is generally a structure that is used as supervision information to achieve good performance. Given the extensive attention that the interpretation of deep learning models has attracted in recent years, it would be beneficial if we could learn an interpretable structure from deep learning models. In this paper, we focus on Recurrent Neural Networks (RNNs), whose inner mechanism is still not clearly understood. We find that a Finite State Automaton (FSA) that processes sequential data has a more interpretable inner mechanism and can be learned from an RNN as such an interpretable structure. We propose two methods to learn an FSA from an RNN, based on two different clustering methods. We first give a graphical illustration of the FSA for human beings to follow, which demonstrates its interpretability. From the FSA's point of view, we then analyze how the performance of RNNs is affected by the number of gates, as well as the semantic meaning behind the transitions between numerical hidden states. Our results suggest that RNNs with a simple gated structure, such as the Minimal Gated Unit (MGU), are more desirable, and that the transitions in the FSA leading to a specific classification result are associated with corresponding words that are understandable by human beings.
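Once hidden states have been clustered, extracting an FSA reduces to counting transitions between clusters. The sketch below assumes that clustering step has already been done (the `quantize` callback stands in for it) and is a generic illustration, not either of the paper's two clustering methods.

```python
from collections import defaultdict

def build_fsa(runs, quantize):
    """Aggregate recorded RNN trajectories into FSA-style transition counts.

    runs: list of trajectories, each a list of (hidden_state, input_symbol)
          pairs recorded while the RNN reads a sequence.
    quantize: function mapping a hidden state to a discrete cluster id,
          which plays the role of an FSA state.
    Returns a table {(state, symbol): {next_state: count}}; a transition
    is counted each time reading `symbol` in `state` led to `next_state`.
    """
    trans = defaultdict(lambda: defaultdict(int))
    for run in runs:
        for (h, sym), (h_next, _) in zip(run, run[1:]):
            trans[(quantize(h), sym)][quantize(h_next)] += 1
    return {key: dict(nexts) for key, nexts in trans.items()}
```

Keeping counts rather than a single successor makes non-determinism visible: a cell with several next states flags hidden-state regions the clustering has merged too aggressively.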

Recent years have witnessed the enormous success of low-dimensional vector space representations of knowledge graphs to predict missing facts or find erroneous ones. Currently, however, it is not yet well-understood how ontological knowledge, e.g. given as a set of (existential) rules, can be embedded in a principled way. To address this shortcoming, in this paper we introduce a framework based on convex regions, which can faithfully incorporate ontological knowledge into the vector space embedding. Our technical contribution is two-fold. First, we show that some of the most popular existing embedding approaches are not capable of modelling even very simple types of rules. Second, we show that our framework can represent ontologies that are expressed using so-called quasi-chained existential rules in an exact way, such that any set of facts which is induced using that vector space embedding is logically consistent and deductively closed with respect to the input ontology.
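As a simplified illustration of the region-based idea (using axis-aligned boxes rather than the paper's general convex regions or quasi-chained rules): a rule "every A is a B" holds in such an embedding exactly when region(A) is contained in region(B), and facts follow by point membership, so logical consequences are enforced geometrically.

```python
def box_contains(inner, outer):
    """inner subset of outer, for axis-aligned boxes given as lists of
    (lo, hi) intervals, one per embedding dimension."""
    return all(ol <= il and ih <= oh
               for (il, ih), (ol, oh) in zip(inner, outer))

def point_in_box(p, box):
    """Does the entity embedding p lie inside the concept's region?"""
    return all(lo <= c <= hi for c, (lo, hi) in zip(p, box))
```

Deductive closure then comes for free: if the point for an entity lies in region(A) and region(A) is contained in region(B), the point necessarily lies in region(B), so the induced set of facts cannot contradict the rule.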
