
There has recently been an explosion of interest in how "higher-order" structures emerge in complex systems. This "emergent" organization has been found in a variety of natural and artificial systems, although at present the field lacks a unified understanding of what the consequences of higher-order synergies and redundancies are for systems. Typical research treats the presence (or absence) of synergistic information as a dependent variable and reports changes in the level of synergy in response to some change in the system. Here, we attempt to flip the script: rather than treating higher-order information as a dependent variable, we use evolutionary optimization to evolve Boolean networks with significant higher-order redundancies, synergies, or statistical complexity. We then analyse these evolved populations of networks using established tools for characterizing discrete dynamics: the number of attractors, the average transient length, and the Derrida coefficient. We also assess the capacity of the systems to integrate information. We find that high-synergy systems are unstable and chaotic, but have a high capacity to integrate information. In contrast, evolved redundant systems are extremely stable but have negligible capacity to integrate information. Finally, the complex systems that balance integration and segregation (known as Tononi-Sporns-Edelman complexity) show features of both chaoticity and stability, with a greater capacity to integrate information than the redundant systems while being more stable than the random and synergistic systems. We conclude that there may be a fundamental trade-off between the robustness of a system's dynamics and its capacity to integrate information (which inherently requires flexibility and sensitivity), and that certain kinds of complexity naturally balance this trade-off.
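As a rough illustration of the dynamical diagnostics mentioned above, the sketch below builds a random synchronous Boolean network and estimates its Derrida coefficient (the one-step spread of a single-bit perturbation, with values near 1 suggesting critical dynamics, above 1 chaotic, below 1 ordered). It is only a minimal stand-in for the paper's evolutionary pipeline; the network size, in-degree, and sample counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_boolean_network(n_nodes=12, k=3):
    """Random Boolean network: each node reads k random inputs through a random truth table."""
    inputs = np.array([rng.choice(n_nodes, size=k, replace=False) for _ in range(n_nodes)])
    tables = rng.integers(0, 2, size=(n_nodes, 2 ** k))
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from the current values of its inputs."""
    idx = (state[inputs] * (2 ** np.arange(inputs.shape[1]))).sum(axis=1)
    return tables[np.arange(len(state)), idx]

def derrida_coefficient(inputs, tables, n_samples=2000):
    """Average Hamming distance after one step between a random state and a one-bit perturbation of it."""
    n = tables.shape[0]
    total = 0.0
    for _ in range(n_samples):
        s = rng.integers(0, 2, size=n)
        t = s.copy()
        t[rng.integers(n)] ^= 1
        total += np.sum(step(s, inputs, tables) != step(t, inputs, tables))
    return total / n_samples

inputs, tables = random_boolean_network()
print("Derrida coefficient:", derrida_coefficient(inputs, tables))
```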

Related content

Integration: Integration, the VLSI Journal. Publisher: Elsevier.

Existing survival models either do not scale to high-dimensional and multi-modal data or are difficult to interpret. In this study, we present a supervised topic model called MixEHR-SurG to simultaneously integrate heterogeneous EHR data and model survival hazard. Our contributions are threefold: (1) integrating EHR topic inference with the Cox proportional hazards likelihood; (2) integrating patient-specific topic hyperparameters using the PheCode concepts such that each topic can be identified with exactly one PheCode-associated phenotype; (3) multi-modal survival topic inference. This leads to a highly interpretable survival topic model that can infer PheCode-specific phenotype topics associated with patient mortality. We evaluated MixEHR-SurG using a simulated dataset and two real-world EHR datasets: the Quebec Congenital Heart Disease (CHD) data, consisting of 8,211 subjects with 75,187 outpatient claim records of 1,767 unique ICD codes, and MIMIC-III, consisting of 1,458 subjects with multi-modal EHR records. Compared to the baselines, MixEHR-SurG achieved a superior dynamic AUROC for mortality prediction, with a mean AUROC of 0.89 on the simulated dataset and a mean AUROC of 0.645 on the CHD dataset. Qualitatively, MixEHR-SurG associates severe cardiac conditions with high mortality risk among the CHD patients after the first heart failure hospitalization, and critical brain injuries with increased mortality among the MIMIC-III patients after their ICU discharge. Together, the integration of the Cox proportional hazards model and EHR topic inference in MixEHR-SurG not only leads to competitive mortality prediction but also to meaningful phenotype topics for in-depth survival analysis. The software is available on GitHub: //github.com/li-lab-mcgill/MixEHR-SurG.
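The coupling of topic proportions with a Cox proportional hazards model can be illustrated with a simple two-stage stand-in: fixed (here randomly drawn) patient-topic proportions are used as covariates of a Cox regression via the lifelines package. This is only a sketch of the general idea; MixEHR-SurG performs joint inference of topics and hazards, which is not shown here, and every name and number below is a placeholder.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)

# Hypothetical stand-in for inferred patient-topic proportions theta (n_patients x n_topics).
# In MixEHR-SurG these come from supervised topic inference, not from a Dirichlet draw.
n, k = 500, 5
theta = rng.dirichlet(np.ones(k), size=n)

# Simulate survival times whose hazard depends on one "risk" topic, plus independent censoring.
beta_true = np.array([0.0, 0.0, 1.5, 0.0, -0.5])
event_time = rng.exponential(1.0 / np.exp(theta @ beta_true))
censor_time = rng.exponential(2.0, size=n)

# Drop one topic as reference because the proportions sum to one (compositional covariates).
df = pd.DataFrame(theta[:, 1:], columns=[f"topic_{j}" for j in range(1, k)])
df["duration"] = np.minimum(event_time, censor_time)
df["event"] = (event_time <= censor_time).astype(int)

# Cox proportional hazards on topic proportions: coefficients indicate which (phenotype)
# topics are associated with a higher mortality hazard.
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])
```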

The theory of two projections is utilized to study two-component Gibbs samplers. Through this theory, previously intractable problems regarding the asymptotic variances of two-component Gibbs samplers are reduced to elementary matrix algebra exercises. It is found that in terms of asymptotic variance, the two-component random-scan Gibbs sampler is never much worse, and could be considerably better than its deterministic-scan counterpart, provided that the selection probability is appropriately chosen. This is especially the case when there is a large discrepancy in computation cost between the two components. The result contrasts with the known fact that the deterministic-scan version has a faster convergence rate, which can also be derived from the method herein. On the other hand, a modified version of the deterministic-scan sampler that accounts for computation cost can outperform the random-scan version.
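The contrast between the two scan orders is easy to reproduce on a toy target. The sketch below runs deterministic-scan and random-scan Gibbs samplers for a bivariate normal with correlation 0.9 and compares batch-means estimates of the asymptotic variance of the mean of the first coordinate. It does not implement the two-projections theory; note also that one random-scan iteration draws a single conditional, so it is roughly half as expensive as a deterministic-scan sweep, which is the cost consideration the abstract alludes to.

```python
import numpy as np

rng = np.random.default_rng(2)
rho = 0.9                      # correlation of the bivariate normal target
sd = np.sqrt(1 - rho ** 2)
n_iter = 200_000

def deterministic_scan():
    x = y = 0.0
    out = np.empty(n_iter)
    for t in range(n_iter):
        x = rng.normal(rho * y, sd)    # draw x | y
        y = rng.normal(rho * x, sd)    # draw y | x
        out[t] = x
    return out

def random_scan(p=0.5):
    """Each iteration updates component 1 with probability p, otherwise component 2
    (one conditional draw per iteration, i.e. roughly half the cost of a deterministic sweep)."""
    x = y = 0.0
    out = np.empty(n_iter)
    for t in range(n_iter):
        if rng.random() < p:
            x = rng.normal(rho * y, sd)
        else:
            y = rng.normal(rho * x, sd)
        out[t] = x
    return out

def batch_means_asymptotic_var(chain, n_batches=100):
    """Crude batch-means estimate of the asymptotic variance of the sample mean of the chain."""
    b = len(chain) // n_batches
    means = chain[: b * n_batches].reshape(n_batches, b).mean(axis=1)
    return b * means.var(ddof=1)

print("deterministic scan:", batch_means_asymptotic_var(deterministic_scan()))
print("random scan       :", batch_means_asymptotic_var(random_scan()))
```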

Collaborative problem-solving (CPS) is a vital skill used both in the workplace and in educational environments. CPS is useful in tackling increasingly complex global, economic, and political issues and is considered a central 21st century skill. The increasingly connected global community presents a fruitful opportunity for creative and collaborative problem-solving interactions and solutions that involve diverse perspectives. Unfortunately, women and underrepresented minorities (URMs) often face obstacles during collaborative interactions that hinder their key participation in these problem-solving conversations. Here, we explored the communication patterns of minority and non-minority individuals working together in a CPS task. Group Communication Analysis (GCA), a temporally-sensitive computational linguistic tool, was used to examine how URM status impacts individuals' sociocognitive linguistic patterns. Results show differences across racial/ethnic groups in key sociocognitive features that indicate fruitful collaborative interactions. We also investigated how the groups' racial/ethnic composition impacts both individual and group communication patterns. In general, individuals in more demographically diverse groups displayed more productive communication behaviors than individuals who were in majority-dominated groups. We discuss the implications of individual and group diversity on communication patterns that emerge during CPS and how these patterns can impact collaborative outcomes.

For multivariate data, tandem clustering is a well-known technique aiming to improve cluster identification through initial dimension reduction. Nevertheless, the usual approach using principal component analysis (PCA) has been criticized for focusing solely on inertia so that the first components do not necessarily retain the structure of interest for clustering. To address this limitation, a new tandem clustering approach based on invariant coordinate selection (ICS) is proposed. By jointly diagonalizing two scatter matrices, ICS is designed to find structure in the data while providing affine invariant components. Certain theoretical results have been previously derived and guarantee that under some elliptical mixture models, the group structure can be highlighted on a subset of the first and/or last components. However, ICS has garnered minimal attention within the context of clustering. Two challenges associated with ICS include choosing the pair of scatter matrices and selecting the components to retain. For effective clustering purposes, it is demonstrated that the best scatter pairs consist of one scatter matrix capturing the within-cluster structure and another capturing the global structure. For the former, local shape or pairwise scatters are of great interest, as is the minimum covariance determinant (MCD) estimator based on a carefully chosen subset size that is smaller than usual. The performance of ICS as a dimension reduction method is evaluated in terms of preserving the cluster structure in the data. In an extensive simulation study and empirical applications with benchmark data sets, various combinations of scatter matrices as well as component selection criteria are compared in situations with and without outliers. Overall, the new approach of tandem clustering with ICS shows promising results and clearly outperforms the PCA-based approach.
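A minimal version of tandem clustering with ICS can be sketched by jointly diagonalizing the regular covariance matrix and the classical fourth-moment scatter (the FOBI pair), then clustering on a few first and last invariant coordinates. The paper advocates other scatter pairs (e.g., local shape, pairwise scatters, or an MCD estimator with a small subset size), so the choice below is for illustration only.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

def cov4(X):
    """Fourth-moment scatter matrix (the classical FOBI companion of the covariance)."""
    Xc = X - X.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(Xc, rowvar=False))
    m = (Xc @ Sinv * Xc).sum(axis=1)                 # squared Mahalanobis distances
    return (Xc * m[:, None]).T @ Xc / (Xc.shape[0] * (X.shape[1] + 2))

def ics(X, S1, S2):
    """Joint diagonalisation of two scatters: B' S1 B = I and B' S2 B diagonal.
    Returns invariant coordinates ordered by decreasing generalized eigenvalue."""
    evals, B = eigh(S2, S1)                          # solves S2 b = lambda S1 b
    order = np.argsort(evals)[::-1]
    return (X - X.mean(axis=0)) @ B[:, order], evals[order]

# Toy data: three Gaussian clusters in 10 dimensions.
X, y = make_blobs(n_samples=600, n_features=10, centers=3, cluster_std=1.5, random_state=3)

Z, evals = ics(X, np.cov(X, rowvar=False), cov4(X))

# Tandem step: cluster on a few first and last invariant coordinates instead of the raw data.
Z_sel = np.hstack([Z[:, :2], Z[:, -2:]])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z_sel)
print("adjusted Rand index:", adjusted_rand_score(y, labels))
```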

With the increasing availability of large-scale datasets, computational power, and tools such as automatic differentiation and expressive neural network architectures, sequential data are now often treated in a data-driven way, with a dynamical model trained from observation data. While neural networks are often seen as uninterpretable black-box architectures, they can still benefit from physical priors on the data and from mathematical knowledge. In this paper, we use a neural network architecture that leverages the long-known Koopman operator theory to embed dynamical systems in latent spaces where their dynamics can be described linearly, enabling a number of appealing features. We introduce methods that enable training such a model for long-term continuous reconstruction, even in difficult contexts where the data come as irregularly-sampled time series. The potential for self-supervised learning is also demonstrated, as we show the promising use of trained dynamical models as priors for variational data assimilation techniques, with applications to, e.g., time series interpolation and forecasting.
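A bare-bones version of such a Koopman-style architecture is an autoencoder whose latent state is advanced by a learned linear map, trained with reconstruction, latent-linearity, and prediction losses. The sketch below (PyTorch, with arbitrary layer sizes and placeholder data) shows only this skeleton; the paper's handling of irregularly-sampled series and its use as a prior for data assimilation are not reproduced here.

```python
import torch
import torch.nn as nn

class KoopmanAE(nn.Module):
    """Autoencoder with a linear latent map K: phi(x_{t+1}) ~ K phi(x_t)."""
    def __init__(self, state_dim, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, state_dim))
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)   # Koopman matrix

    def forward(self, x_t, n_steps=1):
        z = self.enc(x_t)
        for _ in range(n_steps):                                  # roll the linear latent dynamics forward
            z = self.K(z)
        return self.dec(z)

def koopman_loss(model, x_t, x_next):
    recon = ((model.dec(model.enc(x_t)) - x_t) ** 2).mean()                  # autoencoding
    latent = ((model.K(model.enc(x_t)) - model.enc(x_next)) ** 2).mean()     # linearity in latent space
    pred = ((model(x_t) - x_next) ** 2).mean()                               # one-step prediction
    return recon + latent + pred

# Minimal training step on hypothetical pairs (x_t, x_{t+1}) of shape (batch, state_dim).
model = KoopmanAE(state_dim=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_t, x_next = torch.randn(128, 3), torch.randn(128, 3)           # placeholder data
loss = koopman_loss(model, x_t, x_next)
opt.zero_grad(); loss.backward(); opt.step()
```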

Peridynamics (PD), as a nonlocal theory, is well suited for solving problems with discontinuities, such as cracks. However, the nonlocal effect of peridynamics makes it computationally expensive for dynamic fracture problems in large-scale engineering applications. As an alternative, this study proposes a multi-time-step (MTS) coupling model of PD and classical continuum mechanics (CCM) based on the Arlequin framework. Peridynamics is applied to the fracture domain of the structure, while continuum mechanics is applied to the rest of the structure. The MTS method enables the peridynamic model to be solved with a small time step while the continuum mechanics model is solved with a larger time step. Consequently, higher computational efficiency is achieved for the fracture domain of the structure while ensuring computational accuracy, and this coupling method can be easily applied to large-scale engineering fracture problems.
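The multi-time-step idea itself, independent of peridynamics, can be illustrated on a toy pair of coupled ODEs: the "fine" subsystem is subcycled with a small step inside each large step of the "coarse" subsystem, with the coarse state interpolated in time for the coupling term. The sketch below is purely schematic (explicit Euler, arbitrary constants) and is not the Arlequin-coupled PD/CCM scheme.

```python
import numpy as np

# Subsystem u stands in for the finely-stepped (PD-like) patch, subsystem v for the
# coarsely-stepped (continuum-like) part. Coupling constants are arbitrary.
k_uu, k_uv, k_vv = 50.0, 5.0, 2.0

def f_u(u, v):   # fast subsystem state: (position, velocity)
    return np.array([u[1], -k_uu * u[0] + k_uv * (v[0] - u[0])])

def f_v(v, u):   # slow subsystem state: (position, velocity)
    return np.array([v[1], -k_vv * v[0] + k_uv * (u[0] - v[0])])

def mts_step(u, v, dt, m):
    """One coarse step: advance v once with dt, then advance u m times with dt/m,
    using the slow state linearly interpolated inside the coarse step."""
    v_new = v + dt * f_v(v, u)                     # explicit Euler on the slow part
    for j in range(m):
        v_mid = v + (j + 0.5) / m * (v_new - v)    # interpolated slow state
        u = u + (dt / m) * f_u(u, v_mid)
    return u, v_new

u, v = np.array([1.0, 0.0]), np.array([0.5, 0.0])
for _ in range(2000):
    u, v = mts_step(u, v, dt=0.005, m=10)
print(u, v)
```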

Necessary and sufficient conditions for uniform consistency are explored. The null hypothesis is simple. The nonparametric sets of alternatives are bounded convex sets in $\mathbb{L}_p$, $p > 1$, with "small" balls deleted. The "small" balls are centered at the hypothesized point, and their radii tend to zero as the sample size increases. For the problem of hypothesis testing on a density, we show that such sets of alternatives admit uniformly consistent tests for some sequence of ball radii if and only if the convex set is relatively compact. The results are established for hypothesis testing on a density, for signal detection in Gaussian white noise, for linear ill-posed problems with random Gaussian noise, and for other settings.
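In symbols, the setting can be paraphrased as follows, with $f_0$ the hypothesized density, $\Lambda \subset \mathbb{L}_p$ a bounded convex set, and $\rho_n \downarrow 0$ the ball radii:
\[
H_0 : f = f_0, \qquad A_n(\rho_n) = \{ f \in \Lambda : \|f - f_0\|_p \ge \rho_n \},
\]
and the main claim is that uniformly consistent tests of $H_0$ against $A_n(\rho_n)$ exist for some sequence $\rho_n \to 0$ if and only if $\Lambda$ is relatively compact in $\mathbb{L}_p$.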

This work explores the dimension reduction problem for Bayesian nonparametric regression and density estimation. More precisely, we are interested in estimating a functional parameter $f$ over the unit ball in $\mathbb{R}^d$ that depends only on a $d_0$-dimensional subspace of $\mathbb{R}^d$, with $d_0 < d$. It is well known that rescaled Gaussian process priors over the function space achieve smoothness adaptation and posterior contraction with near minimax-optimal rates. Moreover, hierarchical extensions of this approach, equipped with subspace projection, can also adapt to the intrinsic dimension $d_0$ (\cite{Tokdar2011DimensionAdapt}). When the ambient dimension $d$ does not vary with $n$, the minimax rate remains of the order $n^{-\beta/(2\beta + d_0)}$. However, this is up to multiplicative constants that can become prohibitively large when $d$ grows. The dependence of the contraction rate on the ambient dimension has not been fully explored yet, and this work provides a first insight: we let the dimension $d$ grow with $n$ and, by combining the arguments of \cite{Tokdar2011DimensionAdapt} and \cite{Jiang2021VariableSelection}, we derive a growth rate for $d$ that still leads to posterior contraction at the minimax rate. The optimality of this growth rate is then discussed. Additionally, we provide a set of assumptions under which consistent estimation of $f$ leads to a correct estimation of the subspace projection, assuming that $d_0$ is known.
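Schematically, the dimension-reduced structure assumed here can be written (in one common formulation) as
\[
f(x) = g(P x), \qquad P \in \mathbb{R}^{d_0 \times d}, \quad d_0 < d,
\]
with $g$ a $\beta$-smooth function of the projected, $d_0$-dimensional coordinates, so that for fixed $d$ the minimax rate is of order $n^{-\beta/(2\beta + d_0)}$, up to constants that may depend on the ambient dimension $d$.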

Direct reciprocity based on the repeated prisoner's dilemma has been intensively studied. Most theoretical investigations have concentrated on memory-$1$ strategies, a class of elementary strategies that react only to the previous-round outcome. Although the properties of "All-or-None" strategies ($AoN_K$) have been characterized, simulations have only confirmed the good performance of $AoN_K$ with very short memory lengths. It remains unclear how $AoN_K$ strategies fare when players have access to longer histories. We construct a theoretical model to investigate the performance of the class of $AoN_K$ strategies of varying memory length $K$. We rigorously derive the payoffs and show that $AoN_K$ strategies of intermediate memory length $K$ are most prevalent, while strategies with larger memory lengths are less competitive. Larger memory lengths make it hard for $AoN_K$ strategies to coordinate, thus inhibiting their mutual reciprocity. We then propose the adaptive coordination strategy, which combines tolerance with the $AoN_K$ coordination rule. This strategy behaves like an $AoN_K$ strategy when coordination is insufficient, and tolerates opponents' occasional deviations by continuing to cooperate once coordination is sufficient. We find that the adaptive coordination strategy wins over other classic memory-$1$ strategies in various typical competition environments and stabilizes the population at high levels of cooperation, suggesting the effectiveness of high-level adaptability in resolving social dilemmas. Our work may offer a theoretical framework for exploring complex strategies that use history information and differ from traditional memory-$n$ strategies.
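Reading "All-or-None" as the rule that a player cooperates only if both players chose the same action in each of the last $K$ rounds (so $AoN_1$ coincides with Win-Stay-Lose-Shift), a small noisy match between two $AoN_K$ players already hints at the coordination problem for large $K$: after an implementation error, recovery takes longer the larger $K$ is. The payoff values and error rate below are assumptions, and the simulation is not the paper's analytical model.

```python
import random

R, S, T, P = 3, 0, 5, 1        # standard prisoner's dilemma payoffs (assumed values)
ERR = 0.01                     # probability that an intended action is flipped (implementation noise)

def aon_move(history, K):
    """history: list of (my_action, opponent_action) pairs; actions are 'C' or 'D'."""
    if len(history) < K:
        return 'C'                                         # cooperate until K rounds of history exist
    return 'C' if all(a == b for a, b in history[-K:]) else 'D'

def noisy(action):
    return action if random.random() > ERR else ('D' if action == 'C' else 'C')

def payoff(a, b):
    return R if (a, b) == ('C', 'C') else S if (a, b) == ('C', 'D') else T if (a, b) == ('D', 'C') else P

def play(K1, K2, rounds=10_000):
    h1, h2, pay1, pay2 = [], [], 0, 0
    for _ in range(rounds):
        a1, a2 = noisy(aon_move(h1, K1)), noisy(aon_move(h2, K2))
        pay1 += payoff(a1, a2)
        pay2 += payoff(a2, a1)
        h1.append((a1, a2)); h2.append((a2, a1))
    return pay1 / rounds, pay2 / rounds

for K in (1, 2, 4, 8):
    print(f"AoN_{K} vs AoN_{K}:", play(K, K))
```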

A component-splitting method is proposed to improve convergence characteristics for implicit time integration of compressible multicomponent reactive flows. The characteristic decomposition of the flux Jacobian of the multicomponent Navier-Stokes equations yields a large sparse eigensystem, presenting challenges of slow convergence and high computational costs for implicit methods. To address this issue, the component-splitting method segregates the implicit operator into two parts: one for the flow equations (density/momentum/energy) and the other for the component equations. Each part's implicit operator employs flux-vector splitting based on its own spectral radius to achieve accelerated convergence. This approach improves the computational efficiency of implicit iteration, mitigating the quadratic increase in time cost with the number of species. Two consistency corrections are developed to reduce the component-splitting error and ensure the numerical consistency of the mass fractions. Importantly, the impact of the component-splitting method on accuracy is minimal as the residual approaches convergence. The accuracy, efficiency, and robustness of the component-splitting method are thoroughly investigated and compared with the coupled implicit scheme through several numerical cases involving thermo-chemical nonequilibrium hypersonic flows. The results demonstrate that the component-splitting method decreases the number of iteration steps required for convergence of the residual and wall heat flux, decreases the computation time per iteration step, and drives the residual to a lower magnitude. The acceleration efficiency improves with increasing CFL number and number of species.
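The segregated-versus-coupled trade-off can be caricatured on a linear model problem: a backward-Euler step that solves the full system is compared with a "component-split" step that inverts only the flow block and the species block in sequence (one block Gauss-Seidel sweep), introducing a small additional splitting error. The sketch below is purely schematic and is not the authors' flux-vector-split Navier-Stokes scheme.

```python
import numpy as np

rng = np.random.default_rng(4)
n_flow, n_spec = 5, 20                       # few "flow" unknowns vs many species unknowns
n = n_flow + n_spec
A = -np.eye(n) * rng.uniform(1, 10, n) + 0.1 * rng.standard_normal((n, n))  # stable-ish linear operator

def coupled_step(u, dt):
    """Fully coupled backward Euler: one (n x n) linear solve per step."""
    return np.linalg.solve(np.eye(n) - dt * A, u)

def split_step(u, dt):
    """Component-split step: solve the flow block first, then the species block
    using the freshly updated flow values (block Gauss-Seidel sweep)."""
    F, S = slice(0, n_flow), slice(n_flow, n)
    u_new = u.copy()
    u_new[F] = np.linalg.solve(np.eye(n_flow) - dt * A[F, F], u[F] + dt * A[F, S] @ u[S])
    u_new[S] = np.linalg.solve(np.eye(n_spec) - dt * A[S, S], u[S] + dt * A[S, F] @ u_new[F])
    return u_new

u0 = rng.standard_normal(n)
u_c, u_s = u0.copy(), u0.copy()
for _ in range(50):
    u_c, u_s = coupled_step(u_c, 0.1), split_step(u_s, 0.1)
print("max difference after 50 steps:", np.abs(u_c - u_s).max())
```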
