
Human ecological success relies on our characteristic ability to flexibly self-organize into cooperative social groups, the most successful of which employ substantial specialization and division of labor. Unlike most other animals, humans learn by trial and error during their lives what role to take on. However, when some critical roles are more attractive than others, and individuals are self-interested, there is a social dilemma: each individual would prefer that others take on the critical but unremunerative roles so they may remain free to take one that pays better. But disaster occurs if everyone acts this way and a critical role goes unfilled. In such situations, learning an optimal role distribution may not be possible. Consequently, a fundamental question is: how can division of labor emerge in groups of self-interested lifetime-learning individuals? Here we show that by introducing a model of social norms, which we regard as emergent patterns of decentralized social sanctioning, it becomes possible for groups of self-interested individuals to learn a productive division of labor involving all critical roles. Such social norms work by redistributing rewards within the population to disincentivize antisocial roles while incentivizing prosocial roles that do not intrinsically pay as well as others.
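
As a toy illustration of how such sanctioning can work (a minimal sketch, not the paper's model): in the stylized two-role economy below, a norm that taxes the lucrative role and redistributes the proceeds to the critical one makes the critical role individually rational, so independent bandit-style learners end up filling it. The payoffs, sanction rate, and learning rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, N_ROUNDS = 10, 5000
BASE_PAY = np.array([0.2, 1.0])  # intrinsic pay: [critical, lucrative] (assumed)
SANCTION = 0.5                   # fraction of lucrative pay the norm redistributes

q = np.zeros((N_AGENTS, 2))      # each agent's value estimate per role
alpha, temp = 0.1, 0.1           # learning rate, softmax temperature

for t in range(N_ROUNDS):
    # each agent samples a role from a softmax over its current estimates
    logits = q / temp
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    roles = (rng.random(N_AGENTS) < probs[:, 1]).astype(int)  # 1 = lucrative

    crit = roles == 0
    if not crit.any():
        pay = np.zeros(N_AGENTS)  # critical role unfilled: everyone gets nothing
    else:
        pay = BASE_PAY[roles]
        # decentralized sanctioning: tax lucrative roles, pay critical ones
        pot = SANCTION * pay[~crit].sum()
        pay[~crit] *= 1.0 - SANCTION
        pay[crit] += pot / crit.sum()

    idx = np.arange(N_AGENTS)
    q[idx, roles] += alpha * (pay - q[idx, roles])  # bandit-style value update

print("mean P(critical) after learning:", probs[:, 0].mean())
```

Without the sanction (SANCTION = 0), every learner drifts to the lucrative role and group payoff collapses; with it, a stable mixed division of labor emerges.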

Related content

In sampling-based Bayesian models of brain function, neural activities are assumed to be samples from probability distributions that the brain uses for probabilistic computation. However, a comprehensive understanding of how mechanistic models of neural dynamics can sample from arbitrary distributions is still lacking. We use tools from functional analysis and stochastic differential equations to explore the minimum architectural requirements for $\textit{recurrent}$ neural circuits to sample from complex distributions. We first consider the traditional sampling model consisting of a network of neurons whose outputs directly represent the samples (sampler-only network). We argue that synaptic current and firing-rate dynamics in the traditional model have limited capacity to sample from a complex probability distribution. We show that the firing rate dynamics of a recurrent neural circuit with a separate set of output units can sample from an arbitrary probability distribution. We call such circuits reservoir-sampler networks (RSNs). We propose an efficient training procedure based on denoising score matching that finds recurrent and output weights such that the RSN implements Langevin sampling. We empirically demonstrate our model's ability to sample from several complex data distributions using the proposed neural dynamics and discuss its applicability to developing the next generation of sampling-based brain models.
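
For intuition about the sampling dynamics an RSN is trained to implement (a minimal sketch, not the paper's circuit model): discretised Langevin dynamics driven by the score $\nabla_x \log p(x)$ draw samples from $p$. Here the score of a toy 2-D Gaussian mixture is written in closed form; the paper instead learns it via denoising score matching and realises the dynamics in recurrent firing rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# target: equal-weight 2-D Gaussian mixture with unit covariances
MU = np.array([[-2.0, 0.0], [2.0, 0.0]])

def score(x):
    """Mixture score: responsibility-weighted pull toward each mean."""
    d = x[None, :] - MU                  # offsets to each component mean
    logw = -0.5 * (d ** 2).sum(axis=1)   # unnormalised log component densities
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return -(w[:, None] * d).sum(axis=0)

# Euler-Maruyama discretisation of Langevin dynamics:
#   x <- x + eps * score(x) + sqrt(2 * eps) * noise
eps, n_steps = 0.05, 5000
x = rng.normal(size=2)
samples = []
for t in range(n_steps):
    x = x + eps * score(x) + np.sqrt(2 * eps) * rng.normal(size=2)
    samples.append(x.copy())

samples = np.array(samples[1000:])           # discard burn-in
print("sample mean:", samples.mean(axis=0))  # approx. (0, 0) by symmetry
```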

Analyzing a unique real-time dataset from across 26 social media platforms, we show why the hate-extremism ecosystem now has unprecedented reach and recruitment paths online; why it is now able to exert instant and massive global mainstream influence, e.g. following the October 7 Hamas attack; why it will become increasingly robust in 2024 and beyond; why recent E.U. laws fall short, because the combined effect of many smaller, lesser-known platforms outstrips that of larger ones like Twitter; and why law enforcement should expect increasingly hard-to-understand paths ahead of offline mass attacks. This new picture of online hate and extremism challenges the current notion of a niche activity at the 'fringe' of the Internet driven by specific news sources. But it also suggests a new opportunity for system-wide control, akin to adaptive vs. extinction treatments for cancer.

Feedforward neural networks (FNNs) are typically viewed as pure prediction algorithms, and their strong predictive performance has led to their use in many machine-learning applications. However, their flexibility comes with an interpretability trade-off; thus, FNNs have historically been less popular among statisticians. Nevertheless, classical statistical theory, such as significance testing and uncertainty quantification, is still relevant. Supplementing FNNs with methods of statistical inference and covariate-effect visualisations can shift the focus away from black-box prediction and make FNNs more akin to traditional statistical models. This can allow for more inferential analysis and, hence, make FNNs more accessible within the statistical-modelling context.
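
One simple instance of such an inferential supplement (a generic sketch, not necessarily the paper's method): bootstrap refitting yields pointwise uncertainty bands for an FNN's fitted covariate effect. The architecture, toy data, and number of resamples below are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# toy regression data with a single covariate
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.normal(size=200)

# bootstrap: refit the FNN on resampled data to quantify fit uncertainty
grid = np.linspace(-3, 3, 50)[:, None]
preds = []
for b in range(30):
    idx = rng.integers(0, len(X), len(X))
    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=b)
    net.fit(X[idx], y[idx])
    preds.append(net.predict(grid))

preds = np.array(preds)
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)  # pointwise 95% band
print("fitted covariate effect near x=0: %.2f  [%.2f, %.2f]"
      % (preds.mean(axis=0)[25], lo[25], hi[25]))
```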

Quantum entanglement is a fundamental property commonly used in various quantum information protocols and algorithms. Nonetheless, the problem of identifying entanglement has still not reached a general solution for systems larger than two qubits. In this study, we use deep convolutional neural networks, a type of supervised machine learning, to identify quantum entanglement for any bipartition in a 3-qubit system. We demonstrate that training the model on synthetically generated datasets of random density matrices that exclude the challenging positive-under-partial-transposition entangled states (PPTES), which cannot be identified (and correctly labeled) in general, still leads to good model accuracy even for PPTES, which lie outside the training data. To further enhance the model's generalization to PPTES, we apply entanglement-preserving symmetry operations through a triple Siamese network trained in a semi-supervised manner, improving the model's accuracy and ability to recognize PPTES. Moreover, constructing an ensemble of Siamese models yields even better generalization, in analogy with the idea of finding separate entanglement witnesses for different classes of states. The neural models' code and training schemes, as well as data generation procedures, are available at github.com/Maticraft/quantum_correlations.
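
For context, one standard way to label the non-PPTES part of such datasets is the Peres-Horodecki (PPT) criterion: a state is certifiably entangled across a bipartition if its partial transpose has a negative eigenvalue. A minimal NumPy sketch for the 1|23 bipartition of a 3-qubit state follows; the Ginibre ensemble and the labeling convention are illustrative assumptions, not necessarily the paper's data-generation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(d=8):
    """Random mixed state from the Ginibre ensemble."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def partial_transpose_first_qubit(rho):
    """Partial transpose over qubit 1 of the 1|23 bipartition."""
    r = rho.reshape(2, 4, 2, 4)       # indices: (i, a, j, b)
    return r.swapaxes(0, 2).reshape(8, 8)

def ppt_label(rho, tol=1e-9):
    """1 = NPT (certifiably entangled across 1|23); 0 = PPT (separable or PPTES)."""
    eig = np.linalg.eigvalsh(partial_transpose_first_qubit(rho))
    return int(eig.min() < -tol)

rhos = [random_density_matrix() for _ in range(1000)]
labels = [ppt_label(r) for r in rhos]
print("fraction NPT across 1|23:", np.mean(labels))
```

By construction this test is blind to PPTES, which is exactly why such states cannot be correctly labeled in general and are excluded from training.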

It has been classically conjectured that the brain assigns probabilistic models to sequences of stimuli. An important issue associated with this conjecture is the identification of the classes of models used by the brain to perform this task. We address this issue by using a new clustering procedure for sets of electroencephalographic (EEG) data recorded from participants exposed to a sequence of auditory stimuli generated by a stochastic chain. This clustering procedure indicates that the brain uses renewal points in the stochastic sequence of auditory stimuli in order to build a model.

We introduce exploration via linear loss perturbations (EVILL), a randomised exploration method for structured stochastic bandit problems that works by solving for the minimiser of a linearly perturbed regularised negative log-likelihood function. We show that, for the case of generalised linear bandits, EVILL reduces to perturbed history exploration (PHE), a method where exploration is done by training on randomly perturbed rewards. In doing so, we provide a simple and clean explanation of when and why random reward perturbations give rise to good bandit algorithms. With the data-dependent perturbations we propose, not present in previous PHE-type methods, EVILL is shown to match the performance of Thompson-sampling-style parameter-perturbation methods, both in theory and in practice. Moreover, we show an example outside of generalised linear bandits where PHE leads to inconsistent estimates, and thus linear regret, while EVILL remains performant. Like PHE, EVILL can be implemented in just a few lines of code.
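
To make the idea concrete (a sketch, not the paper's exact algorithm): in the linear-Gaussian case the perturbed regularised least-squares objective has a closed-form minimiser, so EVILL-style exploration amounts to solving a ridge problem with an added random linear term. The perturbation scale and its coupling to the Gram matrix below are illustrative choices standing in for the paper's data-dependent perturbations.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 5, 20, 2000
theta_star = rng.normal(size=d) / np.sqrt(d)
arms = rng.normal(size=(K, d)) / np.sqrt(d)  # fixed action set

lam, a = 1.0, 0.5          # ridge regularizer, perturbation scale (illustrative)
V = lam * np.eye(d)        # regularized Gram matrix
b = np.zeros(d)            # running sum of x_s * r_s

for t in range(T):
    # Perturbed objective: 0.5 * theta' V theta - (b + z) . theta, with a
    # random linear term z . theta; with z ~ N(0, a^2 V) the minimiser is:
    z = np.linalg.cholesky(V) @ rng.normal(size=d) * a
    theta_hat = np.linalg.solve(V, b + z)

    k = int(np.argmax(arms @ theta_hat))     # act greedily w.r.t. perturbed fit
    r = arms[k] @ theta_star + 0.1 * rng.normal()
    V += np.outer(arms[k], arms[k])
    b += arms[k] * r

print("estimate error:", np.linalg.norm(np.linalg.solve(V, b) - theta_star))
```

In this Gaussian case, adding the linear perturbation to the objective is equivalent to fitting on randomly perturbed rewards, which is the PHE reduction the abstract describes.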

In harsh environments, organisms may self-organize into spatially patterned systems in various ways. So far, studies of ecosystem spatial self-organization have primarily focused on apparent orders reflected by regular patterns. However, self-organized ecosystems may also have cryptic orders that can be unveiled only through certain quantitative analyses. Here we show that disordered hyperuniformity, a striking class of hidden order, can exist in spatially self-organized vegetation landscapes. By analyzing high-resolution remotely sensed images across the American drylands, we demonstrate that it is not uncommon to find disordered hyperuniform vegetation states characterized by suppressed density fluctuations at long range. Such long-range hyperuniformity has been documented in a wide range of microscopic systems; our finding extends this domain to natural landscape ecological systems. We use theoretical modeling to propose that disordered hyperuniform vegetation patterning can arise from three generalized mechanisms prevalent in dryland ecosystems: (1) critical absorbing states driven by an ecological legacy effect, (2) scale-dependent feedbacks driven by plant-plant facilitation and competition, and (3) density-dependent aggregation driven by plant-sediment feedbacks. Our modeling results also show that disordered hyperuniform patterns can help ecosystems cope with arid conditions through enhanced soil-moisture acquisition. However, this advantage may come at the cost of slower recovery of ecosystem structure after perturbations. Our work highlights that disordered hyperuniformity, as a distinguishable but underexplored ecosystem self-organization state, merits systematic study to better understand its underlying mechanisms, functioning, and resilience.
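
The "suppressed density fluctuations at long range" diagnostic can be made concrete as follows (a generic sketch, not the paper's analysis pipeline): for a 2-D hyperuniform point pattern, the variance of point counts in circular windows of radius $R$ grows more slowly than $R^2$, whereas for a Poisson pattern it grows like $R^2$. The jittered lattice below is a simple stand-in hyperuniform reference.

```python
import numpy as np

rng = np.random.default_rng(0)

def number_variance(points, box, radii, n_windows=500):
    """Variance of point counts in random circular windows of each radius."""
    out = []
    for R in radii:
        centers = rng.uniform(R, box - R, size=(n_windows, 2))  # stay in box
        d2 = ((points[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)
        counts = (d2 < R ** 2).sum(axis=1)
        out.append(counts.var())
    return np.array(out)

box, n = 100.0, 4096
poisson = rng.uniform(0, box, size=(n, 2))

# jittered lattice: a simple (strongly) hyperuniform reference pattern
g = int(np.sqrt(n))
xx, yy = np.meshgrid(np.arange(g), np.arange(g))
lattice = (np.stack([xx, yy], axis=-1).reshape(-1, 2) + 0.5) * (box / g)
jittered = lattice + rng.uniform(-0.2, 0.2, size=lattice.shape)

radii = np.array([2.0, 4.0, 8.0, 16.0])
print("R:       ", radii)
print("Poisson: ", number_variance(poisson, box, radii))   # grows like R^2
print("jittered:", number_variance(jittered, box, radii))  # grows like R or slower
```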

Time-dependent protocols that perform irreversible logical operations, such as memory erasure, cost work and produce heat, placing bounds on the efficiency of computers. Here we use a prototypical computer model of a physical memory to show that it is possible to learn feedback-control protocols to do fast memory erasure without input of work or production of heat. These protocols, which are enacted by a neural-network "demon", do not violate the second law of thermodynamics because the demon generates more heat than the memory absorbs. The result is a form of nonlocal heat exchange in which one computation is rendered energetically favorable while a compensating one produces heat elsewhere, a tactic that could be used to rationally design the flow of energy within a computer.
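
The second-law bookkeeping behind this result can be sketched with the standard Landauer argument (a generic accounting, not the paper's derivation). Erasing one bit lowers the memory's entropy by $k_B \ln 2$, so with $Q_{\mathrm{demon}}$ the heat the demon dissipates into the bath at temperature $T$ and $Q_{\mathrm{mem}}$ the heat the memory absorbs, the second law requires

$$Q_{\mathrm{demon}} - Q_{\mathrm{mem}} \ \ge\ k_B T \ln 2$$

per erased bit. The memory can therefore be erased while absorbing heat ($Q_{\mathrm{mem}} \ge 0$) only if the demon dissipates at least $k_B T \ln 2 + Q_{\mathrm{mem}}$ elsewhere, which is exactly the nonlocal heat exchange the abstract describes.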

The reduction of Hamiltonian systems aims to build smaller reduced models, valid over a certain range of time and parameters, in order to reduce computing time. By maintaining the Hamiltonian structure in the reduced model, certain long-term stability properties can be preserved. In this paper, we propose a non-linear reduction method for models coming from the spatial discretization of partial differential equations: it is based on convolutional auto-encoders and Hamiltonian neural networks. Their training is coupled in order to simultaneously learn the encoder-decoder operators and the reduced dynamics. Several test cases on non-linear wave dynamics show that the method has better reduction properties than standard linear Hamiltonian reduction methods.
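
The Hamiltonian-neural-network half of such a scheme can be sketched as follows (the convolutional autoencoder and the coupled training are omitted; the architecture and toy system are illustrative): a network $H_\theta(q, p)$ is trained so that its symplectic gradient matches observed dynamics, which makes the reduced model Hamiltonian by construction.

```python
import torch

# H_theta(q, p): a small MLP; the learned reduced dynamics follow Hamilton's
# equations dq/dt = dH/dp, dp/dt = -dH/dq.
H = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)

def hamiltonian_field(x):
    """x = (q, p) batch; returns the symplectic gradient (dq/dt, dp/dt)."""
    x = x.requires_grad_(True)
    dH = torch.autograd.grad(H(x).sum(), x, create_graph=True)[0]
    dHdq, dHdp = dH[:, :1], dH[:, 1:]
    return torch.cat([dHdp, -dHdq], dim=1)

def true_field(x):
    """Toy training target: pendulum vector field (H = p^2/2 - cos q)."""
    q, p = x[:, :1], x[:, 1:]
    return torch.cat([p, -torch.sin(q)], dim=1)

opt = torch.optim.Adam(H.parameters(), lr=1e-3)
for step in range(2000):
    x = torch.rand(256, 2) * 4 - 2  # random states in [-2, 2]^2
    loss = ((hamiltonian_field(x) - true_field(x)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final vector-field MSE:", loss.item())
```

In the paper's full method, the autoencoder's latent coordinates would play the role of $(q, p)$ here, with encoder, decoder, and $H_\theta$ trained jointly.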

Deep learning is usually described as an experiment-driven field under continual criticism for lacking theoretical foundations. This problem has been partially addressed by a large volume of literature which has so far not been well organized. This paper reviews and organizes the recent advances in deep learning theory. The literature is categorized into six groups: (1) complexity and capacity-based approaches for analyzing the generalizability of deep learning; (2) stochastic differential equations and their dynamic systems for modelling stochastic gradient descent and its variants, which characterize the optimization and generalization of deep learning, partially inspired by Bayesian inference; (3) the geometrical structures of the loss landscape that drive the trajectories of the dynamic systems; (4) the roles of over-parameterization of deep neural networks from both positive and negative perspectives; (5) theoretical foundations of several special structures in network architectures; and (6) increasingly intense concerns about ethics and security and their relationships with generalizability.
