
We study stochastic pairwise interaction network systems whereby a finite population of agents, identified with the nodes of a graph, update their states in response to both individual mutations and pairwise interactions with their neighbors. The considered class of systems includes the main epidemic models (such as the SIS, SIR, and SIRS models), certain social dynamics models (such as the voter and anti-voter models), as well as evolutionary dynamics on graphs. Since these stochastic systems fall into the class of finite-state Markov chains, they always admit stationary distributions. We analyze the asymptotic behavior of these stationary distributions in the limit as the population size grows large while the interaction network maintains certain mixing properties. Our approach relies on the use of Lyapunov-type functions to obtain concentration results on these stationary distributions. Notably, our results are not limited to fully mixed population models, as they apply to a much broader spectrum of interaction network structures, including, e.g., Erd\H{o}s-R\'enyi random graphs.
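
A minimal simulation sketch of the kind of dynamics described above, assuming a discrete-time SIS model on an Erdős-Rényi graph (all parameters are illustrative and not taken from the paper): each node recovers independently and becomes infected through pairwise contacts with infected neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 0.05                                  # population size, edge probability
upper = np.triu(rng.random((n, n)) < p, 1)        # Erdos-Renyi adjacency (upper triangle)
adj = (upper | upper.T).astype(int)

beta, delta = 0.3, 0.1                            # infection and recovery probabilities
state = np.zeros(n, dtype=bool)                   # False = susceptible, True = infected
state[rng.choice(n, 5, replace=False)] = True     # seed a few infections

for t in range(1000):
    infected_neighbors = adj @ state.astype(int)              # pairwise interactions
    p_infect = 1 - (1 - beta) ** infected_neighbors           # at least one contact succeeds
    new_infections = (~state) & (rng.random(n) < p_infect)
    recoveries = state & (rng.random(n) < delta)              # individual-level transitions
    state = (state | new_infections) & ~recoveries

print("infected fraction after 1000 steps:", state.mean())
```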


Ideally, all analyses of normally distributed data should include the full covariance information between all data points. In practice, the full covariance matrix between all data points is not always available, either because a result was published without a covariance matrix or because one tries to combine multiple results from separate publications. For simple hypothesis tests, it is possible to define robust test statistics that behave conservatively in the presence of unknown correlations. For model parameter fits, one can inflate the variance by a factor to ensure that the fit remains conservative at least up to a chosen confidence level. This paper describes a class of robust test statistics for simple hypothesis tests, as well as an algorithm to determine the necessary inflation factor for model parameter fits, goodness-of-fit tests, and composite hypothesis tests. It then presents some example applications of the methods to real neutrino interaction data and model comparisons.
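
As a deliberately simplified illustration of why unknown correlations call for conservative statistics (this is not the paper's construction): for two measurements with unknown correlation, one can scan the correlation coefficient and use the worst-case variance when testing their compatibility.

```python
import numpy as np

# two measurements of the same quantity with unknown correlation
x1, s1 = 1.00, 0.10
x2, s2 = 1.25, 0.12

rhos = np.linspace(-1.0, 1.0, 201)
var_diff = s1**2 + s2**2 - 2.0 * rhos * s1 * s2   # variance of (x1 - x2) for each rho
worst_var = var_diff.max()                         # maximized at rho = -1

z_naive = abs(x1 - x2) / np.sqrt(s1**2 + s2**2)    # assumes rho = 0
z_robust = abs(x1 - x2) / np.sqrt(worst_var)       # conservative for any correlation

print(f"naive z = {z_naive:.2f}, robust z = {z_robust:.2f}")
```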

This manuscript describes the notions of blocker and interdiction applied to well-known optimization problems. The main interest of these two concepts is the capability to analyze whether a combinatorial structure still exists after some modifications. We focus on graph modifications, such as removing vertices or links in a network. In the interdiction version, we have a budget for modifications and aim to reduce as much as possible the size of a given combinatorial structure, whereas in the blocker version we minimize the number of modifications such that the network no longer contains a given combinatorial structure. Blocker and interdiction problems have some similarities and can be applied to well-known optimization problems. We consider matching, connectivity, shortest path, max flow, and clique problems, and for each of them we analyze either the blocker version or the interdiction one. Applying the concept of blocker or interdiction to well-known optimization problems can change their complexity: some optimization problems become harder when one of these two notions is applied. For this reason, we propose some complexity analyses to show when an optimization problem, or the associated decision problem, becomes harder. Another fundamental aspect developed in the manuscript is the use of exact methods to tackle these optimization problems. The main way to solve them is to model them with integer linear programming. An interesting aspect of integer linear programming is the possibility of analyzing theoretically the strength of the models using cutting planes. For most of the problems studied in this manuscript, a polyhedral analysis is performed to prove the strength of inequalities or to describe new families of inequalities. The exact algorithms proposed are based on Branch-and-Cut or Branch-and-Price algorithms, where dedicated separation and pricing algorithms are proposed.
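
A toy brute-force sketch of the blocker notion for matchings, on a tiny instance chosen for illustration (the manuscript's Branch-and-Cut / Branch-and-Price machinery is not reproduced here): find the minimum number of edge deletions after which the maximum matching falls below a target size.

```python
import itertools
import networkx as nx

def matching_blocker(G, target):
    """Smallest number of edge deletions so that the maximum matching size <= target."""
    edges = list(G.edges())
    for k in range(len(edges) + 1):              # try smaller budgets first
        for removed in itertools.combinations(edges, k):
            H = G.copy()
            H.remove_edges_from(removed)
            if len(nx.max_weight_matching(H, maxcardinality=True)) <= target:
                return k, removed
    return len(edges), tuple(edges)

# on a 6-cycle the maximum matching has size 3; two deletions suffice to force <= 2
print(matching_blocker(nx.cycle_graph(6), target=2))
```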

Delay Tolerant Networking (DTN) aims to address a myriad of significant networking challenges that appear in time-varying settings, such as mobile and satellite networks, wherein changes in network topology are frequent and often subject to environmental constraints. Within this paradigm, routing problems are often solved by extending classical graph-theoretic path-finding algorithms, such as the Bellman-Ford or Floyd-Warshall algorithms, to the time-varying setting; such extensions are simple to understand, but they have strict optimality criteria and can exhibit non-polynomial scaling. Acknowledging this, we study time-varying shortest path problems on metric graphs whose vertices are traced by semi-algebraic curves. As an exemplary application, we establish a polynomial upper bound on the number of topological critical events encountered by a set of $n$ satellites moving along elliptic curves in low Earth orbit (per orbital period). Experimental evaluations on networks derived from STARLINK satellite TLEs demonstrate that this geometric framework not only yields routing schemes between satellites that require an order of magnitude fewer recomputations than graph-based methods, but also shows that metric spanner properties exist in metric graphs derived from real-world data, opening the door for broader applications of geometric DTN routing.
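
A small sketch of snapshot-based routing over a time-varying graph, with a made-up contact plan (this is the naive recompute-on-change baseline, not the paper's semi-algebraic framework): the route is recomputed only when the edge set changes, and each such change corresponds to a topological critical event.

```python
import networkx as nx

# toy contact plan: (start_time, end_time, u, v) -- made-up values
contacts = [(0, 5, "A", "B"), (0, 9, "B", "C"), (4, 9, "A", "C"),
            (6, 9, "A", "D"), (0, 9, "D", "C")]

def snapshot(t):
    G = nx.Graph()
    G.add_nodes_from("ABCD")
    G.add_edges_from((u, v) for (s, e, u, v) in contacts if s <= t < e)
    return G

prev_edges, recomputations, route = None, 0, None
for t in range(10):
    G = snapshot(t)
    edges = set(G.edges())
    if edges != prev_edges:                       # topology changed: critical event
        recomputations += 1
        route = nx.shortest_path(G, "A", "C") if nx.has_path(G, "A", "C") else None
    prev_edges = edges
    print(t, route)

print("recomputations:", recomputations)
```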

Extreme events over large spatial domains may exhibit highly heterogeneous tail dependence characteristics, yet most existing spatial extremes models yield only one dependence class over the entire spatial domain. To accurately characterize "data-level dependence" in the analysis of extreme events, we propose a mixture model that achieves flexible dependence properties and allows high-dimensional inference for extremes of spatial processes. We modify the popular random scale construction that multiplies a Gaussian random field by a single radial variable; we allow the radial variable to vary smoothly across space and add non-stationarity to the Gaussian process. As the level of extremeness increases, this single model exhibits both asymptotic independence at long ranges and either asymptotic dependence or independence at short ranges. We make joint inference on the dependence model and a marginal model using a copula approach within a Bayesian hierarchical model. Three different simulation scenarios show close to nominal frequentist coverage rates. Lastly, we apply the model to a dataset of extreme summertime precipitation over the central United States. We find that the joint tail of precipitation exhibits a non-stationary dependence structure that cannot be captured by limiting extreme value models or current state-of-the-art sub-asymptotic models.
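
For intuition, a minimal sketch of the classical random-scale construction that the proposed model generalizes: a stationary Gaussian field multiplied by a single Pareto radial variable (the paper instead lets the radial variable vary smoothly over space and makes the Gaussian process non-stationary; all parameters below are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)
s = np.linspace(0, 1, 100)                              # 1-D spatial grid
cov = np.exp(-np.abs(s[:, None] - s[None, :]) / 0.2)    # exponential covariance
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(s)))    # jitter for stability

n_rep = 500
W = L @ rng.standard_normal((len(s), n_rep))            # Gaussian field replicates
R = rng.uniform(size=n_rep) ** (-1.0 / 4.0)             # Pareto(4) radial variable
X = R * W                                               # random-scale construction

print("empirical 0.99 quantile at the first site:", np.quantile(X[0], 0.99))
```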

Identifying predictive covariates, which forecast individual treatment effectiveness, is crucial for decision-making across disciplines such as personalized medicine. These covariates, referred to as biomarkers, are extracted from pre-treatment data, often within randomized controlled trials, and should be distinguished from prognostic biomarkers, which are independent of treatment assignment. Our study focuses on discovering predictive imaging biomarkers, i.e., specific image features, by leveraging pre-treatment images to uncover new causal relationships. Unlike labor-intensive approaches relying on handcrafted features prone to bias, we present a novel task of directly learning predictive features from images. We propose an evaluation protocol to assess a model's ability to identify predictive imaging biomarkers and to differentiate them from purely prognostic ones by employing statistical testing and a comprehensive analysis of image feature attribution. We explore the suitability of deep learning models originally developed for estimating the conditional average treatment effect (CATE) for this task; these models have been assessed primarily for the precision of their CATE estimates, while their evaluation for imaging biomarker discovery has been overlooked. Our proof-of-concept analysis demonstrates the feasibility and potential of our approach in discovering and validating predictive imaging biomarkers from synthetic outcomes and real-world image datasets. Our code is available at \url{//github.com/MIC-DKFZ/predictive_image_biomarker_analysis}.
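
A schematic, tabular stand-in for the predictive-versus-prognostic distinction (synthetic data and a simple T-learner, not the paper's image-based deep models): with randomized treatment, a two-model CATE estimate should track the predictive feature and ignore the purely prognostic one.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
x_pred = rng.normal(size=n)                  # predictive feature (modulates the effect)
x_prog = rng.normal(size=n)                  # prognostic feature (shifts the outcome)
t = rng.integers(0, 2, size=n)               # randomized treatment assignment
y = 2.0 * x_prog + t * (1.0 + x_pred) + rng.normal(scale=0.5, size=n)

X = np.column_stack([x_pred, x_prog])
m1 = RandomForestRegressor().fit(X[t == 1], y[t == 1])   # treated outcome model
m0 = RandomForestRegressor().fit(X[t == 0], y[t == 0])   # control outcome model
cate = m1.predict(X) - m0.predict(X)                     # estimated CATE

# the CATE estimate should correlate with x_pred but not with x_prog
print(np.corrcoef(cate, x_pred)[0, 1], np.corrcoef(cate, x_prog)[0, 1])
```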

Given that distributed systems face adversarial behaviors such as eavesdropping and cyberattacks, how to ensure that the evidence fusion result is credible becomes a topic that must be addressed. Different from traditional research that assumes nodes are cooperative, we focus on three requirements for evidence fusion, i.e., preserving the evidence's privacy, identifying attackers and excluding their evidence, and dissipating the high conflict among evidence caused by random noise and interference. To this end, this paper proposes an algorithm for credible evidence fusion against cyberattacks. Firstly, the fusion strategy is constructed based on conditionalized credibility to avoid counterintuitive fusion results caused by high conflict. Under this strategy, distributed evidence fusion is transformed into the average consensus problem for the weighted average value by conditional credibility of multi-source evidence (WAVCCME), which implies a more concise consensus process and lower computational complexity than existing algorithms. Secondly, a state decomposition and reconstruction strategy with weight encryption is designed, and its effectiveness for privacy preservation under directed graphs is guaranteed: states are decomposed into different random sub-states for different neighbors to defend against internal eavesdroppers, and the sub-states' weights are encrypted in the reconstruction to guard against out-of-system eavesdroppers. Finally, the identities and types of attackers are identified by inter-neighbor broadcasting and comparison of nodes' states, and the proposed update rule with state corrections is used to achieve consensus on the WAVCCME. The states of normal nodes are shown to converge to their WAVCCME, while the attackers' evidence is excluded from the fusion, as verified by a simulation on a distributed unmanned reconnaissance swarm.
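
For reference, a bare-bones sketch of the average-consensus primitive that the WAVCCME computation reduces to (no privacy-preserving decomposition, weight encryption, or attacker detection; the topology and weights are illustrative): each node iteratively averages with its neighbors using doubly stochastic weights and converges to the network-wide average.

```python
import numpy as np

# adjacency of an undirected 6-node path graph (illustrative topology)
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian

W = np.eye(n) - 0.3 * L                   # symmetric, doubly stochastic weights

rng = np.random.default_rng(0)
x = rng.random(n)                         # each node's local (credibility-weighted) value
target = x.mean()
for _ in range(300):
    x = W @ x                             # one synchronous consensus step

print("consensus state:", x.round(4), "network average:", round(target, 4))
```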

Neural networks are groups of neurons stacked together in multiple layers to mimic the biological neurons of the human brain. For several decades, neural networks have been trained using the backpropagation algorithm, which is based on a gradient descent strategy, and several variants have been developed to improve it. The loss function of a neural network is optimized through backpropagation, but the loss landscape of the constructed network contains many local minima, each corresponding to a different solution. The gradient descent strategy cannot avoid the problem of local minima and may get stuck in one of them depending on the initialization. Particle swarm optimization (PSO) has been proposed to select the best local minimum within the search space of the loss function; however, the search is limited to the instantiated particles, and the algorithm sometimes cannot select the best solution. In the proposed approach, we overcome the problem of gradient descent and the limitation of the PSO algorithm by training individual neurons separately, which are then capable of collectively solving the problem as a group of neurons forming a network.
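
As a point of comparison for the discussion above, a compact sketch of vanilla PSO applied to the flattened weight vector of a tiny 2-2-1 network on XOR (network size, hyperparameters, and task are illustrative; this is the standard whole-network PSO baseline, not the per-neuron training scheme proposed here).

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def loss(w):
    # unpack a 9-dimensional weight vector into a 2-2-1 network
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return np.mean((out - y) ** 2)

n_particles, dim = 30, 9
pos = rng.normal(scale=2.0, size=(n_particles, dim))   # particle positions = weight vectors
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(300):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best XOR mean-squared error:", loss(gbest))
```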

Alignment is a social phenomenon wherein individuals share a common goal or perspective. Mirroring, or mimicking the behaviors and opinions of another individual, is one mechanism by which individuals can become aligned. Large-scale investigations of the effect of mirroring on alignment have been limited by the poor scalability of traditional experimental designs in sociology. In this paper, we introduce a simple computational framework that enables studying the effect of mirroring behavior on alignment in multi-agent systems. We simulate systems of interacting large language models in this framework and characterize overall system behavior and alignment with quantitative measures of agent dynamics. We find that system behavior is strongly influenced by the range of communication of each agent and that these effects are exacerbated by increased rates of mirroring. We discuss the observed simulated system behavior in the context of known human social dynamics.
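
A toy numeric stand-in for the two knobs highlighted above (no language models are involved, and the "communication range" here is defined in opinion space purely for illustration): agents hold scalar opinions, see only agents within a range, and with some probability mirror a visible agent by moving toward its opinion.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, steps = 50, 2000
comm_range, mirror_rate = 0.3, 0.5          # the two parameters varied in the study
opinions = rng.random(n_agents)

for _ in range(steps):
    i = rng.integers(n_agents)
    visible = np.flatnonzero(np.abs(opinions - opinions[i]) < comm_range)
    if len(visible) > 1 and rng.random() < mirror_rate:
        j = rng.choice(visible)
        opinions[i] += 0.5 * (opinions[j] - opinions[i])   # mirror agent j

print("opinion spread (alignment proxy):", opinions.std())
```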

Traditional neural networks (multi-layer perceptrons) have become an important tool in data science due to their success across a wide range of tasks. However, their performance is sometimes unsatisfactory, and they often require a large number of parameters, primarily due to their reliance on the linear combination structure. Meanwhile, additive regression has been a popular alternative to linear regression in statistics. In this work, we introduce novel deep neural networks that incorporate the idea of additive regression. Our neural networks share architectural similarities with Kolmogorov-Arnold networks but are based on simpler yet flexible activation and basis functions. Additionally, we introduce several hybrid neural networks that combine this architecture with that of traditional neural networks. We derive their universal approximation properties and demonstrate their effectiveness through simulation studies and a real-data application. The numerical results indicate that our neural networks generally achieve better performance than traditional neural networks while using fewer parameters.
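
A minimal sketch of the additive idea using a basis expansion and least squares (the basis functions and fitting procedure below are simple placeholder choices, not the paper's architecture): each input feature passes through its own flexible 1-D function, and the results are summed instead of entering only through a linear combination.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 2
X = rng.uniform(-1, 1, size=(n, d))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.1, size=n)

def basis(x, centers, width=0.3):
    # Gaussian bump basis for a single feature
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

centers = np.linspace(-1, 1, 10)
Phi = np.hstack([basis(X[:, j], centers) for j in range(d)])   # additive design matrix
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)                 # fit per-feature functions
pred = Phi @ coef

print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```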

The quasi-2D electrostatic systems, characterized by periodicity in two dimensions with a free third dimension, have garnered significant interest in many fields. We apply the sum-of-Gaussians (SOG) approximation to the Laplace kernel, dividing the interactions into near-field, mid-range, and long-range components. The near-field component, singular but compactly supported in a local domain, is calculated directly. The mid-range component is handled using a procedure similar to nonuniform fast Fourier transforms (NUFFTs) in three dimensions. The long-range component, which includes Gaussians of large variance, is treated with polynomial interpolation/anterpolation in the free dimension and a Fourier spectral solver in the other two dimensions on proxy points. Unlike the fast Ewald summation, which requires extensive zero padding in the case of high aspect ratios, the separability of Gaussians allows us to handle such cases without any zero padding in the free direction. Furthermore, while NUFFTs typically rely on certain upsampling in each dimension, and the truncated kernel method introduces an additional factor of upsampling due to kernel oscillation, our scheme eliminates the need for upsampling in any direction thanks to the smoothness of Gaussians, significantly reducing the computational cost for large-scale problems. Finally, whereas all periodic fast multipole methods require dividing the periodic tiling into a smooth far part and a near part containing its nearest neighboring cells, our scheme operates directly on the fundamental cell, resulting in better performance with a simpler implementation. We provide a rigorous error analysis showing that upsampling is not required in the NUFFT-like steps, achieving $O(N\log N)$ complexity with a small prefactor. The performance of the scheme is demonstrated via extensive numerical experiments.
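
To make the kernel-splitting idea concrete, here is a small sketch of a sum-of-Gaussians approximation to the Laplace kernel $1/r$, obtained by trapezoidal discretization of the integral representation $1/r = (2/\sqrt{\pi}) \int_0^\infty e^{-r^2 t^2}\,dt$ after the substitution $t = e^s$ (the node spacing and truncation below are ad hoc choices, not the paper's optimized SOG coefficients).

```python
import numpy as np

h = 0.2
s = np.arange(-10.0, 3.0 + h, h)                    # quadrature nodes in log scale
weights = (2.0 / np.sqrt(np.pi)) * h * np.exp(s)    # Gaussian weights
alphas = np.exp(2.0 * s)                            # Gaussian exponents

def sog(r):
    # sum-of-Gaussians approximation of 1/r
    r = np.asarray(r)[..., None]
    return np.sum(weights * np.exp(-alphas * r**2), axis=-1)

r = np.linspace(0.5, 5.0, 50)
rel_err = np.abs(sog(r) - 1.0 / r) * r              # relative error over the test range
print("max relative error:", rel_err.max())
```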
