Many mechanisms behind the evolution of cooperation, such as reciprocity, indirect reciprocity, and altruistic punishment, require group knowledge of individual actions. But what keeps people cooperating when no one is looking? Conformist norm internalization, the tendency to abide by the behavior of the majority of the group even when doing so is individually harmful, could be the answer. In this paper, we analyze a world where (1) there is group selection and punishment by indirect reciprocity but (2) many actions (half) go unobserved, and therefore unpunished. Can norm internalization fill this "observation gap" and lead to high levels of cooperation, even when agents are in principle free to cooperate only when likely to be caught and punished? Specifically, we ask whether adding norm internalization to the strategy space of a public goods game can raise cooperation when both norm internalization and cooperation start out rare. We find the answer to be positive, but, interestingly, not because norm internalizers end up making up a substantial fraction of the population, nor because they cooperate much more than other agent types. Instead, by polarizing, catalyzing, and stabilizing cooperation, norm internalizers can raise the cooperation levels of other agent types while remaining a minority of the population.
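
As a toy illustration of the "observation gap" described above, the sketch below simulates a public goods game in which each action is observed, and defection punished, with probability 0.5. The agent types, payoffs, and parameter values are invented for illustration and are not the paper's model.

```python
import random

# Toy public goods game with a 50% "observation gap": each action is
# observed (and defection punished) with probability 0.5.
# Agent types (illustrative, not the paper's exact strategy space):
#   "opportunist"  - cooperates only when it expects to be observed
#   "internalizer" - cooperates unconditionally (the norm is internalized)

def play_round(agents, benefit=3.0, cost=1.0, fine=2.0, p_observe=0.5, rng=random):
    """Return (actions, payoffs) for one public goods round."""
    actions = []
    for kind in agents:
        if kind == "internalizer":
            actions.append(True)                      # cooperate regardless
        else:                                         # opportunist
            actions.append(rng.random() < p_observe)  # gamble on being watched
    pot = benefit * sum(actions)
    share = pot / len(agents)
    payoffs = []
    for act in actions:
        pay = share - (cost if act else 0.0)
        if not act and rng.random() < p_observe:      # caught defecting
            pay -= fine
        payoffs.append(pay)
    return actions, payoffs

rng = random.Random(0)
agents = ["internalizer"] * 3 + ["opportunist"] * 7
coop_rates = []
for _ in range(2000):
    actions, _ = play_round(agents, rng=rng)
    coop_rates.append(sum(actions) / len(actions))
mean_coop = sum(coop_rates) / len(coop_rates)
print(round(mean_coop, 2))  # a minority of internalizers lifts cooperation above the opportunist baseline
```

Even with internalizers at 30% of the population, the average cooperation rate sits well above the 50% an all-opportunist population would produce, which is the qualitative effect the abstract describes.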


Scale-free dynamics, formalized by self-similarity, provides a versatile paradigm used ubiquitously to model temporal dynamics in real-world data. However, its practical use has so far remained mostly univariate, whereas modern applications often demand multivariate data analysis. Accordingly, models for multivariate self-similarity were recently proposed; nevertheless, they have seen little practical use for lack of robust estimation procedures for the vector of self-similarity parameters. Building upon recent mathematical developments, the present work puts forth an efficient estimation procedure based on a theoretical study of the multiscale eigenstructure of the wavelet spectrum of multivariate self-similar processes. The estimation performance is studied theoretically in the asymptotic limits of large scales and sample sizes, and computationally for finite-size samples. As a practical outcome, a fully operational and documented multivariate signal-processing estimation toolbox is made freely available and is ready for practical use on real-world data. Its potential benefits are illustrated in epileptic seizure prediction from multi-channel EEG data.
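
For intuition, the univariate special case of wavelet-based self-similarity estimation fits in a few lines: regress the log2-variance of wavelet detail coefficients on the scale index. The paper's multivariate procedure replaces these scalar variances with the eigenvalues of the wavelet spectrum matrix; the Haar wavelet, scale range, and test signal below are illustrative assumptions, not the toolbox's implementation.

```python
import numpy as np

def haar_details(x, j):
    """L2-normalised Haar wavelet coefficients of x at dyadic scale 2**j."""
    n = (len(x) // 2**j) * 2**j
    blocks = x[:n].reshape(-1, 2**j)
    half = 2**(j - 1)
    return (blocks[:, :half].sum(axis=1) - blocks[:, half:].sum(axis=1)) / 2**(j / 2)

def hurst_wavelet(x, scales=(1, 2, 3, 4, 5)):
    """Estimate the self-similarity (Hurst) parameter H from variance scaling."""
    logvar = [np.log2(haar_details(x, j).var()) for j in scales]
    slope = np.polyfit(scales, logvar, 1)[0]  # log2 Var(d_j) ~ (2H - 1) * j for fGn
    return (slope + 1) / 2

rng = np.random.default_rng(0)
white = rng.standard_normal(2**15)  # white noise = fractional Gaussian noise with H = 0.5
print(round(hurst_wavelet(white), 2))
```

For white noise the detail-coefficient variance is flat across scales, so the regression slope is near zero and the estimate recovers H close to 0.5.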

Inspired by the traditional partial differential equation (PDE) approach to image denoising, we propose a novel neural network architecture, referred to as NODE-ImgNet, that combines neural ordinary differential equations (NODEs) with convolutional neural network (CNN) blocks. NODE-ImgNet is intrinsically a PDE model in which the dynamical system is learned implicitly, without explicit specification of the PDE; this naturally circumvents the typical issues associated with introducing artifacts during the learning process. By invoking the NODE structure, which can also be viewed as a continuous variant of a residual network (ResNet) and inherits its advantages for image denoising, our model achieves enhanced accuracy and parameter efficiency. In particular, our model exhibits consistent effectiveness in different scenarios, including denoising gray and color images perturbed by Gaussian noise as well as real-noisy images, and demonstrates superiority in learning from small image datasets.
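
The claim that a NODE is a continuous variant of a ResNet comes down to one observation: a residual block is a forward-Euler step of an ODE. A minimal sketch, with a fixed toy vector field standing in for the learned CNN:

```python
import numpy as np

# A residual block computes x_{k+1} = x_k + h * f(x_k); reading the block
# index as time, this is exactly forward-Euler integration of dx/dt = f(x).
# Here f is a fixed toy "layer" (tanh of a linear map), not a trained network.

W = np.array([[0.0, -1.0], [1.0, 0.0]])  # illustrative fixed weights

def f(x):
    return np.tanh(W @ x)

def resnet_forward(x, n_blocks, step=0.1):
    """A stack of residual blocks with shared weights."""
    for _ in range(n_blocks):
        x = x + step * f(x)
    return x

def euler_ode(x, t_end, step=0.1):
    """Forward-Euler integration of dx/dt = f(x) from t = 0 to t_end."""
    t = 0.0
    while t < t_end - 1e-12:
        x = x + step * f(x)
        t += step
    return x

x0 = np.array([1.0, 0.0])
print(np.allclose(resnet_forward(x0, 10), euler_ode(x0, 1.0)))  # prints True
```

Ten residual blocks with step 0.1 and an Euler solve to t = 1 perform the identical computation, which is why ODE-based denoisers inherit ResNet's behaviour while allowing continuous depth.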

We construct a bipartite generalization of Alon and Szegedy's nearly orthogonal vectors, thereby obtaining strong bounds for several extremal problems involving the Lov\'asz theta function, vector chromatic number, minimum semidefinite rank, nonnegative rank, and extension complexity of polytopes. In particular, we derive a couple of general lower bounds for the vector chromatic number which may be of independent interest.

We numerically investigate the generalized Steklov problem for the modified Helmholtz equation and focus on the relation between its spectrum and the geometric structure of the domain. We address three distinct aspects: (i) the asymptotic behavior of eigenvalues for polygonal domains; (ii) the dependence of the integrals of eigenfunctions on the domain symmetries; and (iii) the localization and exponential decay of Steklov eigenfunctions away from the boundary for smooth shapes and in the presence of corners. For this purpose, we implemented two complementary numerical methods to compute the eigenvalues and eigenfunctions of the associated Dirichlet-to-Neumann operator for various simply-connected planar domains. We also discuss applications of the obtained results in the theory of diffusion-controlled reactions and formulate several conjectures with relevance in spectral geometry.

We propose a new framework for the simultaneous inference of monotone and smoothly time-varying functions under complex temporal dynamics, combining monotone rearrangement with nonparametric estimation. We capitalize on a Gaussian approximation for the nonparametric monotone estimator and construct asymptotically correct simultaneous confidence bands (SCBs) via carefully designed bootstrap methods. We investigate two general and practical scenarios. The first is the simultaneous inference of monotone smooth trends in moderately high-dimensional time series, whereas most existing methods are designed for a single monotone smooth trend. In this setting, our proposed SCB empirically exhibits the narrowest width among existing approaches while maintaining its nominal confidence level; we apply it to the joint inference of temperature curves from multiple areas and to testing several hypotheses tailored to global warming. The second scenario is the simultaneous inference of monotone and smoothly time-varying regression coefficients in time-varying coefficient linear models. We apply the proposed algorithm to test the impact of sunshine duration on temperature, an effect believed to strengthen as the greenhouse effect grows more severe. The validity of the proposed methods is justified both in theory and by extensive simulations.
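
The monotone rearrangement step at the heart of such frameworks is simple to state: sorting the values of any curve estimate on an equispaced grid yields the closest increasing curve in L2. A minimal sketch with a toy Nadaraya-Watson smoother (the data, bandwidth, and grid are illustrative; the paper's SCB construction is not shown):

```python
import numpy as np

def kernel_smooth(x, y, grid, bandwidth=0.1):
    """Nadaraya-Watson estimate of E[y | x] on a grid (Gaussian kernel)."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

def monotone_rearrange(values):
    """Increasing rearrangement of a curve sampled on an equispaced grid."""
    return np.sort(values)

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 500)
y = x**2 + 0.1 * rng.standard_normal(500)   # true trend is increasing
grid = np.linspace(0, 1, 101)
raw = kernel_smooth(x, y, grid)             # noisy estimate may dip locally
mono = monotone_rearrange(raw)              # guaranteed nondecreasing
print(bool(np.all(np.diff(mono) >= 0)))     # prints True
```

Because rearrangement only permutes the fitted values, it never degrades the L2 estimation error when the target is truly monotone, which is what makes it attractive as a post-processing step for nonparametric estimators.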

This paper investigates the multiple testing problem for high-dimensional sparse binary sequences, motivated by the crowdsourcing problem in machine learning. We study the empirical Bayes approach to multiple testing in the high-dimensional Bernoulli model with a conjugate spike-and-uniform-slab prior. We first show that the hard thresholding rule deduced from the posterior distribution is suboptimal; consequently, the $\ell$-value procedure constructed from this posterior tends to be overly conservative in estimating the false discovery rate (FDR). We then propose two new procedures, based on adjusted $\ell$-values and on $q$-values, to correct this issue. Sharp frequentist theoretical results are obtained, demonstrating that both procedures can effectively control the FDR under sparsity. Numerical experiments are conducted to validate our theory in finite samples. To the best of our knowledge, this work provides the first uniform FDR control result in multiple testing for high-dimensional sparse binary data.
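
As a frequentist reference point for the FDR-controlling procedures discussed above, here is the classical Benjamini-Hochberg step-up rule on p-values; note that this is the standard baseline, not the paper's empirical Bayes construction:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.1):
    """Return a boolean rejection mask controlling FDR at level alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m        # step-up thresholds
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                        # reject the k smallest p-values
    return reject

rng = np.random.default_rng(2)
null_p = rng.uniform(size=90)                       # 90 true nulls
signal_p = rng.uniform(size=10) * 1e-4              # 10 strong signals
reject = benjamini_hochberg(np.concatenate([null_p, signal_p]), alpha=0.1)
print(int(reject.sum()))
```

All ten strong signals fall far below the smallest step-up threshold and are rejected, while the uniform nulls contribute at most a few false discoveries, consistent with FDR control at the 10% level.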

Canonical correlation analysis (CCA) is a popular statistical technique for exploring relationships between datasets. In recent years, the estimation of sparse canonical vectors has emerged as an important but challenging variant of the CCA problem, with widespread applications. Unfortunately, existing rate-optimal estimators for sparse canonical vectors have high computational cost. We propose a quasi-Bayesian estimation procedure that not only achieves the minimax estimation rate but is also easy to compute by Markov chain Monte Carlo (MCMC). The method builds on Tan et al. (2018) and uses a re-scaled Rayleigh quotient function as the quasi-log-likelihood. However, unlike Tan et al. (2018), we adopt a Bayesian framework that combines this quasi-log-likelihood with a spike-and-slab prior to regularize the inference and promote sparsity. We investigate the empirical behavior of the proposed method on both continuous and truncated data, and we demonstrate that it outperforms several state-of-the-art methods. As an application, we use the proposed methodology to maximally correlate clinical variables and proteomic data for a better understanding of Covid-19.
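
For background, classical (dense) CCA has a closed form via the SVD of the whitened cross-covariance; the quasi-Bayesian method replaces this closed form with MCMC over a spike-and-slab posterior to induce sparsity. A sketch on synthetic data with one shared latent factor (all names and parameters here are illustrative):

```python
import numpy as np

def cca_first_pair(X, Y, ridge=1e-8):
    """Leading canonical correlation and direction vectors for X, Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Sxx = X.T @ X / n + ridge * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + ridge * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx))   # whitening transforms
    Wy = np.linalg.inv(np.linalg.cholesky(Syy))
    U, s, Vt = np.linalg.svd(Wx @ Sxy @ Wy.T)     # singular values = canonical correlations
    a = Wx.T @ U[:, 0]                            # canonical directions
    b = Wy.T @ Vt[0]
    return s[0], a, b

rng = np.random.default_rng(3)
z = rng.standard_normal(1000)                     # shared latent factor
X = np.column_stack([z, rng.standard_normal(1000)]) + 0.1 * rng.standard_normal((1000, 2))
Y = np.column_stack([z, rng.standard_normal(1000)]) + 0.1 * rng.standard_normal((1000, 2))
rho, a, b = cca_first_pair(X, Y)
print(round(rho, 2))
```

The leading canonical correlation is close to one because the first coordinates of X and Y share the latent factor z, and the recovered direction loads almost entirely on that coordinate. In high dimensions this SVD approach breaks down, which is what motivates the sparse estimators the abstract discusses.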

We propose a new method for constructing layer-adapted meshes for singularly perturbed differential equations (SPDEs), based on mesh partial differential equations (MPDEs) that incorporate \emph{a posteriori} solution information. Numerous studies have developed parameter-robust numerical methods for SPDEs that depend on the layer-adapted mesh of Bakhvalov. In~\citep{HiMa2021}, a novel MPDE-based approach for constructing a generalisation of these meshes was proposed. As with most layer-adapted mesh methods, the algorithms in that article depended on detailed derivations of \emph{a priori} bounds on the SPDE's solution and its derivatives. In this work we extend that approach so that it instead uses \emph{a posteriori} computed estimates of the solution. We present detailed algorithms for the efficient implementation of the method, together with numerical results for the robust solution of two-parameter reaction-convection-diffusion problems in one and two dimensions. We also provide full FEniCS code for a one-dimensional example.
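
For intuition about layer-adapted meshes, the sketch below builds the simpler piecewise-uniform Shishkin mesh for a boundary layer of width O(eps) at x = 0. The Bakhvalov-type meshes generated by MPDEs are smoothly graded rather than piecewise-uniform, so this is only a reference point, and the parameter sigma is an illustrative choice:

```python
import numpy as np

def shishkin_mesh(n, eps, sigma=2.0):
    """n+1 mesh points on [0, 1]; half the cells resolve the layer at x = 0."""
    tau = min(0.5, sigma * eps * np.log(n))        # transition point
    fine = np.linspace(0.0, tau, n // 2 + 1)       # dense inside the layer
    coarse = np.linspace(tau, 1.0, n - n // 2 + 1) # uniform outside it
    return np.concatenate([fine, coarse[1:]])

mesh = shishkin_mesh(64, eps=1e-4)
widths = np.diff(mesh)
print(len(mesh), round(widths.min() / widths.max(), 6))
```

With eps = 1e-4 the cells inside the layer are several orders of magnitude smaller than those outside, which is the concentration of mesh points that makes parameter-robust convergence possible.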

Physics-based and first-principles models pervade the engineering and physical sciences, making it possible to model the dynamics of complex systems with a prescribed accuracy. The approximations used in deriving governing equations often result in discrepancies between the model and sensor-based measurements of the system, reflecting the approximate nature of the equations and/or the signal-to-noise ratio of the sensor itself. In modern dynamical systems, such discrepancies between model and measurement can lead to poor quantification of the dynamics, often undermining the ability to produce accurate and precise control algorithms. We introduce a discrepancy modeling framework to identify the missing physics and resolve the model-measurement mismatch with two distinct approaches: (i) by learning a model for the evolution of the systematic state-space residual, and (ii) by discovering a model for the deterministic dynamical error. Regardless of approach, a common suite of data-driven model discovery methods can be used. The choice of method depends on one's intent for discrepancy modeling (e.g., mechanistic interpretability), on sensor measurement characteristics (e.g., quantity, quality, resolution), and on constraints imposed by practical applications. We demonstrate both approaches using this suite of data-driven model discovery methods on three continuous dynamical systems under varying signal-to-noise ratios, and we highlight the structural shortcomings of each discrepancy modeling approach depending on the error type. In summary, if the true dynamics are unknown (i.e., the model is imperfect), one should learn a discrepancy model of the missing physics in the dynamical space; if the true dynamics are known yet model-measurement mismatch persists, one should learn a discrepancy model in the state space.
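
Approach (i) in miniature: when an imperfect model misses a physics term, the systematic residual between measurements and model predictions can be regressed onto a small library of candidate terms. This is a SINDy-flavoured sketch; the "true" and "imperfect" models below are invented for illustration and are not the paper's test systems:

```python
import numpy as np

def true_rhs(x):
    return -1.0 * x + 0.5 * x**3       # "full" physics (illustrative)

def imperfect_rhs(x):
    return -1.0 * x                    # cubic term missing from the model

x = np.linspace(-1.0, 1.0, 200)
measured = true_rhs(x) + 0.01 * np.random.default_rng(4).standard_normal(200)
residual = measured - imperfect_rhs(x)  # systematic model-measurement mismatch

# Regress the residual onto a small library of candidate discrepancy terms.
library = np.column_stack([x, x**2, x**3])
coef, *_ = np.linalg.lstsq(library, residual, rcond=None)
print(np.round(coef, 2))
```

Least squares recovers a coefficient near 0.5 on the cubic term and near zero on the others, identifying the missing physics from the residual alone. Sparse regression (rather than plain least squares) is what makes this identification robust when the candidate library is large.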

Gaussian processes (GPs) are widely used tools in spatial statistics and machine learning. The formulae for the mean function and covariance kernel of a GP $T u$ that is the image of another GP $u$ under a linear transformation $T$ acting on the sample paths of $u$ are well known, almost to the point of being folklore. However, these formulae are often used without rigorous attention to technical details, particularly when $T$ is an unbounded operator such as a differential operator, as is common in many modern applications. This note provides a self-contained proof of the claimed formulae for the case of a closed, densely defined operator $T$ acting on the sample paths of a square-integrable (not necessarily Gaussian) stochastic process. Our proof technique relies upon Hille's theorem for the Bochner integral of a Banach-valued random variable.
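
The folklore formulae in question can be stated compactly. Writing $m$ and $k$ for the mean and covariance of $u$, and under the regularity conditions the note establishes:

```latex
\begin{aligned}
  m_{Tu}(s)   &= (T m)(s), \\
  k_{Tu}(s,t) &= \operatorname{Cov}\bigl(Tu(s),\, Tu(t)\bigr)
               = \bigl(T_s T_t\, k\bigr)(s,t),
\end{aligned}
```

where $T_s$ and $T_t$ denote $T$ applied to $k(s,t)$ in its first and second argument, respectively. For a bounded $T$ these identities follow by exchanging $T$ with the expectation; the note's contribution is justifying that exchange when $T$ is merely closed and densely defined.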
