
The delimitation of biological species, i.e., deciding which individuals belong to the same species and whether and how many different species are represented in a data set, is key to the conservation of biodiversity. Much existing work uses only genetic data for species delimitation, often employing some kind of cluster analysis. This can be misleading, because geographically distant groups of individuals can be genetically quite different even if they belong to the same species. This paper investigates the problem of testing, based on genetic and spatial data, whether two potentially separated groups of individuals can belong to a single species. Various approaches, some of which already exist in the literature, are compared on simulated metapopulations generated with SLiM and GSpace, two software packages that can simulate spatially explicit genetic data at an individual level. The approaches involve partial Mantel tests, maximum likelihood mixed-effects models with a population effect, and jackknife-based homogeneity tests. A key challenge is that most of these tests operate on genetic and geographical distance data, violating standard independence assumptions. Simulations showed that partial Mantel tests and mixed-effects models have larger power than jackknife-based methods, but tend to display type-I error rates slightly above the significance level. Moreover, a multiple regression model neglecting the dependence in the dissimilarities did not show an inflated type-I error rate. An application to brassy ringlets concludes the paper.
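To make the distance-based testing concrete: a partial Mantel test correlates two distance matrices while controlling for a third, with significance assessed by jointly permuting the rows and columns of one matrix. The NumPy sketch below is our own minimal illustration (function names and the toy setup are not from the paper):

```python
import numpy as np

def _offdiag(m):
    """Flatten the strict upper triangle of a square distance matrix."""
    return m[np.triu_indices_from(m, k=1)]

def _partial_r(a, b, c):
    """First-order partial correlation of a and b, controlling for c."""
    r_ab = np.corrcoef(a, b)[0, 1]
    r_ac = np.corrcoef(a, c)[0, 1]
    r_bc = np.corrcoef(b, c)[0, 1]
    return (r_ab - r_ac * r_bc) / np.sqrt((1 - r_ac**2) * (1 - r_bc**2))

def partial_mantel(gen, grp, geo, n_perm=999, seed=None):
    """Correlate genetic and group-membership distances while controlling for
    geographic distance; p-value from joint row/column permutations of gen."""
    rng = np.random.default_rng(seed)
    a, b, c = _offdiag(gen), _offdiag(grp), _offdiag(geo)
    obs = _partial_r(a, b, c)
    exceed = 0
    for _ in range(n_perm):
        p = rng.permutation(gen.shape[0])
        if _partial_r(_offdiag(gen[np.ix_(p, p)]), b, c) >= obs:
            exceed += 1
    return obs, (exceed + 1) / (n_perm + 1)
```

The permutation scheme shuffles individuals, not matrix cells, which preserves the dependence structure within each permuted distance matrix.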


As the development of formal proofs is a time-consuming task, it is important to devise ways of sharing already written proofs to prevent wasting time redoing them. One of the challenges in this domain is to translate proofs written in proof assistants based on impredicative logics to proof assistants based on predicative logics, whenever impredicativity is not used in an essential way. In this paper we present a transformation for sharing proofs with a core predicative system supporting prenex universe polymorphism (as in Agda). It consists of attempting to elaborate each term into a predicative universe-polymorphic term that is as general as possible. The use of universe polymorphism is justified by the fact that mapping each universe to a fixed one in the target theory is not sufficient in most cases. During the elaboration, we need to solve unification problems in the equational theory of universe levels. In order to do this, we give a complete characterization of when a single equation admits a most general unifier. This characterization is then employed in a partial algorithm which uses a constraint-postponement strategy for trying to solve unification problems. The proposed translation is of course partial, but in practice allows one to translate many proofs that do not use impredicativity in an essential way. Indeed, it was implemented in the tool Predicativize and then used to translate semi-automatically many non-trivial developments from Matita's library to Agda, including proofs of Bertrand's Postulate and Fermat's Little Theorem, which, as far as we know, were not yet available in Agda.
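To give a flavor of the equational theory involved: Agda-style universe levels are generated by 0, successor, variables, and max, and elaboration must decide equalities between such expressions. Below is a toy normalizer of our own (not the paper's algorithm), using the absorption law max(x+n, m) = x+n for m <= n, which holds because level variables range over the naturals:

```python
def normalize(const, atoms):
    """Canonical form of max(const, x1+n1, ..., xk+nk).  A constant part
    dominated by some shifted variable can be dropped (set to 0)."""
    if any(const <= n for n in atoms.values()):
        const = 0
    return (const, tuple(sorted(atoms.items())))

def level(const=0, **shifts):
    """Build a level from a constant part and shifted variables, e.g.
    level(3, x=1) stands for max(3, x+1)."""
    return normalize(const, dict(shifts))

def lmax(a, b):
    """Max of two levels: merge constants, keep the larger shift per variable."""
    (ca, atoms_a), (cb, atoms_b) = a, b
    atoms = dict(atoms_a)
    for v, n in atoms_b:
        atoms[v] = max(atoms.get(v, n), n)
    return normalize(max(ca, cb), atoms)

def lsuc(a):
    """Successor of a level: suc distributes over max, shifting everything by 1."""
    const, atoms = a
    return normalize(const + 1, {v: n + 1 for v, n in atoms})
```

In this toy model, two level expressions are equal in the theory exactly when their canonical forms coincide; the paper's unification problems arise when such equations still contain metavariables.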

It is well known that Newton's method, especially when applied to large problems such as the discretization of nonlinear partial differential equations (PDEs), can have trouble converging if the initial guess is too far from the solution. This work focuses on accelerating this convergence, in the context of the discretization of nonlinear elliptic PDEs. We first provide a quick review of existing methods, and justify our choice of learning an initial guess with a Fourier neural operator (FNO). This choice was motivated by the mesh-independence of such operators, whose training and evaluation can be performed on grids with different resolutions. The FNO is trained on generated data by minimizing loss functions based on the PDE discretization. Numerical results, in one and two dimensions, show that the proposed initial guess accelerates the convergence of Newton's method by a large margin compared to a naive initial guess, especially for highly nonlinear or anisotropic problems.
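As a minimal illustration of the warm-starting idea (with the learned FNO replaced by the solution of the linearized problem, purely as a stand-in), consider Newton's method on a finite-difference discretization of -u'' + u^3 = f on (0, 1) with homogeneous Dirichlet conditions:

```python
import numpy as np

def newton(F, J, u0, tol=1e-8, maxit=50):
    """Plain Newton iteration; returns the solution and the iteration count."""
    u = u0.copy()
    for k in range(maxit):
        r = F(u)
        if np.linalg.norm(r) < tol:
            return u, k
        u -= np.linalg.solve(J(u), r)
    return u, maxit

# Finite-difference discretization of -u'' + u^3 = f on (0, 1), u(0)=u(1)=0.
n = 64
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = 100.0 * np.sin(np.pi * x)

F = lambda u: A @ u + u**3 - f          # nonlinear residual
J = lambda u: A + np.diag(3.0 * u**2)   # Jacobian of F

u_naive, it_naive = newton(F, J, np.zeros(n))
u_init = np.linalg.solve(A, f)          # cheap surrogate for a learned guess
u_warm, it_warm = newton(F, J, u_init)
```

The informed guess saves the iterations Newton would otherwise spend far from the solution; in the paper that role is played by a trained FNO, whose evaluation is mesh-independent and far cheaper than a fine-grid linear solve.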

This work concerns the implementation of the hybridizable discontinuous Galerkin (HDG) method for solving the linear anisotropic elastic equation in the frequency domain. A first-order formulation with the compliance tensor and Voigt notation is employed to provide a compact description of the discretized problem and flexibility with highly heterogeneous media. We further focus on the question of the optimal choice of stabilization in the definition of the HDG numerical traces. For this purpose, we construct a hybridized Godunov-upwind flux for anisotropic elasticity possessing three distinct wavespeeds. This stabilization removes the need to choose scaling factors, contrary to the identity- and Kelvin-Christoffel-based stabilizations that are popular choices in the literature. We carry out comparisons among these families for isotropic and anisotropic materials, with constant and highly heterogeneous backgrounds, in two and three dimensions. They establish the optimality of the Godunov stabilization, which can be used as a reference choice for generic materials and different types of waves.
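The wavespeeds mentioned above come from the Kelvin-Christoffel matrix Gamma_ik = c_ijkl n_j n_l: for a unit propagation direction n, the three speeds are the square roots of its eigenvalues divided by the density. A NumPy sketch of our own, with the isotropic sanity check c_p = sqrt((lambda+2mu)/rho) and c_s = sqrt(mu/rho):

```python
import numpy as np

def isotropic_stiffness(lam, mu):
    """Full rank-4 isotropic stiffness tensor c_ijkl."""
    d = np.eye(3)
    return (lam * np.einsum('ij,kl->ijkl', d, d)
            + mu * (np.einsum('ik,jl->ijkl', d, d)
                    + np.einsum('il,jk->ijkl', d, d)))

def wavespeeds(c, rho, direction):
    """The three eigen-wavespeeds of the Kelvin-Christoffel matrix
    Gamma_ik = c_ijkl n_j n_l for a unit propagation direction n."""
    n = np.asarray(direction, dtype=float)
    n /= np.linalg.norm(n)
    gamma = np.einsum('ijkl,j,l->ik', c, n, n)
    return np.sqrt(np.linalg.eigvalsh(gamma) / rho)   # ascending order
```

For a genuinely anisotropic tensor the same routine yields direction-dependent quasi-P and quasi-S speeds, which is what a Godunov-type flux needs per face.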

It has been shown that deep neural networks of a large enough width are universal approximators but they are not if the width is too small. There were several attempts to characterize the minimum width $w_{\min}$ enabling the universal approximation property; however, only a few of them found the exact values. In this work, we show that the minimum width for $L^p$ approximation of $L^p$ functions from $[0,1]^{d_x}$ to $\mathbb R^{d_y}$ is exactly $\max\{d_x,d_y,2\}$ if an activation function is ReLU-like (e.g., ReLU, GELU, Softplus). Compared to the known result for ReLU networks, $w_{\min}=\max\{d_x+1,d_y\}$ when the domain is $\smash{\mathbb R^{d_x}}$, our result is the first to show that approximation on a compact domain requires smaller width than on $\smash{\mathbb R^{d_x}}$. We next prove a lower bound on $w_{\min}$ for uniform approximation using general activation functions including ReLU: $w_{\min}\ge d_y+1$ if $d_x<d_y\le2d_x$. Together with our first result, this shows a dichotomy between $L^p$ and uniform approximations for general activation functions and input/output dimensions.

The brains of all bilaterally symmetric animals on Earth are divided into left and right hemispheres. The anatomy and functionality of the hemispheres have a large degree of overlap, but there are asymmetries, and they specialize to possess different attributes. Several computational models mimic hemispheric asymmetries with a focus on reproducing human data on semantic and visual processing tasks. In this study, we aimed to understand how dual hemispheres in a bilateral architecture interact to perform well in a given task. We propose a bilateral artificial neural network that imitates lateralization observed in nature: the left hemisphere specializes in specifics and the right in generalities. We used different training objectives to achieve the desired specialization and tested it on an image classification task with two different CNN backbones -- ResNet and VGG. Our analysis found that the hemispheres represent complementary features that are exploited by a network head which implements a form of weighted attention. The bilateral architecture outperformed a range of baselines of similar representational capacity that do not exploit differential specialization, with the exception of a conventional ensemble of unilateral networks trained on a dual training objective for specifics and generalities. The results demonstrate the efficacy of bilateralism, contribute to the discussion of bilateralism in biological brains, and suggest that the principle may serve as an inductive bias for new AI systems.
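A toy sketch of the gating idea (our own simplification, with random weights standing in for trained ResNet/VGG backbones): two "hemisphere" extractors feed a head that forms a softmax-weighted combination of their features before a shared classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_feat, n_cls = 8, 16, 4

# Two toy "hemispheres" (single tanh layers standing in for CNN backbones);
# in the paper's setting the left/right weights would come from training
# with specific- vs general-label objectives.
w_left = 0.5 * rng.normal(size=(d_in, d_feat))
w_right = 0.5 * rng.normal(size=(d_in, d_feat))

def hemisphere(w, x):
    return np.tanh(x @ w)

def bilateral_head(f_left, f_right, w_att, w_out):
    """Network head: a softmax gate weights the two hemisphere features
    before a shared linear classifier."""
    feats = np.stack([f_left, f_right])        # (2, d_feat)
    scores = feats @ w_att                     # one attention score per side
    gate = np.exp(scores - scores.max())
    gate /= gate.sum()
    fused = gate @ feats                       # convex combination of features
    return fused @ w_out, gate

w_att = rng.normal(size=d_feat)
w_out = rng.normal(size=(d_feat, n_cls))
x = rng.normal(size=d_in)
logits, gate = bilateral_head(hemisphere(w_left, x), hemisphere(w_right, x),
                              w_att, w_out)
```

The gate makes the division of labor inspectable: per input, it reveals how much the head relies on the "specific" versus the "general" hemisphere.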

In Colombia, astronomical research is experiencing accelerated growth. In order to better understand its evolution and current state, we conducted a bibliometric study using data from the Astrophysics Data System (ADS) and Web of Science (WoS). In ADS, we identified 422 peer-reviewed publications from 1980, the year of the first publication, until 2023, which was the cutoff date for our study. Among the 25 Colombian institutions identified as participants in at least one publication, the contributions of four universities stand out: Universidad de los Andes, Universidad Nacional de Colombia, Universidad Industrial de Santander, and Universidad de Antioquia, with 104, 78, 68, and 67 publications, respectively. By cross-referencing information from ADS and WoS, we found that the areas with the greatest impact in publications are threefold: high-energy and fundamental physics, stars and stellar physics, and galaxies and cosmology. Globally, according to WoS, Colombia ranks 52nd in the number of peer-reviewed publications between 2019 and 2023, and fifth in Latin America. Additionally, we identified three highly cited publications (top 1% worldwide) belonging to the field of observational cosmology. When analyzing countries with equal or greater bibliographic production, we estimate that Colombian production is approximately four times lower than expected considering its population and GDP.

We show that the maximum likelihood estimator for mixtures of elliptically symmetric distributions is a consistent estimator of its population version, where the underlying distribution $P$ is nonparametric and does not necessarily belong to the class of mixtures on which the estimator is based. In a situation where $P$ is a mixture of well enough separated but nonparametric distributions, it is shown that the components of the population version of the estimator correspond to the well-separated components of $P$. This provides some theoretical justification for the use of such estimators for cluster analysis in case $P$ has well-separated subpopulations, even if these subpopulations differ from what the mixture model assumes.

We introduce a causal regularisation extension to anchor regression (AR) for improved out-of-distribution (OOD) generalisation. We present anchor-compatible losses, aligning with the anchor framework to ensure robustness against distribution shifts. Various multivariate analysis (MVA) algorithms, such as (Orthonormalized) PLS, RRR, and MLR, fall within the anchor framework. We observe that simple regularisation enhances robustness in OOD settings. Estimators for selected algorithms are provided, showcasing consistency and efficacy in synthetic and real-world climate science problems. The empirical validation highlights the versatility of anchor regularisation, emphasizing its compatibility with MVA approaches and its role in enhancing replicability while guarding against distribution shifts. The extended AR framework advances causal inference methodologies, addressing the need for reliable OOD generalisation.
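For concreteness, anchor regression in the style of Rothenhäusler et al. can be computed as ordinary least squares after transforming the data by W = I + (sqrt(gamma) - 1) P_A, where P_A projects onto the column space of the anchor variables. The sketch below is our own illustration of this base estimator, not the paper's extension to the MVA algorithms it covers:

```python
import numpy as np

def anchor_regression(X, Y, A, gamma):
    """Anchor regression via the data transform W = I + (sqrt(gamma)-1) P_A;
    OLS on the transformed data minimizes the anchor objective.
    gamma = 1 recovers plain OLS."""
    P = A @ np.linalg.pinv(A)                  # orthogonal projector onto col(A)
    W = np.eye(X.shape[0]) + (np.sqrt(gamma) - 1.0) * P
    beta, *_ = np.linalg.lstsq(W @ X, W @ Y, rcond=None)
    return beta
```

Larger gamma penalizes the part of the residual explained by the anchors, trading in-distribution fit for robustness against anchor-driven distribution shifts.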

We address a prime counting problem across the homology classes of a graph, presenting a graph-theoretical Dirichlet-type analogue of the prime number theorem. The main machinery we have developed and employed is a spectral antisymmetry theorem, revealing that the spectra of the twisted graph adjacency matrices have an antisymmetric distribution over the character group of the graph. Additionally, we derive some trace formulas based on the twisted adjacency matrices as part of our analysis.

Introduction: Taxonomies capture knowledge about a particular domain in a succinct manner and establish a common understanding among peers. Researchers use taxonomies to convey information about a particular knowledge area or to support automation tasks, and practitioners use them to enable communication beyond organizational boundaries. Aims: Despite this important role of taxonomies in software engineering, their quality is seldom evaluated. Our aim is to identify and define taxonomy quality attributes that provide practical measurements, helping researchers and practitioners to compare taxonomies and choose the one most adequate for the task at hand. Methods: We reviewed 324 publications from software engineering and information systems research and synthesized, when provided, the definitions of quality attributes and measurements. We evaluated the usefulness of the measurements on six taxonomies from three domains. Results: We propose the definition of seven quality attributes and suggest internal and external measurements that can be used to assess a taxonomy's quality. For two measurements we provide implementations in Python. We found the measurements useful for deciding which taxonomy is best suited for a particular purpose. Conclusion: While there exist several guidelines for creating taxonomies, there is a lack of actionable criteria to compare taxonomies. In this paper, we fill this gap by synthesizing, from a wealth of literature, seven non-overlapping taxonomy quality attributes and corresponding measurements. Future work encompasses further evaluation of their usefulness and empirical validation.
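As a flavor of what an internal structural measurement can look like (this is our own illustrative example, not one of the paper's seven attributes or its Python implementations): treating a taxonomy as a rooted tree of (parent, child) edges, one can compare breadth against depth.

```python
from collections import defaultdict

def node_depths(edges, root):
    """Depth of every node of a taxonomy given as (parent, child) edges."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)
    depth, stack = {root: 0}, [root]
    while stack:
        node = stack.pop()
        for c in children[node]:
            depth[c] = depth[node] + 1
            stack.append(c)
    return depth

def leaves_per_depth(edges, root):
    """Illustrative structural measurement: number of leaf classes per unit
    of hierarchy depth (flatter, broader taxonomies score higher)."""
    depth = node_depths(edges, root)
    parents = {p for p, _ in edges}
    leaves = [n for n in depth if n not in parents]
    return len(leaves) / max(depth.values())
```

Such purely structural indicators are cheap to compute and comparable across taxonomies, which is exactly the property the paper asks of practical measurements.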
