We study the impact of merging routines in merge-based sorting algorithms. More precisely, we focus on the galloping routine that TimSort uses to merge monotonic sub-arrays, hereafter called runs, and on the impact on the number of element comparisons performed if one uses this routine instead of a naïve merging routine. This routine was introduced in order to make TimSort more efficient on arrays with few distinct values. Alas, we prove that, although it makes TimSort sort arrays with two values in linear time, it does not prevent TimSort from requiring up to $\Theta(n \log(n))$ element comparisons to sort arrays of length $n$ with three distinct values. However, we also prove that slightly modifying TimSort's galloping routine results in requiring only $\mathcal{O}(n + n \log(\sigma))$ element comparisons in the worst case when sorting arrays of length $n$ with $\sigma$ distinct values. We do so by focusing on the notion of dual runs, which was introduced in the 1990s, and on the associated dual run-length entropy. This notion is related both to the number of distinct values and to the number of runs in an array, the latter having its own associated run-length entropy, which was used to explain TimSort's otherwise "supernatural" efficiency. We also introduce new notions of fast- and middle-growth for natural merge sorts (i.e., algorithms based on merging runs); these properties hold for several merge sorting algorithms similar to TimSort. We prove that algorithms with the fast- or middle-growth property, provided that they use our variant of TimSort's galloping routine for merging runs, are as efficient as possible at sorting arrays with low run-induced or dual-run-induced complexities.
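
For readers unfamiliar with galloping, here is a minimal sketch of the exponential-search step on which the routine is built; it is an illustration in the spirit of TimSort's routine, not its exact implementation nor the modified variant studied above, and the function name and signature are ours.

```python
import bisect

def gallop_right(key, a, lo, hi):
    """Return the index at which `key` would be inserted into the sorted
    slice a[lo:hi], staying to the right of any elements equal to `key`.
    Galloping first brackets the position with exponentially growing probes
    (offsets 1, 3, 7, 15, ...), then binary-searches inside the bracket,
    so it costs O(log d) comparisons, where d is the distance to the answer."""
    prev, offset = 0, 1
    while lo + offset < hi and a[lo + offset] <= key:
        prev, offset = offset, 2 * offset + 1   # exponential (galloping) probes
    right = min(lo + offset, hi)
    # finish with a binary search inside the bracketed slice a[lo+prev:right]
    return bisect.bisect_right(a, key, lo + prev, right)
```

During a merge, a galloping sort switches from one-by-one comparisons to calls such as gallop_right(a[i], b, j, len(b)) once one run has won several consecutive comparisons, so that a long block of elements is located and copied after $\mathcal{O}(\log d)$ comparisons rather than $d$; for instance, gallop_right(5, [1, 2, 5, 5, 5, 9, 12], 0, 7) returns 5.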

Related content

Using nonlinear projections and preserving structure in model order reduction (MOR) are currently active research fields. In this paper, we provide a novel differential geometric framework for model reduction on smooth manifolds, which emphasizes the geometric nature of the objects involved. The crucial ingredient is the construction of an embedding for the low-dimensional submanifold and a compatible reduction map, for which we discuss several options. Our general framework allows capturing and generalizing several existing MOR techniques, such as structure preservation for Lagrangian or Hamiltonian dynamics, and using nonlinear projections that are relevant, for instance, in transport-dominated problems. The joint abstraction can be used to derive shared theoretical properties for different methods, such as an exact reproduction result. To connect our framework to existing work in the field, we demonstrate that various techniques for data-driven construction of nonlinear projections can be included in our framework.
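
One way to make the "embedding plus compatible reduction map" requirement concrete (the notation below is ours, not necessarily the paper's): with full state space $\mathcal{M}$ and reduced manifold $\check{\mathcal{M}}$, one asks for smooth maps
\[
\varphi \colon \check{\mathcal{M}} \hookrightarrow \mathcal{M}, \qquad
\rho \colon \mathcal{M} \to \check{\mathcal{M}}, \qquad
\rho \circ \varphi = \mathrm{id}_{\check{\mathcal{M}}},
\]
and a generic projection-based reduction of the full dynamics $\dot{x} = f(x)$ then reads
\[
\dot{\check{x}} = \mathrm{d}\rho\big|_{\varphi(\check{x})}\, f\big(\varphi(\check{x})\big),
\]
which recovers classical linear-subspace MOR when $\varphi$ and $\rho$ are linear maps.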

We establish several convexity properties for the entropy and Fisher information of mixtures of centered Gaussian distributions. First, we prove that if $X_1, X_2$ are independent scalar Gaussian mixtures, then the entropy of $\sqrt{t}X_1 + \sqrt{1-t}X_2$ is concave in $t \in [0,1]$, thus confirming a conjecture of Ball, Nayar and Tkocz (2016) for this class of random variables. In fact, we prove a generalisation of this assertion which also strengthens a result of Eskenazis, Nayar and Tkocz (2018). For the Fisher information, we extend a convexity result of Bobkov (2022) by showing that the Fisher information matrix is operator convex as a matrix-valued function acting on densities of mixtures in $\mathbb{R}^d$. As an application, we establish rates for the convergence of the Fisher information matrix of the sum of weighted i.i.d. Gaussian mixtures in the operator norm along the central limit theorem under mild moment assumptions.
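
For concreteness (our notation): a centered scalar Gaussian mixture is a random variable of the form $X = Y G$ with $Y$ a positive random variable independent of $G \sim \mathcal{N}(0,1)$, and the concavity statement for two such independent variables $X_1, X_2$ reads
\[
t \;\longmapsto\; h\!\left(\sqrt{t}\,X_1 + \sqrt{1-t}\,X_2\right) \quad \text{is concave on } [0,1],
\]
where $h$ denotes the differential entropy.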

We study the complexity (that is, the weight of the multiplication table) of the elliptic normal bases introduced by Couveignes and Lercier. We give an upper bound on the complexity of these elliptic normal bases, and we analyze the weight of some special vectors related to the multiplication table of those bases. This analysis leads us to some perspectives on the search for low complexity normal bases from elliptic periods.
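
As a reminder of the quantity being bounded (a standard definition, stated in our notation): if $\alpha$ generates a normal basis $(\alpha, \alpha^q, \dots, \alpha^{q^{n-1}})$ of $\mathbb{F}_{q^n}$ over $\mathbb{F}_q$, write
\[
\alpha \cdot \alpha^{q^i} \;=\; \sum_{j=0}^{n-1} t_{i,j}\,\alpha^{q^j}, \qquad t_{i,j} \in \mathbb{F}_q,
\]
and the complexity (the weight of the multiplication table) is the number of nonzero coefficients $t_{i,j}$; low-complexity normal bases are sought because this weight governs the cost of multiplication in normal-basis arithmetic.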

In this study, a finite deformation phase-field formulation is developed to investigate the effect of hygrothermal conditions on the viscoelastic-viscoplastic fracture behavior of epoxy nanocomposites under cyclic loading. The formulation incorporates a definition of the Helmholtz free energy that accounts for the effects of nanoparticles, moisture content, and temperature. The free energy is additively decomposed into a deviatoric equilibrium, a deviatoric non-equilibrium, and a volumetric contribution, with distinct definitions for tension and compression. The proposed derivation offers realistic modeling of the damage and viscoplasticity mechanisms in the nanocomposites by coupling the phase-field damage model, equipped with a modified crack driving force, with a viscoelastic-viscoplastic model. Numerical simulations are conducted to study the cyclic force-displacement response of both dry and saturated boehmite nanoparticle (BNP)/epoxy samples, considering different BNP contents and temperatures. Comparing the numerical results with experimental data shows good agreement at various BNP contents. In addition, the predictive capability of the phase-field model is evaluated through simulations of single-edge notched nanocomposite plates subjected to monotonic tensile and shear loading.
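
Schematically, and only as a generic illustration of such a split rather than the authors' exact expressions, the free energy takes the form
\[
\psi \;=\; \psi^{\mathrm{dev}}_{\mathrm{eq}} + \psi^{\mathrm{dev}}_{\mathrm{neq}} + \psi^{\mathrm{vol}},
\qquad
\psi^{\mathrm{vol}} = \psi^{\mathrm{vol}}_{+} + \psi^{\mathrm{vol}}_{-},
\]
with the phase-field degradation function $g(d)$ acting only on the tension-related contributions, so that compressive volumetric energy does not drive crack growth.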

When different researchers study the same research question using the same dataset, they may obtain different and potentially even conflicting results. This is because there is often substantial flexibility in researchers' analytical choices, an issue also referred to as "researcher degrees of freedom". Combined with selective reporting of the smallest p-value or the largest effect, researcher degrees of freedom may lead to an increased rate of false positive and overoptimistic results. In this paper, we address this issue by formalizing the multiplicity of analysis strategies as a multiple testing problem. As the test statistics of different analysis strategies are usually highly dependent, a naive approach such as the Bonferroni correction is inappropriate because it leads to an unacceptable loss of power. Instead, we propose using the "minP" adjustment method, which takes potential test dependencies into account and approximates the underlying null distribution of the minimal p-value through a permutation-based procedure. This procedure is known to achieve more power than simpler approaches while ensuring weak control of the family-wise error rate. We illustrate our approach to addressing researcher degrees of freedom by applying it to a study on the impact of perioperative paO2 on post-operative complications after neurosurgery. A total of 48 analysis strategies are considered and adjusted using the minP procedure. This approach makes it possible to selectively report the result of the analysis strategy yielding the most convincing evidence while controlling the type I error -- and thus the risk of publishing false positive results that may not be replicable.
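
A minimal sketch of the permutation-based, single-step minP adjustment in the spirit described above; how the m analysis strategies are rerun on permuted data is application-specific, and the function below only assumes that this has already been done.

```python
import numpy as np

def minp_adjust(p_observed, p_perm):
    """Single-step minP adjustment (Westfall-Young style sketch).

    p_observed : array of shape (m,), observed p-values of the m analysis
                 strategies applied to the original data.
    p_perm     : array of shape (B, m), p-values of the same m strategies
                 recomputed on B datasets permuted under the null
                 (e.g. with exposure labels shuffled once per iteration).

    The adjusted p-value of strategy j estimates P(min_k P_k <= p_j) under
    the null, which accounts for dependence between strategies and yields
    weak control of the family-wise error rate.
    """
    min_per_perm = p_perm.min(axis=1)   # permutation distribution of the minimal p-value
    adjusted = np.array([
        (1 + np.sum(min_per_perm <= p)) / (1 + p_perm.shape[0])
        for p in p_observed
    ])
    return np.minimum(adjusted, 1.0)
```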

In a variety of application areas, there is interest in assessing evidence of differences in the intensity of event realizations between groups. For example, in cancer genomic studies collecting data on rare variants, the focus is on assessing whether and how the variant profile changes with the disease subtype. Motivated by this application, we develop multiresolution nonparametric Bayes tests for differential mutation rates across groups. The multiresolution approach yields fast and accurate detection of spatial clusters of rare variants, and our nonparametric Bayes framework provides great flexibility for modeling the intensities of rare variants. Some theoretical properties are also assessed, including weak consistency of our Dirichlet Process-Poisson-Gamma mixture over multiple resolutions. Simulation studies illustrate excellent small sample properties relative to competitors, and we apply the method to detect rare variants related to common variable immunodeficiency from whole exome sequencing data on 215 patients and over 60,027 control subjects.
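
To convey the flavour of the approach in a deliberately generic form (this is a schematic hierarchy, not the authors' exact specification): within a genomic window $A$ at a given resolution, the group-$g$ variant count could be modelled as
\[
N_g(A) \mid \lambda_g(A) \sim \mathrm{Poisson}\bigl(\lambda_g(A)\bigr), \qquad
\lambda_g(A) \sim \mathrm{Gamma}(a, b),
\]
with a Dirichlet-process prior sharing information across windows, and the test for differential rates comparing the group-specific intensities $\lambda_1(A)$ and $\lambda_2(A)$ resolution by resolution.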

Machine-learned normalizing flows can be used in the context of lattice quantum field theory to generate statistically correlated ensembles of lattice gauge fields at different action parameters. This work demonstrates how these correlations can be exploited for variance reduction in the computation of observables. Three different proof-of-concept applications are demonstrated using a novel residual flow architecture: continuum limits of gauge theories, the mass dependence of QCD observables, and hadronic matrix elements based on the Feynman-Hellmann approach. In all three cases, it is shown that statistical uncertainties are significantly reduced when machine-learned flows are incorporated as compared with the same calculations performed with uncorrelated ensembles or direct reweighting.
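
The mechanism behind the variance reduction is the familiar one for correlated estimators: if estimates $\widehat{O}$ and $\widehat{O}'$ of the same observable at neighbouring action parameters are computed on statistically correlated ensembles, then
\[
\mathrm{Var}\bigl(\widehat{O} - \widehat{O}'\bigr)
= \mathrm{Var}\bigl(\widehat{O}\bigr) + \mathrm{Var}\bigl(\widehat{O}'\bigr)
- 2\,\mathrm{Cov}\bigl(\widehat{O}, \widehat{O}'\bigr),
\]
so the large positive covariance induced by the flow-generated correlations directly reduces the uncertainty of differences and derivatives of observables, which is exactly what enters continuum limits, mass dependences, and Feynman-Hellmann matrix elements.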

In this work, we propose a heuristic clearing method for day-ahead electricity markets. In the first part of the process, a computationally less demanding problem is solved using an approximation of the cumulative demand and supply curves, which are derived via the aggregation of simple bids. Based on the outcome of this problem, estimated ranges for the clearing prices of the individual periods are determined. In the final step, the clearing problem for the original bid set is solved, taking into account the previously determined price ranges as constraints. Adding such constraints reduces the feasibility region of the clearing problem. By removing simple bids whose acceptance or rejection is already determined by the assumed price-range constraints, the size of the problem is also significantly reduced. Via simple examples, we show that, due to the possible paradoxical rejection of block bids, the proposed bid-aggregation approach may result in a suboptimal solution or in an infeasible problem, but we also point out that these pitfalls of the algorithm may be avoided by using different aggregation patterns. We therefore propose constructing multiple different aggregation patterns and using parallel computing to enhance the performance of the algorithm. We test the proposed approach on setups of various problem sizes and conclude that, with parallel computing on 4 threads, a high success rate and a significant gain in computational speed can be achieved.
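
As a toy illustration of the aggregation-based first stage (simple bids only; block bids, ties, and price indeterminacy are deliberately ignored, and all names are ours), uniform-price clearing of aggregated simple bids can be sketched as follows.

```python
def clear_simple_bids(buy_bids, sell_bids):
    """Toy uniform-price clearing of simple (price, quantity) bids.

    buy_bids, sell_bids : lists of (price, quantity) tuples.
    Returns (clearing_price, traded_volume) for the candidate price that
    maximises the traded volume on the aggregated demand/supply curves.
    """
    candidate_prices = sorted({p for p, _ in buy_bids} | {p for p, _ in sell_bids})
    best_price, best_volume = None, -1.0
    for p in candidate_prices:
        demand = sum(q for price, q in buy_bids if price >= p)   # buyers accepting price p
        supply = sum(q for price, q in sell_bids if price <= p)  # sellers accepting price p
        volume = min(demand, supply)
        if volume > best_volume:
            best_price, best_volume = p, volume
    return best_price, best_volume
```

In the heuristic described above, the clearing price obtained from such aggregated curves is only used to derive the estimated price ranges, which are then imposed as constraints when the original, non-aggregated bid set (including block bids) is cleared.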

Dynamical systems across the sciences, from electrical circuits to ecological networks, undergo qualitative and often catastrophic changes in behavior, called bifurcations, when their underlying parameters cross a threshold. Existing methods predict oncoming catastrophes in individual systems but are primarily time-series-based and struggle both to categorize qualitative dynamical regimes across diverse systems and to generalize to real data. To address this challenge, we propose a data-driven, physically informed deep-learning framework for classifying dynamical regimes and characterizing bifurcation boundaries, based on the extraction of topologically invariant features. We focus on the paradigmatic case of the supercritical Hopf bifurcation, which is used to model periodic dynamics across a wide range of applications. Our convolutional attention method is trained with data augmentations that encourage the learning of topological invariants, which can be used to detect bifurcation boundaries in unseen systems and to design models of biological systems such as oscillatory gene regulatory networks. We further demonstrate our method's use in analyzing real data by recovering distinct proliferation and differentiation dynamics along the pancreatic endocrinogenesis trajectory in gene expression space from single-cell data. Our method provides valuable insights into the qualitative, long-term behavior of a wide range of dynamical systems and can detect bifurcations or catastrophic transitions in large-scale physical and biological systems.
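
For reference, the supercritical Hopf bifurcation is captured by the standard normal form, written in polar coordinates as
\[
\dot{r} = r\,(\mu - r^{2}), \qquad \dot{\theta} = \omega,
\]
so the equilibrium $r = 0$ is stable for $\mu < 0$ and, for $\mu > 0$, gives way to a stable limit cycle of radius $\sqrt{\mu}$; the boundary $\mu = 0$ between these two qualitative regimes is the kind of bifurcation boundary the classifier is trained to detect.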

Computing the marginal likelihood (also called the Bayesian model evidence) is an important task in Bayesian model selection, providing a principled quantitative way to compare models. The learned harmonic mean estimator solves the exploding-variance problem of the original harmonic mean estimator of the marginal likelihood. It does so by learning an importance sampling target distribution that approximates the optimal distribution. While the approximation need not be highly accurate, it is critical that the probability mass of the learned distribution is contained within the posterior in order to avoid the exploding-variance problem. In previous work, a bespoke optimization problem was introduced when training models in order to ensure that this property is satisfied. In the current article, we introduce the use of normalizing flows to represent the importance sampling target distribution. A flow-based model is trained on samples from the posterior by maximum likelihood estimation. Then, the probability density of the flow is concentrated by lowering the variance of the base distribution, i.e. by lowering its "temperature", ensuring that its probability mass is contained within the posterior. This approach avoids the need for a bespoke optimization problem and careful fine-tuning of parameters, resulting in a more robust method. Moreover, the use of normalizing flows has the potential to scale to high-dimensional settings. We present preliminary experiments demonstrating the effectiveness of flows for the learned harmonic mean estimator. The publicly available harmonic code implementing the learned harmonic mean estimator has been updated to support normalizing flows.
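
For context, recall the re-targeted harmonic mean identity that the learned density $\varphi$ enters (standard form, our notation): with posterior samples $\theta_i \sim p(\theta \mid d)$, prior $\pi$, likelihood $\mathcal{L}$ and evidence $z$, the reciprocal evidence $\rho = 1/z$ is estimated by
\[
\hat{\rho} \;=\; \frac{1}{N} \sum_{i=1}^{N} \frac{\varphi(\theta_i)}{\mathcal{L}(\theta_i)\,\pi(\theta_i)},
\]
which is unbiased for any normalised $\varphi$ supported within the posterior, but has well-behaved variance only when $\varphi$ is concentrated in the high-posterior-density region; concentrating a trained flow by lowering the temperature (variance) of its base distribution is one way to enforce this containment.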
