
The generalised gamma (GG) distribution has become a staple model for positive data in statistics due to its interpretable parameters and tractable equations. Although there are many generalised forms of the GG which can provide a better fit to data, none of them extend the GG so that the parameters remain interpretable. In this paper, we introduce the flexible interpretable gamma (FIG) distribution, derived by Weibullisation of the body-tail generalised normal distribution. The parameters of the FIG are verified graphically and mathematically as having interpretable roles in controlling the left-tail, body, and right-tail shape. Additionally, we present some mathematical characteristics of the FIG and prove the identifiability of its parameters. Finally, we apply the FIG model to hand grip strength and insurance loss data to assess its flexibility relative to existing models.

Related content

Private synthetic data sharing is preferred over releasing summary statistics because it preserves the distribution and nuances of the original data. State-of-the-art methods adopt a select-measure-generate paradigm, but measuring large-domain marginals still incurs substantial error, and allocating the privacy budget across iterations remains difficult. To address these issues, our method employs a partition-based approach that effectively reduces errors and improves the quality of synthetic data, even with a limited privacy budget. Our experimental results demonstrate the superiority of our method over existing approaches: the synthetic data it produces exhibits improved quality and utility, making it a preferable choice for private synthetic data sharing.

In this paper, the strong formulation of the generalised Navier-Stokes momentum equation is investigated. Specifically, we examine the formulation of the shear-stress divergence term, owing to its effect on the performance and accuracy of computational methods. The term may be expressed in two different ways. While the first formulation is commonly used, the alternative derivation is found to be potentially more convenient for direct numerical manipulation. The alternative formulation relocates part of the strain information under the variable-coefficient Laplacian operator, which could make future computational schemes simpler and permit larger time-step sizes.
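For a Newtonian stress with variable viscosity $\mu$ and an incompressible velocity field $u_i$ (so $\partial_k u_k = 0$), the two formulations alluded to above can be sketched in index notation; this is an illustrative identity consistent with the description, not necessarily the paper's exact derivation:

```latex
\partial_j\!\left[\mu\left(\partial_j u_i + \partial_i u_j\right)\right]
  \;=\;
  \underbrace{\partial_j\!\left(\mu\,\partial_j u_i\right)}_{\text{variable-coefficient Laplacian}}
  \;+\;
  \underbrace{(\partial_i u_j)\,(\partial_j \mu)}_{\text{remaining strain contribution}} ,
```

where the second term follows from $\partial_j(\mu\,\partial_i u_j) = \mu\,\partial_i(\partial_j u_j) + (\partial_i u_j)(\partial_j\mu)$ together with incompressibility. The left-hand side is the commonly used form; the right-hand side isolates a variable-coefficient Laplacian, which is typically easier to treat implicitly.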

Projection-based testing for mean trajectory differences in two groups of irregularly and sparsely observed functional data has garnered significant attention in the literature because it accommodates a wide spectrum of group differences and (non-stationary) covariance structures. This article presents the derivation of the theoretical power function and the introduction of a comprehensive power and sample size (PASS) calculation toolkit tailored to the projection-based testing method developed by Wang (2021). Our approach accommodates a wide spectrum of group difference scenarios and a broad class of covariance structures governing the underlying processes. Through extensive numerical simulations, we demonstrate the robustness of this testing method by showcasing that its statistical power remains nearly unaffected even when a certain percentage of observations are missing, rendering it 'missing-immune'. Furthermore, we illustrate the practical utility of this test through analysis of two randomized controlled trials in Parkinson's disease. To facilitate implementation, we provide a user-friendly R package fPASS, complete with a detailed vignette to guide users through its practical application. We anticipate that this article will significantly enhance the usability of this potent statistical tool across a range of biostatistical applications, with a particular focus on its relevance in the design of clinical trials.

In this paper we analyze the weighted essentially non-oscillatory (WENO) schemes in the finite volume framework by examining the first step of the explicit third-order total variation diminishing Runge-Kutta method. The rationale for the improved performance of the finite volume WENO-M, WENO-Z and WENO-ZR schemes over WENO-JS in the first time step is that the nonlinear weights corresponding to large errors are adjusted to increase the accuracy of numerical solutions. Based on this analysis, we propose novel Z-type nonlinear weights of the finite volume WENO scheme for hyperbolic conservation laws. Instead of taking the difference of the smoothness indicators for the global smoothness indicator, we employ the logarithmic function with tuners to ensure that the numerical dissipation is reduced around discontinuities while the essentially non-oscillatory property is preserved. The proposed scheme does not necessitate substantial extra computational expenses. Numerical examples are presented to demonstrate the capability of the proposed WENO scheme in shock capturing.
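The classical fifth-order WENO-JS and WENO-Z nonlinear weights that this analysis builds on can be sketched as follows. This is a generic textbook illustration for a single cell interface; the paper's novel log-based Z-type weights with tuners are not reproduced here.

```python
import numpy as np

def weno5_weights(f, eps=1e-6, z_type=False):
    """Nonlinear weights of fifth-order finite-volume WENO at one cell interface.

    f: five cell averages (f_{i-2}, ..., f_{i+2}).  Returns (omega, beta).
    z_type=False gives the classical WENO-JS weights; z_type=True gives the
    WENO-Z weights with global smoothness indicator tau5 = |beta0 - beta2|.
    """
    fm2, fm1, f0, fp1, fp2 = f
    # Jiang-Shu smoothness indicators of the three candidate substencils
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
    beta = np.array([b0, b1, b2])
    d = np.array([0.1, 0.6, 0.3])  # ideal (linear) weights
    if z_type:
        tau5 = abs(b0 - b2)        # global smoothness indicator
        alpha = d * (1.0 + tau5 / (beta + eps))
    else:
        alpha = d / (beta + eps)**2
    return alpha / alpha.sum(), beta
```

On smooth data the nonlinear weights recover the ideal weights (restoring fifth-order accuracy), while near a discontinuity the weight of the non-smooth substencil is driven toward zero.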

In this paper we propose polarized consensus-based dynamics in order to make consensus-based optimization (CBO) and sampling (CBS) applicable to objective functions with several global minima or distributions with many modes, respectively. To this end, we "polarize" the dynamics with a localizing kernel; the resulting model can be viewed as a bounded confidence model for opinion formation in the presence of a common objective. Instead of being attracted to a common weighted mean as in the original consensus-based methods, which prevents the detection of more than one minimum or mode, in our method every particle is attracted to a weighted mean which gives more weight to nearby particles. We prove that in the mean-field regime the polarized CBS dynamics are unbiased for Gaussian targets. We also prove that in the zero temperature limit and for sufficiently well-behaved strongly convex objectives the solution of the Fokker--Planck equation converges in the Wasserstein-2 distance to a Dirac measure at the minimizer. Finally, we propose a computationally more efficient generalization which works with a predefined number of clusters and improves upon our polarized baseline method for high-dimensional optimization.
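The mechanism described above can be sketched in a few lines: each particle's attractor is a Gibbs-weighted mean that is additionally localized by a kernel around that particle. This is a minimal illustrative discretization, assuming a Gaussian localizing kernel and anisotropic noise; the parameter names and values are placeholders, not the paper's exact formulation.

```python
import numpy as np

def polarized_cbo_step(X, f, beta=30.0, kappa=0.5, lam=1.0, sigma=0.3,
                       dt=0.01, rng=None):
    """One Euler-Maruyama step of polarized consensus-based dynamics (sketch).

    X: (N, d) particle positions; f: objective mapping (N, d) -> (N,).
    Each particle i is drawn toward its own localized weighted mean m_i,
    where Gibbs weights exp(-beta f(x_j)) are multiplied by a Gaussian
    localizing kernel k(x_i, x_j).
    """
    rng = np.random.default_rng() if rng is None else rng
    N, d = X.shape
    fx = f(X)
    gibbs = np.exp(-beta * (fx - fx.min()))        # shifted for stability
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * kappa**2))               # localizing kernel
    W = K * gibbs[None, :]                         # row i: weights seen by particle i
    M = (W @ X) / W.sum(axis=1, keepdims=True)     # localized weighted means m_i
    drift = -lam * (X - M) * dt
    noise = sigma * np.sqrt(dt) * np.abs(X - M) * rng.standard_normal((N, d))
    return X + drift + noise
```

Setting the kernel width kappa to infinity recovers (a variant of) the standard CBO drift toward a single global weighted mean; a finite kappa lets distinct particle clusters settle into distinct minima.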

In this paper, we propose a multiphysics finite element method for a quasi-static thermo-poroelasticity model with a nonlinear convective transport term. To design stable numerical methods and reveal the multi-physical processes of deformation, diffusion and heat transfer, we introduce three new variables to reformulate the original model as a coupled fluid problem. We then introduce a Newton-type iterative algorithm that replaces the convective transport term with $\nabla T^{i}\cdot(\bm{K}\nabla p^{i-1})$, $\nabla T^{i-1}\cdot(\bm{K}\nabla p^{i})$ and $\nabla T^{i-1}\cdot(\bm{K}\nabla p^{i-1})$, and apply the Banach fixed point theorem to prove the convergence of the proposed method. We then propose a multiphysics finite element method with this Newton iteration, which is equivalent to a stabilized method and can effectively overcome the numerical oscillation caused by the nonlinear thermal convection term. We also prove that the fully discrete multiphysics finite element method attains an optimal convergence order. Finally, we draw conclusions to summarize the main results of this paper.

We couple the L1 discretization of the Caputo fractional derivative in time with a Galerkin scheme in space to devise a linear numerical method for the semilinear subdiffusion equation. Two notable features of our setting are nonsmooth initial data and a time-dependent diffusion coefficient. We prove the stability and convergence of the method under weak regularity assumptions on the diffusivity. We obtain error estimates that are optimal pointwise in space and global in time, and verify them with several numerical experiments.
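The L1 discretization itself is standard: on a uniform grid $t_n = n\tau$ it approximates the Caputo derivative of order $\alpha \in (0,1)$ by weighting first differences of $u$ with the coefficients $a_j = (j+1)^{1-\alpha} - j^{1-\alpha}$. A minimal sketch of the time-stepping weights follows (uniform grid only; the paper's Galerkin spatial coupling and semilinear term are omitted):

```python
import numpy as np
from math import gamma

def l1_caputo(u, tau, alpha):
    """L1 approximation of the Caputo derivative D_t^alpha u at t_n = n*tau.

    u: values (u_0, ..., u_N) on a uniform grid of step tau; 0 < alpha < 1.
    Uses D^alpha u(t_m) ~ tau^(-alpha)/Gamma(2-alpha)
         * sum_{k=1}^{m} a_{m-k} (u_k - u_{k-1}),
    with a_j = (j+1)^(1-alpha) - j^(1-alpha).
    """
    n = len(u) - 1
    j = np.arange(n)
    a = (j + 1) ** (1 - alpha) - j ** (1 - alpha)   # L1 convolution weights
    c = tau ** (-alpha) / gamma(2 - alpha)
    d = np.zeros(n + 1)
    for m in range(1, n + 1):
        diffs = np.diff(u[: m + 1])                  # u_k - u_{k-1}, k = 1..m
        d[m] = c * np.dot(a[:m][::-1], diffs)        # a_{m-k} pairs with diff_k
    return d
```

A useful sanity check: the L1 scheme is exact on piecewise-linear functions, so for $u(t) = t$ it reproduces the exact Caputo derivative $t^{1-\alpha}/\Gamma(2-\alpha)$ to machine precision.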

This paper proposes a strategy to fundamentally resolve the problems of the conventional s-version of the finite element method (SFEM). Because SFEM can reasonably model an analysis domain by superimposing meshes with different spatial resolutions, it has intrinsic advantages of local high accuracy, low computation time, and a simple meshing procedure. However, it also has disadvantages, such as inaccurate numerical integration and matrix singularity. Although several additional techniques have been proposed to mitigate these limitations, they are computationally expensive or ad hoc, and detract from the method's strengths. To resolve these issues, we propose a novel strategy called B-spline-based SFEM. To improve the accuracy of numerical integration, we employ cubic B-spline basis functions with $C^2$-continuity across element boundaries as the global basis functions. To avoid matrix singularity, we apply different basis functions to different meshes; specifically, we employ Lagrange basis functions as the local basis functions. The numerical results indicate that, with the proposed method, numerical integration can be performed with sufficient accuracy without any of the additional techniques used in conventional SFEM. Furthermore, the proposed method avoids matrix singularity and is superior to conventional methods in terms of convergence when solving linear equations. Therefore, the proposed method has the potential to reduce computation time while maintaining accuracy comparable to conventional SFEM.
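The cubic B-spline basis functions mentioned above are standard objects, evaluable by the Cox-de Boor recursion; their $C^2$-continuity across element (knot) boundaries is automatic for simple knots. The sketch below is generic, not the paper's implementation:

```python
import numpy as np

def bspline_basis(x, knots, j, p):
    """Value of the degree-p B-spline basis function N_{j,p} at x,
    via the Cox-de Boor recursion.  Degree p=3 gives the cubic,
    C^2-continuous basis on simple knots."""
    if p == 0:
        return 1.0 if knots[j] <= x < knots[j + 1] else 0.0
    left = right = 0.0
    if knots[j + p] > knots[j]:
        left = ((x - knots[j]) / (knots[j + p] - knots[j])
                * bspline_basis(x, knots, j, p - 1))
    if knots[j + p + 1] > knots[j + 1]:
        right = ((knots[j + p + 1] - x) / (knots[j + p + 1] - knots[j + 1])
                 * bspline_basis(x, knots, j + 1, p - 1))
    return left + right
```

Two defining properties worth checking numerically: the basis forms a partition of unity on the interior of the knot span, and each $N_{j,p}$ is supported only on $[t_j, t_{j+p+1}]$, which keeps the resulting system matrices sparse.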

We consider Gibbs distributions, which are families of probability distributions over a discrete space $\Omega$ with probability mass function of the form $\mu^\Omega_\beta(\omega) \propto e^{\beta H(\omega)}$ for $\beta$ in an interval $[\beta_{\min}, \beta_{\max}]$ and $H( \omega ) \in \{0 \} \cup [1, n]$. The partition function is the normalization factor $Z(\beta)=\sum_{\omega \in\Omega}e^{\beta H(\omega)}$. Two important parameters of these distributions are the log partition ratio $q = \log \tfrac{Z(\beta_{\max})}{Z(\beta_{\min})}$ and the counts $c_x = |H^{-1}(x)|$. These are correlated with system parameters in a number of physical applications and sampling algorithms. Our first main result is to estimate the counts $c_x$ using roughly $\tilde O( \frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and $\tilde O( \frac{n^2}{\varepsilon^2} )$ samples for integer-valued distributions (ignoring some second-order terms and parameters), and we show this is optimal up to logarithmic factors. We illustrate with improved algorithms for counting connected subgraphs, independent sets, and perfect matchings. As a key subroutine, we also develop algorithms to compute the partition function $Z$ using $\tilde O(\frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and using $\tilde O(\frac{n^2}{\varepsilon^2})$ samples for integer-valued distributions.
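The basic identity behind sample-based partition function estimation is $\tfrac{Z(\beta_2)}{Z(\beta_1)} = \mathbb{E}_{\beta_1}\!\left[e^{(\beta_2-\beta_1)H(\omega)}\right]$, since $\sum_\omega e^{(\beta_2-\beta_1)H(\omega)}\,\mu^\Omega_{\beta_1}(\omega) = \sum_\omega e^{\beta_2 H(\omega)}/Z(\beta_1)$. The sketch below illustrates only this single-step estimator on a toy distribution; the paper's sample-optimal algorithms (which control variance across the whole interval $[\beta_{\min}, \beta_{\max}]$) are more involved.

```python
import numpy as np

def partition_ratio(samples_H, beta1, beta2):
    """Estimate Z(beta2)/Z(beta1) from i.i.d. samples of H(omega) drawn
    under mu_beta1, using Z(beta2)/Z(beta1) = E_beta1[exp((beta2-beta1) H)]."""
    h = np.asarray(samples_H, dtype=float)
    return np.mean(np.exp((beta2 - beta1) * h))

# Toy Gibbs distribution: Omega = {0, ..., 5}, H(omega) = omega,
# so Z(beta) = sum_k exp(beta * k) is computable exactly for comparison.
rng = np.random.default_rng(1)
H_vals = np.arange(6.0)
b1, b2 = 0.0, 0.1
p1 = np.exp(b1 * H_vals)
p1 /= p1.sum()                                   # exact sampler for mu_b1
samples = rng.choice(H_vals, size=200_000, p=p1)
est = partition_ratio(samples, b1, b2)
exact = np.exp(b2 * H_vals).sum() / np.exp(b1 * H_vals).sum()
```

The estimator's variance blows up when $\beta_2 - \beta_1$ is large relative to the spread of $H$, which is why practical algorithms chain many small temperature steps; the paper's sample-complexity bounds quantify how few samples such a schedule can get away with.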

The negative binomial distribution (NBD) has been theorized to express a scale-invariant property of many-body systems and has been consistently shown to outperform other statistical models both in describing the multiplicity of quantum-scale events in particle collision experiments and in predicting the prevalence of cosmological observables, such as the number of galaxies in a region of space. Despite its widespread applicability and empirical success in these contexts, a theoretical justification for the NBD from first principles has remained elusive for fifty years. The accuracy of the NBD in modeling hadronic, leptonic, and semileptonic processes is suggestive of a highly general principle, which is yet to be understood. This study demonstrates that a statistical event of the NBD can in fact be derived in a general context via the dynamical equations of a canonical ensemble of particles in Minkowski space. These results describe a fundamental feature of many-body systems that is consistent with data from the ALICE and ATLAS experiments and provide an explanation for the emergence of the NBD in these multiplicity observations. Two methods are used to derive this correspondence: the Feynman path integral and a hypersurface parametrization of a propagating ensemble.
