
This article introduces novel measures of inaccuracy and divergence based on survival extropy, together with their dynamic forms, and explores their properties and applications. To address the drawbacks of asymmetry and range limitations, we introduce two measures: the survival extropy inaccuracy ratio and symmetric divergence measures. The inaccuracy ratio is used for the analysis and classification of images. A goodness-of-fit test for the uniform distribution is developed using the survival extropy divergence. Characterizations of the exponential distribution are derived using the dynamic survival extropy inaccuracy and divergence measures. The article also proposes non-parametric estimators for the divergence measures and conducts simulation studies to validate their performance. Finally, it demonstrates the application of the symmetric survival extropy divergence to failure time data analysis.
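For intuition, here is a minimal plug-in sketch in Python, assuming the common definition of survival extropy, $-\frac{1}{2}\int_0^\infty \bar{F}^2(t)\,dt$, with a Kerridge-style cross term standing in for the inaccuracy measure; the paper's exact ratio and dynamic forms may differ:

```python
import numpy as np

def empirical_survival(sample, grid):
    """Empirical survival function F_bar_n(t) = P(X > t) evaluated on a grid."""
    sample = np.sort(np.asarray(sample))
    # fraction of observations strictly greater than each grid point
    return 1.0 - np.searchsorted(sample, grid, side="right") / sample.size

def survival_extropy_inaccuracy(x, y, n_grid=2000):
    """Plug-in estimate of K(F, G) = -1/2 * int F_bar(t) * G_bar(t) dt.

    x: sample from the 'true' lifetime distribution F
    y: sample from the 'reference' distribution G
    """
    grid = np.linspace(0.0, max(np.max(x), np.max(y)), n_grid)
    integrand = empirical_survival(x, grid) * empirical_survival(y, grid)
    # trapezoid rule for the integral
    return -0.5 * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(grid))

rng = np.random.default_rng(0)
x = rng.exponential(1.0, 500)   # F: Exp(1)
y = rng.exponential(2.0, 500)   # G: Exp(1/2), mean 2
print(survival_extropy_inaccuracy(x, y))   # cross survival extropy
print(survival_extropy_inaccuracy(x, x))   # reduces to survival extropy of F (about -1/4)
```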

Related Content

We present a novel computational framework to assess the structural integrity of welds. In the first stage of the simulation framework, local fractions of microstructural constituents within weld regions are predicted based on steel composition and welding parameters. The resulting phase fraction maps are used to define heterogeneous properties that are subsequently employed in structural integrity assessments using an elastoplastic phase field fracture model. The framework is particularised to predicting failure in hydrogen pipelines, demonstrating its potential to assess the feasibility of repurposing existing pipeline infrastructure to transport hydrogen. First, the process model is validated against experimental microhardness maps for vintage and modern pipeline welds. Additionally, the influence of welding conditions on hardness and residual stresses is investigated, demonstrating that variations in heat input, filler material composition, and weld bead order can significantly affect the properties within the weld region. Coupled hydrogen diffusion-fracture simulations are then conducted to determine the critical pressure at which hydrogen transport pipelines will fail. To this end, the model is enriched with a microstructure-sensitive description of hydrogen transport and hydrogen-dependent fracture resistance. The analysis of an X52 pipeline reveals that even 2 mm defects in a hard heat-affected zone can drastically reduce the critical failure pressure.
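As a rough illustration of the transport ingredient only, and not the paper's coupled microstructure-fracture model, a minimal 1D Fickian diffusion sketch with a hypothetical apparent diffusivity:

```python
import numpy as np

# Minimal 1D Fickian hydrogen diffusion through a pipe wall (illustrative only;
# the paper couples microstructure-sensitive transport with phase field fracture).
L, nx = 0.01, 101                  # wall thickness [m], grid points
D = 1e-9                           # hypothetical apparent diffusivity [m^2/s]
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D               # explicit stability limit: dt <= dx^2 / (2D)
c = np.zeros(nx)
c[0] = 1.0                         # normalized concentration at inner surface

for _ in range(20000):             # march toward steady state
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    c[0], c[-1] = 1.0, 0.0         # Dirichlet boundaries

print(c[::10])                     # approaches the linear steady-state profile
```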

Nested integration problems arise in various scientific and engineering applications, including Bayesian experimental design, financial risk assessment, and uncertainty quantification. These nested integrals take the form $\int f\left(\int g(\boldsymbol{y},\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}\right)\mathrm{d}\boldsymbol{y}$, for nonlinear $f$, making them computationally challenging, particularly in high-dimensional settings. Although widely used for single integrals, traditional Monte Carlo (MC) methods can be inefficient when faced with the complexities of nested integration. This work introduces a novel multilevel estimator, combining deterministic and randomized quasi-MC (rQMC) methods, to handle nested integration problems efficiently. In this context, the number of inner samples and the discretization accuracy of the inner integrand evaluation constitute the level. We provide a comprehensive theoretical analysis of the estimator, deriving error bounds that demonstrate significant reductions in bias and variance compared with standard methods. The proposed estimator is particularly effective in scenarios where the integrand is evaluated approximately, as it adapts to different levels of resolution without compromising precision. We verify the performance of our method via numerical experiments, focusing on estimating the expected information gain of experiments. We further introduce a truncation scheme to address the possible unboundedness of the experimental noise. When applied to Gaussian noise, this truncation scheme yields the same computational complexity as in the bounded noise case, up to multiplicative logarithmic terms. The results reveal that the proposed multilevel rQMC estimator outperforms existing MC and rQMC approaches, substantially reducing computational costs and offering a powerful tool for practitioners dealing with complex nested integration problems across various domains.
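A minimal single-level sketch of the estimator structure, using scrambled Sobol' points from scipy.stats.qmc for the inner integral; the paper's multilevel construction and truncation scheme are omitted:

```python
import numpy as np
from scipy.stats import qmc

def nested_estimate(f, g, n_outer=256, n_inner=1024, d_x=1, seed=0):
    """Estimate E_y[ f( E_x[ g(y, x) ] ) ] with rQMC inner samples.

    Outer samples y ~ U(0,1); the inner integral over x in [0,1]^d_x is
    approximated with scrambled Sobol' points.
    """
    rng = np.random.default_rng(seed)
    ys = rng.random(n_outer)
    sob = qmc.Sobol(d=d_x, scramble=True, seed=seed)
    xs = sob.random(n_inner)                    # inner low-discrepancy point set
    inner = np.array([g(y, xs).mean() for y in ys])
    return f(inner).mean()

# Toy example: f = log, g(y, x) = exp(y * x); the inner integral is
# (exp(y) - 1) / y, so the target is E_y[ log((exp(y) - 1) / y) ].
est = nested_estimate(np.log, lambda y, xs: np.exp(y * xs[:, 0]))
print(est)
```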

This paper explores the utility of diffusion-based models for anomaly detection, focusing on their efficacy in identifying deviations in both compact and high-resolution datasets. Diffusion-based architectures, including Denoising Diffusion Probabilistic Models (DDPMs) and Diffusion Transformers (DiTs), are evaluated for their performance using reconstruction objectives. By leveraging the strengths of these models, this study benchmarks their performance against traditional anomaly detection methods such as Isolation Forests, One-Class SVMs, and COPOD. The results demonstrate the superior adaptability, scalability, and robustness of diffusion-based methods in handling complex real-world anomaly detection tasks. Key findings highlight the role of reconstruction error in enhancing detection accuracy and underscore the scalability of these models to high-dimensional datasets. Future directions include optimizing encoder-decoder architectures and exploring multi-modal datasets to further advance diffusion-based anomaly detection.
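The core scoring idea can be sketched independently of any particular backbone: corrupt a sample, reconstruct it with the trained denoiser, and use the reconstruction error as the anomaly score. The denoiser stand-in below is hypothetical; in practice it would be a trained DDPM or DiT:

```python
import numpy as np

def anomaly_scores(x, denoise, t=0.3, seed=0):
    """Reconstruction-error anomaly score in the spirit of DDPM-based detectors.

    x       : (n, d) array of samples
    denoise : model mapping (noisy batch, noise level) -> reconstruction;
              a trained diffusion denoiser in practice (stand-in below)
    t       : noise level in (0, 1); corruption is sqrt(1 - t^2) * x + t * eps
    """
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(x.shape)
    x_noisy = np.sqrt(1.0 - t**2) * x + t * eps
    x_hat = denoise(x_noisy, t)
    return np.mean((x - x_hat) ** 2, axis=1)   # high score -> likely anomaly

# Hypothetical stand-in denoiser: always reconstructs toward the inlier mean.
inlier_mean = np.zeros(8)
denoise = lambda z, t: np.sqrt(1.0 - t**2) * inlier_mean + 0.0 * z

rng = np.random.default_rng(1)
inliers = rng.normal(0.0, 0.1, (100, 8))
outliers = rng.normal(3.0, 0.1, (5, 8))
scores = anomaly_scores(np.vstack([inliers, outliers]), denoise)
print(scores[:3], scores[-3:])    # outliers receive much larger scores
```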

This paper introduces a novel decomposition framework to explain heterogeneity in causal effects observed across different studies, considering both observational and randomized settings. We present a formal decomposition of between-study heterogeneity, identifying sources of variability in treatment effects across studies. The proposed methodology allows for robust estimation of causal parameters under various assumptions, addressing differences in pre-treatment covariate distributions, mediating variables, and the outcome mechanism. Our approach is validated through a simulation study and applied to data from the Moving to Opportunity (MTO) study, demonstrating its practical relevance. This work contributes to the broader understanding of causal inference in multi-study environments, with potential applications in evidence synthesis and policy-making.
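A minimal sketch of one such decomposition (not the paper's estimator): assuming randomized treatment within each study, a logistic density-ratio model reweights study 1 to study 2's covariate distribution, splitting the ATE difference into a covariate-shift term and a residual attributable to the remaining mechanisms:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ate(y, a):
    """Difference in means under randomized treatment assignment."""
    return y[a == 1].mean() - y[a == 0].mean()

def decompose_heterogeneity(X1, a1, y1, X2, a2, y2):
    """Illustrative two-term decomposition of an ATE difference across studies:
    total = covariate-shift term + residual (e.g., outcome-mechanism) term."""
    X = np.vstack([X1, X2])
    s = np.r_[np.zeros(len(X1)), np.ones(len(X2))]
    clf = LogisticRegression().fit(X, s)          # P(S = study 2 | X)
    p2 = clf.predict_proba(X1)[:, 1]
    w = p2 / (1.0 - p2)                           # density ratio on study-1 units

    def wmean(v, m):                              # weighted mean over a mask
        return np.sum(w[m] * v[m]) / np.sum(w[m])

    # ATE of study 1 transported to study 2's covariate distribution
    ate1_shifted = wmean(y1, a1 == 1) - wmean(y1, a1 == 0)
    total = ate(y2, a2) - ate(y1, a1)
    covariate_term = ate1_shifted - ate(y1, a1)
    return total, covariate_term, total - covariate_term
```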

A new geometric procedure for constructing symplectic methods for constrained mechanical systems is developed in this paper. A discretization map built from the notion of retraction maps makes it possible to adapt the continuous problem to the discretization rule, rather than vice versa. As a result, the constraint submanifold is exactly preserved by the symplectic discrete flow, and the methods extend naturally to the case of nonlinear configuration spaces.
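The constraint-preservation property is easy to see with the projective retraction on the unit sphere; the paper's symplectic construction is more elaborate, but any update routed through a retraction lands exactly on the submanifold:

```python
import numpy as np

def retract_sphere(q, v):
    """Projective retraction on the unit sphere: R_q(v) = (q + v) / ||q + v||."""
    w = q + v
    return w / np.linalg.norm(w)

# One discretization step built from the retraction: the update leaves the
# constraint submanifold ||q|| = 1 exactly invariant, by construction.
h = 0.1
q = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 0.5, -0.2])           # tangent direction at q (q . v = 0)
q_next = retract_sphere(q, h * v)
print(np.linalg.norm(q_next))            # 1.0 up to round-off
```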

This paper presents a tractable sufficient condition for the consistency of maximum likelihood estimators (MLEs) in partially observed diffusion models, stated in terms of the stationary distribution of the associated fully observed diffusion, under the assumption that the set of unknown parameter values is finite. This sufficient condition is then verified in the context of a latent price model of market microstructure, yielding consistency of the maximum likelihood estimators of the unknown parameters in this model. Finally, we compute these estimators using historical financial data from the NASDAQ exchange.
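A toy version of MLE over a finite parameter set, on a fully observed Ornstein-Uhlenbeck process with an Euler pseudo-likelihood; the paper's partially observed latent price setting is more delicate:

```python
import numpy as np

def simulate_ou(theta, sigma=0.5, x0=0.0, n=2000, dt=0.01, seed=0):
    """Euler-Maruyama path of dX = -theta * X dt + sigma dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n); x[0] = x0
    for k in range(n - 1):
        x[k + 1] = x[k] - theta * x[k] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

def euler_loglik(theta, x, sigma=0.5, dt=0.01):
    """Gaussian (Euler) transition log-likelihood, up to an additive constant."""
    resid = x[1:] - (x[:-1] - theta * x[:-1] * dt)
    return -0.5 * np.sum(resid**2) / (sigma**2 * dt)

x = simulate_ou(theta=1.5)
candidates = [0.5, 1.0, 1.5, 2.0, 3.0]          # finite parameter set, as in the paper
mle = max(candidates, key=lambda th: euler_loglik(th, x))
print(mle)                                       # recovers 1.5 for long enough paths
```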

This paper presents a loss-based generalized Bayesian methodology for high-dimensional robust regression with serially correlated errors and predictors. The proposed framework employs a novel scaled pseudo-Huber (SPH) loss function, which smooths the well-known Huber loss, achieving a balance between quadratic and absolute linear loss behaviors. This flexibility enables the framework to accommodate both thin-tailed and heavy-tailed data effectively. The generalized Bayesian approach constructs a working likelihood utilizing the SPH loss that facilitates efficient and stable estimation while providing rigorous estimation uncertainty quantification for all model parameters. Notably, this allows formal statistical inference without requiring ad hoc tuning parameter selection while adaptively addressing a wide range of tail behavior in the errors. By specifying appropriate prior distributions for the regression coefficients -- e.g., ridge priors for small or moderate-dimensional settings and spike-and-slab priors for high-dimensional settings -- the framework ensures principled inference. We establish rigorous theoretical guarantees for the accurate estimation of underlying model parameters and the correct selection of predictor variables under sparsity assumptions for a wide range of data generating setups. Extensive simulation studies demonstrate the superiority of our approach compared to traditional quadratic and absolute linear loss-based Bayesian regression methods, highlighting its flexibility and robustness in high-dimensional and challenging data contexts.
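For reference, the standard pseudo-Huber loss that the SPH loss smooths and scales; the paper's exact scaling may differ:

```python
import numpy as np

def pseudo_huber(r, delta=1.0):
    """Pseudo-Huber loss: delta^2 * (sqrt(1 + (r/delta)^2) - 1).

    Smooth everywhere; behaves like r^2 / 2 for |r| << delta (quadratic regime)
    and like delta * |r| for |r| >> delta (absolute linear regime).
    """
    return delta**2 * (np.sqrt(1.0 + (r / delta) ** 2) - 1.0)

r = np.linspace(-5, 5, 5)
print(pseudo_huber(r, delta=1.0))
print(pseudo_huber(r, delta=0.1))   # smaller delta -> closer to |r| behavior
```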

Mass lumping techniques are commonly employed in explicit time integration schemes for problems in structural dynamics, where they both avoid solving costly linear systems with the consistent mass matrix and increase the critical time step. In isogeometric analysis, the critical time step is constrained by so-called "outlier" frequencies, which represent the inaccurate high-frequency part of the spectrum. Removing or dampening these high frequencies is paramount for fast explicit solution techniques. In this work, we propose mass lumping and outlier removal techniques for nontrivial geometries, including multipatch and trimmed geometries. Our lumping strategies provably do not deteriorate (and often improve) the CFL condition of the original problem and are combined with deflation techniques to remove persistent outlier frequencies. Numerical experiments reveal the advantages of the method, especially for simulations covering large time spans, where it may halve the number of iterations with little or no effect on the numerical solution.
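The simplest instance of the idea is classical row-sum lumping, shown below; the paper's IGA-specific lumping and deflation strategies refine this considerably:

```python
import numpy as np

def lump_mass(M):
    """Row-sum mass lumping: replace M by diag(M @ 1).

    Preserves total mass and turns the explicit update M a = f
    into the trivially invertible D a = f.
    """
    return np.diag(M.sum(axis=1))

# Consistent mass matrix of a single linear 1D element (length h, unit density):
h = 1.0
M = h / 6.0 * np.array([[2.0, 1.0],
                        [1.0, 2.0]])
D = lump_mass(M)
print(D)                 # diag(h/2, h/2); same total mass h
```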

This article introduces a novel numerical approach for studying fully nonlinear coagulation-fragmentation models, where both the coagulation and fragmentation components of the collision operator are nonlinear. The model approximates the $3$-wave kinetic equations, a pivotal framework in wave turbulence theory governing the time evolution of wave spectra in weakly nonlinear systems. An implicit finite volume scheme (FVS) is derived to solve this equation. To the best of our knowledge, this is the first numerical scheme capable of accurately capturing the long-term asymptotic behavior of solutions to a fully nonlinear coagulation-fragmentation model that includes both forward and backward energy cascades. The scheme is implemented on some test problems, demonstrating strong alignment with theoretical predictions of energy cascade rates. We further introduce a weighted FVS variant to ensure energy conservation across varying degrees of kernel homogeneity. Convergence and first-order consistency are established through theoretical analysis and verified by experimental convergence orders in test cases.
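For a flavor of the discretized dynamics, a stripped-down explicit step for the coagulation part alone, with a constant kernel; the paper's scheme is implicit and also handles the nonlinear fragmentation terms and energy-conserving weights:

```python
import numpy as np

def coagulation_step(n, K, dt):
    """One explicit Euler step of the discrete Smoluchowski coagulation equation
        dn_i/dt = 1/2 * sum_{j+k=i} K[j,k] n_j n_k  -  n_i * sum_j K[i,j] n_j
    (index i corresponds to cluster size i + 1)."""
    m = len(n)
    gain = np.zeros(m)
    for i in range(m):
        for j in range(i):                 # sizes (j+1) + (i-j) combine into i+1
            gain[i] += 0.5 * K[j, i - j - 1] * n[j] * n[i - j - 1]
    loss = n * (K @ n)
    return n + dt * (gain - loss)

m = 50
n = np.zeros(m); n[0] = 1.0                # monodisperse initial data
K = np.ones((m, m))                        # constant kernel
for _ in range(200):
    n = coagulation_step(n, K, dt=0.01)
print(n[:5])                               # mass shifts toward larger sizes
```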

When and why can a neural network be successfully trained? This article provides an overview of optimization algorithms and theory for training neural networks. First, we discuss the issue of gradient explosion/vanishing and the more general issue of an undesirable spectrum, and then discuss practical solutions, including careful initialization and normalization methods. Second, we review generic optimization methods used in training neural networks, such as SGD, adaptive gradient methods, and distributed methods, together with theoretical results for these algorithms. Third, we review existing research on the global issues of neural network training, including results on bad local minima, mode connectivity, the lottery ticket hypothesis, and infinite-width analysis.
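The initialization point is easy to demonstrate numerically: in a deep ReLU stack, a poorly scaled Gaussian initialization shrinks the signal geometrically with depth, while He initialization (variance $2/\mathrm{fan_{in}}$) keeps it roughly constant. A sketch:

```python
import numpy as np

def forward_scales(depth=50, width=256, scheme="he", seed=0):
    """Track the activation standard deviation through a deep ReLU stack.

    'naive' : W_ij ~ N(0, 1/width) scaled by 0.5  (too small -> signal vanishes)
    'he'    : W_ij ~ N(0, 2/width)                (He init, keeps scale ~ constant)
    """
    rng = np.random.default_rng(seed)
    std = {"naive": 0.5 * np.sqrt(1.0 / width), "he": np.sqrt(2.0 / width)}[scheme]
    x = rng.standard_normal(width)
    scales = []
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * std
        x = np.maximum(W @ x, 0.0)          # ReLU layer
        scales.append(x.std())
    return scales

print(forward_scales(scheme="naive")[::10])  # decays toward 0 with depth
print(forward_scales(scheme="he")[::10])     # stays O(1)
```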
