
In this paper, our objective is to present a constraining principle governing the spectral properties of the sample covariance matrix. This principle behaves consistently across diverse limiting frameworks and eliminates the need for constraints on the relative rates of the dimension $p$ and the sample size $n$, as long as both tend to infinity. We accomplish this by applying a suitable normalization to the original sample covariance matrix. We then establish a unified central limit theorem for linear spectral statistics within this expansive framework. This result removes the requirement of a bounded spectral norm on the population covariance matrix and relaxes the constraints on the rates of $p$ and $n$, thereby significantly broadening the applicability of these results in high-dimensional statistics. We illustrate the power of the established results with a test for covariance structure under high dimensionality that places no restriction on how $p$ and $n$ grow.
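
As a rough illustration of the objects involved, the sketch below forms the usual sample covariance matrix $S = n^{-1}XX^\top$ from Gaussian data and evaluates a linear spectral statistic $\sum_i f(\lambda_i)$; the paper's specific normalization and limiting framework are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 200, 400                      # dimension and sample size, both "large"

X = rng.standard_normal((p, n))      # columns are i.i.d. N(0, I_p) samples
S = X @ X.T / n                      # sample covariance matrix

eigvals = np.linalg.eigvalsh(S)      # spectrum of S

# A linear spectral statistic: sum of f(lambda_i) for a test function f.
f = np.log                           # e.g. f(x) = log x, as in likelihood-ratio-type tests
lss = np.sum(f(eigvals))
print(f"LSS with f = log: {lss:.4f}")
```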

Related content

In probability theory and statistics, a covariance matrix (also known as the auto-covariance matrix, dispersion matrix, variance matrix, or variance-covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector. The variances lie on the diagonal of the matrix, i.e., each diagonal entry is the covariance of an element with itself.
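
A minimal numerical illustration of this definition using NumPy's `np.cov` (not part of any of the surrounding papers):

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal((3, 1000))            # 3-dimensional random vector, 1000 samples

C = np.cov(Z)                                  # 3x3 variance-covariance matrix
print(np.allclose(np.diag(C), Z.var(axis=1, ddof=1)))  # diagonal = per-coordinate variances
print(np.allclose(C, C.T))                     # symmetric, as every covariance matrix is
```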

Effect modification occurs when the impact of a treatment on an outcome varies with the levels of other covariates, known as effect modifiers. Modeling these effect differences is important for etiological goals and for optimizing treatment. Structural nested mean models (SNMMs) are useful causal models for estimating the potentially heterogeneous effect of a time-varying exposure on the mean of an outcome in the presence of time-varying confounding. A data-driven approach for selecting the effect modifiers of an exposure may be necessary if these effect modifiers are a priori unknown and need to be identified. Although variable selection techniques are available for estimating conditional average treatment effects using marginal structural models, or for estimating optimal dynamic treatment regimens, all of these methods consider an outcome measured at a single point in time. In the context of an SNMM for repeated outcomes, we propose a doubly robust penalized G-estimator for the causal effect of a time-varying exposure with simultaneous selection of effect modifiers, and we use this estimator to analyze effect modification in a study of hemodiafiltration. We prove the oracle property of our estimator and conduct a simulation study to evaluate its finite-sample performance and verify its double-robustness property. Our work is motivated by and applied to the study of hemodiafiltration for treating patients with end-stage renal disease at the Centre Hospitalier de l'Universit\'e de Montr\'eal. We apply the proposed method to investigate the effect heterogeneity of the dialysis facility on the repeated session-specific hemodiafiltration outcomes.
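
For intuition only, here is a toy, single-time-point, unpenalized G-estimation sketch with known nuisance models; the paper's estimator additionally handles time-varying exposures, repeated outcomes, and penalized selection of effect modifiers. The data-generating process and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
X = rng.standard_normal(n)                 # a single candidate effect modifier
e = 1 / (1 + np.exp(-0.5 * X))             # true propensity score
A = rng.binomial(1, e)                     # binary exposure
psi_true = np.array([1.0, 0.5])            # blip: effect of A is psi0 + psi1 * X
Y = 2.0 + X + A * (psi_true[0] + psi_true[1] * X) + rng.standard_normal(n)

# Nuisance models (taken as known here, for illustration):
e_hat = e                                  # propensity model
m_hat = 2.0 + X                            # treatment-free outcome model

H = np.column_stack([np.ones(n), X])       # effect-modifier basis (1, X)
w = A - e_hat                              # treatment residual
# G-estimating equation:  sum_i w_i * H_i * (Y_i - A_i * H_i @ psi - m_hat_i) = 0,
# which is linear in psi and can be solved directly.
lhs = (w * A)[:, None] * H                 # each row: w_i * A_i * H_i
psi_hat = np.linalg.solve(H.T @ lhs, H.T @ (w * (Y - m_hat)))
print(psi_hat)                             # approximately [1.0, 0.5]
```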

Intracranial aneurysms are a leading cause of hemorrhagic stroke. One established treatment approach is embolization induced by coil insertion. However, predicting the outcome of treatment and the subsequently altered flow characteristics in the aneurysm remains an open problem. In this work, we present an approach based on patient-specific geometry and parameters, in which the coil is represented as an inhomogeneous porous medium. The model consists of the volume-averaged Navier-Stokes equations, including the non-Newtonian rheology of blood. We solve these equations using a problem-adapted lattice Boltzmann method and present a comparison between fully resolved and volume-averaged simulations. The results indicate the validity of the model. Overall, this workflow allows for a patient-specific assessment of the flow resulting from a potential treatment.
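
The volume-averaging idea can be illustrated with a much simpler stand-in than the paper's lattice Boltzmann solver: a steady one-dimensional Brinkman momentum balance, in which a Darcy drag term models the coiled, porous region. The parameter values below are illustrative assumptions.

```python
import numpy as np

# Steady 1D Brinkman flow:  mu * u'' - (mu / K) * u + G = 0,  u(0) = u(L) = 0.
mu, K, G, L, N = 3.5e-3, 1e-7, 1e3, 1e-2, 201   # viscosity, permeability, pressure gradient, width, nodes
x = np.linspace(0.0, L, N)
h = x[1] - x[0]

# Assemble the tridiagonal finite-difference system for the interior nodes.
main = -2.0 * mu / h**2 - mu / K
off = mu / h**2
Amat = (np.diag(np.full(N - 2, main))
        + np.diag(np.full(N - 3, off), 1)
        + np.diag(np.full(N - 3, off), -1))
b = np.full(N - 2, -G)

u = np.zeros(N)
u[1:-1] = np.linalg.solve(Amat, b)
print(f"peak velocity inside the porous (coiled) region: {u.max():.3e} m/s")
```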

We consider the numerical behavior of the fixed-stress splitting method for coupled poromechanics as undrained regimes are approached. We explain that pressure stability is related to the splitting error of the scheme, not to the fact that the discrete saddle-point matrix never appears in the fixed-stress approach. This observation reconciles previous results regarding the pressure stability of the splitting method. Using examples of compositional poromechanics with application to geological CO$_2$ sequestration, we show that solutions obtained using the fixed-stress scheme with a low-order finite element-finite volume discretization that is not inherently inf-sup stable can exhibit the same pressure oscillations obtained with the corresponding fully implicit scheme. Moreover, pressure-jump stabilization can effectively remove these spurious oscillations in the fixed-stress setting, while also improving the efficiency of the scheme in terms of the number of iterations required at each time step to reach convergence.
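
A minimal sketch of the fixed-stress idea on a generic symmetric two-field block system: the flow block is solved with an added stabilization term $L$, then the mechanics block, and the two solves are iterated to convergence. The toy matrices and the choice $L = I$ are assumptions for illustration, not the paper's compositional setting.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 8
# Toy symmetric two-field block system  [A  B; B^T  C] [u; p] = [f; g].
A = np.eye(m) * 4 + 0.1 * rng.standard_normal((m, m)); A = (A + A.T) / 2  # "mechanics" block (SPD)
C = 2.0 * np.eye(m)                                                       # "flow" block (SPD)
B = 0.2 * rng.standard_normal((m, m))                                     # poromechanical coupling
f, g = rng.standard_normal(m), rng.standard_normal(m)

L = np.eye(m)              # fixed-stress stabilization (physically ~ alpha^2 / K_dr per cell)
u, p = np.zeros(m), np.zeros(m)
for k in range(100):
    p = np.linalg.solve(C + L, g - B.T @ u + L @ p)   # flow solve with stabilization
    u = np.linalg.solve(A, f - B @ p)                 # mechanics solve
    res = np.linalg.norm(np.concatenate([A @ u + B @ p - f, B.T @ u + C @ p - g]))
    if res < 1e-10:
        print(f"converged in {k + 1} fixed-stress iterations")
        break
```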

In this paper, we design a new kind of high-order inverse Lax-Wendroff (ILW) boundary treatment for solving hyperbolic conservation laws with a finite difference method on a Cartesian mesh. This new ILW method decomposes the construction of ghost-point values near the inflow boundary into two steps: interpolation and extrapolation. First, we assign values to some artificial auxiliary points through a polynomial interpolating the interior points near the boundary. Then, we construct a Hermite extrapolation based on those auxiliary point values and the spatial derivatives at the boundary obtained via the ILW procedure. This polynomial gives the approximation to the ghost-point values. By an appropriate selection of the artificial auxiliary points, high-order accuracy and stable results can be achieved. Moreover, theoretical analysis indicates that, compared with the original ILW method, especially at higher orders of accuracy, the newly proposed method requires fewer terms from the relatively complicated ILW procedure and thus improves computational efficiency while maintaining accuracy and stability. We perform numerical experiments on several benchmarks, including one- and two-dimensional scalar equations and systems. The robustness and efficiency of the proposed scheme are numerically verified.
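
A rough sketch of the two-step construction, under the assumption of a smooth test function standing in for the PDE solution, with the exact boundary derivative playing the role of the ILW output:

```python
import numpy as np

# Hypothetical smooth test function u and its derivative (stand-in for the ILW step).
u = lambda x: np.sin(2 * x) + 0.5 * x**2
du = lambda x: 2 * np.cos(2 * x) + x

h = 0.05
x_int = h * (np.arange(6) + 0.5)              # interior grid points near the boundary x = 0

# Step 1: interpolate interior values, then evaluate at artificial auxiliary points.
interp = np.polynomial.polynomial.Polynomial.fit(x_int, u(x_int), deg=5)
x_aux = h * np.array([1.0, 2.0, 3.0])
v_aux = interp(x_aux)

# Step 2: Hermite extrapolation matching the auxiliary values plus u(0) and u'(0)
# (the boundary derivative that the ILW procedure would supply from the PDE).
# Solve for p(x) = sum_k c_k x^k from the 5 constraints below.
deg = 4
rows = [np.array([xa**k for k in range(deg + 1)]) for xa in x_aux]
rows.append(np.array([0.0**k for k in range(deg + 1)]))                                  # p(0)  = u(0)
rows.append(np.array([k * (0.0**(k - 1) if k >= 1 else 0.0) for k in range(deg + 1)]))   # p'(0) = u'(0)
rhs = np.concatenate([v_aux, [u(0.0), du(0.0)]])
c = np.linalg.solve(np.vstack(rows), rhs)

x_ghost = -0.5 * h                             # ghost point outside the boundary
p_ghost = sum(ck * x_ghost**k for k, ck in enumerate(c))
print(f"ghost value error: {abs(p_ghost - u(x_ghost)):.2e}")
```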

Modeling the behavior of biological tissues and organs often requires knowledge of their shape in the absence of external loads. However, when their geometry is acquired in vivo through imaging techniques, bodies are typically subject to mechanical deformation due to the presence of external forces, and the load-free configuration needs to be reconstructed. This paper addresses this crucial and frequently overlooked topic, known as the inverse elasticity problem (IEP), by delving into both theoretical and numerical aspects, with a particular focus on cardiac mechanics. In this work, we extend Shield's seminal work to determine the structure of the IEP with arbitrary material inhomogeneities and in the presence of both body and active forces. These aspects are fundamental in computational cardiology, and we show that they may break the variational structure of the inverse problem. In addition, we show that the inverse problem might have no solution even in the presence of constant Neumann boundary conditions and a polyconvex strain energy functional. We then present the results of extensive numerical tests to validate our theoretical framework and to characterize the computational challenges associated with a direct numerical approximation of the IEP. Specifically, we show that this framework outperforms existing approaches, such as Sellier's iterative procedure, in terms of both robustness and optimality, even when the latter is improved with acceleration techniques. A notable finding is that multigrid preconditioners are, in contrast to standard elasticity, not efficient for this problem, whereas one-level additive Schwarz and generalized Dryja-Smith-Widlund preconditioners provide a much more reliable alternative. Finally, we successfully address the IEP for a full-heart geometry, demonstrating that the IEP formulation can compute the stress-free configuration in real-life scenarios.
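
For intuition, Sellier's procedure can be sketched as a fixed-point iteration: re-deform the current guess of the reference configuration, compare with the imaged geometry, and shift the reference points by the mismatch. The one-dimensional forward map below is a hypothetical stand-in for the finite element solve.

```python
import numpy as np

# Toy 1D stand-in for the forward elasticity solve: a reference point X is
# mapped to phi(X) under load (hypothetical closed-form deformation).
phi = lambda X: X + 0.1 * np.sin(X)

x_meas = np.linspace(0.0, 3.0, 7)            # "imaged", i.e. deformed, configuration
X = x_meas.copy()                            # initial guess: reference = deformed

for k in range(100):
    residual = phi(X) - x_meas               # mismatch between re-deformed and imaged shapes
    X -= residual                            # Sellier update: shift reference points by the mismatch
    if np.max(np.abs(residual)) < 1e-12:
        print(f"converged in {k + 1} iterations")
        break

print(np.max(np.abs(phi(X) - x_meas)))       # the recovered reference reproduces the image under load
```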

In this paper, we introduce a new, simple approach to developing and establishing the convergence of splitting methods for a large class of stochastic differential equations (SDEs), including additive, diagonal and scalar noise types. The central idea is to view the splitting method as a replacement of the driving signal of an SDE, namely Brownian motion and time, with a piecewise linear path, yielding a sequence of ODEs that can be discretized to produce a numerical scheme. This new way of understanding splitting methods is inspired by, but does not use, rough path theory. We show that when the driving piecewise linear path matches certain iterated stochastic integrals of Brownian motion, a high-order splitting method can be obtained. We propose a general proof methodology for establishing the strong convergence of these approximations that is akin to the general framework of Milstein and Tretyakov: once local error estimates are obtained for the splitting method, a global rate of convergence follows. This approach can then be readily applied in future research on SDE splitting methods. By incorporating recently developed approximations for iterated integrals of Brownian motion into these piecewise linear paths, we propose several high-order splitting methods for SDEs satisfying a certain commutativity condition. In our experiments, which include the Cox-Ingersoll-Ross model and additive noise SDEs (noisy anharmonic oscillator, stochastic FitzHugh-Nagumo model, underdamped Langevin dynamics), the new splitting methods exhibit convergence rates of $O(h^{3/2})$ and outperform schemes previously proposed in the literature.
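
A minimal sketch of a Strang-type splitting for an additive-noise SDE, where each step solves in sequence the ODEs induced by a piecewise linear driving path. The Ornstein-Uhlenbeck model below is a stand-in example, not one of the paper's benchmarks, and both sub-flows happen to be exactly solvable.

```python
import numpy as np

# Strang splitting for the OU SDE  dX = -theta * X dt + sigma dW  (additive noise).
# Driving-path viewpoint: on each step the signal (t, W) is replaced by a
# piecewise linear path, and the resulting ODEs are solved one after another.
theta, sigma, T, h = 1.0, 0.5, 1.0, 2**-7
n_steps, n_paths = int(T / h), 50_000
rng = np.random.default_rng(4)

X = np.ones(n_paths)
for _ in range(n_steps):
    dW = rng.standard_normal(n_paths) * np.sqrt(h)
    X = X * np.exp(-theta * h / 2)     # half step of the drift ODE  x' = -theta x
    X = X + sigma * dW                 # full step of the noise ODE  x' = sigma w'
    X = X * np.exp(-theta * h / 2)     # second half step of the drift

# Exact OU moments at time T, for comparison:
mean_exact = np.exp(-theta * T)
var_exact = sigma**2 * (1 - np.exp(-2 * theta * T)) / (2 * theta)
print(f"mean error: {abs(X.mean() - mean_exact):.2e}, var error: {abs(X.var() - var_exact):.2e}")
```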

In this paper, we aim to perform sensitivity analysis of set-valued models and, in particular, to quantify the impact of uncertain inputs on feasible sets, which are key elements in solving robust optimization problems under constraints. While most sensitivity analysis methods deal with scalar outputs, this paper introduces a novel approach for performing sensitivity analysis with set-valued outputs. Our methodology is designed for excursion sets, but is versatile enough to be applied to set-valued simulators, including those arising in viability settings, or when working with maps such as pollutant concentration maps or flood-zone maps. We propose to use the Hilbert-Schmidt Independence Criterion (HSIC) with a kernel designed for set-valued outputs. After proposing a probabilistic framework for random sets, our first contribution is a proof that this kernel is characteristic, an essential property in a kernel-based sensitivity analysis context. To measure the contribution of each input, we then propose HSIC-ANOVA indices. With these indices, we can identify which inputs can be neglected (screening) and rank the others according to their influence (ranking). The estimation of these indices is also adapted to set-valued outputs. Finally, we test the proposed method on three test cases involving excursion sets.
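
A bare-bones sketch of an HSIC-based screening computation, where a set-valued output is embedded as an indicator vector on a grid and a standard RBF kernel is applied to the embedding; the paper's characteristic kernel for random sets and its HSIC-ANOVA indices are not reproduced here, and all modeling choices below are assumptions.

```python
import numpy as np

def hsic(K, L):
    """Biased V-statistic HSIC estimator from two kernel Gram matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(K @ H @ L @ H) / n**2

def rbf_gram(V, bw):
    d2 = ((V[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw**2))

rng = np.random.default_rng(5)
n, grid = 300, np.linspace(0, 1, 64)
X = rng.uniform(-1, 1, (n, 2))                   # two scalar inputs

# Set-valued output: excursion set {t : f_X(t) > 0}, encoded as an indicator vector.
# Only input 0 matters here, so HSIC should flag it and not input 1.
sets = (np.sin(6 * grid[None, :]) + X[:, [0]] > 0).astype(float)

L = rbf_gram(sets, bw=4.0)                       # kernel between sets, via their indicators
for j in range(2):
    K = rbf_gram(X[:, [j]], bw=0.5)              # kernel on each scalar input
    print(f"HSIC(input {j}, output set) = {hsic(K, L):.4e}")
```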

Quantization for a Borel probability measure refers to the idea of approximating a given probability measure by a discrete probability measure whose support contains a finite number of elements. In this paper, we consider a Borel probability measure $P$ on $\mathbb R^2$ whose support is a nonuniform stretched Sierpi\'{n}ski triangle generated by a set of three contractive similarity mappings on $\mathbb R^2$. For this probability measure, we investigate the optimal sets of $n$-means and the $n$th quantization errors for all positive integers $n$.
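
A numerical sketch of the problem: sample the attractor of three contractive similarities by the chaos game (with illustrative maps and weights, not the paper's), then approximate an optimal set of $n$-means and the associated quantization error with Lloyd's algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)

# Chaos game: sample (approximately) from the attractor of three contractive
# similarities; the maps below are illustrative, not the paper's.
maps = [lambda z: 0.4 * z,
        lambda z: 0.4 * z + np.array([0.6, 0.0]),
        lambda z: 0.4 * z + np.array([0.3, 0.6])]
pts = np.zeros((20000, 2))
z = np.array([0.1, 0.1])
for i in range(len(pts)):
    z = maps[rng.integers(3)](z)
    pts[i] = z
pts = pts[1000:]                                  # discard burn-in

# Lloyd's algorithm: approximate an optimal set of n-means.
n_means = 5
centers = pts[rng.choice(len(pts), n_means, replace=False)]
for _ in range(100):
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(1)
    centers = np.array([pts[labels == k].mean(0) if np.any(labels == k) else centers[k]
                        for k in range(n_means)])

d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
print("n-means:", np.round(centers, 3))
print("quantization error (mean squared distortion):", d2.min(1).mean())
```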

Motivated by the important statistical role of sparsity, the paper uncovers four reparametrizations for covariance matrices in which sparsity is associated with conditional independence graphs in a notional Gaussian model. The intimate relationship between the Iwasawa decomposition of the general linear group and the open cone of positive definite matrices allows a unifying perspective. Specifically, the positive definite cone can be reconstructed without loss or redundancy from the exponential map applied to four Lie subalgebras determined by the Iwasawa decomposition of the general linear group. This accords geometric interpretations to the reparametrizations and the corresponding notion of sparsity. Conditions that ensure legitimacy of the reparametrizations for statistical models are identified. While the focus of this work is on understanding population-level structure, there are strong methodological implications. In particular, since the population-level sparsity manifests in a vector space, imposition of sparsity on relevant sample quantities produces a covariance estimate that respects the positive definite cone constraint.
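
One concrete reparametrization in this spirit (illustrative, and not necessarily one of the paper's four): write $\Sigma = LL^\top$ with $L = \exp(W)$ for a lower-triangular $W$, so that sparsity is imposed in the vector space containing $W$ while positive definiteness of $\Sigma$ is automatic.

```python
import numpy as np
from scipy.linalg import expm

# Sigma = L L^T with L = expm(W), W lower triangular.  Zeros in the
# unconstrained parameter W live in a vector space, so sparsity can be
# imposed there while Sigma always stays in the positive definite cone.
rng = np.random.default_rng(7)
p = 4
W = np.tril(rng.standard_normal((p, p))) * 0.3
W[3, 1] = 0.0                                   # sparsity imposed on the Lie-algebra side

L = expm(W)                                     # exponential map: lower triangular, positive diagonal
Sigma = L @ L.T
print(np.all(np.linalg.eigvalsh(Sigma) > 0))    # True: the cone constraint is automatic
```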

This paper investigates the supercloseness of a singularly perturbed convection-diffusion problem solved with the direct discontinuous Galerkin (DDG) method on a Shishkin mesh. The main technical difficulties lie in controlling the diffusion term inside the layer, the convection term outside the layer, and the inter-element jump terms caused by the discontinuity of the numerical solution. The main idea is to design a new composite interpolation: outside the layer, a global projection is used to satisfy the interface conditions determined by the choice of numerical flux, thereby eliminating or controlling the troublesome terms on the element interfaces; inside the layer, a Gau{\ss}-Lobatto projection is used to improve the convergence order of the diffusion term. On this basis, by selecting appropriate parameters in the numerical flux, we obtain a supercloseness result of almost order $k+1$ in an energy norm. Numerical experiments support our main theoretical conclusion.
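
For reference, the sketch below constructs the standard piecewise-uniform Shishkin mesh for a boundary layer at $x = 1$; the transition-point constants $\sigma$ and $\beta$ depend on the scheme and are assumptions here.

```python
import numpy as np

def shishkin_mesh(N, eps, sigma=2.0, beta=1.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] for a boundary layer at x = 1.

    Transition point tau = 1 - min(1/2, sigma * eps / beta * ln N): half of the
    cells are packed into the O(eps ln N) layer region near x = 1.
    """
    tau = 1.0 - min(0.5, sigma * eps / beta * np.log(N))
    coarse = np.linspace(0.0, tau, N // 2 + 1)
    fine = np.linspace(tau, 1.0, N // 2 + 1)
    return np.concatenate([coarse, fine[1:]])

x = shishkin_mesh(N=64, eps=1e-4)
h = np.diff(x)
print(f"coarse h = {h[0]:.3e}, fine h = {h[-1]:.3e}, transition at {x[64 // 2]:.6f}")
```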
