
This paper extends various results related to the Gaussian product inequality (GPI) conjecture to the setting of disjoint principal minors of Wishart random matrices. This includes product-type inequalities for matrix-variate analogs of completely monotone functions and Bernstein functions of Wishart disjoint principal minors, respectively. In particular, the product-type inequalities apply to inverse determinant powers. Quantitative versions of the inequalities are also obtained when there is a mix of positive and negative exponents. Furthermore, an extended form of the GPI is shown to hold for the eigenvalues of Wishart random matrices by virtue of their law being multivariate totally positive of order~2 ($\mathrm{MTP}_2$). A new, unexplored avenue of research is presented to study the GPI from the point of view of elliptical distributions.
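For context, the classical GPI conjecture that these results extend can be stated as follows (a standard formulation of the real case; exponent conventions vary across the literature):

```latex
% Gaussian product inequality (GPI) conjecture, real case:
% for a centered Gaussian vector (X_1, \ldots, X_d) and all
% \alpha_1, \ldots, \alpha_d \ge 0,
\mathbb{E}\!\left[\prod_{j=1}^{d} |X_j|^{2\alpha_j}\right]
\;\ge\; \prod_{j=1}^{d} \mathbb{E}\!\left[|X_j|^{2\alpha_j}\right].
```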

Related content

Stepped wedge cluster-randomized trial (CRT) designs randomize clusters of individuals to intervention sequences, ensuring that every cluster transitions from a control period to receiving the intervention under study by the end of the study period. The analysis of stepped wedge CRTs is usually more complex than that of parallel-arm CRTs due to potential secular trends that result in intra-cluster and period-cluster correlations changing over time. A further challenge in the analysis of closed-cohort stepped wedge CRTs, which follow the group of individuals enrolled in each period longitudinally, is the occurrence of dropout. This is particularly problematic in studies of individuals at high risk of mortality, where death causes non-ignorable missing outcomes. If not appropriately addressed, missing outcomes from death will erode statistical power at best and bias treatment effect estimates at worst. Joint longitudinal-survival models can accommodate informative dropout and missingness patterns in longitudinal studies. Specifically, within this framework one directly models the dropout process via a time-to-event submodel together with the longitudinal outcome of interest; the two submodels are then linked through one of a variety of possible association structures. This work extends linear mixed-effects models by jointly modeling the dropout process to accommodate informative missing outcome data in closed-cohort stepped wedge CRTs. We focus on constant intervention and general time-on-treatment effect parametrizations for the longitudinal submodel and study the performance of the proposed methodology using Monte Carlo simulation under several data-generating scenarios. We illustrate the joint modeling methodology in practice by reanalyzing the `Frail Older Adults: Care in Transition' (ACT) trial, a stepped wedge CRT of a multifaceted geriatric care model versus usual care in the Netherlands.
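As background, a generic shared-random-effects joint model of the kind described here can be sketched as follows (notation illustrative, not the paper's exact parametrization):

```latex
% Longitudinal submodel (linear mixed-effects):
Y_{ij}(t) = \mathbf{x}_{ij}(t)^{\top}\boldsymbol{\beta}
          + \mathbf{z}_{ij}(t)^{\top}\mathbf{b}_{i} + \varepsilon_{ij}(t),
\qquad \mathbf{b}_{i} \sim N(\mathbf{0}, \mathbf{D}),\quad
\varepsilon_{ij}(t) \sim N(0, \sigma^{2}).
% Time-to-event (dropout) submodel, linked through the random effects:
h_{ij}(t) = h_{0}(t)\,
\exp\!\bigl\{ \mathbf{w}_{ij}^{\top}\boldsymbol{\gamma}
            + \alpha\, \mathbf{z}_{ij}(t)^{\top}\mathbf{b}_{i} \bigr\}.
```

Here the association parameter $\alpha$ governs how strongly the dropout hazard depends on the cluster- or individual-level random effects of the longitudinal outcome.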

This paper focuses on investigating Stein's invariant shrinkage estimators for large sample covariance matrices and precision matrices in high-dimensional settings. We consider models that have nearly arbitrary population covariance matrices, including those with potential spikes. By imposing mild technical assumptions, we establish the asymptotic limits of the shrinkers for a wide range of loss functions. A key contribution of this work, enabling the derivation of the limits of the shrinkers, is a novel result concerning the asymptotic distributions of the non-spiked eigenvectors of the sample covariance matrices, which can be of independent interest.
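To make the setting concrete, here is a minimal sketch of a rotation-invariant (Stein-type) eigenvalue shrinker: the sample eigenvectors are retained and only the eigenvalues are modified. The specific rule below (a linear pull toward the grand mean, with a hand-picked weight) is purely illustrative and is not the estimator analyzed in the paper.

```python
import numpy as np

def invariant_shrinkage(S, shrink):
    """Rotation-invariant shrinkage: keep the sample eigenvectors of S,
    pull each eigenvalue toward the grand mean by a factor `shrink`."""
    vals, vecs = np.linalg.eigh(S)
    target = vals.mean()
    shrunk = (1.0 - shrink) * vals + shrink * target
    return vecs @ np.diag(shrunk) @ vecs.T

# Illustrative high-dimensional-ish setting: p = 50 variables, n = 100 samples.
rng = np.random.default_rng(0)
p, n = 50, 100
X = rng.standard_normal((n, p))
S = X.T @ X / n                      # sample covariance matrix
Sigma_hat = invariant_shrinkage(S, shrink=0.3)
```

Because only the eigenvalues move (toward their own mean), the trace of the estimator is preserved, a common sanity check for this class of shrinkers.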

In this paper, we construct and analyze divergence-free finite element methods for the Stokes problem on smooth domains. The discrete spaces are based on the Scott-Vogelius finite element pair of arbitrary polynomial degree greater than two. By combining the Piola transform with the classical isoparametric framework, and with a judicious choice of degrees of freedom, we prove that the method converges with optimal order in the energy norm. We also show that the discrete velocity error converges with optimal order in the $L^2$-norm. Numerical experiments are presented, which support the theoretical results.
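For context, the Stokes problem and the key inclusion behind the divergence-free property of the Scott-Vogelius pair on affine meshes are sketched below; the paper's contribution lies in extending this property to smooth domains via the Piola transform and the isoparametric framework.

```latex
% Stokes problem on a domain \Omega:
-\Delta \mathbf{u} + \nabla p = \mathbf{f}, \qquad
\nabla \cdot \mathbf{u} = 0 \quad \text{in } \Omega, \qquad
\mathbf{u} = \mathbf{0} \quad \text{on } \partial\Omega.
% Scott--Vogelius pair of degree k: continuous P_k velocities V_h and
% discontinuous P_{k-1} pressures Q_h, chosen so that
\nabla \cdot V_h \subseteq Q_h ,
% which makes the discrete velocity pointwise divergence-free.
```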

This paper introduces an innovative approach to the design of efficient decoders that meet the rigorous requirements of modern communication systems, particularly in terms of ultra-reliability and low latency. We enhance an established hybrid decoding framework by proposing an ordered statistical decoding scheme augmented with a sliding window technique. This novel component replaces a key element of the current architecture, significantly reducing average complexity. A critical aspect of our scheme is the integration of a pre-trained neural network model that dynamically determines the progression or halt of the sliding window process. Furthermore, we present a user-defined soft margin mechanism that adeptly balances the trade-off between decoding accuracy and complexity. Empirical results, supported by a thorough complexity analysis, demonstrate that the proposed scheme holds a competitive advantage over existing state-of-the-art decoders, notably in addressing the decoding failures prevalent in neural min-sum decoders. Additionally, our research uncovers that short LDPC codes can deliver performance comparable to that of short classical linear codes within the critical waterfall region of the SNR, highlighting their potential for practical applications.

This paper collects a set of previously established general results for the development of B-series for a broad class of stochastic differential equations. The applicability of these results is demonstrated by deriving B-series for non-autonomous semi-linear SDEs and for exponential Runge-Kutta methods applied to this class of SDEs, which constitutes a significant generalization of the existing theory on such methods.
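A generic form of the class of equations referred to here is the following (notation illustrative): a non-autonomous semi-linear Itô SDE whose linear drift part $A$ is what exponential Runge-Kutta methods exploit through the matrix exponential $e^{Ah}$.

```latex
\mathrm{d}X(t) = \bigl( A\,X(t) + f(t, X(t)) \bigr)\,\mathrm{d}t
+ \sum_{m=1}^{M} g_{m}(t, X(t))\,\mathrm{d}W_{m}(t),
\qquad X(t_{0}) = x_{0}.
```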

In this paper, a force-based beam finite element model based on a modified higher-order shear deformation theory is proposed for the accurate analysis of functionally graded beams. In the modified theory, the distribution of transverse shear stress across the beam's thickness is obtained from the differential equilibrium equations for the stresses, and a modified shear stiffness is derived to take the effect of this transverse shear stress distribution into account. In the proposed beam element model, unlike traditional beam finite elements that regard generalized displacements as the unknown fields, the internal forces are taken as the unknown fields, and they are predefined using the closed-form solutions of the differential equilibrium equations of higher-order shear beams. The generalized displacements are then expressed in terms of the internal forces by introducing the geometric relations and constitutive equations, and the equation system of the beam element is constructed from the equilibrium conditions at the boundaries and the compatibility condition within the element. Numerical examples underscore the accuracy and efficacy of the proposed higher-order beam element model in the static analysis of functionally graded sandwich beams, particularly in capturing the true transverse shear stress distribution.
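The equilibrium-based recovery of the transverse shear stress used in such modified theories can be sketched as follows (2D beam, body forces omitted; notation illustrative):

```latex
% Local equilibrium in the axial direction:
\frac{\partial \sigma_{x}}{\partial x}
+ \frac{\partial \tau_{xz}}{\partial z} = 0 .
% Integrating through the thickness from the traction-free bottom face:
\tau_{xz}(x, z)
= -\int_{-h/2}^{z} \frac{\partial \sigma_{x}}{\partial x}(x, \zeta)\,
  \mathrm{d}\zeta .
```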

This paper investigates structural changes in the parameters of first-order autoregressive models by analyzing the edge eigenvalues of their precision matrices. Specifically, edge eigenvalues appear in the precision matrix if and only if there is a structural change in the autoregressive coefficients. We demonstrate that these edge eigenvalues correspond to the zeros of a determinantal equation. Additionally, we propose a consistent estimator for detecting outliers within the panel time series framework, supported by numerical experiments.
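To fix ideas, here is a small sketch of the tridiagonal precision matrix of a stationary AR(1) process, the object whose edge eigenvalues are analyzed (the change-point setting itself is not reproduced here):

```python
import numpy as np

def ar1_precision(phi, sigma2, T):
    """Tridiagonal precision (inverse covariance) matrix of a stationary
    AR(1) process X_t = phi*X_{t-1} + eps_t with Var(eps_t) = sigma2."""
    Q = np.zeros((T, T))
    np.fill_diagonal(Q, (1 + phi**2) / sigma2)
    Q[0, 0] = Q[-1, -1] = 1 / sigma2       # boundary corrections
    idx = np.arange(T - 1)
    Q[idx, idx + 1] = Q[idx + 1, idx] = -phi / sigma2
    return Q

def ar1_covariance(phi, sigma2, T):
    """Covariance matrix: Sigma_ij = sigma2/(1-phi^2) * phi^|i-j|."""
    i = np.arange(T)
    return sigma2 / (1 - phi**2) * phi ** np.abs(i[:, None] - i[None, :])

Q = ar1_precision(0.6, 1.0, 8)
Sigma = ar1_covariance(0.6, 1.0, 8)
```

Multiplying the two matrices recovers the identity, confirming that the sparse banded structure of the precision matrix encodes the AR(1) dependence exactly.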

In (Dzanic, J. Comp. Phys., 508:113010, 2024), a limiting approach for high-order discontinuous Galerkin schemes was introduced which allowed for imposing constraints on the solution continuously (i.e., everywhere within the element). While exact for linear constraint functionals, this approach only imposed a sufficient (but not the minimum necessary) amount of limiting for nonlinear constraint functionals. This short note shows how this limiting approach can be extended to allow exactness for general nonlinear quasiconcave constraint functionals through a nonlinear limiting procedure, reducing unnecessary numerical dissipation. Some examples are shown for nonlinear pressure and entropy constraints in the compressible gas dynamics equations, where both analytic and iterative approaches are used.
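A minimal sketch of the kind of nonlinear limiting step described here: blend the nodal values toward the element mean by a factor theta, and use bisection to find the largest theta for which a quasiconcave constraint g(u) >= 0 holds at every node. The scalar positivity constraint below is illustrative only and much simpler than the pressure/entropy constraints of the paper.

```python
import numpy as np

def limit_to_constraint(u_nodes, u_mean, g, iters=50):
    """Largest blending factor theta in [0, 1] such that
    g(u_mean + theta*(u - u_mean)) >= 0 at every node, found by bisection.
    Assumes g(u_mean) >= 0 and g quasiconcave along the blending path."""
    def feasible(theta):
        return all(g(u_mean + theta * (u - u_mean)) >= 0 for u in u_nodes)
    if feasible(1.0):
        return 1.0          # no limiting needed
    lo, hi = 0.0, 1.0       # feasible / infeasible bracket
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Scalar positivity constraint g(u) = u; one nodal value violates it.
nodes = np.array([1.0, 0.5, -0.2])
theta = limit_to_constraint(nodes, u_mean=0.6, g=lambda u: u)
# 0.6 - 0.8*theta >= 0 at the worst node gives theta = 0.75.
```

For a linear constraint the bisection recovers the exact linear-scaling answer; for nonlinear quasiconcave constraints it avoids the over-limiting that a worst-case linear bound would impose.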

This paper considers energy-efficient connectivity for Internet of Things (IoT) devices in a coexistence scenario between two distinct communication models: pull-based and push-based. In the pull-based model, the base station (BS) decides when to retrieve a specific type of data from the IoT devices, while in the push-based model, each IoT device decides when and which data to transmit. To this end, this paper advocates introducing content-based wake-up (CoWu), which enables the BS to remotely activate only the subset of pull-based nodes, equipped with wake-up receivers, that observe the relevant data. In this setup, the BS pulls data with CoWu at a specific time instance to fulfill its tasks while also collecting data from the nodes operating with the push-based communication model. Resource allocation plays an important role: a longer data collection duration for pull-based nodes can lead to high retrieval accuracy while decreasing the probability of successful data transmission for push-based nodes, and vice versa. Numerical results show that CoWu can meet the communication requirements of both pull-based and push-based nodes while achieving high energy efficiency gains (up to 38%) for the IoT devices compared to a baseline scheduling method.

Learning interpretable representations of the latent factors that generate data is an important topic for the development of artificial intelligence. With the rise of large multimodal models, which can align images with text to generate answers, such explanations can now be produced in natural language. In this work, we propose a framework that comprehensively explains each latent variable in generative models using a large multimodal model. We further measure the uncertainty of the generated explanations, quantitatively evaluate the performance of explanation generation across multiple large multimodal models, and qualitatively visualize the variations of each latent variable to study how the disentanglement achieved by different generative models affects the explanations. Finally, we discuss the explanatory capabilities and limitations of state-of-the-art large multimodal models.
