
Obtaining an accurate estimate of the underlying covariance matrix from finite-sample data is challenging due to sampling noise. In recent years, sophisticated covariance-cleaning techniques based on random matrix theory have been proposed to address this issue. Most of these methods aim to achieve an optimal covariance matrix estimator by minimizing the Frobenius norm distance as a measure of the discrepancy between the true covariance matrix and the estimator. However, this practice offers limited interpretability in terms of information theory. To better understand this relationship, we focus on the Kullback-Leibler divergence to quantify the information lost by the estimator. Our analysis centers on rotationally invariant estimators, which are the state of the art in random matrix theory, and we derive an analytical expression for their Kullback-Leibler divergence. Due to the intricate nature of the calculations, we use genetic programming regressors paired with human intuition. Ultimately, using this approach, we formulate a conjecture, validated through extensive simulations, showing that the Frobenius distance corresponds to a first-order expansion term of the Kullback-Leibler divergence, thus establishing a more defined link between the two measures.
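To make the comparison concrete, here is a minimal sketch, assuming a zero-mean Gaussian model, that contrasts the Frobenius distance with the Gaussian Kullback-Leibler divergence between a true covariance matrix and its sample estimate; the dimensions, the choice of true covariance, and all names are illustrative and not taken from the paper.

```python
# A minimal sketch (not the paper's estimator): compare the Frobenius distance
# and the Gaussian Kullback-Leibler divergence between a true covariance matrix
# and its sample estimate. Dimensions and the true covariance are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 200                          # number of variables, number of samples

# Illustrative "true" covariance: identity plus a low-rank perturbation.
A = rng.normal(size=(N, 5))
C_true = np.eye(N) + A @ A.T / 5

# Sample covariance estimated from T observations of a zero-mean Gaussian vector.
X = rng.multivariate_normal(np.zeros(N), C_true, size=T)
C_hat = np.cov(X, rowvar=False)

def frobenius_distance(C, Xi):
    return np.linalg.norm(C - Xi, ord="fro")

def gaussian_kl(C, Xi):
    """KL( N(0, C) || N(0, Xi) ) for positive-definite matrices C and Xi."""
    n = C.shape[0]
    _, logdet_C = np.linalg.slogdet(C)
    _, logdet_Xi = np.linalg.slogdet(Xi)
    return 0.5 * (np.trace(np.linalg.inv(Xi) @ C) - n + logdet_Xi - logdet_C)

print("Frobenius distance    :", frobenius_distance(C_true, C_hat))
print("Gaussian KL divergence:", gaussian_kl(C_true, C_hat))
```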

Related content

In probability theory and statistics, a covariance matrix (also known as an auto-covariance matrix, dispersion matrix, variance matrix, or variance-covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector. The diagonal of the matrix contains the variances, that is, the covariance of each element with itself.
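A minimal numerical illustration of this definition (the data and dimensions are arbitrary):

```python
# A minimal illustration of the definition: entry (i, j) of the covariance
# matrix is Cov(X_i, X_j), and the diagonal holds the variances Var(X_i).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))              # 1000 observations of a 3-dimensional random vector
C = np.cov(X, rowvar=False)                 # 3 x 3 covariance matrix

print(C)                                                # off-diagonal entries: pairwise covariances
print(np.allclose(np.diag(C), X.var(axis=0, ddof=1)))   # diagonal equals the variances -> True
```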

The sum of quantum computing errors is the key element both for the estimation and control of errors in quantum computing and for its statistical study. In this article we analyze the sum of two independent quantum computing errors, $X_1$ and $X_2$, and we obtain a formula for the variance of their sum: $$ V(X_1+X_2)=V(X_1)+V(X_2)-\frac{V(X_1)V(X_2)}{2}. $$ We conjecture that this result holds for general quantum computing errors, and we prove the formula for independent isotropic quantum computing errors.
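A hedged numerical illustration of the stated formula, with arbitrary variance values that are not taken from the article:

```python
# Hedged numerical check of the stated formula; the variance values below are
# arbitrary and not taken from the article.
def variance_of_sum(v1: float, v2: float) -> float:
    """V(X1 + X2) = V(X1) + V(X2) - V(X1) * V(X2) / 2."""
    return v1 + v2 - v1 * v2 / 2

print(variance_of_sum(0.10, 0.20))   # 0.10 + 0.20 - (0.10 * 0.20) / 2 = 0.29
```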

Accelerated failure time (AFT) models are frequently used to model survival data, providing a direct quantification of the relationship between event times and covariates. These models allow for the acceleration or deceleration of failure times through a multiplicative factor that accounts for the effect of covariates. While existing literature provides numerous methods for fitting AFT models with time-fixed covariates, adapting these approaches to scenarios involving both time-varying covariates and partly interval-censored data remains challenging. Motivated by a randomised clinical trial dataset on advanced melanoma patients, we propose a maximum penalised likelihood approach for fitting a semiparametric AFT model to survival data with partly interval-censored failure times. This method also accommodates both time-fixed and time-varying covariates. We utilise Gaussian basis functions to construct a smooth approximation of the non-parametric baseline hazard and fit the model using a constrained optimisation approach. The effectiveness of our method is demonstrated through extensive simulations. Finally, we illustrate the relevance of our approach by applying it to a dataset from a randomised clinical trial involving patients with advanced melanoma.
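As a rough sketch of the basis-function idea only (the knots, bandwidth, and coefficients below are illustrative assumptions, not the penalised-likelihood estimates described in the abstract), the non-parametric baseline hazard can be approximated by a non-negative combination of Gaussian bumps:

```python
# Sketch of a Gaussian basis-function approximation of a baseline hazard; the
# knots, bandwidth, and coefficients are illustrative assumptions, not the
# penalised-likelihood estimates described in the abstract.
import numpy as np

knots = np.linspace(0.0, 10.0, 8)        # centres of the Gaussian basis functions
sigma = 1.5                              # common bandwidth (assumption)
theta = np.abs(np.random.default_rng(6).normal(size=knots.size))  # non-negative weights

def baseline_hazard(t):
    """Smooth, non-negative h0(t) = sum_k theta_k * exp(-(t - knot_k)^2 / (2 sigma^2))."""
    t = np.atleast_1d(t).astype(float)[:, None]          # shape (m, 1)
    basis = np.exp(-0.5 * ((t - knots) / sigma) ** 2)    # shape (m, number of knots)
    return basis @ theta

print(baseline_hazard([1.0, 5.0, 9.0]))
```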

The notion of a non-deterministic logical matrix (where connectives are interpreted as multi-functions) extends the traditional semantics for propositional logics based on logical matrices (where connectives are interpreted as functions). This extension allows for finitely characterizing a much wider class of logics, and has proven decisive in a myriad of recent compositionality results. In this paper we show that the added expressivity brought by non-determinism also has its drawbacks, and in particular that the problem of determining whether two given finite non-deterministic matrices are equivalent, in the sense that they induce the same logic, becomes undecidable. We also discuss some workable sufficient conditions and particular cases, namely regarding rexpansion homomorphisms and bridges to calculi.

The problem of estimating, from a random sample of points, the dimension of a compact subset S of the Euclidean space is considered. The emphasis is put on consistency results in the statistical sense. That is, statements of convergence to the true dimension value when the sample size grows to infinity. Among the many available definitions of dimension, we have focused (on the grounds of their statistical tractability) on three notions: the Minkowski dimension, the correlation dimension and the, perhaps less popular, concept of pointwise dimension. We prove the statistical consistency of some natural estimators of these quantities. Our proofs partially rely on the use of an instrumental estimator formulated in terms of the empirical volume function $V_n(r)$, defined as the Lebesgue measure of the set of points whose distance to the sample is at most r. In particular, we explore the case in which the true volume function $V(r)$ of the target set S is a polynomial on some interval starting at zero. An empirical study is also included. Our study aims to provide some theoretical support, and some practical insights, for the problem of deciding whether or not the set S has a dimension smaller than that of the ambient space. This is a major statistical motivation of the dimension studies, in connection with the so-called Manifold Hypothesis.
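The empirical volume function lends itself to a simple Monte Carlo sketch; the bounding box, the circle-shaped sample, and the function names below are illustrative assumptions rather than the estimators analyzed in the paper:

```python
# Monte Carlo sketch of the empirical volume function V_n(r): the Lebesgue
# measure of the set of points whose distance to the sample is at most r,
# approximated by uniform sampling in a bounding box. The circle-shaped sample
# and the box are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)

# Illustrative sample: n points on the unit circle in R^2 (intrinsic dimension 1).
n = 500
angles = rng.uniform(0.0, 2.0 * np.pi, size=n)
sample = np.column_stack([np.cos(angles), np.sin(angles)])

def empirical_volume(sample, r, box_low=-2.0, box_high=2.0, n_mc=200_000):
    """Monte Carlo estimate of V_n(r) inside the box [box_low, box_high]^d."""
    d = sample.shape[1]
    tree = cKDTree(sample)
    U = rng.uniform(box_low, box_high, size=(n_mc, d))
    dist, _ = tree.query(U)                       # distance of each point to the sample
    return (box_high - box_low) ** d * np.mean(dist <= r)

for r in (0.05, 0.1, 0.2):
    print(r, empirical_volume(sample, r))
# For this one-dimensional set, V_n(r) grows roughly linearly in r for small r,
# which is the kind of polynomial behaviour discussed in the abstract.
```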

We propose a novel partitioned scheme based on Eikonal equations to model the coupled propagation of the electrical signal in the His-Purkinje system and in the myocardium for cardiac electrophysiology. This scheme allows, for the first time in Eikonal-based modeling, to capture all possible signal reentries between the Purkinje network and the cardiac muscle that may occur under pathological conditions. As part of the proposed scheme, we introduce a new pseudo-time method for the Eikonal-diffusion problem in the myocardium, to correctly enforce electrical stimuli coming from the Purkinje network. We test our approach by performing numerical simulations of cardiac electrophysiology in a real biventricular geometry, under both pathological and therapeutic conditions, to demonstrate its flexibility, robustness, and accuracy.

A statistical network model with overlapping communities can be generated as a superposition of mutually independent random graphs of varying size. The model is parameterized by the number of nodes, the number of communities, and the joint distribution of the community size and the edge probability. This model admits sparse parameter regimes with power-law limiting degree distributions and non-vanishing clustering coefficients. This article presents large-scale approximations of clique and cycle frequencies for graph samples generated by the model, which are valid for regimes with unbounded numbers of overlapping communities. Our results reveal the growth rates of these subgraph frequencies and show that their theoretical densities can be reliably estimated from data.
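A minimal generator sketch of the superposition model, with community-size and edge-probability distributions chosen purely for illustration:

```python
# Generator sketch of the superposition model: each community is an independent
# Erdos-Renyi graph on a random subset of nodes, and the observed graph is the
# union of their edges. The size and edge-probability distributions are illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n_nodes, n_communities = 1000, 300

edges = set()
for _ in range(n_communities):
    size = int(rng.integers(2, 30))                  # community size
    p = min(1.0, 5.0 / size)                         # edge probability tied to the size
    members = rng.choice(n_nodes, size=size, replace=False)
    for u, v in itertools.combinations(members, 2):
        if rng.random() < p:
            edges.add((min(u, v), max(u, v)))        # superpose: keep the union of edges

print("nodes:", n_nodes, "edges:", len(edges))
```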

Reliability analysis is a sub-field of uncertainty quantification that assesses the probability of a system performing as intended under various uncertainties. Traditionally, this analysis relies on deterministic models, where experiments are repeatable, i.e., they produce consistent outputs for a given set of inputs. However, real-world systems often exhibit stochastic behavior, leading to non-repeatable outcomes. These so-called stochastic simulators produce different outputs each time the model is run, even with fixed inputs. This paper formally introduces reliability analysis for stochastic models and addresses it by using suitable surrogate models to lower its typically high computational cost. Specifically, we focus on the recently introduced generalized lambda models and stochastic polynomial chaos expansions. These emulators are designed to learn the inherent randomness of the simulator's response and enable efficient uncertainty quantification at a much lower cost than traditional Monte Carlo simulation. We validate our methodology through three case studies. First, using an analytical function with a closed-form solution, we demonstrate that the emulators converge to the correct solution. Second, we present results obtained from the surrogates using a toy example of a simply supported beam. Finally, we apply the emulators to perform reliability analysis on a realistic wind turbine case study, where only a dataset of simulation results is available.
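For intuition, a plain Monte Carlo baseline for a toy stochastic simulator is sketched below; the simulator, the input distribution, and the failure threshold are illustrative assumptions, and this brute-force estimate is exactly the expensive computation that the surrogate models are meant to avoid:

```python
# Plain Monte Carlo reliability analysis for a toy stochastic simulator; the
# model, the input distribution, and the failure threshold are illustrative
# assumptions. This is the costly baseline the surrogate models aim to replace.
import numpy as np

rng = np.random.default_rng(4)

def stochastic_simulator(x, rng):
    """Toy stochastic model: repeated runs at the same input x give different outputs."""
    return x[0] ** 2 + 0.5 * x[1] + rng.normal(scale=0.3)

threshold = 3.0                                   # failure when the response exceeds this level
n_runs = 100_000
inputs = rng.normal(size=(n_runs, 2))             # uncertain inputs
outputs = np.array([stochastic_simulator(x, rng) for x in inputs])

print("estimated failure probability:", np.mean(outputs > threshold))
```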

Statistical learning under distribution shift is challenging when neither prior knowledge nor fully accessible data from the target distribution is available. Distributionally robust learning (DRL) aims to control the worst-case statistical performance within an uncertainty set of candidate distributions, but how to properly specify the set remains challenging. To enable distributional robustness without being overly conservative, in this paper, we propose a shape-constrained approach to DRL, which incorporates prior information about the way in which the unknown target distribution differs from its estimate. More specifically, we assume the unknown density ratio between the target distribution and its estimate is isotonic with respect to some partial order. At the population level, we provide a solution to the shape-constrained optimization problem that does not involve the isotonic constraint. At the sample level, we provide consistency results for an empirical estimator of the target in a range of different settings. Empirical studies on both synthetic and real data examples demonstrate the improved accuracy of the proposed shape-constrained approach.
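As a toy illustration of the shape constraint, restricted to a total order (the paper's setting allows a general partial order), a noisy density-ratio estimate can be projected onto the monotone cone with scikit-learn's isotonic regression; the data and names are illustrative:

```python
# Toy illustration of the shape constraint, restricted to a total order: a noisy
# density-ratio estimate is projected onto the set of non-decreasing functions
# of a covariate via isotonic regression. Data and names are illustrative.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0.0, 1.0, size=200))                  # covariate inducing the order
raw_ratio = np.exp(x) + rng.normal(scale=0.5, size=x.size)    # noisy raw ratio estimate

iso = IsotonicRegression(increasing=True)
shape_constrained_ratio = iso.fit_transform(x, raw_ratio)     # isotonic projection

print(shape_constrained_ratio[:5])
```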

We prove, for stably computably enumerable formal systems, direct analogues of the first and second incompleteness theorems of G\"odel. A typical stably computably enumerable set is the set of Diophantine equations with no integer solutions, and in particular such sets are generally not computably enumerable. And so this gives the first extension of the second incompleteness theorem to non-classically computable formal systems. Let's motivate this with a somewhat physical application. Let $\mathcal{H}$ be the suitable infinite time limit (stabilization in the sense of the paper) of the mathematical output of humanity, specializing to first order sentences in the language of arithmetic (for simplicity), and understood as a formal system. Suppose that all the relevant physical processes in the formation of $\mathcal{H}$ are Turing computable. Then as defined $\mathcal{H}$ may \emph{not} be computably enumerable, but it is stably computably enumerable. Thus, the classical G\"odel disjunction applied to $\mathcal{H}$ is meaningless, but applying our incompleteness theorems to $\mathcal{H}$ we then get a sharper version of G\"odel's disjunction: assume $\mathcal{H} \vdash PA$; then either $\mathcal{H}$ is not stably computably enumerable, or $\mathcal{H}$ is not 1-consistent (in particular is not sound), or $\mathcal{H}$ cannot prove a certain true statement of arithmetic (and cannot disprove it if in addition $\mathcal{H}$ is 2-consistent).

Many combinatorial optimization problems can be formulated as the search for a subgraph that satisfies certain properties and minimizes the total weight. We assume here that the vertices correspond to points in a metric space and can take any position in given uncertainty sets. Then, the cost function to be minimized is the sum of the distances for the worst positions of the vertices in their uncertainty sets. We propose two types of polynomial-time approximation algorithms. The first one relies on solving a deterministic counterpart of the problem where the uncertain distances are replaced with maximum pairwise distances. We study in detail the resulting approximation ratio, which depends on the structure of the feasible subgraphs and whether the metric space is Ptolemaic or not. The second algorithm is a fully polynomial-time approximation scheme for the special case of $s-t$ paths.
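A minimal sketch of the first algorithm under simplifying assumptions (ball-shaped uncertainty sets and the shortest $s-t$ path problem): each uncertain edge length is replaced by the maximum pairwise distance between the two uncertainty sets, and the resulting deterministic counterpart is solved directly.

```python
# Sketch of the deterministic-counterpart idea for a shortest s-t path with
# ball-shaped uncertainty sets (an illustrative special case): each uncertain
# edge length is replaced by the maximum pairwise distance between the two
# balls, i.e. the distance between the centres plus the two radii, and the
# deterministic counterpart is solved with Dijkstra's algorithm.
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
n = 8
centers = rng.uniform(0.0, 10.0, size=(n, 2))     # nominal vertex positions
radii = rng.uniform(0.0, 1.0, size=n)             # each vertex may move within its ball

G = nx.complete_graph(n)
for u, v in G.edges:
    G[u][v]["weight"] = float(np.linalg.norm(centers[u] - centers[v]) + radii[u] + radii[v])

path = nx.shortest_path(G, source=0, target=n - 1, weight="weight")
print("approximate robust s-t path:", path)
```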
