
The mean residual life function is a key functional for a survival distribution. It has a practically useful interpretation as the expected remaining lifetime given survival up to a particular time point, and it also characterizes the survival distribution. However, it has received limited attention in terms of inference methods under a probabilistic modeling framework. We seek to provide general inference methodology for mean residual life regression. We employ Dirichlet process mixture modeling for the joint stochastic mechanism of the covariates and the survival response. This density regression approach implies a flexible model structure for the mean residual life of the conditional response distribution, allowing general shapes for mean residual life both as a function of covariates at a specific time point and as a function of time at particular values of the covariates. We further extend the mixture model to incorporate dependence across experimental groups. This extension is built from a dependent Dirichlet process prior for the group-specific mixing distributions, with common atoms and weights that vary across groups through latent bivariate Beta distributed random variables. We discuss properties of the regression models, and develop methods for posterior inference. The different components of the methodology are illustrated with simulated data examples, and the model is also applied to a data set comprising right-censored survival times.
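
For reference, the mean residual life function of a survival time $T$ with survival function $S$ is
$$m(t) \;=\; \mathbb{E}[\,T - t \mid T > t\,] \;=\; \frac{\int_t^{\infty} S(u)\,du}{S(t)}, \qquad S(t) > 0,$$
so any flexible model for the conditional distribution of $T$ given covariates induces a correspondingly flexible model for $m$.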

Related content

We develop a theory of asymptotic efficiency in regular parametric models when data confidentiality is ensured by local differential privacy (LDP). Even though efficient parameter estimation is a classical and well-studied problem in mathematical statistics, it leads to several non-trivial obstacles that need to be tackled in the LDP case. Starting from a standard parametric model $\mathcal P=(P_\theta)_{\theta\in\Theta}$, $\Theta\subseteq\mathbb R^p$, for the iid unobserved sensitive data $X_1,\dots, X_n$, we establish local asymptotic mixed normality (along subsequences) of the model $$Q^{(n)}\mathcal P=(Q^{(n)}P_\theta^n)_{\theta\in\Theta}$$ generating the sanitized observations $Z_1,\dots, Z_n$, where $Q^{(n)}$ is an arbitrary sequence of sequentially interactive privacy mechanisms. This result readily implies convolution and local asymptotic minimax theorems. In the case $p=1$, the optimal asymptotic variance is found to be the inverse of the supremal Fisher information $\sup_{Q\in\mathcal Q_\alpha} I_\theta(Q\mathcal P)\in\mathbb R$, where the supremum runs over all $\alpha$-differentially private (marginal) Markov kernels. We present an algorithm for finding a (nearly) optimal privacy mechanism $\hat{Q}$ and an estimator $\hat{\theta}_n(Z_1,\dots, Z_n)$ based on the corresponding sanitized data that achieves this asymptotically optimal variance.
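
To fix ideas, here is a minimal sketch of the setting with a non-interactive $\alpha$-LDP Laplace kernel applied to a Gaussian location model; this is an illustration only, not the paper's (nearly) optimal mechanism $\hat{Q}$, and the clipping bound $C$ is an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(x, alpha, C=1.0):
    """Sanitize each sample with an alpha-LDP Laplace Markov kernel.

    Clipping to [-C, C] bounds the sensitivity by 2C, so adding
    Laplace(2C/alpha) noise yields alpha-local differential privacy.
    """
    return np.clip(x, -C, C) + rng.laplace(scale=2 * C / alpha, size=x.shape)

# Sensitive data from a location model P_theta = N(theta, 1).
theta, n, alpha = 0.3, 100_000, 1.0
x = rng.normal(theta, 1.0, size=n)
z = laplace_mechanism(x, alpha)

# A simple estimator based only on the sanitized data: the sample mean
# of z is unbiased for E[clip(X, -C, C)], which is close to theta here.
print("estimate:", z.mean())
```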

For multivariate data, tandem clustering is a well-known technique aiming to improve cluster identification through initial dimension reduction. Nevertheless, the usual approach using principal component analysis (PCA) has been criticized for focusing solely on inertia so that the first components do not necessarily retain the structure of interest for clustering. To address this limitation, a new tandem clustering approach based on invariant coordinate selection (ICS) is proposed. By jointly diagonalizing two scatter matrices, ICS is designed to find structure in the data while providing affine invariant components. Certain theoretical results have been previously derived and guarantee that under some elliptical mixture models, the group structure can be highlighted on a subset of the first and/or last components. However, ICS has garnered minimal attention within the context of clustering. Two challenges associated with ICS include choosing the pair of scatter matrices and selecting the components to retain. For effective clustering purposes, it is demonstrated that the best scatter pairs consist of one scatter matrix capturing the within-cluster structure and another capturing the global structure. For the former, local shape or pairwise scatters are of great interest, as is the minimum covariance determinant (MCD) estimator based on a carefully chosen subset size that is smaller than usual. The performance of ICS as a dimension reduction method is evaluated in terms of preserving the cluster structure in the data. In an extensive simulation study and empirical applications with benchmark data sets, various combinations of scatter matrices as well as component selection criteria are compared in situations with and without outliers. Overall, the new approach of tandem clustering with ICS shows promising results and clearly outperforms the PCA-based approach.
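
A minimal sketch of tandem clustering with ICS, using the classical (COV, COV4) scatter pair rather than the local, pairwise, or MCD-based pairs advocated above; joint diagonalization is carried out through a generalized eigenproblem, and the simulated two-group mixture is an assumption of the sketch.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def ics_components(X):
    """Invariant coordinate selection with the (COV, COV4) scatter pair.

    Jointly diagonalizes the two scatters by solving the generalized
    eigenproblem COV4 b = lambda COV b; the resulting coordinates are
    affine invariant up to sign and scale.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    # Squared Mahalanobis distances with respect to COV.
    d2 = np.einsum("ij,jk,ik->i", Xc, np.linalg.inv(cov), Xc)
    cov4 = (Xc * d2[:, None]).T @ Xc / (n * (p + 2))
    evals, B = eigh(cov4, cov)            # eigh returns ascending eigenvalues
    order = np.argsort(evals)[::-1]       # reorder to descending
    return Xc @ B[:, order], evals[order]

# Two-group Gaussian mixture: the structure appears in extreme components.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (200, 5))])
Z, kurtosis = ics_components(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z[:, :1])
```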

With the increasing availability of large-scale datasets, computational power, and tools like automatic differentiation and expressive neural network architectures, sequential data are now often treated in a data-driven way, with a dynamical model trained from the observation data. While neural networks are often seen as uninterpretable black-box architectures, they can still benefit from physical priors on the data and from mathematical knowledge. In this paper, we use a neural network architecture which leverages the long-known Koopman operator theory to embed dynamical systems in latent spaces where their dynamics can be described linearly, enabling a number of appealing features. We introduce methods that enable training such a model for long-term continuous reconstruction, even in difficult contexts where the data arrive as irregularly sampled time series. The potential for self-supervised learning is also demonstrated, as we show the promising use of trained dynamical models as priors for variational data assimilation techniques, with applications to, e.g., time series interpolation and forecasting.
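
A minimal PyTorch sketch of the core idea, an encoder/decoder wrapped around a single linear latent map; the architecture and dimensions below are illustrative assumptions, and the handling of irregular sampling and the data assimilation applications are omitted.

```python
import torch
import torch.nn as nn

class KoopmanAE(nn.Module):
    """Autoencoder whose latent dynamics are one linear map K.

    Rolling K forward in latent space yields multi-step predictions,
    mirroring the linear embedding suggested by Koopman operator theory.
    """
    def __init__(self, state_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                                     nn.Linear(64, state_dim))
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)

    def rollout(self, x0, steps):
        z = self.encoder(x0)
        preds = []
        for _ in range(steps):
            z = self.K(z)                    # linear advance in latent space
            preds.append(self.decoder(z))
        return torch.stack(preds, dim=1)

# Training couples a reconstruction loss with a long-horizon prediction loss.
model = KoopmanAE(state_dim=3, latent_dim=16)
x = torch.randn(8, 11, 3)                    # batch of 11-step trajectories
recon = model.decoder(model.encoder(x[:, 0]))
loss = nn.functional.mse_loss(recon, x[:, 0]) \
     + nn.functional.mse_loss(model.rollout(x[:, 0], 10), x[:, 1:])
loss.backward()
```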

Necessary and sufficient conditions for uniform consistency are explored. The hypothesis is simple. The nonparametric sets of alternatives are bounded convex sets in $\mathbb{L}_p$, $p > 1$, with "small" balls deleted. The "small" balls are centered at the hypothesized point, and their radii tend to zero as the sample size increases. For the problem of hypothesis testing on a density, we show that uniformly consistent tests for such sets of alternatives exist for some sequence of ball radii if and only if the convex set is relatively compact. The results are established for the problem of hypothesis testing on a density, for signal detection in Gaussian white noise, for linear ill-posed problems with random Gaussian noise, and so on.
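
Concretely, in the density setting the problem reads
$$H_0 : f = f_0 \qquad \text{versus} \qquad H_n : f \in U, \;\; \|f - f_0\|_p \ge \rho_n,$$
with $U \subset \mathbb{L}_p$ a bounded convex set and $\rho_n \to 0$: uniformly consistent tests exist for some such sequence $\rho_n$ if and only if $U$ is relatively compact.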

In recent years, operator learning, particularly the DeepONet, has received much attention for efficiently learning complex mappings between input and output functions across diverse fields. However, in practical scenarios with limited and noisy data, assessing the uncertainty in DeepONet predictions becomes essential, especially in mission-critical or safety-critical applications. Existing methods, either computationally intensive or yielding unsatisfactory uncertainty quantification, leave room for developing efficient and informative uncertainty quantification (UQ) techniques tailored for DeepONets. In this work, we propose a novel inference approach for efficient UQ in operator learning by harnessing the power of the Ensemble Kalman Inversion (EKI) approach. EKI, known for its derivative-free, noise-robust, and highly parallelizable features, has demonstrated its advantages for UQ for physics-informed neural networks [28]. Our innovative application of EKI enables us to efficiently train ensembles of DeepONets while obtaining informative uncertainty estimates for the output of interest. We deploy a mini-batch variant of EKI to accommodate larger datasets, mitigating the computational demand during the training stage. Furthermore, we introduce a heuristic method to estimate the artificial dynamics covariance, thereby improving our uncertainty estimates. Finally, we demonstrate the effectiveness and versatility of our proposed methodology across various benchmark problems, showcasing its potential to address the pressing challenges of uncertainty quantification in DeepONets, especially for practical applications with limited and noisy data.
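
For concreteness, a minimal NumPy sketch of one EKI update, with a toy linear forward map standing in for a DeepONet; the mini-batch variant and the artificial dynamics covariance heuristic are omitted, and all names and sizes are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def eki_step(thetas, G, y, gamma):
    """One Ensemble Kalman Inversion update of the ensemble `thetas`.

    thetas : (J, p) ensemble of parameter vectors
    G      : forward map, vectorized to return (J, d) outputs
    y      : (d,) observed data
    gamma  : observation noise variance (scalar)
    """
    Gs = G(thetas)                                  # (J, d)
    th_c = thetas - thetas.mean(axis=0)
    G_c = Gs - Gs.mean(axis=0)
    C_tg = th_c.T @ G_c / len(thetas)               # (p, d) cross-covariance
    C_gg = G_c.T @ G_c / len(thetas)                # (d, d)
    # Perturbed observations keep the ensemble spread informative.
    y_pert = y + rng.normal(0, np.sqrt(gamma), size=Gs.shape)
    K = C_tg @ np.linalg.inv(C_gg + gamma * np.eye(len(y)))
    return thetas + (y_pert - Gs) @ K.T

# Toy inverse problem: recover weights of a linear model from noisy data.
A = rng.normal(size=(20, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = A @ theta_true + rng.normal(0, 0.1, size=20)
thetas = rng.normal(size=(100, 3))                  # initial ensemble
for _ in range(20):
    thetas = eki_step(thetas, lambda t: t @ A.T, y, gamma=0.01)
print(thetas.mean(axis=0), thetas.std(axis=0))      # estimate and spread
```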

It is well known that Newton's method, especially when applied to large problems such as the discretization of nonlinear partial differential equations (PDEs), can have trouble converging if the initial guess is too far from the solution. This work focuses on accelerating this convergence in the context of the discretization of nonlinear elliptic PDEs. We first provide a quick review of existing methods and justify our choice of learning an initial guess with a Fourier neural operator (FNO). This choice was motivated by the mesh independence of such operators, whose training and evaluation can be performed on grids with different resolutions. The FNO is trained by minimizing, over generated data, loss functions based on the PDE discretization. Numerical results, in one and two dimensions, show that the proposed initial guess accelerates the convergence of Newton's method by a large margin compared to a naive initial guess, especially for highly nonlinear or anisotropic problems.
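
A minimal sketch of the acceleration mechanism on a one-dimensional nonlinear elliptic toy problem; a trained FNO would supply the warm start, for which the sketch substitutes a crude linear solve (an assumption made purely for illustration).

```python
import numpy as np

def newton(F, J, u0, tol=1e-10, max_iter=50):
    """Plain Newton iteration; convergence hinges on the quality of u0."""
    u = u0.copy()
    for k in range(max_iter):
        r = F(u)
        if np.linalg.norm(r) < tol:
            return u, k
        u -= np.linalg.solve(J(u), r)
    return u, max_iter

# Toy problem: -u'' + u^3 = f on (0, 1) with u(0) = u(1) = 0,
# discretized by second-order finite differences.
n = 99
h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
x = np.linspace(h, 1 - h, n)
f = 10 * np.sin(np.pi * x)

F = lambda u: A @ u + u**3 - f               # discretized residual
J = lambda u: A + np.diag(3 * u**2)          # its Jacobian

# A trained FNO would map f to an approximate solution; here a linear
# solve stands in for that learned warm start.
u_naive, it_naive = newton(F, J, np.zeros(n))
u_warm, it_warm = newton(F, J, np.linalg.solve(A, f))
print(it_naive, it_warm)   # the warm start typically needs fewer iterations
```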

We study the multivariate integration problem for periodic functions from the weighted Korobov space in the randomized setting. We introduce a new randomized rank-1 lattice rule with a randomly chosen number of points, which avoids the need for component-by-component construction in the search for good generating vectors while still achieving nearly the optimal rate of the randomized error. Our idea is to exploit the fact that at least half of the possible generating vectors yield nearly the optimal rate of the worst-case error in the deterministic setting. By randomly choosing generating vectors $r$ times and comparing their corresponding worst-case errors, one can find one generating vector with a desired worst-case error bound with a very high probability, and the (small) failure probability can be controlled by increasing $r$ logarithmically as a function of the number of points. Numerical experiments are conducted to support our theoretical findings.
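
A minimal sketch of the random search over generating vectors for a fixed number of points $n$ (the rule above additionally randomizes $n$ itself); the closed-form worst-case error uses smoothness $\alpha = 1$ and a common product weight $\gamma$, both assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def worst_case_error_sq(z, n, gamma=1.0):
    """Squared worst-case error of the rank-1 lattice rule with generator z
    in the weighted Korobov space with alpha = 1, where the kernel sum has
    the closed form 2*pi^2*B_2(x) with B_2 the Bernoulli polynomial."""
    i = np.arange(n)[:, None]
    x = (i * z[None, :] % n) / n                 # lattice points in [0,1)^d
    b2 = x**2 - x + 1.0 / 6.0
    return -1.0 + np.mean(np.prod(1.0 + gamma * 2 * np.pi**2 * b2, axis=1))

def random_generating_vector(n, d, r=10):
    """Draw r random generating vectors and keep the best: since at least
    half of all candidates are near-optimal, the failure probability
    decays geometrically in r."""
    best, best_err = None, np.inf
    for _ in range(r):
        z = rng.integers(1, n, size=d)
        err = worst_case_error_sq(z, n)
        if err < best_err:
            best, best_err = z, err
    return best, best_err

n, d = 1009, 4                                   # prime n keeps points distinct
z, err = random_generating_vector(n, d)
pts = (np.arange(n)[:, None] * z[None, :] % n) / n
approx = np.mean(np.prod(1 + np.sin(2 * np.pi * pts), axis=1))  # integral is 1
print(z, np.sqrt(err), approx)
```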

Iterated conditional expectation (ICE) g-computation is an estimation approach for addressing time-varying confounding for both longitudinal and time-to-event data. Unlike other g-computation implementations, ICE avoids the need to specify models for each time-varying covariate. For variance estimation, previous work has suggested the bootstrap. However, bootstrapping can be computationally intensive and sensitive to the number of resamples used. Here, we present ICE g-computation as a set of stacked estimating equations. Therefore, the variance for the ICE g-computation estimator can be consistently estimated using the empirical sandwich variance estimator. Performance of the variance estimator was evaluated empirically with a simulation study. The proposed approach is also demonstrated with an illustrative example on the effect of cigarette smoking on the prevalence of hypertension. In the simulation study, the empirical sandwich variance estimator appropriately estimated the variance. When comparing runtimes between the sandwich variance estimator and the bootstrap for the applied example, the sandwich estimator was substantially faster, even when bootstraps were run in parallel. The empirical sandwich variance estimator is a viable option for variance estimation with ICE g-computation.
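
A minimal two-period sketch of the ICE estimator on simulated data; the data-generating process, variable names, and the linear model for the pseudo-outcome are assumptions of the sketch, and the stacked estimating equations for the sandwich variance are omitted.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

# Simulated data: confounders L_t and binary exposures A_t at t = 0, 1.
n = 5000
L0 = rng.normal(size=n)
A0 = rng.binomial(1, 1 / (1 + np.exp(-L0)))
L1 = L0 + A0 + rng.normal(size=n)
A1 = rng.binomial(1, 1 / (1 + np.exp(-L1)))
Y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * L1 - A0 - A1))))
df = pd.DataFrame(dict(L0=L0, A0=A0, L1=L1, A1=A1, Y=Y))

def ice_gcomp(df, a0, a1):
    """ICE g-computation for the static plan (A0, A1) = (a0, a1): iterate
    outcome regressions backward in time; no models for L_t are needed."""
    # Step 1: regress Y on the full history, predict with A1 set to a1.
    m1 = LogisticRegression().fit(df[["L0", "A0", "L1", "A1"]], df["Y"])
    q1 = m1.predict_proba(df.assign(A1=a1)[["L0", "A0", "L1", "A1"]])[:, 1]
    # Step 2: regress the pseudo-outcome on earlier history, set A0 to a0.
    m0 = LinearRegression().fit(df[["L0", "A0"]], q1)
    q0 = m0.predict(df.assign(A0=a0)[["L0", "A0"]])
    return q0.mean()

print(ice_gcomp(df, 1, 1) - ice_gcomp(df, 0, 0))   # risk difference
```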

We develop a new, powerful method for counting elements in a multiset. As a first application, we use this algorithm to study the number of occurrences of patterns in a permutation. For patterns of length 3 there are two Wilf classes, and the general behaviour of these is reasonably well-known. We slightly extend some of the known results in that case, and exhaustively study the case of patterns of length 4, about which there is little previous knowledge. For such patterns, there are seven Wilf classes, and based on extensive enumerations and careful series analysis, we have conjectured the asymptotic behaviour for all classes.
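
For concreteness, a brute-force occurrence counter that fixes the definition being enumerated (the counting method above is far more efficient; this sketch is $O(n^k)$).

```python
from itertools import combinations

def pattern_occurrences(perm, pattern):
    """Count subsequences of `perm` whose entries appear in the same
    relative order as `pattern` (an occurrence of the pattern)."""
    k = len(pattern)
    order_type = sorted(range(k), key=lambda i: pattern[i])
    return sum(
        1
        for sub in combinations(perm, k)
        if sorted(range(k), key=lambda i: sub[i]) == order_type
    )

# Occurrences of the length-4 pattern 1324 in a small permutation.
print(pattern_occurrences((5, 1, 4, 2, 3), (1, 3, 2, 4)))
```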

Consider a diffusion process $X=(X_t)_{t\in[0,1]}$, observed at discrete times and at high frequency, solution of a stochastic differential equation whose drift and diffusion coefficients are assumed to be unknown. In this article, we focus on the nonparametric estimation of the diffusion coefficient. We propose ridge estimators of the square of the diffusion coefficient, built from discrete observations of $X$ and obtained by minimizing a least squares contrast. We prove that the estimators are consistent and derive rates of convergence as the number of sample paths tends to infinity and the discretization step of the time interval $[0,1]$ tends to zero. The theoretical results are complemented by a numerical study on synthetic data.
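
A minimal sketch of the estimation idea: squared rescaled increments concentrate around $\sigma^2(X_t)$, so a ridge-penalized least squares fit over a finite basis recovers the squared diffusion coefficient; the Euler-simulated path, the polynomial basis, and the penalty level are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate dX = -X dt + sigma(X) dW on [0, 1] with an Euler scheme.
n, dt = 10_000, 1e-4
sigma = lambda x: 1 + 0.5 * np.sin(x)
X = np.zeros(n + 1)
for k in range(n):
    X[k + 1] = X[k] - X[k] * dt + sigma(X[k]) * np.sqrt(dt) * rng.normal()

# Pseudo-responses: (X_{t+dt} - X_t)^2 / dt is approximately sigma^2(X_t)
# plus centered noise, so penalized least squares on a basis is natural.
U = np.diff(X) ** 2 / dt
B = np.vander(X[:-1], 8, increasing=True)      # polynomial basis in X_t
lam = 1.0                                      # ridge penalty level
coef = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ U)

x_grid = np.linspace(-0.5, 0.5, 5)
est = np.vander(x_grid, 8, increasing=True) @ coef
print(np.c_[est, sigma(x_grid) ** 2])          # estimate vs ground truth
```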
