Functional principal component analysis has been shown to be invaluable for revealing variation modes of longitudinal outcomes, which serve as important building blocks for forecasting and model building. Decades of research have advanced methods for functional principal component analysis, often assuming independence between the observation times and the longitudinal outcomes. Yet such assumptions are fragile in real-world settings where observation times may be driven by outcome-related reasons. Rather than ignoring the informative observation time process, we explicitly model the observation times by a counting process dependent on time-varying prognostic factors. Identification of the mean, covariance function, and functional principal components ensues via inverse intensity weighting. We propose using weighted penalized splines for estimation and establish consistency and convergence rates for the weighted estimators. Simulation studies demonstrate that the proposed estimators are substantially more accurate than existing ones in the presence of correlation between the observation time process and the longitudinal outcome process. We further examine the finite-sample performance of the proposed method using the Acute Infection and Early Disease Research Program study.
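
As a loose illustration of the inverse-intensity-weighting idea, the toy sketch below weights each observed outcome by the reciprocal of an estimated observation intensity and fits a penalized spline to the reweighted data. This is not the authors' implementation: the intensity model, the truncated-power basis, the penalty, and all variable names are assumptions made for illustration only.

```python
import numpy as np

def truncated_power_basis(t, knots):
    """Cubic truncated-power spline basis: 1, t, t^2, t^3, (t - k)_+^3."""
    t = np.asarray(t, dtype=float)
    cols = [np.ones_like(t), t, t**2, t**3]
    cols += [np.clip(t - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def weighted_penalized_spline(t, y, weights, knots, lam=1.0):
    """Fit the mean function by inverse-intensity-weighted, ridge-penalized least squares."""
    B = truncated_power_basis(t, knots)
    W = np.diag(weights)
    D = np.eye(B.shape[1])
    D[:4, :4] = 0.0                        # penalize only the knot (roughness) terms
    coef = np.linalg.solve(B.T @ W @ B + lam * D, B.T @ W @ y)
    return lambda s: truncated_power_basis(s, knots) @ coef

# Toy data: observation intensity depends on a prognostic factor z (assumed model).
rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)
t = rng.uniform(0.0, 1.0, size=n)
y = np.sin(2 * np.pi * t) + 0.5 * z + rng.normal(scale=0.3, size=n)
intensity_hat = np.exp(0.5 * z)            # assumed fitted visit intensity
w = 1.0 / intensity_hat                    # inverse intensity weights
mu_hat = weighted_penalized_spline(t, y, w, knots=np.linspace(0.1, 0.9, 8))
print(mu_hat([0.25, 0.5, 0.75]))           # estimated mean function at a few times
```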

Related content

Time-to-event analysis, or survival analysis, provides valuable insights into clinical prognosis and treatment recommendations. However, this task is typically more challenging than other regression tasks due to censored observations. Moreover, concerns regarding the reliability of predictions persist among clinicians, mainly attributed to the absence of confidence assessment, robustness, and calibration of predictions. To address these challenges, we introduce an evidential regression model designed especially for time-to-event prediction tasks, in which the most plausible event time is directly quantified by aggregated Gaussian random fuzzy numbers (GRFNs). GRFNs are a newly introduced family of random fuzzy subsets of the real line that generalizes both Gaussian random variables and Gaussian possibility distributions. Unlike conventional methods that build on strict distributional assumptions, e.g., a proportional hazards function, our model only assumes that the event time is encoded in a GRFN on the real line, without any strict distributional assumption, and therefore offers more flexibility in complex data scenarios. Furthermore, the epistemic and aleatory uncertainties regarding the event time are quantified within the aggregated GRFN as well. Our model can therefore provide more detailed guidance for clinical decision making, with two additional degrees of information. The model is fit by minimizing a generalized negative log-likelihood function that accounts for data censoring based on uncertainty evidence reasoning. Experimental results on simulated datasets with varying data distributions and censoring scenarios, as well as on real-world datasets across diverse clinical settings and tasks, demonstrate that our model achieves both accurate and reliable performance, outperforming state-of-the-art methods.
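
For intuition only, the snippet below sketches how a censoring-aware negative log-likelihood is assembled in the much simpler case of a plain Gaussian event-time model: exact event times contribute the log-density, right-censored times contribute the log-survival probability. It is a stand-in for, not an implementation of, the GRFN-based generalized likelihood described above; all names and the toy data-generating process are assumptions.

```python
import numpy as np
from scipy.stats import norm

def censored_gaussian_nll(mu, sigma, time, event):
    """event=1: exact time observed -> log-density; event=0: right-censored -> log-survival."""
    logpdf = norm.logpdf(time, loc=mu, scale=sigma)
    logsurv = norm.logsf(time, loc=mu, scale=sigma)
    return -np.sum(np.where(event == 1, logpdf, logsurv))

# Toy usage: simulate event times and independent censoring times.
rng = np.random.default_rng(1)
t_true = rng.normal(5.0, 1.0, size=200)
censor = rng.uniform(3.0, 8.0, size=200)
time = np.minimum(t_true, censor)
event = (t_true <= censor).astype(int)
print(censored_gaussian_nll(mu=5.0, sigma=1.0, time=time, event=event))
```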

The susceptibility of timestepping algorithms to numerical instabilities is an important consideration when simulating partial differential equations (PDEs). Here we identify and analyze a pernicious numerical instability, arising in pseudospectral simulations of nonlinear wave propagation, that results in finite-time blow-up. The blow-up time scale is independent of the spatial resolution and spectral basis but sensitive to the timestepping scheme and the timestep size. The instability appears in multi-step and multi-stage implicit-explicit (IMEX) timestepping schemes of different orders of accuracy and has been found to manifest in simulations of soliton solutions of the Korteweg-de Vries (KdV) equation and traveling wave solutions of a nonlinear generalized Klein-Gordon equation. Focusing on the case of KdV solitons, we show that modal predictions from linear stability theory are unable to explain the instability because the spurious growth from linear dispersion is small and nonlinear sources of error growth converge too slowly in the limit of small timestep size. We then develop a novel multi-scale asymptotic framework that captures the slow, nonlinear accumulation of timestepping errors. The framework allows the solution to vary with respect to multiple time scales related to the timestep size and thus recovers the instability as a function of a slow time scale dictated by the order of accuracy of the timestepping scheme. We show that this approach correctly describes our simulations of solitons by making accurate predictions of the blow-up time scale and transient features of the instability. Our work demonstrates that studies of long-time simulations of nonlinear waves should exercise caution when validating their timestepping schemes.
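
For concreteness, here is a hedged sketch of the kind of pseudospectral IMEX step the discussion refers to: a first-order scheme for the KdV equation u_t + 6 u u_x + u_xxx = 0 on a periodic domain, with the linear dispersion treated implicitly and the nonlinear term explicitly. The resolution, domain, timestep, and one-soliton initial condition are illustrative choices, not the paper's exact setup or scheme.

```python
import numpy as np

N, L = 256, 40.0
x = np.linspace(0.0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
dt = 1e-3

def imex_euler_step(u):
    """One step: (1 - i*dt*k^3) u_hat^{n+1} = u_hat^n - dt * FFT(6 u u_x)."""
    u_hat = np.fft.fft(u)
    ux = np.real(np.fft.ifft(1j * k * u_hat))          # pseudospectral derivative
    nonlinear_hat = np.fft.fft(6.0 * u * ux)            # explicit nonlinear term
    u_hat_new = (u_hat - dt * nonlinear_hat) / (1.0 - 1j * dt * k**3)  # implicit dispersion
    return np.real(np.fft.ifft(u_hat_new))

# One-soliton initial condition u = (c/2) sech^2(sqrt(c)/2 * (x - x0)).
c, x0 = 1.0, L / 4
u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - x0)) ** 2
for _ in range(1000):
    u = imex_euler_step(u)
print(float(np.max(np.abs(u))))
```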

Missing data is a common problem that challenges the study of treatment effects. In the context of mediation analysis, this paper addresses missingness in the two key variables, mediator and outcome, focusing on identification. We consider self-separated missingness models, where identification is achieved by conditional independence assumptions only, and self-connected missingness models, where identification relies on so-called shadow variables. The first class is somewhat limited, as it is constrained by the need to remove a certain number of connections from the model. The second class turns out to include substantial variation in the position of the shadow variable in the causal structure (vis-a-vis the mediator and outcome) and the corresponding implications for the model. In constructing the models, to improve plausibility, we pay close attention to allowing, where possible, dependencies due to unobserved causes of the missingness. In this exploration, we develop theory where needed. This results in templates for identification in this mediation setting, generally useful identification techniques, and, perhaps most significantly, a synthesis and substantial expansion of shadow variable theory.

We discuss nonparametric estimation of the trend coefficient in models governed by a stochastic differential equation driven by a multiplicative stochastic volatility.
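
As a generic illustration of what nonparametric trend (drift) estimation from a discretely observed path can look like, the sketch below simulates an SDE with multiplicative stochastic volatility by Euler-Maruyama and applies a standard kernel-weighted increment estimator. The drift b(x) = 2(1 - x), the volatility dynamics, the bandwidth, and all names are assumptions; this is not the estimator developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n = 200.0, 200_000
dt = T / n
dWx = rng.normal(scale=np.sqrt(dt), size=n)
dWv = rng.normal(scale=np.sqrt(dt), size=n)
x = np.empty(n + 1); x[0] = 1.0
v = np.empty(n + 1); v[0] = 0.2
for k in range(n):
    # volatility factor follows its own mean-reverting dynamics (assumed model)
    v[k + 1] = v[k] + 3.0 * (0.2 - v[k]) * dt + 0.3 * dWv[k]
    # observed process: assumed trend b(x) = 2*(1 - x), multiplicative stochastic volatility
    x[k + 1] = x[k] + 2.0 * (1.0 - x[k]) * dt + abs(v[k]) * x[k] * dWx[k]

def drift_hat(x0, path, dt, bandwidth=0.05):
    """Nadaraya-Watson-type drift estimate at x0: kernel-weighted increments divided by dt."""
    weights = np.exp(-0.5 * ((path[:-1] - x0) / bandwidth) ** 2)
    return np.sum(weights * np.diff(path)) / (dt * np.sum(weights))

for x0 in (0.8, 1.0, 1.2):
    print(x0, round(drift_hat(x0, x, dt), 3), "true:", 2.0 * (1.0 - x0))
```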

If an algorithm is to be counted as a practically working solution to a decision problem, then the algorithm must be verifiable in some constructed and ``trusted'' theory such as PA or ZF. In this paper, a class of decision problems related to inconsistency proofs for a general class of formal theories is used to demonstrate that under this constructibility restriction, an unformalizable yet arguably convincing argument can be given for the existence of decision problems for which there exists an explicitly constructible algorithm recognizing correct solutions in polynomial time, but for which there exists no explicitly constructible solution algorithm. While these results do not solve the P versus NP problem in the classical sense of supplying a proof one way or the other in a ``trusted'' formal theory, arguably they resolve the natural constructive version of the problem.

If the conclusion of a data analysis is sensitive to dropping very few data points, that conclusion might hinge on the particular data at hand rather than representing a more broadly applicable truth. How could we check whether this sensitivity holds? One idea is to consider every small subset of data, drop it from the dataset, and re-run our analysis. But running MCMC to approximate a Bayesian posterior is already very expensive; running multiple times is prohibitive, and the number of re-runs needed here is combinatorially large. Recent work proposes a fast and accurate approximation to find the worst-case dropped data subset, but that work was developed for problems based on estimating equations -- and does not directly handle Bayesian posterior approximations using MCMC. We make two principal contributions in the present work. We adapt the existing data-dropping approximation to estimators computed via MCMC. Observing that Monte Carlo errors induce variability in the approximation, we use a variant of the bootstrap to quantify this uncertainty. We demonstrate how to use our approximation in practice to determine whether there is non-robustness in a problem. Empirically, our method is accurate in simple models, such as linear regression. In models with complicated structure, such as hierarchical models, the performance of our method is mixed.
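
To make the underlying question concrete, the toy sketch below performs the brute-force version of the check for ordinary least squares, where refitting is cheap: drop each single observation, refit, and see whether the sign of a coefficient flips. The paper's contribution is precisely a fast approximation of this kind of check for estimators computed via MCMC; the single-point-dropping simplification, the data-generating process, and all names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x = rng.normal(size=n)
y = 0.1 * x + rng.normal(size=n)   # weak true effect, so the conclusion may be fragile

def ols_slope(x, y):
    """Slope coefficient of a simple linear regression with intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

full_slope = ols_slope(x, y)
# Drop each single observation, refit, and count sign flips of the slope.
dropped_slopes = np.array([
    ols_slope(np.delete(x, i), np.delete(y, i)) for i in range(n)
])
n_flips = int((np.sign(dropped_slopes) != np.sign(full_slope)).sum())
print(full_slope, n_flips)
```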

We use backward error analysis for differential equations to obtain modified or distorted equations describing the behaviour of the Newmark scheme applied to the transient structural dynamics equation. Based on the newly derived distorted equations, we give expressions for the numerically or algorithmically distorted stiffness and damping matrices of a system simulated using the Newmark scheme. Using these results, we show how to construct compensation terms from the original parameters of the system, which improve the performance of Newmark simulations. The required compensation terms turn out to be slight modifications to the original system parameters (e.g. the damping or stiffness matrices), and can be applied without changing the time step or modifying the scheme itself. Two such compensations are given: one eliminates numerical damping, while the other achieves fourth-order accurate calculations using the traditionally second-order Newmark method. The performance of both compensation methods is evaluated numerically to demonstrate their validity, and they are compared to the uncompensated Newmark method, the generalized-$\alpha$ method and the fourth-order Runge--Kutta scheme.
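
For reference, here is a minimal sketch of the standard, uncompensated Newmark step for M u'' + C u' + K u = f(t) with average-acceleration parameters (beta = 1/4, gamma = 1/2), i.e. the scheme whose damping and stiffness matrices the compensation terms would modify. The compensation itself is not shown; the toy single-degree-of-freedom system and all names are assumptions.

```python
import numpy as np

def newmark_step(M, C, K, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
    """Advance (displacement u, velocity v, acceleration a) by one step of size dt."""
    Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    rhs = (f_next
           + M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
           + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                  + dt * (gamma / (2 * beta) - 1.0) * a))
    u_new = np.linalg.solve(Keff, rhs)
    a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
    v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    return u_new, v_new, a_new

# Toy usage: lightly damped single-degree-of-freedom oscillator, free vibration.
M = np.array([[1.0]]); C = np.array([[0.02]]); K = np.array([[4.0]])
u, v = np.array([1.0]), np.array([0.0])
a = np.linalg.solve(M, -C @ v - K @ u)
for _ in range(1000):
    u, v, a = newmark_step(M, C, K, np.zeros(1), u, v, a, dt=0.01)
print(u)
```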

Stochastic gradient descent (SGD) is a workhorse algorithm for solving large-scale optimization problems in data science and machine learning. Understanding the convergence of SGD is hence of fundamental importance. In this work we examine the convergence of SGD (with various step sizes) when applied to unconstrained convex quadratic programming (essentially least-squares (LS) problems), and in particular analyze the error components with respect to the eigenvectors of the Hessian. The main message is that the convergence depends largely on the corresponding eigenvalues (singular values of the coefficient matrix in the LS context): the components associated with the large singular values converge faster in the initial phase. We then show that there is a phase transition in the convergence, after which the convergence speed of the components, especially those corresponding to the larger singular values, decreases. Finally, we show that the convergence rate of the overall error (in the solution) tends to decay as more iterations are run; that is, the initial convergence is faster than the asymptotic rate.
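
A small experiment in the spirit of this analysis (an assumed setup, not the paper's): run single-sample SGD on a least-squares problem with a spread of singular values and track the error components in the eigenbasis of the Hessian. The component along the largest eigenvalue is expected to shrink fastest in the early iterations.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 500, 20
A = rng.normal(size=(m, n)) @ np.diag(np.logspace(0, -2, n))  # spread the singular values
x_star = rng.normal(size=n)
b = A @ x_star                                                # consistent LS problem

eigval, eigvec = np.linalg.eigh(A.T @ A / m)   # Hessian eigenpairs, ascending eigenvalues
x = np.zeros(n)
step = 0.2
for it in range(1, 5001):
    i = rng.integers(m)
    x -= step * (A[i] @ x - b[i]) * A[i]       # single-sample stochastic gradient step
    if it in (100, 1000, 5000):
        err = eigvec.T @ (x - x_star)          # error components in the Hessian eigenbasis
        # err[-1]: component for the largest eigenvalue; err[0]: for the smallest
        print(it, np.abs(err[-1]), np.abs(err[0]))
```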

Fairness holds a pivotal role in the realm of machine learning, particularly when it comes to addressing groups categorised by protected attributes, e.g., gender, race. Prevailing algorithms in fair learning predominantly hinge on accessibility or estimation of these protected attributes, at least in the training process. We design a single group-blind projection map that aligns the feature distributions of both groups in the source data, achieving (demographic) group parity, without requiring values of the protected attribute for individual samples, either in the computation of the map or in its use. Instead, our approach utilises the feature distributions of the privileged and unprivileged groups in a broader population, together with the essential assumption that the source data are an unbiased representation of that population. We present numerical results on synthetic and real data.

We consider a class of stochastic gradient optimization schemes. Assuming that the objective function is strongly convex, we prove weak error estimates, uniform in time, for the error between the solution of the numerical scheme and the solutions of continuous-time modified (or high-resolution) differential equations at first and second order in the time-step size. At first order, the modified equation is deterministic, whereas at second order the modified equation is stochastic and depends on a modified objective function. We go beyond existing results, where the error estimates have been considered only on finite time intervals and were not uniform in time. This then allows us to provide a rigorous complexity analysis of the method in the large-time and small-time-step-size regimes.
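
As a hedged illustration of the first-order (deterministic) picture only: on a strongly convex least-squares objective, SGD iterates can be compared against the exact gradient flow x'(t) = -grad f(x), the kind of continuous-time approximation discussed above. The setup, step size, and names below are assumptions, and the second-order, stochastic modified equation is not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
m, n = 200, 5
A = rng.normal(size=(m, n))
b = A @ rng.normal(size=n) + 0.1 * rng.normal(size=m)
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimizer of f(x) = ||Ax - b||^2 / (2m)
H = A.T @ A / m                                  # Hessian of f

h = 0.05                                         # step size, also the flow time increment
x_sgd = np.zeros(n)
x0 = np.zeros(n)
for k in range(1, 201):
    i = rng.integers(m)
    x_sgd -= h * (A[i] @ x_sgd - b[i]) * A[i]    # one single-sample stochastic gradient step
    if k in (10, 100, 200):
        # exact gradient-flow state at time t = k*h for the quadratic objective
        x_flow = x_star + expm(-H * k * h) @ (x0 - x_star)
        print(k, np.linalg.norm(x_sgd - x_flow))
```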
