
Covariate-adjusted randomization (CAR) can reduce the risk of covariate imbalance and, when accounted for in the analysis, increase the power of a trial. Despite advances in CAR, stratified randomization remains the most common CAR method. Matched Randomization (MR) randomizes treatment assignment within optimally identified matched pairs based on covariates and a distance matrix. When participants enroll sequentially, Sequentially Matched Randomization (SMR) randomizes within matches found "on the fly" to meet a pre-specified matching threshold. However, pre-specifying the ideal threshold can be challenging, and SMR yields less optimal matches than MR. We extend SMR to allow multiple participants to be randomized simultaneously, to use a dynamic threshold, and to allow matches to break and re-form if a better match enrolls later (Sequential Rematched Randomization; SRR). In simplified settings and a real-world application, we assess whether these extensions improve covariate balance, estimator/study efficiency, and the optimality of matches. We also investigate whether adjusting for more covariates can be detrimental to covariate balance and efficiency, as is the case with traditional stratified randomization. As secondary objectives, we use the case study to compare SMR schemes side by side with common and related CAR schemes and to assess whether adjusting for covariates in the design can be as powerful as adjusting for covariates in a parametric model. We find that each SMR extension, individually and collectively, improves covariate balance, estimator efficiency, study power, and the quality of matches. We provide a case study in which CAR schemes with randomization-based inference can be as powerful as, or more powerful than, non-CAR schemes with parametric adjustment for covariates.
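
Below is a minimal, self-contained sketch of the core SMR idea: participants are matched on the fly against a reservoir of unmatched enrollees, a dynamic threshold (here, a running quantile of the best distances observed so far) decides whether to match, and treatment is randomized within each matched pair. The distance metric, threshold rule, and handling of leftover participants are illustrative assumptions, and the rematching (SRR) extension is not shown.

```python
# Illustrative sketch of sequentially matched randomization with a dynamic threshold.
import numpy as np

rng = np.random.default_rng(2024)

def smr_assign(covariates, initial_threshold=1.0, quantile=0.5):
    """Sequentially match arriving participants and randomize within pairs.

    covariates : (n, p) array, rows in order of enrollment.
    Returns an array of assignments (0/1).
    """
    n = covariates.shape[0]
    assignment = np.full(n, -1)          # -1 = not yet randomized
    reservoir = []                        # indices waiting for a match
    observed_distances = []               # drives the dynamic threshold
    threshold = initial_threshold

    for i in range(n):
        x = covariates[i]
        if reservoir:
            dists = np.linalg.norm(covariates[reservoir] - x, axis=1)
            j_best = int(np.argmin(dists))
            observed_distances.append(dists[j_best])
            # dynamic threshold: a running quantile of the best distances seen so far
            threshold = np.quantile(observed_distances, quantile)
            if dists[j_best] <= threshold:
                partner = reservoir.pop(j_best)
                # randomize treatment within the matched pair
                first = rng.integers(0, 2)
                assignment[partner], assignment[i] = first, 1 - first
                continue
        reservoir.append(i)

    # any leftover unmatched participants get a fair coin flip
    for i in reservoir:
        assignment[i] = rng.integers(0, 2)
    return assignment

X = rng.normal(size=(20, 3))              # toy covariates for 20 participants
print(smr_assign(X))
```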

Related Content

Previous researchers conducting Just-In-Time (JIT) defect prediction tasks have primarily focused on the performance of individual pre-trained models, without exploring the relationship between different pre-trained models as backbones. In this study, we build six models: RoBERTaJIT, CodeBERTJIT, BARTJIT, PLBARTJIT, GPT2JIT, and CodeGPTJIT, each with a distinct pre-trained model as its backbone. We systematically explore the differences and connections between these models. Specifically, we investigate the performance of the models when using Commit code and Commit message as inputs, as well as the relationship between training efficiency and model distribution among these six models. Additionally, we conduct an ablation experiment to explore the sensitivity of each model to its inputs. Furthermore, we investigate how the models perform in zero-shot and few-shot scenarios. Our findings indicate that each model based on a different backbone shows improvements, and that when the backbones' pre-training models are similar, the required training resources are much closer. We also observe that Commit code plays a significant role in defect detection, and that different pre-trained models demonstrate better defect detection ability on a balanced dataset under few-shot scenarios. These results provide new insights for optimizing JIT defect prediction tasks using pre-trained models and highlight the factors that require more attention when constructing such models. Additionally, CodeGPTJIT and GPT2JIT achieved better performance than DeepJIT and CC2Vec on the two datasets, respectively, with 2000 training samples. These findings emphasize the effectiveness of transformer-based pre-trained models in JIT defect prediction tasks, especially in scenarios with limited training data.
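
For readers unfamiliar with the setup, the sketch below shows one way a pre-trained model can serve as the backbone of a JIT defect prediction classifier, with the Commit message and Commit code passed as a text pair. It is not the authors' implementation of RoBERTaJIT and its siblings; the pooling, classification head, and input formatting are illustrative assumptions, assuming the Hugging Face `transformers` and `torch` packages.

```python
# Illustrative JIT defect classifier with a pre-trained backbone (not the paper's code).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class JITDefectClassifier(nn.Module):
    def __init__(self, backbone_name="roberta-base"):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(backbone_name)
        hidden = self.backbone.config.hidden_size
        self.head = nn.Sequential(nn.Linear(hidden, 128), nn.ReLU(),
                                  nn.Linear(128, 2))     # clean vs. defective

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)
        # mean-pool token embeddings over the non-padding positions
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1)
        return self.head(pooled)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = JITDefectClassifier("roberta-base")
model.eval()

commit_message = "fix null check in session handler"
commit_code = "if (session == null) { return; }"
batch = tokenizer(commit_message, commit_code, truncation=True,
                  padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)   # torch.Size([1, 2])
```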

Stochastic optimization methods have been hugely successful in making large-scale optimization problems feasible when computing the full gradient is computationally prohibitive. Using the theory of modified equations for numerical integrators, we propose a class of stochastic differential equations that approximate the dynamics of general stochastic optimization methods more closely than the original gradient flow. Analyzing a modified stochastic differential equation can reveal qualitative insights about the associated optimization method. Here, we study mean-square stability of the modified equation in the case of stochastic coordinate descent.
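
As a concrete reference point, the following sketch runs plain stochastic coordinate descent on a quadratic objective, the kind of discrete iteration whose continuous-time (modified SDE) approximation is analysed here. The objective, step size, and uniform coordinate sampling are illustrative choices, not the paper's.

```python
# Stochastic coordinate descent on f(x) = 0.5 * x^T A x - b^T x (illustrative).
import numpy as np

rng = np.random.default_rng(0)

d = 5
A = rng.normal(size=(d, d))
A = A @ A.T + d * np.eye(d)          # symmetric positive definite
b = rng.normal(size=d)
x_star = np.linalg.solve(A, b)        # exact minimiser for reference

x = np.zeros(d)
h = 0.05                              # step size
for k in range(5000):
    i = rng.integers(d)               # pick one coordinate uniformly at random
    grad_i = A[i] @ x - b[i]          # partial derivative along coordinate i
    x[i] -= h * grad_i                # update only that coordinate

print("distance to minimiser:", np.linalg.norm(x - x_star))
```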

We provide a statistical analysis of a tool in nonlinear-type time-frequency analysis, the synchrosqueezing transform (SST), for both the null and non-null cases. The intricate nonlinear interaction of different quantities in SST is quantified by carefully analyzing relevant multivariate complex Gaussian random variables. Specifically, we provide the quotient distribution of dependent and improper complex Gaussian random variables. Then, a central limit theorem result for SST is established. As an example, we provide a block bootstrap scheme based on the established SST theory to test if a given time series contains oscillatory components.
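
To fix ideas, here is a generic moving-block bootstrap, the resampling ingredient behind a test like the one described above. The paper's test statistic is based on the SST; the statistic used below (the largest periodogram ordinate) is only a placeholder so the example runs end to end.

```python
# Generic moving-block bootstrap with a placeholder (non-SST) test statistic.
import numpy as np

rng = np.random.default_rng(1)

def moving_block_bootstrap(x, block_length, rng):
    """Resample a series by concatenating randomly chosen contiguous blocks."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_length))
    starts = rng.integers(0, n - block_length + 1, size=n_blocks)
    blocks = [x[s:s + block_length] for s in starts]
    return np.concatenate(blocks)[:n]

def test_statistic(x):
    """Placeholder statistic: largest periodogram ordinate (not the SST-based one)."""
    return np.max(np.abs(np.fft.rfft(x - x.mean())) ** 2) / len(x)

# toy data: noise plus a weak oscillation
n = 512
t = np.arange(n)
x = 0.5 * np.sin(2 * np.pi * 0.05 * t) + rng.normal(size=n)

observed = test_statistic(x)
boot = np.array([test_statistic(moving_block_bootstrap(x, 32, rng))
                 for _ in range(500)])
p_value = np.mean(boot >= observed)
print(f"observed statistic {observed:.2f}, bootstrap p-value {p_value:.3f}")
```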

Recently, a stability theory has been developed to study the linear stability of modified Patankar--Runge--Kutta (MPRK) schemes. This stability theory provides sufficient conditions for a fixed point of an MPRK scheme to be stable as well as for the convergence of an MPRK scheme towards the steady state of the corresponding initial value problem, where the main assumption is that the initial value is sufficiently close to the steady state. Initially, numerical experiments in several publications indicated that these linear stability properties are not only local but even global, as is the case for general linear methods. Recently, however, it was discovered that the linear stability of the MPDeC(8) scheme is indeed only local in nature. Our conjecture is that this is a result of the negative Runge--Kutta (RK) parameters of MPDeC(8) and that linear stability is indeed global if the RK parameters are nonnegative. To support this conjecture, we examine the family of MPRK22($\alpha$) methods with negative RK parameters and show that even among these methods there are some for which the stability properties are only local. However, this local linear stability is not observed for MPRK22($\alpha$) schemes with nonnegative Runge--Kutta parameters.
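
For concreteness, the sketch below applies the MPRK22($\alpha$) scheme to a standard linear production-destruction test problem. The stage formulas follow our reading of the Kopecz--Meister construction (first stage Patankar-weighted by $y^n$, second stage weighted by $\sigma_i=(y^{(1)}_i)^{1/\alpha}(y^n_i)^{1-1/\alpha}$ with $b_2=1/(2\alpha)$, $b_1=1-b_2$); consult the original references before relying on the details.

```python
# Illustrative MPRK22(alpha) step for a production-destruction system (formulas from memory).
import numpy as np

def mprk22_step(y, h, P, alpha):
    """One MPRK22(alpha) step for y_i' = sum_j (p_ij(y) - d_ij(y)), with d_ij = p_ji.

    P(y) returns the production matrix with entries p_ij(y) >= 0.
    """
    n = len(y)
    b2 = 1.0 / (2.0 * alpha)
    b1 = 1.0 - b2

    def solve_stage(prod, weights, dt):
        # assemble the linear system (I - dt*M) u = y for the Patankar-weighted stage
        M = prod / weights[np.newaxis, :]              # production terms
        M -= np.diag(prod.sum(axis=0) / weights)       # destruction terms (d_ij = p_ji)
        return np.linalg.solve(np.eye(n) - dt * M, y)

    P1 = P(y)
    y1 = solve_stage(P1, y, alpha * h)                  # first stage
    sigma = y1 ** (1.0 / alpha) * y ** (1.0 - 1.0 / alpha)
    y_new = solve_stage(b1 * P1 + b2 * P(y1), sigma, h) # second stage
    return y_new

# linear test problem: y1' = y2 - 5*y1, y2' = 5*y1 - y2
def P(y):
    return np.array([[0.0, y[1]],
                     [5.0 * y[0], 0.0]])

y = np.array([0.9, 0.1])
for _ in range(100):
    y = mprk22_step(y, 0.25, P, alpha=1.0)
print(y, "steady state:", [1.0 / 6.0, 5.0 / 6.0])
```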

Binary responses arise in a multitude of statistical problems, including binary classification, bioassay, current status data problems and sensitivity estimation. There has been an interest in such problems in the Bayesian nonparametrics community since the early 1970s, but inference given binary data is intractable for a wide range of modern simulation-based models, even when employing MCMC methods. Recently, Christensen (2023) introduced a novel simulation technique based on counting permutations, which can estimate both posterior distributions and marginal likelihoods for any model from which a random sample can be generated. However, the accompanying implementation of this technique struggles when the sample size is too large (n > 250). Here we present perms, a new implementation of said technique which is substantially faster and able to handle larger data problems than the original implementation. It is available both as an R package and a Python library. The basic usage of perms is illustrated via two simple examples: a tractable toy problem and a bioassay problem. A more complex example involving changepoint analysis is also considered. We also cover the details of the implementation and illustrate the computational speed gain of perms via a simple simulation study.

The non-identifiability of the competing risks model requires researchers to work with restrictions on the model to obtain informative results. We present a new identifiability solution based on an exclusion restriction. Many areas of applied research use methods that rely on exclusion restrictions, so it appears natural to also use them for the identifiability of competing risks models. By imposing the exclusion restriction coupled with an Archimedean copula, we are able to avoid any parametric restriction on the marginal distributions. We introduce a semiparametric estimation approach for the nonparametric marginals and the parametric copula. Our simulation results demonstrate the usefulness of the suggested model, as the degree of risk dependence can be estimated without parametric restrictions on the marginal distributions.
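
As an illustration of the data-generating mechanism, the sketch below simulates competing-risks data whose latent risk times are coupled through a Clayton (Archimedean) copula, with a binary covariate that shifts the hazard of one risk only, i.e. an exclusion restriction. The exponential marginals, copula family, and effect sizes are illustrative assumptions rather than the paper's specification.

```python
# Competing risks with Clayton-dependent latent times and an exclusion restriction (illustrative).
import numpy as np

rng = np.random.default_rng(7)

def clayton_pair(theta, size, rng):
    """Sample (U1, U2) from a Clayton copula via conditional inversion."""
    u1 = rng.uniform(size=size)
    v = rng.uniform(size=size)
    u2 = (u1 ** (-theta) * (v ** (-theta / (theta + 1.0)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u1, u2

n, theta = 5000, 2.0                      # theta > 0: positive dependence
z = rng.binomial(1, 0.5, size=n)          # covariate subject to the exclusion restriction
u1, u2 = clayton_pair(theta, n, rng)

# exclusion restriction: z shifts the hazard of risk 1 only
t1 = -np.log(u1) / (0.5 * np.exp(0.8 * z))   # exponential marginal, rate depends on z
t2 = -np.log(u2) / 0.7                        # exponential marginal, no z effect

time = np.minimum(t1, t2)                 # observed event time
cause = np.where(t1 <= t2, 1, 2)          # observed cause of failure

for zz in (0, 1):
    share = np.mean(cause[z == zz] == 1)
    print(f"z={zz}: share failing from risk 1 = {share:.3f}")
```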

In this paper, efficient alternating direction implicit (ADI) schemes are proposed to solve three-dimensional heat equations with irregular boundaries and interfaces. Starting from the well-known Douglas-Gunn ADI scheme, a modified ADI scheme is constructed to mitigate the loss of accuracy when solving problems with time-dependent boundary conditions. The unconditional stability of the new ADI scheme is also rigorously proven with Fourier analysis. Then, by combining the ADI schemes with a 1D kernel-free boundary integral (KFBI) method, KFBI-ADI schemes are developed to solve the heat equation with irregular boundaries. In the 1D sub-problems of the KFBI-ADI schemes, the KFBI discretization takes advantage of the Cartesian grid and preserves the structure of the coefficient matrix so that the fast Thomas algorithm can be applied to solve the linear system efficiently. Second-order accuracy and unconditional stability of the KFBI-ADI schemes are verified through several numerical tests for both the heat equation and a reaction-diffusion equation. For the Stefan problem, which is a free boundary problem for the heat equation, a level set method is incorporated into the ADI method to capture the time-dependent interface. Numerical examples simulating 3D dendritic solidification phenomena are also presented.
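
The efficiency claim rests on the 1D sub-problems reducing to tridiagonal solves. The sketch below shows the textbook Thomas algorithm together with one implicit 1D heat sub-step; it is not the paper's KFBI-ADI code, and the boundary treatment is simplified.

```python
# Thomas algorithm for tridiagonal systems, applied to one implicit 1D heat sub-step.
import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal system A x = rhs in O(n).

    lower : sub-diagonal, length n (lower[0] unused)
    diag  : main diagonal, length n
    upper : super-diagonal, length n (upper[-1] unused)
    """
    n = len(diag)
    c = np.array(upper, dtype=float)
    d = np.array(rhs, dtype=float)
    b = np.array(diag, dtype=float)

    # forward elimination
    for i in range(1, n):
        w = lower[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]

    # back substitution
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# example: one implicit 1D sub-step (I - r*D2) u_new = u_old along a grid line
n, r = 50, 0.5
u_old = np.sin(np.pi * np.linspace(0, 1, n)) ** 2
lower = np.full(n, -r); upper = np.full(n, -r); diag = np.full(n, 1 + 2 * r)
u_new = thomas_solve(lower, diag, upper, u_old)
print(u_new[:5])
```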

A standard approach to solving ordinary differential equations, when they describe dynamical systems, is to adopt a Runge-Kutta or related scheme. Such schemes, however, are not applicable to the large class of equations that do not constitute dynamical systems. First, in several physical systems we encounter integro-differential equations with memory terms, where the time derivative of a state variable at a given time depends on all past states of the system. Second, there are equations whose solutions do not have well-defined Taylor series expansions. The Maxey-Riley-Gatignol equation, which describes the dynamics of an inertial particle in nonuniform and unsteady flow, presents both challenges. We use it as a test bed to address the questions we raise, but our method may be applied to all equations of this class. We show that the Maxey-Riley-Gatignol equation can be embedded into an extended Markovian system constructed by introducing a new co-evolving dynamical state variable that encodes the memory of past states. We develop a Runge-Kutta algorithm for the resulting Markovian system. The form of the kernels involved in deriving the Runge-Kutta scheme necessitates the use of an expansion in powers of $t^{1/2}$. Our approach naturally inherits the benefits of standard time-integrators, namely a constant memory storage cost, a linear growth of operational effort with simulation time, and the ability to restart a simulation with the final state as the new initial condition.
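
The following toy example illustrates the general idea of trading a memory integral for a co-evolving auxiliary variable, using an exponential kernel for which the Markovian embedding is exact and a standard RK4 integrator suffices. The Maxey-Riley-Gatignol kernel ($\sim t^{-1/2}$) requires the more elaborate embedding and half-power expansion described above; nothing in this sketch is specific to that equation.

```python
# Toy Markovian embedding of a memory term with an exponential kernel.
import numpy as np

# memory equation: x'(t) = -x(t) + integral_0^t exp(-(t-s)) * x(s) ds
# define m(t) = integral_0^t exp(-(t-s)) x(s) ds  =>  m' = -m + x, with m(0) = 0
def rhs(state):
    x, m = state
    return np.array([-x + m, -m + x])

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * h * k1)
    k3 = rhs(state + 0.5 * h * k2)
    k4 = rhs(state + h * k3)
    return state + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 0.0])      # x(0) = 1, memory starts empty
h, T = 0.01, 5.0
for _ in range(int(T / h)):
    state = rk4_step(state, h)
print("x(5) =", state[0])
```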

Hawkes processes are often applied to model dependence and interaction phenomena in multivariate event data sets, such as neuronal spike trains, social interactions, and financial transactions. In the nonparametric setting, learning the temporal dependence structure of Hawkes processes is generally a computationally expensive task, all the more so with Bayesian estimation methods. In particular, for generalised nonlinear Hawkes processes, Markov chain Monte Carlo methods applied to compute the doubly intractable posterior distribution are not scalable to high-dimensional processes in practice. Recently, efficient algorithms targeting a mean-field variational approximation of the posterior distribution have been proposed. In this work, we first unify existing variational Bayes approaches under a general nonparametric inference framework and analyse the asymptotic properties of these methods under easily verifiable conditions on the prior, the variational class, and the nonlinear model. Second, we propose a novel sparsity-inducing procedure and derive an adaptive mean-field variational algorithm for the popular sigmoid Hawkes processes. Our algorithm is parallelisable and therefore computationally efficient in high-dimensional settings. Through an extensive set of numerical simulations, we also demonstrate that our procedure is able to adapt to the dimensionality of the parameter of the Hawkes process and is partially robust to some types of model misspecification.
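
To make the model concrete, the sketch below simulates a univariate sigmoid (nonlinear) Hawkes process with an exponential kernel via Ogata's thinning algorithm. Only the data-generating side is shown; the variational Bayes estimation procedure described above is not, and the parameter values are illustrative.

```python
# Simulation of a univariate sigmoid Hawkes process via Ogata's thinning.
import numpy as np

rng = np.random.default_rng(3)

lam_max = 2.0                     # the sigmoid link is bounded, so this rate dominates
mu, w, beta = 0.0, 1.5, 2.0       # baseline drive, self-excitation weight, kernel decay

def intensity(t, events):
    """Sigmoid Hawkes intensity: lam_max * sigmoid(mu + sum of exponential kernels)."""
    past = events[events < t]
    drive = mu + w * np.sum(np.exp(-beta * (t - past)))
    return lam_max / (1.0 + np.exp(-drive))

def simulate(T):
    events = np.array([])
    t = 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)                  # candidate from dominating rate
        if t > T:
            break
        if rng.uniform() < intensity(t, events) / lam_max:   # thinning (accept/reject) step
            events = np.append(events, t)
    return events

events = simulate(T=100.0)
print(f"{len(events)} events on [0, 100], empirical rate {len(events) / 100.0:.2f}")
```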

Fair calibration is a widely desirable fairness criterion in risk prediction contexts. One way to measure and achieve fair calibration is with multicalibration. Multicalibration constrains calibration error among flexibly defined subpopulations while maintaining overall calibration. However, multicalibrated models can exhibit a higher percent calibration error among groups with lower base rates than among groups with higher base rates. As a result, it is possible for a decision-maker to learn to trust or distrust model predictions for specific groups. To alleviate this, we propose \emph{proportional multicalibration}, a criterion that constrains the percent calibration error among groups and within prediction bins. We prove that satisfying proportional multicalibration bounds a model's multicalibration as well as its \emph{differential calibration}, a fairness criterion that directly measures how closely a model approximates sufficiency. Therefore, proportionally calibrated models limit the ability of decision makers to distinguish between model performance on different patient groups, which may make the models more trustworthy in practice. We provide an efficient algorithm for post-processing risk prediction models for proportional multicalibration and evaluate it empirically. We conduct simulation studies and investigate a real-world application of PMC post-processing to the prediction of emergency department patient admissions. We observe that proportional multicalibration is a promising criterion for controlling simultaneous measures of calibration fairness of a model over intersectional groups with virtually no cost in terms of classification performance.
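
The quantity being constrained can be made concrete with a small computation: within each prediction bin and group, measure the calibration error as a fraction of the bin's mean prediction. The sketch below does exactly that on synthetic data with two groups of different base rates; the binning scheme and data-generating process are illustrative, and this is not the paper's PMC post-processing algorithm.

```python
# Measuring percent (proportional) calibration error per group and prediction bin.
import numpy as np

rng = np.random.default_rng(11)

n = 20000
group = rng.binomial(1, 0.3, size=n)                  # two subpopulations
# group 1 has a lower base rate than group 0
true_p = np.where(group == 1, 0.1, 0.4) * rng.uniform(0.5, 1.5, size=n)
true_p = np.clip(true_p, 0.01, 0.99)
y = rng.binomial(1, true_p)
pred = np.clip(true_p + rng.normal(0, 0.05, size=n), 0.01, 0.99)   # a noisy model

bins = np.linspace(0, 1, 11)                          # 10 prediction bins
bin_id = np.digitize(pred, bins) - 1

for g in (0, 1):
    errs = []
    for b in range(10):
        idx = (group == g) & (bin_id == b)
        if idx.sum() < 50:                            # skip sparse group-bin cells
            continue
        abs_err = abs(y[idx].mean() - pred[idx].mean())
        errs.append(abs_err / pred[idx].mean())       # error relative to the bin's mean prediction
    print(f"group {g}: mean proportional calibration error {100 * np.mean(errs):.1f}%")
```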
