
Stepped wedge cluster randomized trials (SW-CRTs), a popular design choice in public health and implementation science research, are randomized trials in which clusters are progressively transitioned from control to intervention, with the timing of each cluster's transition randomized. An important task at the design stage is to ensure that the planned trial has sufficient power to detect a clinically meaningful effect size. While methods for determining study power are well developed for SW-CRTs with continuous and binary outcomes, few methods are available for power calculation in SW-CRTs with censored time-to-event outcomes. In this article, we propose a stratified marginal Cox model to account for secular trend in cross-sectional SW-CRTs and derive an explicit expression for the robust sandwich variance, which enables power calculations without computationally intensive simulations. Power formulas based on both the Wald and robust score tests are developed and compared via simulation, generally demonstrating the superiority of the robust score procedure across different finite-sample scenarios. Finally, we illustrate our methods using an SW-CRT testing the effect of a new electronic reminder system on time to catheter removal in hospital settings. We also offer an R Shiny application to facilitate sample size and power calculations using the proposed methods.
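
For a rough sense of how such a design-stage calculation proceeds, the sketch below (not the paper's implementation) applies the standard normal-approximation power formula for a two-sided Wald test, treating the robust sandwich variance of the estimated log hazard ratio as a quantity already supplied by the design; the derivation of that variance is the paper's contribution and is not reproduced here.

```python
# A minimal sketch (not the paper's implementation) of the normal-approximation
# power of a two-sided Wald test for H0: log hazard ratio = 0. The robust
# sandwich variance of the estimated log hazard ratio is assumed to be supplied
# by the design; its derivation is not shown.
import math
from scipy.stats import norm

def wald_power(log_hr, robust_var, alpha=0.05):
    """Approximate power for detecting the effect size `log_hr` when the
    treatment-effect estimator has variance `robust_var`."""
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(abs(log_hr) / math.sqrt(robust_var) - z_crit)

# Example: hazard ratio 0.7 with a hypothetical design-based robust variance of 0.01
print(wald_power(math.log(0.7), 0.01))   # roughly 0.95
```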

Related Content

Multi-product formulas (MPFs) are linear combinations of Trotter circuits that offer high-quality simulation of Hamiltonian time evolution with fewer Trotter steps. Here we report two contributions aimed at making multi-product formulas more viable for near-term quantum simulations. First, we extend the theory of Trotter error with commutator scaling developed by Childs, Su, Tran et al. to multi-product formulas. Our result implies that multi-product formulas can achieve a quadratic reduction of Trotter error in 1-norm (nuclear norm) on arbitrary time intervals compared with regular product formulas, without increasing the required circuit depth or qubit connectivity. The number of circuit repetitions grows only by a constant factor. Second, we introduce dynamic multi-product formulas with time-dependent coefficients chosen to minimize a certain efficiently computable proxy for the Trotter error. We use a minimax estimation method to make dynamic multi-product formulas robust to uncertainty from algorithmic errors, sampling, and hardware noise. We call this method Minimax MPF and provide a rigorous bound on its error.
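
As a toy, dense-matrix illustration of the multi-product idea (not the circuit-level construction studied in the paper), the sketch below combines powers of the symmetric second-order Trotter formula for a random two-term Hamiltonian, using the standard extrapolation coefficients for symmetric base formulas, and compares the resulting operator error against plain Trotterization.

```python
# A toy, dense-matrix illustration (not the paper's circuit-level construction)
# of a multi-product formula built from the symmetric second-order Trotter
# formula S2(t) for H = A + B, using the standard extrapolation coefficients
# c_j = prod_{i != j} k_j^2 / (k_j^2 - k_i^2) for symmetric base formulas.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(d):
    M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (M + M.conj().T) / 2

d, t = 4, 0.5
A, B = random_hermitian(d), random_hermitian(d)

def strang(dt):
    """Symmetric second-order Trotter step for exp(-i (A + B) dt)."""
    return expm(-1j * A * dt / 2) @ expm(-1j * B * dt) @ expm(-1j * A * dt / 2)

ks = [1, 2, 3]
cs = [np.prod([kj**2 / (kj**2 - ki**2) for ki in ks if ki != kj]) for kj in ks]

exact = expm(-1j * (A + B) * t)
trotter = np.linalg.matrix_power(strang(t / ks[-1]), ks[-1])
mpf = sum(c * np.linalg.matrix_power(strang(t / k), k) for c, k in zip(cs, ks))

print("Trotter error:", np.linalg.norm(trotter - exact, 2))
print("MPF error:    ", np.linalg.norm(mpf - exact, 2))
```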

Many products in engineering are highly reliable, with large mean lifetimes to failure. Performing life-tests under normal operating conditions would thus require long experimentation times and high experimentation costs. Alternatively, accelerated life-tests shorten the experimentation time by running the tests at higher-than-normal stress conditions, thus inducing more failures. Additionally, a log-linear regression model can be used to relate the lifetime distribution of the product to the level of stress it experiences. After estimating the parameters of this relationship, results can be extrapolated to normal operating conditions. On the other hand, censored data are common in reliability analysis. Interval-censored data arise when continuous inspection is difficult or infeasible due to technical or budgetary constraints. In this paper, we develop robust restricted estimators based on the density power divergence for step-stress accelerated life-tests under Weibull distributions with interval-censored data. We present theoretical asymptotic properties of the estimators and develop robust Rao-type test statistics, based on the proposed robust estimators, for testing composite null hypotheses on the model parameters.
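
To make the estimation principle concrete, the following sketch fits a Weibull model to interval-censored counts by minimizing the density power divergence for grouped data; it uses a single hypothetical constant-stress inspection schedule rather than the paper's full step-stress model, and the restricted estimation and Rao-type testing machinery is omitted.

```python
# A minimal sketch (hypothetical data, single constant-stress toy case, not the
# paper's step-stress model) of a minimum density power divergence fit to
# interval-censored Weibull data. The tuning parameter alpha > 0 controls the
# robustness/efficiency trade-off (small alpha is close to maximum likelihood).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

inspection_times = np.array([0.5, 1.0, 2.0, 4.0])   # hypothetical inspection schedule
counts = np.array([3, 9, 14, 10, 4])                # failures per interval + survivors
p_hat = counts / counts.sum()

def cell_probs(theta):
    shape, scale = np.exp(theta)                    # log-parametrization keeps both > 0
    cdf = weibull_min.cdf(inspection_times, c=shape, scale=scale)
    return np.diff(np.concatenate(([0.0], cdf, [1.0])))

def dpd_objective(theta, alpha=0.5):
    pi = np.clip(cell_probs(theta), 1e-12, 1.0)
    return np.sum(pi**(1 + alpha) - (1 + 1 / alpha) * p_hat * pi**alpha)

fit = minimize(dpd_objective, x0=np.log([1.0, 1.5]), method="Nelder-Mead")
print("MDPDE (shape, scale):", np.exp(fit.x))
```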

Accelerated life-tests (ALTs) are used for inferring lifetime characteristics of highly reliable products. In particular, step-stress ALTs increase the stress level to which units under test are subjected at certain pre-fixed times, thus accelerating the product's wear and inducing its failure. In some cases, due to cost or product-nature constraints, continuous monitoring of devices is infeasible, so the units are inspected for failures at particular inspection time points. In such a setup, the ALT response is interval-censored. Furthermore, when a test unit fails, there is often more than one fatal cause of failure, known as competing risks. In this paper, we assume that all competing risks are independent and follow exponential distributions with scale parameters depending on the stress level. Under this setup, we present a family of robust estimators based on the density power divergence, including the classical maximum likelihood estimator (MLE) as a particular case. We derive asymptotic and robustness properties of the Minimum Density Power Divergence Estimator (MDPDE), showing its consistency for large samples. Based on these MDPDEs, estimates of the lifetime characteristics of the product, as well as of cause-specific lifetime characteristics, are then developed. Direct asymptotic, transformed, and bootstrap confidence intervals for the mean lifetime to failure, the reliability at a mission time, and the distribution quantiles are proposed, and their performance is compared through Monte Carlo simulations. Moreover, the performance of the MDPDE family is examined through an extensive numerical study, and the methods of inference discussed here are finally illustrated with a real-data example concerning electronic devices.
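
For orientation, under independent exponential competing risks with rates λ_1, ..., λ_K at a fixed stress level, the lifetime characteristics estimated in the paper take simple closed forms; the block below states these standard facts only as context, not as results of the paper.

```latex
% Standard closed forms under independent exponential competing risks with
% rates \lambda_1,\dots,\lambda_K at a fixed stress level (for orientation only):
\[
  R(t) = \exp\!\Big(-t\sum_{k=1}^{K}\lambda_k\Big), \qquad
  \mathrm{E}[T] = \frac{1}{\sum_{k=1}^{K}\lambda_k}, \qquad
  Q(p) = \frac{-\log(1-p)}{\sum_{k=1}^{K}\lambda_k},
\]
\[
  \Pr(\text{failure is due to cause } k) = \frac{\lambda_k}{\sum_{j=1}^{K}\lambda_j}.
\]
```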

Accelerated life tests (ALTs) play a crucial role in reliability analyses, providing lifetime estimates of highly reliable products. Among ALTs, the step-stress design increases the stress level at predefined times while maintaining a constant stress level between successive changes. This approach accelerates the occurrence of failures, reducing experimental duration and cost. While many studies assume a specific form for the lifetime distribution, in certain applications a general form satisfying certain properties may instead be preferred. The proportional hazards model assumes that the applied stresses act multiplicatively on the hazard rate, so the hazard function can be factored into two terms, one representing the effect of the stress and the other representing the baseline hazard. In this work we examine two particular forms of the baseline hazard, namely linear and quadratic. Moreover, certain experiments face practical constraints that make continuous monitoring of devices infeasible. Instead, devices under test are inspected at predetermined intervals, leading to interval-censored data. On the other hand, recent works have shown an appealing trade-off between the efficiency and robustness of divergence-based estimators. This paper introduces the step-stress ALT model under proportional hazards and presents a robust family of minimum density power divergence estimators (MDPDEs) for estimating device reliability and related lifetime characteristics such as the mean lifetime and distributional quantiles. The asymptotic distributions of these estimators are derived, providing approximate confidence intervals. Empirical evaluations through Monte Carlo simulations demonstrate their performance in terms of robustness and efficiency. Finally, an illustrative example demonstrates the usefulness of the model and the associated methods.
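
As a small illustration of the quantities being estimated, the sketch below (with hypothetical parameter values, not fitted ones) evaluates the reliability at a mission time under a proportional hazards model with a linear or quadratic baseline hazard.

```python
# A minimal sketch (hypothetical parameter values) of reliability under a
# proportional hazards model h(t|x) = h0(t) * exp(beta * x) with a linear or
# quadratic baseline hazard, as considered in the abstract above.
import numpy as np

def reliability(t, x, beta, coefs):
    """R(t|x) = exp(-H0(t) * exp(beta * x)), where coefs = (a, b) or (a, b, c)
    give h0(t) = a + b*t (+ c*t**2), so H0(t) = a*t + b*t**2/2 (+ c*t**3/3)."""
    powers = np.arange(1, len(coefs) + 1)
    H0 = np.sum(np.asarray(coefs) * t**powers / powers)
    return np.exp(-H0 * np.exp(beta * x))

# Mission-time reliability at two stress levels for a linear baseline hazard
print(reliability(t=10.0, x=0.0, beta=0.8, coefs=(0.01, 0.002)),
      reliability(t=10.0, x=1.0, beta=0.8, coefs=(0.01, 0.002)))
```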

We present a family of policies that, integrated within a runtime task scheduler (Nanox), pursue the goal of improving the energy efficiency of task-parallel executions with no intervention from the programmer. The proposed policies tackle the problem by modifying the core operating frequency via DVFS mechanisms, or by enabling/disabling the mapping of tasks to specific cores at selected execution points, depending on the internal status of the scheduler. Experimental results on an asymmetric SoC (Exynos 5422) for a specific operation (Cholesky factorization) reveal gains of up to 29% in energy efficiency and considerable reductions in average power.
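
The snippet below is a deliberately simplified, hypothetical sketch of the kind of scheduler-state-driven DVFS decision described above; the thresholds, frequency steps, and function names are illustrative only and are not the Nanox policies evaluated in the paper.

```python
# A hypothetical sketch of a scheduler-state-driven DVFS policy: choose a
# frequency for the big-core cluster from the ready-task count. Thresholds and
# frequency steps are illustrative, not the paper's Nanox policies.
BIG_CLUSTER_FREQS_MHZ = [800, 1400, 2000]   # assumed available frequency steps

def select_big_cluster_freq(ready_tasks, busy_big_cores, total_big_cores):
    if ready_tasks == 0 and busy_big_cores < total_big_cores:
        return BIG_CLUSTER_FREQS_MHZ[0]      # cores starving: save energy
    if ready_tasks < total_big_cores:
        return BIG_CLUSTER_FREQS_MHZ[1]      # moderate load: intermediate step
    return BIG_CLUSTER_FREQS_MHZ[-1]         # backlog of tasks: full speed

print(select_big_cluster_freq(ready_tasks=0, busy_big_cores=1, total_big_cores=4))
```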

We provide a new theoretical framework for variable-step deferred correction (DC) methods based on the well-known BDF2 formula. Using discrete orthogonal convolution kernels, some high-order BDF2-DC methods are proven to be stable on arbitrary time grids according to a recent definition of stability (SINUM, 60: 2253-2272). This significantly relaxes the existing step-ratio restrictions for the BDF2-DC methods (BIT, 62: 1789-1822). The associated sharp error estimates are established by taking the numerical effects of the starting approximations into account, and they suggest that the BDF2-DC methods have no aftereffect; that is, the lower-order starting scheme for the BDF2 scheme will not cause a loss of accuracy in the high-order BDF2-DC methods. Extensive tests on graded and random time meshes are presented to support the new theory.
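
For concreteness, the sketch below integrates the linear test equation with the variable-step BDF2 formula on a random time mesh, using an implicit-Euler (BDF1) starting step; the deferred-correction stage analyzed in the paper is omitted.

```python
# A minimal sketch (linear test equation, BDF1 start, no deferred-correction
# stage) of the variable-step BDF2 formula that the BDF2-DC methods build on.
import numpy as np

lam = -2.0                                   # test equation u' = lam * u, u(0) = 1
rng = np.random.default_rng(1)
taus = 0.05 * (0.5 + rng.random(200))        # random step sizes
t = np.concatenate(([0.0], np.cumsum(taus)))

u = np.empty(len(t)); u[0] = 1.0
u[1] = u[0] / (1.0 - lam * taus[0])          # BDF1 (implicit Euler) starting step
for n in range(2, len(t)):
    tau, r = taus[n - 1], taus[n - 1] / taus[n - 2]   # step ratio r_n = tau_n / tau_{n-1}
    a0 = (1 + 2 * r) / (1 + r)
    a1 = -(1 + r)
    a2 = r**2 / (1 + r)
    # variable-step BDF2: (a0*u_n + a1*u_{n-1} + a2*u_{n-2}) / tau_n = lam * u_n
    u[n] = -(a1 * u[n - 1] + a2 * u[n - 2]) / (a0 - tau * lam)

print(np.max(np.abs(u - np.exp(lam * t))))   # error against the exact solution
```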

Computationally efficient surrogates for parametrized physical models play a crucial role in science and engineering. Operator learning provides data-driven surrogates that map between function spaces. However, the available data are often not full-field measurements but only finite-dimensional parametrizations of model inputs or finite observables of model outputs. Building on Fourier Neural Operators, this paper introduces the Fourier Neural Mappings (FNMs) framework, which accommodates such finite-dimensional inputs and outputs, and develops universal approximation theorems for the method. Moreover, in many applications the underlying parameter-to-observable (PtO) map is defined implicitly through an infinite-dimensional operator, such as the solution operator of a partial differential equation. A natural question is whether it is more data-efficient to learn the PtO map end-to-end or to first learn the solution operator and subsequently compute the observable from the full-field solution. A theoretical analysis of Bayesian nonparametric regression of linear functionals, which is of independent interest, suggests that the end-to-end approach can actually have worse sample complexity. Extending beyond the theory, numerical results for the FNM approximation of three nonlinear PtO maps demonstrate the benefits of the operator learning perspective that this paper adopts.
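
The following PyTorch sketch (an assumption-laden toy, not the authors' implementation) illustrates the FNM idea in one dimension: a finite-dimensional input is lifted to a latent function on a grid, processed by a Fourier layer as in Fourier Neural Operators, and mapped back to a finite vector of observables.

```python
# A PyTorch toy sketch (assumptions throughout; not the authors' implementation)
# of the FNM idea in 1D: lift a finite-dimensional input to a latent function
# on a grid, apply a Fourier layer, and read out a finite vector of observables.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralConv1d(nn.Module):
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        self.weight = nn.Parameter(
            torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels)

    def forward(self, v):                              # v: (batch, channels, grid)
        v_ft = torch.fft.rfft(v)
        out_ft = torch.zeros_like(v_ft)
        out_ft[..., :self.modes] = torch.einsum(
            "bim,iom->bom", v_ft[..., :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=v.size(-1))

class FourierNeuralMapping(nn.Module):
    def __init__(self, in_dim, out_dim, channels=16, grid=64, modes=12):
        super().__init__()
        self.grid = grid
        self.lift = nn.Linear(in_dim, channels * grid)       # vector -> function
        self.spectral = SpectralConv1d(channels, modes)
        self.local = nn.Conv1d(channels, channels, 1)
        self.readout = nn.Linear(channels * grid, out_dim)   # function -> vector

    def forward(self, y):                              # y: (batch, in_dim)
        v = self.lift(y).view(y.size(0), -1, self.grid)
        v = F.gelu(self.spectral(v) + self.local(v))
        return self.readout(v.flatten(1))

model = FourierNeuralMapping(in_dim=5, out_dim=3)
print(model(torch.randn(8, 5)).shape)                  # torch.Size([8, 3])
```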

Understanding the mechanisms through which neural networks extract statistics from input-label pairs is one of the most important unsolved problems in supervised learning. Prior works have identified that the Gram matrices of the weights in trained neural networks of general architectures are proportional to the average gradient outer product of the model, a statement known as the Neural Feature Ansatz (NFA). However, the reason these quantities become correlated during training is poorly understood. In this work, we explain the emergence of this correlation. We identify that the NFA is equivalent to alignment between the left singular structure of the weight matrices and a significant component of the empirical neural tangent kernels associated with those weights. We establish that the NFA introduced in prior works is driven by a centered NFA that isolates this alignment. We show that the speed of NFA development can be predicted analytically at early training times in terms of simple statistics of the inputs and labels. Finally, we introduce a simple intervention to increase NFA correlation at any given layer, which dramatically improves the quality of the learned features.
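
As a concrete reading of the ansatz, the sketch below compares, for the first layer of a toy MLP with scalar output, the Gram matrix of the first-layer weights with the average gradient outer product of the network with respect to its inputs, measuring their alignment by a cosine similarity; this only illustrates the quantity being studied, not the paper's analysis.

```python
# A minimal sketch (toy MLP, scalar output) of checking the Neural Feature
# Ansatz for the first layer: compare W1^T W1 with the average gradient outer
# product (AGOP) of the network with respect to its inputs.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, h, n = 10, 64, 256
net = nn.Sequential(nn.Linear(d, h), nn.ReLU(), nn.Linear(h, 1))
X = torch.randn(n, d, requires_grad=True)

out = net(X).sum()                       # per-sample scalar outputs, summed
grads, = torch.autograd.grad(out, X)     # row i is grad_x f(x_i)
agop = grads.T @ grads / n               # (d, d) average gradient outer product

W1 = net[0].weight.detach()              # (h, d)
gram = W1.T @ W1                         # (d, d) Gram matrix of first-layer weights

cos = torch.sum(gram * agop) / (gram.norm() * agop.norm())
print("NFA correlation (untrained net):", cos.item())
```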

We study the integration problem on Hilbert spaces of (multivariate) periodic functions. The standard technique for proving lower bounds on the error of quadrature rules uses bump functions and the pigeonhole principle. Recently, several new lower bounds have been obtained using a different technique that exploits the Hilbert space structure and a variant of the Schur product theorem. The purpose of this paper is to (a) survey the new proof technique, (b) show that it is indeed superior to the bump-function technique, and (c) sharpen and extend the results of the previous papers.
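
For orientation, the lower bounds in question concern the worst-case integration error of a quadrature rule Q_n f = sum_i w_i f(x_i) over the unit ball of a reproducing kernel Hilbert space H(K); its standard closed-form expression (a classical identity, not a result of this paper) is recalled below.

```latex
% Worst-case error of a quadrature rule Q_n f = \sum_{i=1}^n w_i f(x_i) for
% integration against a probability measure \mu over the unit ball of a
% reproducing kernel Hilbert space H(K):
\[
  e(Q_n, H(K))^2
  = \int\!\!\int K(x,y)\,d\mu(x)\,d\mu(y)
  - 2\sum_{i=1}^{n} w_i \int K(x_i,y)\,d\mu(y)
  + \sum_{i,j=1}^{n} w_i w_j K(x_i,x_j).
\]
```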

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as those on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and is difficult to scale. In this paper we present four algorithms to solve these problems. Their combination enables each agent to improve its task allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, limiting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than no-knowledge-retention approaches when system connectivity is impacted, and it is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
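
The sketch below is a hypothetical, single-agent caricature of the general idea described above: an epsilon-greedy allocator whose exploration rate grows when recent outcomes fall short of what its current value estimates suggest is achievable. It is illustrative only and does not reproduce the paper's four algorithms or their resource-usage constraints.

```python
# A hypothetical sketch of adaptive exploration for task allocation: explore
# more when recent rewards fall short of the agent's best value estimate.
# Illustrative only; not the paper's algorithms.
import random
from collections import defaultdict

class AdaptiveAllocator:
    def __init__(self, peers, lr=0.1, eps_min=0.05, eps_max=0.5):
        self.q = defaultdict(float)          # estimated value of sending a subtask to a peer
        self.peers, self.lr = peers, lr
        self.eps_min, self.eps_max = eps_min, eps_max
        self.recent = []                     # recent observed rewards

    def _epsilon(self):
        if not self.recent:
            return self.eps_max
        best = max(self.q[p] for p in self.peers)
        shortfall = max(0.0, best - sum(self.recent) / len(self.recent))
        return min(self.eps_max, self.eps_min + shortfall)   # explore more when underperforming

    def choose(self):
        if random.random() < self._epsilon():
            return random.choice(self.peers)
        return max(self.peers, key=lambda p: self.q[p])

    def update(self, peer, reward):
        self.q[peer] += self.lr * (reward - self.q[peer])
        self.recent = (self.recent + [reward])[-20:]

agent = AdaptiveAllocator(peers=["a", "b", "c"])
peer = agent.choose()
agent.update(peer, reward=1.0)
```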
