
Traditional statistical inference in cluster randomized trials typically invokes asymptotic theory that requires the number of clusters to approach infinity. In this article, we propose an alternative conformal causal inference framework for analyzing cluster randomized trials that achieves the target inferential goal in finite samples, without the need for asymptotic approximations. Unlike traditional inference, which focuses on estimating the average treatment effect, our conformal causal inference aims to provide prediction intervals for the difference of counterfactual outcomes, thereby offering a new decision-making tool for clusters and individuals in the same target population. We prove that this framework is compatible with arbitrary working outcome models -- including data-adaptive machine learning methods that maximally leverage information from baseline covariates -- and enjoys robustness against misspecification of the working outcome models. Under our conformal causal inference framework, we develop efficient computation algorithms to construct prediction intervals for treatment effects at both the cluster and individual levels, and further extend them to inferential targets defined by pre-specified covariate subgroups. Finally, we demonstrate the properties of our methods via simulations and a real data application based on a completed cluster randomized trial for treating chronic pain.
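To make the conformal construction concrete, below is a minimal sketch of a split conformal prediction interval built around an arbitrary working outcome model. The random forest, function names, and data splits are illustrative assumptions, not the authors' implementation; the point is that the finite-sample coverage guarantee does not depend on the working model being correct, which mirrors the robustness property claimed in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def split_conformal_interval(X_train, y_train, X_calib, y_calib, x_new, alpha=0.1):
    """Split conformal prediction interval around a working outcome model.

    Any regressor can be plugged in (a random forest is used here
    arbitrarily); the finite-sample coverage comes from the calibration
    quantile, not from the model being well specified.
    """
    model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
    # Nonconformity scores on the held-out calibration split.
    scores = np.abs(y_calib - model.predict(X_calib))
    n = len(scores)
    # Finite-sample-corrected quantile: ceil((n+1)(1-alpha))/n level.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    pred = model.predict(x_new.reshape(1, -1))[0]
    return pred - q, pred + q
```

Applying such a construction under treatment and under control and combining the results yields an interval for the counterfactual outcome difference; the paper's cluster-level and subgroup-level constructions refine this basic recipe.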

Related Content

In this paper, we propose high order numerical methods to solve a 2D advection diffusion equation in the highly oscillatory regime. We use an integrator strategy that allows the construction of arbitrarily high-order schemes, leading to an accurate approximation of the solution without any time step-size restriction. The paper focuses on the multiscale challenges in time, which come from the velocity, an $\varepsilon$-periodic function whose expression is explicitly known. Third order in time numerical approximations, uniform in $\varepsilon$, are obtained. For the space discretization, this strategy is combined with high order finite difference schemes. Numerical experiments show that the proposed methods achieve the expected order of accuracy, as validated by several tests across diverse domains and boundary conditions. The novelty of the paper consists in introducing a numerical scheme that is high order accurate in space and time, with particular attention to the dependence on a small parameter in the time scale. The high order in space is obtained by enlarging the interpolation stencil already established in [44], and further refined in [46], with special emphasis on the square domain's boundary, especially when a Dirichlet condition is assigned. In that case, we compute an \textit{ad hoc} Taylor expansion of the solution to ensure that there is no degradation of the accuracy order at the boundary. On the other hand, the high accuracy in time is obtained by extending the work proposed in [19]. The combination of high-order accuracy in both space and time is particularly significant due to the presence of two small parameters, $\delta$ and $\varepsilon$, in space and time, respectively.
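For intuition only, here is a far simpler scheme than the one the abstract describes, applied to a 1D model problem $u_t + a(t/\varepsilon)\,u_x = \delta\,u_{xx}$ with periodic boundary conditions. All names are illustrative; unlike the paper's uniformly accurate integrators, this naive explicit scheme *is* time-step restricted, which is precisely the limitation the proposed methods remove.

```python
import numpy as np

def advect_diffuse(u0, a, eps, delta, dx, dt, n_steps):
    """Explicit step for u_t + a(t/eps) u_x = delta u_xx on a periodic grid.

    Central second-order differences in space, forward Euler in time.
    Stability requires dt to satisfy the usual CFL and diffusion bounds,
    and accuracy degrades as eps shrinks -- the regime the paper targets.
    """
    u = u0.copy()
    for k in range(n_steps):
        t = k * dt
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)        # central d/dx
        uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # central d2/dx2
        u = u + dt * (-a(t / eps) * ux + delta * uxx)
    return u
```

With an oscillatory velocity such as `a = lambda s: 1.0 + 0.5 * np.cos(2 * np.pi * s)`, the step size here must shrink with $\varepsilon$ to resolve the oscillation, whereas the paper's schemes achieve $\varepsilon$-uniform third order accuracy in time.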

The unpredictability of random numbers is fundamental to both digital security and applications that fairly distribute resources. However, existing random number generators have limitations: the generation processes cannot be fully traced, audited, and certified to be unpredictable. The algorithmic steps used in pseudorandom number generators are auditable, but they cannot guarantee that their outputs were a priori unpredictable given knowledge of the initial seed. Device-independent quantum random number generators can ensure that the source of randomness was unknown beforehand, but the steps used to extract the randomness are vulnerable to tampering. Here, for the first time, we demonstrate a fully traceable random number generation protocol based on device-independent techniques. Our protocol extracts randomness from unpredictable non-local quantum correlations, and uses distributed intertwined hash chains to cryptographically trace and verify the extraction process. This protocol is at the heart of a public traceable and certifiable quantum randomness beacon that we have launched. Over the first 40 days of operation, we completed the protocol 7434 out of 7454 attempts -- a success rate of 99.7%. Each time the protocol succeeded, the beacon emitted a pulse of 512 bits of traceable randomness. The bits are certified to be uniform with error times actual success probability bounded by $2^{-64}$. The generation of certifiable and traceable randomness represents one of the first public services that operates with an entanglement-derived advantage over comparable classical approaches.
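The "intertwined hash chains" idea can be illustrated with a toy sketch: each record commits to the previous record of its own chain and to the latest record of a partner chain, so rewriting any step breaks verifiable links in both chains. This is a schematic illustration of the tracing concept, not the beacon's actual protocol.

```python
import hashlib
import json

def link(prev_own_hash, partner_hash, payload):
    """Create one record of an intertwined hash chain.

    Each record commits to the previous record of its OWN chain and to
    the most recent record of a PARTNER chain, so the two chains can
    only be rewritten together -- a toy version of 'intertwined' tracing.
    """
    record = {"prev": prev_own_hash, "partner": partner_hash, "data": payload}
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record, digest

# Two chains, A and B, cross-referencing each other at every step.
a_hash = b_hash = "genesis"
for step in range(3):
    rec_a, a_new = link(a_hash, b_hash, f"A-randomness-{step}")
    rec_b, b_new = link(b_hash, a_new, f"B-randomness-{step}")
    a_hash, b_hash = a_new, b_new
```

An auditor who holds either chain's head hash can replay the records and detect any after-the-fact modification of the extraction steps.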

Statistical learning under distribution shift is challenging when neither prior knowledge nor fully accessible data from the target distribution is available. Distributionally robust learning (DRL) aims to control the worst-case statistical performance within an uncertainty set of candidate distributions, but how to properly specify the set remains challenging. To enable distributional robustness without being overly conservative, in this paper, we propose a shape-constrained approach to DRL, which incorporates prior information about the way in which the unknown target distribution differs from its estimate. More specifically, we assume the unknown density ratio between the target distribution and its estimate is isotonic with respect to some partial order. At the population level, we provide a solution to the shape-constrained optimization problem that does not involve the isotonic constraint. At the sample level, we provide consistency results for an empirical estimator of the target in a range of different settings. Empirical studies on both synthetic and real data examples demonstrate the improved accuracy of the proposed shape-constrained approach.
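As a one-dimensional illustration of the shape constraint, a raw density-ratio estimate can be projected onto the monotone cone with isotonic regression. The estimator below is an assumption for illustration, not the paper's procedure, and it covers only the simplest case of a total order on a scalar covariate; the paper allows general partial orders.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def isotonic_density_ratio(x, raw_ratio):
    """Project a raw density-ratio estimate onto nondecreasing functions of x.

    Enforces an isotonicity constraint on the ratio between the target
    distribution and its estimate, in the scalar/total-order setting.
    """
    iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
    fitted = iso.fit_transform(x, raw_ratio)
    # Renormalize so the fitted ratio averages to one over the sample.
    return fitted / fitted.mean()
```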

In decision-making, maxitive functions are used for worst-case and best-case evaluations. Maxitivity gives rise to a rich structure that is well-studied in the context of the pointwise order. In this article, we investigate maxitivity with respect to general preorders and provide a representation theorem for such functionals. The results are illustrated for different stochastic orders in the literature, including the usual stochastic order, the increasing convex/concave order, and the dispersive order.
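For reference, here is a minimal sketch of the classical pointwise-order notion being generalized (our paraphrase, not a formula quoted from the article): a functional $\phi$ on a lattice of functions is maxitive if
\[
\phi(f \vee g) = \max\{\phi(f), \phi(g)\},
\]
where $f \vee g$ denotes the pointwise maximum. The article replaces the pointwise order underlying $\vee$ with a general preorder, such as the usual stochastic order or the increasing convex/concave order, and characterizes the functionals that are maxitive in that generalized sense.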

In this work, we consider an elliptic problem, referred to as the Ventcel problem, involving a second order term on the domain boundary (the Laplace-Beltrami operator). A variational formulation of the Ventcel problem is studied, leading to a finite element discretization. The focus is on the construction of high order curved meshes for the discretization of the physical domain and on the definition of the lift operator, which is aimed at transforming a function defined on the mesh domain into a function defined on the physical one. This lift is defined so as to satisfy adapted properties on the boundary, relative to the trace operator. The Ventcel problem approximation is investigated both in terms of geometrical error and of finite element approximation error. Error estimates are obtained in terms of both the mesh order $r \ge 1$ and the finite element degree $k \ge 1$, whereas such estimates have so far usually been considered in the isoparametric case, involving a single parameter $k = r$. The numerical experiments we conducted, in both dimension 2 and 3, allow us to validate the results obtained and proved on the a priori error estimates depending on the two parameters $k$ and $r$. A numerical comparison is made between the errors using the former lift definition and the lift defined in this work, establishing an improvement in the convergence rate of the error in the latter case.
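For orientation, one common model form of a Ventcel-type problem on a domain $\Omega$ with boundary $\Gamma$ reads (our sketch of a standard formulation, not necessarily the exact problem studied):
\[
-\Delta u + u = f \quad \text{in } \Omega,
\qquad
-\Delta_\Gamma u + \partial_n u + u = g \quad \text{on } \Gamma,
\]
where $\Delta_\Gamma$ is the Laplace-Beltrami operator and $\partial_n$ the outward normal derivative. The second order boundary term $\Delta_\Gamma u$ is what distinguishes the Ventcel condition from a Robin condition, and it is why the boundary geometry, and hence the mesh order $r$ and the lift operator, enters the error analysis.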

We consider linear models with scalar responses and covariates from a separable Hilbert space. The aim is to detect change points in the error distribution, based on sequential residual empirical distribution functions. Expansions for those estimated functions are more challenging in models with infinite-dimensional covariates than in regression models with scalar or vector-valued covariates due to a slower rate of convergence of the parameter estimators. Yet the suggested change point test is asymptotically distribution-free and consistent for one-change point alternatives. In the latter case we also show consistency of a change point estimator.
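A schematic of the statistic's main ingredient, in our illustrative construction rather than the paper's exact statistic: a CUSUM-type comparison of the empirical distribution of the first $k$ residuals against that of the remaining ones, maximized over split points.

```python
import numpy as np

def cusum_residual_edf_stat(residuals):
    """Max-type CUSUM statistic on sequential residual empirical
    distribution functions: compare the EDF of the first k residuals
    with the EDF of the rest, over all splits k and evaluation points.
    """
    r = np.asarray(residuals)
    n = len(r)
    ts = np.sort(r)  # evaluate both EDFs at the observed residuals
    stat = 0.0
    for k in range(1, n):
        f1 = (r[:k, None] <= ts).mean(axis=0)   # EDF of first k residuals
        f2 = (r[k:, None] <= ts).mean(axis=0)   # EDF of remaining residuals
        weight = (k / n) * (1 - k / n) * np.sqrt(n)
        stat = max(stat, weight * np.abs(f1 - f2).max())
    return stat
```

In the paper's setting the residuals come from an estimated functional linear model, and the slower convergence of the Hilbertian parameter estimator is what makes the expansion of this process delicate.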

We develop both first and second order numerical optimization methods to solve non-smooth optimization problems featuring a shared sparsity penalty, constrained by differential equations with uncertainty. To alleviate the curse of dimensionality, we use tensor product approximations. To handle the non-smoothness of the objective function, we introduce a smoothed version of the shared sparsity objective. We consider both a benchmark elliptic PDE constraint and a more realistic topology optimization problem. We demonstrate that the error converges linearly in the iterations and the smoothing parameter, and faster than algebraically in the number of degrees of freedom, consisting of the number of quadrature points in one variable and the tensor ranks. Moreover, in the topology optimization problem, the smoothed shared sparsity penalty actually reduces the tensor ranks compared to the unpenalised solution. This enables us to find a sparse high-resolution design under high-dimensional uncertainty.
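The abstract does not state the penalty, so the following display is our assumption of a typical form: for a control $u(x,\xi)$ depending on space $x \in D$ and a random parameter $\xi$, a shared (group) sparsity penalty and its smoothing can be written as
\[
\mathcal{P}(u) = \beta \int_D \big( \mathbb{E}_\xi\big[ u(x,\xi)^2 \big] \big)^{1/2} \, dx
\;\approx\;
\beta \int_D \big( \mathbb{E}_\xi\big[ u(x,\xi)^2 \big] + \gamma^2 \big)^{1/2} \, dx ,
\]
where the outer $L^1$-in-space structure promotes a support shared across all realizations of $\xi$, and the smoothing parameter $\gamma > 0$ removes the non-differentiability at $u = 0$. The linear convergence in the smoothing parameter mentioned above then refers to the limit $\gamma \to 0$.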

When studying the dynamics of incompressible fluids in bounded domains, the only available data often provide average flow rate conditions on portions of the domain's boundary. In engineering applications, a common practice to complete these conditions is to prescribe a Dirichlet condition by assuming a priori a spatial profile for the velocity field. However, this strongly influences the accuracy of the numerical solution. A more mathematically sound approach is to prescribe the flow rate conditions using Lagrange multipliers, resulting in an augmented weak formulation of the Navier-Stokes problem. In this paper, we start from the SIMPLE preconditioner, originally introduced for the standard Navier-Stokes equations, and derive two preconditioners for the monolithic solution of the augmented problem. This can be useful in complex applications where splitting the computation of the velocity/pressure and Lagrange multiplier solutions can be very expensive. In particular, we investigate the numerical performance of the preconditioners in both idealized and real-life scenarios. Finally, we highlight the advantages of treating flow rate conditions with a Lagrange multiplier approach instead of prescribing a Dirichlet condition.
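Schematically (our sketch, with notation assumed rather than taken from the paper), the augmented formulation leads, after linearization and discretization, to a monolithic saddle-point system coupling the velocity $u$, the pressure $p$, and the flow-rate multipliers $\lambda$:
\[
\begin{pmatrix} F & B^T & \Phi^T \\ B & 0 & 0 \\ \Phi & 0 & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \\ \lambda \end{pmatrix}
=
\begin{pmatrix} f \\ 0 \\ Q \end{pmatrix},
\]
where each row of $\Phi$ encodes one flow-rate constraint $\int_{\Gamma_i} u \cdot n \, d\sigma = Q_i$. SIMPLE-type preconditioners approximate the inverse of such block matrices through a block factorization with a cheap surrogate for $F^{-1}$, which is what the paper extends from the standard velocity-pressure system to the augmented one.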

We propose a novel, highly efficient, second-order accurate, long-time unconditionally stable numerical scheme for a class of finite-dimensional nonlinear models that are of importance in geophysical fluid dynamics. The scheme is highly efficient in the sense that only a (fixed) symmetric positive definite linear problem (with varying right hand sides) is involved at each time-step. The solutions to the scheme are uniformly bounded for all time. We show that the scheme is able to capture the long-time dynamics of the underlying geophysical model, with the global attractors as well as the invariant measures of the scheme converging to those of the original model as the step size approaches zero. In our numerical experiments, we take an indirect approach, using long-term statistics to approximate the invariant measures. Our results suggest that the convergence rate of the long-term statistics, as a function of terminal time, is approximately first order in the Jensen-Shannon metric and half order in the $L^1$ metric. This implies that very long time simulation is needed in order to get even a few significant digits of the long-time statistics (climate) correct. Nevertheless, the second order scheme's performance remains superior to that of the first order one, requiring significantly less time to reach a small neighborhood of statistical equilibrium for a given step size.
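The long-term-statistics comparison can be illustrated with a small sketch (illustrative code, not the authors'): approximate two invariant measures by histograms of long trajectories on a common grid, then compare them in the Jensen-Shannon and $L^1$ metrics.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def compare_long_term_stats(traj_a, traj_b, bins=100):
    """Compare empirical invariant measures of two long scalar trajectories.

    Histograms over a shared grid approximate the invariant densities;
    scipy's jensenshannon returns the JS *metric* (square root of the
    JS divergence), matching the metric named in the abstract.
    """
    lo = min(traj_a.min(), traj_b.min())
    hi = max(traj_a.max(), traj_b.max())
    p, edges = np.histogram(traj_a, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(traj_b, bins=bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    js = jensenshannon(p * width, q * width)  # JS metric on bin probabilities
    l1 = np.abs(p - q).sum() * width          # L1 distance between densities
    return js, l1
```

Tracking these distances between a long trajectory of the scheme and a reference trajectory, as a function of terminal time, is the kind of indirect diagnostic the experiments above rely on.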

Gate-defined quantum dots are a promising candidate system for realizing scalable, coupled qubit systems and serving as a fundamental building block for quantum computers. However, present-day quantum dot devices suffer from imperfections that must be accounted for, which hinders the characterization, tuning, and operation process. Moreover, with an increasing number of quantum dot qubits, the relevant parameter space grows rapidly enough to make heuristic control infeasible. Thus, it is imperative that reliable and scalable autonomous tuning approaches be developed. This meeting report outlines current challenges in automating quantum dot device tuning and operation, with a particular focus on datasets, benchmarking, and standardization. We also present insights and ideas put forward by the quantum dot community on how to overcome them. We aim to provide guidance and inspiration to researchers invested in automation efforts.
