
The effects of continuous treatments are often characterized through the average dose response function, which is challenging to estimate from observational data due to confounding and positivity violations. Modified treatment policies (MTPs) are an alternative approach that aims to assess the effect of a modification to observed treatment values and works under relaxed assumptions. Estimators for MTPs generally focus on estimating the conditional density of treatment given covariates and using it to construct weights. However, weighting with conditional density models has well-documented challenges. Further, MTPs with larger treatment modifications induce stronger confounding, and no tools exist to help choose an appropriate modification magnitude. This paper investigates the role of weights for MTPs, showing that, to control confounding, weights should balance the weighted data toward an unobserved hypothetical target population that can be characterized with observed data. Leveraging this insight, we present a versatile set of tools to enhance estimation for MTPs. We introduce a distance that measures the imbalance of covariate distributions under the MTP and use it to develop new weighting methods and tools to aid in the estimation of MTPs. We illustrate our methods through an example studying the effect of the mechanical power of ventilation on in-hospital mortality.
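As a concrete illustration of the density-ratio weighting described above, the sketch below implements an inverse-probability-weighted estimate for an additive-shift MTP, where each unit's weight is the conditional treatment density evaluated at the shifted-back treatment value divided by the density at the observed value. The Gaussian density model, the shift form d(a, w) = a + delta, and all variable names are assumptions made for illustration; this is not the weighting methodology developed in the paper.

```python
# Illustrative sketch (not the paper's method): IPW estimate under an
# additive-shift MTP d(a, w) = a + delta, using a Gaussian working model
# for the conditional treatment density g(a | w).
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.stats import norm

def mtp_ipw_estimate(A, W, Y, delta):
    """Weighted-mean estimate of E[Y] under the shift A -> A + delta."""
    # Model g(a | w) as Gaussian around a fitted conditional mean.
    model = LinearRegression().fit(W, A)
    mu = model.predict(W)
    sigma = np.std(A - mu)

    g_obs = norm.pdf(A, loc=mu, scale=sigma)              # g(A_i | W_i)
    g_shifted = norm.pdf(A - delta, loc=mu, scale=sigma)   # g(A_i - delta | W_i)

    weights = g_shifted / g_obs
    return np.average(Y, weights=weights), weights
```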

Related Content

Robust inferential methods based on divergence measures have shown an appealing trade-off between efficiency and robustness in many different statistical models. In this paper, minimum density power divergence estimators (MDPDEs) for the scale and shape parameters of the log-logistic distribution are considered. The log-logistic is a versatile distribution for modeling lifetime data, commonly adopted in survival analysis and reliability engineering when the hazard rate initially increases and then decreases after some point. Further, it is shown that the classical maximum likelihood estimator (MLE) is included as a particular case of the MDPDE family. Moreover, the corresponding influence function of the MDPDE is obtained, and its boundedness is proved, thus leading to robust estimators. A simulation study is carried out to illustrate the slight loss in efficiency of the MDPDE with respect to the MLE and, besides, the considerable gain in robustness.
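A minimal sketch of the estimator family described above, assuming the standard density power divergence objective H_n(theta) = \int f_theta^{1+a} dx - (1 + 1/a) * (1/n) \sum_i f_theta(X_i)^a, which recovers the MLE objective in the limit a -> 0. The use of scipy.stats.fisk as the log-logistic density and the Nelder-Mead optimizer are illustrative choices, not the paper's implementation.

```python
# Hedged sketch of an MDPDE for the log-logistic distribution.
import numpy as np
from scipy import stats, optimize, integrate

def dpd_objective(params, x, a):
    shape, scale = np.exp(params)            # enforce positivity via log-parameters
    f = lambda t: stats.fisk.pdf(t, c=shape, scale=scale)
    integral, _ = integrate.quad(lambda t: f(t) ** (1 + a), 0, np.inf)
    return integral - (1 + 1 / a) * np.mean(f(x) ** a)

def mdpde_loglogistic(x, a=0.5):
    res = optimize.minimize(dpd_objective, x0=np.log([1.0, np.median(x)]),
                            args=(x, a), method="Nelder-Mead")
    return np.exp(res.x)                      # (shape, scale) estimates

# Example: robust fit in the presence of a few gross outliers.
rng = np.random.default_rng(0)
sample = np.concatenate([stats.fisk.rvs(c=2.0, scale=3.0, size=200, random_state=rng),
                         np.full(5, 100.0)])  # contamination
print(mdpde_loglogistic(sample, a=0.5))
```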

Monitoring the correctness of distributed cyber-physical systems is essential. Detecting possible safety violations can be hard when some samples are uncertain or missing. Here we monitor black-box cyber-physical systems with logs that are uncertain in both the state and timestamp dimensions: not only is the logged value known with some uncertainty, but the time at which the log was made is uncertain too. In addition, we make use of an over-approximated yet expressive model, given by a non-linear extension of dynamical systems. Given an offline log, our approach is able to monitor the log against safety specifications with a limited number of false alarms. As a second contribution, we show that our approach can be used online to minimize the number of sample triggers, with the aim of energy efficiency. We apply our approach to three benchmarks: an anesthesia model, an adaptive cruise controller, and an aircraft orbiting system.
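The sketch below is a generic illustration, not the paper's algorithm: each log entry carries intervals for both its timestamp and its value, and a global bound on the signal's rate of change is used to over-approximate what can happen between samples, so an alarm is raised only when a violation of the safety threshold cannot be excluded. The rate-bound model and all names are assumptions made for illustration.

```python
# Generic offline monitoring sketch over uncertain samples (illustrative only).
from dataclasses import dataclass

@dataclass
class UncertainSample:
    t_lo: float   # earliest possible timestamp
    t_hi: float   # latest possible timestamp
    v_lo: float   # lower bound on the logged value
    v_hi: float   # upper bound on the logged value

def monitor(samples, threshold, rate_bound):
    """Return indices of gaps where a safety violation cannot be excluded."""
    alarms = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        # Largest possible time gap between the two samples.
        max_gap = max(b.t_hi - a.t_lo, 0.0)
        # With |dv/dt| <= rate_bound, the peak value between the samples is at
        # most (a.v_hi + b.v_hi + rate_bound * max_gap) / 2.
        peak = (a.v_hi + b.v_hi + rate_bound * max_gap) / 2
        worst_case = max(a.v_hi, b.v_hi, peak)
        if worst_case > threshold:
            alarms.append(i)
    return alarms
```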

Guided ultrasonic wave based structural health monitoring has been of interest for decades. However, the influence of pre-stress states on the propagation of Lamb waves in thin-walled structures has not yet been fully covered. So far, experimental work presented in the literature focuses only on a few individual frequencies, which does not allow a comprehensive verification of the numerous numerical investigations. Furthermore, most work is based on the strain-energy density function by Murnaghan. To validate the common modeling approach and to investigate the suitability of other non-linear strain-energy density functions, an extensive experimental and numerical investigation covering a large frequency range is presented here. The numerical simulation comprises the use of the Neo-Hooke as well as the Murnaghan material model. It is found that these two material models show qualitatively similar results. Furthermore, the comparison with the experimental results reveals that the Neo-Hooke material model reproduces the effect of pre-stress on the difference in the Lamb wave phase velocity very well in most cases. For the $A_0$ wave mode at higher frequencies, however, the sign of this difference is only correctly predicted by the Murnaghan model. In contrast, the Murnaghan material model fails to predict the sign change for the $S_0$ wave mode.
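For reference, the two strain-energy density functions compared above are commonly written as follows; one common compressible Neo-Hookean form is shown, and the exact parametrization used in the paper may differ.

```latex
% Hedged reference forms (background only, not necessarily the paper's exact models).
\begin{align}
  W_{\text{Murnaghan}} &= \frac{\lambda + 2\mu}{2}\, I_1^2 - 2\mu\, I_2
    + \frac{l + 2m}{3}\, I_1^3 - 2m\, I_1 I_2 + n\, I_3, \\
  W_{\text{Neo-Hooke}} &= \frac{\mu}{2}\left(\operatorname{tr}\mathbf{C} - 3\right)
    - \mu \ln J + \frac{\lambda}{2}\left(\ln J\right)^2,
\end{align}
% I_1, I_2, I_3: invariants of the Green-Lagrange strain tensor;
% lambda, mu: Lame constants; l, m, n: third-order (Murnaghan) constants;
% C: right Cauchy-Green tensor; J = det F.
```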

Although the process variables of epoxy resins alter their mechanical properties, the visual identification of characteristic features in X-ray images of samples of these materials is challenging. To facilitate the identification, we approximate the magnitude of the gradient of the intensity field of the X-ray images of different kinds of epoxy resins, and then we use deep learning to discover the most representative features of the transformed images. In this solution of the inverse problem of finding characteristic features that discriminate between samples of heterogeneous materials, we use the eigenvectors obtained from the singular value decomposition of all the channels of the feature maps of the early layers of a convolutional neural network. While the most strongly activated channel gives a visual representation of the characteristic features, these are often not robust enough in practical settings. In contrast, the left singular vectors of the matrix decomposition of the feature maps barely change when variables such as the capacity of the network or the network architecture change. This work demonstrates high classification accuracy and robust characteristic features.
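A hedged sketch of the pipeline described above: form the gradient-magnitude image, pass it through early convolutional layers, stack the resulting feature-map channels into a matrix, and keep the leading left singular vectors as characteristic-feature descriptors. The specific network, layer depth, and number of retained vectors are illustrative assumptions, not the paper's architecture.

```python
# Illustrative sketch: gradient magnitude -> early conv layers -> SVD of feature maps.
import numpy as np
import torch
import torch.nn as nn

def gradient_magnitude(img):
    gy, gx = np.gradient(img.astype(np.float32))
    return np.sqrt(gx ** 2 + gy ** 2)

early_layers = nn.Sequential(              # stand-in for the first CNN block
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
)

def characteristic_features(img, k=4):
    x = torch.from_numpy(gradient_magnitude(img))[None, None]   # (1, 1, H, W)
    with torch.no_grad():
        fmap = early_layers(x)[0]                                # (C, H, W)
    M = fmap.reshape(fmap.shape[0], -1)                          # channels x pixels
    U, S, Vh = torch.linalg.svd(M, full_matrices=False)
    return U[:, :k]                                              # leading left singular vectors
```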

The solution to a stochastic optimal control problem can be determined by computing the value function from a discretisation of the associated Hamilton-Jacobi-Bellman equation. Alternatively, the problem can be reformulated in terms of a pair of forward-backward SDEs, which makes Monte Carlo techniques applicable. More recently, the problem has also been viewed from the perspective of forward and reverse time SDEs and their associated Fokker-Planck equations. This approach is closely related to techniques used in score-based generative models. Forward and reverse time formulations express the value function as the ratio of two probability density functions; one stemming from a forward McKean-Vlasov SDE and another from a reverse McKean-Vlasov SDE. In this note, we extend this approach to a more general class of stochastic optimal control problems and combine it with ensemble Kalman filter-type and diffusion map approximation techniques in order to obtain efficient and robust particle-based algorithms.
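As background for the first sentence, the value function of a finite-horizon controlled diffusion $dX_t = b(X_t, u_t)\,dt + \sigma(X_t)\,dB_t$ with running cost $c$ and terminal cost $\Phi$ satisfies the standard Hamilton-Jacobi-Bellman equation shown below; this is the generic setup only, not the more general class of problems treated in the note.

```latex
% Standard finite-horizon HJB equation for a controlled diffusion (background only).
\begin{equation}
  \partial_t V(t,x)
  + \min_{u}\Big\{ b(x,u)\cdot\nabla_x V(t,x)
  + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma(x)\sigma(x)^{\top}\nabla_x^2 V(t,x)\big)
  + c(x,u)\Big\} = 0,
  \qquad V(T,x) = \Phi(x).
\end{equation}
```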

The Horvitz-Thompson (H-T) estimator is widely used for estimating various types of average treatment effects under network interference. We systematically investigate the optimality properties of the H-T estimator under network interference by embedding it in the class of all linear estimators. In particular, we show that, in the presence of any kind of network interference, the H-T estimator is inadmissible in the class of all linear estimators under a completely randomized design and under a Bernoulli design. We also show that the H-T estimator becomes admissible under certain restricted randomization schemes termed ``fixed exposure designs'', and we give examples of such designs. It is well known that the H-T estimator is unbiased when correct weights are specified. Here, we derive the weights for unbiased estimation of various causal effects and illustrate how they depend not only on the design but, more importantly, on the assumed form of interference (which in many real-world situations is unknown at the design stage) and the causal effect of interest.
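A hedged sketch of the weighting logic discussed above: under an assumed neighborhood exposure mapping (a unit is "fully exposed" if it and all its neighbors are treated, and "fully unexposed" if all are control), the exposure probabilities induced by a Bernoulli design are computed by Monte Carlo and plugged into the Horvitz-Thompson form. The exposure mapping, the design, and the Monte-Carlo computation are illustrative assumptions, not the paper's derivations.

```python
# Illustrative H-T estimate under an assumed neighborhood exposure mapping.
import numpy as np

def exposure(z, adj):
    """1 if a unit and all its neighbors are treated, 0 if all are control, else -1."""
    treated_nbrs = adj @ z
    n_nbrs = adj.sum(axis=1)
    return np.where((z == 1) & (treated_nbrs == n_nbrs), 1,
                    np.where((z == 0) & (treated_nbrs == 0), 0, -1))

def ht_estimate(y, z, adj, p=0.5, n_mc=20000, seed=0):
    n = len(y)
    rng = np.random.default_rng(seed)
    # Monte-Carlo exposure probabilities under a Bernoulli(p) design.
    draws = rng.binomial(1, p, size=(n_mc, n))
    exp_draws = np.array([exposure(d, adj) for d in draws])
    pi1 = np.maximum((exp_draws == 1).mean(axis=0), 1.0 / n_mc)  # guard against zeros
    pi0 = np.maximum((exp_draws == 0).mean(axis=0), 1.0 / n_mc)
    e = exposure(z, adj)
    return np.mean(np.where(e == 1, y / pi1, 0.0) - np.where(e == 0, y / pi0, 0.0))
```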

Due to their intrinsic capability for parallel signal processing, optical neural networks (ONNs) have recently attracted extensive interest as a potential alternative to electronic artificial neural networks (ANNs), offering reduced power consumption and low latency. The parallelism of optical computing has been widely demonstrated by applying wavelength division multiplexing (WDM) in the linear transformation part of neural networks. However, inter-channel crosstalk has prevented WDM technologies from being deployed in the nonlinear activation of ONNs. Here, we propose a universal WDM structure called multiplexed neuron sets (MNS), which applies WDM technologies to optical neurons and enables ONNs to be further compressed. A corresponding back-propagation (BP) training algorithm is proposed to alleviate or even cancel the influence of inter-channel crosstalk on MNS-based WDM-ONNs. For simplicity, semiconductor optical amplifiers (SOAs) are employed as an example of MNS to construct a WDM-ONN trained with the new algorithm. The results show that the combination of MNS and the corresponding BP training algorithm significantly downsizes the system and improves the energy efficiency by tens of times while giving performance similar to traditional ONNs.
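A hedged sketch of the general idea of training through inter-channel crosstalk: several wavelength channels have their own linear weights, leak into each other through a fixed mixing matrix, and share a nonlinear activation, so backpropagation automatically accounts for the leakage. The linear crosstalk model and the tanh activation are stand-ins for the SOA physics, chosen only for illustration; this is not the proposed MNS training algorithm.

```python
# Illustrative WDM-style layer with a fixed inter-channel crosstalk matrix.
import torch
import torch.nn as nn

class WDMLayer(nn.Module):
    def __init__(self, n_channels, dim_in, dim_out, crosstalk=0.05):
        super().__init__()
        self.linear = nn.ModuleList([nn.Linear(dim_in, dim_out) for _ in range(n_channels)])
        # Crosstalk: each channel receives a small fraction of every other channel.
        xt = torch.full((n_channels, n_channels), crosstalk)
        xt.fill_diagonal_(1.0)
        self.register_buffer("crosstalk", xt)

    def forward(self, x):                       # x: (batch, n_channels, dim_in)
        z = torch.stack([lin(x[:, c]) for c, lin in enumerate(self.linear)], dim=1)
        z = torch.einsum("cd,bdo->bco", self.crosstalk, z)  # inter-channel leakage
        return torch.tanh(z)                    # shared nonlinear activation

# Backpropagation through the crosstalk term is automatic.
layer = WDMLayer(n_channels=4, dim_in=8, dim_out=3)
x = torch.randn(16, 4, 8)
loss = layer(x).pow(2).mean()
loss.backward()
```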

Understanding whether and how treatment effects vary across subgroups is crucial to inform clinical practice and recommendations. Accordingly, the assessment of heterogeneous treatment effects (HTE) based on pre-specified potential effect modifiers has become a common goal in modern randomized trials. However, when one or more potential effect modifiers are missing, complete-case analysis may lead to bias and under-coverage. While statistical methods for handling missing data have been proposed and compared for individually randomized trials with missing effect modifier data, few guidelines exist for the cluster-randomized setting, where intracluster correlations in the effect modifiers, outcomes, or even missingness mechanisms may introduce further threats to the accurate assessment of HTE. In this article, the performance of several missing data methods is compared through a simulation study of cluster-randomized trials with a continuous outcome and missing binary effect modifier data, and further illustrated using real data from the Work, Family, and Health Study. Our results suggest that multilevel multiple imputation (MMI) and Bayesian MMI perform better than other available methods, and that Bayesian MMI has lower bias and closer-to-nominal coverage than standard MMI when there are model specification or compatibility issues.
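For context, all of the multiple-imputation methods compared above share the same pooling step (Rubin's rules), sketched below for the subgroup-interaction estimate; the imputation model itself (multilevel or Bayesian MMI) is not shown, and the numbers in the example are illustrative.

```python
# Rubin's rules: pool m imputation-specific estimates and variances.
import numpy as np

def rubin_pool(estimates, variances):
    """Combine m imputation-specific estimates into one estimate and total variance."""
    estimates, variances = np.asarray(estimates), np.asarray(variances)
    m = len(estimates)
    q_bar = estimates.mean()                      # pooled point estimate
    w = variances.mean()                          # within-imputation variance
    b = estimates.var(ddof=1)                     # between-imputation variance
    total_var = w + (1 + 1 / m) * b
    return q_bar, total_var

# e.g. pooled interaction estimate from m = 5 imputed datasets
print(rubin_pool([0.42, 0.37, 0.45, 0.40, 0.39], [0.020, 0.022, 0.019, 0.021, 0.020]))
```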

In the field of medical imaging, the scarcity of large-scale datasets due to privacy restrictions stands as a significant barrier to developing large models for medical applications. To address this issue, we introduce SynFundus-1M, a high-quality synthetic dataset of over 1 million retinal fundus images with extensive disease and pathology annotations, generated by a Denoising Diffusion Probabilistic Model. The SynFundus-Generator and SynFundus-1M achieve superior Frechet Inception Distance (FID) scores compared to existing methods on mainstream public real datasets. Furthermore, an evaluation by ophthalmologists validates the difficulty of discerning these synthetic images from real ones, confirming the authenticity of SynFundus-1M. Through extensive experiments, we demonstrate that both CNNs and ViTs can benefit from SynFundus-1M, whether used for pretraining or for training directly. Compared to datasets like ImageNet or EyePACS, models trained on SynFundus-1M achieve not only better performance but also faster convergence on various downstream tasks.
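For reference, the Frechet Inception Distance cited above compares Gaussian fits to feature activations of real and synthetic images; a minimal sketch is given below, assuming the activations have already been extracted (e.g. from an Inception network) as (n_samples, n_features) arrays.

```python
# Minimal FID computation from pre-extracted feature activations.
import numpy as np
from scipy import linalg

def fid(real_feats, fake_feats):
    mu1, mu2 = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    sigma1 = np.cov(real_feats, rowvar=False)
    sigma2 = np.cov(fake_feats, rowvar=False)
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):                  # drop numerical imaginary residue
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(sigma1 + sigma2 - 2 * covmean))
```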

The relationship between the thermodynamic and computational characteristics of dynamical physical systems has been a major theoretical interest since at least the 19th century, and it has been of increasing practical importance as the energetic cost of digital devices has exploded over the last half century. One of the most important thermodynamic features of real-world computers is that they operate very far from thermal equilibrium, in finite time, with many quickly (co-)evolving degrees of freedom. Such computers also must almost always obey multiple physical constraints on how they work. For example, all modern digital computers are periodic processes, governed by a global clock. Another example is that many computers are modular, hierarchical systems, with strong restrictions on the connectivity of their subsystems. These properties hold both for naturally occurring computers, like brains or eukaryotic cells, and for digital systems. These features of real-world computers are absent in 20th-century analyses of the thermodynamics of computational processes, which focused on quasi-statically slow processes. However, the field of stochastic thermodynamics has been developed in the last few decades, and it provides the formal tools for analyzing systems that have exactly these features of real-world computers. We argue here that these tools, together with other tools currently being developed in stochastic thermodynamics, may help us understand at a far deeper level just how the fundamental physical properties of dynamic systems are related to the computation that they perform.
