
We propose a novel model-agnostic, data-driven framework for time-dependent reliability analysis. The proposed approach -- referred to as MAntRA -- combines interpretable machine learning, Bayesian statistics, and stochastic dynamic equation identification to evaluate the reliability of stochastically excited dynamical systems for which the governing physics is \textit{a priori} unknown. A two-stage approach is adopted: in the first stage, an efficient variational Bayesian equation discovery algorithm is developed to determine the governing physics of an underlying stochastic differential equation (SDE) from measured output data. The developed algorithm is efficient and accounts for epistemic uncertainty due to limited and noisy data, and for aleatoric uncertainty due to environmental effects and external excitation. In the second stage, the discovered SDE is solved using a stochastic integration scheme and the probability of failure is computed. The efficacy of the proposed approach is illustrated on three numerical examples. The results obtained indicate the potential of the proposed approach for reliability analysis of in-situ and heritage structures from on-site measurements.
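
To make the second stage concrete, the following is a minimal sketch, assuming a hypothetical discovered scalar SDE $dX = f(X)\,dt + g(X)\,dW$ and a first-passage failure criterion: the discovered equation is integrated with an Euler-Maruyama scheme (one choice of stochastic integration scheme) and the probability of failure is estimated by Monte Carlo. The drift and diffusion below are illustrative placeholders, not the paper's identified model.

```python
# Minimal sketch of the second stage (assumptions: a discovered scalar SDE
# dX = f(X) dt + g(X) dW and a first-passage failure criterion |X| > x_max).
import numpy as np

rng = np.random.default_rng(0)

def f(x):          # drift identified in stage one (hypothetical placeholder)
    return -2.0 * x - 0.5 * x**3

def g(x):          # diffusion identified in stage one (hypothetical placeholder)
    return 0.3

def simulate_failure_prob(n_paths=5000, T=5.0, dt=1e-3, x_max=1.0):
    n_steps = int(T / dt)
    x = np.zeros(n_paths)
    failed = np.zeros(n_paths, dtype=bool)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + f(x) * dt + g(x) * dW        # Euler-Maruyama step
        failed |= np.abs(x) > x_max          # first-passage check
    return failed.mean()

print("estimated probability of failure:", simulate_failure_prob())
```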

Related content

Existing statistical methods can estimate a policy, or a mapping from covariates to decisions, which can then instruct decision makers (e.g., whether to administer hypotension treatment based on the covariates blood pressure and heart rate). There is great interest in using such data-driven policies in healthcare. However, it is often important to explain to the healthcare provider, and to the patient, how a new policy differs from the current standard of care. This explanation is facilitated if one can pinpoint the aspects of the policy (i.e., the parameters for blood pressure and heart rate) that change when moving from the standard of care to the new, suggested policy. To this end, we adapt ideas from Trust Region Policy Optimization (TRPO). In our work, however, unlike in TRPO, the difference between the suggested policy and the standard of care is required to be sparse, aiding interpretability. This yields ``relative sparsity," where, as a function of a tuning parameter, $\lambda$, we can approximately control the number of parameters in our suggested policy that differ from their counterparts in the standard of care (e.g., heart rate only). We propose a criterion for selecting $\lambda$, perform simulations, and illustrate our method with a real, observational healthcare dataset, deriving a policy that is easy to explain in the context of the current standard of care. Our work promotes the adoption of data-driven decision aids, which have great potential to improve health outcomes.
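
As a toy illustration of the relative-sparsity idea (not the authors' exact off-policy value criterion), the sketch below fits a logistic policy while placing an $l_1$ penalty on departures from a baseline ("standard of care") coefficient vector, so that $\lambda$ approximately controls how many coefficients differ from the baseline. The data, baseline, and proximal-gradient fit are all hypothetical.

```python
# Toy sketch of relative sparsity: penalize departures of the new policy's
# coefficients from a baseline vector beta0, so that lambda controls how many
# coefficients are allowed to differ. This is an illustration, not the paper's
# estimator, which optimizes an off-policy value criterion.
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 5
X = rng.normal(size=(n, p))                       # covariates (e.g., BP, HR, ...)
beta_true = np.array([1.0, -0.5, 0.0, 0.0, 0.0])
A = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))  # observed actions
beta0 = np.zeros(p)                               # baseline policy coefficients

def fit_relative_sparse(lam, n_iter=2000, lr=0.05):
    beta = beta0.copy()
    for _ in range(n_iter):
        pr = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (pr - A) / n                 # logistic log-likelihood gradient
        z = beta - lr * grad - beta0
        beta = beta0 + np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)  # prox step
    return beta

for lam in [0.0, 0.05, 0.2]:
    b = fit_relative_sparse(lam)
    print(lam, "coefficients differing from baseline:", int(np.sum(b != beta0)))
```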

Regularization is a common tool in variational inverse problems to impose assumptions on the parameters of the problem. One such assumption is sparsity, which is commonly promoted using lasso and total-variation-like regularization. Although the solutions to many such regularized inverse problems can be considered points of maximum probability of well-chosen posterior distributions, samples from these distributions are generally not sparse. In this paper, we present a framework for implicitly defining a probability distribution that combines the effects of sparsity-imposing regularization with Gaussian distributions. Unlike continuous distributions, these implicit distributions can assign positive probability to sparse vectors. We study these regularized distributions for various regularization functions, including total variation regularization and piecewise linear convex functions. We apply the developed theory to uncertainty quantification for Bayesian linear inverse problems and derive a Gibbs sampler for a Bayesian hierarchical model. To illustrate the difference between our sparsity-inducing framework and continuous distributions, we apply our framework to small-scale deblurring and computed tomography examples.
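
One construction consistent with the abstract's description (a sketch, not necessarily the paper's exact definition) pushes Gaussian samples through the proximal map of an $l_1$ regularizer; because this proximal map is a soft-thresholding operator, individual samples can be exactly sparse.

```python
# Minimal sketch of an implicitly defined "regularized Gaussian": push Gaussian
# samples through the l1 proximal map (soft thresholding), so individual samples
# have positive probability of containing exact zeros. Illustrative construction
# only; the paper's definition may differ in detail.
import numpy as np

rng = np.random.default_rng(2)

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

lam, n_samples, dim = 0.8, 10000, 5
z = rng.normal(0.0, 1.0, size=(n_samples, dim))   # Gaussian "latent" samples
x = soft_threshold(z, lam)                        # prox of lam * ||x||_1

print("fraction of exactly zero components:", np.mean(x == 0.0))
```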

In recent years, mediation analysis has become increasingly popular in many research fields. Broadly, the aim of mediation analysis is to investigate the direct effect of exposure on outcome together with the indirect effects along the pathways from exposure to outcome. A great number of articles have applied mediation analysis to data from hundreds or thousands of individuals. With the rapid development of technology, the volume of available data increases exponentially, which brings new challenges to researchers: it is often computationally infeasible to directly conduct statistical analysis on large datasets. However, there are very few results on mediation analysis with massive data. In this paper, we propose to use the subsampled double bootstrap as well as a divide-and-conquer algorithm to perform statistical mediation analysis for large-scale datasets. Extensive numerical simulations are conducted to evaluate the performance of our method. Two real data examples are also provided to illustrate the usefulness of our approach in practical applications.
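
As a minimal sketch of the divide-and-conquer idea for a simple linear mediation model ($M = aX + e_1$, $Y = cX + bM + e_2$, indirect effect $ab$), the code below splits a large synthetic dataset into chunks, estimates the indirect effect on each chunk by least squares, and averages the chunk estimates. The subsampled double bootstrap used for inference in the paper is not shown.

```python
# Divide-and-conquer sketch for the indirect effect a*b in a linear mediation
# model. Synthetic data with true a = 0.5, b = 0.4, so the indirect effect is 0.2.
import numpy as np

rng = np.random.default_rng(3)
N = 1_000_000
X = rng.normal(size=N)
M = 0.5 * X + rng.normal(size=N)
Y = 0.3 * X + 0.4 * M + rng.normal(size=N)

def indirect_effect(x, m, y):
    a = np.linalg.lstsq(np.c_[np.ones_like(x), x], m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.c_[np.ones_like(x), x, m], y, rcond=None)[0][2]
    return a * b

K = 20                                           # number of data chunks
estimates = [indirect_effect(x, m, y)
             for x, m, y in zip(np.array_split(X, K),
                                np.array_split(M, K),
                                np.array_split(Y, K))]
print("divide-and-conquer indirect effect:", np.mean(estimates))
```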

Continual learning protocols are attracting increasing attention from the medical imaging community. In continual environments, datasets acquired under different conditions arrive sequentially, and each is only available for a limited period of time. Given the inherent privacy risks associated with medical data, this setup reflects the reality of deployment for deep learning diagnostic radiology systems. Many techniques exist to learn continuously for image classification, and several have been adapted to semantic segmentation. Yet most struggle to accumulate knowledge in a meaningful manner. Instead, they focus on preventing catastrophic forgetting, even when this reduces model plasticity and thereby burdens the training process. This puts into question whether the additional overhead of knowledge preservation is worth it - particularly for medical image segmentation, where computation requirements are already high - or whether maintaining separate models would be a better solution. We propose UNEG, a simple and widely applicable multi-model benchmark that maintains separate segmentation and autoencoder networks for each training stage. The autoencoder is built from the same architecture as the segmentation network, which in our case is a full-resolution nnU-Net, to bypass any additional design decisions. During inference, the reconstruction error is used to select the most appropriate segmenter for each test image. Building on this concept, we develop a fair evaluation scheme for different continual learning settings that moves beyond the prevention of catastrophic forgetting. Our results across three regions of interest (prostate, hippocampus, and right ventricle) show that UNEG outperforms several continual learning methods, reinforcing the need for strong baselines in continual learning research.
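
A minimal sketch of the inference step described above, with hypothetical stand-in models: each training stage contributes a (segmenter, autoencoder) pair, and the segmenter whose autoencoder reconstructs the test image with the lowest error is selected.

```python
# Sketch of UNEG-style inference: pick the segmenter whose paired autoencoder
# reconstructs the image best. Model objects and their callable interfaces are
# hypothetical placeholders, not the nnU-Net implementation.
import numpy as np

def uneg_predict(image, stages):
    """stages: list of dicts {'segmenter': seg_fn, 'autoencoder': ae_fn}."""
    errors = []
    for stage in stages:
        recon = stage['autoencoder'](image)              # reconstruction
        errors.append(np.mean((image - recon) ** 2))     # reconstruction MSE
    best = int(np.argmin(errors))                        # most "familiar" stage
    return stages[best]['segmenter'](image), best

# Toy usage with stand-in models: stage 0 reconstructs with the image mean,
# stage 1 reconstructs perfectly, so stage 1 is selected.
smooth_ae = lambda img: np.full_like(img, img.mean())
identity_ae = lambda img: img
seg = lambda img: (img > img.mean()).astype(np.uint8)
stages = [{'segmenter': seg, 'autoencoder': smooth_ae},
          {'segmenter': seg, 'autoencoder': identity_ae}]
mask, chosen = uneg_predict(np.random.rand(64, 64), stages)
print("selected stage:", chosen)
```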

This article develops new tools and new statistical theory for a statistical problem we call Scale Reliant Inference (SRI). Many scientific fields collect multivariate data that lack scale: the size, sum, or total of each measurement is arbitrary and is not representative of the scale of the underlying system being measured. For example, in the analysis of high-throughput sequencing data, it is well known that the number of sequencing reads (the sequencing depth) varies substantially due to non-biological (technical) factors. This article develops a formal problem statement for SRI which unifies problems seen in multiple scientific fields. Informally, we define SRI as an estimation problem in which an estimand of interest cannot be uniquely identified due to the lack of scale information in the observed data. This problem statement represents a reformulation of the related field of Compositional Data Analysis and allows us to prove fundamental limits on SRI. For example, we prove that inferential criteria such as consistency, calibration, and bias are unattainable for common SRI tasks. Moreover, we show that common methods often applied to SRI implicitly assume infinite knowledge of the system scale and can lead to a troubling phenomenon termed unacknowledged bias. Counter-intuitively, we show that this problem worsens with more data and can lead to substantially elevated Type-I and Type-II error rates. Still, we show that rigorous statistical inference is possible so long as models acknowledge the fundamental uncertainty in the system scale. We introduce a class of models we call Scale Simulation Random Variables (SSRVs) as a flexible, rigorous, and computationally efficient approach to SRI.
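
The following toy sketch illustrates the scale-simulation idea in a sequencing-style setting; it is an illustration of the concept only, not the paper's SSRV machinery. The observed counts identify only proportions, so plausible total scales are drawn from an assumed distribution and propagated to statements about absolute abundances.

```python
# Toy illustration of scale simulation: the data identify proportions, so draw
# uncertain total scales from an assumed distribution (hypothetical lognormal)
# and propagate that uncertainty to absolute-abundance statements.
import numpy as np

rng = np.random.default_rng(4)
counts = np.array([120, 300, 80, 500])          # observed reads for 4 taxa
proportions = counts / counts.sum()             # all that the data identify

log_scale_draws = rng.normal(np.log(1e6), 0.5, size=5000)   # assumed scale model
abundance_draws = proportions * np.exp(log_scale_draws)[:, None]

lo, hi = np.percentile(abundance_draws[:, 0], [2.5, 97.5])
print(f"taxon 1 absolute abundance 95% interval: ({lo:.0f}, {hi:.0f})")
```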

We propose a sparse vector autoregressive (VAR) hidden semi-Markov model (HSMM) for modeling temporal and contemporaneous (e.g., spatial) dependencies in multivariate nonstationary time series. The HSMM's generic state distribution is embedded in a special transition matrix structure, facilitating efficient likelihood evaluations and arbitrary approximation accuracy. To promote sparsity of the VAR coefficients, we deploy an $l_1$-ball projection prior, which combines differentiability with a positive probability of obtaining exact zeros, achieving variable selection within each switching state. This also facilitates posterior estimation via Hamiltonian Monte Carlo (HMC). We further place non-local priors on the parameters of the HSMM dwell distribution, improving the ability of Bayesian model selection to distinguish whether the data are better described by the simpler hidden Markov model (HMM) or by the more flexible HSMM. Our proposed methodology is illustrated via an application to human gesture phase segmentation based on sensor data, where we successfully identify and characterize the periods of rest and active gesturing, as well as the dynamical patterns involved in the gesture movements associated with each of these states.
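
The exact-zero behaviour of the $l_1$-ball projection comes from the projection map itself: a continuous latent vector is projected onto an $l_1$ ball, which sets small components exactly to zero while remaining differentiable almost everywhere (so HMC can operate on the latent parameters). The sketch below implements the standard sort-based projection; the prior's radius specification and the rest of the model are not shown.

```python
# Standard sort-based Euclidean projection onto the l1 ball (Duchi et al. style).
# Small components of the latent vector are mapped to exact zeros.
import numpy as np

def project_l1_ball(beta, r=1.0):
    if np.abs(beta).sum() <= r:
        return beta.copy()
    u = np.sort(np.abs(beta))[::-1]                     # sorted magnitudes
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(beta) + 1) > (css - r))[0][-1]
    theta = (css[rho] - r) / (rho + 1.0)                # soft-threshold level
    return np.sign(beta) * np.maximum(np.abs(beta) - theta, 0.0)

beta_latent = np.array([0.9, -0.05, 0.4, 0.02, -0.7])
print(project_l1_ball(beta_latent, r=1.0))   # small entries become exact zeros
```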

We study the interplay between the data distribution and Q-learning-based algorithms with function approximation. We provide a unified theoretical and empirical analysis as to how different properties of the data distribution influence the performance of Q-learning-based algorithms. We connect different lines of research, as well as validate and extend previous results. We start by reviewing theoretical bounds on the performance of approximate dynamic programming algorithms. We then introduce a novel four-state MDP specifically tailored to highlight the impact of the data distribution on the performance of Q-learning-based algorithms with function approximation, both online and offline. Finally, we experimentally assess the impact of the data distribution properties on the performance of two offline Q-learning-based algorithms under different environments. According to our results: (i) high-entropy data distributions are well-suited for learning in an offline manner; and (ii) a certain degree of data diversity (data coverage) and data quality (closeness to the optimal policy) are jointly desirable for offline learning.
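
The sketch below is a toy probe of the data-distribution question: behaviour policies of different entropy generate offline datasets on a small chain MDP, and offline Q-learning is run on each. The MDP is a hypothetical stand-in (not the paper's four-state MDP) and the learner is tabular rather than using function approximation.

```python
# Toy probe: compare offline Q-learning on datasets collected by a low-entropy
# and a high-entropy behaviour policy. Hypothetical chain MDP, tabular learner.
import numpy as np

rng = np.random.default_rng(5)
n_states, n_actions, gamma = 4, 2, 0.9

def step(s, a):
    """Action 1 moves right; reward 1 whenever the last state is occupied."""
    s_next = min(s + 1, n_states - 1) if a == 1 else s
    return s_next, float(s_next == n_states - 1)

def collect(policy_probs, n=5000):
    data, s = [], 0
    for _ in range(n):
        a = rng.choice(n_actions, p=policy_probs)
        s_next, r = step(s, a)
        data.append((s, a, r, s_next))
        s = 0 if rng.random() < 0.1 else s_next      # occasional episode reset
    return data

def entropy(data):
    counts = np.zeros((n_states, n_actions))
    for s, a, _, _ in data:
        counts[s, a] += 1
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def offline_q_learning(data, n_sweeps=50, lr=0.1):
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_sweeps):
        for s, a, r, s_next in data:
            Q[s, a] += lr * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q

for name, probs in [("low-entropy behaviour", [0.95, 0.05]),
                    ("high-entropy behaviour", [0.5, 0.5])]:
    data = collect(probs)
    Q = offline_q_learning(data)
    print(name, "| state-action entropy:", round(entropy(data), 2),
          "| greedy actions:", Q.argmax(axis=1))
```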

Tool wear monitoring is crucial for quality control and cost reduction in manufacturing processes, of which drilling applications are one example. In this paper, we present a U-Net based semantic image segmentation pipeline, deployed on microscopy images of cutting inserts, for the purpose of wear detection. The wear area is differentiated into two different types, resulting in a multiclass classification problem. Joining the two wear types into one general wear class, on the other hand, allows the problem to be formulated as a binary classification task. In addition to comparing the binary and multiclass formulations, different loss functions, i.e., Cross Entropy, Focal Cross Entropy, and a loss based on the Intersection over Union (IoU), are investigated. Furthermore, models are trained on image tiles of different sizes, and augmentation techniques of varying intensities are deployed. We find that the best-performing models are binary models trained on data with moderate augmentation and an IoU-based loss function.
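
For reference, one common differentiable IoU-based loss for the binary formulation looks like the sketch below (the paper's exact loss formulation may differ): hard set intersection and union are replaced by sums over per-pixel probabilities.

```python
# Soft IoU (Jaccard) loss sketch for binary segmentation: predictions are
# per-pixel probabilities, targets are {0, 1} masks.
import numpy as np

def soft_iou_loss(pred_probs, target, eps=1e-6):
    intersection = np.sum(pred_probs * target)
    union = np.sum(pred_probs) + np.sum(target) - intersection
    return 1.0 - (intersection + eps) / (union + eps)

pred = np.random.rand(256, 256)                    # e.g., sigmoid output of a U-Net
mask = (np.random.rand(256, 256) > 0.8).astype(float)
print("soft IoU loss:", soft_iou_loss(pred, mask))
```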

We study asymptotic statistical inference in the space of bounded functions endowed with the supremum norm over an arbitrary metric space $S$ using a novel concept: Simultaneous Confidence Probability Excursion (SCoPE) sets. Given an estimator, SCoPE sets simultaneously quantify the uncertainty of several lower and upper excursion sets of a target function and thereby grant a unifying perspective on several statistical inference tools, such as simultaneous confidence bands, quantification of uncertainty in level set estimation (for example, CoPE sets), and multiple hypothesis testing over $S$ (for example, finding relevant differences or regions of equivalence within $S$). As a byproduct, our abstract treatment allows us to refine and generalize the methodology and reduce the assumptions in recent articles on relevance and equivalence testing in functional data.
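
A minimal sketch of the excursion-set reading of this idea, assuming a simultaneous confidence band of half-width $q$ is already available (its construction is not shown): grid points whose lower band exceeds a threshold $c$ are declared confidently inside the upper excursion set, and points whose upper band falls below $c$ are declared confidently outside it.

```python
# Sketch of excursion-set inference from a simultaneous confidence band over a
# grid discretizing the metric space S. The estimate, band half-width q, and
# threshold c are hypothetical placeholders.
import numpy as np

s = np.linspace(0, 1, 200)                     # metric space S (here an interval)
estimate = np.sin(2 * np.pi * s)               # estimated target function
q = 0.3                                        # simultaneous band half-width
c = 0.5                                        # excursion threshold

confidently_above = estimate - q > c           # inside {f > c} simultaneously
confidently_below = estimate + q < c           # outside {f >= c} simultaneously
print("points confidently above c:", int(confidently_above.sum()))
print("points confidently below c:", int(confidently_below.sum()))
```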

Time series classification is an important real-world problem. Because time series are non-stationary, i.e., their distribution changes over time, it remains challenging to build models that generalize to unseen distributions. In this paper, we propose to view the time series classification problem from a distribution perspective. We argue that the temporal complexity is attributable to unknown latent distributions within the data. To this end, we propose DIVERSIFY to learn generalized representations for time series classification. DIVERSIFY follows an iterative process: it first obtains the worst-case distribution scenario via adversarial training, then matches the distributions of the obtained sub-domains. We also present some theoretical insights. We conduct experiments on gesture recognition, speech commands recognition, wearable stress and affect detection, and sensor-based human activity recognition with a total of seven datasets in different settings. Results demonstrate that DIVERSIFY significantly outperforms other baselines and effectively characterizes the latent distributions, as shown by qualitative and quantitative analysis. Code is available at: //github.com/microsoft/robustlearn.
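
The loop below is a deliberately simplified sketch of the alternation described above, applied to featurized time-series segments, with k-means standing in for the adversarial worst-case sub-domain identification and per-domain standardization standing in for adversarial distribution matching; it only illustrates the "identify latent sub-domains, then align them" structure, not the method in the repository above.

```python
# Highly simplified sketch of the DIVERSIFY-style alternation on featurized
# segments. Step 1: infer latent sub-domains (k-means as a crude stand-in for the
# adversarial worst-case step). Step 2: match their distributions (per-domain
# standardization as a crude stand-in for adversarial matching).
import numpy as np

rng = np.random.default_rng(6)
# Two hidden latent sub-domains with shifted feature distributions.
features = np.vstack([rng.normal(0.0, 1.0, size=(200, 8)),
                      rng.normal(2.0, 1.5, size=(200, 8))])

def kmeans(X, k=2, n_iter=20):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

for it in range(3):                               # alternation between the two steps
    domains = kmeans(features)                    # step 1: infer latent sub-domains
    for j in np.unique(domains):                  # step 2: align their distributions
        sel = domains == j
        features[sel] = (features[sel] - features[sel].mean(0)) / (features[sel].std(0) + 1e-8)
    m0, m1 = features[domains == 0], features[domains == 1]
    if len(m0) and len(m1):
        gap = np.linalg.norm(m0.mean(0) - m1.mean(0))
        print(f"iteration {it}: between-sub-domain mean gap = {gap:.3f}")
```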
