
The classical approach to designing a system is based on a deterministic perspective, which assumes that the system and its environment are fully predictable and that their behaviour is completely known to the designer. Although this approach may work fairly well for regular design problems, it is not satisfactory for the design of highly sensitive and complex systems where significant resources and even lives are at risk. In addition, it can result in the extra cost of over-designing for the sake of safety and reliability. In this paper, a risk-based design framework using the Simulation-Based Probabilistic Risk Assessment (SIMPRA) methodology is proposed. SIMPRA allows the designer to use the knowledge that can be expected to exist at the design stage to identify how deviations can occur, and then to apply these high-level scenarios to a rich simulation model of the system to generate detailed scenarios and identify their probabilities and consequences. SIMPRA has three main modules: a Simulator, a Planner and a Scheduler. Because the Planner uses engineering knowledge to guide the simulation process, the approach is much more efficient at covering the large space of possible scenarios than, for example, biased Monte Carlo simulation. The added value of this approach is that it enables the designer to observe system behaviour under many different conditions. This process leads to a risk-informed design in which the risk of negative consequences is either eliminated entirely or reduced to an acceptable range. For illustrative purposes, an earth observation satellite system example is introduced.
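To make the planner-guided exploration idea concrete, here is a minimal, self-contained Python sketch (not the SIMPRA implementation): a hypothetical priority table stands in for the Planner's engineering knowledge and biases which high-level deviation is injected into each run of a toy satellite-health simulator, after which per-scenario failure frequencies are tallied. All scenario names, priorities, and degradation rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sketch only: planner-style priorities bias which high-level
# deviation scenario is injected into each run of a toy simulator.
planner_priorities = {            # hypothetical priorities for an earth-observation satellite
    "battery_degradation": 0.6,
    "attitude_sensor_drift": 0.3,
    "comms_outage": 0.1,
}
stress_rates = {"battery_degradation": 0.03, "attitude_sensor_drift": 0.02, "comms_outage": 0.01}

def simulate(deviation, n_steps=100):
    """Toy simulator: degrade a scalar 'health' state and report whether it fails."""
    health = 1.0
    for step in range(n_steps):
        health -= rng.exponential(stress_rates[deviation])
        if health <= 0.0:
            return True, step          # failure and the time at which it occurred
    return False, n_steps

names = list(planner_priorities)
probs = np.array([planner_priorities[n] for n in names])
probs /= probs.sum()

outcomes = {n: [] for n in names}
for _ in range(2000):
    n = str(rng.choice(names, p=probs))    # planner-guided scenario selection
    failed, _ = simulate(n)
    outcomes[n].append(failed)

for n in names:
    print(n, round(float(np.mean(outcomes[n])), 3))   # crude per-scenario failure frequency
```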

Related content

Causal inference from longitudinal studies is central to epidemiologic research. Targeted Maximum Likelihood Estimation (TMLE) is an established double-robust causal effect estimation method, but how missing data should be handled when using TMLE with data-adaptive approaches is unclear. Based on motivating data from the Victorian Adolescent Health Cohort Study, we conducted simulation and case studies to evaluate the performance of methods for handling missing data when using TMLE. These were complete-case analysis; an extended TMLE method incorporating a model for the outcome missingness mechanism; the missing indicator method for missing covariate data; and six multiple imputation (MI) approaches using parametric or machine-learning models to handle missing outcome, exposure, and covariate data. The simulation study considered a simple scenario (the exposure and outcome generated from main-effects regressions) and two complex scenarios (models also including interactions), alongside eleven missingness mechanisms defined using causal diagrams. No approach performed well across all scenarios and missingness mechanisms. For non-MI methods, bias depended on the missingness mechanism (little bias when the outcome did not influence missingness in any variable). For parametric MI, bias depended on the missingness mechanism (smaller when the outcome did not directly influence outcome missingness) and on the data generation scenario (larger for the complex scenarios). Including interaction terms in the imputation model improved performance. For MI using machine learning, bias depended on the missingness mechanism (smaller when no variable with missing data directly influenced outcome missingness). We recommend considering the missing data mechanism and, if using MI, opting for a saturated parametric or data-adaptive imputation model when handling missing data in TMLE estimation.
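The following Python sketch illustrates one of the evaluated strategies: multiple imputation followed by double-robust estimation, with Rubin's rules for pooling. The column names (Y, A, X1, X2), the imputation settings, and the use of an AIPW estimator as a stand-in for TMLE are all assumptions made for illustration; in practice a TMLE routine would be called at the same point in the loop.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_estimate(df, outcome="Y", exposure="A", covariates=("X1", "X2")):
    """Doubly robust AIPW estimate of the average causal effect (stand-in for TMLE)."""
    X = df[list(covariates)].to_numpy()
    a = df[exposure].to_numpy()
    y = df[outcome].to_numpy()
    g = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]        # propensity score
    q = LinearRegression().fit(np.column_stack([a, X]), y)           # outcome model
    q1 = q.predict(np.column_stack([np.ones_like(a), X]))
    q0 = q.predict(np.column_stack([np.zeros_like(a), X]))
    psi = a / g * (y - q1) - (1 - a) / (1 - g) * (y - q0) + q1 - q0  # influence function
    return psi.mean(), psi.var(ddof=1) / len(psi)

def mi_then_estimate(df, m=20, seed=0):
    """Impute m times, estimate on each completed data set, pool with Rubin's rules."""
    ests, variances = [], []
    for i in range(m):
        imp = IterativeImputer(sample_posterior=True, random_state=seed + i)
        completed = pd.DataFrame(imp.fit_transform(df), columns=df.columns)
        completed["A"] = (completed["A"] > 0.5).astype(int)          # imputed exposure back to binary
        est, var = aipw_estimate(completed)
        ests.append(est)
        variances.append(var)
    ests, variances = np.array(ests), np.array(variances)
    w, b = variances.mean(), ests.var(ddof=1)                        # within/between variance
    return ests.mean(), w + (1 + 1 / m) * b                          # Rubin's rules
```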

Estimating free energy differences, an important problem in computational drug discovery and in a wide range of other application areas, commonly involves a computationally intensive process of sampling a family of high-dimensional probability distributions and a procedure for computing estimates based on those samples. The variance of the free energy estimate of interest typically depends strongly on how the total computational resources available for sampling are divided among the distributions, but determining an efficient allocation is difficult without sampling the distributions. Here we introduce the Times Square sampling algorithm, a novel on-the-fly estimation method that dynamically allocates resources in such a way as to significantly accelerate the estimation of free energies and other observables, while providing rigorous convergence guarantees for the estimators. We also show that it is possible, surprisingly, for on-the-fly free energy estimation to achieve lower asymptotic variance than the maximum-likelihood estimator MBAR, raising the prospect that on-the-fly estimation could reduce variance in a variety of other statistical applications.
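As a toy illustration of the on-the-fly allocation idea (not the Times Square algorithm itself), the sketch below repeatedly sends the next batch of samples to whichever of two distributions currently dominates the variance of a simple difference-of-means estimator, which stands in here for a free energy difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy on-the-fly allocation between two sampling distributions: each round, the
# next batch goes to the distribution whose term dominates the estimator variance.
dists = [lambda k: rng.normal(0.0, 1.0, k),      # well-behaved distribution
         lambda k: rng.normal(0.5, 3.0, k)]      # noisier distribution
samples = [list(d(10)) for d in dists]           # small pilot sample from each

for _ in range(200):                             # 200 further batches of 10 samples
    var_terms = [np.var(s, ddof=1) / len(s) for s in samples]
    i = int(np.argmax(var_terms))                # allocate where the variance term is largest
    samples[i].extend(dists[i](10))

# Difference of means stands in for the free energy difference of interest.
print(np.mean(samples[1]) - np.mean(samples[0]), [len(s) for s in samples])
```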

A solution manifold is the collection of points in a $d$-dimensional space satisfying a system of $s$ equations with $s<d$. Solution manifolds occur in several statistical problems, including hypothesis testing, curved exponential families, constrained mixture models, partial identification, and nonparametric set estimation. We analyze solution manifolds both theoretically and algorithmically. In terms of theory, we derive five useful results: the smoothness theorem, the stability theorem (which implies the consistency of a plug-in estimator), the convergence of a gradient flow, the local center manifold theorem, and the convergence of the gradient descent algorithm. To numerically approximate a solution manifold, we propose a Monte Carlo gradient descent algorithm. In the case of likelihood inference, we design a manifold-constrained maximization procedure to find the maximum likelihood estimator on the manifold. We also develop a method to approximate a posterior distribution defined on a solution manifold.
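The Monte Carlo gradient descent idea can be sketched directly: sample starting points at random and run gradient descent on $\|F(x)\|^2$ to push each point onto the solution manifold $\{x : F(x) = 0\}$. The unit circle used below is an illustrative choice of manifold, not an example taken from the paper.

```python
import numpy as np

# Minimal sketch: Monte Carlo sample of starting points, then gradient descent on
# 0.5 * ||F(x)||^2 drives each point onto the manifold {x : F(x) = 0}.
def F(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0])   # unit circle in R^2

def jacobian_F(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]]])

rng = np.random.default_rng(0)
points = rng.uniform(-2.0, 2.0, size=(200, 2))       # random starting points

step = 0.1
for _ in range(500):
    for i, x in enumerate(points):
        grad = jacobian_F(x).T @ F(x)                 # gradient of 0.5 * ||F(x)||^2
        points[i] = x - step * grad.ravel()

# points now lie (approximately) on the manifold; report the worst residual
print(np.max(np.abs(np.sum(points ** 2, axis=1) - 1.0)))
```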

Power and sample size analysis is a critical component of clinical trial study design. There is an extensive collection of methods addressing this problem from diverse perspectives. The Bayesian paradigm, in particular, has attracted noticeable attention and includes different perspectives on sample size determination. Building upon a cost-effectiveness analysis undertaken by O'Hagan and Stevens (2001) with different priors in the design and analysis stages, we develop a general Bayesian framework for simulation-based sample size determination that can be easily implemented on modest computing architectures. We further qualify the need for different priors in the design and analysis stages. We work primarily in the context of conjugate Bayesian linear regression models, where we consider situations with both known and unknown variances. Throughout, we draw parallels with frequentist solutions, which arise as special cases, and with alternative Bayesian approaches, emphasizing how the numerical results of existing methods arise as special cases in our framework.
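A minimal sketch of the simulation-based workflow is shown below for a normal mean with known variance: the design prior generates plausible truths and data sets, a different (vaguer) analysis prior is used for the conjugate posterior update, and the smallest sample size achieving a target assurance is reported. All priors, thresholds, and the success criterion are illustrative assumptions, not the paper's worked example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

sigma = 2.0                           # known sampling standard deviation
design_mu, design_sd = 0.5, 0.25      # design prior on the treatment effect theta
analysis_mu, analysis_sd = 0.0, 10.0  # vague analysis prior
target_assurance = 0.8                # required probability of a "successful" trial

def assurance(n, n_sim=5000):
    theta = rng.normal(design_mu, design_sd, n_sim)        # draw truths from the design prior
    ybar = rng.normal(theta, sigma / np.sqrt(n))           # simulate the observed sample mean
    post_prec = 1 / analysis_sd**2 + n / sigma**2          # conjugate normal update
    post_mean = (analysis_mu / analysis_sd**2 + n * ybar / sigma**2) / post_prec
    post_sd = np.sqrt(1 / post_prec)
    success = stats.norm.sf(0.0, loc=post_mean, scale=post_sd) > 0.95  # P(theta > 0 | data) > 0.95
    return success.mean()

for n in range(10, 201, 10):
    if assurance(n) >= target_assurance:
        print("smallest n meeting the assurance target:", n)
        break
```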

It is often of interest to estimate regression functions non-parametrically. Penalized regression (PR) is one statistically effective, well-studied solution to this problem. Unfortunately, in many cases, finding exact solutions to PR problems is computationally intractable. In this manuscript, we propose a mesh-based approximate solution (MBS) for those scenarios. MBS transforms the complicated functional minimization of PR into a finite-parameter, discrete convex minimization, which allows us to leverage the tools of modern convex optimization. We show applications of MBS in a number of explicit examples (including both uni- and multivariate regression), and explore how the number of parameters must increase with the sample size in order for MBS to maintain the rate-optimality of PR. We also give an efficient algorithm to minimize the MBS objective while effectively leveraging the sparsity inherent in MBS.
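The sketch below shows the mesh-based idea in one dimension under simple assumptions: the fit is parameterized by its values on a fixed mesh via a linear interpolation basis, and a discrete second-difference penalty replaces the functional smoothness penalty, so the problem reduces to a finite-dimensional quadratic (convex) program with a closed-form minimizer. The basis, penalty, and tuning parameter are illustrative choices, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)

mesh = np.linspace(0, 1, 40)                      # mesh points carrying the parameters
# linear interpolation ("hat function") basis evaluated at the observed x's
B = np.maximum(0, 1 - np.abs(x[:, None] - mesh[None, :]) / (mesh[1] - mesh[0]))
D = np.diff(np.eye(len(mesh)), n=2, axis=0)       # discrete second-difference operator
lam = 1.0

# Closed-form minimizer of ||y - B c||^2 + lam * ||D c||^2 over the mesh values c
c = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fitted = B @ c                                    # fitted values at the observed x's
```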

In this paper, the problem of numerical simulation of hydraulic fractures is considered. An efficient solution algorithm, based on the universal scheme introduced earlier by the authors for fractures propagating in elastic solids, is proposed. The algorithm utilizes an FEM-based subroutine to compute the deformation of the fractured material. Consequently, the computational scheme retains the relative simplicity of its original version while enabling one to deal with more advanced cases of fractured material properties and configurations. In particular, problems involving poroelasticity, plasticity, and spatially varying properties of the fractured material can be analyzed. The accuracy and efficiency of the proposed algorithm are verified against analytical benchmark solutions. The algorithm's capabilities are demonstrated using the example of a hydraulic fracture propagating in complex geological settings.
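The structure of such a coupling can be sketched as a fixed-point loop in which the FEM deformation subroutine is treated as a black box (stubbed below by a toy linear compliance operator) and a deliberately non-physical fluid update is used purely to show the iteration with under-relaxation; none of the numbers or operators correspond to the authors' scheme.

```python
import numpy as np

n = 20                                  # number of nodes along the fracture
# toy SPD compliance matrix standing in for the FEM deformation subroutine
compliance = np.linalg.inv(np.eye(n) * 2.0 + np.diag(-np.ones(n - 1), 1)
                           + np.diag(-np.ones(n - 1), -1))

def solid_opening(pressure):
    """Hypothetical stand-in for the FEM subroutine: fracture opening from net pressure."""
    return compliance @ pressure

def fluid_pressure(opening, p_inlet=1.0):
    """Toy update: pressure decays along the fracture, damped by the local opening
    (illustration only, not a physical lubrication model)."""
    return np.linspace(p_inlet, 0.0, n) / (1.0 + opening)

p = np.full(n, 0.5)
for it in range(100):                   # Picard (fixed-point) iteration
    w = solid_opening(p)
    p_new = fluid_pressure(w)
    if np.max(np.abs(p_new - p)) < 1e-8:
        break
    p = 0.5 * p + 0.5 * p_new           # under-relaxation for stability
print(f"converged after {it + 1} iterations")
```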

Probabilistic time series forecasting is crucial in many application domains, such as retail, e-commerce, finance, and biology. With the increasing availability of large volumes of data, a number of neural architectures have been proposed for this problem. In particular, Transformer-based methods achieve state-of-the-art performance on real-world benchmarks. However, these methods require a large number of parameters to be learned, which imposes high memory requirements on the computational resources for training such models. To address this problem, we introduce a novel Bidirectional Temporal Convolutional Network (BiTCN), which requires an order of magnitude fewer parameters than a common Transformer-based approach. Our model combines two Temporal Convolutional Networks (TCNs): the first network encodes future covariates of the time series, whereas the second network encodes past observations and covariates. We jointly estimate the parameters of an output distribution via these two networks. Experiments on four real-world datasets show that our method performs on par with four state-of-the-art probabilistic forecasting methods, including a Transformer-based approach and WaveNet, on two point metrics (sMAPE, NRMSE) as well as on a set of range metrics (quantile loss percentiles) in the majority of cases. Second, we demonstrate that our method requires significantly fewer parameters than Transformer-based methods, which means the model can be trained faster and with significantly lower memory requirements, in turn reducing the infrastructure cost of deploying these models.
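A minimal PyTorch sketch of the two-branch idea is given below, assuming aligned past and future windows of equal length: one causal TCN looks backward over past observations and covariates, the other looks forward over known future covariates (implemented by flipping time), and their features jointly parameterize a Gaussian per time step. Layer sizes and the choice of output distribution are assumptions, not the exact BiTCN architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """Left-padded 1-D convolution: the output at time t only sees inputs <= t."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                        # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.pad, 0)))

class TwoBranchTCN(nn.Module):
    """Toy two-branch TCN: a backward-looking branch over past observations and
    covariates, a forward-looking branch over future covariates (via time flip),
    combined to parameterize a Gaussian output per time step."""
    def __init__(self, past_ch, future_ch, hidden=32):
        super().__init__()
        self.past_net = nn.Sequential(
            CausalConv1d(past_ch, hidden, 3), nn.ReLU(),
            CausalConv1d(hidden, hidden, 3, dilation=2), nn.ReLU())
        self.future_net = nn.Sequential(
            CausalConv1d(future_ch, hidden, 3), nn.ReLU(),
            CausalConv1d(hidden, hidden, 3, dilation=2), nn.ReLU())
        self.head = nn.Conv1d(2 * hidden, 2, 1)   # mean and log-scale per time step

    def forward(self, past, future_cov):          # both: (batch, channels, time)
        h_past = self.past_net(past)
        h_future = torch.flip(self.future_net(torch.flip(future_cov, dims=[2])), dims=[2])
        out = self.head(torch.cat([h_past, h_future], dim=1))
        return out[:, 0], out[:, 1].exp()         # mean and scale of the output distribution
```

Training such a sketch would minimize the Gaussian negative log-likelihood of the observed series under the predicted mean and scale.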

The sum-utility maximization problem is known to be important in the energy systems literature. The conventional assumption made to address this problem is that the utility is concave. For some key applications, however, such an assumption is not reasonable and does not accurately reflect the actual behavior of the consumer. To address this issue, the authors pose and address a more general optimization problem, namely by assuming the consumer's utility to be sigmoidal and in a given class of functions. The considered class of functions is very attractive for at least two reasons. First, the classical NP-hardness issue associated with sum-utility maximization is circumvented. Second, the considered class of functions encompasses well-known performance metrics used to analyze problems of pricing and energy-efficiency. This allows one to design a new and optimal inclining block rates (IBR) pricing policy, which also has the virtue of flattening the power consumption and reducing the peak power. We also show how to maximize the energy-efficiency with a low-complexity algorithm. When compared with existing policies, simulations fully support the benefit of using the proposed approach.
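As a toy illustration of why sigmoidal utilities make the problem non-concave, and what a naive baseline looks like, the sketch below brute-forces a small sum-of-sigmoids allocation over a discretized power grid subject to a total budget. The utility parameters and budget are arbitrary, and the exhaustive search is exactly the kind of combinatorial cost that the structural results for the considered function class avoid.

```python
import numpy as np
from itertools import product

# Toy brute-force baseline: maximize a sum of sigmoidal (non-concave) utilities
# over a coarse power grid, subject to a total power budget. Parameters are arbitrary.
def sigmoid_utility(p, a, b):
    return 1.0 / (1.0 + np.exp(-a * (p - b)))

a = np.array([2.0, 1.0, 1.5])                    # steepness of each consumer's utility
b = np.array([1.0, 2.0, 1.5])                    # inflection (demand) points
budget = 4.0
grid = np.linspace(0.0, budget, 41)

best_val, best_alloc = -np.inf, None
for alloc in product(grid, repeat=3):            # exhaustive search over the grid
    if sum(alloc) <= budget:
        val = sum(sigmoid_utility(p, ai, bi) for p, ai, bi in zip(alloc, a, b))
        if val > best_val:
            best_val, best_alloc = val, alloc
print(best_alloc, best_val)
```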

Knowledge graph reasoning, which aims at predicting missing facts by reasoning with the observed facts, is critical to many applications. The problem has been widely explored by traditional logic rule-based approaches and recent knowledge graph embedding methods. A principled logic rule-based approach is the Markov Logic Network (MLN), which is able to leverage domain knowledge expressed in first-order logic and meanwhile handle the uncertainty of the rules. However, inference in MLNs is usually very difficult due to the complicated graph structures. In contrast, knowledge graph embedding methods (e.g. TransE, DistMult) learn effective entity and relation embeddings for reasoning, which are much more effective and efficient, but they are unable to leverage domain knowledge. In this paper, we propose the probabilistic Logic Neural Network (pLogicNet), which combines the advantages of both methods. A pLogicNet defines the joint distribution of all possible triplets by using a Markov logic network with first-order logic, which can be efficiently optimized with the variational EM algorithm. In the E-step, a knowledge graph embedding model is used to infer the missing triplets, while in the M-step, the weights of the logic rules are updated based on both the observed and predicted triplets. Experiments on multiple knowledge graphs demonstrate the effectiveness of pLogicNet over many competitive baselines.
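A heavily simplified sketch of the alternation is given below: in the E-step a DistMult-style embedding score labels the unobserved candidate triplets, and in the M-step the rule weights are updated by a pseudo-likelihood-style gradient step over toy rule features. The embeddings, rules, and facts are all illustrative placeholders rather than the pLogicNet model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_ent, n_rel, dim = 30, 3, 8
E = rng.normal(size=(n_ent, dim))                 # entity embeddings (untrained, illustrative)
R = rng.normal(size=(n_rel, dim))                 # relation embeddings
w = np.zeros(2)                                   # weights of two hypothetical logic rules

def emb_prob(h, r, t):
    """DistMult-style probability that triplet (h, r, t) holds."""
    return 1 / (1 + np.exp(-np.sum(E[h] * R[r] * E[t])))

def rule_features(h, r, t, facts):
    """Toy binary rule features: 'r is symmetric' and 'h is linked to t by some relation'."""
    return np.array([float((t, r, h) in facts),
                     float(any((h, r2, t) in facts for r2 in range(n_rel)))])

observed = {(0, 0, 1), (1, 0, 0), (2, 1, 3), (3, 2, 4)}
hidden = [(3, 1, 2), (4, 2, 3), (0, 1, 2)]        # unobserved candidate triplets

for _ in range(10):
    # E-step: use the embedding model to decide which hidden triplets are likely true
    inferred = {x for x in hidden if emb_prob(*x) > 0.5}
    facts = observed | inferred
    # M-step: pseudo-likelihood gradient step on the rule weights, treating
    # inferred triplets as positives and the remaining hidden ones as negatives
    grad = np.zeros_like(w)
    for x in hidden:
        f = rule_features(*x, facts)
        p = 1 / (1 + np.exp(-f @ w))
        grad += (float(x in inferred) - p) * f
    w += 0.5 * grad
print("learned rule weights:", w)
```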

Most image completion methods produce only one result for each masked input, although there may be many reasonable possibilities. In this paper, we present an approach for pluralistic image completion - the task of generating multiple and diverse plausible solutions for image completion. A major challenge faced by learning-based approaches is that there is usually only one ground truth training instance per label. As such, sampling from conditional VAEs still leads to minimal diversity. To overcome this, we propose a novel and probabilistically principled framework with two parallel paths. One is a reconstructive path that extends the VAE through a latent space that covers all partial images with different mask sizes, and imposes priors that adapt to the number of pixels. The other is a generative path for which the conditional prior is coupled to distributions obtained in the reconstructive path. Both are supported by GANs. We also introduce a new short+long term attention layer that exploits distant relations among decoder and encoder features, improving appearance consistency. When tested on datasets with buildings (Paris), faces (CelebA-HQ), and natural images (ImageNet), our method not only generates higher-quality completion results but also produces multiple and diverse plausible outputs.
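The short+long term attention idea can be sketched as a layer that lets each decoder location attend both to other decoder locations ("short term") and to encoder features ("long term"), assuming the two feature maps share channel count and spatial size. This is a minimal PyTorch sketch of the mechanism, not the paper's exact layer.

```python
import torch
import torch.nn as nn

class ShortLongTermAttention(nn.Module):
    """Toy sketch: self-attention within the decoder feature map ('short term')
    plus attention from decoder queries to encoder features ('long term')."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k_dec = nn.Conv2d(channels, channels // 8, 1)
        self.v_dec = nn.Conv2d(channels, channels, 1)
        self.k_enc = nn.Conv2d(channels, channels // 8, 1)
        self.v_enc = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(2))   # learned mixing weights, start at 0

    def forward(self, dec, enc):                    # dec, enc: (B, C, H, W), same shape
        B, C, H, W = dec.shape
        q = self.q(dec).flatten(2)                  # (B, C/8, HW)

        def attend(k, v):
            att = torch.softmax(q.transpose(1, 2) @ k.flatten(2), dim=-1)   # (B, HW, HW)
            return (v.flatten(2) @ att.transpose(1, 2)).view(B, C, H, W)

        short = attend(self.k_dec(dec), self.v_dec(dec))   # relations inside the decoder map
        long_ = attend(self.k_enc(enc), self.v_enc(enc))   # relations to encoder features
        return dec + self.gamma[0] * short + self.gamma[1] * long_
```

Because `gamma` starts at zero, the layer initially acts as an identity and gradually learns how much of each attention result to mix back into the decoder features.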
