
This work introduces a novel cause-effect relation in Markov decision processes based on the probability-raising principle. Sets of states are first considered as causes and effects; the notion is then extended to regular path properties, first as effects and subsequently as causes. The paper lays the mathematical foundations and analyzes the algorithmic properties of these cause-effect relations, including algorithms for checking cause conditions for a given effect and for deciding the existence of probability-raising causes. Since the definition admits causes with sub-optimal coverage properties, quality measures for causes inspired by concepts from statistical analysis are studied, including recall, coverage ratio, and f-score. The computational complexity of finding optimal causes with respect to these measures is analyzed.
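For orientation, one way to phrase a (global) probability-raising requirement and the f-score quality measure is sketched below. The notation (candidate cause C, effect E, schedulers of the MDP M) is ours and is not taken verbatim from the paper.

```latex
% Hedged sketch (notation ours): C is a candidate cause, E the effect,
% and \sigma ranges over the schedulers of the MDP M.
\Pr^{\sigma}_{M}(\lozenge E \mid \lozenge C) \;>\; \Pr^{\sigma}_{M}(\lozenge E)
\qquad \text{for all schedulers } \sigma \text{ with } \Pr^{\sigma}_{M}(\lozenge C) > 0.

% Quality measures in the spirit of statistical analysis; precision and
% recall here are conditional reachability probabilities.
\text{recall} = \Pr(\lozenge C \mid \lozenge E), \qquad
\text{f-score} = \frac{2\,\text{precision}\cdot\text{recall}}
                      {\text{precision}+\text{recall}}.
```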

Related content

Statistical significance of both the original and the replication study is a commonly used criterion to assess replication attempts, also known as the two-trials rule in drug development. However, replication studies are sometimes conducted even though the original study is non-significant, in which case Type-I error rate control across both studies is no longer guaranteed. We propose an alternative method to assess replicability using the sum of the p-values from the two studies. The approach provides a combined p-value and can be calibrated to control the overall Type-I error rate at the same level as the two-trials rule, while allowing for replication success even if the original study is non-significant. The unweighted version requires a less restrictive significance level at replication if the original study is already convincing, which facilitates sample size reductions of up to 10%. Downweighting the original study accounts for possible bias but requires a more stringent significance level and larger sample sizes at replication. Data from four large-scale replication projects are used to illustrate the proposed method and to compare it with the two-trials rule, meta-analysis, and Fisher's combination method.
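A minimal sketch of the unweighted sum-of-p-values combination is shown below. It uses only the standard fact that, under the null, the sum of two independent uniform p-values follows an Irwin-Hall(2) distribution; the weighting and calibration described in the abstract are not reproduced here.

```python
# Hedged sketch: combined p-value from the sum of two independent p-values.
# Under the null, p_o + p_r ~ Irwin-Hall(2), so the CDF has a closed form.
def combined_p_sum(p_original: float, p_replication: float) -> float:
    s = p_original + p_replication
    if s <= 1.0:
        return 0.5 * s ** 2
    return 1.0 - 0.5 * (2.0 - s) ** 2

# Example: a non-significant original study can still lead to replication
# success if the replication p-value is small enough.
print(combined_p_sum(0.08, 0.005))  # ~0.0036
```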

Depth perception in volumetric visualization plays a crucial role in the understanding and interpretation of volumetric data. Numerous visualization techniques, many of which rely on physically based optical effects, promise to improve depth perception but often do so without considering camera movement or the content of the volume. As a result, the findings from previous studies may not be directly applicable to crowded volumes, where a large number of contained structures disrupts spatial perception. Crowded volumes therefore require special analysis and visualization tools with sparsification capabilities. Interactivity is an integral part of visualizing and exploring crowded spaces, but has received little attention in previous studies. To address this gap, we conducted a study to assess the impact of different rendering techniques on depth perception in crowded volumes, with a particular focus on the effects of camera movement. The results show that depth perception considering camera motion depends much more on the content of the volume than on the chosen visualization technique. Furthermore, we found that traditional rendering techniques, which have often performed poorly in previous studies, showed comparable performance to physically based methods in our study.

We propose a novel methodology for validating software product line (PL) models by integrating Statistical Model Checking (SMC) with Process Mining (PM). Our approach focuses on the feature-oriented language QFLan from the PL engineering domain, which allows modeling PLs with rich cross-tree and quantitative constraints, as well as aspects of dynamic PLs such as staged configurations. This richness leads to models with infinite state spaces, requiring simulation-based analysis techniques such as SMC; we illustrate this with a running example whose state space is infinite. SMC generates samples of the system dynamics to estimate properties such as event probabilities or expected values. PM, on the other hand, applies data-driven techniques to execution logs to identify and reason about the underlying execution process. In this paper we propose, for the first time, applying PM techniques to the byproducts of SMC simulations in order to enhance the utility of SMC analyses. Typically, when SMC results are unexpected, modelers must determine, in a black-box manner, whether they stem from actual system characteristics or from modeling bugs. We improve on this by using PM to provide a white-box perspective on the observed system dynamics: samples from SMC are fed into PM tools, producing a compact graphical representation of the observed dynamics, and the mined PM model is then transformed into a QFLan model accessible to PL engineers. Using two well-known PL models, we demonstrate the effectiveness and scalability of our methodology in pinpointing issues and suggesting fixes. We also show its generality by applying it to the security domain.
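To make the PM-on-SMC-samples idea concrete, the sketch below mines a directly-follows graph from a handful of simulated traces. The trace contents and activity names are made-up placeholders; the actual pipeline feeds QFLan/SMC samples into dedicated process-mining tools rather than this toy counter.

```python
from collections import Counter
from itertools import pairwise

# Hedged sketch: mining a directly-follows graph from SMC sample traces.
# These traces are invented placeholders, not output of QFLan.
smc_traces = [
    ["install_base", "add_feature_A", "deploy"],
    ["install_base", "add_feature_B", "add_feature_A", "deploy"],
    ["install_base", "add_feature_B", "deploy"],
]

dfg = Counter()
for trace in smc_traces:
    dfg.update(pairwise(trace))          # count consecutive activity pairs

for (src, dst), count in dfg.most_common():
    print(f"{src} -> {dst}: {count}")
```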

Most existing neural network-based approaches for solving stochastic optimal control problems via the associated backward dynamic programming principle rely on the ability to simulate the underlying state variables. In some problems, however, this simulation is infeasible, which leads to a discretization of the state variable space and the need to train one neural network for each data point; this approach becomes computationally inefficient for large state variable spaces. In this paper, we consider a class of such stochastic optimal control problems and introduce an effective solution based on multitask neural networks. To train the multitask neural network, we introduce a novel scheme that dynamically balances learning across tasks. Through numerical experiments on real-world derivatives pricing problems, we demonstrate that our method outperforms state-of-the-art approaches.
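As a rough illustration of dynamic task balancing in a multitask network, the sketch below weights per-task losses by their inverse running average. This is a common heuristic chosen purely for illustration; it is not necessarily the balancing scheme proposed in the paper, and all dimensions and data are placeholders.

```python
import torch
import torch.nn as nn

# Hedged sketch: shared trunk + per-task heads, with loss weights updated
# each step from running per-task losses (inverse-loss weighting heuristic).
n_tasks, d_in, hidden = 4, 8, 64
trunk = nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU())
heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_tasks))
opt = torch.optim.Adam(list(trunk.parameters()) + list(heads.parameters()), lr=1e-3)
running = torch.ones(n_tasks)            # running average of per-task losses

for step in range(100):
    x = torch.randn(32, d_in)            # placeholder inputs
    y = torch.randn(32, n_tasks)         # placeholder targets
    h = trunk(x)
    losses = torch.stack([((head(h).squeeze(-1) - y[:, t]) ** 2).mean()
                          for t, head in enumerate(heads)])
    running = 0.9 * running + 0.1 * losses.detach()
    weights = 1.0 / running
    weights = weights / weights.sum() * n_tasks
    loss = (weights * losses).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```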

Efficient allocation of resources to activities is pivotal in executing business processes but remains challenging. While resource allocation methodologies are well established in domains such as manufacturing, their application within business process management remains limited. Existing methods often do not scale to large processes with numerous activities or do not optimize across multiple cases. This paper addresses this gap by proposing two learning-based methods for resource allocation in business processes. The first leverages Deep Reinforcement Learning (DRL) to learn near-optimal policies by taking actions in the business process. The second is a score-based value function approximation approach that learns the weights of a set of curated features to prioritize resource assignments. To evaluate the proposed approaches, we first designed six distinct business processes with archetypal process flows and characteristics; these were then connected to form three realistically sized business processes. We benchmarked our methods against traditional heuristics and existing resource allocation methods. The results show that our methods learn adaptive resource allocation policies that outperform or are competitive with the benchmarks in five of the six individual business processes. The DRL approach outperforms all benchmarks on all three composite business processes and finds a policy that is, on average, 13.1% better than the best-performing benchmark.
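The core of the score-based idea can be sketched as ranking candidate (resource, activity) assignments by a weighted sum of features. The feature names, values, and weights below are invented for illustration; in the method itself the weights are learned rather than fixed by hand.

```python
import numpy as np

# Hedged sketch: prioritize assignments by a learned linear score.
# Features and weights here are made-up placeholders.
features = {
    ("r1", "approve_claim"):   np.array([0.9, 2.0, 0.1]),  # e.g. suitability, queue length, idle time
    ("r1", "check_documents"): np.array([0.4, 5.0, 0.1]),
    ("r2", "approve_claim"):   np.array([0.2, 2.0, 0.8]),
    ("r2", "check_documents"): np.array([0.8, 5.0, 0.8]),
}
weights = np.array([1.0, -0.2, 0.5])     # would be learned in the actual method

best = max(features, key=lambda pair: weights @ features[pair])
print("next assignment:", best)
```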

Polynomial approximations of functions are widely used in scientific computing. In many applications, the polynomial approximation is required to be non-negative (resp. non-positive), or bounded within a given range, due to constraints posed by the underlying physical problem. Efficient numerical methods are thus needed to enforce such conditions. In this paper, we discuss effective numerical algorithms for polynomial approximation under non-negativity constraints. We first formulate the constrained optimization problem together with its primal and dual forms, and then discuss efficient first-order convex optimization methods, with a particular focus on high-dimensional problems. Numerical examples in up to $200$ dimensions demonstrate the effectiveness and scalability of the methods.
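A small one-dimensional sketch of the constrained formulation is given below: a least-squares polynomial fit with non-negativity enforced on a grid of check points, solved here with a generic SLSQP solver rather than the first-order methods discussed in the paper. The target function, degree, and grids are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch: Legendre least-squares fit subject to p(x) >= 0 on check points.
deg = 8
x_fit = np.linspace(-1, 1, 200)          # fitting nodes
x_chk = np.linspace(-1, 1, 400)          # points where non-negativity is enforced
f = np.maximum(np.sin(3 * x_fit), 0.0)   # a non-negative target function

V_fit = np.polynomial.legendre.legvander(x_fit, deg)
V_chk = np.polynomial.legendre.legvander(x_chk, deg)

def obj(c):
    r = V_fit @ c - f
    return 0.5 * r @ r

def grad(c):
    return V_fit.T @ (V_fit @ c - f)

cons = {"type": "ineq", "fun": lambda c: V_chk @ c, "jac": lambda c: V_chk}
c0 = np.linalg.lstsq(V_fit, f, rcond=None)[0]    # unconstrained fit as warm start
res = minimize(obj, c0, jac=grad, constraints=[cons], method="SLSQP")
print("min value on check grid:", (V_chk @ res.x).min())
```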

Graph Neural Networks (GNNs) have emerged in recent years as a powerful tool for learning tasks across a wide range of graph domains in a data-driven fashion. Based on a message-passing mechanism, GNNs have gained increasing popularity due to their intuitive formulation, which is closely linked to the Weisfeiler-Lehman (WL) test for graph isomorphism, to which they have been proven equivalent in expressive power. From a theoretical point of view, GNNs have been shown to be universal approximators, and their generalization capability, namely bounds on their Vapnik-Chervonenkis (VC) dimension, has recently been investigated for GNNs with piecewise polynomial activation functions. The aim of our work is to extend this analysis of the VC dimension of GNNs to other commonly used activation functions, such as the sigmoid and hyperbolic tangent, using the framework of Pfaffian function theory. Bounds are provided with respect to the architecture parameters (depth, number of neurons, input size) as well as with respect to the number of colors resulting from the 1-WL test applied to the graph domain. The theoretical analysis is supported by a preliminary experimental study.
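Since the bounds are stated in terms of the number of 1-WL colors, a minimal colour-refinement sketch may help fix ideas; the graphs and number of rounds below are arbitrary examples.

```python
# Hedged sketch of 1-WL colour refinement (the procedure whose resulting
# number of colours appears in the VC-dimension bounds discussed above).
def wl_colors(adj, rounds=3):
    """adj: dict node -> list of neighbours; returns a colour per node."""
    colors = {v: 0 for v in adj}                     # uniform initial colouring
    for _ in range(rounds):
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                      for v in adj}
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return colors

# Example: a 4-cycle and a path on 4 nodes yield different colour multisets.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(sorted(wl_colors(cycle).values()), sorted(wl_colors(path).values()))
```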

For appropriate Gaussian processes, as a corollary of the majorizing measure theorem, Michel Talagrand (1987) proved that the event that the supremum is significantly larger than its expectation can be covered by a set of half-spaces whose sum of measures is small. We prove a conjecture of Talagrand that is the analog of this result in the Bernoulli-$p$ setting, and answer a question of Talagrand on the analogous result for general positive empirical processes.

We address modelling and computational issues for multiple treatment effect inference under many potential confounders. A primary issue is preventing the harmful effects of omitting relevant covariates (under-selection) while avoiding over-selection issues that introduce substantial variance and a bias related to the non-random over-inclusion of covariates. We propose a novel empirical Bayes framework for Bayesian model averaging that learns from data the extent to which the inclusion of key covariates, specifically those highly associated with the treatments, should be encouraged. A key challenge is computational. We develop fast algorithms, including an expectation-propagation variational approximation and simple stochastic gradient optimization algorithms, to learn the hyper-parameters from data. Our framework builds on widely used ingredients and largely existing software, and is implemented within the R package mombf available on CRAN. This work is motivated by, and illustrated in, two applications. The first is the association between salary variation and discriminatory factors. The second, which has been debated in previous work, is the association between abortion policies and crime. Our approach provides insights that differ from previous analyses, especially in situations with weaker treatment effects.

Intelligent control systems for manual operations in industrial production are now being implemented in many industries. Such systems use high-resolution cameras and computer vision algorithms to automatically track the operator's manipulations and prevent technological errors during assembly, while also monitoring compliance with safety regulations in the workspace. As a result, the defect rate of manufactured products and the number of accidents during manual assembly are reduced. Before deploying an intelligent control system in real production, its efficiency must be evaluated; to this end, experiments were carried out on a test stand for manual operations control systems. This paper proposes a methodology for calculating the efficiency indicators. The mathematical approach is based on computing the intersection over union (IoU) of the real and predicted time intervals between assembly stages. The results show high precision in tracking the validity of manual assembly and do not depend on the duration of the assembly process.
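The temporal IoU underlying the efficiency indicators can be written as a one-line ratio; the intervals in the example below are invented for illustration.

```python
# Hedged sketch: IoU between a real and a predicted time interval of an
# assembly stage (interval bounds in seconds).
def interval_iou(real: tuple[float, float], pred: tuple[float, float]) -> float:
    inter = max(0.0, min(real[1], pred[1]) - max(real[0], pred[0]))
    union = (real[1] - real[0]) + (pred[1] - pred[0]) - inter
    return inter / union if union > 0 else 0.0

# Example: the predicted stage starts and ends 1 s late.
print(interval_iou((10.0, 20.0), (11.0, 21.0)))  # ~0.818
```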
