
Determination of the posterior probability for go/no-go decisions and of predictive power is becoming increasingly common for resource optimization in clinical investigation. There is a vast published literature on these topics; however, the terminology is not used consistently across it. Further, there is no consolidated presentation of the various concepts of the probability of success. We attempt to fill this gap. This paper first provides a detailed derivation of these probability of success measures under the frequentist and Bayesian paradigms in a general setting. Subsequently, we present the analytical formulas for these probability of success measures for continuous, binary, and time-to-event endpoints separately. This paper can be used as a single point of reference to determine the following measures: (a) the conditional power (CP) based on interim results, (b) the predictive power of success (PPoS) based on interim results with or without a prior distribution, and (c) the probability of success (PoS) for a prospective trial at the design stage. We discuss both clinical success and trial success. The discussion is based mostly on a normal approximation for the prior distribution and for the estimate of the parameter of interest. In addition, predictive power using a beta prior for the binomial case is presented. Some examples are given for illustration. R functions to calculate CP and PPoS are available through the LongCART package. An R shiny app is also available at //ppos.herokuapp.com/.
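
As a concrete illustration of the beta-prior binomial case, the sketch below computes PPoS for a single-arm trial at an interim look: the Beta posterior induces a beta-binomial predictive distribution over the remaining responses, and PPoS is the predictive probability that the completed trial clears a one-sided exact test. This is a minimal Python sketch under stated assumptions (the paper's own functions live in the LongCART R package), and the design parameters in the example are invented for illustration.

    # Minimal sketch of PPoS for a single-arm binomial endpoint with a
    # Beta prior -- an illustration, not the LongCART implementation.
    from scipy.stats import betabinom, binomtest

    def ppos_binomial(x_int, n_int, n_total, a=1.0, b=1.0, p0=0.3, alpha=0.025):
        """Predictive probability that the completed trial rejects H0: p <= p0.

        x_int, n_int : interim successes and interim sample size
        n_total      : planned final sample size
        a, b         : Beta(a, b) prior parameters
        """
        n_rem = n_total - n_int                  # patients still to enrol
        # Posterior after the interim data: Beta(a + x, b + n - x)
        a_post, b_post = a + x_int, b + n_int - x_int
        ppos = 0.0
        for y in range(n_rem + 1):               # possible future successes
            x_final = x_int + y
            # Trial success: one-sided exact test significant at level alpha
            pval = binomtest(x_final, n_total, p0, alternative="greater").pvalue
            if pval <= alpha:
                # beta-binomial predictive probability of y future successes
                ppos += betabinom.pmf(y, n_rem, a_post, b_post)
        return ppos

    print(ppos_binomial(x_int=18, n_int=40, n_total=80))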

Related content

Robot applications in our daily life are increasing at an unprecedented pace. As robots will soon operate "out in the wild", we must identify the safety and security vulnerabilities they will face. Robotics researchers and manufacturers focus their attention on new, cheaper, and more reliable applications, yet they often disregard operability in adversarial environments, where a trusted or untrusted user can jeopardize or even alter the robot's task. In this paper, we identify a new paradigm of security threats in the next generation of robots. These threats fall outside the known hardware- or network-based ones, and we must find new solutions to address them. These new threats include malicious use of the robot's privileged access, tampering with the robot's sensor system, and tricking the robot's deliberation into harmful behaviors. We provide a taxonomy of attacks that exploit these vulnerabilities with realistic examples, and we outline effective countermeasures to better prevent, detect, and mitigate them.

Most computational approaches to Bayesian experimental design require making posterior calculations repeatedly for a large number of potential designs and/or simulated datasets. This can be expensive and prohibits scaling these methods up to models with many parameters, or designs with many unknowns to select. We introduce an efficient alternative approach without posterior calculations, based on optimising the expected trace of the Fisher information, as discussed by Walker (2016). We illustrate drawbacks of this approach, including a lack of invariance to reparameterisation and a tendency to encourage designs in which one parameter combination is inferred accurately but no others are. We show these can be avoided by using an adversarial approach: the experimenter must select their design while a critic attempts to select the least favourable parameterisation. We present theoretical properties of this approach and show it can be used with gradient-based optimisation methods to find designs efficiently in practice.
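
To make the min-max concrete, consider a linear-Gaussian model y ~ N(X theta, I), where the Fisher information is J(w) = sum_i w_i x_i x_i^T for design weights w. Minimising trace(B^T J B) over volume-preserving reparameterisations B has the closed form p * det(J)^(1/p) (by AM-GM on the eigenvalues of J), so in this special case the adversarial game value is invariant to reparameterisation. The sketch below is our numerical illustration of that calculation, not the paper's code, and the candidate points are synthetic.

    # Our illustration (not the paper's code) of the adversarial
    # Fisher-information criterion for a linear-Gaussian model.
    import numpy as np

    rng = np.random.default_rng(0)
    candidates = rng.normal(size=(50, 2))        # candidate design points x_i

    def fim(weights):
        # J(w) = sum_i w_i x_i x_i^T, assuming unit noise variance
        return (candidates * weights[:, None]).T @ candidates

    def adversarial_trace(weights):
        # Critic's optimum over volume-preserving reparameterisations:
        # min_{det B = 1} trace(B^T J B) = p * det(J)^(1/p) by AM-GM
        J = fim(weights)
        p = J.shape[0]
        return p * np.linalg.det(J) ** (1.0 / p)

    w = np.full(len(candidates), 1.0 / len(candidates))
    print(adversarial_trace(w))                  # criterion to maximise over w

Note how the plain trace criterion would reward piling all weight on the direction with the largest candidate norms, whereas the adversarial value vanishes whenever J is singular, penalising designs that leave some parameter combination unidentified.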

In the single winner determination problem, we have n voters and m candidates, and each voter j incurs a cost c(i, j) if candidate i is chosen. Our objective is to choose a candidate that minimizes the expected total cost incurred by the voters; however, as we only have access to the agents' preference rankings over the outcomes, a loss of efficiency is inevitable. This loss of efficiency is quantified by distortion. We give an instance of the metric single winner determination problem for which any randomized social choice function has distortion at least 2.063164. This disproves the long-standing conjecture that there exists a randomized social choice function with a worst-case distortion of at most 2.
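
For intuition about the quantity being bounded, the toy computation below (our example, not from the paper) places voters and candidates on a line, takes c(i, j) to be the distance between candidate i and voter j, and evaluates the distortion of a uniform lottery over the two candidates.

    # Toy illustration of metric distortion: expected social cost of a
    # randomized rule divided by the optimal candidate's social cost.
    import numpy as np

    voters = np.array([0.0, 0.0, 1.0])               # voter positions
    cands = np.array([0.0, 1.0])                     # candidate positions
    cost = np.abs(voters[:, None] - cands[None, :])  # rows: voters, cols: candidates

    social = cost.sum(axis=0)                        # total cost per candidate
    lottery = np.array([0.5, 0.5])                   # a randomized social choice
    print((lottery @ social) / social.min())         # distortion = 1.5 here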

Building performance is commonly calculated during the last phases of design, when most design specifications are fixed and unlikely to be substantially modified. Predictive models could play a significant role in informing architects and designers of the impact of their design decisions on energy consumption in buildings during early design stages. A building outline is a significant predictor of the final energy consumption and is conceptually determined by architects in the early design phases. This paper evaluates the impact of a building's outline on energy consumption, using synthetic data to develop predictive models for estimating a building's energy consumption. Four office outlines are studied: square, T, U, and L shapes. Besides the shape parameter, other building features commonly used in the literature (i.e., window-to-wall ratio (WWR), external wall material properties, glazing U-value, window shading depth, and building orientation) are used to generate the data distribution with a probabilistic approach. The results show that buildings with square shapes are, in general, more energy-efficient than buildings with T, U, and L shapes of the same volume. Also, T, U, and L shape samples show very similar behavior in terms of energy consumption. Principal Component Analysis (PCA) is applied to assess the variables' contributions to the data distribution; the results show that wall material specifications explain about 40% of the variation in the data. Finally, we apply polynomial regression models of different degrees of complexity to predict the synthesized building models' energy consumption based on their outlines. The results show that degree-2 polynomial models, which fit the data with an R-squared (coefficient of determination) above 98%, can predict new samples with high accuracy.
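
The pipeline below sketches the final modelling step under stated assumptions: the features, data, and coefficients are synthetic placeholders standing in for the paper's simulated buildings, and only the degree-2 polynomial regression mirrors the text.

    # Sketch of a degree-2 polynomial fit for early-design energy prediction;
    # X and y here are synthetic stand-ins for the paper's simulated data.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(500, 5))    # e.g. WWR, U-value, shading, orientation, wall spec
    y = 1.0 + X @ rng.uniform(size=5) + (X ** 2) @ rng.uniform(size=5)

    model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    model.fit(X, y)
    print(r2_score(y, model.predict(X)))   # the paper reports R-squared above 0.98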

Together with impressive advances touching every aspect of our society, AI technology based on Deep Neural Networks (DNN) is raising increasing security concerns. While attacks operating at test time have monopolised the initial attention of researchers, backdoor attacks, exploiting the possibility of corrupting DNN models by interfering with the training process, represent a further serious threat undermining the dependability of AI techniques. In a backdoor attack, the attacker corrupts the training data so as to induce erroneous behaviour at test time. Test-time errors, however, are activated only in the presence of a triggering event corresponding to a properly crafted input sample. In this way, the corrupted network continues to work as expected for regular inputs, and the malicious behaviour occurs only when the attacker decides to activate the backdoor hidden within the network. In the last few years, backdoor attacks have been the subject of intense research activity focusing on both the development of new classes of attacks and the proposal of possible countermeasures. The goal of this overview paper is to review the works published to date, classifying the different types of attacks and defences proposed so far. The classification guiding the analysis is based on the amount of control that the attacker has on the training process, and on the capability of the defender to verify the integrity of the data used for training and to monitor the operations of the DNN at training and test time. As such, the proposed analysis is particularly suited to highlighting the strengths and weaknesses of both attacks and defences with reference to the application scenarios in which they operate.
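
The following sketch shows the data-corruption step in its simplest form (our illustration; the trigger shape, poisoning rate, and target class are arbitrary choices): a small patch is stamped onto a fraction of training images and their labels are flipped to the attacker's target.

    # Minimal sketch of backdoor data poisoning with a fixed trigger patch.
    import numpy as np

    def poison(images, labels, target_class, rate=0.05, seed=0):
        """images: (N, H, W) float array in [0, 1]; labels: (N,) int array."""
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
        images[idx, -3:, -3:] = 1.0      # 3x3 white corner square: the trigger
        labels[idx] = target_class       # attacker-chosen label
        return images, labels

    # At test time the network behaves normally on clean inputs; stamping the
    # same 3x3 patch onto any input activates the hidden backdoor.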

Trajectory optimization and model predictive control are essential techniques underpinning advanced robotic applications, ranging from autonomous driving to full-body humanoid control. State-of-the-art algorithms have focused on data-driven approaches that infer the system dynamics online and incorporate posterior uncertainty during planning and control. Despite their success, such approaches are still susceptible to catastrophic errors that may arise due to statistical learning biases, unmodeled disturbances, or even directed adversarial attacks. In this paper, we tackle the problem of dynamics mismatch and propose a distributionally robust optimal control formulation that alternates between two relative entropy trust-region optimization problems. Our method finds the worst-case maximum entropy Gaussian posterior over the dynamics parameters and the corresponding robust policy. Furthermore, we show that our approach admits a closed-form backward-pass for a certain class of systems. Finally, we demonstrate the resulting robustness on linear and nonlinear numerical examples.

Deep Learning algorithms have achieved state-of-the-art performance for Image Classification and have been used even in security-critical applications, such as biometric recognition systems and self-driving cars. However, recent works have shown that those algorithms, which can even surpass human capabilities, are vulnerable to adversarial examples. In Computer Vision, adversarial examples are images containing subtle perturbations generated by malicious optimization algorithms in order to fool classifiers. As an attempt to mitigate these vulnerabilities, numerous countermeasures have been constantly proposed in the literature. Nevertheless, devising an efficient defense mechanism has proven to be a difficult task, since many approaches have already been shown to be ineffective against adaptive attackers. Thus, this self-contained paper aims to provide readers of all backgrounds with a review of the latest research progress on Adversarial Machine Learning in Image Classification, from a defender's perspective. Here, novel taxonomies for categorizing adversarial attacks and defenses are introduced and discussions about the existence of adversarial examples are provided. Further, in contrast to existing surveys, it also gives relevant guidance that researchers should take into consideration when devising and evaluating defenses. Finally, based on the reviewed literature, some promising paths for future research are discussed.
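
As a canonical example of such a perturbation, the Fast Gradient Sign Method (FGSM) of Goodfellow et al. takes a single signed-gradient step that increases the classification loss; the PyTorch sketch below is a minimal version, with the epsilon budget chosen arbitrarily.

    # Minimal FGSM sketch: one signed-gradient step inside an L-infinity ball.
    import torch

    def fgsm(model, x, y, eps=8 / 255):
        """Return an adversarial example within eps of x in L-infinity norm."""
        x = x.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + eps * x.grad.sign()      # step that increases the loss
        return x_adv.clamp(0, 1).detach()    # keep pixels in the valid range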

There is a rising interest in studying the robustness of deep neural network classifiers against adversaries, with both advanced attack and defence techniques being actively developed. However, most recent work focuses on discriminative classifiers, which only model the conditional distribution of the labels given the inputs. In this paper, we propose the deep Bayes classifier, which improves classical naive Bayes with conditional deep generative models. We further develop detection methods for adversarial examples, which reject inputs whose negative log-likelihood under the generative model exceeds a threshold pre-specified using training data. Experimental results suggest that deep Bayes classifiers are more robust than deep discriminative classifiers, and that the proposed detection methods achieve high detection rates against many recently proposed attacks.
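
The rejection rule itself is simple; the sketch below (placeholder names, with an assumed 1% false-rejection budget) calibrates the threshold as a high quantile of the negative log-likelihoods of clean training inputs and flags anything above it.

    # Sketch of the NLL-threshold detection rule described in the abstract.
    import numpy as np

    def calibrate_threshold(nll_train, quantile=0.99):
        # e.g. allow a 1% false-rejection rate on clean training inputs
        return np.quantile(nll_train, quantile)

    def is_adversarial(nll_input, threshold):
        # reject inputs that the generative model finds too unlikely
        return nll_input > threshold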

While attributes have been widely used for person re-identification (Re-ID), which matches images of the same person across disjoint camera views, they serve either as extra features or in multi-task learning to assist the image-to-image person matching task. However, how to find a set of person images according to a given attribute description, which is very practical in many surveillance applications, remains a rarely investigated cross-modal matching problem in person Re-ID. In this work, we present this challenge and employ adversarial learning to formulate the attribute-image cross-modal person Re-ID model. By imposing a semantic consistency constraint across modalities as regularization, the adversarial learning generates image-analogous concepts for query attributes and matches them with images at both the global level and the semantic ID level. We conducted extensive experiments on three attribute datasets and demonstrated that the adversarial modelling is so far the most effective for the attribute-image cross-modal person Re-ID problem.

Methods that align distributions by minimizing an adversarial distance between them have recently achieved impressive results. However, these approaches are difficult to optimize with gradient descent and they often do not converge well without careful hyperparameter tuning and proper initialization. We investigate whether turning the adversarial min-max problem into an optimization problem by replacing the maximization part with its dual improves the quality of the resulting alignment and explore its connections to Maximum Mean Discrepancy. Our empirical results suggest that using the dual formulation for the restricted family of linear discriminators results in a more stable convergence to a desirable solution when compared with the performance of a primal min-max GAN-like objective and an MMD objective under the same restrictions. We test our hypothesis on the problem of aligning two synthetic point clouds on a plane and on a real-image domain adaptation problem on digits. In both cases, the dual formulation yields an iterative procedure that gives more stable and monotonic improvement over time.
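
For the linear case the collapse to a closed form is easy to see: with discriminators f(x) = w.x constrained to norm at most 1, the inner maximum of the mean discrepancy is attained at w proportional to the difference of means, so the min-max reduces to mean matching. The sketch below (our simplified reading, with synthetic inputs) contrasts the two objectives.

    # Our simplified illustration of the dual for linear discriminators:
    # max_{||w|| <= 1} w . (E_p[x] - E_q[x]) = || E_p[x] - E_q[x] ||.
    import numpy as np

    def primal_value(w, xp, xq):
        # inner objective a GAN-like critic would maximise by gradient steps
        return w @ (xp.mean(axis=0) - xq.mean(axis=0))

    def dual_value(xp, xq):
        # closed-form maximum: no inner loop, hence stable, monotonic updates
        return np.linalg.norm(xp.mean(axis=0) - xq.mean(axis=0))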
