
We present a data-driven approach to the quantitative verification of probabilistic programs and stochastic dynamical models. Our approach leverages neural networks to compute tight and sound bounds for the probability that a stochastic process hits a target condition within finite time. This problem subsumes a variety of quantitative verification questions, from the reachability and safety analysis of discrete-time stochastic dynamical models, to the study of assertion-violation and termination analysis of probabilistic programs. We rely on neural networks to represent supermartingale certificates that yield such probability bounds, which we compute using a counterexample-guided inductive synthesis loop: we train the neural certificate while tightening the probability bound over samples of the state space using stochastic optimisation, and then we formally check the certificate's validity over every possible state using satisfiability modulo theories; if we receive a counterexample, we add it to our set of samples and repeat the loop until validity is confirmed. We demonstrate on a diverse set of benchmarks that, thanks to the expressive power of neural networks, our method yields probability bounds that are smaller than or comparable to those of existing symbolic methods in all cases, and that our approach succeeds on models that are entirely beyond the reach of such alternative techniques.
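
A minimal sketch of the counterexample-guided inductive synthesis (CEGIS) loop described above; `train_certificate` and `smt_verify` are hypothetical stand-ins for the paper's learner (stochastic optimisation over samples) and verifier (an SMT query over the whole state space):

```python
# Skeleton of the CEGIS loop: train on samples, verify globally, refine.
import random

def train_certificate(samples):
    # Placeholder learner: in the paper this trains a neural supermartingale
    # while tightening the probability bound; here it returns dummy parameters.
    return [random.random() for _ in range(4)]

def smt_verify(params):
    # Placeholder verifier: check the supermartingale conditions for every
    # state via SMT. Returns None if valid, otherwise a counterexample state.
    return None  # pretend the certificate is valid for this sketch

def cegis(initial_samples, max_iters=100):
    samples = list(initial_samples)
    for _ in range(max_iters):
        params = train_certificate(samples)
        cex = smt_verify(params)
        if cex is None:
            return params          # certificate formally verified
        samples.append(cex)        # add the counterexample and retrain
    raise RuntimeError("no certificate found within the iteration budget")

certificate = cegis(initial_samples=[(0.0,), (1.0,)])
```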

Related Content

Neural Networks is the archival journal of the world's three oldest neural modelling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural networks research, from behavioural and brain modelling and learning algorithms, through mathematical and computational analyses, to engineering and technological applications of systems that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science, neuroscience, learning systems, mathematical and computational analysis, and engineering and applications. Official website:

We introduce a novel logic for asynchronous hyperproperties with a new mechanism to identify relevant positions on traces. While the new logic is more expressive than a related logic recently presented by Bozzelli et al., the model checking problem for finite-state models retains the same complexity. Beyond this, we study the model checking problem of our logic for pushdown models. We argue that the combination of asynchronicity and the non-regular model class studied in this paper constitutes the first suitable approach to hyperproperty model checking against recursive programs.

Current architectures are equipped with matrix computation units to accelerate AI and high-performance computing applications. Matrix multiplication and the vector outer product are two basic instruction types; the latter is lighter-weight since its inputs are vectors. It therefore offers more opportunities to develop flexible algorithms for problems beyond dense linear algebra and more possibilities to optimize the implementation. Stencil computations represent a common class of nested loops in scientific and engineering applications. This paper proposes a novel stencil algorithm using vector outer products. Unlike previous work, the new algorithm arises from the stencil definition in the scatter mode and is initially expressed with formulas of vector outer products. The implementation incorporates a set of optimizations that improve the memory reference pattern, execution pipeline, and data reuse by considering various algorithmic options and the data sharing between input vectors. Evaluation on a simulator shows that our design achieves a substantial speedup compared with a vectorized stencil algorithm.
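
As a toy illustration (not the paper's algorithm), the sketch below computes a 2D stencil in scatter mode with rank-1 outer-product contributions, under the assumption that the coefficient block factors as an outer product:

```python
# Scatter-mode stencil: each input point scatters x[i, j] * (a ⊗ b) into
# its neighbourhood. Assumes a separable coefficient block and radii >= 1.
import numpy as np

def scatter_stencil_outer(x, a, b):
    ra, rb = len(a) // 2, len(b) // 2
    C = np.outer(a, b)                        # rank-1 coefficient block
    y = np.zeros((x.shape[0] + 2 * ra, x.shape[1] + 2 * rb))
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # one outer-product-shaped contribution per input point
            y[i:i + len(a), j:j + len(b)] += x[i, j] * C
    return y[ra:-ra, rb:-rb]                  # trim the zero-padded halo

x = np.arange(16.0).reshape(4, 4)
a = np.array([1.0, -2.0, 1.0])                # e.g. second-difference weights
b = np.array([1.0, -2.0, 1.0])
print(scatter_stencil_outer(x, a, b))
```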

Denoising diffusion models have found applications in image segmentation by generating segmentation masks conditioned on images. Existing studies predominantly focus on adjusting the model architecture or improving inference, such as test-time sampling strategies. In this work, we focus on improving the training strategy and propose a novel recycling method. During each training step, a segmentation mask is first predicted given an image and a random noise. This predicted mask, which replaces the conventional ground-truth mask, is then used for the denoising task during training. This approach can be interpreted as aligning the training strategy with inference by eliminating the dependence on ground-truth masks for generating noisy samples. Our proposed method significantly outperforms standard diffusion training, self-conditioning, and existing recycling strategies across multiple medical imaging data sets: muscle ultrasound, abdominal CT, prostate MR, and brain MR. This holds for two widely adopted sampling strategies: the denoising diffusion probabilistic model and the denoising diffusion implicit model. Importantly, existing diffusion models often display declining or unstable performance during inference, whereas our novel recycling consistently enhances or maintains performance. We show that, under a fair comparison with the same network architectures and computing budget, the proposed recycling-based diffusion models achieve on-par performance with non-diffusion-based supervised training. By ensembling the proposed diffusion and non-diffusion models, significant improvements over the non-diffusion models have been observed across all applications, demonstrating the value of this novel training method. This paper summarizes these quantitative results and discusses their value, with a fully reproducible JAX-based implementation released at //github.com/mathpluscode/ImgX-DiffSeg.
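
A minimal sketch of one recycling training step under assumed DDPM-style notation; `model`, `alpha_bar`, and all other names are illustrative, not the released ImgX-DiffSeg API:

```python
# One recycling step: predict a mask from pure noise, then use that
# prediction (not the ground truth) to build the noisy sample for denoising.
import numpy as np

def recycling_step(model, image, gt_mask, t, alpha_bar):
    # 1) Predict a mask from the image and pure noise (no ground truth).
    noise0 = np.random.randn(*gt_mask.shape)
    pred_mask = model(image, noise0, t)

    # 2) Build the noisy sample from the *predicted* mask, aligning
    #    training with inference, where no ground truth is available.
    eps = np.random.randn(*gt_mask.shape)
    x_t = np.sqrt(alpha_bar[t]) * pred_mask + np.sqrt(1 - alpha_bar[t]) * eps

    # 3) Denoise x_t and supervise against the ground-truth mask.
    denoised = model(image, x_t, t)
    return np.mean((denoised - gt_mask) ** 2)

model = lambda img, x, t: np.tanh(x)          # dummy network for the sketch
alpha_bar = np.linspace(0.99, 0.01, 1000)     # assumed noise schedule
img, mask = np.zeros((8, 8)), np.ones((8, 8))
print(recycling_step(model, img, mask, t=500, alpha_bar=alpha_bar))
```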

Real-world data tend to be heavily imbalanced and to severely skew data-driven deep neural networks, which makes Long-Tailed Recognition (LTR) a massively challenging task. Existing LTR methods seldom train Vision Transformers (ViTs) with Long-Tailed (LT) data, while off-the-shelf pretrained ViT weights always lead to unfair comparisons. In this paper, we systematically investigate the performance of ViTs in LTR and propose LiVT to train ViTs from scratch only with LT data. Observing that ViTs suffer more severe LTR problems, we conduct Masked Generative Pretraining (MGP) to learn generalized features. With ample and solid evidence, we show that MGP is more robust than supervised approaches. In addition, Binary Cross Entropy (BCE) loss, which performs conspicuously well with ViTs, encounters predicaments in LTR. We further propose a balanced BCE to ameliorate it, with strong theoretical grounding. Specifically, we derive an unbiased extension of the Sigmoid and compensate with extra logit margins to deploy it. Our Bal-BCE contributes to the quick convergence of ViTs in just a few epochs. Extensive experiments demonstrate that, with MGP and Bal-BCE, LiVT successfully trains ViTs without any additional data and significantly outperforms comparable state-of-the-art methods, e.g., our ViT-B achieves 81.0% Top-1 accuracy on iNaturalist 2018 without bells and whistles. Code is available at //github.com/XuZhengzhuo/LiVT.
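
As a rough reading of the margin idea (an assumption on our part, not the authors' exact formulation), the sketch below implements a balanced BCE that shifts each class's logit by the log of its empirical prior, in the usual logit-adjustment style:

```python
# Balanced BCE sketch: head classes need larger raw logits to reach the
# same probability, compensating for long-tailed class frequencies.
import numpy as np

def balanced_bce(logits, targets, class_counts):
    prior = class_counts / class_counts.sum()
    z = logits + np.log(prior)                # per-class logit margin
    p = 1.0 / (1.0 + np.exp(-z))              # adjusted sigmoid
    eps = 1e-12
    return -np.mean(targets * np.log(p + eps)
                    + (1 - targets) * np.log(1 - p + eps))

logits = np.array([[2.0, -1.0, 0.5]])
targets = np.array([[1.0, 0.0, 0.0]])
counts = np.array([1000.0, 100.0, 10.0])      # long-tailed class frequencies
print(balanced_bce(logits, targets, counts))
```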

Graph Neural Networks (GNNs) have shown promising results on a broad spectrum of applications. Most empirical studies of GNNs directly take the observed graph as input, assuming the observed structure perfectly depicts the accurate and complete relations between nodes. However, graphs in the real world are inevitably noisy or incomplete, which can degrade the quality of graph representations. In this work, we propose a novel Variational Information Bottleneck guided Graph Structure Learning framework, namely VIB-GSL, from the perspective of information theory. VIB-GSL advances the Information Bottleneck (IB) principle for graph structure learning, providing a more elegant and universal framework for mining underlying task-relevant relations. VIB-GSL learns an informative and compressive graph structure to distill the actionable information for specific downstream tasks. VIB-GSL deduces a variational approximation for irregular graph data to form a tractable IB objective function, which facilitates training stability. Extensive experimental results demonstrate the superior effectiveness and robustness of VIB-GSL.
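
For intuition, here is a generic sketch of a variational IB objective of the kind VIB-GSL instantiates for graphs; the Gaussian encoder and the `beta` weight are illustrative assumptions, not the paper's exact objective:

```python
# Variational IB: maximise task relevance I(Z; Y) while compressing I(Z; X),
# with both mutual-information terms replaced by tractable variational bounds.
import numpy as np

def vib_loss(mu, logvar, log_probs, labels, beta=0.01):
    # Variational bound on -I(Z; Y): cross-entropy of the task head.
    ce = -np.mean(log_probs[np.arange(len(labels)), labels])
    # Variational bound on I(Z; X): KL(q(z|x) || N(0, I)) in closed form.
    kl = 0.5 * np.mean(np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1))
    return ce + beta * kl

mu, logvar = np.zeros((4, 8)), np.zeros((4, 8))   # dummy Gaussian encoder
log_probs = np.log(np.full((4, 3), 1 / 3))        # dummy task-head outputs
labels = np.array([0, 1, 2, 0])
print(vib_loss(mu, logvar, log_probs, labels))
```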

Graph Neural Networks (GNNs) have proven to be useful for many different practical applications. However, many existing GNN models have implicitly assumed homophily among the nodes connected in the graph, and therefore have largely overlooked the important setting of heterophily, where most connected nodes are from different classes. In this work, we propose a novel framework called CPGNN that generalizes GNNs for graphs with either homophily or heterophily. The proposed framework incorporates an interpretable compatibility matrix for modeling the heterophily or homophily level in the graph, which can be learned in an end-to-end fashion, enabling it to go beyond the assumption of strong homophily. Theoretically, we show that replacing the compatibility matrix in our framework with the identity (which represents pure homophily) recovers GCN. Our extensive experiments demonstrate the effectiveness of our approach in more realistic and challenging experimental settings with significantly less training data compared to previous works: CPGNN variants achieve state-of-the-art results in heterophily settings with or without contextual node features, while maintaining comparable performance in homophily settings.
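
A toy sketch in the spirit of the compatibility-matrix propagation described above (the shapes and the residual form are assumptions); setting H to the identity recovers homophilous GCN-style propagation:

```python
# Class beliefs are translated through a compatibility matrix H before
# aggregation, so class-to-class affinities under heterophily are modelled.
import numpy as np

def compatibility_propagate(A, B0, H, k=2):
    B = B0.copy()
    for _ in range(k):
        # aggregate neighbours' beliefs through H, re-adding the prior
        # beliefs as a residual at every step
        B = B0 + A @ B @ H
    return B

A = np.array([[0, 1], [1, 0]], dtype=float)    # two connected nodes
B0 = np.array([[0.9, 0.1], [0.2, 0.8]])        # prior class beliefs
H = np.array([[0.1, 0.9], [0.9, 0.1]])         # heterophily: neighbours differ
print(compatibility_propagate(A, B0, H))
```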

Translational distance-based knowledge graph embedding has shown progressive improvements on the link prediction task, from TransE to the latest state-of-the-art RotatE. However, N-1, 1-N and N-N predictions still remain challenging. In this work, we propose a novel translational distance-based approach for knowledge graph link prediction. The proposed method is two-fold: first, we extend RotatE from the 2D complex domain to a high-dimensional space using orthogonal transforms, for greater modeling capacity. Second, the graph context is explicitly modeled via two directed context representations. These context representations are used as part of the distance scoring function to measure the plausibility of triples during training and inference. The proposed approach effectively improves prediction accuracy on the difficult N-1, 1-N and N-N cases of the knowledge graph link prediction task. Experimental results show that it achieves better performance than the baseline RotatE on two benchmark data sets, especially on the data set (FB15k-237) with many high in-degree nodes.
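
An illustrative sketch of the two ingredients described above; the exact scoring function, the context term, and all shapes are assumptions for exposition, not the paper's formulation:

```python
# Orthogonal relation transform (generalising RotatE's 2D rotations) plus
# a distance score that also consults a stand-in context representation.
import numpy as np

def score(h, t, R, ctx, w=0.5):
    transformed = R @ h                        # orthogonal relation action
    # translational distance, mixed with a directed graph-context term
    return -np.linalg.norm(transformed - t) - w * np.linalg.norm(ctx - t)

d = 8
Q, _ = np.linalg.qr(np.random.randn(d, d))     # a random orthogonal matrix
h, t = np.random.randn(d), np.random.randn(d)
ctx = t + 0.1 * np.random.randn(d)             # stand-in context vector
print(score(h, t, Q, ctx))
```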

It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
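
The commonly stated OE objective for classification adds, to the standard cross-entropy on in-distribution data, a term pushing the model's posterior towards uniform on auxiliary outliers; a minimal sketch:

```python
# Outlier Exposure loss: in-distribution cross-entropy plus lambda times the
# cross-entropy between the outlier-batch posterior and the uniform target.
import numpy as np

def log_softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def oe_loss(logits_in, labels_in, logits_out, lam=0.5):
    lp_in = log_softmax(logits_in)
    ce = -np.mean(lp_in[np.arange(len(labels_in)), labels_in])
    # cross-entropy to the uniform distribution over k classes
    uniform_ce = -np.mean(log_softmax(logits_out).mean(axis=1))
    return ce + lam * uniform_ce

logits_in = np.array([[3.0, 0.1, -1.0]])
labels_in = np.array([0])
logits_out = np.array([[2.0, 2.0, -3.0]])      # an auxiliary outlier batch
print(oe_loss(logits_in, labels_in, logits_out))
```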

We propose a new method for the event extraction (EE) task based on an imitation learning framework, specifically, inverse reinforcement learning (IRL) via a generative adversarial network (GAN). The GAN estimates proper rewards according to the difference between the actions committed by the expert (or ground truth) and the agent among complicated states in the environment. The EE task benefits from these dynamic rewards because instances and labels vary in difficulty and the gains are expected to be diverse -- e.g., an ambiguous but correctly detected trigger or argument should receive high gains -- while traditional RL models usually neglect such differences and pay equal attention to all instances. Moreover, our experiments demonstrate that the proposed framework outperforms state-of-the-art methods, without explicit feature engineering.
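
A schematic sketch of the dynamic-reward idea, assuming a GAIL-style reward shaping from the discriminator's output; the stub discriminator and all names are illustrative, not the paper's implementation:

```python
# The discriminator scores how expert-like an (instance, action) pair is,
# and that score becomes the RL reward, so harder but correct decisions
# can earn larger gains than easy ones.
import numpy as np

def discriminator(state, action):
    # Stub: probability that the pair came from the expert (gold labels).
    return 0.9 if action == state["gold"] else 0.2

def dynamic_reward(state, action):
    d = discriminator(state, action)
    # common GAN-style reward shaping: log D - log(1 - D)
    return np.log(d) - np.log(1.0 - d)

state = {"gold": "trigger"}
print(dynamic_reward(state, "trigger"))   # high reward for expert-like act
print(dynamic_reward(state, "none"))      # low reward otherwise
```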

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but producing them with high perceptual quality and efficiency still requires further research effort. In this paper, we propose AdvGAN, which generates adversarial examples with generative adversarial networks (GANs) that can learn and approximate the distribution of original instances. Once the AdvGAN generator is trained, it can generate adversarial perturbations efficiently for any instance, potentially accelerating adversarial training as a defense. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models achieve a high attack success rate under state-of-the-art defenses compared to other attacks. Our attack placed first, with 92.76% accuracy, on a public MNIST black-box attack challenge.
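
A schematic sketch of the AdvGAN generator objective: an adversarial term steering the target model, a GAN realism term, and a hinge penalty bounding the perturbation size; the stubs, weights, and hinge bound `c` are illustrative:

```python
# Generator loss: L_adv + alpha * L_GAN + beta * L_hinge, where the
# generator G produces a perturbation added to the input.
import numpy as np

def generator_loss(x, G, D, f, target, alpha=1.0, beta=10.0, c=0.1):
    pert = G(x)
    x_adv = x + pert
    l_gan = -np.log(D(x_adv) + 1e-12)             # fool the discriminator
    l_adv = -np.log(f(x_adv)[target] + 1e-12)     # push f towards `target`
    l_hinge = max(0.0, np.linalg.norm(pert) - c)  # keep perturbation small
    return l_adv + alpha * l_gan + beta * l_hinge

G = lambda x: 0.05 * np.sign(x)                   # stub generator
D = lambda x: 0.7                                 # stub discriminator
f = lambda x: np.array([0.2, 0.8])                # stub target model
x = np.random.randn(4)
print(generator_loss(x, G, D, f, target=1))
```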
