
This paper presents the first comprehensive analysis of an emerging cryptocurrency scam, known as the "arbitrage bot" scam, disseminated on online social networks. The scam revolves around Decentralized Exchange (DEX) arbitrage and aims to lure victims into executing a so-called "bot contract" in order to steal funds from them. To collect instances of the scam at scale, we developed a fully automated scam detection system named CryptoScamHunter, which continuously collects YouTube videos and automatically detects scams. In addition, CryptoScamHunter can download the source code of the bot contract from the provided links and extract the associated scam cryptocurrency address. By deploying CryptoScamHunter from June 2022 to June 2023, we detected 10,442 arbitrage bot scam videos published by thousands of YouTube accounts. Our analysis reveals that different strategies have been utilized to spread the scam, including crafting popular accounts, registering spam accounts, and using obfuscation tricks to hide the real scam address in the bot contracts. Moreover, from the scam videos we collected over 800 malicious bot contracts with source code and extracted 354 scam addresses. By further expanding the scam addresses with a similar-contract matching technique, we obtained a total of 1,697 scam addresses. By tracing the transactions of all scam addresses on the Ethereum mainnet and Binance Smart Chain, we reveal that over 25,000 victims have fallen prey to this scam, resulting in a financial loss of up to 15 million USD. Overall, our work sheds light on the dissemination tactics and censorship evasion strategies adopted in the arbitrage bot scam, as well as on the scale and impact of the scam on online social networks and blockchain platforms, emphasizing the urgent need for effective detection and prevention mechanisms against such fraudulent activity.
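The transaction-tracing step mentioned above can be illustrated with a short, hedged sketch: assuming a web3.py (v6 API) connection to an Ethereum RPC endpoint, it scans a block range and tallies the native-token value sent directly to one scam address. The RPC URL, address, and block heights are hypothetical placeholders, not values from the paper, and the paper's full pipeline (video collection, contract download, address extraction) is not reproduced.

```python
# Hedged sketch: tally native-token transfers sent directly to one known scam
# address over a block range, using web3.py (v6 API assumed). The RPC endpoint,
# address, and block heights are hypothetical placeholders, not the paper's data.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))  # hypothetical RPC endpoint
SCAM_ADDRESS = Web3.to_checksum_address(
    "0x0000000000000000000000000000000000000000")            # placeholder address

def tally_incoming(start_block: int, end_block: int) -> float:
    """Sum the ETH sent directly to SCAM_ADDRESS between two block heights."""
    total_wei = 0
    for height in range(start_block, end_block + 1):
        block = w3.eth.get_block(height, full_transactions=True)
        for tx in block.transactions:
            if tx["to"] and Web3.to_checksum_address(tx["to"]) == SCAM_ADDRESS:
                total_wei += tx["value"]
    return float(Web3.from_wei(total_wei, "ether"))

if __name__ == "__main__":
    print(tally_incoming(17_000_000, 17_000_010))  # illustrative block range
```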

Related content

The International Working Conference on Source Code Analysis and Manipulation (SCAM) aims to bring together researchers and practitioners working on the theory, techniques, and applications of analyzing and/or manipulating the source code of computer systems. While attention in the broader software engineering community is focused on other aspects of system development and evolution, such as specification, design, and requirements engineering, source code is the only precise description of a system's behavior. The analysis and manipulation of source code therefore remains a pressing concern.
December 6, 2023

Mechanistic interpretability aims to understand model behaviors in terms of specific, interpretable features, often hypothesized to manifest as low-dimensional subspaces of activations. Specifically, recent studies have explored subspace interventions (such as activation patching) as a way to simultaneously manipulate model behavior and attribute the features behind it to given subspaces. In this work, we demonstrate that these two aims diverge, potentially leading to an illusory sense of interpretability. Counterintuitively, even if a subspace intervention makes the model's output behave as if the value of a feature was changed, this effect may be achieved by activating a dormant parallel pathway leveraging another subspace that is causally disconnected from model outputs. We demonstrate this phenomenon in a distilled mathematical example, in two real-world domains (the indirect object identification task and factual recall), and present evidence for its prevalence in practice. In the context of factual recall, we further show a link to rank-1 fact editing, providing a mechanistic explanation for previous work observing an inconsistency between fact editing performance and fact localization. However, this does not imply that activation patching of subspaces is intrinsically unfit for interpretability. To contextualize our findings, we also show what a success case looks like in a task (indirect object identification) where prior manual circuit analysis informs an understanding of the location of a feature. We explore the additional evidence needed to argue that a patched subspace is faithful.
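As a minimal illustration of the kind of subspace intervention discussed above, the sketch below performs rank-1 activation patching in PyTorch: the component of a clean activation along a candidate feature direction is swapped for the corresponding component from a counterfactual run. The tensors are random stand-ins, and no specific model or task from the paper is assumed.

```python
# Hedged sketch of rank-1 subspace activation patching: replace the component of a
# clean activation along a given direction with the component from a counterfactual
# run. Activations and the direction below are random stand-ins.
import torch

def patch_subspace(clean: torch.Tensor, counterfactual: torch.Tensor,
                   direction: torch.Tensor) -> torch.Tensor:
    """Swap the projection of `clean` onto `direction` for that of `counterfactual`."""
    d = direction / direction.norm()
    clean_proj = (clean @ d).unsqueeze(-1) * d            # component along d, clean run
    counter_proj = (counterfactual @ d).unsqueeze(-1) * d  # same component, counterfactual run
    return clean - clean_proj + counter_proj

activations_clean = torch.randn(4, 512)      # batch of clean activations
activations_counter = torch.randn(4, 512)    # activations from the counterfactual prompt
direction = torch.randn(512)                 # candidate feature direction
patched = patch_subspace(activations_clean, activations_counter, direction)
```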

We demonstrate how conditional generation from diffusion models can be used to tackle a variety of realistic tasks in the production of music in 44.1kHz stereo audio with sampling-time guidance. The scenarios we consider include continuation, inpainting and regeneration of musical audio, the creation of smooth transitions between two different music tracks, and the transfer of desired stylistic characteristics to existing audio clips. We achieve this by applying guidance at sampling time in a simple framework that supports both reconstruction and classification losses, or any combination of the two. This approach ensures that generated audio can match its surrounding context, or conform to a class distribution or latent representation specified relative to any suitable pre-trained classifier or embedding model. Audio samples are available at //machinelearning.apple.com/research/controllable-music
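A hedged sketch of the sampling-time guidance idea follows: at each step, the noisy sample is nudged by the gradient of a reconstruction loss computed on the denoised estimate over the known (unmasked) region. The denoiser is a toy identity stand-in, and the shapes, mask, and step size are illustrative assumptions, not the paper's model or schedule.

```python
# Hedged sketch of sampling-time reconstruction guidance: nudge the noisy sample
# by the gradient of a masked reconstruction loss on the denoised estimate.
# The denoiser is a toy identity stand-in; shapes, mask, and step size are illustrative.
import torch

def guided_update(x, t, denoiser, context, mask, step_size=0.1):
    """One illustrative guidance update on the noisy sample x."""
    x = x.detach().requires_grad_(True)
    x0_hat = denoiser(x, t)                          # model's estimate of the clean signal
    loss = ((x0_hat - context) ** 2)[mask].mean()    # match the known (unmasked) audio
    grad, = torch.autograd.grad(loss, x)
    return (x - step_size * grad).detach()

denoiser = lambda x, t: x                            # toy stand-in for a trained denoiser
x = torch.randn(2, 44_100)                           # one second of noisy 44.1 kHz stereo
context = torch.randn(2, 44_100)                     # surrounding audio to be matched
mask = torch.zeros(2, 44_100, dtype=torch.bool)
mask[:, :22_050] = True                              # first half is known context
x = guided_update(x, t=0.5, denoiser=denoiser, context=context, mask=mask)
```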

Motivated by the need for analysing large spatio-temporal panel data, we introduce a novel dimensionality reduction methodology for $n$-dimensional random fields observed across $S$ spatial locations and $T$ time periods. We call it the General Spatio-Temporal Factor Model (GSTFM). First, we provide the probabilistic and mathematical underpinning needed to represent a random field as the sum of two components: the common component (driven by a small number $q$ of latent factors) and the idiosyncratic component (mildly cross-correlated). We show that the two components are identified as $n\to\infty$. Second, we propose an estimator of the common component and derive its statistical guarantees (consistency and rate of convergence) as $\min(n, S, T)\to\infty$. Third, we propose an information criterion to determine the number of factors. Estimation makes use of Fourier analysis in the frequency domain, thus fully exploiting the information in the spatio-temporal covariance structure of the whole panel. Synthetic data examples illustrate the applicability of GSTFM and its advantages over the extant generalized dynamic factor model, which ignores spatial correlations.
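To make the frequency-domain ingredient concrete, the sketch below estimates the spectral density matrix of a panel with a smoothed periodogram and inspects its leading eigenvalues, whose behaviour as $n$ grows is what separates the common from the idiosyncratic component in dynamic factor models. The data, number of frequencies, and bandwidth are illustrative assumptions, not the estimator proposed in the paper.

```python
# Hedged sketch: smoothed-periodogram estimate of a panel's spectral density and
# its leading eigenvalues at a grid of frequencies. Data and tuning constants are
# illustrative, not the paper's estimator.
import numpy as np

def spectral_eigenvalues(X, n_freqs=16, bandwidth=5):
    """X: (T, n) panel. Eigenvalues of the estimated spectral density per frequency."""
    T, n = X.shape
    X = X - X.mean(axis=0)
    dft = np.fft.rfft(X, axis=0)                               # (T//2+1, n) Fourier transform
    periodogram = np.einsum("fi,fj->fij", dft, dft.conj()) / (2 * np.pi * T)
    freqs = np.linspace(1, dft.shape[0] - 2, n_freqs, dtype=int)
    eigs = []
    for f in freqs:
        lo, hi = max(f - bandwidth, 0), min(f + bandwidth + 1, dft.shape[0])
        S_f = periodogram[lo:hi].mean(axis=0)                  # smoothed spectral density
        eigs.append(np.linalg.eigvalsh(S_f)[::-1])             # descending eigenvalues
    return np.array(eigs)

panel = np.random.default_rng(0).standard_normal((512, 20))    # T=512 periods, n=20 series
print(spectral_eigenvalues(panel)[:, :3])                      # three leading eigenvalues per frequency
```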

We develop a post-selection inference method for the Cox proportional hazards model with interval-censored data, which provides asymptotically valid p-values and confidence intervals conditional on the model selected by lasso. The method is based on a pivotal quantity that is shown to converge to a uniform distribution under local alternatives. The proof can be adapted to many other regression models, which is illustrated by the extension to generalized linear models and the Cox model with right-censored data. Our method involves estimation of the efficient information matrix, for which several approaches are proposed with proof of their consistency. Thorough simulation studies show that our method has satisfactory performance in samples of modest sizes. The utility of the method is illustrated via an application to an Alzheimer's disease study.

Unimodality, pivotal in statistical analysis, offers insights into dataset structure and drives sophisticated analytical procedures. While confirming unimodality is straightforward for one-dimensional data using methods such as Silverman's approach and Hartigans' dip statistic, its generalization to higher dimensions remains challenging. By extrapolating one-dimensional unimodality principles to multi-dimensional spaces through linear random projections and by leveraging point-to-point distances, we present a novel multivariate unimodality test, named mud-pod, rooted in $\alpha$-unimodality assumptions. Both theoretical and empirical studies confirm the efficacy of our method for assessing the unimodality of multidimensional datasets as well as for estimating the number of clusters.
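The projection-based ingredient can be sketched as follows: project the multivariate sample onto random unit directions and apply Hartigans' dip test to each one-dimensional projection, here via the third-party diptest package (assumed installed). This illustrates only the random-projection idea, not the full mud-pod procedure with point-to-point distances and $\alpha$-unimodality calibration.

```python
# Hedged sketch: dip-test p-values for random 1-D projections of a multivariate
# sample, using the third-party diptest package (assumed installed). Illustrates
# the random-projection idea only, not the full mud-pod test.
import numpy as np
import diptest

def projected_dip_pvalues(X, n_projections=50, seed=0):
    """Return dip-test p-values for random 1-D projections of X (samples x dims)."""
    rng = np.random.default_rng(seed)
    pvals = []
    for _ in range(n_projections):
        direction = rng.standard_normal(X.shape[1])
        direction /= np.linalg.norm(direction)
        _, pval = diptest.diptest(X @ direction)     # dip statistic and p-value
        pvals.append(pval)
    return np.array(pvals)

X = np.random.default_rng(1).standard_normal((500, 5))   # unimodal Gaussian sample
print(projected_dip_pvalues(X).min())                     # small minima would suggest multimodality
```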

Continuous Time Echo State Networks (CTESN) are a promising yet under-explored surrogate modeling technique for dynamical systems, particularly those governed by stiff Ordinary Differential Equations (ODEs). This paper critically investigates the effects of important hyper-parameters and algorithmic choices on the generalization capability of CTESN surrogates on two benchmark problems governed by Robertson's equations. The method is also used to parametrize the initial conditions of a system of ODEs that realistically model automobile collisions, solving them accurately up to 200 times faster than numerical ODE solvers. The results of this paper demonstrate the ability of CTESN surrogates to accurately predict sharp transients and highly nonlinear system responses, and their utility in speeding up the solution of stiff ODE systems, allowing for their use in diverse applications from accelerated design optimization to digital twins.
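For reference, the stiff benchmark mentioned above can be reproduced with a standard implicit solver; the sketch below integrates Robertson's equations with SciPy's Radau method. A CTESN surrogate would be trained on trajectories like this one; the surrogate itself is not sketched here, and the tolerances and time span are illustrative.

```python
# Hedged sketch: Robertson's stiff chemical kinetics system solved with SciPy's
# implicit Radau method. Tolerances and time span are illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

def robertson(t, y, k1=0.04, k2=3e7, k3=1e4):
    """Robertson's equations, a classic stiff ODE benchmark."""
    y1, y2, y3 = y
    dy1 = -k1 * y1 + k3 * y2 * y3
    dy2 = k1 * y1 - k3 * y2 * y3 - k2 * y2 ** 2
    dy3 = k2 * y2 ** 2
    return [dy1, dy2, dy3]

sol = solve_ivp(robertson, t_span=(0.0, 1e5), y0=[1.0, 0.0, 0.0],
                method="Radau", rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])   # species concentrations at the final time
```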

The fusion of causal models with deep learning has introduced increasingly intricate data sets, such as causal associations within images or between textual components, and has surfaced as a focal research area. Nonetheless, extending the original causal concepts and theories to such complex, non-statistical data has met with serious challenges. In response, our study redefines causal data into three distinct categories from the standpoint of causal structure and representation: definite data, semi-definite data, and indefinite data. Definite data chiefly pertains to statistical data used in conventional causal scenarios, while semi-definite data refers to a spectrum of data formats germane to deep learning, including time series, images, text, and others. Indefinite data is an emergent research area that we infer from the progression of data forms. To comprehensively present these three data paradigms, we elaborate on their formal definitions, the differences manifested in datasets, resolution pathways, and the development of research. We summarize key tasks and achievements pertaining to definite and semi-definite data across myriad research undertakings, and present a roadmap for indefinite data, beginning with its current research conundrums. Lastly, we classify and scrutinize the key datasets presently utilized within these three paradigms.

The generalization mystery in deep learning is the following: why do over-parameterized neural networks trained with gradient descent (GD) generalize well on real datasets even though they are capable of fitting random datasets of comparable size? Furthermore, from among all solutions that fit the training data, how does GD find one that generalizes well (when such a well-generalizing solution exists)? We argue that the answer to both questions lies in the interaction of the gradients of different examples during training. Intuitively, if the per-example gradients are well aligned, that is, if they are coherent, then one may expect GD to be (algorithmically) stable, and hence to generalize well. We formalize this argument with an easy-to-compute and interpretable metric for coherence, and show that the metric takes on very different values on real and random datasets for several common vision networks. The theory also explains a number of other phenomena in deep learning, such as why some examples are reliably learned earlier than others, why early stopping works, and why it is possible to learn from noisy labels. Moreover, since the theory provides a causal explanation of how GD finds a well-generalizing solution when one exists, it motivates a class of simple modifications to GD that attenuate memorization and improve generalization. Generalization in deep learning is an extremely broad phenomenon and therefore requires an equally general explanation. We conclude with a survey of alternative lines of attack on this problem, and argue that, on this basis, the proposed approach is the most viable one.
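A minimal sketch of the underlying quantity, gradient alignment, is shown below: per-example gradients of a toy linear model are collected and their average pairwise cosine similarity is computed. This is an illustrative alignment measure in the spirit of the abstract, not the paper's exact coherence metric, and the model and data are stand-ins.

```python
# Hedged sketch of per-example gradient alignment: collect each example's gradient
# for a toy linear model and average the pairwise cosine similarities. Illustrative
# only; not the paper's exact coherence metric.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(8, 10)                       # toy inputs
y = torch.randint(0, 2, (8,))                # toy labels

per_example_grads = []
for xi, yi in zip(x, y):
    model.zero_grad()
    loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0)).backward()
    g = torch.cat([p.grad.flatten() for p in model.parameters()])
    per_example_grads.append(g.clone())

G = torch.stack(per_example_grads)
G = G / G.norm(dim=1, keepdim=True)          # unit-normalize each per-example gradient
cosine = G @ G.T                             # pairwise cosine similarities
off_diag = cosine[~torch.eye(len(G), dtype=torch.bool)]
print(f"mean pairwise gradient alignment: {off_diag.mean().item():.3f}")
```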

We propose a novel approach to multimodal sentiment analysis using deep neural networks that combine visual analysis and natural language processing. Our goal differs from the standard sentiment analysis goal of predicting whether a sentence expresses positive or negative sentiment; instead, we aim to infer the latent emotional state of the user. Thus, we focus on predicting the emotion word tags attached by users to their Tumblr posts, treating these as "self-reported emotions." We demonstrate that our multimodal model, combining both text and image features, outperforms separate models based solely on either images or text. Our model's results are interpretable, automatically yielding sensible word lists associated with emotions. We explore the structure of emotions implied by our model, compare it to what has been posited in the psychology literature, and validate our model on a set of images that have been used in psychology studies. Finally, our work also provides a useful tool for the growing academic study of images - both photographs and memes - on social networks.
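A hedged sketch of the fusion idea: image and text embeddings, assumed to come from pre-trained encoders, are concatenated and passed to a small classifier over emotion tags. The embedding dimensions and number of emotion tags below are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of late fusion for multimodal emotion tagging: concatenate image and
# text embeddings (assumed to come from pre-trained encoders) and classify. The
# dimensions and number of emotion tags are illustrative.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, image_dim=2048, text_dim=768, n_emotions=15):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(image_dim + text_dim, 512),
            nn.ReLU(),
            nn.Linear(512, n_emotions),
        )

    def forward(self, image_emb, text_emb):
        return self.head(torch.cat([image_emb, text_emb], dim=-1))  # logits over emotion tags

model = FusionClassifier()
logits = model(torch.randn(4, 2048), torch.randn(4, 768))  # batch of 4 posts
print(logits.shape)                                         # torch.Size([4, 15])
```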

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch will lead to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, at the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers at different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.
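The adversarial ingredient can be sketched with a gradient reversal layer feeding a binary domain classifier, a standard mechanism for learning domain-invariant features. The feature dimension and classifier below are illustrative; the paper's image-level and instance-level components and consistency regularization are not reproduced.

```python
# Hedged sketch of adversarial domain classification via a gradient reversal layer,
# a common way to learn domain-invariant features. Feature dimension is illustrative;
# the paper's full image/instance-level design is not reproduced.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None       # flip gradients flowing into the backbone

class DomainClassifier(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, features, lam=1.0):
        return self.net(GradReverse.apply(features, lam))   # logit: source vs. target domain

features = torch.randn(8, 256, requires_grad=True)          # pooled detector features
domain_logits = DomainClassifier()(features)
loss = nn.functional.binary_cross_entropy_with_logits(
    domain_logits.squeeze(1), torch.zeros(8))               # all-source toy labels
loss.backward()                                              # reversed gradients reach `features`
```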
