In recent years, blockchain technology has introduced decentralized finance (DeFi) as an alternative to traditional financial systems. DeFi aims to create a transparent and efficient financial ecosystem using smart contracts and emerging decentralized applications. However, the growing popularity of DeFi has made it a target for fraudulent activities, resulting in losses of billions of dollars to various types of fraud. To address these issues, researchers have explored the potential of artificial intelligence (AI) approaches to detect such fraudulent activities. Yet, a systematic survey that organizes and summarizes this existing work and identifies future research opportunities has been lacking. In this survey, we provide a systematic taxonomy of various frauds in the DeFi ecosystem, categorized by the stages of a DeFi project's life cycle: project development, introduction, growth, maturity, and decline. This taxonomy is grounded in our finding that many frauds correlate strongly with particular stages of a DeFi project. Following the taxonomy, we review existing AI-powered detection methods, including statistical modeling, natural language processing, and other machine learning techniques. We find that fraud detection at different stages employs distinct types of methods, and we observe the strong performance of tree-based and graph-based models in fraud detection tasks. By analyzing the challenges and trends, we present our findings to provide proactive suggestions and to guide future research in DeFi fraud detection. We believe that this survey can support researchers, practitioners, and regulators in establishing a secure and trustworthy DeFi ecosystem.
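The survey's observation that tree-based models perform well on fraud detection can be illustrated with a minimal, self-contained sketch. The decision stump below is the simplest building block of such models; the token features (creator's share of supply, days of locked liquidity) and the toy data are invented for illustration and are not from the survey.

```python
def best_stump(samples, labels):
    """Pick the (feature, threshold) pair that best separates the labels,
    predicting 1 (fraud) when the feature value is >= the threshold."""
    best_feature, best_threshold, best_correct = 0, 0.0, -1
    for f in range(len(samples[0])):
        for candidate in samples:  # candidate thresholds from the data
            t = candidate[f]
            correct = sum(
                1 for x, y in zip(samples, labels) if (x[f] >= t) == (y == 1)
            )
            if correct > best_correct:
                best_feature, best_threshold, best_correct = f, t, correct
    return best_feature, best_threshold

# Hypothetical per-token features: [creator_share_of_supply, liquidity_locked_days]
samples = [(0.9, 0), (0.8, 2), (0.1, 365), (0.05, 180)]
labels = [1, 1, 0, 0]  # 1 = fraudulent (e.g. rug pull), 0 = legitimate

feature, threshold = best_stump(samples, labels)
predict = lambda x: int(x[feature] >= threshold)
```

Real tree-based detectors stack many such splits (gradient-boosted ensembles, random forests) over richer on-chain features, but the split-selection principle is the same.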

Related Content

Taxonomy is the practice and science of classification. Wikipedia categories illustrate a taxonomy, and a full taxonomy of Wikipedia categories can be extracted by automatic means. As of 2009, it had been shown that a manually constructed taxonomy, such as that of computational lexicons like WordNet, can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include single children with multiple parents; for example, "car" might appear with both parents "vehicle" and "steel structure". For some, however, this merely means that "car" is part of several different taxonomies. A taxonomy might also simply be the organization of things into groups, or an alphabetical list; here, though, the term "vocabulary" is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, since ontologies apply a larger variety of relation types.
Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification, the root node, that applies to all objects. Nodes below this root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning progresses from the general to the more specific.
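The tree structure and the multi-parent variant described above can be sketched in a few lines. The category names ("car", "vehicle", "steel structure") follow the example in the text; the data layout itself is a hypothetical illustration.

```python
# Strict tree-shaped taxonomy: one root applies to all objects,
# and nodes grow more specific as you descend.
tree = {
    "thing": ["vehicle", "structure"],  # root node
    "vehicle": ["car", "bicycle"],
    "structure": ["bridge"],
}

# Network-like variant: a single child may have multiple parents,
# e.g. "car" sits under both "vehicle" and "steel structure".
parents = {
    "car": ["vehicle", "steel structure"],
    "vehicle": ["thing"],
    "steel structure": ["thing"],
}

def ancestors(node, parents):
    """Walk from a node up toward the root, collecting every ancestor."""
    seen = set()
    stack = [node]
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen
```

In the multi-parent scheme, `ancestors("car", parents)` returns all three of "vehicle", "steel structure", and "thing", which is exactly what distinguishes it from a strict tree.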

Current recommendation systems are significantly affected by the serious problem of temporal data shift: the distribution of historical data is inconsistent with that of online data. Most existing models focus on utilizing updated data, overlooking the transferable, shift-free information that can be learned from shifting data. We propose the Temporal Invariance of Association theorem, which states that, given a fixed search space, the association between the data and the items in the search space remains invariant over time. Leveraging this principle, we design a retrieval-based recommendation framework that trains a data shift-free relevance network on shifting data, significantly enhancing the predictive performance of the original recommendation model. However, retrieval-based recommendation models incur substantial inference-time costs when deployed online. To address this, we further design a distillation framework that distills information from the relevance network into a parameterized module using shifting data. The distilled model can be deployed online alongside the original model with only a minimal increase in inference time. Extensive experiments on multiple real datasets demonstrate that our framework significantly improves the performance of the original model by utilizing shifting data.
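The distillation step described above can be sketched under simplifying assumptions: a stand-in retrieval teacher (a k-nearest-neighbour average over a memory bank, not the paper's relevance network) is distilled into a linear student that is constant-time at serving. All data here is synthetic.

```python
import random

random.seed(0)

# Memory bank of (item, stored relevance score); the true relation is 2x + 1.
memory_x = [random.uniform(-5, 5) for _ in range(200)]
memory = [(x, 2.0 * x + 1.0) for x in memory_x]

def teacher(q, k=10):
    """Retrieval-based teacher: average stored score of the k nearest items.
    Accurate, but requires a search over the memory bank per query."""
    nearest = sorted(memory, key=lambda item: abs(item[0] - q))[:k]
    return sum(y for _, y in nearest) / k

# Distillation: fit a parametric student y = a*x + b to the teacher's
# outputs on fresh queries, via closed-form least squares.
xs = [random.uniform(-5, 5) for _ in range(500)]
ts = [teacher(x) for x in xs]
mx, mt = sum(xs) / len(xs), sum(ts) / len(ts)
a = sum((x - mx) * (t - mt) for x, t in zip(xs, ts)) / \
    sum((x - mx) ** 2 for x in xs)
b = mt - a * mx

student = lambda q: a * q + b  # constant-time at serving, no retrieval
```

The student recovers roughly a = 2, b = 1, matching the relation the teacher encodes, while replacing a per-query nearest-neighbour search with a single multiply-add, which is the inference-cost trade the abstract describes.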

Proof-of-Work (PoW) blockchains have emerged as a robust and effective consensus mechanism in open environments like the Internet, leading to widespread deployment across numerous cryptocurrency platforms and substantial investments. However, current PoW implementations primarily focus on validating the discovery of a winning nonce. Given the substantial computational capacity of blockchain networks and the global pursuit of a more sustainable IT infrastructure, the notion of replacing cryptographic puzzles with useful computing tasks becomes compelling. In this study, we conduct a comprehensive analysis of the prerequisites for alternative classes of tasks, examining designs proposed in the existing literature in light of these requirements. We distill pertinent techniques and identify gaps in the current state of the art, providing valuable insights into the evolution of consensus mechanisms beyond traditional PoW.

In recent years, badminton analytics has drawn attention due to advances in artificial intelligence and the efficiency of data collection. While a line of effective applications has improved and investigated player performance, only a few public badminton datasets can be used by researchers outside the badminton domain. Existing badminton singles datasets focus on specific matchups and therefore cannot support comprehensive studies of different players and various matchups. In this paper, we provide a badminton singles dataset, ShuttleSet22, collected from high-ranking matches in 2022. ShuttleSet22 consists of 30,172 strokes in 2,888 rallies in the training set, 1,400 strokes in 450 rallies in the validation set, and 2,040 strokes in 654 rallies in the testing set, with detailed stroke-level metadata within each rally. To benchmark existing work with ShuttleSet22, we held a challenge, Track 2: Forecasting Future Turn-Based Strokes in Badminton Rallies, at the CoachAI Badminton Challenge @ IJCAI 2023, to encourage researchers to tackle this real-world problem through innovative approaches and to summarize insights from comparing the state-of-the-art baseline with improved techniques. The baseline code and the dataset are available at //github.com/wywyWang/CoachAI-Projects/tree/main/CoachAI-Challenge-IJCAI2023.

The advent of the era of big data provides new ideas for financial distress prediction. To evaluate the financial status of listed companies more accurately, this study establishes a financial distress prediction indicator system based on multi-source data by integrating three data sources: the company's internal management, the external market, and online public opinion. The study addresses the redundancy and dimensionality explosion problems of multi-source data integration, performs feature selection on the fused data, and builds a financial distress prediction model based on maximum relevance and minimum redundancy and support vector machine recursive feature elimination (MRMR-SVM-RFE). To verify the effectiveness of the model, we used back propagation (BP), support vector machine (SVM), and gradient boosted decision tree (GBDT) classification algorithms and conducted an empirical study on China's listed companies based on different financial distress prediction indicator systems. MRMR-SVM-RFE feature selection can effectively extract information from multi-source fused data. The feature dataset obtained by the selection yields higher prediction accuracy than the original data, and the BP classification model outperforms linear regression (LR), decision tree (DT), and random forest (RF).
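The RFE half of the pipeline can be illustrated with a minimal, self-contained sketch: repeatedly fit a linear scorer on the remaining features and drop the one with the smallest weight magnitude. A plain least-squares scorer fitted by gradient descent stands in here for the linear SVM, and the toy "indicator" data (one informative feature, two noise features) is invented for illustration.

```python
import random

random.seed(1)

def fit_linear(X, y, lr=0.05, steps=500):
    """Fit weights w for y ~ X @ w by stochastic gradient descent
    on squared error (stand-in for a linear SVM's weight vector)."""
    w = [0.0] * len(X[0])
    for _ in range(steps):
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w

def rfe(X, y, n_keep):
    """Recursive feature elimination: refit, then drop the feature
    with the smallest |weight|, until n_keep features remain."""
    active = list(range(len(X[0])))
    while len(active) > n_keep:
        w = fit_linear([[xi[j] for j in active] for xi in X], y)
        active.pop(min(range(len(w)), key=lambda i: abs(w[i])))
    return active

# Toy data: feature 0 drives the label, features 1-2 are pure noise.
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(80)]
y = [1.0 if row[0] > 0 else -1.0 for row in X]
selected = rfe(X, y, n_keep=1)
```

The informative feature survives elimination; in the full pipeline this step runs after MRMR filtering on the fused multi-source indicators.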

In recent years, modern techniques in deep learning and large-scale datasets have led to impressive progress in 3D instance segmentation, grasp pose estimation, and robotics. This allows for accurate detection directly in 3D scenes, object- and environment-aware grasp prediction, as well as robust and repeatable robotic manipulation. This work aims to integrate these recent methods into a comprehensive framework for robotic interaction and manipulation in human-centric environments. Specifically, we leverage 3D reconstructions from a commodity 3D scanner for open-vocabulary instance segmentation, alongside grasp pose estimation, to demonstrate dynamic picking of objects and opening of drawers. We show the performance and robustness of our model in two sets of real-world experiments, dynamic object retrieval and drawer opening, reporting success rates of 51% and 82%, respectively. Code for our framework as well as videos are available at: //spot-compose.github.io/.

In recent years, there has been growing interest in using machine learning techniques for the estimation of treatment effects. Most of the best-performing methods rely on representation learning strategies that encourage shared behavior among potential outcomes to increase the precision of treatment effect estimates. In this paper, we discuss and classify these models in terms of their algorithmic inductive biases and present a new model, NN-CGC, that incorporates additional information from the causal graph. NN-CGC tackles bias resulting from spurious variable interactions by imposing novel constraints on models, and it can be integrated with other representation learning methods. We test the effectiveness of our method using three different base models on common benchmarks. Our results indicate that our model constraints lead to significant improvements, achieving new state-of-the-art results in treatment effect estimation. We also show that our method is robust to imperfect causal graphs and that using partial causal information is preferable to ignoring it.

This paper explores the dual impact of digital banks and alternative lenders on financial inclusion and the regulatory challenges posed by their business models. It discusses the integration of digital platforms, machine learning (ML), and Large Language Models (LLMs) in enhancing financial services accessibility for underserved populations. Through a detailed analysis of operational frameworks and technological infrastructures, this research identifies key mechanisms that facilitate broader financial access and mitigate traditional barriers. Additionally, the paper addresses significant regulatory concerns involving data privacy, algorithmic bias, financial stability, and consumer protection. Employing a mixed-methods approach, which combines quantitative financial data analysis with qualitative insights from industry experts, this paper elucidates the complexities of leveraging digital technology to foster financial inclusivity. The findings underscore the necessity of evolving regulatory frameworks that harmonize innovation with comprehensive risk management. This paper concludes with policy recommendations for regulators, financial institutions, and technology providers, aiming to cultivate a more inclusive and stable financial ecosystem through prudent digital technology integration.

Face recognition technology has advanced significantly in recent years, due largely to the availability of large and increasingly complex training datasets for use in deep learning models. These datasets, however, typically comprise images scraped from news sites or social media platforms and, therefore, have limited utility in more advanced security, forensics, and military applications. These applications require lower resolution, longer ranges, and elevated viewpoints. To meet these critical needs, we collected and curated the first and second subsets of a large multi-modal biometric dataset designed for use in the research and development (R&D) of biometric recognition technologies under extremely challenging conditions. Thus far, the dataset includes more than 350,000 still images and over 1,300 hours of video footage of approximately 1,000 subjects. To collect this data, we used Nikon DSLR cameras, a variety of commercial surveillance cameras, specialized long-range R&D cameras, and Group 1 and Group 2 UAV platforms. The goal is to support the development of algorithms capable of accurately recognizing people at ranges up to 1,000 m and from high angles of elevation. These advances will include improvements to the state of the art in face recognition and will support new research in the area of whole-body recognition using methods based on gait and anthropometry. This paper describes the methods used to collect and curate the dataset, and the dataset's characteristics at its current stage.

A fundamental goal of scientific research is to learn about causal relationships. However, despite its critical role in the life and social sciences, causality has not had the same importance in Natural Language Processing (NLP), which has traditionally placed more emphasis on predictive tasks. This distinction is beginning to fade, with an emerging area of interdisciplinary research at the convergence of causal inference and language processing. Still, research on causality in NLP remains scattered across domains without unified definitions, benchmark datasets and clear articulations of the remaining challenges. In this survey, we consolidate research across academic areas and situate it in the broader NLP landscape. We introduce the statistical challenge of estimating causal effects, encompassing settings where text is used as an outcome, treatment, or as a means to address confounding. In addition, we explore potential uses of causal inference to improve the performance, robustness, fairness, and interpretability of NLP models. We thus provide a unified overview of causal inference for the computational linguistics community.
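The confounding-adjustment setting mentioned above (text used as a means to address confounding) can be made concrete with a tiny worked example. The binary confounder Z below stands in for information extracted from text, and the records are invented for illustration.

```python
# Observed records: (z, treated, outcome). Z confounds both treatment
# assignment and outcome: z=1 units are more often treated AND have
# higher baseline outcomes.
data = [
    (1, 1, 10.0), (1, 1, 11.0), (1, 1, 12.0), (1, 0, 8.0),
    (0, 1, 5.0), (0, 0, 2.0), (0, 0, 3.0), (0, 0, 2.5),
]

def mean(xs):
    return sum(xs) / len(xs)

# Naive contrast ignores Z and mixes confounding into the estimate.
naive = mean([y for z, t, y in data if t == 1]) - \
        mean([y for z, t, y in data if t == 0])

def adjusted_effect(data):
    """Backdoor adjustment: average within-stratum treated-vs-control
    contrasts, weighted by the empirical distribution of Z."""
    total = 0.0
    for z in {z for z, _, _ in data}:
        stratum = [(t, y) for zz, t, y in data if zz == z]
        diff = mean([y for t, y in stratum if t == 1]) - \
               mean([y for t, y in stratum if t == 0])
        total += diff * len(stratum) / len(data)
    return total

ate = adjusted_effect(data)  # 2.75, well below the naive 5.625
```

The naive contrast here overstates the effect by mixing in Z's influence; conditioning on the (text-derived) confounder recovers the smaller, stratum-weighted effect, which is the estimation problem the survey formalizes.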

One of the key requirements to facilitate semantic analytics of information regarding contemporary and historical events on the Web, in the news and in social media is the availability of reference knowledge repositories containing comprehensive representations of events and temporal relations. Existing knowledge graphs, with popular examples including DBpedia, YAGO and Wikidata, focus mostly on entity-centric information and are insufficient in terms of their coverage and completeness with respect to events and temporal relations. EventKG presented in this paper is a multilingual event-centric temporal knowledge graph that addresses this gap. EventKG incorporates over 690 thousand contemporary and historical events and over 2.3 million temporal relations extracted from several large-scale knowledge graphs and semi-structured sources and makes them available through a canonical representation.
