
In recent years, blockchain technology has introduced decentralized finance (DeFi) as an alternative to traditional financial systems. DeFi aims to create a transparent and efficient financial ecosystem built on smart contracts and emerging decentralized applications. However, the growing popularity of DeFi has made it a target for fraudulent activities, with various types of fraud causing losses of billions of dollars. To address these issues, researchers have explored the potential of artificial intelligence (AI) approaches to detect such fraudulent activities. Yet there is no systematic survey that organizes and summarizes existing work and identifies future research opportunities. In this survey, we provide a systematic taxonomy of frauds in the DeFi ecosystem, categorized by the stages of a DeFi project's life cycle: project development, introduction, growth, maturity, and decline. This taxonomy is based on our finding that many frauds correlate strongly with a specific stage of a DeFi project. Following this taxonomy, we review existing AI-powered detection methods, including statistical modeling, natural language processing, and other machine learning techniques. We find that fraud detection at different stages employs distinct types of methods, and we observe the strong performance of tree-based and graph-related models on fraud detection tasks. By analyzing challenges and trends, we present findings that offer proactive suggestions and guide future research in DeFi fraud detection. We believe this survey can support researchers, practitioners, and regulators in establishing a secure and trustworthy DeFi ecosystem.
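The observation that tree-based models perform well on fraud detection tasks can be made concrete with a minimal sketch. The per-address features, the synthetic labels, and the rug-pull-like signal below are illustrative assumptions, not data or features from the surveyed papers:

```python
# Minimal sketch: a tree-based classifier for DeFi fraud detection.
# Feature names and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-address features: tx count, mean value, contract age,
# unique counterparties, and liquidity-removal ratio (all scaled to [0, 1]).
X = rng.random((n, 5))
# Synthetic labels: flag addresses combining a high liquidity-removal
# ratio with a young contract, mimicking a rug-pull-like signal.
y = ((X[:, 4] > 0.8) & (X[:, 2] < 0.3)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```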

Related content

Sharding is essential for improving blockchain scalability, but existing protocols overlook diverse adversarial attacks, limiting transaction throughput. This paper presents Reticulum, a sharding protocol that addresses this issue by adapting transaction throughput to adversarial attacks observed at runtime. Reticulum employs a two-phase design built on two layers of shards: "control" shards and "process" shards. Each process shard contains at least one trustworthy node, while each control shard contains a majority of trusted nodes. In the first phase, transactions are written to blocks and voted on by nodes in process shards; unanimously accepted blocks are confirmed. In the second phase, blocks that lack unanimous acceptance are voted on by control shards and accepted if a majority votes in favor, overriding first-phase dissenters and silent voters. Because unanimous voting in the first phase involves fewer nodes, Reticulum can run more process shards in parallel, with control shards finalizing decisions and resolving disputes. Experiments confirm Reticulum's design, showing high transaction throughput and robustness against various network attacks, outperforming existing sharding protocols for blockchain networks.
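A minimal sketch of the two-phase acceptance rule described above; the `Block` type, the vote encoding, and the shard sizes are assumptions for illustration, not Reticulum's actual implementation:

```python
# Sketch of Reticulum-style two-phase block acceptance, per the abstract.
from dataclasses import dataclass

@dataclass
class Block:
    block_id: int
    process_votes: list  # True = accept, False = reject, None = silent
    control_votes: list  # booleans cast by control-shard nodes

def accept_block(block: Block) -> bool:
    # Phase 1: confirm only if every process-shard node explicitly
    # votes to accept (unanimous voting over few nodes).
    if all(v is True for v in block.process_votes):
        return True
    # Phase 2: otherwise the control shard decides by simple majority,
    # overriding first-phase dissenters and silent voters.
    yes = sum(1 for v in block.control_votes if v)
    return yes > len(block.control_votes) / 2

# Example: one silent voter in phase 1 pushes the decision to phase 2.
b = Block(1, process_votes=[True, True, None], control_votes=[True, True, False])
print(accept_block(b))  # True
```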

Forecasting models for systematic trading strategies do not adapt quickly when financial market conditions change, as was seen at the onset of the COVID-19 pandemic in 2020, when a dramatic shift in market conditions caused many forecasting models to take loss-making positions. To deal with such situations, we propose a novel time-series trend-following forecaster that can quickly adapt to new market conditions, referred to as regimes. Leveraging recent developments from the deep learning community, we use few-shot learning and propose the Cross Attentive Time-Series Trend Network (X-Trend), which takes positions by attending over a context set of financial time-series regimes. X-Trend transfers trends from similar patterns in the context set to make predictions and take positions in a new, distinct target regime. X-Trend adapts quickly to new financial regimes, with a Sharpe ratio increase of 18.9% over a neural forecaster and a 10-fold increase over a conventional time-series momentum strategy during the turbulent market period from 2018 to 2023. Our strategy recovers twice as quickly from the COVID-19 drawdown as the neural forecaster. X-Trend can also take zero-shot positions on novel, unseen financial assets, obtaining a 5-fold Sharpe ratio increase versus a neural time-series trend forecaster over the same period. X-Trend both forecasts next-day prices and outputs a trading signal. Furthermore, the cross-attention mechanism allows us to interpret the relationship between forecasts and patterns in the context set.
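A minimal sketch of the core cross-attention step over a context set, assuming standard scaled dot-product attention; the encodings and dimensions are illustrative, not the paper's exact architecture:

```python
# Cross-attention of a target-regime query over context-regime keys/values.
import numpy as np

def cross_attention(query, keys, values):
    """query: (d,), keys/values: (n_context, d). Returns attended (d,) + weights."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)       # similarity to each context regime
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over the context set
    return weights @ values, weights         # transferred trend + interpretable weights

rng = np.random.default_rng(0)
context_keys = rng.standard_normal((8, 16))   # 8 encoded context regimes
context_values = rng.standard_normal((8, 16))
target = rng.standard_normal(16)              # encoding of the new target regime
attended, w = cross_attention(target, context_keys, context_values)
print(w.round(3))  # inspect which context regimes the prediction draws on
```

The returned weights are what makes the mechanism interpretable: they show which historical regimes a given forecast borrowed from.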

Process mining, a technique for turning event data into business process insights, has traditionally assumed that each event corresponds to a single case or object. However, many real-world processes are intertwined with multiple objects, making them object-centric. This paper focuses on the emerging domain of object-centric process mining, highlighting its promising yet underexplored benefits in real operational scenarios. Through an in-depth case study of Borusan Cat's after-sales service process, we demonstrate the capability of object-centric process mining to capture entangled business process details. Analyzing an event log of approximately 65,000 events, we underscore the importance of embracing this paradigm for richer business insights and enhanced operational improvements.
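To illustrate the paradigm, here is a minimal sketch of an object-centric event record, in which one event references several objects of different types. The field names and after-sales values are illustrative assumptions loosely modeled on the general OCEL idea, not Borusan Cat's actual log:

```python
# Object-centric event log sketch: each event may reference several
# objects of different types, unlike classical single-case logs.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Event:
    activity: str
    timestamp: datetime
    objects: dict = field(default_factory=dict)  # object type -> list of ids

log = [
    Event("Create Service Order", datetime(2023, 5, 1, 9, 0),
          {"order": ["SO-1"], "machine": ["M-7"]}),
    Event("Ship Parts", datetime(2023, 5, 2, 14, 0),
          {"order": ["SO-1"], "part": ["P-3", "P-9"]}),
]

# Flattening on "part" would duplicate the shipping event per part;
# the object-centric view keeps the one-to-many relation explicit.
for e in log:
    print(e.activity, e.objects)
```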

Over the past few years, the partitioning of gait phases has emerged as a complex research area of significant importance for applications in gait technologies, where accurate phase partitioning plays a crucial role. Researchers have explored a range of sensors that can provide data to gait phase partitioning algorithms. These sensors fall broadly into two types, wearable and non-wearable, each offering distinct advantages and capabilities. To examine current approaches to gait analysis and detection designed for ambulatory rehabilitation systems, we conducted a comprehensive meta-analysis of existing studies. Our analysis revealed a diverse range of sensors and sensor combinations capable of analyzing gait patterns in ambulatory settings, from basic force-based binary switches to more intricate setups incorporating multiple inertial sensors and sophisticated algorithms. The findings highlight the wide spectrum of technologies and methodologies available for gait analysis in ambulatory applications. For the review, we systematically searched two prominent databases, IEEE and Scopus, for relevant gait analysis studies; the search, restricted to publications between 1999 and 2023, yielded 189 papers. From this pool, we identified and included five papers that specifically focused on techniques including thresholding, quasi-static methods, adaptive classifiers, and SVM-based approaches. These selected papers provided the core insights for our review.
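As a concrete instance of the simplest technique named above, here is a minimal sketch of threshold-based gait phase partitioning from a force signal; the threshold value and the synthetic trace are illustrative assumptions:

```python
# Threshold-based gait phase detection from a foot force signal,
# in the style of a force-based binary switch.
import numpy as np

def gait_phases(force, threshold=20.0):
    """Label each sample 'stance' when ground-reaction force (N) exceeds
    the threshold, else 'swing'."""
    return np.where(force > threshold, "stance", "swing")

# Synthetic force trace: two stance periods separated by a swing phase.
force = np.array([0, 5, 150, 400, 380, 90, 10, 0, 3, 220, 410, 60, 5])
print(gait_phases(force))
```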

Robotic technologies are becoming increasingly popular in dentistry due to the high level of precision required in delicate dental procedures. Most dental robots available today are designed for implant surgery, helping dentists accurately place implants at the desired position and depth. In this paper, we introduce the DentiBot, the first robot specifically designed for dental endodontic treatment. The DentiBot is equipped with a force and torque sensor, as well as a string-based Patient Tracking Module, allowing real-time monitoring of endodontic file contact and patient movement. We propose a 6-DoF hybrid position/force controller that enables autonomous adjustment of the surgical path and compensation for patient movement, while also protecting against endodontic file fracture. In addition, a file flexibility model is incorporated to compensate for file bending. Pre-clinical evaluations on acrylic root canal models and resin teeth confirm the feasibility of the DentiBot in assisting endodontic treatment.
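A minimal sketch of the general hybrid position/force control idea, in which a selection matrix splits the six axes into position-controlled and force-controlled subspaces; the gains, selection split, and state layout are assumptions, not DentiBot's actual controller:

```python
# Hybrid position/force control law sketch: a 0/1 selection matrix S
# assigns each axis to either the position loop or the force loop.
import numpy as np

def hybrid_control(x, x_des, f, f_des, S, kp=50.0, kf=0.8):
    """x, x_des: 6-DoF poses; f, f_des: measured/desired wrench.
    S: diagonal 0/1 matrix (1 = position-controlled axis)."""
    I = np.eye(6)
    u_pos = kp * (x_des - x)            # position loop on selected axes
    u_frc = kf * (f_des - f)            # force loop on the complement
    return S @ u_pos + (I - S) @ u_frc  # combined command

S = np.diag([1, 1, 0, 1, 1, 1])         # assume the z-axis is force-controlled
x = np.zeros(6); x_des = np.array([0.001, 0, 0, 0, 0, 0])
f = np.array([0, 0, 1.5, 0, 0, 0]); f_des = np.array([0, 0, 2.0, 0, 0, 0])
print(hybrid_control(x, x_des, f, f_des, S))
```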

In the era of digital markets, the challenge for consumers is discerning quality amid information asymmetry. While traditional markets use brand mechanisms to address this issue, transferring such systems to internet-based P2P markets, where misleading practices such as fake ratings are rampant, remains challenging. Current internet platforms try to counter this with verification algorithms, but these efforts are locked in a continuous tug-of-war with counterfeit actions. Exploiting the transparency, immutability, and traceability of blockchain technology, this paper introduces a robust reputation voting system built on it. Unlike existing blockchain-based reputation systems, our model uses an intrinsically economically incentivized approach to bolster agent integrity. We optimize this model to mirror real-world user behavior, preserving the reputation system's foundational sustainability. Through Monte-Carlo simulations using both uniform and power-law distributions, enabled by an inverse transform sampling method, we traverse a broad parameter landscape that replicates real-world complexity. The findings underscore the promise of a sustainable, transparent, and formidable reputation mechanism. Given its structure, our framework can potentially serve as a universal, sustainable oracle for offchain-onchain bridging, helping entities perpetually cultivate their reputation. Future integration with technologies such as Ring Signature and Zero Knowledge Proof could strengthen the system's privacy facets, making it particularly influential in the ever-evolving digital domain.
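A minimal sketch of inverse transform sampling for a bounded power-law distribution, the kind of draw the Monte-Carlo simulations rely on; the exponent and bounds are illustrative assumptions:

```python
# Inverse transform sampling for a truncated power law p(x) ~ x^(-alpha)
# on [xmin, xmax]: invert the CDF analytically and feed it uniforms.
import numpy as np

def sample_power_law(n, alpha=2.5, xmin=1.0, xmax=1000.0, rng=None):
    """x = (xmin^(1-a) + u*(xmax^(1-a) - xmin^(1-a)))^(1/(1-a)), u ~ U(0,1)."""
    rng = rng or np.random.default_rng()
    u = rng.random(n)
    a = 1.0 - alpha
    return (xmin**a + u * (xmax**a - xmin**a)) ** (1.0 / a)

samples = sample_power_law(100_000, rng=np.random.default_rng(0))
print(samples.min(), samples.max(), np.median(samples))  # heavy-tailed draws
```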

To fully unlock the transformative power of distributed ledgers and blockchains, it is crucial to develop innovative consensus algorithms that overcome the obstacles of security, scalability, and interoperability currently hindering widespread adoption. This paper introduces HybridChain, which combines the advantages of a sharded blockchain and a DAG distributed ledger, together with a consensus algorithm that leverages decentralized learning. In our approach, validators exchange perceptions as votes to assess potential conflicts between transactions and the witness set, which represents the input transactions in the UTXO model. These perceptions collectively contribute to an intermediate belief about the validity of transactions. By integrating their own beliefs with those of other validators, nodes make localized validity decisions, and a final consensus is reached through a majority vote, ensuring precise and efficient transaction validation. We compare the proposed approach to the existing DAG-based scheme IOTA and the sharded blockchain Omniledger through extensive simulations. The results show that IOTA achieves high throughput and low latency but sacrifices accuracy and is vulnerable to orphanage attacks, especially at low transaction rates. Omniledger achieves stable accuracy by increasing the number of shards but incurs higher latency. In contrast, the proposed HybridChain exhibits fast, accurate, and secure transaction processing, together with excellent scalability.
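A minimal sketch of the belief-exchange-then-majority-vote idea, assuming beliefs are scalar validity probabilities and every validator hears every other; this is a simplification for illustration, not HybridChain's actual protocol:

```python
# Belief exchange and majority-vote finalization sketch.
import numpy as np

def consensus(local_beliefs, rounds=3):
    """local_beliefs: per-validator probability that a transaction is valid.
    Each round, validators blend their belief with the group mean (a stand-in
    for exchanging perceptions); a final majority vote over thresholded
    beliefs decides validity."""
    b = np.asarray(local_beliefs, dtype=float)
    for _ in range(rounds):
        b = 0.5 * b + 0.5 * b.mean()       # integrate others' perceptions
    votes = b > 0.5                        # localized validity decisions
    return votes.sum() > len(votes) / 2    # final majority consensus

print(consensus([0.9, 0.8, 0.7, 0.2, 0.6]))  # True: most validators agree
```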

Face recognition technology has advanced significantly in recent years, due largely to the availability of large and increasingly complex training datasets for use in deep learning models. These datasets, however, typically comprise images scraped from news sites or social media platforms and therefore have limited utility in more advanced security, forensics, and military applications, which require lower resolutions, longer ranges, and elevated viewpoints. To meet these critical needs, we collected and curated the first and second subsets of a large multi-modal biometric dataset designed for use in the research and development (R&D) of biometric recognition technologies under extremely challenging conditions. Thus far, the dataset includes more than 350,000 still images and over 1,300 hours of video footage of approximately 1,000 subjects. To collect this data, we used Nikon DSLR cameras, a variety of commercial surveillance cameras, specialized long-range R&D cameras, and Group 1 and Group 2 UAV platforms. The goal is to support the development of algorithms capable of accurately recognizing people at ranges up to 1,000 m and from high angles of elevation. These advances will include improvements to the state of the art in face recognition and will support new research in the area of whole-body recognition using methods based on gait and anthropometry. This paper describes the methods used to collect and curate the dataset, and the dataset's characteristics at its current stage.

A fundamental goal of scientific research is to learn about causal relationships. However, despite its critical role in the life and social sciences, causality has not had the same importance in Natural Language Processing (NLP), which has traditionally placed more emphasis on predictive tasks. This distinction is beginning to fade, with an emerging area of interdisciplinary research at the convergence of causal inference and language processing. Still, research on causality in NLP remains scattered across domains without unified definitions, benchmark datasets and clear articulations of the remaining challenges. In this survey, we consolidate research across academic areas and situate it in the broader NLP landscape. We introduce the statistical challenge of estimating causal effects, encompassing settings where text is used as an outcome, treatment, or as a means to address confounding. In addition, we explore potential uses of causal inference to improve the performance, robustness, fairness, and interpretability of NLP models. We thus provide a unified overview of causal inference for the computational linguistics community.
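The statistical challenge of estimating causal effects from text can be stated with the standard back-door adjustment; this generic formula, with a (possibly text-derived) binary treatment T, outcome Y, and confounders Z, is textbook material rather than a result from the survey:

```latex
% Average treatment effect of a binary treatment T on outcome Y,
% identified via back-door adjustment over confounders Z:
\mathrm{ATE}
  = \mathbb{E}[Y \mid \mathrm{do}(T{=}1)] - \mathbb{E}[Y \mid \mathrm{do}(T{=}0)]
  = \sum_{z} \bigl( \mathbb{E}[Y \mid T{=}1, Z{=}z]
                  - \mathbb{E}[Y \mid T{=}0, Z{=}z] \bigr) \, P(Z{=}z)
```

When text serves as the confounder Z, the sum runs over a learned low-dimensional representation of the text rather than raw documents, which is precisely where NLP methods enter the estimation problem.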

One of the key requirements for semantic analytics of information regarding contemporary and historical events on the Web, in the news, and in social media is the availability of reference knowledge repositories containing comprehensive representations of events and temporal relations. Existing knowledge graphs, with popular examples including DBpedia, YAGO, and Wikidata, focus mostly on entity-centric information and are insufficient in terms of their coverage and completeness with respect to events and temporal relations. EventKG, presented in this paper, is a multilingual event-centric temporal knowledge graph that addresses this gap. EventKG incorporates over 690 thousand contemporary and historical events and over 2.3 million temporal relations extracted from several large-scale knowledge graphs and semi-structured sources, and makes them available through a canonical representation.
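A minimal sketch of what a canonical, event-centric record with temporal relations might look like; the property names below are illustrative assumptions, not EventKG's actual schema:

```python
# Sketch of an event-centric record with multilingual labels, a validity
# interval, participants, and provenance across source knowledge graphs.
event = {
    "id": "event:42",
    "label": {"en": "Treaty signing", "de": "Vertragsunterzeichnung"},
    "begin": "1990-09-12",
    "end": "1990-09-12",
    "participants": ["entity:Germany", "entity:France"],
    "sources": ["dbpedia", "wikidata"],  # provenance for integrated facts
}

temporal_relations = [
    # (subject, relation, object, validity interval; None = still holds)
    ("entity:Germany", "memberOf", "entity:UN", ("1973-09-18", None)),
]

print(event["label"]["en"], event["begin"])
```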
