
Many financial jobs rely on news to learn about causal events in the past and present and to make informed decisions and predictions about the future. With the ever-increasing amount of news available online, there is a need to automate the extraction of causal events from unstructured text. In this work, we propose a methodology to construct causal knowledge graphs (KGs) from news in two steps: (1) extraction of causal relations, and (2) argument clustering and representation in a KG. We aim to build graphs that prioritize recall, precision, and interpretability. For extraction, although many earlier works construct causal KGs from text, most adopt rudimentary pattern-based methods. We close this gap by using the latest BERT-based extraction models alongside pattern-based ones, achieving high recall while still maintaining high precision. For clustering, we use a topic-modelling approach to cluster arguments and thereby increase the connectivity of the graph. As a result, instead of 15,686 disconnected subgraphs, we obtain a single connected graph from which users can infer more causal relationships. Our final KG effectively captures and conveys causal relationships, as validated through experiments, multiple use cases, and user feedback.
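The two-step pipeline above can be sketched minimally: merge extracted (cause, effect) phrase pairs into a graph whose nodes are cluster representatives, so that paraphrases of the same argument collapse into one node and previously disjoint subgraphs connect. The phrase pairs and the keyword-based `cluster` rule below are purely hypothetical stand-ins for the paper's BERT/pattern extractors and topic-model clustering.

```python
from collections import defaultdict

# Hypothetical extracted (cause, effect) phrase pairs; in the paper these
# would come from BERT-based and pattern-based extractors.
pairs = [
    ("rising oil prices", "higher airline costs"),
    ("oil price surge", "inflation fears"),
    ("higher airline costs", "reduced travel demand"),
]

def cluster(arg):
    """Stand-in for topic-model clustering: map an argument phrase to a
    cluster representative. A naive keyword rule, purely for illustration."""
    if "oil" in arg:
        return "oil prices"
    return arg

# Build the KG as an adjacency map over cluster representatives.
kg = defaultdict(set)
for cause, effect in pairs:
    kg[cluster(cause)].add(cluster(effect))

print(dict(kg))
```

Clustering merges "rising oil prices" and "oil price surge" into a single "oil prices" node, so the two otherwise disconnected cause-effect fragments join into one connected graph, which is the effect the abstract describes at scale.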

Related content

In the film industry, movie posters have been an essential part of advertising and marketing for many decades, and they continue to play a vital role today as digital posters on online, social media, and OTT platforms. A movie poster can effectively promote and communicate the essence of a film, such as its genre, visual style/tone, vibe, and storyline cue/theme, all of which are essential to attract potential viewers. Identifying a movie's genres has significant practical applications in recommending the film to target audiences. Previous studies on movie genre identification are limited to subtitles, plot synopses, and movie scenes, which are mostly accessible only after the movie's release. Posters, by contrast, usually contain implicit pre-release information designed to generate mass interest. In this paper, we address automated multi-label genre identification from movie poster images alone, without any additional textual/meta-data information about the movies, which is one of the earliest attempts of its kind. We present a deep transformer network with a probabilistic module to identify movie genres exclusively from the poster. For experimental analysis, we procured 13,882 posters spanning 13 genres from the Internet Movie Database (IMDb), on which our model's performance was encouraging and even outperformed some major contemporary architectures.
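The multi-label aspect is the key modelling choice: unlike single-label classification with a softmax, each genre gets an independent sigmoid probability, and every genre clearing a threshold is predicted. A minimal sketch of that decision rule, with an illustrative genre subset and hypothetical network logits:

```python
import math

# Illustrative subset of the 13 genres; not the paper's full label set.
GENRES = ["Action", "Comedy", "Drama", "Horror"]

def predict_genres(logits, threshold=0.5):
    """Multi-label decision: sigmoid each logit independently and keep every
    genre whose probability clears the threshold (unlike softmax, several
    labels can fire at once)."""
    probs = [1 / (1 + math.exp(-z)) for z in logits]
    return [g for g, p in zip(GENRES, probs) if p >= threshold]

# Hypothetical logits from a poster-classification network.
print(predict_genres([2.1, -1.3, 0.4, -3.0]))  # ['Action', 'Drama']
```

Here both "Action" (sigmoid ≈ 0.89) and "Drama" (sigmoid ≈ 0.60) clear the 0.5 threshold, so the poster is assigned two genres simultaneously.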

Access to online data has long been important for law enforcement agencies in their collection of electronic evidence and investigation of crimes. These activities have also long involved cross-border investigations and international cooperation between agencies and jurisdictions. However, technological advances such as cloud computing have complicated the investigations and cooperation arrangements. Therefore, several new laws have been passed and proposed both in the United States and the European Union for facilitating cross-border crime investigations in the context of cloud computing. These new laws and proposals have also brought many new legal challenges and controversies regarding extraterritoriality, data protection, privacy, and surveillance. With these challenges in mind and with a focus on Europe, this paper reviews the recent trends and policy initiatives for cross-border data access by law enforcement agencies.

The competitive nature of Cloud marketplaces, a new concern in service delivery, makes pricing policy a crucial task for firms; accordingly, pricing strategies have recently attracted many researchers. Since game theory handles such competition well, this concern is addressed in the present work by designing a normal-form game between providers. We consider a committee in which providers register in order to improve their competition-based pricing policies, and apply game theory to design dynamic pricing policies. The use of the committee makes the game one of complete information, in which each player is aware of every other player's payoff function. The players refine their pricing policies to maximize their profits. The contribution of this paper is a quantitative model of Cloud marketplaces in the form of a game that yields novel dynamic pricing strategies; the model is validated by proving the existence and uniqueness of the game's Nash equilibrium.
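The existence-and-uniqueness claim can be illustrated on a toy version of such a game. The following sketch assumes a hypothetical two-provider game with linear demand q_i = a - b·p_i + d·p_j and unit cost c (not the paper's exact payoff functions): each provider's best response is p_i = (a + b·c + d·p_j) / (2b), and because d < 2b the best-response map is a contraction, so iterated play converges to the unique symmetric Nash equilibrium p* = (a + b·c) / (2b - d).

```python
# Hypothetical parameters: demand intercept a, own-price slope b,
# cross-price slope d, unit cost c. Requires d < 2*b for contraction.
a, b, c, d = 10.0, 2.0, 1.0, 1.0

def best_response(p_other):
    """Maximizer of (p - c) * (a - b*p + d*p_other) in p."""
    return (a + b * c + d * p_other) / (2 * b)

# Best-response dynamics: both providers repeatedly re-optimize.
p1 = p2 = 0.0
for _ in range(100):
    p1, p2 = best_response(p2), best_response(p1)

nash = (a + b * c) / (2 * b - d)  # closed-form symmetric equilibrium
print(p1, p2, nash)  # dynamics converge to the unique equilibrium, 4.0
```

Uniqueness here follows from the contraction property (the iteration's error shrinks by a factor d/(2b) = 0.25 per round); the paper proves the analogous property for its own payoff structure.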

Mobile health studies often collect multiple within-day self-reported assessments of participants' behavior and well-being, spanning metrics such as physical activity (continuous), pain levels (truncated), mood states (ordinal), and life events (binary). These assessments, when categorized by time of day, become functional data of different types: continuous, truncated, ordinal, and binary. Inspired by this diversity, we introduce a unified functional principal component analysis approach. It employs a semiparametric Gaussian copula model, assuming a generalized latent non-paranormal process as the underlying mechanism for all four types of functional data. We specify the latent temporal dependence using a covariance estimated through a Kendall's tau bridging method, incorporating smoothness during the bridging process. Simulation studies demonstrate the method's competitive performance under both dense and sparse sampling conditions. We then apply the approach to data from 497 participants in the National Institute of Mental Health Family Study of the Mood Disorder Spectrum to characterize within-day temporal patterns of mood differences among individuals with major mood disorder subtypes, including Major Depressive Disorder and Type 1 and Type 2 Bipolar Disorder.
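The "bridging" idea is that rank correlation, which is invariant to the unknown marginal transformations, identifies the latent Gaussian correlation. For the continuous case the bridge is the classical relation r = sin(π·τ/2); the paper additionally uses type-specific bridges for truncated, ordinal, and binary margins, which are not shown here. A minimal sketch with a small illustrative sample:

```python
import math
from itertools import combinations

def kendall_tau(x, y):
    """Sample Kendall's tau (assumes no ties)."""
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / n_pairs

def bridge_continuous(tau):
    """Bridging function for continuous margins under a Gaussian copula:
    latent correlation r = sin(pi * tau / 2)."""
    return math.sin(math.pi * tau / 2)

# Illustrative sample: y is a monotone transform of x, so tau = 1
# and the bridged latent correlation is 1 regardless of the margins.
x = [0.1, 0.9, 0.3, 0.7, 0.5]
y = [0.2, 1.1, 0.4, 0.9, 0.6]
tau = kendall_tau(x, y)
print(tau, bridge_continuous(tau))  # 1.0 1.0
```

Because tau only depends on ranks, the same estimate is obtained for any monotone marginal transformation, which is what makes the copula covariance estimable without knowing the margins.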

In this work, we investigate a recently emerging type of scam token called Trapdoor, which has cost investors hundreds of millions of dollars over the period 2020-2023. In a nutshell, by embedding logical bugs and/or owner-only features in the smart contract code, a Trapdoor token allows users to buy but prevents them from selling. We develop the first systematic classification of Trapdoor tokens and a comprehensive list of their programming techniques, accompanied by a detailed analysis of representative scam contracts. We also construct the very first dataset of 1,859 manually verified Trapdoor tokens on Uniswap and build effective opcode-based detection tools using popular machine learning classifiers such as Random Forest, XGBoost, and LightGBM, which achieve accuracy, precision, recall, and F1-scores of at least 0.98.
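The featurization step of such opcode-based detection can be sketched simply: turn a contract's opcode sequence into a normalized frequency vector, which a classifier such as Random Forest or XGBoost then consumes. The vocabulary below is a hypothetical subset of standard EVM opcodes, not the paper's actual feature set.

```python
from collections import Counter

# Hypothetical opcode vocabulary (standard EVM mnemonics), chosen for
# illustration only; a real pipeline would use the full opcode set.
VOCAB = ["PUSH1", "SSTORE", "SLOAD", "CALLER", "JUMPI", "REVERT"]

def opcode_features(opcodes):
    """Normalized opcode-frequency vector in VOCAB order, suitable as
    input to a tree-based classifier."""
    counts = Counter(op for op in opcodes if op in VOCAB)
    total = sum(counts.values()) or 1  # avoid division by zero
    return [counts[op] / total for op in VOCAB]

# Hypothetical disassembled opcode trace of a contract.
trace = ["PUSH1", "CALLER", "SLOAD", "JUMPI", "REVERT", "PUSH1"]
print(opcode_features(trace))
```

Such fixed-length frequency vectors are what make heterogeneous contracts comparable to a standard classifier; the learning step itself (Random Forest, XGBoost, LightGBM) is applied on top of vectors like these.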

Many social events and policy interventions generate treatment effects that persistently spill over into neighboring areas, resulting in a phenomenon statisticians refer to as "interference" in both time and space. In this paper, I put forward a design-based framework to identify and estimate these spillover effects in panel data with a spatial dimension, when temporal and spatial interference intertwine in intricate ways that are unknown to researchers. The framework defines estimands that enable researchers to measure the influence of each type of interference, and I propose estimators that are consistent and asymptotically normal under the assumption of sequential ignorability and mild regularity conditions. I show that fixed effects models in panel data analysis, such as the difference-in-differences (DID) estimator, can incur significant biases in such scenarios. I test the method's performance on simulated datasets and in replications of two empirical studies.

Despite recognition of the relationship between infrastructure resilience and community recovery, very limited empirical evidence exists regarding the extent to which the disruptions in and restoration of infrastructure services contribute to the speed of community recovery. To address this gap, this study investigates the relationship between community and infrastructure systems in the context of hurricane impacts, focusing on the recovery dynamics of population activity and power infrastructure restoration. Empirical observational data were utilized to analyze the extent of impact, recovery duration, and recovery types of both systems in the aftermath of Hurricane Ida. The study reveals three key findings. First, power outage duration positively correlates with outage extent until a certain impact threshold is reached. Beyond this threshold, restoration time remains relatively stable regardless of outage magnitude. This finding underscores the need to strengthen power infrastructure, particularly in extreme weather conditions, to minimize outage restoration time. Second, power was fully restored in 70% of affected areas before population activity levels normalized. This finding suggests the role infrastructure functionality plays in post-disaster community recovery. Interestingly, quicker power restoration did not equate to rapid population activity recovery due to other possible factors such as transportation, housing damage, and business interruptions. Finally, if power outages last beyond two weeks, community activity resumes before complete power restoration, indicating adaptability in prolonged outage scenarios. This implies the capacity of communities to adapt to ongoing power outages and continue daily life activities...

Over the past few years, the rapid development of deep learning technologies for computer vision has greatly advanced the performance of medical image segmentation (MedISeg). However, recent MedISeg publications usually focus on presenting their major contributions (e.g., network architectures, training strategies, and loss functions) while unwittingly ignoring some marginal implementation details (also known as "tricks"), leading to potentially unfair experimental comparisons. In this paper, we collect a series of MedISeg tricks for different model implementation phases (i.e., pre-training model, data pre-processing, data augmentation, model implementation, model inference, and result post-processing), and experimentally explore the effectiveness of these tricks on consistent baseline models. Compared to paper-driven surveys that focus only on analyzing the advantages and limitations of segmentation models, our work provides a large number of solid experiments and is more technically operable. With extensive experimental results on representative 2D and 3D medical image datasets, we explicitly clarify the effect of these tricks. Moreover, based on the surveyed tricks, we have open-sourced a strong MedISeg repository, where each component is plug-and-play. We believe that this milestone work not only completes a comprehensive and complementary survey of state-of-the-art MedISeg approaches, but also offers a practical guide for addressing future medical image processing challenges including, but not limited to, small dataset learning, class imbalance learning, multi-modality learning, and domain adaptation. The code has been released at: //github.com/hust-linyi/MedISeg

Advances in artificial intelligence often stem from the development of new environments that abstract real-world situations into a form where research can be done conveniently. This paper contributes such an environment based on ideas inspired by elementary Microeconomics. Agents learn to produce resources in a spatially complex world, trade them with one another, and consume those that they prefer. We show that the emergent production, consumption, and pricing behaviors respond to environmental conditions in the directions predicted by supply and demand shifts in Microeconomics. We also demonstrate settings where the agents' emergent prices for goods vary over space, reflecting the local abundance of goods. After the price disparities emerge, some agents then discover a niche of transporting goods between regions with different prevailing prices -- a profitable strategy because they can buy goods where they are cheap and sell them where they are expensive. Finally, in a series of ablation experiments, we investigate how choices in the environmental rewards, bartering actions, agent architecture, and ability to consume tradable goods can either aid or inhibit the emergence of this economic behavior. This work is part of the environment development branch of a research program that aims to build human-like artificial general intelligence through multi-agent interactions in simulated societies. By exploring which environment features are needed for the basic phenomena of elementary microeconomics to emerge automatically from learning, we arrive at an environment that differs from those studied in prior multi-agent reinforcement learning work along several dimensions. For example, the model incorporates heterogeneous tastes and physical abilities, and agents negotiate with one another as a grounded form of communication.
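The benchmark prediction the agents are tested against is elementary: in a linear supply-and-demand model, an outward demand shift raises the equilibrium price. A minimal sketch of that textbook comparative static, with hypothetical parameter values (this is the economic baseline, not the paper's learned-agent model):

```python
def equilibrium_price(a, b, c, d):
    """Price clearing the market where demand a - b*p equals supply c + d*p."""
    return (a - c) / (b + d)

# Hypothetical parameters: demand intercept a, demand slope b,
# supply intercept c, supply slope d.
base = equilibrium_price(a=10, b=1, c=2, d=1)     # baseline equilibrium
shifted = equilibrium_price(a=14, b=1, c=2, d=1)  # demand shifts outward
print(base, shifted)  # the demand shift raises the equilibrium price
```

The paper's claim is that emergent prices in the learned multi-agent economy move in the same directions as p* does here when environmental conditions shift supply or demand.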

Time series forecasting is widely used in business intelligence, e.g., to forecast stock market prices and sales, and to aid the analysis of data trends. Most time series of interest are macroscopic time series that are aggregated from microscopic data. However, little prior literature has studied forecasting macroscopic time series by leveraging data at the microscopic level, rather than modeling the macroscopic series directly. In this paper, we assume that the microscopic time series follow some unknown mixture of probabilistic distributions. We theoretically show that, as we identify the ground-truth latent mixture components, the estimation of the time series from each component can be improved because of lower variance, thus benefiting the estimation of the macroscopic time series as well. Inspired by the power of Seq2seq and its variants for modeling time series data, we propose Mixture of Seq2seq (MixSeq), an end-to-end mixture model to cluster microscopic time series, in which all components come from a family of Seq2seq models with different parameters. Extensive experiments on both synthetic and real-world data show the superiority of our approach.
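The variance argument can be illustrated numerically: values pooled across latent components carry the between-component spread on top of each component's own noise, so estimates formed within a correctly identified component face far less variance. A toy demonstration with two hypothetical Gaussian components (stand-ins for the latent Seq2seq mixture components):

```python
import random
import statistics

random.seed(0)

# Two hypothetical latent components of microscopic values:
# tight clusters around 0 and around 10.
comp_a = [random.gauss(0, 1) for _ in range(1000)]
comp_b = [random.gauss(10, 1) for _ in range(1000)]
pooled = comp_a + comp_b  # what we see if components are not identified

# Within-component variance is ~1; the pooled variance also carries the
# between-component spread (~25), illustrating why clustering first
# yields lower-variance per-component estimates.
print(statistics.variance(comp_a),
      statistics.variance(comp_b),
      statistics.variance(pooled))
```

This is only the static intuition; MixSeq realizes it for sequences by letting each mixture component be a Seq2seq model and assigning microscopic series to components end-to-end.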
