
Empathy is a social skill that reflects an individual's ability to understand others. Over the past few years, empathy has drawn attention from various disciplines, including but not limited to Affective Computing, Cognitive Science and Psychology. Empathy is a context-dependent term; thus, detecting or recognising it has potential applications in society, healthcare and education. Despite being a broad and overlapping topic, empathy detection with Machine Learning remains underexplored from a holistic literature perspective. To this end, we systematically collect and screen 801 papers from 10 well-known databases and analyse the 54 selected papers. We group the papers by the input modalities of empathy detection systems, i.e., text, audiovisual, audio and physiological signals. We examine modality-specific pre-processing and network architecture design protocols, popular dataset descriptions and availability details, and evaluation protocols. We further discuss potential applications, deployment challenges and research gaps in the Affective Computing-based empathy domain, which can facilitate new avenues of exploration. We believe this work is a stepping stone towards a privacy-preserving and unbiased empathic system, inclusive of culture, diversity and multilingualism, that can be deployed in practice to enhance the overall well-being of human life.

Related content

Machine Learning is an international forum for research on computational learning methods. The journal publishes articles reporting substantive results on a wide range of learning methods applied to various learning problems. Its featured papers describe research problems and methods, applied research, and issues of research methodology. Papers on learning problems or methods provide solid support through empirical studies, theoretical analysis, or comparison with psychological phenomena. Application papers show how learning methods can be applied to solve important application problems. Research-methodology papers improve the methodology of machine learning research. All papers describe supporting evidence in a way that other researchers can verify or replicate. Papers also detail the components of learning and discuss assumptions about knowledge representation and performance tasks. Official website:

Mediation analysis is an important statistical tool in many research fields, whose aim is to investigate the mechanism along the causal pathway between an exposure and an outcome. The joint significance test is widely used as a statistical approach for examining mediation effects in practical applications. Nevertheless, this test is limited by its conservative Type I error, which reduces its statistical power and constrains its popularity and utility. To address this gap, we propose the adaptive joint significance test for a single mediator, a novel data-adaptive test for the mediation effect that offers substantial improvements over the traditional joint significance test. The proposed method is designed to be user-friendly, eliminating the need for complicated procedures. We derive explicit expressions for size and power, ensuring the theoretical validity of our approach. Furthermore, we extend the proposed adaptive joint significance tests to small-scale mediation hypotheses with family-wise error rate (FWER) control. Additionally, we propose a novel adaptive Sobel-type approach for estimating confidence intervals for the mediation effects, which achieves markedly better coverage probabilities than conventional Sobel confidence intervals. Our mediation testing and confidence interval procedures are evaluated through comprehensive simulations and compared with numerous existing approaches. Finally, we illustrate the usefulness of our method by analysing three real-world datasets with continuous, binary and time-to-event outcomes, respectively.
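
For context, the baselines the paper improves upon are easy to state in code. Below is a minimal Python sketch of the classical joint-significance (max-P) test and the conventional Sobel-type confidence interval; the path estimates in the example are hypothetical, and the paper's adaptive versions modify these procedures rather than reproduce them.

import numpy as np
from scipy import stats

def joint_significance_pvalue(alpha_hat, se_alpha, beta_hat, se_beta):
    # Classical joint-significance ("max-P") test for the mediation
    # effect alpha * beta: each path gets a two-sided z-test, and the
    # mediation p-value is the larger of the two path p-values.
    p_alpha = 2 * stats.norm.sf(abs(alpha_hat / se_alpha))  # exposure -> mediator
    p_beta = 2 * stats.norm.sf(abs(beta_hat / se_beta))     # mediator -> outcome
    return max(p_alpha, p_beta)

def sobel_ci(alpha_hat, se_alpha, beta_hat, se_beta, level=0.95):
    # Conventional Sobel confidence interval for alpha * beta, using
    # the first-order delta-method standard error.
    est = alpha_hat * beta_hat
    se = np.sqrt(alpha_hat**2 * se_beta**2 + beta_hat**2 * se_alpha**2)
    z = stats.norm.ppf(0.5 + level / 2)
    return est - z * se, est + z * se

# Hypothetical path estimates and standard errors.
print(joint_significance_pvalue(0.30, 0.10, 0.25, 0.08))
print(sobel_ci(0.30, 0.10, 0.25, 0.08))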

Decision-makers often observe the occurrence of events through a reporting process. City governments, for example, rely on resident reports to find and then resolve urban infrastructural problems such as fallen street trees, flooded basements, or rat infestations. Without additional assumptions, there is no way to distinguish events that occur but are not reported from events that truly did not occur--a fundamental problem in settings with positive-unlabeled data. Because disparities in reporting rates correlate with resident demographics, addressing incidents only on the basis of reports leads to systematic neglect in neighborhoods that are less likely to report events. We show how to overcome this challenge by leveraging the fact that events are spatially correlated. Our framework uses a Bayesian spatial latent variable model to infer event occurrence probabilities and applies it to storm-induced flooding reports in New York City, further pooling results across multiple storms. We show that a model accounting for under-reporting and spatial correlation predicts future reports more accurately than other models, and further induces a more equitable set of inspections: its allocations better reflect the population and provide equitable service to non-white, less traditionally educated, and lower-income residents. This finding reflects heterogeneous reporting behavior learned by the model: reporting rates are higher in Census tracts with higher populations, proportions of white residents, and proportions of owner-occupied households. Our work lays the groundwork for more equitable proactive government services, even with disparate reporting behavior.
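
The core identity behind correcting for under-reporting fits in a few lines. The sketch below is a single-tract illustration under an assumed logistic reporting model (the covariates and weights are hypothetical); it omits the paper's Bayesian spatial pooling across tracts and storms.

import numpy as np

def p_occurred_given_no_report(p_occur, p_report):
    # Bayes' rule for one unit: if an event occurs with probability p and,
    # once occurred, is reported with probability r, then
    #   P(occurred | no report) = p * (1 - r) / (1 - p * r).
    return p_occur * (1 - p_report) / (1 - p_occur * p_report)

def reporting_rate(demographics, weights, bias):
    # Illustrative logistic reporting model: tracts with different
    # demographic covariates report at different rates.
    return 1.0 / (1.0 + np.exp(-(demographics @ weights + bias)))

# Two hypothetical tracts with identical flood risk but different
# reporting rates: silence is far weaker evidence of "no flooding"
# in the low-reporting tract.
p_flood = 0.4
r = reporting_rate(np.array([[1.2, 0.8], [-0.9, -1.1]]),
                   np.array([1.0, 0.5]), 0.0)
print(p_occurred_given_no_report(p_flood, r))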

Vaccination is important to minimize the risk and spread of various diseases. In recent years, vaccination has been a key step in countering the COVID-19 pandemic. However, many people are skeptical about the use of vaccines for various reasons, including the politics involved and the potential side effects of vaccines. The goal of this task is to build an effective multi-label classifier that labels a social media post (specifically, a tweet) according to the concern(s) towards vaccines expressed by its author. We tried three different models: (a) a supervised BERT-large-uncased model, (b) a supervised HateXplain model, and (c) a zero-shot GPT-3.5 Turbo model. The supervised BERT-large-uncased model performed best in our case. We achieved a macro-F1 score of 0.66 and a Jaccard similarity score of 0.66, and received the sixth rank among the submissions. Code is available at https://github.com/anonmous1981/AISOME.
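
A minimal sketch of the core ingredient of the best model: bert-large-uncased with a multi-label head via Hugging Face Transformers. The number of concern labels (12 here) and the example tweet are assumptions, and the classification head is randomly initialized until fine-tuned on the task data.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_LABELS = 12  # assumed label count; the shared task defines the real set

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-large-uncased",
    num_labels=NUM_LABELS,
    problem_type="multi_label_classification",  # BCE-with-logits loss
)

inputs = tokenizer("vaccines were rushed and the side effects scare me",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted = (torch.sigmoid(logits) > 0.5).int()  # one decision per label
print(predicted)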

Open-loop stable limit cycles are foundational to the dynamics of legged robots. They impart a self-stabilizing character to the robot's gait, thus alleviating the need for compute-heavy feedback-based gait correction. This paper proposes a general approach to rapidly generate limit cycles with explicit stability constraints for a given dynamical system. In particular, we pose the problem of open-loop limit cycle stability as a single-stage constrained-optimization problem (COP), and use Direct Collocation to transcribe it into a nonlinear program (NLP) with closed-form expressions for constraints, objectives, and their gradients. The COP formulations of stability are developed based on (1) the spectral radius of a discrete return map, and (2) the spectral radius of the system's monodromy matrix, where the spectral radius is bounded using different constraint-satisfaction formulations of the eigenvalue problem. We compare the performance and solution quality of each approach, and specifically highlight the Schur decomposition of the monodromy matrix as a formulation that boasts wider applicability through weaker assumptions and attractive numerical convergence properties. Moreover, we present results from our experiments on a spring-loaded inverted pendulum model of a robot, where our method generated actuation trajectories for open-loop stable hopping in under 2 seconds (on an Intel Core i7-6700K) and produced energy-minimizing actuation trajectories even under tight stability constraints.
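
As a concrete illustration of the monodromy-matrix notion of stability, here is a Python sketch that integrates the variational equations over one period and reads off the Floquet multipliers. It uses the textbook Van der Pol oscillator rather than the paper's SLIP model, and the initial point and period are approximate literature values for mu = 1.

import numpy as np
from scipy.integrate import solve_ivp

MU = 1.0

def vdp(t, x):
    # Van der Pol oscillator: a standard system with a stable limit cycle.
    return [x[1], MU * (1 - x[0]**2) * x[1] - x[0]]

def jac(x):
    # Jacobian of the vector field, evaluated along the trajectory.
    return np.array([[0.0, 1.0],
                     [-2 * MU * x[0] * x[1] - 1.0, MU * (1 - x[0]**2)]])

def augmented(t, y):
    # State x (2) plus the flattened variational matrix Phi (2x2):
    # Phi' = A(x(t)) Phi yields the monodromy matrix after one period.
    x, phi = y[:2], y[2:].reshape(2, 2)
    return np.concatenate([vdp(t, x), (jac(x) @ phi).ravel()])

x0 = [2.0087, 0.0]   # point near the limit cycle (approximate)
T = 6.6633           # approximate period for mu = 1
y0 = np.concatenate([x0, np.eye(2).ravel()])
sol = solve_ivp(augmented, (0.0, T), y0, rtol=1e-10, atol=1e-12)
monodromy = sol.y[2:, -1].reshape(2, 2)
# One Floquet multiplier is ~1 (along the orbit); the orbit is stable if
# the remaining multipliers lie strictly inside the unit circle.
print("Floquet multipliers:", np.linalg.eigvals(monodromy))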

In recent years, end-to-end speech recognition has emerged as a technology that integrates the acoustic model, pronunciation dictionary, and language model of the traditional Automatic Speech Recognition (ASR) pipeline, making it possible to achieve human-like recognition without building a pronunciation dictionary in advance. However, due to the relative scarcity of code-switching training data, the performance of ASR models tends to degrade drastically when encountering this phenomenon. Most past studies have simplified the learning complexity of the model by splitting the code-switching task into multiple single-language tasks and learning the domain-specific knowledge of each language separately. In this paper, by contrast, we introduce language identification information into the middle layers of the ASR model's encoder, aiming to generate acoustic features that implicitly encode language distinctions and thereby reduce the model's confusion when dealing with language switching.
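
The architectural idea can be sketched in a few lines of PyTorch: attach a language-ID classifier to an intermediate encoder layer and train it jointly with the ASR objective. The layer sizes, the choice of layer, and the loss weighting below are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class EncoderWithMidLID(nn.Module):
    # Toy Transformer encoder with an auxiliary language-ID head attached
    # to an intermediate layer (a sketch of the idea, not the paper's model).
    def __init__(self, d_model=256, n_layers=6, lid_layer=3, n_langs=2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(n_layers))
        self.lid_layer = lid_layer
        self.lid_head = nn.Linear(d_model, n_langs)  # frame-level logits

    def forward(self, x):
        lid_logits = None
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i + 1 == self.lid_layer:
                lid_logits = self.lid_head(x)  # supervise language ID here
        return x, lid_logits

enc = EncoderWithMidLID()
feats, lid_logits = enc(torch.randn(4, 100, 256))  # (batch, frames, dim)
lang_labels = torch.randint(0, 2, (4, 100))        # per-frame language IDs
aux_loss = nn.functional.cross_entropy(lid_logits.transpose(1, 2), lang_labels)
# total_loss = asr_loss + lambda_lid * aux_loss  (joint training, sketched)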

The goal of this work was to apply the Gale-Shapley stable-matching algorithm to a real-world problem: pairing digital influencers with merchants. We analyzed the problem of aligning merchants' interests in having influencers promote their products and services, and, after a detailed specification of the variables involved, conducted experiments to assess the validity of the approach. We demonstrate that the algorithm can be applied while still achieving corporate objectives, by translating performance indicators into the desired rankings of influencers and of the product campaigns to be advertised by merchants.
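
For reference, here is a compact Python implementation of the proposer-optimal Gale-Shapley algorithm. The influencer and merchant names and rankings are hypothetical stand-ins for preference lists derived from performance indicators.

def gale_shapley(proposer_prefs, receiver_prefs):
    # Proposer-optimal stable matching. Each preference dict maps an
    # agent to a list of the other side's agents, best first.
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_pick = {p: 0 for p in proposer_prefs}   # next index to propose to
    engaged = {}                                 # receiver -> proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_pick[p]]
        next_pick[p] += 1
        if r not in engaged:
            engaged[r] = p                       # r accepts the first offer
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])              # r trades up; old partner freed
            engaged[r] = p
        else:
            free.append(p)                       # r rejects; p proposes again
    return {p: r for r, p in engaged.items()}

# Hypothetical rankings, e.g. derived from engagement and fit KPIs.
influencers = {"ana": ["shoes", "tech"], "bo": ["tech", "shoes"]}
merchants = {"shoes": ["bo", "ana"], "tech": ["ana", "bo"]}
print(gale_shapley(influencers, merchants))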

Reasoning is a fundamental aspect of human intelligence that plays a crucial role in activities such as problem solving, decision making, and critical thinking. In recent years, large language models (LLMs) have made significant progress in natural language processing, and it has been observed that these models may exhibit reasoning abilities when they are sufficiently large. However, it is not yet clear to what extent LLMs are capable of reasoning. This paper provides a comprehensive overview of the current state of knowledge on reasoning in LLMs, including techniques for improving and eliciting reasoning in these models, methods and benchmarks for evaluating reasoning abilities, findings and implications of previous research in this field, and suggestions for future directions. Our aim is to provide a detailed and up-to-date review of this topic and to stimulate meaningful discussion and future work.

We consider the problem of explaining the predictions of graph neural networks (GNNs), which are otherwise treated as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore graph substructures, which are more intuitive and human-intelligible. In this work, we propose a novel method, SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, SubgraphX explains the model's predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. To expedite computations, we propose efficient approximation schemes to compute Shapley values for graph data. Our work represents the first attempt to explain GNNs by explicitly and directly identifying subgraphs. Experimental results show that SubgraphX achieves significantly improved explanations while keeping computations at a reasonable level.
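
The Shapley value is a combinatorial quantity, so pipelines like this estimate it by sampling. Below is a generic Monte Carlo permutation estimator in Python; the toy value function stands in for querying the trained GNN on the coalition-induced subgraph, and the MCTS search over candidate subgraphs is not shown.

import random

def shapley_mc(players, target, value_fn, n_samples=500, seed=0):
    # Permutation-sampling estimate of the Shapley value of `target`:
    # average its marginal contribution over random orderings.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        perm = list(players)
        rng.shuffle(perm)
        before = set(perm[:perm.index(target)])
        total += value_fn(before | {target}) - value_fn(before)
    return total / n_samples

# Toy value function standing in for a GNN's prediction on the subgraph
# induced by a node coalition.
value = lambda coalition: len(coalition) ** 0.5
print(shapley_mc(["a", "b", "c", "d"], "b", value))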

Graph Neural Networks (GNNs) have recently become increasingly popular due to their ability to learn complex systems of relations or interactions arising in a broad spectrum of problems ranging from biology and particle physics to social networks and recommendation systems. Despite the plethora of different models for deep learning on graphs, few approaches have been proposed thus far for dealing with graphs that present some sort of dynamic nature (e.g. evolving features or connectivity over time). In this paper, we present Temporal Graph Networks (TGNs), a generic, efficient framework for deep learning on dynamic graphs represented as sequences of timed events. Thanks to a novel combination of memory modules and graph-based operators, TGNs significantly outperform previous approaches while at the same time being more computationally efficient. We furthermore show that several previous models for learning on dynamic graphs can be cast as specific instances of our framework. We perform a detailed ablation study of the different components of our framework and devise the best configuration, which achieves state-of-the-art performance on several transductive and inductive prediction tasks for dynamic graphs.
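
A minimal sketch of the memory-module idea at the heart of this family of models, assuming a GRU-based memory updater and hypothetical dimensions; real TGNs add message aggregation, graph embedding modules, and careful training-time gradient handling (omitted here via detach).

import torch
import torch.nn as nn

class EventMemory(nn.Module):
    # Each node keeps a memory vector that a GRU cell updates whenever
    # the node participates in a timed interaction event.
    def __init__(self, n_nodes, memory_dim=32, msg_dim=16):
        super().__init__()
        self.memory = torch.zeros(n_nodes, memory_dim)
        self.last_update = torch.zeros(n_nodes)
        self.cell = nn.GRUCell(msg_dim + memory_dim + 1, memory_dim)

    def update(self, src, dst, t, msg):
        # Message for the source node: event features, the destination's
        # memory, and the time elapsed since the source's last event.
        dt = (t - self.last_update[src]).unsqueeze(-1)
        inp = torch.cat([msg, self.memory[dst], dt], dim=-1)
        self.memory[src] = self.cell(inp, self.memory[src]).detach()
        self.last_update[src] = t

mem = EventMemory(n_nodes=100)
mem.update(src=torch.tensor([3]), dst=torch.tensor([7]),
           t=torch.tensor([1.5]), msg=torch.randn(1, 16))
print(mem.memory[3])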

We investigate the problem of automatically determining what type of shoe left an impression found at a crime scene. This recognition problem is made difficult by the variability in types of crime scene evidence (ranging from traces of dust or oil on hard surfaces to impressions made in soil) and the lack of comprehensive databases of shoe outsole tread patterns. We find that mid-level features extracted by pre-trained convolutional neural nets are surprisingly effective descriptors for this specialized domain. However, the choice of similarity measure for matching exemplars to a query image is essential to good performance. For matching multi-channel deep features, we propose the use of multi-channel normalized cross-correlation and analyze its effectiveness. Our proposed metric significantly improves performance in matching crime scene shoeprints to laboratory test impressions. We also show its effectiveness in other cross-domain image retrieval problems: matching facade images to segmentation labels and aerial photos to map images. Finally, we introduce a discriminatively trained variant and fine-tune our system through our proposed metric, obtaining state-of-the-art performance.
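
A minimal NumPy sketch of multi-channel normalized cross-correlation under simplifying assumptions: each channel is standardized and correlated at a single alignment, then the per-channel scores are averaged. The full method additionally correlates over spatial offsets of the query within the exemplar.

import numpy as np

def mcncc(query, exemplar, eps=1e-8):
    # Multi-channel normalized cross-correlation between two C x H x W
    # feature maps at one alignment: per-channel zero-mean, unit-variance
    # normalization, elementwise correlation, then a mean over channels.
    scores = []
    for q, e in zip(query, exemplar):
        qn = (q - q.mean()) / (q.std() + eps)
        en = (e - e.mean()) / (e.std() + eps)
        scores.append(np.mean(qn * en))
    return float(np.mean(scores))

# Hypothetical 8-channel deep feature maps; a noisy copy scores near 1.
q = np.random.rand(8, 32, 32)
e = q + 0.1 * np.random.rand(8, 32, 32)
print(mcncc(q, e))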
