
We study a majority-based preference diffusion model in which the members of a social network update their preferences based on those of their connections. Consider an undirected graph where each node has a strict linear order over a set of $\alpha$ alternatives. In each round, a node randomly selects two adjacent alternatives in its order and updates their relative ranking according to the majority view of its neighbors. We bound the convergence time of the process in terms of the number of nodes/edges and $\alpha$. Furthermore, we study the minimum cost required to ensure that a desired alternative ``wins'' the process, where occupying each position in a node's preference order has a cost. We prove tight bounds on the minimum cost for general graphs and for graphs with strong expansion properties. We also investigate a more lightweight process in which each node chooses one of its neighbors uniformly at random and, with some fixed probability, copies that neighbor's order in full, remaining unchanged otherwise. We characterize the convergence properties of this process, namely its convergence time and stable states, using martingale and reversible Markov chain analysis. Finally, we present the outcomes of our experiments on different synthetic random graph models and on graph data from online social platforms. These experiments not only support our theoretical findings, but also shed light on other fundamental problems, such as the design of effective countermeasures.
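
The update rule lends itself to a compact simulation. Below is a minimal Python sketch of one possible reading of the process; the tie-breaking rule (no swap on a tied vote) and all names are our assumptions, not taken from the paper.

```python
import random

def majority_preference_diffusion(graph, orders, rounds=1000, seed=0):
    """Toy simulation of the majority-based preference diffusion process.
    graph:  maps each node to a list of its neighbors
    orders: maps each node to a list of alternatives, most preferred first"""
    rng = random.Random(seed)
    nodes = list(graph)
    for _ in range(rounds):
        v = rng.choice(nodes)
        order = orders[v]
        i = rng.randrange(len(order) - 1)          # two adjacent alternatives
        a, b = order[i], order[i + 1]
        # Majority view: +1 for each neighbor ranking a above b, else -1.
        votes = sum(1 if orders[u].index(a) < orders[u].index(b) else -1
                    for u in graph[v])
        if votes < 0:                              # majority prefers b: swap
            order[i], order[i + 1] = b, a
    return orders

# Example: a path on 3 nodes with alternatives {x, y, z}.
g = {0: [1], 1: [0, 2], 2: [1]}
prefs = {0: ["x", "y", "z"], 1: ["z", "x", "y"], 2: ["x", "z", "y"]}
print(majority_preference_diffusion(g, prefs))
```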

Related content

Processing is an open-source programming language and the name of its accompanying integrated development environment (IDE). Processing is used in the electronic art and visual design communities to teach the fundamentals of programming, and it is employed in a large number of new-media and interactive art works.

Table understanding capability of Large Language Models (LLMs) has been extensively studied through the task of question-answering (QA) over tables. Typically, only a small part of the whole table is relevant for deriving the answer to a given question. The irrelevant parts act as noise and distracting information, resulting in sub-optimal performance due to the vulnerability of LLMs to noise. To mitigate this, we propose CABINET (Content RelevAnce-Based NoIse ReductioN for TablE QuesTion-Answering), a framework that enables LLMs to focus on relevant tabular data by suppressing extraneous information. CABINET comprises an Unsupervised Relevance Scorer (URS), trained differentiably with the QA LLM, that weighs the table content based on its relevance to the input question before feeding it to the question-answering LLM (QA LLM). To further aid the relevance scorer, CABINET employs a weakly supervised module that generates a parsing statement describing the criteria for rows and columns relevant to the question and highlights the content of the corresponding table cells. CABINET significantly outperforms various tabular LLM baselines as well as GPT3-based in-context learning methods, is more robust to noise, maintains its advantage on tables of varying sizes, and establishes new SoTA performance on the WikiTQ, FeTaQA, and WikiSQL datasets. We release our code and datasets at https://github.com/Sohanpatnaik106/CABINET_QA.
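
To make the noise-suppression idea concrete, here is a toy Python sketch. In CABINET the Unsupervised Relevance Scorer produces soft weights applied to the table's token representations; this simplified version instead hard-thresholds externally supplied cell scores before serializing the table for the QA model. The function, data, and threshold are all illustrative assumptions.

```python
def suppress_irrelevant(table, scores, threshold=0.5):
    """Keep only cells whose relevance score clears the threshold, then
    serialize the surviving rows for the question-answering model.
    table:  list of rows, each a list of cell strings
    scores: parallel list of per-cell relevance scores in [0, 1]"""
    kept = []
    for row, row_scores in zip(table, scores):
        cells = [c for c, s in zip(row, row_scores) if s >= threshold]
        if cells:
            kept.append(" | ".join(cells))
    return "\n".join(kept)

# Example: the "Team" column is irrelevant to a question about goals.
table = [["Player", "Team", "Goals"], ["Messi", "PSG", "30"]]
scores = [[0.9, 0.2, 0.9], [0.8, 0.1, 0.9]]
print(suppress_irrelevant(table, scores))
```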

The CTL learning problem consists in finding, for a given sample of positive and negative Kripke structures, a distinguishing CTL formula that is satisfied by the former but not by the latter. Further constraints may bound the size and shape of the desired formula or even require its minimality in terms of syntactic size. This synthesis problem is motivated by explanation generation for dissimilar models, e.g. comparing a faulty implementation with the original protocol. We devise a SAT-based encoding for a CTL formula of fixed size, then provide an incremental approach that guarantees minimality. We further report on a prototype implementation whose contribution is twofold: first, it allows us to assess the efficiency of various output fragments and optimizations; second, we can experimentally evaluate the tool by randomly mutating Kripke structures or syntactically introducing errors in higher-level models, and then learning distinguishing CTL formulas.
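
The minimality guarantee of the incremental approach can be sketched as a simple outer loop: try sizes 1, 2, ... and return the first distinguishing formula found. The Python skeleton below abstracts the paper's SAT-based encoding behind a callback; the names and the callback interface are our assumptions.

```python
def learn_minimal_ctl(positives, negatives, solve_fixed_size, max_size=20):
    """Incremental search guaranteeing syntactic minimality.
    solve_fixed_size(positives, negatives, n) stands in for the SAT-based
    encoding: it should return a CTL formula of size n that holds on every
    positive Kripke structure and on no negative one, or None when the
    propositional encoding is unsatisfiable."""
    for n in range(1, max_size + 1):
        formula = solve_fixed_size(positives, negatives, n)
        if formula is not None:
            return formula  # minimal by construction: all smaller sizes failed
    return None  # no distinguishing formula within the size budget
```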

Large language models (LLMs) predominantly employ decoder-only transformer architectures, which necessitate retaining the keys/values of historical tokens to provide contextual information and avoid redundant computation. However, the substantial size and parameter volume of these LLMs require massive GPU memory. This memory demand grows with the length of the input text, leading to an urgent need for more efficient methods of information storage and processing. This study introduces the Anchor-based LLM (AnLLM), which utilizes a novel anchor-based self-attention network (AnSAN) together with an anchor-based inference strategy. This approach enables LLMs to compress sequence information into an anchor token, reducing the keys/values cache and enhancing inference efficiency. Experiments show that the AnLLM maintains comparable accuracy with up to 99% keys/values cache reduction and up to 3.5 times faster inference. Despite a minor compromise in accuracy, the AnLLM significantly improves computational efficiency and resource utilization, demonstrating the potential of the anchor-based attention approach for real-time LLM inference in practical applications.
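
The cache-reduction mechanism can be illustrated in a few lines: once a chunk of the sequence has been processed and its information aggregated into the anchor token via self-attention, all other cached keys/values for that chunk can be dropped. The PyTorch sketch below is our simplification, not the AnLLM implementation; shapes and names are assumptions.

```python
import torch

def compress_kv_to_anchor(keys, values, anchor_idx):
    """Discard all cached keys/values of a processed chunk except those at
    the anchor position. keys, values: (seq_len, num_heads, head_dim)"""
    return keys[anchor_idx:anchor_idx + 1], values[anchor_idx:anchor_idx + 1]

# Example: a 128-token chunk whose final token serves as the anchor.
seq_len, heads, dim = 128, 8, 64
k, v = torch.randn(seq_len, heads, dim), torch.randn(seq_len, heads, dim)
k_c, v_c = compress_kv_to_anchor(k, v, anchor_idx=seq_len - 1)
print(k_c.shape)  # torch.Size([1, 8, 64]): the chunk's cache shrinks 128x
```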

In data analysis, there continues to be a need for interpretable dimensionality reduction methods whereby intrinsic meaning associated with the data is retained in the reduced space. Standard approaches such as Principal Component Analysis (PCA) and the Singular Value Decomposition (SVD) fail at this task. A popular alternative is the CUR decomposition. In an SVD-like manner, the CUR decomposition approximates a matrix $A \in \mathbb{R}^{m \times n}$ as $A \approx CUR$, where $C$ and $R$ are matrices whose columns and rows are selected from the original matrix \cite{goreinov1997theory}, \cite{mahoney2009cur}. The difficulty in constructing a CUR decomposition lies in determining which columns and rows to select when forming $C$ and $R$. Current column/row selection algorithms, particularly those that rely on an SVD, become infeasible as the size of the data grows \cite{dong2021simpler}. We address this problem by reducing the column/row selection problem to a collection of smaller sub-problems. The basic idea is to first partition the rows/columns of a matrix and then apply an existing selection algorithm to each piece; for illustration purposes we use the Discrete Empirical Interpolation Method (\textsf{DEIM}) \cite{sorensen2016deim}. For the first task, we consider two existing algorithms that construct a Voronoi Tessellation (VT) of the rows and columns of a given matrix, and we extend these methods to adapt automatically to the data. The result is four data-driven row/column selection methods that are well-suited to parallelization and compatible with nearly any existing column/row selection strategy. Theory and numerical examples show the design to be competitive with the original \textsf{DEIM} routine.
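
For readers unfamiliar with the decomposition itself, the following NumPy sketch assembles $C$, $U$, and $R$ from chosen indices, using pivoted QR as a simple stand-in for \textsf{DEIM}; it illustrates the CUR format only, not the paper's partitioned, Voronoi-based selection scheme.

```python
import numpy as np
from scipy.linalg import qr

def cur_from_indices(A, col_idx, row_idx):
    """Assemble A ~= C @ U @ R from chosen column/row indices,
    with U computed via pseudoinverses."""
    C, R = A[:, col_idx], A[row_idx, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

k = 10
A = np.random.randn(200, k) @ np.random.randn(k, 150)  # rank-k test matrix
_, _, col_piv = qr(A, pivoting=True)                   # column selection
_, _, row_piv = qr(A.T, pivoting=True)                 # row selection
C, U, R = cur_from_indices(A, col_piv[:k], row_piv[:k])
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))  # ~0: exact at rank k
```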

The maximum absolute correlation between regressors, known as mutual coherence, plays an essential role in sparse estimation. Optimal input design may produce a regressor matrix whose columns are highly correlated, since it imposes no constraint on the mutual coherence; when such a regressor is used to estimate the sparse parameter vectors of a system, it may yield a large estimation error. This paper aims to tackle this issue for fixed denominator models, which include, for example, Laguerre, Kautz, and generalized orthonormal basis function expansion models. The paper proposes an optimal input design method in which the achieved Fisher information matrix is fitted to the desired Fisher information matrix, together with a coordinate transformation designed so that the regressors in the transformed coordinates have low mutual coherence. The method can be used together with any sparse estimation method, and in a numerical study we show its potential for alleviating the problem of model order selection when used in conjunction with classical methods such as AIC and BIC.
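
Mutual coherence itself is straightforward to compute; the short NumPy sketch below (our own illustration, unrelated to the paper's design method) shows how near-duplicate regressor columns drive the coherence toward its worst-case value of 1.

```python
import numpy as np

def mutual_coherence(Phi):
    """Maximum absolute correlation between any two distinct columns
    of the regressor matrix Phi (columns normalized to unit norm)."""
    X = Phi / np.linalg.norm(Phi, axis=0)
    G = np.abs(X.T @ X)           # pairwise column correlations
    np.fill_diagonal(G, 0.0)      # ignore self-correlation
    return G.max()

rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 5))
Phi[:, 1] = Phi[:, 0] + 0.01 * rng.standard_normal(100)  # near-duplicate column
print(mutual_coherence(Phi))  # close to 1: problematic for sparse estimation
```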

Despite the recent progress in deep learning, most approaches still adopt a silo-like solution, learning each task in isolation by training a separate neural network for each individual task. Many real-world problems, however, call for a multi-modal approach and, therefore, for multi-tasking models. Multi-task learning (MTL) aims to leverage useful information across tasks to improve the generalization capability of a model. This thesis is concerned with multi-task learning in the context of computer vision. First, we review existing approaches for MTL. Next, we propose several methods that tackle important aspects of multi-task learning. The proposed methods are evaluated on various benchmarks, and the results show several advances over the state of the art in multi-task learning. Finally, we discuss several possibilities for future work.

The new era of technology has made it convenient for people to share their opinions across an abundance of platforms. These platforms allow users to express themselves in multiple forms of representation, including text, images, videos, and audio. This, however, makes it difficult for users to obtain all the key information on a topic, making the task of automatic multi-modal summarization (MMS) essential. In this paper, we present a comprehensive survey of the existing research in the area of MMS.

Link prediction on knowledge graphs (KGs) is a key research topic. Previous work has mainly focused on binary relations, paying less attention to higher-arity relations even though they are ubiquitous in real-world KGs. This paper considers link prediction over n-ary relational facts and proposes a graph-based approach to the task. The key to our approach is to represent the n-ary structure of a fact as a small heterogeneous graph and to model this graph with edge-biased fully-connected attention. The fully-connected attention captures universal inter-vertex interactions, while edge-aware attentive biases specifically encode the graph structure and its heterogeneity. In this fashion, our approach fully models global and local dependencies within each n-ary fact and hence can more effectively capture the associations therein. Extensive evaluation verifies the effectiveness and superiority of our approach: it performs substantially and consistently better than the current state of the art across a variety of n-ary relational benchmarks. Our code is publicly available.
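
As a rough illustration of the central mechanism, the single-head PyTorch sketch below adds a learned per-pair edge bias to standard scaled dot-product attention logits; the dimensions, single-head setup, and parameter names are our simplifications, not the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

def edge_biased_attention(x, edge_bias):
    """Fully-connected attention over one n-ary fact's graph, with additive
    edge-aware biases.
    x:         (n, d) vertex embeddings (e.g. relation + entity vertices)
    edge_bias: (n, n) scalar bias per ordered vertex pair, encoding the
               edge type (or absence of an edge) between them"""
    d = x.shape[-1]
    scores = (x @ x.T) / d**0.5 + edge_bias  # dot-product logits + edge bias
    attn = F.softmax(scores, dim=-1)         # every vertex attends to all others
    return attn @ x                          # updated vertex representations

n, d = 5, 16                                 # e.g. one relation + four entities
print(edge_biased_attention(torch.randn(n, d), torch.randn(n, n)).shape)
```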

The recent proliferation of knowledge graphs (KGs), coupled with incomplete or partial information in the form of missing relations (links) between entities, has fueled a lot of research on knowledge base completion (also known as relation prediction). Several recent works suggest that convolutional neural network (CNN) based models generate richer and more expressive feature embeddings and hence also perform well on relation prediction. However, we observe that these KG embeddings treat triples independently and thus fail to capture the complex and hidden information inherently implicit in the local neighborhood surrounding a triple. To this end, our paper proposes a novel attention-based feature embedding that captures both entity and relation features in any given entity's neighborhood. Additionally, we encapsulate relation clusters and multi-hop relations in our model. Our empirical study offers insights into the efficacy of our attention-based model, and we show marked performance gains in comparison to state-of-the-art methods on all datasets.
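
One plausible (GAT-style) reading of the neighborhood attention is sketched below in PyTorch: each neighboring triple contributes a feature built from both its relation and tail-entity embeddings, weighted by a learned attention score. The parametrization is our assumption, not the paper's exact model.

```python
import torch
import torch.nn.functional as F

def neighborhood_attention(h_e, rel_emb, nbr_emb, W, a):
    """Aggregate an entity's neighborhood triples with attention.
    h_e:     (d,)    central entity embedding
    rel_emb: (k, d)  relations of the k neighboring triples
    nbr_emb: (k, d)  tail entities of those triples
    W:       (3d, d) projection of concatenated triple features
    a:       (d,)    attention vector"""
    k = rel_emb.shape[0]
    triple = torch.cat([h_e.expand(k, -1), rel_emb, nbr_emb], dim=-1)  # (k, 3d)
    feats = triple @ W                                                 # (k, d)
    alpha = F.softmax(F.leaky_relu(feats @ a), dim=0)                  # (k,)
    return (alpha.unsqueeze(-1) * feats).sum(dim=0)                    # (d,)

d, k = 8, 4
out = neighborhood_attention(torch.randn(d), torch.randn(k, d),
                             torch.randn(k, d), torch.randn(3 * d, d),
                             torch.randn(d))
print(out.shape)  # torch.Size([8])
```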

Recently, ensembles have been applied to deep metric learning to yield state-of-the-art results. Deep metric learning aims to learn deep neural networks for feature embeddings whose distances satisfy a given constraint. In deep metric learning, an ensemble averages the distances learned by multiple learners. As one important aspect of an ensemble, the learners should be diverse in their feature embeddings. To this end, we propose an attention-based ensemble, which uses multiple attention masks so that each learner can attend to different parts of the object. We also propose a divergence loss, which encourages diversity among the learners. The proposed method is applied to the standard benchmarks of deep metric learning, and experimental results show that it outperforms state-of-the-art methods by a significant margin on image retrieval tasks.
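
To give a flavor of the diversity objective, the PyTorch sketch below implements a hinge-style divergence loss that pushes different learners' embeddings of the same images apart; the hinge form, margin, and names are our assumptions rather than the paper's exact formulation.

```python
import torch

def divergence_loss(embeddings, margin=1.0):
    """Penalize pairs of learners whose embeddings of the same inputs are
    closer than the margin, encouraging ensemble diversity.
    embeddings: list of (batch, d) tensors, one per learner."""
    loss, pairs = 0.0, 0
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            dist = (embeddings[i] - embeddings[j]).pow(2).sum(dim=1)
            loss = loss + torch.clamp(margin - dist, min=0).mean()
            pairs += 1
    return loss / pairs

learners = [torch.randn(32, 64) for _ in range(3)]  # 3 learners, batch of 32
print(divergence_loss(learners))
```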
