
This research introduces an innovative approach to exploring the cognitive and biologically inspired underpinnings of feature vector splitting for analyzing the significance of different attributes in e-security biometric signature recognition applications. Departing from traditional methods of concatenating features into an extended set, we employ multiple splitting strategies, aligning with cognitive principles, to preserve control over the relative importance of each feature subset. Our methodology is applied to three diverse databases (MCYT100, MCYT300, and SVC) using two classifiers (vector quantization and dynamic time warping, with one and five training samples). Experimentation demonstrates that the fusion of pressure data with spatial coordinates (x and y) consistently enhances performance. However, the inclusion of pen-tip angles in the same feature set yields mixed results, with performance improvements observed in select cases. This work delves into the cognitive aspects of feature fusion, shedding light on the cognitive relevance of feature vector splitting in e-security biometric applications.
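To make the splitting idea concrete, the sketch below scores each feature subset with its own DTW distance and fuses the per-subset scores with explicit weights, which is what preserves control over each subset's relative importance. The column layout, the weights, and the plain O(nm) DTW are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of score-level fusion over split feature subsets,
# assuming DTW matching; names and weights are illustrative.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic time warping between two (T, d) sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def split_and_fuse(ref: np.ndarray, probe: np.ndarray, splits, weights) -> float:
    """Score each feature subset separately, then fuse with explicit weights."""
    scores = [dtw_distance(ref[:, cols], probe[:, cols]) for cols in splits]
    return float(np.dot(weights, scores))

# Columns: 0-1 spatial (x, y), 2 pressure, 3-4 pen-tip angles (hypothetical layout).
splits = [[0, 1], [2], [3, 4]]
weights = [0.5, 0.3, 0.2]  # relative importance of each subset (illustrative)
```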

Related Content

We propose a novel database model whose basic structure is a labeled, directed, acyclic graph with a single root, in which the nodes represent the data sets of an application and the edges represent functional relationships among the data sets. We call such a graph an application context or simply a context. The query language of a context consists of two types of queries, traversal queries and analytic queries. Both types of queries are defined using a simple functional algebra whose operations are functional restriction, composition of functions, pairing of functions, and Cartesian product of sets. Roughly speaking, traversal queries parallel relational algebra queries, whereas analytic queries parallel SQL Group-by queries. In other words, in our model, traversal queries and analytic queries are both defined within the same formal framework - in contrast to the relational model, where analytic queries are defined outside the relational algebra. Therefore, a distinctive feature of our model is that it supports data management and data analytics within the same formal framework. We demonstrate the expressive power of our model by showing: (a) how a relational database can be defined as a view over a context, with the context playing the role of an underlying semantic layer; (b) how an analytic query over a context can be rewritten at two orthogonal levels: at the level of the traversal queries that do the grouping and measuring, and at the level of the analytic query itself; and (c) how a context can be used as a user-friendly interface for querying relations and analysing relational data.
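As a concrete illustration of the functional algebra, the following sketch encodes context edges as finite functions (Python dicts) and builds the abstract's operations on top of them. The toy data sets and the Group-by encoding are assumptions for illustration, not the model's formal definitions.

```python
# A minimal sketch of the context model's functional algebra, assuming
# finite functions represented as dicts; operation names follow the
# abstract (restriction, composition, pairing), the encoding is ours.
def compose(g, f):
    """(g . f)(x) = g(f(x)) for finite functions as dicts."""
    return {x: g[f[x]] for x in f}

def restrict(f, subset):
    """Functional restriction of f to a subset of its domain."""
    return {x: f[x] for x in f if x in subset}

def pair(f, g):
    """Pairing: x -> (f(x), g(x)) on the common domain."""
    return {x: (f[x], g[x]) for x in f if x in g}

def group_by(grouping, measure, aggregate=sum):
    """An analytic (SQL Group-by style) query: aggregate `measure` per group."""
    groups = {}
    for x, key in grouping.items():
        groups.setdefault(key, []).append(measure[x])
    return {key: aggregate(vals) for key, vals in groups.items()}

# Edges of a toy context: Sale -> Store and Sale -> Amount (hypothetical data sets).
store  = {"s1": "Athens", "s2": "Paris", "s3": "Athens"}
amount = {"s1": 10, "s2": 7, "s3": 5}
print(group_by(store, amount))  # {'Athens': 15, 'Paris': 7}
```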

Combinatorial Optimization (CO) problems are fundamentally important in numerous practical applications across diverse industries, characterized by enormous solution spaces and demand for time-sensitive responses. Despite significant advancements made by recent neural solvers, their limited expressiveness does not conform well to the multi-modal nature of CO landscapes. While some research has pivoted towards diffusion models, these require simulating a Markov chain with many steps to produce a sample, which is time-consuming and does not meet the efficiency requirements of real applications, especially at scale. We propose DISCO, an efficient DIffusion Solver for Combinatorial Optimization problems that excels in both solution quality and inference speed. DISCO's efficacy is two-pronged: first, it achieves rapid denoising of solutions through an analytically solvable form, allowing direct sampling from the solution space with very few reverse-time steps and thereby drastically reducing inference time. Second, DISCO enhances solution quality by restricting the sampling space to a more constrained, meaningful domain guided by solution residues, while still preserving the inherent multi-modality of the output probability distributions. DISCO achieves state-of-the-art results on very large Traveling Salesman Problems with 10000 nodes and on challenging Maximal Independent Set benchmarks, with per-instance denoising time up to 44.8 times faster. By further combining a divide-and-conquer strategy, DISCO can be generalized to solve arbitrary-scale problem instances off the shelf, even outperforming models trained specifically on the corresponding scales.
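To illustrate what "very few reverse-time steps" buys, the sketch below performs a generic DDIM-style deterministic update that jumps along a short subsequence of timesteps instead of simulating the full chain. This is not DISCO's own analytically solvable form; the noise schedule, the placeholder denoiser, and the step choices are all assumptions.

```python
# A minimal sketch of few-step reverse-time sampling; a placeholder
# denoiser stands in for a trained network, and the solution-residue
# guidance from the abstract is omitted.
import numpy as np

def denoiser(x_t, t):
    """Placeholder for a learned network predicting the noise at step t."""
    return np.zeros_like(x_t)  # assumption: stands in for a trained model

def few_step_sample(shape, steps=(1000, 500, 100, 1), alphas_bar=None):
    """Jump along a short subsequence of timesteps instead of the full chain."""
    if alphas_bar is None:
        betas = np.linspace(1e-4, 2e-2, 1001)
        alphas_bar = np.cumprod(1.0 - betas)
    x = np.random.randn(*shape)  # start from pure noise
    for t, t_next in zip(steps[:-1], steps[1:]):
        eps = denoiser(x, t)
        # Closed-form estimate of the clean sample, then deterministic jump.
        x0_hat = (x - np.sqrt(1 - alphas_bar[t]) * eps) / np.sqrt(alphas_bar[t])
        x = np.sqrt(alphas_bar[t_next]) * x0_hat + np.sqrt(1 - alphas_bar[t_next]) * eps
    return x  # near-clean scores to decode into a CO solution
```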

The elementary theory of bivariate linear Diophantine equations over polynomial rings is used to construct causal lifting factorizations for causal two-channel FIR perfect reconstruction filter banks and wavelet transforms. The Diophantine approach generates causal factorizations satisfying certain polynomial degree-reducing inequalities, enabling a new lifting factorization strategy called the Causal Complementation Algorithm. This provides a causal, hence realizable, alternative to the noncausal lifting scheme developed by Daubechies and Sweldens using the Extended Euclidean Algorithm for Laurent polynomials. The new approach replaces the Euclidean Algorithm with a slight generalization of polynomial division that ensures existence and uniqueness of quotients whose remainders satisfy user-specified divisibility constraints. The Causal Complementation Algorithm is shown to be more general than the causal (polynomial) version of the Euclidean Algorithm approach by generating additional causal lifting factorizations beyond those obtainable using the polynomial Euclidean Algorithm.
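For orientation, the sketch below implements classical polynomial long division, the primitive whose degree-reducing guarantee (deg remainder < deg divisor) the Causal Complementation Algorithm generalizes with user-specified divisibility constraints on remainders; the generalized variant itself is not reproduced here.

```python
# A minimal sketch of polynomial long division; coefficient lists are
# lowest-degree first. The divisibility-constrained generalization from
# the paper is not shown.
def poly_divmod(num, den):
    """Return (quotient, remainder) with deg(remainder) < deg(den)."""
    num = list(num)
    q = [0.0] * max(len(num) - len(den) + 1, 1)
    for k in range(len(num) - len(den), -1, -1):
        coeff = num[k + len(den) - 1] / den[-1]
        q[k] = coeff
        for i, d in enumerate(den):
            num[k + i] -= coeff * d
    r = num[: len(den) - 1] or [0.0]
    return q, r

# (1 + 2z + z^2) / (1 + z): quotient 1 + z, remainder 0
print(poly_divmod([1.0, 2.0, 1.0], [1.0, 1.0]))
```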

Satisfiability modulo nonlinear real arithmetic theory (SMT(NRA)) solving is essential to multiple applications, including program verification, program synthesis, and software testing. In this context, the model constructing satisfiability calculus (MCSAT) was recently invented to search for models directly in the theory space. Although subsequent papers have discussed practical directions and updates on MCSAT, less attention has been paid to the detailed implementation. In this paper, we present an efficient implementation of dynamic variable ordering for MCSAT, called dnlsat. We describe carefully designed data structures and promising mechanisms, such as the branching heuristic, restarts, and lemma management. Besides, we also give a theoretical study of potential influences brought by the dynamic variable ordering. The experimental evaluation shows that dnlsat accelerates the solving speed and solves more satisfiable instances than other state-of-the-art SMT solvers. Demonstration Video: //youtu.be/T2Z0gZQjnPw Code: //github.com/yogurt-shadow/dnlsat/tree/master/code Benchmark: //zenodo.org/records/10607722/files/QF_NRA.tar.zst?download=1
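As one plausible illustration of a dynamic variable ordering, the sketch below implements an activity-based (VSIDS-style) branching heuristic with lazy heap deletion; dnlsat's actual data structures, decay schedule, and interaction with MCSAT are assumptions here, not the tool's implementation.

```python
# A minimal sketch of an activity-based dynamic variable ordering with
# lazy deletion; the decay constant and heap encoding are assumptions.
import heapq

class VarOrder:
    def __init__(self, variables, decay=0.95):
        self.activity = {v: 0.0 for v in variables}
        self.bump_amount = 1.0
        self.decay = decay
        self.heap = [(0.0, v) for v in variables]
        heapq.heapify(self.heap)

    def bump(self, v):
        """Increase a variable's activity when it appears in a conflict lemma."""
        self.activity[v] += self.bump_amount
        heapq.heappush(self.heap, (-self.activity[v], v))

    def decay_all(self):
        """Geometric decay via a growing bump, avoiding a full-table rescale."""
        self.bump_amount /= self.decay

    def pick(self, assigned):
        """Pop the highest-activity unassigned variable, skipping stale entries."""
        while self.heap:
            act, v = heapq.heappop(self.heap)
            if v not in assigned and -act == self.activity[v]:
                return v
        return None
```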

Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents--computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent's experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty-five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture--observation, planning, and reflection--each contribute critically to the believability of agent behavior. By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.
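The retrieval component of the architecture can be made concrete with a small sketch: memories are ranked by a combination of recency, importance, and relevance before being surfaced to the planner. The decay rate, the 1-10 importance scale, the embedding source, and the equal weighting below are assumptions, not the paper's exact parameters.

```python
# A minimal sketch of recency + importance + relevance memory retrieval;
# weights, decay, and the embedding function are assumptions.
import math, time

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(memories, query_vec, now=None, k=5, decay=0.995):
    """Rank memory records and return the top k for the planner."""
    now = now or time.time()
    def score(m):
        recency = decay ** ((now - m["last_access"]) / 3600.0)  # decay per hour
        importance = m["importance"] / 10.0                     # 1-10 scale, normalized
        relevance = cosine(m["embedding"], query_vec)           # semantic match
        return recency + importance + relevance                 # equal weights (assumption)
    return sorted(memories, key=score, reverse=True)[:k]
```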

The past decade has witnessed a plethora of works that leverage the power of visualization (VIS) to interpret machine learning (ML) models. The corresponding research topic, VIS4ML, keeps growing at a fast pace. To better organize the enormous works and shed light on the developing trend of VIS4ML, we provide a systematic review of these works through this survey. Since data quality greatly impacts the performance of ML models, our survey focuses specifically on summarizing VIS4ML works from the data perspective. First, we categorize the common data handled by ML models into five types, explain the unique features of each type, and highlight the corresponding ML models that are good at learning from them. Second, from the large number of VIS4ML works, we tease out six tasks that operate on these types of data (i.e., data-centric tasks) at different stages of the ML pipeline to understand, diagnose, and refine ML models. Lastly, by studying the distribution of 143 surveyed papers across the five data types, six data-centric tasks, and their intersections, we analyze the prospective research directions and envision future research trends.

Deep learning techniques have led to remarkable breakthroughs in the field of generic object detection and have spawned a lot of scene-understanding tasks in recent years. Scene graph has been the focus of research because of its powerful semantic representation and applications to scene understanding. Scene Graph Generation (SGG) refers to the task of automatically mapping an image into a semantic structural scene graph, which requires the correct labeling of detected objects and their relationships. Although this is a challenging task, the community has proposed a lot of SGG approaches and achieved good results. In this paper, we provide a comprehensive survey of recent achievements in this field brought about by deep learning techniques. We review 138 representative works that cover different input modalities, and systematically summarize existing methods of image-based SGG from the perspective of feature extraction and fusion. We attempt to connect and systematize the existing visual relationship detection methods, to summarize, and interpret the mechanisms and the strategies of SGG in a comprehensive way. Finally, we finish this survey with deep discussions about current existing problems and future research directions. This survey will help readers to develop a better understanding of the current research status and ideas.

Learning disentanglement aims at finding a low-dimensional representation that consists of multiple explanatory and generative factors of the observational data. The variational autoencoder (VAE) framework is commonly used to disentangle independent factors from observations. However, in real scenarios, factors with semantics are not necessarily independent. Instead, there might be an underlying causal structure that renders these factors dependent. We thus propose a new VAE-based framework named CausalVAE, which includes a Causal Layer to transform independent exogenous factors into causal endogenous ones that correspond to causally related concepts in data. We further analyze the model's identifiability, showing that the proposed model learned from observations recovers the true one up to a certain degree. Experiments are conducted on various datasets, including synthetic data and the real-world benchmark CelebA. Results show that the causal representations learned by CausalVAE are semantically interpretable, and that their causal relationships, as a Directed Acyclic Graph (DAG), are identified with good accuracy. Furthermore, we demonstrate that the proposed CausalVAE model is able to generate counterfactual data through "do-operations" on the causal factors.
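A minimal sketch of a linear causal layer in this spirit: exogenous factors eps are mapped to endogenous factors z through a linear SCM z = A^T z + eps, i.e. z = (I - A^T)^{-1} eps, with A a learnable adjacency matrix that is constrained (elsewhere in training) to encode a DAG. The PyTorch encoding and the iterative do-operation below are illustrative assumptions, not the paper's exact layers.

```python
# A minimal sketch of a linear SCM causal layer and a do-operation;
# the DAG constraint on A is assumed to be enforced during training.
import torch
import torch.nn as nn

class CausalLayer(nn.Module):
    def __init__(self, n_concepts: int):
        super().__init__()
        self.A = nn.Parameter(torch.zeros(n_concepts, n_concepts))

    def forward(self, eps: torch.Tensor) -> torch.Tensor:
        """Row-vector form of z = (I - A^T)^{-1} eps."""
        eye = torch.eye(self.A.size(0), device=eps.device)
        return eps @ torch.inverse(eye - self.A)

    def do(self, eps: torch.Tensor, idx: int, value: float) -> torch.Tensor:
        """do(z_idx = value): clamp one factor and re-solve z = A^T z + eps."""
        z = torch.zeros_like(eps)
        for _ in range(self.A.size(0)):  # n passes suffice when A encodes a DAG
            z = z @ self.A + eps
            z[:, idx] = value
        return z
```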

The design of deep graph models remains under-explored, and a crucial question is how to explore and exploit the knowledge from different hops of neighbors in an efficient way. In this paper, we propose a novel RNN-like deep graph neural network architecture by incorporating AdaBoost into the computation of the network; the proposed graph convolutional network, called AdaGCN (AdaBoosting Graph Convolutional Network), can efficiently extract knowledge from high-order neighbors and integrate knowledge from different hops of neighbors into the network in an AdaBoost manner. We also present the architectural differences between AdaGCN and existing graph convolutional methods to show the benefits of our proposal. Finally, extensive experiments demonstrate the state-of-the-art prediction performance and the computational advantage of our approach, AdaGCN.
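The AdaBoost-style combination over hops can be sketched as follows: one weak learner per propagation depth l operates on features propagated l times, and the learners are combined with multi-class AdaBoost (SAMME) weights. The logistic-regression base learner and the normalization details are assumptions for illustration, not AdaGCN's exact layers.

```python
# A minimal sketch of boosting one weak learner per neighbor hop;
# adj_norm is assumed to be a normalized adjacency matrix (A_hat).
import numpy as np
from sklearn.linear_model import LogisticRegression

def adaboost_over_hops(adj_norm, X, y, n_hops=3, n_classes=None):
    n, k = len(y), n_classes or len(np.unique(y))
    w = np.full(n, 1.0 / n)             # sample weights over labeled nodes
    H, feats = np.zeros((n, k)), X.copy()
    for _ in range(n_hops):
        clf = LogisticRegression(max_iter=1000).fit(feats, y, sample_weight=w)
        pred = clf.predict(feats)
        err = np.clip(np.dot(w, pred != y), 1e-10, 1 - 1e-10)
        alpha = np.log((1 - err) / err) + np.log(k - 1)  # SAMME learner weight
        w *= np.exp(alpha * (pred != y)); w /= w.sum()   # re-weight mistakes
        H += alpha * clf.predict_proba(feats)            # accumulate ensemble
        feats = adj_norm @ feats                         # next hop: A_hat^l X
    return H.argmax(axis=1)
```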

Knowledge representation learning (KRL) aims to represent entities and relations in a knowledge graph in a low-dimensional semantic space; such representations have been widely used in massive knowledge-driven tasks. In this article, we introduce the reader to the motivations for KRL and give an overview of existing approaches. Afterwards, we conduct an extensive quantitative comparison and analysis of several typical KRL methods on three evaluation tasks of knowledge acquisition, including knowledge graph completion, triple classification, and relation extraction. We also review real-world applications of KRL, such as language modeling, question answering, information retrieval, and recommender systems. Finally, we discuss the remaining challenges and outline future directions for KRL. The codes and datasets used in the experiments can be found at //github.com/thunlp/OpenKE.
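As a concrete example of one typical KRL method, the sketch below implements a TransE-style score, where a triple (h, r, t) is plausible when h + r ≈ t in the embedding space, together with the usual margin-based ranking loss; the dimensions and random data are illustrative.

```python
# A minimal sketch of TransE-style scoring and its margin ranking loss;
# embeddings here are random stand-ins for learned vectors.
import numpy as np

def transe_score(h, r, t, norm=1):
    """Lower is better: distance between translated head and tail."""
    return np.linalg.norm(h + r - t, ord=norm)

def margin_loss(pos, neg, margin=1.0):
    """Margin-based ranking loss over a positive and a corrupted triple."""
    return max(0.0, margin + transe_score(*pos) - transe_score(*neg))

dim = 50
rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=dim) for _ in range(3))
t_corrupt = rng.normal(size=dim)  # corrupted tail for the negative triple
print(margin_loss((h, r, t), (h, r, t_corrupt)))
```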
