
A blockchain facilitates secure and atomic transactions between mutually untrusting parties on that chain. Today, there are multiple blockchains with differing interfaces and security properties. Programming in this multi-blockchain world is hindered by the lack of general and convenient abstractions for cross-chain communication and computation. Current cross-chain communication bridges have varied and low-level interfaces, making it difficult to develop portable applications. Current methods for multi-chain atomic transactions are limited in scope to cryptocurrency swaps. This work addresses these issues. We first define a uniform, high-level interface for communication between chains. Building on this interface, we formulate a protocol that guarantees atomicity for general transactions whose operations may span several chains. We formulate and prove the desired correctness and security properties of this protocol. Our prototype implementation is built using the LayerZero cross-chain bridge. Experience with this implementation shows that the new abstractions considerably simplify the design and implementation of multi-chain transactions. Experimental evaluation with multi-chain swap transactions demonstrates performance comparable to that of custom-built implementations.
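To make the idea of a uniform interface concrete, below is a minimal sketch of what such a cross-chain messaging facade could look like. The names (`ChainBridge`, `CrossChainMessage`, `send`, `receive`) are illustrative assumptions for exposition, not the paper's actual interface or LayerZero's API.

```python
# A hypothetical uniform facade over heterogeneous bridge back ends.
# All names and signatures are assumptions made for illustration.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class CrossChainMessage:
    src_chain: str   # identifier of the originating chain
    dst_chain: str   # identifier of the destination chain
    payload: bytes   # application-level operation to execute remotely
    nonce: int       # per-sender sequence number for replay protection


class ChainBridge(ABC):
    """Uniform interface an application programs against, regardless of
    which concrete bridge (e.g., LayerZero) delivers the message."""

    @abstractmethod
    def send(self, msg: CrossChainMessage) -> str:
        """Submit a message for delivery; return a tracking id."""

    @abstractmethod
    def receive(self, tracking_id: str, timeout_s: float) -> CrossChainMessage:
        """Block until the reply arrives or the timeout expires."""
```

An atomicity protocol such as the one the paper formulates would then be layered on top of this interface (for example, commit/abort messages exchanged via `send` and `receive`), keeping applications portable across bridges.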

Related content

Sensors are key to environmental monitoring and benefit smart cities in many ways, such as by providing real-time air quality information to assist human decision-making. However, deploying sensors at scale is impractical due to their high cost, resulting in sparse data collection. Obtaining fine-grained measurements has therefore long been a pressing issue. In this paper, we aim to infer values at non-sensor locations based on observations from available sensors (termed spatiotemporal inference), where capturing spatiotemporal relationships among the data plays a critical role. Our investigations reveal two significant insights that have not been explored by prior work. First, data exhibits distinct patterns at long- and short-term temporal scales, which should be analyzed separately. Second, short-term patterns contain more delicate relations, including those spanning the spatial and temporal dimensions simultaneously, while long-term patterns involve high-level temporal trends. Based on these observations, we propose to decouple the modeling of short-term and long-term patterns. Specifically, we introduce a joint spatiotemporal graph attention network to learn the relations across space and time for short-term patterns. Furthermore, we propose a graph recurrent network with a time-skip strategy to alleviate the vanishing-gradient problem and model long-term dependencies. Experimental results on four public real-world datasets demonstrate that our method effectively captures both long- and short-term relations, achieving state-of-the-art performance compared with existing methods.
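As a concrete illustration of the time-skip idea, here is a minimal sketch in which the recurrent state at step t is updated from step t - skip rather than t - 1, shortening gradient paths over long sequences. This is our reading of the strategy described above; the paper's exact cell and skip scheme may differ.

```python
import torch
import torch.nn as nn


class TimeSkipGRU(nn.Module):
    """Recurrence over strided time steps: the state at step t is updated
    from step t - skip, which shortens the backpropagation path and
    mitigates vanishing gradients on long sequences (an illustrative
    simplification of the time-skip strategy)."""

    def __init__(self, in_dim: int, hid_dim: int, skip: int):
        super().__init__()
        self.cell = nn.GRUCell(in_dim, hid_dim)
        self.skip = skip
        self.hid_dim = hid_dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features)
        b, t, _ = x.shape
        h = [torch.zeros(b, self.hid_dim, device=x.device) for _ in range(self.skip)]
        for step in range(t):
            prev = h[step % self.skip]                # state from step - skip
            h[step % self.skip] = self.cell(x[:, step], prev)
        return h[(t - 1) % self.skip]                 # most recently updated state
```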

Light Detection and Ranging (LiDAR) technology has proven to be an important part of many robotics systems. Surface normals estimated from LiDAR data are commonly used for a variety of tasks in such systems. As most of today's mechanical LiDAR sensors produce sparse data, estimating normals from a single scan in a robust manner poses difficulties. In this paper, we address the problem of estimating normals for sparse LiDAR data while avoiding the typical issue of smoothing out the normals in high-curvature areas. Mechanical LiDARs rotate a set of rigidly mounted lasers. One firing of such a set of lasers produces an array of points where each point's neighbor is known due to the known firing pattern of the scanner. We use this knowledge to connect these points to their neighbors and label them using the angles of the lines connecting them. When estimating the normal at a point, we only consider points with the same label as neighbors. This prevents smoothing the normals across high-curvature boundaries. We evaluate our approach on a variety of data, both self-recorded and publicly available, acquired using different sparse LiDAR sensors. We show that our method for normal estimation yields normals that are more robust in areas of high curvature, which in turn leads to maps of higher quality. We also show that our method only incurs a constant-factor runtime overhead with respect to a lightweight baseline normal estimation procedure and is therefore suited for operation in computationally demanding environments.
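The labeling step lends itself to a compact sketch. Below, points from one firing are grouped by breaks in the direction of the lines connecting consecutive points; the threshold and grouping rule are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np


def label_firing(points: np.ndarray, angle_thresh_deg: float = 10.0) -> np.ndarray:
    """Label an (N, 3) array of points, ordered by the scanner's firing
    pattern, so that a new label starts wherever the connecting-line
    direction turns sharply. Normals are then estimated only among
    same-label neighbors, so smoothing never averages across an edge."""
    dirs = np.diff(points, axis=0)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True) + 1e-12
    labels = np.zeros(len(points), dtype=int)
    for i in range(1, len(dirs)):
        cos_a = np.clip(dirs[i] @ dirs[i - 1], -1.0, 1.0)
        if np.degrees(np.arccos(cos_a)) > angle_thresh_deg:
            labels[i + 1:] += 1   # points after the corner start a new group
    return labels
```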

We consider a voting problem in which a set of agents have metric preferences over a set of alternatives and are also partitioned into disjoint groups. Given information about the preferences of the agents and their groups, our goal is to choose an alternative that approximately minimizes an objective function taking the groups of agents into account. We consider two natural group-fair objectives, known as Max-of-Avg and Avg-of-Max, which combine the max and the average cost within and over the groups in different ways. We show tight bounds on the best possible distortion that can be achieved by various classes of mechanisms depending on the amount of information they have access to. In particular, we consider group-oblivious full-information mechanisms that do not know the groups but have access to the exact distances between agents and alternatives in the metric space, group-oblivious ordinal-information mechanisms that again do not know the groups but are given the ordinal preferences of the agents, and group-aware mechanisms that have full knowledge of the structure of the agent groups and also ordinal information about the metric space.
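For concreteness, one natural formalization of the two objectives reads as follows; this is our notation and reading of "Max-of-Avg" and "Avg-of-Max", and the paper's exact definitions may differ. With groups G_1, ..., G_k of agents and metric distance d, the cost of an alternative x would be

```latex
\mathrm{MaxAvg}(x) = \max_{j \in [k]} \frac{1}{|G_j|} \sum_{i \in G_j} d(i, x),
\qquad
\mathrm{AvgMax}(x) = \frac{1}{k} \sum_{j \in [k]} \max_{i \in G_j} d(i, x).
```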

Existence constraints were defined in the Relational Data Model but, unfortunately, are not provided by any Relational Database Management System, except for their particular case of NOT NULL. Our (Elementary) Mathematical Data Model extended them to function products and introduced their dual, non-existence constraints. MatBase, an intelligent data and knowledge base management system prototype based on both of these data models, not only provides existence and non-existence constraints, but also automatically generates code for their enforcement. This paper presents and discusses the algorithms used by MatBase to enforce these types of constraints.
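To illustrate what enforcement might check, here is a tiny sketch under one common reading of these constraints: an existence constraint requires that whenever attribute A holds a value, attribute B must hold one too (NOT NULL being the unconditional special case), while the dual non-existence constraint forbids the two from coexisting. The attribute names and the exact semantics are assumptions; MatBase's generated enforcement code is not shown in the abstract.

```python
# Hypothetical row-level checks; 'row' maps attribute names to values,
# with None standing for SQL NULL.
def check_existence(row: dict, a: str, b: str) -> bool:
    """If A exists then B must exist: not A or B."""
    return row.get(a) is None or row.get(b) is not None


def check_non_existence(row: dict, a: str, b: str) -> bool:
    """A and B may not coexist: not (A and B)."""
    return row.get(a) is None or row.get(b) is None


row = {"Employee": "Ada", "Salary": 100, "Contract": None}
assert not check_existence(row, "Salary", "Contract")  # violated: salary without contract
```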

General Type-2 (GT2) Fuzzy Logic Systems (FLSs) are powerful tools for representing and quantifying uncertainty, which is crucial for informed decision-making in high-risk tasks. In this paper, we travel back in time to provide a new look at GT2-FLSs by adopting Zadeh's (Z) GT2 Fuzzy Set (FS) definition, aiming to learn GT2-FLSs capable of producing reliable, High-Quality Prediction Intervals (HQ-PIs) alongside high precision. By integrating the Z-GT2-FS with the \(\alpha\)-plane representation, we show that the design flexibility of the GT2-FLS increases, as the secondary membership function is no longer tied to the primary membership function. After detailing the construction of Z-GT2-FLSs, we provide solutions to two challenges in learning from high-dimensional data: the curse of dimensionality and the integration of Deep Learning (DL) optimizers. We develop a DL framework for learning dual-focused Z-GT2-FLSs with high performance. Our study includes statistical analyses, highlighting that the Z-GT2-FLS not only exhibits high-precision performance but also produces HQ-PIs, in comparison to its GT2 and IT2 fuzzy counterparts, which have more learnable parameters. The results show that the Z-GT2-FLS has great potential for uncertainty quantification.
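To ground the α-plane representation, here is a small numeric sketch of how one α-plane of a general type-2 set reduces to an interval at a fixed input: the α-plane keeps the primary membership values whose secondary grade is at least α, and its extremes bound an interval type-2 slice. The Gaussian secondary membership function is an arbitrary example, not the paper's parameterization.

```python
import numpy as np


def alpha_plane_bounds(u_grid, secondary_mf, alpha):
    """Lower/upper primary-membership bounds of one alpha-plane at a
    fixed input x: keep the u values whose secondary grade is >= alpha."""
    mask = secondary_mf >= alpha
    if not mask.any():
        return None
    return u_grid[mask].min(), u_grid[mask].max()


u = np.linspace(0.0, 1.0, 101)                   # candidate primary memberships
sec = np.exp(-0.5 * ((u - 0.6) / 0.1) ** 2)      # example Gaussian secondary MF
print(alpha_plane_bounds(u, sec, alpha=0.5))     # approx. (0.49, 0.71)
```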

We construct a system, Sandi, to bring trust to online communication through accountability. Sandi is based on a unique "somewhat monotone" accountability score with strong privacy and security properties. A registered sender can request from Sandi a cryptographic tag encoding its score. The score measures the sender's trustworthiness based on its previous communications. The tag is sent to a receiver with whom the sender wants to initiate a conversation and signals the sender's "endorsement" of the communication channel. Receivers can use the sender's score to decide how to proceed with the sender. If a receiver finds the sender's communication inappropriate, it can use the tag to report the sender to Sandi, thus decreasing the sender's score. Sandi aims to benefit both senders and receivers. Senders benefit, as receivers are more likely to react to communication on an endorsed channel. Receivers benefit, as they can make better-informed choices about whom to interact with, based on indisputable evidence from prior receivers. Receivers do not need registered accounts. Neither senders nor receivers are required to maintain long-term secret keys. Sandi provides a score integrity guarantee for senders, a full communication privacy guarantee for senders and receivers, a reporter privacy guarantee to protect reporting receivers, and an unlinkability guarantee to protect senders. The design of Sandi ensures compatibility with any communication system that allows for small binary data transfer. Finally, we provide a game-theoretic analysis for the sender. We prove that Sandi drives rational senders towards a strategy that reduces the amount of inappropriate communication.
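The interaction pattern can be summarized in a small mock. This sketch models only the request/report flow described above; actual Sandi tags are cryptographic and privacy-preserving, and the initial score and penalty values here are placeholders.

```python
# Plain-data mock of the score/tag/report cycle; no cryptography modeled.
class SandiMock:
    def __init__(self):
        self.scores: dict[str, int] = {}

    def request_tag(self, sender: str) -> dict:
        score = self.scores.setdefault(sender, 100)  # placeholder initial score
        return {"sender": sender, "score": score}    # real tags are unforgeable

    def report(self, tag: dict, penalty: int = 10) -> None:
        # Receiver found the communication inappropriate: decrease the score.
        self.scores[tag["sender"]] -= penalty


sandi = SandiMock()
tag = sandi.request_tag("alice")   # sender endorses the channel with the tag
sandi.report(tag)                  # receiver reports; alice's score drops to 90
```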

The new age of digital growth has marked all fields. This technological evolution has impacted data flows, which have expanded so rapidly over the last decade that traditional data processing can no longer keep up with the flow of massive data. In this context, implementing a big data analytics system becomes crucial to making big data more relevant and valuable. With these new opportunities, therefore, come new issues in processing very high data volumes, requiring companies to look for solutions specialized for big data. These solutions are based on techniques for processing such masses of information to facilitate decision-making. Among them is data visualization, which makes big data more intelligible through accurate illustrations that are accessible to all. This paper examines the big data visualization project in terms of its characteristics, benefits, challenges, and issues. The project also resulted in the provision of tools suitable for beginners as well as experienced users.

Large Language Models (LLMs) have demonstrated remarkable capabilities in various domains, including data augmentation and synthetic data generation. This work explores the use of LLMs to generate rich textual descriptions for motion sequences, encompassing both actions and walking patterns. We leverage the expressive power of LLMs to align motion representations with high-level linguistic cues, addressing two distinct tasks: action recognition and retrieval of walking sequences based on appearance attributes. For action recognition, we employ LLMs to generate textual descriptions of actions in the BABEL-60 dataset, facilitating the alignment of motion sequences with linguistic representations. In the domain of gait analysis, we investigate the impact of appearance attributes on walking patterns by generating textual descriptions of motion sequences from the DenseGait dataset using LLMs. These descriptions capture subtle variations in walking styles influenced by factors such as clothing choices and footwear. Our approach demonstrates the potential of LLMs in augmenting structured motion attributes and aligning multi-modal representations. The findings contribute to the advancement of comprehensive motion understanding and open up new avenues for leveraging LLMs in multi-modal alignment and data augmentation for motion analysis. We make the code publicly available at //github.com/Radu1999/WalkAndText.
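One standard way to align motion embeddings with the generated descriptions is a symmetric contrastive (InfoNCE) objective, sketched below. The abstract does not specify the paper's training objective, so this is an assumption; the encoders producing `motion_emb` and `text_emb` are left abstract.

```python
import torch
import torch.nn.functional as F


def contrastive_alignment_loss(motion_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss: row i of each batch is a matched
    motion/description pair, and mismatched rows act as negatives."""
    m = F.normalize(motion_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = m @ t.T / temperature                     # (B, B) similarities
    targets = torch.arange(len(m), device=m.device)    # diagonal = positives
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```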

Micro-expressions (MEs) are involuntary movements revealing people's hidden feelings, and they have attracted considerable interest for their objectivity in emotion detection. However, despite its wide applicability in various scenarios, micro-expression recognition (MER) remains a challenging problem in real life for three reasons: (i) data level: lack of data and imbalanced classes; (ii) feature level: the subtle, rapidly changing, and complex features of MEs; and (iii) decision-making level: the impact of individual differences. To address these issues, we propose a dual-branch meta-auxiliary learning method, called LightmanNet, for fast and robust micro-expression recognition. Specifically, LightmanNet learns general MER knowledge from limited data through a dual-branch bi-level optimization process: (i) In the first level, it obtains task-specific MER knowledge by learning in two branches, where the first branch learns MER features via primary MER tasks, while the other branch guides the model to obtain discriminative features via auxiliary tasks, i.e., image alignment between micro-expressions and macro-expressions, given their resemblance in both spatial and temporal behavioral patterns. The two branches jointly constrain the model to learn meaningful task-specific MER knowledge while avoiding noise or superficial connections between MEs and emotions that may damage its generalization ability. (ii) In the second level, LightmanNet further refines the learned task-specific knowledge, improving model generalization and efficiency. Extensive experiments on various benchmark datasets demonstrate the superior robustness and efficiency of LightmanNet.
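A schematic of one bi-level update in the spirit of this dual-branch scheme is sketched below. The `primary_loss`/`auxiliary_loss` methods and the `params=` hook are placeholders (assumptions), since the abstract does not give the exact losses or update rule.

```python
import torch


def bilevel_step(model, primary_batch, auxiliary_batch, inner_lr=1e-2):
    # Level 1: task-specific adaptation on both branches (MER + alignment).
    inner_loss = (model.primary_loss(primary_batch) +
                  model.auxiliary_loss(auxiliary_batch))
    grads = torch.autograd.grad(inner_loss, model.parameters(), create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(model.parameters(), grads)]
    # Level 2: refine shared knowledge by evaluating the primary task
    # through the adapted weights and backpropagating through level 1.
    outer_loss = model.primary_loss(primary_batch, params=adapted)
    return outer_loss
```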

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This allows us to relax both the optimal value and solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
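The core smoothing step is easy to state: with a negative-entropy regularizer, the smoothed max becomes a temperature-scaled log-sum-exp whose gradient is a softmax, so every max in a DP recursion (Viterbi, DTW) becomes differentiable. The sketch below shows this one instantiation; other strongly convex regularizers yield other smoothed operators.

```python
import torch


def smoothed_max(x: torch.Tensor, gamma: float = 1.0) -> torch.Tensor:
    """Entropy-regularized max: gamma * logsumexp(x / gamma).
    Its gradient is softmax(x / gamma); as gamma -> 0 it approaches
    the hard max, recovering the original DP recursion."""
    return gamma * torch.logsumexp(x / gamma, dim=-1)


x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = smoothed_max(x, gamma=0.5)
y.backward()   # x.grad equals softmax(x / 0.5), a soft argmax over scores
```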
