
In this paper, we propose a ground-based monocular UAV localisation system that detects and localises an LED marker attached to the underside of a UAV. Unlike existing technologies often used for localisation in GPS-denied environments, such as UWB, radio-frequency, and multi-camera systems, our system removes the need for extensive infrastructure and calibration. To improve deployability for real-world applications without the need to collect extensive real-world datasets, we train a CNN on synthetic binary images, as opposed to the real images used in existing monocular UAV localisation methods, and factor in the camera's zoom to allow tracking of UAVs flying at greater distances. We propose the NoisyCutout algorithm for augmenting synthetic binary images to simulate binary images processed from real images, and show that it improves localisation accuracy compared to the existing salt-and-pepper and Cutout augmentation methods. We also leverage uncertainty propagation to modify the CNN's loss function and show that this likewise improves localisation accuracy. Real-world experiments are conducted to evaluate our methods, and we achieve an overall 3D RMSE of approximately 0.41 m.
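
The abstract does not spell out NoisyCutout's internals, so the following is only a plausible sketch of the idea as described: Cutout-style rectangular patches that are filled with salt-and-pepper-like binary noise rather than zeros, so that augmented synthetic binary images better resemble thresholded real images. All parameter names are illustrative.

```python
import numpy as np

def noisy_cutout(img, n_patches=4, max_size=16, noise_prob=0.5, rng=None):
    """Sketch of a NoisyCutout-style augmentation (assumed behaviour):
    each random patch is filled with random binary noise instead of
    being zeroed as in standard Cutout."""
    rng = np.random.default_rng() if rng is None else rng
    out = img.copy()
    h, w = out.shape
    for _ in range(n_patches):
        ph = int(rng.integers(1, max_size))          # patch height
        pw = int(rng.integers(1, max_size))          # patch width
        y = int(rng.integers(0, h - ph + 1))         # top-left corner
        x = int(rng.integers(0, w - pw + 1))
        # fill the patch with salt-and-pepper-like binary noise
        out[y:y + ph, x:x + pw] = (rng.random((ph, pw)) < noise_prob).astype(out.dtype)
    return out
```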

Related Content

In this paper, we construct a winning condition $W$ over a finite set of colors such that, first, every finite arena admits a strategy with 2 states of general memory that is optimal w.r.t.~$W$; and second, there exists no $k$ such that every finite arena admits a strategy with $k$ states of chromatic memory that is optimal w.r.t.~$W$.

In this paper, we delve into the challenge of jointly optimizing communication and computation for semantic communication over wireless networks using a probability graph framework. In the considered model, the base station (BS) extracts small-sized compressed semantic information by removing redundant messages based on the stored knowledge base. Specifically, the knowledge base takes the form of a probability graph that encapsulates statistical relations. At the user side, the compressed information is accurately deduced using the same probability graph employed by the BS. While this approach introduces additional computational overhead for semantic information extraction, it significantly curtails communication resource consumption by transmitting concise data. We derive both communication and computation cost models based on the inference process of the probability graph. Building upon these models, we formulate a joint communication and computation resource allocation problem aimed at minimizing the overall energy consumption of the network while accounting for latency, power, and semantic constraints. To address this problem, we obtain a closed-form solution for the transmission power under a fixed semantic compression ratio. Subsequently, we propose an efficient linear-search-based algorithm that attains the optimal solution of the considered problem with low computational complexity. Simulation results underscore the effectiveness of the proposed system, showcasing notable improvements over conventional non-semantic schemes.
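
As a rough illustration of the solution structure described above (the cost models, feasibility check, and closed form are placeholders, not the paper's expressions), the linear search fixes a semantic compression ratio, computes the closed-form transmit power at that ratio, checks the constraints, and keeps the lowest-energy point:

```python
def optimise(ratios, closed_form_power, total_energy, feasible):
    """Linear search over candidate semantic compression ratios.
    closed_form_power, total_energy, and feasible are hypothetical
    callables standing in for the paper's derived models."""
    best = None
    for rho in ratios:                      # linear search over ratios
        p = closed_form_power(rho)          # closed-form optimal power at fixed rho
        if not feasible(rho, p):            # latency / power / semantic constraints
            continue
        e = total_energy(rho, p)            # communication + computation energy
        if best is None or e < best[0]:
            best = (e, rho, p)
    return best                             # (energy, ratio, power) or None
```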

This paper establishes the equivalence between Local Differential Privacy (LDP) and a global limit on learning any knowledge about an object. However, an output from an LDP query does not necessarily provide an amount of knowledge equal to the upper bound of the learning limit. Since the amount of knowledge gained should be proportional to the incurred privacy loss, the traditional approach of using the DP guarantee to measure privacy loss can occasionally overestimate the actual privacy loss. This is especially problematic in privacy accounting for LDP, where privacy loss is computed by accumulating the DP guarantees of the individual queries. To address this issue, this paper introduces the concept of \textit{realized privacy loss}, which measures the actual knowledge gained by the analyst after a query, as a more accurate measure of privacy loss. The realized privacy loss is integrated into the privacy accounting of fully adaptive composition, where an adversary adaptively selects queries based on previous results. A Bayesian Privacy Filter is implemented to continually accept queries until the realized privacy loss of the composed queries equals the DP guarantee of the composition, allowing full utilization of the privacy budget. Tracking the realized privacy loss during the composition is achieved through a Bayesian Privacy Odometer, and the gap between the privacy budget and the realized privacy loss measures the leeway the DP guarantee leaves for future queries. A branch-and-bound method is devised to enable the Bayesian Privacy Filter to safeguard objects with continuous values. The Bayesian Privacy Filter is proven to be at least as efficient as the basic composition, and more efficient if the queries are privacy-loss compactible. Experimental results indicate that the Bayesian Privacy Filter outperforms the basic composition by a factor of one to four when composing linear and logistic regressions.
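
A minimal sketch of the filter-and-odometer accounting loop, under the assumption of a simple scalar budget and a caller-supplied routine for computing the realized privacy loss of a query (both are placeholders for the paper's constructions):

```python
class BayesianPrivacyFilterSketch:
    """Accepts queries while accumulated *realized* privacy loss stays
    within the overall DP guarantee of the composition."""

    def __init__(self, epsilon_budget):
        self.budget = epsilon_budget
        self.realized = 0.0                  # odometer: realized loss so far

    def remaining(self):
        # leeway the DP guarantee leaves for future queries
        return self.budget - self.realized

    def try_query(self, query, realized_loss_of):
        loss = realized_loss_of(query)       # actual knowledge gain (<= DP guarantee)
        if self.realized + loss > self.budget:
            return False                     # reject: budget would be exceeded
        self.realized += loss                # track and accept
        return True
```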

In this paper, we present a novel approach that uses the Auto GPT system alongside the Design Sprint methodology to facilitate board game creation for inexperienced users. We describe the implementation of Auto GPT for generating diverse board games and the subsequent optimization process through a customized Design Sprint. A user study is conducted to investigate the playability and enjoyment of the generated games, revealing both successes and challenges in employing systems like Auto GPT for board game design. Insights and future research directions are proposed to overcome the identified limitations and enhance computation-driven game creation.

This paper concerns the development of metatheory for extensible languages. It uses as its starting point a view that programming languages tailored to specific application domains are to be constructed by composing components from an open library of independently-developed extensions to a host language. In the elaboration of this perspective, static analyses (such as typing) and dynamic semantics (such as evaluation) are described via relations whose specifications are distributed across the host language and extensions and are given in a rule-based fashion. Metatheoretic properties, which ensure that static analyses accurately gauge runtime behavior, are represented in this context by formulas over such relations. These properties may be fundamental to the language, introduced by the host language, or they may pertain to analyses introduced by individual extensions. We expose the problem of modular metatheory, i.e., the notion that proofs of relevant properties can be constructed by reasoning independently within each component in the library. To solve this problem, we propose the twin ideas of decomposing proofs around language fragments and of reasoning generically about extensions based on broad, a priori constraints imposed on their behavior. We establish the soundness of these styles of reasoning by showing how complete proofs of the properties can be automatically constructed for any language obtained by composing the independent parts. Mathematical precision is given to our discussions by framing them within a logic that encodes inductive rule-based specifications via least fixed-point definitions. We also sketch the structure of a practical system for metatheoretic reasoning for extensible languages based on the ideas developed.
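
As a hedged illustration of the last point (our example, not the paper's): a host-language evaluation rule such as

$$\frac{e_1 \Downarrow n_1 \qquad e_2 \Downarrow n_2}{e_1 + e_2 \Downarrow n_1 + n_2}$$

contributes one clause to a least fixed-point definition ${\Downarrow} \; = \; \mu R.\, \Phi_{\text{host}}(R) \cup \Phi_{\text{ext}_1}(R) \cup \cdots$, where each extension supplies its own monotone operator $\Phi_{\text{ext}_i}$; modular metatheory then amounts to proving properties of $\Downarrow$ from a priori constraints on each $\Phi$ separately, so that the composed proof assembles automatically.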

Aiming at expanding the coverage of few-shot relations in knowledge graphs (KGs), few-shot knowledge graph completion (FKGC) has recently gained increasing research interest. Some existing models employ a few-shot relation's multi-hop neighbor information to enhance its semantic representation. However, noisy neighbor information might be amplified when the neighborhood is excessively sparse and no neighbor is available to represent the few-shot relation. Moreover, modeling and inferring complex relations of one-to-many (1-N), many-to-one (N-1), and many-to-many (N-N) with previous knowledge graph completion approaches requires high model complexity and a large number of training instances. Thus, inferring complex relations in the few-shot scenario is difficult for FKGC models due to limited training instances. In this paper, we propose a global-local framework for few-shot relational learning to address the above issues. At the global stage, a novel gated and attentive neighbor aggregator is built for accurately integrating the semantics of a few-shot relation's neighborhood, which helps filter out noisy neighbors even if a KG contains extremely sparse neighborhoods. At the local stage, a meta-learning-based TransH (MTransH) method is designed to model complex relations and train our model in a few-shot learning fashion. Extensive experiments show that our model outperforms state-of-the-art FKGC approaches on the frequently used benchmark datasets NELL-One and Wiki-One. Compared with the strong baseline model MetaR, our model achieves 5-shot FKGC performance improvements of 8.0% on NELL-One and 2.8% on Wiki-One in terms of Hits@10.
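
MTransH itself is the paper's meta-learned variant; for orientation, the following shows the standard TransH score it builds on, where entities are projected onto a relation-specific hyperplane before the translation $h + r \approx t$ is evaluated (this is textbook TransH, not the paper's code):

```python
import numpy as np

def transh_score(h, t, r, w_r):
    """Standard TransH score: project head and tail embeddings onto the
    relation-specific hyperplane with unit normal w_r, then measure how
    well the relation vector r translates h onto t. Lower = more plausible."""
    w_r = w_r / np.linalg.norm(w_r)        # ensure unit normal
    h_perp = h - np.dot(w_r, h) * w_r      # head projected onto hyperplane
    t_perp = t - np.dot(w_r, t) * w_r      # tail projected onto hyperplane
    return np.linalg.norm(h_perp + r - t_perp)
```

Projecting onto a per-relation hyperplane is what lets a single entity take different roles under 1-N, N-1, and N-N relations, which is why the paper adapts this family of scores to the few-shot setting.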

In this paper, we propose a novel Feature Decomposition and Reconstruction Learning (FDRL) method for effective facial expression recognition. We view expression information as the combination of shared information (expression similarities) across different expressions and unique information (expression-specific variations) for each expression. More specifically, FDRL mainly consists of two crucial networks: a Feature Decomposition Network (FDN) and a Feature Reconstruction Network (FRN). In particular, FDN first decomposes the basic features extracted from a backbone network into a set of facial action-aware latent features to model expression similarities. Then, FRN captures the intra-feature and inter-feature relationships of the latent features to characterize expression-specific variations, and reconstructs the expression feature. To this end, two modules, an intra-feature relation modeling module and an inter-feature relation modeling module, are developed in FRN. Experimental results on both in-the-lab databases (including CK+, MMI, and Oulu-CASIA) and in-the-wild databases (including RAF-DB and SFEW) show that the proposed FDRL method consistently achieves higher recognition accuracy than several state-of-the-art methods. This clearly highlights the benefit of feature decomposition and reconstruction for classifying expressions.
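
The exact FDN/FRN architectures are not given in the abstract; the following is a simplified sketch of the decompose-then-reconstruct idea, with layer sizes and the weighting scheme chosen purely for illustration:

```python
import torch
import torch.nn as nn

class FDRLSketch(nn.Module):
    """Simplified decompose-then-reconstruct pipeline (illustrative only):
    backbone features are split into several latent features (FDN role),
    each latent feature is weighted, and the weighted set is recombined
    into an expression feature (FRN role, relation modules omitted)."""

    def __init__(self, feat_dim=512, n_latent=8, latent_dim=64):
        super().__init__()
        # one linear map per facial action-aware latent feature
        self.decompose = nn.ModuleList(
            [nn.Linear(feat_dim, latent_dim) for _ in range(n_latent)])
        self.weigh = nn.Linear(latent_dim, 1)               # per-latent importance
        self.reconstruct = nn.Linear(n_latent * latent_dim, feat_dim)

    def forward(self, x):                                   # x: (B, feat_dim)
        latents = [d(x) for d in self.decompose]            # shared components
        weights = [torch.sigmoid(self.weigh(z)) for z in latents]
        weighted = torch.cat([w * z for w, z in zip(weights, latents)], dim=-1)
        return self.reconstruct(weighted)                   # expression feature
```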

We propose a novel method for automatic reasoning on knowledge graphs based on debate dynamics. The main idea is to frame the task of triple classification as a debate game between two reinforcement learning agents that extract arguments -- paths in the knowledge graph -- with the goal of promoting the fact as true (thesis) or as false (antithesis), respectively. Based on these arguments, a binary classifier, called the judge, decides whether the fact is true or false. The two agents can be considered sparse, adversarial feature generators that present interpretable evidence for either the thesis or the antithesis. In contrast to other black-box methods, the arguments allow users to gain an understanding of the judge's decision. Since the focus of this work is to create an explainable method that maintains competitive predictive accuracy, we benchmark our method on the triple classification and link prediction tasks. We find that our method outperforms several baselines on the benchmark datasets FB15k-237, WN18RR, and Hetionet. We also conduct a survey and find that the extracted arguments are informative for users.
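
A schematic of the debate protocol (the agent and judge interfaces here are hypothetical, standing in for the trained RL policies and classifier): the two agents alternately contribute argument paths, and the judge classifies the triple from the pooled arguments.

```python
def debate(triple, pro_agent, con_agent, judge, n_rounds=3):
    """Run a debate over one triple. pro_agent argues the fact is true
    (thesis), con_agent argues it is false (antithesis); both extract
    paths in the knowledge graph as interpretable evidence."""
    arguments = []
    for _ in range(n_rounds):
        arguments.append(("pro", pro_agent.extract_path(triple)))
        arguments.append(("con", con_agent.extract_path(triple)))
    # the judge is a binary classifier over the collected arguments
    return judge.classify(triple, arguments)
```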

In this paper, we propose applying a meta-learning approach to low-resource automatic speech recognition (ASR). We formulate ASR for different languages as different tasks and meta-learn the initialization parameters from many pretraining languages to achieve fast adaptation to an unseen target language, via the recently proposed model-agnostic meta-learning (MAML) algorithm. We evaluate the proposed approach using six languages as pretraining tasks and four languages as target tasks. Preliminary results show that the proposed method, MetaASR, significantly outperforms the state-of-the-art multitask pretraining approach on all target languages with different combinations of pretraining languages. In addition, owing to MAML's model-agnostic property, this paper also opens a new research direction of applying meta-learning to more speech-related applications.
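
For reference, a minimal first-order MAML step in the style the paper builds on (one inner gradient step per task, first-order approximation; the actual ASR model, CTC loss, and optimiser details are omitted and the interface is illustrative):

```python
import copy
import torch

def fomaml_step(model, loss_fn, tasks, inner_lr=1e-2, meta_lr=1e-3):
    """First-order MAML: each task (= one language) adapts a clone of the
    shared initialization with one SGD step on its support set, and the
    query-set gradients of the adapted models update the initialization."""
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for support, query in tasks:
        learner = copy.deepcopy(model)               # clone for inner adaptation
        inner_loss = loss_fn(learner, support)
        grads = torch.autograd.grad(inner_loss, learner.parameters())
        with torch.no_grad():                        # one inner SGD step
            for p, g in zip(learner.parameters(), grads):
                p -= inner_lr * g
        outer_loss = loss_fn(learner, query)         # evaluate adapted params
        outer_grads = torch.autograd.grad(outer_loss, learner.parameters())
        for mg, g in zip(meta_grads, outer_grads):   # first-order approximation
            mg += g
    with torch.no_grad():                            # update the initialization
        for p, mg in zip(model.parameters(), meta_grads):
            p -= meta_lr * mg / len(tasks)
```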

We present a monocular Simultaneous Localization and Mapping (SLAM) system that uses high-level object and plane landmarks in addition to points. The resulting map is denser, more compact, and more meaningful compared to point-only SLAM. We first propose a high-order graphical model to jointly infer 3D objects and layout planes from a single image, considering occlusions and semantic constraints. The extracted cuboid objects and layout planes are further optimized in a unified SLAM framework. Objects and planes can provide more semantic constraints, such as Manhattan and object-supporting relationships, compared to points. Experiments on various public and self-collected datasets, including ICL-NUIM and TUM-mono, show that our algorithm can improve camera localization accuracy compared to state-of-the-art SLAM and also generate dense maps in many structured environments.
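
A schematic of the unified objective (the residual functions and observation interface are illustrative, not the paper's implementation): point, object, plane, and semantic-constraint terms are summed into one cost over camera poses and landmarks, which a SLAM back-end would then minimise.

```python
def total_error(cams, points, objects, planes, observations):
    """Joint cost over all landmark types. Each observation carries a
    hypothetical residual() for its measurement model."""
    err = 0.0
    for obs in observations:
        if obs.kind == "point":      # standard point reprojection term
            err += obs.residual(cams[obs.cam], points[obs.idx]) ** 2
        elif obs.kind == "object":   # cuboid-to-detection term
            err += obs.residual(cams[obs.cam], objects[obs.idx]) ** 2
        elif obs.kind == "plane":    # layout-plane term
            err += obs.residual(cams[obs.cam], planes[obs.idx]) ** 2
        elif obs.kind == "support":  # semantic constraint: object rests on plane
            err += obs.residual(objects[obs.i], planes[obs.j]) ** 2
    return err
```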
