In this paper, we explore the concept of integrated sensing and communication (ISAC) within a downlink cell-free massive MIMO (multiple-input multiple-output) system featuring multi-static sensing and users requiring ultra-reliable low-latency communications (URLLC). Our focus is the formulation of two algorithms that jointly solve the non-convex power and blocklength allocation problems for end-to-end (E2E) energy minimization. The objectives are to jointly minimize sensing/communication processing and transmission energy consumption while simultaneously meeting the sensing and URLLC requirements. To address the inherent non-convexity of these optimization problems, we utilize techniques such as Feasible Point Pursuit - Successive Convex Approximation (FPP-SCA), Concave-Convex Programming (CCP), and fractional programming. We conduct a comparative analysis of the performance of these algorithms in ISAC scenarios and against a URLLC-only scenario where sensing is not integrated. Our numerical results highlight the superior performance of the E2E energy minimization algorithm, especially in scenarios without sensing capability. Additionally, our study underscores the increasing prominence of the energy consumed by sensing processing tasks as the number of sensing receive access points rises. Furthermore, the results emphasize that a higher sensing signal-to-interference-plus-noise ratio threshold leads to an escalation in E2E energy consumption, thereby narrowing the performance gap between the two proposed algorithms.
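For illustration, the sketch below shows the FPP-SCA mechanics on a toy non-convex constraint in CVXPY: the convex term on the wrong side of the inequality is linearized around the current iterate (an inner, conservative approximation) and a penalized slack keeps every subproblem feasible. The toy objective, constraint, and constants are our own assumptions and not the paper's power/blocklength allocation model.

```python
# Toy FPP-SCA iteration (assumed example, not the paper's ISAC model):
# minimize c^T x  s.t.  ||x||^2 - a^T x >= b,  x >= 0  (non-convex feasible set).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 5
c = rng.uniform(0.5, 1.5, n)      # per-entry cost (e.g., power prices)
a = rng.uniform(0.0, 1.0, n)
b = 2.0
lam = 100.0                        # penalty weight driving the slack to zero

x_k = np.ones(n)                   # any starting point works (feasible-point pursuit)
for it in range(20):
    x = cp.Variable(n, nonneg=True)
    s = cp.Variable(nonneg=True)   # FPP slack keeping the subproblem feasible
    # First-order lower bound of the convex term ||x||^2 at x_k
    lin = float(x_k @ x_k) + 2 * (x_k @ (x - x_k))
    prob = cp.Problem(cp.Minimize(c @ x + lam * s), [lin - a @ x + s >= b])
    prob.solve()
    if np.linalg.norm(x.value - x_k) < 1e-5:
        break
    x_k = x.value

print("objective:", float(c @ x_k), "slack:", float(s.value))
```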
In this paper, we propose a cost-effective strategy for heterogeneous UAV swarm systems for cooperative aerial inspection. Unlike previous swarm inspection works, the proposed method does not rely on precise prior knowledge of the environment and can complete full 3D surface coverage of objects of any shape. In this work, agents are partitioned into teams, with each drone assigned a different task, including mapping, exploration, and inspection. Task allocation is facilitated by assigning optimal inspection volumes to each team, following best-first rules. A voxel map-based representation of the environment is used for pathfinding, and a rule-based path-planning method is the core of this approach. The proposed approach achieved the best performance in all challenging experiments, surpassing all benchmark methods for similar tasks across multiple evaluation trials. The proposed method is open source at //github.com/ntu-aris/caric_baseline and is used as the baseline of the Cooperative Aerial Robots Inspection Challenge at the 62nd IEEE Conference on Decision and Control 2023.
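As a concrete illustration of pathfinding on a voxel map, the sketch below runs a breadth-first search over the free voxels of a toy occupancy grid. The map, the 6-connected neighborhood, and the start/goal choices are our own assumptions and not the actual CARIC baseline code.

```python
# Breadth-first search over the free voxels of a toy occupancy grid
# (illustrative only; not the actual CARIC baseline implementation).
from collections import deque
import numpy as np

def bfs_voxel_path(occupancy, start, goal):
    """occupancy: 3D boolean array, True = occupied voxel; returns a voxel path."""
    nx, ny, nz = occupancy.shape
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:                       # backtrack to recover the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for dx, dy, dz in steps:
            nxt = (cur[0] + dx, cur[1] + dy, cur[2] + dz)
            if (0 <= nxt[0] < nx and 0 <= nxt[1] < ny and 0 <= nxt[2] < nz
                    and not occupancy[nxt] and nxt not in prev):
                prev[nxt] = cur
                queue.append(nxt)
    return None                               # goal not reachable

occ = np.zeros((10, 10, 5), dtype=bool)       # toy map: a wall with a single gap
occ[5, :, :] = True
occ[5, 4, 2] = False
print(bfs_voxel_path(occ, (0, 0, 0), (9, 9, 4)))
```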
In this work, we present novel protocols over rings for semi-honest secure three-party computation (3-PC) and malicious four-party computation (4-PC) with one corruption. Compared to state-of-the-art protocols in the same setting, our protocols require fewer low-latency, high-bandwidth links between the parties to achieve high throughput. Our protocols also reduce the computational complexity by requiring up to 50 percent fewer basic instructions per gate. Further, our protocols achieve the currently best-known communication complexity (3, resp. 5 elements per multiplication gate), with an optional preprocessing phase that reduces the communication complexity of the online phase to 2 (resp. 3) elements per multiplication gate. In homogeneous network settings, i.e., when all links between the parties offer similar bandwidth and latency, our protocols achieve up to two times higher throughput than state-of-the-art protocols. In heterogeneous network settings, i.e., when the links between the parties differ in bandwidth and latency, our protocols achieve even larger performance improvements. We implemented our protocols and several other state-of-the-art protocols (Replicated 3-PC, Astra, Fantastic Four, Tetrad) in a novel open-source C++ framework optimized for achieving high throughput. Five out of six implemented 3-PC and 4-PC protocols achieve more than one billion 32-bit multiplications or more than 32 billion AND gates per second using our implementation in a 25 Gbit/s LAN environment. This is the highest throughput achieved in 3-PC and 4-PC so far and between two and three orders of magnitude higher than the throughput MP-SPDZ achieves in the same settings.
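For intuition, the sketch below simulates textbook semi-honest replicated secret sharing over the ring Z_{2^32} in a single process, including the resharing step that costs one ring element per party, i.e. 3 elements per multiplication gate. It is a baseline illustration of the setting, not the optimized protocols proposed in the paper.

```python
# Semi-honest replicated secret sharing (3-PC) over Z_{2^32}, simulated locally.
# Illustrative baseline only; the paper's protocols are optimized variants.
import secrets

MOD = 1 << 32

def share(x):
    """Split x into three additive shares; party i holds (s[i], s[(i+1)%3])."""
    s = [secrets.randbelow(MOD) for _ in range(2)]
    s.append((x - s[0] - s[1]) % MOD)
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]

def reconstruct(shares):
    return (shares[0][0] + shares[1][0] + shares[2][0]) % MOD

def add(a, b):
    """Addition is local: each party adds its share pair component-wise."""
    return [((a[i][0] + b[i][0]) % MOD, (a[i][1] + b[i][1]) % MOD) for i in range(3)]

def mul(a, b):
    """Each party computes a local cross term, masks it with a fresh zero-sharing,
    and sends it to one neighbor: one ring element of communication per party,
    i.e. 3 elements per multiplication gate."""
    # Correlated randomness r0 + r1 + r2 = 0 (in practice derived from PRF keys)
    r = [secrets.randbelow(MOD) for _ in range(2)]
    r.append((-r[0] - r[1]) % MOD)
    z = [(a[i][0] * b[i][0] + a[i][0] * b[i][1] + a[i][1] * b[i][0] + r[i]) % MOD
         for i in range(3)]
    # Resharing: party i learns z[(i+1)%3] from its neighbor
    return [(z[i], z[(i + 1) % 3]) for i in range(3)]

x, y = 1234567, 89
assert reconstruct(add(share(x), share(y))) == (x + y) % MOD
assert reconstruct(mul(share(x), share(y))) == (x * y) % MOD
```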
In this paper, we revisit the Power Curves in ANOVA Simultaneous Component Analysis (ASCA) based on permutation testing, and introduce the Population Curves derived from population parameters describing the relative effect among factors and interactions. We distinguish Relative from Absolute Population Curves, where the former represent statistical power in terms of the normalized effect size between structure and noise, and the latter in terms of the sample size. Relative Population Curves are useful to find the optimal ASCA model (e.g., fixed/random factors, crossed/nested relationships, interactions, the test statistic, transformations, etc.) for the analysis of an experimental design at hand. Absolute Population Curves are useful to determine the sample size and the optimal number of levels for each factor during the planning phase of an experiment. We illustrate both types of curves through simulation. We expect Population Curves to become the go-to approach to plan the optimal analysis pipeline and the required sample size in an omics study analyzed with ASCA.
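To make the idea concrete, the sketch below estimates a toy "relative population curve": the power of a permutation test on the between-group sum of squares as a function of the relative effect size between structure and noise. The single fixed factor, the test statistic, and the simulation sizes are illustrative assumptions, not the full ASCA pipeline.

```python
# Toy simulation of a relative power/population curve for one fixed factor
# with multivariate responses; ASCA decomposes multi-factor designs first.
import numpy as np

rng = np.random.default_rng(1)

def explained_ss(X, labels):
    """Between-group sum of squares of the group means (test statistic)."""
    grand = X.mean(axis=0)
    return sum(np.sum(labels == g) * np.sum((X[labels == g].mean(axis=0) - grand) ** 2)
               for g in np.unique(labels))

def permutation_pvalue(X, labels, n_perm=200):
    obs = explained_ss(X, labels)
    exceed = sum(explained_ss(X, rng.permutation(labels)) >= obs for _ in range(n_perm))
    return (exceed + 1) / (n_perm + 1)

def power(effect, levels=3, n_per_level=10, n_vars=20, n_rep=50, alpha=0.05):
    labels = np.repeat(np.arange(levels), n_per_level)
    hits = 0
    for _ in range(n_rep):
        level_means = rng.normal(0.0, 1.0, (levels, n_vars))          # factor structure
        X = effect * level_means[labels] + rng.normal(0.0, 1.0, (labels.size, n_vars))
        hits += permutation_pvalue(X, labels) <= alpha
    return hits / n_rep

for theta in [0.0, 0.1, 0.2, 0.4]:
    print(f"relative effect {theta:.1f}: estimated power {power(theta):.2f}")
```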
In this paper, we propose a deep learning based model for Acoustic Anomaly Detection of Machines, the task of detecting abnormal machines by analysing machine sound. By conducting extensive experiments, we show that several techniques that mainly focus on feature engineering, namely pseudo audio, audio segmentation, data augmentation, the Mahalanobis distance, and narrow frequency bands, are effective in enhancing system performance. Among the evaluated techniques, the use of narrow frequency bands has the most significant impact. Indeed, our proposed model, which focuses on narrow frequency bands, outperforms the DCASE baseline on the benchmark DCASE 2022 Task 2 Development set. The important role of narrow frequency bands highlighted in this paper encourages the research community working on Acoustic Anomaly Detection of Machines to further investigate and propose novel network architectures that focus on these frequency bands.
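As an illustration of Mahalanobis-distance scoring combined with a narrow frequency band, the sketch below fits a Gaussian to band-limited training features and scores clips by their mean Mahalanobis distance. The raw band energies stand in for the learned embeddings used in the actual system; the band choice and toy data are assumptions.

```python
# Mahalanobis-distance anomaly scoring on a narrow frequency band (sketch).
import numpy as np

def band_features(spectrogram, band=slice(20, 40)):
    """Keep only a narrow range of frequency bins, one feature vector per frame."""
    return np.log1p(spectrogram[:, band])                 # (frames, band_bins)

def fit_gaussian(train_feats, eps=1e-6):
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + eps * np.eye(train_feats.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(feats, mu, cov_inv):
    d = feats - mu
    # Mean squared Mahalanobis distance over the frames of one clip
    return float(np.mean(np.einsum("ij,jk,ik->i", d, cov_inv, d)))

rng = np.random.default_rng(0)
train = np.vstack([band_features(rng.random((100, 64))) for _ in range(20)])  # "normal" clips
mu, cov_inv = fit_gaussian(train)
normal_clip = band_features(rng.random((100, 64)))
shifted_clip = band_features(rng.random((100, 64)) + 0.5)   # simulated anomaly
print("normal:", anomaly_score(normal_clip, mu, cov_inv),
      "anomalous:", anomaly_score(shifted_clip, mu, cov_inv))
```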
In this paper, we consider the generation and utilization of helper data for physical unclonable functions (PUFs) that provide real-valued readout symbols. Compared to classical binary PUFs, more entropy can be extracted from each basic building block (PUF node), resulting in longer keys/fingerprints and/or a higher reliability. To this end, a coded modulation and signal shaping scheme that matches the (approximately) Gaussian distribution of the readout has to be employed. A new helper data scheme is proposed that works with any type of coded modulation/shaping scheme. Compared to the permutation scheme from the literature, less helper data has to be generated and a higher reliability is achieved. Moreover, the recently proposed idea of a two-metric helper data scheme is generalized to coded modulation and a general S-metric scheme. It is shown how extra helper data can be generated to improve decodability. The proposed schemes are assessed by numerical simulations and by evaluation of measurement data. We compare multi-level codes using a new rate design strategy with bit-interleaved coded modulation and trellis shaping with a distribution matcher. By selecting a suitable design, the rate per PUF node that can be reliably extracted can be as high as 2 bit/node.
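For orientation, the sketch below shows a generic offset-style helper data scheme for real-valued readouts: the public helper data stores the offset of each readout to the centre of its quantization cell, so a noisy re-measurement snaps back to the enrolled symbol. This textbook construction is only meant to illustrate the setting; the paper's scheme and its S-metric generalization differ in detail, and the step size, symbol alphabet, and noise level below are assumptions.

```python
# Generic offset-style helper data for real-valued PUF readouts (illustration only).
import numpy as np

rng = np.random.default_rng(0)
DELTA = 1.0          # quantizer step (assumed matched to the readout spread)
M = 4                # symbols per PUF node, i.e. 2 bit/node

def enroll(x):
    cell = np.floor(x / DELTA).astype(int)
    helper = x - (cell + 0.5) * DELTA    # public helper data: offset to cell centre
    symbol = np.mod(cell, M)             # secret key symbol
    return symbol, helper

def reconstruct(x_noisy, helper):
    # Subtracting the offset re-centres the readout, correcting noise < DELTA/2
    cell = np.floor((x_noisy - helper) / DELTA).astype(int)
    return np.mod(cell, M)

# Note: the offset is only approximately independent of the symbol when the
# quantizer is fine relative to the spread of the readout distribution.
x = rng.normal(0.0, 2.0, 10_000)                 # enrollment readouts, one per node
key, helper = enroll(x)
key_hat = reconstruct(x + rng.normal(0.0, 0.1, x.shape), helper)
print("symbol error rate:", np.mean(key_hat != key))
```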
In this paper, we propose an adaptive approach, based on mesh refinement or parametric enrichment with polynomial degree adaptation, for the numerical solution of convection-dominated equations with random input data. The parametric system arising from the application of the stochastic Galerkin approach is discretized by a symmetric interior penalty Galerkin (SIPG) method with upwinding for the convection term in the spatial domain. We derive a residual-based error estimator that accounts for the error due to the SIPG discretization, the error due to the (generalized) polynomial chaos discretization in the stochastic space, and data oscillations. Then, the reliability of the proposed error estimator, i.e., that it provides an upper bound for the energy error up to a multiplicative constant, is shown. Moreover, to balance the errors stemming from the spatial and stochastic spaces, the truncation error coming from the Karhunen-Loève expansion is also considered in the numerical simulations. Finally, several benchmark examples, including a random diffusivity parameter, a random velocity parameter, random diffusivity/velocity parameters, and a random (jump) discontinuous diffusivity parameter, are tested to illustrate the performance of the proposed estimator.
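For reference, the sketch below computes a truncated Karhunen-Loève expansion of a 1D random field with exponential covariance and picks the truncation level from the captured variance, which is the kind of truncation error balanced against the other error contributions. The grid, covariance kernel, and 95% criterion are illustrative assumptions, not the paper's SIPG/stochastic Galerkin code.

```python
# Truncated Karhunen-Loève expansion of a 1D random field with exponential
# covariance:  a(x, w) ~ mean + sum_{m=1..M} sqrt(lambda_m) phi_m(x) xi_m(w).
import numpy as np

n, corr_len, sigma = 200, 0.3, 1.0
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # covariance matrix

# Discrete KL: eigendecomposition of the (quadrature-weighted) covariance operator
eigvals, eigvecs = np.linalg.eigh(C * h)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Truncate once 95% of the variance is captured; this truncation error is what
# gets balanced against the spatial and stochastic discretization errors.
captured = np.cumsum(eigvals) / np.sum(eigvals)
M = int(np.searchsorted(captured, 0.95)) + 1
print("KL truncation level M =", M)

# One realization of the truncated random field
xi = np.random.default_rng(0).standard_normal(M)
field = 1.0 + (eigvecs[:, :M] / np.sqrt(h)) @ (np.sqrt(eigvals[:M]) * xi)
print("field range:", float(field.min()), float(field.max()))
```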
This paper presents an exhaustive quantitative and qualitative evaluation of Large Language Models (LLMs) for Knowledge Graph (KG) construction and reasoning. We employ eight distinct datasets that encompass aspects including entity, relation and event extraction, link prediction, and question answering. Empirically, our findings suggest that GPT-4 outperforms ChatGPT in the majority of tasks and even surpasses fine-tuned models in certain reasoning and question-answering datasets. Moreover, our investigation extends to the potential generalization ability of LLMs for information extraction, which culminates in the presentation of the Virtual Knowledge Extraction task and the development of the VINE dataset. Drawing on these empirical findings, we further propose AutoKG, a multi-agent-based approach employing LLMs for KG construction and reasoning, which aims to chart the future of this field and offer exciting opportunities for advancement. We anticipate that our research can provide invaluable insights for future undertakings in the field of KGs. Code and datasets will be available at //github.com/zjunlp/AutoKG.
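As a minimal illustration of LLM-based triple extraction for KG construction, the sketch below builds a prompt, calls a model, and parses the reply. The `call_llm` function is a hypothetical stand-in for whatever chat-completion client is used, and the prompt format and relation schema are our own, not the paper's.

```python
# Toy prompt-based (head, relation, tail) triple extraction for KG construction.
# `call_llm` is a hypothetical placeholder for an actual LLM client.
import json

def build_prompt(text, relations):
    return (
        "Extract knowledge triples (head, relation, tail) from the text.\n"
        f"Allowed relations: {', '.join(relations)}.\n"
        "Answer strictly as a JSON list of [head, relation, tail] triples.\n\n"
        f"Text: {text}"
    )

def extract_triples(text, relations, call_llm):
    """call_llm: any function str -> str backed by an LLM (hypothetical)."""
    raw = call_llm(build_prompt(text, relations))
    try:
        return [tuple(t) for t in json.loads(raw)]
    except (json.JSONDecodeError, TypeError):
        return []   # malformed output is treated as "no triples extracted"

# Usage with a canned response standing in for a real model call
fake_llm = lambda prompt: '[["Marie Curie", "educated_at", "University of Paris"]]'
print(extract_triples("Marie Curie studied at the University of Paris.",
                      ["educated_at", "born_in"], fake_llm))
```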
In this paper, we propose a novel Feature Decomposition and Reconstruction Learning (FDRL) method for effective facial expression recognition. We view the expression information as the combination of the shared information (expression similarities) across different expressions and the unique information (expression-specific variations) of each expression. More specifically, FDRL mainly consists of two crucial networks: a Feature Decomposition Network (FDN) and a Feature Reconstruction Network (FRN). In particular, FDN first decomposes the basic features extracted from a backbone network into a set of facial action-aware latent features to model expression similarities. Then, FRN captures the intra-feature and inter-feature relationships of the latent features to characterize expression-specific variations, and reconstructs the expression feature. To this end, two modules, an intra-feature relation modeling module and an inter-feature relation modeling module, are developed in FRN. Experimental results on both in-the-lab databases (including CK+, MMI, and Oulu-CASIA) and in-the-wild databases (including RAF-DB and SFEW) show that the proposed FDRL method consistently achieves higher recognition accuracy than several state-of-the-art methods. This clearly highlights the benefit of feature decomposition and reconstruction for classifying expressions.
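The sketch below gives one simplified reading of the decomposition/reconstruction idea: backbone features are decomposed into several latent features, re-weighted by an intra-feature module, mixed by an inter-feature module, and summed back into an expression feature. It is our own toy interpretation with assumed dimensions, not the authors' FDRL architecture.

```python
# Simplified decomposition/reconstruction sketch (our reading, not FDRL itself).
import torch
import torch.nn as nn

class ToyFDRL(nn.Module):
    def __init__(self, feat_dim=512, latent_dim=64, num_latents=8, num_classes=7):
        super().__init__()
        K = num_latents
        # "FDN": K projections producing action-aware latent features
        self.decompose = nn.ModuleList(nn.Linear(feat_dim, latent_dim) for _ in range(K))
        # "FRN", intra-feature relation: one importance weight per latent feature
        self.intra = nn.Sequential(nn.Linear(latent_dim, 1), nn.Sigmoid())
        # "FRN", inter-feature relation: mixing across the K latent features
        self.inter = nn.Linear(K, K, bias=False)
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):                        # x: (B, feat_dim) backbone features
        latents = torch.stack([f(x) for f in self.decompose], dim=1)   # (B, K, D)
        weights = self.intra(latents)                                   # (B, K, 1)
        latents = latents * weights                                     # intra re-weighting
        latents = self.inter(latents.transpose(1, 2)).transpose(1, 2)   # inter mixing
        expr_feat = latents.sum(dim=1)                                  # "reconstruction"
        return self.classifier(expr_feat)

logits = ToyFDRL()(torch.randn(4, 512))
print(logits.shape)        # torch.Size([4, 7])
```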
In this paper, we propose to apply a meta-learning approach to low-resource automatic speech recognition (ASR). We formulate ASR for different languages as different tasks and meta-learn the initialization parameters from many pretraining languages to achieve fast adaptation on an unseen target language, via the recently proposed model-agnostic meta-learning (MAML) algorithm. We evaluate the proposed approach using six languages as pretraining tasks and four languages as target tasks. Preliminary results show that the proposed method, MetaASR, significantly outperforms the state-of-the-art multitask pretraining approach on all target languages with different combinations of pretraining languages. In addition, owing to MAML's model-agnostic property, this work also opens a new research direction of applying meta learning to more speech-related applications.
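For readers unfamiliar with MAML, the sketch below shows the inner/outer loop on toy regression tasks standing in for the per-language ASR tasks: each task adapts from the shared initialization with one gradient step, and the meta-update backpropagates through that step. The model, task distribution, and hyperparameters are illustrative assumptions, not the MetaASR setup.

```python
# Minimal MAML inner/outer loop on toy regression tasks (illustration only).
import torch

def forward(params, x):                      # tiny linear model in functional form
    w, b = params
    return x @ w + b

def task_loss(params, x, y):
    return ((forward(params, x) - y) ** 2).mean()

torch.manual_seed(0)
meta_params = [torch.randn(5, 1, requires_grad=True),
               torch.zeros(1, requires_grad=True)]        # shared initialization
meta_opt = torch.optim.Adam(meta_params, lr=1e-2)
inner_lr = 0.1

for step in range(200):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                                    # sample 4 "pretraining tasks"
        w_true = torch.randn(5, 1)                        # each task: its own regression
        x_s, x_q = torch.randn(10, 5), torch.randn(10, 5)
        y_s, y_q = x_s @ w_true, x_q @ w_true
        # Inner loop: one adaptation step from the shared initialization
        grads = torch.autograd.grad(task_loss(meta_params, x_s, y_s),
                                    meta_params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(meta_params, grads)]
        # Outer loss: query-set loss after adaptation
        meta_loss = meta_loss + task_loss(adapted, x_q, y_q)
    meta_loss.backward()                                  # backprop through the inner step
    meta_opt.step()

print("final meta-loss:", float(meta_loss))
```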
In this paper, we propose to jointly learn attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on either model exist (e.g., for the task of image captioning), training such network architectures typically requires pre-defined label sequences. For multi-label classification, it would be desirable to have a robust inference process, so that prediction errors do not propagate and degrade performance. Our proposed model uniquely integrates attention and Long Short-Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without prior knowledge of a particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by advancing the technique of beam search, prediction of multiple labels can be efficiently achieved by our proposed network model.
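To illustrate decoding a label set with beam search, the sketch below searches over label sequences with a stand-in scorer in place of the attention/LSTM decoder, keeping the top candidates at each step and never emitting a label twice. The scorer, label vocabulary, and stopping rule are toy assumptions, not the paper's model.

```python
# Beam search over label sequences for multi-label prediction (toy scorer).
import math

LABELS = ["person", "dog", "car", "tree", "bicycle"]
STOP = "<stop>"

def step_logprobs(prefix):
    """Stand-in for the attention/LSTM decoder: log-probs over labels + STOP."""
    scores = {lab: (0.1 if lab in prefix else 1.0 / (1 + i))
              for i, lab in enumerate(LABELS)}
    scores[STOP] = 2.0 if len(prefix) >= 2 else 0.05
    z = sum(math.exp(s) for s in scores.values())
    return {k: math.log(math.exp(s) / z) for k, s in scores.items()}

def beam_search(beam_width=3, max_len=4):
    beams = [([], 0.0)]                       # (label sequence, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            for lab, lp in step_logprobs(prefix).items():
                if lab == STOP:
                    finished.append((prefix, score + lp))
                elif lab not in prefix:       # each label predicted at most once
                    candidates.append((prefix + [lab], score + lp))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    finished.extend(beams)
    best = max(finished, key=lambda b: b[1])
    return set(best[0])                       # the predicted label set

print(beam_search())
```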