This paper investigates an emerging cache side-channel attack defense approach involving the use of hardware performance counters (HPCs). These counters monitor microarchitectural events, and detection schemes analyze statistical deviations in the resulting measurements to differentiate between malicious and benign software. With numerous proposals and promising reported results, we seek to investigate whether published HPC-based detection methods are evaluated in a proper setting and under the right assumptions, such that their quality can be ensured for real-world deployment against cache side-channel attacks. To achieve this goal, this paper presents a comprehensive evaluation and scrutiny of the existing literature on the subject in the form of a survey, accompanied by experimental evidence to support our evaluation.
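As a concrete illustration of the measurement pipeline such defenses build on, the following minimal sketch collects a hardware cache-miss count via the Linux `perf stat` tool and flags runs that deviate statistically from a benign baseline. The single `cache-misses` event, the z-score rule, and the threshold are illustrative assumptions of ours, not the detection logic of any surveyed proposal.

```python
# Minimal sketch of HPC-based anomaly detection (illustrative assumptions:
# Linux with `perf` installed; one event and a z-score threshold stand in
# for the richer event sets and classifiers used in the literature).
import statistics
import subprocess

def count_cache_misses(cmd: list[str]) -> int:
    """Run `cmd` under `perf stat` and return its cache-miss count."""
    result = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", "cache-misses", "--"] + cmd,
        capture_output=True, text=True,
    )
    # With -x, perf stat writes CSV lines to stderr: value,unit,event,...
    for line in result.stderr.splitlines():
        fields = line.split(",")
        if len(fields) > 2 and fields[2] == "cache-misses":
            return int(fields[0])
    raise RuntimeError("cache-misses event not found in perf output")

def looks_anomalous(sample: int, baseline: list[int], z_max: float = 3.0) -> bool:
    """Flag a run whose miss count deviates strongly from the benign baseline."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1.0  # guard against zero variance
    return abs(sample - mu) / sigma > z_max

baseline = [count_cache_misses(["sleep", "0.1"]) for _ in range(10)]
print(looks_anomalous(count_cache_misses(["sleep", "0.1"]), baseline))
```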
Motivated by the increasing use of quadrotors for payload delivery, we consider a joint trajectory generation and feedback control design problem for a quadrotor experiencing aerodynamic wrenches. Unmodeled aerodynamic drag forces from carried payloads can lead to catastrophic outcomes. Prior work models aerodynamic effects as residual dynamics or external disturbances in the control problem, leading to a reactive policy that can fail catastrophically. Moreover, redesigning controllers and tuning control gains on hardware platforms is a laborious effort. In this paper, we argue that adapting the trajectory generation component while keeping the controller fixed can improve trajectory tracking for quadrotor systems experiencing drag forces. To achieve this, we formulate a drag-aware planning problem by applying a suitable relaxation to an optimal quadrotor control problem, introducing a tracking cost function which measures the ability of a controller to follow a reference trajectory. This tracking cost function acts as a regularizer in trajectory generation and is learned from data obtained in simulation. Our experiments in both simulation and on the Crazyflie hardware platform show that changing the planner reduces tracking error by as much as 83%. Evaluation on hardware demonstrates that our planned path, as opposed to a baseline, avoids controller saturation and catastrophic outcomes during aggressive maneuvers.
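As a rough sketch of the relaxation described above (with notation of our own choosing, not the paper's), the drag-aware planner can be read as augmenting a standard trajectory-generation objective with the learned tracking-cost regularizer:

```latex
% Sketch of a drag-aware planning objective; \xi is the reference trajectory,
% J_{smooth} a standard smoothness cost, and c_\theta a tracking cost learned
% from simulation data that predicts how well the *fixed* controller can
% follow \xi under drag. All symbols are assumptions for illustration.
\xi^\star \;=\; \arg\min_{\xi}\; J_{\mathrm{smooth}}(\xi) \;+\; \lambda\, c_\theta(\xi),
\qquad
c_\theta(\xi) \;\approx\; \int_0^T \bigl\lVert x(t) - x_{\mathrm{ref}}(t) \bigr\rVert^2 \, dt .
```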
This work proposes a novel approach for non-deterministic extension of experimental data that considers structural model inadequacy for conditions other than the calibration scenario while simultaneously resolving any significant prior-data discrepancy with information extracted from flight measurements. This functionality is achieved through methodical utilization of model error emulators and Bayesian model averaging studies with available response data. The outlined approach does not require prior flight data availability and introduces straightforward mechanisms for their assimilation in future predictions. Application of the methodology is demonstrated herein by extending material performance data captured at the HyMETS facility to the MSL scenario, where the described process yields results that exhibit significantly improved capacity for predictive uncertainty quantification studies. This work also investigates limitations associated with straightforward uncertainty propagation procedures onto calibrated model predictions for the flight scenario and manages computational requirements with sensitivity analysis and surrogate modeling techniques.
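For readers unfamiliar with the machinery, the Bayesian model averaging step invoked above typically takes the following textbook form, stated here with illustrative symbols rather than the paper's notation:

```latex
% Generic Bayesian model averaging over candidate (model-error) structures M_k
% given observed data y; \Delta is the quantity to predict at flight conditions.
p(\Delta \mid y) \;=\; \sum_{k=1}^{K} p(\Delta \mid M_k, y)\, p(M_k \mid y),
\qquad
p(M_k \mid y) \;=\; \frac{p(y \mid M_k)\, p(M_k)}{\sum_{j=1}^{K} p(y \mid M_j)\, p(M_j)} .
```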
We propose a method for checking generalized reachability properties in Petri nets that takes advantage of structural reductions and that can be used, transparently, as a pre-processing step for existing model checkers. Our approach is based on a new procedure that can project a property about an initial Petri net into an equivalent formula that only refers to the reduced version of this net. Our projection is defined as a variable elimination procedure for linear integer arithmetic, tailored to the specific kind of constraints we handle. It has linear complexity, is guaranteed to return a sound property, and makes use of a simple condition to detect when the result is exact. Experimental results show that our approach works well in practice and that it can be useful even when there is only a limited number of reductions.
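To convey the flavor of the projection, consider a toy example of our own construction (not taken from the paper): a structural reduction that agglomerates places p1 and p2 of the initial net into a single place a of the reduced net, recorded as a linear equation over markings.

```latex
% Toy projection by variable elimination (our illustrative example).
% Reduction equation linking initial-net places p_1, p_2 to reduced-net place a:
a \;=\; p_1 + p_2, \qquad p_1, p_2 \geq 0 .
% Eliminating p_1 and p_2, the reachability property
\mathrm{EF}\,(p_1 + p_2 \geq 2)
% over the initial net projects to the equivalent property
\mathrm{EF}\,(a \geq 2)
% which refers only to places of the reduced net.
```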
The difficulty of the information extraction task lies in dealing with task-specific label schemas and heterogeneous data structures. Recent work has proposed methods based on large language models to uniformly model different information extraction tasks. However, these existing methods are deficient in their information extraction capabilities for languages other than English, such as Chinese. In this paper, we propose an end-to-end chat-enhanced instruction tuning framework for universal information extraction (YAYI-UIE), which supports both Chinese and English. Specifically, we utilize dialogue data and information extraction data to jointly enhance information extraction performance. Experimental results show that our proposed framework achieves state-of-the-art performance on Chinese datasets while also achieving comparable performance on English datasets under both supervised and zero-shot settings.
Many adversarial attacks target natural language processing systems, and most succeed by modifying the individual tokens of a document. Despite the apparent uniqueness of each of these attacks, fundamentally each one is simply a distinct configuration of four components: a goal function, allowable transformations, a search method, and constraints. In this survey, we systematically present the different components used throughout the literature, using an attack-independent framework which allows for easy comparison and categorisation of components. Our work aims to serve as a comprehensive guide for newcomers to the field and to spark targeted research into refining the individual attack components.
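To make the decomposition concrete, the sketch below casts the four components as plain Python interfaces with a greedy loop as the search method; the interfaces and the loop are our illustrative assumptions, though frameworks such as TextAttack realize this same decomposition in full.

```python
# Minimal sketch of the four-component attack decomposition
# (interfaces and the greedy search are illustrative assumptions).
from dataclasses import dataclass
from typing import Callable

GoalFunction = Callable[[list[str]], float]              # higher = closer to success
Transformation = Callable[[list[str]], list[list[str]]]  # candidate perturbations
Constraint = Callable[[list[str], list[str]], bool]      # original vs. perturbed

@dataclass
class Attack:
    goal: GoalFunction
    transform: Transformation
    constraints: list[Constraint]

    def greedy_search(self, tokens: list[str], steps: int = 10) -> list[str]:
        """Search method: greedily take the admissible candidate that
        most improves the goal function, stopping at a local optimum."""
        current = tokens
        for _ in range(steps):
            candidates = [
                c for c in self.transform(current)
                if all(ok(tokens, c) for ok in self.constraints)
            ]
            if not candidates:
                break
            best = max(candidates, key=self.goal)
            if self.goal(best) <= self.goal(current):
                break
            current = best
        return current
```

Swapping any single component, say beam search for the greedy loop or a synonym-substitution transformation for a character-level one, yields a different published attack, which is precisely what makes such a framework useful for comparison.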
In this paper, we study underlay device-to-device (D2D) communication systems empowered by a reconfigurable intelligent surface (RIS) for cognitive cellular networks. Considering Rayleigh fading channels and the general case where both the direct and RIS-enabled D2D channels exist, the outage probability (OP) of the D2D communication link is presented in closed form. Next, for the considered RIS-empowered underlaid D2D system, we formulate an OP minimization problem. We target the joint optimization of the transmit power at the D2D source and the RIS placement, under constraints on the transmit power at the D2D source and on the limited interference imposed on the cellular user, for two RIS deployment topologies. Due to the coupled optimization variables, the formulated problem is intractable in its original form. We propose an equivalent transformation which we are able to solve analytically. In the transformed problem, an expression for the average signal-to-interference-plus-noise ratio (SINR) at the D2D receiver is derived in closed form. Our theoretical derivations are corroborated through simulation results, and various system design insights are deduced. Indicatively, the proposed RIS-empowered underlaid D2D system design is shown to outperform the benchmark semi-adaptive optimal-power and optimal-distance schemes, offering $44\%$ and $20\%$ performance improvements, respectively.
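For orientation, the quantities involved take the following generic shape for an underlay D2D link assisted by an N-element RIS; the channel notation below is ours and deliberately simplified, whereas the paper's closed-form OP accounts for the exact Rayleigh-fading system model:

```latex
% Generic received SINR at the D2D receiver with direct channel h_d, cascaded
% RIS channels g_n, f_n with phase shifts \phi_n, cellular interference
% channel h_c, and noise power \sigma^2; notation is illustrative.
\mathrm{SINR} \;=\; \frac{P_s \left| h_d + \sum_{n=1}^{N} g_n\, e^{j\phi_n} f_n \right|^2}
{P_c \left| h_c \right|^2 + \sigma^2},
\qquad
\mathrm{OP} \;=\; \Pr\!\left\{ \mathrm{SINR} < \gamma_{\mathrm{th}} \right\} .
```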
We propose a data-driven approach for propagating uncertainty in stochastic power grid simulations and apply it to the estimation of transmission line failure probabilities. A reduced-order equation governing the evolution of the observed line energy probability density function is derived from the Fokker--Planck equation of the full-order continuous Markov process. Our method produces estimates by numerically integrating this reduced equation. Numerical experiments for scalar- and vector-valued energy functions are conducted using the classical multimachine model under spatiotemporally correlated noise perturbation. The method is more sample-efficient than kernel density estimation for computing probabilities of tail events, and it produces vastly more accurate estimates of joint event occurrence than models that assume independence.
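For context, the full-order starting point is the standard Fokker--Planck equation for the state PDF of the stochastic grid model, written here in generic form; the paper's contribution is the reduced-order equation, derived from it, for the PDF of the observed line-energy function:

```latex
% Fokker--Planck equation for the PDF p(x,t) of an SDE
% dx = f(x)\,dt + \sigma(x)\,dW, with diffusion D = \sigma \sigma^\top;
% the reduced-order model governs the PDF of an energy observable E(x).
\frac{\partial p}{\partial t}
\;=\; -\sum_i \frac{\partial}{\partial x_i}\!\left[ f_i(x)\, p \right]
\;+\; \frac{1}{2} \sum_{i,j} \frac{\partial^2}{\partial x_i \partial x_j}\!\left[ D_{ij}(x)\, p \right].
```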
Face recognition technology has advanced significantly in recent years, due largely to the availability of large and increasingly complex training datasets for use in deep learning models. These datasets, however, typically comprise images scraped from news sites or social media platforms and, therefore, have limited utility in more advanced security, forensics, and military applications, which must contend with lower resolutions, longer ranges, and elevated viewpoints. To meet these critical needs, we collected and curated the first and second subsets of a large multi-modal biometric dataset designed for use in the research and development (R&D) of biometric recognition technologies under extremely challenging conditions. Thus far, the dataset includes more than 350,000 still images and over 1,300 hours of video footage of approximately 1,000 subjects. To collect this data, we used Nikon DSLR cameras, a variety of commercial surveillance cameras, specialized long-range R&D cameras, and Group 1 and Group 2 UAV platforms. The goal is to support the development of algorithms capable of accurately recognizing people at ranges up to 1,000 m and from high angles of elevation. These advances will include improvements to the state of the art in face recognition and will support new research in the area of whole-body recognition using methods based on gait and anthropometry. This paper describes the methods used to collect and curate the dataset, and the dataset's characteristics at the current stage.
Keeping pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven particularly relevant for natural language processing (NLP), experiencing rapid spread and wide adoption in recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.
Autonomic computing investigates how systems can achieve (user-)specified control outcomes on their own, without the intervention of a human operator. Autonomic computing fundamentals have been substantially influenced by those of control theory for closed- and open-loop systems. In practice, complex systems may exhibit a number of concurrent and inter-dependent control loops. Despite research into autonomic models for managing computer resources, ranging from individual resources (e.g., web servers) to a resource ensemble (e.g., multiple resources within a data center), integrating Artificial Intelligence (AI) and Machine Learning (ML) to improve resource autonomy and performance at scale remains a fundamental challenge. The integration of AI/ML to achieve such autonomic and self-management of systems can be realized at different levels of granularity, from full to human-in-the-loop automation. In this article, leading academics, researchers, practitioners, engineers, and scientists in the fields of cloud computing, AI/ML, and quantum computing come together to discuss current research and potential future directions for these fields. Further, we discuss challenges and opportunities for leveraging AI and ML in next-generation computing for emerging computing paradigms, including cloud, fog, edge, serverless, and quantum computing environments.