Given the proximity of many wireless users and their diversity in consuming local resources (e.g., data plans, computation, and energy resources), device-to-device (D2D) resource sharing is a promising approach towards realizing a sharing economy. This paper adopts an easy-to-implement greedy matching algorithm that runs in a distributed fashion with only sub-linear O(log n) parallel complexity (in the number of users n) for large-scale D2D sharing. Practical cases indicate that the greedy matching's average performance is far better than its worst-case approximation ratio of 50% of the optimum. However, there is no rigorous average-case analysis in the literature to back up such encouraging findings, and this paper is the first to present such an analysis for multiple representative classes of graphs. For 1D linear networks, we prove that our greedy algorithm attains more than 86.5% of the optimum. For 2D grids, though dynamic programming cannot be directly applied, we still prove this average performance ratio to be above 76%. For the more challenging Erdős–Rényi random graphs, we equivalently reduce the problem to the asymptotic analysis of random trees and prove a ratio of up to 79%. Finally, we conduct experiments using real data to simulate realistic D2D networks, and show that our analytical performance measure approximates practical cases well.
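To make the flavor of such a greedy matching concrete, below is a minimal Python sketch of sequential greedy matching on a toy sharing graph; the graph, the edge order, and the function name are illustrative assumptions, and the paper's distributed O(log n)-round parallel variant is not reproduced here.

```python
import random

def greedy_matching(edges):
    """Greedily add edges to a matching, skipping edges whose endpoints
    are already matched (worst case: 1/2 of the optimum matching size)."""
    matched = set()
    matching = []
    for u, v in edges:                  # any fixed or random edge order
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Toy D2D sharing graph: nodes are users, edges are feasible sharing pairs.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
random.shuffle(edges)                   # random arrival order of sharing requests
print(greedy_matching(edges))
```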
In this paper, we present a method to encrypt dynamic controllers that can be implemented through most homomorphic encryption schemes, including somewhat, leveled fully, and fully homomorphic encryption. To this end, we represent the output of the given controller as a linear combination of a fixed number of previous inputs and outputs. As a result, the encrypted controller involves only a limited number of homomorphic multiplications on each encrypted datum, assuming that the output is re-encrypted and transmitted back from the actuator. Guidance on parameter choice is also provided, ensuring that the encrypted controller achieves predefined performance over an infinite time horizon. Furthermore, we propose a customization of the method for Ring-Learning With Errors (Ring-LWE) based cryptosystems, where a vector of messages can be encrypted into a single ciphertext and operated on simultaneously, thus reducing computation and communication loads. Unlike previous results, the proposed customization does not require extra algorithms such as rotation, beyond basic addition and multiplication. Simulation results demonstrate the effectiveness of the proposed method.
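As a hedged illustration of the representation described above (the horizon N, the coefficients, and the operator symbols are placeholders rather than the paper's exact construction), the controller output can be written as a finite linear combination of past inputs and outputs, so each stored ciphertext enters the encrypted evaluation through a bounded number of homomorphic multiplications:

```latex
% Illustrative form (horizon N and coefficients a_i, b_i are placeholders):
u(t) = \sum_{i=1}^{N} a_i\, u(t-i) + \sum_{i=1}^{N} b_i\, y(t-i),
% so that, over encrypted data, each stored ciphertext enters through one
% homomorphic multiplication (by an encrypted or plaintext coefficient) plus additions:
\mathsf{Enc}\!\big(u(t)\big) \;\leftarrow\;
  \sum_{i=1}^{N} \mathsf{Enc}(a_i)\otimes \mathsf{Enc}\big(u(t-i)\big)
  \;\oplus\; \sum_{i=1}^{N} \mathsf{Enc}(b_i)\otimes \mathsf{Enc}\big(y(t-i)\big).
```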
We present RETA (Relative Timing Analysis), a differential timing analysis technique to verify the impact of an update on the execution time of embedded software. Timing analysis is computationally expensive and labor intensive. Software updates render repeating the analysis from scratch a waste of resources and time, because their impact is inherently confined. To determine this boundary, RETA applies a slicing procedure that identifies all relevant code segments and a statement categorization that determines how to analyze each such line of code. We adapt a subset of RETA for integration into aiT, an industrial timing analysis tool, and also develop a complete implementation in a tool called DELTA. Based on staple benchmarks and realistic code updates from official repositories, we test the accuracy by analyzing the worst-case execution time (WCET) before and after an update, comparing the estimates with those of the unmodified aiT as well as with real executions on embedded hardware. DELTA returns WCET information that ranges from exactly the WCET measured on real hardware to 148% of the new version's measured WCET. On the same benchmarks, the unmodified aiT's estimates are 112% and 149% of the actual executions; therefore, even when DELTA is pessimistic, an industry-strength tool such as aiT cannot do better. Crucially, we also show that RETA decreases aiT's analysis time by 45% and its memory consumption by 8.9%, whereas removing RETA from DELTA, effectively rendering it a regular timing analysis tool, increases its analysis time by 27%.
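As a loose illustration of confining re-analysis to what an update actually touches, here is a toy Python sketch based on a plain line diff; it is not RETA's actual slicing procedure or statement categorization, and the function name and categories are hypothetical.

```python
import difflib

def categorize_update(old_lines, new_lines):
    """Toy change localization for a code update using a line diff:
    unchanged lines can reuse prior timing results, while changed or
    inserted lines are flagged for re-analysis."""
    keep, reanalyze = [], []
    matcher = difflib.SequenceMatcher(None, old_lines, new_lines)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            keep.extend(range(j1, j2))       # reuse previous analysis results
        else:                                # 'replace', 'insert', or 'delete'
            reanalyze.extend(range(j1, j2))  # re-run timing analysis here
    return keep, reanalyze

old = ["int f(int x) {", "  return x + 1;", "}"]
new = ["int f(int x) {", "  int y = x * 2;", "  return y + 1;", "}"]
keep, redo = categorize_update(old, new)
print("reuse lines:", keep, "re-analyze lines:", redo)
```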
Mobile edge computing (MEC) enables low-latency and high-bandwidth applications by bringing computation and data storage closer to end-users. Intelligent computing is an important application of MEC, where computing resources are used to solve intelligent task-related problems according to task requirements. However, efficiently offloading computation and allocating resources for intelligent tasks in MEC systems is challenging due to the complex interactions between task requirements and MEC resources. To address this challenge, we investigate joint computation offloading and resource allocation for intelligent tasks in MEC systems. Our goal is to optimize system utility by jointly considering computing accuracy and task delay to achieve maximum system performance. We focus on classification intelligent tasks and formulate an optimization problem that considers both the accuracy requirements of tasks and the parallel computing capabilities of MEC systems. To solve the optimization problem, we decompose it into three subproblems: subcarrier allocation, computing capacity allocation, and compression offloading. We use convex optimization and successive convex approximation to derive closed-form expressions for the subcarrier allocation, offloading decisions, computing capacity, and compression ratio. Based on these solutions, we design an efficient computation offloading and resource allocation algorithm for intelligent tasks in MEC systems. Our simulation results demonstrate that the proposed algorithm significantly improves the performance of intelligent tasks in MEC systems and achieves a flexible trade-off between system revenue and cost for intelligent tasks, compared with the benchmarks.
This paper investigates semantic-extraction-task-oriented dynamic multi-time-scale user admission and resource allocation in mobile edge computing (MEC) systems. Amid the prevalence of artificial intelligence applications in various industries, offloading semantic extraction tasks, which mainly consist of the convolutional neural networks used in computer vision, poses a great challenge for communication bandwidth and computing capacity allocation in MEC systems. Considering the stochastic nature of the semantic extraction tasks, we formulate a stochastic optimization problem by modeling the dynamic arrival of tasks in the temporal domain. We jointly optimize the system revenue and cost, represented as long-term user admission and short-term resource allocation, respectively. To handle the proposed stochastic optimization problem, we decompose it into short-time-scale subproblems and a long-time-scale subproblem using the Lyapunov optimization technique. The short-time-scale resource allocation variables, including user association, bandwidth allocation, and computing capacity allocation, are then obtained in closed form, while the long-time-scale user admission optimization is solved by a heuristic iterative method. On this basis, we propose a multi-time-scale user admission and resource allocation algorithm for dynamic semantic extraction task computing in MEC systems. Simulation results demonstrate that, compared with the benchmarks, the proposed algorithm improves the efficiency of user admission and resource allocation and achieves a flexible trade-off between system revenue and cost across multiple time scales for semantic extraction tasks.
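For context, here is a generic sketch of the Lyapunov drift-plus-penalty decomposition that such multi-time-scale formulations typically rely on; the symbols below are placeholders and not necessarily the paper's notation.

```latex
% Generic drift-plus-penalty sketch (symbols are placeholders, not the paper's):
% virtual queues Q_i(t) track long-term constraints, L(t) is the Lyapunov function.
L(t) = \tfrac{1}{2}\sum_i Q_i(t)^2, \qquad
\Delta(t) = \mathbb{E}\big[L(t+1) - L(t) \,\big|\, Q(t)\big].
% Each short time scale greedily minimizes a per-slot bound of the form
\min_{\text{allocation}} \;\; \Delta(t) + V \cdot \mathbb{E}\big[\mathrm{cost}(t) \,\big|\, Q(t)\big],
% where the weight V \ge 0 trades long-term revenue (stability) against short-term cost.
```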
Vizing's theorem asserts the existence of a $(\Delta+1)$-edge coloring for any graph $G$, where $\Delta = \Delta(G)$ denotes the maximum degree of $G$. Several polynomial time $(\Delta+1)$-edge coloring algorithms are known, and the state-of-the-art running time (up to polylogarithmic factors) is $\tilde{O}(\min\{m \cdot \sqrt{n}, m \cdot \Delta\})$, by Gabow et al.\ from 1985, where $n$ and $m$ denote the number of vertices and edges in the graph, respectively. (The $\tilde{O}$ notation suppresses polylogarithmic factors.) Recently, Sinnamon shaved off a polylogarithmic factor from the time bound of Gabow et al. The arboricity $\alpha = \alpha(G)$ of a graph $G$ is the minimum number of edge-disjoint forests into which its edge set can be partitioned, and it is a measure of the graph's ``uniform density''. While $\alpha \le \Delta$ in any graph, many natural and real-world graphs exhibit a significant separation between $\alpha$ and $\Delta$. In this work we design a $(\Delta+1)$-edge coloring algorithm with a running time of $\tilde{O}(\min\{m \cdot \sqrt{n}, m \cdot \Delta\})\cdot \frac{\alpha}{\Delta}$, thus improving the longstanding time barrier by a factor of $\frac{\alpha}{\Delta}$. In particular, we achieve a near-linear runtime for bounded arboricity graphs (i.e., $\alpha = \tilde{O}(1)$) as well as when $\alpha = \tilde{O}(\frac{\Delta}{\sqrt{n}})$. Our algorithm builds on Sinnamon's algorithm, and can be viewed as a density-sensitive refinement of it.
This paper presents the FormAI dataset, a large collection of 112,000 AI-generated, compilable, and independent C programs with vulnerability classification. We introduce a dynamic zero-shot prompting technique constructed to spawn a diverse set of programs utilizing Large Language Models (LLMs). The dataset is generated by GPT-3.5-turbo and comprises programs with varying levels of complexity. Some programs handle complicated tasks such as network management, table games, or encryption, while others deal with simpler tasks like string manipulation. Every program is labeled with the vulnerabilities found in the source code, indicating the type, line number, and vulnerable function name. This is accomplished by employing a formal verification method using the Efficient SMT-based Bounded Model Checker (ESBMC), which combines model checking, abstract interpretation, constraint programming, and satisfiability modulo theories to reason over safety/security properties in programs. This approach definitively detects vulnerabilities and offers a formal model known as a counterexample, thus eliminating the possibility of false positive reports. This property of the dataset makes it suitable for evaluating the effectiveness of various static and dynamic analysis tools. Furthermore, we have associated the identified vulnerabilities with the relevant Common Weakness Enumeration (CWE) numbers. We make the source code of all 112,000 programs available, accompanied by a comprehensive list detailing the vulnerabilities detected in each individual program, including location and function name, which makes the dataset ideal for training LLMs and machine learning algorithms.
To improve the convergence of the randomized Kaczmarz (RK) method for solving linear systems, Bai and Wu (SIAM J. Sci. Comput., 40(1):A592--A606, 2018) originally introduced a greedy probability criterion for effectively selecting the working row from the coefficient matrix and constructed the greedy randomized Kaczmarz (GRK) method. Due to its simplicity and efficiency, this approach has inspired numerous subsequent works in recent years, such as the capped adaptive sampling rule, the greedy augmented randomized Kaczmarz method, and the greedy randomized coordinate descent method. Since the iterates of the GRK method are random variables, existing convergence analyses all bound the expectation of the error. In this note, we prove that the linear convergence rate of the GRK method is deterministic, i.e., it holds not merely in expectation. Moreover, Polyak's heavy-ball momentum technique is incorporated to improve the performance of the GRK method. Compared with the technique used in Loizou and Richt\'{a}rik (Comput. Optim. Appl., 77(3):653--710, 2020) for momentum variants of randomized iterative methods, we propose a refined convergence analysis, which shows that the proposed GRK method with momentum (mGRK) also enjoys deterministic linear convergence. Numerical experiments show that the mGRK method is more efficient than the GRK method.
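The following minimal NumPy sketch illustrates the mGRK iteration under the ideas recalled above: rows are sampled via the greedy probability criterion of Bai and Wu (as recalled here, possibly differing in constants from the original papers), and a Polyak heavy-ball term is added to the Kaczmarz projection step.

```python
import numpy as np

def mgrk(A, b, beta=0.3, iters=200, x0=None, seed=0):
    """Greedy randomized Kaczmarz with Polyak heavy-ball momentum (sketch)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A * A, axis=1)            # ||a_i||^2
    fro2 = row_norms.sum()                       # ||A||_F^2
    x = np.zeros(n) if x0 is None else x0.astype(float)
    x_prev = x.copy()
    for _ in range(iters):
        r = b - A @ x                            # residual
        r2 = r * r
        if r2.sum() < 1e-28:                     # system solved (consistent case)
            break
        # Greedy criterion: keep rows with large relative residuals only.
        eps = 0.5 * (np.max(r2 / row_norms) / r2.sum() + 1.0 / fro2)
        cand = np.where(r2 >= eps * r2.sum() * row_norms)[0]
        p = r2[cand] / r2[cand].sum()            # sampling probabilities on candidates
        i = rng.choice(cand, p=p)
        step = (r[i] / row_norms[i]) * A[i]      # Kaczmarz projection step
        x_new = x + step + beta * (x - x_prev)   # heavy-ball momentum
        x_prev, x = x, x_new
    return x

# Consistent toy system: the iterates should approach the true solution of Ax = b.
A = np.random.default_rng(1).standard_normal((50, 10))
x_star = np.arange(10, dtype=float)
print(np.linalg.norm(mgrk(A, A @ x_star) - x_star))
```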
Exploiting the computational heterogeneity of mobile devices and edge nodes, mobile edge computing (MEC) provides an efficient approach to realizing real-time applications that are sensitive to information freshness by offloading tasks from mobile devices to edge nodes. We use the Age-of-Information (AoI) metric to evaluate information freshness. An efficient solution that minimizes the AoI of a multi-user MEC system is non-trivial to obtain due to the random computing time. In this paper, we consider multiple users offloading tasks to heterogeneous edge servers in a MEC system. We first reformulate the problem as a Restless Multi-Armed Bandit (RMAB) problem and establish a hierarchical Markov Decision Process (MDP) to characterize the AoI updates of the MEC system. Based on the hierarchical MDP, we propose a nested index framework and design a nested index policy with provable asymptotic optimality. Finally, we obtain the nested index in closed form, which enables a tradeoff between computational complexity and accuracy. Our algorithm reduces the optimality gap by up to 40% compared to benchmarks, and it asymptotically approaches the lower bound as the system scale becomes large.
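For reference, here is a generic sketch of the AoI evolution that such formulations typically build on; the notation is illustrative and not necessarily the paper's.

```latex
% Generic AoI evolution: \Delta_i(t) is the age of user i's information at time t,
% and g_i(t) is the generation time of the freshest update of user i whose result
% has been delivered (computed and returned) by time t.
\Delta_i(t) = t - g_i(t),
\qquad
\Delta_i(t+1) =
\begin{cases}
  t + 1 - g_i' & \text{if an update generated at } g_i' \text{ is delivered in slot } t,\\
  \Delta_i(t) + 1 & \text{otherwise.}
\end{cases}
```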
Entity resolution (ER) is the process of identifying records that refer to the same entities within one or across multiple databases. Numerous techniques have been developed to tackle ER challenges over the years, with recent emphasis placed on machine and deep learning methods for the matching phase. However, the quality of the benchmark datasets typically used in the experimental evaluations of learning-based matching algorithms has not been examined in the literature. To fill this gap, we propose four different approaches to assessing the difficulty and appropriateness of 13 established datasets: two theoretical approaches, which involve new measures of linearity and existing measures of complexity, and two practical approaches, namely the difference between the best non-linear and linear matchers and the difference between the best learning-based matcher and a perfect oracle. Our analysis demonstrates that most of the popular datasets pose rather easy classification tasks. As a result, they are not suitable for properly evaluating learning-based matching algorithms. To address this issue, we propose a new methodology for yielding benchmark datasets. We put it into practice by creating four new matching tasks, and we verify that these new benchmarks are more challenging and therefore more suitable for further advancements in the field.
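Below is a minimal sketch of the first practical probe described above, under assumed inputs: given per-pair similarity features and match labels (synthetic stand-ins here), it compares the test F1 of a linear matcher with that of a non-linear one; the models and features are illustrative choices, not the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Assumed input: X holds similarity features for candidate record pairs,
# y holds 1 for matches and 0 for non-matches (synthetic stand-in here).
rng = np.random.default_rng(0)
X = rng.random((2000, 8))
y = (X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(2000) > 0.25).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

linear = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
nonlinear = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

f1_lin = f1_score(y_te, linear.predict(X_te))
f1_non = f1_score(y_te, nonlinear.predict(X_te))
# A small gap suggests an (almost) linearly separable, i.e., easy, benchmark.
print(f"linear F1={f1_lin:.3f}, non-linear F1={f1_non:.3f}, gap={f1_non - f1_lin:.3f}")
```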
Artificial Intelligence (AI) is rapidly becoming integrated into military Command and Control (C2) systems as a strategic priority for many defence forces. The successful implementation of AI promises to herald a significant leap in C2 agility through automation. However, realistic expectations need to be set on what AI can achieve in the foreseeable future. This paper will argue that AI could lead to a fragility trap, whereby the delegation of C2 functions to an AI could increase the fragility of C2, resulting in catastrophic strategic failures. This calls for a new framework for AI in C2 to avoid this trap. We will argue that antifragility, along with agility, should form the core design principles for AI-enabled C2 systems. This duality is termed Agile, Antifragile, AI-Enabled Command and Control (A3IC2). An A3IC2 system continuously improves its capacity to perform in the face of shocks and surprises through overcompensation from feedback during the C2 decision-making cycle. An A3IC2 system will not only survive within a complex operational environment, but will also thrive, benefiting from the inevitable shocks and volatility of war.