Although Wi-Fi is on the eve of its seventh generation, security concerns regarding this omnipresent technology remain in the spotlight of the research community. This work introduces two new denial-of-service attacks against contemporary Wi-Fi 5 and 6 networks. Unlike similar works in the literature, which focus on 802.11 management frames, the introduced attacks exploit control frames. Both attacks target the central element of any infrastructure-based 802.11 network, i.e., the access point (AP), and result in depriving the associated stations of any service. We demonstrate that, at the very least, the attacks affect a great mass of off-the-shelf AP implementations by different renowned vendors and can be mounted with inexpensive equipment, little effort, and a low level of expertise. With reference to the latest standard, namely 802.11-2020, we elaborate on the root cause of the respective vulnerabilities, pinpointing shortcomings. Following a coordinated vulnerability disclosure process, our findings have been promptly communicated to each affected AP vendor, already receiving positive feedback as well as a currently reserved common vulnerabilities and exposures (CVE) ID, namely CVE-2022-32666.
Deep neural networks (DNNs) are of critical use in different domains. To accelerate DNN computation, tensor compilers are proposed to generate efficient code on different domain-specific accelerators. Existing tensor compilers mainly focus on optimizing computation efficiency. However, memory access is becoming a key performance bottleneck because the computational performance of accelerators is increasing much faster than memory performance. The lack of a direct description of memory access and data dependence in current tensor compilers' intermediate representation (IR) poses significant challenges to generating memory-efficient code. In this paper, we propose IntelliGen, a tensor compiler that can generate high-performance code for memory-intensive operators by considering both computation and data-movement optimizations. IntelliGen represents a DNN program using GIR, which includes primitives indicating its computation, data movement, and parallel strategies. This information is further composed into an instruction-level dataflow graph to perform holistic optimizations by searching different memory access patterns and computation operations and generating memory-efficient code on different hardware. We evaluate IntelliGen on NVIDIA GPU, AMD GPU, and Cambricon MLU, showing speedups of up to 1.97x, 2.93x, and 16.91x (1.28x, 1.23x, and 2.31x on average), respectively, compared to the current most performant frameworks.
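The abstract does not spell out GIR's concrete primitive set, so the following is only an illustrative sketch of an IR that makes computation, data movement, and parallelism explicit and collects them into a dataflow graph; every class, field, and op name below is invented for illustration and is not IntelliGen's actual API.

```python
# Illustrative sketch only: hypothetical IR nodes separating computation,
# data movement, and parallelization, in the spirit described by the abstract.
from dataclasses import dataclass, field

@dataclass
class Compute:           # an arithmetic or reduction operation
    op: str              # e.g. "reduce_max", "exp_sub", "reduce_sum"
    inputs: list
    output: str

@dataclass
class DataMove:          # an explicit copy between memory scopes
    src: str             # e.g. "global:x"
    dst: str             # e.g. "shared:x_tile"
    elems: int           # number of elements moved

@dataclass
class Parallelize:       # binds a loop axis to a hardware level
    axis: str
    level: str           # e.g. "block", "warp", "thread"

@dataclass
class DataflowGraph:     # instruction-level dataflow graph over the primitives
    nodes: list = field(default_factory=list)

    def add(self, node):
        self.nodes.append(node)
        return node

# A softmax-like fragment expressed with these primitives.
g = DataflowGraph()
g.add(DataMove(src="global:x", dst="shared:x_tile", elems=1024))
g.add(Compute(op="reduce_max", inputs=["shared:x_tile"], output="reg:m"))
g.add(Compute(op="exp_sub", inputs=["shared:x_tile", "reg:m"], output="shared:e"))
g.add(Compute(op="reduce_sum", inputs=["shared:e"], output="reg:s"))
g.add(Parallelize(axis="row", level="block"))
print(len(g.nodes), "GIR-style nodes")
```

A compiler in this style can then search over alternative data-movement placements and loop bindings before emitting code for a specific accelerator.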
In parametric lock-sharing systems, processes can spawn new processes to run in parallel and can create new locks. The behavior of every process is given by a pushdown automaton. We consider infinite behaviors of such systems under a strong process fairness condition. The result of a potentially infinite execution of a system is a limit configuration, that is, a potentially infinite tree. The verification problem is to determine whether a given system has a limit configuration satisfying a given regular property. This formulation of the problem encompasses verification of reachability as well as of many liveness properties. We show that this verification problem, while undecidable in general, is decidable for nested lock usage, and we show that it is Exptime-complete. The main source of complexity is the number of parameters in the spawn operation. If the number of parameters is bounded, our algorithm works in Ptime for properties expressed by parity automata with a fixed number of ranks.
In group foraging situations, the conventional expectation is that increased food availability would enhance consumption, especially when animals prioritize maximizing their food intake. This paper challenges this expectation by conducting an in-depth game-theoretic analysis of a basic producer-scrounger model, in which animals must choose between intensive food searching as producers and moderate searching while relying on group members as scroungers. Surprisingly, our study reveals that, under certain circumstances, increasing food availability can amplify the inclination to scrounge to such an extent that it paradoxically leads to a reduction in animals' food consumption compared to scenarios with limited food availability. We further illustrate a similar phenomenon in a model capturing free-riding dynamics among workers in a company. We demonstrate that, under certain reward mechanisms, enhancing workers' production capacities can inadvertently trigger a surge in free-riding behavior, leading to both diminished group productivity and reduced individual payoffs. Our findings underscore the significance of contextual factors when comprehending and predicting the impact of resource availability on individual and collective outcomes.
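To make the producer-scrounger trade-off concrete, a classic finder's-share payoff structure (a generic textbook form, not necessarily the exact model analyzed in the paper) is

$$
W_{\text{producer}} = \lambda\left(a + \frac{F-a}{S+1}\right),
\qquad
W_{\text{scrounger}} = \lambda P\,\frac{F-a}{S+1},
$$

where $\lambda$ is the patch-finding rate per producer, $F$ the value of a patch, $a$ the finder's share, and $P$ and $S$ the numbers of producers and scroungers. The mixed equilibrium equates the two payoffs, and the paper's surprising observation is that raising food availability can shift such an equilibrium toward scrounging strongly enough to lower realized intake.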
Last-mile delivery of goods has gained a lot of traction during the COVID-19 pandemic. However, current package delivery processes often lead to parking in the second lane, which in turn has negative effects on the urban environment in which the deliveries take place, such as traffic congestion and safety issues for other road users. To tackle these challenges, an effective autonomous delivery system is required that guarantees efficient, flexible, and safe delivery of goods. The project LogiSmile, co-funded by EIT Urban Mobility, pilots an autonomous delivery vehicle dubbed the Autonomous Hub Vehicle (AHV) that works in cooperation with a small autonomous robot called the Autonomous Delivery Device (ADD). With the two cooperating robots, the project LogiSmile aims to find a possible solution to the challenges of urban goods distribution in congested areas and to demonstrate the future of urban mobility. As a member of the Niedersächsische Forschungszentrum für Fahrzeugtechnik (NFF), the Institute for Software and Systems Engineering (ISSE) developed an integrated software safety architecture for runtime monitoring of the AHV, with (1) a dependability cage (DC) used for the on-board monitoring of the AHV, and (2) a remote command control center (CCC) which enables the remote off-board supervision of a fleet of AHVs. The DC supervises the vehicle continuously and, in case of any safety violation, switches from the nominal driving mode to a degraded driving mode or a fail-safe mode. The CCC also manages the communication of the AHV with the ADD and provides fail-operational solutions for the AHV when it cannot handle complex situations autonomously. The runtime monitoring concept developed for the AHV was demonstrated in 2022 in Hamburg. We report on the obtained results and on the lessons learned.
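As a rough illustration of the dependability cage's mode-switching behavior (the thresholds, signal names, and class below are invented for this sketch and are not taken from the ISSE architecture):

```python
# Illustrative sketch only: a minimal runtime monitor that demotes the driving
# mode when safety envelopes are violated, loosely mirroring the dependability
# cage idea described in the abstract. All thresholds and signals are invented.
from enum import Enum

class Mode(Enum):
    NOMINAL = "nominal"
    DEGRADED = "degraded"
    FAIL_SAFE = "fail_safe"

class DependabilityMonitor:
    def __init__(self, max_speed_mps=5.0, min_clearance_m=1.0):
        self.max_speed_mps = max_speed_mps      # assumed envelope bounds
        self.min_clearance_m = min_clearance_m
        self.mode = Mode.NOMINAL

    def check(self, speed_mps, clearance_m, localization_ok):
        """Return the driving mode for the current monitoring cycle."""
        if not localization_ok or clearance_m < 0.5 * self.min_clearance_m:
            self.mode = Mode.FAIL_SAFE          # severe violation: stop safely
        elif speed_mps > self.max_speed_mps or clearance_m < self.min_clearance_m:
            if self.mode is Mode.NOMINAL:
                self.mode = Mode.DEGRADED       # mild violation: reduce capability
        return self.mode

monitor = DependabilityMonitor()
print(monitor.check(speed_mps=4.0, clearance_m=2.0, localization_ok=True))   # NOMINAL
print(monitor.check(speed_mps=6.0, clearance_m=2.0, localization_ok=True))   # DEGRADED
print(monitor.check(speed_mps=2.0, clearance_m=0.3, localization_ok=True))   # FAIL_SAFE
```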
Collaborative filtering is the de facto standard for analyzing users' activities and building recommendation systems for items. In this work, we develop Sliced Anti-symmetric Decomposition (SAD), a new model for collaborative filtering based on implicit feedback. In contrast to traditional techniques, where latent representations of users (user vectors) and items (item vectors) are estimated, SAD introduces one additional latent vector for each item, using a novel three-way tensor view of user-item interactions. This new vector extends user-item preferences calculated by standard dot products to general inner products, producing interactions between items when evaluating their relative preferences. SAD reduces to state-of-the-art (SOTA) collaborative filtering models when the new vector collapses to 1, while in this paper we allow its value to be estimated from data. Allowing the values of the new item vector to differ from 1 has profound implications: it suggests users may have nonlinear mental models when evaluating items, allowing the existence of cycles in pairwise comparisons. We demonstrate the efficiency of SAD in both simulated and real-world datasets containing over 1M user-item interactions. Compared with seven SOTA collaborative filtering models with implicit feedback, SAD produces the most consistent personalized preferences while maintaining top-level accuracy in personalized recommendations. We release the model and inference algorithms in a Python library at https://github.com/apple/ml-sad.
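Since the abstract describes the extra item vector only informally, the following is a speculative sketch of how such a vector could turn the usual dot-product preference into an anti-symmetric pairwise score that reduces to the standard model when the vector is all ones; it is a guess at the general idea, not SAD's actual parameterization.

```python
# Speculative sketch only: an extra per-item vector Tau skews the user-item
# inner product into an anti-symmetric pairwise score. Not SAD's actual model.
import numpy as np

rng = np.random.default_rng(0)
K, n_items = 8, 5
x_u = rng.normal(size=K)                 # user latent vector
A = rng.normal(size=(n_items, K))        # item latent vectors
Tau = np.ones((n_items, K)) + 0.5 * rng.normal(size=(n_items, K))  # extra item vectors

def rel_pref(u, i, j, A, Tau):
    """Anti-symmetric preference of item i over item j for user u."""
    return np.sum(u * (A[i] * Tau[j] - A[j] * Tau[i]))

print(rel_pref(x_u, 0, 1, A, Tau))                 # general inner-product score
print(rel_pref(x_u, 0, 1, A, np.ones_like(Tau)))   # Tau == 1: reduces to x_u.(a_0 - a_1)
# With Tau != 1 the pairwise relation need not be transitive, so preference
# cycles of the kind mentioned in the abstract can appear.
```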
Community detection is a classic problem in network science with extensive applications in various fields. Among numerous approaches, the most common method is modularity maximization. Despite their design philosophy and wide adoption, heuristic modularity maximization algorithms rarely return an optimal partition or anything close to one. We propose a specialized algorithm, Bayan, which returns partitions with a guarantee of either optimality or proximity to an optimal partition. At the core of the Bayan algorithm is a branch-and-cut scheme that solves an integer programming formulation of the modularity maximization problem to optimality or approximates it within a factor. We compare Bayan against 30 alternative community detection methods using structurally diverse synthetic and real networks. Our results demonstrate Bayan's distinctive accuracy and stability in retrieving ground-truth communities of standard benchmark graphs. Bayan is several times faster than open-source and commercial solvers for modularity maximization, making it capable of finding optimal partitions for instances that cannot be optimized by any other existing method. Overall, our assessments point to Bayan as a suitable choice for exact maximization of modularity in real networks with up to 3000 edges (in their largest connected component) and for approximating maximum modularity in larger instances on ordinary computers. A Python implementation of the Bayan algorithm (the bayanpy library) is publicly available through the package installer for Python (pip).
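For reference, the modularity objective that Bayan's integer program maximizes is, in its standard form,

$$
Q = \frac{1}{2m}\sum_{i,j}\left(A_{ij} - \frac{k_i k_j}{2m}\right)\delta(c_i, c_j),
$$

where $A$ is the adjacency matrix, $k_i$ the degree of node $i$, $m$ the number of edges, and $\delta(c_i,c_j)=1$ exactly when nodes $i$ and $j$ are assigned to the same community (the paper's formulation may additionally include a resolution parameter; the abstract does not say).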
Long-range wireless transmission techniques such as LoRa are preferred candidates for a substantial class of IoT applications, as they avoid the complexity of multi-hop wireless forwarding. The existing network solutions for LoRa, however, are not suitable for peer-to-peer communication, which is a key requirement for many IoT applications. In this work, we propose a networking system, 6LoRa, that enables IPv6 communication over LoRa. We present a full-stack system implementation on RIOT OS and evaluate the system on a real testbed using realistic application scenarios with CoAP. Our findings confirm that our approach outperforms existing solutions in terms of transmission delay and packet reception ratio at comparable energy consumption.
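The paper's stack is implemented on RIOT OS; purely to illustrate the kind of CoAP request/response traffic used in such an evaluation, a desktop-side client written with the Python aiocoap library (placeholder IPv6 address and resource path) could look like this:

```python
# Illustration only: a CoAP GET issued from a Python client using aiocoap.
# The IPv6 address and resource path are placeholders, not from the paper;
# the paper's own stack runs on RIOT OS, not in Python.
import asyncio
from aiocoap import Context, Message, GET

async def main():
    ctx = await Context.create_client_context()
    request = Message(code=GET, uri="coap://[2001:db8::1]/sensors/temperature")
    response = await ctx.request(request).response
    print(f"{response.code}: {response.payload.decode()}")

if __name__ == "__main__":
    asyncio.run(main())
```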
Advancements in sensor technology, artificial intelligence (AI), and augmented reality (AR) have unlocked opportunities across various domains. AR and large language models like GPT have witnessed substantial progress and are increasingly being employed in diverse fields. One such promising application is in operations and maintenance (O&M). O&M tasks often involve complex procedures and sequences that can be challenging to memorize and execute correctly, particularly for novices or under high-stress situations. By marrying the advantages of superimposing virtual objects onto the physical world with GPT's ability to generate human-like text, we can revolutionize O&M operations. This study introduces a system that combines AR, Optical Character Recognition (OCR), and the GPT language model to optimize user performance while offering trustworthy interactions and alleviating workload in O&M tasks. The system provides an interactive virtual environment controlled by the Unity game engine, facilitating seamless interaction between virtual and physical realities. A case study (N=15) is conducted to illustrate the findings and answer the research questions. The results indicate that users can complete similarly challenging tasks in less time using our proposed AR and AI system. Moreover, the collected data suggest a reduction in cognitive load and an increase in trust when executing the same operations using the AR and AI system.
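As a rough sketch of the OCR-to-GPT step such a system performs (the library choices, model name, and prompt below are assumptions for illustration; the actual system is built around Unity and is not detailed in the abstract):

```python
# Illustrative sketch only: read text from a camera snapshot with OCR, then ask
# a GPT model for the next maintenance step. Libraries and prompt are assumed.
import pytesseract
from PIL import Image
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_next_step(snapshot_path: str, task: str) -> str:
    # 1) Extract label/panel text from the snapshot via OCR.
    label_text = pytesseract.image_to_string(Image.open(snapshot_path))
    # 2) Ask the language model for the next step, grounded in the OCR text.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": "You are an assistant for maintenance procedures."},
            {"role": "user", "content": f"Task: {task}\nPanel text read by OCR:\n{label_text}\n"
                                        "What is the next step the technician should take?"},
        ],
    )
    return reply.choices[0].message.content

# Example (hypothetical inputs):
# print(describe_next_step("panel.jpg", "Reset the hydraulic pump"))
```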
User selection has become crucial for decreasing the communication costs of federated learning (FL) over wireless networks. However, centralized user selection causes additional system complexity. This study proposes a network-intrinsic approach to distributed user selection that leverages the radio resource competition mechanism in random access. Taking the carrier-sense multiple access (CSMA) mechanism as an example of random access, we manipulate the contention window (CW) size to prioritize certain users for obtaining radio resources in each round of training. Training data bias is used as a target scenario for FL with user selection. Prioritization is based on the distance between the newly trained local model and the global model of the previous round. To avoid excessive contributions by certain users, a counting mechanism is used to ensure fairness. Simulations with various datasets demonstrate that this method can rapidly achieve convergence similar to that of the centralized user selection approach.
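To make the prioritization idea concrete, here is a minimal sketch that maps a client's model divergence to a CSMA contention-window size and caps repeated winners for fairness; the constants and the exact mapping are invented and may differ from the paper's scheme.

```python
# Illustrative sketch only: clients whose local model diverged more from the
# global model get a smaller contention window (higher channel priority),
# with a simple counting rule to keep frequent winners from dominating.
import numpy as np

CW_MIN, CW_MAX = 15, 1023     # 802.11-style contention-window bounds (assumed)
MAX_WINS = 3                  # fairness cap on consecutive selections (assumed)

def contention_window(local_w, global_w, wins, all_dists):
    """Smaller CW for clients whose local model moved further from the global one."""
    if wins >= MAX_WINS:
        return CW_MAX                              # back off heavy contributors
    d = np.linalg.norm(local_w - global_w)
    rank = np.mean(np.asarray(all_dists) <= d)     # in (0, 1]; 1 = largest divergence
    return int(CW_MAX - rank * (CW_MAX - CW_MIN))

rng = np.random.default_rng(1)
global_w = rng.normal(size=100)
locals_w = [global_w + rng.normal(scale=s, size=100) for s in (0.1, 0.5, 1.0)]
dists = [np.linalg.norm(w - global_w) for w in locals_w]
for k, w in enumerate(locals_w):
    print(k, contention_window(w, global_w, wins=0, all_dists=dists))
```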
We employ pressure point analysis and roofline modeling to identify performance bottlenecks and determine an upper bound on the performance of the Canonical Polyadic Alternating Poisson Regression Multiplicative Update (CP-APR MU) algorithm in the SparTen software library. Our analyses reveal that a particular matrix computation, $\Phi^{(n)}$, is the critical performance bottleneck in the SparTen CP-APR MU implementation. Moreover, we find that atomic operations are not a critical bottleneck, while higher cache reuse can provide a non-trivial performance improvement. We also utilize grid search on the Kokkos library parallel policy parameters to achieve a 2.25x average speedup over the SparTen default for the $\Phi^{(n)}$ computation on CPU and a 1.70x speedup on GPU. We conclude our investigations by comparing Kokkos implementations of the STREAM benchmark and the matricized tensor times Khatri-Rao product (MTTKRP) benchmark from the Parallel Sparse Tensor Algorithm (PASTA) benchmark suite to implementations using vendor libraries. We show that with a single implementation, Kokkos achieves performance comparable to hand-tuned code for fundamental operations that make up tensor decomposition kernels on a wide range of CPU and GPU systems. Overall, we conclude that Kokkos demonstrates good performance portability for simple data-intensive operations but requires tuning for algorithms with more complex dependencies and data access patterns.
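For context, the roofline bound referenced above caps attainable throughput at

$$
P_{\text{attainable}} = \min\left(P_{\text{peak}},\; B_{\text{mem}} \times I\right),
$$

where $P_{\text{peak}}$ is the peak compute throughput, $B_{\text{mem}}$ the peak memory bandwidth, and $I$ the arithmetic intensity (operations per byte moved); kernels whose intensity falls below the ridge point $P_{\text{peak}}/B_{\text{mem}}$ are bandwidth-bound, which is consistent with the finding that higher cache reuse (raising $I$) helps more than reducing atomics for the $\Phi^{(n)}$ computation.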