Cyber-physical systems (CPSs) are now widely deployed in many industrial domains, e.g., manufacturing systems and autonomous vehicles. To further enhance the capability and applicability of CPSs, a recent trend in both academia and industry is to employ learning-based AI controllers in the control process, giving rise to an emerging class of AI-enabled cyber-physical systems (AI-CPSs). Although such AI-CPSs can deliver clear performance gains on key industrial indicators, the stochastic exploration behavior of these AI techniques and the lack of systematic explanations for their decisions introduce uncertainty and safety risks into the controlled system, creating an urgent need for effective safety analysis techniques for AI-CPSs. In this work, we propose Mosaic, a model-based safety analysis framework for AI-CPSs. Mosaic first constructs a Markov decision process (MDP) model as an abstraction that characterizes the behavior of the original AI-CPS. Based on the derived abstract model, safety analysis is then performed in two ways: online safety monitoring and offline model-guided falsification. We evaluate Mosaic on diverse, representative industry-level AI-CPSs; the results demonstrate that Mosaic provides effective safety monitoring for AI-CPSs and outperforms state-of-the-art falsification techniques, providing a basis for advanced safety analysis of AI-CPSs.
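To make the abstraction-and-monitoring idea concrete, the sketch below shows one minimal way a discrete MDP abstraction and an online risk monitor could be built from traces; this is our own illustrative assumption, not Mosaic's actual implementation, and the bin-based state abstraction, function names, horizon, and risk threshold are hypothetical.

```python
# Minimal sketch (illustrative; not Mosaic's actual implementation): build a
# discrete MDP abstraction from simulation traces, then monitor the AI
# controller online by estimating the probability of reaching an unsafe
# abstract state within a short horizon.
import numpy as np
from collections import defaultdict

def abstract_state(obs, bins):
    """Map a continuous observation to a discrete abstract state (tuple of bin indices)."""
    return tuple(int(np.digitize(x, b)) for x, b in zip(obs, bins))

def build_abstraction(traces, bins):
    """Estimate abstract transition probabilities from (obs, action, next_obs) traces."""
    counts = defaultdict(lambda: defaultdict(int))
    for obs, act, nxt in traces:
        counts[(abstract_state(obs, bins), act)][abstract_state(nxt, bins)] += 1
    probs = {}
    for sa, nxts in counts.items():
        total = sum(nxts.values())
        probs[sa] = {s2: c / total for s2, c in nxts.items()}
    return probs

def risk(probs, state, action, unsafe, horizon=3):
    """Worst-case probability of reaching an unsafe abstract state within `horizon` steps."""
    if state in unsafe:
        return 1.0
    if horizon == 0:
        return 0.0
    total = 0.0
    for nxt, p in probs.get((state, action), {}).items():
        follow = [risk(probs, nxt, a, unsafe, horizon - 1)
                  for (s, a) in probs if s == nxt] or [0.0]
        total += p * max(follow)  # pessimistic over follow-up actions
    return total

def monitor(probs, bins, unsafe, obs, action, threshold=0.2):
    """Online check: flag the proposed action if its estimated risk is too high."""
    return risk(probs, abstract_state(obs, bins), action, unsafe) > threshold
```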
Generative Artificial Intelligence (GenAI) has emerged as a powerful technology capable of autonomously producing highly realistic content in various domains, such as text, images, audio, and videos. With its potential for positive applications in creative arts, content generation, virtual assistants, and data synthesis, GenAI has garnered significant attention and adoption. However, its growing use also raises concerns about misuse for crafting convincing phishing emails, generating disinformation through deepfake videos, and spreading misinformation via authentic-looking social media posts, posing a new set of challenges and risks in the realm of cybersecurity. To combat the threats posed by GenAI, we propose leveraging the Cyber Kill Chain (CKC), which describes the lifecycle of cyberattacks, as a foundational model for cyber defense. This paper provides a comprehensive analysis of the risk areas introduced by the offensive use of GenAI techniques in each phase of the CKC framework. We also analyze the strategies employed by threat actors and examine how they are applied across different phases of the CKC, highlighting the implications for cyber defense. Additionally, we propose GenAI-enabled defense strategies that are both attack-aware and adaptive. These strategies encompass techniques such as detection, deception, and adversarial training, among others, aiming to effectively mitigate the risks posed by GenAI-induced cyber threats.
Collaborative robots must simultaneously be safe enough to operate in close proximity to human operators and powerful enough to assist users in industrial tasks such as lifting heavy equipment. The safety requirement necessitates that collaborative robots be designed with low-powered actuators, yet some industrial tasks may require high payload capacity and/or long reach. For collaborative robot designs to be successful, they must reconcile these conflicting design requirements. One promising strategy for navigating this tradeoff is the use of static balancing mechanisms to offset the robot's self-weight, thus enabling the selection of lower-powered actuators. In this paper, we introduce a novel two-degree-of-freedom static balancing mechanism based on spring-loaded, wire-wrapped cams. We also present an optimization-based cam design method that guarantees the cams stay convex, ensures the springs stay below their extension limits, and minimizes sensitivity to unmodeled deviations from the nominal spring constant. Additionally, we present a model of the effect of friction between the wire and the cam. Lastly, we show experimentally that the torque generated by the cam mechanism matches the torque predicted by our modeling approach. Our results also suggest that the effects of wire-cam friction are significant for non-circular cams.
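For intuition, a simplified, frictionless balance condition for such a mechanism can be written as follows; this is our own illustrative formulation with generic symbols (spring constant k, link mass m, distance to the center of mass ℓ), not the paper's exact model, which additionally accounts for wire-cam friction and the full cam geometry.

```latex
% Simplified, frictionless static-balancing condition (illustrative only).
\begin{align}
  \tau_{\text{spring}}(\theta) &= k\, x(\theta)\, r(\theta), &
  \tau_{\text{gravity}}(\theta) &= m g \ell \sin\theta, \\
  \text{design goal:}\quad
  k\, x(\theta)\, r(\theta) &\approx m g \ell \sin\theta
  \quad \text{for all joint angles } \theta,
\end{align}
% where x(\theta) is the spring extension set by the wire wrapped on the cam
% and r(\theta) is the perpendicular distance from the joint axis to the wire.
```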
In today's world, many technologically advanced countries have realized that real power lies not in physical strength but in educated minds. As a result, countries are restructuring their education systems to meet the demands of technology. As a country in the midst of these developments, we cannot remain indifferent to this transformation in education. In the Information Age of the 21st century, rapid access to information is crucial for the development of individuals and societies. To take our place among the knowledge societies in a world moving rapidly toward globalization, we must closely follow technological innovations and keep pace with their requirements. This can be achieved by providing learning opportunities to anyone who wishes to pursue education in their area of interest. This study focuses on the advantages and disadvantages of internet-based learning compared to traditional teaching methods, the importance of computer usage in internet-based learning, negative factors affecting internet-based learning, and recommendations for addressing these issues. In today's world, it is impossible to talk about education without technology or technology without education.
Autonomous driving systems have developed significantly in recent years thanks to advances in machine learning-enabled sensing and decision-making algorithms. One critical challenge for their large-scale deployment in the real world is safety evaluation. Most existing driving systems are still trained and evaluated on naturalistic scenarios collected from daily life or on heuristically generated adversarial ones. However, because collisions are extremely rare relative to the large number of vehicles on the road, safety-critical scenarios are scarce in collected real-world data. Thus, methods that artificially generate scenarios become crucial for measuring risk and reducing evaluation cost. In this survey, we focus on algorithms for safety-critical scenario generation in autonomous driving. We first provide a comprehensive taxonomy of existing algorithms, dividing them into three categories: data-driven generation, adversarial generation, and knowledge-based generation. Then, we discuss useful tools for scenario generation, including simulation platforms and packages. Finally, we extend our discussion to five main challenges of current work (fidelity, efficiency, diversity, transferability, and controllability) and the research opportunities they open up.
Most autonomous navigation systems assume that wheeled robots are rigid bodies and that their 2D planar workspaces can be divided into free space and obstacles. However, recent wheeled mobility research, which shows that wheeled platforms can move over vertically challenging terrain (e.g., rocky outcroppings, rugged boulders, and fallen tree trunks), invalidates both assumptions. Navigating off-road vehicle chassis with long suspension travel and low tire pressure in places where the boundary between obstacles and free space is blurry requires precise 3D modeling of the interaction between the chassis and the terrain, which is complicated by suspension and tire deformation, varying tire-terrain friction, vehicle weight distribution and momentum, and other factors. In this paper, we present a learning approach that models wheeled mobility in terms of vehicle-terrain forward dynamics and plans feasible, stable, and efficient motion to drive over vertically challenging terrain without rolling over or getting stuck. We present physical experiments on two wheeled robots and show that planning with our learned model can achieve up to a 60% improvement in navigation success rate and a 46% reduction in unstable chassis roll and pitch angles.
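As a rough illustration of what such a learned forward-dynamics model and planner could look like, the sketch below pairs a small neural dynamics model with a sampling-based action selector that rejects rollouts exceeding roll/pitch limits; the architecture, dimensions, and thresholds are our own assumptions, not the paper's.

```python
# Illustrative sketch (assumed architecture, not the paper's): a learned
# forward-dynamics model f(pose, action, terrain patch) -> next pose, used
# inside a simple planner that rejects unstable candidate actions.
import torch
import torch.nn as nn

class ForwardDynamics(nn.Module):
    def __init__(self, pose_dim=6, action_dim=2, patch_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + action_dim + patch_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, pose_dim),  # predicted change in pose (x, y, z, roll, pitch, yaw)
        )

    def forward(self, pose, action, terrain_patch):
        delta = self.net(torch.cat([pose, action, terrain_patch], dim=-1))
        return pose + delta

def plan(model, pose, terrain_patch, candidate_actions, max_roll=0.4, max_pitch=0.4):
    """Pick the candidate action whose predicted next pose stays within roll/pitch
    limits and makes the most forward progress (a crude stand-in for the planner)."""
    best, best_progress = None, -float("inf")
    with torch.no_grad():
        for a in candidate_actions:
            nxt = model(pose, a, terrain_patch)
            roll, pitch = nxt[..., 3].abs(), nxt[..., 4].abs()
            if roll < max_roll and pitch < max_pitch and nxt[..., 0] > best_progress:
                best, best_progress = a, nxt[..., 0].item()
    return best
```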
Prediction-oriented machine learning is becoming increasingly valuable to organizations, as it can drive applications in crucial business areas. However, decision-makers across various industries remain largely reluctant to employ applications based on modern machine learning algorithms. We ascribe this reluctance to the widely held view of advanced machine learning algorithms as "black boxes" whose complexity does not allow the factors driving a system's output to be uncovered. To help overcome this adoption barrier, we argue that research in information systems should devote more attention to the design of prototypical prediction-oriented machine learning applications (i.e., artifacts) whose predictions can be explained to human decision-makers. Despite the recent emergence of a variety of tools that facilitate the development of such artifacts, there has so far been little research on how to develop them, a gap we attribute to the lack of methodological guidance for creating these artifacts. For this reason, we develop a methodology that unifies methodological knowledge from design science research and predictive analytics with state-of-the-art approaches to explainable artificial intelligence. Moreover, we showcase the methodology using the example of price prediction in the sharing economy (i.e., on Airbnb).
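As a toy illustration of the kind of explainable prediction artifact such a methodology targets, the sketch below trains a tree-based price model and attaches per-prediction SHAP explanations; the feature names and setup are hypothetical and this is not the paper's Airbnb implementation.

```python
# Illustrative sketch (our own example, not the paper's artifact): a tree-based
# price model for listings plus SHAP explanations of individual predictions.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical listing features; the paper's Airbnb case study will differ.
FEATURES = ["accommodates", "bedrooms", "review_score", "distance_to_center_km"]

def train_and_explain(df: pd.DataFrame):
    X, y = df[FEATURES], df["price"]
    model = GradientBoostingRegressor().fit(X, y)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # per-feature contribution to each predicted price
    return model, shap_values
```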
Reed-Solomon (RS) codes have been increasingly adopted by distributed storage systems in place of replication, because they provide the same level of availability with much lower storage overhead. However, a key drawback of RS-coded distributed storage systems is the poor latency of degraded reads, which can be triggered by data failures or hot spots and are not rare in production environments. To address this issue, we propose a novel parallel reconstruction solution called APLS. APLS leverages all surviving source nodes to send the data needed by degraded reads and chooses lightly loaded starter nodes to receive the reconstructed data of those degraded reads, thereby reducing the latency of degraded reads. We conduct prototype-based experiments to compare APLS with ECPipe, the state-of-the-art solution for improving degraded-read latency. The experimental results demonstrate that APLS effectively reduces latency, particularly under heavy or medium workloads.
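To illustrate the scheduling idea at a high level (assumed details, not APLS's actual protocol), the sketch below picks a lightly loaded starter node and has it aggregate contributions sent in parallel by the surviving nodes, with XOR standing in for the real Reed-Solomon linear combination over GF(2^8).

```python
# Illustrative sketch (assumed details, not APLS's actual protocol): a degraded
# read is served by choosing a lightly loaded "starter" node to receive and
# combine contributions that all surviving source nodes send in parallel.
from functools import reduce

def pick_starter(nodes, load):
    """Choose the surviving node with the lowest current load as the aggregation point."""
    return min(nodes, key=lambda n: load[n])

def reconstruct(contributions):
    """Combine per-node contributions into the lost block (XOR as a stand-in
    for the Reed-Solomon linear combination)."""
    return reduce(lambda x, y: bytes(a ^ b for a, b in zip(x, y)), contributions)

# Example: three surviving nodes send equal-length contributions concurrently;
# the starter node aggregates them and returns the reconstructed block.
load = {"n1": 0.7, "n2": 0.2, "n3": 0.5}
starter = pick_starter(load.keys(), load)          # -> "n2", the least-loaded node
block = reconstruct([b"\x01\x02", b"\x10\x20", b"\x03\x04"])
```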
As a subjective metric for evaluating the quality of synthesized speech, the mean opinion score (MOS) usually requires multiple annotators to score the same speech sample, which is labor-intensive and time-consuming. An automatic MOS prediction model can significantly reduce this cost. Previous works struggle to rank speech quality accurately when MOS scores are close; in practical applications, however, correctly ranking the quality of synthesis systems or sentences matters more than simply predicting MOS scores. Moreover, since each annotator scores multiple audio samples during annotation, each score is likely a relative value anchored to the first one or few samples the annotator rated. Motivated by these two observations, we propose a general framework for MOS prediction based on pair comparison (MOSPC), and we use the C-Mixup algorithm to enhance its generalization performance. Experiments on BVCC and VCC2018 show that our framework outperforms the baselines on most correlation coefficient metrics, especially on KTAU, which relates to quality ranking. Our framework also surpasses the strong baseline in ranking accuracy on each fine-grained segment. These results indicate that our framework improves the ranking accuracy of speech quality.
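As a minimal sketch of pair-comparison training for MOS prediction (our own illustration; MOSPC's actual model architecture and its use of C-Mixup differ in detail), a scorer can be trained so that, for each annotated pair, the utterance with the higher human MOS receives the higher predicted score.

```python
# Minimal sketch of pair-comparison training for MOS prediction (illustrative;
# not MOSPC's exact model or C-Mixup procedure).
import torch
import torch.nn as nn

class MOSScorer(nn.Module):
    def __init__(self, feat_dim=768):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, feats):            # feats: (batch, feat_dim) utterance embeddings
        return self.head(feats).squeeze(-1)

def pair_loss(model, feats_a, feats_b, mos_a, mos_b, margin=0.1):
    """Margin ranking loss on pairs: the utterance with the higher human MOS
    should receive the higher predicted score (exact ties contribute no gradient)."""
    target = torch.sign(mos_a - mos_b)   # +1 if a is better, -1 if b is better
    ranking = nn.MarginRankingLoss(margin=margin)
    return ranking(model(feats_a), model(feats_b), target)
```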
This paper examines the transformative role of Large Language Models (LLMs) in education and their potential as learning tools, despite their inherent risks and limitations. The authors propose seven approaches for utilizing AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student, each with distinct pedagogical benefits and risks. The aim is to help students learn with and about AI, with practical strategies designed to mitigate risks such as complacency about the AI's output, its errors, and its biases. These strategies promote active oversight, critical assessment of AI outputs, and complementarity between AI's capabilities and the students' unique insights. By challenging students to remain the "human in the loop," the authors aim to enhance learning outcomes while ensuring that AI serves as a supportive tool rather than a replacement. The proposed framework offers a guide for educators navigating the integration of AI-assisted learning in classrooms.
Online platforms employ recommendation systems to enhance customer engagement and drive revenue. However, in a multi-sided platform where the platform interacts with diverse stakeholders such as sellers (items) and customers (users), each with their own desired outcomes, finding an appropriate middle ground becomes a complex operational challenge. In this work, we investigate the "price of fairness", which captures the platform's potential compromises when balancing the interests of different stakeholders. Motivated by this, we propose a fair recommendation framework where the platform maximizes its revenue while interpolating between item and user fairness constraints. We further examine the fair recommendation problem in a more realistic yet challenging online setting, where the platform lacks knowledge of user preferences and can only observe binary purchase decisions. To address this, we design a low-regret online optimization algorithm that preserves the platform's revenue while achieving fairness for both items and users. Finally, we demonstrate the effectiveness of our framework and proposed method via a case study on MovieLens data.
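For concreteness, one assumed way to formalize the offline version of this revenue-versus-fairness trade-off is the small convex program below; this is our own illustrative formulation with hypothetical constraint forms and parameters, not the paper's exact program or its online algorithm.

```python
# Illustrative sketch (assumed formulation, not the paper's): choose a stochastic
# recommendation policy x[u, i] that maximizes expected revenue while
# interpolating between item-side exposure floors and user-side utility floors.
import cvxpy as cp
import numpy as np

def fair_recommendation(revenue, utility, alpha=0.5, item_floor=0.1, user_floor=0.5):
    """revenue[u, i]: expected platform revenue; utility[u, i]: user utility.
    alpha in [0, 1] shifts weight between item fairness and user fairness."""
    n_users, n_items = revenue.shape
    x = cp.Variable((n_users, n_items), nonneg=True)
    constraints = [cp.sum(x, axis=1) == 1]  # one recommendation slot per user
    # Item fairness: every item receives at least a fraction of average exposure.
    constraints += [cp.sum(x, axis=0) >= alpha * item_floor * n_users / n_items]
    # User fairness: every user receives at least a fraction of their best utility.
    best = utility.max(axis=1)
    constraints += [cp.sum(cp.multiply(utility, x), axis=1) >= (1 - alpha) * user_floor * best]
    prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(revenue, x))), constraints)
    prob.solve()
    return x.value
```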