The increased use of Unmanned Aerial Vehicles (UAVs) in numerous domains will result in high traffic densities in the low-altitude airspace. Consequently, UAV Traffic Management (UTM) systems that allow the integration of UAVs into the low-altitude airspace are gaining a lot of momentum. Furthermore, the 5th generation of mobile networks (5G) will most likely provide the underlying support for UTM systems by providing connectivity to UAVs, enabling control, tracking, and communication with remote applications and services. However, UAVs may need to communicate with services that have different communication Quality of Service (QoS) requirements, ranging from best-effort services to Ultra-Reliable Low-Latency Communications (URLLC) services. Indeed, 5G can ensure efficient QoS enhancements using new technologies, such as network slicing and Multi-access Edge Computing (MEC). In this context, Network Functions Virtualization (NFV) is considered one of the pillars of 5G systems, as it provides QoS-aware Management and Orchestration (MANO) of softwarized services across cloud and MEC platforms. The MANO process for UAV services can be further enhanced using the information provided by the UTM system, such as the UAVs' flight plans. In this paper, we propose an extended framework for the management and orchestration of UAV services in a MEC-NFV environment by combining the functionalities of the MEC-NFV management and orchestration framework with those of a UTM system. Moreover, we propose an Integer Linear Programming (ILP) model of the placement scheme of our framework and evaluate its performance.
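As a rough illustration of what such a placement model can look like, the following is a minimal sketch of a QoS-aware placement ILP solved with the PuLP library in Python; the services, hosts, latency figures, and capacity numbers are all hypothetical and deliberately tiny, and the formulation is only in the spirit of the paper's scheme, not its actual model.

```python
# Minimal, hypothetical sketch of a QoS-aware service placement ILP.
# All data below (hosts, services, latencies, capacities) is illustrative only.
import pulp

services = ["uav_ctrl", "video_relay"]          # UAV services to place
hosts = ["mec_1", "mec_2", "cloud"]             # candidate MEC/cloud hosts
latency = {("uav_ctrl", "mec_1"): 5, ("uav_ctrl", "mec_2"): 8, ("uav_ctrl", "cloud"): 40,
           ("video_relay", "mec_1"): 6, ("video_relay", "mec_2"): 7, ("video_relay", "cloud"): 35}
cpu_demand = {"uav_ctrl": 2, "video_relay": 4}
cpu_capacity = {"mec_1": 4, "mec_2": 4, "cloud": 100}
max_latency = {"uav_ctrl": 10, "video_relay": 50}   # URLLC vs. best-effort bounds

prob = pulp.LpProblem("uav_service_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("place", (services, hosts), cat="Binary")

# Objective: minimize total experienced latency.
prob += pulp.lpSum(latency[s, h] * x[s][h] for s in services for h in hosts)

for s in services:
    prob += pulp.lpSum(x[s][h] for h in hosts) == 1                                   # place each service once
    prob += pulp.lpSum(latency[s, h] * x[s][h] for h in hosts) <= max_latency[s]      # per-service QoS bound
for h in hosts:
    prob += pulp.lpSum(cpu_demand[s] * x[s][h] for s in services) <= cpu_capacity[h]  # host capacity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for s in services:
    for h in hosts:
        if pulp.value(x[s][h]) == 1:
            print(f"{s} -> {h}")
```

In this toy instance the URLLC-style latency bound forces the control service onto a MEC host, which is the kind of decision UTM-provided context (e.g., flight plans) could inform.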
Blockchain-based smart contracts lack privacy, since the contract state and instruction code are exposed to the public. Combining smart-contract execution with Trusted Execution Environments (TEEs) provides an efficient solution, called TEE-assisted smart contracts, for protecting the confidentiality of contract states. However, the combination approaches vary widely, and a systematic study is absent. Newly released systems may fail to draw upon the experience learned from existing protocols, for example by repeating known design mistakes or applying TEE technology in insecure ways. In this paper, we first investigate and categorize the existing systems into two types: layer-one solutions and layer-two solutions. Then, we establish an analysis framework to capture their commonalities, covering the desired properties (for contract services), threat models, and security considerations (for underlying systems). Based on our taxonomy, we identify their ideal functionalities and uncover the fundamental flaws in each specification design and the reasons behind them. We believe that this work provides a guide for the development of TEE-assisted smart contracts, as well as a framework to evaluate future TEE-assisted confidential contract systems.
Emerging distributed cloud architectures, e.g., fog and mobile edge computing, are playing an increasingly important role in the efficient delivery of real-time stream-processing applications such as augmented reality, multiplayer gaming, and industrial automation. While such applications require processed streams to be shared and simultaneously consumed by multiple users/devices, existing technologies lack efficient mechanisms to deal with their inherent multicast nature, leading to unnecessary traffic redundancy and network congestion. In this paper, we establish a unified framework for distributed cloud network control with generalized (mixed-cast) traffic flows that allows optimizing the distributed execution of the required packet processing, forwarding, and replication operations. We first characterize the enlarged multicast network stability region under the new control framework (with respect to its unicast counterpart). We then design a novel queuing system that allows scheduling data packets according to their current destination sets, and leverage Lyapunov drift-plus-penalty theory to develop the first fully decentralized, throughput- and cost-optimal algorithm for multicast cloud network flow control. Numerical experiments validate analytical results and demonstrate the performance gain of the proposed design over existing cloud network control techniques.
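For readers unfamiliar with the technique, the Lyapunov drift-plus-penalty method referenced above has the following generic form (standard textbook notation, not the paper's exact queuing variables or cost model):

```latex
% Generic drift-plus-penalty setup: Q_i(t) are queue backlogs, p(t) is the
% per-slot operational cost, and V trades off cost against backlog/delay.
\[
  L(\boldsymbol{Q}(t)) \triangleq \tfrac{1}{2}\sum_{i} Q_i(t)^2, \qquad
  \Delta(\boldsymbol{Q}(t)) \triangleq \mathbb{E}\!\left[\, L(\boldsymbol{Q}(t+1)) - L(\boldsymbol{Q}(t)) \,\middle|\, \boldsymbol{Q}(t) \right],
\]
\[
  \text{and each slot the controller takes the action minimizing a bound on} \quad
  \Delta(\boldsymbol{Q}(t)) + V\,\mathbb{E}\!\left[\, p(t) \,\middle|\, \boldsymbol{Q}(t) \right].
\]
```

Greedily minimizing this bound in every slot is the standard route to time-average cost within O(1/V) of optimum while keeping average backlog, and hence delay, on the order of O(V); the paper applies this machinery to its destination-set-based queuing system for mixed-cast flows.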
Emotion recognition in smart eyewear devices is highly valuable but challenging. One key limitation of previous works is that expression-related information, such as facial or eye images, is treated as the only emotional evidence. However, emotional status is not isolated; it is tightly associated with people's visual perceptions, especially sentimental ones. Yet little work has examined such associations to better illustrate the cause of different emotions. In this paper, we study the emotionship analysis problem in eyewear systems, an ambitious task that requires not only classifying the user's emotions but also semantically understanding their potential causes. To this end, we devise EMOShip, a deep-learning-based eyewear system that can automatically detect the wearer's emotional status and simultaneously analyze its associations with semantic-level visual perceptions. Experimental studies with 20 participants demonstrate that, thanks to this emotionship awareness, EMOShip not only achieves superior emotion recognition accuracy over existing methods (80.2% vs. 69.4%) but also provides a valuable understanding of the cause of emotions. Pilot studies with 20 participants further motivate the potential use of EMOShip to empower emotion-aware applications, such as emotionship self-reflection and emotionship life-logging.
Enterprise cloud developers have to build applications that are resilient to failures and interruptions. We advocate for, formalize, implement, and evaluate a simple, albeit effective, fault-tolerant programming model for the cloud based on actors, reliable message delivery, and retry orchestration. Our model simultaneously guarantees (1) that failed actor invocations are retried until success and (2) that a strict happens-before relationship is preserved across failures within each distributed chain of invocations and retries. These guarantees make it possible to productively develop fault-tolerant distributed applications leveraging cloud services, ranging from classic problems of concurrency theory to enterprise applications. Built as a service mesh, our runtime can compose application components written in any programming language and scale with the application. We measure overhead relative to reliable message queues. Using an application inspired by a typical enterprise scenario, we assess fault tolerance and the impact of fault recovery on performance.
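A toy Python sketch of the two guarantees follows; the actor classes, failure probabilities, and blocking retry loop are hypothetical simplifications for illustration, whereas the actual runtime is a language-agnostic service mesh with reliable message delivery.

```python
# Toy illustration of: (1) a failed actor invocation is retried until it succeeds,
# and (2) within a single chain, step n+1 only starts after step n has completed,
# so the happens-before order is preserved across retries. All names are hypothetical.
import random
import time

def reliable_call(actor_method, *args, backoff=0.1, **kwargs):
    """Retry an actor invocation until it succeeds (at-least-once semantics)."""
    while True:
        try:
            return actor_method(*args, **kwargs)
        except Exception as err:
            print(f"invocation failed ({err}); retrying")
            time.sleep(backoff)

class InventoryActor:
    def __init__(self):
        self.stock = {"sku-1": 3}

    def reserve(self, sku):
        if random.random() < 0.5:                 # simulated transient failure
            raise RuntimeError("node crashed")
        self.stock[sku] -= 1
        return f"reserved {sku}"

class PaymentActor:
    def charge(self, amount):
        if random.random() < 0.5:
            raise RuntimeError("timeout")
        return f"charged {amount}"

# A chain of invocations: payment is only attempted after the reservation has
# definitely happened, even if either step needed several retries.
inventory, payment = InventoryActor(), PaymentActor()
print(reliable_call(inventory.reserve, "sku-1"))
print(reliable_call(payment.charge, 42))
```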
We consider M-estimation problems in which the target value is determined using a minimizer of an expected functional of a Lévy process. With discrete observations from the Lévy process, we can produce a "quasi-path" by shuffling increments of the observed path; we call the resulting path a quasi-process. Under a suitable sampling scheme, a quasi-process converges weakly to the true process thanks to the stationarity and independence of the increments. Using this resampling technique, we can estimate objective functionals in the same spirit as Monte Carlo simulation, and the resulting estimate is available as a contrast function. The M-estimator based on these quasi-processes is shown to be consistent and asymptotically normal.
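A minimal numerical sketch of the quasi-process idea is given below; the Brownian-motion-with-drift path and the particular functional are illustrative choices, not the paper's setting.

```python
# Sketch: from one discretely observed Lévy path, shuffle its increments to create
# "quasi-paths", then use them as Monte-Carlo-style replications of an expected
# functional. The process and the functional here are hypothetical examples.
import numpy as np

rng = np.random.default_rng(0)
n, T = 1000, 1.0
dt = T / n

# One observed path of a Lévy process (here: Brownian motion with unit drift).
increments = 1.0 * dt + np.sqrt(dt) * rng.standard_normal(n)

def quasi_path(increments, rng):
    """Shuffle increments; stationarity and independence preserve the path law."""
    return np.cumsum(rng.permutation(increments))

# Estimate E[ max_t X_t ] by averaging the functional over quasi-paths.
B = 500
values = [quasi_path(increments, rng).max() for _ in range(B)]
print("quasi-process estimate of E[max X]:", np.mean(values))
```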
The concept of federated learning (FL) was first proposed by Google in 2016. Since then, FL has been widely studied for its applicability in various fields, owing to its potential to make full use of data without compromising privacy. However, limited by the capacity of wireless data transmission, the deployment of federated learning on mobile devices has made slow progress in practice. The development and commercialization of 5th-generation (5G) mobile networks has shed some light on this. In this paper, we analyze the challenges that existing federated learning schemes face on mobile devices and propose a novel cross-device federated learning framework that utilizes anonymous communication technology and ring signatures to protect the privacy of participants while reducing the computation overhead of the mobile devices participating in FL. In addition, our scheme implements a contribution-based incentive mechanism to encourage mobile users to participate in FL. We also give a case study of autonomous driving. Finally, we present a performance evaluation of the proposed scheme and discuss some open issues in federated learning.
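For context, the sketch below shows only the vanilla federated averaging (FedAvg) step that cross-device FL frameworks build on; it deliberately omits the contributions described above (anonymous communication, ring signatures, and the incentive mechanism), and the model, data, and hyperparameters are hypothetical.

```python
# Vanilla FedAvg sketch: devices compute local updates on private data and the
# server aggregates them, weighted by dataset size. Illustrative only; this is
# NOT the privacy-preserving protocol proposed in the paper.
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """One step of least-squares gradient descent on a device's private data."""
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def fed_avg(updates, sizes):
    """Server aggregates device models, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(5):                        # 5 mobile devices with private data
    X = rng.standard_normal((20, 2))
    devices.append((X, X @ true_w + 0.1 * rng.standard_normal(20)))

w = np.zeros(2)
for _ in range(50):                       # communication rounds
    updates = [local_update(w, d) for d in devices]
    w = fed_avg(updates, [len(d[1]) for d in devices])
print("aggregated model:", w)
```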
Multi-camera vehicle tracking is one of the most complicated tasks in computer vision, as it involves several distinct subtasks, including vehicle detection, tracking, and re-identification. Despite the challenges, multi-camera vehicle tracking has immense potential in transportation applications, including speed, volume, origin-destination (O-D), and routing data generation. Several recent works have addressed the multi-camera tracking problem. However, most of the effort has gone towards improving accuracy on high-quality benchmark datasets while disregarding lower camera resolutions, compression artifacts, and the overwhelming amount of computational power and time needed to carry out this task at the edge, which makes large-scale and real-time deployment prohibitive. Therefore, in this work we shed light on practical issues that should be addressed when designing a multi-camera tracking system that provides actionable and timely insights. Moreover, we propose a real-time city-scale multi-camera vehicle tracking system that compares favorably to computationally intensive alternatives and handles real-world, low-resolution CCTV footage instead of idealized and curated video streams. To show its effectiveness, in addition to integrating it into the Regional Integrated Transportation Information System (RITIS), we participated in the 2021 NVIDIA AI City multi-camera tracking challenge, where our method ranked among the top five performers on the public leaderboard.
Soft robotic grippers facilitate contact-rich manipulation, including robust grasping of varied objects. Yet the beneficial compliance of a soft gripper also results in significant deformation that can make precision manipulation challenging. We present visual pressure estimation & control (VPEC), a method that uses a single RGB image of an unmodified soft gripper from an external camera to directly infer pressure applied to the world by the gripper. We present inference results for a pneumatic gripper and a tendon-actuated gripper making contact with a flat surface. We also show that VPEC enables precision manipulation via closed-loop control of inferred pressure. We present results for a mobile manipulator (Stretch RE1 from Hello Robot) using visual servoing to do the following: achieve target pressures when making contact; follow a spatial pressure trajectory; and grasp small objects, including a microSD card, a washer, a penny, and a pill. Overall, our results show that VPEC enables grippers with high compliance to perform precision manipulation.
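A toy closed-loop sketch of the pressure-servoing idea follows; the image-to-pressure estimator is a stand-in for the learned model used by VPEC, the fake camera and all constants are hypothetical, and the loop only demonstrates the control structure (estimate pressure from an image, then servo toward a target).

```python
# Toy closed-loop pressure control: estimate applied pressure from an RGB image,
# then use a proportional controller to drive the gripper toward a target pressure.
# The estimator and "camera" below are placeholders, not the VPEC model.
import numpy as np

def estimate_pressure(rgb_image):
    """Placeholder for the learned image -> pressure regressor."""
    # VPEC infers pressure from visible gripper deformation; here we fake it
    # from mean image intensity purely to close the control loop.
    return float(rgb_image.mean())

def render_scene(gripper_height):
    """Fake camera: lower gripper height -> more contact -> brighter image."""
    contact = max(0.0, 1.0 - gripper_height)
    return np.full((64, 64, 3), 255.0 * contact)

target_pressure, k_p = 120.0, 1e-3
height = 1.0                                   # gripper starts above the surface
for step in range(30):
    pressure = estimate_pressure(render_scene(height))
    error = target_pressure - pressure
    height -= k_p * error                      # move down if pressure is too low
    if abs(error) < 1.0:
        break
print(f"reached pressure {pressure:.1f} at height {height:.3f} after {step + 1} steps")
```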
Microservices have become the de facto software architecture for cloud-native applications. A contentious architectural decision in microservices is whether to compose them using choreography or orchestration. In choreography, every service works independently, whereas in orchestration there is a controller that coordinates service interactions. This paper makes a case for orchestration. The promise of microservices is that each microservice can be independently developed, deployed, tested, upgraded, and scaled, which makes them suitable for systems running on cloud infrastructures. However, microservice-based systems become complicated due to the complex interactions of various services, concurrent events, failing components, developers' lack of a global view, and configurations of the environment. This makes maintaining and debugging such systems very challenging. We hypothesize that orchestrated services are easier to debug, and to test this we ported TrainTicket, the largest publicly available microservices benchmark, which is implemented using choreography, to Temporal, a fault-oblivious stateful workflow framework. We report our experience porting the code from a traditional choreographed microservice architecture to one orchestrated by Temporal and present our initial findings on the time needed to debug the 22 bugs present in the benchmark. Our findings suggest that the effort of transitioning to an orchestrated approach is worthwhile, as it makes the ported code easier to debug.
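As an illustration of the orchestrated style, the following sketch uses the Temporal Python SDK (temporalio); the workflow and activity names are hypothetical and unrelated to TrainTicket's actual services, and the client/worker setup needed to run it against a Temporal server is omitted.

```python
# Sketch of orchestration with Temporal: the workflow is the central controller,
# calling each microservice as an activity. Temporal persists workflow state, so
# failed steps are retried and the execution history can be replayed when debugging.
from datetime import timedelta
from temporalio import activity, workflow

@activity.defn
async def reserve_ticket(order_id: str) -> str:
    # Would call the ticket-inventory microservice here (hypothetical).
    return f"ticket reserved for {order_id}"

@activity.defn
async def charge_payment(order_id: str) -> str:
    # Would call the payment microservice here (hypothetical).
    return f"payment charged for {order_id}"

@workflow.defn
class BookTripWorkflow:
    @workflow.run
    async def run(self, order_id: str) -> str:
        # The orchestrator owns the control flow: each step is durable,
        # automatically retried on failure, and visible in the workflow history.
        await workflow.execute_activity(
            reserve_ticket, order_id, start_to_close_timeout=timedelta(seconds=30)
        )
        return await workflow.execute_activity(
            charge_payment, order_id, start_to_close_timeout=timedelta(seconds=30)
        )

# Running this requires a Temporal server plus a Worker registered with the
# workflow and activities above (omitted here for brevity).
```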
Along with the massive growth of the Internet from the 1990s until now, various innovative technologies have been created to bring users breathtaking experiences with more virtual interactions in cyberspace. Many virtual environments with thousands of services and applications, from social networks to virtual gaming worlds, have been developed with immersive experiences and digital transformation, but most remain incoherent rather than being integrated into a single platform. In this context, the metaverse, a term formed by combining meta and universe, has been introduced as a shared virtual world that is fueled by many emerging technologies, such as fifth-generation networks and beyond, virtual reality, and artificial intelligence (AI). Among these technologies, AI has shown great importance in processing big data to enhance immersive experiences and enable human-like intelligence of virtual agents. In this survey, we explore the role of AI in the foundation and development of the metaverse. We first deliver a preliminary overview of AI, including machine learning algorithms and deep learning architectures, and its role in the metaverse. We then provide a comprehensive investigation of AI-based methods concerning six technical aspects that hold potential for the metaverse: natural language processing, machine vision, blockchain, networking, digital twins, and neural interfaces. Subsequently, several AI-aided applications, such as healthcare, manufacturing, smart cities, and gaming, are studied with respect to their deployment in virtual worlds. Finally, we conclude with the key contributions of this survey and open some future research directions in AI for the metaverse.