We analyze the use of publish-subscribe protocols in IoT and Fog Computing and the challenges surrounding their security configuration, performance, and qualitative characteristics. Problems with security configuration lead to significant disruptions and high operating costs. Yet these issues can be prevented by selecting the appropriate transmission technology for each configuration, considering the variations in sizing, installation, sensor profile, distribution, security, networking, and locality. This work presents a comparative qualitative and quantitative analysis of diverse configurations, focusing on the Smart Agriculture scenario and specifically the case of fish farming. To this end, we applied a data generation workbench to create datasets of relevant research data and compared the results in terms of performance, resource utilization, security, and resilience. We also provide a qualitative analysis of use-case scenarios for the quantitative data produced. As a contribution, this robust analysis provides a blueprint for decision support for Fog Computing engineers selecting the best protocol to apply in various configurations.
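To make the publish-subscribe pattern concrete, the following is a minimal sketch of an MQTT sensor publisher and subscriber using the paho-mqtt client. The broker address, topic, and payload are illustrative placeholders (and the paho-mqtt 1.x API is assumed); the study itself compares several protocols and configurations rather than this single setup.

```python
# Minimal publish-subscribe sketch with MQTT (paho-mqtt 1.x API assumed).
# Broker host, topic name, and sensor payload are illustrative placeholders,
# not the configurations evaluated in the study.
import json
import time
import paho.mqtt.client as mqtt

BROKER = "broker.example.org"   # placeholder Fog/edge broker
TOPIC = "farm/pond1/oxygen"     # placeholder fish-farming sensor topic

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    print(f"received on {msg.topic}: {reading}")

# Subscriber side: listen for sensor readings on the topic.
subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER, 1883)
subscriber.subscribe(TOPIC, qos=1)
subscriber.loop_start()

# Publisher side: a sensor node reporting one reading.
publisher = mqtt.Client()
publisher.connect(BROKER, 1883)
publisher.publish(TOPIC, json.dumps({"dissolved_oxygen_mg_l": 6.8}), qos=1)

time.sleep(1.0)  # give the broker time to deliver the message
subscriber.loop_stop()
```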
Mobile and Internet of Things (IoT) devices are generating enormous amounts of multi-modal data due to their exponential growth and accessibility. As a result, these data sources must be analyzed directly, in real time, at the network edge rather than relying on the cloud. Significant processing power at the network edge has made it possible to gather data and make decisions before data is sent to the cloud. Moreover, security problems have grown considerably as a result of the rapid expansion of mobile devices, IoT devices, and the many network points connecting them. It is harder than ever to guarantee the privacy of sensitive data, including customer information. This systematic literature review shows that emerging technologies are an effective means of countering attacks and threats to edge computing security.
The paper traces the development of the use of martingale methods in survival analysis from the mid-1970s to the early 1990s. This development was initiated by Aalen's Berkeley PhD thesis in 1975, progressed through the work on estimation of Markov transition probabilities, non-parametric tests and Cox's regression model in the late 1970s and early 1980s, and was consolidated in the early 1990s with the publication of the monographs by Fleming and Harrington (1991) and Andersen, Borgan, Gill and Keiding (1993). The development was made possible by an unusually fast technology transfer of pure mathematical concepts, primarily from French probability, into practical biostatistical methodology, and we attempt to outline some of the personal relationships that helped this happen. We also point out that survival analysis was ready for this development, since the martingale ideas inherent in the deep understanding of temporal development so intrinsic to the French theory of processes were already quite close to the surface in survival analysis.
We provide a framework consisting of tools and metatheorems for the end-to-end verification of security protocols, which bridges the gap between automated protocol verification and code-level proofs. We automatically translate a Tamarin protocol model into a set of I/O specifications expressed in separation logic. Each such specification describes a protocol role's intended I/O behavior against which the role's implementation is then verified. Our soundness result guarantees that the verified implementation inherits all security (trace) properties proved for the Tamarin model. Our framework thus enables us to leverage the substantial body of prior verification work in Tamarin to verify new and existing implementations. The possibility to use any separation logic code verifier provides flexibility regarding the target language. To validate our approach and show that it scales to real-world protocols, we verify a substantial part of the official Go implementation of the WireGuard VPN key exchange protocol.
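To make the soundness guarantee concrete, the following is a schematic statement of the kind of property-inheritance result described above; the notation (traces, models, the property symbol) is ours and not the paper's, whose actual theorem is stated over the Tamarin model's trace semantics.

```latex
% Schematic form of the soundness guarantee (our notation, not the paper's):
% if every I/O trace of the verified implementation is also a trace of the
% Tamarin model, then any trace property proved for the model carries over.
\[
  \mathit{traces}(\mathrm{Impl}) \subseteq \mathit{traces}(\mathrm{Model})
  \quad\Longrightarrow\quad
  \bigl(\mathrm{Model} \models \varphi \;\Rightarrow\; \mathrm{Impl} \models \varphi\bigr)
  \qquad \text{for every trace property } \varphi .
\]
```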
Blockchain is a promising technology to enable distributed and reliable data sharing at the network edge. High security in blockchain is undoubtedly a critical factor for the network to handle important data items. On the other hand, owing to the well-known dilemma in blockchain design, an overemphasis on distributed security leads to poor transaction-processing capability, which limits the application of blockchain in data sharing scenarios with high-throughput and low-latency requirements. To enable demand-oriented distributed services, this paper investigates the relationship between capability and security in blockchain from the perspective of block propagation and the forking problem. First, a Markov chain is introduced to analyze gossip-based block propagation among edge servers, with the aim of deriving the block propagation delay and forking probability. Then, we study the impact of forking on blockchain capability and security metrics, in terms of transaction throughput, confirmation delay, fault tolerance, and the probability of malicious modification. The analytical results show that, by adjusting the block generation time or block size, transaction throughput improves at the expense of fault tolerance, and vice versa. Meanwhile, the decline in security can be offset by adjusting the confirmation threshold, at the cost of increased confirmation delay. This analysis of the capability-security trade-off provides a theoretical guideline for managing blockchain performance based on the requirements of data sharing scenarios.
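As a back-of-envelope illustration of the capability-security trade-off described above, the sketch below uses the common simplification that blocks arrive as a Poisson process and that a competing block generated within the propagation delay causes a fork; this is not the paper's Markov-chain propagation model, and the numeric parameters are assumptions for illustration only.

```python
# Back-of-envelope illustration of the capability-security trade-off.
# NOT the paper's Markov-chain model: blocks are assumed to be generated as a
# Poisson process with mean interval T, and a fork occurs when a competing
# block appears before the current block finishes propagating (delay d).
import math

def fork_probability(propagation_delay_s: float, block_interval_s: float) -> float:
    """P(another block is generated before the current one fully propagates)."""
    return 1.0 - math.exp(-propagation_delay_s / block_interval_s)

def throughput_tps(block_size_bytes: int, tx_size_bytes: int, block_interval_s: float) -> float:
    """Transactions per second if every block is filled with fixed-size transactions."""
    return (block_size_bytes / tx_size_bytes) / block_interval_s

if __name__ == "__main__":
    # Assumed example numbers (not from the paper): 1 MB blocks, 250 B transactions,
    # 5 s propagation delay among edge servers.
    for T in (10.0, 60.0, 600.0):  # candidate block generation intervals in seconds
        p_fork = fork_probability(propagation_delay_s=5.0, block_interval_s=T)
        tps = throughput_tps(block_size_bytes=1_000_000, tx_size_bytes=250, block_interval_s=T)
        print(f"T={T:6.0f}s  throughput={tps:8.1f} tx/s  fork probability={p_fork:.3f}")
```

Shortening the block interval raises throughput but also the forking probability (and hence lowers effective fault tolerance), mirroring the trade-off analyzed in the paper.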
We propose finite-time measures to compute the divergence, the curl, and the velocity gradient tensor of the point-particle velocity for two- and three-dimensional moving particle clouds. To this end, a tessellation of the particle positions is applied to associate a volume with each particle. Considering two subsequent time instants, the dynamics of this volume can then be assessed: the volume change of the tessellation cells yields the divergence of the particle velocity, and the rotation of the cells yields its curl. The helicity of the particle velocity can thus likewise be computed, and the swirling motion of particle clouds can be quantified. We propose a modified version of the Voronoi tessellation that overcomes some drawbacks of the classical Voronoi tessellation. First, we assess the numerical accuracy for randomly distributed particles. We find a strong Pearson correlation between the divergence computed with the modified version and the analytic value, which confirms the validity of the method. Moreover, first-order convergence in space and time of the modified Voronoi-based method is observed in two and three dimensions for randomly distributed particles, which is not the case for the classical Voronoi tessellation. Furthermore, for advecting particles we consider random velocity fields with imposed power-law energy spectra, motivated by turbulence, and we determine the number of particles necessary to guarantee a given precision. Finally, applications to fluid particles advected in three-dimensional fully developed isotropic turbulence show the utility of the approach for real-world applications, quantifying self-organization in particle clouds and their vortical or even swirling motion.
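The volume-change idea can be illustrated with the classical 2D Voronoi tessellation available in SciPy; note that the paper proposes a modified tessellation precisely to fix drawbacks of this naive version, so the sketch below (with an assumed uniform-dilation test field) only conveys the basic finite-time divergence estimate.

```python
# Finite-time divergence estimate from the change of Voronoi cell areas,
# using the CLASSICAL 2D Voronoi tessellation (the paper uses a modified one).
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def cell_areas(points: np.ndarray) -> np.ndarray:
    """Area of each bounded Voronoi cell; unbounded boundary cells get NaN."""
    vor = Voronoi(points)
    areas = np.full(len(points), np.nan)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:
            continue  # unbounded cell: no finite area
        # Voronoi cells are convex, so the convex hull of the cell vertices
        # has the same area as the cell (robust to vertex ordering).
        areas[i] = ConvexHull(vor.vertices[region]).volume  # .volume is area in 2D
    return areas

def divergence_estimate(pos0: np.ndarray, pos1: np.ndarray, dt: float) -> np.ndarray:
    """Finite-time divergence per particle from the relative change of cell area."""
    a0, a1 = cell_areas(pos0), cell_areas(pos1)
    return (a1 - a0) / (a0 * dt)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos0 = rng.random((500, 2))
    dt = 1e-2
    # Assumed test velocity field u(x, y) = (x, y), whose divergence is 2 everywhere.
    pos1 = pos0 + dt * pos0
    div = divergence_estimate(pos0, pos1, dt)
    print("mean divergence over bounded cells:", np.nanmean(div))  # expected ~2
```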
As an emerging service for in-browser content delivery, peer-assisted delivery networks (PDNs) are reported to offload up to 95% of bandwidth consumption for video streaming, significantly reducing the cost incurred by traditional CDN services. With such benefits, PDN services significantly impact today's video streaming and content delivery model. However, their security implications have never been investigated. In this paper, we report the first effort to address this issue, made possible by a suite of methodologies, including an automatic pipeline to discover PDN services and their customers, and a PDN analysis framework to test the potential security and privacy risks of these services. Our study has led to the discovery of 3 representative PDN providers, along with 134 websites and 38 mobile apps as their customers. Most of these PDN customers are prominent video streaming services with millions of monthly visits or app downloads (from Google Play). Also found in our study are another 9 top video/live streaming websites, each equipped with a proprietary PDN solution. Most importantly, our analysis of these PDN services has brought to light a series of security risks never reported before, including free riding of public PDN services, video segment pollution, exposure of video viewers' IPs to other peers, and resource squatting. All these risks have been studied through controlled experiments and measurements under the guidance of our institution's IRB. We have responsibly disclosed these security risks to the relevant PDN providers, who have acknowledged our findings, and we also discuss avenues to mitigate these risks.
Relational verification encompasses information flow security, regression verification, translation validation for compilers, and more. Effective alignment of the programs and computations to be related facilitates the use of simpler relational invariants and relational procedure specs, which in turn enables automation and modular reasoning. Alignment has been explored in terms of trace pairs, deductive rules of relational Hoare logics (RHL), and several forms of product automata. This article shows how a simple extension of Kleene Algebra with Tests (KAT), called BiKAT, subsumes prior formulations, including alignment witnesses for forall-exists properties, which brings to light new RHL-style rules for such properties. Alignments can be discovered algorithmically or devised manually but, in either case, their adequacy with respect to the original programs must be proved; an explicit algebra enables constructive proof by equational reasoning. Furthermore, our approach inherits algorithmic benefits from existing KAT-based techniques and tools, which are applicable to a range of semantic models.
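As background for the algebraic style of reasoning, the classical encoding of partial-correctness Hoare triples in plain KAT is recalled below; this is the standard fact the approach builds on, not the BiKAT-specific relational laws, which are defined in the article itself.

```latex
% Classical KAT encoding of a Hoare triple (background, not the BiKAT laws):
% for a program p and tests b (precondition) and c (postcondition), the
% partial-correctness triple {b} p {c} holds exactly when
\[
  b\,p\,\overline{c} \;=\; 0
  \qquad\text{equivalently}\qquad
  b\,p \;=\; b\,p\,c ,
\]
% so proving the triple reduces to equational reasoning in the algebra.
```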
Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing the trust of end-users in machines. As the number of connected devices keeps growing, the Internet of Things (IoT) market needs to be trustworthy for end-users. However, the existing literature still lacks a systematic and comprehensive survey on the use of XAI for IoT. To bridge this gap, in this paper we review XAI frameworks with a focus on their characteristics and support for IoT. We illustrate widely used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), the Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also suggest implementation choices for XAI models in IoT systems for these applications, with appropriate examples, and summarize the key inferences for future work. Moreover, we present cutting-edge developments in edge XAI architectures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored to the demands of future IoT use cases.
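As a small, model-agnostic illustration of the kind of XAI service such frameworks provide, the sketch below computes permutation feature importance for a classifier over a synthetic "IoT sensor" dataset; the feature names, data, and anomaly rule are invented for illustration, and the survey itself covers a much broader range of XAI techniques and IoT domains.

```python
# Model-agnostic explanation via permutation importance on synthetic sensor data.
# Feature names, data, and the labeling rule are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.normal(25, 5, n),     # "temperature"
    rng.normal(60, 10, n),    # "humidity"
    rng.normal(0.5, 0.2, n),  # "vibration"
])
# Hypothetical rule: an anomaly occurs when temperature and vibration are both high.
y = ((X[:, 0] > 30) & (X[:, 2] > 0.6)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Explain the model: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["temperature", "humidity", "vibration"], result.importances_mean):
    print(f"{name:12s} importance = {imp:.3f}")
```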
Learning on big data brings success to artificial intelligence (AI), but the annotation and training costs are expensive. Looking ahead, learning on small data is one of the ultimate goals of AI, requiring machines to recognize objects and scenarios from small data, as humans do. A series of machine learning models follows this direction, such as active learning, few-shot learning, and deep clustering. However, there are few theoretical guarantees for their generalization performance. Moreover, most of their settings are passive, that is, the label distribution is explicitly controlled by one specified sampling scenario. This survey follows agnostic active sampling under a PAC (Probably Approximately Correct) framework to analyze the generalization error and label complexity of learning on small data in both supervised and unsupervised fashion. With these theoretical analyses, we categorize the small data learning models from two geometric perspectives: Euclidean and non-Euclidean (hyperbolic) mean representation, for which the optimization solutions are also presented and discussed. Some potential learning scenarios that may benefit from learning on small data are then summarized and analyzed. Finally, some challenging applications that may benefit from learning on small data, such as computer vision and natural language processing, are also surveyed.
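For orientation, the classical realizable-case PAC sample-complexity bound that underlies this kind of label-complexity analysis is recalled below; the survey's own bounds are derived under the agnostic active-sampling setting and are more refined than this textbook statement.

```latex
% Classical PAC bound for a finite hypothesis class H (background only):
% with probability at least 1 - \delta, a consistent learner returns a
% hypothesis with error at most \epsilon once the number of labeled examples satisfies
\[
  m \;\ge\; \frac{1}{\epsilon}\left(\ln |\mathcal{H}| + \ln \frac{1}{\delta}\right).
\]
% In the agnostic case, the 1/\epsilon factor becomes 1/\epsilon^2 (up to constants),
% which is the regime the survey's label-complexity analysis addresses.
```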
Autonomic computing investigates how systems can achieve (user-)specified control outcomes on their own, without the intervention of a human operator. Autonomic computing fundamentals have been substantially influenced by those of control theory for closed- and open-loop systems. In practice, complex systems may exhibit a number of concurrent and inter-dependent control loops. Although autonomic models have been studied for managing computer resources, ranging from individual resources (e.g., web servers) to resource ensembles (e.g., multiple resources within a data center), integrating Artificial Intelligence (AI) and Machine Learning (ML) to improve resource autonomy and performance at scale remains a fundamental challenge. The integration of AI/ML to achieve such autonomic self-management of systems can occur at different levels of granularity, from full automation to human-in-the-loop automation. In this article, leading academics, researchers, practitioners, engineers, and scientists in the fields of cloud computing, AI/ML, and quantum computing join to discuss current research and potential future directions for these fields. Further, we discuss challenges and opportunities for leveraging AI and ML in next-generation computing for emerging computing paradigms, including cloud, fog, edge, serverless, and quantum computing environments.