
Recently, intermittent computing (IC) has received tremendous attention due to its high potential in perpetual sensing for Internet-of-Things (IoT). By harvesting ambient energy, battery-free devices can perform sensing intermittently without maintenance, thus significantly improving IoT sustainability. To build a practical intermittently-powered sensing system, efficient routing across battery-free devices for data delivery is essential. However, the intermittency of these devices brings new challenges, rendering existing routing protocols inapplicable. In this paper, we propose RICS, the first-of-its-kind routing scheme tailored for intermittently-powered sensing systems. RICS features two major designs, with the goal of achieving low-latency data delivery on a network built with battery-free devices. First, RICS incorporates a fast topology construction protocol for each IC node to establish a path towards the sink node with the least hop count. Second, RICS employs a low-latency message forwarding protocol, which incorporates an efficient synchronization mechanism and a novel technique called pendulum-sync to avoid the time-consuming repeated node synchronization. Our evaluation based on an implementation in OMNeT++ and comprehensive experiments with varying system settings show that RICS can achieve orders of magnitude latency reduction in data delivery compared with the baselines.
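
To make the topology-construction idea above concrete, here is a minimal sketch (not the authors' RICS implementation) of hop-count-based path setup: the sink floods a beacon and every node keeps the parent offering the fewest hops to the sink. The node names and the single-shot flooding are illustrative assumptions; real IC nodes would carry this state across power cycles.

```python
# Hypothetical sketch of least-hop-count topology construction via beacon flooding.
from collections import deque

def build_topology(links, sink):
    """links: dict node -> set of neighbor nodes (symmetric)."""
    hops = {sink: 0}
    parent = {sink: None}
    frontier = deque([sink])
    while frontier:
        u = frontier.popleft()
        for v in links[u]:
            if v not in hops or hops[u] + 1 < hops[v]:
                hops[v] = hops[u] + 1
                parent[v] = u          # v forwards messages towards u (its next hop)
                frontier.append(v)
    return hops, parent

if __name__ == "__main__":
    links = {"sink": {"a", "b"}, "a": {"sink", "c"},
             "b": {"sink", "c"}, "c": {"a", "b"}}
    print(build_topology(links, "sink"))
```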

Related Content

Open and permissionless blockchains are distributed systems with thousands to tens of thousands of nodes, establishing novel platforms for decentralized applications. When realizing such an application, data might be stored on and retrieved from one or more blockchains by distributed network nodes without relying on centralized coordination and trusted third parties. Data access could be provided through a query language such as SQL at the application level, establishing a unified view on application-level data that is verifiably stored. However, when accessing multiple blockchains through their node software and APIs, interoperability cannot be assumed today, resulting in challenges of inhomogeneous data access. In addition, different feature sets and trade-offs exist, e.g., regarding smart contract functionality, availability, distribution, scalability, and security. To increase interoperability, the paper at hand suggests pursuing the development of a cross-chain query language at the application level. The language abstracts from implementation details by providing a standardized syntax, an integrated data model, and a processing architecture for data queries. This article is an extended and updated version of prior work, demonstrating the language syntax, data model, and architecture, together with an evaluation of compatibility against the largest open and permissionless blockchains today.
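
As a hedged illustration of how such an application-level query layer might work (the adapter classes, method names, and the query shown in the comment are hypothetical and not the paper's syntax or API), one query can be dispatched to several per-chain adapters and the results merged under one schema:

```python
# Hypothetical cross-chain query dispatch: one query, many chain adapters, one unified result.
class ChainAdapter:
    def __init__(self, name, rows):
        self.name, self.rows = name, rows
    def fetch_transfers(self, min_value):
        # In practice this would call the chain's node API (e.g. JSON-RPC).
        return [dict(r, chain=self.name) for r in self.rows
                if r["value"] >= min_value]

def query_transfers(adapters, min_value):
    """Unified view: concatenate per-chain results and sort them."""
    results = []
    for a in adapters:
        results.extend(a.fetch_transfers(min_value))
    return sorted(results, key=lambda r: r["value"], reverse=True)

adapters = [ChainAdapter("chainA", [{"value": 5}, {"value": 12}]),
            ChainAdapter("chainB", [{"value": 9}])]
# Conceptually: SELECT * FROM transfers WHERE value >= 8 ACROSS chainA, chainB
print(query_transfers(adapters, 8))
```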

As Internet of Things (IoT) devices proliferate, sustainable methods for powering them are becoming indispensable. The wireless provision of power enables battery-free operation and is crucial for complying with weight and size restrictions. For the energy harvesting components of these devices to be small, a high operating frequency is necessary. In conjunction with an electrically large antenna, the receivers may be located in the radiating near-field (Fresnel) region, e.g., in indoor scenarios. In this paper, we propose a wireless power transfer system to ensure a reliable supply of power to an arbitrary number of mobile, low-power, and single-antenna receivers, which are located in a three-dimensional cuboid room. To this end, we formulate a max-min optimisation problem to determine the optimal allocation of transmit power among an infinite number of radiating elements of the system's transmit antenna array. Thereby, the optimal deployment, i.e, the set of transmit antenna positions that are allocated non-zero transmit power according to the optimal allocation, is obtained implicitly. Generally, the set of transmit antenna positions corresponding to the optimal deployment has Lebesgue measure zero and the closure of the set has empty interior. Moreover, for a one-dimensional transmit antenna array, the set of transmit antenna positions is proven to be finite. The proposed optimal solution is validated through simulation. Simulation results indicate that the optimal deployment requires a finite number of transmit antennas and depends on the geometry of the environment and the dimensionality of the transmit antenna array. The robustness of the solution, which is obtained under a line-of-sight (LoS) assumption between the transmitter and receiver, is assessed in an isotropic scattering environment containing a strong LoS component.
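
For intuition, a max-min allocation of this kind can be sketched as follows. The notation (aperture A, power allocation p(x), channel gain g_k(x) to receiver k, budget P_tx) is assumed for illustration and is not taken from the paper; the allocation is written as a density for readability, whereas the paper's result implies the optimum concentrates its mass on finitely many antenna positions.

```latex
% Hedged sketch of a max-min transmit power allocation over an antenna aperture.
\begin{align}
\max_{p(\mathbf{x}) \ge 0} \;\; \min_{k \in \{1,\dots,K\}} \quad
    & \int_{\mathcal{A}} p(\mathbf{x})\, g_k(\mathbf{x}) \, \mathrm{d}\mathbf{x}
      && \text{(worst-case received power)} \\
\text{s.t.} \quad
    & \int_{\mathcal{A}} p(\mathbf{x}) \, \mathrm{d}\mathbf{x} \le P_{\mathrm{tx}}
      && \text{(total transmit power budget)}
\end{align}
```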

Federated learning (FL) enables collaborative training of a shared model on edge devices while maintaining data privacy. FL is effective when dealing with independent and identically distributed (iid) datasets, but struggles with non-iid datasets. Various personalized approaches have been proposed, but such approaches fail to handle underlying shifts in data distribution, such as data distribution skew commonly observed in real-world scenarios (e.g., driver behavior in smart transportation systems changing across time and location). Additionally, trust concerns among unacquainted devices and security concerns with the centralized aggregator pose additional challenges. To address these challenges, this paper presents a dynamically optimized personalized deep learning scheme based on blockchain and federated learning. Specifically, the innovative smart contract implemented in the blockchain allows distributed edge devices to reach a consensus on the optimal weights of personalized models. Experimental evaluations using multiple models and real-world datasets demonstrate that the proposed scheme achieves higher accuracy and faster convergence compared to traditional federated and personalized learning approaches.
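
A simplified, hypothetical sketch of the core idea follows: each device personalizes its model by choosing mixing weights between its local update and the peer average, keeping whichever mixture scores best on local validation. The grid of candidate weights and the toy validation function are illustrative; in the paper's setting, the agreed weights would be recorded via the smart contract rather than chosen locally in isolation.

```python
# Hypothetical personalized mixing of local and peer model parameters.
def mix(models, weights):
    """Weighted average of parameter vectors (lists of floats)."""
    return [sum(w * m[i] for w, m in zip(weights, models))
            for i in range(len(models[0]))]

def personalize(local_model, peer_models, validate):
    """Grid-search the share kept for the local model vs. the peer average."""
    best, best_score = local_model, validate(local_model)
    peer_avg = mix(peer_models, [1 / len(peer_models)] * len(peer_models))
    for alpha in (0.25, 0.5, 0.75):
        cand = mix([local_model, peer_avg], [alpha, 1 - alpha])
        score = validate(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score  # the chosen mixture could then be posted on-chain

# Toy usage: the "validation" score prefers parameters close to [1, 1].
validate = lambda m: -sum((x - 1) ** 2 for x in m)
print(personalize([0.0, 2.0], [[1.2, 0.9], [0.8, 1.1]], validate))
```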

The edge clique cover (ECC) problem -- where the goal is to find a minimum cardinality set of cliques that cover all the edges of a graph -- is a classic NP-hard problem that has received much attention from both the theoretical and experimental algorithms communities. While small sparse graphs can be solved exactly via the branch-and-reduce algorithm of Gramm et al. [JEA 2009], larger instances can currently only be solved inexactly using heuristics with unknown overall solution quality. We revisit computing minimum ECCs exactly in practice by combining data reduction for both the ECC and vertex clique cover (VCC) problems. We do so by modifying the polynomial-time reduction of Kou et al. [Commun. ACM 1978] to transform a reduced ECC instance to a VCC instance; alternatively, we show it is possible to "lift" some VCC reductions to the ECC problem. Our experiments show that combining data reduction for both problems (which we call synergistic data reduction) enables finding exact minimum ECCs orders of magnitude faster than the technique of Gramm et al., and allows solving large sparse graphs on up to millions of vertices and edges that have never before been solved. With these new exact solutions, we evaluate the quality of recent heuristic algorithms on large instances for the first time. The most recent of these, EO-ECC by Abdullah et al. [ICCS 2022], solves 8 of the 27 instances for which we have exact solutions. It is our hope that our strategy rallies researchers to seek improved algorithms for the ECC problem.
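
As a compact illustration of the classic transformation referenced above (a sketch of the underlying idea, not the paper's reduced-instance pipeline), one can build a derived graph whose vertices are the edges of G, joining two edges whenever their combined endpoints form a clique in G; a vertex clique cover of this derived graph then corresponds to an edge clique cover of G. The example graph below is made up.

```python
# Sketch of an ECC -> VCC style transformation: vertices of the derived graph
# are edges of G; two edges are adjacent iff their endpoints induce a clique in G.
from itertools import combinations

def ecc_to_vcc(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    def is_clique(vs):
        return all(b in adj[a] for a, b in combinations(vs, 2))
    derived_vertices = [frozenset(e) for e in edges]
    derived_edges = [(e, f) for e, f in combinations(derived_vertices, 2)
                     if is_clique(e | f)]
    return derived_vertices, derived_edges

# Triangle plus a pendant edge: the triangle's three edges become mutually
# adjacent in the derived graph, so one clique of G covers all three.
print(ecc_to_vcc(["a", "b", "c", "d"],
                 [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]))
```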

The hardware computing landscape is changing. What used to be distributed systems can now be found on a chip with highly configurable, diverse, specialized and general purpose units. Such Systems-on-a-Chip (SoC) are used to control today's cyber-physical systems, being the building blocks of critical infrastructures. They are deployed in harsh environments and are connected to the cyberspace, which makes them exposed to both accidental faults and targeted cyberattacks. This is in addition to the changing fault landscape that continued technology scaling, emerging devices and novel application scenarios will bring. In this paper, we discuss how the very features (distributed, parallelized, reconfigurable, heterogeneous) that cause many of the imminent and emerging security and resilience challenges also open avenues for their cure through SoC replication, diversity, rejuvenation, adaptation, and hybridization. We show how to leverage these techniques at different levels across the entire SoC hardware/software stack, calling for more research on the topic.

Graph Neural Networks (GNNs) have gained growing interest in miscellaneous applications owing to their outstanding ability in extracting latent representation on graph structures. To render GNN-based service for IoT-driven smart applications, traditional model serving paradigms usually resort to the cloud by fully uploading geo-distributed input data to remote datacenters. However, our empirical measurements reveal the significant communication overhead of such cloud-based serving and highlight the profound potential in applying the emerging fog computing. To maximize the architectural benefits brought by fog computing, in this paper, we present Fograph, a novel distributed real-time GNN inference framework that leverages diverse and dynamic resources of multiple fog nodes in proximity to IoT data sources. By introducing heterogeneity-aware execution planning and GNN-specific compression techniques, Fograph tailors its design to well accommodate the unique characteristics of GNN serving in fog environments. Prototype-based evaluation and case study demonstrate that Fograph significantly outperforms the state-of-the-art cloud serving and fog deployment by up to 5.39x execution speedup and 6.84x throughput improvement.
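
To illustrate the heterogeneity-aware planning idea in the abstract above (an illustrative sketch, not Fograph's actual planner), the input graph's vertices can be split across fog nodes in proportion to each node's measured compute capacity, so that faster nodes receive larger partitions. The fog-node names and capacities below are made up.

```python
# Hypothetical capacity-proportional partitioning of graph vertices across fog nodes.
def plan_partitions(vertex_ids, capacities):
    total = sum(capacities.values())
    shares = sorted(((n, c / total) for n, c in capacities.items()),
                    key=lambda kv: -kv[1])
    plan, start = {}, 0
    for i, (node, share) in enumerate(shares):
        # The last node absorbs any rounding remainder.
        end = len(vertex_ids) if i == len(shares) - 1 \
            else start + round(share * len(vertex_ids))
        plan[node] = vertex_ids[start:end]
        start = end
    return plan

print(plan_partitions(list(range(10)),
                      {"fog-A": 4.0, "fog-B": 2.0, "fog-C": 1.0}))
```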

The Data Science domain has expanded monumentally in both research and industry communities during the past decade, predominantly owing to the Big Data revolution. Artificial Intelligence (AI) and Machine Learning (ML) are bringing more complexities to data engineering applications, which are now integrated into data processing pipelines to process terabytes of data. Typically, a significant amount of time is spent on data preprocessing in these pipelines, and hence improving its efficiency directly impacts the overall pipeline performance. The community has recently embraced the concept of Dataframes as the de-facto data structure for data representation and manipulation. However, the most widely used serial Dataframes today (R, pandas) experience performance limitations while working on even moderately large data sets. We believe that there is plenty of room for improvement by taking a look at this problem from a high-performance computing point of view. In a prior publication, we presented a set of parallel processing patterns for distributed dataframe operators and the reference runtime implementation, Cylon [1]. In this paper, we are expanding on the initial concept by introducing a cost model for evaluating the said patterns. Furthermore, we evaluate the performance of Cylon on the ORNL Summit supercomputer.
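
A small, self-contained illustration of the shuffle-then-local-aggregate pattern typical of distributed dataframe operators follows; the function names and the two-"bucket" setup are hypothetical stand-ins, whereas Cylon itself executes such steps over a distributed runtime such as MPI.

```python
# Hypothetical hash-shuffle followed by local group-by aggregation.
from collections import defaultdict

def hash_shuffle(rows, n_workers, key):
    """Route each row to the worker owning hash(key) % n_workers."""
    buckets = [[] for _ in range(n_workers)]
    for row in rows:
        buckets[hash(row[key]) % n_workers].append(row)
    return buckets

def local_groupby_sum(rows, key, value):
    out = defaultdict(float)
    for row in rows:
        out[row[key]] += row[value]
    return dict(out)

rows = [{"k": "x", "v": 1.0}, {"k": "y", "v": 2.0},
        {"k": "x", "v": 3.0}, {"k": "z", "v": 4.0}]
partials = [local_groupby_sum(b, "k", "v") for b in hash_shuffle(rows, 2, "k")]
print(partials)  # per-worker partial results; a key never straddles two workers
```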

Despite the advancement of machine learning techniques in recent years, state-of-the-art systems lack robustness to "real world" events, where the input distributions and tasks encountered by the deployed systems will not be limited to the original training context, and systems will instead need to adapt to novel distributions and tasks while deployed. This critical gap may be addressed through the development of "Lifelong Learning" systems that are capable of 1) Continuous Learning, 2) Transfer and Adaptation, and 3) Scalability. Unfortunately, efforts to improve these capabilities are typically treated as distinct areas of research that are assessed independently, without regard to the impact of each separate capability on other aspects of the system. We instead propose a holistic approach, using a suite of metrics and an evaluation framework to assess Lifelong Learning in a principled way that is agnostic to specific domains or system techniques. Through five case studies, we show that this suite of metrics can inform the development of varied and complex Lifelong Learning systems. We highlight how the proposed suite of metrics quantifies performance trade-offs present during Lifelong Learning system development - both the widely discussed Stability-Plasticity dilemma and the newly proposed relationship between Sample Efficient and Robust Learning. Further, we make recommendations for the formulation and use of metrics to guide the continuing development of Lifelong Learning systems and assess their progress in the future.
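
As a hedged toy example of the kind of quantity such a metrics suite might compute (not the paper's actual metric definitions), a simple "performance maintenance" ratio compares accuracy on a task right after it was learned with accuracy on the same task after later tasks have been learned; values near 1.0 indicate little forgetting.

```python
# Hypothetical performance-maintenance metric for a lifelong learning run.
def performance_maintenance(acc_after_learning, acc_at_end):
    """Ratio per task: below 1.0 indicates forgetting, near 1.0 indicates stability."""
    return {task: acc_at_end[task] / acc_after_learning[task]
            for task in acc_after_learning}

print(performance_maintenance({"task1": 0.90, "task2": 0.85},
                              {"task1": 0.81, "task2": 0.85}))
```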

Neural architecture-based recommender systems have achieved tremendous success in recent years. However, when dealing with highly sparse data, they still fall short of expectation. Self-supervised learning (SSL), as an emerging technique to learn with unlabeled data, recently has drawn considerable attention in many fields. There is also a growing body of research proceeding towards applying SSL to recommendation for mitigating the data sparsity issue. In this survey, a timely and systematic review of the research efforts on self-supervised recommendation (SSR) is presented. Specifically, we propose an exclusive definition of SSR, on top of which we build a comprehensive taxonomy to divide existing SSR methods into four categories: contrastive, generative, predictive, and hybrid. For each category, the narrative unfolds along its concept and formulation, the involved methods, and its pros and cons. Meanwhile, to facilitate the development and evaluation of SSR models, we release an open-source library SELFRec, which incorporates multiple benchmark datasets and evaluation metrics, and has implemented a number of state-of-the-art SSR models for empirical comparison. Finally, we shed light on the limitations in the current research and outline the future research directions.
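
To give a numeric flavor of the contrastive category mentioned above (a minimal sketch, not a SELFRec implementation), two augmented "views" of the same user embedding are pulled together while the other users in the batch act as negatives, as in an InfoNCE-style objective. The embeddings and temperature below are made up.

```python
# Minimal InfoNCE-style contrastive loss over two index-aligned embedding views.
import math

def infonce(view1, view2, temperature=0.2):
    """view1/view2: lists of embedding vectors, index-aligned per user."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sim(a, b):
        return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)) + 1e-12)
    loss = 0.0
    for i, anchor in enumerate(view1):
        logits = [math.exp(sim(anchor, other) / temperature) for other in view2]
        loss += -math.log(logits[i] / sum(logits))  # positive pair vs. in-batch negatives
    return loss / len(view1)

print(infonce([[1.0, 0.1], [0.2, 1.0]], [[0.9, 0.2], [0.1, 1.1]]))
```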

Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains a challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm PinSage, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure as well as node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions and design a novel training strategy that relies on harder-and-harder training examples to improve robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. We deploy PinSage at Pinterest and train it on 7.5 billion examples on a graph with 3 billion nodes representing pins and boards, and 18 billion edges. According to offline metrics, user studies and A/B tests, PinSage generates higher-quality recommendations than comparable deep learning and graph-based alternatives. To our knowledge, this is the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.
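
A simplified sketch of the random-walk neighborhood idea described above follows (illustrative only, not Pinterest's production code): short walks from a node rank nearby nodes by visit count, and the most-visited nodes become its "neighbors", with normalized visit counts serving as importance weights for the convolution. The toy pin/board graph and walk parameters are assumptions.

```python
# Hypothetical random-walk-based importance sampling of a node's neighborhood.
import random
from collections import Counter

def importance_neighborhood(adj, start, k=2, num_walks=200, walk_len=3, seed=0):
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(num_walks):
        node = start
        for _ in range(walk_len):
            node = rng.choice(adj[node])
            if node != start:
                visits[node] += 1
    top = visits.most_common(k)
    total = sum(count for _, count in top)
    return [(n, count / total) for n, count in top]  # (neighbor, importance weight)

adj = {"p1": ["b1", "b2"], "b1": ["p1", "p2"], "b2": ["p1", "p2", "p3"],
       "p2": ["b1", "b2"], "p3": ["b2"]}
print(importance_neighborhood(adj, "p1"))
```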
