
Identifying the closest fog node is crucial for mobile clients to benefit from fog computing. Relying on geographical location alone is insufficient for this purpose, as it ignores the actual access latency observed by clients. In this paper, we analyze the performance of the Meridian and Vivaldi network coordinate systems in identifying nearest fog nodes. To that end, we simulate a dense fog environment with mobile clients. We find that while network coordinate systems reliably find fog nodes in close network proximity, a purely latency-oriented identification approach ignores the larger problem of balancing load across fog nodes.
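
For reference, the core of such a coordinate system is a simple update rule. Below is a minimal sketch of a Vivaldi-style update, assuming a 2-D Euclidean coordinate space and a fixed step size; the probe loop and all parameter values are illustrative, not the paper's simulation setup.

```python
import numpy as np

def vivaldi_update(x_i, x_j, rtt, delta=0.25):
    """One Vivaldi-style step: move x_i so that the coordinate distance
    to x_j better matches the measured round-trip time."""
    diff = x_i - x_j
    dist = np.linalg.norm(diff)
    if dist == 0.0:                        # avoid a zero direction vector
        diff = np.random.randn(*x_i.shape)
        dist = np.linalg.norm(diff)
    error = rtt - dist                     # > 0: coordinates are too close
    return x_i + delta * error * (diff / dist)

# Toy usage: a client refines its coordinate from repeated probes to one
# fog node whose coordinate is assumed fixed (hypothetical values).
client = np.zeros(2)
fog_node = np.array([3.0, 4.0])
for _ in range(50):
    client = vivaldi_update(client, fog_node, rtt=7.0)
print(np.linalg.norm(client - fog_node))   # ~7.0: distance predicts the RTT
```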

Related Content

Networking: IFIP International Conferences on Networking. Explanation: International networking conference. Publisher: IFIP. SIT:

Traditional approaches for manipulation planning rely on an explicit geometric model of the environment to formulate a given task as an optimization problem. However, inferring an accurate model from raw sensor input is a hard problem in itself, in particular for articulated objects (e.g., closets, drawers). In this paper, we propose a Neural Field Representation (NFR) of articulated objects that enables manipulation planning directly from images. Specifically, after taking a few pictures of a new articulated object, we can forward simulate its possible movements, and, therefore, use this neural model directly for planning with trajectory optimization. Additionally, this representation can be used for shape reconstruction, semantic segmentation and image rendering, which provides a strong supervision signal during training and generalization. We show that our model, which was trained only on synthetic images, is able to extract a meaningful representation for unseen objects of the same class, both in simulation and with real images. Furthermore, we demonstrate that the representation enables robotic manipulation of an articulated object in the real world directly from images.
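
As a rough illustration of how such a forward model enables planning, the sketch below runs shooting-style trajectory optimization through a toy differentiable dynamics function. The `forward_model` here is a hypothetical stand-in for the learned neural field, and the finite-difference gradients are a simplification; the paper's actual planner and model may differ.

```python
import numpy as np

def forward_model(state, action):
    # Toy stand-in for the learned neural forward model: one articulation
    # degree of freedom (e.g., how far a drawer is open).
    return state + 0.1 * np.tanh(action)

def rollout_cost(start, goal, actions):
    state = start
    for a in actions:
        state = forward_model(state, a)
    return (state - goal) ** 2             # terminal cost only, for brevity

def plan(start, goal, horizon=20, iters=200, lr=0.5, eps=1e-4):
    """Shooting-style trajectory optimization through the forward model,
    with finite-difference gradients on the action sequence."""
    actions = np.zeros(horizon)
    for _ in range(iters):
        base = rollout_cost(start, goal, actions)
        grad = np.zeros(horizon)
        for k in range(horizon):
            pert = actions.copy()
            pert[k] += eps
            grad[k] = (rollout_cost(start, goal, pert) - base) / eps
        actions -= lr * grad
    return actions

actions = plan(start=0.0, goal=1.0)
print(rollout_cost(0.0, 1.0, actions))     # should be close to zero
```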

The heterogeneity of use cases that next-generation wireless systems need to support calls for flexible and programmable networks that can autonomously adapt to the application requirements. Specifically, traffic flows that support critical applications (e.g., vehicular control or safety communications) often come with a requirement in terms of guaranteed performance. At the same time, others are more elastic and can adapt to the resources made available by the network (e.g., video streaming). To this end, the Open Radio Access Network (RAN) paradigm is seen as an enabler of dynamic control and adaptation of the protocol stack of 3rd Generation Partnership Project (3GPP) networks in the 5th Generation (5G) and beyond. Through its embodiment in the O-RAN Alliance specifications, it introduces the RAN Intelligent Controllers (RICs), which enable closed-loop control, leveraging a rich set of RAN Key Performance Measurements (KPMs) to build a representation of the network and to enforce dynamic control through the configuration of 3GPP-defined stack parameters. In this paper, we leverage the Open RAN closed-loop control capabilities to design, implement, and evaluate multiple data-driven and dynamic Service Level Agreement (SLA) enforcement policies, capable of adapting the RAN semi-persistent scheduling patterns to match users' requirements. To do so, we implement semi-persistent scheduling capabilities in the OpenAirInterface (OAI) 5G stack, as well as an easily extensible and customizable version of the Open RAN E2 interface that connects the OAI base stations to the near-real-time RIC. We deploy and test our framework on Colosseum, a large-scale hardware-in-the-loop channel emulator. Results confirm the effectiveness of the proposed Open RAN-based solution in managing SLAs in near-real-time.
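
To give a flavor of such a closed-loop policy, here is a heavily simplified sketch of one control step: a KPM report is compared against the SLA target, and the semi-persistent scheduling allocation is adjusted. The function name, the slack threshold, and the slot-count knob are illustrative assumptions, not the O-RAN E2 API or the paper's actual policies.

```python
def sla_control_step(measured_mbps, sla_mbps, reserved_slots,
                     max_slots=20, step=1):
    """One hypothetical control step: grow the semi-persistent scheduling
    allocation while the SLA is violated, shrink it when there is clear
    slack so elastic traffic can reuse the freed slots."""
    if measured_mbps < sla_mbps:
        return min(max_slots, reserved_slots + step)
    if measured_mbps > 1.2 * sla_mbps:     # 20% slack margin (assumed)
        return max(1, reserved_slots - step)
    return reserved_slots

# Toy usage: KPM reports arriving once per control interval, SLA = 10 Mbps.
slots = 4
for kpm in [8.0, 8.5, 9.8, 12.5, 13.0]:
    slots = sla_control_step(kpm, 10.0, slots)
    print(slots)                           # 5, 6, 7, 6, 5
```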

We consider a variant of the clustering problem for a complete weighted graph. The aim is to partition the nodes into clusters, maximizing the sum of the edge weights within the clusters. This problem is known as the clique partitioning problem, which is NP-hard in the general case where edge weights may have different signs. We propose a new method for estimating an upper bound on the objective function, which we combine with the classical branch-and-bound technique to find the exact solution. We evaluate our approach on a broad range of random graphs and real-world networks. The proposed approach provided tighter upper bounds and achieved significant convergence speed improvements compared to known alternative methods.
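
For orientation, the sketch below shows the overall branch-and-bound scheme for clique partitioning. It uses the naive bound that adds every still-available positive edge weight; the paper's contribution is a tighter bound, which this sketch does not implement.

```python
def clique_partition(W):
    """Exact clique partitioning by branch and bound: nodes are assigned
    in order to an existing cluster or a new one, and branches whose
    optimistic bound cannot beat the incumbent are pruned."""
    n = len(W)
    best = {"value": float("-inf"), "labels": None}

    def positive_remaining(k):
        # Positive weight on edges with at least one endpoint unassigned:
        # an optimistic (loose) bound on the value still attainable.
        return sum(max(W[i][j], 0.0)
                   for i in range(n) for j in range(i + 1, n) if j >= k)

    def recurse(k, labels, value):
        if k == n:
            if value > best["value"]:
                best["value"], best["labels"] = value, labels[:]
            return
        if value + positive_remaining(k) <= best["value"]:
            return                         # prune: bound cannot beat incumbent
        for c in range(max(labels[:k], default=-1) + 2):
            gain = sum(W[i][k] for i in range(k) if labels[i] == c)
            labels[k] = c
            recurse(k + 1, labels, value + gain)

    recurse(0, [0] * n, 0.0)
    return best["value"], best["labels"]

W = [[0, 3, -2, -2],
     [3, 0, -2, -2],
     [-2, -2, 0, 4],
     [-2, -2, 4, 0]]
print(clique_partition(W))                 # (7.0, [0, 0, 1, 1])
```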

Reinforcement learning (RL) is a versatile framework for learning to solve complex real-world tasks. However, influences on the learning performance of RL algorithms are often poorly understood in practice. We discuss different analysis techniques and assess their effectiveness for investigating the impact of action representations in RL. Our experiments demonstrate that the action representation can significantly influence the learning performance on popular RL benchmark tasks. The analysis results indicate that some of the performance differences can be attributed to changes in the complexity of the optimization landscape. Finally, we discuss open challenges of analysis techniques for RL algorithms.
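
The following toy experiment illustrates the phenomenon under discussion: the same 1-D reaching task is solved under two action representations (absolute target vs. delta), and the induced return landscapes over a one-parameter policy differ. The task, policy class, and reward are illustrative choices, not the paper's benchmarks.

```python
import numpy as np

def step(pos, action, repr_type):
    if repr_type == "absolute":
        return np.clip(action, -1.0, 1.0)           # action is the next position
    return np.clip(pos + 0.1 * action, -1.0, 1.0)   # action is an increment

def episode_return(theta, repr_type, goal=0.7, horizon=20):
    """Return of the trivial constant policy a = theta on a 1-D reaching
    task with a dense negative-distance reward."""
    pos, total = 0.0, 0.0
    for _ in range(horizon):
        pos = step(pos, theta, repr_type)
        total -= abs(pos - goal)
    return total

# Scan the one-parameter policy space under both representations: the
# optima and the shapes of the landscapes differ, even though both
# representations can express good behaviour on this task.
for repr_type in ("absolute", "delta"):
    thetas = np.linspace(-1.0, 1.0, 201)
    returns = [episode_return(t, repr_type) for t in thetas]
    print(repr_type, thetas[int(np.argmax(returns))], max(returns))
```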

We propose a new framework for the sampling, compression, and analysis of distributions of point sets and other geometric objects embedded in Euclidean spaces. Our approach involves constructing a tensor called the RaySense sketch, which captures nearest neighbors from the underlying geometry of points along a set of rays. We explore various operations that can be performed on the RaySense sketch, leading to different properties and potential applications. Statistical information about the data set can be extracted from the sketch, independent of the ray set. Line integrals on point sets can be efficiently computed using the sketch. We also present several examples illustrating applications of the proposed strategy in practical scenarios.
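
Below is a minimal sketch of the construction as described above, under the assumption of uniformly random rays and brute-force nearest-neighbor search; the actual RaySense definition may differ in details such as ray sampling and what is recorded per sample.

```python
import numpy as np

def raysense_sketch(points, n_rays=16, n_samples=32, seed=0):
    """RaySense-style tensor: for sample locations along random rays,
    record the nearest data point (its coordinates), giving an array of
    shape (n_rays, n_samples, dim). Brute-force nearest neighbors."""
    rng = np.random.default_rng(seed)
    dim = points.shape[1]
    sketch = np.empty((n_rays, n_samples, dim))
    for r in range(n_rays):
        origin = rng.uniform(-1.0, 1.0, dim)
        direction = rng.normal(size=dim)
        direction /= np.linalg.norm(direction)
        ts = np.linspace(-1.0, 1.0, n_samples)
        ray_pts = origin + ts[:, None] * direction       # samples on the ray
        d2 = ((ray_pts[:, None, :] - points[None]) ** 2).sum(-1)
        sketch[r] = points[d2.argmin(axis=1)]            # nearest neighbors
    return sketch

cloud = np.random.default_rng(1).normal(size=(500, 3))   # toy point set
S = raysense_sketch(cloud)
print(S.shape)   # (16, 32, 3); statistics of S summarize the point cloud
```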

The steady-state solution of fluid flow in pipeline infrastructure networks driven by junction/node potentials is a crucial ingredient in various decision support tools for system design and operation. While the non-linear system is known to have a unique solution (when one exists), the absence of a definite result on existence of solutions hobbles the development of computational algorithms, for it is not possible to distinguish between algorithm failure and non-existence of a solution. In this letter we show that a unique solution exists for such non-linear systems if the term solution is interpreted in terms of potentials and flows rather than pressures and flows. The existence result for flow of natural gas in networks also applies to other fluid flow networks such as water distribution networks or networks that transport carbon dioxide in carbon capture and sequestration. Most importantly, by giving a complete answer to the question of existence of solutions, our result enables correct diagnosis of algorithmic failure, problem stiffness and non-convergence in computational algorithms.
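
As a small worked example of the potential formulation, the sketch below solves the node-balance equations of a toy 3-node network, with edge flow modeled as flow = sign(d) * sqrt(|d| / r) for potential drop d and resistance r (for natural gas, the potential is the squared pressure). The network, resistances, and solver choice are illustrative assumptions, not taken from the letter.

```python
import numpy as np
from scipy.optimize import fsolve

edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 1.5)]   # (i, j, resistance r)
injections = np.array([1.0, 0.0, -1.0])           # supply at node 0, demand at node 2

def node_balance(pi_free):
    pi = np.concatenate(([0.0], pi_free))          # node 0 is the slack node
    res = -injections.copy()
    for i, j, r in edges:
        d = pi[i] - pi[j]
        f = np.sign(d) * np.sqrt(abs(d) / r)       # flow from i to j
        res[i] += f                                # flow leaving node i
        res[j] -= f                                # flow entering node j
    return res[1:]                                 # balance at non-slack nodes

pi_free = fsolve(node_balance, x0=[-0.5, -1.0])
print(pi_free, node_balance(pi_free))              # residuals ~ 0
```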

Recently, graph neural networks have been gaining a lot of attention to simulate dynamical systems due to their inductive nature leading to zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing volume of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely, Hamiltonian and Lagrangian graph neural networks, graph neural ODE, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulation highlighting the similarities and differences in the inductive biases and graph architecture of these systems. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems to compare the performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training system, thus providing a promising route to simulate large-scale realistic systems.
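
The two headline metrics are easy to state concretely. The sketch below computes rollout error against a reference trajectory and energy drift for a unit-mass spring system, using a naive explicit-Euler integrator as a stand-in for a biased learned simulator; in the paper, the rollouts would come from trained GNNs on far richer systems.

```python
import numpy as np

def ground_truth(q0, p0, dt, steps):
    """Reference trajectory of a unit-mass, unit-stiffness spring via
    symplectic Euler, whose energy error stays bounded."""
    traj = [(q0, p0)]
    for _ in range(steps):
        q, p = traj[-1]
        p = p - dt * q
        q = q + dt * p
        traj.append((q, p))
    return np.array(traj)

def model_rollout(q0, p0, dt, steps):
    """Stand-in for a learned simulator: explicit Euler, which
    systematically injects energy, mimicking model bias."""
    traj = [(q0, p0)]
    for _ in range(steps):
        q, p = traj[-1]
        traj.append((q + dt * p, p - dt * q))
    return np.array(traj)

def energy(traj):                                  # H = p^2/2 + q^2/2
    return 0.5 * traj[:, 1] ** 2 + 0.5 * traj[:, 0] ** 2

gt = ground_truth(1.0, 0.0, 0.01, 5000)
pred = model_rollout(1.0, 0.0, 0.01, 5000)
print(np.linalg.norm(gt - pred, axis=1).mean())    # rollout error
print(abs(energy(pred)[-1] - energy(pred)[0]))     # energy drift
```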

Object detection is a fundamental task in computer vision and image processing. Current deep learning based object detectors have been highly successful given abundant labeled data. In practice, however, it is not guaranteed that each object category has enough labeled samples for training, and these large object detectors are prone to overfitting when the training data is limited. Therefore, it is necessary to introduce few-shot learning and zero-shot learning into object detection, which can collectively be termed low-shot object detection. Low-Shot Object Detection (LSOD) aims to detect objects from a few or even zero labeled samples, and can be categorized into few-shot object detection (FSOD) and zero-shot object detection (ZSD), respectively. This paper conducts a comprehensive survey of deep learning based FSOD and ZSD. First, this survey classifies methods for FSOD and ZSD into different categories and discusses their pros and cons. Second, it reviews dataset settings and evaluation metrics for FSOD and ZSD, then analyzes the performance of different methods on these benchmarks. Finally, it discusses future challenges and promising directions for FSOD and ZSD.

Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. Now, it is time to reconsider the relationship between ML and systems and let ML transform the way computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predicting performance metrics or some other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies based on their target level of the system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a future vision of opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.
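
As a toy instance of the ML-based modelling category, the sketch below fits a cheap surrogate that predicts a performance metric from design-point features instead of invoking a slow simulator for every candidate. The features, the synthetic "true" behaviour, and the linear basis are all placeholder assumptions, not drawn from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(1.0, 8.0, size=(200, 3))           # [log cache, cores, GHz]
latency = 50 / (X[:, 1] * X[:, 2]) + 5 / X[:, 0]   # hidden "true" behaviour
latency += rng.normal(0.0, 0.1, 200)               # measurement noise

# Fit a linear surrogate on a hand-picked basis; prediction then costs a
# dot product instead of a simulation run.
def basis(X):
    return np.column_stack([1 / X[:, 0], 1 / (X[:, 1] * X[:, 2]),
                            np.ones(len(X))])

w, *_ = np.linalg.lstsq(basis(X), latency, rcond=None)

candidate = np.array([[4.0, 6.0, 3.0]])            # a new design point
print(basis(candidate) @ w)                        # estimated latency, no sim
```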

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing their memory requirements, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. For the techniques in each category, we analyze their accuracy, advantages, and disadvantages, as well as potential solutions to their open problems. We also discuss new evaluation metrics as a guideline for future research.
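
As a concrete instance of category (1), here is a minimal sketch of uniform 8-bit post-training quantization of a single weight tensor; real deployments add per-channel scales, zero points, and calibration data.

```python
import numpy as np

def quantize_int8(w):
    """Uniform symmetric quantization: map the largest-magnitude weight
    to 127 and round everything else onto the int8 grid."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(0.0, 0.05, (256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
print(q.nbytes / w.nbytes)                     # 0.25: 4x memory reduction
print(np.abs(dequantize(q, scale) - w).max())  # worst-case rounding error
```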
