
We describe the orchestration of a decentralized swarm of rotary-wing UAV relays that augments the coverage and service capabilities of a terrestrial base station. Our goal is to minimize the time-averaged service latencies involved in handling transmission requests from ground users under Poisson arrivals, subject to an average UAV power constraint. Equipped with rate adaptation to efficiently exploit air-to-ground channel stochastics, we first derive the optimal control policy for a single relay via a semi-Markov decision process formulation, with competitive swarm optimization for UAV trajectory design. Accordingly, we detail a multiscale decomposition of this construction: outer decisions on radial wait velocities and end positions optimize the expected long-term delay-power trade-off, while inner decisions on angular wait velocities, service schedules, and UAV trajectories greedily minimize the instantaneous delay-power costs. Next, generalizing to UAV swarms via replication and consensus-driven command-and-control, this policy is embedded with spread maximization and conflict resolution heuristics. We demonstrate that our framework offers superior performance vis-à-vis average service latencies and average per-UAV power consumption: 11x faster data payload delivery relative to static UAV-relay deployments and 2x faster than a deep-Q-network solution; remarkably, one relay with our scheme outperforms three relays under a joint successive convex approximation policy by 62%.
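
As a rough illustration of the competitive swarm optimization (CSO) step mentioned above, the following Python sketch implements a generic CSO minimizer in the style of Cheng and Jin: particles are randomly paired each iteration, and the loser of each pairwise fitness comparison learns from the winner and the swarm mean. The `cost` function, bounds, and hyperparameters are placeholders, not the paper's trajectory-design formulation.

```python
import numpy as np

def cso_minimize(cost, dim, n=40, iters=200, lb=-1.0, ub=1.0, phi=0.1, seed=0):
    """Generic competitive swarm optimizer; `cost` maps a (dim,) array to a scalar."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, size=(n, dim))   # particle positions
    v = np.zeros((n, dim))                   # particle velocities
    for _ in range(iters):
        f = np.apply_along_axis(cost, 1, x)  # evaluate all particles
        idx = rng.permutation(n)             # random pairing
        mean = x.mean(axis=0)                # swarm mean position
        for a, b in zip(idx[::2], idx[1::2]):
            win, lose = (a, b) if f[a] <= f[b] else (b, a)
            r1, r2, r3 = rng.random((3, dim))
            # the loser learns from the winner and the swarm mean; winner survives
            v[lose] = r1 * v[lose] + r2 * (x[win] - x[lose]) + phi * r3 * (mean - x[lose])
            x[lose] = np.clip(x[lose] + v[lose], lb, ub)
    f = np.apply_along_axis(cost, 1, x)
    return x[np.argmin(f)], f.min()

# Toy usage: minimize a quadratic over an 8-dimensional waypoint vector.
best_w, best_c = cso_minimize(lambda w: float(np.sum(w ** 2)), dim=8)
```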

Related Content

The lasso is the most famous sparse regression and feature selection method. One reason for its popularity is the speed at which the underlying optimization problem can be solved. Sorted L-One Penalized Estimation (SLOPE) is a generalization of the lasso with appealing statistical properties. Despite this, the method has not yet attracted widespread interest, largely because current software packages that fit SLOPE rely on algorithms that perform poorly in high dimensions. To tackle this issue, we propose a new, fast algorithm for solving the SLOPE optimization problem, which combines proximal gradient descent and proximal coordinate descent steps. We provide new results on the directional derivative of the SLOPE penalty and its related SLOPE thresholding operator, as well as convergence guarantees for our proposed solver. In extensive benchmarks on simulated and real data, we show that our method outperforms a long list of competing algorithms.
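
The key primitive in any proximal method for SLOPE is the proximal operator of the sorted-L1 penalty. As a hedged sketch (this is the standard construction, not the paper's hybrid solver), the Python function below computes that operator: sort the magnitudes, subtract the sorted penalty sequence, project onto the non-increasing cone with pool-adjacent-violators, clip at zero, and undo the sort. With all penalty weights equal it reduces to ordinary soft-thresholding, i.e., the lasso prox.

```python
import numpy as np

def prox_slope(v, lam):
    """Prox of the sorted-L1 (SLOPE) penalty; lam must be non-increasing, >= 0."""
    sign = np.sign(v)
    u = np.abs(v)
    order = np.argsort(-u)        # indices sorting |v| in decreasing order
    w = u[order] - lam            # sorted magnitudes minus sorted penalties
    # Project w onto the non-increasing cone via pool-adjacent-violators,
    # tracking each merged block as (sum, length).
    sums, lens = [], []
    for wi in w:
        sums.append(wi); lens.append(1)
        while len(sums) > 1 and sums[-1] / lens[-1] >= sums[-2] / lens[-2]:
            s, l = sums.pop(), lens.pop()
            sums[-1] += s; lens[-1] += l
    x_sorted = np.concatenate(
        [np.full(l, max(s / l, 0.0)) for s, l in zip(sums, lens)])
    x = np.zeros_like(u, dtype=float)
    x[order] = x_sorted           # undo the sort, then restore signs
    return sign * x

# With equal weights this matches soft-thresholding: prox_slope([3, -1], [1, 1]) -> [2, 0].
print(prox_slope(np.array([3.0, -1.0]), np.array([1.0, 1.0])))
```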

The long-range and low-energy-consumption requirements of Internet of Things (IoT) applications have led to a new class of wireless communication technology known as Low Power Wide Area Networks (LPWANs). In recent years, the Long Range (LoRa) protocol has gained a lot of attention as one of the most promising technologies in LPWAN. Choosing the right combination of transmission parameters is a major challenge in LoRa networks. In LoRa, an Adaptive Data Rate (ADR) mechanism configures each End Device's (ED) transmission parameters, resulting in improved performance metrics. In this paper, we propose a link-based ADR approach that configures the transmission parameters of EDs without taking into account the history of the last received packets, resulting in an approach with relatively low space complexity. We present four different scenarios for assessing performance, including one in which mobile EDs are considered. Our simulation results show that in a mobile scenario with high channel noise, our proposed algorithm achieves a Packet Delivery Ratio (PDR) 2.8 times that of the original ADR and 1.35 times that of other relevant algorithms.
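
To make the flavor of such a history-free decision rule concrete, here is a hypothetical Python sketch of a link-based ADR step: it converts a single uplink SNR measurement into headroom over the spreading factor's demodulation floor and spends that headroom first on a faster data rate, then on lower transmit power. The demodulation-floor table uses typical LoRa values, but the margin, step size, and power limits are assumptions, not the paper's parameters.

```python
# Typical LoRa demodulation floors (dB) per spreading factor.
REQUIRED_SNR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}

def link_based_adr(snr_db, sf, tx_dbm, margin_db=10.0,
                   tx_min=2.0, tx_max=14.0, step_db=3.0):
    """Return a new (sf, tx_dbm) from the latest SNR only (no packet history)."""
    headroom = snr_db - REQUIRED_SNR[sf] - margin_db
    steps = int(headroom // step_db)
    while steps > 0 and sf > 7:             # spend headroom on a faster data rate first
        sf -= 1; steps -= 1
    while steps > 0 and tx_dbm > tx_min:    # then on lower transmit power
        tx_dbm = max(tx_min, tx_dbm - step_db); steps -= 1
    while steps < 0 and tx_dbm < tx_max:    # negative headroom: raise power
        tx_dbm = min(tx_max, tx_dbm + step_db); steps += 1
    while steps < 0 and sf < 12:            # finally fall back to a more robust SF
        sf += 1; steps += 1
    return sf, tx_dbm
```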

Preoperative medical imaging is an essential part of surgical planning. The data from medical imaging devices such as CT and MRI scanners consist of stacks of 2D images in DICOM format. Meanwhile, advances in 3D data visualization provide further information by assembling these cross-sections into 3D volumetric datasets. When Microsoft unveiled the HoloLens 2 (HL2), considered one of the best Mixed Reality (XR) headsets on the market, it promised to enhance 3D visualization by providing an immersive experience to users. This paper introduces a prototype holographic XR DICOM viewer for the 3D visualization of DICOM image sets on HL2 for surgical planning. We first developed a standalone graphical C++ engine using the native DirectX11 API and HLSL shaders. On top of that, the prototype applies the OpenXR API for potential deployment on a wide range of devices from vendors across the XR spectrum. With native access to the device, our prototype addresses the hardware limitations of the HL2 for 3D volume rendering and interaction. Moreover, smartphones can act as input devices, providing another method of user interaction by connecting to our server. In this paper, we present a holographic DICOM viewer for the HoloLens 2 and contribute (i) a prototype that renders DICOM image stacks in real time on HL2, (ii) three types of user interactions in XR, and (iii) a preliminary qualitative evaluation of our prototype.
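
The paper's engine is native C++/DirectX11; purely to illustrate the first step of any such pipeline (turning a DICOM series into a renderable volume), the Python sketch below uses the pydicom library to sort a series along the patient axis, stack it into a 3D array, and apply the rescale tags. The directory layout is hypothetical.

```python
import glob
import numpy as np
import pydicom  # pip install pydicom

def load_dicom_volume(directory):
    """Stack a DICOM series into a (slices, rows, cols) float32 volume."""
    slices = [pydicom.dcmread(p) for p in glob.glob(f"{directory}/*.dcm")]
    # Order cross-sections along the patient z-axis before stacking.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    # Map stored values to modality units where rescale tags are present.
    s0 = slices[0]
    slope = float(getattr(s0, "RescaleSlope", 1.0))
    intercept = float(getattr(s0, "RescaleIntercept", 0.0))
    return volume * slope + intercept
```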

With the growing demand for teams of robots that perform tasks collaboratively, the research community has become increasingly interested in collaborative simultaneous localization and mapping (SLAM). Unfortunately, existing datasets are limited in the scale and variation of the collaborative trajectories they capture, even though generalization across the trajectories of different agents is crucial to the overall viability of collaborative tasks. To help align the research community's contributions with real-world multi-agent coordinated SLAM problems, we introduce S3E, a novel large-scale multimodal dataset captured by a fleet of unmanned ground vehicles along four designed collaborative trajectory paradigms. S3E consists of 7 outdoor and 5 indoor scenes, each exceeding 200 seconds, with well-synchronized and calibrated high-quality stereo camera, LiDAR, and high-frequency IMU data. Crucially, our effort exceeds previous attempts in dataset size, scene variability, and complexity, with 4x the average recording time of the pioneering EuRoC dataset. We also provide careful dataset analysis as well as baselines for both collaborative SLAM and its single-agent counterpart. Data, code, and up-to-date information are available at https://github.com/PengYu-Team/S3E.
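
As a small illustration of what well-synchronized multimodal streams make easy, the sketch below pairs camera and LiDAR messages by nearest timestamp within a tolerance. It is a generic association routine under assumed stream names, not part of the S3E toolkit.

```python
import bisect

def associate(cam_ts, lidar_ts, tol=0.02):
    """Pair each camera timestamp with the nearest LiDAR timestamp within `tol` seconds."""
    lidar_ts = sorted(lidar_ts)
    pairs = []
    for t in cam_ts:
        i = bisect.bisect_left(lidar_ts, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(lidar_ts)]
        best = min(candidates, key=lambda j: abs(lidar_ts[j] - t))
        if abs(lidar_ts[best] - t) <= tol:
            pairs.append((t, lidar_ts[best]))
    return pairs
```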

Modern analytical workloads are highly heterogeneous and massively complex, making generic query optimizers untenable for many customers and scenarios. As a result, it is important to specialize these optimizers to instances of the workloads. In this paper, we continue a recent line of work on steering a query optimizer towards better plans for a given workload, and make major strides in pushing previous research ideas to production deployment. Along the way, we solve several operational challenges, including making steering actions more manageable, keeping the costs of steering within budget, and avoiding unexpected performance regressions in production. Our resulting system, QQ-advisor, essentially externalizes the query planner to a massive offline pipeline for better exploration and specialization. We discuss various aspects of our design and show detailed results over production SCOPE workloads at Microsoft, where the system is currently enabled by default.
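
The abstract does not spell out the exploration loop, but its general shape can be sketched: offline, try candidate steering hints per query under a trial budget, and adopt a hint only when it decisively beats the default plan, guarding against regressions. Everything below (function names, the 10% win threshold) is a hypothetical toy, not QQ-advisor's actual pipeline.

```python
def explore_steering(workload, candidate_hints, run_cost, budget):
    """Toy offline exploration: keep, per query, the hint that beats the default.

    `run_cost(query, hint)` returns a measured plan cost; hint=None is the default.
    `budget` caps the total number of trial runs across the workload.
    """
    chosen, trials = {}, 0
    for query in workload:
        baseline = run_cost(query, None)
        best_hint, best_cost = None, baseline
        for hint in candidate_hints:
            if trials >= budget:
                break
            trials += 1
            c = run_cost(query, hint)
            if c < best_cost:
                best_hint, best_cost = hint, c
        # Only steer when the win is decisive, to avoid production regressions.
        if best_hint is not None and best_cost < 0.9 * baseline:
            chosen[query] = best_hint
    return chosen
```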

Differentiable planning promises end-to-end differentiability and adaptivity. However, an issue prevents it from scaling up to larger problems: differentiable planners need to differentiate through their forward iteration layers to compute gradients, which couples the forward computation with backpropagation and forces a trade-off between forward planner performance and the computational cost of the backward pass. To alleviate this issue, we propose differentiating through the Bellman fixed-point equation to decouple the forward and backward passes for the Value Iteration Network (VIN) and its variants, which enables a backward cost that is constant in the planning horizon, allows a flexible forward budget, and helps scale up to large tasks. We study the convergence stability, scalability, and efficiency of the proposed implicit version of VIN and its variants, and demonstrate their superiority on a range of planning tasks: 2D navigation, visual navigation, and 2-DOF manipulation in configuration space and workspace.
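
To see why the backward cost becomes constant in the planning horizon, consider the policy-evaluation special case V* = r + γPV*. The forward pass may iterate as long as it likes; by the implicit function theorem, the Jacobian of V* with respect to r is (I − γP)^{-1}, so the backward pass is a single linear solve regardless of how many forward iterations were run. Below is a minimal numpy sketch of this idea, not the paper's VIN implementation.

```python
import numpy as np

def policy_eval_fixed_point(P, r, gamma=0.95, tol=1e-8):
    """Forward pass: iterate V <- r + gamma * P @ V to convergence (any # of steps)."""
    V = np.zeros_like(r, dtype=float)
    while True:
        V_new = r + gamma * P @ V
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

def grad_wrt_reward(P, gamma, upstream):
    """Backward pass via the implicit function theorem at V* = r + gamma*P*V*:
    dV*/dr = (I - gamma*P)^{-1}, so the vector-Jacobian product is one linear
    solve, independent of the number of forward iterations."""
    n = P.shape[0]
    return np.linalg.solve((np.eye(n) - gamma * P).T, upstream)
```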

Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices, via iterative local updates (at devices) and global aggregations (at the server). In this paper, we develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions: (i) Network, allowing decentralized cooperation among the devices via device-to-device (D2D) communications. (ii) Heterogeneity, interpreted at three levels: (ii-a) Learning: PSL considers heterogeneous numbers of stochastic gradient descent iterations with different mini-batch sizes at the devices; (ii-b) Data: PSL presumes a dynamic environment with data arrival and departure, where the distributions of local datasets evolve over time, captured via a new metric for model/concept drift; (ii-c) Device: PSL considers devices with different computation and communication capabilities. (iii) Proximity, where devices have different distances to each other and to the access point. PSL considers the realistic scenario where global aggregations are conducted with idle times in between them for resource efficiency improvements, and incorporates data dispersion and model dispersion with local model condensation into FedL. Our analysis sheds light on the notion of cold vs. warmed-up models, and on model inertia in distributed machine learning. We then propose network-aware dynamic model tracking to optimize the model learning vs. resource efficiency trade-off, which we show is an NP-hard signomial programming problem. We finally solve this problem by proposing a general optimization solver. Our numerical results reveal new findings on the interdependencies between the idle times in between the global aggregations, model/concept drift, and D2D cooperation configuration.
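
Of the three dimensions above, the learning-heterogeneity level (ii-a) is the easiest to sketch: between two global aggregations, each device runs its own number of SGD steps with its own mini-batch size, and the server averages the results weighted by dataset size. The toy below uses a linear-regression loss and omits D2D cooperation, drift tracking, and model condensation; all names are hypothetical.

```python
import numpy as np

def local_sgd_round(w_global, devices, lr=0.1):
    """One aggregation round with per-device step counts and batch sizes.

    Each device dict holds its data (X, y) plus its own `steps` and `batch`.
    Model: linear regression; gradient of 0.5 * ||X w - y||^2 / batch.
    """
    updates, weights = [], []
    for dev in devices:
        w = w_global.copy()
        rng = np.random.default_rng(dev.get("seed", 0))
        for _ in range(dev["steps"]):                           # heterogeneous step count
            idx = rng.choice(len(dev["y"]), size=dev["batch"])  # heterogeneous batch size
            X, y = dev["X"][idx], dev["y"][idx]
            w -= lr * X.T @ (X @ w - y) / dev["batch"]
        updates.append(w)
        weights.append(len(dev["y"]))                           # weight by dataset size
    return np.average(updates, axis=0, weights=weights)
```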

Power splitting (PS) based simultaneous wireless information and power transfer (SWIPT) is considered in a multi-user multiple-input-single-output broadcast scenario. Specifically, we focus on jointly configuring the transmit beamforming vectors and receive PS ratios to minimize the total transmit energy of the base station under user-specific latency and energy harvesting (EH) requirements. The battery depletion phenomenon is avoided by preemptively incorporating information regarding the receivers' battery state and EH fluctuations into the resource allocation design. The resulting time-average sum-power minimization problem is temporally correlated, non-convex (including mutually coupled latency-battery queue dynamics), and in general intractable. We use the Lyapunov optimization framework and derive a dynamic control algorithm to transform the original problem into a sequence of deterministic and independent subproblems, which are then solved via two alternative approaches: i) semidefinite relaxation combined with fractional programming, and ii) successive convex approximation. Furthermore, we design a low-complexity closed-form iterative algorithm exploiting the Karush-Kuhn-Tucker optimality conditions for a specific scenario with delay-bounded batteryless receivers. Numerical results provide insights into the robustness of the proposed design in realizing an energy-efficient SWIPT system while ensuring the latency and EH requirements in a time-dynamic mobile access network.
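
The Lyapunov drift-plus-penalty machinery behind the per-slot subproblems can be sketched in scalar form: maintain a virtual latency queue Q, and in each slot choose the transmit power minimizing V·p − Q·rate(p), where V trades power saving against queue backlog. The beamforming subproblem (solved in the paper via SDR or SCA) is abstracted here to a scalar power choice; this is an assumption-laden toy, not the proposed algorithm.

```python
import numpy as np

def drift_plus_penalty(slots, arrivals, rate, V=10.0, p_grid=np.linspace(0, 1, 101)):
    """Toy Lyapunov control: each slot, pick power p minimizing V*p - Q*rate(p).

    `rate(p)` is the service (bits/slot) obtained with power p; Q is the
    latency virtual queue. Larger V favors power saving over backlog.
    """
    Q, trace = 0.0, []
    for t in range(slots):
        # Per-slot deterministic subproblem from the drift-plus-penalty bound.
        p = min(p_grid, key=lambda p: V * p - Q * rate(p))
        Q = max(Q + arrivals[t] - rate(p), 0.0)  # virtual queue update
        trace.append((p, Q))
    return trace

# Example: a Shannon-style rate curve and bursty arrivals.
trace = drift_plus_penalty(100, np.random.default_rng(0).poisson(1.0, 100),
                           rate=lambda p: np.log2(1 + 5 * p))
```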

Motion planning and control are crucial components of robotics applications. Here, spatio-temporal hard constraints such as system dynamics and safety boundaries (e.g., obstacles in automated driving) restrict the robot's motions. Direct methods from optimal control solve a constrained optimization problem. However, in many applications finding a proper cost function is inherently difficult because of the weighting of partially conflicting objectives. On the other hand, Imitation Learning (IL) methods such as Behavior Cloning (BC) provide an intuitive framework for learning decision-making from offline demonstrations and constitute a promising avenue for planning and control in complex robot applications. Prior work has primarily relied on soft-constraint approaches, which use additional auxiliary loss terms describing the constraints. However, catastrophic safety-critical failures might occur in out-of-distribution (OOD) scenarios. This work integrates the flexibility of IL with hard constraint handling from optimal control. Our approach constitutes a general framework for constrained robotic motion planning and control using offline IL. Hard constraints are integrated into the learning problem in a differentiable manner, via explicit completion and gradient-based correction. Simulated experiments on mobile robot navigation and automated driving provide evidence for the performance of the proposed method.
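
The gradient-based correction idea can be sketched independently of the learning pipeline: given a proposed action sequence and a differentiable measure of constraint violation, descend on the violation until it reaches zero. The box-constraint example and all names below are hypothetical; the paper additionally uses an explicit completion step that this sketch omits.

```python
import numpy as np

def correct(actions, violation, grad_violation, steps=50, lr=0.1):
    """Gradient-based correction: push a proposed plan toward feasibility.

    `violation(a) >= 0` measures total constraint violation (0 iff feasible);
    `grad_violation(a)` is its gradient. Because the correction is built from
    differentiable operations, it can sit inside a learning pipeline rather
    than being applied only at deployment time.
    """
    a = actions.copy()
    for _ in range(steps):
        if violation(a) <= 0.0:
            break
        a -= lr * grad_violation(a)
    return a

# Example: box constraint |a_i| <= 1 on each action.
viol = lambda a: np.sum(np.maximum(np.abs(a) - 1.0, 0.0) ** 2)
gviol = lambda a: 2.0 * np.maximum(np.abs(a) - 1.0, 0.0) * np.sign(a)
safe = correct(np.array([1.5, -0.3, 2.0]), viol, gviol)
```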

We describe ACE0, a lightweight platform for evaluating the suitability and viability of AI methods for behaviour discovery in multi-agent simulations. Specifically, ACE0 was designed to explore AI methods for multi-agent simulations used in operations research studies related to new technologies such as autonomous aircraft. Simulation environments used in production are often high-fidelity and complex, require significant domain knowledge, and as a result have high R&D costs. Minimal and lightweight simulation environments can help researchers and engineers evaluate the viability of new AI technologies for behaviour discovery in a more agile and potentially more cost-effective manner. In this paper we describe the motivation for the development of ACE0. We provide a technical overview of the system architecture, describe a case study of behaviour discovery in the aerospace domain, and provide a qualitative evaluation of the system. The evaluation includes a brief description of collaborative research projects with academic partners, exploring different AI behaviour discovery methods.
