
Future wireless systems are expected to support mission-critical services demanding higher and higher reliability. In this letter, we dimension the radio resources needed to achieve a given failure probability target for ultra-reliable wireless systems under high interference, assuming a protocol that combines frequency hopping with packet repetitions. We resort to packet erasure channel models and derive the minimum number of resource units required for receivers with and without collision resolution capability, as well as the number of packet repetitions needed to achieve the failure probability target. The analytical results are numerically validated and can be used as a benchmark for realistic system simulations.
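To make the dimensioning question concrete, the sketch below uses a deliberately simplified erasure model that is not necessarily the one in the letter: each of a fixed number of interferers hops to a uniformly random resource unit, a repetition is lost whenever any interferer lands on the same unit (no collision resolution), and repetitions are assumed independent. Under those assumptions one can solve directly for the smallest number of repetitions meeting a failure target.

```python
import math

def repetition_collision_prob(n_channels: int, n_interferers: int) -> float:
    """Probability that a single repetition is erased when every interferer
    hops to a uniformly random resource unit (no collision resolution)."""
    return 1.0 - (1.0 - 1.0 / n_channels) ** n_interferers

def min_repetitions(n_channels: int, n_interferers: int, target_fail: float) -> int:
    """Smallest number of independent repetitions K such that the probability
    of losing all K copies stays below the failure-probability target."""
    p_col = repetition_collision_prob(n_channels, n_interferers)
    if p_col == 0.0:
        return 1
    return math.ceil(math.log(target_fail) / math.log(p_col))

# Illustrative numbers only: 48 resource units, 20 interferers, 1e-5 target.
print(min_repetitions(48, 20, 1e-5))
```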


Interest in the network analysis of bibliographic data has increased significantly in recent years. Yet appropriate statistical models for examining the full dynamics of scientific citation networks, connecting authors to the papers they write and papers to the other papers they cite, are not available. Very few studies have examined how the social network between co-authors and the citation network among the papers shape one another and co-evolve. Consequently, our understanding of scientific citation networks remains incomplete. In this paper we extend recently derived relational hyperevent models (RHEM) to the analysis of scientific networks, providing a general framework to model the multiple dependencies involved in the relation linking multiple authors to the papers they write, and papers to the multiple references they cite. We demonstrate the empirical value of our model in an analysis of publicly available data on a scientific network comprising millions of authors and papers and assess the relative strength of various effects explaining scientific production. We outline the implications of the model for the evaluation of scientific research.
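For orientation, the display below sketches the kind of log-linear event-rate specification on which relational (hyper)event models are typically built; the hyperedge statistics, their weights, and the author/reference notation are illustrative placeholders rather than the exact specification used in the paper.

```latex
% Illustrative rate for a publication hyperevent e with author set A(e) and
% cited-reference set R(e), given the network history H_t. The statistics s_k
% and weights \beta_k are placeholders, not the paper's exact effects.
\lambda\bigl(e \mid H_t\bigr)
  \;=\; \exp\!\Bigl( \textstyle\sum_{k} \beta_k \, s_k\bigl(A(e), R(e); H_t\bigr) \Bigr)
```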

The capacity of a channel characterizes the maximum rate at which information can be transmitted through the channel asymptotically faithfully. For a channel with multiple senders and a single receiver, computing its sum capacity is possible in theory, but challenging in practice because of the nonconvex optimization involved. To address this challenge, we investigate three topics in our study. In the first part, we study the sum capacity of a family of multiple access channels (MACs) obtained from nonlocal games. For any MAC in this family, we obtain an upper bound on the sum rate that depends only on the properties of the game when allowing assistance from an arbitrary set of correlations between the senders. This approach can be used to prove separations between sum capacities when the senders are allowed to share different sets of correlations, such as classical, quantum, or no-signalling correlations. We also construct a specific nonlocal game to show that the approach of bounding the sum capacity by relaxing the nonconvex optimization can give arbitrarily loose bounds. Owing to this result, in the second part, we study algorithms for nonconvex optimization of a class of functions we call Lipschitz-like functions. This class includes entropic quantities, and hence these results may be of independent interest in information theory. Subsequently, in the third part, we show that one can use these techniques to compute the sum capacity of an arbitrary two-sender MAC to a fixed additive precision in quasi-polynomial time. We showcase our method by efficiently computing the sum capacity of a family of two-sender MACs for which one of the input alphabets has size two. Furthermore, we demonstrate with an example that our algorithm may compute the sum capacity to a higher precision than the convex relaxation does.
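As a reminder of the quantity being optimized, the sum capacity of a two-sender MAC with transition law W(y | x1, x2) is the maximum of I(X1, X2; Y) over product input distributions. The sketch below is not the paper's quasi-polynomial algorithm; it simply brute-forces that nonconvex maximization on a grid for binary input alphabets, using the binary adder MAC as a sanity check.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector, ignoring zero entries."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def sum_rate(p1, p2, W):
    """I(X1, X2; Y) for independent inputs p1, p2 and channel W[y, x1, x2]."""
    joint_x = np.outer(p1, p2)                    # p(x1, x2) = p1(x1) p2(x2)
    p_y = np.einsum('yab,ab->y', W, joint_x)      # marginal output distribution
    h_y_given_x = sum(joint_x[i, j] * entropy(W[:, i, j])
                      for i in range(W.shape[1]) for j in range(W.shape[2]))
    return entropy(p_y) - h_y_given_x

def sum_capacity_grid(W, grid=101):
    """Brute-force the sum capacity of a two-sender MAC with binary inputs by
    searching over product input distributions on a uniform grid."""
    ps = np.linspace(0.0, 1.0, grid)
    return max(sum_rate([a, 1 - a], [b, 1 - b], W) for a in ps for b in ps)

# Example: binary adder MAC, Y = X1 + X2, inputs {0, 1}, outputs {0, 1, 2}.
W = np.zeros((3, 2, 2))
for x1 in range(2):
    for x2 in range(2):
        W[x1 + x2, x1, x2] = 1.0
print(sum_capacity_grid(W))   # approximately 1.5 bits
```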

How do you ensure that, in a reverberant room, several people can speak simultaneously to several other people, making themselves perfectly understood and without any crosstalk between messages? In this work, we report a conceptual solution to this problem by developing an intelligent acoustic wall, which can be reconfigured electronically and is controlled by a learning algorithm that adapts to the geometry of the room and the positions of sources and receivers. To this end, a portion of the room boundaries is covered with a smart mirror made of a broadband acoustic reconfigurable metasurface (ARM) designed to impose a two-state (0 or π) phase shift on the reflected waves through 200 independently tunable units. The whole phase pattern is optimized to maximize the Shannon capacity while minimizing crosstalk between the different sources and receivers. We demonstrate the control of multi-spectral sound fields covering a spectrum much larger than the coherence bandwidth of the room for several striking functionalities, including crosstalk-free acoustic communication, frequency-multiplexed communications, and multi-user communications. An experiment conducted with two music sources for two different people demonstrates crosstalk-free simultaneous music playback. Our work opens new routes for the control of sound waves in complex media and for a new generation of acoustic devices.
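The core optimization is over a binary phase pattern with a measured figure of merit. The sketch below is only one plausible way to do that (greedy coordinate descent over unit flips against a black-box `measure` function); the paper's learning algorithm and objective may differ, and the toy linear response model at the end is purely a stand-in for measured transfer functions.

```python
import numpy as np

def optimize_phase_pattern(measure, n_units=200, n_sweeps=5, rng=None):
    """Coordinate descent over a binary (0 / pi) phase pattern.

    `measure(pattern)` is assumed to return a scalar figure of merit, e.g.
    direct-channel power minus crosstalk power between all source/receiver
    pairs, obtained from measured transfer functions."""
    rng = np.random.default_rng(rng)
    pattern = rng.integers(0, 2, size=n_units)   # 0 -> 0 phase, 1 -> pi phase
    best = measure(pattern)
    for _ in range(n_sweeps):
        for n in rng.permutation(n_units):       # sweep units in random order
            pattern[n] ^= 1                      # try flipping one unit
            score = measure(pattern)
            if score > best:
                best = score                     # keep the flip
            else:
                pattern[n] ^= 1                  # revert
    return pattern, best

# Toy stand-in for the measured objective: a random linear response model
# for 2 sources and 2 receivers, just to show how the optimizer is called.
H = np.random.default_rng(0).normal(size=(2, 2, 200))
def toy_measure(pattern):
    s = 2 * pattern - 1                          # map {0, 1} -> {-1, +1}
    G = H @ s                                    # 2x2 effective transfer matrix
    return (abs(G[0, 0])**2 + abs(G[1, 1])**2
            - abs(G[0, 1])**2 - abs(G[1, 0])**2)

best_pattern, best_score = optimize_phase_pattern(toy_measure)
```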

Navigating automated driving systems (ADSs) through complex driving environments is difficult. Predicting the driving behavior of surrounding human-driven vehicles (HDVs) is a critical component of an ADS. This paper proposes an enhanced motion-planning approach for an ADS in a highway-merging scenario. The proposed approach couples two predictions about the surrounding HDVs, their driving behavior and their long-term trajectories, in a hierarchical model whose output feeds the motion planning of the ADS to improve driving safety.
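To illustrate what a behavior/trajectory/planning hierarchy can look like, here is a deliberately toy pipeline. All three layers (a logistic yield-intent guess, a constant-acceleration rollout, and a constant-speed ego planner with a fixed safety gap) are hypothetical placeholders, not the models proposed in the paper.

```python
import numpy as np

def predict_merge_intent(gap, rel_speed):
    """Toy behaviour layer: probability that the surrounding HDV yields,
    from the gap to the ego vehicle (m) and the relative speed (m/s)."""
    return 1.0 / (1.0 + np.exp(-(0.2 * gap - 0.5 * rel_speed)))

def predict_trajectory(x0, v0, yields, horizon=5.0, dt=0.5):
    """Toy trajectory layer: constant-speed rollout, mild braking if yielding."""
    accel = -1.0 if yields else 0.0
    t = np.arange(0.0, horizon, dt)
    return x0 + v0 * t + 0.5 * accel * t**2

def plan_ego_speed(ego_x0, hdv_traj, dt=0.5, candidates=np.linspace(5, 25, 41)):
    """Toy planning layer: pick the fastest constant ego speed that keeps a
    10 m safety gap to the predicted HDV trajectory at every step."""
    t = np.arange(len(hdv_traj)) * dt
    safe = [v for v in candidates if np.all(hdv_traj - (ego_x0 + v * t) > 10.0)]
    return max(safe) if safe else candidates[0]

# Hypothetical pipeline: behaviour -> trajectory -> plan.
yields = predict_merge_intent(gap=30.0, rel_speed=2.0) > 0.5
hdv_traj = predict_trajectory(x0=40.0, v0=20.0, yields=yields)
print(plan_ego_speed(ego_x0=0.0, hdv_traj=hdv_traj))
```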

Complex systems that consist of different kinds of entities interacting in different ways can be modeled by multilayer networks. This paper uses the tensor formalism with the Einstein tensor product to model this type of network. Several centrality measures that are well known for single-layer networks are extended to multilayer networks using tensors, and their properties are investigated. In particular, subgraph centrality based on the exponential and the resolvent of a tensor is considered. Krylov subspace methods are introduced for computing approximations of different measures for large multilayer networks.
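A small sketch of the exponential-based subgraph centrality in this setting: because the Einstein product of fourth-order adjacency tensors corresponds to ordinary matrix multiplication of their unfoldings, the tensor exponential can be evaluated through the supra-adjacency matrix. The dense `expm` below is only for illustration; for large networks one would use the Krylov/Lanczos approximations the paper introduces.

```python
import numpy as np
from scipy.linalg import expm

def subgraph_centrality_multilayer(A):
    """Subgraph centrality for a multilayer network given as a 4th-order
    adjacency tensor A[i, l, j, m]: (node i, layer l) -> (node j, layer m)."""
    n, L = A.shape[0], A.shape[1]
    M = A.reshape(n * L, n * L)          # unfold the (i, l) and (j, m) index pairs
    C = np.diag(expm(M))                 # centralities of node-layer pairs
    return C.reshape(n, L)               # centrality of node i in layer l

# Small example: 3 nodes, 2 layers, random symmetric intra/inter-layer links.
rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(3, 2, 3, 2)).astype(float)
A = (A + A.transpose(2, 3, 0, 1)) / 2    # symmetrise the tensor
print(subgraph_centrality_multilayer(A))
```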

Network control theory can be used to model how one should steer the brain between different states by driving a specific region with an input. The energy needed to control a network is often used to quantify its controllability, and controlling brain networks requires different amounts of energy depending on the selected input region. We use the theory of how input-node placement affects the longest control chain (LCC) to study the role of the architecture of white-matter fibers in the energy required to control brain networks. We show that the energy needed to control human brain networks is related to the LCC, i.e., the longest distance between the input region and the other regions in the network. Regions that control brain networks with lower energy have small LCCs; these regions align with areas that can steer the brain smoothly around its state space. By contrast, regions that need higher energy to move the brain toward different target states have larger LCCs. We also investigate the role of the number of paths between regions in the control energy. Our results show that the more paths there are between regions, the lower the cost of controlling brain networks. We estimate the number of paths by counting specific motifs in brain networks, since determining all paths in a graph is a difficult problem.
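The two quantities being related can both be computed from a connectivity matrix. The sketch below is a generic single-input recipe, not the authors' exact pipeline: a stabilized linear dynamics matrix, the infinite-horizon controllability Gramian from a Lyapunov equation, the energy proxy x_f^T W^{-1} x_f, and the LCC taken as the largest shortest-path distance from the input node.

```python
import numpy as np
import networkx as nx
from scipy.linalg import solve_continuous_lyapunov

def control_energy_and_lcc(A, input_node, x_target):
    """Minimum-energy proxy and longest control chain (LCC) for one input node.

    A is a (weighted) structural connectivity matrix; the dynamics matrix is
    normalised and shifted so the infinite-horizon Gramian exists."""
    n = A.shape[0]
    A_stab = A / (1.0 + np.abs(np.linalg.eigvals(A)).max()) - np.eye(n)  # stable
    B = np.zeros((n, 1)); B[input_node, 0] = 1.0
    W = solve_continuous_lyapunov(A_stab, -B @ B.T)      # controllability Gramian
    energy = float(x_target @ np.linalg.pinv(W) @ x_target)
    G = nx.from_numpy_array(A)
    lcc = max(nx.single_source_shortest_path_length(G, input_node).values())
    return energy, lcc

# Toy comparison on a path graph: drive from an end node vs. a central node.
A = nx.to_numpy_array(nx.path_graph(8))
xf = np.ones(8)
print(control_energy_and_lcc(A, 0, xf))   # end node, LCC = 7
print(control_energy_and_lcc(A, 4, xf))   # near-central node, LCC = 4
```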

Physics-informed neural networks (PINNs) have emerged as a powerful tool for solving inverse problems, especially in cases where complete information about the system is unavailable and only scattered measurements are at hand. This is especially useful in hemodynamics, since the boundary information is often difficult to model and high-quality blood flow measurements are generally hard to obtain. In this work, we use the PINN methodology to estimate reduced-order model parameters and the full velocity field from scattered, noisy 2D measurements in the ascending aorta. The results show stable and accurate parameter estimates when the method is applied to simulated data, while the velocity reconstruction depends on the measurement quality and the complexity of the flow pattern. The method allows clinically relevant inverse problems in hemodynamics and complex coupled physical systems to be solved.
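The PINN inverse-problem recipe is easiest to see on a toy case. The sketch below does not reproduce the paper's aortic flow model; it uses a hypothetical two-element Windkessel ODE with an unknown resistance, and shows the typical ingredients: a network surrogate for the state, a data loss on scattered noisy samples, a physics-residual loss at collocation points, and the unknown model parameter trained jointly with the network.

```python
import torch

# Toy reduced-order model (two-element Windkessel): C dP/dt + P / R = Q(t),
# with known compliance C and inflow Q(t), and unknown resistance R.
C_true, R_true = 1.0, 2.0
Q = lambda t: 1.0 + 0.5 * torch.sin(2 * torch.pi * t)

# Synthetic scattered "measurements": forward-Euler reference solution + noise.
t_fine = torch.linspace(0.0, 3.0, 3001)
dt = float(t_fine[1] - t_fine[0])
P_fine = torch.zeros_like(t_fine)
for k in range(1, len(t_fine)):
    P_fine[k] = P_fine[k - 1] + dt * (Q(t_fine[k - 1]) - P_fine[k - 1] / R_true) / C_true
idx = torch.randint(0, len(t_fine), (40,))
t_data = t_fine[idx].reshape(-1, 1)
P_data = P_fine[idx].reshape(-1, 1) + 0.02 * torch.randn(40, 1)

# PINN: an MLP surrogate for P(t) plus the unknown R as a trainable parameter.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
log_R = torch.nn.Parameter(torch.tensor(0.0))            # R = exp(log_R) > 0
opt = torch.optim.Adam(list(net.parameters()) + [log_R], lr=1e-3)
t_col = torch.linspace(0.0, 3.0, 200).reshape(-1, 1).requires_grad_(True)

for step in range(5000):
    opt.zero_grad()
    P_col = net(t_col)
    dPdt = torch.autograd.grad(P_col, t_col, torch.ones_like(P_col),
                               create_graph=True)[0]
    physics = ((C_true * dPdt + P_col / torch.exp(log_R) - Q(t_col)) ** 2).mean()
    data = ((net(t_data) - P_data) ** 2).mean()
    (data + physics).backward()
    opt.step()

print("estimated R:", torch.exp(log_R).item())            # should move toward 2.0
```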

Plasticity, the ability of a neural network to quickly change its predictions in response to new information, is essential for the adaptability and robustness of deep reinforcement learning systems. Deep neural networks are known to lose plasticity over the course of training even in relatively simple learning problems, but the mechanisms driving this phenomenon are still poorly understood. This paper presents a systematic empirical analysis of plasticity loss, with the goal of understanding the phenomenon mechanistically in order to guide the future development of targeted solutions. We find that loss of plasticity is deeply connected to changes in the curvature of the loss landscape, but that it often occurs in the absence of saturated units. Based on this insight, we identify a number of parameterization and optimization design choices that enable networks to better preserve plasticity over the course of training. We validate the utility of these findings on larger-scale RL benchmarks in the Arcade Learning Environment.
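The two diagnostics mentioned above (unit saturation and loss-landscape curvature) can be tracked with fairly standard tools. The sketch below shows two generic, hypothetical diagnostics, not the paper's exact metrics: the fraction of units that never activate over a batch, and the largest-magnitude Hessian eigenvalue estimated by power iteration with Hessian-vector products.

```python
import torch

def dormant_fraction(activations, tau=0.0):
    """Fraction of units whose activation magnitude never exceeds tau over a
    batch; one common proxy for saturated / dead units."""
    max_abs = activations.abs().amax(dim=0)        # per-unit max over the batch
    return float((max_abs <= tau).float().mean())

def top_curvature(loss_fn, params, iters=20):
    """Largest-magnitude Hessian eigenvalue of the loss (a curvature proxy),
    estimated by power iteration with Hessian-vector products."""
    def hvp(v):
        grads = torch.autograd.grad(loss_fn(), params, create_graph=True)
        gv = sum((g * x).sum() for g, x in zip(grads, v))
        return [h.detach() for h in torch.autograd.grad(gv, params)]
    v = [torch.randn_like(p) for p in params]
    for _ in range(iters):
        norm = torch.sqrt(sum((x ** 2).sum() for x in v))
        v = [x / norm for x in v]
        Hv = hvp(v)
        rayleigh = float(sum((h * x).sum() for h, x in zip(Hv, v)))
        v = Hv                                     # power-iteration update
    return rayleigh
```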

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and is itself difficult to scale. In this paper we present four algorithms to address these problems. In combination, these algorithms enable each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system according to how close to optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
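As an illustration of the core idea (learned allocation with exploration that shrinks when the agent believes its strategy is near optimal and grows again when performance drops), here is a minimal, hypothetical agent. It is a generic bandit-style learner, not the paper's four algorithms, and the peer-quality feedback is abstracted into a single reward signal.

```python
import random
from collections import defaultdict

class AllocatorAgent:
    """Minimal sketch: learn which peer to send each subtask type to, and
    adapt exploration to how close recent rewards are to the best seen."""

    def __init__(self, peers, lr=0.1, eps=1.0, eps_min=0.05):
        self.peers, self.lr = peers, lr
        self.eps, self.eps_min = eps, eps_min
        self.q = defaultdict(float)          # Q[(subtask_type, peer)]
        self.best_reward = float("-inf")

    def choose(self, subtask_type):
        if random.random() < self.eps:
            return random.choice(self.peers)                            # explore
        return max(self.peers, key=lambda p: self.q[(subtask_type, p)])  # exploit

    def update(self, subtask_type, peer, reward):
        key = (subtask_type, peer)
        self.q[key] += self.lr * (reward - self.q[key])   # running value estimate
        self.best_reward = max(self.best_reward, reward)
        ratio = reward / self.best_reward if self.best_reward > 0 else 0.0
        if ratio > 0.9:
            # Near the best performance seen so far: explore less.
            self.eps = max(self.eps_min, self.eps * 0.95)
        else:
            # Performance dropped (e.g. after a network disruption): explore more.
            self.eps = min(1.0, self.eps * 1.05)
```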

Recent advances in 3D fully convolutional networks (FCNs) have made it feasible to produce dense voxel-wise predictions for volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to approximately 10% and allows it to focus on a more detailed segmentation of the organs and vessels. We use training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection, acquired at a different hospital, that includes 150 CT scans targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance on small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN-based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download at https://github.com/holgerroth/3Dunet_abdomen_cascade.
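The coarse-to-fine hand-off amounts to cropping the second network's input to a padded bounding box around the first network's foreground prediction. The helper names and the exact thresholding/padding rules below are illustrative assumptions, not taken from the paper's released code.

```python
import numpy as np

def candidate_crop(coarse_prob, volume, threshold=0.5, margin=8):
    """Stage-1 -> stage-2 hand-off of a coarse-to-fine cascade: keep only a
    padded bounding box around the coarse foreground prediction, so the second
    FCN classifies a small fraction of the voxels at finer effective resolution."""
    fg = np.argwhere(coarse_prob > threshold)
    if fg.size == 0:                                   # nothing found: keep all
        return volume, tuple(slice(0, s) for s in volume.shape)
    lo = np.maximum(fg.min(axis=0) - margin, 0)
    hi = np.minimum(fg.max(axis=0) + margin + 1, volume.shape)
    box = tuple(slice(a, b) for a, b in zip(lo, hi))
    return volume[box], box                            # cropped input + paste box

def paste_back(fine_pred, box, full_shape):
    """Place the fine (stage-2) prediction back into the full volume."""
    out = np.zeros(full_shape, dtype=fine_pred.dtype)
    out[box] = fine_pred
    return out
```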
