
Future wireless systems are envisioned to create an endogenously holography-capable, intelligent, and programmable radio propagation environment that will offer unprecedented capabilities for high spectral and energy efficiency, low latency, and massive connectivity. A promising technology for supporting the extreme requirements expected of sixth-generation (6G) communication systems is the holographic multiple-input multiple-output (MIMO) surface (HMIMOS), which will actualize holographic radios with reasonable power consumption and fabrication cost. An HMIMOS is a nearly continuous aperture that incorporates reconfigurable and sub-wavelength-spaced antennas and/or metamaterials. Such surfaces, comprising dense electromagnetically (EM) excited elements, are capable of recording and manipulating impinging fields with utmost flexibility and precision, as well as with reduced cost and power consumption, thereby shaping arbitrarily intended EM waves with high energy efficiency. The powerful EM processing capability of HMIMOS opens up the possibility of wireless communications at the level of holographic imaging, paving the way for signal processing techniques realized in the EM domain, possibly in conjunction with their digital-domain counterparts. Despite this significant potential, however, studies on HMIMOS-based wireless systems are still at an early stage. In this survey, we present a comprehensive overview of the latest advances in holographic MIMO communications, with a special focus on their physical aspects, theoretical foundations, and enabling technologies. We also compare HMIMOS systems with conventional multi-antenna technologies, especially massive MIMO systems, present various promising synergies of HMIMOS with current and future candidate technologies, and provide an extensive list of research challenges and open directions.
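
To make the idea of EM-domain wave shaping by such a dense aperture concrete, the minimal sketch below (with hypothetical carrier frequency, aperture size, and steering angle) computes phase-only steering weights for a sub-wavelength-spaced linear aperture and evaluates the resulting far-field array factor; it is a textbook-style illustration of dense-aperture beam shaping, not a model of any specific HMIMOS design.

```python
import numpy as np

# Hypothetical parameters: 30 GHz carrier, 20 cm aperture, lambda/8 element spacing.
lam = 0.01                                    # wavelength [m]
aperture = 0.2                                # physical aperture length [m]
spacing = lam / 8                             # sub-wavelength (HMIMOS-like) spacing
theta0 = np.deg2rad(25)                       # intended steering direction

n = int(aperture / spacing) + 1               # number of densely packed elements
x = np.arange(n) * spacing                    # element positions along the aperture
k = 2 * np.pi / lam

w = np.exp(-1j * k * x * np.sin(theta0))      # phase-only weights shaping the wavefront
angles = np.linspace(-np.pi / 2, np.pi / 2, 721)
a = np.exp(1j * k * x[:, None] * np.sin(angles)[None, :])
pattern = np.abs(w @ a) / n                   # normalised array factor, peaks at theta0
```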

Related content

This paper explores the use of reconfigurable intelligent surfaces (RIS) in mitigating cross-system interference in spectrum sharing and secure wireless applications. Unlike a conventional RIS, which can only adjust the phase of the incoming signal and essentially reflects all impinging energy, or an active RIS, which also amplifies the reflected signal at the cost of significantly higher complexity, noise, and power consumption, an absorptive RIS (ARIS) is considered. An ARIS can in principle modify both the phase and the modulus of the impinging signal by absorbing a portion of the signal energy, providing a compromise between its conventional and active counterparts in terms of complexity, power consumption, and degrees of freedom (DoFs). We first use a toy example to illustrate the benefit of ARIS, and then consider three applications: (1) spectral coexistence of radar and communication systems, where a convex optimization problem is formulated to minimize the Frobenius norm of the channel matrix from the communication base station to the radar receiver; (2) spectrum sharing in device-to-device (D2D) communications, where a max-min scheme that maximizes the worst-case signal-to-interference-plus-noise ratio (SINR) among the D2D links is developed and then solved via fractional programming; and (3) the physical-layer security of a downlink communication system, where the secrecy rate is maximized and the resulting nonconvex problem is solved by a fractional programming algorithm together with a sequential convex relaxation procedure. Numerical results are then presented to show the significant benefits of ARIS in these applications.
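
As a rough illustration of the structure of the first application, the sketch below sets up a convex problem of the kind described: minimizing the Frobenius norm of the effective base-station-to-radar channel over ARIS reflection coefficients whose moduli may not exceed one. The channel matrices, dimensions, and variable names are hypothetical stand-ins, not the paper's exact formulation.

```python
import numpy as np
import cvxpy as cp

# Hypothetical dimensions: Nr radar receive antennas, Nb BS transmit antennas, M ARIS elements.
Nr, Nb, M = 4, 8, 32
rng = np.random.default_rng(0)

# Randomly drawn channels for illustration only.
H_d = (rng.standard_normal((Nr, Nb)) + 1j * rng.standard_normal((Nr, Nb))) / np.sqrt(2)  # BS -> radar (direct)
G_r = (rng.standard_normal((Nr, M)) + 1j * rng.standard_normal((Nr, M))) / np.sqrt(2)    # ARIS -> radar
G_b = (rng.standard_normal((M, Nb)) + 1j * rng.standard_normal((M, Nb))) / np.sqrt(2)    # BS -> ARIS

phi = cp.Variable(M, complex=True)                       # ARIS reflection coefficients
H_eff = H_d + G_r @ cp.diag(phi) @ G_b                   # effective BS -> radar channel
problem = cp.Problem(cp.Minimize(cp.norm(H_eff, 'fro')),
                     [cp.abs(phi) <= 1])                 # absorptive: |phi_m| <= 1
problem.solve()
print("residual interference norm:", problem.value,
      "vs. direct-path norm:", np.linalg.norm(H_d, 'fro'))
```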

Glioblastoma is the most common and aggressive malignant adult tumor of the central nervous system, with a grim prognosis and heterogeneous morphologic and molecular profiles. Since the adoption of the current standard-of-care treatment 18 years ago, no substantial prognostic improvements have been observed. Accurate prediction of patient overall survival (OS) from clinical histopathology whole slide images (WSI) using advanced computational methods could contribute to optimizing clinical decision making and patient management. Here, we focus on identifying prognostically relevant glioblastoma morphologic patterns on H&E-stained WSI. Our approach capitalizes on comprehensive curation of the WSI to remove apparent artifactual content, and on an interpretability mechanism via a weakly supervised, attention-based multiple instance learning algorithm that further utilizes clustering to constrain the search space. The automatically identified patterns of high diagnostic value are used to classify the WSI as representative of a short or a long survivor. Identifying tumor morphologic patterns associated with short and long OS will allow the clinical neuropathologist to provide additional prognostic information gleaned during microscopic assessment to the treating team, as well as suggest avenues of biological investigation for understanding and potentially treating glioblastoma.
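
The following PyTorch sketch shows a generic attention-based multiple instance learning pooling layer of the kind the abstract refers to (in the spirit of the standard attention-MIL recipe, without the clustering constraint); the feature dimensions and the two-class short/long-survivor head are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention-based MIL pooling over a bag of WSI patch embeddings.

    Attention weights indicate which patches (morphologic patterns) drive the
    slide-level short- vs long-survivor prediction. Dimensions are illustrative.
    """
    def __init__(self, feat_dim=512, attn_dim=128, n_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):                      # (num_patches, feat_dim)
        scores = self.attention(patch_feats)             # (num_patches, 1)
        weights = torch.softmax(scores, dim=0)           # attention over the bag
        slide_feat = (weights * patch_feats).sum(dim=0)  # weighted-average pooling
        return self.classifier(slide_feat), weights.squeeze(-1)

# Example: a bag of 1000 patch embeddings from one slide.
logits, attn = AttentionMIL()(torch.randn(1000, 512))
```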

The timely transportation of goods to customers is an essential component of economic activity. However, the heavy-duty diesel trucks that deliver these goods contribute significantly to greenhouse gas emissions in many large metropolitan areas, including Los Angeles, New York, and San Francisco. To facilitate freight electrification, this paper proposes joint routing and charging (JRC) scheduling for electric trucks. The objective of the associated optimization problem is to minimize the total cost of transportation, charging, and tardiness. Because each truck can choose among a large number of combinations of road segments, charging decisions, and charging durations, the decision space is enormous. The resulting mixed-integer linear programming (MILP) problem is extremely challenging because of this combinatorial complexity, even in the deterministic case. Therefore, a Level-Based Surrogate Lagrangian Relaxation method is employed to decompose and coordinate the overall problem into truck subproblems that are significantly less complex. For coordination, each truck subproblem is solved independently of the others based on charging cost, tardiness, and the values of the Lagrangian multipliers. In addition to guiding and coordinating the trucks, the multipliers also serve as a basis for transparent and explainable decision-making by the trucks. Testing results demonstrate that even small instances cannot be solved by the off-the-shelf solver CPLEX within several days, whereas the new method obtains near-optimal solutions within a few minutes for small cases and within 30 minutes for large ones. Furthermore, the results show that the total cost decreases significantly as battery capacity increases, and that the number of trucks required decreases as charging power increases.
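
To convey the decomposition-and-coordination idea, the toy sketch below relaxes a shared charger-capacity constraint with Lagrangian multipliers, lets each truck pick its own routing/charging option given those multiplier "prices", and updates the multipliers from the constraint violation. It illustrates only the general structure of Lagrangian-relaxation coordination; it is not the paper's Level-Based Surrogate Lagrangian Relaxation method, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trucks, n_options, n_slots, capacity = 5, 4, 3, 2
cost = rng.uniform(10, 50, (n_trucks, n_options))            # each option's own transport+tardiness cost
usage = rng.integers(0, 2, (n_trucks, n_options, n_slots))   # charger slots each option occupies

lam = np.zeros(n_slots)                                       # multipliers: prices on charger slots
for it in range(50):
    # Each truck subproblem is solved independently given the current prices.
    choice = np.argmin(cost + usage @ lam, axis=1)
    total_use = usage[np.arange(n_trucks), choice].sum(axis=0)
    g = total_use - capacity                                  # (surrogate) subgradient of the dual
    lam = np.maximum(0.0, lam + (1.0 / (it + 1)) * g)         # simple diminishing step size

print("final prices:", lam, "remaining violation:", np.maximum(0, total_use - capacity))
```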

While 5G networks are being rolled out, the definition of potential 5G-Advanced features and the identification of disruptive technologies for 6G systems are being addressed by the scientific and academic communities to tackle the challenges that 2030 communication systems will face, such as terabit capacity and always-on networks. In this framework, it is globally recognised that Non-Terrestrial Networks (NTN) will play a fundamental role in support of a fully connected world, in which the physical, human, and digital domains will converge. In this context, one of the main challenges that NTN have to address is the provision of the high throughput requested by the new ecosystem. In this paper, we focus on Cell-Free Multiple Input Multiple Output (CF-MIMO) algorithms for NTN. In particular: i) we discuss the architecture design supporting centralised and federated CF-MIMO in NTN, with the latter implementing distributed MIMO algorithms from multiple satellites in the same formation (swarm); ii) we propose a novel location-based CF-MIMO algorithm, which does not require Channel State Information (CSI) at the transmitter; and iii) we design novel normalisation approaches for federated CF-MIMO in NTN to cope with the constraints on non-colocated radiating elements. The numerical results substantiate the good performance of the proposed algorithm, even in the presence of non-ideal information.
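
As an illustration of what "location-based" precoding without transmitter CSI can look like, the sketch below builds matched-filter weights purely from satellite-element and user positions and applies a per-satellite normalisation, echoing the constraint that the radiating elements of a federated swarm are not colocated. The geometry, dimensions, and normalisation rule are illustrative assumptions, not the algorithm proposed in the paper.

```python
import numpy as np

c, fc = 3e8, 2e9
lam = c / fc
k = 2 * np.pi / lam

rng = np.random.default_rng(0)
# Two satellites in a swarm at ~600 km altitude, each with 16 radiating elements; 4 ground users.
sat_elements = rng.uniform(-5, 5, (2, 16, 3)) + np.array([0.0, 0.0, 600e3])
users = np.column_stack([rng.uniform(-20e3, 20e3, 4), rng.uniform(-20e3, 20e3, 4), np.zeros(4)])

def steering(elements, user):
    d = np.linalg.norm(elements - user, axis=-1)   # element-to-user distances from geometry only
    return np.exp(-1j * k * d)                     # purely location-based phase (no CSI)

W = []
for s in range(sat_elements.shape[0]):
    A = np.stack([steering(sat_elements[s], u) for u in users], axis=1)  # (16, 4)
    Ws = A.conj()                                  # matched-filter (conjugate) precoder
    Ws /= np.linalg.norm(Ws, axis=0, keepdims=True)  # per-satellite, per-user normalisation
    W.append(Ws)
W = np.concatenate(W, axis=0)                      # stacked federated precoder (32, 4)
```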

Clustering is a fundamental machine learning task which has been widely studied in the literature. Classic clustering methods follow the assumption that data are represented as features in a vectorized form through various representation learning techniques. As data become increasingly complex and high-dimensional, shallow (traditional) clustering methods can no longer handle them. With the huge success of deep learning, especially deep unsupervised learning, many representation learning techniques with deep architectures have been proposed in the past decade. Recently, the concept of Deep Clustering, i.e., jointly optimizing the representation learning and the clustering, has been proposed and has attracted growing attention in the community. Motivated by the tremendous success of deep learning in clustering, one of the most fundamental machine learning tasks, and the large number of recent advances in this direction, in this paper we conduct a comprehensive survey on deep clustering by proposing a new taxonomy of the different state-of-the-art approaches. We summarize the essential components of deep clustering and categorize existing methods by the ways they design interactions between deep representation learning and clustering. Moreover, this survey provides the popular benchmark datasets, evaluation metrics, and open-source implementations to clearly illustrate the various experimental settings. Last but not least, we discuss the practical applications of deep clustering and suggest challenging topics that deserve further investigation as future directions.
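
As one concrete example of jointly optimizing representation learning and clustering, the sketch below follows the well-known DEC-style recipe: an encoder produces embeddings, soft assignments to learnable centroids are computed with a Student's-t kernel, and a sharpened target distribution drives a KL self-training loss. It is a minimal illustration of the general idea, not any particular method from the survey.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepClustering(nn.Module):
    def __init__(self, in_dim=784, z_dim=10, n_clusters=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
        self.centroids = nn.Parameter(torch.randn(n_clusters, z_dim))  # learnable cluster centers

    def forward(self, x):
        z = self.encoder(x)                              # (batch, z_dim) embeddings
        d2 = torch.cdist(z, self.centroids) ** 2         # squared distances to centroids
        q = (1.0 + d2).pow(-1)                           # Student's-t similarity kernel
        return q / q.sum(dim=1, keepdim=True)            # soft cluster assignments

def target_distribution(q):
    p = (q ** 2) / q.sum(dim=0)                          # sharpen and re-weight per cluster
    return p / p.sum(dim=1, keepdim=True)

model = DeepClustering()
q = model(torch.randn(64, 784))                          # synthetic batch for illustration
loss = F.kl_div(q.log(), target_distribution(q).detach(), reduction='batchmean')
loss.backward()                                          # updates encoder and centroids jointly
```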

Artificial Intelligence (AI) is rapidly becoming integrated into military Command and Control (C2) systems as a strategic priority for many defence forces. The successful implementation of AI promises to herald a significant leap in C2 agility through automation. However, realistic expectations need to be set on what AI can achieve in the foreseeable future. This paper argues that AI could lead to a fragility trap, whereby the delegation of C2 functions to an AI could increase the fragility of C2, resulting in catastrophic strategic failures. This calls for a new framework for AI in C2 that avoids this trap. We argue that antifragility, along with agility, should form the core design principles for AI-enabled C2 systems. This duality is termed Agile, Antifragile, AI-Enabled Command and Control (A3IC2). An A3IC2 system continuously improves its capacity to perform in the face of shocks and surprises through overcompensation from feedback during the C2 decision-making cycle. An A3IC2 system will not only survive within a complex operational environment, it will also thrive, benefiting from the inevitable shocks and volatility of war.

Visual recognition is currently one of the most important and active research areas in computer vision, pattern recognition, and even the general field of artificial intelligence. It has great fundamental importance and strong industrial need. Deep neural networks (DNNs) have greatly boosted performance on many concrete tasks, with the help of large amounts of training data and new powerful computational resources. Though recognition accuracy is usually the first concern for new advances, efficiency is actually rather important and sometimes critical for both academic research and industrial applications. Moreover, insightful views on the opportunities and challenges of efficiency are also highly needed by the entire community. While general surveys on the efficiency of DNNs have been conducted from various perspectives, as far as we are aware, scarcely any of them has focused systematically on visual recognition, and thus it is unclear which advances are applicable to it and what else should be considered. In this paper, we present a review of recent advances, together with our suggestions on possible new directions towards improving the efficiency of DNN-related visual recognition approaches. We investigate not only from the model but also from the data point of view (which is not the case in existing surveys), and focus on the three most studied data types (images, videos, and points). This paper attempts to provide a systematic summary via a comprehensive survey that can serve as a valuable reference and inspire both researchers and practitioners who work on visual recognition problems.

In the last decade or so, we have witnessed deep learning reinvigorating the machine learning field. It has solved many problems in the domains of computer vision, speech recognition, and natural language processing, among various other tasks, with state-of-the-art performance. In these domains, the data are generally represented in Euclidean space. Various other domains conform to non-Euclidean space, for which a graph is an ideal representation. Graphs are suitable for representing the dependencies and interrelationships between various entities. Traditionally, handcrafted features for graphs have been incapable of providing the necessary inference for various tasks from this complex data representation. Recently, various advances in deep learning have been applied to graph-based tasks. This article provides a comprehensive survey of graph neural networks (GNNs) in each learning setting: supervised, unsupervised, semi-supervised, and self-supervised learning. A taxonomy of each graph-based learning setting is provided, with logical divisions of the methods falling in the given learning setting. The approaches for each learning task are analyzed from both theoretical and empirical standpoints. Further, we provide general architecture guidelines for building GNNs. Various applications and benchmark datasets are also provided, along with open challenges still plaguing the general applicability of GNNs.

Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks are typically represented in Euclidean domains. Nevertheless, there is an increasing number of applications in power systems where data are collected from non-Euclidean domains and represented as graph-structured data with high-dimensional features and interdependency among nodes. The complexity of graph-structured data has brought significant challenges to existing deep neural networks defined in Euclidean domains. Recently, many studies on extending deep neural networks to graph-structured data in power systems have emerged. In this paper, a comprehensive overview of graph neural networks (GNNs) in power systems is provided. Specifically, several classical paradigms of GNN structures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems, such as fault diagnosis, power prediction, power flow calculation, and data generation, are reviewed in detail. Furthermore, the main issues and research trends regarding the application of GNNs in power systems are discussed.

Many learning tasks require dealing with graph data, which contains rich relational information among elements. Modeling physics systems, learning molecular fingerprints, predicting protein interfaces, and classifying diseases all require a model that learns from graph inputs. In other domains, such as learning from non-structural data like text and images, reasoning over extracted structures, such as the dependency trees of sentences and the scene graphs of images, is an important research topic that also needs graph reasoning models. Graph neural networks (GNNs) are connectionist models that capture the dependence of graphs via message passing between the nodes of a graph. Unlike standard neural networks, graph neural networks retain a state that can represent information from their neighborhoods to an arbitrary depth. Although primitive graph neural networks were found difficult to train to a fixed point, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful learning with them. In recent years, systems based on graph convolutional networks (GCN) and gated graph neural networks (GGNN) have demonstrated ground-breaking performance on many of the tasks mentioned above. In this survey, we provide a detailed review of existing graph neural network models, systematically categorize the applications, and propose four open problems for future research.
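
A minimal sketch of the message-passing computation performed by GNN layers is given below: node features are aggregated through a symmetrically normalised adjacency matrix (GCN-style) and then linearly transformed. The graph and dimensions are illustrative.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN-style message-passing layer: aggregate neighbours, transform, apply ReLU."""
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt             # symmetric normalisation
    return np.maximum(0.0, A_norm @ X @ W)               # neighbourhood aggregation + linear map

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],                              # a small 4-node example graph
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = rng.standard_normal((4, 8))                          # initial node features
H = gcn_layer(A, X, rng.standard_normal((8, 16)))        # one round of message passing
```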
