Since the orthogonality of the line-of-sight multiple-input multiple-output (LoS MIMO) channel holds only within the Rayleigh distance, the coverage of such communication systems is restricted by the finite spacing at which antennas can be deployed. However, media with different permittivity along the transmission path can loosen this spacing requirement. This observation is particularly relevant to air-to-ground LoS MIMO scenarios, given the presence of clouds in the troposphere. To analyze the random phase variations introduced by a single-layer cloud, we propose a new cloud modeling method, adapted to the LoS MIMO setting and built on real measurement data. A preliminary analysis of channel capacity is then conducted based on the simulation results.
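To make the spacing constraint concrete, the following is a minimal sketch of the standard broadside-ULA orthogonality condition $d_t d_r = \lambda D / N$ and how a medium's permittivity enters through the effective wavelength. All numerical values (frequency, array size, spacing, $\varepsilon_r$) are illustrative placeholders, not parameters from the paper.

```python
import numpy as np

# Hypothetical link: 4x4 LoS MIMO at 30 GHz; values are illustrative only.
c = 3e8                      # speed of light, m/s
f = 30e9                     # carrier frequency, Hz
lam = c / f                  # free-space wavelength, m
N = 4                        # antennas per uniform linear array
d_tx = d_rx = 1.0            # antenna spacing at Tx and Rx, m

# Broadside ULA orthogonality condition: d_tx * d_rx = lam * D / N,
# so the link distance at which the channel is perfectly orthogonal is
D_orth = N * d_tx * d_rx / lam
print(f"orthogonal at D = {D_orth / 1e3:.1f} km")

# A medium with relative permittivity eps_r > 1 shortens the effective
# wavelength to lam / sqrt(eps_r), stretching the orthogonal distance for
# the same spacing (equivalently, relaxing the spacing requirement).
eps_r = 1.0003               # illustrative value, not a measured cloud property
print(f"with eps_r: D = {D_orth * np.sqrt(eps_r) / 1e3:.1f} km")
```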
Sky-image-based solar forecasting using deep learning has been recognized as a promising approach to reducing the uncertainty in solar power generation. However, one of the biggest challenges is the lack of massive and diverse sky image samples. In this study, we present a comprehensive survey of open-source ground-based sky image datasets for very short-term solar forecasting (i.e., forecast horizons under 30 minutes), as well as related research areas that can potentially help improve solar forecasting methods, including cloud segmentation, cloud classification, and cloud motion prediction. We first identify 72 open-source sky image datasets that satisfy the needs of machine/deep learning. We then construct a database of information about various aspects of the identified datasets. To evaluate each surveyed dataset, we further develop a multi-criteria ranking system based on 8 dimensions of the datasets that could have important impacts on their usage. Finally, we provide insights on the usage of these datasets for different applications. We hope this paper can serve as an overview for researchers seeking datasets for very short-term solar forecasting and related areas.
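As a rough illustration of how such a multi-criteria ranking could be aggregated, here is a minimal weighted-scoring sketch. The criterion names, weights, and scores below are invented placeholders; they are not the 8 dimensions or the scoring rules actually used in the survey.

```python
import numpy as np

# Hypothetical criteria and equal weights; the survey's own 8 dimensions
# and weighting scheme may differ.
criteria = ["size", "diversity", "resolution", "label quality",
            "temporal coverage", "documentation", "license", "maintenance"]
weights = np.full(len(criteria), 1 / len(criteria))

# Rows: datasets; columns: per-criterion scores normalized to [0, 1].
scores = np.array([
    [0.9, 0.6, 0.8, 0.7, 0.5, 0.9, 1.0, 0.6],   # dataset A (hypothetical)
    [0.4, 0.9, 0.6, 0.8, 0.9, 0.5, 1.0, 0.8],   # dataset B (hypothetical)
])
overall = scores @ weights                       # weighted aggregate score
print(overall.argsort()[::-1])                   # rank datasets, best first
```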
Predicting drug side-effects before they occur is a key task in keeping the number of drug-related hospitalizations low and in improving drug discovery processes. Automatic side-effect predictors are generally unable to process the structure of the drug, resulting in a loss of information. Graph neural networks have seen great success in recent years thanks to their ability to exploit the information conveyed by the graph structure and labels. These models have been used in a wide variety of biological applications, among which the prediction of drug side-effects on a large knowledge graph. Exploiting the molecular graph encoding the structure of the drug is a novel approach, in which the problem is formulated as multi-class, multi-label, graph-focused classification. We developed a methodology to carry out this task, using recurrent graph neural networks and building a dataset from freely accessible and well-established data sources. The results show that our method has improved classification capability, under many parameters and metrics, with respect to previously available predictors.
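A minimal sketch of the graph-focused, multi-label setup follows, assuming molecular graphs with atom-level features. A plain two-layer GCN stands in for the recurrent GNN used in the paper, and all dimensions (feature size, hidden width, number of side-effect labels) are illustrative.

```python
import torch
from torch import nn
from torch_geometric.nn import GCNConv, global_mean_pool

# Sketch only: a GCN stand-in for the paper's recurrent GNN; sizes are made up.
class DrugSideEffectNet(nn.Module):
    def __init__(self, num_atom_feats=32, hidden=64, num_side_effects=1000):
        super().__init__()
        self.conv1 = GCNConv(num_atom_feats, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = nn.Linear(hidden, num_side_effects)  # one logit per label

    def forward(self, x, edge_index, batch):
        x = self.conv1(x, edge_index).relu()
        x = self.conv2(x, edge_index).relu()
        g = global_mean_pool(x, batch)   # graph-level readout over the atoms
        return self.head(g)              # multi-label logits for the molecule

# Multi-label targets are 0/1 vectors, so training uses a sigmoid + BCE loss.
criterion = nn.BCEWithLogitsLoss()
```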
We consider load balancing in large-scale heterogeneous server systems in the presence of data locality that imposes constraints on which tasks can be assigned to which servers. The constraints are naturally captured by a bipartite graph between the servers and the dispatchers handling assignments of various arrival flows. When a task arrives, the corresponding dispatcher assigns it to a server with the shortest queue among $d\geq 2$ randomly selected servers obeying the above constraints. Server processing speeds are heterogeneous and depend on the server type. For a broad class of bipartite graphs, we characterize the limit of the appropriately scaled occupancy process, both at the process level and in steady state, as the system size becomes large. Using such a characterization, we show that data locality constraints can be used to significantly improve the performance of heterogeneous systems. This is in stark contrast to either heterogeneous servers in a fully flexible system or data locality constraints in systems with homogeneous servers, both of which have been observed to degrade system performance. Extensive numerical experiments corroborate the theoretical results.
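The dispatching rule itself is easy to state in code. Below is a toy sketch of constrained power-of-$d$ assignment: each dispatcher samples $d$ servers from its own compatible set (the bipartite graph) and joins the shortest queue among them. The topology, speeds, and arrival pattern are illustrative, not taken from the paper's experiments.

```python
import random

# Toy instance: 6 servers, 3 dispatchers; all values are illustrative.
d = 2
queues = [0] * 6                                     # queue length per server
speeds = [1.0, 1.0, 2.0, 2.0, 0.5, 0.5]              # heterogeneous speeds
compat = {0: [0, 1, 2], 1: [2, 3, 4], 2: [3, 4, 5]}  # dispatcher -> servers

def assign(dispatcher):
    # Sample d compatible servers, join the shortest queue among them.
    candidates = random.sample(compat[dispatcher], d)
    target = min(candidates, key=lambda s: queues[s])
    queues[target] += 1
    return target

for _ in range(10):                                  # a few toy arrivals
    assign(random.randrange(3))
print(queues)
```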
We introduce a Loss Discounting Framework for model and forecast combination which generalises and combines Bayesian model synthesis and generalized Bayes methodologies. We use a loss function to score the performance of different models and introduce a multilevel discounting scheme that allows flexible specification of the dynamics of the model weights. This novel and simple model combination approach can be easily applied to large-scale model averaging/selection, can handle unusual features such as sudden regime changes, and can be tailored to different forecasting problems. We compare our method to both established methodologies and state-of-the-art methods on a number of macroeconomic forecasting examples. We find that the proposed method offers an attractive, computationally efficient alternative to the benchmark methodologies and often outperforms more complex techniques.
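To fix ideas, here is a minimal single-level sketch of loss-discounted weights, with each model weighted in proportion to $\exp(-\eta \sum_s \delta^{t-s} L_s)$. The discount factor `delta` and learning rate `eta` are illustrative; the paper's multilevel scheme adds further layers of discounting beyond this one-level version.

```python
import numpy as np

def discounted_weights(losses, delta=0.95, eta=1.0):
    """losses: (T, K) array of per-period losses for K competing models."""
    T, K = losses.shape
    cum = np.zeros(K)
    for t in range(T):
        cum = delta * cum + losses[t]   # exponentially discounted running loss
    w = np.exp(-eta * cum)              # lower discounted loss -> higher weight
    return w / w.sum()                  # normalize to combination weights

rng = np.random.default_rng(0)
print(discounted_weights(rng.random((50, 3))))   # toy losses for 3 models
```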
We consider a flexible Bayesian evidence synthesis approach to model the age-specific transmission dynamics of COVID-19 based on daily age-stratified mortality counts. The temporal evolution of transmission rates in populations containing multiple types of individuals is reconstructed via an appropriate dimension-reduction formulation driven by independent diffusion processes assigned to the key epidemiological parameters. A suitably tailored Susceptible-Exposed-Infected-Removed (SEIR) compartmental model is used to capture the latent counts of infections and to account for fluctuations in transmission influenced by phenomena such as public health interventions and changes in human behaviour. We analyze the outbreak of COVID-19 in Greece and Austria and validate the proposed model using the estimated counts of cumulative infections from a large-scale seroprevalence survey in England.
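For readers unfamiliar with the compartmental backbone, the following is the deterministic SEIR skeleton. In the paper the transmission rate $\beta(t)$ is a latent diffusion inferred from data; here it is replaced by a fixed toy step function, and the population size and rate parameters are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 1e7                                   # population size (illustrative)
sigma, gamma = 1 / 4, 1 / 7               # 1/latency and 1/infectious period

def beta(t):
    return 0.4 if t < 60 else 0.15        # toy stand-in for an intervention

def seir(t, y):
    S, E, I, R = y
    dS = -beta(t) * S * I / N             # new exposures leave S
    dE = beta(t) * S * I / N - sigma * E  # exposed become infectious
    dI = sigma * E - gamma * I            # infectious recover/are removed
    return [dS, dE, dI, gamma * I]

sol = solve_ivp(seir, (0, 180), [N - 10, 0, 10, 0], t_eval=np.arange(181))
print(sol.y[2].max())                     # peak infectious count
```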
Distributed protocols are generally parametric: they can be executed on a system with any number of nodes, and hence proving their correctness becomes an infinite-state verification problem. The most popular approach to verifying distributed protocols is to find an inductive invariant strong enough to prove the required safety property. However, finding inductive invariants is notoriously hard, especially for distributed protocols, which are quite complex due to their asynchronous nature. In this work, we investigate an orthogonal, cutoff-based approach to verifying distributed protocols which sidesteps the problem of finding an inductive invariant and instead reduces checking correctness to a finite-state verification problem. The main idea is to find a finite, fixed protocol instance, called the cutoff instance, such that if the cutoff instance is safe, then any protocol instance is also safe. Previous cutoff-based approaches have only been applied to a restricted class of protocols and specifications. We formalize the cutoff approach in the context of a general protocol modeling language (RML), and identify sufficient conditions, which can be efficiently encoded in SMT, for checking whether a given protocol instance is a cutoff instance. Further, we propose a simple static-analysis-based algorithm to automatically synthesize a cutoff instance. We have applied our approach successfully to a number of complex distributed protocols, providing the first known cutoff results for many of them.
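The payoff of a cutoff is that safety of a fixed instance can be checked by exhaustive finite-state exploration. The sketch below illustrates that final step on an invented toy protocol (token-passing mutual exclusion); it is not RML, not the paper's SMT encoding, and the cutoff size used is purely hypothetical.

```python
from itertools import product
from collections import deque

def safe_instance(n):
    """Explicit-state BFS over a fixed n-node instance of a toy protocol."""
    init = tuple("token" if i == 0 else "idle" for i in range(n))
    seen, frontier = {init}, deque([init])
    while frontier:
        state = frontier.popleft()
        if sum(s == "token" for s in state) > 1:   # safety: at most one token
            return False
        for i, j in product(range(n), repeat=2):   # transition: pass i -> j
            if i != j and state[i] == "token":
                nxt = list(state)
                nxt[i], nxt[j] = "idle", "token"
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return True

print(safe_instance(3))   # if 3 were a cutoff, safety would transfer to all n
```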
Heterogeneity is a hallmark of complex diseases. Regression-based heterogeneity analysis, which is directly concerned with outcome-feature relationships, has led to a deeper understanding of disease biology. Such an analysis identifies the underlying subgroup structure and estimates the subgroup-specific regression coefficients. However, most existing regression-based heterogeneity analyses can only accommodate disjoint subgroups; that is, each sample is assigned to exactly one subgroup. In reality, some samples have multiple labels: for example, many genes have several biological functions, and some cells of pure cell types transition into other types over time. Their outcome-feature relationships (regression coefficients) can therefore be a mixture of the relationships in more than one subgroup, and disjoint subgrouping results can be unsatisfactory. To this end, we develop a novel approach to regression-based heterogeneity analysis which accounts for possible overlaps between subgroups and high data dimensionality. A subgroup membership vector is introduced for each sample and incorporated into the loss function. To address the lack of information arising from small sample sizes, an $l_2$ norm penalty is imposed on each membership vector to encourage similarity among its elements. A sparse penalty is also applied for regularized estimation and feature selection. Extensive simulations demonstrate its superiority over direct competitors. The analysis of Cancer Cell Line Encyclopedia data and of lung cancer data from The Cancer Genome Atlas shows that the proposed approach can identify an overlapping subgroup structure with favorable performance in prediction and stability.
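One plausible form of such an objective, sketched below, mixes each sample's coefficients through its membership vector $\pi_i$ and adds the two penalties described above. This is an illustrative reading of the abstract, not necessarily the paper's exact formulation, and all tuning parameters are placeholders.

```python
import numpy as np

def objective(B, Pi, X, y, lam_pi=0.1, lam_b=0.1):
    """B: (K, p) subgroup coefficients; Pi: (n, K) memberships, rows sum to 1;
    X: (n, p) features; y: (n,) outcomes. All shapes/penalties illustrative."""
    coef = Pi @ B                                   # per-sample mixed coefficients
    resid = y - np.einsum("ij,ij->i", X, coef)      # y_i - x_i^T (B^T pi_i)
    fit = 0.5 * np.mean(resid ** 2)                 # least-squares loss
    K = Pi.shape[1]
    pen_pi = lam_pi * np.sum((Pi - 1.0 / K) ** 2)   # l2: pull membership
                                                    # elements toward each other
    pen_b = lam_b * np.abs(B).sum()                 # sparsity: feature selection
    return fit + pen_pi + pen_b
```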
Existing deep-learning-based sonar image classification methods typically operate in Euclidean space and consider only local image features. To address this, this paper presents a sonar classification method based on an improved Graph Attention Network (GAT), named SI-GAT, which is applicable to multiple types of imaging sonar. The method quantifies the correlation between nodes through a joint calculation of color proximity and spatial proximity, representing the sonar characteristics in non-Euclidean space. The KNN (K-Nearest Neighbor) algorithm is then used to determine the neighborhood range and the adjacency matrix in the graph attention mechanism, which are combined with the attention coefficient matrix to construct the key part of SI-GAT. Validation on real data shows that SI-GAT is superior to several CNN (Convolutional Neural Network) methods based on Euclidean space.
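The graph-construction step described above can be sketched directly: node affinity combines color and spatial proximity, and each node keeps its $k$ nearest neighbors as the adjacency fed to the attention layers. The mixing weight `alpha`, the normalization, and `k` are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def build_adjacency(colors, coords, k=8, alpha=0.5):
    """colors: (n, c) node intensities; coords: (n, 2) pixel positions."""
    dc = np.linalg.norm(colors[:, None] - colors[None], axis=-1)  # color dist
    ds = np.linalg.norm(coords[:, None] - coords[None], axis=-1)  # spatial dist
    dist = alpha * dc / dc.max() + (1 - alpha) * ds / ds.max()    # joint metric
    adj = np.zeros_like(dist)
    for i in range(len(dist)):
        nbrs = np.argsort(dist[i])[1 : k + 1]   # k nearest, excluding self
        adj[i, nbrs] = 1.0                      # neighborhood for attention
    return adj
```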
Solving large-scale nonlinear minimization problems is computationally demanding. Nonlinear multilevel minimization (NMM) methods exploit the structure of the underlying minimization problem to solve such problems in a computationally efficient and scalable manner. The efficiency of NMM methods relies on the quality of the coarse-level models. Traditionally, coarse-level models are constructed using the additive approach, where the so-called $\tau$-correction enforces local coherence between the fine-level and coarse-level objective functions. In this work, we extend this methodology and discuss how to enforce local coherence between the objective functions using a multiplicative approach. Moreover, we present a hybrid approach that takes advantage of both the additive and the multiplicative approaches. Using numerical experiments from the field of deep learning, we show that the hybrid approach can greatly improve the convergence speed of NMM methods, and it therefore provides an attractive alternative to the almost universally used additive approach.
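The contrast between the two corrections can be sketched at the level of the coarse model. Below, the additive model uses the classical first-order $\tau$-correction, while the multiplicative model rescales the coarse objective to match the fine-level value; this is a simplified reading, the paper's multiplicative and hybrid constructions may enforce more than value matching, and the toy usage is invented.

```python
import numpy as np

def additive_model(f_c, grad_f_c, x_c0, g_restricted):
    """Classical tau-correction: match fine gradient (restricted) at x_c0."""
    tau = g_restricted - grad_f_c(x_c0)
    return lambda x_c: f_c(x_c) + tau @ (x_c - x_c0)

def multiplicative_model(f_c, x_c0, f_fine_val):
    """Value-matching multiplicative correction (simplified sketch)."""
    scale = f_fine_val / f_c(x_c0)
    return lambda x_c: scale * f_c(x_c)

# Toy usage: 1-D quadratic coarse objective, purely illustrative values.
f_c = lambda x: 0.5 * x @ x
grad_f_c = lambda x: x
x0 = np.array([1.0])
H = additive_model(f_c, grad_f_c, x0, g_restricted=np.array([2.0]))
print(H(x0))   # equals f_c(x0) at the current iterate, with corrected slope
```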
With the rapid development of high-speed railway systems and railway wireless communication, the use of the ultra-wideband millimeter wave band is an inevitable trend. However, the millimeter wave channel suffers from large propagation loss and is easily blocked. Moreover, the link between the base station (BS) and the train is vulnerable to eavesdropping. As an emerging technology, the reconfigurable intelligent surface (RIS) can achieve passive beamforming by steering the incident electromagnetic wave in a desired direction. We propose a RIS-assisted scheduling scheme for scheduling interrupted transmissions and improving quality of service (QoS). In the proposed scheme, an RIS is deployed between the BS and multiple mobile relays (MRs). By jointly optimizing the beamforming vector and the discrete phase shifts of the RIS, constructive interference between direct-link and indirect-link signals can be achieved, while the channel capacity of eavesdroppers is kept within a controllable range. The goal of maximizing the number of successfully scheduled tasks while satisfying their QoS requirements can thus be realized. Extensive simulations demonstrate that the proposed scheme outperforms four baseline schemes from the literature in terms of the number of completed tasks and the system secrecy capacity.
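A minimal sketch of the discrete phase-shift step follows: per element, pick the $b$-bit quantized phase closest to the one that co-phases the cascaded BS-RIS-MR link. The channel realizations are random placeholders, and the paper's full design additionally optimizes the beamformer jointly and enforces the eavesdropper-capacity constraint, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)
M, b = 64, 2                                       # RIS elements, phase bits
levels = 2 * np.pi * np.arange(2 ** b) / 2 ** b    # quantized phase alphabet

h = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # BS -> RIS (toy)
g = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # RIS -> MR (toy)

ideal = -np.angle(h * g)                           # co-phase every element
# Nearest quantized phase per element (chord distance on the unit circle).
idx = np.argmin(np.abs(np.exp(1j * levels[None, :])
                       - np.exp(1j * ideal[:, None])), axis=1)
theta = levels[idx]
gain = np.abs(np.sum(h * np.exp(1j * theta) * g)) ** 2
print(gain)                                        # cascaded channel gain
```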