When a fluid flows over a solid surface, it creates a thin boundary layer in which the flow velocity is influenced by the surface through viscosity and can transition from laminar to turbulent at sufficiently high speeds. Understanding and forecasting the wind dynamics under these conditions is one of the most challenging scientific problems in fluid dynamics. It is therefore of high interest to formulate models able to capture the nonlinear spatio-temporal velocity structure and to produce forecasts in a computationally efficient manner. Traditional statistical approaches are limited in their ability to produce timely forecasts of complex, nonlinear spatio-temporal structures while also incorporating the underlying flow physics. In this work, we propose a deep double reservoir computing network to accurately forecast boundary layer velocities; the model captures the complex, nonlinear dynamics of the boundary layer while incorporating physical constraints via a penalty derived from a Partial Differential Equation (PDE). Simulation studies on a one-dimensional viscous fluid demonstrate how the proposed model produces accurate forecasts while simultaneously accounting for energy loss. The application focuses on boundary layer data from a wind tunnel, with a PDE penalty derived from an appropriate simplification of the Navier-Stokes equations, and shows forecasts that are more compliant with mass conservation.
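As an illustration of the general idea, the following minimal sketch (not the authors' architecture; the reservoir size, learning rates, and synthetic decaying-sine data are made up for demonstration) trains an echo state network readout with an additional penalty on the discrete residual of the 1-D viscous Burgers equation u_t + u u_x = nu u_xx, a standard stand-in for a simplified Navier-Stokes constraint.

```python
# Minimal sketch: echo state network readout trained with an added PDE
# penalty (1-D viscous Burgers residual). Hyperparameters and data are
# illustrative only, not those of the paper.
import numpy as np

rng = np.random.default_rng(0)
nx, nt, nu, dt, dx = 64, 400, 0.05, 0.01, 1.0 / 64

# Synthetic 1-D velocity field (decaying sine wave) used as training data.
x = np.linspace(0, 1, nx, endpoint=False)
u = np.array([np.sin(2 * np.pi * x) * np.exp(-nu * (2 * np.pi) ** 2 * dt * t)
              for t in range(nt)])                       # shape (nt, nx)

# Reservoir: random input and recurrent weights, spectral radius < 1.
n_res = 500
W_in = rng.uniform(-0.5, 0.5, (n_res, nx))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

states = np.zeros((nt, n_res))
for t in range(1, nt):
    states[t] = np.tanh(W_in @ u[t - 1] + W @ states[t - 1])

def pde_residual(u_pred, u_prev):
    """Discrete residual of u_t + u u_x - nu u_xx on a periodic grid."""
    u_x = (np.roll(u_pred, -1) - np.roll(u_pred, 1)) / (2 * dx)
    u_xx = (np.roll(u_pred, -1) - 2 * u_pred + np.roll(u_pred, 1)) / dx ** 2
    return (u_pred - u_prev) / dt + u_pred * u_x - nu * u_xx

# Linear readout trained by gradient descent on MSE + lambda * PDE penalty.
W_out = np.zeros((nx, n_res))
lam, lr = 1e-3, 1e-3
for epoch in range(200):
    pred = states[1:] @ W_out.T                          # one-step forecasts
    grad = 2 * (pred - u[1:]).T @ states[1:] / nt        # MSE gradient
    # Crude penalty gradient: treats d(residual)/d(prediction) as identity,
    # simply pushing forecasts towards zero PDE residual.
    res = np.array([pde_residual(p, q) for p, q in zip(pred, u[:-1])])
    grad += 2 * lam * res.T @ states[1:] / nt
    W_out -= lr * grad

print("train MSE:", np.mean((states[1:] @ W_out.T - u[1:]) ** 2))
```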
Intelligent reflecting surface (IRS)-aided millimeter-wave (mmWave) and terahertz (THz) communications have recently attracted considerable attention in the wireless community. This paper aims to design a beam-based multiple-access strategy for this new paradigm. Its key idea is to make use of multiple sub-arrays of a hybrid digital-analog array to form independent beams, each of which is steered towards the desired direction to mitigate inter-user interference and suppress unwanted signal reflection. The proposed scheme combines the advantages of both orthogonal multiple access (i.e., no inter-user interference) and non-orthogonal multiple access (i.e., full use of the time-frequency resources). Consequently, it can substantially boost the system capacity, as verified by Monte-Carlo simulations.
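The beam-steering principle can be illustrated with a short sketch (an assumed setup, not the paper's exact hybrid architecture): each sub-array of a uniform linear array applies phase-only conjugate steering towards its own user, yielding near-unit gain in the intended direction and small leakage towards the others.

```python
# Illustrative sketch: one analog beam per sub-array, steered to one user.
import numpy as np

n_sub, n_ant = 4, 16                          # 4 sub-arrays, 16 antennas each
user_angles = np.deg2rad([-40, -10, 20, 55])  # one user per sub-array

def steering(theta, n):
    """Half-wavelength ULA steering vector."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

# Analog beamformer of sub-array k: phase-only conjugate of user k's steering.
beams = [np.conj(steering(th, n_ant)) for th in user_angles]

# Beam gain of sub-array k towards every user direction.
gains = np.array([[np.abs(beams[k] @ steering(th, n_ant)) ** 2
                   for th in user_angles] for k in range(n_sub)])
print(np.round(gains, 3))   # near 1 on the diagonal, small off-diagonal
```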
In robust optimization problems, the magnitude of the perturbations is relatively small, so robust optima tend to lie close to the peaks of the original fitness landscape; solutions located far from such regions are unlikely to remain optimal once perturbations are introduced. An efficient search process therefore benefits from increased opportunities to explore promising regions where the global optimum or good local optima are situated. In this paper, we introduce a novel robust evolutionary algorithm named the dual-stage robust evolutionary algorithm (DREA), aimed at discovering robust solutions. DREA operates in two stages: a peak-detection stage and a robust solution-searching stage. The objective of the peak-detection stage is to identify peaks in the fitness landscape of the original optimization problem, while the robust solution-searching stage focuses on swiftly identifying the robust optimal solution using information obtained from the peaks discovered in the first stage. Together, these two stages enable the proposed DREA to efficiently obtain the robust optimal solution. This approach achieves a balance between solution optimality and robustness by separating the search for the optimal solution from the search for the robust optimal solution. Experimental results demonstrate that DREA significantly outperforms five state-of-the-art algorithms on 18 test problems of diverse complexity. Moreover, on higher-dimensional robust optimization problems (100-$D$ and 200-$D$), DREA also demonstrates superior performance compared to all five counterpart algorithms.
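The two-stage idea can be sketched on a hypothetical one-dimensional landscape (the test function, operators, and budgets below are illustrative and much simpler than DREA's actual components): stage 1 locates peaks of the original problem with multi-start hill climbing, and stage 2 re-ranks and refines those peaks under a perturbation-averaged fitness.

```python
# Toy dual-stage sketch: peak detection, then robust solution searching.
import numpy as np

rng = np.random.default_rng(1)
def f(x):      # toy multi-modal landscape (maximisation): narrow vs wide peak
    return np.exp(-((x - 1.0) ** 2) / 0.05) + 0.9 * np.exp(-((x + 1.0) ** 2) / 2.0)

def robust_f(x, delta=0.5, n=200):            # expected fitness under perturbation
    return np.mean(f(x + rng.uniform(-delta, delta, n)))

# Stage 1: peak detection via independent (1+1)-style hill-climbing restarts.
peaks = []
for _ in range(20):
    x = rng.uniform(-3, 3)
    for _ in range(200):
        y = x + rng.normal(0, 0.1)
        if f(y) > f(x):
            x = y
    peaks.append(x)

# Stage 2: search for the robust optimum around the detected peaks.
best = max(peaks, key=robust_f)
for _ in range(200):
    y = best + rng.normal(0, 0.1)
    if robust_f(y) > robust_f(best):
        best = y
print("robust optimum near", round(best, 3))  # wide peak at x = -1 is selected
```

Even though the narrow peak at x = 1 has the higher nominal fitness, the perturbation-averaged criterion in stage 2 favours the wide peak, which is exactly the optimality-versus-robustness trade-off the two stages are meant to separate.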
We consider the problem of tracking multiple, unknown, and time-varying numbers of objects using a distributed network of heterogeneous sensors. To derive a formulation suited to practical settings, we consider sensors with limited and unknown fields of view (FoVs), limited local computational resources, and limited communication channel capacity. The resulting distributed multi-object tracking algorithm involves solving an NP-hard multidimensional assignment problem, either optimally for small-size problems or sub-optimally for general practical problems. For general problems, we propose an efficient distributed multi-object tracking algorithm that performs track-to-track fusion using a clustering-based analysis of the state space, transformed into a density space, to mitigate the complexity of the assignment problem. The proposed algorithm groups local track estimates for fusion more efficiently than existing approaches. To achieve globally consistent track identities across the network as objects move between FoVs, we develop a graph-based algorithm that reaches label consensus and minimises track segmentation. Numerical experiments on a synthetic and a real-world trajectory dataset demonstrate that our proposed method is significantly more computationally efficient than state-of-the-art solutions, achieving similar tracking accuracy and bandwidth requirements with improved label consistency.
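A highly simplified sketch of the fusion step follows (hypothetical data; plain Euclidean grouping stands in for the paper's density-space clustering): local track estimates from different nodes are grouped by proximity, and each group is fused with equal-weight covariance intersection, sidestepping an explicit multidimensional assignment.

```python
# Simplified clustering-based track-to-track fusion sketch.
import numpy as np

# (node_id, state mean [x, y], covariance) for local tracks of two objects.
tracks = [
    (0, np.array([10.1, 5.0]),  np.eye(2) * 0.5),
    (1, np.array([9.8, 5.3]),   np.eye(2) * 0.4),
    (0, np.array([40.2, -2.1]), np.eye(2) * 0.6),
    (2, np.array([39.7, -1.8]), np.eye(2) * 0.5),
]

# Greedy distance-based grouping (stand-in for the clustering step).
clusters = []
for node, mean, cov in tracks:
    for c in clusters:
        if np.linalg.norm(c[0][1] - mean) < 3.0:
            c.append((node, mean, cov))
            break
    else:
        clusters.append([(node, mean, cov)])

# Fuse each cluster with equal-weight covariance intersection.
for c in clusters:
    w = 1.0 / len(c)
    info = sum(w * np.linalg.inv(cov) for _, _, cov in c)
    fused_cov = np.linalg.inv(info)
    fused_mean = fused_cov @ sum(w * np.linalg.inv(cov) @ m for _, m, cov in c)
    print(np.round(fused_mean, 2))
```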
In contrast to the conventional reconfigurable intelligent surface (RIS), the simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) has recently been proposed to enlarge the serving area from 180° to 360° coverage. This work considers the performance of a STAR-RIS-aided full-duplex (FD) non-orthogonal multiple access (NOMA) communication system. The STAR-RIS is deployed at the cell edge to assist the cell-edge users, while the cell-center users communicate directly with a FD base station (BS). We first introduce new user clustering schemes for the downlink and uplink transmissions. Then, based on the proposed transmission schemes, closed-form expressions for the ergodic rates in the downlink and uplink modes are derived, taking into account the system impairments caused by self-interference at the FD-BS and imperfect successive interference cancellation (SIC). Moreover, an optimization problem to maximize the total sum-rate is formulated and solved by optimizing the amplitudes and phase shifts of the STAR-RIS elements and by allocating the transmit power efficiently. The performance of the proposed user clustering schemes and the optimal STAR-RIS design is investigated through numerical results.
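For intuition, the following Monte-Carlo sketch (assuming independent Rayleigh fading, no direct link for the cell-edge user, a fixed energy-splitting amplitude, and illustrative parameters rather than the paper's setup) estimates the ergodic rate of a cell-edge user served through the transmitting side of the STAR-RIS with co-phased elements.

```python
# Monte-Carlo ergodic rate of a STAR-RIS-assisted cell-edge downlink user.
import numpy as np

rng = np.random.default_rng(2)
N, snr_lin, trials = 64, 10.0, 10000          # RIS elements, 10 dB transmit SNR
rates = []
for _ in range(trials):
    h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # BS -> RIS
    g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # RIS -> user
    beta_t = np.sqrt(0.5)                     # half the energy to transmission
    # Phase shifts chosen to co-phase the cascaded channel (ideal alignment).
    eff = beta_t * np.sum(np.abs(h * g))
    rates.append(np.log2(1 + snr_lin * eff ** 2))
print("ergodic rate [bit/s/Hz]:", round(np.mean(rates), 2))
```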
The fisheye camera, with its uniquely wide field of view, has found extensive applications in various fields. However, the fisheye camera suffers from significant distortion compared to pinhole cameras, resulting in distorted images of captured objects. Fisheye camera distortion is a common issue in digital image processing, requiring effective correction techniques to enhance image quality. This review provides a comprehensive overview of the methods used for fisheye camera distortion correction. The article explores the polynomial distortion model, which uses polynomial functions to model and correct radial distortion. Additionally, alternative approaches such as panorama mapping, grid mapping, direct methods, and deep learning-based methods are discussed. The review highlights the advantages, limitations, and recent advancements of each method, enabling readers to make informed decisions based on their specific needs.
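As a concrete example of the polynomial radial model, the sketch below (with made-up coefficients) inverts the mapping r_d = r_u (1 + k1 r_u^2 + k2 r_u^4) on normalized coordinates using a few fixed-point iterations; a production pipeline would additionally handle the camera intrinsics and image resampling.

```python
# Polynomial radial distortion correction on normalized coordinates.
k1, k2 = -0.25, 0.05            # hypothetical distortion coefficients

def undistort_point(xd, yd, iters=10):
    """Recover undistorted normalized coords from distorted ones."""
    xu, yu = xd, yd             # initial guess
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / scale, yd / scale   # fixed-point update
    return xu, yu

xu, yu = undistort_point(0.4, -0.3)
# Sanity check: re-applying the forward model reproduces the input point.
r2 = xu * xu + yu * yu
print(xu * (1 + k1 * r2 + k2 * r2 ** 2), yu * (1 + k1 * r2 + k2 * r2 ** 2))
```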
In this paper, we propose a consistent non-Gaussian Bayesian filter whose system state is a continuous function. The distributions of the true system states, and those of the system and observation noises, are only assumed to be Lebesgue integrable, with no prior constraints on the function classes they fall within. This type of filter has significant merits in both theory and practice, as it is able to ameliorate the curse of dimensionality of the particle filter, a popular non-Gaussian Bayesian filter whose system state is parameterized by discrete particles and their corresponding weights. We first propose a new type of statistics, called the generalized logarithmic moments. Together with the power moments, they are used to form a density surrogate, parameterized as an analytic function, to approximate the true system state. The map from the parameters of the proposed density surrogate to both the power moments and the generalized logarithmic moments is proved to be a diffeomorphism, which establishes that there exists a unique density surrogate satisfying both moment conditions. This diffeomorphism also allows us to use gradient methods to treat the convex optimization problem of determining the parameters. Last but not least, simulation results reveal the advantage of using both sets of moments for estimating mixtures of complicated types of functions. A robot localization simulation is also given as an engineering application to validate the proposed filtering scheme.
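To give a flavour of moment-based density surrogates, the toy sketch below (not the paper's algorithm) fits an analytic surrogate q(x) proportional to exp(p(x)), with p a quartic polynomial, to a particle set by matching its first four empirical power moments; the paper additionally matches the generalized logarithmic moments and proves that the resulting parameter map is a diffeomorphism.

```python
# Toy power-moment matching with an exponential-of-polynomial surrogate.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
particles = np.concatenate([rng.normal(-1, 0.5, 5000), rng.normal(2, 1.0, 5000)])
target = np.array([np.mean(particles ** k) for k in range(1, 5)])

grid = np.linspace(-8, 10, 4000)
dx = grid[1] - grid[0]

def moments(theta):
    """First four power moments of q(x) ~ exp(theta_1 x + ... + theta_4 x^4)."""
    logq = sum(t * grid ** (k + 1) for k, t in enumerate(theta))
    q = np.exp(logq - logq.max())
    q /= q.sum() * dx
    return np.array([np.sum(grid ** k * q) * dx for k in range(1, 5)])

def mismatch(theta):
    return np.sum(((moments(theta) - target) / target) ** 2)

res = minimize(mismatch, x0=[0.0, -0.1, 0.0, -0.01], method="Nelder-Mead",
               options={"maxiter": 4000, "xatol": 1e-8, "fatol": 1e-10})
print("fitted moments :", np.round(moments(res.x), 3))
print("target moments :", np.round(target, 3))
```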
Neural Radiance Fields (NeRF) have recently emerged as a powerful method for image-based 3D reconstruction, but the lengthy per-scene optimization limits their practical usage, especially in resource-constrained settings. Existing approaches solve this issue by reducing the number of input views and regularizing the learned volumetric representation with either complex losses or additional inputs from other modalities. In this paper, we present KeyNeRF, a simple yet effective method for training NeRF in few-shot scenarios by focusing on key informative rays. Such rays are first selected at the camera level by a view selection algorithm that promotes baseline diversity while guaranteeing scene coverage, and then at the pixel level by sampling from a probability distribution based on local image entropy. Our approach performs favorably against state-of-the-art methods, while requiring minimal changes to existing NeRF codebases.
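The pixel-level selection step can be sketched as follows (assuming an 8-bit grayscale view and a simple sliding-window entropy estimator, not necessarily the exact one used in KeyNeRF): rays are drawn with probability proportional to local image entropy, so training concentrates on informative, textured regions.

```python
# Entropy-weighted pixel (ray) sampling for one training view.
import numpy as np

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(120, 160))     # stand-in for a training view

def local_entropy(image, win=9, bins=16):
    """Shannon entropy of the intensity histogram in a sliding window."""
    h, w = image.shape
    pad = win // 2
    padded = np.pad(image, pad, mode="reflect")
    ent = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + win, j:j + win]
            p, _ = np.histogram(patch, bins=bins, range=(0, 256))
            p = p / p.sum()
            p = p[p > 0]
            ent[i, j] = -np.sum(p * np.log2(p))
    return ent

ent = local_entropy(img)
prob = ent.flatten() / ent.sum()
# Sample a batch of ray (pixel) indices according to the entropy distribution.
batch = rng.choice(img.size, size=1024, replace=False, p=prob)
rows, cols = np.unravel_index(batch, img.shape)
```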
Transformer architectures have facilitated the development of large-scale and general-purpose sequence models for prediction tasks in natural language processing and computer vision, e.g., GPT-3 and Swin Transformer. Although originally designed for prediction problems, it is natural to inquire about their suitability for sequential decision-making and reinforcement learning (RL) problems, which are typically beset by long-standing issues involving sample efficiency, credit assignment, and partial observability. In recent years, sequence models, especially the Transformer, have attracted increasing interest in the RL community, spawning numerous approaches with notable effectiveness and generalizability. This survey presents a comprehensive overview of recent works aimed at solving sequential decision-making tasks with sequence models such as the Transformer, discussing the connection between sequential decision-making and sequence modeling and categorizing these works based on the way they utilize the Transformer. Moreover, this paper puts forth various potential avenues for future research intended to improve the effectiveness of large sequence models for sequential decision-making, encompassing theoretical foundations, network architectures, algorithms, and efficient training systems. This article has been accepted by Frontiers of Computer Science; this is an early version, and the most up-to-date version can be found at //journal.hep.com.cn/fcs/EN/10.1007/s11704-023-2689-5
Federated Learning (FL) is a decentralized machine-learning paradigm in which a global server iteratively averages the model parameters of local users without accessing their data. User heterogeneity has imposed significant challenges on FL, as it can incur drifted global models that are slow to converge. Knowledge distillation has recently emerged to tackle this issue by refining the server model using aggregated knowledge from heterogeneous users, rather than directly averaging their model parameters. This approach, however, depends on a proxy dataset, making it impractical unless such a prerequisite is satisfied. Moreover, the ensemble knowledge is not fully utilized to guide local model learning, which may in turn affect the quality of the aggregated model. Inspired by the prior art, we propose a data-free knowledge distillation approach to address heterogeneous FL, where the server learns a lightweight generator to ensemble user information in a data-free manner, which is then broadcast to users to regulate local training, with the learned knowledge serving as an inductive bias. Empirical studies supported by theoretical implications show that our approach facilitates FL with better generalization performance using fewer communication rounds, compared with the state of the art.
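A condensed PyTorch sketch of the idea follows (dimensions, losses, and the single local step are simplifications, not the paper's full procedure): the server trains a lightweight conditional generator so that the ensemble of user classifiers confidently labels its synthetic features, and a user then treats generator samples as an extra regularizer during local training.

```python
# Data-free knowledge distillation for heterogeneous FL, condensed sketch.
import torch, torch.nn as nn, torch.nn.functional as F

feat_dim, n_classes, n_users = 32, 10, 5
# Each user's classifier head (stand-in for the heterogeneous local models).
heads = [nn.Linear(feat_dim, n_classes) for _ in range(n_users)]
generator = nn.Sequential(nn.Linear(n_classes + 16, 64), nn.ReLU(),
                          nn.Linear(64, feat_dim))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

# --- Server side: data-free distillation into the generator. ---
for _ in range(100):
    y = torch.randint(0, n_classes, (64,))
    z = torch.randn(64, 16)
    feat = generator(torch.cat([F.one_hot(y, n_classes).float(), z], dim=1))
    logits = torch.stack([h(feat) for h in heads]).mean(0)   # ensemble logits
    loss = F.cross_entropy(logits, y)       # make the ensemble agree with y
    g_opt.zero_grad(); loss.backward(); g_opt.step()

# --- User side: generated features regularize local training. ---
local_head = heads[0]
l_opt = torch.optim.SGD(local_head.parameters(), lr=0.1)
y = torch.randint(0, n_classes, (64,))
z = torch.randn(64, 16)
with torch.no_grad():
    feat = generator(torch.cat([F.one_hot(y, n_classes).float(), z], dim=1))
reg = F.cross_entropy(local_head(feat), y)  # inductive bias from the server
l_opt.zero_grad(); reg.backward(); l_opt.step()
```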
Learning latent representations of nodes in graphs is an important and ubiquitous task with widespread applications such as link prediction, node classification, and graph visualization. Previous methods on graph representation learning mainly focus on static graphs; however, many real-world graphs are dynamic and evolve over time. In this paper, we present Dynamic Self-Attention Network (DySAT), a novel neural architecture that operates on dynamic graphs and learns node representations that capture both structural properties and temporal evolutionary patterns. Specifically, DySAT computes node representations by jointly employing self-attention layers along two dimensions: structural neighborhood and temporal dynamics. We conduct link prediction experiments on two classes of graphs: communication networks and bipartite rating networks. Our experimental results show that DySAT achieves a significant performance gain over several state-of-the-art graph embedding baselines.
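A schematic sketch of the two attention blocks follows (random data, a single head, and no positional encodings, so considerably simpler than the full DySAT model): node embeddings are first computed per snapshot with a GAT-style structural attention over graph neighbours, then refined by a temporal self-attention over each node's own sequence of snapshot embeddings.

```python
# Structural attention per snapshot, then temporal attention per node.
import torch, torch.nn as nn, torch.nn.functional as F

n_nodes, n_steps, d = 20, 6, 16
feats = torch.randn(n_steps, n_nodes, d)                  # node features per snapshot
adj = (torch.rand(n_steps, n_nodes, n_nodes) > 0.7).float()
adj = adj + torch.eye(n_nodes)                            # keep self-loops

W = nn.Linear(d, d, bias=False)

def structural_attention(x, a):
    """Scaled dot-product attention restricted to graph neighbours."""
    h = W(x)                                              # (n_nodes, d)
    scores = h @ h.t() / d ** 0.5
    scores = scores.masked_fill(a == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ h

struct = torch.stack([structural_attention(feats[t], adj[t])
                      for t in range(n_steps)])           # (n_steps, n_nodes, d)

# Temporal self-attention: each node attends over its own snapshot history.
temporal = nn.MultiheadAttention(embed_dim=d, num_heads=1, batch_first=True)
seq = struct.permute(1, 0, 2)                             # (n_nodes, n_steps, d)
causal = torch.triu(torch.ones(n_steps, n_steps), diagonal=1).bool()
out, _ = temporal(seq, seq, seq, attn_mask=causal)        # final dynamic embeddings
print(out.shape)                                          # (n_nodes, n_steps, d)
```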