Traditional single-input single-output and multiple-input multiple-output information theory adopts spatially discrete modeling, which does not fully capture the continuous nature of the underlying electromagnetic (EM) fields supporting the physical layer of wireless communication systems. It is therefore of interest to examine the information-carrying capability of continuous EM fields, which motivates research on EM information theory (EIT). In this article, we systematically investigate the basic ideas and main results of EIT. First, we review the fundamental analytical tools of classical information theory and EM theory. Then, we introduce the modeling and analysis methodologies of EIT, including continuous field modeling, degrees of freedom, mutual information, and capacity analyses. After that, several EIT-inspired applications are discussed to illustrate how EIT guides the design of practical wireless systems. Finally, we point out several open problems of EIT, where further research efforts are required to develop EIT into a unified interdisciplinary theory.
The popular systemic risk measure CoVaR (conditional Value-at-Risk) is widely used in economics and finance. Formally, it is defined as an (extreme) quantile of one variable (e.g., losses in the financial system) conditional on some other variable (e.g., losses in a bank's shares) being in distress and, hence, measures the spillover of risks. In this article, we propose joint dynamic and semiparametric models for VaR and CoVaR together with a two-step M-estimator for the model parameters, drawing on recently proposed bivariate scoring functions for the pair (VaR, CoVaR). Among other things, this allows for the estimation of joint dynamic forecasting models for (VaR, CoVaR). We prove consistency and asymptotic normality of the proposed estimator and analyze its finite-sample properties in simulations. We apply our dynamic models to generate CoVaR forecasts for real financial data, which are shown to be superior to existing methods.
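To make the definition concrete, the sketch below computes the plain empirical (static) versions of VaR and CoVaR on simulated losses. This is an illustration of the definition only, not the paper's dynamic semiparametric models or its two-step M-estimator; all function names and the toy data are ours.

```python
import numpy as np

def empirical_var(losses, alpha):
    """Empirical Value-at-Risk: the alpha-quantile of the loss distribution."""
    return np.quantile(losses, alpha)

def empirical_covar(system_losses, bank_losses, alpha=0.95, beta=0.95):
    """Empirical CoVaR: the beta-quantile of system losses restricted to the
    days on which the bank's losses reach or exceed their own VaR (distress)."""
    distress = bank_losses >= empirical_var(bank_losses, alpha)
    return np.quantile(system_losses[distress], beta)

# Toy correlated losses (illustration only).
rng = np.random.default_rng(0)
common = rng.standard_normal(10_000)
bank = common + 0.5 * rng.standard_normal(10_000)
system = 0.8 * common + 0.6 * rng.standard_normal(10_000)
print(empirical_var(system, 0.95))        # unconditional VaR of the system
print(empirical_covar(system, bank))      # larger: spillover from bank distress
```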
Simple temporal problems represent a powerful class of models capable of describing the temporal relations between events that arise in many real-world applications such as logistics, robot planning, and management systems. The classic simple temporal problem permits each event to have only a single release and due date. In this paper, we focus on the case where events may have an arbitrarily large number of release and due dates. This type of problem has been referred to by various names; to simplify and standardize the nomenclature, we introduce the name Simple Disjunctive Temporal Problem. We provide three mathematical models to describe this problem using constraint programming and linear programming. To efficiently solve simple disjunctive temporal problems, we design two new algorithms inspired by previous research, both of which exploit the problem's structure to significantly reduce their space complexity. Additionally, we implement algorithms from the literature and provide the first in-depth empirical study comparing methods for solving simple disjunctive temporal problems across a wide range of experiments. Our analysis and conclusions offer guidance for future researchers and practitioners tackling similar temporal constraint problems in new applications. All results, source code, and instances are made publicly available to further assist future research.
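As a hedged illustration of the problem structure (not one of the paper's two algorithms), the sketch below decomposes a simple disjunctive temporal problem into its component simple temporal problems: choose one release/due window per event, then test the resulting difference constraints for consistency via Bellman-Ford negative-cycle detection. All names and the toy instance are ours, and the enumeration is exponential by design.

```python
from itertools import product

def stp_consistent(n, diff_constraints, chosen_windows):
    """Check a simple temporal problem over t_0..t_{n-1}: difference constraints
    (i, j, c) meaning t_j - t_i <= c, plus one [lo, hi] window per event encoded
    against a reference node fixed at time 0. Bellman-Ford with zero initial
    distances; a relaxation surviving every round reveals a negative cycle."""
    ref = n
    edges = list(diff_constraints)
    for v, (lo, hi) in enumerate(chosen_windows):
        edges.append((ref, v, hi))    # t_v - t_ref <= hi   (due date)
        edges.append((v, ref, -lo))   # t_ref - t_v <= -lo  (release date)
    dist = [0.0] * (n + 1)
    for _ in range(n + 1):
        changed = False
        for i, j, c in edges:
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
                changed = True
        if not changed:
            return True               # converged: consistent
    return False                      # still relaxing: negative cycle

def sdtp_solve_by_enumeration(n, diff_constraints, windows):
    """Naive SDTP solver: try every combination of one window per event."""
    for choice in product(*windows):
        if stp_consistent(n, diff_constraints, choice):
            return choice
    return None

# Event 1 must start between 5 and 10 after event 0; each event has two windows.
windows = [[(0, 3), (20, 25)], [(4, 6), (30, 40)]]
print(sdtp_solve_by_enumeration(2, [(0, 1, 10), (1, 0, -5)], windows))
```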
The primary aim of this paper is the derivation and proof of a simple and tractable formula for the stray field energy in micromagnetic problems. The formula is based on an expansion in terms of Arar-Boulmezaoud functions. It remains valid even if the magnetization is not of constant magnitude or if the sample is not geometrically bounded. The paper continues with a direct and important application: a fast summation technique for the stray field energy. The convergence of this technique is established and its efficiency is demonstrated by various numerical experiments.
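For orientation, the quantity being expanded is the standard stray field (demagnetizing) energy of micromagnetics, recalled below; the paper's Arar-Boulmezaoud expansion itself is not reproduced here.

```latex
% Magnetostatic scalar potential u and stray field H_d generated by the
% magnetization M (extended by zero outside the sample Omega):
\begin{align}
  \Delta u = \nabla \cdot \mathbf{M} \ \text{in } \mathbb{R}^3,
  \qquad \mathbf{H}_d = -\nabla u, \\
  E_d = \frac{\mu_0}{2} \int_{\mathbb{R}^3} |\mathbf{H}_d|^2 \,\mathrm{d}x
      = -\frac{\mu_0}{2} \int_{\Omega} \mathbf{M} \cdot \mathbf{H}_d \,\mathrm{d}x.
\end{align}
```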
2LS ("tools") is a verification tool for C programs, built upon the CPROVER framework. It allows one to verify user-specified assertions, memory safety properties (e.g. buffer overflows), numerical overflows, division by zero, memory leaks, and termination properties. The analysis is performed by translating the verification task into a second-order logic formula over bitvector, array, and floating-point arithmetic theories. The formula is solved by a modular combination of algorithms involving unfolding and template-based invariant synthesis with the help of incremental SAT solving. Advantages of 2LS include its very fast incremental bounded model checking algorithm and its flexible framework for experimenting with novel analysis and abstraction ideas for invariant inference. Drawbacks are its lack of support for certain program features (e.g. multi-threading).
In this paper, we present a numerical strategy to check the strong stability (or GKS-stability) of one-step explicit finite difference schemes for the one-dimensional advection equation with an inflow boundary condition. Strong stability is studied using the Kreiss-Lopatinskii theory. We introduce a new tool, the intrinsic Kreiss-Lopatinskii determinant, which possesses the same regularity as the vector bundle of discrete stable solutions. By applying standard results of complex analysis to this determinant, we are able to reduce the strong stability of numerical schemes to the computation of a winding number, a procedure that is both robust and computationally cheap. The study is illustrated with the O3 scheme and the fifth-order Lax-Wendroff (LW5) scheme together with a reconstruction procedure at the boundary.
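The winding-number computation the method reduces to can be sketched generically: sample the closed curve traced by a function on the unit circle and accumulate wrapped argument increments. The snippet below is a hedged, self-contained illustration of that computation only; the intrinsic Kreiss-Lopatinskii determinant itself is not reproduced.

```python
import numpy as np

def winding_number(f, n_samples=4096):
    """Winding number about the origin of the closed curve t -> f(e^{it}),
    t in [0, 2*pi), computed by summing argument increments wrapped to
    (-pi, pi]. Reliable when the curve stays away from 0 and the sampling
    is fine enough that consecutive increments are small."""
    t = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    args = np.angle(f(np.exp(1j * t)))
    increments = np.diff(np.concatenate([args, args[:1]]))
    increments = (increments + np.pi) % (2.0 * np.pi) - np.pi
    return int(np.round(increments.sum() / (2.0 * np.pi)))

print(winding_number(lambda z: z**3))      # 3: winds three times around 0
print(winding_number(lambda z: z + 5.0))   # 0: the curve does not enclose 0
```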
Recent advances in data-driven machine learning have revolutionized fields like computer vision, reinforcement learning, and many scientific and engineering domains. In many real-world and scientific problems, the systems that generate data are governed by physical laws. Recent work shows that incorporating physical priors alongside collected data can benefit machine learning models, making the intersection of machine learning and physics a prevailing paradigm. In this survey, we present this learning paradigm, called Physics-Informed Machine Learning (PIML), which builds models that leverage both empirical data and available physical prior knowledge to improve performance on tasks involving a physical mechanism. We systematically review the recent development of physics-informed machine learning from three perspectives: machine learning tasks, representation of physical priors, and methods for incorporating physical priors. We also propose several important open research problems based on current trends in the field. We argue that encoding different forms of physical priors into model architectures, optimizers, and inference algorithms, as well as significant domain-specific applications like inverse engineering design and robotic control, remains far from fully explored in physics-informed machine learning. We believe that this study will encourage researchers in the machine learning community to actively participate in the interdisciplinary research of physics-informed machine learning.
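A minimal sketch of the core PIML objective may help: the training loss combines a data misfit term with a physics residual term. The toy physical prior here is the ODE u'(t) = -u(t), the derivative is approximated by finite differences, and all names and weights are illustrative assumptions rather than any specific method from the survey.

```python
import numpy as np

def piml_loss(u_pred, t, u_data, data_idx, lam=1.0):
    """Physics-informed objective: data misfit plus physics residual.
    Toy prior: u'(t) = -u(t), so the residual u' + u should vanish."""
    data_loss = np.mean((u_pred[data_idx] - u_data) ** 2)
    residual = np.gradient(u_pred, t) + u_pred   # finite-difference u' + u
    return data_loss + lam * np.mean(residual ** 2)

t = np.linspace(0.0, 2.0, 50)
u_true = np.exp(-t)                              # exact solution of u' = -u
data_idx = np.array([0, 10, 25])                 # only three noisy observations
u_data = u_true[data_idx] + 0.01 * np.random.default_rng(0).standard_normal(3)
print(piml_loss(u_true, t, u_data, data_idx))            # small: obeys the prior
print(piml_loss(np.ones_like(t), t, u_data, data_idx))   # large: violates it
```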
Diffusion models are a class of deep generative models that have shown impressive results on various tasks and rest on a solid theoretical foundation. Although diffusion models achieve higher sample quality and diversity than other state-of-the-art models, they still suffer from a costly sampling procedure and sub-optimal likelihood estimation. Recent studies have shown great enthusiasm for improving the performance of diffusion models. In this article, we present a first comprehensive review of existing variants of diffusion models. Specifically, we provide a first taxonomy of diffusion models and categorize the variants into three types: sampling-acceleration enhancement, likelihood-maximization enhancement, and data-generalization enhancement. We also introduce in detail five other generative models (i.e., variational autoencoders, generative adversarial networks, normalizing flows, autoregressive models, and energy-based models) and clarify the connections between diffusion models and these models. We then make a thorough investigation into the applications of diffusion models, including computer vision, natural language processing, waveform signal processing, multi-modal modeling, molecular graph generation, time series modeling, and adversarial purification. Furthermore, we propose new perspectives pertaining to the development of this class of generative models.
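For concreteness, all of these variants share the same forward (noising) process; below is a minimal sketch of its standard closed form, with an illustrative linear noise schedule. The learned reverse process and the surveyed enhancements are not shown.

```python
import numpy as np

def ddpm_forward_sample(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form: with alpha_bar_t the running
    product of (1 - beta_s), x_t = sqrt(alpha_bar_t) * x0
    + sqrt(1 - alpha_bar_t) * eps, where eps ~ N(0, I)."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)     # a common linear schedule
x0 = rng.standard_normal(8)               # stand-in for a data sample
print(ddpm_forward_sample(x0, 10, betas, rng))    # early step: barely noised
print(ddpm_forward_sample(x0, 999, betas, rng))   # final step: near pure noise
```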
Blockchain is an emerging decentralized technology for data collection, sharing, and storage that provides transparent, secure, tamper-proof, and robust ledger services for various real-world use cases. Recent years have witnessed notable developments both in blockchain technology itself and in blockchain-adopting applications. Most existing surveys limit their scope to a few particular issues of blockchain or its applications, which makes it hard to depict the general picture of the current blockchain ecosystem. In this paper, we investigate recent advances in both blockchain technology and its most active research topics in real-world applications. We first review recent developments in consensus mechanisms and storage mechanisms in general blockchain systems. Then we conduct an extensive literature review on blockchain-enabled IoT, edge computing, and federated learning, as well as several emerging applications including healthcare, the COVID-19 pandemic, social networks, and supply chains, discussing specific research topics in each. Finally, we discuss future directions, challenges, and opportunities in both academia and industry.
Causal Machine Learning (CausalML) is an umbrella term for machine learning methods that formalize the data-generation process as a structural causal model (SCM). This allows one to reason about the effects of changes to this process (i.e., interventions) and what would have happened in hindsight (i.e., counterfactuals). We categorize work in CausalML into five groups according to the problems they tackle: (1) causal supervised learning, (2) causal generative modeling, (3) causal explanations, (4) causal fairness, and (5) causal reinforcement learning. For each category, we systematically compare its methods and point out open problems. Further, we review modality-specific applications in computer vision, natural language processing, and graph representation learning. Finally, we provide an overview of causal benchmarks and a critical discussion of the state of this nascent field, including recommendations for future work.
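A toy SCM makes the interventional notion concrete. In the hedged sketch below (structural equations of our own choosing), regressing Y on X is biased by the confounder Z, while a do-intervention on X, which severs the Z -> X edge, recovers the true causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)

def scm_sample(n, do_x=None):
    """Tiny SCM: Z ~ N(0,1); X := Z + noise (unless do(X = x) replaces it);
    Y := 2*X + Z + noise. The true causal effect of X on Y is 2."""
    z = rng.standard_normal(n)
    x = np.full(n, do_x) if do_x is not None else z + 0.5 * rng.standard_normal(n)
    y = 2.0 * x + z + 0.5 * rng.standard_normal(n)
    return x, y

x, y = scm_sample(100_000)
print(np.polyfit(x, y, 1)[0])     # ~2.8: observational slope, confounded by Z
_, y1 = scm_sample(100_000, do_x=1.0)
_, y0 = scm_sample(100_000, do_x=0.0)
print(y1.mean() - y0.mean())      # ~2.0: interventional contrast, causal effect
```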
The era of big data provides researchers with convenient access to copious data. However, researchers often have little prior knowledge of how these data were generated. The increasing prevalence of big data challenges traditional methods of learning causality, which were developed for settings with limited amounts of data and solid prior causal knowledge. This survey aims to close the gap between big data and learning causality with a comprehensive and structured review of traditional and frontier methods, along with a discussion of some open problems in learning causality. We begin with preliminaries of learning causality. Then we categorize and revisit methods of learning causality for the typical problems and data types. After that, we discuss the connections between learning causality and machine learning. Finally, some open problems are presented to show the great potential of learning causality with data.