We propose two specifications of a real-time mixed-frequency semi-structural time series model for evaluating potential output, the output gap, the Phillips curve, and Okun's law for the US. The baseline model uses minimal theory-based multivariate identification restrictions to inform the trend-cycle decomposition, while the alternative model adds the CBO's output gap measure as an observed variable. The latter model yields a smoother estimate of potential output and a lower cyclical correlation between inflation and real variables, but performs worse in forecasting beyond the short term. This methodology allows for the assessment and real-time monitoring of official trend and gap estimates.
The geometric optimization of crystal structures is a procedure widely used in chemistry that adjusts the geometric arrangement of the particles within a structure. It is called structural relaxation and constitutes a local minimization problem with a non-convex objective function whose domain complexity increases with the number of particles involved. In this work we study the performance of the two most popular first-order optimization methods in structural relaxation. Although frequently employed, these methods have received little study in this context from an algorithmic point of view. We run each algorithm with a constant step size, which provides a benchmark for the analysis and direct comparison of the methods. We also design dynamic step size rules and study how they improve the performance of the two algorithms. Our results show that there is a trade-off between convergence rate and the probability that an experiment succeeds, so we construct a function that assigns a utility to each method based on the respective preferences. The function is built according to a recently introduced model of preference indication for algorithms with deadlines and their run times. Finally, building on the insights from our experimental results, we provide algorithmic recipes that best correspond to each of the presented preferences and select one recipe as optimal for equally weighted preferences. Alongside our results we present our open-source Python software veltiCRYS, which was used to perform the geometric optimization experiments. Our implementation can be easily edited to accommodate other energy functions and is especially targeted at testing different methods for structural relaxation.
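To make the setting concrete, the following is a minimal sketch of gradient-based structural relaxation with a dynamic step size rule. It is not the veltiCRYS implementation: the Lennard-Jones toy energy, the grow/shrink step rule, and all names and tolerances are assumptions chosen for brevity.

```python
# Illustrative sketch only: gradient-descent relaxation of a small particle
# cluster under a Lennard-Jones pair potential, with a simple dynamic step size
# (grow on success, shrink on failure). Not the veltiCRYS code.
import numpy as np

def lj_energy_and_grad(x):
    """Total Lennard-Jones energy and gradient for positions x of shape (n, 3)."""
    n = x.shape[0]
    energy = 0.0
    grad = np.zeros_like(x)
    for i in range(n):
        for j in range(i + 1, n):
            d = x[i] - x[j]
            r2 = d @ d
            inv6 = 1.0 / r2 ** 3
            energy += 4.0 * (inv6 ** 2 - inv6)
            coef = 24.0 * (inv6 - 2.0 * inv6 ** 2) / r2   # dE/dr . dr/dx, via r^2
            grad[i] += coef * d
            grad[j] -= coef * d
    return energy, grad

def relax(x, step=1e-2, shrink=0.5, grow=1.1, tol=1e-6, max_iter=5000):
    """Gradient descent with a crude dynamic step size rule."""
    e, g = lj_energy_and_grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - step * g
        e_new, g_new = lj_energy_and_grad(x_new)
        if e_new < e:              # accept the move and grow the step
            x, e, g = x_new, e_new, g_new
            step *= grow
        else:                      # reject the move and shrink the step
            step *= shrink
    return x, e

rng = np.random.default_rng(0)
positions, final_energy = relax(1.5 * rng.standard_normal((8, 3)))
print(f"relaxed energy: {final_energy:.4f}")
```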
A Gaussian process (GP)-based methodology is proposed to emulate complex dynamical computer models (or simulators). The method relies on emulating the short-time numerical flow map of the system, where the flow map is a function that returns the solution of a dynamical system at a certain time point, given initial conditions. To predict the model output time series, a single realisation is drawn from the emulated flow map (i.e., from its posterior distribution) and used to iterate forward in time from the initial condition. Repeating this procedure with multiple such draws creates a distribution over the time series, whose mean and variance serve as the model output prediction and the associated uncertainty, respectively. However, since there is no known method to draw an exact sample from the GP posterior analytically, we approximate the kernel with random Fourier features and generate approximate sample paths. The proposed method is applied to emulate several dynamic nonlinear simulators, including the well-known Lorenz and van der Pol models. The results suggest that our approach has high predictive performance and that the associated uncertainty can capture the dynamics of the system accurately. Additionally, our approach lends itself to ``embarrassingly'' parallel implementations, since the iterative predictions for each realisation can be carried out on a single computing node.
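The following minimal sketch illustrates the flow-map idea in one dimension: a GP with a random-Fourier-feature (RFF) kernel approximation is fitted to one-step flow-map data, one approximate posterior sample path is drawn, and that sample is iterated forward. The ODE (logistic growth), kernel hyperparameters, and noise level are assumptions for this sketch and are not taken from the paper.

```python
# Minimal sketch (not the paper's code): emulate the one-step flow map of a 1-D
# ODE with a GP whose RBF kernel is approximated by random Fourier features,
# then iterate a single approximate posterior sample path forward in time.
import numpy as np

rng = np.random.default_rng(1)

def true_flow_map(x, dt=0.1, r=2.0):
    """One Euler step of dx/dt = r x (1 - x): the 'simulator' being emulated."""
    return x + dt * r * x * (1.0 - x)

# Training data: the short-time flow map evaluated at scattered initial conditions.
X = rng.uniform(0.0, 1.5, size=40)
y = true_flow_map(X)

# Random Fourier features approximating an RBF kernel with lengthscale ell.
D, ell, sigma_f, sigma_n = 200, 0.3, 1.0, 1e-2
W = rng.normal(0.0, 1.0 / ell, size=D)
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def features(x):
    return sigma_f * np.sqrt(2.0 / D) * np.cos(np.outer(x, W) + b)

# GP regression with RFFs reduces to Bayesian linear regression in feature space.
Phi = features(X)                                    # shape (n, D)
A = Phi.T @ Phi + sigma_n**2 * np.eye(D)
mean_w = np.linalg.solve(A, Phi.T @ y)
cov_w = sigma_n**2 * np.linalg.inv(A)
cov_w = 0.5 * (cov_w + cov_w.T)                      # symmetrise numerically

# Draw one approximate posterior sample of the flow map and iterate it.
w_sample = rng.multivariate_normal(mean_w, cov_w)
x = np.array([0.1])
trajectory = [x.item()]
for _ in range(50):
    x = features(x) @ w_sample
    trajectory.append(x.item())

print(trajectory[:5])
```

Repeating the draw-and-iterate step with many weight samples (each on its own computing node) gives the distribution over trajectories whose mean and variance are used as the prediction and its uncertainty.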
A novel multistatic multiple-input multiple-output (MIMO) integrated sensing and communication (ISAC) system in cellular networks is proposed. It can make use of widespread base stations (BSs) to perform cooperative sensing over a wide area. This system is important because sensing functionality can be deployed on existing mobile communication networks at low cost. In this system, orthogonal frequency division multiplexing (OFDM) signals transmitted from the central BS are received and processed by each of the neighboring BSs to estimate the parameters of sensed objects. A joint data processing method is then introduced to derive a closed-form solution for object position and velocity. Numerical simulations show that the proposed multistatic system improves position and velocity estimation accuracy compared with monostatic and bistatic systems, demonstrating the effectiveness and promise of implementing ISAC in the upcoming fifth-generation advanced (5G-A) and sixth-generation (6G) mobile networks.
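For intuition about how several receiving BSs jointly constrain an object's location, the sketch below estimates a 2-D position from noisy bistatic-range measurements by Gauss-Newton least squares. It is not the paper's closed-form method, omits velocity, and the BS geometry, noise level, and solver are assumptions chosen for the example.

```python
# Illustrative sketch only: multistatic position estimation from bistatic ranges
# (transmitter-to-object plus object-to-receiver distances) via Gauss-Newton.
import numpy as np

rng = np.random.default_rng(2)

tx = np.array([0.0, 0.0])                        # central (transmitting) BS
rxs = np.array([[500.0, 0.0], [0.0, 500.0],       # neighbouring (receiving) BSs
                [-500.0, 100.0], [300.0, -400.0]])
target = np.array([120.0, 250.0])

ranges = (np.linalg.norm(target - tx)
          + np.linalg.norm(rxs - target, axis=1)
          + rng.normal(0.0, 1.0, size=len(rxs)))  # ~1 m measurement noise

def residuals_and_jacobian(p):
    d_tx = np.linalg.norm(p - tx)
    d_rx = np.linalg.norm(p - rxs, axis=1)
    res = d_tx + d_rx - ranges
    jac = (p - tx) / d_tx + (p - rxs) / d_rx[:, None]
    return res, jac

p = np.array([0.0, 100.0])                        # initial guess
for _ in range(20):                               # Gauss-Newton iterations
    res, jac = residuals_and_jacobian(p)
    p = p - np.linalg.solve(jac.T @ jac, jac.T @ res)

print("estimated position:", p)
```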
This paper aims at obtaining, by means of integral transforms, short-time analytical approximations of solutions to boundary value problems for the one-dimensional reaction-diffusion equation with constant coefficients. The general form of the equation is considered on a bounded generic interval, and the three classical types of boundary conditions, i.e., Dirichlet, Neumann, and mixed boundary conditions, are treated in a unified way. The Fourier and Laplace integral transforms are applied successively, and an exact solution is obtained in the Laplace domain. This operational solution is proven to be the Laplace transform of the infinite series obtained by the Fourier decomposition method and presented in the literature as the solution to this type of problem. On the basis of this unified operational solution, four cases are distinguished in which new formulas expressing consistent analytical approximations in the short-time limit are derived, according to the behavior of the solution at the boundaries. Compared to the infinite series solutions, the analytical approximations may open new perspectives and applications, among which is the improvement of numerical efficiency in simulations of one-dimensional moving boundary problems, such as Stefan models.
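For concreteness, the setting can be written as follows; the notation ($D$, $k$, $u_0$, $g$, $h$, and the interval $(a,b)$) is assumed for illustration and not taken verbatim from the paper:
\begin{align*}
\partial_t u(x,t) &= D\,\partial_{xx} u(x,t) - k\,u(x,t), \qquad x\in(a,b),\ t>0, \qquad u(x,0)=u_0(x),\\
\text{Dirichlet: } & u(a,t)=g_a(t),\quad u(b,t)=g_b(t),\\
\text{Neumann: } & \partial_x u(a,t)=h_a(t),\quad \partial_x u(b,t)=h_b(t),\\
\text{mixed: } & u(a,t)=g_a(t),\quad \partial_x u(b,t)=h_b(t).
\end{align*}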
This study develops an asymptotic theory for estimating the time-varying characteristics of locally stationary functional time series (LSFTS). We investigate a kernel-based method for estimating the time-varying covariance operator and the time-varying mean function of an LSFTS. In particular, we derive the convergence rate of the kernel estimator of the covariance operator and its associated eigenvalues and eigenfunctions, and we establish a central limit theorem for the kernel-based locally weighted sample mean. As applications of our results, we discuss methods for testing the equality of time-varying mean functions in two functional samples.
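The sketch below illustrates the kernel-based locally weighted sample mean on simulated curves; the Epanechnikov kernel, the rescaled-time formulation, and the toy data-generating process are assumptions for the example, not the paper's setup.

```python
# Minimal sketch: kernel-based locally weighted sample mean of a functional
# time series observed on a common grid.
import numpy as np

rng = np.random.default_rng(3)
T, n_grid = 200, 50
s = np.linspace(0.0, 1.0, n_grid)                  # function domain grid

def mean_curve(u):
    """Slowly time-varying mean curve, as a function of rescaled time u = t/T."""
    return np.sin(2.0 * np.pi * s) * (1.0 + u)

# Toy locally stationary functional time series: mean curve plus noise curves.
X = np.array([mean_curve(t / T) + 0.3 * rng.standard_normal(n_grid)
              for t in range(1, T + 1)])           # shape (T, n_grid)

def local_mean(u, h=0.1):
    """Kernel-weighted sample mean of the curves around rescaled time u."""
    z = (np.arange(1, T + 1) / T - u) / h
    w = np.maximum(0.75 * (1.0 - z**2), 0.0)       # Epanechnikov kernel weights
    return (w[:, None] * X).sum(axis=0) / w.sum()

estimate = local_mean(0.5)
print(np.max(np.abs(estimate - mean_curve(0.5))))  # rough pointwise error
```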
Processing-in-memory (PIM) promises to alleviate the data movement bottleneck in modern computing systems. However, current real-world PIM systems have the inherent disadvantage that their hardware is more constrained than in conventional processors (CPU, GPU), due to the difficulty and cost of building processing elements near or inside the memory. As a result, general-purpose PIM architectures support fairly limited instruction sets and struggle to execute complex operations such as transcendental functions and other hard-to-calculate operations (e.g., square root). These operations are particularly important for some modern workloads, e.g., activation functions in machine learning applications. In order to provide support for transcendental (and other hard-to-calculate) functions in general-purpose PIM systems, we present \emph{TransPimLib}, a library that provides CORDIC-based and LUT-based methods for trigonometric functions, hyperbolic functions, exponentiation, logarithm, square root, etc. We develop an implementation of TransPimLib for the UPMEM PIM architecture and perform a thorough evaluation of TransPimLib's methods in terms of performance and accuracy, using microbenchmarks and three full workloads (Blackscholes, Sigmoid, Softmax). We open-source all our code and datasets at~\url{//github.com/CMU-SAFARI/transpimlib}.
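To illustrate the kind of method involved, the sketch below shows the classic CORDIC rotation algorithm for sin/cos, which needs only shifts and adds and is therefore suited to processing elements without hardware multipliers or floating-point units. This is a generic illustration in Python (here using floats), not TransPimLib code or its API.

```python
# Illustrative CORDIC (rotation mode) for sin/cos; not TransPimLib's implementation.
import math

N_ITERS = 24
ANGLES = [math.atan(2.0 ** -i) for i in range(N_ITERS)]
# Aggregate gain of the rotation stages; starting from K avoids a final multiply.
K = 1.0
for i in range(N_ITERS):
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(theta):
    """Return (sin(theta), cos(theta)) for theta in [-pi/2, pi/2]."""
    x, y, z = K, 0.0, theta
    for i in range(N_ITERS):
        d = 1.0 if z >= 0.0 else -1.0              # rotate toward the target angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x

s, c = cordic_sin_cos(0.7)
print(s - math.sin(0.7), c - math.cos(0.7))        # errors on the order of 1e-7
```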
When estimating quantities and fields that are difficult to measure directly, such as the fluidity of ice, from point data sources, such as satellite altimetry, it is important to solve a numerical inverse problem that is formulated with Bayesian consistency. Otherwise, the resulting probability density function for the difficult-to-measure quantity or field will not be appropriately clustered around the truth. In particular, the inverse problem should be formulated by evaluating the numerical solution at the true point locations for direct comparison with the point data source. If the data are first fitted to a gridded or meshed field on the computational grid or mesh, and the inverse problem is formulated by comparing the numerical solution to the fitted field, the benefit of point data at resolutions finer than the grid density will be lost. We demonstrate, with examples from groundwater hydrology and glaciology, that a consistent formulation can increase the accuracy of results and aid discourse between modellers and observationalists. To do this, we bring point data into the finite element method ecosystem as discontinuous fields on meshes of disconnected vertices. Point evaluation can then be formulated as a finite element interpolation operation (dual evaluation). This new abstraction is well suited to automation, including automatic differentiation. We demonstrate this through an implementation in Firedrake, which generates highly optimised code for solving partial differential equations (PDEs) with the finite element method. Our solution integrates with dolfin-adjoint/pyadjoint, allowing PDE-constrained optimisation problems, such as data assimilation, to be solved through forward- and adjoint-mode automatic differentiation.
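A hedged sketch of the point-data pattern described above follows. It assumes a working Firedrake installation, and exact API details (for example, whether interpolation must be wrapped in assemble, or how point ordering is handled) differ between Firedrake versions, so it should be read as an illustration rather than canonical usage; the placeholder observation values are invented for the example.

```python
# Hedged sketch: point data as a P0DG field on a vertex-only mesh in Firedrake.
from firedrake import *
import numpy as np

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)

# Stand-in for a PDE solution on the mesh (in practice this comes from a solve).
x, y = SpatialCoordinate(mesh)
u = Function(V).interpolate(sin(pi * x) * sin(pi * y))

# Observation locations become a mesh of disconnected vertices; point data live
# in a P0DG space (one value per vertex) on that mesh.
points = np.random.default_rng(4).uniform(0.0, 1.0, size=(20, 2))
vom = VertexOnlyMesh(mesh, points)
P0DG = FunctionSpace(vom, "DG", 0)

u_obs = Function(P0DG)
u_obs.dat.data[:] = 0.5          # placeholder observed values for illustration

# Point evaluation of the model is expressed as interpolation onto the
# vertex-only mesh; the misfit is then a sum over the observation points.
u_at_points = assemble(interpolate(u, P0DG))   # older versions: interpolate(u, P0DG)
misfit = assemble(0.5 * (u_at_points - u_obs) ** 2 * dx)
print(misfit)
```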
Approximate Competitive Equilibrium from Equal Incomes (A-CEEI) is an equilibrium-based solution concept for the fair division of discrete items among agents with combinatorial demands. In theory, it is known that in asymptotically large markets: 1. For incentives, the A-CEEI mechanism is Envy-Free-but-for-Tie-Breaking (EF-TB), which implies that it is Strategyproof-in-the-Large (SP-L). 2. From a computational perspective, computing the equilibrium solution is unfortunately a computationally intractable problem (in the worst case, assuming $\textsf{PPAD}\ne \textsf{FP}$). We develop a new heuristic algorithm that outperforms the previous state of the art by multiple orders of magnitude. This new, faster algorithm lets us perform experiments on real-world inputs for the first time. We discover that with real-world preferences, even in a realistic implementation that satisfies the EF-TB and SP-L properties, agents may have surprisingly simple and plausible deviations from truthful reporting of preferences. To this end, we propose a novel strengthening of EF-TB, which dramatically reduces the potential for strategic deviations from truthful reporting in our experiments. A variant of our algorithm is now in production: on real course allocation problems it is much faster, has zero clearing error, and has stronger incentive properties than the prior state-of-the-art implementation.
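As a rough illustration of the price-search idea behind A-CEEI heuristics (and of what "clearing error" means), the toy sketch below runs tatonnement on item prices, with each agent demanding their best affordable bundle. It is not the paper's algorithm; the additive utilities, budgets, capacities, and update rule are all invented for the example.

```python
# Toy tatonnement sketch for an A-CEEI-style market; not the paper's heuristic.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(5)
n_agents, items, capacity, max_bundle = 6, 4, np.array([2, 2, 2, 2]), 2
utilities = rng.uniform(0.0, 10.0, size=(n_agents, items))     # additive toy utilities
budgets = 1.0 + 0.01 * rng.uniform(0.0, 1.0, size=n_agents)    # approximately equal incomes

bundles = [b for k in range(max_bundle + 1) for b in combinations(range(items), k)]

def demand(prices):
    """How many agents demand each item, when each picks their best affordable bundle."""
    counts = np.zeros(items)
    for a in range(n_agents):
        affordable = [b for b in bundles if sum(prices[i] for i in b) <= budgets[a]]
        best = max(affordable, key=lambda b: sum(utilities[a, i] for i in b))
        for i in best:
            counts[i] += 1
    return counts

prices = np.full(items, 0.5)
for step in range(500):                    # raise over-demanded, lower under-demanded prices
    excess = demand(prices) - capacity
    if np.all(excess <= 0):                # no item over-demanded: zero clearing error here
        break
    prices = np.maximum(prices + 0.01 * excess, 0.0)

print("prices:", np.round(prices, 3), "excess demand:", demand(prices) - capacity)
```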
Barseghyan and Molinari (2023) give sufficient conditions for semi-nonparametric point identification of parameters of interest in a mixture model of decision-making under risk, allowing for unobserved heterogeneity in utility functions and limited consideration. A key assumption in the model is that the heterogeneity of risk preferences is unobservable but context-independent. In this comment, we build on their insights and present identification results in a setting where the risk preferences are allowed to be context-dependent.
The rapid changes in the finance industry due to the increasing amount of data have revolutionized techniques for data processing and data analysis and have brought new theoretical and computational challenges. In contrast to classical stochastic control theory and other analytical approaches for solving financial decision-making problems, which rely heavily on model assumptions, new developments in reinforcement learning (RL) are able to make full use of the large amount of financial data with fewer model assumptions and to improve decisions in complex financial environments. This survey paper aims to review the recent developments and use of RL approaches in finance. We give an introduction to Markov decision processes, which form the setting for many of the commonly used RL approaches. Various algorithms are then introduced, with a focus on value- and policy-based methods that do not require any model assumptions. Connections are made with neural networks to extend the framework to encompass deep RL algorithms. Our survey concludes by discussing the application of these RL algorithms in a variety of decision-making problems in finance, including optimal execution, portfolio optimization, option pricing and hedging, market making, smart order routing, and robo-advising.
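The sketch below gives a minimal, self-contained example of a value-based method (tabular Q-learning) on a toy optimal-execution MDP: sell a fixed inventory over a few periods under a linear temporary price impact. It is not taken from the survey; the dynamics, impact model, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: tabular Q-learning on a toy liquidation MDP.
import numpy as np

rng = np.random.default_rng(6)
T, inventory0, price0, impact = 5, 10, 100.0, 0.5

# State: (time step, remaining inventory); action: number of shares sold this step.
Q = np.zeros((T, inventory0 + 1, inventory0 + 1))
alpha, gamma, eps = 0.1, 1.0, 0.1

def execute(a):
    """Revenue from selling a shares at a noisy price with linear temporary impact."""
    price = price0 + rng.normal(0.0, 1.0) - impact * a
    return a * price

for episode in range(20000):
    inv = inventory0
    for t in range(T):
        feasible = np.arange(inv + 1) if t < T - 1 else np.array([inv])  # liquidate by the end
        if rng.random() < eps:                                           # epsilon-greedy action
            a = rng.choice(feasible)
        else:
            a = feasible[np.argmax(Q[t, inv, feasible])]
        r = execute(a)
        inv_next = inv - a
        target = r if t == T - 1 else r + gamma * Q[t + 1, inv_next, :inv_next + 1].max()
        Q[t, inv, a] += alpha * (target - Q[t, inv, a])                  # Q-learning update
        inv = inv_next

# Greedy liquidation schedule implied by the learned Q-values.
inv, schedule = inventory0, []
for t in range(T):
    a = inv if t == T - 1 else int(np.argmax(Q[t, inv, :inv + 1]))
    schedule.append(a)
    inv -= a
print("learned liquidation schedule:", schedule)
```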