Partially identified models, in which the parameters cannot be uniquely identified, arise frequently in statistical analysis. Researchers often analyze such models with Bayesian inference, but when an off-the-shelf MCMC sampling algorithm is applied to a partially identified model, computational performance can be poor. Importance sampling with a transparent reparameterization (TP) is one remedy: the model is known to be identified with respect to the new parameterization, and conjugate convenience priors may allow faster, i.i.d. Monte Carlo sampling. In this paper, we explain the importance sampling method with a TP and with a pseudo-TP, an alternative we introduce because finding a TP can be difficult. We then test the methods' performance in several scenarios and compare it to that of an off-the-shelf MCMC method, Gibbs sampling, applied in the original parameterization. Importance sampling with TP (ISTP) generally outperforms off-the-shelf MCMC, as seen in compute times and trace plots, but the TP the method requires may not be easy to find. The pseudo-TP method, on the other hand, shows mixed results and room for improvement, since it relies on an approximation that may not be adequate for a given model and dataset.
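To make the generic importance-sampling step concrete, the following sketch reweights i.i.d. draws from a conjugate convenience posterior on the transparent parameterization by the ratio of the analyst's prior to the convenience prior; the function names and arguments are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

def is_with_tp(sample_convenience_posterior, log_target_prior, log_convenience_prior,
               to_original_params, n_draws=10_000, rng=None):
    """Sketch of importance sampling under a (pseudo-)transparent reparameterization.

    sample_convenience_posterior(n, rng) -- i.i.d. draws of the new parameter phi from
        the posterior under a conjugate convenience prior.
    log_target_prior / log_convenience_prior -- log prior densities on phi implied by
        the analyst's prior and by the convenience prior, respectively.
    to_original_params -- maps phi back to the original (partially identified) parameters.
    """
    rng = np.random.default_rng(rng)
    phi = sample_convenience_posterior(n_draws, rng)             # i.i.d. Monte Carlo draws
    log_w = log_target_prior(phi) - log_convenience_prior(phi)   # prior ratio = importance weight
    w = np.exp(log_w - log_w.max())                              # stabilize before normalizing
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)                                   # effective sample size diagnostic
    return to_original_params(phi), w, ess
```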
The Kennedy and O'Hagan (KOH) calibration framework uses coupled Gaussian processes (GPs) to meta-model an expensive simulator (first GP), tune its ``knobs'' (calibration inputs) to best match observations from a real physical/field experiment, and correct for any modeling bias (second GP) when predicting under new field conditions (design inputs). There are well-established methods for placing design inputs for data-efficient planning of a simulation campaign in isolation, i.e., without field data: space-filling designs, or criteria such as minimum integrated mean-squared prediction error (IMSPE). Analogues within the coupled-GP KOH framework are largely absent from the literature. Here we derive a closed-form IMSPE criterion for sequentially acquiring new simulator data for KOH. We illustrate how acquisitions space-fill in design space but concentrate in calibration space. The closed-form IMSPE yields a closed-form gradient for efficient numerical optimization. We demonstrate that our KOH-IMSPE strategy leads to more efficient simulation campaigns on benchmark problems, and conclude with a showcase application to equilibrium concentrations of rare earth elements in a liquid-liquid extraction reaction.
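Schematically, and suppressing the paper's closed-form details, the sequential criterion selects the next simulator run $\tilde x$ (a design-calibration input pair) to minimize the integrated predictive variance of the coupled-GP surrogate after hypothetically adding that run:

\[
\tilde x^{\ast} \;=\; \operatorname*{argmin}_{\tilde x}\; \mathrm{IMSPE}(\tilde x)
\;=\; \operatorname*{argmin}_{\tilde x} \int_{\mathcal{X}} \sigma_n^{2}\!\left(x \mid \tilde x\right)\, dx ,
\]

where $\sigma_n^{2}(x \mid \tilde x)$ denotes the KOH predictive variance at field conditions $x$ given the current data plus the candidate run; the paper's contribution is a closed form for this integral and its gradient.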
The aim of this article is to present a hybrid finite element/finite difference method for reconstructing electromagnetic properties within a realistic breast phantom. This is done by studying the coefficients representing these properties (electric permittivity and conductivity) in a system of Maxwell's equations. This information is valuable because the coefficients can reveal the types of tissue within the breast and, in applications, could be used to detect the shapes and locations of tumours. Because of the ill-posed nature of this coefficient inverse problem, we approach it as an optimization problem by introducing the corresponding Tikhonov functional and, in turn, a Lagrangian. These are minimized through an interplay between finite element and finite difference methods for the solution of the direct and adjoint problems, followed by a conjugate gradient method applied on an adaptively refined mesh.
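For orientation, a generic Tikhonov functional for this type of coefficient inverse problem balances a data misfit on the observation boundary against regularization toward background values of the sought coefficients; the exact observation operator and norms used in the paper may differ:

\[
J(\varepsilon,\sigma) \;=\; \frac{1}{2}\int_{0}^{T}\!\!\int_{\Gamma_{\mathrm{obs}}} \left| E - E_{\mathrm{obs}} \right|^{2} \, dS\, dt
\;+\; \frac{\gamma_{1}}{2}\,\|\varepsilon - \varepsilon_{0}\|^{2}
\;+\; \frac{\gamma_{2}}{2}\,\|\sigma - \sigma_{0}\|^{2},
\]

where $E$ is the computed electric field, $E_{\mathrm{obs}}$ the measured data, and $\gamma_1,\gamma_2$ regularization parameters; the Lagrangian then couples $J$ to the state equation, and its gradient drives the conjugate gradient updates.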
To avoid ineffective collisions between the equilibrium states, the hybrid method with deviational particles (HDP) has been proposed for integrating the Fokker-Planck-Landau system; it leaves open, however, the issue of sampling deviational particles from the high-dimensional source term. In this paper, we present an adaptive sampling (AS) strategy that first adaptively reconstructs a piecewise constant approximation of the source term based on sequential clustering via discrepancy estimation, and then samples deviational particles directly from the resulting adaptive piecewise constant function without rejection. The mixture discrepancy, which is easily calculated thanks to its explicit analytical expression, is employed as the measure of uniformity in place of the star discrepancy, whose calculation is NP-hard. The resulting method, dubbed HDP-AS, runs approximately ten times faster than HDP while maintaining the same accuracy on the Landau damping, two-stream instability, bump-on-tail, and Rosenbluth's test problems.
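Once the piecewise constant approximation is in hand, the rejection-free sampling step amounts to choosing a cell with probability proportional to its mass and drawing uniformly inside it; the sketch below assumes axis-aligned hyperrectangular cells and illustrative array names.

```python
import numpy as np

def sample_piecewise_constant(cell_lo, cell_hi, cell_values, n_particles, rng=None):
    """Draw particle positions from a piecewise constant function without rejection.

    cell_lo, cell_hi : (n_cells, d) lower/upper corners of the (hyperrectangular) cells.
    cell_values      : (n_cells,) nonnegative constant value of |source| on each cell.
    Sign/weight bookkeeping for deviational particles is left to the caller.
    """
    rng = np.random.default_rng(rng)
    vol = np.prod(cell_hi - cell_lo, axis=1)          # cell volumes
    mass = cell_values * vol                          # probability mass per cell
    idx = rng.choice(len(mass), size=n_particles, p=mass / mass.sum())
    u = rng.random((n_particles, cell_lo.shape[1]))   # uniform position inside the cell
    return cell_lo[idx] + u * (cell_hi[idx] - cell_lo[idx])
```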
The consensus problem in distributed computing involves a network of agents that aim to compute the average of their initial vectors through local communication, represented by an undirected graph. This paper studies this problem using an average-case analysis approach, particularly over regular graphs. Traditional algorithms for the consensus problem often rely on worst-case performance evaluation, which may not reflect typical performance in real-world applications. Instead, we apply average-case analysis, focusing on the expected spectral distribution of eigenvalues to obtain a more realistic view of performance. Key contributions include deriving the optimal method for consensus on regular graphs, showing its relation to the Heavy Ball method, analyzing its asymptotic convergence rate, and comparing it to various first-order methods through numerical experiments.
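As a small illustration of the connection to the Heavy Ball method, the sketch below runs a momentum-accelerated consensus iteration with a gossip matrix on a ring (a 2-regular graph); the parameter choice here is illustrative, whereas the paper derives the optimal parameters for regular graphs from the expected spectral distribution.

```python
import numpy as np

def heavy_ball_consensus(W, x0, beta, n_iters):
    """Momentum-accelerated (Heavy Ball-style) consensus iteration.

    W  : (n, n) symmetric, doubly stochastic gossip matrix of the communication graph.
    x0 : (n,) initial values held by the agents.
    Update: x_{k+1} = (1 + beta) * W @ x_k - beta * x_{k-1}, i.e. plain averaging
    (x_{k+1} = W @ x_k) plus a momentum correction toward the previous iterate.
    """
    x_prev, x = x0.copy(), W @ x0
    for _ in range(n_iters - 1):
        x, x_prev = (1 + beta) * (W @ x) - beta * x_prev, x
    return x

# Ring graph on 8 agents with uniform neighbor weights.
n = 8
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25
x_final = heavy_ball_consensus(W, np.arange(n, dtype=float), beta=0.3, n_iters=100)
```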
Quantum Extreme Learning Machines (QELMs) have emerged as a promising framework for quantum machine learning. Their appeal lies in the rich feature map induced by the dynamics of a quantum substrate - the quantum reservoir - and in the efficient post-measurement training via linear regression. Here we study the expressivity of QELMs by decomposing their prediction into a Fourier series. We show that the achievable Fourier frequencies are determined by the data encoding scheme, while the Fourier coefficients depend on both the reservoir and the measurement. Notably, the expressivity of QELMs is fundamentally limited by the number of Fourier frequencies and the number of observables, while the complexity of the prediction hinges on the reservoir. As a cautionary note on scalability, we identify four sources that can lead to exponential concentration of the observables as the system size grows (randomness, hardware noise, entanglement, and global measurements) and show how this can turn QELMs into useless, input-agnostic oracles. In particular, our result on reservoir-induced concentration strongly indicates that quantum reservoirs drawn from a highly random ensemble make QELM models unscalable. Our analysis elucidates the potential and the fundamental limitations of QELMs, and lays the groundwork for systematically exploring quantum reservoir systems for other machine learning tasks.
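Schematically, for a one-dimensional input $x$ the QELM prediction admits a truncated Fourier representation

\[
f(x) \;=\; \sum_{\omega \in \Omega} c_{\omega}\, e^{\,i\omega x},
\]

where the accessible frequency set $\Omega$ is fixed by the data encoding and the coefficients $c_{\omega}$ are determined by the reservoir dynamics, the measured observables, and the trained linear weights; this is the decomposition whose size and concentration properties bound the expressivity discussed above.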
Marginal structural models are a popular method for estimating causal effects in the presence of time-varying exposures. Despite their popularity, no scalable non-parametric estimator exists for marginal structural models with multi-valued and time-varying treatments. In this paper, we use machine learning together with recent developments in semiparametric efficiency theory for longitudinal studies to propose such an estimator. The proposed estimator is based on a study of the non-parametric identifying functional, including first-order von Mises expansions as well as the efficient influence function and the efficiency bound. We show conditions under which the proposed estimator is efficient, asymptotically normal, and sequentially doubly robust, in the sense that it is consistent if, for each time point, either the outcome or the treatment mechanism is consistently estimated. We perform a simulation study to illustrate the properties of the estimator, and present the results of our motivating study of a COVID-19 dataset examining the impact of mobility on the cumulative number of observed cases.
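For concreteness, a marginal structural working model posits a parametric form for the counterfactual mean outcome under a treatment history $\bar a_t = (a_1, \ldots, a_t)$,

\[
\mathbb{E}\!\left[\, Y_t(\bar a_t) \,\right] \;=\; m(\bar a_t, t; \beta),
\]

and the estimator targets the parameter $\beta$ of this working model; the specific estimating equations, built from the efficient influence function, are developed in the paper.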
The preservation of stochastic orders by distortion functions has become a topic of increasing interest in the reliability analysis of coherent systems. The reason for this interest is that the reliability function of a coherent system with identically distributed components can be represented as a distortion function of the common reliability function of the components. In this framework, we study the preservation of the excess wealth order, the total time on test transform order, the decreasing mean residual life order, and the quantile mean inactivity time order by distortion functions. The results are applied to study the preservation of these stochastic orders under the formation of coherent systems with exchangeable components.
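The representation underlying these results is that the lifetime $T$ of a coherent system with identically distributed component lifetimes satisfies

\[
\bar F_{T}(t) \;=\; h\!\left(\bar F(t)\right), \qquad t \ge 0,
\]

where $\bar F$ is the common component reliability function and $h$ is a distortion function (continuous, increasing, with $h(0)=0$ and $h(1)=1$) determined by the system structure and, in the exchangeable case, by the dependence among the components; preservation results then give conditions on $h$ under which an order between component lifetimes carries over to the corresponding system lifetimes.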
Probabilistic programming combines general computer programming, statistical inference, and formal semantics to help systems make decisions when facing uncertainty. Probabilistic programs are ubiquitous, including having a significant impact on machine intelligence. While many probabilistic algorithms have been used in practice in different domains, their automated verification based on formal semantics is still a relatively new research area. In the last two decades, it has attracted much interest. Many challenges, however, remain. The work presented in this paper, probabilistic unifying relations (ProbURel), takes a step towards our vision to tackle these challenges. Our work is based on Hehner's predicative probabilistic programming, but there are several obstacles to the broader adoption of his work. Our contributions here include (1) the formalisation of its syntax and semantics by introducing an Iverson bracket notation to separate relations from arithmetic; (2) the formalisation of relations using Unifying Theories of Programming (UTP) and probabilities outside the brackets using summation over the topological space of the real numbers; (3) the constructive semantics for probabilistic loops using Kleene's fixed-point theorem; (4) the enrichment of its semantics from distributions to subdistributions and superdistributions to deal with the constructive semantics; (5) the unique fixed-point theorem to simplify the reasoning about probabilistic loops; and (6) the mechanisation of our theory in Isabelle/UTP, an implementation of UTP in Isabelle/HOL, for automated reasoning using theorem proving. We demonstrate our work with six examples, including problems in robot localisation, classification in machine learning, and the termination of probabilistic loops.
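To fix the notational idea (the exact syntax in the mechanisation may differ), a fair probabilistic choice between two assignments is captured by keeping the relation inside Iverson brackets and the probability arithmetic outside:

\[
(x := 0) \;\oplus_{1/2}\; (x := 1) \;\;=\;\; \tfrac{1}{2}\,\bigl[\,x' = 0\,\bigr] \;+\; \tfrac{1}{2}\,\bigl[\,x' = 1\,\bigr],
\]

where $[P]$ equals $1$ when the relation $P$ holds and $0$ otherwise, so the right-hand side assigns probability $1/2$ to each final state.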
Chaos is generic in strongly coupled recurrent networks of model neurons, and is thought to be an easily accessible dynamical regime in the brain. While neural chaos is typically seen as an impediment to robust computation, we show how such chaos might play a functional role in allowing the brain to learn and sample from generative models. We construct architectures that combine a classic model of neural chaos either with a canonical generative modeling architecture or with energy-based models of neural memory. We show that these architectures have appealing properties for sampling, including easy, biologically plausible control of sampling rates via overall gain modulation.
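One standard instantiation of a classic model of neural chaos is a random recurrent rate network whose overall gain controls the transition to chaos; the sketch below simulates such a network with a gain knob, leaving the coupling to a generative model or energy-based memory unspecified.

```python
import numpy as np

def chaotic_rate_network(n=500, g=1.5, tau=1.0, dt=0.1, n_steps=2000, rng=None):
    """Random recurrent rate network: tau * dx/dt = -x + g * J @ tanh(x).

    J has i.i.d. Gaussian entries with variance 1/n, so the overall gain g sets the
    regime (roughly, a stable fixed point for g < 1 and chaos for g > 1 in large
    networks); modulating g online is the gain-modulation knob referred to above.
    """
    rng = np.random.default_rng(rng)
    J = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))       # random coupling matrix
    x = rng.normal(0.0, 0.5, size=n)                          # random initial state
    traj = np.empty((n_steps, n))
    for t in range(n_steps):
        x = x + (dt / tau) * (-x + g * (J @ np.tanh(x)))      # forward Euler step
        traj[t] = x
    return traj
```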
We study the decidability and complexity of the non-cooperative rational synthesis problem (abbreviated as NCRSP) for several classes of probabilistic strategies. We show that NCRSP for stationary strategies and Muller objectives is in 3-EXPTIME, and that if we restrict the strategies of the environment players to be positional, NCRSP becomes NEXPSPACE-solvable. On the other hand, NCRSP_>, a variant of NCRSP, is shown to be undecidable even for pure finite-state strategies and terminal reachability objectives. Finally, we show that NCRSP becomes EXPTIME-solvable if we restrict the memory of a strategy to the most recently visited t vertices, where t is linear in the size of the game.