Low Reynolds number fluid flows are governed by the Stokes equations. In two dimensions, Stokes flows can be described by two analytic functions, known as Goursat functions. Brubeck and Trefethen (2022) recently introduced a lightning Stokes solver that uses rational functions to approximate the Goursat functions in polygonal domains. In this paper, we present a solver for computing 2D Stokes flows in domains with smooth boundaries, as well as in multiply-connected domains, using lightning and AAA rational approximation (Nakatsukasa et al., 2018). This leads to a new rational approximation algorithm, "LARS", which is suitable for computing a wide range of bounded 2D Stokes flow problems. After validating our solver against known analytical solutions, we solve a variety of 2D Stokes flow problems with physical and engineering applications. The computations take less than a second and give solutions with at least 6-digit accuracy.
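As a rough illustration of the core building block, here is a minimal Python sketch of the AAA algorithm: greedy selection of support points plus SVD least-squares weights in barycentric form. The target function and sample set below are illustrative choices, not taken from the paper.

```python
import numpy as np

def aaa(Z, F, tol=1e-10, mmax=50):
    """Barycentric rational approximation of data F on sample points Z
    (minimal AAA: greedy support points + SVD least-squares weights)."""
    Z, F = np.asarray(Z, dtype=complex), np.asarray(F, dtype=complex)
    mask = np.ones(len(Z), dtype=bool)        # points not yet used as support points
    zj, fj = np.empty(0, complex), np.empty(0, complex)
    R = np.full(len(Z), F.mean())             # current approximant on Z
    for _ in range(mmax):
        j = np.argmax(np.abs(F - R) * mask)   # greedy: pick the worst-fit point
        zj, fj = np.append(zj, Z[j]), np.append(fj, F[j])
        mask[j] = False
        C = 1.0 / (Z[mask, None] - zj[None, :])    # Cauchy matrix
        A = (F[mask, None] - fj[None, :]) * C      # Loewner matrix
        w = np.linalg.svd(A)[2][-1].conj()         # weights: smallest singular vector
        R = F.copy()
        R[mask] = (C @ (w * fj)) / (C @ w)         # barycentric evaluation off support
        if np.max(np.abs(F - R)) <= tol * np.max(np.abs(F)):
            break
    return zj, fj, w

# Example: a function with a branch point just outside the sample interval.
Z = np.linspace(0, 1, 1000)
zj, fj, w = aaa(Z, np.sqrt(Z + 1e-3))
print(len(zj), "support points")
```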
We develop three new methods to implement any Linear Combination of Unitaries (LCU), a powerful quantum algorithmic tool with diverse applications. While the standard LCU procedure requires several ancilla qubits and sophisticated multi-qubit controlled operations, our methods consume significantly fewer quantum resources. The first method (Single-Ancilla LCU) estimates expectation values of observables with respect to any quantum state prepared by an LCU procedure while requiring only a single ancilla qubit and quantum circuits of shorter depth. The second approach (Analog LCU) is a simple, physically motivated, continuous-time analogue of LCU, tailored to hybrid qubit-qumode systems. The third method (Ancilla-free LCU) requires no ancilla qubit at all and is useful when we are interested in the projection of a quantum state (prepared by the LCU procedure) onto some subspace of interest. We apply the first two techniques to develop new quantum algorithms for a wide range of practical problems, including Hamiltonian simulation, ground-state preparation, property estimation, and quantum linear systems. Remarkably, despite consuming fewer quantum resources, they retain a provable quantum advantage. The third technique allows us to connect discrete- and continuous-time quantum walks with their classical counterparts. It also unifies the recently developed optimal quantum spatial search algorithms in both these frameworks and leads to the development of new ones. Additionally, using this method, we establish a relationship between discrete-time and continuous-time quantum walks, making inroads into a long-standing open problem.
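For contrast with these resource-reduced variants, the following NumPy sketch simulates the standard LCU primitive at the statevector level: PREPARE loads the coefficients onto an ancilla register, SELECT applies the controlled unitaries, and postselecting the ancilla on the unprepared state yields A|psi> = sum_i c_i U_i |psi> up to normalization. The coefficients and two-qubit unitaries below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    """Random n x n unitary (QR of a complex Gaussian matrix, phases fixed)."""
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

c = np.array([0.6, 0.3, 0.1, 0.2])            # positive LCU coefficients
U = [haar_unitary(4) for _ in range(len(c))]  # system unitaries on two qubits
psi = np.zeros(4, dtype=complex); psi[0] = 1  # system state |00>

prep = np.sqrt(c / c.sum())                   # PREPARE: |0> -> sum_i sqrt(c_i/||c||_1)|i>
# SELECT applies U_i controlled on ancilla |i>; then UNPREPARE = PREPARE^dagger.
joint = np.stack([prep[i] * (U[i] @ psi) for i in range(len(c))])
branch = prep @ joint                         # ancilla-|0> branch after UNPREPARE

A_psi = sum(ci * Ui for ci, Ui in zip(c, U)) @ psi
assert np.allclose(branch, A_psi / c.sum())   # postselection implements A|psi>/||c||_1
print("success probability:", np.linalg.norm(branch) ** 2)
```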
Bayesian linear mixed-effects models and Bayesian ANOVA are increasingly being used in the cognitive sciences to perform null hypothesis tests, where a null hypothesis that an effect is zero is compared with an alternative hypothesis that the effect exists and is different from zero. While software tools for Bayes factor null hypothesis tests are easily accessible, how to specify the data and the model correctly is often not clear. In Bayesian approaches, many authors aggregate the data at the by-subject level and estimate Bayes factors on the aggregated data. Here, we apply simulation-based calibration for model inference to several example experimental designs to demonstrate that, as with frequentist analysis, such null hypothesis tests on aggregated data can be problematic in Bayesian analysis. Specifically, when random slope variances differ (i.e., the sphericity assumption is violated), Bayes factors are too conservative for contrasts where the variance is small and too liberal for contrasts where the variance is large. Running Bayesian ANOVA on aggregated data can likewise lead to biased Bayes factors if the sphericity assumption is violated. Moreover, Bayes factors for by-subject aggregated data are biased (too liberal) when random item slope variance is present but ignored in the analysis. These problems can be circumvented or reduced by running Bayesian linear mixed-effects models on non-aggregated data, such as individual trials, and by explicitly modeling the full random-effects structure. Reproducible code is available from \url{https://osf.io/mjf47/}.
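As a rough illustration of the aggregation issue, the sketch below simulates trial-level data with heterogeneous by-subject random slopes and contrasts by-subject aggregation with a trial-level mixed model. It uses a frequentist statsmodels fit as a stand-in for the Bayesian analyses in the paper, and all parameter values are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_trials = 40, 50
rows = []
for s in range(n_subj):
    slope = 0.2 + rng.normal(0, 0.5)                # by-subject random slope
    for cond in (-0.5, 0.5):
        y = 1.0 + slope * cond + rng.normal(0, 1.0, n_trials)  # trial-level noise
        rows += [{"subj": s, "cond": cond, "y": v} for v in y]
trials = pd.DataFrame(rows)

# By-subject aggregation discards the trial-level variance component, conflating
# it with the random-slope variance in any subsequent test on the means.
aggregated = trials.groupby(["subj", "cond"], as_index=False)["y"].mean()

# A trial-level mixed model with the full random-effects structure keeps the
# variance components separate.
m = smf.mixedlm("y ~ cond", trials, groups="subj", re_formula="~cond").fit()
print(m.summary())
```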
Including a large number of predictors in the imputation model underlying a multiple imputation (MI) procedure is one of the most challenging tasks imputers face. A variety of high-dimensional MI techniques can help, but there has been limited research on their relative performance. In this study, we investigated a wide range of extant high-dimensional MI techniques that can handle a large number of predictors in the imputation models and general missing data patterns. We assessed the relative performance of seven high-dimensional MI methods with a Monte Carlo simulation study and a resampling study based on real survey data. The performance of the methods was defined by the degree to which they facilitate unbiased and confidence-valid estimates of the parameters of complete data analysis models. We found that using a lasso penalty or forward selection to select the predictors used in the MI model, and using principal component analysis to reduce the dimensionality of auxiliary data, produce the best results.
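A minimal scikit-learn sketch of one of the strategies evaluated: chained-equations imputation with a lasso penalty selecting the predictors for each imputation model. Note that IterativeImputer performs a single imputation and stands in here for a full MI procedure; the synthetic data and tuning values are illustrative.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p = 200, 100                                # many predictors relative to n
X = rng.normal(size=(n, p))
X[rng.random(size=X.shape) < 0.1] = np.nan     # ~10% of values missing at random

# Each variable is imputed from the others by a lasso regression, so the penalty
# effectively selects which predictors enter each imputation model.
imputer = IterativeImputer(estimator=Lasso(alpha=0.1), max_iter=10, random_state=0)
X_imputed = imputer.fit_transform(X)
print(np.isnan(X_imputed).sum(), "missing values remain")
```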
The work of Kalman and Bucy has established a duality between filtering and optimal control in the context of time-continuous linear systems. This duality has recently been extended to time-continuous nonlinear systems in terms of an optimization problem constrained by a backward stochastic partial differential equation. Here we revisit this problem from the perspective of appropriate forward-backward stochastic differential equations. This approach sheds new light on the estimation problem and provides a unifying perspective. It is also demonstrated that certain formulations of the estimation problem lead to deterministic formulations similar to the linear Gaussian case originally investigated by Kalman and Bucy. Finally, optimal control of partially observed diffusion processes is discussed as an application of the proposed estimators.
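For concreteness, the following sketch simulates the time-continuous Kalman-Bucy filter for a scalar linear-Gaussian model, the setting of the classical duality, using an Euler-Maruyama discretization; all model parameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
a, h, q, r = -0.5, 1.0, 0.2, 0.1   # dX = a X dt + sqrt(q) dW,  dY = h X dt + sqrt(r) dV
dt, T = 1e-3, 5.0

x = 1.0                             # true signal
m, P = 0.0, 1.0                     # filter mean and variance
for _ in range(int(T / dt)):
    x += a * x * dt + np.sqrt(q * dt) * rng.normal()
    dy = h * x * dt + np.sqrt(r * dt) * rng.normal()   # observation increment
    K = P * h / r                                      # Kalman gain
    m += a * m * dt + K * (dy - h * m * dt)            # mean update
    P += (2 * a * P + q - K**2 * r) * dt               # Riccati equation
print(f"true state {x:.3f}, filter mean {m:.3f}, variance {P:.4f}")
```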
It is challenging to numerically solve nonlinear partial differential equations whose solutions involve discontinuities. In the context of numerical simulators for multi-phase flow in porous media, there exists a long-standing issue known as the Grid Orientation Effect (GOE), wherein different numerical solutions can be obtained from grids with different orientations under certain unfavorable conditions. Our perspective is that GOE arises from numerical instability near displacement fronts: spurious oscillations accompanying sharp fronts, if not adequately suppressed, lead to orientation-dependent solutions. To reduce or even eliminate GOE, we propose adding adaptive artificial viscosity when solving the saturation equation. We demonstrate that appropriate artificial viscosity can effectively reduce or even eliminate GOE. The proposed numerical method can easily be applied to practical engineering problems.
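The sketch below illustrates the mechanism in 1D, as a simplified stand-in for the paper's 2D setting: a first-order upwind step for a Buckley-Leverett-type saturation equation, with an artificial-viscosity term switched on adaptively near the sharp front. The flux model, grid, threshold, and viscosity coefficient are illustrative assumptions.

```python
import numpy as np

def flux(s, mobility_ratio=2.0):
    """Buckley-Leverett fractional flow."""
    return s**2 / (s**2 + mobility_ratio * (1 - s) ** 2)

nx = 200
dx, dt = 1.0 / nx, 1e-3
s = np.zeros(nx); s[0] = 1.0                 # injection at the left boundary
for _ in range(300):
    f = flux(s)
    ds = np.abs(np.diff(s, prepend=s[0]))
    nu = 0.5 * dx * (ds > 0.1)               # adaptive viscosity: active near the front
    upwind = -(f - np.roll(f, 1)) / dx       # first-order upwind convection
    visc = nu * (np.roll(s, -1) - 2 * s + np.roll(s, 1)) / dx**2
    s[1:-1] += dt * (upwind + visc)[1:-1]    # boundary cells held fixed
print("front position:", np.argmax(s < 0.05) * dx)
```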
Venkatachala on the one hand, and Avdispahi\'c & Zejnulahi on the other, both studied integer sequences with an unusual sum property defined in a greedy way, and proved many results about them. However, their proofs were rather lengthy and required numerous cases. In this paper, I provide a different approach, via finite automata, that can prove the same results (and more) in a simple, unified way. Instead of case analysis, we use a decision procedure implemented in the free software Walnut. Using these ideas, we can prove a conjecture of Quet and find connections between Quet's sequence and the "married" functions of Hofstadter.
Maxwell's equations are a collection of coupled partial differential equations (PDEs) that, together with the Lorentz force law, constitute the basis of classical electromagnetism and electric circuits. Effectively solving Maxwell's equations is crucial in various fields, such as electromagnetic scattering and antenna design optimization. Physics-informed neural networks (PINNs) have proven powerful for solving PDEs. However, PINNs still struggle to solve Maxwell's equations in heterogeneous media. To this end, we propose a domain-adaptive PINN (da-PINN) to solve inverse problems of Maxwell's equations in heterogeneous media. First, we introduce a location parameter for the media interface to decompose the whole domain into several sub-domains. Second, we incorporate the electromagnetic interface conditions into the loss function to improve prediction performance near the interface. Then, we propose a domain-adaptive training strategy for da-PINN. Finally, the effectiveness of da-PINN is verified with two case studies.
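The following PyTorch sketch conveys the decomposition idea: one subnetwork per sub-domain split at an interface location, with interface continuity terms added to the loss. For brevity it solves a 1D piecewise-coefficient Poisson problem as a stand-in for the 2D Maxwell setting; the networks, coefficients, and loss weights are illustrative, and in da-PINN the interface location parameter and material properties would themselves be inferred.

```python
import torch

def mlp():
    return torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                               torch.nn.Linear(32, 32), torch.nn.Tanh(),
                               torch.nn.Linear(32, 1))

x0 = 0.5                                   # interface location parameter
net1, net2 = mlp(), mlp()                  # one subnetwork per sub-domain
eps1, eps2 = 1.0, 4.0                      # piecewise material coefficient
opt = torch.optim.Adam(list(net1.parameters()) + list(net2.parameters()), lr=1e-3)

def u_and_du(net, x):
    x = x.clone().requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    return u, du

def pde_residual(net, x, eps):             # residual of -(eps u')' = 1, eps constant
    x = x.clone().requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return eps * d2u + 1.0

for step in range(2000):
    xa = torch.rand(64, 1) * x0                        # collocation points, sub-domain 1
    xb = x0 + torch.rand(64, 1) * (1.0 - x0)           # collocation points, sub-domain 2
    xi = torch.full((4, 1), x0)                        # interface points
    u1, du1 = u_and_du(net1, xi)
    u2, du2 = u_and_du(net2, xi)
    loss = (pde_residual(net1, xa, eps1).pow(2).mean()
            + pde_residual(net2, xb, eps2).pow(2).mean()
            + (u1 - u2).pow(2).mean()                  # interface: u continuous
            + (eps1 * du1 - eps2 * du2).pow(2).mean()  # interface: flux continuous
            + net1(torch.zeros(1, 1)).pow(2).mean()    # boundary u(0) = 0
            + net2(torch.ones(1, 1)).pow(2).mean())    # boundary u(1) = 0
    opt.zero_grad(); loss.backward(); opt.step()
print("final loss:", float(loss))
```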
Operator splitting is a popular divide-and-conquer strategy for solving differential equations. Typically, the right-hand side of the differential equation is split into a number of parts that are then integrated separately. Many methods are known that split the right-hand side into two parts. This approach is limiting, however, and there are situations when 3-splitting is more natural and ultimately more advantageous. The second-order Strang operator-splitting method readily generalizes to a right-hand side split into any number of operators. It is arguably the most popular method for 3-splitting because of its efficiency, ease of implementation, and intuitive nature. Other 3-splitting methods exist, but they are less well known, and analysis and evaluation of their performance in practice are scarce. We demonstrate the effectiveness of some alternative second-order 3-splitting methods relative to Strang splitting on two problems: the reaction-diffusion Brusselator, which can be split into three parts that each have closed-form solutions, and the kinetic Vlasov--Poisson equations used in semi-Lagrangian plasma simulations. We find alternative second-order 3-operator-splitting methods that realize efficiency gains of 10\%--20\% over traditional Strang splitting. Our analysis for the practical assessment of the efficiency of operator-splitting methods includes the computational cost of the integrators and can be used in method design.
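For reference, here is a minimal sketch of the 3-split Strang baseline on the linear test problem y' = (A + B + C)y, where every sub-flow is an exact matrix exponential. The matrices are arbitrary illustrative choices; the palindromic composition is what makes the method second order.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
A, B, C = (rng.normal(size=(4, 4)) * 0.3 for _ in range(3))

def strang3(y, dt):
    """One step of exp(A dt/2) exp(B dt/2) exp(C dt) exp(B dt/2) exp(A dt/2)."""
    for M in (expm(A * dt / 2), expm(B * dt / 2), expm(C * dt),
              expm(B * dt / 2), expm(A * dt / 2)):
        y = M @ y
    return y

y, dt, T = np.ones(4), 0.01, 1.0
for _ in range(int(T / dt)):
    y = strang3(y, dt)
exact = expm((A + B + C) * T) @ np.ones(4)
print("error:", np.linalg.norm(y - exact))   # O(dt^2) global error
```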
Whisper is a recent Automatic Speech Recognition (ASR) model displaying impressive robustness to both out-of-distribution inputs and random noise. In this work, we show that this robustness does not carry over to adversarial noise. We show that we can degrade Whisper performance dramatically, or even transcribe a target sentence of our choice, by generating very small input perturbations with signal-to-noise ratios of 35-45 dB. We also show that by fooling the Whisper language detector we can very easily degrade the performance of multilingual models. These vulnerabilities of a widely used open-source model have practical security implications and emphasize the need for adversarially robust ASR.
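The following PyTorch template conveys the general shape of such an attack (it is not the authors' exact procedure): gradient descent on an additive perturbation of the raw audio, projected after each step so the signal-to-noise ratio stays above a target. `model` and `loss_fn` are placeholders for an ASR model and a loss toward the target transcription.

```python
import torch

def snr_project(delta, audio, snr_db=40.0):
    """Rescale delta so that 20*log10(||audio|| / ||delta||) >= snr_db."""
    max_norm = audio.norm() / 10 ** (snr_db / 20)
    norm = delta.norm()
    return delta if norm <= max_norm else delta * (max_norm / norm)

def targeted_attack(model, loss_fn, audio, target_tokens, steps=100, lr=1e-3):
    delta = torch.zeros_like(audio, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = loss_fn(model(audio + delta), target_tokens)  # push toward the target
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.copy_(snr_project(delta, audio))           # enforce the SNR constraint
    return (audio + delta).detach()
```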
Robustness to adversarial attacks is typically evaluated with adversarial accuracy. While essential, this metric does not capture all aspects of robustness and in particular leaves out the question of how many perturbations can be found for each point. In this work, we introduce an alternative approach, adversarial sparsity, which quantifies how difficult it is to find a successful perturbation given both an input point and a constraint on the direction of the perturbation. We show that sparsity provides valuable insight into neural networks in multiple ways: for instance, it reveals important differences between current state-of-the-art robust models that accuracy analysis does not, and it suggests approaches for improving their robustness. When applied to broken defenses that are effective against weak attacks but not strong ones, sparsity can discriminate between totally ineffective and partially effective defenses. Finally, with sparsity we can measure increases in robustness that do not affect accuracy: we show, for example, that data augmentation can by itself increase adversarial robustness, without using adversarial training.
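As a rough Monte Carlo proxy for the idea (not the paper's exact definition), the sketch below estimates, for a single input, the fraction of random unit directions along which a fixed-size step changes a classifier's prediction; `model` is a placeholder classifier taking a batched input.

```python
import torch

def direction_success_rate(model, x, epsilon=0.5, n_dirs=1000):
    """Fraction of random directions in which a step of size epsilon flips
    the prediction on input x (shape (1, ...)). A high rate suggests many
    successful perturbation directions, i.e. low adversarial sparsity."""
    with torch.no_grad():
        label = model(x).argmax(dim=-1)
        hits = 0
        for _ in range(n_dirs):
            d = torch.randn_like(x)
            d = d / d.norm()                              # random unit direction
            if model(x + epsilon * d).argmax(dim=-1) != label:
                hits += 1
    return hits / n_dirs
```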