On Lipschitz domains, we study a Darcy-Forchheimer problem coupled with a singular heat equation through a nonlinear forcing term that depends on the temperature. By singular we mean that the heat source corresponds to a Dirac measure. We establish the existence of solutions for a model that allows a diffusion coefficient in the heat equation depending on the temperature. For such a model, we also propose a finite element discretization scheme and provide an a priori convergence analysis. When the aforementioned diffusion coefficient is constant, we devise an a posteriori error estimator and investigate its reliability and efficiency properties. We conclude by devising an adaptive loop based on the proposed error estimator and presenting numerical experiments.
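To fix ideas, a prototypical system of the type studied reads as follows; the notation here is illustrative, and the precise model, coefficients, and boundary conditions are those specified in the paper: $$ \mathbf{u} + |\mathbf{u}|\mathbf{u} + \nabla p = f(T), \qquad \operatorname{div}\mathbf{u} = 0, \qquad -\operatorname{div}\big(\kappa(T)\nabla T\big) + \mathbf{u}\cdot\nabla T = \delta_z \quad \text{in } \Omega, $$ where $\mathbf{u}$ denotes the velocity, $p$ the pressure, $T$ the temperature, $\kappa$ the (possibly temperature-dependent) diffusion coefficient, and $\delta_z$ the Dirac measure supported at an interior point $z \in \Omega$.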
The discovery of partial differential equations (PDEs) is a challenging task that involves both theoretical and empirical methods. Machine learning approaches have been developed for this problem; however, existing methods often struggle to identify the underlying equation accurately in the presence of noise. In this study, we present a new approach to discovering PDEs that combines variational Bayes and sparse linear regression. We pose PDE discovery as the problem of learning the relevant basis functions from a predefined dictionary, and we propose a variational Bayes-based approach to accelerate the overall process. To ensure sparsity, we employ a spike-and-slab prior. We illustrate the efficacy of our strategy on several examples, including the Burgers, Korteweg-de Vries, Kuramoto-Sivashinsky, wave, and heat (1D as well as 2D) equations. Our method offers a promising avenue for discovering PDEs from data and has potential applications in fields such as physics, engineering, and biology.
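As a concrete illustration of the dictionary viewpoint, the sketch below identifies Burgers' equation $u_t = -u u_x + \nu u_{xx}$ from gridded data; sequential thresholded least squares stands in here for the paper's spike-and-slab variational Bayes, and the function names, dictionary, and tolerances are our assumptions.

    import numpy as np

    def finite_diff(u, d, axis, order=1):
        """Central finite differences along the given axis."""
        g = np.gradient(u, d, axis=axis)
        return g if order == 1 else np.gradient(g, d, axis=axis)

    def discover_pde(u, dx, dt, threshold=0.05, n_iter=10):
        """Sparse regression of u_t on a dictionary of candidate terms.

        u is sampled on a space-time grid: u[i, j] = u(x_i, t_j).
        """
        u_t = finite_diff(u, dt, axis=1)
        u_x = finite_diff(u, dx, axis=0)
        u_xx = finite_diff(u, dx, axis=0, order=2)
        names = ["1", "u", "u_x", "u_xx", "u*u_x"]
        Theta = np.stack([np.ones_like(u), u, u_x, u_xx, u * u_x],
                         axis=-1).reshape(-1, len(names))
        b = u_t.reshape(-1)
        xi = np.linalg.lstsq(Theta, b, rcond=None)[0]
        for _ in range(n_iter):                      # enforce sparsity
            small = np.abs(xi) < threshold
            xi[small] = 0.0
            if (~small).any():
                xi[~small] = np.linalg.lstsq(Theta[:, ~small], b, rcond=None)[0]
        return dict(zip(names, xi))                  # nonzero entries = learned PDE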
Quantum data access and quantum processing can make certain classically intractable learning tasks feasible. However, quantum capabilities will only be available to a select few in the near future. Thus, reliable schemes that allow classical clients to delegate learning to untrusted quantum servers are required to facilitate widespread access to quantum learning advantages. Building on a recently introduced framework of interactive proof systems for classical machine learning, we develop a framework for classical verification of quantum learning. We exhibit learning problems that a classical learner cannot efficiently solve on their own, but that they can efficiently and reliably solve when interacting with an untrusted quantum prover. Concretely, we consider the problems of agnostically learning parities and Fourier-sparse functions with respect to distributions with uniform input marginal. We propose a new quantum data access model that we call "mixture-of-superpositions" quantum examples, based on which we give efficient quantum learning algorithms for these tasks. Moreover, we prove that agnostic quantum learning of parities and Fourier-sparse functions can be efficiently verified by a classical verifier with only random example or statistical query access. Finally, we showcase two general scenarios in learning and verification in which quantum mixture-of-superpositions examples do not lead to sample complexity improvements over classical data. Our results demonstrate that the potential power of quantum data for learning tasks, while not unlimited, can be harnessed by classical agents through interaction with untrusted quantum entities.
We investigate the propagation of acoustic singular surfaces, specifically linear shock waves and nonlinear acceleration waves, in a class of inhomogeneous gases whose ambient mass density varies exponentially. Employing the mathematical tools of singular surface theory, we first determine the evolution of both the jump amplitudes and the locations/velocities of their associated wavefronts, along with a variety of related analytical results. We then turn to what have become known as Krylov subspace spectral (KSS) methods to numerically simulate the evolution of the full waveforms under consideration. These simulations are not only performed quite efficiently, since KSS allows the use of `large' CFL numbers, but also quite accurately, in the sense of capturing theoretically predicted features of the solution profiles more faithfully than other time-stepping methods, since KSS customizes the computation of the components of the solution corresponding to the different frequencies involved. The presentation concludes with a list of possible acoustics-related follow-on studies.
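For orientation: in generic singular-surface analyses, the jump amplitude $a(t)$ of an acceleration wave obeys a Bernoulli equation whose coefficients encode the ambient state; a representative form and its closed-form solution (the specific coefficients for the exponential-density case are those derived in the paper) are $$ \frac{da}{dt} + \alpha(t)\,a = \beta(t)\,a^{2}, \qquad a(t) = \frac{a_{0}\, e^{-\int_{0}^{t} \alpha(s)\,ds}}{1 - a_{0} \int_{0}^{t} \beta(s)\, e^{-\int_{0}^{s} \alpha(r)\,dr}\, ds}, $$ with finite-time blow-up of the amplitude (i.e., shock formation) occurring precisely when the denominator vanishes.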
Two combined numerical methods for solving time-varying semilinear differential-algebraic equations (DAEs) are obtained, and their convergence and correctness are proved. The methods are constructed using time-varying spectral projectors that can be found numerically, which makes it possible to solve the DAE numerically in its original form without additional analytical transformations. To improve the accuracy of the second method, recalculation is used. The developed methods are applicable to DAEs whose continuous nonlinear part may be nondifferentiable in time, and the presented theorems on the global solvability of the DAE and the convergence of the methods do not impose restrictions of global Lipschitz type, which extends the scope of the methods. Fulfillment of the conditions of the global solvability theorem ensures the existence of a unique exact solution on any given time interval, so an approximate solution can likewise be sought on any time interval. Numerical examples illustrating the capabilities of the methods and their effectiveness in various situations are provided; to this end, mathematical models of the dynamics of electrical circuits are considered. It is shown that the results of the theoretical and numerical analyses of these models are consistent.
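As a schematic of the problem class, the sketch below applies implicit Euler with a Newton solve to a semi-explicit, index-1 semilinear DAE $x' = f(t, x, y)$, $0 = g(t, x, y)$; it is a simple stand-in for the paper's projector-based methods, which treat the DAE in its original (not necessarily semi-explicit) form, and the circuit right-hand sides in the usage example are illustrative assumptions.

    import numpy as np

    def numerical_jacobian(F, z, eps=1e-7):
        """Forward-difference Jacobian of F at z."""
        F0 = F(z)
        J = np.empty((len(F0), len(z)))
        for i in range(len(z)):
            dz = z.copy(); dz[i] += eps
            J[:, i] = (F(dz) - F0) / eps
        return J

    def solve_dae(f, g, x0, y0, t_span, n_steps, newton_iters=20, tol=1e-10):
        """Implicit Euler for x' = f(t, x, y), 0 = g(t, x, y)."""
        t0, t1 = t_span
        h = (t1 - t0) / n_steps
        x = np.atleast_1d(x0).astype(float)
        y = np.atleast_1d(y0).astype(float)
        nx = len(x)
        ts, xs = [t0], [x.copy()]
        for k in range(n_steps):
            t = t0 + (k + 1) * h
            res = lambda w: np.concatenate(
                [w[:nx] - x - h * f(t, w[:nx], w[nx:]), g(t, w[:nx], w[nx:])])
            z = np.concatenate([x, y])          # Newton on the coupled residual
            for _ in range(newton_iters):
                F = res(z)
                if np.linalg.norm(F) < tol:
                    break
                z = z - np.linalg.solve(numerical_jacobian(res, z), F)
            x, y = z[:nx], z[nx:]
            ts.append(t); xs.append(x.copy())
        return np.array(ts), np.array(xs)

    # Usage (illustrative RC circuit): x = capacitor voltage, y = current,
    # C x' = y and Kirchhoff constraint x + R y = v(t), with C = R = 1.
    f = lambda t, x, y: y
    g = lambda t, x, y: x + y - np.sin(t) ** 3
    ts, xs = solve_dae(f, g, [0.0], [0.0], (0.0, 10.0), 200)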
We study a sequential decision-making problem between a principal and an agent with incomplete information on both sides. In this model, the principal and the agent interact in a stochastic environment, and each is privy to observations about the state not available to the other. The principal has the power of commitment, both to elicit information from the agent and to provide signals about her own information. The principal and the agent communicate their signals to each other and select their actions independently based on this communication. Each player receives a payoff based on the state and their joint actions, and the environment then moves to a new state. The interaction continues over a finite time horizon, and both players act to optimize their own total payoffs over the horizon. Our model encompasses as special cases stochastic games of incomplete information and POMDPs, as well as sequential Bayesian persuasion and mechanism design problems. We study both computation of optimal policies and learning in our setting. While the general problems are computationally intractable, we obtain algorithmic solutions under a conditional independence assumption on the underlying state-observation distributions. We present a polynomial-time algorithm that computes the principal's optimal policy up to an additive approximation. Additionally, we give an efficient learning algorithm for the case where the transition probabilities are not known beforehand; it guarantees sublinear regret for both players.
We present a novel approach for solving the time-dependent Schr\"{o}dinger equation (TDSE). The method we propose converts the TDSE to an equivalent Volterra integral equation; introducing a global Lagrange interpolation of the integrand transforms the equation to a linear system, which is then solved iteratively. In this paper, we derive the method, explore its performance on several examples, and discuss the corresponding numerical details.
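A minimal sketch of this reformulation, assuming a time-independent potential and substituting trapezoidal quadrature plus Picard iteration for the paper's global Lagrange interpolation and iterative linear solve:

    import numpy as np
    from scipy.linalg import expm

    def tdse_volterra(H0, V, psi0, T, n_steps, n_picard=100, tol=1e-12):
        """Solve psi(t) = e^{-i H0 t} psi0 - i * int_0^t e^{-i H0 (t-s)} V psi(s) ds."""
        ts = np.linspace(0.0, T, n_steps + 1)
        h = ts[1] - ts[0]
        U = [expm(-1j * H0 * t) for t in ts]       # free propagators e^{-i H0 t_k}
        psi = np.array([Uk @ psi0 for Uk in U])    # initial guess: free evolution
        for _ in range(n_picard):
            new = psi.copy()
            for k in range(1, n_steps + 1):
                # trapezoidal rule for the memory integral over [0, t_k]
                integral = 0.5 * (U[k] @ (V @ psi[0]) + V @ psi[k])
                for j in range(1, k):
                    integral = integral + U[k - j] @ (V @ psi[j])
                new[k] = U[k] @ psi0 - 1j * h * integral
            if np.max(np.abs(new - psi)) < tol:    # fixed point reached
                return ts, new
            psi = new
        return ts, psi

    # Usage (assumed toy Rabi problem): H0 = sigma_z, V = 0.2 * sigma_x.
    sz = np.diag([1.0, -1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    ts, psi = tdse_volterra(sz, 0.2 * sx, np.array([1.0, 0.0], complex), 10.0, 200)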
Causal regularization was introduced as a stable causal inference strategy in a two-environment setting in \cite{kania2022causal}. We begin by observing that the causal regularizer can be extended to several shifted environments, and we derive the multi-environment causal regularizer in the population setting. We propose a plug-in estimator for it and study its concentration-of-measure behavior. Although the variance of the plug-in estimator is not well-defined in general, we instead study its conditional variance, both with respect to a natural filtration of the empirical process and conditionally on certain events. We also study generalizations in which we consider conditional expectations of higher central absolute moments of the estimator. The results presented here are also new in the prior setting of \cite{kania2022causal} as well as in \cite{Rot}.
Entanglement serves as the resource that empowers quantum computing. Recent progress has highlighted its positive impact on learning quantum dynamics, wherein the integration of entanglement into quantum operations or measurements of quantum machine learning (QML) models leads to substantial reductions in the size of the training data required to surpass a specified prediction error threshold. However, an analytical understanding of how the degree of entanglement in the data affects model performance remains elusive. In this study, we address this knowledge gap by establishing a quantum no-free-lunch (NFL) theorem for learning quantum dynamics using entangled data. Contrary to previous findings, we prove that the impact of entangled data on the prediction error exhibits a dual effect, depending on the number of permitted measurements. With a sufficient number of measurements, increasing the entanglement of the training data consistently reduces the prediction error, or decreases the size of the training data required to achieve the same prediction error. Conversely, when few measurements are allowed, employing highly entangled data can lead to an increased prediction error. These results provide critical guidance for designing advanced QML protocols, especially those tailored for execution on early-stage quantum computers with limited access to quantum resources.
We establish convergence results for the operator splitting scheme applied to the Cauchy problem for the nonlinear Schr\"odinger equation with rough initial data in $L^2$, $$ \left\{ \begin{array}{ll} i\partial_t u +\Delta u = \lambda |u|^{p} u, & (x,t) \in \mathbb{R}^d \times \mathbb{R}_+, \\ u (x,0) =\phi (x), & x\in\mathbb{R}^d, \end{array} \right. $$ where $\lambda \in \{-1,1\}$ and $p >0$. While the Lie approximation $Z_L$ is known to converge to the solution $u$ when the initial datum $\phi$ is sufficiently smooth, the convergence result for rough initial data has remained open. In this paper, for rough initial data $\phi\in L^2 (\mathbb{R}^d)$, we prove the convergence of the filtered Lie approximation $Z_{flt}$ to the solution $u$ in the mass-subcritical range $\max\left\{1,\frac{2}{d}\right\} \leq p < \frac{4}{d}$. Furthermore, we provide a precise convergence result for radial initial data $\phi\in L^2 (\mathbb{R}^d)$.
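For intuition, the following sketch implements a filtered Lie splitting for the one-dimensional equation via the split-step Fourier method, alternating the exact free flow (diagonal in Fourier space) with the exact pointwise nonlinear flow; the frequency-cutoff rate $K \sim h^{-1/2}$ used to filter the rough datum is our assumption and not necessarily the filter defining $Z_{flt}$ in the paper.

    import numpy as np

    def filtered_lie_splitting(u0, L, lam, p, T, n_steps):
        """Filtered Lie splitting for i u_t + u_xx = lam |u|^p u on [0, L] (periodic)."""
        n = u0.size
        h = T / n_steps
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)       # angular wavenumbers
        filt = (np.abs(k) <= h ** -0.5).astype(float)      # low-pass projector
        u = np.fft.ifft(filt * np.fft.fft(u0))             # filter the rough datum
        free = np.exp(-1j * k ** 2 * h)                    # symbol of e^{i h \partial_x^2}
        for _ in range(n_steps):
            u = np.fft.ifft(free * filt * np.fft.fft(u))   # filtered linear substep
            u = u * np.exp(-1j * lam * h * np.abs(u) ** p) # exact nonlinear substep
        return u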
The era of big data provides researchers with convenient access to copious data. However, researchers often have little prior knowledge about these data. The increasing prevalence of big data challenges the traditional methods of learning causality, which were developed for settings with a limited amount of data and solid prior causal knowledge. This survey aims to close the gap between big data and learning causality with a comprehensive and structured review of traditional and frontier methods, together with a discussion of some open problems in learning causality. We begin with the preliminaries of learning causality. Then we categorize and revisit methods of learning causality for typical problems and data types. After that, we discuss the connections between learning causality and machine learning. Finally, some open problems are presented to show the great potential of learning causality with data.