In this paper, we introduce a novel numerical approach for approximating the SIR model in epidemiology. Our method enhances the existing linearization procedure by incorporating a suitable relaxation term to tackle the resulting nonlinear transcendental equation. Developed within the continuous framework, our relaxation method is explicit and easy to implement, relying on a sequence of linear differential equations. This approach yields accurate approximations in both discrete and analytical forms. Through rigorous analysis, we prove that, with an appropriate choice of the relaxation parameter, our numerical scheme preserves non-negativity and converges strongly and globally to the true solution. These theoretical properties have not received sufficient attention in many existing SIR solvers. We also extend the applicability of our relaxation method to several variations of the traditional SIR model. Finally, we present numerical examples using simulated data to demonstrate the effectiveness of the proposed method.
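The flavor of positivity-preserving linearized time stepping can be sketched with a simple nonstandard scheme for the SIR system (an illustrative stand-in, not the authors' relaxation method; the parameter names and values below are assumptions):

```python
import numpy as np

def sir_linearized(S0, I0, R0, beta, gamma, tau, steps):
    """One simple positivity-preserving linearized scheme for the SIR system
    S' = -beta*S*I,  I' = beta*S*I - gamma*I,  R' = gamma*I.
    Each update solves only a linear equation in the new unknown."""
    S, I, R = [S0], [I0], [R0]
    for _ in range(steps):
        Sn = S[-1] / (1.0 + tau * beta * I[-1])                       # S treated implicitly
        In = (I[-1] + tau * beta * Sn * I[-1]) / (1.0 + tau * gamma)  # loss term implicit
        Rn = R[-1] + tau * gamma * In
        S.append(Sn); I.append(In); R.append(Rn)
    return np.array(S), np.array(I), np.array(R)

# Normalized population: S + I + R should stay equal to 1 and never go negative.
S, I, R = sir_linearized(S0=0.99, I0=0.01, R0=0.0, beta=0.3, gamma=0.1, tau=0.1, steps=2000)
```

By construction the three updates cancel telescopically, so the total population is conserved exactly, and every denominator is positive, so non-negativity holds for any step size.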
In this paper, we present a deep surrogate model for learning the Green's function associated with the reaction-diffusion operator on a rectangular domain. The U-Net architecture is utilized to effectively capture the mapping from the source term to the solution of the target partial differential equations (PDEs). To enable efficient training of the model without relying on labeled data, we propose a novel loss function inspired by traditional numerical methods for solving PDEs. Furthermore, a hard encoding mechanism is employed to ensure that the predicted Green's function exactly satisfies the boundary conditions. Based on the Green's function learned by the trained deep surrogate model, a fast solver is developed to solve the corresponding PDEs with different sources and boundary conditions. Various numerical examples are also provided to demonstrate the effectiveness of the proposed model.
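Once a Green's function is available, solving the PDE for a new source reduces to quadrature against it. A minimal sketch for the 1D Poisson problem with homogeneous Dirichlet conditions, where the Green's function is known in closed form (a stand-in for the learned reaction-diffusion Green's function on a rectangle):

```python
import numpy as np

def greens_poisson_1d(x, y):
    """Closed-form Green's function of -u'' = f on (0,1) with u(0) = u(1) = 0."""
    return np.where(x <= y, x * (1.0 - y), y * (1.0 - x))

n = 201
grid = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(grid, grid, indexing="ij")
G = greens_poisson_1d(X, Y)

def solve(f_vals):
    """Fast solve for a new source: u(x) = int_0^1 G(x, y) f(y) dy (trapezoid rule)."""
    w = np.full(n, grid[1] - grid[0])
    w[0] *= 0.5
    w[-1] *= 0.5
    return G @ (w * f_vals)

u = solve(np.ones(n))                 # source f = 1
u_exact = grid * (1.0 - grid) / 2.0   # exact solution for f = 1
```

Changing the source only changes the vector passed to `solve`; the Green's matrix `G` is reused, which is exactly what makes the learned-Green's-function solver fast.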
In this paper, we introduce a novel class of graphical models for representing time-lag-specific causal relationships and independencies of multivariate time series with unobserved confounders. We completely characterize these graphs and show that they constitute proper subsets of the currently employed model classes. Consequently, stronger causal inferences can be drawn from the novel graphs without additional assumptions. We further introduce a graphical representation of Markov equivalence classes of the novel graphs. This representation contains more causal knowledge than what current state-of-the-art causal discovery algorithms learn.
In this paper we introduce a general framework for analyzing the numerical conditioning of minimal problems in multiple view geometry, using tools from computational algebra and Riemannian geometry. Special motivation comes from the fact that relative pose estimation, based on standard 5-point or 7-point Random Sample Consensus (RANSAC) algorithms, can fail even when no outliers are present and there is enough data to support a hypothesis. We argue that these cases arise due to the intrinsic instability of the 5- and 7-point minimal problems. We apply our framework to characterize the instabilities, both in terms of the world scenes that lead to an infinite condition number and directly in terms of ill-conditioned image data. The approach yields computational tests for assessing the condition number before solving the minimal problem. Lastly, synthetic and real-data experiments suggest that RANSAC serves not only to remove outliers, but also to select well-conditioned image data, as predicted by our theory.
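A generic version of such a pre-solve conditioning test can be sketched by estimating the condition number from the singular values of a Jacobian of the problem at the candidate data (the acceptance threshold and the matrices below are illustrative assumptions, not the paper's actual tests):

```python
import numpy as np

def condition_estimate(J):
    """Condition-number estimate from the singular values of a Jacobian sample."""
    s = np.linalg.svd(J, compute_uv=False)
    return s[0] / s[-1]

def well_conditioned(J, threshold=1e6):
    """Accept the data for solving only if the estimated condition number is moderate."""
    return condition_estimate(J) < threshold

rng = np.random.default_rng(0)
J_good = np.eye(7) + 0.1 * rng.standard_normal((7, 7))   # mild perturbation of identity
J_bad = J_good.copy()
J_bad[-1] = J_bad[-2] + 1e-12 * rng.standard_normal(7)   # two nearly dependent rows
```

Data whose Jacobian is nearly rank-deficient (as in `J_bad`) would be rejected before the minimal solver is run at all.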
In this paper, we introduce the Spectral Coefficient Learning via Operator Network (SCLON), a novel operator learning-based approach for solving parametric partial differential equations (PDEs) without the need for training data. The cornerstone of our method is a spectral methodology employing expansions in orthogonal functions, such as Fourier series and Legendre polynomials, which enables accurate PDE solutions with fewer grid points. By merging the merits of spectral methods (high accuracy, efficiency, generalization, and exact fulfillment of boundary conditions) with the expressive power of deep neural networks, SCLON offers a transformative strategy. Our approach not only eliminates the need for paired input-output training data, which typically requires extensive numerical computations, but also effectively learns and predicts solutions of complex parametric PDEs, ranging from singularly perturbed convection-diffusion equations to the Navier-Stokes equations. The proposed framework demonstrates superior performance compared to existing scientific machine learning techniques, solving multiple instances of parametric PDEs without requiring paired data. The mathematical framework is robust and reliable, with a well-developed loss function derived from the weak formulation that ensures accurate approximation of solutions while exactly satisfying boundary conditions. The method's efficacy is further illustrated through its ability to accurately predict intricate natural behaviors such as the Kolmogorov flow and boundary layers. In essence, our work pioneers a compelling avenue for parametric PDE solutions, serving as a bridge between traditional numerical methodologies and cutting-edge machine learning techniques in the realm of scientific computing.
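The classical spectral building block underlying such coefficient learning can be sketched on a toy Poisson problem, where the sine-series coefficients of the solution follow from those of the source by dividing by the operator's eigenvalues (the specific problem, truncation K, and grid size are illustrative assumptions, not SCLON itself):

```python
import numpy as np

K, n = 16, 2001
x = np.linspace(0.0, 1.0, n)
f = np.sin(3.0 * np.pi * x)                        # source term

# Sine-series coefficients f_k = 2 * int_0^1 f(x) sin(k pi x) dx (trapezoid quadrature)
k = np.arange(1, K + 1)
basis = np.sin(np.pi * np.outer(k, x))             # shape (K, n)
w = np.full(n, x[1] - x[0])
w[0] *= 0.5
w[-1] *= 0.5
fk = 2.0 * basis @ (w * f)

# Spectral solve of -u'' = f, u(0) = u(1) = 0: divide by the eigenvalues (k pi)^2
uk = fk / (np.pi * k) ** 2
u = uk @ basis

u_exact = np.sin(3.0 * np.pi * x) / (9.0 * np.pi**2)
```

Because every basis function vanishes at the endpoints, the boundary conditions are satisfied exactly regardless of the computed coefficients, which is the property SCLON inherits from spectral methods.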
Python serves as an open-source and cost-effective alternative to the MATLAB programming language. This paper introduces a concise topology optimization Python code, named ``PyHexTop,'' primarily intended for educational purposes. The code employs hexagonal elements to parameterize design domains, since such elements naturally provide checkerboard-free optimized designs. PyHexTop is developed based on the ``HoneyTop90'' MATLAB code~\cite{kumar2023honeytop90} and uses the NumPy and SciPy libraries. The code is straightforward and easily comprehensible, making it a helpful tool for newcomers to the topology optimization field to learn and explore. PyHexTop is specifically tailored to address compliance minimization with specified volume constraints. The paper provides a detailed explanation of the code for solving the MBB design problem and extensions to problems with varying boundary and force conditions. The code is publicly shared at \url{https://github.com/PrabhatIn/PyHexTop}.
In this paper, we introduce an innovative approach for addressing Bayesian inverse problems through the utilization of physics-informed invertible neural networks (PI-INN). The PI-INN framework encompasses two sub-networks: an invertible neural network (INN) and a neural basis network (NB-Net). The primary role of the NB-Net lies in modeling the spatial basis functions characterizing the solution to the forward problem dictated by the underlying partial differential equation. Simultaneously, the INN is designed to partition the parameter vector linked to the input physical field into two distinct components: the expansion coefficients representing the forward problem solution and the Gaussian latent noise. If the forward mapping is precisely estimated, and the statistical independence between expansion coefficients and latent noise is well-maintained, the PI-INN offers a precise and efficient generative model for Bayesian inverse problems, yielding tractable posterior density estimates. As a particular physics-informed deep learning model, the primary training challenge for PI-INN centers on enforcing the independence constraint, which we tackle by introducing a novel independence loss based on estimated density. We support the efficacy and precision of the proposed PI-INN through a series of numerical experiments, including inverse kinematics, 1-dimensional and 2-dimensional diffusion equations, and seismic traveltime tomography. Specifically, our experimental results showcase the superior performance of the proposed independence loss in comparison to the commonly used but computationally demanding kernel-based maximum mean discrepancy loss.
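The exact invertibility that such a framework relies on can be illustrated with a single affine coupling layer of the RealNVP type, whose inverse is available in closed form; the one-layer tanh sub-networks here are illustrative placeholders, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 6
W_s = 0.1 * rng.standard_normal((D // 2, D // 2))
W_t = 0.1 * rng.standard_normal((D // 2, D // 2))

def s(h):
    """Scale sub-network (a single tanh layer, purely illustrative)."""
    return np.tanh(h @ W_s)

def t(h):
    """Translation sub-network (same illustrative form)."""
    return np.tanh(h @ W_t)

def forward(z):
    # Split the input; transform one half conditioned on the other.
    z1, z2 = z[..., : D // 2], z[..., D // 2 :]
    return np.concatenate([z1, z2 * np.exp(s(z1)) + t(z1)], axis=-1)

def inverse(y):
    # The inverse is closed-form because z1 passes through unchanged.
    y1, y2 = y[..., : D // 2], y[..., D // 2 :]
    return np.concatenate([y1, (y2 - t(y1)) * np.exp(-s(y1))], axis=-1)

z = rng.standard_normal((4, D))
z_rec = inverse(forward(z))   # recovers z up to rounding
```

Stacking such layers (with the roles of the two halves alternating) gives a fully invertible network, which is what makes the posterior of a Bayesian inverse problem tractable in this family of models.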
In this paper, we propose nonlocal diffusion models with Dirichlet boundary conditions. These nonlocal diffusion models preserve the maximum principle and also admit a corresponding variational form. With these properties, it is relatively easy to prove well-posedness and vanishing-nonlocality convergence. Furthermore, with a specifically designed weight function, we obtain a nonlocal diffusion model with second-order convergence, which is optimal for nonlocal diffusion models.
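The vanishing-nonlocality limit can be sketched in 1D: with a constant kernel and the normalization 3/delta^3, the nonlocal operator applied to a smooth function approaches the local second derivative as the horizon delta shrinks (this kernel is one simple choice for illustration, not the paper's specially designed weight function):

```python
import numpy as np

def nonlocal_laplacian(u, x0, delta, m=401):
    """L_delta u(x0) = (3 / delta^3) * int_{|y - x0| < delta} (u(y) - u(x0)) dy
    for a constant kernel; approaches u''(x0) as delta -> 0."""
    y = np.linspace(x0 - delta, x0 + delta, m)
    w = np.full(m, y[1] - y[0])   # trapezoid quadrature weights
    w[0] *= 0.5
    w[-1] *= 0.5
    return 3.0 / delta**3 * np.sum(w * (u(y) - u(x0)))

x0 = 1.0
approx = nonlocal_laplacian(np.sin, x0, delta=1e-2)
exact = -np.sin(x0)   # (sin)'' = -sin
```

A Taylor expansion shows the discrepancy is O(delta^2) for this constant kernel; the paper's designed weight function is what pushes the convergence order to second order for the full boundary-value problem.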
Graphics Processing Units (GPUs) have been successfully adopted both for graphics computation in 3D applications and for general-purpose applications (GP-GPU), thanks to their tremendous performance per watt. Recently, there has been great interest in adopting them within automotive and avionics industrial settings as well, imposing for the first time real-time constraints on the design of such devices. Unfortunately, it is extremely hard to extract timing guarantees from modern GPU designs, and current approaches rely on a model where the GPU is treated as a single monolithic execution device. Unlike the state of the art, we try to open the box of modern GPU architectures, providing a clean way to exploit predictable intra-GPU execution.
In this paper, we present a novel spectral renormalization exponential integrator method for solving gradient flow problems. Our method is specifically designed to simultaneously satisfy discrete analogues of the energy dissipation laws and achieve high-order accuracy in time. To accomplish this, our method first incorporates the energy dissipation law into the target gradient flow equation by introducing a time-dependent spectral renormalization (TDSR) factor. Then, the coupled equations are discretized using the spectral approximation in space and the exponential time differencing (ETD) in time. Finally, the resulting fully discrete nonlinear system is decoupled and solved using the Picard iteration at each time step. Furthermore, we introduce an extra enforcing term into the system for updating the TDSR factor, which greatly relaxes the time step size restriction of the proposed method and enhances its computational efficiency. Extensive numerical tests with various gradient flows are also presented to demonstrate the accuracy and effectiveness of our method as well as its high efficiency when combined with an adaptive time-stepping strategy for long-term simulations.
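The exponential-time-differencing (ETD) building block can be sketched on the scalar gradient flow u' = u - u^3 of the double-well energy E(u) = u^4/4 - u^2/2; this first-order ETD step is purely illustrative and omits the paper's TDSR factor, spectral discretization, and Picard decoupling:

```python
import numpy as np

def energy(u):
    """Double-well energy E(u) = u^4/4 - u^2/2; its gradient flow is u' = u - u^3."""
    return 0.25 * u**4 - 0.5 * u**2

def etd1_step(u, tau):
    """First-order ETD step: the linear part L = 1 is integrated exactly,
    the nonlinear part N(u) = -u^3 is frozen over the step."""
    return np.exp(tau) * u + (np.exp(tau) - 1.0) * (-u**3)

u, tau = 0.1, 0.1
energies = [energy(u)]
for _ in range(200):
    u = etd1_step(u, tau)
    energies.append(energy(u))
# u flows toward the well at u = 1, and the energy decreases monotonically.
```

For this scalar example the ETD iterates happen to dissipate the energy at every step; guaranteeing that property for general gradient flows and large time steps is exactly what the TDSR factor in the paper is introduced for.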
Complexity is a fundamental concept underlying statistical learning theory that aims to inform generalization performance. Parameter count, while successful in low-dimensional settings, is not well justified in overparameterized settings, where the number of parameters exceeds the number of training samples. We revisit complexity measures based on Rissanen's principle of minimum description length (MDL) and define a novel MDL-based complexity (MDL-COMP) that remains valid for overparameterized models. MDL-COMP is defined via an optimality criterion over the encodings induced by a good Ridge estimator class. We provide an extensive theoretical characterization of MDL-COMP for linear models and kernel methods and show that it is not just a function of parameter count, but rather a function of the singular values of the design or kernel matrix and the signal-to-noise ratio. For a linear model with $n$ observations, $d$ parameters, and i.i.d. Gaussian predictors, MDL-COMP scales linearly with $d$ when $d<n$, but the scaling is exponentially smaller -- $\log d$ -- for $d>n$. For kernel methods, we show that MDL-COMP informs minimax in-sample error and can decrease as the dimensionality of the input increases. We also prove that MDL-COMP upper-bounds the in-sample mean squared error (MSE). Via an array of simulations and real-data experiments, we show that a data-driven Prac-MDL-COMP informs hyperparameter tuning for optimizing test MSE with ridge regression in limited-data settings, sometimes improving upon cross-validation and (always) saving computational costs. Finally, our findings suggest that the recently observed double descent phenomenon in overparameterized models might be a consequence of the choice of non-ideal estimators.
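MDL-COMP itself has a specific definition in the paper; a simpler, standard quantity that illustrates the same qualitative point (ridge complexity depends on the design's singular values, not on the raw parameter count) is the effective degrees of freedom of ridge regression:

```python
import numpy as np

def ridge_df(X, lam):
    """Effective degrees of freedom of ridge regression:
    df(lam) = sum_i s_i^2 / (s_i^2 + lam), with s_i the singular values of X."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(s**2 / (s**2 + lam))

rng = np.random.default_rng(0)
n, d = 50, 200                       # overparameterized: d > n
X = rng.standard_normal((n, d))
df_small = ridge_df(X, lam=1e-2)     # weak regularization
df_large = ridge_df(X, lam=1e2)      # strong regularization
```

Even with d = 200 parameters, the effective complexity never exceeds the rank min(n, d) = 50 and shrinks further as the regularization grows, mirroring the paper's message that parameter count alone is the wrong complexity measure in the overparameterized regime.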