Dahlquist, Liniger, and Nevanlinna designed a family of one-leg, two-step methods (the DLN method) that is second order, A-stable, and G-stable for arbitrary, non-uniform time steps. Recently, it has been shown that the implementation of the DLN method can be simplified by a refactorization process (adding time filters to the backward Euler scheme). Owing to these properties, the DLN method has strong potential for the numerical simulation of time-dependent fluid models. In this report, we propose a semi-implicit DLN algorithm for the Navier--Stokes equations (avoiding a nonlinear solver at each time step) and prove its unconditional, long-term stability as well as its second-order convergence under a moderate time-step restriction. Moreover, adaptive DLN algorithms driven by a prescribed error tolerance or a numerical dissipation criterion are presented to balance accuracy and computational cost. Numerical tests are given to support the main conclusions.
For numerical design, the development of efficient and accurate surrogate models is paramount: they approximate complex physical phenomena and thereby reduce the computational burden of direct numerical simulations. We propose INFINITY, a deep learning model that uses implicit neural representations (INRs) to address this challenge. Our framework encodes geometric information and physical fields into compact representations and learns a mapping between them to infer the physical fields. We use an airfoil design optimization problem as an example task and evaluate our approach on the challenging AirfRANS dataset, which closely resembles real-world industrial use cases. The experimental results demonstrate that our framework achieves state-of-the-art performance by accurately inferring the physical fields throughout the volume and on the surface. Additionally, we demonstrate its applicability to design exploration and shape optimization: our model correctly predicts drag and lift coefficients while adhering to the governing equations.
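As a minimal illustration of the ingredients assumed by such a framework, the sketch below shows a coordinate-based network with sinusoidal activations, conditioned on a latent code that would encode the geometry. The layer sizes and the concatenation-based conditioning are assumptions for illustration, not the INFINITY architecture itself.

```python
import torch
import torch.nn as nn

class ConditionedINR(nn.Module):
    """Minimal coordinate network: maps (x, y) plus a latent code z
    to a physical field value (e.g. pressure). Sinusoidal activations
    follow the SIREN idea; sizes are illustrative only."""
    def __init__(self, coord_dim=2, latent_dim=64, hidden=128, out_dim=1):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Linear(coord_dim + latent_dim, hidden),
            nn.Linear(hidden, hidden),
            nn.Linear(hidden, out_dim),
        ])

    def forward(self, coords, z):
        # Broadcast the latent code to every query point and concatenate.
        h = torch.cat([coords, z.expand(coords.shape[0], -1)], dim=-1)
        for layer in self.layers[:-1]:
            h = torch.sin(layer(h))
        return self.layers[-1](h)

# Usage: query the field at 1000 points for one (hypothetical) airfoil shape.
model = ConditionedINR()
pts = torch.rand(1000, 2)      # query coordinates
z = torch.randn(1, 64)         # latent code encoding the geometry
field = model(pts, z)          # predicted field values, shape (1000, 1)
```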
This paper proposes a decoupled numerical scheme for the time-dependent Ginzburg--Landau equations under the temporal gauge. For the magnetic potential and the order parameter, the scheme adopts the N\'ed\'elec element of the second type and the linear element, respectively, for spatial discretization, together with a linearized backward Euler method and a first-order exponential time differencing method, respectively, for temporal discretization. The maximum bound principle (MBP) for the order parameter and the energy dissipation law are proved in the discrete sense. The discrete energy stability and MBP preservation guarantee the stability and validity of the numerical simulations, and further facilitate the adoption of an adaptive time-stepping strategy, which often plays an important role in long-time simulations of vortex dynamics, especially when the applied magnetic field is strong. An optimal error estimate for the proposed scheme is also given. Numerical examples verify the theoretical results and demonstrate the vortex motion of superconductors in an external magnetic field.
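For reference, a first-order exponential time differencing step for a semilinear system $u' = Lu + N(u)$ reads $u_{n+1} = e^{\tau L}u_n + L^{-1}(e^{\tau L} - I)N(u_n)$. The sketch below applies it to a small dense matrix; the operator and the cubic nonlinearity are placeholders, not the discretized Ginzburg--Landau system.

```python
import numpy as np
from scipy.linalg import expm

def etd1_step(u, L, N, tau):
    """One first-order exponential time differencing step for u' = L u + N(u).
    L is a small dense matrix here; N is the nonlinear term."""
    E = expm(tau * L)                                   # matrix exponential e^{tau L}
    phi1 = np.linalg.solve(L, E - np.eye(L.shape[0]))   # L^{-1} (e^{tau L} - I)
    return E @ u + phi1 @ N(u)

# Toy usage: a stiff diagonal linear part and a cubic nonlinearity (placeholders).
L = np.diag([-1.0, -10.0, -100.0])
N = lambda u: u - u**3
u = np.array([0.5, 0.4, 0.3])
for _ in range(10):
    u = etd1_step(u, L, N, tau=0.05)
print(u)
```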
Inspired by the multiple-exposure fusion approach in computational photography, several practitioners have recently explored the idea of high dynamic range (HDR) X-ray imaging and tomography. While establishing promising results, these approaches inherit the limitations of the multiple-exposure fusion strategy. To overcome these disadvantages, the modulo Radon transform (MRT) has been proposed. The MRT is based on a co-design of hardware and algorithms. In the hardware step, Radon transform projections are folded using modulo non-linearities. Thereafter, recovery is performed by algorithmically inverting the folding, thus enabling a single-shot, HDR approach to tomography. The first works on this topic established a rigorous mathematical treatment of the problem of reconstruction from folded projections. This paper takes a step forward by proposing a new, Fourier-domain recovery algorithm that is backed by mathematical guarantees. Its advantages include recovery at lower sampling rates while remaining agnostic to the modulo threshold, lower computational complexity, and empirical robustness to system noise. Beyond numerical simulations, we use prototype modulo-ADC-based hardware experiments to validate our claims. In particular, we report image recovery based on hardware measurements up to 10 times larger than the sensor's dynamic range, while benefiting from lower quantization noise ($\sim$12 dB).
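The folding in the hardware step is the centered modulo non-linearity used in unlimited sensing, $\mathcal{M}_\lambda(x) = ((x+\lambda)\bmod 2\lambda) - \lambda$, which maps arbitrarily large projection values into $[-\lambda, \lambda)$. A minimal sketch on a synthetic projection follows; the signal and the threshold are illustrative, not measured data.

```python
import numpy as np

def modulo_fold(x, lam):
    """Centered modulo non-linearity: folds x into [-lam, lam)."""
    return np.mod(x + lam, 2.0 * lam) - lam

# Synthetic "projection" whose dynamic range far exceeds the threshold lam.
t = np.linspace(0.0, 1.0, 512)
projection = 10.0 * np.exp(-((t - 0.5) ** 2) / 0.01)   # peak value 10
lam = 1.0                                               # modulo threshold
folded = modulo_fold(projection, lam)
print(folded.min(), folded.max())   # folded values lie in [-lam, lam)
```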
Deep neural networks have garnered widespread attention due to their simplicity and flexibility in engineering and scientific computing. In this study, we address a class of elliptic partial differential equations (PDEs) with multiple scales by means of Fourier-based mixed physics-informed neural networks (dubbed FMPINN), whose solver is configured as a multi-scale deep neural network. In contrast to the classical PINN method, a dual (flux) variable associated with the rough coefficient of the PDE is introduced to avoid the ill-conditioning of the neural tangent kernel matrix caused by the oscillating multi-scale coefficient. Accordingly, in addition to the physical conservation laws, the discrepancy between the auxiliary flux variable and the flux induced by the multi-scale coefficient is incorporated into the cost function, and a satisfactory approximation of the PDE solution is obtained by minimizing the resulting loss with standard optimization methods. Additionally, a trigonometric activation function is introduced for FMPINN, which is well suited for representing the derivatives of complex target functions. Handling the input data by a Fourier feature mapping effectively improves the capacity of deep neural networks to solve high-frequency problems. Finally, to validate the efficiency and robustness of the proposed FMPINN algorithm, we present several numerical examples of multi-scale problems in Euclidean spaces of various dimensions. These examples cover both low-frequency and high-frequency oscillation cases, demonstrating the effectiveness of our approach. All code and data accompanying this manuscript will be made publicly available at \href{//github.com/Blue-Giant/FMPINN}{//github.com/Blue-Giant/FMPINN}.
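The Fourier feature mapping referred to above is the random mapping $\gamma(x) = [\cos(2\pi Bx), \sin(2\pi Bx)]$ with a Gaussian frequency matrix $B$. A minimal sketch follows; the number of frequencies and the frequency scale are assumed hyperparameters, not the settings used by FMPINN.

```python
import numpy as np

def fourier_features(x, B):
    """Map coordinates x of shape (n, d) to [cos(2*pi*x B^T), sin(2*pi*x B^T)]."""
    proj = 2.0 * np.pi * x @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
d, m, scale = 2, 64, 10.0           # input dim, number of frequencies, frequency scale
B = scale * rng.standard_normal((m, d))
x = rng.random((1000, d))           # coordinates in the unit square
features = fourier_features(x, B)   # shape (1000, 128), fed to the network
```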
The crossed random-effects model is widely used in applied statistics, with applications in longitudinal studies, e-commerce, and recommender systems, among others. However, these models encounter scalability challenges: the computational time grows disproportionately with the amount of data, typically scaling as $N^{3/2}$ or worse in the number of data points $N$. Our inspiration for addressing this issue comes from the recommender system employed by an online clothing retailer. Our dataset comprises over 700,000 clients, 5,000 items, and 5,000,000 measurements. When the maximum likelihood approach is applied to fit crossed random effects, computational inefficiency becomes a significant concern, limiting the applicability of this approach in large-scale settings. To tackle the scalability issues, previous research by Ghosh et al. (2022a) and Ghosh et al. (2022b) explored linear and logistic regression models with fixed-effect features based on client and item variables, while incorporating random intercept terms for clients and items. In this study, we consider a more general version of the problem that allows random slopes. This extension enables us to capture the variability in effect sizes across both clients and items. Importantly, we develop a scalable solution to this problem and empirically demonstrate the consistency of our estimates: as the number of data points increases, our estimates converge towards the true parameters. To validate our approach, we apply the proposed algorithm to the Stitch Fix data.
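To make the extension concrete, the snippet below simulates a small crossed random-effects dataset with random intercepts and random slopes for both clients and items, $y = (\beta_0 + a_i + b_j) + (\beta_1 + c_i + d_j)x + \varepsilon$. The single covariate, the sizes, and the variance components are illustrative, not those of the retailer's data.

```python
import numpy as np

rng = np.random.default_rng(1)
R, C, N = 200, 50, 5000          # clients, items, observations (illustrative)
beta0, beta1 = 1.0, 2.0          # fixed intercept and slope
sig_a, sig_b = 0.5, 0.3          # client / item random-intercept SDs
sig_c, sig_d = 0.4, 0.2          # client / item random-slope SDs
sig_e = 1.0                      # noise SD

a, c = sig_a * rng.standard_normal(R), sig_c * rng.standard_normal(R)
b, d = sig_b * rng.standard_normal(C), sig_d * rng.standard_normal(C)

i = rng.integers(0, R, N)        # client index of each observation
j = rng.integers(0, C, N)        # item index of each observation
x = rng.standard_normal(N)       # a single fixed-effect covariate

y = (beta0 + a[i] + b[j]) + (beta1 + c[i] + d[j]) * x \
    + sig_e * rng.standard_normal(N)
```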
This paper considers the Cauchy problem for the nonlinear dynamic string equation of Kirchhoff type with time-varying coefficients. The objective of this work is to develop a temporal discretization algorithm capable of approximating the solution of this initial-boundary value problem. To this end, a symmetric three-layer semi-discrete scheme in the temporal variable is employed, in which the nonlinear term is evaluated at the middle node. This approach allows the numerical solution at each time step to be obtained by inverting linear operators, namely by solving a system of second-order linear ordinary differential equations. Local convergence of the proposed scheme is established: it is quadratically convergent with respect to the time step on the local temporal interval. We have conducted several numerical experiments with the proposed algorithm on various test problems to validate its performance. The obtained numerical results are in agreement with the theoretical findings.
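As an illustration of such a discretization (a schematic form assumed here, not necessarily the exact scheme of the paper), consider the Kirchhoff-type equation $u_{tt} = \big(a(t) + b(t)\int_0^L u_x^2\,dx\big)u_{xx} + f$ and a symmetric three-layer step with the nonlinear coefficient frozen at the middle time level $t_n$:
\[
\frac{u^{n+1} - 2u^{n} + u^{n-1}}{\tau^{2}}
  = \Big(a(t_n) + b(t_n)\int_0^L \big(u_x^{n}\big)^{2}\,dx\Big)\,
    \frac{u_{xx}^{n+1} + u_{xx}^{n-1}}{2} + f^{n}.
\]
Since the nonlinear coefficient is known from the middle layer, each step only requires the inversion of a linear operator acting on $u^{n+1}$.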
A new algorithm for regret minimization in online convex optimization is described. The regret of the algorithm after $T$ time periods is $O(\sqrt{T \log T})$, which is the minimum possible up to a logarithmic factor. In addition, the new algorithm is adaptive, in the sense that the regret bounds hold not only for the time periods $1,\ldots,T$ but also for every sub-interval $s,s+1,\ldots,t$. The running time of the algorithm matches that of newly introduced interior-point algorithms for regret minimization: in $n$-dimensional space, during each iteration the new algorithm essentially solves a system of linear equations of order $n$, rather than a constrained convex optimization problem in $n$ dimensions with possibly many constraints.
We study the Euler scheme for scalar non-autonomous stochastic differential equations whose diffusion coefficient is not globally Lipschitz but a fractional power of a globally Lipschitz function. We analyse the strong error and establish a criterion that relates the convergence order of the Euler scheme to an inverse moment condition on the diffusion coefficient. Our result applies in particular to Cox-Ingersoll-Ross-, Chan-Karolyi-Longstaff-Sanders-, and Wright-Fisher-type stochastic differential equations and thus provides a unifying framework.
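As a concrete instance, the sketch below applies the Euler scheme to a Cox-Ingersoll-Ross-type equation $dX_t = \kappa(\theta - X_t)\,dt + \sigma\sqrt{X_t}\,dW_t$, whose diffusion coefficient is the square root of a globally Lipschitz function. Truncating the argument of the square root at zero is a common implementation device assumed here, not part of the analysed scheme.

```python
import numpy as np

def euler_cir(x0, kappa, theta, sigma, T, n, rng):
    """Euler-Maruyama for dX = kappa*(theta - X) dt + sigma*sqrt(X) dW.
    The square root is taken of max(X, 0) to keep the iterates real."""
    h = T / n
    x = np.empty(n + 1)
    x[0] = x0
    dW = np.sqrt(h) * rng.standard_normal(n)
    for k in range(n):
        x[k + 1] = x[k] + kappa * (theta - x[k]) * h \
                   + sigma * np.sqrt(max(x[k], 0.0)) * dW[k]
    return x

path = euler_cir(x0=0.1, kappa=2.0, theta=0.5, sigma=0.4,
                 T=1.0, n=1000, rng=np.random.default_rng(0))
```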
An asymptotic-preserving (AP) and energy-stable scheme for the Euler-Poisson system under the quasineutral scaling is designed and analysed. Correction terms are introduced in the convective fluxes and the electrostatic potential; they lead to the dissipation of mechanical energy and to entropy stability. The fully discrete scheme, semi-implicit in time and finite volume in space, is resolved in two steps: the solution of an elliptic problem for the potential, followed by an explicit evaluation of the density and velocity. The proposed scheme possesses several physically relevant attributes, such as entropy stability and consistency with the weak formulation of the continuous Euler-Poisson system. The AP property of the scheme, i.e. the boundedness of the mesh parameters with respect to the Debye length and the consistency with the quasineutral limit system, is established. The results of numerical case studies are presented to substantiate the robustness and efficiency of the proposed method.
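The first of the two stages can be illustrated on a one-dimensional grid: the sketch below discretizes a potential equation $-\lambda^2 \phi'' = \rho - 1$ (with Debye length $\lambda$ and a constant neutralizing background) by standard finite differences and solves the resulting linear system. It is a schematic stand-in without the correction terms, not the proposed scheme; the explicit update of density and velocity would follow as the second stage.

```python
import numpy as np

def solve_potential(rho, debye, h):
    """Solve -debye^2 * phi'' = rho - 1 on a 1D grid with phi = 0 at both ends
    (second-order finite differences; a schematic stand-in for the elliptic step)."""
    n = rho.size
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) * (debye**2 / h**2)
    return np.linalg.solve(A, rho - 1.0)

h = 1.0 / 128
x = np.linspace(h, 1.0 - h, 127)          # interior nodes of the unit interval
rho = 1.0 + 0.1 * np.sin(2 * np.pi * x)   # slightly non-neutral density
phi = solve_potential(rho, debye=1e-2, h=h)
```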
A core capability of intelligent systems is the ability to quickly learn new tasks by drawing on prior experience. Gradient-based (or optimization-based) meta-learning has recently emerged as an effective approach for few-shot learning. In this formulation, meta-parameters are learned in the outer loop, while task-specific models are learned in the inner loop using only a small amount of data from the current task. A key challenge in scaling these approaches is the need to differentiate through the inner-loop learning process, which can impose considerable computational and memory burdens. By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner-level optimization and not on the path taken by the inner-loop optimizer. This effectively decouples the meta-gradient computation from the choice of inner-loop optimizer. As a result, our approach is agnostic to the choice of inner-loop optimizer and can gracefully handle many gradient steps without vanishing gradients or memory constraints. Theoretically, we prove that implicit MAML can compute accurate meta-gradients with a memory footprint that is, up to small constant factors, no more than that required to compute a single inner-loop gradient, and with no overall increase in the total computational cost. Experimentally, we show that these benefits of implicit MAML translate into empirical gains on few-shot image recognition benchmarks.
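To make the implicit differentiation concrete, the toy below computes the implicit meta-gradient $(I + \nabla^2 L_{\text{tr}}/\lambda)^{-1}\nabla L_{\text{te}}$ for a proximally regularized inner problem whose solution is available in closed form. The quadratic losses and the regularization strength are stand-ins chosen only so the result can be checked against the chain rule; in practice the inverse would be applied approximately, e.g. with conjugate gradient and Hessian-vector products.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 5, 1.0                       # parameter dim, proximal regularization strength

# Toy inner (training) loss: L_tr(phi) = 0.5 phi^T A phi - b^T phi  (quadratic stand-in).
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)               # symmetric positive definite Hessian
b = rng.standard_normal(n)
# Toy outer (test) loss: L_te(phi) = 0.5 ||phi - c||^2.
c = rng.standard_normal(n)

theta = rng.standard_normal(n)        # meta-parameters

# Inner solution of  min_phi L_tr(phi) + (lam/2)||phi - theta||^2  (closed form here).
phi_star = np.linalg.solve(A + lam * np.eye(n), b + lam * theta)

# Implicit meta-gradient: (I + Hess(L_tr)/lam)^{-1} grad L_te(phi*),
# evaluated at the inner solution only, with no unrolling of inner-loop steps.
grad_te = phi_star - c
meta_grad = np.linalg.solve(np.eye(n) + A / lam, grad_te)

# Check against the chain rule through the closed-form phi*(theta).
dphi_dtheta = lam * np.linalg.inv(A + lam * np.eye(n))
assert np.allclose(meta_grad, dphi_dtheta.T @ grad_te)
```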