We present an extension of the linear sampling method for solving the sound-soft inverse acoustic scattering problem with randomly distributed point sources. The theoretical justification of our sampling method is based on the Helmholtz--Kirchhoff identity, the cross-correlation between measurements, and the volume and imaginary near-field operators, which we introduce and analyze. Implementations in MATLAB using boundary elements, the SVD, Tikhonov regularization, and Morozov's discrepancy principle are also discussed. We demonstrate the robustness and accuracy of our algorithms with several numerical experiments in two dimensions.
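For concreteness, here is a minimal sketch (in Python rather than the MATLAB mentioned above) of the regularization step the abstract refers to: Tikhonov regularization computed through the SVD, with the parameter chosen by a bisection search on Morozov's discrepancy principle. The operator `A`, data `b`, and noise level `delta` are generic placeholders, not the paper's near-field operators.

```python
import numpy as np

def tikhonov_morozov(A, b, delta):
    """Tikhonov-regularized solution of A x = b via the SVD, with the
    parameter alpha chosen so that ||A x_alpha - b|| ~ delta (Morozov)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b  # coefficients of b in the left singular basis

    def solution(alpha):
        # spectral filter: s_i / (s_i^2 + alpha)
        return Vt.T @ (s / (s**2 + alpha) * beta)

    def discrepancy(alpha):
        return np.linalg.norm(A @ solution(alpha) - b) - delta

    # the discrepancy is increasing in alpha, so bisect on a log scale
    lo, hi = 1e-14, 1e2
    for _ in range(100):
        mid = np.sqrt(lo * hi)
        if discrepancy(mid) < 0:
            lo = mid
        else:
            hi = mid
    return solution(np.sqrt(lo * hi))
```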
The study of partial differential equations (PDEs) through the framework of deep learning emerged a few years ago, leading to impressive approximations of simple dynamics. Graph neural networks (GNNs) have proven very useful for these tasks, as they can handle the unstructured data often encountered in the numerical resolution of PDEs. However, solving harder PDEs such as the Navier-Stokes equations remains challenging, and most of the work on them concentrates either on simulating the flow around simple geometries or on qualitative results that look physical for design purposes. In this study, we leverage the work done on deep learning for PDEs and GNNs by adapting a known architecture to the task of approximating the solution of the two-dimensional steady-state incompressible Navier-Stokes equations over different airfoil geometries. In addition, we evaluate our model not only on its performance over the volume but also on its ability to approximate surface quantities such as the wall shear stress or the isostatic pressure, from which global coefficients such as the lift and drag of the airfoil can be inferred, enabling design exploration. This work is part of a longer-term project that aims to approximate three-dimensional steady-state solutions over industrial geometries.
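The architecture itself is only alluded to above; as an illustration of the graph message passing such models are built on, here is a minimal NumPy sketch of one layer acting on node features of an unstructured mesh. The update rule, aggregation, and weight shapes are illustrative assumptions, not the paper's model.

```python
import numpy as np

def message_passing_layer(h, edges, W_self, W_msg):
    """One illustrative message-passing step on a mesh graph.
    h:     (n_nodes, d) node features (e.g. velocity, pressure)
    edges: (n_edges, 2) directed edges (src, dst)
    Each node aggregates the mean of its neighbors' features,
    then applies learned linear maps and a nonlinearity."""
    n, _ = h.shape
    agg = np.zeros_like(h)
    deg = np.zeros(n)
    for src, dst in edges:
        agg[dst] += h[src]
        deg[dst] += 1
    agg /= np.maximum(deg, 1)[:, None]  # mean over incoming neighbors
    return np.tanh(h @ W_self + agg @ W_msg)
```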
We introduce a priori Sobolev-space error estimates for the solution of nonlinear, and possibly parametric, PDEs using Gaussian process and kernel-based methods. The primary assumptions are: (1) a continuous embedding of the reproducing kernel Hilbert space of the kernel into a Sobolev space of sufficient regularity; and (2) the stability of the differential operator and the solution map of the PDE between corresponding Sobolev spaces. The proof is articulated around Sobolev-norm error estimates for kernel interpolants and relies on the minimizing-norm property of the solution. The error estimates demonstrate dimension-benign convergence rates if the solution space of the PDE is smooth enough. We illustrate these points with applications to high-dimensional nonlinear elliptic PDEs and parametric PDEs. Although some recent machine learning methods have been presented as breaking the curse of dimensionality in solving high-dimensional PDEs, our analysis suggests a more nuanced picture: there is a trade-off between the regularity of the solution and the presence of the curse of dimensionality. Therefore, our results are in line with the understanding that the curse is absent when the solution is regular enough.
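The analysis rests on the minimizing-norm property of kernel interpolants; as a reminder of the object being analyzed, here is a minimal sketch of kernel interpolation. The Gaussian kernel, length scale, and nugget term are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def gaussian_kernel(X, Y, ell=0.5):
    """Gaussian (RBF) kernel matrix k(x, y) = exp(-|x - y|^2 / (2 ell^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell**2))

def kernel_interpolant(X, y, nugget=1e-10):
    """Return the minimum-RKHS-norm interpolant s(x) = sum_i c_i k(x, x_i)
    of the data (X, y); a small nugget stabilizes the linear solve."""
    K = gaussian_kernel(X, X) + nugget * np.eye(len(X))
    c = np.linalg.solve(K, y)
    return lambda Xnew: gaussian_kernel(Xnew, X) @ c
```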
We propose a novel spectral method for the Allen--Cahn equation on spheres that does not necessarily require quadrature exactness assumptions. Instead of certain exactness degrees, we employ a restricted isometry relation based on the Marcinkiewicz--Zygmund system of quadrature rules to quantify the quadrature error of polynomial integrands. The new method imposes only some conditions on the polynomial degree of numerical solutions to derive the maximum principle and energy stability, and thus differs substantially from existing methods in the literature that often rely on strict conditions on the time-step size, the Lipschitz property of the nonlinear term, or the $L^{\infty}$ boundedness of numerical solutions. Moreover, the new method is suitable for long-time simulations because the time-step size is independent of the diffusion coefficient in the equation. Inspired by the effective maximum principle recently proposed by Li (Ann. Appl. Math., 37(2): 131--290, 2021), we develop an almost sharp maximum principle that allows controllable deviation of numerical solutions from the sharp bound. Further, we show that the new method is energy stable and equivalent to the Galerkin method if the quadrature rule exhibits sufficient exactness degrees. In addition, we propose an energy-stable mixed-quadrature scheme which works well even with randomly sampled initial condition data. We validate the theoretical results about the energy stability and the almost sharp maximum principle by numerical experiments on the 2-sphere $\mathbb{S}^2$.
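For orientation, the Allen--Cahn equation on $\mathbb{S}^2$ with double-well potential $F(u) = \frac{1}{4}(u^2-1)^2$ is typically written as follows; the precise scaling of the diffusion coefficient $\varepsilon$ may differ from the paper's:
\[
\partial_t u = \varepsilon^2\,\Delta_{\mathbb{S}^2} u - F'(u),
\qquad
E(u) = \int_{\mathbb{S}^2} \Big( \tfrac{\varepsilon^2}{2}\,|\nabla_{\mathbb{S}^2} u|^2 + F(u) \Big)\,\mathrm{d}\sigma.
\]
In this form, the maximum principle asserts $\|u(t)\|_{L^{\infty}(\mathbb{S}^2)} \le 1$ for initial data bounded by 1, and energy stability asserts that $E(u(t))$ is non-increasing in $t$.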
The trade algorithm, which includes the curveball and fastball implementations, is the state of the art for uniformly sampling $r \times c$ binary matrices with fixed row and column sums. The mixing time of the trade algorithm is currently unknown, although $5r$ is commonly used as a heuristic. We propose a distribution-based approach to estimating the mixing time that can also return a sample of matrices nearly guaranteed to be drawn uniformly at random. In numerical experiments on matrices that vary in size, fill, and row and column sum distributions, we find that the upper bound on the mixing time is at least $10r$, and that it increases as a function of both $c$ and the fraction of cells containing a 1.
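A minimal sketch of the core trade step that the curveball implementation iterates; details such as row-pair scheduling and stopping rules follow the literature, and this illustrative version omits the fastball optimizations.

```python
import numpy as np

rng = np.random.default_rng()

def curveball_trade(M):
    """One curveball trade on a binary matrix M (modified in place).
    Two random rows exchange a random subset of the columns where
    they differ, which preserves all row and column sums."""
    i, j = rng.choice(M.shape[0], size=2, replace=False)
    # columns held by row i but not row j, and vice versa
    only_i = np.flatnonzero((M[i] == 1) & (M[j] == 0))
    only_j = np.flatnonzero((M[i] == 0) & (M[j] == 1))
    pool = np.concatenate([only_i, only_j])
    rng.shuffle(pool)
    new_i = pool[:len(only_i)]           # row i keeps the same count
    new_j = pool[len(only_i):]
    M[i, only_i] = 0; M[j, only_j] = 0   # clear the tradeable cells
    M[i, new_i] = 1;  M[j, new_j] = 1    # reassign them at random
    return M
```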
In this paper, we focus our attention on high-dimensional double sparse linear regression, that is, a combination of element-wise and group-wise sparsity. To address this problem, we propose an IHT-style (iterative hard thresholding) procedure that dynamically updates the threshold at each step. We establish matching upper and lower bounds for parameter estimation, showing the optimality of our proposal in the minimax sense. Coupled with a novel sparse group information criterion, we develop a fully adaptive procedure to handle unknown group sparsity and noise levels. We show that our adaptive procedure achieves optimal statistical accuracy with fast convergence. Finally, we demonstrate the superiority of our method by comparing it with several state-of-the-art algorithms on both synthetic and real-world datasets.
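A rough sketch of the flavor of iteration involved: a gradient step followed by a two-level hard thresholding that keeps the $s$ strongest groups and, within them, the $k$ largest entries. The dynamic threshold update is the paper's contribution and is not reproduced here; this static version is only illustrative.

```python
import numpy as np

def double_sparse_ht(z, groups, s, k):
    """Keep the s groups with largest Euclidean norm, then the k
    largest-magnitude entries within those groups; zero the rest."""
    norms = np.array([np.linalg.norm(z[g]) for g in groups])
    keep = np.argsort(norms)[-s:]
    mask = np.zeros_like(z, dtype=bool)
    for gi in keep:
        mask[groups[gi]] = True
    x = np.where(mask, z, 0.0)
    x[np.argsort(np.abs(x))[:-k]] = 0.0  # zero all but the k largest
    return x

def iht_double_sparse(A, y, groups, s, k, mu=0.5, n_iter=200):
    """IHT-style iteration x <- T(x + mu * A^T (y - A x))."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = double_sparse_ht(x + mu * A.T @ (y - A @ x), groups, s, k)
    return x
```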
We give a simple characterization of which functions can be computed deterministically by anonymous processes in dynamic networks, depending on the number of leaders in the network. In addition, we provide efficient distributed algorithms for computing all such functions assuming minimal or no knowledge about the network. Each of our algorithms comes in two versions: one that terminates with the correct output and a faster one that stabilizes on the correct output without explicit termination. Notably, these are the first deterministic algorithms whose running times scale linearly with both the number of processes and a parameter of the network which we call "dynamic disconnectivity" (meaning that our dynamic networks do not necessarily have to be connected at all times). We also provide matching lower bounds, showing that all our algorithms are asymptotically optimal for any fixed number of leaders. While most of the existing literature on anonymous dynamic networks relies on classical mass-distribution techniques, our work makes use of a recently introduced combinatorial structure called "history tree", also developing its theory in new directions. Among other contributions, our results make definitive progress on two popular fundamental problems for anonymous dynamic networks: leaderless Average Consensus (i.e., computing the mean value of input numbers distributed among the processes) and multi-leader Counting (i.e., determining the exact number of processes in the network). In fact, our approach unifies and improves upon several independent lines of research on anonymous networks, including Nedic et al., IEEE Trans. Automat. Contr. 2009; Olshevsky, SIAM J. Control Optim. 2017; Kowalski-Mosteiro, ICALP 2019, SPAA 2021; Di Luna-Viglietta, FOCS 2022.
Linear inverse problems arise in diverse engineering fields, especially in signal and image reconstruction. The development of computational methods for linear inverse problems with sparsity constraints is one of the recent trends in this area. The so-called optimal $k$-thresholding is a newly introduced method for sparse optimization and linear inverse problems. Compared to other sparsity-aware algorithms, the advantage of the optimal $k$-thresholding method lies in performing thresholding and error-metric reduction simultaneously, so that it works stably and robustly for solving medium-sized linear inverse problems. However, the runtime of this method remains high when the problem size is relatively large. The purpose of this paper is to propose an acceleration strategy for this method. Specifically, we propose a heavy-ball-based optimal $k$-thresholding (HBOT) algorithm and its relaxed variants for sparse linear inverse problems. The convergence of these algorithms is shown under the restricted isometry property. In addition, the numerical performance of the heavy-ball-based relaxed optimal $k$-thresholding pursuit (HBROTP) has been studied, and simulations indicate that HBROTP is robust for signal and image reconstruction even in noisy environments.
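The heavy-ball idea itself is easy to state: add a momentum term to the iterate before thresholding. Here is a minimal sketch using a plain hard-thresholding operator; the paper's optimal $k$-thresholding step, which selects the $k$ entries by minimizing the residual, is more involved and is not reproduced here.

```python
import numpy as np

def hard_threshold(z, k):
    """Keep the k largest-magnitude entries of z, zero the rest."""
    x = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    x[idx] = z[idx]
    return x

def heavy_ball_iht(A, y, k, mu=0.5, beta=0.3, n_iter=200):
    """IHT with a heavy-ball momentum term beta * (x_t - x_{t-1})."""
    x_prev = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = x + mu * A.T @ (y - A @ x) + beta * (x - x_prev)
        x_prev, x = x, hard_threshold(u, k)
    return x
```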
There are various optimization techniques in the realm of 3D deep learning, including point-cloud-based approaches as well as those using meshes, textures, and voxels, which optimize how 3D data are stored and how computations on them are performed. These techniques employ methods such as feed-forward networks, 3D convolutions, graph neural networks, transformers, and sparse tensors. However, 3D is one of the most computationally expensive fields, and these methods have yet to achieve their full potential due to their large capacity, complexity, and computational limits. This paper proposes the application of knowledge distillation techniques, especially for sparse tensors in 3D deep learning, to reduce model sizes while maintaining performance. We analyze and propose different loss functions, including standard methods and combinations of various losses, to mimic the performance of state-of-the-art sparse convolutional neural networks. Our experiments are conducted on the standard ScanNet V2 dataset, and we achieve a difference of around 2.6\% mIoU with a 4-times-smaller model and around 8\% with a 16-times-smaller model relative to the latest state-of-the-art spatio-temporal-ConvNet-based models.
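For concreteness, a minimal sketch of the standard knowledge-distillation loss that such loss combinations build on: a weighted sum of cross-entropy against the labels and a temperature-softened KL term against the teacher logits. The weights and temperature are illustrative defaults; the paper's combined losses are variations on this theme.

```python
import numpy as np

def log_softmax(z):
    m = z.max(axis=-1, keepdims=True)
    return z - m - np.log(np.exp(z - m).sum(axis=-1, keepdims=True))

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """alpha * cross-entropy(labels) + (1 - alpha) * T^2 * KL(teacher || student),
    averaged over the batch; the temperature T softens both distributions."""
    log_p_s = log_softmax(student_logits)            # student at T = 1
    ce = -log_p_s[np.arange(len(labels)), labels].mean()
    log_p_sT = log_softmax(student_logits / T)       # softened student
    p_tT = np.exp(log_softmax(teacher_logits / T))   # softened teacher
    kl = (p_tT * (np.log(p_tT + 1e-12) - log_p_sT)).sum(axis=-1).mean()
    return alpha * ce + (1 - alpha) * T**2 * kl
```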
As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
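As a concrete instance of the basic operation surveyed here, a minimal sketch of uniform affine (asymmetric) quantization to $b$-bit integers and the corresponding dequantization; real pipelines add calibration, per-channel scales, and quantization-aware training on top of this.

```python
import numpy as np

def quantize_uniform(x, bits=4):
    """Uniform affine quantization of x to integers in [0, 2^bits - 1].
    Returns the integer tensor plus the (scale, zero_point) needed to
    map back to real values: x ~ scale * (q - zero_point)."""
    qmax = 2**bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0
    zero_point = round(-lo / scale)
    q = np.clip(np.round(x / scale) + zero_point, 0, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map quantized integers back to approximate real values."""
    return scale * (q.astype(np.float32) - zero_point)
```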
Image-to-image translation (I2I) aims to transfer images from a source domain to a target domain while preserving the content representations. I2I has drawn increasing attention and made tremendous progress in recent years because of its wide range of applications in many computer vision and image processing problems, such as image synthesis, segmentation, style transfer, restoration, and pose estimation. In this paper, we provide an overview of the I2I works developed in recent years. We analyze the key techniques of the existing I2I methods and clarify the main progress the community has made. Additionally, we elaborate on the effect of I2I on the research and industry community and point out remaining challenges in related fields.