We present a new rank-adaptive tensor method to compute the numerical solution of high-dimensional nonlinear PDEs. The method combines functional tensor train (FTT) series expansions, operator splitting time integration, and a new rank-adaptive algorithm based on a thresholding criterion that limits the component of the PDE velocity vector normal to the FTT tensor manifold. This yields a scheme that can add or remove tensor modes adaptively from the PDE solution as time integration proceeds. The new method is designed to improve computational efficiency, accuracy and robustness in numerical integration of high-dimensional problems. In particular, it overcomes well-known computational challenges associated with dynamic tensor integration, including low-rank modeling errors and the need to invert covariance matrices of tensor cores at each time step. Numerical applications are presented and discussed for linear and nonlinear advection problems in two dimensions, and for a four-dimensional Fokker-Planck equation.
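As a concrete illustration of the thresholding idea (not the paper's exact criterion, which acts on the component of the PDE velocity normal to the FTT manifold), the sketch below adapts the rank of a single tensor-train core by discarding singular values of its unfolding that fall below a threshold `eps`; all names and values are illustrative.

```python
import numpy as np

def adapt_rank(core_unfolding, eps):
    """Adjust the TT rank of one core by singular-value thresholding.

    Singular values below eps are discarded (rank decrease); a caller
    could likewise add a mode when the smallest retained singular value
    is still large. Here `eps` is an illustrative stand-in for the
    paper's threshold on the normal component of the PDE velocity.
    """
    U, s, Vt = np.linalg.svd(core_unfolding, full_matrices=False)
    r = max(1, int(np.sum(s > eps)))  # retained rank
    return U[:, :r], s[:r], Vt[:r, :], r

# toy usage: a 6x8 unfolding that is numerically rank 2
A = np.outer(np.arange(6.0), np.ones(8)) + 0.5 * np.outer(np.ones(6), np.arange(8.0))
U, s, Vt, r = adapt_rank(A, eps=1e-10)
print("adapted rank:", r)  # -> 2
```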
We present new approaches for solving constrained multicomponent nonlinear Schr\"odinger equations in arbitrary dimensions. The idea is to introduce an artificial time and solve an extended, damped second-order dynamical system whose stationary solution is the solution to the time-independent nonlinear Schr\"odinger equation. Constraints are often handled by projection onto the constraint set; here we include them explicitly in the dynamical system. We demonstrate the applicability and efficiency of the methods on examples of relevance in modern physics applications.
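To make the idea concrete, here is a minimal sketch of a damped second-order flow for a discretized 1D Gross-Pitaevskii-type ground-state problem with a normalization constraint. The discretization, damping parameter `gamma`, and the Rayleigh-quotient constraint force are illustrative choices, and the final renormalization is a numerical safeguard rather than part of the paper's formulation.

```python
import numpy as np

# Discrete 1D Gross-Pitaevskii-type energy on a grid (illustrative setup):
#   E(u) = 0.5*u'Au + 0.25*beta*sum(u^4),  constraint ||u||_2 = 1.
n, beta, dx = 128, 50.0, 0.1
main = 2.0 / dx**2 + (0.5 * (np.arange(n) - n/2) * dx)**2   # kinetic + trap
A = np.diag(main) - np.diag(np.ones(n-1), 1)/dx**2 - np.diag(np.ones(n-1), -1)/dx**2

def grad_E(u):
    return A @ u + beta * u**3

# Damped second-order flow:  u'' + gamma*u' = -grad E(u) + lam*u,
# with lam chosen as a Rayleigh-quotient estimate so the constraint
# force keeps ||u|| ~ 1 (a common choice; the paper's formulation
# may differ).
u = np.ones(n); u /= np.linalg.norm(u)
v = np.zeros(n)
dt, gamma = 1e-3, 5.0
for _ in range(20000):
    g = grad_E(u)
    lam = u @ g                 # Lagrange-multiplier estimate
    v += dt * (-(g - lam * u) - gamma * v)
    u += dt * v
    u /= np.linalg.norm(u)      # renormalization as a numerical safeguard
print("ground-state energy estimate:", 0.5*u@A@u + 0.25*beta*np.sum(u**4))
```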
We propose a residual randomization procedure designed for robust Lasso-based inference in the high-dimensional setting. Compared to earlier work that focuses on sub-Gaussian errors, the proposed procedure is designed to work robustly in settings that also include heavy-tailed covariates and errors. Moreover, our procedure can be valid under clustered errors, which is important in practice but has been largely overlooked by earlier work. Through extensive simulations, we illustrate our method's wider range of applicability as suggested by theory. In particular, we show that our method outperforms state-of-the-art methods in challenging, yet more realistic, settings where the distribution of covariates is heavy-tailed or the sample size is small, while it remains competitive in standard, ``well-behaved'' settings previously studied in the literature.
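A minimal sketch of one residual-randomization variant (sign-flipping residuals from a null Lasso fit) appears below; the test statistic, the tuning parameter `alpha`, and the absence of studentization or clustered flips are simplifications relative to the paper's procedure.

```python
import numpy as np
from sklearn.linear_model import Lasso

def residual_randomization_pvalue(X, y, j, n_draws=500, alpha=0.1, seed=0):
    """Sign-flip residual randomization test for H0: beta_j = 0 (sketch).

    Simplified single-coefficient version; the paper's procedure
    (studentization, clustered flips, etc.) is more refined.
    """
    rng = np.random.default_rng(seed)
    # Fit under the null: drop column j.
    X0 = np.delete(X, j, axis=1)
    fit0 = Lasso(alpha=alpha).fit(X0, y)
    resid = y - fit0.predict(X0)
    # Observed statistic: correlation of residuals with x_j.
    t_obs = abs(X[:, j] @ resid)
    # Randomization distribution via residual sign flips.
    t_null = np.empty(n_draws)
    for b in range(n_draws):
        e = resid * rng.choice([-1.0, 1.0], size=len(y))
        t_null[b] = abs(X[:, j] @ e)
    return (1 + np.sum(t_null >= t_obs)) / (1 + n_draws)

# toy usage with heavy-tailed errors
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
y = X[:, 0] * 1.5 + rng.standard_t(df=3, size=200)
print("p-value for active feature 0:", residual_randomization_pvalue(X, y, 0))
```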
In this paper, we develop an adaptive high-order surface finite element method (FEM) to solve self-consistent field equations of polymers on general curved surfaces. It improves on the existing algorithm of [J. Comp. Phys. 387: 230-244 (2019)], in which a linear surface FEM was presented to address this problem. The high-order surface FEM is obtained by combining a high-order approximation of the surface geometry with a high-order approximation of the function space. To resolve the sharp interfaces of strongly segregated systems more accurately, we propose an adaptive FEM equipped with a novel Log marking strategy. Compared with the traditional strategy, this new marking strategy not only labels the elements that need to be refined or coarsened, but also specifies how many times each should be refined or coarsened, which makes full use of the information in the a posteriori error estimator and improves the efficiency of the adaptive algorithm. To demonstrate the power of our approach, we investigate the self-assembled patterns of diblock copolymers on several distinct curved surfaces. Numerical results illustrate the efficiency of the proposed method, especially for strongly segregated systems.
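One plausible reading of such a log-based marking rule, assuming the error indicator contracts by a fixed factor per refinement level, is sketched below; the contraction rate and the depth cap are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def log_marking(eta, tol, rate=2.0):
    """Illustrative log-based marking: derive refine/coarsen counts.

    If an a posteriori error indicator eta_e contracts by a factor
    `rate` per refinement level (assumption), the number of levels
    needed to reach `tol` is ~ log(eta_e/tol)/log(rate). Positive
    entries mean refine that many times, negative mean coarsen.
    """
    levels = np.log(eta / tol) / np.log(rate)
    return np.clip(np.round(levels).astype(int), -3, 3)  # cap the depth

eta = np.array([8e-2, 1.1e-3, 9e-4, 3e-5])  # element-wise indicators
print(log_marking(eta, tol=1e-3))           # -> [ 3  0  0 -3] (clipped)
```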
In this paper, we propose a numerical method to solve the mass-conserved Ohta-Kawasaki equation with finite element discretization. An unconditionally stable convex splitting scheme is applied for the time discretization. The Newton method and a variant of it are used to treat the implicit nonlinear term. We rigorously analyze the convergence of the Newton iterations. Theoretical results demonstrate that the two Newton iteration methods have the same convergence rate, while the Newton method has a smaller convergence factor than its variant. To reduce the condition number of the discretized linear system, we design two efficient block preconditioners and analyze their spectral distributions. Finally, we present numerical examples that support the theoretical analysis and demonstrate the efficiency of the proposed methods for the mass-conserved Ohta-Kawasaki equation.
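The sketch below illustrates a single convex-splitting time step for a 1D periodic Ohta-Kawasaki-type equation, comparing a full Newton iteration with a variant that freezes the Jacobian; the finite-difference discretization and all parameters are illustrative stand-ins for the paper's finite element setup.

```python
import numpy as np

# One convex-splitting step for a 1D periodic Ohta-Kawasaki-type model:
#   (u^{n+1}-u^n)/dt = Lap((u^{n+1})^3 - u^n) - eps2*Lap^2 u^{n+1}
#                      - sigma*(u^{n+1} - mean(u^n))
# The cubic (convex) term is implicit, the -u (concave) term explicit.
n, dt, eps2, sigma = 64, 1e-2, 1e-3, 1.0
h = 1.0 / n
L = (np.roll(np.eye(n), 1, 0) + np.roll(np.eye(n), -1, 0) - 2*np.eye(n)) / h**2
L2 = L @ L

rng = np.random.default_rng(0)
u_old = 0.1 * rng.standard_normal(n)
m = u_old.mean()
rhs = u_old - dt * (L @ u_old)          # explicit concave contribution

def F(u):
    return u - dt*(L @ u**3) + dt*eps2*(L2 @ u) + dt*sigma*(u - m) - rhs

def J(u):
    return (np.eye(n) - dt*L @ np.diag(3*u**2)
            + dt*eps2*L2 + dt*sigma*np.eye(n))

# Full Newton vs. the variant with the Jacobian frozen at the initial guess.
for frozen in (False, True):
    u = u_old.copy()
    Jmat = J(u)
    for k in range(30):
        if not frozen:
            Jmat = J(u)                 # re-assemble each iteration
        du = np.linalg.solve(Jmat, -F(u))
        u += du
        if np.linalg.norm(du) < 1e-12:
            break
    print("frozen Jacobian" if frozen else "full Newton", "-> iters:", k + 1)
```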
We introduce the Subspace Power Method (SPM) for computing the CP decomposition of low-rank, even-order, real symmetric tensors. The algorithm applies the tensor power method of Kolda-Mayo to a certain modified tensor, constructed from a matrix flattening of the original tensor, and then uses deflation steps. Numerical simulations indicate that SPM is roughly one order of magnitude faster than state-of-the-art algorithms, while performing robustly for low-rank tensors subjected to additive noise. Drawing on results from classical algebraic geometry and dynamical systems, we obtain rigorous guarantees for SPM regarding convergence and global optima for tensors of rank up to roughly the square root of the number of tensor entries. As a second contribution, we extend SPM to compute De Lathauwer's symmetric block term tensor decompositions. As an application of the latter decomposition, we provide a method of moments for generalized principal component analysis.
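For intuition, the following sketch runs the symmetric tensor power iteration with greedy deflation on a small order-4 example. SPM's distinguishing step, working inside a subspace extracted from a matrix flattening, is omitted here for brevity, so recovery is only approximate when components are not orthogonal.

```python
import numpy as np

def power_iter(T, x, iters=500):
    """Symmetric higher-order power iteration for an order-4 tensor (sketch)."""
    for _ in range(iters):
        y = np.einsum('ijkl,j,k,l->i', T, x, x, x)
        nrm = np.linalg.norm(y)
        if nrm < 1e-14:
            break
        x = y / nrm
    lam = np.einsum('ijkl,i,j,k,l->', T, x, x, x, x)
    return lam, x

def sym_rank1(x):
    return np.einsum('i,j,k,l->ijkl', x, x, x, x)

# Build a rank-2 symmetric order-4 tensor and peel it off greedily.
rng = np.random.default_rng(0)
a = rng.standard_normal(5); a /= np.linalg.norm(a)
b = rng.standard_normal(5); b /= np.linalg.norm(b)
T = 3.0 * sym_rank1(a) + 1.0 * sym_rank1(b)

# Power iteration + deflation (SPM's subspace step is omitted, so the
# recovered weights are only approximate for non-orthogonal a, b).
for r in range(2):
    lam, x = power_iter(T, rng.standard_normal(5))
    T = T - lam * sym_rank1(x)
    print(f"component {r}: weight {lam:.3f}")
```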
In many clustering scenarios, the attribute values of data samples change over time. For such data, we are often interested in obtaining a partition for each time step and tracking the dynamic change of partitions. Typically, the data are assumed to change smoothly over time. Existing algorithms treat this temporal smoothness as an a priori preference and bias the search towards the preferred direction. This a priori manner carries the risk of converging to an unexpected region, because a reasonable preference cannot always be elicited from the little prior knowledge available about the data. To address this issue, this paper proposes a new clustering framework called evolutionary robust clustering over time. One significant innovation of the proposed framework is that it processes temporal smoothness in an a posteriori manner, which avoids the unexpected convergence that occurs in existing algorithms. Furthermore, the proposed framework automatically tunes the weight of smoothness without requiring the data's affinity matrix or predefined parameters, which yields better applicability and scalability. The effectiveness and efficiency of the proposed framework are confirmed by comparison with state-of-the-art algorithms on both synthetic and real datasets.
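The a posteriori idea can be sketched as a bi-objective selection problem: generate candidate partitions per time step, score each by snapshot quality and by disagreement with the previous partition, and keep the non-dominated set instead of fixing a smoothness weight in advance. The sketch below uses k-means restarts as candidates; the proposed framework is evolutionary and differs substantially in its search mechanics.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def candidates(X, k, n_runs=10):
    """Generate candidate partitions for one time step (different seeds)."""
    return [KMeans(n_clusters=k, n_init=1, random_state=s).fit(X)
            for s in range(n_runs)]

def pareto_front(objs):
    """Indices of non-dominated points (both objectives minimized)."""
    objs = np.asarray(objs)
    keep = []
    for i, p in enumerate(objs):
        if not any(np.all(q <= p) and np.any(q < p) for q in objs):
            keep.append(i)
    return keep

# Objectives: snapshot cost = k-means inertia; temporal cost =
# disagreement with the previous partition (1 - adjusted Rand index).
rng = np.random.default_rng(0)
X_prev = rng.standard_normal((100, 2))
X_curr = X_prev + 0.1 * rng.standard_normal((100, 2))   # smooth drift
labels_prev = KMeans(n_clusters=3, n_init=5, random_state=0).fit_predict(X_prev)

cands = candidates(X_curr, k=3)
objs = [(c.inertia_, 1 - adjusted_rand_score(labels_prev, c.labels_))
        for c in cands]
print("non-dominated candidates:", pareto_front(objs))
```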
This paper proposes a mesh-free computational framework and machine learning theory for solving elliptic PDEs on unknown manifolds, identified with point clouds, based on diffusion maps (DM) and deep learning. The PDE solver is formulated as a supervised learning task: a least-squares regression problem that imposes an algebraic equation approximating the PDE (and boundary conditions, if applicable). This algebraic equation involves a graph-Laplacian-type matrix obtained via a DM asymptotic expansion, which is a consistent estimator of second-order elliptic differential operators. The resulting numerical method solves a highly non-convex empirical risk minimization problem over a hypothesis space of neural-network-type functions. In a well-posed elliptic PDE setting, when the hypothesis space consists of feedforward neural networks of either infinite width or infinite depth, we show that the global minimizer of the empirical loss function is a consistent solution in the limit of large training data. When the hypothesis space is a two-layer neural network, we show that for sufficiently large width, gradient descent can identify a global minimizer of the empirical loss function. Supporting numerical examples demonstrate the convergence of the solutions and the effectiveness of the proposed solver in avoiding numerical issues that hamper the traditional approach when large data sets become available, e.g., large matrix inversions.
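A simplified sketch of the pipeline is given below, with points sampled from the unit circle and a random-feature model standing in for the trained deep networks; the kernel bandwidth and normalization are illustrative stand-ins for the paper's DM construction.

```python
import numpy as np

# (1) DM-type graph Laplacian from a Gaussian kernel on a point cloud
#     (a simplified stand-in for the paper's DM construction).
rng = np.random.default_rng(0)
n = 400
theta = rng.uniform(0, 2*np.pi, n)
X = np.c_[np.cos(theta), np.sin(theta)]           # points on the unit circle
eps = 0.05
D2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
K = np.exp(-D2 / (4*eps))
q = K.sum(1)
K1 = K / np.outer(q, q)                           # alpha=1 DM normalization
d = K1.sum(1)
L = (K1 / d[:, None] - np.eye(n)) / eps           # ~ Laplace-Beltrami

# (2) Hypothesis space: random-feature model u(x) = tanh(xW + b) c,
#     a linear-in-c stand-in for the trained networks in the paper.
W = rng.standard_normal((2, 200)); b = rng.uniform(0, 2*np.pi, 200)
Phi = np.tanh(X @ W + b)

# (3) Least-squares residual of the PDE  -Lap u + u = f  on the cloud;
#     on the circle, u*(theta) = cos(2*theta) solves it for f = 5*cos(2*theta).
f = 5 * np.cos(2*theta)
A = -L @ Phi + Phi
c, *_ = np.linalg.lstsq(A, f, rcond=None)
u = Phi @ c
print("relative error:", np.linalg.norm(u - np.cos(2*theta)) /
      np.linalg.norm(np.cos(2*theta)))
```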
Linear discriminant analysis (LDA) is a classical method for dimensionality reduction, in which discriminant vectors are sought to project data to a lower-dimensional space for optimal separability of classes. Several recent papers have outlined strategies for exploiting sparsity when applying LDA to high-dimensional data, but many lack scalable methods for solving the underlying optimization problems. We propose three new numerical optimization schemes for solving the sparse optimal scoring formulation of LDA, based on block coordinate descent, the proximal gradient method, and the alternating direction method of multipliers. We show that the per-iteration cost of these methods scales linearly in the dimension of the data provided restricted regularization terms are employed, and cubically in the dimension of the data in the worst case. Furthermore, we establish that if our block coordinate descent framework generates convergent subsequences of iterates, then these subsequences converge to stationary points of the sparse optimal scoring problem. We demonstrate the effectiveness of our new methods with empirical results for classification of Gaussian data and data sets drawn from benchmarking repositories, including time-series and multispectral X-ray data, and provide Matlab and R implementations of our optimization schemes.
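As an illustration of the proximal gradient variant, the sketch below solves an l1-plus-Tikhonov-regularized least-squares subproblem of the form the discriminant-vector update takes for a simple identity regularizer; the function names, parameters, and values are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_sos_beta(X, Yt, lam, gamma=1e-3, iters=500):
    """Proximal gradient for one discriminant-vector update (sketch).

    Solves min_b 0.5/n*||Yt - X b||^2 + 0.5*gamma*||b||^2 + lam*||b||_1,
    the form the sparse-optimal-scoring beta subproblem takes for an
    identity Tikhonov regularizer. Step size from the Lipschitz
    constant of the smooth part.
    """
    n, p = X.shape
    Lip = np.linalg.norm(X, 2)**2 / n + gamma
    step = 1.0 / Lip
    b = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ b - Yt) / n + gamma * b
        b = soft_threshold(b - step * grad, step * lam)
    return b

# toy usage: two Gaussian classes, scoring vector Yt = +/-1
rng = np.random.default_rng(0)
n, p = 100, 300
X = rng.standard_normal((n, p)); X[:50, 0] += 2.0
Yt = np.r_[np.ones(50), -np.ones(50)]
b = prox_grad_sos_beta(X - X.mean(0), Yt, lam=0.1)
print("nonzeros:", np.count_nonzero(b))
```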
Mean Field Games (MFG) have been introduced to tackle games with a large number of competing players. In the limit of infinitely many players, Nash equilibria are studied through the interaction of a typical player with the population's distribution. The situation in which the players cooperate corresponds to Mean Field Control (MFC) problems, which can also be viewed as optimal control problems driven by McKean-Vlasov dynamics. Both types of problems have found a wide range of potential applications, for which numerical methods play a key role since most models do not admit analytical solutions. In these notes, we review several aspects of numerical methods for MFG and MFC. We start by presenting some heuristics in a basic linear-quadratic setting. We then discuss numerical schemes for forward-backward systems of partial differential equations (PDEs), optimization techniques for variational problems driven by a Kolmogorov-Fokker-Planck PDE, an approach based on a monotone operator viewpoint, and stochastic methods relying on machine learning tools.
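As a taste of the linear-quadratic heuristics, the sketch below computes a 1D LQ mean field equilibrium by alternating a backward sweep for the best response against the current mean path with a forward sweep for the population mean; the model and discretization are illustrative, not taken from the notes.

```python
import numpy as np

# Fixed-point iteration for a 1D linear-quadratic MFG (sketch).
# State dX = a dt + sigma dW; running cost a^2/2 + (x - m(t))^2/2,
# where m(t) is the population mean. Given m(.), the best response is
# a = -(P(t) x + q(t)) with
#   P' = P^2 - 1, P(T) = 0   and   q' = P q + m, q(T) = 0,
# and the mean then evolves as m' = -(P m + q), m(0) = m0.
T, N, m0 = 1.0, 200, 1.0
dt = T / N
t = np.linspace(0, T, N + 1)
P = np.tanh(T - t)                        # closed-form Riccati solution

m = np.full(N + 1, m0)                    # initial guess for the mean path
for it in range(50):
    # backward sweep for q given the current mean path
    q = np.zeros(N + 1)
    for k in range(N, 0, -1):
        q[k-1] = q[k] - dt * (P[k] * q[k] + m[k])
    # forward sweep for the mean under the best response
    m_new = np.empty(N + 1); m_new[0] = m0
    for k in range(N):
        m_new[k+1] = m_new[k] - dt * (P[k] * m_new[k] + q[k])
    if np.max(np.abs(m_new - m)) < 1e-10:
        break
    m = m_new
print("fixed point after", it + 1, "iterations; m(T) =", m[-1])
```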
In this paper, we study multi-dimensional image recovery. Recently, transform-based tensor nuclear norm minimization methods have been proposed to capture low-rank tensor structures when recovering third-order tensors in multi-dimensional image processing applications. The main characteristic of such methods is to apply a linear transform along the third mode of the third-order tensor and then minimize the tensor nuclear norm of the transformed tensor, so that the underlying low-rank tensor can be recovered. The main aim of this paper is to propose a nonlinear multilayer neural network that learns a nonlinear transform from the observed tensor data under self-supervision. The proposed network exploits the low-rank representation of the transformed tensor and the data fit between the observed tensor and the reconstructed tensor to construct the nonlinear transformation. Extensive experimental results on tensor completion, background subtraction, robust tensor completion, and snapshot compressive imaging demonstrate that the proposed method outperforms state-of-the-art methods.
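The transform-then-shrink pipeline underlying such methods can be sketched as follows, with a fixed orthogonal matrix standing in for the learned nonlinear multilayer transform; names and parameters are illustrative.

```python
import numpy as np

def transformed_svt(Y, Phi, tau):
    """Transform-then-shrink step used in transform-based TNN methods.

    Apply a transform Phi along the third mode, singular-value
    threshold every frontal slice, then invert the transform. The
    paper *learns* a nonlinear multilayer transform under
    self-supervision; here a fixed matrix stands in for it.
    """
    Yt = np.einsum('ijk,lk->ijl', Y, Phi)      # transform along mode 3
    Xt = np.empty_like(Yt)
    for l in range(Yt.shape[2]):
        U, s, Vt = np.linalg.svd(Yt[:, :, l], full_matrices=False)
        Xt[:, :, l] = (U * np.maximum(s - tau, 0.0)) @ Vt
    return np.einsum('ijl,lk->ijk', Xt, np.linalg.inv(Phi).T)  # invert

# toy usage: denoise a low-rank tensor with an orthogonal transform
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 2)); B = rng.standard_normal((2, 30))
X = np.stack([A @ B] * 8, axis=2)              # rank-2 frontal slices
Y = X + 0.1 * rng.standard_normal(X.shape)
Phi, _ = np.linalg.qr(rng.standard_normal((8, 8)))  # stand-in transform
Xhat = transformed_svt(Y, Phi, tau=1.0)
print("relative error:", np.linalg.norm(Xhat - X) / np.linalg.norm(X))
```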