We introduce a high-order spline geometric approach for the initial boundary value problem for Maxwell's equations. The method is geometric in the sense that it discretizes, in a structure-preserving fashion, the two de Rham sequences of differential forms involved in the formulation of the continuous system. Both the Ampère--Maxwell and the Faraday equations are required to hold strongly, while two discrete Hodge star operators are used to make the system solvable. By exploiting the properties of the chosen spline spaces and concepts from exterior calculus, a non-standard explicit-in-time formulation is introduced, based on the solution of linear systems whose matrices have Kronecker product structure, rather than on mass matrices as in the standard literature. These matrices arise from the application of the exterior (wedge) product in the discrete setting, and they retain their Kronecker product structure independently of the geometry of the domain or the material parameters. The resulting scheme preserves the desirable energy conservation properties of the known approaches. The computational advantages of the newly proposed scheme are studied both through a complexity analysis and through numerical experiments in three dimensions.
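As a rough illustration of why Kronecker product structure pays off computationally (this is a generic linear-algebra sketch, not the paper's spline or Hodge matrices; A, B, and b are random placeholders), a system (A ⊗ B) x = b can be solved with two small dense solves instead of one large one:

```python
import numpy as np

# Minimal sketch: solving (A kron B) x = b without ever forming the Kronecker product.
# A, B, b are placeholder data, not the discrete operators from the paper.
rng = np.random.default_rng(0)
n, m = 40, 30
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrices
B = rng.standard_normal((m, m)) + m * np.eye(m)
b = rng.standard_normal(n * m)

# With row-major vectorization, (A kron B) vec(X) = vec(A X B^T),
# so (A kron B) x = b is equivalent to the matrix equation A X B^T = mat(b).
Bmat = b.reshape(n, m)
X = np.linalg.solve(B, np.linalg.solve(A, Bmat).T).T   # two small solves
x_fast = X.ravel()

# Reference solution with the explicitly assembled Kronecker matrix.
x_ref = np.linalg.solve(np.kron(A, B), b)
print(np.allclose(x_fast, x_ref))   # True
```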
Noiseless compressive sensing is a protocol that enables undersampling and later recovery of a signal without loss of information. This compression is possible when the signal is sufficiently sparse in a given basis. Currently, the algorithm offering the best tradeoff between compression rate, robustness, and speed for compressive sensing is the LASSO (l1-norm bias) algorithm. However, many studies have pointed out that implementing lp-norm biases, with p smaller than one, could give better performance while sacrificing convexity. In this work, we focus specifically on the extreme case of l0-based reconstruction, a task that is complicated by the discontinuity of the loss. In the first part of the paper, we describe, via statistical physics methods and in particular the replica method, how the solutions to this optimization problem are arranged in a clustered structure. We observe two distinct regimes: one at low compression rate, where the signal can be recovered exactly, and one at high compression rate, where the signal cannot be recovered accurately. In the second part, we present two message-passing algorithms, based on these results, for the l0-norm optimization problem. The proposed algorithms are able to recover the signal at compression rates higher than those achieved by LASSO while remaining computationally efficient.
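For a concrete sense of the l0-constrained recovery task, the sketch below uses iterative hard thresholding (IHT), a standard baseline heuristic for this non-convex problem; it is not one of the message-passing algorithms proposed in the paper, and the measurement matrix, sparsity level, and data are synthetic placeholders.

```python
import numpy as np

# Minimal sketch of l0-constrained recovery via iterative hard thresholding (IHT),
# a standard baseline for this non-convex problem (not the paper's algorithms).
rng = np.random.default_rng(1)
n, m, k = 200, 100, 10                   # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                           # noiseless undersampled measurements

def hard_threshold(v, k):
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]    # indices of the k largest-magnitude entries
    out[keep] = v[keep]
    return out

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size for the gradient part
for _ in range(2000):
    x = hard_threshold(x + step * A.T @ (y - A @ x), k)

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative reconstruction error
```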
Time-continuous Volterra equations valued in $\mathbb{R}$ with nonnegative resolvent kernels enjoy two basic monotonicity properties. The first is that any two solution curves do not intersect, provided the given signals are suitable. The second is that the solutions to the autonomous equations are monotone. The so-called CM-preserving schemes (Comm. Math. Sci., 2021, 19(5), 1301-1336) were proposed to preserve the complete monotonicity property, and hence these monotonicity properties, but they are restricted to uniform meshes. In this work, through an analogue of the convolution on nonuniform meshes, we introduce the concept of ``right complementary monotone'' (R-CMM) kernels at the discrete level, an analogue of the CM-preserving property that is far more flexible. We prove that the discrete solutions preserve these two monotonicity properties whenever the discretized kernel satisfies the R-CMM property. Technically, our analysis relies heavily on the resolvent kernels.
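To make the setting concrete, the sketch below discretizes a scalar Volterra equation of the second kind on a nonuniform mesh with a simple implicit product-rectangle rule; the kernel, signal, and mesh grading are placeholders, and the scheme is generic rather than the paper's construction, so whether its discretized kernel satisfies the R-CMM property is not checked here.

```python
import numpy as np

# Minimal sketch (not the paper's scheme): an implicit product-rectangle rule for the
# scalar Volterra equation  u(t) + \int_0^t k(t-s) u(s) ds = f(t)  on a nonuniform mesh.
k = lambda t: np.exp(-t)          # a completely monotone test kernel
f = lambda t: np.ones_like(t)     # constant signal (autonomous case)

T, N = 5.0, 200
t = T * (np.arange(N + 1) / N) ** 2          # graded (nonuniform) mesh
u = np.zeros(N + 1)
u[0] = f(np.array([0.0]))[0]                 # u(0) = f(0)
for i in range(1, N + 1):
    dt = np.diff(t[:i + 1])                  # subinterval lengths up to t_i
    mid = 0.5 * (t[:i] + t[1:i + 1])         # midpoints of the subintervals
    w = k(t[i] - mid) * dt                   # quadrature weights for the memory term
    rhs = f(t[i:i + 1])[0] - np.dot(w[:-1], u[1:i])
    u[i] = rhs / (1.0 + w[-1])               # implicit update for u(t_i)

print(np.diff(u).max())   # expected nonpositive: the discrete solution is nonincreasing
```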
We propose a machine-learning approach to model long-term, out-of-sample dynamics of brain activity from task-dependent fMRI data. Our approach consists of three stages. First, we exploit Diffusion maps (DMs) to discover a set of variables that parametrize the low-dimensional manifold on which the emergent high-dimensional fMRI time series evolve. Then, we construct reduced-order models (ROMs) on the embedded manifold via two techniques: Feedforward Neural Networks (FNNs) and the Koopman operator. Finally, to predict the out-of-sample long-term dynamics of brain activity in the ambient fMRI space, we solve the pre-image problem, coupling DMs with Geometric Harmonics (GH) when using FNNs and using the Koopman modes per se. For our illustrations, we assess the performance of the two proposed schemes on a benchmark fMRI dataset with recordings during a visuo-motor task. The results suggest that just a few (for the particular task, five) non-linear coordinates of the high-dimensional fMRI time series provide a good basis for modelling and out-of-sample prediction of the brain activity. Furthermore, we show that the proposed approaches outperform the one-step-ahead predictions of the naive random walk model, which, in contrast to our scheme, relies on knowledge of the signals at the previous time step. Importantly, we show that the proposed Koopman operator approach provides, for all practical purposes, results equivalent to those of the FNN-GH approach, thus bypassing the need to train a non-linear map and to use GH to extrapolate predictions in the ambient fMRI space; one can instead use the low-frequency truncation of the DM function space of L^2-integrable functions to predict the entire set of coordinate functions in the fMRI space and to solve the pre-image problem.
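As a rough sketch of the first stage only (the manifold-learning step), the code below computes a diffusion-maps embedding of synthetic data rather than fMRI time series; the Gaussian kernel, median-distance bandwidth, and number of retained coordinates are placeholder choices, and the ROM and pre-image stages are omitted.

```python
import numpy as np

# Minimal sketch of a diffusion-maps embedding on synthetic (non-fMRI) data.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 500)
X = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.05 * rng.standard_normal((500, 2))

D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
eps = np.median(D2)                                   # heuristic kernel bandwidth
K = np.exp(-D2 / eps)                                 # Gaussian affinity matrix
P = K / K.sum(axis=1, keepdims=True)                  # row-stochastic Markov matrix

vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
vals, vecs = vals.real[order], vecs.real[:, order]
embedding = vecs[:, 1:3] * vals[1:3]                  # leading nontrivial DM coordinates
print(embedding.shape)                                # (500, 2)
```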
Quadratic minimization problems with orthogonality constraints (QMPO) play an important role in many applications in science and engineering. However, some existing methods may suffer from low accuracy or a heavy computational workload for large-scale QMPO. Krylov subspace methods are popular for large-scale optimization problems. In this work, we propose a block Lanczos method for solving large-scale QMPO. In the proposed method, the original problem is projected onto a small-sized one, and the Riemannian Trust-Region method is employed to solve the reduced QMPO. Convergence results on the optimal solution, the optimal objective function value, the multiplier, and the KKT error are established. Moreover, we give the convergence speed of the optimal solution, and show that if the block Lanczos process terminates, then an exact KKT solution is obtained. Numerical experiments illustrate the numerical behavior of the proposed algorithm, and demonstrate that it is more powerful than many state-of-the-art algorithms for large-scale quadratic minimization problems with orthogonality constraints.
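The sketch below illustrates only the projection idea behind such methods: building an orthonormal basis of a block Krylov subspace and forming the reduced matrix. The matrix A and starting block V are random placeholders, and the Riemannian Trust-Region solve of the reduced problem is omitted, so this is not the paper's algorithm.

```python
import numpy as np

# Minimal sketch of a block-Krylov (block-Lanczos-type) projection: build an orthonormal
# basis of span{V, A V, A^2 V, ...} block by block and project A onto it.
rng = np.random.default_rng(3)
n, p, steps = 500, 4, 10
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                          # symmetric test matrix
V, _ = np.linalg.qr(rng.standard_normal((n, p)))

blocks = [V]
W = V
for _ in range(steps - 1):
    W = A @ W
    for _ in range(2):                     # orthogonalize twice against previous blocks
        for Qb in blocks:
            W = W - Qb @ (Qb.T @ W)
    W, _ = np.linalg.qr(W)
    blocks.append(W)

Q = np.concatenate(blocks, axis=1)         # n x (p * steps) orthonormal basis
A_reduced = Q.T @ A @ Q                    # small matrix defining the reduced problem
print(A_reduced.shape, np.linalg.norm(Q.T @ Q - np.eye(Q.shape[1])))
```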
The idea of embedding optimization problems into deep neural networks as optimization layers, to encode constraints and inductive priors, has taken hold in recent years. Most existing methods focus on implicitly differentiating Karush-Kuhn-Tucker (KKT) conditions in a way that requires expensive computations on the Jacobian matrix, which can be slow and memory-intensive. In this paper, we develop a new framework, named Alternating Differentiation (Alt-Diff), that differentiates optimization problems (here, specifically convex optimization problems with polyhedral constraints) in a fast and recursive way. Alt-Diff decouples the differentiation procedure into a primal update and a dual update in an alternating fashion. Accordingly, Alt-Diff substantially decreases the dimension of the Jacobian matrix, especially for optimization problems with large-scale constraints, and thus increases the computational speed of implicit differentiation. We show that the gradients obtained by Alt-Diff are consistent with those obtained by differentiating the KKT conditions. In addition, we propose to truncate Alt-Diff to further accelerate the computation. Under some standard assumptions, we show that the truncation error of the gradients is bounded by the same order as the estimation error of the variables. Therefore, Alt-Diff can be truncated to further increase computational speed without sacrificing much accuracy. A series of comprehensive experiments validates the superiority of Alt-Diff.
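For context, the sketch below shows the KKT-based baseline that such frameworks are compared against, on the simplest case of an equality-constrained quadratic program: the solution map is differentiated implicitly by solving the KKT system with extra right-hand sides. It is not Alt-Diff itself (which alternates primal and dual updates), and the problem data are random placeholders.

```python
import numpy as np

# Implicit differentiation of  min 0.5 x'Qx + p'x  s.t.  Ax = b  with respect to p,
# via the KKT linear system (the baseline approach, not Alt-Diff).
rng = np.random.default_rng(4)
n, m = 30, 5
L = rng.standard_normal((n, n))
Q = L @ L.T + np.eye(n)                         # positive definite quadratic term
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
p = rng.standard_normal(n)

KKT = np.block([[Q, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(KKT, np.concatenate([-p, b]))
x, nu = sol[:n], sol[n:]                        # primal solution and multiplier

# Jacobian dx/dp: solve the same KKT system with right-hand sides [-I; 0].
rhs = np.vstack([-np.eye(n), np.zeros((m, n))])
dxdp = np.linalg.solve(KKT, rhs)[:n, :]

# Finite-difference check in a random direction.
d = rng.standard_normal(n) * 1e-6
x_pert = np.linalg.solve(KKT, np.concatenate([-(p + d), b]))[:n]
print(np.linalg.norm(x_pert - x - dxdp @ d))    # small: the solution map is linear in p
```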
The Frank-Wolfe (FW) method is a popular approach for solving optimization problems with structured constraints that arise in machine learning applications. In recent years, stochastic versions of FW have gained popularity, motivated by large datasets for which the computation of the full gradient is prohibitively expensive. In this paper, we present two new variants of the FW algorithm for stochastic finite-sum minimization. Our algorithms have the best convergence guarantees among existing stochastic FW approaches for both convex and non-convex objective functions. Our methods do not require permanently collecting large batches, an issue common to many stochastic projection-free approaches. Moreover, our second approach requires neither large batches nor full deterministic gradients, which is a typical weakness of many techniques for finite-sum problems. The faster theoretical rates of our approaches are confirmed experimentally.
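To fix ideas, the sketch below runs a plain stochastic Frank-Wolfe iteration for finite-sum least squares over an l1-ball, where the linear minimization oracle is a signed, scaled basis vector; the minibatch gradient estimator, step-size rule, and data are generic placeholders rather than the variance-reduced variants proposed in the paper.

```python
import numpy as np

# Minimal sketch of a stochastic Frank-Wolfe iteration over an l1-ball constraint.
rng = np.random.default_rng(5)
N, d, radius = 1000, 50, 5.0
A = rng.standard_normal((N, d))
w_true = np.zeros(d)
w_true[:5] = rng.standard_normal(5)
y = A @ w_true

def lmo_l1(g, radius):
    """Linear minimization oracle over the l1-ball: a signed, scaled basis vector."""
    s = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    s[i] = -radius * np.sign(g[i])
    return s

w = np.zeros(d)
for t in range(1, 2001):
    batch = rng.choice(N, 32, replace=False)
    grad = A[batch].T @ (A[batch] @ w - y[batch]) / len(batch)   # minibatch gradient
    s = lmo_l1(grad, radius)
    gamma = 2.0 / (t + 2)                                        # standard FW step size
    w = (1 - gamma) * w + gamma * s

print(np.mean((A @ w - y) ** 2))   # training loss after the FW iterations
```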
PCA-Net is a recently proposed neural operator architecture which combines principal component analysis (PCA) with neural networks to approximate operators between infinite-dimensional function spaces. The present work develops approximation theory for this approach, improving and significantly extending previous work in this direction: First, a novel universal approximation result is derived, under minimal assumptions on the underlying operator and the data-generating distribution. Then, two potential obstacles to efficient operator learning with PCA-Net are identified, and made precise through lower complexity bounds; the first relates to the complexity of the output distribution, measured by a slow decay of the PCA eigenvalues. The other obstacle relates to the inherent complexity of the space of operators between infinite-dimensional input and output spaces, resulting in a rigorous and quantifiable statement of the curse of dimensionality. In addition to these lower bounds, upper complexity bounds are derived. A suitable smoothness criterion is shown to ensure an algebraic decay of the PCA eigenvalues. Furthermore, it is shown that PCA-Net can overcome the general curse of dimensionality for specific operators of interest, arising from the Darcy flow and the Navier-Stokes equations.
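The sketch below mimics the PCA-Net structure on a toy problem: PCA applied to discretized input and output functions, with a small neural network mapping input PCA coefficients to output PCA coefficients. The operator used here (pointwise square followed by an antiderivative), the component counts, and the network size are placeholders; it is not Darcy flow or Navier-Stokes, and the approximation-theoretic content of the paper is of course not reproduced by this example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# Minimal PCA-Net-style surrogate on a toy operator between 1D function spaces.
rng = np.random.default_rng(6)
x = np.linspace(0, 1, 128)
n_samples = 2000
coeffs = rng.standard_normal((n_samples, 5)) / np.arange(1, 6)   # random smooth inputs
U = coeffs @ np.sin(np.outer(np.arange(1, 6), np.pi * x))
V = np.cumsum(U ** 2, axis=1) * (x[1] - x[0])                    # toy nonlinear operator

pca_in, pca_out = PCA(n_components=10), PCA(n_components=10)
Zu = pca_in.fit_transform(U)                                     # input PCA coefficients
Zv = pca_out.fit_transform(V)                                    # output PCA coefficients

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(Zu[:1500], Zv[:1500])                                    # learn coefficient map

V_pred = pca_out.inverse_transform(net.predict(Zu[1500:]))
rel_err = np.linalg.norm(V_pred - V[1500:]) / np.linalg.norm(V[1500:])
print(rel_err)   # relative test error of the PCA-Net-style surrogate
```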
We present two integral-equation-based methods, a decoupled one and a coupled one, for the Morse-Ingard equations subject to Neumann boundary conditions on an exterior domain. Both methods are based on second-kind integral equation (SKIE) formulations. The coupled method is well-conditioned and can achieve high accuracy. The decoupled method has lower computational cost and more flexibility in dealing with the boundary layer; however, it is prone to ill-conditioning of the decoupling transform and cannot achieve as high accuracy as the coupled method. We show numerical examples using a Nystr\"om method based on quadrature-by-expansion (QBX) with fast-multipole acceleration. We demonstrate the accuracy and efficiency of the solvers in both two and three dimensions with complex geometries.
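As a generic illustration of why SKIE formulations are attractive numerically, the sketch below applies a plain Nystr\"om discretization with the periodic trapezoidal rule to a manufactured second-kind equation; the kernel and solution are placeholders unrelated to the Morse-Ingard system, and no QBX or fast-multipole acceleration is involved.

```python
import numpy as np

# Nystrom discretization of a manufactured second-kind equation (I + K) sigma = f
# on a periodic interval, with the trapezoidal rule.
kernel = lambda t, s: np.exp(np.cos(t - s))          # smooth periodic test kernel
sigma_exact = lambda s: np.sin(2 * s) + 0.5          # manufactured solution

def rhs(t, n_fine=2048):
    # Evaluate f = sigma + K sigma with a fine quadrature (essentially exact here).
    s = 2 * np.pi * np.arange(n_fine) / n_fine
    w = 2 * np.pi / n_fine
    return sigma_exact(t) + (kernel(t[:, None], s[None, :]) * sigma_exact(s)).sum(1) * w

n = 64
t = 2 * np.pi * np.arange(n) / n
w = 2 * np.pi / n
A = np.eye(n) + kernel(t[:, None], t[None, :]) * w   # Nystrom system matrix
sigma = np.linalg.solve(A, rhs(t))
print(np.abs(sigma - sigma_exact(t)).max())          # small nodal error
```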
Under-approximations of reachable sets and tubes have been receiving growing research attention due to their important roles in control synthesis and verification. Available under-approximation methods applicable to continuous-time linear systems typically assume the ability to compute transition matrices and their integrals exactly, which is not feasible in general, and/or suffer from high computational costs. In this note, we attempt to overcome these drawbacks for a class of linear time-invariant (LTI) systems by proposing a novel method to under-approximate finite-time forward reachable sets and tubes, utilizing approximations of the matrix exponential and its integral. In particular, we consider the class of continuous-time LTI systems with an identity input matrix and with initial and input values belonging to full-dimensional sets that are affine transformations of closed unit balls. The proposed method yields computationally efficient under-approximations of reachable sets and tubes, when implemented using zonotopes, with first-order convergence guarantees in the sense of the Hausdorff distance. To illustrate its performance, we implement our approach on three numerical examples involving linear systems of dimensions ranging between 2 and 200.
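The sketch below shows only the elementary building block of zonotope-based reachability for an autonomous LTI system: the image of a zonotope under the state transition matrix exp(A*dt). It maps the set through the (numerically computed) exponential and does not perform the guaranteed under-approximation with approximate matrix exponentials and input terms developed in the note; the dynamics and initial set are placeholders.

```python
import numpy as np
from scipy.linalg import expm

# Image of a zonotope (center, generators) under the state transition matrix of x' = Ax.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
dt = 0.1

center = np.array([1.0, 0.0])
generators = np.array([[0.2, 0.0],
                       [0.0, 0.1]])      # columns span a box-shaped initial zonotope

Phi = expm(A * dt)                       # state transition matrix over one step
reach_center = Phi @ center
reach_generators = Phi @ generators      # zonotopes are closed under linear maps
print(reach_center, reach_generators)
```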
Classic algorithms and machine learning systems like neural networks are both abundant in everyday life. While classic computer science algorithms are suitable for the precise execution of exactly defined tasks, such as finding the shortest path in a large graph, neural networks allow learning from data to predict the most likely answer in more complex tasks, such as image classification, which cannot be reduced to an exact algorithm. To get the best of both worlds, this thesis explores combining both concepts, leading to more robust, better-performing, more interpretable, more computationally efficient, and more data-efficient architectures. The thesis formalizes the idea of algorithmic supervision, which allows a neural network to learn from or in conjunction with an algorithm. When integrating an algorithm into a neural architecture, it is important that the algorithm is differentiable, so that the architecture can be trained end-to-end and gradients can be propagated back through the algorithm in a meaningful way. To make algorithms differentiable, this thesis proposes a general method for continuously relaxing algorithms by perturbing variables and approximating the expectation value in closed form, i.e., without sampling. In addition, this thesis proposes differentiable algorithms, such as differentiable sorting networks, differentiable renderers, and differentiable logic gate networks. Finally, this thesis presents alternative training strategies for learning with algorithms.
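A one-line instance of the perturbation-based relaxation idea, shown below as a sketch: a hard comparison 1{x > 0} becomes differentiable by perturbing x with Gaussian noise and taking the expectation in closed form, E[1{x + eps > 0}] = Phi(x / sigma). The noise scale sigma is an arbitrary placeholder, and this is only the simplest building block, not the differentiable sorting networks, renderers, or logic gate networks themselves.

```python
import numpy as np
from scipy.stats import norm

def hard_step(x):
    return (x > 0).astype(float)          # non-differentiable comparison

def relaxed_step(x, sigma=0.1):
    # Closed-form expectation under Gaussian perturbation of x: no sampling needed.
    return norm.cdf(x / sigma)

x = np.linspace(-1, 1, 5)
print(hard_step(x))
print(relaxed_step(x))                    # smooth, differentiable approximation
```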