Applying half-quadratic optimization to loss functions can yield the corresponding regularizers, but these regularizers are usually not sparsity-inducing regularizers (SIRs). To address this, we devise a framework for generating an SIR with a closed-form proximity operator. We then instantiate the framework with several commonly used loss functions and produce the corresponding SIRs, which are adopted as nonconvex rank surrogates for low-rank matrix completion. Furthermore, algorithms based on the alternating direction method of multipliers (ADMM) are developed. Extensive numerical results show the effectiveness of our methods in terms of recovery performance and runtime.
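The abstract leaves the specific SIRs to the paper, but its key ingredient, a closed-form proximity operator used inside ADMM, can be illustrated with the convex baseline that nonconvex rank surrogates generalize: the nuclear norm, whose proximity operator soft-thresholds the singular values. A minimal sketch (function name hypothetical):

```python
import numpy as np

def prox_nuclear(Y, tau):
    """Proximity operator of tau * ||X||_* (nuclear norm): soft-threshold
    the singular values of Y. A convex stand-in for the paper's nonconvex
    SIR rank surrogates, whose proximity operators are likewise closed-form."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```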
We consider the problem of aggregating the judgements of a group of experts to form a single prior distribution representing the judgements of the group. We develop a Bayesian hierarchical model to reconcile the judgements of the group of experts based on elicited quantiles for continuous quantities and probabilities for one-off events. Previous Bayesian reconciliation methods have not been used widely, if at all, in contrast to pooling methods and consensus-based approaches. To address this, we embed Bayesian reconciliation within the probabilistic Delphi method. This furnishes the outcome of the probabilistic Delphi method with a direct probabilistic interpretation, with the resulting prior representing the judgements of the decision maker. We can use the rationales from the Delphi process to group the experts for the hierarchical modelling. We illustrate the approach with applications to studies evaluating erosion in embankment dams and pump failures in a water pumping station, and assess the properties of the approach using the TU Delft database of expert judgement studies. We see that, even using an off-the-shelf implementation of the approach, it outperforms individual experts, equal weighting of experts and the classical method based on the log score.
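As a rough illustration of reconciling elicited quantiles (not the paper's hierarchical model, which also handles probabilities for one-off events and groups experts by their Delphi rationales), one can fit a normal to each expert's elicited 5%/50%/95% quantiles and pool the experts by precision weighting; all names here are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def normal_from_quantiles(q05, q50, q95):
    """Fit a normal to one expert's elicited 5%/50%/95% quantiles."""
    sigma = (q95 - q05) / (norm.ppf(0.95) - norm.ppf(0.05))
    return q50, sigma

def aggregate(experts):
    """Precision-weighted pooling of per-expert normals: a crude
    stand-in for the Bayesian reconciliation posterior."""
    mus, sigmas = zip(*(normal_from_quantiles(*e) for e in experts))
    w = 1.0 / np.array(sigmas) ** 2
    mu = float((w * np.array(mus)).sum() / w.sum())
    return mu, float(np.sqrt(1.0 / w.sum()))
```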
Neural operators have been explored as surrogate models for simulating physical systems, overcoming the limitations of traditional partial differential equation (PDE) solvers. However, most existing operator learning methods assume that the data originate from a single physical mechanism, limiting their applicability and performance in more realistic scenarios. To this end, we propose the Physical Invariant Attention Neural Operator (PIANO) to decipher and integrate physical invariants (PI) for operator learning from PDE series with various physical mechanisms. PIANO employs self-supervised learning to extract physical knowledge and attention mechanisms to integrate it into dynamic convolutional layers. Compared to existing techniques, PIANO reduces the relative error by 13.6\%-82.2\% on PDE forecasting tasks across varying coefficients, forces, or boundary conditions. Additionally, varied downstream tasks reveal that the PI embeddings deciphered by PIANO align well with the underlying invariants in the PDE systems, verifying its physical significance. The source code will be publicly available at: //github.com/optray/PIANO.
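The mechanism suggested by the abstract (attention weights, derived from a self-supervised PI embedding, mixing a bank of convolution kernels) can be sketched as follows; this is a best guess at the architecture, not the authors' code, which the linked repository provides:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv1d(nn.Module):
    """Best-guess sketch: attention over a bank of candidate kernels,
    conditioned on a physical-invariant (PI) embedding."""
    def __init__(self, channels, kernel_size, embed_dim, n_kernels=4):
        super().__init__()
        self.bank = nn.Parameter(
            torch.randn(n_kernels, channels, channels, kernel_size))
        self.attn = nn.Linear(embed_dim, n_kernels)
        self.pad = kernel_size // 2  # assumes odd kernel_size

    def forward(self, x, pi_embed):
        # x: (B, C, L); pi_embed: (B, embed_dim)
        alpha = torch.softmax(self.attn(pi_embed), dim=-1)   # (B, K)
        w = torch.einsum('bk,koil->boil', alpha, self.bank)  # per-sample kernels
        out = [F.conv1d(x[i:i + 1], w[i], padding=self.pad)
               for i in range(x.shape[0])]
        return torch.cat(out, dim=0)
```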
Common regularization algorithms for linear regression, such as LASSO and Ridge regression, rely on a regularization hyperparameter that balances the tradeoff between minimizing the fitting error and the norm of the learned model coefficients. As this hyperparameter is scalar, it can be easily selected via random or grid search optimizing a cross-validation criterion. However, using a scalar hyperparameter limits the algorithm's flexibility and potential for better generalization. In this paper, we address the problem of linear regression with $\ell_2$-regularization, where a different regularization hyperparameter is associated with each input variable. We optimize these hyperparameters using a gradient-based approach, wherein the gradient of a cross-validation criterion with respect to the regularization hyperparameters is computed analytically through matrix differential calculus. Additionally, we introduce two strategies tailored to sparse model learning problems, aimed at reducing the risk of overfitting to the validation data. Numerical examples demonstrate that our multi-hyperparameter regularization approach outperforms LASSO, Ridge, and Elastic Net regression. Moreover, the analytical computation of the gradient proves to be more efficient in terms of computational time compared to automatic differentiation, especially when handling a large number of input variables. An application to the identification of over-parameterized Linear Parameter-Varying models is also presented.
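The analytic gradient can be sketched for a validation-loss criterion. With $A = X^\top X + \mathrm{diag}(\lambda)$, $w = A^{-1}X^\top y$ and $L = \|X_v w - y_v\|^2$, matrix differential calculus gives $\partial L/\partial \lambda_j = -2\, w_j (A^{-1} g)_j$ where $g = X_v^\top(X_v w - y_v)$. A minimal sketch (the paper's cross-validation criterion and overfitting safeguards may differ):

```python
import numpy as np

def weights_and_grad(X, y, Xv, yv, lam):
    """Per-feature ridge solve plus the analytic gradient of the
    validation loss ||Xv w - yv||^2 w.r.t. the vector lam."""
    A = X.T @ X + np.diag(lam)
    w = np.linalg.solve(A, X.T @ y)
    g = Xv.T @ (Xv @ w - yv)
    grad = -2.0 * w * np.linalg.solve(A, g)  # dL/dlam_j = -2 w_j (A^{-1} g)_j
    return w, grad
```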
Many time-dependent differential equations are equipped with invariants. Preserving such invariants under discretization can be important, e.g., to improve the qualitative and quantitative properties of numerical solutions. Recently, relaxation methods have been proposed as small modifications of standard time integration schemes guaranteeing the correct evolution of functionals of the solution. Here, we investigate how to combine these relaxation techniques with efficient step size control mechanisms based on local error estimates for explicit Runge-Kutta methods. We demonstrate our results in several numerical experiments including ordinary and partial differential equations.
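For a conserved quadratic invariant such as $\eta(u) = \|u\|^2$, the relaxation parameter solving $\eta(u_n + \gamma d) = \eta(u_n)$ along the Runge-Kutta update direction $d$ is available in closed form. A minimal sketch of one relaxed step (the step size control mechanisms that are the paper's focus are omitted):

```python
import numpy as np

def relaxed_step(u, d):
    """One relaxation step for the conserved quadratic invariant
    eta(u) = ||u||^2: choose gamma so eta(u + gamma*d) = eta(u),
    i.e. the nonzero root of 2*gamma*<u,d> + gamma^2*||d||^2 = 0.
    Here d = u_{n+1} - u_n is the Runge-Kutta update direction;
    for an accurate step, gamma is close to 1."""
    gamma = -2.0 * np.dot(u, d) / np.dot(d, d)
    return u + gamma * d
```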
Low-rank approximation of a matrix function, $f(A)$, is an important task in computational mathematics. Most methods require direct access to $f(A)$, which is often considerably more expensive than accessing $A$. Persson and Kressner (SIMAX 2023) avoid this issue for symmetric positive semidefinite matrices by proposing funNystr\"om, which first constructs a Nystr\"om approximation to $A$ using subspace iteration, and then uses the approximation to directly obtain a low-rank approximation for $f(A)$. They prove that the method yields a near-optimal approximation whenever $f$ is a continuous operator monotone function with $f(0) = 0$. We significantly generalize the results of Persson and Kressner beyond subspace iteration. We show that if $\widehat{A}$ is a near-optimal low-rank Nystr\"om approximation to $A$ then $f(\widehat{A})$ is a near-optimal low-rank approximation to $f(A)$, independently of how $\widehat{A}$ is computed. Further, we show sufficient conditions for a basis $Q$ to produce a near-optimal Nystr\"om approximation $\widehat{A} = AQ(Q^T AQ)^{\dagger} Q^T A$. We use these results to establish that many common low-rank approximation methods produce near-optimal Nystr\"om approximations to $A$ and therefore to $f(A)$.
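A minimal sketch of the funNystr\"om idea for symmetric positive semidefinite $A$: form $\widehat{A} = AQ(Q^T AQ)^{\dagger} Q^T A$ from any basis $Q$, then apply $f$ through the eigendecomposition of $\widehat{A}$. The dense eigendecomposition below is for clarity only; in practice one works with the low-rank factors:

```python
import numpy as np

def fun_nystrom(A, Q, f):
    """Nystrom approximation A_hat = AQ (Q^T A Q)^+ Q^T A for PSD A,
    followed by f applied through the eigendecomposition of A_hat.
    f is applied entrywise to the eigenvalues and must satisfy f(0) = 0."""
    AQ = A @ Q
    A_hat = AQ @ np.linalg.pinv(Q.T @ AQ) @ AQ.T
    w, V = np.linalg.eigh(A_hat)
    w = np.maximum(w, 0.0)   # clip tiny negative eigenvalues from rounding
    return (V * f(w)) @ V.T
```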
We address the communication overhead of distributed sparse matrix-(multiple)-vector multiplication in the context of large-scale eigensolvers, using filter diagonalization as an example. The basis of our study is a performance model which includes a communication metric that is computed directly from the matrix sparsity pattern without running any code. The performance model quantifies to which extent scalability and parallel efficiency are lost due to communication overhead. To restore scalability, we identify two orthogonal layers of parallelism in the filter diagonalization technique. In the horizontal layer the rows of the sparse matrix are distributed across individual processes. In the vertical layer bundles of multiple vectors are distributed across separate process groups. An analysis in terms of the communication metric predicts that scalability can be restored if, and only if, one implements the two orthogonal layers of parallelism via different distributed vector layouts. Our theoretical analysis is corroborated by benchmarks for application matrices from quantum and solid state physics, road networks, and nonlinear programming. We finally demonstrate the benefits of using orthogonal layers of parallelism with two exemplary application cases -- an exciton and a strongly correlated electron system -- which incur either small or large communication overhead.
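The communication metric is not specified in the abstract, but a natural proxy computable purely from the sparsity pattern is, for a row-wise (horizontal) partition, the number of distinct remote vector entries each process must receive. A hypothetical sketch, with half-open row ranges `(lo, hi)` assigned to processes of a square matrix:

```python
import numpy as np
import scipy.sparse as sp

def comm_volume(A, row_parts):
    """Sparsity-pattern-only proxy for the communication metric: for a
    row-wise partition, count the remote right-hand-side vector entries
    each process needs (the paper's exact metric may differ)."""
    A = sp.csr_matrix(A)
    owner = np.empty(A.shape[1], dtype=int)
    for p, (lo, hi) in enumerate(row_parts):
        owner[lo:hi] = p
    total = 0
    for p, (lo, hi) in enumerate(row_parts):
        cols = np.unique(A[lo:hi].indices)  # columns touched by this block
        total += int(np.count_nonzero(owner[cols] != p))
    return total
```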
We consider the task of estimating functions belonging to a specific class of nonsmooth functions, namely so-called tame functions. These functions appear in a wide range of applications: the training of deep neural networks, value functions of mixed-integer programs, and wave functions of small molecules. We show that tame functions can be approximated by piecewise polynomials on any full-dimensional cube. We then present the first mixed-integer programming formulation of piecewise polynomial regression. Together, these results can be used to estimate tame functions. We demonstrate promising computational results.
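A toy version of a MIP formulation for piecewise regression (degree-1 pieces, big-M assignment of samples to pieces, no continuity or ordering constraints, so certainly simpler than the paper's formulation) can be written with PuLP:

```python
import pulp

def pw_linear_fit(x, y, K=2, M=100.0):
    """Toy MIP for piecewise-linear regression: binaries z[i][k] assign
    sample i to affine piece k via big-M constraints; minimizes the sum
    of absolute residuals. Needs a MIP solver (PuLP ships CBC)."""
    prob = pulp.LpProblem("pw_regression", pulp.LpMinimize)
    a = [pulp.LpVariable(f"a{k}", -M, M) for k in range(K)]
    b = [pulp.LpVariable(f"b{k}", -M, M) for k in range(K)]
    r = [pulp.LpVariable(f"r{i}", 0) for i in range(len(x))]
    z = pulp.LpVariable.dicts("z", (range(len(x)), range(K)), cat="Binary")
    for i, (xi, yi) in enumerate(zip(x, y)):
        prob += pulp.lpSum(z[i][k] for k in range(K)) == 1
        for k in range(K):
            pred = a[k] * xi + b[k]
            prob += yi - pred <= r[i] + M * (1 - z[i][k])
            prob += pred - yi <= r[i] + M * (1 - z[i][k])
    prob += pulp.lpSum(r)
    prob.solve()
    return [(a[k].value(), b[k].value()) for k in range(K)]
```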
Miura surfaces are the solutions of a constrained nonlinear elliptic system of equations. This system is derived by homogenization from the Miura fold, a type of origami fold with multiple applications in engineering. A previous study gave suboptimal conditions for the existence of solutions and proposed an $H^2$-conformal finite element method to approximate them. In this paper, the existence of Miura surfaces is studied using a mixed formulation. It is also proved that the constraints propagate from the boundary to the interior of the domain for well-chosen boundary conditions. Then, a numerical method based on a least-squares formulation, Taylor--Hood finite elements, and a Newton method is introduced to approximate Miura surfaces. The numerical method is proved to converge, and numerical tests are performed to demonstrate its robustness.
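The outer nonlinear solve is a standard Newton iteration on the discretized least-squares system; a minimal generic sketch of that step (a hypothetical dense version, not the Taylor--Hood solver itself):

```python
import numpy as np

def newton(F, J, u0, tol=1e-10, maxit=50):
    """Undamped Newton iteration u <- u - J(u)^{-1} F(u) for the
    discretized nonlinear residual F with Jacobian J."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(maxit):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        u -= np.linalg.solve(J(u), r)
    return u
```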
Nonlinear Fokker-Planck equations play a major role in modeling large systems of interacting particles, with proven effectiveness in describing real-world phenomena ranging from classical fields such as fluids and plasmas to social and biological dynamics. Their mathematical formulation must often contend with physical forces that have a significant random component, or with particles living in a random environment whose characterization can only be deduced from experimental data, consequently leading to uncertainty-dependent equilibrium states. In this work, to effectively solve stochastic Fokker-Planck systems, we construct a new equilibrium-preserving scheme through a micro-macro approach based on stochastic Galerkin methods. Unlike the direct application of a stochastic Galerkin projection in the parameter space of the unknowns of the underlying Fokker-Planck model, the resulting numerical method yields a highly accurate description of the uncertainty-dependent large-time behavior. Several numerical tests in the context of collective behavior for the social and life sciences are presented to assess the validity of the methodology against standard approaches.
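The stochastic Galerkin building block is a projection of an uncertain quantity onto an orthogonal polynomial basis in the random parameter. A minimal sketch for a uniform parameter $z \sim U(-1,1)$ and Legendre polynomials (the micro-macro, equilibrium-preserving coupling is the paper's contribution and is not shown):

```python
import numpy as np

def sg_coefficients(f, n_modes, n_quad=64):
    """Project z -> f(z), z ~ U(-1, 1), onto Legendre polynomials via
    Gauss-Legendre quadrature: c_k = <f, P_k> / <P_k, P_k>, with
    <P_k, P_k> = 2 / (2k + 1). f must accept NumPy arrays."""
    nodes, wts = np.polynomial.legendre.leggauss(n_quad)
    fz = f(nodes)
    coeffs = []
    for k in range(n_modes):
        Pk = np.polynomial.legendre.Legendre.basis(k)(nodes)
        coeffs.append((wts * fz * Pk).sum() * (2 * k + 1) / 2.0)
    return np.array(coeffs)
```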
Robotic capabilities in object manipulation still fall far short of those of humans. Besides years of learning, humans rely heavily on the richness of information from physical interaction with the environment. In particular, tactile sensing is crucial in providing such rich feedback. Despite its potential contribution to robotic manipulation, tactile sensing remains underexploited, mainly due to the complexity of the time series produced by tactile sensors. In this work, we propose a method for assessing grasp stability using tactile sensing. More specifically, we present a methodology to extract task-relevant features and design efficient classifiers to detect object slippage with respect to individual fingertips. We compare two classification models: support vector machines and logistic regression. We use highly sensitive Uskin tactile sensors mounted on an Allegro hand to test and validate our method. Our results demonstrate that the proposed method is effective for online slippage detection.
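The two classifier families compared in the abstract are standard; a minimal scikit-learn setup is shown below (the paper's task-relevant features and hyperparameters are not specified here, so defaults are assumed):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def make_slip_classifiers():
    """Off-the-shelf instances of the two model families compared in
    the paper: an RBF-kernel SVM and logistic regression, each with
    feature standardization."""
    return {
        "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        "logreg": make_pipeline(StandardScaler(),
                                LogisticRegression(max_iter=1000)),
    }
```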