We propose novel optimal and parameter-free algorithms for computing an approximate solution with small (projected) gradient norm. Specifically, for computing an approximate solution such that the norm of its (projected) gradient does not exceed $\varepsilon$, we obtain the following results: a) for the convex case, the total number of gradient evaluations is bounded by $O(1)\sqrt{L\|x_0 - x^*\|/\varepsilon}$, where $L$ is the Lipschitz constant of the gradient and $x^*$ is any optimal solution; b) for the strongly convex case, the total number of gradient evaluations is bounded by $O(1)\sqrt{L/\mu}\log(\|\nabla f(x_0)\|/\varepsilon)$, where $\mu$ is the strong convexity modulus; and c) for the nonconvex case, the total number of gradient evaluations is bounded by $O(1)\sqrt{Ll}(f(x_0) - f(x^*))/\varepsilon^2$, where $l$ is the lower curvature constant. Our complexity results match the lower complexity bounds for the convex and strongly convex cases, and achieve the best-known complexity bound for the nonconvex case for the first time in the literature. Moreover, for all the convex, strongly convex, and nonconvex cases, we propose parameter-free algorithms that do not require the input of any problem parameters. To the best of our knowledge, no such parameter-free methods existed previously, especially for the strongly convex and nonconvex cases. Since most regularity conditions (e.g., strong convexity and lower curvature) are imposed over a global scope, the corresponding problem parameters are notoriously difficult to estimate. However, gradient norm minimization equips us with a convenient tool to monitor the progress of algorithms and thus the ability to estimate such parameters in situ.
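The in-situ estimation idea can be illustrated with a minimal sketch: plain (non-accelerated, hence non-optimal) gradient descent that estimates the Lipschitz constant $L$ by a doubling backtracking rule and uses the gradient norm itself as the stopping test. The function name and the doubling rule are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

def gradient_descent_adaptive(grad, x0, eps=1e-8, L0=1.0, max_iter=100000):
    """Gradient descent with in-situ estimation of the Lipschitz constant L.
    Illustrative sketch only: NOT the accelerated optimal-rate algorithms of
    the paper; it just shows how the gradient norm serves both as a stopping
    criterion and as a monitor for backtracking on L."""
    x, L = np.asarray(x0, dtype=float), L0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:          # stop on small gradient norm
            break
        while True:                            # backtracking: grow L until
            x_new = x - g / L                  # the Lipschitz test holds
            g_new = grad(x_new)
            if np.linalg.norm(g_new - g) <= L * np.linalg.norm(x_new - x):
                break
            L *= 2.0
        x = x_new
    return x

# Convex quadratic f(x) = 0.5 x^T A x with (unknown to the solver) L = 100.
A = np.diag([1.0, 10.0, 100.0])
x = gradient_descent_adaptive(lambda v: A @ v, np.ones(3), eps=1e-8)
```

With this doubling rule the estimate of $L$ overshoots the true constant by at most a factor of two, which is the standard price of backtracking.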
We propose a framework where Fer and Wilcox expansions for the solution of differential equations are derived from two particular choices for the initial transformation that seeds the product expansion. In this scheme intermediate expansions can also be envisaged. Recurrence formulas are developed. A new lower bound for the convergence of the Wilcox expansion is provided as well as some applications of the results. In particular, two examples are worked out up to high order of approximation to illustrate the behavior of the Wilcox expansion.
Most existing neural network-based approaches for solving stochastic optimal control problems using the associated backward dynamic programming principle rely on the ability to simulate the underlying state variables. However, in some problems, this simulation is infeasible, leading to the discretization of the state variable space and the need to train one neural network for each data point. This approach becomes computationally inefficient when dealing with large state variable spaces. In this paper, we consider a class of such stochastic optimal control problems and introduce an effective solution employing multitask neural networks. To train our multitask neural network, we introduce a novel scheme that dynamically balances the learning across tasks. Through numerical experiments on real-world derivatives pricing problems, we demonstrate that our method outperforms state-of-the-art approaches.
We propose a new stabilised finite element method for the classical Kolmogorov equation. The latter serves as a basic model problem for large classes of kinetic-type equations and, crucially, is characterised by degenerate diffusion. The stabilisation is constructed so that the resulting method admits a \emph{numerical hypocoercivity} property, analogous to the corresponding property of the PDE problem. More specifically, the stabilisation is constructed so that a spectral gap is attained in the resulting ``stronger-than-energy'' stabilisation norm, despite the degenerate nature of the diffusion in the Kolmogorov equation; the method therefore exhibits provably robust behaviour as the ``time'' variable tends to infinity. We consider both a spatially discrete version of the stabilised finite element method and a fully discrete version, with the time discretisation realised by discontinuous Galerkin timestepping. Both stability and a priori error bounds are proven in all cases. Numerical experiments verify the theoretical findings.
This work proposes a novel variational approximation of partial differential equations on moving geometries determined by explicit boundary representations. The key benefit of the proposed formulation is the ability to handle large displacements of explicitly represented domain boundaries without body-fitted meshes or remeshing techniques. For the space discretization, we use a background mesh and an unfitted method that relies on integration on cut cells only. We perform this intersection by using clipping algorithms. To deal with the mesh movement, we pull back the equations to a reference configuration (the spatial mesh at the initial time slab times the time interval) that is constant in time. This way, the geometrical intersection algorithm is only required in 3D, another key property of the proposed scheme. At the end of the time slab, we compute the deformed mesh, intersect the deformed boundary with the background mesh, and consider an exact transfer operator between meshes to compute jump terms in the time discontinuous Galerkin integration. The transfer is also computed using geometrical intersection algorithms. We demonstrate the applicability of the method to fluid problems around rotating (2D and 3D) geometries described by oriented boundary meshes. We also provide a set of numerical experiments that show the optimal convergence of the method.
We design an algorithm for computing the $L$-series associated to an Anderson $t$-motive, exhibiting quasilinear complexity with respect to the target precision. Based on experiments, we conjecture that the order of vanishing at $T=1$ of the $v$-adic $L$-series of a given Anderson $t$-motive with good reduction does not depend on the finite place $v$.
It is well known that for singular inconsistent range-symmetric linear systems, the generalized minimal residual (GMRES) method determines a least squares solution without breakdown. The least squares solution attained may or may not be the pseudoinverse solution. We show that a lift strategy can be used to obtain the pseudoinverse solution. In addition, we propose a new iterative method named RSMAR (minimum $\mathbf A$-residual) for range-symmetric linear systems $\mathbf A\mathbf x=\mathbf b$. At step $k$ RSMAR minimizes $\|\mathbf A\mathbf r_k\|$ in the $k$th Krylov subspace generated with $\{\mathbf A, \mathbf r_0\}$ rather than $\|\mathbf r_k\|$, where $\mathbf r_k$ is the $k$th residual vector and $\|\cdot\|$ denotes the Euclidean vector norm. We show that RSMAR and GMRES terminate with the same least squares solution when applied to range-symmetric linear systems. We provide two implementations for RSMAR. Our numerical experiments show that RSMAR is the most suitable method among GMRES-type methods for singular inconsistent range-symmetric linear systems.
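The minimization that distinguishes this approach from GMRES, $\min \|\mathbf A\mathbf r_k\|$ over the Krylov subspace generated with $\{\mathbf A, \mathbf r_0\}$, can be sketched in a brute-force dense form: build an orthonormal Krylov basis, then solve a small least-squares problem. This is an illustrative sketch under our own naming (`rsmar_dense` is a hypothetical helper), not one of the paper's two efficient implementations.

```python
import numpy as np

def rsmar_dense(A, b, k, x0=None):
    """Dense sketch of the minimum-A-residual idea: at step k, minimize
    ||A r|| over x in x0 + K_k(A, r0), where r = b - A x.  Builds the
    Krylov basis explicitly (Arnoldi with modified Gram-Schmidt) and solves
    one small least-squares problem; illustrative, not efficient."""
    n = len(b)
    x0 = np.zeros(n) if x0 is None else x0
    r0 = b - A @ x0
    V = np.zeros((n, k))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(1, k):
        w = A @ V[:, j - 1]
        for i in range(j):                    # orthogonalize against basis
            w -= (V[:, i] @ w) * V[:, i]
        nw = np.linalg.norm(w)
        if nw < 1e-14:                        # Krylov space exhausted
            V = V[:, :j]
            break
        V[:, j] = w / nw
    # x = x0 + V y with y minimizing || A r0 - A A V y ||.
    M = A @ (A @ V)
    y, *_ = np.linalg.lstsq(M, A @ r0, rcond=None)
    return x0 + V @ y

# Singular, inconsistent, symmetric (hence range-symmetric) system.
A = np.array([[2., 1., 0.], [1., 2., 0.], [0., 0., 0.]])
b = np.array([1., 1., 1.])   # third component lies outside range(A)
x = rsmar_dense(A, b, 3)
```

For this symmetric example, $\|\mathbf A\mathbf r\| = 0$ at the minimizer, so the computed $\mathbf x$ is a least squares solution of the inconsistent system.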
A crucial challenge for solving problems in conflict research is in leveraging the semi-supervised nature of the data that arise. Observed response data such as counts of battle deaths over time indicate latent processes of interest such as intensity and duration of conflicts, but defining and labeling instances of these unobserved processes is an inherently nuanced and imprecise task. The availability of such labels, however, would make it possible to study the effect of intervention-related predictors - such as ceasefires - directly on conflict dynamics (e.g., latent intensity) rather than through an intermediate proxy like observed counts of battle deaths. Motivated by this problem and the new availability of the ETH-PRIO Civil Conflict Ceasefires data set, we propose a Bayesian autoregressive (AR) hidden Markov model (HMM) framework as a sufficiently flexible machine learning approach for semi-supervised regime labeling with uncertainty quantification. We motivate our approach by illustrating the way it can be used to study the role that ceasefires play in shaping conflict dynamics. This ceasefires data set is the first systematic and globally comprehensive data on ceasefires, and our work is the first to analyze this new data and to explore the effect of ceasefires on conflict dynamics in a comprehensive and cross-country manner.
The enhancement of 3D object detection is pivotal for precise environmental perception and improved task execution capabilities in autonomous driving. LiDAR point clouds, offering accurate depth information, serve as a crucial source of information for this purpose. Our study focuses on key challenges in 3D object detection. To tackle the challenge of expanding the receptive field of a 3D convolutional kernel, we introduce the Dynamic Feature Fusion Module (DFFM). This module achieves adaptive expansion of the 3D convolutional kernel's receptive field, balancing the expansion with acceptable computational loads. This innovation reduces operations, expands the receptive field, and allows the model to dynamically adjust to different object requirements. Simultaneously, we identify redundant information in 3D features. We employ the Feature Selection Module (FSM) to quantitatively evaluate and eliminate unimportant features, achieving the separation of output box fitting and feature extraction. This innovation enables the detector to focus on critical features, resulting in model compression, reduced computational burden, and minimized candidate frame interference. Extensive experiments confirm that both DFFM and FSM not only enhance current benchmarks, particularly in small target detection, but also accelerate network performance. Importantly, these modules exhibit effective complementarity.
Recently, a family of unconventional integrators for ODEs with polynomial vector fields was proposed, based on the polarization of vector fields. The simplest instance is the by now famous Kahan discretization for quadratic vector fields. All these integrators seem to possess remarkable conservation properties. In particular, it has been proved that, when the underlying ODE is Hamiltonian, its polarization discretization possesses an integral of motion and an invariant volume form. In this note, we propose a new algebraic approach to the derivation of integrals of motion for polarization discretizations.
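As a concrete instance of polarization, the Kahan discretization replaces each product $x y$ in a quadratic vector field by the polarized average $(x_n y_{n+1} + x_{n+1} y_n)/2$ and each linear term by its average, yielding a linearly implicit scheme. The following minimal sketch applies it to the Lotka-Volterra system $\dot x = x(1-y)$, $\dot y = y(x-1)$, whose flow conserves $H = x + y - \ln x - \ln y$; the example and step size are our own illustrative choices.

```python
import numpy as np

def kahan_lv_step(x, y, h):
    """One Kahan step for the quadratic Lotka-Volterra system
        x' = x(1 - y),   y' = y(x - 1).
    Products x*y are polarized as (x_n*y_{n+1} + x_{n+1}*y_n)/2 and linear
    terms averaged; each step solves a 2x2 linear system in (X, Y):
        X - x = h*[(x + X)/2 - (x*Y + X*y)/2]
        Y - y = h*[(x*Y + X*y)/2 - (y + Y)/2]"""
    M = np.array([
        [1.0 - h/2 + h*y/2,  h*x/2],
        [-h*y/2,             1.0 + h/2 - h*x/2],
    ])
    rhs = np.array([x * (1 + h/2), y * (1 - h/2)])
    return np.linalg.solve(M, rhs)

# Integrate and monitor the conserved quantity H = x + y - ln x - ln y.
h, steps = 0.05, 400
z = np.array([1.5, 1.0])
H0 = z[0] + z[1] - np.log(z[0]) - np.log(z[1])
for _ in range(steps):
    z = kahan_lv_step(z[0], z[1], h)
H1 = z[0] + z[1] - np.log(z[0]) - np.log(z[1])
```

The near-constancy of $H$ along the discrete orbit illustrates the conservation behavior that the note studies algebraically.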
This paper introduces a new series of methods which combine modal decomposition algorithms, such as singular value decomposition and high-order singular value decomposition, with deep learning architectures to repair, enhance, and increase the quality and precision of numerical and experimental data. A combination of two- and three-dimensional, numerical and experimental datasets is used to demonstrate the reconstruction capacity of the presented methods, showing that they can be used to reconstruct any type of dataset and deliver outstanding results on highly complex, noisy data. The combined benefits of these techniques result in a series of data-driven methods capable of repairing and/or enhancing the resolution of incomplete or under-resolved datasets by identifying the underlying physics that defines the data, while filtering out any existing noise. These methods and the Python codes are included in the first release of ModelFLOWs-app.
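The modal-decomposition step underlying these methods can be illustrated with a minimal truncated-SVD reconstruction: keeping only the leading modes of a snapshot matrix filters noise that lies outside the low-rank physics. This sketch omits the deep learning stages, and `svd_reconstruct` is a hypothetical helper name, not part of ModelFLOWs-app.

```python
import numpy as np

def svd_reconstruct(X, rank):
    """Reconstruct a noisy snapshot matrix by truncating its SVD to the
    leading `rank` modes.  The retained modes capture the dominant physics;
    noise spread across all modes is largely filtered out."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

# Rank-2 data corrupted by noise; rank-2 truncation recovers it closely.
rng = np.random.default_rng(0)
clean = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 50))
noisy = clean + 0.01 * rng.standard_normal((100, 50))
denoised = svd_reconstruct(noisy, rank=2)
```

The same idea extends to high-order SVD for multi-dimensional arrays, with truncation applied mode by mode.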