In this paper, by introducing a reconstruction operator based on Legendre moments, we construct a reduced discontinuous Galerkin (RDG) space that achieves the same approximation accuracy as the standard discontinuous Galerkin (DG) space while using fewer degrees of freedom (DoFs). The ``narrow-stencil-based'' design of the reconstruction operator preserves the local data structure of high-order DG methods. With the RDG space, we apply the local discontinuous Galerkin (LDG) method with implicit-explicit time marching to the nonlinear unsteady convection-diffusion-reaction equation, where the reduction in the number of DoFs allows us to achieve higher efficiency. On the theoretical side, we establish the well-posedness and approximation properties of the reconstruction operator and derive an $L^2$ error estimate for the semi-discrete LDG scheme. Several representative numerical tests demonstrate the accuracy of the proposed method and its performance in capturing layers.
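The abstract does not spell out the reconstruction operator, so the following is only a minimal one-dimensional sketch of a moment-based reconstruction in the same spirit: it assumes a uniform mesh, a three-cell narrow stencil, and a least-squares match of the stored degree-$\le 1$ Legendre moments to obtain a degree-$2$ polynomial on the central cell. The paper's actual operator may differ.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

h = 1.0                                    # uniform cell width (illustrative)
centers = np.array([-1.0, 0.0, 1.0]) * h   # narrow stencil: cells j-1, j, j+1

def legendre_on_cell(k, c):
    """Degree-k Legendre polynomial mapped to the cell centered at c."""
    return lambda x: Legendre.basis(k)(2.0 * (x - c) / h)

xg, wg = leggauss(6)                       # quadrature on the reference cell

def cell_moment(f, k, c):
    """Moment of f against the degree-k Legendre polynomial on cell c."""
    x = c + 0.5 * h * xg
    return 0.5 * h * np.sum(wg * f(x) * Legendre.basis(k)(2.0 * (x - c) / h))

# 6x3 moment matrix: rows = stored (cell, degree <= 1) moments on the
# stencil, columns = P2 Legendre basis of the central cell.
A = np.array([[cell_moment(legendre_on_cell(m, centers[1]), k, c)
               for m in range(3)]
              for c in centers for k in range(2)])

def reconstruct(dg_moments):
    """Least-squares P2 coefficients from the six stored P1 moments."""
    coef, *_ = np.linalg.lstsq(A, dg_moments, rcond=None)
    return coef

# Example: reconstruct from the exact P1 moments of u(x) = sin(x).
u = np.sin
moments = np.array([cell_moment(u, k, c) for c in centers for k in range(2)])
print(reconstruct(moments))
```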
The aim of this work is to present a model reduction technique in the framework of optimal control problems for partial differential equations. We combine two approaches used for reducing the computational cost of numerical models: domain-decomposition (DD) methods and reduced-order modelling (ROM). In particular, we consider an optimisation-based domain-decomposition algorithm for the parameter-dependent stationary incompressible Navier-Stokes equations. First, the problem is formulated on subdomains coupled at the interface and solved through an optimal control problem, which leads to a complete separation of the subdomain problems in the DD method. Then, a reduced model for the resulting optimal control problem is built; the procedure is based on the Proper Orthogonal Decomposition technique and a further Galerkin projection. The presented methodology is tested on two fluid dynamics benchmarks: the stationary backward-facing step and the lid-driven cavity flow. The numerical tests show a significant reduction of the computational costs, in terms of both the problem dimensions and the number of optimisation iterations in the domain-decomposition algorithm.
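As a hedged illustration of the reduction step only, here is a minimal Proper Orthogonal Decomposition and Galerkin projection sketch; the snapshot data, operator, and dimensions are placeholders, not the paper's Navier-Stokes setup.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((2000, 50))   # snapshot matrix: 2000 DoFs x 50 snapshots

# POD via thin SVD; keep enough modes to capture 99.99% of the energy.
U, sigma, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
V = U[:, :r]                          # reduced basis

# Galerkin projection of a (hypothetical) full-order operator A and rhs b.
A = rng.standard_normal((2000, 2000))
b = rng.standard_normal(2000)
A_r, b_r = V.T @ A @ V, V.T @ b       # r x r reduced system
u_r = np.linalg.solve(A_r, b_r)
u_approx = V @ u_r                    # lift back to the full-order space
```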
We consider the problem of minimizing a continuous function given quantum access to a stochastic gradient oracle. We provide two new methods for the special case of minimizing a Lipschitz convex function. Each method obtains a dimension-versus-accuracy trade-off that is provably unachievable classically, and we prove that one of the methods is asymptotically optimal in low-dimensional settings. Additionally, we provide quantum algorithms for computing a critical point of a smooth non-convex function at rates not known to be achievable classically. To obtain these results we build upon the quantum multivariate mean estimation result of Cornelissen et al. (2022) and provide a general quantum variance-reduction technique of independent interest.
In this paper, we derive a variant of the Taylor theorem with a new, minimized remainder. For a given function $f$ defined on the interval $[a,b]$, the formula is derived by introducing a linear combination of $f'$ computed at $n+1$ equally spaced points in $[a,b]$, together with $f''(a)$ and $f''(b)$. We then consider two classical applications of this Taylor-like expansion: the interpolation error and the numerical quadrature formula. We show that this approach improves both the Lagrange $P_2$-interpolation error estimate and the error bound of Simpson's rule in numerical integration.
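The abstract gives only the ingredients of the expansion; a hedged guess at its general shape, with weights $\lambda_k$, $\mu_0$, $\mu_1$ left unspecified (they would be chosen to minimize the remainder), is
\[
f(b) \approx f(a) + (b-a)\sum_{k=0}^{n} \lambda_k\, f'\!\Big(a + k\,\frac{b-a}{n}\Big) + (b-a)^2 \big(\mu_0\, f''(a) + \mu_1\, f''(b)\big),
\]
where the classical first-order Taylor expansion is recovered as the special case $n=0$, $\lambda_0 = 1$, $\mu_0 = \mu_1 = 0$.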
We propose a new discrete choice model, called the generalized stochastic preference (GSP) model, that incorporates non-rationality into the stochastic preference (SP) choice model, also known as the rank-based choice model. Our model can explain several choice phenomena that cannot be represented by any SP model, such as the compromise and attraction effects, while still subsuming the SP model class. The GSP model is defined as a distribution over consumer types, where each type extends the choice behavior of the rational types in the SP model. We build on existing methods for estimating the SP model and propose an iterative estimation algorithm for the GSP model that finds new types by solving an integer linear program in each iteration. We further show that our proposed notion of non-rationality can be incorporated into other choice models, such as the random utility maximization (RUM) model class and any of its subclasses. As a concrete example, we introduce the non-rational extension of the classical MNL model, which we term the generalized MNL (GMNL) model, and present an efficient expectation-maximization (EM) algorithm for estimating the GMNL model. Numerical evaluation on real choice data shows that the GMNL and GSP models can outperform their rational counterparts in out-of-sample prediction accuracy.
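To make the EM idea concrete, here is a toy sketch for a latent-class choice model under strong simplifying assumptions (every product is always offered and each class has alternative-specific constants only, so each class reduces to a categorical distribution with closed-form M-steps); it is a stand-in for, not a reproduction of, the paper's GMNL estimation.

```python
import numpy as np

rng = np.random.default_rng(1)
J, G, N = 5, 2, 3000                      # products, classes, observations
true_p = rng.dirichlet(np.ones(J), size=G)
z = rng.integers(0, G, size=N)            # latent class of each consumer
choices = np.array([rng.choice(J, p=true_p[g]) for g in z])

pi = np.full(G, 1.0 / G)                  # class weights
p = rng.dirichlet(np.ones(J), size=G)     # per-class choice probabilities

for _ in range(200):
    # E-step: posterior responsibility of each class for each choice.
    lik = pi[None, :] * p[:, choices].T          # N x G
    resp = lik / lik.sum(axis=1, keepdims=True)
    # M-step: closed-form updates (responsibility-weighted frequencies).
    pi = resp.mean(axis=0)
    for g in range(G):
        w = resp[:, g]
        p[g] = np.bincount(choices, weights=w, minlength=J) / w.sum()

print(np.round(pi, 3))
```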
Algorithms for solving the linear classification problem have a long history, dating back at least to 1936 with linear discriminant analysis. For linearly separable data, many algorithms can obtain the exact solution to the corresponding 0-1 loss classification problem efficiently, but for data which are not linearly separable, it has been shown that this problem, in full generality, is NP-hard. Alternative approaches all involve approximations of some kind, including the use of surrogates for the 0-1 loss (for example, the hinge or logistic loss) or approximate combinatorial search, none of which can be guaranteed to solve the problem exactly. Finding efficient algorithms to obtain an exact, i.e. globally optimal, solution of the 0-1 loss linear classification problem in fixed dimension remains an open problem. In the research reported here, we detail the rigorous construction of a new algorithm, incremental cell enumeration (ICE), that can solve the 0-1 loss classification problem exactly in polynomial time. We prove its correctness using concepts from the theory of hyperplane arrangements and oriented matroids. We demonstrate the effectiveness of the algorithm on synthetic and real-world datasets, showing optimal accuracy both in- and out-of-sample in practical computational time. We also demonstrate empirically how the use of an approximate upper bound yields run-time improvements whilst retaining exactness. To our knowledge, this is the first rigorously proven, practical polynomial-time algorithm for this long-standing problem.
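The following is a brute-force two-dimensional illustration of why fixed-dimension exactness is possible at all, not the authors' ICE algorithm: in general position, some optimal separating line can be obtained by perturbing a line through two data points, so enumerating all $O(n^2)$ such lines (with both orientations and a small offset to either side) and scoring each is exact.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
X = rng.standard_normal((60, 2))
y = np.where(X @ np.array([1.0, -0.5]) + 0.2
             + 0.5 * rng.standard_normal(60) > 0, 1, -1)

def zero_one_loss(w, b):
    """Number of misclassified points for the separator sign(w.x + b)."""
    return np.sum(np.sign(X @ w + b) != y)

best = (zero_one_loss(np.ones(2), 0.0), np.ones(2), 0.0)
eps = 1e-9
for i, j in combinations(range(len(X)), 2):
    d = X[j] - X[i]
    w = np.array([-d[1], d[0]])          # normal to the line through x_i, x_j
    n = np.linalg.norm(w)
    if n < 1e-12:
        continue
    w /= n
    for s in (1.0, -1.0):                # both orientations
        for off in (eps, -eps):          # perturb to either side of the line
            b = -s * w @ X[i] + off
            loss = zero_one_loss(s * w, b)
            if loss < best[0]:
                best = (loss, s * w, b)

print("minimum 0-1 loss:", best[0])
```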
We provide a framework for the numerical approximation of distributed optimal control problems, based on least-squares finite element methods. Our proposed method simultaneously solves the state and adjoint equations and is $\inf$--$\sup$ stable for any choice of conforming discretization spaces. A reliable and efficient a posteriori error estimator is derived for problems where box constraints are imposed on the control. It can be localized and therefore used to steer an adaptive algorithm. For unconstrained optimal control problems, i.e., when the set of admissible controls is a Hilbert space, we obtain a coercive least-squares method and, in particular, quasi-optimality for any choice of discrete approximation space. For constrained problems we derive and analyze a variational inequality in which the PDE part is tackled by least-squares finite element methods. We show that the abstract framework applies to a wide range of problems, including scalar second-order PDEs, the Stokes problem, and parabolic problems on space-time domains. Numerical examples for selected problems are presented.
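For orientation, the classical model problem behind such frameworks (a hedged illustration; the paper's precise functional may differ) is
\[
\min_{u,z}\; \tfrac12\|u - u_d\|_{L^2(\Omega)}^2 + \tfrac{\alpha}{2}\|z\|_{L^2(\Omega)}^2 \quad\text{subject to}\quad -\Delta u = z \text{ in } \Omega,\quad u = 0 \text{ on } \partial\Omega,
\]
whose first-order optimality system, after eliminating the control via $z = -\alpha^{-1}p$, couples the state $u$ and the adjoint $p$:
\[
-\Delta u + \alpha^{-1} p = 0, \qquad -\Delta p - u = -u_d, \qquad u = p = 0 \text{ on } \partial\Omega.
\]
A least-squares method then minimizes the sum of the squared residuals of both equations simultaneously, e.g. $\|\Delta u - \alpha^{-1}p\|_{L^2(\Omega)}^2 + \|\Delta p + u - u_d\|_{L^2(\Omega)}^2$, over a conforming discrete space.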
We consider two-phase fluid deformable surfaces as model systems for biomembranes. Such surfaces are modeled by incompressible surface Navier-Stokes-Cahn-Hilliard-like equations with bending forces. We derive this model using the Lagrange-D'Alembert principle, considering various dissipation mechanisms. The highly nonlinear model is solved numerically to explore the tight interplay between surface evolution, surface phase composition, surface curvature, and surface hydrodynamics. It is demonstrated that hydrodynamics can enhance bulging and furrow formation, both of which can further develop into pinch-offs. The numerical approach builds on a Taylor-Hood element for the surface Navier-Stokes part, a semi-implicit approach for the Cahn-Hilliard part, higher-order surface parametrizations, appropriate approximations of the geometric quantities, and mesh redistribution. We demonstrate convergence properties that are known to be optimal for simplified sub-problems.
High-dimensional Partial Differential Equations (PDEs) are a popular mathematical modelling tool, with applications ranging from finance to computational chemistry. However, standard numerical techniques for solving these PDEs are typically affected by the curse of dimensionality. In this work, we tackle this challenge, focusing on stationary diffusion equations defined over a high-dimensional domain with periodic boundary conditions. Inspired by recent progress in sparse function approximation in high dimensions, we propose a new method called compressive Fourier collocation. Combining ideas from compressive sensing and spectral collocation, our method replaces structured collocation grids with Monte Carlo sampling and employs sparse recovery techniques, such as orthogonal matching pursuit and $\ell^1$ minimization, to approximate the Fourier coefficients of the PDE solution. We conduct a rigorous theoretical analysis showing that the approximation error of the proposed method is comparable with that of the best $s$-term approximation (with respect to the Fourier basis) to the solution. Using the recently introduced framework of random sampling in bounded Riesz systems, our analysis shows that compressive Fourier collocation mitigates the curse of dimensionality with respect to the number of collocation points, under sufficient conditions on the regularity of the diffusion coefficient. We also present numerical experiments that illustrate the accuracy and stability of the method for the approximation of sparse and compressible solutions.
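A minimal one-dimensional sketch of the idea (an illustration with a constant diffusion coefficient, not the paper's high-dimensional setting): sample Monte Carlo collocation points, form the collocation matrix of the operator applied to Fourier basis functions, and recover the sparse coefficients with a small orthogonal matching pursuit.

```python
import numpy as np

# Solve -u'' = f on the periodic domain [0,1), with u sparse in the
# Fourier basis phi_k(x) = exp(2*pi*i*k*x), so -phi_k'' = (2*pi*k)^2 phi_k.
rng = np.random.default_rng(3)
K = np.array([k for k in range(-40, 41) if k != 0])  # k=0 fixed by zero mean
s = 4
c_true = np.zeros(len(K), dtype=complex)
idx_true = rng.choice(len(K), size=s, replace=False)
c_true[idx_true] = rng.standard_normal(s) + 1j * rng.standard_normal(s)

m = 60                                    # Monte Carlo collocation points
x = rng.random(m)
A = (2 * np.pi * K) ** 2 * np.exp(2j * np.pi * np.outer(x, K))
b = A @ c_true                            # right-hand side f at the samples

def omp(A, b, s):
    """Orthogonal matching pursuit: greedy s-sparse least-squares recovery."""
    r, supp, c_s = b.copy(), [], None
    for _ in range(s):
        j = int(np.argmax(np.abs(A.conj().T @ r)))
        if j not in supp:
            supp.append(j)
        c_s, *_ = np.linalg.lstsq(A[:, supp], b, rcond=None)
        r = b - A[:, supp] @ c_s
    c = np.zeros(A.shape[1], dtype=complex)
    c[supp] = c_s
    return c

# Normalize columns so the greedy correlation step is not biased by |k|.
norms = np.linalg.norm(A, axis=0)
c_hat = omp(A / norms, b, s) / norms
print("recovery error:", np.linalg.norm(c_hat - c_true))
```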
In this paper we propose a geometric integrator to numerically approximate the flow of Lie systems. The highlight of the paper is a novel procedure that integrates the system on a Lie group intrinsically associated with the Lie system, and then generates the discrete solution of the Lie system through a given action of the Lie group on the manifold where the system evolves. One major benefit of integrating on the Lie group is that all automorphic Lie systems can be solved at once, since they can be written as first-order systems of linear homogeneous ODEs in normal form; this brings substantial advantages, as solving a linear ODE incurs a lower numerical cost. Specifically, we use two families of numerical schemes on the Lie group, both designed to preserve its geometric structure: the first based on the Magnus expansion, the second on Runge-Kutta-Munthe-Kaas (RKMK) methods. Moreover, since the aforementioned action relates the Lie group to the manifold where the Lie system evolves, the resulting integrator preserves any geometric structure of the latter. We compare both methods for Lie systems with geometric invariants, in particular a class of Lie systems on curved spaces. A key outcome is that the proposed method preserves the geometric invariants far more faithfully than nongeometric numerical methods.
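As a hedged illustration of the Lie-group side only, here is a second-order Magnus scheme (the exponential midpoint rule) for a linear equation $R'(t) = A(t)R(t)$ on $SO(3)$, with an arbitrarily chosen angular velocity; the paper's Lie systems and group actions are more general.

```python
import numpy as np
from scipy.linalg import expm

def hat(w):
    """Map R^3 to a skew-symmetric matrix (the so(3) 'hat' operator)."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def A(t):
    # Illustrative time-dependent element of the Lie algebra so(3).
    return hat(np.array([np.sin(t), np.cos(2 * t), 0.5]))

h, T = 0.01, 10.0
R = np.eye(3)
for n in range(int(T / h)):
    t_mid = (n + 0.5) * h
    R = expm(h * A(t_mid)) @ R      # the matrix exponential stays in SO(3)

# The geometric invariant R^T R = I is preserved to round-off:
print(np.linalg.norm(R.T @ R - np.eye(3)))
```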
In this paper, we develop a unified regression approach to model unconditional quantiles, M-quantiles, and expectiles of multivariate dependent variables, exploiting the multidimensional Huber function. To assess the impact of changes in the covariates across the entire unconditional distribution of the responses, we extend the work of Firpo et al. (2009) by running a mean regression of the recentered influence function on the explanatory variables. We discuss the estimation procedure and establish the asymptotic properties of the derived estimators. A data-driven procedure is also presented to select the tuning constant of the Huber function. The validity of the proposed methodology is explored with simulation studies and through an application using the Survey of Household Income and Wealth 2016 conducted by the Bank of Italy.
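A univariate sketch of the underlying RIF-regression step of Firpo et al. (2009), which the paper extends to multivariate responses; the data below are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
n = 5000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = 1.0 + 0.8 * X[:, 1] + rng.standard_normal(n)

tau = 0.9
q = np.quantile(y, tau)
f_q = gaussian_kde(y)(q)[0]                 # density estimate at the quantile
rif = q + (tau - (y <= q)) / f_q            # recentered influence function

# A mean (OLS) regression of the RIF on the covariates gives the
# unconditional quantile partial effects.
beta, *_ = np.linalg.lstsq(X, rif, rcond=None)
print(beta)
```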