The purpose of this article is to study the convergence of a low-order finite element approximation of a natural convection problem. We prove that the discretization based on P1 polynomials for every variable (velocity, pressure and temperature) is well-posed when used with a penalty term in the divergence equation, which compensates for the loss of the inf-sup condition. Under mild assumptions on the pressure regularity, we recover convergence for the Navier-Stokes-Boussinesq system, provided the penalty term is chosen in accordance with the mesh size. We state conditions under which the optimal order of convergence is attained. We illustrate the theoretical convergence results with extensive numerical examples, and we also assess the computational cost saved by this approach.
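For orientation, a schematic form of such a penalized divergence equation (the paper's exact formulation may differ) reads, with a penalty parameter $\varepsilon(h)$ tied to the mesh size $h$,
\[
(\nabla \cdot u_h,\, q_h)_{L^2(\Omega)} + \varepsilon(h)\,(p_h,\, q_h)_{L^2(\Omega)} = 0 \qquad \text{for all discrete pressures } q_h,
\]
where choosing $\varepsilon(h)$ in accordance with the mesh size is what yields convergence and, under the stated conditions, the optimal order.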
We analyze the finite element discretization of distributed elliptic optimal control problems with variable energy regularization, where the usual $L^2(\Omega)$ norm regularization term with a constant regularization parameter $\varrho$ is replaced by a suitable representation of the energy norm in $H^{-1}(\Omega)$ involving a variable, mesh-dependent regularization parameter $\varrho(x)$. It turns out that the error between the computed finite element state $\widetilde{u}_{\varrho h}$ and the desired state $\bar{u}$ (target) is optimal in the $L^2(\Omega)$ norm provided that $\varrho(x)$ behaves like the local mesh size squared. This is especially important when adaptive meshes are used in order to approximate discontinuous target functions. The adaptive scheme can be driven by the computable and localizable error norm $\| \widetilde{u}_{\varrho h} - \bar{u}\|_{L^2(\Omega)}$ between the finite element state $\widetilde{u}_{\varrho h}$ and the target $\bar{u}$. The numerical results not only illustrate our theoretical findings, but also show that the iterative solvers for the discretized reduced optimality system are very efficient and robust.
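Concretely, the scaling referred to above amounts to choosing, on each mesh element $T$ with local size $h_T$ (an elementwise constant choice is assumed here for illustration),
\[
\varrho(x) = \varrho_T \simeq h_T^2 \qquad \text{for } x \in T,
\]
which is the regime in which the error $\| \widetilde{u}_{\varrho h} - \bar{u}\|_{L^2(\Omega)}$ between the computed state and the target is optimal.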
The Strong Exponential Time Hypothesis (SETH) asserts that for every $\varepsilon>0$ there exists $k$ such that $k$-SAT requires time $(2-\varepsilon)^n$. The field of fine-grained complexity has leveraged SETH to prove quite tight conditional lower bounds for dozens of problems in various domains and complexity classes, including Edit Distance, Graph Diameter, Hitting Set, Independent Set, and Orthogonal Vectors. Yet, it has been repeatedly asked in the literature whether SETH-hardness results can be proven for other fundamental problems such as Hamiltonian Path, Independent Set, Chromatic Number, MAX-$k$-SAT, and Set Cover. In this paper, we show that, for any $\lambda>1$, fine-grained reductions from SETH implying even $\lambda^n$-hardness of these problems would imply new circuit lower bounds: super-linear lower bounds for Boolean series-parallel circuits or polynomial lower bounds for arithmetic circuits (each of which is a four-decade-old open question). We also extend this barrier result to the class of parameterized problems. Namely, for every $\lambda>1$ we conditionally rule out fine-grained reductions implying SETH-based lower bounds of $\lambda^k$ for a number of problems parameterized by the solution size $k$. Our main technical tool is a new concept called polynomial formulations. In particular, we show that many problems can be represented by relatively succinct low-degree polynomials, and that any problem with such a representation cannot be proven SETH-hard (without proving new circuit lower bounds).
The workflow satisfiability problem (WSP) is a well-studied problem in access control that seeks an allocation of authorised users to every step of a workflow, subject to the constraints of the workflow specification. It has been observed that in real-world instances of WSP the number $k$ of steps is typically small compared to the number of users; therefore $k$ is taken as the parameter in parametrised complexity research on WSP. While WSP in general was shown to be W[1]-hard, WSP restricted to the special case of user-independent (UI) constraints is fixed-parameter tractable (FPT). However, the restriction to UI constraints may be impractical. To handle non-UI constraints efficiently, we introduce the notion of the branching factor of a constraint. As long as the branching factors of the constraints are relatively small and the number of non-UI constraints is reasonable, WSP can be solved in FPT time. Extending the results of Karapetyan et al. (2019), we demonstrate that general-purpose solvers are capable of achieving FPT-like performance on WSP with arbitrary constraints when used with appropriate formulations. This enables one to tackle most practical WSP instances. While important on its own, we hope that this result will also motivate researchers to look for FPT-aware formulations of other FPT problems.
A two-dimensional eigenvalue problem (2DEVP) of a Hermitian matrix pair $(A, C)$ is introduced in this paper. The 2DEVP can be viewed as a linear algebraic formulation of the well-known eigenvalue optimization problem for the parameter matrix $H(\mu) = A - \mu C$. We present fundamental properties of the 2DEVP, such as the existence of 2D-eigenvalues, a necessary and sufficient condition for the number of 2D-eigenvalues to be finite, and variational characterizations. Using two eigenvalue optimization problems, namely the minimax of two Rayleigh quotients and the computation of the distance to instability, we show their connections with the 2DEVP and the new insights into these problems that the properties of the 2DEVP provide.
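To fix ideas, the eigenvalue optimization problem attached to $H(\mu) = A - \mu C$ can be illustrated in its simplest guise, minimizing the largest eigenvalue of $H(\mu)$ over the scalar $\mu$; the following Python sketch (random real symmetric matrices, for illustration only, not the paper's 2DEVP machinery) does exactly that.

```python
# Illustrative sketch: minimize the largest eigenvalue of H(mu) = A - mu*C
# over the scalar parameter mu, for a randomly generated symmetric pair (A, C).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # Hermitian (real symmetric) A
C = rng.standard_normal((n, n)); C = (C + C.T) / 2   # Hermitian (real symmetric) C

def lambda_max(mu):
    """Largest eigenvalue of H(mu) = A - mu*C."""
    return np.linalg.eigvalsh(A - mu * C)[-1]

res = minimize_scalar(lambda_max, bounds=(-10.0, 10.0), method="bounded")
print("optimal mu:", res.x, " minimal largest eigenvalue:", res.fun)
```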
The asymptotic study of the partition function $p(n)$ began with the work of Hardy and Ramanujan. Later, Rademacher obtained a convergent series for $p(n)$, and an error bound was given by Lehmer. Despite this, a full asymptotic expansion for $p(n)$ with an explicit error bound is not known. Recently, O'Sullivan studied the asymptotic expansion of $p^{k}(n)$, the number of partitions into $k$th powers, a study initiated by Wright, and consequently obtained an asymptotic expansion for $p(n)$ together with a concise description of the coefficients involved in the expansion, but without any estimate of the error term. Here we give a detailed and comprehensive analysis of the error term obtained by truncating the asymptotic expansion for $p(n)$ at any positive integer $n$. This gives rise to an infinite family of inequalities for $p(n)$, which finally answers a question posed by Chen. Our estimate of the error term relies predominantly on applications of algorithmic methods from symbolic summation.
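For context, the classical Hardy--Ramanujan leading-order asymptotic that the expansions discussed here refine is
\[
p(n) \sim \frac{1}{4n\sqrt{3}}\, e^{\pi \sqrt{2n/3}} \qquad (n \to \infty).
\]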
This paper introduces a novel approach for the construction of bulk--surface splitting schemes for semi-linear parabolic partial differential equations with dynamic boundary conditions. The proposed construction is based on a reformulation of the system as a partial differential--algebraic equation and the inclusion of certain delay terms for the decoupling. To obtain a fully discrete scheme, the splitting approach is combined with finite elements in space and a BDF discretization in time. Within this paper, we focus on the second-order case, resulting in a $3$-step scheme. We prove second-order convergence under the assumption of a weak CFL-type condition and confirm the theoretical findings by numerical experiments. Moreover, we illustrate the potential for higher-order splitting schemes numerically.
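For reference, the classical second-order BDF formula employed for the time discretization, written generically for an evolution equation $\dot u = f(u)$ with time step $\tau$ (the paper's splitting scheme additionally involves the delay terms used for the decoupling), reads
\[
\frac{3 u^{n+1} - 4 u^{n} + u^{n-1}}{2\tau} = f(u^{n+1}).
\]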
For the first time, a nonlinear interface problem on an unbounded domain with nonmonotone set-valued transmission conditions is analyzed. The investigated problem involves a nonlinear monotone partial differential equation in the interior domain and the Laplacian in the exterior domain. Such a scalar interface problem models nonmonotone frictional contact of infinite elastic media. The variational formulation of the interface problem leads to a hemivariational inequality that lives on the unbounded domain and therefore cannot be treated numerically in a direct way. By boundary integral methods the problem is transformed, and a novel hemivariational inequality (HVI) is obtained that lives only on the interior domain and the coupling boundary. Thus, for discretization, the coupling of finite elements and boundary elements is the method of choice. In addition, smoothing techniques from nondifferentiable optimization are adapted, and the nonsmooth part of the HVI is regularized. In this way we reduce the original variational problem to a finite-dimensional problem that can be solved by standard optimization tools. We establish not only convergence results for the total approximation procedure, but also an asymptotic error estimate for the regularized HVI.
The matching principles behind optimal transport (OT) play an increasingly important role in machine learning, a trend which can be observed when OT is used to disambiguate datasets in applications (e.g., single-cell genomics) or to improve more complex methods (e.g., balanced attention in transformers or self-supervised learning). To scale to more challenging problems, there is a growing consensus that OT requires solvers that can operate on millions, not thousands, of points. The low-rank optimal transport (LOT) approach advocated in \cite{scetbon2021lowrank} holds several promises in that regard, and was shown to complement more established entropic regularization approaches, being able to insert itself in more complex pipelines, such as quadratic OT. LOT restricts the search for low-cost couplings to those that have a low nonnegative rank, yielding linear-time algorithms in cases of interest. However, these promises can only be fulfilled if the LOT approach is seen as a legitimate contender to entropic regularization when compared on properties of interest, where the scorecard typically includes theoretical properties (statistical complexity and relation to other methods) as well as practical aspects (debiasing, hyperparameter tuning, initialization). We target each of these areas in this paper in order to cement the impact of low-rank approaches in computational OT.
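In the notation of \cite{scetbon2021lowrank}, and up to notational details, the low-nonnegative-rank restriction amounts to factoring the coupling as
\[
P = Q \,\operatorname{diag}(1/g)\, R^{\top}, \qquad Q \in \mathbb{R}_{+}^{n \times r}, \quad R \in \mathbb{R}_{+}^{m \times r}, \quad g \in \mathbb{R}_{+}^{r},
\]
so that $P$ has nonnegative rank at most $r$ and the solver manipulates only $O((n+m)r)$ parameters, which is the source of the linear-time algorithms mentioned above.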
We investigate $L_2$ boosting in the context of kernel regression. Kernel smoothers, in general, lack appealing traits such as symmetry and positive definiteness, which are critical not only for understanding theoretical aspects but also for achieving good practical performance. We consider a projection-based smoother (Huang and Chen, 2008) that is symmetric, positive definite, and shrinking. Theoretical results based on the orthonormal decomposition of the smoother reveal additional insights into the boosting algorithm. In our asymptotic framework, we may replace the full-rank smoother with a low-rank approximation. We demonstrate that the rank $d(n)$ of the low-rank smoother is bounded above by $O(h^{-1})$, where $h$ is the bandwidth. Our numerical findings show that, in terms of prediction accuracy, low-rank smoothers may outperform full-rank smoothers. Furthermore, we show that the boosting estimator with a low-rank smoother achieves the optimal convergence rate. Finally, to improve the performance of the boosting algorithm in the presence of outliers, we propose a novel robustified boosting algorithm that can be used with any smoother discussed in the study. We investigate the numerical performance of the proposed approaches using simulations and a real-world case.
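For a linear smoother with matrix $S$, the $L_2$ boosting iteration takes the standard residual-fitting form (written here generically, not specifically for the projection-based smoother of Huang and Chen, 2008),
\[
\hat{F}_0 = S y, \qquad \hat{F}_m = \hat{F}_{m-1} + S\,(y - \hat{F}_{m-1}),
\]
so that after $m$ iterations the boosting operator equals $I - (I - S)^{m+1}$; the symmetry, positive definiteness and shrinkage of $S$ are what make the orthonormal decomposition of this operator, and hence the analysis above, tractable.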
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
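The following Python sketch illustrates the re-weighting scheme described above; the rescaling of the weights so that they sum to the number of classes is an assumed normalization, one common choice rather than something prescribed by the text.

```python
# Minimal sketch of effective-number-based class re-weighting.
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Weights proportional to 1 / E_n, with E_n = (1 - beta**n) / (1 - beta)."""
    n = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    weights = 1.0 / effective_num
    # Assumed normalization: rescale so the weights sum to the number of classes.
    return weights / weights.sum() * len(n)

# Example: a long-tailed class distribution.
print(class_balanced_weights([5000, 500, 50, 5], beta=0.999))
```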