We establish robust exponential convergence for $rp$-Finite Element Methods (FEMs) applied to fourth-order singularly perturbed boundary value problems, in a \emph{balanced norm} that is stronger than the usual energy norm associated with the problem. As a corollary, we obtain robust exponential convergence in the maximum norm. $rp$ FEMs are simply $p$ FEMs with possible repositioning of the (fixed number of) nodes. This is done for a $C^1$ Galerkin FEM in 1-D and a $C^0$ mixed FEM in 2-D over domains with smooth boundary. In both cases we utilize the \emph{Spectral Boundary Layer} mesh.
We propose to approximate a (possibly discontinuous) multivariate function $f(x)$ on a compact set by the partial minimizer $\arg\min_y p(x,y)$ of an appropriate polynomial $p$ whose construction can be cast in a univariate sum-of-squares (SOS) framework, resulting in a highly structured convex semidefinite program. In a number of non-trivial cases (e.g. when $f$ is a piecewise polynomial) we prove that the approximation is exact with a low-degree polynomial $p$. Our approach has three distinguishing features: (i) It is mesh-free and does not require knowledge of the discontinuity locations. (ii) It is model-free in the sense that we only assume that the function to be approximated is available through samples (point evaluations). (iii) The size of the semidefinite program is independent of the ambient dimension and depends linearly on the number of samples. We also analyze the sample complexity of the approach, proving a generalization error bound in a probabilistic setting. This allows for a comparison with machine learning approaches.
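A toy illustration of the partial-minimizer idea (our own sketch, not the paper's SOS construction): the non-smooth function $f(x)=|x|$ is exactly the partial minimizer of the smooth polynomial $p(x,y)=(y^2-x^2)^2$ over $y\in[0,1]$, since the minimum in $y$ is attained at $y=|x|$.

```python
def partial_minimizer(p, x, ys):
    """Approximate arg min_y p(x, y) by brute force over a grid of y values.
    Illustrative sketch only; the paper computes p via a structured SDP."""
    return min(ys, key=lambda y: p(x, y))

# f(x) = |x| is recovered (up to grid resolution) from the smooth
# polynomial p(x, y) = (y^2 - x^2)^2 minimized over y in [0, 1].
p = lambda x, y: (y**2 - x**2) ** 2
ys = [i / 1000 for i in range(1001)]
approx = [partial_minimizer(p, x, ys) for x in (-0.7, -0.25, 0.0, 0.3)]
```

Here the grid search stands in for the exact polynomial minimization; the point is only that a single smooth $p(x,y)$ can encode a non-smooth $f$.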
This paper describes a purely functional library for computing level-$p$-complexity of Boolean functions, and applies it to two-level iterated majority. Boolean functions are simply functions from $n$ bits to one bit, and they can describe digital circuits, voting systems, etc. An example of a Boolean function is majority, which, for odd $n$, returns the value that has a majority among the $n$ input bits. The complexity of a Boolean function $f$ measures the cost of evaluating it: how many bits of the input must be read to be certain about the result of $f$. There are many competing complexity measures, but we focus on level-$p$-complexity -- a function of the probability $p$ that a bit is 1. The level-$p$-complexity $D_p(f)$ is the minimum expected cost when the input bits are independent and identically distributed with Bernoulli($p$) distribution. We specify the problem as choosing the minimum expected cost over all possible decision trees -- which directly translates to a clearly correct, but very inefficient, implementation. The library uses thinning and memoization for efficiency, and type classes for separation of concerns. The complexity is represented using (sets of) polynomials, and the order relation used for thinning is implemented using polynomial factorisation and root-counting. Finally we compute the complexity of two-level iterated majority and improve on an earlier result by J.~Jansson.
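A minimal brute-force sketch of the quantity being minimised (ours, in Python rather than the paper's functional library, and without the thinning and memoisation optimisations): the minimum, over all decision trees, of the expected number of queried bits for i.i.d. Bernoulli($p$) inputs.

```python
def min_expected_cost(f, n, p):
    """Minimum expected number of bit queries needed to determine f,
    over all decision trees, for i.i.d. Bernoulli(p) input bits.
    Brute-force sketch of the definition of D_p(f); exponential in n."""
    def possible_values(fixed):
        # values of f consistent with the bits fixed so far
        free = [i for i in range(n) if i not in fixed]
        vals = set()
        for m in range(2 ** len(free)):
            x = dict(fixed)
            for j, i in enumerate(free):
                x[i] = (m >> j) & 1
            vals.add(f(tuple(x[i] for i in range(n))))
        return vals

    def cost(fixed):
        if len(possible_values(fixed)) == 1:
            return 0.0  # result already certain: stop querying
        free = [i for i in range(n) if i not in fixed]
        # best decision tree = query the bit minimising expected cost
        return min(1 + (1 - p) * cost({**fixed, i: 0})
                     + p * cost({**fixed, i: 1}) for i in free)

    return cost({})

maj3 = lambda x: int(sum(x) >= 2)  # 3-bit majority
# Known closed form for 3-bit majority: D_p(maj3) = 2 + 2 p (1 - p)
```

Evaluating at a fixed $p$ only; the library instead manipulates the cost polynomials in $p$ symbolically.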
Two Latin squares of order $n$ are $r$-orthogonal if, when superimposed, they produce exactly $r$ distinct ordered pairs. The spectrum of all values of $r$ for Latin squares of order $n$ is known. A Latin square $A$ of order $n$ is $r$-self-orthogonal if $A$ and its transpose are $r$-orthogonal. The spectrum of all values of $r$ is known for all orders $n\ne 14$. We develop randomized algorithms for computing pairs of $r$-orthogonal Latin squares of order $n$ and for computing $r$-self-orthogonal Latin squares of order $n$.
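To make the definition concrete, a short check (our illustration) counting the number $r$ of distinct ordered pairs obtained by superimposing two Latin squares. For the cyclic square $A[i][j]=(i+j) \bmod 3$, the pair with $B[i][j]=(i+2j) \bmod 3$ is fully orthogonal ($r=9$), while $A$ superimposed with its transpose (which equals $A$, since $A$ is symmetric) gives only the diagonal pairs ($r=3$).

```python
def r_value(A, B):
    """Number of distinct ordered pairs when two Latin squares of the
    same order n (given as lists of rows) are superimposed."""
    n = len(A)
    return len({(A[i][j], B[i][j]) for i in range(n) for j in range(n)})

n = 3
A = [[(i + j) % n for j in range(n)] for i in range(n)]
B = [[(i + 2 * j) % n for j in range(n)] for i in range(n)]
At = [list(col) for col in zip(*A)]  # transpose of A
```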
We study Whitney-type estimates for approximation of convex functions in the uniform norm on various convex multivariate domains, paying particular attention to the dependence of the involved constants on the dimension and the geometry of the domain.
In this paper, we examine a finite element approximation of the steady $p(\cdot)$-Navier-Stokes equations, where the power-law index $p(\cdot)$ depends on the spatial variable, and prove orders of convergence under natural fractional regularity assumptions on the velocity vector field and the kinematic pressure. Compared to previous results, we treat the convective term and employ a more practicable discretization of the power-law index $p(\cdot)$. Numerical experiments confirm the quasi-optimality of the a priori error estimates (for the velocity) with respect to the fractional regularity assumptions on the velocity vector field and the kinematic pressure.
We explore a notion of bent sequence attached to the data consisting of an Hadamard matrix of order $n$ defined over the complex $q^{th}$ roots of unity, an eigenvalue of that matrix, and a Galois automorphism from the cyclotomic field of order $q$. In particular, we construct self-dual bent sequences for various $q\le 60$ and lengths $n\le 21$. Computational construction methods comprise the resolution of polynomial systems by Groebner bases and eigenspace computations. Infinite families can be constructed from regular Hadamard matrices, Bush-type Hadamard matrices, and generalized Boolean bent functions. As an application, we estimate the covering radius of the code attached to that matrix over $\Z_q$. We derive a lower bound on that quantity for the Chinese Euclidean metric when bent sequences exist. We give the Euclidean distance spectrum, and bound from above the covering radius of an attached spherical code, depending on its strength as a spherical design.
Quadratic NURBS-based discretizations of the Galerkin method suffer from membrane locking when applied to Kirchhoff-Love shell formulations. Membrane locking causes not only smaller displacements than expected, but also large-amplitude spurious oscillations of the membrane forces. Continuous-assumed-strain (CAS) elements have been recently introduced to remove membrane locking in quadratic NURBS-based discretizations of linear plane curved Kirchhoff rods (Casquero et al., CMAME, 2022). In this work, we generalize CAS elements to vanquish membrane locking in quadratic NURBS-based discretizations of linear Kirchhoff-Love shells. CAS elements bilinearly interpolate the membrane strains at the four corners of each element. Thus, the assumed strains have $C^0$ continuity across element boundaries. To the best of the authors' knowledge, CAS elements are the first assumed-strain treatment to effectively overcome membrane locking in quadratic NURBS-based discretizations of Kirchhoff-Love shells while satisfying the following important characteristics for computational efficiency: (1) No additional degrees of freedom are added, (2) No additional systems of algebraic equations need to be solved, (3) No matrix multiplications or matrix inversions are needed to obtain the stiffness matrix, and (4) The nonzero pattern of the stiffness matrix is preserved. The benchmark problems show that CAS elements, using either $2 \times 2$ or $3 \times 3$ Gauss-Legendre quadrature points per element, are an effective locking treatment since this element type results in more accurate displacements for coarse meshes and excises the spurious oscillations of the membrane forces. The benchmark problems also show that CAS elements outperform state-of-the-art element types based on Lagrange polynomials equipped with either assumed-strain or reduced-integration locking treatments.
Based on a new Taylor-like formula, we derive an improved interpolation error estimate in $W^{1,p}$. We compare it with the classical error estimates based on the standard Taylor formula, as well as with the corresponding interpolation error estimate derived from the mean value theorem. We then assess the gain in accuracy this formula provides, which leads to a significant reduction in finite element computation costs.
We construct a graph with $n$ vertices where the smoothed runtime of the 3-FLIP algorithm for the 3-Opt Local Max-Cut problem can be as large as $2^{\Omega(\sqrt{n})}$. This provides the first example where a local search algorithm for the Max-Cut problem can fail to be efficient in the framework of smoothed analysis. We also give a new construction of graphs where the runtime of the FLIP algorithm for the Local Max-Cut problem is $2^{\Omega(n)}$ for any pivot rule. This graph is much smaller and has a simpler structure than previous constructions.
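For reference, a generic sketch (ours) of the FLIP local search whose runtime the abstract analyzes: starting from an arbitrary bipartition, repeatedly flip any vertex whose move to the other side increases the cut weight, until no such vertex exists. The pivot rule here is "first improving vertex"; the constructions in the paper exhibit instances where such searches take exponentially many flips.

```python
def flip_local_max_cut(adj, side):
    """FLIP local search for Max-Cut. adj[v] is a list of (neighbor,
    weight) pairs; side[v] in {0, 1} is v's current side of the cut.
    Returns the locally optimal bipartition and the number of flips."""
    n = len(adj)
    steps = 0
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # gain from flipping v = (same-side weight) - (cross weight)
            gain = sum(w if side[u] == side[v] else -w for u, w in adj[v])
            if gain > 0:
                side[v] ^= 1
                steps += 1
                improved = True
    return side, steps

# Toy run: a unit-weight triangle; the optimum cut value is 2.
adj = [[(1, 1), (2, 1)], [(0, 1), (2, 1)], [(0, 1), (1, 1)]]
side, steps = flip_local_max_cut(adj, [0, 0, 0])
cut = sum(w for v in range(3) for u, w in adj[v] if side[u] != side[v]) // 2
```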
The aim of ordinal classification is to predict the ordered labels of the output from a set of observed inputs. Interval-valued data refers to data in the form of intervals. For the first time, interval-valued data and interval-valued functional data are considered as inputs in an ordinal classification problem. Six ordinal classifiers for interval data and interval-valued functional data are proposed. Three of them are parametric: one is based on ordinal binary decompositions and the other two on ordered logistic regression. The other three methods are based on distances between interval data and kernels on interval data: one uses the weighted $k$-nearest-neighbor technique for ordinal classification, another combines kernel principal component analysis with an ordinal classifier, and the sixth, which is the method that performs best, uses a kernel-induced ordinal random forest. They are compared with na\"ive approaches in an extensive experimental study with synthetic and original real data sets about human global development and weather. The results show that taking the ordering and the interval-valued information into account improves accuracy. The source code and data sets are available at //github.com/aleixalcacer/OCFIVD.
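One of the distance-based classifiers above combines a distance between intervals with weighted $k$-nearest neighbors. A minimal unweighted sketch (our own simplification, using a generic midpoint-radius interval distance and toy data rather than the distances and data sets of the paper):

```python
import math

def interval_dist(I, J):
    """Illustrative distance between intervals I=(a,b) and J=(c,d):
    Euclidean distance between their (midpoint, radius) representations.
    Not necessarily the distance used in the paper."""
    mi, ri = (I[0] + I[1]) / 2, (I[1] - I[0]) / 2
    mj, rj = (J[0] + J[1]) / 2, (J[1] - J[0]) / 2
    return math.hypot(mi - mj, ri - rj)

def knn_ordinal_predict(train, query, k=3):
    """Predict an ordered label from the k nearest interval-valued
    neighbours, using the median label, which respects the ordering
    of the classes (unlike the mode)."""
    neigh = sorted(train, key=lambda t: interval_dist(t[0], query))[:k]
    labels = sorted(lab for _, lab in neigh)
    return labels[len(labels) // 2]  # median label

# Hypothetical toy data: intervals labelled with ordered classes 0 < 1 < 2.
train = [((0.0, 1.0), 0), ((0.1, 1.1), 0), ((2.0, 3.0), 1),
         ((5.0, 6.0), 2), ((5.2, 6.1), 2)]
```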