Combining ideas from Stone duality and Reynolds parametricity, we formulate in a clean and principled way a notion of profinite lambda-term which, as we show, generalizes at every type the traditional notion of profinite word from automata theory. We start by defining the Stone space of profinite lambda-terms as a projective limit of finite sets of ordinary lambda-terms, considered modulo a notion of equivalence based on the finite standard model. One main contribution of the paper is to establish that, somewhat surprisingly, the resulting notion of profinite lambda-term arising from Stone duality lives in perfect harmony with the principles of Reynolds parametricity. In addition, we show that the notion of profinite lambda-term is compositional, by constructing a cartesian closed category of profinite lambda-terms, and we use Statman's finite completeness theorem to establish that the embedding of lambda-terms modulo beta-eta-conversion into profinite lambda-terms is faithful. Finally, we prove a parametricity theorem for Church encodings of word and ranked-tree languages, which states that every parametric family of elements in the finite standard model is the interpretation of a profinite lambda-term. This result shows that our notion of profinite lambda-term conservatively extends the existing notion of profinite word and provides a natural framework for defining and studying profinite trees.
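To make the link with automata theory concrete, recall the standard Church encoding of words that the parametricity theorem above refers to: a word over a finite alphabet is a closed simply typed lambda-term over a base type $o$ (standard encoding; the notation below is ours, not the paper's). For the word $abb$ over $\Sigma = \{a, b\}$:
\[
\underline{abb} \;=\; \lambda f_a.\,\lambda f_b.\,\lambda x.\; f_a\,(f_b\,(f_b\,x)) \;:\; (o \to o) \to (o \to o) \to o \to o.
\]
Profinite lambda-terms are then limits of such terms taken modulo equivalence in the finite standard model, which is how they generalize profinite words.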
Many solid mechanics problems on complex geometries are conventionally solved using discrete boundary methods. However, such an approach can be cumbersome for problems with evolving domain boundaries, due to the need for boundary tracking and constant remeshing. In this work, we employ a robust smooth boundary method (SBM) that represents complex geometry implicitly, in a larger and simpler computational domain, as the support of a smooth indicator function. We present the resulting equations for mechanical equilibrium, in which inhomogeneous boundary conditions are replaced by source terms. The resulting mechanical equilibrium problem is semidefinite, making it difficult to solve. In this work, we present a computational strategy for efficiently solving near-singular SBM elasticity problems. We use the block-structured adaptive mesh refinement (BSAMR) method to resolve evolving boundaries appropriately, coupled with a geometric multigrid solver for efficient solution of the mechanical equilibrium equations. We discuss some of the practical numerical strategies for implementing this method, notably the importance of grid- versus node-centered fields. We demonstrate the solver's accuracy and performance on three representative examples: a) plastic strain evolution around a void, b) crack nucleation and propagation in brittle materials, and c) structural topology optimization. In each case, we show that the solver converges well even with large near-singular regions, and that any remaining convergence issues arise from other complexities, such as stress concentrations. We present this framework as a versatile tool for studying a wide variety of solid mechanics problems involving variable geometry.
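As a rough sketch of how inhomogeneous boundary conditions turn into source terms in a smoothed-boundary formulation (schematic notation of ours, not necessarily the paper's exact equations): with a smooth indicator $\chi$ ($\chi \approx 1$ inside the body, $\chi \approx 0$ outside) and outward normal $n = -\nabla\chi / |\nabla\chi|$, multiplying the equilibrium equation $\partial_j \sigma_{ij} = 0$ by $\chi$ and applying the product rule together with the traction condition $\sigma_{ij} n_j = t_i$ gives
\[
\partial_j(\chi\,\sigma_{ij}) \;=\; \chi\,\partial_j \sigma_{ij} + \sigma_{ij}\,\partial_j \chi \;=\; -\,|\nabla\chi|\, t_i,
\]
so the boundary traction becomes a source supported on the diffuse interface. The equation degenerates where $\chi$ vanishes, which is precisely the near-singular, semidefinite behavior the solver must handle.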
We study the computational complexity of multi-stage robust optimization problems. Such problems are formulated with alternating min/max quantifiers and therefore naturally fall into a higher stage of the polynomial hierarchy. Despite this, almost no hardness results with respect to the polynomial hierarchy are known. In this work, we examine the hardness of robust two-stage adjustable and robust recoverable optimization with budgeted uncertainty sets. Our main technical contribution is the introduction of a technique tailored to prove $\Sigma^p_3$-hardness of such problems. We highlight a difference between continuous and discrete budgeted uncertainty: in the discrete case, a wide range of problems indeed becomes complete for the third stage of the polynomial hierarchy; in particular, this applies to the TSP, independent set, and vertex cover problems. In the continuous case, however, this does not happen, and the problems remain in the first stage of the hierarchy. Finally, if we allow the uncertainty to affect not only the objective but also multiple constraints, then this distinction disappears and even in the continuous case we encounter hardness for the third stage of the hierarchy. This shows that even robust problems which are already NP-complete can still exhibit a significant computational difference between column-wise and row-wise uncertainty.
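Schematically, a two-stage adjustable robust problem has the quantifier structure (generic form, our notation)
\[
\min_{x \in X} \;\; \max_{c \in U} \;\; \min_{y \in Y(x)} \; f(x, y, c),
\]
so its decision version "is the optimum at most $k$?" reads $\exists x\,\forall c\,\exists y :\, f(x, y, c) \le k$, an $\exists\forall\exists$ pattern that places such problems in $\Sigma^p_3$ whenever the sets and the objective are polynomially bounded and checkable; the paper's contribution is the matching hardness under discrete budgeted uncertainty.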
We derive simplified sphere-packing and Gilbert--Varshamov bounds for codes in the sum-rank metric, which can be computed more efficiently than previous ones. They give rise to asymptotic bounds covering an asymptotic setting that has not yet been considered in the literature: families of sum-rank-metric codes whose block size grows with the code length. We also provide two genericity results: we show that random linear codes almost achieve the sum-rank-metric Gilbert--Varshamov bound with high probability. Furthermore, we derive bounds on the probability that a random linear code attains the sum-rank-metric Singleton bound, showing that for large enough extension fields, almost all linear codes achieve it.
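For reference, the sum-rank metric interpolates between the Hamming and rank metrics (standard definition; the block notation is ours): a vector $x \in \mathbb{F}_{q^m}^{\ell\eta}$ is split into $\ell$ blocks $x^{(1)}, \dots, x^{(\ell)}$ of length $\eta$, each block is expanded into an $m \times \eta$ matrix over $\mathbb{F}_q$, and
\[
\mathrm{wt}_{\Sigma R}(x) \;=\; \sum_{i=1}^{\ell} \operatorname{rk}_{\mathbb{F}_q}\!\big(x^{(i)}\big), \qquad d_{\Sigma R}(x, y) \;=\; \mathrm{wt}_{\Sigma R}(x - y).
\]
Taking $\eta = 1$ recovers the Hamming metric and $\ell = 1$ the rank metric; the asymptotic regime studied here lets the block size $\eta$ grow with the code length.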
A linear arrangement is a mapping $\pi$ from the $n$ vertices of a graph $G$ to $n$ distinct consecutive integers. Linear arrangements can be represented by drawing the vertices along a horizontal line and drawing the edges as semicircles above that line. In this setting, the length of an edge is defined as the absolute value of the difference between the positions of its two vertices in the arrangement, and the cost of an arrangement as the sum of all edge lengths. Here we study two variants of the Maximum Linear Arrangement problem (MaxLA), which consists of finding an arrangement that maximizes the cost. In the planar variant for free trees, vertices have to be arranged in such a way that there are no edge crossings. In the projective variant for rooted trees, arrangements have to be planar and the root of the tree cannot be covered by any edge. In this paper we present linear-time, linear-space algorithms that solve planar and projective MaxLA for trees. We also prove several properties of maximum projective and planar arrangements, and show that caterpillar trees maximize planar MaxLA over all trees of a fixed size, thereby generalizing a previous extremal result on trees.
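The cost function is easy to state in code; here is a minimal Python sketch (hypothetical helper, not from the paper) evaluating two arrangements of a small path:

    def arrangement_cost(edges, pi):
        # Cost = sum over edges of the edge length |pi[u] - pi[v]|,
        # i.e. the span of the semicircle drawn for that edge.
        return sum(abs(pi[u] - pi[v]) for u, v in edges)

    # Example: the 4-vertex path 0-1-2-3.
    edges = [(0, 1), (1, 2), (2, 3)]
    identity = {0: 1, 1: 2, 2: 3, 3: 4}   # cost 1 + 1 + 1 = 3
    shuffled = {0: 2, 1: 4, 2: 1, 3: 3}   # cost 2 + 3 + 2 = 7
    print(arrangement_cost(edges, identity), arrangement_cost(edges, shuffled))

MaxLA asks for the maximum of this cost over all $n!$ arrangements; the planar and projective variants restrict the search to crossing-free (and, for rooted trees, root-uncovered) arrangements.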
In 1973, Lemmens and Seidel posed the problem of determining the maximum number of equiangular lines in $\mathbb{R}^r$ with angle $\arccos(\alpha)$ and gave a partial answer in the regime $r \leq 1/\alpha^2 - 2$. At the other extreme, where $r$ is at least exponential in $1/\alpha$, recent breakthroughs have led to an almost complete resolution of this problem. In this paper, we introduce a new method for obtaining upper bounds which unifies and improves upon previous approaches, thereby bridging the gap between the aforementioned regimes, as well as significantly extending or improving all previously known bounds when $r \geq 1/\alpha^2 - 2$. Our method is based on orthogonal projection of matrices with respect to the Frobenius inner product, and it also yields the first extension of the Alon--Boppana theorem to dense graphs, with equality for strongly regular graphs corresponding to $\binom{r+1}{2}$ equiangular lines in $\mathbb{R}^r$. We also discuss applications of our method in the complex setting.
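The classical Gram-matrix reformulation underlying such bounds (standard, not specific to this paper): $N$ equiangular lines in $\mathbb{R}^r$ with common angle $\arccos(\alpha)$ correspond to unit vectors $v_1, \dots, v_N$ with $\langle v_i, v_j \rangle = \pm\alpha$ for $i \neq j$, i.e.
\[
G \;=\; I + \alpha S \;\succeq\; 0, \qquad \operatorname{rank}(G) \le r,
\]
where $S$ is symmetric with zero diagonal and $\pm 1$ off-diagonal entries. Bounding $N$ thus becomes a spectral question about such matrices, which is where projection with respect to the Frobenius inner product $\langle A, B \rangle = \operatorname{tr}(A^{\top} B)$ enters.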
Given a sound first-order p-time theory $T$ capable of formalizing the syntax of first-order logic, we define a p-time function $g_T$ that stretches all inputs by one bit, and we use its properties to show that $T$ must be incomplete. We leave it as an open problem whether for some $T$ the range of $g_T$ intersects all infinite NP sets (i.e., whether it is a proof complexity generator hard for all proof systems). A propositional version of the construction shows that at least one of the following three statements is true:
- there is no p-optimal propositional proof system (this is equivalent to the non-existence of a time-optimal propositional proof search algorithm),
- $E \not\subseteq P/poly$,
- there exists a function $h$ that stretches all inputs by one bit, is computable in sub-exponential time, and whose range $Rng(h)$ intersects all infinite NP sets.
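Spelled out in symbols (restating the property from the last item, which is also the open problem for $g_T$): the function in question is a stretching map whose range meets every infinite NP set,
\[
h : \{0,1\}^* \to \{0,1\}^*, \qquad |h(x)| = |x| + 1 \ \text{for all } x, \qquad \mathrm{Rng}(h) \cap A \neq \emptyset \ \text{for every infinite } A \in \mathrm{NP}.
\]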
We consider truncated multivariate normal distributions for which every component is one-sided truncated. We show that this family of distributions is an exponential family. We identify $\mathcal{D}$, the corresponding natural parameter space, and deduce that the family of distributions is not regular. We prove that the gradient of the cumulant-generating function of the family of distributions remains bounded near certain boundary points of $\mathcal{D}$, and therefore the family is also not steep. We also consider maximum likelihood estimation for $\boldsymbol{\mu}$, the location vector parameter, and $\boldsymbol{\Sigma}$, the positive definite (symmetric) matrix dispersion parameter, of a truncated non-singular multivariate normal distribution. We prove that each solution to the score equations for $(\boldsymbol{\mu},\boldsymbol{\Sigma})$ satisfies the method-of-moments equations, and we obtain a necessary condition for the existence of solutions to the score equations.
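For orientation, if each component is truncated from below at $a$ (say), the density has the familiar exponential-family shape (standard computation; the paper's parametrization may differ):
\[
f(x; \boldsymbol{\mu}, \boldsymbol{\Sigma}) \;\propto\; \exp\!\Big( \boldsymbol{\mu}^{\top} \boldsymbol{\Sigma}^{-1} x \;-\; \tfrac{1}{2}\, x^{\top} \boldsymbol{\Sigma}^{-1} x \Big)\, \mathbf{1}\{x \ge a\},
\]
with natural parameters $(\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu},\, -\tfrac{1}{2}\boldsymbol{\Sigma}^{-1})$ and sufficient statistics $(x,\, x x^{\top})$. The paper's subtleties, non-regularity and non-steepness, concern the boundary behavior of $\mathcal{D}$ and of the cumulant-generating function, not this algebraic form.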
In this paper, we show the $L^p$-resolvent estimate for the finite element approximation of the Stokes operator for $p \in \left( \frac{2N}{N+2}, \frac{2N}{N-2} \right)$, where $N \ge 2$ is the dimension of the domain. We expect that this estimate can be applied to error estimates for finite element approximations of the non-stationary Navier--Stokes equations, since analogous approaches have been successful in the numerical analysis of nonlinear parabolic equations. To derive the resolvent estimate, we introduce the solution of the Stokes resolvent problem with a discrete external force. We then obtain a local energy error estimate via a novel localization technique and establish global $L^p$-type error estimates. The restriction on $p$ is caused by the treatment of the lower-order terms appearing in the local energy error estimate. Our result may be a breakthrough in the $L^p$-theory of finite element methods for the non-stationary Navier--Stokes equations.
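The estimate in question has the typical shape of a resolvent bound (schematic form only; the precise sector, norms, and constants are as in the paper):
\[
\big\| (\lambda I + A_h)^{-1} f \big\|_{L^p} \;\le\; \frac{C}{|\lambda|}\, \| f \|_{L^p} \qquad \text{for } \lambda \text{ in a sector of } \mathbb{C},
\]
with $A_h$ the discrete Stokes operator and $C$ independent of the mesh size $h$; uniform bounds of this kind are what enable semigroup-based error analysis of non-stationary problems.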
Approximating convex bodies is a fundamental question in geometry and has a wide variety of applications. Consider a convex body $K$ of diameter $\Delta$ in $\mathbb{R}^d$ for fixed $d$. The objective is to minimize the number of vertices (alternatively, the number of facets) of an approximating polytope for a given Hausdorff error $\varepsilon$. It is known from classical results of Dudley (1974) and Bronshteyn and Ivanov (1976) that $\Theta((\Delta/\varepsilon)^{(d-1)/2})$ vertices (alternatively, facets) are both necessary and sufficient. While this bound is tight in the worst case, that of Euclidean balls, it is far from optimal for skinny convex bodies. A natural way to characterize a convex object's skinniness is in terms of its relationship to the Euclidean ball. Given a convex body $K$, define its \emph{volume diameter} $\Delta_d$ to be the diameter of a Euclidean ball of the same volume as $K$, and define its \emph{surface diameter} $\Delta_{d-1}$ analogously for surface area. It follows from generalizations of the isoperimetric inequality that $\Delta \geq \Delta_{d-1} \geq \Delta_d$. Arya, da Fonseca, and Mount (SoCG 2012) demonstrated that the diameter-based bound could be made surface-area sensitive, improving the above bound to $O((\Delta_{d-1}/\varepsilon)^{(d-1)/2})$. In this paper, we strengthen this result by proving the existence of an approximation with $O((\Delta_d/\varepsilon)^{(d-1)/2})$ facets.
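Unwinding the definitions (with $\omega_d$ the volume of the unit ball in $\mathbb{R}^d$; notation ours): a ball of diameter $\Delta_d$ has volume $\omega_d (\Delta_d/2)^d$ and a ball of diameter $\Delta_{d-1}$ has surface area $d\,\omega_d (\Delta_{d-1}/2)^{d-1}$, so
\[
\Delta_d = 2\left(\frac{\mathrm{vol}(K)}{\omega_d}\right)^{1/d}, \qquad \Delta_{d-1} = 2\left(\frac{\mathrm{area}(\partial K)}{d\,\omega_d}\right)^{1/(d-1)}.
\]
For a skinny body both quantities can be far smaller than the diameter $\Delta$, which is what makes the volume-sensitive bound stronger.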
This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
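Schematically, the expansion developed in the book treats the depth-to-width ratio as the small parameter (our paraphrase): for a network of depth $L$ and width $n$, typical observables of the trained ensemble take the form
\[
\mathbb{E}[\,\mathcal{O}\,] \;=\; \underbrace{\mathcal{O}^{(0)}}_{\text{infinite-width (Gaussian) limit}} \;+\; O\!\left(\frac{L}{n}\right) \;+\; O\!\left(\frac{L^2}{n^2}\right) \;+\; \cdots,
\]
so the aspect ratio $L/n$ simultaneously measures the deviation from Gaussianity and the capacity for representation learning, and the "optimal aspect ratio" is a statement about where this expansion is most informative.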