The convexity of a set can be generalized to the two weaker notions of reach and $r$-convexity; both describe the regularity of a set's boundary. For any compact subset of $\mathbb{R}^d$, we provide methods for computing upper bounds on these quantities from point cloud data. The bounds converge to the respective quantities as the point cloud becomes dense in the set, and the rate of convergence for the bound on the reach is given under a weak regularity condition. We also introduce the $\beta$-reach, a generalization of the reach that excludes small-scale features of size less than a parameter $\beta\in[0,\infty)$. Numerical studies suggest how the $\beta$-reach can be used in high dimensions to infer the reach and other geometric properties of smooth submanifolds.
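To make the reach concrete, here is a minimal Python sketch of a pointwise upper bound on the reach of a smooth curve, using the classical bound $\mathrm{reach} \leq \|q-p\|^2 / (2\,\mathrm{dist}(q, T_p))$ minimized over sample pairs, where $T_p$ is the tangent line at $p$. It assumes the tangent lines are known exactly (in practice they would be estimated from the cloud), and it is not the bound developed in the paper; for a dense sample of a circle of radius $R$ it recovers the true reach $R$.

```python
import math

def reach_upper_bound(points, tangents):
    """Pointwise upper bound on the reach of a planar curve from a sample.

    points  : list of 2-d points on the curve
    tangents: unit tangent vector at each point (assumed known here)

    Uses reach <= |q - p|^2 / (2 * dist(q, T_p)), minimized over pairs,
    where T_p is the affine tangent line through p.
    """
    best = math.inf
    for i, p in enumerate(points):
        tx, ty = tangents[i]
        for j, q in enumerate(points):
            if i == j:
                continue
            dx, dy = q[0] - p[0], q[1] - p[1]
            # distance from q to the tangent line = |(q - p) . normal|
            normal_part = abs(dx * (-ty) + dy * tx)
            if normal_part > 1e-12:
                best = min(best, (dx * dx + dy * dy) / (2 * normal_part))
    return best

# Dense sample of a circle of radius R: its reach is exactly R.
R, n = 2.0, 200
pts = [(R * math.cos(2 * math.pi * k / n), R * math.sin(2 * math.pi * k / n))
       for k in range(n)]
tans = [(-math.sin(2 * math.pi * k / n), math.cos(2 * math.pi * k / n))
        for k in range(n)]
print(reach_upper_bound(pts, tans))  # approximately 2.0
```

For the circle every pair attains the same ratio $R$, so the bound is exact here; for general curves the sketch only upper-bounds the reach in the limit of dense sampling.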
A prominent problem in scheduling theory is the weighted flow time problem on one machine. We are given a machine and a set of jobs, each of them characterized by a processing time, a release time, and a weight. The goal is to find a (possibly preemptive) schedule for the jobs in order to minimize the sum of the weighted flow times, where the flow time of a job is the time between its release time and its completion time. Finding a polynomial time $O(1)$-approximation algorithm for the problem had been a longstanding important open question, which was resolved only in a recent line of work. These algorithms are quite complicated and involve, for example, a reduction to (geometric) covering problems, dynamic programs to solve those problems, and LP-rounding methods to reduce the running time to a polynomial in the input size. In this paper, we present a much simpler $(6+\epsilon)$-approximation algorithm for the problem that does not use any of these reductions, but works on the input jobs directly. It even generalizes directly to an $O(1)$-approximation algorithm for minimizing the $p$-norm of the jobs' flow times, for any $0 < p < \infty$ (the original problem setting corresponds to $p=1$). Prior to our work, for $p>1$ only a pseudopolynomial time $O(1)$-approximation algorithm was known for this variant, and no algorithm for $p<1$. For the same objective function, we present a very simple QPTAS for the setting of constantly many unrelated machines for $0 < p < \infty$ (and assuming quasi-polynomially bounded input data). It works both with and without the possibility of migrating a job to a different machine. This is the first QPTAS for the problem if migrations are allowed, and it is arguably simpler than the known QPTAS for minimizing the weighted sum of the jobs' flow times without migration.
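To fix the objective precisely, the sketch below simulates a preemptive single-machine schedule in unit time steps and evaluates both the weighted sum of flow times ($p=1$) and the general weighted $p$-norm objective. The highest-density-first priority rule used here is only an illustrative heuristic, not the $(6+\epsilon)$-approximation algorithm of the paper.

```python
def simulate_hdf(jobs):
    """Unit-time preemptive simulation of a highest-density-first rule
    (an illustrative heuristic, not the paper's algorithm).

    jobs: list of (release_time, processing_time, weight), integers.
    Returns the flow time F_j = completion_j - release_j of every job.
    """
    remaining = [p for (_, p, _) in jobs]
    completion = [None] * len(jobs)
    t = 0
    while any(r > 0 for r in remaining):
        avail = [j for j in range(len(jobs))
                 if jobs[j][0] <= t and remaining[j] > 0]
        if not avail:
            t += 1
            continue
        # preempt in favor of the released job with the largest
        # weight-to-size density w_j / p_j
        j = max(avail, key=lambda j: jobs[j][2] / jobs[j][1])
        remaining[j] -= 1
        t += 1
        if remaining[j] == 0:
            completion[j] = t
    return [completion[j] - jobs[j][0] for j in range(len(jobs))]

def weighted_p_norm(flows, weights, p):
    """( sum_j w_j * F_j^p )^(1/p); p = 1 gives the weighted sum of flow times."""
    return sum(w * f ** p for w, f in zip(weights, flows)) ** (1 / p)

jobs = [(0, 3, 1), (1, 1, 5), (2, 2, 2)]   # (release, size, weight)
flows = simulate_hdf(jobs)
print(flows)                                               # [6, 1, 2]
print(weighted_p_norm(flows, [w for (_, _, w) in jobs], 1))  # 15.0
```

In the example, the size-3 job released at time 0 is preempted twice by denser jobs, so it completes at time 6 while the small high-weight job finishes with flow time 1.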
Given a property (graph class) $\Pi$, a graph $G$, and an integer $k$, the \emph{$\Pi$-completion} problem consists in deciding whether we can turn $G$ into a graph with the property $\Pi$ by adding at most $k$ edges to $G$. The $\Pi$-completion problem is known to be NP-hard for general graphs when $\Pi$ is the property of being a proper interval graph (PIG). In this work, we study the PIG-completion problem within different subclasses of chordal graphs. We show that the problem remains NP-complete even when restricted to split graphs. We then turn our attention to positive results and present polynomial-time algorithms to solve the PIG-completion problem when the input is restricted to caterpillar and threshold graphs. We also present an efficient algorithm for the minimum co-bipartite completion of quasi-threshold graphs, which provides a lower bound for the PIG-completion problem within this graph class.
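For tiny instances, PIG-completion can be decided by brute force using the Looges–Olariu characterization: a graph is a proper interval graph iff its vertices admit an ordering with the "umbrella" property ($v_iv_k \in E$ with $i<j<k$ implies $v_iv_j, v_jv_k \in E$). The sketch below is exponential in the graph size and purely illustrative of the problem statement, not of the paper's polynomial-time algorithms.

```python
from itertools import combinations, permutations

def _umbrella(order, E):
    """Check the umbrella property for one candidate vertex ordering."""
    n = len(order)
    for i in range(n):
        for k in range(i + 2, n):
            if frozenset((order[i], order[k])) in E:
                for j in range(i + 1, k):
                    if (frozenset((order[i], order[j])) not in E
                            or frozenset((order[j], order[k])) not in E):
                        return False
    return True

def is_proper_interval(n, edges):
    """Brute force over all orderings; only sensible for tiny n."""
    E = set(map(frozenset, edges))
    return any(_umbrella(order, E) for order in permutations(range(n)))

def pig_completion(n, edges, k):
    """Can at most k added edges turn the graph into a proper interval graph?"""
    E = set(map(frozenset, edges))
    non_edges = [frozenset((u, v)) for u, v in combinations(range(n), 2)
                 if frozenset((u, v)) not in E]
    for extra in range(k + 1):
        for added in combinations(non_edges, extra):
            if is_proper_interval(n, list(E) + list(added)):
                return True
    return False

# The claw K_{1,3} (center 0) is not a PIG, but a single added edge
# between two leaves makes it one.
claw = [(0, 1), (0, 2), (0, 3)]
print(pig_completion(4, claw, 0), pig_completion(4, claw, 1))  # False True
```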
In this article, we discuss the $\mathcal{P}$ versus $\mathcal{NP}$ problem. We do not follow the line of research pursued by many researchers, namely to find an instance Q belonging to the class of $\mathcal{NP}$-complete problems and settle its status: if Q is proved to belong to $\mathcal{P}$, then $\mathcal{P}$ and $\mathcal{NP}$ are the same, and if Q is proved not to belong to $\mathcal{P}$, then $\mathcal{P}$ and $\mathcal{NP}$ are separated. Our strategy in this article is instead the following: select an $\mathcal{EXP}$-complete instance S and reduce it in polynomial time to an instance of $\mathcal{NP}$; then S belongs to $\mathcal{NP}$, so $\mathcal{EXP} = \mathcal{NP}$, and from the well-known fact $\mathcal{P} \neq \mathcal{EXP}$ we derive $\mathcal{P} \neq \mathcal{NP}$.
For terminal value problems of fractional differential equations of order $\alpha \in (0,1)$ that use Caputo derivatives, shooting methods are a well-developed and well-investigated approach. Based on recently established analytic properties of such problems, we develop a new technique, called proportional secting, for selecting the required initial values, which solves such shooting problems quickly and accurately. Numerical experiments indicate that proportional secting converges very quickly and accurately to the solution, and run time measurements indicate a speedup factor of between 4 and 10 compared to the standard bisection method.
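The shooting idea can be illustrated on a toy (non-fractional) terminal value problem: find the initial value $c$ so that the solution of $y'=-y$, $y(0)=c$ hits a prescribed value at $t=1$. The "proportional" bracket update below is a regula-falsi-style stand-in for the paper's proportional secting, not its actual method; since the terminal value here depends linearly on $c$, the proportional update converges dramatically faster than bisection, which is the kind of behavior the paper exploits.

```python
def terminal_value(c, steps=1000):
    """Integrate the toy IVP y' = -y, y(0) = c with explicit Euler on [0, 1]
    and return y(1).  (A stand-in for a fractional-ODE solver; the paper
    treats Caputo derivatives of order alpha in (0, 1).)"""
    h = 1.0 / steps
    y = c
    for _ in range(steps):
        y += h * (-y)
    return y

def shoot(target, lo, hi, proportional, tol=1e-10, max_iter=200):
    """Find c with terminal_value(c) = target by bracketing.
    proportional=False: plain bisection.
    proportional=True : split the bracket in proportion to the residuals
    (regula falsi), in the spirit of proportional secting."""
    f_lo = terminal_value(lo) - target
    f_hi = terminal_value(hi) - target
    assert f_lo * f_hi <= 0, "bracket must contain a root"
    for it in range(1, max_iter + 1):
        if proportional:
            mid = lo + (hi - lo) * (-f_lo) / (f_hi - f_lo)
        else:
            mid = 0.5 * (lo + hi)
        f_mid = terminal_value(mid) - target
        if abs(f_mid) < tol:
            return mid, it
        if f_lo * f_mid <= 0:
            hi, f_hi = mid, f_mid
        else:
            lo, f_lo = mid, f_mid
    return mid, max_iter

c_bis, it_bis = shoot(0.5, 0.0, 10.0, proportional=False)
c_pro, it_pro = shoot(0.5, 0.0, 10.0, proportional=True)
print(it_bis, it_pro)  # the proportional update needs far fewer iterations
```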
Partial differential equations (PDEs) with uncertain or random inputs have been considered in many studies of uncertainty quantification. In forward uncertainty quantification, one is interested in analyzing the stochastic response of the PDE subject to input uncertainty, which usually involves computing high-dimensional integrals of the PDE output over a sequence of stochastic variables. In practical computations, one typically needs to discretize the problem in several ways: approximating an infinite-dimensional input random field with a finite-dimensional random field, spatial discretization of the PDE using, e.g., finite elements, and approximating high-dimensional integrals using cubatures such as quasi-Monte Carlo methods. In this paper, we focus on the error resulting from dimension truncation of an input random field. We show how Taylor series can be used to derive theoretical dimension truncation rates for a wide class of problems and we provide a simple checklist of conditions that a parametric mathematical model needs to satisfy in order for our dimension truncation error bound to hold. Some of the novel features of our approach include that our results are applicable to non-affine parametric operator equations, dimensionally-truncated conforming finite element discretized solutions of parametric PDEs, and even compositions of PDE solutions with smooth nonlinear quantities of interest. As a specific application of our method, we derive an improved dimension truncation error bound for elliptic PDEs with lognormally parameterized diffusion coefficients. Numerical examples support our theoretical findings.
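Dimension truncation can be illustrated with a toy parametric quantity of interest (not a PDE solution): replace all but the first $s$ input variables by zero and measure the resulting error, which is governed by the tail of the coefficient sequence. Everything below, including the choice $u(y) = 1/(2 + \sum_j j^{-2} y_j)$ and the evaluation point $y_j \equiv 1/2$, is an illustrative assumption.

```python
def u(y, decay=2.0):
    """Toy parametric quantity of interest u(y) = 1 / (2 + sum_j j^{-decay} y_j),
    mimicking an affine-in-y coefficient (illustrative only)."""
    return 1.0 / (2.0 + sum(j ** -decay * yj for j, yj in enumerate(y, start=1)))

def truncation_error(s, total=4000, decay=2.0):
    """|u(y) - u_s(y)| at the fixed point y = (1/2, 1/2, ...): the error
    from keeping only the first s of `total` input variables."""
    y_full = [0.5] * total
    y_trunc = [0.5] * s + [0.0] * (total - s)
    return abs(u(y_full, decay) - u(y_trunc, decay))

errors = [truncation_error(s) for s in (1, 2, 4, 8, 16, 32)]
print(errors)  # decreases as s grows, roughly like the tail sum_{j>s} j^{-2}
```

Doubling $s$ repeatedly shows the error shrinking with the tail of the $j^{-2}$ coefficients; the paper's contribution is proving such rates for genuinely infinite-dimensional, non-affine PDE problems.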
For a given function $F$ from $\mathbb F_{p^n}$ to itself, determining whether there exists a function which is CCZ-equivalent but EA-inequivalent to $F$ is a very important and interesting problem. For example, K\"olsch \cite{KOL21} showed that there is no function which is CCZ-equivalent but EA-inequivalent to the inverse function. On the other hand, for the cases of the Gold function $F(x)=x^{2^i+1}$ and of $F(x)=x^3+{\rm Tr}(x^9)$ over $\mathbb F_{2^n}$, Budaghyan, Carlet and Pott (respectively, Budaghyan, Carlet and Leander) \cite{BCP06, BCL09FFTA} found functions which are CCZ-equivalent but EA-inequivalent to $F$. In this paper, when a given function $F$ has a component function admitting a linear structure, we present functions which are CCZ-equivalent to $F$, and if suitable conditions are satisfied, the constructed functions are shown to be EA-inequivalent to $F$. As a consequence, for every quadratic function $F$ on $\mathbb F_{2^n}$ ($n\geq 4$) with nonlinearity $>0$ and differential uniformity $\leq 2^{n-3}$, we explicitly construct functions which are CCZ-equivalent but EA-inequivalent to $F$. Also for every non-planar quadratic function on $\mathbb F_{p^n}$ $(p>2, n\geq 4)$ with $|\mathcal W_F|\leq p^{n-1}$ and differential uniformity $\leq p^{n-3}$, we explicitly construct functions which are CCZ-equivalent but EA-inequivalent to $F$.
The hazard function represents one of the main quantities of interest in the analysis of survival data. We propose a general approach for modelling the dynamics of the hazard function using systems of autonomous ordinary differential equations (ODEs). This modelling approach can be used to provide qualitative and quantitative analyses of the evolution of the hazard function over time. Our proposal capitalises on the extensive literature of ODEs which, in particular, allows for establishing basic rules or laws on the dynamics of the hazard function via the use of autonomous ODEs. We show how to implement the proposed modelling framework in cases where there is an analytic solution to the system of ODEs or where an ODE solver is required to obtain a numerical solution. We focus on the use of a Bayesian modelling approach, but the proposed methodology can also be coupled with maximum likelihood estimation. A simulation study is presented to illustrate the performance of these models and the interplay of sample size and censoring. Two case studies using real data are presented to illustrate the use of the proposed approach and to highlight the interpretability of the corresponding models. We conclude with a discussion on potential extensions of our work and strategies to include covariates in our framework.
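A minimal sketch of the modelling idea, with an assumed logistic law for the hazard (the paper allows general autonomous systems and embeds them in a Bayesian framework): Euler-integrate $h'(t) = a\,h(1-h/K)$ while accumulating the cumulative hazard $H(t) = \int_0^t h(u)\,du$, so that the survival function is $S(t) = \exp(-H(t))$. The parameter values are illustrative only.

```python
import math

def hazard_ode(h0, a=1.0, K=2.0, T=5.0, steps=5000):
    """Euler-solve the autonomous hazard ODE h'(t) = a*h*(1 - h/K)
    (an assumed logistic law) and accumulate the cumulative hazard
    H(t) = int_0^t h(u) du, so that S(t) = exp(-H(t))."""
    dt = T / steps
    h, H = h0, 0.0
    ts, hs, Ss = [0.0], [h0], [1.0]
    for i in range(steps):
        H += dt * h                    # accumulate cumulative hazard
        h += dt * a * h * (1 - h / K)  # advance the autonomous ODE
        ts.append((i + 1) * dt)
        hs.append(h)
        Ss.append(math.exp(-H))
    return ts, hs, Ss

ts, hs, Ss = hazard_ode(h0=0.1)
print(hs[-1], Ss[-1])  # hazard approaches K = 2; survival decays monotonically
```

The qualitative behavior is directly interpretable: the ODE's carrying capacity $K$ acts as a plateau level for the hazard, which is the kind of "law of dynamics" the abstract refers to.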
An $n$-vertex $m$-edge graph is \emph{$k$-vertex connected} if it cannot be disconnected by deleting fewer than $k$ vertices. After more than half a century of intensive research, the result by [Li et al. STOC'21] finally gave a \emph{randomized} algorithm for checking $k$-connectivity in near-optimal $\widehat{O}(m)$ time. (We use $\widehat{O}(\cdot)$ to hide an $n^{o(1)}$ factor.) Deterministic algorithms, unfortunately, have remained much slower even if we assume a linear-time max-flow algorithm: they either require at least $\Omega(mn)$ time [Even'75; Henzinger, Rao, and Gabow, FOCS'96; Gabow, FOCS'00] or assume that $k=o(\sqrt{\log n})$ [Saranurak and Yingchareonthawornchai, FOCS'22]. We show a \emph{deterministic} algorithm for checking $k$-vertex connectivity in time proportional to making $\widehat{O}(k^{2})$ max-flow calls, and, hence, in $\widehat{O}(mk^{2})$ time using the deterministic max-flow algorithm by [Brand et al. FOCS'23]. Our algorithm gives the first almost-linear-time bound for all $k$ with $\sqrt{\log n}\le k\le n^{o(1)}$ and subsumes, up to a subpolynomial factor, the long-standing state-of-the-art algorithm by [Even'75], which requires $O(n+k^{2})$ max-flow calls. Our key technique is a deterministic algorithm for terminal reduction for vertex connectivity: given a terminal set separated by a vertex mincut, output either a vertex mincut or a smaller terminal set that remains separated by a vertex mincut. We also show a deterministic $(1+\epsilon)$-approximation algorithm for vertex connectivity that makes $O(n/\epsilon^2)$ max-flow calls, improving the bound of $O(n^{1.5})$ max-flow calls in the exact algorithm of [Gabow, FOCS'00]. The latter technique is based on Ramanujan graphs.
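The classical reduction underlying all of these algorithms, vertex splitting plus max-flow, can be sketched as follows. This naive version makes $O(n^2)$ max-flow calls over all non-adjacent pairs (far from the paper's $\widehat{O}(k^2)$ bound) and uses a simple Edmonds-Karp max-flow rather than an almost-linear-time one; it only illustrates the reduction, not the paper's terminal-reduction technique.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on an adjacency-dict capacity graph."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t  # push one unit along the augmenting path
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] = cap[v].get(u, 0) + 1
            v = u
        flow += 1

def vertex_connectivity(n, edges):
    """kappa(G) via vertex splitting: node 2v = v_in, 2v+1 = v_out,
    vertex capacity 1, edge capacity n.  Vertices are 0..n-1."""
    adj = {u: set() for u in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def build():
        cap = {x: {} for x in range(2 * n)}
        for v in range(n):
            cap[2 * v][2 * v + 1] = 1
        for u, v in edges:
            cap[2 * u + 1][2 * v] = n
            cap[2 * v + 1][2 * u] = n
        return cap

    best = n - 1  # attained by the complete graph
    for s in range(n):
        for t in range(s + 1, n):
            if t not in adj[s]:  # max-flow between non-adjacent pairs
                best = min(best, max_flow(build(), 2 * s + 1, 2 * t))
    return best

# A 4-cycle has vertex connectivity 2; K4 has vertex connectivity 3.
print(vertex_connectivity(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
print(vertex_connectivity(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))
```

By Menger's theorem, the max-flow between the split copies of a non-adjacent pair equals the minimum number of vertices separating them, so the minimum over all such pairs is exactly $\kappa(G)$.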
Metric spaces $(X, d)$ are ubiquitous objects in mathematics and computer science that allow for capturing (pairwise) distance relationships $d(x, y)$ between points $x, y \in X$. Because of this, it is natural to ask what useful generalizations there are of metric spaces for capturing "$k$-wise distance relationships" $d(x_1, \ldots, x_k)$ among points $x_1, \ldots, x_k \in X$ for $k > 2$. To that end, G\"{a}hler (Math. Nachr., 1963) (and perhaps others even earlier) defined $k$-metric spaces, which generalize metric spaces, and most notably generalize the triangle inequality $d(x_1, x_2) \leq d(x_1, y) + d(y, x_2)$ to the "simplex inequality" $d(x_1, \ldots, x_k) \leq \sum_{i=1}^k d(x_1, \ldots, x_{i-1}, y, x_{i+1}, \ldots, x_k)$. (The definition holds for any fixed $k \geq 2$, and a $2$-metric space is just a (standard) metric space.) In this work, we introduce strong $k$-metric spaces, $k$-metric spaces that satisfy a topological condition stronger than the simplex inequality, which makes them "behave nicely." We also introduce coboundary $k$-metrics, which generalize $\ell_p$ metrics (and in fact all finite metric spaces induced by norms) and minimum bounding chain $k$-metrics, which generalize shortest path metrics (and capture all strong $k$-metrics). Using these definitions, we prove analogs of a number of fundamental results about embedding finite metric spaces including Fr\'{e}chet embedding (isometric embedding into $\ell_{\infty}$) and isometric embedding of all tree metrics into $\ell_1$. We also study relationships between families of (strong) $k$-metrics, and show that natural quantities, like simplex volume, are strong $k$-metrics.
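A classical concrete example for $k = 3$ is unsigned triangle area, which satisfies the simplex inequality because of the signed-area identity $[x_1,x_2,x_3] = [y,x_2,x_3] + [x_1,y,x_3] + [x_1,x_2,y]$. The quick numerical check below is illustrative only and is not one of the constructions (coboundary or minimum bounding chain $k$-metrics) introduced in the paper.

```python
import random

def area(p, q, r):
    """Unsigned triangle area: a natural candidate 3-metric on R^2."""
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (r[0] - p[0]) * (q[1] - p[1])) / 2

def simplex_inequality_holds(x1, x2, x3, y):
    """d(x1,x2,x3) <= d(y,x2,x3) + d(x1,y,x3) + d(x1,x2,y),
    with a tiny slack for floating-point rounding."""
    return (area(x1, x2, x3)
            <= area(y, x2, x3) + area(x1, y, x3) + area(x1, x2, y) + 1e-9)

def sample():
    return (random.uniform(-1, 1), random.uniform(-1, 1))

random.seed(0)
print(all(simplex_inequality_holds(sample(), sample(), sample(), sample())
          for _ in range(10_000)))  # True
```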
Classical multi-sorted equational theories and their free algebras have been fundamental in mathematics and computer science. In this paper, we present a generalization of multi-sorted equational theories from the classical ($Set$-enriched) context to the context of enrichment in a symmetric monoidal category $V$ that is topological over $Set$. Prominent examples of such categories include: various categories of topological and measurable spaces; the categories of models of relational Horn theories without equality, including the categories of preordered sets and (extended) pseudo-metric spaces; and the categories of quasispaces (a.k.a. concrete sheaves) on concrete sites, which have recently attracted interest in the study of programming language semantics. Given such a category $V$, we define a notion of $V$-enriched multi-sorted equational theory. We show that every $V$-enriched multi-sorted equational theory $T$ has an underlying classical multi-sorted equational theory $|T|$, and that free $T$-algebras may be obtained as suitable liftings of free $|T|$-algebras. We establish explicit and concrete descriptions of free $T$-algebras, which have a convenient inductive character when $V$ is cartesian closed. We provide several examples of $V$-enriched multi-sorted equational theories, and we also discuss the close connection between these theories and the presentations of $V$-enriched algebraic theories and monads studied in recent papers by the author and Lucyshyn-Wright.
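In the classical ($Set$-enriched) base case, a multi-sorted equational signature and its free algebra of well-sorted terms can be sketched directly; the signature below (naturals and lists) is a purely illustrative example, and the paper's contribution is lifting this picture to enrichment in a topological symmetric monoidal category $V$.

```python
from dataclasses import dataclass
from typing import Tuple

# A tiny classical multi-sorted signature: sorts 'nat' and 'list', with
# operations  zero : nat,  succ : nat -> nat,
#             nil : list,  cons : nat * list -> list.
SIG = {
    "zero": ((), "nat"),
    "succ": (("nat",), "nat"),
    "nil":  ((), "list"),
    "cons": (("nat", "list"), "list"),
}

@dataclass(frozen=True)
class Term:
    op: str
    args: Tuple["Term", ...] = ()

    @property
    def sort(self):
        """Recursively sort-check the term against the signature."""
        arg_sorts, result = SIG[self.op]
        assert tuple(a.sort for a in self.args) == arg_sorts, "ill-sorted term"
        return result

# Elements of the free algebra over the empty set of generators are
# exactly the well-sorted closed terms:
two = Term("succ", (Term("succ", (Term("zero"),)),))
xs = Term("cons", (two, Term("cons", (Term("zero"), Term("nil")))))
print(two.sort, xs.sort)  # nat list
```

Attempting to build an ill-sorted term such as `cons(nil, nil)` fails the sort check, which is the "carrier per sort" discipline that the enriched setting refines by putting $V$-structure on each carrier.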