The class XNLP consists of (parameterized) problems that can be solved nondeterministically in $f(k)n^{O(1)}$ time and $f(k)\log n$ space, where $n$ is the size of the input instance and $k$ the parameter. The class XALP consists of problems that can be solved within the same time and space bounds with access to an additional stack. These two classes are a "natural home" for many standard graph problems and their generalizations. In this paper, we show the hardness of several problems on planar graphs, parameterized by outerplanarity, treewidth, and pathwidth, thus strengthening several existing results. In particular, we show the XNLP-hardness of the following problems parameterized by outerplanarity: All-or-Nothing Flow, Target Outdegree Orientation, Capacitated (Red-Blue) Dominating Set, and Target Set Selection, among others. We also show the XNLP-completeness of Scattered Set parameterized by pathwidth, and its XALP-completeness parameterized by treewidth and outerplanarity.
The time-dependent Gross-Pitaevskii equation (GPE) is a nonlinear Schr\"odinger equation which is used in quantum physics to model the dynamics of Bose-Einstein condensates. In this work we consider numerical approximations of the GPE based on a multiscale approach known as the localized orthogonal decomposition. Combined with an energy-preserving time integrator, this yields a method which is of high order in space and time under mild regularity assumptions. In previous work, the method has been shown to be numerically very efficient compared to first-order Lagrange FEM. In this paper, we further investigate the performance of the method and compare it with higher-order Lagrange FEM. For rough problems we observe that the novel method performs very efficiently and retains its high order, while the classical methods can only compete well for smooth problems.
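For concreteness, a commonly used dimensionless form of the time-dependent GPE (the precise scaling and the trapping potential $V$ vary between references; this normalization is an illustrative convention, not necessarily the one adopted in the paper) is
\[
  i\,\partial_t u(x,t) = -\Delta u(x,t) + V(x)\,u(x,t) + \beta\,|u(x,t)|^2\,u(x,t),
  \qquad \|u(\cdot,0)\|_{L^2} = 1,
\]
where the repulsion parameter $\beta \ge 0$ models the strength of the particle interactions, and the $L^2$-normalization is preserved by the flow.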
Given a $K$-vertex simplex in a $d$-dimensional space, suppose we measure $n$ points on the simplex with noise (hence, some of the observed points fall outside the simplex). Vertex hunting is the problem of estimating the $K$ vertices of the simplex. A popular vertex hunting algorithm is the successive projection algorithm (SPA). However, SPA is observed to perform unsatisfactorily under strong noise or outliers. We propose pseudo-point SPA (pp-SPA), which uses a projection step and a denoising step to generate pseudo-points, and then feeds them into SPA for vertex hunting. We derive error bounds for pp-SPA, leveraging extreme value theory of (possibly) high-dimensional random vectors. The results suggest that pp-SPA has faster rates and better numerical performance than SPA. Our analysis includes an improved non-asymptotic bound for the original SPA, which is of independent interest.
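For reference, below is a minimal sketch of the classical SPA, together with a hypothetical rendering of the pp-SPA preprocessing (the PCA projection onto a $(K-1)$-dimensional affine subspace and the nearest-neighbor averaging are our illustrative reading of the projection and denoising steps; the paper's exact construction may differ):

```python
import numpy as np

def spa(X, K):
    """Classical successive projection algorithm.

    X : (n, d) array of data points; K : number of vertices.
    Returns indices of the rows selected as vertex estimates.
    """
    Y = np.array(X, dtype=float)
    chosen = []
    for _ in range(K):
        norms = np.linalg.norm(Y, axis=1)
        j = int(np.argmax(norms))       # farthest remaining point
        chosen.append(j)
        u = Y[j] / norms[j]             # unit direction of the new vertex
        Y = Y - np.outer(Y @ u, u)      # project onto its orthogonal complement
    return chosen

def pp_spa(X, K, r=10):
    """Hypothetical pp-SPA preprocessing: project onto the best-fit
    (K-1)-dimensional affine subspace, average each point with its r
    nearest neighbors to form pseudo-points, then run SPA."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:K - 1]                              # top principal directions
    Z = (X - mu) @ P.T @ P + mu                 # projection step
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)
    nbrs = np.argsort(D, axis=1)[:, :r + 1]     # each point + r neighbors
    Z = Z[nbrs].mean(axis=1)                    # denoising step
    return spa(Z, K)
```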
An \emph{eight-partition} of a finite set of points (respectively, of a continuous mass distribution) in $\mathbb{R}^3$ consists of three planes that divide the space into $8$ octants, such that each open octant contains at most $1/8$ of the points (respectively, of the mass). In 1966, Hadwiger showed that any mass distribution in $\mathbb{R}^3$ admits an eight-partition; moreover, one can prescribe the normal direction of one of the three planes. The analogous result for finite point sets follows by a standard limit argument. We prove the following variant of this result: Any mass distribution (or point set) in $\mathbb{R}^3$ admits an eight-partition for which the intersection of two of the planes is a line with a prescribed direction. Moreover, we present an efficient algorithm for calculating an eight-partition of a set of $n$ points in~$\mathbb{R}^3$ (with prescribed normal direction of one of the planes) in time $O^{*}(n^{5/2})$.
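As a sanity check of the defining property, the following sketch verifies whether three given planes (represented, by our convention, as pairs of a normal vector $a$ and an offset $b$, i.e. the plane $\{x : \langle a, x\rangle = b\}$) form an eight-partition of a finite point set:

```python
import numpy as np

def is_eight_partition(points, planes):
    """Check that each open octant determined by three planes contains
    at most 1/8 of the points.  `planes` is a list of three (a, b) pairs
    describing the plane <a, x> = b."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    # Signed side of each point with respect to each plane; points exactly
    # on a plane (sign 0) lie in no open octant, matching the definition.
    signs = np.sign(np.stack([points @ a - b for a, b in planes], axis=1))
    for octant in range(8):
        target = np.array([1 if (octant >> i) & 1 else -1 for i in range(3)])
        inside = np.all(signs == target, axis=1)
        if inside.sum() > n / 8:
            return False
    return True
```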
A new $H(\textrm{divdiv})$-conforming finite element is presented, which avoids the need for super-smoothness by redistributing the degrees of freedom to edges and faces. This leads to a hybridizable mixed method with superconvergence for the biharmonic equation. Moreover, new finite element divdiv complexes are established. Finally, new weak Galerkin and $C^0$ discontinuous Galerkin methods for the biharmonic equation are derived.
Given a finite set $A \subseteq \mathbb{R}^2$ and a subset $B \subseteq A$, the \emph{MST-ratio} is the combined length of the minimum spanning trees of $B$ and $A \setminus B$ divided by the length of the minimum spanning tree of $A$. The supremum, over all sets $A$, of the maximum MST-ratio over all subsets $B$ is related to the Steiner ratio, and we prove that this sup-max lies between $2.154$ and $2.427$. Restricting ourselves to $2$-dimensional lattices, we prove that the sup-max is $2.0$, while the inf-max is $1.25$. By some margin, the most difficult of these results is the upper bound for the inf-max, which we prove by showing that the hexagonal lattice cannot have an MST-ratio larger than $1.25$.
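The definition is easy to check numerically; the sketch below computes the MST-ratio of a given partition with scipy (the example partition of the unit square is only illustrative):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_length(points):
    """Total edge length of the Euclidean minimum spanning tree."""
    if len(points) < 2:
        return 0.0
    return minimum_spanning_tree(squareform(pdist(points))).sum()

def mst_ratio(A, mask):
    """MST-ratio of the partition of A induced by the boolean mask:
    (|MST(B)| + |MST(A \\ B)|) / |MST(A)|."""
    A = np.asarray(A, dtype=float)
    return (mst_length(A[mask]) + mst_length(A[~mask])) / mst_length(A)

# Tiny usage example: B = two opposite corners of the unit square.
A = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
mask = np.array([True, False, False, True])
print(mst_ratio(A, mask))   # (sqrt(2) + sqrt(2)) / 3 ~ 0.943
```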
We consider the quasi-likelihood analysis for a linear regression model driven by a Student-t L\'{e}vy process with constant scale and arbitrary degrees of freedom. The model is observed at high frequency over an extending period, under which we can quantify how the sampling frequency affects estimation accuracy. In that setting, joint estimation of trend, scale, and degrees of freedom is a non-trivial problem. The bottleneck is that the Student-t distribution is not closed under convolution, making it difficult to estimate all the parameters fully based on the high-frequency time scale. To deal efficiently with this intricacy from both theoretical and computational points of view, we propose a two-step quasi-likelihood analysis: first, we make use of the Cauchy quasi-likelihood for estimating the regression-coefficient vector and the scale parameter; then, we construct the sequence of unit-period cumulative residuals to estimate the remaining degrees of freedom. In particular, using the full data in the first step causes a problem stemming from the small-time Cauchy approximation, which shows the need for data thinning.
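The following is a heavily simplified, hypothetical sketch of the two-step idea for a scalar trend (the model reduction $Y_t = \beta t + \sigma J_t$, the thinning rule, and all function names are our illustrative assumptions, not the paper's construction). Step 1 fits $(\beta,\sigma)$ by a Cauchy quasi-likelihood on thinned high-frequency increments; step 2 fits the degrees of freedom on unit-period cumulative residuals, using that a Student-t L\'evy process is exactly Student-t distributed at unit time:

```python
import numpy as np
from scipy import optimize, stats

def two_step_fit(Y, h, thin=1):
    """Two-step quasi-likelihood sketch for Y_t = beta*t + sigma*J_t,
    observed at times i*h (Y is a 1-d array of observations)."""
    dY = np.diff(Y)[::thin]          # thinned increments, time step h*thin
    dt = h * thin

    # Step 1: Cauchy quasi-likelihood, justified by the small-time
    # Cauchy approximation of the Student-t Levy increments.
    def neg_loglik(theta):
        beta, log_sigma = theta
        return -stats.cauchy.logpdf(
            dY, loc=beta * dt, scale=np.exp(log_sigma) * dt).sum()

    res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
    beta_hat, sigma_hat = res.x[0], np.exp(res.x[1])

    # Step 2: unit-period cumulative residuals, approximately i.i.d.
    # Student-t(nu) with unit scale; fit nu by MLE with loc/scale fixed.
    per = max(int(round(1.0 / h)), 1)     # observations per unit period
    m = (len(Y) - 1) // per
    R = (Y[per * np.arange(1, m + 1)] - Y[per * np.arange(m)]
         - beta_hat) / sigma_hat
    nu_hat, _, _ = stats.t.fit(R, floc=0, fscale=1)
    return beta_hat, sigma_hat, nu_hat
```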
An average-case variant of the $k$-SUM conjecture asserts that finding $k$ numbers that sum to 0 in a list of $r$ random numbers, each of order $r^k$, cannot be done in much less than $r^{\lceil k/2 \rceil}$ time. On the other hand, in the dense regime of parameters, where the list contains more numbers and many solutions exist, the complexity of finding one of them can be significantly improved by Wagner's $k$-tree algorithm. Such algorithms for $k$-SUM in the dense regime have many applications, notably in cryptanalysis. In this paper, assuming the average-case $k$-SUM conjecture, we prove that known algorithms are essentially optimal for $k=3,4,5$. For $k>5$, we prove the optimality of the $k$-tree algorithm for a limited range of parameters. We also prove similar results for $k$-XOR, where the sum is replaced with exclusive or. Our results are obtained by a self-reduction that, given an instance of $k$-SUM which has a few solutions, produces from it many instances in the dense regime. We solve each of these instances using the dense $k$-SUM oracle, and hope that a solution to a dense instance also solves the original problem. We deal with potentially malicious oracles (that repeatedly output correlated useless solutions) by an obfuscation process that adds noise to the dense instances. Using discrete Fourier analysis, we show that the obfuscation eliminates correlations among the oracle's solutions, even though its inputs are highly correlated.
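To make the dense-regime algorithm concrete, here is a minimal sketch of Wagner's $k$-tree algorithm for $k=4$ in the XOR setting (list sizes and prefix lengths follow the standard heuristic of lists of size roughly $2^{n/3}$ merged on $n/3$-bit prefixes; this illustrates the algorithm named above, not the paper's reduction):

```python
import random
from collections import defaultdict

def merge(A, B, bits, n_bits):
    """Join two lists on the top `bits` bits of the XOR of their values,
    carrying along witness tuples of the original elements."""
    shift = n_bits - bits
    buckets = defaultdict(list)
    for vb, wb in B:
        buckets[vb >> shift].append((vb, wb))
    return [(va ^ vb, wa + wb)
            for va, wa in A
            for vb, wb in buckets.get(va >> shift, [])]

def four_xor(lists, n_bits):
    """Wagner's k-tree algorithm for k = 4: find x1^x2^x3^x4 = 0, one xi
    from each list, via two levels of merging on ~n/3-bit prefixes."""
    L = [[(x, (x,)) for x in Li] for Li in lists]
    a = merge(L[0], L[1], n_bits // 3, n_bits)   # level 1: zero the top n/3 bits
    b = merge(L[2], L[3], n_bits // 3, n_bits)
    for v, w in merge(a, b, n_bits, n_bits):     # level 2: full collision
        if v == 0:
            return w
    return None

# Dense regime: lists of size ~2**(n/3) make a solution exist w.h.p.
n_bits = 24
size = 2 ** (n_bits // 3 + 1)
lists = [[random.randrange(2 ** n_bits) for _ in range(size)] for _ in range(4)]
sol = four_xor(lists, n_bits)
if sol:
    x1, x2, x3, x4 = sol
    assert x1 ^ x2 ^ x3 ^ x4 == 0
```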
The solutions of the equation $f^{(p-1)} + f^p = h^p$ in the unknown function $f$ over an algebraic function field of characteristic $p$ are very closely linked to the structure and factorisations of linear differential operators with coefficients in function fields of characteristic $p$. However, while being able to solve this equation over general algebraic function fields is necessary even for operators with rational coefficients, no general resolution method has been developed. We present an algorithm for testing the existence of solutions in polynomial time in the ``size'' of $h$, and an algorithm, based on the computation of Riemann-Roch spaces and the selection of elements in the divisor class group, for computing solutions of size polynomial in the ``size'' of $h$, in polynomial time in the size of $h$ and linear in the characteristic $p$. We discuss its applications to the factorisation of linear differential operators in positive characteristic $p$.
We introduce a constructive analogue of $\Phi$-dimension, a notion of Hausdorff dimension developed using a restricted class of coverings of a set. A class of coverings $\Phi$ is said to be "faithful" to Hausdorff dimension if the $\Phi$-dimension and Hausdorff dimension coincide for every set. We prove a Point-to-Set Principle for $\Phi$-dimension, through which we get Point-to-Set Principles for Hausdorff dimension, continued-fraction dimension, and dimension of Cantor coverings as special cases. Using the Point-to-Set Principle for Cantor coverings and a new technique for the construction of sequences satisfying a certain Kolmogorov complexity condition, we show that the notions of faithfulness of Cantor coverings at the Hausdorff and constructive levels are equivalent. We adapt the result by Albeverio, Ivanenko, Lebid, and Torbin to derive necessary and sufficient conditions for the constructive dimension faithfulness of the coverings generated by the Cantor series expansion. This condition yields two general classes of representations of reals: one whose constructive dimensions are equivalent to the constructive Hausdorff dimensions, and another whose effective dimensions differ from the effective Hausdorff dimensions, completely classifying Cantor series expansions of reals.
Elkin, Filtser, and Neiman (2017) introduced the concept of a {\it terminal embedding} from one metric space $(X,d_X)$ to another $(Y,d_Y)$ with a set of designated terminals $T\subset X$. Such an embedding $f$ is said to have distortion $\rho\ge 1$ if $\rho$ is the smallest value such that there exists a constant $C>0$ satisfying \begin{equation*} \forall x\in T\ \forall q\in X,\ C d_X(x, q) \le d_Y(f(x), f(q)) \le C \rho d_X(x, q) . \end{equation*} When $X,Y$ are both Euclidean metrics with $Y$ being $m$-dimensional, Narayanan and Nelson (2019), following work of Mahabadi, Makarychev, Makarychev, and Razenshteyn (2018), showed that distortion $1+\epsilon$ is achievable via such a terminal embedding with $m = O(\epsilon^{-2}\log n)$ for $n := |T|$. This generalizes the Johnson-Lindenstrauss lemma, which only preserves distances within $T$ and not from the rest of the space to $T$. The downside of prior work is that evaluating the embedding on some $q\in \mathbb{R}^d$ required solving a semidefinite program with $\Theta(n)$ constraints in~$m$ variables, and thus required some superlinear $\mathrm{poly}(n)$ runtime. Our main contribution in this work is a new data structure for computing terminal embeddings. We show how to pre-process $T$ to obtain an almost linear-space data structure that supports computing the terminal embedding image of any $q\in\mathbb{R}^d$ in sublinear time $O^* (n^{1-\Theta(\epsilon^2)} + d)$. To accomplish this, we leverage tools developed in the context of approximate nearest neighbor search.
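For intuition, the Johnson-Lindenstrauss map that terminal embeddings generalize is just a scaled Gaussian projection; the sketch below checks its distortion empirically on the terminal set only (it does not handle queries outside $T$, which is exactly what a terminal embedding adds; the constant $8$ in the target dimension is illustrative):

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n, d, eps = 200, 1000, 0.25
m = int(8 * np.log(n) / eps**2)            # m = O(eps^-2 log n)

T = rng.normal(size=(n, d))                # the terminal set T
G = rng.normal(size=(m, d)) / np.sqrt(m)   # Gaussian JL map x -> Gx

# Ratios of embedded to original pairwise distances within T;
# with high probability they concentrate in [1 - eps, 1 + eps].
ratios = pdist(T @ G.T) / pdist(T)
print(ratios.min(), ratios.max())
```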