$\mathsf{QMA}_1$ is $\mathsf{QMA}$ with perfect completeness, i.e., the prover must accept with probability exactly $1$ in the YES-case. Whether $\mathsf{QMA}_1$ and $\mathsf{QMA}$ are equal is still a major open problem. It is not even known whether $\mathsf{QMA}_1$ has a universal gateset; Solovay-Kitaev does not apply due to perfect completeness. Hence, we do not generally know whether $\mathsf{QMA}_1^G=\mathsf{QMA}_1^{G'}$ (superscript denoting gateset), given two universal gatesets $G,G'$. In this paper, we make progress towards the gateset question by proving that for all $k\in\mathbb N$, the gateset $G_{2^k}$ (Amy et al., RC 2024) is universal for all gatesets in the cyclotomic field $\mathbb{Q}(\zeta_{2^k}),\zeta_{2^k}=e^{2\pi i/2^k}$, i.e., $\mathsf{QMA}_1^G\subseteq\mathsf{QMA}_1^{G_{2^k}}$ for all gatesets $G$ in $\mathbb{Q}(\zeta_{2^k})$. For $\mathsf{BQP}_1$, we can even show that $G_2$ suffices for all $2^k$-th cyclotomic fields. We exhibit complete problems for all $\mathsf{QMA}_1^{G_{2^k}}$: Quantum $l$-SAT in $\mathbb{Q}(\zeta_{2^k})$ is complete for $\mathsf{QMA}_1^{G_{2^k}}$ for all $l\ge4$, and for $l=3$ if $k\ge3$, where quantum $l$-SAT is the problem of deciding whether a set of $l$-local Hamiltonians has a common ground state. Additionally, we give the first $\mathsf{QMA}_1$-complete $2$-local Hamiltonian problem: It is $\mathsf{QMA}_1^{G_{2^k}}$-complete (for $k\ge3$) to decide whether a given $2$-local Hamiltonian $H$ in $\mathbb{Q}(\zeta_{2^k})$ has a nonempty nullspace. Our techniques also extend to sparse Hamiltonians, and so we can prove the first $\mathsf{QMA}_1(2)$-complete (i.e., $\mathsf{QMA}_1$ with two unentangled provers) Hamiltonian problem. Finally, we prove that the Gapped Clique Homology problem defined by King and Kohler (FOCS 2024) is $\mathsf{QMA}_1^{G_2}$-complete, and that the Clique Homology problem without a promise gap is $\mathsf{PSPACE}$-complete.
Let $P$ be a set of $n$ points in the plane. We consider a variation of the classical Erd\H{o}s-Szekeres problem, presenting efficient algorithms with $O(n^3)$ running time and $O(n^2)$ space complexity that compute: (1) A subset $S$ of $P$ such that the boundary of the rectilinear convex hull of $S$ has the maximum number of points from $P$, (2) a subset $S$ of $P$ such that the boundary of the rectilinear convex hull of $S$ has the maximum number of points from $P$ and its interior contains no element of $P$, (3) a subset $S$ of $P$ such that the rectilinear convex hull of $S$ has maximum area and its interior contains no element of $P$, and (4) when each point of $P$ is assigned a weight, positive or negative, a subset $S$ of $P$ that maximizes the total weight of the points in the rectilinear convex hull of $S$. We also revisit the problems of computing a maximum-area orthoconvex polygon and computing a maximum-area staircase polygon, amidst a point set in a rectangular domain. We obtain new and simpler algorithms to solve both problems with the same complexity as in the state of the art.
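The central object above can be sketched directly: ignoring degenerate collinear cases, the points of $P$ on the boundary of the rectilinear convex hull of $P$ are those that are maximal in at least one of the four quadrant dominance orders. The following is a brute-force quadratic-time illustration of that characterization, not the paper's $O(n^3)$ algorithms; all names are hypothetical.

```python
def staircase_maxima(points):
    # p survives iff no other point dominates it to the north-east,
    # i.e. no q != p with q.x >= p.x and q.y >= p.y.
    return {p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                       for q in points)}

def rectilinear_hull_vertices(points):
    # Union of the maxima in the four quadrant orders, obtained by
    # reflecting the point set and reusing the north-east test.
    out = set()
    for sx, sy in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
        flipped = [(sx * x, sy * y) for x, y in points]
        out |= {(sx * x, sy * y) for x, y in staircase_maxima(flipped)}
    return out

# The centre point of a 5-point configuration lies strictly inside
# the rectilinear convex hull, so only the four corners remain.
pts = [(0, 0), (2, 0), (0, 2), (2, 2), (1, 1)]
print(rectilinear_hull_vertices(pts))
```

Each of the four optimization problems in the abstract then searches over subsets $S$ whose staircase boundaries are assembled by dynamic programming rather than by this exhaustive test.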
We prove in this paper that there is a language $L_d$ accepted by some nondeterministic Turing machine but not by any ${\rm co}\mathcal{NP}$-machine (defined later). We then show that $L_d$ is in $\mathcal{NP}$, thus proving that $\mathcal{NP}\neq{\rm co}\mathcal{NP}$. The main techniques used in this paper are lazy diagonalization and the novel technique developed in the author's recent work \cite{Lin21}. Further, since there exists an oracle $A$ such that $\mathcal{NP}^A={\rm co}\mathcal{NP}^A$, we explore the mystery behind this and show that if $\mathcal{NP}^A={\rm co}\mathcal{NP}^A$, then under some rational assumptions the set of all polynomial-time co-nondeterministic oracle Turing machines with oracle $A$ is not enumerable, thus showing that lazy diagonalization is not applicable to the first half of the steps needed to separate $\mathcal{NP}^A$ from ${\rm co}\mathcal{NP}^A$. As a by-product, we obtain once again the important result that $\mathcal{P}\neq\mathcal{NP}$ \cite{Lin21}, which follows from the above outcome and the well-known fact that $\mathcal{P}={\rm co}\mathcal{P}$. Next, we show that the complexity class ${\rm co}\mathcal{NP}$ has intermediate languages, i.e., there exists a language $L_{inter}\in{\rm co}\mathcal{NP}$ that is neither in $\mathcal{P}$ nor ${\rm co}\mathcal{NP}$-complete. We also summarize other direct consequences of our main result, such as $\mathcal{NEXP}\neq{\rm co}\mathcal{NEXP}$, as well as others belonging to the area of proof complexity. Lastly, we show a lower-bound result for Frege proof systems: no Frege proof system can be polynomially bounded.
A convergent numerical method for $\alpha$-dissipative solutions of the Hunter-Saxton equation is derived. The method is based on applying a tailor-made projection operator to the initial data, and then solving exactly using the generalized method of characteristics. The projection step is the only step that introduces any approximation error. It is therefore crucial that its design ensures not only a good approximation of the initial data, but also that errors due to the energy dissipation at later times remain small. Furthermore, it is shown that the main quantity of interest, the wave profile, converges in $L^{\infty}$ for all $t \geq 0$, while a subsequence of the energy density converges weakly for almost every time.
We present polynomial-time SDP-based algorithms for the following problem: for fixed $k \leq \ell$, given a real number $\epsilon>0$ and a graph $G$ that admits a $k$-colouring with a $\rho$-fraction of the edges coloured properly, the algorithm returns an $\ell$-colouring of $G$ with an $(\alpha \rho - \epsilon)$-fraction of the edges coloured properly, in time polynomial in the size of $G$ and in $1 / \epsilon$. Our algorithms are based on the algorithms of Frieze and Jerrum [Algorithmica'97] and of Karger, Motwani and Sudan [JACM'98]. When $k$ is fixed and $\ell$ grows large, our algorithm achieves an approximation ratio of $\alpha = 1 - o(1 / \ell)$. When $k, \ell$ are both large, our algorithm achieves an approximation ratio of $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell) - O(1 / k^2)$; if we fix $d = \ell - k$ and allow $k, \ell$ to grow large, this is $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell)$. By extending the results of Khot, Kindler, Mossel and O'Donnell [SICOMP'07] to the promise setting, we show that for large $k$ and $\ell$, assuming Khot's Unique Games Conjecture (\UGC), it is \NP-hard to achieve an approximation ratio $\alpha$ greater than $1 - 1 / \ell + 2 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided that $\ell$ is bounded by a function that is $o(\exp(\sqrt[3]{k}))$. For the case where $d = \ell - k$ is fixed, this bound matches the performance of our algorithm up to $o(\ln \ell / k \ell)$. Furthermore, by extending the results of Guruswami and Sinop [ToC'13] to the promise setting, we prove that it is \NP-hard to achieve an approximation ratio greater than $1 - 1 / \ell + 8 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided again that $\ell$ is bounded as before (but this time without assuming the \UGC).
We propose and justify a matrix reduction method for calculating the optimal approximation of an observed matrix $A \in {\mathbb C}^{m \times n}$ by a sum $\sum_{i=1}^p \sum_{j=1}^q B_iX_{ij}C_j$ of matrix products, where each $B_i \in {\mathbb C}^{m \times g_i}$ and $C_j \in {\mathbb C}^{h_j \times n}$ is known and where the unknown matrix kernels $X_{ij}$ are determined by minimizing the Frobenius norm of the error. The sum can be represented as a bounded linear mapping $BXC$ with unknown kernel $X$ from a prescribed subspace ${\mathcal T} \subseteq {\mathbb C}^n$ onto a prescribed subspace ${\mathcal S} \subseteq {\mathbb C}^m$ defined respectively by the collective domains and ranges of the given matrices $C_1,\ldots,C_q$ and $B_1,\ldots,B_p$. We show that the optimal kernel is $X = B^{\dag}AC^{\dag}$ and that the optimal approximation $BB^{\dag}AC^{\dag}C$ is the projection of the observed mapping $A$ onto a mapping from ${\mathcal T}$ to ${\mathcal S}$. If $A$ is large, then $B$ and $C$ may also be large, and direct calculation of $B^{\dag}$ and $C^{\dag}$ becomes unwieldy and inefficient. The proposed method avoids this difficulty by reducing the solution process to finding the pseudo-inverses of a collection of much smaller matrices. This significantly reduces the computational burden.
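The optimality of the kernel $X = B^{\dag}AC^{\dag}$ can be checked numerically in the simplest single-block case ($p = q = 1$) with a direct computation; the sizes below are illustrative, and this sketch uses dense pseudo-inverses rather than the paper's reduction to smaller matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative single-block case (p = q = 1) with hypothetical sizes.
m, n, g, h = 8, 6, 3, 4
A = rng.standard_normal((m, n))   # observed matrix
B = rng.standard_normal((m, g))   # known left factor
C = rng.standard_normal((h, n))   # known right factor

# Optimal kernel X = B^+ A C^+ minimizes ||A - B X C||_F over all X,
# and B B^+ A C^+ C is the projected optimal approximation.
X = np.linalg.pinv(B) @ A @ np.linalg.pinv(C)
best = np.linalg.norm(A - B @ X @ C)

# Any perturbed kernel yields a Frobenius error at least as large.
for _ in range(50):
    Y = X + 0.1 * rng.standard_normal(X.shape)
    assert np.linalg.norm(A - B @ Y @ C) >= best - 1e-10
```

For generic full-rank $B$ and $C$, the perturbed errors are strictly larger, reflecting that the optimal approximation is the orthogonal projection of $A$ onto the mappings of the form $BXC$.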
We study the finite element approximation of problems involving the weighted $p$-Laplacian for $p \in (1,\infty)$ and weights belonging to the Muckenhoupt class $A_1$. In particular, we consider an equation and an obstacle problem for the weighted $p$-Laplacian and derive error estimates in both cases. The analysis is based on the language of weighted Orlicz and Orlicz--Sobolev spaces.
$\ell_1$ regularization is used to preserve edges or enforce sparsity in a solution to an inverse problem. We investigate the Split Bregman and the Majorization-Minimization iterative methods that turn this non-smooth minimization problem into a sequence of steps that include solving an $\ell_2$-regularized minimization problem. We consider selecting the regularization parameter in the inner generalized Tikhonov regularization problems that occur at each iteration in these $\ell_1$ iterative methods. The generalized cross validation and $\chi^2$ degrees of freedom methods are extended to these inner problems. In particular, for the $\chi^2$ method this includes extending the $\chi^2$ result to problems in which the regularization operator has more rows than columns, and showing how to use the $A$-weighted generalized inverse to estimate prior information at each inner iteration. Numerical experiments for image deblurring problems demonstrate that it is more effective to select the regularization parameter automatically within the iterative schemes than to keep it fixed for all iterations. Moreover, an appropriate regularization parameter can be estimated in the early iterations and then held fixed until convergence.
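The inner parameter-selection step can be illustrated on standard-form Tikhonov regularization (identity regularization operator), where generalized cross validation reduces to a one-dimensional minimization over the SVD of the forward operator. This is a simplified sketch, not the paper's generalized inner problems with $A$-weighted inverses; the test problem and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ill-posed toy problem: a smoothing (Gaussian) kernel and noisy data.
n = 40
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01)
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

# Standard-form Tikhonov via the SVD of A.
U, s, Vt = np.linalg.svd(A)
beta = U.T @ b

def gcv(lam):
    # Filter factors f_i = s_i^2 / (s_i^2 + lam^2);
    # GCV(lam) = ||residual||^2 / trace(I - A A_lam^+)^2.
    f = s**2 / (s**2 + lam**2)
    resid = np.sum(((1.0 - f) * beta) ** 2)
    trace = n - np.sum(f)
    return resid / trace**2

# Pick the parameter minimizing GCV over a logarithmic grid.
lams = np.logspace(-8, 1, 200)
lam_opt = lams[np.argmin([gcv(l) for l in lams])]

# Regularized solution for the selected parameter (safe form of f/s).
x_reg = Vt.T @ (s * beta / (s**2 + lam_opt**2))
```

In the iterative $\ell_1$ schemes of the abstract, a minimization of this kind would be repeated for each inner $\ell_2$ subproblem, which is what makes automatic selection attractive over a fixed parameter.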
This work introduces a novel approach to constructing DNA codes from linear codes over a non-chain extension of $\mathbb{Z}_4$. We study $(\text{\textbaro},\mathfrak{d}, \gamma)$-constacyclic codes over the ring $\mathfrak{R}=\mathbb{Z}_4+\omega\mathbb{Z}_4, \omega^2=\omega,$ with an $\mathfrak{R}$-automorphism $\text{\textbaro}$ and a $\text{\textbaro}$-derivation $\mathfrak{d}$ over $\mathfrak{R}.$ Further, we determine the generators of the $(\text{\textbaro},\mathfrak{d}, \gamma)$-constacyclic codes over the ring $\mathfrak{R}$ of arbitrary length and establish the reverse constraint for these codes. Besides the necessary and sufficient criterion to derive reverse-complement codes, we present a construction to obtain DNA codes from these reversible codes. Moreover, we use another construction on the $(\text{\textbaro},\mathfrak{d},\gamma)$-constacyclic codes to generate additional optimal and new classical codes. Finally, we provide several examples of $(\text{\textbaro},\mathfrak{d}, \gamma)$-constacyclic codes and construct DNA codes from the established results. The parameters of these linear codes over $\mathbb{Z}_4$ are better than, or optimal among, the codes available at \cite{z4codes}.
We consider two-dimensional $(\lambda_1, \lambda_2)$-constacyclic codes over $\mathbb{F}_{q}$ of area $M N$, where $q$ is a power of a prime $p$ with $\gcd(M,p)=1$ and $\gcd(N,p)=1$. With the help of common zero (CZ) sets, we characterize 2-D constacyclic codes. Further, we provide an algorithm to construct an ideal basis of these codes by using their essential common zero (ECZ) sets. We describe the dual of 2-D constacyclic codes. Finally, we provide an encoding scheme for generating 2-D constacyclic codes. We present an example to illustrate that 2-D constacyclic codes can have better minimum distance than their cyclic counterparts with the same code size and code rate.
We propose a $C^0$ interior penalty method for the fourth-order stream function formulation of the surface Stokes problem. The scheme utilizes continuous, piecewise polynomial spaces defined on an approximate surface. We show that the resulting discretization is positive definite and derive error estimates in various norms in terms of the polynomial degree of the finite element space as well as the polynomial degree used to define the geometry approximation. A notable feature of the scheme is that it does not explicitly depend on the Gauss curvature of the surface. This is achieved via a novel integration-by-parts formula for the surface biharmonic operator.