We show that there is a constant $k$ such that Buss's intuitionistic theory $\mathsf{IS}^1_2$ does not prove that SAT requires co-nondeterministic circuits of size at least $n^k$. To our knowledge, this is the first unconditional unprovability result in bounded arithmetic in the context of worst-case fixed-polynomial size circuit lower bounds. We complement this result by showing that the upper bound $\mathsf{NP} \subseteq \mathsf{coNSIZE}[n^k]$ is unprovable in $\mathsf{IS}^1_2$.
This study demonstrates that the boundedness of the \( H^\infty \)-calculus for the negative discrete Laplace operator is independent of the spatial mesh size. Using this result, we deduce a discrete stochastic maximal \( L^p \)-regularity estimate for a spatial semidiscretization. Furthermore, we derive (nearly) sharp error estimates for the semidiscretization in general spatial \( L^q \)-norms.
In this article, we introduce a necessary and sufficient condition for a linear code over the ring $\mathbb{Z}_{n}$ to be minimal. A fundamental question about minimal linear codes is the existence of an $[m,k]$ minimal linear code with $k \leq m$. W. Lu et al. (see \cite{nine}) showed that there exists a positive integer $m(k;q)$ such that for every $m\geq m(k;q)$ a minimal linear code of length $m$ and dimension $k$ over a finite field $\mathbb{F}_q$ must exist, and they gave upper and lower bounds on $m(k;q)$. In this manuscript, we establish upper and lower bounds for $m(k;p^l)$ and $m(k;p_1p_2)$ over the rings $\mathbb{Z}_{p^l}$ and $\mathbb{Z}_{p_1p_2}$, respectively.
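For illustration only (this is not from the paper), the following brute-force sketch checks minimality of a small linear code over $\mathbb{Z}_n$, taking as the criterion that every codeword whose support lies inside the support of a codeword $c$ must be a multiple of $c$; conventions for minimality over rings vary, so this should be read as a toy instance of the definition rather than the authors' characterization. The generator matrix `G` is a made-up example.

```python
from itertools import product

def span_Zn(generators, n):
    """All Z_n-linear combinations of the generator rows (brute force)."""
    k, length = len(generators), len(generators[0])
    code = set()
    for coeffs in product(range(n), repeat=k):
        word = tuple(sum(a * g[i] for a, g in zip(coeffs, generators)) % n
                     for i in range(length))
        code.add(word)
    return code

def support(word):
    return {i for i, x in enumerate(word) if x != 0}

def is_minimal_code(generators, n):
    """Minimality check: every nonzero codeword covers only its own multiples."""
    code = span_Zn(generators, n)
    for c in code:
        if not support(c):
            continue
        multiples = {tuple((a * x) % n for x in c) for a in range(n)}
        for cprime in code:
            if support(cprime) and support(cprime) <= support(c) and cprime not in multiples:
                return False
    return True

# Tiny illustrative generator matrix over Z_4 (not taken from the paper).
G = [(1, 0, 1, 2), (0, 1, 3, 1)]
print(is_minimal_code(G, 4))
```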
We consider a distributed setup for reinforcement learning, where each agent has a copy of the same Markov Decision Process but transitions are sampled from the corresponding Markov chain independently by each agent. We show that in this setting, we can achieve a linear speedup for TD($\lambda$), a family of popular methods for policy evaluation, in the sense that $N$ agents can evaluate a policy $N$ times faster provided the target accuracy is small enough. Notably, this speedup is achieved by ``one shot averaging,'' a procedure where the agents run TD($\lambda$) with Markov sampling independently and only average their results after the final step. This significantly reduces the amount of communication required to achieve a linear speedup relative to previous work.
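A minimal sketch of the one-shot averaging protocol described above, assuming a toy tabular setting with a synthetic three-state Markov chain; the chain, step size, and other constants are illustrative choices and not the paper's setup or analysis.

```python
import numpy as np

def td_lambda(P, r, gamma, lam, alpha, T, rng):
    """Tabular TD(lambda) with accumulating eligibility traces on one sampled trajectory."""
    S = len(r)
    V = np.zeros(S)
    e = np.zeros(S)
    s = rng.integers(S)
    for _ in range(T):
        s_next = rng.choice(S, p=P[s])
        delta = r[s] + gamma * V[s_next] - V[s]   # TD error
        e *= gamma * lam                          # decay traces
        e[s] += 1.0                               # accumulate trace for current state
        V += alpha * delta * e
        s = s_next
    return V

def one_shot_averaging(P, r, gamma, lam, alpha, T, num_agents, seed=0):
    """Each agent samples its own trajectory independently; estimates are averaged once at the end."""
    rngs = [np.random.default_rng(seed + i) for i in range(num_agents)]
    estimates = [td_lambda(P, r, gamma, lam, alpha, T, rng) for rng in rngs]
    return np.mean(estimates, axis=0)

# Toy 3-state Markov chain (illustrative numbers only).
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])
r = np.array([1.0, 0.0, -1.0])
print(one_shot_averaging(P, r, gamma=0.9, lam=0.8, alpha=0.05, T=5000, num_agents=4))
```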
The importance of symmetries has recently been recognized in quantum machine learning from the simple motto: if a task exhibits a symmetry (given by a group $\mathfrak{G}$), the learning model should respect said symmetry. This can be instantiated via $\mathfrak{G}$-equivariant Quantum Neural Networks (QNNs), i.e., parametrized quantum circuits whose gates are generated by operators commuting with a given representation of $\mathfrak{G}$. In practice, however, there might be additional restrictions on the types of gates one can use, such as being able to act on at most $k$ qubits. In this work we study how the interplay between symmetry and $k$-bodyness of the QNN generators affects its expressiveness, for the special case of $\mathfrak{G}=S_n$, the symmetric group. Our results show that if the QNN is generated by one- and two-body $S_n$-equivariant gates, the QNN is semi-universal but not universal. That is, the QNN can generate an arbitrary special unitary matrix in each invariant subspace, but has no control over the relative phases between them. We then show that in order to reach universality one needs to include $n$-body generators (if $n$ is even) or $(n-1)$-body generators (if $n$ is odd). As such, our results bring us a step closer to better understanding the capabilities and limitations of equivariant QNNs.
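As a concrete (and purely illustrative) instance of the equivariance condition, the following sketch numerically verifies that a two-body generator built from pairwise SWAPs commutes with the qubit-permutation representation of $S_3$; this is a standard example of an $S_n$-equivariant generator, not the paper's circuit.

```python
import numpy as np
from itertools import permutations, combinations

n = 3  # qubits
dim = 2 ** n

def permutation_operator(perm):
    """Unitary that permutes the qubit tensor factors according to `perm`."""
    U = np.zeros((dim, dim))
    for idx in range(dim):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        new_bits = [bits[perm[q]] for q in range(n)]
        new_idx = sum(b << (n - 1 - q) for q, b in enumerate(new_bits))
        U[new_idx, idx] = 1.0
    return U

def swap_operator(i, j):
    """Two-body SWAP acting on qubits i and j (identity elsewhere)."""
    perm = list(range(n))
    perm[i], perm[j] = perm[j], perm[i]
    return permutation_operator(perm)

# A two-body S_n-equivariant generator: the sum of all pairwise SWAPs.
H = sum(swap_operator(i, j) for i, j in combinations(range(n), 2))

# Equivariance check: H commutes with every qubit-permutation operator.
print(all(np.allclose(permutation_operator(p) @ H - H @ permutation_operator(p), 0)
          for p in permutations(range(n))))
```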
Given a positive integer $d$, d-CUT is the problem of deciding whether an undirected graph $G=(V,E)$ has a cut $(A,B)$ such that every vertex in $A$ (resp. $B$) has at most $d$ neighbors in $B$ (resp. $A$). For $d=1$, the problem is referred to as MATCHING CUT. Gomes and Sau, in IPEC 2019, gave the first fixed-parameter tractable algorithm for d-CUT parameterized by the maximum number of crossing edges in the cut (i.e., the size of the edge cut). However, their paper does not provide an explicit bound on the running time, as it relies indirectly on an MSOL formulation and Courcelle's theorem. Motivated by this, we design and present an FPT algorithm for d-CUT on general graphs with running time $2^{O(k\log k)}n^{O(1)}$, where $k$ is the maximum size of the edge cut. This is the first FPT algorithm for d-CUT and MATCHING CUT with an explicit dependence on this parameter. We also observe that there is no algorithm solving MATCHING CUT in time $2^{o(k)}n^{O(1)}$, where $k$ is the maximum size of the edge cut, unless the ETH fails.
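To make the definition concrete, here is a small brute-force verifier (illustrative only, exponential time, and unrelated to the FPT algorithm above) that decides whether a graph admits a d-cut by enumerating bipartitions; the 4-cycle example is a hypothetical test case.

```python
from itertools import product

def is_d_cut(adj, A, d):
    """Check that every vertex has at most d neighbors across the cut (A, V \\ A)."""
    for v, neighbors in adj.items():
        across = sum((u in A) != (v in A) for u in neighbors)
        if across > d:
            return False
    return True

def has_d_cut(adj, d):
    """Brute force over all nontrivial bipartitions; exponential, for illustration only."""
    vertices = list(adj)
    for assignment in product([0, 1], repeat=len(vertices)):
        A = {v for v, side in zip(vertices, assignment) if side == 0}
        if 0 < len(A) < len(vertices) and is_d_cut(adj, A, d):
            return True
    return False

# 4-cycle: has a matching cut (d = 1), e.g. A = {0, 1}, B = {2, 3}.
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(has_d_cut(C4, d=1))
```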
Photorealistic 3D reconstruction of street scenes is a critical technique for developing real-world simulators for autonomous driving. Despite the efficacy of Neural Radiance Fields (NeRF) for driving scenes, 3D Gaussian Splatting (3DGS) has emerged as a promising direction due to its faster speed and more explicit representation. However, most existing street 3DGS methods require tracked 3D vehicle bounding boxes to decompose the static and dynamic elements for effective reconstruction, limiting their applicability to in-the-wild scenarios. To facilitate efficient 3D scene reconstruction without costly annotations, we propose a self-supervised street Gaussian ($\textit{S}^3$Gaussian) method to decompose dynamic and static elements from 4D consistency. We represent each scene with 3D Gaussians to preserve the explicitness and further accompany them with a spatial-temporal field network to compactly model the 4D dynamics. We conduct extensive experiments on the challenging Waymo-Open dataset to evaluate the effectiveness of our method. Our $\textit{S}^3$Gaussian demonstrates the ability to decompose static and dynamic scenes and achieves the best performance without using 3D annotations. Code is available at: //github.com/nnanhuang/S3Gaussian/.
We incorporate strong negation into the theory of computable functionals TCF, a common extension of Plotkin's PCF and G\"{o}del's system $\mathbf{T}$, by simultaneously defining the strong negation $A^{\mathbf{N}}$ of a formula $A$ and the strong negation $P^{\mathbf{N}}$ of a predicate $P$ in TCF. As a special case of the latter, we obtain the strong negation of inductive and coinductive predicates of TCF. We prove appropriate versions of Ex falso quodlibet and of double negation elimination for strong negation in TCF. We introduce the so-called tight formulas of TCF, i.e., formulas implied by the weak negation of their strong negation, as well as the relative tight formulas. We present various case studies and examples, which reveal the naturality of our definition of strong negation in TCF and justify the use of TCF as a formal system for a large part of Bishop-style constructive mathematics.
Follow-the-Regularized-Leader (FTRL) is a powerful framework for various online learning problems. By designing its regularizer and learning rate to be adaptive to past observations, FTRL is known to work adaptively to various properties of an underlying environment. However, most existing adaptive learning rates are for online learning problems with a minimax regret of $\Theta(\sqrt{T})$ in the number of rounds $T$, and there are only a few studies on adaptive learning rates for problems with a minimax regret of $\Theta(T^{2/3})$, which include several important problems dealing with indirect feedback. To address this limitation, we establish a new adaptive learning rate framework for problems with a minimax regret of $\Theta(T^{2/3})$. Our learning rate is designed by matching the stability, penalty, and bias terms that naturally appear in regret upper bounds for problems with a minimax regret of $\Theta(T^{2/3})$. As applications of this framework, we consider two major problems dealing with indirect feedback: partial monitoring and graph bandits. We show that FTRL with our learning rate and the Tsallis entropy regularizer improves existing Best-of-Both-Worlds (BOBW) regret upper bounds, which achieve simultaneous optimality in the stochastic and adversarial regimes. The resulting learning rate is surprisingly simple compared to the existing learning rates for BOBW algorithms for problems with a minimax regret of $\Theta(T^{2/3})$.
BCH codes are an important class of cyclic codes and have wide applications in communication and storage systems. In this paper, we study negacyclic and cyclic BCH codes of length $n=\frac{q^m-1}{2}$. For negacyclic BCH codes, we give the dimensions of $\mathcal C_{(n,-1,\delta,0)}$ for $\widetilde{\delta} = a\frac{q^m-1}{q-1},\ aq^{m-1}-1$ ($1\leq a <\frac{q-1}{2}$) and for $\widetilde{\delta} = a\frac{q^m-1}{q-1}+b\frac{q^m-1}{q^2-1},\ aq^{m-1}+(a+b)q^{m-2}-1$ ($2\mid m$, $1\leq a+b \leq q-1$, $\left\lceil \frac{q-a-2}{2}\right\rceil\geq 1$). The dimensions of negacyclic BCH codes $\mathcal C_{(n,-1,\delta,0)}$ with few nonzeros and of $\mathcal C_{(n,-1,\delta,b)}$ with $b\neq 1$ are also settled. For cyclic BCH codes, we give the weight distributions of the extended codes $\overline{\mathcal C}_{(n,1,\delta,1)}$ for $\delta=\delta_1,\delta_2$ and the parameters of the dual code $\mathcal C^{\perp}_{(n,1,\delta,1)}$ for $\delta_2\leq \delta \leq \delta_1$.
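For a quick sanity check of what "giving the dimension" of a BCH code amounts to, the following sketch computes the dimension of a cyclic BCH code from $q$-cyclotomic cosets, under the usual convention that the defining set is $\bigcup_{i=b}^{b+\delta-2} C_i$; the negacyclic case (odd cosets modulo $2n$) is omitted, and the parameters chosen below are illustrative, not the ones analyzed in the paper.

```python
def cyclotomic_coset(i, q, n):
    """q-cyclotomic coset of i modulo n."""
    coset, x = set(), i % n
    while x not in coset:
        coset.add(x)
        x = (x * q) % n
    return coset

def bch_dimension(q, n, delta, b=1):
    """Dimension of the cyclic BCH code of length n over F_q with designed distance delta."""
    defining_set = set()
    for i in range(b, b + delta - 1):          # consecutive exponents b, ..., b + delta - 2
        defining_set |= cyclotomic_coset(i, q, n)
    return n - len(defining_set)

# Example: q = 3, m = 4, so n = (q^m - 1) / 2 = 40 (the length considered above).
print(bch_dimension(q=3, n=40, delta=4, b=1))
```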
We propose an approximate Thompson sampling algorithm that learns linear quadratic regulators (LQR) with an improved Bayesian regret bound of $O(\sqrt{T})$. Our method leverages Langevin dynamics with a meticulously designed preconditioner as well as a simple excitation mechanism. We show that the excitation signal induces the minimum eigenvalue of the preconditioner to grow over time, thereby accelerating the approximate posterior sampling process. Moreover, we identify nontrivial concentration properties of the approximate posteriors generated by our algorithm. These properties enable us to bound the moments of the system state and attain an $O(\sqrt{T})$ regret bound without the unrealistically restrictive assumptions on parameter sets that are often used in the literature.
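As background for the sampling step, here is a generic preconditioned (unadjusted) Langevin sampler targeting a toy Gaussian posterior; the fixed diagonal preconditioner `P`, the target, and all constants are stand-ins for illustration and do not reproduce the paper's carefully designed preconditioner, excitation mechanism, or LQR setting.

```python
import numpy as np

def preconditioned_langevin_sample(grad_log_post, precond, theta0, step, n_steps, rng):
    """
    Unadjusted Langevin iteration with a fixed preconditioner P:
        theta <- theta + step * P grad_log_post(theta) + sqrt(2 * step) * L xi,   L L^T = P.
    """
    L = np.linalg.cholesky(precond)
    theta = theta0.copy()
    for _ in range(n_steps):
        xi = rng.standard_normal(theta.shape)
        theta = theta + step * precond @ grad_log_post(theta) + np.sqrt(2 * step) * L @ xi
    return theta

# Toy Gaussian posterior N(mu, Sigma) for a 2-d parameter (illustrative only).
mu = np.array([1.0, -0.5])
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
Sigma_inv = np.linalg.inv(Sigma)
grad_log_post = lambda th: -Sigma_inv @ (th - mu)

rng = np.random.default_rng(0)
P = np.diag([0.5, 0.5])  # stand-in preconditioner; the actual design is problem-specific
samples = np.array([preconditioned_langevin_sample(grad_log_post, P, np.zeros(2),
                                                   step=0.01, n_steps=2000, rng=rng)
                    for _ in range(200)])
print(samples.mean(axis=0))  # should be close to mu up to discretization and Monte Carlo error
```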