We prove that the number of partitions of the hypercube ${\bf Z}_q^n$ into $q^m$ subcubes of dimension $n-m$ each, for fixed $q$, $m$ and growing $n$, is asymptotically equal to $n^{(q^m-1)/(q-1)}$. For the proof, we introduce the operation of the bang of a star matrix and demonstrate that any star matrix, except for a fractal, is expandable under some bang, whereas a fractal remains a fractal under any bang.
Let $P$ be a $k$-colored set of $n$ points in the plane, $4 \leq k \leq n$. We study the problem of deciding if $P$ contains a subset of four points of different colors such that its Rectilinear Convex Hull has positive area. We show this problem to be equivalent to deciding if there exists a point $c$ in the plane such that each of the open quadrants defined by $c$ contains a point of $P$, each of them having a different color. We provide an $O(n \log n)$-time algorithm for this problem, where the hidden constant does not depend on $k$; then, we prove that this problem has time complexity $\Omega(n \log n)$ in the algebraic computation tree model. No general position assumptions for $P$ are required.
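The equivalent formulation above admits a straightforward brute-force check, useful as a correctness baseline for the $O(n \log n)$ algorithm. The sketch below is an illustrative naive $O(n^3)$-ish verifier, not the paper's algorithm: candidate centers $c$ only need to be tried between consecutive distinct coordinates, since quadrant membership depends only on the coordinate ordering.

```python
from itertools import permutations

def quadrant_witness(points, c):
    """points: list of ((x, y), color); c = (cx, cy).
    True iff each open quadrant of c contains a point of P and the four
    witness points can be chosen with pairwise different colors."""
    cx, cy = c
    quads = [set(), set(), set(), set()]  # colors present in each quadrant
    for (x, y), col in points:
        if x == cx or y == cy:
            continue  # open quadrants exclude the axes through c
        q = (0 if x > cx else 1) if y > cy else (2 if x < cx else 3)
        quads[q].add(col)
    # Need one distinct color per quadrant; with only 4 quadrants,
    # trying all ordered color choices is cheap.
    colors = set().union(*quads)
    return any(all(assign[i] in quads[i] for i in range(4))
               for assign in permutations(colors, 4))

def exists_center(points):
    """Brute-force existence check over midpoints of consecutive
    distinct x- and y-coordinates."""
    xs = sorted({p[0][0] for p in points})
    ys = sorted({p[0][1] for p in points})
    return any(quadrant_witness(points, (cx, cy))
               for cx in ((a + b) / 2 for a, b in zip(xs, xs[1:]))
               for cy in ((a + b) / 2 for a, b in zip(ys, ys[1:])))
```

For instance, four points of four colors placed at the corners of a unit square admit the center of the square as a witness, while the same configuration with only three colors does not.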
Let $p$ be an odd prime. In this paper, we determine the Hamming distances of constacyclic codes of length $2p^s$ over the finite commutative non-chain ring $\mathcal{R}=\frac{\mathbb{F}_{p^m}[u, v]}{\langle u^2, v^2, uv-vu\rangle}$. Their symbol-pair distances are also completely determined.
In a Jacobi--Davidson (JD) type method for singular value decomposition (SVD) problems, called JDSVD, a large symmetric and generally indefinite correction equation is solved iteratively at each outer iteration; these inner iterations dominate the overall efficiency of JDSVD. In this paper, a convergence analysis is made of the minimal residual (MINRES) method for the correction equation. Motivated by the results obtained, a new correction equation is derived at each outer iteration that extracts useful information from the current subspaces to construct effective preconditioners and is proven to retain the same outer convergence as JDSVD. The resulting method, called the inner-preconditioned JDSVD (IPJDSVD) method, is itself a new JDSVD method, and any viable preconditioner for the correction equations in JDSVD applies straightforwardly to those in IPJDSVD. The convergence results show that MINRES for the new correction equation can converge much faster when there is a cluster of singular values closest to a given target. A new thick-restart IPJDSVD algorithm with deflation and purgation is proposed that simultaneously accelerates the outer and inner convergence of the standard thick-restart JDSVD and computes several singular triplets. Numerical experiments justify the theory, illustrate the considerable superiority of IPJDSVD over JDSVD, and demonstrate that a similar two-stage IPJDSVD algorithm substantially outperforms the state-of-the-art PRIMME\_SVDS software for computing the smallest singular triplets.
For a prime $p$, a restricted arithmetic progression in $\mathbb{F}_p^n$ is a triple of vectors $x, x+a, x+2a$ in which the common difference $a$ is a non-zero element of $\{0,1,2\}^n$. What is the size of the largest $A\subseteq \mathbb{F}_p^n$ that is free of restricted arithmetic progressions? We show that the density of any such set is at most $\frac{C}{(\log\log\log n)^c}$, where $c,C>0$ depend only on $p$, giving the first reasonable bounds for the density of such sets. Previously, the best known bound was $O(1/\log^{*} n)$, which follows from the density Hales--Jewett theorem.
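The definition of a restricted progression is concrete enough to check by exhaustive search in tiny cases. The sketch below is an illustrative checker (not from the paper) that tests whether a given set $A \subseteq \mathbb{F}_p^n$ contains a restricted arithmetic progression, with the common difference ranging over non-zero vectors with entries in $\{0,1,2\}$.

```python
from itertools import product

def has_restricted_ap(A, p, n):
    """A: set of length-n tuples over F_p. Return True iff A contains
    x, x+a, x+2a with a a non-zero vector with entries in {0, 1, 2}."""
    A = set(A)
    for x in A:
        for a in product(range(3), repeat=n):  # entries restricted to {0,1,2}
            if all(ai == 0 for ai in a):
                continue  # common difference must be non-zero
            y = tuple((xi + ai) % p for xi, ai in zip(x, a))
            z = tuple((xi + 2 * ai) % p for xi, ai in zip(x, a))
            if y in A and z in A:
                return True
    return False
```

For example, in $\mathbb{F}_5^1$ the set $\{0,1,2\}$ contains the progression $0,1,2$ with $a=1$, while $\{0,1\}$ is progression-free.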
We address the regression problem for a general function $f:[-1,1]^d\to \mathbb R$ when the learner selects the training points $\{x_i\}_{i=1}^n$ to achieve a uniform error bound across the entire domain. In this setting, known historically as nonparametric regression, we aim to establish a sample complexity bound that depends solely on the function's degree of smoothness. Assuming periodicity at the domain boundaries, we introduce PADUA, an algorithm that, with high probability, provides performance guarantees optimal up to constant or logarithmic factors across all problem parameters. Notably, PADUA is the first parametric algorithm with optimal sample complexity for this setting. Thanks to this feature, we prove that, unlike the non-parametric state of the art, PADUA enjoys optimal space complexity in the prediction phase. To validate these results, we perform numerical experiments on functions derived from real audio data, where PADUA shows performance comparable to state-of-the-art methods while requiring only a fraction of the computational time.
We characterise the behaviour of the maximum Diaconis-Ylvisaker prior penalized likelihood estimator in high-dimensional logistic regression, where the number of covariates is a fraction $\kappa \in (0,1)$ of the number of observations $n$, as $n \to \infty$. We derive the estimator's aggregate asymptotic behaviour under this proportional asymptotic regime, when covariates are independent normal random variables with mean zero and the linear predictor has asymptotic variance $\gamma^2$. From this foundation, we devise adjusted $Z$-statistics, penalized likelihood ratio statistics, and aggregate asymptotic results with arbitrary covariate covariance. While the maximum likelihood estimate asymptotically exists only for a narrow range of $(\kappa, \gamma)$ values, the maximum Diaconis-Ylvisaker prior penalized likelihood estimate not only always exists but is also directly computable using maximum likelihood routines. Thus, our asymptotic results also hold for $(\kappa, \gamma)$ values where results for maximum likelihood are not attainable, with no overhead in implementation or computation. We study the estimator's shrinkage properties, compare it to alternative estimation methods that operate under proportional asymptotics, and present procedures for the estimation of unknown constants that describe the asymptotic behaviour of our estimator. We also provide a conjecture about the behaviour of our estimator when an intercept parameter is present in the model. We present results from extensive numerical studies to demonstrate the theoretical advances and strong evidence to support the conjecture, and illustrate the methodology we put forward through the analysis of a real-world data set on digit recognition.
In this paper, based on the theory of defining sets, two classes of linear codes with at most six weights are constructed over $\mathbb{F}_p$. The weight distributions of the linear codes are determined by means of Gaussian periods and Weil sums. In some cases, we obtain a code that is almost optimal with respect to the Griesmer bound and optimal according to the online code tables. The linear codes can also be employed to construct secret sharing schemes.
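Optimality claims of the kind above are typically checked against the Griesmer bound, which states that a linear $[n,k,d]_q$ code must satisfy $n \geq \sum_{i=0}^{k-1} \lceil d/q^i \rceil$. The sketch below is a generic helper for this bound, not code from the paper:

```python
from math import ceil

def griesmer_bound(k, d, q):
    """Smallest length n permitted by the Griesmer bound for a linear
    [n, k, d] code over F_q: n >= sum_{i=0}^{k-1} ceil(d / q^i)."""
    return sum(ceil(d / q**i) for i in range(k))
```

A code is optimal with respect to the bound when its length meets it with equality; for example, the binary Hamming code with parameters $[7,4,3]_2$ attains the bound, since $3 + 2 + 1 + 1 = 7$.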
We incorporate strong negation in the theory of computable functionals TCF, a common extension of Plotkin's PCF and G\"{o}del's system $\mathbf{T}$, by defining simultaneously the strong negation $A^{\mathbf{N}}$ of a formula $A$ and the strong negation $P^{\mathbf{N}}$ of a predicate $P$ in TCF. As a special case of the latter, we get strong negation of an inductive and a coinductive predicate of TCF. We prove appropriate versions of Ex falso quodlibet and of double negation elimination for strong negation in TCF. We introduce the so-called tight formulas of TCF, i.e., formulas implied by the weak negation of their strong negation, and the relative tight formulas. We present various case studies and examples, which reveal the naturalness of our definition of strong negation in TCF and justify the use of TCF as a formal system for a large part of Bishop-style constructive mathematics.
We introduce and study a purely syntactic notion of lax cones and $(\infty,\infty)$-limits on finite computads in \texttt{CaTT}, a type theory for $(\infty,\infty)$-categories due to Finster and Mimram. Conveniently, finite computads are precisely the contexts in \texttt{CaTT}. We define a cone over a context to be a context obtained by induction on the list of variables of the underlying context. In the case where the underlying context is globular, we give an explicit description of the cone and conjecture that an analogous description continues to hold for general contexts. We use the cone to control the types of the term constructors for the universal cone. The implementation of the universal property follows a similar line of ideas. Starting with a cone as a context, a set of context extension rules produces a context with the shape of a transfor between cones, i.e.~a higher morphism between cones. As in the case of cones, we use this context as a template to control the types of the term constructors required for the universal property.
In general $n$-dimensional simplicial meshes, we propose a family of interior penalty nonconforming finite element methods for $2m$-th order partial differential equations, where $m \geq 0$ and $n \geq 1$. For this family of nonconforming finite elements, the shape function space consists of polynomials of degree at most $m$, making it minimal. This family of finite element spaces exhibits natural inclusion properties, analogous to those of the corresponding Sobolev spaces in the continuous case. By applying interior penalty to the bilinear form, we establish quasi-optimal error estimates in the energy norm. Due to the weak continuity of the nonconforming finite element spaces, the interior penalty terms in the bilinear form take a simple form, and an interesting property is that the penalty parameter need only be a positive constant of $\mathcal{O}(1)$. These theoretical results are further validated by numerical tests.