Linear complementary dual (LCD) codes can be used to protect against side-channel attacks and fault injection attacks. Let $d_{a}(n,6)$ and $d_{l}(n,6)$ be the minimum weights of all binary optimal linear codes and LCD codes with length $n$ and dimension 6, respectively. In this article, we aim to obtain the values of $d_{l}(n,6)$ for $n\geq 51$ by investigating the nonexistence and constructions of LCD codes with given parameters. Suppose that $s \ge 0$ and $0\leq t\leq 62$ are two integers and $n=63s+t$. Using the theories of defining vectors, generalized anti-codes, reduced codes and nested codes, we exactly determine $d_{l}(n,6)$ for $t \notin\{21,22,25,26,33,34,37,38,45,46\}$, while we show that $d_{l}(n,6)\in\{d_{a}(n,6)-1,\,d_{a}(n,6)\}$ for $t\in\{21,22,26,34,37,38,46\}$ and $d_{l}(n,6)\in\{d_{a}(n,6)-2,\,d_{a}(n,6)-1\}$ for $t\in\{25,33,45\}$.
Stein operators allow one to characterise probability distributions via differential operators. Based on these characterisations, we develop a new method of point estimation for marginal parameters of strictly stationary and ergodic processes, which we call Stein's Method of Moments (SMOM). These SMOM estimators satisfy desirable classical properties such as consistency and asymptotic normality. As a consequence of the usually simple form of the operator, we obtain explicit estimators in cases where standard methods such as (pseudo-)maximum likelihood estimation require a numerical procedure to calculate the estimate. In addition, with our approach one can choose from a large class of test functions, which allows significant improvement over the moment estimator. Moreover, for i.i.d. observations, we obtain data-dependent test functions that result in asymptotically efficient estimators and give a sequence of explicit SMOM estimators that converge to the maximum likelihood estimator. Our simulation study demonstrates that, for a number of important univariate continuous probability distributions, our SMOM estimators possess excellent small-sample behaviour, often outperforming the maximum likelihood estimator and other widely used methods in terms of lower bias and mean squared error.
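For intuition, here is a minimal toy sketch (our illustration, not taken from the paper) of the idea for i.i.d. exponential data: the Stein identity $\mathbb{E}[f'(X)] = \lambda\,\mathbb{E}[f(X)]$ holds for $X\sim\mathrm{Exp}(\lambda)$ and smooth test functions with $f(0)=0$, so replacing expectations by sample means yields an explicit estimator for each choice of $f$:

```python
import numpy as np

def smom_exponential(x, f, df):
    """Stein-type moment estimator for the rate lam of Exp(lam).

    Uses the Stein identity E[f'(X)] = lam * E[f(X)], valid for smooth
    test functions f with f(0) = 0, so lam_hat = mean(f'(x)) / mean(f(x)).
    """
    return df(x).mean() / f(x).mean()

rng = np.random.default_rng(0)
x = rng.exponential(scale=1 / 2.5, size=200)  # true rate lam = 2.5

# f(t) = t recovers the classical estimator 1 / mean(x) (here the MLE);
# other test functions give different finite-sample behaviour.
print(smom_exponential(x, f=lambda t: t, df=lambda t: np.ones_like(t)))
print(smom_exponential(x, f=lambda t: t**2, df=lambda t: 2 * t))
```

Different test functions $f$ trade off bias and variance; the data-dependent choices of the paper are designed to attain asymptotic efficiency.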
We identify a large class of positive-semidefinite kernels for which a certain polynomial rate of convergence of maximum mean discrepancies of Farey sequences is equivalent to the Riemann hypothesis. This class includes all Mat\'ern kernels of order at least one-half.
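As a purely illustrative numerical companion (our own sketch, using the Matérn kernel of order one-half $k(x,y)=e^{-|x-y|/\ell}$, whose integrals against Lebesgue measure on $[0,1]$ have closed forms), one can compute the squared MMD between the Farey sequence $F_n$ and the uniform distribution:

```python
import numpy as np

def farey(n):
    """Farey sequence F_n via the standard next-neighbour recurrence."""
    a, b, c, d = 0, 1, 1, n
    terms = [0.0]
    while c <= n:
        k = (n + b) // d
        a, b, c, d = c, d, k * c - a, k * d - b
        terms.append(a / b)
    return np.array(terms)

def mmd2_farey_uniform(n, ell=1.0):
    """Squared MMD between the empirical measure of F_n and Uniform[0,1]
    under the Matern-1/2 (Laplace) kernel k(x,y) = exp(-|x-y|/ell)."""
    x = farey(n)
    kxx = np.exp(-np.abs(x[:, None] - x[None, :]) / ell).mean()
    # Closed forms: int_0^1 k(x,y) dy and int_0^1 int_0^1 k(x,y) dx dy.
    kxu = (ell * (2 - np.exp(-x / ell) - np.exp(-(1 - x) / ell))).mean()
    kuu = 2 * ell * (1 - ell * (1 - np.exp(-1 / ell)))
    return kxx - 2 * kxu + kuu

for n in (10, 30, 100):
    print(n, mmd2_farey_uniform(n))
```

The decay rate of these values as $n$ grows is exactly the quantity the equivalence with the Riemann hypothesis is about.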
Recently, constructions of optimal linear codes from simplicial complexes have attracted much attention, and several nice related works have been presented. Let $q$ be a prime power. In this paper, by using simplicial complexes of ${\mathbb F}_{q}^m$ with a single maximal element, we construct four families of linear codes over the ring ${\mathbb F}_{q}+u{\mathbb F}_{q}$ ($u^2=0$), which generalizes the results of [IEEE Trans. Inf. Theory 66(6):3657-3663, 2020]. The parameters and Lee weight distributions of these four families of codes are completely determined. Most notably, via the Gray map, we obtain several classes of optimal linear codes over ${\mathbb F}_{q}$, including (near) Griesmer codes and distance-optimal codes.
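For orientation, the $q=2$ case of the Gray map is the classical one on ${\mathbb F}_2+u{\mathbb F}_2$, $\phi(a+ub)=(b,\,a+b)$, under which the Lee weight of a ring element equals the Hamming weight of its Gray image (a standard convention; the paper's general-$q$ map may be normalized differently):

```python
import itertools

def gray(a, b):
    """Gray image of a + u*b in F2 + u*F2 (u^2 = 0): phi(a + u*b) = (b, a + b).
    The Lee weight of a + u*b is the Hamming weight of this image."""
    return (b, (a + b) % 2)

for a, b in itertools.product((0, 1), repeat=2):
    img = gray(a, b)
    print(f"{a} + u*{b} -> {img}, Lee weight {sum(img)}")
```

Applied coordinatewise, this turns a length-$n$ linear code over the ring into a length-$2n$ binary code whose Hamming weight distribution is the Lee weight distribution of the ring code.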
We prove optimal concentration of measure for lifted functions on high dimensional expanders (HDX). Let $X$ be a $k$-dimensional HDX. We show for any $i\leq k$ and $f:X(i)\to [0,1]$: \[\Pr_{s\in X(k)}\left[\left|\underset{t\subseteq s}{\mathbb{E}}[f(t)]-\mu\right|\geq\varepsilon\right]\leq \exp\left(-\varepsilon^2\frac{k}{i}\right).\] Using this fact, we prove that high dimensional expanders are reverse hypercontractive, a powerful functional inequality from discrete analysis implying that for any sets $A,B \subset X(k)$, the probability a $\rho$-correlated pair passes between them is at least \[\Pr_{s,s' \sim T_\rho}[s \in A, s' \in B] \geq \Pr[A]^{O(1)} \Pr[B]^{O(1)}.\] Our results hold under weak spectral assumptions on $X$. Namely, we prove exponential concentration of measure for any complex below the `Trickling-Down Threshold' (beyond which concentration may be arbitrarily poor), and optimal concentration for $\sqrt{k}$-skeletons of such complexes. We also show optimal bounds for the top dimension of stronger HDX, among other settings. We leverage our inequalities to prove several new agreement testing theorems on high dimensional expanders, including a new 99%-regime test for subsets, and a variant of the `Z-test' achieving inverse exponential soundness under the stronger assumption of $\ell_\infty$-expansion. The latter gives rise to the first optimal testers beyond the complete complex and products, a stepping stone toward the use of HDX in strong soundness PCPs. We also give applications within expansion, analysis, combinatorics, and coding theory, including a proof that two-sided HDX have optimal geometric overlap (giving the first explicit bounded-degree construction), near-optimal double samplers, new super-exponential degree lower bounds for certain HDX, distance-amplified list-decodable and locally testable codes, a Frankl-R\"odl Theorem, and more.
It is a notorious open question whether integer programs (IPs) with an integer coefficient matrix $M$ whose subdeterminants are all bounded by a constant $\Delta$ in absolute value can be solved in polynomial time. We answer this question in the affirmative if we further require that, by removing a constant number of rows and columns from $M$, one obtains a submatrix $A$ that is the transpose of a network matrix. Our approach focuses on the case where $A$ arises from $M$ after removing $k$ rows only, where $k$ is a constant. We achieve our result in two main steps, the first related to the theory of IPs and the second to graph minor theory. First, we derive a strong proximity result for the case where $A$ is a general totally unimodular matrix: given an optimal solution of the linear programming relaxation, an optimal solution to the IP can be obtained by finding a constant number of augmentations by circuits of $[A\; I]$. Second, for the case where $A$ is the transpose of a network matrix, we reformulate the problem as a maximum constrained integer potential problem on a graph $G$. We observe that if $G$ is $2$-connected, then it has no rooted $K_{2,t}$-minor for $t = \Omega(k \Delta)$. We leverage this to obtain a tree-decomposition of $G$ into highly structured graphs for which we can solve the problem locally. This allows us to solve the global problem via dynamic programming.
\v{C}ech persistence diagrams (PDs) are topological descriptors routinely used to capture the geometry of complex datasets. They are commonly compared using the Wasserstein distances $OT_{p}$; however, the extent to which PDs are stable with respect to these metrics remains poorly understood. We partially close this gap by focusing on the case where datasets are sampled on an $m$-dimensional submanifold of $\mathbb{R}^{d}$. Under this manifold hypothesis, we show that convergence with respect to the $OT_{p}$ metric happens exactly when $p > m$. We also provide improvements upon the bottleneck stability theorem in this case and prove new laws of large numbers for the total $\alpha$-persistence of PDs. Finally, we show how these theoretical findings shed new light on the behavior of the feature maps on the space of PDs that are used in ML-oriented applications of Topological Data Analysis.
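As a small practical sketch (our example; it assumes the gudhi library and uses the alpha complex, whose diagram agrees with the \v{C}ech one up to gudhi's squared-radius filtration convention), a total $\alpha$-persistence of the kind studied here can be computed as follows:

```python
import numpy as np
import gudhi

# Sample points near a circle embedded in R^2.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 300)
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((300, 2))

# Alpha complex persistence (filtration values are squared radii in gudhi).
st = gudhi.AlphaComplex(points=pts).create_simplex_tree()
st.compute_persistence()
dgm1 = st.persistence_intervals_in_dimension(1)  # degree-1 persistence diagram

# Total alpha-persistence: sum of (death - birth)^alpha over finite points.
alpha = 2.0
finite = dgm1[np.isfinite(dgm1[:, 1])]
print(np.sum((finite[:, 1] - finite[:, 0]) ** alpha))
```

The laws of large numbers in the paper describe how such sums behave as the sample size grows.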
The present article aims to design and analyze efficient first-order strong schemes for a generalized A\"{i}t-Sahalia type model arising in mathematical finance and evolving in the positive domain $(0, \infty)$, which possesses a diffusion term with superlinear growth and a highly nonlinear drift that blows up at the origin. Such a complicated structure unavoidably causes essential difficulties in the construction and convergence analysis of time discretizations. By incorporating implicitness in the term $\alpha_{-1} x^{-1}$ and a corrective mapping $\Phi_h$ in the recursion, we develop a novel class of explicit and unconditionally positivity-preserving (i.e., for any step size $h>0$) Milstein-type schemes for the underlying model. In both the non-critical and general critical cases, we introduce a new approach to analyze the mean-square error bounds of these schemes, without relying on a priori high-order moment bounds of the numerical approximations. The expected order-one mean-square convergence is attained for the proposed schemes. This theoretical guarantee can be used to justify the optimal complexity of the multilevel Monte Carlo method. Numerical experiments are finally provided to verify the theoretical findings.
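To illustrate why implicitness in the $\alpha_{-1}x^{-1}$ term enforces positivity, here is a deliberately simplified sketch (ours; an Euler-type step, not the authors' Milstein scheme with corrective mapping $\Phi_h$) for an A\"{i}t-Sahalia type SDE $dX_t = (\alpha_{-1}X_t^{-1} - \alpha_0 + \alpha_1 X_t - \alpha_2 X_t^r)\,dt + \sigma X_t^{\rho}\,dW_t$: treating only $\alpha_{-1}X_{n+1}^{-1}$ implicitly reduces each step to the positive root of a quadratic, so positivity holds for every step size $h>0$.

```python
import numpy as np

def semi_implicit_step(x, h, dW, a_m1, a0, a1, a2, r, rho, sigma):
    """One step of a drift-semi-implicit Euler scheme for
    dX = (a_m1/X - a0 + a1*X - a2*X**r) dt + sigma*X**rho dW,
    treating only the singular term a_m1/X implicitly:
        X_new = b + h * a_m1 / X_new,  b = explicit part of the step.
    Solving X_new^2 - b*X_new - h*a_m1 = 0 and taking the positive root
    keeps X_new > 0 for every h > 0, since a_m1 > 0.
    """
    b = x + h * (-a0 + a1 * x - a2 * x**r) + sigma * x**rho * dW
    return 0.5 * (b + np.sqrt(b * b + 4.0 * h * a_m1))

rng = np.random.default_rng(2)
h, x = 0.01, 1.0
for _ in range(1000):
    x = semi_implicit_step(x, h, np.sqrt(h) * rng.standard_normal(),
                           a_m1=1.0, a0=1.0, a1=1.0, a2=1.0, r=2.0, rho=1.5)
print(x)
```

The paper's schemes additionally incorporate the Milstein correction and the mapping $\Phi_h$ to reach order-one mean-square convergence.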
A common method for estimating the Hessian operator from random samples on a low-dimensional manifold involves locally fitting a quadratic polynomial. Although widely used, it has been unclear whether this estimator introduces bias, especially on complex manifolds with boundaries and under nonuniform sampling, and rigorous theoretical guarantees of its asymptotic behavior have been lacking. We show that, under mild conditions, this estimator asymptotically converges to the Hessian operator, with nonuniform sampling and curvature effects proving negligible, even near boundaries. Our analysis framework also simplifies the intensive computations required for a direct analysis.
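A hedged sketch of the basic recipe (our own simplification; the paper's precise estimator and normalization may differ): find neighbors, build tangent coordinates via local PCA, fit a quadratic in those coordinates by least squares, and read the Hessian off the second-order coefficients.

```python
import numpy as np

def local_hessian(X, f, i, k=30):
    """Estimate the Hessian of f at sample point X[i] on a 2-dim manifold
    by locally fitting a quadratic polynomial in tangent coordinates.
    X: (n, d) samples, f: (n,) function values. Returns a (2, 2) matrix.
    """
    m = 2  # assumed intrinsic dimension (illustration only)
    d2 = ((X - X[i]) ** 2).sum(axis=1)
    nbr = np.argsort(d2)[:k]                  # k nearest neighbours, incl. i
    Y = X[nbr] - X[nbr].mean(axis=0)
    # Local PCA: top-m right singular vectors span the tangent estimate.
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    U = (X[nbr] - X[i]) @ Vt[:m].T            # tangent coordinates
    u, v = U[:, 0], U[:, 1]
    # Design matrix of a full quadratic: 1, u, v, u^2, u*v, v^2.
    A = np.c_[np.ones(k), u, v, u * u, u * v, v * v]
    coef, *_ = np.linalg.lstsq(A, f[nbr], rcond=None)
    return np.array([[2 * coef[3], coef[4]],
                     [coef[4], 2 * coef[5]]])

# Example: height function on the unit sphere (hypothetical data).
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
print(local_hessian(X, X[:, 2], i=0, k=40))
```

The paper's contribution is showing that, asymptotically, this fit is unbiased despite curvature, nonuniform sampling, and boundary effects.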
The convex dimension of a $k$-uniform hypergraph is the smallest dimension $d$ for which there is an injective mapping of its vertices into $\mathbb{R}^d$ such that the set of $k$-barycenters of all hyperedges is in convex position. We completely determine the convex dimension of complete $k$-uniform hypergraphs, which settles an open question of Halman, Onn and Rothblum, who solved the problem for complete graphs. We also provide lower and upper bounds for the extremal problem of estimating the maximal number of hyperedges of $k$-uniform hypergraphs on $n$ vertices with convex dimension $d$. To prove these results, we restate them in terms of affine projections that preserve the vertices of the hypersimplex. More generally, we provide a full characterization of the projections that preserve its $i$-dimensional skeleton. In particular, we obtain a hypersimplicial generalization of the linear van Kampen-Flores theorem: for each $n$, $k$ and $i$, we determine onto which dimensions the $(n,k)$-hypersimplex can be linearly projected while preserving its $i$-skeleton. Our results have direct interpretations in terms of $k$-sets and $(i,j)$-partitions, and are closely related to the problem of finding large convexly independent subsets in Minkowski sums of $k$ point sets.
Graph Neural Networks (GNNs), especially message-passing-based models, have become prominent in top-k recommendation tasks, outperforming matrix factorization models due to their ability to efficiently aggregate information from a broader context. Although GNNs are evaluated with ranking-based metrics, e.g., NDCG@k and Recall@k, they are still largely trained with proxy losses such as the BPR loss. In this work, we explore the use of ranking loss functions to directly optimize the evaluation metrics, an area not extensively investigated in the GNN community for collaborative filtering. We take advantage of smooth approximations of the rank to facilitate end-to-end training of GNNs and propose a Personalized PageRank-based negative sampling strategy tailored for ranking loss functions. Moreover, we extend the evaluation of GNN models for top-k recommendation tasks with an inductive user-centric protocol, providing a more accurate reflection of real-world applications. Our proposed method significantly outperforms the standard BPR loss and more advanced losses across four datasets and four recent GNN architectures, while also exhibiting faster training, demonstrating the potential of ranking loss functions for improving GNN training in collaborative filtering tasks.
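To make the smooth-rank idea concrete, here is a hedged PyTorch sketch (a generic ApproxNDCG-style surrogate of our own, not necessarily the paper's exact loss): the hard rank $\mathrm{rank}(i)=1+\sum_{j\neq i}\mathbb{1}[s_j>s_i]$ is relaxed by sigmoids, making NDCG differentiable in the scores.

```python
import torch

def smooth_rank(scores, temperature=1.0):
    """Differentiable rank approximation: replace the indicator
    1[s_j > s_i] with sigmoid((s_j - s_i) / T). Returns ranks, shape (n,)."""
    diff = scores.unsqueeze(0) - scores.unsqueeze(1)  # diff[i, j] = s_j - s_i
    return 1.0 + torch.sigmoid(diff / temperature).sum(dim=1) - 0.5  # drop j = i

def approx_ndcg_loss(scores, relevance, temperature=1.0):
    """Smooth NDCG surrogate: plug smooth ranks into the DCG formula."""
    ranks = smooth_rank(scores, temperature)
    dcg = ((2.0 ** relevance - 1.0) / torch.log2(1.0 + ranks)).sum()
    # Ideal DCG uses the hard ranks of a perfect ordering.
    ideal, _ = torch.sort(relevance, descending=True)
    pos = torch.arange(1, len(relevance) + 1, dtype=scores.dtype)
    idcg = ((2.0 ** ideal - 1.0) / torch.log2(1.0 + pos)).sum()
    return -(dcg / idcg)  # maximizing NDCG = minimizing the negative

scores = torch.tensor([0.3, 1.2, -0.4], requires_grad=True)
rel = torch.tensor([1.0, 0.0, 2.0])
loss = approx_ndcg_loss(scores, rel)
loss.backward()
print(loss.item(), scores.grad)
```

In a recommender setting, `scores` would be the GNN's predicted scores for a user's candidate items (positives plus sampled negatives) and `relevance` the observed interactions, so gradients flow through the ranking metric into the GNN parameters.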