Galois self-orthogonal (SO) codes are generalizations of Euclidean and Hermitian SO codes. Algebraic geometry (AG) codes are the first known class of linear codes exceeding the Gilbert-Varshamov bound. Both have attracted much attention in recent years for their rich algebraic structures and wide applications. In this paper, we consider them together and study Galois SO AG codes. We present a criterion for an AG code to be Galois SO. Based on this criterion, we construct several new classes of maximum distance separable (MDS) Galois SO AG codes from projective lines and several new classes of Galois SO AG codes from projective elliptic curves, hyperelliptic curves, and Hermitian curves. In addition, we give an embedding method that allows us to obtain more MDS Galois SO codes from known MDS Galois SO AG codes.
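To make the central notion concrete, here is a minimal Python sketch of the $e$-Galois inner product $\langle x,y\rangle_e=\sum_i x_iy_i^{p^e}$ over the toy field $\mathbb{F}_4$; with $p=2$ and $e=1$ this is the Hermitian inner product. The field tables and example code below are illustrative only and are not the paper's SO criterion for AG codes.

```python
# Minimal sketch of the e-Galois inner product <x,y>_e = sum_i x_i * y_i^(p^e)
# over the toy field GF(4) = GF(2^2); with p = 2, e = 1 this is the Hermitian
# inner product. Illustrative only -- not the paper's criterion for AG codes.

# GF(4) = {0, 1, w, w^2} encoded as 0, 1, 2, 3; addition is XOR of bit vectors.
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],   # w*w = w^2, w*w^2 = w^3 = 1
       [0, 3, 1, 2]]   # w^2*w^2 = w^4 = w

def gf4_pow(a, n):
    r = 1
    for _ in range(n):
        r = MUL[r][a]
    return r

def galois_inner(x, y, p=2, e=1):
    s = 0
    for xi, yi in zip(x, y):
        s ^= MUL[xi][gf4_pow(yi, p**e)]
    return s

# The [2,1] code spanned by (1, w) over GF(4) is Hermitian (e = 1)
# self-orthogonal: <c,c>_1 = 1*1^2 + w*w^2 = 1 + 1 = 0 in characteristic 2.
c = (1, 2)
assert galois_inner(c, c) == 0
```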
We present a relational representation of odd Sugihara chains. The elements of the algebra are represented as weakening relations over a particular poset consisting of two densely embedded copies of the rationals. Our construction mimics that of Maddux (2010), where a relational representation of the even Sugihara chains is given. An order automorphism between the two copies of the rationals is the key to ensuring that the identity element of the monoid is fixed by the involution.
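As a small illustration of the key notion, the following Python sketch checks the weakening-relation property on a tiny finite chain, assuming the usual definition that $R$ is a weakening relation when ${\leq};R;{\leq}\subseteq R$; the poset and relations here are invented and far simpler than the two densely embedded copies of the rationals used in the paper.

```python
# Toy check of the weakening-relation property on the chain 0 <= 1 <= 2,
# assuming the usual definition: R is a weakening relation iff (<=);R;(<=) ⊆ R.
X = {0, 1, 2}
leq = {(a, b) for a in X for b in X if a <= b}

def compose(R, S):
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def is_weakening(R):
    return compose(compose(leq, R), leq) <= R

print(is_weakening({(a, b) for a in X for b in X if b >= 1}))  # True
print(is_weakening({(1, 1)}))                                  # False
```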
The recently emerging field of higher order MDS codes has sought to unify a number of concepts in coding theory. Areas captured by higher order MDS codes include maximally recoverable (MR) tensor codes, codes with optimal list-decoding guarantees, and codes with constrained generator matrices (as in the GM-MDS theorem). By proving these equivalences, Brakensiek-Gopi-Makam showed the existence of optimally list-decodable Reed-Solomon codes over exponentially sized fields. Building on this, recent breakthroughs by Guo-Zhang (GZ) and Alrabiah-Guruswami-Li (AGL) have shown that randomly punctured Reed-Solomon codes achieve list-decoding capacity (a relaxation of optimal list-decodability) over linearly sized fields. We extend these works by developing a formal theory of relaxed higher order MDS codes. In particular, we show that there are two inequivalent relaxations, which we call the lower and upper relaxations. The lower relaxation is equivalent to relaxed optimally list-decodable codes, and the upper relaxation is equivalent to relaxed MR tensor codes with a single parity check per column. We then generalize the techniques of GZ and AGL to show that both relaxations can be constructed over constant-sized fields by randomly puncturing suitable algebraic-geometric (AG) codes. For this, we crucially use the generalized GM-MDS theorem for polynomial codes recently proved by Brakensiek-Dhar-Gopi. We obtain the following corollaries from our main result. First, randomly punctured AG codes of rate $R$ achieve list-decoding capacity with list size $O(1/\epsilon)$ and field size $\exp(O(1/\epsilon^2))$. Prior to this work, AG codes were not even known to achieve list-decoding capacity. Second, by randomly puncturing AG codes, we can construct relaxed MR tensor codes with a single parity check per column over constant-sized fields, whereas (non-relaxed) MR tensor codes require exponential field size.
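For intuition, randomly puncturing a Reed-Solomon code simply means evaluating low-degree polynomials on a random subset of field elements; the sketch below (toy prime field, illustrative parameters) shows the basic operation that GZ, AGL, and this work apply to RS and AG codes.

```python
# Sketch: a randomly punctured Reed-Solomon code keeps a random subset of
# evaluation points. Toy prime field GF(p); parameters are illustrative.
import random

p, k, n = 101, 4, 20                      # field size, dimension, punctured length
points = random.sample(range(p), n)       # random evaluation set = the puncturing

def encode(msg, pts):                     # msg = polynomial coefficients, deg < k
    return [sum(m * pow(x, i, p) for i, m in enumerate(msg)) % p for x in pts]

codeword = encode([3, 1, 4, 1], points)   # evaluates 3 + x + 4x^2 + x^3
```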
The equioscillation theorem interleaves the Haar condition, the existence, uniqueness, and strong uniqueness of the optimal Chebyshev approximation, and its characterization by the equioscillation condition, in a way that cannot extend to multivariate approximation: Rice~[\emph{Transactions of the AMS}, 1963] says ``A form of alternation is still present for functions of several variables. However, there is apparently no simple method of distinguishing between the alternation of a best approximation and the alternation of other approximating functions. This is due to the fact that there is no natural ordering of the critical points.'' In addition, in the context of multivariate approximation the Haar condition is typically not satisfied, and strong uniqueness may or may not hold. The present paper proposes a multivariate equioscillation theorem, which includes such a simple alternation condition on error extrema, existence, and a sufficient condition for strong uniqueness. To this end, the relationships between the properties interleaved in the univariate equioscillation theorem are clarified. First, a weak Haar condition is proposed, which suffices to imply existence. Second, the equioscillation condition is shown to be equivalent to the optimality condition of convex optimization, hence characterizing optimality independently of uniqueness. It is reformulated as synchronized oscillations between the error extrema and the components of a related Haar matrix kernel vector, in a way that applies to multivariate approximation. Third, an additional requirement on the involved Haar matrix and its kernel vector, called strong equioscillation, is proved to be sufficient for the strong uniqueness of the solution. These three disconnected conditions give rise to a multivariate equioscillation theorem in which existence, characterization by equioscillation, and strong uniqueness are separated, without involving the too-restrictive Haar condition. Remarkably, relying on the optimality condition of convex optimization gives rise to a quite simple proof. Instances of multivariate problems with strongly unique, unique but not strongly unique, and non-unique solutions are presented to illustrate the scope of the theorem.
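The univariate phenomenon the theorem generalizes can be seen in a classic example: the best degree-$\le 2$ approximation to $x^3$ on $[-1,1]$ is $\frac{3}{4}x$, whose error $T_3(x)/4$ attains $\pm 1/4$ with alternating signs at $n+2=4$ points. A quick numerical check:

```python
# Classic equioscillation example: the best degree-<=2 approximation of x^3 on
# [-1,1] is (3/4)x; the error T_3(x)/4 = (4x^3 - 3x)/4 equioscillates.
import numpy as np

err = lambda x: x**3 - 0.75 * x
extrema = np.cos(np.arange(4) * np.pi / 3)  # 1, 1/2, -1/2, -1
print(err(extrema))                         # [ 0.25 -0.25  0.25 -0.25]
```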
Multiobjective evolutionary algorithms (MOEAs) are major methods for solving multiobjective optimization problems (MOPs). Many MOEAs have been proposed in the past decades, whose operators require careful handcrafted design with domain knowledge. Recently, attempts have been made to replace the manually designed operators in MOEAs with learning-based operators (e.g., neural network models). However, much effort is still required to design and train such models, and the learned operators may not generalize well to new problems. To tackle these challenges, this work investigates a novel approach that leverages the powerful large language model (LLM) to design MOEA operators. With proper prompt engineering, we successfully let a general LLM serve as a black-box search operator for a decomposition-based MOEA (MOEA/D) in a zero-shot manner. In addition, by learning from the LLM's behavior, we design an explicit white-box operator with randomness and propose a new version of decomposition-based MOEA, termed MOEA/D-LO. Experimental studies on different test benchmarks show that the proposed method achieves performance competitive with widely used MOEAs. It is also promising that the operator, learned from only a few instances, generalizes robustly to unseen problems with quite different patterns and settings. The results reveal the potential benefits of using pre-trained LLMs in the design of MOEAs.
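The flavor of such a white-box operator can be sketched as follows; the weights and noise scale below are invented placeholders, not the values distilled from the LLM in the paper.

```python
# Hedged sketch of an explicit "white-box operator with randomness": a child
# solution is a randomized linear combination of the best neighbors in a
# MOEA/D subproblem. Weights/noise are placeholders, not the learned values.
import numpy as np

def linear_operator(neighbors, fitness, weights=(1.5, -0.3, -0.2), sigma=0.01):
    order = np.argsort(fitness)                       # best (lowest) first
    elite = [neighbors[i] for i in order[: len(weights)]]
    child = sum(w * x for w, x in zip(weights, elite))
    return child + np.random.normal(0.0, sigma, size=child.shape)

pop = [np.random.rand(10) for _ in range(5)]
fit = [float(x.sum()) for x in pop]                   # stand-in scalarization
child = linear_operator(pop, fit)
```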
Determining the weight distribution of a code is an old and fundamental topic in coding theory that has been thoroughly studied. In 1977, Helleseth, Kl{\o}ve, and Mykkeltveit presented the weight enumerator polynomial of the lifted code over $\mathbb{F}_{q^\ell}$ of a $q$-ary linear code; this polynomial has significant combinatorial properties and determines the support weight distribution of the linear code. The Solomon-Stiffler codes are a famous family of Griesmer codes, proposed by Solomon and Stiffler in 1965. In this paper, we determine the weight enumerator polynomials of the lifted codes of the projective Solomon-Stiffler codes using combinatorial properties of subspaces. As a result, we determine the support weight distributions of the projective Solomon-Stiffler codes. In particular, we determine the weight hierarchies of the projective Solomon-Stiffler codes.
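For small parameters, a weight distribution can be verified by brute force; the sketch below enumerates a toy $[4,2]$ binary code (the Solomon-Stiffler construction itself is not reproduced here).

```python
# Brute-force weight distribution of a small q-ary linear code from its
# generator matrix; a toy [4,2] binary code, not a Solomon-Stiffler code.
from itertools import product
from collections import Counter

q = 2
G = [[1, 0, 1, 1],
     [0, 1, 1, 0]]

def weight_distribution(G, q):
    dist = Counter()
    for msg in product(range(q), repeat=len(G)):
        cw = [sum(m * g for m, g in zip(msg, col)) % q for col in zip(*G)]
        dist[sum(c != 0 for c in cw)] += 1
    return dict(sorted(dist.items()))

print(weight_distribution(G, q))   # {0: 1, 2: 1, 3: 2}
```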
The proximal Galerkin finite element method is a high-order, low-iteration-complexity, nonlinear numerical method that preserves the geometric and algebraic structure of pointwise bound constraints in infinite-dimensional function spaces. This paper introduces the proximal Galerkin method and applies it to solve free boundary problems, enforce discrete maximum principles, and develop a scalable, mesh-independent algorithm for optimal design problems with pointwise bound constraints. This paper also provides a derivation of the latent variable proximal point (LVPP) algorithm, an unconditionally stable alternative to the interior point method. LVPP is an infinite-dimensional optimization algorithm that may be viewed as having an adaptive barrier function that is updated with a new informative prior at each (outer loop) optimization iteration. One of its main benefits is witnessed when analyzing the classical obstacle problem: there, we find that the original variational inequality can be replaced by a sequence of partial differential equations (PDEs) that are readily discretized and solved with, e.g., high-order finite elements. Throughout this work, we arrive at several unexpected contributions that may be of independent interest. These include (1) a semilinear PDE we refer to as the entropic Poisson equation; (2) an algebraic/geometric connection between high-order positivity-preserving discretizations and certain infinite-dimensional Lie groups; and (3) a gradient-based, bound-preserving algorithm for two-field density-based topology optimization. The complete latent variable proximal Galerkin methodology combines ideas from nonlinear programming, functional analysis, tropical algebra, and differential geometry, and can potentially lead to new synergies among these areas as well as within variational and numerical analysis.
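To convey the latent-variable idea on the obstacle problem, here is a loose 1D finite-difference toy: the substitution $u=\varphi+e^\psi$ enforces $u>\varphi$ exactly, and an outer proximal loop solves a smooth subproblem at each step. The quadratic proximal term and generic BFGS inner solver are simplifications for illustration, not the paper's entropic/PDE formulation.

```python
# Loose 1D toy of a latent-variable proximal loop for the obstacle problem
#   min J(u) = 1/2 \int u'^2 - \int f u   s.t.  u >= phi,  u(0) = u(1) = 0.
# The substitution u = phi + exp(psi) keeps the bound exactly. The quadratic
# proximal term and BFGS inner solve are simplifications, not LVPP itself.
import numpy as np
from scipy.optimize import minimize

n, h = 19, 1.0 / 20
f = -10.0 * np.ones(n)                     # load pushing u into the obstacle
phi = -0.05 * np.ones(n)                   # constant obstacle

def J(u):                                  # finite-difference energy
    du = np.diff(np.concatenate(([0.0], u, [0.0]))) / h
    return 0.5 * h * np.sum(du**2) - h * np.sum(f * u)

psi = np.zeros(n)
for k in range(10):                        # proximal outer loop
    alpha = 2.0**k                         # growing proximal step size
    obj = lambda p: alpha * J(phi + np.exp(p)) + 0.5 * np.sum((p - psi) ** 2)
    psi = minimize(obj, psi, method="BFGS").x

u = phi + np.exp(psi)                      # u > phi by construction
print(u.min(), (u - phi).min())
```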
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to possess very appealing theoretical properties for non-convex optimization, computing the function value, gradient, and Hessian at each iterate to obtain the next search direction and adjust the associated parameters. Although stochastic approximations can greatly reduce the computational cost, it is challenging to guarantee the convergence rate theoretically. In this paper, we explore a family of stochastic TR and ARC methods that simultaneously allow inexact computation of the Hessian, gradient, and function values. Our algorithms require far less propagation overhead per iteration than TR and ARC. We prove that the iteration complexity to achieve $\epsilon$-approximate second-order optimality is of the same order as for the exact computations analyzed in previous studies. Additionally, the mild conditions on inexactness can be met by a random sampling technique in the finite-sum minimization setting. Numerical experiments on a non-convex problem support these findings and demonstrate that, with the same or a similar number of iterations, our algorithms require less computational overhead per iteration than current second-order methods.
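A minimal numpy sketch of the stochastic trust-region idea on a finite-sum least-squares problem: the gradient and Hessian are subsampled, and the radius adapts to the actual-versus-predicted reduction. Batch sizes and thresholds here are illustrative, not the conditions from the paper's analysis.

```python
# Stochastic TR sketch: subsampled gradient/Hessian, TR subproblem solved by
# bisection on the regularization shift, radius adapted via the ratio rho.
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(1000, 5)), rng.normal(size=1000)
loss = lambda w, I: 0.5 * np.mean((A[I] @ w - b[I]) ** 2)

def tr_subproblem(g, H, delta):
    # min g's + 1/2 s'Hs  s.t. ||s|| <= delta, via bisection on the shift lam
    lo, hi = 0.0, 1e6
    for _ in range(60):
        lam = 0.5 * (lo + hi)
        s = np.linalg.solve(H + lam * np.eye(len(g)), -g)
        lo, hi = (lam, hi) if np.linalg.norm(s) > delta else (lo, lam)
    return s

w, delta = np.zeros(5), 1.0
for _ in range(30):
    I = rng.choice(1000, size=100, replace=False)      # subsample
    g = A[I].T @ (A[I] @ w - b[I]) / len(I)            # inexact gradient
    H = A[I].T @ A[I] / len(I)                         # inexact Hessian
    s = tr_subproblem(g, H, delta)
    pred = -(g @ s + 0.5 * s @ H @ s)                  # predicted reduction
    rho = (loss(w, I) - loss(w + s, I)) / max(pred, 1e-12)
    w, delta = (w + s, 2.0 * delta) if rho > 0.1 else (w, 0.5 * delta)
```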
We provide a new characterization of both belief update and belief revision in terms of a Kripke-Lewis semantics. We consider frames consisting of a set of states, a Kripke belief relation, and a Lewis selection function. Adding a valuation to a frame yields a model. Given a model and a state, we identify the initial belief set K with the set of formulas that are believed at that state, and we identify either the updated belief set or the revised belief set, prompted by the input represented by formula A, with the set of formulas that are the consequents of conditionals that (1) are believed at that state and (2) have A as antecedent. We show that this class of models characterizes both the Katsuno-Mendelzon (KM) belief update functions and the AGM belief revision functions, in the following sense: (1) each model gives rise to a partial belief function that can be completed into a full KM/AGM update/revision function, and (2) for every KM/AGM update/revision function there is a model whose associated belief function coincides with it. The difference between update and revision can be reduced to two semantic properties that appear in a stronger form in revision relative to update, thus confirming the finding by Peppas et al. (1996) that "for a fixed theory K, revising K is much the same as updating K".
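A tiny finite model makes the construction concrete: with a belief relation B and a selection function f, K collects what holds at all doxastically accessible states, while the revised set collects the consequents of believed conditionals with antecedent A. The states, valuation, and selection function below are invented for illustration.

```python
# Tiny finite instance of the semantics: K = atoms true at all doxastically
# accessible states; K*A = atoms x such that the conditional A > x is believed,
# i.e. x holds at every state f selects for antecedent A. (Toy model.)
W = {0, 1, 2}
val = {0: {"p"}, 1: {"p", "q"}, 2: {"q"}}          # atoms true at each state
B = {0: {0, 1}}                                     # belief relation at state 0

def f(w, A):  # Lewis selection: closest A-states from w (toy metric |v - w|)
    A_states = {v for v in W if A in val[v]}
    return {min(A_states, key=lambda v: abs(v - w))}

atoms = {"p", "q"}
K = {x for x in atoms if all(x in val[w] for w in B[0])}
K_rev = {x for x in atoms
         if all(all(x in val[v] for v in f(w, "q")) for w in B[0])}
print(sorted(K), sorted(K_rev))   # ['p'] ['p', 'q']: revising by q keeps p
```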
Optimum distance flag codes (ODFCs), as special flag codes, have received considerable attention due to their application in random network coding. In 2021, Alonso-Gonz\'{a}lez et al. constructed optimal $(n,\mathcal{A})$-ODFCs for $\mathcal{A}\subseteq \{1,2,\ldots,k,n-k,\ldots,n-1\}$ with $k\in \mathcal A$ and $k\mid n$. In this paper, we introduce a new construction of $(n,\mathcal A)_q$-ODFCs from maximum rank-metric codes. We prove that there is an $(n,\mathcal{A})$-ODFC of size $\frac{q^n-q^{k+r}}{q^k-1}+1$ for any $\mathcal{A}\subseteq\{1,2,\ldots,k,n-k,\ldots,n-1\}$ with $\mathcal A\cap \{k,n-k\}\neq\emptyset$, where $r\equiv n\pmod k$ and $0\leq r<k$. Furthermore, when $k>\frac{q^r-1}{q-1}$, this $(n,\mathcal A)_q$-ODFC is optimal. In particular, when $r=0$, the result of Alonso-Gonz\'{a}lez et al. is recovered.
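As a quick sanity check of the size formula, with parameters chosen arbitrarily:

```python
# Arithmetic check of the abstract's size formula with arbitrary parameters:
# size = (q^n - q^(k+r)) / (q^k - 1) + 1,  r = n mod k,
# with optimality claimed whenever k > (q^r - 1)/(q - 1).
q, n, k = 2, 10, 4
r = n % k                                        # r = 2
size = (q**n - q**(k + r)) // (q**k - 1) + 1     # (1024 - 64)/15 + 1 = 65
print(size, k > (q**r - 1) // (q - 1))           # 65 True -> the optimal case
```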
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
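The comparison step the theory formalizes can be sketched in a few lines; the saliency vectors and distance below are placeholders for the study's stimuli and fitted similarity space.

```python
# Minimal sketch of the posited comparison: similarity between the AI's
# explanation and the one a person would give, via Shepard's law s = exp(-d).
# Saliency vectors and the distance are placeholders, not the study's stimuli.
import numpy as np

def shepard_similarity(ai_saliency, own_saliency):
    d = np.linalg.norm(ai_saliency - own_saliency)  # distance in similarity space
    return np.exp(-d)                               # Shepard's universal law

own = np.array([0.8, 0.1, 0.1])    # where the explainee would attend
ai = np.array([0.7, 0.2, 0.1])     # the AI's (flattened) saliency map
p_endorse = shepard_similarity(ai, own)   # predicted agreement with the AI
```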