Additive codes may have better parameters than linear codes. However, it remains a challenging problem to efficiently construct additive codes that outperform linear codes, especially ones with larger distances than linear codes of the same lengths and dimensions. This paper focuses on constructing additive codes that outperform linear codes, based on quasi-cyclic codes and combinatorial methods. Firstly, we propose a lower bound on the symplectic distance of 1-generator quasi-cyclic codes of even index. Secondly, we obtain many binary quasi-cyclic codes with large symplectic distances using computer-supported combination and search methods, all of which correspond to good quaternary additive codes. Notably, some of these additive codes have larger distances than the best-known quaternary linear codes of the same lengths and dimensions in Grassl's code table (bounds on the minimum distance of quaternary linear codes, www.codetables.de). Moreover, employing a combinatorial approach, we partially determine the parameters of optimal quaternary additive 3.5-dimensional codes with lengths from $28$ to $254$. Finally, as an extension, we also construct some good additive complementary dual codes with larger distances than the best-known quaternary linear complementary dual codes in the literature.
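For concreteness, the symplectic weight underlying the distance bound is the standard one: for a binary vector $(a \mid b) \in \mathbb{F}_2^{2n}$,
\[
\mathrm{wt}_s(a \mid b) \;=\; \#\{\, i : (a_i, b_i) \neq (0,0) \,\}, \qquad
d_s(C) \;=\; \min\{\, \mathrm{wt}_s(c) : c \in C,\ c \neq 0 \,\},
\]
and, under the usual identification of $\mathbb{F}_2^{2n}$ with $\mathbb{F}_4^{\,n}$, a binary (quasi-cyclic) code of length $2n$ with symplectic distance $d_s$ corresponds to a quaternary additive code of length $n$ with Hamming distance $d_s$.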
We propose a matrix-free parallel two-level deflation preconditioner combined with the complex shifted Laplacian preconditioner (CSLP) for two-dimensional Helmholtz problems. The Helmholtz equation is widely studied in seismic exploration, antenna design, and medical imaging. It is one of the hardest problems to solve, both in terms of accuracy and convergence, due to the scalability issues of the numerical solvers. Motivated by the observation that for large wavenumbers the eigenvalues of the CSLP-preconditioned system shift towards zero, deflation with multigrid vectors, and subsequently with higher-order vectors, has been incorporated to obtain wavenumber-independent convergence. For large-scale applications, high-performance, parallel, and scalable methods are also indispensable. In our method, we use preconditioned Krylov subspace methods to solve the linear system obtained from finite-difference discretization. The CSLP preconditioner is approximated by one parallel geometric multigrid V-cycle. For the two-level deflation, both matrix-free Galerkin coarsening and higher-order re-discretization on the coarse grid are studied. The matrix-vector multiplications in the Krylov subspace methods and the interpolation/restriction operators are implemented directly on the finite-difference grids without constructing any coefficient matrix. These adjustments lead to direct improvements in memory consumption. Numerical experiments on model problems show that wavenumber-independent convergence is obtained for medium wavenumbers. The matrix-free parallel framework shows satisfactory weak and strong parallel scalability.
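As a minimal illustration of the matrix-free idea (not the authors' parallel implementation; the function name and the homogeneous Dirichlet model problem are our assumptions), the second-order 5-point finite-difference Helmholtz operator can be applied directly on the grid, so a Krylov method only ever needs this stencil action rather than an assembled matrix:
\begin{verbatim}
import numpy as np

def helmholtz_matvec(u, k, h):
    # Apply -Laplace(u) - k^2 u with the standard 5-point stencil on a
    # uniform grid of interior unknowns; homogeneous Dirichlet values are
    # assumed outside the array. No coefficient matrix is ever assembled.
    up = np.pad(u, 1, mode="constant")
    v = (4.0 * up[1:-1, 1:-1]
         - up[:-2, 1:-1] - up[2:, 1:-1]
         - up[1:-1, :-2] - up[1:-1, 2:]) / h**2
    return v - k**2 * u
\end{verbatim}
A CSLP or deflation step would wrap further stencil-based operators (complex-shifted matvec, interpolation, restriction) around the same grid arrays in the same fashion.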
Characterizing shapes of high-dimensional objects via Ricci curvatures plays a critical role in many research areas in mathematics and physics. However, even though several discretizations of Ricci curvatures for discrete combinatorial objects such as networks have been proposed and studied by mathematicians, the computational complexity aspects of these discretizations have largely escaped the attention of theoretical computer scientists. In this paper, we study one such discretization, namely the Ollivier-Ricci curvature, from the perspective of efficient computation by fine-grained reductions and local query-based algorithms. Our main contributions are the following. (a) We relate our curvature computation problem to the minimum-weight perfect matching problem on complete bipartite graphs via a fine-grained reduction. (b) We formalize the computational aspects of the curvature computation problems in suitable frameworks so that they can be studied by researchers in local algorithms. (c) We provide the first known lower and upper bounds on queries for query-based algorithms for the curvature computation problems in our local algorithms framework. En route, we also illustrate a localized version of our fine-grained reduction. We believe that our results bring forth an intriguing set of research questions, motivated both in theory and practice, regarding the design of efficient algorithms for curvatures of objects.
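For reference, this discretization assigns to each vertex $x$ of a graph a probability measure $\mu_x$ supported on its neighbourhood (for example, uniform over the neighbours) and defines the Ollivier-Ricci curvature of a vertex pair $(x,y)$ as
\[
\kappa(x,y) \;=\; 1 - \frac{W_1(\mu_x, \mu_y)}{d(x,y)},
\]
where $d$ is the shortest-path metric and $W_1$ is the Wasserstein (optimal-transport) distance between the two finitely supported measures; it is this transportation problem that connects curvature computation to minimum-weight matching problems on bipartite graphs.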
Operator splitting is a popular divide-and-conquer strategy for solving differential equations. Typically, the right-hand side of the differential equation is split into a number of parts that are then integrated separately. Many methods are known that split the right-hand side into two parts. This approach is limiting, however, and there are situations when 3-splitting is more natural and ultimately more advantageous. The second-order Strang operator-splitting method readily generalizes to a right-hand side split into any number of operators. It is arguably the most popular method for 3-splitting because of its efficiency, ease of implementation, and intuitive nature. Other 3-splitting methods exist, but they are less well known, and analysis and evaluation of their performance in practice are scarce. We demonstrate the effectiveness of some alternative second-order 3-splitting methods relative to Strang splitting on two problems: the reaction-diffusion Brusselator, which can be split into three parts that each have closed-form solutions, and the kinetic Vlasov--Poisson equations used in semi-Lagrangian plasma simulations. We find alternative second-order 3-operator-splitting methods that realize efficiency gains of 10\%--20\% over traditional Strang splitting. Our analysis for the practical assessment of the efficiency of operator-splitting methods includes the computational cost of the integrators and can be used in method design.
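For a right-hand side split into three operators, $u' = (A + B + C)\,u$, the Strang-type method referred to above is the palindromic composition
\[
\mathcal{S}(h) \;=\; e^{\frac{h}{2}A}\, e^{\frac{h}{2}B}\, e^{h C}\, e^{\frac{h}{2}B}\, e^{\frac{h}{2}A}
\;=\; e^{h(A+B+C)} + \mathcal{O}(h^{3}),
\]
where each exponential stands for the exact (or sufficiently accurate) subflow of the corresponding part; its symmetry makes it second-order accurate, and the alternative 3-splitting methods studied here aim to match this order at lower computational cost per step.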
Non-overlapping codes have been studied for almost 60 years. In such a code, no proper, non-empty prefix of any codeword is a suffix of any codeword. In this paper, we study codes in which overlaps of certain specified sizes are forbidden. We prove some general bounds and we give several constructions in the case of binary codes. Our techniques also allow us to provide an alternative, elementary proof of a lower bound on non-overlapping codes due to Levenshtein in 1964.
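A direct check of the defining property (an illustrative helper, not taken from the paper) makes the definition concrete for binary codes given as strings:
\begin{verbatim}
def is_non_overlapping(code):
    # No proper, non-empty prefix of any codeword may be a suffix of
    # any codeword (including the codeword itself).
    words = list(code)
    for u in words:
        for k in range(1, len(u)):        # proper, non-empty prefixes of u
            prefix = u[:k]
            if any(len(v) >= k and v.endswith(prefix) for v in words):
                return False
    return True

# Example: is_non_overlapping(["111000", "110100"]) returns True.
\end{verbatim}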
Stochastic inverse problems are typically encountered when one wants to quantify the uncertainty affecting the inputs of computer models. They consist of estimating input distributions from noisy, observable outputs, and such problems are increasingly examined in Bayesian contexts where the targeted inputs are affected by stochastic uncertainties. In this regard, a stochastic input can be qualified as meaningful if it explains most of the output uncertainty. While such inverse problems are characterized by identifiability conditions, signal-to-noise constraints that can formalize this meaningfulness should be accounted for within the definition of the model, prior to inference. This article investigates the possibility of forcing a solution to be meaningful in the context of parametric uncertainty quantification, through the tools of global sensitivity analysis and information theory (variance, entropy, Fisher information). These forcings mainly take the form of constraints placed on the input covariance, and can be made explicit by considering linear or linearizable models. Simulated experiments indicate that, when injected into the modeling process, these constraints can limit the influence of measurement or process noise on the estimation of the input distribution, and suggest promising future extensions to a fully non-linear framework, for example through the use of linear Gaussian mixtures.
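As a schematic illustration in the linear case (the notation and the threshold are ours, not the authors'): for a linearized model $Y = a^{\top} X + \varepsilon$ with input covariance $\Sigma$ and independent noise of variance $\sigma^{2}$, the variance-based sensitivity of the input block is $a^{\top}\Sigma a / (a^{\top}\Sigma a + \sigma^{2})$, and requiring the input to be meaningful amounts to a signal-to-noise constraint of the form
\[
\frac{a^{\top} \Sigma\, a}{a^{\top} \Sigma\, a + \sigma^{2}} \;\geq\; \rho, \qquad \rho \in (0,1),
\]
which is a constraint on the input covariance $\Sigma$ imposed prior to inference.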
Neural networks have revolutionized the field of machine learning with increased predictive capability. In addition to improving the predictions of neural networks, there is a simultaneous demand for reliable uncertainty quantification on estimates made by machine learning methods such as neural networks. Bayesian neural networks (BNNs) are an important type of neural network with built-in capability for quantifying uncertainty. This paper discusses aleatoric and epistemic uncertainty in BNNs and how they can be calculated. With an example dataset of images where the goal is to identify the amplitude of an event in the image, it is shown that epistemic uncertainty tends to be lower in images that are well represented in the training dataset and higher in images that are not. An algorithm for out-of-distribution (OoD) detection with BNN epistemic uncertainty is introduced, along with various experiments demonstrating factors that influence the OoD detection capability in a BNN. The OoD detection capability with epistemic uncertainty is shown to be comparable to that of the discriminator network of a generative adversarial network (GAN) with a comparable network architecture.
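One common way to obtain these two quantities for a regression-type BNN (a sketch under our assumptions; \texttt{predict\_sample} is a hypothetical callable that draws one set of weights from the approximate posterior and returns a predictive mean and variance) is a Monte Carlo decomposition over posterior samples:
\begin{verbatim}
import numpy as np

def bnn_uncertainty(predict_sample, x, num_samples=100):
    # Monte Carlo decomposition of predictive uncertainty.
    # predict_sample(x) draws one posterior weight sample and returns
    # (mean, variance) of the corresponding predictive distribution.
    means, variances = [], []
    for _ in range(num_samples):
        mu, var = predict_sample(x)
        means.append(mu)
        variances.append(var)
    aleatoric = np.mean(variances, axis=0)  # expected data noise
    epistemic = np.var(means, axis=0)       # disagreement across posterior draws
    return aleatoric, epistemic
\end{verbatim}
Thresholding the epistemic term is one natural way to flag out-of-distribution inputs, in the spirit of the detection algorithm described above.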
This article re-examines Lawvere's abstract, category-theoretic proof of the fixed-point theorem whose contrapositive is a `universal' diagonal argument. The main result is that the necessary axioms for both the fixed-point theorem and the diagonal argument can be stripped back further, to a semantic analogue of a weak substructural logic lacking weakening or exchange.
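The classical statement in a cartesian closed category, whose hypotheses the article weakens further, is: if there exists a point-surjective morphism $\varphi : A \to B^{A}$, then every endomorphism $f : B \to B$ has a fixed point, i.e.\ a point $b : 1 \to B$ with $f \circ b = b$; taking the contrapositive with suitable choices of $A$, $B$ and $f$ recovers the familiar diagonal arguments of Cantor, Russell, Gödel and Tarski.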
Very distinct strategies can be deployed to recognize and characterize an unknown environment or a shape. A recent and promising approach, especially in robotics, is to reduce the complexity of the exploratory units to a minimum. Here, we show that this frugal strategy can be taken to the extreme by exploiting the power of statistical geometry and introducing new invariant features. We show that an elementary robot devoid of any orientation or observation system, exploring randomly, can access global information about an environment, such as the values of its explored area and perimeter. The explored shapes are of arbitrary geometry and may even be non-connected. Using a dictionary, this simplest of robots can thus identify various shapes, such as famous monuments, and even read a text.
Linear statistics of point processes yield Monte Carlo estimators of integrals. While the simplest approach relies on a homogeneous Poisson point process, more regularly spread point processes, such as scrambled low-discrepancy sequences or determinantal point processes, can yield Monte Carlo estimators with fast-decaying mean square error. Following the intuition that more regular configurations result in lower integration error, we introduce the repulsion operator, which reduces clustering by slightly pushing the points of a configuration away from each other. Our main theoretical result is that applying the repulsion operator to a homogeneous Poisson point process yields an unbiased Monte Carlo estimator with lower variance than under the original point process. On the computational side, the evaluation of our estimator is only quadratic in the number of integrand evaluations and can be easily parallelized without any communication across tasks. We illustrate our variance reduction result with numerical experiments and compare it to popular Monte Carlo methods. Finally, we numerically investigate a few open questions on the repulsion operator. In particular, the experiments suggest that the variance reduction also holds when the operator is applied to other motion-invariant point processes.
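The starting point is the elementary fact (Campbell's formula) that for a homogeneous Poisson point process $\Pi$ of intensity $\lambda$ on a window $W$,
\[
\mathbb{E}\Big[\tfrac{1}{\lambda}\sum_{x \in \Pi} f(x)\Big] \;=\; \int_{W} f(x)\,\mathrm{d}x ,
\]
so the linear statistic on the left is an unbiased Monte Carlo estimator of the integral; the result above states that applying the repulsion operator to $\Pi$ preserves this unbiasedness while reducing the variance.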
Ghost, or fictitious, points make it possible to capture boundary conditions that are not located on the finite-difference grid. In this paper, we explore the impact of ghost points on the stability of the explicit Euler finite-difference scheme in the context of the diffusion equation. In particular, we consider the case of a one-touch option under the Black-Scholes model. The observations and results, however, remain valid for a much wider range of financial contracts and models.
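As an illustration of one standard fictitious-point treatment (notation ours): if the boundary point $x_{B}$ carrying the Dirichlet value $g$ lies between the ghost node $x_{0}$ and the first grid node $x_{1} = x_{0} + h$, at distance $\theta h$ from $x_{1}$ with $\theta \in (0,1)$, linear interpolation gives
\[
u(x_{B}) \approx \theta\, u_{0} + (1-\theta)\, u_{1} = g
\quad\Longrightarrow\quad
u_{0} = \frac{g - (1-\theta)\, u_{1}}{\theta},
\]
and it is the dependence of this ghost value on $\theta$ that enters the second-difference stencil at $x_{1}$ and hence the stability restriction of the explicit Euler scheme.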