In this paper, we first study hypergraph rewriting in categorical terms, aiming to define the notion of an event and to develop foundations for causality in graph rewriting. We introduce novel concepts within the framework of double-pushout rewriting in adhesive categories. Second, we study the notion of events in the $\lambda$-calculus, where we construct an algorithm that determines the causal relations between events arising during the evaluation of a $\lambda$-expression satisfying certain conditions. Finally, we attempt to extend this definition to arbitrary $\lambda$-expressions.
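For orientation, a double-pushout (DPO) rewrite rule is a span of monomorphisms $L \xleftarrow{l} K \xrightarrow{r} R$, and applying it to an object $G$ at a match $m \colon L \to G$ means completing the diagram below so that both squares are pushouts; this is the standard definition of DPO rewriting, recalled here for readers new to the setting:
\[
\begin{array}{ccccc}
L & \xleftarrow{\;l\;} & K & \xrightarrow{\;r\;} & R \\
\downarrow{\scriptstyle m} & & \downarrow & & \downarrow \\
G & \longleftarrow & D & \longrightarrow & H
\end{array}
\]
Here $D$ is the context left after deleting the part of $G$ matched by $L$ but not preserved by $K$, and $H$ is the rewritten object.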
This paper addresses the inverse scattering problem for Maxwell's equations. We first show that a bianisotropic scatterer can be uniquely determined from multi-static far-field data through the factorization analysis of the far-field operator. Next, we investigate a modified version of the orthogonality sampling method for the numerical reconstruction of the scatterer. Finally, we apply this sampling method to invert unprocessed 3D experimental data obtained from the Fresnel Institute. Numerical examples with synthetic scattering data for bianisotropic targets are also presented to demonstrate the effectiveness of the method.
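As a point of reference, the scalar prototype of the orthogonality sampling indicator reads as follows (sign and normalization conventions vary, and the paper's Maxwell setting uses a vector-valued analogue):
\[
I(z) \;=\; \left| \int_{\mathbb{S}^{2}} u^{\infty}(\hat{x})\, e^{-\mathrm{i} k\, \hat{x} \cdot z}\, \mathrm{d}s(\hat{x}) \right|^{2}, \qquad z \in \mathbb{R}^{3},
\]
which is expected to take large values for sampling points $z$ inside the scatterer and small values outside.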
Computing the crossing number of a graph is one of the most classical problems in computational geometry. The problem and its numerous variants have been studied extensively, and overcoming their frequent computational difficulty is an active area of research. Recently, there has been increased effort to establish and understand the parameterized tractability of various crossing number variants. While many results in this direction use a similar approach, a general framework has remained elusive. We propose such a framework that generalizes important previous results and can even be used to establish the tractability of crossing number variants whose parameterized complexity was stated as an open problem in previous literature. Our framework targets variants that prescribe a partial predrawing and some kind of topological restriction on crossings. Additionally, to provide evidence that previous approaches to the partially predrawn crossing number problem do not generalize to geometric restrictions, we show a new, more constrained hardness result for the partially predrawn rectilinear crossing number. In particular, we show W-hardness of deciding Straight-Line Planarity Extension parameterized by the number of missing edges.
In this paper, a novel $h$-adaptive isogeometric solver utilizing high-order hierarchical splines is proposed to solve the all-electron Kohn--Sham equation. By virtue of the smooth nature of the Kohn--Sham wavefunctions across the domain, except at the nuclear positions, high-order globally regular basis functions such as B-splines are well suited for achieving high accuracy. To further handle the singularities of the external potential at the nuclear positions, an $h$-adaptive framework based on hierarchical splines is presented with a specially designed residual-type error indicator, allowing for locally varying resolution over the domain. The generalized eigenvalue problem arising from the discretized Kohn--Sham equation is solved efficiently by the locally optimal block preconditioned conjugate gradient (LOBPCG) method with an elliptic preconditioner, and it is found that the eigensolver's convergence is independent of the spline basis order. A series of numerical experiments confirms the effectiveness of the $h$-adaptive framework; notably, a numerical accuracy of $10^{-3}~\mathrm{Hartree/particle}$ in the all-electron simulation of a methane molecule is achieved using only $6355$ degrees of freedom, demonstrating the competitiveness of our solver for the all-electron Kohn--Sham equation.
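As a rough, self-contained illustration of the eigensolver ingredient only (not of the isogeometric discretization), the sketch below calls SciPy's LOBPCG on a generalized eigenvalue problem, with a simple Jacobi preconditioner standing in for the elliptic preconditioner; the matrices and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import lobpcg

n = 1000
# 1D Laplacian as a stand-in stiffness matrix A; identity as the mass matrix B
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
B = identity(n, format="csr")

# Jacobi (inverse-diagonal) preconditioner: a cheap stand-in for an elliptic one
M = diags(1.0 / A.diagonal())

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 5))  # block of 5 initial vectors
# five smallest generalized eigenpairs of A x = lambda B x
vals, vecs = lobpcg(A, X, B=B, M=M, largest=False, tol=1e-8, maxiter=200)
print(vals)
```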
In this paper, we present a randomized extension of the deep splitting algorithm introduced in [Beck, Becker, Cheridito, Jentzen, and Neufeld (2021)], based on random neural networks, which is suitable for approximately solving both high-dimensional nonlinear parabolic PDEs and PIDEs with jumps of (possibly) infinite activity. We provide a full error analysis of our so-called random deep splitting method. In particular, we prove that our random deep splitting method converges to the (unique viscosity) solution of the nonlinear PDE or PIDE under consideration. Moreover, we empirically analyze our random deep splitting method on several numerical examples, including both nonlinear PDEs and nonlinear PIDEs relevant to the pricing of financial derivatives under default risk. In particular, we empirically demonstrate in all examples that our random deep splitting method can approximately solve nonlinear PDEs and PIDEs in 10'000 dimensions within seconds.
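To convey the flavor of the method, here is a minimal sketch of a regression-based splitting scheme with a random-feature network (random, frozen hidden layer; only the linear output layer is fit by least squares) for a toy semilinear heat equation $\partial_t u + \tfrac{1}{2}\Delta u + f(u) = 0$, $u(T,x) = g(x)$. All function choices and parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, dt, n_paths, n_feat = 10, 20, 0.05, 20_000, 500

g = lambda x: np.log(0.5 * (1.0 + np.sum(x**2, axis=1)))  # terminal condition
f = lambda u: -u**2                                        # example nonlinearity

# "random neural network": hidden weights sampled once and frozen;
# only the linear output layer beta is trained, by least squares
W = rng.standard_normal((d, n_feat))
c = rng.standard_normal(n_feat)
phi = lambda x: np.tanh(x @ W + c)

# forward simulation of Brownian paths started from the origin
X = np.zeros((N + 1, n_paths, d))
for n in range(N):
    X[n + 1] = X[n] + np.sqrt(dt) * rng.standard_normal((n_paths, d))

# backward splitting: at each step, fit beta by ridge-regularized least squares
u_next = g(X[N])
for n in range(N - 1, -1, -1):
    target = u_next + dt * f(u_next)        # explicit splitting step
    F = phi(X[n])
    beta = np.linalg.solve(F.T @ F + 1e-8 * np.eye(n_feat), F.T @ target)
    u_next = F @ beta                       # fitted values at time t_n
print("u(0, x0) ~", u_next.mean())
```

Replacing the gradient-descent training of each time step by a linear least-squares solve for the output layer is what makes the randomized variant fast.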
In this paper, we propose a monotone block coordinate descent method for solving the absolute value equation (AVE). Under appropriate conditions, we analyze the global convergence of the algorithm and conduct numerical experiments to demonstrate its feasibility and effectiveness.
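For intuition, recall that the AVE asks for $x$ with $Ax - |x| = b$, where $|x|$ is taken componentwise. The sketch below is a single-coordinate cyclic descent under the illustrative assumption $a_{ii} > 1$, in which case each one-dimensional subproblem $a_{ii} t - |t| = r$ has a unique closed-form solution; it is a simplification for exposition, not the paper's block method.

```python
import numpy as np

def ave_coordinate_descent(A, b, max_iter=500, tol=1e-10):
    """Cyclic coordinate descent sketch for A x - |x| = b, assuming a_ii > 1."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # fix all other coordinates: a_ii x_i - |x_i| = r
            r = b[i] - A[i] @ x + A[i, i] * x[i]
            # unique scalar solution: sign of x_i matches sign of r when a_ii > 1
            x[i] = r / (A[i, i] - 1.0) if r >= 0 else r / (A[i, i] + 1.0)
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

# sanity check on a strongly diagonally dominant instance
rng = np.random.default_rng(1)
A = 0.1 * rng.standard_normal((5, 5)) + 4.0 * np.eye(5)
x_true = rng.standard_normal(5)
b = A @ x_true - np.abs(x_true)
print(np.max(np.abs(ave_coordinate_descent(A, b) - x_true)))  # ~1e-11
```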
In this paper, we focus on efficiently and flexibly simulating the Fokker-Planck equation associated with the Nonlinear Noisy Leaky Integrate-and-Fire (NNLIF) model, which describes the dynamic behavior of neuronal networks. We apply the Galerkin spectral method to discretize the spatial domain, constructing a variational formulation that satisfies the model's complex boundary conditions. Moreover, the boundary conditions in the variational formulation involve only zeroth-order terms, the first-order conditions being incorporated naturally. This allows the numerical scheme to be further extended to an excitatory-inhibitory population model with synaptic delays and refractory states. Additionally, we establish the consistency of the numerical scheme. Experimental results, including accuracy tests, blow-up events, and periodic oscillations, validate the properties of our proposed method.
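For concreteness, a commonly studied form of the NNLIF Fokker-Planck model (standard notation from the literature, not necessarily the paper's) is
\[
\frac{\partial p}{\partial t}(v,t) + \frac{\partial}{\partial v}\bigl[(-v + b\,N(t))\,p(v,t)\bigr] - a\,\frac{\partial^2 p}{\partial v^2}(v,t) \;=\; N(t)\,\delta(v - V_R), \qquad v \in (-\infty, V_F],
\]
with absorbing condition $p(V_F,t) = 0$, decay $p(-\infty,t) = 0$, and mean firing rate $N(t) = -a\,\partial_v p(V_F,t) \ge 0$; the sign of the connectivity parameter $b$ distinguishes excitatory ($b>0$) from inhibitory ($b<0$) networks. The nonlinearity of $N(t)$ in the flux and the boundary coupling are what make the boundary conditions delicate for a variational formulation.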
To study unified model averaging estimation in settings with complicated data structures, we propose a novel model averaging method based on cross-validation (MACV). MACV unifies a large class of new and existing model averaging estimators and covers a very general class of loss functions. Furthermore, to reduce the computational burden of conventional leave-one-out/leave-subject-out cross-validation, we propose a SEcond-order-Approximated Leave-one/subject-out (SEAL) cross-validation, which substantially improves computational efficiency. For non-independent and non-identically distributed random variables, we establish a unified theory for analyzing the asymptotic behavior of the proposed MACV and SEAL methods, in which the number of candidate models is allowed to diverge with the sample size. To demonstrate the breadth of the proposed methodology, we exemplify four optimal model averaging estimators in four important settings: longitudinal data with discrete responses, within-cluster correlation structure modeling, conditional prediction in spatial data, and quantile regression with a potential correlation structure. We conduct extensive simulation studies and analyze real-data examples to illustrate the advantages of the proposed methods.
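To convey the kind of computational saving at stake, recall the classical shortcut for ordinary least squares (this exact identity is specific to OLS; SEAL is a second-order approximation in a far more general setting): with hat matrix $H = X(X^\top X)^{-1}X^\top$ and residuals $e_i = y_i - \hat{y}_i$, the leave-one-out residuals are available from a single fit,
\[
\hat{e}_{(i)} \;=\; \frac{e_i}{1 - h_{ii}}, \qquad \mathrm{LOOCV} \;=\; \frac{1}{n}\sum_{i=1}^{n} \left( \frac{e_i}{1 - h_{ii}} \right)^{2},
\]
so that $n$ model refits are replaced by one.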
In this paper we obtain the Wedderburn-Artin decomposition of a semisimple group algebra associated with a direct product of finite groups. We also provide formulae for the number and the dimensions of all possible group codes that can be constructed in such a group algebra. As particular cases, we present the complete algebraic description of the group algebra of any direct product of groups whose direct factors are cyclic, dihedral, or generalised quaternion groups. Finally, in the specific case of semisimple dihedral group algebras, we give a method to build quantum error-correcting codes based on the CSS construction.
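For reference, the standard CSS construction works as follows: given binary linear codes $C_2 \subseteq C_1 \subseteq \mathbb{F}_2^n$ with parameters $[n, k_1]$ and $[n, k_2]$, one obtains an $[[n,\, k_1 - k_2,\, d]]$ quantum code with
\[
d \;\ge\; \min\{\, \mathrm{wt}(c) \;:\; c \in (C_1 \setminus C_2) \cup (C_2^\perp \setminus C_1^\perp) \,\},
\]
where $X$-type and $Z$-type errors are corrected by the classical codes $C_1$ and $C_2^\perp$, respectively.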
When and why can a neural network be successfully trained? This article provides an overview of optimization algorithms and theory for training neural networks. First, we discuss the issue of exploding/vanishing gradients and the more general issue of an undesirable spectrum, and then discuss practical solutions, including careful initialization and normalization methods. Second, we review generic optimization methods used in training neural networks, such as SGD, adaptive gradient methods, and distributed methods, together with theoretical results for these algorithms. Third, we review existing research on global issues of neural network training, including results on bad local minima, mode connectivity, the lottery ticket hypothesis, and infinite-width analysis.
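For concreteness, the plain SGD update and the Adam update (as a representative adaptive gradient method) can be sketched as follows; `grad` is a hypothetical function returning a stochastic mini-batch gradient.

```python
import numpy as np

def sgd_step(theta, grad, lr=1e-2):
    # vanilla stochastic gradient descent
    return theta - lr * grad(theta)

def adam_step(theta, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m, v, t = state
    g = grad(theta)
    t += 1
    m = b1 * m + (1 - b1) * g        # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g    # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)        # bias corrections for zero initialization
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), (m, v, t)
```

Here `state` is initialized as `(np.zeros_like(theta), np.zeros_like(theta), 0)`; the per-coordinate scaling by `sqrt(v_hat)` is what makes the method "adaptive".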