We investigate the dynamics of chemical reaction networks (CRNs) with the goal of deriving an upper bound on their reaction rates. This task is challenging due to the nonlinear nature and discrete structure inherent in CRNs. To address this, we employ an information-geometric approach based on the natural gradient to develop a nonlinear system whose solution yields an upper bound on the CRN dynamics. We validate our approach through numerical simulations, demonstrating faster convergence in a specific class of CRNs, characterized by the number of chemicals, the maximum stoichiometric coefficient of the reactions, and the number of reactions. We also compare our method with a conventional approach, showing that the latter cannot provide an upper bound on the reaction rates of CRNs. While our study focuses on CRNs, the ubiquity of hypergraphs in fields from the natural sciences to engineering suggests that our method may find broader applications, including in information science.
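As a point of reference for the dynamics being bounded, the sketch below simulates plain mass-action kinetics for a toy CRN, A + B <-> C; the network, rate constants, and initial concentrations are hypothetical, and the paper's natural-gradient bounding system is not reproduced here.

```python
# Baseline mass-action dynamics for a toy CRN, A + B <-> C (hypothetical
# rate constants and initial state; not the paper's bounding system).
import numpy as np
from scipy.integrate import solve_ivp

S = np.array([[-1,  1],   # stoichiometric matrix, columns = forward/backward
              [-1,  1],
              [ 1, -1]])
k = np.array([2.0, 0.5])  # hypothetical rate constants

def flux(x):
    # mass-action fluxes: forward k+ [A][B], backward k- [C]
    return np.array([k[0] * x[0] * x[1], k[1] * x[2]])

sol = solve_ivp(lambda t, x: S @ flux(x), (0.0, 5.0), [1.0, 1.0, 0.0])
print(sol.y[:, -1])       # concentrations of A, B, C approaching equilibrium
```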
We show a deterministic constant-time local algorithm for constructing an approximately maximum flow and a minimum fractional cut in multisource-multitarget networks with bounded degrees and bounded edge capacities. Locality means that the decision made about each edge depends only on its constant-radius neighborhood. We present two applications of the algorithm: one related to the Aldous-Lyons Conjecture, and the other to approximating the neighborhood distribution of graphs by bounded-size graphs. The scope of our results extends to unimodular random graphs and networks. As a corollary, we generalize the Maximum Flow Minimum Cut Theorem to unimodular random flow networks.
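To make the notion of locality concrete (this is an illustration of the definition, not the paper's flow construction), the sketch below assigns a value to an edge using only its constant-radius ball; the statistic computed on the ball is a placeholder.

```python
# "Locality": the value assigned to an edge is a function of its radius-r
# ball alone, so it can be computed without seeing the whole network.
import networkx as nx

def radius_r_ball(G, edge, r):
    """Subgraph induced by all nodes within distance r of the edge."""
    u, v = edge
    nodes = set(nx.single_source_shortest_path_length(G, u, cutoff=r))
    nodes |= set(nx.single_source_shortest_path_length(G, v, cutoff=r))
    return G.subgraph(nodes)

def local_edge_value(G, edge, r=2):
    ball = radius_r_ball(G, edge, r)
    # placeholder local statistic: any function of the ball is "local"
    return ball.number_of_edges() / (1 + ball.number_of_nodes())

G = nx.grid_2d_graph(6, 6)
print(local_edge_value(G, ((0, 0), (0, 1))))
```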
In sampling-based Bayesian models of brain function, neural activities are assumed to be samples from probability distributions that the brain uses for probabilistic computation. However, a comprehensive understanding of how mechanistic models of neural dynamics can sample from arbitrary distributions is still lacking. We use tools from functional analysis and stochastic differential equations to explore the minimum architectural requirements for $\textit{recurrent}$ neural circuits to sample from complex distributions. We first consider the traditional sampling model consisting of a network of neurons whose outputs directly represent the samples (a sampler-only network). We argue that the synaptic current and firing-rate dynamics in the traditional model have limited capacity to sample from a complex probability distribution. We then show that the firing-rate dynamics of a recurrent neural circuit with a separate set of output units can sample from an arbitrary probability distribution. We call such circuits reservoir-sampler networks (RSNs). We propose an efficient training procedure based on denoising score matching that finds recurrent and output weights such that the RSN implements Langevin sampling. We empirically demonstrate our model's ability to sample from several complex data distributions using the proposed neural dynamics and discuss its applicability to developing the next generation of sampling-based brain models.
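Assuming the score function has already been obtained (here it is known in closed form for a two-component Gaussian mixture, rather than learned by denoising score matching), the Langevin sampling dynamics an RSN is trained to implement look as follows.

```python
# Unadjusted Langevin dynamics: x <- x + (eps/2) * score(x) + sqrt(eps) * noise.
# The closed-form score below stands in for a learned one.
import numpy as np

rng = np.random.default_rng(0)

def score(x, mu1=-2.0, mu2=2.0):
    # score (grad log p) of an equal-weight mixture of two unit Gaussians
    w1 = np.exp(-0.5 * (x - mu1) ** 2)
    w2 = np.exp(-0.5 * (x - mu2) ** 2)
    return (-(x - mu1) * w1 - (x - mu2) * w2) / (w1 + w2)

def langevin(n_steps=2000, eps=1e-2, n_chains=1000):
    x = rng.normal(size=n_chains)
    for _ in range(n_steps):
        x = x + 0.5 * eps * score(x) + np.sqrt(eps) * rng.normal(size=n_chains)
    return x

samples = langevin()
print(samples.mean(), samples.std())  # bimodal target: mean ~0, std ~sqrt(5)
```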
Physics-informed neural networks (PINNs) are appealing data-driven tools for solving and inferring solutions to nonlinear partial differential equations (PDEs). Unlike traditional neural networks (NNs), which train only on solution data, a PINN incorporates a PDE's residual into its loss function and trains to minimize this residual at a set of collocation points in the solution domain. This paper explores the use of the Schwarz alternating method as a means to couple PINNs with each other and with conventional numerical models (i.e., full order models, or FOMs, obtained via the finite element, finite difference, or finite volume methods) following a decomposition of the physical domain. It is well known that training a PINN can be difficult when the PDE solution has steep gradients. We investigate herein the use of domain decomposition and the Schwarz alternating method as a means to accelerate the PINN training phase. Within this context, we explore different approaches for imposing Dirichlet boundary conditions within each subdomain PINN: weakly through the loss and/or strongly through a solution transformation. As a numerical example, we consider the one-dimensional steady-state advection-diffusion equation in the advection-dominated (high Peclet) regime. Our results suggest that the convergence of the Schwarz method is strongly linked to the choice of boundary condition implementation within the PINNs being coupled. Surprisingly, strong enforcement of the Schwarz boundary conditions does not always lead to faster convergence of the method. While it is not clear from our preliminary study that PINN-PINN coupling via the Schwarz alternating method accelerates PINN convergence in the advection-dominated regime, it reveals that PINN training can be improved substantially for Peclet numbers as high as 1e6 by performing a PINN-FOM coupling.
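A minimal single-domain PINN for this problem, with the Dirichlet conditions imposed weakly through the loss, might look like the sketch below; the network size, learning rate, and Peclet number are illustrative, and the Schwarz coupling would wrap several such subdomain PINNs in an outer iteration.

```python
# PINN for the 1D steady advection-diffusion problem Pe*u' - u'' = 0
# on (0,1) with u(0)=0, u(1)=1; BCs enforced weakly via a penalty term.
import torch

torch.manual_seed(0)
Pe = 10.0  # hypothetical Peclet number; the paper goes as high as 1e6
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def pinn_loss(n_colloc=64):
    x = torch.rand(n_colloc, 1, requires_grad=True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = Pe * du - d2u                        # PDE residual at colloc. pts
    xb = torch.tensor([[0.0], [1.0]])
    bc = net(xb) - torch.tensor([[0.0], [1.0]])     # weak Dirichlet BC penalty
    return (residual ** 2).mean() + (bc ** 2).mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = pinn_loss()
    loss.backward()
    opt.step()
print(float(loss))
```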
Positive and unlabelled learning is an important problem which arises naturally in many applications. A significant limitation of almost all existing methods is the assumption that the propensity score function is constant (the SCAR assumption), which is unrealistic in many practical situations. Avoiding this assumption, we consider a parametric approach to the problem of jointly estimating the posterior probability and propensity score functions. We show that, under mild assumptions, when both functions have the same parametric form (e.g., logistic with different parameters), the corresponding parameters are identifiable. Motivated by this, we propose two approaches to their estimation: a joint maximum likelihood method and a second approach based on alternating maximization of two Fisher-consistent expressions. Our experimental results show that the proposed methods are comparable to or better than existing methods based on the Expectation-Maximisation scheme.
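A sketch of the joint maximum-likelihood idea under these assumptions: the probability of observing a label factorizes as posterior times propensity, both modeled as logistic functions. The synthetic data and parameterization below are illustrative, not the paper's experimental setup.

```python
# Joint MLE for PU learning without SCAR: P(s=1|x) = posterior(x) * propensity(x).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 2))
y = rng.random(2000) < 1 / (1 + np.exp(-(X[:, 0] - 0.5)))   # latent true class
e = 1 / (1 + np.exp(-2.0 * X[:, 1]))                        # propensity score
s = y & (rng.random(2000) < e)                              # observed label

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def neg_loglik(theta):
    b, w = theta[:3], theta[3:]          # posterior and propensity parameters
    p = sigmoid(b[0] + X @ b[1:])        # P(y=1 | x)
    q = sigmoid(w[0] + X @ w[1:])        # P(s=1 | y=1, x)
    pe = np.clip(p * q, 1e-12, 1 - 1e-12)
    return -(s * np.log(pe) + (~s) * np.log(1 - pe)).sum()

res = minimize(neg_loglik, np.zeros(6), method="L-BFGS-B")
print(res.x)
```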
In the emerging field of mechanical metamaterials, periodic lattice structures are a frequent primary ingredient. However, aperiodic lattices offer unique advantages with respect to failure, e.g., buckling or fracture, because the absence of repeated patterns prevents global failure; the local failures that occur instead can beneficially delay structural collapse. It is therefore expedient to develop models that efficiently compute the effective mechanical properties of lattices from general features, while addressing the challenge of handling topologies (or graphs) of different sizes. In this paper, we develop a deep learning model to predict the energetically equivalent mechanical properties of linear elastic lattices. Treating the lattice as a graph and defining material and geometrical features on it, we show that Graph Neural Networks provide more accurate predictions than a dense, fully connected strategy, thanks to the geometric bias induced by the graph representation, which is closer to the underlying equilibrium laws of mechanics solved in the direct problem. Leveraging this surrogate for efficient forward evaluation of a vast number of lattices enables the inverse problem, i.e., obtaining a structure with prescribed behavior, which is ultimately suitable for multiscale structural optimization problems.
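A minimal message-passing sketch of the idea (the architecture, features, and readout are illustrative rather than the paper's): nodes carry geometric and material features, struts define edges, and a graph-level readout regresses an effective property.

```python
# Toy GNN surrogate for lattice property prediction.
import torch

class MPLayer(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = torch.nn.Linear(2 * dim, dim)
        self.upd = torch.nn.GRUCell(dim, dim)

    def forward(self, h, edges):
        src, dst = edges                       # edge list as index tensors
        m = torch.relu(self.msg(torch.cat([h[src], h[dst]], dim=-1)))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum incoming msgs
        return self.upd(agg, h)

class LatticeGNN(torch.nn.Module):
    def __init__(self, in_dim=3, dim=32, out_dim=1):
        super().__init__()
        self.enc = torch.nn.Linear(in_dim, dim)
        self.layers = torch.nn.ModuleList([MPLayer(dim) for _ in range(3)])
        self.dec = torch.nn.Linear(dim, out_dim)

    def forward(self, x, edges):
        h = torch.relu(self.enc(x))
        for layer in self.layers:
            h = layer(h, edges)
        return self.dec(h.mean(dim=0))         # graph-level property readout

x = torch.rand(4, 3)                           # 4 nodes: (x, y, strut radius)
edges = (torch.tensor([0, 1, 2, 3, 1, 2, 3, 0]),
         torch.tensor([1, 2, 3, 0, 0, 1, 2, 3]))
print(LatticeGNN()(x, edges))
```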
Core-periphery detection aims to separate the nodes of a complex network into two subsets: a core that is densely connected to the entire network and a periphery that is densely connected to the core but sparsely connected internally. Defining core-periphery structure in multiplex networks, which record different types of interactions between the same set of nodes on different layers, is nontrivial since a node may belong to the core in some layers and to the periphery in others. The current state-of-the-art approach relies on linear combinations of individual layer degree vectors whose layer weights must be chosen a priori. We propose a nonlinear spectral method for multiplex networks that simultaneously optimizes node and layer coreness vectors by maximizing a suitable nonconvex homogeneous objective function via an alternating fixed-point iteration. We prove global optimality and convergence guarantees for admissible hyperparameter choices and convergence to local optima in the remaining cases. We derive a quantitative measure for the quality of a given multiplex core-periphery structure that allows the determination of the optimal core size. Numerical experiments on synthetic and real-world networks illustrate that our approach is robust against noisy layers and outperforms baseline methods with respect to a variety of core-periphery quality measures. In particular, all methods based on layer aggregation are improved when used in combination with the novel optimized layer coreness vector weights. As the runtime of our method depends linearly on the number of edges of the network, it scales to large multiplex networks.
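Schematically, the alternating fixed-point iteration normalizes the partial gradients of a homogeneous objective in turn; the toy objective f(x, c) = sum_l c_l x^T A_l x below is a stand-in for the paper's nonconvex objective, not its actual form.

```python
# Alternating fixed-point iteration on a toy homogeneous multiplex objective.
import numpy as np

rng = np.random.default_rng(2)
A = [rng.random((50, 50)) for _ in range(3)]          # toy multiplex layers
A = [(a + a.T) / 2 for a in A]                        # symmetrize

def grad_x(x, c):   # d/dx of f(x, c) = sum_l c_l * x^T A_l x
    return 2 * sum(cl * (Al @ x) for cl, Al in zip(c, A))

def grad_c(x):      # d/dc_l of f
    return np.array([x @ Al @ x for Al in A])

x = np.ones(50)
c = np.ones(3)
for _ in range(100):
    x = grad_x(x, c); x /= np.linalg.norm(x)          # node coreness update
    c = grad_c(x);    c /= np.linalg.norm(c)          # layer coreness update
print(np.round(c, 3))
```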
We investigate the properties of the high-order discontinuous Galerkin spectral element method (DGSEM) with implicit backward-Euler time stepping for the approximation of a linear scalar hyperbolic conservation equation in multiple space dimensions. We first prove that the DGSEM scheme in one space dimension preserves a maximum principle for the cell-averaged solution when the time step is large enough. This property however no longer holds in multiple space dimensions, and we propose to use the flux-corrected transport limiting [Boris and Book, J. Comput. Phys., 11 (1973)], based on a low-order approximation using graph viscosity, to impose a maximum principle on the cell-averaged solution. These results make it possible to use a linear scaling limiter [Zhang and Shu, J. Comput. Phys., 229 (2010)] to impose a maximum principle at nodal values within elements. We then investigate the inversion of the linear systems resulting from the implicit time discretization at each time step. We prove that the diagonal blocks are invertible and provide efficient algorithms for their inversion. Numerical experiments in one and two space dimensions are presented to illustrate the conclusions of the present analyses.
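The linear scaling limiter of Zhang and Shu cited above squeezes the nodal values of each element toward the cell average so that they stay within prescribed bounds while the average itself is preserved; a minimal version for one element might read as follows.

```python
# Zhang-Shu linear scaling limiter for the nodal values of one element.
import numpy as np

def scaling_limiter(u_nodes, m=0.0, M=1.0, w=None):
    """Limit nodal values to [m, M], preserving the cell average.

    w: quadrature weights defining the cell average (uniform if None);
    the cell average itself is assumed to already lie in [m, M]."""
    w = np.full_like(u_nodes, 1.0 / len(u_nodes)) if w is None else w
    ubar = np.dot(w, u_nodes)                        # cell average
    theta = min(1.0,
                (M - ubar) / max(u_nodes.max() - ubar, 1e-14),
                (ubar - m) / max(ubar - u_nodes.min(), 1e-14))
    return ubar + theta * (u_nodes - ubar)           # squeeze toward average

print(scaling_limiter(np.array([-0.2, 0.4, 1.3])))   # -> [0.0625 0.4375 1.0]
```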
Accurately estimating the positions of multi-agent systems in indoor environments is challenging due to the lack of Global Navigation Satellite System (GNSS) signals. Noisy measurements of position and orientation can cause the integrated position estimate to drift without bound. Previous research has proposed using magnetic field simultaneous localization and mapping (SLAM) to compensate for position drift in a single agent. Here, we propose two novel algorithms that allow multiple agents to apply magnetic field SLAM using their own and other agents' measurements. Our first algorithm is a centralized approach that fuses all measurements collected by all agents in a single extended Kalman filter. This algorithm simultaneously estimates the agents' positions and orientations and the magnetic field norm in a central unit that can communicate with all agents at all times. In cases where a central unit is not available and there are communication dropouts between agents, our second algorithm, a distributed approach, can be employed. We tested both algorithms by estimating the positions of magnetometers carried by three people in an optical motion capture lab with simulated odometry and simulated communication dropouts between agents. We show that both algorithms compensate for drift in a case where single-agent SLAM cannot. We also discuss, both theoretically and experimentally, the conditions under which the estimate from our distributed algorithm converges to that of the centralized algorithm. Our experiments show that, for a communication dropout rate of 80 percent, our proposed distributed algorithm, on average, provides a more accurate position estimate than single-agent SLAM. Finally, we demonstrate the drift-compensating abilities of our centralized algorithm on a real-life pedestrian localization problem with multiple agents moving inside a building.
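The building block of the centralized algorithm is a standard extended Kalman filter cycle. The sketch below is a generic EKF step with a hypothetical 1D constant-velocity agent; the actual state would stack all agents' poses and the magnetic-field map parameters.

```python
# Generic EKF predict/update step.
import numpy as np

def ekf_step(x, P, f, F, h, H, Q, R, z):
    """One EKF cycle; f/h are the motion/measurement models, F/H their
    Jacobians evaluated at the current linearization point."""
    x_pred = f(x)                            # predict
    P_pred = F @ P @ F.T + Q
    y = z - h(x_pred)                        # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y                   # update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# hypothetical 1D constant-velocity agent whose position is measured
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
x, P = ekf_step(np.zeros(2), np.eye(2), lambda s: F @ s, F,
                lambda s: H @ s, H, 0.01 * np.eye(2),
                np.array([[0.1]]), np.array([1.0]))
print(x)
```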
The main goal of this work is to improve the efficiency of training binary neural networks, which are low-latency, low-energy networks. The main contribution is the proposal of two solutions, comprising topology changes and a training strategy, that allow the network to achieve near state-of-the-art performance with efficient training, where efficiency is measured by the training time and the memory required in the process.
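For concreteness, the core mechanism common to most binary networks (not necessarily the specific solutions proposed here) binarizes weights in the forward pass and routes gradients through a clipped straight-through estimator:

```python
# Binarized linear layer with a clipped straight-through estimator.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)                 # binary forward pass

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()  # clip gradient outside [-1,1]

class BinaryLinear(torch.nn.Linear):
    def forward(self, x):
        return torch.nn.functional.linear(
            x, BinarizeSTE.apply(self.weight), self.bias)

layer = BinaryLinear(8, 4)
layer(torch.randn(2, 8)).sum().backward()    # gradients reach the real weights
print(layer.weight.grad.shape)
```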
Recent advances in 3D fully convolutional networks (FCNs) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results while avoiding the need for handcrafted features or class-specific models. To this end, we propose a two-stage, coarse-to-fine approach in which a first 3D FCN roughly defines a candidate region that is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on a more detailed segmentation of the organs and vessels. We use training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection of 150 CT scans acquired at a different hospital, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance in small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN-based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: https://github.com/holgerroth/3Dunet_abdomen_cascade.
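The coarse-to-fine cascade can be sketched as follows, with both trained FCNs replaced by placeholder callables: stage one produces a coarse mask, its bounding box (plus a margin) defines the candidate region, and stage two reclassifies only that crop.

```python
# Two-stage coarse-to-fine cascade over a 3D volume (placeholder "FCNs").
import numpy as np

def cascade_predict(volume, fcn_stage1, fcn_stage2, margin=8):
    coarse = fcn_stage1(volume)                     # voxel-wise probabilities
    idx = np.argwhere(coarse > 0.5)                 # assumes stage 1 found something
    lo = np.maximum(idx.min(axis=0) - margin, 0)    # candidate bounding box
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    box = tuple(slice(a, b) for a, b in zip(lo, hi))
    fine = np.zeros_like(coarse)
    fine[box] = fcn_stage2(volume[box])             # refine only the crop
    return fine

vol = np.random.rand(64, 64, 64)
fake_fcn = lambda v: (v > 0.9).astype(float)        # stand-in for trained 3D FCNs
print(cascade_predict(vol, fake_fcn, fake_fcn).sum())
```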