We consider problems where agents in a network seek a common quantity, measured independently and periodically by each agent through a local time-varying process. Numerous solvers addressing such problems have been developed in the past, featuring various adaptations of the local processing and the consensus step. However, existing solvers still lack support for advanced techniques, such as superiorization and over-the-air function computation (OTA-C). To address this limitation, we introduce a comprehensive framework for the analysis of distributed algorithms by characterizing them as quasi-Fej\'er type algorithms under an extensive communication model. Under weak assumptions, we prove almost sure convergence of the algorithm to a common estimate for all agents. Moreover, we develop a specific class of algorithms within this framework to tackle distributed optimization problems with time-varying objectives and, assuming that a time-invariant solution exists, prove its convergence to a solution. We also present a novel OTA-C protocol for the consensus step in large decentralized networks, reducing communication overhead and enhancing network autonomy compared to existing protocols. The effectiveness of the algorithm, featuring superiorization and OTA-C, is demonstrated in a real-world application of distributed supervised learning over time-varying wireless networks, highlighting its low latency and energy efficiency compared to standard approaches.
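The consensus step at the heart of such algorithms can be illustrated with a minimal synchronous averaging iteration. This is a generic sketch, not the paper's quasi-Fej\'er scheme or its OTA-C protocol; the ring topology, mixing weight, and initial values are illustrative assumptions:

```python
# Minimal synchronous average-consensus sketch: each agent repeatedly
# replaces its estimate with a convex combination of its own value and
# its neighbors' average. Topology and weights are illustrative.

def consensus_step(x, neighbors, alpha=0.5):
    """One synchronous consensus update over an undirected graph."""
    return [
        (1 - alpha) * x[i]
        + alpha * sum(x[j] for j in neighbors[i]) / len(neighbors[i])
        for i in range(len(x))
    ]

# 4 agents on a ring; each starts with a different local measurement.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = [0.0, 1.0, 2.0, 3.0]
for _ in range(50):
    x = consensus_step(x, neighbors)
# All estimates approach the common value 1.5: the average is preserved
# because the mixing weights are doubly stochastic on this regular graph.
```

In an OTA-C setting, the neighbor sum in `consensus_step` would be obtained through simultaneous analog transmissions rather than per-link messages, which is the source of the communication savings the abstract refers to.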
The execution of Belief-Desire-Intention (BDI) agents in a Multi-Agent System (MAS) can be practically implemented on top of low-level concurrency mechanisms that impact efficiency, determinism, and reproducibility. We argue that developers should specify the MAS behaviour independently of the execution model, and choose or configure the concurrency model later on, according to the specific needs of their target domain, leaving the MAS specification unaffected. We identify patterns for mapping agent execution onto the underlying concurrency abstractions, and investigate which concurrency models are supported by some of the most commonly used BDI platforms. Although most frameworks support multiple concurrency models, we find that they tend to hide them under the hood, making them opaque to the developer and effectively limiting the possibility of fine-tuning the MAS.
We examine behavior in an experimental collaboration game that incorporates endogenous network formation. The environment is modeled as a generalization of the voluntary contributions mechanism. By varying the information structure in a controlled laboratory experiment, we examine the underlying mechanisms of reciprocity that generate emergent patterns in linking and contribution decisions. Providing players more detailed information about the sharing behavior of others drastically increases efficiency, and positively affects a number of other key outcomes. To understand the driving causes of these changes in behavior we develop and estimate a structural model for actions and small network panels and identify how social preferences affect behavior. We find that the treatment reduces altruism but stimulates reciprocity, helping players coordinate to reach mutually beneficial outcomes. In a set of counterfactual simulations, we show that increasing trust in the community would encourage higher average contributions at the cost of mildly increased free-riding. Increasing overall reciprocity greatly increases collaborative behavior when there is limited information but can backfire in the treatment, suggesting that negative reciprocity and punishment can reduce efficiency. The largest returns would come from an intervention that drives players away from negative and toward positive reciprocity.
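The baseline voluntary contributions mechanism that the experimental environment generalizes can be sketched as follows; the endowment and multiplier values are illustrative assumptions, not the experiment's actual parameters:

```python
# Baseline voluntary contributions mechanism (VCM) payoff: each player
# keeps what they do not contribute and receives an equal share of the
# multiplied public pot. Parameters here are illustrative.

def vcm_payoffs(contributions, endowment=20.0, multiplier=1.6):
    pot = multiplier * sum(contributions)
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Full contribution is efficient (multiplier > 1) but free-riding pays:
print(vcm_payoffs([20, 20, 20, 20]))  # everyone earns 32.0
print(vcm_payoffs([0, 20, 20, 20]))   # the free-rider earns 44.0
```

The tension visible in the two calls, group efficiency versus the individual incentive to free-ride, is what the reciprocity and information treatments in the experiment act upon.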
Current trust and reputation models continue to have significant limitations, such as the inability to deal with agents constantly entering or exiting open multi-agent systems (open MAS), as well as with continuously changing behaviors. Our study is based on CA, a previously proposed decentralized computational trust model from the trustee's point of view, inspired by synaptic plasticity and the formation of assemblies in the human brain. It is designed to meet the requirements of highly dynamic and open MAS, and its main difference from most conventional trust and reputation models is that the trustor does not select a trustee to delegate a task; instead, the trustee determines whether it is qualified to successfully execute it. We ran a series of simulations to compare the CA model to FIRE, a well-established, decentralized trust and reputation model for open MAS, under conditions of continuous trustee and trustor population replacement, as well as continuous change of trustees' abilities to perform tasks. The main finding is that FIRE copes better with changes in the trustee population, whereas CA is resilient to changes in the trustor population. When trustees switch performance profiles, FIRE clearly outperforms CA, although both models' performance is significantly affected by this environmental change. These findings lead us to conclude that learning to use the appropriate trust model, according to the dynamic conditions in effect, could maximize the trustor's benefits.
We demonstrate that neural networks can be FLOP-efficient integrators of one-dimensional oscillatory integrands. We train a feed-forward neural network to compute integrals of highly oscillatory 1D functions. The training set is a parametric combination of functions with varying character and degree of oscillatory behavior. Numerical examples show that these networks are FLOP-efficient for sufficiently oscillatory integrands, with an average gain of 1000 FLOPs. The network computes oscillatory integrals more accurately than traditional quadrature methods under the same computational budget, i.e., the same number of floating-point operations. We find that feed-forward networks with 5 hidden layers suffice for a relative accuracy of 0.001. The inference cost of the neural network is relatively small, even compared to inner-product-pattern quadrature rules. We postulate that our result follows from learning latent patterns in the oscillatory integrands that are otherwise opaque to traditional numerical integrators.
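Why a fixed FLOP budget fails classical quadrature on oscillatory integrands can be seen with the trapezoid rule applied to $\int_0^1 \sin(\omega x)\,dx$, which has the closed form $(1-\cos\omega)/\omega$. The frequency and node count below are illustrative, not the paper's benchmark settings:

```python
import math

# Trapezoid rule with n function evaluations: once the integrand
# oscillates faster than the grid resolves, the result is unreliable.

def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

omega = 200.0
exact = (1 - math.cos(omega)) / omega  # closed form of the integral
approx = trapezoid(lambda x: math.sin(omega * x), 0.0, 1.0, 50)
# With 50 nodes against roughly 32 full periods, the relative error
# exceeds 100%; a learned surrogate with comparable FLOP cost can
# exploit the latent structure of the integrand instead.
```

A traditional remedy would be to increase the node count with the frequency, which is exactly the FLOP cost the trained network avoids.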
We present a new approach to compute eigenvalues and eigenvectors of locally definite multiparameter eigenvalue problems, characterized by their signed multiindices. The method can be interpreted as a semismooth Newton method applied to certain functions that have a unique zero. We can therefore show local quadratic convergence and, for certain extreme eigenvalues, even global linear convergence of the method. Local definiteness is a weaker condition than right and left definiteness, which are often considered for multiparameter eigenvalue problems. These conditions are naturally satisfied for multiparameter Sturm-Liouville problems that arise when separation of variables can be applied to multidimensional boundary eigenvalue problems.
Learning approximations to smooth target functions of many variables from finite sets of pointwise samples is an important task in scientific computing and its many applications in computational science and engineering. Despite well over half a century of research on high-dimensional approximation, this remains a challenging problem. Yet, significant advances have been made in the last decade towards efficient methods for doing this, commencing with so-called sparse polynomial approximation methods and continuing most recently with methods based on Deep Neural Networks (DNNs). In tandem, there have been substantial advances in the relevant approximation theory and analysis of these techniques. In this work, we survey this recent progress. We describe the contemporary motivations for this problem, which stem from parametric models and computational uncertainty quantification; the relevant function classes, namely, classes of infinite-dimensional, Banach-valued, holomorphic functions; fundamental limits of learnability from finite data for these classes; and finally, sparse polynomial and DNN methods for efficiently learning such functions from finite data. For the latter, there is currently a significant gap between the approximation theory of DNNs and the practical performance of deep learning. Aiming to narrow this gap, we develop the topic of practical existence theory, which asserts the existence of dimension-independent DNN architectures and training strategies that achieve provably near-optimal generalization errors in terms of the amount of training data.
Boolean networks are extensively applied as models of complex dynamical systems, aiming at capturing essential features related to causality and synchronicity of the state changes of components along time. Dynamics of Boolean networks result from the application of their Boolean map according to a so-called update mode, specifying the possible transitions between network configurations. In this paper, we explore update modes that possess a memory on past configurations, and provide a generic framework to define them. We show that recently introduced modes such as the most permissive and interval modes can be naturally expressed in this framework. We propose novel update modes, the history-based and trapping modes, and provide a comprehensive comparison between them. Furthermore, we show that trapping dynamics, which further generalize the most permissive mode, correspond to a rich class of networks related to transitive dynamics and encompassing commutative networks. Finally, we provide a thorough characterization of the structure of minimal and principal trap spaces, bringing a combinatorial and algebraic understanding of these objects.
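The role of the update mode can be illustrated on a toy two-component network: the same Boolean map yields different transition relations under the classical synchronous and fully asynchronous modes. This is a generic illustration under an assumed example network, not a sketch of the paper's history-based or trapping modes:

```python
from itertools import product

# Toy Boolean network on two components: f1(x) = x2, f2(x) = NOT x1.
# The Boolean map alone does not fix the dynamics; the update mode does.

def f(x):
    return (x[1], int(not x[0]))

def synchronous(x):
    """All components update at once: exactly one successor."""
    return {f(x)}

def asynchronous(x):
    """One component updates at a time: up to n successors."""
    fx = f(x)
    succ = set()
    for i in range(len(x)):
        y = list(x)
        y[i] = fx[i]
        succ.add(tuple(y))
    return succ

for x in product((0, 1), repeat=2):
    print(x, "sync ->", synchronous(x), "async ->", asynchronous(x))
```

Update modes with memory, as studied in the paper, go further: the successor set depends not only on the current configuration `x` but also on the trajectory that led to it.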
Characterizing multipartite quantum systems is crucial for quantum computing and many-body physics. The problem, however, becomes challenging when the system size is large and the properties of interest involve correlations among a large number of particles. Here we introduce a neural network model that can predict various quantum properties of many-body quantum states with constant correlation length, using only measurement data from a small number of neighboring sites. The model is based on the technique of multi-task learning, which we show to offer several advantages over traditional single-task approaches. Through numerical experiments, we show that multi-task learning can be applied to sufficiently regular states to predict global properties, like string order parameters, from the observation of short-range correlations, and to distinguish between quantum phases that cannot be distinguished by single-task networks. Remarkably, our model appears to be able to transfer information learnt from lower dimensional quantum systems to higher dimensional ones, and to make accurate predictions for Hamiltonians that were not seen in the training.
We propose a method for obtaining parsimonious decompositions of networks into higher order interactions, which can take the form of arbitrary motifs. The method is based on a class of analytically solvable generative models, where vertices are connected via explicit copies of motifs, which in combination with non-parametric priors allow us to infer higher order interactions from dyadic graph data without any prior knowledge of the types or frequencies of such interactions. Crucially, we also consider 'degree-corrected' models that correctly reflect the degree distribution of the network and consequently prove to be a better fit for many real-world networks compared to non-degree-corrected models. We test the presented approach on simulated data, for which we recover the set of underlying higher order interactions to a high degree of accuracy. For empirical networks, the method identifies concise sets of atomic subgraphs from within thousands of candidates that cover a large fraction of edges and include higher order interactions of known structural and functional significance. The method not only produces an explicit higher order representation of the network but also a fit of the network to analytically tractable models, opening new avenues for the systematic study of higher order network structures.
We consider the problem of causal inference based on observational data (or the related missing data problem) with a binary or discrete treatment variable. In that context, we study counterfactual density estimation, which provides more nuanced information than counterfactual mean estimation (i.e., the average treatment effect). We impose the shape constraint of log-concavity (a unimodality constraint) on the counterfactual densities, develop doubly robust estimators of the log-concave counterfactual density (based on an augmented inverse-probability weighted pseudo-outcome), and show consistency of this estimator in various global metrics. Based on that estimator, we also develop asymptotically valid pointwise confidence intervals for the counterfactual density.
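The augmented inverse-probability weighted (AIPW) pseudo-outcome on which the doubly robust construction rests can be sketched as follows. The nuisance values `mu1` (outcome regression) and `pi` (propensity score) below are numerical stand-ins for fitted models, and the whole snippet is a generic AIPW illustration rather than the paper's estimator:

```python
# AIPW pseudo-outcome for the counterfactual Y(1):
#   phi = mu1(X) + (A / pi(X)) * (Y - mu1(X)).
# It is consistent if either the outcome model mu1 or the propensity
# model pi is correct, hence "doubly robust".

def aipw_pseudo_outcome(y, a, mu1, pi):
    """Doubly robust pseudo-outcome for the treated potential outcome."""
    return mu1 + (a / pi) * (y - mu1)

# A treated unit corrects the regression fit by its inverse-probability-
# weighted residual; an untreated unit falls back on the outcome model.
print(aipw_pseudo_outcome(y=2.0, a=1, mu1=1.5, pi=0.5))  # 1.5 + 2*0.5 = 2.5
print(aipw_pseudo_outcome(y=2.0, a=0, mu1=1.5, pi=0.5))  # 1.5
```

In the paper's setting, the log-concave density estimate is then fit to these pseudo-outcomes rather than to the raw outcomes.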