We propose fully distributed algorithms for Nash equilibrium seeking in aggregative games over networks. We first consider the case where only local constraints are present, and we design an algorithm that combines, for each agent, (i) projected pseudo-gradient descent and (ii) a tracking mechanism to locally reconstruct the aggregative variable. To handle the coupling constraints arising in generalized settings, we propose a second distributed algorithm based on (i) a recently emerged augmented primal-dual scheme and (ii) two tracking mechanisms to reconstruct, for each agent, both the aggregative variable and the coupling constraint satisfaction. Leveraging tools from singular perturbation analysis, we prove linear convergence to the Nash equilibrium for both schemes. Finally, we run extensive numerical simulations to confirm the effectiveness of our methods and compare them with state-of-the-art distributed equilibrium-seeking algorithms.
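To make the first scheme concrete, here is a minimal numerical sketch of the interplay between a projected pseudo-gradient step and dynamic average consensus ("tracking") of the aggregate, on a toy quadratic aggregative game; the cost functions, ring-graph weights, and step size are our own illustrative choices, not the paper's.

```python
import numpy as np

# Toy game: agent i minimizes J_i(x_i, sigma) = 0.5*(x_i - r_i)^2 + a*x_i*sigma,
# where sigma is the average of all decisions, over the box X = [lo, hi].
N, a, lo, hi = 10, 0.5, -1.0, 1.0
rng = np.random.default_rng(0)
r = rng.uniform(-1, 1, N)                 # local targets (hypothetical data)

# Doubly stochastic weights of a ring graph (for illustration)
W = np.eye(N) * 0.5
for i in range(N):
    W[i, (i - 1) % N] += 0.25
    W[i, (i + 1) % N] += 0.25

x = np.zeros(N)      # local decisions
s = x.copy()         # local trackers of the aggregate (initialized at x^0)
gamma = 0.1          # step size

for k in range(500):
    # (i) projected pseudo-gradient step using the *local* estimate s_i
    grad = (x - r) + a * s + a * x / N    # d/dx_i of J_i with sigma replaced by s_i
    x_new = np.clip(x - gamma * grad, lo, hi)
    # (ii) tracking update: consensus plus local innovation
    s = W @ s + (x_new - x)
    x = x_new

print("decisions:", np.round(x, 3))
print("max tracking error:", np.abs(s - x.mean()).max())
```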
Parameters of differential equations are essential to characterize the intrinsic behaviors of dynamic systems. Numerous methods for estimating parameters in dynamic systems are computationally and/or statistically inadequate, especially for complex systems with general-order differential operators, such as motion dynamics. This article presents Green's matching, a computationally tractable and statistically efficient two-step method that only needs to approximate the trajectories of a dynamic system, but not their derivatives, because the differential operators are inverted via Green's functions. This yields a statistically optimal guarantee for parameter estimation in general-order equations, a feature not shared by existing methods, and provides an efficient framework for broad statistical inferences in complex dynamic systems.
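Schematically, for a system $\mathcal{D}x(t) = f(x(t);\theta)$ with a general-order linear differential operator $\mathcal{D}$, the idea can be rendered as follows (our own stylized formulation based on the abstract; the paper's precise criterion may differ):
\[
\hat{\theta} \in \arg\min_{\theta} \sum_{i=1}^{n} \Big( \hat{x}(t_i) - \int G(t_i, s)\, f\big(\hat{x}(s); \theta\big)\, \mathrm{d}s - b(t_i) \Big)^2,
\]
where $\hat{x}$ is a pre-smoothed estimate of the trajectory (step one), $G$ is the Green's function of $\mathcal{D}$, and $b$ collects boundary terms. Only $\hat{x}$ itself, and no derivative estimates, enters the matching criterion (step two), which is what removes the need to approximate derivatives.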
In this paper, we aim to improve the percentage of packets meeting their deadline in discrete-time M/M/1 queues with infrequent monitoring. More specifically, we look into policies that only monitor the system (and subsequently take actions) upon packet arrivals. We model the system as a Markov decision process (MDP) and provide the optimal policy for some special cases. Furthermore, we introduce a heuristic algorithm called "AB-n" for general deadlines. Finally, we provide numerical results demonstrating the favorable performance of "AB-n" policies.
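For intuition, here is a toy simulator of the kind of system described: a discrete-time single-server queue with per-packet deadlines, where the controller only observes the state at arrival epochs. The parameters and the plain FIFO baseline are our own assumptions; the abstract does not specify the "AB-n" policy in enough detail to implement it here.

```python
import random

p_arrival, p_service, deadline = 0.3, 0.5, 10
T = 100_000
queue = []          # arrival times of waiting/in-service packets
met = total = 0

random.seed(1)
for t in range(T):
    if queue and random.random() < p_service:   # geometric service completion
        arr = queue.pop(0)                      # FIFO baseline (assumption)
        met += (t - arr) <= deadline            # did the packet meet its deadline?
        total += 1
    if random.random() < p_arrival:             # Bernoulli arrival
        # "infrequent monitoring": the controller observes the queue only
        # now, at an arrival epoch, and could take an action here.
        queue.append(t)

print(f"fraction of packets meeting deadline: {met / max(total, 1):.3f}")
```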
A classic result by Stockmeyer gives a non-elementary lower bound for the emptiness problem of star-free generalized regular expressions. This result is intimately connected to the satisfiability problem for interval temporal logic, notably for formulas that make use of the so-called chop operator. Such an operator can indeed be interpreted as the inverse of the concatenation operation on regular languages, and this correspondence enables reductions between non-emptiness of star-free generalized regular expressions and satisfiability of formulas of the interval temporal logic of chop under the homogeneity assumption. In this paper, we study the complexity of the satisfiability problem for suitable weakenings of the chop interval temporal logic, which can equivalently be viewed as fragments of Halpern and Shoham's interval logic. We first consider the logic $\mathsf{BD}_{hom}$, featuring the modalities $B$, for \emph{begins}, corresponding to the prefix relation on pairs of intervals, and $D$, for \emph{during}, corresponding to the infix relation. The homogeneous models of $\mathsf{BD}_{hom}$ naturally correspond to languages defined by restricted forms of regular expressions that use union, complementation, and the inverses of the prefix and infix relations. This fragment has recently been shown to be PSPACE-complete. In this paper, we study the extension of $\mathsf{BD}_{hom}$ with the temporal neighborhood modality $A$ (corresponding to the Allen relation \emph{Meets}), and we prove that it increases both the expressiveness and the complexity of the logic. In particular, we show that the resulting logic $\mathsf{BDA}_{hom}$ is EXPSPACE-complete.
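For reference, one standard semantics for the three modalities over intervals $[x,y]$ reads as follows (conventions on strictness vary across the literature):
\[
\begin{aligned}
[x,y] \models \langle B \rangle \varphi &\iff \text{there is } z \text{ with } x \le z < y \text{ and } [x,z] \models \varphi,\\
[x,y] \models \langle D \rangle \varphi &\iff \text{there are } z,t \text{ with } x < z \le t < y \text{ and } [z,t] \models \varphi,\\
[x,y] \models \langle A \rangle \varphi &\iff \text{there is } z \ge y \text{ with } [y,z] \models \varphi,
\end{aligned}
\]
and, under the homogeneity assumption, a proposition letter holds over $[x,y]$ if and only if it holds at every point of $[x,y]$.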
Inverse problems, particularly those governed by Partial Differential Equations (PDEs), are prevalent in various scientific and engineering applications, and uncertainty quantification (UQ) of solutions to these problems is essential for informed decision-making. This second part of a two-paper series builds upon the foundation set by the first part, which introduced CUQIpy, a Python software package for computational UQ in inverse problems using a Bayesian framework. In this paper, we extend CUQIpy's capabilities to solve PDE-based Bayesian inverse problems through a general framework that allows the integration of PDEs in CUQIpy, whether expressed natively or using third-party libraries such as FEniCS. CUQIpy offers concise syntax that closely matches mathematical expressions, streamlining the modeling process and enhancing the user experience. The versatility and applicability of CUQIpy to PDE-based Bayesian inverse problems are demonstrated on examples covering parabolic, elliptic, and hyperbolic PDEs, including problems involving the heat and Poisson equations as well as application case studies in electrical impedance tomography and photoacoustic tomography, showcasing the software's efficiency, consistency, and intuitive interface. This comprehensive approach to UQ in PDE-based inverse problems provides accessibility for non-experts and advanced features for experts.
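For flavor, here is a sketch in the style of CUQIpy's documented examples on a simple (non-PDE) deconvolution test problem. Class names and signatures follow our recollection of the package's published usage and should be checked against the version you install; PDE-based problems additionally involve CUQIpy's PDE models or the FEniCS plugin, which are not shown here.

```python
import numpy as np
from cuqi.testproblem import Deconvolution1D
from cuqi.distribution import Gaussian
from cuqi.problem import BayesianProblem

# Forward model and synthetic data from a built-in test problem
A, y_data, info = Deconvolution1D().get_components()

# Bayesian model: x ~ N(0, 0.1 I),  y | x ~ N(Ax, 0.05 I)
x = Gaussian(np.zeros(A.domain_dim), 0.1)
y = Gaussian(A @ x, 0.05)

# Condition on observed data and sample the posterior
BP = BayesianProblem(y, x).set_data(y=y_data)
samples = BP.sample_posterior(1000)
samples.plot_ci(exact=info.exactSolution)   # credible intervals vs. ground truth
```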
On the Boolean domain, there is a class of symmetric signatures called "Fibonacci gates" for which a beautiful polynomial-time combinatorial algorithm has been designed for the corresponding $\operatorname{Holant}^*$ problems. In this work, we give a combinatorial view of $\operatorname{Holant}^*(\mathcal{F})$ problems on a domain of size 3, where $\mathcal{F}$ is a set of arity-3 functions whose inputs take values in the domain of size 3 and which share certain common properties. This combinatorial view can also be extended to the domain of size 4. Specifically, we extend the definition of Fibonacci gates to the domains of size 3 and 4, and we give the corresponding combinatorial algorithms.
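For orientation, on the Boolean domain a symmetric signature $f = [f_0, f_1, \ldots, f_n]$, where $f_k$ denotes the value on inputs of Hamming weight $k$, is a (generalized) Fibonacci gate if there is a constant $c$ such that
\[
f_{k+2} = c\, f_{k+1} + f_k \qquad \text{for all } 0 \le k \le n-2;
\]
the classical Fibonacci sequence is the case $c = 1$. The extensions in this work play the analogous role for signatures on domains of size 3 and 4.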
While graph convolutional networks show great practical promise, the theoretical understanding of their generalization properties as a function of the number of samples is still in its infancy compared to the more broadly studied case of supervised fully connected neural networks. In this article, we predict the performance of a single-layer graph convolutional network (GCN) trained on data produced by attributed stochastic block models (SBMs) in the high-dimensional limit. Previously, only ridge regression on the contextual SBM (CSBM) had been considered (Shi et al., 2022); we generalize the analysis to arbitrary convex loss and regularization for the CSBM, and we extend it to another data model, the neural-prior SBM. We also study the high signal-to-noise-ratio limit, detail the convergence rates of the GCN, and show that, while consistent, it does not reach the Bayes-optimal rate in any of the considered cases.
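As a concrete reference point, here is a toy sampler for the two-community contextual SBM in one common convention from the CSBM literature; the paper's exact scalings and the neural-prior SBM are not reproduced here.

```python
import numpy as np

def sample_csbm(n=500, p=400, d=10.0, lam=2.0, mu=2.0, seed=0):
    """Two-community CSBM: community-dependent edges plus spiked node features."""
    rng = np.random.default_rng(seed)
    y = rng.choice([-1.0, 1.0], size=n)            # community labels
    u = rng.normal(size=p) / np.sqrt(p)            # hidden feature direction
    # Edge probabilities (d + lam*sqrt(d)*y_i*y_j)/n, clipped to [0, 1]
    P = np.clip((d + lam * np.sqrt(d) * np.outer(y, y)) / n, 0.0, 1.0)
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1); A = A + A.T                 # undirected, no self-loops
    # Node features: rank-one community signal plus Gaussian noise
    B = np.sqrt(mu / n) * np.outer(y, u) + rng.normal(size=(n, p)) / np.sqrt(p)
    return A, B, y

A, B, y = sample_csbm()
print(A.shape, B.shape, y[:10])
```

A single-layer GCN in this setting then classifies from the convolved features, e.g. a linear readout of (normalized) `A @ B`, which is the object the high-dimensional analysis characterizes.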
We present a specific-purpose globalized and preconditioned Newton-CG solver to minimize a metric-aware distortion of curved high-order meshes. The solver is specially devised to optimize curved high-order meshes of high polynomial degree with a target metric featuring non-uniform sizing, high stretching ratios, and curved alignment -- exactly the features that stiffen the optimization problem. To this end, we consider two ingredients: a specific-purpose globalization and a specific-purpose Jacobi-$\text{iLDL}^{\text{T}}(0)$ preconditioning with varying accuracy and curvature tolerances (dynamic forcing terms) for the CG method. These improvements are critical in stiff problems because, without them, the large number of non-linear and linear iterations makes curved optimization impractical. Finally, to analyze the performance of our method, we compare the specific-purpose solver with standard optimization methods. To this end, we measure the number of matrix-vector products, which indicates the solver's computational cost, and the number of line-search iterations, which indicates the total number of objective function evaluations. When we combine the globalization and the linear-solver ingredients, the specific-purpose Newton-CG solver reduces the total number of matrix-vector products by one order of magnitude. Moreover, the numbers of non-linear and line-search iterations are generally smaller but of similar magnitude.
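To illustrate the overall structure such a solver follows, here is a generic sketch of a globalized, inexact Newton-CG loop with dynamic forcing terms (Eisenstat-Walker style). The paper's specific-purpose globalization and Jacobi-$\text{iLDL}^{\text{T}}(0)$ preconditioner are not reproduced; this is a plain-vanilla stand-in.

```python
import numpy as np

def cg_truncated(Hv, b, rtol):
    """CG on H d = b, truncated on negative curvature or residual < rtol."""
    d = np.zeros_like(b)
    r, p = b.copy(), b.copy()
    rr = r @ r
    for _ in range(b.size):
        Hp = Hv(p)
        pHp = p @ Hp
        if pHp <= 0.0:                      # negative curvature: stop early
            return d if d.any() else b      # fall back to steepest descent
        alpha = rr / pHp
        d += alpha * p
        r -= alpha * Hp
        rr_new = r @ r
        if np.sqrt(rr_new) <= rtol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return d

def newton_cg(f, grad, hessvec, x0, tol=1e-8, max_iter=100):
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        eta = min(0.5, np.sqrt(gnorm))      # dynamic forcing term: solve inexactly
        d = cg_truncated(hessvec(x), -g, eta * gnorm)
        t = 1.0                             # Armijo backtracking line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x = x + t * d
    return x

# Toy usage on a strongly convex quadratic 0.5 x'Qx - b'x
Q = np.diag(np.linspace(1.0, 100.0, 50))
b = np.ones(50)
x_opt = newton_cg(lambda x: 0.5 * x @ Q @ x - b @ x,
                  lambda x: Q @ x - b,
                  lambda x: (lambda p: Q @ p),
                  np.zeros(50))
print(np.linalg.norm(Q @ x_opt - b))        # ~0: first-order optimality
```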
Emotion Recognition in Conversation (ERC) has attracted growing attention in recent years as a result of the advancement and implementation of human-computer interface technologies. In this paper, we propose an emotional inertia and contagion-driven dependency modeling approach (EmotionIC) for the ERC task. EmotionIC consists of three main components: Identity Masked Multi-Head Attention (IMMHA), Dialogue-based Gated Recurrent Unit (DiaGRU), and Skip-chain Conditional Random Field (SkipCRF). Compared to previous ERC models, EmotionIC can model a conversation more thoroughly at both the feature-extraction and classification levels. The proposed model attempts to integrate the advantages of attention- and recurrence-based methods at the feature-extraction level: IMMHA is applied to capture identity-based global contextual dependencies, while DiaGRU is utilized to extract speaker- and temporal-aware local contextual information. At the classification level, SkipCRF can explicitly mine complex emotional flows from higher-order neighboring utterances in the conversation. Experimental results show that our method significantly outperforms state-of-the-art models on four benchmark datasets, and ablation studies confirm that our modules effectively model emotional inertia and contagion.
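To make "identity-based" masking concrete, here is one plausible reading of the idea behind IMMHA: build speaker-identity masks so that some attention heads only see a speaker's own past utterances (inertia) while others only see other speakers' utterances (contagion). This is our illustration, not the paper's exact design.

```python
import torch

def identity_masks(speakers):
    """speakers: (T,) tensor of speaker ids, one per utterance in a dialogue.
    Returns boolean (T, T) masks where True marks an *allowed* attention pair."""
    same = speakers.unsqueeze(0) == speakers.unsqueeze(1)
    intra = same      # attend only within the same speaker (emotional inertia)
    inter = ~same     # attend only across speakers (emotional contagion)
    return intra, inter

speakers = torch.tensor([0, 1, 0, 0, 1])
intra, inter = identity_masks(speakers)
# Convert to the convention of your attention implementation before use:
# e.g. torch.nn.MultiheadAttention treats True in attn_mask as *blocked*,
# so pass ~intra or ~inter to the respective groups of heads.
```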
Lattices are architected metamaterials whose properties strongly depend on their geometrical design. The analogy between lattices and graphs enables the use of graph neural networks (GNNs) as a faster surrogate model compared to traditional methods such as finite element modelling. In this work, we generate a large dataset of structure-property relationships for strut-based lattices. The dataset is made available to the community, and it can fuel the development of methods anchored in physical principles for the fitting of fourth-order tensors. In addition, we present a higher-order GNN model trained on this dataset. The key features of the model are (i) SE(3) equivariance and (ii) consistency with the thermodynamic law of conservation of energy. We compare the model to non-equivariant models based on a number of error metrics and demonstrate its benefits in terms of predictive performance and reduced training requirements. Finally, we demonstrate an example application of the model to an architected-material design task. The methods we developed are applicable to fourth-order tensors beyond elasticity, such as the piezo-optical tensor.
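As an illustration of the kind of physical constraint the abstract refers to, the sketch below projects a raw fourth-order tensor onto the symmetry class of elasticity tensors: minor symmetries from the symmetry of stress and strain, and the major symmetry $C_{ijkl} = C_{klij}$ implied by the existence of an energy potential. This mirrors the energy-consistency requirement but is not the paper's model.

```python
import numpy as np

def symmetrize_elasticity(C):
    """Project a (3,3,3,3) tensor onto minor and major elasticity symmetries."""
    C = 0.5 * (C + C.transpose(1, 0, 2, 3))   # minor: C_ijkl = C_jikl
    C = 0.5 * (C + C.transpose(0, 1, 3, 2))   # minor: C_ijkl = C_ijlk
    C = 0.5 * (C + C.transpose(2, 3, 0, 1))   # major: C_ijkl = C_klij (energy)
    return C

C = symmetrize_elasticity(np.random.default_rng(0).normal(size=(3, 3, 3, 3)))
assert np.allclose(C, C.transpose(2, 3, 0, 1))   # major symmetry holds
```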
In large-scale systems, centralised techniques for task allocation face fundamental challenges: the number of interactions is limited by resource constraints on computation, storage, and network communication. Scalability can be increased by implementing the system as a distributed task-allocation system that shares tasks across many agents; however, this in turn raises the resource cost of communication and synchronisation, which is itself difficult to scale. In this paper we present four algorithms to address these problems. In combination, they enable each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system according to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems in which resource-usage limits constrain the agents' behaviours, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks; we also simulate real-life system effects such as network instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and it has been tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
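A minimal sketch of the core idea "explore more when the current strategy seems suboptimal": an epsilon-greedy allocator whose exploration rate is tied to its estimated value of the best-known peer. This is our own illustration of confidence-driven exploration, not the paper's four algorithms.

```python
import random

class AllocatorAgent:
    """Learns which peer to allocate subtasks to; rewards assumed in [0, 1]."""
    def __init__(self, peers):
        self.q = {p: 0.0 for p in peers}   # estimated success rate per peer
        self.n = {p: 0 for p in peers}     # allocation counts

    def choose(self):
        best = max(self.q.values())
        eps = max(0.05, 1.0 - best)        # low confidence -> more exploration
        if random.random() < eps:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, peer, reward):
        self.n[peer] += 1                  # incremental running-mean update
        self.q[peer] += (reward - self.q[peer]) / self.n[peer]

agent = AllocatorAgent(["a", "b", "c"])
for _ in range(100):
    p = agent.choose()
    agent.update(p, random.random())       # stand-in for observed task success
```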