In many real-world situations, there are constraints on the ways in which a physical system can be manipulated. We investigate the entropy production (EP) and extractable work involved in bringing a system from some initial distribution $p$ to some final distribution $p'$, given that the set of master equations available to the driving protocol obeys some constraints. We first derive general bounds on EP and extractable work, as well as a decomposition of the nonequilibrium free energy into an "accessible free energy" (which can be extracted as work, given a set of constraints) and an "inaccessible free energy" (which must be dissipated as EP). In a similar vein, we consider the thermodynamics of information in the presence of constraints, and decompose the information acquired in a measurement into "accessible" and "inaccessible" components. This decomposition allows us to consider the thermodynamic efficiency of different measurements of the same system, given a set of constraints. We use our framework to analyze protocols subject to symmetry, modularity, and coarse-grained constraints, and consider various examples including the Szilard box, the 2D Ising model, and a multi-particle flashing ratchet.
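One schematic reading of the decomposition (the symbols $\mathcal{F}^{\mathrm{acc}}$, $\mathcal{F}^{\mathrm{inacc}}$, $\Sigma$, and $\beta$ below are illustrative notation, not taken from the paper) is that the nonequilibrium free energy splits as

$$\mathcal{F}(p) = \mathcal{F}^{\mathrm{acc}}(p) + \mathcal{F}^{\mathrm{inacc}}(p), \qquad W_{\mathrm{ext}} \le \mathcal{F}^{\mathrm{acc}}(p) - \mathcal{F}^{\mathrm{acc}}(p'), \qquad \Sigma \ge \beta\,\big[\mathcal{F}^{\mathrm{inacc}}(p) - \mathcal{F}^{\mathrm{inacc}}(p')\big],$$

so that, under the given constraints, only the accessible part can be recovered as work, while the drop in the inaccessible part lower-bounds the entropy production.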
In addition to traditional concerns such as throughput and latency, freshness is becoming increasingly important. To stay fresh, applications stream status updates among their components. Existing studies propose the age of information (AoI) metric to gauge freshness and design systems that achieve low AoI. Despite active research in this area, existing results are not applicable to general wired networks for two reasons. First, they focus on wireless settings, where AoI is mostly affected by interference and collisions, whereas queueing dominates in wired settings. Second, legacy drop-averse flows are not taken into account in the literature: scheduling mixed flows with distinct performance objectives has not yet been addressed. In this paper, we study wired networks shared by two classes of flows, aiming for high throughput and low AoI respectively, and achieve a good trade-off between the two. Our approach consists of two layers: freshness-aware traffic engineering (FATE) and in-network freshness control (IFC). FATE derives sending rates/update frequencies for flows via optimization, and its solution is then enforced by IFC through efficient scheduling mechanisms at each outport of in-network nodes. We also present an efficient Linux implementation of IFC and demonstrate the effectiveness of FATE/IFC through extensive emulations. Our results show that it is possible to trade a little throughput (5% lower) for much shorter AoI (49-71% shorter) compared to state-of-the-art traffic engineering.
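To make the FATE-style trade-off concrete, here is a minimal sketch (not the paper's actual optimization; the capacity, packet size, AoI model, and weights are all assumed for illustration) of allocating a shared link between a legacy throughput flow and an update flow:

```python
# Hedged sketch of a FATE-style rate/frequency allocation: one throughput
# flow and one AoI flow share a link of capacity C. Giving the AoI flow more
# bandwidth lets it update more often (lower age), at the cost of throughput
# for the legacy flow. All numbers and the AoI model are assumptions.
C = 100.0          # link capacity (Mbps), assumed
pkt = 0.08         # update packet size (Mb), assumed

def avg_aoi(freq):
    """Crude AoI model for periodic updates: age ~ inter-update gap
    plus transmission time of one update. Illustrative only."""
    return 1.0 / freq + pkt / C

best = None
for share in [i / 100 for i in range(1, 100)]:   # fraction of C for updates
    freq = share * C / pkt                        # sustainable update frequency
    thr = (1 - share) * C                         # throughput left for legacy flow
    cost = avg_aoi(freq) - 0.01 * thr             # weighted trade-off, weights assumed
    if best is None or cost < best[0]:
        best = (cost, share, thr, avg_aoi(freq))

_, share, thr, aoi = best
print(f"update share={share:.2f}, legacy throughput={thr:.1f} Mbps, "
      f"avg AoI={aoi * 1e3:.2f} ms")
```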
We introduce a notion of compatibility between constraint encoding and compositional structure. Phrased in the language of category theory, it is given by a "composable constraint encoding". We show that every composable constraint encoding can be used to construct an equivalent notion of a constrained category in which morphisms are supplemented with the constraints they satisfy. We further describe how to express the compatibility of constraints with additional categorical structures of their targets, such as parallel composition, compactness, and time-symmetry. We present a variety of concrete examples. Some are familiar in the study of quantum protocols and quantum foundations, such as signalling and sectorial constraints; others arise by construction from basic categorical notions. We use the language developed to discuss the notion of intersectability of constraints and the simplifications it allows for when present, and to show that any time-symmetric theory of relational constraints admits a faithful notion of intersection.
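A toy rendering of the "constrained category" idea (all names here are hypothetical; the encoding of constraints as sets and their union as the composition rule is one deliberately simple choice of composable constraint encoding, not the paper's construction):

```python
# Minimal sketch: a morphism is carried together with the constraint it
# satisfies, and composition must compose both components coherently.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class ConstrainedMap:
    f: Callable            # underlying morphism (here: a plain function)
    constraint: frozenset  # encoded constraint it satisfies (toy encoding)

def compose(g: ConstrainedMap, f: ConstrainedMap) -> ConstrainedMap:
    # A composable constraint encoding must specify how the constraints of
    # g and f combine; set union is one toy choice of composition rule.
    return ConstrainedMap(lambda x: g.f(f.f(x)), g.constraint | f.constraint)

inc = ConstrainedMap(lambda x: x + 1, frozenset({"monotone"}))
dbl = ConstrainedMap(lambda x: 2 * x, frozenset({"monotone", "linear"}))
h = compose(dbl, inc)
print(h.f(3), sorted(h.constraint))   # 8 ['linear', 'monotone']
```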
In this paper we consider the spatial semi-discretization of conservative PDEs. Such finite-dimensional approximations of infinite-dimensional dynamical systems can be described as flows in suitable matrix spaces, which in turn leads to the need to solve polynomial matrix equations, a classical and important topic in both theoretical and applied mathematics. Solving these equations numerically is challenging due to the presence of several conservation laws which our finite models incorporate and which must be retained while integrating the equations of motion. In the last thirty years, the theory of geometric integration has provided a variety of techniques to tackle this problem. These numerical methods require the solution of both direct and inverse problems in matrix spaces. We present two algorithms to solve a cubic matrix equation arising in the geometric integration of isospectral flows. This class of ODEs includes finite models of ideal hydrodynamics, plasma dynamics, and spin particles, which we use as test problems for our algorithms.
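The iterate-until-contraction pattern behind such solvers can be sketched as follows (the equation $X = A + h\,X^3$ below is a generic cubic matrix equation chosen for illustration; the paper's equation arises from isospectral integrators and differs):

```python
# Hedged sketch: fixed-point iteration for a cubic matrix equation of the
# generic form X = A + h * X @ X @ X, convergent for small enough h and A.
import numpy as np

def solve_cubic(A, h, tol=1e-12, maxit=200):
    X = A.copy()                       # initial guess
    for _ in range(maxit):
        X_new = A + h * X @ X @ X
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    raise RuntimeError("fixed-point iteration did not converge")

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) * 0.3
X = solve_cubic(A, h=0.05)
print("residual:", np.linalg.norm(X - A - 0.05 * X @ X @ X))
```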
We address the generalized Nash equilibrium seeking problem in a partial-decision information scenario, where each agent can only exchange information with some neighbors, although its cost function possibly depends on the strategies of all agents. The few existing methods build on projected pseudo-gradient dynamics and require either double-layer iterations or conservative conditions on the step sizes. To overcome both limitations and improve efficiency, we design the first fully distributed single-layer algorithms based on proximal best-response. Our schemes use fixed steps and allow for inexact updates, which is crucial for reducing the computational complexity. Under standard assumptions on the game primitives, we establish convergence to a variational equilibrium (with linear rate for games without coupling constraints) by recasting our algorithms as proximal-point methods, suitably preconditioned to distribute the computation among the agents. Since our analysis hinges on a restricted monotonicity property, we also provide new general results that significantly extend the domain of applicability of proximal-point methods. Moreover, the operator-theoretic approach facilitates the implementation of provably correct acceleration schemes that can further improve the convergence speed. Finally, the potential of our algorithms is demonstrated numerically, revealing much faster convergence compared to projected pseudo-gradient methods and validating our theoretical findings.
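For intuition, here is a minimal sketch of a fixed-step proximal best-response iteration on a two-player quadratic game (the costs $J_i(x_i, x_{-i}) = \tfrac{1}{2} a_i x_i^2 + b_i x_i x_{-i}$, all coefficients, and the synchronous update are assumptions for illustration; the paper's algorithms are fully distributed and handle coupling constraints):

```python
# Each agent minimizes its cost plus a proximal term (1/2a)||x_i - x_i^k||^2,
# which for quadratic costs has the closed form used below.
a = [2.0, 3.0]        # curvature of each agent's cost (assumed)
b = [0.5, -0.4]       # coupling coefficients (assumed)
alpha = 0.5           # fixed proximal step size
x = [1.0, -1.0]       # initial strategies

for k in range(100):
    x_old = x[:]
    for i in range(2):
        j = 1 - i
        # argmin of 0.5*a_i*x^2 + b_i*x*x_j^k + (1/(2*alpha))*(x - x_i^k)^2
        x[i] = (x_old[i] / alpha - b[i] * x_old[j]) / (a[i] + 1.0 / alpha)
    if max(abs(x[i] - x_old[i]) for i in range(2)) < 1e-12:
        break

print("approximate Nash equilibrium:", x)   # analytic equilibrium here is (0, 0)
```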
Devising optimal interventions for constraining stochastic systems is a challenging endeavour that has to confront the interplay between randomness and nonlinearity. Existing methods for identifying the necessary dynamical adjustments resort either to space-discretised solutions of the ensuing partial differential equations, or to iterative stochastic path-sampling schemes. Yet both approaches become computationally demanding as the system dimension increases. Here, we propose a generally applicable and practically feasible non-iterative methodology for obtaining optimal dynamical interventions for diffusive nonlinear systems. We estimate the necessary controls from an interacting-particle approximation to the logarithmic gradient of two forward probability flows evolved following deterministic particle dynamics. Applied to several biologically inspired models, we show that our method provides the necessary optimal controls in settings with terminal-, transient-, or generalised collective-state constraints and arbitrary system dynamics.
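The central ingredient, estimating a logarithmic gradient (score) from an ensemble of particles, can be illustrated with a crude kernel-based stand-in (the paper uses an interacting-particle estimator; the Gaussian KDE below, its bandwidth, and the 1D setting are assumptions for illustration):

```python
# Hedged sketch: score estimation grad log p(x) from particles via a
# Gaussian kernel density estimate. Illustrative stand-in only.
import numpy as np

def kde_score(x, particles, h=0.3):
    """grad log p_hat(x) for a KDE p_hat built from the ensemble."""
    d = x - particles                       # (N,) pairwise differences
    w = np.exp(-0.5 * (d / h) ** 2)         # Gaussian kernel weights
    return -(w @ d) / (h ** 2 * w.sum())    # weighted mean of -(x - xi)/h^2

rng = np.random.default_rng(1)
particles = rng.normal(loc=2.0, scale=1.0, size=5000)   # samples from N(2,1)
# For N(2,1) the true score is -(x - 2); compare at x = 3:
print("KDE score:", kde_score(3.0, particles), " true:", -(3.0 - 2.0))
```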
This article is concerned with the nonconforming finite element method for distributed elliptic optimal control problems with pointwise constraints on the control and on the gradient of the state variable. We reduce the minimization problem to a purely state-constrained minimization problem, whose solution can be characterized by a fourth-order elliptic variational inequality of the first kind. To discretize the control problem we use the bubble-enriched Morley finite element method. To ensure the existence of a solution to the discrete problem, three bubble functions corresponding to the edge means are added to the discrete space. We derive error estimates for the state variable in an $H^2$-type energy norm. Numerical results are presented to illustrate our analytical findings.
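Schematically (with assumed notation), the reduced problem reads: find $u \in K$ such that

$$a(u, v - u) \;\ge\; (f, v - u) \quad \forall\, v \in K, \qquad a(u,v) = \int_\Omega D^2 u : D^2 v \, dx,$$

where $K$ is a closed convex set encoding the pointwise constraints on the control and on the gradient of the state; the bubble-enriched Morley space then discretizes this $H^2$-type setting nonconformingly.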
This paper presents a framework for the safety-critical control of robotic systems when safety is defined in terms of safe regions of the configuration space. To maintain safety, we synthesize a safe velocity based on control barrier function theory, without relying on a -- potentially complicated -- high-fidelity dynamical model of the robot. Then, we track the safe velocity with a tracking controller. This culminates in model-free safety-critical control. We prove theoretical safety guarantees for the proposed method. Finally, we demonstrate that this approach is application-agnostic: we execute an obstacle avoidance task with a Segway in high-fidelity simulation, as well as with a drone and a quadruped in hardware experiments.
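A minimal sketch of the safe-velocity idea (the circular-obstacle scene, the barrier $h$, and the gain $\alpha$ are assumptions; the paper's construction is more general): filter a desired velocity through the control barrier function condition $\nabla h(q) \cdot v \ge -\alpha h(q)$, which for a single constraint admits the closed-form projection used below.

```python
# Hedged sketch: CBF-based safe-velocity filter for one circular obstacle.
# h(q) > 0 encodes the safe set; the filter makes the minimal correction to
# the desired velocity needed to satisfy grad h(q) . v >= -alpha * h(q).
import numpy as np

obstacle, radius, alpha = np.array([2.0, 0.0]), 1.0, 1.0   # assumed scene

def h(q):                      # CBF: positive iff outside the obstacle
    return np.linalg.norm(q - obstacle) - radius

def grad_h(q):
    d = q - obstacle
    return d / np.linalg.norm(d)

def safe_velocity(q, v_des):
    g, hh = grad_h(q), h(q)
    slack = g @ v_des + alpha * hh
    if slack >= 0:             # desired velocity already satisfies the CBF
        return v_des
    return v_des - slack * g / (g @ g)   # minimal correction onto the constraint

q = np.array([0.5, 0.1])
v_des = np.array([1.0, 0.0])             # heading straight at the obstacle
print("filtered velocity:", safe_velocity(q, v_des))
```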
Developing classification algorithms that are fair with respect to sensitive attributes of the data has become an important problem due to the growing deployment of classification algorithms in various social contexts. Several recent works have focused on fairness with respect to a specific metric, modeled the corresponding fair classification problem as a constrained optimization problem, and developed tailored algorithms to solve it. Despite this, there remain important metrics for which we do not have fair classifiers, and many of the aforementioned algorithms do not come with theoretical guarantees, perhaps because the resulting optimization problem is non-convex. The main contribution of this paper is a new meta-algorithm for classification that takes as input a large class of fairness constraints, with respect to multiple non-disjoint sensitive attributes, and which comes with provable guarantees. This is achieved by first developing a meta-algorithm for a large family of classification problems with convex constraints, and then showing that classification problems with general types of fairness constraints can be reduced to problems in this family. We present empirical results showing that our algorithm can achieve near-perfect fairness with respect to various fairness metrics and that the loss in accuracy due to the imposed fairness constraints is often small. Overall, this work unifies several prior works on fair classification, presents a practical algorithm with theoretical guarantees, and can handle fairness metrics that were previously out of reach.
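As a toy illustration of imposing a fairness constraint (this is not the paper's meta-algorithm; the biased score, the statistical-parity target, and the per-group thresholding are assumptions for illustration):

```python
# Hedged sketch: enforce (approximate) statistical parity by choosing
# group-specific thresholds on a score so both groups are selected at the
# same rate, trading some accuracy for fairness.
import numpy as np

rng = np.random.default_rng(2)
n = 10000
group = rng.integers(0, 2, n)                            # sensitive attribute
score = rng.normal(loc=0.3 * group, scale=1.0, size=n)   # biased score, assumed

target_rate = 0.3
thresholds = {}
for g in (0, 1):
    s = np.sort(score[group == g])
    thresholds[g] = s[int((1 - target_rate) * len(s))]   # group-specific cutoff

selected = score >= np.where(group == 1, thresholds[1], thresholds[0])
for g in (0, 1):
    print(f"group {g} selection rate: {selected[group == g].mean():.3f}")
```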
Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community; however, the underlying algorithmic problem remains poorly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear program formulation is required: only with a formulation that accounts for the structure of the request graphs can convex combinations of valid mappings be computed. Accordingly, to capture this structure, we introduce the graph-theoretic notions of extraction orders and extraction width, and show that our algorithms have runtime exponential in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show (i) that computing extraction orders of minimal width is NP-hard and (ii) that computing decomposable LP solutions is in general NP-hard, even when request graphs are restricted to planar ones.
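The randomized-rounding pattern can be sketched in a few lines (the capacities, requests, and per-request convex combinations below are assumed stand-ins for an LP decomposition; the paper's LP and rounding analysis are far richer):

```python
# Hedged sketch: each request comes with a convex combination of valid
# mappings (from an LP relaxation); sample one mapping per request and keep
# the request only if substrate capacities still hold.
import random
random.seed(0)

capacity = {"u": 2, "v": 2}                      # substrate node capacities, assumed
requests = [
    # (profit, [(probability, {node: load}), ...]), assumed LP output
    (5.0, [(0.6, {"u": 1}), (0.4, {"v": 1})]),
    (3.0, [(1.0, {"u": 1, "v": 1})]),
    (4.0, [(0.5, {"v": 2}), (0.5, {"u": 2})]),
]

used, profit = {"u": 0, "v": 0}, 0.0
for p, mappings in requests:
    probs, loads = zip(*mappings)
    chosen = random.choices(loads, weights=probs)[0]
    if all(used[n] + l <= capacity[n] for n, l in chosen.items()):
        for n, l in chosen.items():
            used[n] += l
        profit += p

print("embedded profit:", profit, "load:", used)
```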
We consider the task of learning the parameters of a {\em single} component of a mixture model, for the case when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than that of solving the overall original problem, where one learns the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and improved computational complexity compared to existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.
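A toy sketch of the "search" idea (not the paper's matrix-based algorithm; the two-component Gaussian mixture, the form of side information, and the single soft-assignment step are assumptions for illustration):

```python
# Hedged sketch: use side information -- here, a handful of samples known to
# come from the target component -- to estimate that component's mean alone,
# without fitting the full two-component mixture.
import numpy as np
rng = np.random.default_rng(3)

# Two-component Gaussian mixture in R^2 (parameters assumed for illustration).
mu0, mu1, n = np.array([0.0, 0.0]), np.array([4.0, 4.0]), 2000
X = np.vstack([rng.normal(mu0, 1.0, (n, 2)), rng.normal(mu1, 1.0, (n, 2))])

side = rng.normal(mu1, 1.0, (20, 2))     # side information about component 1
anchor = side.mean(axis=0)               # crude seed from the side information

# One soft-assignment refinement step toward the target component only.
w = np.exp(-0.5 * ((X - anchor) ** 2).sum(axis=1))
mu_hat = (w[:, None] * X).sum(axis=0) / w.sum()
print("estimated target mean:", mu_hat, " true:", mu1)
```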