We study financial networks with debt contracts and credit default swaps between specific pairs of banks. Given such a financial system, we want to decide which of the banks are in default, and how much of their liabilities these defaulting banks can pay. There can easily be multiple different solutions to this problem, leading to a situation of default ambiguity and a range of possible solutions for a financial authority to implement. In this paper, we study the properties of the solution space of such financial systems, and analyze a wide range of reasonable objective functions for selecting from the set of solutions. Examples of such objective functions include minimizing the number of defaulting banks, minimizing the amount of unpaid debt, maximizing the number of satisfied banks, and many others. We show that for all of these objectives, it is NP-hard to approximate the optimal solution within an $n^{1-\epsilon}$ factor for any $\epsilon>0$, with $n$ denoting the number of banks. Furthermore, we show that this situation is rather difficult to avoid from a financial regulator's perspective: the same hardness results also hold if we apply strong restrictions on the weights of the debts, the structure of the network, or the amount of funds that banks must possess. However, if we restrict both the network structure and the amount of funds simultaneously, then the solution becomes unique, and it can be found efficiently.
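To make the clearing problem concrete, here is a minimal sketch (our illustration, not the paper's construction) of the classical fixed-point iteration for recovery rates in a debt-only network; the paper's model additionally includes credit default swaps. All names and the iteration scheme below are our own assumptions.

```python
# Sketch: debt-only clearing (Eisenberg-Noe style), not the paper's
# CDS model. r[i] is the fraction of its liabilities bank i can pay.
def clearing(external_assets, liabilities, iters=1000):
    # liabilities[i][j]: notional amount bank i owes bank j.
    n = len(external_assets)
    total = [sum(liabilities[i]) for i in range(n)]
    r = [1.0] * n  # start from "everyone pays in full"
    for _ in range(iters):
        for i in range(n):
            assets = external_assets[i] + sum(
                r[j] * liabilities[j][i] for j in range(n))
            r[i] = 1.0 if total[i] == 0 else min(1.0, assets / total[i])
    return r
```

Starting from full payment, this iteration decreases monotonically towards the greatest clearing solution; adding credit default swaps destroys this monotone structure, which is what allows multiple solutions to coexist.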
Given a simple polygon $\cal P$, the goal of the Art Gallery problem is to find the minimum number of guards needed to cover all of $\cal P$, where a guard is a point $p$ that sees a point $q$ when the segment $\overline{pq}$ does not cross the edges of $\cal P$. This paper studies a variant of the Art Gallery problem in which guards are restricted to lie on a dense grid inside $\cal P$. In the general problem, guards may be placed anywhere inside or on the boundary of $\cal P$; this general version is called the \emph{point} guarding problem. The point guarding problem was proved to be APX-complete, meaning that we cannot approximate it better than within some constant factor unless $P = NP$. A large body of research is devoted to the combinatorial and algorithmic aspects of this problem, yet to date no constant-factor approximation is known for simple polygons. The best-known approximation factor for point guarding a simple polygon is $\mathcal{O}(\log (|OPT|))$, introduced by E. Bonnet and T. Miltzow in 2020, where $|OPT|$ is the size of the optimal solution. Here, we propose an algorithm with a constant approximation factor for the point guarding problem where the location of guards is restricted to a grid. The running time of the proposed algorithm depends on the number of cells of the grid. The approximation factor is constant regardless of the grid used, although the running time may become super-polynomial if the grid size is exponential.
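For intuition, visibility between a candidate grid guard $p$ and a point $q$ reduces to segment-crossing tests against the edges of $\cal P$. Below is a minimal sketch (our illustration, with our own function names), assuming general position so that $\overline{pq}$ never passes exactly through a vertex of $\cal P$.

```python
# Sketch: does guard p see point q inside a simple polygon?
# Assumes general position (pq never touches a polygon vertex).
def ccw(a, b, c):
    # Positive iff a, b, c make a counter-clockwise turn.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p, q, a, b):
    # Proper crossing test for segments pq and ab.
    return ccw(p, q, a) * ccw(p, q, b) < 0 and ccw(a, b, p) * ccw(a, b, q) < 0

def sees(p, q, polygon):
    # p sees q iff the segment pq crosses no edge of the polygon.
    n = len(polygon)
    return not any(segments_cross(p, q, polygon[i], polygon[(i + 1) % n])
                   for i in range(n))
```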
We present a loosely-stabilizing phase clock for population protocols. In the population model we are given a system of $n$ identical agents which interact in a sequence of randomly chosen pairs. Our phase clock is leaderless and requires $O(\log n)$ states. It runs forever and is, at any point in time, in a synchronous state w.h.p. When started in an arbitrary configuration, it recovers rapidly and enters a synchronous configuration within $O(n\log n)$ interactions w.h.p. Once the clock is synchronized, it stays in a synchronous configuration for at least $\operatorname{poly}(n)$ parallel time w.h.p. We use our clock to design a loosely-stabilizing protocol that solves the comparison problem introduced by Alistarh et al., 2021. In this problem, a subset of agents has, at any time, either $A$ or $B$ as input. The goal is to keep track of which of the two opinions is (momentarily) the majority. We show that if the majority has a support of $\Omega(\log n)$ agents and a sufficiently large bias is present, then the protocol converges to a correct output within $O(n\log n)$ interactions and stays in a correct configuration for $\operatorname{poly}(n)$ interactions, w.h.p.
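For readers unfamiliar with the model, a single interaction applies a transition function to a randomly chosen ordered pair of agents; the sketch below shows only the generic scheduler (the paper's phase-clock transition is left abstract as `delta`).

```python
# Sketch: generic population-protocol scheduler. The phase-clock
# transition itself is in the paper; `delta` is a placeholder here.
import random

def run(states, delta, interactions):
    n = len(states)
    for _ in range(interactions):
        i, j = random.sample(range(n), 2)  # random ordered pair (i: initiator)
        states[i], states[j] = delta(states[i], states[j])
    return states
```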
Let $G$ be a connected $n$-vertex graph in a proper minor-closed class $\mathcal G$. We prove that the extension complexity of the spanning tree polytope of $G$ is $O(n^{3/2})$. This improves on the $O(n^2)$ bounds following from the work of Wong (1980) and Martin (1991). It also extends a result of Fiorini, Huynh, Joret, and Pashkovich (2017), who obtained an $O(n^{3/2})$ bound for graphs embedded in a fixed surface. Our proof works more generally for all graph classes admitting strongly sublinear balanced separators: we prove that for every constant $\beta$ with $0<\beta<1$, if $\mathcal G$ is a graph class closed under induced subgraphs such that all $n$-vertex graphs in $\mathcal G$ have balanced separators of size $O(n^\beta)$, then the extension complexity of the spanning tree polytope of every connected $n$-vertex graph in $\mathcal{G}$ is $O(n^{1+\beta})$. We in fact give two proofs of this result: one is a direct construction of the extended formulation; the other is via communication protocols. Using the latter approach we also give a short proof of the $O(n)$ bound for planar graphs due to Williams (2002).
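For reference, these are the standard notions at play (textbook definitions, not specific to this paper): the extension complexity of a polytope $P$ is the minimum number of facets of a polytope that affinely projects onto $P$, and by Yannakakis' factorization theorem it equals the nonnegative rank of the slack matrix $S_P$, whose entry for facet $a_i^T x \le b_i$ and vertex $v_j$ is $b_i - a_i^T v_j$:
\[
\operatorname{xc}(P) \;=\; \min\bigl\{\#\text{facets}(Q) : Q \text{ a polytope},\ \pi(Q) = P \text{ for some affine map } \pi\bigr\} \;=\; \operatorname{rank}_+(S_P).
\]
This equivalence is what lets low-complexity communication protocols computing the slack matrix certify small extended formulations.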
Once you have invented digital money, you may need a ledger to track who owns what -- and an interface to that ledger so that users of your money can transact. On the Tezos blockchain this implies: a smart contract (distributed program), storing in its state a ledger to map owner addresses to token quantities, and standardised entrypoints to transact on accounts. A bank does a similar job -- it maps account numbers to account quantities and permits users to transact -- but in return the bank demands trust, it incurs expense to maintain a centralised server and staff, it uses a proprietary interface ... and it may speculate using your money and/or display rent-seeking behaviour. A blockchain ledger is by design decentralised, inexpensive, open, and it won't just bet your tokens on risky derivatives (unless you ask). The FA1.2 standard is an open standard for ledger-keeping smart contracts on the Tezos blockchain. Several FA1.2 implementations already exist. Or do they? Is the standard sensible and complete? Are the implementations correct? And what are they implementations \emph{of}? The FA1.2 standard is written in English, a specification language favoured by wet human brains but notorious for its incompleteness and ambiguity when rendered into dry and unforgiving code. In this paper we report on a formalisation of the FA1.2 standard as a Coq specification, and on a formal verification of three FA1.2-compliant smart contracts with respect to that specification. Errors were found and ambiguities were resolved; but also, there now exists a \emph{mathematically precise} and battle-tested specification of the FA1.2 ledger standard. We will describe FA1.2 itself, outline the structure of the Coq theories -- which in itself captures some non-trivial and novel design decisions of the development -- and review the detailed verification of the implementations.
We study variants of the mean problem under the $p$-Dynamic Time Warping ($p$-DTW) distance, a popular and robust distance measure for sequential data. In our setting we are given a set of finite point sequences over an arbitrary metric space, and we want to compute a mean point sequence of given length that minimizes the sum of $p$-DTW distances, each raised to the $q$th power, between the input sequences and the mean sequence. In general, the problem is $\mathrm{NP}$-hard and known not to be fixed-parameter tractable in the number of sequences. We show that it is even hard to approximate within any constant factor unless $\mathrm{P} = \mathrm{NP}$, and moreover, if there exists a $\delta>0$ such that there is a $(\log n)^{\delta}$-approximation algorithm for DTW mean, then $\mathrm{NP} \subseteq \mathrm{QP}$. On the positive side, we show that restricting the length of the mean sequence significantly reduces the hardness of the problem. We give an exact algorithm running in polynomial time for constant-length means. We explore various approximation algorithms that provide a trade-off between the approximation factor and the running time. Our approximation algorithms have a running time with only linear dependency on the number of input sequences. In addition, we use our mean algorithms to obtain clustering algorithms with theoretical guarantees.
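To fix notation, here is a minimal sketch (our helper names, not the paper's code) of the standard $p$-DTW dynamic program over an abstract metric `dist`, together with the mean objective described above.

```python
# Sketch: p-DTW via the standard O(|x|*|y|) dynamic program, and the
# (sum of q-th powers of p-DTW distances) mean objective.
def p_dtw(x, y, dist, p=2):
    m, k = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (k + 1) for _ in range(m + 1)]
    D[0][0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, k + 1):
            c = dist(x[i - 1], y[j - 1]) ** p
            D[i][j] = c + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[m][k] ** (1.0 / p)

def mean_objective(mean_seq, inputs, dist, p=2, q=1):
    # Sum of p-DTW distances to the candidate mean, each raised to the q-th power.
    return sum(p_dtw(s, mean_seq, dist, p) ** q for s in inputs)
```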
This work derives explicit series reversions for the solution of Calder\'on's problem. The governing elliptic partial differential equation is $\nabla\cdot(A\nabla u)=0$ in a bounded Lipschitz domain and with a matrix-valued coefficient. The corresponding forward map sends $A$ to a projected version of a local Neumann-to-Dirichlet operator, allowing for the use of partial boundary data and finitely many measurements. It is first shown that the forward map is analytic; subsequently, reversions of its Taylor series up to specified orders lead to a family of numerical methods for solving the inverse problem with increasing accuracy. The convergence of these methods is shown under conditions that ensure the invertibility of the Fr\'echet derivative of the forward map. The introduced numerical methods are of the same computational complexity as solving the linearised inverse problem. Analogous results are also presented for the smoothed complete electrode model.
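Schematically (our rendering of the generic series-reversion construction, with symbols of our choosing): writing the data as $D = F(A_0 + H)$ for a known background $A_0$ and expanding the forward map,
\[
F(A_0 + H) = F(A_0) + F'(A_0)H + \tfrac{1}{2} F''(A_0)[H, H] + \cdots,
\]
the first- and second-order reversions read
\[
H_1 = F'(A_0)^{-1}\bigl(D - F(A_0)\bigr), \qquad
H_2 = H_1 - \tfrac{1}{2} F'(A_0)^{-1} F''(A_0)[H_1, H_1],
\]
so each additional order of accuracy only requires further applications of the same inverted Fr\'echet derivative, consistent with the complexity claim above.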
Defeaturing consists in simplifying geometrical models by removing the geometrical features that are considered irrelevant for a given simulation. Feature removal and simplification of computer-aided design models enable faster simulations for engineering analysis problems, and simplify the meshing problem, which is otherwise often infeasible. The effects of defeaturing on the analysis are typically neglected and, as of today, there are very few strategies to quantitatively evaluate this impact. Understanding these effects well is an important step towards the automatic integration of design and analysis. We formalize the process of defeaturing by studying its effect on the solution of the Poisson equation defined on the geometrical model of interest containing a single feature, with Neumann boundary conditions on the feature itself. We derive an a posteriori estimator of the energy error between the solutions on the exact and the defeatured geometries in $\mathbb{R}^n$, $n\in\{2,3\}$, which is simple, reliable, and efficient up to oscillations. The dependence of the estimator upon the size of the features is explicit.
Large linear systems of saddle-point type arise in a wide variety of applications throughout computational science and engineering; in particular, discretizations of distributed control problems have a saddle-point structure. The numerical solution of saddle-point problems has attracted considerable interest in recent years. In this work, we propose a novel Braess-Sarazin multigrid relaxation scheme for finite element discretizations of distributed control problems, where we use the stiffness matrix obtained from the five-point finite difference method for the Laplacian to approximate the inverse of the mass matrix arising in the saddle-point system. We apply local Fourier analysis to examine the smoothing properties of the Braess-Sarazin multigrid relaxation, and from this analysis we derive the optimal smoothing factor for the relaxation. Numerical experiments validate the theoretical results. The relaxation scheme considered here shows high efficiency and robustness with respect to the regularization parameter and the grid size.
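For orientation, this is the generic Braess-Sarazin step in our notation (the paper's variant differs in how the inner solves are approximated): with residuals $(r_u, r_p)$ and a simple approximation $C$ of the $(1,1)$ block, e.g. $C = \alpha\,\operatorname{diag}(A)$, one relaxation sweep solves
\[
\begin{pmatrix} C & B^T \\ B & 0 \end{pmatrix}
\begin{pmatrix} \delta u \\ \delta p \end{pmatrix}
=
\begin{pmatrix} r_u \\ r_p \end{pmatrix},
\quad\text{i.e.}\quad
\bigl(B C^{-1} B^T\bigr)\,\delta p = B C^{-1} r_u - r_p,
\qquad
\delta u = C^{-1}\bigl(r_u - B^T \delta p\bigr).
\]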
Learning how to act when there are many available actions in each state is a challenging task for Reinforcement Learning (RL) agents, especially when many of the actions are redundant or irrelevant. In such cases, it is sometimes easier to learn which actions not to take. In this work, we propose the Action-Elimination Deep Q-Network (AE-DQN) architecture that combines a Deep RL algorithm with an Action Elimination Network (AEN) that eliminates sub-optimal actions. The AEN is trained to predict invalid actions, supervised by an external elimination signal provided by the environment. Simulations demonstrate a considerable speedup and added robustness over vanilla DQN in text-based games with over a thousand discrete actions.
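As a sketch of the mechanism (our own minimal rendering, not the authors' code; the callables and the threshold are assumptions), the AEN's predicted probability that an action is invalid is used to mask the Q-values before the greedy action choice:

```python
# Sketch: greedy action selection with action elimination.
# `q_net(state)` -> Q-value per action, `aen(state)` -> predicted
# probability that each action is invalid (both numpy arrays).
import numpy as np

def select_action(state, q_net, aen, threshold=0.5):
    q = q_net(state)
    p_invalid = aen(state)
    mask = p_invalid < threshold       # keep actions deemed admissible
    if not mask.any():                 # fall back if everything is eliminated
        mask[:] = True
    return int(np.argmax(np.where(mask, q, -np.inf)))
```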
Scene text detection has made great progress in recent years, with detection methods evolving from axis-aligned rectangles to rotated rectangles and further to quadrangles. However, current datasets contain very little curved text, even though it is widely observed in scene images, e.g., on signboards and product names. To raise awareness of reading curved text in the wild, in this paper we construct a curved text dataset named CTW1500, which includes over 10k text annotations in 1,500 images (1,000 for training and 500 for testing). Based on this dataset, we are the first to propose a polygon-based curved text detector (CTD), which can directly detect curved text without empirical combination steps. Moreover, by seamlessly integrating the recurrent transverse and longitudinal offset connection (TLOC), the proposed method is end-to-end trainable and learns the inherent connection among the position offsets. This allows the CTD to exploit context information instead of predicting points independently, resulting in smoother and more accurate detection. We also propose two simple but effective post-processing methods, named non-polygon suppression (NPS) and polygonal non-maximum suppression (PNMS), to further improve the detection accuracy. Furthermore, the proposed approach is designed in a universal manner and can also be trained with rectangular or quadrilateral bounding boxes without extra effort. Experimental results on CTW1500 demonstrate that our method, with only a light backbone, outperforms state-of-the-art methods by a large margin. Evaluating only on the curved or non-curved subset, CTD + TLOC still achieves the best results. Code is available at //github.com/Yuliang-Liu/Curve-Text-Detector.
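As an illustration of the PNMS idea (our sketch, assuming the usual greedy NMS recipe with polygon IoU; function names and the threshold are ours, not the paper's):

```python
# Sketch: polygonal non-maximum suppression = greedy NMS with the box
# IoU replaced by polygon IoU. Assumes valid, simple polygons.
from shapely.geometry import Polygon

def polygon_iou(pts_a, pts_b):
    a, b = Polygon(pts_a), Polygon(pts_b)
    union = a.union(b).area
    return a.intersection(b).area / union if union > 0 else 0.0

def pnms(detections, iou_threshold=0.3):
    # detections: list of (polygon_points, score) pairs.
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    for poly, score in detections:
        if all(polygon_iou(poly, kp) < iou_threshold for kp, _ in kept):
            kept.append((poly, score))
    return kept
```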