We show that if a disc triangulation has all internal vertex degrees at least 6, then the full triangulation can be determined from the pairwise graph distances between boundary vertices. A similar result holds for quadrangulations with all internal degrees at least 4. This confirms a conjecture of Itai Benjamini. Both degree bounds are best possible, and correspond to local non-positive curvature. However, we show that a natural conjecture for a "mixed" version of the two results does not hold.
We analyse a class of time discretizations for solving the Gross-Pitaevskii equation at low regularity on an arbitrary Lipschitz domain $\Omega \subset \mathbb{R}^d$, $d \le 3$, with a non-smooth potential. We show that these schemes, together with their optimal local error structure, allow for convergence under lower regularity assumptions on both the solution and the potential than is required by classical methods, such as splitting or exponential integrator methods. Moreover, in the case of periodic boundary conditions, we show convergence in any fractional positive Sobolev space $H^{r}$, $r \ge 0$, going beyond the more typical $L^2$-error analysis.
The arboricity of a graph is the minimum number of forests required to cover all its edges. In this paper, we examine arboricity from a game-theoretic perspective and investigate cost-sharing in the minimum forest cover problem. We introduce the arboricity game as a cooperative cost game defined on a graph. The players are edges, and the cost of each coalition is the arboricity of the subgraph induced by the coalition. We study properties of the core and propose an efficient algorithm for computing the nucleolus when the core is not empty. To compute the nucleolus within the core, we introduce the prime partition, which is built on the densest subgraph lattice. The prime partition decomposes the edge set of a graph into a partially ordered set defined from minimal densest minors and their invariant precedence relation. Moreover, edges from the same part of the prime partition always receive the same value in a core allocation. Consequently, when the core is not empty, the prime partition significantly reduces the number of variables and constraints required in the linear programs of Maschler's scheme and allows us to compute the nucleolus in polynomial time. In addition, the prime partition provides a graph decomposition analogous to the celebrated core decomposition and the density-friendly decomposition, which may be of independent interest.
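The cost function of the arboricity game can be made concrete via the Nash-Williams formula, which expresses arboricity as $\max_H \lceil |E(H)| / (|V(H)|-1) \rceil$ over subgraphs $H$ with at least two vertices. The following is a minimal brute-force sketch of that formula (the exhaustive enumeration and function names are ours for illustration; they are not the paper's polynomial-time machinery):

```python
from itertools import combinations
from math import ceil

def arboricity(nodes, edges):
    """Arboricity via the Nash-Williams formula:
    max over vertex subsets S (|S| >= 2) of ceil(m_S / (|S| - 1)),
    where m_S counts edges with both endpoints in S.
    Exponential-time enumeration; suitable only for tiny graphs."""
    best = 0
    for k in range(2, len(nodes) + 1):
        for S in combinations(nodes, k):
            s = set(S)
            m = sum(1 for u, v in edges if u in s and v in s)
            best = max(best, ceil(m / (len(s) - 1)))
    return best

# K4 has 4 vertices and 6 edges, so ceil(6/3) = 2 forests are needed.
nodes = [0, 1, 2, 3]
edges = [(u, v) for u, v in combinations(nodes, 2)]
print(arboricity(nodes, edges))  # 2
```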
We prove a bound of $O( k (n+m)\log^{d-1} n)$ on the number of incidences between $n$ points and $m$ axis-parallel boxes in $\mathbb{R}^d$, if no $k$ boxes contain $k$ common points. That is, the incidence graph between the points and the boxes does not contain $K_{k,k}$ as a subgraph. This new bound improves over previous work by a factor of $\log^d n$ for $d > 2$. We also study other variants of the problem. For halfspaces, using shallow cuttings, we get a near-linear bound in two and three dimensions. Finally, we present a near-linear bound for the case of shapes in the plane with low union complexity (e.g., fat triangles).
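For a concrete reading of the setup, an incidence is simply a point lying inside a box; the quantity being bounded can be written down directly from the definitions. A brute-force sketch in the plane (encoding only the definitions, not the paper's techniques; names are ours):

```python
# Count point-box incidences in R^2 by brute force.
# A box is given as (x_min, x_max, y_min, y_max); an incidence is a
# pair (p, b) with point p contained in box b.
def incidences(points, boxes):
    return sum(1 for (x, y) in points
                 for (x0, x1, y0, y1) in boxes
                 if x0 <= x <= x1 and y0 <= y <= y1)

points = [(0, 0), (1, 1), (2, 2)]
boxes = [(0, 1, 0, 1), (0, 2, 0, 2)]
# (0,0) and (1,1) lie in both boxes; (2,2) lies only in the second: 5 incidences.
print(incidences(points, boxes))  # 5
```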
The low-rank matrix approximation problem is ubiquitous in computational mathematics. Traditionally, this problem is solved in the spectral or Frobenius norms, where the accuracy of the approximation is related to the rate of decay of the singular values of the matrix. However, recent results indicate that this requirement is not necessary for other norms. In this paper, we propose a method for solving the low-rank approximation problem in the Chebyshev norm, which is capable of efficiently constructing accurate approximations for matrices whose singular values decay slowly or not at all.
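To fix notation: the Chebyshev norm of a matrix is the maximum absolute value of its entries. The sketch below only illustrates this norm by measuring the Chebyshev-norm error of a truncated SVD (which is optimal in the spectral norm); it does not reproduce the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))

# Truncated SVD: the best rank-k approximation in the spectral/Frobenius norms.
U, s, Vt = np.linalg.svd(A)
k = 5
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

cheb_err = np.max(np.abs(A - A_k))  # Chebyshev-norm (entrywise max) error
spec_err = s[k]                     # spectral-norm error = (k+1)-st singular value

# Entrywise, |E_ij| <= ||E||_2 always, so the Chebyshev error never exceeds
# the spectral error; a Chebyshev-optimal approximation can be better still.
print(cheb_err, spec_err)
```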
Let $ X_{n} $ be an $ n\times N $ random complex matrix, and let $R_{n}$ and $T_{n}$ be non-random complex matrices of dimensions $n\times N$ and $n\times n$, respectively. We assume that the entries of $ X_{n} $ are independent and identically distributed, that $ T_{n} $ is a nonnegative definite Hermitian matrix, and that $T_{n}R_{n}R_{n}^{*}= R_{n}R_{n}^{*}T_{n} $. The general information-plus-noise type matrices are defined by $C_{n}=\frac{1}{N}T_{n}^{\frac{1}{2}} \left( R_{n} +X_{n}\right) \left(R_{n}+X_{n}\right)^{*}T_{n}^{\frac{1}{2}} $. In this paper, we establish the limiting spectral distribution of the large dimensional general information-plus-noise type matrices $C_{n}$. Specifically, we show that as $n$ and $N$ tend to infinity proportionally, the empirical distribution of the eigenvalues of $C_{n}$ converges weakly to a non-random probability distribution, which is characterized by a system of equations for its Stieltjes transform.
Structural identifiability is a property of a differential model with parameters that allows the parameters to be determined from the model equations in the absence of noise. The method of input-output equations is one approach to verifying structural identifiability. This method stands out in its importance because the additional insights it provides can be used to analyze and improve models. However, its complete theoretical grounds and applicability are still to be established. A key subtlety for this method to work correctly is knowing whether the coefficients of these equations are identifiable. In this paper, to address this, we prove identifiability of the coefficients of input-output equations for types of differential models that often appear in practice, such as linear models with one output and linear compartment models in which, from each compartment, one can reach either a leak or an input. This shows that checking identifiability via input-output equations for these models is legitimate and, as we prove, that the field of identifiable functions is generated by the coefficients of the input-output equations. Finally, we exploit a connection between input-output equations and the transfer function matrix to show that, for a linear compartment model with an input and strongly connected graph, the field of all identifiable functions is generated by the coefficients of the transfer function matrix even if the initial conditions are generic.
Let $f(t,y,y')=\sum_{i=0}^n a_i(t,y)y'^i=0$ be an irreducible first order ordinary differential equation with polynomial coefficients. Eremenko in 1998 proved that there exists a constant $C$ such that every rational solution of $f(t,y,y')=0$ is of degree not greater than $C$. Examples show that this degree bound $C$ depends not only on the degrees of $f$ in $t,y,y'$ but also on the coefficients of $f$ viewed as a polynomial in $t,y,y'$. In this paper, we show that if $f$ satisfies $\deg(f,y)<\deg(f,y')$ or $\max_{i=0}^n \{\deg(a_i,y)-2(n-i)\}>0 $, then the degree bound $C$ depends only on the degrees of $f$ in $t,y,y'$, and furthermore we present an explicit expression for $C$ in terms of the degrees of $f$ in $t,y,y'$.
A generalization of L{\"u}roth's theorem states that every subfield of transcendence degree 1 of a rational function field is a simple extension. In this note we show that a classical proof of L{\"u}roth's theorem extends to prove this generalization.
In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm, called multi-step primal-dual (MSPD), and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS), based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
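The smoothing idea underlying DRS can be sketched in its simplest form: replace a non-smooth convex $f$ by its Gaussian smoothing $f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)]$, $Z \sim \mathcal{N}(0, I)$, whose gradient admits a zeroth-order Monte Carlo estimator. The function name, parameters, and variance-reduction trick below are ours for illustration, not the paper's distributed algorithm:

```python
import numpy as np

def smoothed_grad(f, x, gamma=0.1, n_samples=2000, seed=0):
    """Monte Carlo estimate of the gradient of the Gaussian smoothing
    f_gamma(x) = E[f(x + gamma * Z)], using the zeroth-order identity
    grad f_gamma(x) = E[(f(x + gamma*Z) - f(x)) * Z] / gamma.
    Subtracting f(x) leaves the expectation unchanged (E[Z] = 0) but
    reduces the estimator's variance."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_samples, x.shape[0]))
    vals = np.array([f(x + gamma * z) - f(x) for z in Z])
    return (vals[:, None] * Z).mean(axis=0) / gamma

f = lambda x: np.abs(x).sum()              # non-smooth convex objective
g = smoothed_grad(f, np.array([2.0, -3.0]))
print(g)  # close to (1, -1), the gradient of f away from the kink at 0
```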
In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely: the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is strongly convex and smooth, strongly convex, smooth, or merely convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.