Consider the quotient of a Hilbert space by a subgroup of its automorphisms. We study whether this orbit space can be embedded into a Hilbert space by a bilipschitz map, and we identify constraints on such embeddings.
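For reference, the notion of embedding in question is the standard one (the formula below is the usual definition, not notation from this abstract): a map $f\colon X \to Y$ between metric spaces is bilipschitz if there is a constant $L \ge 1$ with
$$\tfrac{1}{L}\, d_X(x,y) \;\le\; d_Y(f(x),f(y)) \;\le\; L\, d_X(x,y) \qquad \text{for all } x, y \in X.$$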
We further develop the theory of monoidal bicategories by introducing and studying bicategorical counterparts of the notions of a linear exponential comonad, as considered in the study of linear logic, and of a codereliction transformation, introduced to study differential linear logic via differential categories. As an application, we extend the differential calculus of Joyal's analytic functors to analytic functors between presheaf categories, just as ordinary calculus extends from a single variable to many variables.
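As a sketch of the one-variable calculus being extended (this is the standard presentation of Joyal's analytic functors; the notation is ours, not the paper's): an analytic functor $F \colon \mathbf{Set} \to \mathbf{Set}$ arises from a species $F[-]$ and differentiates term by term, in analogy with power series:
$$F(X) \;=\; \sum_{n \ge 0} F[n] \times_{S_n} X^n, \qquad (\partial F)(X) \;=\; \sum_{n \ge 0} F[n+1] \times_{S_n} X^n,$$
where $S_n$ acts on $F[n+1]$ by permuting the first $n$ elements, mirroring $\frac{d}{dx} \sum_n a_n x^n/n! = \sum_n a_{n+1} x^n/n!$.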
Latent variable models are powerful tools for inferring underlying dynamics from observed neural activity. However, due to the absence of ground truth data, prediction benchmarks are often employed as proxies. In this study, we reveal the limitations of the widely used 'co-smoothing' prediction framework and propose an improved few-shot prediction approach that encourages more accurate latent dynamics. Using a student-teacher setup with Hidden Markov Models, we demonstrate that the space of models with high co-smoothing scores can encompass models with arbitrary extraneous dynamics in their latent representations. To address this, we introduce a secondary metric: a few-shot version of co-smoothing, which performs regression from the latent variables to held-out channels in the data using only a small number of trials. Our results indicate that, among models with near-optimal co-smoothing, those with extraneous dynamics underperform in few-shot co-smoothing compared to 'minimal' models devoid of such dynamics. We also provide analytical insights into the origin of this phenomenon. We further validate our findings on real neural data using two state-of-the-art methods: LFADS and STNDT. In the absence of ground truth, we suggest a proxy measure for quantifying extraneous dynamics: by cross-decoding the latent variables of all pairs of models with high co-smoothing, we identify models with minimal extraneous dynamics, and we find a correlation between few-shot co-smoothing performance and this new measure. In summary, we present a novel prediction metric designed to yield latent variables that more accurately reflect the ground truth, offering a significant improvement for latent dynamics inference.
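A minimal executable sketch of the few-shot metric in the student-teacher spirit described above, assuming synthetic data, a Gaussian HMM student, and a ridge readout; all names and hyperparameters below are our assumptions, not the authors' code:

```python
# Minimal sketch of few-shot co-smoothing (illustrative assumptions:
# random data, Gaussian HMM student, ridge readout; not the authors' code).
import numpy as np
from hmmlearn import hmm
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in for neural data: trials x time x channels.
n_trials, T, n_ch = 200, 50, 30
data = rng.normal(size=(n_trials, T, n_ch))
held_in, held_out = data[..., :20], data[..., 20:]   # channel split

# Fit the "student" HMM on held-in channels of the training trials.
n_train = 150
model = hmm.GaussianHMM(n_components=5, n_iter=25, random_state=0)
model.fit(held_in[:n_train].reshape(-1, 20), [T] * n_train)

# Few-shot step: regress latent posteriors onto held-out channels
# using only k trials.
k = 5
Z_few = model.predict_proba(held_in[n_train:n_train + k].reshape(-1, 20),
                            [T] * k)
reg = Ridge(alpha=1.0).fit(
    Z_few, held_out[n_train:n_train + k].reshape(-1, 10))

# Score the readout on the remaining test trials (R^2 over channels;
# near zero here because the stand-in data are pure noise).
n_test = n_trials - n_train - k
Z_test = model.predict_proba(held_in[n_train + k:].reshape(-1, 20),
                             [T] * n_test)
print(reg.score(Z_test, held_out[n_train + k:].reshape(-1, 10)))
```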
We study the graph parameter elimination distance to bounded degree, which was introduced by Bulian and Dawar in their study of the parameterized complexity of the graph isomorphism problem. We prove that the problem is fixed-parameter tractable on planar graphs: there is an algorithm that, given a planar graph $G$ and integers $d$ and $k$, decides in time $f(k,d)\cdot n^c$, for a computable function $f$ and a constant $c$, whether the elimination distance of $G$ to the class of graphs of maximum degree at most $d$ is at most $k$.
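To make the parameter concrete, here is a brute-force (exponential-time) rendering of the Bulian-Dawar recursion, under our reading that the target class consists of graphs of maximum degree at most $d$; the paper's contribution is an $f(k,d)\cdot n^c$ algorithm on planar graphs, which this sketch is emphatically not:

```python
# Exponential-time sanity check of the Bulian-Dawar recursion for
# elimination distance to the class of graphs of maximum degree <= d
# (illustrative only; not the FPT algorithm from the paper).
import networkx as nx

def elimination_distance(G: nx.Graph, d: int) -> int:
    if G.number_of_nodes() == 0:
        return 0
    comps = list(nx.connected_components(G))
    if len(comps) > 1:                      # disconnected: take the max
        return max(elimination_distance(G.subgraph(c).copy(), d)
                   for c in comps)
    if max(deg for _, deg in G.degree()) <= d:
        return 0                            # already in the target class
    # connected with a high-degree vertex: delete one vertex and recurse
    return 1 + min(
        elimination_distance(nx.Graph(G.subgraph(set(G.nodes) - {v})), d)
        for v in G.nodes
    )

# Example: the star K_{1,5} has elimination distance 1 to degree-1 graphs,
# since deleting the centre leaves only isolated vertices.
print(elimination_distance(nx.star_graph(5), 1))  # -> 1
```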
Characteristic formulae give a complete logical description of the behaviour of processes modulo some chosen notion of behavioural semantics. They allow one to reduce equivalence or preorder checking to model checking, and, within the modal logics characterizing classic behavioural equivalences and preorders, they are exactly the formulae for which model checking can be reduced to equivalence or preorder checking. This paper studies the complexity of determining whether a formula is characteristic for some finite, loop-free process in each of the logics providing modal characterizations of the simulation-based semantics in van Glabbeek's branching-time spectrum. Since characteristic formulae in each of those logics are exactly the consistent and prime ones, it presents complexity results for the satisfiability and primality problems, and investigates the boundary between modal logics for which those problems can be solved in polynomial time and those for which they become computationally hard. Amongst other contributions, this article also studies the complexity of constructing characteristic formulae in the modal logics characterizing simulation-based semantics, both when such formulae are presented in explicit form and via systems of equations.
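For orientation, the standard construction for the simulation preorder illustrates what such formulae look like (this is textbook material, specialized to one semantics in the spectrum): for a finite, loop-free process $p$,
$$\chi(p) \;=\; \bigwedge_{p \xrightarrow{a} p'} \langle a \rangle\, \chi(p'),$$
and $q \models \chi(p)$ holds exactly when $q$ simulates $p$; loop-freeness makes the recursion well-founded and the conjunction finite.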
In this paper we analyze a conforming virtual element method to approximate the eigenfunctions and eigenvalues of the two-dimensional Oseen eigenvalue problem. We consider the classical velocity-pressure formulation, which allows us to use the divergence-conforming virtual element spaces employed for the Stokes equations. Under standard assumptions on the meshes, we derive a priori error estimates for the proposed method with the aid of the theory of compact operators. We report some numerical tests confirming the theoretical results.
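For context, a common strong form of the Oseen eigenvalue problem reads as follows (our notation; the paper's exact formulation may differ in details such as boundary conditions): given a bounded domain $\Omega \subset \mathbb{R}^2$, a viscosity $\nu > 0$, and a divergence-free convective field $\boldsymbol\beta$, find eigenpairs $(\lambda, (\boldsymbol u, p))$ with $\boldsymbol u \neq \boldsymbol 0$ such that
$$-\nu \Delta \boldsymbol u + (\boldsymbol\beta \cdot \nabla)\boldsymbol u + \nabla p = \lambda \boldsymbol u \ \text{ in } \Omega, \qquad \operatorname{div} \boldsymbol u = 0 \ \text{ in } \Omega, \qquad \boldsymbol u = \boldsymbol 0 \ \text{ on } \partial\Omega.$$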
We study the expressivity and complexity of various logics in probabilistic team semantics with the Boolean negation. In particular, we study the extension of probabilistic independence logic with the Boolean negation, and the recently introduced logic FOPT. We give a comprehensive picture of the relative expressivity of these logics together with the most studied logics in the probabilistic team semantics setting, and relate their expressivity to a numerical variant of second-order logic. In addition, we introduce novel entropy atoms and show that the extension of first-order logic by entropy atoms subsumes probabilistic independence logic. Finally, we obtain some results on the complexity of model checking, validity, and satisfiability for our logics.
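The standard information-theoretic fact behind such a subsumption (a general identity, not the paper's definition of its atoms) is that, for discrete random tuples,
$$H(\bar x) + H(\bar y) - H(\bar x \bar y) \;=\; I(\bar x ; \bar y) \;\ge\; 0,$$
with equality if and only if $\bar x$ and $\bar y$ are independent; the conditional analogue handles conditional independence, so independence atoms can be expressed through equalities between entropy terms.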
Intending to introduce a method for the topological analysis of fields, we present a pipeline that takes as input a weighted and based chain complex, produces a factored chain complex, and encodes it as a barcode of tagged intervals (briefly, a tagged barcode). We show how to apply this pipeline to the weighted and based chain complex of a gradient-like Morse-Smale vector field on a compact Riemannian manifold in both the smooth and discrete settings. Importantly for computations, it turns out that there is an isometry between factored chain complexes endowed with the interleaving distance and their tagged barcodes endowed with the bottleneck distance. Concerning stability, we show that the map taking a generic enough gradient-like vector field to its barcode of tagged intervals is continuous. Finally, we prove that the tagged barcode of any such vector field can be approximated by the tagged barcode of a combinatorial version of it with arbitrary precision.
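For readers unfamiliar with the metric side, the bottleneck distance for ordinary (untagged) barcodes is defined as follows (the tagged variant used here presumably refines this; the formula below is the classical one): for barcodes $B, B'$ of intervals,
$$d_B(B, B') \;=\; \inf_{\gamma} \; \max\Big( \sup_{I \in \operatorname{dom}\gamma} \|I - \gamma(I)\|_\infty, \;\; \sup_{I \text{ unmatched}} \tfrac{\operatorname{length}(I)}{2} \Big),$$
where $\gamma$ ranges over partial matchings between $B$ and $B'$ and $\|[b,d]-[b',d']\|_\infty = \max(|b-b'|,|d-d'|)$.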
We consider the fundamental problem of constructing fast and small circuits for binary addition. We propose a new algorithm with running time $\mathcal O(n \log_2 n)$ for constructing linear-size $n$-bit adder circuits with a significantly better depth guarantee compared to previous approaches: Our circuits have a depth of at most $\log_2 n + \log_2 \log_2 n + \log_2 \log_2 \log_2 n + \text{const}$, improving upon the previously best circuits of [12] with a depth of at most $\log_2 n + 8 \sqrt{\log_2 n} + 6 \log_2 \log_2 n + \text{const}$. Hence, we decrease the gap to the lower bound of $\log_2 n + \log_2 \log_2 n + \text{const}$ of [5] significantly, from $\mathcal O (\sqrt{\log_2 n})$ to $\mathcal O(\log_2 \log_2 \log_2 n)$. Our core routine is a new algorithm for the construction of a circuit for a single carry bit, or, more generally, for an And-Or path, i.e., a Boolean function of the form $t_0 \lor ( t_1 \land (t_2 \lor ( \dots t_{m-1}) \dots ))$. We compute linear-size And-Or path circuits with a depth of at most $\log_2 m + \log_2 \log_2 m + 0.65$ in time $\mathcal O(m \log_2 m)$. These are the first known And-Or path circuits that, up to an additive constant, match the lower bound of [5] while at the same time having linear size. The previously fastest And-Or path circuits are only an additive constant worse in depth, but have a much larger size, on the order of $\mathcal O (m \log_2 m)$.
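A small executable illustration of the connection between carry bits and And-Or paths stated above (the generate/propagate identities are standard; the function names are ours, and this is not the paper's construction algorithm):

```python
# Carry bits as And-Or paths (standard generate/propagate identities;
# an illustration of the definition above, not the paper's algorithm).
from itertools import product

def and_or_path(t):
    """Evaluate t0 | (t1 & (t2 | (t3 & ...))) from right to left."""
    acc = t[-1]
    for i in range(len(t) - 2, -1, -1):
        acc = (t[i] | acc) if i % 2 == 0 else (t[i] & acc)
    return acc

def carry_out(a, b):
    """Carry out of adding bit lists a, b (LSB first), using
    c_{i+1} = g_i | (p_i & c_i) with c_0 = 0."""
    c = 0
    for x, y in zip(a, b):
        g, p = x & y, x ^ y          # generate / propagate
        c = g | (p & c)
    return c

# Unrolling the carry recurrence yields exactly the And-Or path
# c_n = g_{n-1} | (p_{n-1} & (g_{n-2} | (p_{n-2} & ... g_0))):
n = 3
for bits in product([0, 1], repeat=2 * n):
    a, b = list(bits[:n]), list(bits[n:])
    g = [x & y for x, y in zip(a, b)]
    p = [x ^ y for x, y in zip(a, b)]
    t = []
    for i in range(n - 1, 0, -1):
        t += [g[i], p[i]]
    t.append(g[0])
    assert and_or_path(t) == carry_out(a, b)
```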
The greedy spanner in a low dimensional Euclidean space is a fundamental geometric construction that has been extensively studied for over three decades, as it possesses the two most basic properties of a good spanner: constant maximum degree and constant lightness. Recently, Eppstein and Khodabandeh showed that the greedy spanner in $\mathbb{R}^2$ admits a sublinear separator in a strong sense: any subgraph of $k$ vertices of the greedy spanner in $\mathbb{R}^2$ has a separator of size $O(\sqrt{k})$. Their technique is inherently planar and does not extend to higher dimensions. They left showing the existence of a small separator for the greedy spanner in $\mathbb{R}^d$ for any constant $d\geq 3$ as an open problem. In this paper, we resolve the problem of Eppstein and Khodabandeh by showing that any subgraph of $k$ vertices of the greedy spanner in $\mathbb{R}^d$ has a separator of size $O(k^{1-1/d})$. We introduce a new technique that yields a simple criterion, which we dub $\tau$-lankiness, for a geometric graph to have a sublinear separator: a geometric graph is $\tau$-lanky if any ball of radius $r$ cuts at most $\tau$ edges of length at least $r$ in the graph. We show that any $\tau$-lanky geometric graph of $n$ vertices in $\mathbb{R}^d$ has a separator of size $O(\tau n^{1-1/d})$. We then derive our main result by showing that the greedy spanner is $O(1)$-lanky. Indeed, we obtain a more general result that applies to unit ball graphs and point sets of low fractal dimensions in $\mathbb{R}^d$. Our technique naturally extends to doubling metrics. We use the $\tau$-lanky criterion to show that there exists a $(1+\epsilon)$-spanner for doubling metrics of dimension $d$ with a constant maximum degree and a separator of size $O(n^{1-\frac{1}{d}})$; this result resolves an open problem posed by Abam and Har-Peled a decade ago.
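The $\tau$-lanky condition is easy to probe by brute force on a finite input; the sketch below samples ball centres at the vertices and radii at the distinct edge lengths, and reads "cuts" as "has exactly one endpoint inside the ball", both of which are our assumptions rather than the paper's exact setup:

```python
# Brute-force probe of the tau-lanky condition for a finite geometric
# graph in R^d (illustrative; ball centres/radii are only sampled, and
# "cut" is read as "exactly one endpoint inside the ball").
import numpy as np

def max_long_edges_cut(points, edges):
    """Largest number of edges of length >= r cut by any sampled ball of
    radius r; the graph is tau-lanky if this value never exceeds tau."""
    pts = np.asarray(points, dtype=float)
    lengths = {(u, v): np.linalg.norm(pts[u] - pts[v]) for u, v in edges}
    worst = 0
    for c in range(len(pts)):
        dist = np.linalg.norm(pts - pts[c], axis=1)
        for r in sorted(set(lengths.values())):
            cut = sum(1 for (u, v), L in lengths.items()
                      if L >= r and (dist[u] <= r) != (dist[v] <= r))
            worst = max(worst, cut)
    return worst

# Example: a 3-point path in the plane; only one unit edge is cut.
print(max_long_edges_cut([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 2)]))  # -> 1
```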
Deep learning is usually described as an experiment-driven field and is frequently criticized for lacking theoretical foundations. This problem has been partially addressed by a large body of literature that has, so far, not been well organized. This paper reviews and organizes recent advances in deep learning theory. The literature is categorized into six groups: (1) complexity- and capacity-based approaches for analyzing the generalizability of deep learning; (2) stochastic differential equations and their associated dynamical systems for modelling stochastic gradient descent and its variants, which characterize the optimization and generalization of deep learning and are partially inspired by Bayesian inference; (3) the geometric structure of the loss landscape that drives the trajectories of these dynamical systems; (4) the roles of over-parameterization of deep neural networks from both positive and negative perspectives; (5) theoretical foundations of several special structures in network architectures; and (6) the increasing concerns about ethics and security and their relationship with generalizability.