In a 2005 paper, Casacuberta, Scevenels and Smith construct a homotopy idempotent functor $E$ on the category of simplicial sets with the property that whether it can be expressed as localization with respect to a map $f$ is independent of the ZFC axioms. We show that this construction can be carried out in homotopy type theory. More precisely, we give a general method of associating, to a suitable (possibly large) family of maps, a reflective subuniverse of any universe $\mathcal{U}$. When specialized to an appropriate family, this produces a localization which, when interpreted in the $\infty$-topos of spaces, agrees with the localization corresponding to $E$. Our approach generalizes the approach of [CSS] in two ways. First, by working in homotopy type theory, our construction can be interpreted in any $\infty$-topos. Second, while the local objects produced by [CSS] are always 1-types, our construction can produce $n$-types for any $n$. This is new, even in the $\infty$-topos of spaces. In addition, by making use of universes, our proof is very direct. Along the way, we prove many results about "small" types that are of independent interest. As an application, we give a new proof that separated localizations exist. We also give results that say when a localization with respect to a family of maps can be presented as localization with respect to a single map, and show that the simplicial model satisfies a strong form of the axiom of choice, which implies that sets cover and that the law of excluded middle holds.
We consider the problem of finite-time identification of linear dynamical systems from $T$ samples of a single trajectory. Recent results have predominantly focused on the setup where no structural assumption is made on the system matrix $A^* \in \mathbb{R}^{n \times n}$, and have consequently analyzed the ordinary least squares (OLS) estimator in detail. In contrast, we assume that prior structural information on $A^*$ is available, which can be captured in the form of a convex set $\mathcal{K}$ containing $A^*$. For the ensuing constrained least squares estimator, we derive non-asymptotic error bounds in the Frobenius norm that depend on the local size of $\mathcal{K}$ at $A^*$. To illustrate the usefulness of these results, we instantiate them for four examples, namely when (i) $A^*$ is sparse and $\mathcal{K}$ is a suitably scaled $\ell_1$ ball; (ii) $\mathcal{K}$ is a subspace; (iii) $\mathcal{K}$ consists of matrices each of which is formed by sampling a bivariate convex function on a uniform $n \times n$ grid (convex regression); (iv) $\mathcal{K}$ consists of matrices each row of which is formed by uniform sampling (with step size $1/T$) of a univariate Lipschitz function. In all these situations, we show that $A^*$ can be reliably estimated for values of $T$ much smaller than what is needed in the unconstrained setting.
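To make the estimator concrete, here is a minimal sketch of case (i) above: constrained least squares over an $\ell_1$ ball, fit to a single simulated trajectory by projected gradient descent. The ground-truth matrix, trajectory length, step size, and ball radius are all illustrative choices, not the paper's setup.

```python
# Minimal sketch: constrained least squares for x_{t+1} = A* x_t + noise,
# with K an l1-ball enforced by Euclidean projection (Duchi et al. style).
import numpy as np

def project_l1(v, radius):
    """Euclidean projection of a flat vector onto the l1-ball of given radius."""
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u) - radius
    rho = np.nonzero(u > cssv / (np.arange(len(u)) + 1))[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

rng = np.random.default_rng(0)
n, T = 20, 200
A_star = np.zeros((n, n))
idx = rng.choice(n * n, size=3 * n, replace=False)   # sparse ground truth
A_star.flat[idx] = rng.normal(scale=0.2, size=3 * n)

x = np.zeros(n)
X, Y = [], []
for _ in range(T):                                   # one single trajectory
    x_next = A_star @ x + rng.normal(size=n)
    X.append(x); Y.append(x_next); x = x_next
X, Y = np.array(X), np.array(Y)

radius = np.abs(A_star).sum()                        # oracle scaling of K
A = np.zeros((n, n))
step = 1.0 / np.linalg.norm(X.T @ X, 2)
for _ in range(500):                                 # projected gradient descent
    grad = (A @ X.T - Y.T) @ X                       # gradient of ||Y - X A^T||_F^2 / 2
    A = project_l1((A - step * grad).ravel(), radius).reshape(n, n)

print("Frobenius error:", np.linalg.norm(A - A_star))
```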
We generalize McDiarmid's inequality to functions with bounded differences on a high-probability set, using an extension argument. Such functions concentrate around their conditional expectations. We further extend these results to concentration in general metric spaces.
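For orientation, the classical inequality being generalized reads as follows; in the extension described above, the bounded-differences condition is only required to hold on a high-probability subset of the product space (the precise statement involves conditional expectations and is not reproduced here). For independent $X_1, \dots, X_n$ and a function $f$ satisfying $|f(x) - f(x')| \le c_i$ whenever $x$ and $x'$ differ only in the $i$-th coordinate,
\[
\Pr\bigl( f(X_1,\dots,X_n) - \mathbb{E}\, f(X_1,\dots,X_n) \ge t \bigr) \;\le\; \exp\!\left( -\frac{2t^2}{\sum_{i=1}^n c_i^2} \right).
\]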
Network reconstruction consists in determining the unobserved pairwise couplings between $N$ nodes given only observational data on the resulting behavior conditioned on those couplings -- typically a time series or independent samples from a graphical model. A major obstacle to the scalability of algorithms proposed for this problem is a seemingly unavoidable quadratic complexity of $\Omega(N^2)$, corresponding to the requirement that each possible pairwise coupling be considered at least once, despite the fact that most networks of interest are sparse, with a number of non-zero couplings that is only $O(N)$. Here we present a general algorithm, applicable to a broad range of reconstruction problems, that significantly outperforms this quadratic baseline. Our algorithm relies on a stochastic second neighbor search (Dong et al., 2011) that produces the best edge candidates with high probability, thus bypassing an exhaustive quadratic search. Under the conjecture that the second-neighbor search finishes in log-linear time (Baron & Darling, 2020; 2022), we show that our algorithm finishes in subquadratic time, with a data-dependent complexity loosely upper bounded by $O(N^{3/2}\log N)$ but with a more typical log-linear complexity of $O(N\log^2 N)$. In practice, we show that our algorithm runs many orders of magnitude faster than the quadratic baseline, in a manner consistent with our theoretical analysis, allows for easy parallelization, and thus enables the reconstruction of networks with hundreds of thousands and even millions of nodes and edges.
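A highly simplified sketch of the candidate-driven idea, assuming the pynndescent implementation of NN-descent (Dong et al., 2011): approximate nearest neighbors propose $O(Nk)$ candidate edges instead of all $O(N^2)$ pairs, and only the candidates are scored. The correlation-threshold scoring below is a toy stand-in for the paper's actual reconstruction objective.

```python
import numpy as np
from pynndescent import NNDescent   # pip install pynndescent

rng = np.random.default_rng(1)
N, m, k = 2000, 200, 10
signals = rng.normal(size=(N, m))            # per-node observations (toy data)
for i, j in [(0, 1), (2, 3)]:                # plant a few strongly coupled pairs
    signals[j] = signals[i] + 0.1 * rng.normal(size=m)

# Subquadratic candidate generation: approximate k-NN under correlation distance.
index = NNDescent(signals, n_neighbors=k, metric="correlation")
neighbors, _ = index.neighbor_graph

edges = set()
for i in range(N):
    for j in neighbors[i]:
        if i == j:
            continue
        # Score only the proposed candidates, never all N^2 pairs.
        r = np.corrcoef(signals[i], signals[j])[0, 1]
        if abs(r) > 0.5:
            edges.add((min(i, j), max(i, j)))
print(len(edges), "candidate couplings retained")
```

The point of the sketch is purely structural: the quadratic loop over all pairs is replaced by a loop over the $O(Nk)$ pairs that the second-neighbor search proposes.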
We develop an inferential toolkit for analyzing object-valued responses, i.e., data situated in general metric spaces, paired with Euclidean predictors within the conformal framework. To this end we introduce conditional profile average transport costs. Each object is represented by its distance profile, the one-dimensional distribution of probability mass falling into balls of increasing radius around the object, and two distance profiles are compared through the optimal transport cost of moving one to the other. The average transport cost from a given distance profile to all others is crucial for statistical inference in metric spaces and underpins the proposed conditional profile scores. A key feature of the proposed approach is to utilize the distribution of conditional profile average transport costs as a conformity score for general metric space-valued responses, which facilitates the construction of prediction sets by the split conformal algorithm. We derive the uniform convergence rate of the proposed conformity score estimators and establish asymptotic conditional validity for the prediction sets. The finite-sample performance for synthetic data in various metric spaces demonstrates that the proposed conditional profile score outperforms existing methods in terms of both coverage level and size of the resulting prediction sets, even in the special case of scalar, and thus Euclidean, responses. We also demonstrate the practical utility of conditional profile scores for network data from New York taxi trips and for compositional data reflecting energy sourcing of U.S. states.
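A minimal unconditional sketch of the two building blocks, distance profiles and average transport costs, using the one-dimensional Wasserstein distance between empirical profiles as the transport cost; the conditional version described above additionally weights by the Euclidean predictors, which is omitted here.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(2)
Y = rng.normal(size=(50, 3))                             # toy objects in R^3 as a stand-in metric space
D = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)     # pairwise distance matrix

# Distance profile of object i = empirical law of the distances {d(y_i, y_j)}_{j != i}.
profiles = [np.delete(D[i], i) for i in range(len(Y))]

# Average transport cost of profile i to all other profiles: objects whose
# profiles are cheap to transport everywhere are "central", which is what
# the proposed conformity score exploits.
avg_cost = np.array([
    np.mean([wasserstein_distance(profiles[i], profiles[j])
             for j in range(len(Y)) if j != i])
    for i in range(len(Y))
])
print(avg_cost.round(3))
```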
This work introduces a novel framework for dynamic factor model-based group-level analysis of time series data from multiple subjects, called GRoup Integrative DYnamic factor (GRIDY) models. The framework identifies and characterizes inter-subject similarities and differences between two pre-determined groups by considering a combination of group spatial information and individual temporal dynamics. Furthermore, it enables the identification of intra-subject similarities and differences over time by employing different model configurations for each subject. Methodologically, the framework combines a novel principal angle-based rank selection algorithm with a non-iterative integrative analysis framework. Inspired by simultaneous component analysis, this approach also reconstructs identifiable latent factor series with flexible covariance structures. The performance of the GRIDY models is evaluated through simulations conducted under various scenarios. An application is also presented comparing resting-state functional MRI data collected from multiple subjects in autism spectrum disorder and control groups.
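A toy illustration of the principal-angle ingredient of rank selection: count how many directions two subjects' factor loading subspaces share, i.e. how many principal angles fall below a tolerance. The threshold and the way the loadings are generated here are illustrative assumptions, not the GRIDY selection rule itself.

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(3)
common = rng.normal(size=(100, 2))                     # shared loading directions
L1 = np.hstack([common, rng.normal(size=(100, 2))])    # subject 1 loadings
L2 = np.hstack([common, rng.normal(size=(100, 3))])    # subject 2 loadings

angles = subspace_angles(L1, L2)                       # radians, descending order
shared_rank = int(np.sum(angles < 0.1))                # near-zero angle => shared direction
print("estimated shared rank:", shared_rank)           # expected: 2
```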
We initiate the study of Hamiltonian structure learning from real-time evolution: given the ability to apply $e^{-\mathrm{i} Ht}$ for an unknown local Hamiltonian $H = \sum_{a = 1}^m \lambda_a E_a$ on $n$ qubits, the goal is to recover $H$. This problem is already well-studied under the assumption that the interaction terms, $E_a$, are given, and only the interaction strengths, $\lambda_a$, are unknown. But is it possible to learn a local Hamiltonian without prior knowledge of its interaction structure? We present a new, general approach to Hamiltonian learning that not only solves the challenging structure learning variant, but also resolves other open questions in the area, all while achieving the gold standard of Heisenberg-limited scaling. In particular, our algorithm recovers the Hamiltonian to $\varepsilon$ error with an evolution time scaling as $1/\varepsilon$, and has the following appealing properties: (1) it does not need to know the Hamiltonian terms; (2) it works beyond the short-range setting, extending to any Hamiltonian $H$ where the sum of terms interacting with a qubit has bounded norm; (3) it evolves according to $H$ in constant-time increments $t$, thus achieving constant time resolution. To our knowledge, no prior algorithm with Heisenberg-limited scaling existed with even one of these properties. As an application, we can also learn Hamiltonians exhibiting power-law decay up to accuracy $\varepsilon$ with total evolution time beating the standard limit of $1/\varepsilon^2$.
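A dense-matrix illustration of the access model only: the learner may apply $e^{-\mathrm{i}Ht}$ for chosen $t$ and measure, without being told the terms $E_a$ or strengths $\lambda_a$. The toy Hamiltonian below (nearest-neighbor $ZZ$ couplings plus $X$ fields) and the chosen $t$ are illustrative; the learning algorithm itself is not reproduced.

```python
import numpy as np
from scipy.linalg import expm

I = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])

def pauli_string(ops, n):
    """Kronecker product placing the given single-qubit operators on n qubits."""
    M = np.array([[1.0]])
    for q in range(n):
        M = np.kron(M, ops.get(q, I))
    return M

n = 4
# A local Hamiltonian that is unknown to the learner in the setting above.
H = sum(0.7 * pauli_string({q: Z, q + 1: Z}, n) for q in range(n - 1))
H = H + sum(0.3 * pauli_string({q: X}, n) for q in range(n))

t = 0.5
U = expm(-1j * H * t)                  # one constant-time evolution increment
psi = np.zeros(2 ** n); psi[0] = 1.0   # |0...0>
psi_t = U @ psi
print("survival probability:", abs(psi_t[0]) ** 2)
```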
We develop a multiscale scanning method to find anomalies in a $d$-dimensional random field in the presence of nuisance parameters. This covers the common situation that either the baseline level or additional parameters such as the variance are unknown and have to be estimated from the data. We argue that state-of-the-art approaches to determining asymptotically correct critical values for multiscale scanning statistics will in general fail when such parameters are naively replaced by plug-in estimators. Instead, we propose estimating the nuisance parameters on the largest scale and using (only) smaller scales for multiscale scanning. We prove a uniform invariance principle for the resulting adjusted multiscale statistic (AMS), which is widely applicable and provides a computationally feasible way to simulate asymptotically correct critical values. We illustrate the implications of our theoretical results in a simulation study and in a real data example from super-resolution STED microscopy. This allows us to identify interesting regions inside a specimen in a pre-scan with controlled family-wise error rate.
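A one-dimensional toy version of the adjustment, under simplifying assumptions: the nuisance parameters (baseline mean and variance) are estimated once on the largest scale, and only strictly smaller scales enter the scan. The scale-dependent penalty below is the standard multiscale normalization; calibration of critical values via the invariance principle is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4096
y = rng.normal(loc=1.0, scale=2.0, size=n)     # unknown baseline and variance
y[1000:1040] += 3.0                            # a planted anomaly

mu_hat, sigma_hat = y.mean(), y.std()          # largest-scale nuisance estimates

stats = []
for h in [8, 16, 32, 64, 128]:                 # smaller scales only
    means = np.convolve(y, np.ones(h) / h, mode="valid")
    local = np.sqrt(h) * (means - mu_hat) / sigma_hat
    # Penalize each scale so the maxima across scales are comparable.
    stats.append(np.max(np.abs(local)) - np.sqrt(2 * np.log(n / h)))
print("adjusted multiscale statistic:", max(stats))
```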
In this paper, the construction of $C^{1}$ cubic quasi-interpolants on a three-direction mesh of $\mathbb{R}^{2}$ is addressed. The quasi-interpolating splines are defined by directly setting their Bernstein-B\'{e}zier coefficients relative to each triangle from point and gradient values in order to reproduce the polynomials of the highest possible degree. Moreover, additional global properties are imposed. Finally, we provide some numerical tests confirming the approximation properties.
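For orientation, on each triangle $T$ the cubic pieces are written in the standard Bernstein-B\'{e}zier form, and the quasi-interpolation rules prescribe the coefficients $c_{ijk}$ from point and gradient values:
\[
p|_{T} \;=\; \sum_{i+j+k=3} c_{ijk}\, \frac{3!}{i!\,j!\,k!}\, \lambda_1^{i}\lambda_2^{j}\lambda_3^{k},
\]
where $\lambda_1, \lambda_2, \lambda_3$ denote the barycentric coordinates with respect to $T$. The specific rules defining the $c_{ijk}$ are the content of the paper and are not reproduced here.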
This paper analyses conforming and nonconforming virtual element formulations of arbitrary polynomial degree on general polygonal meshes for the coupling of solid and fluid phases in deformable porous plates. The governing equations consist of one fourth-order equation for the transverse displacement of the middle surface coupled with a second-order equation for the pressure head relative to the solid, with mixed boundary conditions. We propose novel enrichment operators that connect nonconforming virtual element spaces of general degree to continuous Sobolev spaces. These operators satisfy additional orthogonality and best-approximation properties (referred to as a conforming companion operator in the context of finite element methods), which play an important role in the nonconforming methods. This paper proves a priori error estimates in best-approximation form, derives residual-based reliable and efficient a posteriori error estimates in appropriate norms, and shows that these error bounds are robust with respect to the main model parameters. The computational examples illustrate the numerical behaviour of the suggested virtual element discretisations and confirm the theoretical findings on different polygonal meshes with mixed boundary conditions.
We propose new, closely related methods for the numerical inversion of the $Z$-transform and the Wiener-Hopf factorization of functions on the unit circle, based on sinh-deformations of the contours of integration, corresponding changes of variables, and the simplified trapezoid rule. As applications, we consider the evaluation of high moments of probability distributions and the construction of causal filters. Programs in Matlab running on a Mac with moderate specifications achieve precision E-14 in several dozen microseconds and E-11 in several milliseconds, respectively.
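For reference, the baseline that the sinh-deformation accelerates is the plain trapezoid rule on the unit circle: $f_n = \frac{1}{2\pi i}\oint_{|z|=1} F(z)\, z^{n-1}\, dz$ discretized at $M$ equispaced nodes. The sketch below (in Python rather than the paper's Matlab) implements only this baseline; the deformed contour and change of variables that yield near machine precision with far fewer nodes are not reproduced.

```python
import numpy as np

def invert_z(F, n, M=512):
    # f_n = (1/2 pi) * integral over [0, 2 pi] of F(e^{i theta}) e^{i n theta} d theta,
    # approximated by the trapezoid rule at M equispaced nodes on the unit circle.
    theta = 2 * np.pi * np.arange(M) / M
    z = np.exp(1j * theta)
    return np.mean(F(z) * z ** n).real

a = 0.5
F = lambda z: z / (z - a)            # Z-transform of f_n = a^n
for n in range(5):
    print(n, invert_z(F, n), a ** n) # numerical vs. exact coefficients
```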