In this paper, we prove a Logarithmic Conjugation Theorem on finitely connected tori. The theorem states that a harmonic function can be written as the sum of the real part of a function whose derivative is analytic and a finite sum of terms involving the logarithm of the modulus of a modified Weierstrass sigma function. We implement the method using arbitrary-precision arithmetic and use the result to compute approximate solutions to the Laplace problem and the Steklov eigenvalue problem. Using a posteriori estimation, we show that the solution of the Laplace problem on a torus with a few circular holes has error less than $10^{-100}$ using a few hundred degrees of freedom, and that the Steklov eigenvalues have similarly small errors.
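Schematically (a sketch of the statement only, with notation chosen here for illustration: $\tilde\sigma$ denotes a modified Weierstrass sigma function, and the points $a_j$ and real constants $c_j$ are associated with the holes of the domain), the representation has the shape
$$ u(z) = \operatorname{Re} f(z) + \sum_{j=1}^{n} c_j \log\left|\tilde\sigma(z - a_j)\right|, $$
where $f$ is a function whose derivative is analytic on the domain.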
We combine the recent relaxation approach with multiderivative Runge-Kutta methods to preserve conservation or dissipation of entropy functionals for ordinary and partial differential equations. Relaxation methods are minor modifications of explicit and implicit schemes, requiring only the solution of a single scalar equation per time step in addition to the baseline scheme. We demonstrate the robustness of the resulting methods for a range of test problems including the 3D compressible Euler equations. In particular, we point out improved error growth rates for certain entropy-conservative problems including nonlinear dispersive wave equations.
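As a minimal sketch of the relaxation idea (not the paper's implementation: the harmonic oscillator, classical RK4, and a bisection root-finder are illustrative choices), one replaces the update $u_{n+1} = u_n + \Delta t \sum_i b_i k_i$ by $u_{n+1} = u_n + \gamma \Delta t \sum_i b_i k_i$ and solves a single scalar equation for $\gamma$ per step so that a quadratic entropy is conserved exactly:

```python
import numpy as np

def f(u):
    # Harmonic oscillator u = (q, p): q' = p, p' = -q; energy (q^2 + p^2)/2 is conserved.
    return np.array([u[1], -u[0]])

def energy(u):
    return 0.5 * np.dot(u, u)

def rk4_increment(u, dt):
    # Classical RK4 update direction: sum_i b_i k_i.
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def relaxation_step(u, dt):
    d = rk4_increment(u, dt)
    # Solve the scalar equation r(gamma) = E(u + gamma*dt*d) - E(u) = 0 by
    # bisection; gamma = 0 is the trivial root, we bracket the nontrivial
    # root, which lies close to 1 for small dt.
    def r(gamma):
        return energy(u + gamma * dt * d) - energy(u)
    lo, hi = 0.5, 1.5
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if r(lo) * r(mid) <= 0:
            hi = mid
        else:
            lo = mid
    gamma = 0.5 * (lo + hi)
    return u + gamma * dt * d

u = np.array([1.0, 0.0])
E0 = energy(u)
for _ in range(1000):
    u = relaxation_step(u, 0.1)
print(abs(energy(u) - E0))  # energy error near machine precision
```

Since $\gamma$ stays close to 1, the order of the baseline scheme is essentially retained, while the energy drift of plain RK4 is removed.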
We study the design of embeddings into Euclidean space with outliers. Given a metric space $(X,d)$ and an integer $k$, the goal is to embed all but $k$ points in $X$ (called the ``outliers'') into $\ell_2$ with the smallest possible distortion $c$. Finding the optimal distortion $c$ for a given outlier set size $k$, and finding the smallest $k$ for a given target distortion $c$, are both NP-hard problems. In fact, it is UGC-hard to approximate $k$ to within a factor smaller than $2$ even when the metric sans outliers is isometrically embeddable into $\ell_2$. We therefore consider bi-criteria approximations. Our main result is a polynomial-time algorithm that approximates the outlier set size to within an $O(\log^2 k)$ factor and the distortion to within a constant factor. The main technical component of our result is an approach for constructing Lipschitz extensions of embeddings into Banach spaces (such as $\ell_p$ spaces). We consider a stronger version of Lipschitz extension that we call a \textit{nested composition of embeddings}: given a low-distortion embedding of a subset $S$ of the metric space $X$, our goal is to extend this embedding to all of $X$ such that the distortion over $S$ is preserved, while the distortion over the remaining pairs of points in $X$ is bounded by a function of the size of $X\setminus S$. Prior work on Lipschitz extension considers settings where the size of $X$ is potentially much larger than that of $S$ and the expansion bounds depend on $|S|$. In our setting, the set $S$ is nearly all of $X$ and the remaining set $X\setminus S$, i.e., the outliers, is small. We achieve an expansion bound that is logarithmic in $|X\setminus S|$.
This paper introduces an assumption-lean method that constructs valid and efficient lower predictive bounds (LPBs) for survival times with censored data. We build on recent work by Cand\`es et al. (2021), whose approach first subsets the data to discard any data points with early censoring times, and then uses a reweighting technique (namely, weighted conformal inference (Tibshirani et al., 2019)) to correct for the distribution shift introduced by this subsetting procedure. For our new method, instead of using a fixed threshold for the censoring time when subsetting the data, we allow a covariate-dependent and data-adaptive subsetting step, which better captures the heterogeneity of the censoring mechanism. As a result, our method can lead to LPBs that are less conservative and more informative. We show that, in the Type I right-censoring setting, if either the censoring mechanism or the conditional quantile of the survival time is well estimated, our proposed procedure achieves nearly exact marginal coverage; in the latter case we additionally obtain approximate conditional coverage. We evaluate the validity and efficiency of the proposed algorithm in numerical experiments, illustrating its advantage over competing methods. Finally, our method is applied to a real dataset to generate LPBs for users' active times on a mobile app.
Imaging through perturbed multimode fibres based on deep learning has been widely researched. However, existing methods mainly use target-speckle pairs collected under different configurations, and it is challenging to reconstruct targets without a trained network. In this paper, we propose a physics-assisted, unsupervised, learning-based fibre imaging scheme. The physical prior simplifies the mapping between the speckle pattern and the target image, thereby reducing the computational complexity. The unsupervised network learns target features along the optimization direction provided by the physical prior, so the online reconstruction requires only a few speckle patterns and unpaired targets. The proposed scheme also improves the generalization ability of learning-based methods in perturbed multimode fibres and has the potential to extend the applications of multimode fibre imaging.
We construct a bipartite generalization of Alon and Szegedy's nearly orthogonal vectors, thereby obtaining strong bounds for several extremal problems involving the Lov\'asz theta function, vector chromatic number, minimum semidefinite rank, nonnegative rank, and extension complexity of polytopes. In particular, we derive a couple of general lower bounds for the vector chromatic number which may be of independent interest.
Quantum computing has recently emerged as a transformative technology. Yet, its promised advantages rely on efficiently translating quantum operations into viable physical realizations. In this work, we use generative machine learning models, specifically denoising diffusion models (DMs), to facilitate this transformation. Leveraging text conditioning, we steer the model to produce desired quantum operations within gate-based quantum circuits. Notably, DMs allow us to sidestep, during training, the exponential overhead inherent in the classical simulation of quantum dynamics -- a persistent bottleneck in preceding ML techniques. We demonstrate the model's capabilities across two tasks: entanglement generation and unitary compilation. The model excels at generating new circuits and supports typical DM extensions such as masking and editing to, for instance, align the circuit generation with the constraints of the targeted quantum device. Given their flexibility and generalization abilities, we envision DMs as pivotal in quantum circuit synthesis, enhancing both practical applications and insights into theoretical quantum computation.
In this paper we propose a definition of the distributional Riemann curvature tensor in dimension $N\geq 2$ when the underlying metric tensor $g$, defined on a triangulation $\mathcal{T}$, possesses only single-valued tangential-tangential components on codimension-1 simplices. We analyze the convergence of the curvature approximation in the $H^{-2}$-norm for a sequence of interpolants $g_h$ of polynomial order $k\geq 0$ of a smooth metric $g$. We show that for dimension $N=2$ convergence rates of order $\mathcal{O}(h^{k+1})$ are obtained, whereas for $N\geq 3$ convergence holds only when $k\geq 1$. Numerical examples demonstrate that our theoretical results are sharp. By choosing appropriate test functions, we show that the distributional Gauss curvature in 2D and the distributional scalar curvature in any dimension are recovered. Further, a first definition of the distributional Ricci curvature tensor in arbitrary dimension is derived, to which our analysis applies.
Weights are geometrical degrees of freedom that allow one to generalise Lagrangian finite elements. They are defined through integrals over specific supports, are well understood in terms of differential forms and integration, and lie within the framework of finite element exterior calculus. In this work we exploit this formalism with the aim of identifying supports that are appealing for finite element approximation. To do so, we study the associated parametric matrix-sequences, with the matrix order tending to infinity as the mesh size tends to zero. We describe the conditioning and the global spectral behavior in terms of the standard Toeplitz machinery and GLT theory, leading to the identification of optimal choices of weights. Moreover, we propose and test ad hoc preconditioners, depending on the discretization parameters, in connection with the conjugate gradient method. The model problem we consider is a one-dimensional Laplacian, both with constant and non-constant coefficients. Numerical visualizations and experimental tests are reported and critically discussed, demonstrating the advantages of weight-induced bases over standard Lagrangian ones. Open problems and future steps are listed in the concluding section, especially regarding the multidimensional case.
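As a concrete instance of the Toeplitz viewpoint (illustrative only: this is the standard Lagrangian $P_1$ stiffness matrix for the constant-coefficient one-dimensional Laplacian, not the weight-induced bases studied in the paper), the matrix $\mathrm{tridiag}(-1,2,-1)$ has eigenvalues sampling the symbol $f(\theta)=2-2\cos\theta$, so its condition number grows quadratically with the matrix order:

```python
import numpy as np

# 1-D Laplacian stiffness matrix: the Toeplitz matrix tridiag(-1, 2, -1).
def laplacian_1d(n):
    return (np.diag(2.0 * np.ones(n))
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1))

for n in [50, 100, 200]:
    A = laplacian_1d(n)
    # Exact eigenvalues sample the symbol f(theta) = 2 - 2*cos(theta)
    # at theta_k = k*pi/(n+1), k = 1, ..., n.
    exact = 2.0 - 2.0 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
    computed = np.sort(np.linalg.eigvalsh(A))
    assert np.allclose(computed, np.sort(exact))
    print(n, np.linalg.cond(A))  # roughly quadruples as n doubles
```

Since $f$ has a zero of order 2 at $\theta=0$, the smallest eigenvalue behaves like $(\pi/(n+1))^2$, which is the source of the $O(n^2)$ conditioning that the preconditioners are designed to counteract.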
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparison to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
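To illustrate the mechanics (a toy sketch: the coordinates, the exponential-decay similarity, and the Luce-style normalization below are our illustrative choices, not the study's fitted model), Shepard's law says that perceived similarity decays exponentially with distance in a psychological similarity space, which yields a simple predictor of the label an explainee infers from an explanation:

```python
import numpy as np

# Shepard's universal law: perceived similarity decays exponentially with
# distance in an internal "psychological" (similarity) space.
def similarity(x, y):
    return np.exp(-np.linalg.norm(x - y))

# Toy model of explainee inference: given the explanation the human would
# give for each candidate label (a point in similarity space) and the AI's
# observed explanation, predict which label the human infers the AI chose.
def predicted_label(ai_explanation, human_explanations):
    sims = np.array([similarity(ai_explanation, h) for h in human_explanations])
    probs = sims / sims.sum()  # Luce-choice normalization over candidates
    return int(np.argmax(probs)), probs

# Hypothetical 2-D similarity-space coordinates for three candidate labels.
human = [np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 3.0])]
ai = np.array([1.8, 0.2])
label, probs = predicted_label(ai, human)
print(label)  # -> 1: the AI's explanation is most similar to label 1's
```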
This paper does not describe a working system. Instead, it presents a single idea about representation which allows advances made by several different groups to be combined into an imaginary system called GLOM. The advances include transformers, neural fields, contrastive representation learning, distillation and capsules. GLOM answers the question: How can a neural network with a fixed architecture parse an image into a part-whole hierarchy which has a different structure for each image? The idea is simply to use islands of identical vectors to represent the nodes in the parse tree. If GLOM can be made to work, it should significantly improve the interpretability of the representations produced by transformer-like systems when applied to vision or language.