$\bf{Purpose}$: To assess whether Growth & Remodeling (G&R) theory could explain staphyloma formation from a local scleral weakening. $\bf{Methods}$: A finite element model of a healthy eye was reconstructed, including the following connective tissues: the lamina cribrosa, the peripapillary sclera, and the peripheral sclera. The scleral shell was modelled as a constrained mixture, consisting of an isotropic ground matrix and two collagen fiber families (circumferential and meridional). The homogenized constrained mixture model was employed to simulate the adaptation of the sclera to alterations in its biomechanical environment over a duration of 13.7 years. G&R processes were triggered by reducing the shear stiffness of the ground matrix in the peripapillary sclera and lamina cribrosa by 85%. Three distinct G&R scenarios were investigated: (1) low mass turnover rate in combination with transmural volumetric growth; (2) high mass turnover rate in combination with transmural volumetric growth; and (3) high mass turnover rate in combination with mass density growth. $\bf{Results}$: In scenario 1, we observed a significant outpouching of the posterior pole, closely resembling the shape of a Type-III staphyloma. Additionally, we found a notable change in scleral curvature and a thinning of the peripapillary sclera by 84%. In contrast, scenarios 2 and 3 exhibited less drastic deformations, with stable posterior staphylomas after approximately 7 years. $\bf{Conclusions}$: Our framework suggests that local scleral weakening is sufficient to trigger staphyloma formation under normal intraocular pressure. With patient-specific scleral geometries (obtainable via wide-field optical coherence tomography), our framework could aid in identifying individuals at risk of developing posterior staphylomas.
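For readers unfamiliar with constrained mixture models, a schematic strain-energy split for a sclera described as an isotropic ground matrix plus two collagen fiber families is shown below (illustrative only; the constitutive laws, mass fractions $\phi$, and parameters used in the study may differ):
$$ \Psi \;=\; \phi^{\mathrm{gm}}\,\frac{\mu}{2}\,(\bar{I}_1 - 3) \;+\; \sum_{i\in\{\mathrm{circ},\,\mathrm{mer}\}} \phi^{i}\,\frac{k_1}{2k_2}\Big[\exp\!\big(k_2\,(\lambda_i^2-1)^2\big)-1\Big], $$
where $\mu$ is the shear stiffness of the isotropic ground matrix (the quantity reduced by 85% to trigger G&R) and $\lambda_i$ is the stretch of each collagen fiber family.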
We give the sandwiched R\'enyi divergence of order $\alpha\in(\frac{1}{2},1)$, as well as its induced quantum information quantities, an operational interpretation in the characterization of the exact strong converse exponents of quantum tasks. Specifically, we consider (a) smoothing of the max-relative entropy, (b) quantum privacy amplification, and (c) quantum information decoupling. We solve the problem of determining the exact strong converse exponents for these three tasks, with the performance being measured by the fidelity or purified distance. The results are given in terms of the sandwiched R\'enyi divergence of order $\alpha\in(\frac{1}{2},1)$, and its induced quantum R\'enyi conditional entropy and quantum R\'enyi mutual information. This is the first time that a precise operational meaning has been found for the sandwiched R\'enyi divergence with R\'enyi parameter $\alpha$ in the interval $(\frac{1}{2},1)$.
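For reference, the sandwiched R\'enyi divergence of order $\alpha$ used above is defined, for states $\rho$ and $\sigma$ with suitable support conditions, as
$$ \widetilde{D}_{\alpha}(\rho\|\sigma) \;=\; \frac{1}{\alpha-1}\,\log \operatorname{Tr}\!\Big[\big(\sigma^{\frac{1-\alpha}{2\alpha}}\,\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\big)^{\alpha}\Big], $$
and the induced R\'enyi conditional entropy and mutual information are obtained by optimizing the second argument over suitable product states.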
The classical Kosambi-Cartan-Chern (KCC) theory developed in differential geometry provides a powerful method for analyzing the behavior of dynamical systems. In KCC theory, the properties of a dynamical system are described in terms of five geometrical invariants, the second of which corresponds to the so-called Jacobi stability of the system. Unlike Lyapunov stability, which has been studied extensively in the literature, Jacobi stability has been investigated only more recently, using geometrical concepts and tools. The existing work on Jacobi stability analysis remains theoretical, however, and its algorithmic and symbolic treatment has yet to be addressed. In this paper, we initiate the study of this problem for a class of ODE systems of arbitrary dimension and propose two algorithmic schemes that use symbolic computation to check whether a nonlinear dynamical system may exhibit Jacobi stability. The first scheme, based on the complex root structure of a characteristic polynomial and on the method of quantifier elimination, detects whether the given dynamical system admits Jacobi stability. The second scheme exploits the method of semi-algebraic system solving and allows one to determine conditions on the parameters under which a given dynamical system has a prescribed number of Jacobi stable fixed points. Several examples are presented to demonstrate the effectiveness of the proposed algorithmic schemes.
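As background (standard in the KCC literature rather than specific to the algorithms above), writing the system as a set of second-order equations $\ddot{x}^i + 2G^i(x,\dot{x},t) = 0$ with $y^i = \dot{x}^i$, Jacobi stability is decided by the second KCC invariant, the deviation curvature tensor
$$ P^i_{\;j} \;=\; -2\frac{\partial G^i}{\partial x^j} \;-\; 2G^{\ell}\,G^i_{j\ell} \;+\; y^{\ell}\frac{\partial N^i_{\;j}}{\partial x^{\ell}} \;+\; N^i_{\;\ell}N^{\ell}_{\;j} \;+\; \frac{\partial N^i_{\;j}}{\partial t}, \qquad N^i_{\;j}=\frac{\partial G^i}{\partial y^j},\quad G^i_{j\ell}=\frac{\partial N^i_{\;j}}{\partial y^{\ell}}: $$
the trajectories are Jacobi stable near a point if and only if the real parts of the eigenvalues of $P^i_{\;j}$ there are strictly negative.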
The sign-constrained Stiefel manifold in $\mathbb{R}^{n\times r}$ is a segment of the Stiefel manifold in which some columns of the matrices have fixed signs (nonnegative or nonpositive). It includes the nonnegative Stiefel manifold as a special case. We present global and local error bounds with easily computable residual functions and explicit coefficients that bound the distance from matrices in $\mathbb{R}^{n\times r}$ to the sign-constrained Stiefel manifold. Moreover, we show that, under some mild conditions, the error bounds cannot be improved except for the multiplicative constants; this explains why two square-root terms are necessary in the bounds when $1< r <n$, and why the $\ell_1$ norm can be used for the sign constraints when $r = n$ and for the orthogonality constraints when $r = 1$. The error bounds are applied to derive exact penalty methods for minimizing a Lipschitz continuous function with orthogonality and sign constraints.
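As an illustration of what easily computable residual functions can look like in practice (these particular residuals and the NumPy sketch are our own illustrative choices, not the paper's exact statement), one may measure the orthogonality violation and the sign violation separately:

import numpy as np

def stiefel_residuals(X, nonneg_cols):
    """Illustrative residuals for the sign-constrained Stiefel manifold.

    X           : (n, r) array.
    nonneg_cols : indices of columns constrained to be nonnegative.
    Returns the orthogonality residual ||X^T X - I||_F and the
    sign-violation residual ||min(X[:, nonneg_cols], 0)||_F.
    """
    r = X.shape[1]
    orth_res = np.linalg.norm(X.T @ X - np.eye(r), ord="fro")
    sign_res = np.linalg.norm(np.minimum(X[:, nonneg_cols], 0.0), ord="fro")
    return orth_res, sign_res

# Example: a random matrix is generally far from the manifold.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
print(stiefel_residuals(X, nonneg_cols=[0, 1]))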
Boundary element methods based on surface integral equations (SIEs) are widely used for analyzing electromagnetic scattering scenarios. However, after discretization of the SIEs, the spectrum and eigenvectors of the boundary element matrices are usually not representative of the spectrum and eigenfunctions of the underlying surface integral operators, which can be problematic for methods that rely heavily on spectral properties. To address this issue, we delineate efficient algorithms for computing the matrix square roots and inverse square roots of the Gram matrices corresponding to the discretization scheme, which can be used to reveal the spectrum of standard electromagnetic integral operators. The algorithms, which are based on properly chosen expansions of the square root and inverse square root functions, are quite effective when applied to several of the most relevant Gram matrices used for boundary element discretizations in electromagnetics. Tables containing different sets of expansion coefficients are provided, along with comparative numerical experiments that evidence the advantages and disadvantages of the different approaches. In addition, to demonstrate the spectrum-revealing properties of the proposed techniques, they are applied to the discretization of the problem of scattering by a sphere, for which the analytic spectrum is known.
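To give a flavor of such square-root computations, the coupled Newton-Schulz scheme below produces both $G^{1/2}$ and $G^{-1/2}$ of a symmetric positive definite Gram matrix using only matrix products (shown purely as an illustration; the expansions and tabulated coefficients proposed in the paper are different and are not reproduced here):

import numpy as np

def sqrt_and_invsqrt(G, iters=30):
    """Coupled Newton-Schulz iteration for G^(1/2) and G^(-1/2).

    G must be symmetric positive definite.  The matrix is first scaled
    so that its spectrum lies in (0, 1], which guarantees convergence.
    """
    n = G.shape[0]
    I = np.eye(n)
    c = np.linalg.norm(G, 2)          # spectral norm, used for scaling
    Y, Z = G / c, I.copy()            # Y -> (G/c)^(1/2), Z -> (G/c)^(-1/2)
    for _ in range(iters):
        T = 0.5 * (3.0 * I - Z @ Y)
        Y, Z = Y @ T, T @ Z
    return Y * np.sqrt(c), Z / np.sqrt(c)

# Example: a small, well-conditioned random Gram matrix.
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
G = B @ B.T + 6 * np.eye(6)
S, Sinv = sqrt_and_invsqrt(G)
print(np.linalg.norm(S @ S - G), np.linalg.norm(Sinv @ G @ Sinv - np.eye(6)))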
We introduce and characterize the operational diversity order (ODO) in fading channels, as a proxy for the classical notion of diversity order at an arbitrary operational signal-to-noise ratio (SNR). This definition brings relevant insights in a number of cases: (i) we quantify that in line-of-sight scenarios an increased diversity order is attainable compared to that achieved asymptotically; (ii) this effect is attenuated, but still visible, in the presence of an additional dominant specular component; (iii) we confirm that the decay slope in Rayleigh product channels increases very slowly and never reaches unit slope for finite values of SNR.
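For context, the classical diversity order is the asymptotic negative slope of the outage probability on a log-log scale, and a finite-SNR proxy of the kind alluded to above can be written as a local slope (shown here schematically; the paper's exact definition of the ODO may differ):
$$ d_{\infty} \;=\; -\lim_{\bar{\gamma}\to\infty}\frac{\log P_{\mathrm{out}}(\bar{\gamma})}{\log \bar{\gamma}}, \qquad d(\bar{\gamma}) \;=\; -\frac{\partial \log P_{\mathrm{out}}(\bar{\gamma})}{\partial \log \bar{\gamma}}, $$
where $\bar{\gamma}$ denotes the operational (average) SNR.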
We study active learning methods for single index models of the form $F({\mathbf x}) = f(\langle {\mathbf w}, {\mathbf x}\rangle)$, where $f:\mathbb{R} \to \mathbb{R}$ and ${\mathbf x,\mathbf w} \in \mathbb{R}^d$. In addition to their theoretical interest as simple examples of non-linear neural networks, single index models have received significant recent attention due to applications in scientific machine learning, such as surrogate modeling for partial differential equations (PDEs). Such applications require sample-efficient active learning methods that are robust to adversarial noise, i.e., that work even in the challenging agnostic learning setting. We provide two main results on agnostic active learning of single index models. First, when $f$ is known and Lipschitz, we show that $\tilde{O}(d)$ samples collected via statistical leverage score sampling are sufficient to learn a near-optimal single index model. Leverage score sampling is simple to implement, efficient, and already widely used for actively learning linear models. Our result requires no assumptions on the data distribution, is optimal up to log factors, and improves quadratically on a recent ${O}(d^{2})$ bound of \cite{gajjar2023active}. Second, we show that $\tilde{O}(d)$ samples suffice even in the more difficult setting when $f$ is \emph{unknown}. Our results leverage tools from high dimensional probability, including Dudley's inequality and dual Sudakov minoration, as well as a novel, distribution-aware discretization of the class of Lipschitz functions.
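A minimal sketch of statistical leverage score sampling for a tall design matrix is shown below (our own illustration of the general technique with the standard reweighting; the paper's procedure for single index models involves further ingredients):

import numpy as np

def leverage_score_sample(X, m, rng=None):
    """Sample m rows of X with probability proportional to their leverage scores.

    Returns the sampled row indices and the usual 1/sqrt(m * p_i)
    reweighting factors that keep the subsampled least-squares
    objective unbiased.
    """
    rng = np.random.default_rng(rng)
    Q, _ = np.linalg.qr(X)                 # orthonormal basis of the column span
    scores = np.sum(Q**2, axis=1)          # leverage score of each row
    probs = scores / scores.sum()
    idx = rng.choice(X.shape[0], size=m, replace=True, p=probs)
    weights = 1.0 / np.sqrt(m * probs[idx])
    return idx, weights

# Example: subsample 50 rows of a tall random design.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 10))
idx, w = leverage_score_sample(X, m=50, rng=rng)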
This paper presents sufficient conditions for the stability and $\ell_2$-gain performance of recurrent neural networks (RNNs) with ReLU activation functions. These conditions are derived by combining Lyapunov/dissipativity theory with Quadratic Constraints (QCs) satisfied by repeated ReLUs. We write a general class of QCs for repeated ReLUs using known properties of the scalar ReLU. Our stability and performance condition uses these QCs along with a "lifted" representation of the ReLU RNN. We show that the positive homogeneity property satisfied by a scalar ReLU does not expand the class of QCs for the repeated ReLU. We present examples to demonstrate the stability/performance condition and study the effect of the lifting horizon.
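For concreteness, the known scalar properties from which such quadratic constraints are typically built (a standard list; the general QC class in the paper is parameterized more broadly) are, for $v = \max(0, w)$:
$$ v \ge 0, \qquad v \ge w, \qquad v\,(v - w) = 0, \qquad \max(0,\beta w) = \beta \max(0, w) \ \text{ for all } \beta \ge 0. $$
Nonnegative combinations and products of these relations, applied channel-wise and across pairs of channels of the repeated ReLU, yield quadratic constraints of the form $\begin{bmatrix} w \\ v \end{bmatrix}^{\top} M \begin{bmatrix} w \\ v \end{bmatrix} \ge 0$ on the stacked input-output pair.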
We consider the construction of maximal families of polynomials over the finite field $\mathbb{F}_q$, all having the same degree $n$ and a nonzero constant term, where the degree of the GCD of any two polynomials is $d$ with $1 \le d\le n$. The motivation for this problem lies in a recent construction for subspace codes based on cellular automata. More precisely, the minimum distance of such subspace codes relates to the maximum degree $d$ of the pairwise GCD in this family of polynomials. Hence, characterizing the maximal families of such polynomials is equivalent to determining the maximum cardinality of the corresponding subspace codes for a given minimum distance. We first show a lower bound on the cardinality of such families, and then focus on the specific case where $d=1$. There, we characterize the maximal families of polynomials over the binary field $\mathbb{F}_2$. Our findings prompt several more open questions, which we plan to address in an extended version of this work.
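To make the objects concrete, the following small Python/SymPy sketch greedily builds such a family over $\mathbb{F}_2$ by brute force (purely illustrative and only feasible for tiny parameters; the characterization in the paper is structural rather than enumerative, and the function names are ours):

from itertools import product
from sympy import GF, Poly, symbols

x = symbols("x")

def binary_polys(n):
    """All degree-n polynomials over F_2 with a nonzero constant term."""
    for mid in product([0, 1], repeat=n - 1):
        # leading coefficient 1, constant term 1, free middle coefficients
        yield Poly([1, *mid, 1], x, domain=GF(2))

def greedy_family(n, d):
    """Greedily collect polynomials so that every pair has GCD of degree exactly d."""
    family = []
    for p in binary_polys(n):
        if all(p.gcd(q).degree() == d for q in family):
            family.append(p)
    return family

# Example: a family of degree-4 binary polynomials with pairwise GCD of degree 1.
for f in greedy_family(4, 1):
    print(f.as_expr())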
The Learning With Errors ($\mathsf{LWE}$) problem asks to find $\mathbf{s}$ from an input of the form $(\mathbf{A}, \mathbf{b} = \mathbf{A}\mathbf{s}+\mathbf{e}) \in (\mathbb{Z}/q\mathbb{Z})^{m \times n} \times (\mathbb{Z}/q\mathbb{Z})^{m}$, for a vector $\mathbf{e}$ that has small-magnitude entries. In this work, we do not focus on solving $\mathsf{LWE}$ but on the task of sampling instances. As these are extremely sparse in their range, it may seem plausible that the only way to proceed is to first create $\mathbf{s}$ and $\mathbf{e}$ and then set $\mathbf{b} = \mathbf{A}\mathbf{s}+\mathbf{e}$. In particular, such an instance sampler knows the solution. This raises the question of whether it is possible to obliviously sample $(\mathbf{A}, \mathbf{A}\mathbf{s}+\mathbf{e})$, namely, without knowing the underlying $\mathbf{s}$. A variant of the assumption that oblivious $\mathsf{LWE}$ sampling is hard has been used in a series of works to analyze the security of candidate constructions of Succinct Non-interactive Arguments of Knowledge (SNARKs). As the assumption is related to $\mathsf{LWE}$, these SNARKs have been conjectured to be secure in the presence of quantum adversaries. Our main result is a quantum polynomial-time algorithm that samples well-distributed $\mathsf{LWE}$ instances while provably not knowing the solution, under the assumption that $\mathsf{LWE}$ is hard. Moreover, the approach works for a vast range of $\mathsf{LWE}$ parametrizations, including those used in the above-mentioned SNARKs. This invalidates the assumptions used in their security analyses, although it does not yield attacks against the constructions themselves.
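For reference, the straightforward solution-aware sampler described above can be written as follows (a minimal NumPy sketch with toy parameters and a rounded-Gaussian error; actual parametrizations and error distributions differ):

import numpy as np

def sample_lwe_instance(n=512, m=1024, q=3329, sigma=3.2, rng=None):
    """Standard (non-oblivious) LWE instance sampler: it knows the secret s.

    Toy parametrization, for illustration only.
    """
    rng = np.random.default_rng(rng)
    A = rng.integers(0, q, size=(m, n))                        # uniform matrix
    s = rng.integers(0, q, size=n)                             # secret
    e = np.rint(rng.normal(0.0, sigma, size=m)).astype(int)    # small-magnitude error
    b = (A @ s + e) % q
    return (A, b), s                 # the sampler inevitably holds the solution s

(A, b), s = sample_lwe_instance(rng=0)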
We investigate the decidability of the monadic second-order (MSO) theory of the structure $\langle \mathbb{N};<,P_1, \ldots,P_k \rangle$, for various unary predicates $P_1,\ldots,P_k \subseteq \mathbb{N}$. We focus in particular on "arithmetic" predicates arising in the study of linear recurrence sequences, such as fixed-base powers $\mathsf{Pow}_k = \{k^n : n \in \mathbb{N}\}$, $k$-th powers $\mathsf{N}_k = \{n^k : n \in \mathbb{N}\}$, and the set of terms of the Fibonacci sequence $\mathsf{Fib} = \{0,1,2,3,5,8,13,\ldots\}$ (and similarly for other linear recurrence sequences having a single, non-repeated, dominant characteristic root). We obtain several new unconditional and conditional decidability results, a select sample of which are the following:
$\bullet$ The MSO theory of $\langle \mathbb{N};<,\mathsf{Pow}_2, \mathsf{Fib} \rangle$ is decidable;
$\bullet$ The MSO theory of $\langle \mathbb{N};<, \mathsf{Pow}_2, \mathsf{Pow}_3, \mathsf{Pow}_6 \rangle$ is decidable;
$\bullet$ The MSO theory of $\langle \mathbb{N};<, \mathsf{Pow}_2, \mathsf{Pow}_3, \mathsf{Pow}_5 \rangle$ is decidable assuming Schanuel's conjecture;
$\bullet$ The MSO theory of $\langle \mathbb{N};<, \mathsf{Pow}_4, \mathsf{N}_2 \rangle$ is decidable;
$\bullet$ The MSO theory of $\langle \mathbb{N};<, \mathsf{Pow}_2, \mathsf{N}_2 \rangle$ is Turing-equivalent to the MSO theory of $\langle \mathbb{N};<,S \rangle$, where $S$ is the predicate corresponding to the binary expansion of $\sqrt{2}$. (As the binary expansion of $\sqrt{2}$ is widely believed to be normal, the corresponding MSO theory is in turn expected to be decidable.)
These results are obtained by exploiting and combining techniques from dynamical systems, number theory, and automata theory.