For every $g\geq 2$ we distinguish real period matrices of real Riemann surfaces of topological type $(g,0,0)$ from those of topological type $(g,k,1)$, with $k$ equal to one for $g$ even and to two for $g$ odd (Theorem B). To this end, we exhibit new invariants of real principally polarized abelian varieties of orthosymmetric type (Theorem A.1). As a direct application, we obtain an exhaustive criterion for deciding whether a real Riemann surface has real points, requiring only one of its real period matrices and the evaluation of the sign of at most one (real) theta constant (Theorem C). Some of our real algebro-geometric tools first appeared in the theory of nonlinear integrable partial differential equations.
This work considers the low-rank approximation of a matrix $A(t)$ depending on a parameter $t$ in a compact set $D \subset \mathbb{R}^d$. Application areas that give rise to such problems include computational statistics and dynamical systems. Randomized algorithms are an increasingly popular approach for performing low-rank approximation and they usually proceed by multiplying the matrix with random dimension reduction matrices (DRMs). Applying such algorithms directly to $A(t)$ would involve different, independent DRMs for every $t$, which is not only expensive but also leads to inherently non-smooth approximations. In this work, we propose to use constant DRMs, that is, $A(t)$ is multiplied with the same DRM for every $t$. The resulting parameter-dependent extensions of two popular randomized algorithms, the randomized singular value decomposition and the generalized Nystr\"{o}m method, are computationally attractive, especially when $A(t)$ admits an affine linear decomposition with respect to $t$. We perform a probabilistic analysis for both algorithms, deriving bounds on the expected value as well as failure probabilities for the approximation error when using Gaussian random DRMs. Both the theoretical results and the numerical experiments show that the use of constant DRMs does not impair the effectiveness of these algorithms; our methods reliably return quasi-best low-rank approximations.
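To illustrate the basic mechanism, the following sketch applies a randomized SVD with a single, constant Gaussian DRM shared across all parameter values; the callable interface `A_of_t` and all parameter choices are illustrative assumptions, not the paper's implementation, and the generalized Nyström variant is not shown.

```python
import numpy as np

def parametric_randomized_svd(A_of_t, t_values, rank, oversampling=5, seed=0):
    """Low-rank approximations of A(t) for several t, reusing one Gaussian DRM.

    A_of_t: callable returning the m x n matrix A(t) (hypothetical interface).
    Returns a dict mapping t to a factored approximation (Q, B) with A(t) ~= Q @ B.
    """
    rng = np.random.default_rng(seed)
    n = A_of_t(t_values[0]).shape[1]
    Omega = rng.standard_normal((n, rank + oversampling))  # constant DRM, drawn once
    approximations = {}
    for t in t_values:
        A_t = A_of_t(t)
        Q, _ = np.linalg.qr(A_t @ Omega)   # range sketch with the shared DRM
        B = Q.T @ A_t                      # project onto the captured range
        approximations[t] = (Q, B)
    return approximations

# Example: an affine linear parameter dependence A(t) = A0 + t * A1.
rng = np.random.default_rng(1)
A0, A1 = rng.standard_normal((200, 150)), rng.standard_normal((200, 150))
approx = parametric_randomized_svd(lambda t: A0 + t * A1, [0.0, 0.5, 1.0], rank=20)
```

Because the same `Omega` is used for every `t`, the approximation inherits whatever smoothness $A(t)$ has in $t$, which is the point of the constant-DRM construction.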
Given a Boolean formula $\Phi(X, Y, Z)$, the Max\#SAT problem asks for a partial model over the set of variables $X$ that maximizes the number of its projected models over the set of variables $Y$. We investigate a strict generalization of Max\#SAT that allows dependencies for the variables in $X$, effectively turning it into a synthesis problem. We show that this new problem, called DQMax\#SAT, subsumes both the DQBF and DSSAT problems. We provide a general resolution method based on a reduction to Max\#SAT, together with two improvements for dealing with its inherent complexity. We further discuss a concrete application of DQMax\#SAT to the symbolic synthesis of adaptive attackers in the field of program security. Finally, we report preliminary results obtained on the resolution of benchmark problems using a prototype DQMax\#SAT solver implementation.
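As a concrete (if exponential) reading of the definition, the brute-force sketch below enumerates assignments to $X$ and, for each, counts the $Y$-assignments that extend to a model of $\Phi$ for some choice of $Z$; the callable `phi` and the variable lists are hypothetical, and the dependency constraints of the DQMax\#SAT extension are not modeled.

```python
from itertools import product

def max_count_sat(phi, X, Y, Z):
    """Brute-force Max#SAT: find an assignment to X maximizing the number of
    Y-assignments y such that phi(x, y, z) holds for some Z-assignment z.

    phi is a hypothetical callable taking three dicts of booleans.
    """
    best_x, best_count = None, -1
    for xs in product([False, True], repeat=len(X)):
        x = dict(zip(X, xs))
        count = 0
        for ys in product([False, True], repeat=len(Y)):
            y = dict(zip(Y, ys))
            # y is a projected model if some completion over Z satisfies phi
            if any(phi(x, y, dict(zip(Z, zs)))
                   for zs in product([False, True], repeat=len(Z))):
                count += 1
        if count > best_count:
            best_x, best_count = x, count
    return best_x, best_count

# Tiny example: phi = (x1 or y1) and (y2 or z1); the optimum sets x1 = True.
phi = lambda x, y, z: (x["x1"] or y["y1"]) and (y["y2"] or z["z1"])
print(max_count_sat(phi, ["x1"], ["y1", "y2"], ["z1"]))  # ({'x1': True}, 4)
```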
In this note, we give sufficient conditions for the almost sure convergence and the convergence in $\mathbb{L}^p$ of a $U$-statistic of order $m$ built on a strictly stationary, but not necessarily ergodic, sequence.
Let the costs $C(i,j)$ for an instance of the asymmetric traveling salesperson problem be independent uniform $[0,1]$ random variables. We consider the efficiency of branch and bound algorithms that use the assignment relaxation as a lower bound. We show that w.h.p. the number of steps taken in any such branch and bound algorithm is $e^{\Omega(n^a)}$ for some small absolute constant $a>0$.
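For illustration, the sketch below evaluates the assignment relaxation on a random uniform-cost instance, using SciPy's linear sum assignment solver; it only computes the lower bound that such branch and bound algorithms rely on, and does not implement the branching itself. The instance size and the penalty value are illustrative choices.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assignment_lower_bound(C):
    """Assignment-relaxation lower bound for the ATSP with cost matrix C.

    Dropping the subtour-elimination constraints leaves an assignment problem,
    whose optimal value never exceeds the optimal tour length.
    """
    C = C.astype(float).copy()
    np.fill_diagonal(C, 1e9)  # large penalty to forbid the self-loops i -> i
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].sum()

# Random instance with independent uniform [0,1] costs, as in the abstract.
rng = np.random.default_rng(0)
n = 50
C = rng.random((n, n))
print(assignment_lower_bound(C))
```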
We consider {\it local} balances of momentum and angular momentum for the incompressible Navier-Stokes equations. First, we formulate new weak forms of the physical balances (conservation laws) of these quantities, and prove that they are equivalent to the usual conservation law formulations. We then show that continuous Galerkin discretizations of the Navier-Stokes equations using the EMAC form of the nonlinearity preserve discrete analogues of the weak-form conservation laws, in both the Eulerian and the Lagrangian formulations (which are no longer equivalent after discretization). Numerical tests illustrate the new theory.
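For context, the EMAC (energy, momentum, and angular momentum conserving) nonlinearity referred to above is commonly written as follows; this is a sketch of the standard definition from the literature (following Charnyi, Heister, Olshanskii, and Rebholz), with $D(u)$ the rate-of-deformation tensor, and is not a restatement of this paper's weak conservation laws. With $D(u)=\tfrac12\big(\nabla u + (\nabla u)^T\big)$, the convective term is discretized through the trilinear form
$$
c_{\mathrm{EMAC}}(u,v,w) \;=\; 2\,\big(D(u)v,\,w\big) \;+\; \big((\nabla\cdot u)\,v,\,w\big),
$$
used together with the modified pressure $P = p - \tfrac12|u|^2$.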
Given a target distribution $\pi$ and an arbitrary Markov infinitesimal generator $L$ on a finite state space $\mathcal{X}$, we develop three structured and inter-related approaches to generate new reversiblizations from $L$. The first approach hinges on a geometric perspective, in which we view reversiblizations as projections onto the space of $\pi$-reversible generators under suitable information divergences such as $f$-divergences. With different choices of functions $f$, we not only recover nearly all established reversiblizations but also unravel and generate new reversiblizations. Along the way, we unveil interesting geometric results such as bisection properties, Pythagorean identities, parallelogram laws and a Markov chain counterpart of the arithmetic-geometric-harmonic mean inequality governing these reversiblizations. This further serves as motivation for introducing the notion of information centroids of a sequence of Markov chains and for giving conditions for their existence and uniqueness. Building upon the first approach, we view reversiblizations as generalized means. In this second approach, we construct new reversiblizations via different natural notions of generalized means such as the Cauchy mean or the dual mean. In the third approach, we combine the recently introduced locally-balanced Markov processes framework and the notion of convex $*$-conjugate in the study of $f$-divergence. The latter offers a rich source of balancing functions to generate new reversiblizations.
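As one concrete instance of the reversiblizations discussed above, the sketch below forms the classical additive reversiblization $\frac12(L+\hat L)$, where $\hat L$ is the $\pi$-time-reversal of $L$; this is a standard construction from the literature, included only to fix ideas, and does not reproduce the new constructions of the paper.

```python
import numpy as np

def additive_reversiblization(L, pi):
    """Classical additive reversiblization (L + L_hat)/2 of a generator L
    with respect to a distribution pi.

    L_hat is the pi-time-reversal: L_hat[x, y] = pi[y] * L[y, x] / pi[x].
    The result R satisfies detailed balance: pi[x] * R[x, y] == pi[y] * R[y, x].
    """
    pi = np.asarray(pi, dtype=float)
    L_hat = (L.T * pi[None, :]) / pi[:, None]
    return 0.5 * (L + L_hat)

# Small example on a 3-state space; take pi to be the stationary law of L,
# so that the time-reversal L_hat is again a generator.
L = np.array([[-1.0, 0.7, 0.3],
              [0.2, -0.5, 0.3],
              [0.4, 0.1, -0.5]])
w, V = np.linalg.eig(L.T)                       # left null vector of L
pi = np.real(V[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()
R = additive_reversiblization(L, pi)
print(np.allclose(pi[:, None] * R, (pi[:, None] * R).T))  # detailed balance check
```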
Let $G$ be a graph on $n$ vertices with adjacency matrix $A$, and let $\mathbf{1}$ be the all-ones vector. We call $G$ controllable if the set of vectors $\mathbf{1}, A\mathbf{1}, \dots, A^{n-1}\mathbf{1}$ spans the whole space $\mathbb{R}^n$. We characterize the isomorphism problem of controllable graphs in terms of other combinatorial, geometric and logical problems. We also describe a polynomial time algorithm for graph isomorphism that works for almost all graphs.
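The controllability condition is straightforward to test numerically: the sketch below builds the walk matrix $[\mathbf{1}, A\mathbf{1}, \dots, A^{n-1}\mathbf{1}]$ and checks whether its (numerical) rank equals $n$; the tolerance and the example graph are illustrative choices.

```python
import numpy as np

def is_controllable(A, tol=1e-8):
    """Check whether a graph with adjacency matrix A is controllable, i.e.
    whether the walk matrix [1, A1, ..., A^{n-1}1] has full rank n.
    """
    n = A.shape[0]
    v = np.ones(n)
    columns = []
    for _ in range(n):
        columns.append(v)
        v = A @ v
    W = np.column_stack(columns)
    return np.linalg.matrix_rank(W, tol=tol) == n

# Example: the path graph P3 is not controllable (it has a nontrivial
# automorphism swapping its endpoints, so the walk matrix is rank deficient).
P3 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]])
print(is_controllable(P3))  # False
```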
We introduce and study a scale of operator classes on the annulus that is motivated by the $\mathcal{C}_{\rho}$ classes of $\rho$-contractions of Nagy and Foia\c{s}. In particular, our classes are defined in terms of the contractivity of the double-layer potential integral operator over the annulus. We prove that if, in addition, complete contractivity is assumed, then one obtains a complete characterization involving certain variants of the $\mathcal{C}_{\rho}$ classes. Recent work of Crouzeix-Greenbaum and Schwenninger-de Vries allows us to also obtain relevant K-spectral estimates, generalizing existing results from the literature on the annulus. Finally, we exhibit a special case where these estimates can be significantly strengthened.
Let $E$ be a separable Banach space and let $X, X_1,\dots, X_n, \dots$ be i.i.d. Gaussian random variables taking values in $E$ with mean zero and unknown covariance operator $\Sigma: E^{\ast}\mapsto E.$ The complexity of estimation of $\Sigma$ based on observations $X_1,\dots, X_n$ is naturally characterized by the so-called effective rank of $\Sigma:$ ${\bf r}(\Sigma):= \frac{{\mathbb E}_{\Sigma}\|X\|^2}{\|\Sigma\|},$ where $\|\Sigma\|$ is the operator norm of $\Sigma.$ Given a smooth real-valued functional $f$ defined on the space $L(E^{\ast},E)$ of symmetric linear operators from $E^{\ast}$ into $E$ (equipped with the operator norm), our goal is to study the problem of estimation of $f(\Sigma)$ based on $X_1,\dots, X_n.$ Estimators of $f(\Sigma)$ based on jackknife-type bias reduction are considered, and the dependence of their Orlicz-norm error rates on the effective rank ${\bf r}(\Sigma),$ the sample size $n$, and the degree of H\"older smoothness $s$ of the functional $f$ is studied. In particular, it is shown that, if ${\bf r}(\Sigma)\lesssim n^{\alpha}$ for some $\alpha\in (0,1)$ and $s\geq \frac{1}{1-\alpha},$ then the classical $\sqrt{n}$-rate is attainable and, if $s> \frac{1}{1-\alpha},$ then asymptotic normality and asymptotic efficiency of the resulting estimators hold. Previously, the results of this type (for different estimators) were obtained only in the case of the finite-dimensional Euclidean space $E={\mathbb R}^d$ and for covariance operators $\Sigma$ whose spectrum is bounded away from zero (in which case, ${\bf r}(\Sigma)\asymp d$).
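For orientation only, the sketch below applies the textbook delete-one jackknife bias correction to the plug-in estimator $f(\hat\Sigma)$ in a finite-dimensional example and reports the effective rank; the estimators analyzed in the abstract are of jackknife type but are not necessarily this exact construction, and the choices of $f$, $\Sigma$, $d$, and $n$ are illustrative.

```python
import numpy as np

def jackknife_bias_corrected(f, X):
    """Delete-one jackknife bias correction for the plug-in estimator f(Sigma_hat).

    X: (n, d) array of mean-zero observations. This is the textbook jackknife,
    not necessarily the estimator studied in the abstract.
    """
    n = X.shape[0]
    full = f(X.T @ X / n)
    leave_one_out = np.array([
        f(np.delete(X, i, axis=0).T @ np.delete(X, i, axis=0) / (n - 1))
        for i in range(n)
    ])
    return n * full - (n - 1) * leave_one_out.mean()

# Example: f(Sigma) = trace(Sigma^2), with a diagonal covariance.
rng = np.random.default_rng(0)
d, n = 10, 500
Sigma = np.diag(1.0 / np.arange(1, d + 1))
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
f = lambda S: np.trace(S @ S)
print("effective rank:", np.trace(Sigma) / np.linalg.norm(Sigma, 2))
print("jackknife estimate:", jackknife_bias_corrected(f, X),
      "true value:", np.trace(Sigma @ Sigma))
```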
Let $f:[0,1]^d\to\mathbb{R}$ be a completely monotone integrand as defined by Aistleitner and Dick (2015) and let points $\boldsymbol{x}_0,\dots,\boldsymbol{x}_{n-1}\in[0,1]^d$ have a non-negative local discrepancy (NNLD) everywhere in $[0,1]^d$. We show how to use these properties to get a non-asymptotic and computable upper bound for the integral of $f$ over $[0,1]^d$. An analogous non-positive local discrepancy (NPLD) property provides a computable lower bound. It has been known since Gabai (1967) that the two-dimensional Hammersley points in any base $b\ge2$ have non-negative local discrepancy. Using the probabilistic notion of associated random variables, we generalize Gabai's finding to digital nets in any base $b\ge2$ and any dimension $d\ge1$ when the generator matrices are permutation matrices. We show that permutation matrices cannot attain the best values of the digital net quality parameter when $d\ge3$. As a consequence, the computable, absolutely sure bounds we provide are less accurate than the usual digital net error estimates in high dimensions. We are also able to construct high-dimensional rank-one lattice rules that are NNLD. We show that those lattices do not have good discrepancy properties: any lattice rule with the NNLD property in dimension $d\ge2$ either fails to be projection regular or has all its points on the main diagonal.
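As a small numerical illustration (not a proof), the sketch below generates the two-dimensional Hammersley points in base $b$ and spot-checks the sign of the local discrepancy on a grid, using one common convention $\delta(\boldsymbol{z}) = \frac1n\#\{i : \boldsymbol{x}_i \in [0,\boldsymbol{z})\} - z_1 z_2$; the grid check is only indicative of the NNLD property established in the references, and the convention may differ in detail from theirs.

```python
import numpy as np

def hammersley_2d(n, b=2):
    """2-D Hammersley points {(i/n, phi_b(i)) : i = 0, ..., n-1} in base b."""
    pts = np.empty((n, 2))
    for i in range(n):
        # radical inverse of i in base b
        x, frac, k = 0.0, 1.0 / b, i
        while k > 0:
            x += (k % b) * frac
            k //= b
            frac /= b
        pts[i] = (i / n, x)
    return pts

def local_discrepancy(points, z):
    """delta(z) = (1/n) #{ x_i in [0, z) } - z_1 * z_2 (one common convention)."""
    inside = np.all(points < z, axis=1)
    return inside.mean() - np.prod(z)

# Spot-check the sign of the local discrepancy on a grid of anchor points.
pts = hammersley_2d(64, b=2)
grid = np.linspace(0.0, 1.0, 101)
vals = [local_discrepancy(pts, np.array([u, v])) for u in grid for v in grid]
print(min(vals))  # non-negative if the NNLD property holds under this convention
```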