Consider linear ill-posed problems of the form $\sum_{i=1}^{b} A_i x_i = y$, where, for each $i$, $A_i$ is a bounded linear operator between two Hilbert spaces $X_i$ and ${\mathcal Y}$. When $b$ is huge, solving the problem by an iterative method that uses the full gradient at each iteration step is both time-consuming and memory-intensive. Although the randomized block coordinate descent (RBCD) method has been shown to be efficient for well-posed large-scale optimization problems with a small memory footprint, a convergence analysis of the RBCD method for solving ill-posed problems is still lacking. In this paper, we investigate the convergence properties of the RBCD method with noisy data under either {\it a priori} or {\it a posteriori} stopping rules. We prove that the RBCD method combined with an {\it a priori} stopping rule yields a sequence that converges weakly to a solution of the problem almost surely. We also consider early stopping of the RBCD method and demonstrate that the discrepancy principle terminates the iteration after finitely many steps almost surely. For a class of ill-posed problems with a special tensor product form, we obtain strong convergence results for the RBCD method. Furthermore, we consider incorporating convex regularization terms into the RBCD method to enhance the detection of solution features. To illustrate the theory and the performance of the method, numerical simulations from the imaging modalities of computed tomography and compressive temporal imaging are reported.
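To make the iteration concrete, the following is a minimal NumPy sketch of an RBCD loop with discrepancy-principle stopping for a finite-dimensional discretization; the names (`A_blocks`, `step`, `tau`, `delta`) and the constant step size are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def rbcd(A_blocks, y, step, tau, delta, max_iter=10_000, seed=None):
    """Randomized block coordinate descent for sum_i A_i x_i = y with
    noisy data y. Stops once ||residual|| <= tau * delta (discrepancy
    principle), where delta is the noise level and tau > 1."""
    rng = np.random.default_rng(seed)
    x = [np.zeros(A.shape[1]) for A in A_blocks]
    residual = sum(A @ xi for A, xi in zip(A_blocks, x)) - y
    for _ in range(max_iter):
        if np.linalg.norm(residual) <= tau * delta:
            break                                   # early stopping
        i = rng.integers(len(A_blocks))             # pick one block at random
        g = A_blocks[i].T @ residual                # partial gradient w.r.t. x_i
        x[i] -= step * g                            # update only block i
        residual -= step * (A_blocks[i] @ g)        # keep residual consistent
    return x
```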
Given finitely many i.i.d.~samples in a Hilbert space with zero mean and trace-class covariance operator $\Sigma$, the problem of recovering the spectral projectors of $\Sigma$ naturally arises in many applications. In this paper, we consider the problem of finding distributional approximations of the spectral projectors of the empirical covariance operator $\hat \Sigma$, and offer a dimension-free framework in which the complexity is characterized by the so-called relative rank of $\Sigma$. In this setting, novel quantitative limit theorems and bootstrap approximations are presented subject only to mild conditions in terms of moments and spectral decay. In many cases, these even improve upon existing results in a Gaussian setting.
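As a finite-dimensional illustration of the objects involved, here is a small NumPy sketch that forms the projector onto the top-$r$ eigenspace of the empirical covariance and draws naive resampling-bootstrap replicates of its fluctuation; the function names and the plain resampling scheme are assumptions for illustration, not the paper's bootstrap procedure.

```python
import numpy as np

def spectral_projector(X, r):
    """Projector onto the top-r eigenspace of the empirical covariance
    of centered samples X with shape (n, d)."""
    Sigma_hat = X.T @ X / X.shape[0]           # empirical covariance
    _, vecs = np.linalg.eigh(Sigma_hat)        # eigenvalues in ascending order
    U = vecs[:, -r:]                           # top-r eigenvectors
    return U @ U.T

def bootstrap_projector_fluctuation(X, r, n_boot=500, seed=None):
    """Bootstrap replicates of ||P* - P_hat||_F (naive resampling sketch)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    P_hat = spectral_projector(X, r)
    return np.array([
        np.linalg.norm(spectral_projector(X[rng.integers(n, size=n)], r) - P_hat)
        for _ in range(n_boot)
    ])
```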
Determining the complexity of computing Gr\"{o}bner bases is an important problem both in theory and in practice, and the solving degree plays a key role in this determination. In this paper, we study the solving degrees of affine semi-regular sequences and their homogenized sequences. Some of our results give mathematically rigorous proofs of the correctness of methods for computing Gr\"{o}bner bases of the ideal generated by an affine semi-regular sequence. This paper is a sequel to the authors' previous work and gives additional results on solving degrees and on important behaviors of Gr\"obner basis computation. We also define the generalized degree of regularity for a sequence of homogeneous polynomials. For the homogenization of an affine semi-regular sequence, we relate its generalized degree of regularity to its maximal Gr\"{o}bner basis degree (i.e., the solving degree of the homogenized sequence). We also give the definition of a generalized (cryptographic) semi-regular sequence, which yields a new cryptographic assumption for estimating the security of cryptosystems and signature schemes. Based on our experimental observations, we raise a conjecture and some questions related to this generalized semi-regularity. These new definitions and our results provide a theoretical formulation of the (somewhat heuristic) discussions carried out so far in the cryptographic community.
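For context, the classical degree of regularity of a semi-regular sequence (in the sense of Bardet et al.) can be read off the truncated power series $\prod_i (1 - z^{d_i})/(1-z)^n$: it is the index of the first non-positive coefficient. The short Python sketch below computes it; this is the classical notion only, offered as a rough companion to the solving-degree bounds, not the generalized degree of regularity defined in the paper.

```python
def degree_of_regularity(n_vars, degrees, max_deg=200):
    """Index of the first non-positive coefficient of the truncated series
    prod_i (1 - z^{d_i}) / (1 - z)^n_vars, i.e. the classical degree of
    regularity of a semi-regular sequence with the given degrees."""
    coeffs = [0] * (max_deg + 1)
    coeffs[0] = 1
    for d in degrees:                          # multiply by (1 - z^d)
        for k in range(max_deg, d - 1, -1):
            coeffs[k] -= coeffs[k - d]
    for _ in range(n_vars):                    # divide by (1 - z): prefix sums
        for k in range(1, max_deg + 1):
            coeffs[k] += coeffs[k - 1]
    for k, c in enumerate(coeffs):
        if c <= 0:
            return k
    return None                                # increase max_deg and retry

# e.g. two generic quadrics in two variables: series (1+z)^2, so the answer is 3
assert degree_of_regularity(2, [2, 2]) == 3
```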
We consider the problem of enumerating all minimal transversals (also called minimal hitting sets) of a hypergraph $\mathcal{H}$. An equivalent formulation of this problem, known as the \emph{transversal hypergraph} problem (or \emph{hypergraph dualization} problem), is to decide, given two hypergraphs, whether one corresponds to the set of minimal transversals of the other. The existence of a polynomial-time algorithm to solve this problem is a long-standing open question. In \cite{fredman_complexity_1996}, the authors present the first sub-exponential algorithm for the transversal hypergraph problem, which runs in quasi-polynomial time, making it unlikely that the problem is (co)NP-complete. In this paper, we show that when one of the two hypergraphs has bounded VC-dimension, the transversal hypergraph problem can be solved in polynomial time, or equivalently, that if $\mathcal{H}$ is a hypergraph of bounded VC-dimension, then there exists an incremental polynomial-time algorithm to enumerate its minimal transversals. This result generalizes most of the previously known polynomial cases in the literature, since almost all of them concern classes of hypergraphs of bounded VC-dimension. As a consequence, the transversal hypergraph problem is solvable in polynomial time for any class of hypergraphs closed under partial subhypergraphs. We also show that the proposed algorithm runs in quasi-polynomial time on general hypergraphs and in polynomial time if the conformality of the hypergraph is bounded, which is one of the few known polynomial cases in which the VC-dimension is unbounded.
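To fix ideas, the classical Berge-multiplication procedure below enumerates all minimal transversals of a small hypergraph; it is exponential in the worst case and serves only to illustrate the object being enumerated, not the incremental polynomial-time algorithm of the paper.

```python
def minimal_transversals(edges):
    """All minimal transversals (minimal hitting sets) of a hypergraph
    given as an iterable of edges (sets of vertices), by Berge multiplication."""
    transversals = [frozenset()]
    for edge in edges:
        extended = {t | {v} for t in transversals for v in edge}
        # keep only the inclusion-minimal sets
        transversals = [t for t in extended if not any(s < t for s in extended)]
    return transversals

# e.g. edges {1,2} and {2,3} have the minimal transversals {2} and {1,3}
print(minimal_transversals([{1, 2}, {2, 3}]))
```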
Threshold selection is a fundamental problem in any threshold-based extreme value analysis. While models are asymptotically motivated, selecting an appropriate threshold for finite samples is difficult and highly subjective with standard methods. Inference for high quantiles can also be highly sensitive to the choice of threshold. Too low a threshold leads to bias in the fit of the extreme value model, while too high a threshold leads to unnecessary additional uncertainty in the estimation of model parameters. We develop a novel methodology for automated threshold selection that directly tackles this bias-variance trade-off. We also develop a method to account for the uncertainty in the threshold estimation and to propagate this uncertainty through to high quantile inference. Through a simulation study, we demonstrate the effectiveness of our method for threshold selection and subsequent extreme quantile estimation, relative to the leading existing methods, and show that the method's effectiveness is not sensitive to the tuning parameters. We apply our method to the well-known, troublesome example of the River Nidd dataset.
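As a toy illustration of threshold-based fitting, the sketch below fits a generalized Pareto distribution to the exceedances of each candidate threshold and scores the fit by a crude model-vs-empirical quantile discrepancy; this scoring rule and the names used are assumptions for illustration and are not the selection metric developed in the paper.

```python
import numpy as np
from scipy.stats import genpareto

def select_threshold(data, candidates, min_exc=30):
    """Pick the candidate threshold whose GPD fit best matches the
    empirical quantiles of its exceedances (a naive bias proxy)."""
    best_u, best_score = None, np.inf
    for u in candidates:
        exc = np.sort(data[data > u] - u)
        if len(exc) < min_exc:                 # too few exceedances to fit
            continue
        c, _, scale = genpareto.fit(exc, floc=0.0)
        probs = (np.arange(len(exc)) + 0.5) / len(exc)
        model_q = genpareto.ppf(probs, c, loc=0.0, scale=scale)
        score = np.mean((model_q - exc) ** 2)  # quantile discrepancy
        if score < best_score:
            best_u, best_score = u, score
    return best_u
```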
The use of operator-splitting methods to solve differential equations is widespread, but the methods are generally only defined for a given number of operators, most commonly two. Most operator-splitting methods do not generalize to problems with $N$ operators for arbitrary $N$. In fact, only two known methods can be applied to general $N$-split problems: the first-order Lie--Trotter (or Godunov) method and the second-order Strang (or Strang--Marchuk) method. In this paper, we derive two second-order operator-splitting methods that also generalize to $N$-split problems. These methods have complex-valued coefficients with positive real parts, giving them favorable stability properties, and they require few sub-integrations per stage, making them computationally inexpensive. They can also serve as base methods for constructing higher-order $N$-split operator-splitting methods with coefficients having positive real parts. We verify the orders of accuracy of these new $N$-split methods and demonstrate their favorable efficiency against well-known real-valued operator-splitting methods on both real-valued and complex-valued differential equations.
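The two known $N$-split base methods mentioned above are easy to state generically. Below is a small Python sketch of Lie--Trotter and Strang splitting for an arbitrary number of operators, assuming each sub-problem comes with its own `stepper(y, t, dt)` that advances it exactly or approximately; this interface is an illustrative assumption.

```python
def lie_trotter(steppers, y, t, dt):
    """First-order Lie--Trotter (Godunov) splitting for dy/dt = sum_i f_i(y):
    advance each sub-problem in turn over the full step."""
    for step in steppers:
        y = step(y, t, dt)
    return y

def strang(steppers, y, t, dt):
    """Second-order Strang splitting: half steps through the operators,
    a full step on the last one, then the half steps in reverse order."""
    for step in steppers[:-1]:
        y = step(y, t, dt / 2)
    y = steppers[-1](y, t, dt)
    for step in reversed(steppers[:-1]):
        y = step(y, t + dt / 2, dt / 2)
    return y

# example: dy/dt = -y split into -0.3*y and -0.7*y, each integrated exactly
import math
steppers = [lambda y, t, dt: y * math.exp(-0.3 * dt),
            lambda y, t, dt: y * math.exp(-0.7 * dt)]
y1 = strang(steppers, 1.0, 0.0, 0.1)  # one step of size 0.1
```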
We obtain the smallest unsatisfiable formulas in subclasses of $k$-CNF (exactly $k$ distinct literals per clause) with bounded variable or literal occurrences. Smaller unsatisfiable formulas of this type translate into stronger inapproximability results for MaxSAT in the considered formula class. Our results cover subclasses of 3-CNF and 4-CNF: in all considered subclasses of 3-CNF, we determine the smallest size of an unsatisfiable formula; for 4-CNF with at most 5 occurrences per variable, we decrease the size of the smallest known unsatisfiable formula. Our methods combine theoretical arguments with symmetry-breaking exhaustive search based on SAT Modulo Symmetries (SMS), a recent framework for isomorph-free SAT-based graph generation. To this end, and as a standalone result of independent interest, we show how to efficiently encode formulas as graphs for SMS.
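To illustrate the flavor of such an encoding, the sketch below maps a CNF formula to a vertex-colored graph whose isomorphisms correspond to renaming variables, flipping literal polarities, and reordering clauses; it is a simplified encoding in the spirit of the one used with SMS, not the paper's exact construction.

```python
def formula_to_graph(clauses, n_vars):
    """Encode a CNF formula (list of clauses, each a list of nonzero ints,
    negative = negated) as a vertex-colored graph: one vertex per literal
    (color 0) with an edge joining v and -v, and one vertex per clause
    (color 1) adjacent to its literals."""
    colors, edges = {}, set()
    for v in range(1, n_vars + 1):
        colors[v] = colors[-v] = 0
        edges.add((v, -v))                     # tie the two literals of v
    for j, clause in enumerate(clauses):
        c = ('clause', j)
        colors[c] = 1
        for lit in clause:
            edges.add((c, lit))
    return colors, edges
```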
We consider optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs) under model uncertainty. Specifically, we consider inverse problems in which, in addition to the inversion parameters, the governing PDEs include secondary uncertain parameters. We focus on problems with infinite-dimensional inversion and secondary parameters and present a scalable computational framework for optimal design of such problems. The proposed approach enables Bayesian inversion and OED under uncertainty within a unified framework. We build on the Bayesian approximation error (BAE) approach to incorporate modeling uncertainties into the Bayesian inverse problem, and on methods for A-optimal design of infinite-dimensional Bayesian nonlinear inverse problems. Specifically, a Gaussian approximation to the posterior at the maximum a posteriori probability point is used to define an uncertainty-aware OED objective that is tractable to evaluate and optimize. In particular, the OED objective can be computed at a cost, in the number of PDE solves, that does not grow with the dimension of the discretized inversion and secondary parameters. The OED problem is formulated as a binary bilevel PDE-constrained optimization problem, and a greedy algorithm, which provides a pragmatic approach, is used to find optimal designs. We demonstrate the effectiveness of the proposed approach for a model inverse problem governed by an elliptic PDE on a three-dimensional domain. Our computational results also highlight the pitfalls of ignoring modeling uncertainties in the OED and/or inference stages.
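The greedy design loop itself is simple to express. The sketch below selects sensors one at a time, each time adding the candidate that most improves a supplied design criterion; `objective` stands in for the paper's uncertainty-aware A-optimal objective and is an assumption here.

```python
def greedy_design(candidates, budget, objective):
    """Greedily build a design of `budget` sensors from `candidates`,
    maximizing `objective(design)` one sensor at a time."""
    design = []
    for _ in range(budget):
        remaining = [s for s in candidates if s not in design]
        best = max(remaining, key=lambda s: objective(design + [s]))
        design.append(best)
    return design
```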
We prove that the Weihrauch degree of the problem of finding a bad sequence in a non-well quasi order ($\mathsf{BS}$) is strictly above that of finding a descending sequence in an ill-founded linear order ($\mathsf{DS}$). This corrects our mistaken claim in arXiv:2010.03840, which stated that they are Weihrauch equivalent. We prove that K\"onig's lemma $\mathsf{KL}$ and the problem $\mathsf{wList}_{2^{\mathbb{N}},\leq\omega}$ of enumerating a given non-empty countable closed subset of $2^\mathbb{N}$ are not Weihrauch reducible to $\mathsf{DS}$ either, resolving two main open questions raised in arXiv:2010.03840.
Compared to widely used likelihood-based approaches, the minimum contrast (MC) method offers a computationally efficient alternative for estimation and inference of spatial point processes. The relative gains in computing time become more pronounced when analyzing complicated multivariate point process models. Despite this, there has been little exploration of the MC method for multivariate spatial point processes. This article therefore introduces a new MC method for parametric multivariate spatial point processes. A contrast function is computed based on the trace of a power of the difference between the conjectured $K$-function matrix and its nonparametric unbiased edge-corrected estimator. Under standard assumptions, we derive the asymptotic normality of our MC estimator. The performance of the proposed method is demonstrated through simulation studies of bivariate log-Gaussian Cox processes and five-variate product-shot-noise Cox processes.
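The contrast function described above is straightforward to sketch. Below, `K_model(theta, r)` returns the conjectured $K$-function matrix at radius $r$ and `K_hat[k]` holds the edge-corrected estimate at `r_grid[k]`; these names and the trapezoidal quadrature over the radius grid are illustrative assumptions.

```python
import numpy as np

def mc_contrast(theta, r_grid, K_model, K_hat, q=2):
    """Integrate tr((K_theta(r) - K_hat(r))^q) over the radius grid;
    minimizing this over theta gives the MC estimator."""
    vals = np.array([
        np.trace(np.linalg.matrix_power(K_model(theta, r) - K_emp, q))
        for r, K_emp in zip(r_grid, K_hat)
    ])
    dr = np.diff(r_grid)
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * dr))  # trapezoid rule
```

In practice, one would minimize this objective over `theta` with a generic optimizer such as `scipy.optimize.minimize`.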
The Landau--Lifshitz--Baryakhtar (LLBar) and Landau--Lifshitz--Bloch (LLBloch) equations are nonlinear vector-valued PDEs that arise in the theory of micromagnetics to describe the dynamics of the magnetic spin field in a ferromagnet at elevated temperatures. We consider the LLBar and the regularised LLBloch equations in a unified manner, which allows us to treat the numerical approximations of both problems at once. In this paper, we propose a semi-discrete mixed finite element scheme and two fully discrete mixed finite element schemes, based on a semi-implicit Euler method and a semi-implicit Crank--Nicolson method, to solve these problems. The numerical schemes provide accurate approximations to both the magnetisation vector and the effective magnetic field. Moreover, they are proven to be unconditionally energy-stable and to preserve the energy dissipativity of the system at the discrete level. An error analysis is performed, which shows optimal rates of convergence in the $\mathbb{L}^2$, $\mathbb{L}^\infty$, and $\mathbb{H}^1$ norms. These theoretical results are further corroborated by several numerical experiments.
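The semi-implicit time-stepping principle behind the fully discrete schemes can be shown on a generic stiff system: treat the linear part implicitly and the nonlinearity explicitly, so each step costs one linear solve. The sketch below does this for an ODE system $u' = Lu + N(u)$; it illustrates the time discretization only, not the mixed finite element spatial discretization.

```python
import numpy as np

def semi_implicit_euler(L, N_func, u0, dt, n_steps):
    """Semi-implicit Euler for u' = L u + N(u): solve
    (I - dt*L) u_{n+1} = u_n + dt*N(u_n) at each step."""
    I = np.eye(len(u0))
    u = np.array(u0, dtype=float)
    for _ in range(n_steps):
        u = np.linalg.solve(I - dt * L, u + dt * N_func(u))
    return u
```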