We introduce and analyse a family of hash and predicate functions that are more likely to produce collisions for small reducible configurations of vectors. These may offer practical improvements to lattice sieving for short vectors. In particular, in one asymptotic regime the family exhibits significantly different convergence behaviour from that of existing hash functions and predicates.
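As background only (not the new family introduced here), the collision principle can be illustrated with a classical angular locality-sensitive hash built from random hyperplanes; the dimensions and perturbation size below are arbitrary:

```python
import numpy as np

def simhash(v, planes):
    # Sign pattern of v against random hyperplanes: vectors at a small
    # angle agree on most signs, so they collide with higher probability.
    return tuple(int(p @ v > 0) for p in planes)

rng = np.random.default_rng(0)
d, k = 16, 8                            # ambient dimension, hash length
planes = rng.standard_normal((k, d))
u = rng.standard_normal(d)
v = u + 0.05 * rng.standard_normal(d)   # nearby vector, i.e. u - v is short
print(simhash(u, planes) == simhash(v, planes))  # likely True
```

In sieving, such collisions flag candidate pairs $(u, v)$ whose difference $u - v$ may be a shorter lattice vector; the family studied here is designed to make such reducible configurations collide more often.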
We study the accuracy of reconstruction of a family of functions $f_\epsilon(x)$, $x\in\mathbb R^2$, $\epsilon\to0$, from their discrete Radon transform data sampled with step size $O(\epsilon)$. For each $\epsilon>0$ sufficiently small, the function $f_\epsilon$ has a jump across a rough boundary $\mathcal S_\epsilon$, which is modeled by an $O(\epsilon)$-size perturbation of a smooth boundary $\mathcal S$. The function $H_0$, which describes the perturbation, is assumed to be of bounded variation. Let $f_\epsilon^{\text{rec}}$ denote the reconstruction, which is computed by interpolating discrete data and substituting it into a continuous inversion formula. We prove that $(f_\epsilon^{\text{rec}}-K_\epsilon*f_\epsilon)(x_0+\epsilon\check x)=O(\epsilon^{1/2}\ln(1/\epsilon))$, where $x_0\in\mathcal S$ and $K_\epsilon$ is an easily computable kernel.
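A minimal numerical illustration of the pipeline (sample discrete Radon data, interpolate, apply a continuous inversion formula) can be sketched with scikit-image's filtered backprojection; the phantom and perturbation below are invented stand-ins for $f_\epsilon$ and $H_0$, not the paper's setup:

```python
import numpy as np
from skimage.transform import radon, iradon

n = 256
eps = 4.0 / n                                  # perturbation scale ~ sampling step
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
r, phi = np.hypot(x, y), np.arctan2(y, x)
rough = 0.5 + eps * np.sign(np.sin(40 * phi))  # O(eps) bounded-variation wiggle
f_eps = (r < rough).astype(float)              # jump across the rough boundary

theta = np.linspace(0.0, 180.0, n, endpoint=False)         # angular step O(eps)
sinogram = radon(f_eps, theta=theta)                       # discrete Radon data
f_rec = iradon(sinogram, theta=theta, filter_name='ramp')  # interpolating FBP
```

The result above quantifies exactly how accurate such an $f_\epsilon^{\text{rec}}$ is near the boundary, up to convolution with the kernel $K_\epsilon$.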
We propose a general optimization-based framework for computing differentially private M-estimators and a new method for constructing differentially private confidence regions. Firstly, we show that robust statistics can be used in conjunction with noisy gradient descent or noisy Newton methods in order to obtain optimal private estimators with global linear or quadratic convergence, respectively. We establish local and global convergence guarantees, under both local strong convexity and self-concordance, showing that our private estimators converge with high probability to a small neighborhood of the non-private M-estimators. Secondly, we tackle the problem of parametric inference by constructing differentially private estimators of the asymptotic variance of our private M-estimators. This naturally leads to approximate pivotal statistics for constructing confidence regions and conducting hypothesis testing. We demonstrate the effectiveness of a bias correction that leads to enhanced small-sample empirical performance in simulations. We illustrate the benefits of our methods in several numerical examples.
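As a schematic of the first ingredient, noisy gradient descent on a bounded-influence (Huber) score yields a private location M-estimator; the noise calibration below uses a crude Gaussian-mechanism composition bound and is illustrative only, not the paper's exact construction:

```python
import numpy as np

def huber_grad(theta, x, c=1.345):
    # Per-sample gradients are clipped to [-c, c], bounding sensitivity.
    return np.clip(theta - x, -c, c).mean()

def noisy_gd(x, T=50, eta=0.5, c=1.345, eps=1.0, delta=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    # The mean of clipped gradients changes by at most 2c/n when one sample
    # changes; split the (eps, delta) budget over T iterations (illustrative).
    sigma = (2 * c / n) * np.sqrt(2 * T * np.log(1.25 / delta)) / eps
    theta = 0.0
    for _ in range(T):
        theta -= eta * (huber_grad(theta, x, c) + sigma * rng.standard_normal())
    return theta

x = np.random.default_rng(1).standard_normal(5000) + 3.0
print(noisy_gd(x))   # close to 3 with high probability
```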
Formalized $1$-category theory forms a core component of various libraries of mathematical proofs. However, more sophisticated results in fields from algebraic topology to theoretical physics, where objects have "higher structure," rely on infinite-dimensional categories in place of $1$-dimensional categories, and $\infty$-category theory has thus far proved unamenable to computer formalization. Using a new proof assistant called Rzk, which is designed to support Riehl--Shulman's simplicial extension of homotopy type theory for synthetic $\infty$-category theory, we provide the first formalizations of results from $\infty$-category theory. In particular, we formalize the Yoneda lemma, often regarded as the fundamental theorem of category theory, which roughly states that an object of a given category is determined by its relationships to all of the other objects of the category. A key feature of our framework is that, thanks to the synthetic theory, many constructions are automatically natural or functorial. We plan to use Rzk to formalize further results from $\infty$-category theory, such as the theories of limits, colimits, and adjunctions.
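For reference, the classical $1$-categorical statement being generalized reads, for a functor $F\colon\mathcal C\to\mathbf{Set}$ and an object $A$ of $\mathcal C$,
\[
\mathrm{Nat}\big(\mathrm{Hom}_{\mathcal C}(A,-),\,F\big)\;\cong\;F(A),
\]
naturally in both $A$ and $F$; the synthetic $\infty$-categorical version formalized in Rzk is the homotopical analogue of this isomorphism.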
The Gaussian graphical model is a powerful tool for analyzing conditional independence between pairs of variables in multivariate Gaussian-distributed observations. When the dimension of the data is moderate or high, penalized likelihood methods such as the graphical lasso are useful for detecting significant conditional independence structures. However, the estimates are affected by outliers due to the Gaussian assumption. This paper proposes a novel robust posterior distribution for inference of Gaussian graphical models using the $\gamma$-divergence, one of the robust divergences. In particular, we focus on the Bayesian graphical lasso, assuming a Laplace-type prior for the elements of the inverse covariance matrix. The maximum a posteriori estimate of the proposed posterior distribution coincides with the minimum $\gamma$-divergence estimate provided by the frequentist penalized method. We show that the proposed method satisfies posterior robustness, a measure of robustness in Bayesian analysis. This property means that the information from outliers is automatically ignored in the posterior distribution as long as the outliers are extremely large, which also provides theoretical robustness of the point estimate for the existing frequentist method. A sufficient condition for the propriety of the proposed posterior distribution is also given. Furthermore, an efficient posterior computation algorithm via the weighted Bayesian bootstrap method is proposed. The performance of the proposed method is illustrated through simulation studies and real data analysis.
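For concreteness, the $\gamma$-divergence between densities $g$ and $f$ can be written in the form of Fujisawa and Eguchi,
\[
D_\gamma(g\,\|\,f)=\frac{1}{\gamma(1+\gamma)}\log\!\int g^{1+\gamma}\,dx-\frac{1}{\gamma}\log\!\int g\,f^{\gamma}\,dx+\frac{1}{1+\gamma}\log\!\int f^{1+\gamma}\,dx,
\]
which reduces to the Kullback--Leibler divergence as $\gamma\to0$; the power weighting $f^{\gamma}$ is what downweights observations lying far in the tails.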
Finite element methods and kinematically coupled schemes that decouple the fluid velocity and structure displacement have been extensively studied for incompressible fluid-structure interaction (FSI) over the past decade. While these methods are known to be stable and easy to implement, optimal error analysis has remained challenging. Previous work has primarily relied on the classical elliptic projection technique, which is only suitable for parabolic problems and does not lead to optimal convergence of numerical solutions to FSI problems in the standard $L^2$ norm. In this article, we propose a new stable fully discrete kinematically coupled scheme for an incompressible FSI model with a thin structure and establish a new approach to the numerical analysis of FSI problems in terms of a newly introduced coupled non-stationary Ritz projection, which allows us to prove optimal-order convergence of the proposed method in the $L^2$ norm. The methodology presented in this article is also applicable to numerous other FSI models and serves as a fundamental tool for advancing research in this field.
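For comparison, the classical elliptic (Ritz) projection $R_h u \in V_h$ used in parabolic error analysis is defined by
\[
\big(\nabla (u - R_h u), \nabla v_h\big) = 0 \qquad \forall\, v_h \in V_h;
\]
the coupled non-stationary Ritz projection introduced here plays the analogous role, but is built for the coupled fluid--structure system rather than for a single elliptic operator.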
In survival analysis, complex machine learning algorithms have been increasingly used for predictive modeling. Given a collection of features available for inclusion in a predictive model, it may be of interest to quantify the relative importance of a subset of features for the prediction task at hand. In particular, in HIV vaccine trials, participant baseline characteristics are used to predict the probability of infection over the intended follow-up period, and investigators may wish to understand how much certain types of predictors, such as behavioral factors, contribute toward overall predictiveness. Time-to-event outcomes such as time to infection are often subject to right censoring, and existing methods for assessing variable importance are typically not intended to be used in this setting. We describe a broad class of algorithm-agnostic variable importance measures for prediction in the context of survival data. We propose a nonparametric efficient estimation procedure that incorporates flexible learning of nuisance parameters, yields asymptotically valid inference, and enjoys double-robustness. We assess the performance of our proposed procedure via numerical simulations and analyze data from the HVTN 702 study to inform enrollment strategies for future HIV vaccine trials.
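A hedged sketch of the underlying idea (importance as the drop in a predictiveness measure $V$ when features are withheld): the plug-in version below uses synthetic binary labels at a fixed landmark time and ignores censoring, unlike the censoring-aware efficient estimator proposed here; all names and parameters are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 5))
t_event = rng.exponential(np.exp(-(X[:, 0] + 0.5 * X[:, 1])))  # synthetic times
y = (t_event < 1.0).astype(int)   # event by landmark time (no censoring here)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

def predictiveness(cols):
    # V(model): AUC of any learner fit on the given feature subset.
    m = RandomForestClassifier(n_estimators=200, random_state=0)
    m.fit(Xtr[:, cols], ytr)
    return roc_auc_score(yte, m.predict_proba(Xte[:, cols])[:, 1])

vim = predictiveness([0, 1, 2, 3, 4]) - predictiveness([1, 2, 3, 4])
print("importance of feature 0:", vim)  # positive: feature 0 adds predictiveness
```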
The joint modeling of multiple longitudinal biomarkers together with a time-to-event outcome is a challenging modeling task of continued scientific interest. In particular, the computational complexity of high dimensional (generalized) mixed effects models often restricts the flexibility of shared parameter joint models, even when the subject-specific marker trajectories follow highly nonlinear courses. We propose a parsimonious multivariate functional principal components representation of the shared random effects. This allows better scalability, as the dimension of the random effects does not directly increase with the number of markers, only with the chosen number of principal component basis functions used in the approximation of the random effects. The functional principal component representation additionally allows the estimation of highly flexible subject-specific random trajectories without parametric assumptions. The modeled trajectories can thus be distinctly different for each biomarker. We build on the framework of flexible Bayesian additive joint models implemented in the R-package 'bamlss', which also supports estimation of nonlinear covariate effects via Bayesian P-splines. The flexible yet parsimonious functional principal components basis used in the estimation of the joint model is first estimated in a preliminary step. We validate our approach in a simulation study and illustrate its advantages by analyzing a study on primary biliary cholangitis.
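Schematically (the notation here is ours, not necessarily the paper's), the shared random effects for marker $k$ of subject $i$ are represented as
\[
U_{ik}(t) \;\approx\; \sum_{m=1}^{M} \rho_{im}\,\psi_{km}(t),
\]
where the $\psi_{km}$ are multivariate functional principal component basis functions estimated in the preliminary step and the scores $\rho_{im}$ are shared across markers, so the random-effects dimension is $M$ rather than growing with the number of markers.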
Subsurface storage of CO$_2$ is an important means to mitigate climate change, and to investigate the fate of CO$_2$ over several decades in vast reservoirs, numerical simulation based on realistic models is essential. Faults and other complex geological structures introduce modeling challenges as their effects on storage operations are uncertain due to limited data. In this work, we present a computational framework for forward propagation of uncertainty, including stochastic upscaling and copula representation of flow functions for a CO$_2$ storage site, using the Vette fault zone in the Smeaheia formation in the North Sea as a test case. The upscaling method leads to a reduction in the number of stochastic dimensions and in the cost of evaluating the reservoir model. A viable model that represents the upscaled data needs to capture dependencies between variables and allow sampling. Copulas provide a representation of dependent multidimensional random variables and a good fit to data, allow fast sampling, and couple to the forward propagation method via independent uniform random variables. The non-stationary correlation within some of the upscaled flow functions is accurately captured by a data-driven transformation model. The uncertainty in upscaled flow functions and other parameters is propagated to uncertain leakage estimates using numerical reservoir simulation of a two-phase system. The expectations of leakage are estimated by an adaptive stratified sampling technique, where samples are sequentially concentrated in regions of the parameter space to greedily maximize variance reduction. We demonstrate cost reduction compared to standard Monte Carlo of one or two orders of magnitude for simpler test cases with only fault and reservoir layer permeabilities assumed uncertain, and a factor of 2--8 cost reduction for stochastic multi-phase flow properties and more complex stochastic models.
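A minimal Gaussian-copula sketch of the fit-and-sample step (dependent parameters generated from independent uniforms); the two-column data below is an invented stand-in for upscaled flow-function parameters, and the data-driven transformation model used here is more general:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.multivariate_normal([0, 0], [[1, .8], [.8, 1]], size=500) ** 3

u_emp = (stats.rankdata(data, axis=0) - 0.5) / len(data)    # pseudo-observations
corr = np.corrcoef(stats.norm.ppf(u_emp), rowvar=False)     # copula correlation

u_new = rng.uniform(size=(1000, 2))                         # independent uniforms
z_new = stats.norm.ppf(u_new) @ np.linalg.cholesky(corr).T  # impose dependence
u_dep = stats.norm.cdf(z_new)                               # dependent uniforms
samples = np.column_stack(
    [np.quantile(data[:, j], u_dep[:, j]) for j in range(2)]  # back to margins
)
```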
The $K$-core of a graph is the unique maximum subgraph within which each vertex connects to $K$ or more other vertices. The optimal $K$-core attack problem asks for the minimum number of vertices to delete from the $K$-core to induce its complete collapse. A hierarchical cycle-tree packing model is introduced here for this challenging combinatorial optimization problem. We convert the temporally long-range correlated $K$-core pruning dynamics into locally tree-like static patterns and analyze this model through the replica-symmetric cavity method of statistical physics. A set of coarse-grained belief propagation equations are derived to predict single vertex marginal probabilities efficiently. The associated hierarchical cycle-tree guided attack ({\tt hCTGA}) algorithm is able to construct nearly optimal attack solutions for regular random graphs and Erd\H{o}s--R\'enyi random graphs. Our cycle-tree packing model may also be helpful for constructing optimal initial conditions for other irreversible dynamical processes on sparse random graphs.
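The pruning dynamics being packed into static patterns is the standard $K$-core peeling process; a plain sketch (deleting an attack set and re-running it reveals the induced collapse):

```python
from collections import deque

def k_core(adj, K):
    # Iteratively remove vertices of degree < K; the survivors form the K-core.
    # `adj` maps each vertex to the set of its neighbours.
    deg = {v: len(nb) for v, nb in adj.items()}
    removed, queue = set(), deque(v for v, d in deg.items() if d < K)
    while queue:
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                if deg[u] < K:
                    queue.append(u)
    return set(adj) - removed
```

An attack deletes a vertex set $S$ from the $K$-core; it succeeds when `k_core` of the remaining graph is empty, and the optimal attack minimizes $|S|$.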
Finite-dimensional truncations are routinely used to approximate partial differential equations (PDEs), either to obtain numerical solutions or to derive reduced-order models. The resulting discretized equations are known to violate certain physical properties of the system. In particular, first integrals of the PDE may not remain invariant after discretization. Here, we use the method of reduced-order nonlinear solutions (RONS) to ensure that the conserved quantities of the PDE survive its finite-dimensional truncation. In particular, we develop two methods: Galerkin RONS and finite volume RONS. Galerkin RONS ensures the conservation of first integrals in Galerkin-type truncations, whether used for direct numerical simulations or reduced-order modeling. Similarly, finite volume RONS conserves any number of first integrals of the system, including its total energy, after finite volume discretization. Both methods are applicable to general time-dependent PDEs and can be easily incorporated into existing Galerkin-type or finite volume code. We demonstrate the efficacy of our methods on two examples: direct numerical simulations of the shallow water equations and a reduced-order model of the nonlinear Schr\"odinger equation. As a byproduct, we also generalize RONS to phenomena described by a system of PDEs.
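In schematic form (following the original RONS formulation, with notation ours): for a PDE $u_t = F(u)$ and a parametric ansatz $\hat u(x; q(t))$, RONS evolves the parameters by
\[
M(q)\,\dot q = f(q) + \sum_k \lambda_k\,\nabla_q I_k(q),\qquad
M_{ij} = \Big\langle \frac{\partial \hat u}{\partial q_i}, \frac{\partial \hat u}{\partial q_j} \Big\rangle,\quad
f_i = \Big\langle \frac{\partial \hat u}{\partial q_i}, F(\hat u) \Big\rangle,
\]
with the multipliers $\lambda_k$ chosen so that $\nabla_q I_k(q)\cdot\dot q = 0$, i.e., each first integral $I_k$ is exactly conserved along the truncated dynamics.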