This research reevaluates seismic fragility curves using ordinal regression models, moving beyond the log-normal cumulative distribution function commonly adopted for its simplicity. It examines the differences and interrelations among Cumulative, Sequential, and Adjacent Category models, together with extended versions that incorporate category-specific effects and variance heterogeneity. These methods are applied to empirical bridge damage data from the 2008 Wenchuan earthquake, using both frequentist and Bayesian inference, with model diagnostics based on surrogate residuals. In total, eleven models are compared, ranging from basic formulations to heteroscedastic extensions and models with category-specific effects. Under leave-one-out cross-validation, the Sequential model with category-specific effects emerges as the most effective. Its damage probability predictions diverge notably from those of the conventional Cumulative probit model, supporting a shift toward more flexible fragility curve modeling that improves the precision of seismic risk assessments. Beyond readdressing the problem of fitting seismic fragility curves, the study raises the methodological standard and broadens the scope of seismic fragility analysis, arguing for continued critical reevaluation of conventional methods in performance-based earthquake engineering.
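To make the modeling choice concrete, the following is a minimal sketch, assuming synthetic data in place of the Wenchuan bridge records, of fitting a cumulative probit ordinal model and turning it into fragility (exceedance) curves; the `statsmodels` `OrderedModel` class stands in for the paper's frequentist estimation, and all variable names and numbers are illustrative.

```python
# A minimal sketch (synthetic data, illustrative names) of a cumulative
# probit ordinal model and the fragility curves derived from it.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
log_im = rng.normal(-1.0, 0.6, n)               # log intensity measure
latent = 2.0 * log_im + rng.standard_normal(n)  # latent damage propensity
cuts = [-3.5, -2.0, -0.5, 1.0]                  # hypothetical thresholds
damage = np.digitize(latent, cuts)              # damage states 0..4

y = pd.Series(pd.Categorical(damage, ordered=True))
res = OrderedModel(y, log_im[:, None], distr='probit').fit(
    method='bfgs', disp=False)

# Fragility curves: P(damage state > k) as a function of the intensity measure
im_grid = np.linspace(log_im.min(), log_im.max(), 50)[:, None]
probs = np.asarray(res.predict(im_grid))        # P(state = k), one column per k
exceed = 1.0 - np.cumsum(probs, axis=1)[:, :-1]
print(exceed[:3].round(3))
```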
Threshold selection is a fundamental problem in any threshold-based extreme value analysis. While the models are asymptotically motivated, selecting an appropriate threshold for finite samples is difficult and, with standard methods, highly subjective. Inference for high quantiles can also be highly sensitive to the choice of threshold: too low a threshold biases the fit of the extreme value model, while too high a threshold adds unnecessary uncertainty to the estimation of model parameters. We develop a novel methodology for automated threshold selection that directly tackles this bias-variance trade-off. We also develop a method to account for the uncertainty in the threshold estimation and propagate this uncertainty through to high quantile inference. Through a simulation study, we demonstrate the effectiveness of our method for threshold selection and subsequent extreme quantile estimation relative to the leading existing methods, and show that its performance is not sensitive to the tuning parameters. We apply our method to the well-known, troublesome example of the River Nidd dataset.
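As a point of reference, the sketch below reproduces the classical threshold-stability diagnostic that automated selection aims to replace: a generalized Pareto distribution is fitted to excesses over a grid of candidate thresholds and an extreme quantile is tracked. The data are a synthetic Student-t sample, not the River Nidd series, and the quantile-based threshold grid is illustrative.

```python
# Sketch: GPD fits to excesses over candidate thresholds, illustrating the
# bias-variance trade-off in threshold choice (synthetic heavy-tailed data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.standard_t(df=4, size=5000)          # heavy-tailed sample

for u in np.quantile(x, [0.80, 0.90, 0.95, 0.99]):
    exc = x[x > u] - u
    shape, _, scale = stats.genpareto.fit(exc, floc=0.0)
    p_u = exc.size / x.size                  # threshold exceedance probability
    # 0.999 quantile implied by threshold u plus the fitted GPD tail
    q999 = u + stats.genpareto.ppf(1.0 - (1.0 - 0.999) / p_u, shape, 0.0, scale)
    print(f"u={u:6.3f}  n_exc={exc.size:4d}  shape={shape:+.3f}  q999={q999:7.3f}")
```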
We study the asymptotic properties of an estimator of the Hurst parameter of a stochastic differential equation driven by a fractional Brownian motion with $H > 1/2$. Using the theory of asymptotic expansion of Skorohod integrals introduced by Nualart and Yoshida [NY19], we derive an asymptotic expansion formula for the distribution of the estimator. As a corollary, we also obtain a mixed central limit theorem for the statistic, showing that the rate of convergence is $n^{-\frac12}$, which improves on results in the previous literature. To handle the second-order quadratic variations appearing in the estimator, we develop a theory of exponents based on weighted graphs to estimate the asymptotic orders of norms of the functionals involved.
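The statistic at the heart of the analysis can be illustrated numerically. The sketch below, which assumes a plain fractional Brownian motion path rather than the SDE setting of the paper, estimates $H$ from the ratio of second-order quadratic variations at two scales; the simulation routine and sample sizes are illustrative.

```python
# Sketch: estimate H from second-order quadratic variations of a simulated
# fractional Brownian motion path (Cholesky simulation; sizes illustrative).
import numpy as np

def fbm(n, H, rng):
    """Exact fBm on {1/n, ..., 1} via Cholesky of its covariance matrix."""
    t = np.arange(1, n + 1) / n
    cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
                 - np.abs(t[:, None] - t[None, :])**(2 * H))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

rng = np.random.default_rng(2)
X = fbm(1000, H=0.7, rng=rng)

# E[(X_{i+2} - 2 X_{i+1} + X_i)^2] = delta^{2H} * (4 - 2^{2H}) for step delta,
# so the ratio of mean squared second differences at scales 2*delta and delta
# equals 2^{2H}, giving a simple ratio estimator of H.
d1 = X[2:] - 2 * X[1:-1] + X[:-2]
d2 = X[4:] - 2 * X[2:-2] + X[:-4]
H_hat = 0.5 * np.log2(np.mean(d2**2) / np.mean(d1**2))
print(f"true H = 0.7, estimated H = {H_hat:.3f}")
```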
We consider optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs) under model uncertainty. Specifically, we consider inverse problems in which, in addition to the inversion parameters, the governing PDEs include secondary uncertain parameters. We focus on problems with infinite-dimensional inversion and secondary parameters and present a scalable computational framework for optimal design of such problems. The proposed approach enables Bayesian inversion and OED under uncertainty within a unified framework. We build on the Bayesian approximation error (BAE) approach to incorporate modeling uncertainties into the Bayesian inverse problem, and on methods for A-optimal design of infinite-dimensional Bayesian nonlinear inverse problems. Specifically, a Gaussian approximation to the posterior at the maximum a posteriori probability point is used to define an uncertainty-aware OED objective that is tractable to evaluate and optimize. In particular, the OED objective can be computed at a cost, in the number of PDE solves, that does not grow with the dimension of the discretized inversion and secondary parameters. The OED problem is formulated as a binary bilevel PDE-constrained optimization problem, and a greedy algorithm, which provides a pragmatic approach, is used to find optimal designs. We demonstrate the effectiveness of the proposed approach for a model inverse problem governed by an elliptic PDE on a three-dimensional domain. Our computational results also highlight the pitfalls of ignoring modeling uncertainties in the OED and/or inference stages.
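A finite-dimensional caricature helps fix ideas. The sketch below performs greedy A-optimal sensor selection for a linear-Gaussian inverse problem, minimizing the trace of the posterior covariance; it is a toy stand-in for the paper's infinite-dimensional, PDE-governed setting, and all dimensions and operators are illustrative.

```python
# Sketch: greedy A-optimal sensor selection for a linear-Gaussian inverse
# problem y = G m + noise (a toy stand-in for the PDE-governed setting).
import numpy as np

rng = np.random.default_rng(3)
p, n_cand, budget = 20, 40, 5
G = rng.standard_normal((n_cand, p))    # rows: candidate observations
C_prior_inv = np.eye(p)                 # prior precision (illustrative)
noise_var = 0.1

def post_trace(idx):
    """A-optimality criterion: trace of the posterior covariance."""
    Gs = G[idx]
    H = Gs.T @ Gs / noise_var + C_prior_inv
    return np.trace(np.linalg.inv(H))

chosen = []
for _ in range(budget):
    best = min((i for i in range(n_cand) if i not in chosen),
               key=lambda i: post_trace(chosen + [i]))
    chosen.append(best)
print("greedy design:", sorted(chosen), "trace:", round(post_trace(chosen), 4))
```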
Generalized linear models (GLMs) arguably represent the standard approach to statistical regression beyond the Gaussian likelihood scenario. When Bayesian formulations are employed, the general absence of a tractable posterior distribution has motivated the development of deterministic approximations, which are typically more scalable than sampling techniques. Among them, expectation propagation (EP) has shown excellent accuracy, usually higher than that of many variational Bayes solutions. However, the higher computational cost of EP has raised concerns about its practical feasibility, especially in high-dimensional settings. We address these concerns by deriving a novel efficient formulation of EP for GLMs whose cost scales linearly in the number of covariates $p$. This reduces the state-of-the-art $O(p^2 n)$ per-iteration computational cost of the EP routine for GLMs to $O(p n \min\{p,n\})$, with $n$ being the sample size. We also show that, for binary models and log-linear GLMs, approximate predictive means can be obtained at no additional cost. To preserve efficient moment matching for count data, we propose employing a combination of log-normal Laplace transform approximations, avoiding numerical integration. These novel results open the possibility of employing EP in settings that were previously believed to be practically infeasible. Improvements over state-of-the-art approaches are illustrated on both simulated and real data. The efficient EP implementation is available at https://github.com/niccoloanceschi/EPglm.
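The elementary operation that any EP scheme for GLMs repeats is the moment-matching (site) update. The sketch below implements it for a probit likelihood using the standard closed-form tilted moments; it shows only this building block, not the paper's efficient $O(pn\min\{p,n\})$ formulation, and the cavity parameters are illustrative.

```python
# Sketch: the EP moment-matching (site) update for a probit likelihood,
# using the closed-form Gaussian-times-probit tilted moments.
import numpy as np
from scipy.stats import norm

def probit_tilted_moments(y, mu, var):
    """Mean and variance of the tilted density  N(f; mu, var) * Phi(y*f),
    with y in {-1, +1} (standard formulas from EP/GP classification)."""
    s = np.sqrt(1.0 + var)
    z = y * mu / s
    r = norm.pdf(z) / norm.cdf(z)
    mean = mu + y * var * r / s
    new_var = var - var**2 * r * (z + r) / (1.0 + var)
    return mean, new_var

# Illustrative cavity N(0.3, 2.0) and observation y = +1.
m_new, v_new = probit_tilted_moments(+1, 0.3, 2.0)
print(f"tilted mean {m_new:.3f}, tilted variance {v_new:.3f}")
# An EP pass sets each site's natural parameters so that cavity * site has
# exactly these moments; cycling over sites yields the Gaussian approximation.
```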
We present a space-time continuous Galerkin finite element method for solving the incompressible Navier-Stokes equations. To ensure stability of the discrete variational problem, we apply ideas from the variational multiscale method. The finite element problem is posed on the ``full'' space-time domain, treating time as another dimension. We provide a rigorous analysis of the stability and convergence of the stabilized formulation. Finally, we apply the method to two benchmark problems in computational fluid dynamics, namely lid-driven cavity flow and flow past a circular cylinder. We validate the method against existing results from the literature and show that very large space-time blocks can be solved with our approach.
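The "time as another dimension" idea can be demonstrated on a scalar problem. The sketch below is a minimal space-time continuous Galerkin discretization of the 1D heat equation on a tensor-product mesh, without the variational multiscale stabilization the paper adds for Navier-Stokes; the whole space-time block is assembled and solved at once, and mesh sizes and data are illustrative.

```python
# Sketch: space-time continuous Galerkin for the 1D heat equation
# u_t = nu * u_xx on (0,1) x (0,1], with time treated as a second dimension.
import numpy as np
from scipy.sparse import kron, csr_matrix
from scipy.sparse.linalg import spsolve

nx, nt, nu = 40, 40, 0.1
hx, ht = 1.0 / nx, 1.0 / nt

def mats_1d(n, h):
    """1D linear-FEM mass M, stiffness K, derivative C (C_ij = int phi_j' phi_i)."""
    M = np.zeros((n + 1, n + 1)); K = np.zeros_like(M); C = np.zeros_like(M)
    Me = h / 6 * np.array([[2.0, 1.0], [1.0, 2.0]])
    Ke = 1 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    Ce = 0.5 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    for i in range(n):
        idx = np.ix_([i, i + 1], [i, i + 1])
        M[idx] += Me; K[idx] += Ke; C[idx] += Ce
    return M, K, C

Mx, Kx, _ = mats_1d(nx, hx)
Mt, _, Ct = mats_1d(nt, ht)

# Weak form int (u_t v + nu u_x v_x) dx dt  ->  (Ct (x) Mx + nu Mt (x) Kx) U = b
A = (kron(Ct, Mx) + nu * kron(Mt, Kx)).tolil()
b = np.zeros((nt + 1) * (nx + 1))

# Dirichlet data: u(x, 0) = sin(pi x) on the initial slab, u = 0 at x = 0, 1.
x = np.linspace(0, 1, nx + 1)
fixed = {j: np.sin(np.pi * x[j]) for j in range(nx + 1)}
for k in range(nt + 1):
    fixed[k * (nx + 1)] = 0.0
    fixed[k * (nx + 1) + nx] = 0.0
for dof, val in fixed.items():          # enforce by row replacement
    A.rows[dof] = [dof]; A.data[dof] = [1.0]; b[dof] = val

U = spsolve(csr_matrix(A), b).reshape(nt + 1, nx + 1)
print(f"max u(.,t=1) = {U[-1].max():.4f}, exact = {np.exp(-nu * np.pi**2):.4f}")
```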
The paper considers standard iterative methods for solving the generalized Stokes problem arising from the time and space approximation of the time-dependent incompressible Navier-Stokes equations. Various preconditioning techniques are considered (Cahouet&Chabard and augmented Lagrangian), and we investigate whether these methods can compete with traditional pressure-correction and velocity-correction methods in terms of CPU time per degree of freedom and per time step. Numerical tests on fine unstructured meshes (68 million degrees of freedom) demonstrate convergence rates that are independent of the mesh size and improve with the Reynolds number. Three conclusions are drawn: (1) Although very good parallel scalability is observed for the augmented Lagrangian method, thorough tests on large problems reveal that the overall CPU time per degree of freedom and per time step is best for the standard Cahouet&Chabard preconditioner. (2) Solving the pressure Schur complement problem and solving the fully coupled system at once make no significant difference in terms of CPU time per degree of freedom and per time step. (3) All the methods tested in the paper, whether matrix-free or not, are on average 30 times slower than traditional pressure-correction and velocity-correction methods. Hence, although these methods are very efficient for solving steady-state problems, they are not yet competitive for solving time-dependent problems.
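The effectiveness of the Cahouet&Chabard preconditioner has a transparent explanation in the constant-coefficient periodic case, where the pressure Schur complement diagonalizes mode by mode and the preconditioner inverts it exactly. The sketch below verifies this numerically; the values of alpha (the inverse time step) and nu are illustrative.

```python
# Sketch: why the Cahouet&Chabard preconditioner works. For the generalized
# Stokes problem (alpha*I - nu*Lap) u + grad p = f, div u = 0 on a periodic
# domain, the pressure Schur complement diagonalizes mode by mode, and the
# preconditioner inverse  alpha*(-Lap_p)^{-1} + nu*M_p^{-1}  matches it exactly.
import numpy as np

alpha, nu = 10.0, 0.01
k2 = np.array([float(kx**2 + ky**2)          # squared wavenumbers, k != 0
               for kx in range(1, 9) for ky in range(0, 9)])

S = k2 / (alpha + nu * k2)                   # Schur complement eigenvalues
P_inv = alpha / k2 + nu                      # Cahouet&Chabard inverse, per mode
print("max |S * P_inv - 1| =", np.abs(S * P_inv - 1.0).max())  # ~ machine zero

# On general meshes the match is only approximate, but the preconditioned
# Schur complement stays well clustered, which is consistent with the
# mesh-independent convergence rates reported above.
```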
In shape-constrained nonparametric inference, it is often necessary to perform preliminary tests to verify whether a probability mass function (p.m.f.) satisfies qualitative constraints such as monotonicity, convexity or, more generally, $k$-monotonicity. In this paper, we are interested in testing $k$-monotonicity of a compactly supported p.m.f., with our main focus on monotonicity and convexity, i.e., $k \in \{1,2\}$. We consider new testing procedures that are derived directly from the definition of $k$-monotonicity and rely exclusively on the empirical measure, as well as tests based on the projection of the empirical measure onto the class of $k$-monotone p.m.f.s. The asymptotic behaviour of the introduced test statistics is derived, and a simulation study is performed to assess the finite-sample performance of all the proposed tests. Applications to real datasets illustrate the theory.
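The first family of tests can be illustrated directly. The sketch below evaluates a sup-type violation statistic of $k$-monotonicity computed from the empirical measure on two synthetic samples, one with a monotone p.m.f. and one with a bumped p.m.f.; proper calibration of such statistics requires the asymptotics developed in the paper, so only the statistic itself is shown.

```python
# Sketch: a sup-type violation statistic for k-monotonicity computed from the
# empirical measure alone (calibration needs the paper's asymptotics).
import numpy as np

def violation(pmf, k=1):
    """Sup positive part of the (signed) k-th finite difference:
    zero iff the pmf is nonincreasing (k=1) or convex (k=2)."""
    d = np.diff(pmf, n=k)
    return np.max(np.maximum(d if k == 1 else -d, 0.0))

rng = np.random.default_rng(4)
m = 10
examples = {
    "monotone (geometric-like)": 0.5 ** np.arange(1, m + 1),
    "non-monotone (bump at 3) ": np.exp(-0.5 * (np.arange(m) - 3.0) ** 2),
}
for name, w in examples.items():
    probs = w / w.sum()
    xs = rng.choice(m, size=2000, p=probs)
    emp = np.bincount(xs, minlength=m) / xs.size
    print(f"{name}: sqrt(n)*violation = {np.sqrt(xs.size) * violation(emp):.3f}")
```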
This article presents a novel and succinct algorithmic framework based on alternating quantum walks, unifying quantum spatial search, state transfer and uniform sampling on a large class of graphs. Using the framework, we can achieve exact uniform sampling over all vertices and perfect state transfer between any two vertices, provided that the eigenvalues of the graph's Laplacian matrix are all integers. Furthermore, if the graph is also vertex-transitive, we can achieve deterministic quantum spatial search that finds a marked vertex with certainty; in contrast, existing quantum search algorithms generally have a certain probability of failure. Even if the graph is not vertex-transitive, such as the complete bipartite graph, we can still adjust the algorithmic framework to obtain deterministic spatial search, which demonstrates its flexibility. Besides unifying and improving a variety of previous results, our work provides new results on more graphs. The approach is easy to use, since it has a succinct formalism that depends only on the depth of the Laplacian eigenvalue set of the graph, and it may shed light on the solution of further graph-related problems.
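The alternating-walk primitive is easy to simulate. On the complete graph $K_N$, whose Laplacian eigenvalues $\{0, N\}$ are integers, the walk operator $e^{-i\pi L/N}$ is exactly a reflection about the uniform state, so alternating it with a marked-vertex phase flip reproduces Grover search; the sketch below checks this numerically. The tuned phases and times that make the search exactly deterministic are the paper's contribution and are not reproduced here.

```python
# Sketch: alternating quantum walk search on the complete graph K_N, whose
# integer Laplacian spectrum {0, N} makes expm(-1j*pi/N * L) a reflection
# about the uniform state (parameters illustrative).
import numpy as np
from scipy.linalg import expm

N, marked = 64, 7
L = N * np.eye(N) - np.ones((N, N))         # Laplacian of K_N
walk = expm(-1j * (np.pi / N) * L)          # = 2|u><u| - I, |u> uniform
oracle = np.eye(N, dtype=complex)
oracle[marked, marked] = -1.0               # phase flip at the marked vertex

psi = np.full(N, 1 / np.sqrt(N), dtype=complex)
steps = round(np.pi / 4 * np.sqrt(N))       # Grover-optimal number of steps
for _ in range(steps):
    psi = walk @ (oracle @ psi)
print(f"success probability after {steps} steps: {abs(psi[marked])**2:.4f}")
```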
The use of variable-grid BDF methods for parabolic equations leads to structures called variable (coefficient) Toeplitz. Here, we consider a more general class of matrix-sequences, and we prove that they belong to the maximal $*$-algebra of generalized locally Toeplitz (GLT) matrix-sequences. We then identify the associated GLT symbols in the general setting and in the specific case, providing in both cases a spectral and singular value analysis. More specifically, we use GLT tools to study the asymptotic behaviour of the eigenvalues and singular values of the considered BDF matrix-sequences, in connection with the given non-uniform grids. Numerical examples, visualizations, and open problems conclude the work.
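The flavor of the GLT machinery can be conveyed on a textbook example. For the sequence $\{D_n(a) T_n(f)\}$, with $D_n(a)$ a diagonal sampling of a smooth coefficient and $T_n(f)$ a Toeplitz matrix, the GLT symbol is $a(x)f(\theta)$ and the singular values asymptotically distribute as $|a(x)f(\theta)|$; the sketch below compares quantiles of the two, with $a$ and $f$ chosen for illustration only (the paper establishes the analogous result for variable-grid BDF matrix-sequences).

```python
# Sketch: GLT symbol vs. singular values for D_n(a) T_n(f), a textbook
# variable-coefficient Toeplitz sequence (a and f chosen for illustration).
import numpy as np
from scipy.linalg import toeplitz

n = 256
a = lambda x: 1.0 + x**2                       # smooth coefficient a(x)
col = np.zeros(n); col[0], col[1] = 2.0, -1.0  # f(theta) = 2 - 2 cos(theta)
T = toeplitz(col)
D = np.diag(a(np.arange(1, n + 1) / n))
sv = np.linalg.svd(D @ T, compute_uv=False)

# The singular values should distribute as |a(x) f(theta)| on [0,1]x[-pi,pi].
g = np.linspace(0, 1, 200)
x, th = np.meshgrid(g, np.pi * (2 * g - 1))
symbol = np.abs(a(x) * (2.0 - 2.0 * np.cos(th))).ravel()
qs = np.linspace(0.1, 0.9, 9)
print("sv quantiles:    ", np.quantile(sv, qs).round(2))
print("symbol quantiles:", np.quantile(symbol, qs).round(2))
```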
A functional nonlinear regression approach, incorporating time information in the covariates, is proposed for the analysis of temporally strongly correlated sequences of manifold-supported functional data. Specifically, the functional regression parameters are supported on a connected and compact two-point homogeneous space. The Generalized Least-Squares (GLS) parameter estimator is computed in the linearized model, whose error term displays manifold-scale-varying Long Range Dependence (LRD). The performance of the theoretical and plug-in nonlinear regression predictors is illustrated by simulations on the sphere, in terms of the empirical mean of the computed spherical functional absolute errors. When the second-order structure of the functional error term in the linearized model is unknown, it is estimated by minimum contrast in the functional spectral domain. The linear case is illustrated in the Supplementary Material, revealing the effect of the slow decay in time of the trace norms of the covariance operator family of the regression LRD error term. The purely spatial statistical analysis of atmospheric pressure at high cloud bottom and downward solar radiation flux in Alegria et al. (2021) is extended to the spatiotemporal context, illustrating the numerical results on a generated synthetic data set.
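A finite-dimensional surrogate illustrates why GLS matters under LRD. The sketch below generates regression errors with a slowly, polynomially decaying covariance (a generalized Cauchy model standing in for the paper's manifold-supported LRD covariance operator family) and compares the GLS and OLS estimates; all dimensions and parameter values are illustrative.

```python
# Sketch: GLS vs. OLS under long-range dependent errors, with a generalized
# Cauchy covariance (1 + |h|)^(-0.4) as a stand-in for the paper's LRD
# covariance operator family (dimensions and parameters illustrative).
import numpy as np

rng = np.random.default_rng(5)
T, p = 300, 3
lag = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
Sigma = (1.0 + lag) ** (-0.4)       # correlations decay like h^{-0.4} (H ~ 0.8)

X = rng.standard_normal((T, p))
beta = np.array([1.0, -0.5, 2.0])
y = X @ beta + np.linalg.cholesky(Sigma) @ rng.standard_normal(T)

Si = np.linalg.inv(Sigma)
beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("true:", beta, "\nGLS :", beta_gls.round(3), "\nOLS :", beta_ols.round(3))
```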