Describing the equality conditions of the Alexandrov--Fenchel inequality has been a major open problem for decades. We prove that in the case of convex polytopes, this description is not in the polynomial hierarchy unless the polynomial hierarchy collapses to a finite level. This is the first hardness result for the problem, and is a complexity counterpart of the recent result by Shenfeld and van Handel (arXiv:2011.04059), which gave a geometric characterization of the equality conditions. The proof involves Stanley's order polytopes and employs poset-theoretic techniques.
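For reference (a standard statement, not part of the result above), the Alexandrov--Fenchel inequality bounds mixed volumes of convex bodies $K_1, \ldots, K_n \subset \mathbb{R}^n$:
\[
V(K_1, K_2, K_3, \ldots, K_n)^2 \;\ge\; V(K_1, K_1, K_3, \ldots, K_n)\, V(K_2, K_2, K_3, \ldots, K_n),
\]
and the equality conditions in question describe exactly when this bound is attained.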
In this paper we consider the finite element approximation of Maxwell's problem and analyse the prescription of essential boundary conditions in a weak sense using Nitsche's method. To avoid indefiniteness of the problem, the original equations are augmented with the gradient of a scalar field that allows one to impose the zero divergence of the magnetic induction, even if the exact solution for this scalar field is zero. Two finite element approximations are considered, namely, one in which the approximation spaces are assumed to satisfy the appropriate inf-sup condition that renders the standard Galerkin method stable, and another augmented and stabilised one that permits the use of finite element interpolations of arbitrary order. Stability and convergence results are provided for the two finite element formulations considered.
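For orientation (a minimal sketch for the scalar Poisson model problem, not the augmented Maxwell formulation analysed here), Nitsche's method imposes a Dirichlet condition $u = g$ on $\Gamma_D$ weakly by adding consistency, symmetry and penalty terms to the Galerkin form: find $u_h$ such that
\[
\int_\Omega \nabla u_h \cdot \nabla v_h \, dx
- \int_{\Gamma_D} (\partial_n u_h)\, v_h \, ds
- \int_{\Gamma_D} (\partial_n v_h)\, u_h \, ds
+ \frac{\gamma}{h} \int_{\Gamma_D} u_h\, v_h \, ds
= \int_\Omega f\, v_h \, dx
- \int_{\Gamma_D} (\partial_n v_h)\, g \, ds
+ \frac{\gamma}{h} \int_{\Gamma_D} g\, v_h \, ds
\quad \forall v_h,
\]
with the penalty parameter $\gamma$ taken large enough to preserve coercivity.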
Threshold selection is a fundamental problem in any threshold-based extreme value analysis. While models are asymptotically motivated, selecting an appropriate threshold for finite samples can be difficult with standard methods. Inference can also be highly sensitive to the choice of threshold. Too low a threshold choice leads to bias in the fit of the extreme value model, while too high a choice leads to unnecessary additional uncertainty in the estimation of model parameters. In this paper, we develop a novel methodology for automated threshold selection that directly tackles this bias-variance trade-off. We also develop a method to account for the uncertainty in this threshold choice and propagate it through to high quantile inference. Through a simulation study, we demonstrate the effectiveness of our method for threshold selection and subsequent extreme quantile estimation. We apply our method to the well-known, troublesome example of the River Nidd dataset.
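As an illustration of the bias-variance trade-off described above (a hypothetical sketch, not the automated selector developed in the paper), one can fit a generalized Pareto distribution to exceedances over a grid of candidate thresholds and watch the exceedance counts and parameter estimates move against each other:

```python
# Illustrative sketch only: GPD fits to exceedances over candidate thresholds.
import numpy as np
from scipy.stats import genpareto

data = np.random.default_rng(0).lognormal(mean=0.0, sigma=1.0, size=5000)

for u in np.quantile(data, [0.80, 0.90, 0.95, 0.99]):     # candidate thresholds
    exceedances = data[data > u] - u
    shape, _, scale = genpareto.fit(exceedances, floc=0)   # location fixed at 0
    print(f"threshold={u:6.2f}  n_exc={exceedances.size:4d}  "
          f"shape={shape:6.3f}  scale={scale:6.3f}")
```

Lower thresholds keep many exceedances (small variance, potentially large bias), while higher thresholds do the opposite; automating this choice and propagating its uncertainty to quantile estimates is the contribution of the paper.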
We introduce a convergent hierarchy of lower bounds on the minimum value of a real homogeneous polynomial over the sphere. The main practical advantage of our hierarchy over the sum-of-squares (SOS) hierarchy is that the lower bound at each level of our hierarchy is obtained by a minimum eigenvalue computation, as opposed to the full semidefinite program (SDP) required at each level of SOS. In practice, this allows us to go to much higher levels than are computationally feasible for the SOS hierarchy. For both hierarchies, the underlying space at the $k$-th level is the set of homogeneous polynomials of degree $2k$. We prove that our hierarchy converges as $O(1/k)$ in the level $k$, matching the best-known convergence of the SOS hierarchy when the number of variables $n$ is less than the half-degree $d$ (the best-known convergence of SOS when $n \geq d$ is $O(1/k^2)$). More generally, we introduce a convergent hierarchy of minimum eigenvalue computations for minimizing the inner product between a real tensor and an element of the spherical Segre-Veronese variety, with similar convergence guarantees. As examples, we obtain hierarchies for computing the (real) tensor spectral norm, and for minimizing biquadratic forms over the sphere. Hierarchies of eigencomputations for more general constrained polynomial optimization problems are discussed.
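To make the base case concrete (an illustrative sketch of the underlying principle, not the hierarchy itself): for a quadratic form $p(x) = x^\top A x$, the minimum over the unit sphere equals the smallest eigenvalue of $A$, so a single eigencomputation already yields the exact value; higher levels apply eigencomputations to operators built from spaces of higher-degree polynomials.

```python
# Base-case sketch: min over the unit sphere of x^T A x equals lambda_min(A).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
A = (A + A.T) / 2                          # symmetric matrix defining the quadratic form

lower_bound = np.linalg.eigvalsh(A)[0]     # eigenvalues returned in ascending order

# Monte Carlo check against random points on the unit sphere
x = rng.standard_normal((6, 100_000))
x /= np.linalg.norm(x, axis=0)
print(lower_bound, (x * (A @ x)).sum(axis=0).min())
```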
We present a new high-order accurate spectral element solution to the two-dimensional scalar Poisson equation subject to a general Robin boundary condition. The solution is based on a simplified version of the shifted boundary method employing a continuous arbitrary-order $hp$-Galerkin spectral element method as the numerical discretization procedure. The simplification relies on a polynomial correction to avoid explicitly evaluating high-order partial derivatives from the Taylor series expansion, which traditionally have been used within the shifted boundary method. In this setting, we apply an extrapolation and novel interpolation approach to project the basis functions from the true domain onto the approximate surrogate domain. The resulting method naturally incorporates curved geometrical features of the domain, sidesteps complex and cumbersome mesh generation, and avoids problems with small cut cells. Dirichlet, Neumann, and general Robin boundary conditions are enforced weakly through: i) a generalized Nitsche's method and ii) a generalized Aubin's method. For this, a consistent asymptotic-preserving formulation of the embedded Robin formulations is presented. We present several numerical experiments and an analysis of the algorithmic properties of the different weak formulations. These include convergence studies under polynomial order ($p$) enrichment of the basis functions, mesh ($h$) refinement, and matrix conditioning, highlighting the spectral and algebraic convergence features, respectively. This allows us to assess the influence of errors across variational formulations, polynomial order, mesh size, and mappings between the true and surrogate boundaries.
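For context (a schematic statement of the standard shifted boundary idea; the polynomial correction above replaces the explicit derivative terms): writing $d(\tilde{x})$ for the map from a surrogate-boundary point $\tilde{x} \in \tilde{\Gamma}$ to its image $x = \tilde{x} + d(\tilde{x})$ on the true boundary $\Gamma$, a Dirichlet condition $u = g$ on $\Gamma$ is shifted to the surrogate boundary via a Taylor expansion,
\[
u_h(\tilde{x}) + \nabla u_h(\tilde{x}) \cdot d(\tilde{x}) + O(\lVert d \rVert^2) = g\big(\tilde{x} + d(\tilde{x})\big) \quad \text{on } \tilde{\Gamma},
\]
and the shifted condition is then enforced weakly, e.g. with Nitsche-type terms.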
The $n$-vehicle exploration problem (NVEP) is a combinatorial optimization problem that seeks an optimal permutation of a fleet so as to maximize the distance traveled by the last vehicle. NVEP has a fractional form of objective function, and the computational complexity of its general case has remained open. We show that Hamiltonian Path $\leq_P$ NVEP, and prove that NVEP is NP-complete.
Gaussian approximations are routinely employed in Bayesian statistics to ease inference when the target posterior is intractable. Although these approximations are asymptotically justified by Bernstein-von Mises type results, in practice the expected Gaussian behavior may poorly represent the shape of the posterior, thus affecting approximation accuracy. Motivated by these considerations, we derive an improved class of closed-form approximations of posterior distributions which arise from a new treatment of a third-order version of the Laplace method, yielding approximations in a tractable family of skew-symmetric distributions. Under general assumptions which account for misspecified models and non-i.i.d. settings, this family of approximations is shown to have a total variation distance from the target posterior whose rate of convergence improves by at least one order of magnitude on the rate established by the classical Bernstein-von Mises theorem. Specializing this result to regular parametric models shows that the same improvement in approximation accuracy can also be derived for polynomially bounded posterior functionals. Unlike other higher-order approximations, our results prove that it is possible to derive closed-form and valid densities which are expected to provide, in practice, a more accurate, yet similarly tractable, alternative to Gaussian approximations of the target posterior, while inheriting their limiting frequentist properties. We strengthen these arguments by developing a practical skew-modal approximation for both joint and marginal posteriors that achieves the same guarantees as its theoretical counterpart by replacing the unknown model parameters with the corresponding MAP estimate. Empirical studies confirm that our theoretical results closely match the remarkable performance observed in practice, even in finite, possibly small, sample regimes.
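As background (the standard second-order Laplace, i.e. Gaussian, approximation that the skew-symmetric family above refines): with $\hat{\theta}$ the MAP estimate and $\ell(\theta) = \log \pi(\theta \mid y)$ the log-posterior,
\[
\pi(\theta \mid y) \;\approx\; N\big(\theta \,;\, \hat{\theta},\, (-\nabla^2 \ell(\hat{\theta}))^{-1}\big),
\]
obtained by truncating the expansion of $\ell$ at second order around $\hat{\theta}$; retaining the third-order term is what yields the skew-symmetric, and still closed-form, correction studied in the paper.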
The prevailing statistical approach to analyzing persistence diagrams is concerned with filtering out topological noise. In this paper, we adopt a different viewpoint and aim at estimating the actual distribution of a random persistence diagram, which captures both topological signal and noise. To this end, Chazal and Divol (2019) proved that, under general conditions, the expected value of a random persistence diagram is a measure admitting a Lebesgue density, called the persistence intensity function. Here, we are concerned with estimating the persistence intensity function and a novel, normalized version of it, called the persistence density function. We present a class of kernel-based estimators based on an i.i.d. sample of persistence diagrams and derive estimation rates in the supremum norm. As a direct corollary, we obtain uniform consistency rates for estimating linear representations of persistence diagrams, including Betti numbers and persistence surfaces. Interestingly, the persistence density function delivers stronger statistical guarantees.
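A minimal sketch of an estimator of this flavor (kernel choice and normalization here are illustrative assumptions, not necessarily the estimators analysed in the paper): smooth the points of each diagram with a kernel, then average over the sample of diagrams.

```python
# Kernel estimate of a persistence intensity function from an i.i.d. sample of
# diagrams, each diagram being an array of (birth, death) points.
import numpy as np

def intensity_estimate(diagrams, grid, bandwidth=0.1):
    """Evaluate (1/n) * sum_i sum_{p in D_i} K_h(x - p) on the given grid.

    diagrams : list of arrays of shape (m_i, 2)
    grid     : array of shape (G, 2) of evaluation points
    """
    n = len(diagrams)
    est = np.zeros(len(grid))
    norm = 1.0 / (2 * np.pi * bandwidth**2)          # 2-D Gaussian kernel constant
    for diag in diagrams:
        d2 = ((grid[:, None, :] - diag[None, :, :]) ** 2).sum(-1)
        est += norm * np.exp(-d2 / (2 * bandwidth**2)).sum(axis=1)
    return est / n
```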
Trust is essential for our interactions with others but also with artificial intelligence (AI) based systems. To understand whether a user trusts an AI, researchers need reliable measurement tools. However, currently discussed markers mostly rely on expensive and invasive sensors, like electroencephalograms, which may cause discomfort. The analysis of mouse trajectories has been suggested as a convenient tool for trust assessment. However, the relationship between trust, confidence and mouse trajectories is not yet fully understood. To provide more insight into this relationship, we asked participants (n = 146) to rate whether several tweets were offensive while an AI suggested its assessment. Our results reveal which aspects of the mouse trajectory are affected by the users' subjective trust and confidence ratings; yet they indicate that these measures might not explain enough of the variance to be used on their own. This work examines a potential low-cost trust assessment in AI systems.
Recently it has become common for applied works to combine commonly used survival analysis modeling methods, such as the multivariable Cox model, with propensity score weighting, with the intention of forming a doubly robust estimator that is unbiased in large samples when either the Cox model or the propensity score model is correctly specified. This combination does not, in general, produce a doubly robust estimator, even after regression standardization, when there is truly a causal effect. We demonstrate via simulation this lack of double robustness for the semiparametric Cox model, the Weibull proportional hazards model, and a simple proportional hazards flexible parametric model, with the latter two models fit via maximum likelihood. We provide a novel proof that the combination of propensity score weighting and a proportional hazards survival model, fit via either full or partial likelihood, is consistent under the null of no causal effect of the exposure on the outcome under particular censoring mechanisms if either the propensity score or the outcome model is correctly specified and contains all confounders. Given our results suggesting that double robustness only holds under the null, we outline two simple alternative estimators that are doubly robust for the survival difference at a given time point (in the above sense), provided the censoring mechanism can be correctly modeled, and one doubly robust method of estimation for the full survival curve. We provide R code for estimation and inference with these estimators in the supplementary materials.
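A minimal sketch of the combination under discussion (inverse-probability-of-treatment weights from a logistic propensity model fed into a weighted Cox fit), assuming the Python libraries scikit-learn and lifelines and hypothetical column names; the paper's R code and its doubly robust alternatives are not reproduced here.

```python
# Sketch of the combination the abstract discusses (not doubly robust in general):
# IPW weights from a propensity model, then a weighted Cox proportional hazards fit.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def ipw_weighted_cox(df, confounders, treatment="A", duration="time", event="event"):
    # Propensity score: P(A = 1 | confounders)
    ps_model = LogisticRegression(max_iter=1000).fit(df[confounders], df[treatment])
    ps = ps_model.predict_proba(df[confounders])[:, 1]

    # Inverse-probability-of-treatment weights
    df = df.copy()
    df["ipw"] = df[treatment] / ps + (1 - df[treatment]) / (1 - ps)

    # Weighted Cox model with the treatment as the only covariate
    cph = CoxPHFitter()
    cph.fit(df[[duration, event, treatment, "ipw"]],
            duration_col=duration, event_col=event,
            weights_col="ipw", robust=True)
    return cph
```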
Classical inequality curves and inequality measures are defined for distributions with finite mean value. Moreover, their empirical counterparts are not resistant to outliers. For these reasons, quantile versions of known inequality curves such as the Lorenz, Bonferroni, Zenga and $D$ curves, and quantile versions of inequality measures such as the Gini, Bonferroni, Zenga and $D$ indices, have been proposed in the literature. We propose various nonparametric estimators of quantile versions of inequality curves and inequality measures, prove their consistency, and compare their accuracy in a simulation study. We also give examples of the use of quantile versions of inequality measures in real data analysis.
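For reference, a sketch of the classical, mean-based empirical Lorenz curve and Gini index, i.e. the quantities whose quantile-based counterparts are proposed and estimated in the paper (the quantile versions themselves are not reproduced here):

```python
# Classical empirical Lorenz curve and Gini index (mean-based versions; the
# quantile-based counterparts studied in the paper are not implemented here).
import numpy as np

def lorenz_curve(x):
    x = np.sort(np.asarray(x, dtype=float))
    cum = np.cumsum(x) / x.sum()
    p = np.arange(1, x.size + 1) / x.size
    return np.insert(p, 0, 0.0), np.insert(cum, 0, 0.0)

def gini_index(x):
    p, L = lorenz_curve(x)
    return 1.0 - 2.0 * np.trapz(L, p)   # G = 1 - 2 * area under the Lorenz curve

income = np.random.default_rng(3).lognormal(mean=0.0, sigma=1.0, size=10000)
print(round(gini_index(income), 3))
```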