We address the issue of designing robust stabilization terms for the nonconforming virtual element method. To this end, we transfer the problem of defining the stabilizing bilinear form from the elemental nonconforming virtual element space, whose functions are not known in closed form, to the dual space spanned by the known functionals providing the degrees of freedom. With this approach, we construct different bilinear forms yielding optimal or quasi-optimal stability bounds and error estimates, under weaker assumptions on the tessellation than those usually considered in this framework. In particular, we prove optimality under geometrical assumptions that allow a mesh to have a very large number of arbitrarily small edges per element. Finally, we numerically assess the performance of the VEM for several different stabilizations fitting into our new framework on a set of representative test cases.
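For context, a standard instance of such a dual-space stabilization in the VEM literature is the so-called "dofi-dofi" form, which pairs the degree-of-freedom functionals directly (the notation below is generic, not the authors'):
\[
S^E(u_h, v_h) \;=\; \sum_{i=1}^{N^E} \mathrm{dof}_i(u_h)\,\mathrm{dof}_i(v_h),
\]
where $\mathrm{dof}_1,\dots,\mathrm{dof}_{N^E}$ are the degree-of-freedom functionals on the element $E$. Forms of this kind act only on quantities computable from the degrees of freedom, which is precisely the sense in which the problem is transferred to the dual space.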
When designing a message transmission system, from the point of view of keeping the transmitted information as fresh as possible, two rules of thumb seem reasonable: use small buffers and adopt a last-in-first-out policy. In this paper, we measure freshness of information using the recently introduced "age of information" performance measure. Considering it as a stochastic process operating in a stationary regime, we compute not just the first moment but the whole marginal distribution of the age of information (something important in applications) for two well-performing systems. In neither case do we allow preemption of the message being processed, because this may be difficult to implement in practice. We assume that the arrival process is Poisson and that the messages have independent sizes (service times) with a common distribution. We use Palm and Markov-renewal theory to derive explicit results for Laplace transforms which, in many cases, can be inverted analytically. We discuss how well the systems we analyze perform and examine how close to optimality they are. In particular, we answer an open question that was raised in [9] regarding the optimality of the system denoted as P2.
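As a point of reference, the age of information admits the standard pathwise definition (common notation, not taken from the paper):
\[
A(t) \;=\; t - \sigma(t),
\]
where $\sigma(t)$ is the generation time of the freshest message received by time $t$; thus $A(t)$ grows at unit rate between message receptions and drops whenever a fresher message is delivered, producing the familiar sawtooth paths whose stationary marginal distribution is computed here.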
Let $\mathbf{X}$ be a random variable uniformly distributed on the discrete cube $\left\{ -1,1\right\} ^{n}$, and let $T_{\rho}$ be the noise operator acting on Boolean functions $f:\left\{ -1,1\right\} ^{n}\to\left\{ 0,1\right\} $, where $\rho\in[0,1]$ is the noise parameter, representing the correlation coefficient between each coordinate of $\mathbf{X}$ and its noise-corrupted version. Given a convex function $\Phi$ and the mean $\mathbb{E}f(\mathbf{X})=a\in[0,1]$, which Boolean function $f$ maximizes the $\Phi$-stability $\mathbb{E}\left[\Phi\left(T_{\rho}f(\mathbf{X})\right)\right]$ of $f$? Special cases of this problem include the (symmetric and asymmetric) $\alpha$-stability problems and the "Most Informative Boolean Function" problem. In this paper, we provide several upper bounds for the maximal $\Phi$-stability. Considering specific $\Phi$'s, we partially resolve Mossel and O'Donnell's conjecture on $\alpha$-stability with $\alpha>2$, Li and M\'edard's conjecture on $\alpha$-stability with $1<\alpha<2$, and Courtade and Kumar's conjecture on the "Most Informative Boolean Function", which corresponds to a conjecture on $\alpha$-stability with $\alpha=1$. Our proofs are based on discrete Fourier analysis, optimization theory, and improvements of the Friedgut--Kalai--Naor (FKN) theorem. Our improvements of the FKN theorem are sharp or asymptotically sharp for certain cases.
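For concreteness, the noise operator has the standard Fourier-analytic description (a textbook formulation, not specific to this paper): $T_{\rho}f(x) = \mathbb{E}\left[f(\mathbf{Y}) \mid \mathbf{X}=x\right]$, where $\mathbf{Y}$ is the $\rho$-correlated copy of $\mathbf{X}$, and equivalently
\[
T_{\rho}f \;=\; \sum_{S\subseteq[n]} \rho^{|S|}\,\hat{f}(S)\,\chi_S,
\qquad \chi_S(x) \;=\; \prod_{i\in S} x_i,
\]
so that $T_{\rho}$ attenuates the Fourier coefficient $\hat{f}(S)$ by the factor $\rho^{|S|}$.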
We revisit the satisfiability problem for two-variable logic, denoted by SAT(FO2), which is known to be NEXP-complete. The upper bound is usually derived from its well-known "exponential size model" property. Whether it can be determinized/randomized efficiently is still an open question. In this paper we present a different approach by reducing it to a novel graph-theoretic problem that we call "Conditional Independent Set" (CIS). We show that CIS is NP-complete and present three simple algorithms for it: deterministic, randomized with zero error, and randomized with small one-sided error, with running times O(1.4423^n), O(1.6181^n) and O(1.3661^n), respectively. We then show that without the equality predicate SAT(FO2) is in fact equivalent to CIS in succinct representation. This yields the same three simple algorithms as above for SAT(FO2) without the equality predicate, with running times O(1.4423^(2^n)), O(1.6181^(2^n)) and O(1.3661^(2^n)), respectively, where n is the number of predicates in the input formula. To the best of our knowledge, these are the first deterministic/randomized algorithms for an NEXP-complete decidable logic with time complexity significantly lower than O(2^(2^n)). We also identify a few lower-complexity fragments of SAT(FO2) which correspond to the tractable fragments of CIS. For the fragment with the equality predicate, we present a linear-time many-one reduction to the fragment without the equality predicate. The reduction yields equi-satisfiable formulas and incurs a small constant blow-up in the number of predicates.
We design a Fortin operator for the lowest-order Taylor-Hood element in any dimension; such an operator was previously constructed only in 2D. In the construction we use tangential edge bubble functions for the divergence-correcting operator. This naturally leads to an alternative inf-sup stable reduced finite element pair. Furthermore, we provide a counterexample to the inf-sup stability, and hence to the existence of a Fortin operator, for the $P_2$-$P_0$ and the augmented Taylor-Hood element in 3D.
We present a non-nested multilevel algorithm for solving the Poisson equation discretized at scattered points using polyharmonic radial basis function (PHS-RBF) interpolations. We append polynomials to the radial basis functions to achieve exponential convergence of the discretization errors. The interpolations are performed over local clouds of points and the Poisson equation is collocated at each of the scattered points, resulting in a sparse set of discrete equations for the unknown variables. To solve this set of equations, we have developed a non-nested multilevel algorithm utilizing multiple independently generated coarse sets of points. The restriction and prolongation operators are also constructed with the same RBF interpolation procedure. The performance of the algorithm for Dirichlet and all-Neumann boundary conditions is evaluated in three model geometries using a manufactured solution. For Dirichlet boundary conditions, rapid convergence is observed using an SOR point solver as the relaxation scheme. For all-Neumann boundary conditions, convergence is seen to slow down as the degree of the appended polynomial increases. However, when the multilevel procedure is combined with a GMRES algorithm, convergence improves significantly. The GMRES-accelerated multilevel algorithm is then incorporated in a fractional-step method to solve the incompressible Navier-Stokes equations.
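For concreteness, a PHS-RBF interpolant with appended polynomials takes the standard form (generic notation, not the authors'):
\[
s(\mathbf{x}) \;=\; \sum_{i=1}^{N} \lambda_i\,\varphi\!\left(\lVert \mathbf{x}-\mathbf{x}_i \rVert\right) \;+\; \sum_{j=1}^{M} \gamma_j\,p_j(\mathbf{x}),
\qquad
\sum_{i=1}^{N} \lambda_i\,p_j(\mathbf{x}_i) \;=\; 0, \quad j=1,\dots,M,
\]
with, e.g., $\varphi(r)=r^{2m+1}$ and $\{p_j\}$ a basis of polynomials up to the appended degree; the side constraints make the augmented interpolation system square, and it is the polynomial part that drives the high-order convergence of the collocation scheme.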
When a finite-order vector autoregressive model is fitted to VAR($\infty$) data, the asymptotic distribution of statistics obtained via smooth functions of least-squares estimates requires care. L\"utkepohl and Poskitt (1991) provide a closed-form expression, based on the Delta method, for the limiting distribution of (structural) impulse responses in sieve VAR models. Yet numerical simulations have shown that confidence intervals built in this way appear overly conservative. In this note I argue that these results stem naturally from the limit arguments used in L\"utkepohl and Poskitt (1991), that they manifest when sieve inference is improperly applied, and that they can be "remedied" either by using bootstrap resampling or, simply, by using standard (non-sieve) asymptotics.
The main idea of nested sampling is to substitute the high-dimensional likelihood integral over the parameter space $\Omega$ with an integral over the unit interval $[0,1]$, by employing a push-forward with respect to a suitable transformation. For this substitution, it is often implicitly or explicitly assumed that samples from the prior are uniformly distributed along this unit interval after having been mapped by this transformation. We show that this assumption is wrong, especially in the case of a likelihood function with plateaus. Nevertheless, we show that the substitution enacted by nested sampling works for more interesting reasons, which we lay out. Although this means that, analytically, nested sampling can deal with plateaus in the likelihood function, the actual performance of the algorithm suffers in such a setting, and the method fails to approximate the evidence, mean and variance appropriately. We suggest a robust implementation of nested sampling based on a simple decomposition idea which demonstrably overcomes this issue.
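For reference, the substitution in question is the standard nested sampling identity (common notation):
\[
Z \;=\; \int_{\Omega} L(\theta)\,\pi(\mathrm{d}\theta) \;=\; \int_0^1 \mathcal{L}(x)\,\mathrm{d}x,
\qquad
X(\lambda) \;=\; \pi\!\left(\{\theta : L(\theta) > \lambda\}\right),
\]
where $\mathcal{L}$ is the (generalized) inverse of the prior-volume function $X$. The uniformity assumption discussed above states that $\theta \mapsto X(L(\theta))$ maps prior samples to uniform points in $[0,1]$; when $L$ has plateaus, $L(\theta)$ has atoms under the prior and this pushforward is no longer uniform.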
Spectral residual methods are powerful tools for solving nonlinear systems of equations without derivatives. In a recent paper, it was shown that an acceleration technique based on the Sequential Secant Method can greatly improve their efficiency and robustness. In the present work, an R implementation of the method is presented. Numerical experiments with a widely used test bed compare the presented approach with its plain (i.e., non-accelerated) version, which is part of the R package BB. Additional numerical experiments compare the proposed method with NITSOL, a state-of-the-art solver for nonlinear systems. The comparison shows that the acceleration process greatly improves the robustness of its counterpart included in the existing R package. As a by-product, an interface is provided between R and the consolidated CUTEst collection, which contains over a thousand nonlinear programming problems of all types and represents a standard for evaluating the performance of optimization methods.
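To illustrate the method class (this is not the authors' accelerated algorithm, nor their R implementation), a minimal spectral residual iteration with Barzilai-Borwein step lengths can be sketched in Python; production solvers such as DF-SANE add a nonmonotone line search for globalization, which is omitted here:

```python
import numpy as np

def spectral_residual(F, x0, tol=1e-8, max_iter=500):
    """Bare-bones spectral residual iteration for F(x) = 0 (no line search,
    so convergence is not guaranteed on hard problems)."""
    x = np.asarray(x0, dtype=float)
    f = F(x)
    sigma = 1.0  # initial spectral step length
    for _ in range(max_iter):
        if np.linalg.norm(f) <= tol:
            break
        x_new = x - sigma * f            # derivative-free residual step
        f_new = F(x_new)
        s, y = x_new - x, f_new - f
        denom = s @ y
        sigma = (s @ s) / denom if denom != 0.0 else 1.0  # BB step update
        x, f = x_new, f_new
    return x

# Example: intersect the unit circle with the line x0 = x1.
root = spectral_residual(
    lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]]),
    np.array([0.9, 0.4]),
)
```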
The distributions of random matrix theory have seen an explosion of interest in recent years, and have found applications in fields including physics, high-dimensional statistics, wireless communications, and finance. The Tracy-Widom distribution is one of the most important distributions in random matrix theory, and its numerical evaluation is a subject of great practical importance. One numerical method for evaluating the Tracy-Widom distribution uses the fact that the distribution can be represented as a Fredholm determinant of a certain integral operator. However, when the spectrum of the integral operator is computed by discretizing it directly, the eigenvalues are accurate to at most absolute precision. Remarkably, the integral operator is an example of a so-called bispectral operator, which admits a commuting differential operator sharing the same eigenfunctions. In this manuscript, we develop an efficient numerical algorithm for evaluating the eigendecomposition of the integral operator to full relative precision, using the eigendecomposition of the differential operator. With our algorithm, the Tracy-Widom distribution can be evaluated to full absolute precision everywhere rapidly, and, furthermore, its right tail can be computed to full relative precision.
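For context, the direct discretization mentioned above can be sketched for the $\beta=2$ (GUE) case, where $F_2(s) = \det(I - K_{\mathrm{Ai}})$ on $L^2(s,\infty)$ with the Airy kernel. The following is a generic Nystr\"om-type quadrature sketch in Python, not the authors' algorithm; the truncation length and node count are illustrative, and the small eigenvalues of the discretized operator carry only absolute accuracy, which is exactly the limitation the paper addresses:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import airy

def tracy_widom_gue_cdf(s, n=60, cutoff=12.0):
    """F_2(s) ~ det(I - K) with the Airy kernel discretized on [s, s + cutoff]
    by Gauss-Legendre quadrature (direct Nystrom discretization)."""
    t, w = leggauss(n)                       # nodes/weights on [-1, 1]
    x = s + 0.5 * cutoff * (t + 1.0)         # map nodes to [s, s + cutoff]
    w = 0.5 * cutoff * w
    ai, aip, _, _ = airy(x)                  # Ai(x), Ai'(x)
    # Airy kernel K(x, y) = (Ai(x) Ai'(y) - Ai'(x) Ai(y)) / (x - y).
    num = np.outer(ai, aip) - np.outer(aip, ai)
    den = x[:, None] - x[None, :]
    np.fill_diagonal(den, 1.0)               # placeholder; diagonal fixed below
    K = num / den
    np.fill_diagonal(K, aip**2 - x * ai**2)  # limit of K(x, y) as y -> x
    sw = np.sqrt(w)
    return np.linalg.det(np.eye(n) - sw[:, None] * K * sw[None, :])
```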
Image segmentation is an important component of many image understanding systems. It aims to group pixels in a spatially and perceptually coherent manner. Typically, such algorithms have a collection of parameters that control the degree of over-segmentation produced. It remains a challenge to properly select these parameters for human-like perceptual grouping. In this work, we exploit the diversity of segments produced by different choices of parameters. We scan the segmentation parameter space and generate a collection of image segmentation hypotheses (from highly over-segmented to under-segmented). These are fed into a cost-minimization framework that produces the final segmentation by selecting segments that: (1) better describe the natural contours of the image, and (2) are more stable and persistent among all the segmentation hypotheses. We compare our algorithm's performance with state-of-the-art algorithms, showing that we achieve improved results. We also show that our framework is robust to the choice of segmentation kernel that produces the initial set of hypotheses.
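As a minimal illustration of the hypothesis-generation step only (using scikit-image's Felzenszwalb graph-based segmentation as a stand-in kernel; the parameter values are illustrative, and the cost-minimization selection stage is not shown):

```python
from skimage import data, segmentation

image = data.astronaut()
# Scan the scale parameter from strong over-segmentation to under-segmentation,
# collecting one labeling per setting as a segmentation hypothesis.
hypotheses = [
    segmentation.felzenszwalb(image, scale=s, sigma=0.8, min_size=20)
    for s in (10, 50, 100, 300, 1000)
]
print([int(h.max()) + 1 for h in hypotheses])  # segment count per hypothesis
```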