Stick-breaking (SB) processes are often adopted in Bayesian mixture models for generating mixing weights. When covariates influence the sizes of clusters, SB mixtures are particularly convenient as they can leverage their connection to binary regression to ease both the specification of covariate effects and posterior computation. Existing SB models are typically constructed by continually breaking a single remaining piece of the unit stick. We view this from a dyadic tree perspective, in terms of a lopsided bifurcating tree that extends in only one direction. We show that several unsavory characteristics of SB models are in fact largely due to this lopsided tree structure. We consider a generalized class of SB models with alternative bifurcating tree structures and examine the influence of the underlying tree topology on the resulting Bayesian analysis in terms of prior assumptions, posterior uncertainty, and computational effectiveness. In particular, we provide evidence that a balanced tree topology, which corresponds to continually breaking all remaining pieces of the unit stick, can resolve or mitigate several undesirable properties of SB models that rely on a lopsided tree.
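As a minimal sketch of the two topologies (not the authors' exact prior; the Beta parameters here are illustrative choices), the usual lopsided construction repeatedly breaks the single remaining piece, whereas a balanced construction breaks every remaining piece at each level of a complete binary tree:

```python
import numpy as np

rng = np.random.default_rng(0)

def lopsided_sb_weights(K, alpha=1.0):
    """Standard stick-breaking: repeatedly break the single remaining piece."""
    v = rng.beta(1.0, alpha, size=K)
    v[-1] = 1.0  # close the stick after K pieces
    return v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))

def balanced_sb_weights(depth, alpha=1.0):
    """Balanced dyadic tree: break *every* remaining piece at each level,
    yielding 2**depth weights."""
    w = np.array([1.0])
    for _ in range(depth):
        v = rng.beta(alpha, alpha, size=w.size)  # one break per piece
        w = np.concatenate([w * v, w * (1.0 - v)])
    return w

print(lopsided_sb_weights(8).round(3))
print(balanced_sb_weights(3).round(3))
```

Under symmetric Beta splits the 2^depth leaf weights of the balanced tree are exchangeable, whereas the lopsided Beta(1, alpha) construction makes the expected weights decay geometrically in the index, which is one intuition for why the topology matters.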
Linear mixed models are commonly used in analyzing stepped-wedge cluster randomized trials (SW-CRTs). A key consideration in analyzing an SW-CRT is accounting for the potentially complex correlation structure, which can be achieved by specifying a random-effects structure. Common random-effects structures for an SW-CRT include random intercept, random cluster-by-period, and discrete-time decay. Recently, more complex structures, such as the random intervention structure, have been proposed. In practice, specifying appropriate random effects can be challenging. Robust variance estimators (RVEs) may be applied to linear mixed models to provide consistent estimators of the standard errors of fixed-effect parameters in the presence of random-effects misspecification. However, there has been no empirical investigation of RVEs for SW-CRTs. In this paper, we first review five RVEs (both standard and small-sample bias-corrected RVEs) that are available for linear mixed models. We then describe a comprehensive simulation study examining the performance of these RVEs for SW-CRTs with a continuous outcome under different data generators. For each data generator, we investigate whether the use of an RVE with either the random intercept model or the random cluster-by-period model is sufficient to provide valid statistical inference for fixed-effect parameters when these working models are subject to misspecification. Our results indicate that the random intercept and random cluster-by-period models with RVEs performed similarly. The CR3 RVE, coupled with degrees of freedom equal to the number of clusters minus two, consistently gave the best coverage results, but could be slightly anti-conservative when the number of clusters was below 16. We summarize the implications of our results for linear mixed model analysis of SW-CRTs in practice.
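For concreteness, the standard (CR0-type) sandwich form of an RVE for the fixed effects of a fitted linear mixed model can be sketched as follows; per-cluster design matrices, inverse working covariance matrices, and the fitted fixed effects are taken as given, and the small-sample corrections reviewed in the paper (e.g., CR2, CR3) adjust the cluster residuals rather than this overall sandwich form:

```python
import numpy as np

def cr0_robust_variance(X_list, y_list, Vinv_list, beta):
    """Standard (CR0) cluster-robust sandwich variance for a GLS/LMM fit.
    X_list, y_list, Vinv_list hold per-cluster design matrices, outcomes,
    and inverse working covariance matrices; beta is the fixed-effect fit."""
    p = X_list[0].shape[1]
    bread = np.zeros((p, p))
    meat = np.zeros((p, p))
    for X, y, Vinv in zip(X_list, y_list, Vinv_list):
        r = y - X @ beta                 # cluster residual vector
        XtVinv = X.T @ Vinv
        bread += XtVinv @ X
        meat += XtVinv @ np.outer(r, r) @ XtVinv.T
    bread_inv = np.linalg.inv(bread)
    return bread_inv @ meat @ bread_inv  # robust covariance of beta-hat
```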
Bayesian linear mixed-effects models and Bayesian ANOVA are increasingly used in the cognitive sciences to perform null hypothesis tests, in which a null hypothesis that an effect is zero is compared with an alternative hypothesis that the effect exists and is different from zero. While software tools for Bayes factor null hypothesis tests are easily accessible, how to specify the data and the model correctly is often not clear. In Bayesian approaches, many authors aggregate the data at the by-subject level and estimate Bayes factors on the aggregated data. Here, we use simulation-based calibration for model inference, applied to several example experimental designs, to demonstrate that, as with frequentist analysis, such null hypothesis tests on aggregated data can be problematic in Bayesian analysis. Specifically, when random slope variances differ (i.e., the sphericity assumption is violated), Bayes factors are too conservative for contrasts where the variance is small and too liberal for contrasts where the variance is large. Running Bayesian ANOVA on aggregated data can likewise lead to biased Bayes factor results if the sphericity assumption is violated. Moreover, Bayes factors for by-subject aggregated data are biased (too liberal) when random item slope variance is present but ignored in the analysis. These problems can be circumvented or reduced by running Bayesian linear mixed-effects models on non-aggregated data, such as individual trials, and by explicitly modeling the full random-effects structure. Reproducible code is available from \url{https://osf.io/mjf47/}.
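The aggregation at issue is simply averaging trials within each subject-by-condition cell before computing the Bayes factor; a hypothetical toy example (column names and values are illustrative):

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial.
trials = pd.DataFrame({
    "subject": [1, 1, 1, 1, 2, 2, 2, 2],
    "condition": ["a", "b", "a", "b", "a", "b", "a", "b"],
    "rt": [512, 570, 498, 555, 603, 660, 590, 642],
})

# By-subject aggregation: averaging within subject x condition discards
# trial-level and item-level variability before the test is run.
aggregated = trials.groupby(["subject", "condition"], as_index=False)["rt"].mean()
print(aggregated)

# The alternative recommended above: fit a mixed model on `trials` with the
# full random-effects structure, e.g. (lme4/brms-style formula, assuming an
# item column is available)
#   rt ~ condition + (condition | subject) + (condition | item)
```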
Computing the discrete rational minimax approximation in the complex plane is challenging. Apart from Ruttan's sufficient condition, there are few other sufficient conditions for global optimality. State-of-the-art rational approximation algorithms, such as the adaptive Antoulas-Anderson (AAA) method, AAA-Lawson, and the rational Krylov fitting (RKFIT) method, are highly efficient, but the computed rational approximants may be only near-best. In this paper, we propose a convex programming approach whose solution is guaranteed to be the rational minimax approximation under Ruttan's sufficient condition. Furthermore, we present a new version of Lawson's iteration for solving this convex programming problem. The computed solution can be easily verified as the rational minimax approximant. Our numerical experiments demonstrate that this updated version of Lawson's iteration generally converges monotonically with respect to the objective function of the convex program. Compared to the highly efficient AAA, AAA-Lawson, and stabilized Sanathanan-Koerner iterations, it is an effective and competitive approach for the rational minimax problem.
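To fix intuition for the reweighting idea, here is the classical Lawson iteration for the discrete *linear* minimax problem; this is only the linear prototype, not the paper's convex-programming variant for the rational problem:

```python
import numpy as np

def lawson_minimax(A, f, iters=100):
    """Classical Lawson iteration for min_c max_i |f_i - (A c)_i|:
    solve weighted least squares, then inflate weights where residuals
    are large. Converges to the linear minimax solution."""
    m = A.shape[0]
    w = np.full(m, 1.0 / m)
    for _ in range(iters):
        sw = np.sqrt(w)
        c, *_ = np.linalg.lstsq(A * sw[:, None], f * sw, rcond=None)
        r = np.abs(f - A @ c)
        w *= r            # Lawson multiplicative update
        w /= w.sum()
    return c, r.max()

# Example: degree-5 polynomial minimax fit to |x| on a grid.
x = np.linspace(-1, 1, 200)
c, err = lawson_minimax(np.vander(x, 6), np.abs(x))
print(err)
```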
Numerically solving nonlinear partial differential equations whose solutions involve discontinuities is challenging. In the context of numerical simulators for multi-phase flow in porous media, there is a long-standing issue known as the Grid Orientation Effect (GOE), wherein different numerical solutions can be obtained on grids with different orientations under certain unfavorable conditions. Our perspective is that GOE arises from numerical instability near displacement fronts: spurious oscillations accompanying sharp fronts, if not adequately suppressed, lead to GOE. To reduce or even eliminate GOE, we propose adding adaptive artificial viscosity when solving the saturation equation. We demonstrate that appropriate artificial viscosity can effectively reduce or even eliminate GOE. The proposed numerical method can be easily applied to practical engineering problems.
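A one-dimensional cartoon of the idea (flux, mesh, and viscosity scaling are all illustrative choices, not the paper's scheme): transport a saturation front with an upwind step plus an artificial-viscosity term whose size adapts to the local gradient, so smoothing concentrates at the front:

```python
import numpy as np

def flux(s):  # Buckley-Leverett-type fractional flow
    return s**2 / (s**2 + (1.0 - s)**2)

nx, dx, dt = 200, 1.0 / 200, 1e-3
s = np.zeros(nx)
s[0] = 1.0  # injection at the left boundary
for _ in range(400):
    f = flux(s)
    grad = np.abs(np.diff(s, prepend=s[0]))
    eps = 0.5 * dx * grad  # more viscosity where the front is sharp
    upwind = -(dt / dx) * np.diff(f, prepend=f[0])
    visc = (dt / dx**2) * eps * np.diff(s, n=2, prepend=s[0], append=s[-1])
    s = np.clip(s + upwind + visc, 0.0, 1.0)
    s[0] = 1.0  # re-impose the Dirichlet injection condition
```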
We propose a new point process model that combines, in the spatio-temporal setting, both multi-scaling by hybridization and hardcore distances. The resulting model, which we call the hybrid Strauss hardcore point process, allows different types of interaction at different spatial and/or temporal scales, which may be of interest in environmental and biological applications. Inference and simulation for the model are implemented using the logistic likelihood approach and the birth-death Metropolis-Hastings algorithm. Our model is illustrated and compared to others on two datasets of forest fire occurrences, in France and Spain respectively.
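In standard hybrid-model notation, one plausible way to write such a density (the paper's exact parametrization may differ) is
\[
f(\mathbf{x}) \propto \beta^{\,n(\mathbf{x})} \prod_{j=1}^{m} \gamma_j^{\,S_{(r_j,\,q_j)}(\mathbf{x})}\; \mathbf{1}\{\text{no two points of } \mathbf{x} \text{ are within the hardcore distances in both space and time}\},
\]
where $n(\mathbf{x})$ is the number of points, $S_{(r_j,\,q_j)}(\mathbf{x})$ counts pairs of points within spatial distance $r_j$ and temporal lag $q_j$, and each interaction parameter $\gamma_j$ controls inhibition or clustering at its own scale.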
We propose a distributed bundle adjustment (DBA) method using the exact Levenberg-Marquardt (LM) algorithm for super-large-scale datasets. Most existing methods partition the global map into small submaps and conduct bundle adjustment within each submap. To fit the parallel framework, they use approximate solutions instead of the LM algorithm, and consequently often give sub-optimal results. In contrast, we use the exact LM algorithm to conduct global bundle adjustment, where the formation of the reduced camera system (RCS) is parallelized and executed in a distributed way. To store the large RCS, we compress it with a block-based sparse matrix compression format (BSMC) that fully exploits its block structure. The BSMC format also enables distributed storage and updating of the global RCS. The proposed method is extensively evaluated and compared with state-of-the-art pipelines using both synthetic and real datasets. Preliminary results demonstrate the efficient memory usage and vast scalability of the proposed method compared with the baselines. For the first time, we conduct parallel bundle adjustment using the LM algorithm on a real dataset with 1.18 million images and a synthetic dataset with 10 million images (about 500 times the size handled by state-of-the-art LM-based BA) on a distributed computing system.
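The abstract does not spell out the BSMC layout, but the general idea of block-wise compressed storage for the RCS can be sketched with a toy dictionary-of-blocks structure, where only nonzero camera-camera blocks are kept and the dictionary can be sharded across workers:

```python
import numpy as np

class BlockSparseMatrix:
    """Toy block-compressed storage for a block matrix such as a reduced
    camera system: only nonzero camera-camera blocks are kept, keyed by
    block coordinates, so shards can live on different workers.
    (This is an illustration, not the paper's BSMC format.)"""
    def __init__(self, block_size):
        self.bs = block_size
        self.blocks = {}                 # (row, col) -> dense block

    def add_block(self, i, j, block):
        key = (i, j)
        if key in self.blocks:
            self.blocks[key] += block    # accumulate Schur contributions
        else:
            self.blocks[key] = block.copy()

    def matvec(self, x):
        """Block matrix-vector product, e.g. for iterative solves inside LM."""
        y = np.zeros_like(x)
        for (i, j), B in self.blocks.items():
            y[i*self.bs:(i+1)*self.bs] += B @ x[j*self.bs:(j+1)*self.bs]
        return y
```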
Whisper is a recent Automatic Speech Recognition (ASR) model displaying impressive robustness to both out-of-distribution inputs and random noise. In this work, we show that this robustness does not carry over to adversarial noise. We show that we can degrade Whisper's performance dramatically, or even have it transcribe a target sentence of our choice, by generating very small input perturbations with a signal-to-noise ratio (SNR) of 35-45 dB. We also show that by fooling the Whisper language detector we can very easily degrade the performance of multilingual models. These vulnerabilities of a widely used open-source model have practical security implications and emphasize the need for adversarially robust ASR.
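To make the 35-45 dB figure concrete: SNR fixes how much weaker the perturbation is than the speech, and any perturbation can be rescaled to a target SNR. A small sketch (the attack itself, e.g. gradient-based optimization against Whisper, is not shown):

```python
import numpy as np

def scale_to_snr(signal, noise, snr_db):
    """Scale `noise` so that 10*log10(P_signal / P_noise) equals snr_db."""
    target_power = np.mean(signal**2) / 10**(snr_db / 10.0)
    return noise * np.sqrt(target_power / np.mean(noise**2))

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)  # stand-in for one second of audio
delta = scale_to_snr(speech, rng.standard_normal(16000), snr_db=40.0)
snr = 10 * np.log10(np.mean(speech**2) / np.mean(delta**2))
print(round(snr, 1))  # 40.0 -- the perturbation is ~100x weaker in RMS terms
```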
Strong stability is a property of time-integration schemes for ODEs whereby temporal monotonicity of solutions is preserved in arbitrary (inner-product) norms. It is proved that explicit Runge--Kutta schemes of order $p\in 4\mathbb{N}$ with $s=p$ stages for linear autonomous ODE systems are not strongly stable, closing an open stability question from [Z.~Sun and C.-W.~Shu, SIAM J. Numer. Anal. 57 (2019), 1158--1182]. Furthermore, for explicit Runge--Kutta methods of order $p\in\mathbb{N}$ with $s>p$ stages, we prove several sufficient as well as necessary conditions for strong stability. These conditions involve both the stability function and the hypocoercivity index of the ODE system matrix, a structural property combining the Hermitian and skew-Hermitian parts of the system matrix.
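For orientation, a standard fact underlying the first result: an explicit Runge--Kutta scheme of order $p$ with $s=p$ stages necessarily has the truncated-exponential stability function, and strong stability for $u' = Au$ asks that the corresponding update map be non-expansive,
\[
R(z) = \sum_{k=0}^{p} \frac{z^k}{k!}, \qquad \|u^{n+1}\| = \|R(\tau A)\, u^n\| \le \|u^n\| \quad \text{for all sufficiently small } \tau > 0.
\]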
By combining a logarithm transformation with a corrected Milstein-type method, this article proposes an explicit, unconditionally boundary- and dynamics-preserving scheme for the stochastic susceptible-infected-susceptible (SIS) epidemic model, which takes values in (0,N). The scheme is first proved to have a strong convergence rate of order one. Further, we analyze the dynamical behavior of the numerical approximations and show that the scheme unconditionally preserves both the domain and the dynamics of the model. More precisely, the proposed scheme produces numerical approximations living in the domain (0,N) and reproducing the extinction and persistence properties of the original model for any time discretization step-size h > 0, without any additional requirements on the model parameters. Numerical experiments are presented to verify our theoretical results.
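The paper's corrected Milstein-type construction is not reproduced in the abstract; purely as an illustration of how a transformation yields unconditional domain preservation, the sketch below applies a logit (Lamperti-type) transform to a standard form of the stochastic SIS model, $dI = I(\beta N - \mu - \gamma - \beta I)\,dt + \sigma I(N - I)\,dB$. In the transformed variable the noise is additive (so the Milstein correction vanishes), and every explicit step maps back into (0,N):

```python
import numpy as np

# Illustrative only -- not the paper's scheme. Y = log(I / (N - I)) maps
# (0, N) onto the real line; Ito's formula gives constant diffusion sigma*N.
rng = np.random.default_rng(1)
N, beta, mu, gamma, sigma = 100.0, 0.03, 0.5, 0.5, 0.01
h, steps = 0.5, 2000
I = 10.0
Y = np.log(I / (N - I))
for _ in range(steps):
    I = N / (1.0 + np.exp(-Y))
    drift = (N * (beta * N - mu - gamma - beta * I) / (N - I)
             + 0.5 * sigma**2 * (I**2 - (N - I)**2))
    Y += drift * h + sigma * N * np.sqrt(h) * rng.standard_normal()
I = N / (1.0 + np.exp(-Y))  # final iterate, guaranteed to lie in (0, N)
```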
The forces acting on bridge structural elements caused by live loads are computed using live load models defined in design codes. In most cases, such live load models are derived from studies of girder bridges, where the aim is to predict extreme values of shear forces and bending moments. This paper shows that when code live load models are applied to truss bridges, the estimated forces in some structural elements may not be representative of those caused by actual traffic. Three weigh-in-motion (WIM) databases recorded on roads in Mexico are used as real traffic data. The results suggest that current code live load models are not entirely adequate for estimating forces in the structural elements of truss bridges.