In algorithms for solving optimization problems constrained to a smooth manifold, retractions are a well-established tool to ensure that the iterates stay on the manifold. More recently, it has been demonstrated that retractions are a useful concept for other computational tasks on manifolds as well, including interpolation tasks. In this work, we consider the application of retractions to the numerical integration of differential equations on fixed-rank matrix manifolds. This is closely related to dynamical low-rank approximation (DLRA) techniques. In fact, any retraction leads to a numerical integrator and, vice versa, certain DLRA techniques bear a direct relation with retractions. As an example of the latter, we introduce a new retraction, called the KLS retraction, that is derived from the so-called unconventional integrator for DLRA. We also illustrate how retractions can be used to recover known DLRA techniques and to design new ones. In particular, this work introduces two novel numerical integration schemes that apply to differential equations on general manifolds: the accelerated forward Euler (AFE) method and the Ralston-Hermite (RH) method. Both methods build on retractions by using them as a tool for approximating curves on manifolds. The two methods are proven to have local truncation error of order three. Numerical experiments on classical DLRA examples highlight the advantages and shortcomings of these new methods.
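To make the first claim concrete, the following Python sketch turns a retraction into a one-step integrator for a matrix ODE $\dot{Y} = F(Y)$ on the manifold of rank-$r$ matrices, using the metric-projection retraction given by the truncated SVD; the step size h, the rank r, and the vector field F below are illustrative placeholders, and this is only a minimal sketch rather than the AFE or RH methods introduced in the paper.

    import numpy as np

    def svd_retraction(Y, r):
        # Map a matrix back to the rank-r manifold via truncated SVD
        # (the metric-projection retraction on fixed-rank matrices).
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

    def retracted_euler_step(Y, F, h, r):
        # One forward Euler step followed by a retraction to the manifold.
        return svd_retraction(Y + h * F(Y), r)

    # Toy usage: F(Y) = A @ Y for a fixed matrix A (an illustrative choice).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 20))
    Y0 = svd_retraction(rng.standard_normal((20, 20)), r=3)
    Y1 = retracted_euler_step(Y0, lambda Y: A @ Y, h=1e-2, r=3)
    print(np.linalg.matrix_rank(Y1))  # the iterate stays at rank 3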
We address speech enhancement based on variational autoencoders, which involves learning a speech prior distribution in the time-frequency (TF) domain. A zero-mean complex-valued Gaussian distribution is usually assumed for the generative model, where the speech information is encoded in the variance as a function of a latent variable. In contrast to this commonly used approach, we propose a weighted variance generative model, in which the contribution of each spectrogram time-frame to parameter learning is weighted. We impose a Gamma prior distribution on the weights, which effectively leads to a Student's t-distribution instead of a Gaussian for speech generative modeling. We develop efficient training and speech enhancement algorithms based on the proposed generative model. Our experimental results on spectrogram auto-encoding and speech enhancement demonstrate the effectiveness and robustness of the proposed approach compared to the standard unweighted variance model.
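For intuition about the Gamma-weight construction, consider the simplified real-valued scalar case (with illustrative shape and rate parameters $\alpha$ and $\beta$ that need not match the paper's notation): if $s \mid w \sim \mathcal{N}(0, \sigma^2/w)$ and $w \sim \mathrm{Gamma}(\alpha, \beta)$, marginalizing the weight gives $p(s) = \int_0^\infty \mathcal{N}(s; 0, \sigma^2/w)\,\mathrm{Gamma}(w; \alpha, \beta)\,\mathrm{d}w \propto \bigl(1 + s^2/(2\beta\sigma^2)\bigr)^{-(\alpha + 1/2)}$, i.e., a Student's t density with $2\alpha$ degrees of freedom and scale $\sigma\sqrt{\beta/\alpha}$; the complex-valued case used for spectrogram coefficients is analogous.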
It is known that the generating function associated with the enumeration of non-backtracking walks on finite graphs is a rational matrix-valued function of the parameter; this function is also closely related to graph-theoretical results such as Ihara's theorem and the zeta function on graphs. In [P. Grindrod, D. J. Higham, V. Noferini, The deformed graph Laplacian and its application to network centrality analysis, SIAM J. Matrix Anal. Appl. 39(1), 310--341, 2018], the radius of convergence of the generating function was studied for simple (i.e., undirected, unweighted and with no loops) graphs, and shown to depend on the number of cycles in the graph. In this paper, we use tools from the theory of polynomial and rational matrices to greatly extend these results by studying the radius of convergence of the corresponding generating function for general, possibly directed and/or weighted, graphs. We give an analogous characterization of the radius of convergence for directed unweighted graphs, showing that it depends on the number of cycles in the undirectization of the graph. For weighted graphs, we provide for the first time an exact formula for the radius of convergence, improving a previous result that exhibited only a lower bound. Finally, we also consider backtracking-downweighted walks on unweighted digraphs, and we prove a version of Ihara's theorem in that case.
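For the simple-graph case covered by the cited reference, the generating function can be evaluated from the deformed graph Laplacian $M(t) = I - tA + t^2(D - I)$ as $(1 - t^2)\,M(t)^{-1}$; the short numpy sketch below (a hedged illustration, with the 4-cycle chosen only as a toy example) evaluates it at a point inside the radius of convergence.

    import numpy as np

    def nbtw_generating_function(A, t):
        # Evaluate (1 - t^2) * (I - t A + t^2 (D - I))^{-1}, the non-backtracking
        # walk generating function of a simple graph with adjacency matrix A.
        n = A.shape[0]
        D = np.diag(A.sum(axis=1))
        M = np.eye(n) - t * A + t**2 * (D - np.eye(n))  # deformed graph Laplacian
        return (1.0 - t**2) * np.linalg.inv(M)

    # Toy usage: the 4-cycle C4, evaluated at t = 0.3 (inside the radius of convergence).
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    print(nbtw_generating_function(A, 0.3))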
Confounder selection, namely choosing a set of covariates to control for confounding between a treatment and an outcome, is arguably the most important step in the design of observational studies. Previous methods, such as Pearl's celebrated back-door criterion, typically require pre-specifying a causal graph, which can often be difficult in practice. We propose an interactive procedure for confounder selection that does not require pre-specifying the graph or the set of observed variables. This procedure iteratively expands the causal graph by finding what we call "primary adjustment sets" for a pair of possibly confounded variables. This can be viewed as inverting a sequence of latent projections of the underlying causal graph. Structural information in the form of primary adjustment sets is elicited from the user, bit by bit, until either a set of covariates is found to control for confounding or it can be determined that no such set exists. Other information, such as the causal relations between confounders, is not required by the procedure. We show that if the user correctly specifies the primary adjustment sets at every step, our procedure is both sound and complete.
Recently, it has become common for applied work to combine commonly used survival analysis models, such as the multivariable Cox model, with propensity score weighting, with the intention of forming a doubly robust estimator that is unbiased in large samples when either the Cox model or the propensity score model is correctly specified. This combination does not, in general, produce a doubly robust estimator, even after regression standardization, when there is truly a causal effect. We demonstrate via simulation this lack of double robustness for the semiparametric Cox model, the Weibull proportional hazards model, and a simple proportional hazards flexible parametric model, with the latter two models fit via maximum likelihood. We provide a novel proof that the combination of propensity score weighting and a proportional hazards survival model, fit either via full or partial likelihood, is consistent under the null of no causal effect of the exposure on the outcome, under particular censoring mechanisms, if either the propensity score or the outcome model is correctly specified and contains all confounders. Given our results suggesting that double robustness only exists under the null, we outline two simple alternative estimators that are doubly robust for the survival difference at a given time point (in the above sense), provided the censoring mechanism can be correctly modeled, and one doubly robust method of estimation for the full survival curve. R code for estimation and inference with these estimators is provided in the supplementary materials.
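To make the combination under scrutiny concrete, the sketch below implements the commonly used recipe (inverse-probability-of-treatment weights from a logistic model, a weighted Cox fit, then regression standardization of the survival difference at a time point t0). The column names and the use of scikit-learn and lifelines are illustrative assumptions, and this is a sketch of the estimator being analyzed, not of the paper's proposed doubly robust alternatives.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from lifelines import CoxPHFitter

    def ipw_cox_survival_difference(df, confounders, t0):
        # Expects a pandas DataFrame with columns 'A' (binary exposure),
        # 'T' (follow-up time), 'E' (event indicator), plus the confounders.
        df = df.copy()

        # 1. Inverse-probability-of-treatment weights from a logistic model.
        ps = LogisticRegression(max_iter=1000).fit(df[confounders], df['A'])
        p = ps.predict_proba(df[confounders])[:, 1]
        df['w'] = np.where(df['A'] == 1, 1.0 / p, 1.0 / (1.0 - p))

        # 2. Propensity-score-weighted Cox model with exposure and confounders.
        cph = CoxPHFitter()
        cph.fit(df[['T', 'E', 'A', 'w'] + confounders],
                duration_col='T', event_col='E', weights_col='w', robust=True)

        # 3. Regression standardization: average predicted survival at t0
        #    with the exposure set to 1 versus 0 for everyone.
        def standardized_survival(a):
            X = df[['A'] + confounders].copy()
            X['A'] = a
            return cph.predict_survival_function(X, times=[t0]).mean(axis=1).iloc[0]

        return standardized_survival(1) - standardized_survival(0)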
For solving two-dimensional incompressible flow in vorticity form by the fourth-order compact finite difference scheme and explicit strong stability preserving (SSP) temporal discretizations, we show that the simple bound-preserving limiter in [Li H., Xie S., Zhang X., SIAM J. Numer. Anal., 56 (2018)] can enforce the strict bounds of the vorticity, if the velocity field satisfies a discrete divergence-free constraint. To reduce oscillations, a modified TVB limiter adapted from [Cockburn B., Shu CW., SIAM J. Numer. Anal., 31 (1994)] is constructed without affecting the bound-preserving property. This bound-preserving finite difference method can be used for any passive convection equation with a divergence-free velocity field.
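As an illustration of the bound-preserving mechanism, the following numpy sketch applies a generic Zhang--Shu-type linear scaling limiter, which pulls point values toward a local average that already satisfies the bounds; this is a hedged sketch of the idea, not necessarily the exact limiter of the cited reference.

    import numpy as np

    def scaling_limiter(u, u_bar, m, M, eps=1e-13):
        # Rescale the point values u toward the local average u_bar so that the
        # limited values lie in [m, M]; assumes m <= u_bar <= M. The affine
        # rescaling about u_bar leaves that average unchanged.
        u = np.asarray(u, dtype=float)
        theta = min(1.0,
                    (M - u_bar) / (u.max() - u_bar + eps),
                    (u_bar - m) / (u_bar - u.min() + eps))
        return u_bar + theta * (u - u_bar)

    # Toy usage: point values overshooting the bounds [0, 1].
    u = np.array([-0.05, 0.2, 0.7, 1.08])
    print(scaling_limiter(u, u_bar=u.mean(), m=0.0, M=1.0))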
Bagging is a commonly used ensemble technique in statistics and machine learning to improve the performance of prediction procedures. In this paper, we study the prediction risk of variants of bagged predictors under the proportional asymptotics regime, in which the ratio of the number of features to the number of observations converges to a constant. Specifically, we propose a general strategy to analyze the prediction risk under squared error loss of bagged predictors using classical results on simple random sampling. Specializing the strategy, we derive the exact asymptotic risk of the bagged ridge and ridgeless predictors with an arbitrary number of bags under a well-specified linear model with arbitrary feature covariance matrices and signal vectors. Furthermore, we prescribe a generic cross-validation procedure to select the optimal subsample size for bagging and discuss its utility for eliminating the non-monotonic behavior of the limiting risk in the sample size (i.e., double or multiple descents). In demonstrating the proposed procedure for bagged ridge and ridgeless predictors, we thoroughly investigate the oracle properties of the optimal subsample size and provide an in-depth comparison between different bagging variants.
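As a concrete instance of the predictors analyzed here, the numpy sketch below forms a subsample-and-aggregate ridgeless (minimum-norm least squares) predictor; the subsample size k and the number of bags are the tuning parameters discussed above, and the data-generating model in the toy usage is only illustrative.

    import numpy as np

    def bagged_ridgeless_predict(X, y, X_test, k, n_bags, seed=0):
        # Average the minimum-norm least squares (ridgeless) predictors fitted
        # on n_bags simple random subsamples of size k.
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        preds = np.zeros((n_bags, X_test.shape[0]))
        for b in range(n_bags):
            idx = rng.choice(n, size=k, replace=False)  # subsample without replacement
            beta = np.linalg.pinv(X[idx]) @ y[idx]      # min-norm least squares fit
            preds[b] = X_test @ beta
        return preds.mean(axis=0)                       # aggregate by averaging

    # Toy usage under a well-specified linear model.
    rng = np.random.default_rng(1)
    n, p = 200, 300
    beta_star = rng.standard_normal(p) / np.sqrt(p)
    X = rng.standard_normal((n, p))
    y = X @ beta_star + 0.5 * rng.standard_normal(n)
    X_test = rng.standard_normal((50, p))
    print(bagged_ridgeless_predict(X, y, X_test, k=100, n_bags=20)[:5])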
We introduce a nonlinear stochastic model reduction technique for high-dimensional stochastic dynamical systems that have a low-dimensional invariant effective manifold with slow dynamics and high-dimensional, large fast modes. Given only access to a black-box simulator from which short bursts of simulation can be obtained, we design an algorithm that outputs an estimate of the invariant manifold, of the effective stochastic dynamics on it (with the fast modes averaged out), and a simulator thereof. This simulator is efficient in that it exploits the low dimension of the invariant manifold, and it takes time steps whose size depends on the regularity of the effective process and is therefore typically much larger than that of the original simulator, which has to resolve the fast modes. The algorithm and the estimation can be performed on the fly, leading to efficient exploration of the effective state space without losing consistency with the underlying dynamics. This construction enables fast and efficient simulation of paths of the effective dynamics, together with estimation of crucial features and observables of such dynamics, including the stationary distribution, identification of metastable states, and residence times and transition rates between them.
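One ingredient of such a construction can be illustrated in isolation: from a point, launch many short bursts of the black-box simulator, read an effective drift and diffusion off the empirical mean and covariance of the displacements, and then step the estimated effective SDE with a larger time step. The interface simulate_burst and all names below are hypothetical, and this is a much-simplified sketch of the general idea rather than the paper's algorithm.

    import numpy as np

    def estimate_local_drift_diffusion(simulate_burst, x0, n_bursts, t_burst):
        # Estimate an effective drift b(x0) and diffusion covariance a(x0) from
        # short bursts; simulate_burst(x0, t_burst) is a hypothetical black-box
        # interface returning the state after time t_burst.
        ends = np.array([simulate_burst(x0, t_burst) for _ in range(n_bursts)])
        disp = ends - x0
        b = disp.mean(axis=0) / t_burst            # effective drift estimate
        a = np.cov(disp, rowvar=False) / t_burst   # effective diffusion covariance
        return b, a

    def effective_step(x, b, a, dt, rng):
        # One Euler-Maruyama step of the estimated effective SDE, taken with a
        # step size dt much larger than the burst length.
        L = np.linalg.cholesky(a + 1e-12 * np.eye(len(x)))
        return x + b * dt + np.sqrt(dt) * (L @ rng.standard_normal(len(x)))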
We show that the use of large language models (LLMs) is prevalent among crowd workers, and that targeted mitigation strategies can significantly reduce, but not eliminate, LLM use. On a text summarization task where workers were not directed in any way regarding their LLM use, the estimated prevalence of LLM use was around 30%, but it was reduced by about half by asking workers not to use LLMs and by raising the cost of using them, e.g., by disabling copy-pasting. Secondary analyses give further insight into LLM use and its prevention: LLM use yields high-quality but homogeneous responses, which may harm research concerned with human (rather than model) behavior and degrade future models trained on crowdsourced data. At the same time, preventing LLM use may be at odds with obtaining high-quality responses; e.g., when workers were asked not to use LLMs, summaries contained fewer keywords carrying essential information. Our estimates will likely change as LLMs increase in popularity and capability, and as norms around their usage change. Yet, understanding the co-evolution of LLM-based tools and users is key to maintaining the validity of research done using crowdsourcing, and we provide a critical baseline before widespread adoption ensues.
Benchmark suites are crucial for assessing the performance of evolutionary algorithms, but the constituent problems are often too complex to provide clear intuition about an algorithm's strengths and weaknesses. To address this gap, we introduce DOSSIER ("Diagnostic Overview of Selection Schemes In Evolutionary Runs"), a diagnostic suite initially composed of eight handcrafted metrics. These metrics are designed to empirically measure specific capacities for exploitation, exploration, and their interactions. We consider exploitation both with and without constraints, and we divide exploration into two aspects: diversity exploration (the ability to simultaneously explore multiple pathways) and valley-crossing exploration (the ability to cross wider and wider fitness valleys). We apply DOSSIER to six popular selection schemes: truncation, tournament, fitness sharing, lexicase, nondominated sorting, and novelty search. Our results confirm that simple schemes (e.g., tournament and truncation) emphasize exploitation. For more sophisticated schemes, however, our diagnostics revealed interesting dynamics. Lexicase selection performed moderately well across all diagnostics that did not incorporate valley crossing, but faltered dramatically whenever valleys were present, performing worse than even random search. Fitness sharing was the only scheme to effectively contend with valley crossing, but it struggled with the other diagnostics. Our study highlights the utility of using diagnostics to gain nuanced insights into selection scheme characteristics, which can inform the design of new selection methods.
We propose novel optimal and parameter-free algorithms for computing an approximate solution for smooth optimization with small (projected) gradient norm. Specifically, for computing an approximate solution such that the norm of the (projected) gradient is not greater than $\varepsilon$, we have the following results for the convex, strongly convex, and nonconvex cases: a) for the convex case, the total number of gradient evaluations is bounded by $O(1)\sqrt{L\|x_0 - x^*\|/\varepsilon}$, where $L$ is the Lipschitz constant of the gradient function and $x^*$ is any optimal solution; b) for the strongly convex case, the total number of gradient evaluations is bounded by $O(1)\sqrt{L/\mu}\log(\|\nabla f(x_0)\|/\varepsilon)$, where $\mu$ is the strong convexity constant; c) for the nonconvex case, the total number of gradient evaluations is bounded by $O(1)\sqrt{Ll}(f(x_0) - f(x^*))/\varepsilon^2$, where $l$ is the lower curvature constant. Our complexity results match the lower complexity bounds for all three problem classes. Our analysis can be applied to both unconstrained problems and problems with constrained feasible sets; we demonstrate our strategy for analyzing the complexity of computing solutions with small projected gradient norm in the convex case. For the convex, strongly convex, and nonconvex cases, we also propose parameter-free algorithms that do not require knowledge of any problem parameter. To the best of our knowledge, our paper is the first to achieve the $O(1)\sqrt{L\|x_0 - x^*\|/\varepsilon}$ complexity for convex problems with constrained feasible sets, the $O(1)\sqrt{Ll}(f(x_0) - f(x^*))/\varepsilon^2$ complexity for nonconvex problems, and optimal complexities for convex, strongly convex, and nonconvex problems through parameter-free algorithms.