We propose a data segmentation methodology for the high-dimensional linear regression problem where regression parameters are allowed to undergo multiple changes. The proposed methodology, MOSEG, proceeds in two stages: first, the data are scanned for multiple change points using a moving window-based procedure, which is followed by a location refinement stage. MOSEG enjoys computational efficiency thanks to the adoption of a coarse grid in the first stage, and achieves theoretical consistency in estimating both the total number and the locations of the change points, under general conditions permitting serial dependence and non-Gaussianity. We also propose MOSEG.MS, a multiscale extension of MOSEG which, while comparable to MOSEG in terms of computational complexity, achieves theoretical consistency for a broader parameter space where large parameter shifts over short intervals and small changes over long stretches of stationarity are simultaneously allowed. We demonstrate good performance of the proposed methods in comparative simulation studies and in an application to predicting the equity premium.
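The moving-window idea behind the first stage can be illustrated in a much-simplified form: scanning for changes in the mean of a univariate signal rather than in high-dimensional regression coefficients. The window length `G`, the grid, and the synthetic data below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic signal: mean 0 before t = 300, mean 2 after (noise sd 1).
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(2.0, 1.0, 300)])
n, G = len(x), 60  # G: moving-window bandwidth

# MOSUM-type statistic: scaled difference between adjacent window means.
grid = np.arange(G, n - G)          # a coarse grid could subsample this range
stat = np.array([abs(x[k - G:k].mean() - x[k:k + G].mean()) for k in grid])
stat *= np.sqrt(G / 2.0)

k_hat = grid[stat.argmax()]          # crude single-change estimate, near 300
```

Peaks of the statistic above a threshold flag candidate change points; a refinement stage would then re-estimate each location on the full-resolution data.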
Conditional independence plays a foundational role in database theory, probability theory, information theory, and graphical models. In databases, conditional independence appears in database normalization and is known as the (embedded) multivalued dependency. Many properties of conditional independence are shared across various domains, and to some extent these commonalities can be studied through a measure-theoretic approach. The present paper proposes an alternative approach via semiring relations, defined by extending database relations with tuple annotations from some commutative semiring. Integrating various interpretations of conditional independence in this context, we investigate how the choice of the underlying semiring impacts the corresponding axiomatic and decomposition properties. We specifically identify positivity and multiplicative cancellativity as the key semiring properties that enable extending results from the relational context to the broader semiring framework. Additionally, we explore the relationships between different conditional independence notions through model theory, and consider how methods to test logical consequence and validity generalize from database theory and information theory to semiring relations.
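As a concrete illustration (not taken from the paper), a semiring relation can be represented as a map from tuples to annotations; under the counting semiring $(\mathbb{N}, +, \times, 0, 1)$ this recovers bag semantics, with natural join multiplying annotations and projection summing them:

```python
# Semiring relation over the counting semiring (N, +, *, 0, 1):
# a dict mapping tuples (in a fixed attribute order) to multiplicities.

def join(r, s, shared):
    """Natural join: annotations of matching tuples are multiplied."""
    drop = {j for _, j in shared}          # joined-on positions of s
    out = {}
    for t1, a1 in r.items():
        for t2, a2 in s.items():
            if all(t1[i] == t2[j] for i, j in shared):
                merged = t1 + tuple(v for j, v in enumerate(t2) if j not in drop)
                out[merged] = out.get(merged, 0) + a1 * a2
    return out

def project(r, cols):
    """Projection: annotations of collapsing tuples are summed."""
    out = {}
    for t, a in r.items():
        key = tuple(t[i] for i in cols)
        out[key] = out.get(key, 0) + a
    return out

# R(A, B) and S(B, C) with multiplicities; join on B (position 1 of R, 0 of S).
R = {("a", "b"): 2, ("a", "c"): 1}
S = {("b", "d"): 3, ("c", "d"): 1}
J = join(R, S, shared=[(1, 0)])
P = project(J, cols=[0, 2])
```

Replacing the annotation arithmetic with another commutative semiring (Boolean, tropical, polynomial provenance) changes the semantics of the same two operators, which is exactly the degree of freedom whose effect on conditional independence the paper investigates.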
This work puts forth low-complexity Riemannian subspace descent algorithms for the minimization of functions over the symmetric positive definite (SPD) manifold. Unlike existing Riemannian gradient descent variants, the proposed approach utilizes carefully chosen subspaces that allow the update to be written as a product of the Cholesky factor of the iterate and a sparse matrix. The resulting updates avoid costly matrix operations such as matrix exponentiation and dense matrix multiplication, which are generally required in almost all other Riemannian optimization algorithms on the SPD manifold. We further identify a broad class of functions, arising in diverse applications such as kernel matrix learning, covariance estimation of Gaussian distributions, maximum likelihood parameter estimation of elliptically contoured distributions, and parameter estimation in Gaussian mixture models, over which the Riemannian gradients can be calculated efficiently. The proposed uni-directional and multi-directional Riemannian subspace descent variants incur per-iteration complexities of $O(n)$ and $O(n^2)$ respectively, as compared to the $O(n^3)$ or higher complexity incurred by all existing Riemannian gradient descent variants. The superior runtime and low per-iteration complexity of the proposed algorithms are also demonstrated via numerical tests on large-scale covariance estimation and matrix square root problems. A MATLAB implementation is publicly available at https://github.com/yogeshd-iitk/subspace_descent_over_SPD_manifold
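A minimal sketch of why the Cholesky parametrization is convenient for SPD optimization, on an assumed toy objective $f(S) = \mathrm{tr}(AS) - \log\det S$ whose minimizer is $A^{-1}$. This is plain Euclidean gradient descent on the Cholesky factor, not the paper's Riemannian subspace scheme; it only shows how factoring $S = LL^\top$ keeps every iterate on the manifold.

```python
import numpy as np

# Toy SPD objective: f(S) = tr(A S) - log det S, minimized at S = A^{-1}.
# Parametrize S = L L^T with L lower triangular so iterates stay SPD.
# (Illustrative Euclidean descent on L, NOT the paper's subspace updates.)
A = np.array([[2.0, 0.5], [0.5, 1.0]])
L = np.eye(2)
step = 0.02
for _ in range(20000):
    # d/dL tr(A L L^T) = 2 A L (restricted to the lower triangle);
    # d/dL log det(L L^T) = 2 / L_ii on the diagonal, 0 elsewhere.
    grad = np.tril(2.0 * A @ L)
    grad[np.diag_indices(2)] -= 2.0 / np.diag(L)
    L -= step * grad
S = L @ L.T   # converges to inv(A)
```

The paper's contribution can be read against this baseline: its subspace choices make each update a Cholesky-factor-times-sparse-matrix product, bringing the per-iteration cost down from the dense-algebra cost seen here.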
We consider logistic regression with two sets of discrete or categorical covariates that are missing at random (MAR), either separately or simultaneously. We examine the asymptotic properties of two multiple imputation (MI) estimators, given in the study of Lee et al. (2023), for the parameters of this model. The proposed estimated asymptotic variances of the two MI estimators address a limitation observed with Rubin's type estimated variances, which tend to underestimate the variances of the two MI estimators (Rubin, 1987). Simulation results demonstrate that our two proposed MI methods outperform the complete-case, semiparametric inverse probability weighting, random forest MI using chained equations, and stochastic approximation of expectation-maximization methods. To illustrate the methodology's practical application, we provide a real data example from a survey conducted in the Feng Chia night market in Taichung City, Taiwan.
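For background (standard MI machinery, not the paper's proposed variance estimator), Rubin's rules pool $m$ completed-data analyses by combining the within-imputation variance $W$ and the between-imputation variance $B$ into $T = W + (1 + 1/m)B$; it is this pooled variance that the abstract says can be too small in the setting studied. The numbers below are illustrative only.

```python
import numpy as np

# Hypothetical point estimates and squared standard errors from
# m = 5 imputed data sets (illustrative numbers, not from the paper).
est = np.array([0.52, 0.48, 0.55, 0.50, 0.47])   # theta_hat per imputation
var = np.array([0.010, 0.011, 0.009, 0.010, 0.012])

m = len(est)
theta_bar = est.mean()                 # pooled point estimate
W = var.mean()                         # within-imputation variance
B = est.var(ddof=1)                    # between-imputation variance
T = W + (1.0 + 1.0 / m) * B            # Rubin's total variance
```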
Linear regression models have been extensively considered in the literature. However, in some practical applications they may not be appropriate over the entire range of the covariate. In this paper, a more flexible model is introduced by considering a regression model $Y=r(X)+\varepsilon$ where the regression function $r(\cdot)$ is assumed to be linear for large values in the domain of the predictor variable $X$. More precisely, we assume that $r(x)=\alpha_0+\beta_0 x$ for $x> u_0$, where the value $u_0$ is identified as the smallest value satisfying such a property. A penalized procedure is introduced to estimate the threshold $u_0$. The considered proposal focuses on a semiparametric approach since no parametric model is assumed for the regression function for values smaller than $u_0$. Consistency properties of both the threshold estimator and the estimators of $(\alpha_0,\beta_0)$ are derived, under mild assumptions. Through a numerical study, the small sample properties of the proposed procedure and the importance of introducing a penalization are investigated. The analysis of a real data set allows us to demonstrate the usefulness of the penalized estimators.
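A toy version of threshold selection can be sketched as follows: over a grid of candidate thresholds $u$, fit a line to the observations with $x > u$ and pick the $u$ minimizing a penalized fit criterion, so that the smallest threshold beyond which the fit is linear is favored. The penalty $\lambda u$ and the synthetic data below are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 800
x = rng.uniform(0.0, 3.0, n)
# Nonlinear below u0 = 1.5, linear (1 + 2x) above it, plus noise.
y = np.where(x > 1.5, 1.0 + 2.0 * x, 4.0 * np.sin(3.0 * x))
y += rng.normal(0.0, 0.1, n)

lam = 0.05                              # illustrative penalty weight
grid = np.arange(0.5, 3.0, 0.05)

def crit(u):
    m = x > u
    b, a = np.polyfit(x[m], y[m], 1)    # slope, intercept of line on x > u
    resid = y[m] - (a + b * x[m])
    # Unpenalized, any u above the true threshold fits equally well;
    # the increasing penalty selects the smallest such u.
    return np.mean(resid**2) + lam * u

u_hat = min(grid, key=crit)             # close to the true threshold 1.5
```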
We present ParrotTTS, a modularized text-to-speech synthesis model leveraging disentangled self-supervised speech representations. It can train a multi-speaker variant effectively using transcripts from a single speaker. ParrotTTS adapts to a new language in a low-resource setup and generalizes to languages not seen while training the self-supervised backbone. Moreover, without training on bilingual or parallel examples, ParrotTTS can transfer voices across languages while preserving speaker-specific characteristics, e.g., synthesizing fluent Hindi speech using a French speaker's voice and accent. We present extensive results in monolingual and multi-lingual scenarios. ParrotTTS outperforms state-of-the-art multi-lingual TTS models while using only a fraction of the paired data they require.
In current applied research, the most common route to an analysis of composition is through log-ratios -- that is, contrasts among log-transformed measurements. Here we argue instead for a more direct approach, using a statistical model for the arithmetic mean on the original scale of measurement. Central to the approach is a general variance-covariance function, derived by assuming multiplicative measurement error. Quasi-likelihood analysis of logit models for composition is then a general alternative to the use of multivariate linear models for log-ratio transformed measurements, and it has important advantages. These include robustness to secondary aspects of model specification, stability when there are zero-valued or near-zero measurements in the data, and more direct interpretation. The usual efficiency property of quasi-likelihood estimation applies even when the error covariance matrix is unspecified. We also indicate how the derived variance-covariance function can be used, instead of the variance-covariance matrix of log-ratios, with more general multivariate methods for the analysis of composition. A specific feature is that the notion of `null correlation' -- for compositional measurements on their original scale -- emerges naturally.
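The near-zero stability issue can be seen directly: a centred log-ratio (clr) transform changes dramatically when one trace component moves from $10^{-4}$ to $10^{-6}$, even though the composition is essentially unchanged on the original scale. The compositions below are illustrative numbers.

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform of a composition."""
    logx = np.log(x)
    return logx - logx.mean()

comp_a = np.array([0.6, 0.3999, 1e-4])    # composition with a trace component
comp_b = np.array([0.6, 0.399999, 1e-6])  # nearly identical composition

shift = np.abs(clr(comp_a) - clr(comp_b))
# The clr coordinates of ALL components move by more than 1.5, driven
# entirely by the trace component; on the original scale the two
# compositions differ by less than 1e-3 in every coordinate.
```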
We study hypothesis testing under communication constraints, where each sample is quantized before being revealed to a statistician. Without communication constraints, it is well known that the sample complexity of simple binary hypothesis testing is characterized by the Hellinger distance between the distributions. We show that the sample complexity of simple binary hypothesis testing under communication constraints is at most a logarithmic factor larger than in the unconstrained setting, and that this bound is tight. We develop a polynomial-time algorithm that achieves the aforementioned sample complexity. Our framework extends to robust hypothesis testing, where the distributions are corrupted in the total variation distance. Our proofs rely on a new reverse data processing inequality and a reverse Markov inequality, which may be of independent interest. For simple $M$-ary hypothesis testing, the sample complexity in the absence of communication constraints has a logarithmic dependence on $M$. We show that communication constraints can cause an exponential blow-up leading to $\Omega(M)$ sample complexity even for adaptive algorithms.
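The unconstrained benchmark can be made concrete: for simple binary testing between $p$ and $q$, the sample complexity scales as $\Theta(1/H^2(p,q))$, where $H^2$ is the squared Hellinger distance (a standard fact, computed here for two assumed Bernoulli distributions).

```python
import numpy as np

def hellinger_sq(p, q):
    """Squared Hellinger distance between two discrete distributions."""
    return 1.0 - np.sum(np.sqrt(np.asarray(p) * np.asarray(q)))

p = [0.3, 0.7]   # Bernoulli(0.3)
q = [0.5, 0.5]   # Bernoulli(0.5)
h2 = hellinger_sq(p, q)              # ~ 0.0211
n_unconstrained = 1.0 / h2           # sample-complexity scale, Theta(1/H^2)
```

Under quantization the abstract's result says this scale inflates by at most a logarithmic factor, and that the logarithmic loss is unavoidable in general.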
In this article, we study nonparametric inference for a covariate-adjusted regression function. This parameter captures the average association between a continuous exposure and an outcome after adjusting for other covariates. In particular, under certain causal conditions, this parameter corresponds to the average outcome had all units been assigned to a specific exposure level, known as the causal dose-response curve. We propose a debiased local linear estimator of the covariate-adjusted regression function, and demonstrate that our estimator converges pointwise to a mean-zero normal limit distribution. We use this result to construct asymptotically valid confidence intervals for function values and differences thereof. In addition, we use approximation results for the distribution of the supremum of an empirical process to construct asymptotically valid uniform confidence bands. Our methods do not require undersmoothing, permit the use of data-adaptive estimators of nuisance functions, and our estimator attains the optimal rate of convergence for a twice differentiable function. We illustrate the practical performance of our estimator using numerical studies and an analysis of the effect of air pollution exposure on cardiovascular mortality.
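The local linear smoother underlying the estimator can be sketched in its basic form, without the debiasing or covariate adjustment of the paper: a kernel-weighted least-squares line fitted around the evaluation point, whose intercept estimates the regression function there. The synthetic data, kernel, and bandwidth are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.uniform(0.0, 3.0, n)
y = np.sin(x) + rng.normal(0.0, 0.3, n)

def local_linear(x0, h=0.2):
    """Intercept of a kernel-weighted least-squares line fitted at x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)      # Gaussian kernel weights
    X = np.column_stack([np.ones(n), x - x0])   # design centred at x0
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]

est = local_linear(1.0)   # close to the true value sin(1) ~ 0.841
```

In the basic form above, valid inference would require undersmoothing the bandwidth; the debiasing step in the paper is what removes that requirement.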
We describe and analyze a quasi-Trefftz DG method for solving boundary value problems for the homogeneous diffusion-advection-reaction equation with piecewise-smooth coefficients. Trefftz schemes are high-order Galerkin methods whose discrete functions are elementwise exact solutions of the underlying PDE. Trefftz basis functions can be computed for many PDEs that are linear, homogeneous and with piecewise-constant coefficients. However, if the equation has varying coefficients, exact solutions are in general unavailable, so discrete Trefftz spaces cannot be constructed. Quasi-Trefftz methods have been introduced to overcome this limitation, relying on discrete spaces of functions that are elementwise "approximate solutions" of the PDE. A space-time quasi-Trefftz DG method for the acoustic wave equation with smoothly varying coefficients has recently been studied; since it has shown excellent results, we propose a related method that can be applied to second-order elliptic equations. The DG weak formulation is derived using an interior penalty parameter and the upwind numerical fluxes. We choose polynomial quasi-Trefftz basis functions, whose coefficients can be computed with a simple algorithm based on the Taylor expansion of the PDE's coefficients. The main advantage of Trefftz and quasi-Trefftz schemes over more classical ones is the higher accuracy for comparable numbers of degrees of freedom. We prove that the dimension of the quasi-Trefftz space is smaller than that of the full polynomial space of the same degree, yet it yields the same optimal convergence rates. We prove that the quasi-Trefftz DG method is well-posed, consistent and stable, and that it converges with high order. We present some numerical experiments in two dimensions that show excellent properties in terms of approximation and convergence rate.
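The Trefftz idea — discrete functions that satisfy the PDE exactly element by element — can be illustrated in the simplest constant-coefficient case (an assumed example, not the paper's variable-coefficient setting): the harmonic polynomials $\mathrm{Re}((x+iy)^k)$ solve the Laplace equation, verified here by finite differences.

```python
import numpy as np

def harmonic(k, x, y):
    """Trefftz basis candidate for Laplace's equation: Re((x + i y)^k)."""
    return ((x + 1j * y) ** k).real

# Check that the five-point discrete Laplacian vanishes up to
# finite-difference truncation error at random points.
h = 1e-3
rng = np.random.default_rng(3)
pts = rng.uniform(-1.0, 1.0, size=(50, 2))
max_lap = 0.0
for k in range(1, 5):
    for (x, y) in pts:
        lap = (harmonic(k, x + h, y) + harmonic(k, x - h, y)
               + harmonic(k, x, y + h) + harmonic(k, x, y - h)
               - 4.0 * harmonic(k, x, y)) / h**2
        max_lap = max(max_lap, abs(lap))
```

Quasi-Trefftz bases relax this exactness: when the coefficients vary, the basis polynomials are built (via Taylor expansions of the coefficients) to annihilate the PDE operator only up to a prescribed order at an element-dependent point.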
We introduce an extension of first-order logic that comes equipped with additional predicates for reasoning about an abstract state. Sequents in the logic comprise a main formula together with pre- and postconditions in the style of Hoare logic, and the axioms and rules of the logic ensure that the assertions about the state compose in the correct way. The main result of the paper is a realizability interpretation of our logic that extracts programs into a mixed functional/imperative language. All programs expressible in this language act on the state in a sequential manner, and we make this intuition precise by interpreting them in a semantic metatheory using the state monad. Our basic framework is very general, and our intention is that it can be instantiated and extended in a variety of different ways. We outline in detail one such extension: a monadic version of Heyting arithmetic with a well-founded while rule, and conclude by sketching several other directions for future work.
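The state-monad interpretation can be made concrete with a small illustrative encoding (not the paper's formal metatheory): a stateful computation is a function from states to (value, new state) pairs, and `unit` and `bind` compose such computations sequentially, threading the state through.

```python
# State monad: a computation is a function state -> (value, new_state).

def unit(v):
    """Return v without touching the state."""
    return lambda s: (v, s)

def bind(m, f):
    """Run m, then feed its value to f, threading the state through."""
    def run(s):
        v, s2 = m(s)
        return f(v)(s2)
    return run

# Primitive state operations.
get = lambda s: (s, s)           # read the current state
def put(s_new):
    return lambda s: (None, s_new)  # overwrite the state

# Example: read a counter, increment it, return the old value.
tick = bind(get, lambda n: bind(put(n + 1), lambda _: unit(n)))

# Run two ticks in sequence from state 0.
prog = bind(tick, lambda a: bind(tick, lambda b: unit((a, b))))
value, final_state = prog(0)     # value == (0, 1), final_state == 2
```

The sequential-access intuition from the abstract is visible here: every composite program built from `unit` and `bind` consumes the state exactly once and passes the updated state to the next computation.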