Solving semiparametric models can be computationally challenging because the dimension of the parameter space may grow with the sample size. The classical Newton's method becomes slow and unstable due to the intensive computation of the large Hessian matrix and its inverse. Iterative methods that separately update the finite-dimensional and infinite-dimensional components have been developed to speed up each iteration, but they often require more steps to converge and can even sacrifice estimation precision due to sub-optimal update directions. We propose a computationally efficient implicit profiling algorithm that simultaneously achieves the fast iterations of iterative methods and the optimal update direction of Newton's method by profiling out the infinite-dimensional component as a function of the finite-dimensional component. We devise a first-order approximation for the case where the profiling function has no explicit analytical form. We show that our implicit profiling method solves any local quadratic programming problem in exactly two steps. In two numerical experiments, under semiparametric transformation models and GARCH-M models, we demonstrate the computational efficiency and statistical precision of the implicit profiling method.
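To make the idea concrete, here is a minimal NumPy sketch of one implicit-profiling step, assuming user-supplied callables `grad` and `hess` that return the partial gradients and Hessian blocks of the objective; the names and interfaces are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def implicit_profile_step(theta, g, grad, hess, n_inner=5):
    """One implicit-profiling step for minimizing f(theta, g).

    grad(theta, g) -> (f_t, f_g): partial gradients w.r.t. theta and g
    hess(theta, g) -> dict with blocks "f_tt", "f_tg", "f_gg"
    The profiling function g(theta) has no closed form, so we
    (i) refresh g by a few inner Newton steps at fixed theta, and
    (ii) use the implicit-function theorem, dg/dtheta = -f_gg^{-1} f_gt,
         to form the profiled Hessian for the theta-update.
    """
    # (i) inner update: drive f_g(theta, g) toward 0 at the current theta
    for _ in range(n_inner):
        _, f_g = grad(theta, g)
        H = hess(theta, g)
        g = g - np.linalg.solve(H["f_gg"], f_g)
    # (ii) outer update: Newton step on the profiled objective
    f_t, _ = grad(theta, g)
    H = hess(theta, g)
    dg_dt = -np.linalg.solve(H["f_gg"], H["f_tg"].T)  # implicit derivative
    H_prof = H["f_tt"] + H["f_tg"] @ dg_dt            # profiled Hessian
    theta = theta - np.linalg.solve(H_prof, f_t)
    return theta, g
```

The inner Newton refresh stands in for the first-order approximation of the profiling function; the outer step then moves theta along the Newton direction of the profiled objective.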
We derive a posteriori error estimates for a fully discrete, time-implicit finite element approximation of the stochastic total variation flow (STVF) with additive space-time noise. The estimates are first derived for an implementable fully discrete approximation of a regularized stochastic total variation flow. We then show that the derived a posteriori estimates remain valid for the unregularized flow up to a perturbation term that can be controlled by the regularization parameter. Based on the derived a posteriori estimates, we propose a pathwise algorithm for adaptive space-time refinement and perform numerical simulations for the regularized STVF to demonstrate the behavior of the proposed algorithm.
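The pathwise adaptive loop can be sketched generically: solve on the current mesh, evaluate local a posteriori indicators, and refine a minimal marked set (Doerfler marking). The `solve`, `estimate`, and `mesh.refine` interfaces below are hypothetical placeholders, not the paper's estimator.

```python
import numpy as np

def adaptive_refine(mesh, solve, estimate, tol=1e-3, theta=0.5, max_iter=20):
    """Generic pathwise adaptive refinement loop (a sketch only).

    solve(mesh)       -> discrete solution u on the current mesh
    estimate(mesh, u) -> array of elementwise a posteriori indicators eta_T
    Doerfler marking: refine the smallest set of elements carrying a
    fraction `theta` of the total estimated error.
    """
    for _ in range(max_iter):
        u = solve(mesh)
        eta = estimate(mesh, u)                  # local indicators eta_T >= 0
        total_sq = np.sum(eta**2)
        if np.sqrt(total_sq) <= tol:
            break
        order = np.argsort(eta)[::-1]            # largest indicators first
        cumulative = np.cumsum(eta[order]**2)
        k = np.searchsorted(cumulative, theta**2 * total_sq) + 1
        mesh = mesh.refine(order[:k])            # hypothetical mesh API
        # time-step refinement would be handled analogously
    return mesh, u
```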
Nonlinear metrics, such as the F1-score, Matthews correlation coefficient, and Fowlkes-Mallows index, are often used to evaluate the performance of machine learning models, in particular when facing imbalanced datasets that contain more samples of one class than the other. Recent optimal decision tree algorithms have shown remarkable progress in producing trees that are optimal with respect to linear criteria, such as accuracy, but unfortunately nonlinear metrics remain a challenge. To address this gap, we propose a novel algorithm based on bi-objective optimisation, which treats misclassifications of each binary class as a separate objective. We show that, for a large class of metrics, the optimal tree lies on the Pareto frontier. Consequently, we obtain the optimal tree by using our method to generate the set of all nondominated trees. To the best of our knowledge, this is the first method to compute provably optimal decision trees for nonlinear metrics. Our approach leads to a trade-off when compared to optimising linear metrics: the resulting trees may be more desirable according to the given nonlinear metric at the expense of higher runtimes. Nevertheless, the experiments illustrate that runtimes are reasonable for the majority of the tested datasets.
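For intuition, a metric such as the F1-score depends on a binary classifier only through its false-negative and false-positive counts, so once the nondominated (fn, fp) pairs are enumerated, selecting the optimal tree is a simple scan over the frontier. A minimal sketch (variable names are ours, not the paper's):

```python
def f1_from_errors(fn, fp, n_pos):
    """F1-score written in terms of false negatives/positives only."""
    tp = n_pos - fn
    return 2 * tp / (2 * tp + fp + fn) if tp > 0 else 0.0

def best_tree_on_frontier(frontier, n_pos, score=f1_from_errors):
    """Pick the tree maximizing a nonlinear metric over the Pareto set.

    frontier: list of (fn, fp, tree) triples, one per nondominated tree.
    Because such metrics are monotone in (fn, fp), the optimum is
    guaranteed to be one of the nondominated points.
    """
    return max(frontier, key=lambda t: score(t[0], t[1], n_pos))
```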
Ensuring group fairness among groups of individuals in our society is desirable and crucial for many application domains. A social planner typically achieves group-fair outcomes by solving an optimization problem under a given objective for a particular domain. When the input is provided by strategic agents, the planner faces the difficult task of achieving fair outcomes while ensuring agent truthfulness without using incentive payments. To address this challenge, we consider the approximate mechanism design without money paradigm with group-fair objectives. We first consider the group-fair facility location problems where agents are divided into groups. The agents are located on a real line, modeling agents' private ideal preferences/points for the facility's location. Our aim is to locate the facility so as to fairly (and approximately) minimize the costs of the groups of agents while eliciting the agents' private locations truthfully. We consider various group-fair objectives and show that many objectives have an unbounded approximation ratio. We then consider the objectives of minimizing the maximum total group cost and the average group cost. For the first objective, we show that the approximation ratio of the median mechanism depends on the number of groups and provide a new group-based mechanism with an approximation ratio of 3. For the second objective, the median mechanism obtains a ratio of 3, and we propose a randomized mechanism that obtains a better approximation ratio. We also provide lower bounds for both objectives. We then study the notion of intergroup and intragroup fairness that measures fairness between groups and within each group. We consider various objectives and provide mechanisms with tight approximation ratios.
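As a point of reference, here is a minimal sketch of the baseline median mechanism and the maximum-total-group-cost objective discussed above (the paper's group-based 3-approximation mechanism is not reproduced here):

```python
import numpy as np

def median_mechanism(groups):
    """Place the facility at the median of all reported locations; on a
    line this is strategyproof (no agent gains by misreporting)."""
    return float(np.median(np.concatenate(groups)))

def max_total_group_cost(facility, groups):
    """First objective from the text: the maximum over groups of the
    total distance from a group's members to the facility."""
    return max(float(np.sum(np.abs(np.asarray(g) - facility))) for g in groups)

# Example: two groups of agents on the line.
groups = [np.array([0.0, 0.1, 0.2]), np.array([0.9])]
y = median_mechanism(groups)
print(y, max_total_group_cost(y, groups))
```

The example illustrates why the median alone can be unfair to small groups: it ignores group structure entirely, which is what the group-based mechanism in the paper is designed to fix.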
We study the problem of estimating a rank-$1$ signal in the presence of rotationally invariant noise, a class of perturbations more general than Gaussian noise. Principal Component Analysis (PCA) provides a natural estimator, and sharp results on its performance have been obtained in the high-dimensional regime. Recently, an Approximate Message Passing (AMP) algorithm has been proposed as an alternative estimator with the potential to improve the accuracy of PCA. However, the existing analysis of AMP requires an initialization that is both correlated with the signal and independent of the noise, which is often unrealistic in practice. In this work, we combine the two methods and propose to initialize AMP with PCA. Our main result is a rigorous asymptotic characterization of the performance of this estimator. Both the AMP algorithm and its analysis differ from those previously derived in the Gaussian setting: at every iteration, our AMP algorithm requires a specific term to account for the PCA initialization, while in the Gaussian case, the PCA initialization affects only the first iteration of AMP. The proof is based on a two-phase artificial AMP that first approximates the PCA estimator and then mimics the true AMP. Our numerical simulations show an excellent agreement between AMP results and theoretical predictions, and suggest an interesting open direction on achieving Bayes-optimal performance.
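For orientation, the sketch below shows PCA initialization via power iteration followed by the textbook Gaussian-noise rank-1 AMP recursion. The paper's rotationally invariant AMP replaces this recursion with one carrying an extra PCA-correction term at every iteration, which is omitted in this simplified sketch.

```python
import numpy as np

def pca_init(A, n_iter=200, rng=None):
    """Leading eigenvector of A by power iteration (the PCA estimator)."""
    rng = np.random.default_rng(rng)
    v = rng.standard_normal(A.shape[0])
    for _ in range(n_iter):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

def amp_rank1(A, v0, denoiser, denoiser_prime, n_iter=10):
    """Textbook rank-1 AMP for the *Gaussian* noise model, started
    from the PCA estimate v0 (unit norm). NOT the paper's rotationally
    invariant recursion, which adds a PCA term at each step."""
    n = A.shape[0]
    x = v0 * np.sqrt(n)            # scale the spectral initialization
    f_prev = np.zeros(n)
    for _ in range(n_iter):
        f = denoiser(x)
        onsager = np.mean(denoiser_prime(x))   # Onsager coefficient
        x = A @ f - onsager * f_prev           # memory correction
        f_prev = f
    return x
```

For a Rademacher signal prior one may take `denoiser = np.tanh` and `denoiser_prime = lambda x: 1 - np.tanh(x)**2`.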
In this work we consider a class of non-linear eigenvalue problems that admit a spectrum similar to that of a Hamiltonian matrix, in the sense that the spectrum is symmetric with respect to both the real and imaginary axes. More precisely, we present a method to iteratively approximate the eigenvalues of such non-linear eigenvalue problems closest to a given purely real or imaginary shift, while preserving the symmetries of the spectrum. To this end, the presented method exploits the equivalence between the considered non-linear eigenvalue problem and the eigenvalue problem associated with a linear but infinite-dimensional operator. To compute the eigenvalues closest to the given shift, we apply a specifically chosen shift-invert transformation to this linear operator and compute the eigenvalues of largest modulus of the shifted and inverted operator using an (infinite) Arnoldi procedure. The advantage of the chosen shift-invert transformation is that the spectrum of the transformed operator has a "real skew-Hamiltonian"-like structure. Furthermore, it is proven that the Krylov space constructed by applying this operator satisfies an orthogonality property in terms of a specifically chosen bilinear form. By taking this property into account in the orthogonalization process, it is ensured that, even in the presence of rounding errors, the obtained approximation for, e.g., a simple, purely imaginary eigenvalue is itself simple and purely imaginary. The presented work can thus be seen as an extension of [V. Mehrmann and D. Watkins, "Structure-Preserving Methods for Computing Eigenpairs of Large Sparse Skew-Hamiltonian/Hamiltonian Pencils", SIAM J. Sci. Comput. (22.6), 2001] to the considered class of non-linear eigenvalue problems. Although the presented method is initially defined on function spaces, it can be implemented using finite-dimensional linear algebra operations.
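In the standard finite-dimensional setting, the shift-invert idea reads as follows; this plain SciPy sketch ignores the structure-preserving bilinear form that is the paper's main contribution.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu, LinearOperator, eigs

def shift_invert_eigs(A, shift, k=6):
    """Plain shift-invert Arnoldi: eigenvalues of A nearest `shift`.
    Eigenvalues mu of (A - shift I)^{-1} of largest modulus map back
    to the eigenvalues lambda = shift + 1/mu of A closest to the shift."""
    M = csc_matrix(A - shift * np.eye(A.shape[0]))
    lu = splu(M)                                        # factor once
    op = LinearOperator(A.shape, matvec=lu.solve, dtype=M.dtype)
    mu, vecs = eigs(op, k=k)                            # largest |mu|
    return shift + 1.0 / mu, vecs
```

The structure-preserving method replaces the standard Arnoldi orthogonalization with one based on the special bilinear form, so that purely imaginary eigenvalues stay exactly on the imaginary axis despite rounding errors.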
We consider a mesh-based approach for training a neural network to produce field predictions of solutions to parametric partial differential equations (PDEs). This contrasts with current approaches for "neural PDE solvers" that employ collocation-based methods to make point-wise predictions of solutions to PDEs. The mesh-based approach has the advantage of naturally enforcing different boundary conditions, as well as the ease of invoking well-developed PDE theory, including the analysis of numerical stability and convergence, to obtain capacity bounds for our proposed neural networks on discretized domains. We explore our mesh-based strategy, called NeuFENet, using a weighted Galerkin loss function based on the Finite Element Method (FEM) on a parametric elliptic PDE. The weighted Galerkin loss (FEM loss) is similar to an energy functional; it produces improved solutions, satisfies a priori mesh convergence, and can model Dirichlet and Neumann boundary conditions. We prove theoretically, and illustrate with experiments, convergence results analogous to the mesh convergence analysis deployed in finite element solutions to PDEs. These results suggest that a mesh-based neural network is a promising approach for solving parametric PDEs with theoretical bounds.
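As an illustration of such a loss, for a linear elliptic problem with stiffness matrix K and load vector f from a standard FEM assembly, the (unweighted) energy functional below is minimized exactly by the Galerkin solution Ku = f. This is a simplified sketch of the general idea, not the weighted NeuFENet loss itself.

```python
import numpy as np

def galerkin_energy_loss(u_pred, K, f):
    """Energy-functional loss for one sample:
        J(u) = 1/2 u^T K u - f^T u,
    whose minimizer over the FEM space is the Galerkin solution.
    u_pred : predicted nodal values on the free DOFs, so Dirichlet
             conditions are enforced exactly by construction
    K, f   : stiffness matrix and load vector from FEM assembly
    """
    return 0.5 * u_pred @ (K @ u_pred) - f @ u_pred
```

Training a network whose output is the nodal-value vector against this loss inherits the stability and convergence machinery of the underlying finite element discretization.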
Functional data are frequently accompanied by parametric templates that describe the typical shapes of the functions. Although the templates incorporate critical domain knowledge, parametric functional data models can incur significant bias, which undermines the usefulness and interpretability of these models. To correct for model misspecification, we augment the parametric templates with an infinite-dimensional nonparametric functional basis. Crucially, the nonparametric factors are regularized with an ordered spike-and-slab prior, which implicitly provides consistent rank selection and satisfies several appealing theoretical properties. This prior is accompanied by a parameter-expansion scheme customized to boost MCMC efficiency, and is broadly applicable to Bayesian factor models. The nonparametric basis functions are learned from the data, yet constrained to be orthogonal to the parametric template in order to preserve the distinctness of the parametric and nonparametric terms. The versatility of the proposed approach is illustrated through applications to synthetic data, human motor control data, and dynamic yield curve data. Relative to parametric alternatives, the proposed semiparametric functional factor model eliminates bias, reduces excessive posterior and predictive uncertainty, and provides reliable inference on the effective number of nonparametric terms, all with minimal additional computational costs.
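The orthogonality constraint has a simple linear-algebra form once the template and basis functions are evaluated on a common grid; a minimal sketch in our notation, with the templates as columns of T and the nonparametric basis as columns of B:

```python
import numpy as np

def orthogonalize_to_template(B, T):
    """Project the nonparametric basis (columns of B, evaluated on a
    common grid) onto the orthogonal complement of the parametric
    template functions (columns of T):
        B_perp = (I - T (T^T T)^{-1} T^T) B
    so parametric and nonparametric terms stay distinct."""
    P = T @ np.linalg.solve(T.T @ T, T.T)   # projection onto span(T)
    return B - P @ B
```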
We consider a sparse high-dimensional varying coefficients model with random effects, a flexible linear model that allows covariates and coefficients to depend functionally on time. For each individual, we observe discretely sampled responses and covariates as functions of time, as well as time-invariant covariates. Under sampling times that are either fixed and common, or random and independent amongst individuals, we propose a projection procedure for the empirical estimation of all varying coefficients. We extend this estimator to construct confidence bands for a fixed number of varying coefficients.
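A generic series/projection estimator of this type expands each coefficient function in a fixed basis and solves a single least-squares problem. The sketch below, in our notation and ignoring sparsity, random effects, and the confidence bands, illustrates the idea.

```python
import numpy as np

def varying_coef_lsq(times, X, y, basis):
    """Series estimator for beta_j(t) = sum_k b_{jk} phi_k(t).

    times : (n,) observation times
    X     : (n, p) covariates, y : (n,) responses
    basis : callable t -> (K,) basis evaluations (e.g. B-splines)
    Returns the (p, K) coefficient matrix; beta_j(t) = B[j] @ basis(t).
    """
    Phi = np.stack([basis(t) for t in times])                # (n, K)
    # design matrix for vec(B): Z[i, (j,k)] = X[i, j] * Phi[i, k]
    Z = np.einsum("np,nk->npk", X, Phi).reshape(len(y), -1)
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return b.reshape(X.shape[1], Phi.shape[1])
```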
Distance metric learning based on the triplet loss has been applied with success in a wide range of applications such as face recognition, image retrieval, speaker change detection and, recently, recommendation with the CML model. However, as we show in this article, CML requires large batches to work reasonably well because of its overly simplistic uniform negative sampling strategy for selecting triplets. Due to memory limitations, this makes it difficult to scale in high-dimensional scenarios. To alleviate this problem, we propose a two-stage negative sampling strategy that finds triplets that are highly informative for learning. Our strategy allows CML to work effectively in terms of accuracy and popularity bias, even when the batch size is an order of magnitude smaller than what would be needed with the default uniform sampling. We demonstrate the suitability of the proposed strategy for recommendation and observe consistent improvements across various datasets.
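A minimal sketch of the two-stage idea, in our notation: draw a small uniform candidate pool, then keep the candidate the current model scores highest, i.e. the hardest negative. The article's actual informativeness criterion may differ from this simple hardest-in-pool rule.

```python
import numpy as np

def two_stage_negative(user_vec, item_embs, pos_items, pool_size=64, rng=None):
    """Two-stage negative sampling for triplet losses.

    Stage 1: draw a small uniform candidate pool of items.
    Stage 2: return the candidate the current model scores highest,
    which yields a far more informative triplet than one uniform draw.
    """
    rng = np.random.default_rng(rng)
    pool = rng.choice(len(item_embs), size=pool_size, replace=False)
    pool = np.array([c for c in pool if c not in pos_items], dtype=int)
    scores = item_embs[pool] @ user_vec     # higher score = harder negative
    return pool[np.argmax(scores)]
```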
We propose a new estimation method for topic models that is not a variation on existing simplex-finding algorithms and that estimates the number of topics K from the observed data. We derive new finite-sample minimax lower bounds for the estimation of the topic matrix A, as well as new upper bounds for our proposed estimator. We describe the scenarios in which our estimator is minimax adaptive. Our finite-sample analysis is valid for any number of documents (n), individual document length (N_i), dictionary size (p) and number of topics (K); both p and K are allowed to increase with n, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than the current ones, even though it starts at a computational and theoretical disadvantage: it must estimate the correct number of topics K, while we provide the competing methods with the correct value in our simulations.