Approximate joint diagonalization of a set of matrices provides a powerful framework for numerous statistical signal processing applications. For non-unitary joint diagonalization (NUJD) based on the least-squares (LS) criterion, outliers, also referred to as anomalous or discordant observations, degrade performance because squaring the residuals magnifies their effect. To solve this problem, we propose a novel cost function that incorporates a soft decision-directed scheme into the least-squares criterion and develop an efficient algorithm. The influence of outliers is mitigated by applying decision-directed weights associated with the residual error at each iterative step. Specifically, the mixing matrix is estimated by a modified stationary-point method, in which the update direction is determined from a linear approximation to the gradient function. Simulation results demonstrate that the proposed algorithm outperforms conventional non-unitary diagonalization algorithms in terms of both convergence performance and robustness to outliers.
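A minimal NumPy sketch of the weighted least-squares idea follows; the exponential weight function, the heuristic diagonal fit, and the plain gradient step are illustrative assumptions, not the paper's stationary-point update.

```python
import numpy as np

def nujd_weighted(C, n_iter=200, lr=1e-3, beta=1.0, seed=0):
    """Gradient-descent sketch of weighted non-unitary joint diagonalization:
    minimize sum_k w_k * ||C[k] - A @ D[k] @ A.T||_F^2, where each weight w_k
    is a soft, decision-directed function of the current residual, so matrices
    with large residuals (likely outliers) are downweighted."""
    rng = np.random.default_rng(seed)
    K, d, _ = C.shape
    A = np.eye(d) + 0.01 * rng.standard_normal((d, d))
    for _ in range(n_iter):
        Ainv = np.linalg.pinv(A)
        # Heuristic fit of each diagonal factor D[k] given the current A.
        D = [np.diag(np.diag(Ainv @ Ck @ Ainv.T)) for Ck in C]
        R = [Ck - A @ Dk @ A.T for Ck, Dk in zip(C, D)]
        res = np.array([np.sum(Rk**2) for Rk in R])
        # Soft decision-directed weights: exponentially downweight large residuals.
        w = np.exp(-beta * res / (res.mean() + 1e-12))
        # Gradient of the weighted cost with respect to A (each R[k] is symmetric).
        grad = sum(-4.0 * wk * Rk @ A @ Dk for wk, Rk, Dk in zip(w, R, D))
        A -= lr * grad
    return A
```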
We present the first formal verification of approximation algorithms for NP-complete optimization problems: vertex cover, independent set, set cover, center selection, load balancing, and bin packing. We uncover incompletenesses in existing proofs and improve the approximation ratio in one case. All proofs are uniformly invariant-based.
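To illustrate the kind of algorithm being verified, here is a plain Python rendering of the classic maximal-matching 2-approximation for vertex cover (not the verified code itself); the comment states the invariant that mirrors the standard proof.

```python
def vertex_cover_2approx(edges):
    """Maximal-matching 2-approximation for vertex cover: repeatedly pick an
    uncovered edge and add both endpoints. Invariant: the picked edges form a
    matching, and any cover must contain at least one endpoint of each matched
    edge, so |cover| <= 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # take both endpoints of an uncovered edge
    return cover

# Example: the path 1-2-3-4 has optimal cover {2, 3}; the algorithm returns
# {1, 2, 3, 4}, within the factor-2 guarantee.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))
```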
This paper addresses the problem of full model estimation for non-parametric finite mixture models. It presents an approach for selecting both the number of components and the subset of discriminative variables (i.e., the variables whose distributions differ among the mixture components). The proposed approach discretizes each variable into B bins and penalizes the resulting log-likelihood. Letting the number of bins tend to infinity with the sample size, we prove that our estimator of the model (the number of components and the subset of variables relevant for clustering) is consistent under a suitable choice of the penalty term. The interest of our proposal is illustrated on simulated and benchmark data.
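A rough sketch of the binned-likelihood idea, assuming a mixture of products of multinomials over the B bins fitted by EM and a BIC-like stand-in penalty (the paper's penalty and estimator may differ):

```python
import numpy as np
from scipy.special import logsumexp

def em_binned_mixture(Xb, K, B, n_iter=100, seed=0):
    """Xb: (n, p) integer array of bin indices in {0, ..., B-1}.
    Fits a K-component mixture of products of multinomials by EM and
    returns a penalized log-likelihood for model selection."""
    rng = np.random.default_rng(seed)
    n, p = Xb.shape
    pi = np.full(K, 1.0 / K)
    theta = rng.dirichlet(np.ones(B), size=(K, p))  # (K, p, B) bin probabilities
    for _ in range(n_iter):
        # E-step: log responsibilities log pi_k + sum_j log theta[k, j, x_ij].
        logr = np.log(pi)[None, :] + sum(np.log(theta[:, j, Xb[:, j]]).T
                                         for j in range(p))
        loglik = logsumexp(logr, axis=1).sum()
        r = np.exp(logr - logsumexp(logr, axis=1, keepdims=True))
        # M-step: update mixing weights and per-variable bin probabilities.
        pi = r.mean(axis=0)
        for j in range(p):
            counts = np.stack([np.bincount(Xb[:, j], weights=r[:, k], minlength=B)
                               for k in range(K)])
            theta[:, j, :] = counts / counts.sum(axis=1, keepdims=True)
    # BIC-like stand-in penalty on the number of free parameters.
    pen = 0.5 * (K - 1 + K * p * (B - 1)) * np.log(n)
    return loglik - pen, pi, theta
```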
In this paper, we address the new problem of reversing the effect of an image filter, which can be linear or nonlinear. We assume that the algorithm of the filter is unknown and that the filter is available only as a black box. We formulate this inverse problem as minimizing a local patch-based cost function and use the total derivative to approximate the gradient used in gradient descent. We analyze the factors affecting convergence and output quality in the Fourier domain. We also study the application of accelerated gradient descent algorithms to three gradient-free reverse filters, including the one proposed in this paper. We present results from extensive experiments to evaluate the complexity and effectiveness of the proposed algorithm. Results demonstrate that the proposed algorithm outperforms the state of the art in two respects: (1) it matches the complexity of the fastest reverse filter while reversing a larger number of filters, and (2) it reverses the same list of filters as the most complex reverse filter at a much lower complexity.
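For context, the simple gradient-free baseline iteration for black-box reverse filtering looks as follows; this is not the paper's total-derivative scheme, and the binomial blur merely stands in for the unknown filter.

```python
import numpy as np

def reverse_filter(y, f, n_iter=50, step=1.0):
    """Baseline fixed-point style iteration for inverting a black-box filter f:
    seek x with f(x) ~= y via the update x <- x + step * (y - f(x))."""
    x = y.copy()
    for _ in range(n_iter):
        x = x + step * (y - f(x))
    return x

def binomial_blur(img):
    """Separable [1, 2, 1]/4 blur standing in for the unknown black-box filter."""
    k = np.array([0.25, 0.5, 0.25])
    pad = np.pad(img, 1, mode='edge')
    tmp = k[0] * pad[:-2, 1:-1] + k[1] * pad[1:-1, 1:-1] + k[2] * pad[2:, 1:-1]
    pad2 = np.pad(tmp, 1, mode='edge')
    return k[0] * pad2[1:-1, :-2] + k[1] * pad2[1:-1, 1:-1] + k[2] * pad2[1:-1, 2:]

# Usage: observe a filtered image, then approximately recover its pre-image.
rng = np.random.default_rng(0)
y_img = binomial_blur(rng.random((64, 64)))
x_rec = reverse_filter(y_img, binomial_blur)
```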
A solution manifold is the collection of points in a $d$-dimensional space satisfying a system of $s$ equations with $s<d$. Solution manifolds occur in several statistical problems including hypothesis testing, curved exponential families, constrained mixture models, partial identification, and nonparametric set estimation. We analyze solution manifolds both theoretically and algorithmically. In terms of theory, we derive five useful results: the smoothness theorem, the stability theorem (which implies the consistency of a plug-in estimator), the convergence of a gradient flow, the local center manifold theorem, and the convergence of the gradient descent algorithm. To numerically approximate a solution manifold, we propose a Monte Carlo gradient descent algorithm. In the case of likelihood inference, we design a manifold-constrained maximization procedure to find the maximum likelihood estimator on the manifold. We also develop a method to approximate a posterior distribution defined on a solution manifold.
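A small sketch of the Monte Carlo gradient descent idea: descend $q(x) = \|F(x)\|^2/2$ from random starting points and keep the points that land on the zero set. The circle example and the step size below are illustrative choices, not the paper's settings.

```python
import numpy as np

def project_to_manifold(F, jacF, x0, n_steps=2000, lr=0.01):
    """Descend q(x) = ||F(x)||^2 / 2; its gradient is jacF(x).T @ F(x).
    Under regularity conditions this converges to a point of {x : F(x) = 0}."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x -= lr * jacF(x).T @ F(x)
    return x

def sample_manifold(F, jacF, d, n_points=200, seed=0, tol=1e-6):
    """Monte Carlo sketch: random starts, keep the converged points."""
    rng = np.random.default_rng(seed)
    pts = [project_to_manifold(F, jacF, rng.uniform(-2, 2, d))
           for _ in range(n_points)]
    return np.array([p for p in pts if np.linalg.norm(F(p)) < tol])

# Example: the unit circle in R^2 as the zero set of F(x) = x1^2 + x2^2 - 1.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0])
jacF = lambda x: np.array([[2 * x[0], 2 * x[1]]])
circle_pts = sample_manifold(F, jacF, d=2)
```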
Physical activity (PA) is an important risk factor for many health outcomes. Wearable devices such as accelerometers are increasingly used in biomedical studies to understand the associations between PA and health outcomes. Statistical analyses involving accelerometer data are challenging due to three characteristics: (i) high dimensionality, (ii) temporal dependence, and (iii) measurement error. To address these challenges we treat accelerometer-based measures of physical activity as a single function-valued covariate prone to measurement error. Specifically, in order to determine the relationship between PA and a health outcome of interest, we propose a regression model with a functional covariate that accounts for measurement error. Using regression calibration, we develop a two-step estimation method for the model parameters and establish their consistency. A test is also proposed to assess the significance of the estimated model parameters. Simulation studies are conducted to compare the proposed methods with existing alternative approaches under varying scenarios. Finally, the developed methods are used to assess the relationship between PA intensity and BMI using data from the National Health and Nutrition Examination Survey.
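A hypothetical rendering of the two-step scheme, assuming m error-prone replicate curves per subject, pointwise shrinkage calibration, and a cosine basis for the functional regression (the paper's estimator may differ in each step):

```python
import numpy as np

def regression_calibration_flr(W, y, n_basis=10):
    """W: (n, m, T) array of m >= 2 error-prone replicate curves per subject
    on a grid of T points; y: (n,) scalar health outcome.
    Step 1: calibrate X by pointwise shrinkage of the replicate mean.
    Step 2: scalar-on-function linear regression via a truncated basis."""
    n, m, T = W.shape
    Wbar = W.mean(axis=1)                        # replicate average per subject
    mu = Wbar.mean(axis=0)
    var_U = W.var(axis=1, ddof=1).mean(axis=0)   # measurement-error variance
    var_X = np.maximum(Wbar.var(axis=0, ddof=1) - var_U / m, 1e-12)
    shrink = var_X / (var_X + var_U / m)
    Xhat = mu + shrink * (Wbar - mu)             # calibrated curves E[X | W]
    # Step 2: regress y on scores of Xhat in a cosine basis (crude quadrature).
    t = np.linspace(0, 1, T)
    Phi = np.array([np.cos(np.pi * k * t) for k in range(n_basis)])  # (n_basis, T)
    scores = Xhat @ Phi.T / T
    Z = np.column_stack([np.ones(n), scores])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    beta_t = Phi.T @ coef[1:]                    # estimated coefficient function
    return coef[0], beta_t
```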
This paper gives a new approach to maximum likelihood estimation of the joint location and scale parameters of the Cauchy distribution. We regard the pair as a single complex parameter and derive a new form of the likelihood equation in a complex variable. Based on this equation, we provide a new iterative scheme for approximating the maximum likelihood estimate. We also treat the equation algebraically and derive a polynomial containing the maximum likelihood estimate as a root. This algebraic approach yields another scheme for approximating the maximum likelihood estimate via root-finding algorithms for polynomials and, furthermore, proves the non-existence of a closed-form formula for the case that the sample size is five. We finally provide numerical examples showing that our method is effective.
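The following sketch phrases a standard EM-style fixed point for the Cauchy MLE in terms of the complex parameter theta = mu + i*sigma; it is not necessarily the iteration derived from the paper's complex likelihood equation. The weights 2*sigma^2 / |x_j - theta|^2 come from viewing the Cauchy as a t-distribution with one degree of freedom.

```python
import numpy as np

def cauchy_mle(x, n_iter=200, tol=1e-12):
    """EM-style fixed point for the Cauchy location-scale MLE, written with
    the complex parameter theta = mu + 1j * sigma."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    sigma = max(0.5 * (np.quantile(x, 0.75) - np.quantile(x, 0.25)), 1e-6)
    for _ in range(n_iter):
        theta = mu + 1j * sigma
        tau = 2 * sigma**2 / np.abs(x - theta)**2   # EM weights
        mu_new = np.sum(tau * x) / np.sum(tau)
        sigma_new = np.sqrt(np.mean(tau * (x - mu_new)**2))
        done = abs(mu_new - mu) + abs(sigma_new - sigma) < tol
        mu, sigma = mu_new, sigma_new
        if done:
            break
    return mu + 1j * sigma  # the MLE as a single complex number
```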
The Gaussian covariance graph model is a popular model for revealing underlying dependency structures among random variables. A Bayesian approach to the estimation of covariance structures uses priors that force some off-diagonal entries of the covariance matrix to zero and impose a positive-definiteness constraint. In this paper, we consider a spike-and-slab prior on the off-diagonal entries, a mixture of a point mass and a normal distribution. The point mass naturally induces sparsity in the covariance structure, so the resulting posterior supports covariance structure learning. Under this prior, we compute posterior model probabilities of covariance structures using a Laplace approximation. We show that the Laplace approximation error becomes asymptotically negligible at a rate depending on the posterior convergence rate of the covariance matrix in the Frobenius norm. With the approximated posterior model probabilities, we propose a new framework for estimating a covariance structure. Since the Laplace approximation is taken around the mode of the conditional posterior of the covariance matrix, which is not available in closed form, we propose a block coordinate descent algorithm to find the mode and show that, once the structure is chosen, the covariance matrix can be estimated with this algorithm. In a simulation study based on five numerical models, the proposed method outperforms the graphical lasso and the sample covariance matrix in terms of root mean squared error, max norm, spectral norm, specificity, and sensitivity. The advantage of the proposed method is also demonstrated in terms of accuracy when applied to linear discriminant analysis (LDA) classification on a breast cancer diagnostic dataset.
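A generic sketch of the Laplace approximation step (illustrative only; the paper applies it around the conditional posterior mode of the covariance matrix, found by block coordinate descent):

```python
import numpy as np
from scipy.optimize import minimize

def laplace_log_evidence(neg_log_post, theta0, eps=1e-4):
    """Laplace approximation to log integral of exp(-neg_log_post):
    log m ~= -neg_log_post(mode) + (d/2) log(2*pi) - (1/2) log det H,
    with H the Hessian of neg_log_post at the mode."""
    res = minimize(neg_log_post, theta0, method="BFGS")
    mode, d = res.x, len(theta0)
    # Central finite-difference Hessian of the negative log posterior.
    H = np.zeros((d, d))
    I = np.eye(d)
    for i in range(d):
        for j in range(d):
            ei, ej = I[i] * eps, I[j] * eps
            H[i, j] = (neg_log_post(mode + ei + ej) - neg_log_post(mode + ei - ej)
                       - neg_log_post(mode - ei + ej) + neg_log_post(mode - ei - ej)
                       ) / (4 * eps**2)
    sign, logdet = np.linalg.slogdet(H)
    return -res.fun + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet

# Sanity check: for f(theta) = ||theta||^2 / 2 the approximation is exact,
# giving log m = (d/2) * log(2*pi).
print(laplace_log_evidence(lambda th: 0.5 * np.sum(th**2), np.ones(3)))
```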
In this paper, we propose a dual-mixed formulation for stationary viscoplastic flows with yield stress, such as Bingham or Herschel-Bulkley flows. The approach is based on a Huber regularization of the viscosity term and a twofold saddle-point nonlinear operator equation for the resulting weak formulation. We prove uniqueness of solutions for the continuous formulation and propose a discrete scheme based on Arnold-Falk-Winther finite elements. The discretization yields a system of slantly differentiable nonlinear equations, for which a semismooth Newton algorithm is proposed and implemented; local superlinear convergence of the method is also proved. Finally, we perform several numerical experiments in two and three dimensions to investigate the behavior and efficiency of the method.
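A toy sketch of a semismooth Newton iteration on a piecewise-smooth, Huber-like equation; the slanting function G supplies an element of the generalized derivative, which is the ingredient behind the local superlinear convergence (the PDE setting above is far richer than this scalar example).

```python
import numpy as np

def semismooth_newton(F, G, x0, tol=1e-12, maxit=50):
    """Semismooth Newton sketch: G(x) returns an element of the generalized
    (slanting) derivative of F at x; locally superlinear for semismooth F."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x - np.linalg.solve(G(x), Fx)
    return x

# Toy Huber-type example: solve x + clip(x, -g, g) = b componentwise;
# clip is the derivative of the classic Huber function with threshold g.
g, b = 1.0, np.array([0.3, 2.5, -4.0])
F = lambda x: x + np.clip(x, -g, g) - b
G = lambda x: np.diag(1.0 + (np.abs(x) <= g).astype(float))  # slanting derivative
x_star = semismooth_newton(F, G, np.zeros_like(b))  # -> [0.15, 1.5, -3.0]
```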
Depth separation results propose a possible theoretical explanation for the benefits of deep neural networks over shallower architectures, establishing that the former possess superior approximation capabilities. However, there are no known results in which the deeper architecture leverages this advantage into a provable optimization guarantee. We prove that when the data are generated by a radially symmetric distribution satisfying some mild assumptions, gradient descent can efficiently learn ball indicator functions using a depth-2 neural network with two layers of sigmoidal activations, where the hidden layer is held fixed throughout training. Since it is known that ball indicators are hard to approximate with respect to a certain heavy-tailed distribution when using depth-2 networks with a single layer of non-linearities (Safran and Shamir, 2017), this establishes, to the best of our knowledge, the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice. Our proof technique relies on a random features approach which reduces the problem to learning with a single neuron, where new tools are required to show the convergence of gradient descent when the distribution of the data is heavy-tailed.
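A minimal simulation of the architecture studied: a depth-2 network with two layers of sigmoidal activations whose hidden layer is frozen at random initialization, trained by gradient descent on a ball indicator under a radially symmetric distribution. The constants, the loss, and the data law below are arbitrary choices, not the paper's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d, width, n = 5, 256, 4096
# Radially symmetric, heavy-tailed-ish data: spherical directions, random radii.
X = rng.standard_normal((n, d)) * rng.exponential(1.0, (n, 1))
y = (np.linalg.norm(X, axis=1) <= 1.0).astype(float)   # ball indicator target

W = rng.standard_normal((width, d))   # frozen hidden weights (random features)
c = rng.standard_normal(width)        # frozen hidden biases
H = sigmoid(X @ W.T + c)              # first sigmoidal layer, held fixed

v, b, lr = np.zeros(width), 0.0, 0.5
for _ in range(2000):
    p = sigmoid(H @ v + b)            # second sigmoidal layer (trained)
    grad_out = (p - y) / n            # gradient of the cross-entropy loss in z
    v -= lr * H.T @ grad_out
    b -= lr * grad_out.sum()

print("train accuracy:", ((p > 0.5) == (y > 0.5)).mean())
```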
Directed networks appear in various areas, such as biology, sociology, physiology and computer science. In this paper, we construct a spectral clustering method based on the singular value decomposition of the adjacency matrix to detect communities under the directed stochastic block model (DiSBM). Introducing a sparsity parameter, we show under mild conditions that the proposed approach consistently recovers the hidden row and column communities for different scalings of degrees. Accounting for degree heterogeneity of both row and column nodes, we further modify the proposed method, establish a theoretical framework for the directed degree-corrected stochastic block model (DiDCSBM), and show the consistency of the modified method in this case. Our theoretical results under DiSBM and DiDCSBM specialize to several notable classes of directed networks, such as directed networks with balanced clusters, directed networks whose nodes have similar degrees, and the directed Erd\H{o}s-R\'enyi graph. Furthermore, the theoretical results under DiDCSBM are consistent with those under DiSBM.
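A compact sketch of the SVD-based co-clustering step; the row normalization typically used for the degree-corrected variant is noted in the comment but omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans

def di_spectral_cluster(A, k):
    """SVD-based co-clustering sketch for a directed network: row communities
    from the top-k left singular vectors of the adjacency matrix A, column
    communities from the top-k right singular vectors. For DiDCSBM-style
    degree heterogeneity, the rows of U and V would first be normalized to
    unit length before running k-means."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    row_labels = KMeans(n_clusters=k, n_init=10).fit_predict(U[:, :k])
    col_labels = KMeans(n_clusters=k, n_init=10).fit_predict(Vt[:k, :].T)
    return row_labels, col_labels
```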