After a Hessian computation, we give a short proof of the 3D simplex mean width conjecture using classical methods. We then generalize some components of the argument to $d$ dimensions.
Random 2-cell embeddings of a given graph $G$ are obtained by choosing a random local rotation around every vertex. We analyze the expected number of faces, $\mathbb{E}[F_G]$, of such an embedding, which is equivalent to studying its average genus. So far, tight results are known for two families called monopoles and dipoles. We extend the dipole result to a more general family called multistars, i.e., loopless multigraphs in which there is a vertex incident with all the edges. In particular, we show that the expected number of faces of every multistar with $n$ nonleaf edges lies in an interval of length $2/(n + 1)$ centered at the expected number of faces of an $n$-edge dipole. This allows us to derive bounds on $\mathbb{E}[F_G]$ for any given graph $G$ in terms of vertex degrees. We conjecture that $\mathbb{E}[F_G] \le O(n)$ for any simple $n$-vertex graph $G$.
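To make the model concrete, the following Monte Carlo sketch (our own illustration, not code from the paper; the function name and parameters are ours) estimates $\mathbb{E}[F]$ for the $n$-edge dipole by sampling a uniformly random cyclic rotation at each of the two vertices and counting faces as orbits of the standard face-tracing map.

```python
import random

def expected_faces_dipole(n, trials=2000, rng=random):
    """Monte Carlo estimate of E[F] for the n-edge dipole: two
    vertices joined by n parallel edges, each vertex with an
    independent uniformly random rotation of its n edge-ends."""
    total = 0
    for _ in range(trials):
        # succ[v][e] = edge following e in the cyclic order at vertex v
        succ = []
        for _v in range(2):
            order = list(range(n))
            rng.shuffle(order)
            succ.append({order[i]: order[(i + 1) % n] for i in range(n)})
        # Face tracing: the dart (e, head) enters vertex `head` along
        # edge e; the face continues along the successor of e in the
        # rotation at `head`, toward the opposite vertex.  The number
        # of faces is the number of orbits of this map on all 2n darts.
        seen, faces = set(), 0
        for e in range(n):
            for head in range(2):
                if (e, head) in seen:
                    continue
                faces += 1
                dart = (e, head)
                while dart not in seen:
                    seen.add(dart)
                    edge, h = dart
                    dart = (succ[h][edge], 1 - h)
        total += faces
    return total / trials
```

As a sanity check, for $n = 2$ the only embedding is spherical and the estimate returns exactly $2$, consistent with Euler's formula $V - E + F = 2$.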
On sparse graphs, Roditty and Williams [2013] proved that no $O(n^{2-\varepsilon})$-time algorithm achieves an approximation factor smaller than $\frac{3}{2}$ for the diameter problem unless SETH fails. In this article, we solve a longstanding question: can we use the structural properties of median graphs to break this global quadratic barrier? We propose the first combinatorial algorithm computing exactly all eccentricities of a median graph in truly subquadratic time. Median graphs constitute the most studied family of graphs in metric graph theory because their structure represents many other discrete and geometric concepts, such as CAT(0) cube complexes. Our result generalizes a recent one stating that there is a linear-time algorithm for all eccentricities in median graphs with bounded dimension $d$, i.e., the dimension of the largest induced hypercube. This prerequisite on $d$ is no longer necessary to determine all eccentricities in subquadratic time. The execution time of our algorithm is $O(n^{1.6456}\log^{O(1)} n)$. We also provide some satellite results related to this general result. In particular, restricted to simplex graphs, this algorithm computes all eccentricities in quasilinear time. Moreover, an algorithm is proposed to compute exactly all reach centralities in time $O(2^{3d}n\log^{O(1)}n)$.
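For contrast, the quadratic baseline that this result improves upon is simply one breadth-first search per vertex; the sketch below (our illustration, assuming an adjacency-list input and a connected graph, and exploiting no median-graph structure) computes all eccentricities in $O(nm)$ time.

```python
from collections import deque

def all_eccentricities(adj):
    """Naive baseline: one BFS per vertex, O(n*m) total time.
    `adj` is an adjacency list; the graph is assumed connected."""
    n = len(adj)
    ecc = [0] * n
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        ecc[s] = max(dist)  # eccentricity of s = farthest BFS distance
    return ecc
```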
Simultaneous analysis of gene expression data and genetic variants is of great interest, especially when the numbers of gene expressions and genetic variants are both greater than the sample size. The association of complex traits with both causal genes and effective SNPs makes sparse modeling of such genetic data sets highly important. High-dimensional sparse instrumental variables models are one such class of association models; they model the simultaneous relation of gene expressions and genetic variants with complex traits. From a Bayesian viewpoint, sparsity can be favored using sparsity-enforcing priors such as spike-and-slab priors. We propose and examine a two-stage modification of the expectation propagation (EP) algorithm for approximate inference in high-dimensional sparse instrumental variables models with spike-and-slab priors. This method is an adaptation of the classical two-stage least squares method to the Bayesian context. A simulation study examines the performance of the method, and the proposed method is applied to an analysis of the mouse obesity data.
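For reference, the classical two-stage least squares estimator that the proposed method adapts fits in a few lines (our own sketch with hypothetical variable names: instruments $Z$, endogenous regressors $X$, trait $y$; the paper replaces both regression stages with EP updates under spike-and-slab priors):

```python
import numpy as np

def two_stage_least_squares(Z, X, y):
    """Classical 2SLS: stage 1 regresses the endogenous regressors X
    (e.g., gene expressions) on the instruments Z (genetic variants);
    stage 2 regresses the trait y on the stage-1 fitted values.
    No sparsity prior here, unlike the Bayesian method above."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]  # stage 1: fitted X
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]   # stage 2: effects
    return beta
```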
We study the classification problem for high-dimensional data with $n$ observations on $p$ features, where the $p \times p$ covariance matrix $\Sigma$ exhibits a spiked eigenvalue structure and the vector $\zeta$, given by the difference between the whitened mean vectors, is sparse with sparsity at most $s$. We propose an adaptive classifier (adaptive with respect to the sparsity $s$) that first performs dimension reduction on the feature vectors prior to classification in the dimensionally reduced space; i.e., the classifier whitens the data, then screens the features by keeping only those corresponding to the $s$ largest coordinates of $\zeta$, and finally applies Fisher's linear discriminant on the selected features. Leveraging recent results on entrywise matrix perturbation bounds for covariance matrices, we show that the resulting classifier is Bayes optimal whenever $n \rightarrow \infty$ and $s \sqrt{n^{-1} \ln p} \rightarrow 0$. Experimental results on real and synthetic data sets indicate that the proposed classifier is competitive with existing state-of-the-art methods while also selecting a smaller number of features.
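A minimal sketch of our reading of this three-step pipeline (all names are ours; a pooled sample covariance with a small ridge term stands in for whatever covariance estimate the paper actually uses):

```python
import numpy as np

def fit_sparse_whitened_lda(X0, X1, s, ridge=1e-8):
    """Sketch of the three-step classifier: whiten with the pooled
    sample covariance, keep the s largest coordinates of the whitened
    mean difference zeta, then apply Fisher's rule on those features."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Xc = np.vstack([X0 - mu0, X1 - mu1])
    Sigma = Xc.T @ Xc / (len(Xc) - 2)            # pooled covariance
    w, V = np.linalg.eigh(Sigma + ridge * np.eye(Sigma.shape[0]))
    W = V @ np.diag(w ** -0.5) @ V.T             # whitening matrix Sigma^{-1/2}
    zeta = W @ (mu1 - mu0)                       # whitened mean difference
    S = np.argsort(-np.abs(zeta))[:s]            # screen: s largest coordinates
    mid = (mu0 + mu1) / 2

    def classify(x):
        z = (W @ (x - mid))[S]                   # center, whiten, select
        return int(z @ zeta[S] > 0)              # Fisher rule (identity cov.)

    return classify
```

After whitening, the covariance is (approximately) the identity, so Fisher's discriminant on the selected coordinates reduces to the sign of the inner product with $\zeta_S$.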
Fully convolutional networks are robust in performing semantic segmentation, with many applications from signal processing to computer vision. Starting from the fundamental principles of variational quantum algorithms, we propose a feasible pure quantum architecture that can be operated on noisy intermediate-scale quantum devices. In this work, a parameterized quantum circuit consisting of three layers, convolutional, pooling, and upsampling, is characterized by generative one-qubit and two-qubit gates and driven by a classical optimizer. This architecture supplies a solution for realizing dynamic programming on a one-way quantum computer and maximally takes advantage of quantum computing throughout the calculation. Moreover, our algorithm works on many physical platforms; in particular, the upsampling layer can use either conventional qubits or multiple-level systems. Through numerical simulations, we demonstrate the successful training of a pure quantum fully convolutional network and discuss its advantages by comparing it with a hybrid solution.
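As a toy illustration of the kind of parameterized two-qubit building block such circuits are assembled from (a generic Ry-plus-CNOT block of our own choosing, not the paper's gate set), simulated directly on the statevector:

```python
import numpy as np

def ry(theta):
    """Single-qubit Ry rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def conv_block(state, theta1, theta2):
    """Parameterized two-qubit block: independent Ry rotations
    followed by an entangling CNOT, acting on a 4-dim statevector.
    A classical optimizer would tune theta1 and theta2."""
    return CNOT @ np.kron(ry(theta1), ry(theta2)) @ state

out = conv_block(np.eye(4)[0], 0.3, 1.2)  # applied to |00>
```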
We investigate how the final parameters found by stochastic gradient descent are influenced by over-parameterization. We generate families of models by increasing the number of channels in a base network, and then perform a large hyper-parameter search to study how the test error depends on learning rate, batch size, and network width. We find that the optimal SGD hyper-parameters are determined by a "normalized noise scale," which is a function of the batch size, learning rate, and initialization conditions. In the absence of batch normalization, the optimal normalized noise scale is directly proportional to width. Wider networks, with their higher optimal noise scale, also achieve higher test accuracy. These observations hold for MLPs, ConvNets, and ResNets, and for two different parameterization schemes ("Standard" and "NTK"). We observe a similar trend with batch normalization for ResNets. Surprisingly, since the largest stable learning rate is bounded, the largest batch size consistent with the optimal normalized noise scale decreases as the width increases.
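For orientation, one common definition of the (unnormalized) SGD noise scale in this literature is $g = \epsilon N / B$ for learning rate $\epsilon$, training set size $N$, and batch size $B$ (Smith and Le, 2018); the normalized variant used above additionally folds in width- and initialization-dependent factors that we do not attempt to reproduce here:

```python
def noise_scale(learning_rate, train_set_size, batch_size):
    """Unnormalized SGD noise scale g = eps * N / B (Smith & Le, 2018).
    The *normalized* noise scale of the study above further divides by
    width- and initialization-dependent factors not reproduced here."""
    return learning_rate * train_set_size / batch_size
```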
In order to avoid the curse of dimensionality frequently encountered in Big Data analysis, there has been vast development of linear and nonlinear dimension reduction techniques in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lies on a lower-dimensional manifold, so the high-dimensionality problem can be overcome by learning the lower-dimensional behavior. However, in real-life applications, data is often very noisy. In this work, we propose a method to approximate $\mathcal{M}$, a $d$-dimensional $C^{m+1}$ smooth submanifold of $\mathbb{R}^n$ ($d \ll n$), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located "near" the lower-dimensional manifold and suggest a nonlinear moving least-squares projection onto an approximating $d$-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (i.e., $O(h^{m+1})$, where $h$ is the fill distance and $m$ is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension $n$. Furthermore, the approximating manifold can serve as a framework for performing operations directly on the high-dimensional data in a computationally efficient manner. This way, the preparatory step of dimension reduction, which induces distortions in the data, can be avoided altogether.
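The following is a simplified one-step sketch of such a moving least-squares projection (our own illustration with hypothetical names; the actual procedure also iterates the choice of local origin and coordinate frame): Gaussian weights of scale $h$, a local frame from weighted PCA, and a degree-$m$ weighted polynomial fit of the normal coordinates as functions of the $d$ tangent coordinates.

```python
import numpy as np
from itertools import combinations_with_replacement

def mls_project(q, data, d, h, m=2):
    """Project one query point q onto a local degree-m polynomial
    model of the d-dimensional manifold near q (simplified sketch)."""
    w = np.exp(-np.sum((data - q) ** 2, axis=1) / h ** 2)  # Gaussian weights
    mu = (w[:, None] * data).sum(0) / w.sum()              # weighted origin
    X = data - mu
    C = (w[:, None, None] * np.einsum('ni,nj->nij', X, X)).sum(0)
    _, V = np.linalg.eigh(C)
    T, N = V[:, -d:], V[:, :-d]          # tangent / normal directions
    t = X @ T                            # local tangent coordinates

    def feats(tt):
        """Monomial features up to degree m in the d tangent coords."""
        cols = [np.ones(len(tt))]
        for deg in range(1, m + 1):
            for idx in combinations_with_replacement(range(d), deg):
                cols.append(np.prod(tt[:, list(idx)], axis=1))
        return np.stack(cols, axis=1)

    sw = np.sqrt(w)[:, None]             # weighted least squares
    coef = np.linalg.lstsq(feats(t) * sw, (X @ N) * sw, rcond=None)[0]
    tq = (q - mu) @ T                    # tangent coords of the query
    nq = feats(tq[None, :])[0] @ coef    # predicted normal coords
    return mu + T @ tq + N @ nq
```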
Deep learning has made remarkable achievements in many fields. However, learning the parameters of neural networks usually demands a large amount of labeled data. Deep learning algorithms therefore encounter difficulties when applied to supervised learning tasks where only a little labeled data is available. This specific task is called few-shot learning. To address it, we propose a novel algorithm for few-shot learning using discrete geometry, in the sense that the samples in a class are modeled as a reduced simplex. The volume of the simplex is used to measure class scatter. During testing, a new simplex is formed from the test sample together with the points of the class. The similarity between the test sample and the class can then be quantified by the ratio of the volume of the new simplex to that of the original class simplex. Moreover, we present an approach to constructing simplices using local regions of feature maps yielded by convolutional neural networks. Experiments on Omniglot and miniImageNet verify the effectiveness of our simplex algorithm for few-shot learning.
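The volume computation at the heart of this scheme can be carried out with the Cayley-Menger determinant; the sketch below (our own illustration, not the authors' code) computes simplex volumes and the volume-ratio similarity described above.

```python
import numpy as np
from math import factorial

def simplex_volume(points):
    """Volume of the simplex on k+1 points in R^n (k <= n), via the
    Cayley-Menger determinant."""
    p = np.asarray(points, dtype=float)
    k = len(p) - 1
    # Squared pairwise distances, bordered by a row/column of ones.
    D = np.square(np.linalg.norm(p[:, None] - p[None, :], axis=-1))
    CM = np.ones((k + 2, k + 2))
    CM[0, 0] = 0.0
    CM[1:, 1:] = D
    coef = (-1) ** (k + 1) / (2 ** k * factorial(k) ** 2)
    return np.sqrt(max(coef * np.linalg.det(CM), 0.0))

def class_similarity(test_point, class_points):
    """Volume-ratio similarity proxy (our reading of the abstract):
    smaller ratio = test sample closer to the class's affine span."""
    return (simplex_volume(list(class_points) + [list(test_point)])
            / simplex_volume(class_points))
```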
Image foreground extraction is a classical problem in image processing and vision, with a large range of applications. In this dissertation, we focus on the extraction of text and graphics in mixed-content images and design novel approaches for various aspects of this problem. We first propose a sparse decomposition framework, which models the background by a subspace containing smooth basis vectors and the foreground as a sparse and connected component. We then formulate an optimization framework to solve this problem by adding suitable regularizations to the cost function to promote the desired characteristics of each component. We present two techniques to solve the proposed optimization problem, one based on the alternating direction method of multipliers (ADMM) and the other based on robust regression. Promising results are obtained for screen content image segmentation using the proposed algorithm. We then propose a robust subspace learning algorithm for the representation of the background component using training images that could contain both background and foreground components, as well as noise. With the learned subspace for the background, we can further improve the segmentation results compared to using a fixed subspace. Lastly, we investigate a different class of signal/image decomposition problems, in which only one signal component is active at each signal element. In this case, besides estimating each component, we need to find their supports, which can be specified by a binary mask. We propose a mixed-integer programming formulation that jointly estimates the two components and their supports through an alternating optimization scheme. We show the application of this algorithm to various problems, including image segmentation, video motion segmentation, and separation of text from textured images.
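A stripped-down sketch of the sparse decomposition idea (our own illustration using plain alternating minimization; the dissertation's ADMM and robust-regression solvers, and its connectivity regularizer, are not reproduced here):

```python
import numpy as np

def sparse_decompose(x, P, lam=0.1, iters=100):
    """Alternating minimization for
        min_{a,f} 0.5*||x - P a - f||^2 + lam*||f||_1,
    where the columns of P span the smooth background subspace and f
    is the sparse foreground (connectivity regularizer omitted)."""
    P_pinv = np.linalg.pinv(P)
    f = np.zeros_like(x)
    for _ in range(iters):
        a = P_pinv @ (x - f)                 # background coefficients
        r = x - P @ a                        # residual after background fit
        f = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)  # soft-threshold
    return P @ a, f                          # background, foreground
```

Each step solves one subproblem exactly: a least-squares fit of the background given the foreground, then a soft-thresholding of the residual given the background.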
Robust estimation is much more challenging in high dimensions than it is in one dimension: most techniques either lead to intractable optimization problems or to estimators that can tolerate only a tiny fraction of errors. Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial-time algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However, the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are optimal, up to logarithmic factors, as well as giving various refinements that allow the algorithms to tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our algorithms have state-of-the-art performance and suddenly make high-dimensional robust estimation a realistic possibility.
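A minimal sketch of the filtering idea behind this line of work (our own illustration, not the paper's exact algorithm), for the model where inliers have identity covariance: corruptions that shift the empirical mean must inflate the variance in some direction, so repeatedly discard the most extreme point along the top eigenvector until the spectral norm is small.

```python
import numpy as np

def filtered_mean(X, threshold=1.5, max_rounds=None):
    """Simple spectral filter for robust mean estimation under an
    (assumed) identity-covariance inlier model: while the empirical
    covariance has an eigenvalue well above 1, remove the sample with
    the largest deviation along the top eigenvector."""
    X = np.array(X, dtype=float)
    rounds = max_rounds or len(X) // 2
    for _ in range(rounds):
        mu = X.mean(axis=0)
        cov = np.cov(X.T)
        w, V = np.linalg.eigh(cov)
        if w[-1] <= threshold:                 # spectrum near identity: done
            break
        proj = np.abs((X - mu) @ V[:, -1])     # deviation along top direction
        X = np.delete(X, np.argmax(proj), axis=0)
    return X.mean(axis=0)
```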