Determining the number of clusters in a dataset is a fundamental issue in data clustering. Many methods have been proposed to select the number of clusters, treating it as a model selection problem. This paper proposes a greedy algorithm that automatically selects a suitable number of clusters within a probability distribution model framework. The algorithm comprises two components. First, we introduce a generalization of Kohonen's self-organizing map (SOM) in which each node is linked to a probability distribution model, enabling the algorithm to search for the winner based on the likelihood of each node. Second, in contrast to Kohonen's SOM, in which the nodes are fixed in Euclidean space, the proposed method uses a graph structure and a neighborhood defined by the length of the shortest path between nodes. This implementation makes it possible to update the graph structure by cutting links to weakly connected nodes, avoiding unnecessary node deletion. The weakness of a node connection is measured using the Kullback--Leibler divergence, and the redundancy of a node is measured by the minimum description length (MDL). This updating step makes it easy to determine the suitable number of clusters. Compared with existing methods, our proposed method is computationally efficient and can accurately select the number of clusters and perform clustering.
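As a minimal sketch of the likelihood-based winner search described above (our own illustration with Gaussian node models; the paper's distribution models and update rules may differ), the winner is the node whose model assigns the input the highest log-likelihood, replacing the Euclidean nearest-node search of Kohonen's SOM:

```python
import numpy as np

def winner(x, means, covs):
    """Return the index of the node whose Gaussian model gives x the
    highest log-likelihood (likelihood-based winner search)."""
    best, best_ll = 0, -np.inf
    for k, (mu, cov) in enumerate(zip(means, covs)):
        d = x - mu
        _, logdet = np.linalg.slogdet(cov)
        ll = -0.5 * (d @ np.linalg.solve(cov, d) + logdet
                     + len(x) * np.log(2 * np.pi))
        if ll > best_ll:
            best, best_ll = k, ll
    return best
```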
We consider the problem of maximizing the Nash social welfare when allocating a set $G$ of indivisible goods to a set $N$ of agents. We study instances in which all agents have 2-value additive valuations: the value of a good $g \in G$ for an agent $i \in N$ is either $1$ or $s$, where $s$ is an odd multiple of $\frac{1}{2}$ larger than one. We show that the problem is solvable in polynomial time. Akrami et al. showed that this problem is solvable in polynomial time if $s$ is integral and is NP-hard whenever $s = \frac{p}{q}$, where $p \in \mathbb{N}$ and $q \in \mathbb{N}$ are co-prime and $p > q \ge 3$. For the latter situation, they also gave an approximation algorithm with an approximation ratio of at most $1.0345$. Moreover, the problem is APX-hard, with a lower bound of $1.000015$ achieved at $\frac{p}{q} = \frac{5}{4}$. The case $q = 2$ and odd $p$ was left open. In the case of integral $s$, the problem is separable in the sense that the optimal allocation of the heavy goods (of value $s$ for some agent) is independent of the number of light goods (of value $1$ for all agents). This leads to an algorithm that first computes an optimal allocation of the heavy goods and then adds the light goods greedily. This separation no longer holds for $s = \frac{3}{2}$; a simple example is given in the introduction. Thus an algorithm has to consider heavy and light goods together, which complicates matters considerably. Our algorithm is based on a collection of improvement rules that transform any allocation into an optimal one, and it exploits a connection to matchings with parity constraints.
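To fix notation, the quantity being maximized is the geometric mean of the agents' additive utilities. The following snippet (illustrative only; names are ours, not from the paper) computes the Nash social welfare of an allocation under 2-value valuations with $s = \frac{3}{2}$:

```python
from math import prod

s = 1.5  # an odd multiple of 1/2 larger than one

def nsw(alloc, value):
    """alloc: list of sets of goods, one per agent.
    value[i][g]: s if good g is heavy for agent i, else 1.
    Returns the geometric mean of the agents' utilities."""
    n = len(alloc)
    return prod(sum(value[i][g] for g in goods)
                for i, goods in enumerate(alloc)) ** (1 / n)

# two agents, three goods; good 0 is heavy for agent 0, good 2 for agent 1
value = [{0: s, 1: 1, 2: 1}, {0: 1, 1: 1, 2: s}]
print(nsw([{0, 1}, {2}], value))  # geometric mean of 2.5 and 1.5
```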
Existing approaches to image captioning usually generate the sentence word-by-word from left to right, conditioned only on local context: the given image and the previously generated words. Many studies have sought to make use of global information during decoding, e.g., through iterative refinement. However, how to incorporate the future context effectively and efficiently remains under-explored. To address this issue, and inspired by the fact that Non-Autoregressive Image Captioning (NAIC) can leverage two-sided relations via a modified mask operation, we aim to graft this advance onto the conventional Autoregressive Image Captioning (AIC) model while maintaining inference efficiency, at no extra time cost. Specifically, the AIC and NAIC models are first trained jointly with a shared visual encoder, forcing the encoder to contain sufficient and valid future context; the AIC model is then encouraged to capture the causal dynamics of cross-layer interchange from the NAIC model on its unconfident words, following a teacher-student paradigm optimized with a distribution calibration training objective. Empirical evidence demonstrates that our proposed approach clearly surpasses state-of-the-art baselines in both automatic metrics and human evaluations on the MS COCO benchmark. The source code is available at: //github.com/feizc/Future-Caption.
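A schematic sketch of the teacher-student idea (our own simplification, with hypothetical names; the paper's distribution calibration objective may differ in form) is a KL distillation term restricted to the decoding steps where the student AIC model is unconfident:

```python
import torch
import torch.nn.functional as F

def calibration_loss(student_logits, teacher_logits, threshold=0.5):
    """KL(teacher || student) restricted to time steps where the
    student's top-1 probability falls below `threshold`.
    Both logits tensors have shape (T, V)."""
    s_logp = F.log_softmax(student_logits, dim=-1)
    t_prob = F.softmax(teacher_logits, dim=-1)
    conf = s_logp.exp().max(dim=-1).values        # student confidence per step
    mask = (conf < threshold).float()             # 1 on unconfident words
    kl = (t_prob * (t_prob.clamp_min(1e-9).log() - s_logp)).sum(-1)
    return (kl * mask).sum() / mask.sum().clamp_min(1.0)
```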
When learning disconnected distributions, generative adversarial networks (GANs) are known to face model misspecification. Indeed, a continuous mapping from a unimodal latent distribution to a disconnected one is impossible, so GANs necessarily generate samples outside the support of the target distribution. This raises a fundamental question: what is the latent space partition that minimizes the measure of these areas? Building on a recent result from geometric measure theory, we prove that an optimal GAN must structure its latent space as a 'simplicial cluster' - a Voronoi partition whose cells are convex cones - when the dimension of the latent space is larger than the number of modes. In this configuration, each Voronoi cell maps to a distinct mode of the data. We derive both an upper and a lower bound on the optimal precision of GANs learning disconnected manifolds. Interestingly, these two bounds have the same order of decrease: $\sqrt{\log m}$, where $m$ is the number of modes. Finally, we perform several experiments to exhibit the geometry of the latent space and show experimentally that GANs learn a geometry with properties similar to the theoretical one.
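To give intuition for a Voronoi partition into convex cones (an illustrative construction of ours, not the paper's exact one), one can pick $m$ unit directions in the latent space and assign each latent sample to its most-aligned direction; each cell is then a convex cone and can be mapped to a distinct mode:

```python
import numpy as np

def voronoi_cone_labels(z, directions):
    """z: (n, d) latent samples; directions: (m, d) unit vectors.
    A sample belongs to the cone of its most-aligned direction."""
    return np.argmax(z @ directions.T, axis=1)

rng = np.random.default_rng(0)
d, m = 8, 3                      # latent dimension larger than mode count
U = rng.standard_normal((m, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)
labels = voronoi_cone_labels(rng.standard_normal((1000, d)), U)
```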
Recently, Implicit Neural Representations (INRs) parameterized by neural networks have emerged as a powerful and promising tool for representing various kinds of signals, owing to their continuous and differentiable properties, and have shown advantages over classical discretized representations. However, training neural networks for INRs only utilizes input-output pairs, while the derivatives of the target output with respect to the input, which can be accessed in some cases, are usually ignored. In this paper, we propose a training paradigm for INRs whose target output is image pixels, encoding image derivatives in addition to image values in the neural network. Specifically, we use finite differences to approximate image derivatives. We show how this training paradigm can be leveraged to solve typical INR problems, namely image regression and inverse rendering, and demonstrate that it improves the data efficiency and generalization capabilities of INRs. The code of our method is available at \url{//github.com/megvii-research/Sobolev_INRs}.
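A minimal sketch of the stated idea, assuming a grayscale image and placeholder loss weighting (see the repository for the actual implementation): the INR is supervised on pixel values and, additionally, its autograd derivatives with respect to the input coordinates are matched to finite-difference image derivatives:

```python
import torch

def sobolev_loss(inr, coords, pixels, grad_x, grad_y, lam=0.1):
    """coords: (N, 2) pixel coordinates; pixels: (N, 1) grayscale values;
    grad_x/grad_y: (N, 1) finite-difference image derivatives."""
    coords = coords.requires_grad_(True)
    pred = inr(coords)
    value_loss = ((pred - pixels) ** 2).mean()
    # derivatives of the INR output w.r.t. input coordinates via autograd
    g = torch.autograd.grad(pred.sum(), coords, create_graph=True)[0]
    deriv_loss = ((g[:, 0:1] - grad_x) ** 2
                  + (g[:, 1:2] - grad_y) ** 2).mean()
    return value_loss + lam * deriv_loss
```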
A time-varying zero-inflated serially dependent Poisson process is proposed. The model assumes that the intensity of the Poisson process evolves according to a generalized autoregressive conditional heteroscedastic (GARCH) formulation. The proposed model is a generalization of the zero-inflated Poisson Integer GARCH model proposed by Fukang Zhu in 2012, which in turn is a generalization of the Integer GARCH (INGARCH) model introduced by Ferland, Latour, and Oraichi in 2006. The proposed model builds on previous work by allowing the zero-inflation parameter to vary over time, governed by a deterministic function or by an exogenous variable. Both the Expectation Maximization (EM) and the Maximum Likelihood Estimation (MLE) approaches are presented as possible estimation methods. A simulation study shows that both parameter estimation methods provide good estimates. Applications to two real-life data sets show that the proposed INGARCH model provides a better fit than the traditional zero-inflated INGARCH model in the cases considered.
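A simulation sketch of a time-varying zero-inflated INGARCH(1,1) process (the parameter values and the particular deterministic form chosen for the zero-inflation probability $\pi_t$ are our illustrative assumptions, not the paper's):

```python
import numpy as np

def simulate(T, omega=0.5, alpha=0.3, beta=0.4, seed=0):
    """lambda_t = omega + alpha * x_{t-1} + beta * lambda_{t-1};
    with probability pi_t emit a structural zero, else Poisson(lambda_t)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(T)
    lam = omega / (1 - alpha - beta)   # start at the stationary mean
    for t in range(1, T):
        lam = omega + alpha * x[t - 1] + beta * lam
        # time-varying zero-inflation via a deterministic function of t
        pi_t = 1 / (1 + np.exp(-np.sin(2 * np.pi * t / T)))
        x[t] = 0 if rng.random() < pi_t else rng.poisson(lam)
    return x
```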
Maximum likelihood (ML) estimation is widely used in statistics. The h-likelihood has been proposed as an extension of Fisher's likelihood to statistical models that include unobserved latent variables, which have attracted recent interest. Its advantage is that joint maximization gives ML estimators (MLEs) of both fixed and random parameters together with their standard error estimates. However, the current h-likelihood approach does not yield MLEs of variance components, just as Henderson's joint likelihood does not in linear mixed models. In this paper, we show how to form the h-likelihood so as to facilitate joint maximization for MLEs of all parameters. We also show the role of the Jacobian term, which allows MLEs in the presence of unobserved latent variables. To obtain MLEs for fixed parameters, intractable integration is not necessary. As an illustration, we show one-shot ML imputation for missing data by treating them as realized but unobserved random parameters. We show that the h-likelihood bypasses the expectation step of the expectation-maximization (EM) algorithm and allows single ML imputation instead of multiple imputations. We also discuss the difference between the prediction of random effects and that of missing data.
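As a brief reminder (our addition, not text from the paper): for data $y$ and unobserved latent variables $v$, with fixed parameters $\beta$ and variance components $\theta$, the classical h-likelihood of Lee and Nelder is
\[
h(\beta, \theta; y, v) = \log f(y \mid v; \beta) + \log f(v; \theta),
\]
maximized jointly in $(\beta, v)$. The scale on which $v$ enters this definition matters, which is where the Jacobian term mentioned above comes in: the paper's construction adjusts the h-likelihood by the Jacobian of the transformation of $v$, so that joint maximization can deliver MLEs of all parameters, including the variance components.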
The present paper addresses the convergence of a first-order-in-time incremental projection scheme for the time-dependent incompressible Navier-Stokes equations to a weak solution, without any existence or regularity assumptions on the exact solution. We prove the convergence of the approximate solutions obtained by the semi-discrete scheme and by a fully discrete scheme using a staggered finite volume discretization on non-uniform rectangular meshes. First a priori estimates on the approximate solutions yield their existence. Compactness arguments relying on these estimates, together with estimates on the translates of the discrete time derivatives, are then developed to obtain convergence (up to the extraction of a subsequence) when the time step tends to zero in the semi-discrete scheme, and when the space and time steps tend to zero in the fully discrete scheme. The approximate solutions are thus shown to converge to a limit function, which is then shown to be a weak solution of the continuous problem by passing to the limit in these schemes.
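For reference, a standard form of the first-order incremental projection scheme (the paper's scheme may differ in details such as the discretization of the convection term) reads: given $(u^n, p^n)$, first solve the prediction step
\[
\frac{\tilde u^{n+1} - u^n}{\delta t} + (u^n \cdot \nabla)\tilde u^{n+1} - \nu \Delta \tilde u^{n+1} + \nabla p^n = f^{n+1},
\]
then perform the projection step
\[
\frac{u^{n+1} - \tilde u^{n+1}}{\delta t} + \nabla (p^{n+1} - p^n) = 0, \qquad \operatorname{div}\, u^{n+1} = 0,
\]
which amounts to solving a Poisson problem for the pressure increment $p^{n+1} - p^n$ and correcting the predicted velocity.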
Algorithms based on the technique of optimal $k$-thresholding (OT) were recently proposed for signal recovery; they differ substantially from the traditional family of hard thresholding methods. However, the computational cost of OT-based algorithms remains high at the current stage of their development. This motivates the so-called natural thresholding (NT) algorithm and its variants developed in this paper. The family of NT algorithms is derived through a first-order approximation of the regularized optimal $k$-thresholding model, so its computational cost is significantly lower than that of the OT-based algorithms. The guaranteed performance of NT-type algorithms for signal recovery from noisy measurements is established under the restricted isometry property and the concavity of the objective function of the regularized optimal $k$-thresholding model. Empirical results indicate that the NT-type algorithms are robust and very comparable to several mainstream algorithms for sparse signal recovery.
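For contrast, the traditional baseline from which the OT/NT family departs is the hard thresholding operator $H_k$ and its use in iterative hard thresholding (shown below as a sketch; this is not the paper's NT algorithm, which instead selects the support via a first-order approximation of the regularized OT model):

```python
import numpy as np

def hard_threshold(z, k):
    """H_k: keep the k largest-magnitude entries of z, zero out the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]
    return out

def iht_step(x, A, y, k, step=1.0):
    """One iterative hard thresholding step:
    x_{t+1} = H_k(x_t + step * A^T (y - A x_t))."""
    return hard_threshold(x + step * A.T @ (y - A @ x), k)
```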
This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing estimators such that this error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural-network-based models, under different simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
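The abstract does not specify the form of the proposed estimators, so as a generic illustration only of how confounding can be adjusted for, here is a standard inverse-propensity-weighted (IPW) contrast for the expected repayment difference under a binary credit decision (this is a textbook device, not necessarily the paper's construction):

```python
import numpy as np

def ipw_effect(y, d, e):
    """y: observed repayments; d: binary credit decision (1 = approve);
    e: estimated propensity P(d = 1 | covariates).
    Returns the IPW estimate of E[Y(1)] - E[Y(0)]."""
    return np.mean(d * y / e) - np.mean((1 - d) * y / (1 - e))
```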
Clustering is one of the most fundamental and widespread techniques in exploratory data analysis. Yet the basic approach to clustering has not really changed: a practitioner hand-picks a task-specific clustering loss to optimize and fits the given data to reveal the underlying cluster structure. Some losses---such as k-means, its non-linear version, kernelized k-means (centroid based), and DBSCAN (density based)---are popular choices due to their good empirical performance on a range of applications. However, every so often the clustering output obtained with these standard losses fails to reveal the underlying structure, and the practitioner has to custom-design their own variation. In this work we take an intrinsically different approach to clustering: rather than fitting a dataset to a specific clustering loss, we train a recurrent model that learns how to cluster. The model uses as training pairs examples of datasets (as input) and their corresponding cluster identities (as output). By providing multiple types of training datasets as inputs, our model gains the ability to generalize well to unseen datasets (new clustering tasks). Our experiments reveal that by training on simple synthetically generated datasets, or on existing real datasets, we can achieve better clustering performance on unseen real-world datasets than standard benchmark clustering techniques. Our meta-clustering model works well even for small datasets, where the usual deep learning models tend to perform poorly.
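A sketch of the training-pair construction described above (our illustration; the paper's synthetic data generators and the recurrent model itself are left abstract here): each training example is a whole dataset paired with its cluster identities.

```python
import numpy as np

def make_training_pair(n=200, k=3, d=2, seed=0):
    """Sample a Gaussian-mixture dataset X (input) together with its
    cluster identities y (output), one (X, y) example for the meta model."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(-5, 5, size=(k, d))
    y = rng.integers(0, k, size=n)          # cluster identity per point
    X = centers[y] + rng.standard_normal((n, d))
    return X, y
```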