We investigate the domain of satisfiable formulas in satisfiability modulo theories (SMT), in particular, the automatic generation of a multitude of satisfying assignments to such formulas. Despite the long and successful history of SMT in model checking and formal verification, this aspect is relatively under-explored. Prior work exists for generating such assignments, or samples, for Boolean formulas and for quantifier-free first-order formulas involving bit-vectors, arrays, and uninterpreted functions (QF_AUFBV). We propose a new approach that is suitable for a theory T of integer arithmetic and for its extension with arrays and uninterpreted functions. The approach reduces the general sampling problem to the simpler task of sampling from a set of independent intervals, which can be done efficiently. The reduction is carried out by expanding a single model, a seed, using top-down propagation of constraints along the original first-order formula.
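As a minimal illustration of the final stage described above, assuming the reduction has already produced one independent integer interval per variable (the variable names and bounds below are hypothetical), drawing many satisfying assignments amounts to sampling each interval independently:

```python
import random

# Hypothetical result of the reduction: each variable is constrained to an
# independent integer interval (inclusive bounds).
intervals = {"x": (0, 9), "y": (-5, 5), "z": (100, 200)}

def sample_assignment(intervals, rng=random):
    """Draw one satisfying assignment by sampling each interval independently."""
    return {var: rng.randint(lo, hi) for var, (lo, hi) in intervals.items()}

# Draw a handful of independent samples.
for _ in range(5):
    print(sample_assignment(intervals))
```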
We provide a clear and concise introduction to the subjects of inverse problems and data assimilation, and their interrelations. The first part of our notes covers inverse problems; this refers to the study of how to estimate unknown model parameters from data. The second part of our notes covers data assimilation; this refers to a particular class of inverse problems in which the unknown parameter is the initial condition (and/or state) of a dynamical system, and the data comprise partial and noisy observations of the state. The third and final part of our notes describes the use of data assimilation methods to solve generic inverse problems by introducing an artificial algorithmic time. Our notes cover, among other topics, maximum a posteriori estimation, (stochastic) gradient descent, variational Bayes, Monte Carlo, importance sampling, and Markov chain Monte Carlo for inverse problems; and 3DVAR, 4DVAR, extended and ensemble Kalman filters, and particle filters for data assimilation. Each of parts one and two starts with a chapter on the Bayesian formulation, in which the problem solution is given by a posterior distribution on the unknown parameter. The following chapter then specializes the Bayesian formulation to a linear-Gaussian setting where explicit characterization of the posterior is possible and insightful. The next two chapters explore methods to extract information from the posterior in nonlinear and non-Gaussian settings using optimization and Gaussian approximations. The final two chapters describe sampling methods that can reproduce the full posterior in the large sample limit. Each chapter closes with a bibliography containing citations to alternative pedagogical literature and to relevant research literature. We also include a set of exercises at the end of parts one and two. Our notes are thus useful for both classroom teaching and self-guided study.
This paper concerns the use of asymptotic expansions for efficiently solving forward and inverse problems involving a nonlinear singularly perturbed time-dependent reaction--diffusion--advection equation. By using an asymptotic expansion with local coordinates in the transition-layer region, we prove the existence and uniqueness of a smooth solution with a sharp transition layer for a three-dimensional partial differential equation. Moreover, with the help of the asymptotic expansion, a simplified model is derived for the corresponding inverse source problem, which is close to the original inverse problem over the entire region except for a narrow transition layer. We show that this simplification does not reduce the accuracy of the inversion results when the measurement data contain noise. Based on this simpler inversion model, an asymptotic-expansion regularization algorithm is proposed for efficiently solving the inverse source problem in the three-dimensional case. A model problem demonstrates the feasibility of the proposed numerical approach.
In group sequential analysis, data are collected and analyzed in batches until pre-defined stopping criteria are met. Inference in the parametric setup typically relies on the limiting asymptotic multivariate normality of the repeatedly computed maximum likelihood estimators (MLEs), a result first rigorously proved by Jennison and Turnbull (1997) under general regularity conditions. In this work, using Stein's method, we provide optimal-order, non-asymptotic bounds on the smooth test function distance between the joint distribution of the group sequential MLEs and the appropriate normal distribution under the same conditions. Our results assume independent observations but allow heterogeneous (i.e., non-identically distributed) data. We examine how the resulting bounds simplify when the data come from an exponential family. Finally, we present a general result relating the multivariate Kolmogorov distance to the smooth function distance which, in addition to extending our results to the former metric, may be of independent interest.
We introduce VA-DepthNet, a simple, effective, and accurate deep neural network approach for the single-image depth prediction (SIDP) problem. The proposed approach advocates using classical first-order variational constraints for this problem. While state-of-the-art deep neural network methods for SIDP learn the scene depth from images in a supervised setting, they often overlook the invaluable invariances and priors in the rigid scene space, such as the regularity of the scene. The paper's main contribution is to reveal the benefit of classical and well-founded variational constraints in the neural network design for the SIDP task. It is shown that imposing first-order variational constraints in the scene space, together with a popular encoder-decoder-based network architecture, provides excellent results for the supervised SIDP task. The imposed first-order variational constraint makes the network aware of the depth gradient in the scene space, i.e., its regularity. The paper demonstrates the usefulness of the proposed approach via extensive evaluation and ablation analysis over several benchmark datasets, such as KITTI, NYU Depth V2, and SUN RGB-D. At test time, VA-DepthNet shows considerable improvements in depth prediction accuracy compared to the prior art and is also accurate in high-frequency regions of the scene space. At the time of writing, our method, VA-DepthNet, achieves state-of-the-art results on the KITTI depth-prediction evaluation benchmark and is the top-performing published approach.
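As a rough sketch of what a first-order variational constraint of this kind can look like, the following snippet penalizes disagreement between the finite-difference gradients of a predicted and a ground-truth depth map; the weighting and the exact form of the constraint used in VA-DepthNet may differ, so this is only an illustration of the general idea:

```python
import numpy as np

def finite_difference_gradients(depth):
    """First-order finite differences of a depth map along x and y."""
    gx = depth[:, 1:] - depth[:, :-1]   # horizontal differences
    gy = depth[1:, :] - depth[:-1, :]   # vertical differences
    return gx, gy

def first_order_variational_loss(pred, gt, weight=1.0):
    """Data term plus a penalty on the mismatch of depth gradients.

    Generic gradient-matching formulation; the actual constraint in
    VA-DepthNet may be implemented differently.
    """
    pgx, pgy = finite_difference_gradients(pred)
    ggx, ggy = finite_difference_gradients(gt)
    grad_term = np.abs(pgx - ggx).mean() + np.abs(pgy - ggy).mean()
    data_term = np.abs(pred - gt).mean()
    return data_term + weight * grad_term

# Toy example on random depth maps.
rng = np.random.default_rng(0)
gt = rng.uniform(1.0, 10.0, size=(64, 64))
pred = gt + rng.normal(0.0, 0.1, size=(64, 64))
print(first_order_variational_loss(pred, gt))
```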
We analyze the convergence of a nonlocal gradient descent method for minimizing a class of high-dimensional non-convex functions, where a directional Gaussian smoothing (DGS) is used to define the nonlocal gradient (also referred to as the DGS gradient). The method was first proposed in [42], where multiple numerical experiments showed that replacing the traditional local gradient with the DGS gradient can help the optimizers escape local minima more easily and significantly improve their performance. However, a rigorous theory for the efficiency of the method on non-convex landscapes has been lacking. In this work, we investigate the scenario where the objective function is composed of a convex function perturbed by an oscillating noise. We provide a convergence theory under which the iterates converge exponentially to a tight neighborhood of the solution, whose size is characterized by the noise wavelength. We also establish a correlation between the optimal values of the Gaussian smoothing radius and the noise wavelength, thus justifying the advantage of using a moderate or large smoothing radius with the method. Furthermore, if the noise level decays to zero when approaching the global minimum, we prove that DGS-based optimization converges to the exact global minimum with linear rates, similarly to standard gradient-based methods for optimizing convex functions. Several numerical experiments are provided to confirm our theory and illustrate the superiority of the approach over those based on the local gradient.
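For illustration, here is a minimal sketch of a DGS-style gradient estimator, following the general recipe of smoothing the objective along orthonormal directions and approximating each one-dimensional smoothed directional derivative with Gauss-Hermite quadrature; the smoothing radius, quadrature order, and the oscillating test function are illustrative choices rather than the paper's exact setup:

```python
import numpy as np

def dgs_gradient(f, x, sigma=0.1, n_quad=7, directions=None):
    """Directional Gaussian smoothing (DGS) gradient estimate at x.

    Each component is a Gauss-Hermite quadrature approximation of the
    1D Gaussian-smoothed directional derivative along an orthonormal direction.
    """
    d = x.size
    if directions is None:
        directions = np.eye(d)  # coordinate directions; a random rotation also works
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)  # weight exp(-t^2)
    grad = np.zeros(d)
    for v in directions:
        vals = np.array([f(x + sigma * np.sqrt(2.0) * t * v) for t in nodes])
        # D_v ~ (1 / (sigma * sqrt(pi))) * sum_m w_m * sqrt(2) t_m * F(x + sigma sqrt(2) t_m v)
        deriv = np.sum(weights * np.sqrt(2.0) * nodes * vals) / (sigma * np.sqrt(np.pi))
        grad += deriv * v
    return grad

# Toy objective: a convex quadratic perturbed by high-frequency oscillating noise.
def f(x):
    return np.sum(x ** 2) + 0.1 * np.sum(np.cos(20.0 * np.pi * x))

x = np.array([0.8, -0.6])
for _ in range(100):
    x = x - 0.05 * dgs_gradient(f, x, sigma=0.3)
print(x, f(x))
```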
Understanding how the adult human brain learns novel categories is an important problem in neuroscience. Drift-diffusion models are popular in such contexts for their ability to mimic the underlying neural mechanisms. One such model for gradual longitudinal learning was recently developed in Paulon et al. (2020). Fitting drift-diffusion models, however, requires data on both category responses and associated response times. Category response accuracies are often the only reliable measure recorded by behavioral scientists to describe human learning, yet, to our knowledge, drift-diffusion models for such scenarios have never been considered in the literature before. To address this gap, in this article we build carefully on Paulon et al. (2020), but now with latent response times integrated out, to derive a novel, biologically interpretable class of `inverse-probit' categorical probability models for observed categories. This marginal model, however, presents significant identifiability and inferential challenges not encountered originally for the joint model in Paulon et al. (2020). We address these new challenges via a novel projection-based approach with a symmetry-preserving identifiability constraint that allows us to work with conjugate priors in an unconstrained space. We adapt the model for group and individual-level inference in longitudinal settings. Building again on the model's latent variable representation, we design an efficient Markov chain Monte Carlo algorithm for posterior computation. We evaluate the method's empirical performance through simulation experiments. The method's practical efficacy is illustrated in applications to longitudinal tone learning studies.
The goal of survey design is often to minimize the errors associated with inference: the total of bias and variance. Random surveys are common because they allow the use of theoretically unbiased estimators. In practice, however, such design-based approaches are often unable to account for logistical or budgetary constraints. Thus, they may result in samples that are logistically inefficient, or infeasible to implement. Various balancing and optimal sampling techniques have been proposed to improve the statistical efficiency of such designs, but few methods have attempted to explicitly incorporate logistical and financial constraints. We introduce a mixed integer linear program (MILP) for optimal sampling design, capable of capturing a variety of constraints and a wide class of Bayesian regression models. We demonstrate the use of our model on three spatial sampling problems of increasing complexity, including the real logistics of the US Forest Service Forest Inventory and Analysis survey of Tanana, Alaska. Our methodological contribution to survey design is significant because the proposed modeling framework makes it possible to generate high-quality sampling designs and inferences while satisfying practical constraints defined by the user. The technical novelty of the method is the explicit integration of Bayesian statistical models in combinatorial optimization. This integration may allow a paradigm shift in spatial sampling under constrained budgets or logistics.
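As a toy illustration of how logistical constraints can enter such a program (not the paper's actual formulation, whose objective is driven by a Bayesian regression model rather than fixed per-site scores), the following sketch selects survey sites under a hypothetical budget using the PuLP modeling library:

```python
# A toy budget-constrained site-selection MILP (requires the PuLP package).
# The site names, costs, scores, and budget below are all hypothetical.
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum, value

sites = ["s1", "s2", "s3", "s4", "s5"]
cost = {"s1": 4.0, "s2": 2.5, "s3": 3.0, "s4": 5.5, "s5": 1.5}   # travel/logistics cost
score = {"s1": 0.9, "s2": 0.6, "s3": 0.7, "s4": 1.2, "s5": 0.3}  # proxy for expected information
budget = 8.0

prob = LpProblem("sampling_design", LpMaximize)
pick = LpVariable.dicts("pick", sites, cat=LpBinary)

prob += lpSum(score[s] * pick[s] for s in sites)            # objective: total information proxy
prob += lpSum(cost[s] * pick[s] for s in sites) <= budget   # budget constraint
prob += lpSum(pick[s] for s in sites) >= 2                  # require at least two sites

prob.solve()
print([s for s in sites if value(pick[s]) > 0.5])
```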
In the usual Bayesian setting, a full probabilistic model is required to link the data and parameters, and the form of this model and the inference and prediction mechanisms are specified via de Finetti's representation. In general, such a formulation is not robust to mis-specification of its component parts. An alternative approach is to draw inference based on loss functions, where the quantity of interest is defined as a minimizer of some expected loss, and to construct posterior distributions based on this loss-based formulation; this strategy underpins the construction of the Gibbs posterior. We develop a Bayesian non-parametric approach; specifically, we generalize the Bayesian bootstrap and specify a Dirichlet process model for the distribution of the observables. We implement this using both direct prior-to-posterior calculations and predictive sampling. We also study the assessment of posterior validity for non-standard Bayesian calculations, and provide an efficient way to calibrate the scaling parameter in the Gibbs posterior so that it achieves the desired coverage rate. We show that the developed non-standard Bayesian updating procedures yield valid posterior distributions, in terms of consistency and asymptotic normality, under model mis-specification. Simulation studies show that the proposed methods can recover the true value of the parameter efficiently and achieve frequentist coverage even when the sample size is small. Finally, we apply our methods to evaluate the causal impact of speed cameras on traffic collisions in England.
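A minimal sketch of the loss-based Bayesian bootstrap idea discussed above: each posterior draw re-weights the observations with Dirichlet weights and reports the minimizer of the weighted loss. Squared-error loss, whose minimizer is the weighted mean, is an illustrative choice here, not the only loss the approach supports:

```python
import numpy as np

def loss_based_bayesian_bootstrap(y, n_draws=2000, rng=None):
    """Posterior draws for the parameter defined as the minimizer of expected
    squared-error loss (i.e., the mean), via the Bayesian bootstrap.

    Each draw re-weights the observations with Dirichlet(1, ..., 1) weights and
    returns the weighted-loss minimizer, which for squared error is simply the
    weighted average.
    """
    rng = rng or np.random.default_rng()
    n = len(y)
    weights = rng.dirichlet(np.ones(n), size=n_draws)  # one weight vector per draw
    return weights @ y                                  # weighted means, shape (n_draws,)

y = np.random.default_rng(1).normal(loc=2.0, scale=1.0, size=50)
draws = loss_based_bayesian_bootstrap(y)
print(draws.mean(), np.quantile(draws, [0.025, 0.975]))
```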
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
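A minimal sketch of this re-weighting scheme: compute the effective number of samples per class from the formula above, take its inverse as the class weight, normalize, and scale a per-sample cross-entropy loss accordingly (the normalization convention and the toy numbers below are illustrative):

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Weights proportional to 1 / E_n, where E_n = (1 - beta^n) / (1 - beta)."""
    effective_num = (1.0 - np.power(beta, samples_per_class)) / (1.0 - beta)
    weights = 1.0 / effective_num
    # Normalize so the weights sum to the number of classes.
    return weights * len(samples_per_class) / weights.sum()

def class_balanced_cross_entropy(logits, labels, samples_per_class, beta=0.999):
    """Softmax cross-entropy with each sample scaled by the weight of its true class."""
    weights = class_balanced_weights(samples_per_class, beta)
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(labels)), labels]
    return (weights[labels] * nll).mean()

# Toy long-tailed example: class 0 has 1000 samples, class 1 has only 10.
samples_per_class = np.array([1000, 10])
logits = np.array([[2.0, 0.5], [0.3, 1.2]])
labels = np.array([0, 1])
print(class_balanced_cross_entropy(logits, labels, samples_per_class))
```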
Most of the internet today is composed of digital media, including videos and images. With pixels becoming the currency in which many transactions on the internet happen, it is increasingly important to have a way of browsing through this ocean of information with relative ease. YouTube has 400 hours of video uploaded every minute, and millions of images are browsed on Instagram, Facebook, and similar platforms. Inspired by recent advances in deep learning and the success it has achieved on problems such as image captioning, machine translation, word2vec, and skip-thoughts, we present DeepSeek, a natural language processing based deep learning model that allows users to enter a description of the kind of images they want to search for; in response, the system retrieves all the images that semantically and contextually relate to the query. Two approaches are described in the following sections.
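As a minimal sketch of the retrieval step common to both approaches, assuming some text and image encoders (not specified here) have already produced embeddings in a shared vector space, ranking images by cosine similarity against the query embedding could look like this:

```python
import numpy as np

def cosine_similarity(query, matrix):
    """Cosine similarity between a query vector and each row of a matrix."""
    query = query / np.linalg.norm(query)
    matrix = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return matrix @ query

def retrieve(query_embedding, image_embeddings, image_ids, top_k=5):
    """Return the ids and scores of the top_k images closest to the query."""
    scores = cosine_similarity(query_embedding, image_embeddings)
    order = np.argsort(-scores)[:top_k]
    return [(image_ids[i], float(scores[i])) for i in order]

# Hypothetical 128-d embeddings produced by some text and image encoders.
rng = np.random.default_rng(0)
image_embeddings = rng.normal(size=(1000, 128))
image_ids = [f"img_{i}" for i in range(1000)]
query_embedding = rng.normal(size=128)
print(retrieve(query_embedding, image_embeddings, image_ids))
```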