The planted coloring problem is a prototypical inference problem for which the thresholds of Bayes-optimal algorithms, such as Belief Propagation (BP), can be computed analytically. In this paper, we analyze the limits and performance of Simulated Annealing (SA), a Monte Carlo-based algorithm that is more general and robust than BP, and thus of broader applicability. We show that SA is sub-optimal in recovering the planted solution because it is attracted by glassy states that, by contrast, do not influence the BP algorithm. At variance with previous conjectures, we propose an analytic estimate of the SA algorithmic threshold by comparing the spinodal point of the paramagnetic phase with the dynamical critical temperature. This establishes a fundamental connection between thermodynamic phase transitions and the out-of-equilibrium behavior of Glauber dynamics. We also study an improved version of SA, called replicated SA (RSA), in which several weakly coupled replicas are cooled down together. We present numerical evidence that the algorithmic threshold of RSA coincides with the Bayes-optimal one. Finally, we develop an approximate analytical theory that explains the optimal performance of RSA and predicts the location of the transition towards the planted solution in the limit of a very large number of replicas. Our results for RSA support the idea that mismatching the parameters of the prior with respect to those of the generative model may produce an algorithm that is both optimal and very robust.
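To make the dynamics concrete, the following is a minimal sketch of Metropolis-based simulated annealing on a planted q-coloring instance, with the anti-ferromagnetic Potts energy counting monochromatic edges. The average degree, cooling schedule, and sweep counts are illustrative choices, not those of the paper; a replicated (RSA) version would additionally couple several copies of `colors` through a weak ferromagnetic attraction.

```python
import numpy as np

rng = np.random.default_rng(0)

def planted_coloring_graph(n, q, c):
    """Random graph of average degree c whose edges never join
    equal colors of a hidden planted assignment."""
    planted = rng.integers(q, size=n)
    edges = []
    while len(edges) < int(c * n / 2):
        i, j = rng.integers(n, size=2)
        if i != j and planted[i] != planted[j]:
            edges.append((i, j))
    return planted, edges

def energy(colors, edges):
    """Anti-ferromagnetic Potts energy: number of monochromatic edges."""
    return sum(int(colors[i] == colors[j]) for i, j in edges)

def simulated_annealing(edges, n, q, betas, sweeps=5):
    """Metropolis dynamics with a slow increase of inverse temperature."""
    colors = rng.integers(q, size=n)
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    for beta in betas:
        for _ in range(sweeps * n):
            v = int(rng.integers(n))
            new = int(rng.integers(q))
            dE = sum(colors[u] == new for u in adj[v]) \
               - sum(colors[u] == colors[v] for u in adj[v])
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                colors[v] = new
    return colors

planted, edges = planted_coloring_graph(n=300, q=5, c=6.0)
found = simulated_annealing(edges, n=300, q=5, betas=np.linspace(0.5, 3.0, 20))
print("monochromatic edges after annealing:", energy(found, edges))
```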
Multilevel regression and poststratification (MRP) has become a popular approach for adjusting selection bias in subgroup estimation, with widespread applications from the social sciences to public health. We examine the finite population inferential validity of MRP in connection with poststratification and model specification. The success of MRP depends prominently on the availability of auxiliary information strongly related to the outcome. To improve the fit of the outcome model, we recommend modeling inclusion mechanisms conditional on auxiliary variables and adding flexible functions of estimated inclusion probabilities as predictors in the mean structure. We present a framework for statistical data integration and robust inference for probability and nonprobability surveys, providing solutions to various challenges in practical applications. Our simulation studies indicate the statistical validity of MRP, with a tradeoff between bias and variance; the improvement over alternative methods is mainly on subgroup estimates with small sample sizes. Our development is motivated by the Adolescent Brain Cognitive Development (ABCD) Study, which has collected children's information across 21 U.S. geographic locations for national representation but, as a nonprobability sample, is subject to selection bias. We apply the methods for population inference to evaluate the cognition measure of diverse groups of children in the ABCD Study and demonstrate that the use of auxiliary variables affects the inferential findings.
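A simplified, non-hierarchical sketch of the recommended workflow on synthetic data: fit a model for the inclusion mechanism given an auxiliary variable, add flexible functions of the estimated inclusion probabilities (here a quadratic in the logit) to the outcome model's mean structure, and poststratify predictions over the full population. The data-generating process, group structure, and use of plain logistic/linear regressions in place of multilevel models are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(1)

# synthetic population: auxiliary variable x, subgroups g, outcome y
N = 50_000
x = rng.normal(size=N)
g = rng.integers(4, size=N)                        # poststratification groups
y = 1.0 + 0.8 * x + 0.3 * g + rng.normal(size=N)

# nonprobability inclusion depends on x, inducing selection bias
p_true = 1.0 / (1.0 + np.exp(2.5 - 1.2 * x))
sampled = rng.random(N) < p_true

# step 1: model the inclusion mechanism given the auxiliary variable
incl = LogisticRegression().fit(x.reshape(-1, 1), sampled)

# step 2: outcome model with flexible functions of the estimated
# inclusion probabilities in the mean structure
p_hat = incl.predict_proba(x[sampled].reshape(-1, 1))[:, 1]
lp = np.log(p_hat / (1.0 - p_hat))
X = np.column_stack([x[sampled], lp, lp ** 2, np.eye(4)[g[sampled]]])
fit = LinearRegression().fit(X, y[sampled])

# step 3: poststratify, predicting for the whole population and
# averaging within each subgroup
p_all = incl.predict_proba(x.reshape(-1, 1))[:, 1]
lp_all = np.log(p_all / (1.0 - p_all))
X_all = np.column_stack([x, lp_all, lp_all ** 2, np.eye(4)[g]])
y_hat = fit.predict(X_all)
for k in range(4):
    print(k, round(y_hat[g == k].mean(), 3), round(y[g == k].mean(), 3))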
We study two generalizations of classic clustering problems called dynamic ordered $k$-median and dynamic $k$-supplier, where the points that need clustering evolve over time and we are allowed to move the cluster centers between consecutive time steps. In these dynamic clustering problems, the general goal is to minimize certain combinations of the service cost of points and the movement cost of centers, or to minimize one subject to constraints on the other. We obtain a constant-factor approximation algorithm for dynamic ordered $k$-median under mild assumptions on the input. When the number of time steps is two, we give a 3-approximation for dynamic $k$-supplier and a multi-criteria approximation for its outlier version, in which some points may be discarded. We complement the algorithms with almost matching hardness results.
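For orientation, the objectives involved can be stated compactly. The sketch below evaluates an ordered $k$-median cost (non-increasing weights applied to the sorted service distances) and one generic dynamic combination of service and movement costs; the tradeoff weight `gamma` is a hypothetical parameter for illustration, and the paper also treats constrained variants.

```python
import numpy as np

def ordered_kmedian_cost(service_dists, w):
    """Ordered k-median objective: non-increasing weights w applied to
    the service distances sorted in non-increasing order."""
    d = np.sort(np.asarray(service_dists))[::-1]
    return float(np.dot(np.sort(w)[::-1], d))

def dynamic_objective(per_step_service, center_moves, gamma=1.0):
    """One combination studied in dynamic clustering: total service cost
    plus gamma times the total distance traveled by the centers."""
    return sum(per_step_service) + gamma * sum(center_moves)

# two time steps: service cost per step plus the cost of moving centers
print(dynamic_objective(
    [ordered_kmedian_cost([3.0, 1.0, 0.5], w=[2.0, 1.0, 1.0]),
     ordered_kmedian_cost([2.0, 2.0, 0.0], w=[2.0, 1.0, 1.0])],
    center_moves=[1.5]))
```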
In this paper, we focus on solving a distributed convex aggregative optimization problem in a network, where each agent has its own cost function that depends not only on its own decision variable but also on an aggregate function of all agents' decision variables. The decision variables are constrained within a feasible set. To minimize the sum of the cost functions when each agent only knows its local cost function, we propose a distributed Frank-Wolfe algorithm based on gradient tracking, in which each node maintains two estimates: an estimate of the aggregate of the agents' decision variables and an estimate of the gradient of the global function. The algorithm is projection-free and only requires solving a linear optimization problem to obtain a search direction at each step. We show the convergence of the proposed algorithm for convex and smooth objective functions over a time-varying network. Finally, we demonstrate the convergence and computational efficiency of the proposed algorithm via numerical simulations.
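A minimal numerical sketch of this scheme under toy assumptions: scalar decisions constrained to the box [-1, 1] (so the linear minimization oracle reduces to a sign), aggregative costs f_i(x_i, u) = (x_i - a_i)^2 + (u - b)^2 with aggregate u equal to the mean of the decisions, and a complete communication graph with uniform mixing weights. Each agent maintains a tracker of the aggregate and a tracker of the average partial derivative with respect to the aggregate.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 5, 300
a = rng.uniform(-0.5, 0.5, n)        # local targets
b = 0.2                              # target for the aggregate u = mean(x)
W = np.full((n, n), 1.0 / n)         # doubly stochastic mixing matrix

x = np.zeros(n)
u = x.copy()                         # each agent's estimate of the aggregate
dfdu = 2 * (u - b)                   # local d f_i / d u at the current estimate
y = dfdu.copy()                      # estimate of the average of d f_j / d u

for t in range(T):
    grad = 2 * (x - a) + y           # gradient of the global objective w.r.t. x_i
    s = -np.sign(grad)               # linear minimization oracle over [-1, 1]
    gamma = 2.0 / (t + 2)            # standard Frank-Wolfe step size
    x_new = x + gamma * (s - x)
    u = W @ u + (x_new - x)          # gradient-tracking update of the aggregate
    dfdu_new = 2 * (u - b)
    y = W @ y + (dfdu_new - dfdu)    # gradient-tracking update of the derivative
    x, dfdu = x_new, dfdu_new

print("x:", x.round(3), " mean(x):", round(float(x.mean()), 3))
```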
We present a non-asymptotic lower bound on the eigenspectrum of the design matrix generated by any linear bandit algorithm with sub-linear regret when the action set has well-behaved curvature. Specifically, we show that the minimum eigenvalue of the expected design matrix grows as $\Omega(\sqrt{n})$ whenever the expected cumulative regret of the algorithm is $O(\sqrt{n})$, where $n$ is the learning horizon and the action space has a constant Hessian around the optimal arm. This shows that such action spaces force a polynomial lower bound, rather than the logarithmic lower bound shown by \cite{lattimore2017end} for discrete (i.e., well-separated) action spaces. Furthermore, while the previous result holds only in the asymptotic regime (as $n \to \infty$), our result for these ``locally rich'' action spaces is anytime. Additionally, under a mild technical assumption, we obtain a similar lower bound on the minimum eigenvalue holding with high probability. We apply our result to two practical scenarios -- \emph{model selection} and \emph{clustering} in linear bandits. For model selection, we show that an epoch-based linear bandit algorithm adapts to the true model complexity at a rate exponential in the number of epochs, by virtue of our novel spectral bound. For clustering, we consider a multi-agent framework in which we show, by leveraging the spectral result, that no forced exploration is necessary: the agents can run a linear bandit algorithm and estimate their underlying parameters simultaneously, and hence incur low regret.
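The mechanism behind this bound can be checked numerically on a toy instance. Below, a greedy ridge estimator on the unit sphere (a smoothly curved action set) is perturbed at scale t^{-1/4}, so per-step regret scales like t^{-1/2} and cumulative regret like sqrt(n), while the orthogonal components of the actions feed the design matrix; the printed ratio lambda_min / sqrt(n) should roughly stabilize. This is an illustrative experiment, not the algorithms or constants from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 3, 20_000
theta = np.array([1.0, 0.0, 0.0])              # unknown unit-norm parameter

V = 1e-3 * np.eye(d)                           # (regularized) design matrix
S = np.zeros(d)                                # running sum of a_t * r_t

for t in range(1, n + 1):
    theta_hat = np.linalg.solve(V, S)          # ridge estimate of theta
    a = theta_hat + t ** -0.25 * rng.normal(size=d)   # shrinking perturbation
    a /= np.linalg.norm(a)                     # action set: unit sphere (curved)
    r = a @ theta + 0.1 * rng.normal()         # noisy linear reward
    V += np.outer(a, a)
    S += a * r
    if t in (100, 1_000, 10_000, 20_000):
        lam = np.linalg.eigvalsh(V)[0]         # minimum eigenvalue of V
        print(t, round(lam, 2), round(lam / np.sqrt(t), 3))
```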
We study policy gradient (PG) for reinforcement learning in continuous time and space under the regularized exploratory formulation developed by Wang et al. (2020). We represent the gradient of the value function with respect to a given parameterized stochastic policy as the expected integral of an auxiliary running reward function that can be evaluated using samples and the current value function. This effectively turns PG into a policy evaluation (PE) problem, enabling us to apply the martingale approach recently developed by Jia and Zhou (2021) for PE to solve our PG problem. Based on this analysis, we propose two types of actor-critic algorithms for RL, in which we learn and update value functions and policies simultaneously and alternately. The first type is based directly on the aforementioned representation, which involves future trajectories and is hence offline. The second type, designed for online learning, employs the first-order condition of the policy gradient and turns it into martingale orthogonality conditions, which are then incorporated via stochastic approximation when updating policies. Finally, we demonstrate the algorithms with simulations in two concrete examples.
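As a rough illustration of the online actor-critic idea, here is a crude Euler-discretized sketch on a one-dimensional linear-quadratic toy problem: the TD residual plays the role of the martingale residual for the critic, and the same residual multiplied by the score function updates a Gaussian policy's feedback gain. The discretization, the time-pooled quadratic critic, and all step sizes are simplifying assumptions; the paper's algorithms are formulated directly in continuous time.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, T, episodes = 0.05, 1.0, 2000
steps = int(T / dt)
phi, sigma = 0.0, 0.5               # Gaussian policy: u ~ N(phi * x, sigma^2)
w = np.zeros(2)                     # critic V(x) ~ w0 + w1 * x^2 (time pooled)
alpha_c, alpha_a = 0.05, 0.01

def V(x):
    return w[0] + w[1] * x * x

for _ in range(episodes):
    x = rng.normal()                # dynamics: dX = u dt + dW
    for _ in range(steps):
        u = phi * x + sigma * rng.normal()
        r = -(x * x + u * u) * dt                   # running reward
        x_next = x + u * dt + np.sqrt(dt) * rng.normal()
        delta = r + V(x_next) - V(x)                # TD / martingale residual
        w += alpha_c * delta * np.array([1.0, x * x])             # critic step
        phi += alpha_a * delta * (u - phi * x) * x / sigma ** 2   # actor step
        x = x_next

print("learned feedback gain phi:", round(phi, 3))
```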
In this paper, we study a sequential decision-making problem faced by e-commerce carriers: when to send out a vehicle from the central depot to serve customer requests, and in which order to provide the service, under the assumption that the times at which parcels arrive at the depot are stochastic and dynamic. The objective is to maximize the number of parcels that can be delivered during the service hours. We propose two reinforcement learning approaches for solving this problem, one based on policy function approximation (PFA) and the other on value function approximation (VFA). Both methods are combined with a look-ahead strategy in which future release dates are sampled in a Monte-Carlo fashion and a tailored batch approach is used to approximate the value of future states. Our PFA and VFA make good use of branch-and-cut-based exact methods to improve the quality of decisions. We also establish sufficient conditions for a partial characterization of the optimal policy and integrate them into PFA/VFA. In an empirical study based on 720 benchmark instances, we conduct a competitive analysis using upper bounds with perfect information and show that PFA and VFA greatly outperform two alternative myopic approaches. Overall, PFA provides the best solutions, while VFA (which benefits from a two-stage stochastic optimization model) achieves a better tradeoff between solution quality and computing time.
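The Monte-Carlo look-ahead can be sketched compactly. The toy decision rule below compares dispatching now against waiting a short interval, using sampled future release counts; the routing value is a crude per-parcel travel-time stand-in rather than the branch-and-cut exact methods used in the paper, and the Poisson release model and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def served_if_dispatch(t, k, close=1.0, per_parcel=0.05):
    """Crude stand-in for the routing value: parcels delivered if the
    vehicle leaves at time t with k parcels on board."""
    return min(k, int(max(close - t, 0.0) / per_parcel))

def lookahead_decision(t_now, ready, rate=5.0, dt=0.1, n_scenarios=200):
    """Compare dispatching now with waiting dt, using Monte-Carlo
    samples of future parcel releases (Poisson toy model)."""
    now_val = served_if_dispatch(t_now, ready)
    wait_vals = [served_if_dispatch(t_now + dt,
                                    ready + rng.poisson(rate * dt))
                 for _ in range(n_scenarios)]
    return "dispatch" if now_val >= np.mean(wait_vals) else "wait"

print(lookahead_decision(t_now=0.5, ready=4))
```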
In this paper, a meshfree method based on deep neural networks (DNNs) is developed for solving two kinds of dynamic two-phase interface problems governed by different dynamic partial differential equations on either side of a stationary interface, with jump and high-contrast coefficients. The first type is the fluid-fluid (two-phase flow) interface problem modeled by Navier-Stokes equations with high-contrast physical parameters across the interface. The second belongs to fluid-structure interaction (FSI) problems, modeled by Navier-Stokes equations on one side of the interface and a structural equation on the other, where the fluid and the structure interact via kinematic and dynamic interface conditions. The DNN/meshfree method is developed for each of these two-phase interface problems by representing the PDE solutions with DNNs and reformulating the dynamic interface problem as a least-squares minimization problem over a set of space-time sampling points. Approximation error analyses are carried out for each kind of interface problem, and they reveal an intrinsic strategy for efficiently building a sampling-point training dataset that yields a more accurate DNN approximation. In addition, compared with traditional discretization approaches, the proposed DNN/meshfree method and its error analysis technique extend smoothly to many other dynamic interface problems with fixed interfaces. Numerical experiments illustrate the accuracy of the proposed DNN/meshfree method for the presented two-phase interface problems, and the theoretical results are validated to some extent through three numerical examples.
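A minimal sketch of the least-squares space-time formulation, using PyTorch and a deliberately simplified model: a single scalar parabolic equation u_t = (beta u_x)_x with a high-contrast piecewise-constant coefficient beta across a stationary interface at x = 0.5, instead of the paper's Navier-Stokes/FSI systems. The network, sampling sizes, and initial/boundary data are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1))

def beta(x):
    # high-contrast piecewise-constant coefficient across the interface
    return torch.where(x < 0.5, torch.ones_like(x), 10.0 * torch.ones_like(x))

def pde_residual(xt):
    # residual of u_t - (beta u_x)_x at space-time points xt = (x, t)
    xt = xt.clone().requires_grad_(True)
    u = net(xt)
    g = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = g[:, :1], g[:, 1:]
    flux_x = torch.autograd.grad((beta(xt[:, :1]) * u_x).sum(), xt,
                                 create_graph=True)[0][:, :1]
    return u_t - flux_x

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for it in range(2000):
    xt = torch.rand(256, 2)                                     # interior points
    x0 = torch.cat([torch.rand(64, 1), torch.zeros(64, 1)], 1)  # t = 0
    xb = torch.cat([torch.randint(0, 2, (64, 1)).float(),
                    torch.rand(64, 1)], 1)                      # x = 0 or x = 1
    loss = (pde_residual(xt) ** 2).mean() \
         + ((net(x0) - torch.sin(torch.pi * x0[:, :1])) ** 2).mean() \
         + (net(xb) ** 2).mean()                  # least-squares space-time loss
    opt.zero_grad(); loss.backward(); opt.step()
print("final loss:", float(loss))
```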
This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative way of constructing estimators that greatly reduces this error. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the classical and proposed estimators in their power to estimate the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly large when the causal effects are correctly accounted for.
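The abstract does not spell out the estimator construction, so the snippet below illustrates the general point with a standard inverse-propensity-weighting adjustment on synthetic data: a naive difference in mean repayment between credit decisions is biased by a confounder (borrower risk), while weighting by estimated decision propensities recovers the true effect. All data, models, and effect sizes are simulated assumptions, not the paper's method or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 20_000
risk = rng.normal(size=n)                           # confounder: borrower risk
# the lender grants a higher limit more often to low-risk borrowers
limit = (rng.random(n) < 1.0 / (1.0 + np.exp(risk))).astype(float)
repay = 2.0 + 1.0 * limit - 1.5 * risk + rng.normal(size=n)  # true effect = 1.0

# naive difference in means: biased because risk drives both variables
naive = repay[limit == 1].mean() - repay[limit == 0].mean()

# inverse propensity weighting with an estimated decision model
ps = LogisticRegression().fit(risk.reshape(-1, 1), limit) \
                         .predict_proba(risk.reshape(-1, 1))[:, 1]
ipw = np.mean(limit * repay / ps) - np.mean((1 - limit) * repay / (1 - ps))

print(f"naive: {naive:.2f}  ipw: {ipw:.2f}  truth: 1.00")
```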
Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. Despite this achievement, however, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical barrier for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academic and industrial areas. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure and discusses their importance and methods for defining their value ranges. Then, the review focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. Next, it surveys major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility for user-designed modules. The paper concludes with open problems that arise when HPO is applied to deep learning, a comparison of the optimization algorithms, and prominent approaches to model evaluation with limited computational resources.
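As a minimal example of the search-algorithm side of such a review, here is a random-search HPO sketch: scale-sensitive hyper-parameters such as the learning rate are sampled log-uniformly over their value ranges, and the expensive training-plus-validation step is replaced by a synthetic stand-in score. All ranges and the scoring function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_config():
    """Draw one configuration; log-uniform for scale-sensitive parameters."""
    return {
        "lr": 10 ** rng.uniform(-5, -1),          # learning rate: log-uniform
        "batch_size": int(2 ** rng.integers(4, 9)),
        "dropout": float(rng.uniform(0.0, 0.5)),
    }

def validation_score(cfg):
    """Stand-in for training + evaluation; peaks near lr=1e-3, dropout=0.2."""
    return -(np.log10(cfg["lr"]) + 3) ** 2 \
           - 5 * (cfg["dropout"] - 0.2) ** 2 \
           + rng.normal(scale=0.1)                # noisy validation metric

# random search: evaluate a budget of configurations, keep the best
best = max((sample_config() for _ in range(50)), key=validation_score)
print(best)
```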
Image segmentation remains an open problem, especially when the intensities of the objects of interest overlap due to intensity inhomogeneity (also known as a bias field). To segment images with intensity inhomogeneities, a bias-correction-embedded level set model is proposed in which Inhomogeneities are Estimated by Orthogonal Primary Functions (IEOPF). In the proposed model, the smoothly varying bias is estimated as a linear combination of a given set of orthogonal primary functions. An inhomogeneous intensity clustering energy is then defined, and membership functions of the clusters described by the level set function are introduced to rewrite the energy as the data term of the proposed model. As in popular level set methods, a regularization term and an arc-length term are also included to regularize and smooth the level set function, respectively. The proposed model is then extended to multichannel and multiphase settings to segment colour images and images with multiple objects, respectively. It has been extensively tested on synthetic and real images that are widely used in the literature, as well as on the public BrainWeb and IBSR datasets. Experimental results and comparison with state-of-the-art methods demonstrate the advantages of the proposed model in terms of bias correction and segmentation accuracy.
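The bias-estimation step can be sketched in one dimension: the smooth multiplicative bias is expanded in orthogonal Legendre polynomials (standing in for the model's orthogonal primary functions), and its coefficients are obtained by least squares given the current class memberships. The full model alternates such a step with updates of the cluster means and the level set function; the signal, bias field, and crude thresholded segmentation below are illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

rng = np.random.default_rng(8)

# 1-D toy image: piecewise-constant objects times a smooth multiplicative bias
x = np.linspace(-1, 1, 400)
true = np.where(x < 0.2, 1.0, 2.0)             # two "objects"
bias = 1.0 + 0.3 * x + 0.2 * x ** 2            # smooth bias field
obs = true * bias + 0.02 * rng.normal(size=x.size)

# orthogonal primary functions: Legendre polynomials up to degree 3
B = np.stack([Legendre.basis(k)(x) for k in range(4)], axis=1)

# with a current (crude) segmentation fixing the class means, the bias
# coefficients solve a linear least-squares problem: obs ~ (B w) * mean
member = obs > obs.mean()
mean = np.where(member, 2.0, 1.0)[:, None]     # current class mean per pixel
w, *_ = np.linalg.lstsq(B * mean, obs, rcond=None)
print("estimated bias coefficients:", w.round(3))
```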