This paper reports a CPU-level real-time stereo matching method for surgical images (10 Hz on 640×480 images using a single core of an Intel i5-9400). The proposed method builds on the fast "dense inverse searching" algorithm, which estimates the disparity of the stereo images. Overlapping image patches (arbitrary square image segments) from the images at different scales are aligned under the photometric consistency assumption. We propose a Bayesian framework to evaluate the probability of the optimized patch disparity at different scales. Moreover, we introduce a spatial Gaussian mixture probability distribution to model the pixel-wise probability within the patch. In-vivo and synthetic experiments show that our method can handle ambiguities resulting from textureless surfaces and the photometric inconsistency caused by non-Lambertian reflectance. Our Bayesian method correctly balances the patch probabilities across scales. Experiments indicate that the estimated depth has higher accuracy and fewer outliers than the baseline methods in the surgical scenario.
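To make the multi-scale weighting concrete, the following is a minimal Python sketch, not the authors' implementation: per-patch disparity estimates from several pyramid scales are fused with posterior weights derived from their photometric residuals (assuming Gaussian residual noise and a flat prior over scales), and a spatial Gaussian window supplies pixel-wise confidence for blending overlapping patches.

    import numpy as np

    def gaussian_window(size, sigma):
        # 2-D Gaussian weights centred on the patch, used as pixel-wise confidence
        # when overlapping patch estimates are blended into a disparity map.
        ax = np.arange(size) - (size - 1) / 2.0
        g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2.0 * sigma ** 2))
        return g / g.sum()

    def fuse_patch_disparity(disparities, residuals, size=8, sigma=3.0):
        # disparities: per-scale disparity estimates for one patch
        # residuals:   per-scale photometric residuals; a lower residual implies a
        #              higher (unnormalised) likelihood under Gaussian noise.
        disparities = np.asarray(disparities, dtype=float)
        likelihood = np.exp(-0.5 * np.asarray(residuals, dtype=float) ** 2)
        posterior = likelihood / likelihood.sum()      # flat prior over scales (assumed)
        fused = float(np.dot(posterior, disparities))  # posterior-weighted disparity
        return fused, gaussian_window(size, sigma)     # value plus pixel-wise weights

    d, w = fuse_patch_disparity([10.2, 9.8, 11.0], [0.3, 0.5, 1.2])
    print(round(d, 2), w.shape)  # 10.22 (8, 8)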
Deep convolutional neural networks (CNNs) with a large number of parameters require intensive computational resources and are therefore hard to deploy on resource-constrained platforms. Decomposition-based methods have consequently been used to compress CNNs in recent years. However, since the compression factor and performance are negatively correlated, state-of-the-art works either suffer from severe performance degradation or achieve relatively low compression factors. To overcome this problem, we propose to compress CNNs and alleviate performance degradation via joint matrix decomposition, in contrast to existing works that compress each layer separately. The idea is inspired by the fact that CNNs contain many repeated modules. By projecting weights with the same structures into the same subspace, networks can be jointly compressed at larger ranks. In particular, three joint matrix decomposition schemes are developed, and the corresponding optimization approaches based on Singular Value Decomposition are proposed. Extensive experiments are conducted across three challenging compact CNNs on different benchmark data sets to demonstrate the superior performance of the proposed algorithms. As a result, our methods compress ResNet-34 by 22× with less accuracy degradation than several state-of-the-art methods.
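As a rough illustration of the joint-decomposition idea (a minimal sketch under simplifying assumptions, not the paper's exact schemes), two same-shaped weight matrices can be stacked and factorised with a single truncated SVD so that they share a left factor:

    import numpy as np

    def joint_truncated_svd(weights, rank):
        # weights: list of (m, n) matrices with identical shapes.
        # Stack them column-wise, truncate one SVD, and share the left factor U.
        stacked = np.concatenate(weights, axis=1)            # (m, n * k)
        U, s, Vt = np.linalg.svd(stacked, full_matrices=False)
        U_r = U[:, :rank] * s[:rank]                         # shared (m, rank) factor
        n = weights[0].shape[1]
        return U_r, [Vt[:rank, i * n:(i + 1) * n] for i in range(len(weights))]

    rng = np.random.default_rng(0)
    B = rng.standard_normal((256, 32))                       # common subspace (toy example)
    W1 = B @ rng.standard_normal((32, 256))
    W2 = B @ rng.standard_normal((32, 256))
    U, (V1, V2) = joint_truncated_svd([W1, W2], rank=32)
    print(np.linalg.norm(W1 - U @ V1) / np.linalg.norm(W1))  # ~0: W1 is recovered as U @ V1

Because both toy matrices live in the same 32-dimensional column space, the shared factor captures them jointly at a rank no single-layer budget of the same total size could afford.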
Deep learning (DL) stereo matching methods have gained great attention for remote sensing satellite datasets. However, most existing studies base their assessments on only a single or a few stereo images, lacking a systematic evaluation of how robust DL methods are on satellite stereo images with varying radiometric and geometric configurations. This paper evaluates four DL stereo matching methods on hundreds of multi-date, multi-site satellite stereo pairs with varying geometric configurations, against the traditional, well-practiced Census-SGM (semi-global matching), to comprehensively understand their accuracy, robustness, generalization capability, and practical potential. The DL methods include a learning-based cost metric computed by convolutional neural networks (MC-CNN) followed by SGM, and three end-to-end (E2E) learning models: Geometry and Context Network (GCNet), Pyramid Stereo Matching Network (PSMNet), and LEAStereo. Our experiments show that the E2E algorithms can achieve the highest geometric accuracies but may not generalize well to unseen data. The learning-based cost metric and Census-SGM are comparatively robust and consistently achieve acceptable results. All DL algorithms are robust to the geometric configuration of the stereo pairs and are less sensitive to it than Census-SGM, while the learning-based cost metric can generalize to satellite images even when trained on different datasets (airborne or ground-view).
A biomechanical model often requires parameter estimation and selection in a known but complicated nonlinear function. Motivated by the observation that data from a head-neck position tracking system, one such biomechanical model, exhibit multiplicative time-dependent errors, we develop a modified penalized weighted least squares estimator. The proposed method can also be applied to models with additive time-dependent errors with non-zero mean. Asymptotic properties of the proposed estimator are investigated under mild conditions on the weight matrix and the error process. A simulation study demonstrates that the proposed estimator performs well in both parameter estimation and selection in the presence of time-dependent errors. An analysis of head-neck position tracking data and a comparison with an existing method show that the proposed method performs better in terms of the variance accounted for (VAF).
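As a schematic example of a penalized weighted least-squares fit with multiplicative errors (the response function, weight matrix, and penalty below are hypothetical placeholders, not those of the paper), one could work on the log scale and add a sparsity-inducing penalty for selection:

    import numpy as np
    from scipy.optimize import minimize

    def f(t, theta):
        # Hypothetical known nonlinear response; the paper's model differs.
        a, b, c = theta
        return a * np.exp(-b * t) + c

    def pwls_objective(theta, t, y, W, lam):
        # Multiplicative errors y = f * e handled on the log scale,
        # with a weight matrix W and an L1 penalty for parameter selection.
        r = np.log(y) - np.log(np.clip(f(t, theta), 1e-8, None))
        return r @ W @ r + lam * np.sum(np.abs(theta))

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 5.0, 100)
    y = f(t, [2.0, 0.8, 1.0]) * np.exp(0.05 * rng.standard_normal(100))
    W = np.eye(len(t))                      # placeholder; could encode error correlation
    fit = minimize(pwls_objective, x0=[1.0, 1.0, 1.0],
                   args=(t, y, W, 0.01), method="Nelder-Mead")
    print(fit.x)                            # roughly [2.0, 0.8, 1.0]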
In this paper, we consider recent progress in estimating the average treatment effect when extreme inverse probability weights are present, focusing on methods that account for a possible violation of the positivity assumption. These methods aim to estimate the treatment effect on the subpopulation of patients for whom there is clinical equipoise. We propose a systematic approach to determine the related causal estimands and develop new insights into the properties of the weights targeting such a subpopulation. We then examine the roles of overlap weights, matching weights, Shannon's entropy weights, and beta weights. This helps us characterize and compare their underlying estimators, analytically and via simulations, in terms of accuracy, precision, and root mean squared error. Moreover, we study the asymptotic behavior of their augmented estimators (which mimic doubly robust estimators), which yield improved estimation when either the propensity score model or the outcome regression model is correctly specified. Based on the analytical and simulation results, we conclude that, overall, overlap weights are preferable to matching weights, especially when there are moderate or extreme violations of the positivity assumption. Finally, we illustrate the methods using a real data example marked by extreme inverse probability weights.
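For reference, in the usual balancing-weights notation with propensity score $e(x)$ (a standard textbook presentation, not quoted from the paper), each method weights treated and control subjects by $w_1(x)=h(x)/e(x)$ and $w_0(x)=h(x)/\{1-e(x)\}$ with tilting functions
\[
h_{\text{overlap}}(x)=e(x)\{1-e(x)\},\qquad
h_{\text{matching}}(x)=\min\{e(x),\,1-e(x)\},
\]
\[
h_{\text{entropy}}(x)=-e(x)\log e(x)-\{1-e(x)\}\log\{1-e(x)\},\qquad
h_{\text{beta}}(x)=e(x)^{\nu_1-1}\{1-e(x)\}^{\nu_2-1},
\]
each targeting the weighted estimand $\tau_h = E\{h(X)(\mu_1(X)-\mu_0(X))\}/E\{h(X)\}$, i.e. a treatment effect tilted toward the region of clinical equipoise.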
We consider a high-dimensional random constrained optimization problem in which a set of binary variables is subject to a linear system of equations. The cost function is a simple linear cost measuring the Hamming distance from a reference configuration. Despite its apparent simplicity, this problem exhibits a rich phenomenology. We show that different situations arise depending on the random ensemble of linear systems. When each variable is involved in at most two linear constraints, the problem can be partially solved analytically; in particular, we show that, upon convergence, the zero-temperature limit of the cavity equations returns the optimal solution. We then study the geometrical properties of more general random ensembles. In particular, we observe a range of constraint densities at which the system enters a glassy phase where the cost function has many minima. Interestingly, the algorithmic performance is only sensitive to another phase transition affecting the structure of configurations allowed by the linear constraints. We also extend our results to variables belonging to $\text{GF}(q)$, the Galois field of order $q$. We show that increasing the value of $q$ makes it possible to reach a better optimum, as confirmed by the replica symmetric cavity method predictions.
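A toy brute-force formulation (GF(2) only, tiny sizes, purely to pin down the problem definition; the cavity method is what scales to large instances) would be:

    import itertools
    import numpy as np

    def min_hamming_solution(A, b, x_ref):
        # Minimise the Hamming distance to x_ref subject to A x = b over GF(2).
        n = A.shape[1]
        best, best_cost = None, None
        for bits in itertools.product([0, 1], repeat=n):
            x = np.array(bits)
            if np.all((A @ x) % 2 == b):                 # x satisfies A x = b (mod 2)
                cost = int(np.sum(x != x_ref))           # Hamming distance to x_ref
                if best_cost is None or cost < best_cost:
                    best, best_cost = x, cost
        return best, best_cost

    A = np.array([[1, 1, 0, 0], [0, 1, 1, 1]])
    b = np.array([1, 0])
    print(min_hamming_solution(A, b, x_ref=np.zeros(4, dtype=int)))  # ([1 0 0 0], 1)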
This paper presents a computationally feasible method to compute rigorous bounds on the interval generalisation of regression analysis that accounts for epistemic uncertainty in the output variables. The new iterative method uses machine learning algorithms to fit an imprecise regression model to data that consist of intervals rather than point values. It is based on a single-layer interval neural network that can be trained to produce an interval prediction. The method seeks the parameters of the optimal model that minimises the mean squared error between the actual and predicted interval values of the dependent variable, using first-order gradient-based optimisation and interval-analysis computations to model the measurement imprecision of the data. An extension to multi-layer neural networks is also presented. We consider the explanatory variables to be precise point values, whereas the measured dependent values are characterised by interval bounds without any probabilistic information. The proposed iterative method estimates the lower and upper bounds of the expectation region, which is an envelope of all possible precise regression lines obtained by ordinary regression analysis based on any configuration of real-valued points taken from the respective y-intervals together with their x-values.
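The sketch below illustrates the flavour of such an iterative fit under an assumed parameterisation (a centre line plus a non-negative half-width, trained by plain gradient descent on the endpoint mean squared error); it is not the authors' network:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, 50)
    y_lo = 2.0 * x + 1.0 - rng.uniform(0.5, 1.5, 50)   # synthetic interval-valued outputs
    y_hi = 2.0 * x + 1.0 + rng.uniform(0.5, 1.5, 50)

    w = np.zeros(2)               # slope/intercept of the interval centre
    r = np.array([0.0, 0.5])      # slope/intercept of the (non-negative) half-width
    lr = 1e-3
    for _ in range(20000):
        centre = w[0] * x + w[1]
        half = np.abs(r[0] * x + r[1])                 # keep the half-width >= 0
        e_lo, e_hi = (centre - half) - y_lo, (centre + half) - y_hi
        # gradients of 0.5 * mean(e_lo^2 + e_hi^2) with respect to w and r
        gw = np.array([np.mean((e_lo + e_hi) * x), np.mean(e_lo + e_hi)])
        sgn = np.sign(r[0] * x + r[1])
        gr = np.array([np.mean((e_hi - e_lo) * sgn * x), np.mean((e_hi - e_lo) * sgn)])
        w -= lr * gw
        r -= lr * gr
    print(w, r)   # approximately: centre [2.0, 1.0], half-width [0.0, ~1.0]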
The goal of Bayesian deep learning is to provide uncertainty quantification via the posterior distribution. However, exact inference over the weight space is computationally intractable due to the ultra-high dimensionality of the neural network. Variational inference (VI) is a promising approach, but a naive application in weight space does not scale well and often underperforms in predictive accuracy. In this paper, we propose a new adaptive variational Bayesian algorithm to train neural networks in weight space that achieves high predictive accuracy. By showing an equivalence to Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) with a preconditioning matrix, we then propose an MCMC-within-EM algorithm that incorporates a spike-and-slab prior to capture the sparsity of the neural network. The EM-MCMC algorithm allows us to perform optimization and model pruning in one shot. We evaluate our methods on the CIFAR-10, CIFAR-100, and ImageNet datasets, and demonstrate that our dense model can reach state-of-the-art performance and our sparse model performs very well compared with previously proposed pruning schemes.
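To give a flavour of one ingredient only, here is a simplified, preconditioned SGHMC-style update on a toy Gaussian target (the preconditioner, step size, friction, and target below are illustrative assumptions, not the paper's algorithm, and the spike-and-slab EM step is omitted):

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.diag([1.0, 100.0])                   # toy posterior precision (badly scaled)
    precond = 1.0 / np.sqrt(np.diag(A))         # diagonal preconditioner (assumed form)

    theta, v = np.array([3.0, 3.0]), np.zeros(2)
    eta, alpha = 1e-3, 0.1                      # step size and friction (illustrative)
    samples = []
    for i in range(20000):
        grad = A @ theta                        # stand-in for a stochastic gradient
        noise = np.sqrt(2.0 * alpha * eta * precond) * rng.standard_normal(2)
        v = (1.0 - alpha) * v - eta * precond * grad + noise
        theta = theta + v
        if i > 2000:                            # discard burn-in
            samples.append(theta.copy())
    print(np.std(samples, axis=0))              # roughly [1.0, 0.1] for this toy target

Per coordinate, the diagonal preconditioner simply rescales the effective step size and the injected noise together, which is why the badly scaled direction still mixes.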
Stereo matching is a fundamental problem in computer vision. Despite recent progress driven by deep learning, improving robustness is essential when deploying stereo matching models in real-world applications. In contrast to the common practice of developing an elaborate model to achieve robustness, we argue that collecting multiple available datasets for training is a cheaper way to increase generalization ability. Specifically, this report presents an improved RaftStereo trained with a mixture of seven public datasets for the Robust Vision Challenge (denoted iRaftStereo_RVC). When evaluated on the training sets of Middlebury, KITTI-2015, and ETH3D, the model outperforms its counterparts trained with only one dataset, such as the popular Sceneflow. After fine-tuning the pre-trained model on the three datasets of the challenge, it ranks 2nd on the stereo leaderboard, demonstrating the benefits of mixed-dataset pre-training.
We study the problem of online learning in two-sided non-stationary matching markets, where the objective is to converge to a stable match. In particular, we consider the setting where one side of the market, the arms, has a fixed and known set of preferences over the other side, the players. While this problem has been studied when the players have fixed but unknown preferences, in this work we study how to learn when the preferences of the players are time-varying. We propose the {\it Restart Competing Bandits (RCB)} algorithm, which combines a simple {\it restart strategy} to handle the non-stationarity with the {\it competing bandits} algorithm \citep{liu2020competing} designed for the stationary case. We show that, with the proposed algorithm, each player incurs a uniform sub-linear regret of {$\widetilde{\mathcal{O}}(L^{1/2}_T T^{1/2})$}, where $L_T$ is the number of changes in the underlying preferences of the agents. We also discuss extensions of this algorithm to the case where the number of changes need not be known a priori.
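The restart idea can be sketched generically as follows (the inner learner here is plain UCB on stochastic rewards, standing in for the competing-bandits subroutine, and the block length is an illustrative choice rather than the paper's tuning):

    import numpy as np

    def ucb_pull(counts, means, t):
        # Standard UCB index; unpulled arms are tried first.
        bonus = np.sqrt(2 * np.log(max(t, 2)) / np.maximum(counts, 1))
        return int(np.argmax(means + np.where(counts == 0, np.inf, bonus)))

    def restart_bandit(reward_fn, horizon, block_len, n_arms, seed=0):
        rng = np.random.default_rng(seed)
        total = 0.0
        for start in range(0, horizon, block_len):     # restart at every block boundary
            counts, sums = np.zeros(n_arms), np.zeros(n_arms)
            for t in range(start, min(start + block_len, horizon)):
                means = sums / np.maximum(counts, 1)
                arm = ucb_pull(counts, means, t - start + 1)
                r = reward_fn(arm, t, rng)
                counts[arm] += 1
                sums[arm] += r
                total += r
        return total

    # Non-stationary toy environment: the best arm switches halfway through.
    reward = lambda a, t, rng: rng.normal(0.9 if (a == 0) == (t < 5000) else 0.1, 1.0)
    print(restart_bandit(reward, horizon=10000, block_len=1000, n_arms=2))

Re-initialising at each block boundary discards stale estimates, which is what keeps the regret sub-linear when the underlying preferences drift.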
We consider inverse problems in Hilbert spaces under correlated Gaussian noise and use a Bayesian approach to find their regularised solution. We focus on mildly ill-posed inverse problems with the noise being the generalised derivative of fractional Brownian motion, using a novel wavelet-based approach we call vaguelette-vaguelette. It allows us to apply sequence-space methods without assuming that all operators are simultaneously diagonalisable; the results are proved for more general bases and covariance operators. Our primary aim is to study the posterior contraction rate in such inverse problems over Sobolev classes of true functions, comparing it to the derived minimax rate. Secondly, we study the effect on the posterior contraction rate of plugging in a consistent estimator of the variances in sequence space, for instance when there are repeated observations. This result is also applied to the problem where the forward operator is observed with error. Thirdly, we show that an adaptive empirical Bayes posterior distribution contracts at the optimal rate, in the minimax sense, under a condition on prior smoothness, with a plugged-in maximum marginal likelihood estimator of the prior scale. These theoretical results are illustrated on simulated data.