In this paper, we propose an approach to address the problem of 3D reconstruction of scenes from a single image captured by a light-field camera equipped with a rolling shutter sensor. Our method leverages the 3D information cues present in the light-field and the motion information provided by the rolling shutter effect. We present a generic model for the imaging process of this sensor and a two-stage algorithm that minimizes the re-projection error while considering the position and motion of the camera in a motion-shape bundle adjustment estimation strategy. Thereby, we provide an instantaneous 3D shape-and-pose-and-velocity sensing paradigm. To the best of our knowledge, this is the first study to leverage this type of sensor for this purpose. We also present a new benchmark dataset composed of different light-fields showing rolling shutter effects, which can serve as a common basis for evaluation and for tracking progress in the field. We demonstrate the effectiveness and advantages of our approach through several experiments conducted for different scenes and types of motions. The source code and dataset are publicly available at: //github.com/ICB-Vision-AI/RSLF
In this paper, we explore a practical system setting in which a rack-aware storage system consists of racks each containing a few parity checks, referred to as a rack-aware system with locality. To minimize cross-rack bandwidth in this system, we organize the repair sets of locally repairable codes into racks and investigate the problem of repairing erasures in locally repairable codes beyond the code locality. We devise two repair schemes that reduce the repair bandwidth of Tamo-Barg codes under the rack-aware model by setting each repair set as a rack. We then establish a cut-set bound for locally repairable codes under the rack-aware model with locality. Using this bound, we show that our second repair scheme is optimal. Furthermore, we consider the partial-repair problem for locally repairable codes under the rack-aware model with locality, and introduce both repair schemes and bounds for this scenario.
In this paper, we identify the criteria for selecting the minimal and most efficient covariate adjustment sets for the regression calibration method developed by Carroll, Ruppert and Stefanski (CRS, 1992), which is used to correct bias due to continuous exposure measurement error. We utilize directed acyclic graphs to illustrate how subject matter knowledge can aid in the selection of such adjustment sets. Valid measurement error correction requires collecting data on any (1) common causes of the true exposure and the outcome and (2) common causes of the measurement error and the outcome, in both the main study and the validation study. For the CRS regression calibration method to be valid, researchers need to minimally adjust for covariate set (1) in both the measurement error model (MEM) and the outcome model, and adjust for covariate set (2) at least in the MEM. In practice, we recommend including the minimal covariate adjustment set in both the MEM and the outcome model. In contrast with the regression calibration method developed by Rosner, Spiegelman and Willett, under the CRS method it is valid, and more efficient, to adjust for correlates of the true exposure or of the measurement error that are not risk factors in the MEM only. We applied the proposed covariate selection approach to the Health Professionals Follow-up Study, examining the effect of fiber intake on cardiovascular disease incidence. In this study, we demonstrated potential issues with a data-driven approach to building the MEM that is agnostic to the structural assumptions. We extend the originally proposed estimators to settings where effect modification by a covariate is allowed. Finally, we caution against using the regression calibration method to calibrate true nutrition intake using biomarkers.
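The two-stage regression calibration workflow described above can be sketched as follows; the variable names, the linear-model setting, and the single covariate are illustrative assumptions, not the study's actual implementation. The sketch mirrors the recommendation to adjust for the covariate in both the MEM and the outcome model.

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients for y ~ X (X already contains an intercept column)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def regression_calibration(w_v, z_v, x_v, w_m, z_m, y_m):
    """Two-stage CRS-style regression calibration (linear-model sketch).

    Validation study: true exposure x_v, mismeasured exposure w_v, covariate z_v.
    Main study: mismeasured exposure w_m, covariate z_m, outcome y_m.
    """
    # Stage 1 (MEM): fit E[X | W, Z] in the validation study, adjusting for Z
    mem = ols(np.column_stack([np.ones_like(w_v), w_v, z_v]), x_v)
    # Stage 2: impute the unobserved true exposure in the main study
    x_hat = np.column_stack([np.ones_like(w_m), w_m, z_m]) @ mem
    # Outcome model: Y ~ X_hat + Z, adjusting for Z here as well
    return ols(np.column_stack([np.ones_like(x_hat), x_hat, z_m]), y_m)
```

On simulated data with classical measurement error, the naive slope is attenuated while the calibrated slope recovers the true effect up to sampling error.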
In this paper, we introduce an approach for extracting trajectories from a camera sensor in GPS-denied environments, leveraging visual odometry. The system takes video footage captured by a forward-facing camera mounted on a vehicle as input, and its output is a chain code representing the camera's trajectory. The proposed methodology involves several key steps. First, we employ phase correlation between consecutive frames of the video to estimate the inter-frame shift. Second, we introduce a novel chain code method, termed the "dynamic chain code," which is based on the x-shift values derived from the phase correlation. The third step determines directional changes (forward, left, right) by thresholding these values and extracting the corresponding chain code, which is then stored in a buffer for further processing. Notably, our system outperforms traditional methods reliant on spatial features, exhibiting greater speed and robustness in noisy environments. Importantly, our approach operates without external camera calibration information. Moreover, by incorporating visual odometry, our system improves its accuracy in estimating camera motion, providing a more comprehensive understanding of trajectory dynamics. Finally, the system culminates in the visualization of the normalized camera motion trajectory.
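As an illustration of the phase-correlation and thresholding steps, the sketch below recovers the inter-frame shift from the normalized cross-power spectrum and maps the x-shift to a three-symbol code; the threshold value and the left/right sign convention are assumptions, not the paper's actual settings.

```python
import numpy as np

def phase_correlation_shift(frame_a, frame_b):
    """Estimate the (dx, dy) translation of frame_b relative to frame_a."""
    fa = np.fft.fft2(frame_a)
    fb = np.fft.fft2(frame_b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12           # keep only the phase information
    corr = np.real(np.fft.ifft2(cross))      # impulse at the translation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past the midpoint correspond to negative (wrapped) shifts
    dy, dx = (int(p) if p <= s // 2 else int(p) - s
              for p, s in zip(peak, corr.shape))
    return dx, dy

def direction_symbol(x_shift, threshold=2.0):
    """Map an x-shift to a 3-symbol code (threshold and signs are assumptions)."""
    if x_shift > threshold:
        return 'L'
    if x_shift < -threshold:
        return 'R'
    return 'F'
```

Because the estimate comes from the spectrum's phase rather than spatial features, it stays cheap and is comparatively robust to additive noise.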
In this paper, we propose a new set of midpoint-based high-order discretization schemes for computing the straight and mixed nonlinear second derivative terms that appear in the compressible Navier-Stokes equations. First, we detail a set of conventional fourth- and sixth-order baseline schemes that utilize central midpoint derivatives for the calculation of second derivative terms. To enhance the spectral properties of the baseline schemes, an optimization procedure is proposed that adjusts the order and truncation error of the midpoint derivative approximation while constraining the overall stencil width and scheme order. A new filter penalty term is introduced into the midpoint derivative calculation to help achieve high-wavenumber accuracy and high-frequency damping in the mixed derivative discretization. Fourier analysis performed on both the straight and mixed second derivative terms shows high spectral efficiency and minimal numerical viscosity with no odd-even decoupling effect. Numerical validation of the resulting optimized schemes is performed through various benchmark test cases assessing their theoretical order of accuracy and solution resolution. The results highlight that the present optimized schemes efficiently utilize the inherent viscosity of the governing equations to achieve improved simulation stability - a feature attributed to their superior spectral resolution in the high wavenumber range. The method is also tested and applied to non-uniform structured meshes in curvilinear coordinates, employing a supersonic impinging jet test case.
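For reference, a conventional node-based fourth-order central approximation of a straight second derivative on a periodic uniform grid can be sketched as below; this is a generic baseline stencil for illustration only, not the paper's optimized midpoint-based scheme.

```python
import numpy as np

def second_derivative_4th(f, h):
    """Fourth-order central approximation of f'' on a periodic uniform grid:
    f''_i ~ (-f_{i-2} + 16 f_{i-1} - 30 f_i + 16 f_{i+1} - f_{i+2}) / (12 h^2).
    """
    return (-np.roll(f, 2) + 16.0 * np.roll(f, 1) - 30.0 * f
            + 16.0 * np.roll(f, -1) - np.roll(f, -2)) / (12.0 * h ** 2)
```

Halving the grid spacing should shrink the error by roughly a factor of 16, confirming fourth-order accuracy.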
In this paper, we address the problem of designing an experiment with both discrete and continuous factors under fairly general parametric statistical models. We propose a new algorithm, named ForLion, to search for optimal designs under the D-criterion. The algorithm performs an exhaustive search in a design space with mixed factors while keeping high efficiency and reducing the number of distinct experimental settings. Its optimality is guaranteed by the general equivalence theorem. We demonstrate its superiority over state-of-the-art design algorithms using real-life experiments under multinomial logistic models (MLM) and generalized linear models (GLM). Our simulation studies show that the ForLion algorithm can reduce the number of experimental settings by 25% or improve the relative efficiency of the designs by 17.5% on average. Our algorithm can help experimenters reduce the time cost, the usage of experimental devices, and thus the total cost of their experiments while preserving high design efficiency.
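To make the D-criterion concrete, the sketch below evaluates det(M)^(1/p) for an approximate design on a finite candidate set under a linear model. ForLion itself targets GLM/MLM information matrices and mixed discrete/continuous factor spaces, so this is only the simplest special case for illustration.

```python
import numpy as np

def d_criterion(X, w):
    """det(M)^(1/p) for the information matrix M = X^T diag(w) X of an approximate design.

    X : (k, p) model matrix for the k candidate design points
    w : (k,)  nonnegative design weights summing to one
    """
    M = X.T @ (w[:, None] * X)
    return np.linalg.det(M) ** (1.0 / X.shape[1])
```

For quadratic regression on [-1, 1] with candidate points {-1, 0, 1}, the D-optimal design places equal weight on all three points, so the uniform weighting scores higher than any skewed alternative.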
In this paper, we present a novel person re-identification (PRe-ID) system based on tensor feature representation and multilinear subspace learning. Our approach utilizes pretrained CNNs for high-level feature extraction, along with Local Maximal Occurrence (LOMO) and Gaussian of Gaussian (GOG) descriptors. Additionally, the Tensor Cross-View Quadratic Discriminant Analysis (TXQDA) algorithm is used for multilinear subspace learning, which models the data in a tensor framework to enhance discriminative capabilities. A similarity measure based on the Mahalanobis distance is used for matching between training and test pedestrian images. Experimental evaluations on the VIPeR and PRID450s datasets demonstrate the effectiveness of our method.
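The matching step can be illustrated with a plain Mahalanobis-distance ranking; in the actual system the metric matrix M would come from TXQDA training, whereas here it is simply an input to the sketch.

```python
import numpy as np

def mahalanobis_dist(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y) under a learned metric M."""
    d = x - y
    return float(d @ M @ d)

def rank_gallery(query, gallery, M):
    """Indices of gallery features sorted by increasing distance to the query."""
    return np.argsort([mahalanobis_dist(query, g, M) for g in gallery])
```

With M set to the identity this reduces to Euclidean ranking; a learned M reweights and correlates feature dimensions to improve the ranking.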
In this paper, we derive a PAC-Bayes bound on the generalisation gap, in a supervised time-series setting, for a special class of discrete-time non-linear dynamical systems. This class includes stable recurrent neural networks (RNNs), and the motivation for this work was its application to RNNs. In order to achieve these results, we impose stability constraints on the allowed models, where stability is understood in the sense of dynamical systems. For RNNs, these stability conditions can be expressed in terms of conditions on the weights. We assume the processes involved are essentially bounded and the loss functions are Lipschitz. The proposed bound on the generalisation gap depends on the mixing coefficient of the data distribution and the essential supremum of the data. Furthermore, the bound converges to zero as the dataset size increases. In this paper, we 1) formalize the learning problem, 2) derive a PAC-Bayesian error bound for such systems, 3) discuss various consequences of this error bound, and 4) show an illustrative example, with a discussion on computing the proposed bound. Unlike other available bounds, the derived bound holds for non-i.i.d. data (time series) and does not grow with the number of steps of the RNN.
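For context, the classical PAC-Bayesian bound for i.i.d. data and losses bounded in [0, 1] (McAllester's bound in Maurer's form) states that, with probability at least 1 - δ over a sample of size n, simultaneously for all posteriors Q over parameters θ with prior P:

```latex
\mathbb{E}_{\theta\sim Q}\!\left[\mathcal{L}(\theta)\right]
\;\le\;
\mathbb{E}_{\theta\sim Q}\!\left[\widehat{\mathcal{L}}_n(\theta)\right]
+\sqrt{\frac{\mathrm{KL}(Q\,\|\,P)+\ln\!\left(2\sqrt{n}/\delta\right)}{2n}}
```

The bound derived in the paper replaces the i.i.d. assumption with mixing conditions on the data distribution, which is why the mixing coefficient and the essential supremum of the data appear in place of the boundedness-in-[0, 1] assumption; like the classical bound, it vanishes as n grows.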
In this paper, we propose a novel Feature Decomposition and Reconstruction Learning (FDRL) method for effective facial expression recognition. We view the expression information as the combination of the shared information (expression similarities) across different expressions and the unique information (expression-specific variations) for each expression. More specifically, FDRL mainly consists of two crucial networks: a Feature Decomposition Network (FDN) and a Feature Reconstruction Network (FRN). In particular, FDN first decomposes the basic features extracted from a backbone network into a set of facial action-aware latent features to model expression similarities. Then, FRN captures the intra-feature and inter-feature relationships of the latent features to characterize expression-specific variations, and reconstructs the expression feature. To this end, two modules, an intra-feature relation modeling module and an inter-feature relation modeling module, are developed in FRN. Experimental results on both in-the-lab databases (including CK+, MMI, and Oulu-CASIA) and in-the-wild databases (including RAF-DB and SFEW) show that the proposed FDRL method consistently achieves higher recognition accuracy than several state-of-the-art methods. This clearly highlights the benefit of feature decomposition and reconstruction for classifying expressions.
In this paper, we propose to apply a meta-learning approach to low-resource automatic speech recognition (ASR). We formulate ASR for different languages as different tasks, and meta-learn the initialization parameters from many pretraining languages to achieve fast adaptation on an unseen target language, via the recently proposed model-agnostic meta-learning (MAML) algorithm. We evaluate the proposed approach using six languages as pretraining tasks and four languages as target tasks. Preliminary results show that the proposed method, MetaASR, significantly outperforms the state-of-the-art multitask pretraining approach on all target languages with different combinations of pretraining languages. In addition, given MAML's model-agnostic property, this paper also opens a new research direction of applying meta-learning to more speech-related applications.
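The MAML idea of meta-learning an initialization for fast adaptation can be sketched on a toy problem; the tasks below are 1-D regressions standing in for per-language ASR tasks, and the first-order variant, learning rates, and task setup are all illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

def mse_grad(w, x, y):
    """Gradient of the mean squared error of the model y_hat = w * x."""
    return np.mean(2.0 * (w * x - y) * x)

def maml_train(task_slopes, steps=200, inner_lr=0.1, outer_lr=0.05, seed=0):
    """First-order MAML: find an initialisation w0 that adapts to any task in one step."""
    rng = np.random.default_rng(seed)
    w0 = 0.0
    for _ in range(steps):
        meta_grad = 0.0
        for slope in task_slopes:
            xs = rng.uniform(-1.0, 1.0, 32)    # support set for the inner update
            w_adapted = w0 - inner_lr * mse_grad(w0, xs, slope * xs)
            xq = rng.uniform(-1.0, 1.0, 32)    # query set for the outer update
            meta_grad += mse_grad(w_adapted, xq, slope * xq)   # first-order approximation
        w0 -= outer_lr * meta_grad / len(task_slopes)
    return w0
```

The outer loop optimizes post-adaptation query loss rather than the loss at w0 itself, which is what distinguishes MAML from plain multitask pretraining.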
In this paper we address issues with image retrieval benchmarking on the standard and popular Oxford 5k and Paris 6k datasets. In particular, annotation errors, the size of the datasets, and the level of challenge are addressed: new annotation for both datasets is created with particular attention to the reliability of the ground truth. Three new protocols of varying difficulty are introduced. The protocols allow fair comparison between different methods, including those using a dataset pre-processing stage. For each dataset, 15 new challenging queries are introduced. Finally, a new set of 1M hard, semi-automatically cleaned distractors is selected. An extensive comparison of state-of-the-art methods is performed on the new benchmark. Different types of methods are evaluated, ranging from local-feature-based to modern CNN-based methods. The best results are achieved by combining the best of both worlds. Most importantly, image retrieval appears far from being solved.