The linear combination of Student's $t$ random variables (RVs) appears in many statistical applications. Unfortunately, the Student's $t$ distribution is not closed under convolution; thus, deriving an exact and general distribution for the linear combination of $K$ Student's $t$ RVs is infeasible, which motivates a fitting/approximation approach. Here, we focus on the scenario where the only constraint is that the number of degrees of freedom of each $t$-RV is greater than two. Notice that since the odd moments/cumulants of the Student's $t$ distribution are zero, and the even moments/cumulants do not exist when their order is greater than the number of degrees of freedom, it becomes impossible to use conventional approaches based on moments/cumulants of order one or higher than two. To circumvent this issue, herein we propose fitting such a distribution to that of a scaled Student's $t$ RV by exploiting the second moment together with either the first absolute moment or the characteristic function (CF). For the fitting based on the absolute moment, we start from the case of the linear combination of $K=2$ Student's $t$ RVs and then generalize to $K\ge 2$ through a simple iterative procedure. Meanwhile, the CF-based fitting is direct, but its accuracy (measured in terms of the Bhattacharyya distance metric) depends on the CF parameter configuration, for which we propose a simple but accurate approach. We numerically show that the CF-based fitting usually outperforms the absolute-moment-based fitting and that both the scale and number of degrees of freedom of the fitting distribution increase almost linearly with $K$.
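To illustrate the absolute-moment-based idea in its simplest form (a minimal sketch, not the paper's iterative procedure; all function names here are ours), one can recover the scale $s$ and degrees of freedom $\nu$ of a scaled Student's $t$ RV from its second moment and first absolute moment, since the scale cancels in the ratio $(\mathbb{E}|X|)^2/\mathbb{E}[X^2]$, leaving a one-dimensional root-finding problem in $\nu$:

```python
from math import gamma, sqrt, pi
from scipy.optimize import brentq

def t_abs_moment(nu):
    # E|T| for a standard Student's t RV with nu > 1 degrees of freedom
    return 2 * sqrt(nu) * gamma((nu + 1) / 2) / (sqrt(pi) * (nu - 1) * gamma(nu / 2))

def t_second_moment(nu):
    # E[T^2] for nu > 2 degrees of freedom
    return nu / (nu - 2)

def fit_scaled_t(a1, m2, nu_max=200.0):
    """Fit a scaled Student's t RV X = s*T (scale s, dof nu) by matching
    the first absolute moment a1 = E|X| and the second moment m2 = E[X^2].
    The scale cancels in a1**2 / m2, so nu is found by root-finding first,
    and s follows from the second moment."""
    ratio = a1**2 / m2
    f = lambda nu: t_abs_moment(nu)**2 / t_second_moment(nu) - ratio
    nu_hat = brentq(f, 2.0 + 1e-6, nu_max)
    s_hat = sqrt(m2 / t_second_moment(nu_hat))
    return nu_hat, s_hat
```

In practice $a_1$ and $m_2$ would be the moments of the linear combination; as a sanity check, feeding in the exact moments of a scaled $t$ RV recovers its parameters.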
Out-of-distribution (OOD) detection is a common issue in deploying vision models in practice, and solving it is an essential building block in safety-critical applications. Existing OOD detection solutions focus on improving the OOD robustness of a classification model trained exclusively on in-distribution (ID) data. In this work, we take a different approach and propose to leverage generic pre-trained representations. We first investigate the behaviour of simple classifiers built on top of such representations and show striking performance gains compared to ID-trained representations. We propose a novel OOD method, called GROOD, that achieves excellent performance, predicated on the use of a good generic representation. Only a trivial training process is required for adapting GROOD to a particular problem. The method is simple, general, efficient, calibrated, and has only a few hyper-parameters. It achieves state-of-the-art performance on a number of OOD benchmarks, reaching near-perfect performance on several of them. The source code is available at //github.com/vojirt/GROOD.
Locating 3D objects from a single RGB image via Perspective-n-Point (PnP) is a long-standing problem in computer vision. Driven by end-to-end deep learning, recent studies suggest interpreting PnP as a differentiable layer, allowing for partial learning of 2D-3D point correspondences by backpropagating the gradients of pose loss. Yet, learning the entire set of correspondences from scratch is highly challenging, particularly for ambiguous pose solutions, where the globally optimal pose is theoretically non-differentiable w.r.t. the points. In this paper, we propose EPro-PnP, a probabilistic PnP layer for general end-to-end pose estimation, which outputs a pose distribution with differentiable probability density on the SE(3) manifold. The 2D-3D coordinates and corresponding weights are treated as intermediate variables learned by minimizing the KL divergence between the predicted and target pose distributions. The underlying principle generalizes previous approaches and resembles the attention mechanism. EPro-PnP can enhance existing correspondence networks, closing the gap between PnP-based methods and the task-specific leaders on the LineMOD 6DoF pose estimation benchmark. Furthermore, EPro-PnP helps to explore new possibilities of network design, as we demonstrate a novel deformable correspondence network with state-of-the-art pose accuracy on the nuScenes 3D object detection benchmark. Our code is available at //github.com/tjiiv-cprg/EPro-PnP-v2.
Motivated by applications in personalized medicine and individualized policy making, there is a growing interest in techniques for quantifying treatment effect heterogeneity in terms of the conditional average treatment effect (CATE). Some of the most prominent methods for CATE estimation developed in recent years are the T-learner, DR-learner and R-learner. The latter two were designed to improve on the former by being Neyman-orthogonal. However, the relations between them remain unclear, and the literature likewise remains vague on whether these learners converge to a useful quantity or (functional) estimand when the underlying optimization procedure is restricted to a class of functions that does not include the CATE. In this article, we provide insight into these questions by discussing the DR-learner and R-learner as special cases of a general class of Neyman-orthogonal learners for the CATE, for which we moreover derive oracle bounds. Our results shed light on how one may construct Neyman-orthogonal learners with desirable properties, on when the DR-learner may be preferred over the R-learner (and vice versa), and on novel learners that may sometimes be preferable to either of these. Theoretical findings are confirmed using results from simulation studies on synthetic data, as well as an application in critical care medicine.
Numerical simulations with rigid particles, drops or vesicles constitute some examples that involve 3D objects with spherical topology. When the numerical method is based on boundary integral equations, the error in using a regular quadrature rule to approximate the layer potentials that appear in the formulation will increase rapidly as the evaluation point approaches the surface and the integrand becomes sharply peaked. To determine when the accuracy becomes insufficient, and a more costly special quadrature method should be used, error estimates are needed. In this paper we present quadrature error estimates for layer potentials evaluated near surfaces of genus 0, parametrized using a polar and an azimuthal angle, discretized by a combination of the Gauss-Legendre and the trapezoidal quadrature rules. The error estimates involve no unknown coefficients, but complex-valued roots of a specified distance function. The evaluation of the error estimates in general requires a one-dimensional local root-finding procedure, but for specific geometries we obtain analytical results. Based on these explicit solutions, we derive simplified error estimates for layer potentials evaluated near spheres; these simple formulas depend only on the distance from the surface, the radius of the sphere, and the number of discretization points. The usefulness of these error estimates is illustrated with numerical examples.
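The surface discretization named above, Gauss-Legendre in the polar direction (i.e., in $\cos\theta$) combined with the trapezoidal rule in the azimuthal angle, can be sketched as follows for the unit sphere. This is an illustrative sketch of the quadrature itself, not of the paper's error estimates, and the function names are ours:

```python
import numpy as np

def sphere_quadrature(f, n_theta, n_phi):
    """Integrate f(x, y, z) over the unit sphere: Gauss-Legendre nodes in
    cos(theta) (the sin(theta) surface element is absorbed by the change of
    variable) and the periodic trapezoidal rule in the azimuthal angle phi."""
    ct_nodes, ct_weights = np.polynomial.legendre.leggauss(n_theta)
    phi = 2 * np.pi * np.arange(n_phi) / n_phi   # equispaced, periodic
    w_phi = 2 * np.pi / n_phi                    # trapezoidal weight
    total = 0.0
    for ct, w in zip(ct_nodes, ct_weights):
        st = np.sqrt(1.0 - ct**2)
        x, y, z = st * np.cos(phi), st * np.sin(phi), np.full(n_phi, ct)
        total += w * w_phi * np.sum(f(x, y, z))
    return total
```

For smooth integrands far from the surface this rule converges rapidly; the regime the error estimates address is precisely when the evaluation point of a layer potential approaches the surface and this accuracy breaks down.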
Many recent pattern recognition applications rely on complex distributed architectures in which sensing and computational nodes interact through a communication network. Deep neural networks (DNNs) play an important role in this scenario, furnishing powerful decision mechanisms at the price of a high computational effort. Consequently, powerful state-of-the-art DNNs are frequently split over various computational nodes, e.g., a first part stays on an embedded device and the rest on a server. Deciding where to split a DNN is a challenge in itself, making the design of deep learning applications even more complicated. Therefore, we propose Split-Et-Impera, a novel and practical framework that i) determines the set of best split points of a neural network based on deep network interpretability principles, without performing a tedious trial-and-error approach, ii) performs a communication-aware simulation for the rapid evaluation of different neural network rearrangements, and iii) suggests the best match between the quality-of-service requirements of the application and the performance in terms of accuracy and latency.
Most recent 6D object pose estimation methods first use object detection to obtain 2D bounding boxes before actually regressing the pose. However, the general object detection methods they use are ill-suited to handle cluttered scenes, thus producing poor initialization to the subsequent pose network. To address this, we propose a rigidity-aware detection method exploiting the fact that, in 6D pose estimation, the target objects are rigid. This lets us introduce an approach to sampling positive object regions from the entire visible object area during training, instead of naively drawing samples from the bounding box center where the object might be occluded. As such, every visible object part can contribute to the final bounding box prediction, yielding better detection robustness. Key to the success of our approach is a visibility map, which we propose to build using a minimum barrier distance between every pixel in the bounding box and the box boundary. Our results on seven challenging 6D pose estimation datasets evidence that our method outperforms general detection frameworks by a large margin. Furthermore, combined with a pose regression network, we obtain state-of-the-art pose estimation results on the challenging BOP benchmark.
The aim of this work is to study the dual and the algebraic dual of an evaluation code using standard monomials and indicator functions. We show that the dual of an evaluation code is the evaluation code of the algebraic dual. We develop an algorithm for computing a basis for the algebraic dual. Let $C_1$ and $C_2$ be linear codes spanned by standard monomials. We give a combinatorial condition for the monomial equivalence of $C_1$ and the dual $C_2^\perp$. Moreover, we give an explicit description of a generator matrix of $C_2^\perp$ in terms of that of $C_1$ and coefficients of indicator functions. For Reed--Muller-type codes we give a duality criterion in terms of the v-number and the Hilbert function of a vanishing ideal. As an application, we provide an explicit duality for Reed--Muller-type codes corresponding to Gorenstein ideals. In addition, when the evaluation code is monomial and the set of evaluation points is a degenerate affine space, we classify when the dual is a monomial code.
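As a generic illustration of the objects involved (not the paper's combinatorial description via standard monomials and indicator functions; the function name is ours), a generator matrix of the dual code $C^\perp$ can always be computed as a basis of the null space of a generator matrix of $C$ over $\mathrm{GF}(p)$:

```python
import numpy as np

def dual_generator(G, p=2):
    """Generator matrix of the dual code C^perp over GF(p): rows spanning
    the null space of G, found by Gaussian elimination mod p."""
    R = np.array(G, dtype=int) % p
    k, n = R.shape
    pivots, r = [], 0
    for c in range(n):                       # reduce R to RREF, track pivots
        hits = [i for i in range(r, k) if R[i, c]]
        if not hits:
            continue
        R[[r, hits[0]]] = R[[hits[0], r]]
        R[r] = (R[r] * pow(int(R[r, c]), -1, p)) % p
        for i in range(k):
            if i != r and R[i, c]:
                R[i] = (R[i] - R[i, c] * R[r]) % p
        pivots.append(c)
        r += 1
        if r == k:
            break
    free = [c for c in range(n) if c not in pivots]
    H = np.zeros((len(free), n), dtype=int)  # one dual basis vector per free column
    for j, c in enumerate(free):
        H[j, c] = 1
        for i, pc in enumerate(pivots):
            H[j, pc] = (-R[i, c]) % p
    return H
```

The point of the paper is that for evaluation codes this dual admits a much more explicit description, via the algebraic dual and the coefficients of indicator functions, than this generic linear-algebra computation.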
Typical cooperative multi-agent systems (MASs) exchange information to coordinate their motion in proximity-based control consensus schemes to complete a common objective. However, in the event of faults or cyber attacks on the on-board positioning sensors of agents, global control performance may be compromised, resulting in a hijacking of the entire MAS. For systems that operate in unknown or landmark-free environments (e.g., open terrain, sea, or air) and also beyond range/proximity sensing of nearby agents, compromised agents lose localization capabilities. To maintain resilience in these scenarios, we propose a method to recover compromised agents by utilizing Received Signal Strength Indication (RSSI) from nearby agents (i.e., mobile landmarks) to provide reliable position measurements for localization. To minimize estimation error: i) a multilateration scheme leverages RSSI and position information received from neighboring agents acting as mobile landmarks, and ii) a Kalman filtering method, robust to unreliable state estimates, adaptively updates the unknown RSSI-based position measurement covariance matrix at runtime. The proposed framework is demonstrated with simulations on MAS formations in the presence of faults and cyber attacks on on-board position sensors.
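A minimal sketch of the multilateration step, assuming range estimates have already been obtained from RSSI via a path-loss model (all names are ours; this is not the paper's full adaptive scheme): the quadratic range equations are linearized against the first anchor and solved in the least-squares sense.

```python
import numpy as np

def multilaterate(anchors, dists):
    """Estimate a 2D position from neighboring-agent positions (mobile
    landmarks) and range estimates. Subtracting the first range equation
    from the others cancels the quadratic term, giving the linear system
    2*(p_i - p_1)^T x = |p_i|^2 - |p_1|^2 - d_i^2 + d_1^2."""
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2)
         - dists[1:]**2 + dists[0]**2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With noisy RSSI-derived ranges the least-squares residual grows, which is the kind of signal the runtime covariance adaptation in the abstract can exploit.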
We review Quasi Maximum Likelihood estimation of factor models for high-dimensional panels of time series. We consider two cases: (1) estimation when no dynamic model for the factors is specified (Bai and Li, 2016); (2) estimation based on the Kalman smoother and the Expectation Maximization algorithm, thus allowing the factor dynamics to be modeled explicitly (Doz et al., 2012). Our interest is in approximate factor models, i.e., when we allow the idiosyncratic components to be mildly cross-sectionally, as well as serially, correlated. Although such a setting apparently makes estimation harder, we show, in fact, that factor models do not suffer from the curse of dimensionality, but instead enjoy a blessing-of-dimensionality property. In particular, we show that if the cross-sectional dimension of the data, $N$, grows to infinity, then: (i) identification of the model is still possible, (ii) the mis-specification error due to the use of an exact factor model log-likelihood vanishes. Moreover, if we also let the sample size, $T$, grow to infinity, we can consistently estimate all parameters of the model and conduct inference. The same is true for estimation of the latent factors, which can be carried out by weighted least-squares, linear projection, or Kalman filtering/smoothing. We also compare the presented approaches with Principal Component Analysis and the classical, fixed-$N$, exact Maximum Likelihood approach. We conclude with a discussion on the efficiency of the considered estimators.
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose an $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models. We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order (or topological order) than separate estimation. Moreover, the joint estimator is able to recover non-identifiable DAGs by estimating them together with some identifiable DAGs. Lastly, our analysis also shows the consistency of union-support recovery of the structures. To allow practical implementation, we design a continuous optimization problem whose optimizer coincides with the joint estimator and can be approximated efficiently by an iterative algorithm. We validate the theoretical analysis and the effectiveness of the joint estimator in experiments.
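The $l_1/l_2$ regularizer that couples the $K$ tasks can be illustrated as a standard group-sparsity penalty over the $K$ weighted adjacency matrices (a generic sketch of this penalty type, not the paper's exact estimator; the function name is ours): an $l_2$ norm across tasks at each edge position, summed ($l_1$) over positions, which encourages a shared sparse union of supports.

```python
import numpy as np

def l1_l2_penalty(adjs):
    """Group-sparsity penalty over K weighted adjacency matrices of shape
    (d, d): for each edge position (i, j), take the l2 norm of the K edge
    weights across tasks, then sum (l1) over all positions. An edge is
    driven to zero jointly in all K graphs, yielding a shared union support."""
    W = np.stack(adjs)                       # shape (K, d, d)
    return np.sqrt((W**2).sum(axis=0)).sum() # l2 across tasks, l1 over edges
```

This penalty would be added to the negative Gaussian log-likelihood of the $K$ structural equation models to form the joint objective.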