Rare and Weak models for multiple hypothesis testing assume that only a small proportion of the tested hypotheses concern non-null effects and that the individual effects are only moderately large, so they generally do not stand out individually, for example in a Bonferroni analysis. Such models have been studied in quite a few settings: some studies focus on an underlying Gaussian means model for the hypotheses being tested, while others consider Poisson or Binomial models. These seemingly different models share the following asymptotic structure. Summarizing the evidence of each individual test by the negative logarithm of its P-value, the model is asymptotically equivalent to a situation in which most negative log P-values have a standard exponential distribution while a small fraction of them may follow an alternative distribution that is approximately noncentral chisquared on one degree of freedom. This log-chisquared approximation is different from the log-normal approximation of Bahadur, which is unsuitable for analyzing Rare and Weak multiple testing models. We characterize the asymptotic performance of global tests combining asymptotic log-chisquared P-values in terms of the chisquared mixture parameters: the scaling parameter controlling heteroscedasticity, the non-centrality parameter describing the effect size whenever it exists, and the parameter controlling the rarity of the non-null effects. In a phase space involving the last two parameters, we derive a region where all tests are asymptotically powerless. Outside of this region, the Berk-Jones and the Higher Criticism tests have maximal power. Inference techniques based on the minimal P-value, false-discovery rate control, and Fisher's combination test have sub-optimal asymptotic phase diagrams.
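As a concrete illustration of the kind of global test analyzed here, the following minimal Python sketch computes the standard Higher Criticism statistic from a collection of P-values; the sparsity fraction, effect size, and variable names in the toy example are placeholders, not values from the paper.

import numpy as np
from scipy.stats import norm

def higher_criticism(pvals, alpha0=0.5):
    # Higher Criticism statistic computed from the sorted P-values;
    # the maximization is restricted to the alpha0 * n smallest P-values.
    p = np.sort(np.asarray(pvals))
    n = p.size
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1.0 - p))
    k = max(1, int(alpha0 * n))
    return hc[:k].max()

# Toy Rare/Weak Gaussian means setting: a small fraction eps of the n
# test statistics carries a moderate effect mu (illustrative values).
rng = np.random.default_rng(0)
n, eps, mu = 10_000, 0.01, 2.5
z = rng.standard_normal(n)
z[: int(eps * n)] += mu
print(higher_criticism(norm.sf(z)))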
The Geographical and Temporal Weighted Regression (GTWR) model is an important local technique for exploring spatial heterogeneity and temporal dependence in data relationships, owing to its high fitting capacity on real data. In this article, we consider a GTWR model driven by spatio-temporal noise that is colored in space and fractional in time. Concerning the covariates, we allow them to be correlated and take into account two types of interaction between them: weak and strong interaction. Under these assumptions, the Weighted Least Squares (WLS) estimator is obtained, together with its rate of convergence. To demonstrate the good performance of the estimator, we provide a simulation study covering four different scenarios, in which the residuals are observed to oscillate with small variation around zero. The STARMA package in R yields a variant of the $R^{2}$ coefficient with values very close to 1, indicating that most of the variability is explained by the model.
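For reference, a minimal sketch (not the paper's implementation) of the local weighted least squares estimate $\hat{\beta}(u,v,t) = (X^{\top} W X)^{-1} X^{\top} W y$, using an illustrative Gaussian space-time kernel; the kernel form and the bandwidths h_s and h_t are placeholder choices.

import numpy as np

def gtwr_wls(X, y, coords, times, target_xy, target_t, h_s=1.0, h_t=1.0):
    # X: (n, p) design matrix, y: (n,) responses,
    # coords: (n, 2) spatial locations, times: (n,) observation times,
    # target_xy / target_t: the space-time point at which the local fit is made.
    d2_s = np.sum((coords - np.asarray(target_xy)) ** 2, axis=1)
    d2_t = (times - target_t) ** 2
    w = np.exp(-(d2_s / h_s**2 + d2_t / h_t**2))  # Gaussian space-time kernel
    W = np.diag(w)
    # Local WLS solution: beta_hat = (X' W X)^{-1} X' W y
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)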
Unsupervised sentence representation learning aims to transform input sentences into fixed-length vectors enriched with intricate semantic information while obviating the reliance on labeled data. Recent progress within this field, propelled by contrastive learning and prompt engineering, has significantly bridged the gap between unsupervised and supervised strategies. Nonetheless, the potential of Chain-of-Thought reasoning remains largely untapped within this trajectory. To unlock the latent capabilities of pre-trained models such as BERT, we propose a two-stage approach for sentence representation: comprehension and summarization. The output of the latter stage is then harnessed as the vectorized representation of the input sentence. For further performance enhancement, we meticulously refine both the contrastive learning loss function and the template denoising technique for prompt engineering. Rigorous experimentation substantiates that our method, CoT-BERT, surpasses a suite of robust baselines without necessitating other text representation models or external databases.
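For context, the contrastive objective that this line of work refines is typically the SimCSE-style InfoNCE loss over in-batch negatives; the sketch below shows that standard form (temperature and dimensions are placeholders), not the refined loss proposed in the paper.

import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.05):
    # z1, z2: (batch, dim) embeddings of two views of the same sentences,
    # e.g. two dropout-perturbed encodings of a prompt-wrapped input.
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    sim = z1 @ z2.T / temperature                  # (batch, batch) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, labels)            # positives lie on the diagonal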
A problem that plagues robotic grasping is the misalignment of the object and gripper due to difficulties in precise localization, actuation, etc. Under-actuated robotic hands with compliant mechanisms are used to adapt to and compensate for these inaccuracies. However, these mechanisms come at the cost of controllability and coordination. For instance, adaptive functions that let the fingers of a two-fingered gripper adapt independently may affect the coordination necessary for grasping small objects. In this work, we develop a two-fingered robotic hand capable of grasping objects that are offset from the gripper's center, while still having the requisite coordination for grasping small objects, via a novel gear-type synchronization mechanism with a magnet. This gear synchronization mechanism keeps the adaptive fingertips aligned, enabling the hand to grasp objects as small as toothpicks and washers. The magnetic component allows this coordination to be turned off automatically when needed, allowing the grasping of objects that are offset or misaligned from the gripper. This equips the hand with the capability of grasping objects ranging from light, fragile items (strawberries, cream puffs, etc.) to heavy frying-pan lids, all while maintaining their position and posture, which is vital in numerous applications that require precise positioning or careful manipulation.
Our research explores the use of natural language processing (NLP) methods to automatically classify entities for the purpose of knowledge graph population and integration with food system ontologies. We have created NLP models that can automatically classify organizations with respect to categories associated with environmental issues as well as Standard Industrial Classification (SIC) codes, which are used by the U.S. government to characterize business activities. As input, the NLP models are provided with text snippets retrieved by the Google search engine for each organization, which serves as a textual description of the organization that is used for learning. Our experimental results show that NLP models can achieve reasonably good performance for these two classification tasks, and they rely on a general framework that could be applied to many other classification problems as well. We believe that NLP models represent a promising approach for automatically harvesting information to populate knowledge graphs and aligning the information with existing ontologies through shared categories and concepts.
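As a hedged illustration of the general framework (the paper's actual models and features may differ), a text-classification baseline over retrieved search snippets could look like the following sketch; the snippet strings and category labels are placeholder data.

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder data: one concatenated search-snippet string per organization,
# paired with a category label (e.g. an SIC code or an environmental category).
snippets = ["organic farm producing vegetables", "steel manufacturing plant",
            "community garden nonprofit", "iron and metal works"]
labels = ["agriculture", "manufacturing", "agriculture", "manufacturing"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(snippets, labels)
print(clf.predict(["a farm growing organic produce"]))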
Semi-unification is the combination of first-order unification and first-order matching. The undecidability of semi-unification was proven by Kfoury, Tiuryn, and Urzyczyn in the 1990s by Turing reduction from Turing machine immortality (the existence of a diverging configuration). That Turing reduction is intricate, uses non-computational principles, and involves various intermediate models of computation. The present work gives a constructive many-one reduction from the Turing machine halting problem to semi-unification. This establishes RE-completeness of semi-unification under many-one reductions. Computability of the reduction function, as well as constructivity and correctness of the argument, are witnessed by an axiom-free mechanization in the Coq proof assistant. Arguably, this serves as comprehensive, precise, and surveyable evidence for the result at hand. The mechanization is incorporated into the existing, well-maintained Coq library of undecidability proofs. Notably, a variant of Hooper's argument for the undecidability of Turing machine immortality is part of the mechanization.
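For reference, one standard formulation of semi-unification (the notation here is illustrative and not taken from the mechanization): an instance is a finite set of term inequalities, and solvability combines a single unifying substitution with per-inequality matching substitutions:
\[
\{\, s_1 \leq t_1,\ \dots,\ s_n \leq t_n \,\}\ \text{is solvable}
\iff
\exists\,\sigma\ \forall i\ \exists\,\rho_i:\ \rho_i(\sigma(s_i)) = \sigma(t_i).
\]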
The goal of this thesis is to study the use of the Kantorovich-Rubinstein distance to build a descriptor of sample complexity in classification problems. The idea is to exploit the fact that the Kantorovich-Rubinstein distance is a metric on the space of measures that also takes into account the geometry and topology of the underlying metric space. We associate a measure to each class of points and study the geometric information that can be obtained from the Kantorovich-Rubinstein distance between those measures. We show that a large Kantorovich-Rubinstein distance between those measures allows us to conclude that there exists a 1-Lipschitz classifier that classifies the classes of points well. We also discuss the limitations of the Kantorovich-Rubinstein distance as a descriptor.
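The link between this distance and 1-Lipschitz classifiers comes from the Kantorovich-Rubinstein duality, recalled here for the two class measures $\mu$ and $\nu$:
\[
W_1(\mu,\nu) \;=\; \sup_{\operatorname{Lip}(f)\le 1}\left(\int f\,\mathrm{d}\mu - \int f\,\mathrm{d}\nu\right),
\]
so a large value of $W_1(\mu,\nu)$ is witnessed by a 1-Lipschitz function $f$ whose values separate the two measures.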
We propose the new problem of choosing which dense retrieval model to use when searching a new collection for which no labels are available, i.e. in a zero-shot setting. Many dense retrieval models are readily available. However, each model has very different search effectiveness -- not just on the test portion of the datasets on which the dense representations have been learned but, importantly, also across datasets whose data was not used to learn the dense representations. This is because dense retrievers typically require training on a large amount of labeled data to achieve satisfactory search effectiveness in a specific dataset or domain. Moreover, effectiveness gains obtained by dense retrievers on datasets for which they observe labels during training do not necessarily generalise to datasets not observed during training. This is, however, a hard problem: through empirical experimentation we show that methods inspired by recent work on unsupervised performance evaluation under domain shift in computer vision and machine learning are not effective for choosing highly performing dense retrievers in our setup. The availability of reliable methods for selecting dense retrieval models in zero-shot settings, without requiring the collection of labels for evaluation, would help streamline the widespread adoption of dense retrieval. This is therefore an important new problem that we believe the information retrieval community should consider. Implementations of the methods, along with raw result files and analysis scripts, are made publicly available at //www.github.com/anonymized.
L1-ball-type priors are a recent generalization of the spike-and-slab priors. By transforming a continuous precursor distribution onto the L1-ball boundary, they induce exact zeros with positive prior and posterior probabilities. With great flexibility in choosing the precursor and threshold distributions, one can easily specify models under structured sparsity, such as those with dependent probability of zeros and smoothness among the non-zeros. Motivated by the need to significantly accelerate posterior computation, we propose a new data augmentation that leads to a fast block Gibbs sampling algorithm. The latent variable, named ``anti-correlation Gaussian'', cancels out the quadratic exponent term in the latent Gaussian distribution, making the parameters of interest conditionally independent so that they can be updated in a block. Compared to existing algorithms such as the No-U-Turn sampler, the new blocked Gibbs sampler has a very low computing cost per iteration and shows rapid mixing of Markov chains. We establish a geometric ergodicity guarantee for the algorithm in linear models. Further, we show useful extensions of our algorithm for posterior estimation in general latent Gaussian models, such as those involving a multivariate truncated Gaussian or a latent Gaussian process. Keywords: Blocked Gibbs sampler; Fast Mixing of Markov Chains; Latent Gaussian Models; Soft-thresholding.
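As a minimal, illustrative sketch of how mapping a continuous precursor onto the L1 ball produces exact zeros, the generic Euclidean projection is shown below; the paper's specific precursor and threshold distributions are not reproduced here.

import numpy as np

def project_l1_ball(beta, radius):
    # Euclidean projection of the precursor vector beta onto the L1 ball.
    # When beta lies outside the ball, the projection soft-thresholds its
    # entries, so some coordinates become exactly zero.
    beta = np.asarray(beta, dtype=float)
    if np.abs(beta).sum() <= radius:
        return beta.copy()
    u = np.sort(np.abs(beta))[::-1]
    css = np.cumsum(u)
    j = np.arange(1, beta.size + 1)
    k = np.nonzero(u - (css - radius) / j > 0)[0][-1]
    lam = (css[k] - radius) / (k + 1)
    return np.sign(beta) * np.maximum(np.abs(beta) - lam, 0.0)

# Example: two coordinates are thresholded to exact zeros.
print(project_l1_ball([2.0, -0.5, 0.1, 1.2], radius=2.0))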
Accurate analytical and numerical modeling of multiscale systems is a daunting task. The need to properly resolve spatial and temporal scales spanning multiple orders of magnitude pushes the limits of both our theoretical models and our computational capabilities. Rigorous upscaling techniques enable efficient computation while bounding/tracking errors and helping to make informed cost-accuracy tradeoffs. The biggest challenges arise when the applicability conditions of upscaled models break down. Here, we present a non-intrusive two-way (iterative bottom-up top-down) coupled hybrid model, applied to thermal runaway in battery packs, that combines fine-scale and upscaled equations in the same numerical simulation to achieve predictive accuracy while limiting computational costs. First, we develop two methods with different orders of accuracy to enforce continuity at the coupling boundary. Then, we derive weak (i.e., variational) formulations of the fine-scale and upscaled governing equations for finite element (FE) discretization and numerical implementation in FEniCS. We demonstrate that hybrid simulations can accurately predict the average temperature fields within error bounds determined a priori by homogenization theory. Finally, we demonstrate the computational efficiency of the hybrid algorithm against fine-scale simulations.
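As a minimal illustration of how a weak formulation is expressed for FE discretization in FEniCS, the sketch below solves a generic steady heat-conduction problem with placeholder coefficients; it is not the paper's coupled fine-scale/upscaled battery-pack model.

from fenics import *

# Illustrative steady heat conduction on the unit square (legacy FEniCS API).
mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "P", 1)

u, v = TrialFunction(V), TestFunction(V)
kappa = Constant(1.0)   # thermal conductivity (placeholder value)
f = Constant(1.0)       # volumetric heat source (placeholder value)

a = kappa * dot(grad(u), grad(v)) * dx   # bilinear part of the weak form
L = f * v * dx                           # linear part of the weak form
bc = DirichletBC(V, Constant(0.0), "on_boundary")

T = Function(V)
solve(a == L, T, bc)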
Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, on the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach on multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.
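One common way to implement an adversarially trained domain classifier is a gradient reversal layer, as in DANN; the sketch below is illustrative (the feature dimension and the weight lambd are placeholders), not the paper's exact architecture.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; flips and scales gradients in the backward
    # pass, so the feature extractor is trained to confuse the domain classifier.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainClassifier(nn.Module):
    # Small classifier predicting source vs. target domain from features
    # (image-level or instance-level), trained through the reversal layer.
    def __init__(self, in_dim, lambd=0.1):
        super().__init__()
        self.lambd = lambd
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, feats):
        return self.net(GradReverse.apply(feats, self.lambd))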