This study surveys digital library initiatives in India, collecting secondary information from about fifty digital libraries via their respective websites. The findings show that in most cases the conception of the digital library is still at a nascent stage: online subscriptions and links to third-party websites are also labelled digital libraries. Many digital libraries do not have a proper search interface on their websites because their metadata is improperly arranged. In some cases, they do not hold their own digitized collections and instead provide access to other collections or refer users to third-party websites. Moreover, many digital libraries cannot be accessed from outside the organization (no remote access). Hence, regular website maintenance, remote access facilities, and proper training of information professionals are required. Furthermore, the so-called digital libraries in India have neither developed their own standards nor adopted any global standards. However, the usage statistics of government digital libraries are far better than those of academic or public libraries; users are perhaps more interested in government rules, laws, orders, and the like, which is a positive sign of digital governance reaching the public. The study offers several observations and policy suggestions that may be helpful to students, scholars, library professionals, and decision-makers in government.
In the face of increasingly severe privacy threats in the era of data and AI, the US Census Bureau has recently adopted differential privacy, the de facto standard of privacy protection, for the 2020 Census release. Enforcing differential privacy involves adding carefully calibrated random noise to sensitive demographic information prior to its release. This change has the potential to impact policy decisions like political redistricting and other high-stakes practices, partly because tremendous federal funds and resources are allocated according to datasets (like Census data) released by the US government. One under-explored yet important application of such data is the redrawing of school attendance boundaries to foster less demographically segregated schools. In this study, we ask: how might differential privacy impact diversity-promoting boundaries in terms of resulting levels of segregation, student travel times, and school switching requirements? Simulating alternative boundaries using differentially private student counts across 67 Georgia districts, we find that increasing data privacy requirements decreases the extent to which alternative boundaries might reduce segregation and foster more diverse and integrated schools, largely by reducing the number of students who would switch schools under boundary changes. Impacts on travel times are minimal. These findings point to a privacy-diversity tradeoff local educational policymakers may face in forthcoming years, particularly as computational methods are increasingly poised to facilitate attendance boundary redrawings in the pursuit of less segregated schools.
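As a concrete, hedged illustration of the mechanism involved (not the Census Bureau's actual TopDown algorithm), the sketch below adds Laplace noise to per-block student counts; the privacy parameter epsilon, the clipping, and the rounding are all assumptions made for the example.

```python
import numpy as np

def dp_student_counts(counts, epsilon, rng=None):
    """Release student counts under epsilon-differential privacy
    via the Laplace mechanism (sensitivity 1 for counting queries)."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon, size=len(counts))
    # Post-processing: counts cannot be negative or fractional.
    return np.clip(np.round(counts + noise), 0, None).astype(int)

# Example: per-block student counts at three privacy levels.
true_counts = np.array([42, 7, 130, 0, 15])
for eps in (10.0, 1.0, 0.1):  # smaller epsilon = stronger privacy, more noise
    print(eps, dp_student_counts(true_counts, eps))
```

Small counts (like the 0 and 7 above) are distorted proportionally more than large ones, which is one intuition for why noisier counts blunt fine-grained boundary planning.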
Ensuring that large language models (LMs) are fair, robust, and useful requires an understanding of how different modifications to their inputs impact the model's behaviour. In the context of open-text generation tasks, however, such an evaluation is not trivial. For example, when presenting a model with an input text and a perturbed, "contrastive" version of it, meaningful differences in the next-token predictions may not be revealed with standard decoding strategies. With this motivation in mind, we propose Contrastive Input Decoding (CID): a decoding algorithm to generate text given two inputs, where the generated text is likely given one input but unlikely given the other. In this way, the contrastive generations highlight potentially subtle differences in how the LM output varies between the two inputs, in a simple and interpretable manner. We use CID to highlight context-specific biases that are hard to detect with standard decoding strategies and to quantify the effect of different input perturbations.
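A minimal sketch of the idea behind CID, assuming greedy decoding, a contrast weight gamma, and GPT-2 as the underlying LM (the paper's exact scoring and decoding choices may differ): at each step, pick the token that is likely under the original input but unlikely under the perturbed one.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def contrastive_input_decode(text, contrast_text, gamma=1.0, max_new_tokens=20):
    """Greedily pick tokens likely given `text` but unlikely given
    `contrast_text` (score = logp_x - gamma * logp_x_tilde)."""
    ids = tok(text, return_tensors="pt").input_ids
    ids_c = tok(contrast_text, return_tensors="pt").input_ids
    out = []
    for _ in range(max_new_tokens):
        with torch.no_grad():
            logp = model(ids).logits[0, -1].log_softmax(-1)
            logp_c = model(ids_c).logits[0, -1].log_softmax(-1)
        nxt = (logp - gamma * logp_c).argmax().view(1, 1)
        out.append(nxt.item())
        # Append the chosen token to BOTH contexts and continue.
        ids = torch.cat([ids, nxt], dim=-1)
        ids_c = torch.cat([ids_c, nxt], dim=-1)
    return tok.decode(out)

print(contrastive_input_decode("The doctor said that she",
                               "The nurse said that she"))
```

Setting gamma to 0 recovers ordinary greedy decoding; larger gamma pushes the continuation toward tokens that distinguish the two inputs.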
This project investigated new approaches and technologies to enhance the accessibility of mathematical content and its semantic information for a broad range of information retrieval applications. To achieve this goal, the project addressed three main research challenges: (1) syntactic analysis of mathematical expressions, (2) semantic enrichment of mathematical expressions, and (3) evaluation using quality metrics and demonstrators. To make our research useful for the research community, we published tools that enable researchers to process mathematical expressions more effectively and efficiently.
In order to advance academic research, it is important to assess and evaluate the academic influence of researchers and the findings they produce. Citation metrics are universally used to evaluate researchers, and among their several variants, the h-index proposed by Hirsch has become the leading measure. Recent work shows that the h-index is not an effective measure of scientific impact, owing to changing authorship patterns. This can be mitigated by using the h-index of a paper to compute the h-index of an author. We show that using fractional allocation of the h-index gives better results. In this work, we reapply two indices based on the h-index of a single paper, referred to as the hp-index and the hp-frac-index. We run large-scale experiments in three different fields with about a million publications and 3,000 authors. We also compare the h-index of a paper with nine h-index-like metrics. Our experiments show that the hp-frac-index provides a ranking distinct from that of the h-index, and that it performs better than the h-index in assigning higher ranks to awarded researchers.
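For reference, the sketch below computes the classic h-index and, as a loose stand-in for fractional allocation (the exact hp-frac-index definition in the paper differs), a variant that divides each paper's citations by its author count before applying the same rule.

```python
def h_index(citations):
    """Largest h such that at least h items have >= h citations."""
    cites = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

def h_frac_index(papers):
    """Hedged sketch of fractional allocation: each paper's citation
    count is divided by its number of authors before computing h."""
    return h_index([c / n_authors for c, n_authors in papers])

# (citations, number_of_authors) for one author's papers
author_papers = [(40, 2), (18, 5), (9, 1), (9, 9), (3, 3)]
print(h_index([c for c, _ in author_papers]))  # classic h-index -> 4
print(h_frac_index(author_papers))             # fractional variant -> 3
```

Note how the heavily co-authored papers (5 and 9 authors) contribute less under fractional credit, which is the intuition behind mitigating changing authorship patterns.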
We propose a Bayesian model selection approach that allows medical practitioners to select among predictor variables while taking their respective costs into account. Medical procedures almost always incur costs in time and/or money, and these costs might exceed a predictor's usefulness for modeling the outcome of interest. We develop a Bayesian model selection procedure that uses flexible model priors to penalize costly predictors a priori and select a subset of predictors that is useful relative to its cost. Our approach (i) gives the practitioner control over the magnitude of cost penalization, (ii) enables the prior to scale well with sample size, and (iii) enables the creation of our proposed inclusion-path visualization, which can be used to make decisions about individual candidate predictors using both probabilistic and visual tools. We demonstrate the effectiveness of the inclusion-path approach, and the importance of being able to adjust the magnitude of the prior's cost penalization, on simulated data and on a heart disease diagnosis dataset from the Cleveland Clinic Foundation, in which several candidate predictors with various costs were recorded for each patient.
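The following sketch conveys the flavor of cost-aware selection under stated assumptions: it scores each predictor subset by BIC (a crude marginal-likelihood surrogate, not the paper's actual Bayesian prior) plus a linear cost penalty `lam`, and shows how raising `lam` drops an expensive but weakly informative predictor.

```python
import itertools
import numpy as np

def bic_plus_cost(X, y, subset, costs, lam):
    """Cost-penalized score: BIC of an OLS fit on the subset
    plus lam * total predictor cost (lower is better)."""
    n = len(y)
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    bic = n * np.log(rss / n) + Xs.shape[1] * np.log(n)
    return bic + lam * sum(costs[j] for j in subset)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)
costs = [1.0, 10.0, 1.0, 1.0]  # predictor 1 is informative but expensive
for lam in (0.0, 5.0):         # lam controls the magnitude of cost penalization
    best = min((s for k in range(5)
                for s in itertools.combinations(range(4), k)),
               key=lambda s: bic_plus_cost(X, y, s, costs, lam))
    print(lam, best)  # predictor 1 is selected at lam=0, dropped at lam=5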
In cancer research, clustering techniques are widely used for exploratory analyses and dimensionality reduction, playing a critical role in the identification of novel cancer subtypes, often with direct implications for patient management. As the data collected by multiple research groups grows, it is increasingly feasible to investigate the replicability of clustering procedures, that is, their ability to consistently recover biologically meaningful clusters across several datasets. In this paper, we review existing methods to assess the replicability of clustering analyses, and discuss a framework for evaluating cross-study clustering replicability, useful when two or more studies are available. These approaches can be applied to any clustering algorithm and can employ different measures of similarity between partitions to quantify replicability, globally (i.e. for the whole sample) as well as locally (i.e. for individual clusters). Using experiments on synthetic and real gene expression data, we illustrate the utility of replicability metrics for evaluating whether the same clusters are identified consistently across a collection of datasets.
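A minimal sketch of one such cross-study check, assuming k-means as the algorithm and the adjusted Rand index as the similarity measure between partitions: a clustering fitted on study A is used to label study B, and its agreement with B's own clustering quantifies global replicability.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

# Two synthetic "studies" drawn around the same underlying cluster centers.
centers = [[0, 0], [5, 5], [0, 5]]
X_a, _ = make_blobs(n_samples=300, centers=centers, random_state=1)
X_b, _ = make_blobs(n_samples=300, centers=centers, random_state=2)

# Cluster study B directly, then label B with the model trained on study A.
labels_b = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_b)
transfer = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_a).predict(X_b)

# High ARI => the partition of B is recovered by the clustering learned on A.
print("cross-study ARI:", adjusted_rand_score(labels_b, transfer))
```

The same scheme works with any clusterer that can assign new points and any partition-similarity measure; local (per-cluster) versions compare individual clusters rather than whole partitions.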
Neural networks have grown tremendously in recent years and are used to solve numerous problems. Various types of neural networks have been introduced to deal with different types of problems. However, the main goal of any neural network is to transform non-linearly separable input data into more linearly separable abstract features using a hierarchy of layers, which are combinations of linear and non-linear functions. The most popular and common non-linear layers are activation functions (AFs), such as Logistic Sigmoid, Tanh, ReLU, ELU, Swish, and Mish. In this paper, a comprehensive overview and survey of AFs in neural networks for deep learning is presented. Different classes of AFs, such as Logistic Sigmoid and Tanh based, ReLU based, ELU based, and learning based, are covered. Several characteristics of AFs, such as output range, monotonicity, and smoothness, are also pointed out. A performance comparison is performed among 18 state-of-the-art AFs with different networks on different types of data. Insights into AFs are presented to help researchers conduct further research and practitioners select among the different choices. The code used for the experimental comparison is released at \url{https://github.com/shivram1987/ActivationFunctions}.
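For concreteness, here are a few of the surveyed AFs written out in NumPy (the definitions follow their standard published forms; the alpha and beta defaults are the common ones, not tuned values):

```python
import numpy as np

def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))
def tanh(x):    return np.tanh(x)
def relu(x):    return np.maximum(0.0, x)
def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))
def swish(x, beta=1.0):  # x * sigmoid(beta * x)
    return x * sigmoid(beta * x)
def mish(x):             # x * tanh(softplus(x))
    return x * np.tanh(np.log1p(np.exp(x)))

# Evaluate each AF on a small grid to compare output ranges and smoothness.
x = np.linspace(-3, 3, 7)
for f in (sigmoid, tanh, relu, elu, swish, mish):
    print(f"{f.__name__:>8}: {np.round(f(x), 3)}")
```

Even this small grid exposes the characteristics the survey tabulates: Sigmoid and Tanh saturate, ReLU zeros out all negative inputs, while ELU, Swish, and Mish remain smooth and pass small negative values through.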
The difficulty of deploying various deep learning (DL) models on diverse DL hardware has boosted the research and development of DL compilers in the community. Several DL compilers have been proposed by both industry and academia, such as TensorFlow XLA and TVM. In general, DL compilers take the DL models described in different DL frameworks as input and generate optimized code for diverse DL hardware as output. However, no existing survey has analyzed the unique design of DL compilers comprehensively. In this paper, we perform a comprehensive survey of existing DL compilers by dissecting their commonly adopted designs in detail, with an emphasis on the DL-oriented multi-level IRs and the frontend/backend optimizations. Specifically, we provide a comprehensive comparison among existing DL compilers from various aspects. In addition, we present a detailed analysis of the multi-level IR design and compiler optimization techniques. Finally, several insights are highlighted as potential research directions for DL compilers. This is the first survey paper focusing on the unique design of DL compilers, which we hope can pave the way for future research on DL compilers.
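As a toy illustration only (not any real compiler's IR or API), the sketch below mimics one common frontend optimization: fusing runs of adjacent elementwise operators in a graph-level IR into single kernels, so fewer memory round-trips are needed at runtime.

```python
# Toy graph-level IR and a fusion pass, illustrating in miniature the
# kind of frontend optimization the surveyed compilers perform.
FUSIBLE = {"add", "mul", "relu"}  # elementwise ops can be fused together

def fuse_elementwise(ops):
    """Greedily merge runs of adjacent elementwise ops into fused kernels."""
    fused, run = [], []
    for op in ops:
        if op in FUSIBLE:
            run.append(op)
        else:
            if run:
                fused.append("fused(" + "+".join(run) + ")")
                run = []
            fused.append(op)
    if run:
        fused.append("fused(" + "+".join(run) + ")")
    return fused

graph = ["conv2d", "add", "relu", "matmul", "mul", "add", "relu"]
print(fuse_elementwise(graph))
# ['conv2d', 'fused(add+relu)', 'matmul', 'fused(mul+add+relu)']
```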
Recent years have witnessed the enormous success of low-dimensional vector space representations of knowledge graphs for predicting missing facts or finding erroneous ones. Currently, however, it is not yet well-understood how ontological knowledge, e.g. given as a set of (existential) rules, can be embedded in a principled way. To address this shortcoming, in this paper we introduce a framework based on convex regions, which can faithfully incorporate ontological knowledge into the vector space embedding. Our technical contribution is twofold. First, we show that some of the most popular existing embedding approaches are not capable of modelling even very simple types of rules. Second, we show that our framework can represent ontologies that are expressed using so-called quasi-chained existential rules in an exact way, such that any set of facts which is induced using that vector space embedding is logically consistent and deductively closed with respect to the input ontology.
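To make the idea concrete, here is a deliberately simplified sketch in which each concept is embedded as an axis-aligned box, one convenient family of convex regions (the paper's construction is more general): a rule A -> B is satisfied by the embedding exactly when region A is contained in region B.

```python
import numpy as np

# Each concept is an axis-aligned box: (lower corner, upper corner).
regions = {
    "Dog":    (np.array([2.0, 2.0]), np.array([3.0, 3.0])),
    "Mammal": (np.array([1.0, 1.0]), np.array([4.0, 4.0])),
    "Plant":  (np.array([6.0, 0.0]), np.array([8.0, 2.0])),
}

def entails(a, b):
    """Is region a contained in region b, i.e. does the embedding
    satisfy the rule a -> b?"""
    (lo_a, hi_a), (lo_b, hi_b) = regions[a], regions[b]
    return bool(np.all(lo_b <= lo_a) and np.all(hi_a <= hi_b))

print(entails("Dog", "Mammal"))  # True: every Dog point is a Mammal point
print(entails("Dog", "Plant"))   # False: the regions are disjoint
```

Because containment is checked geometrically, any entity embedded inside the Dog box automatically satisfies every rule whose head region contains that box, which is the sense in which the induced fact set stays deductively closed.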
We propose a novel approach to multimodal sentiment analysis using deep neural networks that combine visual analysis and natural language processing. Our goal is different from the standard sentiment analysis goal of predicting whether a sentence expresses positive or negative sentiment; instead, we aim to infer the latent emotional state of the user. Thus, we focus on predicting the emotion word tags attached by users to their Tumblr posts, treating these as "self-reported emotions." We demonstrate that our multimodal model combining both text and image features outperforms separate models based solely on either images or text. Our model's results are interpretable, automatically yielding sensible word lists associated with emotions. We explore the structure of emotions implied by our model, compare it to what has been posited in the psychology literature, and validate our model on a set of images that have been used in psychology studies. Finally, our work also provides a useful tool for the growing academic study of images, both photographs and memes, on social networks.
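A hedged sketch of one way such a multimodal model can be wired up (the embedding dimensions, the late-fusion architecture, and the 15 emotion tags are illustrative assumptions, not the paper's configuration): precomputed text and image embeddings are concatenated and passed to a small classifier.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Concatenate precomputed text and image embeddings, then classify
    posts into emotion tags with a small MLP."""
    def __init__(self, text_dim=300, image_dim=2048, n_emotions=15):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + image_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_emotions),
        )

    def forward(self, text_emb, image_emb):
        return self.net(torch.cat([text_emb, image_emb], dim=-1))

model = LateFusionClassifier()
logits = model(torch.randn(8, 300), torch.randn(8, 2048))  # a batch of 8 posts
print(logits.shape)  # torch.Size([8, 15])
```

Dropping either input (e.g., feeding zeros for the image embedding) turns this into the unimodal baseline the abstract compares against.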