End-users are concerned about protecting the privacy of the sensitive personal data they generate while working with information systems. This concern extends both to the data they actively provide, such as personal identification supplied in exchange for products and services, and to the related metadata, such as unnecessary access to their location. This is where privacy-preserving technologies come into play, and the Internet Engineering Task Force (IETF) plays a major role in incorporating such technologies at the fundamental level. This paper therefore offers an overview of the privacy-preserving mechanisms for layer 3 (i.e., IP) and above that are currently under standardization at the IETF. These include encrypted DNS at layer 5, classified as DNS-over-TLS (DoT), DNS-over-HTTPS (DoH), and DNS-over-QUIC (DoQ), where underlying technologies such as QUIC belong to layer 4. We then discuss the Privacy Pass protocol and its application in generating Private Access Tokens, as well as Passkeys, which replace passwords for authentication at the application layer (i.e., on end-user devices). Lastly, to protect user privacy at the IP level, Private Relays and MASQUE are discussed. The aim is to make designers, implementers, and users of the Internet aware of privacy-related design choices.
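To make the encrypted-DNS idea concrete, the following is a minimal sketch of a DoH lookup: the DNS query travels inside an ordinary HTTPS connection, so on-path observers see only TLS traffic to the resolver rather than the queried hostname. It uses Cloudflare's publicly documented JSON interface; the endpoint, field names, and the example domain are illustrative and not taken from the paper.

```python
# Minimal DoH lookup sketch using a public resolver's JSON interface.
# The query is carried over HTTPS, hiding the requested hostname from
# on-path observers (they only see encrypted traffic to the resolver).
import requests

def doh_lookup(name: str, record_type: str = "A") -> list[str]:
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": name, "type": record_type},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    resp.raise_for_status()
    # The JSON answer section lists the resolved records, if any.
    return [answer["data"] for answer in resp.json().get("Answer", [])]

if __name__ == "__main__":
    print(doh_lookup("ietf.org"))
```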
Hardware security keys undoubtedly offer an advantage to users, as the "usability" pain is trivial compared to the substantial "security" gain in authentication. Naturally, the hardware factor in authentication has seen widespread adoption among average users, as it is ergonomically less demanding than phone texts or authentication prompts. This ergonomic advantage is particularly essential for users who are blind or have low vision, for whom interaction with a phone is impractical. However, for blind or low-vision users the "usability" pain might be much higher than for an average able-bodied user for the same "security" gain. To learn more, we conducted a usability assessment with ten blind or low-vision users setting up the OnlyKey two-factor authentication key. First, the setup process was insurmountable for more than half of the participants, resulting in abandonment of the hardware key. Second, the lack of tactile orientation led participants to consider the key impractical and prone to being misplaced or lost. We discuss the implications of our findings for future improvements in usable authentication for visually impaired users.
Age of information (AoI) is an effective performance metric measuring the freshness of information and is particularly suitable for applications involving status updates. In this paper, using the age violation probability as the metric, scheduling for heterogeneous multi-source systems is studied. Two queueing disciplines, namely the infinite packet queueing discipline and the single packet queueing discipline, are considered for scheduling packets within each source. A generalized round-robin (GRR) scheduling policy is then proposed to schedule the sources. Bounds on the exponential decay rate of the age violation probability under the proposed GRR scheduling policy are rigorously analyzed for each queueing discipline. Simulation results show that the proposed GRR policy can efficiently serve many sources with heterogeneous arrivals and that our bounds capture the true decay rate quite accurately. When specialized to the homogeneous source setting, the analysis substantiates the common belief that the single packet queueing discipline has better AoI performance than the infinite packet queueing discipline. Moreover, simulations on this special case reveal that, under the proposed scheduling policy, the two disciplines have similar asymptotic performance when the inter-arrival time is much larger than the total transmission time.
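The following is a toy discrete-time sketch of the kind of setup described above: a GRR scheduler visiting heterogeneous sources under the single packet queueing discipline, from which an empirical age violation probability is estimated. The integer visit weights, Bernoulli arrival rates, unit service time, and age threshold are illustrative assumptions; the paper's exact GRR construction and analysis are not reproduced here.

```python
# Toy simulation: GRR scheduling of heterogeneous sources with the
# single-packet queueing discipline (each source keeps only its freshest
# packet). Estimates the empirical age violation probability.
import random

def simulate_grr(arrival_rates, weights, age_threshold, horizon=200_000):
    n = len(arrival_rates)
    # One GRR round visits source i weights[i] times (illustrative integer weights).
    round_order = [i for i in range(n) for _ in range(weights[i])]
    age = [0] * n          # current AoI of each source at the monitor
    buffer = [None] * n    # generation time of the freshest waiting packet
    violations = samples = 0
    t = pos = 0
    while t < horizon:
        for i in range(n):                     # new status updates arrive;
            if random.random() < arrival_rates[i]:
                buffer[i] = t                  # single-packet queue overwrites stale packets
        src = round_order[pos % len(round_order)]
        pos += 1
        t += 1                                 # one slot of unit service time elapses
        for i in range(n):
            age[i] += 1                        # every source ages by one slot
        if buffer[src] is not None:
            age[src] = t - buffer[src]         # delivery resets the served source's age
            buffer[src] = None
        violations += sum(a > age_threshold for a in age)
        samples += n
    return violations / samples

print(simulate_grr(arrival_rates=[0.5, 0.1], weights=[1, 2], age_threshold=20))
```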
Environmental perception is a key element of autonomous driving because the information received from the perception module influences core driving decisions. An outstanding challenge in real-time perception for autonomous driving lies in finding the best trade-off between detection quality and latency. Major constraints on both computation and power have to be taken into account for real-time perception in autonomous vehicles. Larger object detection models tend to produce the best results but are also slower at runtime. Since the most accurate detectors cannot run in real time locally, we investigate the possibility of offloading computation to edge and cloud platforms, which are less resource-constrained. We create a synthetic dataset to train object detection models and evaluate different offloading strategies. Using real hardware and network simulations, we compare different trade-offs between prediction quality and end-to-end delay. Since sending raw frames over the network introduces additional transmission delays, we also explore the use of JPEG and H.265 compression at varying qualities and measure their impact on prediction metrics. We show that, with adequate compression, models can run in real time on the cloud while outperforming local detection.
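As a small illustration of the compression/transmission trade-off discussed above, the sketch below encodes one frame as JPEG at several quality settings and estimates the added network delay for an assumed uplink bandwidth. The random stand-in frame, the quality values, and the 50 Mbit/s link are illustrative assumptions; the effect on prediction metrics would be measured by running a detector on the decoded frames, which is omitted here.

```python
# Encode a frame at different JPEG qualities and estimate the extra
# transmission delay each setting would add on an assumed uplink.
import cv2
import numpy as np

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)  # stand-in camera frame
uplink_bps = 50e6                                                    # assumed 50 Mbit/s uplink

for quality in (95, 75, 50, 25):
    ok, buf = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
    assert ok
    tx_delay_ms = 1000 * (buf.size * 8) / uplink_bps
    decoded = cv2.imdecode(buf, cv2.IMREAD_COLOR)   # what the remote detector would receive
    print(f"quality={quality:3d}  size={buf.size / 1024:7.1f} KiB  "
          f"tx_delay~{tx_delay_ms:5.1f} ms  shape={decoded.shape}")
```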
This paper introduces a new method of discretization that collocates both endpoints of the domain and enables complete convergence of the costate variables associated with the Hamiltonian boundary-value problem. This is achieved by adding an \emph{exceptional sample} to the roots of the Legendre-Lobatto polynomial, thereby making the associated differentiation matrix full-rank. We study the location of the new sample that makes the differentiation matrix most robust to perturbations, and we prove that this location is also the choice that mitigates the Runge phenomenon associated with polynomial interpolation. Two benchmark problems are successfully solved in support of our theoretical findings. The new method is observed to converge exponentially with the number of discretization points.
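For readers unfamiliar with differentiation matrices, the following is a minimal numerical sketch (not the paper's construction): it builds the standard polynomial differentiation matrix for an arbitrary node set from barycentric weights and inspects its rank. The node set used below (Chebyshev-Lobatto points) is an illustrative stand-in; the paper instead augments the Legendre-Lobatto roots with an exceptional sample chosen for robustness to perturbations.

```python
# Build the polynomial differentiation matrix on a node set via barycentric
# weights: D[i, j] = (w_j / w_i) / (x_i - x_j) for i != j, and
# D[i, i] = -sum_{j != i} D[i, j], then report its numerical rank.
import numpy as np

def differentiation_matrix(x: np.ndarray) -> np.ndarray:
    n = len(x)
    diff = x[:, None] - x[None, :] + np.eye(n)   # unit diagonal avoids division by zero
    w = 1.0 / np.prod(diff, axis=1)              # barycentric weights w_j
    D = (w[None, :] / w[:, None]) / diff
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))          # rows of D annihilate constants
    return D

x = np.cos(np.pi * np.arange(6) / 5)             # Chebyshev-Lobatto points as a stand-in
D = differentiation_matrix(x)
print("rank of D:", np.linalg.matrix_rank(D), "for", len(x), "nodes")
```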
Heart failure is a debilitating condition that affects millions of people worldwide and has a significant impact on their quality of life and mortality rates. An objective assessment of cardiac pressures remains an important method for the diagnosis and treatment prognostication of patients with heart failure. Although cardiac catheterization is the gold standard for estimating central hemodynamic pressures, it is an invasive procedure that carries inherent risks, making it potentially dangerous for some patients. Approaches that leverage non-invasive signals, such as the electrocardiogram (ECG), promise to make routine estimation of cardiac pressures feasible in both inpatient and outpatient settings. Prior models trained to estimate intracardiac pressures (e.g., mean pulmonary capillary wedge pressure (mPCWP)) in a supervised fashion have shown good discriminatory ability but have been limited to the labeled data from the heart failure cohort. To address this issue and build a robust representation, we apply deep metric learning (DML) and propose a novel self-supervised DML with distance-based mining that improves the performance of a model with limited labels. We use a dataset of over 5.4 million ECGs without concomitant central pressure labels to pre-train a self-supervised DML model, which showed improved classification of elevated mPCWP compared to self-supervised contrastive baselines. Additionally, the supervised DML model, trained on ECGs with access to 8,172 mPCWP labels, demonstrated significantly better performance on the mPCWP regression task than the supervised baseline. Moreover, our data suggest that DML yields models that perform well across patient subgroups, even when some subgroups are under-represented in the dataset. Our code is available at //github.com/mandiehyewon/ssldml
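To illustrate the distance-based mining idea in isolation, the following is a minimal sketch of a triplet-style metric loss where, for each anchor, the hardest (closest) non-matching sample in the batch is selected as the negative. The tiny inputs, the way positives are formed (e.g., two augmented views of the same ECG), and the margin are illustrative assumptions; the paper's self-supervised DML objective and released code are not reproduced here.

```python
# Distance-based mining for a triplet-style metric loss on batched embeddings.
import torch
import torch.nn.functional as F

def distance_mined_triplet_loss(anchor_emb, positive_emb, margin=0.5):
    """anchor_emb, positive_emb: (B, D) embeddings of two views of the same items."""
    anchor_emb = F.normalize(anchor_emb, dim=1)
    positive_emb = F.normalize(positive_emb, dim=1)
    dists = torch.cdist(anchor_emb, positive_emb)   # (B, B) pairwise distances
    pos_d = dists.diag()                            # matched views are the positives
    # Mining step: for each anchor, the closest non-matching sample is the hard negative.
    masked = dists + torch.eye(len(dists), device=dists.device) * 1e9
    neg_d = masked.min(dim=1).values
    return F.relu(pos_d - neg_d + margin).mean()

# Usage with random stand-in embeddings (in practice: encoder(view_1), encoder(view_2)).
a, p = torch.randn(32, 128), torch.randn(32, 128)
print(distance_mined_triplet_loss(a, p).item())
```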
The analysis of public affairs documents is crucial for citizens, as it promotes transparency, accountability, and informed decision-making. It allows citizens to understand government policies, participate in public discourse, and hold representatives accountable. It is also crucial, and sometimes a matter of life or death, for companies whose operations depend on certain regulations. Large Language Models (LLMs) have the potential to greatly enhance the analysis of public affairs documents by effectively processing and understanding the complex language used in them. In this work, we analyze the performance of LLMs in classifying public affairs documents. As a naturally multi-label task, the classification of these documents presents important challenges. We use a regex-powered tool to collect a database of public affairs documents with more than 33K samples and 22.5M tokens. Our experiments assess the performance of four different Spanish LLMs in classifying up to 30 different topics in the data under different configurations. The results show that LLMs can be of great use for processing domain-specific documents, such as those in the domain of public affairs.
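As a rough illustration of what multi-label topic classification of such documents looks like in code, the sketch below uses a generic zero-shot classification pipeline. This is not the paper's setup: the pipeline's default model is an English NLI model (a Spanish NLI or fine-tuned Spanish LLM would be substituted in practice), and the topic labels and example sentence are illustrative.

```python
# Illustrative multi-label topic tagging with a zero-shot classification pipeline.
from transformers import pipeline

# Default model is an English NLI model; a Spanish model would be used in practice.
classifier = pipeline("zero-shot-classification")

topics = ["taxation", "healthcare", "energy", "digital regulation"]  # illustrative subset
text = "El Consejo de Ministros aprueba un nuevo impuesto sobre los servicios digitales."

# multi_label=True scores each topic independently, matching the multi-label setting.
result = classifier(text, candidate_labels=topics, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label:>20s}  {score:.2f}")
```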
Music streaming services are increasingly popular among younger generations, who seek social experiences through personal expression and the sharing of subjective feelings in comments. However, such emotional aspects are often ignored by current platforms, which affects listeners' ability to find music that triggers specific personal feelings. To address this gap, this study proposes a novel approach that leverages deep learning methods to capture contextual keywords, sentiments, and induced mechanisms from song comments. The study augments a current music app with two features: the presentation of tags that best represent song comments, and a novel map metaphor that reorganizes song comments by chronological order, content, and sentiment. The effectiveness of the proposed approach is validated through a usage scenario and a user study, which demonstrate its capability to improve the user experience of exploring songs and browsing comments of interest. This study contributes to the advancement of music streaming services by providing a more personalized and emotionally rich music experience for younger generations.
A community reveals features and connections of its members that differ from those in other communities in a network. Detecting communities is of great significance in network analysis. Beyond classical spectral clustering and statistical inference methods, deep learning techniques for community detection have developed significantly in recent years, owing to their advantages in handling high-dimensional network data. Hence, a comprehensive overview of the latest progress in community detection through deep learning is timely for both academics and practitioners. This survey proposes a new taxonomy covering different categories of state-of-the-art methods, including models based on deep neural networks, deep nonnegative matrix factorization, and deep sparse filtering. The main category, deep neural networks, is further divided into convolutional networks, graph attention networks, generative adversarial networks, and autoencoders. The survey also summarizes popular benchmark data sets, model evaluation metrics, and open-source implementations to support experimentation. We then discuss practical applications of community detection in various domains and point to implementation scenarios. Finally, we outline future directions by suggesting challenging topics in this fast-growing field of deep learning.
Knowledge graphs capture interlinked information between entities and represent an attractive source of structured information that can be harnessed for recommender systems. However, existing recommender engines that use knowledge graphs rely on manually designed features, do not allow for end-to-end training, or scale poorly. Here we propose Knowledge Graph Convolutional Networks (KGCN), an end-to-end trainable framework that harnesses item relationships captured by the knowledge graph to provide better recommendations. Conceptually, KGCN computes user-specific item embeddings by first applying a trainable function that identifies important knowledge graph relations for a given user, thereby transforming the knowledge graph into a user-specific weighted graph. KGCN then applies a graph convolutional neural network that computes the embedding of an item node by propagating and aggregating information from its knowledge graph neighborhood. Moreover, to provide a better inductive bias, KGCN uses label smoothness (LS) regularization over edge weights, which we prove to be equivalent to a label propagation scheme on the graph. Finally, we unify KGCN and LS regularization and present a scalable minibatch implementation of the KGCN-LS model. Experiments show that KGCN-LS outperforms strong baselines on four datasets. KGCN-LS also performs well in sparse scenarios and is highly scalable with respect to the knowledge graph size.
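The following is a minimal single-layer sketch of the idea described above: score each knowledge-graph relation for a given user, turn the scores into user-specific edge weights, and aggregate neighbor entity embeddings into a user-specific item embedding. The dimensions, the softmax scoring, the "concat" aggregator, and the toy neighborhood are illustrative assumptions rather than the released KGCN-LS implementation.

```python
# One KGCN-style aggregation step: user-specific relation scores weight the
# item's KG neighborhood, which is then combined with the item embedding.
import torch
import torch.nn.functional as F

def kgcn_item_embedding(user_emb, item_emb, neighbor_embs, relation_embs, W):
    """
    user_emb:      (d,)    embedding of the target user
    item_emb:      (d,)    embedding of the candidate item (a KG entity)
    neighbor_embs: (k, d)  embeddings of the item's KG neighbors
    relation_embs: (k, d)  embeddings of the relations connecting them
    W:             (2d, d) trainable aggregation weight
    """
    scores = relation_embs @ user_emb              # (k,) user-specific relation scores
    weights = F.softmax(scores, dim=0)             # user-specific edge weights
    neighborhood = weights @ neighbor_embs         # (d,) weighted neighborhood aggregation
    combined = torch.cat([item_emb, neighborhood]) # (2d,) "concat" aggregator
    return torch.relu(combined @ W)                # (d,) user-specific item embedding

d, k = 16, 4
emb = kgcn_item_embedding(torch.randn(d), torch.randn(d),
                          torch.randn(k, d), torch.randn(k, d),
                          torch.randn(2 * d, d))
print(emb.shape)  # torch.Size([16])
```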
Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch will lead to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style and illumination, and 2) the instance-level shift, such as object appearance and size. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, on the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on the two levels are further reinforced with a consistency regularizer to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our approach on multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our approach for robust object detection in various domain shift scenarios.
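To make the adversarial ingredient concrete, the sketch below shows a gradient reversal layer feeding a small image-level domain classifier attached to backbone features: the classifier learns to tell source from target, while the reversed gradient pushes the backbone toward domain-invariant features. The feature shapes and classifier architecture are illustrative; the instance-level classifier and the consistency regularizer are omitted.

```python
# Gradient reversal + image-level domain classifier (adversarial feature alignment).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd: float = 1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class ImageLevelDomainClassifier(nn.Module):
    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 128, kernel_size=1), nn.ReLU(),
            nn.Conv2d(128, 1, kernel_size=1),   # per-location source/target logit
        )

    def forward(self, feat, lambd: float = 1.0):
        return self.net(GradReverse.apply(feat, lambd))

# Usage: adversarial loss on backbone features (source labeled 0, target labeled 1).
feat = torch.randn(2, 256, 38, 50, requires_grad=True)  # stand-in backbone feature map
logits = ImageLevelDomainClassifier()(feat)
domain_labels = torch.zeros_like(logits)                # e.g., an all-source minibatch
loss = nn.functional.binary_cross_entropy_with_logits(logits, domain_labels)
loss.backward()                                          # reversed gradient reaches the backbone
```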