DNS, one of the fundamental protocols of the TCP/IP stack, has evolved over the years to protect against threats and attacks. This study examines the risks associated with DNS and explores recent advancements that contribute towards making the DNS ecosystem resilient against various attacks while safeguarding user privacy.
Safety measures must be systematically investigated to determine the extent to which they ensure the intended performance of Deep Neural Networks (DNNs) in critical applications. Because verification methods for high-dimensional DNNs are lacking, a trade-off is needed between accepted performance and the handling of out-of-distribution (OOD) samples. This work evaluates rejecting outputs of semantic segmentation DNNs by applying a Mahalanobis distance (MD), computed with respect to the class-conditional Gaussian distribution of the most probable (predicted) class, as an OOD score. The evaluation covers three DNNs trained on the Cityscapes dataset and tested on four automotive datasets, and finds that classification risk can be drastically reduced at the cost of pixel coverage, even on unseen datasets. These findings support the legitimization of safety measures and motivate their use when arguing for the safe deployment of DNNs in automotive perception.
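As a concrete illustration, the following is a minimal sketch of the MD-based OOD score, not the exact pipeline of this work: it assumes per-pixel feature vectors have already been extracted from the segmentation network, and all function names are illustrative.

    import numpy as np

    def fit_class_gaussians(features, labels, num_classes):
        # Estimate per-class means and a tied covariance from training
        # features (N x D) with integer class labels (N,).
        means = np.stack([features[labels == c].mean(axis=0)
                          for c in range(num_classes)])
        centered = features - means[labels]
        precision = np.linalg.pinv(centered.T @ centered / len(features))
        return means, precision

    def mahalanobis_ood_score(feature, predicted_class, means, precision):
        # Squared MD to the Gaussian of the predicted class; larger
        # values indicate more OOD-like inputs, so pixels can be
        # rejected by thresholding at a tau tuned on validation data.
        diff = feature - means[predicted_class]
        return float(diff @ precision @ diff)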
Famous people, such as celebrities and influencers, are harassed online on a daily basis. Online harassment disturbs them mentally and negatively affects society. However, few studies have examined the online harassment victimization of famous people, and its effects remain unclear. We surveyed Japanese famous people ($N=213$), influential individuals who appear on television and other traditional media as well as on social media, about online harassment victimization, emotional injury, and actions taken against offenders, and found that various forms of online harassment are prevalent. Some victims used the anti-harassment functions provided by weblogs and social media systems (e.g., blocking/muting/reporting offender accounts and closing comment forms), talked about their victimization with people close to them, and contacted relevant authorities (talent agencies, legal consultants, and the police) to take legal action. By contrast, some victims felt compelled to accept the harassment and took no action against offenses. We propose several approaches to support victims, inhibit online harassment, and educate people. Our findings can help platforms establish support systems against online harassment.
The emergent abilities of Large Language Models (LLMs), which power tools like ChatGPT and Bard, have produced both excitement and worry about how AI will impact academic writing. In response to rising concerns about AI use, authors of academic publications may decide to voluntarily disclose any AI tools they use to revise their manuscripts, and journals and conferences could begin mandating disclosure and/or turn to using detection services, as many teachers have done with student writing in class settings. Given these looming possibilities, we investigate whether academics view it as necessary to report AI use in manuscript preparation and how detectors react to the use of AI in academic writing.
In this paper, we present a variety of classification experiments related to the task of fictional discourse detection. We utilize a diverse array of datasets, including contemporary professionally published fiction, historical fiction from the HathiTrust, fanfiction, stories from Reddit, folk tales, GPT-generated stories, and anglophone world literature. Additionally, we introduce a new feature set of word "supersenses" that facilitates the goal of semantic generalization. The detection of fictional discourse can help enrich our knowledge of large cultural heritage archives and assist in understanding the distinctive qualities of fictional storytelling more broadly.
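A minimal sketch of supersense feature extraction using WordNet lexicographer categories (e.g., noun.person, verb.motion) as supersenses; the first-sense heuristic below is an illustrative simplification, not necessarily the disambiguation strategy used in the experiments.

    from collections import Counter
    from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

    def supersense_counts(tokens):
        # Map each token to the supersense (lexname) of its first
        # WordNet sense; tokens with no synset are skipped.
        counts = Counter()
        for tok in tokens:
            synsets = wn.synsets(tok.lower())
            if synsets:
                counts[synsets[0].lexname()] += 1  # e.g., 'noun.person'
        return counts

    print(supersense_counts("the king sailed across the stormy sea".split()))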
Liquid staking has become the largest category of decentralized finance protocols in terms of total value locked. However, few studies exist on its implementation designs or underlying risks. Liquid staking protocols allow users to earn staking rewards without the disadvantage of locking their capital at validators. Yet, some view them as a threat to Proof-of-Stake blockchain security. This paper is the first work to classify liquid staking implementations. It analyzes the historical performance of major liquid staking tokens in comparison to traditional staking for the largest Proof-of-Stake blockchains. Furthermore, the research investigates the impact of centralization, maximum extractable value, and the migration of Ethereum from Proof-of-Work to Proof-of-Stake on the tokens' performance. Examining the tracking errors of the liquid staking providers relative to the staking rewards shows that they are persistent and cannot be explained by macro-variables of the currency, such as its variance or return.
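For reference, the tracking-error measure amounts to the following computation, shown as a minimal sketch assuming aligned periodic return series; the annualization factor and argument names are illustrative rather than the paper's exact specification.

    import numpy as np

    def tracking_error(token_returns, staking_returns, periods_per_year=365):
        # Standard deviation of the difference between the liquid staking
        # token's returns and the underlying staking rewards, annualized.
        # A persistently nonzero value indicates the token fails to
        # replicate the staking benchmark.
        active = np.asarray(token_returns) - np.asarray(staking_returns)
        return active.std(ddof=1) * np.sqrt(periods_per_year)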
Recent initiatives known as Future Internet Architectures (FIAs) seek to redesign the Internet to improve performance, scalability, and security. However, some governments perceive Internet access as a threat to their political standing and engage in widespread network surveillance and censorship. In this paper, we provide an in-depth analysis of the designs of prominent FIAs to help understand how FIAs would impact surveillance and censorship capabilities. We then survey the applicability of privacy-enhancing technologies to FIAs. We conclude with guidelines for future research into novel FIA-based privacy-enhancing technologies, and recommendations to guide the evaluation of these technologies.
The breakthroughs in AI and Machine Learning have brought a new revolution in robotics, resulting in the construction of more sophisticated robotic systems. Not only can these robotic systems benefit all domains, but they can also accomplish tasks that seemed unimaginable a few years ago. From swarms of small autonomous robots working together to move heavy and large objects, to seemingly indestructible robots capable of operating in the harshest environments, we can see robotic systems designed for every task imaginable. Among these, a key scenario where robotic systems can help is disaster response and rescue operations. Robotic systems are capable of successfully conducting tasks such as removing heavy materials, utilizing multiple advanced sensors to find objects of interest, moving through debris and various inhospitable environments, and, not least, flying. Even with so much potential, we rarely see robotic systems utilized in disaster response scenarios and rescue missions. Many factors could be responsible for their low utilization in such scenarios. One of the key factors involves challenges related to Human-Robot Interaction (HRI). Therefore, in this paper, we seek to understand the HRI challenges involved in utilizing robotic systems in disaster response and rescue operations. Furthermore, we review several robotic systems proposed for disaster response scenarios and identify their HRI challenges. Finally, we attempt to address these challenges by drawing on ideas from various proposed research works.
Managers, employers, policymakers, and others often seek to understand whether decisions are biased against certain groups. One popular analytic strategy is to estimate disparities after adjusting for observed covariates, typically with a regression model. This approach, however, suffers from two key statistical challenges. First, omitted-variable bias can skew results if the model does not adjust for all relevant factors; second, and conversely, included-variable bias -- a lesser-known phenomenon -- can skew results if the set of covariates includes irrelevant factors. Here we introduce a new, three-step statistical method, which we call risk-adjusted regression, to address both concerns in settings where decision makers have clearly measurable objectives. In the first step, we use all available covariates to estimate the value, or inversely, the risk, of taking a certain action, such as approving a loan application or hiring a job candidate. Second, we measure disparities in decisions after adjusting for these risk estimates alone, mitigating the problem of included-variable bias. Finally, in the third step, we assess the sensitivity of results to potential mismeasurement of risk, addressing concerns about omitted-variable bias. To do so, we develop a novel, non-parametric sensitivity analysis that yields tight bounds on the true disparity in terms of the average gap between true and estimated risk -- a single interpretable parameter that facilitates credible estimates. We demonstrate this approach on a detailed dataset of 2.2 million police stops of pedestrians in New York City, and show that traditional statistical tests of discrimination can substantially underestimate the magnitude of disparities.
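A minimal sketch of the first two steps of risk-adjusted regression; the inputs, a logistic model standing in for the paper's estimators, and the helper name are illustrative assumptions, and the step-three sensitivity bounds are omitted.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def risk_adjusted_disparity(X, outcome, group, decision):
        # Step 1: estimate risk from all available covariates X, here the
        # probability of the outcome the decision maker targets
        # (e.g., a loan being repaid).
        risk_model = LogisticRegression(max_iter=1000).fit(X, outcome)
        risk_hat = risk_model.predict_proba(X)[:, 1]
        # Step 2: measure the disparity in decisions after adjusting for
        # the risk estimate alone, mitigating included-variable bias.
        Z = np.column_stack([group, risk_hat])
        fit = LogisticRegression(max_iter=1000).fit(Z, decision)
        # Step 3 (omitted): bound this estimate under potential
        # mismeasurement of risk via the non-parametric sensitivity
        # analysis described above.
        return fit.coef_[0][0]  # log-odds gap attributable to group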
Recently, Mutual Information (MI) has attracted attention for bounding the generalization error of Deep Neural Networks (DNNs). However, accurately estimating the MI in DNNs is intractable, so most previous works have had to relax the MI bound, which in turn weakens the information-theoretic explanation for generalization. To address this limitation, this paper introduces a probabilistic representation of DNNs for accurately estimating the MI. Leveraging the proposed MI estimator, we validate the information-theoretic explanation for generalization and derive a tighter generalization bound than the state-of-the-art relaxations.
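For context, a canonical MI bound of the kind relaxed by prior work, assuming an i.i.d. training sample $S$ of size $n$, learned weights $W$, and a $\sigma$-sub-Gaussian loss, is
$$\big|\mathbb{E}\left[L_\mu(W) - L_S(W)\right]\big| \le \sqrt{\frac{2\sigma^2}{n}\, I(S;W)},$$
where $L_\mu$ and $L_S$ denote the population and empirical risks; the proposed estimator aims to evaluate $I(S;W)$ accurately rather than relax it.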
Graph Neural Networks (GNNs) have been studied through the lens of expressive power and generalization. However, their optimization properties are less well understood. We take the first step towards analyzing GNN training by studying the gradient dynamics of GNNs. First, we analyze linearized GNNs and prove that, despite the non-convexity of training, convergence to a global minimum at a linear rate is guaranteed under mild assumptions that we validate on real-world graphs. Second, we study what may affect the GNNs' training speed. Our results show that the training of GNNs is implicitly accelerated by skip connections, more depth, and/or a good label distribution. Empirical results confirm that our theoretical findings for linearized GNNs align with the training behavior of nonlinear GNNs. Our results provide the first theoretical support for the success of GNNs with skip connections in terms of optimization, and suggest that deep GNNs with skip connections would be promising in practice.
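A minimal sketch of the skip-connection layer this analysis concerns; the ReLU choice, shapes, and function names are illustrative assumptions, with the linearized variant dropping the nonlinearity as in the theory.

    import numpy as np

    def gnn_layer_with_skip(A, H, W):
        # One graph convolution with an additive skip connection:
        # H' = ReLU(A @ H @ W) + H, where A is the normalized adjacency,
        # H the node features (N x D), and W a square weight matrix (D x D).
        return np.maximum(A @ H @ W, 0.0) + H

    def linearized_gnn_layer_with_skip(A, H, W):
        # Linearized variant used in the convergence analysis.
        return A @ H @ W + H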