Technologies adopted by the public sector have transformed the work practices of employees in public agencies by creating different means of communication and decision-making. Although much of the recent research in the future-of-work domain has concentrated on the effects of technological advancements on public sector employees, the influence on the work practices of external stakeholders engaging with this sector remains under-explored. In this paper, we focus on a digital platform called OneStop, which is deployed by several building departments across the U.S. and aims to integrate various steps and services into a single point of online contact between public sector employees and the public. Drawing on semi-structured interviews with 22 stakeholders, including local business owners, experts involved in the construction process, community representatives, and building department employees, we investigate how this technology transition has impacted the work of these different stakeholders. We observe multifaceted perspectives and experiences resulting from the adoption of OneStop. OneStop exacerbated inequitable practices for local business owners due to a lack of face-to-face interactions with department employees. For public sector employees, OneStop standardized work practices in ways that embodied the building department's priorities and values. Based on our findings, we discuss tensions around standardization, equality, and equity in technology transition, as well as design implications for equitable practices in the public sector.
Practical mechanisms often limit agent reports to constrained formats such as trades or orderings, potentially limiting the information agents can express. We propose a novel class of mechanisms that elicit agent reports in natural language and leverage the world-modeling capabilities of large language models (LLMs) to select outcomes and assign payoffs. We identify sufficient conditions for these mechanisms to be incentive-compatible and efficient: the LLM must be a good enough world model, and a strong inter-agent information over-determination condition must hold. We show situations where these LLM-based mechanisms can successfully aggregate information in signal structures on which prediction markets fail.
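As a rough sketch of how such a mechanism could be wired together (this is our illustration, not the paper's construction; `query_world_model`, the outcome-selection rule, and the log-scoring payoff below are all assumptions), consider:

```python
import math

def run_llm_mechanism(reports, outcomes, query_world_model):
    """Toy sketch of a natural-language mechanism (illustrative assumptions only).

    reports: dict mapping agent name -> free-text report
    outcomes: list of candidate outcomes
    query_world_model: callable (reports, outcome) -> score in (0, 1] assigned
        by an LLM world model to that outcome given the reports (assumed given)
    """
    # Ask the (assumed) LLM world model to score each outcome given all reports.
    scores = {o: query_world_model(reports, o) for o in outcomes}

    # Select the outcome the world model considers best.
    chosen = max(scores, key=scores.get)

    # Pay each agent with a log-scoring-style payoff on the marginal value of
    # their report: how much their text moved the score of the chosen outcome.
    payoffs = {}
    for agent in reports:
        others = {a: r for a, r in reports.items() if a != agent}
        p_with = max(query_world_model(reports, chosen), 1e-9)
        p_without = max(query_world_model(others, chosen), 1e-9)
        payoffs[agent] = math.log(p_with) - math.log(p_without)

    return chosen, payoffs
```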
Extreme value analysis (EVA) uses data to estimate long-term extreme environmental conditions for variables such as significant wave height and period, for the design of marine structures. Together with models for the short-term evolution of the ocean environment and for wave-structure interaction, EVA provides a basis for full probabilistic design analysis. Alternatively, environmental contours provide an approximate approach to estimating structural integrity, without requiring structural knowledge. These contour methods also exploit statistical models, including EVA, but avoid the need for structural modelling by making what are believed to be conservative assumptions about the shape of the structural failure boundary in the environment space. These assumptions, however, may not always be appropriate, or may lead to wasted resources through over-design. We demonstrate a methodology for efficient, fully probabilistic analysis of structural failure. From this, we estimate the joint conditional probability density of the environment (CDE), given the occurrence of an extreme structural response. We use the CDE as a diagnostic to highlight the deficiencies of environmental contour methods for design; none of the IFORM environmental contours considered characterise the CDE well for three example structures.
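In notation introduced here purely for illustration (writing X for the environmental variables and F for the structural failure event; these symbols are not taken from the paper), the CDE follows from Bayes' theorem:

\[
f_{\mathbf{X}\mid F}(\mathbf{x}) \;=\; \frac{\Pr(F \mid \mathbf{X}=\mathbf{x})\, f_{\mathbf{X}}(\mathbf{x})}{\int \Pr(F \mid \mathbf{X}=\mathbf{u})\, f_{\mathbf{X}}(\mathbf{u})\,\mathrm{d}\mathbf{u}} ,
\]

so the CDE concentrates on the environmental conditions most likely to accompany failure, which is precisely the region of the environment space that a contour method hopes to approximate.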
Periodic activation functions, often referred to as learned Fourier features, have been widely demonstrated to improve sample efficiency and stability in a variety of deep RL algorithms. Potentially incompatible hypotheses have been advanced about the source of these improvements. One is that periodic activations learn low-frequency representations and, as a result, avoid overfitting to bootstrapped targets. Another is that periodic activations learn high-frequency representations that are more expressive, allowing networks to quickly fit complex value functions. We analyse these claims empirically, finding that periodic representations consistently converge to high frequencies regardless of their initialisation frequency. We also find that while periodic activation functions improve sample efficiency, they exhibit worse generalization on states with added observation noise, especially when compared to otherwise equivalent networks with ReLU activation functions. Finally, we show that weight decay regularization can partially offset the overfitting of periodic activation functions, yielding value functions that learn quickly while also generalizing.
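A minimal sketch of a learned Fourier feature layer of the kind discussed above, assuming a sinusoidal first layer with a tunable initialisation frequency (the exact architecture and initialisation scheme used in the paper may differ):

```python
import torch
import torch.nn as nn

class LearnedFourierFeatures(nn.Module):
    """Periodic input layer: phi(x) = sin(W x + b), with W and b learned.

    init_scale plays the role of the initialisation frequency: larger values
    start the layer at higher frequencies. The exact scheme is an assumption
    made for this sketch.
    """
    def __init__(self, in_dim, out_dim, init_scale=1.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        nn.init.normal_(self.linear.weight, std=init_scale)
        nn.init.uniform_(self.linear.bias, -torch.pi, torch.pi)

    def forward(self, x):
        return torch.sin(self.linear(x))

def make_q_network(obs_dim, n_actions, width=256, init_scale=1.0):
    """Value network with a periodic first layer followed by ReLU layers."""
    return nn.Sequential(
        LearnedFourierFeatures(obs_dim, width, init_scale),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, n_actions),
    )
```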
An increasing number of unhardened commercial-off-the-shelf embedded devices are deployed under harsh operating conditions and in highly dependable systems. Due to the hardware degradation mechanisms that affect these devices, ageing detection and monitoring are crucial to prevent critical failures. In this paper, we empirically study the propagation delay of 298 naturally aged FPGA devices deployed in the European XFEL particle accelerator. Based on in-field measurements, we find that operational devices show significantly lower switching frequencies than unused chips, and that increased gamma and neutron radiation doses correlate with increased hardware degradation. Furthermore, we demonstrate the feasibility of developing machine learning models that estimate the switching frequencies of the devices based on historical and environmental data.
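For illustration only, a regression baseline of the kind described above could be sketched as follows; the features and data here are synthetic placeholders, not the XFEL measurements, and the model choice is ours:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Placeholder feature matrix: one row per device, with columns standing in for
# historical/environmental data such as operating hours, temperature, and
# accumulated gamma/neutron dose; the target stands in for measured switching frequency.
rng = np.random.default_rng(0)
X = rng.random((298, 4))
y = 200.0 - 5.0 * X @ np.array([1.0, 0.5, 2.0, 1.5]) + rng.normal(0, 0.5, 298)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("mean absolute error:", mean_absolute_error(y_test, model.predict(X_test)))
```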
In observational studies of discrimination, the most common statistical approaches consider either the rate at which decisions are made (benchmark tests) or the success rate of those decisions (outcome tests). Both tests, however, have well-known statistical limitations, sometimes suggesting discrimination even when there is none. Despite the fallibility of the benchmark and outcome tests individually, here we prove a surprisingly strong statistical guarantee: under a common non-parametric assumption, at least one of the two tests must be correct; consequently, when both tests agree, they are guaranteed to yield correct conclusions. We present empirical evidence that the underlying assumption holds approximately in several important domains, including lending, education, and criminal justice -- and that our hybrid test is robust to the moderate violations of the assumption that we observe in practice. Applying this approach to 2.8 million police stops across California, we find evidence of widespread racial discrimination.
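As a toy illustration of the two tests and the hybrid reading described above (the statistics and decision rule here are simplified for exposition and are not the paper's exact procedure), a pandas sketch might look like this:

```python
import pandas as pd

def two_tests(df, group_col, decision_col, hit_col, reference):
    """Toy benchmark and outcome tests on decision-level data.

    decision_col: 1 if the discretionary decision (e.g. a search) was made.
    hit_col:      1 if that decision was "successful" (e.g. contraband found);
                  only meaningful on rows where the decision was made.
    """
    decision_rate = df.groupby(group_col)[decision_col].mean()
    hit_rate = df[df[decision_col] == 1].groupby(group_col)[hit_col].mean()

    results = {}
    for g in decision_rate.index:
        if g == reference:
            continue
        benchmark = decision_rate[g] > decision_rate[reference]  # decided against more often
        outcome = hit_rate[g] < hit_rate[reference]              # at a lower hit rate
        # Hybrid reading, following the abstract: trust the shared conclusion
        # only when the two tests agree.
        if benchmark and outcome:
            results[g] = "both tests indicate discrimination"
        elif not benchmark and not outcome:
            results[g] = "both tests indicate no discrimination"
        else:
            results[g] = "tests disagree: inconclusive under the hybrid rule"
    return results
```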
This study explores how sentence types affect the Lombard effect and intelligibility enhancement, focusing on comparisons between natural and grid sentences. Using the Lombard Chinese-TIMIT (LCT) corpus and the Enhanced MAndarin Lombard Grid (EMALG) corpus, we analyze changes in phonetic and acoustic features across different noise levels. Our results show that grid sentences produce more pronounced Lombard effects than natural sentences. We then develop and test a normal-to-Lombard conversion model, trained separately on the LCT and EMALG corpora. Subjective and objective evaluations show that natural sentences are better at maintaining speech quality during intelligibility enhancement, whereas grid sentences can provide higher intelligibility due to their more pronounced Lombard effect. This study provides a valuable perspective on enhancing speech communication in noisy environments.
Injustice occurs when someone experiences unfair treatment or has their rights violated, and it often stems from implicit biases and prejudice such as stereotypes. The automated identification of injustice in text has received little attention, due in part to the fact that the underlying implicit biases or stereotypes are rarely explicitly stated and that instances often occur unconsciously, given the pervasive nature of prejudice in society. Here, we describe a novel framework that combines a fine-tuned BERT-based bias detection model, two stereotype detection models, and a lexicon-based approach to show that epistemological biases (i.e., words that presuppose, entail, assert, hedge, or boost text so as to erode or assert a person's capacity as a knower) can assist with the automatic detection of injustice in text. Because the news media contains many instances of injustice (i.e., discriminatory narratives), it serves as our use case here. We conduct and discuss an empirical qualitative research study that shows how the framework can be applied to detect injustices, even at higher volumes of data.
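A minimal sketch of how such components might be combined (the callables, thresholds, and combination rule below are placeholders of ours, not the authors' released models):

```python
def detect_injustice(sentence, bias_model, stereotype_models, epistemic_lexicon):
    """Illustrative combination logic only; models and thresholds are assumptions.

    bias_model(sentence)      -> probability the sentence expresses bias
    stereotype_models         -> list of callables, each -> probability of a stereotype
    epistemic_lexicon         -> set of markers (hedges, boosters, presupposition
                                 or entailment cues, assertives)
    """
    bias_score = bias_model(sentence)
    stereotype_score = max(m(sentence) for m in stereotype_models)
    epistemic_hits = [w for w in sentence.lower().split() if w in epistemic_lexicon]

    # Flag a sentence when model evidence of bias/stereotyping co-occurs with
    # epistemological-bias markers that erode or assert someone's capacity as a knower.
    flagged = (bias_score > 0.5 or stereotype_score > 0.5) and bool(epistemic_hits)
    return {"flagged": flagged, "bias": bias_score,
            "stereotype": stereotype_score, "epistemic_markers": epistemic_hits}
```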
The aim of this paper is to improve the large deviation principle for the number of descents in a random permutation by establishing a sharp large deviation principle of any order. We shall also prove a sharp large deviation principle of any order for the major index in a random permutation.
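For orientation (our notation, not the paper's statement), a sharp large deviation principle of order p for the number of descents D_n typically refines the first-order estimate \(\Pr(D_n \ge nx) = \exp(-nI(x) + o(n))\) into an expansion of the form

\[
\Pr(D_n \ge nx) \;=\; \frac{\exp\!\big(-n I(x)\big)}{\sigma_x t_x \sqrt{2\pi n}} \left( 1 + \sum_{k=1}^{p} \frac{b_k(x)}{n^{k}} + O\!\left(\frac{1}{n^{p+1}}\right) \right),
\]

where I is the rate function and the coefficients b_k(x) are explicit.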
We report on the mechanization of (preference-based) conditional normative reasoning. Our focus is on Aqvist's system E for conditional obligation, and its extensions. Our mechanization is achieved via a shallow semantical embedding in Isabelle/HOL. We consider two possible uses of the framework. The first is as a tool for meta-reasoning about the considered logic. We employ it for the automated verification of deontic correspondences (broadly conceived) and related matters, analogous to what has been previously achieved for the modal logic cube. The equivalence is verified automatically in one direction, from the semantic property to the axiom. The second use is as a tool for assessing ethical arguments. We provide a computer encoding of a well-known paradox (or impossibility theorem) in population ethics, Parfit's repugnant conclusion. While some have proposed overcoming the impossibility theorem by abandoning the presupposed transitivity of "better than", our formalisation unveils a less extreme approach, suggesting among other things the option of weakening transitivity suitably rather than discarding it entirely. Whether the presented encoding increases or decreases the attractiveness and persuasiveness of the repugnant conclusion is a question we would like to pass on to philosophy and ethics.
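For readers unfamiliar with preference-based conditional obligation, the truth condition used in Aqvist-style systems can be stated schematically as follows (our rendering, not the paper's exact Isabelle/HOL definition):

\[
M, w \models \bigcirc(B \mid A) \quad\text{iff}\quad \mathrm{opt}_{\succeq}\!\big(\lVert A \rVert\big) \subseteq \lVert B \rVert ,
\]

i.e. B holds at all best A-worlds under the betterness relation \(\succeq\); variants of the semantics replace opt by max.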
This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.