In federated frequency estimation (FFE), multiple clients work together to estimate the frequencies of their collective data by communicating with a server that respects the privacy constraints of Secure Summation (SecSum), a cryptographic multi-party computation protocol that ensures that the server can only access the sum of client-held vectors. For single-round FFE, it is known that count sketching is nearly information-theoretically optimal for achieving the fundamental accuracy-communication trade-offs [Chen et al., 2022]. However, we show that under the more practical multi-round FFE setting, simple adaptations of count sketching are strictly sub-optimal, and we propose a novel hybrid sketching algorithm that is provably more accurate. We also address the following fundamental question: how should a practitioner set the sketch size in a way that adapts to the hardness of the underlying problem? We propose a two-phase approach that allows for the use of a smaller sketch size for simpler problems (e.g., near-sparse or light-tailed distributions). We conclude our work by showing how differential privacy can be added to our algorithm and verifying its superior performance through extensive experiments conducted on large-scale datasets.
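For readers less familiar with the count-sketch primitive referenced above, the following minimal Python sketch (our own illustration, not the paper's algorithm; the sizes, seeds, and function names are assumptions) shows how each client can encode its local counts into a small random-signed hash table so that the server, which under SecSum only sees the element-wise sum of the client tables, can still estimate per-item frequencies.

```python
import numpy as np

# Minimal count-sketch illustration; sizes and seeds are arbitrary choices.
# Each client encodes its local items into a D x W table; the server only sees
# the element-wise sum of the client tables, which is all SecSum exposes.

D, W = 5, 64                     # sketch depth and width (illustrative)
rng = np.random.default_rng(0)
seeds = rng.integers(0, 2**31, size=(D, 2))   # shared hash/sign seeds per row

def _bucket(item, row):
    return hash((int(seeds[row, 0]), item)) % W

def _sign(item, row):
    return 1 if hash((int(seeds[row, 1]), item)) % 2 == 0 else -1

def encode(local_items):
    """Client side: build a count sketch of the local multiset of items."""
    table = np.zeros((D, W))
    for item in local_items:
        for r in range(D):
            table[r, _bucket(item, r)] += _sign(item, r)
    return table

def estimate(summed_table, item):
    """Server side: estimate an item's total frequency from the summed sketch."""
    return float(np.median([_sign(item, r) * summed_table[r, _bucket(item, r)]
                            for r in range(D)]))

clients = [["a", "a", "b"], ["a", "c"], ["b", "a"]]
summed = sum(encode(c) for c in clients)     # what SecSum reveals to the server
print(estimate(summed, "a"))                 # close to the true count of 4
```

The estimate degrades gracefully with hash collisions, which is why the choice of sketch size matters and why an approach that adapts the size to the hardness of the distribution is attractive.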
Algorithms to solve fault-tolerant consensus in asynchronous systems often rely on primitives such as crusader agreement, adopt-commit, and graded broadcast, which provide weaker agreement properties than consensus. Although these primitives have a similar flavor, they have been defined and implemented separately in ad hoc ways. We propose a new problem called connected consensus that has as special cases crusader agreement, adopt-commit, and graded broadcast, and generalizes them to handle multi-valued inputs. The generalization is accomplished by relating the problem to approximate agreement on graphs. We present three algorithms for multi-valued connected consensus in asynchronous message-passing systems, one tolerating crash failures and two tolerating malicious (unauthenticated Byzantine) failures. We extend the definition of binding, a desirable property recently identified as supporting binary consensus algorithms that are correct against adaptive adversaries, to the multi-valued input case and show that all our algorithms satisfy the property. Our crash-resilient algorithm has failure resilience and time complexity that we show are optimal. When restricted to the case of binary inputs, the algorithm has improved time complexity over prior algorithms. Our two algorithms for malicious failures trade off failure resilience and time complexity. The first algorithm has time complexity that we prove is optimal but worse failure resilience, while the second has failure resilience that we prove is optimal but worse time complexity. When restricted to the case of binary inputs, the time complexity (as well as resilience) of the second algorithm matches that of prior algorithms.
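To make the weak-agreement flavor of these primitives concrete, here is a small illustrative checker (our own sketch, not one of the paper's algorithms) for the standard adopt-commit specification: every output value must be some process's input, a commit on a value forces every output to carry that value, and identical inputs force everyone to commit.

```python
# Illustrative checker for the adopt-commit specification referenced above
# (our own sketch, not the paper's connected-consensus algorithms).

def valid_adopt_commit(inputs, outputs):
    """inputs: list of proposed values; outputs: list of (grade, value) pairs
    with grade in {"commit", "adopt"}. Returns True iff the outputs satisfy
    validity, agreement, and convergence."""
    # Validity: every decided value was proposed by some process.
    if any(v not in inputs for _, v in outputs):
        return False
    # Agreement: if someone commits v, every output must carry the same value v.
    committed = {v for g, v in outputs if g == "commit"}
    if committed:
        if len(committed) > 1 or any(v not in committed for _, v in outputs):
            return False
    # Convergence: identical inputs force everyone to commit that value.
    if len(set(inputs)) == 1:
        if any((g, v) != ("commit", inputs[0]) for g, v in outputs):
            return False
    return True

print(valid_adopt_commit([1, 1, 2], [("commit", 1), ("adopt", 1)]))  # True
print(valid_adopt_commit([1, 1, 2], [("commit", 1), ("adopt", 2)]))  # False
```

Crusader agreement and graded broadcast impose guarantees of the same shape, which is what makes a common generalization such as connected consensus natural.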
In the context of the ACM KDF-SIGIR 2023 competition, we undertook an entity relation classification task on a dataset of financial entity relations called REFind. Our top-performing solution involved a multi-step approach. Initially, we inserted the provided entities at their corresponding locations within the text. Subsequently, we fine-tuned the transformer-based language model roberta-large for text classification, using a labeled training set to predict the entity relations. Lastly, we implemented a post-processing phase to identify and handle improbable predictions generated by the model. With this methodology, we achieved the 1st place ranking on the competition's public leaderboard.
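A hedged sketch of the described pipeline is shown below; the marker strings, example text, entity spans, and label count are our own illustrative assumptions rather than the competition's exact setup, and the real solution fine-tunes the classifier on the labeled training set before prediction.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch of the pipeline: insert entity markers into the text, then score
# relation labels with a roberta-large sequence classifier. Markers, spans,
# and NUM_RELATIONS are illustrative assumptions.

NUM_RELATIONS = 22  # placeholder for the actual number of relation labels

def insert_entities(text, e1_span, e2_span):
    """Wrap the two entity spans (character offsets) with marker strings.
    Assumes e1 appears before e2 in the text (illustrative simplification)."""
    (s1, t1), (s2, t2) = e1_span, e2_span
    return (text[:s1] + "[E1] " + text[s1:t1] + " [/E1] " +
            text[t1:s2] + "[E2] " + text[s2:t2] + " [/E2] " + text[t2:])

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large", num_labels=NUM_RELATIONS)

text = "Acme Corp appointed Jane Doe as chief executive officer."
marked = insert_entities(text, (0, 9), (20, 28))
inputs = tokenizer(marked, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits              # shape: (1, NUM_RELATIONS)
predicted_relation = int(logits.argmax(dim=-1))  # meaningful only after fine-tuning
```

The post-processing step can then overrule predictions that are incompatible with the marked entity types, which is one plausible reading of "improbable predictions" above.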
Inverse transparency is created by making all usages of employee data visible to them. This requires tools that handle the logging and storage of usage information and make logged data visible to data owners. For research and teaching contexts that integrate inverse transparency, creating the required infrastructure can be challenging. The Inverse Transparency Toolchain presents a flexible solution for such scenarios. It can be easily deployed and is tightly integrated. With it, we successfully handled use cases covering empirical studies with users, prototyping in university courses, and experimentation with our industry partner.
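As an illustration of the kind of data such a toolchain has to manage (our own sketch; the field names are assumptions, not the toolchain's actual schema), a usage log can be reduced to records that tie each access to a data owner, plus a query that exposes only that owner's records.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Minimal sketch of a usage record for an inverse-transparency log; the field
# names are our own illustration, not the toolchain's actual data model.

@dataclass
class UsageRecord:
    data_owner: str      # the employee whose data was used
    data_user: str       # who accessed the data
    purpose: str         # declared reason for the access
    tool: str            # system through which the access happened
    timestamp: datetime

log = [
    UsageRecord("alice", "manager_bob", "quarterly performance review",
                "hr-dashboard", datetime.now(timezone.utc)),
]

def visible_to(owner, records):
    """Return the usage records a given data owner is entitled to see."""
    return [r for r in records if r.data_owner == owner]

print(visible_to("alice", log))
```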
Purpose: The recent proliferation of preprints could be a way for researchers worldwide to increase the availability and visibility of their research findings. Against the background of rising publication costs caused by the increasing prevalence of article processing fees, interest in ways to publish research results other than traditional journal publication may grow. This could be especially true for lower-income countries. Design/methodology/approach: We are therefore interested in researchers' experiences with and attitudes towards posting and using preprints in the Global South as opposed to the Global North. To explore whether motivations and concerns about posting preprints differ, we adopted a mixed-methods approach, combining a quantitative survey of researchers with focus group interviews. Findings: We found that respondents from the Global South were more likely to agree to adhere to policies and to emphasise that mandates could change publishing behaviour towards open access. They were also more likely to agree that posting preprints has a positive impact. Respondents from both the Global South and the Global North emphasised the importance of peer-reviewed research for career advancement. Originality: The study has identified a wide range of experiences with and attitudes towards posting preprints among researchers in the Global South and the Global North. To our knowledge, this has hardly been studied before, partly because preprints have only emerged recently in many disciplines and countries.
The analysis of public affairs documents is crucial for citizens as it promotes transparency, accountability, and informed decision-making. It allows citizens to understand government policies, participate in public discourse, and hold representatives accountable. This is crucial, and sometimes a matter of life or death, for companies whose operations depend on certain regulations. Large Language Models (LLMs) have the potential to greatly enhance the analysis of public affairs documents by effectively processing and understanding the complex language used in such documents. In this work, we analyze the performance of LLMs in classifying public affairs documents. As a natural multi-label task, the classification of these documents presents important challenges. We use a regex-powered tool to collect a database of public affairs documents with more than 33K samples and 22.5M tokens. Our experiments assess the performance of 4 different Spanish LLMs at classifying up to 30 different topics in the data under different configurations. The results show that LLMs can be of great use for processing domain-specific documents, such as those in the domain of public affairs.
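As a hedged sketch of the multi-label setup described above (the checkpoint, topic count, and decision threshold are illustrative assumptions, not the models or configuration evaluated in the work), each document is scored against all topics with an independent sigmoid per label.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative multi-label topic tagging for a public affairs document.
# Checkpoint, NUM_TOPICS, and THRESHOLD are assumptions; the paper evaluates
# its own set of Spanish LLMs and 30 topics after fine-tuning on labeled data.

MODEL = "dccuchile/bert-base-spanish-wwm-cased"  # one public Spanish encoder
NUM_TOPICS = 30
THRESHOLD = 0.5

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=NUM_TOPICS,
    problem_type="multi_label_classification")   # sigmoid per topic, not softmax

doc = "Proyecto de ley sobre la regulación de energías renovables."
inputs = tokenizer(doc, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
predicted_topics = [i for i, p in enumerate(probs) if p > THRESHOLD]
# Before fine-tuning on the labeled corpus these predictions are meaningless;
# the point is only to show the independent per-topic decision structure.
```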
We report our efforts in identifying a set of previous human evaluations in NLP that would be suitable for a coordinated study examining what makes human evaluations in NLP more/less reproducible. We present our results and findings, which include that just 13% of papers had (i) sufficiently low barriers to reproduction, and (ii) enough obtainable information, to be considered for reproduction, and that all but one of the experiments we selected for reproduction were found to have flaws that made the meaningfulness of conducting a reproduction questionable. As a result, we had to change our coordinated study design from a reproduce approach to a standardise-then-reproduce-twice approach. Our overall (negative) finding that the great majority of human evaluations in NLP are not repeatable and/or not reproducible and/or too flawed to justify reproduction paints a dire picture, but presents an opportunity for a rethink of how to design and report human evaluations in NLP.
The concept of the federated Cloud-Edge-IoT continuum promises to alleviate many woes of current systems, improving resource use, energy efficiency, quality of service, and more. However, this continuum is still far from being realized in practice, with no comprehensive solutions for developing, deploying, and managing continuum-native applications. Breakthrough innovations and novel system architectures are needed to cope with the ever-increasing heterogeneity and the multi-stakeholder nature of computing resources. This work proposes a novel architecture for choreographing workloads in the continuum that attempts to address these challenges. The architecture tackles this issue comprehensively, spanning from the workloads themselves, through networking and data exchange, up to the orchestration and choreography mechanisms. The concept emphasizes the use of varied AI techniques, enabling autonomous and intelligent management of resources and workloads. Open standards are also a key part of the proposition, making it possible to fully engage third parties in multi-stakeholder scenarios. Although the presented architecture is promising, much work is required to realize it in practice. To this end, the key directions for future research are outlined.
With the surge of theoretical work investigating Reconfigurable Intelligent Surfaces (RISs) for wireless communication and sensing, there is an urgent need for hardware solutions to evaluate these theoretical results and further advance the field. The most common solutions proposed in the literature are based on varactors, Positive Intrinsic-Negative (PIN) diodes, and Micro-Electro-Mechanical Systems (MEMS). This paper presents the use of Liquid Crystal (LC) technology for the realization of continuously tunable, extremely large millimeter-wave RISs. We review the basic physical principles of LC theory, introduce two different realizations of LC-RISs, namely reflect-array and phased-array, and highlight their key properties that have an impact on the system design and RIS reconfiguration strategy. Moreover, LC technology is compared with the competing technologies in terms of feasibility, cost, power consumption, reconfiguration speed, and bandwidth. Furthermore, several important open problems for both theoretical and experimental research on LC-RISs are presented.
In recent years, more and more researchers have reflected on the undervaluation of emotion in data visualization and highlighted the importance of considering human emotion in visualization design. Meanwhile, an increasing number of studies have been conducted to explore emotion-related factors. However, this research area is still in its early stages and faces a set of challenges, such as the unclear definition of key concepts, the insufficient justification of why emotion is important in visualization design, and the lack of characterization of the design space of affective visualization design. To address these challenges, we first conducted a literature review and identified three research lines that examined both emotion and data visualization. We clarified the differences between these research lines and kept 109 papers that studied or discussed how data visualization communicates and influences emotion. Then, we coded the 109 papers in terms of how they justified the legitimacy of considering emotion in visualization design (i.e., why emotion is important) and identified five argumentative perspectives. Based on these papers, we also identified 61 projects that practiced affective visualization design. We coded these design projects along three dimensions, namely design fields (where), design tasks (what), and design methods (how), to explore the design space of affective visualization design.
This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.