We give a comprehensive account of the parameterized complexity of model checking and satisfiability for propositional inclusion and independence logic. We show that, for most parameterizations, the problems are either in FPT or paraNP-complete.
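For readers unfamiliar with these logics, the following are the standard team-semantics clauses for inclusion and independence atoms (textbook definitions over a team $T$, i.e., a set of propositional assignments; the paper's specific parameterizations are not reproduced here):

```latex
% Standard team-semantics clauses (textbook definitions, not the
% paper's parameterized problem statements).
\[
  T \models \bar{p} \subseteq \bar{q}
  \iff
  \forall s \in T \; \exists s' \in T :\; s(\bar{p}) = s'(\bar{q})
\]
\[
  T \models \bar{p} \perp \bar{q}
  \iff
  \forall s, s' \in T \; \exists s'' \in T :\;
  s''(\bar{p}) = s(\bar{p}) \text{ and } s''(\bar{q}) = s'(\bar{q})
\]
```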
We consider the visual disambiguation task of determining whether a pair of visually similar images depict the same or distinct 3D surfaces (e.g., the same or opposite sides of a symmetric building). Illusory image matches, where two images observe distinct but visually similar 3D surfaces, can be challenging for humans to differentiate, and can also lead 3D reconstruction algorithms to produce erroneous results. We propose a learning-based approach to visual disambiguation, formulating it as a binary classification task on image pairs. To that end, we introduce a new dataset for this problem, Doppelgangers, which includes image pairs of similar structures with ground truth labels. We also design a network architecture that takes the spatial distribution of local keypoints and matches as input, allowing for better reasoning about both local and global cues. Our evaluation shows that our method can distinguish illusory matches in difficult cases, and can be integrated into SfM pipelines to produce correct, disambiguated 3D reconstructions. See our project page for our code, datasets, and more results: https://doppelgangers-3d.github.io/.
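As a rough illustration of this kind of classifier, here is a minimal sketch of a binary network over keypoint/match masks for an image pair. The channel layout, backbone, and all names are our own assumptions, not the paper's actual architecture:

```python
# Hedged sketch: classify an image pair as same/distinct surface from the
# spatial distribution of keypoints and matches. All design choices here
# (4-channel mask input, small CNN) are illustrative assumptions.
import torch
import torch.nn as nn

class PairDisambiguator(nn.Module):
    def __init__(self, in_channels: int = 4):
        # in_channels: e.g., one keypoint mask and one match mask per image.
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 2)  # logits: [distinct surfaces, same surface]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x).flatten(1))

model = PairDisambiguator()
masks = torch.rand(8, 4, 256, 256)  # batch of 8 hypothetical image pairs
logits = model(masks)               # shape: (8, 2)
```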
We investigate the complexity of several manipulation and control problems under numerous prevalent approval-based multiwinner voting rules. In particular, we study approval voting (AV), satisfaction approval voting (SAV), net-satisfaction approval voting (NSAV), proportional approval voting (PAV), approval-based Chamberlin-Courant voting (ABCCV), and minimax approval voting (MAV), among others. We show that these rules generally resist the strategic behaviors scrutinized in the paper, with only a few exceptions. In addition, we obtain many fixed-parameter tractability results for these problems with respect to several natural parameters, and derive polynomial-time algorithms for certain special cases.
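To make two of these rules concrete, the sketch below implements the standard AV and PAV scoring functions (textbook definitions, not this paper's algorithms). Since PAV winner determination is NP-hard in general, the brute-force search here is for illustration only:

```python
# AV: a committee's score is the total number of approvals its members get.
# PAV: each voter contributes 1 + 1/2 + ... + 1/k, where k is how many
# committee members that voter approves.
from itertools import combinations

def av_score(committee, ballots):
    return sum(len(set(committee) & ballot) for ballot in ballots)

def pav_score(committee, ballots):
    return sum(sum(1.0 / j for j in range(1, len(set(committee) & ballot) + 1))
               for ballot in ballots)

def best_committee(candidates, ballots, k, score):
    # Brute force over all size-k committees (exponential; illustrative only).
    return max(combinations(candidates, k), key=lambda c: score(c, ballots))

ballots = [{"a", "b"}, {"a", "b"}, {"a", "b"}, {"c"}, {"c"}]
print(best_committee("abc", ballots, 2, av_score))   # ('a', 'b')
print(best_committee("abc", ballots, 2, pav_score))  # ('a', 'c')
```

Note how PAV's diminishing returns favor covering the minority voters, whereas AV maximizes raw approvals.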
Transparency and accountability are indispensable principles for modern data protection, from both legal and technical viewpoints. Regulations such as the GDPR therefore require specific transparency information to be provided, including purpose specifications, storage periods, and legal bases for personal data processing. However, it has repeatedly been shown that all too often, this information is practically hidden in legalese privacy policies, hindering data subjects from exercising their rights. This paper presents a novel approach to enable large-scale transparency information analysis across service providers, leveraging machine-readable formats and graph data science methods. More specifically, we propose a general approach for building a transparency analysis platform (TAP) that is used to identify data transfers empirically, to provide evidence-based analyses of sharing clusters of more than 70 real-world data controllers, and even to simulate network dynamics using synthetic transparency information for large-scale data-sharing scenarios. We provide the general approach for advanced transparency information analysis, an open-source architecture and implementation in the form of a queryable analysis platform, and versatile analysis examples. These contributions pave the way for more transparent data processing for data subjects, and evidence-based enforcement processes for data protection authorities. Future work can build upon our contributions to gain more insights into so-far hidden data-sharing practices.
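The graph-based analysis described above can be illustrated with a minimal sketch: data transfers as a directed graph, sharing clusters as connected components. The node/edge schema below is a hypothetical stand-in for machine-readable transparency information, not the TAP's actual data model:

```python
# Hedged sketch: controller-to-recipient transfers as a directed graph.
# Domain names and purposes are invented placeholders.
import networkx as nx

G = nx.DiGraph()
transfers = [
    ("shop.example", "ads.example", {"purpose": "marketing"}),
    ("shop.example", "pay.example", {"purpose": "payment processing"}),
    ("ads.example", "broker.example", {"purpose": "profiling"}),
]
G.add_edges_from((src, dst, attrs) for src, dst, attrs in transfers)

# Sharing clusters as weakly connected components of the transfer graph.
for cluster in nx.weakly_connected_components(G):
    print(sorted(cluster))

# Controllers acting as hubs (many outgoing transfers).
hubs = sorted(G.out_degree, key=lambda kv: kv[1], reverse=True)
print(hubs[:3])
```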
Much research explores how well computers can examine emotions displayed by humans and use that data to perform different tasks. However, very little research evaluates a computer's ability to generate emotion classification information in an attempt to help the user make decisions or perform tasks. This is a crucial area to explore, as it is paramount to two-way communication between humans and computers. We conducted an experiment to investigate the impact of different displays of uncertainty information about emotion classification on the human decision-making process. Results show that displaying more uncertainty information can help users be more confident when making decisions.
Deepfakes have become a growing concern in recent years, prompting researchers to develop benchmark datasets and detection algorithms to tackle the issue. However, existing datasets suffer from significant drawbacks that hamper their effectiveness. Notably, these datasets fail to encompass the latest deepfake videos produced by state-of-the-art methods that are being shared across various platforms. This limitation impedes the ability to keep pace with the rapid evolution of generative AI techniques employed in real-world deepfake production. In this IRB-approved study, we bridge this knowledge gap by providing an in-depth analysis of current real-world deepfakes. We first present the largest, most diverse, and most recent deepfake dataset collected from the wild to date (RWDF-23), consisting of 2,000 deepfake videos collected from 4 platforms (Reddit, YouTube, TikTok, and Bilibili), targeting 4 different languages and spanning content created in 21 countries. By expanding the dataset's scope beyond previous research, we capture a broader range of real-world deepfake content, reflecting the ever-evolving landscape of online platforms. Also, we conduct a comprehensive analysis encompassing various aspects of deepfakes, including creators, manipulation strategies, purposes, and real-world content production methods. This allows us to gain valuable insights into the nuances and characteristics of deepfakes in different contexts. Lastly, in addition to the video content, we also collect viewer comments and interactions, enabling us to explore the engagement of internet users with deepfake content. By considering this rich contextual information, we aim to provide a holistic understanding of the evolving deepfake phenomenon and its impact on online platforms.
We relate the power bound to a resolvent condition of Kreiss-Ritt type, and characterize the extremal growth of two families of products of three Toeplitz operators on the Hardy space that contain infinitely many points in their spectra. Since these operators do not fall into a well-understood class, we analyze them through explicit techniques based on properties of Toeplitz operators and the structure of the Hardy space. Our methods apply mutatis mutandis to operators of the form $T_{g(z)}^{-1}T_{f(z)}T_{g(z)}$ where $f(z)$ is a polynomial in $z$ and $\bar{z}$ and $g(z)$ is a polynomial in $z$. This collection of operators arises in the numerical solution of the Cauchy problem for linear ordinary, partial, and delay differential equations that are frequently used as models for processes in the sciences and engineering. Our results provide a framework for the stability analysis of existing numerical methods for new classes of linear differential equations as well as the development of novel approximation schemes.
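For orientation, the standard notions involved are as follows (textbook definitions for a bounded operator $T$ with a constant $C > 0$; the paper's precise formulation may differ):

```latex
% Power boundedness and the Kreiss and Ritt resolvent conditions
% (standard definitions, not this paper's specific statements).
\[
  \sup_{n \ge 0} \|T^n\| < \infty
  \qquad \text{(power bounded)}
\]
\[
  \|(\lambda I - T)^{-1}\| \le \frac{C}{|\lambda| - 1}
  \quad \text{for } |\lambda| > 1
  \qquad \text{(Kreiss condition)}
\]
\[
  \|(\lambda I - T)^{-1}\| \le \frac{C}{|\lambda - 1|}
  \quad \text{for } |\lambda| > 1
  \qquad \text{(Ritt condition)}
\]
```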
Despite decades of research, developing correct and scalable concurrent programs is still challenging, and network functions (NFs) are no exception. This paper presents NFork, a system that helps NF domain experts productively develop concurrent NFs by abstracting concurrency away from developers. The key idea behind NFork's design is to exploit NF characteristics to overcome the limitations of prior work on concurrent programming. Developers write NFs as sequential programs, and at runtime, NFork performs transparent parallelization by processing packets on different cores. Exploiting NF characteristics, NFork leverages transactional memory and develops efficient concurrent data structures to achieve scalability and guarantee the absence of concurrency bugs. Since NFork manages concurrency, it further provides (i) a profiler that reveals the root causes of scalability bottlenecks inherent to an NF's semantics and (ii) actionable recipes for developers to mitigate these root causes by relaxing the NF's semantics. We show that NFs developed with NFork achieve scalability competitive with those in Cisco VPP [16], and that NFork's profiler and recipes can effectively aid developers in optimizing NF scalability.
The introduction of and advancements in Local Differential Privacy (LDP) variants have become a cornerstone in addressing the privacy concerns associated with the vast data produced by smart devices, which form the foundation for data-driven decision-making in crowdsensing. While harnessing the power of these immense datasets can offer valuable insights, it simultaneously poses significant privacy risks for the users involved. LDP, a distinguished privacy model with a decentralized architecture, stands out for its capability to offer robust privacy assurances for individual users during data collection and analysis. The essence of LDP is that each user's data is perturbed locally on the client side before transmission to the server, safeguarding against potential privacy breaches at both ends. This article offers an in-depth exploration of LDP, emphasizing its models, its myriad variants, and the foundational structure of LDP algorithms.
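The canonical example of such client-side perturbation is binary randomized response, the textbook $\epsilon$-LDP mechanism. The sketch below is illustrative only; the article surveys far more elaborate variants:

```python
# Minimal sketch of client-side perturbation in LDP: binary randomized
# response, plus server-side debiasing that never sees raw values.
import math
import random

def randomized_response(true_bit: int, epsilon: float) -> int:
    # Report the true bit with probability e^eps / (e^eps + 1),
    # the flipped bit otherwise; this satisfies epsilon-LDP.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p else 1 - true_bit

def estimate_frequency(reports: list[int], epsilon: float) -> float:
    # Invert the known perturbation to estimate the true fraction of 1s:
    # observed = (1 - p) + true * (2p - 1), solved for true.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

random.seed(0)
truth = [1] * 300 + [0] * 700            # hypothetical client data
reports = [randomized_response(b, 1.0) for b in truth]
print(estimate_frequency(reports, 1.0))  # roughly 0.3
```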
This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.
We introduce a multi-task setup for identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.
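The shared-span idea can be sketched as one span encoder feeding three task heads. The dimensions, pairing scheme, and names below are our assumptions for illustration, not SciIE's actual implementation:

```python
# Hedged sketch: shared span representations with separate heads for
# entity typing, relation classification, and coreference scoring.
import torch
import torch.nn as nn

class SharedSpanModel(nn.Module):
    def __init__(self, span_dim=256, n_entity_types=6, n_relations=7):
        super().__init__()
        self.entity_head = nn.Linear(span_dim, n_entity_types)
        self.relation_head = nn.Linear(2 * span_dim, n_relations)
        self.coref_head = nn.Linear(2 * span_dim, 1)

    def forward(self, spans: torch.Tensor):
        # spans: (num_spans, span_dim), shared across all three tasks.
        entity_logits = self.entity_head(spans)
        # Score every ordered span pair for relations and coreference.
        n = spans.size(0)
        pairs = torch.cat(
            [spans.repeat_interleave(n, 0), spans.repeat(n, 1)], dim=-1)
        relation_logits = self.relation_head(pairs).view(n, n, -1)
        coref_scores = self.coref_head(pairs).view(n, n)
        return entity_logits, relation_logits, coref_scores

model = SharedSpanModel()
spans = torch.rand(5, 256)  # 5 candidate spans from a hypothetical encoder
ent, rel, coref = model(spans)
print(ent.shape, rel.shape, coref.shape)  # (5, 6) (5, 5, 7) (5, 5)
```

Because all three heads read the same span representations, gradients from each task shape a common encoding, which is one way a multi-task setup can reduce cascading errors between pipeline stages.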