We extend our previous work on two-party election competition [Lin, Lu & Chen 2021] to the setting of three or more parties. An election campaign among two or more parties is viewed as a game of two or more players. Each of them has its own candidates as the pure strategies to play. People, as voters, comprise supporters for each party, and a candidate brings utility to the supporters of each party. Each player nominates exactly one of its candidates to compete against the nominees of the other parties. A candidate is assumed to win the election with higher odds if it brings more utility for all the people. The payoff of each player is the expected utility its supporters get. The game is egoistic if every candidate benefits her party's supporters more than any candidate from a competing party does. In this work, we first argue that the election game always has a pure Nash equilibrium when the winner is chosen by the hardmax function, while there exist instances of the three-party election game with no pure Nash equilibrium even when the game is egoistic. Next, we propose two sufficient conditions for the egoistic election game to have a pure Nash equilibrium. Based on these conditions, we propose a fixed-parameter tractable algorithm for computing a pure Nash equilibrium of the egoistic election game. Finally, perhaps surprisingly, we show that the price of anarchy of the egoistic election game is upper bounded by the number of parties. Our findings suggest that the election becomes unpredictable when more than two parties are involved and, moreover, that social welfare may deteriorate as the number of participating parties grows, since the price of anarchy can increase with it. This work thus offers an alternative explanation for why the two-party system is prevalent in democratic countries.
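To make the model concrete, the following sketch formalizes a small multi-party election game with a hardmax winner-selection rule and searches for pure Nash equilibria by brute force. The candidate utilities, the tie-breaking rule, and the three-party example instance are illustrative assumptions, not taken from the paper.

```python
from itertools import product

def winner_hardmax(profile, utils):
    """Index of the party whose nominated candidate brings the highest total
    utility to all voters (ties broken in favor of the lower-indexed party)."""
    totals = [sum(utils[p][c]) for p, c in enumerate(profile)]
    return max(range(len(profile)), key=lambda p: totals[p])

def payoffs(profile, utils):
    """Under hardmax the winner is deterministic, so each party's payoff is
    the utility the winning candidate brings to that party's supporters."""
    w = winner_hardmax(profile, utils)
    winning_candidate = utils[w][profile[w]]
    return [winning_candidate[p] for p in range(len(profile))]

def pure_nash_equilibria(utils):
    """Brute force: a profile is a pure NE if no party can raise its own
    payoff by nominating a different candidate."""
    parties = range(len(utils))
    strategy_sets = [range(len(utils[p])) for p in parties]
    equilibria = []
    for profile in product(*strategy_sets):
        base = payoffs(profile, utils)
        stable = True
        for p in parties:
            for alt in strategy_sets[p]:
                if alt == profile[p]:
                    continue
                deviated = list(profile)
                deviated[p] = alt
                if payoffs(deviated, utils)[p] > base[p]:
                    stable = False
                    break
            if not stable:
                break
        if stable:
            equilibria.append(profile)
    return equilibria

# utils[p][c][q] = utility candidate c of party p brings to party q's supporters
utils = [
    [(5, 1, 1), (4, 2, 2)],   # party 0's two candidates
    [(1, 5, 1), (2, 4, 2)],   # party 1's two candidates
    [(1, 1, 5), (2, 2, 4)],   # party 2's two candidates
]
print(pure_nash_equilibria(utils))
```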
Answer Set Programming with Quantifiers (ASP(Q)) extends Answer Set Programming (ASP) to allow for declarative and modular modeling of problems from the entire polynomial hierarchy. The first implementation of ASP(Q), called qasp, was based on a translation to Quantified Boolean Formulae (QBF) with the aim of exploiting the well-developed and mature QBF-solving technology. However, the QBF encoding employed in qasp is very general and might produce formulas that are hard to evaluate for existing QBF solvers because of the large number of symbols and sub-clauses. In this paper, we present a new implementation that builds on the ideas of qasp and features both a more efficient encoding procedure and new optimized encodings of ASP(Q) programs in QBF. The new encodings produce smaller formulas (in terms of the number of quantifiers, variables, and clauses) and result in a more efficient evaluation process. An algorithm selection strategy automatically combines several QBF-solving back-ends to further increase performance. An experimental analysis, conducted on known benchmarks, shows that the new system outperforms qasp.
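As a rough illustration of the algorithm-selection idea, the sketch below extracts a few cheap features (number of quantifier blocks, variables, and clauses) from a QDIMACS formula and picks a back-end accordingly. The sample formula, the thresholds, and the back-end labels are placeholders chosen for illustration; they are not the features or solvers used by the actual system.

```python
SAMPLE = """c toy QDIMACS formula
p cnf 3 2
a 1 0
e 2 3 0
1 2 0
-1 3 0
"""

def qdimacs_stats(text):
    """Count quantifier blocks, variables, and clauses of a QDIMACS formula."""
    blocks = n_vars = n_clauses = 0
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("c"):
            continue
        if line.startswith("p cnf"):
            _, _, v, c = line.split()
            n_vars, n_clauses = int(v), int(c)
        elif line[0] in "ae":
            blocks += 1
    return blocks, n_vars, n_clauses

def choose_backend(text):
    """Toy feature-based selection; thresholds and solver names are placeholders."""
    blocks, n_vars, n_clauses = qdimacs_stats(text)
    if blocks <= 2 and n_clauses < 10**5:
        return "expansion-style-solver"     # hypothetical back-end label
    return "cegar-style-solver"             # hypothetical back-end label

print(qdimacs_stats(SAMPLE), "->", choose_backend(SAMPLE))
```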
The Immersed Boundary (IB) method of Peskin (J. Comput. Phys., 1977) is useful for problems involving fluid-structure interactions or complex geometries. By making use of a regular Cartesian grid that is independent of the geometry, the IB framework yields a robust numerical scheme that can efficiently handle immersed deformable structures. Additionally, the IB method has been adapted to problems with prescribed motion and other PDEs with given boundary data. IB methods for these problems traditionally involve penalty forces which only approximately satisfy boundary conditions, or they are formulated as constraint problems. In the latter approach, one must find the unknown forces by solving an equation that corresponds to a poorly conditioned first-kind integral equation. This operation can require a large number of iterations of a Krylov method, and since a time-dependent problem requires this solve at each time step, this method can be prohibitively inefficient without preconditioning. In this work, we introduce a new, well-conditioned IB formulation for boundary value problems, which we call the Immersed Boundary Double Layer (IBDL) method. We present the method as it applies to Poisson and Helmholtz problems to demonstrate its efficiency over the original constraint method. In this double layer formulation, the equation for the unknown boundary distribution corresponds to a well-conditioned second-kind integral equation that can be solved efficiently with a small number of iterations of a Krylov method. Furthermore, the iteration count is independent of both the mesh size and immersed boundary point spacing. The method converges away from the boundary, and when combined with a local interpolation, it converges in the entire PDE domain. Additionally, while the original constraint method applies only to Dirichlet problems, the IBDL formulation can also be used for Neumann conditions.
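The conditioning contrast that motivates the IBDL formulation can be illustrated with a toy Nyström discretization of a smooth periodic kernel (not the actual IB spread and interpolation operators): a first-kind system inherits the rapidly decaying spectrum of the integral operator and is severely ill-conditioned, while the corresponding second-kind system $\tfrac{1}{2}I + K$ keeps a small, grid-independent condition number, which is what keeps Krylov iteration counts low.

```python
import numpy as np

def kernel_matrix(n):
    """Trapezoid-rule (Nystrom) discretization of the periodic operator
    (K f)(s) = (1/2pi) * integral of exp(cos(s - t)) f(t) dt over [0, 2pi)."""
    s = 2 * np.pi * np.arange(n) / n
    return np.exp(np.cos(s[:, None] - s[None, :])) / n

for n in (32, 64, 128, 256):
    K = kernel_matrix(n)
    first_kind = K                        # analogue of a first-kind constraint system
    second_kind = 0.5 * np.eye(n) + K     # analogue of a second-kind (double-layer) system
    print(f"n={n:4d}  cond(first kind)={np.linalg.cond(first_kind):.2e}  "
          f"cond(second kind)={np.linalg.cond(second_kind):.2f}")
```

Running this prints astronomically large condition numbers for the first-kind matrix and a small, essentially constant condition number (about 3.5) for the second-kind matrix, mirroring the mesh-independent iteration counts reported for the IBDL method.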
We consider a contest game modelling a contest where reviews for $m$ proposals are crowdsourced from $n$ strategic players. Player $i$ has a skill $s_{i\ell}$ for reviewing proposal $\ell$; for her review, she strategically chooses a quality $q \in \{ 1, 2, \ldots, Q \}$ and pays an effort ${\sf f}_{q} \geq 0$, strictly increasing with $q$. For her effort, she is given a strictly positive payment determined by a payment function, which is either player-invariant, such as the popular proportional allocation function, or player-specific; for a given proposal, payments are proportional to the corresponding efforts, and the total payment provided by the contest organizer is 1. The cost incurred by player $i$ for each of her reviews is a skill-effort function $\Lambda(s_{i}, {\sf f}_{q})$ minus her payment. Skills may vary across arbitrary players and arbitrary proposals. A proposal-indifferent player $i$ has identical skills: $s_{i\ell} = s_{i}$ for all $\ell$; players are anonymous when $s_{i} = 1$ for all players $i$. In a pure Nash equilibrium, no player can unilaterally reduce her cost by switching to a different quality. We present algorithmic results for computing pure Nash equilibria.
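As a minimal sketch of the model, the code below instantiates a single-proposal contest with anonymous players and proportional allocation, and searches for pure Nash equilibria by checking every quality profile for profitable unilateral deviations. The specific effort values and the skill-effort function $\Lambda(s, {\sf f}) = {\sf f}/s$ are illustrative assumptions, not the paper's.

```python
from itertools import product

def costs(profile, skills, effort, skill_effort):
    """Cost of each player under proportional allocation: the organizer
    distributes a total payment of 1 in proportion to the players' efforts."""
    total = sum(effort[q] for q in profile)
    out = []
    for i, q in enumerate(profile):
        payment = effort[q] / total if total > 0 else 0.0
        out.append(skill_effort(skills[i], effort[q]) - payment)
    return out

def is_pure_nash(profile, skills, effort, skill_effort, qualities):
    """No player can unilaterally lower her cost by switching quality."""
    base = costs(profile, skills, effort, skill_effort)
    for i in range(len(profile)):
        for q in qualities:
            if q == profile[i]:
                continue
            alt = list(profile)
            alt[i] = q
            if costs(alt, skills, effort, skill_effort)[i] < base[i] - 1e-12:
                return False
    return True

# toy instance: 3 anonymous players (s_i = 1), qualities {1, 2, 3}
qualities = (1, 2, 3)
effort = {1: 0.2, 2: 0.5, 3: 1.0}           # strictly increasing in q
skill_effort = lambda s, f: f / s           # one possible Lambda (an assumption)
skills = [1.0, 1.0, 1.0]

nash = [p for p in product(qualities, repeat=3)
        if is_pure_nash(p, skills, effort, skill_effort, qualities)]
print(nash)
```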
With its ability to generate millions of particles and massively detailed scenes, and with artificial illumination that is hard to distinguish from reality, version 5 of Unreal Engine promises unprecedented industrial applications. The paradigms and aims of Unreal Engine contrast with the industrial simulators typically used by the scientific community. The visual quality and performance of its rendering engine increase the opportunities, especially for industry and the simulation business, where interoperability and scalability are required. The study of the following question offers a point of view: ``Which architecture should we implement to integrate real-world data into an Unreal Engine 5 simulator and a mixed-reality environment?'' The topic is reexamined in an innovative and conceptual way, in light of the generalization of mixed-reality technologies, the Internet of Things, digital twins, and Big Data, while providing a solution for simple, real-world use cases. This paper gives a detailed analysis of the issue, at both the theoretical and operational levels. The document then goes deep into Unreal Engine's operation in order to identify its vanilla capabilities. Next, the C++ plugin system is reviewed in detail, as well as third-party library integration: pitfalls to be avoided are highlighted. Finally, the last chapter proposes a generic architecture, useful in large-scale industrial 3D applications such as collaborative work or hyper-connected simulators. This document might be of interest to an Unreal Engine expert who would like to learn about server architectures. Conversely, it could be relevant for an expert in backend servers who wants to learn about Unreal Engine capabilities. This research concludes that Unreal Engine's modularity enables integration with almost any protocol. The features for integrating external real-world data are numerous but depend on the use case. Distributed systems for Big Data require a scalable architecture, possibly without the Unreal Engine dedicated server. Environments that require sub-second latency need direct connections, bypassing any intermediate server.
A recent surge of users migrating from Twitter to alternative platforms, such as Mastodon, raised questions regarding what the migration patterns are, how different platforms impact user behaviors, and how migrated users settle in during the migration process. In this study, we investigate these questions by collecting data on over 10,000 users who migrated from Twitter to Mastodon within the first ten weeks following Elon Musk's acquisition of Twitter. Our research is structured in three primary steps. First, we develop algorithms to extract and analyze migration patterns. Second, by leveraging behavioral analysis, we examine the distinct architectures of Twitter and Mastodon to learn how each platform shapes user behavior. Last, we determine how particular behavioral factors influence users' decisions to stay on Mastodon. We share our findings on user migration, along with insights and lessons learned from the user behavior study.
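As a toy illustration of the first step, the sketch below shows one simple way a migration pipeline could identify Mastodon handles advertised in Twitter profiles and bucket a user's first Mastodon activity into weeks after the acquisition (completed on 27 October 2022). The regular expression, the example records, and the weekly bucketing are illustrative assumptions, not the paper's actual pipeline.

```python
import re
from datetime import datetime, timezone

ACQUISITION = datetime(2022, 10, 27, tzinfo=timezone.utc)
# Mastodon handles advertised in Twitter bios typically look like @user@instance
HANDLE_RE = re.compile(r"@([A-Za-z0-9_]+)@([A-Za-z0-9.-]+\.[A-Za-z]{2,})")

def find_mastodon_handles(profile_text):
    """Extract candidate Mastodon handles a user advertises on Twitter."""
    return [f"@{user}@{host}" for user, host in HANDLE_RE.findall(profile_text)]

def weeks_since_acquisition(first_mastodon_post):
    """Bucket a user's first Mastodon activity into weeks after the takeover."""
    return (first_mastodon_post - ACQUISITION).days // 7

# hypothetical records: (Twitter bio, timestamp of first Mastodon post)
users = [
    ("Moving! find me at @alice@mastodon.social",
     datetime(2022, 11, 6, tzinfo=timezone.utc)),
    ("photos & code | @bob@fosstodon.org",
     datetime(2022, 12, 20, tzinfo=timezone.utc)),
]
for bio, first_post in users:
    print(find_mastodon_handles(bio), "week", weeks_since_acquisition(first_post))
```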
The Byzantine consensus problem involves $n$ processes, out of which $t < n$ could be faulty and behave arbitrarily. Three properties characterize consensus: (1) termination, requiring correct (non-faulty) processes to eventually reach a decision, (2) agreement, preventing them from deciding different values, and (3) validity, precluding ``unreasonable'' decisions. But, what is a reasonable decision? Strong validity, a classical property, stipulates that, if all correct processes propose the same value, only that value can be decided. Weak validity, another established property, stipulates that, if all processes are correct and they propose the same value, that value must be decided. The space of possible validity properties is vast. However, their impact on consensus remains unclear. This paper addresses the question of which validity properties allow Byzantine consensus to be solvable with partial synchrony, and at what cost. First, we determine necessary and sufficient conditions for a validity property to make the consensus problem solvable; we say that such validity properties are solvable. Notably, we prove that, if $n \leq 3t$, all solvable validity properties are trivial (there exists an always-admissible decision). Furthermore, we show that, with any non-trivial (and solvable) validity property, consensus requires $\Omega(t^2)$ messages. This extends the seminal Dolev-Reischuk bound, originally proven for strong validity, to all non-trivial validity properties. Lastly, we give a general Byzantine consensus algorithm, which we call Universal, for any solvable (and non-trivial) validity property. Importantly, Universal incurs $O(n^2)$ message complexity. Thus, together with our lower bound, Universal implies a fundamental result in partial synchrony: with $t \in \Omega(n)$, the message complexity of all (non-trivial) consensus variants is $\Theta(n^2)$.
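To make the notions of validity and triviality concrete, the sketch below encodes a validity property as a map from an input configuration (the processes' proposals plus a faulty set of size at most $t$) to the set of admissible decisions, and checks triviality by intersecting the admissible sets over all configurations. The binary value domain and the tiny system sizes ($n = 4$, $t = 1$) are illustrative assumptions.

```python
from itertools import combinations, product

VALUES = (0, 1)
N, T = 4, 1   # toy system sizes (an assumption): n processes, at most t faulty

def faulty_sets():
    """All subsets of processes of size at most T."""
    for k in range(T + 1):
        yield from (frozenset(c) for c in combinations(range(N), k))

def input_configs():
    """Every assignment of proposals together with every admissible faulty set."""
    for proposals in product(VALUES, repeat=N):
        for faulty in faulty_sets():
            yield proposals, faulty

def strong_validity(proposals, faulty):
    """If all correct processes propose the same value, only that value is admissible."""
    correct = [v for i, v in enumerate(proposals) if i not in faulty]
    return {correct[0]} if len(set(correct)) == 1 else set(VALUES)

def weak_validity(proposals, faulty):
    """Constrains the decision only if all processes are correct and unanimous."""
    if not faulty and len(set(proposals)) == 1:
        return {proposals[0]}
    return set(VALUES)

def anything_goes(proposals, faulty):
    return set(VALUES)   # every value is always admissible

def is_trivial(validity):
    """Trivial = some single value is admissible under every input configuration."""
    admissible = set(VALUES)
    for proposals, faulty in input_configs():
        admissible &= validity(proposals, faulty)
    return bool(admissible)

for name, prop in (("strong", strong_validity), ("weak", weak_validity),
                   ("anything-goes", anything_goes)):
    print(name, "-> trivial" if is_trivial(prop) else "-> non-trivial")
```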
The privacy of personal information has received significant attention in mobile software. Although previous researchers have designed methods to identify conflicts between app behavior and privacy policies, little is known about investigating regulation requirements for third-party libraries (TPLs). Regulators have enacted multiple regulations governing the usage of personal information by TPLs (e.g., the "California Consumer Privacy Act" requires businesses to clearly notify consumers whether or not they share consumers' data with third parties). However, it remains challenging to analyze the legality of TPLs for three reasons: 1) TPLs are mainly published on public repositories instead of app markets (e.g., Google Play), and the public repositories do not perform privacy compliance analysis for each TPL. 2) TPLs only provide independent functions or function sequences; they cannot run independently, which limits the applicability of dynamic analysis. 3) Since not all the functions of TPLs are related to user privacy, we must locate the functions of TPLs that access or process personal information before performing privacy compliance analysis. To overcome the above challenges, in this paper, we propose an automated system named ATPChecker to analyze whether Android TPLs meet privacy-related regulations. Our findings remind developers to be mindful of TPL usage when developing apps or writing privacy policies to avoid violating regulations.
Importance sampling (IS) is a popular technique in off-policy evaluation, which re-weights the return of trajectories in the replay buffer to boost sample efficiency. However, training with IS can be unstable, and previous attempts to address this issue mainly focus on analyzing the variance of IS. In this paper, we reveal that the instability is also related to a new notion, the Reuse Bias of IS -- the bias in off-policy evaluation caused by reusing the replay buffer for both evaluation and optimization. We theoretically show that evaluating and optimizing the current policy with data from the replay buffer results in an overestimation of the objective, which may cause an erroneous gradient update and degrade performance. We further provide a high-probability upper bound on the Reuse Bias and, by introducing a notion of stability for off-policy algorithms, show that controlling one term of this upper bound suffices to control the Reuse Bias. Based on these analyses, we finally present a novel Bias-Regularized Importance Sampling (BIRIS) framework along with practical algorithms, which can alleviate the negative impact of the Reuse Bias. Experimental results show that our BIRIS-based methods can significantly improve sample efficiency on a series of continuous control tasks in MuJoCo.
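The overestimation effect can be reproduced in a toy setting. The sketch below uses a three-armed bandit: a buffer collected under a uniform behavior policy is used both to form ordinary importance-sampling estimates and to select the seemingly best deterministic policy, and the selected policy's in-buffer estimate exceeds its true value on average. The bandit means, buffer size, and candidate policies are illustrative assumptions; this is a demonstration of the reuse effect, not the BIRIS algorithm itself.

```python
import random
random.seed(0)

ARMS = 3
TRUE_MEANS = [0.2, 0.5, 0.8]          # toy Bernoulli bandit (an assumption)
BEHAVIOR = [1 / ARMS] * ARMS          # uniform behavior policy

def sample_buffer(size):
    """Replay buffer of (action, reward) pairs drawn from the behavior policy."""
    buf = []
    for _ in range(size):
        a = random.randrange(ARMS)
        r = float(random.random() < TRUE_MEANS[a])
        buf.append((a, r))
    return buf

def is_estimate(policy, buffer):
    """Ordinary importance-sampling estimate of the policy's expected reward."""
    return sum(policy[a] / BEHAVIOR[a] * r for a, r in buffer) / len(buffer)

def greedy_policies():
    """Candidate deterministic policies: 'always play arm k'."""
    return [[1.0 if i == k else 0.0 for i in range(ARMS)] for k in range(ARMS)]

reuse_gap = 0.0
TRIALS, SIZE = 2000, 50
for _ in range(TRIALS):
    buf = sample_buffer(SIZE)
    # optimize over the SAME buffer that is used for evaluation (reuse)
    best = max(greedy_policies(), key=lambda p: is_estimate(p, buf))
    true_value = sum(p * m for p, m in zip(best, TRUE_MEANS))
    reuse_gap += is_estimate(best, buf) - true_value
print("average overestimation from buffer reuse:", reuse_gap / TRIALS)
```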
Imitation is a key component of human social behavior, and is widely used by both children and adults as a way to navigate uncertain or unfamiliar situations. But in an environment populated by multiple heterogeneous agents pursuing different goals or objectives, indiscriminate imitation is unlikely to be an effective strategy -- the imitator must instead determine who is most useful to copy. There are likely many factors that play into these judgements, depending on context and availability of information. Here we investigate the hypothesis that these decisions involve inferences about other agents' reward functions. We suggest that people preferentially imitate the behavior of others they deem to have similar reward functions to their own. We further argue that these inferences can be made on the basis of very sparse or indirect data, by leveraging an inductive bias toward positing the existence of different \textit{groups} or \textit{types} of people with similar reward functions, allowing learners to select imitation targets without direct evidence of alignment.
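One way to make the proposed inference concrete is a small Bayesian sketch: agents belong to latent reward "types", each observed choice noisily reveals an agent's type, and the learner imitates the agent with the highest posterior probability of sharing its own type. The two types, the likelihoods, and the example observations below are hypothetical and chosen only to illustrate how sparse, indirect evidence can still single out an imitation target.

```python
# Hypothetical setup: two latent reward "types"; an observed goal choice
# reveals an agent's type only noisily.
TYPES = ("A", "B")
PRIOR = {"A": 0.5, "B": 0.5}
LIKELIHOOD = {"A": {"goal_a": 0.8, "goal_b": 0.2},   # P(choice | type)
              "B": {"goal_a": 0.2, "goal_b": 0.8}}

def posterior(observations):
    """Posterior over an agent's type given its observed goal choices."""
    post = dict(PRIOR)
    for obs in observations:
        post = {t: post[t] * LIKELIHOOD[t][obs] for t in TYPES}
        z = sum(post.values())
        post = {t: p / z for t, p in post.items()}
    return post

def pick_imitation_target(my_type, observed_agents):
    """Imitate the agent most likely to share the learner's reward type."""
    scores = {name: posterior(obs)[my_type] for name, obs in observed_agents.items()}
    return max(scores, key=scores.get), scores

observed = {"agent1": ["goal_a", "goal_a"],   # sparse, indirect evidence
            "agent2": ["goal_b"],
            "agent3": ["goal_a", "goal_b"]}
print(pick_imitation_target("A", observed))
```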
An experimental comparison of two or more optimization algorithms requires the same computational resources to be assigned to each algorithm. When a maximum runtime is set as the stopping criterion, all algorithms need to be executed on the same machine if they are to use the same resources. Unfortunately, the implementation code of the algorithms is not always available, which means that running the algorithms to be compared on the same machine is not always possible. And even if the implementations are available, some optimization algorithms might be costly to run, such as training large neural networks in the cloud. In this paper, we consider the following problem: how do we compare the performance of a new optimization algorithm B with a known algorithm A from the literature if we only have the results (the objective values) and the runtime of algorithm A on each instance? In particular, we present a methodology that enables a statistical analysis of the performance of algorithms executed on different machines. The proposed methodology has two parts. Firstly, we propose a model that, given the runtime of an algorithm on one machine, estimates the runtime of the same algorithm on another machine. This model can be adjusted so that the probability of estimating a runtime longer than it should be is arbitrarily low. Secondly, we introduce an adaptation of the one-sided sign test that uses a modified \textit{p}-value and takes this probability into account. Such an adaptation avoids increasing the probability of type I error associated with executing algorithms A and B on different machines.
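The sketch below illustrates the two ingredients under stated assumptions: a deliberately pessimistic machine-to-machine runtime conversion fitted from paired calibration runs, and a one-sided sign test whose p-value is inflated by a simple additive correction for the small probability that a comparison was distorted by the runtime estimate. The calibration data, the objective values, and in particular the additive correction are illustrative assumptions rather than the paper's exact model and modified p-value.

```python
import math
import random
random.seed(1)

def conservative_speed_ratio(paired_runtimes, alpha=0.05):
    """Estimate how much faster machine 2 is than machine 1 from paired runs of
    a calibration program, taking a low quantile of the observed ratios so the
    chance of granting too MUCH time on machine 2 stays small."""
    ratios = sorted(t2 / t1 for t1, t2 in paired_runtimes)
    k = max(0, math.floor(alpha * len(ratios)) - 1)
    return ratios[k]                      # pessimistic (small) ratio

def sign_test_p_value(wins, n):
    """One-sided exact sign test (no ties assumed): P[Binomial(n, 1/2) >= wins]."""
    return sum(math.comb(n, k) for k in range(wins, n + 1)) / 2 ** n

def corrected_p_value(p, n, eps):
    """A simple conservative correction (an assumption, not the paper's exact
    formula): allow each of the n comparisons to be wrong with prob. <= eps."""
    return min(1.0, p + n * eps)

# calibration runs of the same program on machine 1 and machine 2
calibration = [(t, t * random.uniform(0.45, 0.55)) for t in (10, 20, 40, 80, 160)]
ratio = conservative_speed_ratio(calibration)

# published results of A: (objective value, runtime on machine 1) per instance
results_A = [(102.0, 60.0), (98.5, 60.0), (77.1, 60.0), (55.0, 60.0)]
budgets_B = [round(ratio * t_a, 2) for _, t_a in results_A]   # time limits for B on machine 2
results_B = [101.0, 97.0, 78.0, 52.0]     # hypothetical objective values achieved by B

wins = sum(b < a for (a, _), b in zip(results_A, results_B))  # minimization problem
n = len(results_B)
p = sign_test_p_value(wins, n)
print("budgets for B:", budgets_B)
print("wins:", wins, "raw p:", round(p, 4),
      "corrected p:", round(corrected_p_value(p, n, 0.05), 4))
```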