Spiral Galaxies is a pencil-and-paper puzzle played on a grid of unit squares: given a set of points called centers, the goal is to partition the grid into polyominoes such that each polyomino contains exactly one center and is 180$^\circ$ rotationally symmetric about its center. We show that this puzzle is NP-complete, ASP-complete, and #P-complete even if (a) all solutions to the puzzle have rectangles for polyominoes; or (b) the polyominoes are required to be rectangles and all solutions to the puzzle have just 1$\times$1, 1$\times$3, and 3$\times$1 rectangles. The proof for the latter variant also implies NP/ASP/#P-completeness of finding a noncrossing perfect matching in distance-2 grid graphs where edges connect vertices of Euclidean distance 2. Moreover, we prove NP-completeness of the design problem of minimizing the number of centers such that there exists a set of galaxies that exactly cover a given shape.
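As a minimal illustration of the symmetry condition that defines a galaxy (a sketch under our own conventions, not code from the paper): identify each unit cell by its lower-left integer corner and allow centers at half-integer coordinates, so a center may sit at a cell's middle, an edge midpoint, or a grid corner.

\begin{verbatim}
def rotate_cell(cell, center):
    # The cell [x, x+1] x [y, y+1], rotated 180 degrees about
    # (cx, cy), becomes the cell with lower-left corner
    # (2*cx - x - 1, 2*cy - y - 1).
    (x, y), (cx, cy) = cell, center
    return (round(2 * cx - x - 1), round(2 * cy - y - 1))

def is_symmetric(cells, center):
    # A galaxy must be closed under rotation about its center; a full
    # checker would also verify connectivity and that the center lies
    # inside the cell set.
    cells = set(cells)
    return all(rotate_cell(c, center) in cells for c in cells)

# a 1 x 3 rectangle with its center at (1.5, 0.5):
assert is_symmetric({(0, 0), (1, 0), (2, 0)}, (1.5, 0.5))
\end{verbatim}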
The FAIR principles for scientific data (Findable, Accessible, Interoperable, Reusable) are also relevant to other digital objects such as research software and scientific workflows that operate on scientific data. The FAIR principles can be applied to the data being handled by a scientific workflow as well as the processes, software, and other infrastructure which are necessary to specify and execute a workflow. The FAIR principles were designed as guidelines, rather than rules, that would allow for differences in standards for different communities and for different degrees of compliance. There are many practical considerations which impact the level of FAIR-ness that can actually be achieved, including policies, traditions, and technologies. Because of these considerations, obstacles are often encountered during the workflow lifecycle that trace directly to shortcomings in the implementation of the FAIR principles. Here, we detail some cases, without naming names, in which data and workflows were Findable but otherwise lacking in areas commonly needed and expected by modern FAIR methods, tools, and users. We describe how some of these problems, all of which were overcome successfully, have motivated us to push on systems and approaches for fully FAIR workflows.
Minority groups have been using social media to organize social movements that create profound social impacts. Black Lives Matter (BLM) and Stop Asian Hate (SAH) are two successful social movements that have spread on Twitter, promoting protests and activities against racism and increasing the public's awareness of other social challenges that minority groups face. However, previous studies have mostly conducted qualitative analyses of tweets or interviews with users, which may not comprehensively and validly represent all tweets. Very few studies have explored the Twitter topics within BLM and SAH dialogs with a rigorous, quantitative, data-centered approach. Therefore, in this research, we adopted a mixed-methods approach to comprehensively analyze BLM and SAH Twitter topics. We implemented (1) the latent Dirichlet allocation model to understand the top high-level words and topics and (2) open-coding analysis to identify specific themes across the tweets. We collected more than one million tweets with the #blacklivesmatter and #stopasianhate hashtags and compared their topics. Our findings revealed that the tweets discussed a variety of influential topics in depth; social justice, social movements, and emotional sentiments were common topics in both movements, though each movement had its own unique subtopics. Our study contributes to the topic analysis of social movements on social media platforms in particular and to the literature on the interplay of AI, ethics, and society in general.
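A minimal sketch of step (1), using scikit-learn's LDA implementation; the toy corpus, preprocessing, and topic count are illustrative placeholders, not the study's actual pipeline or settings.

\begin{verbatim}
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "march for justice downtown today",
    "stop asian hate rally this weekend",
    "community vigil and support for victims",
]

# bag-of-words counts; a real pipeline would also strip hashtags,
# URLs, mentions, and emoji before vectorizing
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# print the top words of each inferred topic
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {top}")
\end{verbatim}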
A well-designed simultaneously transmitting and reflecting RIS (STAR-RIS), which extends half-space coverage to full-space coverage, renders wireless communication environments smart and reconfigurable. In this paper, we investigate how a STAR-RIS affects the performance of full-duplex communication systems in the presence of full-duplex users, wherein the base station (BS) and the uplink users are subject to maximum transmission power constraints. First, the weighted sum-rate (WSR) is derived as the system performance metric. We then formulate resource allocation as an optimization problem that jointly optimizes the beamforming and combining vectors at the BS, the transmit powers of the uplink users, and the phase shifts of the STAR-RIS to maximize the WSR; the problem is recast in an equivalent weighted minimum mean-square-error form and decomposed into several convex sub-problems. Although the WSR optimization is non-convex, an efficient iterative alternating procedure is proposed to achieve a sub-optimal solution. Second, the STAR-RIS's phase shifts are optimized via the successive convex approximation technique. Finally, numerical results illustrate how the STAR-RIS improves the performance metric in the presence of full-duplex users.
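The abstract leaves the metric implicit; as a point of reference (an assumption on our part, since the paper's exact expression may differ), a standard weighted sum-rate over downlink users $k$ and uplink users $j$, with weights $w_k$ and $v_j$, reads
\[
\mathrm{WSR} \;=\; \sum_{k} w_k \log_2\!\bigl(1 + \mathrm{SINR}^{\mathrm{DL}}_k\bigr) \;+\; \sum_{j} v_j \log_2\!\bigl(1 + \mathrm{SINR}^{\mathrm{UL}}_j\bigr),
\]
where each SINR depends jointly on the BS beamforming and combining vectors, the uplink transmit powers, and the STAR-RIS phase shifts; this coupling is what motivates the alternating decomposition into sub-problems.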
Purpose: We perform anatomical landmarking for craniomaxillofacial (CMF) bones without explicitly segmenting them. Towards this, we propose a new simple yet efficient deep network architecture, called \textit{relational reasoning network (RRN)}, to accurately learn the local and global relations among the landmarks in CMF bones; specifically, the mandible, maxilla, and nasal bones. Approach: The proposed RRN works in an end-to-end manner, utilizing learned relations of the landmarks based on dense-block units. Given a few landmarks as input, RRN treats the landmarking process as a data imputation problem in which the predicted landmarks are considered missing. Results: We applied RRN to cone beam computed tomography scans obtained from 250 patients. With a 4-fold cross-validation technique, we obtained an average root mean squared error of less than 2 mm per landmark. The proposed RRN revealed unique relationships among the landmarks that allow us to reason about the informativeness of individual landmark points. The system identifies missing landmark locations accurately even when severe pathology or deformation is present in the bones. Conclusions: Accurately identifying anatomical landmarks is a crucial step in deformation analysis and surgical planning for CMF surgeries. Achieving this goal without explicit bone segmentation addresses a major limitation of segmentation-based approaches, where segmentation failure (as is often the case in bones with severe pathology or deformation) can easily lead to incorrect landmarking. To the best of our knowledge, this is the first algorithm of its kind to find anatomical relations among objects using deep learning.
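For reference, the reported per-landmark figure is, under the standard definition we assume here, the root mean squared Euclidean error over $N$ evaluated landmarks with predicted positions $\hat{\mathbf{p}}_i$ and ground-truth positions $\mathbf{p}_i$:
\[
\mathrm{RMSE} \;=\; \sqrt{\frac{1}{N}\sum_{i=1}^{N} \lVert \hat{\mathbf{p}}_i - \mathbf{p}_i \rVert_2^2} \;<\; 2~\text{mm}.
\]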
Generative Adversarial Network (GAN)-based art has proliferated in the past year, going from a shiny new tool for generating fake human faces to a stage where anyone can generate thousands of artistic images with minimal effort. Some of these images are now ``good'' enough to win accolades from qualified judges. In this paper, we explore how generative models have impacted artistry, not only from a qualitative point of view, but also from the angle of the exploitation of artists -- both via plagiarism, where models are trained on their artwork without permission, and via profit shifting, where profits in the art market have shifted from art creators to model owners or to traders in the unregulated secondary crypto market. This confluence of factors risks completely detaching humans from the artistic process, devaluing the labor of artists and distorting the public perception of the value of art.
We introduce a two-person non-zero-sum random-turn game that is a variant of the stake-governed games recently introduced in [HP2022]. We call the new game the Trail of Lost Pennies. At time zero, a counter is placed at a given integer location: $X_0 = k \in \mathbb{Z}$, say. At the $i$-th turn (for $i \in \mathbb{N}_+$), Maxine and Mina place non-negative stakes, $a_i$ and $b_i$, each paid from her own savings. Maxine is declared the turn victor with probability $\tfrac{a_i}{a_i+b_i}$; otherwise, Mina is. If Maxine wins the turn, she moves the counter one place to the right, so that $X_i = X_{i-1} + 1$; if Mina wins it, the counter moves one place to the left, so that $X_i = X_{i-1} - 1$. If $\liminf X_i = \infty$, then Maxine wins the game; if $\limsup X_i = -\infty$, then Mina does. (A special rule is needed to treat the remaining, indeterminate, case.) When Maxine wins, she receives a terminal payment of $m_\infty$, while Mina receives $n_\infty$; if Mina wins, these respective receipts are $m_{-\infty}$ and $n_{-\infty}$. The four terminal payment values are assumed to be real numbers satisfying $m_\infty > m_{-\infty}$ and $n_\infty < n_{-\infty}$, bounds that accord with the notion that Maxine wins when the counter ends far to the right and that Mina does so when it ends far to the left. Each player is motivated to offer stakes at each turn of the game, in order to secure the higher terminal payment that will arise from her victory; but since these stake amounts accumulate into a cost that depletes the profit arising from victory, each player must also seek to control these expenses. In this article, we study the Trail of Lost Pennies, formulating strategies for the two players and defining and analysing Nash equilibria in the game.
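A toy simulation of the game dynamics (our own sketch: the constant stake policies and the finite horizon standing in for the infinite-time win condition are illustrative choices, not the article's analysis):

\begin{verbatim}
import random

def play_trail(k, stake_maxine, stake_mina, horizon=10_000):
    x = k
    cost_maxine = cost_mina = 0.0
    for _ in range(horizon):
        a, b = stake_maxine(x), stake_mina(x)
        cost_maxine += a
        cost_mina += b
        # Maxine is the turn victor with probability a / (a + b)
        x += 1 if random.random() < a / (a + b) else -1
    return x, cost_maxine, cost_mina

# constant unit stakes, just to exercise the dynamics
final, ca, cb = play_trail(k=0,
                           stake_maxine=lambda x: 1.0,
                           stake_mina=lambda x: 1.0)
print(final, ca, cb)
\end{verbatim}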
In type-and-coeffect systems, contexts are enriched by coeffects modeling how variables are actually used, typically through annotations on single variables. Coeffects are computed bottom-up by combining, for each term, the coeffects of its subterms through a fixed set of algebraic operators. We show that this principled approach can be adopted to track sharing in the imperative paradigm, that is, links among variables possibly introduced by execution. This provides a significant example of non-structural coeffects, which cannot be computed per-variable, since the way a given variable is used can affect the coeffects of other variables. To illustrate the effectiveness of the approach, we enhance the type system tracking sharing to model a sophisticated set of features related to uniqueness and immutability. Thanks to the coeffect-based approach, we can express such features in a simple way and prove the related properties with standard techniques.
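A toy illustration (not the paper's calculus) of why sharing makes coeffects non-structural: an assignment that links two variables changes the annotation of both, so coeffects cannot be computed variable-by-variable.

\begin{verbatim}
class SharingCtx:
    def __init__(self, variables):
        # each variable starts in its own equivalence class of links
        self.links = {v: {v} for v in variables}

    def assign(self, x, y):
        # "x = y" may introduce sharing: merge the two link classes
        merged = self.links[x] | self.links[y]
        for v in merged:
            self.links[v] = merged

ctx = SharingCtx(["x", "y", "z"])
ctx.assign("x", "y")
print(ctx.links["z"])  # {'z'}: unaffected
print(ctx.links["y"])  # {'x', 'y'}: changed by an operation on x
\end{verbatim}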
Emerging quantum algorithms for problems such as element distinctness, subset sum, and closest pair demonstrate computational advantages by relying on abstract data structures. Practically realizing such an algorithm as a program for a quantum computer requires an efficient implementation of the data structure whose operations correspond to unitary operators that manipulate quantum superpositions of data. To correctly operate in superposition, an implementation must satisfy three properties -- reversibility, history independence, and bounded-time execution. Standard implementations, such as the representation of an abstract set as a hash table, fail these properties, calling for tools to develop specialized implementations. In this work, we present Core Tower, the first language for quantum programming with random-access memory. Core Tower enables the developer to implement data structures as pointer-based, linked data. It features a reversible semantics enabling every valid program to be translated to a unitary quantum circuit. We present Boson, the first memory allocator that supports reversible, history-independent, and constant-time dynamic memory allocation in quantum superposition. We also present Tower, a language for quantum programming with recursively defined data structures. Tower features a type system that bounds all recursion using classical parameters as is necessary for a program to execute on a quantum computer. Using Tower, we implement Ground, the first quantum library of data structures, including lists, stacks, queues, strings, and sets. We provide the first executable implementation of sets that satisfies all three mandated properties of reversibility, history independence, and bounded-time execution.
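A classical toy contrast (our own Python sketch, not Tower or Ground code) showing what history independence demands: an insertion-order-dependent representation of a set distinguishes computation histories, while a canonical representation depends only on the abstract set, which is what lets quantum branches that build the same set interfere.

\begin{verbatim}
def history_dependent(items):
    d = {}
    for x in items:
        d[x] = True
    return list(d)      # exposes insertion order, i.e., the history

def canonical(items):
    return tuple(sorted(set(items)))   # depends only on the set

print(history_dependent([1, 2, 3]) == history_dependent([3, 2, 1]))  # False
print(canonical([1, 2, 3]) == canonical([3, 2, 1]))                  # True
\end{verbatim}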
We consider the problem of partitioning a line segment into two subsets so that $n$ finite measures all have the same ratio of values for the two subsets. Letting $\alpha\in[0,1]$ denote the desired ratio, this generalises the PPA-complete consensus-halving problem, in which $\alpha=\frac{1}{2}$. Stromquist and Woodall showed that for any $\alpha$, there exists a solution using $2n$ cuts of the segment. They also showed that if $\alpha$ is irrational, this upper bound is almost optimal. In this work, we elaborate the bounds for rational values of $\alpha$. For $\alpha = \frac{\ell}{k}$, we show a lower bound of $\frac{k-1}{k} \cdot 2n - O(1)$ cuts; we also obtain almost matching upper bounds for a large subset of rational $\alpha$. On the computational side, we explore how the complexity of the problem depends on the number of cuts available. More specifically: 1. when the minimal number of cuts for each instance is required, the problem is NP-hard for any $\alpha$; 2. for a large subset of rational $\alpha = \frac{\ell}{k}$, when $\frac{k-1}{k} \cdot 2n$ cuts are available, the problem is in PPA-$k$ under Turing reduction; 3. when $2n$ cuts are allowed, the problem belongs to PPA for any $\alpha$; more generally, the problem belongs to PPA-$p$ for any prime $p$ if $2(p-1)\cdot \frac{\lceil p/2 \rceil}{\lfloor p/2 \rfloor} \cdot n$ cuts are available.
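For intuition about the $n = 1$ case, where a single cut always suffices, here is a small numeric sketch (with an illustrative measure and ratio of our choosing) that bisects for the cut point $t$ at which the left piece carries an $\alpha$ fraction of the total mass:

\begin{verbatim}
def find_cut(cdf, alpha, tol=1e-12):
    # cdf is nondecreasing with cdf(0.0) = 0; bisect for the point
    # where the left piece holds an alpha fraction of the total mass
    total = cdf(1.0)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cdf(mid) < alpha * total:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# a measure with density 2t on [0, 1], so cdf(t) = t**2
t = find_cut(lambda t: t * t, alpha=1/3)
print(t, t * t)   # t is about 0.577 and the left mass is 1/3
\end{verbatim}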
I survey, for a general scientific audience, three decades of research into which sorts of problems admit exponential speedups via quantum computers -- from the classics (like the algorithms of Simon and Shor), to the breakthrough of Yamakawa and Zhandry from April 2022. I discuss both the quantum circuit model, which is what we ultimately care about in practice but where our knowledge is radically incomplete, and the so-called oracle or black-box or query complexity model, where we've managed to achieve a much more thorough understanding that then informs our conjectures about the circuit model. I discuss the strengths and weaknesses of switching attention to sampling tasks, as was done in the recent quantum supremacy experiments. I make some skeptical remarks about widely-repeated claims of exponential quantum speedups for practical machine learning and optimization problems. Through many examples, I try to convey the "law of conservation of weirdness," according to which every problem admitting an exponential quantum speedup must have some unusual property to allow the amplitude to be concentrated on the unknown right answer(s).