Suppose a gambler pays one coin per coup to play a two-armed Futurity slot machine, an antique casino game, in which two coins are refunded after every two consecutive gambler losses. This payoff is called the Futurity award. The casino owner honestly advertises that each arm of the two-armed machine is fair, in the sense that the asymptotic expected profit of both gambler and dealer is zero if the gambler plays only that arm. The gambler is allowed to alternate between the two arms on successive coups, either in some deterministic order or at random. Ever since the Futurity slot machine was designed in 1936, nearly 90 years ago, it has remained an open problem whether the machine obeys the "long bets lose" phenomenon so common to casino games. Ethier and Lee [Ann. Appl. Probab. 20 (2010), pp. 1098-1125] conjectured that a player will also lose in the long run under any non-random-mixture strategy. In this paper, we prove Ethier and Lee's conjecture. Combined with Ethier and Lee's earlier conclusion, our result shows that if a player fixes either a random or a non-random two-arm strategy before playing and then repeats it without interruption, the casino owner is always profitable, even when the Futurity award is taken into account. This work helps complete the demystification of casino profitability. Moreover, it paves the way for casino owners to improve casino game design and for players to participate in gambling more effectively.
Simultaneous orthogonal matching pursuit (SOMP) is a popular greedy approach for common support recovery of a row-sparse matrix. However, compared to the noiseless scenario, the performance analysis of noisy SOMP is still nascent, especially in the scenario of unbounded noise. In this paper, we present a new study based on the mutual incoherence property (MIP) for performance analysis of noisy SOMP. Specifically, when the noise is bounded, we provide a condition, stated in terms of the MIP, under which exact support recovery is guaranteed. When the noise is unbounded, we instead derive a bound on the successful recovery probability (SRP) that depends on the specific distribution of the $\ell_2$-norm of the noise matrix. We then focus on the common case of random Gaussian noise and show that the lower bound of the SRP is characterized by the Tracy-Widom law. The analysis reveals the number of measurements, the noise level, the number of sparse vectors, and the mutual coherence required to guarantee a predefined recovery performance. Theoretically, we show that the mutual coherence of the measurement matrix must decrease proportionally to the noise standard deviation, and that the number of sparse vectors needs to grow proportionally to the noise variance. Finally, we extensively validate the derived analysis through numerical simulations.
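For reference, the SOMP iteration that the analysis above concerns can be sketched as follows. This is a minimal NumPy sketch of the standard noiseless greedy loop, not the authors' analysis code; the dimensions and the toy instance are illustrative:

```python
import numpy as np

def somp(A, Y, s):
    """Simultaneous OMP: greedily recover a common support of size s
    for the row-sparse X in Y = A @ X (+ noise)."""
    support, R = [], Y.copy()
    for _ in range(s):
        # Pick the column most correlated (in l2 across all channels)
        # with the current residual.
        scores = np.linalg.norm(A.T @ R, axis=1)
        scores[support] = -np.inf          # do not reselect an index
        support.append(int(np.argmax(scores)))
        # Least-squares fit on the chosen columns, then update the residual.
        X_hat, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X_hat
    return sorted(support), R

# Noiseless toy instance: 3-row-sparse X observed through 4 channels.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 30))
A /= np.linalg.norm(A, axis=0)             # unit-norm columns
true_support = [4, 11, 27]
X = np.zeros((30, 4))
X[true_support] = rng.standard_normal((3, 4))
support, R = somp(A, A @ X, s=3)
```

In the noiseless case the residual vanishes once the true support is found; the paper's question is how much of this behavior survives bounded and unbounded noise.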
Studies of the human brain during natural activities, such as locomotion, would benefit from the ability to image deep brain structures during these activities. While Positron Emission Tomography (PET) can image these structures, the bulk and weight of current scanners are not compatible with the desire for a wearable device. This has motivated the design of a robotic system to support a PET imaging system around the subject's head and to move the system to accommodate natural motion. We report here the design and experimental evaluation of a prototype robotic system that senses motion of a subject's head, using parallel string encoders connected between the robot-supported imaging ring and a helmet worn by the subject. This measurement is used to robotically move the imaging ring (coarse motion correction) and to compensate for residual motion during image reconstruction (fine motion correction). Minimization of latency and measurement error are the key design goals, respectively, for coarse and fine motion correction. The system is evaluated using recorded human head motions during locomotion, with a mock imaging system consisting of lasers and cameras, and is shown to provide an overall system latency of about 80 ms, which is sufficient for coarse motion correction and collision avoidance, as well as a measurement accuracy of about 0.5 mm for fine motion correction.
We show that the known list-decoding algorithms for univariate multiplicity and folded Reed-Solomon (FRS) codes can be made to run in nearly-linear time. This yields, to our knowledge, the first known family of codes that can be decoded in nearly linear time, even as they approach the list decoding capacity. Univariate multiplicity codes and FRS codes are natural variants of Reed-Solomon codes that were discovered and studied for their applications to list-decoding. It is known that for every $\epsilon >0$, and rate $R \in (0,1)$, there exist explicit families of these codes that have rate $R$ and can be list-decoded from a $(1-R-\epsilon)$ fraction of errors with constant list size in polynomial time (Guruswami & Wang (IEEE Trans. Inform. Theory, 2013) and Kopparty, Ron-Zewi, Saraf & Wootters (SIAM J. Comput. 2023)). In this work, we present randomized algorithms that perform the above tasks in nearly linear time. Our algorithms have two main components. The first builds upon the lattice-based approach of Alekhnovich (IEEE Trans. Inf. Theory 2005), who designed a nearly linear time list-decoding algorithm for Reed-Solomon codes approaching the Johnson radius. As part of the second component, we design nearly-linear time algorithms for two natural algebraic problems. The first algorithm solves linear differential equations of the form $Q\left(x, f(x), \frac{df}{dx}, \dots,\frac{d^m f}{dx^m}\right) \equiv 0$ where $Q$ has the form $Q(x,y_0,\dots,y_m) = \tilde{Q}(x) + \sum_{i = 0}^m Q_i(x)\cdot y_i$. The second solves functional equations of the form $Q\left(x, f(x), f(\gamma x), \dots,f(\gamma^m x)\right) \equiv 0$ where $\gamma$ is a high-order field element. These algorithms can be viewed as generalizations of classical algorithms of Sieveking (Computing 1972) and Kung (Numer. Math. 1974) for computing the modular inverse of a power series, and might be of independent interest.
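For context, the classical Sieveking-Kung primitive that the above algorithms generalize, Newton iteration for the inverse of a power series modulo $x^n$, can be sketched as follows. This schoolbook sketch uses quadratic-time polynomial multiplication over a prime field for clarity; a nearly-linear-time version would substitute FFT-based multiplication:

```python
def poly_mul_trunc(a, b, n, p):
    """Product of polynomials a, b (coefficient lists, low degree first)
    truncated modulo x^n, with coefficients modulo prime p."""
    res = [0] * n
    for i, ai in enumerate(a[:n]):
        if ai:
            for j, bj in enumerate(b[:n - i]):
                res[i + j] = (res[i + j] + ai * bj) % p
    return res

def series_inverse(f, n, p):
    """Sieveking-Kung: g with f*g = 1 mod (x^n, p) via the Newton step
    g <- g*(2 - f*g), doubling the precision each round."""
    assert f[0] % p != 0, "constant term must be invertible"
    g = [pow(f[0], p - 2, p)]          # inverse of f(0) by Fermat's little theorem
    k = 1
    while k < n:
        k = min(2 * k, n)
        fg = poly_mul_trunc(f, g, k, p)
        t = [(-c) % p for c in fg]     # t = 2 - f*g (mod x^k)
        t[0] = (t[0] + 2) % p
        g = poly_mul_trunc(g, t, k, p)
    return g

# Inverse of 1 + x modulo (x^8, 101): the alternating series 1 - x + x^2 - ...
inv = series_inverse([1, 1], 8, 101)
```

The quadratic convergence of the Newton step is what keeps the total cost dominated by the final multiplication, which is the same structural trick the differential and functional equation solvers exploit.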
Decentralised organisations use blockchains as a basis for governance: they use on-chain transactions to allocate voting weight, publish proposals, cast votes, and enact the results. However, blockchain-based governance structures face challenges, most notably the need to align the short-term outlook of pseudonymous voters with the long-term growth and success of the decentralised organisation. The Vote-Escrowed Token (veToken) model attempts to resolve this tension by requiring voters to escrow or lock tokens of value for an extended period in exchange for voting weight. In this paper, we describe the veToken model and analyse its emergent outcomes. We show that voting behaviour follows bribes set by higher-level protocols, and that the cost per vote varies depending on how it is acquired. We describe the implementation of the veToken model by Curve Finance, a popular automated market maker for stablecoins, and the ecosystem of protocols that has arisen on top of this implementation. We show that voting markets such as Votium largely determine the outcome of fortnightly votes held by Convex Finance, and we show that Frax Finance, a stablecoin issuer, plays a central role in the ecosystem even though they directly lock relatively few tokens with Curve. Instead, they indirectly lock tokens through yield aggregators such as Convex Finance and purchase voting weight through voting markets such as Votium. Although the veToken model in isolation is straightforward and easily explained, it leads to many complex and emergent outcomes. Decentralised organisations should consider these outcomes before adopting the model.
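The core veToken mechanic described above can be sketched in a few lines. This is a simplified sketch of the linear-decay rule used by Curve's veCRV (locks of up to four years, with voting weight decaying linearly to zero at unlock); the function name and the worked numbers are illustrative:

```python
MAXTIME = 4 * 365 * 86400  # maximum lock: 4 years, in seconds

def voting_weight(amount, unlock_time, now):
    """veToken voting weight: locked tokens scaled by the remaining
    lock time, decaying linearly to zero at unlock (simplified veCRV rule)."""
    remaining = max(unlock_time - now, 0)
    return amount * remaining / MAXTIME

# 1000 tokens locked for the full 4 years carry weight 1000 at lock time,
# 500 halfway through the lock, and 0 once the lock expires.
w_start = voting_weight(1000, MAXTIME, 0)
w_half = voting_weight(1000, MAXTIME, MAXTIME // 2)
w_expired = voting_weight(1000, 100, 200)
```

The decay is what couples voting power to long-term commitment: weight can only be maintained by repeatedly extending the lock, which is exactly the tension the bribe and voting-market ecosystem then monetises.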
Ising machines are specialized computers for finding the lowest-energy states of Ising spin models, onto which many practical combinatorial optimization problems can be mapped. Simulated bifurcation (SB) is a quantum-inspired parallelizable algorithm for Ising problems that enables scalable multi-chip implementations of Ising machines. However, the computational performance of a previously proposed multi-chip architecture tends to saturate as the number of chips increases for a given problem size, because computation and communication are mutually exclusive in the time domain. In this paper, we propose a streaming architecture for multi-chip implementations of SB-based Ising machines with full spin-to-spin connectivity. The data flow of in-chip computation is harmonized with that of inter-chip communication, allowing computation and communication to overlap so that the communication time is hidden. Systematic experiments demonstrate linear strong scaling of performance up to the vicinity of the ideal communication limit determined only by the latency of chip-to-chip communication. Our eight-FPGA (field-programmable gate array) cluster can compute a 32,768-spin problem with a high pipeline efficiency of 97.9%. The performance of a 79-FPGA cluster for a 100,000-spin problem, projected using a theoretical performance model validated on smaller experimental clusters, is comparable to that of a state-of-the-art 100,000-spin optical Ising machine.
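The per-spin update that such machines parallelize can be illustrated with a minimal single-chip sketch of the ballistic SB dynamics. The parameters, ramp schedule, and toy ferromagnetic instance are illustrative; the paper's contribution is the multi-chip streaming architecture, which this sketch does not model:

```python
import numpy as np

def ballistic_sb(J, steps=2000, dt=0.1, a0=1.0, c0=0.2, seed=0):
    """Ballistic simulated bifurcation: evolve positions x and momenta y
    under a linearly ramped bifurcation parameter, with inelastic walls
    clamping x to [-1, 1]."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 0.01 * rng.standard_normal(n)   # small random initial positions
    y = np.zeros(n)
    for t in range(steps):
        a = a0 * t / steps               # ramp a(t) from 0 up to a0
        y += (-(a0 - a) * x + c0 * (J @ x)) * dt
        x += a0 * y * dt
        wall = np.abs(x) > 1             # inelastic walls: clamp position,
        x[wall] = np.sign(x[wall])       # zero the momentum there
        y[wall] = 0.0
    return np.sign(x)

def ising_energy(J, s):
    """Ising energy E(s) = -1/2 s^T J s for spins s in {-1, +1}^n."""
    return -0.5 * s @ J @ s

# Toy 4-spin ferromagnet: the ground states are the two aligned configurations.
J = np.ones((4, 4)) - np.eye(4)
s = ballistic_sb(J)
```

The matrix-vector product `J @ x` dominates the cost and is the all-to-all term that the streaming architecture distributes across chips.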
We study the convergence of three projected Sobolev gradient flows to the ground state of the Gross-Pitaevskii eigenvalue problem. They are constructed as the gradient flows of the Gross-Pitaevskii energy functional with respect to the $H^1_0$-metric and two other equivalent metrics on $H_0^1$, namely the iterate-independent $a_0$-metric and the iterate-dependent $a_u$-metric. We first prove the energy dissipation property and the global convergence to a critical point of the Gross-Pitaevskii energy for the discrete-time $H^1$- and $a_0$-gradient flows. We also prove local exponential convergence of all three schemes to the ground state.
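The general shape of such schemes can be illustrated with a minimal finite-difference sketch of a projected gradient flow for a 1D Gross-Pitaevskii energy. For simplicity this sketch descends along the plain $L^2$ gradient and renormalizes onto the unit $L^2$ sphere, rather than using the $H^1$, $a_0$, or $a_u$ Sobolev metrics analyzed in the paper; the grid, potential, and parameters are illustrative:

```python
import numpy as np

def gpe_energy(u, V, beta, h):
    """E(u) = int 1/2|u'|^2 + 1/2 V u^2 + beta/4 u^4, Dirichlet BCs."""
    up = np.concatenate(([0.0], u, [0.0]))   # include the boundary zeros
    du = np.diff(up) / h
    return h * (0.5 * np.sum(du**2) + np.sum(0.5 * V * u**2 + 0.25 * beta * u**4))

def normalized_gradient_flow(n=50, beta=10.0, tau=1e-4, steps=5000):
    """Discrete-time gradient flow with renormalization after each step."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)             # interior grid on (0, 1)
    V = 100.0 * (x - 0.5) ** 2               # confining potential (illustrative)
    u = np.sin(np.pi * x)
    u /= np.sqrt(h) * np.linalg.norm(u)      # normalize: ||u||_L2 = 1
    for _ in range(steps):
        lap = (np.roll(u, -1) + np.roll(u, 1) - 2 * u) / h**2
        lap[0] = (u[1] - 2 * u[0]) / h**2    # Dirichlet ends
        lap[-1] = (u[-2] - 2 * u[-1]) / h**2
        grad = -lap + V * u + beta * u**3    # L2 gradient of the energy
        u = u - tau * grad                   # descent step
        u /= np.sqrt(h) * np.linalg.norm(u)  # project back to the unit sphere
    return x, u, V, h
```

The role of the Sobolev metrics studied in the paper is precisely to precondition the `grad` direction so that the step size need not shrink with the mesh, which the plain $L^2$ flow above requires.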
We introduce the notion of the Lie derivative in the context of dual quaternions that represent rigid motions and twists. First we define the wrench in terms of dual quaternions. Then we show how the Lie derivative helps in understanding how actuators affect an end effector in parallel robots, and make this explicit in the two cases of Stewart platforms and cable-driven parallel robots. We also show how to use Lie derivatives with the Newton-Raphson method to solve the forward kinematic problem for overconstrained parallel actuators. Finally, we derive the equations of motion of the end effector in dual quaternion form, which include the effect of inertia from the actuators.
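The underlying algebra can be sketched briefly: a rigid motion is a unit dual quaternion $r + \varepsilon \frac{1}{2} t r$, with composition given by the dual quaternion product. This minimal sketch shows only the product and translation extraction, not the paper's Lie derivative or wrench machinery; the function names are illustrative:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    """Quaternion conjugate."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def rigid_motion(r, t):
    """Dual quaternion (real part, dual part) for rotation r then translation t."""
    tq = np.array([0.0, *t])                 # translation as a pure quaternion
    return r, 0.5 * qmul(tq, r)

def dq_mul(a, b):
    """Dual quaternion product: (a_r + eps a_d)(b_r + eps b_d)."""
    ar, ad = a
    br, bd = b
    return qmul(ar, br), qmul(ar, bd) + qmul(ad, br)

def translation_of(dq):
    """Recover the translation vector: t = 2 d r* for a unit dual quaternion."""
    r, d = dq
    return qmul(2.0 * d, qconj(r))[1:]

# Composing two pure translations adds the translation vectors.
identity = np.array([1.0, 0.0, 0.0, 0.0])
T1 = rigid_motion(identity, [1.0, 0.0, 0.0])
T2 = rigid_motion(identity, [0.0, 2.0, 0.0])
t_total = translation_of(dq_mul(T2, T1))
```

Twists and wrenches then live in the tangent space of this unit group, which is where the Lie derivative introduced in the paper operates.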
Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and methods of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations, and has committed to producing ethics guidelines and frameworks in security and defence. Australia is committed to the OECD's values-based principles for the responsible stewardship of trustworthy AI, as well as adopting a set of national AI ethics principles. While Australia has not adopted an AI governance framework specifically for Defence, Defence Science has published the 'A Method for Ethical AI in Defence' (MEAID) technical report, which includes a framework and pragmatic tools for managing ethical and legal risks for military applications of AI.
Since cyberspace was consolidated as the fifth warfare domain, the various actors of the defense sector have begun an arms race toward achieving cyber superiority, to which research, academic, and industrial stakeholders contribute from a dual-use perspective, one largely linked to a broad and heterogeneous heritage of developing and adopting civilian cybersecurity capabilities. In this context, increasing awareness of the operational context and warfare environment, and of the risks and impacts of cyber threats on kinetic actuations, has become a critical rule-changer that military decision-makers must consider. A major challenge in acquiring mission-centric Cyber Situational Awareness (CSA) is the dynamic inference and assessment of how situations occurring in the mission-supporting Information and Communications Technologies (ICT) propagate vertically up to their relevance at the military tactical, operational, and strategic levels. To contribute to acquiring CSA, this paper addresses a major gap in the cyber defence state of the art: the dynamic identification of Key Cyber Terrains (KCT) in a mission-centric context. Accordingly, the proposed KCT identification approach explores the dependency degrees among tasks and assets defined by commanders as part of the assessment criteria; these are correlated with discoveries on the operational network and with the asset vulnerabilities identified throughout the supported mission's development. The proposal is presented as a reference model that reveals the key aspects of mission-centric KCT analysis and supports its enforcement and further extension by including an illustrative application case.
Although measuring held-out accuracy has been the primary approach to evaluate generalization, it often overestimates the performance of NLP models, while alternative approaches for evaluating models either focus on individual tasks or on specific behaviors. Inspired by principles of behavioral testing in software engineering, we introduce CheckList, a task-agnostic methodology for testing NLP models. CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation, as well as a software tool to generate a large and diverse number of test cases quickly. We illustrate the utility of CheckList with tests for three tasks, identifying critical failures in both commercial and state-of-the-art models. In a user study, a team responsible for a commercial sentiment analysis model found new and actionable bugs in an extensively tested model. In another user study, NLP practitioners with CheckList created twice as many tests and found almost three times as many bugs as users without it.
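The behavioral-testing idea can be illustrated framework-free: a Minimum Functionality Test (MFT) on templated examples and an invariance (INV) test under a label-preserving perturbation, run here against a deliberately naive keyword model. The model, templates, and function names are illustrative and do not reflect the CheckList tool's actual API:

```python
def naive_sentiment(text):
    """Stand-in model: bare keyword matching (illustrative only)."""
    text = text.lower()
    if any(w in text for w in ("great", "good", "love")):
        return "positive"
    if any(w in text for w in ("terrible", "bad", "hate")):
        return "negative"
    return "neutral"

def mft_negation(model):
    """MFT: templated negated positives should be classified negative.
    Returns the failing cases as (text, expected, predicted) triples."""
    cases = [(f"The food was not {w}.", "negative") for w in ("great", "good")]
    return [(t, gold, model(t)) for t, gold in cases if model(t) != gold]

def inv_name_change(model):
    """INV: swapping a person's name must not change the prediction.
    Returns the pairs whose predictions disagree."""
    pairs = [("I love the service, said John.",
              "I love the service, said Mary.")]
    return [(a, b) for a, b in pairs if model(a) != model(b)]

# The keyword model passes the invariance test but fails both negation cases,
# a failure mode held-out accuracy on typical test sets can easily miss.
mft_failures = mft_negation(naive_sentiment)
inv_failures = inv_name_change(naive_sentiment)
```

Templating is what makes such suites cheap to scale: one capability (negation) times one test type (MFT) yields arbitrarily many concrete cases.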