In swimming microorganisms and the cell cytoskeleton, inextensible fibers resist bending and twisting, and interact with the surrounding fluid to cause or resist large-scale fluid motion. In this paper, we develop a novel numerical method for the simulation of cylindrical fibers by extending our previous work on inextensible bending fibers [Maxian et al., Phys. Rev. Fluids 6 (1), 014102] to fibers with twist elasticity. In our "Euler" model, twist is a scalar function that measures the deviation of the fiber cross section relative to a twist-free frame, the fiber exerts only torque parallel to the centerline on the fluid, and the perpendicular components of the rotational fluid velocity are discarded in favor of the translational velocity. In the first part of this paper, we justify this model by comparing it to another commonly-used "Kirchhoff" formulation where the fiber exerts both perpendicular and parallel torque on the fluid, and the perpendicular angular fluid velocity is required to be consistent with the translational fluid velocity. We then develop a spectral numerical method for the hydrodynamics of the Euler model. We define hydrodynamic mobility operators using integrals of the Rotne-Prager-Yamakawa tensor, and evaluate these integrals through a novel slender-body quadrature, which requires on the order of 10 points along the fiber to obtain several digits of accuracy. We demonstrate that this choice of mobility removes the unphysical negative eigenvalues in the translation-translation mobility associated with asymptotic slender body theories, and ensures strong convergence of the fiber velocity and weak convergence of the fiber constraint forces. We pair the spatial discretization with a semi-implicit temporal integrator to confirm the negligible contribution of twist elasticity to the relaxation dynamics of a bent fiber and study the instability of a twirling fiber.
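To make the mobility construction concrete, the following is a minimal Python sketch of the standard equal-radius Rotne-Prager-Yamakawa pair mobility (the translation-translation block whose centerline integrals the abstract refers to); the function name and parameters are illustrative, and the paper's slender-body quadrature for those integrals is not reproduced here.

```python
import numpy as np

def rpy_pair_mobility(x1, x2, a, eta):
    """Standard equal-radius Rotne-Prager-Yamakawa translation-translation
    mobility block coupling two beads of radius a in a fluid of viscosity eta.
    (Illustrative sketch only; the paper integrates this kernel along the
    fiber centerline with a specialized slender-body quadrature.)"""
    r_vec = np.asarray(x2, float) - np.asarray(x1, float)
    r = np.linalg.norm(r_vec)
    I = np.eye(3)
    if r < 1e-14:                      # self-mobility: Stokes drag of a sphere
        return I / (6.0 * np.pi * eta * a)
    rhat_outer = np.outer(r_vec, r_vec) / r**2
    if r >= 2.0 * a:                   # non-overlapping beads
        c1 = 1.0 + 2.0 * a**2 / (3.0 * r**2)
        c2 = 1.0 - 2.0 * a**2 / r**2
        return (c1 * I + c2 * rhat_outer) / (8.0 * np.pi * eta * r)
    # overlapping beads: the RPY regularization keeps the mobility positive definite
    c1 = 1.0 - 9.0 * r / (32.0 * a)
    c2 = 3.0 * r / (32.0 * a)
    return (c1 * I + c2 * rhat_outer) / (6.0 * np.pi * eta * a)
```

The regularized overlapping-bead branch is what keeps this kernel positive definite, which is the property invoked above to rule out the unphysical negative eigenvalues of asymptotic slender-body theories.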
Interacting agents receive public information at no cost and flexibly acquire private information at a cost proportional to entropy reduction. When a policymaker provides more public information, agents acquire less private information, thus lowering information costs. Does more public information raise or reduce uncertainty faced by agents? Is it beneficial or detrimental to welfare? To address these questions, we examine the impacts of public information on flexible information acquisition in a linear-quadratic-Gaussian game with arbitrary quadratic material welfare. More public information raises uncertainty if and only if the game exhibits strategic complementarity, which can be harmful to welfare. However, when agents acquire a large amount of information, more provision of public information increases welfare through a substantial reduction in the cost of information. We give a necessary and sufficient condition for welfare to increase with public information and identify optimal public information disclosure, which is either full or partial disclosure depending upon the welfare function and the slope of the best response.
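As a rough illustration of the objects involved (a standard beauty-contest specification used only to fix notation for "slope of the best response" and "entropy reduction"; the paper allows a more general quadratic welfare function), the best response with slope $r$ and the information cost can be written as
$$ a_i = (1-r)\,\mathbb{E}_i[\theta] + r\,\mathbb{E}_i[\bar{a}], \qquad C(s_i) = \lambda \bigl( H(\theta) - H(\theta \mid s_i) \bigr), $$
where $r>0$ corresponds to strategic complementarity, $r<0$ to strategic substitutability, and $\lambda$ is the unit cost of information.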
In a sports competition, a team might lose a powerful incentive to exert full effort if its final rank does not depend on the outcome of the matches still to be played. Therefore, the organiser should reduce the probability of such a situation to the extent possible. Our paper provides a classification scheme to identify these stakeless games, which are weakly stakeless (one team is indifferent to the result) or strongly stakeless (both teams are indifferent). A statistical model is estimated to simulate the UEFA Champions League groups and to compare the candidate schedules used in the 2021/22 season according to the competitiveness of the matches played in the last round(s). The option followed in four of the eight groups is found to be optimal under a wide set of parameters. Minimising the number of strongly stakeless matches is verified to be a likely goal of the computer draw of the fixture, which remains hidden from the public.
We extend the Deep Galerkin Method (DGM) introduced in Sirignano and Spiliopoulos (2018) to solve a number of partial differential equations (PDEs) that arise in the context of optimal stochastic control and mean field games. First, we consider PDEs where the solution is constrained to be positive and integrate to unity, as is the case with Fokker-Planck equations. Our approach involves reparameterizing the solution as the exponential of a neural network, appropriately normalized to ensure both requirements are satisfied. This then gives rise to a nonlinear partial integro-differential equation (PIDE) in which the integral appearing in the equation is handled by a novel application of importance sampling. Second, we tackle a number of Hamilton-Jacobi-Bellman (HJB) equations that appear in stochastic optimal control problems. The key contribution is that these equations are approached in their unsimplified primal form, which includes an optimization problem as part of the equation. We extend the DGM algorithm to solve for the value function and the optimal control simultaneously by characterizing both as deep neural networks. Training the networks is performed by taking alternating stochastic gradient descent steps for the two functions, a technique inspired by policy improvement algorithms (PIA).
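As a minimal sketch of the first idea (hedged: the class and function names below are illustrative, and the full DGM training loop with PDE residuals and the alternating value/control updates is omitted), positivity and normalization can be enforced by modeling the log-density with a network and estimating the normalizing constant by importance sampling under a fixed proposal:

```python
import torch
import torch.nn as nn

class LogDensityNet(nn.Module):
    """Unnormalized log-density f_theta(x); the density is p(x) = exp(f_theta(x)) / Z."""
    def __init__(self, dim=1, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )
    def forward(self, x):
        return self.net(x).squeeze(-1)

def log_normalizer(f, proposal, n_samples=4096):
    """Importance-sampling estimate of log Z = log E_q[ exp(f(X)) / q(X) ]
    with X ~ q a fixed proposal; keeping the estimate inside the computation
    graph lets the normalized density be trained end to end."""
    x = proposal.sample((n_samples,))
    log_q = proposal.log_prob(x).sum(-1)
    log_w = f(x) - log_q
    return torch.logsumexp(log_w, dim=0) - torch.log(torch.tensor(float(n_samples)))

dim = 1
f = LogDensityNet(dim)
proposal = torch.distributions.Normal(torch.zeros(dim), torch.ones(dim))
x = torch.linspace(-3.0, 3.0, 5).unsqueeze(-1)
log_p = f(x) - log_normalizer(f, proposal)   # normalized log-density at x
print(log_p.detach())
```

By construction, $\exp(f_\theta(x))/Z$ is positive and, up to Monte Carlo error in the estimate of $Z$, integrates to one, which are the two requirements mentioned above; in the alternating scheme for HJB equations, one would analogously take gradient steps on a value network and a control network in turn.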
Turing inspired a computer metaphor of the mind and brain that has been handy and has spawned decades of empirical investigation, but he did much more and offered behavioral and cognitive sciences another metaphor--that of the cascade. The time has come to confront Turing's cascading instability, which suggests a geometrical framework driven by power laws and can be studied using multifractal formalism and multiscale probability density function analysis. Here, we review a rapidly growing body of scientific investigations revealing signatures of cascade instability and their consequences for a perceiving, acting, and thinking organism. We review work related to executive functioning (planning to act), postural control (bodily poise for turning plans into action), and effortful perception (action to gather information in a single modality and action to blend multimodal information). We also review findings on neuronal avalanches in the brain, specifically about neural participation in body-wide cascades. Turing's cascade instability blends the mind, brain, and behavior across space and time scales and provides an alternative to the dominant computer metaphor.
The quest to understand consciousness, once the purview of philosophers and theologians, is now actively pursued by scientists of many stripes. We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. In the spirit of Alan Turing's simple yet powerful definition of a computer, the Turing Machine (TM), and the perspective of computational complexity theory, we formalize a modified version of the Global Workspace Theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux and others. We are not looking for a complex model of the brain or of cognition, but for a simple computational model of (the admittedly complex concept of) consciousness. We do this by defining the Conscious Turing Machine (CTM), also called a conscious AI, and then we define consciousness and related notions in the CTM. While these are only mathematical (TCS) definitions, we suggest why the CTM has the feeling of consciousness. The TCS perspective provides a simple formal framework to employ tools from computational complexity theory and machine learning to help us understand consciousness and related concepts. Previously, we explored high-level explanations for the feelings of pain and pleasure in the CTM. Here we consider three examples related to vision (blindsight, inattentional blindness, and change blindness), followed by discussions of dreams, free will, and altered states of consciousness.
Refractive freeform components are becoming increasingly relevant for generating controlled patterns of light, because of their capability to spatially modulate optical signals with high efficiency and low background. However, the use of these devices is still limited by difficulties in manufacturing macroscopic elements with complex, 3-dimensional (3D) surface reliefs. Here, 3D-printed and stretchable magic windows generating light patterns by refraction are introduced. The shape and, consequently, the light texture achieved can be changed through controlled device strain. Cryptographic magic windows are demonstrated through exemplary light patterns, including micro-QR-codes, that are correctly projected and recognized upon strain gating while remaining cryptic for as-produced devices. The light pattern of micro-QR-codes can also be projected by two coupled magic windows, with one of them acting as the decryption key. Such novel freeform elements with 3D shape and tailored functionalities are relevant for applications in illumination design, smart labels, anti-counterfeiting systems, and cryptographic communication.
The performance of a quantum information processing protocol is ultimately judged by distinguishability measures that quantify how distinguishable the actual result of the protocol is from the ideal case. The most prominent distinguishability measures are those based on the fidelity and trace distance, due to their physical interpretations. In this paper, we propose and review several algorithms for estimating distinguishability measures based on trace distance and fidelity. The algorithms can be used for distinguishing quantum states, channels, and strategies (the last also known in the literature as "quantum combs"). The fidelity-based algorithms offer novel physical interpretations of these distinguishability measures in terms of the maximum probability with which a single prover (or competing provers) can convince a verifier to accept the outcome of an associated computation. We simulate many of these algorithms by using a variational approach with parameterized quantum circuits. We find that the simulations converge well in both the noiseless and noisy scenarios, for all examples considered. Furthermore, the noisy simulations exhibit resilience to parameter noise.
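For reference, the quantities these algorithms estimate reduce, for small explicit density matrices, to the textbook definitions below (a NumPy/SciPy sketch, not the variational quantum algorithms themselves; note that some references define fidelity without the outer square):

```python
import numpy as np
from scipy.linalg import sqrtm

def trace_distance(rho, sigma):
    """T(rho, sigma) = (1/2) * || rho - sigma ||_1 (half the sum of singular values)."""
    diff = rho - sigma
    return 0.5 * np.sum(np.linalg.svd(diff, compute_uv=False))

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = ( Tr sqrt( sqrt(rho) sigma sqrt(rho) ) )^2."""
    s = sqrtm(rho)
    inner = sqrtm(s @ sigma @ s)
    return float(np.real(np.trace(inner)) ** 2)

# Example: a pure single-qubit state vs. the maximally mixed state
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
sigma = 0.5 * np.eye(2)
print(trace_distance(rho, sigma))   # 0.5
print(fidelity(rho, sigma))         # 0.5
```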
Holonomic functions play an essential role in Computer Algebra since they allow the application of many symbolic algorithms. Among all algorithmic attempts to find formulas for power series, the holonomic property remains the most important requirement to be satisfied by the function under consideration. The functions targeted by this framework are mainly meromorphic functions. However, expressions like $\tan(z)$, $z/(\exp(z)-1)$, $\sec(z)$, etc., in particular reciprocals, quotients and compositions of holonomic functions, are generally not holonomic. Therefore, their power series are inaccessible to the holonomic framework. From mathematical dictionaries, one can observe that most of the known closed-form formulas for non-holonomic power series involve another sequence whose evaluation depends on some finite summations. In the case of $\tan(z)$ and $\sec(z)$, the corresponding sequences are the Bernoulli and Euler numbers, respectively. Thus, a symbolic approach that yields complete representations whenever the power series coefficients of non-holonomic functions can be expressed through linear summations may be seen as a step forward towards the representation of non-holonomic power series. By adapting the method of ansatz with undetermined coefficients, we build an algorithm that computes least-order quadratic differential equations with polynomial coefficients for a large class of non-holonomic functions. A differential equation resulting from this procedure is converted into a recurrence equation by applying the Cauchy product formula and rewriting powers into polynomials and derivatives into shifts. Finally, using enough initial values, we are able to give normal form representations that characterize several non-holonomic power series and prove non-trivial identities. We discuss this algorithm and its implementation for Maple 2022.
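As a toy illustration of the ansatz with undetermined coefficients (hedged: constant rather than polynomial coefficients, which already suffices for $\tan(z)$; the paper's algorithm and its Maple implementation are more general), one can match truncated Taylor coefficients in SymPy and read the quadratic differential equation off a nullspace computation:

```python
import sympy as sp

z = sp.symbols('z')
f = sp.tan(z)
N = 12                                   # Taylor truncation order

fs = sp.series(f, z, 0, N).removeO()     # truncated series of f
fps = sp.diff(fs, z)                     # truncated series of f'

# Ansatz terms of degree <= 2 in (y, y'); constant coefficients here.
terms = [sp.Integer(1), fs, fps, fs**2, fs*fps, fps**2]
cs = sp.symbols(f'c0:{len(terms)}')
expr = sp.expand(sum(c * t for c, t in zip(cs, terms)))

# Force the combination to vanish up to the truncation order, then extract the nullspace.
rows = [expr.coeff(z, k) for k in range(N)]
A, _ = sp.linear_eq_to_matrix(rows, cs)
print(A.nullspace())                     # ~ (-1, 0, 1, -1, 0, 0), i.e. y' = 1 + y^2
```

The recovered relation $y' = 1 + y^2$ is then the kind of input handled by the second stage described above, where the Cauchy product formula turns the quadratic differential equation into a recurrence for the series coefficients.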
We recall some of the history of the information-theoretic approach to deriving core results in probability theory and indicate parts of the recent resurgence of interest in this area with current progress along several interesting directions. Then we give a new information-theoretic proof of a finite version of de Finetti's classical representation theorem for finite-valued random variables. We derive an upper bound on the relative entropy between the distribution of the first $k$ in a sequence of $n$ exchangeable random variables, and an appropriate mixture over product distributions. The mixing measure is characterised as the law of the empirical measure of the original sequence, and de Finetti's result is recovered as a corollary. The proof is nicely motivated by the Gibbs conditioning principle in connection with statistical mechanics, and it follows along an appealing sequence of steps. The technical estimates required for these steps are obtained via the use of a collection of combinatorial tools known within information theory as `the method of types.'
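Schematically, and with illustrative notation not taken from the paper, the bound described above has the form
$$ D\Bigl( P_{X_1,\ldots,X_k} \,\Big\|\, \int Q^{\otimes k}\, \mu_n(dQ) \Bigr) \;\le\; \varepsilon(k,n), \qquad \mu_n = \mathrm{Law}\Bigl( \tfrac{1}{n} \sum_{i=1}^n \delta_{X_i} \Bigr), $$
where the explicit bound $\varepsilon(k,n)$ (not reproduced here) vanishes as $n \to \infty$ for fixed $k$, which is how de Finetti's representation theorem is recovered as a corollary.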
Reinforcement learning is one of the core components in designing an artificial intelligence system that emphasizes real-time response. Reinforcement learning drives the system to take actions within an arbitrary environment, with or without prior knowledge of the environment model. In this paper, we present a comprehensive study of reinforcement learning focusing on various dimensions, including challenges, the recent development of different state-of-the-art techniques, and future directions. The fundamental objective of this paper is to provide a framework for presenting the available reinforcement learning methods that is informative and simple to follow for new researchers and academics in this domain, taking the latest concerns into account. First, we illustrate the core techniques of reinforcement learning in an easily understandable and comparable way. Finally, we analyze and discuss the recent developments in reinforcement learning approaches. Our analysis points out that most of the models focus on tuning policy values rather than on other aspects of reasoning within a particular state.
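As a concrete example of tuning policy values in a particular state (a standard textbook technique used here for illustration, not a method proposed in this survey; names are illustrative), a single tabular Q-learning update looks as follows:

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: move Q[s, a] toward the bootstrapped target
    r + gamma * max_a' Q[s', a'] (model-free: no environment model is needed)."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# Toy usage: 4 states, 2 actions, one observed transition (s=0, a=1, r=1.0, s'=2)
Q = np.zeros((4, 2))
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q)
```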