Answering connectivity queries in real algebraic sets is a fundamental problem in effective real algebraic geometry that finds many applications, e.g. in robotics, where motion planning issues are topical. This computational problem is tackled through the computation of so-called roadmaps, which are real algebraic subsets of the set V under study, of dimension at most one, and which have a connected intersection with all semi-algebraically connected components of V. Algorithms for computing roadmaps rely on statements establishing connectivity properties of some well-chosen subsets of V, assuming that V is bounded. In this paper, we extend such connectivity statements by dropping the boundedness assumption on V. This exploits properties of so-called generalized polar varieties, which are critical loci of V for some well-chosen polynomial maps.
We present a two-dimensional conforming virtual element method for the fourth-order phase-field equation. Our numerical approach to the solution of this high-order phase-field (HOPF) equation relies on the design of an arbitrary-order accurate virtual element space with $C^1$ global regularity. Such regularity is guaranteed by taking the values of the virtual element functions and their full gradient at the mesh vertices as degrees of freedom. Attaining high-order accuracy also requires edge polynomial moments of the trace of the virtual element functions and of their normal derivative. In this work, we detail the scheme construction and prove its convergence by deriving error estimates in different norms. A set of representative test cases allows us to assess the behavior of the method.
In observational studies, unobserved confounding is a major barrier to isolating the average causal effect (ACE). In these scenarios, two main approaches are often used: confounder adjustment for causality (CAC) and instrumental variable analysis for causation (IVAC). Nevertheless, both are subject to untestable assumptions, and it may therefore be unclear under which assumption violation scenarios one method is superior to the other in terms of mitigating inconsistency for the ACE. Although general guidelines exist, direct theoretical comparisons of the trade-offs between the CAC and IVAC assumptions are limited. Using ordinary least squares (OLS) for CAC and two-stage least squares (2SLS) for IVAC, we analytically compare the relative inconsistency for the ACE of each approach under a variety of assumption violation scenarios and discuss rules of thumb for practice. Additionally, a sensitivity framework is proposed to guide analysts in determining which approach may result in less inconsistency for estimating the ACE with a given dataset. We demonstrate our findings both through simulation and through an application examining whether maternal stress during pregnancy affects a neonate's birthweight. The implications of our findings for causal inference practice are discussed, providing guidance for analysts to judge whether CAC or IVAC may be more appropriate for a given situation.
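As a purely illustrative sketch (not the paper's analytical comparison or its sensitivity framework), the following simulation contrasts the inconsistency of OLS and 2SLS when an unobserved confounder is present but the instrument is valid; all coefficients and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process with an unobserved confounder U,
# an instrument Z, exposure X, and outcome Y; the true ACE of X on Y is 1.0.
beta = 1.0
U = rng.normal(size=n)                      # unobserved confounder
Z = rng.normal(size=n)                      # instrument (relevant and exogenous here)
X = 0.8 * Z + 0.6 * U + rng.normal(size=n)
Y = beta * X + 0.7 * U + rng.normal(size=n)

# Naive OLS of Y on X: inconsistent because U is omitted.
ols = np.cov(X, Y)[0, 1] / np.var(X)

# 2SLS / IV estimator cov(Z, Y) / cov(Z, X): consistent when Z is a valid instrument.
tsls = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]

print(f"true ACE = {beta:.2f}, OLS = {ols:.3f}, 2SLS = {tsls:.3f}")
```

Under violations of the instrument assumptions (e.g. a direct effect of Z on Y), the comparison reverses in part; the paper's analytical results characterize exactly such trade-offs.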
We are interested in creating statistical methods that provide informative summaries of random fields through the geometry of their excursion sets. To this end, we introduce an estimator for the length of the perimeter of excursion sets of random fields on $\mathbb{R}^2$ observed over regular square tilings. The proposed estimator acts on the empirically accessible binary digital images of the excursion regions and computes the length of a piecewise linear approximation of the excursion boundary. The estimator is shown to be consistent as the pixel size decreases, without the need for any normalization constant, and without imposing Gaussianity or isotropy on the underlying random field. In this general framework, even when the domain grows to cover $\mathbb{R}^2$, the estimation error is shown to be of smaller order than the side length of the domain. For affine, strongly mixing random fields, this translates into a multivariate Central Limit Theorem for our estimator when multiple levels are considered simultaneously. Finally, we conduct several numerical studies to investigate the statistical properties of the proposed estimator in the finite-sample setting.
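To illustrate the general idea of measuring a piecewise linear approximation of an excursion boundary from gridded data (this is not the paper's estimator and makes no claim about its assumptions), the sketch below extracts a marching-squares contour of a smoothed noise field and sums the segment lengths; the field, pixel size, and level are all hypothetical, and in a truly binary-image setting one would apply the same length computation to the 0/1 image at level 0.5.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

# A hypothetical stationary random field on a regular square tiling:
# smoothed white noise on an n x n grid with pixel side length h.
rng = np.random.default_rng(1)
n, h, level = 512, 1.0 / 512, 1.0
field = gaussian_filter(rng.normal(size=(n, n)), sigma=8)
field /= field.std()

# Piecewise linear approximation of the excursion boundary {field = level}
# via marching squares; the perimeter estimate is the total length of the
# resulting polylines, scaled by the pixel size.
contours = measure.find_contours(field, level)
perimeter = h * sum(
    np.sum(np.linalg.norm(np.diff(c, axis=0), axis=1)) for c in contours
)
print(f"estimated perimeter of the excursion set at level {level}: {perimeter:.3f}")
```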
This paper focuses on the algebraic theory underlying the study of the complexity of, and the algorithms for, the Constraint Satisfaction Problem (CSP). We unify, simplify, and extend parts of the three approaches that have been developed to study the CSP over finite templates: absorption theory, which was used to characterize CSPs solvable by local consistency methods (JACM'14), and Bulatov's and Zhuk's theories, which were used for two independent proofs of the CSP Dichotomy Theorem (FOCS'17, JACM'20). As the first contribution, we present an elementary theorem about primitive positive definability and use it to obtain the starting points of Bulatov's and Zhuk's proofs as corollaries. As the second contribution, we propose and initiate a systematic study of minimal Taylor algebras. This class of algebras is broad enough that it suffices to verify the CSP Dichotomy Theorem on this class only, yet it is unusually well behaved. In particular, many concepts from the three approaches coincide in this class, which is in striking contrast with the general setting. We believe that the theory initiated in this paper will eventually lead to a simpler and more natural proof of the Dichotomy Theorem that employs a simpler and more efficient algorithm, and will help in attacking complexity questions in other CSP-related problems.
The use of Dynamic Epistemic Logic (DEL) in multi-agent planning has led to a widely adopted action formalism that can handle nondeterminism, partial observability, and arbitrary knowledge nesting. As such expressive power comes at the cost of undecidability, several decidable fragments have been isolated, mainly based on syntactic restrictions of the action formalism. In this paper, we pursue a novel semantic approach to achieve decidability. Namely, rather than imposing syntactical constraints, the semantic approach focuses on the axioms of the logic for epistemic planning. Specifically, we augment the logic of knowledge S5$_n$ with an interaction axiom called (knowledge) commutativity, which controls the ability of agents to reason unboundedly about the knowledge of other agents. We then provide a threefold contribution. First, we show that the resulting epistemic planning problem is decidable. In doing so, we prove that our framework admits a finitary non-fixpoint characterization of common knowledge, which is of independent interest. Second, we study different generalizations of the commutativity axiom, with the goal of obtaining decidability for more expressive fragments of DEL. Finally, we show that two well-known epistemic planning systems based on action templates, when interpreted under the setting of knowledge, conform to the commutativity axiom, hence proving their decidability.
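The abstract does not spell the axiom out; purely as an illustration, an interaction axiom expressing commutativity of the knowledge modalities of two agents $a$ and $b$ would typically have a shape such as $K_a K_b \varphi \rightarrow K_b K_a \varphi$ (read: if $a$ knows that $b$ knows $\varphi$, then $b$ knows that $a$ knows $\varphi$); the precise axiom adopted in the paper may differ.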
The processing of information is an indispensable property of living systems, realized by networks of active processes of enormous complexity. These networks have inspired many variants of modern machine learning, one of them being reservoir computing, in which stimulating a network of nodes with fading memory enables computations and complex predictions. Reservoirs are implemented on computer hardware, but also on unconventional physical substrates such as mechanical oscillators, spins, or bacteria, an approach often summarized as physical reservoir computing. Here we demonstrate physical reservoir computing with a synthetic active microparticle system that self-organizes from an active and a passive component into inherently noisy nonlinear dynamical units. The self-organization and dynamical response of each unit result from the delayed propulsion of the microswimmer towards a passive target. A reservoir of such units with a self-coupling via the delayed response can perform predictive tasks despite the strong noise resulting from the Brownian motion of the microswimmers. To achieve efficient noise suppression, we introduce a special architecture that uses historical reservoir states for the output. Our results pave the way for the study of information processing in synthetic self-organized active particle systems.
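To make the reservoir-computing ingredients concrete (fading-memory dynamics plus a trained readout that here also sees historical reservoir states), the following is a generic echo-state-style sketch on synthetic data; it does not model the delay-coupled microswimmer units of the paper, and all sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy one-step-ahead prediction task on a noisy sine (a stand-in for the
# predictive tasks mentioned in the abstract).
T, washout = 3000, 200
u = np.sin(0.1 * np.arange(T + 1)) + 0.05 * rng.normal(size=T + 1)

# Random recurrent reservoir (echo-state style, not the microswimmer dynamics),
# scaled to a spectral radius below 1 so that the memory fades.
N = 100
W_in = 0.5 * rng.normal(size=N)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Readout trained on a history of the last k reservoir states (the
# "historical states" idea, used here for noise suppression), via ridge regression.
k = 5
H = np.hstack([states[washout - i : T - i] for i in range(k)])
y = u[washout + 1 : T + 1]                         # one-step-ahead targets
W_out = np.linalg.solve(H.T @ H + 1e-3 * np.eye(H.shape[1]), H.T @ y)
pred = H @ W_out
print("NRMSE:", np.sqrt(np.mean((pred - y) ** 2)) / np.std(y))
```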
Translation is one of the most fundamental processes in the biological cell. Because of the central role that translation plays across all domains of life, the enzyme that carries out this process, the ribosome, is required to process information with high accuracy, and experimentally this accuracy often approaches values near unity. In this paper, we model the ribosome as an information channel and demonstrate mathematically that this biological machine has information-processing capabilities that have not been recognized previously. In particular, we calculate bounds on the ribosome's theoretical Shannon capacity and numerically approximate this capacity. Finally, by incorporating estimates of the ribosome's operation time, we show that the ribosome operates at speeds safely below its capacity, allowing it, in principle, to process information with an arbitrarily small degree of error. Our results show that the high accuracy achieved by the ribosome is in line with what purely information-theoretic considerations allow.
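As a sketch of what "numerically approximating the capacity" can look like for a discrete memoryless channel, one can run the Blahut-Arimoto algorithm; the 4-symbol misreading channel below is hypothetical and is not the ribosome channel model constructed in the paper.

```python
import numpy as np

def blahut_arimoto(P, iters=500):
    """Approximate the capacity (in bits) of a discrete memoryless channel
    with transition matrix P[x, y] = p(y | x) via the Blahut-Arimoto algorithm."""
    m = P.shape[0]
    r = np.full(m, 1.0 / m)                               # input distribution
    for _ in range(iters):
        joint = r[:, None] * P                            # p(x, y)
        post = joint / joint.sum(axis=0, keepdims=True)   # p(x | y)
        r = np.exp(np.sum(P * np.log(post + 1e-300), axis=1))
        r /= r.sum()
    # Mutual information I(X; Y) under the optimized input distribution.
    joint = r[:, None] * P
    p_y = joint.sum(axis=0, keepdims=True)
    return np.sum(joint * np.log2((P + 1e-300) / (p_y + 1e-300)))

# A hypothetical 4-symbol channel with misreading probability eps
# (purely illustrative, not the channel model of the paper).
eps = 0.01
P = np.full((4, 4), eps / 3)
np.fill_diagonal(P, 1 - eps)
print(f"approximate capacity: {blahut_arimoto(P):.4f} bits per channel use")
```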
Coded distributed computing, proposed by Li et al., offers significant potential for reducing the communication load in MapReduce computing systems. In the setting of \emph{cascaded} coded distributed computing consisting of $K$ nodes, $N$ input files, and $Q$ output functions, the objective is to compute each output function through $s\geq 1$ nodes with a computation load $r\geq 1$, enabling the application of coding techniques during the Shuffle phase to achieve the minimum communication load. However, a major limitation of most existing coded distributed computing schemes is that they require splitting the original data into an exponentially growing number of input files, with $N/\binom{K}{r} \in\mathbb{N}$, and an exponentially large number of output functions, with $Q/\binom{K}{s} \in\mathbb{N}$, which imposes stringent requirements on implementation and results in significant coding complexity when $K$ is large. In this paper, we focus on the cascaded case of $K/s\in\mathbb{N}$, deliberately designing the input file storage and output function assignment strategies based on a grouping method such that a low-complexity two-round Shuffle phase is available. The main advantages of our proposed scheme are: 1) the communication load is quite close to, or even better than, that of the optimal state-of-the-art scheme proposed by Li et al.; 2) our scheme requires a significantly smaller number of input files and output functions; 3) all operations are implemented over the minimum binary field $\mathbb{F}_2$.
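To illustrate the coded-multicast mechanism that such schemes build on, the sketch below works through the classic small example in the spirit of Li et al. with $K=3$ nodes, computation load $r=2$, and $s=1$; it is not the grouping-based cascaded scheme proposed in this paper.

```python
import numpy as np

# Node k stores the two files other than file k and reduces function k, so
# after the Map phase it misses exactly one intermediate value, v[k, k]
# (function k evaluated on file k). v[q, n] is an 8-bit vector over F_2.
rng = np.random.default_rng(3)
v = rng.integers(0, 2, size=(3, 3, 8), dtype=np.uint8)

# Each missing value v[q, q] is known to the two other nodes; split it into
# two halves, one to be delivered by each of them.
half = {}
for q in range(3):
    knowers = sorted(set(range(3)) - {q})
    half[(q, knowers[0])] = v[q, q, :4]
    half[(q, knowers[1])] = v[q, q, 4:]

# Shuffle phase: every node broadcasts one XOR (over F_2) of the two halves it
# is responsible for; each broadcast is useful to both other nodes, so three
# half-size broadcasts replace three full-size unicast transmissions.
broadcasts = {k: half[(q1, k)] ^ half[(q2, k)]
              for k, (q1, q2) in [(0, (1, 2)), (1, (0, 2)), (2, (0, 1))]}

# Decoding at node 0, which needs v[0, 0]: it locally knows v[1, 1] and v[2, 2]
# (it stores files 1 and 2), so it can recompute and cancel those halves.
h1 = broadcasts[1] ^ half[(2, 1)]     # node 1's broadcast minus the v[2, 2] half
h2 = broadcasts[2] ^ half[(1, 2)]     # node 2's broadcast minus the v[1, 1] half
assert np.array_equal(np.concatenate([h1, h2]), v[0, 0])
```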
Money transfer is an abstraction that realizes the core of cryptocurrencies. It has been shown that, contrary to common belief, money transfer in the presence of Byzantine faults can be implemented in asynchronous networks and does not require consensus. Nonetheless, existing implementations of money transfer still require a quadratic message complexity per payment, making it hard to scale. In common blockchains, such as Bitcoin and Ethereum, this cost is mitigated by payment channels implemented as a second layer on top of the blockchain, which allow two users who share a channel to make many off-chain payments. Such channels require on-chain transactions only for channel opening and closing, while the intermediate payments are done off-chain with constant message complexity. But the payment channels in use today require synchrony and are therefore inadequate for asynchronous money transfer systems. In this paper, we provide a series of possibility and impossibility results for payment channels in asynchronous money transfer systems. We first prove a quadratic lower bound on the message complexity of on-chain transfers. Then, we explore two types of payment channels, unidirectional and bidirectional. We define them as shared memory abstractions and prove that in certain cases they can be implemented as a second layer on top of an asynchronous money transfer system, whereas in other cases this is impossible.
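For intuition, a unidirectional channel can be thought of as an object with an on-chain open, repeated constant-size off-chain payments, and a single on-chain close; the following is a hypothetical sketch of that life cycle, not the shared-memory specification or the asynchronous implementation studied in the paper, and the Ledger class is only a stand-in for an underlying money transfer system.

```python
class Ledger:
    """Minimal stand-in for the underlying money transfer system's ledger."""
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, src, dst, amount):
        assert self.balances.get(src, 0) >= amount
        self.balances[src] = self.balances.get(src, 0) - amount
        self.balances[dst] = self.balances.get(dst, 0) + amount


class UnidirectionalChannel:
    """Hypothetical payer -> payee channel: open on-chain, pay off-chain, close on-chain."""
    def __init__(self, ledger, payer, payee, deposit):
        # Opening is an on-chain transfer escrowing the deposit.
        ledger.transfer(payer, "escrow", deposit)
        self.payer, self.payee = payer, payee
        self.capacity, self.paid, self.closed = deposit, 0, False

    def pay(self, amount):
        # Off-chain payment: a constant-size update of the running total,
        # which can only grow and never exceeds the escrowed deposit.
        assert not self.closed and 0 < amount <= self.capacity - self.paid
        self.paid += amount

    def close(self, ledger):
        # Closing settles the final balance on-chain and refunds the rest.
        assert not self.closed
        self.closed = True
        ledger.transfer("escrow", self.payee, self.paid)
        ledger.transfer("escrow", self.payer, self.capacity - self.paid)


ledger = Ledger({"alice": 10, "bob": 0})
ch = UnidirectionalChannel(ledger, "alice", "bob", deposit=5)
ch.pay(2); ch.pay(1)            # off-chain, no ledger interaction
ch.close(ledger)                # on-chain settlement
print(ledger.balances)          # {'alice': 7, 'bob': 3, 'escrow': 0}
```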
Modelling in biology must adapt to increasingly complex and massive data. The efficiency of the inference algorithms used to estimate model parameters therefore comes into question. Many of these algorithms are based on stochastic optimization processes that waste a significant part of the computation time on rejection sampling. We introduce the Fixed Landscape Inference MethOd (flimo), a new likelihood-free inference method for continuous state-space stochastic models. It applies deterministic gradient-based optimization algorithms to obtain a point estimate of the parameters, minimizing the difference between the data and some simulations according to some prescribed summary statistics. In this sense, it is analogous to Approximate Bayesian Computation (ABC). Like ABC, it can also provide an approximation of the distribution of the parameters. Three applications are proposed: a standard theoretical example, namely the inference of the parameters of g-and-k distributions; a population genetics problem, not as simple as it seems, namely the inference of a selective value from time series in a Wright-Fisher model; and simulations from a Ricker model, representing chaotic population dynamics. In the first two applications, the results show a drastic reduction of the computation time needed for the inference phase compared to other methods, with equivalent accuracy. Even when likelihood-based methods are applicable, the simplicity and efficiency of flimo make it a compelling alternative. Implementations in Julia and in R are available at //metabarcoding.org/flimo. To run flimo, the user must simply be able to simulate data according to the chosen model.
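As a minimal sketch of the fixed-landscape idea (not the flimo package API), one can freeze the simulator's random draws so that the summary-statistic discrepancy becomes a deterministic function of the parameters and then hand it to a standard gradient-based optimizer; the toy Gaussian model and all parameter values below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Observed data from a hypothetical Normal(mu, sigma) model.
rng = np.random.default_rng(4)
observed = rng.normal(3.0, 2.0, size=500)
obs_stats = np.array([observed.mean(), observed.std()])

# The simulator's randomness is drawn once and then frozen, so the map from
# parameters to summary statistics is deterministic ("fixed landscape").
nsim = 500
fixed_draws = rng.standard_normal(nsim)

def simulate(theta):
    mu, sigma = theta
    return mu + sigma * fixed_draws              # deterministic in theta

def objective(theta):
    sim = simulate(theta)
    sim_stats = np.array([sim.mean(), sim.std()])
    return np.sum((sim_stats - obs_stats) ** 2)  # summary-statistic distance

# Deterministic gradient-based optimization of the fixed landscape.
res = minimize(objective, x0=[0.0, 1.0], method="L-BFGS-B",
               bounds=[(None, None), (1e-6, None)])
print("estimated (mu, sigma):", res.x)
```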