The strength of gnomes lies in their coordinated action. Being small and subtle creatures themselves, forest gnomes can form large swarms that act as one giant creature. This unusual defense strategy requires a lot of skill and training, as directing a swarm is not an easy task! Initially, gnomes used leader-based control algorithms, but those proved vulnerable to abuse and failure. After thorough research and study, the gnomes developed their own leaderless consensus algorithm built on very simple rules: local broadcast (gossip) in an open network of a known radius $r$. One of the gnomes proposes a plan, which then spreads from gnome to gnome. If there is agreement, all gnomes act \emph{all at once}. If there are conflicting plans (an extreme rarity), they try again. The resulting swarm reaction time is exactly the swarm round-trip time $2rt$, where $t$ is the command relay time. The algorithm is non-Byzantine: all gnomes must be sane and sober.
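A minimal simulation sketch of this gossip scheme, under the stated assumptions (a connected network of known radius $r$, honest gnomes, synchronous relay steps of duration $t$); the node model, adjacency-list interface, and retry policy here are illustrative, not taken from the paper:

```python
# Illustrative sketch of the gnomes' leaderless gossip consensus.
# Assumptions (for this sketch only): a connected network of known
# radius r given as an adjacency list, honest nodes, and synchronous
# relay steps of duration t each.

def swarm_consensus(adj, r, proposals):
    """adj: {gnome: [neighbours]}; r: known network radius;
    proposals: {gnome: plan} for the proposing gnome(s).
    Returns the agreed plan, or None on conflict (try again)."""
    known = {g: set() for g in adj}
    for g, plan in proposals.items():
        known[g].add(plan)
    # Any two gnomes are at distance <= 2r in a network of radius r,
    # so 2r relay steps let every plan (and any conflict) reach
    # everyone -- hence the reaction time 2rt.
    for _ in range(2 * r):
        snapshot = {g: set(s) for g, s in known.items()}
        for g in adj:
            for nb in adj[g]:
                known[nb] |= snapshot[g]     # local broadcast (gossip)
    plans = set().union(*known.values()) if known else set()
    if len(plans) == 1 and all(s == plans for s in known.values()):
        return plans.pop()                   # unanimity: act all at once
    return None                              # conflict (rare): try again

# Example: a small chain of gnomes, radius 2, one proposer.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(swarm_consensus(adj, 2, {"a": "lift the mushroom"}))
```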
Mokka is a partially synchronous, strongly consistent BFT algorithm for reaching consensus on a value in open networks. The algorithm shares some approaches with RAFT, but its nature and design make Mokka a better solution for distributed ledger technology (DLT).
Voluntary human motion is the product of muscle activity that results from upstream motion planning in the motor cortical areas. We show that muscle activity can be artificially generated from motion features such as position, velocity, and acceleration. For this purpose, we develop an approach based on a recurrent neural network trained by supervised learning; additional neural network architectures are considered and evaluated. Performance is measured by a new score, the zero-line score, which adaptively rescales the loss function of the generated signal for all channels by comparing the overall range of muscle activity, and thus dynamically evaluates similarities between the generated and the measured signal. The model achieves remarkable precision on previously trained motions, and motions not seen during training are still generated with high accuracy. Moreover, models trained on multiple subjects are able to generalize across individuals. In addition, we distinguish between a general model trained on several subjects, a subject-specific model, and a pre-trained model that uses the general model as a basis and is subsequently adapted to a specific subject. The subject-specific generation of muscle activity can be further exploited to improve the rehabilitation of neuromuscular diseases with myoelectric prostheses and functional electrical stimulation.
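The zero-line score is described above only at a high level; the following is a minimal sketch of one plausible reading, in which each channel's error is rescaled by that channel's overall measured activity range. The function name, the exact normalization, and the final aggregation are assumptions for illustration, not the paper's definition:

```python
import numpy as np

# Hypothetical sketch of a range-rescaled similarity score -- one
# plausible reading of the "zero-line score" described above.  The
# normalization and naming are assumptions, not the paper's definition.

def zero_line_score(generated, measured):
    """generated, measured: arrays of shape (time, channels) of EMG-like
    signals.  Each channel's error is rescaled by the overall range of
    the measured activity on that channel, so quiet channels (activity
    near the zero line) and active channels are weighted comparably."""
    err = np.abs(generated - measured)                  # (time, channels)
    rng = measured.max(axis=0) - measured.min(axis=0)   # per-channel range
    rng = np.where(rng > 0, rng, 1.0)                   # guard flat channels
    return 1.0 - (err / rng).mean()                     # 1.0 = identical
```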
We consider the BVP $-y'' + qy = \lambda y$ with $y(0)=y(1)=0$. The inverse spectral problem asks one to recover $q$ from spectral information. In this paper, we present a very simple method to recover a potential by sampling one eigenfunction. The spectral asymptotics imply that for higher modes, more and more information is lost to imprecise measurements (i.e., relative errors \textit{increase}), so it is advantageous to use data from lower modes. Our method also allows us to recover ``any'' potential from \textit{one} boundary condition.
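The natural observation behind such one-eigenfunction methods (a sketch of the rearrangement, not necessarily the paper's exact procedure): the equation can be solved for the potential pointwise wherever the eigenfunction does not vanish,
\[
  q(x) \;=\; \lambda + \frac{y''(x)}{y(x)}, \qquad y(x) \neq 0,
\]
so sampled values of a single eigenpair $(\lambda, y)$, with $y''$ approximated by difference quotients, determine $q$ away from the nodes of $y$.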
We revisit the task of computing the edit distance in sublinear time. In the $(k,K)$-gap edit distance problem the task is to distinguish whether the edit distance of two strings is at most $k$ or at least $K$. It has been established by Goldenberg, Krauthgamer and Saha (FOCS '19), with improvements by Kociumaka and Saha (FOCS '20), that the $(k,k^2)$-gap problem can be solved in time $\widetilde O(n/k+\operatorname{poly}(k))$. One of the most natural questions in this line of research is whether the $(k,k^2)$-gap is the best possible for the running time $\widetilde O(n/k+\operatorname{poly}(k))$. In this work we answer this question by significantly improving the gap. Specifically, we show that in time $O(n/k+\operatorname{poly}(k))$ we can even solve the $(k,k^{1+o(1)})$-gap problem. This is the first algorithm that breaks the $(k,k^2)$-gap in this running time. Our algorithm is almost optimal in the following sense: In the low distance regime ($k\le n^{0.19}$) our running time becomes $O(n/k)$, which matches a known $n/k^{1+o(1)}$ lower bound for the $(k,k^{1+o(1)})$-gap problem up to lower order factors. Our result also reveals a surprising similarity of Hamming distance and edit distance in the low distance regime: For both, the $(k,k^{1+o(1)})$-gap problem has time complexity $n/k^{1\pm o(1)}$ for small $k$. In contrast to previous work, which employed a subsampled variant of the Landau-Vishkin algorithm, we instead build upon the algorithm of Andoni, Krauthgamer and Onak (FOCS '10). We first simplify their approach and then show how to effectively prune their computation tree in order to obtain a sublinear-time algorithm in the given time bound. Towards that, we use a variety of structural insights on the (local and global) patterns that can emerge during this process and design appropriate property testers to effectively detect these patterns.
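For context, the classical Landau-Vishkin diagonal technique that the earlier sublinear algorithms subsample decides exactly whether the edit distance is at most $k$. A minimal sketch of that exact baseline (not the sublinear algorithm of this paper; with naive character matching it runs in $O(nk)$ time, and suffix-tree LCP queries would bring it to $O(n+k^2)$):

```python
# Classical Landau-Vishkin k-bounded edit distance check -- the exact
# baseline behind the subsampled variants mentioned above, NOT the
# sublinear-time algorithm of this paper.

def within_k(a: str, b: str, k: int) -> bool:
    """Return True iff the edit distance of a and b is at most k."""
    n, m = len(a), len(b)
    if abs(n - m) > k:
        return False
    NEG = -(10 ** 9)
    L = {}  # L[d] = furthest row i on diagonal d = i - j reachable with e edits
    for e in range(k + 1):
        cur = {}
        for d in range(-e, e + 1):
            if e == 0:
                i = 0
            else:
                i = max(L.get(d, NEG) + 1,       # substitution
                        L.get(d - 1, NEG) + 1,   # deletion from a
                        L.get(d + 1, NEG))       # insertion from b
            i = min(i, n, m + d)                 # stay inside the DP grid
            while i < n and 0 <= i - d < m and a[i] == b[i - d]:
                i += 1                           # free slide over matches
            cur[d] = i
            if d == n - m and i >= n:            # both strings consumed
                return True
        L = cur
    return False

assert within_k("kitten", "sitting", 3) and not within_k("kitten", "sitting", 2)
```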
Orthology and paralogy relations are often inferred by methods based on gene similarity, which usually yield a graph depicting the relationships between gene pairs. Such relation graphs are known to frequently contain errors, as they cannot be explained via a gene tree that both contains the depicted orthologs/paralogs and is consistent with a species tree $S$. This idea of detecting errors through inconsistency with a species tree has mostly been studied in the presence of speciation and duplication events only. In this work, we ask: could the given set of relations be consistent if we allow lateral gene transfers in the evolutionary model? We formalize this question and provide a variety of algorithmic results regarding the underlying problems. Namely, we show that deciding if a relation graph $R$ is consistent with a given species network $N$ is NP-hard, and that it is W[1]-hard under the parameter "minimum number of transfers". However, we present an FPT algorithm based on the degree of the $DS$-tree associated with $R$. We also study analogous problems in the case where the transfer highways on a species tree are unknown.
The key-scheduling algorithm in the AES is the component responsible for deriving from the master key the sequence of round keys that are XORed into the partially encrypted state at each iteration. We consider here the group $\Gamma$ generated by the action of the AES-128 key-scheduling operation, and we prove that the smallest group containing $\Gamma$ and all the translations of the message space is primitive. As a consequence, we obtain that no proper, non-trivial subspace can be invariant under its action.
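For reference, the AES-128 key schedule itself is public and compact (FIPS 197); a minimal sketch, with the S-box computed from GF$(2^8)$ inversion plus the affine map rather than tabulated, purely for brevity:

```python
# Compact AES-128 key expansion (FIPS 197): 16-byte master key ->
# 11 round keys of 16 bytes each.

def _gf_mul(a, b):              # multiply in GF(2^8) mod x^8+x^4+x^3+x+1
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        a = ((a << 1) ^ (0x1B if a & 0x80 else 0)) & 0xFF
        b >>= 1
    return p

def _sbox(x):                   # inverse in GF(2^8), then affine transform
    inv = 0
    if x:
        inv = 1
        for _ in range(254):    # x^254 = x^{-1} in GF(2^8)
            inv = _gf_mul(inv, x)
    s = inv
    for sh in (1, 2, 3, 4):     # s = inv ^ rotl(inv,1..4) ^ 0x63
        s ^= ((inv << sh) | (inv >> (8 - sh))) & 0xFF
    return s ^ 0x63

def expand_key(key):            # key: 16 bytes
    rcon = (0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36)
    w = [list(key[4 * i:4 * i + 4]) for i in range(4)]
    for i in range(4, 44):
        t = list(w[i - 1])
        if i % 4 == 0:
            t = t[1:] + t[:1]               # RotWord
            t = [_sbox(b) for b in t]       # SubWord
            t[0] ^= rcon[i // 4 - 1]        # round constant
        w.append([x ^ y for x, y in zip(w[i - 4], t)])
    return [bytes(sum(w[4 * r:4 * r + 4], [])) for r in range(11)]

# Known-answer check against the FIPS 197 key-expansion example.
key = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")
assert expand_key(key)[10].hex() == "d014f9a8c9ee2589e13f0cc8b6630ca6"
```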
Efficient asynchronous Byzantine agreement (BA) protocols have mostly been studied assuming private setups, e.g., a pre-set-up threshold cryptosystem. Reducing their large communication cost in the absence of such setups remains a challenge. Recently, Abraham et al. (PODC'21) presented the first asynchronous validated BA (VBA) with expected $O(n^3)$ messages and $O(1)$ rounds, relying only on a public key infrastructure (PKI) setup, but the design still costs $O({\lambda}n^3 \log n)$ bits. Here $n$ is the number of parties, and $\lambda$ is a cryptographic security parameter. In this paper, we reduce the communication of private-setup-free asynchronous BA to expected $O(\lambda n^3)$ bits. At the core of our design, we give a systematic treatment of common randomness protocols in the asynchronous network, and proceed as follows:
- We give an efficient, reasonably fair common coin protocol in the asynchronous setting with only a PKI setup. It costs only $O(\lambda n^3)$ bits and $O(1)$ rounds, and ensures that with probability at least 1/3, all honest parties output a common bit that is distributed as if randomly flipped. This directly yields a more efficient private-setup-free asynchronous binary agreement (ABA) with expected $O(\lambda n^3)$ bits and $O(1)$ rounds.
- Then, we lift our common coin to attain perfect agreement by using a single ABA. This gives us a reasonably fair random leader election protocol with expected $O(\lambda n^3)$ communication and expected constant rounds. It can be plugged into all existing VBA protocols (Cachin et al., CRYPTO'01; Abraham et al., PODC'19; Lu et al., PODC'20) to remove the need for a private setup or distributed key generation (DKG). As such, the communication of private-setup-free VBA is reduced to expected $O(\lambda n^3)$ bits while preserving fast termination in expected $O(1)$ rounds.
We consider the problem of partitioning a line segment into two subsets, so that $n$ finite measures all have the same ratio of values between the two subsets. Letting $\alpha\in[0,1]$ denote the desired ratio, this generalises the PPA-complete consensus-halving problem, in which $\alpha=\frac{1}{2}$. It is known that for any $\alpha$, there exists a solution using $2n$ cuts of the segment. Here we show that if $\alpha$ is irrational, this upper bound is almost optimal. We also obtain bounds that are nearly exact for a large subset of rational values $\alpha$. On the computational side, we explore how the complexity of the problem depends on the number of cuts available. More specifically:
1. when the minimal number of cuts for each instance is required, the problem is NP-hard for any $\alpha$;
2. for a large subset of rational $\alpha = \frac{\ell}{k}$, when $\frac{k-1}{k} \cdot 2n$ cuts are available, the problem is in the Turing closure of PPA-$k$;
3. when $2n$ cuts are allowed, the problem belongs to PPA for any $\alpha$; furthermore, it belongs to PPA-$p$ for any prime $p$ if $2(p-1)\cdot \frac{\lceil p/2 \rceil}{\lfloor p/2 \rfloor} \cdot n$ cuts are available.
The impact of Artificial Intelligence does not depend only on fundamental research and technological developments, but to a large extent on how these systems are introduced into society and used in everyday situations. Even though AI is traditionally associated with rational decision making, understanding and shaping the societal impact of AI in all its facets requires a relational perspective. A rational approach to AI, in which computational algorithms drive decision making independently of human intervention, insights, and emotions, has been shown to result in bias and exclusion, laying bare societal vulnerabilities and insecurities. A relational approach, which focuses on the relational nature of things, is needed to deal with the ethical, legal, societal, cultural, and environmental implications of AI. A relational approach to AI recognises that objective and rational reasoning does not always result in the 'right' way to proceed, because what is 'right' depends on the dynamics of the situation in which the decision is taken, and that rather than solving ethical problems, the focus of the design and use of AI must be on asking the ethical question. In this position paper, I start with a general discussion of current conceptualisations of AI, followed by an overview of existing approaches to the governance and responsible development and use of AI. Then, I reflect on what should be the basis of a social paradigm for AI and how this should be embedded in relational, feminist, and non-Western philosophies, in particular the Ubuntu philosophy.
Escaping saddle points is a central research topic in nonconvex optimization. In this paper, we propose a simple gradient-based algorithm such that for a smooth function $f\colon\mathbb{R}^n\to\mathbb{R}$, it outputs an $\epsilon$-approximate second-order stationary point in $\tilde{O}(\log n/\epsilon^{1.75})$ iterations. Compared to the previous state-of-the-art algorithms by Jin et al. with $\tilde{O}((\log n)^{4}/\epsilon^{2})$ or $\tilde{O}((\log n)^{6}/\epsilon^{1.75})$ iterations, our algorithm is polynomially better in terms of $\log n$ and matches their complexities in terms of $1/\epsilon$. For the stochastic setting, our algorithm outputs an $\epsilon$-approximate second-order stationary point in $\tilde{O}((\log n)^{2}/\epsilon^{4})$ iterations. Technically, our main contribution is the idea of implementing a robust Hessian power method using only gradients, which can find negative curvature near saddle points and achieves the polynomial speedup in $\log n$ compared to perturbed gradient descent methods. Finally, we also perform numerical experiments that support our results.
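A minimal sketch of the gradient-only negative-curvature search underlying this idea, with Hessian-vector products approximated by gradient differences; the shift constant, step count, and finite-difference scheme here are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

# Illustrative sketch: find a negative-curvature direction using only
# gradients.  Hessian-vector products are approximated by
#     H v  ~  (grad(x + eps*v) - grad(x)) / eps,
# and power iteration on the shifted operator (c*I - H) amplifies the
# most negative eigendirection of H.  The shift c (an assumed bound on
# the largest Hessian eigenvalue) and iteration count are illustrative.

def negative_curvature_direction(grad, x, c=10.0, iters=200, eps=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    g0 = grad(x)
    v = rng.standard_normal(x.shape)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = (grad(x + eps * v) - g0) / eps   # ~ Hessian-vector product
        v = c * v - hv                        # power step on (c*I - H)
        v /= np.linalg.norm(v)
    hv = (grad(x + eps * v) - g0) / eps
    return v, float(v @ hv)                   # direction, Rayleigh quotient

# At the saddle of f(x, y) = x^2 - y^2 this recovers curvature ~ -2
# along the escape direction (0, +-1).
grad = lambda p: np.array([2.0 * p[0], -2.0 * p[1]])
v, curv = negative_curvature_direction(grad, np.zeros(2))
print(v, curv)
```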