In this article we design a finite volume semi-implicit IMEX scheme for the incompressible Navier-Stokes equations on evolving Chimera meshes. We employ a time discretization technique that separates explicit and implicit terms, which encompass the slow and fast scales, respectively. Using a finite volume approach for both explicit and implicit terms allows the displacement velocity of the Chimera mesh to be encoded in the nonlinear flux via integration over moving cells. To attain second-order accuracy in time, we employ semi-implicit IMEX Runge-Kutta schemes. These novel schemes are combined with a fractional-step approach, so that the governing equations are ultimately solved with a projection method that enforces the divergence-free constraint on the velocity field. The implicit discretization of the viscous terms ensures that the CFL-type stability condition on the maximum admissible time step depends only on the fluid velocity relative to the moving frame, and not on the viscous eigenvalues. Communication between different grid blocks is enabled through a compact exchange of information from the fringe cells of one mesh block to the field cells of the other block. Exploiting the continuity of the solution and the definition of a minimal compact stencil, the numerical solution of any system of differential equations can be transferred between blocks by continuous data extrapolation. The free-stream preservation property, i.e. compliance with the Geometric Conservation Law (GCL), is satisfied. The accuracy and capabilities of the new numerical schemes are demonstrated through an extensive range of test cases, including relevant benchmarks in the field of incompressible fluids.
In this paper we study discrete-time quantum walks on Cayley graphs of dihedral groups, which are graphs with both directed and undirected edges. We consider walks whose coins are one-parameter continuous deformations of the Grover matrix and can be written as linear combinations of certain permutation matrices. We show that the walks are periodic only for coins that are a permutation matrix or the negative of a permutation matrix. Finally, we investigate the localization property of the walks through numerical simulations and observe that they localize for a wide range of coins and for different sizes of the graphs.
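As an illustration of a coin that is a linear combination of permutation matrices, consider the undeformed Grover matrix itself (a generic degree-3 example, not the specific one-parameter family studied in the paper): it decomposes over the cyclic permutations and is a unitary involution.

```python
import numpy as np

n = 3
I = np.eye(n)
P = np.roll(I, 1, axis=0)                  # cyclic permutation matrix
G = (2.0 / n) * np.ones((n, n)) - I        # Grover coin: (2/n) J - I

# Since J = I + P + P^2, the Grover coin is a linear combination
# of permutation matrices:
G_from_perms = -1.0 / 3.0 * I + 2.0 / 3.0 * P + 2.0 / 3.0 * (P @ P)
assert np.allclose(G, G_from_perms)

# It is real, unitary, and an involution (G^2 = I).
assert np.allclose(G @ G.T, I)
assert np.allclose(G @ G, I)
```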
In this work we propose and analyze an extension of the approximate component mode synthesis (ACMS) method to the heterogeneous Helmholtz equation. The ACMS method was originally introduced by Hetmaniuk and Lehoucq as a multiscale method for elliptic partial differential equations. It uses a domain decomposition to split the variational problem into two independent parts: local Helmholtz problems and a global interface problem. While the former are naturally local and decoupled, and hence easily solved in parallel, the latter requires the construction of suitable local basis functions based on local eigenmodes and their extensions. We carry out a full error analysis of this approach, focusing on the case where the domain decomposition is kept fixed while the number of eigenfunctions is increased. The theoretical results are supported by numerical experiments verifying algebraic convergence of the method. In certain practically relevant cases, even exponential convergence can be achieved for the local Helmholtz problems without oversampling.
We address the classical inverse problem of recovering the position and shape of obstacles immersed in a planar Stokes flow from boundary measurements. We prove that this problem can be transformed into a shape-from-moments problem, to which ad hoc reconstruction methods can be applied. The effectiveness of this approach is confirmed by numerical tests that show significant improvements over the results available in the literature to date.
The single-particle cryo-EM field faces the persistent challenge of preferred orientation, for which no general computational solution exists. We introduce cryoPROS, an AI-based approach designed to address this issue. By generating auxiliary particles with a conditional deep generative model, cryoPROS addresses the intrinsic bias in orientation estimation for the observed particles. We successfully applied cryoPROS to the cryo-EM single-particle analysis of the hemagglutinin trimer, restoring a near-atomic-resolution structure from non-tilted data. Moreover, an enhanced version, cryoPROS-MP, significantly improves the resolution of the membrane protein NaX using non-tilted data that contains micelle effects. Compared to classical approaches, cryoPROS requires no special experimental or image-acquisition techniques, providing a purely computational yet effective solution to the preferred-orientation problem. Finally, we conduct extensive experiments establishing the low risk of model bias and the high robustness of cryoPROS.
The emergence of Tiny Machine Learning (TinyML) has revolutionized the field of Artificial Intelligence by promoting the joint design of resource-constrained IoT hardware devices and their learning-based software architectures. TinyML plays an essential role within the fourth and fifth industrial revolutions in helping societies, economies, and individuals adopt effective AI-infused computing technologies (e.g., smart cities, automotive, and medical robotics). Given its multidisciplinary nature, the field of TinyML has been approached from many different angles; this comprehensive survey aims to provide an up-to-date overview focused on the learning algorithms within TinyML-based solutions. The survey follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodological flow, enabling a systematic and complete literature review. First, we examine the three workflows for implementing a TinyML-based system: ML-oriented, HW-oriented, and co-design. Second, we propose a taxonomy covering the learning panorama under the TinyML lens, examining in detail the different families of model optimization and design, as well as state-of-the-art learning techniques. Third, we present the distinct features of the hardware devices and software tools that represent the current state of the art for TinyML intelligent edge applications. Finally, we discuss challenges and future directions.
In this study, we propose a digital over-the-air computation (OAC) scheme for achieving continuous-valued (analog) aggregation in federated edge learning (FEEL). We show that the average of a set of real-valued parameters can be approximated by the average of the corresponding numerals, where the numerals are obtained from a balanced number system. Exploiting this key property, the proposed scheme encodes the local stochastic gradients into sets of numerals and then determines the positions of the activated orthogonal frequency division multiplexing (OFDM) subcarriers from the values of those numerals. To eliminate the need for precise sample-level time synchronization, channel estimation overhead, and channel inversion, the proposed scheme uses a non-coherent receiver at the edge server (ES) and does not rely on pre-equalization at the edge devices (EDs). We theoretically analyze the mean squared error (MSE) performance of the proposed scheme and its convergence rate for a non-convex loss function. To improve the test accuracy of FEEL with the proposed scheme, we introduce the concept of adaptive absolute maximum (AAM). Our numerical results show that, when the proposed scheme is used with AAM, the test accuracy can reach up to 98% under a heterogeneous data distribution.
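The numeral-averaging idea can be sketched as follows. This toy example (our own illustrative encoding, not necessarily the exact numeral system used in the scheme) writes each value in balanced base 3; because decoding is linear in the digits, averaging the digit vectors across devices approximates averaging the values themselves:

```python
import numpy as np

def encode(x, beta=3, m=8):
    """Balanced base-beta encoding: x in [-1/2, 1/2] -> m digits in
    {-(beta-1)/2, ..., (beta-1)/2} with x ~= sum_i d_i * beta**(-i)."""
    dmax = (beta - 1) // 2
    digits, r = [], float(x)
    for _ in range(m):
        r *= beta
        d = int(np.clip(round(r), -dmax, dmax))
        digits.append(d)
        r -= d
    return np.array(digits)

def decode(digits, beta=3):
    return sum(d * beta ** -(i + 1) for i, d in enumerate(digits))

# Key property exploited for aggregation: decoding is linear, so the
# average of the devices' digit vectors decodes to (approximately) the
# average of the devices' values.
vals = [0.31, -0.12, 0.05, 0.44]
mean_digits = np.mean([encode(v) for v in vals], axis=0)
assert abs(decode(mean_digits) - np.mean(vals)) < 1e-3
```

The residual error comes only from truncating each expansion to m digits, which shrinks geometrically in m.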
We present a short proof of a celebrated result of G\'acs and K\"orner giving a necessary and sufficient condition on the joint distribution of two discrete random variables $X$ and $Y$ for their mutual information to match the extractable (in the limit) common information. Our proof is based on the observation that the mere existence of certain random variables jointly distributed with $X$ and $Y$ imposes restrictions on all random variables jointly distributed with $X$ and $Y$.
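The extractable common information admits a concrete description via the connected components of the bipartite support graph of the joint distribution. The following sketch (an illustration of this classical characterization, not of the proof above; all names are our own) computes it and checks equality with the mutual information on a block-diagonal example, where $X$ and $Y$ are independent within each block:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def gacs_korner(P):
    """Entropy of the 'common part' of (X, Y): connected components of the
    bipartite graph whose edges are the support of the joint matrix P."""
    nx, ny = P.shape
    parent = list(range(nx + ny))          # union-find over x- and y-nodes
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for x in range(nx):
        for y in range(ny):
            if P[x, y] > 0:
                parent[find(x)] = find(nx + y)
    mass = {}
    for x in range(nx):
        for y in range(ny):
            if P[x, y] > 0:
                r = find(x)
                mass[r] = mass.get(r, 0.0) + P[x, y]
    return entropy(np.array(list(mass.values())))

# Two uniform 2x2 blocks: the component label carries 1 bit, and since X
# and Y are conditionally independent given it, I(X; Y) matches exactly.
P = np.zeros((4, 4))
P[:2, :2] = 0.125
P[2:, 2:] = 0.125
mi = entropy(P.sum(1)) + entropy(P.sum(0)) - entropy(P.ravel())
assert abs(mi - gacs_korner(P)) < 1e-9
```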
In this paper we revisit the classical Cauchy problem for Laplace's equation, as well as two related problems, in the light of regularisation of this highly ill-conditioned problem by replacing integer derivatives with fractional ones. We do so in the spirit of quasi-reversibility, replacing a classically severely ill-posed PDE problem by a nearby well-posed or only mildly ill-posed one. To make use of the known stabilising effect of one-dimensional fractional derivatives of Abel type, we work in a particular rectangular (in higher space dimensions, cylindrical) geometry. We start with the plain Cauchy problem of reconstructing the values of a harmonic function inside this domain from its Dirichlet and Neumann traces on part of the boundary (the cylinder base) and explore three options for doing so with fractional operators. The two related problems are the recovery of a free boundary, first alone and then together with the simultaneous recovery of the impedance function in the boundary condition. Our main technique here is Newton's method. The paper contains numerical reconstructions and convergence results for the devised methods.
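For readers unfamiliar with fractional derivatives of Abel type, the following sketch (a standard Grünwald-Letnikov discretization with lower terminal 0, not the specific operators used in the regularisation above) approximates the order-$\alpha$ derivative numerically:

```python
import numpy as np

def gl_fractional_derivative(f, x, alpha, h=1e-3):
    """Grunwald-Letnikov approximation of the order-alpha fractional
    derivative of f at x (lower terminal 0), an Abel-type operator."""
    n = int(x / h)
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):           # g_k = (-1)^k * binom(alpha, k)
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    xs = x - h * np.arange(n + 1)
    return h ** -alpha * np.sum(g * f(xs))

# Sanity checks: alpha = 1 recovers the ordinary derivative, and for
# f(x) = x the half derivative at x = 1 is 2 / sqrt(pi).
assert abs(gl_fractional_derivative(lambda t: t, 1.0, 1.0) - 1.0) < 1e-9
assert abs(gl_fractional_derivative(lambda t: t, 1.0, 0.5) - 2 / np.sqrt(np.pi)) < 1e-2
```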
NMT systems built on Pre-trained Multilingual Sequence-to-Sequence (PMSS) models flounder when sufficient amounts of parallel data are not available for fine-tuning. This holds in particular for languages that are missing or under-represented in these models, and the problem is aggravated when the data comes from different domains. In this paper, we show that intermediate-task fine-tuning (ITFT) of PMSS models is extremely beneficial for domain-specific NMT, especially when target-domain data is limited or unavailable and the considered languages are missing or under-represented in the PMSS model. We quantify the variation in domain-specific results using a domain-divergence test and show that ITFT can mitigate the impact of domain divergence to some extent.
Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks, including text summarization. However, pre-training objectives tailored for abstractive text summarization have not been explored, and there is a lack of systematic evaluation across diverse domains. In this work, we propose PEGASUS, which pre-trains large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised objective: important sentences are removed/masked from an input document and are generated together as one output sequence from the remaining sentences, similar to an extractive summary. We evaluated our best PEGASUS model on 12 downstream summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills. Experiments demonstrate that it achieves state-of-the-art performance on all 12 downstream datasets as measured by ROUGE scores. Our model also shows surprising performance on low-resource summarization, surpassing previous state-of-the-art results on 6 datasets with only 1000 examples. Finally, we validated our results using human evaluation and show that our model's summaries achieve human performance on multiple datasets.
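The gap-sentence objective can be sketched in a few lines. The following toy example is our own crude unigram-overlap proxy for the ROUGE-based sentence scoring (the function, mask token, and scoring are all illustrative assumptions, not PEGASUS's actual implementation): it masks the highest-scoring sentence and uses it as the pseudo-summary target.

```python
import re
from collections import Counter

MASK = "<mask_1>"

def gap_sentence_split(document, k=1):
    """Toy gap-sentence generation: mask the k sentences with the highest
    unigram overlap (a crude ROUGE-1 proxy) with the rest of the document;
    the masked sentences become the pseudo-summary target."""
    sents = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]

    def score(i):
        s = Counter(re.findall(r"\w+", sents[i].lower()))
        rest = Counter(w for j, t in enumerate(sents) if j != i
                       for w in re.findall(r"\w+", t.lower()))
        overlap = sum(min(c, rest[w]) for w, c in s.items())
        return overlap / max(1, sum(s.values()))

    top = sorted(range(len(sents)), key=score, reverse=True)[:k]
    inp = " ".join(MASK if i in top else s for i, s in enumerate(sents))
    tgt = " ".join(sents[i] for i in sorted(top))
    return inp, tgt
```

A model is then trained to generate `tgt` from `inp`, which resembles producing an extractive summary from the remaining sentences.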