We present and implement an algorithm for computing the invariant circle and the corresponding stable manifolds for 2-dimensional maps. The algorithm is based on the parameterization method and is backed by an a-posteriori theorem established in [YdlL21]. The algorithm works irrespective of whether the internal dynamics on the invariant circle is a rotation or is phase-locked. The algorithm converges quadratically, and the operation count and memory requirements for each step of the iteration are linear in the size of the discretization. We also report on the results of running the implementation on some standard models to uncover new phenomena. In particular, we explore a bundle-merging scenario in which the invariant circle loses hyperbolicity because the angle between the stable directions and the tangent becomes zero even though the rates of contraction remain separated. We also discuss and implement a generalization of the algorithm to 3 dimensions, apply it to the 3-dimensional Fattened Arnold Family (3D-FAF) map with non-resonant eigenvalues, and present numerical results.
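To make the role of the parameterization method concrete, the following minimal sketch computes the invariance residual $E(\theta) = F(K(\theta)) - K(\theta + \omega)$ that the Newton iteration drives to zero, using an FFT to evaluate the shifted parameterization. The map `F`, the discretized circle `K`, and the rotation number `omega` are illustrative placeholders, not the authors' implementation, and the adapted-frame Newton correction itself is omitted.

```python
import numpy as np

def shift(K, omega):
    """Evaluate the periodic parameterization at theta + omega via FFT;
    working in Fourier space keeps the per-step cost low."""
    N = K.shape[0]
    modes = np.fft.fftfreq(N, d=1.0 / N)            # integer Fourier modes
    phase = np.exp(2j * np.pi * modes * omega)
    return np.real(np.fft.ifft(np.fft.fft(K, axis=0) * phase[:, None], axis=0))

def invariance_error(F, K, omega):
    """E(theta) = F(K(theta)) - K(theta + omega); the parameterization-method
    Newton step drives this residual to zero quadratically."""
    FK = np.array([F(point) for point in K])        # apply the map pointwise
    return FK - shift(K, omega)
```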
A fast algorithm for the large-scale joint inversion of gravity and magnetic data is developed. It uses a nonlinear Gramian constraint to impose correlation between the density and susceptibility of the reconstructed models. The global objective function is formulated in the space of the weighted parameters, but the Gramian constraint is implemented in the original space, and the nonlinear constraint is imposed using two separate Lagrange parameters, one for each model domain. This combined approach provides more similarity between the reconstructed models. It is assumed that the measured data are obtained on a uniform grid and that a consistent regular discretization of the volume domain is imposed. The sensitivity matrices then exhibit a block-Toeplitz-Toeplitz-block (BTTB) structure for each depth layer of the model domain, so forward and transpose operations with these matrices can be implemented efficiently using two-dimensional fast Fourier transforms. This makes it feasible to solve large-scale problems with respect to both computational cost and memory demands, and to solve the nonlinear problem by applying iterative methods that rely only on matrix-vector multiplications. As such, the use of the regularized reweighted conjugate gradient algorithm, in conjunction with the structure of the sensitivity matrices, leads to a fast methodology for the large-scale joint inversion of geophysical data sets. Numerical simulations demonstrate that it is possible to apply a nonlinear joint inversion algorithm, with $L_p$-norm stabilizers, for the reconstruction of large model domains on a standard laptop computer. It is demonstrated that the $p=1$ choice provides sparse reconstructed solutions with sharp boundaries, while $p=2$ provides smooth and blurred models. Gravity and magnetic data obtained over an area in the northwest of the Mesoproterozoic St. Francois Terrane, southeast Missouri, USA, are inverted.
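As a hedged illustration of why the BTTB structure matters, the sketch below multiplies a BTTB matrix by a model vector via circulant embedding and two-dimensional FFTs; the kernel layout and grid shapes are assumptions chosen for clarity rather than the paper's actual sensitivity kernels.

```python
import numpy as np

def bttb_matvec(t, x):
    """y = A @ x for a BTTB matrix A with generating kernel t.

    t : (2m-1, 2n-1) array; t[i + m - 1, j + n - 1] holds the entry coupling
        grid points whose index offset is (i, j).
    x : (m, n) array of model parameters on the regular grid.
    Cost is O(mn log(mn)) flops and O(mn) memory, instead of the
    O(m^2 n^2) of a dense multiply.
    """
    m, n = x.shape
    P, Q = 2 * m - 1, 2 * n - 1                    # circulant embedding size
    T = np.fft.fft2(t, s=(P, Q))                   # kernel transform (reusable)
    X = np.fft.fft2(x, s=(P, Q))
    y = np.fft.ifft2(T * X)[m - 1:2 * m - 1, n - 1:2 * n - 1]
    return y.real
```

The transpose multiplication needed by the conjugate gradient iteration is obtained the same way with the kernel flipped along both axes, and the FFT of the kernel can be precomputed once per depth layer.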
Blade envelopes offer a set of data-driven tolerance guidelines for manufactured components based on aerodynamic analysis. In Part I of this two-part paper, a workflow for the formulation of blade envelopes is described and demonstrated. In Part II, this workflow is extended to accommodate multiple objectives. This allows engineers to prescribe manufacturing guidelines that take into account multiple performance criteria. The quality of a manufactured blade can be correlated with features derived from the distribution of primal flow quantities over the surface. We show that these distributions can be accounted for in the blade envelope using vector-valued models derived from discrete surface flow measurements. Our methods result in a set of variables that allows flexible and independent control over multiple flow characteristics and performance metrics, similar in spirit to inverse design methods. The augmentations to the blade envelope workflow presented in this paper are demonstrated on the LS89 turbine blade, focusing on the control of loss, mass flow and the isentropic Mach number distribution. Finally, we demonstrate how blade envelopes can be used to visualize invariant designs by producing a 3D render of the envelope using 3D modelling software.
Beyond identifying genetic variants, we introduce a set of Boolean relations that allows for a comprehensive classification of the relation between every pair of variants by taking all minimal alignments into account. We present an efficient algorithm to compute these relations, including a novel way of efficiently computing all minimal alignments within the best theoretical complexity bounds. We show that for all variants of the CFTR gene in dbSNP these relations are common, and many are non-trivial. Ultimately, we present an approach for storing and indexing variants in the context of a database that enables efficient querying for all of these relations.
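For intuition, here is a hedged sketch of the pairwise classification, assuming each variant has been summarized as the set of elements occurring in its minimal alignments; the relation names follow the abstract, while the set-based representation is an illustrative simplification of the actual algorithm.

```python
# Hypothetical representation: each variant is a set of alignment elements.
def classify(atoms_a: set, atoms_b: set) -> str:
    """Classify the relation between two variants from their alignment sets."""
    if atoms_a == atoms_b:
        return "equivalent"          # same change, possibly written differently
    if atoms_a <= atoms_b:
        return "is_contained"        # a is part of b
    if atoms_a >= atoms_b:
        return "contains"            # b is part of a
    if atoms_a & atoms_b:
        return "overlap"             # partially shared changes
    return "disjoint"                # no shared changes
```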
We present the class of Hida-Mat\'ern kernels, the canonical family of covariance functions over the entire space of stationary Gauss-Markov processes. It extends the Mat\'ern kernels by allowing for the flexible construction of priors over processes with oscillatory components. Stationary kernels, including the widely used squared-exponential and spectral mixture kernels, either lie directly within this class or arise as appropriate asymptotic limits, demonstrating its generality. Taking advantage of its Markovian nature, we show how to represent such processes as state space models using only the kernel and its derivatives. In turn, this allows us to perform Gaussian process inference more efficiently and to sidestep the usual computational burdens. We also show how exploiting special properties of the state space representation enables improved numerical stability in addition to further reductions in computational complexity.
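As a concrete instance of the kernel-to-state-space correspondence, the sketch below builds the classical linear-time-invariant SDE representation of the Mat\'ern-3/2 kernel, the simplest non-trivial member of the family; the Hida-Mat\'ern construction generalizes this with oscillatory components, and the names and scalings here are standard choices rather than the paper's notation.

```python
import numpy as np

def matern32_state_space(lengthscale, variance):
    """Return (F, L, Qc, H, Pinf) of the LTI SDE dz = F z dt + L dW whose
    output H z is a stationary GP with the Matern-3/2 covariance."""
    lam = np.sqrt(3.0) / lengthscale
    F = np.array([[0.0, 1.0],
                  [-lam**2, -2.0 * lam]])          # companion-form drift
    L = np.array([[0.0], [1.0]])
    Qc = 4.0 * lam**3 * variance                   # white-noise spectral density
    H = np.array([[1.0, 0.0]])                     # observe f, not f'
    Pinf = np.array([[variance, 0.0],
                     [0.0, lam**2 * variance]])    # stationary state covariance
    return F, L, Qc, H, Pinf
```

Running a Kalman filter and smoother on this model yields exact GP inference in $O(n)$ time for $n$ observations, which is the efficiency gain the abstract refers to.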
This paper focuses on the regularization of a backward time-fractional diffusion problem on an unbounded domain. This problem is well known to be ill-posed, whence the need for a regularization method in order to recover a stable approximate solution. For the problem under consideration, we present a unified framework of regularization which covers techniques such as Fourier regularization [19], mollification [12] and approximate inverse [7]. We investigate a regularization technique with two major advantages: the simplicity of computation of the regularized solution, and the avoidance of truncating high-frequency components (so as to prevent undesirable oscillations in the resulting approximate solution). Under classical Sobolev-smoothness conditions, we derive order-optimal error estimates between the approximate solution and the exact solution in the case where both the data and the model are only approximately known. In addition, an order-optimal a-posteriori parameter choice rule based on the Morozov principle is given. Finally, via some numerical experiments in two-dimensional space, we illustrate the efficiency of our regularization approach and numerically confirm the theoretical convergence rates established in the paper.
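To illustrate the a-posteriori rule, the sketch below implements a generic Morozov-type bisection on the regularization parameter; the regularization family `R`, the forward operator `forward` and the noise level `delta` are abstract placeholders, and the monotonicity of the discrepancy in the parameter is what standard schemes provide.

```python
import numpy as np

def morozov(R, forward, y, delta, tau=1.1, lo=1e-12, hi=1e2, iters=60):
    """Bisect on log(alpha) until ||forward(R(alpha, y)) - y|| ~ tau * delta.
    R(alpha, y) returns the regularized solution for parameter alpha;
    tau >= 1 is the usual safety factor in the Morozov principle."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)                       # geometric midpoint
        res = np.linalg.norm(forward(R(mid, y)) - y)
        if res > tau * delta:
            hi = mid                                 # over-regularized: shrink alpha
        else:
            lo = mid                                 # under-regularized: grow alpha
    return np.sqrt(lo * hi)
```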
We propose a novel approach to disentangling the generative factors of variation underlying a given set of observations. Our method builds upon the idea that the (unknown) low-dimensional manifold underlying the data space can be explicitly modeled as a product of submanifolds. This gives rise to a new definition of disentanglement, and to a novel weakly-supervised algorithm for recovering the unknown explanatory factors behind the data. At training time, our algorithm only requires pairs of non-i.i.d. data samples whose elements share at least one, possibly multidimensional, generative factor of variation. We require no knowledge of the nature of these transformations and make no limiting assumptions on the properties of each subspace. Our approach is easy to implement and can be successfully applied to different kinds of data (from images to 3D surfaces) undergoing arbitrary transformations. In addition to standard synthetic benchmarks, we showcase our method in challenging real-world applications, where we compare favorably with the state of the art.
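A minimal sketch of how such pair supervision can be turned into a training signal is given below, assuming the latent space is split into one subspace per factor; the encoder, the subspace split and the squared-error penalty are illustrative assumptions, not the authors' exact objective.

```python
import torch

def pair_loss(encoder, x1, x2, shared_idx, dims):
    """Penalize disagreement in the latent subspaces that the pair is known
    to share, leaving the remaining subspaces free to differ.
    dims[i] is the size of the i-th latent subspace (one per factor)."""
    z1 = torch.split(encoder(x1), dims, dim=-1)
    z2 = torch.split(encoder(x2), dims, dim=-1)
    return sum(((z1[k] - z2[k]) ** 2).sum(-1).mean() for k in shared_idx)
```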
This paper introduces a new model, E(n)-Equivariant Graph Neural Networks (EGNNs), for learning graph neural networks that are equivariant to rotations, translations, reflections and permutations. In contrast with existing methods, our work does not require computationally expensive higher-order representations in intermediate layers, while still achieving competitive or better performance. In addition, whereas existing methods are limited to equivariance on 3-dimensional spaces, our model easily scales to higher-dimensional spaces. We demonstrate the effectiveness of our method on dynamical systems modelling, representation learning in graph autoencoders, and predicting molecular properties.
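A minimal sketch of one EGNN layer is given below, following the published update equations: edge messages depend on node features and squared distances only, coordinates are updated along relative difference vectors, and node features are updated from aggregated messages. Hidden sizes, activation functions and the dense all-pairs aggregation are illustrative choices.

```python
import torch
import torch.nn as nn

class EGNNLayer(nn.Module):
    """One E(n)-equivariant layer: distances in, difference vectors out,
    so the coordinate update commutes with rotations, translations and
    reflections in any dimension d."""

    def __init__(self, h_dim, m_dim=64):
        super().__init__()
        self.phi_e = nn.Sequential(nn.Linear(2 * h_dim + 1, m_dim), nn.SiLU(),
                                   nn.Linear(m_dim, m_dim), nn.SiLU())
        self.phi_x = nn.Sequential(nn.Linear(m_dim, m_dim), nn.SiLU(),
                                   nn.Linear(m_dim, 1))
        self.phi_h = nn.Sequential(nn.Linear(h_dim + m_dim, m_dim), nn.SiLU(),
                                   nn.Linear(m_dim, h_dim))

    def forward(self, h, x):
        # h: (n, h_dim) invariant features; x: (n, d) coordinates, any d
        n = h.shape[0]
        diff = x[:, None, :] - x[None, :, :]                  # (n, n, d)
        d2 = (diff ** 2).sum(-1, keepdim=True)                # (n, n, 1)
        hi = h[:, None, :].expand(n, n, -1)
        hj = h[None, :, :].expand(n, n, -1)
        m = self.phi_e(torch.cat([hi, hj, d2], dim=-1))       # edge messages
        x = x + (diff * self.phi_x(m)).mean(dim=1)            # equivariant update
        h = self.phi_h(torch.cat([h, m.sum(dim=1)], dim=-1))  # invariant update
        return h, x
```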
UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical, scalable algorithm that applies to real-world data. The UMAP algorithm is competitive with t-SNE for visualization quality, arguably preserves more of the global structure, and has superior run-time performance. Furthermore, UMAP places no computational restrictions on the embedding dimension, making it viable as a general-purpose dimension reduction technique for machine learning.
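Typical usage of the reference implementation (the `umap-learn` package) looks as follows; the parameter values shown are the library defaults, and `n_components` can be set to any embedding dimension.

```python
import umap
from sklearn.datasets import load_digits

X = load_digits().data                     # (1797, 64) example dataset
embedding = umap.UMAP(n_neighbors=15, min_dist=0.1,
                      n_components=2, metric="euclidean").fit_transform(X)
print(embedding.shape)                     # (1797, 2)
```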
This work considers the problem of provably optimal reinforcement learning for episodic finite-horizon MDPs, i.e. how an agent learns to maximize its long-term reward in an uncertain environment. The main contribution is a novel algorithm, Variance-reduced Upper Confidence Q-learning (vUCQ), which enjoys a regret bound of $\widetilde{O}(\sqrt{HSAT} + H^5SA)$, where $T$ is the number of time steps the agent acts in the MDP, $S$ is the number of states, $A$ is the number of actions, and $H$ is the (episodic) horizon time. This is the first regret bound that is both sub-linear in the model size and asymptotically optimal. The algorithm is sub-linear in that the time to achieve $\epsilon$-average regret for any constant $\epsilon$ is $O(SA)$, a number of samples far smaller than that required to learn any non-trivial estimate of the transition model (which is specified by $O(S^2A)$ parameters). The importance of sub-linear algorithms is largely the motivation for algorithms such as $Q$-learning and other "model-free" approaches. The vUCQ algorithm also enjoys minimax-optimal regret in the long run, matching the $\Omega(\sqrt{HSAT})$ lower bound. vUCQ is a successive refinement method in which the algorithm reduces the variance in the $Q$-value estimates and couples this estimation scheme with an upper-confidence-based algorithm. Technically, it is the coupling of these two techniques that leads to the algorithm enjoying both the sub-linear regret property and asymptotically optimal regret.
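For orientation, the sketch below shows the optimistic Q-update that UCB-style Q-learning algorithms build on; the Hoeffding-style bonus and the learning-rate schedule shown are generic choices from this family, not the precise variance-reduced bonus that gives vUCQ its sharper bound.

```python
import numpy as np

def ucb_q_update(Q, V, N, h, s, a, r, s_next, H, c=1.0, iota=10.0):
    """One optimistic update at step h (0-based); Q has shape (H, S, A) and
    V has shape (H + 1, S) with V[H] fixed at zero. iota stands in for the
    usual logarithmic confidence term."""
    N[h, s, a] += 1
    t = N[h, s, a]
    alpha = (H + 1.0) / (H + t)               # learning rate from UCB-Q analyses
    bonus = c * np.sqrt(H**3 * iota / t)      # Hoeffding-style optimism bonus
    Q[h, s, a] += alpha * (r + V[h + 1, s_next] + bonus - Q[h, s, a])
    V[h, s] = min(H, Q[h, s].max())           # optimistic value, capped at H
    return Q, V, N
```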
Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space. An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors. Methods such as evolutionary strategies use parameter perturbations, but discard all temporal structure in the process and require significantly more samples. Combining parameter noise with traditional RL methods allows us to combine the best of both worlds. We demonstrate that both off- and on-policy methods benefit from this approach through an experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete-action environments as well as continuous control tasks. Our results show that RL with parameter noise learns more efficiently than both traditional RL with action-space noise and evolutionary strategies individually.
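A minimal sketch of parameter-space exploration is shown below: the policy's weights are perturbed once per rollout instead of adding noise to every action. The fixed `sigma` is an illustrative simplification; the paper additionally adapts the noise scale so that the induced change in action space stays near a target magnitude.

```python
import copy
import torch

def perturbed_policy(policy, sigma=0.1):
    """Return a copy of the policy whose weights carry additive Gaussian
    noise; acting with this copy for a whole rollout gives temporally
    consistent exploration, while learning updates the unperturbed policy."""
    noisy = copy.deepcopy(policy)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))
    return noisy
```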