We provide evidence of the existence of KAM quasi-periodic attractors for a dissipative model in Celestial Mechanics, computing the attractors extremely close to the breakdown threshold. We consider the spin-orbit problem describing the motion of a triaxial satellite around a central planet, under the simplifying assumptions that the center of mass of the satellite moves on a Keplerian orbit and that the spin axis is perpendicular to the orbit plane and coincides with the shortest physical axis. We also assume that the satellite is non-rigid; as a consequence, the problem is affected by a dissipative tidal torque that can be modeled as a time-dependent friction depending linearly on the velocity. Our goal is to fix a frequency and compute the embedding of a smooth attractor with this frequency, a task that requires adjusting a drift parameter. The goal of this paper is to provide numerical calculations of the condition numbers and to verify that, when applied to the numerical solutions, they lead to the existence of the torus for values of the parameters extremely close to those of breakdown. Computing reliably close to the breakdown allows us to discover several interesting phenomena, which we report in [CCGdlL20a]. The numerical calculations of the condition numbers presented here are not completely rigorous, since we neither use interval arithmetic to bound the round-off error nor estimate the truncation error rigorously, but we follow the usual standards of numerical analysis (using extended precision, checking that the results are not affected by the level of precision, truncation, etc.). Hence, we do not claim a computer-assisted proof, but the verification is more convincing than standard numerics. We hope that our work will stimulate a computer-assisted proof.
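To make the setting concrete, the following is a minimal sketch of a dissipative spin-orbit model of the common simplified form $\ddot x + \eta(\dot x - \nu) + \varepsilon \sin(2x - 2t) = 0$, integrated with classical RK4. The potential, the coefficient values, and the function names are illustrative assumptions, not the model or the (far more delicate, high-precision) computations of the paper.

```python
import math

def spin_orbit_rhs(t, x, v, eps=0.01, eta=0.05, nu=1.0):
    # Illustrative dissipative spin-orbit model (assumed simplified form):
    #   x'' + eta * (x' - nu) + eps * sin(2x - 2t) = 0
    # eta models the tidal friction (linear in the velocity), nu is the
    # drift parameter, eps the equatorial oblateness.
    return v, -eta * (v - nu) - eps * math.sin(2.0 * x - 2.0 * t)

def integrate(x0, v0, t_end, h=1e-2):
    # Classical RK4 integrator; the dissipative term pulls the rotation
    # rate toward a neighborhood of nu, hinting at an attractor.
    t, x, v = 0.0, x0, v0
    while t < t_end:
        k1x, k1v = spin_orbit_rhs(t, x, v)
        k2x, k2v = spin_orbit_rhs(t + h/2, x + h/2*k1x, v + h/2*k1v)
        k3x, k3v = spin_orbit_rhs(t + h/2, x + h/2*k2x, v + h/2*k2v)
        k4x, k4v = spin_orbit_rhs(t + h, x + h*k3x, v + h*k3v)
        x += h/6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += h
    return x, v
```

Direct integration of this kind only reveals attractors crudely; the paper instead computes the embedding of the attractor for a fixed frequency, which is what permits reliable computations near breakdown.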
In this paper, we propose a multi-objective approach to the EEG inverse problem. This formulation does not require tuning unknown parameters through empirical procedures. Due to the combinatorial characteristics of the problem, the approach employs evolutionary strategies to solve it. The result is a Multi-objective Evolutionary Algorithm based on Anatomical Restrictions (MOEAAR) that estimates distributed solutions. We compare this approach with three classic regularization methods: LASSO, Ridge-L and ENET-L. In the experimental phase, regression models were selected to obtain sparse and distributed solutions. The analysis involved simulated data with different signal-to-noise ratios (SNR). The quality-control indicators were localization error, spatial resolution and visibility. MOEAAR showed better stability than the classic methods in the reconstruction and localization of the maximum activation. The $L_0$ norm was used to estimate sparse solutions with the evolutionary approach, with relevant results.
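The core idea, evolving a population against a data-misfit objective and an $L_0$ sparsity objective simultaneously, can be sketched as below. This is a schematic toy stand-in for MOEAAR on a generic linear inverse problem $y = Ax$; it omits the anatomical restrictions and the algorithm's actual variation/selection operators, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def pareto_front(objs):
    # Indices of non-dominated points (both objectives minimized).
    idx = []
    for i, a in enumerate(objs):
        dominated = any(np.all(b <= a) and np.any(b < a)
                        for j, b in enumerate(objs) if j != i)
        if not dominated:
            idx.append(i)
    return idx

def moea_sparse_inverse(A, y, pop=40, gens=60, seed=0):
    # Toy multi-objective evolutionary search for y = A x:
    # objective 1: data misfit ||y - A x||^2; objective 2: ||x||_0.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    X = rng.standard_normal((pop, n)) * (rng.random((pop, n)) < 0.3)
    for _ in range(gens):
        # mutate: Gaussian perturbation plus random zeroing of entries
        kids = X + 0.3 * rng.standard_normal(X.shape)
        kids *= rng.random(X.shape) < 0.7
        P = np.vstack([X, kids])
        objs = np.array([[np.sum((y - A @ x) ** 2), np.count_nonzero(x)]
                         for x in P])
        front = pareto_front(objs)
        # next generation: non-dominated individuals, padded by best misfit
        order = front + [i for i in np.argsort(objs[:, 0]) if i not in front]
        X = P[order[:pop]]
    return X
```

Because parents survive into the candidate pool, the best misfit in the population is non-increasing across generations; the returned population approximates a misfit-sparsity Pareto front.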
With the recent success of representation learning methods, which include deep learning as a special case, there has been considerable interest in developing representation learning techniques that can incorporate known physical constraints into the learned representation. As one example, in many applications that involve a signal propagating through physical media (e.g., optics, acoustics, fluid dynamics), it is known that the dynamics of the signal must satisfy constraints imposed by the wave equation. Here we propose a matrix factorization technique that decomposes such signals into a sum of components, where each component is regularized to ensure that it satisfies wave-equation constraints. Although our proposed formulation is non-convex, we prove that our model can be solved efficiently to global optimality in polynomial time. We demonstrate the benefits of our work through applications in structural health monitoring, where prior work has attempted to solve this problem using sparse dictionary learning approaches that offer no theoretical guarantees of convergence to global optimality and rely on heuristics to capture the desired physical constraints.
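The flavor of physics-regularized factorization can be sketched as follows: a rank-1 alternating least squares fit of a data matrix with a quadratic penalty built from a differential operator on one factor. The second-difference operator here is a generic stand-in (second derivatives are the building blocks of a discretized wave equation); it is not the paper's actual regularizer or optimality-guaranteed formulation.

```python
import numpy as np

def second_difference(n):
    # Discrete second-derivative operator, a building block of a
    # discretized wave-equation constraint.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def regularized_rank1(X, lam=1.0, iters=50):
    # Alternating least squares for X ~ u v^T with a quadratic physics
    # penalty lam * ||D v||^2 on the temporal factor v. Each subproblem
    # is a linear least squares with a closed-form solution.
    m, n = X.shape
    D = second_difference(n)
    v = np.ones(n)
    for _ in range(iters):
        u = X @ v / (v @ v)
        v = np.linalg.solve((u @ u) * np.eye(n) + lam * D.T @ D, X.T @ u)
    return u, v
```

With the penalty weight near zero this reduces to plain rank-1 ALS (power iteration); increasing `lam` trades data fidelity for smoothness of the recovered component.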
We present a novel method to perform numerical integration over curved polyhedra enclosed by high-order parametric surfaces. Such a polyhedron is first decomposed into a set of triangular and/or rectangular pyramids, certain faces of which correspond to the given parametric surfaces. Each pyramid serves as an integration cell with a geometric mapping from a standard parent domain (e.g., a unit cube), where tensor-product Gauss quadrature is adopted. As no constraint is imposed on the decomposition, certain resulting pyramids may self-intersect, and thus their geometric mappings may exhibit negative Jacobian values. We call such cells folded cells and refer to the corresponding decomposition as a folded decomposition. We show that folded cells do not cause any issues in practice, as they are only used to numerically compute certain integrals of interest. The same idea applies to planar curved polygons as well. We demonstrate both theoretically and numerically that folded cells can retain the same accuracy as cells with strictly positive Jacobians. On the other hand, folded cells allow for a much easier and more flexible decomposition of general curved polyhedra, on which one can robustly compute integrals. Finally, we show that folded cells can flexibly and robustly accommodate real-world complex geometries by presenting several examples in the context of immersed isogeometric analysis, where the sharp features involved are well respected in generating integration cells.
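The key mechanism, keeping the signed Jacobian in the quadrature rule so that negatively-oriented regions cancel, can be illustrated in one dimension. Below, a hypothetical "folded decomposition" of $[0,3]$ uses one cell over-covering $[0,4]$ and one orientation-reversing cell whose negative Jacobian cancels the excess over $[3,4]$; the maps are illustrative, not taken from the paper.

```python
import numpy as np

def cell_integral(f, g, dg, n=5):
    # Gauss-Legendre quadrature over one cell: integrate f over the image
    # of the reference interval [-1, 1] under the map g. The signed
    # Jacobian dg is used as-is, so a "folded" cell (dg < 0) contributes
    # with a negative sign instead of breaking the rule.
    x, w = np.polynomial.legendre.leggauss(n)
    return np.sum(w * f(g(x)) * dg(x))

# Illustrative 1-D folded decomposition of [0, 3]:
f = lambda x: x**2
forward = cell_integral(f, lambda t: 2.0 + 2.0 * t,
                        lambda t: np.full_like(t, 2.0))    # covers [0, 4]
folded = cell_integral(f, lambda t: 3.5 - 0.5 * t,
                       lambda t: np.full_like(t, -0.5))    # cancels [3, 4]
total = forward + folded   # exact integral of x^2 over [0, 3]
```

Since both maps are affine, the composed integrands stay polynomial and a 5-point rule integrates them exactly, matching the paper's observation that folded cells lose no accuracy.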
For a general third-order tensor $\mathcal{A}\in\mathbb{R}^{n\times n\times n}$, the paper studies two closely related problems: the SVD-like tensor decomposition and the (approximate) tensor diagonalization. We develop an alternating least squares Jacobi-type algorithm that maximizes the sum of the squares of the diagonal entries of $\mathcal{A}$. The algorithm works on $2\times2\times2$ subtensors such that, in each iteration, the sum of the squares of two diagonal entries is maximized. We show how the rotation angles are calculated and prove the convergence of the algorithm. Different initializations of the algorithm are discussed, as well as the special cases of symmetric and antisymmetric tensors. The algorithm can be generalized to higher-order tensors.
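A stripped-down sketch of one Jacobi sweep is given below. For simplicity it applies the same Givens rotation in all three modes (the paper rotates the modes independently) and picks each angle by grid search rather than the paper's calculated angles; it only illustrates that pairwise rotations monotonically increase the diagonal energy.

```python
import numpy as np

def diag_energy(T):
    # Sum of squares of the diagonal entries of a cubic tensor.
    return sum(T[i, i, i] ** 2 for i in range(T.shape[0]))

def mode_product(T, M, mode):
    # Multiply tensor T by matrix M along the given mode.
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def jacobi_sweep(T, n_angles=180):
    # One Jacobi sweep: for every index pair (p, q), choose the Givens
    # rotation angle (grid search stand-in for closed-form angles) that
    # maximizes the diagonal energy when applied in all three modes.
    n = T.shape[0]
    for p in range(n - 1):
        for q in range(p + 1, n):
            best_T, best_f = T, diag_energy(T)
            for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
                G = np.eye(n)
                c, s = np.cos(theta), np.sin(theta)
                G[p, p], G[p, q], G[q, p], G[q, q] = c, -s, s, c
                cand = T
                for mode in range(3):
                    cand = mode_product(cand, G, mode)
                f = diag_energy(cand)
                if f > best_f:
                    best_T, best_f = cand, f
            T = best_T
    return T
```

Because the identity rotation is always a candidate, the diagonal energy is non-decreasing over sweeps, mirroring the monotonicity that underlies the convergence proof.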
In the present work, we explore a peculiar property of hash-based signatures that allows detection of their forgery. This property relies on the fact that a successful forgery of a hash-based signature most likely produces a collision of the employed hash function, and demonstrating this collision serves as convincing evidence of the forgery. We prove that, with properly adjusted parameters, the Lamport and Winternitz one-time signature schemes exhibit this forgery-detection availability property. The property is of significant importance within the crypto-agility paradigm, since the considered forgery detection serves as an alarm that the employed cryptographic hash function has become insecure and that the corresponding scheme has to be replaced.
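The mechanism is easiest to see in the Lamport scheme, sketched minimally below (SHA-256, 256 digest bits; helper names are ours). A forged signature on a different message must, at every digest bit where the messages differ, supply a value hashing to a published key hash; unless it equals the signer's secret preimage, signer and forger together hold a collision of the hash function, which is the forgery evidence the paper refers to.

```python
import hashlib
import os

def H(data):
    return hashlib.sha256(data).digest()

def keygen(bits=256):
    # Lamport one-time key: two secret preimages per message-digest bit,
    # public key = their hashes.
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(bits)]
    pk = [[H(pair[0]), H(pair[1])] for pair in sk]
    return sk, pk

def digest_bits(msg, bits):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(bits)]

def sign(msg, sk):
    # Reveal one preimage per digest bit.
    return [sk[i][b] for i, b in enumerate(digest_bits(msg, len(sk)))]

def verify(msg, sig, pk):
    return all(H(sig[i]) == pk[i][b]
               for i, b in enumerate(digest_bits(msg, len(pk))))
```

A signature on one message does not verify for another, since the two digests differ in at least one bit position where the revealed preimage hashes to the wrong half of the key pair.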
We introduce a new method for analyzing the cumulative sum (CUSUM) procedure in sequential change-point detection. When the observations are phase-type distributed and the post-change distribution is given by exponential tilting of the pre-change distribution, the first-passage analysis of the CUSUM statistic reduces to that of a certain Markov additive process. By using and further developing the theory of the so-called scale matrix, we derive exact expressions for the average run length, average detection delay, and false alarm probability under the CUSUM procedure. The proposed method is robust and applicable in a general setting with non-i.i.d. observations. Numerical results are also given.
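For orientation, the CUSUM recursion itself is short. The sketch below uses the simplest phase-type distribution, the exponential, whose exponential tilting is again exponential; the alarm time is the first passage of the statistic over a threshold. The analytical scale-matrix machinery of the paper is not reproduced here.

```python
import numpy as np

def cusum_detect(xs, lam0, lam1, threshold):
    # CUSUM recursion W_n = max(0, W_{n-1} + ell(x_n)), where ell is the
    # log-likelihood ratio of the post-change Exp(lam1) density against
    # the pre-change Exp(lam0) density. Returns the alarm time (first
    # passage of W over the threshold), or None if no alarm is raised.
    W = 0.0
    for n, x in enumerate(xs, start=1):
        llr = np.log(lam1 / lam0) - (lam1 - lam0) * x
        W = max(0.0, W + llr)
        if W >= threshold:
            return n
    return None
```

The quantities the paper computes exactly are then the distributional properties of this first-passage time: its mean before the change (average run length), its mean after the change (average detection delay), and the false alarm probability.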
Classical distributed estimation scenarios typically assume timely and reliable exchanges of information over the sensor network. This paper, in contrast, considers single time-scale distributed estimation over a sensor network subject to transmission time-delays. The proposed discrete-time networked estimator consists of two steps: (i) consensus on (delayed) a-priori estimates, and (ii) a measurement update. The sensors share their a-priori estimates only with their out-neighbors over (possibly) time-delayed transmission links. The delays are assumed to be fixed over time, heterogeneous, and known. We assume distributed observability instead of local observability, which significantly reduces the communication/sensing load on the sensors. Using the notions of augmented matrices and the Kronecker product, we prove the convergence of the proposed estimator over strongly-connected networks for a specific upper bound on the time-delays.
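The two-step structure can be sketched on a scalar toy problem: each sensor averages delayed neighbor estimates (consensus) and then corrects with its own measurement (update). This sketch assumes a constant scalar state and ignores the dynamics and the observability structure of the paper; weights, gain, and delays are illustrative inputs.

```python
import numpy as np

def delayed_consensus_estimator(y, W, delays, gain, steps):
    # Two-step networked estimator sketch for a constant scalar state:
    #   (i)  consensus on (delayed) a-priori neighbor estimates,
    #   (ii) local measurement update.
    # y: per-sensor measurements, W: row-stochastic consensus weights,
    # delays[i, j]: fixed, known delay (in steps) on the link j -> i.
    n = len(y)
    history = [np.zeros(n)]                  # history[k]: estimates at step k
    for k in range(steps):
        prior = np.zeros(n)
        for i in range(n):
            for j in range(n):
                t = max(0, k - int(delays[i, j]))   # delayed a-priori value
                prior[i] += W[i, j] * history[t][j]
        x = prior + gain * (y - prior)              # measurement update
        history.append(x)
    return history[-1]
```

With row-stochastic weights and a gain in $(0,1]$, the estimation error contracts geometrically over windows of length one plus the maximum delay, which is the qualitative content of the delay-dependent convergence condition.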
The posit number system was introduced in 2017 as a replacement for floating-point numbers. Since then, the community has explored its application in neural-network-related tasks and produced some unit designs, which are still far from being competitive with their floating-point counterparts. This paper proposes a Posit Logarithm-Approximate Multiplication (PLAM) scheme to significantly reduce the complexity of posit multipliers, the most power-hungry units in deep neural network architectures. When compared with state-of-the-art posit multipliers, experiments show that the proposed technique reduces the area, power, and delay of hardware multipliers by up to 72.86%, 81.79%, and 17.01%, respectively, without accuracy degradation.
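The underlying trick is Mitchell's logarithm approximation, $\log_2(1+f) \approx f$, which turns a multiplication into an addition of (exponent, fraction) pairs. The sketch below illustrates the principle on ordinary positive floats; PLAM applies the same idea to the posit fraction field in hardware, which this software model does not capture.

```python
import math

def approx_log2(x):
    # Mitchell's approximation: for x = (1 + f) * 2**k with f in [0, 1),
    # log2(x) = k + log2(1 + f) is approximated by k + f, so no actual
    # logarithm circuit is needed.
    m, e = math.frexp(x)          # x = m * 2**e with m in [0.5, 1)
    return (e - 1) + (2.0 * m - 1.0)

def mitchell_multiply(a, b):
    # Logarithm-approximate multiplication of positive reals: add the
    # approximate logs, then invert the same approximation.
    s = approx_log2(a) + approx_log2(b)
    k = math.floor(s)
    return (1.0 + (s - k)) * 2.0 ** k
```

The approximation is exact when both operands are powers of two and otherwise underestimates the product, with a bounded worst-case relative error of roughly 11%, small enough that inference accuracy is typically unaffected.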
Reinforcement learning (RL) is a promising approach, but it has achieved limited success in real-world applications because ensuring safe exploration and facilitating adequate exploitation are challenging when controlling robotic systems with unknown models and measurement uncertainties. Such a learning problem becomes even more intractable for complex tasks over continuous state and action spaces. In this paper, we propose a learning-based control framework consisting of several aspects: (1) linear temporal logic (LTL) is leveraged to specify complex tasks over an infinite horizon, which are translated into a novel automaton structure; (2) we propose an innovative reward scheme for the RL agent with a formal guarantee that globally optimal policies maximize the probability of satisfying the LTL specifications; (3) based on a reward shaping technique, we develop a modular policy-gradient architecture that exploits the benefits of the automaton structure to decompose overall tasks and improve the performance of learned controllers; (4) by incorporating Gaussian processes (GPs) to estimate the uncertain dynamics, we synthesize a model-based safeguard using exponential control barrier functions (ECBFs) to address problems with high-order relative degree. In addition, we utilize the properties of LTL automata and ECBFs to construct a guiding process that further improves the efficiency of exploration. Finally, we demonstrate the effectiveness of the framework in several robotic environments and show that such an ECBF-based modular deep RL algorithm achieves near-perfect success rates and safety guarding with high probability confidence during training.
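The safeguard idea (4) can be sketched as a barrier-function safety filter: the RL action is minimally corrected so that the barrier condition holds. The sketch below is a simplified relative-degree-one stand-in for the ECBF construction, solving the one-constraint QP $\min_u \|u - u_{\mathrm{RL}}\|^2$ s.t. $L_f h + L_g h \cdot u + \alpha \geq 0$ in closed form; the names and the lumped $\alpha$ term are our assumptions.

```python
import numpy as np

def cbf_safety_filter(u_rl, grad_h, f, g, alpha_term):
    # Minimal control-barrier-function safety filter: project the RL
    # action onto the half-space  Lf_h + Lg_h . u + alpha_term >= 0,
    # the closed-form solution of the one-constraint QP above.
    Lf_h = grad_h @ f                 # drift contribution to h-dot
    Lg_h = grad_h @ g                 # control direction of h-dot
    c = Lf_h + Lg_h @ u_rl + alpha_term
    if c >= 0.0:
        return u_rl                   # RL action already satisfies the barrier
    return u_rl - (c / (Lg_h @ Lg_h)) * Lg_h   # minimal correction
```

Running this filter between the learned policy and the plant is what allows exploration to proceed while keeping the safety constraint satisfied with high probability (the GP model supplying `f` and `g` estimates in the paper's setting).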
We consider the exploration-exploitation trade-off in reinforcement learning and show that an agent endowed with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the `temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space; it is similar to other well-known methods in the literature, including Q-learning, soft Q-learning, and maximum entropy policy gradient, and is closely related to optimism and count-based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action pair and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
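The "bonus plus Bellman equation plus Boltzmann policy" recipe can be sketched in tabular form. This is a schematic finite-horizon backup in the spirit of K-learning, not the exact algorithm: the bonus is left as an input rather than derived from the risk-seeking utility, and the soft (log-sum-exp) backup at temperature $\tau$ stands in for the paper's K-value Bellman equation.

```python
import numpy as np

def k_learning_sketch(P, r, bonus, tau, horizon):
    # Schematic tabular backup: add an exploration bonus to the reward,
    # then iterate a soft Bellman equation whose log-sum-exp temperature
    # tau plays the role of the risk-seeking parameter.
    # P: (S, A, S) transition probabilities; r, bonus: (S, A) arrays.
    S, A = r.shape
    K = np.zeros((S, A))
    for _ in range(horizon):
        V = tau * np.log(np.exp(K / tau).sum(axis=1))  # soft state value
        K = (r + bonus) + P @ V                        # Bellman backup
    policy = np.exp(K / tau)
    policy /= policy.sum(axis=1, keepdims=True)        # Boltzmann policy
    return K, policy
```

The returned policy is exactly the Boltzmann distribution over the K-values, with the temperature equal to the risk parameter, as described in the abstract.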