This paper describes an architecture for predicting cryptocurrency prices over the next seven days using the Adaptive Network-Based Fuzzy Inference System (ANFIS). The historical data considered are the daily time series of Bitcoin (BTC), Ethereum (ETH), Bitcoin Dominance (BTC.D), and Ethereum Dominance (ETH.D). The models are trained with hybrid and backpropagation algorithms, while grid partitioning, subtractive clustering, and Fuzzy C-means (FCM) algorithms are used for data clustering. The performance of the architecture designed in this paper is compared with different inputs and neural network models in terms of statistical evaluation criteria. Finally, the proposed method can predict the price of digital currencies in a short time.
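As an illustration of one of the clustering components mentioned above, the following is a minimal NumPy sketch of Fuzzy C-means, not the paper's implementation; the fuzzifier `m`, iteration count, and toy data are assumptions chosen for demonstration.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy C-means: returns cluster centers and the membership matrix."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)           # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m                               # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # distances of every point to every center (small guard avoids 0-division)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))       # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy example: two well-separated 1D groups near 0 and 10.
centers, U = fuzzy_c_means(np.array([[0.0], [0.1], [9.9], [10.0]]))
```

On this toy data the two recovered centers land near 0.05 and 9.95, one per group.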
We propose a new Riemannian gradient descent method for computing spherical area-preserving mappings of topological spheres using a Riemannian retraction-based framework with theoretically guaranteed convergence. The objective function is based on the stretch energy functional, and the minimization is constrained on a power manifold of unit spheres embedded in 3-dimensional Euclidean space. Numerical experiments on several mesh models demonstrate the accuracy and stability of the proposed framework. Comparisons with two existing state-of-the-art methods for computing area-preserving mappings demonstrate that our algorithm is both competitive and more efficient. Finally, we present a concrete application to the problem of landmark-aligned surface registration of two brain models.
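To illustrate the general flavor of retraction-based Riemannian gradient descent on a sphere (a simplified sketch, not the paper's stretch-energy algorithm or its power-manifold setting), the toy Rayleigh-quotient objective below is an assumption for demonstration; the retraction used is the standard normalization retraction.

```python
import numpy as np

def riemannian_gd_sphere(grad_f, x0, step=0.1, n_iter=500):
    """Riemannian gradient descent on the unit sphere S^{d-1}.

    The Euclidean gradient is projected onto the tangent space at x,
    and the update is pulled back to the sphere by the normalization
    retraction R_x(v) = (x + v) / ||x + v||.
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_iter):
        g = grad_f(x)
        g_tan = g - (x @ g) * x           # project gradient onto tangent space at x
        x = x - step * g_tan              # step in the tangent direction
        x = x / np.linalg.norm(x)         # retraction back onto the sphere
    return x

# Toy objective: minimize the Rayleigh quotient x^T A x on the sphere;
# the minimizer is the eigenvector of the smallest eigenvalue of A.
A = np.diag([3.0, 2.0, 0.5])
x = riemannian_gd_sphere(lambda x: 2.0 * A @ x, np.array([1.0, 1.0, 1.0]))
```

For this diagonal `A`, the iterate converges to (plus or minus) the third coordinate axis, where the objective value is 0.5.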
This paper proposes a systematic and novel component-level co-rotational (CR) framework for upgrading existing 3D continuum finite elements to flexible multibody analysis. Without using any model reduction techniques, high efficiency is achieved through sophisticated operations in both the modeling and the numerical implementation phases. In the modeling phase, as in conventional 3D nonlinear finite element analysis, the nodal absolute coordinates are used as the system generalized coordinates, so that simple formulations of the inertia force terms can be obtained. For the elastic force terms, inspired by the existing floating frame of reference formulation (FFRF) and the conventional element-level CR formulation, a component-level CR modeling strategy is developed. By combining the Schur complement theory with a full exploitation of the structure of the component-level CR model, an extremely efficient procedure is developed, which transforms the linear equations arising in each Newton-Raphson iteration step into linear systems with a constant coefficient matrix. The coefficient matrix can thus be pre-calculated and decomposed only once; all subsequent time steps require only back substitutions, which avoids frequently updating the Jacobian matrix and avoids directly solving the large-scale linearized equations in each iteration. Multiple examples are presented to demonstrate the performance of the proposed framework.
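The computational idea of decomposing a constant coefficient matrix once and then reusing cheap back substitutions can be sketched as follows; the random system matrix and the SciPy LU routines are illustrative assumptions, not the paper's multibody solver.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 200
# Stand-in for a constant, well-conditioned coefficient matrix.
K = rng.standard_normal((n, n)) + n * np.eye(n)

# Factorize once (O(n^3)) ...
lu, piv = lu_factor(K)

# ... then every subsequent iteration only needs back substitutions (O(n^2)).
for step in range(5):
    rhs = rng.standard_normal(n)          # residual vector at this iteration
    dx = lu_solve((lu, piv), rhs)         # reuse the stored factorization
```

The payoff is exactly the one described above: the expensive factorization is amortized over all time steps, leaving only triangular solves in the iteration loop.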
Creating Computer Vision (CV) models remains a complex practice, despite their ubiquity. Access to data, the requirement for ML expertise, and model opacity are just a few points of complexity that limit the ability of end-users to build, inspect, and improve these models. Interactive ML perspectives have helped address some of these issues by considering a teacher in the loop, where planning, teaching, and evaluating tasks take place. We present and evaluate two interactive visualizations in the context of Sprite, a system for creating CV classification and detection models for images originating from videos. We study how these visualizations help Sprite's users identify (evaluate) and select (plan) images where a model is struggling and can lead to improved performance, compared to a baseline condition in which users relied on a query language. We found that users of the visualizations identified more images, spanning a wider set of potential types of model errors.
Large-scale applications of Visual Place Recognition (VPR) require computationally efficient approaches. Moreover, a well-balanced combination of data-driven and training-free approaches can decrease the required amount of training data and effort, and can reduce the influence of distribution shifts between the training and application phases. This paper proposes a runtime- and data-efficient hierarchical VPR pipeline that extends existing approaches and presents novel ideas. There are three main contributions: First, we propose Local Positional Graphs (LPG), a training-free and runtime-efficient approach to encode spatial context information of local image features. LPG can be combined with existing local feature detectors and descriptors, and considerably improves the image-matching quality compared to existing techniques in our experiments. Second, we present Attentive Local SPED (ATLAS), an extension of our previous local features approach with an attention module that improves the feature quality while maintaining high data efficiency. The influence of the proposed modifications is evaluated in an extensive ablation study. Third, we present a hierarchical pipeline that exploits hyperdimensional computing to reuse the same local features as holistic HDC-descriptors, both for fast candidate selection and for candidate reranking. We combine all contributions in a runtime- and data-efficient VPR pipeline that shows benefits over the state-of-the-art method Patch-NetVLAD on a large collection of standard place recognition datasets, with 15$\%$ better VPR accuracy, 54$\times$ faster feature comparison speed, and 55$\times$ less descriptor storage occupancy, making our method promising for real-world high-performance large-scale VPR in changing environments. Code will be made available with the publication of this paper.
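To give a rough flavor of how local features can be aggregated into a single holistic HDC descriptor (a simplified sketch, not the actual LPG/ATLAS pipeline), the fixed random projection, the hypervector dimensionality, and the toy feature data below are all assumptions.

```python
import numpy as np

D = 4096                                   # hypervector dimensionality (assumed)
feat_dim = 128                             # local descriptor dimensionality (assumed)
rng = np.random.default_rng(42)
proj = rng.standard_normal((feat_dim, D))  # fixed random projection, shared by all images

def holistic_hdc_descriptor(local_feats, proj):
    """Bundle local feature descriptors into one holistic HDC descriptor.

    Each local descriptor is mapped to a bipolar hypervector via the fixed
    random projection, and all hypervectors are bundled by elementwise summation.
    """
    hvs = np.sign(local_feats @ proj)      # (n_feats, D) bipolar hypervectors
    return hvs.sum(axis=0)                 # bundling = elementwise addition

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

img_a = rng.standard_normal((30, feat_dim))                 # local features, image A
img_b = img_a + 0.1 * rng.standard_normal((30, feat_dim))   # similar view of same place
img_c = rng.standard_normal((30, feat_dim))                 # unrelated place

ha, hb, hc = (holistic_hdc_descriptor(x, proj) for x in (img_a, img_b, img_c))
```

Cosine similarity between the bundled descriptors then serves as a fast whole-image comparison: the similar view scores much higher against image A than the unrelated place does.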
This paper develops an in-depth treatment of the problem of approximating the Gaussian smoothing and Gaussian derivative computations in scale-space theory for application to discrete data. With close connections to previous axiomatic treatments of continuous and discrete scale-space theory, we consider three main ways of discretizing these scale-space operations as explicit discrete convolutions, based on either (i) sampling the Gaussian kernels and the Gaussian derivative kernels, (ii) locally integrating the Gaussian kernels and the Gaussian derivative kernels over each pixel support region, or (iii) basing the scale-space analysis on the discrete analogue of the Gaussian kernel and then computing derivative approximations by applying small-support central difference operators to the spatially smoothed image data. We study the properties of these three main discretization methods both theoretically and experimentally, and characterize their performance by quantitative measures, including the results they give rise to with respect to the task of scale selection, investigated for four different use cases, with emphasis on the behaviour at fine scales. The results show that the sampled Gaussian kernels and derivatives, as well as the integrated Gaussian kernels and derivatives, perform very poorly at very fine scales, where the discrete analogue of the Gaussian kernel with its corresponding discrete derivative approximations performs substantially better. The sampled Gaussian kernel and the sampled Gaussian derivatives do, on the other hand, lead to numerically very good approximations of the corresponding continuous results when the scale parameter is sufficiently large; in the experiments presented in the paper, this means scale parameters greater than about 1 in units of the grid spacing.
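The discrete analogue of the Gaussian kernel is $T(n, t) = e^{-t} I_n(t)$, with $I_n$ the modified Bessel functions of integer order. A small sketch comparing its normalization with that of the sampled Gaussian at a fine scale (the particular scale value and truncation radius are arbitrary choices for illustration):

```python
import numpy as np
from scipy.special import ive  # ive(n, t) = exp(-t) * I_n(t), t > 0

def discrete_gaussian(t, radius):
    """Discrete analogue of the Gaussian: T(n, t) = e^{-t} I_n(t)."""
    n = np.arange(-radius, radius + 1)
    return ive(np.abs(n), t)

def sampled_gaussian(t, radius):
    """Continuous Gaussian (variance t = sigma^2) sampled at integer positions."""
    n = np.arange(-radius, radius + 1)
    return np.exp(-n**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

t = 0.25                       # very fine scale: sigma = 0.5 grid units
disc = discrete_gaussian(t, 10)
samp = sampled_gaussian(t, 10)
```

At this fine scale the discrete analogue still sums to unity (up to a negligible truncation error), whereas the sampled Gaussian sums to about 1.014, illustrating one of the fine-scale deficiencies discussed above.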
This paper presents a novel approach for estimating the Koopman operator defined on a reproducing kernel Hilbert space (RKHS) and its spectra. We propose an estimation method, which we call Jet Dynamic Mode Decomposition (JetDMD), that leverages the intrinsic structure of the RKHS and the geometric notion of jets to enhance the estimation of the Koopman operator. This method refines the traditional Extended Dynamic Mode Decomposition (EDMD) in accuracy, especially in the numerical estimation of eigenvalues. This paper proves JetDMD's superiority through explicit error bounds and convergence rates for special positive definite kernels, offering a solid theoretical foundation for its performance. We also delve into the spectral analysis of the Koopman operator, proposing the notion of an extended Koopman operator within the framework of rigged Hilbert spaces. This notion leads to a deeper understanding of the estimated Koopman eigenfunctions and allows capturing them outside the original function space. Through the theory of rigged Hilbert spaces, our study provides a principled methodology for analyzing the estimated spectrum and eigenfunctions of Koopman operators, and enables eigendecomposition within a rigged RKHS. We also propose a new effective method for reconstructing the dynamical system from temporally sampled trajectory data, with solid theoretical guarantees. We conduct several numerical simulations using the van der Pol oscillator, the Duffing oscillator, the H\'enon map, and the Lorenz attractor, and illustrate the performance of JetDMD through clear numerical computations of eigenvalues and accurate predictions of the dynamical systems.
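For orientation, plain EDMD, which JetDMD refines, can be sketched in a few lines: lift snapshot pairs through a dictionary of observables and solve a least-squares problem for the Koopman matrix. The linear test system and identity dictionary below are toy assumptions, not the paper's kernel-based setting.

```python
import numpy as np

def edmd(X, Y, psi):
    """Extended DMD: least-squares estimate of the Koopman matrix.

    X, Y hold snapshot pairs as columns (y_k = F(x_k)); psi lifts states
    into the dictionary space, and K is fitted so that K psi(x_k) ~ psi(y_k).
    """
    PX, PY = psi(X), psi(Y)
    M, *_ = np.linalg.lstsq(PX.T, PY.T, rcond=None)
    return M.T

# Linear test system x_{k+1} = A x_k: with linear observables, the
# estimated Koopman eigenvalues coincide with the eigenvalues of A.
A = np.array([[0.9, 0.0],
              [0.2, 0.5]])
rng = np.random.default_rng(1)
X = rng.standard_normal((2, 200))
Y = A @ X
K = edmd(X, Y, lambda Z: Z)               # identity dictionary (linear observables)
eigs = np.sort(np.linalg.eigvals(K).real)
```

On this noise-free linear system the recovered spectrum is exactly {0.5, 0.9}, the eigenvalues of `A`.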
The recent paper (IEEE Trans. IT 69, 1680) introduced an analytical method for calculating the channel capacity without the need for iteration. This method has certain limitations that restrict its applicability. Furthermore, the paper does not provide an explanation as to why the channel capacity can be solved analytically in this particular case. In order to broaden the scope of this method and address its limitations, we turn our attention to the reverse em-problem, proposed by Toyota (Information Geometry, 3, 1355 (2020)). This reverse em-problem involves iteratively applying the inverse map of the em iteration to calculate the channel capacity, which represents the maximum mutual information. However, several open problems remained unresolved in Toyota's work. To overcome these challenges, we formulate the reverse em-problem based on Bregman divergence and provide solutions to these open problems. Building upon these results, we transform the reverse em-problem into em-problems and derive a non-iterative formula for the reverse em-problem. This formula can be viewed as a generalization of the aforementioned analytical calculation method. Importantly, this derivation sheds light on the information geometrical structure underlying this special case. By effectively addressing the limitations of the previous analytical method and providing a deeper understanding of the underlying information geometrical structure, our work significantly expands the applicability of the proposed method for calculating the channel capacity without iteration.
The random batch method (RBM) proposed in [Jin et al., J. Comput. Phys., 400 (2020), 108877] is an efficient and highly scalable algorithm, with complexity linear in the number of particles, for $N$-particle interacting systems and their mean-field limits when $N$ is large. In this work we consider the quantitative error estimate of RBM toward its mean-field limit, the Fokker-Planck equation. Under mild assumptions, we obtain a uniform-in-time $O(\tau^2 + 1/N)$ bound on the scaled relative entropy between the joint law of the random batch particles and the tensorized law at the mean-field limit, where $\tau$ is the time step size and $N$ is the number of particles. We thereby improve the existing rate in the discretization step size from $O(\sqrt{\tau})$ to $O(\tau)$ in terms of the Wasserstein distance.
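The core of RBM, randomly shuffling particles into small batches and computing interactions only within each batch, can be sketched as follows; the attractive linear kernel, step size, and batch size are toy assumptions, and the diffusion term is omitted for brevity.

```python
import numpy as np

def rbm_step(X, kernel, dt, p, rng):
    """One step of the Random Batch Method for a first-order particle system.

    Particles are shuffled into random batches of size p (N is assumed
    divisible by p); each particle interacts only within its batch, with
    the 1/(N-1) mean-field weight replaced by 1/(p-1).  Cost per step is
    O(N p) instead of the O(N^2) of full pairwise interaction.
    """
    N = X.shape[0]
    idx = rng.permutation(N)
    for b in range(0, N, p):
        batch = idx[b:b + p]
        diffs = X[batch, None] - X[None, batch]       # pairwise differences in batch
        force = kernel(diffs).sum(axis=1) / (len(batch) - 1)
        X[batch] = X[batch] + dt * force
    return X

# Attractive linear interaction K(d) = -d: the particle cloud contracts
# toward its center of mass, so the empirical variance shrinks.
rng = np.random.default_rng(0)
X = rng.standard_normal(64)
var0 = X.var()
for _ in range(200):
    X = rbm_step(X, lambda d: -d, dt=0.05, p=2, rng=rng)
```

Even with the smallest possible batch size (p = 2), the randomized pairings reproduce the qualitative contraction of the full interacting system at a fraction of the cost.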
This paper explores residual-based a posteriori error estimation for the generalized Burgers-Huxley equation (GBHE) featuring weakly singular kernels. First, we present a reliable and efficient error estimator for both the stationary GBHE and the semi-discrete GBHE with memory, utilizing the discontinuous Galerkin finite element method (DGFEM) in space. Additionally, employing backward Euler and Crank-Nicolson discretizations in time and DGFEM in space, we introduce an estimator for the fully discrete GBHE that takes the influence of the past history into account. The paper also establishes optimal $L^2$ error estimates for both the stationary GBHE and the GBHE with memory. Finally, we validate the effectiveness of the proposed error estimator through numerical results, demonstrating its efficacy in an adaptive refinement strategy.
This paper presents asymptotic results for the maximum likelihood and restricted maximum likelihood (REML) estimators within a two-way crossed mixed effect model as the sizes of the rows, columns, and cells tend to infinity. Under very mild conditions which do not require the assumption of normality, the estimators are proven to be asymptotically normal, possessing a structured covariance matrix. The growth rate for the number of rows, columns, and cells is unrestricted, whether considered pairwise or collectively.