
This work develops a functional analytic framework for making computer-assisted arguments involving transverse heteroclinic connecting orbits between hyperbolic periodic solutions of ordinary differential equations. We exploit a Fourier-Taylor approximation of the local stable/unstable manifold of the periodic orbit, combined with a numerical method for solving two-point boundary value problems via Chebyshev series approximations. The a posteriori analysis developed provides mathematically rigorous bounds on all approximation errors, yielding both abstract existence results and quantitative information about the true heteroclinic solution. Example calculations are given for both the dissipative Lorenz system and the Hamiltonian Hill Restricted Four-Body Problem.
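A minimal Python sketch of the Chebyshev-collocation ingredient is given below: it solves the linear two-point boundary value problem u'' = e^x with homogeneous Dirichlet data on [-1, 1] using a Chebyshev differentiation matrix. The test equation, grid size, and boundary data are arbitrary stand-ins; the paper treats nonlinear connecting-orbit problems with rigorous a posteriori error bounds, which this sketch does not attempt.

```python
import numpy as np

def cheb(n):
    """Chebyshev points and differentiation matrix (Trefethen-style construction)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

n = 32
D, x = cheb(n)
A = D @ D                        # discrete second derivative
rhs = np.exp(x)                  # solve u'' = exp(x), u(-1) = u(1) = 0
A[0, :], A[-1, :] = 0.0, 0.0     # replace first/last rows by boundary conditions
A[0, 0], A[-1, -1] = 1.0, 1.0
rhs[0], rhs[-1] = 0.0, 0.0
u = np.linalg.solve(A, rhs)
print('interior residual:', np.abs(D @ D @ u - np.exp(x))[1:-1].max())
```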

Related content

Supervised machine learning methods for geological mapping via remote sensing face limitations due to the scarcity of accurately labelled training data, which can be addressed by unsupervised learning such as dimensionality reduction and clustering. Dimensionality reduction methods have the potential to play a crucial role in improving the accuracy of geological maps. Although conventional dimensionality reduction methods may struggle with nonlinear data, unsupervised deep learning models such as autoencoders can model nonlinear relationships. Stacked autoencoders feature multiple interconnected layers that capture hierarchical data representations useful for remote sensing data. This study presents an unsupervised machine learning framework for processing remote sensing data, using stacked autoencoders for dimensionality reduction and k-means clustering for mapping geological units. We use Landsat 8, ASTER, and Sentinel-2 datasets to evaluate the framework for geological mapping of the Mutawintji region in Western New South Wales, Australia. We also compare stacked autoencoders with principal component analysis and canonical autoencoders. Our results reveal that the framework produces accurate and interpretable geological maps that efficiently discriminate rock units. We find that the accuracy of stacked autoencoders ranges from 86.6% to 90%, depending on the remote sensing data type, which is superior to that of their counterparts. We also find that the generated maps align with prior geological knowledge of the study area while providing novel insights into geological structures.
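The pipeline can be sketched roughly as follows in Python, assuming PyTorch and scikit-learn are available; the layer widths, latent dimension, number of clusters, and the random placeholder array are illustrative stand-ins for the actual network design and the Landsat 8/ASTER/Sentinel-2 inputs.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Hypothetical input: n_pixels spectral vectors with n_bands bands.
n_pixels, n_bands, n_latent, n_units = 10000, 12, 3, 8
X = torch.rand(n_pixels, n_bands)              # placeholder reflectance data

class StackedAE(nn.Module):
    def __init__(self, d_in, d_latent):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(d_in, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, d_latent))
        self.decoder = nn.Sequential(
            nn.Linear(d_latent, 16), nn.ReLU(),
            nn.Linear(16, 32), nn.ReLU(),
            nn.Linear(32, d_in))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = StackedAE(n_bands, n_latent)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):                       # unsupervised reconstruction training
    opt.zero_grad()
    x_hat, _ = model(X)
    loss = loss_fn(x_hat, X)
    loss.backward()
    opt.step()

with torch.no_grad():
    _, Z = model(X)                            # low-dimensional features
labels = KMeans(n_clusters=n_units, n_init=10).fit_predict(Z.numpy())
```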

This paper studies the complexity of classical modal logics and of their extension with fixed-point operators, using translations to transfer results across logics. In particular, we show several complexity results for multi-agent logics via translations to and from the $\mu$-calculus and modal logic, which allow us to transfer known upper and lower bounds. We also use these translations to introduce terminating and non-terminating tableau systems for the logics we study, based on Kozen's tableau for the $\mu$-calculus and that of Fitting and Massacci for modal logic. Finally, we describe these tableaux with $\mu$-calculus formulas, thus reducing the satisfiability of each of these logics to the satisfiability of the $\mu$-calculus, resulting in a general 2EXP upper bound for satisfiability testing.
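As a toy illustration of this kind of translation (not the specific encodings of the paper), the sketch below rewrites a reflexive-transitive modality [a*]φ as the greatest fixed point νX.(φ ∧ [a]X) over a small formula AST in Python; the formula representation and syntax strings are invented for the example.

```python
import itertools

_fresh = itertools.count()

def to_mu(phi):
    """Translate formulas with a reflexive-transitive modality ('boxstar', a, psi)
    into plain mu-calculus syntax strings: [a*]psi  ->  nu X.(psi' & [a]X)."""
    kind = phi[0]
    if kind == 'prop':
        return phi[1]
    if kind == 'not':
        return f'~{to_mu(phi[1])}'
    if kind == 'and':
        return f'({to_mu(phi[1])} & {to_mu(phi[2])})'
    if kind == 'box':                      # [a]psi
        return f'[{phi[1]}]{to_mu(phi[2])}'
    if kind == 'boxstar':                  # [a*]psi, PDL-style derived modality
        x = f'X{next(_fresh)}'
        return f'nu {x}.({to_mu(phi[2])} & [{phi[1]}]{x})'
    raise ValueError(kind)

# [a*](p & [b]q)  becomes  nu X0.((p & [b]q) & [a]X0)
print(to_mu(('boxstar', 'a', ('and', ('prop', 'p'), ('box', 'b', ('prop', 'q'))))))
```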

We address the problem of best uniform approximation of a continuous function on a convex domain. The approximation is by linear combinations of a finite system of functions (not necessarily Chebyshev) under arbitrary linear constraints. By modifying the concepts of alternance and of the Remez iterative procedure, we present a method that demonstrates its efficiency in numerical problems. A linear rate of convergence is proved under some favourable assumptions. Special attention is paid to systems of complex exponentials, Gaussian functions, and lacunary algebraic and trigonometric polynomials. Applications to signal processing, linear ODEs, switching dynamical systems, and Markov-Bernstein type inequalities are considered.
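For orientation only, the sketch below computes a discrete best uniform approximation on a fine grid by linear programming with SciPy; the basis (a constant plus two Gaussians) and the target function are arbitrary choices. This grid-based formulation is not the modified alternance/Remez procedure of the paper, which avoids gridding and converges linearly.

```python
import numpy as np
from scipy.optimize import linprog

x = np.linspace(-1.0, 1.0, 401)
f = np.abs(x)                                      # target function
G = np.column_stack([np.ones_like(x),
                     np.exp(-(x - 0.5) ** 2),
                     np.exp(-(x + 0.5) ** 2)])     # non-Chebyshev basis g_1..g_m
m = G.shape[1]

# Variables: coefficients c (free sign) and error level t; minimize t subject to
#   -t <= f(x_i) - sum_j c_j g_j(x_i) <= t   at every grid point x_i.
c_obj = np.zeros(m + 1); c_obj[-1] = 1.0
A_ub = np.vstack([np.hstack([ G, -np.ones((len(x), 1))]),
                  np.hstack([-G, -np.ones((len(x), 1))])])
b_ub = np.hstack([f, -f])
bounds = [(None, None)] * m + [(0, None)]

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method='highs')
coeffs, err = res.x[:m], res.x[-1]
print('best uniform error on the grid:', err)
```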

This contribution introduces a model order reduction approach for an advection-reaction problem with a parametrized reaction function. The underlying discretization uses an ultraweak formulation with an $L^2$-like trial space and an 'optimal' test space as introduced by Demkowicz et al. This ensures the stability of the discretization and in addition allows for a symmetric reformulation of the problem in terms of a dual solution which can also be interpreted as the normal equations of an adjoint least-squares problem. Classic model order reduction techniques can then be applied to the space of dual solutions which also immediately gives a reduced primal space. We show that the necessary computations do not require the reconstruction of any primal solutions and can instead be performed entirely on the space of dual solutions. We prove exponential convergence of the Kolmogorov $N$-width and show that a greedy algorithm produces quasi-optimal approximation spaces for both the primal and the dual solution space. Numerical experiments based on the benchmark problem of a catalytic filter confirm the applicability of the proposed method.
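A generic strong-greedy selection on precomputed snapshots is sketched below for orientation; the snapshot family and tolerances are invented, and the paper's algorithm instead builds the basis on the dual (test) space with residual-based surrogates, without reconstructing primal solutions.

```python
import numpy as np

def strong_greedy(snapshots, tol=1e-8, max_basis=20):
    """Greedy selection of an orthonormal reduced basis from snapshot columns."""
    basis = np.zeros((snapshots.shape[0], 0))
    for _ in range(max_basis):
        proj = basis @ (basis.T @ snapshots)          # projection onto current basis
        errs = np.linalg.norm(snapshots - proj, axis=0)
        k = int(np.argmax(errs))
        if errs[k] < tol:
            break
        v = snapshots[:, k] - proj[:, k]              # Gram-Schmidt step
        basis = np.column_stack([basis, v / np.linalg.norm(v)])
    return basis

# toy snapshot family u(x; mu) = 1 / (1 + mu * x), sampled over a parameter grid
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(0.1, 10.0, 50)
S = np.array([1.0 / (1.0 + mu * x) for mu in mus]).T
B = strong_greedy(S)
print('reduced dimension:', B.shape[1])
```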

The solution approximation for partial differential equations (PDEs) can be substantially improved using smooth basis functions. The recently introduced mollified basis functions are constructed through mollification, or convolution, of cell-wise defined piecewise polynomials with a smooth mollifier of certain characteristics. The properties of the mollified basis functions are governed by the order of the piecewise functions and the smoothness of the mollifier. In this work, we exploit the high-order and high-smoothness properties of the mollified basis functions for solving PDEs through the point collocation method. The basis functions are evaluated at a set of collocation points in the domain. In addition, boundary conditions are imposed at a set of boundary collocation points distributed over the domain boundaries. To ensure the stability of the resulting linear system of equations, the number of collocation points is chosen to be larger than the total number of basis functions. The resulting linear system is overdetermined and is solved using the least-squares technique. The presented numerical examples confirm the convergence of the proposed approximation scheme for Poisson, linear elasticity, and biharmonic problems. We study in particular the influence of the mollifier and the spatial distribution of the collocation points.
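A much-simplified one-dimensional analogue of the overdetermined collocation step is sketched below, with Gaussian radial functions standing in for the mollified basis; the shape parameter, point counts, and test problem are arbitrary assumptions.

```python
import numpy as np

# 1D Poisson sketch: -u''(x) = pi^2 sin(pi x) on (0, 1), u(0) = u(1) = 0.
centers = np.linspace(0.0, 1.0, 15)
eps = 8.0                                          # assumed shape parameter

phi    = lambda x, c: np.exp(-(eps * (x - c)) ** 2)
phi_xx = lambda x, c: (4 * eps**4 * (x - c)**2 - 2 * eps**2) * np.exp(-(eps * (x - c)) ** 2)

x_int = np.linspace(0.0, 1.0, 40)                  # interior collocation points
x_bnd = np.array([0.0, 1.0])                       # boundary collocation points

# more collocation equations than basis functions -> overdetermined system
A = np.vstack([-phi_xx(x_int[:, None], centers[None, :]),
                phi(x_bnd[:, None], centers[None, :])])
b = np.hstack([np.pi**2 * np.sin(np.pi * x_int), np.zeros(2)])

coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)     # least-squares solve
x_test = np.linspace(0.0, 1.0, 200)
u_h = phi(x_test[:, None], centers[None, :]) @ coeffs
print('max error:', np.abs(u_h - np.sin(np.pi * x_test)).max())
```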

Hyperbolic network models exhibit fundamental and essential features, such as small-worldness, scale-freeness, a high clustering coefficient, and community structure. In this paper, we comprehensively explore the presence of another important feature often exhibited by real-world networks, the core-periphery structure, in hyperbolic network models. We focus on well-known hyperbolic models such as the popularity-similarity optimization (PSO) model and the S1/H2 models, and study their core-periphery structures using a well-established method based on a standard random-walk Markov chain model. The observed core-periphery centralization values indicate that the core-periphery structure can be very pronounced under certain conditions. We also validate our findings by statistically testing the significance of the observed core-periphery structure in the network geometry. This study extends network science and reveals core-periphery insights applicable to various domains, enhancing network performance and resiliency in transportation and information systems.
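The random-walk ingredient can be illustrated with the naive Python sketch below, which ranks nodes of a synthetic scale-free graph by the stationary distribution of a lazy random walk and labels the top decile as a crude "core"; this is not the established core-periphery method used in the paper, and the graph model and cutoff are arbitrary.

```python
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(200, 3, seed=1)       # scale-free stand-in network
A = nx.to_numpy_array(G)
# lazy random-walk transition matrix (laziness guarantees convergence)
P = 0.5 * (np.eye(len(G)) + A / A.sum(axis=1, keepdims=True))

pi = np.full(len(G), 1.0 / len(G))
for _ in range(1000):                              # power iteration to stationarity
    pi = pi @ P
pi /= pi.sum()

core = np.argsort(pi)[::-1][: len(G) // 10]        # top 10% as a crude 'core'
print('crude core nodes:', sorted(core.tolist()))
```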

To analyze the topological properties of given discrete data, one considers a continuous construction called a filtration. Persistent homology serves as a tool to track changes in homology along the filtration. The outcome of the topological analysis of data varies with the choice of filtration, making this selection crucial. Filtration learning is an attempt to find an optimal filtration that minimizes a loss function. Exact Multi-parameter Persistent Homology (EMPH), recently proposed for topological time-series analysis, utilizes an exact formula for the rank invariant instead of computing it. In this paper, we propose a framework for filtration learning with EMPH. We formulate an optimization problem, propose an algorithm for solving it, and apply the algorithm to several classification problems. In particular, we derive the exact formula of the gradient of the loss function with respect to the filtration parameter, which makes it possible to update the filtration directly without automatic differentiation, significantly enhancing the learning process.
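The point about closed-form gradients can be illustrated with a deliberately simple toy in which a filtration parameter rescales a single hypothetical persistence lifetime; in EMPH the exact rank-invariant formula would supply the lifetime and its derivative, for which the constants below are mere stand-ins.

```python
# Toy sketch of gradient-based filtration learning without autodiff:
# a filtration parameter theta rescales an assumed persistence lifetime
# l(theta) = theta * l0, and a squared loss is minimized with the exact gradient.
l0, target = 0.8, 1.2          # assumed base lifetime and desired lifetime
theta, lr = 0.5, 0.1

for step in range(100):
    lifetime = theta * l0
    grad = 2.0 * (lifetime - target) * l0      # exact derivative, no autodiff
    theta -= lr * grad

print('learned filtration parameter:', theta, 'lifetime:', theta * l0)
```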

We present an efficient dimension-by-dimension finite-volume method that solves the adiabatic magnetohydrodynamics equations at high discretization order, using the constrained-transport approach on Cartesian grids. Results are presented up to tenth order of accuracy. The method requires only one reconstructed value per face for each computational cell. A passage through high-order point values leads to a modest growth of computational cost with increasing discretization order. At a given resolution, these high-order schemes present significantly less numerical dissipation than commonly employed lower-order approaches. Thus, results of comparable accuracy are achievable at a substantially coarser resolution, yielding overall performance gains. We also present a way to include physical dissipative terms (viscosity, magnetic diffusivity, and cooling functions) while respecting the finite-volume and constrained-transport frameworks.
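A drastically reduced one-dimensional analogue of the finite-volume workflow (cell averages, a third-order face reconstruction, flux differencing, SSP-RK3 time stepping) is sketched below for linear advection; it contains none of the constrained-transport or MHD machinery, and all numerical parameters are arbitrary.

```python
import numpy as np

# u_t + u_x = 0 on a periodic domain, evolved from cell averages.
n, cfl, t_end = 200, 0.4, 1.0
x = (np.arange(n) + 0.5) / n
dx = 1.0 / n
u = np.exp(-200 * (x - 0.5) ** 2)                  # initial cell averages (approx.)

def rhs(u):
    # third-order upwind-biased point value at the right face of each cell
    u_face = (-np.roll(u, 1) + 5.0 * u + 2.0 * np.roll(u, -1)) / 6.0
    flux = u_face                                  # advection speed a = 1
    return -(flux - np.roll(flux, 1)) / dx         # flux divergence

t, dt = 0.0, cfl * dx
while t < t_end:
    dt = min(dt, t_end - t)
    u1 = u + dt * rhs(u)                           # SSP-RK3 stages
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    u = u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))
    t += dt
```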

We propose a variational symplectic numerical method for the time integration of dynamical systems derived from the least action principle. We assume a quadratic interpolation of the state between two time steps and approximate the action over one time step by Simpson's quadrature formula. The resulting scheme is nonlinear and symplectic. Initial numerical experiments on a nonlinear pendulum show very good convergence properties.
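A minimal sketch of such a scheme for the pendulum Lagrangian L(q, qdot) = qdot^2/2 + cos(q) is given below, assuming a quadratic interpolant through the two endpoints and one internal midpoint value, Simpson's rule for the action, and numerically differentiated discrete Legendre transforms; the paper's exact formulation may differ in details.

```python
import numpy as np
from scipy.optimize import fsolve

h = 0.1                                   # time step (assumed)

def lagrangian(q, qdot):
    return 0.5 * qdot**2 + np.cos(q)

def discrete_action(q0, qm, q1):
    # derivatives of the quadratic interpolant at the Simpson nodes
    v0 = (-3*q0 + 4*qm - q1) / h
    vm = (q1 - q0) / h
    v1 = (q0 - 4*qm + 3*q1) / h
    return (h / 6.0) * (lagrangian(q0, v0) + 4*lagrangian(qm, vm) + lagrangian(q1, v1))

def partial(f, args, i, eps=1e-7):
    a1, a2 = list(args), list(args)
    a1[i] += eps; a2[i] -= eps
    return (f(*a1) - f(*a2)) / (2 * eps)

def step(q0, p0):
    def residual(z):
        qm, q1 = z
        return [partial(discrete_action, (q0, qm, q1), 1),      # dS/dqm = 0
                p0 + partial(discrete_action, (q0, qm, q1), 0)]  # p0 = -dS/dq0
    qm, q1 = fsolve(residual, [q0, q0])
    p1 = partial(discrete_action, (q0, qm, q1), 2)               # p1 = dS/dq1
    return q1, p1

q, p = 1.0, 0.0
for _ in range(200):
    q, p = step(q, p)
```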

A nonlinear complex system governed by physics acting on multiple spatial and temporal scales cannot be fully understood with a single diagnostic, as each provides only a partial view, and much information is lost during data extraction. Combining multiple diagnostics also results in imperfect projections of the system's physics. By identifying hidden inter-correlations between diagnostics, we can leverage their mutual support to fill in these gaps, but uncovering these inter-correlations analytically is too complex. We introduce a machine learning methodology to address this issue. Our multimodal approach generates super-resolution data encompassing multiple physics phenomena, capturing detailed structural evolution and responses to perturbations that were previously unobservable. The methodology addresses a critical problem in fusion plasmas: the Edge Localized Mode (ELM), a plasma instability that can severely damage reactor walls. One method to stabilize ELMs is to use resonant magnetic perturbations to trigger magnetic islands. However, the low spatial and temporal resolution of measurements limits the analysis of these magnetic islands due to their small size, rapid dynamics, and complex interactions within the plasma. With super-resolution diagnostics, we can experimentally verify theoretical models of magnetic islands for the first time, providing unprecedented insights into their role in ELM stabilization. This advancement aids in developing effective ELM suppression strategies for future fusion reactors such as ITER and has broader applications, potentially revolutionizing diagnostics in fields such as astronomy, astrophysics, and medical imaging.
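At the level of architecture, a multimodal super-resolution mapping can be sketched as two encoders whose fused latent representation is decoded to a higher-resolution field; the PyTorch sketch below uses invented input and output dimensions and says nothing about the actual diagnostics, losses, or training data of the paper.

```python
import torch
import torch.nn as nn

class MultimodalSR(nn.Module):
    """Two low-resolution diagnostic signals are encoded, fused, and decoded
    into a higher-resolution field (all dimensions are hypothetical)."""
    def __init__(self, d_a=32, d_b=16, d_latent=64, d_out=256):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(d_a, d_latent), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Linear(d_b, d_latent), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(2 * d_latent, 128), nn.ReLU(),
            nn.Linear(128, d_out))                 # high-resolution reconstruction

    def forward(self, a, b):
        z = torch.cat([self.enc_a(a), self.enc_b(b)], dim=-1)   # fuse modalities
        return self.decoder(z)

model = MultimodalSR()
out = model(torch.rand(8, 32), torch.rand(8, 16))  # batch of 8 synthetic samples
print(out.shape)                                   # torch.Size([8, 256])
```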
