
Reduced-order modelling relies on representing complex dynamical systems using simplified modes, which can be achieved through Koopman operator analysis. However, computing Koopman eigenpairs for high-dimensional observable data can be inefficient. This paper proposes using deep autoencoders, a type of deep learning technique, to perform nonlinear geometric transformations on raw data before computing Koopman eigenvectors. The encoded data produced by the deep autoencoder is diffeomorphic to a manifold of the dynamical system and has a significantly lower dimension than the raw data. To handle high-dimensional time-series data, Takens' time-delay embedding is presented as a pre-processing technique. The paper concludes by presenting examples of these techniques in action.
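To make the pre-processing step concrete, the following is a minimal Python sketch of Takens' time-delay embedding; the function name, the toy signal, and the parameter choices are illustrative, not taken from the paper.

```python
import numpy as np

def delay_embed(x, dim, tau=1):
    """Takens' time-delay embedding of a scalar time series.

    Maps x[t] to the vector (x[t], x[t + tau], ..., x[t + (dim - 1) * tau]),
    so each row of the output is one point in the reconstructed state space.
    """
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("time series too short for this (dim, tau)")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Example: embed a noisy sine wave with 3 delay coordinates.
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.01 * np.random.randn(t.size)
X = delay_embed(x, dim=3, tau=25)   # shape (1950, 3)
```

The embedded matrix `X`, rather than the raw scalar series, would then be fed to the autoencoder.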

Related Content

An autoencoder is an artificial neural network used to learn efficient data encodings in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal "noise". Alongside the reduction side, a reconstruction side is learned, where the autoencoder tries to generate, from the reduced encoding, a representation as close as possible to its original input, hence its name. Several variants of the basic model exist, with the aim of forcing the learned input representations to assume useful properties. Autoencoders are effective at solving many applied problems, from face recognition to acquiring the semantic meaning of words.
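As a concrete illustration of the encoding and reconstruction sides described above, here is a minimal dense autoencoder sketch in PyTorch; the layer sizes, class name, and training data are illustrative stand-ins.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Minimal dense autoencoder: the encoder compresses the input to a
    low-dimensional code, the decoder reconstructs the input from it."""
    def __init__(self, n_in=784, n_code=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_in, 128), nn.ReLU(),
            nn.Linear(128, n_code),        # low-dimensional code
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_code, 128), nn.ReLU(),
            nn.Linear(128, n_in),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                     # reconstruction error

x = torch.rand(64, 784)                    # stand-in for a data batch
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), x)            # target is the input itself
    loss.backward()
    opt.step()
```

Minimizing the reconstruction error forces the 32-dimensional code to retain as much of the input's structure as possible, which is exactly the dimensionality-reduction behaviour described above.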

Dynamic ranking is becoming crucial in many applications, especially with the collection of voluminous time-dependent data. One such application is sports statistics, where dynamic ranking aids in forecasting the performance of competitive teams, drawing on historical and current data. Despite its usefulness, predicting and inferring rankings pose challenges in environments that require time-dependent modeling. This paper introduces a spectral ranker called Kernel Rank Centrality, designed to rank items based on pairwise comparisons over time. The ranker operates via kernel smoothing in the Bradley-Terry model, utilizing a Markov chain model. Unlike the maximum likelihood approach, the spectral ranker is nonparametric, demands fewer model assumptions and computations, and allows for real-time ranking. We establish the asymptotic distribution of the ranker by applying an innovative group inverse technique, resulting in a uniform and precise entrywise expansion. This result allows us to devise a new inferential method for predictive inference, previously unavailable in existing approaches. Our numerical examples showcase the ranker's utility in predictive accuracy and in constructing an uncertainty measure for prediction, leveraging data from the National Basketball Association (NBA). The results underscore our method's potential compared to the gold standard in sports, the Elo rating system devised by Arpad Elo.
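The following Python sketch conveys the spirit of a kernel-smoothed spectral ranker: pairwise comparisons are weighted by a Gaussian kernel in time, a random-walk transition matrix is formed as in Rank Centrality, and its stationary distribution serves as the score vector. The weighting scheme and normalisation here are simplified assumptions, not the paper's exact construction.

```python
import numpy as np

def kernel_rank_centrality(comparisons, t0, h=5.0, n_items=None):
    """Spectral ranking from time-stamped pairwise comparisons.

    Each (winner, loser, time) triple is weighted by a Gaussian kernel
    centred at the query time t0.  A random walk moves from item i to
    item j at a rate proportional to the smoothed count of j beating i;
    its stationary distribution is the score vector."""
    if n_items is None:
        n_items = 1 + max(max(w, l) for w, l, _ in comparisons)
    W = np.zeros((n_items, n_items))
    for w, l, t in comparisons:
        W[w, l] += np.exp(-0.5 * ((t - t0) / h) ** 2)   # kernel weight

    d = max(W.sum(axis=0).max(), 1e-12)     # uniform normaliser
    P = W.T / d                             # P[i, j] ~ rate at which j beats i
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))

    vals, vecs = np.linalg.eig(P.T)         # stationary distribution of P
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return np.abs(pi) / np.abs(pi).sum()    # higher score = stronger item

# Item 0 dominated item 1 early on, but item 1 wins the recent games.
comps = [(0, 1, t) for t in range(8)] + [(1, 0, 8), (1, 0, 9)] \
      + [(1, 2, t) for t in range(10)]
print(kernel_rank_centrality(comps, t0=9.0, h=2.0))
```

With a small bandwidth `h`, the recent upsets dominate and item 1 ranks first; a large bandwidth recovers the aggregate (static) ranking.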

The classical Minkowski problem for convex bodies has deeply influenced the development of differential geometry. During the past several decades, abundant mathematical theories have been developed for studying the solutions of the Minkowski problem; however, the numerical solution of this problem has been largely left behind, with only a few methods available to achieve that goal. In this article, focusing on the two-dimensional Minkowski problem with Dirichlet boundary conditions, we introduce two solution methods, both based on operator-splitting. One of these two methods deals directly with the Dirichlet condition, while the other method uses an approximation of this Dirichlet condition. This relaxation of the Dirichlet condition makes the second method better suited than the first to treat those situations where the Minkowski and Dirichlet conditions are not compatible. Both methods are generalizations of the solution method for the canonical Monge-Amp\`{e}re equation discussed by Glowinski et al. (Journal of Scientific Computing, 79(1), 1-47, 2019); as such they take advantage of a divergence formulation of the Minkowski problem, well-suited to a mixed finite element approximation, and to the time-discretization, via an operator-splitting scheme, of an associated initial value problem. Our methodology can be easily implemented on convex domains of rather general shape (possibly with curved boundaries). The numerical experiments we performed validate both methods and show that if one uses continuous piecewise affine finite element approximations of the smooth solution of the Minkowski problem and of its three second order derivatives, these two methods provide nearly second order accuracy for the $L^2$ and $L^{\infty}$ error. The methods discussed in this article can be easily extended to address the solution of the three-dimensional Minkowski problem.
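For orientation, the two-dimensional problem can be written, in the graph setting of prescribed Gauss curvature (a standard formulation; the paper's precise statement may differ), as
$$\det D^2\psi = K\,(1+|\nabla\psi|^2)^2 \ \ \text{in } \Omega, \qquad \psi = g \ \ \text{on } \partial\Omega.$$
The divergence formulation mentioned above rests on the two-dimensional identity
$$\det D^2\psi = \tfrac{1}{2}\,\nabla\cdot\!\bigl(\operatorname{cof}(D^2\psi)\,\nabla\psi\bigr),$$
which holds because the cofactor matrix of a Hessian is row-wise divergence-free in two dimensions; writing the left-hand side this way yields a weak form that pairs naturally with mixed finite element approximations.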

The theory of Koopman operators allows one to deploy non-parametric machine learning algorithms to predict and analyze complex dynamical systems. Estimators such as principal component regression (PCR) or reduced rank regression (RRR) in kernel spaces can be shown to provably learn Koopman operators from finite empirical observations of the system's time evolution. Scaling these approaches to very long trajectories is a challenge and requires introducing suitable approximations to make computations feasible. In this paper, we boost the efficiency of different kernel-based Koopman operator estimators using random projections (sketching). We derive, implement and test the new "sketched" estimators with extensive experiments on synthetic and large-scale molecular dynamics datasets. Further, we establish non-asymptotic error bounds giving a sharp characterization of the trade-offs between statistical learning rates and computational efficiency. Our empirical and theoretical analysis shows that the proposed estimators provide a sound and efficient way to learn large-scale dynamical systems. In particular, our experiments indicate that the proposed estimators retain the same accuracy as PCR or RRR, while being much faster.
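A rough Python sketch of the idea follows: random Fourier features stand in for the kernel-space computations (the paper sketches the kernel matrices themselves, which this only mimics), followed by a PCR-style rank truncation of the feature covariance. All names and the toy system are illustrative.

```python
import numpy as np

def rff(X, n_features=200, gamma=1.0, seed=0):
    """Random Fourier features approximating a Gaussian kernel."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0, 2 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def koopman_pcr(X, Y, rank=10, n_features=200):
    """PCR-style Koopman matrix in the random feature space: truncate the
    input covariance to its top `rank` directions, then regress the
    next-step features on the current ones."""
    PhiX, PhiY = rff(X, n_features), rff(Y, n_features)
    n = len(X)
    C = PhiX.T @ PhiX / n                   # input feature covariance
    T = PhiX.T @ PhiY / n                   # time-lagged cross-covariance
    w, V = np.linalg.eigh(C)                # ascending eigenvalues
    V, w = V[:, -rank:], w[-rank:]          # keep the top `rank` directions
    return V @ np.diag(1.0 / w) @ V.T @ T   # truncated pseudo-inverse of C

# Toy trajectory of a slowly damped rotation; X, Y are consecutive snapshots.
A = np.array([[0.99, -0.10], [0.10, 0.99]])
traj = np.zeros((1000, 2))
traj[0] = [1.0, 0.0]
for t in range(999):
    traj[t + 1] = A @ traj[t]
K = koopman_pcr(traj[:-1], traj[1:], rank=20)
print(sorted(abs(np.linalg.eigvals(K)))[-3:])   # leading Koopman moduli
```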

Persistent homology is a methodology central to topological data analysis that extracts and summarizes the topological features within a dataset as a persistence diagram; it has recently gained much popularity from its myriad successful applications to many domains. However, its algebraic construction induces a metric space of persistence diagrams with a highly complex geometry. In this paper, we prove convergence of the $k$-means clustering algorithm on persistence diagram space and establish theoretical properties of the solution to the optimization problem in the Karush--Kuhn--Tucker framework. Additionally, we perform numerical experiments on various representations of persistent homology, including embeddings of persistence diagrams as well as diagrams themselves and their generalizations as persistence measures; we find that clustering directly on persistence diagrams and persistence measures outperforms clustering on their vectorized representations.
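As a point of reference for the vectorized baselines mentioned above, here is a crude persistence-image vectorization followed by standard k-means in Python. The diagrams below are synthetic stand-ins, and this is explicitly the vectorized approach, not the diagram-space k-means the paper analyzes.

```python
import numpy as np
from sklearn.cluster import KMeans

def persistence_image(diagram, res=10, sigma=0.2, t_max=2.0):
    """Crude persistence-image vectorization: one persistence-weighted
    Gaussian bump per diagram point, in (birth, persistence) coordinates,
    rasterised on a res x res grid."""
    grid = np.linspace(0, t_max, res)
    bx, py = np.meshgrid(grid, grid)
    img = np.zeros((res, res))
    for b, d in diagram:
        p = d - b                           # persistence of the feature
        img += p * np.exp(-((bx - b) ** 2 + (py - p) ** 2) / (2 * sigma ** 2))
    return img.ravel()

# Synthetic stand-in diagrams: random (birth, death) pairs with death > birth.
rng = np.random.default_rng(0)
diagrams = []
for _ in range(30):
    births = rng.random(20)
    diagrams.append(np.column_stack([births, births + rng.random(20)]))

features = np.array([persistence_image(d) for d in diagrams])
labels = KMeans(n_clusters=3, n_init=10).fit_predict(features)
```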

Bayesian inference has widely acknowledged advantages in many problems, but it can also be unreliable if the model is misspecified. Bayesian modular inference is concerned with inference in complex models which have been specified through a collection of coupled sub-models. The sub-models are called modules in the literature, and they often arise from modeling different data sources, or from combining domain knowledge from different disciplines. When some modules are misspecified, cutting feedback is a widely used Bayesian modular inference method which ensures that information from suspect model components is not used in making inferences about parameters in correctly specified modules. However, in general settings it is difficult to decide when this ``cut posterior'' is preferable to the exact posterior. When misspecification is not severe, cutting feedback may greatly increase the uncertainty in Bayesian posterior inference without substantially reducing estimation bias. This motivates semi-modular inference methods, which avoid the binary cut of cutting feedback approaches. In this work, using a local model misspecification framework, we provide the first precise formulation of the bias-variance trade-off that has motivated the literature on semi-modular inference. We then implement a mixture-based semi-modular inference approach, demonstrating theoretically that it delivers inferences that are more accurate, in terms of a user-defined loss function, than if either the cut or full posterior were used by themselves. The new method is demonstrated in a number of applications.
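A toy conjugate-Gaussian sketch may clarify the trade-off: tempering the suspect module's likelihood by a power eta in [0, 1] interpolates between the cut posterior (eta = 0) and the full posterior (eta = 1). This power-posterior construction is one common form of semi-modular inference, not the paper's mixture-based method; the data-generating values below are invented for illustration.

```python
import numpy as np

def semi_modular_posterior(y1, y2, s1=1.0, s2=1.0, eta=0.5):
    """Tempered ('semi-modular') Gaussian posterior for a shared mean phi.

    y1 comes from a trusted module, y2 from a suspect one whose likelihood
    is raised to the power eta in [0, 1]: eta=0 recovers the cut posterior
    (suspect data ignored), eta=1 the full posterior.  Flat prior assumed."""
    w1, w2 = len(y1) / s1**2, eta * len(y2) / s2**2   # precision weights
    mean = (w1 * np.mean(y1) + w2 * np.mean(y2)) / (w1 + w2)
    return mean, 1.0 / (w1 + w2)                      # posterior mean, variance

rng = np.random.default_rng(1)
y1 = rng.normal(0.0, 1.0, 50)      # correctly specified module
y2 = rng.normal(0.4, 1.0, 500)     # biased module: local misspecification
for eta in (0.0, 0.25, 1.0):
    m, v = semi_modular_posterior(y1, y2, eta=eta)
    print(f"eta={eta:4.2f}: mean={m:+.3f}, sd={np.sqrt(v):.3f}")
```

Running it shows the trade-off directly: a larger eta shrinks the posterior standard deviation (more data used) but drags the posterior mean toward the biased module, which is exactly the tension the bias-variance formulation above makes precise.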

The Koopman operator has become an essential tool for data-driven analysis, prediction and control of complex systems, the main reason being the enormous potential of identifying linear function-space representations of nonlinear dynamics from measurements. Until now, the situation where, for large-scale systems, we (i) only have access to partial observations (i.e., measurements, as is very common for experimental data) or (ii) deliberately perform coarse graining (for efficiency reasons) has not been treated to its full extent. In this paper, we address the pitfall associated with this situation: the classical EDMD algorithm does not automatically provide a Koopman operator approximation for the underlying system if we do not carefully select the number of observables. Moreover, we show that symmetries in the system dynamics can be carried over to the Koopman operator, which allows us to massively increase the model efficiency. We also briefly draw a connection to domain decomposition techniques for partial differential equations and present numerical evidence using the Kuramoto--Sivashinsky equation.
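For readers unfamiliar with EDMD, a minimal implementation makes the pitfall easy to probe: change the observable dictionary below and the spectrum of K changes accordingly. The dictionary choice and toy system are illustrative.

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Extended DMD: least-squares Koopman matrix K with Psi(X) K ~= Psi(Y),
    for a user-chosen dictionary of observables.  As noted in the text,
    with partial observations the dictionary must be chosen carefully for
    K to approximate the Koopman operator of the underlying system."""
    PsiX, PsiY = dictionary(X), dictionary(Y)
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)
    return K

def monomials(X):
    """Monomials up to degree 2 in two state variables, as a toy dictionary."""
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x, y, x * y, x**2, y**2])

# Toy linear system x_{t+1} = A x_t; K's spectrum contains eig(A) = {0.9, 0.8}.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
X = np.random.randn(500, 2)
Y = X @ A.T
K = edmd(X, Y, monomials)
print(np.sort(np.linalg.eigvals(K).real))
```

Because the degree-2 monomial dictionary is closed under this linear dynamics, EDMD is exact here; for nonlinear or partially observed systems that closure fails, which is precisely the situation the paper analyzes.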

Neural operators have emerged as a powerful tool for learning the mapping between infinite-dimensional parameter and solution spaces of partial differential equations (PDEs). In this work, we focus on multiscale PDEs that have important applications such as reservoir modeling and turbulence prediction. We demonstrate that for such PDEs, the spectral bias towards low-frequency components presents a significant challenge for existing neural operators. To address this challenge, we propose a hierarchical attention neural operator (HANO) inspired by the hierarchical matrix approach. HANO features a scale-adaptive interaction range and self-attentions over a hierarchy of levels, enabling nested feature computation with controllable linear cost and encoding/decoding of the multiscale solution space. We also incorporate an empirical $H^1$ loss function to enhance the learning of high-frequency components. Our numerical experiments demonstrate that HANO outperforms state-of-the-art (SOTA) methods for representative multiscale problems.
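The empirical $H^1$ loss is straightforward to sketch with finite differences in PyTorch; the exact normalisation used in the paper may differ (e.g., a relative loss), so treat this as an illustration of the idea only.

```python
import torch

def h1_loss(pred, target):
    """Empirical H^1-type loss on a uniform 2-D grid: the L^2 misfit plus
    the misfit of first differences, which penalises high-frequency error
    more strongly than plain MSE does.
    pred, target: tensors of shape (batch, nx, ny)."""
    l2 = torch.mean((pred - target) ** 2)
    dx = lambda u: u[:, 1:, :] - u[:, :-1, :]    # forward difference in x
    dy = lambda u: u[:, :, 1:] - u[:, :, :-1]    # forward difference in y
    grad = torch.mean((dx(pred) - dx(target)) ** 2) \
         + torch.mean((dy(pred) - dy(target)) ** 2)
    return l2 + grad

u = torch.rand(8, 64, 64, requires_grad=True)    # stand-in predictions
v = torch.rand(8, 64, 64)                        # stand-in targets
loss = h1_loss(u, v)
loss.backward()
```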

Much like the classical Fisher linear discriminant analysis (LDA), the recently proposed Wasserstein discriminant analysis (WDA) is a linear dimensionality reduction method that seeks a projection matrix to maximize the dispersion between different data classes and minimize the dispersion within the same data class via a bi-level optimization. In contrast to LDA, WDA can account for both global and local interconnections between data classes by using the underlying principles of optimal transport. In this paper, a bi-level nonlinear eigenvector algorithm (WDA-nepv) is presented to fully exploit the structure of the bi-level optimization of WDA. The inner level of WDA-nepv, which computes the optimal transport matrices, is formulated as an eigenvector-dependent nonlinear eigenvalue problem (NEPv), while the outer level, which performs trace-ratio optimization, is formulated as another NEPv. Both NEPvs can be computed efficiently under the self-consistent field (SCF) framework. WDA-nepv is derivative-free and surrogate-model-free when compared with existing algorithms. Convergence analysis of the proposed WDA-nepv justifies the utilization of the SCF for solving the bi-level optimization of WDA. Numerical experiments with synthetic and real-life datasets demonstrate the classification accuracy and scalability of WDA-nepv.
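A minimal SCF iteration for the outer-level trace-ratio NEPv looks as follows; the inner, transport-related NEPv is omitted, and the matrices are random stand-ins rather than WDA scatter matrices.

```python
import numpy as np

def trace_ratio_scf(A, B, k, iters=100, tol=1e-10):
    """Self-consistent field iteration for the trace-ratio problem
        max_{V^T V = I} tr(V^T A V) / tr(V^T B V):
    at each step, V is set to the top-k eigenvectors of A - rho*B,
    where rho is the current value of the ratio."""
    n = A.shape[0]
    V = np.linalg.qr(np.random.randn(n, k))[0]  # random orthonormal start
    rho = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
    for _ in range(iters):
        w, U = np.linalg.eigh(A - rho * B)
        V = U[:, -k:]                           # top-k eigenvectors
        rho_new = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
        if abs(rho_new - rho) < tol:
            break
        rho = rho_new
    return V, rho

# Toy example: random symmetric A, positive definite B.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20)); A = (M + M.T) / 2
N = rng.standard_normal((20, 20)); B = N @ N.T + 20 * np.eye(20)
V, rho = trace_ratio_scf(A, B, k=3)
```

Each step only requires one symmetric eigendecomposition, which is what makes the SCF framework attractive for this class of eigenvector-dependent problems.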

Modelling in biology must adapt to increasingly complex and massive data. The efficiency of the inference algorithms used to estimate model parameters therefore comes into question. Many of these algorithms are based on stochastic optimization processes which waste a significant part of the computation time due to their rejection sampling approaches. We introduce the Fixed Landscape Inference MethOd (flimo), a new likelihood-free inference method for continuous state-space stochastic models. It applies deterministic gradient-based optimization algorithms to obtain a point estimate of the parameters, minimizing the difference between the data and some simulations according to some prescribed summary statistics. In this sense, it is analogous to Approximate Bayesian Computation (ABC). Like ABC, it can also provide an approximation of the distribution of the parameters. Three applications are proposed: a standard theoretical example, namely the inference of the parameters of g-and-k distributions; a population genetics problem, not as simple as it seems, namely the inference of a selective value from time series in a Wright-Fisher model; and simulations from a Ricker model, representing chaotic population dynamics. In the first two applications, the results show a drastic reduction of the computational time needed for the inference phase compared to other methods, with equivalent accuracy. Even when likelihood-based methods are applicable, the simplicity and efficiency of flimo make it a compelling alternative. Implementations in Julia and in R are available on //metabarcoding.org/flimo. To run flimo, the user must simply be able to simulate data according to the chosen model.
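The core mechanism, freezing the random draws so that the simulated objective becomes a deterministic function of the parameters, can be sketched in a few lines for the g-and-k example. Nelder-Mead is used here to keep the sketch dependency-light, whereas the method as described applies gradient-based optimizers; the octile summary statistics are a common choice for g-and-k, not necessarily the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def gk_sample(theta, z):
    """g-and-k samples from FIXED standard-normal draws z, so the
    simulator is a deterministic function of theta (the 'fixed landscape')."""
    a, b, g, k = theta
    c = 0.8
    return a + b * (1 + c * np.tanh(g * z / 2)) * (1 + z**2) ** k * z

def objective(theta, z, s_obs):
    """Squared distance between observed and simulated octiles."""
    s_sim = np.quantile(gk_sample(theta, z), np.arange(1, 8) / 8)
    return np.sum((s_sim - s_obs) ** 2)

rng = np.random.default_rng(42)
x_obs = gk_sample([3.0, 1.0, 2.0, 0.5], rng.standard_normal(5000))
s_obs = np.quantile(x_obs, np.arange(1, 8) / 8)

z = rng.standard_normal(5000)          # draws frozen once, then reused
res = minimize(objective, x0=[1.0, 1.0, 1.0, 0.1], args=(z, s_obs),
               method="Nelder-Mead")
print(res.x)                           # point estimate of (a, b, g, k)
```

Because `z` never changes, every evaluation of `objective` for the same `theta` returns the same value, so any deterministic optimizer can be applied directly, with no rejection sampling.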

With the development of big data technology, data analysis has become increasingly important. Traditional clustering algorithms such as K-means are highly sensitive to the initial centroid selection and perform poorly on non-convex datasets. In this paper, we address these problems by proposing a data-driven Bregman divergence parameter optimization clustering algorithm (DBGSA), which incorporates the Universal Gravitational Algorithm to bring similar points closer together in the dataset. We construct a gravitational coefficient equation with a special property that gradually reduces the influence factor as the iteration progresses. Furthermore, we introduce the Bregman divergence generalized power mean information loss minimization to identify cluster centers and build a hyperparameter identification optimization model, which effectively solves the problems of manual adjustment and uncertainty in the improved dataset. Extensive experiments are conducted on four simulated datasets and six real datasets. The results demonstrate that DBGSA significantly improves the accuracy of various clustering algorithms by an average of 63.8\% compared to other similar approaches such as enhanced clustering algorithms and improved datasets. Additionally, a three-dimensional grid search was established to compare the effects of different parameter values within threshold conditions, and it was discovered that the parameter set provided by our model is optimal. This finding provides strong evidence of the high accuracy and robustness of the algorithm.
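An illustrative stand-in for the gravitational pre-processing step: points are pulled toward their local neighbourhood centroid with a coefficient that decays across iterations, after which a standard clusterer runs on the contracted data. The update rule and coefficient law below are assumptions for illustration, not the paper's exact equations.

```python
import numpy as np
from sklearn.cluster import KMeans

def gravitational_shrink(X, n_iters=10, k=10, G0=0.1):
    """Pull each point toward the centroid of its k nearest neighbours,
    with a 'gravitational' coefficient that decays over iterations, so
    similar points drift together before clustering."""
    X = X.copy()
    for it in range(n_iters):
        G = G0 * (1 - it / n_iters)                 # decaying influence factor
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        nn = np.argsort(d2, axis=1)[:, 1:k + 1]     # k nearest neighbours
        X += G * (X[nn].mean(axis=1) - X)           # pull toward local mass
    return X

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.5, (100, 2)) for c in ([0, 0], [3, 3])])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(gravitational_shrink(X))
```

Contracting the data first makes the subsequent K-means far less sensitive to its initial centroids, which is the motivation stated above.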
