
Interpolation-based methods are well-established and effective approaches for the efficient generation of accurate reduced-order surrogate models. Common challenges for such methods are the automatic selection of good or even optimal interpolation points and the appropriate size of the reduced-order model. An approach that addresses the first problem for linear, unstructured systems is the Iterative Rational Krylov Algorithm (IRKA), which computes optimal interpolation points through iterative updates by solving linear eigenvalue problems. However, in the case of preserving internal system structures, optimal interpolation points are unknown, and heuristics based on nonlinear eigenvalue problems result in numbers of potential interpolation points that typically exceed the reasonable size of reduced-order systems. In our work, we propose a projection-based iterative interpolation method inspired by IRKA for generally structured systems to adaptively compute near-optimal interpolation points as well as an appropriate size for the reduced-order system. Additionally, the iterative updates of the interpolation points can be chosen such that the reduced-order model provides an accurate approximation in specified frequency ranges of interest. For such applications, our new approach outperforms the established methods in terms of accuracy and computational effort. We show this in numerical examples with different structures.
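For readers unfamiliar with the fixed-point iteration underlying IRKA, the following minimal sketch (ours, for an unstructured SISO system, not the authors' structured variant) shows how the interpolation points are updated as mirror images of the reduced poles; all names and tolerances are illustrative assumptions.

```python
import numpy as np

def irka_siso(A, b, c, r, tol=1e-4, max_iter=50, seed=0):
    """Minimal IRKA sketch for a SISO system (A, b, c): iterate the
    interpolation points as mirror images of the reduced poles until
    they stagnate (the H2-optimality fixed point). Complex conjugate
    shift pairs are kept in complex arithmetic for brevity; a production
    code would realify the projection bases."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    I = np.eye(n)
    sigma = 1.0 + np.abs(rng.standard_normal(r))    # initial shifts in the right half-plane
    for _ in range(max_iter):
        # Rational Krylov bases spanning (sigma_i I - A)^{-1} b and the dual side.
        V = np.column_stack([np.linalg.solve(s * I - A, b) for s in sigma])
        W = np.column_stack([np.linalg.solve((s * I - A).T, c) for s in sigma])
        W = W @ np.linalg.inv(V.T @ W)              # enforce W^T V = I (oblique projection)
        Ar = W.T @ A @ V
        sigma_new = -np.linalg.eigvals(Ar)          # mirrored reduced poles as new shifts
        # Rough stagnation check (sorting matches shifts up to permutation).
        if np.max(np.abs(np.sort_complex(sigma_new)
                         - np.sort_complex(np.asarray(sigma, complex)))) < tol:
            sigma = sigma_new
            break
        sigma = sigma_new
    return Ar, W.T @ b, V.T @ c                     # reduced (A_r, b_r, c_r)
```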

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM-SIGSOFT and IEEE-TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its attendees come from diverse backgrounds, including researchers, academics, engineers, and industrial professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community further opportunities to advance the foundations of modeling and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
July 5, 2023

Recommendation algorithms play an important role in recommender systems (RS): they predict users' interests in, and preferences for, given items based on the available information. Recently, a recommendation algorithm based on graph Laplacian regularization was proposed that treats the prediction problem of the recommender system as the reconstruction of a graph signal from a small number of samples under a fixed graph model. Such a technique takes into account the information of both the known and the unknown labeled samples, thereby obtaining good prediction accuracy. However, when the data size is large, solving the reconstruction model is computationally expensive even with an approximate strategy. In this paper, we propose an equivalent reconstruction model that can be solved exactly with extremely low computational cost, and we derive from it a final prediction algorithm. Our experiments show that the proposed method significantly reduces the computational cost while maintaining good prediction accuracy.
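For context, a minimal sketch of the generic graph-Laplacian-regularized reconstruction the abstract refers to (not the paper's equivalent fast model); the regularization weight `gamma` and all names are illustrative assumptions.

```python
import numpy as np

def reconstruct_graph_signal(L, sample_idx, y, gamma=0.1):
    """Graph-Laplacian-regularized reconstruction: minimize
    ||x[S] - y||^2 + gamma * x^T L x over the full signal x, whose
    stationarity condition is the linear system
    (diag(m) + gamma * L) x = m * y_ext, with m the 0/1 sampling mask."""
    n = L.shape[0]
    m = np.zeros(n); m[sample_idx] = 1.0
    rhs = np.zeros(n); rhs[sample_idx] = y
    return np.linalg.solve(np.diag(m) + gamma * L, rhs)

# Path graph on 6 nodes, observing the endpoint values only:
A = np.diag(np.ones(5), 1); A = A + A.T                 # adjacency matrix
L = np.diag(A.sum(axis=1)) - A                          # combinatorial Laplacian
print(reconstruct_graph_signal(L, [0, 5], np.array([1.0, -1.0])))
```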

A novel method for noise reduction in the setting of curve time series with error contamination is proposed, based on extending the framework of functional principal component analysis (FPCA). We employ the underlying, finite-dimensional dynamics of the functional time series to separate the serially dependent dynamical part of the observed curves from the noise. Upon identifying the subspaces of the signal and idiosyncratic components, we construct a projection of the observed curve time series along the noise subspace, resulting in an estimate of the underlying denoised curves. This projection is optimal in the sense that it minimizes the mean integrated squared error. By applying our method to simulated and real data, we show that the denoising estimator is consistent and outperforms existing denoising techniques. Furthermore, we show that it can be used as a pre-processing step to improve forecasting.
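One standard construction in this spirit, assuming discretized curves and a known signal dimension `d`, uses the lag-1 autocovariance (which i.i.d. noise does not contaminate) to estimate the signal subspace; this is a hedged sketch, not the authors' exact estimator.

```python
import numpy as np

def denoise_curves(X, d):
    """Separate serially dependent signal from i.i.d. noise in a curve time
    series X (T x p, rows = discretized curves) by estimating a d-dimensional
    signal subspace from the lag-1 autocovariance and projecting each curve
    onto it."""
    Xc = X - X.mean(axis=0)
    C1 = Xc[:-1].T @ Xc[1:] / (X.shape[0] - 1)   # lag-1 autocovariance (p x p)
    _, eigvecs = np.linalg.eigh(C1 @ C1.T)       # symmetrized, noise-free target
    B = eigvecs[:, -d:]                          # leading signal directions
    return X.mean(axis=0) + Xc @ B @ B.T         # denoised curves
```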

The Ridgeless minimum $\ell_2$-norm interpolator in overparametrized linear regression has attracted considerable attention in recent years. While it seems to defy the conventional wisdom that overfitting leads to poor prediction, recent research reveals that its norm minimizing property induces an "implicit regularization" that helps prediction in spite of interpolation. This renders the Ridgeless interpolator a theoretically tractable proxy that offers useful insights into the mechanisms of modern machine learning methods. This paper takes a different perspective that aims at understanding the precise stochastic behavior of the Ridgeless interpolator as a statistical estimator. Specifically, we characterize the distribution of the Ridgeless interpolator in high dimensions, in terms of a Ridge estimator in an associated Gaussian sequence model with positive regularization, which plays the role of the prescribed implicit regularization in the context of prediction risk. Our distributional characterizations hold for general random designs and extend uniformly to positively regularized Ridge estimators. As a demonstration of the analytic power of these characterizations, we derive approximate formulae for a general class of weighted $\ell_q$ risks for Ridge(less) estimators that were previously available only for $\ell_2$. Our theory also provides certain further conceptual reconciliation with the conventional wisdom: given any data covariance, a certain amount of regularization in Ridge regression remains beneficial for "most" signals across various statistical tasks including prediction, estimation and inference, as long as the noise level is non-trivial. Surprisingly, optimal tuning can be achieved simultaneously for all the designated statistical tasks by a single generalized or $k$-fold cross-validation scheme, despite being designed specifically for tuning prediction risk.
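As a quick reference, the Ridgeless interpolator and its positively regularized Ridge counterpart can be computed as follows (a minimal sketch; the distributional theory above is of course not reproduced here).

```python
import numpy as np

def ridgeless(X, y):
    """Minimum l2-norm interpolator beta = X^+ y; interpolates exactly
    when X (n x p, p > n) has full row rank."""
    return np.linalg.pinv(X) @ y

def ridge(X, y, lam):
    """Ridge estimator; the ridgeless interpolator is its lam -> 0+ limit."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Overparametrized example: the ridgeless fit interpolates the data exactly.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((50, 200)), rng.standard_normal(50)
assert np.allclose(X @ ridgeless(X, y), y)
```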

Probability density estimation is a core problem of statistics and signal processing. Moment methods are an important means of density estimation, but they generally depend strongly on the choice of feasible functions, which severely affects their performance. In this paper, we propose a non-classical parametrization for density estimation using sample moments that does not require choosing such functions. The parametrization is induced by the squared Hellinger distance, and its solution, which we prove exists and is unique subject to a simple prior that does not depend on the data, can be obtained by convex optimization. We establish statistical properties of the density estimator, together with an asymptotic upper bound on the error of the estimator by power moments. Applications of the proposed density estimator to signal processing tasks are given, and simulation results validate its performance in comparison to several prevailing methods. To the best of our knowledge, the proposed estimator is the first in the literature for which the power moments up to an arbitrary even order exactly match the sample moments, while the true density is not assumed to fall within specific function classes.
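As a loose illustration of moment-based density estimation (a classical maximum-entropy-style exponential family fitted on a grid, not the paper's Hellinger-induced parametrization), consider the sketch below; the grid limits, moment order, and optimizer choice are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_moment_density(samples, order=4, grid=np.linspace(-6.0, 6.0, 601)):
    """Moment-matched exponential family on a grid (illustrative stand-in):
    find lambda so that p(x) ~ exp(sum_k lambda_k x^k) matches the sample
    power moments up to `order` (`order` should be even)."""
    mom = np.array([np.mean(samples ** k) for k in range(1, order + 1)])
    V = np.column_stack([grid ** k for k in range(1, order + 1)])
    dx = grid[1] - grid[0]

    def dual(lam):
        # Convex dual: log-partition minus <lambda, sample moments>,
        # stabilized by subtracting the max exponent before exponentiating.
        u = V @ lam
        m = u.max()
        return m + np.log(np.exp(u - m).sum() * dx) - lam @ mom

    lam0 = np.zeros(order)
    lam0[-1] = -0.1                      # keep the tails integrable at the start
    res = minimize(dual, lam0, method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-10, "fatol": 1e-12})
    u = V @ res.x
    p = np.exp(u - u.max())
    return grid, p / (p.sum() * dx)      # normalized density values on the grid
```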

We present a mass lumping approach based on an isogeometric Petrov-Galerkin method that preserves higher-order spatial accuracy in explicit dynamics calculations irrespective of the polynomial degree of the spline approximation. To discretize the test function space, our method uses an approximate dual basis, whose functions are smooth, have local support and satisfy approximate bi-orthogonality with respect to a trial space of B-splines. The resulting mass matrix is "close" to the identity matrix. Specifically, a lumped version of this mass matrix preserves all relevant polynomials when utilized in a Galerkin projection. Consequently, the mass matrix can be lumped (via row-sum lumping) without compromising spatial accuracy in explicit dynamics calculations. We address the imposition of Dirichlet boundary conditions and the preservation of approximate bi-orthogonality under geometric mappings. In addition, we establish a link between the exact dual and approximate dual basis functions via an iterative algorithm that improves the approximate dual basis towards exact bi-orthogonality. We demonstrate the performance of our higher-order accurate mass lumping approach via convergence studies and spectral analyses of discretized beam, plate and shell models.
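Row-sum lumping itself is a one-liner; the abstract's contribution is that, with the approximate dual test basis, this step no longer costs spatial accuracy. A minimal sketch:

```python
import numpy as np

def row_sum_lump(M):
    """Row-sum lumping: replace the consistent mass matrix by the diagonal
    matrix of its row sums."""
    return np.diag(np.asarray(M).sum(axis=1))

# In an explicit time step, the lumped mass solve becomes a cheap division:
#   a = f / M_lumped.diagonal()        instead of   a = solve(M, f)
```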

In this paper, a methodology for fine scale modeling of large scale structures is proposed, which combines the variational multiscale method, domain decomposition and model order reduction. The influence of the fine scale on the coarse scale is modelled by the use of an additive split of the displacement field, addressing applications without a clear scale separation. Local reduced spaces are constructed by solving an oversampling problem with random boundary conditions. Herein, we inform the boundary conditions by a global reduced problem and compare our approach using physically meaningful correlated samples with existing approaches using uncorrelated samples. The local spaces are designed such that the local contribution of each subdomain can be coupled in a conforming way, which also preserves the sparsity pattern of standard finite element assembly procedures. Several numerical experiments show the accuracy and efficiency of the method, as well as its potential to reduce the size of the local spaces and the number of training samples compared to the uncorrelated sampling.
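A hedged sketch of local basis construction via an oversampling problem with random boundary data follows (the paper's correlated variant would instead inform the boundary values from a global reduced solve; the matrix names and truncation rule are assumptions).

```python
import numpy as np

def local_reduced_basis(K_local, boundary_dofs, n_samples=50, tol=1e-6, seed=0):
    """Local basis via an oversampling problem: solve the homogeneous local
    problem for random boundary data and compress the interior responses
    with POD/SVD. Uncorrelated Gaussian samples are used here."""
    rng = np.random.default_rng(seed)
    dofs = np.arange(K_local.shape[0])
    interior = np.setdiff1d(dofs, boundary_dofs)
    Kii = K_local[np.ix_(interior, interior)]
    Kib = K_local[np.ix_(interior, boundary_dofs)]
    G = rng.standard_normal((len(boundary_dofs), n_samples))  # random boundary data
    snapshots = -np.linalg.solve(Kii, Kib @ G)                # interior responses
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    r = max(int(np.sum(s > tol * s[0])), 1)                   # tolerance-based truncation
    return U[:, :r]                                           # POD basis on interior dofs
```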

Entropic optimal transport (EOT) presents an effective and computationally viable alternative to unregularized optimal transport (OT), offering diverse applications for large-scale data analysis. In this work, we derive novel statistical bounds for empirical plug-in estimators of the EOT cost and show that their statistical performance in the entropy regularization parameter $\epsilon$ and the sample size $n$ only depends on the simpler of the two probability measures. For instance, under sufficiently smooth costs this yields the parametric rate $n^{-1/2}$ with factor $\epsilon^{-d/2}$, where $d$ is the minimum dimension of the two population measures. This confirms that empirical EOT also adheres to the lower complexity adaptation principle, a hallmark feature only recently identified for unregularized OT. As a consequence of our theory, we show that the empirical entropic Gromov-Wasserstein distance and its unregularized version for measures on Euclidean spaces also obey this principle. Additionally, we comment on computational aspects and complement our findings with Monte Carlo simulations. Our techniques employ empirical process theory and rely on a dual formulation of EOT over a single function class. Crucial to our analysis is the observation that the entropic cost-transformation of a function class does not increase its uniform metric entropy by much.
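The plug-in estimators studied here are computed from empirical measures, e.g. by Sinkhorn iterations; the following sketch (ours) returns the entropic objective for discrete measures, with the regularization `eps` and iteration count as assumptions.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps, n_iter=500):
    """Sinkhorn iterations for entropic OT between discrete measures mu, nu
    with cost matrix C. Returns the entropic objective
    <P, C> + eps * KL(P | mu x nu) and the optimal plan P."""
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    P = u[:, None] * K * v[None, :]
    kl = np.sum(P * np.log(P / (mu[:, None] * nu[None, :])))
    return np.sum(P * C) + eps * kl, P

# Plug-in estimator from two empirical point clouds in R^2:
rng = np.random.default_rng(0)
x = rng.standard_normal((100, 2))
y = rng.standard_normal((150, 2)) + 1.0
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)   # squared Euclidean cost
val, _ = sinkhorn(np.full(100, 1 / 100), np.full(150, 1 / 150), C, eps=0.5)
```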

The majority of existing large 3D shape datasets contain meshes that lend themselves extremely well to visual applications such as rendering, yet tend to be topologically invalid (i.e., they contain non-manifold edges and vertices, disconnected components, and self-intersections). It is therefore no surprise that state-of-the-art studies in shape understanding do not explicitly use this 3D information. At the same time, triangular meshes remain the dominant shape representation for many downstream tasks, and their connectivity remains a relatively untapped source of potential for more profound shape reasoning. In this paper, we introduce ROAR, an iterative geometry/topology evolution approach that reconstructs 2-manifold triangular meshes from arbitrary 3D shape representations and is highly suitable for large existing in-the-wild datasets. ROAR leverages the visual prior that large datasets exhibit by evolving the geometry of the mesh via a 2D render loss and a novel 3D projection loss, the Planar Projection. After each geometry iteration, our system performs topological corrections: self-intersections are reduced following a geometrically motivated attenuation term, and resolution is added to required regions using a novel face scoring function. These steps alternate until convergence is achieved, yielding a high-quality manifold mesh. We evaluate ROAR on the notoriously messy yet popular ShapeNet dataset and present ShapeROAR, a topologically valid yet still geometrically accurate version of ShapeNet. We compare our results to state-of-the-art reconstruction methods and demonstrate superior shape faithfulness, topological correctness, and triangulation quality. In addition, we demonstrate reconstructing a mesh from neural signed distance functions (SDFs), achieving comparable Chamfer distance with far fewer SDF sampling operations than the commonly used Marching Cubes approach.
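As a small illustration of the kind of topological invalidity mentioned above, one can count non-manifold edges (edges shared by more than two triangles) directly from a face list; a minimal sketch, unrelated to ROAR's internals:

```python
from collections import Counter

def nonmanifold_edges(faces):
    """Return the edges shared by more than two triangles, one of the
    topological defects (non-manifold edges) that makes a mesh invalid.
    `faces` is an iterable of vertex-index triples."""
    edge_count = Counter()
    for f in faces:
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            edge_count[tuple(sorted((a, b)))] += 1
    return [e for e, c in edge_count.items() if c > 2]
```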

In this paper, we describe and analyze the spectral properties of a number of exact block preconditioners for a class of double saddle point problems. Among these, we consider an inexact version of a block triangular preconditioner that provides extremely fast convergence of the FGMRES method. We develop a spectral analysis of the preconditioned matrix, showing that the complex eigenvalues lie in a circle of center (1,0) and radius 1, while the real eigenvalues are described in terms of the roots of a third-order polynomial with real coefficients. Numerical examples illustrate the efficiency of inexact versions of the proposed preconditioners and verify the theoretical bounds.
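A hedged sketch of an exact block lower-triangular preconditioner of this type, applied through SciPy (whose `gmres` stands in for FGMRES here, which is acceptable because the block solves below are exact, making the preconditioner a fixed linear operator); the block sizes and matrices are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def block_tri_prec(A, B, C):
    """Exact block lower-triangular preconditioner for the double saddle
    point matrix K = [[A, B^T, 0], [B, 0, C^T], [0, C, 0]]. The Schur
    complements S1 = B A^{-1} B^T and S2 = C S1^{-1} C^T are formed
    explicitly for clarity; an inexact version would approximate them."""
    n, m, p = A.shape[0], B.shape[0], C.shape[0]
    S1 = B @ np.linalg.solve(A, B.T)
    S2 = C @ np.linalg.solve(S1, C.T)

    def apply(r):
        # Forward substitution through P = [[A, 0, 0], [B, -S1, 0], [0, C, -S2]].
        x1 = np.linalg.solve(A, r[:n])
        x2 = np.linalg.solve(S1, B @ x1 - r[n:n + m])
        x3 = np.linalg.solve(S2, C @ x2 - r[n + m:])
        return np.concatenate([x1, x2, x3])

    return LinearOperator((n + m + p, n + m + p), matvec=apply)

# Small synthetic double saddle point system:
rng = np.random.default_rng(0)
n, m, p = 20, 10, 5
A = np.eye(n) + 0.1 * rng.standard_normal((n, n)); A = A @ A.T   # SPD block
B = rng.standard_normal((m, n))
C = rng.standard_normal((p, m))
K = np.block([[A, B.T, np.zeros((n, p))],
              [B, np.zeros((m, m)), C.T],
              [np.zeros((p, n)), C, np.zeros((p, p))]])
z, info = gmres(K, rng.standard_normal(n + m + p), M=block_tri_prec(A, B, C), atol=1e-10)
```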

The common cause principle for two random variables $A$ and $B$ is examined in the case of causal insufficiency, when their common cause $C$ is known to exist but only the joint probability of $A$ and $B$ is observed, so that $C$ cannot be uniquely identified (the latent confounder problem). We show that the generalized maximum likelihood method can be applied to this situation and allows the identification of a $C$ that is consistent with the common cause principle; it relates closely to the maximum entropy principle. Investigation of two symmetric binary variables reveals a non-analytic behavior of the conditional probabilities reminiscent of a second-order phase transition, which occurs during the transition from correlation to anti-correlation in the observed probability distribution. The relation between the generalized likelihood approach and alternative methods, such as predictive likelihood and minimum common-cause entropy, is discussed. The consideration of a common cause for three observed variables (and one hidden cause) uncovers causal structures that defy representation through directed acyclic graphs with the Markov condition.
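As an illustrative stand-in for the identification problem (standard EM for a two-state latent class model, not the paper's generalized maximum likelihood method), the sketch below recovers some p(C), p(A|C), p(B|C) consistent with an observed joint p(A,B).

```python
import numpy as np

def fit_common_cause(p_ab, n_iter=1000, seed=0):
    """EM for a two-state latent class model: fit p(c), p(a|c), p(b|c)
    so that sum_c p(c) p(a|c) p(b|c) approximates the observed joint
    p_ab (a 2x2 array of probabilities summing to 1)."""
    rng = np.random.default_rng(seed)
    pc = np.array([0.5, 0.5])
    pa_c = rng.uniform(0.2, 0.8, (2, 2)); pa_c /= pa_c.sum(axis=0)
    pb_c = rng.uniform(0.2, 0.8, (2, 2)); pb_c /= pb_c.sum(axis=0)
    for _ in range(n_iter):
        # E-step: posterior over the hidden cause c given (a, b).
        joint = pc[None, None, :] * pa_c[:, None, :] * pb_c[None, :, :]
        q = joint / joint.sum(axis=2, keepdims=True)
        # M-step: reweight by the observed joint distribution p_ab.
        w = p_ab[:, :, None] * q
        pc = w.sum(axis=(0, 1))
        pa_c = w.sum(axis=1) / pc
        pb_c = w.sum(axis=0) / pc
    return pc, pa_c, pb_c
```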
