
In this work, we propose a reduced basis method for the efficient solution of parametric linear systems. The coefficient matrix is assumed to be a linear matrix-valued function of the parameter $\boldsymbol{\sigma}\in \mathbb{R}^s$ that is symmetric and positive definite for admissible parameter values. We propose a solution strategy in which one first computes a basis for an appropriate compound Krylov subspace and then uses this basis to compute a subspace solution for multiple values of $\boldsymbol{\sigma}$. Three kinds of compound Krylov subspaces are discussed, and an error estimate is given for the subspace solution from each of these spaces. The theoretical results are demonstrated by numerical examples related to solving parameter-dependent elliptic PDEs with the finite element method (FEM).
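A minimal sketch of the offline/online split described above, assuming one plausible reading of a compound Krylov subspace (the span of Krylov vectors generated by each component matrix applied to the right-hand side); the paper's three specific spaces and its error estimates are not reproduced here:

```python
import numpy as np

def compound_krylov_basis(As, b, m):
    """Orthonormal basis spanning the Krylov vectors generated by each
    component matrix applied to b (one possible 'compound' space)."""
    vecs = []
    for A in As:
        v = b.copy()
        for _ in range(m):
            vecs.append(v)
            v = A @ v
    Q, _ = np.linalg.qr(np.column_stack(vecs))
    return Q

def subspace_solution(As, sigma, b, V):
    """Galerkin (subspace) solution of A(sigma) x = b with
    A(sigma) = A_0 + sum_k sigma_k A_k, assumed SPD."""
    A = As[0] + sum(s * Ak for s, Ak in zip(sigma, As[1:]))
    Ar = V.T @ (A @ V)                 # small reduced SPD matrix
    return V @ np.linalg.solve(Ar, V.T @ b)

# toy example: A(sigma) = A0 + sigma_1 A1, SPD for sigma_1 >= 0
n = 200
rng = np.random.default_rng(0)
A0 = np.diag(np.linspace(1.0, 10.0, n))
B = rng.standard_normal((n, n))
A1 = B @ B.T / n                        # symmetric positive semidefinite
b = rng.standard_normal(n)

V = compound_krylov_basis([A0, A1], b, m=15)   # offline: computed once
for s1 in (0.1, 1.0, 5.0):                      # online: reused for each sigma
    x = subspace_solution([A0, A1], [s1], b, V)
    r = b - (A0 + s1 * A1) @ x
    print(s1, np.linalg.norm(r) / np.linalg.norm(b))
```

The basis V is computed once; each new parameter value then only requires assembling and solving the small projected system.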

Related content

We give a new algorithm for the estimation of the cross-covariance matrix $\mathbb{E} XY'$ of two large-dimensional signals $X\in\mathbb{R}^n$, $Y\in \mathbb{R}^p$ in the context where the number $T$ of observations of the pair $(X,Y)$ is large but $n/T$ and $p/T$ are not assumed to be small. In the asymptotic regime where $n,p,T$ are large, this algorithm is, with high probability, optimal for the Frobenius norm among rotationally invariant estimators, i.e. estimators derived from the empirical estimator by cleaning the singular values while leaving the singular vectors unchanged.
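A hedged sketch of the rotationally invariant estimation step: the singular vectors of the empirical cross-covariance are kept and only the singular values are cleaned. The optimal cleaning function derived in the paper is not reproduced; `shrink` below is a user-supplied placeholder.

```python
import numpy as np

def rie_cross_covariance(X, Y, shrink):
    """Rotationally invariant estimator of E[X Y']: keep the singular
    vectors of the empirical cross-covariance and replace each singular
    value s_k by shrink(s_k)."""
    T = X.shape[1]
    E = X @ Y.T / T                                   # empirical estimator (n x p)
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    return U @ np.diag(shrink(s)) @ Vt

# toy usage with a naive soft-threshold in place of the optimal cleaning
n, p, T = 100, 80, 200
rng = np.random.default_rng(1)
X = rng.standard_normal((n, T))
Y = 0.3 * X[:p, :] + rng.standard_normal((p, T))
E_clean = rie_cross_covariance(X, Y, lambda s: np.maximum(s - 0.2, 0.0))
print(E_clean.shape)
```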

We study fast rates of convergence in the setting of nonparametric online regression, namely where regret is defined with respect to an arbitrary function class of bounded complexity. Our contributions are two-fold:
- In the realizable setting of nonparametric online regression with the absolute loss, we propose a randomized proper learning algorithm which achieves a near-optimal mistake bound in terms of the sequential fat-shattering dimension of the hypothesis class. In the setting of online classification with a class of Littlestone dimension $d$, our bound reduces to $d \cdot {\rm poly} \log T$. This result answers the question of whether proper learners can achieve near-optimal mistake bounds; previously, even for online classification, the best known mistake bound was $\tilde O( \sqrt{dT})$. Further, for the real-valued (regression) setting, the optimal mistake bound was not known even for improper learners prior to this work.
- Using the above result, we exhibit an independent learning algorithm for general-sum binary games of Littlestone dimension $d$, in which each player achieves regret $\tilde O(d^{3/4} \cdot T^{1/4})$. This result generalizes analogous results of Syrgkanis et al. (2015), who showed that in finite games the optimal regret can be accelerated from $O(\sqrt{T})$ in the adversarial setting to $O(T^{1/4})$ in the game setting.
To establish these results, we introduce several new techniques, including: a hierarchical aggregation rule that achieves the optimal mistake bound for real-valued classes, a multi-scale extension of the proper online realizable learner of Hanneke et al. (2021), an approach to show that the output of such nonparametric learning algorithms is stable, and a proof that the minimax theorem holds in all online learnable games.
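For intuition only, a minimal exponentially weighted aggregation over a finite set of experts under the absolute loss; the paper's learner uses a hierarchical, multi-scale aggregation over a nonparametric class, which this sketch does not implement:

```python
import numpy as np

def exp_weights_regression(expert_preds, ys, eta=0.5):
    """Exponentially weighted average forecaster with the absolute loss:
    predict a convex combination of expert predictions and reweight
    experts by their incurred losses."""
    K, T = expert_preds.shape
    w = np.ones(K) / K
    total_loss = 0.0
    for t in range(T):
        pred = w @ expert_preds[:, t]
        total_loss += abs(pred - ys[t])
        losses = np.abs(expert_preds[:, t] - ys[t])
        w *= np.exp(-eta * losses)
        w /= w.sum()
    return total_loss

# usage: 50 constant-prediction "experts" on a noisy target sequence
rng = np.random.default_rng(0)
T = 1000
ys = 0.3 + 0.1 * rng.standard_normal(T)
experts = np.tile(np.linspace(0.0, 1.0, 50)[:, None], (1, T))
print(exp_weights_regression(experts, ys))
```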

This contribution focuses on the development of Model Order Reduction (MOR) for one-way coupled, steady-state, linear thermomechanical problems in a finite element setting. We apply Proper Orthogonal Decomposition (POD) for the computation of the reduced basis space. For the evaluation of the modal coefficients, we use two different methodologies: one based on Galerkin projection (G) and the other based on an Artificial Neural Network (ANN). We aim at comparing POD-G and POD-ANN in terms of relevant features, including errors and computational efficiency. In this context, both physical and geometrical parametrizations are considered. We also carry out a validation of the Full Order Model (FOM) based on customized benchmarks in order to provide a complete computational pipeline. The proposed framework is applied to a relevant industrial problem related to the investigation of thermomechanical phenomena arising in blast furnace hearth walls.
Keywords: Thermomechanical problems, Finite element method, Proper orthogonal decomposition, Galerkin projection, Artificial neural network, Geometric and physical parametrization, Blast furnace.
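A minimal POD-G sketch on a toy parametrized diffusion problem standing in for the thermomechanical FOM; `assemble_toy_system` is a hypothetical stand-in, not the paper's finite element model:

```python
import numpy as np

def assemble_toy_system(mu, n=200):
    """Toy stand-in for the FOM: 1D reaction-diffusion with reaction
    coefficient mu (the paper's FOM is a coupled thermomechanical FE model)."""
    h = 1.0 / (n + 1)
    L = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h
    K = L + mu * h * np.eye(n)
    f = h * np.ones(n)
    return K, f

def pod_basis(snapshots, rank):
    """POD modes = leading left singular vectors of the snapshot matrix."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :rank]

# offline: FOM solves at training parameters, then POD of the snapshots
mus_train = np.linspace(0.5, 5.0, 10)
snaps = np.column_stack([np.linalg.solve(*assemble_toy_system(mu)) for mu in mus_train])
V = pod_basis(snaps, rank=3)

# online (POD-G): Galerkin projection for a new parameter value
K, f = assemble_toy_system(2.7)
coeffs = np.linalg.solve(V.T @ K @ V, V.T @ f)       # modal coefficients
u_rb = V @ coeffs
u_fom = np.linalg.solve(K, f)
print(np.linalg.norm(u_rb - u_fom) / np.linalg.norm(u_fom))
```

In the POD-ANN variant, the modal coefficients would instead be predicted by a network trained on the projected snapshots rather than obtained from the Galerkin system.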

Efficient and robust iterative solvers for strongly anisotropic elliptic equations are very challenging to design. In this paper, a block preconditioning method is introduced to solve the linear algebraic systems arising from a class of micro-macro asymptotic-preserving (MMAP) schemes. The MMAP scheme was developed by Degond et al. in 2012, and its discrete matrix has a $2\times2$ block structure. Using approximate Schur complements, a series of block preconditioners is constructed. We first analyze a natural approximate Schur complement, namely the coefficient matrix of the original non-AP discretization. However, it tends to become singular for very small anisotropy parameters. We then improve it by using a more suitable approximation for the boundary rows of the exact Schur complement. With these block preconditioners, a preconditioned GMRES iterative method is developed to solve the discrete equations. Several numerical tests show that the block preconditioning method is a robust strategy with respect to grid refinement and the anisotropy strength.
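A hedged sketch of a block preconditioner built from an approximate Schur complement and used inside GMRES; the toy $2\times2$ block system below is a stand-in for the MMAP discretization, and the Schur complement is formed exactly here rather than by the boundary-row approximation discussed above:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def block_preconditioner(A, C, S_approx):
    """Block lower-triangular preconditioner for [[A, B], [C, D]],
    built from an approximate Schur complement S_approx ~ D - C A^{-1} B."""
    n, m = A.shape[0], S_approx.shape[0]
    A_lu = spla.splu(sp.csc_matrix(A))
    S_lu = spla.splu(sp.csc_matrix(S_approx))

    def apply(r):
        r = np.ravel(r)
        z1 = A_lu.solve(r[:n])                 # solve with the (1,1) block
        z2 = S_lu.solve(r[n:] - C @ z1)        # solve with the Schur approximation
        return np.concatenate([z1, z2])

    return spla.LinearOperator((n + m, n + m), matvec=apply)

# toy 2x2 block system standing in for the MMAP discretization
n = 100
A = sp.diags(np.full(n, 2.0)) + 0.1 * sp.eye(n)
B = 0.5 * sp.eye(n)
C = 0.5 * sp.eye(n)
D = sp.diags(np.linspace(1.0, 3.0, n))
Kmat = sp.bmat([[A, B], [C, D]]).tocsr()
rhs = np.ones(2 * n)

# exact Schur complement here; in practice it is replaced by an approximation
S = D - C @ spla.inv(sp.csc_matrix(A)) @ B
P = block_preconditioner(A, C, S)
x, info = spla.gmres(Kmat, rhs, M=P)
print(info, np.linalg.norm(Kmat @ x - rhs))
```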

We introduce a method for calculating individual elements of matrix functions. Our technique makes use of a novel series expansion for the action of matrix functions on basis vectors that is memory efficient even for very large matrices. We showcase our approach by calculating the matrix elements of the exponential of a transverse-field Ising model and evaluating quantum transition amplitudes for large many-body Hamiltonians of sizes up to $2^{64} \times 2^{64}$ on a single workstation. We also discuss the application of the method to matrix inverses, relate and compare our method to the state of the art, demonstrate its advantages, and discuss practical applications.
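A minimal illustration of the underlying idea of extracting a single matrix element from the action of a matrix function on a basis vector, using SciPy's `expm_multiply` as a stand-in for the series expansion proposed in the paper:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def matrix_element_exp(A, i, j):
    """<e_i, exp(A) e_j> obtained from the action of exp(A) on the basis
    vector e_j, without ever forming exp(A)."""
    n = A.shape[0]
    ej = np.zeros(n)
    ej[j] = 1.0
    return spla.expm_multiply(A, ej)[i]

# toy example with a large sparse symmetric matrix; for quantum transition
# amplitudes one would apply this to (-i t H) for a sparse Hamiltonian H
n = 2**14
A = sp.random(n, n, density=5e-4, random_state=42, format="csr")
A = 0.5 * (A + A.T)
print(matrix_element_exp(A, 0, 1))
```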

Many functions of interest live in a high-dimensional space but exhibit low-dimensional structure. This paper studies regression of an $s$-H\"{o}lder function $f$ in $\mathbb{R}^D$ which varies along a central subspace of dimension $d$, where $d\ll D$. A direct approximation of $f$ in $\mathbb{R}^D$ to accuracy $\varepsilon$ requires a number of samples $n$ of order $\varepsilon^{-(2s+D)/s}$. In this paper, we analyze the Generalized Contour Regression (GCR) algorithm for the estimation of the central subspace and use piecewise polynomials for function approximation. GCR is among the best estimators of the central subspace, but its sample complexity has been an open question. We prove that GCR leads to a mean squared estimation error of $O(n^{-1})$ for the central subspace, provided a certain variance quantity is exactly known. The estimation error of this variance quantity is also given in this paper. The mean squared regression error of $f$ is proved to be of order $\left(n/\log n\right)^{-\frac{2s}{2s+d}}$, where the exponent depends on the dimension $d$ of the central subspace instead of the ambient dimension $D$. This result demonstrates that GCR is effective in learning the low-dimensional central subspace. We also propose a modified GCR with improved efficiency. The convergence rate is validated through several numerical experiments.
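A hedged sketch of the overall pipeline (estimate the central subspace, then regress in the projected coordinates), using Sliced Inverse Regression as a simple stand-in for GCR, whose details are not reproduced here:

```python
import numpy as np

def sir_subspace(X, y, d, n_slices=10):
    """Sliced Inverse Regression (a stand-in for GCR): estimate a
    d-dimensional central subspace from the covariance of slice means.
    X is assumed already whitened here."""
    Xc = X - X.mean(axis=0)
    order = np.argsort(y)
    M = np.zeros((X.shape[1], X.shape[1]))
    for idx in np.array_split(order, n_slices):
        m = Xc[idx].mean(axis=0)
        M += (len(idx) / len(y)) * np.outer(m, m)
    _, V = np.linalg.eigh(M)
    return V[:, -d:]                      # leading eigenvectors span the estimate

# toy: f varies along a 1-dimensional central subspace of R^D
D, n = 20, 2000
rng = np.random.default_rng(3)
X = rng.standard_normal((n, D))
beta = np.zeros(D)
beta[0] = 1.0
y = np.tanh(2.0 * X @ beta) + 0.05 * rng.standard_normal(n)

B = sir_subspace(X, y, d=1)
z = (X @ B)[:, 0]                         # projected coordinate
coef = np.polyfit(z, y, deg=5)            # low-dimensional polynomial regression
print(np.abs(B[:, 0] @ beta))             # close to 1 when the subspace is recovered
```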

The asymptotic behaviour of Linear Spectral Statistics (LSS) of the smoothed periodogram estimator of the spectral coherency matrix of a complex Gaussian high-dimensional time series $(\mathbf{y}_n)_{n \in \mathbb{Z}}$ with independent components is studied in the asymptotic regime where the sample size $N$ converges towards $+\infty$ while the dimension $M$ of $\mathbf{y}$ and the smoothing span of the estimator grow to infinity at the same rate, in such a way that $\frac{M}{N} \rightarrow 0$. It is established that, at each frequency, the estimated spectral coherency matrix is close to the sample covariance matrix of an independent identically $\mathcal{N}_{\mathbb{C}}(0,\mathbf{I}_M)$ distributed sequence, and that its empirical eigenvalue distribution converges towards the Marchenko-Pastur distribution. This allows us to conclude that each LSS has a deterministic behaviour that can be evaluated explicitly. Using concentration inequalities, it is shown that the supremum over the frequencies of the deviation of each LSS from its deterministic approximation is of order $\frac{1}{M} + \frac{\sqrt{M}}{N}+ (\frac{M}{N})^{3}$. Numerical simulations support our results.
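A minimal numerical check of the stated behaviour, assuming the usual frequency-smoothed periodogram with smoothing span $2B+1$: for independent complex Gaussian components, the eigenvalues of the estimated coherency matrix should spread approximately over the Marchenko-Pastur support with ratio $M/(2B+1)$:

```python
import numpy as np

def smoothed_coherency(Y, nu, B):
    """Frequency-smoothed periodogram estimate of the spectral coherency
    matrix at frequency nu, averaging over 2B+1 nearby Fourier frequencies."""
    M, N = Y.shape
    freqs = (nu + np.arange(-B, B + 1) / N) % 1.0
    t = np.arange(N)
    F = Y @ np.exp(-2j * np.pi * np.outer(t, freqs)) / np.sqrt(N)   # M x (2B+1)
    S = F @ F.conj().T / (2 * B + 1)          # smoothed spectral density estimate
    d = 1.0 / np.sqrt(np.real(np.diag(S)))
    return (S * d[:, None]) * d[None, :]      # normalise to unit diagonal

# i.i.d. complex Gaussian rows: eigenvalues should be near the MP support
M, N, B = 100, 10000, 100
rng = np.random.default_rng(4)
Y = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
C = smoothed_coherency(Y, nu=0.2, B=B)
eig = np.linalg.eigvalsh(C)
c = M / (2 * B + 1)
print(eig.min(), eig.max(), (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2)
```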

Variational autoencoders, being built from neural networks, suffer from overfitting when too many neural units are used. We develop an adaptive dimension-reduction algorithm that automatically learns the dimension of the latent variable vector and, moreover, the dimension of every hidden layer. The approach applies not only to the variational autoencoder (VAE) but also to variants such as the Conditional VAE (CVAE), and we report empirical results on six data sets that demonstrate the universality and efficiency of the algorithm. Its key advantages are that it converges to a latent dimension that approximately minimizes the VAE loss, and that it speeds up generation and computation by reducing the number of neural units.
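A hedged illustration of one common pruning heuristic (not necessarily the paper's algorithm): latent dimensions whose average KL divergence to the prior stays near zero are inactive and can be removed, shrinking the latent vector.

```python
import numpy as np

def active_latent_dimensions(mu, logvar, threshold=0.01):
    """Keep only latent dimensions whose average KL(q(z_k|x) || N(0,1))
    exceeds a small threshold; the rest carry no information."""
    # per-sample, per-dimension KL of N(mu, exp(logvar)) against N(0, 1)
    kl = 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar)
    mean_kl = kl.mean(axis=0)                 # average over the data set
    return np.where(mean_kl > threshold)[0]

# toy usage: encoder outputs for 1000 samples and 8 latent dimensions,
# of which only the first 3 are "active"
rng = np.random.default_rng(5)
mu = np.zeros((1000, 8))
mu[:, :3] = rng.standard_normal((1000, 3))
logvar = np.zeros((1000, 8))
logvar[:, :3] = -1.0
print(active_latent_dimensions(mu, logvar))   # -> [0 1 2]
```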

For neural networks (NNs) with rectified linear unit (ReLU) or binary activation functions, we show that training can be accomplished in a reduced parameter space. Specifically, the weights in each neuron can be trained on the unit sphere, as opposed to the entire space, and the threshold can be trained in a bounded interval, as opposed to the real line. We show that NNs in the reduced parameter space are mathematically equivalent to standard NNs with parameters in the whole space. The reduced parameter space should facilitate the optimization procedure for network training, as the search space becomes (much) smaller. We demonstrate the improved training performance using numerical examples.
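The key identity behind this reduction is the positive homogeneity of the ReLU: rescaling the weight vector onto the unit sphere and dividing the threshold by the same norm only rescales the neuron's output, and that scale can be absorbed into the next layer. A minimal numerical check:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

# a neuron with arbitrary weights (w, b) is equivalent, up to a positive
# output scaling, to a neuron with a unit-norm weight vector and a
# rescaled threshold
rng = np.random.default_rng(6)
x = rng.standard_normal(10)
w = rng.standard_normal(10)
b = rng.standard_normal()

r = np.linalg.norm(w)
out_original = relu(w @ x + b)
out_rescaled = r * relu((w / r) @ x + b / r)
print(np.isclose(out_original, out_rescaled))   # True
```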

Discrete random structures are important tools in Bayesian nonparametrics, and the resulting models have proven effective in density estimation, clustering, topic modeling and prediction, among others. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, which is inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and then normalising to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes and develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
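A rough, heavily truncated sketch of the additive construction (common plus group-specific completely random measures, then normalisation), using finite gamma approximations; this is for intuition only and omits the latent nesting and the partition-distribution results:

```python
import numpy as np

def truncated_gamma_crm(rng, n_atoms, total_mass, base_sampler):
    """Finite truncation of a gamma completely random measure: atom
    locations from the base measure, jumps from independent gammas."""
    locs = base_sampler(n_atoms)
    jumps = rng.gamma(total_mass / n_atoms, 1.0, size=n_atoms)
    return locs, jumps

def dependent_measures(rng, n_groups=2, n_atoms=500, common_mass=1.0, group_mass=1.0):
    """Add a common CRM to a group-specific CRM and normalise, giving one
    dependent random probability measure per group."""
    base = lambda m: rng.standard_normal(m)
    c_locs, c_jumps = truncated_gamma_crm(rng, n_atoms, common_mass, base)
    measures = []
    for _ in range(n_groups):
        g_locs, g_jumps = truncated_gamma_crm(rng, n_atoms, group_mass, base)
        locs = np.concatenate([c_locs, g_locs])
        w = np.concatenate([c_jumps, g_jumps])
        measures.append((locs, w / w.sum()))       # normalised random probability
    return measures

rng = np.random.default_rng(7)
p1, p2 = dependent_measures(rng)
print(len(p1[0]), p1[1].sum())   # 1000 atoms, weights sum to 1
```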
