Highly heterogeneous, anisotropic coefficients, e.g. in the simulation of carbon-fibre composite components, can lead to extremely challenging finite element systems. Direct solvers for the resulting large, sparse linear systems suffer from severe memory requirements and limited parallel scalability, while iterative solvers generally lack robustness. Two-level spectral domain decomposition methods can provide such robustness for symmetric positive definite linear systems by using coarse spaces based on independent generalized eigenproblems in the subdomains. The resulting rigorous condition number bounds are independent of the mesh size, the number of subdomains, and the coefficient contrast. However, parallel scalability is still limited by the fact that (in order to guarantee robustness) the coarse problem is solved via a direct method. In this paper, we introduce a multilevel variant in the context of subspace correction methods and provide a general theory establishing its robust convergence for abstract elliptic variational problems. The assumptions of the theory are verified for conforming as well as discontinuous Galerkin methods applied to a scalar diffusion problem. Numerical results illustrate the performance of the method for two- and three-dimensional problems and for various discretization schemes, in the context of scalar diffusion and linear elasticity.
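
As a concrete illustration of the coarse-space ingredient, here is a minimal sketch of the per-subdomain generalized eigenproblem that typically defines a spectral (GenEO-type) coarse space. The operator names `A_loc` and `B_loc`, the threshold `tau`, and the convention that small eigenvalues flag troublesome modes are assumptions for illustration; the exact operators and selection rule depend on the formulation used in the paper.

```python
# Hedged sketch: local generalized eigenproblem defining a spectral coarse space.
import numpy as np
from scipy.linalg import eigh

def coarse_space_vectors(A_loc, B_loc, tau):
    """Select local eigenvectors for the coarse space.

    A_loc : local (Neumann-type) stiffness matrix on one subdomain
    B_loc : symmetric positive definite comparison matrix, e.g. a
            partition-of-unity-weighted stiffness on the overlap (assumed)
    tau   : user threshold separating "bad" modes from the rest
    """
    lam, V = eigh(A_loc, B_loc)   # generalized eigenproblem A v = lam B v
    keep = lam < tau              # selection convention varies by formulation
    return V[:, keep]             # these vectors span the subdomain's coarse block
```

The coarse space is then spanned by these subdomain contributions (extended via a partition of unity), and it is the direct solve for this coarse problem whose cost the multilevel variant is designed to amortize.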

Related content

Analogy-making is at the core of human and artificial intelligence and creativity. This paper introduces, from first principles, an abstract algebraic framework of analogical proportions of the form `$a$ is to $b$ what $c$ is to $d$' in the general setting of universal algebra. This enables us to compare mathematical objects, possibly across different domains, in a uniform way, which is crucial for AI systems. The main idea is to define solutions to analogical equations in terms of maximal sets of algebraic justifications, which amounts to deriving abstract terms of concrete elements from a `known' source domain which can then be instantiated in an `unknown' target domain to obtain analogous elements. It turns out that our notion of analogical proportions has appealing mathematical properties. We compare our framework with two recently introduced frameworks of analogical proportions from the literature in the concrete domains of sets and numbers, and in each case we either disagree with the notion from the literature, justified by a counterexample, or we show that our model yields strictly more solutions. We construct our model from first principles using only elementary concepts of universal algebra, and our model questions some basic properties of analogical proportions presupposed in the literature; to convince the reader of its plausibility, we show that it can be naturally embedded into first-order logic via model-theoretic types, and we prove that, from that perspective, analogical proportions are compatible with structure-preserving mappings. This provides strong evidence for its applicability. In a broader sense, this paper is a first step towards a theory of analogical reasoning and learning systems, with potential applications to fundamental AI problems like commonsense reasoning and computational learning and creativity.
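
For intuition only, the toy sketch below solves `$a$ is to $b$ what $c$ is to $d$' over the integers by searching a small, hand-picked set of candidate justification terms. It is a deliberately naive stand-in for the paper's maximal-justification-set semantics; the candidate terms and search range are arbitrary assumptions.

```python
# Toy analogical-equation solver over the integers (not the paper's model).
CANDIDATE_TERMS = [
    ("x + k", lambda x, k: x + k),
    ("x * k", lambda x, k: x * k),
]

def solve_analogy(a, b, c, k_range=range(-10, 11)):
    """Return (term, parameter, d) triples with term(a) = b and d = term(c)."""
    solutions = set()
    for name, f in CANDIDATE_TERMS:
        for k in k_range:
            if f(a, k) == b:                       # the term justifies a -> b ...
                solutions.add((name, k, f(c, k)))  # ... so instantiate it on c
    return solutions

# solve_analogy(2, 4, 3) yields d = 5 via "x + k" (k = 2) and d = 6 via
# "x * k" (k = 2): different justifications can give different analogues.
```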

We study active learning of homogeneous $s$-sparse halfspaces in $\mathbb{R}^d$ in the setting where the unlabeled data distribution is isotropic log-concave and each label is flipped with probability at most $\eta$ for a parameter $\eta \in \big[0, \frac12\big)$, known as the bounded noise model. Even in the presence of mild label noise, i.e. when $\eta$ is a small constant, this is a challenging problem, and only recently have label complexity bounds of the form $\tilde{O}\big(s \cdot \mathrm{polylog}(d, \frac{1}{\epsilon})\big)$ been established in [Zhang, 2018] for computationally efficient algorithms. In contrast, under high levels of label noise, the label complexity bounds achieved by computationally efficient algorithms are much worse: the best known result, [Awasthi et al., 2016], provides a computationally efficient algorithm with label complexity $\tilde{O}\big((\frac{s \ln d}{\epsilon})^{2^{\mathrm{poly}(1/(1-2\eta))}} \big)$, which is label-efficient only when the noise rate $\eta$ is a fixed constant. In this work, we substantially improve on this state of the art by designing a polynomial-time algorithm for active learning of $s$-sparse halfspaces with a label complexity of $\tilde{O}\big(\frac{s}{(1-2\eta)^4} \mathrm{polylog} (d, \frac 1 \epsilon) \big)$. This is the first efficient algorithm in this setting whose label complexity is polynomial in $\frac{1}{1-2\eta}$, making it label-efficient even for $\eta$ arbitrarily close to $\frac12$. Our active learning algorithm and its theoretical guarantees also immediately translate to new state-of-the-art label and sample complexity results for full-dimensional active and passive halfspace learning under arbitrary bounded noise. The key insight of our algorithm and analysis is a new interpretation of online learning regret inequalities, which may be of independent interest.
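
The sketch below shows two generic ingredients this line of work combines: querying labels only for points with small margin, and hard thresholding to maintain $s$-sparsity of the iterate. It is illustrative scaffolding under assumed names (`query_label`, the margin width `b`, the step size), not the paper's algorithm or its regret-based analysis.

```python
# Hedged sketch: margin-based querying + hard thresholding for sparse halfspaces.
import numpy as np

def hard_threshold(w, s):
    """Keep the s largest-magnitude coordinates of w, zero out the rest."""
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-s:]
    out[idx] = w[idx]
    return out

def active_round(w, X_pool, query_label, s, b, lr=0.1):
    """One round: query points with margin |<w, x>| <= b, then update w."""
    margins = np.abs(X_pool @ w)
    for x in X_pool[margins <= b]:
        y = query_label(x)                   # noisy label oracle, y in {-1, +1}
        if y * (w @ x) <= 0:                 # perceptron-style mistake update
            w = hard_threshold(w + lr * y * x, s)
    return w / max(np.linalg.norm(w), 1e-12)
```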

General elliptic equations with spatially discontinuous diffusion coefficients may be used as a simplified model for subsurface flow in heterogeneous or fractured porous media. In such a model, data sparsity and measurement errors are often taken into account by randomizing the diffusion coefficient of the elliptic equation, which creates the need for flexible, spatially discontinuous random fields. Subordinated Gaussian random fields are random functions on higher-dimensional parameter domains with discontinuous sample paths and great distributional flexibility. In the present work, we consider a random elliptic partial differential equation (PDE) whose diffusion coefficient contains a discontinuous subordinated Gaussian random field. Problem-specific multilevel Monte Carlo (MLMC) finite element methods are constructed to approximate the mean of the solution to the random elliptic PDE. We prove a priori convergence of a standard MLMC estimator and of a modified MLMC control-variate estimator, and validate our results in various numerical examples.
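
A minimal sketch of the standard MLMC mean estimator may help fix ideas: at level $l$, a fine and a coarse approximation are driven by the same random sample, and the level means telescope. The callables `solve(omega, level)` (an FE solve returning a scalar quantity of interest) and `draw_sample` are assumed interfaces for illustration.

```python
# Hedged sketch: standard multilevel Monte Carlo estimator of E[Q].
import numpy as np

def mlmc_mean(solve, draw_sample, n_samples_per_level):
    """E[Q] ~ sum_l mean(Q_l - Q_{l-1}), with Q_{-1} := 0."""
    total = 0.0
    for level, n in enumerate(n_samples_per_level):
        diffs = []
        for _ in range(n):
            omega = draw_sample()        # one realization of the random coefficient
            fine = solve(omega, level)
            coarse = solve(omega, level - 1) if level > 0 else 0.0
            diffs.append(fine - coarse)  # coupled correction at this level
        total += np.mean(diffs)
    return total
```

The control-variate variant modifies the level corrections to reduce their variance; the telescoping structure above is unchanged.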

Stabilized explicit methods are particularly efficient for large systems of stiff stochastic differential equations (SDEs) due to their extended stability domain. However, they lose their efficiency when severe stiffness is induced by very few "fast" degrees of freedom, as the stiff and nonstiff terms are evaluated concurrently. Therefore, inspired by [A. Abdulle, M. J. Grote, and G. Rosilho de Souza, Preprint (2020), arXiv:2006.00744], we introduce a stochastic modified equation whose stiffness depends solely on the "slow" terms. By integrating this modified equation with a stabilized explicit scheme, we devise a multirate method which overcomes the bottleneck caused by a few severely stiff terms and recovers the efficiency of stabilized schemes for large systems of nonlinear SDEs. The scheme is not based on any scale-separation assumption on the SDE and is therefore applicable to problems stemming from the spatial discretization of stochastic parabolic partial differential equations on locally refined grids. The multirate scheme has strong order 1/2 and weak order 1, and its stability is proved on a model problem. Numerical experiments confirm the efficiency and accuracy of the scheme.
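
To illustrate the multirate idea in its simplest form, the sketch below substeps only the stiff drift inside one macro step of Euler-Maruyama. This is a generic illustration of the slow/fast splitting, not the paper's modified-equation scheme, and the decomposition into `f_slow` and `f_fast` is an assumed interface.

```python
# Hedged sketch: one multirate Euler-Maruyama macro step of size h.
import numpy as np

def multirate_em_step(x, h, m, f_slow, f_fast, g, rng):
    """Substep the stiff drift f_fast m times; sample f_slow and g once."""
    dW = rng.normal(0.0, np.sqrt(h), size=np.shape(x))
    slow = f_slow(x)                 # nonstiff drift, frozen over the macro step
    for _ in range(m):               # m cheap substeps tame the stiff term
        x = x + (h / m) * f_fast(x)
    return x + h * slow + g(x) * dW
```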

When constructing high-order schemes for solving hyperbolic conservation laws, the corresponding high-order reconstructions are commonly performed in characteristic spaces to eliminate spurious oscillations as much as possible. For multi-dimensional finite volume (FV) schemes, the characteristic decomposition must be performed several times, in the different normal directions of the target cell, which is very time-consuming. In this paper, we propose a rotated characteristic decomposition technique which requires only a single decomposition for multi-dimensional reconstructions. The rotation direction depends only on the gradient of a specific physical quantity and is therefore cheap to compute. This technique not only reduces the computational cost remarkably, but also controls spurious oscillations effectively. We take a third-order weighted essentially non-oscillatory finite volume (WENO-FV) scheme for solving the Euler equations as an example to demonstrate the efficiency of the proposed technique.
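
The sketch below illustrates the geometric step: a single rotation direction is obtained from the gradient of a scalar field (density is used here as an assumption), and the 2D velocity is expressed in that frame so the characteristic decomposition is done once rather than per normal direction.

```python
# Hedged sketch: rotation direction from a density gradient (2D).
import numpy as np

def rotation_direction(rho, i, j, dx, dy, eps=1e-12):
    """Unit direction of grad(rho) at cell (i, j), via central differences."""
    gx = (rho[i + 1, j] - rho[i - 1, j]) / (2 * dx)
    gy = (rho[i, j + 1] - rho[i, j - 1]) / (2 * dy)
    g = np.hypot(gx, gy)
    return (np.array([1.0, 0.0]) if g < eps        # degenerate: fall back to x-axis
            else np.array([gx, gy]) / g)

def rotate_velocity(u, v, n):
    """(normal, tangential) velocity components in the rotated frame."""
    return u * n[0] + v * n[1], -u * n[1] + v * n[0]
```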

Spectral clustering (SC) is a popular clustering technique for finding strongly connected communities on a graph. SC can be used in Graph Neural Networks (GNNs) to implement pooling operations that aggregate nodes belonging to the same cluster. However, the eigendecomposition of the Laplacian is expensive and, since clustering results are graph-specific, pooling methods based on SC must perform a new optimization for each new sample. In this paper, we propose a graph clustering approach that addresses these limitations of SC. We formulate a continuous relaxation of the normalized minCUT problem and train a GNN to compute cluster assignments that minimize this objective. Our GNN-based implementation is differentiable, does not require computing the spectral decomposition, and learns a clustering function that can be quickly evaluated on out-of-sample graphs. Building on the proposed clustering method, we design a graph pooling operator that overcomes some important limitations of state-of-the-art graph pooling techniques and achieves the best performance in several supervised and unsupervised tasks.
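
One common form of such a relaxation scores a soft cluster-assignment matrix `S` (rows on the simplex, produced by the GNN) with a cut term plus an orthogonality term; the NumPy sketch below is illustrative of that objective rather than a verbatim transcription of the paper's loss.

```python
# Hedged sketch: a continuous relaxation of normalized minCUT.
import numpy as np

def mincut_relaxation_loss(S, A):
    """S: (n, K) soft assignments; A: (n, n) adjacency matrix."""
    D = np.diag(A.sum(axis=1))
    # Cut term: maximize within-cluster edge weight relative to cluster volume.
    cut = -np.trace(S.T @ A @ S) / np.trace(S.T @ D @ S)
    # Orthogonality term: push clusters to be balanced and non-degenerate.
    StS = S.T @ S
    K = S.shape[1]
    ortho = np.linalg.norm(StS / np.linalg.norm(StS) - np.eye(K) / np.sqrt(K))
    return cut + ortho   # minimized end-to-end by gradient descent through the GNN
```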

The eigendecomposition of nearest-neighbor (NN) graph Laplacian matrices is the main computational bottleneck in spectral clustering. In this work, we introduce a highly scalable, spectrum-preserving graph sparsification algorithm that makes it possible to build ultra-sparse NN (u-NN) graphs with guaranteed preservation of the original graph spectra, such as the first few eigenvectors of the original graph Laplacian. Our approach immediately leads to scalable spectral clustering of large data networks without sacrificing solution quality. The proposed method starts by constructing low-stretch spanning trees (LSSTs) from the original graphs, and then iteratively recovers small portions of "spectrally critical" off-tree edges to the LSSTs by leveraging a spectral off-tree embedding scheme. To determine a suitable number of off-tree edges to recover, an eigenvalue stability checking scheme is proposed, which robustly preserves the first few Laplacian eigenvectors within the sparsified graph. Additionally, an incremental graph densification scheme is proposed to identify extra edges that are missing from the original NN graphs but can still play important roles in spectral clustering tasks. Our experimental results for a variety of well-known data sets show that the proposed method can dramatically reduce the complexity of NN graphs, leading to significant speedups in spectral clustering.
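
The pipeline's overall shape can be sketched as follows. Constructing a true low-stretch spanning tree and the spectral off-tree embedding are the nontrivial parts of the method; in this sketch a minimum spanning tree and a raw edge-weight score stand in for both, purely as placeholders.

```python
# Hedged sketch: tree backbone + budgeted recovery of off-tree edges.
import numpy as np
from scipy.sparse import triu
from scipy.sparse.csgraph import minimum_spanning_tree

def sparsify(A, budget):
    """A: symmetric sparse adjacency with positive weights."""
    T = minimum_spanning_tree(A)      # placeholder for a low-stretch spanning tree
    T = T + T.T                       # symmetrize the tree
    off = triu(A - T, k=1).tocoo()    # candidate off-tree edges
    mask = off.data > 0               # drop entries cancelled by the tree
    rows, cols, vals = off.row[mask], off.col[mask], off.data[mask]
    keep = np.argsort(-vals)[:budget] # placeholder score: raw edge weight
    S = T.tolil()
    for r, c, w in zip(rows[keep], cols[keep], vals[keep]):
        S[r, c] = w                   # recover the highest-scoring off-tree edges
        S[c, r] = w
    return S.tocsr()
```

In the actual method, the budget is not fixed in advance but determined by the eigenvalue stability check, and edges are scored by their spectral criticality rather than their weight.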

We propose a new method of estimation in topic models that is not a variation on the existing simplex-finding algorithms and that estimates the number of topics K from the observed data. We derive new finite-sample minimax lower bounds for the estimation of the word-topic matrix A, as well as new upper bounds for our proposed estimator. We describe the scenarios in which our estimator is minimax adaptive. Our finite-sample analysis is valid for any number of documents (n), individual document length (N_i), dictionary size (p), and number of topics (K); both p and K are allowed to increase with n, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than the current ones, even though we start with the computational and theoretical disadvantage of not knowing the correct number of topics K, while the competing methods are provided with the correct value in our simulations.
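
To fix the notation, the snippet below simulates a corpus from the model: a p x K word-topic matrix A with columns on the simplex, per-document topic weights, and multinomial word counts. This is an illustration of the data-generating setting only, not the proposed estimator.

```python
# Hedged sketch: simulating documents from a topic model.
import numpy as np

def simulate_corpus(n, p, K, N, rng):
    A = rng.dirichlet(np.ones(p), size=K).T          # p x K, columns sum to 1
    docs = []
    for _ in range(n):
        t = rng.dirichlet(np.ones(K))                # topic mixture of one document
        word_probs = A @ t                           # p-vector of word frequencies
        docs.append(rng.multinomial(N, word_probs))  # word counts, summing to N
    return A, np.array(docs)

# A, X = simulate_corpus(n=100, p=500, K=5, N=200, rng=np.random.default_rng(0))
```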

We introduce a new multi-dimensional nonlinear embedding -- Piecewise Flat Embedding (PFE) -- for image segmentation. Based on the theory of sparse signal recovery, piecewise flat embedding with diverse channels attempts to recover a piecewise constant image representation with sparse region boundaries and sparse cluster value scattering. The resulting piecewise flat embedding exhibits interesting properties, such as suppressing slowly varying signals, and offers an image representation with higher region identifiability, which is desirable for image segmentation and high-level semantic analysis tasks. We formulate our embedding as a variant of the Laplacian eigenmap embedding with an $L_{1,p}$ ($0<p\leq1$) regularization term to promote sparse solutions. First, we devise a two-stage numerical algorithm based on Bregman iterations to compute $L_{1,1}$-regularized piecewise flat embeddings. We then generalize this algorithm through iterative reweighting to solve the general $L_{1,p}$-regularized problem. To demonstrate its efficacy, we integrate PFE into two existing image segmentation frameworks: segmentation based on clustering and hierarchical segmentation based on contour detection. Experiments on four major benchmark datasets, BSDS500, MSRC, the Stanford Background Dataset, and PASCAL Context, show that segmentation algorithms incorporating our embedding achieve significantly improved results.
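
The flavor of the regularization can be seen in the edge penalty below: replacing the squared $L_2$ differences behind a classical Laplacian eigenmap with $L_1$ differences concentrates the variation of the embedding on few edges, i.e. sparse region boundaries. This is an illustration of the penalty only, not the paper's Bregman or reweighting solver.

```python
# Hedged sketch: L_p penalty on embedding differences across graph edges.
import numpy as np

def edge_penalty(Y, edges, weights, p=1):
    """Sum of w_ij * ||y_i - y_j||_p over edges (i, j); p=1 favors flat regions."""
    i, j = edges   # two integer index arrays of equal length
    return float(np.sum(weights * np.linalg.norm(Y[i] - Y[j], ord=p, axis=1)))
```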

Spectral graph convolutional neural networks (CNNs) require approximations to the convolution to alleviate the computational complexity, resulting in performance loss. This paper proposes the topology adaptive graph convolutional network (TAGCN), a novel graph convolutional network defined in the vertex domain. We provide a systematic way to design a set of fixed-size learnable filters that perform convolutions on graphs. The topologies of these filters adapt to the topology of the graph as they scan it to perform convolution. TAGCN not only inherits the properties of convolutions in CNNs for grid-structured data but is also consistent with convolution as defined in graph signal processing. Since no approximation to the convolution is needed, TAGCN exhibits better performance than existing spectral CNNs on a number of data sets and is also computationally simpler than other recent methods.
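
The vertex-domain filter the abstract describes can be sketched as a degree-K polynomial in the normalized adjacency matrix with one learnable weight matrix per power; the symmetric normalization below is an assumed choice.

```python
# Hedged sketch: a fixed-size polynomial graph filter, TAGCN-style.
import numpy as np

def tag_filter(A, X, Ws):
    """A: (n, n) adjacency; X: (n, F_in) features; Ws: weights for powers 0..K."""
    d = A.sum(axis=1)
    A_norm = A / (np.sqrt(np.outer(d, d)) + 1e-12)  # symmetric normalization
    out, P = 0.0, np.eye(A.shape[0])                # P tracks A_norm ** k
    for W in Ws:                                    # accumulate sum_k A^k X W_k
        out = out + P @ X @ W
        P = P @ A_norm
    return out
```

Because each filter only mixes k-hop neighborhoods for k up to K, no spectral approximation of the convolution is involved.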
