
This paper is concerned with the numerical solution of transport problems in heterogeneous porous media. A semi-discrete continuous-in-time formulation of the linear advection-diffusion equation is obtained by using a mixed hybrid finite element method, in which the flux variable represents both the advective and diffusive flux, and the Lagrange multiplier arising from the hybridization is used for the discretization of the advective term. Based on global-in-time and nonoverlapping domain decomposition, we propose two implicit local time-stepping methods to solve the semi-discrete problem. The first method uses a time-dependent Steklov-Poincar\'e-type operator and the second uses optimized Schwarz waveform relaxation (OSWR) with Robin transmission conditions. For each method, we formulate a space-time interface problem which is solved iteratively. Each iteration involves solving the subdomain problems independently and globally in time; thus, different time steps can be used in the subdomains. The convergence of the fully discrete OSWR algorithm with nonmatching time grids is proved. Numerical results for problems with various P\'eclet numbers and discontinuous coefficients, including a prototype for the simulation of the underground storage of nuclear waste, are presented to illustrate the performance of the proposed local time-stepping methods.
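A minimal sketch of the OSWR idea, not the paper's mixed hybrid scheme: the 1D heat equation on (0,1) is split into two nonoverlapping subdomains with Robin transmission, each subdomain is advanced implicitly over the whole time window with its own time step, and the nonmatching interface traces are exchanged via linear interpolation in time. The Robin parameter `p`, grid sizes, and the one-sided interface discretization are illustrative choices.

```python
import numpy as np

def oswr_heat(nu=0.1, p=1.0, T=0.5, nx=50, nt1=50, nt2=80, iters=100, tol=1e-8):
    """Toy optimized Schwarz waveform relaxation for u_t = nu*u_xx on (0,1),
    u(0,t)=u(1,t)=0, u(x,0)=sin(pi*x), with nonoverlapping subdomains
    (0,1/2) and (1/2,1), Robin transmission (parameter p), and DIFFERENT
    time steps in the two subdomains (interface data interpolated in time)."""
    h = 0.5/nx
    x1 = np.linspace(0.0, 0.5, nx+1); x2 = np.linspace(0.5, 1.0, nx+1)
    t1 = np.linspace(0.0, T, nt1+1);  t2 = np.linspace(0.0, T, nt2+1)

    def solve_sub(tgrid, g, side):
        """Implicit Euler in time; side=+1: Robin at right end, -1: at left."""
        n = nx; dt = tgrid[1]-tgrid[0]; lam = nu*dt/h**2
        A = np.eye(n+1)*(1+2*lam)
        for i in range(1, n):
            A[i, i-1] = A[i, i+1] = -lam
        if side > 0:   # Dirichlet at x=0, Robin nu*u_x + p*u = g at x=1/2
            A[0, :] = 0; A[0, 0] = 1
            A[n, :] = 0; A[n, n] = nu/h + p; A[n, n-1] = -nu/h
            U = np.sin(np.pi*x1)
        else:          # Robin -nu*u_x + p*u = g at x=1/2, Dirichlet at x=1
            A[0, :] = 0; A[0, 0] = nu/h + p; A[0, 1] = -nu/h
            A[n, :] = 0; A[n, n] = 1
            U = np.sin(np.pi*x2)
        def outgoing(V):   # Robin data sent to the neighbour (flux sign flips)
            if side > 0: return -nu*(V[n]-V[n-1])/h + p*V[n]
            return nu*(V[1]-V[0])/h + p*V[0]
        tr = np.zeros(len(tgrid)); tr[0] = outgoing(U)
        for k in range(len(tgrid)-1):
            b = U.copy()
            if side > 0: b[0] = 0.0; b[n] = g[k+1]
            else:        b[0] = g[k+1]; b[n] = 0.0
            U = np.linalg.solve(A, b)
            tr[k+1] = outgoing(U)
        return U, tr

    g1 = np.zeros(nt1+1); g2 = np.zeros(nt2+1)
    for it in range(iters):
        u1, tr1 = solve_sub(t1, g1, +1)    # subdomain solves are independent
        u2, tr2 = solve_sub(t2, g2, -1)    # and global in time
        g1_new = np.interp(t1, t2, tr2)    # nonmatching time grids:
        g2_new = np.interp(t2, t1, tr1)    # interpolate the exchanged traces
        res = max(np.max(np.abs(g1_new-g1)), np.max(np.abs(g2_new-g2)))
        g1, g2 = g1_new, g2_new
        if res < tol:
            break
    return u1, u2, x1, x2, res
```

With the separable initial condition sin(pi*x), the converged two-domain solution can be compared against the exact decaying mode exp(-nu*pi^2*t)*sin(pi*x); the remaining gap is the (first-order) interface and time discretization error of this toy scheme.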

Related Content

The Laplace-Beltrami problem on closed surfaces embedded in three dimensions arises in many areas of physics, including molecular dynamics (surface diffusion), electromagnetics (harmonic vector fields), and fluid dynamics (vesicle deformation). Using classical potential theory, the Laplace-Beltrami operator can be pre-/post-conditioned with integral operators whose kernel is translation invariant, resulting in well-conditioned Fredholm integral equations of the second kind. These equations have the standard Laplace kernel from potential theory, and therefore the equations can be solved rapidly and accurately using a combination of fast multipole methods (FMMs) and high-order quadrature corrections. In this work we detail such a scheme, presenting two alternative integral formulations of the Laplace-Beltrami problem, each of which can be solved via FMM acceleration. We then present several applications of the solvers, focusing on the computation of what are known as harmonic vector fields, relevant for many applications in electromagnetics. A battery of numerical results is presented for each application, detailing the performance of the solver in various geometries.
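To illustrate why second-kind formulations with a Laplace kernel are attractive, here is a classical toy example (flat 2D potential theory, not the Laplace-Beltrami solver of the paper): the interior Dirichlet problem on the unit disk via a double-layer potential. On the unit circle the double-layer kernel degenerates to the constant -1/(4*pi), so the Nyström matrix is identity-plus-rank-one and perfectly conditioned; the trapezoid rule then gives spectral accuracy.

```python
import numpy as np

def solve_density(f_vals):
    """Solve the second-kind equation  -sigma/2 + K*sigma = f  for the
    double-layer density on the unit circle (Nystrom + trapezoid rule).
    Here the kernel is the constant -1/(4*pi), so K is rank one."""
    n = len(f_vals); h = 2*np.pi/n
    A = -0.5*np.eye(n) - (h/(4*np.pi))*np.ones((n, n))
    return np.linalg.solve(A, f_vals)

def eval_potential(sigma, x):
    """Evaluate u(x) = D[sigma](x) at a point x strictly inside the disk."""
    n = len(sigma); h = 2*np.pi/n
    th = h*np.arange(n)
    y = np.stack([np.cos(th), np.sin(th)], axis=1)  # nodes; outward normal = y
    d = x[None, :] - y
    kern = (y*d).sum(1)/(d*d).sum(1)/(2*np.pi)      # (n_y.(x-y))/|x-y|^2/(2pi)
    return h*np.sum(kern*sigma)

n = 64
th = 2*np.pi*np.arange(n)/n
sigma = solve_density(np.cos(th))        # boundary data of the harmonic u = x
u = eval_potential(sigma, np.array([0.5, 0.0]))
```

For boundary data cos(theta) the exact harmonic extension is u = r*cos(theta), so the computed density is -2*cos(theta) and the value at (0.5, 0) is 0.5, recovered here to near machine precision.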

In this paper, we propose a deep learning based reduced order modeling method for stochastic underground flow problems in highly heterogeneous media. We use supervised learning to build a reduced surrogate model mapping the stochastic parameter space, which characterizes the possible highly heterogeneous media, to the solution space of the stochastic flow problem, enabling fast online simulations. Dominant POD modes obtained from a well-designed spectral problem in a global snapshot space are used to represent the solution of the flow problem. Due to the small dimension of the solution representation, the complexity of the neural network is significantly reduced. We adopt the generalized multiscale finite element method (GMsFEM), in which a set of local multiscale basis functions that can capture the heterogeneity of the media and source information are constructed to efficiently generate a globally defined snapshot space. Rigorous theoretical analyses and extensive numerical experiments for linear and nonlinear stochastic flows are provided to verify the performance of the proposed method.
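A stripped-down sketch of the offline/online pipeline: snapshots of a parameterized 1D flow problem are compressed with POD (SVD of the snapshot matrix), and a map from parameters to POD coefficients is fitted. Everything here is a stand-in for the paper's machinery: a smooth two-parameter coefficient replaces the highly heterogeneous random media, direct fine-grid solves replace the GMsFEM snapshot construction, and a cubic polynomial regression replaces the deep network.

```python
import numpy as np

def solve_flow(mu, n=100):
    """Fine-grid FD solve of -(kappa(x;mu) u')' = 1 on (0,1), u(0)=u(1)=0,
    with kappa = exp(mu1*sin(pi x) + mu2*cos(pi x)) (a smooth toy 'medium')."""
    x = np.linspace(0, 1, n+1)
    xm = 0.5*(x[:-1] + x[1:])                 # coefficient at cell midpoints
    k = np.exp(mu[0]*np.sin(np.pi*xm) + mu[1]*np.cos(np.pi*xm))
    h = 1.0/n
    A = np.zeros((n-1, n-1))
    for i in range(n-1):
        A[i, i] = (k[i] + k[i+1])/h**2
        if i > 0:   A[i, i-1] = -k[i]/h**2
        if i < n-2: A[i, i+1] = -k[i+1]/h**2
    return np.linalg.solve(A, np.ones(n-1))

rng = np.random.default_rng(0)
mus = rng.uniform(-0.5, 0.5, size=(200, 2))
S = np.stack([solve_flow(m) for m in mus], axis=1)   # snapshot matrix
U, s, _ = np.linalg.svd(S, full_matrices=False)
r = 8
Phi = U[:, :r]                                       # dominant POD modes
C = Phi.T @ S                                        # training coefficients

def feats(M):  # cubic polynomial features stand in for the neural network
    m1, m2 = M[:, 0], M[:, 1]
    return np.stack([np.ones_like(m1), m1, m2, m1*m2, m1**2, m2**2,
                     m1**3, m1**2*m2, m1*m2**2, m2**3], axis=1)

W = np.linalg.lstsq(feats(mus), C.T, rcond=None)[0]  # (10, r)

def predict(mu):                                     # fast online surrogate
    return Phi @ (feats(mu[None])[0] @ W)
```

The fast singular-value decay confirms that the solution manifold is effectively low dimensional, which is what makes the small surrogate accurate.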

We develop a rapid and accurate contour method for the solution of time-fractional PDEs. The method inverts the Laplace transform via an optimised stable quadrature rule, suitable for infinite-dimensional operators, whose error decreases like $\exp(-cN/\log(N))$ for $N$ quadrature points. The method is parallelisable, avoids having to resolve singularities of the solution as $t\downarrow 0$, and avoids the large memory consumption that can be a challenge for time-stepping methods applied to time-fractional PDEs. The ODEs resulting from quadrature are solved using adaptive sparse spectral methods that converge exponentially with optimal linear complexity. These ODE solutions are reused across different evaluation times. We provide a complete analysis of our approach for fractional beam equations used to model small-amplitude vibration of viscoelastic materials with a fractional Kelvin-Voigt stress-strain relationship. We calculate the system's energy evolution over time and the surface deformation in cases of both constant and non-constant viscoelastic parameters. An infinite-dimensional ``solve-then-discretise'' approach considerably simplifies the analysis, which studies the generalisation of the numerical range of a quasi-linearisation of a suitable operator pencil. This allows us to build an efficient algorithm with explicit error control. The approach can be readily adapted to other time-fractional PDEs and is not constrained to fractional parameters in the range $0<\nu<1$.
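The core mechanism, contour quadrature for the inverse Laplace transform, can be shown on scalar transforms (a stand-in for the operator-valued transforms in the paper). Below is the midpoint rule on the modified Talbot contour with the standard Trefethen-Weideman-Schmelzer constants; the particular contour and parameters are one common choice, not necessarily the optimised rule of the paper.

```python
import numpy as np

def talbot_invert(F, t, N=32):
    """Invert a Laplace transform F(s) at time t > 0 via the midpoint rule
    on the modified Talbot contour; the quadrature error decays geometrically
    in N for transforms analytic off the negative real axis."""
    theta = -np.pi + (np.arange(N) + 0.5)*(2*np.pi/N)    # midpoint nodes
    z = 0.6407*theta
    s  = (N/t)*(-0.6122 + 0.5017*theta/np.tan(z) + 0.2645j*theta)
    ds = (N/t)*(0.5017*(1/np.tan(z) - 0.6407*theta/np.sin(z)**2) + 0.2645j)
    vals = np.exp(s*t)*F(s)*ds               # e^{st} F(s) s'(theta)
    return float(np.real(np.sum(vals)/(1j*N)))   # (h/2*pi*i) * sum, h=2pi/N
```

For example, F(s) = 1/(s+1) inverts to exp(-t), and the fractional-power transform F(s) = s^{-1/2} inverts to 1/sqrt(pi*t), both to high accuracy with a few dozen nodes; each quadrature point is an independent (parallelisable) resolvent evaluation.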

We describe the first gradient methods on Riemannian manifolds to achieve accelerated rates in the non-convex case. Under Lipschitz assumptions on the Riemannian gradient and Hessian of the cost function, these methods find approximate first-order critical points faster than regular gradient descent. A randomized version also finds approximate second-order critical points. Both the algorithms and their analyses build extensively on existing work in the Euclidean case. The basic operation consists of running the Euclidean accelerated gradient descent method (appropriately safeguarded against non-convexity) in the current tangent space, then moving back to the manifold and repeating. This requires lifting the cost function from the manifold to the tangent space, which can be done for example through the Riemannian exponential map. For this approach to succeed, the lifted cost function (called the pullback) must retain certain Lipschitz properties. As a contribution of independent interest, we prove precise claims to that effect, with explicit constants. Those claims are affected by the Riemannian curvature of the manifold, which in turn affects the worst-case complexity bounds for our optimization algorithms.
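A much-simplified sketch of the lift/optimise/retract loop, not the paper's safeguarded accelerated scheme: on the unit sphere, take gradient steps with momentum in the current tangent space, map the update back with the exponential map, and transport the momentum by projection onto the new tangent space. Applied to the Rayleigh quotient f(x) = x^T A x, the iteration settles at a first-order critical point (here, the minimal eigenvector).

```python
import numpy as np

def riem_momentum_descent(A, iters=20000, beta=0.7, seed=0):
    """Minimise f(x) = x^T A x over the unit sphere with tangent-space
    momentum steps and the sphere's exponential map as retraction."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=A.shape[0]); x /= np.linalg.norm(x)
    eta = 0.05/np.linalg.norm(A, 2)          # conservative step size
    v = np.zeros_like(x)
    for _ in range(iters):
        g = 2*(A@x - (x@A@x)*x)              # Riemannian gradient (projection)
        v = beta*(v - (v@x)*x) - eta*g       # transport momentum, add step
        nv = np.linalg.norm(v)
        if nv > 1e-14:
            x = np.cos(nv)*x + np.sin(nv)*(v/nv)   # exponential map on sphere
            x /= np.linalg.norm(x)           # guard against rounding drift
    return x
```

The pullback of f through the exponential map is what the Euclidean inner loop would see; this sketch collapses that inner loop to a single momentum step per retraction.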

This study debuts a new spline dimensional decomposition (SDD) for uncertainty quantification analysis of high-dimensional functions, including those with high nonlinearity and nonsmoothness. The decomposition creates a hierarchical expansion for an output random variable of interest with respect to measure-consistent orthonormalized basis splines (B-splines) in independent input random variables. A dimensionwise decomposition of a spline space into orthogonal subspaces, each spanned by a reduced set of such orthonormal splines, results in SDD. Exploiting the modulus of smoothness, the SDD approximation is shown to converge in mean-square to the correct limit. The computational complexity of the SDD method is polynomial, as opposed to exponential, thus alleviating the curse of dimensionality to the extent possible. Analytical formulae are proposed to calculate the second-moment properties of a truncated SDD approximation for a general output random variable in terms of the expansion coefficients involved. Numerical results indicate that a low-order SDD approximation of nonsmooth functions calculates the probabilistic characteristics of an output variable with an accuracy matching or surpassing those obtained by high-order approximations from several existing methods. Finally, a 34-dimensional random eigenvalue analysis demonstrates the utility of SDD in solving practical problems.

We propose a topology optimisation method for acoustic devices that operate over a given bandwidth. To achieve this, we define the objective function as the frequency-averaged sound intensity at given observation points, represented by an integral over the frequency band. It is, however, prohibitively expensive to evaluate such an integral naively by quadrature. We therefore estimate the frequency response by its Pad\'{e} approximation and integrate the approximated function to obtain the objective function. The corresponding topological derivative is derived with the help of the adjoint variable method and the chain rule. It is shown that the objective and its sensitivity can be evaluated semi-analytically. We present efficient numerical procedures to compute them and incorporate them into a topology optimisation based on the level-set method. We confirm the validity and effectiveness of the present method through several numerical examples.
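A scalar sketch of the band-averaging trick, with a damped-oscillator response standing in for the acoustic frequency response: Taylor coefficients of f at the band centre are converted to a Padé approximant, and the band integral of the approximant replaces the expensive direct frequency sweep. The model response, band, and Padé orders are illustrative choices, not the paper's.

```python
import numpy as np
from math import factorial

def taylor_response(wc, nterms, w0=10.0, gamma=1.0, tau=0.05):
    """Taylor coefficients (in delta = w - wc) of the model response
    f(w) = exp(i*tau*w)/(w0^2 - w^2 + i*gamma*w), via series inversion."""
    g0 = w0**2 - wc**2 + 1j*gamma*wc
    g1 = -2*wc + 1j*gamma
    g2 = -1.0 + 0j
    inv = np.zeros(nterms, complex); inv[0] = 1/g0
    for n in range(1, nterms):
        s = g1*inv[n-1] + (g2*inv[n-2] if n >= 2 else 0)
        inv[n] = -s/g0
    ex = np.array([np.exp(1j*tau*wc)*(1j*tau)**k/factorial(k)
                   for k in range(nterms)])
    return np.convolve(ex, inv)[:nterms]

def pade(c, L, M):
    """[L/M] Pade approximant P/Q (Q(0)=1) from Taylor coefficients c."""
    A = np.array([[(c[L+i-j] if L+i-j >= 0 else 0) for j in range(1, M+1)]
                  for i in range(1, M+1)], dtype=complex)
    q = np.concatenate(([1.0+0j], np.linalg.solve(A, -c[L+1:L+M+1])))
    p = np.array([sum(q[j]*c[k-j] for j in range(min(k, M)+1))
                  for k in range(L+1)])
    return p, q

wc, half = 10.0, 2.0                         # band [8, 12] around resonance
c = taylor_response(wc, 13)
p, q = pade(c, 6, 6)
wfine = np.linspace(wc-half, wc+half, 2001); d = wfine - wc
dx = wfine[1] - wfine[0]
f_true = np.exp(0.05j*wfine)/(100.0 - wfine**2 + 1j*wfine)
f_pade = np.polyval(p[::-1], d)/np.polyval(q[::-1], d)
f_tay  = np.polyval(c[::-1], d)              # plain truncated Taylor series
J_true = np.sum(np.abs(f_true)**2)*dx        # band-averaged intensity (ref.)
J_pade = np.sum(np.abs(f_pade)**2)*dx
J_tay  = np.sum(np.abs(f_tay)**2)*dx
```

The contrast is the point: the nearest pole sits about 0.5 away from the band centre, so the Taylor series diverges over most of the band while the Padé approximant, which can represent that pole, integrates accurately.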

Non-convex optimization is ubiquitous in modern machine learning. Researchers devise non-convex objective functions and optimize them using off-the-shelf optimizers such as stochastic gradient descent and its variants, which leverage the local geometry and update iteratively. Even though minimizing non-convex functions is NP-hard in the worst case, the optimization quality in practice is often not an issue -- optimizers are largely believed to find approximate global minima. Researchers hypothesize a unified explanation for this intriguing phenomenon: most of the local minima of the practically-used objectives are approximately global minima. We rigorously formalize this hypothesis for concrete instances of machine learning problems.
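A small experiment in the spirit of this hypothesis (a standard illustrative instance, not necessarily one analysed in the paper): symmetric low-rank matrix factorization, minimising ||U U^T - M||_F^2, is non-convex, yet gradient descent from several random initializations reaches near-zero loss every time, consistent with all local minima being (approximately) global for this problem.

```python
import numpy as np

def factorize(M, r, seed, lr=0.005, iters=12000):
    """Plain gradient descent on the non-convex loss ||U U^T - M||_F^2."""
    rng = np.random.default_rng(seed)
    U = 0.1*rng.normal(size=(M.shape[0], r))   # small random initialization
    for _ in range(iters):
        R = U@U.T - M
        U -= lr*4*(R@U)                        # gradient of the loss
    return np.linalg.norm(U@U.T - M)**2

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 2))
M = A@A.T                                      # rank-2 PSD target
losses = [factorize(M, 2, seed) for seed in range(5)]
```

Every run escapes the saddle at U = 0 and recovers the target up to numerical tolerance, despite the objective being non-convex.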

The use of orthogonal projections on high-dimensional input and target data in learning frameworks is studied. First, we investigate the relations between two standard objectives in dimension reduction, maximizing variance and preservation of pairwise relative distances. The derivation of their asymptotic correlation and numerical experiments indicate that a projection usually cannot satisfy both objectives. In a standard classification problem we determine projections on the input data that balance the two objectives and compare the subsequent results. Next, we extend our application of orthogonal projections to deep learning frameworks. We introduce new variational loss functions that enable integration of additional information via transformations and projections of the target data. In two supervised learning problems, clinical image segmentation and music information classification, the application of the proposed loss functions increases the accuracy.
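The two objectives can be measured directly on synthetic data; this sketch (illustrative choices throughout, not the paper's experiments) compares a PCA projection, which maximizes retained variance, against a random orthogonal projection, the classical choice for distance preservation, on anisotropic Gaussian data.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, k = 50, 300, 5
scales = 5*np.exp(-np.arange(d)/3.0)         # strongly anisotropic spectrum
X = rng.normal(size=(n, d)) * scales
X -= X.mean(axis=0)

P_pca = np.linalg.svd(X, full_matrices=False)[2][:k].T   # max-variance basis
P_rnd = np.linalg.qr(rng.normal(size=(d, k)))[0]         # random orthogonal

def retained_variance(P):
    """Fraction of total variance kept by the orthogonal projection P."""
    return np.sum((X@P)**2)/np.sum(X**2)

def distance_corr(P):
    """Correlation between original and projected pairwise distances."""
    i, j = np.triu_indices(n, 1)
    D0 = np.linalg.norm(X[i] - X[j], axis=1)
    D1 = np.linalg.norm((X@P)[i] - (X@P)[j], axis=1)
    return np.corrcoef(D0, D1)[0, 1]
```

A random k-of-d projection retains on average only k/d of the variance regardless of the data, while PCA concentrates on the dominant directions; comparing both scores for both projections makes the trade-off between the two objectives concrete.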

We advocate the use of implicit fields for learning generative models of shapes and introduce an implicit field decoder for shape generation, aimed at improving the visual quality of the generated shapes. An implicit field assigns a value to each point in 3D space, so that a shape can be extracted as an iso-surface. Our implicit field decoder is trained to perform this assignment by means of a binary classifier. Specifically, it takes a point coordinate, along with a feature vector encoding a shape, and outputs a value which indicates whether the point is outside the shape or not. By replacing conventional decoders with our decoder for representation learning and generative modeling of shapes, we demonstrate superior results for tasks such as shape autoencoding, generation, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.
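The classifier view of an implicit field can be shown in miniature (2D, a single fixed shape, so the shape feature vector is omitted; this is a toy in the spirit of the decoder, not the paper's network): a tiny MLP is trained to predict whether a query point lies inside a disk, and the shape is then recovered as the 0.5 iso-contour of the learned field.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1500
X = rng.uniform(-1, 1, size=(n, 2))                 # query points in the plane
y = ((X**2).sum(1) < 0.36).astype(float)[:, None]   # 1 inside a radius-0.6 disk

h = 32                                              # tiny MLP: 2 -> 32 -> 1
W1 = rng.normal(size=(2, h)); b1 = np.zeros(h)
W2 = rng.normal(size=(h, 1))/np.sqrt(h); b2 = np.zeros(1)
vs = [np.zeros_like(P) for P in (W1, b1, W2, b2)]
lr, mom = 0.3, 0.9
for _ in range(6000):                               # full-batch GD + momentum
    a1 = np.tanh(X@W1 + b1)
    # clip the logit for numerical stability of exp
    p = 1/(1 + np.exp(-np.clip(a1@W2 + b2, -30, 30)))
    dz2 = (p - y)/n                                 # grad of mean logistic loss
    da1 = (dz2@W2.T)*(1 - a1**2)
    grads = [X.T@da1, da1.sum(0), a1.T@dz2, dz2.sum(0)]
    for P, v, g in zip((W1, b1, W2, b2), vs, grads):
        v *= mom; v -= lr*g; P += v
acc = float(np.mean((p > 0.5) == (y > 0.5)))        # training accuracy

def occupancy(P):                                   # the learned implicit field
    return 1/(1 + np.exp(-np.clip(np.tanh(P@W1 + b1)@W2 + b2, -30, 30)))

# extract the shape as the 0.5 iso-contour: estimate its area on a grid
gx, gy = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
area = float(np.mean(occupancy(np.stack([gx.ravel(), gy.ravel()], 1)) > 0.5))*4
```

Thresholding the field on a dense grid recovers the disk, whose area should be close to pi*0.36; in 3D the same thresholding step becomes marching cubes over the decoder's output.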

Similarity and distance measures play a key role in many machine learning, pattern recognition, and data mining algorithms, which has led to the emergence of the metric learning field. Many metric learning algorithms learn a global distance function from data that satisfies the constraints of the problem. However, in many real-world datasets, where the discriminative power of features varies across different regions of the input space, a global metric is often unable to capture the complexity of the task. To address this challenge, local metric learning methods have been proposed that learn multiple metrics across different regions of the input space. These methods offer high flexibility and the ability to learn a nonlinear mapping, but typically at the expense of higher time requirements and overfitting. To overcome these challenges, this research presents an online multiple metric learning framework. Each metric in the proposed framework is composed of a global and a local component learned simultaneously. Adding a global component to a local metric efficiently reduces the problem of overfitting. The proposed framework is also scalable with both the sample size and the dimension of the input data. To the best of our knowledge, this is the first local online similarity/distance learning framework based on PA (Passive-Aggressive) learning. In addition, for scalability with the dimension of the input data, DRP (Dual Random Projection) is extended for local online learning in the present work. This enables our methods to run efficiently on high-dimensional datasets while maintaining their predictive performance. The proposed framework provides a straightforward local extension to any global online similarity/distance learning algorithm based on PA.
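For readers unfamiliar with the PA building block, here is the standard Passive-Aggressive (PA-I style) online update for a single global bilinear similarity s(x, z) = x^T M z on labelled pairs; the local components, DRP, and the paper's full framework are not reproduced, and the two-cluster data below is purely illustrative.

```python
import numpy as np

def pa_similarity_train(X1, X2, y, C=0.1):
    """Online PA-I learning of a bilinear similarity s(x,z) = x^T M z from
    pairs with labels y in {+1,-1} (+1: similar).  For hinge loss
    l = max(0, 1 - y*s) > 0, the closed-form update is
    M <- M + tau*y*outer(x,z) with tau = min(C, l/||outer(x,z)||_F^2)."""
    d = X1.shape[1]
    M = np.zeros((d, d))
    for x, z, yy in zip(X1, X2, y):
        loss = max(0.0, 1.0 - yy*(x@M@z))
        if loss > 0.0:                       # "aggressive" step on violation
            tau = min(C, loss/((x@x)*(z@z)))
            M += tau*yy*np.outer(x, z)
    return M                                 # else "passive": no change

rng = np.random.default_rng(0)
d, n = 10, 3000
c = np.zeros(d); c[0] = 2.0                  # two cluster centres at +-c
lab = rng.integers(0, 2, size=(n, 2))        # cluster of each pair member
X1 = (2*lab[:, 0:1] - 1)*c + 0.5*rng.normal(size=(n, d))
X2 = (2*lab[:, 1:2] - 1)*c + 0.5*rng.normal(size=(n, d))
y = np.where(lab[:, 0] == lab[:, 1], 1.0, -1.0)   # same cluster => similar

M = pa_similarity_train(X1[:2500], X2[:2500], y[:2500])
pred = np.sign(np.einsum('id,de,ie->i', X1[2500:], M, X2[2500:]))
acc = float(np.mean(pred == y[2500:]))
```

Each arriving pair costs one similarity evaluation plus (at most) one rank-one update, which is what makes PA-based similarity learning scalable in the sample size.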
