
We present a space-time multiscale method for a parabolic model problem with an underlying coefficient that may be highly oscillatory with respect to both the spatial and the temporal variables. The method is based on the framework of the Variational Multiscale Method in the context of a space-time formulation and computes a coarse-scale representation of the differential operator that is enriched by auxiliary space-time corrector functions. Once computed, the coarse-scale representation allows us to efficiently obtain well-approximating discrete solutions for multiple right-hand sides. We prove first-order convergence independently of the oscillation scales in the coefficient and illustrate how the space-time correctors decay exponentially in both space and time, making it possible to localize the corresponding computations. This localization allows us to define a practical and computationally efficient method in terms of complexity and memory, for which we provide a posteriori error estimates and present numerical examples.


In genome rearrangements, the mutational event transposition swaps two adjacent blocks of genes in one chromosome. The Transposition Distance Problem (TDP) aims to find the minimum number of transpositions required to transform one chromosome into another, both represented as permutations. The TDP can be reduced to the problem of Sorting by Transpositions (SBT). SBT is $\mathcal{NP}$-hard, and the best approximation algorithm, with a ratio of $1.375$, was proposed by Elias and Hartman. Their algorithm employs simplification, a technique used to transform an input permutation $\pi$ into a simple permutation $\hat{\pi}$ that is presumably easier to handle. The permutation $\hat{\pi}$ is obtained by inserting new symbols into $\pi$ in such a way that the lower bound on the transposition distance of $\pi$ is preserved for $\hat{\pi}$. Simplification is guaranteed to preserve the lower bound, but not the transposition distance itself. In this paper, we first show that the algorithm of Elias and Hartman (the EH algorithm) may require one extra transposition above the approximation ratio of $1.375$, depending on how the input permutation is simplified. Next, using an algebraic approach, we propose a new upper bound for the transposition distance and a new $1.375$-approximation algorithm to solve SBT that skips simplification and guarantees the approximation ratio of $1.375$ for all of $S_n$. We implemented both our algorithm and the EH algorithm; in the latter implementation, two issues needed to be fixed. We tested both algorithms against all permutations of size $n$, $2\leq n \leq 12$. The results show that the EH algorithm exceeds the approximation ratio of $1.375$ for permutations of size greater than $7$. Finally, we investigate the performance of both implementations on longer permutations of maximum length $500$.
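To make the central operation concrete, here is a minimal sketch (our illustration, not the EH algorithm or the paper's algebraic method) of a transposition acting on a permutation, together with a brute-force breadth-first search for the transposition distance that is feasible only for very small $n$:

```python
# Illustration of the transposition operation and an exhaustive distance
# computation; permutations are 0-based tuples over {0, ..., n-1}.
from collections import deque

def apply_transposition(pi, i, j, k):
    """Swap the adjacent blocks pi[i:j] and pi[j:k], with 0 <= i < j < k <= n."""
    return pi[:i] + pi[j:k] + pi[i:j] + pi[k:]

def transposition_distance(pi):
    """BFS for the minimum number of transpositions sorting pi (tiny n only)."""
    n = len(pi)
    target = tuple(range(n))
    seen = {pi}
    queue = deque([(pi, 0)])
    while queue:
        cur, d = queue.popleft()
        if cur == target:
            return d
        for i in range(n):
            for j in range(i + 1, n):
                for k in range(j + 1, n + 1):
                    nxt = apply_transposition(cur, i, j, k)
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, d + 1))
```

For example, `apply_transposition((0, 1, 2, 3), 0, 2, 4)` swaps the two halves, giving `(2, 3, 0, 1)`; approximation algorithms such as EH avoid the exponential search above by working with lower bounds on this distance.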

In this paper, we develop a high order residual distribution (RD) method for solving steady state conservation laws in the novel Hermite weighted essentially non-oscillatory (HWENO) framework recently developed in [24]. In particular, we design a high order HWENO integration for the integrals of the source term and fluxes based on the point values of the solution and its spatial derivatives, and we adapt the principles of residual distribution schemes to obtain steady state solutions. Two advantages of the novel HWENO framework were shown in [24]. First, compared with the traditional HWENO framework, the proposed method does not need to introduce additional auxiliary equations to update the derivatives of the unknown variable; it instead computes them from the current point values of the solution and its old spatial derivatives. This saves computational storage and CPU time, thereby improving the computational efficiency of the traditional HWENO framework. Second, compared with the traditional WENO method, the reconstruction stencil of the HWENO method is more compact, its boundary treatment is simpler, and its numerical errors are smaller on the same mesh. Thus, our scheme remains compact at higher orders of accuracy, compared with the scheme proposed by Chou and Shu in [11]. Extensive numerical experiments for one- and two-dimensional scalar and system problems confirm the high order accuracy and good quality of our scheme.

In this paper we develop a neural network for the numerical simulation of time-dependent linear transport equations with diffusive scaling and uncertainties. The goal of the network is to resolve the computational challenges of the curse of dimensionality and the multiple scales of the problem. We first show that a standard Physics-Informed Neural Network (PINN) fails to capture the multiscale nature of the problem, which justifies the need for Asymptotic-Preserving Neural Networks (APNNs). We show that not all classical AP formulations are suited to the neural network approach. We construct a micro-macro decomposition based neural network and also build a mass conservation mechanism into the loss function, in order to capture the dynamic and multiscale nature of the solutions. Numerical examples are used to demonstrate the effectiveness of the APNNs.

We consider the inverse source problem for a parabolic equation, where the unknown source possesses a semi-discrete formulation. Theoretically, we prove that the flux data from any nonempty open subset of the boundary uniquely determine the semi-discrete source. This means the observed area can be extremely small, which is why we refer to the data as sparse boundary data. For the numerical reconstruction, we formulate the problem from the Bayesian sequential prediction perspective and conduct numerical experiments that estimate the space-time-dependent source state by state. To better demonstrate the performance of the method, we solve two common multiscale problems from two models with a long source sequence. The numerical results illustrate that the inversion is accurate and efficient.

Neural networks are powerful tools for approximating high dimensional data that have been used in many contexts, including solution of partial differential equations (PDEs). We describe a solver for multiscale fully nonlinear elliptic equations that makes use of domain decomposition, an accelerated Schwarz framework, and two-layer neural networks to approximate the boundary-to-boundary map for the subdomains, which is the key step in the Schwarz procedure. Conventionally, the boundary-to-boundary map requires solution of boundary-value elliptic problems on each subdomain. By leveraging the compressibility of multiscale problems, our approach trains the neural network offline to serve as a surrogate for the usual implementation of the boundary-to-boundary map. Our method is applied to a multiscale semilinear elliptic equation and a multiscale $p$-Laplace equation. In both cases we demonstrate significant improvement in efficiency as well as good accuracy and generalization performance.
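The Schwarz procedure that the neural network accelerates can be illustrated on a toy problem. The sketch below (our hedged illustration, not the paper's solver) runs a classical alternating overlapping Schwarz iteration for $-u'' = f$ on $(0,1)$ with two subdomains; in the paper's approach, the subdomain boundary-to-boundary map realized here by an exact finite-difference solve would be replaced by a trained neural-network surrogate:

```python
# Alternating overlapping Schwarz for -u'' = f on (0, 1), u(0) = u(1) = 0.
# The per-subdomain solve below is the map a surrogate could replace.
import numpy as np

def solve_subdomain(f, a, b, ua, ub, m=50):
    """Solve -u'' = f on (a, b) with u(a) = ua, u(b) = ub by finite differences."""
    x = np.linspace(a, b, m + 2)
    h = x[1] - x[0]
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    rhs = f(x[1:-1])
    rhs[0] += ua / h**2          # fold Dirichlet data into the right-hand side
    rhs[-1] += ub / h**2
    u = np.linalg.solve(A, rhs)
    return x, np.concatenate([[ua], u, [ub]])

def schwarz(f, overlap=(0.4, 0.6), iters=20):
    """Exchange interface traces between subdomains (0, r) and (l, 1)."""
    l, r = overlap
    g_l = g_r = 0.0              # interface values at x = r and x = l
    for _ in range(iters):
        x1, u1 = solve_subdomain(f, 0.0, r, 0.0, g_l)
        g_r = np.interp(l, x1, u1)    # trace passed to the right subdomain
        x2, u2 = solve_subdomain(f, l, 1.0, g_r, 0.0)
        g_l = np.interp(r, x2, u2)    # trace passed back to the left subdomain
    return g_l, g_r
```

With $f(x) = \pi^2 \sin(\pi x)$ the exact solution is $u(x) = \sin(\pi x)$, and the interface values converge geometrically to $\sin(0.6\pi)$ and $\sin(0.4\pi)$; the cost of the repeated subdomain solves inside the loop is what motivates an offline-trained surrogate.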

In this paper, we propose a $C^{0}$ interior penalty method for the $m$th-Laplace equation on a bounded Lipschitz polyhedral domain in $\mathbb{R}^{d}$, where $m$ and $d$ can be any positive integers. The standard $H^{1}$-conforming piecewise polynomial space of order $r$ is used to approximate the exact solution $u$, where $r$ can be any integer greater than or equal to $m$. Unlike the interior penalty method of [T.~Gudi and M.~Neilan, {\em An interior penalty method for a sixth-order elliptic equation}, IMA J. Numer. Anal., \textbf{31(4)} (2011), pp. 1734--1753], we avoid computing $D^{m}$ of the numerical solution on each element and high order normal derivatives of the numerical solution along mesh interfaces. Therefore, our method can be easily implemented. After proving that the discrete $H^{m}$-norm is bounded by the natural energy semi-norm associated with our method, we obtain stability and optimal convergence with respect to the discrete $H^{m}$-norm. Numerical experiments validate our theoretical estimates.

A finite element analysis of a Dirichlet boundary control problem governed by a linear parabolic equation is presented in this article. The Dirichlet control is considered in a closed and convex subset of the energy space $H^1(\Omega \times(0,T)).$ We prove well-posedness and discuss some regularity results for the control problem. We derive the optimality system for the optimal control problem. The first order necessary optimality condition results in a simplified Signorini type problem for the control variable. The space discretization of the state variable is done using conforming finite elements, whereas the time discretization is based on discontinuous Galerkin methods. To discretize the control we use conforming prismatic Lagrange finite elements. We derive optimal orders of convergence for the error in the control, state, and adjoint state. The theoretical results are corroborated by some numerical tests.

Normalizing flows are invertible neural networks with tractable change-of-volume terms, which allow optimization of their parameters to be efficiently performed via maximum likelihood. However, data of interest are typically assumed to live in some (often unknown) low-dimensional manifold embedded in a high-dimensional ambient space. The result is a modelling mismatch since -- by construction -- the invertibility requirement implies high-dimensional support of the learned distribution. Injective flows, mappings from low- to high-dimensional spaces, aim to fix this discrepancy by learning distributions on manifolds, but the resulting volume-change term becomes more challenging to evaluate. Current approaches either avoid computing this term entirely using various heuristics, or assume the manifold is known beforehand and therefore are not widely applicable. Instead, we propose two methods to tractably calculate the gradient of this term with respect to the parameters of the model, relying on careful use of automatic differentiation and techniques from numerical linear algebra. Both approaches perform end-to-end nonlinear manifold learning and density estimation for data projected onto this manifold. We study the trade-offs between our proposed methods, empirically verify that we outperform approaches ignoring the volume-change term by more accurately learning manifolds and the corresponding distributions on them, and show promising results on out-of-distribution detection. Our code is available at //github.com/layer6ai-labs/rectangular-flows.
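For an injective map $g:\mathbb{R}^d \to \mathbb{R}^D$ with $d < D$ and Jacobian $J$, the volume-change term mentioned above is $\tfrac{1}{2}\log\det(J^\top J)$, which is what makes injective flows harder than square ones. The following sketch (our illustration, not the paper's autodiff-based estimators) computes it directly by forming $J$ with finite differences on a hypothetical toy embedding, which is only feasible for small $d$:

```python
# Direct evaluation of the injective-flow change-of-volume term
# (1/2) * log det(J^T J) for a map g: R^d -> R^D, via a finite-difference
# Jacobian. Tractable gradient estimation, as in the paper, avoids forming J.
import numpy as np

def log_vol_change(g, z, eps=1e-6):
    d = z.size
    # Each column of J is the central difference of g along one coordinate.
    J = np.stack([(g(z + eps * e) - g(z - eps * e)) / (2 * eps)
                  for e in np.eye(d)], axis=1)          # shape (D, d)
    sign, logdet = np.linalg.slogdet(J.T @ J)           # J^T J is d x d, SPD
    return 0.5 * logdet
```

For a linear embedding $g(z) = Az$ with $A = \mathrm{diag}$-like columns of norms 2 and 3, the term is $\log\sqrt{4 \cdot 9} = \log 6$, independent of $z$.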

Importance sampling is one of the most widely used variance reduction strategies in Monte Carlo rendering. In this paper, we propose a novel importance sampling technique that uses a neural network to learn how to sample from a desired density represented by a set of samples. Our approach considers an existing Monte Carlo rendering algorithm as a black box. During a scene-dependent training phase, we learn to generate samples with a desired density in the primary sample space of the rendering algorithm using maximum likelihood estimation. We leverage a recent neural network architecture that was designed to represent real-valued non-volume preserving ('Real NVP') transformations in high dimensional spaces. We use Real NVP to non-linearly warp primary sample space and obtain desired densities. In addition, Real NVP efficiently computes the determinant of the Jacobian of the warp, which is required to implement the change of integration variables implied by the warp. A main advantage of our approach is that it is agnostic of underlying light transport effects, and can be combined with many existing rendering techniques by treating them as a black box. We show that our approach leads to effective variance reduction in several practical scenarios.
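The key property of Real NVP exploited above, a cheaply invertible warp with a tractable Jacobian determinant, comes from its affine coupling layers. The NumPy sketch below (a minimal illustration with hypothetical toy scale and translation networks `s` and `t`, not the paper's architecture) shows why the log-determinant reduces to a simple sum:

```python
# One Real NVP affine coupling layer: the first half of x passes through
# unchanged, the second half is scaled and shifted as a function of the first.
# The Jacobian is triangular, so its log-determinant is the sum of log-scales.
import numpy as np

def coupling_forward(x, s, t):
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    log_s = s(x1)                     # log-scales depend on x1 only
    y2 = x2 * np.exp(log_s) + t(x1)
    log_det = log_s.sum(axis=-1)      # tractable change-of-volume term
    return np.concatenate([x1, y2], axis=-1), log_det

def coupling_inverse(y, s, t):
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    x2 = (y2 - t(y1)) * np.exp(-s(y1))
    return np.concatenate([y1, x2], axis=-1)
```

Because the inverse never requires inverting `s` or `t`, those can be arbitrary networks; this is what lets the warp of primary sample space be both expressive and usable inside the change-of-variables formula.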

For neural networks (NNs) with rectified linear unit (ReLU) or binary activation functions, we show that their training can be accomplished in a reduced parameter space. Specifically, the weights in each neuron can be trained on the unit sphere, as opposed to the entire space, and the threshold can be trained in a bounded interval, as opposed to the real line. We show that the NNs in the reduced parameter space are mathematically equivalent to the standard NNs with parameters in the whole space. The reduced parameter space shall facilitate the optimization procedure for the network training, as the search space becomes (much) smaller. We demonstrate the improved training performance using numerical examples.
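The identity behind the reduced parameter space can be checked numerically. The snippet below (our illustration, not the paper's construction) uses the positive homogeneity of ReLU, $\mathrm{ReLU}(w \cdot x - b) = \|w\| \, \mathrm{ReLU}(\hat{w} \cdot x - b/\|w\|)$ with $\hat{w} = w/\|w\|$, so each neuron's weight vector can be constrained to the unit sphere with the norm absorbed into the outgoing layer:

```python
# Numerical check that a ReLU neuron with arbitrary weights is equivalent
# to one with unit-norm weights and a rescaled threshold.
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

rng = np.random.default_rng(1)
x = rng.normal(size=10)
w = rng.normal(size=10)
b = rng.normal()

norm = np.linalg.norm(w)
w_hat, b_hat = w / norm, b / norm   # weights on the sphere, rescaled threshold

original = relu(w @ x - b)
reduced = norm * relu(w_hat @ x - b_hat)
assert np.isclose(original, reduced)
```

The two evaluations agree exactly, which is the equivalence that lets the search space shrink from all of $\mathbb{R}^n$ per neuron to the unit sphere.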
