
We introduce a mimetic dual-field discretization which conserves mass, kinetic energy and helicity for three-dimensional incompressible Navier-Stokes equations. The discretization makes use of a conservative dual-field mixed weak formulation where two evolution equations of velocity are employed and dual representations of the solution are sought for each variable. A temporal discretization, which staggers the evolution equations and handles the nonlinearity such that the resulting discrete algebraic systems are linear and decoupled, is constructed. The spatial discretization is mimetic in the sense that the finite dimensional function spaces form a discrete de Rham complex. Conservation of mass, kinetic energy and helicity in the absence of dissipative terms is proven at the discrete level. Proper dissipation rates of kinetic energy and helicity in the viscous case are also proven. Numerical tests supporting the method are provided.
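The defining property of a discrete de Rham complex can be illustrated with a toy periodic finite-difference analogue (this is not the authors' mimetic finite element construction, only a minimal sketch of the complex property): assembling grad, curl and div from 1D difference matrices via Kronecker products, the identities curl∘grad = 0 and div∘curl = 0 hold exactly at the discrete level.

```python
import numpy as np

def diff_ops(n):
    """Periodic forward differences on an n^3 grid; returns grad, curl,
    div as dense matrices acting on flattened scalar/vector fields."""
    I = np.eye(n)
    D = np.roll(np.eye(n), -1, axis=1) - np.eye(n)   # 1D forward difference
    Dx = np.kron(D, np.kron(I, I))                   # x varies slowest
    Dy = np.kron(I, np.kron(D, I))
    Dz = np.kron(I, np.kron(I, D))
    Z = np.zeros_like(Dx)
    grad = np.vstack([Dx, Dy, Dz])
    curl = np.block([[Z, -Dz, Dy],
                     [Dz, Z, -Dx],
                     [-Dy, Dx, Z]])
    div = np.hstack([Dx, Dy, Dz])
    return grad, curl, div

grad, curl, div = diff_ops(3)
```

Because difference operators along different axes commute, both compositions vanish identically, which is the algebraic backbone of mimetic conservation arguments.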


We present a data-driven approach to construct entropy-based closures for the moment system from kinetic equations. The proposed closure learns the entropy function by fitting the map between the moments and the entropy of the moment system, and thus does not depend on the space-time discretization of the moment system and specific problem configurations such as initial and boundary conditions. With convex and $C^2$ approximations, this data-driven closure inherits several structural properties from entropy-based closures, such as entropy dissipation, hyperbolicity, and H-Theorem. We construct convex approximations to the Maxwell-Boltzmann entropy using convex splines and neural networks, test them on the plane source benchmark problem for linear transport in slab geometry, and compare the results to the standard, optimization-based M$_N$ closures. Numerical results indicate that these data-driven closures provide accurate solutions in much less computation time than the M$_N$ closures.
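The data-driven idea can be shown in miniature for a hypothetical single-moment toy problem (all formulas below are illustrative assumptions, not the paper's setup): for the zeroth moment $\rho$ of a density on $v\in[-1,1]$ with Maxwell-Boltzmann entropy $f\ln f - f$, the entropy minimizer is the constant $f\equiv\rho/2$, so the entropy map is $h(\rho)=\rho\ln(\rho/2)-\rho$, a convex function. Tabulating this map and fitting a piecewise-linear surrogate preserves convexity, since linear interpolation of samples of a convex function is itself convex.

```python
import numpy as np

# "training data": entropy of the minimal-entropy distribution with
# zeroth moment rho on v in [-1, 1]; h(rho) = rho*ln(rho/2) - rho
rho = np.linspace(0.1, 5.0, 200)
h = rho * np.log(rho / 2.0) - rho

def h_fit(r):
    """Piecewise-linear surrogate of the entropy map, playing the role
    of the learned closure; convexity of the data is inherited."""
    return np.interp(r, rho, h)

# convexity check on the fitted map: nonnegative second differences
d2 = h[2:] - 2.0 * h[1:-1] + h[:-2]
```

A spline or input-convex network would replace `np.interp` in the paper's setting; the structural point, that the surrogate stays convex, is what carries over.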

Finite element simulations have been used to solve various partial differential equations (PDEs) that model physical, chemical, and biological phenomena. The resulting discretized solutions to PDEs often do not satisfy requisite physical properties, such as positivity or monotonicity. Such invalid solutions pose both modeling challenges, since the physical interpretation of simulation results is not possible, and computational challenges, since such properties may be required to advance the scheme. We, therefore, consider the problem of computing solutions that preserve these structural solution properties, which we enforce as additional constraints on the solution. We consider in particular the class of convex constraints, which includes positivity and monotonicity. By embedding such constraints as a postprocessing convex optimization procedure, we can compute solutions that satisfy general types of convex constraints. For certain types of constraints (including positivity and monotonicity), the optimization is a filter, i.e., a norm-decreasing operation. We provide a variety of tests on one-dimensional time-dependent PDEs that demonstrate the method's efficacy, and we empirically show that rates of convergence are unaffected by the inclusion of the constraints.
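For the special case of a positivity constraint with a lumped (diagonal) mass matrix, the postprocessing optimization reduces to componentwise clipping, and the filter property (the operation does not increase the weighted norm) can be checked directly. This is a minimal sketch of that special case, not the paper's general convex-optimization procedure; the example data are synthetic.

```python
import numpy as np

def project_positive(u):
    """Projection of nodal values u onto {u_i >= 0} in a norm weighted
    by a diagonal (lumped) mass matrix: the minimizer of
    sum_i m_i (v_i - u_i)^2 subject to v_i >= 0 is clipping."""
    return np.maximum(u, 0.0)

# synthetic "FE solution" with small oscillatory undershoots
x = np.linspace(0.0, 1.0, 101)
u = np.exp(-50.0 * (x - 0.5) ** 2) - 0.02 * np.cos(40.0 * np.pi * x)
m = np.full_like(x, 1.0 / len(x))        # lumped mass, uniform mesh

v = project_positive(u)
norm_before = np.sqrt(np.sum(m * u ** 2))
norm_after = np.sqrt(np.sum(m * v ** 2))
```

Since clipping only shrinks entries toward zero, `norm_after <= norm_before` holds, which is exactly the norm-decreasing filter property discussed in the abstract.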

Highly heterogeneous, anisotropic coefficients, e.g. in the simulation of carbon-fibre composite components, can lead to extremely challenging finite element systems. Direct solvers for the resulting large and sparse linear systems suffer from severe memory requirements and limited parallel scalability, while iterative solvers in general lack robustness. Two-level spectral domain decomposition methods can provide such robustness for symmetric positive definite linear systems, by using coarse spaces based on independent generalized eigenproblems in the subdomains. The rigorous condition number bounds are independent of the mesh size, the number of subdomains, and the coefficient contrast. However, their parallel scalability is still limited by the fact that (in order to guarantee robustness) the coarse problem is solved via a direct method. In this paper, we introduce a multilevel variant in the context of subspace correction methods and provide a general convergence theory for its robust convergence for abstract, elliptic variational problems. Assumptions of the theory are verified for conforming, as well as for discontinuous Galerkin methods applied to a scalar diffusion problem. Numerical results illustrate the performance of the method for two- and three-dimensional problems and for various discretization schemes, in the context of scalar diffusion and linear elasticity.
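The two-level structure can be sketched for the simplest case: a dense additive Schwarz preconditioner for a 1D Laplacian, with overlapping subdomain solves plus a piecewise-linear coarse space. This toy uses a geometric coarse space rather than the spectral coarse spaces of the paper, and all sizes are illustrative; it only shows the mechanism by which the coarse level restores a bounded condition number.

```python
import numpy as np

n = 40                                       # interior fine-grid points
h = 1.0 / (n + 1)
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian stencil

# overlapping subdomains: four index blocks, each extended by two points
blocks = np.array_split(np.arange(n), 4)
subdomains = [np.arange(max(b[0] - 2, 0), min(b[-1] + 3, n)) for b in blocks]

Minv = np.zeros((n, n))
for idx in subdomains:
    R = np.eye(n)[idx]                       # restriction to the subdomain
    Minv += R.T @ np.linalg.inv(R @ A @ R.T) @ R

# coarse space: piecewise-linear hat functions on a coarse mesh
nc = 4
H = 1.0 / (nc + 1)
xf = np.arange(1, n + 1) * h                 # fine nodes
Xc = np.arange(1, nc + 1) * H                # coarse nodes
P0 = np.maximum(0.0, 1.0 - np.abs(xf[:, None] - Xc[None, :]) / H)
Minv += P0 @ np.linalg.inv(P0.T @ A @ P0) @ P0.T

def cond(M):
    ev = np.real(np.linalg.eigvals(M))
    return ev.max() / ev.min()

c_unprec, c_prec = cond(A), cond(Minv @ A)
```

The preconditioned condition number is drastically smaller than that of `A`; in a spectral two-level method, the hat functions would be replaced by subdomain eigenfunctions to make the bound robust to coefficient contrast as well.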

Maximal parabolic $L^p$-regularity of linear parabolic equations on an evolving surface is shown by pulling back the problem to the initial surface and studying the maximal $L^p$-regularity on a fixed surface. By freezing the coefficients in the parabolic equations at a fixed time and utilizing a perturbation argument around the frozen time, it is shown that backward difference time discretizations of linear parabolic equations on an evolving surface along characteristic trajectories can preserve maximal $L^p$-regularity in the discrete setting. The result is applied to prove the stability and convergence of time discretizations of nonlinear parabolic equations on an evolving surface, with linearly implicit backward differentiation formulae along characteristic trajectories of the surface, for general locally Lipschitz nonlinearities. The discrete maximal $L^p$-regularity is used to prove the boundedness and stability of numerical solutions in the $L^\infty(0,T;W^{1,\infty})$ norm, which is used to bound the nonlinear terms in the stability analysis. Optimal-order error estimates of time discretizations in the $L^\infty(0,T;W^{1,\infty})$ norm are obtained by combining the stability analysis with the consistency estimates.
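The linearly implicit BDF idea, treating the linear part implicitly and evaluating the nonlinearity at an extrapolated value so each step only solves a linear system, can be sketched on a scalar stand-in problem. The ODE $u' = u - u^2$ and all step counts below are illustrative assumptions, not the surface PDE of the abstract; the point is the second-order convergence of the linearly implicit BDF2 scheme.

```python
import numpy as np

def bdf2_linearly_implicit(u0, T, N, exact_start):
    """Linearly implicit BDF2 for u' = u - u^2 (logistic growth).
    The linear term u is implicit; the nonlinearity u^2 is evaluated at
    the extrapolation 2*u^n - u^{n-1}, so each step is linear in u^{n+1}:
    (3u^{n+1} - 4u^n + u^{n-1})/(2k) = u^{n+1} - (2u^n - u^{n-1})^2."""
    k = T / N
    u = np.empty(N + 1)
    u[0] = u0
    u[1] = exact_start(k)          # exact value used for the starting step
    for n in range(1, N):
        w = 2.0 * u[n] - u[n - 1]  # extrapolated nonlinearity argument
        u[n + 1] = (4.0 * u[n] - u[n - 1] - 2.0 * k * w ** 2) / (3.0 - 2.0 * k)
    return u

# exact logistic solution with u(0) = 0.5
exact = lambda t: np.exp(t) / (1.0 + np.exp(t))
```

Halving the step size reduces the final-time error by a factor of about four, consistent with the second-order accuracy that the discrete maximal regularity framework is used to establish in the PDE setting.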

Many experimental paradigms in neuroscience involve driving the nervous system with periodic sensory stimuli. Neural signals recorded using a variety of techniques will then include phase-locked oscillations at the stimulation frequency. The analysis of such data often involves standard univariate statistics such as T-tests, conducted on the Fourier amplitude components (ignoring phase), either to test for the presence of a signal, or to compare signals across different conditions. However, the assumptions of these tests will sometimes be violated because amplitudes are not normally distributed, and furthermore weak signals might be missed if the phase information is discarded. An alternative approach is to conduct multivariate statistical tests using the real and imaginary Fourier components. Here the performance of two multivariate extensions of the T-test are compared: Hotelling's $T^2$ and a variant called $T^2_{circ}$. A novel test of the assumptions of $T^2_{circ}$ is developed, based on the condition index of the data (the square root of the ratio of eigenvalues of a bounding ellipse), and a heuristic for excluding outliers using the Mahalanobis distance is proposed. The $T^2_{circ}$ statistic is then extended to multi-level designs, resulting in a new statistical test termed $ANOVA^2_{circ}$. This has identical assumptions to $T^2_{circ}$, and is shown to be more sensitive than MANOVA when these assumptions are met. The use of these tests is demonstrated for two publicly available empirical data sets, and practical guidance is suggested for choosing which test to run. Implementations of these novel tools are provided as an R package and a Matlab toolbox, in the hope that their wider adoption will improve the sensitivity of statistical inferences involving periodic data.
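The two statistics can be sketched directly on complex Fourier components, treating real and imaginary parts as a bivariate sample. The normalization of the $T^2_{circ}$ variant below follows one common convention and should be checked against the original references before using exact critical values; the data are synthetic.

```python
import numpy as np

def hotelling_t2(z):
    """Hotelling's T^2 for complex Fourier components z_1..z_n,
    using the full 2x2 sample covariance of (real, imag)."""
    X = np.column_stack([z.real, z.imag])
    n = X.shape[0]
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)            # unbiased sample covariance
    return n * xbar @ np.linalg.solve(S, xbar)

def t2_circ(z):
    """T^2_circ variant: assumes equal variance of real and imaginary
    parts and zero correlation, pooling both into one variance estimate.
    Normalization here is one common convention (assumption)."""
    n = len(z)
    zbar = z.mean()
    V = np.sum(np.abs(z - zbar) ** 2) / (n - 1)
    return n * np.abs(zbar) ** 2 / V

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2)) + np.array([1.0, 0.5])
z = X[:, 0] + 1j * X[:, 1]
```

A useful sanity check is that Hotelling's $T^2$ is invariant under any invertible linear transformation of the (real, imag) plane, whereas $T^2_{circ}$ is not, which is precisely why its circularity assumptions need testing.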

In this work we propose a new, arbitrary order space-time finite element discretisation for Hamiltonian PDEs in multisymplectic formulation. We show that the new method, which is obtained by using both continuous and discontinuous discretisations in space, admits a local and global conservation law of energy. We also show existence and uniqueness of solutions of the discrete equations. Further, we illustrate the error behaviour and the conservation properties of the proposed discretisation in extensive numerical experiments on the linear and nonlinear wave equation and on the nonlinear Schr\"odinger equation.

We introduce a formulation of optimal transport problem for distributions on function spaces, where the stochastic map between functional domains can be partially represented in terms of an (infinite-dimensional) Hilbert-Schmidt operator mapping a Hilbert space of functions to another. For numerous machine learning tasks, data can be naturally viewed as samples drawn from spaces of functions, such as curves and surfaces, in high dimensions. Optimal transport for functional data analysis provides a useful framework of treatment for such domains. In this work, we develop an efficient algorithm for finding the stochastic transport map between functional domains and provide theoretical guarantees on the existence, uniqueness, and consistency of our estimate for the Hilbert-Schmidt operator. We validate our method on synthetic datasets and study the geometric properties of the transport map. Experiments on real-world datasets of robot arm trajectories further demonstrate the effectiveness of our method on applications in domain adaptation.
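For intuition, the discrete (finite-dimensional) counterpart of an optimal transport problem can be solved with entropic regularisation and Sinkhorn iterations. This is a standard discrete sketch, not the paper's functional formulation with Hilbert-Schmidt operators; grid, marginals, and the regularisation strength are illustrative choices.

```python
import numpy as np

def sinkhorn_plan(a, b, C, eps=0.1, iters=5000):
    """Entropic-regularised optimal transport plan between discrete
    distributions a and b with cost matrix C, via Sinkhorn scaling."""
    K = np.exp(-C / eps)
    v = np.ones_like(b)
    for _ in range(iters):
        u = a / (K @ v)        # enforce row marginal a
        v = b / (K.T @ u)      # enforce column marginal b
    return u[:, None] * K * v[None, :]

# two discrete distributions on [0, 1] with squared-distance cost
x = np.linspace(0.0, 1.0, 50)
a = np.exp(-((x - 0.3) ** 2) / 0.01); a /= a.sum()
b = np.exp(-((x - 0.7) ** 2) / 0.02); b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn_plan(a, b, C)
```

The returned plan `P` has (approximately) the prescribed marginals; in the functional setting of the paper, the analogue of this coupling is parametrised through an operator between Hilbert spaces rather than a matrix.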

We introduce a general framework for enforcing local or global maximum principles in high-order space-time discretizations of a scalar hyperbolic conservation law. We begin with sufficient conditions for a space discretization to be bound preserving (BP) and satisfy a semi-discrete maximum principle. Next, we propose a global monolithic convex (GMC) flux limiter which has the structure of a flux-corrected transport (FCT) algorithm but is applicable to spatial semi-discretizations and ensures the BP property of the fully discrete scheme for strong stability preserving (SSP) Runge-Kutta time discretizations. To circumvent the order barrier for SSP time integrators, we constrain the intermediate stages and/or the final stage of a general high-order RK method using GMC-type limiters. In this work, our theoretical and numerical studies are restricted to explicit schemes which are provably BP for sufficiently small time steps. The new GMC limiting framework offers the possibility of relaxing the bounds of inequality constraints to achieve higher accuracy at the cost of more stringent time step restrictions. The ability of the presented limiters to preserve global bounds and recognize well-resolved smooth solutions is verified numerically for three representative RK methods combined with weighted essentially nonoscillatory (WENO) finite volume space discretizations of linear and nonlinear test problems in 1D.
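The bound-preserving mechanism behind FCT-type limiting can be sketched in the simplest setting: 1D linear advection with an upwind low-order scheme, a central high-order flux, and Zalesak-style clipping of the antidiffusive fluxes. This is the classical FCT building block that the GMC limiter generalizes, not the GMC limiter itself; grid sizes and the test profile are illustrative.

```python
import numpy as np

def fct_advection_step(u, dt, dx):
    """One flux-corrected-transport step for u_t + u_x = 0 on a periodic
    grid (velocity 1). Low-order flux: upwind; high-order flux: central.
    Antidiffusive fluxes are limited (Zalesak clipping) so the update
    stays within local bounds of the low-order transported solution."""
    nu = dt / dx
    up = np.roll(u, 1)                       # u_{i-1}
    un = np.roll(u, -1)                      # u_{i+1}

    utd = u - nu * (u - up)                  # low-order (upwind) update
    A = 0.5 * (un - u)                       # antidiffusive flux A_{i+1/2}

    # local bounds from the transported low-order solution
    umax = np.maximum.reduce([np.roll(utd, 1), utd, np.roll(utd, -1)])
    umin = np.minimum.reduce([np.roll(utd, 1), utd, np.roll(utd, -1)])

    Ain = np.roll(A, 1)                      # A_{i-1/2}
    Pp = np.maximum(Ain, 0) - np.minimum(A, 0)   # antidiffusive flux into i
    Pm = np.maximum(A, 0) - np.minimum(Ain, 0)   # antidiffusive flux out of i
    Qp = (umax - utd) / nu
    Qm = (utd - umin) / nu
    Rp = np.where(Pp > 0, np.minimum(1.0, Qp / np.where(Pp > 0, Pp, 1.0)), 0.0)
    Rm = np.where(Pm > 0, np.minimum(1.0, Qm / np.where(Pm > 0, Pm, 1.0)), 0.0)

    # correction factor for the flux A_{i+1/2}
    alpha = np.where(A >= 0,
                     np.minimum(np.roll(Rp, -1), Rm),
                     np.minimum(Rp, np.roll(Rm, -1)))
    lim = alpha * A
    return utd - nu * (lim - np.roll(lim, 1))

# advect a step profile: bounds [0, 1] and total mass are preserved
u = np.zeros(100)
u[40:60] = 1.0
for _ in range(20):
    u = fct_advection_step(u, dt=0.005, dx=0.01)
```

Because the limited update is written in conservative flux form, total mass is preserved exactly, while the clipping guarantees the discrete maximum principle for CFL numbers at most one.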

We consider the $\alpha$-sine transform of the form $T_\alpha f(y)=\int_0^\infty\vert\sin(xy)\vert^\alpha f(x)dx$ for $\alpha>-1$, where $f$ is an integrable function on $\mathbb{R}_+$. First, the inversion of this transform for $\alpha>1$ is discussed in the context of a more general family of integral transforms on the space of weighted, square-integrable functions on the positive real line. In an alternative approach, we show that the $\alpha$-sine transform of a function $f$ admits a series representation for all $\alpha>-1$, which involves the Fourier transform of $f$ and coefficients which can all be explicitly computed with the Gauss hypergeometric theorem. Based on this series representation we construct a system of linear equations whose solution is an approximation of the Fourier transform of $f$ at equidistant points. Sampling theory and Fourier inversion allow us to compute an estimate of $f$ from its $\alpha$-sine transform. The same approach can be extended to a similar $\alpha$-cosine transform on $\mathbb{R}_+$ for $\alpha>-1$, and the two-dimensional spherical $\alpha$-sine and cosine transforms for $\alpha>-1$, $\alpha\neq 0,2,4,\dots$. In an extensive numerical analysis, we consider a number of examples, and compare the inversion results of both methods presented.
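The transform itself is straightforward to evaluate numerically for decaying $f$, which also provides closed-form sanity checks: for $\alpha=0$ the integrand reduces to $f$, and for $\alpha=2$ one has $T_2 f(y)=\tfrac12\int_0^\infty f(x)(1-\cos 2xy)\,dx$, giving $2y^2/(1+4y^2)$ for $f(x)=e^{-x}$. A minimal quadrature sketch (the truncation point is an assumption suited to exponentially decaying $f$):

```python
import numpy as np
from scipy.integrate import quad

def alpha_sine_transform(f, y, alpha, upper=60.0):
    """T_alpha f(y) = int_0^inf |sin(xy)|^alpha f(x) dx, evaluated by
    adaptive quadrature; 'upper' truncates the half-line, which is
    adequate when f decays rapidly."""
    val, _ = quad(lambda x: abs(np.sin(x * y)) ** alpha * f(x),
                  0.0, upper, limit=400)
    return val

f = lambda x: np.exp(-x)
```

Forward evaluation like this supplies the data for the inversion experiments; the series-based inversion then recovers the Fourier transform of $f$ from such values.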

This paper develops a two-level fourth-order scheme for solving the time-fractional convection-diffusion-reaction equation with variable coefficients, subject to suitable initial and boundary conditions. The basic properties of the new approach are investigated, and both stability and error estimates of the proposed numerical scheme are analyzed in depth in the $L^{\infty}(0,T;L^{2})$-norm. The theory indicates that the method is unconditionally stable with convergence of order $O(k^{2-\frac{\lambda}{2}}+h^{4})$, where $k$ and $h$ are the time step and mesh size, respectively, and $\lambda\in(0,1)$. This result suggests that the two-level fourth-order technique is more efficient than a large class of numerical techniques widely studied in the literature for the considered problem. Numerical evidence is provided to verify the unconditional stability and convergence rate of the proposed algorithm.
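A rate such as $O(k^{2-\lambda/2}+h^{4})$ is typically verified by computing observed orders from errors on a sequence of refined steps. A small sketch of that calculation, using synthetic errors that follow the claimed temporal rate exactly (the constants and step sizes are illustrative, not from the paper):

```python
import numpy as np

def observed_order(errors, steps):
    """Estimate convergence order p from errors e_i ~ C * h_i^p on a
    sequence of step sizes, via successive log ratios."""
    e, s = np.asarray(errors), np.asarray(steps)
    return np.log(e[:-1] / e[1:]) / np.log(s[:-1] / s[1:])

lam = 0.4
ks = np.array([0.1, 0.05, 0.025, 0.0125])
errs = 3.0 * ks ** (2.0 - lam / 2.0)   # synthetic errors at rate 2 - lam/2
p = observed_order(errs, ks)
```

Applied to actual scheme errors at fixed small $h$, the entries of `p` should approach $2-\lambda/2$, and a fixed-$k$ spatial refinement study analogously yields order four.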
