
This paper presents a novel spatial discretisation method for the reliable and efficient simulation of Bose-Einstein condensates modelled by the Gross-Pitaevskii equation and the corresponding nonlinear eigenvector problem. The method combines the high-accuracy properties of numerical homogenisation methods with a novel super-localisation approach for the calculation of the basis functions. A rigorous numerical analysis demonstrates superconvergence of the approach compared to classical polynomial and multiscale finite element methods, even in low regularity regimes. Numerical tests reveal the method's competitiveness with spectral methods, particularly in capturing critical physical effects in extreme conditions, such as vortex lattice formation in fast-rotating potential traps. The method's potential is further highlighted through a dynamic simulation of a phase transition from Mott insulator to Bose-Einstein condensate, emphasising its capability for reliable exploration of physical phenomena.
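
For context, the stationary Gross-Pitaevskii problem targeted by such discretisations is usually written as a constrained nonlinear eigenvalue problem (the form below is the standard textbook statement, not quoted from the paper):

$$ -\tfrac{1}{2}\Delta u + V u + \beta |u|^2 u = \lambda u \quad \text{in } \Omega, \qquad \|u\|_{L^2(\Omega)} = 1, $$

where $V$ is the trapping potential and $\beta \ge 0$ the particle-interaction parameter; for fast-rotating traps an additional angular-momentum term of the form $-\Omega L_z u$ enters, which produces the vortex lattices mentioned above.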

Related content

This paper addresses the fundamental task of estimating covariance matrix functions for high-dimensional functional data/functional time series. We consider two functional factor structures encompassing either functional factors with scalar loadings or scalar factors with functional loadings, and postulate functional sparsity on the covariance of idiosyncratic errors after taking out the common unobserved factors. To facilitate estimation, we rely on the spiked matrix model and its functional generalization, and derive some novel asymptotic identifiability results, based on which we develop the DIGIT and FPOET estimators under the two functional factor models, respectively. Both estimators involve performing the associated eigenanalysis to estimate the covariance of common components, followed by adaptive functional thresholding applied to the residual covariance. We also develop functional information criteria for the purpose of model selection. The convergence rates of the estimated factors, loadings, and conditional sparse covariance matrix functions under various functional matrix norms are established for the DIGIT and FPOET estimators, respectively. Numerical studies, including extensive simulations and two real data applications on mortality rates and functional portfolio allocation, are conducted to examine the finite-sample performance of the proposed methodology.
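
To make the eigenanalysis-plus-thresholding pipeline concrete, the following is a minimal scalar (non-functional) sketch of a POET-type estimator. The function name, the fixed number of factors k, and the hard-thresholding rule are illustrative assumptions; the paper's DIGIT and FPOET estimators operate on covariance matrix functions with adaptive functional thresholding.

```python
import numpy as np

def factor_threshold_cov(X, k, tau):
    """POET-style covariance estimate for an (n, p) data matrix X.

    Illustrative scalar sketch only: the paper's DIGIT/FPOET estimators
    act on covariance matrix *functions* and use adaptive functional
    thresholding; here a fixed hard threshold tau is used instead.
    """
    n, _ = X.shape
    Xc = X - X.mean(axis=0)                     # center the observations
    S = Xc.T @ Xc / n                           # sample covariance, (p, p)
    w, V = np.linalg.eigh(S)                    # eigenvalues, ascending order
    w, V = w[::-1][:k], V[:, ::-1][:, :k]       # top-k spiked directions
    low_rank = (V * w) @ V.T                    # covariance of common component
    R = S - low_rank                            # residual (idiosyncratic) part
    R_thr = np.where(np.abs(R) >= tau, R, 0.0)  # hard-threshold small entries
    np.fill_diagonal(R_thr, np.diag(R))         # keep the diagonal untouched
    return low_rank + R_thr
```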

Implicit models for magnetic coenergy have been proposed by Pera et al. to describe the anisotropic nonlinear material behavior of electrical steel sheets. This approach aims at predicting the magnetic response for any direction of excitation by interpolating measured B--H curves in the rolling and transverse directions. In an analogous manner, an implicit model for magnetic energy is proposed. We highlight some mathematical properties of these implicit models, discuss their numerical realization, outline the computation of magnetic material laws via implicit differentiation, and discuss their potential use for finite element analysis in the context of nonlinear magnetostatics.
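
For reference, the material laws obtained from an energy density $w(B)$ and a coenergy density $w^*(H)$ are the standard dual pair (general magnetostatics facts, not specific to the implicit models above):

$$ H = \frac{\partial w}{\partial B}(B), \qquad B = \frac{\partial w^*}{\partial H}(H), \qquad w(B) + w^*(H) = H \cdot B \ \text{ for conjugate pairs } (B, H). $$

When the (co)energy is only available implicitly, these derivatives, and the differential permeability needed for Newton-type finite element iterations, are obtained via implicit differentiation, as outlined in the abstract.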

A comprehensive mathematical model of the multiphysics flow of blood and Cerebrospinal Fluid (CSF) in the brain can be expressed as the coupling of a poromechanics system and Stokes' equations: the former describes fluid filtration through the cerebral tissue and the tissue's elastic response, while the latter models the flow of the CSF in the brain ventricles. This model describes the functioning of the brain's waste clearance mechanism, which has recently been discovered to play an essential role in the progression of neurodegenerative diseases. To model the interactions between different scales in the porous medium, we propose a physically consistent coupling between Multi-compartment Poroelasticity (MPE) equations and Stokes' equations. In this work, we introduce a numerical scheme for the discretization of such a coupled MPE-Stokes system, employing a high-order discontinuous Galerkin method on polytopal grids to efficiently account for the geometric complexity of the domain. We analyze the stability and convergence of the space-semidiscretized formulation, prove a priori error estimates, and present a temporal discretization based on a combination of Newmark's $\beta$-method for the elastic wave equation and the $\theta$-method for the other equations of the model. Numerical simulations carried out on test cases with manufactured solutions validate the theoretical error estimates. We also present numerical results on a two-dimensional slice of a patient-specific brain geometry reconstructed from diagnostic images, to test in practice the advantages of the proposed approach.
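
As a reminder of the time-stepping ingredients named above, the classical Newmark update for a second-order system with displacement $u$, velocity $\dot u$, and acceleration $\ddot u$ reads (standard scheme; the paper's particular parameter choices are not reproduced here):

$$ u_{n+1} = u_n + \Delta t\, \dot u_n + \Delta t^2 \Big[ \big(\tfrac12 - \beta\big)\, \ddot u_n + \beta\, \ddot u_{n+1} \Big], \qquad \dot u_{n+1} = \dot u_n + \Delta t \Big[ (1-\gamma)\, \ddot u_n + \gamma\, \ddot u_{n+1} \Big], $$

while the $\theta$-method applied to a first-order equation $\dot y = f(t, y)$ takes the form $y_{n+1} = y_n + \Delta t \big[ (1-\theta)\, f(t_n, y_n) + \theta\, f(t_{n+1}, y_{n+1}) \big]$.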

In this work we extend the shifted Laplacian approach to the elastic Helmholtz equation. The shifted Laplacian multigrid method is a common preconditioning approach for the discretized acoustic Helmholtz equation. In some cases, like geophysical seismic imaging, one needs to consider the elastic Helmholtz equation, which is harder to solve: it is three times larger and contains a nullity-rich grad-div term. These properties make the solution of the equation more difficult for multigrid solvers. The key idea in this work is combining the shifted Laplacian with approaches for linear elasticity. We provide local Fourier analysis and numerical evidence that the convergence rate of our method is independent of Poisson's ratio. Moreover, to better handle the problem size, we complement our multigrid method with the domain decomposition approach, which works in synergy with the local nature of the shifted Laplacian, so we enjoy the advantages of both methods without sacrificing performance. We demonstrate the efficiency of our solver on 2D and 3D problems in heterogeneous media.
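
For orientation, the acoustic shifted Laplacian idea preconditions the Helmholtz operator with a complex-shifted copy of itself (generic form of the technique; the elastic variant mentioned afterwards is an informal analogue, not the paper's exact operator):

$$ \mathcal{A} u = -\Delta u - k^2 u, \qquad \mathcal{M}_\alpha u = -\Delta u - (1 - \mathrm{i}\alpha)\, k^2 u, \quad \alpha > 0, $$

where $\mathcal{M}_\alpha$ is inverted approximately by multigrid and used as a preconditioner for a Krylov method applied to $\mathcal{A}$; in the elastic case, the complex shift is applied analogously to the $\rho\omega^2$ mass term of the elastic Helmholtz operator.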

We study the numerical approximation of a coupled hyperbolic-parabolic system by a family of discontinuous Galerkin space-time finite element methods. The model is rewritten as a first-order evolutionary problem that is treated by the unified abstract solution theory of R.\ Picard. To preserve the mathematical structure of the evolutionary equation on the fully discrete level, suitable generalizations of the distributional gradient and divergence operators are defined on the broken polynomial spaces on which the discontinuous Galerkin approach is built. Well-posedness of the fully discrete problem and error estimates for the discontinuous Galerkin approximation in space and time are proved.
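
In Picard's framework, the first-order evolutionary problem referred to above has the abstract form (standard statement of the theory; the concrete operators for the hyperbolic-parabolic coupling are those derived in the paper):

$$ \big( \partial_t M_0 + M_1 + A \big)\, U = F, $$

posed in a weighted space-time Hilbert space, where $M_0$ is selfadjoint and nonnegative, $M_1$ collects the zero-order contributions, and $A$ is a skew-selfadjoint spatial operator, typically built from a gradient-divergence pair; well-posedness requires $\rho M_0 + \operatorname{sym} M_1 \ge c > 0$ uniformly for all sufficiently large weights $\rho$.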

This paper describes a purely functional library for computing level-$p$-complexity of Boolean functions, and applies it to two-level iterated majority. Boolean functions are simply functions from $n$ bits to one bit, and they can describe digital circuits, voting systems, etc. An example of a Boolean function is majority, which returns the value that has majority among the $n$ input bits for odd $n$. The complexity of a Boolean function $f$ measures the cost of evaluating it: how many bits of the input are needed to be certain about the result of $f$. There are many competing complexity measures but we focus on level-$p$-complexity -- a function of the probability $p$ that a bit is 1. The level-$p$-complexity $D_p(f)$ is the minimum expected cost when the input bits are independent and identically distributed with Bernoulli($p$) distribution. We specify the problem as choosing the minimum expected cost of all possible decision trees -- which directly translates to a clearly correct, but very inefficient implementation. The library uses thinning and memoization for efficiency and type classes for separation of concerns. The complexity is represented using (sets of) polynomials, and the order relation used for thinning is implemented using polynomial factorisation and root-counting. Finally we compute the complexity for two-level iterated majority and improve on an earlier result by J.~Jansson.
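
To spell out the quantity being minimised: if a decision tree $t$ for $f$ first queries bit $i$ and continues with subtree $t_0$ or $t_1$ according to the outcome, its expected cost under i.i.d. Bernoulli($p$) inputs satisfies the recursion below, and $D_p(f)$ is the minimum over all decision trees computing $f$ (a worked restatement of the definition above, not an excerpt from the paper):

$$ c_t(p) = 1 + (1-p)\, c_{t_0}(p) + p\, c_{t_1}(p), \qquad c_{\text{leaf}}(p) = 0, \qquad D_p(f) = \min_{t \text{ computes } f} c_t(p). $$

This is why each candidate cost is a polynomial in $p$ and why the order relation used for thinning can be decided by comparing polynomials on the unit interval via factorisation and root-counting.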

This paper presents the workspace optimization of one-translational two-rotational (1T2R) parallel manipulators using a dimensionally homogeneous constraint-embedded Jacobian. The mixed degrees of freedom of 1T2R parallel manipulators, which cause dimensional inconsistency, make it difficult to optimize their architectural parameters. To solve this problem, a point-based approach with a shifting property, selection matrix, and constraint-embedded inverse Jacobian is proposed. A simplified formulation is provided, eliminating the complex partial differentiation required in previous approaches. The dimensional homogeneity of the proposed method was analytically proven, and its validity was confirmed by comparing it with the conventional point-based method using a 3-PRS manipulator. Furthermore, the approach was applied to an asymmetric 2-RRS/RRRU manipulator with no parasitic motion. This mechanism has a T-shaped combination of limbs with different kinematic parameters, making it challenging to derive a dimensionally homogeneous Jacobian using the conventional method. Finally, optimization was performed, and the results show that the proposed method is more efficient than the conventional approach. The efficiency and simplicity of the proposed method were verified using two distinct parallel manipulators.
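
The dimensional-inconsistency issue can be illustrated as follows (an informal sketch of the general idea behind point-based Jacobians, not the paper's derivation): a conventional Jacobian for a 1T2R mechanism maps joint rates to a mixed twist, whereas a point-based Jacobian maps them to linear velocities of selected platform points, so that every row carries the same unit:

$$ \begin{pmatrix} \dot z \\ \omega_x \\ \omega_y \end{pmatrix} = J\, \dot{q} \quad \text{(mixed units: m/s and rad/s)}, \qquad \begin{pmatrix} \dot{p}_1 \\ \vdots \\ \dot{p}_m \end{pmatrix} = J_p\, \dot{q} \quad \text{(uniform units: m/s)}, $$

which makes norm- and condition-number-based workspace measures well defined.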

We study Whitney-type estimates for approximation of convex functions in the uniform norm on various convex multivariate domains while paying particular attention to the dependence of the involved constants on the dimension and the geometry of the domain.
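
For readers unfamiliar with the terminology, a Whitney-type estimate on a convex body $\Omega \subset \mathbb{R}^d$ bounds the error of best uniform approximation by polynomials of total degree less than $r$ in terms of the $r$-th modulus of smoothness (generic form of such estimates; the paper's results concern the behaviour of the constant for convex functions):

$$ E_{r}(f)_{C(\Omega)} := \inf_{P \in \mathcal{P}_{r-1}} \| f - P \|_{C(\Omega)} \le C(r, d, \Omega)\, \omega_r\!\big( f, \operatorname{diam} \Omega \big)_{C(\Omega)}. $$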

This paper focuses on the construction of accurate and predictive data-driven reduced models of large-scale numerical simulations with complex dynamics and sparse training data sets. In these settings, standard, single-domain approaches may be too inaccurate or may overfit and hence generalize poorly. Moreover, processing large-scale data sets typically requires significant memory and computing resources, which can render single-domain approaches computationally prohibitive. To address these challenges, we introduce a domain decomposition formulation into the construction of a data-driven reduced model. In doing so, the basis functions used in the reduced model approximation become localized in space, which can increase the accuracy of the domain-decomposed approximation of the complex dynamics. The decomposition furthermore reduces the memory and computing requirements to process the underlying large-scale training data set. We demonstrate the effectiveness and scalability of our approach in a large-scale three-dimensional unsteady rotating detonation rocket engine simulation scenario with over $75$ million degrees of freedom and a sparse training data set. Our results show that, compared to the single-domain approach, the domain-decomposed version reduces both the training and prediction errors by up to $13\%$ for pressure and by up to $5\%$ for other key quantities, such as temperature and the fuel and oxidizer mass fractions. Lastly, our approach decreases the memory requirements for processing by almost a factor of four, which in turn reduces the computing requirements as well.
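
The following is a minimal sketch of the localisation step described above: the full state is partitioned by spatial subdomain and a separate reduced basis is computed per subdomain via POD/SVD. The function names and the plain POD truncation are illustrative assumptions; the paper's reduced operators would subsequently be learned from the projected training data, which is not shown here.

```python
import numpy as np

def local_pod_bases(snapshots, subdomain_dofs, r):
    """Compute one POD basis per spatial subdomain.

    snapshots:      (n_dofs, n_snapshots) training-data matrix
    subdomain_dofs: list of integer index arrays, one per subdomain
    r:              number of basis vectors retained per subdomain
    """
    bases = []
    for dofs in subdomain_dofs:
        U, _, _ = np.linalg.svd(snapshots[dofs, :], full_matrices=False)
        bases.append(U[:, :r])                  # local basis, localized in space
    return bases

def encode(snapshots, subdomain_dofs, bases):
    """Stack the reduced coordinates of all subdomains, one column per snapshot."""
    return np.vstack([B.T @ snapshots[dofs, :]
                      for dofs, B in zip(subdomain_dofs, bases)])
```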

Gaussian processes (GPs) are widely used tools in spatial statistics and machine learning. The formulae for the mean function and covariance kernel of a GP $T u$ that is the image of another GP $u$ under a linear transformation $T$ acting on the sample paths of $u$ are well known, almost to the point of being folklore. However, these formulae are often used without rigorous attention to technical details, particularly when $T$ is an unbounded operator such as a differential operator, which is common in many modern applications. This note provides a self-contained proof of the claimed formulae for the case of a closed, densely-defined operator $T$ acting on the sample paths of a square-integrable (not necessarily Gaussian) stochastic process. Our proof technique relies upon Hille's theorem for the Bochner integral of a Banach-valued random variable.
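
The formulae in question are, for a process $u$ with mean function $m$ and covariance kernel $k$ and an operator $T$ applied to the sample paths (stated here in the usual informal kernel notation; the note's contribution is their rigorous justification):

$$ m_{Tu} = T m, \qquad k_{Tu}(s, t) = T_s\, T_t\, k(s, t), $$

where $T_s$ and $T_t$ denote the action of $T$ on the first and second argument of $k$, respectively; for example, if $T = \mathrm{d}/\mathrm{d}x$, then $k_{Tu}(s,t) = \partial_s \partial_t k(s,t)$.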
