
In the present work we propose and study a time discrete scheme for the following chemotaxis-consumption model (for any $s\ge 1$), $$ \partial_t u - \Delta u = - \nabla \cdot (u \nabla v), \quad \partial_t v - \Delta v = - u^s v \quad \hbox{in $(0,T)\times \Omega$,}$$ endowed with isolated boundary conditions and initial conditions, where $(u,v)$ model cell density and chemical signal concentration. The proposed scheme is defined via a reformulation of the model, using the auxiliary variable $z = \sqrt{v + \alpha^2}$ combined with a Backward Euler scheme for the $(u,z)$-problem and an upper truncation of $u$ in the nonlinear chemotaxis and consumption terms. Then, two different ways of retrieving an approximation for the function $v$ are provided. We prove the existence of a solution to the time discrete scheme and establish uniform-in-time \emph{a priori} estimates, yielding the convergence of the scheme towards a weak solution $(u,v)$ of the chemotaxis-consumption model.
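The scheme of the text is implicit in the $(u,z)$ variables with truncation; as a minimal illustration of the flux structure and of the exact mass conservation that the isolated (no-flux) boundary conditions impose, here is an explicit one-dimensional finite-volume sketch in the original $(u,v)$ variables. The grid, time step, and $s$ are placeholder choices, and this is not the authors' scheme.

```python
import numpy as np

def step(u, v, dx, dt, s=1):
    """One explicit finite-volume step for u_t = u_xx - (u v_x)_x,
    v_t = v_xx - u**s * v, with no-flux (isolated) boundaries in 1D.
    Illustrative sketch only; the text's scheme is implicit in (u, z)."""
    # combined flux -u_x + u v_x at interior cell faces
    F = -(u[1:] - u[:-1]) / dx + 0.5 * (u[1:] + u[:-1]) * (v[1:] - v[:-1]) / dx
    F = np.concatenate(([0.0], F, [0.0]))        # isolated boundary: zero flux
    u_new = u - dt * (F[1:] - F[:-1]) / dx       # interior fluxes telescope
    lap = np.empty_like(v)
    lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    lap[0] = (v[1] - v[0]) / dx**2               # reflecting (Neumann) ends
    lap[-1] = (v[-2] - v[-1]) / dx**2
    v_new = v + dt * (lap - u**s * v)
    return u_new, v_new
```

Because the interior fluxes telescope and the boundary fluxes vanish, the total cell mass $\sum_i u_i\,\mathrm{d}x$ is preserved to roundoff, mirroring the conservation of $\int_\Omega u$ in the continuous model.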


The Laguerre functions $l_{n,\tau}^\alpha$, $n=0,1,\dots$, are constructed from generalized Laguerre polynomials. The functions $l_{n,\tau}^\alpha$ depend on two parameters: scale $\tau>0$ and order of generalization $\alpha>-1$, and form an orthogonal basis in $L_2[0,\infty)$. Let the spectrum of a square matrix $A$ lie in the open left half-plane. Then the matrix exponential $H_A(t)=e^{At}$, $t>0$, belongs to $L_2[0,\infty)$. Hence the matrix exponential $H_A$ can be expanded in a series $H_A=\sum_{n=0}^\infty S_{n,\tau,\alpha,A}\,l_{n,\tau}^\alpha$. An estimate of the norm $\Bigl\lVert H_A-\sum_{n=0}^N S_{n,\tau,\alpha,A}\,l_{n,\tau}^\alpha\Bigr\rVert_{L_2[0,\infty)}$ is proposed. Finding the minimum of this estimate over $\tau$ and $\alpha$ is discussed. Numerical examples show that the optimal $\alpha$ is often almost 0, which simplifies the problem considerably.
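For a concrete feel for the expansion at $\alpha=0$ (the regime the numerics favor), here is a scalar sketch with $H(t)=e^{at}$, $a<0$, standing in for one entry of $e^{At}$. The normalization $l_{n,\tau}^0(t)=\sqrt{\tau}\,e^{-\tau t/2}L_n(\tau t)$ and the closed-form coefficients via the Laplace transform $\int_0^\infty e^{-pt}L_n(\tau t)\,dt=(p-\tau)^n/p^{n+1}$ are our assumptions, not necessarily the paper's conventions.

```python
import numpy as np
from numpy.polynomial.laguerre import Laguerre

def laguerre_fn(n, tau, t):
    # orthonormal Laguerre function of order alpha = 0 (assumed normalization):
    # l_{n,tau}^0(t) = sqrt(tau) * exp(-tau*t/2) * L_n(tau*t)
    coef = np.zeros(n + 1); coef[n] = 1.0
    return np.sqrt(tau) * np.exp(-tau * t / 2.0) * Laguerre(coef)(tau * t)

def coeffs(a, tau, N):
    # S_n = int_0^inf e^{a t} l_{n,tau}^0(t) dt, closed form from the
    # Laplace transform int_0^inf e^{-p t} L_n(tau*t) dt = (p - tau)^n / p^{n+1}
    p = tau / 2.0 - a                  # requires a < 0 (stable "matrix")
    n = np.arange(N + 1)
    return np.sqrt(tau) * (p - tau) ** n / p ** (n + 1)

a, tau, N = -1.0, 1.0, 30              # assumed scalar test case
S = coeffs(a, tau, N)
# Parseval: squared L2 truncation error = ||e^{at}||^2 - sum_{n<=N} S_n^2,
# with ||e^{at}||^2_{L2[0,inf)} = -1/(2a)
err2 = -1.0 / (2.0 * a) - np.cumsum(S ** 2)
```

The cumulative Parseval error `err2` decays geometrically in $N$ with ratio $\lvert(p-\tau)/p\rvert^2$, which is what makes minimizing the error estimate over $\tau$ (and $\alpha$) worthwhile.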

In this work we propose an adaptive multilevel version of subset simulation to estimate the probability of rare events for complex physical systems. Given a sequence of nested failure domains of increasing size, the rare event probability is expressed as a product of conditional probabilities. The proposed new estimator uses different model resolutions and varying numbers of samples across the hierarchy of nested failure sets. In order to dramatically reduce the computational cost, we construct the intermediate failure sets such that only a small number of expensive high-resolution model evaluations are needed, whilst the majority of samples can be taken from inexpensive low-resolution simulations. A key idea in our new estimator is the use of a posteriori error estimators combined with a selective mesh refinement strategy to guarantee the critical subset property that may be violated when changing model resolution from one failure set to the next. The efficiency gains and the statistical properties of the estimator are investigated both theoretically, via shaking transformations, and numerically. On a model problem from subsurface flow, the new multilevel estimator achieves gains of more than a factor of 60 over standard subset simulation for a practically relevant relative error of 25%.
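For context, here is a sketch of the standard single-resolution subset simulation that the multilevel estimator is benchmarked against, with Au-Beck-style modified Metropolis moves regrowing the population between nested levels. The limit-state function, dimension, and tuning constants below are placeholder choices.

```python
import numpy as np

def subset_simulation(g, d, b, n=2000, p0=0.1, rng=None, max_levels=15):
    """Standard subset simulation for P(g(X) >= b), X ~ N(0, I_d):
    the rare-event probability is the product of conditional probabilities
    over nested failure sets {g >= level_1} ⊇ {g >= level_2} ⊇ ..."""
    rng = np.random.default_rng(rng)
    x = rng.standard_normal((n, d))
    gx = np.apply_along_axis(g, 1, x)
    prob = 1.0
    for _ in range(max_levels):
        level = np.quantile(gx, 1.0 - p0)   # next intermediate threshold
        if level >= b:
            return prob * np.mean(gx >= b)  # final conditional factor
        prob *= p0
        seeds = x[gx >= level]
        per = -(-n // len(seeds))           # chain length per seed (ceil)
        samples = []
        for s in seeds:                     # modified Metropolis per seed
            cur = s.copy()
            for _ in range(per):
                cand = cur + rng.standard_normal(d)
                # componentwise accept/reject against the N(0,1) prior
                keep = rng.random(d) < np.exp(0.5 * (cur**2 - cand**2))
                cand = np.where(keep, cand, cur)
                if g(cand) >= level:        # stay inside the failure set
                    cur = cand
                samples.append(cur.copy())
        x = np.array(samples[:n])
        gx = np.apply_along_axis(g, 1, x)
    return prob * np.mean(gx >= b)
```

With $g(x)=x_1$ and $b=3$ the exact answer is $1-\Phi(3)\approx 1.35\times 10^{-3}$; the multilevel variant of the text replaces the single expensive $g$ by a hierarchy of resolutions.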

We consider the problem of chance constrained optimization where it is sought to optimize a function and satisfy constraints, both of which are affected by uncertainties. Real-world instances of this problem are particularly challenging because of their inherent computational cost. To tackle such problems, we propose a new Bayesian optimization method. It applies to the situation where the uncertainty comes from some of the inputs, so that it becomes possible to define an acquisition criterion in the joint controlled-uncontrolled input space. The main contribution of this work is an acquisition criterion that accounts for both the average improvement in objective function and the constraint reliability. The criterion is derived following the Stepwise Uncertainty Reduction logic and its maximization provides both optimal controlled and uncontrolled parameters. Analytical expressions are given to efficiently calculate the criterion. Numerical studies on test functions are presented. It is found through experimental comparisons with alternative sampling criteria that the match between the sampling criterion and the problem contributes to the efficiency of the overall optimization. As a side result, an expression for the variance of the improvement is given.
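The SUR-based criterion of the text has its own analytical expressions; as a simpler, related reference point, here is a sketch of the classical expected improvement under a Gaussian predictive distribution (the maximization convention and all names are our assumptions, not the paper's criterion).

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for maximization when the surrogate predicts
    N(mu, sigma^2) at a candidate point; the SUR criterion of the text
    generalizes this idea to the joint controlled-uncontrolled space."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    safe = np.where(sigma > 0, sigma, 1.0)       # avoid division by zero
    z = np.where(sigma > 0, (mu - f_best) / safe, 0.0)
    ei = (mu - f_best) * norm.cdf(z) + sigma * norm.pdf(z)
    # deterministic limit: improvement is max(mu - f_best, 0)
    return np.where(sigma > 0, ei, np.maximum(mu - f_best, 0.0))
```

Since the improvement $I=\max(Y-f^\ast,0)$ has closed-form moments under a Gaussian $Y$, a variance of the improvement (as mentioned in the abstract) can be derived along the same lines.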

Singularly perturbed boundary value problems pose a significant challenge for their numerical approximations because of the presence of sharp boundary layers. These sharp boundary layers are responsible for the stiffness of solutions, which leads to large computational errors if not properly handled. It is well-known that classical numerical methods, as well as Physics-Informed Neural Networks (PINNs), require some special treatment near the boundary, e.g., using extensive mesh refinements or finer collocation points, in order to obtain an accurate approximate solution, especially inside the stiff boundary layer. In this article, we modify the PINNs and construct our new semi-analytic SL-PINNs suitable for singularly perturbed boundary value problems. Performing the boundary layer analysis, we first find the corrector functions describing the singular behavior of the stiff solutions inside boundary layers. Then we obtain the SL-PINN approximations of the singularly perturbed problems by embedding the explicit correctors in the structure of PINNs or by training the correctors together with the PINN approximations. Our numerical experiments confirm that our new SL-PINN methods produce stable and accurate approximations for stiff solutions.
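For a flavor of what such a corrector looks like, consider the assumed one-dimensional model problem $-\varepsilon u'' + u' = 1$ on $(0,1)$ with $u(0)=u(1)=0$ (an illustration only, not a problem from the article). Its exact solution $$u_\varepsilon(x) \;=\; x - \frac{e^{x/\varepsilon}-1}{e^{1/\varepsilon}-1} \;=\; x - e^{(x-1)/\varepsilon} + O\bigl(e^{-1/\varepsilon}\bigr)$$ splits into the smooth outer part $x$ and the corrector $e^{(x-1)/\varepsilon}$, which captures the layer of width $O(\varepsilon)$ at the outflow boundary $x=1$. Embedding such a corrector in the network ansatz leaves the network to learn only the smooth remainder, which is what removes the stiffness from training.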

In this work we consider the two-dimensional time-dependent Navier-Stokes equations with homogeneous Dirichlet/no-slip boundary conditions. We show error estimates for the fully discrete problem, where a discontinuous Galerkin method in time and inf-sup stable finite elements in space are used. Recently, best approximation type error estimates for the Stokes problem in the $L^\infty(I;L^2(\Omega))$, $L^2(I;H^1(\Omega))$ and $L^2(I;L^2(\Omega))$ norms have been shown. The main result of the present work extends the error estimate in the $L^\infty(I;L^2(\Omega))$ norm to the Navier-Stokes equations, by pursuing an error splitting approach and an appropriate duality argument. In order to discuss the stability of solutions to the discrete primal and dual equations, a specially tailored discrete Gronwall lemma is presented. The techniques developed towards showing the $L^\infty(I;L^2(\Omega))$ error estimate also allow us to show best approximation type error estimates in the $L^2(I;H^1(\Omega))$ and $L^2(I;L^2(\Omega))$ norms, which complement the main result.
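For orientation, the classical discrete Gronwall lemma reads as follows (the paper's lemma is specially tailored and differs in detail): if $a_n \le A + \sum_{k=0}^{n-1} b_k a_k$ for all $n\ge 0$, with $A \ge 0$ and $b_k \ge 0$, then $$a_n \;\le\; A \prod_{k=0}^{n-1}\bigl(1 + b_k\bigr) \;\le\; A \exp\Bigl(\sum_{k=0}^{n-1} b_k\Bigr).$$ Bounds of this type are what turn the local stability estimates for the discrete primal and dual equations into estimates that hold uniformly over the time interval.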

The joint modeling of multiple longitudinal biomarkers together with a time-to-event outcome is a challenging modeling task of continued scientific interest. In particular, the computational complexity of high dimensional (generalized) mixed effects models often restricts the flexibility of shared parameter joint models, even when the subject-specific marker trajectories follow highly nonlinear courses. We propose a parsimonious multivariate functional principal components representation of the shared random effects. This allows better scalability, as the dimension of the random effects does not directly increase with the number of markers, only with the chosen number of principal component basis functions used in the approximation of the random effects. The functional principal component representation additionally makes it possible to estimate highly flexible subject-specific random trajectories without parametric assumptions. The modeled trajectories can thus be distinctly different for each biomarker. We build on the framework of flexible Bayesian additive joint models implemented in the R-package 'bamlss', which also supports estimation of nonlinear covariate effects via Bayesian P-splines. The flexible yet parsimonious functional principal components basis used in the estimation of the joint model is first estimated in a preliminary step. We validate our approach in a simulation study and illustrate its advantages by analyzing a study on primary biliary cholangitis.
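A minimal sketch of the preliminary functional-principal-components step on a common time grid, with synthetic trajectories; real joint-model software such as 'bamlss' handles irregular visit times and the joint likelihood, which this toy omits.

```python
import numpy as np

# Hypothetical data: n_subjects x n_gridpoints trajectories built from
# two latent basis functions plus noise (all choices are placeholders).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
scores_true = rng.standard_normal((200, 2)) * np.array([2.0, 0.5])
basis = np.vstack([np.sin(np.pi * t), np.cos(np.pi * t)])
Y = scores_true @ basis + 0.05 * rng.standard_normal((200, 50))

# FPCA via SVD of the centered data matrix:
mu = Y.mean(axis=0)
U, sing, Vt = np.linalg.svd(Y - mu, full_matrices=False)
K = 2                                # retained principal component functions
phi = Vt[:K]                         # estimated eigenfunctions (rows)
xi = (Y - mu) @ phi.T                # subject-specific scores (random effects)
Y_hat = mu + xi @ phi                # parsimonious trajectory reconstruction
```

The point of the representation: the random-effect dimension per subject is $K$ (here 2), regardless of how many markers or grid points there are, which is the scalability argument of the abstract.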

Normal modal logics extending the logic K4.3 of linear transitive frames are known to lack the Craig interpolation property, except some logics of bounded depth such as S5. We turn this `negative' fact into a research question and pursue a non-uniform approach to Craig interpolation by investigating the following interpolant existence problem: decide whether there exists a Craig interpolant between two given formulas in any fixed logic above K4.3. Using a bisimulation-based characterisation of interpolant existence for descriptive frames, we show that this problem is decidable and coNP-complete for all finitely axiomatisable normal modal logics containing K4.3. It is thus not harder than entailment in these logics, which is in sharp contrast to other recent non-uniform interpolation results. We also extend our approach to Priorean temporal logics (with both past and future modalities) over the standard time flows (the integers, rationals, reals, and finite strict linear orders), none of which is blessed with the Craig interpolation property.
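For reference, the notion being decided is the standard one: given $\varphi,\psi$ with $\models_L \varphi \to \psi$, a Craig interpolant is a formula $\chi$ with $$\models_L \varphi \to \chi, \qquad \models_L \chi \to \psi, \qquad \mathrm{var}(\chi) \subseteq \mathrm{var}(\varphi)\cap\mathrm{var}(\psi).$$ A logic has the Craig interpolation property when such a $\chi$ exists for every valid implication; the interpolant existence problem drops this uniformity and asks the question separately for each input pair $(\varphi,\psi)$, which is why it can be decidable (and no harder than entailment) even when the property itself fails.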

In the first part of the paper we study the absolute error of sampling discretization of the integral $L_p$-norm for classes of continuous functions. We use the chaining technique to provide a general bound for the error of sampling discretization of the $L_p$-norm on a given functional class in terms of entropy numbers in the uniform norm of this class. The general result yields new error bounds for sampling discretization of the $L_p$-norms on classes of multivariate functions with mixed smoothness. In the second part of the paper we apply the obtained bounds to study universal sampling discretization and the problem of optimal sampling recovery.
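In this literature, the absolute error of sampling discretization for a class $\mathbf F$ and sample points $\xi^1,\dots,\xi^m$ is the quantity (our notation; the paper may normalize differently) $$\sup_{f\in \mathbf F}\,\Bigl|\, \|f\|_p^p \;-\; \frac{1}{m}\sum_{j=1}^{m} \bigl|f(\xi^j)\bigr|^p \,\Bigr|,$$ i.e., how well one fixed set of nodes replaces the integral norm uniformly over the class; the chaining bounds control it through the entropy numbers of $\mathbf F$ in the uniform norm.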

Exact methods for exponentiation of matrices of dimension $N$ can be computationally expensive in terms of execution time ($N^{3}$) and memory requirements ($N^{2}$), not to mention numerical precision issues. A type of matrix often exponentiated in the sciences is the rate matrix. Here we explore five methods to exponentiate rate matrices, some of which apply even more broadly to other matrix types. Three of the methods leverage a mathematical analogy between computing matrix elements of a matrix exponential and computing transition probabilities of a dynamical process (technically a Markov jump process, MJP, typically simulated using the Gillespie algorithm). In doing so, we identify a novel MJP-based method, with favorable computational scaling, that relies on restricting the number of ``trajectory'' jumps based on the magnitude of the matrix elements. We then discuss this method's downstream implications on the mixing properties of Monte Carlo posterior samplers. We also benchmark two other methods of matrix exponentiation valid for any matrix (beyond rate matrices and, more generally, positive definite matrices) related to solving differential equations: Runge-Kutta integrators and Krylov subspace methods. Under conditions where both the largest matrix element and the number of non-vanishing elements scale linearly with $N$ -- reasonable conditions for rate matrices often exponentiated -- computational time scaling with the most competitive methods (Krylov and one of the MJP-based methods) reduces to $N^2$ with total memory requirements of $N$.
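One rate-matrix-specific route can be sketched via uniformization, which rewrites $e^{Qt}$ as a Poisson-weighted sum of powers of a stochastic matrix; this is a standard technique closely related to the MJP analogy in the text, not necessarily the authors' exact method.

```python
import numpy as np

def expm_rate(Q, t, tol=1e-12):
    """e^{Qt} for a rate matrix Q (off-diagonals >= 0, rows summing to 0)
    by uniformization: e^{Qt} = sum_k Pois_k(lam*t) P^k, with P = I + Q/lam
    row-stochastic. Dense sketch for moderate lam*t; for very stiff Q the
    k = 0 weight underflows and scaling-and-squaring or Krylov methods win."""
    n = Q.shape[0]
    lam = np.max(-np.diag(Q))        # uniformization rate
    if lam == 0.0:
        return np.eye(n)
    P = np.eye(n) + Q / lam
    lt = lam * t
    w = np.exp(-lt)                  # Poisson weight at k = 0
    out = w * np.eye(n)
    Pk = np.eye(n)
    acc, k = w, 0
    while acc < 1.0 - tol:           # truncation error <= tol entrywise,
        k += 1                       # since every entry of P^k is in [0, 1]
        w *= lt / k
        Pk = Pk @ P
        out += w * Pk
        acc += w
    return out
```

The number of terms grows like $\lambda t$ rather than with $N$, so when $Q$ is sparse with bounded rates the per-step cost is dominated by the sparse products $P^k$, consistent with the near-$N^2$ scaling regime discussed in the abstract.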

We discuss avoidance of sure loss and coherence results for semicopulas and standardized functions, i.e., for grounded, 1-increasing functions with value $1$ at $(1,1,\ldots, 1)$. We characterize the existence of a $k$-increasing $n$-variate function $C$ fulfilling $A\leq C\leq B$ for standardized $n$-variate functions $A,B$ and discuss a method for constructing this function. Our proofs also include procedures for extending functions on some countably infinite mesh to functions on the unit box. We provide a characterization of when $A$ (respectively $B$) coincides with the pointwise infimum (respectively supremum) of the set of all $k$-increasing $n$-variate functions $C$ fulfilling $A\leq C\leq B$.
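In the bivariate case ($n=k=2$) the objects can be made concrete; here is a small sketch with the Fréchet-Hoeffding bounds playing the roles of $A$ and $B$ (an illustration of the setting, not of the paper's construction).

```python
import numpy as np

def rect_volume(C, u1, u2, v1, v2):
    # 2-increasingness of C means this rectangle "volume" is >= 0
    # for every u1 <= u2 and v1 <= v2 in the unit square.
    return C(u2, v2) - C(u2, v1) - C(u1, v2) + C(u1, v1)

A = lambda u, v: max(u + v - 1.0, 0.0)   # lower Frechet-Hoeffding bound W
B = lambda u, v: min(u, v)               # upper Frechet-Hoeffding bound M
C = lambda u, v: u * v                   # product copula: one 2-increasing
                                         # function with A <= C <= B
```

In dimension $n \ge 3$ the lower bound $W$ is itself no longer $n$-increasing, so the existence of some $n$-increasing $C$ between given bounds $A \le B$ is genuinely non-trivial, which is the kind of question the characterization addresses.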
