
We construct and analyze finite element approximations of the Einstein tensor in dimension $N \ge 3$. We focus on the setting where a smooth Riemannian metric tensor $g$ on a polyhedral domain $\Omega \subset \mathbb{R}^N$ has been approximated by a piecewise polynomial metric $g_h$ on a simplicial triangulation $\mathcal{T}$ of $\Omega$ having maximum element diameter $h$. We assume that $g_h$ possesses single-valued tangential-tangential components on every codimension-1 simplex in $\mathcal{T}$. Such a metric is not classically differentiable in general, but it turns out that one can still attribute meaning to its Einstein curvature in a distributional sense. We study the convergence of the distributional Einstein curvature of $g_h$ to the Einstein curvature of $g$ under refinement of the triangulation. We show that in the $H^{-2}(\Omega)$-norm, this convergence takes place at a rate of $O(h^{r+1})$ when $g_h$ is an optimal-order interpolant of $g$ that is piecewise polynomial of degree $r \ge 1$. We provide numerical evidence to support this claim.
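A minimal sketch of how the reported rate can be checked numerically: given errors measured in the $H^{-2}(\Omega)$-norm on a sequence of meshes, the empirical order of convergence between successive refinements should approach $r+1$. The $(h, \text{error})$ values below are illustrative placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical (h, error) pairs from a mesh-refinement study; the error values
# are illustrative placeholders, not data from the paper.
h = np.array([1/4, 1/8, 1/16, 1/32])
err = np.array([2.1e-2, 5.4e-3, 1.35e-3, 3.4e-4])   # H^{-2} errors for r = 1

# Empirical order of convergence between successive refinements:
# EOC = log(err_i / err_{i+1}) / log(h_i / h_{i+1}), expected to approach r + 1.
eoc = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
print(eoc)   # values near 2 would be consistent with the O(h^{r+1}) claim for r = 1
```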

Related content

Let $G$ be a graph of order $n$. A classical upper bound for the domination number of a graph $G$ having no isolated vertices is $\lfloor\frac{n}{2}\rfloor$. However, for several families of graphs, we have $\gamma(G) \le \lfloor\sqrt{n}\rfloor$ which gives a substantially improved upper bound. In this paper, we give a condition necessary for a graph $G$ to have $\gamma(G) \le \lfloor\sqrt{n}\rfloor$, and some conditions sufficient for a graph $G$ to have $\gamma(G) \le \lfloor\sqrt{n}\rfloor$. We also present a characterization of all connected graphs $G$ of order $n$ with $\gamma(G) = \lfloor\sqrt{n}\rfloor$. Further, we prove that for a graph $G$ not satisfying $rad(G)=diam(G)=rad(\overline{G})=diam(\overline{G})=2$, deciding whether $\gamma(G) \le \lfloor\sqrt{n}\rfloor$ or $\gamma(\overline{G}) \le \lfloor\sqrt{n}\rfloor$ can be done in polynomial time. We conjecture that this decision problem can be solved in polynomial time for any graph $G$.
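For small graphs, the $\lfloor\sqrt{n}\rfloor$ bound can be checked directly by brute force. The sketch below (using networkx; exponential in $n$, so for illustration only) computes $\gamma(G)$ exactly and tests the bound on the Petersen graph, which attains it.

```python
import itertools
from math import isqrt
import networkx as nx

def domination_number(G):
    """Exact gamma(G) by brute force over vertex subsets; feasible only for small graphs."""
    nodes = list(G.nodes)
    for k in range(1, len(nodes) + 1):
        for S in itertools.combinations(nodes, k):
            S = set(S)
            # S dominates G if every vertex is in S or has a neighbor in S.
            if all(v in S or S & set(G.neighbors(v)) for v in G.nodes):
                return k
    return len(nodes)

# The Petersen graph has n = 10 and gamma = 3 = floor(sqrt(10)), so it attains the bound.
G = nx.petersen_graph()
n = G.number_of_nodes()
gamma = domination_number(G)
print(gamma, isqrt(n), gamma <= isqrt(n))
```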

Kleene's computability theory based on the S1-S9 computation schemes constitutes a model for computing with objects of any finite type and extends Turing's 'machine model' which formalises computing with real numbers. A fundamental distinction in Kleene's framework is between normal and non-normal functionals where the former compute the associated Kleene quantifier $\exists^n$ and the latter do not. Historically, the focus was on normal functionals, but recently new non-normal functionals have been studied based on well-known theorems, the weakest among which seems to be the uncountability of the reals. These new non-normal functionals are fundamentally different from historical examples like Tait's fan functional: the latter is computable from $\exists^2$, while the former are computable in $\exists^3$ but not in weaker oracles. Of course, there is a great divide or abyss separating $\exists^2$ and $\exists^3$ and we identify slight variations of our new non-normal functionals that are again computable in $\exists^2$, i.e. fall on different sides of this abyss. Our examples are based on mainstream mathematical notions, like quasi-continuity, Baire classes, bounded variation, and semi-continuity from real analysis.

In this study, we investigate an anisotropic weakly over-penalised symmetric interior penalty method for the Stokes equation on convex domains. Our approach is a simple discontinuous Galerkin method similar to the Crouzeix--Raviart finite element method. As our primary contribution, we present a new proof for the consistency term, which allows us to obtain an estimate of the anisotropic consistency error. The key idea of the proof is to apply the relation between the Raviart--Thomas finite element space and a discontinuous space. While inf-sup stable schemes of the discontinuous Galerkin method on shape-regular mesh partitions have been widely discussed, our results show that the Stokes element satisfies the inf-sup condition on anisotropic meshes. Furthermore, we also provide an error estimate in an energy norm on anisotropic meshes. In numerical experiments, we compare calculation results for standard and anisotropic mesh partitions.

We present algorithms for computing the reduced Gr\"{o}bner basis of the vanishing ideal of a finite set of points in the frame of ideal interpolation. Ideal interpolation is defined by a linear projector whose kernel is a polynomial ideal. In this paper, we translate the interpolation condition functionals into formal power series via Taylor expansion; the reduced Gr\"{o}bner basis is then read off from these formal power series by Gaussian elimination. Our algorithm has polynomial time complexity. It compares favorably with the MMM algorithm for single-point ideal interpolation and for certain several-point ideal interpolation problems.
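To make the object being computed concrete, here is a dense Buchberger--M\"{o}ller-style sketch: monomials are scanned in increasing graded-lex order, and Gaussian elimination (here via least squares) on their evaluation vectors detects dependencies, each of which yields one element of the reduced Gr\"{o}bner basis of the vanishing ideal. This is a simplified reference computation, not the formal-power-series algorithm proposed in the paper, and the example points are arbitrary.

```python
import numpy as np
from itertools import count

def monomials_of_degree(nvars, d):
    """Exponent tuples of total degree d, in increasing lex order (x1 > x2 > ...)."""
    if nvars == 1:
        return [(d,)]
    out = []
    for e in range(d + 1):                      # smaller x1-exponent first
        out += [(e,) + rest for rest in monomials_of_degree(nvars - 1, d - e)]
    return out

def evaluate(mono, pts):
    """Evaluation vector of the monomial x^mono at all points."""
    return np.prod(np.asarray(pts, float) ** np.asarray(mono), axis=1)

def vanishing_ideal_gb(pts, tol=1e-9):
    """Reduced graded-lex Groebner basis of the ideal of polynomials vanishing at pts."""
    pts = np.asarray(pts, float)
    _, nvars = pts.shape
    std, std_vecs, leads, basis = [], [], [], []
    for d in count(0):
        all_divisible = True
        for m in monomials_of_degree(nvars, d):
            if any(all(mi >= li for mi, li in zip(m, l)) for l in leads):
                continue                        # multiple of a known leading term
            all_divisible = False
            v = evaluate(m, pts)
            if std_vecs:
                A = np.column_stack(std_vecs)
                c, *_ = np.linalg.lstsq(A, v, rcond=None)
                resid = v - A @ c
            else:
                c, resid = np.zeros(0), v
            if np.linalg.norm(resid) > tol:
                std.append(m); std_vecs.append(v)        # independent: standard monomial
            else:
                leads.append(m)                          # dependent: new leading term
                basis.append((m, dict(zip(std, -c))))    # polynomial m - sum c_i * std_i
        if d > 0 and all_divisible:
            return std, basis

# Example with three points in the plane (illustrative only).
std, basis = vanishing_ideal_gb([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
print(std)                            # standard monomials, one per point
print([lead for lead, _ in basis])    # leading terms of the reduced basis
```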

The distribution of the minimum of Brownian motion or the Cauchy process is well known via the reflection principle. Here we consider the problem of finding the sample-by-sample minimum, which we call the online minimum search. We consider the possibility of using the golden-section search method, but we show quantitatively that the bisection method is more efficient. In the bisection method there is a hierarchical parameter, which tunes the depth to which each sub-search is conducted, somewhat similarly to how a depth-first search works to generate a topological ordering on nodes. Finally, we consider the possibility of using harmonic measure, a novel idea that has so far been unexplored.
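A short Monte Carlo sketch of the baseline fact quoted above: simulating discretized Brownian paths, recording each path's minimum sample by sample, and comparing the empirical mean with the value $-\sqrt{2T/\pi}$ implied by the reflection principle. The grid sizes below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 10_000, 5_000
dt = T / n_steps

# Simulate discretized Brownian paths and record each path's minimum
# (the discrete analogue of a sample-by-sample, "online" minimum search).
paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
minima = np.minimum(paths.min(axis=1), 0.0)      # include W_0 = 0

# Reflection principle: -min_{[0,T]} W has the same law as |W_T|, so the mean
# minimum should be close to -sqrt(2*T/pi), up to discretization bias.
print(minima.mean(), -np.sqrt(2 * T / np.pi))
```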

We describe a new dependent-rounding algorithmic framework for bipartite graphs. Given a fractional assignment $y$ of values to edges of the graph $G = (U \cup V, E)$, the algorithms return an integral solution $Y$ such that each right-node $v \in V$ has at most one neighboring edge $f$ with $Y_f = 1$, and where the variables $Y_e$ also satisfy broad nonpositive-correlation properties. In particular, for any edges $e_1, e_2$ sharing a left-node $u \in U$, the variables $Y_{e_1}, Y_{e_2}$ have strong negative-correlation properties, i.e. the expectation of $Y_{e_1} Y_{e_2}$ is significantly below $y_{e_1} y_{e_2}$. This algorithm is based on generating negatively correlated Exponential random variables and using them in a contention-resolution scheme inspired by an algorithm of Im & Shadloo (2020). Our algorithm gives stronger and much more flexible negative-correlation properties. Dependent-rounding schemes with negative-correlation properties have been used in approximation algorithms for job scheduling on unrelated machines to minimize weighted completion times (Bansal, Srinivasan, & Svensson (2021), Im & Shadloo (2020), Im & Li (2023)). Using our new dependent-rounding algorithm, among other improvements, we obtain a $1.398$-approximation for this problem, significantly improving over the prior $1.45$-approximation ratio of Im & Li (2023).
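A minimal illustration of the exponential-clock contention-resolution idea, with independent (rather than negatively correlated) clocks: each right node accepts the incident edge whose clock rings first, with a dummy clock absorbing the leftover fractional mass, so that the marginals $\Pr[Y_e = 1] = y_e$ are exact. The graph and values below are made up, and the paper's negatively correlated exponentials, which are what give the correlation guarantees, are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fractional assignment y on edges (left node u, right node v) -> y_e; values are
# illustrative and satisfy sum_{e incident to v} y_e <= 1 for every right node v.
y = {("u1", "v1"): 0.5, ("u2", "v1"): 0.3,
     ("u1", "v2"): 0.4, ("u3", "v2"): 0.6}

def round_once(y, rng):
    """One rounding pass with independent exponential clocks.

    Each edge e draws an Exponential clock with rate y_e, each right node v draws a
    'dummy' clock with rate 1 - sum of its y_e, and v accepts the edge whose clock
    is smallest (or nothing, if the dummy wins).  This keeps marginals exact:
    P[Y_e = 1] = y_e.  The paper instead uses negatively correlated exponentials to
    control products Y_{e1} Y_{e2} for edges sharing a left node.
    """
    Y = {e: 0 for e in y}
    for v in {vv for (_, vv) in y}:
        edges = [e for e in y if e[1] == v]
        clocks = [rng.exponential(1.0 / y[e]) for e in edges]   # scale = 1 / rate
        slack = 1.0 - sum(y[e] for e in edges)
        dummy = rng.exponential(1.0 / slack) if slack > 1e-12 else np.inf
        best = int(np.argmin(clocks))
        if clocks[best] < dummy:
            Y[edges[best]] = 1
    return Y

# Empirically check the marginals P[Y_e = 1] = y_e.
n_trials = 20_000
counts = {e: 0 for e in y}
for _ in range(n_trials):
    Y = round_once(y, rng)
    for e in y:
        counts[e] += Y[e]
print({e: round(counts[e] / n_trials, 3) for e in y})   # should be close to y
```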

We consider nonlinear solvers for the incompressible, steady (or at a fixed time step for unsteady) Navier-Stokes equations in the setting where partial measurement data of the solution is available. The measurement data is incorporated/assimilated into the solution through a nudging term added to the Picard iteration that penalizes the difference between the coarse mesh interpolants of the true solution and the solver solution, analogous to how continuous data assimilation (CDA) is implemented for time-dependent PDEs. This approach was considered in the paper [Li et al. {\it CMAME} 2023], and we extend the methodology by improving the analysis to be in the $L^2$ norm instead of a weighted $H^1$ norm whose weight depends on the coarse mesh width, and to the case of noisy measurement data. For noisy measurement data, we prove that the CDA-Picard method is stable and convergent, up to the size of the noise. Numerical tests illustrate the results and show that a very good strategy when using noisy data is to use CDA-Picard to generate an initial guess for the classical Newton iteration.
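To illustrate the nudged iteration in a much simpler setting, the sketch below applies a CDA-style Picard iteration to a steady 1D viscous Burgers problem with a manufactured solution: the coefficient in the convection term is frozen at the previous iterate, and a penalty $\mu$ pulls the solution toward coarse, slightly noisy nodal measurements. All parameter values are illustrative; this is not the finite element Navier-Stokes setting analyzed in the paper.

```python
import numpy as np

# Toy CDA-Picard iteration for a steady 1D viscous Burgers problem
#   -nu*u'' + u*u' = f  on (0,1),  u(0) = u(1) = 0,
# with manufactured solution u_true = 0.5*sin(pi*x).
nu, mu = 0.3, 50.0
m = 199                                   # interior grid points
hx = 1.0 / (m + 1)
x = np.linspace(hx, 1.0 - hx, m)

u_true = 0.5 * np.sin(np.pi * x)
f = 0.5 * nu * np.pi**2 * np.sin(np.pi * x) + u_true * 0.5 * np.pi * np.cos(np.pi * x)

# Coarse, slightly noisy "measurements" of the true solution at every 10th node.
obs = np.arange(9, m, 10)
P = np.zeros((m, m)); P[obs, obs] = 1.0
u_obs = np.zeros(m)
u_obs[obs] = u_true[obs] + 1e-3 * np.random.default_rng(0).normal(size=len(obs))

# Second-order difference operators with homogeneous Dirichlet boundary values.
one = np.ones(m - 1)
D2 = (np.diag(-2.0 * np.ones(m)) + np.diag(one, 1) + np.diag(one, -1)) / hx**2
D1 = (np.diag(one, 1) - np.diag(one, -1)) / (2.0 * hx)

u = np.zeros(m)                           # crude initial guess
for k in range(25):
    A = -nu * D2 + np.diag(u) @ D1 + mu * P          # convection frozen at previous iterate
    u = np.linalg.solve(A, f + mu * (P @ u_obs))     # nudged Picard step
    print(k, np.linalg.norm(u - u_true) * np.sqrt(hx))   # discrete L2 error
```

The printed error is measured against the manufactured solution, so it can only go down to the level set by the discretization and the measurement noise, mirroring the "convergent up to the size of the noise" statement above.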

Using elementary means, we derive the three most popular splittings of $e^{(A+B)}$ and their error bounds in the case when $A$ and $B$ are (possibly unbounded) operators in a Hilbert space, generating strongly continuous semigroups, $e^{tA}$, $e^{tB}$ and $e^{t(A+B)}$. The error of these splittings is bounded in terms of the norm of the commutators $[A, B]$, $[A, [A, B]]$ and $[B, [A, B]]$.
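A quick matrix sketch of two of these splittings for bounded operators: the Lie-Trotter splitting, whose one-step error is $O(t^2)$ and controlled by $\|[A,B]\|$, and the Strang splitting, whose one-step error is $O(t^3)$ and controlled by the nested commutators. The matrices are small and random; the point of the abstract is that analogous commutator-based bounds hold for unbounded semigroup generators.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
B = rng.normal(size=(6, 6))

lie    = lambda t: expm(t * A) @ expm(t * B)                             # Lie-Trotter
strang = lambda t: expm(0.5 * t * A) @ expm(t * B) @ expm(0.5 * t * A)   # Strang

print("||[A,B]|| =", np.linalg.norm(A @ B - B @ A, 2))

# One-step errors: O(t^2) for Lie (controlled by [A,B]) and O(t^3) for Strang
# (controlled by [A,[A,B]] and [B,[A,B]]); halving t should reveal these orders.
for t in (0.1, 0.05, 0.025):
    exact = expm(t * (A + B))
    print(t, np.linalg.norm(lie(t) - exact, 2), np.linalg.norm(strang(t) - exact, 2))
```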

Nurmuhammad et al. developed the Sinc-Nystr\"{o}m methods for initial value problems in which the solutions exhibit exponential decay end behavior. In these methods, the Single-Exponential (SE) transformation or the Double-Exponential (DE) transformation is combined with the Sinc approximation. Hara and Okayama improved on these transformations to attain a better convergence rate, which was later supported by theoretical error analyses. However, these methods have a computational drawback owing to the inclusion of a special function in the basis functions. To address this issue, Okayama and Hara proposed Sinc-collocation methods, which do not include any special function in the basis functions. This study conducts error analyses of these methods.
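For orientation, the sketch below shows only the underlying Sinc approximation $f(x)\approx\sum_k f(kh)\,\operatorname{sinc}((x-kh)/h)$ on a smooth, rapidly decaying test function, with a single-exponential-type step size $h \sim \sqrt{\pi/N}$. The SE/DE-transformed Nystr\"{o}m and collocation schemes discussed above build on this block but are not reproduced here; the test function and step-size constant are arbitrary choices.

```python
import numpy as np

def sinc_approx(f, x, h, N):
    """Truncated Sinc approximation: sum_{k=-N}^{N} f(k*h) * sinc((x - k*h)/h)."""
    k = np.arange(-N, N + 1)
    # np.sinc uses the normalized convention sinc(t) = sin(pi*t)/(pi*t).
    return np.sum(f(k * h)[None, :] * np.sinc((x[:, None] - k * h) / h), axis=1)

f = lambda t: np.exp(-t**2) / (1.0 + t**2)      # smooth, rapidly decaying test function
x = np.linspace(-3.0, 3.0, 401)
for N in (8, 16, 32):
    h = np.sqrt(np.pi / N)                      # a single-exponential-type step size
    err = np.max(np.abs(sinc_approx(f, x, h, N) - f(x)))
    print(N, err)                               # the error should drop quickly with N
```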

The advent of large-scale inference has spurred reexamination of conventional statistical thinking. In a Gaussian model for $n$ many $z$-scores with at most $k < \frac{n}{2}$ nonnulls, Efron suggests estimating the location and scale parameters of the null distribution. Placing no assumptions on the nonnull effects, the statistical task can be viewed as a robust estimation problem. However, the best known robust estimators fail to be consistent in the regime $k \asymp n$, which is especially relevant in large-scale inference. The failure of estimators which are minimax rate-optimal with respect to other formulations of robustness (e.g. Huber's contamination model) might suggest the impossibility of consistent estimation in this regime and, consequently, a major weakness of Efron's suggestion. A sound evaluation of Efron's model thus requires a complete understanding of consistency. We sharply characterize the regime of $k$ for which consistent estimation is possible and further establish the minimax estimation rates. It is shown that consistent estimation of the location parameter is possible if and only if $\frac{n}{2} - k = \omega(\sqrt{n})$, and consistent estimation of the scale parameter is possible in the entire regime $k < \frac{n}{2}$. Faster rates than those in Huber's contamination model are achievable by exploiting the Gaussian character of the data. The minimax upper bound is obtained by considering estimators based on the empirical characteristic function. The minimax lower bound involves constructing two marginal distributions whose characteristic functions match on a wide interval containing zero. The construction notably differs from those in the literature by sharply capturing a scaling of $n-2k$ in the minimax estimation rate of the location.
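A naive sketch of the empirical characteristic function idea: for the null component $N(\theta,\sigma^2)$, $-2\log|\varphi(t)|$ grows like $\sigma^2 t^2$ (plus a constant absorbing the factor $1-\tfrac{k}{n}$) and $\arg\varphi(t)$ grows like $\theta t$, so least-squares fits over a window of $t$ give rough estimates. The simulated nonnull effects and the fitting window below are arbitrary choices, and the paper's minimax-optimal estimator is a more careful construction than this.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 100_000, 30_000                 # k < n/2 nonnulls
theta, sigma = 0.3, 1.2                # null location and scale, to be recovered

# z-scores: n - k nulls from N(theta, sigma^2); k nonnulls get an extra shift drawn
# from N(5, 3^2) (an arbitrary, spread-out choice of nonnull effects).
z = np.concatenate([
    rng.normal(theta, sigma, n - k),
    rng.normal(theta, sigma, k) + rng.normal(5.0, 3.0, k),
])

# Empirical characteristic function phi_hat(t) = mean_j exp(i t z_j) on a window of t.
ts = np.linspace(0.8, 1.6, 40)
phi = np.array([np.mean(np.exp(1j * t * z)) for t in ts])

# Fit -2*log|phi(t)| against t^2 (slope ~ sigma^2) and the phase against t (slope ~ theta);
# the spread-out nonnull effects are strongly damped on this window.
sigma2_hat = np.polyfit(ts**2, -2.0 * np.log(np.abs(phi)), 1)[0]
theta_hat = np.polyfit(ts, np.unwrap(np.angle(phi)), 1)[0]
print(np.sqrt(sigma2_hat), theta_hat)  # rough estimates of sigma = 1.2, theta = 0.3
```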
