
We study cut finite element discretizations of a Darcy interface problem based on the mixed finite element pairs $\textbf{RT}_k\times Q_k$, $k\geq 0$. Here $Q_k$ is the space of discontinuous polynomial functions of degree less than or equal to $k$ and $\textbf{RT}_k$ is the Raviart-Thomas space. We show that the standard ghost penalty stabilization, often added to the weak forms of cut finite element methods for stability and for control of the condition number of the linear system matrix, destroys the divergence-free property of the considered element pairs. We therefore propose new stabilization terms for the pressure and show that we recover the optimal approximation of the divergence without losing control of the condition number of the linear system matrix. We prove that the method with the new stabilization terms produces pointwise divergence-free approximations of solenoidal velocity fields. We derive a priori error estimates for the proposed unfitted finite element discretization based on $\textbf{RT}_k\times Q_k$, $k\geq 0$. In addition, by decomposing the mesh into macro-elements and applying ghost penalty terms only on the interior edges of macro-elements, stabilization is applied very restrictively and only where needed. Numerical experiments with the element pairs $\textbf{RT}_0\times Q_0$, $\textbf{RT}_1\times Q_1$, and $\textbf{BDM}_1\times Q_0$ (where $\textbf{BDM}$ is the Brezzi-Douglas-Marini space) indicate that we obtain 1) optimal rates of convergence of the approximate velocity and pressure; 2) well-posed linear systems, with the condition number of the system matrix scaling as it does for fitted finite element discretizations; 3) optimal rates of convergence of the approximate divergence, with pointwise divergence-free approximations of solenoidal velocity fields. All three properties hold independently of how the interface is positioned relative to the computational mesh.
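
For orientation, ghost penalty stabilization of the kind referred to above typically penalizes jumps of normal derivatives across faces in the vicinity of the cut; a common generic form (a textbook version, not necessarily the exact terms analyzed in this work) is

\[
g_h(u_h, v_h) \;=\; \sum_{F \in \mathcal{F}_G} \sum_{j=0}^{k} \gamma_j\, h^{2j+1} \big( [\![ \partial_n^j u_h ]\!], [\![ \partial_n^j v_h ]\!] \big)_F ,
\]

where $\mathcal{F}_G$ denotes the set of faces associated with cut elements, $[\![\cdot]\!]$ the jump across a face $F$, $\partial_n^j$ the $j$-th normal derivative, $h$ the mesh size, and $\gamma_j > 0$ stabilization parameters.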

Related content

An integrated Equation of State (EOS) and strength/pore-crush/damage model framework is provided for modeling near-source (near-field) ground-shock response, where large deformations and pressures necessitate coupling the EOS with pressure-dependent plastic yield and damage. Nonlinear pressure dependence of strength up to high pressures is combined with a Modified Cam-Clay-like cap-plasticity model in a way that allows degradation of strength from pore-crush damage, in what we call the "Yp-Cap" model. Nonlinear hardening under compaction allows modeling the crush-out of pores in combination with a fully saturated EOS, i.e., modeling partially saturated ground-shock response, where air-filled voids crush. Attention is given to the algorithmic clarity and efficiency of the model, and the model is employed in example numerical simulations, including finite element simulations of underground explosions, to demonstrate its robustness and utility.
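
As a purely illustrative sketch of the cap-plasticity ingredient (not the paper's actual Yp-Cap implementation), a Modified Cam-Clay-style yield surface with a toy compaction-hardening law can be written as follows; the parameters M, p_c, and H are generic Cam-Clay-style names chosen for illustration:

```python
import numpy as np

def cam_clay_yield(p: float, q: float, M: float, p_c: float) -> float:
    """Modified Cam-Clay yield function f(p, q).

    p   : mean (compressive) pressure
    q   : von Mises equivalent (deviatoric) stress
    M   : slope of the critical-state line in p-q space
    p_c : preconsolidation pressure (size of the cap)

    f < 0 : elastic state; f = 0 : on the yield surface; f > 0 : inadmissible.
    """
    return q**2 / M**2 + p * (p - p_c)

def crush_hardening(p_c0: float, eps_v_p: float, H: float) -> float:
    """Toy nonlinear compaction hardening: the cap grows with plastic
    volumetric strain eps_v_p, mimicking pore crush-out (H sets the rate)."""
    return p_c0 * np.exp(H * eps_v_p)

# Example: check a trial stress state against the hardened cap.
p_c = crush_hardening(p_c0=10.0e6, eps_v_p=0.02, H=25.0)   # Pa
f = cam_clay_yield(p=8.0e6, q=3.0e6, M=1.2, p_c=p_c)
print("yield function value:", f)  # negative => elastic
```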

The Scott-Vogelius element is a popular finite element for the discretization of the Stokes equations that enjoys inf-sup stability and yields divergence-free velocity approximations. However, it is well known that the convergence rates for the discrete pressure deteriorate in the presence of certain \emph{critical vertices} in a triangulation of the domain. Modifications of the Scott-Vogelius element, such as the recently introduced pressure-wired Stokes element, also suffer from this effect. In this paper we introduce a simple modification strategy for these pressure spaces that preserves inf-sup stability while restoring the optimal convergence rate of the pressure.

Robust Markov Decision Processes (RMDPs) are a widely used framework for sequential decision-making under parameter uncertainty. RMDPs have been extensively studied when the objective is to maximize the discounted return, but little is known about average optimality (optimizing the long-run average of the rewards obtained over time) and Blackwell optimality (remaining discount optimal for all discount factors sufficiently close to 1). In this paper, we prove several foundational results for RMDPs beyond the discounted return. We show that average optimal policies can be chosen stationary and deterministic for sa-rectangular RMDPs but, perhaps surprisingly, that history-dependent (Markovian) policies strictly outperform stationary policies for average optimality in s-rectangular RMDPs. We also study Blackwell optimality for sa-rectangular RMDPs, where we show that \emph{approximate} Blackwell optimal policies always exist, although Blackwell optimal policies may not. We also provide a sufficient condition for their existence, which encompasses virtually all examples from the literature. We then discuss the connection between average and Blackwell optimality, and we describe several algorithms to compute the optimal average return. Interestingly, our approach leverages the connections between RMDPs and stochastic games.
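
For background, in the discounted sa-rectangular setting that this paper moves beyond, robust value iteration applies the robust Bellman operator until convergence. The sketch below assumes a finite uncertainty set of $K$ transition kernels per state-action pair, purely for illustration:

```python
import numpy as np

def robust_value_iteration(rewards, kernels, gamma=0.9, tol=1e-8):
    """Discounted robust value iteration for an sa-rectangular RMDP.

    rewards : (S, A) array, r[s, a]
    kernels : (S, A, K, S) array; kernels[s, a] is a finite uncertainty set
              of K transition distributions over next states
    gamma   : discount factor in (0, 1)

    Returns the robust value function V with
    V(s) = max_a min_k [ r(s, a) + gamma * sum_{s'} P_k(s'|s,a) V(s') ].
    """
    S, A = rewards.shape
    V = np.zeros(S)
    while True:
        # Q[s, a, k] = r(s, a) + gamma * kernels[s, a, k] @ V
        Q = rewards[:, :, None] + gamma * kernels @ V
        V_new = Q.min(axis=2).max(axis=1)  # adversary picks k, agent picks a
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

# Tiny 2-state, 2-action example with a 2-element uncertainty set.
r = np.array([[1.0, 0.0], [0.0, 1.0]])
P = np.random.default_rng(0).dirichlet(np.ones(2), size=(2, 2, 2))  # (S,A,K,S)
print(robust_value_iteration(r, P))
```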

A central challenge in interpreting black-box models is to uniquely decompose a square-integrable function of non-independent random inputs into a sum of functions of every possible subset of the variables; dealing with dependencies among inputs makes this difficult. We propose a novel framework to study this problem, linking three domains of mathematics: probability theory, functional analysis, and combinatorics. We show that, under two reasonable assumptions on the inputs (non-perfect functional dependence and non-degenerate stochastic dependence), such a function can always be decomposed uniquely. This generalizes the well-known Hoeffding decomposition. The elements of this decomposition can be expressed using oblique projections and allow for novel interpretability indices for evaluation and variance decomposition purposes. The properties of these novel indices are studied and discussed. This generalization offers a path towards more precise uncertainty quantification, which can benefit sensitivity analysis and interpretability studies whenever the inputs are dependent. The decomposition is illustrated analytically, and the challenges of adopting these results in practice are discussed.
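
For reference, the classical Hoeffding (functional ANOVA) decomposition being generalized states that, for independent inputs $X_1, \dots, X_d$ and square-integrable $f$,

\[
f(X_1, \dots, X_d) \;=\; \sum_{A \subseteq \{1, \dots, d\}} f_A(X_A),
\qquad
\mathbb{E}\big[ f_A(X_A)\, f_B(X_B) \big] = 0 \;\; \text{for } A \neq B,
\]

where $X_A = (X_i)_{i \in A}$ and the summands are built from conditional expectations by inclusion-exclusion, e.g., $f_\emptyset = \mathbb{E}[f]$ and $f_{\{i\}}(X_i) = \mathbb{E}[f \mid X_i] - \mathbb{E}[f]$. The contribution above is that a unique decomposition of this type still exists when the inputs are dependent, with orthogonal projections replaced by oblique ones.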

We often rely on censuses of triangulations to guide our intuition in $3$-manifold topology. However, this can lead to misplaced faith in conjectures if the smallest counterexamples are too large to appear in our census. Since the number of triangulations increases super-exponentially with size, there is no way to expand a census beyond relatively small triangulations; the current census only goes up to $10$ tetrahedra. Here, we show that it is feasible to search for large and hard-to-find counterexamples by using heuristics to selectively (rather than exhaustively) enumerate triangulations. We use this idea to find counterexamples to three conjectures which ask, for certain $3$-manifolds, whether one-vertex triangulations always have a "distinctive" edge that would allow us to recognise the $3$-manifold.

We consider approximating solutions to parameterized linear systems of the form $A(\mu_1,\mu_2) x(\mu_1,\mu_2) = b$, where $(\mu_1, \mu_2) \in \mathbb{R}^2$. Here the matrix $A(\mu_1,\mu_2) \in \mathbb{R}^{n \times n}$ is nonsingular, large, and sparse and depends nonlinearly on the parameters $\mu_1$ and $\mu_2$. Specifically, the system arises from a discretization of a partial differential equation, and $x(\mu_1,\mu_2) \in \mathbb{R}^n$, $b \in \mathbb{R}^n$. This work combines companion linearization with the preconditioned bi-conjugate gradient (BiCG) Krylov subspace method and a decomposition of a tensor of precomputed solutions, called snapshots. As a result, a reduced order model of $x(\mu_1,\mu_2)$ is constructed, and this model can be evaluated cheaply for many values of the parameters. Tensor decompositions performed on a set of snapshots can fail to reach a certain level of accuracy, and it is not known a priori whether a decomposition will be successful. Moreover, the selection of snapshots affects both the quality of the produced model and the computation time required for its construction. The new method offers a way to generate a new set of solutions on the same parameter space at little additional cost. An interpolation of the model is used to produce approximations on the entire parameter space, and the method can be used to solve a parameter estimation problem. Numerical examples for a parameterized Helmholtz equation show the competitiveness of our approach. The simulations are reproducible, and the software is available online.
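
As a generic illustration of the snapshot-based reduced order modeling idea (not the paper's companion-linearization/BiCG pipeline), one can precompute solutions on a parameter grid, compress them with a truncated SVD, and interpolate the reduced coefficients over $(\mu_1, \mu_2)$; the function solve below is a hypothetical stand-in for a full-order solver:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

n = 200
def solve(mu1, mu2):
    """Stand-in for a full-order solve of A(mu1, mu2) x = b."""
    t = np.linspace(0, 1, n)
    return np.sin(mu1 * t) + mu2 * t**2

mu1_grid = np.linspace(1.0, 2.0, 10)
mu2_grid = np.linspace(0.0, 1.0, 10)

# Snapshot matrix: one column per parameter pair (mu1-major ordering).
S = np.column_stack([solve(m1, m2) for m1 in mu1_grid for m2 in mu2_grid])

# Truncated SVD -> reduced basis U_r and coefficient tensor C.
U, s, Vt = np.linalg.svd(S, full_matrices=False)
r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 1 - 1e-10) + 1
U_r = U[:, :r]
C = (U_r.T @ S).reshape(r, len(mu1_grid), len(mu2_grid))

# Interpolate each reduced coefficient over the (mu1, mu2) grid.
interps = [RegularGridInterpolator((mu1_grid, mu2_grid), C[i]) for i in range(r)]

def rom(mu1, mu2):
    """Cheap surrogate for x(mu1, mu2)."""
    c = np.array([f((mu1, mu2)).item() for f in interps])
    return U_r @ c

err = np.linalg.norm(rom(1.37, 0.42) - solve(1.37, 0.42))
print("ROM error at an off-grid parameter:", err)
```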

We introduce a single-set axiomatisation of cubical $\omega$-categories, including connections and inverses. We justify these axioms by establishing a series of equivalences between the category of single-set cubical $\omega$-categories, and their variants with connections and inverses, and the corresponding cubical $\omega$-categories. We also report on the formalisation of cubical $\omega$-categories with the Isabelle/HOL proof assistant, which has been instrumental in finding the single-set axioms.

Let $D$ be a digraph. Its acyclic number $\vec{\alpha}(D)$ is the maximum order of an acyclic induced subdigraph, and its dichromatic number $\vec{\chi}(D)$ is the least integer $k$ such that $V(D)$ can be partitioned into $k$ subsets inducing acyclic subdigraphs. We study $\vec{a}(n)$ and $\vec{t}(n)$, the minimum of $\vec{\alpha}(D)$ and the maximum of $\vec{\chi}(D)$, respectively, over all oriented triangle-free graphs of order $n$. For every $\epsilon>0$ and $n$ large enough, we show $(1/\sqrt{2} - \epsilon) \sqrt{n\log n} \leq \vec{a}(n) \leq \frac{107}{8} \sqrt{n} \log n$ and $\frac{8}{107} \sqrt{n}/\log n \leq \vec{t}(n) \leq (\sqrt{2} + \epsilon) \sqrt{n/\log n}$. We also construct an oriented triangle-free graph on 25 vertices with dichromatic number 3, and show that every oriented triangle-free graph of order at most 17 has dichromatic number at most 2.
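
To make the definitions concrete, the sketch below brute-forces the dichromatic number of a small digraph with networkx; this is feasible only at toy scale, unlike the 17- and 25-vertex results reported above:

```python
from itertools import product
import networkx as nx

def dichromatic_number(D: nx.DiGraph) -> int:
    """Smallest k such that V(D) splits into k parts, each inducing an
    acyclic subdigraph. Brute force over all k-colourings (k^n cost)."""
    nodes = list(D.nodes)
    for k in range(1, len(nodes) + 1):
        for colouring in product(range(k), repeat=len(nodes)):
            parts = {c: [v for v, cv in zip(nodes, colouring) if cv == c]
                     for c in range(k)}
            if all(nx.is_directed_acyclic_graph(D.subgraph(p))
                   for p in parts.values()):
                return k
    return len(nodes)

# A directed 5-cycle is triangle-free and needs 2 colours.
C5 = nx.DiGraph([(i, (i + 1) % 5) for i in range(5)])
print(dichromatic_number(C5))  # 2
```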

Iterated conditional expectation (ICE) g-computation is an estimation approach for addressing time-varying confounding in both longitudinal and time-to-event data. Unlike other g-computation implementations, ICE avoids the need to specify models for each time-varying covariate. For variance estimation, previous work has suggested the bootstrap. However, bootstrapping can be computationally intensive and sensitive to the number of resamples used. Here, we present ICE g-computation as a set of stacked estimating equations. Therefore, the variance of the ICE g-computation estimator can be consistently estimated using the empirical sandwich variance estimator. The performance of the variance estimator was evaluated empirically with a simulation study. The proposed approach is also demonstrated with an illustrative example on the effect of cigarette smoking on the prevalence of hypertension. In the simulation study, the empirical sandwich variance estimator appropriately estimated the variance. When comparing runtimes between the sandwich variance estimator and the bootstrap for the applied example, the sandwich estimator was substantially faster, even when the bootstrap resamples were run in parallel. The empirical sandwich variance estimator is a viable option for variance estimation with ICE g-computation.
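
For readers less familiar with the machinery, the empirical sandwich variance estimator for an M-estimator $\hat{\theta}$ solving $\sum_i \psi(O_i; \theta) = 0$ takes the generic form sketched below. The mean-and-variance stack used here is a minimal illustration, not the ICE g-computation stack from the paper:

```python
import numpy as np

def sandwich_variance(psi, theta_hat, data, eps=1e-6):
    """Empirical sandwich variance for an M-estimator.

    psi(o, theta) -> estimating function value (p,) for one observation o.
    Returns Cov(theta_hat) ~= A^{-1} B A^{-T} / n with
      A = -mean of d psi / d theta (forward-difference Jacobian),
      B =  mean of psi psi^T.
    """
    n, p = len(data), len(theta_hat)
    A = np.zeros((p, p))
    B = np.zeros((p, p))
    for o in data:
        v = psi(o, theta_hat)
        B += np.outer(v, v) / n
        for j in range(p):  # numerical Jacobian, column j
            th = theta_hat.copy()
            th[j] += eps
            A -= np.outer((psi(o, th) - v) / eps, np.eye(p)[j]) / n
    Ainv = np.linalg.inv(A)
    return Ainv @ B @ Ainv.T / n

# Illustrative stack: theta = (mean, variance) of a scalar outcome.
def psi(o, th):
    return np.array([o - th[0], (o - th[0])**2 - th[1]])

rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.5, size=500)
theta = np.array([y.mean(), y.var()])
print(np.sqrt(np.diag(sandwich_variance(psi, theta, y))))  # standard errors
```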

We address a prime counting problem across the homology classes of a graph, presenting a graph-theoretical Dirichlet-type analogue of the prime number theorem. The main machinery we have developed and employed is a spectral antisymmetry theorem, revealing that the spectra of the twisted graph adjacency matrices have an antisymmetric distribution over the character group of the graph. Additionally, we derive some trace formulas based on the twisted adjacency matrices as part of our analysis.
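
For intuition about what is being counted, primes in a graph are equivalence classes of primitive closed non-backtracking tailless walks, and they can be counted from traces of powers of the Hashimoto (non-backtracking) edge matrix $B$ via Möbius inversion of $\mathrm{tr}(B^m) = \sum_{d \mid m} d\,\pi(d)$. This is standard Ihara zeta machinery, sketched here without the twisting by characters studied in the paper:

```python
import numpy as np
from itertools import combinations

def hashimoto_matrix(edges):
    """Non-backtracking (Hashimoto) matrix B indexed by directed edges:
    B[(u,v),(x,y)] = 1 iff v == x and (x, y) is not the reversal (v, u)."""
    darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    B = np.zeros((len(darts), len(darts)))
    for i, (u, v) in enumerate(darts):
        for j, (x, y) in enumerate(darts):
            if v == x and (x, y) != (v, u):
                B[i, j] = 1.0
    return B

def mobius(n):
    """Moebius function via trial division (fine at this scale)."""
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            mu = -mu
        p += 1
    return -mu if n > 1 else mu

def prime_count(B, m):
    """Number of primes (primitive closed geodesics) of length m, using
    tr(B^m) = sum_{d | m} d * pi(d) and Moebius inversion."""
    tr = lambda d: round(np.trace(np.linalg.matrix_power(B, d)))
    return sum(mobius(m // d) * tr(d)
               for d in range(1, m + 1) if m % d == 0) // m

# Complete graph K4: its 4 triangles, each with 2 orientations, give
# 8 primes of length 3.
B = hashimoto_matrix(list(combinations(range(4), 2)))
print([prime_count(B, m) for m in range(3, 7)])  # starts [8, 6, ...]
```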
