We consider the $h$-version of the finite-element method, where accuracy is increased by decreasing the meshwidth $h$ while keeping the polynomial degree $p$ constant, applied to the Helmholtz equation. Although the question "how quickly must $h$ decrease as the wavenumber $k$ increases to maintain accuracy?" has been studied intensively since the 1990s, none of the existing rigorous wavenumber-explicit analyses take into account the approximation of the geometry. In this paper we prove that for nontrapping problems solved using straight elements the geometric error is order $kh$, which is then less than the pollution error $k(kh)^{2p}$ when $k$ is large; this fact is then illustrated in numerical experiments. More generally, we prove that, even for problems with strong trapping, using degree four (in 2-d) or degree five (in 3-d) polynomials and isoparametric elements ensures that the geometric error is smaller than the pollution error for most large wavenumbers.
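The claimed ordering of the two error contributions can be illustrated with a back-of-the-envelope computation (constants set to 1, not taken from the paper): if the meshwidth is refined just fast enough to keep the pollution error bounded, i.e. $h \sim k^{-1-1/(2p)}$, then the geometric error $kh$ tends to zero while the pollution error stays of order one.

```python
# Hypothetical scaling comparison (all constants set to 1): geometric
# error ~ k*h versus pollution error ~ k*(k*h)**(2p), with the meshwidth
# chosen as h = k**(-1 - 1/(2p)) so the pollution error stays O(1).

def error_terms(k: float, p: int):
    h = k ** (-1.0 - 1.0 / (2 * p))     # meshwidth keeping pollution bounded
    geometric = k * h                    # geometric error, order k*h
    pollution = k * (k * h) ** (2 * p)   # pollution error, order k*(k*h)^{2p}
    return geometric, pollution

for k in [10.0, 100.0, 1000.0]:
    geo, pol = error_terms(k, p=2)
    print(f"k={k:6.0f}  geometric={geo:.3e}  pollution={pol:.3e}")
```

As $k$ grows, the geometric term decays like $k^{-1/(2p)}$ and drops below the (constant) pollution term, consistent with the abstract's claim.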

First-order methods are often analyzed via their continuous-time models, where their worst-case convergence properties are usually approached via Lyapunov functions. In this work, we provide a systematic and principled approach to find and verify Lyapunov functions for classes of ordinary and stochastic differential equations. More precisely, we extend the performance estimation framework, originally proposed by Drori and Teboulle [10], to continuous-time models. We retrieve convergence results comparable to those of discrete methods using fewer assumptions and convexity inequalities, and provide new results for stochastic accelerated gradient flows.
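A minimal numerical check (a textbook example, not the paper's method) of the kind of Lyapunov certificate this line of work verifies: for the gradient flow $\dot{x}(t) = -\nabla f(x(t))$ with $f$ convex and minimizer $x^*$, the function $V(t) = t\,(f(x(t)) - f^*) + \tfrac12\|x(t)-x^*\|^2$ is nonincreasing, which yields $f(x(T)) - f^* \le \|x_0 - x^*\|^2/(2T)$. The quadratic $f$ below is a hypothetical test instance.

```python
# Gradient flow on f(x) = 0.5*(x1^2 + 4*x2^2), so x* = 0 and f* = 0.
# We integrate with a small explicit Euler step and monitor the
# Lyapunov function V(t) = t*f(x) + 0.5*||x||^2.

def f(x):
    return 0.5 * (x[0] ** 2 + 4.0 * x[1] ** 2)

def grad_f(x):
    return [x[0], 4.0 * x[1]]

def simulate(x0, T=2.0, dt=1e-4):
    x, t, V = list(x0), 0.0, []
    while t < T:
        V.append(t * f(x) + 0.5 * (x[0] ** 2 + x[1] ** 2))
        g = grad_f(x)
        x = [x[0] - dt * g[0], x[1] - dt * g[1]]  # explicit Euler step
        t += dt
    return x, V

x0 = [1.0, -1.0]
xT, V = simulate(x0)
print("V(0) =", V[0], " V(T) =", V[-1])
print("rate bound:", (x0[0]**2 + x0[1]**2) / (2 * 2.0), " f(x(T)) =", f(xT))
```

The observed decrease of $V$ along the trajectory, and the resulting $O(1/T)$ bound on $f(x(T))$, are exactly the kind of statement the performance-estimation framework certifies systematically.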

We demonstrate that Assembly Theory (AT), pathway complexity, the assembly index, and the assembly number are subsumed by, and constitute a weak version of, algorithmic (Kolmogorov-Solomonoff-Chaitin) complexity approximated via statistical compression, their results following from the use of methods strictly equivalent to the LZ family of compression algorithms underlying formats such as ZIP, GZIP, or JPEG. Such popular algorithms have been shown to reproduce empirically the results of AT's assembly index, and their use has already been reported in successful applications to separating organic from non-organic molecules and to the study of selection and evolution. Here we exhibit and prove the connection and full equivalence of Assembly Theory to Shannon entropy and statistical compression, and AT's disconnection, as a statistical approach, from causality. We demonstrate that formulating a traditional statistically compressed description of molecules, or the theory underlying it, does not imply an explanation or quantification of biases in generative (physical or biological) processes, including those brought about by selection and evolution, when it lacks logical consistency and empirical evidence. We argue that, in their basic arguments, the authors of AT conflate how objects may assemble with causal directionality, and we conclude that Assembly Theory does nothing to explain selection or evolution beyond known and previously established connections, some of which are reviewed here, that rest on sounder theory and better experimental evidence.
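The behaviour the abstract attributes to assembly-index-style measures can be illustrated with any LZ-based statistical compressor (a toy demonstration, not taken from the paper): repetitive, low-entropy strings receive much shorter descriptions than incompressible random strings.

```python
# Compare zlib (DEFLATE = LZ77 + Huffman) compressed sizes of a highly
# repetitive string and a pseudo-random string of the same length.
import random
import zlib

random.seed(0)
repetitive = b"ABAB" * 64                                       # 256 bytes
random_str = bytes(random.randrange(256) for _ in range(256))   # 256 bytes

c_rep = len(zlib.compress(repetitive, 9))
c_rnd = len(zlib.compress(random_str, 9))
print("compressed sizes:", c_rep, "vs", c_rnd)
```

The repetitive string compresses to a small fraction of its length, while the random string does not compress at all; this gap is the statistical-compression signal the abstract argues underlies AT's measures.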

In decision-making, maxitive functions are used for worst-case and best-case evaluations. Maxitivity gives rise to a rich structure that is well-studied in the context of the pointwise order. In this article, we investigate maxitivity with respect to general preorders and provide a representation theorem for such functionals. The results are illustrated for different stochastic orders in the literature, including the usual stochastic order, the increasing convex/concave order, and the dispersive order.

Let $T$ be a tree on $t$ vertices. We prove that for every positive integer $k$ and every graph $G$, either $G$ contains $k$ pairwise vertex-disjoint subgraphs each having a $T$ minor, or there exists a set $X$ of at most $t(k-1)$ vertices of $G$ such that $G-X$ has no $T$ minor. The bound on the size of $X$ is best possible and improves on an earlier $f(t)k$ bound proved by Fiorini, Joret, and Wood (2013) with some very fast growing function $f(t)$. Moreover, our proof is very short and simple.

We consider Gibbs distributions, which are families of probability distributions over a discrete space $\Omega$ with probability mass function of the form $\mu^\Omega_\beta(\omega) \propto e^{\beta H(\omega)}$ for $\beta$ in an interval $[\beta_{\min}, \beta_{\max}]$ and $H( \omega ) \in \{0 \} \cup [1, n]$. The partition function is the normalization factor $Z(\beta)=\sum_{\omega \in\Omega}e^{\beta H(\omega)}$. Two important parameters of these distributions are the log partition ratio $q = \log \tfrac{Z(\beta_{\max})}{Z(\beta_{\min})}$ and the counts $c_x = |H^{-1}(x)|$. These are correlated with system parameters in a number of physical applications and sampling algorithms. Our first main result is to estimate the counts $c_x$ using roughly $\tilde O( \frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and $\tilde O( \frac{n^2}{\varepsilon^2} )$ samples for integer-valued distributions (ignoring some second-order terms and parameters), and we show this is optimal up to logarithmic factors. We illustrate with improved algorithms for counting connected subgraphs, independent sets, and perfect matchings. As a key subroutine, we also develop algorithms to compute the partition function $Z$ using $\tilde O(\frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and using $\tilde O(\frac{n^2}{\varepsilon^2})$ samples for integer-valued distributions.
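A small worked example (toy numbers, not the paper's algorithm) of the quantities defined above: given the counts $c_x = |H^{-1}(x)|$, the partition function is $Z(\beta) = \sum_x c_x e^{\beta x}$, and $q = \log\frac{Z(\beta_{\max})}{Z(\beta_{\min})}$.

```python
# Toy integer-valued Gibbs distribution: counts c_x for x in {0, 1, 2}.
import math

counts = {0: 5, 1: 3, 2: 2}   # c_x = |H^{-1}(x)| on a hypothetical Omega

def Z(beta):
    """Partition function Z(beta) = sum_x c_x * exp(beta * x)."""
    return sum(c * math.exp(beta * x) for x, c in counts.items())

beta_min, beta_max = 0.0, 2.0
q = math.log(Z(beta_max) / Z(beta_min))
print("Z(beta_min) =", Z(beta_min))   # at beta = 0 this is |Omega| = 10
print("log partition ratio q =", q)
```

Note that at $\beta = 0$ the partition function simply counts $|\Omega|$, so $q$ measures how much the distribution's mass concentrates on high-$H$ states as $\beta$ sweeps the interval; this is the parameter governing the sample complexity $\tilde O(q/\varepsilon^2)$ in the abstract.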

This article is concerned with the multilevel Monte Carlo (MLMC) methods for approximating expectations of some functions of the solution to the Heston 3/2-model from mathematical finance, which takes values in $(0, \infty)$ and possesses superlinearly growing drift and diffusion coefficients. To discretize the SDE model, a new Milstein-type scheme is proposed to produce independent sample paths. The proposed scheme can be explicitly solved and is positivity-preserving unconditionally, i.e., for any time step-size $h>0$. This positivity-preserving property for large discretization time steps is particularly desirable in the MLMC setting. Furthermore, a mean-square convergence rate of order one is proved in the non-globally Lipschitz regime, which is not trivial, as the diffusion coefficient grows super-linearly. The obtained order-one convergence in turn promises the desired relevant variance of the multilevel estimator and justifies the optimal complexity $\mathcal{O}(\epsilon^{-2})$ for the MLMC approach, where $\epsilon > 0$ is the required target accuracy. Numerical experiments are finally reported to confirm the theoretical findings.
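The MLMC structure that the order-one convergence rate feeds into can be sketched generically (an illustrative telescoping-sum estimator; the paper's positivity-preserving Milstein scheme for the 3/2-model is not reproduced here, and the SDE, its parameters, and the coupling below are placeholder choices).

```python
# Generic MLMC sketch: coupled fine/coarse Euler paths for the toy SDE
# dX = a*X dt + b*X dW, estimating E[X_T] via the telescoping sum
# E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
import math
import random

def coupled_level(l, M, a=0.05, b=0.2, X0=1.0, T=1.0):
    """Mean of P_fine - P_coarse over M coupled paths at level l."""
    nf = 2 ** l            # fine steps; the coarse path uses nf // 2
    hf = T / nf
    acc = 0.0
    for _ in range(M):
        Xf, Xc, dWc = X0, X0, 0.0
        for n in range(nf):
            dW = random.gauss(0.0, math.sqrt(hf))
            Xf += a * Xf * hf + b * Xf * dW
            dWc += dW                          # accumulate for coarse step
            if l > 0 and n % 2 == 1:           # coarse step every 2 fine steps
                Xc += a * Xc * (2 * hf) + b * Xc * dWc
                dWc = 0.0
        acc += Xf - (Xc if l > 0 else 0.0)     # level 0 has no coarse path
    return acc / M

random.seed(1)
estimate = sum(coupled_level(l, M=2000) for l in range(4))
print("MLMC estimate of E[X_T]:", estimate)   # exact value exp(a*T) ~ 1.051
```

The faster the level corrections $P_l - P_{l-1}$ decay in variance, the fewer samples the fine levels need; the order-one mean-square rate proved in the paper is precisely what drives that decay and yields the optimal $\mathcal{O}(\epsilon^{-2})$ complexity.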

An $\epsilon$-test for any non-trivial property (one for which there exist both satisfying inputs and inputs of large distance from the property) should use a number of queries at least inversely proportional to $\epsilon$. However, to the best of our knowledge, there is no reference proof of this intuition. Such a proof is provided here. It is written so as not to require any prior knowledge of the related literature and, in particular, does not use Yao's method.

Let $a$ and $b$ be two non-zero elements of a finite field $\mathbb{F}_q$, where $q>2$. It has been shown that if $a$ and $b$ have the same multiplicative order in $\mathbb{F}_q$, then the families of $a$-constacyclic and $b$-constacyclic codes over $\mathbb{F}_q$ are monomially equivalent. In this paper, we investigate the monomial equivalence of $a$-constacyclic and $b$-constacyclic codes when $a$ and $b$ have distinct multiplicative orders. We present novel conditions for establishing monomial equivalence in such constacyclic codes, surpassing previous methods of determining monomially equivalent constacyclic and cyclic codes. As an application, we use these results to search for new linear codes more systematically. In particular, we present more than $70$ new record-breaking linear codes over various finite fields, as well as new binary quantum codes.

The study of homomorphisms of $(n,m)$-graphs, that is, adjacency-preserving vertex mappings of graphs with $n$ types of arcs and $m$ types of edges, was initiated by Ne\v{s}et\v{r}il and Raspaud [Journal of Combinatorial Theory, Series B 2000]. Later, several attempts were made to generalize the switch operation popularly used in the study of signed graphs and to study its effect on these homomorphisms. In this article, we provide a generalization of the switch operation on $(n,m)$-graphs which, to the best of our knowledge, encapsulates all previously known generalizations as special cases. We study homomorphisms with respect to the switch operation axiomatically, and prove some fundamental results that are essential tools in the further study of this topic. In the process, we provide yet another solution to an open problem posed by Klostermeyer and MacGillivray [Discrete Mathematics 2004]. We also prove, via a proof that implicitly uses category theory, the existence of a categorical product for $(n,m)$-graphs with respect to a particular class of generalized switches. This is a counterintuitive result, as the categorical product of two $(n,m)$-graphs on $p$ and $q$ vertices has a multiple of $pq$ vertices, where the multiple depends on the switch; as a corollary, it solves an open question asked by Brewster at the PEPS 2012 workshop. We also provide a way to compute the product explicitly, and prove general properties of the product. Finally, we define the analogue of the chromatic number for $(n,m)$-graphs with respect to a generalized switch, explore the interrelations between chromatic numbers with respect to different switch operations, and determine its value for the family of forests using group-theoretic notions.

Robust Markov Decision Processes (RMDPs) are a widely used framework for sequential decision-making under parameter uncertainty. RMDPs have been extensively studied when the objective is to maximize the discounted return, but little is known for average optimality (optimizing the long-run average of the rewards obtained over time) and Blackwell optimality (remaining discount optimal for all discount factors sufficiently close to 1). In this paper, we prove several foundational results for RMDPs beyond the discounted return. We show that average optimal policies can be chosen stationary and deterministic for sa-rectangular RMDPs but, perhaps surprisingly, that history-dependent (Markovian) policies strictly outperform stationary policies for average optimality in s-rectangular RMDPs. We also study Blackwell optimality for sa-rectangular RMDPs, where we show that {\em approximate} Blackwell optimal policies always exist, although Blackwell optimal policies may not exist. We also provide a sufficient condition for their existence, which encompasses virtually all examples from the literature. We then discuss the connection between average and Blackwell optimality, and we describe several algorithms to compute the optimal average return. Interestingly, our approach leverages the connections between RMDPs and stochastic games.
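For readers less familiar with the sa-rectangular setting, the discounted robust Bellman operator can be sketched on a toy instance (hypothetical two-state, two-action numbers, not from the paper): for each state-action pair, an adversary picks the worst transition vector from that pair's own uncertainty set.

```python
# sa-rectangular robust value iteration for the discounted criterion.
GAMMA = 0.9
R = {0: {0: 1.0, 1: 0.5}, 1: {0: 0.0, 1: 0.8}}   # rewards r(s, a)
P = {                                              # uncertainty sets P[s][a]:
    0: {0: [(0.9, 0.1), (0.7, 0.3)], 1: [(0.5, 0.5)]},   # finite lists of
    1: {0: [(0.2, 0.8), (0.4, 0.6)], 1: [(1.0, 0.0)]},   # transition vectors
}

def robust_bellman(V):
    """One application of the sa-rectangular robust Bellman operator."""
    newV = {}
    for s in R:
        newV[s] = max(
            R[s][a] + GAMMA * min(
                sum(p * V[t] for t, p in enumerate(vec)) for vec in P[s][a]
            )
            for a in R[s]
        )
    return newV

V = {0: 0.0, 1: 0.0}
for _ in range(200):          # the operator is a GAMMA-contraction
    V = robust_bellman(V)
print("robust values:", V)
```

For the average and Blackwell criteria studied in the paper, no such simple contraction argument is available, which is precisely why the foundational questions addressed there are nontrivial.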
