
We introduce and study a scale of operator classes on the annulus that is motivated by the $\mathcal{C}_{\rho}$ classes of $\rho$-contractions of Nagy and Foia\c{s}. In particular, our classes are defined in terms of the contractivity of the double-layer potential integral operator over the annulus. We prove that if, in addition, complete contractivity is assumed, then one obtains a complete characterization involving certain variants of the $\mathcal{C}_{\rho}$ classes. Recent work of Crouzeix-Greenbaum and Schwenninger-de Vries allows us to also obtain relevant K-spectral estimates, generalizing existing results from the literature on the annulus. Finally, we exhibit a special case where these estimates can be significantly strengthened.
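For context, here is a minimal reminder of the classical Nagy-Foiaş definition on the unit disc that motivates these classes; the paper's annular variants and the double-layer potential condition are not reproduced here.

```latex
% Classical C_rho classes of Nagy--Foias (unit disc setting).
% An operator T on a Hilbert space H belongs to C_rho (rho > 0) if it admits
% a unitary rho-dilation: a unitary U on a larger space K containing H with
%   T^n = rho * P_H U^n |_H   for all n >= 1,
% where P_H denotes the orthogonal projection of K onto H.
\[
  T \in \mathcal{C}_{\rho}
  \iff
  \exists\, U \text{ unitary on } K \supseteq H :\quad
  T^{n} = \rho\, P_{H}\, U^{n}\big|_{H} \qquad (n \ge 1).
\]
% rho = 1 recovers the contractions (Sz.-Nagy's dilation theorem); rho = 2
% gives the operators of numerical radius at most one (Berger's theorem).
```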

Related content

We prove a strong rate, resp. weak rate, of ${\mathcal O}(\tau)$ for a structure-preserving temporal discretization (with $\tau$ the step size) of the stochastic Allen-Cahn equation with additive, resp. multiplicative, colored noise in $d=1,2,3$ dimensions. Direct variational arguments exploit the one-sided Lipschitz property of the cubic nonlinearity in the first setting to settle the first-order strong rate. The same property allows for uniform bounds on the derivatives of the solution of the related Kolmogorov equation, and then leads to weak rate ${\mathcal O}(\tau)$ in the presence of multiplicative noise. Hence, we obtain twice the rate of convergence known for the strong error in the presence of multiplicative noise.
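As a concrete illustration of the property the argument exploits, the following shows the standard cubic Allen-Cahn nonlinearity and its one-sided Lipschitz estimate; the equation is written in schematic form only, and the paper's specific noise and discretization are not reproduced.

```latex
% Stochastic Allen--Cahn equation (schematic), with drift f(u) = u - u^3:
%   du = (\Delta u + u - u^3)\,dt + dW_t            (additive noise)
%   du = (\Delta u + u - u^3)\,dt + g(u)\,dW_t      (multiplicative noise)
% One-sided Lipschitz property of the cubic nonlinearity:
\[
  \bigl(f(u)-f(v)\bigr)(u-v)
  \;=\; (u-v)^{2}\bigl(1-(u^{2}+uv+v^{2})\bigr)
  \;\le\; (u-v)^{2},
\]
% since u^2 + uv + v^2 = (u + v/2)^2 + 3v^2/4 >= 0.  This monotonicity-type
% bound is available even though f itself is not globally Lipschitz.
```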

Given a sound first-order p-time theory $T$ capable of formalizing the syntax of first-order logic, we define a p-time function $g_T$ that stretches all inputs by one bit, and we use its properties to show that $T$ must be incomplete. We leave it as an open problem whether for some $T$ the range of $g_T$ intersects all infinite NP sets (i.e. whether it is a proof complexity generator hard for all proof systems). A propositional version of the construction shows that at least one of the following three statements is true:
- there is no p-optimal propositional proof system (this is equivalent to the non-existence of a time-optimal propositional proof search algorithm),
- $E \not\subseteq P/poly$,
- there exists a function $h$ that stretches all inputs by one bit, is computable in sub-exponential time, and whose range $Rng(h)$ intersects all infinite NP sets.

Transition amplitudes and transition probabilities are relevant to many areas of physics simulation, including the calculation of response properties and correlation functions. These quantities can also be related to solving linear systems of equations. Here we present three related algorithms for calculating transition probabilities. First, we extend a previously published short-depth algorithm, allowing for the two input states to be non-orthogonal. Building on this first procedure, we then derive a higher-depth algorithm based on Trotterization and Richardson extrapolation that requires fewer circuit evaluations. Third, we introduce a tunable algorithm that allows for trading off circuit depth and measurement complexity, yielding an algorithm that can be tailored to specific hardware characteristics. Finally, we implement proof-of-principle numerics for models in physics and chemistry and for a subroutine in variational quantum linear solving (VQLS). The primary benefits of our approaches are that (a) arbitrary non-orthogonal states may now be used with small increases in quantum resources, (b) we (like another recently proposed method) entirely avoid subroutines such as the Hadamard test that may require three-qubit gates to be decomposed, and (c) in some cases fewer quantum circuit evaluations are required as compared to the previous state-of-the-art in NISQ algorithms for transition probabilities.
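To fix ideas, the target quantity is a transition probability of the form $|\langle \chi | U | \psi \rangle|^2$ between two (possibly non-orthogonal) states. Below is a minimal classical statevector sketch of that quantity for reference only; it is not any of the three quantum algorithms described above, and the operator and states are made-up single-qubit examples.

```python
import numpy as np

def transition_probability(U: np.ndarray, psi: np.ndarray, chi: np.ndarray) -> float:
    """Return |<chi| U |psi>|^2 for (normalized) statevectors psi and chi."""
    psi = psi / np.linalg.norm(psi)
    chi = chi / np.linalg.norm(chi)
    amplitude = np.vdot(chi, U @ psi)   # <chi| U |psi>
    return float(np.abs(amplitude) ** 2)

# Hypothetical single-qubit example: U = exp(-i t X) between two
# non-orthogonal states (all numbers are purely illustrative).
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
t = 0.7
U = np.cos(t) * np.eye(2) - 1j * np.sin(t) * X   # exp(-i t X)
psi = np.array([1.0, 0.0], dtype=complex)                  # |0>
chi = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)  # non-orthogonal to |0>

print(transition_probability(U, psi, chi))
```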

Iterative refinement (IR) is a popular scheme for solving a linear system of equations based on gradually improving the accuracy of an initial approximation. Originally developed to improve upon the accuracy of Gaussian elimination, interest in IR has been revived because of its suitability for execution on fast low-precision hardware such as analog devices and graphics processing units. IR generally converges when the error associated with the solution method is small, but is known to diverge when this error is large. We propose and analyze a novel enhancement to the IR algorithm by adding a line search optimization step that guarantees the algorithm will not diverge. Numerical experiments verify our theoretical results and illustrate the effectiveness of our proposed scheme.
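A minimal numpy sketch of the general idea follows: standard iterative refinement with the correction computed inexactly (here simulated by a single-precision solve), plus a line search along the correction direction that minimizes the residual norm instead of using the fixed step of classical IR. The closed-form step length is elementary least squares; the matrix, sizes, and iteration count are illustrative assumptions, and this is not necessarily the authors' exact scheme or analysis.

```python
import numpy as np

def ir_with_line_search(A, b, n_iter=20):
    """Iterative refinement where the correction d is computed in low precision
    (simulated via float32) and the update uses the residual-minimizing step
    length alpha rather than the classical choice alpha = 1."""
    x = np.zeros_like(b)
    A32 = A.astype(np.float32)                      # "low-precision" solver data
    for _ in range(n_iter):
        r = b - A @ x                               # residual in working precision
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        Ad = A @ d
        # Line search: alpha minimizing || r - alpha * A d ||_2 (closed form).
        alpha = np.dot(Ad, r) / np.dot(Ad, Ad)
        x = x + alpha * d
    return x

# Illustrative symmetric positive definite test problem (numbers are made up).
rng = np.random.default_rng(0)
n = 200
G = rng.standard_normal((n, n))
A = G @ G.T / n + np.eye(n)
x_true = rng.standard_normal(n)
b = A @ x_true
x = ir_with_line_search(A, b)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

By construction the residual norm is non-increasing from one step to the next, which is the kind of mechanism that rules out divergence even when the inner solve is very inaccurate.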

We consider the problem of approximating the regression function from noisy vector-valued data by an online learning algorithm using an appropriate reproducing kernel Hilbert space (RKHS) as prior. In an online algorithm, i.i.d. samples become available one at a time through a random process and are successively processed to build approximations to the regression function. We are interested in the asymptotic performance of such online approximation algorithms and show that the expected squared error in the RKHS norm can be bounded by $C^2 (m+1)^{-s/(2+s)}$, where $m$ is the current number of processed data points, the parameter $0<s\leq 1$ expresses an additional smoothness assumption on the regression function, and the constant $C$ depends on the variance of the input noise, the smoothness of the regression function, and further parameters of the algorithm.
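One common form of such an online RKHS algorithm is regularized stochastic gradient descent on the squared loss, with the current iterate stored through its kernel expansion; the sketch below illustrates that template. The Gaussian kernel, step sizes, regularization, and toy data stream are illustrative assumptions and not the precise (vector-valued) scheme analyzed in the paper.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=0.5):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

class OnlineRKHSRegressor:
    """Online (stochastic-gradient) regression in an RKHS.

    The iterate f_m = sum_i c_i k(x_i, .) is stored via its centers and
    coefficients; each new sample (x, y) triggers the update
        f_{m+1} = f_m - gamma_m * ((f_m(x) - y) k(x, .) + lam * f_m).
    """

    def __init__(self, kernel=gaussian_kernel, lam=1e-3):
        self.kernel = kernel
        self.lam = lam
        self.centers, self.coeffs = [], []

    def predict(self, x):
        return sum(c * self.kernel(xc, x) for xc, c in zip(self.centers, self.coeffs))

    def update(self, x, y, step):
        residual = self.predict(x) - y
        # Shrinkage from the regularization term, then append the new center.
        self.coeffs = [(1.0 - step * self.lam) * c for c in self.coeffs]
        self.centers.append(x)
        self.coeffs.append(-step * residual)

# Illustrative scalar-valued toy stream with decaying step sizes.
rng = np.random.default_rng(1)
model = OnlineRKHSRegressor()
for m in range(1, 500):
    x = rng.uniform(0.0, 1.0, size=1)
    y = np.sin(2.0 * np.pi * x[0]) + 0.1 * rng.standard_normal()
    model.update(x, y, step=1.0 / np.sqrt(m))
```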

In the Activation Edge-Multicover problem we are given a multigraph $G=(V,E)$ with activation costs $\{c_{e}^u,c_{e}^v\}$ for every edge $e=uv \in E$, and degree requirements $r=\{r_v:v \in V\}$. The goal is to find an edge subset $J \subseteq E$ of minimum activation cost $\sum_{v \in V}\max\{c_{uv}^v:uv \in J\}$, such that every $v \in V$ has at least $r_v$ neighbors in the graph $(V,J)$. Let $k= \max_{v \in V} r_v$ be the maximum requirement and let $\theta=\max_{e=uv \in E} \frac{\max\{c_e^u,c_e^v\}}{\min\{c_e^u,c_e^v\}}$ be the maximum quotient between the two costs of an edge. For $\theta=1$ the problem admits approximation ratio $O(\log k)$. For $k=1$ it generalizes the Set Cover problem (when $\theta=\infty$), and admits a tight approximation ratio $O(\log n)$. This implies approximation ratio $O(k \log n)$ for general $k$ and $\theta$, and no better approximation ratio was known. We obtain the first logarithmic approximation ratio $O(\log k +\log\min\{\theta,n\})$, which bridges between the two known ratios -- $O(\log k)$ for $\theta=1$ and $O(\log n)$ for $k=1$. This implies approximation ratio $O\left(\log k +\log\min\{\theta,n\}\right) +\beta \cdot (\theta+1)$ for the Activation $k$-Connected Subgraph problem, where $\beta$ is the best known approximation ratio for the ordinary min-cost version of the problem.

We study the classical problem of approximating a non-decreasing function $f: \mathcal{X} \to \mathcal{Y}$ in $L^p(\mu)$ norm by sequentially querying its values, for known compact real intervals $\mathcal{X}$, $\mathcal{Y}$ and a known probability measure $\mu$ on $\mathcal{X}$. For any function $f$ we characterize the minimum number of evaluations of $f$ that algorithms need to guarantee an approximation $\hat{f}$ with an $L^p(\mu)$ error below $\epsilon$ after stopping. Unlike worst-case results that hold uniformly over all $f$, our complexity measure is dependent on each specific function $f$. To address this problem, we introduce GreedyBox, a generalization of an algorithm originally proposed by Novak (1992) for numerical integration. We prove that GreedyBox achieves an optimal sample complexity for any function $f$, up to logarithmic factors. Additionally, we uncover results regarding piecewise-smooth functions. Perhaps as expected, the $L^p(\mu)$ error of GreedyBox decreases much faster for piecewise-$C^2$ functions than the general analysis predicts, even though the algorithm uses no knowledge of the smoothness of $f$. A simple modification even achieves optimal minimax approximation rates for such functions, which we compute explicitly. In particular, our findings highlight multiple performance gaps between adaptive and non-adaptive algorithms, between smooth and piecewise-smooth functions, and between monotone and non-monotone functions. Finally, we provide numerical experiments to support our theoretical results.
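The following is a minimal sketch of the greedy box principle for monotone functions, stated for concreteness with the Lebesgue measure on $[0,1]$ and $p=1$: since $f$ is non-decreasing, its graph on $[a,b]$ stays inside the box $[a,b]\times[f(a),f(b)]$, so half the box area bounds the $L^1$ error of the midpoint-value constant on that interval, and the interval with the largest bound is split next. This only illustrates the principle; it is not the GreedyBox algorithm of the paper.

```python
import heapq

def greedy_monotone_approx(f, n_queries, a=0.0, b=1.0):
    """Greedy piecewise-constant approximation of a non-decreasing f on [a, b].

    On each interval the graph of f lies in the box spanned by the endpoint
    values, so half the box area bounds the L1 error of the midpoint-value
    constant.  The interval with the largest bound is split at its midpoint.
    Returns the intervals (as heap entries) and the total certified bound.
    """
    fa, fb = f(a), f(b)
    queries = 2
    # Max-heap via negated priorities: (-bound, lo, hi, f(lo), f(hi)).
    heap = [(-0.5 * (b - a) * (fb - fa), a, b, fa, fb)]
    while queries < n_queries:
        _, lo, hi, flo, fhi = heapq.heappop(heap)
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        queries += 1
        heapq.heappush(heap, (-0.5 * (mid - lo) * (fmid - flo), lo, mid, flo, fmid))
        heapq.heappush(heap, (-0.5 * (hi - mid) * (fhi - fmid), mid, hi, fmid, fhi))
    total_bound = -sum(item[0] for item in heap)
    return heap, total_bound

# Illustrative monotone test function.
pieces, bound = greedy_monotone_approx(lambda x: x ** 3, n_queries=50)
print("certified L1 error bound:", bound)
```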

We define a family of $C^1$ functions which we call "nowhere coexpanding functions" that is closed under composition and includes all $C^3$ functions with non-positive Schwarzian derivative. We establish results on the number and nature of the fixed points of these functions, including a generalisation of a classic result of Singer.
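For reference, the Schwarzian derivative appearing in the non-positivity condition is recalled below; the definition of "nowhere coexpanding" itself is the paper's and is not reproduced here.

```latex
% Schwarzian derivative of a C^3 function f, at points where f'(x) != 0:
\[
  Sf(x) \;=\; \frac{f'''(x)}{f'(x)} \;-\; \frac{3}{2}\left(\frac{f''(x)}{f'(x)}\right)^{2}.
\]
% Singer's classical result: for interval maps with negative Schwarzian
% derivative, the immediate basin of every attracting periodic orbit contains
% a critical point or a boundary point of the interval.
```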

We are interested in numerical algorithms for computing the electrical field generated by a charge distribution localized on scale $l$ in an infinite heterogeneous correlated random medium, in a situation where the medium is only known in a box of diameter $L\gg l$ around the support of the charge. We show that the algorithm of Lu, Otto and Wang, which suggests optimal Dirichlet boundary conditions motivated by the multipole expansion of Bella, Giunti and Otto, still performs well in correlated media. With overwhelming probability, we obtain a convergence rate in terms of $l$, $L$ and the size of the correlations, for which optimality is supported by numerical simulations. These estimates are provided for ensembles satisfying a multi-scale logarithmic Sobolev inequality, where our main tool is an extension of the semigroup estimates established by the first author. As part of our strategy, we construct sublinear second-order correctors in this correlated setting, which is of independent interest.

Consider the point-line geometry ${\mathcal P}_t(n,k)$ whose points are the $[n,k]$-linear codes with minimum dual distance at least $t+1$, and in which two points $X$ and $Y$ are collinear whenever $X\cap Y$ is an $[n,k-1]$-linear code with minimum dual distance at least $t+1$. We are interested in the collinearity graph $\Lambda_t(n,k)$ of ${\mathcal P}_t(n,k)$. The graph $\Lambda_t(n,k)$ is a subgraph of the Grassmann graph and also a subgraph of the graph $\Delta_t(n,k)$ of the linear codes having minimum dual distance at least $t+1$ introduced in [M. Kwiatkowski, M. Pankov, On the distance between linear codes, Finite Fields Appl. 39 (2016), 251--263, doi:10.1016/j.ffa.2016.02.004, arXiv:1506.00215]. We study the structure of $\Lambda_t(n,k)$ in relation to that of $\Delta_t(n,k)$ and characterize the set of its isolated vertices. We then focus on $\Lambda_1(n,k)$ and $\Lambda_2(n,k)$, providing necessary and sufficient conditions for them to be connected.
