
Given a matrix-valued function $\mathcal{F}(\lambda)=\sum_{i=1}^d f_i(\lambda) A_i$, with complex matrices $A_i$ and analytic functions $f_i(\lambda)$ for $i=1,\ldots,d$, we discuss a method for the numerical approximation of the distance to singularity of $\mathcal{F}(\lambda)$. The closest singular matrix-valued function $\widetilde {\mathcal{F}}(\lambda)$ with respect to the Frobenius norm is approximated by an iterative method. The singularity condition on the matrix-valued function is translated into a numerical constraint for a suitable minimization problem. Unlike the case of matrix polynomials, in the general setting of matrix-valued functions the main issue is that the function $\det ( \widetilde{\mathcal{F}}(\lambda) )$ may have an infinite number of roots. A key feature of the numerical method is that it extends to different structures, such as sparsity patterns induced by the matrix coefficients.


A periodic temporal graph $\mathcal{G}=(G_0, G_1, \dots, G_{p-1})^*$ is an infinite periodic sequence of graphs $G_i=(V,E_i)$, where $G=(V,\cup_i E_i)$ is called the footprint. Recently, the arena where the Cops and Robber game is played has been extended from a graph to a periodic graph; in this setting, the copnumber is the minimum number of cops sufficient for capturing the robber. We study the connections and distinctions between the copnumber $c(\mathcal{G})$ of a periodic graph $\mathcal{G}$ and the copnumber $c(G)$ of its footprint $G$ and establish several facts. For instance, we show that the smallest periodic graph with $c(\mathcal{G}) = 3$ has at most $8$ nodes; in contrast, the smallest graph $G$ with $c(G) = 3$ has $10$ nodes. We push this investigation further by generating multiple examples showing how loosely the copnumbers of a periodic graph $\mathcal{G}$, of its subgraphs $G_i$, and of its footprint $G$ can be tied. Based on these results, we derive upper bounds on the copnumber of a periodic graph from properties of its footprint, such as its treewidth.

We give a strongly explicit construction of $\varepsilon$-approximate $k$-designs for the orthogonal group $\mathrm{O}(N)$ and the unitary group $\mathrm{U}(N)$, for $N=2^n$. Our designs are of cardinality $\mathrm{poly}(N^k/\varepsilon)$ (equivalently, they have seed length $O(nk + \log(1/\varepsilon))$); up to the polynomial, this matches the number of design elements used by the construction consisting of completely random matrices.

A code $C \subseteq \{0, 1, 2\}^n$ of length $n$ is called trifferent if for any three distinct elements of $C$ there exists a coordinate in which they all differ. By $T(n)$ we denote the maximum cardinality of a trifferent code of length $n$. The values $T(5)=10$ and $T(6)=13$ were recently determined. Here we determine $T(7)=16$, $T(8)=20$, and $T(9)=27$. In the latter case $n=9$, there also exist linear codes attaining the maximum possible cardinality $27$.
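As a quick illustration of the definition (not taken from the paper), the trifference condition can be checked by brute force over all triples of codewords; the names below are chosen for exposition:

```python
from itertools import combinations

def is_trifferent(code):
    """Check that every three distinct codewords over {0,1,2} have a
    coordinate in which all three symbols differ, i.e. form {0,1,2}."""
    for a, b, c in combinations(code, 3):
        # A triple is "good" if some coordinate carries three distinct symbols.
        if not any(len({x, y, z}) == 3 for x, y, z in zip(a, b, c)):
            return False
    return True

# A small trifferent code of length 3 (no claim of maximality):
example = [(0, 0, 0), (1, 1, 1), (2, 2, 2), (0, 1, 2)]
```

The brute-force check is cubic in $|C|$, so it is only useful for the small lengths ($n \le 9$) considered above.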

We study nonparametric estimation of the density $\pi$ of the stationary distribution of a $d$-dimensional stochastic differential equation $(X_t)_{t \in [0, T]}$. From continuous observation of the sample path on $[0, T]$, we study the estimation of $\pi(x)$ as $T$ goes to infinity. For $d\ge2$, we characterize the minimax rate for the $\mathbf{L}^2$-risk in pointwise estimation over a class of anisotropic H\"older functions $\pi$ with regularity $\beta = (\beta_1, \ldots, \beta_d)$. For $d \ge 3$, our finding is that, having ordered the smoothness so that $\beta_1 \le \ldots \le \beta_d$, the minimax rate depends on whether $\beta_2 < \beta_3$ or $\beta_2 = \beta_3$. In the first case the rate is $(\frac{\log T}{T})^\gamma$, and in the second it is $(\frac{1}{T})^\gamma$, where $\gamma$ is an explicit exponent depending on the dimension and on $\bar{\beta}_3$, the harmonic mean of the smoothness over the $d-2$ directions remaining after excluding $\beta_1$ and $\beta_2$, the two smallest ones. We also demonstrate that kernel-based estimators achieve the optimal minimax rate, and we propose an adaptive procedure for both the integrated and the pointwise $\mathbf{L}^2$-risk. In the two-dimensional case, we show that kernel density estimators achieve the rate $\frac{\log T}{T}$, which is optimal in the minimax sense. Finally, we illustrate the validity of our theoretical findings with numerical experiments.

A numerical method is proposed for the simulation of composite open quantum systems. It is based on Lindblad master equations and adiabatic elimination. Each subsystem is assumed to converge exponentially towards a stationary subspace, to be slightly impacted by some decoherence channels, and to be weakly coupled to the other subsystems. The method rests on a perturbation analysis with an asymptotic expansion, exploits a formulation of the slow dynamics with reduced dimension, and relies on the invariant operators of the local, nominal dissipative dynamics attached to each subsystem. The second-order expansion can be computed using only local numerical calculations, avoiding computations on the tensor-product Hilbert space attached to the full system. This numerical method is particularly well suited for autonomous quantum error correction schemes. Simulations of such reduced models agree with full-model simulations for typical gates acting on one and two cat-qubits (Z, ZZ and CNOT) when the mean photon number of each cat-qubit is less than $8$. For larger mean photon numbers and gates with three cat-qubits (ZZZ and CCNOT), full-model simulations are almost impossible, whereas reduced-model simulations remain accessible. In particular, they capture both the dominant phase-flip error rate and the very small bit-flip error rate with its exponential suppression versus the mean photon number.

Given a pair of non-negative random variables $X$ and $Y$, we introduce a class of nonparametric tests for the null hypothesis that $X$ dominates $Y$ in the total time on test order. Critical values are determined using bootstrap-based inference, and the tests are shown to be consistent. The same approach is used to construct tests for the excess wealth order. As a byproduct, we also obtain a class of goodness-of-fit tests for the NBUE family of distributions.

We study monotonicity testing of functions $f \colon \{0,1\}^d \to \{0,1\}$ using sample-based algorithms, which are only allowed to observe the value of $f$ on points drawn independently from the uniform distribution. A classic result by Bshouty-Tamon (J. ACM 1996) proved that monotone functions can be learned with $\exp(O(\min\{\frac{1}{\varepsilon}\sqrt{d},d\}))$ samples, and it is not hard to show that this bound extends to testing. Prior to our work, the only lower bound for this problem was $\Omega(\sqrt{\exp(d)/\varepsilon})$ in the small-$\varepsilon$ regime $\varepsilon = O(d^{-3/2})$, due to Goldreich-Goldwasser-Lehman-Ron-Samorodnitsky (Combinatorica 2000). Thus, the sample complexity of monotonicity testing was wide open for $\varepsilon \gg d^{-3/2}$. We resolve this question, obtaining a tight lower bound of $\exp(\Omega(\min\{\frac{1}{\varepsilon}\sqrt{d},d\}))$ for all $\varepsilon$ at most a sufficiently small constant. In fact, we prove a much more general result, showing that the sample complexity of $k$-monotonicity testing and learning for functions $f \colon \{0,1\}^d \to [r]$ is $\exp(\Theta(\min\{\frac{rk}{\varepsilon}\sqrt{d},d\}))$. For testing with one-sided error we show that the sample complexity is $\exp(\Theta(d))$. Beyond the hypercube, we prove nearly tight bounds (up to polylog factors of $d,k,r,1/\varepsilon$ in the exponent) of $\exp(\widetilde{\Theta}(\min\{\frac{rk}{\varepsilon}\sqrt{d},d\}))$ on the sample complexity of testing and learning measurable $k$-monotone functions $f \colon \mathbb{R}^d \to [r]$ under product distributions. Our upper bound improves upon the previous bound of $\exp(\widetilde{O}(\min\{\frac{k}{\varepsilon^2}\sqrt{d},d\}))$ by Harms-Yoshida (ICALP 2022) for Boolean functions ($r=2$).
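As a side illustration (not part of the paper), the property being tested can be checked exhaustively in small dimension; a sample-based tester must distinguish such functions from those far from monotone while seeing only uniformly random evaluation points. A minimal sketch, with hypothetical names:

```python
from itertools import product

def is_monotone(f, d):
    """Brute-force check that f: {0,1}^d -> {0,1} is monotone:
    flipping any coordinate from 0 to 1 never decreases f."""
    for x in product((0, 1), repeat=d):
        fx = f(x)
        for i in range(d):
            if x[i] == 0:
                # Neighbor above x in the hypercube order.
                y = x[:i] + (1,) + x[i + 1:]
                if f(y) < fx:
                    return False
    return True
```

For example, the majority function on three bits is monotone, while parity is not; the exhaustive check costs $O(d \cdot 2^d)$ evaluations, which is exactly what sample-based testers try to beat.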

Rational function approximations provide a simple but flexible alternative to polynomial approximation, allowing one to capture complex non-linearities without oscillatory artifacts. However, there have been few attempts to use rational functions on noisy data, due to the likelihood of creating spurious singularities. To avoid creating singularities, we use Bernstein polynomials and appropriate conditions on their coefficients to force the denominator to be strictly positive. While this reduces the set of rational functions that can be expressed, it keeps all the benefits of rational approximation while maintaining the robustness of polynomial approximation in noisy-data scenarios. Our numerical experiments on noisy data show that existing rational approximation methods routinely produce spurious poles inside the approximation domain. This contrasts with our method, which cannot create poles in the approximation domain and provides better fits than polynomial approximation, and even than penalized splines, on functions of multiple variables. Moreover, guaranteeing a pole-free approximation on an interval is critical for estimating non-constant coefficients when numerically solving differential equations using spectral methods. This provides a compact representation of the original differential equation, allowing numerical solvers to achieve high accuracy quickly, as seen in our experiments.
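The constraint idea can be sketched in a few lines (this is an illustrative reconstruction, not the authors' implementation): Bernstein basis polynomials are nonnegative on $[0,1]$ and sum to one, so a denominator whose Bernstein coefficients are all strictly positive is strictly positive on $[0,1]$, ruling out poles in the domain.

```python
import numpy as np
from math import comb

def bernstein_eval(coeffs, x):
    """Evaluate a polynomial given by Bernstein coefficients b_0..b_n on [0,1]."""
    n = len(coeffs) - 1
    x = np.asarray(x, dtype=float)
    basis = np.array([comb(n, k) * x**k * (1 - x) ** (n - k) for k in range(n + 1)])
    return np.asarray(coeffs, dtype=float) @ basis

def rational_eval(num_coeffs, den_coeffs, x):
    """Rational function p(x)/q(x), both in the Bernstein basis.
    Strictly positive denominator coefficients guarantee q(x) > 0 on [0,1]."""
    assert np.all(np.asarray(den_coeffs) > 0), "positive coefficients keep the denominator pole-free"
    return bernstein_eval(num_coeffs, x) / bernstein_eval(den_coeffs, x)
```

With denominator coefficients $(1,\ldots,1)$ the denominator is identically $1$ (partition of unity), and by linear precision the numerator coefficients $(0,\tfrac12,1)$ reproduce $p(x)=x$; in an actual fitting procedure the coefficients would be chosen by constrained least squares.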

Interpolatory necessary optimality conditions for $\mathcal{H}_2$-optimal reduced-order modeling of unstructured linear time-invariant (LTI) systems are well-known. Building on previous work on $\mathcal{L}_2$-optimal reduced-order modeling of stationary parametric problems, in this paper we develop and investigate optimality conditions for $\mathcal{H}_2$-optimal reduced-order modeling of structured LTI systems, in particular second-order, port-Hamiltonian, and time-delay systems. We show that, across all these different structured settings, bitangential Hermite interpolation is the common form of the optimality conditions, thus establishing a unifying optimality framework for structured reduced-order modeling.

The deletion distance between two binary words $u,v \in \{0,1\}^n$ is the smallest $k$ such that $u$ and $v$ share a common subsequence of length $n-k$. A set $C$ of binary words of length $n$ is called a $k$-deletion code if every pair of distinct words in $C$ has deletion distance greater than $k$. In 1965, Levenshtein initiated the study of deletion codes by showing that, for $k\ge 1$ fixed and $n$ going to infinity, a $k$-deletion code $C\subseteq \{0,1\}^n$ of maximum size satisfies $\Omega_k(2^n/n^{2k}) \leq |C| \leq O_k( 2^n/n^k)$. We make the first asymptotic improvement to these bounds by showing that there exist $k$-deletion codes with size at least $\Omega_k(2^n \log n/n^{2k})$. Our proof is inspired by Jiang and Vardy's improvement to the classical Gilbert--Varshamov bounds. We also establish several related results on the number of longest common subsequences and shortest common supersequences of a pair of words with given length and deletion distance.
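As a concrete rendering of the definitions above (illustrative code, not from the paper), the deletion distance of two equal-length words is $n$ minus the length of their longest common subsequence, computable by the standard dynamic program:

```python
from itertools import combinations

def deletion_distance(u, v):
    """Deletion distance between equal-length words: n - LCS(u, v)."""
    assert len(u) == len(v)
    n = len(u)
    # Standard O(n^2) longest-common-subsequence dynamic program.
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if u[i - 1] == v[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return n - dp[n][n]

def is_k_deletion_code(code, k):
    """Every pair of distinct codewords must have deletion distance > k."""
    return all(deletion_distance(u, v) > k for u, v in combinations(code, 2))
```

For instance, `"0101"` and `"1010"` share the common subsequence `"010"`, so their deletion distance is $1$; the existence results above concern how large such a code can be for fixed $k$ as $n \to \infty$.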
