
Krylov methods rely on iterated matrix-vector products $A^k u_j$ for an $n\times n$ matrix $A$ and vectors $u_1,\ldots,u_m$. The space spanned by all iterates $A^k u_j$ admits a particular basis -- the \emph{maximal Krylov basis} -- which consists of iterates of the first vector $u_1, Au_1, A^2u_1,\ldots$, up to the first linear dependency, followed by similarly truncated iterates of the subsequent vectors until a basis is obtained. Finding minimal polynomials and Frobenius normal forms is closely related to computing maximal Krylov bases. The fastest way to produce these bases was, until this paper, Keller-Gehrig's 1985 algorithm, whose complexity bound $O(n^\omega \log(n))$ comes from repeated squarings of $A$ and logarithmically many Gaussian eliminations. Here $\omega>2$ is a feasible exponent for matrix multiplication over the base field. We present an algorithm computing the maximal Krylov basis in $O(n^\omega\log\log(n))$ field operations when $m \in O(n)$, and even in $O(n^\omega)$ as soon as $m\in O(n/\log(n)^c)$ for some fixed real $c>0$. As a consequence, we show that the Frobenius normal form together with a transformation matrix can be computed deterministically in $O(n^\omega \log\log(n)^2)$, and therefore matrix exponentiation~$A^k$ can be performed in the latter complexity if $\log(k) \in O(n^{\omega-1-\varepsilon})$, for $\varepsilon>0$. A key idea for these improvements is to rely on fast algorithms for $m\times m$ polynomial matrices of average degree $n/m$, involving high-order lifting and minimal kernel bases.
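
As a point of reference, the following minimal sketch performs the direct (non-fast) construction described above: iterate each $u_j$ under $A$, keep iterates until linear dependence, then move on to the next vector. Floating-point rank tests stand in for exact field arithmetic; this is the textbook procedure, not the paper's $O(n^\omega\log\log(n))$ algorithm.

```python
# Naive maximal Krylov basis via repeated matrix-vector products and rank tests.
import numpy as np

def maximal_krylov_basis(A, U, tol=1e-10):
    """A: (n, n) matrix; U: list of vectors u_1, ..., u_m.
    Returns a matrix whose columns form the maximal Krylov basis."""
    n = A.shape[0]
    basis = np.zeros((n, 0))
    for u in U:
        v = np.asarray(u, dtype=float)
        while basis.shape[1] < n:
            candidate = np.column_stack([basis, v])
            # stop iterating u_j once A^k u_j becomes linearly dependent
            if np.linalg.matrix_rank(candidate, tol=tol) == basis.shape[1]:
                break
            basis = candidate
            v = A @ v
    return basis

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
U = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(maximal_krylov_basis(A, U))   # columns u_1, A u_1 already span R^2 here
```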

Related content

We consider the cubic nonlinear Schr\"odinger equation with a spatially rough potential, a key equation in the mathematical setup for nonlinear Anderson localization. Our study comprises two main parts: new optimal results on the well-posedness analysis at the PDE level, and subsequently a new efficient numerical method, its convergence analysis, and simulations that illustrate our analytical results. In the analysis part, our results focus on understanding how the regularity of the solution is influenced by the regularity of the potential, for which we provide quantitative and explicit characterizations. Ill-posedness results are also established to demonstrate the sharpness of the obtained regularity characterizations and to indicate the minimum regularity required of the potential for the NLS to be solvable. Building upon the obtained regularity results, we design an appropriate numerical discretization for the model and establish its convergence with an optimal error bound. The numerical experiments at the end not only verify the theoretical regularity results, but also confirm the established convergence rate of the proposed scheme. Additionally, a comparison with other existing schemes is conducted to demonstrate the improved accuracy of our new scheme in the case of a rough potential.
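
For illustration only, a standard Strang-splitting Fourier spectral solver for the cubic NLS $i\partial_t u = -\partial_x^2 u + V(x)u + |u|^2 u$ with periodic boundary conditions can be sketched as follows; this is not the scheme proposed and analyzed in the paper, whose design is tailored to low-regularity potentials, and the piecewise-constant random potential `V` below is a hypothetical stand-in for a rough potential.

```python
# Strang splitting + Fourier spectral scheme for i u_t = -u_xx + V(x) u + |u|^2 u.
import numpy as np

def strang_step_nls(u, V, k, dt):
    # half step for the potential + cubic part (a pointwise phase rotation)
    u = u * np.exp(-1j * 0.5 * dt * (V + np.abs(u) ** 2))
    # full step for the free Schroedinger flow, exact in Fourier space
    u = np.fft.ifft(np.exp(-1j * dt * k ** 2) * np.fft.fft(u))
    # second half step for the potential + cubic part
    return u * np.exp(-1j * 0.5 * dt * (V + np.abs(u) ** 2))

N, L, T, steps = 256, 2 * np.pi, 0.1, 1000
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
rng = np.random.default_rng(0)
V = np.where(rng.random(N) > 0.5, 1.0, 0.0)    # rough, piecewise-constant potential
u = np.exp(1j * np.sin(x))
for _ in range(steps):
    u = strang_step_nls(u, V, k, T / steps)
```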

Deep generative models aim to learn the underlying distribution of data and generate new samples from it. Despite the diversity of generative models and their high-quality generation performance in practice, most of them lack rigorous theoretical convergence proofs. In this work, we aim to establish convergence results for OT-Flow, one of the deep generative models. First, by reformulating the framework of the OT-Flow model, we establish the $\Gamma$-convergence of the OT-Flow formulation to the corresponding optimal transport (OT) problem as the regularization parameter $\alpha$ goes to infinity. Second, since the loss function is approximated by the Monte Carlo method during training, we also establish the convergence of the discrete loss function to the continuous one as the sample number $N$ goes to infinity. Meanwhile, the approximation capability of the neural network provides an upper bound for the discrete loss function at the minimizers. The proofs of both results provide convincing guarantees for OT-Flow.
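
To make the Monte Carlo discretization step concrete, here is a toy illustration with a quadratic loss (not the actual OT-Flow objective): the discrete loss approaches the continuous expectation as the sample number $N$ grows.

```python
# The continuous loss E_x[(theta - x)^2] with x ~ N(0,1) equals theta^2 + 1;
# its Monte Carlo average over N samples converges to it as N -> infinity.
import numpy as np

rng = np.random.default_rng(0)
theta = 0.7
continuous_loss = theta ** 2 + 1.0
for N in (10, 1_000, 100_000):
    x = rng.standard_normal(N)
    discrete_loss = np.mean((theta - x) ** 2)
    print(N, abs(discrete_loss - continuous_loss))   # gap shrinks roughly like 1/sqrt(N)
```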

We study differentially private (DP) estimation of a rank-$r$ matrix $M \in \mathbb{R}^{d_1\times d_2}$ under the trace regression model with Gaussian measurement matrices. Theoretically, the sensitivity of non-private spectral initialization is precisely characterized, and the differential-privacy-constrained minimax lower bound for estimating $M$ under the Schatten-$q$ norm is established. Methodologically, the paper introduces a computationally efficient algorithm for DP-initialization with a sample size of $n \geq \widetilde{O}(r^2 (d_1\vee d_2))$. Under certain regularity conditions, the DP-initialization falls within a local ball surrounding $M$. We also propose a differentially private algorithm for estimating $M$ based on Riemannian optimization (DP-RGrad), which achieves a near-optimal convergence rate with the DP-initialization and a sample size of $n \geq \widetilde{O}(r (d_1 + d_2))$. Finally, the paper discusses the non-trivial gap between the minimax lower bound and the upper bound of low-rank matrix estimation under the trace regression model. It is shown that the estimator given by DP-RGrad attains the optimal convergence rate in a weaker notion of differential privacy. Our technique for analyzing the sensitivity of initialization requires no eigengap condition between the $r$ non-zero singular values.
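
For orientation only, the following sketch shows a generic Gaussian-mechanism style private spectral initialization for trace regression; the clipping threshold `clip` and noise multiplier `sigma` are hypothetical placeholders, and the paper's DP-initialization and sensitivity analysis are considerably sharper than this crude clipping argument.

```python
# Hedged sketch of a DP spectral initialization for y_i = <X_i, M> + noise:
# clip each term y_i * X_i, average, add Gaussian noise scaled to the resulting
# sensitivity, and truncate to rank r. Not the paper's algorithm.
import numpy as np

def dp_spectral_init(X, y, r, clip=1.0, sigma=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n, d1, d2 = X.shape
    terms = y[:, None, None] * X
    norms = np.linalg.norm(terms.reshape(n, -1), axis=1)
    terms = terms * np.minimum(1.0, clip / np.maximum(norms, 1e-12))[:, None, None]
    avg = terms.mean(axis=0)              # each clipped term has Frobenius norm <= clip,
    sens = 2.0 * clip / n                 # so replacing one sample moves avg by <= sens
    noisy = avg + sigma * sens * rng.standard_normal((d1, d2))
    U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]    # rank-r truncation of the noisy average
```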

We consider a class of symmetry hypothesis testing problems including testing isotropy on $\mathbb{R}^d$ and testing rotational symmetry on the hypersphere $\mathcal{S}^{d-1}$. For this class, we study the null and non-null behaviors of Sobolev tests, with emphasis on their consistency rates. Our main results show that: (i) Sobolev tests exhibit a detection threshold (see Bhattacharya, 2019, 2020) that does not depend only on the coefficients defining these tests; and (ii) tests with non-zero coefficients at odd (respectively, even) ranks only are blind to alternatives with angular functions whose $k$th-order derivatives at zero vanish for any odd (respectively, even) $k$. Our non-standard asymptotic results are illustrated with Monte Carlo exercises. A case study in astronomy applies the testing toolbox to evaluate the symmetry of orbits of long- and short-period comets.
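
A classical illustration of point (ii), separate from the paper's own Monte Carlo study: the Rayleigh test, i.e. the Sobolev test whose only non-zero coefficient is at rank 1, has no power against antipodally symmetric alternatives on the sphere, as the small simulation below suggests.

```python
# The Rayleigh test of uniformity on S^{d-1} is blind to antipodally symmetric
# alternatives, as the axial (bipolar) sample below illustrates.
import numpy as np
from scipy import stats

def rayleigh_pvalue(X):
    """X: (n, d) array of unit vectors; asymptotic chi^2_d p-value."""
    n, d = X.shape
    T = d * n * np.sum(X.mean(axis=0) ** 2)
    return stats.chi2.sf(T, df=d)

rng = np.random.default_rng(1)
n, d = 500, 3
Z = rng.standard_normal((n, d))
Z[:, 0] *= 4.0                                  # axial concentration along +/- e_1
X = Z / np.linalg.norm(Z, axis=1, keepdims=True)
print(rayleigh_pvalue(X))   # behaves as under the null despite strong non-uniformity
```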

Given a hypergraph $\mathcal{H}$, the dual hypergraph of $\mathcal{H}$ is the hypergraph of all minimal transversals of $\mathcal{H}$. The dual hypergraph is always Sperner, that is, no hyperedge contains another. A special case of Sperner hypergraphs is given by the conformal Sperner hypergraphs, which correspond to the families of maximal cliques of graphs. All these notions play an important role in many fields of mathematics and computer science, including combinatorics, algebra, and database theory. In this paper we study conformality of dual hypergraphs and prove several results related to the problem of recognizing this property. In particular, we show that the problem is in co-NP and can be solved in polynomial time for hypergraphs of bounded dimension. In the special case of dimension $3$, we reduce the problem to $2$-Satisfiability. Our approach has an implication in algorithmic graph theory: we obtain a polynomial-time algorithm for recognizing graphs in which all minimal transversals of maximal cliques have size at most $k$, for any fixed $k$.
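
For very small instances, the dual hypergraph can be computed by brute force, which is enough to experiment with Sperner-ness and conformality; the recognition algorithms of the paper are of course far more refined than this exponential enumeration.

```python
# Brute-force computation of the dual hypergraph (all minimal transversals);
# exponential in the number of vertices, for experimentation on tiny inputs only.
from itertools import combinations

def dual_hypergraph(vertices, edges):
    edges = [frozenset(e) for e in edges]
    minimal_transversals = []
    for k in range(len(vertices) + 1):            # enumerate by increasing size
        for T in map(frozenset, combinations(vertices, k)):
            hits_every_edge = all(T & e for e in edges)
            contains_smaller = any(t < T for t in minimal_transversals)
            if hits_every_edge and not contains_smaller:
                minimal_transversals.append(T)
    return minimal_transversals

# The triangle {12, 23, 13} is self-dual: its minimal transversals are again
# the three pairs, and both hypergraphs are Sperner.
print(dual_hypergraph([1, 2, 3], [{1, 2}, {2, 3}, {1, 3}]))
```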

For the Euler scheme applied to stochastic linear evolution equations, a discrete stochastic maximal $L^p$-regularity estimate is established, and a sharp error estimate in the norm $\|\cdot\|_{L^p((0,T)\times\Omega;L^q(\mathcal O))}$, $p,q \in [2,\infty)$, is derived via a duality argument.
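
A concrete instance of this setting, purely for illustration: a semi-implicit Euler discretization of the one-dimensional stochastic heat equation with space-time white noise. The estimates above are established in far greater generality, and this sketch is not tied to the paper's analysis.

```python
# Semi-implicit Euler scheme for du = u_xx dt + dW on (0,1) with Dirichlet
# boundary conditions, a prototypical stochastic linear evolution equation.
import numpy as np

def euler_stochastic_heat(u0, T, steps, rng):
    n = u0.size
    dt, dx = T / steps, 1.0 / (n + 1)
    # discrete Dirichlet Laplacian on the interior grid
    L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx ** 2
    M = np.linalg.inv(np.eye(n) - dt * L)                # implicit step for the linear part
    u = u0.astype(float)
    for _ in range(steps):
        dW = np.sqrt(dt / dx) * rng.standard_normal(n)   # space-time white noise increment
        u = M @ (u + dW)
    return u

rng = np.random.default_rng(2)
u = euler_stochastic_heat(np.zeros(127), T=0.1, steps=200, rng=rng)
```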

A graph $G$ is well-covered if all maximal independent sets are of the same cardinality. Let $w:V(G) \longrightarrow\mathbb{R}$ be a weight function. Then $G$ is $w$-well-covered if all maximal independent sets are of the same weight. An edge $xy \in E(G)$ is relating if there exists an independent set $S$ such that both $S \cup \{x\}$ and $S \cup \{y\}$ are maximal independent sets in the graph. If $xy$ is relating, then $w(x)=w(y)$ for every weight function $w$ such that $G$ is $w$-well-covered. Relating edges play an important role in investigating $w$-well-covered graphs. The decision problem of whether an edge in a graph is relating is NP-complete. We prove that the problem remains NP-complete when the input is restricted to graphs without cycles of length $6$. This is an unexpected result because recognizing relating edges is known to be polynomially solvable for graphs without cycles of lengths $4$ and $6$, graphs without cycles of lengths $5$ and $6$, and graphs without cycles of lengths $6$ and $7$. A graph $G$ belongs to the class $W_2$ if every two pairwise disjoint independent sets in $G$ are included in two pairwise disjoint maximum independent sets. It is known that if $G$ belongs to the class $W_2$, then it is well-covered. A vertex $v \in V(G)$ is shedding if for every independent set $S \subseteq V(G)-N[v]$, there exists a vertex $u \in N(v)$ such that $S \cup \{u\}$ is independent. Shedding vertices play an important role in studying the class $W_2$. Recognizing shedding vertices is co-NP-complete, even when the input is restricted to triangle-free graphs. We prove that this problem, too, is co-NP-complete for graphs without cycles of length $6$.
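
A brute-force check of the relating property on small graphs, written from the definition above, runs in exponential time (consistent with the NP-completeness results just mentioned) but is handy for experimentation.

```python
# Exhaustive test of whether an edge xy is relating: search for an independent
# set S such that both S + {x} and S + {y} are maximal independent sets.
from itertools import combinations

def is_independent(S, adj):
    return all(v not in adj[u] for u in S for v in S if u != v)

def is_maximal_independent(S, adj, vertices):
    return is_independent(S, adj) and all(
        any(u in adj[v] for u in S) for v in vertices if v not in S)

def is_relating(x, y, adj):
    vertices = list(adj)
    others = [v for v in vertices if v not in (x, y)]
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            if (is_maximal_independent(set(S) | {x}, adj, vertices)
                    and is_maximal_independent(set(S) | {y}, adj, vertices)):
                return True
    return False

# In a single edge ab, the empty set S works: {a} and {b} are both maximal.
print(is_relating('a', 'b', {'a': {'b'}, 'b': {'a'}}))   # True
```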

We consider the problem of approximating a function from $L^2$ by an element of a given $m$-dimensional space $V_m$, associated with some feature map $\varphi$, using evaluations of the function at random points $x_1,\dots,x_n$. After recalling some results on optimal weighted least-squares using independent and identically distributed points, we consider weighted least-squares using projection determinantal point processes (DPPs) or volume sampling. These distributions introduce dependence between the points that promotes diversity in the selected features $\varphi(x_i)$. We first provide a generalized version of volume-rescaled sampling yielding quasi-optimality results in expectation with a number of samples $n = O(m\log(m))$, meaning that the expected $L^2$ error is bounded by a constant times the best approximation error in $L^2$. Further assuming that the function belongs to some normed vector space $H$ continuously embedded in $L^2$, we prove that the approximation is almost surely bounded by the best approximation error measured in the $H$-norm. This includes the cases of functions from $L^\infty$ or from reproducing kernel Hilbert spaces. Finally, we present an alternative strategy consisting in using independent repetitions of projection DPP (or volume sampling), yielding similar error bounds as with i.i.d. or volume sampling, but in practice with a much smaller number of samples. Numerical experiments illustrate the performance of the different strategies.
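
A sketch of the i.i.d. baseline recalled above: weighted least-squares with points drawn from the inverse-Christoffel density of a Legendre polynomial space on $[-1,1]$ and weights $w(x_i)=m/\sum_j\varphi_j(x_i)^2$. The DPP and volume-sampling strategies of the paper require dedicated samplers and are not reproduced here.

```python
# Optimal weighted least-squares with i.i.d. points (Legendre space on [-1, 1]):
# sample from the density k_m(x)/m with k_m = sum_j phi_j^2 (rejection sampling,
# using k_m <= m^2 on [-1, 1]), then solve the weighted least-squares problem.
import numpy as np
from numpy.polynomial import legendre

def basis(x, m):
    # orthonormal Legendre basis for the uniform measure dx/2 on [-1, 1]
    return legendre.legvander(x, m - 1) * np.sqrt(2 * np.arange(m) + 1)

def sample_optimal(m, n, rng):
    pts = []
    while len(pts) < n:
        x = rng.uniform(-1.0, 1.0)
        k_m = (basis(np.array([x]), m) ** 2).sum()
        if rng.uniform(0.0, m ** 2) < k_m:        # accept with probability k_m / m^2
            pts.append(x)
    return np.array(pts)

def weighted_least_squares(f, m, n, rng):
    x = sample_optimal(m, n, rng)
    Phi = basis(x, m)
    w = m / (Phi ** 2).sum(axis=1)                # optimal weights m / k_m(x_i)
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * Phi, sw * f(x), rcond=None)
    return coef

rng = np.random.default_rng(3)
coef = weighted_least_squares(np.abs, m=8, n=200, rng=rng)   # approximate |x| in V_8
```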

For any positive integer $q\geq 2$ and any real number $\delta\in(0,1)$, let $\alpha_q(n,\delta n)$ denote the maximum size of a subset of $\mathbb{Z}_q^n$ with minimum Hamming distance at least $\delta n$, where $\mathbb{Z}_q=\{0,1,\dotsc,q-1\}$ and $n\in\mathbb{N}$. The asymptotic rate function is defined by $ R_q(\delta) = \limsup_{n\rightarrow\infty}\frac{1}{n}\log_q\alpha_q(n,\delta n).$ The famous $q$-ary asymptotic Gilbert-Varshamov bound, obtained in the 1950s, states that \[ R_q(\delta) \geq 1 - \delta\log_q(q-1)-\delta\log_q\frac{1}{\delta}-(1-\delta)\log_q\frac{1}{1-\delta} \stackrel{\mathrm{def}}{=}R_\mathrm{GV}(\delta,q) \] for all positive integers $q\geq 2$ and $0<\delta<1-q^{-1}$. In the case that $q$ is an even power of a prime with $q\geq 49$, the $q$-ary Gilbert-Varshamov bound was first improved by using algebraic geometry codes in the works of Tsfasman, Vladut, and Zink and of Ihara in the 1980s. These algebraic geometry codes have been modified to improve the $q$-ary Gilbert-Varshamov bound $R_\mathrm{GV}(\delta,q)$ at a specific tangent point $\delta=\delta_0\in (0,1)$ of the curve $R_\mathrm{GV}(\delta,q)$ for each given integer $q\geq 46$. However, the $q$-ary Gilbert-Varshamov bound $R_\mathrm{GV}(\delta,q)$ at $\delta=1/2$, i.e., $R_\mathrm{GV}(1/2,q)$, remains the largest known lower bound of $R_q(1/2)$ for infinitely many positive integers $q$, including generic primes and generic non-prime-power integers. In this paper, by using codes from the geometry of numbers introduced by Lenstra in the 1980s, we prove that the $q$-ary Gilbert-Varshamov bound $R_\mathrm{GV}(\delta,q)$ with $\delta\in(0,1)$ can be improved for all but finitely many positive integers $q$. It is shown that the growth rate defined by $\eta(\delta)= \liminf_{q\rightarrow\infty}\frac{1}{\log q}\log[1-\delta-R_q(\delta)]^{-1}$ in fact admits a nontrivial lower bound for every $\delta\in(0,1)$.
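
For reference, the Gilbert-Varshamov bound quoted above is straightforward to evaluate numerically:

```python
# Direct evaluation of the q-ary Gilbert-Varshamov bound R_GV(delta, q).
import numpy as np

def R_GV(delta, q):
    assert 0 < delta < 1 - 1 / q
    log_q = lambda t: np.log(t) / np.log(q)
    return (1.0 - delta * log_q(q - 1)
            - delta * log_q(1 / delta)
            - (1 - delta) * log_q(1 / (1 - delta)))

print(R_GV(0.5, 46))   # the value R_GV(1/2, q) discussed above, here for q = 46
```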

We study the equational theory of the Weihrauch lattice with multiplication, that is, the collection of equations between terms built from variables, the lattice operations $\sqcup$ and $\sqcap$, the product $\times$, and the finite parallelization $(-)^*$, which are true however we substitute Weihrauch degrees for the variables. We provide a combinatorial description of these equations in terms of a reducibility between finite graphs, and moreover show that deciding which equations are true in this sense is complete for the third level of the polynomial hierarchy.
