
Let $\mathcal{M}$ be a smooth $d$-dimensional submanifold of $\mathbb{R}^N$ with boundary, equipped with the Euclidean (chordal) metric, and choose $m \leq N$. In this paper we consider the probability that a random matrix $A \in \mathbb{R}^{m \times N}$ will serve as a bi-Lipschitz function $A: \mathcal{M} \rightarrow \mathbb{R}^m$ with bi-Lipschitz constants close to one for three different types of distributions on the $m \times N$ matrices $A$, including two whose realizations are guaranteed to have fast matrix-vector multiplies. In doing so we generalize prior randomized metric space embedding results of this type for submanifolds of $\mathbb{R}^N$ by allowing for the presence of boundary while also retaining, and in some cases improving, prior lower bounds on the achievable embedding dimensions $m$ for which one can expect small distortion with high probability. In particular, motivated by recent modewise embedding constructions for tensor data, herein we present a new class of highly structured distributions on matrices which outperform prior structured matrix distributions for embedding sufficiently low-dimensional submanifolds of $\mathbb{R}^N$ (with $d \lesssim \sqrt{N}$) with respect to both achievable embedding dimension and computational efficiency of their realizations. As a consequence we are able to present, for example, a general new class of Johnson-Lindenstrauss embedding matrices for $\mathcal{O}(\log^c N)$-dimensional submanifolds of $\mathbb{R}^N$ which enjoy $\mathcal{O}(N \log (\log N))$-time matrix-vector multiplications.
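
The abstract does not spell out the new structured distributions, but the flavor of a fast-matvec embedding can be illustrated with a classical construction of the same kind: the subsampled randomized Hadamard transform (SRHT). Below is a minimal numpy sketch; the sizes and the test manifold (a circle in $\mathbb{R}^N$) are illustrative choices, not the paper's construction.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (length must be a power of 2), orthonormalized."""
    x = x.copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x / np.sqrt(len(x))

def srht_embed(points, m, rng):
    """Apply A = sqrt(N/m) * P H D: random signs, Hadamard mixing, row subsampling."""
    N = points.shape[1]
    signs = rng.choice([-1.0, 1.0], size=N)        # D: random sign flips
    rows = rng.choice(N, size=m, replace=False)    # P: uniform row subsampling
    mixed = np.array([fwht(signs * p) for p in points])
    return np.sqrt(N / m) * mixed[:, rows]

# Toy check: pairwise distances on points from a 1-dimensional manifold in R^N.
rng = np.random.default_rng(0)
N, m, n_pts = 256, 64, 200
t = rng.uniform(0, 2 * np.pi, n_pts)
X = np.zeros((n_pts, N))
X[:, 0], X[:, 1] = np.cos(t), np.sin(t)            # circle embedded in R^N
Y = srht_embed(X, m, rng)
i, j = rng.choice(n_pts, 2, replace=False)
print(np.linalg.norm(X[i] - X[j]), np.linalg.norm(Y[i] - Y[j]))  # nearly equal
```

Since the Hadamard mixing is performed with a fast transform, a single matrix-vector multiply costs $\mathcal{O}(N \log N)$ rather than $\mathcal{O}(mN)$; the paper's construction improves on this further for sufficiently low-dimensional manifolds.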

Related content


Solving the time-dependent Schr\"odinger equation is an important application area for quantum algorithms. We consider Schr\"odinger's equation in the semi-classical regime. Here the solutions exhibit strong multiple-scale behavior due to a small parameter $\hbar$, in the sense that the dynamics of the quantum states and the induced observables can occur on different spatial and temporal scales. Such a Schr\"odinger equation finds many applications, including in Born-Oppenheimer molecular dynamics and Ehrenfest dynamics. This paper considers quantum analogues of pseudo-spectral (PS) methods on classical computers. Estimates on the gate counts in terms of $\hbar$ and the precision $\varepsilon$ are obtained. It is found that the number of required qubits, $m$, scales only logarithmically with respect to $\hbar$. When the solution has bounded derivatives up to order $\ell$, the symmetric Trotter splitting method has gate complexity $\mathcal{O}\Big({ (\varepsilon \hbar)^{-\frac12} \mathrm{polylog}(\varepsilon^{-\frac{3}{2\ell}} \hbar^{-1-\frac{1}{2\ell}})}\Big),$ provided that the diagonal unitary operators in the pseudo-spectral methods can be implemented with $\mathrm{poly}(m)$ operations. When physical observables are the desired outcomes, however, the step size in the time integration can be chosen independently of $\hbar$. The gate complexity in this case is reduced to $\mathcal{O}\Big({\varepsilon^{-\frac12} \mathrm{polylog}( \varepsilon^{-\frac3{2\ell}} \hbar^{-1} )}\Big),$ with $\ell$ again indicating the smoothness of the solution.
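
For orientation, the classical pseudo-spectral scheme whose quantum analogue is studied here is the split-step Fourier method with symmetric Trotter (Strang) splitting: the potential term acts diagonally in space, the kinetic term diagonally in frequency. A minimal classical sketch, with an illustrative periodic potential, grid, and $\hbar$ (not taken from the paper):

```python
import numpy as np

# Split-step solver for i*hbar*psi_t = -(hbar^2/2)*psi_xx + V(x)*psi on [0, 2*pi).
hbar, L, n, dt, steps = 0.05, 2 * np.pi, 512, 1e-3, 1000
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
V = 1.0 - np.cos(x)                                  # smooth periodic potential
psi = np.exp(-(x - np.pi) ** 2 / (2 * hbar)).astype(complex)
psi /= np.linalg.norm(psi)                           # semiclassical wave packet

half_V = np.exp(-0.5j * dt * V / hbar)               # potential: diagonal in space
kin = np.exp(-0.5j * dt * hbar * k ** 2)             # kinetic: diagonal in frequency

for _ in range(steps):                               # symmetric (Strang) splitting
    psi = half_V * psi
    psi = np.fft.ifft(kin * np.fft.fft(psi))
    psi = half_V * psi

print(np.linalg.norm(psi))                           # unitarity check: ~1.0
```

The two pointwise exponentials are exactly the diagonal unitary operators referenced in the gate-count assumption; on a quantum computer the FFT is replaced by the quantum Fourier transform.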

Consider $n$ iid real-valued random vectors of size $k$ having iid coordinates with a general distribution function $F$. A vector is a maximum if and only if there is no other vector in the sample which weakly dominates it in all coordinates. Let $p_{k,n}$ be the probability that the first vector is a maximum. The main result of the present paper is that if $k\equiv k_n$ grows more slowly (resp. faster) than a certain multiple of $\log(n)$, then $p_{k,n} \rightarrow 0$ (resp. $p_{k,n}\rightarrow1$) as $n\to\infty$. Furthermore, the multiple is fully characterized as a functional of $F$. We also study the effect of $F$ on $p_{k,n}$, showing that while $p_{k,n}$ may be highly affected by the choice of $F$, the phase transition is the same for all distribution functions up to a constant factor.
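
The phase transition is easy to observe numerically. A quick Monte Carlo sketch of $p_{k,n}$, taking $F$ to be the standard normal distribution purely for illustration:

```python
import numpy as np

def p_max_estimate(k, n, trials, rng):
    """Monte Carlo estimate of p_{k,n}: the probability that the first of n iid
    vectors in R^k (iid coordinates) is weakly dominated by no other vector."""
    hits = 0
    for _ in range(trials):
        sample = rng.standard_normal((n, k))          # F = standard normal here
        first, rest = sample[0], sample[1:]
        dominated = np.any(np.all(rest >= first, axis=1))
        hits += not dominated
    return hits / trials

rng = np.random.default_rng(1)
n = 1000
for k in (1, 2, 5, 10, 20):
    print(k, p_max_estimate(k, n, 1000, rng))         # climbs from ~1/n toward 1
```

For $k=1$ the estimate is near $1/n$ (only the largest coordinate wins), while for $k$ of order $\log(n)$ and beyond the first vector is almost surely undominated.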

In this paper, we mainly study quaternary linear codes and their binary subfield codes. First we obtain a general explicit relationship between quaternary linear codes and their binary subfield codes in terms of generator matrices and defining sets. Second, we construct quaternary linear codes via simplicial complexes and determine the weight distributions of these codes. Third, the weight distributions of the binary subfield codes of these quaternary codes are also computed by employing the general characterization. Furthermore, we present two infinite families of optimal linear codes with respect to the Griesmer bound, and a class of almost optimal binary codes with respect to the sphere-packing bound. We emphasize that we obtain at least 9 new quaternary linear codes.
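
For readers unfamiliar with weight distributions over $\mathbb{F}_4$, the sketch below brute-forces one for a small quaternary code; the generator matrix is an arbitrary illustrative example, not one of the paper's constructions.

```python
import itertools
from collections import Counter

# GF(4) = {0, 1, w, w+1} encoded as 0..3; addition is XOR (characteristic 2),
# multiplication via a lookup table built from w^2 = w + 1.
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]

def weight_distribution(G):
    """Enumerate all GF(4)-linear combinations of the rows of G and count
    Hamming weights. Fine for small codes; exponential in the dimension."""
    k, n = len(G), len(G[0])
    dist = Counter()
    for coeffs in itertools.product(range(4), repeat=k):
        word = [0] * n
        for c, row in zip(coeffs, G):
            for i, g in enumerate(row):
                word[i] ^= MUL[c][g]                  # addition = XOR in char. 2
        dist[sum(x != 0 for x in word)] += 1
    return dict(sorted(dist.items()))

# Illustrative [4, 2] generator matrix over GF(4).
G = [[1, 0, 1, 2],
     [0, 1, 3, 1]]
print(weight_distribution(G))                          # weight -> codeword count
```

The binary subfield code is obtained by keeping exactly the codewords with all coordinates in $\{0,1\} \subset \mathbb{F}_4$; the paper's contribution is an explicit generator-matrix-level description of this passage.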

We study a fourth-order div problem and its approximation by the discontinuous Petrov-Galerkin method with optimal test functions. We present two variants, based on first and second-order systems. In both cases we prove well-posedness of the formulation and quasi-optimal convergence of the approximation. Our analysis includes the fully-discrete schemes with approximated test functions, for general dimension and polynomial degree in the first-order case, and for two dimensions and lowest-order approximation in the second-order case. Numerical results illustrate the performance for quasi-uniform and adaptively refined meshes.

The purpose of this article is to develop machinery to study the capacity of deep neural networks (DNNs) to approximate high-dimensional functions. In particular, we show that DNNs have the expressive power to overcome the curse of dimensionality in the approximation of a large class of functions. More precisely, we prove that these functions can be approximated by DNNs on compact sets such that the number of parameters necessary to represent the approximating DNNs grows at most polynomially in the reciprocal $1/\varepsilon$ of the approximation accuracy $\varepsilon>0$ and in the input dimension $d\in \mathbb{N} =\{1,2,3,\dots\}$. To this end, we introduce certain approximation spaces, consisting of sequences of functions that can be efficiently approximated by DNNs. We then establish closure properties which we combine with known and new bounds on the number of parameters necessary to approximate locally Lipschitz continuous functions, maximum functions, and product functions by DNNs. The main result of this article demonstrates that DNNs have sufficient expressiveness to approximate certain sequences of functions which can be constructed by means of a finite number of compositions using locally Lipschitz continuous functions, maxima, and products without the curse of dimensionality.
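
One classical ingredient behind such parameter counts, useful for intuition: the maximum of $d$ numbers is computed exactly (not merely approximately) by a ReLU network with $\mathcal{O}(d)$ units, via $\max(a,b) = \tfrac12(a+b+|a-b|)$ and $|t| = \mathrm{ReLU}(t) + \mathrm{ReLU}(-t)$. A short numpy sketch of this well-known construction:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def relu_max2(a, b):
    """Exact pairwise maximum using only ReLU and affine maps:
    max(a, b) = (a + b + |a - b|) / 2 with |t| = relu(t) + relu(-t)."""
    return 0.5 * (a + b + relu(a - b) + relu(b - a))

def relu_max(x):
    """Binary tree of relu_max2 gates: O(d) units, depth O(log d)."""
    x = list(x)
    while len(x) > 1:
        pairs = [relu_max2(x[i], x[i + 1]) for i in range(0, len(x) - 1, 2)]
        x = pairs + ([x[-1]] if len(x) % 2 else [])
    return x[0]

v = np.random.default_rng(2).standard_normal(13)
assert np.isclose(relu_max(v), v.max())               # exact, no approximation error
```

The approximation spaces of the article package such size bounds for maxima, products, and locally Lipschitz maps into closure properties, so that finite compositions inherit polynomial parameter counts in $d$ and $1/\varepsilon$.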

In this work we develop a novel fully discrete version of the plates complex, an exact Hilbert complex relevant for the mixed formulation of fourth-order problems. The derivation of the discrete complex follows the discrete de Rham paradigm, leading to an arbitrary-order construction that applies to meshes composed of general polygonal elements. The discrete plates complex is then used to derive a novel numerical scheme for Kirchhoff--Love plates, for which a full stability and convergence analysis is performed. Extensive numerical tests complete the exposition.

Let $|\cdot|:\mathbb{R}^d \to [0,\infty) $ be a $1$-homogeneous continuous map and let $\mathcal{T}=\mathbb{R}^l$ or $\mathcal{T}=\mathbb{Z}^l$ with $d,l$ positive integers. For a given $\mathbb{R}^d$-valued random field (rf) $Z(t),t\in \mathcal{T}$, which satisfies $\mathbb{E}\{ |Z(t)|^\alpha\} \in [0,\infty)$ for all $t\in \mathcal{T}$ and some $\alpha>0$ we define a class of rf's $\mathcal{K}^+_\alpha[Z]$ related to $Z$ via certain functional identities. In the case $\mathcal{T}=\mathbb{R}^l$ the elements of $\mathcal{K}^+_\alpha[Z]$ are assumed to be quadrant stochastically continuous. If $B^h Z \in \mathcal{K}^+_\alpha[Z]$ for any $h\in \mathcal{T}$ with $B^h Z(\cdot)= Z(\cdot -h), h\in \mathcal{T}$, we call $\mathcal{K}^+_\alpha[Z]$ shift-invariant. This paper is concerned with the basic properties of shift-invariant $\mathcal{K}^+_\alpha[Z]$'s. In particular, we discuss functional equations that characterise the shift-invariance and relate it with spectral tail and tail rf's introduced in this article for our general settings. Further, we investigate the class of universal maps $\mathbb{U}$, which is of particular interest for shift-representations. Two applications of our findings concern max-stable rf's and their extremal indices.

The aim of noisy phase retrieval is to estimate a signal $\mathbf{x}_0\in \mathbb{C}^d$ from $m$ noisy intensity measurements $b_j=\left\lvert \langle \mathbf{a}_j,\mathbf{x}_0 \rangle \right\rvert^2+\eta_j, \; j=1,\ldots,m$, where $\mathbf{a}_j \in \mathbb{C}^d$ are known measurement vectors and $\eta=(\eta_1,\ldots,\eta_m)^\top \in \mathbb{R}^m$ is a noise vector. A commonly used model for estimating $\mathbf{x}_0$ is the intensity-based model $\widehat{\mathbf{x}}:=\mbox{argmin}_{\mathbf{x} \in \mathbb{C}^d} \sum_{j=1}^m \big(\left\lvert \langle \mathbf{a}_j,\mathbf{x} \rangle \right\rvert^2-b_j \big)^2$. Although many efficient algorithms have been developed to solve the intensity-based model, very few results on its estimation performance are available. In this paper, we focus on the estimation performance of the intensity-based model and prove that the error bound satisfies $\min_{\theta\in \mathbb{R}}\|\widehat{\mathbf{x}}-e^{i\theta}\mathbf{x}_0\|_2 \lesssim \min\Big\{\frac{\sqrt{\|\eta\|_2}}{{m}^{1/4}}, \frac{\|\eta\|_2}{\| \mathbf{x}_0\|_2 \cdot \sqrt{m}}\Big\}$ under the assumption that $m \gtrsim d$ and that $\mathbf{a}_j, j=1,\ldots,m,$ are Gaussian random vectors. We also show that the error bound is sharp. For the case where $\mathbf{x}_0$ is an $s$-sparse signal, we present a similar result under the assumption that $m \gtrsim s \log (ed/s)$. To the best of our knowledge, our results are the first theoretical guarantees for the intensity-based model and its sparse version. Our proofs employ Mendelson's small ball method, which can deliver an effective lower bound on a nonnegative empirical process.
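
The intensity-based model is typically attacked with spectral initialization followed by (Wirtinger) gradient descent. The numpy sketch below illustrates the error being bounded, $\min_\theta \|\widehat{\mathbf{x}} - e^{i\theta}\mathbf{x}_0\|_2$, on Gaussian measurements; the step size, iteration count, and noise level are illustrative choices, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, sigma = 32, 8 * 32, 0.05
x0 = rng.standard_normal(d) + 1j * rng.standard_normal(d)
A = (rng.standard_normal((m, d)) + 1j * rng.standard_normal((m, d))) / np.sqrt(2)
b = np.abs(A @ x0) ** 2 + sigma * rng.standard_normal(m)   # noisy intensities

# Spectral initialization: leading eigenvector of (1/m) * sum_j b_j conj(a_j) a_j^T.
Y = (A.conj().T * b) @ A / m
_, V = np.linalg.eigh(Y)
x = V[:, -1] * np.sqrt(max(b.mean(), 0.0))                  # direction + norm guess

# (Wirtinger) gradient descent on the intensity-based least-squares objective.
step = 0.1 / np.linalg.norm(x0) ** 2
for _ in range(500):
    r = np.abs(A @ x) ** 2 - b
    grad = A.conj().T @ (r * (A @ x)) / m
    x = x - step * grad

def phase_dist(xhat, xtrue):
    """min over theta of ||xhat - e^{i*theta} * xtrue||_2 (global phase ambiguity)."""
    theta = -np.angle(np.vdot(xhat, xtrue))
    return np.linalg.norm(xhat - np.exp(1j * theta) * xtrue)

print(phase_dist(x, x0) / np.linalg.norm(x0))               # small relative error
```

The phase alignment in `phase_dist` is exactly the minimization over $\theta$ in the error bound: the optimum is attained at $\theta = -\arg\langle \widehat{\mathbf{x}}, \mathbf{x}_0\rangle$.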

Sampling methods (e.g., node-wise, layer-wise, or subgraph sampling) have become an indispensable strategy for speeding up the training of large-scale Graph Neural Networks (GNNs). However, existing sampling methods are mostly based on graph structural information and ignore the dynamics of optimization, which leads to high variance in estimating the stochastic gradients. The high-variance issue can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization. In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method can be decomposed into \textit{embedding approximation variance} in the forward stage and \textit{stochastic gradient variance} in the backward stage, and that both types of variance must be mitigated to obtain a faster convergence rate. We propose a decoupled variance reduction strategy that employs (approximate) gradient information to adaptively sample nodes with minimal variance, and that explicitly reduces the variance introduced by embedding approximation. We show theoretically and empirically that the proposed method, even with smaller mini-batch sizes, enjoys a faster convergence rate and achieves better generalization than existing methods.
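
The forward-stage term of this decomposition is easy to see in isolation: sub-sampling a node's neighborhood turns the layer's aggregated embedding into a random variable centered on the exact one. A toy numpy sketch of this embedding approximation variance (the paper's decoupled strategy additionally targets the backward-stage gradient variance, which this sketch does not model; graph, features, and fanout are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)
n, f, deg, fanout, trials = 200, 16, 20, 5, 2000
X = rng.standard_normal((n, f))                        # node features
neigh = [rng.choice(n, size=deg, replace=False) for _ in range(n)]

def full_agg(v):
    """Exact one-layer mean aggregation over the full neighborhood of v."""
    return X[neigh[v]].mean(axis=0)

def sampled_agg(v):
    """Node-wise sampled estimate: mean over 'fanout' sampled neighbors."""
    idx = rng.choice(neigh[v], size=fanout, replace=False)
    return X[idx].mean(axis=0)

v = 0
exact = full_agg(v)
samples = np.array([sampled_agg(v) for _ in range(trials)])
bias = np.linalg.norm(samples.mean(axis=0) - exact)
var = samples.var(axis=0).sum()
print(f"bias ~ {bias:.3f}, embedding-approximation variance ~ {var:.3f}")
```

The estimator is unbiased (bias near zero) but has nonzero variance, which compounds across layers; this is the quantity the proposed strategy reduces explicitly while using gradient information to control the backward-stage term.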

Although pretrained Transformers such as BERT achieve high accuracy on in-distribution examples, do they generalize to new distributions? We systematically measure out-of-distribution (OOD) generalization for various NLP tasks by constructing a new robustness benchmark with realistic distribution shifts. We measure the generalization of previous models including bag-of-words models, ConvNets, and LSTMs, and we show that pretrained Transformers' performance declines are substantially smaller. Pretrained Transformers are also more effective at detecting anomalous or OOD examples, while many previous models are frequently worse than chance. We examine which factors affect robustness, finding that larger models are not necessarily more robust, distillation can be harmful, and more diverse pretraining data can enhance robustness. Finally, we show where future work can improve OOD robustness.
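
A standard score for OOD-detection experiments of this kind is the maximum softmax probability (MSP) baseline. A self-contained numpy sketch that computes MSP scores and a rank-based AUROC on synthetic logits (the logits merely stand in for real model outputs; the paper's benchmark is not reproduced here):

```python
import numpy as np

def msp(logits):
    """Maximum softmax probability: higher means 'more in-distribution'."""
    z = logits - logits.max(axis=1, keepdims=True)     # stabilize the softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

def auroc(scores_in, scores_out):
    """AUROC for separating in-distribution from OOD via the rank-sum
    (Mann-Whitney U) formula; 0.5 is chance, 1.0 is perfect."""
    s = np.concatenate([scores_in, scores_out])
    ranks = s.argsort().argsort() + 1
    n_in, n_out = len(scores_in), len(scores_out)
    r_in = ranks[:n_in].sum()
    return (r_in - n_in * (n_in + 1) / 2) / (n_in * n_out)

# Synthetic logits: confident on in-distribution inputs, diffuse on OOD inputs.
rng = np.random.default_rng(5)
logits_in = rng.standard_normal((1000, 10)) + 3 * np.eye(10)[rng.integers(10, size=1000)]
logits_out = rng.standard_normal((1000, 10))
print(auroc(msp(logits_in), msp(logits_out)))          # well above 0.5
```

"Worse than chance" in the abstract corresponds to an AUROC below 0.5 under a score like this one: the model is systematically more confident on anomalous inputs than on in-distribution ones.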
