
The numerical solution of dynamical systems with memory requires the efficient evaluation of Volterra integral operators in an evolutionary manner. After appropriate discretisation, the basic problem can be represented as a matrix-vector product with a lower triangular but densely populated matrix. For typical applications, such as fractional diffusion or large-scale dynamical systems with delay, the memory cost of storing the matrix approximations and the complete history of the data would then become prohibitive for an accurate numerical approximation. For Volterra integral operators of convolution type, the \emph{fast and oblivious convolution quadrature} method of Sch\"adle, Lopez-Fernandez, and Lubich allows one to compute the discretized evaluation with $N$ time steps in $O(N \log N)$ complexity, requiring only $O(\log N)$ active memory to store a compressed version of the complete history of the data. We show that this algorithm can be interpreted as an $\mathcal{H}$-matrix approximation of the underlying integral operator and that, consequently, a further improvement can in principle be achieved by resorting to $\mathcal{H}^2$-matrix compression techniques. We formulate a variant of the $\mathcal{H}^2$-matrix-vector product for discretized Volterra integral operators that can be performed in an evolutionary and oblivious manner and requires only $O(N)$ operations and $O(\log N)$ active memory. In addition to the acceleration, more general asymptotically smooth kernels can be treated, and the algorithm does not require a priori knowledge of the number of time steps. The efficiency of the proposed method is demonstrated by application to some typical test problems.
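As a point of reference, here is a minimal sketch (function names and the test kernel are illustrative, not from the paper) of the naive evolutionary evaluation that the fast and oblivious methods improve on: each step sums over the full stored history, which is exactly the lower triangular, densely populated matrix-vector product described above.

```python
import numpy as np

def volterra_matvec_naive(kernel, f, dt):
    """Naive evolutionary evaluation of a discretized Volterra
    convolution y_n = dt * sum_{j=0}^{n} k((n-j)*dt) * f_j.
    Cost: O(N^2) work and O(N) memory for the full history --
    the baseline that fast/oblivious quadrature improves on."""
    N = len(f)
    y = np.empty(N)
    for n in range(N):
        # one row of the lower triangular, densely populated matrix
        y[n] = dt * sum(kernel((n - j) * dt) * f[j] for j in range(n + 1))
    return y

# toy check: the kernel k(t) = 1 turns the operator into a running sum
dt = 0.1
f = np.ones(50)
y = volterra_matvec_naive(lambda t: 1.0, f, dt)
```

The hierarchical methods replace the dense history sum by compressed low-rank blocks, which is what brings the cost down to $O(N \log N)$ or $O(N)$.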


The matrix normal model, the family of Gaussian matrix-variate distributions whose covariance matrix is the Kronecker product of two lower dimensional factors, is frequently used to model matrix-variate data. The tensor normal model generalizes this family to Kronecker products of three or more factors. We study the estimation of the Kronecker factors of the covariance matrix in the matrix and tensor models. We show nonasymptotic bounds for the error achieved by the maximum likelihood estimator (MLE) in several natural metrics. In contrast to existing bounds, our results do not rely on the factors being well-conditioned or sparse. For the matrix normal model, all our bounds are minimax optimal up to logarithmic factors, and for the tensor normal model our bounds for the largest factor and for the overall covariance matrix are minimax optimal up to constant factors provided there are enough samples for any estimator to obtain constant Frobenius error. In the same regimes as our sample complexity bounds, we show that an iterative procedure to compute the MLE known as the flip-flop algorithm converges linearly with high probability. Our main tool is geodesic strong convexity in the geometry on positive-definite matrices induced by the Fisher information metric. This strong convexity is determined by the expansion of certain random quantum channels. We also provide numerical evidence that combining the flip-flop algorithm with a simple shrinkage estimator can improve performance in the undersampled regime.
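The flip-flop iteration mentioned above can be sketched as follows. This is a generic version for the matrix normal model; the trace normalization used to fix the usual Kronecker scale ambiguity is a convention chosen here for illustration, not necessarily the paper's.

```python
import numpy as np

def flip_flop(samples, iters=50):
    """Flip-flop iteration for the matrix normal MLE (sketch).
    samples: array of shape (N, n, p). Returns Kronecker factors
    (A, B) with Cov(vec X) ~ kron(B, A), the scale ambiguity fixed
    by normalizing trace(B) = p."""
    N, n, p = samples.shape
    A, B = np.eye(n), np.eye(p)
    for _ in range(iters):
        # alternate the two conditional MLE updates
        Binv = np.linalg.inv(B)
        A = sum(X @ Binv @ X.T for X in samples) / (N * p)
        Ainv = np.linalg.inv(A)
        B = sum(X.T @ Ainv @ X for X in samples) / (N * n)
        B *= p / np.trace(B)  # resolve the scale ambiguity
    return A, B

# illustrative recovery test: Cov(vec X) = kron(I, A0)
rng = np.random.default_rng(0)
A0 = np.diag([1.0, 2.0, 3.0])
L = np.linalg.cholesky(A0)
samples = np.array([L @ rng.standard_normal((3, 2)) for _ in range(400)])
A, B = flip_flop(samples)
```

Each half-step solves the MLE for one factor with the other held fixed, which is why the procedure converges quickly once enough samples are available.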

We study the spectral convergence of a symmetrized Graph Laplacian matrix induced by a Gaussian kernel evaluated on pairs of embedded data, sampled from a manifold with boundary, a sub-manifold of $\mathbb{R}^m$. Specifically, we deduce the convergence rates for eigenpairs of the discrete Graph-Laplacian matrix to the eigensolutions of the Laplace-Beltrami operator that are well-defined on manifolds with boundary, including the homogeneous Neumann and Dirichlet boundary conditions. For the Dirichlet problem, we deduce the convergence of the \emph{truncated Graph Laplacian}, which has recently been observed numerically in applications, and provide a detailed numerical investigation on simple manifolds. Our method of proof relies on the min-max argument over a compact and symmetric integral operator, leveraging the RKHS theory for the spectral convergence of integral operators and a recent pointwise asymptotic result of a Gaussian kernel integral operator on manifolds with boundary.
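As an illustration of the discrete operator in question, here is a minimal sketch of a symmetrized Graph Laplacian built from a Gaussian kernel; the sampled manifold (a circle, which has no boundary), the bandwidth, and the normalization below are illustrative choices, not the paper's setup.

```python
import numpy as np

def symmetrized_graph_laplacian(X, eps):
    """Symmetrized (normalized) graph Laplacian from a Gaussian
    kernel on pairwise distances of embedded data X (N x m). The
    symmetric normalization keeps the matrix symmetric, so its
    eigenvalues are real and a min-max argument applies."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (4 * eps))
    d_inv_sqrt = 1.0 / np.sqrt(K.sum(axis=1))
    return np.eye(len(X)) - d_inv_sqrt[:, None] * K * d_inv_sqrt[None, :]

# points sampled uniformly on the unit circle embedded in R^2
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.column_stack([np.cos(t), np.sin(t)])
L = symmetrized_graph_laplacian(X, eps=0.01)
evals = np.sort(np.linalg.eigvalsh(L))
```

On the circle the spectrum shows the expected structure: a zero eigenvalue for the constant mode and doubly degenerate nonzero eigenvalues, mirroring the Laplace-Beltrami eigensolutions $\sin(k\theta), \cos(k\theta)$.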

This paper studies the $\tau$-coherence of an $(n \times p)$ observation matrix in a Gaussian framework. The $\tau$-coherence is defined as the largest magnitude, outside a diagonal band of width $\tau$, of the empirical correlation coefficients associated with our observations. Using the Chen-Stein method we derive the limiting law of the normalized coherence and show convergence towards a Gumbel distribution, generalizing the results of Cai and Jiang [CJ11a]. We assume that the covariance matrix of the model is banded. Moreover, we provide numerical considerations highlighting issues arising from the high-dimension hypotheses. We numerically illustrate the asymptotic behaviour of the coherence with Monte Carlo experiments, using an HPC splitting strategy for high-dimensional correlation matrices.
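A direct sketch of the statistic defined above; the band convention (entries with column indices differing by more than $\tau$) follows the text, while the data and function name are illustrative.

```python
import numpy as np

def tau_coherence(X, tau):
    """tau-coherence of an (n x p) observation matrix X: the largest
    magnitude of an empirical correlation coefficient lying outside
    a diagonal band of half-width tau."""
    R = np.corrcoef(X, rowvar=False)   # p x p empirical correlations
    p = R.shape[0]
    i, j = np.indices((p, p))
    mask = np.abs(i - j) > tau         # entries outside the band
    return np.max(np.abs(R[mask]))

# illustrative data: n = 200 observations of p = 50 variables
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
c0 = tau_coherence(X, tau=0)           # classical coherence
c5 = tau_coherence(X, tau=5)           # discard a band of half-width 5
```

Since enlarging the band can only remove candidates from the maximum, the $\tau$-coherence is nonincreasing in $\tau$, which the two values above illustrate.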

In this short report, we present a simple yet effective approach to editing real images via generative adversarial networks (GANs). Unlike previous techniques, which treat every editing task as an operation affecting pixel values in the entire image, our approach cuts the image into a set of smaller segments. For those segments, the corresponding latent codes of a generative network can be estimated with greater accuracy due to the lower number of constraints. When the codes are altered by the user, the content of the image is manipulated locally while the rest remains unaffected. Thanks to this property, the final edited image better retains the original structures and thus preserves a natural look.
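The local-editing property can be illustrated independently of any particular GAN: once a segment has been re-synthesized from its altered latent code, compositing it back leaves all other pixels untouched. A minimal sketch, where the edited segment is just a stand-in array for a generator output:

```python
import numpy as np

def composite_local_edit(image, mask, edited_segment):
    """Paste an edited segment back into the original image. In the
    approach above, `edited_segment` would be decoded from a
    user-altered latent code for that segment; here it is any array
    of the same shape. Pixels outside `mask` are untouched, which is
    what preserves the original structures."""
    out = image.copy()
    out[mask] = edited_segment[mask]
    return out

# toy example: a 4x4 RGB image with a 2x2 edited region
img = np.zeros((4, 4, 3))
edited = np.ones((4, 4, 3))
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
out = composite_local_edit(img, mask, edited)
```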

There is an increasing realization that algorithmic inductive biases are central in preventing overfitting; empirically, we often see a benign overfitting phenomenon in overparameterized settings for natural learning algorithms, such as stochastic gradient descent (SGD), where little to no explicit regularization has been employed. This work considers this issue in arguably the most basic setting: constant-stepsize SGD (with iterate averaging or tail averaging) for linear regression in the overparameterized regime. Our main result provides a sharp excess risk bound, stated in terms of the full eigenspectrum of the data covariance matrix, that reveals a bias-variance decomposition characterizing when generalization is possible: (i) the variance bound is characterized in terms of an effective dimension (specific for SGD) and (ii) the bias bound provides a sharp geometric characterization in terms of the location of the initial iterate (and how it aligns with the data covariance matrix). More specifically, for SGD with iterate averaging, we demonstrate the sharpness of the established excess risk bound by proving a matching lower bound (up to constant factors). For SGD with tail averaging, we show its advantage over SGD with iterate averaging by proving a better excess risk bound together with a nearly matching lower bound. Moreover, we reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares (minimum-norm interpolation) and ridge regression. Experimental results on synthetic data corroborate our theoretical findings.
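A minimal sketch of the algorithmic setup discussed above: constant-stepsize SGD for unregularized least squares with tail averaging. The step size, tail fraction, and test problem are illustrative choices, not taken from the paper.

```python
import numpy as np

def sgd_tail_average(X, y, step, tail_frac=0.5):
    """Constant-stepsize single-sample SGD for least squares with
    tail averaging: run one pass over the data and average the
    iterates from the final `tail_frac` fraction of steps. No
    explicit regularization is used; any generalization comes from
    the algorithmic bias of SGD itself."""
    n, d = X.shape
    w = np.zeros(d)
    tail_start = int(n * (1 - tail_frac))
    avg = np.zeros(d)
    for t in range(n):
        x_t, y_t = X[t], y[t]
        w -= step * (x_t @ w - y_t) * x_t   # single-sample gradient step
        if t >= tail_start:
            avg += w
    return avg / (n - tail_start)

# noiseless linear model as a sanity check
rng = np.random.default_rng(0)
n, d = 2000, 5
w_star = np.array([1.0, -1.0, 0.5, 2.0, 0.0])
X = rng.standard_normal((n, d))
y = X @ w_star
w_hat = sgd_tail_average(X, y, step=0.05)
```

With iterate averaging one would instead average over all steps (`tail_frac=1.0`); the bound comparison above concerns exactly this choice.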

The assignment flow, recently introduced in J. Math. Imaging and Vision 58/2 (2017), constitutes a high-dimensional dynamical system that evolves on an elementary statistical manifold and performs contextual labeling (classification) of data given in any metric space. Vertices of a given graph index the data points and define a system of neighborhoods. These neighborhoods together with nonnegative weight parameters define regularization of the evolution of label assignments to data points, through geometric averaging induced by the affine e-connection of information geometry. From the perspective of evolutionary game dynamics, the assignment flow may be characterized as a large system of replicator equations that are coupled by geometric averaging. This paper establishes conditions on the weight parameters that guarantee convergence of the continuous-time assignment flow to integral assignments (labelings), up to a negligible subset of situations that will not be encountered when working with real data in practice. Furthermore, we classify attractors of the flow and quantify corresponding basins of attraction. This provides convergence guarantees for the assignment flow which are extended to the discrete-time assignment flow that results from applying a Runge-Kutta-Munthe-Kaas scheme for numerical geometric integration of the assignment flow. Several counter-examples illustrate that violating the conditions may entail unfavorable behavior of the assignment flow regarding contextual data classification.
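A toy sketch of the coupled replicator dynamics described above: rows of $W$ are label assignments on the probability simplex, and the coupling is an entrywise geometric mean over neighborhoods, used here as a simplified stand-in for the geometric averaging induced by the e-connection (the integrator is plain explicit Euler, not the Runge-Kutta-Munthe-Kaas scheme).

```python
import numpy as np

def replicator_step(W, S, dt):
    """One explicit Euler step of the row-wise replicator equation
    dW_i/dt = W_i * (S_i - <W_i, S_i>); each row of W is a point on
    the probability simplex (a label assignment for one data point)."""
    drive = S - np.sum(W * S, axis=1, keepdims=True)
    W = W + dt * W * drive
    return W / W.sum(axis=1, keepdims=True)   # guard against drift

def geometric_average(W, neighbors):
    """Coupling by geometric averaging (sketch): row i becomes the
    normalized entrywise geometric mean of its neighbors' rows."""
    logW = np.log(np.clip(W, 1e-300, None))   # clip to avoid log(0)
    S = np.exp(np.stack([logW[nb].mean(axis=0) for nb in neighbors]))
    return S / S.sum(axis=1, keepdims=True)

# 4 data points on a chain graph, 2 labels, all mildly biased to label 0
W = np.array([[0.7, 0.3], [0.6, 0.4], [0.6, 0.4], [0.7, 0.3]])
neighbors = [[0, 1], [0, 1, 2], [1, 2, 3], [2, 3]]
for _ in range(300):
    W = replicator_step(W, geometric_average(W, neighbors), dt=0.2)
```

In this toy run the flow converges to an integral assignment: every row approaches a vertex of the simplex, which is the behavior the convergence conditions above are designed to guarantee.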

We consider the numerical solution of the real time equilibrium Dyson equation, which is used in calculations of the dynamical properties of quantum many-body systems. We show that this equation can be written as a system of coupled, nonlinear, convolutional Volterra integro-differential equations, for which the kernel depends self-consistently on the solution. As is typical in the numerical solution of Volterra-type equations, the computational bottleneck is the quadratic-scaling cost of history integration. However, the structure of the nonlinear Volterra integral operator precludes the use of standard fast algorithms. We propose a quasilinear-scaling FFT-based algorithm which respects the structure of the nonlinear integral operator. The resulting method can reach large propagation times, and is thus well-suited to explore quantum many-body phenomena at low energy scales. We demonstrate the solver with two standard model systems: the Bethe graph, and the Sachdev-Ye-Kitaev model.
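The history-integration bottleneck described above can be seen in a toy solver. The sketch below treats a generic scalar Volterra integro-differential equation with explicit Euler and a rectangle rule for the memory integral; all choices are illustrative, not the paper's scheme.

```python
import numpy as np

def solve_volterra_ide(f, kernel, u0, dt, N):
    """Explicit Euler solver for the Volterra integro-differential
    equation u'(t) = f(u(t)) + int_0^t kernel(t - s, u(s)) ds, with a
    rectangle rule for the memory integral. Each step sums over the
    full history, so N steps cost O(N^2) work -- the bottleneck that
    motivates fast history-summation algorithms."""
    u = np.empty(N + 1)
    u[0] = u0
    for n in range(N):
        t = n * dt
        hist = dt * sum(kernel(t - j * dt, u[j]) for j in range(n + 1))
        u[n + 1] = u[n] + dt * (f(u[n]) + hist)
    return u

# sanity check: u' = -int_0^t u(s) ds with u(0) = 1 has solution cos(t)
dt, N = 0.01, 300
u = solve_volterra_ide(lambda x: 0.0, lambda tau, v: -v, 1.0, dt, N)
```

Note that the kernel here depends on the solution itself, as in the Dyson equation's self-consistent kernel; it is this nonlinearity that rules out the standard fast convolution algorithms.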

We prove Besov regularity estimates for the solution of the Dirichlet problem involving the integral fractional Laplacian of order $s$ in bounded Lipschitz domains $\Omega$: \[ \|u\|_{\dot{B}^{s+r}_{2,\infty}(\Omega)} \le C \|f\|_{L^2(\Omega)} \quad r = \min\{s,1/2\}. \] This estimate is consistent with the regularity on smooth domains and shows that there is no loss of regularity due to Lipschitz boundaries. The proof uses elementary ingredients, such as the variational structure of the problem and the difference quotient technique.

In this note we compute the logarithmic energy of points in the unit interval $[-1,1]$ chosen from different Gegenbauer Determinantal Point Processes. We check that all the different families of Gegenbauer polynomials yield the same asymptotic result to third order, we compute exactly the value for Chebyshev polynomials, and we give a closed expression for the minimal possible logarithmic energy. The comparison suggests that DPPs cannot match the value of the minimum beyond the third asymptotic term.
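For concreteness, a small sketch of the quantity in question: the logarithmic energy of the Chebyshev points, compared against the leading asymptotic term $n^2 \log 2$, which follows from the logarithmic capacity $1/2$ of $[-1,1]$ (the definition over ordered pairs below is one common convention and is chosen here for illustration).

```python
import numpy as np

def log_energy(x):
    """Logarithmic energy sum_{i != j} log(1/|x_i - x_j|) of a point
    configuration, summing over ordered pairs."""
    d = np.abs(x[:, None] - x[None, :])
    off_diag = ~np.eye(len(x), dtype=bool)
    return -np.sum(np.log(d[off_diag]))

# Chebyshev points (zeros of T_n) on [-1, 1]
n = 200
k = np.arange(n)
x = np.cos((2 * k + 1) * np.pi / (2 * n))
E = log_energy(x)
ratio = E / (n**2 * np.log(2))   # should approach 1 as n grows
```

The lower-order terms ($-\tfrac{1}{2} n \log n$ and an $O(n)$ correction) are exactly where the comparison between DPP samples and the true minimizers is decided.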

In this paper we introduce a covariance framework for the analysis of EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. We perform a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. Apart from that, we illustrate our method on real EEG and MEG data sets. The proposed covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as source of noise and realistic noise covariance estimates are needed for accurate dipole localization, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, such as in combined EEG/fMRI experiments in which the correlation between EEG and fMRI signals is investigated.
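A minimal sketch of the three-factor Kronecker covariance model described above; the factor matrices below are illustrative placeholders for the spatial, temporal, and trial components, not estimates from real EEG/MEG data.

```python
import numpy as np

def kronecker3_covariance(space, time, trials):
    """Covariance of the vectorized space x time x trial data array
    under the three-factor Kronecker model:
    Cov = trials (x) time (x) space, for a vec ordering in which the
    spatial index varies fastest."""
    return np.kron(trials, np.kron(time, space))

space = np.array([[1.0, 0.3],
                  [0.3, 1.0]])                  # 2 sensors
time_ = np.array([[1.0, 0.5, 0.25],
                  [0.5, 1.0, 0.5],
                  [0.25, 0.5, 1.0]])            # 3 time samples, Toeplitz
trials = np.eye(2)                               # 2 independent trials
C = kronecker3_covariance(space, time_, trials)
```

The appeal of this structure is parsimony: the full $12 \times 12$ covariance here is described by the three small factors, and positive definiteness of the factors carries over to the Kronecker product.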
