
Suppose $A \in \mathbb{R}^{n \times n}$ is invertible and we are looking for the solution of $Ax = b$. Given an initial guess $x_1 \in \mathbb{R}^n$, we show that by reflecting through hyperplanes generated by the rows of $A$, we can generate an infinite sequence $(x_k)_{k=1}^{\infty}$ such that all elements have the same distance to the solution, i.e. $\|x_k - x\| = \|x_1 - x\|$. If the hyperplanes are chosen at random, averages over the sequence converge and $$ \mathbb{E} \left\| x - \frac{1}{m} \sum_{k=1}^{m}{ x_k} \right\| \leq \frac{1 + \|A\|_F \|A^{-1}\|}{\sqrt{m}} \cdot\|x-x_1\|.$$ The bound does not depend on the dimension of the matrix. This introduces a purely geometric way of attacking the problem: are there fast ways of estimating the location of the center of a sphere from knowing many points on the sphere? Our convergence rate (coinciding with that of the Random Kaczmarz method) comes from averaging; can one do better?
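A minimal sketch (Python/NumPy, with made-up test data) of the procedure described above: each step reflects the iterate through the hyperplane $\{y : \langle a_i, y\rangle = b_i\}$ of a randomly chosen row, rows being sampled proportionally to $\|a_i\|^2$ as in Random Kaczmarz, and the running average of the iterates is tracked.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# A well-conditioned test matrix (illustrative only) and a consistent system.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(rng.uniform(1.0, 2.0, size=n)) @ V.T
x_true = rng.standard_normal(n)
b = A @ x_true

row_norms_sq = np.sum(A**2, axis=1)
probs = row_norms_sq / row_norms_sq.sum()       # Random Kaczmarz row weighting

x = rng.standard_normal(n)                      # initial guess x_1
running_sum = np.zeros(n)
m = 20000
for _ in range(m):
    i = rng.choice(n, p=probs)
    a = A[i]
    # Reflect through the hyperplane {y : <a, y> = b_i}; this preserves ||x_k - x_true||.
    x = x - 2.0 * (a @ x - b[i]) / row_norms_sq[i] * a
    running_sum += x

print("distance of last iterate   :", np.linalg.norm(x - x_true))                # = ||x_1 - x_true||
print("distance of iterate average:", np.linalg.norm(running_sum / m - x_true))  # decays roughly like 1/sqrt(m)
```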


The concept of Nash equilibrium illuminates the structure of rational behavior in multi-agent settings. However, the concept is only as helpful as one's ability to compute it efficiently. We introduce Cut-and-Play, an algorithm to compute Nash equilibria for non-cooperative simultaneous games where each player's objective is linear in their variables and bilinear in the other players' variables. Using the rich theory of integer programming, we alternate between constructing (i) increasingly tighter outer approximations of the convex hull of each player's feasible set -- by using branching and cutting plane methods -- and (ii) increasingly better inner approximations of these hulls -- by finding extreme points and rays of the convex hulls. In particular, we prove the correctness of our algorithm when these convex hulls are polyhedra. Our algorithm allows us to leverage mixed-integer programming technology to compute equilibria for a large class of games. Further, we integrate existing cutting plane families into the algorithm, significantly speeding up equilibria computation. We present extensive computational results for Integer Programming Games and simultaneous games among bilevel leaders. In both cases, our framework outperforms the state of the art in computing time and solution quality.
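A toy illustration, not the Cut-and-Play algorithm itself, of the two approximations alternated in steps (i) and (ii): for a single player's integer feasible set, the LP relaxation serves as an outer approximation (whose optimizer may be fractional, motivating branching and cuts), while the convex hull of already-found integer points serves as an inner approximation. The constraint data below are made up.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog
from scipy.spatial import ConvexHull

# Toy feasible set of one player: {x in Z^2 : x >= 0, 2*x1 + 3*x2 <= 7, x1 <= 3}.
A_ub = np.array([[2.0, 3.0], [1.0, 0.0]])
b_ub = np.array([7.0, 3.0])

# (i) Outer approximation: the LP relaxation.  Optimizing over it can return a
#     fractional vertex, which branching and cutting planes would then cut off.
lp = linprog(c=[0.0, -1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("LP-relaxation vertex (fractional):", lp.x)

# (ii) Inner approximation: the convex hull of the integer feasible points found so far.
pts = np.array([p for p in product(range(4), range(3))
                if np.all(A_ub @ np.array(p, dtype=float) <= b_ub + 1e-9)])
hull = ConvexHull(pts)
print("Vertices of the inner approximation:", pts[hull.vertices])
```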

We give a new algorithm for the estimation of the cross-covariance matrix $\mathbb{E} XY'$ of two large dimensional signals $X\in\mathbb{R}^n$, $Y\in \mathbb{R}^p$ in the context where the number $T$ of observations of the pair $(X,Y)$ is itself large, but with $T/n$ and $T/p$ not assumed to be small. In the asymptotic regime where $n,p,T$ are large, with high probability, this algorithm is optimal for the Frobenius norm among rotationally invariant estimators, i.e. estimators derived from the empirical estimator by cleaning the singular values, while leaving the singular vectors unchanged.
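A sketch of the shape of a rotationally invariant estimator: take the SVD of the empirical cross-covariance, modify only the singular values, and keep the singular vectors. The optimal cleaning function derived in the paper is not reproduced here; the shrinkage below is a placeholder.

```python
import numpy as np

def rie_cross_covariance(X, Y, shrink):
    """Rotationally invariant estimator of E[X Y'].

    X: (T, n) observations of X; Y: (T, p) observations of Y.
    `shrink` maps empirical singular values to cleaned ones; the optimal choice
    from the paper is not reproduced here -- pass any callable.
    Only the singular values are modified, the singular vectors are kept.
    """
    T = X.shape[0]
    E = X.T @ Y / T                                   # empirical cross-covariance, shape (n, p)
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    return U @ np.diag(shrink(s)) @ Vt

# Toy usage with a placeholder shrinkage (zero out the smaller singular values).
rng = np.random.default_rng(1)
T, n, p = 500, 100, 80
X = rng.standard_normal((T, n))
Y = 0.1 * X[:, :p] + rng.standard_normal((T, p))      # weakly correlated signals
cleaned = rie_cross_covariance(X, Y, shrink=lambda s: np.where(s > np.median(s), s, 0.0))
print(cleaned.shape)
```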

The {\em tensor power method} generalizes the matrix power method to higher order arrays, or tensors. As in the matrix case, the fixed points of the tensor power method are the eigenvectors of the tensor. While every real symmetric matrix has an eigendecomposition, the vectors generating a symmetric decomposition of a real symmetric tensor are not always eigenvectors of the tensor. In this paper we show that whenever an eigenvector {\em is} a generator of the symmetric decomposition of a symmetric tensor, then (if the order of the tensor is sufficiently high) this eigenvector is {\em robust}, i.e., it is an attracting fixed point of the tensor power method. We exhibit new classes of symmetric tensors whose symmetric decomposition consists of eigenvectors. Generalizing orthogonally decomposable tensors, we consider {\em equiangular tight frame decomposable} and {\em equiangular set decomposable} tensors. Our main result implies that such tensors can be decomposed using the tensor power method.
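A short sketch of the tensor power method on an orthogonally decomposable order-3 tensor, whose generators are eigenvectors and, in the setting above, attracting fixed points; the example tensor is synthetic.

```python
import numpy as np

def tensor_apply(T, x):
    # Contract a symmetric order-3 tensor twice with x: result_i = sum_{j,k} T[i,j,k] x_j x_k.
    return np.einsum('ijk,j,k->i', T, x, x)

def tensor_power_method(T, x0, iters=200, tol=1e-12):
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = tensor_apply(T, x)
        y_norm = np.linalg.norm(y)
        if y_norm < tol:
            break
        x_new = y / y_norm
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Orthogonally decomposable example: T = sum_r v_r (x) v_r (x) v_r for orthonormal v_r.
rng = np.random.default_rng(0)
d = 5
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))      # rows of Q are the generators v_r
T = np.einsum('ri,rj,rk->ijk', Q, Q, Q)
x_star = tensor_power_method(T, rng.standard_normal(d))
# The limit aligns (up to sign) with one of the generating vectors.
print(np.max(np.abs(Q @ x_star)))
```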

The purpose of this paper is to perform an error analysis of variational integrators for mechanical systems subject to external forcing. Essentially, we prove that when discretizations of contact order $r$ of the Lagrangian and of the force are used, the integrator has the same contact order. Our analysis is performed first for discrete forced mechanical systems defined over $TQ$, where we study the existence of flows, the construction and properties of discrete exact systems, and the contact order of the flows (variational integrators) in terms of the contact order of the original systems. We then use those results to derive the corresponding analysis for the analogous forced systems defined over $Q\times Q$.
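A hedged sketch of a forced variational integrator, not the specific constructions analyzed in the paper: a midpoint discretization of the Lagrangian and of a linear damping force for a harmonic oscillator, stepped by solving the forced discrete Euler-Lagrange equation.

```python
import numpy as np
from scipy.optimize import brentq

# Forced variational integrator for a damped harmonic oscillator:
# L(q, v) = 0.5*m*v**2 - 0.5*k*q**2, external force f(q, v) = -c*v.
m, k, c, h = 1.0, 1.0, 0.1, 0.01

def D1_Ld(q0, q1):   # derivative of the midpoint discrete Lagrangian w.r.t. its first slot
    return -m * (q1 - q0) / h - 0.5 * h * k * 0.5 * (q0 + q1)

def D2_Ld(q0, q1):   # derivative w.r.t. the second slot
    return m * (q1 - q0) / h - 0.5 * h * k * 0.5 * (q0 + q1)

def f_disc(q0, q1):  # midpoint discrete force, split evenly over both slots
    return 0.5 * h * (-c * (q1 - q0) / h)

def step(q_prev, q_cur):
    # Forced discrete Euler-Lagrange equation, solved for the next position q_next:
    # D2_Ld(q_prev, q_cur) + D1_Ld(q_cur, q_next) + f_disc(q_prev, q_cur) + f_disc(q_cur, q_next) = 0
    resid = lambda q_next: (D2_Ld(q_prev, q_cur) + D1_Ld(q_cur, q_next)
                            + f_disc(q_prev, q_cur) + f_disc(q_cur, q_next))
    return brentq(resid, q_cur - 1.0, q_cur + 1.0)

q = [1.0, 1.0]                       # q_0 and q_1 (initially at rest)
for _ in range(2000):
    q.append(step(q[-2], q[-1]))
print("final position:", q[-1])      # slowly decaying oscillation
```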

Computational fluctuating hydrodynamics aims to understand the impact of thermal fluctuations on fluid motion at small scales through numerical exploration. These fluctuations are modeled as stochastic flux terms and incorporated into the classical Navier-Stokes equations, which need to be solved numerically. In this paper, we present a novel projection-based method for solving the incompressible fluctuating hydrodynamics equations. By analyzing the equilibrium structure factor spectrum of the velocity field, we investigate how the inherent splitting errors affect the numerical solution of the stochastic partial differential equations in the presence of non-periodic boundary conditions, and how iterative corrections can reduce these errors. Our computational examples demonstrate both the capability of our approach to correctly reproduce the stochastic properties of fluids at small scales and its potential use in simulations of multi-physics problems.
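As a minimal illustration of the projection step underlying projection-based methods, the sketch below applies a spectral (FFT-based) Leray projection to a random velocity field on a periodic grid; the non-periodic boundary conditions and splitting-error corrections studied in the paper are not addressed here.

```python
import numpy as np

def leray_project(u, v):
    """Project a 2-D periodic velocity field (u, v) onto its divergence-free part via FFT."""
    ny, nx = u.shape
    kx = np.fft.fftfreq(nx) * 2.0 * np.pi
    ky = np.fft.fftfreq(ny) * 2.0 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                       # avoid division by zero; the mean mode is untouched
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = KX * uh + KY * vh            # Fourier symbol of the divergence (up to a factor i)
    uh -= KX * div_h / k2
    vh -= KY * div_h / k2
    return np.fft.ifft2(uh).real, np.fft.ifft2(vh).real

# A random "velocity + stochastic flux" field, projected to enforce incompressibility.
rng = np.random.default_rng(0)
w_u, w_v = rng.standard_normal((64, 64)), rng.standard_normal((64, 64))
u, v = leray_project(w_u, w_v)
# Check the discrete (spectral) divergence of the projected field.
KX, KY = np.meshgrid(np.fft.fftfreq(64) * 2 * np.pi, np.fft.fftfreq(64) * 2 * np.pi)
div = np.fft.ifft2(1j * (KX * np.fft.fft2(u) + KY * np.fft.fft2(v))).real
print("max |div u| after projection:", np.abs(div).max())
```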

Algebraic Riccati equations with indefinite quadratic terms play an important role in applications related to robust controller design. While there are many established approaches to solve these in the case of small-scale dense coefficients, no approach is available for computing solutions in the large-scale sparse setting. In this paper, we develop an iterative method to compute low-rank approximations of stabilizing solutions of large-scale sparse continuous-time algebraic Riccati equations with indefinite quadratic terms. We test the developed approach on dense examples in comparison to other established matrix equation solvers, and investigate its applicability and performance on large-scale sparse examples.
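To make the equation concrete, here is a plain dense Newton-Kleinman iteration for a tiny continuous-time algebraic Riccati equation with an indefinite quadratic term; it is emphatically not the low-rank large-scale method developed in the paper, and the example data are arbitrary.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def newton_kleinman_care(A, G, Q, X0, iters=50, tol=1e-12):
    """Newton iteration for A'X + XA - XGX + Q = 0 with a possibly indefinite G.

    Dense textbook sketch only (X0 must be stabilizing, i.e. A - G @ X0 stable);
    it is not the low-rank large-scale method developed in the paper.
    """
    X = X0
    for _ in range(iters):
        Ak = A - G @ X
        # Lyapunov step: solve Ak' X_new + X_new Ak = -(Q + X G X).
        X_new = solve_continuous_lyapunov(Ak.T, -(Q + X @ G @ X))
        if np.linalg.norm(X_new - X) <= tol * max(1.0, np.linalg.norm(X)):
            return X_new
        X = X_new
    return X

# Tiny made-up example with an indefinite quadratic term G = B1 B1' - B2 B2'.
rng = np.random.default_rng(0)
n = 4
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable, so X0 = 0 is stabilizing
B1 = 0.3 * rng.standard_normal((n, 1))
B2 = 0.3 * rng.standard_normal((n, 2))
G = B1 @ B1.T - B2 @ B2.T
Q = np.eye(n)
X = newton_kleinman_care(A, G, Q, X0=np.zeros((n, n)))
print("Riccati residual:", np.linalg.norm(A.T @ X + X @ A - X @ G @ X + Q))
```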

Distribution-dependent stochastic dynamical systems arise widely in engineering and science. We consider a class of such systems that model the limiting behavior of interacting particles moving in a vector field with random fluctuations. We aim to examine the most likely transition path between stable equilibrium states of the vector field. In the small noise regime, we find that the rate function (or action functional) does not involve the solution of the skeleton equation, which describes the unperturbed deterministic flow of the vector field shifted by the interaction at zero distance. As a result, we are led to study the most likely transition path for a stochastic differential equation without distribution dependency. This enables the computation of the most likely transition path for these distribution-dependent stochastic dynamical systems by the adaptive minimum action method, and we illustrate our approach with two examples.
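A toy version of the resulting computation: the plain (non-adaptive) minimum action method applied to a scalar double-well SDE without distribution dependency, minimizing the discretized Freidlin-Wentzell action over paths joining the two stable equilibria. The drift and discretization are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Freidlin-Wentzell action for dX = b(X) dt + sqrt(eps) dW with a double-well drift.
# Minimizing the action over paths from one stable state (-1) to the other (+1)
# gives the most likely transition path.
b = lambda x: x - x**3           # stable equilibria at x = -1 and x = +1
T, N = 20.0, 100                 # time horizon and number of path segments
dt = T / N

def action(interior):
    phi = np.concatenate(([-1.0], interior, [1.0]))   # endpoints are fixed
    dphi = np.diff(phi) / dt
    mid = 0.5 * (phi[:-1] + phi[1:])
    return 0.5 * dt * np.sum((dphi - b(mid))**2)

# Initial guess: straight line between the two wells.
phi0 = np.linspace(-1.0, 1.0, N + 1)[1:-1]
res = minimize(action, phi0, method="L-BFGS-B")
# Roughly 2*(V(0) - V(-1)) = 1/2 for V(x) = x^4/4 - x^2/2 (only the uphill leg costs).
print("minimal action ~", res.fun)
```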

We derive a priori error estimates for the Godunov method applied to the multidimensional Euler system of gas dynamics. To this end we apply the relative energy principle and estimate the distance between the numerical solution and the strong solution. This also yields estimates of the $L^2$-norm of the errors in density, momentum and entropy. Under the assumption that the numerical density and energy are bounded, we obtain a convergence rate of $1/2$ for the relative energy in the $L^1$-norm. Further, under the assumption that the total variation of the numerical solution is bounded, we obtain a first-order convergence rate for the relative energy in the $L^1$-norm. Consequently, the numerical solutions (density, momentum and entropy) converge in the $L^2$-norm with a convergence rate of $1/2$. The numerical results presented for Riemann problems are consistent with our theoretical analysis.
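For a concrete, if much simpler, instance of the Godunov method, the sketch below solves a Riemann problem for the scalar Burgers equation with the exact Godunov flux; the multidimensional Euler system treated in the paper requires a full Riemann solver and is not attempted here.

```python
import numpy as np

def godunov_flux(uL, uR):
    # Exact Godunov flux for the convex Burgers flux f(u) = u^2 / 2.
    return np.maximum(np.maximum(uL, 0.0)**2, np.minimum(uR, 0.0)**2) / 2.0

def godunov_burgers(u0, dx, t_end, cfl=0.45):
    u = u0.copy()
    t = 0.0
    while t < t_end - 1e-12:
        dt = min(cfl * dx / max(np.abs(u).max(), 1e-12), t_end - t)
        up = np.concatenate(([u[0]], u, [u[-1]]))   # outflow boundaries
        F = godunov_flux(up[:-1], up[1:])            # fluxes at all cell interfaces
        u = u - dt / dx * (F[1:] - F[:-1])
        t += dt
    return u

# Riemann problem u_L = 1, u_R = 0: a shock moving with speed 1/2.
N = 400
x = np.linspace(-1, 1, N, endpoint=False) + 1.0 / N   # cell centers
u0 = np.where(x < 0.0, 1.0, 0.0)
u = godunov_burgers(u0, dx=2.0 / N, t_end=0.5)
exact = np.where(x < 0.25, 1.0, 0.0)
print("L1 error:", np.sum(np.abs(u - exact)) * (2.0 / N))
```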

We revisit the basic problem of quantum state certification: given copies of an unknown mixed state $\rho\in\mathbb{C}^{d\times d}$ and the description of a mixed state $\sigma$, decide whether $\sigma = \rho$ or $\|\sigma - \rho\|_{\mathsf{tr}} \ge \epsilon$. When $\sigma$ is maximally mixed, this is mixedness testing, and it is known that $\Omega(d^{\Theta(1)}/\epsilon^2)$ copies are necessary, where the exact exponent depends on the type of measurements the learner can make [OW15, BCL20], and in many of these settings there is a matching upper bound [OW15, BOW19, BCL20]. Can one avoid this $d^{\Theta(1)}$ dependence for certain kinds of mixed states $\sigma$, e.g. ones which are approximately low rank? More ambitiously, does there exist a simple functional $f:\mathbb{C}^{d\times d}\to\mathbb{R}_{\ge 0}$ for which one can show that $\Theta(f(\sigma)/\epsilon^2)$ copies are necessary and sufficient for state certification with respect to any $\sigma$? Such instance-optimal bounds are known in the context of classical distribution testing, e.g. [VV17]. Here we give the first bounds of this nature for the quantum setting, showing (up to log factors) that the copy complexity for state certification using nonadaptive incoherent measurements is essentially given by the copy complexity for mixedness testing times the fidelity between $\sigma$ and the maximally mixed state. Surprisingly, our bound differs substantially from instance-optimal bounds for the classical problem, demonstrating a qualitative difference between the two settings.
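A small sketch of the instance-dependent quantity suggested by this result: the (Uhlmann) fidelity between $\sigma$ and the maximally mixed state, multiplied by a mixedness-testing copy complexity supplied by the caller; the $d^{3/2}/\epsilon^2$ rate used in the example is the nonadaptive incoherent rate reported in [BCL20], and constants and log factors are ignored.

```python
import numpy as np

def fidelity_with_maximally_mixed(sigma):
    # F(sigma, I/d) = (Tr sqrt(sigma))^2 / d, computed from the eigenvalues of sigma.
    lam = np.clip(np.linalg.eigvalsh(sigma), 0.0, None)
    return np.sum(np.sqrt(lam))**2 / sigma.shape[0]

def certification_copies(sigma, mixedness_copies):
    # Instance-dependent copy complexity suggested above, up to constants and logs:
    # (copies needed for mixedness testing) * F(sigma, maximally mixed state).
    return mixedness_copies * fidelity_with_maximally_mixed(sigma)

d, r, eps = 256, 4, 0.1
mixedness_copies = d**1.5 / eps**2      # nonadaptive incoherent rate, as reported in [BCL20]
# An approximately rank-r state versus the maximally mixed state.
lam = np.concatenate([np.full(r, 0.999 / r), np.full(d - r, 0.001 / (d - r))])
sigma_low_rank = np.diag(lam)
sigma_mixed = np.eye(d) / d
print(certification_copies(sigma_low_rank, mixedness_copies))   # much smaller
print(certification_copies(sigma_mixed, mixedness_copies))      # ~ d^{3/2} / eps^2
```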

We consider the task of learning the parameters of a {\em single} component of a mixture model, for the case when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than that of solving the overall original problem, in which one learns the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and lower computational complexity than existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.
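A toy illustration of the flavor of the search problem, assuming the side information takes the simple form of a few samples known to come from the target component; the general side-information model and the matrix-based algorithm of the paper are not reproduced.

```python
import numpy as np

# Recover the mean of *one* Gaussian component when side information singles it out.
rng = np.random.default_rng(0)
d, n_per = 20, 2000
means = rng.standard_normal((5, d)) * 3.0                   # 5 well-separated components
data = np.concatenate([mu + rng.standard_normal((n_per, d)) for mu in means])

target = 2
side_info = means[target] + rng.standard_normal((10, d))    # a few samples known to be from the target

# Keep the samples closer to the side-information centroid than to the overall
# data centroid, then average them to estimate the target component's mean.
anchor = side_info.mean(axis=0)
overall = data.mean(axis=0)
mask = np.linalg.norm(data - anchor, axis=1) < np.linalg.norm(data - overall, axis=1)
estimate = data[mask].mean(axis=0)
print("error using side information:", np.linalg.norm(estimate - means[target]))
print("error of the naive anchor   :", np.linalg.norm(anchor - means[target]))
```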
