
In the phase retrieval problem one seeks to recover an unknown $n$-dimensional signal vector $\mathbf{x}$ from $m$ measurements of the form $y_i = |(\mathbf{A} \mathbf{x})_i|$, where $\mathbf{A}$ denotes the sensing matrix. Many algorithms for this problem are based on approximate message passing. For these algorithms, it is known that if the sensing matrix $\mathbf{A}$ is generated by sub-sampling $n$ columns of a uniformly random (i.e., Haar distributed) orthogonal matrix, then, in the high-dimensional asymptotic regime ($m, n \rightarrow \infty$, $n/m \rightarrow \kappa$), the dynamics of the algorithm are given by a deterministic recursion known as the state evolution. For a special class of linearized message-passing algorithms, we show that the state evolution is universal: it continues to hold even when $\mathbf{A}$ is generated by randomly sub-sampling columns of the Hadamard-Walsh matrix, provided the signal is drawn from a Gaussian prior.
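
As a concrete illustration of the measurement model (a minimal sketch only, not the message-passing algorithm or its state evolution), the snippet below draws a Gaussian signal and generates phaseless measurements $y_i = |(\mathbf{A}\mathbf{x})_i|$ for the two sensing ensembles compared above: columns sub-sampled from a Haar orthogonal matrix and from the orthonormal Hadamard-Walsh matrix. The dimensions and seed are arbitrary choices.

```python
import numpy as np
from scipy.linalg import hadamard

def subsampled_haar(m, n, rng):
    """n randomly chosen columns of an m x m Haar-distributed orthogonal matrix."""
    q, r = np.linalg.qr(rng.standard_normal((m, m)))
    q = q * np.sign(np.diag(r))                  # sign fix so that q is Haar
    return q[:, rng.choice(m, size=n, replace=False)]

def subsampled_hadamard(m, n, rng):
    """n randomly chosen columns of the orthonormal m x m Hadamard-Walsh matrix."""
    h = hadamard(m) / np.sqrt(m)
    return h[:, rng.choice(m, size=n, replace=False)]

rng = np.random.default_rng(0)
m, n = 1024, 256                                 # aspect ratio kappa = n/m = 1/4
x = rng.standard_normal(n)                       # Gaussian prior on the signal
for make_A in (subsampled_haar, subsampled_hadamard):
    A = make_A(m, n, rng)
    y = np.abs(A @ x)                            # phaseless measurements y_i = |(Ax)_i|
    print(make_A.__name__, float(y.mean()))
```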

Related content

Boolean Matrix Factorization (BMF) aims to approximate a given binary matrix by the Boolean product of two low-rank binary matrices. Binary data is ubiquitous, and representing data by binary matrices is common in medicine, natural language processing, bioinformatics, and computer graphics, among many other fields. Unfortunately, BMF is computationally hard, and heuristic algorithms are used to compute Boolean factorizations. Very recently, a theoretical breakthrough was obtained independently by two research groups: Ban et al. (SODA 2019) and Fomin et al. (Trans. Algorithms 2020) showed that BMF admits an efficient polynomial-time approximation scheme (EPTAS). However, despite its theoretical importance, the double-exponential dependence of the running time on the rank makes these algorithms impractical to implement. The primary research question motivating our work is whether the theoretical advances on BMF can lead to practical algorithms. The main conceptual contribution of our work is the following: while the EPTAS for BMF is a purely theoretical advance, the general approach behind these algorithms can serve as the basis for designing better heuristics. We also use this strategy to develop new algorithms for the related $\mathbb{F}_p$-Matrix Factorization problem. Here, given a matrix $A$ over a finite field GF($p$), where $p$ is a prime, and an integer $r$, the objective is to find a matrix $B$ over the same field with GF($p$)-rank at most $r$ that minimizes some norm of $A-B$. Our empirical study on synthetic and real-world data demonstrates the advantage of the new algorithms over previous work on BMF and $\mathbb{F}_p$-Matrix Factorization.
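
To make the objective concrete, here is a minimal sketch (in no way the EPTAS of Ban et al. or Fomin et al.) of the Boolean product, the Hamming-error objective, and a brute-force alternating heuristic that is only feasible for small rank $r$; the data, rank, and iteration count are illustrative assumptions.

```python
import itertools
import numpy as np

def boolean_product(B, C):
    """Boolean (OR of ANDs) product of 0/1 matrices B (m x r) and C (r x n)."""
    return (B @ C > 0).astype(int)

def bmf_error(A, B, C):
    """Number of entries where the Boolean product disagrees with A."""
    return int(np.sum(A != boolean_product(B, C)))

def alternating_bmf(A, r, iters=20, seed=0):
    """Alternating heuristic: re-optimise each row of B (resp. column of C)
    by brute force over {0,1}^r while the other factor is held fixed."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    B = rng.integers(0, 2, (m, r))
    C = rng.integers(0, 2, (r, n))
    patterns = np.array(list(itertools.product([0, 1], repeat=r)))   # all 2^r rows
    for _ in range(iters):
        for i in range(m):                       # best row of B given C
            errs = (patterns @ C > 0).astype(int) != A[i]
            B[i] = patterns[errs.sum(axis=1).argmin()]
        for j in range(n):                       # best column of C given B
            errs = (B @ patterns.T > 0).astype(int) != A[:, [j]]
            C[:, j] = patterns[errs.sum(axis=0).argmin()]
    return B, C

A = (np.random.default_rng(1).random((30, 40)) < 0.3).astype(int)
B, C = alternating_bmf(A, r=4)
print("reconstruction error:", bmf_error(A, B, C))
```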

For capillary-driven flow the interface curvature is essential in the modelling of surface tension via the imposition of the Young--Laplace jump condition. We show that traditional geometric volume-of-fluid (VOF) methods, which are based on a piecewise linear approximation of the interface, do not yield an interface curvature that converges under mesh refinement in time-dependent problems. Instead, we propose to use a piecewise parabolic approximation of the interface, resulting in a class of piecewise parabolic interface calculation (PPIC) methods. In particular, we introduce parabolic variants of the LVIRA and MOF methods, denoted PLVIRA and PMOF, respectively. We show that a Lagrangian remapping method is sufficiently accurate for the advection of such a parabolic interface. It is numerically demonstrated that the newly proposed PPIC methods increase the reconstruction accuracy by one order, yield convergence of the interface curvature in time-dependent advection problems, and yield Weber-number-independent convergence of a droplet translation problem, where the advection method is coupled to a two-phase Navier--Stokes solver. The PLVIRA method is applied to the simulation of a 2D rising bubble, which shows good agreement with a reference solution.
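
The curvature content of a parabolic representation can be seen in a toy least-squares fit (not the PLVIRA/PMOF reconstruction itself, which works with volume fractions): fitting $y \approx ax^2 + bx + c$ to points sampled from a circle of radius $R$ and evaluating the parabola's curvature $|2a|/(1 + b^2)^{3/2}$ recovers $1/R$ near the fit point.

```python
import numpy as np

R = 0.35
theta = np.linspace(-0.3, 0.3, 9)                 # short arc around the top of a circle
x, y = R * np.sin(theta), R * np.cos(theta)       # sampled interface points

# Fit y ~ a x^2 + b x + c by least squares.
V = np.column_stack([x**2, x, np.ones_like(x)])
a, b, c = np.linalg.lstsq(V, y, rcond=None)[0]

# Curvature of the parabola at x = 0: kappa = |2a| / (1 + b^2)^(3/2).
kappa = abs(2 * a) / (1 + b**2) ** 1.5
print(f"fitted curvature {kappa:.4f}  vs  exact 1/R = {1/R:.4f}")
```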

Bi-quadratic programming over unit spheres is a fundamental problem in quantum mechanics introduced in the pioneering work of Einstein, Schr\"odinger, and others. It has been shown to be NP-hard, so it must be solved by efficient heuristic algorithms such as the block improvement method (BIM). This paper focuses on the maximization of bi-quadratic forms, which leads to a rank-one approximation problem that is equivalent to computing the M-spectral radius and its corresponding eigenvectors. Specifically, we provide a tight upper bound on the M-spectral radius of nonnegative fourth-order partially symmetric (PS) tensors, which can be considered an approximation of the M-spectral radius. Furthermore, we show that the proposed upper bound can be obtained more efficiently if the nonnegative fourth-order PS-tensor is a member of certain monoid semigroups. As an extension of the proposed upper bound, we derive the exact M-spectral radius and its corresponding M-eigenvectors for certain classes of fourth-order PS-tensors. Lastly, as an application of the proposed bound, we obtain a practically testable sufficient condition for nonsingular elasticity M-tensors to satisfy the strong ellipticity condition. We conduct several numerical experiments to demonstrate the utility of the proposed results. The results show that (a) our method attains a tight upper bound on the M-spectral radius with little computational burden, and (b) such tight and efficient upper bounds greatly enhance the convergence speed of the BIM algorithm, making it applicable to large-scale problems.
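
For orientation, the sketch below (assumptions: a small random nonnegative PS-tensor and plain alternating eigenvector updates in the spirit of the block improvement method; not the proposed upper bound or the exact-solution formulas) approximates the M-spectral radius by locally maximizing the bi-quadratic form over the two unit spheres.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.random((n, n, n, n))
A = (A + A.transpose(1, 0, 2, 3) + A.transpose(0, 1, 3, 2)
       + A.transpose(1, 0, 3, 2)) / 4              # partial symmetry in (i,j) and (k,l)

def biquadratic(A, x, y):
    """f(x, y) = sum_{ijkl} A_{ijkl} x_i x_j y_k y_l."""
    return float(np.einsum('ijkl,i,j,k,l->', A, x, x, y, y))

x = np.ones(n) / np.sqrt(n)
y = np.ones(n) / np.sqrt(n)
for _ in range(200):
    My = np.einsum('ijkl,k,l->ij', A, y, y)        # matrix acting on x (symmetric)
    x = np.linalg.eigh(My)[1][:, -1]               # best unit x given y
    Nx = np.einsum('ijkl,i,j->kl', A, x, x)        # matrix acting on y (symmetric)
    y = np.linalg.eigh(Nx)[1][:, -1]               # best unit y given x
print("approximate M-spectral radius:", biquadratic(A, x, y))
```

Each half-step maximizes the bi-quadratic form exactly over one block, so the objective is non-decreasing; like the BIM itself, this yields a local (lower) approximation of the M-spectral radius.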

We present a non-asymptotic lower bound on the eigenspectrum of the design matrix generated by any linear bandit algorithm with sub-linear regret when the action set has well-behaved curvature. Specifically, we show that the minimum eigenvalue of the expected design matrix grows as $\Omega(\sqrt{n})$ whenever the expected cumulative regret of the algorithm is $O(\sqrt{n})$, where $n$ is the learning horizon and the action space has a constant Hessian around the optimal arm. This shows that such action spaces force a polynomial lower bound rather than the logarithmic lower bound shown by \cite{lattimore2017end} for discrete (i.e., well-separated) action spaces. Furthermore, while the previous result holds only in the asymptotic regime (as $n \to \infty$), our result for these ``locally rich'' action spaces is any-time. Additionally, under a mild technical assumption, we obtain a similar lower bound on the minimum eigenvalue holding with high probability. We apply our result to two practical scenarios: \emph{model selection} and \emph{clustering} in linear bandits. For model selection, we show that an epoch-based linear bandit algorithm adapts to the true model complexity at a rate exponential in the number of epochs, by virtue of our novel spectral bound. For clustering, we consider a multi-agent framework in which we show, by leveraging the spectral result, that no forced exploration is necessary: the agents can run a linear bandit algorithm and estimate their underlying parameters simultaneously, and hence incur low regret.
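
A small simulation conveys the regime described above (a hypothetical setup, not the paper's proof or algorithm): on the unit sphere, a greedy strategy perturbed at scale $t^{-1/4}$ incurs per-step regret of order $t^{-1/2}$, since the regret is quadratic in the deviation on a curved action set, hence $O(\sqrt{n})$ cumulative regret, while those same deviations feed the design matrix so that its minimum eigenvalue grows roughly like $\sqrt{t}$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, horizon = 5, 2000
theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)               # optimal arm on the unit sphere is theta

G = 1e-6 * np.eye(d)                         # design matrix  sum_t a_t a_t^T
b = np.zeros(d)
regret = 0.0
for t in range(1, horizon + 1):
    theta_hat = np.linalg.solve(G, b)        # least-squares estimate
    a = theta_hat + t ** -0.25 * rng.standard_normal(d)   # greedy + decaying perturbation
    a /= np.linalg.norm(a)                   # action lies on the unit sphere
    regret += 1.0 - a @ theta                # instantaneous regret
    G += np.outer(a, a)
    b += (a @ theta + 0.1 * rng.standard_normal()) * a
    if t in (500, 1000, 2000):
        lam_min = np.linalg.eigvalsh(G).min()
        print(f"t={t:5d}  regret={regret:7.1f}  lambda_min={lam_min:7.1f}  sqrt(t)={t**0.5:6.1f}")
```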

We study the nonparametric estimation of the density $\pi$ of the stationary distribution of a $d$-dimensional stochastic differential equation $(X_t)_{t \in [0, T]}$ with possibly unbounded drift. From continuous observation of the sample path on $[0, T]$, we study the rate of estimation of $\pi(x)$ as $T$ goes to infinity. One finding is that, for $d \ge 3$, the rate of estimation depends on the smoothness $\beta = (\beta_1, \dots, \beta_d)$ of $\pi$. In particular, having ordered the smoothness so that $\beta_1 \le \dots \le \beta_d$, the rate depends on whether $\beta_2 < \beta_3$ or $\beta_2 = \beta_3$. We show that kernel density estimators achieve the rate $(\frac{\log T}{T})^\gamma$ in the first case and $(\frac{1}{T})^\gamma$ in the second, for an explicit exponent $\gamma$ depending on the dimension and on $\bar{\beta}_3$, the harmonic mean of the smoothness over the remaining $d-2$ directions after removing the two smallest, $\beta_1$ and $\beta_2$. Moreover, we obtain a minimax lower bound on the $\mathbf{L}^2$-risk for pointwise estimation with the same rates $(\frac{\log T}{T})^\gamma$ or $(\frac{1}{T})^\gamma$, depending on the values of $\beta_2$ and $\beta_3$.
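
As a sanity check of the estimator itself (illustrative only; the rates above concern continuous observation and anisotropic smoothness), the sketch below builds an anisotropic product-kernel estimator from a long discretised path of a 3-dimensional Ornstein-Uhlenbeck process, whose stationary density $\mathcal{N}(0, I_3)$ is known in closed form; the bandwidths and horizon are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, dt = 3, 1000.0, 0.01
n = int(T / dt)
X = np.zeros((n, d))
for k in range(1, n):                          # Euler-Maruyama for dX = -X dt + sqrt(2) dW
    X[k] = X[k-1] - X[k-1] * dt + np.sqrt(2 * dt) * rng.standard_normal(d)

def kde(x0, X, h):
    """hat pi(x0) = (1/T) int_0^T K_h(x0 - X_t) dt, discretised along the path;
    h is a per-coordinate bandwidth vector (anisotropic product Gaussian kernel)."""
    u = (x0 - X) / h
    K = np.exp(-0.5 * np.sum(u**2, axis=1)) / ((2 * np.pi) ** (d / 2) * np.prod(h))
    return K.mean()

x0 = np.zeros(d)
h = np.full(d, 0.25)                           # one bandwidth per direction
exact = (2 * np.pi) ** (-d / 2)                # N(0, I_3) density at the origin
print(f"KDE estimate {kde(x0, X, h):.4f}  vs  exact {exact:.4f}")
```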

LU and Cholesky matrix factorization algorithms are core subroutines used to solve systems of linear equations (SLEs) encountered while solving an optimization problem. Standard factorization algorithms are highly efficient but remain susceptible to the accumulation of roundoff errors, which can lead solvers to return feasibility and optimality claims that are actually invalid. This paper introduces a novel approach for solving sequences of closely related SLEs encountered in nonlinear programming efficiently and without roundoff errors. Specifically, it introduces rank-one update algorithms for the roundoff-error-free (REF) factorization framework, a toolset built on integer-preserving arithmetic that has led to the development and implementation of fail-proof SLE solution subroutines for linear programming. The formal guarantees of the proposed algorithms are established theoretically. Their advantages are supported by computational experiments, which demonstrate run-time improvements of more than 75x over exact factorization on fully dense matrices with over one million entries. A significant advantage of the methodology is that the length of any coefficient calculated via the proposed algorithms is bounded polynomially in the size of the inputs without resorting to greatest common divisor operations, which are required by, and thereby hinder, efficient implementations of exact rational arithmetic approaches.
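
The flavour of integer-preserving arithmetic can be conveyed with a fraction-free (Bareiss-style) elimination, shown below; this is only the classical building block that REF-type factorizations rest on, not the paper's rank-one update algorithms. All intermediate quantities are exact Python integers, so no roundoff error can accumulate.

```python
def bareiss_det(M):
    """Fraction-free (Bareiss) elimination: returns det(M) for an integer matrix M
    using only integer operations; every division below is exact by Sylvester's
    identity, so the computation is roundoff-error-free."""
    A = [list(map(int, row)) for row in M]
    n = len(A)
    sign, prev = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:                               # simple row pivoting
            piv = next((i for i in range(k + 1, n) if A[i][k] != 0), None)
            if piv is None:
                return 0
            A[k], A[piv] = A[piv], A[k]
            sign = -sign                               # a row swap flips the sign
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[-1][-1]

M = [[2, 3, 1], [4, 7, 5], [6, 18, 22]]
print(bareiss_det(M))          # exact integer determinant (here -16)
```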

In this paper, we present a numerical strategy to check the strong stability (or GKS-stability) of one-step explicit totally upwind schemes in 1D with numerical boundary conditions. The underlying continuous problem being approximated is a hyperbolic partial differential equation. Our approach is based on the Uniform Kreiss-Lopatinskii Condition, using linear algebra and complex analysis to count the number of zeros of the associated determinant. The study is illustrated with the Beam-Warming scheme together with the simplified inverse Lax-Wendroff procedure at the boundary.
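
For reference, the sketch below implements the interior Beam-Warming update for the model advection equation $u_t + a u_x = 0$ with $a > 0$; the boundary closure used here is a deliberately crude placeholder (a fixed inflow value and first-order upwind), not the simplified inverse Lax-Wendroff procedure analysed in the paper, and the grid parameters are arbitrary.

```python
import numpy as np

a, L, nx = 1.0, 1.0, 200
dx = L / nx
nu = 0.8                                   # Courant number a*dt/dx (Beam-Warming stable for nu <= 2)
dt = nu * dx / a
x = np.linspace(0.0, L, nx, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)          # smooth initial bump

for _ in range(100):
    un = u.copy()
    # Interior points j >= 2: second-order upwind (Beam-Warming) stencil.
    u[2:] = (un[2:]
             - nu * (un[2:] - un[1:-1])
             - 0.5 * nu * (1 - nu) * (un[2:] - 2 * un[1:-1] + un[:-2]))
    u[0] = 0.0                             # inflow value (exact solution is ~0 there)
    u[1] = un[1] - nu * (un[1] - un[0])    # first-order upwind as a simple boundary closure
print("max after 100 steps:", float(u.max()))
```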

When learning disconnected distributions, generative adversarial networks (GANs) are known to face model misspecification. Indeed, a continuous mapping from a unimodal latent distribution to a disconnected one is impossible, so GANs necessarily generate samples outside the support of the target distribution. This raises a fundamental question: what is the latent space partition that minimizes the measure of these areas? Building on a recent result from geometric measure theory, we prove that an optimal GAN must structure its latent space as a 'simplicial cluster', a Voronoi partition whose cells are convex cones, when the dimension of the latent space is larger than the number of modes. In this configuration, each Voronoi cell maps to a distinct mode of the data. We derive both an upper and a lower bound on the optimal precision of GANs learning disconnected manifolds. Interestingly, these two bounds have the same order of decrease, $\sqrt{\log m}$, where $m$ is the number of modes. Finally, we perform several experiments to exhibit the geometry of the latent space and show experimentally that GANs have a geometry with properties similar to the theoretical one.
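
The 'simplicial cluster' structure is easy to emulate (an illustration of the geometry only, not the paper's construction or a training procedure): pick one unit direction per mode and assign a latent vector $z$ to the cell maximizing $\langle z, \mu_k \rangle$; each cell is an intersection of half-spaces through the origin, hence a convex cone, and each cell is mapped to its own mode.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 5                                          # latent dimension > number of modes
mu = rng.standard_normal((m, d))
mu /= np.linalg.norm(mu, axis=1, keepdims=True)      # one unit direction per mode
modes = 10.0 * rng.standard_normal((m, 2))           # well-separated 2-D target modes

z = rng.standard_normal((10_000, d))                 # unimodal (Gaussian) latent samples
cell = np.argmax(z @ mu.T, axis=1)                   # which convex cone each z falls in
samples = modes[cell] + 0.1 * rng.standard_normal((len(z), 2))

for k in range(m):
    print(f"cone {k}: {np.mean(cell == k):.3f} of the latent mass")
```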

This paper studies \emph{linear} and \emph{affine} error-correcting codes for correcting synchronization errors such as insertions and deletions. We call such codes linear/affine insdel codes. Linear codes that can correct even a single deletion are limited to an information rate of at most $1/2$ (achieved by the trivial 2-fold repetition code). Previously, it was (erroneously) reported that, more generally, no non-trivial linear codes correcting $k$ deletions exist, i.e., that the $(k+1)$-fold repetition code and its rate of $1/(k+1)$ are essentially optimal for any $k$. We disprove this and show the existence of binary linear codes of length $n$ and rate just below $1/2$ capable of correcting $\Omega(n)$ insertions and deletions. This identifies rate $1/2$ as a sharp threshold for recovery from deletions for linear codes, and reopens the quest for a better understanding of the capabilities of linear codes for correcting insertions/deletions. We prove novel outer bounds and existential inner bounds for the rate vs. (edit) distance trade-off of linear insdel codes. We complement our existential results with an efficient synchronization-string-based transformation that converts any asymptotically good linear code for Hamming errors into an asymptotically good linear code for insdel errors. Lastly, we show that the $\frac{1}{2}$-rate limitation does not hold for affine codes by giving an explicit affine code of rate $1-\epsilon$ which can efficiently correct a constant fraction of insdel errors.
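
The rate-$1/2$ baseline mentioned above is easy to make explicit: in the 2-fold repetition code every run of the codeword has even length, so a single deletion shortens exactly one run to odd length, and decoding just rounds run lengths back up. The sketch below illustrates this trivial code only, not the paper's constructions.

```python
def encode(bits):
    """2-fold repetition encoding: every message bit is written twice."""
    return [b for b in bits for _ in range(2)]

def decode_one_deletion(received):
    """Decode a 2-fold repetition codeword from which at most one symbol was deleted,
    by run-length parsing: each run of length L came from a message run of ceil(L/2)."""
    message, i = [], 0
    while i < len(received):
        j = i
        while j < len(received) and received[j] == received[i]:
            j += 1
        run = j - i
        message += [received[i]] * ((run + 1) // 2)
        i = j
    return message

msg = [1, 0, 0, 1, 1, 1, 0]
cw = encode(msg)                          # 11 00 00 11 11 11 00
corrupted = cw[:5] + cw[6:]               # delete one symbol
assert decode_one_deletion(corrupted) == msg
print("decoded correctly")
```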

Gaussian processes have become a promising tool for various safety-critical settings, since the posterior variance can be used to directly estimate the model error and quantify risk. However, state-of-the-art techniques for safety-critical settings hinge on the assumption that the kernel hyperparameters are known, which does not hold in general. To mitigate this, we introduce robust Gaussian process uniform error bounds for settings with unknown hyperparameters. Our approach computes a confidence region in the space of hyperparameters, which enables us to obtain a probabilistic upper bound on the model error of a Gaussian process with arbitrary hyperparameters. We do not require any bounds on the hyperparameters to be known a priori, an assumption commonly made in related work. Instead, we derive the bounds from data in an intuitive fashion. We additionally employ the proposed technique to derive performance guarantees for a class of learning-based control problems. Experiments show that the bound performs significantly better than vanilla and fully Bayesian Gaussian processes.
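
Schematically (with a hypothetical confidence region for the lengthscale and an assumed scaling factor $\beta$, not the paper's actual bound), a robust uniform error bound can be obtained by taking the worst-case posterior standard deviation over the hyperparameter region; note that the GP posterior variance does not depend on the observed targets, only on the inputs and the kernel.

```python
import numpy as np

def rbf(X1, X2, ls):
    """Squared-exponential kernel with lengthscale ls."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def posterior_std(Xtrain, Xtest, ls, noise=1e-2):
    """GP posterior standard deviation at Xtest (independent of the targets)."""
    K = rbf(Xtrain, Xtrain, ls) + noise * np.eye(len(Xtrain))
    Ks = rbf(Xtest, Xtrain, ls)
    Kss = rbf(Xtest, Xtest, ls)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return np.sqrt(np.clip(np.diag(cov), 0.0, None))

rng = np.random.default_rng(0)
Xtr = rng.uniform(-3, 3, (30, 1))                 # training inputs
Xte = np.linspace(-3, 3, 200)[:, None]            # test grid

lengthscales = np.linspace(0.4, 1.6, 13)          # assumed confidence region
beta = 2.0                                        # assumed scaling factor
robust_bound = beta * np.max(
    [posterior_std(Xtr, Xte, ls) for ls in lengthscales], axis=0)
print("worst-case uniform bound on the grid:", float(robust_bound.max()))
```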
