
We study the phase reconstruction of signals $f$ belonging to complex Gaussian shift-invariant spaces $V^\infty(\varphi)$ from spectrogram measurements $|\mathcal{G}f(X)|$, where $\mathcal{G}$ is the Gabor transform and $X \subseteq \mathbb{R}^2$. An explicit reconstruction formula demonstrates that such signals can be recovered from measurements located on parallel lines in the time-frequency plane by means of a Riesz basis expansion. Moreover, connectedness assumptions on $|f|$ result in stability estimates in the situation where one aims to reconstruct $f$ on compact intervals. Driven by the recent observation that signals in Gaussian shift-invariant spaces are determined by lattice measurements [Grohs, P., Liehr, L., Injectivity of Gabor phase retrieval from lattice measurements, arXiv:2008.07238], we prove a sampling result on the stable approximation from finitely many spectrogram samples. The resulting algorithm provides a non-iterative, provably stable and convergent approximation technique. In addition, it constitutes a method for approximating signals in function spaces beyond $V^\infty(\varphi)$, such as Paley-Wiener spaces.
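
A minimal numerical sketch of the type of data considered (not the paper's reconstruction formula): spectrogram samples $|\mathcal{G}f|$ of a signal with finitely many shifts of a Gaussian generator, evaluated on two parallel horizontal lines of the time-frequency plane. The window normalization, coefficients and discretization parameters below are illustrative assumptions.

```python
import numpy as np

phi = lambda t: np.exp(-np.pi * t**2)          # Gaussian generator
coeffs = {-1: 1.0, 0: 0.5 - 0.3j, 2: -0.8j}    # f = sum_k c_k phi(. - k), finitely many terms

def f(t):
    return sum(c * phi(t - k) for k, c in coeffs.items())

def gabor_magnitude(x, w, grid=np.linspace(-15, 15, 4001)):
    """|Gf(x, w)| with a Gaussian window, approximated by a Riemann sum."""
    dt = grid[1] - grid[0]
    integrand = f(grid) * np.exp(-np.pi * (grid - x)**2) * np.exp(-2j * np.pi * w * grid)
    return abs(np.sum(integrand) * dt)

# spectrogram measurements on two parallel lines w = 0 and w = 1/2
xs = np.linspace(-4, 4, 33)
samples = {b: np.array([gabor_magnitude(x, b) for x in xs]) for b in (0.0, 0.5)}
```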

Related Content

We present novel analysis and algorithms for solving sparse phase retrieval and sparse principal component analysis (PCA) with convex lifted matrix formulations. The key innovation is a new mixed atomic matrix norm that, when used as regularization, promotes low-rank matrices with sparse factors. We show that convex programs with this atomic norm as a regularizer provide near-optimal sample complexity and error rate guarantees for sparse phase retrieval and sparse PCA. While we do not know how to solve the convex programs exactly with an efficient algorithm, for the phase retrieval case we carefully analyze the program and its dual and thereby derive a practical heuristic algorithm. We show empirically that this practical algorithm performs similarly to existing state-of-the-art algorithms.
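
An illustrative sketch of a lifted convex program for sparse phase retrieval in the spirit described above, using a simple trace plus elementwise-$\ell_1$ surrogate regularizer rather than the paper's mixed atomic norm; the problem sizes and the use of cvxpy/SCS are assumptions for demonstration only.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, s, m = 20, 3, 80
x = np.zeros(n); x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n))                  # real Gaussian measurement vectors
b = (A @ x) ** 2                                 # phaseless measurements |<a_i, x>|^2

X = cp.Variable((n, n), PSD=True)                # lifted variable X ~ x x^T
lam = 0.1
objective = cp.Minimize(cp.trace(X) + lam * cp.sum(cp.abs(X)))
constraints = [cp.sum(cp.multiply(np.outer(a, a), X)) == bi for a, bi in zip(A, b)]
cp.Problem(objective, constraints).solve(solver=cp.SCS)

# extract a rank-one estimate of x (up to a global sign) from the leading eigenvector
w, V = np.linalg.eigh(X.value)
x_hat = np.sqrt(max(w[-1], 0)) * V[:, -1]
```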

Quantum state tomography (QST) for reconstructing pure states requires exponentially increasing resources and measurements with the number of qubits when state-of-the-art quantum compressive sensing (CS) methods are used. In this article, QST reconstruction for any pure state composed of the superposition of $K$ different computational basis states of $n$ qubits in a specific measurement set-up, i.e., denoted as $K$-sparse, is achieved without any initial knowledge and with quantum polynomial-time complexity of resources, based on the assumption of the existence of polynomial-size quantum circuits for implementing exponentially large powers of a specially designed unitary operator. The algorithm includes $\mathcal{O}(2 \, / \, \vert c_{k}\vert^2)$ repetitions of the conventional phase estimation algorithm, depending on the probability $\vert c_{k}\vert^2$ of the least probable basis state in the superposition, and $\mathcal{O}(d \, K \,(\log K)^c)$ measurement settings with conventional quantum CS algorithms, independent of the number of qubits while dependent on $K$ for constants $c$ and $d$. The quantum phase estimation algorithm is exploited based on the favorable eigenstructure of the designed operator to represent any pure state as a superposition of its eigenvectors. A linear optical set-up is presented for realizing the special unitary operator, composed of beam splitters and phase shifters, where the propagation paths of a single photon are tracked with which-path detectors. A quantum circuit implementation is provided using only CNOT gates, phase shifters and $- \pi \, / \, 2$ rotation gates around the X-axis of the Bloch sphere, i.e., $R_{X}(- \pi \, / \, 2)$, allowing it to be realized on NISQ devices. Open problems are discussed regarding the existence of the unitary operator and its practical circuit implementation.
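
A small numerical illustration of the textbook phase estimation primitive invoked above (not the paper's optical set-up or circuit construction; the phase value and register size are made-up assumptions): the outcome statistics of quantum phase estimation for an eigenvalue $e^{2\pi i \phi}$ using $t$ counting qubits.

```python
import numpy as np

def qpe_probabilities(phi, t):
    """Probability of each t-bit outcome y when estimating the phase phi in [0, 1)."""
    N = 2 ** t
    x = np.arange(N)
    # counting register before the inverse QFT: amplitudes (1/sqrt(N)) * e^{2 pi i x phi}
    amps_time = np.exp(2j * np.pi * x * phi) / np.sqrt(N)
    # apply the inverse QFT and read off outcome amplitudes
    y = np.arange(N)
    F_inv = np.exp(-2j * np.pi * np.outer(y, x) / N) / np.sqrt(N)
    return np.abs(F_inv @ amps_time) ** 2

p = qpe_probabilities(phi=0.3, t=6)
print("most likely estimate:", np.argmax(p) / 2 ** 6)   # close to 0.3
```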

In this paper we propose a tool for high-dimensional approximation based on trigonometric polynomials where we allow only low-dimensional interactions of variables. In a general high-dimensional setting, it is already possible to deal with special sampling sets such as sparse grids or rank-1 lattices. This requires black-box access to the function, i.e., the ability to evaluate it at any point. Here, we focus on scattered data points and grouped frequency index sets along the dimensions. From there we propose a fast matrix-vector multiplication, the grouped Fourier transform, for high-dimensional grouped index sets. These transformations can be used in the previously introduced method for approximating functions with low superposition dimension based on the analysis of variance (ANOVA) decomposition, where there is a one-to-one correspondence between the ANOVA terms and our proposed groups. The method is able to dynamically detect important sets of ANOVA terms in the approximation. In this paper, we consider the involved least-squares problem and add different forms of regularization: classical Tikhonov regularization, i.e., regularized least squares, and the group lasso technique, which promotes sparsity across groups. Since the latter admits no explicit solution formula, we apply the fast iterative shrinkage-thresholding algorithm (FISTA) to obtain the minimizer. Moreover, we discuss the possibility of incorporating smoothness information into the least-squares problem. Numerical experiments in underdetermined, overdetermined, and noisy settings indicate the applicability of our algorithms. While we consider periodic functions, the idea can be directly generalized to non-periodic functions as well.
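
A minimal sketch of the group-lasso ingredient mentioned above: FISTA with a block soft-thresholding proximal operator for $\min_x \tfrac12\|Ax-b\|_2^2 + \lambda \sum_g \|x_g\|_2$. The groups below are illustrative stand-ins; in the paper they correspond to ANOVA terms, and the matrix-vector products would be realized by the grouped Fourier transform rather than a dense matrix.

```python
import numpy as np

def group_prox(x, groups, tau):
    """Block soft-thresholding: shrink each coefficient group towards zero."""
    out = x.copy()
    for g in groups:
        nrm = np.linalg.norm(x[g])
        out[g] = 0.0 if nrm <= tau else (1 - tau / nrm) * x[g]
    return out

def fista_group_lasso(A, b, groups, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ z - b)
        x_new = group_prox(z - grad / L, groups, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 12))
groups = [range(0, 4), range(4, 8), range(8, 12)]
x_true = np.concatenate([rng.standard_normal(4), np.zeros(8)])   # only the first group is active
x_hat = fista_group_lasso(A, A @ x_true, groups, lam=1.0)
```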

This paper provides a double encryption algorithm that uses the lack of invertibility of the fractional Fourier transform (FRFT) on $L^{1}$. One encryption key is a function that maps a ``good'' $L^{2}$-signal to a ``bad'' $L^{1}$-signal. The FRFT parameter describing the rotation associated with this operator on the time-frequency plane provides the other encryption key. With the help of approximate identities, such as the Abel and Gauss means of the FRFT established in \cite{CFGW}, we recover the encrypted signal on the FRFT domain. This design of an encryption algorithm seems new even when using the classical Fourier transform. Finally, the feasibility of the new strategy is verified by simulation and audio examples.
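
A toy sketch of the two-key pipeline only, with illustrative assumptions throughout: a discrete surrogate of the FRFT is taken here as a fractional power of the unitary DFT matrix, which is neither the canonical discrete FRFT nor the continuous transform analysed in the paper, and the function key is a simple pointwise re-weighting rather than an $L^2 \to L^1$ mapping.

```python
import numpy as np
from scipy.linalg import dft, fractional_matrix_power

N = 128
F = dft(N, scale='sqrtn')                       # unitary DFT matrix
alpha = 0.37                                    # rotation key (fractional order)
F_alpha = fractional_matrix_power(F, alpha)     # one possible discrete "FRFT" (unitary)

t = np.linspace(0, 1, N, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)
key_fn = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * t)  # function key (kept pointwise invertible here)

encrypted = F_alpha @ (key_fn * signal)
# F_alpha is unitary, so its conjugate transpose undoes it; then divide out the function key
decrypted = (F_alpha.conj().T @ encrypted) / key_fn
print("max reconstruction error:", np.max(np.abs(decrypted.real - signal)))
```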

The California Innocence Project (CIP), a clinical law school program aiming to free wrongfully convicted prisoners, evaluates thousands of pieces of mail containing new requests for assistance together with the corresponding case files. Processing and interpreting this large amount of information presents a significant challenge for CIP officials, which can be successfully aided by topic modeling techniques. In this paper, we apply the non-negative matrix factorization (NMF) method and various offshoots of it to the important and previously unstudied data set compiled by CIP. We identify underlying topics of existing case files and classify request files by crime type and case status (decision type). The results uncover the semantic structure of current case files and can provide CIP officials with a general understanding of newly received case files before further examination. We also provide an exposition of popular variants of NMF together with their experimental results and discuss the benefits and drawbacks of each variant through this real-world application.
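
A generic sketch of an NMF topic-modeling pipeline of the kind described (the toy documents below are invented; the CIP case files and the specific NMF variants compared in the paper are not reproduced here).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = [
    "witness recanted testimony new evidence dna",
    "dna testing lab report exoneration request",
    "sentencing appeal denied habeas petition filed",
    "habeas corpus petition appeal court decision",
]
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)                       # documents x terms matrix

model = NMF(n_components=2, init="nndsvd", random_state=0)
W = model.fit_transform(X)                          # document-topic weights
H = model.components_                               # topic-term weights

terms = tfidf.get_feature_names_out()
for k, topic in enumerate(H):
    top = topic.argsort()[::-1][:4]
    print(f"topic {k}:", [terms[i] for i in top])
```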

We study the problem of multi-compression and reconstruction of a stochastic signal observed by several independent sensors (or compressors) that transmit compressed information to a fusion center. The key aspect of this problem is to find models of the sensors and fusion center that are optimized in the sense of an error minimization under a certain criterion, such as the mean square error (MSE). A novel technique to solve this problem is developed. The novelty is as follows. First, the multi-compressors are non-linear and modeled using second-degree polynomials. This may increase the accuracy of the signal estimation through optimization in a higher-dimensional parameter space compared to the linear case. Second, the required models are determined by a method based on a combination of the second-degree transform (SDT) with the maximum block improvement (MBI) method and the generalized rank-constrained matrix approximation. This allows us to exploit the advantages of known methods to further increase the estimation accuracy of the source signal. Third, the proposed method is justified in terms of pseudo-inverse matrices. As a result, the models of the compressors and fusion center always exist and are numerically stable. In other words, the proposed models may provide compression, de-noising and reconstruction of distributed signals in cases when known methods either are not applicable or may produce larger associated errors.
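
An illustrative numpy sketch of the linear, rank-constrained building block underlying such schemes (a reduced-rank Wiener-type estimator obtained from a rank-constrained matrix approximation); the paper's second-degree polynomial compressors, the MBI-based optimization and the multi-sensor fusion structure are not reproduced here, and all dimensions below are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
n_x, n_y, T, r = 8, 10, 5000, 3
M = rng.standard_normal((n_y, n_x))
X = rng.standard_normal((n_x, T))                  # zero-mean source signal samples
Y = M @ X + 0.1 * rng.standard_normal((n_y, T))    # noisy observations at a sensor

Rxy = X @ Y.T / T
Ryy = Y @ Y.T / T
w, V = np.linalg.eigh(Ryy)
Ryy_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T

# best rank-r linear estimator F = [Rxy Ryy^{-1/2}]_r Ryy^{-1/2}, via a truncated SVD
U, s, Vt = np.linalg.svd(Rxy @ Ryy_inv_sqrt, full_matrices=False)
F = (U[:, :r] * s[:r]) @ Vt[:r] @ Ryy_inv_sqrt

X_hat = F @ Y
print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```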

The Fourier extension method, also known as the Fourier continuation method, is a method for approximating non-periodic functions on an interval using truncated Fourier series with period larger than the interval on which the function is defined. When the function being approximated is known at only finitely many points, the approximation is constructed as a projection based on this discrete set of points. In this paper we address the issue of estimating the absolute error in the approximation. The error can be expressed in terms of a system of discrete orthogonal polynomials on an arc of the unit circle, and these polynomials are then evaluated asymptotically using Riemann--Hilbert methods.
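
A minimal sketch of a discrete Fourier extension/continuation approximation of the kind analysed above (illustrative target function and parameters; the Riemann-Hilbert error analysis of the paper is not reproduced): a truncated Fourier series of period $T = 2$ is fitted to scattered samples on $[0, 1]$ by least squares.

```python
import numpy as np

f = lambda x: np.exp(x) * np.sin(3 * x)                     # non-periodic target on [0, 1]
x = np.sort(np.random.default_rng(3).uniform(0, 1, 80))     # scattered sample points
T, N = 2.0, 15                                              # extension period and truncation order

k = np.arange(-N, N + 1)
A = np.exp(2j * np.pi * np.outer(x, k) / T)                 # collocation matrix
c, *_ = np.linalg.lstsq(A, f(x), rcond=None)                # least-squares Fourier extension

x_test = np.linspace(0, 1, 500)
approx = np.exp(2j * np.pi * np.outer(x_test, k) / T) @ c
print("max abs error:", np.max(np.abs(approx.real - f(x_test))))
```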

The question of how individual patient data from cohort studies or historical clinical trials can be leveraged for designing more powerful, or smaller yet equally powerful, clinical trials becomes increasingly important in the era of digitalisation. Today, traditional statistical analysis approaches may seem questionable to practitioners in light of ubiquitous historical covariate information. Several methodological developments aim at incorporating historical information in the design and analysis of future clinical trials, most importantly Bayesian information borrowing, propensity score methods, stratification, and covariate adjustment. Recently, adjusting the analysis with respect to a prognostic score obtained from some machine learning procedure applied to historical data has been suggested, and we study the potential of this approach for randomised clinical trials. In an idealised situation of a normal outcome in a two-arm trial with 1:1 allocation, we derive a simple sample size reduction formula as a function of two criteria characterising the prognostic score: (1) the coefficient of determination $R^2$ on historical data and (2) the correlation $\rho$ between the estimated and the true unknown prognostic scores. While maintaining the same power, the original total sample size $n$ planned for the unadjusted analysis reduces to $(1 - R^2 \rho^2) \times n$ in an adjusted analysis. Robustness in less ideal situations was assessed empirically. We conclude that there is potential for substantially more powerful or smaller trials, but only when prognostic scores can be accurately estimated.
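
A direct application of the sample size formula quoted above; the numerical values are invented purely for illustration.

```python
def adjusted_sample_size(n_unadjusted, r_squared, rho):
    """Total sample size of the prognostic-score-adjusted trial at equal power: (1 - R^2 rho^2) * n."""
    return (1.0 - r_squared * rho ** 2) * n_unadjusted

# e.g. a hypothetical trial planned with n = 300, a prognostic score with R^2 = 0.5 on
# historical data and correlation rho = 0.9 with the true prognostic score:
print(adjusted_sample_size(300, r_squared=0.5, rho=0.9))   # 178.5, i.e. about 179 patients
```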

We study the $c$-approximate near neighbor problem under the continuous Fr\'echet distance: Given a set of $n$ polygonal curves with $m$ vertices, a radius $\delta > 0$, and a parameter $k \leq m$, we want to preprocess the curves into a data structure that, given a query curve $q$ with $k$ vertices, either returns an input curve with Fr\'echet distance at most $c\cdot \delta$ to $q$, or returns that there exists no input curve with Fr\'echet distance at most $\delta$ to $q$. We focus on the case where the input and the queries are one-dimensional polygonal curves -- also called time series -- and we give a comprehensive analysis for this case. We obtain new upper bounds that provide different tradeoffs between approximation factor, preprocessing time, and query time. Our data structures improve upon the state of the art in several ways. We show that for any $0 < \varepsilon \leq 1$ an approximation factor of $(1+\varepsilon)$ can be achieved within the same asymptotic time bounds as the previously best result for $(2+\varepsilon)$. Moreover, we show that an approximation factor of $(2+\varepsilon)$ can be obtained by using preprocessing time and space $O(nm)$, which is linear in the input size, and query time in $O(\frac{1}{\varepsilon})^{k+2}$, where the previously best result used preprocessing time in $n \cdot O(\frac{m}{\varepsilon k})^k$ and query time in $O(1)^k$. We complement our upper bounds with matching conditional lower bounds based on the Orthogonal Vectors Hypothesis. Interestingly, some of our lower bounds already hold for any super-constant value of $k$. This is achieved by proving hardness of a one-sided sparse version of the Orthogonal Vectors problem as an intermediate problem, which we believe to be of independent interest.
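
For orientation only, a textbook dynamic program for the *discrete* Fréchet distance between two one-dimensional time series; the paper concerns the continuous Fréchet distance and near-neighbor data structures, which this simple quadratic-time computation does not capture, and the example series are made up.

```python
from functools import lru_cache

def discrete_frechet(p, q):
    @lru_cache(maxsize=None)
    def d(i, j):
        cost = abs(p[i] - q[j])                  # one-dimensional ground distance
        if i == 0 and j == 0:
            return cost
        if i == 0:
            return max(d(0, j - 1), cost)
        if j == 0:
            return max(d(i - 1, 0), cost)
        return max(min(d(i - 1, j), d(i - 1, j - 1), d(i, j - 1)), cost)
    return d(len(p) - 1, len(q) - 1)

print(discrete_frechet((0.0, 1.0, 0.0, 2.0), (0.0, 1.5, 2.0)))
```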

Several recent applications of optimal transport (OT) theory to machine learning have relied on regularization, notably entropy and the Sinkhorn algorithm. Because matrix-vector products are pervasive in the Sinkhorn algorithm, several works have proposed to \textit{approximate} kernel matrices appearing in its iterations using low-rank factors. Another route lies instead in imposing low-rank constraints on the feasible set of couplings considered in OT problems, with no approximations of cost nor kernel matrices. This route was first explored by Forrow et al., 2018, who proposed an algorithm tailored for the squared Euclidean ground cost, using a proxy objective that can be solved through the machinery of regularized 2-Wasserstein barycenters. Building on this, we introduce in this work a generic approach that aims at solving, in full generality, the OT problem under low-rank constraints with arbitrary costs. Our algorithm relies on an explicit factorization of low-rank couplings as a product of \textit{sub-coupling} factors linked by a common marginal; similar to an NMF approach, we alternately update these factors. We prove the non-asymptotic stationary convergence of this algorithm and illustrate its efficiency on benchmark experiments.
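
For background only, a minimal sketch of the entropic OT / Sinkhorn baseline referred to above, in which matrix-vector products with the kernel $K$ dominate the cost; the paper's low-rank coupling algorithm itself is not reproduced, and the histograms, cost and regularization strength below are illustrative assumptions.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, iters=500):
    """Entropically regularized OT between histograms a and b with cost matrix C."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                       # the matrix-vector products
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]          # transport plan diag(u) K diag(v)

n, m = 50, 60
x, y = np.linspace(0, 1, n), np.linspace(0, 1, m)
C = (x[:, None] - y[None, :]) ** 2              # squared Euclidean ground cost
a, b = np.full(n, 1 / n), np.full(m, 1 / m)
P = sinkhorn(a, b, C)
print(P.sum(axis=1)[:3], P.sum(axis=0)[:3])     # marginals close to a and b
```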
