
Recovering a signal (function) from finitely many binary or Fourier samples is one of the core problems in modern medical imaging, and by now there exists a plethora of methods for recovering a signal from such samples. Examples of methods which can utilise wavelet reconstruction include generalised sampling, infinite-dimensional compressive sensing, and the parameterised-background data-weak (PBDW) method. However, for any of these methods to be applied in practice, accurate and fast modelling of an $N \times M$ section of the infinite-dimensional change-of-basis matrix between the sampling basis (Fourier or Walsh-Hadamard samples) and the wavelet reconstruction basis is paramount. In this work, we derive an algorithm which bypasses the $NM$ storage requirement and the $\mathcal{O}(NM)$ computational cost of matrix-vector multiplication with this matrix when using Walsh-Hadamard samples and wavelet reconstruction. The proposed algorithm computes the matrix-vector multiplication in $\mathcal{O}(N\log N)$ operations and has a storage requirement of $\mathcal{O}(2^q)$, where $N = 2^{dq} M$ (usually $q \in \{1,2\}$) and $d \in \{1,2\}$ is the dimension. As matrix-vector multiplication is the computational bottleneck for the iterative algorithms used by the aforementioned reconstruction methods, the proposed algorithm considerably speeds up the reconstruction of wavelet coefficients from Walsh-Hadamard samples.
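To illustrate the kind of fast transform that makes such an $\mathcal{O}(N\log N)$ cost attainable, the sketch below is a generic in-place fast Walsh-Hadamard transform in Python. It is not the paper's algorithm (which additionally handles the change of basis to wavelets), only a minimal example of avoiding the dense matrix entirely:

```python
import numpy as np

def fwht(x):
    """In-place fast Walsh-Hadamard transform (natural/Hadamard ordering).

    Runs in O(N log N) time with O(1) extra storage, in contrast to the
    O(N^2) cost of multiplying by a dense Hadamard matrix.
    """
    a = np.asarray(x, dtype=float).copy()
    n = a.size
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                # Butterfly: combine entries h apart, as in the FFT.
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

# Round trip: the Hadamard matrix is, up to scaling by N, its own inverse.
x = np.random.randn(8)
assert np.allclose(fwht(fwht(x)) / 8, x)
```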

Related Content

We study the computational complexity of deep ReLU (Rectified Linear Unit) neural networks for the approximation of functions from the H\"older-Zygmund space of mixed smoothness defined on the $d$-dimensional unit cube when the dimension $d$ may be very large. The approximation error is measured in the norm of an isotropic Sobolev space. For every function $f$ from the H\"older-Zygmund space of mixed smoothness, we explicitly construct a deep ReLU neural network having an output that approximates $f$ with a prescribed accuracy $\varepsilon$, and prove tight dimension-dependent upper and lower bounds on the computational complexity of this approximation, characterized as the size and the depth of this deep ReLU neural network, explicitly in $d$ and $\varepsilon$. The proofs of these results rely, in particular, on approximation by sparse-grid sampling recovery based on the Faber series.
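A key fact behind Faber-series constructions is that the Faber (hat) basis functions are exactly representable by tiny one-layer ReLU networks, so a truncated Faber expansion translates directly into a network architecture. A minimal sketch of this standard observation (not the paper's explicit construction):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    """The Faber/Schauder hat function on [0, 1], written as a ReLU net:
    hat(x) = 2*relu(x) - 4*relu(x - 1/2) + 2*relu(x - 1)."""
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

x = np.linspace(0, 1, 5)
print(hat(x))  # [0.  0.5 1.  0.5 0. ] -- the piecewise-linear tent
```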

The null space of the $k$-th order Laplacian $\mathbf{\mathcal L}_k$, known as the {\em $k$-th homology vector space}, encodes the non-trivial topology of a manifold or a network. Understanding the structure of the homology embedding can thus reveal geometric or topological information about the data. The study of the null space embedding of the graph Laplacian $\mathbf{\mathcal L}_0$ has spurred new research and applications, such as spectral clustering algorithms with theoretical guarantees and estimators for the Stochastic Block Model. In this work, we investigate the geometry of the $k$-th homology embedding and focus on cases reminiscent of spectral clustering. Namely, we analyze the {\em connected sum} of manifolds as a perturbation to the direct sum of their homology embeddings. We propose an algorithm to factorize the homology embedding into subspaces corresponding to a manifold's simplest topological components. The proposed framework is applied to the {\em shortest homologous loop detection} problem, a problem known to be NP-hard in general. Our spectral loop detection algorithm scales better than existing methods and is effective on diverse data such as point clouds and images.
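For the $k = 0$ case the statement is concrete: the multiplicity of the zero eigenvalue of the graph Laplacian equals the number of connected components, and the null-space embedding separates them. A small illustration with scipy (a generic spectral computation, not the paper's factorization algorithm):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian

# Two disjoint triangles: the Laplacian L_0 has a 2-dimensional null space,
# one dimension per connected component.
A = sp.block_diag([np.ones((3, 3)) - np.eye(3)] * 2).toarray()
L = laplacian(A)

eigvals, eigvecs = np.linalg.eigh(L)
null_dim = np.sum(np.abs(eigvals) < 1e-10)
print(null_dim)  # 2: the 0-th homology embedding separates the components
```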

Vessel segmentation is an essential task in many clinical applications. Although supervised methods have achieved state-of-the-art performance, acquiring expert annotation is laborious and mostly limited to two-dimensional datasets with small sample sizes. In contrast, unsupervised methods rely on handcrafted features to detect tube-like structures such as vessels. However, those methods require complex pipelines involving several hyper-parameters and design choices, rendering the procedure sensitive, dataset-specific, and not generalizable. We propose a self-supervised method with a limited number of hyper-parameters that is generalizable across modalities. Our method uses tube-like structure properties, such as connectivity, profile consistency, and bifurcation, to introduce inductive bias into a learning algorithm. To model those properties, we generate a vector field that we refer to as a flow. Our experiments on various public datasets in 2D and 3D show that our method performs better than unsupervised methods while learning useful transferable features from unlabeled data. Unlike generic self-supervised methods, ours learns vessel-relevant features that transfer to supervised approaches, which is essential when the amount of annotated data is limited.

Riesz potentials are well-known objects of study in the theory of singular integrals that have been the subject of recent, increased interest from the numerical analysis community due to their connections with fractional Laplace problems and their proposed use in certain domain decomposition methods. While the L$^p$-mapping properties of Riesz potentials on flat geometries are well-established, comparable results on rougher geometries for Sobolev spaces are very scarce. In this article, we study the continuity properties of the surface Riesz potential generated by the $1/\sqrt{x}$ singular kernel on a polygonal domain $\Omega \subset \mathbb{R}^2$. We prove that this surface Riesz potential maps L$^{2}(\partial\Omega)$ into H$^{+1/2}(\partial\Omega)$. Our proof is based on a careful analysis of the Riesz potential in the neighbourhood of the corners of the domain $\Omega$. The main tool we use for this corner analysis is the Mellin transform, which can be seen as a counterpart of the Fourier transform adapted to corner geometries.
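For reference, the Mellin transform is the standard integral transform on the half-line; its relevance to corners is that dilations become multiplications on the transform side, just as the Fourier transform turns translations into modulations. In the usual normalization,
\[
  \mathcal{M}f(s) \;=\; \int_0^\infty x^{s-1} f(x)\,\mathrm{d}x,
  \qquad
  \mathcal{M}\bigl[f(\lambda\,\cdot)\bigr](s) \;=\; \lambda^{-s}\,\mathcal{M}f(s)
  \quad (\lambda > 0),
\]
so operators that commute with dilations about a corner act diagonally after the transform, which is the property one typically exploits in such corner analyses.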

Low-rank tensor decomposition generalizes low-rank matrix approximation and is a powerful technique for discovering low-dimensional structure in high-dimensional data. In this paper, we study Tucker decompositions and use tools from randomized numerical linear algebra called ridge leverage scores to accelerate the core tensor update step in the widely used alternating least squares (ALS) algorithm. Updating the core tensor, a severe bottleneck in ALS, is a highly structured ridge regression problem where the design matrix is a Kronecker product of the factor matrices. We show how to use approximate ridge leverage scores to construct a sketched instance for any ridge regression problem such that the solution vector for the sketched problem is a $(1+\varepsilon)$-approximation to the original instance. Moreover, we show that classical leverage scores suffice as an approximation, which then allows us to exploit the Kronecker structure and update the core tensor in time that depends predominantly on the rank and the sketching parameters (i.e., sublinear in the size of the input tensor). We also give upper bounds for ridge leverage scores as rows are removed from the design matrix (e.g., if the tensor has missing entries), and we demonstrate the effectiveness of our approximate ridge regression algorithm for large, low-rank Tucker decompositions on both synthetic and real-world data.
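The ridge leverage scores driving such sketches have a simple closed form: for a design matrix $A$ with rows $a_i$ and ridge parameter $\lambda$, the $i$-th score is $a_i^\top (A^\top A + \lambda I)^{-1} a_i$, and rows are sampled proportionally to these scores. A dense, illustrative computation (ignoring the Kronecker structure the paper exploits):

```python
import numpy as np

def ridge_leverage_scores(A, lam):
    """Ridge leverage scores l_i = a_i^T (A^T A + lam*I)^{-1} a_i.

    Equivalently, with the thin SVD A = U S V^T,
    l_i = sum_j (s_j^2 / (s_j^2 + lam)) * U[i, j]^2.
    """
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return (U**2) @ (s**2 / (s**2 + lam))

rng = np.random.default_rng(0)
A, lam = rng.standard_normal((100, 10)), 1.0
scores = ridge_leverage_scores(A, lam)

# Sample rows with probability proportional to the scores, reweighting
# each sampled row by 1/sqrt(m * p_i) to keep the sketch unbiased.
p = scores / scores.sum()
m = 30
idx = rng.choice(A.shape[0], size=m, p=p)
SA = A[idx] / np.sqrt(m * p[idx, None])
```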

We consider parameter estimation in distributed networks, where each sensor in the network observes an independent sample from an underlying distribution and has $k$ bits to communicate its sample to a centralized processor which computes an estimate of a desired parameter. We develop lower bounds on the minimax risk of estimating the underlying parameter for a large class of losses and distributions. Our results show that under mild regularity conditions, the communication constraint reduces the effective sample size by a factor of $d$ when $k$ is small, where $d$ is the dimension of the estimated parameter. Furthermore, this penalty decreases at most exponentially with increasing $k$, which is the case for some models, e.g., estimating high-dimensional distributions. For other models, however, we show that the sample size reduction is remediated only linearly with increasing $k$, e.g., when some sub-Gaussian structure is available. We apply our results to the distributed setting with the product Bernoulli model, the multinomial model, Gaussian location models, and logistic regression, which recovers or strengthens existing results. Our approach deviates significantly from existing approaches for developing information-theoretic lower bounds for communication-efficient estimation. We circumvent the need for the strong data processing inequalities used in prior work and develop a geometric approach which builds on a new representation of the communication constraint. This approach allows us to strengthen and generalize existing results with simpler and more transparent proofs.

We study the problem of deciding the reconfigurability of target sets of a graph. Given a graph $G$ with vertex thresholds $\tau$, consider a dynamic process in which a vertex $v$ becomes activated once at least $\tau(v)$ of its neighbors are activated. A vertex set $S$ is called a target set if all vertices of $G$ would be activated when initially activating the vertices of $S$. In the Target Set Reconfiguration problem, given two target sets $X$ and $Y$ of the same size, we are required to determine whether $X$ can be transformed into $Y$ by repeatedly swapping one vertex in the current set with another vertex not in the current set, while preserving every intermediate set as a target set. In this paper, we investigate the complexity of Target Set Reconfiguration in restricted cases. On the hardness side, we prove that Target Set Reconfiguration is PSPACE-complete on bipartite planar graphs of degree $3$ or $4$ and threshold $2$, on bipartite $3$-regular graphs of threshold $1$ or $2$, and on split graphs, which is in contrast to the fact that the special case Vertex Cover Reconfiguration is in P for split graphs. On the positive side, we present polynomial-time algorithms for Target Set Reconfiguration on graphs of maximum degree $2$ and on trees. The latter result can be thought of as a generalization of the corresponding result for Vertex Cover Reconfiguration.
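The activation process itself is easy to state operationally: repeatedly activate any vertex whose number of active neighbors meets its threshold until a fixed point is reached; $S$ is a target set iff that fixed point is all of $V$. A small sketch of this check (a hypothetical helper for illustration, not code from the paper):

```python
def is_target_set(adj, tau, seed):
    """Return True if activating `seed` eventually activates every vertex.

    adj:  dict mapping each vertex to a list of its neighbors.
    tau:  dict mapping each vertex to its activation threshold.
    seed: set of initially activated vertices.
    """
    active = set(seed)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in active and sum(u in active for u in adj[v]) >= tau[v]:
                active.add(v)
                changed = True
    return len(active) == len(adj)

# A path a-b-c with threshold 1 everywhere: {a} alone activates the path.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
tau = {"a": 1, "b": 1, "c": 1}
print(is_target_set(adj, tau, {"a"}))  # True
```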

In this paper we develop a plane wave type method for the discretization of homogeneous Helmholtz equations with variable wave numbers. In the proposed method, local basis functions (on each element) are constructed from the geometric optics ansatz such that they approximately satisfy a homogeneous Helmholtz equation without boundary conditions. More precisely, each basis function is expressed as the product of an exponential plane wave function and a polynomial function, where the phase function in the exponential factor approximately satisfies the eikonal equation and the polynomial factor is recursively determined by transport equations associated with the considered Helmholtz equation. We prove that the resulting plane wave spaces possess the same high-order $h$-approximation properties as the standard plane wave spaces (which are available only in the case of constant wave numbers). We apply the proposed plane wave spaces to the discretization of nonhomogeneous Helmholtz equations with variable wave numbers and establish the corresponding error estimates for their finite element solutions. We report some numerical results to illustrate the efficiency of the proposed method.
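For orientation, the geometric optics (WKB) ansatz behind such basis functions takes the standard form below for $\Delta u + \kappa^2 n^2(x)\, u = 0$: the first relation is the eikonal equation for the phase and the second is the leading-order transport equation for the amplitude (standard WKB facts; the paper's recursion determines the higher-order polynomial corrections):
\[
  u(x) \approx e^{\mathrm{i}\kappa\,\phi(x)} \sum_{j \ge 0} a_j(x)\,(\mathrm{i}\kappa)^{-j},
  \qquad
  |\nabla\phi|^2 = n^2(x),
  \qquad
  2\,\nabla\phi\cdot\nabla a_0 + (\Delta\phi)\,a_0 = 0.
\]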

In this paper we present and analyse a high-accuracy method for computing the wave directions defined in the geometric optics ansatz of the Helmholtz equation with variable wave number. We then define an "adaptive" plane wave space of small dimension, in which each plane wave basis function is determined by such an approximate wave direction. We establish a best $L^2$ approximation estimate for this plane wave space applied to analytic solutions of homogeneous Helmholtz equations with large wave numbers, and report some numerical results to illustrate the efficiency of the proposed method.

We present and analyze a momentum-based gradient method for training linear classifiers with an exponentially-tailed loss (e.g., the exponential or logistic loss), which maximizes the classification margin on separable data at a rate of $\widetilde{\mathcal{O}}(1/t^2)$. This contrasts with a rate of $\mathcal{O}(1/\log(t))$ for standard gradient descent and $\mathcal{O}(1/t)$ for normalized gradient descent. The momentum-based method is derived via the convex dual of the maximum-margin problem, specifically by applying Nesterov acceleration to this dual, which yields a simple and intuitive method in the primal. This dual view can also be used to derive a stochastic variant, which performs adaptive non-uniform sampling via the dual variables.
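As a point of reference for the primal view, the sketch below runs plain gradient descent with Nesterov momentum on the exponential loss for a linear classifier over separable toy data. It is an illustrative baseline under assumed step sizes, not the paper's dual-derived update:

```python
import numpy as np

rng = np.random.default_rng(1)
# Linearly separable toy data: labels given by a ground-truth direction.
X = rng.standard_normal((200, 5))
y = np.sign(X @ rng.standard_normal(5))

def grad(w):
    # Gradient of the exponential loss (1/n) * sum_i exp(-y_i <w, x_i>).
    m = y * (X @ w)
    return -(X * (y * np.exp(-m))[:, None]).mean(axis=0)

w = np.zeros(5)
v = np.zeros(5)
eta, beta = 0.1, 0.9  # assumed step size and momentum, untuned
for t in range(2000):
    v = beta * v - eta * grad(w + beta * v)  # Nesterov look-ahead step
    w = w + v

margin = np.min(y * (X @ w)) / np.linalg.norm(w)
print(f"normalized margin: {margin:.3f}")  # grows toward the max margin
```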
