
The recovery of a signal from the magnitudes of a transformation, such as the Fourier transform, is known as the phase retrieval problem and is of great relevance in various fields of engineering and applied physics. In this paper, we present a fast inertial/momentum-based algorithm for the phase retrieval problem, and we prove a convergence guarantee both for the new algorithm and for the Fast Griffin-Lim algorithm (FGLA), whose convergence had remained unproven for the past decade. In the final section, we compare the new algorithm on short-time Fourier transform (STFT) phase retrieval with the Griffin-Lim algorithm, FGLA, and other iterative algorithms typically used for this type of problem.
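
For context, the (fast) Griffin-Lim iteration alternates two projections, onto the set of spectrograms that are consistent STFTs of some signal and onto the set matching the measured magnitude, with FGLA adding an inertial/momentum step on top. Below is a minimal NumPy/SciPy sketch of that iteration (not the new algorithm of the paper); the window parameters, momentum value alpha, and iteration count are illustrative assumptions.

    import numpy as np
    from scipy.signal import stft, istft

    def fgla(target_mag, alpha=0.99, n_iter=200, fs=1.0, nperseg=256):
        """Fast Griffin-Lim sketch: recover a signal whose STFT magnitude matches
        target_mag. Setting alpha = 0 gives the plain Griffin-Lim algorithm.
        Assumes target_mag came from scipy.signal.stft with the same fs/nperseg,
        so the transform round-trips with consistent shapes."""
        rng = np.random.default_rng(0)
        c = target_mag * np.exp(1j * rng.uniform(0, 2 * np.pi, target_mag.shape))
        t_prev = c
        for _ in range(n_iter):
            # project onto consistent spectrograms: inverse STFT, then STFT again
            _, x = istft(c, fs=fs, nperseg=nperseg)
            _, _, S = stft(x, fs=fs, nperseg=nperseg)
            # project onto the measured-magnitude set: keep the phase, impose |STFT|
            t = target_mag * np.exp(1j * np.angle(S))
            # inertial/momentum step on top of the alternating projections
            c = t + alpha * (t - t_prev)
            t_prev = t
        _, x = istft(t_prev, fs=fs, nperseg=nperseg)
        return x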

Related content


We develop the no-propagate algorithm for sampling the linear response of random dynamical systems, which are non-uniformly hyperbolic deterministic systems perturbed by noise with a smooth density. We first derive a Monte-Carlo-type formula and then the algorithm, which differs from ensemble (stochastic gradient) algorithms, finite-element algorithms, and fast-response algorithms; it does not involve the propagation of vectors or covectors, and only the density of the noise is differentiated, so the formula is not cursed by gradient explosion, dimensionality, or non-hyperbolicity. We demonstrate our algorithm on a tent map perturbed by noise and on a chaotic neural network with 51 layers $\times$ 9 neurons. By itself, this algorithm approximates the linear response of non-hyperbolic deterministic systems, with an additional error proportional to the noise. We also discuss the potential of using this algorithm as a component of a larger algorithm with smaller error.
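
As a rough illustration of the "differentiate only the noise density" idea, and not the paper's no-propagate formula itself, the sketch below estimates the parameter sensitivity of a stationary average of a noisy tent map with a generic score-function (likelihood-ratio) Monte-Carlo estimator: no tangent vectors are propagated, and only the Gaussian transition density is differentiated. The map, observable, noise level, and truncation length W are illustrative assumptions.

    import numpy as np

    def tent(x, gamma):
        y = x % 1.0                            # wrap the deterministic part into [0, 1)
        return gamma * (1.0 - 2.0 * np.abs(y - 0.5))

    def dtent_dgamma(x):
        y = x % 1.0
        return 1.0 - 2.0 * np.abs(y - 0.5)

    def linear_response_score(gamma=0.9, sigma=0.05, n_steps=200_000, W=30, seed=0):
        """Estimate d/dgamma of the stationary mean of x_{n+1} = tent(x_n) + sigma*xi_n."""
        rng = np.random.default_rng(seed)
        x = 0.3
        xs, scores = np.empty(n_steps), np.empty(n_steps)
        for n in range(n_steps):
            xi = rng.normal()
            x_new = tent(x, gamma) + sigma * xi
            # d/dgamma log p(x_{n+1} | x_n): only the noise density is differentiated
            scores[n] = xi * dtent_dgamma(x) / sigma
            xs[n], x = x_new, x_new
        phi = xs - xs.mean()                   # E[score] = 0, so centring only reduces variance
        # sum over lags: a transition j steps in the past influences the current observable
        return sum(np.mean(phi[j:] * scores[:n_steps - j]) for j in range(W))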

An increasingly common viewpoint is that protein dynamics data sets reside in a non-linear subspace of low conformational energy. Ideal data analysis tools for such data sets should therefore account for such non-linear geometry. The Riemannian geometry setting can be suitable for a variety of reasons. First, it comes with a rich structure to account for a wide range of geometries that can be modelled after an energy landscape. Second, many standard data analysis tools initially developed for data in Euclidean space can also be generalised to data on a Riemannian manifold. In the context of protein dynamics, a conceptual challenge comes from the lack of a suitable smooth manifold and the lack of guidelines for constructing a smooth Riemannian structure based on an energy landscape. In addition, computational feasibility in computing geodesics and related mappings poses a major challenge. This work addresses these challenges. The first part of the paper develops a novel local approximation technique for computing geodesics and related mappings on Riemannian manifolds in a computationally feasible manner. The second part constructs a smooth manifold of point clouds modulo rigid body group actions and a Riemannian structure that is based on an energy landscape for protein conformations. The resulting Riemannian geometry is tested on several data analysis tasks relevant for protein dynamics data. It performs exceptionally well on coarse-grained molecular dynamics simulated data. In particular, the geodesics with given start and end points approximately recover corresponding molecular dynamics trajectories for proteins that undergo relatively ordered transitions with medium-sized deformations. The Riemannian protein geometry also gives physically realistic summary statistics and retrieves the underlying dimension, even for large deformations, within seconds on a laptop.
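
As a toy illustration of computing geodesics by minimizing a discretized path energy, in a far simpler setting than the paper's manifold of point clouds modulo rigid motions, the sketch below finds a discrete geodesic between two fixed endpoints on the unit sphere by projected gradient descent; the manifold, discretization, and step size are illustrative assumptions, and the paper's local approximation technique is not reproduced here.

    import numpy as np

    def discrete_geodesic_sphere(p, q, n_pts=20, n_iter=2000, step=0.1):
        """Discrete geodesic between p and q on the unit sphere, obtained by
        minimizing the path energy sum_k ||x_{k+1} - x_k||^2 over interior points,
        re-projecting onto the sphere after each gradient step."""
        p, q = p / np.linalg.norm(p), q / np.linalg.norm(q)
        ts = np.linspace(0.0, 1.0, n_pts)[:, None]
        path = (1.0 - ts) * p + ts * q                 # chord as the initial guess
        path /= np.linalg.norm(path, axis=1, keepdims=True)
        for _ in range(n_iter):
            # gradient of the discrete path energy with respect to interior points
            grad = 2.0 * (2.0 * path[1:-1] - path[:-2] - path[2:])
            path[1:-1] -= step * grad                  # endpoints stay fixed
            path[1:-1] /= np.linalg.norm(path[1:-1], axis=1, keepdims=True)
        return path

    # e.g. the path between (1,0,0) and (0,1,0) converges to a quarter great circle
    path = discrete_geodesic_sphere(np.array([1.0, 0, 0]), np.array([0, 1.0, 0]))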

We derive a Bernstein-von Mises theorem in the context of misspecified, non-i.i.d., hierarchical models parametrized by a finite-dimensional parameter of interest. We apply our results to hierarchical models containing non-linear operators, including the squared integral operator, and to PDE-constrained inverse problems. More specifically, we consider the elliptic, time-independent Schr\"odinger equation with a parametric boundary condition and general parabolic PDEs with parametric potential and boundary constraints. Our theoretical results are complemented by numerical experiments on synthetic data sets, considering both the squared integral operator and the Schr\"odinger equation.

We prove closed-form equations for the exact high-dimensional asymptotics of a family of first-order gradient-based methods that learn an estimator (e.g. an M-estimator or a shallow neural network) from observations on Gaussian data via empirical risk minimization. This includes widely used algorithms such as stochastic gradient descent (SGD) and Nesterov acceleration. The obtained equations match those resulting from the discretization of the dynamical mean-field theory (DMFT) equations from statistical physics when applied to gradient flow. Our proof method allows us to give an explicit description of how memory kernels build up in the effective dynamics, and to include non-separable update functions, allowing for datasets with non-identity covariance matrices. Finally, we provide numerical implementations of the equations for SGD with a generic extensive batch size and with constant learning rates.
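
For orientation, the sketch below sets up the kind of dynamics these equations describe: mini-batch SGD with a constant learning rate on an empirical risk over Gaussian data with a non-identity covariance (here ridge-regularized least squares as the estimator). It simulates the dynamics directly rather than solving the DMFT-type equations, and the dimensions, covariance spectrum, batch size, and learning rate are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 200, 1000                                  # dimension and sample size
    lam, eta, batch, n_steps = 0.1, 0.05, 100, 501    # ridge penalty, learning rate, batch, steps

    # Gaussian data with a non-identity (diagonal) covariance and a planted linear teacher
    spectrum = np.linspace(0.5, 2.0, d)
    X = rng.normal(size=(n, d)) * np.sqrt(spectrum)
    w_star = rng.normal(size=d) / np.sqrt(d)
    y = X @ w_star + 0.1 * rng.normal(size=n)

    w = np.zeros(d)
    for t in range(n_steps):
        idx = rng.choice(n, size=batch, replace=False)              # extensive batch size
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch + lam * w   # empirical risk gradient
        w -= eta * grad                                             # constant learning rate
        if t % 100 == 0:
            risk = 0.5 * np.mean((X @ w - y) ** 2) + 0.5 * lam * w @ w
            print(f"step {t:4d}   empirical risk {risk:.4f}")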

Medical studies of chronic disease are often interested in the relation between longitudinal risk factor profiles and individuals' later-life disease outcomes. These profiles are typically subject to intermediate structural changes due to treatment or environmental influences. Such studies may be analysed within the joint model framework. However, current joint models consider neither structural changes in the residual variability of the risk profile nor the influence of subject-specific residual variability on the time-to-event outcome. In the present paper, we extend the joint model framework to address these two sources of heterogeneous intra-individual variability. A Bayesian approach is used to estimate the unknown parameters, and simulation studies are conducted to investigate the performance of the method. The proposed joint model is applied to the Framingham Heart Study to investigate the influence of anti-hypertensive medication on systolic blood pressure variability, together with its effect on the risk of developing cardiovascular disease. We show that anti-hypertensive medication is associated with elevated systolic blood pressure variability and that increased variability elevates the risk of developing cardiovascular disease.
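
One plausible form of such an extended joint model (an illustrative sketch under assumed notation, not necessarily the paper's exact specification) pairs a longitudinal submodel whose residual variance is subject-specific and shifts with medication status with a hazard submodel in which both the current profile value and the individual variability enter:

$y_{ij} = m_i(t_{ij}) + \varepsilon_{ij}$, with $\varepsilon_{ij} \sim N\big(0,\ \sigma_i^2 \exp\{\gamma\, M_i(t_{ij})\}\big)$, and $h_i(t) = h_0(t)\exp\big\{\alpha_1\, m_i(t) + \alpha_2 \log \sigma_i^2 + \beta^\top z_i\big\}$,

where $m_i(t)$ is the subject-specific mean blood-pressure trajectory (with random effects), $M_i(t)$ is the anti-hypertensive medication indicator, $\sigma_i^2$ is a subject-specific residual variance, $\gamma$ captures the medication-induced structural change in the residual variability, and $\alpha_1$, $\alpha_2$ measure the effects of the current profile value and of the individual variability on the hazard.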

The approach to analysing compositional data has been dominated by the use of logratio transformations, to ensure exact subcompositional coherence and, in some situations, exact isometry as well. A problem with this approach is that data zeros, found in most applications, have to be replaced to allow the logarithmic transformation. An alternative new approach, called the `chiPower' transformation, which allows data zeros, is to combine the standardization inherent in the chi-square distance in correspondence analysis with the essential elements of the Box-Cox power transformation. The chiPower transformation is justified because it defines between-sample distances that tend to logratio distances for strictly positive data as the power parameter tends to zero, and are then equivalent to transforming to logratios. For data with zeros, a value of the power can be identified that brings the chiPower transformation as close as possible to a logratio transformation, without having to substitute the zeros. Especially in the area of high-dimensional data, this alternative approach can present such a high level of coherence and isometry as to be a valid approach to the analysis of compositional data. Furthermore, in a supervised learning context, if the compositional variables serve as predictors of a response in a modelling framework, for example generalized linear models, then the power can be used as a tuning parameter in optimizing the accuracy of prediction through cross-validation. The chiPower-transformed variables have a straightforward interpretation, since they are each identified with single compositional parts, not ratios.
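
The following is a minimal sketch of one plausible reading of the chiPower transformation described above, namely a Box-Cox-type power applied to the closed (row-normalized) compositions followed by chi-square (column-mass) standardization as in correspondence analysis; the exact definition and scaling constants are given in the paper, so the formula below should be read as an assumption for illustration only.

    import numpy as np

    def chipower(X, lam=0.25):
        """Illustrative chiPower-style transform (assumed form, see note above).

        X   : (n_samples, n_parts) array of nonnegative compositional data; zeros allowed.
        lam : power parameter; as lam -> 0 (strictly positive data) the transform
              approaches a logratio-type transform."""
        P = X / X.sum(axis=1, keepdims=True)     # close the compositions (row profiles)
        c = P.mean(axis=0)                       # column masses, as in correspondence analysis
        B = (P ** lam - 1.0) / lam               # Box-Cox power transform; finite at zeros
        return B / np.sqrt(c)                    # chi-square standardization by column masses

    # the power lam can then be tuned by cross-validation when the columns serve as predictors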

Global Climate Model (GCM) tuning (calibration) is a tedious and time-consuming process, with high-dimensional input and output fields. Experts typically tune by iteratively running climate simulations with hand-picked values of tuning parameters. Many, in both the statistical and climate literature, have proposed alternative calibration methods, but most are impractical or difficult to implement. We present a practical, robust and rigorous calibration approach for the atmosphere-only model of the Department of Energy's Energy Exascale Earth System Model (E3SM) version 2. Our approach can be summarized in two main parts: (1) the training of a surrogate that predicts E3SM output in a fraction of the time compared to running E3SM, and (2) gradient-based parameter optimization. To train the surrogate, we generate a set of designed ensemble runs that span our input parameter space and use polynomial chaos expansions on a reduced output space to fit the E3SM output. We use this surrogate in an optimization scheme to identify values of the input parameters for which our model best matches gridded spatial fields of climate observations. To validate our choice of parameters, we run E3SMv2 with the optimal parameter values and compare prediction results to expert-tuned simulations across 45 different output fields. This flexible, robust, and automated approach is straightforward to implement, and we demonstrate that the resulting model output matches present-day climate observations as well as or better than the corresponding output from expert-tuned parameter values, while considering high-dimensional output and operating in a fraction of the time.
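
The two-stage structure (surrogate training on a designed ensemble, then gradient-based optimization against observations) can be illustrated with a toy sketch. Here a simple polynomial-features least-squares fit stands in for the polynomial chaos expansion, and the simulator, observation vector, parameter ranges, and ensemble size are assumed placeholders, not anything from E3SM.

    import numpy as np
    from scipy.optimize import minimize

    def simulator(theta):
        """Placeholder for an expensive model run (assumption, not E3SM)."""
        t1, t2 = theta
        return np.array([np.sin(t1) + 0.5 * t2, t1 * t2, np.cos(t2) - t1])

    rng = np.random.default_rng(1)

    # 1) designed ensemble spanning the input space, used to train the surrogate
    thetas = rng.uniform(-1.0, 1.0, size=(50, 2))
    outputs = np.array([simulator(t) for t in thetas])

    def features(t):                       # quadratic polynomial features (stand-in for PCE)
        t1, t2 = t
        return np.array([1.0, t1, t2, t1 * t2, t1 ** 2, t2 ** 2])

    Phi = np.array([features(t) for t in thetas])
    coef, *_ = np.linalg.lstsq(Phi, outputs, rcond=None)    # least-squares fit per output field
    surrogate = lambda t: features(t) @ coef

    # 2) gradient-based calibration of the parameters against (synthetic) observations
    obs = simulator(np.array([0.3, -0.4])) + 0.01 * rng.normal(size=3)
    loss = lambda t: np.sum((surrogate(t) - obs) ** 2)
    res = minimize(loss, x0=np.zeros(2), method="L-BFGS-B", bounds=[(-1, 1), (-1, 1)])
    print("calibrated parameters:", res.x)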

Data sets of multivariate normal distributions abound in many scientific areas such as diffusion tensor imaging, structure tensor computer vision, radar signal processing, and machine learning, just to name a few. In order to process those normal data sets for downstream tasks like filtering, classification or clustering, one needs to define proper notions of dissimilarities between normals and paths joining them. The Fisher-Rao distance, defined as the Riemannian geodesic distance induced by the Fisher information metric, is such a principled metric distance; however, it is not known in closed form except for a few particular cases. In this work, we first report a fast and robust method to approximate the Fisher-Rao distance between multivariate normal distributions to arbitrary precision. Second, we introduce a class of distances based on diffeomorphic embeddings of the normal manifold into a submanifold of the higher-dimensional symmetric positive-definite cone corresponding to the manifold of centered normal distributions. We show that the projective Hilbert distance on the cone yields a metric on the embedded normal submanifold, and we pull back that cone distance, with its associated straight-line Hilbert cone geodesics, to obtain a distance and smooth paths between normal distributions. Compared to the Fisher-Rao distance approximation, the pullback Hilbert cone distance is computationally light, since it requires computing only the extreme minimal and maximal eigenvalues of matrices. Finally, we show how to use those distances in clustering tasks.
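
The pullback Hilbert cone distance is easy to state concretely: embed each normal N(mu, Sigma) as a symmetric positive-definite matrix (the classical Calvo-Oller-type embedding into the (d+1)x(d+1) SPD cone) and take the Hilbert projective distance, which only needs the two extreme generalized eigenvalues. The sketch below follows that recipe; the exact embedding and normalization used in the paper may differ in details.

    import numpy as np
    from scipy.linalg import eigh

    def embed(mu, Sigma):
        """Embed N(mu, Sigma) on R^d as a (d+1)x(d+1) symmetric positive-definite matrix."""
        d = mu.shape[0]
        P = np.empty((d + 1, d + 1))
        P[:d, :d] = Sigma + np.outer(mu, mu)
        P[:d, d] = mu
        P[d, :d] = mu
        P[d, d] = 1.0
        return P

    def hilbert_cone_distance(mu1, Sigma1, mu2, Sigma2):
        """Hilbert projective distance between the embedded normals:
        log(lambda_max / lambda_min) of the generalized eigenvalues of (P1, P2)."""
        P1, P2 = embed(mu1, Sigma1), embed(mu2, Sigma2)
        lam = eigh(P1, P2, eigvals_only=True)    # eigenvalues of P2^{-1} P1, ascending
        return np.log(lam[-1]) - np.log(lam[0])

    # e.g. distance between N(0, I) and N(e_1, 2I) in dimension 3
    d = hilbert_cone_distance(np.zeros(3), np.eye(3), np.eye(3)[0], 2.0 * np.eye(3))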

We propose a Hermite spectral method for the inelastic Boltzmann equation, which makes the computation of two-dimensional periodic problems affordable on current hardware. The new algorithm is based on a Hermite expansion, where the expansion coefficients for the VHS model reduce to several summations that can be derived exactly. Moreover, a new collision model is built as a combination of the quadratic collision operator and a linearized collision operator, which helps us balance the computational cost and the accuracy. Various numerical experiments, including spatially two-dimensional simulations, demonstrate the accuracy and efficiency of this numerical scheme.
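
As a one-dimensional illustration of the Hermite expansion underlying such schemes (not of the collision-term algebra itself), the sketch below expands a distribution function in probabilists' Hermite polynomials against a Gaussian weight and reconstructs it; the test distribution, quadrature size, and truncation order are illustrative assumptions.

    import numpy as np
    from numpy.polynomial import hermite_e as He
    from math import factorial

    M = 20                                             # truncation order of the expansion
    nodes, weights = He.hermegauss(60)                 # Gauss quadrature for weight exp(-v^2/2)

    # test distribution: a shifted, slightly cooled Maxwellian f(v)
    f = lambda v: np.exp(-(v - 0.5) ** 2 / (2 * 0.8)) / np.sqrt(2 * np.pi * 0.8)
    omega = lambda v: np.exp(-v ** 2 / 2) / np.sqrt(2 * np.pi)

    # coefficients of f(v) = sum_n c_n He_n(v) omega(v), using the orthogonality
    # relation  int He_m(v) He_n(v) omega(v) dv = n! delta_{mn}
    coeffs = np.array([
        np.sum(weights * (f(nodes) / omega(nodes)) * He.hermeval(nodes, np.eye(M + 1)[n]))
        / (np.sqrt(2 * np.pi) * factorial(n))
        for n in range(M + 1)
    ])

    # reconstruct the distribution from its Hermite coefficients and check the error
    v = np.linspace(-4.0, 4.0, 9)
    f_rec = He.hermeval(v, coeffs) * omega(v)
    print("max reconstruction error:", np.max(np.abs(f_rec - f(v))))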

Operator splitting is a popular divide-and-conquer strategy for solving differential equations. Typically, the right-hand side of the differential equation is split into a number of parts that are then integrated separately. Many methods are known that split the right-hand side into two parts. This approach is limiting, however, and there are situations when 3-splitting is more natural and ultimately more advantageous. The second-order Strang operator-splitting method readily generalizes to a right-hand side split into any number of operators. It is arguably the most popular method for 3-splitting because of its efficiency, ease of implementation, and intuitive nature. Other 3-splitting methods exist, but they are less well known, and analysis and evaluation of their performance in practice are scarce. We demonstrate the effectiveness of some alternative second-order 3-splitting methods to Strang splitting on two problems: the reaction-diffusion Brusselator, which can be split into three parts that each have closed-form solutions, and the kinetic Vlasov--Poisson equations that are used in semi-Lagrangian plasma simulations. We find alternative second-order 3-operator-splitting methods that realize efficiency gains of 10\%--20\% over traditional Strang splitting. Our analysis for the practical assessment of the efficiency of operator-splitting methods includes the computational cost of the integrators and can be used in method design.
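
For reference, the palindromic (Strang) composition for a three-way split dy/dt = (A + B + C)y advances with the half-step/full-step/half-step pattern sketched below; the linear test problem, matrices, and step count are illustrative assumptions, and the same composition pattern applies when the sub-flows are nonlinear integrators.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    n = 4
    A, B, C = (0.3 * rng.normal(size=(n, n)) for _ in range(3))
    y0, h, n_steps = rng.normal(size=n), 0.01, 100

    def flow(M, t):
        return expm(t * M)                       # exact sub-flow of dy/dt = M y

    def strang3_step(y, h):
        """Second-order Strang splitting for three operators:
        half A, half B, full C, half B, half A."""
        y = flow(A, h / 2) @ y
        y = flow(B, h / 2) @ y
        y = flow(C, h) @ y
        y = flow(B, h / 2) @ y
        y = flow(A, h / 2) @ y
        return y

    y = y0.copy()
    for _ in range(n_steps):
        y = strang3_step(y, h)

    y_exact = expm(n_steps * h * (A + B + C)) @ y0
    print("splitting error:", np.linalg.norm(y - y_exact))   # O(h^2) global error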
