
In this work we propose a method to perform optimization on manifolds. We assume we are given an objective function $f$ defined on a manifold and think of it as the potential energy of a mechanical system. By adding a momentum-dependent kinetic energy we define its Hamiltonian function, which allows us to write the corresponding Hamiltonian system. We make it conformal by introducing a dissipation term: the result is the continuous model of our scheme. We solve it via splitting methods (Lie-Trotter and leapfrog), combining the RATTLE scheme, which approximates the conserved flow, with the exact dissipated flow. The result is a conformal symplectic method for constant stepsizes. We also propose an adaptive-stepsize version of it. We test it on an example, the minimization of a function defined on a sphere, and compare it with the usual gradient descent method.
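As a rough illustration of the dissipated-momentum idea on a sphere, the sketch below implements a simplified projection-based variant: the momentum is contracted by the exact dissipation factor, kicked by the Riemannian gradient, and the iterate is retracted back onto the sphere by normalization. It is not the paper's RATTLE-based conformal symplectic integrator; the quadratic objective and all parameters are illustrative choices.

```python
import numpy as np

# Toy objective on the unit sphere S^2: f(x) = x^T A x, minimized at the
# eigenvector of A with the smallest eigenvalue; A is an illustrative choice.
A = np.diag([3.0, 2.0, 1.0])
grad_f = lambda x: 2.0 * A @ x

def project_tangent(x, v):
    """Project v onto the tangent space of the unit sphere at x."""
    return v - (x @ v) * x

def dissipative_leapfrog(x, steps=500, h=0.05, gamma=1.0):
    """Damped momentum descent on the sphere (simplified sketch, not the
    paper's RATTLE-based conformal symplectic integrator)."""
    p = np.zeros_like(x)
    for _ in range(steps):
        p = np.exp(-gamma * h) * p                       # exact dissipated flow
        p = p - 0.5 * h * project_tangent(x, grad_f(x))  # half kick
        x = x + h * p                                    # drift
        x = x / np.linalg.norm(x)                        # retract onto the sphere
        p = project_tangent(x, p)                        # keep momentum tangent
        p = p - 0.5 * h * project_tangent(x, grad_f(x))  # half kick
    return x

x0 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
print(dissipative_leapfrog(x0))   # approaches (0, 0, +/-1), the minimizer of f
```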

Related Content

Vintage factor analysis is an important type of factor analysis that aims first to find a low-dimensional representation of the original data, and then to seek a rotation such that the rotated low-dimensional representation is scientifically meaningful. Perhaps the most widely used vintage factor analysis is Principal Component Analysis (PCA) followed by the varimax rotation. Despite its popularity, few theoretical guarantees are available, mainly because the varimax rotation requires solving a non-convex optimization over the set of orthogonal matrices. In this paper, we propose a deflation varimax procedure that solves for each row of an orthogonal matrix sequentially. In addition to its net computational gain and flexibility, we are able to fully establish theoretical guarantees for the proposed procedure in a broad context. Adopting this new varimax approach as the second step after PCA, we further analyze this two-step procedure under a general class of factor models. Our results show that it estimates the factor loading matrix at the optimal rate when the signal-to-noise ratio (SNR) is moderate or large. In the low SNR regime, we offer a possible improvement over PCA followed by the deflation procedure when the additive noise under the factor model is structured. The modified procedure is shown to be optimal in all SNR regimes. Our theory is valid for finite samples and allows the number of latent factors to grow with the sample size, as well as the ambient dimension to grow with, or even exceed, the sample size. Extensive simulations and real data analysis further corroborate our theoretical findings.
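For context, the classical batch varimax rotation that the deflation procedure departs from can be sketched in a few lines (Kaiser's criterion with SVD updates); the toy data matrix, number of factors, and tolerances below are illustrative, and the deflation variant itself is not implemented here.

```python
import numpy as np

def varimax(Phi, gamma=1.0, max_iter=100, tol=1e-8):
    """Classical batch varimax rotation (Kaiser's criterion, SVD updates).
    Shown for context only; the deflation varimax of the paper instead
    solves for one row of the orthogonal matrix at a time."""
    p, k = Phi.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        L = Phi @ R
        B = Phi.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0)))
        U, s, Vt = np.linalg.svd(B)
        R = U @ Vt
        d = np.sum(s)
        if d_old > 0 and d / d_old < 1 + tol:
            break
        d_old = d
    return Phi @ R, R

# Usage: rotate the top-k principal component loadings of a toy data matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
loadings = Vt[:3].T * s[:3]            # PCA loadings for k = 3 factors
rotated_loadings, R = varimax(loadings)
```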

Engineers are often faced with the decision of selecting the most appropriate model for simulating the behavior of engineered systems from a candidate set of models. Experimental monitoring data can generate significant value by supporting engineers in such decisions. Such data can be leveraged within a Bayesian model updating process, enabling the uncertainty-aware calibration of any candidate model. The model selection task can subsequently be cast as a problem of decision-making under uncertainty, where one seeks to select the model that yields an optimal balance between the reward associated with model precision, in terms of recovering target Quantities of Interest (QoI), and the cost of each model, in terms of complexity and compute time. In this work, we examine the model selection task by means of Bayesian decision theory, through the prism of the availability of models of varying refinement, and thus varying levels of fidelity. In doing so, we offer an exemplary application of this framework to the IMAC-MVUQ Round-Robin Challenge. Numerical investigations show various outcomes of model selection depending on the target QoI.
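A minimal sketch of the decision-theoretic idea is given below: each candidate model is scored by an expected utility that trades off QoI recovery against model cost. All names, numbers, and the specific utility form are hypothetical placeholders, not values or definitions from the IMAC-MVUQ challenge.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_utility(qoi_samples, qoi_target, cost, reward_scale=1.0):
    """Expected utility = reward for recovering the target QoI minus model cost;
    here the reward is the negative posterior mean-squared error on the QoI."""
    mse = np.mean((qoi_samples - qoi_target) ** 2)
    return -reward_scale * mse - cost

# Hypothetical posterior QoI samples and costs for two candidate models.
models = {
    "coarse": (rng.normal(1.10, 0.20, 5000), 0.01),   # (QoI samples, model cost)
    "fine":   (rng.normal(1.02, 0.05, 5000), 0.10),
}
target_qoi = 1.0
scores = {m: expected_utility(s, target_qoi, c) for m, (s, c) in models.items()}
best_model = max(scores, key=scores.get)
```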

The categorical Gini correlation, $\rho_g$, was proposed by Dang et al. to measure the dependence between a categorical variable, $Y$, and a numerical variable, $X$. It has been shown that $\rho_g$ has more appealing properties than existing dependence measures. In this paper, we develop the jackknife empirical likelihood (JEL) method for $\rho_g$. Confidence intervals for the Gini correlation are constructed without estimating the asymptotic variance. Adjusted and weighted JEL are explored to improve the performance of the standard JEL. Simulation studies show that our methods are competitive with existing methods in terms of coverage accuracy and confidence-interval length. The proposed methods are illustrated in applications to two real datasets.
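To fix ideas, the sketch below computes a plug-in estimate of the categorical Gini correlation in one standard form (the relative reduction of the Gini mean difference of $X$ within the categories of $Y$) and the jackknife pseudo-values to which empirical likelihood would then be applied; the EL optimization step itself is omitted, and singleton categories are only crudely guarded against.

```python
import numpy as np

def gini_mean_difference(x):
    """Sample Gini mean difference E|X1 - X2|, computed in O(n log n)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    if n < 2:
        return 0.0                      # crude guard for singleton groups
    w = 2.0 * np.arange(1, n + 1) - n - 1
    return 2.0 * np.sum(w * x) / (n * (n - 1))

def gini_correlation(x, y):
    """Plug-in categorical Gini correlation: relative reduction of the Gini
    mean difference of X within the categories of Y (one standard form)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y)
    delta = gini_mean_difference(x)
    within = sum(np.mean(y == k) * gini_mean_difference(x[y == k])
                 for k in np.unique(y))
    return (delta - within) / delta

def jackknife_pseudo_values(x, y):
    """Jackknife pseudo-values of rho_g; empirical likelihood is then applied
    to these pseudo-values to build JEL confidence intervals (EL step omitted)."""
    n = len(x)
    full = gini_correlation(x, y)
    loo = np.array([gini_correlation(np.delete(x, i), np.delete(y, i))
                    for i in range(n)])
    return n * full - (n - 1) * loo
```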

This work is concerned with cone-beam computed tomography with a circular source trajectory, where the reconstruction inverse problem requires accurate knowledge of the relative positions and orientations of the source, detector and rotational axis. We address this problem as a step preceding the reconstruction process, working directly from the acquired projections. The method estimates both the detector shift (orthogonal to the focal and rotational axes) and the in-plane detector rotation, relative to the source and rotational axis. The resulting algorithm is based on a fan-beam symmetry condition and the variable projection optimization approach, and has a low computational cost. The alignment problem for fan-beam tomography is therefore addressed as well. The methods are validated with simulated and real industrial tomographic data, with code examples available for both fan- and cone-beam geometries.
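The underlying symmetry idea can be illustrated in a strongly simplified parallel-beam analogue: opposing projections are mirror images about the projected rotation axis, so their relative shift reveals the detector offset. The sketch below is only that analogue, not the paper's fan-beam symmetry condition with variable projection, and it does not estimate the in-plane rotation.

```python
import numpy as np

def estimate_axis_offset(p0, p180):
    """Estimate the lateral offset of the rotation axis from a pair of
    opposing 1-D projections (simplified parallel-beam analogue; the paper
    uses a fan-beam symmetry condition and variable projection instead)."""
    p0 = p0 - p0.mean()
    q = p180[::-1] - p180.mean()                 # mirrored opposing view
    corr = np.correlate(p0, q, mode="full")
    lag = np.argmax(corr) - (len(p0) - 1)
    return lag / 2.0   # offset of the axis from the detector centre, in pixels
```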

In this paper, a set of previously established general results for the development of B-series for a broad class of stochastic differential equations is collected. The applicability of these results is demonstrated by the derivation of B-series for non-autonomous semi-linear SDEs and for exponential Runge-Kutta methods applied to this class of SDEs, which constitutes a significant generalization of the existing theory on such methods.
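To fix the class of methods in question, the sketch below implements one common exponential Euler-type scheme for a scalar semi-linear SDE, treating the linear drift exactly; it is shown only as an example of the method family and is not derived from the B-series analysis.

```python
import numpy as np

def exponential_euler_maruyama(a, g, sigma, x0, T=1.0, n_steps=1000, seed=0):
    """One common exponential Euler-type scheme for the scalar semi-linear SDE
        dX = (a*X + g(t, X)) dt + sigma(t, X) dW,
    where the linear drift is integrated exactly through exp(a*h)."""
    rng = np.random.default_rng(seed)
    h = T / n_steps
    x = x0
    for n in range(n_steps):
        t = n * h
        dw = rng.normal(scale=np.sqrt(h))
        x = np.exp(a * h) * (x + h * g(t, x) + sigma(t, x) * dw)
    return x

# Example: dX = (-2X + sin(t)) dt + 0.1 dW on [0, 1]
x_T = exponential_euler_maruyama(a=-2.0, g=lambda t, x: np.sin(t),
                                 sigma=lambda t, x: 0.1, x0=1.0)
```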

We introduce the extremal range, a local statistic for studying the spatial extent of extreme events in random fields on $\mathbb{R}^2$. Conditioned on exceedance of a high threshold at a location $s$, the extremal range at $s$ is the random variable defined as the smallest distance from $s$ to a location where there is a non-exceedance. We leverage tools from excursion-set theory to study distributional properties of the extremal range, propose parametric models, and predict the median extremal range at extreme threshold levels. The extremal range captures the rate at which the spatial extent of conditional extreme events scales for increasingly high thresholds, and we relate its distributional properties to the bivariate tail dependence coefficient and the extremal index of time series in classical Extreme-Value Theory. Consistent estimation of the distribution function of the extremal range for stationary random fields is proven. For non-stationary random fields, we implement generalized additive median regression to predict extremal-range maps at very high threshold levels. An application to two large daily temperature datasets, namely reanalyses and climate-model simulations for France, highlights decreasing extremal dependence for increasing threshold levels and reveals strong differences in joint tail decay rates between reanalyses and simulations.
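The definition translates directly into a grid-based empirical estimate, sketched below on a smoothed noise field: at each exceedance site, the extremal range is the distance to the nearest non-exceedance cell. The toy field and the 95% threshold are illustrative stand-ins, not the temperature data or the paper's model-based estimators.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def empirical_extremal_range(field, threshold, site):
    """Empirical extremal range at a grid site exceeding the threshold: the
    smallest distance from `site` to a grid cell where the field does not
    exceed the threshold (grid-based sketch of the definition)."""
    exceed = field > threshold
    i0, j0 = site
    if not exceed[i0, j0]:
        return np.nan          # only defined conditionally on an exceedance
    ii, jj = np.nonzero(~exceed)
    if len(ii) == 0:
        return np.inf
    return np.min(np.hypot(ii - i0, jj - j0))

# Toy usage on a smoothed noise field (illustration only)
rng = np.random.default_rng(1)
field = uniform_filter(rng.normal(size=(100, 100)), size=5)
u = np.quantile(field, 0.95)
sites = np.argwhere(field > u)
ranges = [empirical_extremal_range(field, u, tuple(s)) for s in sites]
```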

Gaussian boson sampling, a computational model that is widely believed to admit quantum supremacy, has already been experimentally demonstrated and is claimed to surpass the classical simulation capabilities of even the most powerful supercomputers today. However, whether the current approach, limited by photon loss and noise in such experiments, prescribes a scalable path to quantum advantage is an open question. To understand the effect of photon loss on the scalability of Gaussian boson sampling, we analytically derive the asymptotic operator entanglement entropy scaling, which relates to the simulation complexity. As a result, we observe that efficient tensor network simulations are likely possible under the $N_\text{out}\propto\sqrt{N}$ scaling of the number of surviving photons $N_\text{out}$ with the number of input photons $N$. We numerically verify this result using a tensor network algorithm with $U(1)$ symmetry, and overcome previous challenges due to the large local Hilbert space dimensions in Gaussian boson sampling with hardware acceleration. Additionally, we observe that increasing the photon number through larger squeezing does not increase the entanglement entropy significantly. Finally, we numerically find the bond dimension necessary for fixed-accuracy simulations, providing more direct evidence for the complexity of tensor networks.
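For readers less familiar with the entropy quantity involved, the sketch below computes the bipartite von Neumann entanglement entropy of a generic pure state from its Schmidt spectrum; the paper's figure of merit is instead the operator entanglement entropy of the lossy GBS state in its tensor-network representation, so this is only an illustration of the basic computation.

```python
import numpy as np

def entanglement_entropy(psi, dim_left, dim_right):
    """Bipartite von Neumann entanglement entropy of a pure state vector,
    obtained from the singular values of its bipartition matrix."""
    s = np.linalg.svd(psi.reshape(dim_left, dim_right), compute_uv=False)
    p = s ** 2 / np.sum(s ** 2)        # Schmidt spectrum
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

# Usage: a two-qubit Bell state has entropy log 2
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
print(entanglement_entropy(bell, 2, 2))   # ~0.693
```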

Spectral independence is a recently developed framework for obtaining sharp bounds on the convergence time of the classical Glauber dynamics. This new framework has yielded optimal $O(n \log n)$ sampling algorithms on bounded-degree graphs for a large class of problems throughout the so-called uniqueness regime, including, for example, the problems of sampling independent sets, matchings, and Ising-model configurations. Our main contribution is to relax the bounded-degree assumption that has so far been important in establishing and applying spectral independence. Previous methods for avoiding degree bounds rely on using $L^p$-norms to analyse contraction on graphs with bounded connective constant (Sinclair, Srivastava, Yin; FOCS'13). The non-linearity of $L^p$-norms is an obstacle to applying these results to bound spectral independence. Our solution is to capture the $L^p$-analysis recursively by amortising over the subtrees of the recurrence used to analyse contraction. Our method generalises previous analyses that applied only to bounded-degree graphs. As a main application of our techniques, we consider the random graph $G(n,d/n)$, where previously known algorithms ran in time $n^{O(\log d)}$ or applied only to large $d$. We refine these algorithmic bounds significantly, and develop fast $n^{1+o(1)}$ algorithms based on Glauber dynamics that apply to all $d$, throughout the uniqueness regime.
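For concreteness, the Glauber dynamics analysed here is the standard single-site update chain; the sketch below implements it for the hardcore (independent-set) model at fugacity $\lambda$, where each step resamples one uniformly random vertex conditional on its neighbourhood. The small example graph and parameters are illustrative only.

```python
import random

def glauber_hardcore(adj, lam, n_steps, seed=0):
    """Glauber dynamics for the hardcore (independent-set) model with
    fugacity lam: each step resamples a uniformly random vertex conditional
    on the current state of its neighbours."""
    rng = random.Random(seed)
    vertices = list(adj)
    occupied = {v: False for v in vertices}
    for _ in range(n_steps):
        v = rng.choice(vertices)
        if any(occupied[u] for u in adj[v]):
            occupied[v] = False                        # blocked by a neighbour
        else:
            occupied[v] = rng.random() < lam / (1.0 + lam)
    return {v for v in vertices if occupied[v]}

# Usage: sample an independent set of a 4-cycle at fugacity 1
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(glauber_hardcore(adj, lam=1.0, n_steps=10_000))
```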

A key challenge when trying to understand innovation is that it is a dynamic, ongoing process, which can be highly contingent on ephemeral factors such as culture, economics, or luck. This means that any analysis of the real-world process must necessarily be historical - and thus probably too late to be most useful - but also cannot be sure what the properties of the web of connections between innovations are or were. Here I try to address this by designing and generating a set of synthetic innovation web "dictionaries" that can be used to host sampled innovation timelines, probe the overall statistics and behaviours of these processes, and determine the degree of their reliance on the structure or generating algorithm. Thus, inspired by the work of Fink, Reeves, Palma and Farr (2017) on innovation in language, gastronomy, and technology, I study how new symbol discovery manifests itself in terms of additional "word" vocabulary becoming available from dictionaries generated from a finite number of symbols. Several distinct dictionary generation models are investigated using numerical simulation, with emphasis on the scaling of knowledge as dictionary generators and parameters are varied, and on the role of the order in which the symbols are discovered.
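A toy version of this experiment can be written in a few lines: generate a finite dictionary of words over a symbol alphabet, then track how much vocabulary becomes accessible as symbols are discovered one by one. The uniform random dictionary below is an illustrative generating model, not one of the specific models studied in the paper.

```python
import itertools
import random

def vocabulary_growth(alphabet_size=8, max_word_len=4, seed=0):
    """Toy dictionary experiment: count how many dictionary words are
    available after each new symbol is discovered, in a random order."""
    rng = random.Random(seed)
    symbols = list(range(alphabet_size))
    all_words = [w for L in range(1, max_word_len + 1)
                 for w in itertools.product(symbols, repeat=L)]
    dictionary = set(rng.sample(all_words, k=len(all_words) // 10))
    order = symbols[:]
    rng.shuffle(order)                        # order of symbol discovery
    known, counts = set(), []
    for s in order:
        known.add(s)
        counts.append(sum(1 for w in dictionary if set(w) <= known))
    return counts          # accessible vocabulary after each discovery

print(vocabulary_growth())
```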

The stability of an approximating sequence $(A_n)$ for an operator $A$ usually requires, besides invertibility of $A$, the invertibility of further operators, say $B, C, \dots$, that are well-associated to the sequence $(A_n)$. We study this set, $\{A,B,C,\dots\}$, of so-called stability indicators of $(A_n)$ and connect it to the asymptotics of $\|A_n\|$, $\|A_n^{-1}\|$ and $\kappa(A_n)=\|A_n\|\|A_n^{-1}\|$ as well as to spectral pollution by showing that $\limsup {\rm Spec}_\varepsilon A_n= {\rm Spec}_\varepsilon A\cup{\rm Spec}_\varepsilon B\cup{\rm Spec}_\varepsilon C\cup\dots$. We further specify, for each of $\|A_n\|$, $\|A_n^{-1}\|$, $\kappa(A_n)$ and ${\rm Spec}_\varepsilon A_n$, under which conditions even convergence applies.
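For a finite-dimensional reference point, the $\varepsilon$-pseudospectrum appearing in the limit statement can be computed on a grid via smallest singular values, as sketched below; the paper is concerned with the limits of such sets along an approximating sequence $(A_n)$, not with this elementary computation itself. The shift-type test matrix and grid are illustrative.

```python
import numpy as np

def pseudospectrum_mask(A, grid, eps):
    """Points z of `grid` lying in the eps-pseudospectrum of the matrix A,
    i.e. where the smallest singular value of (A - zI) is below eps."""
    n = A.shape[0]
    smin = np.array([np.linalg.svd(A - z * np.eye(n), compute_uv=False)[-1]
                     for z in grid])
    return smin < eps

# Usage: eps-pseudospectrum of a 10x10 Jordan-like shift on a complex grid
A = np.diag(np.ones(9), k=1)
xs = np.linspace(-1.5, 1.5, 61)
grid = np.array([x + 1j * y for y in xs for x in xs])
mask = pseudospectrum_mask(A, grid, eps=0.1)
```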
