
We consider smooth optimization problems with a Hermitian positive semi-definite fixed-rank constraint, where a quotient geometry with three Riemannian metrics $g^i(\cdot, \cdot)$ $(i=1,2,3)$ is used to represent this constraint. By taking the nonlinear conjugate gradient method (CG) as an example, we show that CG on the quotient geometry with metric $g^1$ is equivalent to CG on the factor-based optimization framework, which is often called the Burer--Monteiro approach. We also show that CG on the quotient geometry with metric $g^3$ is equivalent to CG on the commonly used embedded geometry. We call two CG methods equivalent if they produce an identical sequence of iterates $\{X_k\}$. In addition, we show that if the limit point of the sequence $\{X_k\}$ generated by an algorithm has lower rank, that is, each $X_k \in \mathbb{C}^{n\times n}$, $k = 1, 2, \ldots$, has rank $p$ while the limit point $X_*$ has rank $r < p$, then the condition number of the Riemannian Hessian with metric $g^1$ can be unbounded, whereas those of the other two metrics stay bounded. Numerical experiments show that the Burer--Monteiro CG method has a slower local convergence rate than CG on the quotient geometry under the other two metrics if the limit point has reduced rank. This slower convergence rate can thus be attributed to the large condition number of the Hessian near a minimizer.
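
As a concrete illustration of the factor-based (Burer--Monteiro) formulation mentioned above, the following Python sketch parametrizes a Hermitian PSD fixed-rank matrix as $X = YY^H$ and runs plain gradient descent on the factor $Y$ for a simple least-squares objective. The objective, step size, and use of gradient descent instead of Riemannian CG are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def burer_monteiro_gd(A, p, steps=5000, lr=1e-3, seed=0):
    """Minimize f(X) = ||X - A||_F^2 over Hermitian PSD X of rank <= p by
    parametrizing X = Y Y^H (the Burer--Monteiro factorization) and running
    plain gradient descent on the factor Y; a stand-in for the Riemannian
    CG methods compared in the paper."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Y = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
    for _ in range(steps):
        # Euclidean gradient of ||Y Y^H - A||_F^2 with respect to Y (A Hermitian)
        G = 4.0 * (Y @ Y.conj().T - A) @ Y
        Y = Y - lr * G
    return Y @ Y.conj().T

# Toy example: recover a random rank-2 Hermitian PSD matrix
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 2)) + 1j * rng.standard_normal((6, 2))
A = B @ B.conj().T
X = burer_monteiro_gd(A, p=2)
print(np.linalg.norm(X - A))    # small residual: the factorized iteration recovers A
```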

Related content

We establish the following two main results on order types of points in general position in the plane (realizable simple planar order types, realizable uniform acyclic oriented matroids of rank $3$): (a) The number of extreme points in an $n$-point order type, chosen uniformly at random from all such order types, is on average $4+o(1)$. For labeled order types, this number has average $4- \frac{8}{n^2 - n +2}$ and variance at most $3$. (b) The (labeled) order types read off a set of $n$ points sampled independently from the uniform measure on a convex planar domain, smooth or polygonal, or from a Gaussian distribution are concentrated, i.e., such sampling typically encounters only a vanishingly small fraction of all order types of the given size. Result (a) generalizes to arbitrary dimension $d$ for labeled order types with the average number of extreme points $2d+o(1)$ and constant variance. We also discuss to what extent our methods generalize to the abstract setting of uniform acyclic oriented matroids. Moreover, our methods allow us to show the following relative of the Erd\H{o}s--Szekeres theorem: for any fixed $k$, as $n \to \infty$, a proportion $1 - O(1/n)$ of the $n$-point simple order types contain a triangle enclosing a convex $k$-chain over an edge. For the unlabeled case in (a), we prove that for any antipodal, finite subset of the $2$-dimensional sphere, the group of orientation-preserving bijections is cyclic, dihedral or one of $A_4$, $S_4$ or $A_5$ (and each case is possible). These are the finite subgroups of $SO(3)$ and our proof follows the lines of their characterization by Felix Klein.
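
To make the objects concrete, the following Python sketch (illustrative only, not from the paper) computes the labeled order type (chirotope) of a planar point set from orientation signs, counts its extreme points using only orientation queries, and estimates the average number of extreme points for points drawn uniformly from a disk; that sampling model corresponds to result (b), not to the uniform distribution over order types in result (a).

```python
import numpy as np
from itertools import combinations

def orientation(P, i, j, k):
    """Sign of the orientation of the ordered triple (P[i], P[j], P[k])."""
    (ax, ay), (bx, by), (cx, cy) = P[i], P[j], P[k]
    return int(np.sign((bx - ax) * (cy - ay) - (by - ay) * (cx - ax)))

def labeled_order_type(P):
    """Chirotope (labeled order type) of a point set in general position:
    the orientation sign of every triple, stored here for i < j < k."""
    return {(i, j, k): orientation(P, i, j, k)
            for i, j, k in combinations(range(len(P)), 3)}

def num_extreme_points(P):
    """A point is extreme iff it lies in no triangle spanned by three other points;
    this O(n^4) test uses only orientation queries, i.e. only the order type."""
    n = len(P)
    def inside(i, a, b, c):
        s = {orientation(P, a, b, i), orientation(P, b, c, i), orientation(P, c, a, i)}
        return len(s) == 1 and 0 not in s
    return sum(1 for i in range(n)
               if not any(inside(i, *t)
                          for t in combinations([j for j in range(n) if j != i], 3)))

# Points sampled uniformly from a disk (the setting of result (b); note this is a
# different distribution over order types than the uniform one in result (a)).
rng = np.random.default_rng(0)
sizes = []
for _ in range(100):
    r, th = np.sqrt(rng.uniform(size=10)), rng.uniform(0, 2 * np.pi, size=10)
    P = np.c_[r * np.cos(th), r * np.sin(th)]
    sizes.append(num_extreme_points(P))
print(len(labeled_order_type(P)), "triples;",
      "average number of extreme points:", np.mean(sizes))
```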

We propose a covariance-corrected random batch method for interacting particle systems. By establishing a certain entropic central limit theorem, we provide entropic convergence guarantees for the law of the entire trajectories of all particles of the proposed method to the law of the trajectories of the discrete-time interacting particle system whenever the batch size $B \gg (\alpha n)^{\frac{1}{3}}$ (where $n$ is the number of particles and $\alpha$ is the time discretization parameter). This in turn implies that the outputs of these methods are nearly \emph{statistically indistinguishable} when $B$ is even moderately large. Previous works mainly considered convergence in the Wasserstein distance and either required stringent assumptions on the potentials or obtained bounds with an exponential dependence on the time horizon. This work makes minimal assumptions on the interaction potentials and, in particular, establishes that even when the particle trajectories diverge to infinity, they do so in the same way for both methods. Such guarantees are very useful in light of the recent advances in interacting-particle-based algorithms for sampling.
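
For context, the following Python sketch implements one step of the plain random batch method that the proposed scheme builds on; the function names, the harmonic confining and interaction potentials, and the Euler--Maruyama discretization are illustrative assumptions, and the covariance correction that is the subject of the abstract is not included.

```python
import numpy as np

def random_batch_step(x, dt, batch_size, grad_V, grad_K, sigma, rng):
    """One step of the plain random batch method (Jin, Li & Liu) for
        dX_i = -grad_V(X_i) dt - (1/(n-1)) sum_{j != i} grad_K(X_i - X_j) dt + sigma dW_i.
    Particles are shuffled into batches of size B; each particle interacts only with
    its own batch, with the pairwise sum rescaled by 1/(B-1).  The covariance
    correction proposed in the abstract is NOT shown here."""
    n = x.shape[0]
    drift = -grad_V(x)
    perm = rng.permutation(n)
    for start in range(0, n, batch_size):           # assumes batch_size divides n
        batch = perm[start:start + batch_size]
        xb = x[batch]
        diff = xb[:, None, :] - xb[None, :, :]      # (B, B, d) pairwise differences
        g = grad_K(diff)
        g[np.arange(len(batch)), np.arange(len(batch))] = 0.0   # drop the i == j term
        drift[batch] -= g.sum(axis=1) / (len(batch) - 1)
    return x + dt * drift + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)

# Harmonic confinement and attraction: grad_V(x) = x, grad_K(x) = x
rng = np.random.default_rng(0)
x = rng.standard_normal((120, 2))
for _ in range(500):
    x = random_batch_step(x, dt=0.01, batch_size=4,
                          grad_V=lambda y: y, grad_K=lambda y: y, sigma=0.1, rng=rng)
print(x.mean(axis=0), x.std())
```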

Bilevel optimization has arisen as a powerful tool in modern machine learning. However, due to the nested structure of bilevel optimization, even gradient-based methods require second-order derivative approximations via Jacobian- and/or Hessian-vector computations, which can be costly and unscalable in practice. Recently, Hessian-free bilevel schemes have been proposed to resolve this issue, where the general idea is to use zeroth- or first-order methods to approximate the full hypergradient of the bilevel problem. However, we empirically observe that such approximations can lead to large variance and unstable training, whereas estimating only the response Jacobian matrix as a partial component of the hypergradient turns out to be extremely effective. To this end, we propose a new Hessian-free method, which adopts a zeroth-order-like method to approximate the response Jacobian matrix by taking the difference between two optimization paths. Theoretically, we provide a convergence rate analysis for the proposed algorithms, where our key challenge is to characterize the approximation and smoothness properties of the trajectory-dependent estimator, which can be of independent interest. This is the first known convergence rate result for this type of Hessian-free bilevel algorithm. Experimentally, we demonstrate that the proposed algorithms outperform baseline bilevel optimizers on various bilevel problems. In particular, in our experiment on few-shot meta-learning with a ResNet-12 network on the miniImageNet dataset, we show that our algorithm outperforms baseline meta-learning algorithms, while other baseline bilevel optimizers do not solve such meta-learning problems within a comparable time frame.
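
The idea of estimating the response Jacobian by differencing two inner optimization paths can be sketched as follows. This is a toy implementation: the quadratic inner and outer objectives, the fixed-step inner gradient descent, and the Gaussian-direction (ES-style) averaging are illustrative choices, not the exact estimator or analysis of the paper.

```python
import numpy as np

def inner_path(x, y0, grad_y_g, steps=50, lr=0.1):
    """T steps of gradient descent on the inner objective g(x, .) starting from y0."""
    y = y0.copy()
    for _ in range(steps):
        y = y - lr * grad_y_g(x, y)
    return y

def hypergradient_zo(x, y0, grad_x_f, grad_y_f, grad_y_g, num_dirs=200, delta=1e-3, rng=None):
    """Zeroth-order-like hypergradient estimate in the spirit of the abstract: the
    response Jacobian dy*(x)/dx is approximated by differencing two inner optimization
    paths, one started at x and one at the perturbed point x + delta*u."""
    rng = np.random.default_rng() if rng is None else rng
    y_x = inner_path(x, y0, grad_y_g)
    v = grad_y_f(x, y_x)                       # outer gradient w.r.t. the inner variable
    est = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape)
        y_pert = inner_path(x + delta * u, y0, grad_y_g)
        jvp = (y_pert - y_x) / delta           # ~ (dy*/dx) u
        est += u * float(jvp @ v)              # ES-style estimate of (dy*/dx)^T v
    return grad_x_f(x, y_x) + est / num_dirs

# Toy check: inner g(x,y) = ||y - A x||^2 / 2 gives y*(x) = A x; with outer
# f(x,y) = ||y - b||^2 / 2 the exact hypergradient is A^T (A x - b).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); b = rng.standard_normal(3); x = rng.standard_normal(3)
est = hypergradient_zo(x, np.zeros(3),
                       grad_x_f=lambda x, y: np.zeros_like(x),
                       grad_y_f=lambda x, y: y - b,
                       grad_y_g=lambda x, y: y - A @ x,
                       rng=rng)
print(est)                    # Monte Carlo estimate
print(A.T @ (A @ x - b))      # exact hypergradient (the two should roughly agree)
```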

Constrained optimization problems can be difficult because their search spaces have properties not conducive to search, e.g., multimodality, discontinuities, or deception. To address such difficulties, considerable research has been performed on creating novel evolutionary algorithms or specialized genetic operators. However, if the representation that defines the search space could be altered so that it only permits valid solutions satisfying the constraints, the task of finding the optimum would become more tractable without any need for specialized optimization algorithms. We propose Constrained Optimization in Latent Space (COIL), which uses a VAE to generate a learned latent representation from a dataset comprising samples from the valid region of the search space according to a constraint, thus enabling the optimizer to search in the new space defined by the learned representation. Preliminary experiments show promise: compared to an identical GA using a standard representation that cannot meet the constraints or find fit solutions, COIL with its learned latent representation can perfectly satisfy different types of constraints while finding high-fitness solutions.
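
A minimal sketch of the COIL idea, assuming a decoder has already been trained on constraint-satisfying samples: candidates are proposed in latent space and evaluated after decoding. The simple (1+1)-style search stands in for the GA, and the softmax "decoder" in the toy example is a stand-in for a trained VAE decoder whose image lies in the feasible set; neither is taken from the paper.

```python
import numpy as np

def latent_search(decode, objective, latent_dim, iters=200, sigma=0.3, rng=None):
    """(1+1)-style search in latent space: candidate latent vectors are decoded back
    to the original representation (which, by construction of the VAE training set,
    tends to satisfy the constraint) and scored by the objective there.
    `decode` is assumed to be a decoder trained on valid samples; VAE training
    itself is not shown."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(latent_dim)
    best = objective(decode(z))
    for _ in range(iters):
        z_new = z + sigma * rng.standard_normal(latent_dim)
        val = objective(decode(z_new))
        if val < best:                 # minimization
            z, best = z_new, val
    return decode(z), best

# Toy stand-in: the "decoder" maps any latent vector onto the feasible set
# {x : x >= 0, sum(x) = 1} via softmax; objective = distance to a target point.
target = np.array([0.7, 0.2, 0.1])
softmax = lambda z: np.exp(z - z.max()) / np.exp(z - z.max()).sum()
x_best, val = latent_search(softmax, lambda x: np.linalg.norm(x - target),
                            latent_dim=3, rng=np.random.default_rng(0))
print(x_best, val)
```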

We study sequential decision making problems aimed at maximizing the expected total reward while satisfying a constraint on the expected total utility. We employ the natural policy gradient method to solve the discounted infinite-horizon optimal control problem for Constrained Markov Decision Processes (constrained MDPs). Specifically, we propose a new Natural Policy Gradient Primal-Dual (NPG-PD) method that updates the primal variable via natural policy gradient ascent and the dual variable via projected sub-gradient descent. Although the underlying maximization involves a nonconcave objective function and a nonconvex constraint set, under the softmax policy parametrization we prove that our method achieves global convergence with sublinear rates regarding both the optimality gap and the constraint violation. Such convergence is independent of the size of the state-action space, i.e., it is dimension-free. Furthermore, for log-linear and general smooth policy parametrizations, we establish sublinear convergence rates up to a function approximation error caused by restricted policy parametrization. We also provide convergence and finite-sample complexity guarantees for two sample-based NPG-PD algorithms. Finally, we use computational experiments to showcase the merits and the effectiveness of our approach.
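
A tabular sketch of the NPG-PD updates under the softmax parametrization is given below, using exact policy evaluation rather than the sample-based variants; the step sizes, the random toy constrained MDP, and the specific constraint level are illustrative assumptions.

```python
import numpy as np

def q_values(P, c, pi, gamma):
    """Exact Q^pi for per-step payoff c(s, a) in a tabular MDP with kernel P[s, a, s']."""
    S = c.shape[0]
    P_pi = np.einsum('sap,sa->sp', P, pi)
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, (pi * c).sum(axis=1))
    return c + gamma * P @ V

def npg_pd(P, r, g, b, rho, gamma=0.9, eta=1.0, eta_dual=0.1, iters=300):
    """Sketch of the NPG-PD iteration for a tabular constrained MDP with softmax
    policies: the primal natural-gradient step reduces to a multiplicative-weights
    update on the Q-values of the Lagrangian, and the dual step is projected
    subgradient descent on the multiplier of the constraint V_g(rho) >= b."""
    S, A = r.shape
    pi = np.full((S, A), 1.0 / A)
    lam = 0.0
    for _ in range(iters):
        Q_L = q_values(P, r + lam * g, pi, gamma)           # Lagrangian Q-values
        pi = pi * np.exp(eta * (Q_L - Q_L.max(axis=1, keepdims=True)))
        pi /= pi.sum(axis=1, keepdims=True)
        V_g = rho @ (pi * q_values(P, g, pi, gamma)).sum(axis=1)
        lam = max(0.0, lam - eta_dual * (V_g - b))          # projected dual step
    return pi, lam

# Random 4-state, 3-action constrained MDP with utility constraint V_g(rho) >= 5
rng = np.random.default_rng(0)
S, A = 4, 3
P = rng.dirichlet(np.ones(S), size=(S, A))
r, g = rng.uniform(size=(S, A)), rng.uniform(size=(S, A))
pi, lam = npg_pd(P, r, g, b=5.0, rho=np.full(S, 1.0 / S))
print("constraint multiplier:", lam)
```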

Solving the time-dependent Schr\"odinger equation is an important application area for quantum algorithms. We consider Schr\"odinger's equation in the semi-classical regime. Here the solutions exhibit strong multiple-scale behavior due to a small parameter $\hbar$, in the sense that the dynamics of the quantum states and the induced observables can occur on different spatial and temporal scales. Such a Schr\"odinger equation finds many applications, including in Born-Oppenheimer molecular dynamics and Ehrenfest dynamics. This paper considers quantum analogues of pseudo-spectral (PS) methods on classical computers. Estimates on the gate counts in terms of $\hbar$ and the precision $\varepsilon$ are obtained. It is found that the number of required qubits, $m$, scales only logarithmically with respect to $\hbar$. When the solution has bounded derivatives up to order $\ell$, the symmetric Trotter method has gate complexity $\mathcal{O}\Big({ (\varepsilon \hbar)^{-\frac12} \mathrm{polylog}(\varepsilon^{-\frac{3}{2\ell}} \hbar^{-1-\frac{1}{2\ell}})}\Big),$ provided that the diagonal unitary operators in the pseudo-spectral methods can be implemented with $\mathrm{poly}(m)$ operations. When physical observables are the desired outcomes, however, the step size in the time integration can be chosen independently of $\hbar$. The gate complexity in this case is reduced to $\mathcal{O}\Big({\varepsilon^{-\frac12} \mathrm{polylog}( \varepsilon^{-\frac3{2\ell}} \hbar^{-1} )}\Big),$ with $\ell$ again indicating the smoothness of the solution.
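
For reference, the classical pseudo-spectral method with symmetric Trotter (Strang) splitting that the quantum algorithms mirror can be written in a few lines; the potential, grid size, and wave-packet initial data below are illustrative choices. The example also hints at the qubit-count claim: resolving oscillations of wavelength $\sim \hbar$ needs $n \sim 1/\hbar$ grid points, hence only $m \sim \log_2(1/\hbar)$ qubits to index them.

```python
import numpy as np

def split_step_semiclassical(psi0, V, hbar, dt, steps, L=2 * np.pi):
    """Classical split-step Fourier (pseudo-spectral) solver with symmetric Trotter
    (Strang) splitting for i*hbar*psi_t = -(hbar^2/2)*psi_xx + V(x)*psi on a
    periodic domain of length L."""
    n = psi0.size
    x = np.linspace(0.0, L, n, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    half_potential = np.exp(-1j * V(x) * dt / (2 * hbar))   # half-step in physical space
    kinetic = np.exp(-1j * hbar * k**2 * dt / 2)             # full step in Fourier space
    psi = psi0.astype(complex)
    for _ in range(steps):
        psi = half_potential * psi
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))
        psi = half_potential * psi
    return psi

# Semiclassical Gaussian wave packet with an O(1/hbar) oscillatory phase
hbar, n, dt = 1e-2, 2048, 1e-3
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
psi0 = np.exp(-(x - np.pi)**2 / (2 * hbar)) * np.exp(1j * 0.5 * x / hbar)
psi0 = psi0 / np.linalg.norm(psi0)
psi = split_step_semiclassical(psi0, lambda x: 1.0 - np.cos(x), hbar, dt, steps=200)
print(abs(np.linalg.norm(psi) - 1.0))   # the splitting is unitary: norm is preserved
```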

The problem of constrained Markov decision processes is considered. An agent aims to maximize the expected accumulated discounted reward subject to multiple constraints on its costs (the number of constraints is relatively small). A new dual approach is proposed that integrates two ingredients: an entropy-regularized policy optimizer and Vaidya's dual optimizer, both of which are critical for achieving faster convergence. A finite-time error bound for the proposed approach is provided. Despite the challenge of the nonconcave objective subject to nonconcave constraints, the proposed approach is shown to converge (with a linear rate) to the global optimum. The complexity, expressed in terms of the optimality gap and the constraint violation, significantly improves upon the existing primal-dual approaches.
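
The dual structure described above can be sketched as follows: the entropy-regularized inner maximization over policies is solved by soft value iteration, and the multiplier is updated from the resulting constraint values. For simplicity this sketch replaces Vaidya's cutting-plane optimizer with a plain projected subgradient step, and the tabular CMDP, step sizes, and regularization strength are illustrative assumptions.

```python
import numpy as np

def soft_value_iteration(P, c, gamma, tau, iters=300):
    """Entropy-regularized ('soft') value iteration for payoff c(s, a): returns the
    optimal soft values and the corresponding softmax-optimal policy."""
    S, A = c.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = c + gamma * P @ V
        m = Q.max(axis=1)
        V = m + tau * np.log(np.exp((Q - m[:, None]) / tau).sum(axis=1))
    pi = np.exp((Q - V[:, None]) / tau)
    return V, pi / pi.sum(axis=1, keepdims=True)

def dual_step(P, r, cost, budget, lam, rho, gamma=0.9, tau=0.1, lr=0.05):
    """One projected subgradient step on the entropy-regularized dual of the CMDP
    (maximize reward s.t. discounted cost <= budget).  Vaidya's cutting-plane method
    from the paper is replaced by a plain subgradient step, purely for illustration."""
    V, pi = soft_value_iteration(P, r - lam * cost, gamma, tau)
    P_pi = np.einsum('sap,sa->sp', P, pi)
    Vc = np.linalg.solve(np.eye(len(rho)) - gamma * P_pi, (pi * cost).sum(axis=1))
    lam_new = max(0.0, lam - lr * (budget - rho @ Vc))   # dual subgradient: budget - E[cost]
    return lam_new, pi

# Random 5-state, 3-action constrained MDP with a single cost constraint
rng = np.random.default_rng(1)
S, A = 5, 3
P = rng.dirichlet(np.ones(S), size=(S, A))
r, cost = rng.uniform(size=(S, A)), rng.uniform(size=(S, A))
rho, lam = np.full(S, 1.0 / S), 0.0
for _ in range(200):
    lam, pi = dual_step(P, r, cost, budget=4.0, lam=lam, rho=rho)
print("cost multiplier:", lam)
```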

We introduce a novel approach to waveform inversion, based on a data-driven reduced order model (ROM) of the wave operator. The presentation is for the acoustic wave equation, but the approach can be extended to elastic or electromagnetic waves. The data are time-resolved measurements of the pressure wave at the sensors in an active array, which probe the unknown medium with pulses and measure the generated waves. The ROM depends nonlinearly on the data, but it can be constructed from them using numerical linear algebra methods. We show that the ROM can be used for the inverse problem of velocity estimation. While the full-waveform inversion approach of nonlinear least-squares data fitting is challenging without low-frequency information, due to multiple minima of the objective function, the minimization of the ROM misfit function behaves better, even for a poor initial guess. In fact, the ROM misfit function is demonstrably a convex function for low-dimensional parametrizations of the unknown velocity. We give the construction of the ROM, introduce the inversion approach based on the ROM misfit, and assess its performance with numerical simulations.
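
Schematically, and in notation not taken from the paper, the contrast drawn here is between the classical full-waveform inversion misfit and the ROM misfit,
$$
\min_{v}\ \big\| d(v) - d^{\mathrm{meas}} \big\|^2
\qquad \text{versus} \qquad
\min_{v}\ \big\| \mathrm{ROM}\big(d(v)\big) - \mathrm{ROM}\big(d^{\mathrm{meas}}\big) \big\|^2 ,
$$
where $d(v)$ denotes the array data synthesized for a trial velocity $v$ and $\mathrm{ROM}(\cdot)$ the reduced order model constructed from such data; the claim is that the second objective is far better behaved (demonstrably convex for low-dimensional parametrizations of $v$) than the first.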

Multiclass probability estimation is the problem of estimating conditional probabilities of a data point belonging to a class given its covariate information. It has broad applications in statistical analysis and data science. Recently, a class of weighted Support Vector Machines (wSVMs) has been developed to estimate class probabilities through ensemble learning for $K$-class problems (Wu, Zhang and Liu, 2010; Wang, Zhang and Wu, 2019), where $K$ is the number of classes. The estimators are robust and achieve high accuracy for probability estimation, but their learning is implemented through pairwise coupling, which demands polynomial time in $K$. In this paper, we propose two new learning schemes, baseline learning and One-vs-All (OVA) learning, to further improve wSVMs in terms of computational efficiency and estimation accuracy. In particular, baseline learning has optimal computational complexity in the sense that it is linear in $K$. Though not the most computationally efficient, the OVA scheme offers the best estimation accuracy among all the procedures under comparison. The resulting estimators are distribution-free and shown to be consistent. We further conduct extensive numerical experiments to demonstrate the finite-sample performance.
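
A rough sketch of the One-vs-All weighted-SVM probability estimation idea is given below, using scikit-learn's SVC as the binary learner: for each class, weighted SVMs are trained over a grid of weights $\pi$, and the class probability is read off from the weight at which the decision flips, since the population minimizer of the weighted hinge loss is $\mathrm{sign}(p_k(x) - \pi)$. The weight grid, the flip-point rule, and the final renormalization are simplifications; the exact weighting convention and consistency construction of the paper may differ.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

def ova_wsvm_probabilities(X_train, y_train, X_test,
                           weights=np.linspace(0.05, 0.95, 19), C=1.0, kernel='rbf'):
    """One-vs-All probability estimation with weighted SVMs (sketch): for each class k,
    binary 'class k vs rest' SVMs are trained over a grid of weights pi, and p_k(x) is
    estimated by the largest pi at which x is still assigned to class k."""
    classes = np.unique(y_train)
    probs = np.zeros((len(X_test), len(classes)))
    for ci, k in enumerate(classes):
        yk = (y_train == k).astype(int)                    # 1 = class k, 0 = rest
        preds = []
        for pi in weights:
            clf = SVC(C=C, kernel=kernel, class_weight={1: 1 - pi, 0: pi})
            preds.append(clf.fit(X_train, yk).predict(X_test))  # 1 iff estimated p_k(x) > pi
        preds = np.array(preds)                            # shape (len(weights), n_test)
        probs[:, ci] = [weights[col.astype(bool)].max(initial=weights[0] / 2)
                        for col in preds.T]
    return probs / probs.sum(axis=1, keepdims=True)        # OVA renormalization step

# Small demonstration on iris
X, y = load_iris(return_X_y=True)
order = np.random.default_rng(0).permutation(len(X))
probs = ova_wsvm_probabilities(X[order[:120]], y[order[:120]], X[order[120:126]])
print(probs.round(2))
```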

Hypothesis testing of random forest (RF) variable importance measures (VIMP) remains the subject of ongoing research. Among recent developments, heuristic approaches to parametric testing have been proposed whose distributional assumptions are based on empirical evidence. Other formal tests under regularity conditions were derived analytically. However, these approaches can be computationally expensive or even practically infeasible. This problem also occurs with non-parametric permutation tests, which are, however, distribution-free and can generically be applied to any type of RF and VIMP. Embracing this advantage, it is proposed here to use sequential permutation tests and sequential p-value estimation to reduce the high computational costs associated with conventional permutation tests. The popular and widely used permutation VIMP serves as a practical and relevant application example. The results of simulation studies confirm that the theoretical properties of the sequential tests apply, that is, the type-I error probability is controlled at the nominal level and high power is maintained while requiring considerably fewer permutations than conventional permutation testing. The numerical stability of the methods is investigated in two additional application studies. In summary, theoretically sound sequential permutation testing of VIMP is possible at greatly reduced computational costs. Recommendations for application are given. A corresponding implementation is provided through the accompanying R package $rfvimptest$. The approach can also be easily applied to any kind of prediction model.
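
A generic sequential Monte Carlo permutation p-value in the spirit of Besag and Clifford is sketched below: permutations are drawn one at a time and sampling stops as soon as $h$ permuted statistics reach the observed value, so clearly non-significant hypotheses are settled after few permutations. The toy statistic (absolute correlation) stands in for random forest permutation VIMP, and the stopping rule is a generic one, not necessarily the exact procedure implemented in $rfvimptest$.

```python
import numpy as np

def sequential_permutation_pvalue(stat, x, y, h=10, max_perm=5000, rng=None):
    """Sequential permutation p-value: draw permutations one at a time and stop as
    soon as h permuted statistics reach the observed statistic (Besag-Clifford style
    estimator p = h / m); otherwise fall back to the usual (g + 1) / (N + 1)."""
    rng = np.random.default_rng() if rng is None else rng
    t_obs = stat(x, y)
    exceed = 0
    for m in range(1, max_perm + 1):
        if stat(x, rng.permutation(y)) >= t_obs:
            exceed += 1
            if exceed == h:
                return h / m, m            # early stop after m permutations
    return (exceed + 1) / (max_perm + 1), max_perm

# Toy example: absolute correlation as the importance statistic
rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = 0.3 * x + rng.standard_normal(200)        # true association -> small p-value
p, m = sequential_permutation_pvalue(lambda a, b: abs(np.corrcoef(a, b)[0, 1]), x, y, rng=rng)
print(p, m)
noise = rng.standard_normal(200)              # no association -> early stop after few permutations
p, m = sequential_permutation_pvalue(lambda a, b: abs(np.corrcoef(a, b)[0, 1]), x, noise, rng=rng)
print(p, m)
```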
