
Quadratic minimization problems with orthogonality constraints (QMPO) play an important role in many applications in science and engineering. However, existing methods may suffer from low accuracy or a heavy workload on large-scale QMPO. Krylov subspace methods are popular for large-scale optimization problems. In this work, we propose a block Lanczos method for solving large-scale QMPO. In the proposed method, the original problem is projected onto a much smaller one, and the Riemannian trust-region method is employed to solve the reduced QMPO. Convergence results for the optimal solution, the optimal objective function value, the multiplier and the KKT error are established. Moreover, we give the convergence rate of the optimal solution, and show that if the block Lanczos process terminates, an exact KKT solution is obtained. Numerical experiments illustrate the numerical behavior of the proposed algorithm, and demonstrate that it is more powerful than many state-of-the-art algorithms for large-scale QMPO.
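
To make the projection step concrete, here is a minimal numpy sketch of a block Lanczos reduction, assuming a symmetric operator and a pure trace objective; the block size, iteration count, and toy problem are placeholders, and the paper's reduced QMPO is solved by a Riemannian trust-region method rather than the eigendecomposition used below.

```python
import numpy as np

def block_lanczos_basis(matvec, V0, k):
    """Orthonormal basis of the block Krylov subspace
    span{V0, A V0, ..., A^{k-1} V0} for symmetric A; matvec(X) = A @ X."""
    n, p = V0.shape
    Q, _ = np.linalg.qr(V0)
    blocks, Q_prev, B_prev = [Q], np.zeros((n, p)), np.zeros((p, p))
    for _ in range(k - 1):
        W = matvec(Q) - Q_prev @ B_prev.T
        A_j = Q.T @ W                    # diagonal block of the projection
        W -= Q @ A_j
        U = np.hstack(blocks)            # full reorthogonalization
        W -= U @ (U.T @ W)
        Q_next, B_j = np.linalg.qr(W)
        blocks.append(Q_next)
        Q_prev, Q, B_prev = Q, Q_next, B_j
    return np.hstack(blocks)

# Toy instance of min tr(X^T A X) s.t. X^T X = I, projected and lifted back.
rng = np.random.default_rng(0)
n, p, k = 500, 3, 10
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
U = block_lanczos_basis(lambda X: A @ X, rng.standard_normal((n, p)), k)
T = U.T @ A @ U                          # small (k*p) x (k*p) reduced matrix
w, Y = np.linalg.eigh(T)                 # reduced minimizer for this toy case
X = U @ Y[:, :p]                         # approximate solution, X^T X = I
```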

Related content

We consider minimizing functions for which it is expensive to compute the (possibly stochastic) gradient. Such functions are prevalent in reinforcement learning, imitation learning and adversarial training. Our target optimization framework uses the (expensive) gradient computation to construct surrogate functions in a \emph{target space} (e.g. the logits output by a linear model for classification) that can be minimized efficiently. This allows for multiple parameter updates to the model, amortizing the cost of gradient computation. In the full-batch setting, we prove that our surrogate is a global upper-bound on the loss, and can be (locally) minimized using a black-box optimization algorithm. We prove that the resulting majorization-minimization algorithm ensures convergence to a stationary point of the loss. Next, we instantiate our framework in the stochastic setting and propose the $SSO$ algorithm, which can be viewed as projected stochastic gradient descent in the target space. This connection enables us to prove theoretical guarantees for $SSO$ when minimizing convex functions. Our framework allows the use of standard stochastic optimization algorithms to construct surrogates which can be minimized by any deterministic optimization method. To evaluate our framework, we consider a suite of supervised learning and imitation learning problems. Our experiments indicate the benefits of target optimization and the effectiveness of $SSO$.
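
A hedged sketch of the surrogate idea for a linear model, where the targets are the logits $Z = XW$: one expensive gradient evaluation in target space is reused for many cheap parameter updates. The quadratic surrogate, the smoothness constant `L`, and all names are assumptions, not the paper's implementation.

```python
import numpy as np

def sso_step(W, X, y, grad_loss, L, inner_steps=50, lr=0.1):
    """One outer step: build the target-space surrogate at Z0 = X @ W and
    (approximately) minimize it with cheap gradient steps on W."""
    Z0 = X @ W                    # expensive target computation, done once
    G = grad_loss(Z0, y)          # expensive gradient, amortized below
    for _ in range(inner_steps):
        Z = X @ W
        # gradient of the surrogate <G, Z - Z0> + (L/2)||Z - Z0||^2
        S = G + L * (Z - Z0)
        W = W - lr * (X.T @ S) / len(X)
    return W

# e.g. for the squared loss: grad_loss = lambda Z, y: Z - y
```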

We analyze the mixing time of Metropolized Hamiltonian Monte Carlo (HMC) with the leapfrog integrator to sample from a distribution on $\mathbb{R}^d$ whose log-density is smooth, has Lipschitz Hessian in Frobenius norm and satisfies isoperimetry. We bound the gradient complexity to reach $\epsilon$ error in total variation distance from a warm start by $\tilde O(d^{1/4}\text{polylog}(1/\epsilon))$ and demonstrate the benefit of choosing the number of leapfrog steps to be larger than 1. To surpass the previous analysis of the Metropolis-adjusted Langevin algorithm (MALA), which has $\tilde{O}(d^{1/2}\text{polylog}(1/\epsilon))$ dimension dependence (Wu et al., 2022), we reveal a key feature of our proof: the joint distribution of the location and velocity variables of the discretization of the continuous HMC dynamics stays approximately invariant. This key feature, when shown via induction over the number of leapfrog steps, enables us to obtain estimates on moments of various quantities that appear in the acceptance-rate control of Metropolized HMC. Moreover, to deal with another bottleneck in the literature, the control of the overlap of HMC proposal distributions, we provide a new approach to upper-bound the Kullback-Leibler divergence between push-forwards of the Gaussian distribution through HMC dynamics initialized at two different points. Notably, our analysis does not require log-concavity or independence of the marginals, and relies only on an isoperimetric inequality. To illustrate the applicability of our result, we discuss several examples of natural functions that fall into our framework.
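
For reference, a minimal numpy sketch of one step of Metropolized HMC with the leapfrog integrator; the step size and the number of leapfrog steps (larger than 1, per the analysis above) are left as tuning parameters.

```python
import numpy as np

def hmc_step(x, logp, grad_logp, step, n_leap, rng):
    """One Metropolized HMC step targeting exp(logp); x is a float array."""
    v = rng.standard_normal(x.shape)           # fresh Gaussian velocity
    x_new, v_new = x.copy(), v.copy()
    v_new += 0.5 * step * grad_logp(x_new)     # initial half kick
    for i in range(n_leap):
        x_new += step * v_new                  # drift
        g = grad_logp(x_new)
        v_new += (step if i < n_leap - 1 else 0.5 * step) * g
    # Metropolis correction makes the chain exactly invariant for the target
    log_acc = (logp(x_new) - 0.5 * v_new @ v_new) - (logp(x) - 0.5 * v @ v)
    return x_new if np.log(rng.uniform()) < log_acc else x
```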

We present and analyse a hybridized discontinuous Galerkin method for incompressible flow problems using non-affine cells, proving that it preserves a key invariance property that eludes most methods: any irrotational component of the prescribed force is exactly balanced by the pressure gradient and does not influence the velocity field. This invariance property can be preserved in the discrete problem if the incompressibility constraint is satisfied in a sufficiently strong sense. We derive sufficient conditions guaranteeing that discretely divergence-free functions are exactly divergence-free, and give examples of divergence-free finite elements on meshes containing triangular, quadrilateral, tetrahedral, or hexahedral cells generated by a (possibly non-affine) map from their respective reference cells. In the case of quadrilateral cells, we prove an optimal error estimate for the velocity field that does not depend on the pressure approximation. Our theoretical analysis is supported by numerical results.
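
In simplified (Stokes) notation, the invariance property reads: if $(u,p)$ solves $-\nu\Delta u + \nabla p = f$ with $\nabla\cdot u = 0$, then replacing $f$ by $f + \nabla\phi$ yields the solution $(u, p + \phi)$, leaving the velocity untouched. A discrete method inherits this only when its divergence-free test functions $v_h$ are exactly divergence-free, since then (for test functions with vanishing normal trace) $\int_\Omega \nabla\phi\cdot v_h\,\mathrm{d}x = -\int_\Omega \phi\,(\nabla\cdot v_h)\,\mathrm{d}x = 0$.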

We consider the point-to-point lossy coding-for-computing and channel coding problems with two-sided information. We first unify these problems by considering a new generalized problem. Then we develop graph-based characterizations and derive interesting reductions through explicit graph operations, which reduce the number of decision variables. We then design alternating optimization algorithms for the unified problem, so that numerical computations for both the source and channel problems are covered. With the help of additional root-finding techniques, proper multiplier update strategies are developed; thus our algorithms can solve the problems under a given distortion or cost constraint, and their convergence can be proved. In addition, heuristic deflation techniques are introduced that greatly reduce the computational time. Numerical results show the accuracy and efficiency of our algorithms.
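
As a point of reference for the alternating structure, here is a generic Blahut-Arimoto-style sketch for the rate-distortion function, with bisection root-finding on the multiplier to meet a distortion constraint; the paper's graph-based reductions and deflation techniques are not reproduced here.

```python
import numpy as np

def blahut_arimoto(p_x, dist, s, iters=500):
    """Alternating minimization at multiplier (slope) s; dist[i, j] is the
    distortion between source symbol i and reproduction symbol j."""
    q = np.full(dist.shape[1], 1.0 / dist.shape[1])   # reproduction marginal
    for _ in range(iters):
        Q = q * np.exp(-s * dist)                     # optimal conditional
        Q /= Q.sum(axis=1, keepdims=True)
        q = p_x @ Q                                   # re-optimize marginal
    D = np.sum(p_x[:, None] * Q * dist)               # achieved distortion
    R = np.sum(p_x[:, None] * Q * np.log(Q / q))      # achieved rate (nats)
    return R, D

def rate_at_distortion(p_x, dist, D_target, lo=1e-3, hi=1e3):
    """Bisection on the multiplier, using that D(s) is decreasing in s."""
    for _ in range(60):
        s = 0.5 * (lo + hi)
        R, D = blahut_arimoto(p_x, dist, s)
        lo, hi = ((lo, s) if D < D_target else (s, hi))
    return R
```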

We propose, analyze and implement a virtual element discretization for an interfacial poroelasticity-elasticity consolidation problem. The formulation of the time-dependent poroelasticity equations uses displacement, fluid pressure, and total pressure, and the elasticity equations are written in the displacement-pressure formulation. The construction of the virtual element scheme does not require Lagrange multipliers to impose the transmission conditions (continuity of displacement and total traction, and no-flux for the fluid) on the interface. We show the stability and convergence of the virtual element method for different polynomial degrees, and the error bounds are robust with respect to delicate model parameters (such as the Lamé constants, permeability, and storativity coefficient). Finally, we provide numerical examples that illustrate the properties of the scheme.
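
In the (non-interfacial) Biot model, the displacement-fluid pressure-total pressure formulation referred to above can be written, with total pressure $\varphi = \alpha p - \lambda\,\nabla\cdot u$, as $-2\mu\,\nabla\cdot\varepsilon(u) + \nabla\varphi = f$ and $(c_0 + \alpha^2/\lambda)\,\partial_t p - (\alpha/\lambda)\,\partial_t\varphi - \nabla\cdot(\kappa\nabla p) = g$ (a hedged restatement in standard notation; the paper couples this with an elasticity subdomain across the interface). Because $\lambda$ enters only through $1/\lambda$, this splitting is what allows error bounds that remain robust in the nearly incompressible limit $\lambda \to \infty$.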

End-to-end backpropagation has a few shortcomings: it requires loading the entire model during training, which can be impossible in constrained settings, and suffers from three locking problems (forward locking, update locking and backward locking), which prohibit training the layers in parallel. Solving layer-wise optimization problems can address these issues and has been used in on-device training of neural networks. We develop a layer-wise training method, particularly well adapted to ResNets, inspired by the minimizing movement scheme for gradient flows in distribution space. The method amounts to a kinetic-energy regularization of each block that makes the blocks optimal transport maps and endows them with regularity. It works by alleviating the stagnation problem observed in layer-wise training, whereby greedily trained early layers overfit and deeper layers stop increasing test accuracy after a certain depth. We show on classification tasks that the test accuracy of block-wise trained ResNets is improved when using our method, whether the blocks are trained sequentially or in parallel.
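
A hedged PyTorch sketch of the per-block objective: a local task loss through a small auxiliary head plus the kinetic-energy (transport-cost) term; the head, the weight `tau`, and the names are assumptions. For a ResNet block $x \mapsto x + f(x)$, the regularizer reduces to $\|f(x)\|^2$.

```python
import torch
import torch.nn.functional as F

def block_loss(block, head, x, y, tau=0.1):
    """Greedy layer-wise objective with kinetic-energy regularization.
    block: one shape-preserving ResNet block; head: auxiliary classifier."""
    z = block(x)
    task = F.cross_entropy(head(z), y)                  # local task loss
    kinetic = (z - x).pow(2).flatten(1).sum(1).mean()   # transport cost
    return task + tau * kinetic
```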

The matrix factor model is drawing growing attention for simultaneous two-way dimension reduction of well-structured matrix-valued observations. This paper focuses on robust statistical inference for the matrix factor model in the ``diverging dimension" regime. We derive the convergence rates of the robust estimators for the loadings, factors and common components under a finite second-moment assumption on the idiosyncratic errors. In addition, the asymptotic distributions of the estimators are derived under mild conditions. We propose a rank-minimization and an eigenvalue-ratio method to estimate the pair of factor numbers consistently. Numerical studies confirm that the iterative Huber regression algorithm is a practical and reliable approach for the estimation of the matrix factor model, especially in cases with heavy-tailed idiosyncratic errors. We illustrate the practical usefulness of the proposed methods on two real datasets, one on financial portfolios and one on the macroeconomic indices of China.
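
The eigenvalue-ratio step is easy to state in code; a minimal (non-robust) numpy sketch for the number of row factors is below, where the paper's robust variant would replace the pooled sample second moment with a Huber-type estimator. The column factor number is obtained by applying the same routine to the transposed observations.

```python
import numpy as np

def eigenvalue_ratio_k(X_list, kmax=8):
    """Estimate the number of row factors of matrix observations X_t."""
    M = sum(X @ X.T for X in X_list) / len(X_list)   # pooled row second moment
    lam = np.sort(np.linalg.eigvalsh(M))[::-1]       # descending eigenvalues
    ratios = lam[:kmax] / lam[1:kmax + 1]
    return int(np.argmax(ratios)) + 1                # argmax of lam_i / lam_{i+1}
```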

In many practical applications including remote sensing, multi-task learning, and multi-spectrum imaging, data are described as a set of matrices sharing a common column space. We consider the joint estimation of such matrices from their noisy linear measurements. We study a convex estimator regularized by a pair of matrix norms. The measurement model corresponds to block-wise sensing and the reconstruction is possible only when the total energy is well distributed over blocks. The first norm, which is the maximum-block-Frobenius norm, favors such a solution. This condition is analogous to the notion of low-spikiness in matrix completion or column-wise sensing. The second norm, which is a tensor norm on a pair of suitable Banach spaces, induces low-rankness in the solution together with the first norm. We demonstrate that the joint estimation provides a significant gain over the individual recovery of each matrix when the number of matrices sharing a column space and the ambient dimension of the shared column space are large relative to the number of columns in each matrix. The convex estimator is cast as a semidefinite program and an efficient ADMM algorithm is derived. The empirical behavior of the convex estimator is illustrated using Monte Carlo simulations and recovery performance is compared to existing methods in the literature.
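
The first regularizer is simple to write down; a small sketch, assuming the blocks are given as a list of matrices:

```python
import numpy as np

def max_block_frobenius(Xs):
    """Maximum-block-Frobenius norm of matrices X_1, ..., X_M sharing a
    column space; penalizing it favors energy spread evenly over blocks."""
    return max(np.linalg.norm(X, 'fro') for X in Xs)
```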

Non-convex optimization is ubiquitous in modern machine learning. Researchers devise non-convex objective functions and optimize them using off-the-shelf optimizers such as stochastic gradient descent and its variants, which leverage the local geometry and update iteratively. Even though solving non-convex functions is NP-hard in the worst case, the optimization quality in practice is often not an issue -- optimizers are largely believed to find approximate global minima. Researchers hypothesize a unified explanation for this intriguing phenomenon: most of the local minima of the practically-used objectives are approximately global minima. We rigorously formalize it for concrete instances of machine learning problems.

Image segmentation remains an open problem, especially when the intensities of the objects of interest overlap due to the presence of intensity inhomogeneity (also known as a bias field). To segment images with intensity inhomogeneities, a bias-correction-embedded level set model is proposed in which Inhomogeneities are Estimated by Orthogonal Primary Functions (IEOPF). In the proposed model, the smoothly varying bias is estimated by a linear combination of a given set of orthogonal primary functions. An inhomogeneous intensity clustering energy is then defined, and membership functions of the clusters described by the level set function are introduced to rewrite the energy as the data term of the proposed model. As in popular level set methods, a regularization term and an arc-length term are also included to regularize and smooth the level set function, respectively. The proposed model is then extended to multichannel and multiphase patterns to segment colour images and images with multiple objects, respectively. It has been extensively tested on both synthetic and real images that are widely used in the literature, as well as the public BrainWeb and IBSR datasets. Experimental results and comparisons with state-of-the-art methods demonstrate the advantages of the proposed model in terms of bias correction and segmentation accuracy.
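
A minimal numpy sketch of the bias model, assuming a tensor-product Legendre basis as the orthogonal primary functions (the paper allows any given orthogonal family); `coeffs` holds $k^2$ combination weights.

```python
import numpy as np
from numpy.polynomial import legendre

def bias_field(coeffs, shape):
    """Smooth bias field b = sum_ij c_ij P_i(y) P_j(x) on [-1, 1]^2."""
    k = int(np.sqrt(len(coeffs)))
    ys = np.linspace(-1, 1, shape[0])
    xs = np.linspace(-1, 1, shape[1])
    Y, X = np.meshgrid(ys, xs, indexing='ij')
    field = np.zeros(shape)
    for i in range(k):
        for j in range(k):
            ci = np.zeros(i + 1); ci[i] = 1          # coefficients of P_i
            cj = np.zeros(j + 1); cj[j] = 1          # coefficients of P_j
            field += coeffs[i * k + j] * legendre.legval(Y, ci) * legendre.legval(X, cj)
    return field
```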
