Domain decomposition methods are among the most efficient for solving sparse linear systems of equations. Their effectiveness relies on a judiciously chosen coarse space. Originally introduced and theoretically proven efficient for self-adjoint operators, spectral coarse spaces have in recent years been proposed for indefinite and non-self-adjoint operators. This paper presents a new spectral coarse space that, unlike most existing spectral coarse spaces, can be constructed in a fully algebraic way. We present a theoretical convergence result for Hermitian positive definite diagonally dominant matrices. Numerical experiments and comparisons against state-of-the-art preconditioners from the multigrid community show that the resulting two-level Schwarz preconditioner is efficient, especially for non-self-adjoint operators, where it outperforms the state of the art.
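As a rough illustration of the two-level structure described above, the following Python sketch assembles a generic two-level additive Schwarz preconditioner for a toy 1D Laplacian; the overlapping subdomains and the piecewise-constant coarse basis `Z` are placeholder choices, not the spectral, fully algebraic coarse space proposed in the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def two_level_additive_schwarz(A, subdomains, Z):
    """Build v -> M^{-1} v for a two-level additive Schwarz preconditioner.

    A          : sparse SPD matrix (n x n)
    subdomains : list of index arrays covering {0, ..., n-1}
    Z          : (n x m) dense coarse basis -- a placeholder here, not the
                 spectral coarse space of the paper
    """
    local = [(idx, np.linalg.inv(A[idx][:, idx].toarray())) for idx in subdomains]
    Ac_inv = np.linalg.inv(Z.T @ (A @ Z))        # coarse operator, inverted once

    def apply(v):
        out = Z @ (Ac_inv @ (Z.T @ v))           # coarse-space correction
        for idx, Ai_inv in local:
            out[idx] += Ai_inv @ v[idx]          # local subdomain solves
        return out
    return apply

# toy test: 1D Laplacian, two overlapping subdomains, piecewise-constant coarse space
n = 40
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
subs = [np.arange(0, 24), np.arange(16, n)]
Z = np.zeros((n, 2)); Z[:20, 0] = 1.0; Z[20:, 1] = 1.0
M = spla.LinearOperator((n, n), matvec=two_level_additive_schwarz(A, subs, Z))
x, info = spla.cg(A, np.ones(n), M=M)
```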
In this paper, an upwind GFDM is developed for coupled heat and mass transfer problems in porous media. The GFDM is a meshless method that obtains difference schemes for spatial derivatives by combining Taylor expansions over local node influence domains with weighted least squares. The first-order single-point upstream scheme used in FDM/FVM-based reservoir simulators is introduced into the GFDM to form the upwind GFDM; on this basis, a sequentially coupled discrete scheme of the pressure diffusion equation and the heat convection-conduction equation is solved to obtain pressure and temperature profiles. This paper demonstrates that the method yields a meshless solution of the convection-diffusion equation with a stable upwind effect. For porous flow problems, the upwind GFDM is more practical and stable than manually adjusting the influence domain based on prior information about the flow field to achieve an upwind effect. Two types of calculation error are analyzed, and three numerical examples illustrate the good accuracy and convergence of the upwind GFDM for heat and mass transfer problems in porous media; they also indicate that increasing the radius of the node influence domain increases the calculation error of the temperature profiles. Overall, the upwind GFDM discretizes the computational domain using only a point cloud, which is generated with far fewer topological constraints than a mesh, while achieving computational performance comparable to mesh-based approaches; it therefore has great potential to be developed into a general-purpose numerical simulator for porous flow problems in domains with complex geometry.
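The core GFDM ingredient mentioned above (Taylor expansion plus weighted least squares over a node's influence domain) can be sketched compactly; the Gaussian weight kernel and the influence radius below are illustrative choices, and the upwind modification, which takes convective terms from the upstream node only, is indicated in a comment rather than implemented.

```python
import numpy as np

def gfdm_derivatives(xc, uc, pts, u, radius):
    """Weighted least-squares GFDM approximation of (ux, uy, uxx, uxy, uyy)
    at node xc from a scattered cloud of neighbor nodes.

    Taylor expansion: u_j - u_c ~ dx*ux + dy*uy + dx^2/2*uxx + dx*dy*uxy + dy^2/2*uyy.
    (An upwind variant would, in addition, evaluate convective terms using
    the upstream node only, as in the single-point upstream scheme.)
    """
    dx, dy = (pts - xc).T
    A = np.column_stack([dx, dy, 0.5 * dx**2, dx * dy, 0.5 * dy**2])
    w = np.exp(-4.0 * (dx**2 + dy**2) / radius**2)   # illustrative weight kernel
    sw = np.sqrt(w)
    coeffs, *_ = np.linalg.lstsq(A * sw[:, None], (u - uc) * sw, rcond=None)
    return coeffs

# check on u(x, y) = x^2 + 3xy around the origin: expect ~[0, 0, 2, 3, 0]
rng = np.random.default_rng(0)
pts = rng.uniform(-0.1, 0.1, size=(30, 2))
u = pts[:, 0]**2 + 3 * pts[:, 0] * pts[:, 1]
print(gfdm_derivatives(np.zeros(2), 0.0, pts, u, radius=0.1))
```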
We introduce a new constrained optimization method for policy gradient reinforcement learning, which uses two trust regions to regulate each policy update. In addition to using proximity to a single old policy as the first trust region, as in prior work, we form a second trust region by constructing a virtual policy that represents a wide range of past policies. We then constrain the new policy to stay close to this virtual policy, which is beneficial when the old policy performs poorly. More importantly, we propose a mechanism that automatically builds the virtual policy from a memory buffer of past policies, providing a new capability for dynamically selecting appropriate trust regions during optimization. Our proposed method, dubbed Memory-Constrained Policy Optimization (MCPO), is examined on a diverse suite of environments including robotic locomotion control, navigation with sparse rewards, and Atari games, and consistently demonstrates competitive performance against recent on-policy constrained policy gradient methods.
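A minimal numpy sketch of the two-trust-region idea for discrete action spaces is given below; the plain averaging of buffered policies into the virtual policy, the penalty coefficients `beta1`/`beta2`, and the function names are illustrative assumptions, not MCPO's actual construction.

```python
import numpy as np

def kl(p, q, eps=1e-8):
    """Row-wise KL(p || q) for batches of discrete action distributions."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def two_trust_region_loss(pi_new, pi_old, policy_buffer, advantages, actions,
                          beta1=1.0, beta2=0.5):
    """Policy-gradient surrogate with two trust-region penalties (sketch).

    pi_new, pi_old : (batch, n_actions) action probabilities
    policy_buffer  : list of past (batch, n_actions) policies
    The 'virtual' policy is taken here as a plain average of the buffer --
    an illustrative choice, not MCPO's actual mechanism.
    """
    pi_virtual = np.mean(np.stack(policy_buffer), axis=0)
    rows = np.arange(len(actions))
    ratio = pi_new[rows, actions] / pi_old[rows, actions]
    penalty = beta1 * kl(pi_old, pi_new) + beta2 * kl(pi_virtual, pi_new)
    return -(ratio * advantages - penalty).mean()

# tiny smoke test with random distributions
rng = np.random.default_rng(0)
probs = lambda: rng.dirichlet(np.ones(4), size=8)
buffer = [probs() for _ in range(5)]
loss = two_trust_region_loss(probs(), probs(), buffer,
                             rng.standard_normal(8), rng.integers(0, 4, size=8))
```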
The monotone variational inequality is a central problem in mathematical programming that unifies and generalizes many important settings such as smooth convex optimization, two-player zero-sum games, and convex-concave saddle point problems. The extragradient method of Korpelevich [1976] is one of the most popular methods for solving monotone variational inequalities. Despite its long history and intensive attention from the optimization and machine learning communities, the following major problem remained open: what is the last-iterate convergence rate of the extragradient method for monotone and Lipschitz variational inequalities with constraints? We resolve this open problem by showing a tight $O\left(\frac{1}{\sqrt{T}}\right)$ last-iterate convergence rate for arbitrary convex feasible sets, which matches the lower bound of Golowich et al. [2020]. Our rate is measured in terms of the standard gap function. The technical core of our result is the monotonicity of a new performance measure, the tangent residual, which can be viewed as an adaptation of the norm of the operator that takes the local constraints into account. To establish this monotonicity, we develop a new approach that combines the power of sum-of-squares programming with the low dimensionality of the extragradient update rule. We believe our approach has many additional applications in the analysis of iterative methods.
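The iteration analyzed above is easy to state; the sketch below runs the projected extragradient method on a toy bilinear saddle point over a box, with an illustrative step size.

```python
import numpy as np

def extragradient(F, project, z0, eta=0.1, iters=2000):
    """Projected extragradient: z_{t+1/2} = P(z_t - eta F(z_t)),
                                z_{t+1}   = P(z_t - eta F(z_{t+1/2}))."""
    z = z0.copy()
    for _ in range(iters):
        z_half = project(z - eta * F(z))
        z = project(z - eta * F(z_half))
    return z

# bilinear saddle point min_x max_y x*y over [-1,1]^2; monotone operator F = (y, -x)
F = lambda z: np.array([z[1], -z[0]])
project = lambda z: np.clip(z, -1.0, 1.0)
print(extragradient(F, project, np.array([0.8, -0.6])))  # converges to (0, 0)
```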
We propose a decomposition method for the spectral peaks in an observed frequency spectrum, which is efficiently acquired via the fast Fourier transform. In contrast to traditional methods that fit waveforms to the spectrum, we approach the problem from a more robust perspective. We model the peaks in the spectrum as pseudo-symmetric functions, whose only constraint is nonincreasing behavior with distance from a central frequency. Our approach is more robust against arbitrary distortion, interference, and noise on the spectrum that may be caused by the observation system. The time complexity of our method is linear, i.e., $O(N)$ per extracted spectral peak. Moreover, the decomposed spectral peaks show pseudo-orthogonal behavior, conforming to a power-preserving equality.
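One plausible reading of the pseudo-symmetric model is sketched below: repeatedly take the spectrum's largest bin as a peak center, extend outward with a running minimum so the extracted peak is nonincreasing away from its center, subtract, and recurse. This greedy procedure is an assumption for illustration, not necessarily the paper's exact algorithm, though it does run in O(N) per peak.

```python
import numpy as np

def extract_peaks(spectrum, n_peaks):
    """Greedy pseudo-symmetric peak decomposition (illustrative sketch).

    Each extracted peak is nonincreasing with distance from its center bin,
    the only constraint of the pseudo-symmetric model. O(N) per peak.
    """
    residual = spectrum.astype(float).copy()
    peaks = []
    for _ in range(n_peaks):
        c = int(np.argmax(residual))
        peak = np.zeros_like(residual)
        peak[c] = residual[c]
        level = residual[c]
        for i in range(c + 1, len(residual)):     # right side: running minimum
            level = min(level, residual[i]); peak[i] = max(level, 0.0)
        level = residual[c]
        for i in range(c - 1, -1, -1):            # left side: running minimum
            level = min(level, residual[i]); peak[i] = max(level, 0.0)
        residual -= peak
        peaks.append(peak)
    return peaks

t = np.arange(256)
x = np.abs(np.fft.rfft(np.sin(2 * np.pi * 0.1 * t) + 0.5 * np.sin(2 * np.pi * 0.3 * t)))
p1, p2 = extract_peaks(x, 2)
```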
The scattering and transmission of harmonic acoustic waves at a penetrable material are commonly modelled by a set of Helmholtz equations. This system of partial differential equations can be rewritten into boundary integral equations defined at the surface of the objects and solved with the boundary element method (BEM). High frequencies or geometrical details require a fine surface mesh, which increases the number of degrees of freedom in the weak formulation. Then, matrix compression techniques need to be combined with iterative linear solvers to limit the computational footprint. Moreover, the convergence of the iterative linear solvers often depends on the frequency of the wave field and the objects' characteristic size. Here, the robust PMCHWT formulation is used to solve the acoustic transmission problem. An operator preconditioner based on on-surface radiation conditions (OSRC) is designed that yields frequency-robust convergence characteristics. Computational benchmarks compare the performance of this novel preconditioned formulation with other preconditioners and boundary integral formulations. The OSRC preconditioned PMCHWT formulation effectively simulates large-scale problems of engineering interest, such as focused ultrasound treatment of osteoid osteoma.
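The generic pattern of operator preconditioning for an iterative BEM solve can be sketched with SciPy's GMRES, where the preconditioner is passed as a LinearOperator; the dense toy matrices below merely stand in for the discretized PMCHWT and OSRC operators, and none of the actual boundary-element machinery is reproduced.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 200
rng = np.random.default_rng(1)
# dense toy stand-ins: A plays the role of the discretized boundary integral
# operator, P the role of an approximate inverse (as an OSRC operator would)
A_dense = np.eye(n) + 0.1 * rng.standard_normal((n, n))
P_dense = np.linalg.inv(np.diag(np.diag(A_dense)))  # crude approximate inverse

A = LinearOperator((n, n), matvec=lambda v: A_dense @ v)
P = LinearOperator((n, n), matvec=lambda v: P_dense @ v)

b = rng.standard_normal(n)
residuals = []
x, info = gmres(A, b, M=P, callback=residuals.append)
print(info, len(residuals))   # 0 on convergence; iteration count via callback
```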
Computing a dense subgraph is a fundamental problem in graph mining, with a diverse set of applications ranging from electronic commerce to community detection in social networks. In many of these applications, the underlying context is better modelled as a weighted hypergraph that keeps evolving with time. This motivates the problem of maintaining the densest subhypergraph of a weighted hypergraph in a {\em dynamic setting}, where the input keeps changing via a sequence of updates (hyperedge insertions/deletions). Previously, the only known algorithm for this problem, due to Hu et al. [HWC17], worked only on unweighted hypergraphs and had an approximation ratio of $(1+\epsilon)r^2$ and an update time of $O(\text{poly}(r, \log n))$, where $r$ denotes the maximum rank of the input across all the updates. We obtain a new algorithm for this problem, which works even when the input hypergraph is weighted. Our algorithm has a significantly improved (near-optimal) approximation ratio of $(1+\epsilon)$ that is independent of $r$, and a similar update time of $O(\text{poly}(r, \log n))$. It is the first $(1+\epsilon)$-approximation algorithm even for the special case of weighted simple graphs. To complement our theoretical analysis, we perform experiments with our dynamic algorithm on large-scale, real-world datasets. Our algorithm significantly outperforms the state of the art [HWC17] in terms of both accuracy and efficiency.
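For context, a standard static baseline for (weighted) densest subhypergraph is greedy peeling, sketched below; this is the classic approach the dynamic algorithm improves upon, not the paper's $(1+\epsilon)$-approximation scheme.

```python
def peel_densest_subhypergraph(vertices, hyperedges):
    """Greedy peeling baseline for weighted densest subhypergraph.

    hyperedges : list of (set_of_vertices, weight)
    density(S) = total weight of hyperedges fully inside S, divided by |S|.
    Classic static baseline -- not the dynamic algorithm of the paper.
    """
    alive = set(vertices)
    edges = [(set(e), w) for e, w in hyperedges]
    best_density, best_set = 0.0, set(alive)
    while alive:
        weight_inside = sum(w for e, w in edges if e <= alive)
        density = weight_inside / len(alive)
        if density > best_density:
            best_density, best_set = density, set(alive)
        # weighted degree of v = total weight of surviving hyperedges containing v
        deg = {v: sum(w for e, w in edges if e <= alive and v in e) for v in alive}
        alive.remove(min(alive, key=deg.get))     # peel the min-degree vertex
    return best_set, best_density

verts = [1, 2, 3, 4, 5]
hedges = [({1, 2, 3}, 2.0), ({2, 3}, 1.0), ({4, 5}, 0.5)]
print(peel_densest_subhypergraph(verts, hedges))  # ({1, 2, 3}, 1.0)
```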
High-dimensional and sparse (HiDS) matrices are frequently encountered in big-data applications such as e-commerce systems and social network services. Performing highly accurate representation learning on such matrices is of great significance, owing to the desire to extract latent knowledge and patterns from them. Latent factor analysis (LFA), which represents an HiDS matrix by learning low-rank embeddings based on its observed entries only, is one of the most effective and efficient approaches to this issue. However, most existing LFA-based models perform such embeddings on an HiDS matrix directly, without exploiting its hidden graph structures, thereby incurring accuracy loss. To address this issue, this paper proposes a graph-incorporated latent factor analysis (GLFA) model. It adopts two ideas: 1) a graph is constructed to identify the hidden high-order interactions (HOI) among nodes described by an HiDS matrix, and 2) a recurrent LFA structure is carefully designed to incorporate HOI, thereby improving the representation learning ability of the resulting model. Experimental results on three real-world datasets demonstrate that GLFA outperforms six state-of-the-art models in predicting the missing data of an HiDS matrix, which clearly supports its strong representation learning ability on HiDS data.
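The plain LFA baseline that GLFA builds on can be sketched in a few lines: stochastic gradient descent on the squared error over observed entries only; the graph-based HOI and recurrent structure of GLFA are deliberately omitted here.

```python
import numpy as np

def lfa_sgd(entries, n_rows, n_cols, k=8, lr=0.01, reg=0.02, epochs=50, seed=0):
    """Plain latent factor analysis on observed entries only (LFA baseline).

    entries : list of (i, j, value) observed cells of the HiDS matrix.
    Learns row factors P (n_rows x k) and column factors Q (n_cols x k)
    so that P[i] @ Q[j] ~ value. GLFA's graph/HOI components are omitted.
    """
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_rows, k))
    Q = 0.1 * rng.standard_normal((n_cols, k))
    for _ in range(epochs):
        for i, j, v in entries:
            pi, qj = P[i].copy(), Q[j].copy()
            err = v - pi @ qj
            P[i] += lr * (err * qj - reg * pi)   # regularized SGD updates
            Q[j] += lr * (err * pi - reg * qj)
    return P, Q

obs = [(0, 0, 5.0), (0, 2, 3.0), (1, 1, 4.0), (2, 0, 1.0)]
P, Q = lfa_sgd(obs, n_rows=3, n_cols=3)
print(P[0] @ Q[2])   # prediction for the observed cell (0, 2)
```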
Low-rank matrix estimation under heavy-tailed noise is challenging, both computationally and statistically. Convex approaches have been proven statistically optimal but suffer from high computational costs, especially since robust loss functions are usually non-smooth. More recently, computationally fast non-convex approaches via sub-gradient descent have been proposed, which, unfortunately, fail to deliver a statistically consistent estimator even under sub-Gaussian noise. In this paper, we introduce a novel Riemannian sub-gradient (RsGrad) algorithm that is not only computationally efficient, with linear convergence, but also statistically optimal, whether the noise is Gaussian or heavy-tailed. Convergence theory is established for a general framework, and specific applications to the absolute loss, Huber loss, and quantile loss are investigated. Compared with existing non-convex methods, ours reveals a surprising phenomenon of dual-phase convergence. In phase one, RsGrad behaves as in typical non-smooth optimization, requiring gradually decaying stepsizes, and it delivers only a statistically sub-optimal estimator, as already observed in the existing literature. Interestingly, during phase two, RsGrad converges linearly as if minimizing a smooth and strongly convex objective, so a constant stepsize suffices. Underlying the phase-two convergence is the smoothing effect of random noise on the non-smooth robust losses in a region close, but not too close, to the truth. Lastly, RsGrad is applicable to low-rank tensor estimation under heavy-tailed noise, where a statistically optimal rate is attainable with the same dual-phase convergence phenomenon, and a novel shrinkage-based second-order moment method is guaranteed to deliver a warm initialization. Numerical simulations confirm our theoretical findings and showcase the superiority of RsGrad over prior methods.
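A much-simplified sketch of the phase-one iteration is given below: a subgradient step on the Huber loss followed by a truncated-SVD retraction to the rank-r set, with geometrically decaying stepsizes. The spectral initialization and the retraction here are illustrative simplifications, not RsGrad's actual manifold machinery or its shrinkage-based initialization.

```python
import numpy as np

def huber_subgrad(r, delta=1.0):
    """Elementwise subgradient of the Huber loss."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def rank_r_truncate(X, r):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def robust_lowrank(Y, r, eta0=0.5, decay=0.95, iters=100):
    """Low-rank estimation under heavy-tailed noise (simplified sketch):
    Huber-loss subgradient step + truncated-SVD retraction, with geometrically
    decaying stepsizes mimicking the 'phase one' schedule."""
    X = rank_r_truncate(Y, r)           # naive spectral initialization
    eta = eta0
    for _ in range(iters):
        G = -huber_subgrad(Y - X)       # subgradient of the sum of Huber losses
        X = rank_r_truncate(X - eta * G, r)
        eta *= decay
    return X

rng = np.random.default_rng(2)
truth = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 50))
Y = truth + rng.standard_t(df=1.5, size=truth.shape)   # heavy-tailed noise
X_hat = robust_lowrank(Y, r=3)
print(np.linalg.norm(X_hat - truth) / np.linalg.norm(truth))
```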
We introduce a fast solver for the phase field crystal (PFC) and functionalized Cahn-Hilliard (FCH) equations with periodic boundary conditions on a rectangular domain, featuring the preconditioned Nesterov accelerated gradient descent (PAGD) method. We discretize these problems with a Fourier collocation method in space and employ various second-order schemes in time. We observe a significant speedup with this solver compared to the preconditioned gradient descent (PGD) method. With the PAGD solver, fully implicit, second-order-in-time schemes are not only feasible for the PFC and FCH equations but, once accuracy requirements are taken into account, can also be more efficient than some semi-implicit schemes. Benchmark computations of five different schemes for the PFC and FCH equations are conducted, and the results indicate that, for the FCH experiments, the fully implicit schemes (the midpoint rule and BDF2, equipped with PAGD as the nonlinear time-marching solver) perform better than their IMEX versions in terms of the computational cost needed to achieve a given precision. For the PFC, the results are less conclusive than in the FCH experiments, which we believe is because the nonlinearity in the PFC equation is milder than in the FCH equation. We also discuss some practical matters in applying the PAGD. We introduce an averaged Newton preconditioner and a sweeping-friction strategy as heuristic ways to choose good preconditioner parameters. The sweeping-friction strategy performs almost as well as the best manually tuned parameters.
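The PAGD iteration itself is short; the sketch below applies it to a toy periodic double-well energy with a diagonal Fourier-space preconditioner, which is far simpler than the PFC/FCH setting and than the averaged Newton preconditioner of the paper. The step size and friction values are illustrative.

```python
import numpy as np

def pagd_minimize(grad, precond_fft, u0, eta=0.3, friction=0.9, iters=500):
    """Preconditioned Nesterov accelerated gradient descent (PAGD) sketch:
    Nesterov extrapolation followed by a gradient step whose preconditioner
    is applied diagonally in Fourier space."""
    u, u_prev = u0.copy(), u0.copy()
    for _ in range(iters):
        y = u + friction * (u - u_prev)                  # momentum/extrapolation
        step = np.fft.ifft(precond_fft * np.fft.fft(grad(y))).real
        u_prev, u = u, y - eta * step
    return u

# toy periodic double-well (Allen-Cahn-type) energy, far simpler than PFC/FCH:
# E(u) = sum( eps^2/2 |grad u|^2 + (u^2 - 1)^2 / 4 ), gradient = -eps^2 Lap u + u^3 - u
n, eps = 128, 0.1
k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)             # wavenumbers on [0, 1)
grad = lambda u: eps**2 * np.fft.ifft(k**2 * np.fft.fft(u)).real + u**3 - u
precond = 1.0 / (1.0 + eps**2 * k**2)                    # (I - eps^2 Lap)^{-1} symbol

rng = np.random.default_rng(3)
u = pagd_minimize(grad, precond, rng.uniform(-0.1, 0.1, n))  # drifts toward u = +/- 1
```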
Autoscaling is a critical component for efficient resource utilization with satisfactory quality of service (QoS) in cloud computing. This paper investigates proactive autoscaling for widely used scaling-per-query applications, where scaling is required for each query, such as container registries and function-as-a-service (FaaS). In these scenarios, the workload often exhibits high uncertainty, with complex temporal patterns such as periodicity, noise, and outliers. Conservative strategies that scale out more instances than necessary incur high resource costs, whereas aggressive strategies may result in poor QoS. We present RobustScaler to achieve a superior trade-off between cost and QoS. Specifically, we design a novel autoscaling framework based on non-homogeneous Poisson process (NHPP) modeling and stochastically constrained optimization. Furthermore, we develop a specialized alternating direction method of multipliers (ADMM) to efficiently train the NHPP model, and rigorously prove the QoS guarantees delivered by our optimization-based proactive strategies. Extensive experiments show that RobustScaler outperforms common baseline autoscaling strategies on various real-world traces, with large margins for complex workload patterns.
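A toy version of the proactive provisioning idea is sketched below, assuming a per-bin Poisson arrival model, a simple historical-average intensity estimate, and a quantile-based capacity rule; RobustScaler's ADMM-trained NHPP model and stochastically constrained optimization are not reproduced.

```python
import numpy as np
from scipy.stats import poisson

def provision_instances(historical_counts, per_instance_capacity, qos_level=0.99):
    """Proactive scaling sketch: per time bin, estimate the Poisson intensity by
    averaging historical arrival counts, then provision enough instances so the
    qos_level Poisson quantile of arrivals fits within capacity."""
    lam = historical_counts.mean(axis=0)                  # intensity per bin
    demand = poisson.ppf(qos_level, lam)                  # high quantile of arrivals
    return np.ceil(demand / per_instance_capacity).astype(int)

# toy daily pattern: 24 hourly bins observed over 14 days
rng = np.random.default_rng(4)
base = 50 + 40 * np.sin(np.linspace(0, 2 * np.pi, 24))    # periodic intensity
history = rng.poisson(base, size=(14, 24))
print(provision_instances(history, per_instance_capacity=20))
```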