A novel numerical approach to solving the shallow-water equations on the sphere using high-order numerical discretizations in both space and time is proposed. A space-time tensor formalism is used to express the equations of motion covariantly and to describe the geometry of the rotated cubed-sphere grid. The spatial discretization is done with the direct flux reconstruction method, an alternative formulation of the discontinuous Galerkin approach. The equations of motion are solved in differential form, and the resulting discretization is free from quadrature rules. It is well known that the time step of traditional explicit methods is limited by the phase velocity of the fastest waves. Exponential integration is employed to enable integration with significantly larger time steps and to improve the efficiency of the overall time integration. New multistep-type exponential propagation iterative methods of orders 4, 5, and 6 are constructed and applied to integrate the shallow-water equations in time. These new schemes enable time integration with high-order accuracy without significant increases in computational time compared to low-order methods. The products of exponential matrix functions with vectors required by the exponential schemes are approximated using a complex-step approximation of the Jacobian within the Krylov-based KIOPS (Krylov with incomplete orthogonalization procedure solver) algorithm. The performance of the new numerical methods is evaluated using a set of standard benchmark tests.
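As an illustration of the complex-step idea mentioned above (a minimal sketch, not the paper's KIOPS implementation; the right-hand side f below is a hypothetical stand-in for a discretized operator):

```python
import numpy as np

def jacvec_complex_step(f, u, v, h=1e-30):
    """Approximate the Jacobian-vector product J(u) @ v of a real analytic
    residual f via the complex-step derivative Im(f(u + i*h*v)) / h."""
    return np.imag(f(u + 1j * h * v)) / h

# Toy right-hand side (stands in for a discretized shallow-water operator).
def f(u):
    return np.array([u[0] * u[1], np.sin(u[0]) + u[1] ** 2])

u = np.array([1.0, 2.0])
v = np.array([0.3, -0.7])
print(jacvec_complex_step(f, u, v))
# Exact J @ v for comparison: J = [[u1, u0], [cos(u0), 2*u1]].
J = np.array([[u[1], u[0]], [np.cos(u[0]), 2 * u[1]]])
print(J @ v)
```

Because no subtraction is involved, h can be taken near machine precision without cancellation error, which is what makes the approximation attractive inside Krylov iterations.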
We show that a specific skew-symmetric form of hyperbolic problems leads to energy conservation and an energy bound. Next, the compressible Euler equations are transformed to this skew-symmetric form, and we explain how to obtain an energy estimate. Finally, we show that the new formulation leads to energy-stable and energy-conserving discrete approximations if the scheme is formulated in summation-by-parts form.
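A one-dimensional model calculation (our illustration, not the paper's Euler derivation) indicates why the skew-symmetric split produces an energy bound. For a symmetric matrix function $A(u)$, consider

\[
u_t + \tfrac{1}{2}\,(A(u)\,u)_x + \tfrac{1}{2}\,A(u)\,u_x = 0 .
\]

Multiplying by $u^{\mathsf T}$ and integrating over the domain $\Omega$, integration by parts on the first flux term gives $\int_\Omega u^{\mathsf T}(Au)_x\,dx = [u^{\mathsf T}Au]_{\partial\Omega} - \int_\Omega u_x^{\mathsf T}Au\,dx$, and the symmetry of $A$ cancels the remaining volume terms, leaving

\[
\frac{d}{dt}\,\|u\|^2 = -\big[u^{\mathsf T} A(u)\, u\big]_{\partial\Omega}.
\]

Thus the energy rate is controlled entirely by boundary terms (and the energy is conserved with periodic boundaries); a summation-by-parts discretization mimics the integration by parts exactly, which is what transfers the estimate to the discrete level.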
We consider the deformation of a geological structure with non-intersecting faults that can be represented by a layered system of viscoelastic bodies satisfying rate- and state-dependent friction conditions along the common interfaces. We derive a mathematical model that contains classical Dieterich- and Ruina-type friction as special cases and accounts for possibly large tangential displacements. Semi-discretization in time by a Newmark scheme leads to a coupled system of non-smooth, convex minimization problems for rate and state to be solved in each time step. Additional spatial discretization by a mortar method and piecewise constant finite elements allows for the decoupling of rate and state by a fixed point iteration and efficient algebraic solution of the rate problem by truncated non-smooth Newton methods. Numerical experiments with a spring slider and a layered multiscale system illustrate the behavior of our model as well as the efficiency and reliability of the numerical solver.
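For illustration, a zero-dimensional spring slider with Dieterich aging-law friction can be advanced by the same rate/state alternation: solve the (scalar) rate problem at frozen state, then update the state explicitly. This is a crude sketch with illustrative velocity-strengthening parameters, not the paper's mortar finite element solver:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative rate-and-state parameters (velocity strengthening, a > b,
# so sliding is stable) and spring/loading constants.
a, b, mu0, V0, Dc = 0.015, 0.010, 0.6, 1e-6, 1e-4
k, sigma, eta, Vload = 1e9, 5e6, 1e5, 1e-6

def solve_rate(tau, theta):
    # Rate problem at frozen state: balance spring stress with friction
    # plus damping, sigma*(mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)) + eta*V = tau.
    g = lambda V: (sigma * (mu0 + a * np.log(V / V0)
                            + b * np.log(V0 * theta / Dc)) + eta * V - tau)
    return brentq(g, 1e-40, 1e3)

dt, x, theta = 1.0, 0.0, 10.0                 # start away from steady state
tau0 = sigma * mu0 + eta * Vload              # steady-sliding stress level
for n in range(1000):
    tau = tau0 + k * (Vload * n * dt - x)     # spring stress from the load point
    V = solve_rate(tau, theta)                # rate solve at frozen state
    theta += dt * (1.0 - V * theta / Dc)      # explicit aging-law state update
    x += dt * V
print(f"V/Vload = {V / Vload:.4f}, theta*V/Dc = {theta * V / Dc:.4f}")  # -> ~1, ~1
```

The slider relaxes to steady sliding (V equal to the loading rate and theta*V/Dc equal to one); a production solver would treat the stiff state equation implicitly, as in the Newmark scheme above.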
Statistical depths provide a fundamental generalization of quantiles and medians to data in higher dimensions. This paper proposes a new type of globally defined statistical depth, based upon control theory and eikonal equations, which measures the smallest amount of probability density that has to be passed through along a path from a given point to points outside the support of the distribution: for example, spatial infinity. This depth is easy to interpret and compute, expressively captures multi-modal behavior, and extends naturally to non-Euclidean data. We prove various properties of this depth and discuss computational considerations. In particular, we demonstrate that this notion of depth is robust under an approximate, isometrically constrained adversarial model, a property not enjoyed by the Tukey depth. Finally, we give some illustrative examples in the context of two-dimensional mixture models and MNIST.
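A minimal sketch of the idea (our illustration): approximate the control-theoretic depth by shortest paths on a grid graph whose edge weights are the density mass traversed, so Dijkstra's algorithm plays the role of an eikonal solver. The 4-neighbor graph metric only approximates the true eikonal solution:

```python
import heapq
import numpy as np

# Grid-graph approximation of D(x) = inf over paths from x to outside the
# support of the integral of rho ds, computed by Dijkstra from the
# boundary inward, for a two-bump Gaussian mixture density.
n, half = 81, 6.0
xs = np.linspace(-half, half, n)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
rho = np.exp(-((X - 3)**2 + Y**2) / 2) + np.exp(-((X + 3)**2 + Y**2) / 2)
rho /= rho.sum() * h * h                      # normalize to a probability density

dist = np.full((n, n), np.inf)
pq = []
for i in range(n):                            # escaping past the boundary is free
    for j in (0, n - 1):
        for bi, bj in ((i, j), (j, i)):
            dist[bi, bj] = 0.0
            heapq.heappush(pq, (0.0, bi, bj))
while pq:                                     # Dijkstra inward from the boundary
    d, i, j = heapq.heappop(pq)
    if d > dist[i, j]:
        continue
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < n and 0 <= nj < n:
            nd = d + 0.5 * (rho[i, j] + rho[ni, nj]) * h   # mass along the edge
            if nd < dist[ni, nj]:
                dist[ni, nj] = nd
                heapq.heappush(pq, (nd, ni, nj))

mid = n // 2
i_mode = int(np.argmin(np.abs(xs - 3)))
print("depth at a mode     :", dist[i_mode, mid])
print("depth between modes :", dist[mid, mid])
```

The point midway between the two modes receives a much smaller depth than the modes themselves, because the escaping path can detour through the low-density valley; this is the multi-modal behavior the depth is designed to capture.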
Approximations of optimization problems arise in computational procedures and sensitivity analysis. The resulting effect on solutions can be significant, with even small approximations of components of a problem translating into large errors in the solutions. We specify conditions under which approximations are well behaved in the sense of minimizers, stationary points, and level-sets, and this leads to a framework of consistent approximations. The framework is developed for a broad class of composite problems, which are neither convex nor smooth. We demonstrate the framework using examples from stochastic optimization, neural-network-based machine learning, distributionally robust optimization, penalty and augmented Lagrangian methods, interior-point methods, homotopy methods, smoothing methods, extended nonlinear programming, difference-of-convex programming, and multi-objective optimization. An enhanced proximal method illustrates the algorithmic possibilities. A quantitative analysis supplements the development by furnishing rates of convergence.
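A toy instance of the smoothing-method example (our illustration): replace the nonsmooth term |x| by sqrt(x^2 + eps^2) and observe the minimizers of the approximations converging to the true minimizer x* = 0:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# True problem: min |x| + 0.5*(x - 1)^2, with minimizer x* = 0.
# Smoothed approximations replace |x| by sqrt(x^2 + eps^2).
for eps in (1.0, 0.1, 0.01, 0.001):
    f = lambda x: np.sqrt(x**2 + eps**2) + 0.5 * (x - 1.0)**2
    res = minimize_scalar(f, bounds=(-5, 5), method="bounded")
    print(f"eps = {eps:6.3f}  approximate minimizer = {res.x:+.5f}")
```

The printed minimizers approach 0 as eps shrinks, the behavior a consistent approximation is meant to guarantee.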
High-order implicit shock tracking is a new class of numerical methods to approximate solutions of conservation laws with non-smooth features. These methods align elements of the computational mesh with non-smooth features to represent them perfectly, allowing high-order basis functions to approximate smooth regions of the solution without the need for nonlinear stabilization, which leads to accurate approximations on traditionally coarse meshes. The hallmark of these methods is the underlying optimization formulation whose solution is a feature-aligned mesh and the corresponding high-order approximation to the flow; the key challenge is robustly solving the central optimization problem. In this work, we develop a robust optimization solver for high-order implicit shock tracking methods so they can be reliably used to simulate complex, high-speed, compressible flows in multiple dimensions. The proposed method integrates practical robustness measures into a sequential quadratic programming method, including dimension- and order-independent simplex element collapses, mesh smoothing, and element-wise solution re-initialization, which prove to be necessary to reliably track complex discontinuity surfaces, such as curved and reflecting shocks, shock formation, and shock-shock interaction. A series of nine numerical experiments -- including two- and three-dimensional compressible flows with complex discontinuity surfaces -- are used to demonstrate: 1) the robustness of the solver; 2) that the meshes produced are high-quality and track continuous, non-smooth features in addition to discontinuities; 3) that the method achieves the optimal convergence rate of the underlying discretization even for flows containing discontinuities; and 4) that the method produces highly accurate solutions on extremely coarse meshes relative to approaches based on shock capturing.
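The computational core of any such SQP solver is a quadratic-programming subproblem. The following minimal equality-constrained Newton-KKT step is a generic textbook sketch, not the paper's solver with its robustness measures:

```python
import numpy as np

# One SQP (Newton-KKT) step for: min f(x)  s.t.  c(x) = 0.
# Solves the QP subproblem via the linear KKT system
#   [ H  J^T ] [ dx  ]   [ -grad_f ]
#   [ J   0  ] [ lam ] = [ -c      ]
def sqp_step(grad_f, hess_L, jac_c, c_val):
    n, m = hess_L.shape[0], c_val.size
    K = np.block([[hess_L, jac_c.T],
                  [jac_c, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-grad_f, -c_val]))
    return sol[:n], sol[n:]          # step dx and multiplier estimate

# Toy problem: min x0^2 + x1^2  s.t.  x0 + x1 - 1 = 0, solution (0.5, 0.5).
x = np.array([2.0, -1.0])
for _ in range(5):
    g, H = 2 * x, 2 * np.eye(2)
    J, c = np.array([[1.0, 1.0]]), np.array([x.sum() - 1.0])
    dx, lam = sqp_step(g, H, J, c)
    x = x + dx
print(x)   # -> [0.5, 0.5]
```

The robustness measures named in the abstract (element collapses, mesh smoothing, solution re-initialization) wrap around exactly this kind of step to keep it well posed on deforming meshes.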
Traditional quantile estimators based on one or two order statistics are a common way to estimate distribution quantiles from given samples. These estimators are robust, but their statistical efficiency is not always good enough. A more efficient alternative is the Harrell-Davis quantile estimator, which uses a weighted sum of all order statistics. While this approach provides more accurate estimates for light-tailed distributions, it is not robust. To customize the trade-off between statistical efficiency and robustness, we consider a trimmed modification of the Harrell-Davis quantile estimator, in which we discard order statistics with low weights according to the highest density interval of the beta distribution.
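A sketch of the described estimator, assuming a standard numerical HDI computation and an illustrative HDI mass (the exact trimming rule in the paper may differ):

```python
import numpy as np
from scipy import stats, optimize

def trimmed_harrell_davis(sample, p, coverage=0.95):
    """Harrell-Davis quantile estimate in which order statistics outside the
    highest-density interval (HDI) of the underlying beta distribution are
    discarded and the remaining weights renormalized."""
    x = np.sort(np.asarray(sample))
    n = x.size
    dist = stats.beta(p * (n + 1), (1 - p) * (n + 1))

    # HDI: the narrowest interval of the form [ppf(t), ppf(t + coverage)].
    width = lambda t: dist.ppf(t + coverage) - dist.ppf(t)
    t = optimize.minimize_scalar(width, bounds=(0, 1 - coverage),
                                 method="bounded").x
    lo, hi = dist.ppf(t), dist.ppf(t + coverage)

    # Raw HD weights, then trim to the HDI and renormalize.
    grid = np.arange(n + 1) / n
    w = np.diff(dist.cdf(grid))
    w = np.where((grid[1:] > lo) & (grid[:-1] < hi), w, 0.0)
    return np.dot(w / w.sum(), x)

rng = np.random.default_rng(0)
data = rng.standard_t(df=2, size=100)          # heavy-tailed sample
print(trimmed_harrell_davis(data, p=0.5))      # trimmed median estimate
```

The coverage parameter controls the efficiency/robustness trade-off: smaller coverage discards more extreme order statistics and behaves more like a classical order-statistic estimator.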
We consider the inverse problem of reconstructing the boundary curve of a cavity embedded in a bounded domain. The problem is formulated in two dimensions for the wave equation. We combine the Laguerre transform with the integral equation method and reduce the inverse problem to a system of boundary integral equations. We propose an iterative scheme that linearizes the equation using the Fr\'echet derivative of the forward operator. The application of special quadrature rules results in an ill-conditioned linear system, which we solve using Tikhonov regularization. The numerical results show that the proposed method produces accurate and stable reconstructions.
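For the regularization step, a minimal sketch (our illustration, with a synthetic matrix standing in for the discretized boundary integral operator):

```python
import numpy as np

# Tikhonov-regularized least squares for an ill-conditioned system A x = b:
#   x_alpha = argmin ||A x - b||^2 + alpha*||x||^2
#           = (A^T A + alpha*I)^{-1} A^T b.
def tikhonov(A, b, alpha):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Toy matrix with geometrically decaying singular values, mimicking the
# smoothing character of a discretized boundary integral operator.
rng = np.random.default_rng(1)
n = 50
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = U @ np.diag(0.5 ** np.arange(n)) @ V.T      # cond(A) ~ 5e14
x_true = rng.normal(size=n)
b = A @ x_true + 1e-10 * rng.normal(size=n)     # noisy data

print("naive solve error:", np.linalg.norm(np.linalg.solve(A, b) - x_true))
for alpha in (1e-6, 1e-12, 1e-18):
    err = np.linalg.norm(tikhonov(A, b, alpha) - x_true)
    print(f"alpha = {alpha:.0e}  error = {err:.3e}")
```

The naive solve amplifies the data noise through the small singular values, while the regularized solutions remain stable; choosing alpha trades bias against noise amplification.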
We develop two adaptive discretization algorithms for convex semi-infinite optimization, which terminate after finitely many iterations at approximate solutions of arbitrary precision. In particular, they terminate at a feasible point of the considered optimization problem. Compared to the existing finitely feasible algorithms for general semi-infinite optimization problems, our algorithms work with considerably smaller discretizations and are thus computationally favorable. Also, our algorithms terminate at approximate solutions of arbitrary precision, while for general semi-infinite optimization problems the best possible approximate-solution precision can be arbitrarily bad. All finite optimization subproblems occurring in our algorithms need to be solved only approximately, and continuity is the only regularity assumption on our objective and constraint functions. Applications to parametric and non-parametric regression problems under shape constraints are discussed.
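The classical adaptive-discretization (exchange) loop that such algorithms refine can be sketched as follows; this is the textbook scheme on a toy convex semi-infinite program, and it does not reproduce the finite feasibility guarantee described above:

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

# Convex SIP:  min_x (x0-2)^2 + (x1-2)^2  s.t.  x0 + x1*t <= exp(t) for all
# t in [0, 1].  The semi-infinite constraint keeps the line x0 + x1*t below
# the exponential on the whole interval.
f = lambda x: (x[0] - 2.0)**2 + (x[1] - 2.0)**2
g = lambda x, t: x[0] + x[1] * t - np.exp(t)

T = [0.0, 1.0]                      # initial finite discretization of [0, 1]
x = np.zeros(2)                     # feasible starting point
for it in range(20):
    cons = [{"type": "ineq", "fun": (lambda x, s=s: -g(x, s))} for s in T]
    x = minimize(f, x, constraints=cons).x      # solve discretized problem
    # Separation step: find the most violated index point t* in [0, 1].
    r = minimize_scalar(lambda t: -g(x, t), bounds=(0, 1), method="bounded")
    t_star, viol = r.x, -r.fun
    if viol <= 1e-8:
        break
    T.append(t_star)                # enlarge the discretization and repeat
print("solution:", x, " max violation:", viol)
```

Each round solves a finite problem and adds only the currently most violated index point, which is why the discretizations stay small; the paper's algorithms additionally arrange for termination at a strictly feasible approximate solution.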
We consider the task of learning the parameters of a {\em single} component of a mixture model when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than solving the overall original problem, where one learns the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and improved computational complexity than existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.
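A minimal illustrative instance (ours, not the paper's general algorithm): in a symmetric two-component Gaussian mixture, second moments identify the mean only up to sign, and even crude side information resolves the ambiguity:

```python
import numpy as np

# Symmetric mixture 0.5*N(+mu, I) + 0.5*N(-mu, I): the sample covariance
# reveals mu only up to sign; a noisy copy of the target mean (the side
# information) picks out the desired component.
rng = np.random.default_rng(0)
d, n = 50, 5000
mu = rng.normal(size=d); mu /= np.linalg.norm(mu)
labels = rng.integers(0, 2, n) * 2 - 1                  # hidden +/-1 labels
X = labels[:, None] * mu + rng.normal(size=(n, d))
s = mu + 0.5 * rng.normal(size=d)                       # informative side info

# Moment step: top eigenvector of the sample covariance is ~ +/- mu.
evals, evecs = np.linalg.eigh(X.T @ X / n)
v = evecs[:, -1] * np.sqrt(max(evals[-1] - 1.0, 0.0))
mu_hat = v * np.sign(v @ s)                             # side info fixes the sign
print("estimation error:", np.linalg.norm(mu_hat - mu))
```

Here the side information is only used to break a discrete symmetry, so it can be quite noisy and still be "informative" in the sense of the abstract.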
In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvements in the condition numbers.
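A minimal sketch of the dual approach (our illustration with quadratic local objectives; the step size assumes known spectral bounds): each dual gradient evaluation is a multiplication by the graph Laplacian, i.e., one round of neighbor communication:

```python
import numpy as np

# Nesterov's accelerated gradient on the dual of a consensus problem:
#   min_x sum_i 0.5*(x_i - a_i)^2   s.t.  L x = 0   (L = graph Laplacian).
# The dual gradient is L @ x(lam), so each iteration needs only
# communication between graph neighbors.
rng = np.random.default_rng(0)
m = 20
# Ring-graph Laplacian.
L = 2 * np.eye(m) - np.roll(np.eye(m), 1, 0) - np.roll(np.eye(m), -1, 0)
a = rng.normal(size=m)

step = 1.0 / np.linalg.eigvalsh(L @ L)[-1]   # 1 / Lipschitz const. of dual grad
lam = np.zeros(m)
lam_prev = lam.copy()
for k in range(1, 2000):
    y = lam + (k - 1) / (k + 2) * (lam - lam_prev)   # Nesterov momentum
    x = a - L @ y                                    # primal minimizer at y
    lam_prev, lam = lam, y + step * (L @ x)          # accelerated dual ascent
print("consensus error:", np.abs((a - L @ lam) - a.mean()).max())
```

The primal iterates converge to the network-wide average of the local data, and the spectral gap of L governs the effective condition number of the dual, matching the extra cost mentioned in the abstract.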