In this work, we develop an efficient high order discontinuous Galerkin (DG) method for solving the Electrical Impedance Tomography (EIT) problem. EIT is a highly nonlinear, ill-posed inverse problem in which the interior conductivity of an object is recovered from surface measurements of voltage and current flux. We first propose a new optimization problem, based on recovering the conductivity from the Dirichlet-to-Neumann map, that minimizes the mismatch between the predicted and measured currents on the boundary, and we prove the existence of a minimizer. Numerically, the optimization problem is solved by a third order DG method with quadratic polynomials. Numerical results for several two-dimensional problems with both single and multiple inclusions demonstrate the high accuracy and efficiency of the proposed high order DG method. Analysis and computation for discontinuous conductivities are also presented.
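As a hedged illustration of the kind of objective involved (the paper specifies the precise functional, admissible set, and any regularization), a boundary-mismatch formulation based on the Dirichlet-to-Neumann map typically reads
\[
\min_{\sigma \in \mathcal{A}} \; J(\sigma) = \frac{1}{2} \sum_{k=1}^{K} \left\| \Lambda_{\sigma} f_k - g_k \right\|_{L^2(\partial\Omega)}^{2},
\]
where $\Lambda_{\sigma}$ is the Dirichlet-to-Neumann map of the conductivity $\sigma$, $f_k$ are applied boundary voltages, $g_k$ are the corresponding measured current fluxes, and $\mathcal{A}$ is an admissible set of conductivities.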
An inner-product Hilbert space formulation is defined over the domain of all permutations with ties upon the extended real line. We show that this formulation resolves the common first and second order biases found in the pervasive Kendall and Spearman non-parametric correlation estimators, yielding unbiased minimum variance (Gauss-Markov) estimators. We conclude by showing that, on finite samples, a strictly sub-Gaussian probability distribution is to be preferred for the Kemeny $\tau_{\kappa}$ and $\rho_{\kappa}$ estimators, allowing for the construction of Wald test statistics that are analytically consistent with the Gauss-Markov properties on finite samples.
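For orientation, the minimal sketch below computes a Kendall-type correlation from pairwise sign agreements, with tied pairs scored as zero; the Kemeny $\tau_{\kappa}$ estimator studied here uses its own Hilbert-space normalization over permutations with ties, which this illustration does not reproduce.
\begin{verbatim}
import numpy as np

def pairwise_sign_correlation(x, y):
    # Kendall-type correlation from pairwise sign agreements; tied pairs
    # contribute zero. Illustrative only: the Kemeny tau_kappa estimator
    # uses its own normalization over permutations with ties.
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    i, j = np.triu_indices(n, k=1)               # all pairs i < j
    s = np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
    return s.sum() / (n * (n - 1) / 2)
\end{verbatim}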
The problem of low rank approximation is ubiquitous in science. Traditionally, this problem is solved in unitary invariant norms such as the Frobenius or spectral norm, due to the existence of efficient methods for building approximations. However, recent results reveal the potential of low rank approximations in the Chebyshev norm, which naturally arises in many applications. In this paper, we tackle the problem of building optimal rank-1 approximations in the Chebyshev norm. We investigate the properties of the alternating minimization algorithm for building low rank approximations and demonstrate how to use it to construct an optimal rank-1 approximation. As a result, we propose an algorithm that is capable of building optimal rank-1 approximations in the Chebyshev norm for small matrices.
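A minimal sketch of the alternating minimization step is given below: with one factor fixed, each entry of the other factor solves a one-dimensional Chebyshev fitting problem, written here as a small linear program. The initialization and the LP formulation are our own illustrative choices; the paper's algorithm adds the machinery needed to reach a provably optimal rank-1 approximation.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def cheb_fit_scalar(a, u):
    # Solve min_t max_i |a_i - u_i * t| as a small LP in (t, s).
    A_ub = np.block([[-u[:, None], -np.ones((len(u), 1))],
                     [ u[:, None], -np.ones((len(u), 1))]])
    b_ub = np.concatenate([-a, a])
    res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (0, None)])
    return res.x[0]

def rank1_chebyshev(A, iters=50):
    # Alternating minimization for A ~ u v^T in the Chebyshev (max) norm.
    # Plain alternation may stall at a local minimum; optimality needs more.
    m, n = A.shape
    u = A[:, 0].copy()   # simple initialization (an illustrative assumption)
    for _ in range(iters):
        v = np.array([cheb_fit_scalar(A[:, j], u) for j in range(n)])
        u = np.array([cheb_fit_scalar(A[i, :], v) for i in range(m)])
    return u, v
\end{verbatim}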
We study Glauber dynamics for sampling from discrete distributions $\mu$ on the hypercube $\{\pm 1\}^n$. Recently, techniques based on spectral independence have successfully yielded optimal $O(n)$ relaxation times for a host of different distributions $\mu$. We show that spectral independence is universal: a relaxation time of $O(n)$ implies spectral independence. We then study a notion of tractability for $\mu$, defined in terms of the smoothness of the multilinear extension of its Hamiltonian -- $\log \mu$ -- over $[-1,+1]^n$. We show that Glauber dynamics has relaxation time $O(n)$ for such $\mu$, and using the universality of spectral independence, we conclude that these distributions are also fractionally log-concave and consequently satisfy modified log-Sobolev inequalities. We sharpen our estimates and obtain approximate tensorization of entropy and the optimal $\widetilde{O}(n)$ mixing time for random Hamiltonians, i.e., the classically studied mixed $p$-spin model at sufficiently high temperature. These results have significant downstream consequences for concentration of measure, statistical testing, and learning.
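The dynamics under study is standard: pick a uniformly random coordinate and resample it from its conditional distribution under $\mu$. A minimal sketch follows, with a high-temperature 2-spin Hamiltonian whose coupling strength is an illustrative choice of ours.
\begin{verbatim}
import numpy as np

def glauber_step(x, log_mu, rng):
    # One Glauber step on {-1,+1}^n: pick a coordinate uniformly at random
    # and resample it from the conditional distribution of mu.
    n = len(x)
    i = rng.integers(n)
    x_plus, x_minus = x.copy(), x.copy()
    x_plus[i], x_minus[i] = +1, -1
    lp, lm = log_mu(x_plus), log_mu(x_minus)
    p_plus = 1.0 / (1.0 + np.exp(lm - lp))   # P(x_i = +1 | rest)
    x[i] = +1 if rng.random() < p_plus else -1
    return x

# example: a mixed 2-spin (Ising-type) Hamiltonian at high temperature
rng = np.random.default_rng(0)
n = 20
J = 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
log_mu = lambda x: x @ J @ x
x = rng.choice([-1, 1], size=n)
for _ in range(10 * n):
    x = glauber_step(x, log_mu, rng)
\end{verbatim}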
In this study, we present an integro-differential model to simulate the local spread of infections. The model incorporates a standard susceptible-infected-recovered (\textit{SIR}) model enhanced by an integral kernel, allowing for non-homogeneous mixing between susceptibles and infectives. We define requirements for the kernel function and derive analytical results for both the \textit{SIR} model and a reduced susceptible-infected-susceptible (\textit{SIS}) model, in particular the uniqueness of solutions. In order to optimize the balance between disease containment and the social and political costs associated with lockdown measures, we set up requirements for the implementation of the control function and show examples of three different formulations of the control: continuous and time-dependent; continuous and space- and time-dependent; and piecewise constant, space- and time-dependent. The latter represents reality most closely, as the control cannot be updated at every time and location. We compute the optimal control values for all of these setups; by its nature, the continuous, space- and time-dependent control performs best, yet the discrete setting also yields reasonable results. To validate the numerical results of the integro-differential model, we compare them to an established agent-based model that incorporates social and other microscopic factors more accurately and thus acts as a benchmark for the validity of the integro-differential approach. A close match between the results of both models validates the integro-differential model as an efficient macroscopic proxy. Since computing an optimal control strategy is computationally very expensive for agent-based models, yet comparatively cheap for the integro-differential model, using the proxy model may have interesting implications for future research.
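A minimal sketch of such a kernel-based SIR discretization on a spatial grid, under generic assumptions of our own (row-normalized Gaussian kernel, explicit Euler stepping) that need not match the paper's kernel requirements or numerics:
\begin{verbatim}
import numpy as np

def kernel_sir_step(S, I, R, K, beta, gamma, dt):
    # One explicit Euler step of a kernel-based SIR model on a spatial grid.
    # K[i, j] weights the influence of infectives at location j on
    # susceptibles at location i (non-homogeneous mixing).
    force = beta * S * (K @ I)
    dS = -force
    dI = force - gamma * I
    dR = gamma * I
    return S + dt * dS, I + dt * dI, R + dt * dR

# tiny example: Gaussian mixing kernel on 100 grid points
x = np.linspace(0, 1, 100)
K = np.exp(-((x[:, None] - x[None, :]) / 0.05) ** 2)
K /= K.sum(axis=1, keepdims=True)
S = np.ones_like(x); I = np.zeros_like(x); I[50] = 0.01; S -= I
R = np.zeros_like(x)
for _ in range(1000):
    S, I, R = kernel_sir_step(S, I, R, K, beta=0.5, gamma=0.1, dt=0.1)
\end{verbatim}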
In this paper, we investigate the convergence properties of the stochastic gradient descent (SGD) method and its variants, especially for training neural networks built from nonsmooth activation functions. We develop a novel framework that assigns different timescales to the stepsizes used to update the momentum terms and the variables, respectively. Under mild conditions, we prove the global convergence of our proposed framework in both the single-timescale and two-timescale cases. We show that our proposed framework encompasses a wide range of well-known SGD-type methods, including heavy-ball SGD, SignSGD, Lion, normalized SGD, and clipped SGD. Furthermore, when the objective function adopts a finite-sum formulation, we establish convergence properties for these SGD-type methods based on our proposed framework. In particular, we prove that these SGD-type methods find Clarke stationary points of the objective function with randomly chosen stepsizes and initial points under mild assumptions. Preliminary numerical experiments demonstrate the high efficiency of the analyzed SGD-type methods.
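A minimal sketch of the two-timescale idea for heavy-ball SGD, with illustrative stepsize schedules of our own choosing: the momentum buffer is updated with a larger (faster) stepsize than the variables.
\begin{verbatim}
import numpy as np

def two_timescale_sgd(grad, x0, T, alpha0=0.1, beta0=0.5):
    # Heavy-ball SGD with separate stepsize schedules for the momentum
    # term (fast timescale) and the variables (slow timescale).
    x = x0.copy()
    m = np.zeros_like(x0)
    for k in range(1, T + 1):
        g = grad(x)                       # stochastic gradient oracle
        theta = beta0 / k**0.5            # fast timescale for momentum
        eta = alpha0 / k                  # slow timescale for variables
        m = (1 - theta) * m + theta * g   # momentum tracks the gradient
        x = x - eta * m
    return x
\end{verbatim}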
There are multiple cluster randomised trial designs, which vary in when the clusters cross between control and intervention states, when observations are made within clusters, and how many observations are made at each time point. Identifying the most efficient study design is complex, however, owing to the correlation between observations within clusters and over time. In this article, we present a review of statistical and computational methods for identifying optimal cluster randomised trial designs. We also adapt methods from the experimental design literature for designs with correlated observations to the cluster trial context. We identify three broad classes of methods: methods that use exact formulae for the variance of the treatment effect estimator under specific models to derive algorithms or weights for cluster sequences; generalised methods for estimating weights for experimental units; and combinatorial optimisation algorithms that select an optimal subset of experimental units. We also discuss methods for rounding weights to whole numbers of clusters and extensions to non-Gaussian models. We present results from multiple cluster trial examples comparing the different methods, including determining the optimal allocation of clusters across a set of cluster sequences and selecting the optimal number of observations to make in each cluster-period, for both Gaussian and non-Gaussian models with exchangeable and exponential decay covariance structures.
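As a hedged illustration of the common computational core, the sketch below evaluates the generalised least squares variance of the treatment effect estimator for a candidate parallel design with exchangeable within-cluster correlation; the reviewed methods extend this to cluster sequences over time and to exponential decay structures.
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

def gls_treatment_variance(treat, n_obs, icc, sigma2=1.0):
    # Variance of the GLS treatment effect estimator for a single-period
    # parallel cluster design: treat[c] in {0, 1} is cluster c's arm,
    # n_obs[c] its number of observations, icc the intracluster correlation.
    blocks = [sigma2 * ((1 - icc) * np.eye(m) + icc * np.ones((m, m)))
              for m in n_obs]
    Sigma = block_diag(*blocks)
    X = np.vstack([np.column_stack([np.ones(m), np.full(m, a)])
                   for m, a in zip(n_obs, treat)])
    info = X.T @ np.linalg.solve(Sigma, X)    # X' Sigma^{-1} X
    return np.linalg.inv(info)[1, 1]          # treatment effect variance

# compare two candidate designs with 6 clusters of 10 observations
v_balanced = gls_treatment_variance([0, 0, 0, 1, 1, 1], [10] * 6, icc=0.05)
v_skewed = gls_treatment_variance([0, 0, 1, 1, 1, 1], [10] * 6, icc=0.05)
\end{verbatim}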
For time-dependent PDEs, numerical schemes can be rendered bound-preserving, without losing conservation or accuracy, by a post-processing procedure that solves a constrained minimization in each time step. Such a constrained optimization can be formulated as a nonsmooth convex minimization, which can be solved efficiently by first order optimization methods provided the optimal algorithm parameters are used. By analyzing the asymptotic linear convergence rate of the generalized Douglas-Rachford splitting method, the optimal algorithm parameters can be approximately expressed as a simple function of the number of out-of-bounds cells. We demonstrate the efficiency of this simple choice of algorithm parameters by applying such a limiter to the cell averages of a discontinuous Galerkin scheme solving phase field equations for demanding 3D problems. Numerical tests on a sophisticated 3D Cahn-Hilliard-Navier-Stokes system indicate that the limiter is high order accurate, very efficient, and well-suited for large-scale simulations. In each time step, it takes at most $20$ Douglas-Rachford iterations to enforce bounds and conservation up to round-off error, at a computational cost of at most $80N$, where $N$ is the total number of cells.
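A minimal sketch of the splitting (generic, not the paper's generalized variant with optimally tuned parameters): restore bounds on the cell averages while conserving the total, by alternating the prox of the quadratic-plus-hyperplane term with clipping to the box.
\begin{verbatim}
import numpy as np

def dr_limiter(x0, lo, hi, gamma=1.0, iters=20):
    # Conservative bound limiter:
    #   min ||x - x0||^2  s.t.  lo <= x <= hi  and  sum(x) = sum(x0).
    # Generic Douglas-Rachford sketch; the paper picks gamma (near-)optimally
    # from the number of out-of-bounds cells, which this does not reproduce.
    s, n = x0.sum(), x0.size
    proj_mass = lambda v: v - (v.sum() - s) / n   # mass hyperplane projection
    z = x0.copy()
    for _ in range(iters):
        # prox of (1/2)||x - x0||^2 + indicator{sum(x) = s}
        x = proj_mass((z + gamma * x0) / (1.0 + gamma))
        # prox of the box indicator, i.e. clipping
        y = np.clip(2 * x - z, lo, hi)
        z = z + y - x
    return x   # at convergence x meets the bounds and conserves the total
\end{verbatim}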
We present a method for sampling-based model predictive control that uses a generic physics simulator as the dynamical model. In particular, we propose a Model Predictive Path Integral (MPPI) controller that uses the GPU-parallelizable IsaacGym simulator to compute the forward dynamics of a problem. By doing so, we eliminate the need for manual encoding of robot dynamics and interactions among objects, and allow one to effortlessly solve complex navigation and contact-rich tasks. Since no explicit dynamics modeling is required, the method is easily extendable to different objects and robots. We demonstrate the effectiveness of this method in several simulated and real-world settings, including mobile navigation with collision avoidance, non-prehensile manipulation, and whole-body control for high-dimensional configuration spaces. This method is a powerful and accessible tool for solving a large variety of contact-rich motion planning tasks.
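A minimal sketch of one MPPI planning step, with the simulator rollout abstracted behind a hypothetical rollout_cost callback (in the paper this would be an IsaacGym rollout); the temperature lam and noise scale sigma are tuning parameters.
\begin{verbatim}
import numpy as np

def mppi_update(U, rollout_cost, sigma, lam, K, rng):
    # One MPPI step: sample K control perturbations, roll each out through
    # the dynamics, and average them with exponentiated-cost weights.
    # U has shape (horizon, control_dim); rollout_cost(U) returns the
    # trajectory cost of a control sequence under the dynamics.
    eps = sigma * rng.standard_normal((K,) + U.shape)
    costs = np.array([rollout_cost(U + e) for e in eps])
    w = np.exp(-(costs - costs.min()) / lam)     # softmin weights
    w /= w.sum()
    return U + np.einsum('k,khc->hc', w, eps)    # weighted control update
\end{verbatim}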
This paper is concerned with designing, analyzing, and implementing linear and nonlinear discretization schemes for the distributed optimal control problem (OCP) constrained by the Cahn-Hilliard (CH) equation. We propose three difference schemes to approximate and investigate the solution behaviour of the OCP for the CH equation. We present a convergence analysis of the proposed discretizations and verify our findings through numerical experiments.
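As a hedged illustration (the paper's exact functional, control action, and boundary conditions may differ), distributed OCPs of this type are commonly posed in tracking form:
\[
\min_{u}\; \frac{1}{2}\,\|y - y_d\|_{L^2(Q)}^{2} + \frac{\alpha}{2}\,\|u\|_{L^2(Q)}^{2}
\quad\text{subject to}\quad
\partial_t y = \Delta w + u, \qquad w = -\varepsilon^2 \Delta y + F'(y),
\]
where $y$ is the phase field, $w$ the chemical potential, $y_d$ a desired state, $F$ a double-well potential, $\alpha > 0$ a control cost, and $Q$ the space-time cylinder.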
We present an algorithm for compressing the radiosity view factor model commonly used in radiation heat transfer and computer graphics. We use a format inspired by the hierarchical off-diagonal low rank format, in which elements are recursively partitioned using a quadtree or octree and blocks are compressed using a sparse singular value decomposition -- the hierarchical matrix is assembled using dynamic programming. The motivating application is time-dependent thermal modeling on vast planetary surfaces, with a focus on permanently shadowed craters, which receive energy through indirect irradiance. In this setting, shape models are comprised of a large number of triangular facets which conform to a rough surface. At each time step, a quadratic number of triangle-to-triangle scattered fluxes must be summed; that is, as the sun moves through the sky, we must solve the same view factor system of equations for a potentially unlimited number of time-varying right-hand sides. We first conduct numerical experiments with a synthetic spherical-cap-shaped crater, for which the equilibrium temperature is analytically available. We also test our implementation with triangle meshes of planetary surfaces derived from digital elevation models recovered by orbiting spacecraft. Our results indicate that the compressed view factor matrix can be assembled in quadratic time, comparable to the time it takes to assemble the full view factor matrix itself. Memory requirements during assembly are reduced by a large factor. Finally, for a range of compression tolerances, the size of the compressed view factor matrix and the speed of the resulting matrix-vector product both scale linearly (as opposed to quadratically for the full matrix), resulting in orders-of-magnitude savings in processing time and memory.
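A simplified sketch of the compression pattern, splitting index sets in halves rather than by quadtree/octree and using a dense truncated SVD in place of the paper's sparse SVD and dynamic-programming assembly:
\begin{verbatim}
import numpy as np

def compress_block(F, rows, cols, tol, min_size=64):
    # Store a block dense if small, as truncated SVD factors if numerically
    # low rank, else split both index sets in half and recurse.
    A = F[np.ix_(rows, cols)]
    if min(len(rows), len(cols)) <= min_size:
        return ('dense', A)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = int((s >= tol * s[0]).sum())        # rank at the relative tolerance
    if k <= min(A.shape) // 2:              # worth keeping as low rank
        return ('lowrank', U[:, :k] * s[:k], Vt[:k])
    rm, cm = len(rows) // 2, len(cols) // 2
    return ('split', [compress_block(F, r, c, tol, min_size)
                      for r in (rows[:rm], rows[rm:])
                      for c in (cols[:cm], cols[cm:])])
\end{verbatim}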