In this paper we are concerned with energy-conserving methods for Poisson problems, which can be effectively solved by a suitable generalization of Hamiltonian Boundary Value Methods (HBVMs), a class of energy-conserving methods for Hamiltonian problems. The actual implementation of the methods is fully discussed, with particular emphasis on the conservation of Casimirs. Some numerical tests are reported in order to assess the theoretical findings.
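As a self-contained illustration (not taken from the paper, whose methods are more general), the implicit midpoint rule, the lowest-order member of the HBVM family, conserves a quadratic Hamiltonian and linear Casimirs exactly for a Poisson system $\dot y = B\nabla H(y)$ with constant skew-symmetric $B$; all names and parameters below are ours:

```python
import numpy as np

# Poisson system y' = B A y with B skew-symmetric and H(y) = 1/2 y^T A y.
# The implicit midpoint rule conserves H exactly (A B A is skew), and any
# Runge-Kutta method conserves the linear Casimir y[0] since row 0 of B is zero.
rng = np.random.default_rng(3)
n = 4
S = rng.standard_normal((n, n))
B = S - S.T
B[:, 0] = 0.0
B[0, :] = 0.0                          # e_0 spans the null space: a Casimir
M = rng.standard_normal((n, n))
A = M @ M.T                            # symmetric positive semidefinite
H = lambda v: 0.5 * v @ A @ v

h = 0.05
y = rng.standard_normal(n)
I = np.eye(n)
# midpoint update y_{k+1} = (I - h/2 BA)^{-1} (I + h/2 BA) y_k
step = np.linalg.solve(I - 0.5 * h * B @ A, I + 0.5 * h * B @ A)
H0, c0 = H(y), y[0]
for _ in range(1000):
    y = step @ y
print(abs(H(y) - H0), abs(y[0] - c0))  # both remain at roundoff level
```

The exact energy conservation follows since $H(y_{k+1})-H(y_k)=h\,y_m^T(ABA)\,y_m=0$ for the midpoint value $y_m$, with $ABA$ skew-symmetric.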
We propose and analyze an unfitted finite element method for solving elliptic problems on domains with curved boundaries and interfaces. The approximation space on the whole domain is obtained by directly extending the finite element space defined on interior elements, in the sense that no degrees of freedom are located in boundary/interface elements. The boundary/jump conditions are imposed weakly in the scheme. The method is shown to be stable without any mesh adjustment or special stabilization. Optimal convergence rates under the $L^2$ norm and the energy norm are derived. Numerical results in both two and three dimensions are presented to illustrate the accuracy and robustness of the method.
Isospectral flows appear in a variety of applications, e.g. the Toda lattice in solid state physics or discrete models of two-dimensional hydrodynamics, with the isospectral property often corresponding to mathematically or physically important conservation laws. Their most prominent feature, i.e. the conservation of the eigenvalues of the matrix state variable, should therefore be retained when discretizing these systems. Recently, it was shown how isospectral Runge-Kutta methods can, in the Lie-Poisson case also considered in our work, be obtained through Hamiltonian reduction of symplectic Runge-Kutta methods on the cotangent bundle of a Lie group. We provide the Lagrangian analogue and, in the case of symplectic diagonally implicit Runge-Kutta methods, derive the methods through a discrete Euler-Poincaré reduction. Our derivation relies on a formulation of diagonally implicit isospectral Runge-Kutta methods in terms of the Cayley transform, generalizing earlier work that established this for the implicit midpoint rule. Our methods also generalize earlier variational Lie group integrators, to which, interestingly, they reduce when interpreted as update equations for intermediate time points. From a practical point of view, our results allow for a simple implementation of higher-order isospectral methods, and we demonstrate with numerical experiments that both the isospectral property and the energy are conserved to high accuracy.
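The structural fact behind Cayley-based isospectral updates can be sketched in a few lines: a Cayley-transform similarity update preserves the spectrum of the state matrix exactly. This toy sketch takes the generator $B$ as given, whereas in the actual methods the (stage-dependent) generators are determined implicitly; the names are ours:

```python
import numpy as np

def cayley_isospectral_step(L, B, h):
    """One update L -> C L C^{-1} with the Cayley transform
    C = (I - h/2 B)^{-1} (I + h/2 B) of a skew-symmetric generator B.
    Being a similarity transform, it preserves the eigenvalues of L exactly."""
    n = L.shape[0]
    I = np.eye(n)
    C = np.linalg.solve(I - 0.5 * h * B, I + 0.5 * h * B)
    return C @ L @ np.linalg.inv(C)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
L = A + A.T                      # symmetric matrix state variable
B = A - A.T                      # skew-symmetric generator (fixed, for illustration)
L1 = cayley_isospectral_step(L, B, 0.1)
print(np.linalg.eigvalsh(L))
print(np.linalg.eigvalsh(L1))    # same spectrum up to roundoff
```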
As is well known, the stability of the 3-step backward differentiation formula (BDF3) on variable grids for a parabolic problem was analyzed in [Calvo and Grigorieff, BIT \textbf{42} (2002) 689--701] under the condition $r_k:=\tau_k/\tau_{k-1}<1.199$, where $r_k$ is the adjacent time-step ratio. In this work, we establish a spectral norm inequality that gives an upper bound for the inverse matrix. The BDF3 scheme is then shown to be unconditionally stable under the new condition $r_k\leq 1.405$. Meanwhile, we show that the upper bound of the ratio $r_k$ is less than $\sqrt{3}$ for the BDF3 scheme. In addition, based on the idea of [Wang and Ruuth, J. Comput. Math. \textbf{26} (2008) 838--855; Chen, Yu, and Zhang, arXiv:2108.02910], we design a weighted and shifted BDF3 (WSBDF3) scheme for solving the parabolic problem. We prove that the WSBDF3 scheme is unconditionally stable under the condition $r_k\leq 1.771$, which significantly improves the maximum allowable time-step ratio. Error estimates are obtained from the stability inequality. Finally, numerical experiments are given to illustrate the theoretical results.
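To make the variable-step setting concrete (an illustration with our own assumptions, not the paper's scheme or proof), the following sketch computes variable-step BDF3 weights by Lagrange differentiation and integrates the scalar test problem $u'=\lambda u$ with adjacent step ratios kept within the $r_k\leq 1.405$ bound:

```python
import numpy as np

def bdf_weights(ts):
    """Weights w_i with sum_i w_i u(t_i) ~ u'(t_m) at the last node t_m,
    exact for polynomials of degree <= len(ts)-1 (Lagrange differentiation).
    With 4 nodes this yields the variable-step BDF3 formula."""
    ts = np.asarray(ts, dtype=float)
    m = len(ts) - 1
    w = np.empty(len(ts))
    for i in range(len(ts)):
        others = [j for j in range(len(ts)) if j != i]
        if i == m:
            w[i] = sum(1.0 / (ts[m] - ts[j]) for j in others)
        else:
            num = np.prod([ts[m] - ts[j] for j in others if j != m])
            den = np.prod([ts[i] - ts[j] for j in others])
            w[i] = num / den
    return w

# Solve u' = lam*u with variable steps whose adjacent ratios stay below 1.405.
lam = -1.0
t = [0.0, 0.01, 0.024, 0.0436]             # step ratios 1.4, 1.4
u = [float(np.exp(lam * s)) for s in t]    # exact starting values
ratios = [1.4, 1 / 1.4]                    # alternate within the bound
for k in range(200):
    dt = ratios[k % 2] * (t[-1] - t[-2])
    t.append(t[-1] + dt)
    w = bdf_weights(t[-4:])
    # implicit step for u' = lam*u:  w0*u0 + w1*u1 + w2*u2 + w3*u_new = lam*u_new
    u.append(-(w[0] * u[-3] + w[1] * u[-2] + w[2] * u[-1]) / (w[3] - lam))
print(abs(u[-1] - np.exp(lam * t[-1])))    # small third-order error
```

On a uniform grid the computed weights reduce to the classical BDF3 coefficients $(11/6, -3, 3/2, -1/3)/\tau$.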
The prevalence of multivariate space-time data collected from monitoring networks and satellites, or generated from numerical models, has brought much attention to multivariate spatio-temporal statistical models, where the covariance function plays a key role in modeling, inference, and prediction. For multivariate space-time data, understanding the spatio-temporal variability, within and across variables, is essential in employing a realistic covariance model. Meanwhile, the complexity of generic covariances often makes model fitting very challenging, and simplified covariance structures, including symmetry and separability, can reduce the model complexity and facilitate the inference procedure. However, a careful examination of these properties is needed in real applications. In the work presented here, we formally define these properties for multivariate spatio-temporal random fields and use functional data analysis techniques to visualize them, hence providing intuitive interpretations. We then propose a rigorous rank-based testing procedure to determine whether these simplified covariance properties are suitable for the underlying multivariate space-time data. The good performance of our method is illustrated through synthetic data, for which we know the true structure. We also investigate the covariance of bivariate wind speed, a key variable in renewable energy, over a coastal and an inland area in Saudi Arabia. The Supplementary Material is available online, including the R code for our developed methods.
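As a simple linear-algebra diagnostic of separability (distinct from, and much cruder than, the rank-based test proposed in the paper; names and sizes are ours), a space-time covariance matrix in Kronecker order $C = C_s \otimes C_t$ is exactly separable iff its Van Loan-Pitsianis rearrangement has rank one:

```python
import numpy as np

def rearrange(C, ns, nt):
    """Van Loan-Pitsianis rearrangement: each nt-by-nt block of the
    (ns*nt)-by-(ns*nt) covariance C becomes one row.  The result has
    rank 1 exactly when C = C_s kron C_t for some C_s, C_t."""
    rows = []
    for i in range(ns):
        for j in range(ns):
            block = C[i * nt:(i + 1) * nt, j * nt:(j + 1) * nt]
            rows.append(block.reshape(-1))
    return np.array(rows)

ns, nt = 4, 5
rng = np.random.default_rng(1)
As = rng.standard_normal((ns, ns)); Cs = As @ As.T   # spatial covariance
At = rng.standard_normal((nt, nt)); Ct = At @ At.T   # temporal covariance
C_sep = np.kron(Cs, Ct)                              # separable by construction
s = np.linalg.svd(rearrange(C_sep, ns, nt), compute_uv=False)
print(s[1] / s[0])   # ratio of second to first singular value: ~0 if separable
```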
When are inferences (whether direct-likelihood, Bayesian, or frequentist) obtained from partial data valid? This paper answers this question by offering a new theory of inference with missing data. It proves that, as the sample size increases and the extent of missingness decreases, the mean log-likelihood function generated by partial data while ignoring the missingness mechanism will almost surely converge uniformly to the one that would have been generated by complete data; and if the data are Missing at Random (or "partially missing at random"), this convergence depends only on the sample size. Thus, inferences from partial data, such as posterior modes, uncertainty estimates, confidence intervals, likelihood ratios, and indeed all quantities or features derived from the partial-data log-likelihood function, will be consistently estimated and will approximate their complete-data analogues. This adds to previous research, which had only proved the consistency of the posterior mode. Practical implications of this result are discussed, and the theory is verified using a previous study of International Human Rights Law.
The Virtual Element Method (VEM) is a Galerkin approximation method that extends the Finite Element Method (FEM) to polytopal meshes. In this paper, we present a conforming formulation that generalizes the Scott-Vogelius finite element method for the numerical approximation of the Stokes problem to polygonal meshes within the virtual element framework. In particular, we apply the virtual element approximation space for scalar elliptic problems directly to the vector case and approximate the pressure variable through discontinuous polynomials. We assess the effectiveness of the numerical approximation by investigating the convergence on a manufactured solution problem over a set of representative polygonal meshes. We numerically show that this formulation converges with optimal rates, except in the lowest-order case on triangular meshes, where the method coincides with the $P_1$-$P_0$ Scott-Vogelius scheme, and on square meshes; both situations are well known to be unstable.
Backtracking search algorithms are often used to solve the Constraint Satisfaction Problem (CSP), and their efficiency depends greatly on the variable ordering heuristic. Currently, the most commonly used heuristics are hand-crafted based on expert knowledge. In this paper, we propose a deep reinforcement learning approach to automatically discover new variable ordering heuristics that are better adapted to a given class of CSP instances. We show that directly optimizing the search cost is hard for bootstrapping, and instead propose optimizing the expected cost of reaching a leaf node in the search tree. To capture the complex relations among the variables and constraints, we design a representation scheme based on Graph Neural Networks that can process CSP instances of different sizes and constraint arities. Experimental results on random CSP instances show that the learned policies outperform classical hand-crafted heuristics in terms of minimizing the search tree size, and can effectively generalize to instances larger than those used in training.
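For concreteness, here is a minimal backtracking solver using one classical hand-crafted heuristic of the kind such learned policies are compared against, minimum remaining values (MRV): always branch on the variable with the smallest current domain. The toy CSP encoding (domains as lists, constraints as predicates) is our own illustration:

```python
def solve(domains, constraints):
    """Backtracking CSP search with the hand-crafted MRV variable ordering
    heuristic: branch on the unassigned variable with the fewest values left."""
    unassigned = {v: d for v, d in domains.items() if len(d) > 1}
    if not unassigned:
        return {v: d[0] for v, d in domains.items()}
    var = min(unassigned, key=lambda v: len(unassigned[v]))   # MRV choice
    for val in domains[var]:
        new = dict(domains)
        new[var] = [val]
        if all(c(new) for c in constraints):   # prune clearly violated branches
            sol = solve(new, constraints)
            if sol is not None:
                return sol
    return None

def ne(a, b):
    """Binary 'not equal' constraint, satisfiable w.r.t. current domains."""
    return lambda d: any(x != y for x in d[a] for y in d[b])

# 3-colour a triangle a-b-c plus a pendant vertex d attached to c.
doms = {v: [0, 1, 2] for v in "abcd"}
cons = [ne(*e) for e in [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]]
print(solve(doms, cons))   # → {'a': 0, 'b': 1, 'c': 2, 'd': 0}
```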
We study the problem of learning classification functions from noiseless training samples, under the assumption that the decision boundary is of a certain regularity. We establish universal lower bounds for this estimation problem, for general classes of continuous decision boundaries. For the class of locally Barron-regular decision boundaries, we find that the optimal estimation rates are essentially independent of the underlying dimension and can be realized by empirical risk minimization methods over a suitable class of deep neural networks. These results are based on novel estimates of the $L^1$ and $L^\infty$ entropies of the class of Barron-regular functions.
As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
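The most basic scheme in this design space, uniform affine (asymmetric) quantization with a scale and zero-point, can be sketched as follows; this is a generic illustration, not any particular method from the survey, and the function names are ours:

```python
import numpy as np

def quantize(x, num_bits=8):
    """Uniform affine quantization: map real values onto the integer grid
    [0, 2^b - 1] via a scale and a zero-point, the basic low-precision
    representation used in integer neural-network inference."""
    qmax = 2 ** num_bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0
    zero_point = round(-lo / scale)
    q = np.clip(np.round(x / scale) + zero_point, 0, qmax).astype(np.int64)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map integers back to (approximate) real values."""
    return scale * (q.astype(np.float64) - zero_point)

x = np.linspace(-1.0, 1.0, 11)
q, s, z = quantize(x, num_bits=4)          # 4-bit, the low end discussed above
err = np.max(np.abs(dequantize(q, s, z) - x))
print(err, s / 2)                          # round-trip error vs. half a step
```

Here the round-trip error stays within half a quantization step; the trade-off between `num_bits` and this error is exactly the accuracy-versus-footprint tension the survey describes.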
Because of continuous advances in mathematical programming, Mixed Integer Optimization has become a competitive alternative to popular regularization methods for selecting features in regression problems. The approach exhibits unquestionable foundational appeal and versatility, but also poses important challenges. We tackle these challenges, reducing the computational burden of tuning the sparsity bound (a parameter critical for effectiveness) and improving performance in the presence of feature collinearity and of signals that vary in nature and strength. Importantly, we render the approach efficient and effective in applications of realistic size and complexity, without resorting to relaxations or heuristics in the optimization or abandoning rigorous cross-validation tuning. Computational viability and improved performance in subtler scenarios are achieved with a multi-pronged blueprint that leverages characteristics of the Mixed Integer Programming framework and whitening, a data pre-processing step.
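The estimator that Mixed Integer Optimization targets here is best-subset selection under a sparsity bound $k$. At toy scale the same minimizer can be computed by exhaustive enumeration, which makes the estimator itself concrete (this sketch is purely illustrative; MIO solvers recover the same solution at scales where enumeration is hopeless, and the data and names below are ours):

```python
import numpy as np
from itertools import combinations

def best_subset(X, y, k):
    """Exact best-subset regression with sparsity bound k, by enumerating
    all k-subsets of features and keeping the one with the smallest RSS."""
    n, p = X.shape
    best, best_rss = None, np.inf
    for S in combinations(range(p), k):
        Xs = X[:, list(S)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        r = y - Xs @ beta
        rss = float(r @ r)
        if rss < best_rss:
            best, best_rss = (S, beta), rss
    return best

rng = np.random.default_rng(2)
n, p = 60, 8
X = rng.standard_normal((n, p))
y = 3.0 * X[:, 1] - 2.0 * X[:, 4]          # true support {1, 4}, noiseless
support, beta = best_subset(X, y, k=2)
print(support)   # → (1, 4)
```

In practice the sparsity bound $k$ is the tuning parameter whose cross-validation the abstract refers to, and it is precisely the combinatorial explosion of `combinations(range(p), k)` that MIO formulations avoid.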