This paper introduces a mathematical framework for explicit structural dynamics, employing approximate dual functionals and rowsum mass lumping. We demonstrate that the approach may be interpreted either as a Petrov-Galerkin method that utilizes rowsum mass lumping or as a Galerkin method with a customized higher-order accurate mass matrix. Unlike prior work, our method correctly incorporates Dirichlet boundary conditions while preserving higher-order accuracy. The theoretical results are substantiated by spectral analysis and by a two-dimensional linear benchmark involving a non-linear geometric mapping. Our results reveal that the approach achieves accuracy and robustness comparable to a traditional Galerkin method with the consistent mass formulation, while retaining the explicit nature of the lumped mass formulation.
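For concreteness, rowsum lumping replaces the consistent mass matrix by a diagonal matrix of its row sums. A minimal numpy sketch of that standard step (the element matrix below is the textbook linear 1D element, not taken from this paper):

```python
import numpy as np

def rowsum_lump(M: np.ndarray) -> np.ndarray:
    """Return the rowsum-lumped (diagonal) mass matrix."""
    return np.diag(M.sum(axis=1))

# Consistent mass matrix of a single linear 1D element with unit density
# and length h (standard finite element result, used here for illustration).
h = 0.5
M_consistent = (h / 6.0) * np.array([[2.0, 1.0],
                                     [1.0, 2.0]])
M_lumped = rowsum_lump(M_consistent)
print(M_lumped)  # diag(h/2, h/2): the total mass h is preserved
```

The diagonal structure is what makes the resulting time-stepping scheme fully explicit, since no linear solve with the mass matrix is needed.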
Distributed quantum computing, particularly distributed quantum machine learning, has gained substantial prominence for its capacity to harness the collective power of distributed quantum resources, transcending the limitations of individual quantum nodes. Meanwhile, privacy remains a significant challenge in distributed computing protocols, particularly in standard classical federated learning (FL) scenarios, where the data of participating clients is susceptible to leakage via gradient inversion attacks by the server. This paper presents quantum protocols with quantum communication designed to address the FL problem, strengthen privacy measures, and optimize communication efficiency. In contrast to previous works that leverage expressive variational quantum circuits or differential privacy techniques, we conceal gradient information using quantum states and propose two distinct FL protocols, one based on private inner-product estimation and the other on incremental learning. These protocols offer substantial advances in privacy preservation at low communication cost, forging a path toward efficient quantum communication-assisted FL protocols and contributing to secure distributed quantum machine learning in the quantum computing era.
This work puts forth low-complexity Riemannian subspace descent algorithms for the minimization of functions over the symmetric positive definite (SPD) manifold. Unlike existing Riemannian gradient descent variants, the proposed approach utilizes carefully chosen subspaces that allow the update to be written as a product of the Cholesky factor of the iterate and a sparse matrix. The resulting updates avoid costly matrix operations such as matrix exponentiation and dense matrix multiplication, which are required by almost all other Riemannian optimization algorithms on the SPD manifold. We further identify a broad class of functions, arising in diverse applications such as kernel matrix learning, covariance estimation of Gaussian distributions, maximum likelihood parameter estimation of elliptically contoured distributions, and parameter estimation in Gaussian mixture models, for which the Riemannian gradients can be calculated efficiently. The proposed uni-directional and multi-directional Riemannian subspace descent variants incur per-iteration complexities of $\mathcal{O}(n)$ and $\mathcal{O}(n^2)$, respectively, compared to the $\mathcal{O}(n^3)$ or higher complexity incurred by all existing Riemannian gradient descent variants. The superior runtime and low per-iteration complexity of the proposed algorithms are also demonstrated via numerical tests on large-scale covariance estimation and matrix square root problems.
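For contrast, the sketch below shows one standard Riemannian gradient step on the SPD manifold under the affine-invariant metric, which is the $\mathcal{O}(n^3)$ baseline the abstract refers to; the paper's $\mathcal{O}(n)$/$\mathcal{O}(n^2)$ Cholesky-factor subspace updates are not reproduced here, this only illustrates the expensive operations they avoid.

```python
import numpy as np
from scipy.linalg import expm

def affine_invariant_step(X, egrad, step):
    """X: SPD iterate; egrad: (symmetric) Euclidean gradient; step: step size."""
    rgrad = X @ egrad @ X                 # Riemannian gradient (affine-invariant metric)
    w, V = np.linalg.eigh(X)              # O(n^3) eigendecomposition
    Xh = (V * np.sqrt(w)) @ V.T           # X^{1/2}
    Xh_inv = (V / np.sqrt(w)) @ V.T       # X^{-1/2}
    # Exponential-map update Exp_X(-step * rgrad): the expm call is also O(n^3).
    return Xh @ expm(-step * Xh_inv @ rgrad @ Xh_inv) @ Xh

# Toy usage: f(X) = tr(S X) - log det X, whose Euclidean gradient is S - X^{-1}.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
S = A @ A.T / 4 + np.eye(4)
X = np.eye(4)
X_next = affine_invariant_step(X, S - np.linalg.inv(X), step=0.1)
```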
We extend the use of piecewise orthogonal collocation to computing periodic solutions of renewal equations, which are particularly important in modeling population dynamics. We prove convergence through a rigorous error analysis. Finally, we present numerical experiments confirming the theoretical results, together with a couple of applications in the context of bifurcation analysis.
This paper presents a general methodology for deriving information-theoretic generalization bounds for learning algorithms. The main technical tool is a probabilistic decorrelation lemma based on a change of measure and a relaxation of Young's inequality in $L_{\psi_p}$ Orlicz spaces. Using the decorrelation lemma in combination with other techniques, such as symmetrization, couplings, and chaining in the space of probability measures, we obtain new upper bounds on the generalization error, both in expectation and in high probability, and recover as special cases many of the existing generalization bounds, including the ones based on mutual information, conditional mutual information, stochastic chaining, and PAC-Bayes inequalities. In addition, the Fernique-Talagrand upper bound on the expected supremum of a subgaussian process emerges as a special case.
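For reference, the Orlicz-space notation used above follows the standard textbook definition (not specific to this paper): for $p \ge 1$, let $\psi_p(x) = e^{x^p} - 1$; the Orlicz norm of a random variable $X$ is then

$$\|X\|_{\psi_p} = \inf\left\{ c > 0 : \mathbb{E}\!\left[\psi_p\!\left(\frac{|X|}{c}\right)\right] \le 1 \right\},$$

and $\|X\|_{\psi_2} < \infty$ is one standard characterization of subgaussian random variables.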
We present a rigorous and precise analysis of the maximum degree and the average degree in the dynamic duplication-divergence graph model introduced by Sol\'e, Pastor-Satorras et al., in which the graph grows by a duplication-divergence mechanism, i.e., by iteratively creating a copy of some node and then randomly altering the neighborhood of the new node with probability $p$. This model captures the growth of some real-world processes, e.g., biological or social networks. In this paper, we prove that for some $0 < p < 1$ the maximum degree and the average degree of a duplication-divergence graph on $t$ vertices are asymptotically concentrated with high probability around $t^p$ and $\max\{t^{2 p - 1}, 1\}$, respectively, i.e., they are within at most a polylogarithmic factor of these values with probability at least $1 - t^{-A}$ for any constant $A > 0$.
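As a hedged illustration, here is a simulation of one common duplication-divergence variant, in which each copied edge is retained independently with probability $p$; the exact divergence rule varies across the literature, so this should be read as a sketch rather than the precise model analyzed in the paper.

```python
import random

def duplication_divergence(t, p, seed=None):
    """Grow a graph on t vertices by duplication-divergence (illustrative variant)."""
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}          # seed graph: a single edge
    for v in range(2, t):
        u = rng.randrange(v)        # pick a uniformly random node to duplicate
        kept = {w for w in adj[u] if rng.random() < p}  # divergence step
        adj[v] = kept
        for w in kept:
            adj[w].add(v)
    return adj

g = duplication_divergence(t=1000, p=0.5, seed=42)
max_deg = max(len(nbrs) for nbrs in g.values())
avg_deg = sum(len(nbrs) for nbrs in g.values()) / len(g)
print(max_deg, avg_deg)  # compare with the predicted t^p and max(t^(2p-1), 1)
```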
Steepest descent methods combining complex contour deformation with numerical quadrature provide an efficient and accurate approach for the evaluation of highly oscillatory integrals. However, unless the phase function governing the oscillation is particularly simple, their application requires significant a priori analysis and expert user input: one must determine the appropriate contour deformation and deal with the non-uniform accuracy of standard quadrature techniques when stationary points (saddle points) coalesce with each other or with the endpoints of the original integration contour. In this paper we present a novel algorithm for the numerical evaluation of oscillatory integrals with general polynomial phase functions, which automates the contour deformation process and avoids the difficulties typically encountered with coalescing stationary points and endpoints. The inputs to the algorithm are simply the phase and amplitude functions, the endpoints and orientation of the original integration contour, and a small number of numerical parameters. Through a series of numerical experiments we demonstrate that the algorithm is accurate and efficient over a large range of frequencies, even for examples with a large number of coalescing stationary points and with endpoints at infinity. As a particular application, we use our algorithm to evaluate cuspoid canonical integrals from scattering theory. A MATLAB implementation of the algorithm, called PathFinder, is made available.
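To illustrate the core idea of steepest descent contour deformation on the simplest possible phase, $g(x) = x^2$ (a case with no coalescence, far simpler than what the paper's algorithm handles; PathFinder's own MATLAB interface is not reproduced here), the following Python sketch evaluates $\int_0^\infty e^{i\omega x^2}\,dx$ by rotating the contour onto $x = e^{i\pi/4} t$, where the integrand becomes the non-oscillatory Gaussian $e^{-\omega t^2}$.

```python
import numpy as np

omega = 50.0
# Gauss-Legendre quadrature on [0, L], with L chosen so exp(-omega*t^2) is negligible.
nodes, weights = np.polynomial.legendre.leggauss(40)
L = 10.0 / np.sqrt(omega)
t = 0.5 * L * (nodes + 1.0)
w = 0.5 * L * weights
# On the rotated contour: I = exp(i*pi/4) * int_0^inf exp(-omega*t^2) dt.
I_num = np.exp(1j * np.pi / 4) * np.sum(w * np.exp(-omega * t**2))
I_exact = 0.5 * np.sqrt(np.pi / omega) * np.exp(1j * np.pi / 4)
print(abs(I_num - I_exact))  # small: the deformed integrand is smooth and decaying
```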
The prediction accuracy of machine learning methods is steadily increasing, but the calibration of their uncertainty predictions poses a significant challenge. Numerous works focus on obtaining well-calibrated predictive models, but less is known about reliably assessing model calibration. This limits our ability to know when algorithms for improving calibration have a real effect, and when their apparent improvements are merely artifacts of random noise in finite datasets. In this work, we consider detecting mis-calibration of predictive models using a finite validation dataset as a hypothesis testing problem. The null hypothesis is that the predictive model is calibrated, while the alternative hypothesis is that the deviation from calibration is sufficiently large. We find that detecting mis-calibration is only possible when the conditional probabilities of the classes are sufficiently smooth functions of the predictions. When the conditional class probabilities are H\"older continuous, we propose T-Cal, a minimax optimal test for calibration based on a debiased plug-in estimator of the $\ell_2$-Expected Calibration Error (ECE). We further propose Adaptive T-Cal, a version that is adaptive to unknown smoothness. We verify our theoretical findings with a broad range of experiments, including several popular deep neural network architectures and several standard post-hoc calibration methods. T-Cal is a practical general-purpose tool which, combined with classical tests for discrete-valued predictors, can be used to test the calibration of virtually any probabilistic classification method.
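For orientation, here is a minimal sketch of the plain binned plug-in $\ell_2$-ECE$^2$ for binary classification. It deliberately omits the debiasing correction that underlies T-Cal's minimax optimality; the function name and binning choice are illustrative assumptions.

```python
import numpy as np

def binned_ece_squared(probs, labels, n_bins=15):
    """Plain binned plug-in l2-ECE^2. probs: predicted P(y=1); labels: 0/1."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(probs, bins) - 1, 0, n_bins - 1)
    n = len(probs)
    ece2 = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            gap = labels[mask].mean() - probs[mask].mean()
            ece2 += (mask.sum() / n) * gap**2   # bin-weighted squared gap
    return ece2

# Toy usage with a perfectly calibrated simulator.
rng = np.random.default_rng(0)
p = rng.uniform(size=100_000)
y = (rng.uniform(size=p.size) < p).astype(float)
print(binned_ece_squared(p, y))  # small, but biased upward at finite n
```

The upward bias of this naive estimator under the null is precisely why a debiased estimator is needed for a well-calibrated test.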
The incompressibility method is a counting argument in the framework of algorithmic complexity that permits discovering properties satisfied by most objects of a class. This paper gives preliminary insight into the Kolmogorov complexity of groupoids and some algebras. The incompressibility method shows that almost all groupoids are asymmetric and simple: only trivial or constant homomorphisms are possible. However, highly random groupoids admit subgroupoids with interesting restrictions that reveal intrinsic structural properties. We also study algebraic varieties and ask which equational identities are compatible with randomness.
This paper explores an iterative coupling approach for solving linear thermo-poroelasticity problems, applied as a high-fidelity finite element discretization in the training of projection-based reduced order models. One of the main challenges in addressing coupled multi-physics problems is their complexity and computational expense. In this study, we introduce a decoupled iterative solution approach, integrated with reduced order modeling, aimed at improving the efficiency of the computational algorithm. The iterative coupling technique we employ builds upon the established fixed-stress splitting scheme that has been extensively investigated for Biot's poroelasticity. Leveraging solutions derived from this coupled iterative scheme, the reduced order model employs an additional Galerkin projection onto a reduced basis space spanned by a small number of modes obtained through proper orthogonal decomposition. The effectiveness of the proposed algorithm is demonstrated through numerical experiments, which showcase its computational efficiency.
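A minimal sketch of the proper orthogonal decomposition step described above, assuming the snapshot matrix (one column per stored solution of the iteratively coupled scheme) is already available; the thermo-poroelastic solver itself is not shown, and the tolerance-based truncation rule is an illustrative choice.

```python
import numpy as np

def pod_basis(S: np.ndarray, tol: float = 1e-6) -> np.ndarray:
    """Return the leading left singular vectors capturing 1 - tol of the snapshot energy."""
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]

# Toy usage: random snapshots standing in for full-order solutions.
S = np.random.default_rng(1).standard_normal((500, 40))
V = pod_basis(S)
# Galerkin projection of a full-order operator A onto the reduced space.
A = np.eye(500)
A_r = V.T @ A @ V
```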
Probabilistic variants of Model Order Reduction (MOR) methods have recently emerged to improve the stability and computational performance of classical approaches. In this paper, we propose a probabilistic Reduced Basis Method (RBM) for the approximation of a family of parameter-dependent functions. It relies on a probabilistic greedy algorithm with an error indicator that can be written as an expectation of some parameter-dependent random variable. We discuss practical algorithms relying on Monte Carlo estimates of this error indicator. In particular, when using a Probably Approximately Correct (PAC) bandit algorithm, the resulting procedure is proven to be a weak greedy algorithm with high probability. Intended applications concern the approximation of a parameter-dependent family of functions for which only (noisy) pointwise evaluations are available. As a particular application, we consider the approximation of solution manifolds of linear parameter-dependent partial differential equations with a probabilistic interpretation through the Feynman-Kac formula.
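A heavily hedged sketch of the generic structure of a greedy loop driven by a Monte Carlo error indicator, as described above. Here `sample_error(mu, basis)` is a hypothetical noisy evaluator of the parameter-dependent error; the paper's PAC bandit refinement, which adapts the number of samples per candidate, is not reproduced.

```python
import numpy as np

def probabilistic_greedy(params, sample_error, n_mc=100, n_iter=10):
    """Greedy parameter selection using Monte Carlo estimates of an error indicator."""
    basis = []
    for _ in range(n_iter):
        # Monte Carlo estimate of the error indicator at each candidate parameter.
        est = [np.mean([sample_error(mu, basis) for _ in range(n_mc)])
               for mu in params]
        basis.append(params[int(np.argmax(est))])  # enrich with the worst-approximated case
    return basis

# Toy usage with a synthetic noisy indicator (purely illustrative).
rng = np.random.default_rng(0)
mus = list(np.linspace(0.0, 1.0, 21))
noisy = lambda mu, basis: abs(mu - 0.5) / (1 + len(basis)) + 0.01 * rng.standard_normal()
print(probabilistic_greedy(mus, noisy, n_mc=50, n_iter=3))
```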