Covariance matrices of random vectors contain information that is crucial for modelling. Certain structures and patterns of the covariances (or correlations) may be used to justify parametric models, e.g., autoregressive models. Until now, there have been only a few approaches for testing such covariance structures systematically and in a unified way. In the present paper, we propose such a unified testing procedure, and we exemplify the approach with a large variety of covariance structure models. This includes common structures such as diagonal matrices, Toeplitz matrices, and compound symmetry, but also the more involved autoregressive matrices. We propose hypothesis tests for these structures, and we use bootstrap techniques for better small-sample approximation. The structure of the proposed tests invites adaptation to other covariance patterns by choosing the hypothesis matrix appropriately. We prove their correctness for large sample sizes. The proposed methods require only weak assumptions. With the help of a simulation study, we assess the small-sample properties of the tests. We also analyze a real data set to illustrate the application of the procedure.
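For concreteness, the following is a minimal Python sketch of a bootstrap test of this kind for one particular structure, compound symmetry; the Frobenius-distance statistic and the centered resampling scheme are illustrative stand-ins rather than the paper's exact procedure.

```python
import numpy as np

def cs_projection(S):
    """Project a covariance matrix onto compound symmetry
    (equal variances, equal covariances)."""
    d = S.shape[0]
    var = np.mean(np.diag(S))
    cov = (S.sum() - np.trace(S)) / (d * (d - 1))
    return cov * np.ones((d, d)) + (var - cov) * np.eye(d)

def bootstrap_cs_test(X, n_boot=2000, seed=0):
    """Test H0: Cov(X) has compound symmetry, via a nonparametric
    bootstrap of the distance between the sample covariance and its
    projection onto the hypothesized structure."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    S = np.cov(X, rowvar=False)
    stat = np.linalg.norm(S - cs_projection(S))  # Frobenius norm
    boot = np.empty(n_boot)
    for b in range(n_boot):
        Sb = np.cov(X[rng.integers(0, n, n)], rowvar=False)
        # center at the observed deviation to mimic the null distribution
        boot[b] = np.linalg.norm((Sb - cs_projection(Sb)) - (S - cs_projection(S)))
    return np.mean(boot >= stat)  # bootstrap p-value
```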
Immersed boundary methods are high-order accurate computational tools used to model geometrically complex problems in computational mechanics. While traditional finite element methods require the construction of high-quality boundary-fitted meshes, immersed boundary methods instead embed the computational domain in a background grid. Interpolation-based immersed boundary methods augment existing finite element software to non-invasively implement immersed boundary capabilities through extraction. Extraction interpolates the background basis as a linear combination of Lagrange polynomials defined on a foreground mesh, creating an interpolated basis that can be easily integrated by existing methods. This work extends the interpolation-based immersed boundary method to multi-material and multi-physics problems. Beginning from level-set descriptions of domain geometries, Heaviside enrichment is implemented to accommodate discontinuities in state variable fields across material interfaces. Adaptive refinement with truncated hierarchical B-splines is used to both improve interface geometry representations and resolve large solution gradients near interfaces. Multi-physics problems typically involve coupled fields where each field has unique discretization requirements. This work presents a novel discretization method for coupled problems through the application of extraction, using a single foreground mesh for all fields. Numerical examples illustrate optimal convergence rates for this method in both 2D and 3D, for heat conduction, linear elasticity, and a coupled thermo-mechanical problem. The utility of this method is demonstrated through image-based analysis of a composite sample, where in addition to circumventing typical meshing difficulties, this method reduces the required degrees of freedom compared to classical boundary-fitted finite element methods.
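A minimal sketch of the extraction step under one common choice, interpolation at the foreground Lagrange nodes: the coefficient of background function B_j on Lagrange polynomial l_k is simply B_j evaluated at node x_k. The quadratic basis below is a hypothetical stand-in for an actual background spline basis.

```python
import numpy as np

def extraction_matrix(background_basis, nodes):
    """M[j, k] = B_j(x_k): on a foreground element with Lagrange nodes x_k,
    each background function is interpolated as
    B_j(x) ~ sum_k M[j, k] * l_k(x)."""
    return np.array([[B(x) for x in nodes] for B in background_basis])

# Hypothetical quadratic background functions on [0, 1], three Lagrange nodes.
background_basis = [lambda x: (1 - x) ** 2,
                    lambda x: 2 * x * (1 - x),
                    lambda x: x ** 2]
M = extraction_matrix(background_basis, np.array([0.0, 0.5, 1.0]))
# An existing FE code assembles with the Lagrange basis; pre- and
# post-multiplying element matrices by M and M.T recovers the system
# expressed in the background basis.
```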
This paper addresses a production scheduling problem derived from an industrial use case, focusing on unrelated parallel machine scheduling with a personnel availability constraint. The proposed model optimizes the production plan over a multi-period scheduling horizon, accommodating variations in personnel shift hours within each time period. It assumes personnel are shared among machines, with one person required per machine for setup and for supervision during job processing. The available personnel are fewer than the machines, limiting the number of machines that can operate in parallel. The model aims to minimize the total production time, considering machine-dependent processing times and sequence-dependent setup times, and handles practical scenarios such as machine eligibility constraints and production time windows. A Mixed Integer Linear Programming (MILP) model is introduced to formulate the problem, taking into account both continuous and discrete variables. A two-step solution approach enhances computational speed, first maximizing the number of accepted jobs and then minimizing the total production time. Validation with synthetic problem instances and a real industrial case study of a food processing plant demonstrates the performance of the model and its usefulness in personnel shift planning. The findings offer valuable insights for practical managerial decision-making in the context of production scheduling.
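To illustrate the two-step idea, here is a toy sketch using the PuLP modeling library on a deliberately simplified assignment model; sequencing, sequence-dependent setups, eligibility, and shift structure are omitted, and all data are hypothetical.

```python
from pulp import LpProblem, LpMaximize, LpMinimize, LpVariable, lpSum, LpBinary, value

jobs, machines = range(5), range(2)
p = [[3, 4], [2, 6], [4, 4], [5, 3], [6, 2]]  # p[j][m]: time of job j on machine m
H = 10                                        # capacity (hours) per machine

x = {(j, m): LpVariable(f"x_{j}_{m}", cat=LpBinary) for j in jobs for m in machines}

def add_constraints(prob):
    for j in jobs:
        prob += lpSum(x[j, m] for m in machines) <= 1        # each job at most once
    for m in machines:
        prob += lpSum(p[j][m] * x[j, m] for j in jobs) <= H  # machine capacity

# Step 1: maximize the number of accepted jobs.
step1 = LpProblem("accept_jobs", LpMaximize)
step1 += lpSum(x.values())
add_constraints(step1)
step1.solve()
n_star = round(value(step1.objective))

# Step 2: keep that acceptance level and minimize total production time.
step2 = LpProblem("min_time", LpMinimize)
step2 += lpSum(p[j][m] * x[j, m] for j in jobs for m in machines)
step2 += lpSum(x.values()) == n_star
add_constraints(step2)
step2.solve()
```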
While score-based generative models (SGMs) have achieved remarkable success in numerous image generation tasks, their mathematical foundations are still limited. In this paper, we analyze the approximation and generalization of SGMs in learning a family of sub-Gaussian probability distributions. We introduce a notion of complexity for probability distributions in terms of their relative density with respect to the standard Gaussian measure. We prove that if the log-relative density can be locally approximated by a neural network whose parameters can be suitably bounded, then the distribution generated by empirical score matching approximates the target distribution in total variation at a dimension-independent rate. We illustrate our theory through examples, which include certain mixtures of Gaussians. An essential ingredient of our proof is to derive a dimension-free deep neural network approximation rate for the true score function associated with the forward process, which is of interest in its own right.
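In the standard setup (our notation; weightings and time discretization vary across papers), the forward process is an Ornstein--Uhlenbeck diffusion and the score network is fit by empirical (denoising) score matching:
\[
  \mathrm{d}X_t = -X_t\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t, \qquad X_0 \sim p_0,
  \qquad s(t,x) = \nabla_x \log p_t(x),
\]
\[
  \widehat{s} \in \arg\min_{s_\theta}\; \frac{1}{n}\sum_{i=1}^n \int_{t_0}^{T}
  \mathbb{E}\Big[\big\| s_\theta(t, X_t) - \nabla_{x} \log p_t\big(X_t \mid X_0\big) \big\|^2 \,\Big|\, X_0 = X^{(i)}\Big]\,\mathrm{d}t,
\]
where the transition density $p_t(\cdot \mid x_0)$ of the forward process is explicitly Gaussian.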
We prove explicit uniform two-sided bounds for the phase functions of Bessel functions and of their derivatives. As a consequence, we obtain new enclosures for the zeros of Bessel functions and their derivatives in terms of inverse values of some elementary functions. These bounds are valid, with a few exceptions, for all zeros and all Bessel functions with non-negative indices. We provide numerical evidence showing that our bounds either improve or closely match the best previously known ones.
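The mechanism behind such enclosures can be summarized as follows (up to the normalization of the phase): writing $J_\nu(x) = M_\nu(x)\cos\theta_\nu(x)$ with an increasing phase function $\theta_\nu$, the $k$-th positive zero satisfies $\theta_\nu(j_{\nu,k}) = (k - \tfrac12)\pi$, so two-sided bounds by increasing elementary functions invert to enclosures of the zeros:
\[
  L(x) \le \theta_\nu(x) \le U(x)
  \quad\Longrightarrow\quad
  U^{-1}\big((k - \tfrac12)\pi\big) \;\le\; j_{\nu,k} \;\le\; L^{-1}\big((k - \tfrac12)\pi\big).
\]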
We propose a new randomized method for solving systems of nonlinear equations, which can find sparse solutions or solutions under certain simple constraints. The scheme only takes gradients of component functions and uses Bregman projections onto the solution space of a Newton equation. In the special case of Euclidean projections, the method is known as the nonlinear Kaczmarz method. Furthermore, if the component functions are nonnegative, we are in the setting of optimization under the interpolation assumption, and the method reduces to SGD with the recently proposed stochastic Polyak step size. For general Bregman projections, our method is a stochastic mirror descent with a novel adaptive step size. We prove that in the convex setting each iteration of our method results in a smaller Bregman distance to exact solutions than the standard Polyak step. Our generalization to Bregman projections comes at the price that a convex one-dimensional optimization problem needs to be solved in each iteration, which can typically be done with globalized Newton iterations. Convergence is proved in two classical settings of nonlinearity: for convex nonnegative functions and, locally, for functions which fulfill the tangential cone condition. Finally, we show examples in which the proposed method outperforms similar methods with the same memory requirements.
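In the Euclidean special case the projection step is explicit; a minimal sketch of that nonlinear Kaczmarz iteration follows (the general method replaces this step by a Bregman projection, requiring the one-dimensional convex solve mentioned above). The toy system is ours.

```python
import numpy as np

def nonlinear_kaczmarz(f, jac, x0, n_iter=500, seed=0):
    """Solve f(x) = 0 for f: R^d -> R^m by randomized nonlinear Kaczmarz:
    each step is the Euclidean projection onto the solution set of the
    Newton (linearized) equation of one randomly chosen component."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        i = rng.integers(len(f(x)))
        fi, gi = f(x)[i], jac(x)[i]
        x = x - (fi / (gi @ gi)) * gi  # Polyak-type step along the gradient row
    return x

# Toy system: a circle intersected with a line.
f = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
print(nonlinear_kaczmarz(f, jac, [2.0, 0.5]))  # approaches (1/sqrt(2), 1/sqrt(2))
```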
Mass lumping techniques are commonly employed in explicit time integration schemes for problems in structural dynamics; they both avoid solving costly linear systems with the consistent mass matrix and increase the critical time step. In isogeometric analysis, the critical time step is constrained by so-called "outlier" frequencies, representing the inaccurate high-frequency part of the spectrum. Removing or dampening these high frequencies is paramount for fast explicit solution techniques. In this work, we propose robust mass lumping and outlier removal techniques for nontrivial geometries, including multipatch and trimmed geometries. Our lumping strategies provably do not deteriorate (and often improve) the CFL condition of the original problem and are combined with deflation techniques to remove persistent outlier frequencies. Numerical experiments reveal the advantages of the method, especially for simulations covering large time spans, where it may halve the number of iterations with little or no effect on the numerical solution.
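For context, the classical baseline is row-sum lumping, sketched below; the paper's lumping strategies are more elaborate, and this sketch only illustrates why lumping makes explicit stepping cheap.

```python
import numpy as np

def row_sum_lumping(M):
    """Replace the consistent mass matrix M by the diagonal matrix of its
    row sums, so applying the inverse mass matrix in an explicit time
    step reduces to a componentwise division."""
    return np.diag(M.sum(axis=1))

# In an explicit step, solving M a = f becomes a = f / M_L.diagonal().
```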
We analyze the Schr\"odingerisation method for quantum simulation of a general class of non-unitary dynamics with inhomogeneous source terms. The Schr\"odingerisation technique, introduced in \cite{JLY22a,JLY23}, transforms any linear ordinary or partial differential equation with non-unitary dynamics into a system under unitary dynamics via a warped phase transition that maps the equations into a higher dimension, making them suitable for quantum simulation. The technique can also be applied to equations with inhomogeneous terms, which model sources, forcing, or boundary and interface conditions, and to discrete dynamical systems such as iterative methods in numerical linear algebra, through extra equations in the system. Difficulty arises in the presence of inhomogeneous terms, since they can change the stability of the original system. In this paper, we systematically study, both theoretically and numerically, the important issue of recovering the original variables from the Schr\"odingerized equations, even when the evolution operator contains unstable modes. We show that even with unstable modes one can still construct a stable scheme, yet to recover the original variable one needs to use suitable data in the extended space. We analyze and compare both the discrete and continuous Fourier transforms used in the extended dimension and derive corresponding error estimates, which allows one to use the more appropriate transform for specific equations. We also provide a smoother initialization for the Schr\"odingerized system to gain higher-order accuracy in the extended space. We homogenize the inhomogeneous terms with a stretch transformation, making it easier to recover the original variable. Our recovery technique also provides a simple and generic framework for solving general ill-posed problems in a computationally stable way.
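In outline, for the homogeneous case the warped phase transformation works as follows: given $\frac{\mathrm{d}u}{\mathrm{d}t} = Au$, split $A = H_1 + iH_2$ with Hermitian $H_1 = \tfrac12(A + A^\dagger)$ and $H_2 = \tfrac{1}{2i}(A - A^\dagger)$, and set $w(t,p) = e^{-p}u(t)$ for $p > 0$. Then
\[
  \partial_t w = -H_1\,\partial_p w + i H_2\, w,
\]
and a Fourier transform in $p$ (whether discrete or continuous is precisely the choice analyzed in this paper) gives
\[
  \partial_t \widehat{w}(t,\xi) = i\,\big(H_2 - \xi H_1\big)\,\widehat{w}(t,\xi),
\]
which is unitary since $H_1$ and $H_2$ are Hermitian. The original variable $u$ is recovered by restricting $w$ to suitable values of $p$, the step whose stability in the presence of unstable modes and inhomogeneous terms is the subject of this work.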
This paper delves into a nonparametric estimation approach for the interaction function within diffusion-type particle system models. We introduce two estimation methods based on empirical risk minimization. Our study encompasses an analysis of the stochastic and approximation errors associated with both procedures, along with an examination of certain minimax lower bounds. In particular, we show that there is a natural metric under which the corresponding minimax estimation error of the interaction function converges to zero at the parametric rate. This result is rather surprising given the complexity of the underlying estimation problem and the rather large classes of interaction functions for which the above parametric rate holds.
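A typical model of the kind considered is the interacting particle system (our notation, assuming a pairwise interaction kernel $\phi$)
\[
  \mathrm{d}X_t^i = \frac{1}{N}\sum_{j=1}^{N} \phi\big(X_t^j - X_t^i\big)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t^i,
  \qquad i = 1, \dots, N,
\]
where the interaction function $\phi$ is to be estimated from observed trajectories, e.g., by minimizing an empirical risk over a prescribed function class.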
The classical approach to analyzing extreme value data is based on the generalized Pareto distribution (GPD). When the GPD is used to explain a target variable with a large number of covariates, the shape and scale functions of the covariates included in the GPD are sometimes modeled using generalized additive models (GAM). In contrast to the many applied results, there are no theoretical results on this hybrid technique of GAM and GPD, which motivates us to develop its asymptotic theory. We provide the rate of convergence of the estimators of the shape and scale functions, as well as their local asymptotic normality.
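In symbols (our notation), threshold exceedances $Y$ given covariates $x = (x_1, \dots, x_d)$ are modeled as
\[
  P(Y \le y \mid x) = 1 - \Big(1 + \xi(x)\,\frac{y}{\sigma(x)}\Big)^{-1/\xi(x)},
\]
with the scale and shape given additive structures in the GAM style, e.g.,
\[
  \log \sigma(x) = \sum_{k=1}^{d} s_k(x_k), \qquad \xi(x) = \sum_{k=1}^{d} t_k(x_k),
\]
for smooth univariate functions $s_k$ and $t_k$.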
We introduce an algebraic concept of a frame for abstract conditional independence (CI) models, together with basic operations with respect to which such a frame should be closed: copying and marginalization. Three standard examples of such frames are (discrete) probabilistic CI structures, semi-graphoids, and structural semi-graphoids. We concentrate on those frames which are closed under the operation of set-theoretical intersection because, for these, the respective families of CI models are lattices. This allows one to apply results from lattice theory and formal concept analysis to describe such families in terms of implications among CI statements. The central concept of this paper is that of self-adhesivity, defined in algebraic terms, which is a combinatorial reflection of the self-adhesivity concept studied earlier in the context of polymatroids and information theory. The generalization also leads to a self-adhesivity operator defined on the hyper-level of CI frames. We answer some of the questions related to this approach and raise other open questions. The core of the paper lies in computations. The combinatorial approach to computation might overcome some memory and space limitations of software packages based on polyhedral geometry, in particular if SAT solvers are utilized. We characterize some basic CI families over 4 variables in terms of canonical implications among CI statements. We apply our method in the information-theoretical context to the task of entropic region demarcation over 5 variables.
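For orientation, the semi-graphoid frame mentioned above consists of the CI models closed under the standard implications among CI statements $A \perp B \mid C$ (for disjoint sets $A$, $B$, $C$, $D$):
\begin{align*}
  &\text{symmetry:}      && A \perp B \mid C \;\Rightarrow\; B \perp A \mid C,\\
  &\text{decomposition:} && A \perp BD \mid C \;\Rightarrow\; A \perp B \mid C,\\
  &\text{weak union:}    && A \perp BD \mid C \;\Rightarrow\; A \perp B \mid CD,\\
  &\text{contraction:}   && A \perp B \mid CD \;\wedge\; A \perp D \mid C \;\Rightarrow\; A \perp BD \mid C.
\end{align*}
Implications of this kind, in canonical form, are what the computations produce for the richer frames.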