We present novel improvements in the context of symbol-based multigrid procedures for solving large block structured linear systems. We study the application of an aggregation-based grid transfer operator that transforms the symbol of a block Toeplitz matrix from matrix-valued to scalar-valued at the coarser level. Our convergence analysis of the Two-Grid Method (TGM) reveals the connection between the features of the scalar-valued symbol at the coarser level and the properties of the original matrix-valued one. This allows us to prove the convergence of a V-cycle multigrid with standard grid transfer operators for scalar Toeplitz systems at the coarser levels. Consequently, we extend the class of suitable smoothers for block Toeplitz matrices, focusing on the efficiency of block strategies, particularly the relaxed block Jacobi method. General conditions on the smoothing parameters are derived, with emphasis on practical applications where these parameters can be calculated at negligible computational cost. We test the proposed strategies on linear systems stemming from the discretization of differential problems with $\mathbb{Q}_{d}$ Lagrangian FEM or B-splines with non-maximal regularity. In both cases, the numerical results show computational advantages over existing methods for block structured linear systems.
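As a rough illustration of the smoother discussed above, the following Python sketch applies a relaxed block Jacobi iteration to a matrix with $d \times d$ diagonal blocks; the matrix, right-hand side, block size, and relaxation parameter `omega` are placeholders, and the paper's symbol-based choice of the relaxation parameter is not reproduced here.

```python
import numpy as np

def relaxed_block_jacobi(A, b, d, omega=0.8, iters=10, x=None):
    """Relaxed block Jacobi smoother for a matrix with d x d diagonal blocks.

    Performs x <- x + omega * D^{-1} (b - A x), where D is the
    block-diagonal part of A (blocks of size d); n must be divisible by d.
    """
    n = A.shape[0]
    x = np.zeros(n) if x is None else x.copy()
    # Precompute the inverses of the d x d diagonal blocks.
    Dinv = [np.linalg.inv(A[i:i + d, i:i + d]) for i in range(0, n, d)]
    for _ in range(iters):
        r = b - A @ x
        for k, i in enumerate(range(0, n, d)):
            x[i:i + d] += omega * Dinv[k] @ r[i:i + d]
    return x
```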
For solving large consistent linear systems by iterative methods, and inspired by the maximum residual Kaczmarz method and the randomized block Kaczmarz method, we propose the maximum residual block Kaczmarz method, which is designed to preferentially eliminate the largest block of the residual vector $r_{k}$ at each iteration. To further improve the convergence rate, we also construct the maximum residual average block Kaczmarz method, which avoids the calculation of pseudoinverses in the block iteration: it completes each iteration by projecting the iterate $x_{k}$ onto each row of the selected subset of rows of $A$ and averaging the projections with different extrapolation step sizes. We prove the convergence of both methods and give upper bounds on their convergence rates. Numerical experiments validate our theory and show that the proposed methods outperform some other block Kaczmarz methods.
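The two selection-and-update rules can be sketched as follows; this is a toy Python illustration that assumes a partition of the rows of $A$ into blocks is given, with the extrapolation step size `alpha` and uniform averaging weights chosen for simplicity rather than taken from the paper's analysis.

```python
import numpy as np

def max_residual_block_kaczmarz(A, b, blocks, iters=100, x=None):
    """Toy block Kaczmarz: at each step, project x onto the solution set
    of the row block whose residual sub-vector has the largest norm."""
    x = np.zeros(A.shape[1]) if x is None else x.copy()
    for _ in range(iters):
        r = b - A @ x
        # Select the block that maximizes the residual norm.
        j = max(range(len(blocks)), key=lambda k: np.linalg.norm(r[blocks[k]]))
        x += np.linalg.pinv(A[blocks[j]]) @ r[blocks[j]]   # block projection
    return x

def max_residual_average_block_kaczmarz(A, b, blocks, alpha=1.5, iters=100, x=None):
    """Pseudoinverse-free variant: average the per-row Kaczmarz projections
    within the selected block, scaled by an extrapolation step size alpha."""
    x = np.zeros(A.shape[1]) if x is None else x.copy()
    row_norms2 = (A ** 2).sum(axis=1)
    for _ in range(iters):
        r = b - A @ x
        j = max(range(len(blocks)), key=lambda k: np.linalg.norm(r[blocks[k]]))
        idx = blocks[j]
        # Uniformly averaged single-row Kaczmarz directions within the block.
        x += alpha * ((r[idx] / row_norms2[idx]) @ A[idx]) / len(idx)
    return x
```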
We propose the novel p-branch-and-bound method for solving two-stage stochastic programming problems whose deterministic equivalents are represented by non-convex mixed-integer quadratically constrained quadratic programming (MIQCQP) models. The precision of the solution generated by the p-branch-and-bound method can be adjusted arbitrarily by altering the value of the precision factor p. The proposed method combines two key techniques. The first, named p-Lagrangian decomposition, generates a mixed-integer relaxation of a dual problem with a separable structure for a primal non-convex MIQCQP problem. The second is a version of the classical dual decomposition approach that is applied to solve the Lagrangian dual problem and ensures that integrality and non-anticipativity conditions are met once the optimal solution is obtained. This paper also presents a comparative analysis of the efficiency of the p-branch-and-bound method with two alternative solution methods for the dual problems used as a subroutine: the proximal bundle method and Frank-Wolfe progressive hedging. The latter algorithm relies on the interpolation of linearisation steps, similar to those taken in the Frank-Wolfe method, as an inner loop in classic progressive hedging. The p-branch-and-bound method was tested on randomly generated instances and demonstrated superior performance over the commercial solver Gurobi.
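As a hedged illustration of the linearisation-and-interpolation idea behind Frank-Wolfe progressive hedging, the following minimal Python sketch runs the classical Frank-Wolfe method on a toy convex quadratic over the unit simplex; it shows only the inner linearised step, not the progressive hedging loop or the p-Lagrangian machinery.

```python
import numpy as np

def frank_wolfe_simplex(Q, c, iters=200):
    """Frank-Wolfe on f(x) = 0.5 x'Qx + c'x over the unit simplex.

    Each iteration solves the linearised subproblem (a vertex of the
    simplex) and interpolates towards it -- the same linearisation idea
    used inside Frank-Wolfe progressive hedging."""
    n = len(c)
    x = np.full(n, 1.0 / n)
    for k in range(iters):
        grad = Q @ x + c
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0            # linear minimisation oracle
        gamma = 2.0 / (k + 2.0)             # standard diminishing step size
        x = (1 - gamma) * x + gamma * s     # interpolation step
    return x
```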
Fractional Brownian motion (fBm) trajectories feature both randomness and strong scale-free correlations, challenging generative models to reproduce the intrinsic memory characterizing the underlying process. Here we test a diffusion probabilistic model on a specific dataset of corrupted images corresponding to incomplete Euclidean distance matrices of fBm at various memory exponents $H$. Our dataset implies uniqueness of the data imputation in the regime of low missing ratio, where the remaining partial graph is rigid, providing the ground truth for the inpainting. We find that the conditional diffusion generation stably reproduces the statistics of missing fBm-distributed distances for different values of the exponent $H$. Furthermore, while diffusion models have recently been shown to memorize samples from the training database, we show that diffusion-based inpainting behaves qualitatively differently from database search as the database size increases. Finally, we apply our fBm-trained diffusion model with $H=1/3$ to the completion of chromosome distance matrices obtained in single-cell microscopy experiments, showing its superiority over standard bioinformatics algorithms. Our source code is available on GitHub at //github.com/alobashev/diffusion_fbm.
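The construction of the corrupted inputs can be mimicked as follows; this minimal Python sketch samples a one-dimensional fBm trajectory via a Cholesky factor of its covariance, forms the Euclidean distance matrix, and masks a fraction of entries. The dimension, grid, and masking scheme are illustrative assumptions, not the paper's exact dataset.

```python
import numpy as np

def fbm_distance_matrix(n=64, H=1/3, missing_ratio=0.1, rng=None):
    """Sample a 1D fBm trajectory via Cholesky factorisation of its
    covariance, then build its Euclidean distance matrix and mask a
    fraction of entries (the 'corrupted image' to be inpainted)."""
    rng = np.random.default_rng(rng)
    t = np.arange(1, n + 1)
    # fBm covariance: C(t, s) = 0.5 (t^{2H} + s^{2H} - |t - s|^{2H})
    C = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
               - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    x = np.linalg.cholesky(C) @ rng.standard_normal(n)
    D = np.abs(x[:, None] - x[None, :])            # 1D Euclidean distances
    mask = rng.random((n, n)) < missing_ratio
    mask = np.triu(mask, 1); mask = mask | mask.T  # symmetric missing entries
    D_corrupted = np.where(mask, np.nan, D)        # NaN marks missing data
    return D, D_corrupted, mask
```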
We present a fast generative modeling approach for resistive memories that reproduces the complex statistical properties of real-world devices. To enable efficient modeling of analog circuits, the model is implemented in Verilog-A. Trained on extensive measurement data from integrated 1T1R arrays (6,000 cycles of 512 devices), an autoregressive stochastic process accurately accounts for the cross-correlations between the switching parameters, while non-linear transformations ensure agreement with both cycle-to-cycle (C2C) and device-to-device (D2D) variability. Benchmarks show that this statistically comprehensive model achieves read/write throughputs exceeding those of even highly simplified and deterministic compact models.
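A minimal sketch of the underlying idea (in Python rather than Verilog-A) might look like this: a vector AR(1) latent process carries the cycle-to-cycle cross-correlations, and non-linear transformations map the Gaussian latents to physical marginals. The parameter names, correlation values, and transforms below are illustrative assumptions.

```python
import numpy as np

def ar1_switching_params(n_cycles=6000, rho=0.7, rng=None):
    """Toy vector AR(1) latent process for two correlated switching
    parameters (e.g. a SET voltage and an LRS resistance), followed by
    nonlinear transforms imposing the target marginal distributions."""
    rng = np.random.default_rng(rng)
    corr = np.array([[1.0, -0.5], [-0.5, 1.0]])   # cross-correlation (illustrative)
    L = np.linalg.cholesky(corr)
    z = np.zeros(2)
    latents = np.empty((n_cycles, 2))
    for k in range(n_cycles):
        # AR(1): z_k = rho * z_{k-1} + sqrt(1 - rho^2) * correlated noise
        z = rho * z + np.sqrt(1 - rho ** 2) * (L @ rng.standard_normal(2))
        latents[k] = z
    v_set = 1.2 + 0.1 * latents[:, 0]             # roughly Gaussian parameter
    r_lrs = np.exp(9.2 + 0.3 * latents[:, 1])     # log-normal resistance
    return v_set, r_lrs
```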
We derive a novel formulation for the interaction potential between deformable fibers due to short-range fields arising from intermolecular forces. The formulation improves the existing section-section interaction potential law for in-plane beams by considering an offset between interacting cross sections. The new law is asymptotically consistent, which is particularly beneficial for computationally demanding scenarios involving short-range interactions like van der Waals and steric forces. The formulation is implemented within a framework of rotation-free Bernoulli-Euler beams utilizing the isogeometric paradigm. The improved accuracy of the novel law is confirmed through thorough numerical studies. We apply the developed formulation to investigate the complex behavior observed during peeling and pull-off of elastic fibers interacting via the Lennard-Jones potential.
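For reference, the point-pair Lennard-Jones law underlying the fiber interactions is sketched below in Python; the paper's section-section potential integrates such a law over interacting cross sections with an offset, which this toy snippet does not attempt.

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential and its radial derivative.

    U(r) = 4 eps [(sigma/r)^12 - (sigma/r)^6]; the r^-12 term models
    steric repulsion and the r^-6 term van der Waals attraction."""
    sr6 = (sigma / r) ** 6
    U = 4.0 * epsilon * (sr6 ** 2 - sr6)
    dU = 4.0 * epsilon * (-12.0 * sr6 ** 2 + 6.0 * sr6) / r  # dU/dr
    return U, dU
```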
We present a new approach to compute eigenvalues and eigenvectors of locally definite multiparameter eigenvalue problems by their signed multiindices. The method can be interpreted as a semismooth Newton method applied to certain functions that have a unique zero. We can therefore show local quadratic convergence and, for certain extreme eigenvalues, even global linear convergence of the method. Local definiteness is a weaker condition than right and left definiteness, which are often considered for multiparameter eigenvalue problems. These conditions are naturally satisfied for multiparameter Sturm-Liouville problems that arise when separation of variables is applied to multidimensional boundary eigenvalue problems.
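To illustrate the semismooth Newton mechanics on the simplest possible scale, the following Python toy applies generalized-derivative Newton steps to a scalar nonsmooth function with a unique zero; it is not the paper's multiparameter method, merely the basic iteration it builds on.

```python
def semismooth_newton(f, gen_deriv, x0, tol=1e-12, max_iter=50):
    """Semismooth Newton: x <- x - f(x)/g, where g is any element of
    the generalized (Clarke) derivative of f at x."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / gen_deriv(x)
    return x

# Example: f(x) = |x| + x - 1 is nonsmooth at 0 with unique zero x = 1/2;
# at x = 0 we pick the generalized-derivative element 2 from [0, 2].
root = semismooth_newton(lambda x: abs(x) + x - 1,
                         lambda x: (1 if x >= 0 else -1) + 1, x0=2.0)
```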
In this paper, we consider using Schur complements to design preconditioners for twofold and block tridiagonal saddle point problems. One type of preconditioner is based on the nested (or recursive) Schur complement; the other is based on an additive Schur complement obtained after permuting the original saddle point systems. We analyze the different preconditioners incorporating the exact Schur complements and show that some of them lead to positive stable preconditioned systems if proper signs are selected in front of the Schur complements. These positive stable preconditioners outperform the other preconditioners when the Schur complements are further approximated inexactly. Numerical experiments for a 3-field formulation of the Biot model are provided to verify our predictions.
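A minimal Python sketch of the single-fold building block, assuming a saddle point system with blocks $A$, $B$ and exact Schur complement $S = B A^{-1} B^{T}$, is given below; the twofold and block tridiagonal cases nest or permute this construction, and the sign placed in front of $S$ is exactly the degree of freedom discussed above.

```python
import numpy as np

def schur_preconditioner(A, B):
    """Block-diagonal preconditioner diag(A, S) for the saddle point
    system K = [[A, B^T], [B, 0]], with Schur complement S = B A^{-1} B^T.
    The sign chosen in front of S affects the spectrum of the
    preconditioned system."""
    S = B @ np.linalg.solve(A, B.T)
    n = A.shape[0]
    def apply(r):
        y = np.empty_like(r)
        y[:n] = np.linalg.solve(A, r[:n])
        y[n:] = np.linalg.solve(S, r[n:])  # use -S for the other sign choice
        return y
    return apply
```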
This work presents a nonintrusive, physics-preserving method to learn reduced-order models (ROMs) of Lagrangian systems, a class that includes nonlinear wave equations. Existing intrusive projection-based model reduction approaches construct structure-preserving Lagrangian ROMs by projecting the Euler-Lagrange equations of the full-order model (FOM) onto a linear subspace. This Galerkin projection step requires complete knowledge of the Lagrangian operators in the FOM and full access to manipulate the computer code. In contrast, the proposed Lagrangian operator inference approach embeds the mechanics into the operator inference framework to develop a data-driven model reduction method that preserves the underlying Lagrangian structure. The proposed approach exploits knowledge of the governing equations (but not their discretization) to define the form and parametrization of a Lagrangian ROM, which can then be learned from projected snapshot data. The method does not require access to FOM operators or computer code. The numerical results demonstrate Lagrangian operator inference on an Euler-Bernoulli beam model, the sine-Gordon (nonlinear) wave equation, and a large-scale discretization of a soft-robot fishtail with 779,232 degrees of freedom. The learned Lagrangian ROMs generalize well: they accurately predict the physical solutions both far outside the training time interval and for unseen initial conditions.
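The core inference step can be sketched in a few lines of Python; the snippet below assumes reduced dynamics of the simplest Lagrangian form $\ddot{\hat{q}} = -\hat{K}\hat{q}$ (identity mass matrix, no forcing) and uses plain least squares with a symmetrization step as a stand-in for the paper's structure-preserving parametrization.

```python
import numpy as np

def lagrangian_opinf(Qhat, Qhat_ddot):
    """Infer a reduced stiffness matrix Khat from projected snapshots.

    Qhat, Qhat_ddot: (r, K) arrays of reduced snapshots and their second
    time derivatives. Assumes qhat'' = -Khat @ qhat and recovers Khat by
    least squares, then symmetrizes so the learned model derives from a
    Lagrangian L = 0.5 q'q' - 0.5 q'Khat q."""
    # Solve min || Qhat.T @ X + Qhat_ddot.T ||_F  with  X = Khat.T.
    Khat = np.linalg.lstsq(Qhat.T, -Qhat_ddot.T, rcond=None)[0].T
    return 0.5 * (Khat + Khat.T)   # enforce symmetry (structure preservation)
```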
Domain decomposition (DD) methods are a natural way to take advantage of parallel computers when solving large-scale linear systems. Their scalability depends on the design of the coarse space used in the two-level method. The analysis of adaptive coarse spaces we present here is quite general, since it applies to symmetric and non-symmetric problems, to symmetric preconditioners such as the additive Schwarz method (ASM) and to the non-symmetric restricted additive Schwarz (RAS) preconditioner, as well as to exact or inexact subdomain solves. The coarse space is built by solving generalized eigenvalue problems in the subdomains and applying a well-chosen operator to the selected eigenvectors.
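A minimal Python sketch of the resulting two-level preconditioner apply, assuming the subdomain restriction matrices `R_i` and a coarse basis `Z` (columns spanning the selected eigenvectors, after the well-chosen operator has been applied) are already available:

```python
import numpy as np

def two_level_asm(A, restrictions, Z):
    """Two-level additive Schwarz preconditioner
        M^{-1} = Z (Z' A Z)^{-1} Z' + sum_i R_i' (R_i A R_i')^{-1} R_i,
    where the R_i restrict to subdomains and the columns of Z span the
    coarse space."""
    local = [(R, np.linalg.inv(R @ A @ R.T)) for R in restrictions]
    A0inv = np.linalg.inv(Z.T @ A @ Z)
    def apply(r):
        y = Z @ (A0inv @ (Z.T @ r))          # coarse correction
        for R, Ainv in local:
            y += R.T @ (Ainv @ (R @ r))      # local subdomain solves
        return y
    return apply
```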
In data assimilation, an ensemble provides a nonintrusive way to evolve a probability density described by a nonlinear prediction model. Although a large ensemble size is required for statistical accuracy, the ensemble size is typically limited to a small number due to the computational cost of running the prediction model, which leads to sampling error. Several methods, such as localization, exist to mitigate the sampling error, but they often require problem-dependent fine-tuning and design. This work introduces another sampling error mitigation method that uses a smoothness constraint in Fourier space. In particular, the method smooths out the spectrum of the system to increase stability and accuracy even at small ensemble sizes. The efficacy of the new idea is validated through a suite of stringent test problems, including the Lorenz 96 and Kuramoto-Sivashinsky turbulence models.
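The smoothing idea can be illustrated with a short Python sketch that low-pass tapers the ensemble anomalies in Fourier space; the quartic taper and the `cutoff` parameter below are illustrative assumptions, not the paper's specific constraint.

```python
import numpy as np

def smooth_ensemble_spectrum(ensemble, cutoff=0.5):
    """Apply a smooth low-pass taper to ensemble anomalies in Fourier
    space, a simple stand-in for a spectral smoothness constraint that
    damps spurious small-ensemble variability at high wavenumbers.

    ensemble: (n_members, n_grid) array of periodic state vectors."""
    mean = ensemble.mean(axis=0)
    anomalies = ensemble - mean
    k = np.fft.rfftfreq(ensemble.shape[1])          # normalized wavenumbers
    taper = np.exp(-(k / (cutoff * k[-1])) ** 4)    # smooth high-k damping
    smoothed = np.fft.irfft(np.fft.rfft(anomalies, axis=1) * taper,
                            n=ensemble.shape[1], axis=1)
    return mean + smoothed
```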