This paper focuses on the numerical solution of elliptic partial differential equations (PDEs) with Dirichlet and mixed boundary conditions, specifically addressing the challenges arising from irregular domains. Both the finite element method (FEM) and the finite difference method (FDM) face difficulties in dealing with arbitrary domains. The paper introduces a novel nodal symmetric ghost method based on a variational formulation, which combines the advantages of FEM and FDM. The method employs bilinear finite elements on a structured mesh, and a detailed description of its implementation is provided. A rigorous a priori convergence rate analysis is also presented. The convergence rates are validated with numerical experiments in both one and two space dimensions.
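For orientation, the sketch below shows the classic ghost-point idea in one dimension, assuming a Poisson model problem whose right endpoint falls strictly between grid nodes; this is only background for the ghost concept, not the paper's symmetric variational scheme on bilinear elements.

```python
import numpy as np

# Minimal 1D ghost-point sketch (illustrative background only, not the
# paper's nodal symmetric variational method): solve -u'' = f on (0, b)
# with u(0) = u(b) = 0, where the endpoint b lies between grid nodes.
f = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution sin(pi x) when b = 1
b = 1.0
h = 0.021                                    # spacing chosen so b is NOT a node
n = int(np.floor(b / h))                     # last interior node: x_n < b < x_{n+1}
x = h * np.arange(n + 2)                     # nodes 0..n+1; x_{n+1} is the ghost node
theta = (b - x[n]) / h                       # fractional offset of b past x_n

# Standard 3-point Laplacian on interior nodes 1..n.
A = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
rhs = h**2 * f(x[1:n + 1])

# Ghost value u_{n+1} eliminated via linear interpolation through the
# Dirichlet condition (1-theta)*u_n + theta*u_{n+1} = u(b) = 0.
# (A very small theta makes this row ill-conditioned; robust variants exist.)
A[n - 1, n - 1] = 2.0 + (1.0 - theta) / theta

u = np.linalg.solve(A, rhs)
print("max error:", np.max(np.abs(u - np.sin(np.pi * x[1:n + 1]))))
```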
We consider estimators obtained by iterates of the conjugate gradient (CG) algorithm applied to the normal equation of prototypical statistical inverse problems. Stopping the CG algorithm early induces regularisation, and optimal convergence rates of prediction and reconstruction error are established in wide generality for an ideal oracle stopping time. Based on this insight, a fully data-driven early stopping rule $\tau$ is constructed, which also attains optimal rates, provided the error in estimating the noise level is not dominant. The error analysis of CG under statistical noise is subtle due to its nonlinear dependence on the observations. We provide an explicit error decomposition and identify two terms in the prediction error, which share important properties of classical bias and variance terms. Together with a continuous interpolation between CG iterates, this paves the way for a comprehensive error analysis of early stopping. In particular, a general oracle-type inequality is proved for the prediction error at $\tau$. For bounding the reconstruction error, a more refined probabilistic analysis, based on concentration of self-normalised Gaussian processes, is developed. The methodology also provides some new insights into early stopping for CG in deterministic inverse problems. A numerical study for standard examples shows good results in practice for early stopping at $\tau$.
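As a minimal illustration of the regularising effect of early stopping (not the paper's data-driven rule $\tau$ or its analysis), the following sketch runs CG on the normal equation and stops by a discrepancy-type criterion; the constant kappa and the assumed known noise level are placeholders.

```python
import numpy as np

# Sketch: CG on the normal equation A^T A x = A^T y, stopped early once the
# data residual drops below the (assumed known) noise level.
def cg_early_stopping(A, y, noise_level, kappa=1.0, max_iter=200):
    m, n = A.shape
    x = np.zeros(n)
    r = A.T @ (y - A @ x)            # residual of the normal equation
    p = r.copy()
    rs = r @ r
    for it in range(max_iter):
        # Discrepancy-type rule: stop when ||y - A x|| <= kappa * delta * sqrt(m).
        if np.linalg.norm(y - A @ x) <= kappa * noise_level * np.sqrt(m):
            break
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, it

# Toy ill-posed problem: smooth (ill-conditioned) forward operator plus noise.
rng = np.random.default_rng(0)
n = 100
A = np.array([[np.exp(-abs(i - j) / 5.0) for j in range(n)] for i in range(n)]) / n
x_true = np.sin(np.linspace(0, np.pi, n))
delta = 1e-3
y = A @ x_true + delta * rng.standard_normal(n)
x_hat, stopped_at = cg_early_stopping(A, y, noise_level=delta)
print("stopped at iteration", stopped_at, "error", np.linalg.norm(x_hat - x_true))
```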
In a Jacobi--Davidson (JD) type method for singular value decomposition (SVD) problems, called JDSVD, a large symmetric and generally indefinite correction equation is solved iteratively at each outer iteration; these inner iterations dominate the overall efficiency of JDSVD. In this paper, a convergence analysis of the minimal residual (MINRES) method for the correction equation is carried out. Motivated by the results obtained, a new correction equation is derived at each outer iteration that extracts useful information from the current subspaces to construct effective preconditioners, and it is proven to retain the same outer convergence of JDSVD. The resulting method, called the inner-preconditioned JDSVD (IPJDSVD) method, is also a new JDSVD method, and any viable preconditioner for the correction equations in JDSVD applies straightforwardly to those in IPJDSVD. The convergence results show that MINRES for the new correction equation can converge much faster when there is a cluster of singular values closest to a given target. A new thick-restart IPJDSVD algorithm with deflation and purgation is proposed that simultaneously accelerates the outer and inner convergence of the standard thick-restart JDSVD and computes several singular triplets. Numerical experiments justify the theory, illustrate the considerable superiority of IPJDSVD over JDSVD, and demonstrate that a similar two-stage IPJDSVD algorithm substantially outperforms the state-of-the-art PRIMME\_SVDS software for computing the smallest singular triplets.
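The spectral mechanism behind the inner convergence results can be conveyed with a toy experiment, assuming diagonal stand-ins for the correction-equation operators rather than the actual JDSVD matrices: MINRES needs far fewer iterations when the spectrum is clustered.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import minres

# Toy illustration (not the JDSVD correction equation itself): MINRES on a
# symmetric indefinite system converges faster when eigenvalues cluster.
def minres_iters(eigs):
    A = diags(eigs)                   # diagonal symmetric indefinite matrix
    b = np.ones(len(eigs))
    it = {"n": 0}
    def cb(xk):                       # count one callback per MINRES iteration
        it["n"] += 1
    minres(A, b, callback=cb)
    return it["n"]

n = 500
spread = np.concatenate([np.linspace(-2, -0.5, n // 2), np.linspace(0.5, 2, n // 2)])
clustered = np.concatenate(
    [-1 + 0.01 * np.random.default_rng(1).standard_normal(n // 2),
      1 + 0.01 * np.random.default_rng(2).standard_normal(n // 2)])
print("spread spectrum:   ", minres_iters(spread), "iterations")
print("clustered spectrum:", minres_iters(clustered), "iterations")
```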
Accelerated failure time (AFT) models are frequently used to model survival data, providing a direct quantification of the relationship between event times and covariates. These models allow for the acceleration or deceleration of failure times through a multiplicative factor that accounts for the effect of covariates. While the existing literature provides numerous methods for fitting AFT models with time-fixed covariates, adapting these approaches to scenarios involving both time-varying covariates and partly interval-censored data remains challenging. Motivated by a randomised clinical trial dataset on advanced melanoma patients, we propose a maximum penalised likelihood approach for fitting a semiparametric AFT model to survival data with partly interval-censored failure times. This method accommodates both time-fixed and time-varying covariates. We utilise Gaussian basis functions to construct a smooth approximation of the non-parametric baseline hazard and fit the model using a constrained optimisation approach. The effectiveness of our method is demonstrated through extensive simulations. Finally, we illustrate the relevance of our approach by applying it to the motivating dataset on patients with advanced melanoma.
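A minimal sketch of the Gaussian-basis ingredient follows, with illustrative knot locations, bandwidth, and coefficients; the paper instead estimates the coefficients by constrained penalised likelihood.

```python
import numpy as np

# Sketch: smooth baseline-hazard approximation with Gaussian basis functions,
# h0(t) = sum_k theta_k * exp(-0.5 * ((t - mu_k) / sigma)^2), with theta_k >= 0
# so that h0 >= 0. Knots, bandwidth, and coefficients below are illustrative.
def baseline_hazard(t, theta, mu, sigma):
    t = np.atleast_1d(t)[:, None]                       # shape (n_times, 1)
    phi = np.exp(-0.5 * ((t - mu[None, :]) / sigma) ** 2)
    return phi @ theta                                  # nonnegative combination

mu = np.linspace(0.0, 10.0, 8)        # knots spread over the follow-up window
sigma = 1.5                           # common bandwidth
theta = np.array([0.05, 0.1, 0.2, 0.15, 0.1, 0.08, 0.05, 0.02])  # theta_k >= 0

t_grid = np.linspace(0, 10, 5)
print(baseline_hazard(t_grid, theta, mu, sigma))
```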
This work studies the parameter-dependent diffusion equation in a two-dimensional domain consisting of locally mirror-symmetric layers. It is assumed that the diffusion coefficient is constant in each layer. The goal is to find approximate parameter-to-solution maps with a small number of terms. It is shown that in the case of two layers one can find a solution formula consisting of three terms with explicit dependencies on the diffusion coefficient. The formula is based on decomposing the solution into orthogonal parts related to the two layers and the interface between them. This formula is then extended to an approximate one for the multi-layer case. We give an analytical formula for square layers and use a finite element formulation for more general layers. The results are illustrated with numerical examples and, through an analysis of the Kolmogorov n-width, have applications to reduced basis methods.
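The link to reduced basis methods can be illustrated with a toy one-dimensional two-layer analogue, assuming a simple finite difference discretisation (not the paper's 2D mirror-symmetric setting): rapid singular value decay of a snapshot matrix is the standard computable proxy for a small Kolmogorov n-width, loosely mirroring the few-term representations above.

```python
import numpy as np

# Toy 1D analogue: solve -(a(x) u')' = 1 on (0,1), u(0) = u(1) = 0, with a
# constant coefficient per layer, and inspect the singular value decay of a
# snapshot matrix over the parameter range (proxy for the Kolmogorov n-width).
def solve(a1, a2, n=200):
    h = 1.0 / n
    a = np.where(np.arange(n) < n // 2, a1, a2)     # coefficient per cell
    # Standard FD/P1-FEM tridiagonal system for the two-layer problem.
    main = (a[:-1] + a[1:]) / h
    A = np.diag(main) - np.diag(a[1:-1] / h, 1) - np.diag(a[1:-1] / h, -1)
    return np.linalg.solve(A, h * np.ones(n - 1))

snapshots = np.column_stack([solve(a1, a2)
                             for a1 in np.logspace(-1, 1, 15)
                             for a2 in np.logspace(-1, 1, 15)])
s = np.linalg.svd(snapshots, compute_uv=False)
print("normalised singular values:", s[:6] / s[0])  # rapid decay => small n-width
```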
We propose a novel partitioned scheme based on Eikonal equations to model the coupled propagation of the electrical signal in the His-Purkinje system and in the myocardium for cardiac electrophysiology. This scheme makes it possible, for the first time in Eikonal-based modeling, to capture all possible signal reentries between the Purkinje network and the cardiac muscle that may occur under pathological conditions. As part of the proposed scheme, we introduce a new pseudo-time method for the Eikonal-diffusion problem in the myocardium to correctly enforce electrical stimuli coming from the Purkinje network. We test our approach by performing numerical simulations of cardiac electrophysiology in a real biventricular geometry, under both pathological and therapeutic conditions, to demonstrate its flexibility, robustness, and accuracy.
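For context, here is a minimal fast-sweeping solver for the plain isotropic Eikonal equation on a Cartesian grid. It is only a generic building block under simplifying assumptions (uniform grid, isotropic conduction velocity); the paper's contribution is the partitioned Purkinje-myocardium coupling and the pseudo-time Eikonal-diffusion treatment.

```python
import numpy as np

# Minimal fast-sweeping solver for |grad u| = 1/c on a 2D grid, where u is
# the activation time and c the conduction velocity (generic sketch only).
def fast_sweeping(activation_sites, c, h, sweeps=4):
    ny, nx = c.shape
    u = np.full((ny, nx), np.inf)
    for (i, j) in activation_sites:
        u[i, j] = 0.0                       # stimulus: zero activation time
    orders = [(range(ny), range(nx)), (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(sweeps):
        for iy, ix in orders:               # alternate the four sweep directions
            for i in iy:
                for j in ix:
                    a = min(u[i - 1, j] if i > 0 else np.inf,
                            u[i + 1, j] if i < ny - 1 else np.inf)
                    b = min(u[i, j - 1] if j > 0 else np.inf,
                            u[i, j + 1] if j < nx - 1 else np.inf)
                    if np.isinf(a) and np.isinf(b):
                        continue            # no upwind information yet
                    f = h / c[i, j]         # local slowness times spacing
                    if abs(a - b) >= f:     # one-sided upwind update
                        cand = min(a, b) + f
                    else:                   # two-sided quadratic update
                        cand = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    u[i, j] = min(u[i, j], cand)
    return u

c = np.ones((60, 60))                       # uniform conduction velocity
u = fast_sweeping([(30, 30)], c, h=1.0 / 59)
print("max activation time:", u.max())
```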
This paper addresses the problem of adaptively controlling the bias parameter in nonlinear opinion dynamics (NOD) to allocate agents into groups of arbitrary sizes with the aim of maximizing collective rewards. In previous work, an algorithm based on the coupling of NOD with a multi-objective behavior optimization was successfully deployed as part of a multi-robot system in an autonomous task allocation field experiment. Motivated by the field results, in this paper we propose and analyze a new task allocation model that synthesizes NOD with an evolutionary game framework. We prove sufficient conditions under which the opinion state of the group can be controlled to a desired allocation of agents between two tasks through an adaptive bias using decentralized feedback. We then verify the theoretical results with a simulation study of a collaborative evolutionary division-of-labor game.
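A toy simulation of the general idea follows, assuming a saturating NOD form with inhibitory inter-agent coupling and a simple proportional bias feedback; all gains and the feedback law are illustrative assumptions, not the paper's model or its provably convergent controller.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy sketch of bias-controlled nonlinear opinion dynamics: agents commit to
# task 1 (x_i > 0) or task 2 (x_i < 0), and a shared adaptive bias nudges the
# group toward a target split. Every gain below is an illustrative choice.
N = 10                     # number of agents
d, u_gain, alpha, gamma = 1.0, 2.0, 1.2, -0.5   # damping, attention, self/other coupling
n_star = 7                 # desired number of agents committed to task 1
k_b = 0.5                  # bias adaptation gain

def rhs(t, state):
    x, b = state[:N], state[N:]
    coupling = alpha * x + gamma * (np.sum(x) - x) / (N - 1)
    dx = -d * x + u_gain * np.tanh(coupling) + b
    # Smooth estimate of the fraction of agents on task 1, fed back into bias.
    frac = np.mean(0.5 * (1.0 + np.tanh(10.0 * x)))
    db = k_b * (n_star / N - frac) * np.ones(N)
    return np.concatenate([dx, db])

rng = np.random.default_rng(3)
state0 = np.concatenate([0.1 * rng.standard_normal(N), np.zeros(N)])
sol = solve_ivp(rhs, (0.0, 60.0), state0, max_step=0.1)
x_final = sol.y[:N, -1]
print("agents on task 1:", int(np.sum(x_final > 0)), "of", N, "; target:", n_star)
```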
Conjugate heat transfer (CHT) analyses are vital for the design of many energy systems. However, high-fidelity CHT numerical simulations are computationally intensive, which limits their use in applications such as design optimization, where hundreds to thousands of evaluations are required. In this work, we develop a modular deep encoder-decoder hierarchical (DeepEDH) convolutional neural network, a novel deep-learning-based surrogate modeling methodology for computationally intensive CHT analyses. Leveraging convective temperature dependencies, we propose a two-stage temperature prediction architecture that couples velocity and temperature fields. The proposed DeepEDH methodology is demonstrated by modeling the pressure, velocity, and temperature fields for a liquid-cooled cold-plate-based battery thermal management system with variable channel geometry. A computational mesh and a CHT formulation of the cold plate are created, and the resulting problem is solved using the finite element method (FEM), generating a dataset of 1,500 simulations. Our performance analysis covers the impact of the novel architecture, separate DeepEDH models for each field, output geometry masks, multi-stage temperature field predictions, and optimization of the hyperparameters and architecture. Furthermore, we quantify the influence of the thermal boundary conditions of the CHT analysis on surrogate model performance, highlighting improved temperature model performance with higher heat fluxes. Compared to other deep learning neural network surrogate models, such as U-Net and DenseED, the proposed DeepEDH architecture for CHT analyses exhibits up to a 65% enhancement in the coefficient of determination $R^{2}$.
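A minimal single-stage encoder-decoder in PyTorch is sketched below, with the two-stage velocity-to-temperature coupling mimicked at the call site; this is illustrative only, as DeepEDH itself is modular and hierarchical with geometry masks and tuned hyperparameters, and all layer sizes here are assumptions.

```python
import torch
import torch.nn as nn

# Minimal encoder-decoder sketch for field-to-field regression (illustrative
# of the general surrogate idea only, not the authors' DeepEDH architecture).
class EncoderDecoder(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, 2 * width, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * width, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(width, out_ch, 4, stride=2, padding=1),
        )

    def forward(self, geometry):
        return self.decoder(self.encoder(geometry))

# Two-stage coupling as described in the abstract: predict velocity from the
# geometry first, then feed the predicted velocity into the temperature model.
velocity_model = EncoderDecoder(in_ch=1, out_ch=2)      # geometry -> (u, v)
temperature_model = EncoderDecoder(in_ch=3, out_ch=1)   # geometry + (u, v) -> T

geometry = torch.randn(4, 1, 64, 64)                    # dummy channel-geometry batch
velocity = velocity_model(geometry)
temperature = temperature_model(torch.cat([geometry, velocity], dim=1))
print(temperature.shape)                                # torch.Size([4, 1, 64, 64])
```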
We propose a procedure for the numerical approximation of invariance equations arising in the moment matching technique associated with reduced-order modeling of high-dimensional dynamical systems. The Galerkin residual method is employed to find an approximate solution to the invariance equation using a Newton iteration on the coefficients of a monomial basis expansion of the solution. These solutions to the invariance equations can then be used to construct reduced-order models. We assess the ability of the method to solve the invariance PDE system as well as to achieve moment matching and recover a system's steady-state behaviour for linear and nonlinear signal generators with system dynamics up to $n=1000$ dimensions.
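For a linear system with a linear signal generator, the invariance equation of moment matching reduces to a Sylvester equation, which makes the target of the Galerkin/Newton procedure concrete; the nonlinear case, which the paper addresses, replaces the matrix $\Pi$ by a polynomial map approximated in a monomial basis. The matrices below are illustrative.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Sketch: for xdot = A x + B u, with signal generator wdot = S w, u = L w,
# the invariance equation reduces to the Sylvester equation A P + B L = P S.
rng = np.random.default_rng(0)
n = 50                                           # system dimension
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable-ish system matrix
B = rng.standard_normal((n, 1))
S = np.array([[0.0, 1.0], [-1.0, 0.0]])          # harmonic signal generator
L = np.array([[1.0, 0.0]])

# solve_sylvester solves A X + X B = Q; rewrite A P - P S = -B L accordingly.
P = solve_sylvester(A, -S, -B @ L)
print("invariance residual:", np.linalg.norm(A @ P + B @ L - P @ S))
```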
For many problems, quantum algorithms promise speedups over their classical counterparts. However, these results predominantly rely on asymptotic worst-case analysis, which overlooks significant overheads due to error correction and the fact that real-world instances often contain exploitable structure. In this work, we employ the hybrid benchmarking method to evaluate the potential of quantum backtracking and Grover's algorithm against the 2023 SAT competition main track winner on random $k$-SAT instances with tunable structure, designed to represent industry-like scenarios, using both $T$-depth and $T$-count as cost metrics to estimate quantum run times. Our findings reproduce the results of Campbell, Khurana, and Montanaro (Quantum '19) in the unstructured case using hybrid benchmarking. However, we offer a more sobering perspective in practically relevant regimes: almost all quantum speedups vanish, even asymptotically, when minimal structure is introduced or when $T$-count is considered instead of $T$-depth. Moreover, when the algorithm is required to find a solution within a single day, only Grover's algorithm has the potential to outperform classical algorithms, and then only in a very limited regime and only under the $T$-depth metric. We also discuss how more sophisticated heuristics could restore the asymptotic scaling advantage for quantum backtracking, but our findings suggest that the potential for practical quantum speedups in more structured $k$-SAT solving will remain limited.
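The flavour of such run-time estimates can be conveyed in a few lines. Every number below (per-iteration $T$-depth and $T$-count, $T$-gate time) is a made-up placeholder, not a figure from the paper; only the Grover iteration count formula is standard.

```python
import math

# Back-of-envelope sketch: Grover needs about (pi/4) * sqrt(2^n / k)
# iterations for k satisfying assignments among 2^n, and the wall-clock
# estimate scales with the T-depth (parallel) or T-count (sequential)
# per iteration times the error-corrected T-gate time. All costs below
# are placeholders for illustration.
n, k = 60, 1                          # variables, assumed number of solutions
grover_iters = (math.pi / 4) * math.sqrt(2**n / k)
t_depth_per_iter = 1e4                # placeholder T-depth of one Grover iteration
t_count_per_iter = 1e6                # placeholder T-count of one Grover iteration
t_gate_seconds = 1e-6                 # placeholder error-corrected T-gate time

for label, cost in [("T-depth", t_depth_per_iter), ("T-count", t_count_per_iter)]:
    seconds = grover_iters * cost * t_gate_seconds
    print(f"{label} estimate: ~{seconds / 86400:.1e} days")
```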
Many combinatorial optimization problems can be formulated as the search for a subgraph that satisfies certain properties and minimizes the total weight. We assume here that the vertices correspond to points in a metric space and can take any position in given uncertainty sets. The cost function to be minimized is then the sum of the distances for the worst-case positions of the vertices in their uncertainty sets. We propose two types of polynomial-time approximation algorithms. The first relies on solving a deterministic counterpart of the problem in which the uncertain distances are replaced with maximum pairwise distances. We study in detail the resulting approximation ratio, which depends on the structure of the feasible subgraphs and on whether the metric space is Ptolemaic. The second algorithm is a fully polynomial-time approximation scheme for the special case of $s$-$t$ paths.
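A sketch of the first algorithm's deterministic counterpart follows, assuming Euclidean-ball uncertainty sets, for which the maximum pairwise distance has the closed form $\|c_u - c_v\| + r_u + r_v$, and an $s$-$t$ path objective; the toy positions and radii are illustrative.

```python
import itertools
import numpy as np
import networkx as nx

# Sketch: replace each uncertain edge weight by the maximum pairwise distance
# between the endpoints' uncertainty sets (balls: |c_u - c_v| + r_u + r_v),
# then solve the nominal s-t shortest path problem on the complete graph.
centers = {0: (0, 0), 1: (1, 1), 2: (2, 0), 3: (3, 1)}   # toy vertex positions
radii = {0: 0.1, 1: 0.3, 2: 0.2, 3: 0.1}                 # uncertainty-ball radii

G = nx.Graph()
for u, v in itertools.combinations(centers, 2):
    d_max = np.linalg.norm(np.subtract(centers[u], centers[v])) + radii[u] + radii[v]
    G.add_edge(u, v, weight=d_max)

path = nx.shortest_path(G, source=0, target=3, weight="weight")
print("approximate robust s-t path:", path)
```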