We propose a framework in which the Fer and Wilcox expansions for the solution of differential equations are derived from two particular choices of the initial transformation that seeds the product expansion. In this scheme, intermediate expansions can also be envisaged. Recurrence formulas are developed. A new lower bound for the convergence of the Wilcox expansion is provided, as well as some applications of the results. In particular, two examples are worked out up to high order of approximation to illustrate the behavior of the Wilcox expansion.
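To fix ideas, here is a minimal sketch of the product-expansion setting, assuming the standard linear problem (the specific seeding transformations studied in the paper are not reproduced here): for $Y'(t) = A(t)\,Y(t)$, $Y(0) = I$, one seeks a factorization

\[
Y(t) = e^{\Omega_1(t)}\, e^{\Omega_2(t)}\, e^{\Omega_3(t)} \cdots,
\]

where the first factor is seeded by an initial transformation such as $\Omega_1(t) = \int_0^t A(s)\,\mathrm{d}s$, and each subsequent factor is obtained recursively from the coefficient matrix transformed by the preceding factors; the Fer and Wilcox expansions then correspond to two different choices of this seed.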
We consider the equivalence between the two main categorical models for the type-theoretical operation of context comprehension, namely P. Dybjer's categories with families and B. Jacobs' comprehension categories, and generalise it to the non-discrete case. The classical equivalence can be summarised in the slogan: "terms as sections". By recognising "terms as coalgebras", we show how to use the structure-semantics adjunction to prove that a 2-category of comprehension categories is biequivalent to a 2-category of (non-discrete) categories with families. The biequivalence restricts to the classical one proved by Hofmann in the discrete case. It also provides a framework in which to compare the different morphisms of these structures that have appeared in the literature, which vary in the degree to which they preserve the relevant structure. We consider in particular the morphisms defined by Clairambault-Dybjer, Jacobs, Larrea, and Uemura.
As the development of formal proofs is a time-consuming task, it is important to devise ways of sharing already written proofs to avoid wasting time redoing them. One of the challenges in this domain is translating proofs written in proof assistants based on impredicative logics to proof assistants based on predicative logics, whenever impredicativity is not used in an essential way. In this paper we present a transformation for sharing proofs with a core predicative system supporting prenex universe polymorphism (as in Agda). It consists in trying to elaborate each term into a predicative universe-polymorphic term that is as general as possible. The use of universe polymorphism is justified by the fact that mapping each universe to a fixed one in the target theory is not sufficient in most cases. During the elaboration, we need to solve unification problems in the equational theory of universe levels. To do so, we give a complete characterization of when a single equation admits a most general unifier. This characterization is then employed in a partial algorithm which uses a constraint-postponement strategy to try to solve unification problems. The proposed translation is of course partial, but in practice it allows one to translate many proofs that do not use impredicativity in an essential way. Indeed, it was implemented in the tool Predicativize and then used to translate semi-automatically many non-trivial developments from Matita's library to Agda, including proofs of Bertrand's Postulate and Fermat's Little Theorem, which (as far as we know) were not previously available in Agda.
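For orientation, a hedged sketch of the equational theory in which these unification problems live (the notation is illustrative, following Agda-style levels rather than the paper's exact presentation): universe levels are terms

\[
\ell \;::=\; 0 \;\mid\; \mathsf{s}\,\ell \;\mid\; \ell \sqcup \ell' \;\mid\; x,
\]

with $x$ a level variable, considered modulo associativity, commutativity and idempotence of $\sqcup$, the unit law $\ell \sqcup 0 = \ell$, and successor laws such as $\mathsf{s}(\ell \sqcup \ell') = \mathsf{s}\,\ell \sqcup \mathsf{s}\,\ell'$ and $\ell \sqcup \mathsf{s}\,\ell = \mathsf{s}\,\ell$. Deciding when an equation between two such terms admits a most general unifier is what the characterization addresses.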
Deep learning methods can now be employed to solve physical systems governed by parametric partial differential equations (PDEs), thanks to the availability of massive scientific data. This line of work has been refined into operator learning, which focuses on learning non-linear mappings between infinite-dimensional function spaces and offers an interface from observations to solutions. However, state-of-the-art neural operators are limited to constant and uniform discretizations, and therefore generalize poorly to arbitrary discretization schemes of the computational domain. In this work, we propose a novel operator learning algorithm, referred to as the Dynamic Gaussian Graph Operator (DGGO), that extends neural operators to learning parametric PDEs in arbitrarily discretized mechanics problems. The Dynamic Gaussian Graph (DGG) kernel learns to map observation vectors defined in general Euclidean space to metric vectors defined in a high-dimensional uniform metric space. The DGG integral kernel is parameterized by a Gaussian-kernel-weighted Riemann sum and uses a dynamic message-passing graph to capture the interrelations within the integral term. A Fourier neural operator is selected to localize the metric vectors in the spatial and frequency domains. The metric vectors are regarded as lying on a latent uniform domain, where spatial and spectral transformations impose highly regular constraints on the solution space. The efficiency and robustness of DGGO are validated by applying it to numerical mechanics problems with arbitrary discretizations and comparing it with mainstream neural operators. Ablation experiments demonstrate the effectiveness of the spatial transformation in the DGG kernel. As an engineering application, the proposed method is used to forecast the stress field of a hyper-elastic material with a geometrically variable void.
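As an illustration only, a minimal NumPy sketch of the general idea of a Gaussian-kernel-weighted Riemann-sum aggregation over a message-passing neighbourhood; the function name, the normalisation, and the fixed bandwidth are assumptions of the sketch and do not reproduce the authors' DGG kernel:

import numpy as np

def gaussian_graph_aggregate(x, u, neighbors, sigma=0.1):
    """Sketch: Gaussian-kernel-weighted Riemann-sum aggregation on a graph.

    x         : (n, d) array of node coordinates in the (possibly irregular) domain
    u         : (n, c) array of observation/feature vectors at the nodes
    neighbors : list of index arrays; neighbors[i] are the nodes aggregated into node i
    """
    out = np.zeros_like(u, dtype=float)
    for i, nbrs in enumerate(neighbors):
        diffs = x[nbrs] - x[i]                                    # relative positions
        w = np.exp(-np.sum(diffs**2, axis=1) / (2.0 * sigma**2))  # Gaussian kernel weights
        w /= w.sum() + 1e-12                                      # normalised quadrature weights
        out[i] = w @ u[nbrs]                                      # weighted Riemann-sum term
    return out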
Iterated conditional expectation (ICE) g-computation is an estimation approach for addressing time-varying confounding in both longitudinal and time-to-event data. Unlike other g-computation implementations, ICE avoids the need to specify models for each time-varying covariate. For variance estimation, previous work has suggested the bootstrap. However, bootstrapping can be computationally intensive and sensitive to the number of resamples used. Here, we present ICE g-computation as a set of stacked estimating equations, so that the variance of the ICE g-computation estimator can be consistently estimated using the empirical sandwich variance estimator. The performance of the variance estimator was evaluated empirically in a simulation study. The proposed approach is also demonstrated with an illustrative example on the effect of cigarette smoking on the prevalence of hypertension. In the simulation study, the empirical sandwich variance estimator appropriately estimated the variance. When comparing runtimes between the sandwich variance estimator and the bootstrap for the applied example, the sandwich estimator was substantially faster, even when the bootstraps were run in parallel. The empirical sandwich variance estimator is a viable option for variance estimation with ICE g-computation.
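Since the variance estimator is the central computational point, a minimal NumPy sketch of the empirical sandwich variance estimator for a generic stack of estimating equations is given below; the function psi and the finite-difference Jacobian are illustrative assumptions, not the ICE-specific estimating functions of the paper:

import numpy as np

def sandwich_variance(psi, theta_hat, data, eps=1e-6):
    """Sketch: empirical sandwich variance for an estimator solving sum_i psi(theta, o_i) = 0.

    psi(theta, o) returns the stacked estimating functions (length p) for one observation o;
    theta_hat is the root of the summed estimating equations.
    """
    n, p = len(data), len(theta_hat)
    bread = np.zeros((p, p))   # estimate of -E[d psi / d theta]
    meat = np.zeros((p, p))    # estimate of  E[psi psi^T]
    for o in data:
        psi_i = psi(theta_hat, o)
        meat += np.outer(psi_i, psi_i)
        for j in range(p):                      # forward-difference Jacobian, column by column
            step = np.zeros(p)
            step[j] = eps
            bread[:, j] -= (psi(theta_hat + step, o) - psi_i) / eps
    bread /= n
    meat /= n
    b_inv = np.linalg.inv(bread)
    return b_inv @ meat @ b_inv.T / n           # Var(theta_hat) ~ B^{-1} M B^{-T} / n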
We introduce a causal regularisation extension to anchor regression (AR) for improved out-of-distribution (OOD) generalisation. We present anchor-compatible losses, aligned with the anchor framework, to ensure robustness against distribution shifts. Various multivariate analysis (MVA) algorithms, such as (Orthonormalized) PLS, RRR, and MLR, fall within the anchor framework. We observe that simple regularisation enhances robustness in OOD settings. Estimators for selected algorithms are provided, showing consistency and efficacy on synthetic and real-world climate science problems. The empirical validation highlights the versatility of anchor regularisation, emphasising its compatibility with MVA approaches and its role in enhancing replicability while guarding against distribution shifts. The extended AR framework advances causal inference methodology by addressing the need for reliable OOD generalisation.
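For concreteness, a short NumPy sketch of the plain anchor regression estimator (after Rothenhäusler et al.) on which the proposed regularisation builds; the MVA-specific anchor-compatible losses of the paper are not reproduced, and the function below is only an assumed baseline:

import numpy as np

def anchor_regression(X, Y, A, gamma=5.0):
    """Sketch: anchor-regularised least squares.

    Minimises ||(I - P_A)(Y - X b)||^2 + gamma * ||P_A (Y - X b)||^2,
    where P_A is the orthogonal projection onto the column space of the anchor matrix A.
    """
    P = A @ np.linalg.pinv(A)                        # projection onto span(A)
    shift = lambda M: M - P @ M + np.sqrt(gamma) * (P @ M)
    Xt, Yt = shift(X), shift(Y)                      # anchor-transformed data
    b, *_ = np.linalg.lstsq(Xt, Yt, rcond=None)      # ordinary least squares on transformed data
    return b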
Weakly modular graphs are defined as the class of graphs that satisfy the \emph{triangle condition ($TC$)} and the \emph{quadrangle condition ($QC$)}. We study an interesting subclass of weakly modular graphs that satisfies a stronger version of the triangle condition, known as the \emph{triangle diamond condition ($TDC$)}, and term this subclass the \emph{diamond-weakly modular graphs}. We observe that this class contains the class of bridged graphs and the class of weakly bridged graphs. The interval function $I_G$ of a connected graph $G$ with vertex set $V$ is an important concept in metric graph theory and is one of the prime examples of a transit function: a function defined on the Cartesian product $V\times V$ to the power set of $V$ satisfying the expansive, symmetric and idempotent axioms. In this paper, we derive an interesting axiom, denoted $(J0')$, obtained from a well-known axiom introduced by Marlow Sholander in 1952, denoted $(J0)$. We prove that the axiom $(J0')$ characterizes the diamond-weakly modular graphs. We propose certain independent first-order betweenness axioms on an arbitrary transit function $R$ and prove that $R$ is the interval function of a diamond-weakly modular graph if and only if $R$ satisfies these betweenness axioms. Similar characterizations are obtained for the interval functions of bridged graphs and weakly bridged graphs.
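Since the interval function is the central object, a minimal Python sketch of $I_G$ for a connected unweighted graph is included below for orientation; the axiomatic characterizations in the paper do not depend on this computation:

from collections import deque

def bfs_distances(adj, source):
    """Shortest-path distances from source in an unweighted graph given as an adjacency dict."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def interval(adj, u, v):
    """I_G(u, v): the set of vertices lying on at least one shortest u-v path."""
    du, dv = bfs_distances(adj, u), bfs_distances(adj, v)
    return {w for w in adj if du[w] + dv[w] == du[v]}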
Sylvester matrix equations are ubiquitous in scientific computing. However, few solution techniques exist for their generalized multiterm version, even though it now arises in an increasingly large number of applications. In this work, we consider algebraic parameter-free preconditioning techniques for the iterative solution of generalized multiterm Sylvester equations. They consist in constructing low Kronecker rank approximations of either the operator itself or its inverse. While the former requires solving standard Sylvester equations in each iteration, the latter only requires matrix-matrix multiplications, which are highly optimized on modern computer architectures. Moreover, low Kronecker rank approximate inverses can be easily combined with sparse approximate inverse techniques, thereby enhancing their performance with little or no damage to their effectiveness.
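In generic notation (assumed here, not taken from the paper), the multiterm problem and the preconditioning idea can be summarised as follows: one seeks $X$ with

\[
\sum_{i=1}^{q} A_i X B_i^{\top} = C,
\qquad\text{equivalently}\qquad
\Big(\sum_{i=1}^{q} B_i \otimes A_i\Big)\operatorname{vec}(X) = \operatorname{vec}(C),
\]

and a low Kronecker rank preconditioner approximates the operator $\sum_i B_i \otimes A_i$, or its inverse, by a short sum $\sum_{j=1}^{r} N_j \otimes M_j$ with $r$ small; applying the former amounts to standard Sylvester solves, while applying the latter reduces to matrix-matrix products.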
We study the problem of adaptive variable selection in a Gaussian white noise model of intensity $\varepsilon$ under certain sparsity and regularity conditions on an unknown regression function $f$. The $d$-variate regression function $f$ is assumed to be a sum of functions, each depending on a smaller number $k$ of variables ($1 \leq k \leq d$). These functions are unknown to us, and only a few of them are nonzero. We assume that $d=d_\varepsilon \to \infty$ as $\varepsilon \to 0$ and consider both the case when $k$ is fixed and the case when $k=k_\varepsilon \to \infty$, $k=o(d)$ as $\varepsilon \to 0$. In this work, we introduce an adaptive selection procedure that, under some model assumptions, identifies exactly all nonzero $k$-variate components of $f$. In addition, we establish conditions under which exact identification of the nonzero components is impossible. These conditions ensure that the proposed selection procedure is the best possible in the asymptotically minimax sense with respect to the Hamming risk.
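In common notation for this setting (assumed here for illustration), the observation model and the structural assumption read

\[
dY(t) = f(t)\,dt + \varepsilon\, dW(t), \quad t \in [0,1]^d,
\qquad
f(x) = \sum_{S \subset \{1,\dots,d\},\; |S| = k} f_S(x_S),
\]

with only a few of the $k$-variate components $f_S$ nonzero; exact selection means recovering the set of active index sets $S$, with accuracy measured by the Hamming risk.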
We propose an algorithm that predicts each subsequent time step of an intractable short-rate model relative to the previous time step (adjusted for drift and for the overall distribution of the previous percentile result), and we show that the method achieves outcomes superior to the unbiased estimate, both on the training dataset and on separate validation data.
The theory of mixed finite element methods for solving different types of elliptic partial differential equations in saddle point formulation has been well established for many decades. The topic has mostly been studied for variational formulations in which both the shape and test pairs of primal variable and multiplier are defined on the same product spaces. Whenever either these spaces or the two bilinear forms involving the multiplier are distinct, the saddle point problem is asymmetric. The three inf-sup conditions to be satisfied by the product spaces, stipulated in work on the subject in order to guarantee well-posedness, are well known. However, the material encountered in the literature addressing the approximation of this class of problems leaves room for improvement and clarification. After a brief review of the existing contributions to the topic that justifies this assertion, we establish in this paper finer global error bounds for the primal variable-multiplier pair solving an asymmetric saddle point problem. Besides well-posedness, the three constants in the aforementioned inf-sup conditions are identified as all that is needed to determine the stability constant appearing in these bounds, whose expression is exhibited. As a complement, refined error bounds depending only on these three constants are given for each of the two unknowns separately.
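In generic notation (assumed here for illustration), an asymmetric saddle point problem of the kind considered reads: find $(u, p) \in U \times P$ such that

\[
a(u, v) + b_1(v, p) = f(v) \quad \forall v \in V,
\qquad
b_2(u, q) = g(q) \quad \forall q \in Q,
\]

where the shape pair lives in $U \times P$ and the test pair in $V \times Q$; asymmetry means that these product spaces, or the two bilinear forms $b_1$ and $b_2$ involving the multiplier, may differ, and well-posedness rests on three inf-sup conditions whose constants drive the stability and error bounds.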