In this paper, we combine the Smolyak technique for multi-dimensional interpolation with the Filon-Clenshaw-Curtis (FCC) rule for one-dimensional oscillatory integration to obtain a new Filon-Clenshaw-Curtis-Smolyak (FCCS) rule for oscillatory integrals with linear phase over the $d$-dimensional cube $[-1,1]^d$. By combining stability and convergence estimates for the FCC rule with error estimates for the Smolyak interpolation operator, we obtain an error estimate for the FCCS rule, consisting of the product of a Smolyak-type error estimate and a factor that decays like $\mathcal{O}(k^{-\tilde{d}})$, where $k$ is the wavenumber and $\tilde{d}$ is the number of oscillatory dimensions. If all dimensions are oscillatory, a higher negative power of $k$ appears in the estimate. As an application, we consider the forward problem of uncertainty quantification (UQ) for a one-space-dimensional Helmholtz problem with wavenumber $k$ and a random heterogeneous refractive index depending in an affine way on $d$ i.i.d. uniform random variables. After applying a classical hybrid numerical-asymptotic approximation, expectations of functionals of the solution of this problem can be formulated as a sum of oscillatory integrals over $[-1,1]^d$, which we compute using the FCCS rule. We give numerical results for the FCCS rule and the UQ algorithm showing that accuracy improves when both $k$ and the order of the rule increase. We also give results for dimension-adaptive sparse-grid FCCS quadrature showing its efficiency as the dimension increases.
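As a schematic illustration of the one-dimensional building block (in our notation, not necessarily the paper's), the FCC rule replaces the non-oscillatory factor $f$ by its degree-$N$ interpolant $Q_N f$ at the Clenshaw-Curtis points $\cos(j\pi/N)$ and integrates the result against the oscillatory kernel exactly:
$$ I_k[f] := \int_{-1}^{1} f(x)\, e^{\mathrm{i}kx}\, dx \;\approx\; I_k[Q_N f] \;=\; \sum_{n=0}^{N} a_n \int_{-1}^{1} T_n(x)\, e^{\mathrm{i}kx}\, dx, $$
where the $T_n$ are Chebyshev polynomials, the $a_n$ are the Chebyshev coefficients of the interpolant, and the moments $\int_{-1}^{1} T_n(x)\, e^{\mathrm{i}kx}\, dx$ are evaluated exactly via a stable recurrence. The Smolyak construction then combines tensor products of such one-dimensional rules of different orders into a sparse multi-dimensional rule.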
We explore the Ziv-Lempel and Crochemore factorizations of some classical automatic sequences, making extensive use of the theorem prover Walnut.
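For readers unfamiliar with these factorizations, the following Python sketch implements one common convention (a Crochemore-style factorization in which earlier occurrences may overlap the current position; the variants studied in the paper may differ in details) and applies it to a Thue-Morse prefix:

    def c_factorization(w):
        # Each factor is the longest prefix of the unread suffix that also
        # occurs starting at an earlier position (overlaps allowed); if the
        # next letter has not occurred before, the factor is that letter.
        # Naive O(n^2) scan, adequate for short prefixes.
        factors, i, n = [], 0, len(w)
        while i < n:
            L = 1
            while i + L < n and w.find(w[i:i + L + 1], 0, i + L) != -1:
                L += 1
            factors.append(w[i:i + L])
            i += L
        return factors

    # Thue-Morse prefix: t[n] = parity of the number of 1-bits of n
    tm = "".join(str(bin(n).count("1") % 2) for n in range(32))
    print(c_factorization(tm))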
Multiple Instance Learning (MIL) is a weakly supervised paradigm that has been successfully applied to many different scientific areas and is particularly well suited to medical imaging. Probabilistic MIL methods, and more specifically Gaussian Processes (GPs), have achieved excellent results due to their high expressiveness and uncertainty quantification capabilities. One of the most successful GP-based MIL methods, VGPMIL, resorts to a variational bound to handle the intractability of the logistic function. Here, we formulate VGPMIL using P\'olya-Gamma random variables. This approach yields the same variational posterior approximations as the original VGPMIL, which is a consequence of the two representations that the Hyperbolic Secant distribution admits. This leads us to propose a general GP-based MIL method that takes different forms by simply leveraging distributions other than the Hyperbolic Secant one. Using the Gamma distribution we arrive at a new approach that obtains competitive or superior predictive performance and efficiency. This is validated in a comprehensive experimental study including one synthetic MIL dataset, two well-known MIL benchmarks, and a real-world medical problem. We expect that this work provides useful ideas beyond MIL that can foster further research in the field.
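For context, the P\'olya-Gamma construction rests on the integral identity of Polson, Scott and Windle (2013): for $b > 0$ and $\kappa = a - b/2$,
$$ \frac{(e^{z})^{a}}{(1+e^{z})^{b}} \;=\; 2^{-b}\, e^{\kappa z} \int_{0}^{\infty} e^{-\omega z^{2}/2}\, p_{\mathrm{PG}}(\omega \mid b, 0)\, d\omega, $$
so that with $a = b = 1$ the logistic likelihood becomes Gaussian in $z$ conditionally on the auxiliary variable $\omega$, which is what renders the variational updates tractable.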
In this paper, we present a comparative analysis of various self-supervised Vision Transformers (ViTs), focusing on their local representative power. Inspired by large language models, we examine the abilities of ViTs to perform various computer vision tasks with little to no fine-tuning. We design an evaluation framework to analyze the quality of local, i.e.\ patch-level, representations in the context of few-shot semantic segmentation, instance identification, object retrieval and tracking. We discover that contrastive-learning-based methods like DINO produce more universal patch representations that can be immediately applied to downstream tasks with no parameter tuning, compared to masked image modeling. The embeddings learned using the latter approach, e.g. in masked autoencoders, have high-variance features that harm distance-based algorithms, such as k-NN, and do not contain useful information for most downstream tasks. Furthermore, we demonstrate that removing these high-variance features enhances k-NN for MAE, as well as for its recent extension Scale-MAE. Finally, we find an object instance retrieval setting where DINOv2, a model pretrained on two orders of magnitude more data, falls short of its less compute-intensive counterpart DINO.
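As a minimal sketch of the variance-pruning idea (the function name, drop fraction, and $k$ are ours, purely illustrative):

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def knn_without_high_variance(train_X, train_y, test_X, drop_frac=0.05, k=20):
        # Drop the highest-variance embedding dimensions, then run k-NN on
        # the remaining features of the frozen patch embeddings.
        var = train_X.var(axis=0)
        keep = np.argsort(var)[: int(train_X.shape[1] * (1 - drop_frac))]
        clf = KNeighborsClassifier(n_neighbors=k)
        clf.fit(train_X[:, keep], train_y)
        return clf.predict(test_X[:, keep])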
Inverse problems, particularly those governed by Partial Differential Equations (PDEs), are prevalent in various scientific and engineering applications, and uncertainty quantification (UQ) of solutions to these problems is essential for informed decision-making. This second part of a two-paper series builds upon the foundation set by the first part, which introduced CUQIpy, a Python software package for computational UQ in inverse problems using a Bayesian framework. In this paper, we extend CUQIpy's capabilities to solve PDE-based Bayesian inverse problems through a general framework that allows the integration of PDEs in CUQIpy, whether expressed natively or using third-party libraries such as FEniCS. CUQIpy offers concise syntax that closely matches mathematical expressions, streamlining the modeling process and enhancing the user experience. The versatility and applicability of CUQIpy to PDE-based Bayesian inverse problems are demonstrated on examples covering parabolic, elliptic and hyperbolic PDEs. This includes problems involving the heat and Poisson equations and application case studies in electrical impedance tomography and photo-acoustic tomography, showcasing the software's efficiency, consistency, and intuitive interface. This comprehensive approach to UQ in PDE-based inverse problems provides accessibility for non-experts and advanced features for experts.
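In generic notation (ours, not necessarily the package's), the problems treated here take the form
$$ y = G(x) + e, \quad e \sim \mathcal{N}(0, \sigma^{2} I), \qquad \pi(x \mid y) \;\propto\; \exp\Big(-\tfrac{1}{2\sigma^{2}}\, \|y - G(x)\|^{2}\Big)\, \pi(x), $$
where $G$ is the PDE-based forward operator mapping the unknown parameter $x$ to the observations $y$, and the posterior $\pi(x \mid y)$ is then characterized numerically.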
The aim of this paper is to study the complexity of the model checking problem MC for inquisitive propositional logic InqB and for inquisitive modal logic InqM, that is, the problem of deciding whether a given finite structure for the logic satisfies a given formula. In recent years, this problem has been thoroughly investigated for several variations of dependence and team logics, systems closely related to inquisitive logic. Building upon some ideas presented by Yang, we prove that the model checking problems for InqB and InqM are both AP-complete.
In this work, we present an efficient approach to solve nonlinear high-contrast multiscale diffusion problems. We incorporate the explicit-implicit-null (EIN) method to separate the nonlinear term into a linear term and a damping term, and then utilise implicit and explicit time-marching schemes for the two parts, respectively. Due to the multiscale property of the linear part, we further introduce a temporal partially explicit splitting scheme and construct suitable multiscale subspaces to speed up the computation. The approximate solution is split into these subspaces, which are associated with different physics. The temporal splitting scheme employs implicit discretization in the low-dimensional subspace representing the high-contrast property and explicit discretization in the other subspace. We analyze the stability of the proposed scheme and give a condition for the choice of the linear diffusion coefficient. The convergence of the proposed method is also proved. Several numerical tests are performed to show the efficiency and accuracy of the proposed approach.
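Schematically, for a model problem $u_t = \nabla \cdot (\kappa(x, u) \nabla u)$ (our notation), the EIN idea adds and subtracts a linear term with a time-independent coefficient $\kappa_0(x)$, treating it implicitly and the remainder explicitly:
$$ \frac{u^{n+1} - u^{n}}{\Delta t} \;=\; \nabla \cdot \big(\kappa_0 \nabla u^{n+1}\big) \;+\; \nabla \cdot \big((\kappa(x, u^{n}) - \kappa_0) \nabla u^{n}\big), $$
and the stability condition referred to above constrains how $\kappa_0$ must be chosen relative to the nonlinear coefficient.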
For nonlinear Cosserat elasticity, we consider multiscale methods in this paper. In particular, we explore the generalized multiscale finite element method (GMsFEM) to solve an isotropic Cosserat problem with a strain-limiting property (ensuring bounded linearized strains even under high stresses). Such strain-limiting Cosserat models can find potential applications in solids and biological fibers. However, Cosserat media with naturally rotational degrees of freedom, nonlinear constitutive relations, high contrast, and heterogeneities may produce challenging multiscale characteristics in the solution, so upscaling by multiscale methods is necessary. Therefore, we utilize the offline and residual-based online (adaptive or uniform) GMsFEM in this context while handling the nonlinearity by Picard iteration. Through various two-dimensional experiments (for perforated, composite, and stochastically heterogeneous media with small and large strain-limiting parameters), our numerical results show the approaches' convergence, efficiency, and robustness. In addition, these results demonstrate that such approaches provide good accuracy, that the online GMsFEM gives more accurate solutions than the offline one, and that the online adaptive strategy has accuracy similar to the uniform one but with fewer degrees of freedom.
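Schematically (in our notation), each Picard step freezes the nonlinear constitutive coefficients at the current iterate $u^{(m)}$ and solves the resulting linear problem in the multiscale space: find $u^{(m+1)}$ such that
$$ a\big(u^{(m)}; u^{(m+1)}, v\big) = F(v) \quad \text{for all test functions } v, $$
iterating until $\| u^{(m+1)} - u^{(m)} \|$ falls below a prescribed tolerance.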
In this paper we present a new "external verifier" for the Lean theorem prover, written in Lean itself. This is the first complete verifier for Lean 4 other than the reference implementation in C++ used by Lean itself. The new verifier is competitive with the original, running between 20% and 50% slower, and is usable to verify all of Lean's mathlib library, forming an additional step in Lean's aim to self-host the full elaborator and compiler. Moreover, because the verifier is written in a language which admits formal verification, it is possible to state and prove properties about the kernel itself. We report on some initial steps taken in this direction: formalizing the Lean type theory abstractly and showing that the kernel correctly implements this theory, in order to eliminate the possibility of implementation bugs in the kernel and to increase the trustworthiness of proofs conducted in it. This work is still ongoing, but we plan to use this project to help justify any future changes to the kernel and type theory, and to ensure that unsoundness does not sneak in through either the abstract theory or implementation bugs.
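To give a flavor of what formalizing the type theory abstractly can look like (a minimal sketch of ours, not the project's actual definitions), one states the expression grammar and typing judgment as inductive types and then aims to prove that the kernel's checking function respects them:

    -- Expressions of a toy dependent type theory (de Bruijn indices).
    inductive Expr where
      | sort (u : Nat)
      | bvar (i : Nat)
      | app  (f a : Expr)
      | lam  (ty body : Expr)
      | pi   (ty body : Expr)

    -- `HasType ctx e ty`: in context `ctx`, expression `e` has type `ty`.
    -- Only the variable rule is shown; all other rules are elided here.
    inductive HasType : List Expr → Expr → Expr → Prop where
      | bvar {ctx : List Expr} {i : Nat} {ty : Expr}
          (h : ctx[i]? = some ty) : HasType ctx (.bvar i) ty

    -- The kind of soundness statement one then proves about a checker:
    -- theorem check_sound : check ctx e ty = true → HasType ctx e ty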
Lattices are architected metamaterials whose properties strongly depend on their geometrical design. The analogy between lattices and graphs enables the use of graph neural networks (GNNs) as a faster surrogate model compared to traditional methods such as finite element modelling. In this work, we generate a large dataset of structure-property relationships for strut-based lattices. The dataset is made available to the community and can fuel the development of methods anchored in physical principles for fitting fourth-order tensors. In addition, we present a higher-order GNN model trained on this dataset. The key features of the model are (i) SE(3) equivariance and (ii) consistency with the thermodynamic law of conservation of energy. We compare the model to non-equivariant models based on a number of error metrics and demonstrate its benefits in terms of predictive performance and reduced training requirements. Finally, we demonstrate an example application of the model to an architected-material design task. The methods we developed are applicable to fourth-order tensors beyond elasticity, such as the piezo-optical tensor.
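As one generic way to build such physical consistency into a surrogate (a sketch under our own assumptions; the paper's model may enforce it differently), a predicted elasticity tensor written as a $6 \times 6$ matrix in Mandel notation can be parametrized through a Cholesky factor, which guarantees a symmetric positive-definite stiffness and hence a non-negative strain energy:

    import numpy as np

    def spd_stiffness(theta):
        # Map 21 unconstrained parameters (e.g. a network readout) to a
        # symmetric positive-definite 6x6 stiffness matrix C = L L^T in
        # Mandel notation, so the strain energy e^T C e / 2 is non-negative.
        L = np.zeros((6, 6))
        L[np.tril_indices(6)] = theta
        L[np.diag_indices(6)] = np.exp(np.diag(L))  # strictly positive diagonal
        return L @ L.T

    C = spd_stiffness(np.random.randn(21))
    assert np.allclose(C, C.T)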
Mixed methods for linear elasticity with strongly symmetric stresses of lowest order are studied in this paper. On each simplex, the stress space has piecewise linear components with respect to its Alfeld split (which connects the vertices to the barycenter), generalizing the Johnson-Mercier two-dimensional element to higher dimensions. Further reductions in the stress space in the three-dimensional case (to 24 degrees of freedom per tetrahedron) are possible when the displacement space is reduced to local rigid displacements. Proofs of optimal error estimates for the numerical solutions, and of improved error estimates via postprocessing and a duality argument, are presented.
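For orientation, these elements discretize the standard Hellinger-Reissner mixed formulation of linear elasticity (stated here up to sign and boundary conventions): find the symmetric stress $\sigma$ and the displacement $u$ such that
$$ \int_\Omega A\sigma : \tau \, dx + \int_\Omega u \cdot \operatorname{div} \tau \, dx = 0, \qquad \int_\Omega \operatorname{div} \sigma \cdot v \, dx = -\int_\Omega f \cdot v \, dx, $$
for all test stresses $\tau$ and test displacements $v$, where $A$ denotes the compliance tensor.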