Multivariate B-splines and non-uniform rational B-splines (NURBS) lack adaptivity due to their tensor-product structure. Truncated hierarchical B-splines (THB-splines) provide a solution to this problem. THB-splines organize the parameter space into a hierarchical structure, which enables efficient approximation and representation of functions with different levels of detail. The truncation mechanism recovers the partition-of-unity property and yields basis functions with reduced overlap across the levels of the multi-level spline space. Transferring these multi-level splines into a Bézier element representation facilitates straightforward incorporation into existing finite element (FE) codes. By separating the multi-level extraction of the THB-splines from the standard Bézier extraction, a more general, independent framework applicable to any sequence of nested spaces is created. The operators for the multi-level structure of the THB-splines and the Bézier extraction operators are constructed locally. By restricting the multi-level operators to individual elements and multiplying them with the Bézier extraction operators of those elements, a direct map between Bézier elements and the hierarchical structure is obtained. The presented implementation builds on GeoPDEs, an open-source Octave/MATLAB isogeometric analysis (IGA) code. A basic Poisson problem is presented to compare the performance of multi-level Bézier extraction with a standard THB-spline approach.
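The composition at the heart of this construction can be sketched in a few lines. The following Python snippet is an illustrative sketch (operator names, shapes, and values are assumptions, not the GeoPDEs implementation): it evaluates the THB functions active on one element by chaining a per-element multi-level operator with that element's Bézier extraction operator.

```python
import numpy as np
from math import comb

def bernstein(p, xi):
    """Evaluate the p+1 Bernstein polynomials of degree p at points xi in [0,1]."""
    xi = np.asarray(xi)
    return np.array([comb(p, k) * xi**k * (1 - xi)**(p - k) for k in range(p + 1)])

def thb_basis_on_element(M_e, C_e, xi):
    """Evaluate the THB functions active on one element at points xi.

    M_e : multi-level extraction operator of the element (THB -> one-level B-splines),
    C_e : Bezier extraction operator of the element (B-splines -> Bernstein basis).
    The product M_e @ C_e is the direct map from the hierarchy to the Bezier element.
    Rows of the result are the THB basis functions, columns the evaluation points.
    """
    B = bernstein(C_e.shape[1] - 1, xi)   # Bernstein values, (p+1) x npts
    return (M_e @ C_e) @ B
```

In a full implementation, M_e would come from the hierarchical mesh and C_e from the usual knot-insertion-based Bézier extraction of that element.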
Discovering causal relationships from observational data is a fundamental yet challenging task. Invariant causal prediction (ICP, Peters et al., 2016) is a method for causal feature selection which requires data from heterogeneous settings and exploits the invariance of causal models. ICP has been extended to general additive noise models and to nonparametric settings using conditional independence tests. However, the latter often suffer from low power (or poor type I error control), and additive noise models are not suitable for applications in which the response is not measured on a continuous scale but reflects categories or counts. Here, we develop transformation-model (TRAM) based ICP, allowing for continuous, categorical, count-type, and uninformatively censored responses (these model classes, in general, do not allow for identifiability when there is no exogenous heterogeneity). As an invariance test, we propose TRAM-GCM, which is based on the expected conditional covariance between environments and score residuals and comes with uniform asymptotic level guarantees. For the special case of linear shift TRAMs, we also consider TRAM-Wald, which tests invariance based on the Wald statistic. We provide an open-source R package 'tramicp' and evaluate our approach on simulated data and in a case study investigating causal features of survival in critically ill patients.
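The proposed invariance test follows the structure of the generalised covariance measure (GCM) of Shah and Peters (2020). Below is a minimal univariate sketch in Python rather than the paper's R package 'tramicp', with simplified inputs assumed to be precomputed:

```python
import numpy as np
from scipy import stats

def gcm_test(score_resid, env_resid):
    """GCM-style invariance test (a sketch): under invariance, the product of
    the model's score residuals and the environment residuals has mean zero.

    score_resid : residuals from the (TRAM) regression of Y on a candidate set
    env_resid   : residuals from regressing the environment on the same set
    Returns the normalized test statistic and a two-sided p-value.
    """
    prod = score_resid * env_resid
    n = len(prod)
    stat = np.sqrt(n) * prod.mean() / prod.std(ddof=1)
    pval = 2 * stats.norm.sf(abs(stat))
    return stat, pval
```

The actual TRAM-GCM test handles multivariate environments and the various response types listed above; this sketch only conveys the shape of the statistic.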
We couple the L1 discretization of the Caputo fractional derivative in time with a Galerkin scheme in space to devise a linear numerical method for the semilinear subdiffusion equation. Two important features of our setting are nonsmooth initial data and a time-dependent diffusion coefficient. We prove the stability and convergence of the method under weak assumptions on the regularity of the diffusivity. We obtain error estimates that are optimal pointwise in space and global in time, and verify them with several numerical experiments.
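For reference, the L1 scheme approximates the Caputo derivative of order $\alpha\in(0,1)$ on a uniform grid by $D_t^\alpha u(t_n) \approx \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \sum_{j=0}^{n-1} b_j \,(u^{n-j}-u^{n-j-1})$ with weights $b_j=(j+1)^{1-\alpha}-j^{1-\alpha}$. A minimal Python sketch for a scalar test problem $D_t^\alpha u = -\lambda u + f(t)$ (the spatial Galerkin part of the paper is omitted, and the problem setup is an assumption for illustration):

```python
import numpy as np
from math import gamma

def l1_caputo_solve(alpha, lam, u0, T, N, f=lambda t: 0.0):
    """Solve D_t^alpha u = -lam*u + f(t), u(0)=u0, with the L1 scheme
    on a uniform time grid with N steps (time discretization only)."""
    tau = T / N
    t = np.linspace(0.0, T, N + 1)
    u = np.empty(N + 1); u[0] = u0
    c = tau**(-alpha) / gamma(2.0 - alpha)        # L1 scaling factor
    b = (np.arange(1, N + 1)**(1 - alpha)
         - np.arange(0, N)**(1 - alpha))          # L1 weights b_j
    for n in range(1, N + 1):
        # history term: sum_{j=1}^{n-1} b_j (u^{n-j} - u^{n-j-1})
        hist = sum(b[j] * (u[n - j] - u[n - j - 1]) for j in range(1, n))
        # implicit step: c*b_0*(u^n - u^{n-1}) + c*hist = -lam*u^n + f(t_n)
        u[n] = (c * b[0] * u[n - 1] - c * hist + f(t[n])) / (c * b[0] + lam)
    return t, u
```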
A spatially second-order scheme for the nonlinear radiative transfer equations is introduced in this paper. The discretization is based on the filtered spherical harmonics ($FP_N$) method for the angular variable and the unified gas kinetic scheme (UGKS) framework for the spatial and temporal variables. In order to keep the scheme positive and second-order accurate, we first use the implicit Monte Carlo linearization method [6] in the construction of the UGKS numerical boundary fluxes. Then, by carefully analyzing the constructed second-order fluxes involved in the macro-micro decomposition induced by the $FP_N$ angular discretization, we establish sufficient conditions that guarantee the positivity of the radiative energy density and the material temperature. Finally, we employ linear scaling limiters for the angular variable in the $P_N$ reconstruction and for the spatial variable in the piecewise-linear slope reconstruction, which are shown to be realizable and sufficient to enforce these conditions. The desired scheme, called the $PPFP_N$-based UGKS, is thus obtained. Furthermore, in the regimes $\epsilon\ll 1$ and $\epsilon=O(1)$, a simplified spatially second-order scheme, called the $PPFP_N$-based SUGKS, is presented, which possesses all the properties of the non-simplified one. Inheriting the merit of UGKS, the proposed schemes are asymptotic preserving. By employing the $FP_N$ method for the angular variable, the proposed schemes are almost free of ray effects. To the best of our knowledge, this is the first time that spatially second-order, positive, asymptotic-preserving schemes that are almost free of ray effects have been constructed for the nonlinear radiative transfer equations without operator splitting. Various numerical experiments are included to validate the properties of the proposed schemes.
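The spatial limiter can be illustrated in isolation. Below is a one-dimensional Python sketch of a Zhang-Shu-type linear scaling limiter (an illustrative stand-in for the paper's construction, with assumed inputs): it rescales piecewise-linear slopes so the cell reconstruction stays positive while the cell average is preserved.

```python
import numpy as np

def scaling_limiter(ubar, slope, dx, eps=1e-13):
    """Shrink the slope of each piecewise-linear cell reconstruction
    u(x) = ubar + slope*(x - x_c) so that its minimum over the cell stays
    >= eps, preserving the cell average. Assumes the averages ubar are
    themselves >= eps (ensured separately by the scheme's flux conditions)."""
    ubar = np.asarray(ubar, dtype=float)
    slope = np.asarray(slope, dtype=float)
    umin = ubar - 0.5 * dx * np.abs(slope)     # minimum of the linear profile
    theta = np.ones_like(ubar)
    mask = umin < eps
    # choose theta so that ubar + theta*(umin - ubar) = eps on flagged cells
    theta[mask] = (ubar[mask] - eps) / (ubar[mask] - umin[mask])
    return np.clip(theta, 0.0, 1.0) * slope
```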
In this work we study different implicit-explicit (IMEX) schemes for incompressible flow problems with variable viscosity. Unlike most previous work on IMEX schemes, which focuses on the convective part, we here focus on treating parts of the diffusive term explicitly to reduce the coupling between the velocity components. We present several IMEX alternatives for the variable-viscosity Navier--Stokes system, both monolithic and fractional-step, and analyse their theoretical and algorithmic properties. Stability is proven for all the methods presented; all these results are unconditional, except for one fractional-step discretisation, for which a CFL condition (in terms of the problem data) is required. Our analysis is supported by a series of numerical experiments.
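The splitting idea can be conveyed on a scalar model problem: write the viscosity as $\nu = \bar\nu + (\nu - \bar\nu)$, treat the constant part implicitly and the variable remainder explicitly. A one-dimensional periodic Python sketch (an illustration of the splitting under these assumptions, not the paper's Navier--Stokes discretisation; the explicit part is stable only under a CFL-type restriction on dt, echoing the conditional result above):

```python
import numpy as np

def imex_step(u, nu, nu_bar, dx, dt):
    """One IMEX step for u_t = (nu(x) u_x)_x on a periodic 1-D grid:
    nu_bar * u_xx is treated implicitly (backward Euler), the variable
    remainder ((nu - nu_bar) u_x)_x explicitly."""
    n = len(u)
    nu_face = 0.5 * (nu + np.roll(nu, -1)) - nu_bar     # (nu - nu_bar) at faces
    flux = nu_face * (np.roll(u, -1) - u) / dx          # explicit diffusive flux
    expl = (flux - np.roll(flux, 1)) / dx               # its divergence
    coef = dt * nu_bar / dx**2
    A = ((1 + 2 * coef) * np.eye(n)
         - coef * np.eye(n, k=1) - coef * np.eye(n, k=-1))
    A[0, -1] = A[-1, 0] = -coef                          # periodic wrap-around
    return np.linalg.solve(A, u + dt * expl)
```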
This paper addresses the problem of designing {\it continuous-discrete} unscented Kalman filter (UKF) implementation methods. More precisely, the aim is to propose MATLAB-based UKF algorithms for {\it accurate} and {\it robust} state estimation of stochastic dynamic systems. The accuracy of {\it continuous-discrete} nonlinear filters heavily depends on how the implementation method manages the discretization error arising at the filter prediction step. We suggest an elegant and accurate implementation framework for tracking the hidden states by utilizing the MATLAB built-in numerical integration schemes developed for solving ordinary differential equations (ODEs). The accuracy is boosted by the discretization error control involved in all MATLAB ODE solvers, which automatically keeps the discretization error below the user-supplied tolerance. Meanwhile, the robustness of the UKF filtering methods is examined in terms of their numerical stability with respect to roundoff errors. In contrast to the pseudo-square-root UKF implementations established in the engineering literature, which are based on one-rank Cholesky updates, we derive stable square-root methods by utilizing $J$-orthogonal transformations for calculating the Cholesky square-root factors.
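The prediction step that such a framework automates can be sketched as follows: propagate the sigma points through the drift with an adaptive ODE solver, so that the integrator's tolerances control the discretization error. The Python sketch below uses SciPy's solve_ivp in place of the MATLAB ODE suite (an assumption for illustration) and omits process noise and the full unscented weight tuning:

```python
import numpy as np
from scipy.integrate import solve_ivp

def ukf_predict(x, P, drift, t0, t1, rtol=1e-8, atol=1e-10, kappa=0.0):
    """Continuous-discrete UKF prediction (a sketch): propagate sigma points
    through dx/dt = drift(t, x) with an adaptive ODE solver, so the local
    discretization error is kept below (rtol, atol) automatically."""
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)
    sigmas = np.column_stack([x] + [x + S[:, i] for i in range(n)]
                             + [x - S[:, i] for i in range(n)])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    prop = np.column_stack([
        solve_ivp(drift, (t0, t1), sigmas[:, j], rtol=rtol, atol=atol).y[:, -1]
        for j in range(2 * n + 1)])
    x_pred = prop @ w                       # predicted mean
    d = prop - x_pred[:, None]
    P_pred = (d * w) @ d.T                  # predicted covariance (no noise term)
    return x_pred, P_pred
```

The square-root variants in the paper would carry a Cholesky factor of P through this step via $J$-orthogonal transformations instead of forming P_pred directly.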
We propose a finite element discretization for the steady, generalized Navier-Stokes equations for fluids with shear-dependent viscosity, completed with inhomogeneous Dirichlet boundary conditions and an inhomogeneous divergence constraint. We establish (weak) convergence of discrete solutions as well as a priori error estimates for the velocity vector field and the scalar kinematic pressure. Numerical experiments complement the theoretical findings.
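Representative constitutive laws for the shear-dependent viscosity in this setting are the power-law and Carreau-type models (illustrative examples in terms of the symmetric gradient $\mathbf{D}u$; the paper's precise structural assumptions on the stress may differ):

```latex
\nu(|\mathbf{D}u|) = \nu_0\,|\mathbf{D}u|^{p-2}
\quad\text{(power law)},
\qquad
\nu(|\mathbf{D}u|) = \nu_\infty + (\nu_0-\nu_\infty)
\bigl(1+\lambda^2|\mathbf{D}u|^2\bigr)^{\frac{p-2}{2}}
\quad\text{(Carreau)}.
```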
We present a multigrid algorithm to efficiently solve the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty. The algorithm is based on a collective smoother that at each iteration sweeps over the nodes of the computational mesh and solves a reduced saddle-point system whose size depends on the number $N$ of samples used to discretize the probability space. We show that this reduced system can be solved with optimal $O(N)$ complexity. We test the multigrid method on three problems: a linear-quadratic problem, possibly with a local or a boundary control, for which the multigrid method is used to solve the linear optimality system directly; a nonsmooth problem with box constraints and $L^1$-norm penalization on the control, in which the multigrid scheme is used within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, where the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits excellent performance and robustness with respect to the parameters of interest.
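One sweep of such a collective smoother can be sketched as follows. The Python snippet below is a schematic illustration (the data layout, the local matrices node_systems, and the residuals rhs are assumptions); a dense solve is used for clarity, whereas the paper exploits the structure of the reduced system to reach $O(N)$ cost per node.

```python
import numpy as np

def collective_smoother_sweep(y, u, p, node_systems, rhs):
    """One sweep of a collective smoother (a sketch): at every mesh node i,
    gather the local unknowns (N state samples y[i], one control u[i],
    N adjoint samples p[i]), solve the small (2N+1)x(2N+1) reduced
    saddle-point system for that node, and update in Gauss-Seidel fashion."""
    N = y.shape[1]                           # number of probability samples
    for i in range(y.shape[0]):              # loop over mesh nodes
        local = np.concatenate([y[i], [u[i]], p[i]])
        resid = rhs[i] - node_systems[i] @ local
        delta = np.linalg.solve(node_systems[i], resid)
        y[i] += delta[:N]
        u[i] += delta[N]
        p[i] += delta[N + 1:]
    return y, u, p
```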
This paper proposes a strategy to fundamentally resolve the problems of the conventional s-version of the finite element method (SFEM). Because SFEM can flexibly model an analysis domain by superimposing meshes with different spatial resolutions, it has the intrinsic advantages of local high accuracy, low computation time, and a simple meshing procedure. However, it suffers from disadvantages such as the inaccuracy of numerical integration and matrix singularity. Although several additional techniques have been proposed to mitigate these limitations, they are computationally expensive or ad hoc, and detract from its strengths. To resolve these issues, we propose a novel strategy called B-spline based SFEM. To improve the accuracy of numerical integration, we employ cubic B-spline basis functions with $C^2$-continuity across element boundaries as the global basis functions. To avoid matrix singularity, we apply different basis functions to different meshes; specifically, we employ Lagrange basis functions as the local basis functions. The numerical results indicate that, with the proposed method, numerical integration can be performed with sufficient accuracy without any of the additional techniques used in conventional SFEM. Furthermore, the proposed method avoids matrix singularity and is superior to conventional methods in terms of convergence when solving the linear equations. Therefore, the proposed method has the potential to reduce computation time while maintaining accuracy comparable to conventional SFEM.
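The global basis in question is standard: cubic B-splines on an open knot vector, which are $C^2$-continuous across knot-span (element) boundaries and form a partition of unity. A small Python sketch using SciPy (the knot vector and evaluation points are assumptions for illustration):

```python
import numpy as np
from scipy.interpolate import BSpline

def cubic_bspline_basis(knots, x):
    """Evaluate all cubic B-spline basis functions on the given open knot
    vector at points x; returns an (npts, nbasis) array."""
    p = 3
    n = len(knots) - p - 1                   # number of basis functions
    return np.column_stack([
        BSpline.basis_element(knots[i:i + p + 2], extrapolate=False)(x)
        for i in range(n)])

# Uniform open knot vector on [0, 1] with 4 elements (knot spans).
knots = np.r_[[0.0] * 3, np.linspace(0.0, 1.0, 5), [1.0] * 3]
x = np.linspace(0.05, 0.95, 7)               # interior evaluation points
B = np.nan_to_num(cubic_bspline_basis(knots, x))  # zero outside each support
print(B.sum(axis=1))                          # partition of unity: all ones
```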
Markov chain Monte Carlo (MCMC) algorithms are based on the construction of a Markov chain with transition probabilities $P_\mu(x,\cdot)$, where $\mu$ denotes an invariant distribution of interest. In this work, we view these transition probabilities as functions of their invariant distributions, and we develop a notion of derivative of an MCMC kernel with respect to its invariant distribution. Around this concept we build a set of tools that we refer to as Markov Chain Monte Carlo Calculus. This allows us to compare Markov chains with different invariant distributions within a suitable class via what we refer to as mean value inequalities. We explain how MCMC Calculus provides a natural framework to study algorithms that use an approximation of an invariant distribution, and we illustrate how it suggests practical guidelines for the efficiency of MCMC algorithms. We conclude by showing how the tools developed can be applied to prove convergence of interacting and sequential MCMC algorithms, which arise in the context of particle filtering.
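The starting point, a kernel viewed as a function of its invariant distribution $\mu \mapsto P_\mu$, is concrete even for the simplest sampler. A minimal Python sketch (random-walk Metropolis, with illustrative targets; not the paper's construction) builds two kernels that differ only in their invariant distribution:

```python
import numpy as np

def mh_kernel(mu_logpdf, step=1.0):
    """Return a random-walk Metropolis transition kernel P_mu(x, .) as a
    function of its invariant (target) log-density mu_logpdf."""
    def kernel(x, rng):
        prop = x + step * rng.standard_normal()
        log_alpha = mu_logpdf(prop) - mu_logpdf(x)   # symmetric proposal
        return prop if np.log(rng.uniform()) < log_alpha else x
    return kernel

# Two kernels in the same family, differing only in mu:
rng = np.random.default_rng(0)
P_mu = mh_kernel(lambda x: -0.5 * x**2)           # standard normal target
P_nu = mh_kernel(lambda x: -0.5 * (x - 1.0)**2)   # shifted normal target
```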
In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as limits on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and is difficult to scale. In this paper we present four algorithms to solve these problems. Their combination enables each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
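The adaptive-exploration idea can be illustrated with a toy agent. The Python sketch below is entirely hypothetical (the class, its confidence proxy, and all parameters are assumptions, not the paper's four algorithms): an agent keeps value estimates for assigning subtasks to peers and explores more when its recent rewards are volatile, i.e. when it is less confident in its current strategy.

```python
import numpy as np

class AdaptiveAgent:
    """Hypothetical sketch: epsilon-greedy peer selection where the
    exploration rate tracks the volatility of recent rewards."""
    def __init__(self, n_peers, lr=0.1, eps_min=0.05):
        self.q = np.zeros(n_peers)       # estimated value of allocating to each peer
        self.lr, self.eps_min = lr, eps_min
        self.recent = []                 # recent reward window

    def choose_peer(self, rng):
        # explore more when recent rewards are volatile (low confidence)
        eps = np.std(self.recent[-20:]) if self.recent else 1.0
        if rng.uniform() < min(1.0, max(self.eps_min, eps)):
            return int(rng.integers(len(self.q)))
        return int(np.argmax(self.q))

    def update(self, peer, reward):
        self.q[peer] += self.lr * (reward - self.q[peer])
        self.recent.append(reward)
```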