The Allen-Cahn equation (ACE) inherently possesses two crucial properties: the maximum principle and the energy dissipation law. Preserving these two properties at the discrete level is also necessary for numerical methods for the ACE. In this paper, unlike traditional top-down macroscopic numerical schemes that discretize the ACE directly, we first propose a novel bottom-up, mesoscopic regularized-lattice-Boltzmann-based macroscopic numerical scheme for the d(=1,2,3)-dimensional ACE, where the DdQ(2d+1) [(2d+1) discrete velocities in d-dimensional space] lattice structure is adopted. In particular, the proposed macroscopic numerical scheme has second-order accuracy in space, and can also be viewed as an implicit-explicit finite-difference scheme for the ACE, in which the nonlinear term is discretized semi-implicitly, while the temporal derivative and the dissipation term are discretized by the explicit Euler method and the second-order central difference method, respectively. We then demonstrate that the proposed scheme can preserve the maximum bound principle and the original energy dissipation law at the discrete level under certain conditions. Finally, some numerical experiments are conducted to validate our theoretical analysis.
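To make the type of discretization concrete, the following is a minimal Python sketch of a generic semi-implicit finite-difference update for the 1D ACE $u_t = u_{xx} + (u - u^3)/\epsilon^2$ with periodic boundaries; the double-well nonlinearity, the pointwise semi-implicit factorization, and all parameter values are illustrative assumptions, not the authors' lattice-Boltzmann-derived scheme.

```python
import numpy as np

def allen_cahn_imex_step(u, dt, dx, eps):
    """One step of a generic IMEX update for the 1D Allen-Cahn equation
    u_t = u_xx + (u - u**3) / eps**2 with periodic boundary conditions.
    Diffusion: explicit Euler in time, second-order central difference in space.
    Nonlinearity: semi-implicit, u - u**3 ~ u_n - u_n**2 * u_{n+1}, so each
    grid point is updated by solving a scalar linear equation."""
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    return (u + dt * lap + dt * u / eps**2) / (1.0 + dt * u**2 / eps**2)

# usage: the field stays in [-1, 1] (maximum bound) under a step-size restriction
x = np.linspace(0.0, 1.0, 64, endpoint=False)
u = 0.9 * np.sin(2.0 * np.pi * x)
for _ in range(2000):
    u = allen_cahn_imex_step(u, dt=1e-4, dx=x[1] - x[0], eps=0.1)
print(np.abs(u).max() <= 1.0)
```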
Under a generalised estimating equation analysis approach, approximate design theory is used to determine Bayesian D-optimal designs. For two examples, considering simple exchangeable and exponential decay correlation structures, we compare the efficiency of identified optimal designs to balanced stepped-wedge designs and corresponding stepped-wedge designs determined by optimising using a normal approximation approach. The dependence of the Bayesian D-optimal designs on the assumed correlation structure is explored; for the considered settings, smaller decay in the correlation between outcomes across time periods, along with larger values of the intra-cluster correlation, leads to designs closer to a balanced design being optimal. Unlike for normal data, it is shown that the optimal design need not be centro-symmetric in the binary outcome case. The efficiency of the Bayesian D-optimal design relative to a balanced design can be large, but situations are demonstrated in which the advantages are small. Similarly, the optimal design from a normal approximation approach is often not much less efficient than the Bayesian D-optimal design. Bayesian D-optimal designs can be readily identified for stepped-wedge cluster randomised trials with binary outcome data. In certain circumstances, principally ones with strong time period effects, they will indicate that a design unlikely to have been identified by previous methods may be substantially more efficient. However, they require a larger number of assumptions than existing optimal designs, and in many situations existing theory under a normal approximation will provide an easier means of identifying an efficient design for binary outcome data.
We analyze a general Implicit-Explicit (IMEX) time discretization for the compressible Euler equations of gas dynamics, showing that it is asymptotic-preserving (AP) in the low Mach number limit. The analysis is carried out for a general equation of state (EOS). We consider both a single asymptotic length scale and two length scales. We then show that, when coupling these time discretizations with a Discontinuous Galerkin (DG) space discretization with appropriate fluxes, an all-Mach-number numerical method is obtained. A number of relevant benchmarks for ideal gases, and their non-trivial extension to non-ideal EOS, validate the analysis.
This work focuses on the numerical approximation of neutral stochastic delay differential equations whose drift and diffusion coefficients grow super-linearly with respect to both the delay variables and the state variables. Under generalized monotonicity conditions, we prove that the backward Euler method not only converges strongly in the mean square sense with order $1/2$, but also inherits the mean square exponential stability of the original equations. As a byproduct, we obtain the same results on the convergence rate and exponential stability of the backward Euler method for stochastic delay differential equations under generalized monotonicity conditions. These theoretical results are finally supported by several numerical experiments.
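For readers unfamiliar with the scheme, here is a minimal sketch of the backward Euler method for a scalar neutral stochastic delay differential equation; the coefficients D, f, g, the constant initial segment, and the Newton solve are illustrative assumptions, not the equations or conditions studied in the paper.

```python
import numpy as np

def D(y):                      # neutral term (assumed contractive), illustrative
    return 0.1 * np.sin(y)

def f(x, y):                   # super-linear drift in the state, illustrative
    return -x - x**3 + 0.5 * y

def g(x, y):                   # diffusion depending on state and delay, illustrative
    return 0.2 * x + 0.1 * y**2 / (1.0 + abs(y))

def backward_euler_nsdde(x0, tau, T, h, rng):
    """Backward Euler for d[X(t) - D(X(t-tau))] = f(X, X_tau) dt + g(X, X_tau) dW
    with a constant initial segment X(t) = x0 on [-tau, 0]."""
    m = int(round(tau / h))                      # delay measured in steps
    n_steps = int(round(T / h))
    X = np.empty(n_steps + 1)
    X[0] = x0
    past = lambda n: x0 if n < 0 else X[n]       # X at step n (constant history)
    for n in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h))
        y_new = past(n + 1 - m)                  # X(t_{n+1} - tau), already known
        c = X[n] - D(past(n - m)) + g(X[n], past(n - m)) * dW + D(y_new)
        xnew = X[n]                              # Newton iterations for the implicit drift
        for _ in range(20):
            xnew -= (xnew - c - h * f(xnew, y_new)) / (1.0 + h * (1.0 + 3.0 * xnew**2))
        X[n + 1] = xnew
    return X

path = backward_euler_nsdde(x0=1.0, tau=0.5, T=5.0, h=0.01, rng=np.random.default_rng(0))
```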
We consider a model selection problem for structural equation modeling (SEM) with latent variables for diffusion processes based on high-frequency data. First, we propose a quasi-Akaike information criterion for the SEM and study its asymptotic properties. Next, we consider the situation where the set of competing models includes some misspecified parametric models. It is shown that the probability of choosing the misspecified models converges to zero. Furthermore, examples and simulation results are given.
Resonance-based numerical schemes are those in which cancellations in the oscillatory components of the equation are exploited in order to reduce the regularity required of the initial data to achieve a particular order of error and convergence. We investigate the potential for the derivation of resonance-based schemes in the context of nonlinear stochastic PDEs. By comparing the regularity conditions required for error analysis with those of traditional exponential schemes, we demonstrate that at orders below $ \mathcal{O}(t^2) $ the techniques are successful and provide a significant gain on the regularity of the initial data, while at orders above $ \mathcal{O}(t^2) $ the resonance-based techniques do not achieve any gain. This is due to limitations in the explicit path-wise analysis of stochastic integrals. As examples of applications of the method, we present schemes for the Schr\"odinger equation and the Manakov system, accompanied by local error and stability analysis as well as proofs of global convergence in both the strong and path-wise sense.
We study the problem of training diffusion models to sample from a distribution with a given unnormalized density or energy function. We benchmark several diffusion-structured inference methods, including simulation-based variational approaches and off-policy methods (continuous generative flow networks). Our results shed light on the relative advantages of existing algorithms while bringing into question some claims from past work. We also propose a novel exploration strategy for off-policy methods, based on local search in the target space with the use of a replay buffer, and show that it improves the quality of samples on a variety of target distributions. Our code for the sampling methods and benchmarks studied is made public at //github.com/GFNOrg/gfn-diffusion as a base for future work on diffusion models for amortized inference.
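The exploration idea can be sketched in isolation: starting from replay-buffer samples, run a short random-walk Metropolis chain on the target log-reward and push the refined points back into the buffer for off-policy training. The function names, proposal, and acceptance rule below are illustrative assumptions; the released repository should be consulted for the actual implementation.

```python
import numpy as np

def local_search_explore(log_reward, buffer, n_steps, step_size, rng):
    """Refine replay-buffer samples with a short random-walk Metropolis chain
    on the target log-reward and push the refined points back into the buffer.
    This sketches only the exploration component of an off-policy sampler."""
    refined = []
    for x in buffer:
        x = np.asarray(x, dtype=float).copy()
        for _ in range(n_steps):
            prop = x + step_size * rng.normal(size=x.shape)
            if np.log(rng.uniform()) < log_reward(prop) - log_reward(x):
                x = prop
        refined.append(x)
    buffer.extend(refined)        # buffer now mixes original and refined samples
    return buffer

# usage on a toy 2D Gaussian-mixture log-reward
rng = np.random.default_rng(0)
log_r = lambda x: np.logaddexp(-0.5 * np.sum((x - 2.0) ** 2), -0.5 * np.sum((x + 2.0) ** 2))
buf = [rng.normal(size=2) for _ in range(32)]
buf = local_search_explore(log_r, buf, n_steps=50, step_size=0.3, rng=rng)
```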
A new area of application of methods of the algebra of logic and k-valued logic, which has emerged recently, is the recognition of various objects and phenomena, medical or technical diagnostics, the construction of modern machines, the checking of test problems, etc., all of which can be reduced to constructing an optimal extension of a logical function to the entire feature space. For example, in logical recognition systems, recognition algorithms are built with logical methods based on discrete analysis and the propositional calculus built upon it. In the general case, applying a logical recognition method presupposes logical connections expressed by the optimal continuation of a k-valued function over the entire feature space, in which the variables are the logical features of the objects or phenomena being recognized. The goal of this work is to develop a logical method for object recognition based on a reference table with logical features and non-intersecting classes of objects, which are specified as vectors from a given feature space. The method consists in treating the reference table as a partially defined logical function and constructing an optimal continuation of this function to the entire feature space, which determines the extension of the classes to the entire space.
Forward uncertainty quantification (UQ) for partial differential equations is a many-query task that requires a significant number of model evaluations. The objective of this work is to mitigate the computational cost of UQ for a 3D-1D multiscale computational model of microcirculation. To this purpose, we present a deep learning enhanced multi-fidelity Monte Carlo (DL-MFMC) method that integrates the information of a multiscale full-order model (FOM) with that coming from a deep learning enhanced non-intrusive projection-based reduced order model (ROM). The latter is constructed by leveraging proper orthogonal decomposition (POD) and mesh-informed neural networks (previously developed by the authors and co-workers), integrating diverse architectures that approximate the POD coefficients while introducing fine-scale corrections for the microstructures. The DL-MFMC approach provides a robust estimator of specific quantities of interest and their associated uncertainties, with optimal management of computational resources. In particular, the computational budget is efficiently divided between training and sampling, ensuring a reliable estimation process that suitably exploits the ROM speed-up. Here, we apply the DL-MFMC technique to accelerate the estimation of biophysical quantities regarding oxygen transfer and radiotherapy outcomes. Compared to classical Monte Carlo methods, the proposed approach shows remarkable speed-ups and a substantial reduction of the overall computational cost.
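As a point of reference, a generic two-fidelity Monte Carlo control-variate estimator of this kind can be written in a few lines; the model interfaces, the toy models, and the sample allocation below are illustrative assumptions and do not reproduce the paper's DL-MFMC budget optimization.

```python
import numpy as np

def mfmc_estimator(hf_model, lf_model, sampler, n_hf, n_lf, rng):
    """Two-fidelity control-variate Monte Carlo estimator: a few expensive
    full-order (FOM) runs are combined with many cheap reduced-order (ROM)
    runs evaluated on a shared set of parameter samples."""
    z = [sampler(rng) for _ in range(n_lf)]
    hf = np.array([hf_model(zi) for zi in z[:n_hf]])        # expensive evaluations
    lf_on_hf = np.array([lf_model(zi) for zi in z[:n_hf]])  # ROM on the same inputs
    lf_all = np.array([lf_model(zi) for zi in z])           # ROM on all inputs
    alpha = np.cov(hf, lf_on_hf)[0, 1] / np.var(lf_on_hf, ddof=1)  # control-variate weight
    return hf.mean() + alpha * (lf_all.mean() - lf_on_hf.mean())

# usage on a toy problem where the "ROM" is a cheap surrogate of the "FOM"
rng = np.random.default_rng(0)
est = mfmc_estimator(hf_model=lambda z: np.exp(z),
                     lf_model=lambda z: 1.0 + z + 0.5 * z**2,
                     sampler=lambda r: r.normal(),
                     n_hf=20, n_lf=2000, rng=rng)
```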
Complex system design problems, such as those involved in aerospace engineering, require the use of numerically costly simulation codes in order to predict the performance of the system to be designed. In this context, these codes are often embedded into an optimization process to provide the best design while satisfying the design constraints. Recently, new approaches, called Quality-Diversity, have been proposed in order to enhance the exploration of the design space and to provide a set of optimal diversified solutions with respect to some feature functions. These functions are useful for assessing trade-offs. Furthermore, complex design problems often involve mixed continuous, discrete, and categorical design variables, which allow technological choices to be taken into account in the optimization problem. Existing Bayesian Quality-Diversity approaches suited for intensive high-fidelity simulations are not adapted to mixed-variable constrained optimization problems. In order to overcome these limitations, a new Quality-Diversity methodology based on a mixed-variable Bayesian optimization strategy is proposed in the context of a limited simulation budget. Using adapted covariance models and a dedicated enrichment strategy for the Gaussian processes in Bayesian optimization, this approach reduces the computational cost by up to two orders of magnitude with respect to classical Quality-Diversity approaches while dealing with discrete choices and the presence of constraints. The performance of the proposed method is assessed on a benchmark of analytical problems as well as on two aerospace system design problems, highlighting its efficiency in terms of speed of convergence. The proposed approach provides valuable trade-offs for decision-makers for complex system design.
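One common ingredient of such mixed-variable Gaussian process models is a product kernel over continuous and categorical inputs, sketched below; the specific kernel form and hyperparameters are illustrative assumptions, not necessarily the covariance models used in the paper.

```python
import numpy as np

def mixed_kernel(x1, x2, c1, c2, length_scale=1.0, theta=0.5):
    """Product kernel for mixed inputs: an RBF kernel on the continuous part
    times an exchangeable kernel on the categorical part (1 if levels match,
    theta in [0, 1] otherwise), a common choice for mixed-variable GPs."""
    rbf = np.exp(-0.5 * np.sum((np.asarray(x1) - np.asarray(x2)) ** 2) / length_scale**2)
    cat = np.prod(np.where(np.asarray(c1) == np.asarray(c2), 1.0, theta))
    return rbf * cat

# usage: covariance between two candidate designs with 2 continuous and 2 categorical variables
k = mixed_kernel([0.2, 1.5], [0.3, 1.0], [0, 2], [0, 1])
```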
Differentially private synthetic data provide a powerful mechanism to enable data analysis while protecting sensitive information about individuals. However, when the data lie in a high-dimensional space, the accuracy of the synthetic data suffers from the curse of dimensionality. In this paper, we propose a differentially private algorithm to generate low-dimensional synthetic data efficiently from a high-dimensional dataset with a utility guarantee with respect to the Wasserstein distance. A key step of our algorithm is a private principal component analysis (PCA) procedure with a near-optimal accuracy bound that circumvents the curse of dimensionality. Unlike the standard perturbation analysis, our analysis of private PCA works without assuming the spectral gap for the covariance matrix.
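A standard construction of such a private PCA step is the Gaussian mechanism applied to the empirical covariance; the sketch below is generic (assuming unit-norm rows and a conservative sensitivity bound) and is not necessarily the paper's procedure or its accuracy bound.

```python
import numpy as np

def private_pca(X, k, eps, delta, rng):
    """Gaussian-mechanism PCA: perturb the empirical covariance with symmetric
    Gaussian noise and return the top-k eigenvectors. Rows of X are assumed to
    have L2 norm at most 1, so replacing one row changes X.T @ X / n by at most
    2/n in Frobenius norm (a conservative sensitivity bound)."""
    n = X.shape[0]
    cov = X.T @ X / n
    sigma = (2.0 / n) * np.sqrt(2.0 * np.log(1.25 / delta)) / eps   # noise scale
    noise = rng.normal(0.0, sigma, size=cov.shape)
    noise = np.triu(noise) + np.triu(noise, 1).T                    # symmetrize
    eigvals, eigvecs = np.linalg.eigh(cov + noise)
    return eigvecs[:, np.argsort(eigvals)[::-1][:k]]                # top-k directions

# usage: project a dataset onto 2 private principal directions
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 50))
X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)      # enforce unit-norm rows
V = private_pca(X, k=2, eps=1.0, delta=1e-5, rng=rng)
X_low = X @ V
```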