This paper is concerned with geometric exponential energy-preserving integrators for solving charged-particle dynamics in a magnetic field, from normal to strong regimes. We first formulate the scheme of the methods for the system in a uniform magnetic field using the idea of continuous-stage methods, and then discuss its energy-preserving property. Moreover, symmetry conditions and order conditions are analysed. Based on these conditions, we propose two practical symmetric continuous-stage exponential energy-preserving integrators of order up to four. We then extend the obtained methods to the system in a nonuniform magnetic field and derive their properties, including symmetry, convergence and energy conservation. Numerical experiments demonstrate the efficiency of the proposed methods in comparison with some existing schemes in the literature.
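The energy-preservation property in the uniform-field setting can be illustrated with a much simpler, standard scheme. The sketch below uses the classical Boris rotation (not the paper's continuous-stage exponential integrators) for the Lorentz-force system $\dot v = v \times B$ with no electric field; the rotation leaves $|v|$, and hence the kinetic energy, invariant.

```python
import numpy as np

def boris_step(x, v, B, dt):
    """One Boris step for x' = v, v' = v x B (charge/mass absorbed into B).
    In a pure uniform magnetic field the velocity update is an exact
    rotation, so |v|^2 (the kinetic energy) is conserved exactly."""
    t = 0.5 * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v + np.cross(v, t)       # half rotation
    v_new = v + np.cross(v_prime, s)   # full rotation
    x_new = x + dt * v_new             # position push
    return x_new, v_new
```

Iterating this step over many periods leaves the kinetic energy unchanged up to round-off, which is the discrete analogue of the conservation property the paper establishes for its higher-order methods.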
In order to quantitatively characterize the fluctuation between the ergodic limit and the time-averaging estimator of a full discretization, we establish a central limit theorem for the full discretization of the parabolic stochastic partial differential equation. The theorem shows that the normalized time-averaging estimator converges to a normal distribution whose variance is the same as that of the continuous case, where the scale used for the normalization corresponds to the temporal strong convergence order of the considered full discretization. A key ingredient of the proof is extracting an appropriate martingale difference series sum from the normalized time-averaging estimator so that the convergence of this sum to the normal distribution and the convergence of the remainder to zero in probability are well balanced. The main novelty of our method for balancing these convergences lies in proposing an appropriately modified Poisson equation that possesses space-independent regularity estimates. As a byproduct, the full discretization is shown to fulfill the weak law of large numbers; namely, the time-averaging estimator converges to the ergodic limit in probability.
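The object of study, the time-averaging estimator of an ergodic limit, can be demonstrated on a toy problem. The sketch below (my own finite-dimensional stand-in, not the paper's SPDE setting) applies Euler-Maruyama to the ergodic Ornstein-Uhlenbeck SDE $dX = -X\,dt + dW$ and forms the time average of $f(x)=x^2$; the ergodic limit is $\mathbb{E}[X^2] = 1/2$ under the invariant law $N(0, 1/2)$, consistent with the weak law of large numbers stated above.

```python
import numpy as np

def time_averaging_estimator(T=2000.0, dt=0.01, seed=0):
    """Euler-Maruyama for dX = -X dt + dW and the time-averaging
    estimator (1/N) sum f(X_n) with f(x) = x^2; its ergodic limit
    is E[X^2] = 1/2 under the invariant measure N(0, 1/2)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, acc = 0.0, 0.0
    for _ in range(n):
        x += -x * dt + np.sqrt(dt) * rng.standard_normal()
        acc += x * x
    return acc / n
```

For large `T` the estimator concentrates around 0.5; the CLT of the paper describes, in the SPDE setting, the scale and limiting distribution of exactly this kind of fluctuation.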
We use the augmented Lagrangian formalism to derive discontinuous Galerkin formulations for problems in nonlinear elasticity. In elasticity, stress is typically a symmetric function of strain, leading to symmetric tangent stiffness matrices in Newton's method when conforming finite elements are used for discretization. Using the augmented Lagrangian framework, we can obtain symmetric tangent stiffness matrices in discontinuous Galerkin methods as well. We suggest two different approaches and give examples from plasticity and from large-deformation hyperelasticity.
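The augmented Lagrangian formalism itself can be illustrated on a scalar model problem, far removed from the paper's DG elasticity setting. The sketch below runs the classical method of multipliers for $\min x^2$ subject to $x - 1 = 0$; the inner minimization of the augmented Lagrangian $x^2 + \lambda(x-1) + \tfrac{\rho}{2}(x-1)^2$ is available in closed form for this toy problem.

```python
def method_of_multipliers(rho=10.0, iters=25):
    """Method of multipliers for min x^2 s.t. x - 1 = 0.
    Inner problem: minimize x^2 + lam*(x-1) + rho/2*(x-1)^2 over x,
    whose stationary point is x = (rho - lam)/(2 + rho); then the
    multiplier is updated with the constraint residual."""
    lam = 0.0
    for _ in range(iters):
        x = (rho - lam) / (2.0 + rho)   # exact inner minimization
        lam += rho * (x - 1.0)          # multiplier update
    return x, lam
```

The iteration converges to the constrained minimizer $x = 1$ with multiplier $\lambda = -2$; note that the Hessian of the augmented Lagrangian in $x$ stays symmetric, which is the structural property the paper exploits.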
In traditional logistic regression models, the link function is often assumed to be linear and continuous in the predictors. Here, we consider a threshold model in which all continuous features are discretized into ordinal levels, which in turn determine the binary responses. Both the threshold points and the regression coefficients are unknown and must be estimated. For high-dimensional data, we propose a fusion penalized logistic threshold regression (FILTER) model, where a fused lasso penalty is employed to control the total variation and shrink the coefficients to zero as a method of variable selection. Under mild conditions on the estimates of the unknown threshold points, we establish a non-asymptotic error bound for coefficient estimation and model selection consistency. With a careful characterization of the error propagation, we also show that tree-based methods, such as CART, fulfill the threshold estimation conditions. We find the FILTER model well suited to the early detection and prediction of chronic diseases such as diabetes using physical examination data. The finite-sample behavior of our proposed method is also explored in extensive Monte Carlo studies, which support our theoretical findings.
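The two ingredients of the model, discretization at threshold points and the fused lasso penalty, can be sketched in a few lines. This is an illustrative fragment under my own conventions (level 0 below the first threshold, and a penalty on adjacent-level coefficient differences), not the paper's estimation procedure.

```python
import numpy as np

def discretize(x, thresholds):
    """Map a continuous feature to ordinal levels 0, 1, ..., K given
    K candidate threshold points (level = number of thresholds below x)."""
    return np.searchsorted(np.sort(np.asarray(thresholds)), x)

def fused_lasso_penalty(beta, lam):
    """Fused lasso (total-variation) penalty lam * sum |beta_{j+1} - beta_j|,
    which fuses coefficients of adjacent ordinal levels."""
    return lam * np.sum(np.abs(np.diff(beta)))
```

Discretized levels would enter the logistic likelihood via level-indicator covariates, and the penalty encourages neighboring levels to share a common coefficient, shrinking redundant splits away.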
We study algorithmic applications of a natural discretization for the hard-sphere model and the Widom-Rowlinson model in a region $\mathbb{V}\subset\mathbb{R}^d$. These models are used in statistical physics to describe mixtures of one or multiple particle types subject to hard-core interactions. For each type, particles follow a Poisson point process with a type-specific activity parameter (fugacity). The Gibbs distribution is characterized by the mixture of these point processes conditioned on no two particles being closer than a type-dependent distance threshold. A key step towards better understanding the Gibbs distribution is its normalizing constant, called the partition function. We give sufficient conditions under which the partition function of a discrete hard-core model on a geometric graph based on a point set $X \subset \mathbb{V}$ closely approximates those of such continuous models. Previously, this was only shown for the hard-sphere model on cubic regions $\mathbb{V}=[0, \ell)^d$ when the size of $X$ is exponential in the volume of the region $\nu(\mathbb{V})$, limiting algorithmic applications. In the same setting, our refined analysis only requires a quadratic number of points, which we argue to be tight. We use our improved discretization results to approximate the partition functions of the hard-sphere model and the Widom-Rowlinson model efficiently in $\nu(\mathbb{V})$. For the hard-sphere model, we obtain the first quasi-polynomial deterministic approximation algorithm for the entire fugacity regime, for which, so far, only randomized approximations are known. Furthermore, we simplify a recently introduced fully polynomial randomized approximation algorithm. Similarly, we obtain the best known deterministic and randomized approximation bounds for the Widom-Rowlinson model. Moreover, we obtain approximate sampling algorithms for the respective spin systems within the same fugacity regimes.
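The discrete object at the heart of the analysis, the hard-core partition function on a geometric graph, can be computed by brute force for tiny instances. The sketch below (exponential-time enumeration, purely illustrative; the paper's algorithms are of course far more efficient) sums $\lambda^{|I|}$ over independent sets $I$ of the geometric graph in which points closer than a threshold $r$ are adjacent.

```python
import itertools
import numpy as np

def hard_core_partition_function(points, r, lam):
    """Brute-force partition function Z = sum over independent sets I of
    lam^|I| for the discrete hard-core model on the geometric graph over
    `points`: two points are adjacent (mutually exclusive) iff their
    Euclidean distance is below r."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    close = [[np.linalg.norm(pts[i] - pts[j]) < r for j in range(n)]
             for i in range(n)]
    Z = 0.0
    for mask in itertools.product([0, 1], repeat=n):
        chosen = [i for i in range(n) if mask[i]]
        if all(not close[i][j] for i, j in itertools.combinations(chosen, 2)):
            Z += lam ** len(chosen)
    return Z
```

On two far-apart points $Z$ factorizes as $(1+\lambda)^2$; once the points fall within distance $r$ of each other, the joint configuration is excluded and $Z$ drops to $1 + 2\lambda$.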
We present two strategies for designing passivity-preserving higher-order discretization methods for Maxwell's equations in nonlinear Kerr-type media. Both approaches are based on variational approximation schemes in space and time. This allows us to rigorously prove energy conservation or dissipation, and thus passivity, at the fully discrete level. For linear media, the proposed methods coincide with certain combinations of mixed finite element and implicit Runge-Kutta schemes. The order-optimal convergence rates, which can thus be expected for linear problems, are also observed for nonlinear problems in the numerical tests.
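The role of implicit Runge-Kutta schemes in discrete energy conservation can be shown on a minimal ODE example. The sketch below uses the implicit midpoint rule (the one-stage Gauss method, not the paper's full space-time discretization) for $u' = Au$; for skew-symmetric $A$, as arises in lossless wave propagation, the quadratic energy $|u|^2$ is conserved exactly at the discrete level.

```python
import numpy as np

def implicit_midpoint_step(A, u, dt):
    """One step of the implicit midpoint rule for u' = A u:
    u_{n+1} = (I - dt/2 A)^{-1} (I + dt/2 A) u_n.
    For skew-symmetric A this Cayley transform is orthogonal,
    so the discrete energy |u|^2 is conserved exactly."""
    n = A.shape[0]
    I = np.eye(n)
    return np.linalg.solve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ u)
```

Running many steps with a skew-symmetric rotation matrix leaves $|u|^2$ unchanged up to round-off, the finite-dimensional analogue of the fully discrete passivity proved in the paper.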
Learning mappings between two function spaces has attracted considerable research attention. However, learning the solution operator of partial differential equations (PDEs) remains a challenge in scientific computing. In this study, we therefore propose a novel pseudo-differential integral operator (PDIO) inspired by pseudo-differential operators, which generalize differential operators and are characterized by a certain symbol. We parameterize the symbol using a neural network and show that the neural-network-based symbol is contained in a smooth symbol class. Subsequently, we prove that the PDIO is a bounded linear operator and is thus continuous on Sobolev spaces. We combine the PDIO with the neural operator to develop a pseudo-differential neural operator (PDNO) that learns the nonlinear solution operator of PDEs. We experimentally validate the effectiveness of the proposed model on Burgers' equation, Darcy flow, and the Navier-Stokes equation. The results reveal that the proposed PDNO outperforms the existing neural operator approaches in most experiments.
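The core mechanism, applying an operator through its symbol in Fourier space, can be sketched with an FFT. The fragment below implements a plain Fourier multiplier $u \mapsto \mathcal{F}^{-1}[a(\xi)\,\mathcal{F}u]$ on a periodic 1D grid; in the PDNO the symbol $a$ would be a learned neural network (and may depend on $x$ as well), which is not reproduced here.

```python
import numpy as np

def apply_symbol(u, symbol_fn, L=2 * np.pi):
    """Apply the Fourier-multiplier operator u -> F^{-1}[ a(xi) F u ]
    on a uniform periodic grid of length L. `symbol_fn` plays the role
    of an x-independent symbol a(xi); e.g. a(xi) = i*xi recovers d/dx."""
    n = u.shape[0]
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular frequencies
    return np.fft.ifft(symbol_fn(xi) * np.fft.fft(u)).real
```

With the differentiation symbol $a(\xi) = i\xi$, the operator reproduces spectral differentiation exactly on band-limited data such as $\sin x$.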
Generalizing knowledge beyond source domains is a crucial prerequisite for many biomedical applications such as drug design and molecular property prediction. To meet this challenge, researchers have used optimal transport (OT) to perform representation alignment between the source and target domains. Yet existing OT algorithms are mainly designed for classification tasks. Accordingly, we consider regression tasks in the unsupervised and semi-supervised settings in this paper. To exploit continuous labels, we propose novel metrics to measure domain distances and introduce a posterior variance regularizer on the transport plan. Further, while computationally appealing, OT suffers from ambiguous decision boundaries and biased local data distributions introduced by mini-batch training. To address these issues, we propose to couple OT with metric learning to yield more robust boundaries and reduce bias. Specifically, we present a dynamic hierarchical triplet loss to describe the global data distribution, where the cluster centroids are progressively adjusted across consecutive iterations. We evaluate our method on both unsupervised and semi-supervised learning tasks in biochemistry. Experiments show that the proposed method significantly outperforms state-of-the-art baselines across various benchmark datasets of small molecules and material crystals.
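The metric-learning component builds on the standard triplet loss, sketched below. This is the textbook form only; the paper's dynamic hierarchical variant, with cluster centroids adjusted across iterations, is not reproduced here.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss max(0, d(a, p) - d(a, n) + margin):
    pulls the positive within `margin` closer to the anchor than
    the negative, and is zero once that holds."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)
```

A well-separated triplet incurs zero loss, while a negative that sits closer to the anchor than the positive is penalized, which is how the loss sharpens decision boundaries alongside the OT alignment.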
Singularity subtraction for linear weakly singular Fredholm integral equations of the second kind is generalized to nonlinear integral equations. Two approaches are presented. The Classical Approach discretizes the nonlinear problem and uses a finite-dimensional linearization process to solve the discrete problem numerically. Its convergence is proved under mild hypotheses on the nonlinearity and on the quadrature rule of the singularity subtraction scheme. The New Approach is based on linearizing the problem in its infinite-dimensional setting and discretizing the resulting sequence of linear problems by singularity subtraction. It is more efficient than the former, as two numerical experiments confirm.
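The underlying singularity subtraction trick can be sketched for a single weakly singular integral. The fragment below (an illustration of the classical device, not the paper's schemes) rewrites $\int_0^1 k(s,t)\,x(t)\,dt$ as $\int_0^1 k(s,t)\,[x(t)-x(s)]\,dt + x(s)\int_0^1 k(s,t)\,dt$: the first integrand is bounded and amenable to quadrature, while the kernel moment is assumed available in closed form.

```python
import numpy as np

def singularity_subtraction(k, x, s, nodes, weights, k_int):
    """Quadrature for int_0^1 k(s,t) x(t) dt with a weakly singular
    kernel k: integrate the bounded part k(s,t)(x(t) - x(s)) numerically,
    then add back x(s) * int_0^1 k(s,t) dt via the closed-form moment
    k_int(s)."""
    smooth = sum(w * k(s, t) * (x(t) - x(s))
                 for t, w in zip(nodes, weights) if t != s)
    return smooth + x(s) * k_int(s)
```

For a constant density $x$ the subtracted part vanishes identically and the rule returns the exact kernel moment; e.g. for $k(s,t)=\log|s-t|$ one has $\int_0^1 \log|s-t|\,dt = s\log s + (1-s)\log(1-s) - 1$.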
Image segmentation is the process of partitioning an image into meaningful regions that are easier to analyze. Nowadays, segmentation has become a necessity in many practical medical imaging tasks, such as locating tumors and diseases. The Hidden Markov Random Field model is one of several techniques used in image segmentation; it provides an elegant way to model the segmentation process, and this modeling leads to the minimization of an objective function. This paper proposes the use of the Conjugate Gradient (CG) algorithm, one of the best-known optimization techniques, for image segmentation based on the Hidden Markov Random Field model. Since derivatives of this objective are not available in closed form, finite differences are used in the CG algorithm to approximate the first derivative. The approach is evaluated on a number of publicly available images for which the ground truth is known, with the Dice coefficient as an objective criterion to measure the quality of segmentation. The results show that the proposed CG approach compares favorably with other variants of Hidden Markov Random Field segmentation algorithms.
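The derivative-free ingredient, approximating a gradient by finite differences, is easy to sketch. The helper below (a generic forward-difference approximation, standing in for whatever HMRF energy the paper minimizes) produces the gradient estimate that a nonlinear CG iteration would consume in place of an analytic derivative.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference approximation of the gradient of a scalar
    objective f at x, used when no closed-form derivative exists
    (as for the HMRF segmentation energy)."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - fx) / h
    return g
```

The approximation error is $O(h)$ per component, so for smooth objectives the estimate is accurate enough to drive CG search directions.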
Image segmentation is still an open problem, especially when the intensities of the objects of interest overlap due to intensity inhomogeneity (also known as bias field). To segment images with intensity inhomogeneities, a bias correction embedded level set model is proposed in which Inhomogeneities are Estimated by Orthogonal Primary Functions (IEOPF). In the proposed model, the smoothly varying bias is estimated by a linear combination of a given set of orthogonal primary functions. An inhomogeneous intensity clustering energy is then defined, and membership functions of the clusters described by the level set function are introduced to rewrite this energy as the data term of the proposed model. As in popular level set methods, a regularization term and an arc length term are also included to regularize and smooth the level set function, respectively. The proposed model is then extended to multichannel and multiphase settings to segment colour images and images with multiple objects, respectively. It has been extensively tested on synthetic and real images widely used in the literature, as well as on the public BrainWeb and IBSR datasets. Experimental results and comparisons with state-of-the-art methods demonstrate the advantages of the proposed model in terms of bias correction and segmentation accuracy.
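Estimating a smooth bias field as a linear combination of orthogonal basis functions can be sketched in one dimension. The example below uses Legendre polynomials as one concrete choice of orthogonal primary functions (the paper does not commit to this basis here) and fits the combination coefficients by least squares, rather than through the model's clustering energy.

```python
import numpy as np

def estimate_bias(profile, degree=3):
    """Fit a smooth 1D bias field as a linear combination of Legendre
    polynomials up to `degree` (an example set of orthogonal primary
    functions), with coefficients obtained by least squares."""
    n = profile.size
    t = np.linspace(-1.0, 1.0, n)                      # reference interval
    basis = np.polynomial.legendre.legvander(t, degree)  # n x (degree+1)
    coeffs, *_ = np.linalg.lstsq(basis, profile, rcond=None)
    return basis @ coeffs
```

Any bias that actually lies in the span of the basis, e.g. a linear ramp, is recovered exactly; in the full model the same linear-combination structure is fitted jointly with the level set evolution.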