We explore the possibility of fully replacing a plasma physics kinetic simulator with a graph neural network-based simulator. We focus on this class of surrogate models given the similarity between their message-passing update mechanism and the traditional physics solver update, and the possibility of enforcing known physical priors in the graph construction and update. We show that our model learns the kinetic plasma dynamics of the one-dimensional plasma model, a predecessor of contemporary kinetic plasma simulation codes, and recovers a wide range of well-known kinetic plasma processes, including plasma thermalization, electrostatic fluctuations about thermal equilibrium, the drag on a fast sheet, and Landau damping. We compare the performance against the original plasma model in terms of run time, conservation laws, and the temporal evolution of key physical quantities. The limitations of the model are presented, and possible directions for higher-dimensional surrogate models for kinetic plasmas are discussed.
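As a point of reference, a generic message-passing update of the kind referred to above can be written as follows; the update functions $\phi_e$ and $\phi_v$ and the sum aggregation are a standard graph-network formulation given here for illustration, not necessarily the exact form used by the model:
\[
e'_{ij} = \phi_e\left(e_{ij}, v_i, v_j\right), \qquad v'_i = \phi_v\Big(v_i, \sum_{j \in \mathcal{N}(i)} e'_{ij}\Big),
\]
where $v_i$ are node features (e.g., particle or sheet states), $e_{ij}$ are edge features between interacting nodes, and $\mathcal{N}(i)$ denotes the neighbors of node $i$; physical priors can be encoded in how the graph edges are constructed and in the form of $\phi_e$ and $\phi_v$.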
In nonparametric Bayesian approaches, Gaussian stochastic processes can serve as priors on real-valued function spaces. Existing literature on posterior convergence rates under Gaussian process priors shows that it is possible to achieve optimal or near-optimal posterior contraction rates if the smoothness of the Gaussian process matches that of the target function. Among these priors, Gaussian processes with a parametric Mat\'ern covariance function are particularly notable in that their degree of smoothness is governed by a dedicated smoothness parameter. Ma and Bhadra (2023) recently introduced a new family of covariance functions, called the Confluent Hypergeometric (CH) class, that possesses two parameters simultaneously: one controls the tail index of the polynomially decaying covariance function, while the other controls the degree of mean-squared smoothness, analogous to the Mat\'ern class. In this paper, we show that with a proper choice of rescaling parameters in the Mat\'ern and CH covariance functions, it is possible to obtain the minimax optimal posterior contraction rate for $\eta$-regular functions. Unlike previous results for the unrescaled case, the smoothness parameter of the covariance function need not equal $\eta$ to achieve the optimal minimax rate, for either the rescaled Mat\'ern or the rescaled CH covariance, illustrating a key benefit of rescaling. The theoretical properties of the rescaled Mat\'ern and CH classes are further verified via extensive simulations, and an illustration on a geospatial data set is presented.
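For concreteness, a standard parametrization of the Mat\'ern covariance with smoothness parameter $\nu$ and rescaling (length-scale) parameter $\ell$ reads
\[
K_{\nu,\ell}(x, x') = \sigma^2\, \frac{2^{1-\nu}}{\Gamma(\nu)} \left( \frac{\sqrt{2\nu}\,\|x - x'\|}{\ell} \right)^{\nu} \mathcal{K}_{\nu}\!\left( \frac{\sqrt{2\nu}\,\|x - x'\|}{\ell} \right),
\]
where $\mathcal{K}_{\nu}$ is the modified Bessel function of the second kind; the symbols $\nu$, $\ell$, and $\sigma^2$ are our notation for a common parametrization and may differ slightly from the one used in the paper. Here $\nu$ governs the mean-squared smoothness, while $\ell$ is the rescaling parameter tuned to attain the optimal contraction rate.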
We present a computational modelling approach that captures how a Metaverse user's available social time capacity can be virtually augmented by an independent and autonomous version of her digital representation in the Metaverse. We envision a Metaverse-focused extension of the traditional avatar concept: an avatar can also be programmed to operate independently when its user is not controlling it directly, turning it into an agent-based digital human representation. In this way, the user can delegate to the avatar the socializing time required for maintaining existing contacts, thereby preserving spare non-avatar-mediated socializing time that can be invested in additional socialization activities. We model the setting and identify the characteristic variables using selected concepts from the social sciences: ego networks, social presence, and social cues. We then formulate the problem of maximizing the user's non-avatar-mediated spare time as a linear optimization problem. Finally, we analyze the feasible region of the problem and present initial insights on the spare time that can be achieved for different parameter values of the avatar-mediated interactions.
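A minimal sketch of how such a linear optimization might be set up is given below; the variable names, bounds, and numerical values are purely illustrative assumptions and are not taken from the paper.

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical setting: x[i] is the avatar-mediated socializing time delegated
    # for contact i; delegating time frees up the same amount of non-avatar-mediated
    # spare time for the user.
    required_time = np.array([3.0, 2.0, 5.0, 1.5])   # assumed hours/week needed per contact
    max_delegable = 0.6 * required_time              # assumed cap preserving social presence
    avatar_budget = 6.0                              # assumed total hours the avatar can operate

    # linprog minimizes, so the objective is negated to maximize the delegated (spare) time.
    res = linprog(
        c=-np.ones_like(required_time),
        A_ub=np.ones((1, len(required_time))),       # total avatar time <= avatar_budget
        b_ub=[avatar_budget],
        bounds=[(0.0, cap) for cap in max_delegable],
    )
    print("delegated hours per contact:", res.x)
    print("spare time gained:", -res.fun)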
The purpose of this work is to present an effective tool for computing different QR-decompositions of a complex nonsingular square matrix. The concept of the discrete signal-induced heap transform (DsiHT, Grigoryan 2006) is used. This transform is fast, has a unique algorithm for any length of the input vector/signal, and can be used with different complex basic 2x2 transforms. The DsiHT zeroes all components of the input signal while moving, or heaping, the energy of the signal into one component, such as the first. We describe three different types of QR-decomposition that use the basic transforms with the T-, G-, and M-type complex matrices we introduce, as well as a variant that uses no matrices but analytical formulas instead. We also present the mixed QR-decomposition, in which different types of DsiHTs are used at different stages of the algorithm. The number of such decompositions is greater than 3^(N-1) for an NxN complex matrix. Examples of the QR-decomposition are described in detail for 4x4 and 6x6 complex matrices and compared with the known method of Householder transforms. The precision of the QR-decompositions of NxN matrices is also compared for N = 6, 13, 17, 19, 21, 40, 64, 100, 128, 201, 256, and 400. MATLAB-based scripts implementing the described DsiHT-based QR-decompositions are given.
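For reference, a minimal NumPy sketch of the classical Householder QR factorization used as the comparison baseline is given below; it illustrates the baseline method for complex matrices only, not the DsiHT-based decompositions themselves.

    import numpy as np

    def householder_qr(A):
        """Householder QR of a complex square matrix A, returning Q, R with A = Q R."""
        A = np.asarray(A, dtype=complex)
        n = A.shape[0]
        R, Q = A.copy(), np.eye(n, dtype=complex)
        for k in range(n - 1):
            x = R[k:, k]
            # Phase of alpha chosen to avoid cancellation in the complex case.
            alpha = -np.exp(1j * np.angle(x[0])) * np.linalg.norm(x)
            v = x.copy()
            v[0] -= alpha
            nv = np.linalg.norm(v)
            if nv < 1e-15:      # column already reduced
                continue
            v /= nv
            # Apply the unitary reflector H = I - 2 v v^H to R and accumulate it in Q.
            R[k:, :] -= 2.0 * np.outer(v, v.conj() @ R[k:, :])
            Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v.conj())
        return Q, R

    # Example check: Q, R = householder_qr(A); np.allclose(Q @ R, A) should hold.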
This paper considers master equations for Markovian kinetic schemes that possess the detailed balance property. Chemical kinetics, a prime example, often yields large-scale, highly stiff equations. Based on chemical intuition, Sumiya et al. (2015) presented the rate constant matrix contraction (RCMC) method, which computes approximate solutions to such intractable systems. This paper aims to establish a mathematical foundation for the RCMC method. We present a reformulated RCMC method in terms of matrix computation, deriving the method from several natural requirements. We then perform a theoretical error analysis based on eigendecomposition and discuss implementation details with attention to computational efficiency and numerical stability. Through numerical experiments on synthetic and real kinetic models, we validate the efficiency, numerical stability, and accuracy of the presented method.
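For reference, the master equation and the detailed balance condition referred to above can be written in standard form (the notation here is ours):
\[
\frac{\mathrm{d} p_i(t)}{\mathrm{d} t} = \sum_{j \neq i} \bigl( k_{ij}\, p_j(t) - k_{ji}\, p_i(t) \bigr), \qquad k_{ij}\, \pi_j = k_{ji}\, \pi_i,
\]
where $p_i(t)$ is the population of state $i$, $k_{ij}$ is the rate constant for the transition $j \to i$, and $\pi$ is the equilibrium distribution; the second relation is the detailed balance property assumed for the kinetic schemes considered.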
Denoising diffusion models have found applications in image segmentation by generating segmentation masks conditioned on images. Existing studies predominantly focus on adjusting the model architecture or improving inference, such as test-time sampling strategies. In this work, we focus on improving the training strategy and propose a novel recycling method. During each training step, a segmentation mask is first predicted given an image and random noise. This predicted mask, which replaces the conventional ground truth mask, is then used for the denoising task during training. This approach can be interpreted as aligning the training strategy with inference by eliminating the dependence on ground truth masks for generating noisy samples. Our proposed method significantly outperforms standard diffusion training, self-conditioning, and existing recycling strategies across multiple medical imaging data sets: muscle ultrasound, abdominal CT, prostate MR, and brain MR. This holds for two widely adopted sampling strategies: the denoising diffusion probabilistic model and the denoising diffusion implicit model. Importantly, existing diffusion models often display declining or unstable performance during inference, whereas our novel recycling consistently enhances or maintains performance. We show that, under a fair comparison with the same network architectures and computing budget, the proposed recycling-based diffusion models achieve performance on par with non-diffusion-based supervised training. Ensembling the proposed diffusion models with the non-diffusion models yields significant improvements over the non-diffusion models alone across all applications, demonstrating the value of this novel training method. This paper summarizes these quantitative results and discusses their implications, with a fully reproducible JAX-based implementation released at //github.com/mathpluscode/ImgX-DiffSeg.
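A minimal sketch of one such recycling training step is shown below; the model interface, noise schedule, and loss are illustrative assumptions and do not necessarily match the released implementation.

    import numpy as np

    def recycling_training_step(model, image, gt_mask, num_steps=1000, seed=0):
        """One illustrative recycling step; model(noisy_mask, image, t) is assumed
        to predict a clean segmentation mask."""
        rng = np.random.default_rng(seed)

        # Step 1: predict a mask from pure noise, without using the ground truth.
        noise = rng.standard_normal(gt_mask.shape)
        predicted_mask = model(noise, image, num_steps - 1)

        # Step 2: build the noisy training sample from the *predicted* mask,
        # replacing the conventional ground-truth mask.
        t = int(rng.integers(0, num_steps))
        alpha_bar = np.cos(0.5 * np.pi * t / num_steps) ** 2   # assumed cosine schedule
        eps = rng.standard_normal(gt_mask.shape)
        noisy_mask = np.sqrt(alpha_bar) * predicted_mask + np.sqrt(1.0 - alpha_bar) * eps

        # Step 3: denoise the recycled sample and supervise against the ground truth.
        reconstruction = model(noisy_mask, image, t)
        return np.mean((reconstruction - gt_mask) ** 2)        # assumed MSE loss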
With increasing model and dataset sizes, the training of deep neural networks becomes a massive computational burden. One approach to speed up the training process is Selective Backprop: a forward pass is performed to obtain a loss value for each data point in a minibatch, and the backward pass is then restricted to a subset of that minibatch, prioritizing high-loss examples. We build on this approach but seek to improve the subset selection mechanism by choosing the (weighted) subset that best matches the mean gradient over the entire minibatch. We use the gradients w.r.t. the model's last layer as a cheap proxy, resulting in virtually no overhead in addition to the forward pass. In our experiments, we additionally include a simple random-selection baseline, which has been absent from prior work. Surprisingly, we find that both the loss-based and the gradient-matching strategies fail to consistently outperform the random baseline.
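One simple way to select such a subset is a greedy matching over per-example last-layer gradients, sketched below; this is an illustrative heuristic and not necessarily the exact (weighted) matching procedure used in the experiments.

    import numpy as np

    def greedy_gradient_matching(per_example_grads, k):
        """Greedily pick k examples whose mean gradient best matches the
        minibatch mean gradient (per_example_grads has shape [batch, dim])."""
        target = per_example_grads.mean(axis=0)
        selected, remaining = [], list(range(len(per_example_grads)))
        for _ in range(k):
            best_i, best_err = None, np.inf
            for i in remaining:
                trial = selected + [i]
                err = np.linalg.norm(per_example_grads[trial].mean(axis=0) - target)
                if err < best_err:
                    best_i, best_err = i, err
            selected.append(best_i)
            remaining.remove(best_i)
        return selected     # indices to include in the backward pass

    # Example: pick 8 of 64 examples from 10-dimensional last-layer gradients.
    grads = np.random.default_rng(0).standard_normal((64, 10))
    subset = greedy_gradient_matching(grads, k=8)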
Coupled partial differential equations (PDEs) are key to modeling the complex dynamics of many physical processes. Recently, neural operators have shown the ability to solve PDEs by learning the integral kernel directly in Fourier/wavelet space, so the difficulty in solving coupled PDEs lies in handling the coupled mappings between the functions. Towards this end, we propose a \textit{coupled multiwavelets neural operator} (CMWNO) learning scheme that decouples the coupled integral kernels during the multiwavelet decomposition and reconstruction procedures in the wavelet space. The proposed model achieves significantly higher accuracy than previous learning-based solvers in solving coupled PDEs, including the Gray-Scott (GS) equations and the non-local mean field game (MFG) problem. In our experiments, the proposed model exhibits a $2\times$ to $4\times$ improvement in relative $L^2$ error compared to the best results from state-of-the-art models.
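As an example of the coupled systems considered, the Gray-Scott reaction-diffusion equations take the standard form
\[
\frac{\partial u}{\partial t} = D_u \nabla^2 u - u v^2 + F(1 - u), \qquad \frac{\partial v}{\partial t} = D_v \nabla^2 v + u v^2 - (F + k)\, v,
\]
where $u$ and $v$ are the coupled concentration fields, $D_u$ and $D_v$ are diffusion coefficients, $F$ is the feed rate, and $k$ is the kill rate (the parameter names are our notation); the coupling between the two equations enters through the nonlinear $u v^2$ terms, which is the kind of coupled mapping the learning scheme must handle.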
We define the weighted combinatorial Laplacian operators on a simplicial complex and investigate their spectral properties. Eigenvalues close to zero and their corresponding eigenvectors are of particular interest, and we show that they can detect almost $n$-dimensional holes in the given complex. Real-valued weights on the simplices allow gradient-descent-based optimization, which in turn yields an efficient dynamic coverage-repair algorithm for the sensor network of a mobile robot team.
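For reference, the unweighted combinatorial Laplacian in dimension $n$ can be written in terms of the boundary matrices $\partial_n$ of the complex as
\[
L_n = \partial_{n+1}\, \partial_{n+1}^{\mathsf{T}} + \partial_n^{\mathsf{T}}\, \partial_n,
\]
and, by combinatorial Hodge theory, $\ker L_n$ is isomorphic to the $n$-th homology of the complex, so eigenvalues near zero signal approximate $n$-dimensional holes; the weighted operators studied here modify this construction through real-valued weights on the simplices.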
Object detection is a fundamental task in computer vision and image processing. Current deep-learning-based object detectors have been highly successful when abundant labeled data are available, but in practice it is not guaranteed that each object category has enough labeled samples for training, and these large detectors easily overfit when training data are limited. It is therefore necessary to introduce few-shot learning and zero-shot learning into object detection, collectively referred to as low-shot object detection. Low-Shot Object Detection (LSOD) aims to detect objects from a few or even zero labeled examples and can be categorized into few-shot object detection (FSOD) and zero-shot object detection (ZSD), respectively. This paper conducts a comprehensive survey of deep-learning-based FSOD and ZSD. First, the survey classifies methods for FSOD and ZSD into different categories and discusses their pros and cons. Second, it reviews dataset settings and evaluation metrics for FSOD and ZSD, and then analyzes the performance of different methods on these benchmarks. Finally, it discusses future challenges and promising directions for FSOD and ZSD.
Human-in-the-loop learning aims to train an accurate prediction model with minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing works on human-in-the-loop from a data perspective and classify them into three categories with a progressive relationship: (1) improving model performance through data processing, (2) improving model performance through interventional model training, and (3) the design of system-independent human-in-the-loop. Using this categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and we provide a brief classification and discussion of applications in natural language processing, computer vision, and other domains. In addition, we outline open challenges and opportunities. This survey intends to provide a high-level summary of human-in-the-loop research and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.