We propose a new method to accurately approximate the Pompeiu-Hausdorff distance from a triangle soup A to another triangle soup B up to a given tolerance. Based on lower and upper bound computations, we discard triangles from A that do not contain the maximizer of the distance to B and subdivide the others for further processing. In contrast to previous methods, we use four upper bounds instead of only one, three of which are newly proposed by us. Many triangles are discarded using the simpler bounds, while the most difficult cases are handled by the remaining ones. Exhaustive testing determines the best ordering of the four upper bounds. A collection of experiments shows that our method is faster than all previous accurate methods in the literature.
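A minimal sketch of the bound-and-subdivide loop described above, assuming placeholder callables lower_bound, upper_bounds (ordered cheapest first), and subdivide in place of the paper's actual routines:

    def hausdorff_to_tolerance(triangles_A, B, tol, lower_bound, upper_bounds, subdivide):
        # Global lower bound on the Hausdorff distance from A to B.
        best_lower = max(lower_bound(t, B) for t in triangles_A)
        queue = list(triangles_A)
        while queue:
            t = queue.pop()
            # Try the cheaper upper bounds first; most triangles are culled here.
            if any(ub(t, B) <= best_lower + tol for ub in upper_bounds):
                continue  # t cannot contain the maximizer (up to the tolerance)
            # Otherwise refine: tighten the lower bound and subdivide t.
            best_lower = max(best_lower, lower_bound(t, B))
            queue.extend(subdivide(t))
        return best_lower  # within tol of the Hausdorff distance from A to B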
Medical Imaging (MI) tasks, such as accelerated parallel Magnetic Resonance Imaging (MRI), often involve reconstructing an image from noisy or incomplete measurements. This amounts to solving ill-posed inverse problems, where a satisfactory closed-form analytical solution is not available. Traditional methods such as Compressed Sensing (CS) for MRI reconstruction can be time-consuming or prone to producing low-fidelity images. Recently, a plethora of Deep Learning (DL) approaches have demonstrated superior performance in inverse-problem solving, surpassing conventional methods. In this study, we propose vSHARP (variable Splitting Half-quadratic ADMM algorithm for Reconstruction of inverse Problems), a novel DL-based method for solving ill-posed inverse problems arising in MI. vSHARP utilizes the Half-Quadratic Variable Splitting method and employs the Alternating Direction Method of Multipliers (ADMM) to unroll the optimization process. For data consistency, vSHARP unrolls a differentiable gradient descent process in the image domain, while a DL-based denoiser, such as a U-Net architecture, is applied to enhance image quality. vSHARP also employs a dilated-convolution DL-based model to predict the Lagrange multipliers for the ADMM initialization. We evaluate vSHARP on accelerated parallel MRI reconstruction using two distinct datasets and on accelerated parallel dynamic MRI reconstruction using another dataset. Our comparative analysis with state-of-the-art methods demonstrates the superior performance of vSHARP in these applications.
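As a rough illustration of the unrolling described above (a sketch under our own assumptions, not the authors' vSHARP code), one ADMM iteration for min_x ||Ax - y||^2 + R(z) subject to x = z, with a learned denoiser acting as the proximal step for R and a few differentiable gradient steps enforcing data consistency in the image domain, could look like:

    def admm_unrolled_iteration(x, z, u, y, A, A_adj, denoiser, rho=1.0, step=0.1, n_grad=4):
        # z-update: learned denoising step on the split (auxiliary) variable.
        z = denoiser(x + u)
        # x-update: differentiable gradient descent on
        #   ||A x - y||^2 + rho * ||x - z + u||^2  (constant factors absorbed into step/rho)
        for _ in range(n_grad):
            grad = A_adj(A(x) - y) + rho * (x - z + u)
            x = x - step * grad
        # Dual update of the (scaled) Lagrange multiplier.
        u = u + x - z
        return x, z, u

Here A and A_adj stand for a forward MRI operator (coil sensitivities plus subsampled Fourier transform) and its adjoint; the paper's dilated-convolution initializer for the Lagrange multipliers is not shown.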
Deep Video Quality Assessment (VQA) methods have shown impressive performance. Notably, no-reference (NR) VQA methods play a vital role in situations where obtaining reference videos is restricted or not feasible. Nevertheless, as more streaming videos are created in ultra-high definition (e.g., 4K) to enrich viewers' experiences, current deep VQA methods incur unacceptable computational costs. Furthermore, the resizing, cropping, and local sampling techniques employed in these methods can compromise the details and content of original 4K videos, thereby negatively impacting quality assessment. In this paper, we propose a novel and highly efficient NR 4K VQA method. Specifically, first, a novel data sampling and training strategy is proposed to tackle the problem of excessive resolution. This strategy allows the Swin Transformer-based VQA model to be trained and to run inference on the full data of 4K videos on standard consumer-grade GPUs without compromising content or details. Second, a weighting and scoring scheme is developed to mimic human subjective perception by accounting for the distinct impact of each sub-region within a 4K frame on the overall perception. Third, we incorporate frequency-domain information of video frames to better capture the details that affect video quality, further improving the model's generalizability. To our knowledge, this is the first method dedicated to the NR 4K VQA task. Thorough empirical studies demonstrate that it not only significantly outperforms existing methods on a specialized 4K VQA dataset but also achieves state-of-the-art performance across multiple open-source NR video quality datasets.
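The weighting/scoring and frequency-domain ideas above can be illustrated with a generic toy example (our own assumptions; in the paper the weights and features are learned, not fixed as below):

    import numpy as np

    def frame_quality(region_scores, region_weights):
        # Weighted aggregation of per-sub-region quality predictions.
        w = np.asarray(region_weights, dtype=float)
        return float(np.dot(w / w.sum(), region_scores))

    def frequency_feature(gray_frame):
        # Log-magnitude spectrum of a frame: a simple frequency-domain
        # descriptor that is sensitive to the loss of fine detail.
        spectrum = np.fft.fftshift(np.fft.fft2(gray_frame))
        return np.log1p(np.abs(spectrum))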
Zigzag and other piecewise deterministic Markov process samplers have attracted significant interest for their non-reversibility and other appealing properties for Bayesian posterior computation. Hamiltonian Monte Carlo is another state-of-the-art sampler, exploiting fictitious momentum to guide Markov chains through complex target distributions. We establish an important connection between the zigzag sampler and a variant of Hamiltonian Monte Carlo based on Laplace-distributed momentum. The position and velocity components of the corresponding Hamiltonian dynamics travel along a zigzag path paralleling the Markovian zigzag process; however, the dynamics is non-Markovian in this position-velocity space because the momentum component encodes the non-immediate past. This information is partially lost during a momentum refreshment step, in which we preserve the momentum's direction but re-sample its magnitude. In the limit of increasingly frequent momentum refreshments, we prove that Hamiltonian zigzag converges strongly to its Markovian counterpart. This theoretical insight suggests that, when retaining full momentum information, Hamiltonian zigzag can better explore target distributions with highly correlated parameters by suppressing the diffusive behavior of Markovian zigzag. We corroborate this intuition by comparing the performance of the two zigzag cousins on high-dimensional truncated multivariate Gaussians, including an 11,235-dimensional target arising from a Bayesian phylogenetic multivariate probit model of HIV data.
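For intuition, the partial momentum refreshment described above can be sketched as follows (an illustrative assumption consistent with independent standard Laplace momentum, for which the magnitude is standard exponential; not the authors' code):

    import numpy as np

    def refresh_momentum(p, rng=np.random.default_rng()):
        # Keep the direction sign(p) of each momentum component,
        # but re-draw its magnitude |p| from the exponential distribution.
        return np.sign(p) * rng.exponential(scale=1.0, size=p.shape)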
Image edge detection (ED) is a fundamental task in computer vision. While the performance of ED algorithms has been greatly improved by introducing CNN-based models, current models still suffer from unsatisfactory precision rates, especially when only a small error tolerance distance is allowed. Therefore, model architectures for more precise predictions still need investigation. On the other hand, the unavoidable noise in human-provided training data leads to unsatisfactory model predictions even when the inputs are edge maps themselves, which also needs improvement. In this paper, more precise ED models are presented, built on cascaded skipping density blocks (CSDB). Our models obtain state-of-the-art (SOTA) predictions on several datasets, especially in average precision (AP), which is confirmed by extensive experiments. Moreover, our models include no down-sampling operations, demonstrating that these widely adopted operations are not necessary. In addition, a novel modification to data augmentation for training is employed, which allows noiseless data to be used during training and thus improves the performance of models when predicting on edge maps themselves.
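As a purely illustrative guess at the kind of full-resolution block the abstract alludes to (not the paper's CSDB definition), a densely connected convolutional block without any down-sampling could be written as:

    import torch
    import torch.nn as nn

    class DenseBlockNoDownsample(nn.Module):
        # Assumes the input already has `channels` channels; spatial size is preserved.
        def __init__(self, channels=32, layers=4):
            super().__init__()
            self.convs = nn.ModuleList([
                nn.Conv2d(channels * (i + 1), channels, kernel_size=3, padding=1)
                for i in range(layers)
            ])

        def forward(self, x):
            feats = [x]
            for conv in self.convs:
                # Skip connections: each layer sees all previous feature maps.
                feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
            return feats[-1]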
Contrastive Learning (CL)-based recommender systems have gained prominence in the context of Heterogeneous Graphs (HGs) due to their capacity to enhance the consistency of representations across different views. Nonetheless, existing frameworks often neglect the fact that user-item interactions within an HG are governed by diverse latent intents (for instance, preferences towards specific brands or the demographic characteristics of item audiences), which are pivotal for capturing fine-grained relations. The exploration of these underlying intents, particularly through the lens of meta-paths in HGs, presents us with two principal challenges: i) how to integrate CL mechanisms with latent intents; ii) how to mitigate the noise associated with these complicated intents. To address these challenges, we propose an innovative framework termed Intent-Guided Heterogeneous Graph Contrastive Learning (IHGCL), which is designed to enhance CL-based recommendation by capturing the intents contained within meta-paths. Specifically, the IHGCL framework: i) employs a meta-path-based dual contrastive learning approach to effectively integrate intents into the recommendation, constructing both a meta-path contrast and a view contrast; ii) uses a bottlenecked autoencoder that combines mask propagation with the information bottleneck principle to significantly reduce the noise perturbations introduced by meta-paths. Empirical evaluations conducted across six distinct datasets demonstrate the superior performance of our IHGCL framework relative to conventional baseline methods. Our model implementation is available at //github.com/wangyu0627/IHGCL.
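A generic InfoNCE-style objective between two view embeddings, shown only as an illustrative stand-in for the meta-path and view contrasts mentioned above (not the IHGCL implementation):

    import numpy as np

    def info_nce(z1, z2, tau=0.2):
        # Cosine-similarity logits between the two views; positives lie on the diagonal.
        z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
        z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
        logits = (z1 @ z2.T) / tau
        logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
        log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return float(-np.mean(np.diag(log_prob)))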
According to the Consultative Committee for Space Data Systems (CCSDS) recommendation for TeleCommand (TC) synchronization and coding, the Communications Link Transmission Unit (CLTU) consists of a start sequence, followed by coded data and a tail sequence, the latter being optional depending on the employed coding scheme. As for the coding scheme, these transmissions traditionally use a modified Bose-Chaudhuri-Hocquenghem (BCH) code, to which two state-of-the-art Low-Density Parity-Check (LDPC) codes were later added. As a lightweight technique to detect the presence of the tail sequence, an approach based on decoding failure has traditionally been used, choosing a non-correctable string as the tail sequence. This works very well with the BCH code, for which bounded-distance decoders are employed. When the same approach is used with LDPC codes, the tail sequence must be designed to be non-correctable for iterative decoders based on belief propagation. Moreover, the tail sequence might be corrupted by noise, potentially turning it into a correctable pattern. It is therefore important that the tail sequence be chosen as distant as possible, according to some metric, from any legitimate codeword. In this paper we study this problem and analyze the TC rejection probability both theoretically and through simulations. This performance figure, being the rate at which the CLTU is discarded, should clearly be minimized. Our analysis considers many different choices of the system parameters (e.g., the length of the CLTU, the decoding algorithm, and the maximum number of decoding iterations).
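One ingredient of such an analysis, estimating how often a noise-corrupted tail sequence is wrongly accepted by the decoder, can be sketched with placeholder channel and decoder callables (our own simplification, not the paper's simulation setup):

    def estimate_false_acceptance(tail_bits, channel, decoder, n_trials=100_000):
        # channel(bits) returns a noisy received word; decoder(word) returns
        # (decoded_bits, success_flag). A success here means the corrupted tail
        # was mistaken for a correctable codeword.
        hits = 0
        for _ in range(n_trials):
            received = channel(tail_bits)
            _, success = decoder(received)
            hits += int(success)
        return hits / n_trials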
We present Neural Quantile Estimation (NQE), a novel Simulation-Based Inference (SBI) method based on conditional quantile regression. NQE autoregressively learns individual one-dimensional quantiles for each posterior dimension, conditioned on the data and the previous posterior dimensions. Posterior samples are obtained by interpolating the predicted quantiles using monotonic cubic Hermite splines, with specific treatment of the tail behavior and multi-modal distributions. We introduce an alternative definition of the Bayesian credible region using the local Cumulative Distribution Function (CDF), offering substantially faster evaluation than the traditional Highest Posterior Density Region (HPDR). In the case of a limited simulation budget and/or known model misspecification, a post-processing calibration step can be integrated into NQE to ensure the unbiasedness of the posterior estimation with negligible additional computational cost. We demonstrate that NQE achieves state-of-the-art performance on a variety of benchmark problems.
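A minimal sketch of turning predicted quantiles into posterior samples with a monotone (PCHIP) interpolant of the inverse CDF; the quantile values below are made-up placeholders, and the paper's tail and multi-modality treatment is omitted:

    import numpy as np
    from scipy.interpolate import PchipInterpolator

    levels = np.linspace(0.01, 0.99, 99)            # quantile levels in (0, 1)
    q_values = np.sort(np.random.randn(99))         # placeholder predicted quantiles
    inv_cdf = PchipInterpolator(levels, q_values)   # monotone interpolant of F^{-1}
    samples = inv_cdf(np.random.uniform(0.01, 0.99, size=10_000))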
In this paper, we introduce and analyze a variant of the Thompson sampling (TS) algorithm for contextual bandits. At each round, traditional TS requires samples from the current posterior distribution, which is usually intractable. To circumvent this issue, approximate inference techniques can be used to provide samples whose distribution is close to the posterior. However, current approximate techniques either yield poor estimation (Laplace approximation) or can be computationally expensive (MCMC methods, ensemble sampling, etc.). In this paper, we propose a new algorithm, Variational Inference Thompson Sampling (VITS), based on Gaussian variational inference. This scheme provides powerful posterior approximations that are easy to sample from and is computationally efficient, making it an ideal choice for TS. In addition, we show that VITS achieves a sub-linear regret bound of the same order in the dimension and number of rounds as traditional TS for linear contextual bandits. Finally, we demonstrate experimentally the effectiveness of VITS on both synthetic and real-world datasets.
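An illustrative Thompson-sampling step that draws the parameter from a Gaussian variational approximation N(mu, Sigma) of the posterior (a generic sketch, not the VITS update rules themselves):

    import numpy as np

    def ts_select_arm(contexts, mu, Sigma, rng=np.random.default_rng()):
        # contexts: (n_arms, d) feature matrix; mu, Sigma: variational mean and covariance.
        theta = rng.multivariate_normal(mu, Sigma)   # sample from the variational posterior
        rewards = contexts @ theta                   # estimated reward of each arm
        return int(np.argmax(rewards))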
In this study, we investigate the numerical stability of the covariant Baumgarte--Shapiro--Shibata--Nakamura (cBSSN) formulation on the Friedmann--Lema\^itre--Robertson--Walker spacetime. To evaluate the numerical stability, we calculate the constraint amplification factors through an eigenvalue analysis of the constraint evolution equations. We propose a modification to the time evolution equations of the cBSSN formulation that yields higher numerical stability. Furthermore, we perform numerical simulations with the modified formulation to confirm its improved stability.
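For context, constraint amplification factors are conventionally obtained by Fourier transforming the (linearized) constraint propagation equations and taking the eigenvalues of the resulting coefficient matrix; schematically, assuming the standard definition rather than this paper's specific computation,

    \partial_t \hat{C} = M(k)\,\hat{C}, \qquad \text{CAFs} \equiv \operatorname{eig} M(k),

where eigenvalues with negative (or vanishing) real parts indicate decaying (or non-growing) constraint violations.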
New differential-recurrence relations for B-spline basis functions are given. Using these relations, a recursive method for finding the Bernstein-B\'{e}zier coefficients of B-spline basis functions over a single knot span is proposed. The algorithm works for any knot sequence and has asymptotically optimal computational complexity. Numerical experiments show that the new method gives results that preserve a high number of correct digits, compared with an approach based on the well-known de Boor-Cox formula.
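For reference, the baseline approach mentioned above evaluates B-spline basis functions with the classical de Boor-Cox recurrence, which can be written directly as:

    def bspline_basis(i, p, knots, x):
        # N_{i,p}(x) via the de Boor-Cox recurrence; `knots` is a non-decreasing sequence.
        if p == 0:
            return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
        left = 0.0
        if knots[i + p] != knots[i]:
            left = (x - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, knots, x)
        right = 0.0
        if knots[i + p + 1] != knots[i + 1]:
            right = (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, knots, x)
        return left + right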