We study extensions of Sem\"enov arithmetic, the first-order theory of the structure $(\mathbb{N}, +, 2^x)$. It is well-known that this theory becomes undecidable when extended with regular predicates over tuples of number strings, such as the B\"uchi $V_2$-predicate. We therefore restrict ourselves to the existential theory of Sem\"enov arithmetic and show that this theory is decidable in EXPSPACE when extended with arbitrary regular predicates over tuples of number strings. Our approach relies on a reduction to the language emptiness problem for a restricted class of affine vector addition systems with states, which we show to be decidable in EXPSPACE. As an application of our results, we settle an open problem from the literature and show decidability of a class of string constraints involving length constraints.
Methods: In this study, a benchmark \emph{Abdominal Adipose Tissue CT Image Dataset} (AATTCT-IDS) containing 300 subjects is prepared and published. AATTCT-IDS makes public 13,732 raw CT slices, and the researchers individually annotate the subcutaneous and visceral adipose tissue regions of 3,213 of those slices that have the same slice distance, in order to validate denoising methods, train semantic segmentation models, and study radiomics. For each of these tasks, this paper compares and analyzes the performance of various methods on AATTCT-IDS by combining visualization results with evaluation data, thereby verifying the research potential of this dataset in the three types of tasks above. Results: In the comparative study of image denoising, algorithms that use a smoothing strategy suppress mixed noise at the expense of image details and obtain better evaluation scores. Methods such as BM3D preserve the original image structure better, although their evaluation scores are slightly lower. The results show significant differences among the methods. In the comparative study of semantic segmentation of abdominal adipose tissue, the segmentation results produced by each model show different structural characteristics. Among them, BiSeNet obtains segmentation results only slightly inferior to U-Net with the shortest training time and effectively separates small, isolated regions of adipose tissue. In addition, the radiomics study based on AATTCT-IDS reveals three adipose distributions in the subject population. Conclusion: AATTCT-IDS contains the ground truth of adipose tissue regions in abdominal CT slices. This open-source dataset can attract researchers to explore the multi-dimensional characteristics of abdominal adipose tissue and thus help physicians and patients in clinical practice. AATTCT-IDS is freely published for non-commercial purposes at: \url{//figshare.com/articles/dataset/AATTCT-IDS/23807256}.
We give a quantum approximation scheme (i.e., a $(1 + \varepsilon)$-approximation for every $\varepsilon > 0$) for the classical $k$-means clustering problem in the QRAM model with a running time that has only polylogarithmic dependence on the number of data points. More specifically, given a dataset $V$ with $N$ points in $\mathbb{R}^d$ stored in a QRAM data structure, our quantum algorithm runs in time $\tilde{O} \left( 2^{\tilde{O}(\frac{k}{\varepsilon})} \eta^2 d\right)$ and with high probability outputs a set $C$ of $k$ centers such that $cost(V, C) \leq (1+\varepsilon) \cdot cost(V, C_{OPT})$. Here $C_{OPT}$ denotes the optimal set of $k$ centers, $cost(\cdot)$ denotes the standard $k$-means cost function (i.e., the sum of the squared distances of points to the closest center), and $\eta$ is the aspect ratio (i.e., the ratio of the maximum distance to the minimum distance). This is the first quantum algorithm with a polylogarithmic running time that gives a provable approximation guarantee of $(1+\varepsilon)$ for the $k$-means problem. Also, unlike previous works on unsupervised learning, our quantum algorithm does not require quantum linear algebra subroutines and has a running time independent of parameters (e.g., condition number) that appear in such procedures.
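For reference, the cost function used above can be stated concretely. The following is a minimal classical sketch in plain NumPy (not the quantum algorithm itself); the names \texttt{V} and \texttt{C} match the abstract, and everything else is illustrative.
\begin{verbatim}
import numpy as np

def kmeans_cost(V, C):
    """Sum of squared distances from each point in V to its closest center in C.

    V: (N, d) array of data points; C: (k, d) array of centers.
    """
    # Pairwise squared distances between points and centers, shape (N, k).
    d2 = ((V[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    # Each point contributes the squared distance to its nearest center.
    return d2.min(axis=1).sum()
\end{verbatim}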
When solving noisy linear systems $Ax = b + c$, the theoretical and empirical performance of stochastic iterative methods, such as the Randomized Kaczmarz algorithm, depends on the noise level. However, if a small number of measurements are highly corrupted, one can instead use quantile-based methods to guarantee convergence to the solution $x$ of the system, despite the presence of noise. Such methods require the computation of the entire residual vector, which may not be desirable or even feasible in some cases. In this work, we analyze the sub-sampled quantile Randomized Kaczmarz (sQRK) algorithm for solving large-scale linear systems, which utilizes a sub-sampled residual to approximate the quantile threshold. We prove that this method converges to the unique solution of the linear system and provide numerical experiments that support our theoretical findings. We additionally remark on the case of extremely small sample sizes and demonstrate the importance of the interplay between the choice of quantile and the subset size.
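To make the idea concrete, here is a minimal sketch of one plausible sub-sampled quantile Kaczmarz iteration; the quantile level \texttt{q}, subset size \texttt{s}, and sampling scheme are illustrative assumptions, not necessarily the exact algorithm analyzed in the paper.
\begin{verbatim}
import numpy as np

def sqrk_step(A, b, x, q=0.7, s=50, rng=np.random.default_rng()):
    """One sub-sampled quantile Randomized Kaczmarz update (illustrative sketch).

    A: (m, n) matrix, b: (m,) right-hand side, x: (n,) current iterate.
    """
    m = A.shape[0]
    # Estimate the quantile threshold from a random subset of s residual entries.
    idx = rng.choice(m, size=min(s, m), replace=False)
    threshold = np.quantile(np.abs(A[idx] @ x - b[idx]), q)
    # Sample a row; skip the update if its residual looks like a corruption.
    i = rng.integers(m)
    r_i = A[i] @ x - b[i]
    if abs(r_i) <= threshold:
        x = x - (r_i / np.dot(A[i], A[i])) * A[i]
    return x
\end{verbatim}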
We consider the problem of estimating a nested structure of two expectations taking the form $U_0 = E[\max\{U_1(Y), \pi(Y)\}]$, where $U_1(Y) = E[X\ |\ Y]$. Terms of this form arise in financial risk estimation and option pricing. When $U_1(Y)$ requires approximation, but exact samples of $X$ and $Y$ are available, an antithetic multilevel Monte Carlo (MLMC) approach has been well studied in the literature. Under general conditions, the antithetic MLMC estimator obtains a root mean squared error $\varepsilon$ with order $\varepsilon^{-2}$ cost. If, additionally, $X$ and $Y$ require approximate sampling, careful balancing of the various aspects of approximation is required to avoid a significant computational burden. Under strong convergence criteria on approximations to $X$ and $Y$, randomised multilevel Monte Carlo techniques can be used to construct unbiased Monte Carlo estimates of $U_1$, which can be paired with an antithetic MLMC estimate of $U_0$ to recover order $\varepsilon^{-2}$ computational cost. In this work, we instead consider biased multilevel approximations of $U_1(Y)$, which require less strict assumptions on the approximate samples of $X$. Extensions of the method consider approximate and antithetic sampling of $Y$. Our analysis shows that the resulting estimator has order $\varepsilon^{-2}$ asymptotic cost under the conditions required by randomised MLMC, and order $\varepsilon^{-2}|\log\varepsilon|^3$ cost under more general assumptions.
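For orientation, the target quantity $U_0$ can be written as a plain (non-multilevel) nested Monte Carlo estimator. The sketch below only illustrates the nested structure being estimated, not the antithetic or randomised MLMC constructions; the callables \texttt{sample\_y}, \texttt{sample\_x\_given\_y}, and \texttt{pi} are assumed to be supplied by the user.
\begin{verbatim}
import numpy as np

def nested_estimate(sample_y, sample_x_given_y, pi, n_outer=1000, n_inner=100):
    """Plain nested Monte Carlo estimate of U_0 = E[max{E[X|Y], pi(Y)}]."""
    total = 0.0
    for _ in range(n_outer):
        y = sample_y()
        # Inner average approximates U_1(y) = E[X | Y = y].
        u1 = np.mean([sample_x_given_y(y) for _ in range(n_inner)])
        total += max(u1, pi(y))
    return total / n_outer
\end{verbatim}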
Equipping the rototranslation group $SE(2)$ with a sub-Riemannian structure inspired by the visual cortex V1, we propose algorithms for image inpainting and enhancement based on hypoelliptic diffusion. We innovate on previous implementations of the methods by Citti, Sarti and Boscain et al. by proposing an alternative that prevents fading and is capable of producing sharper results, in a procedure that we call WaxOn-WaxOff. We also exploit the sub-Riemannian structure to define a completely new unsharp filter using $SE(2)$, analogous to the classical unsharp filter for 2D image processing, with applications to image enhancement. We demonstrate our method on blood vessel enhancement in retinal scans.
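For comparison, the classical 2D unsharp filter that the abstract refers to as an analogue can be sketched as follows (a minimal sketch; the blur width \texttt{sigma} and strength \texttt{amount} are illustrative parameters, and this is not the $SE(2)$ construction).
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=1.0):
    """Classical 2D unsharp masking: boost the detail removed by a Gaussian blur."""
    img = image.astype(float)
    blurred = gaussian_filter(img, sigma=sigma)
    detail = img - blurred            # high-frequency content suppressed by the blur
    return img + amount * detail      # add the detail back, sharpening edges
\end{verbatim}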
The goal of Causal Discovery is to find automated search methods for learning causal structures from observational data. In some cases, all variables of the causal mechanism of interest are measured, and the task is to predict the effects one measured variable has on another. In contrast, sometimes the variables of primary interest are not directly observable but are instead inferred from their manifestations in the data. These are referred to as latent variables. One commonly known example is the psychological construct of intelligence, which cannot be directly measured, so researchers try to assess it through various indicators such as IQ tests. In this case, causal discovery algorithms can uncover underlying patterns and structures to reveal the causal connections between the latent variables, and between the latent and observed variables. This thesis focuses on two questions in causal discovery: providing an alternative definition of k-Triangle Faithfulness that (i) is weaker than strong faithfulness when applied to the Gaussian family of distributions, (ii) can be applied to non-Gaussian families of distributions, and (iii) under the assumption that the modified version of Strong Faithfulness holds, can be used to show the uniform consistency of a modified causal discovery algorithm; and relaxing the sufficiency assumption to learn causal structures with latent variables. Given the importance of inferring cause-and-effect relationships for understanding and forecasting complex systems, the work in this thesis of relaxing various simplifying assumptions is expected to make causal discovery methods applicable to a wider range of causal mechanisms and statistical phenomena.
In Linear Logic ($\mathsf{LL}$), the exponential modality $!$ brings forth a distinction between non-linear proofs and linear proofs, where linear means using an argument exactly once. Differential Linear Logic ($\mathsf{DiLL}$) is an extension of Linear Logic which includes additional rules for $!$ that encode differentiation and the ability to linearize proofs. On the other hand, Graded Linear Logic ($\mathsf{GLL}$) is a variation of Linear Logic in which $!$ is indexed over a semiring $R$. This $R$-grading allows for non-linear proofs of degree $r \in R$, such that the linear proofs are those of degree $1 \in R$. There has been recent interest in combining these two variations of $\mathsf{LL}$ and developing Graded Differential Linear Logic ($\mathsf{GDiLL}$). In this paper we present a sequent calculus for $\mathsf{GDiLL}$ and introduce its categorical semantics, which we call graded differential categories, using both coderelictions and deriving transformations. We prove that symmetric powers always give graded differential categories, and provide other examples of graded differential categories. We also discuss graded versions of (monoidal) coalgebra modalities, additive bialgebra modalities, and the Seely isomorphisms, as well as their implementations in the sequent calculus of $\mathsf{GDiLL}$.
Leroux has proved that unreachability in Petri nets can be witnessed by a Presburger separator, i.e., if a marking $\vec{m}_\text{src}$ cannot reach a marking $\vec{m}_\text{tgt}$, then there is a formula $\varphi$ of Presburger arithmetic such that: $\varphi(\vec{m}_\text{src})$ holds; $\varphi$ is forward invariant, i.e., $\varphi(\vec{m})$ and $\vec{m} \rightarrow \vec{m}'$ imply $\varphi(\vec{m}')$; and $\neg \varphi(\vec{m}_\text{tgt})$ holds. While these separators could be used as explanations and as formal certificates of unreachability, this has not yet been the case due to their worst-case size, which is at least Ackermannian, and the complexity of checking that a formula is a separator, which is at least exponential (in the formula size). We show that, in continuous Petri nets, these two problems can be overcome. We introduce locally closed separators, and prove that: (a) unreachability can be witnessed by a locally closed separator computable in polynomial time; (b) checking whether a formula is a locally closed separator is in NC (so, simpler than unreachability, which is P-complete). We further consider the more general problem of (existential) set-to-set reachability, where two sets of markings are given as convex polytopes. We show that, while our approach does not extend directly, we can efficiently certify unreachability via an altered Petri net.
This paper presents an improved loss function for neural source code summarization. Code summarization is the task of writing natural language descriptions of source code. Neural code summarization refers to automated techniques for generating these descriptions using neural networks. Almost all current approaches involve neural networks either as standalone models or as part of pretrained large language models, e.g., GPT, Codex, LLaMA. Yet almost all also use a categorical cross-entropy (CCE) loss function for network optimization. Two problems with CCE are that 1) it computes loss over each word prediction one at a time, rather than evaluating a whole sentence, and 2) it requires a perfect prediction, leaving no room for partial credit for synonyms. We propose and evaluate a loss function to alleviate these problems. In essence, we propose to use a semantic similarity metric to calculate loss over the whole output sentence prediction per training batch, rather than just the loss for each word. We also propose to combine our loss with traditional CCE for each word, which streamlines the training process compared to baselines. We evaluate our approach over several baselines and report an improvement in the vast majority of conditions.
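As a rough illustration of combining a sentence-level similarity term with per-word CCE, consider the sketch below. The embedding-based cosine similarity here is a stand-in assumption, not the specific semantic similarity metric evaluated in the paper, and the weight \texttt{alpha} is illustrative.
\begin{verbatim}
import torch
import torch.nn.functional as F

def combined_loss(logits, target_ids, embedding, alpha=0.5):
    """Per-word cross-entropy plus a sentence-level similarity penalty (toy sketch).

    logits: (T, vocab) predicted scores, target_ids: (T,) reference token ids,
    embedding: (vocab, dim) token embedding matrix shared with the model.
    """
    # Standard categorical cross-entropy, computed word by word.
    cce = F.cross_entropy(logits, target_ids)
    # Crude sentence embeddings: probability-weighted vs. reference token embeddings.
    pred_sent = (logits.softmax(dim=-1) @ embedding).mean(dim=0)
    ref_sent = embedding[target_ids].mean(dim=0)
    similarity = F.cosine_similarity(pred_sent, ref_sent, dim=0)
    # Penalize low sentence-level similarity alongside the word-level loss.
    return cce + alpha * (1.0 - similarity)
\end{verbatim}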
Graph Neural Networks (GNNs) are an emerging approach for learning on non-Euclidean data. Recently, there has been increased interest in designing GNNs that scale to large graphs. Most existing methods use "graph sampling" or "layer-wise sampling" techniques to reduce training time. However, these methods still suffer from degrading performance and scalability problems when applied to graphs with billions of edges. This paper presents GBP, a scalable GNN that utilizes a localized bidirectional propagation process from both the feature vectors and the training/testing nodes. Theoretical analysis shows that GBP is the first method that achieves sub-linear time complexity for both the precomputation and the training phases. An extensive empirical study demonstrates that GBP achieves state-of-the-art performance with significantly less training/testing time. Most notably, GBP can deliver superior performance on a graph with over 60 million nodes and 1.8 billion edges in less than half an hour on a single machine.
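To give a flavour of what decoupled feature propagation over a sparse graph looks like, here is a generic personalized-PageRank-style sketch; this is only an illustration of propagation from the feature side, not GBP's actual bidirectional algorithm, and the normalization, number of hops, and teleport weight \texttt{alpha} are assumptions.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def propagate_features(adj, X, hops=3, alpha=0.1):
    """Truncated personalized-PageRank feature propagation (illustrative only).

    adj: (n, n) scipy sparse adjacency matrix, X: (n, d) node feature matrix.
    """
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1)))
    P = d_inv_sqrt @ adj @ d_inv_sqrt          # symmetrically normalized adjacency
    H, Z = X.copy(), alpha * X                 # hop-0 term
    for _ in range(hops):
        H = (1 - alpha) * (P @ H)              # push features one more hop
        Z = Z + alpha * H                      # accumulate the weighted hop-l term
    return Z
\end{verbatim}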