Motivated by a class of nonlinear imaging inverse problems, for instance multispectral computed tomography (MSCT), this paper studies the convergence theory of the nonlinear Kaczmarz method (NKM) for solving systems of nonlinear equations with component-wise convex mappings, namely systems in which the function corresponding to each equation is convex. Although the tangential cone condition (TCC) is often used to prove the convergence of NKM, it may be difficult or even impossible to verify or satisfy this condition for such nonlinear systems. We propose a novel condition, named the relative gradient discrepancy condition (RGDC), and use it to prove the convergence, and even the convergence rate, of NKM under several general index selection strategies, including the cyclic strategy and the maximum residual strategy. In particular, we investigate the application of NKM to solving nonlinear systems in MSCT image reconstruction. We prove that the nonlinear mapping of interest fulfills the proposed RGDC rather than the component-wise local TCC, and establish the global convergence of NKM based on the previously obtained results. Numerical experiments further illustrate the numerical convergence of NKM for MSCT image reconstruction.
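To make the iteration concrete, the following is a minimal Python sketch of the classical NKM update with the two index selection strategies mentioned above (cyclic and maximum residual); the toy system `f` and its Jacobian `grad` are hypothetical examples with convex components, not the MSCT mapping studied in the paper.

```python
import numpy as np

def nonlinear_kaczmarz(f, grad, x0, n_iters=2000, strategy="max_residual"):
    """Classical NKM for F(x) = 0: f(x) returns the residual vector,
    grad(x) returns the Jacobian whose i-th row is the gradient of f_i."""
    x = np.asarray(x0, dtype=float).copy()
    m = len(f(x))
    for k in range(n_iters):
        r = f(x)
        if strategy == "max_residual":
            i = int(np.argmax(np.abs(r)))  # maximum residual selection
        else:
            i = k % m                      # cyclic selection
        g = grad(x)[i]
        denom = g @ g
        if denom > 0:
            x = x - (r[i] / denom) * g     # Kaczmarz-type projection step
    return x

# Toy system with convex components: f1 = x0^2 + x1 - 1, f2 = x1^2 - x0.
f = lambda x: np.array([x[0] ** 2 + x[1] - 1.0, x[1] ** 2 - x[0]])
grad = lambda x: np.array([[2 * x[0], 1.0], [-1.0, 2 * x[1]]])
print(nonlinear_kaczmarz(f, grad, [0.5, 0.5]))  # approaches a root of the system
```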
This paper presents a novel method for the reconstruction of high-resolution temporal images in dynamic tomographic imaging, particularly for discrete objects with smooth boundaries that vary over time. Addressing the challenge of limited measurements per time point, we propose a technique that synergistically incorporates spatial and temporal information of the dynamic objects. This is achieved through the application of the level-set method for image segmentation and the representation of motion via a sinusoidal basis. The result is a computationally efficient and easily optimizable variational framework that enables the reconstruction of high-quality 2D or 3D image sequences with a single projection per frame. Compared to current methods, our proposed approach demonstrates superior performance on both synthetic and pseudo-dynamic real X-ray tomography datasets. The implications of this research extend to improved visualization and analysis of dynamic processes in tomographic imaging, finding potential applications in diverse scientific and industrial domains.
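As a rough illustration of the two ingredients, here is a hypothetical Python sketch in which a level-set function describes a smooth-boundary object and its motion is parameterized by a truncated sinusoidal basis in time; the disc geometry, the `heaviside` smoothing, and all parameter names are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def heaviside(phi, eps=0.1):
    """Smoothed Heaviside: maps a level-set function to an image."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def frame(grid_x, grid_y, t, coeffs, radius=0.3):
    """Hypothetical frame model: a disc whose center follows a truncated
    sinusoidal basis in time -- coeffs[d][k] weights sin((k+1)*t)."""
    cx = sum(a * np.sin((k + 1) * t) for k, a in enumerate(coeffs[0]))
    cy = sum(a * np.sin((k + 1) * t) for k, a in enumerate(coeffs[1]))
    phi = radius - np.sqrt((grid_x - cx) ** 2 + (grid_y - cy) ** 2)  # level set
    return heaviside(phi)  # near-binary image of the discrete object

xs, ys = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
img = frame(xs, ys, t=0.5, coeffs=([0.2, 0.05], [0.1, 0.0]))
```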
In the high performance computing (HPC) domain, performance variability is a major scalability issue for parallel computing applications with heavy synchronization and communication. In this paper, we present an experimental performance analysis of OpenMP benchmarks, focusing on the variation of execution time, and identify potential factors causing this variability. Two representative OpenMP benchmarks from the EPCC OpenMP micro-benchmark suite and BabelStream are run across two x86 multicore platforms featuring up to 256 threads. From the obtained results, we characterize and explain the execution-time variability as a function of thread pinning, simultaneous multithreading (SMT), and core frequency variation. Our work offers an understanding of the resulting performance distributions and directions for future work on mitigating variability in OpenMP-based applications.
This paper explores the integration of deep learning techniques for joint sensing and communications, with an extension to semantic communications. The integrated system comprises a transmitter and receiver operating over a wireless channel, subject to noise and fading effects. The transmitter employs a deep neural network, namely an encoder, for the joint operations of source coding, channel coding, and modulation, while the receiver utilizes another deep neural network, namely a decoder, for the joint operations of demodulation, channel decoding, and source decoding to reconstruct the data samples. The transmitted signal serves a dual purpose, supporting communication with the receiver and enabling sensing. When a target is present, the reflected signal is received, and another deep neural network decoder is utilized for sensing. This decoder is responsible for detecting the target's presence and determining its range. All these deep neural networks, including one encoder and two decoders, undergo joint training through multi-task learning, considering data and channel characteristics. The paper then extends this framework to semantic communications by introducing an additional deep neural network, a second decoder at the receiver, operating as a task classifier. This decoder evaluates the fidelity of label classification for received signals, enhancing the integration of semantics within the communication process. The study presents results using CIFAR-10 as the input data and accounting for channel effects such as Additive White Gaussian Noise (AWGN) and Rayleigh fading. The results underscore the effectiveness of multi-task deep learning in achieving high-fidelity joint sensing and semantic communications.
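The following PyTorch sketch illustrates the overall pattern under stated assumptions: a neural encoder maps an image batch to power-normalized channel symbols, an AWGN channel perturbs them, and two decoder heads (data reconstruction and semantic task classification) are trained with a multi-task loss. The layer sizes, SNR, and random inputs are placeholders, and the sensing decoder, which would consume the reflected signal analogously, is omitted for brevity.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Joint source-channel encoder: image -> channel symbols."""
    def __init__(self, n_symbols=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
            nn.Linear(256, 2 * n_symbols))  # real/imag parts of the symbols
    def forward(self, x):
        z = self.net(x)
        # normalize each codeword to unit average power per symbol
        return z / z.norm(dim=1, keepdim=True) * z.shape[1] ** 0.5

def awgn(z, snr_db=10.0):
    """Additive white Gaussian noise channel at a given SNR."""
    sigma = 10 ** (-snr_db / 20)
    return z + sigma * torch.randn_like(z)

encoder = Encoder()
reconstructor = nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                              nn.Linear(256, 3 * 32 * 32))  # data decoder
classifier = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                           nn.Linear(64, 10))               # semantic task decoder

x = torch.rand(8, 3, 32, 32)                # stand-in for a CIFAR-10 batch
y = torch.randint(0, 10, (8,))              # stand-in labels
z = awgn(encoder(x))                        # transmit over the noisy channel
loss = nn.functional.mse_loss(reconstructor(z), x.flatten(1)) \
     + nn.functional.cross_entropy(classifier(z), y)  # multi-task objective
loss.backward()
```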
We introduce Residue Hyperdimensional Computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements. The resulting framework, when combined with an efficient method for factorizing high-dimensional vectors, can represent and operate on numerical values over a large dynamic range using vastly fewer resources than previous methods, and it exhibits impressive robustness to noise. We demonstrate the potential for this framework to solve computationally difficult problems in visual perception and combinatorial optimization, showing improvement over baseline methods. More broadly, the framework provides a possible account for the computational operations of grid cells in the brain, and it suggests new machine learning architectures for representing and manipulating numerical data.
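A minimal sketch of the underlying idea, assuming the phasor-based encoding commonly used in hyperdimensional computing: a residue modulo m is encoded as element-wise powers of a random phasor vector whose phases are multiples of 2*pi/m, so that the Hadamard product of two codes implements addition of residues.

```python
import numpy as np

D, m = 1000, 7          # vector dimension, modulus
rng = np.random.default_rng(0)
# Random phasor base vector with phases that are multiples of 2*pi/m,
# so element-wise exponentiation is periodic with period m.
base = np.exp(2j * np.pi * rng.integers(0, m, D) / m)

def encode(n):
    return base ** n            # element-wise exponentiation encodes n mod m

def sim(u, v):
    return np.real(np.vdot(u, v)) / len(u)   # normalized similarity

a, b = encode(3), encode(5)
s = a * b                       # Hadamard product = addition of residues
print(sim(s, encode((3 + 5) % m)))  # ~1.0: 3 + 5 = 1 (mod 7)
print(sim(s, encode(2)))            # ~0.0: quasi-orthogonal to other residues
```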
In this paper, Bayesian optimisation is used to simultaneously search for the optimal values of the shape parameter and the subdomain radius in the radial basis function partition of unity interpolation problem. Bayesian optimisation is a probabilistic iterative approach that models the error function with a step-by-step self-updating Gaussian process, while the partition of unity is a mesh-free scheme that reduces cost-intensive computations when the number of scattered data points is very large, since the entire domain is decomposed into several smaller subdomains of variable radius. Numerical experiments on scattered data interpolation problems show that the combination of these two tools sharply reduces the search time with respect to other techniques such as leave-one-out cross-validation.
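For orientation, here is a minimal sketch using scikit-optimize's gp_minimize; for brevity it tunes only the shape parameter of a single global Gaussian RBF interpolant on hypothetical data, whereas the paper's method optimises both the shape parameter and the subdomain radius within the partition of unity scheme.

```python
import numpy as np
from skopt import gp_minimize  # scikit-optimize's GP-based Bayesian optimisation

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (80, 2))                       # scattered data sites
f = lambda P: np.sin(2 * np.pi * P[:, 0]) * P[:, 1]  # hypothetical test function
Xv = rng.uniform(0, 1, (40, 2))                      # validation sites

def rbf_error(params):
    eps, = params
    # Gaussian RBF interpolation (global, for brevity; the PU method would
    # repeat this on overlapping subdomains of a variable radius).
    d = np.linalg.norm(X[:, None] - X[None], axis=2)
    w = np.linalg.solve(np.exp(-(eps * d) ** 2), f(X))
    dv = np.linalg.norm(Xv[:, None] - X[None], axis=2)
    return float(np.max(np.abs(np.exp(-(eps * dv) ** 2) @ w - f(Xv))))

res = gp_minimize(rbf_error, [(1.0, 20.0)], n_calls=25, random_state=0)
print("optimal shape parameter:", res.x[0], "validation error:", res.fun)
```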
The author previously introduced models of linear logic known as "Interaction Graphs", which generalise Girard's various geometry of interaction constructions. In this work, we establish how these models essentially rely on a deep connection between zeta functions and the execution of programs, expressed as a cocycle. This is first shown in the simple case of graphs, before being lifted to dynamical systems. Focussing on probabilistic models, we then explain how the notion of graphings used in Interaction Graphs captures a natural class of sub-Markov processes. We then extend the realisability constructions and the notion of zeta function to provide a realisability model of second-order linear logic over the set of all (discrete-time) sub-Markov processes.
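For orientation, the zeta function of a directed graph with adjacency matrix $A_G$ admits, in the classical Bowen-Lanford style, the standard prime-cycle product and determinant expressions below; this is the textbook definition that the connection between zeta functions and execution builds on, not the paper's refined construction.

```latex
\zeta_G(z)
  \;=\; \exp\!\Big(\sum_{n \ge 1} \frac{\operatorname{tr}(A_G^{\,n})}{n}\, z^n\Big)
  \;=\; \prod_{\gamma\ \mathrm{prime\ cycle}} \big(1 - z^{\ell(\gamma)}\big)^{-1}
  \;=\; \frac{1}{\det(\mathrm{Id} - z\, A_G)}
```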
Mathematical reasoning is a fundamental aspect of human intelligence and is applicable in various fields, including science, engineering, finance, and everyday life. The development of artificial intelligence (AI) systems capable of solving math problems and proving theorems has garnered significant interest in the fields of machine learning and natural language processing. On the one hand, mathematics serves as a testbed for aspects of reasoning that are challenging for powerful deep learning models, driving new algorithmic and modeling advances. On the other hand, recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning. In this survey paper, we review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade. We also evaluate existing benchmarks and methods, and discuss future research directions in this domain.
Aiming at expanding the coverage of few-shot relations in knowledge graphs (KGs), few-shot knowledge graph completion (FKGC) has recently gained increasing research interest. Some existing models employ a few-shot relation's multi-hop neighbor information to enhance its semantic representation. However, noisy neighbor information may be amplified when the neighborhood is excessively sparse and no informative neighbor is available to represent the few-shot relation. Moreover, modeling and inferring complex relations of the one-to-many (1-N), many-to-one (N-1), and many-to-many (N-N) types with previous knowledge graph completion approaches requires high model complexity and a large number of training instances; inferring such complex relations in the few-shot scenario is therefore difficult for FKGC models. In this paper, we propose a few-shot relational learning model with a global-local framework to address the above issues. At the global stage, a novel gated and attentive neighbor aggregator is built for accurately integrating the semantics of a few-shot relation's neighborhood, which helps filter out noisy neighbors even if a KG contains extremely sparse neighborhoods. At the local stage, a meta-learning-based TransH (MTransH) method is designed to model complex relations and train our model in a few-shot learning fashion. Extensive experiments show that our model outperforms state-of-the-art FKGC approaches on the frequently used benchmark datasets NELL-One and Wiki-One. Compared with the strong baseline model MetaR, our model achieves 5-shot FKGC performance improvements of 8.0% on NELL-One and 2.8% on Wiki-One in terms of Hits@10.
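The following PyTorch sketch conveys the flavor of a gated and attentive neighbor aggregator under illustrative assumptions: attention weights score relation-entity neighbor pairs, and a learned gate decides how much of the aggregated neighborhood to admit, falling back to the entity embedding itself when the neighborhood is uninformative. The module and all dimensions are hypothetical, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GatedNeighborAggregator(nn.Module):
    """Hypothetical gated, attentive neighbor aggregator: attention scores
    each (relation, entity) neighbor pair, and a sigmoid gate controls how
    much neighborhood signal is admitted -- useful for sparse or noisy KGs."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Linear(2 * dim, 1)
        self.gate = nn.Linear(dim, 1)

    def forward(self, entity, neigh_rel, neigh_ent):
        # entity: (dim,); neigh_rel, neigh_ent: (n_neighbors, dim)
        pairs = torch.cat([neigh_rel, neigh_ent], dim=-1)
        alpha = torch.softmax(self.attn(pairs).squeeze(-1), dim=0)  # attention
        context = (alpha.unsqueeze(-1) * neigh_ent).sum(dim=0)      # weighted sum
        g = torch.sigmoid(self.gate(context))                       # gate in (0, 1)
        return g * context + (1 - g) * entity  # fall back to the entity if noisy

agg = GatedNeighborAggregator(dim=32)
out = agg(torch.randn(32), torch.randn(5, 32), torch.randn(5, 32))
```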
We address the task of automatically scoring candidates' competency from textual features, namely the automatic speech recognition (ASR) transcriptions of asynchronous video job interviews (AVIs). The key challenge is how to construct the dependency relation between questions and answers and to conduct semantic-level interaction for each question-answer (QA) pair. Most recent studies in AVI focus on how to better represent questions and answers, but ignore the dependency information and the interaction between them, which is critical for QA evaluation. In this work, we propose a Hierarchical Reasoning Graph Neural Network (HRGNN) for the automatic assessment of question-answer pairs. Specifically, we construct a sentence-level relational graph neural network to capture the dependency information of sentences within and between the question and the answer. Based on these graphs, we employ a semantic-level reasoning graph attention network to model the interaction states of the current QA session. Finally, we propose a gated recurrent unit encoder to represent the temporal sequence of question-answer pairs for the final prediction. Empirical results on CHNAT (a real-world dataset) validate that our proposed model significantly outperforms text-matching-based benchmark models. Ablation studies and experiments with 10 random seeds also show the effectiveness and stability of our model.
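A rough PyTorch sketch of the hierarchical pattern under illustrative assumptions: a sentence-level attention layer aggregates sentence embeddings along dependency edges into a QA-pair representation, and a GRU encodes the sequence of QA pairs into a final competency score. All sizes, the adjacency mask, and the random inputs are placeholders rather than the paper's exact HRGNN.

```python
import torch
import torch.nn as nn

class SentenceGraphLayer(nn.Module):
    """Minimal relational-graph-style layer over sentence embeddings:
    each sentence attends to its neighbors given an adjacency mask."""
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))

    def forward(self, h, adj):
        scores = self.q(h) @ self.k(h).T / h.shape[-1] ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))
        return torch.softmax(scores, dim=-1) @ self.v(h)

dim, n_sent = 64, 6                       # hypothetical sizes
h = torch.randn(n_sent, dim)              # sentence embeddings of one QA pair
adj = torch.ones(n_sent, n_sent)          # dependency edges (here: fully connected)
qa_repr = SentenceGraphLayer(dim)(h, adj).mean(0, keepdim=True)

gru = nn.GRU(dim, dim, batch_first=True)  # temporal encoder over QA pairs
score_head = nn.Linear(dim, 1)
seq = qa_repr.unsqueeze(0).repeat(1, 4, 1)  # stand-in for 4 QA sessions
_, last = gru(seq)
competency = score_head(last.squeeze(0))    # final competency score
```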
Due to their significance and value for human-computer interaction and natural language processing, task-oriented dialog systems are attracting more and more attention in both the academic and industrial communities. In this paper, we survey recent advances and challenges in an issue-specific manner. We discuss three critical topics for task-oriented dialog systems: (1) improving data efficiency to facilitate dialog system modeling in low-resource settings, (2) modeling multi-turn dynamics for dialog policy learning to achieve better task-completion performance, and (3) integrating domain ontology knowledge into the dialog model in both pipeline and end-to-end models. We also review recent progress in dialog evaluation and some widely used corpora. We believe that this survey can shed light on future research in task-oriented dialog systems.