It was proved by Maksimova in 1977 that exactly eight varieties of Heyting algebras have the amalgamation property, and hence exactly eight axiomatic extensions of intuitionistic propositional logic have the deductive interpolation property. The prevalence of the deductive interpolation property for axiomatic extensions of substructural logics and the amalgamation property for varieties of pointed residuated lattices, their equivalent algebraic semantics, is far less well understood, however. Taking as our starting point a formulation of intuitionistic propositional logic as the full Lambek calculus with exchange, weakening, and contraction, we investigate the role of the exchange rule--algebraically, the commutativity law--in determining the scope of these properties. First, we show that there are continuum-many varieties of idempotent semilinear residuated lattices that have the amalgamation property and contain non-commutative members, and hence continuum-many axiomatic extensions of the corresponding logic that have the deductive interpolation property in which exchange is not derivable. We then show that, in contrast, exactly sixty varieties of commutative idempotent semilinear residuated lattices have the amalgamation property, and hence exactly sixty axiomatic extensions of the corresponding logic with exchange have the deductive interpolation property. From this latter result, it follows also that there are exactly sixty varieties of commutative idempotent semilinear residuated lattices whose first-order theories have a model completion.
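For orientation, the two properties can be stated in their standard forms (our paraphrase; not quoted from the paper): a variety $\mathcal{V}$ has the amalgamation property if every pair of embeddings with a common source can be completed to a commuting square within $\mathcal{V}$, and deductive interpolation asks for an interpolant in the shared variables.

```latex
% Standard formulations (our paraphrase, not quoted from the paper).
% Amalgamation property for a variety V: for all A, B, C in V and
% embeddings f : A -> B, g : A -> C, there exist D in V and embeddings
% h : B -> D, k : C -> D such that h \circ f = k \circ g.
% Deductive interpolation property:
\[
\Gamma \vdash \varphi \;\Longrightarrow\;
\exists\,\psi \ \text{with}\ \mathrm{var}(\psi) \subseteq
\mathrm{var}(\Gamma) \cap \mathrm{var}(\varphi)
\ \text{such that}\ \Gamma \vdash \psi \ \text{and}\ \psi \vdash \varphi.
\]
```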
Quantum counting is a key quantum algorithm that aims to determine the number of marked elements in a database. This algorithm is based on the quantum phase estimation algorithm and uses the evolution operator of Grover's algorithm because its non-trivial eigenvalues depend on the number of marked elements. Since Grover's algorithm can be viewed as a quantum walk on a complete graph, a natural way to extend quantum counting is to use the evolution operator of quantum-walk-based search on non-complete graphs instead of Grover's operator. In this paper, we explore this extension by analyzing the coined quantum walk on the complete bipartite graph with an arbitrary number of marked vertices. We show that some eigenvalues of the evolution operator depend on the number of marked vertices and, using this fact, that quantum phase estimation can be used to obtain the number of marked vertices. The time complexity of estimating the number of marked vertices in the bipartite graph with our algorithm aligns closely with that of the original quantum counting algorithm.
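The eigenvalue fact behind quantum counting is easy to verify numerically. Below is a small sketch (ours; it uses the complete-graph/Grover case, not the paper's bipartite-graph walk) showing that the non-trivial phases of Grover's operator encode the number of marked elements:

```python
# Minimal numerical sketch: the non-trivial eigenvalues of one Grover
# iteration are exp(+-2i*theta) with sin(theta) = sqrt(M/N), which is the
# quantity that quantum phase estimation extracts in quantum counting.
import numpy as np

N, M = 16, 3                      # database size, number of marked elements
marked = np.arange(M)             # which elements are marked (arbitrary choice)

oracle = np.eye(N)
oracle[marked, marked] = -1       # flip the sign of the marked basis states

s = np.full(N, 1 / np.sqrt(N))    # uniform superposition
diffusion = 2 * np.outer(s, s) - np.eye(N)

G = diffusion @ oracle            # one Grover iteration

phases = np.sort(np.angle(np.linalg.eigvals(G)))
theta = np.arcsin(np.sqrt(M / N))
print(phases)                     # contains +-2*theta among the 0 and +-pi entries
print(2 * theta)                  # matches the non-trivial phase
```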
A dominating set D in a graph G is a subset of its vertices such that every vertex of the graph that does not belong to D is adjacent to at least one vertex from D. A set of vertices of graph G is a global dominating set if it is a dominating set for both graph G and its complement. The objective is to find a global dominating set of minimum cardinality. The problem is known to be NP-hard, and prior to this work neither an exact nor an approximation algorithm existed for it. We propose two exact solution methods, one of them based on an integer linear programming (ILP) formulation, three heuristic algorithms, and a special purification procedure that further reduces the size of a global dominating set delivered by any of our heuristic algorithms. We show that the problem remains NP-hard for restricted types of graphs and specify some families of graphs for which the heuristics guarantee optimality. The second exact algorithm turned out to be about twice as fast as the ILP for graphs with more than 230 vertices and up to 1080 vertices, the largest benchmark instances that were solved optimally. The heuristics were tested on the existing 2284 benchmark problem instances with up to 14000 vertices and delivered solutions for the largest instances in less than one minute. Remarkably, for about 52% of the 1000 instances with known optimal solutions, at least one of the heuristics generated an optimal solution, while the average approximation error for the remaining instances was 1.07%.
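To make the objective concrete, here is a naive greedy sketch (our illustration only; it is not one of the paper's three heuristics, and it omits the purification step):

```python
# Naive greedy heuristic for a global dominating set: a vertex dominates
# itself and its neighbours, and the set must dominate every vertex in
# both the graph and its complement.
import networkx as nx

def greedy_global_dominating_set(G: nx.Graph) -> set:
    Gc = nx.complement(G)
    undom_g = set(G.nodes)          # vertices not yet dominated in G
    undom_c = set(G.nodes)          # vertices not yet dominated in the complement
    D = set()
    while undom_g or undom_c:
        # pick the vertex that newly dominates the most vertices overall
        def gain(v):
            cov_g = ({v} | set(G[v])) & undom_g
            cov_c = ({v} | set(Gc[v])) & undom_c
            return len(cov_g) + len(cov_c)
        v = max(G.nodes - D, key=gain)
        D.add(v)
        undom_g -= {v} | set(G[v])
        undom_c -= {v} | set(Gc[v])
    return D

G = nx.petersen_graph()
D = greedy_global_dominating_set(G)
print(len(D), D)
```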
Closure spaces, a generalisation of topological spaces, have been shown to be a convenient theoretical framework for spatial model checking. The closure operator of closure spaces and quasi-discrete closure spaces induces a notion of neighbourhood akin to that of topological spaces, which builds on open sets. For closure models and quasi-discrete closure models, we present in this paper three notions of bisimilarity that are logically characterised by corresponding modal logics with spatial modalities: (i) CM-bisimilarity for closure models (CMs) is shown to generalise Topo-bisimilarity for topological models. CM-bisimilarity corresponds to equivalence with respect to the infinitary modal logic IML, which includes the modality ${\cal N}$ for ``being near''. (ii) CMC-bisimilarity, with `CMC' standing for CM-bisimilarity with converse, refines CM-bisimilarity for quasi-discrete closure spaces, the carriers of quasi-discrete closure models. Quasi-discrete closure models come equipped with two closure operators, Direct ${\cal C}$ and Converse ${\cal C}$, stemming from the binary relation underlying closure and its converse. CMC-bisimilarity is captured by the infinitary modal logic IMLC, which includes two modalities, Direct ${\cal N}$ and Converse ${\cal N}$, corresponding to the two closure operators. (iii) CoPa-bisimilarity on quasi-discrete closure models, which is weaker than CMC-bisimilarity, is based on the notion of compatible paths. The logical counterpart of CoPa-bisimilarity is the infinitary modal logic ICRL with modalities Direct $\zeta$ and Converse $\zeta$, whose semantics relies on forward and backward paths, respectively. It is shown that CoPa-bisimilarity for quasi-discrete closure models relates to divergence-blind stuttering equivalence for Kripke structures.
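As a concrete illustration of the quasi-discrete setting (ours, not taken from the paper), the two closure operators arise from a binary relation and its converse:

```python
# In a quasi-discrete closure space, the closure of a set A under a binary
# relation R adds every point reachable from A in one R-step; the converse
# closure uses the inverse relation instead.
R = {(1, 2), (2, 3), (3, 1)}      # a toy binary relation on {1, 2, 3}

def closure(A, R):
    return set(A) | {y for (x, y) in R if x in A}

def converse_closure(A, R):
    return set(A) | {x for (x, y) in R if y in A}

A = {1}
print(closure(A, R))              # {1, 2}: "being near" in the forward sense
print(converse_closure(A, R))     # {1, 3}: "being near" in the backward sense
```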
Reshaping, a point operation that alters the characteristics of signals, has been shown to be capable of improving the compression ratio in video coding practice. Out-of-loop reshaping, which directly modifies the input video signal, was first adopted as supplemental enhancement information~(SEI) for HEVC/H.265 without the need to alter the core design of the video codec. VVC/H.266 further improves coding efficiency by adopting in-loop reshaping, which modifies the residual signal being processed in the hybrid coding loop. In this paper, we theoretically analyze the rate-distortion performance of in-loop reshaping and use experiments to verify the theoretical results. We prove that in-loop reshaping can improve coding efficiency when the entropy coder adopted in the coding pipeline is suboptimal, which is in line with the practical scenarios in which video codecs operate. We derive the PSNR gain in closed form and show that the theoretically predicted gain is consistent with that measured in experiments on standard test video sequences.
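The mechanism is easy to reproduce numerically. The following toy sketch (our illustration, with a crude uniform quantizer standing in for the suboptimal coder; it is not the paper's codec model) shows reshaping improving PSNR:

```python
# Reshape a signal whose samples occupy a narrow range so that a coarse
# uniform quantizer spends its levels where the signal actually lives,
# then compare PSNR with and without reshaping.
import numpy as np

rng = np.random.default_rng(0)
x = 0.25 + 0.1 * rng.standard_normal(10_000)   # signal in a narrow range
x = np.clip(x, 0.0, 1.0)

def quantize(v, levels=16):                    # a deliberately coarse coder
    return np.round(v * (levels - 1)) / (levels - 1)

def psnr(a, b):
    return 10 * np.log10(1.0 / np.mean((a - b) ** 2))

lo, hi = x.min(), x.max()
fwd = lambda v: (v - lo) / (hi - lo)           # forward reshaping (stretch)
inv = lambda v: v * (hi - lo) + lo             # inverse reshaping

print("plain    PSNR:", psnr(x, quantize(x)))
print("reshaped PSNR:", psnr(x, inv(quantize(fwd(x)))))
```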
Quasiperiodic systems are important space-filling ordered structures that exhibit neither decay nor translational invariance. How to solve quasiperiodic systems accurately and efficiently is a great challenge. A useful approach, the projection method (PM) [J. Comput. Phys., 256: 428, 2014], has been proposed to compute quasiperiodic systems. Various studies have demonstrated that the PM is an accurate and efficient method for solving quasiperiodic systems. However, a theoretical analysis of the PM has been lacking. In this paper, we present a rigorous convergence analysis of the PM by establishing a mathematical framework connecting quasiperiodic functions and their associated high-dimensional periodic functions. We also give a theoretical analysis of the quasiperiodic spectral method (QSM) based on this framework. The results demonstrate that the errors of both the PM and the QSM decay exponentially, and that the QSM (PM) is a generalization of the periodic Fourier spectral (pseudo-spectral) method. We then analyze the computational complexity of the PM and the QSM in calculating quasiperiodic systems; the PM can use the fast Fourier transform, while the QSM cannot. Moreover, we investigate the accuracy and efficiency of the PM, the QSM, and the periodic approximation method in solving the linear time-dependent quasiperiodic Schr\"{o}dinger equation.
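The projection idea itself fits in a few lines. The sketch below (our illustration for a simple one-dimensional quasiperiodic function; not the paper's full PM) lifts $f(x) = \cos(x) + \cos(\sqrt{2}\,x)$ to a two-dimensional periodic parent function and recovers $f$ from an ordinary FFT:

```python
# f(x) = cos(x) + cos(sqrt(2) x) is the restriction of the 2-d periodic
# F(y1, y2) = cos(y1) + cos(y2) along the line (y1, y2) = (x, sqrt(2) x),
# so its Fourier data can be computed with a standard 2-d FFT.
import numpy as np

n = 32
y = 2 * np.pi * np.arange(n) / n
Y1, Y2 = np.meshgrid(y, y, indexing="ij")
F = np.cos(Y1) + np.cos(Y2)                    # periodic parent function

Fhat = np.fft.fft2(F) / n**2                   # 2-d Fourier coefficients
k = np.fft.fftfreq(n, d=1 / n)                 # integer wave numbers

# Evaluate f at arbitrary (non-grid) points via the projection P = (1, sqrt(2)):
# each mode (k1, k2) contributes exp(i (k1 + sqrt(2) k2) x).
x = np.linspace(0, 10, 5)
f = np.real(sum(Fhat[i, j] * np.exp(1j * (k[i] + np.sqrt(2) * k[j]) * x)
                for i in range(n) for j in range(n)))
print(np.max(np.abs(f - (np.cos(x) + np.cos(np.sqrt(2) * x)))))  # ~ machine eps
```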
A fitting algorithm for conjunctive queries (CQs) produces, given a set of positively and negatively labeled data examples, a CQ that fits these examples. In general, there may be many non-equivalent fitting CQs and thus the algorithm has some freedom in producing its output. Additional desirable properties of the produced CQ are that it generalizes well to unseen examples in the sense of PAC learning and that it is most general or most specific in the set of all fitting CQs. In this research note, we show that these desiderata are incompatible when we require PAC-style generalization from a polynomial sample: we prove that any fitting algorithm that produces a most-specific fitting CQ cannot be a sample-efficient PAC learning algorithm, and the same is true for fitting algorithms that produce a most-general fitting CQ (when it exists). Our proofs rely on a polynomial construction of relativized homomorphism dualities for path-shaped structures.
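For concreteness, the fitting condition can be phrased via homomorphisms: a Boolean CQ, viewed as its canonical structure, fits labeled examples iff it maps homomorphically into every positive example and into no negative one. A brute-force sketch of this check (ours, for toy single-relation structures; not the paper's construction):

```python
# Represent a structure over one binary relation as a set of directed edges.
from itertools import product

def hom_exists(src_edges, dst_edges):
    """Is there a homomorphism from src to dst (single binary relation)?"""
    src_nodes = sorted({v for e in src_edges for v in e})
    dst_nodes = sorted({v for e in dst_edges for v in e})
    for h in product(dst_nodes, repeat=len(src_nodes)):
        m = dict(zip(src_nodes, h))
        if all((m[a], m[b]) in dst_edges for (a, b) in src_edges):
            return True
    return False

def fits(q_edges, positives, negatives):
    return (all(hom_exists(q_edges, e) for e in positives)
            and not any(hom_exists(q_edges, e) for e in negatives))

path3 = {(0, 1), (1, 2)}            # the CQ: a directed 2-edge path
pos = [{(0, 1), (1, 2), (2, 0)}]    # a 3-cycle: the path maps into it
neg = [{(0, 1)}]                    # a single edge: too short
print(fits(path3, pos, neg))        # True
```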
The field of algorithmic fairness has rapidly emerged over the past 15 years as algorithms have become ubiquitous in everyday life. Algorithmic fairness traditionally considers statistical notions of fairness that algorithms might satisfy in decisions based on noisy data. We first show that these are theoretically disconnected from welfare-based notions of fairness. We then discuss two individual welfare-based notions of fairness, envy freeness and prejudice freeness, and establish conditions under which they are equivalent to error rate balance and predictive parity, respectively. We discuss the implications of these findings in light of the recently discovered impossibility theorem in algorithmic fairness (Kleinberg, Mullainathan, & Raghavan (2016); Chouldechova (2017)).
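The two statistical notions mentioned above are straightforward to compute. A small sketch on hypothetical data (ours, for illustration only):

```python
# Error rate balance: false-positive and false-negative rates equal across
# groups. Predictive parity: positive predictive value equal across groups.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)    # protected attribute: 0 or 1
y = rng.integers(0, 2, 1000)        # true outcome
yhat = np.where(rng.random(1000) < 0.8, y, 1 - y)   # noisy prediction

for g in (0, 1):
    s = group == g
    fpr = np.mean(yhat[s & (y == 0)])       # false-positive rate
    fnr = np.mean(1 - yhat[s & (y == 1)])   # false-negative rate
    ppv = np.mean(y[s & (yhat == 1)])       # positive predictive value
    print(f"group {g}: FPR={fpr:.3f} FNR={fnr:.3f} PPV={ppv:.3f}")
```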
Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing the trust of end-users in machines. As the number of connected devices continues to grow, the Internet of Things (IoT) market needs to be trustworthy for end-users. However, the existing literature still lacks a systematic and comprehensive survey of the use of XAI for IoT. To bridge this gap, in this paper we address XAI frameworks with a focus on their characteristics and support for IoT. We illustrate the widely used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), the Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also suggest the implementation choice of XAI models over IoT systems in these applications with appropriate examples and summarize the key inferences for future work. Moreover, we present cutting-edge developments in edge XAI structures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored to the demands of future IoT use cases.
Detection and recognition of text in natural images are two main problems in the field of computer vision, with a wide variety of applications in sports video analysis, autonomous driving, and industrial automation, to name a few. They face common challenges stemming from how text is represented and from the environmental conditions that affect it. The current state-of-the-art scene text detection and/or recognition methods have exploited the witnessed advancement in deep learning architectures and reported superior accuracy on benchmark datasets when tackling multi-resolution and multi-oriented text. However, several challenges remain for text in the wild that cause existing methods to underperform, because their models cannot generalize to unseen data and because labeled data are insufficient. Thus, unlike previous surveys in this field, the objectives of this survey are as follows: first, to offer the reader not only a review of the recent advancements in scene text detection and recognition, but also the results of extensive experiments using a unified evaluation framework that assesses pre-trained models of the selected methods on challenging cases and applies the same evaluation criteria to all of these techniques. Second, to identify several existing challenges for detecting or recognizing text in the wild, namely in-plane rotation, multi-oriented and multi-resolution text, perspective distortion, illumination reflection, partial occlusion, complex fonts, and special characters. Finally, the paper also presents insights into potential research directions in this field to address some of the mentioned challenges that scene text detection and recognition techniques still face.
While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions that fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by huge margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
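A minimal version of such a "perceptual loss" (our sketch in the spirit the abstract describes, not the paper's exact metric) compares unit-normalized deep features of an ImageNet-trained VGG:

```python
# Compare two images by the L2 distance between channel-normalized
# activations taken from one mid-level layer of VGG-16.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

def deep_features(img, layer=15):
    """Run img (1x3xHxW, values in [0,1]) up to `layer` (ReLU after conv3_3)."""
    x = TF.normalize(img, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    with torch.no_grad():
        for i, m in enumerate(vgg):
            x = m(x)
            if i == layer:
                return x / (x.norm(dim=1, keepdim=True) + 1e-10)  # unit-norm channels

def perceptual_distance(a, b):
    return (deep_features(a) - deep_features(b)).pow(2).mean().item()

a = torch.rand(1, 3, 64, 64)
b = (a + 0.05 * torch.randn_like(a)).clamp(0, 1)   # a slightly perturbed copy
print(perceptual_distance(a, a.clone()))           # 0.0
print(perceptual_distance(a, b))                   # small but non-zero
```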