To realize the benefits and mitigate the risks of AI, it is necessary to develop a governance framework that conforms to ethics and fundamental human values. Although several organizations have issued guidelines and ethical frameworks for trustworthy AI, without a mediating governance structure these ethical principles will not translate into practice. In this paper, we propose a multilevel governance approach involving three groups of interdependent stakeholders: governments, corporations, and citizens. We examine their interrelationships through dimensions of trust, such as competence, integrity, and benevolence. The levels of governance, combined with the dimensions of trust in AI, provide practical insights that can be used to further enhance user experiences and inform public policy related to AI.
As the development of formal proofs is a time-consuming task, it is important to devise ways of sharing already-written proofs to avoid wasting time redoing them. One of the challenges in this domain is to translate proofs written in proof assistants based on impredicative logics into proof assistants based on predicative logics, whenever impredicativity is not used in an essential way. In this paper we present a transformation for sharing proofs with a core predicative system supporting prenex universe polymorphism (as in Agda). It consists in elaborating a potentially impredicative term into a predicative universe-polymorphic term that is as general as possible. The use of universe polymorphism is justified by the fact that mapping each universe to a fixed one in the target theory is not sufficient in most cases. During the algorithm, we need to solve unification problems in the equational theory of universe levels. To this end, we give a complete characterization of when a single equation admits a most general unifier. This characterization is then employed in an algorithm that uses a constraint-postponement strategy to solve unification problems. The proposed translation is of course partial, but in practice it allows one to translate many proofs that do not use impredicativity in an essential way. Indeed, it was implemented in the tool Predicativize and then used to translate, semi-automatically, many non-trivial developments from Matita's arithmetic library to Agda, including proofs of Bertrand's Postulate and Fermat's Little Theorem, which, as far as we know, were not previously available in Agda.
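To give a feel for the unification fragment involved, here is a toy Python sketch (ours, purely illustrative, and not the paper's algorithm): it solves a single equation between levels of the form x + k (a variable shifted by k successors) or a constant c, and omits max entirely, whereas the paper's characterization covers the full equational theory of levels.

```python
# Toy unifier for a tiny fragment of the universe-level theory.
# A level is ('const', n) or ('var', x, k), meaning x + k successors.

def unify(lhs, rhs):
    """Return a substitution {var: level} solving lhs = rhs, or None."""
    if lhs[0] == 'var' and rhs[0] == 'var':
        _, x, k = lhs
        _, y, j = rhs
        if x == y:
            return {} if k == j else None   # x + k = x + j needs k = j
        # x + k = y + j: solve for the variable with the smaller shift.
        if k <= j:
            return {x: ('var', y, j - k)}
        return {y: ('var', x, k - j)}
    if lhs[0] == 'const' and rhs[0] == 'const':
        return {} if lhs[1] == rhs[1] else None
    # Orient so the variable side is on the left.
    if lhs[0] == 'const':
        lhs, rhs = rhs, lhs
    _, x, k = lhs
    _, c = rhs
    # x + k = c has a (most general) solution iff c >= k.
    return {x: ('const', c - k)} if c >= k else None

print(unify(('var', 'x', 2), ('const', 5)))     # {'x': ('const', 3)}
print(unify(('var', 'x', 3), ('var', 'y', 1)))  # {'y': ('var', 'x', 2)}
```

Once max enters the picture, a most general unifier need not exist for a single equation, which is precisely what the paper's complete characterization and constraint-postponement strategy address.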
Over the past decades, cognitive neuroscientists and behavioral economists have recognized the value of describing the process of decision making in detail and modeling the emergence of decisions over time. For example, the time it takes to decide can reveal more about an agent's true hidden preferences than the decision alone. Similarly, data that track the ongoing decision process, such as eye movements or neural recordings, contain critical information that can be exploited, even if no decision is made. Here, we argue that artificial intelligence (AI) research would benefit from a stronger focus on insights about how decisions emerge over time and from incorporating related process data to improve AI predictions in general and human-AI interactions in particular. First, we introduce a well-established computational framework that models decisions as emerging from the noisy accumulation of evidence, and we present related empirical work in psychology, neuroscience, and economics. Next, we discuss to what extent current approaches in multi-agent AI do or do not incorporate process data and models of decision making. Finally, we outline how a more principled inclusion of the evidence-accumulation framework into the training and use of AI can help to improve human-AI interactions in the future.
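For concreteness, a minimal simulation of the standard drift-diffusion instantiation of this framework (a generic textbook version, not tied to any specific study cited here) looks as follows: evidence accumulates noisily until it hits one of two bounds, jointly producing a choice and a response time.

```python
import numpy as np

def simulate_ddm(drift=0.3, bound=1.0, dt=0.001, noise=1.0, rng=None):
    """One drift-diffusion trial: returns (choice, response_time)."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound:
        # Noisy evidence accumulation: mean drift plus Gaussian increments.
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t

rng = np.random.default_rng(0)
choices, rts = zip(*(simulate_ddm(rng=rng) for _ in range(1000)))
print(f"P(choice=1) = {np.mean(choices):.2f}, mean RT = {np.mean(rts):.2f} s")
```

The key point for the argument above is that the same fitted parameters jointly constrain choices and response times, so response-time data carry information about preferences that choice data alone do not.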
Although there have been remarkable advances in dialogue systems through the Dialogue Systems Technology Challenge (DSTC), building a robust task-oriented dialogue system with a speech interface remains one of the key challenges. Most of the progress has been made for text-based dialogue systems, since datasets of written corpora are abundant while spoken dialogue datasets are very scarce. However, as voice assistants such as Siri and Alexa show, it is of practical importance to transfer this success to spoken dialogues. In this paper, we describe our engineering effort in building a highly successful model that participated in the speech-aware dialogue systems technology challenge track of DSTC11. Our model consists of three major modules: (1) automatic speech recognition (ASR) error correction to bridge the gap between spoken and written utterances, (2) a text-based dialogue system (D3ST) that estimates slots and values using slot descriptions, and (3) post-processing to correct errors in the estimated slot values. Our experiments show that an explicit ASR error correction module, post-processing, and data augmentation are all important for adapting a text-based dialogue state tracker to spoken dialogue corpora.
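The three-module design can be pictured as a simple pipeline; the sketch below is a hypothetical skeleton (the function names and signatures are ours for illustration, not the DSTC11 implementation, and each stage stands in for a learned model).

```python
# Hypothetical skeleton of the three-stage pipeline described above.

def correct_asr_errors(asr_hypothesis: str) -> str:
    """Stage 1: map a noisy ASR transcript toward the written style
    the text-based tracker was trained on (e.g. a seq2seq model)."""
    return asr_hypothesis  # placeholder for a learned correction model

def track_dialogue_state(utterance: str, history: list[str]) -> dict[str, str]:
    """Stage 2: a description-driven tracker (D3ST-style) predicting
    slot-value pairs from the utterance, history, and slot descriptions."""
    return {}  # placeholder for the tracker

def post_process(state: dict[str, str]) -> dict[str, str]:
    """Stage 3: repair residual errors in predicted slot values,
    e.g. by matching them against a catalogue of valid values."""
    return state  # placeholder for rule-based or learned repair

def dialogue_turn(asr_hypothesis: str, history: list[str]) -> dict[str, str]:
    utterance = correct_asr_errors(asr_hypothesis)
    state = track_dialogue_state(utterance, history)
    return post_process(state)
```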
The conjugate gradient method is an efficient algorithm for solving large sparse linear systems. It has been used to accelerate computation in Bayesian analysis for many large-scale problems. This article discusses applications of conjugate gradient in Bayesian computation, with a focus on sparse regression and spatial analysis. A self-contained introduction to the conjugate gradient method is provided to facilitate potential applications in a broader range of problems.
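As a reference point, the method itself fits in a few lines; the following is a standard textbook implementation for a symmetric positive-definite system Ax = b (generic, not specific to this article's examples).

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-8, max_iter=None):
    """Solve Ax = b for symmetric positive-definite A."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x           # residual
    p = r.copy()            # search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)       # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # converged
            break
        p = r + (rs_new / rs) * p   # A-conjugate update of the direction
        rs = rs_new
    return x
```

In Bayesian sparse regression, for instance, posterior sampling typically reduces to repeated solves of systems of this form (e.g. with a Gram-matrix-plus-diagonal coefficient matrix), which is where conjugate gradient delivers its speedup; note only matrix-vector products A @ p are needed, so A can be sparse or implicit.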
Inverse problems arise wherever we have indirect measurements. As they are, in general, ill-posed, obtaining satisfactory solutions requires prior knowledge. Classically, different regularization methods and Bayesian-inference-based methods have been proposed. As these methods require a great number of forward and backward computations, they become computationally costly, in particular when the forward or generative models are complex and evaluating the likelihood is expensive. Deep neural network surrogate models and approximate computation can be very helpful here. However, to account for uncertainties, we first need to understand Bayesian deep learning; then we can see how to use it for inverse problems. In this work, we focus on neural networks (NN), deep learning (DL), and more specifically Bayesian DL adapted for inverse problems. We first give details of approximate Bayesian DL computations with exponential families, and then we show how they can be used for inverse problems. We consider two cases: first, the case where the forward operator is known and used as a physics constraint; second, more general data-driven DL methods.
Keywords: Neural Networks, Variational Bayesian inference, Bayesian Deep Learning (DL), Inverse problems, Physics-based DL.
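For reference, the approximate Bayesian computations mentioned in this abstract typically revolve around maximizing the evidence lower bound (ELBO) over a variational family $q$ (often an exponential family); in standard notation,
\[
\mathcal{L}(q) \;=\; \mathbb{E}_{q(\theta)}\!\left[\log p(y \mid \theta)\right] \;-\; \mathrm{KL}\!\left(q(\theta)\,\|\,p(\theta)\right) \;\le\; \log p(y),
\]
so that maximizing $\mathcal{L}$ within the chosen family yields the approximate posterior over the network parameters $\theta$. (This is the standard variational objective, stated here as background rather than as this work's specific formulation.)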
This work studies parameter estimators based on functions of Markov chains generated by perturbations of the independence copula. We provide asymptotic distributions of maximum likelihood estimators and confidence intervals for the copula parameters of several families of copulas introduced in Longla (2023). Another set of moment-like estimators is proposed, along with a multivariate central limit theorem that provides their asymptotic distributions. We investigate the particular cases of Markov chains generated by sine copulas, sine-cosine copulas, and the extended Farlie-Gumbel-Morgenstern copula family. Some tests of independence are proposed. A simulation study is provided for the three copula families of interest. It compares the two introduced estimators with the robust estimator of Longla and Peligrad (2021), showing the advantages of the proposed approach.
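To make the moment-like idea concrete for the classical (non-extended) FGM copula $C(u,v) = uv[1+\theta(1-u)(1-v)]$, whose density is $c(u,v) = 1+\theta(1-2u)(1-2v)$: a direct computation gives $E[(1-2U_t)(1-2U_{t+1})] = \theta/9$ under stationarity, which suggests the simple estimator sketched below (ours, for illustration; not necessarily the estimator analyzed in the paper).

```python
import numpy as np

def fgm_markov_chain(theta, n, rng):
    """Stationary Markov chain with uniform marginals whose transitions
    follow the FGM copula, |theta| <= 1. Sampling inverts the conditional
    cdf C_u(v) = v + a*v*(1-v) with a = theta*(1-2u)."""
    u = np.empty(n)
    u[0] = rng.uniform()
    for t in range(1, n):
        a = theta * (1.0 - 2.0 * u[t - 1])
        w = rng.uniform()
        if abs(a) < 1e-12:
            u[t] = w  # conditional law is uniform when a = 0
        else:
            u[t] = ((1 + a) - np.sqrt((1 + a) ** 2 - 4 * a * w)) / (2 * a)
    return u

def moment_estimator(u):
    """E[(1-2U_t)(1-2U_{t+1})] = theta/9, so invert the empirical moment."""
    s = (1 - 2 * u[:-1]) * (1 - 2 * u[1:])
    return 9.0 * s.mean()

rng = np.random.default_rng(1)
u = fgm_markov_chain(theta=0.5, n=200_000, rng=rng)
print(f"theta_hat = {moment_estimator(u):.3f}")  # should be near 0.5
```

The asymptotic distribution of such estimators is exactly what a central limit theorem for functions of the Markov chain, of the kind established in the paper, delivers.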
In the symbolic verification of cryptographic protocols, a central problem is deciding whether a protocol admits an execution which leaks a designated secret to the malicious intruder. Rusinowitch & Turuani (2003) show that, when considering finitely many sessions, this ``insecurity problem'' is NP-complete. Central to their proof strategy is the observation that any execution of a protocol can be simulated by one where the intruder only communicates terms of bounded size. However, in models where, in addition to terms, one can also communicate logical statements about terms, the analysis of the insecurity problem becomes tricky because the two inference systems interact. In this paper we consider the insecurity problem for protocols with logical statements that include {\em equality on terms} and {\em existential quantification}. Witnesses for existential quantifiers may be unbounded, and obtaining small witness terms while maintaining equality proofs complicates the analysis considerably. We extend techniques from Rusinowitch & Turuani (2003) to show that this problem is also in NP.
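As background, the term-level inference system underlying the bounded-communication argument is the standard Dolev-Yao deduction system; the sketch below (a simplified textbook version, without the equality and quantifier reasoning this paper adds) decides whether the intruder can derive a term from its knowledge by first decomposing to a fixpoint and then composing recursively.

```python
# Simplified Dolev-Yao deduction. Terms are strings (atoms) or tuples:
# ('pair', a, b) for pairing, ('enc', m, k) for encryption of m under k.

def analyze(knowledge):
    """Close the knowledge set under decomposition (analysis) rules."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            if not isinstance(t, tuple):
                continue
            if t[0] == 'pair':
                parts = [t[1], t[2]]              # both projections
            elif t[0] == 'enc' and t[2] in known:
                parts = [t[1]]                    # decrypt with a known key
            else:
                parts = []
            for p in parts:
                if p not in known:
                    known.add(p)
                    changed = True
    return known

def derivable(target, known):
    """Synthesis: can the target be built from analyzed knowledge?"""
    if target in known:
        return True
    if isinstance(target, tuple):
        return all(derivable(s, known) for s in target[1:])
    return False

k = analyze({('enc', 'secret', 'k1'), ('pair', 'k1', 'n')})
print(derivable('secret', k))  # True: k1 is extracted, then used to decrypt
```

The difficulty the paper tackles is that once logical statements (equalities, existentials) are communicated alongside terms, witnesses are no longer bounded for free, and the small-term simulation argument must be redone.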
This paper considers a Bayesian approach for inclusion detection in nonlinear inverse problems using two known and popular push-forward prior distributions: the star-shaped and level set prior distributions. We analyze the convergence of the corresponding posterior distributions in a small measurement noise limit. The methodology is general; it works for priors arising from any H\"older continuous transformation of Gaussian random fields and is applicable to a range of inverse problems. The level set and star-shaped prior distributions are examples of push-forward priors under H\"older continuous transformations that take advantage of the structure of inclusion detection problems. We show that the corresponding posterior mean converges to the ground truth in a proper probabilistic sense. Numerical tests on a two-dimensional quantitative photoacoustic tomography problem showcase the approach. The results highlight the convergence properties of the posterior distributions and the ability of the methodology to detect inclusions with sufficiently regular boundaries.
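As background on the construction, a common schematic form of the level-set prior thresholds a Gaussian random field $W$ to produce a piecewise-constant parameter,
\[
\gamma(x) \;=\; \kappa_{\mathrm{in}}\,\mathbf{1}\{W(x) > t\} \;+\; \kappa_{\mathrm{bg}}\,\mathbf{1}\{W(x) \le t\},
\]
so that the inclusion is the super-level set $\{x : W(x) > t\}$, while the star-shaped prior instead pushes a Gaussian process on the circle through a smooth map to obtain the radial function describing the inclusion boundary. (These are the standard schematic forms, stated for orientation; the paper's exact parametrizations, and in particular the H\"older continuous transformations it requires, may differ, e.g. by mollifying the step function.)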
Graph Convolution Networks (GCNs) show great potential for recommendation, owing to their ability to learn good user and item embeddings by exploiting collaborative signals from high-order neighbors. Like other GCN models, GCN-based recommendation models suffer from the notorious over-smoothing problem: when more layers are stacked, node embeddings become more similar and eventually indistinguishable, resulting in performance degradation. The recently proposed LightGCN and LR-GCN alleviate this problem to some extent; however, we argue that they overlook an important factor behind over-smoothing in recommendation, namely that high-order neighboring users who share no common interests with a given user can still be involved in that user's embedding learning during graph convolution. As a result, multi-layer graph convolution makes users with dissimilar interests have similar embeddings. In this paper, we propose a novel Interest-aware Message-Passing GCN (IMP-GCN) recommendation model, which performs high-order graph convolution inside subgraphs. Each subgraph consists of users with similar interests and the items they have interacted with. To form the subgraphs, we design an unsupervised subgraph generation module that effectively identifies users with common interests by exploiting both user features and graph structure. In this way, our model avoids propagating negative information from high-order neighbors into embedding learning. Experimental results on three large-scale benchmark datasets show that our model gains performance improvements by stacking more layers and significantly outperforms state-of-the-art GCN-based recommendation models.
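To illustrate the core operation, here is a hypothetical numpy sketch of LightGCN-style propagation restricted to interest subgraphs (our simplification of the idea, not the authors' implementation; in particular it assigns each node to a single subgraph, whereas in IMP-GCN items can appear in several subgraphs): each propagation step only mixes embeddings along edges whose endpoints belong to the same subgraph.

```python
import numpy as np

def subgraph_propagate(adj, emb, groups, n_layers=3):
    """LightGCN-style propagation restricted to interest subgraphs.
    adj: (n, n) 0/1 adjacency; emb: (n, d) initial embeddings;
    groups: length-n int array assigning each node to a subgraph id."""
    same_group = groups[:, None] == groups[None, :]
    a = adj * same_group                        # keep intra-subgraph edges only
    deg = a.sum(axis=1, keepdims=True).clip(min=1)
    a_norm = a / np.sqrt(deg) / np.sqrt(deg.T)  # symmetric normalization
    layers = [emb]
    for _ in range(n_layers):
        layers.append(a_norm @ layers[-1])      # linear propagation, no weights
    return np.mean(layers, axis=0)              # combine layer outputs
```

The intended effect is visible in the masking step: stacking more layers deepens propagation within a group of like-minded users without ever averaging in embeddings from users with dissimilar interests.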
In recent years, object detection has made impressive progress. Despite these improvements, there is still a significant performance gap between the detection of small and large objects. We analyze a current state-of-the-art model, Mask R-CNN, on a challenging dataset, MS COCO, and show that the overlap between small ground-truth objects and the predicted anchors is much lower than the expected IoU threshold. We conjecture this is due to two factors: (1) only a few images contain small objects, and (2) even within those images, small objects do not appear often enough. We thus propose to oversample images with small objects and to augment each of them by copy-pasting small objects many times. This allows us to trade off the detector's quality on large objects against its quality on small objects. We evaluate different pasting augmentation strategies and ultimately achieve a 9.7\% relative improvement on instance segmentation and 7.1\% on object detection of small objects, compared to the current state-of-the-art method on MS COCO.
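The augmentation itself is simple to sketch; below is a hypothetical minimal version (ours, for illustration) that copy-pastes one masked small object to random locations in the same image. A full pipeline would additionally avoid pasting on top of existing objects and would update the ground-truth annotations for each pasted copy.

```python
import numpy as np

def copy_paste_small_object(image, mask, n_copies=3, rng=None):
    """Paste a masked object at random positions in the same image.
    image: (H, W, 3) uint8; mask: (H, W) bool covering one small object.
    Assumes the object's bounding box is smaller than the image."""
    rng = rng or np.random.default_rng()
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    patch = image[y0:y1, x0:x1]
    patch_mask = mask[y0:y1, x0:x1]
    h, w = patch.shape[:2]
    out = image.copy()
    for _ in range(n_copies):
        ty = rng.integers(0, image.shape[0] - h)   # random top-left corner
        tx = rng.integers(0, image.shape[1] - w)
        region = out[ty:ty + h, tx:tx + w]
        region[patch_mask] = patch[patch_mask]     # blend via the object mask
    return out
```

Combined with oversampling images that contain small objects, this increases both the number of images with small objects seen during training and the number of small-object instances per image, directly targeting the two factors conjectured above.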