This paper develops a novel minimal-state operational semantics for higher-order functional languages that uses only the call stack and two source program points as the complete state information: there is no environment, no substitution, no continuation, etc. We prove this form of operational semantics is equivalent to standard presentations. We then show how this approach opens the door to new applications: we define a program analysis as a direct finitization of this operational semantics. The program analysis that naturally emerges has a number of novel and interesting properties compared to standard program analyses for higher-order programs: for example, it can infer recurrences and does not need value widening. We both give a formal definition of the analysis and describe our current implementation.
This paper explores the application of automated planning to automated theorem proving, which is a branch of automated reasoning concerned with the development of algorithms and computer programs to construct mathematical proofs. In particular, we investigate the use of planning to construct elementary proofs in abstract algebra, which provides a rigorous and axiomatic framework for studying algebraic structures such as groups, rings, fields, and modules. We implement basic implications, equalities, and rules in both deterministic and non-deterministic domains to model commutative rings and deduce elementary results about them. The success of this initial implementation suggests that the well-established techniques seen in automated planning are applicable to the relatively newer field of automated theorem proving. Likewise, automated theorem proving provides a new, challenging domain for automated planning.
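As a concrete illustration of the planning-as-proving idea (a toy sketch of our own, not the paper's implementation), ring axioms can be treated as planning operators that rewrite terms, so that a proof of an equality is exactly a plan, i.e. a sequence of operator applications reaching the goal term:

```python
from collections import deque

def rewrites(term):
    """Yield terms reachable in one step via commutativity/associativity of +."""
    if isinstance(term, tuple) and term[0] == '+':
        _, a, b = term
        yield ('+', b, a)                      # commutativity: a+b -> b+a
        if isinstance(b, tuple) and b[0] == '+':
            _, c, d = b
            yield ('+', ('+', a, c), d)        # associativity: a+(c+d) -> (a+c)+d
        for a2 in rewrites(a):                 # rewrite inside subterms
            yield ('+', a2, b)
        for b2 in rewrites(b):
            yield ('+', a, b2)

def plan(start, goal):
    """Breadth-first search for a rewrite plan (a proof that start = goal)."""
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        term, path = frontier.popleft()
        if term == goal:
            return path
        for nxt in rewrites(term):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

# Prove x + (y + z) = (z + y) + x via a sequence of axiom applications.
print(plan(('+', 'x', ('+', 'y', 'z')), ('+', ('+', 'z', 'y'), 'x')))
```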
The ability to carry out computations in parallel is key to the efficient implementation of computationally intensive algorithms. This paper investigates the applicability of the previously proposed Augmented Island Resampling Particle Filter (AIRPF) -- an algorithm designed for parallel implementations -- to particle Markov Chain Monte Carlo (PMCMC). We show that AIRPF produces a non-negative unbiased estimator of the marginal likelihood and hence is suitable for PMCMC. We also prove stability properties, similar to those of the $\alpha$SMC algorithm, for AIRPF. This implies that the error of AIRPF can be bounded uniformly in time by controlling the effective number of filters, which in turn can be done by adaptively constraining the interactions between filters. We demonstrate the superiority of AIRPF over independent Bootstrap Particle Filters, not only numerically but also theoretically. To this end, we extend the previously proposed collision analysis approach to derive an explicit expression for the variance of the marginal likelihood estimate. This expression admits exact evaluation of the variance in some simple scenarios, as we also demonstrate.
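For context (this is the standard particle-filter identity; the AIRPF-specific weighting is developed in the paper), the marginal likelihood estimate of a particle filter with $N$ particles and weights $w_t^{(i)}$ is
\[
\widehat{Z}_T \;=\; \prod_{t=1}^{T} \frac{1}{N} \sum_{i=1}^{N} w_t^{(i)},
\qquad
\mathbb{E}\big[\widehat{Z}_T\big] \;=\; p(y_{1:T}),
\]
and it is precisely the non-negativity and unbiasedness of $\widehat{Z}_T$ that the pseudo-marginal argument underlying PMCMC requires.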
Visualizing extremely large datasets, in static or dynamic form, is challenging because most traditional visualization methods do not scale to big data. We propose a new visualization method for big data based on the Projection Pursuit, Guided Tour, and Data Nuggets methods, which helps display interesting hidden structures such as clusters, outliers, and other nonlinear structures. The Guided Tour is a dynamic graphical tool for high-dimensional data that combines the Projection Pursuit and Grand Tour methods. It displays a dynamic sequence of low-dimensional projections obtained by using Projection Pursuit (PP) index functions to navigate the data space. Different PP indices have been developed to detect interesting structures in multivariate data, but using the original guided tour with these indices is computationally impractical for big data. We develop a new PP index that remains computable for big data with the help of Data Nuggets, a data compression method that reduces large datasets while maintaining the original data structure. Simulation studies and a real large dataset illustrate the proposed methodology. Static and dynamic graphical tools for big data can be developed based on the proposed PP index to detect nonlinear structures.
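To make the idea concrete (a hypothetical sketch; the paper's index and the Data Nuggets interface differ), a PP index can be evaluated on weighted nugget centers instead of the full dataset, for example a weighted-kurtosis index of a 1-D projection:

```python
import numpy as np

def pp_index(centers, weights, direction):
    """Weighted kurtosis of the 1-D projection of nugget centers; large
    deviations from the Gaussian value 3 suggest structure (clusters,
    outliers) along `direction`."""
    z = centers @ (direction / np.linalg.norm(direction))
    w = weights / weights.sum()
    mu = w @ z
    var = w @ (z - mu) ** 2
    return w @ ((z - mu) / np.sqrt(var)) ** 4  # ~3 for Gaussian projections

rng = np.random.default_rng(0)
centers = rng.normal(size=(500, 10))      # nugget centers of a compressed dataset
weights = rng.integers(1, 100, size=500)  # raw points each nugget represents
print(pp_index(centers, weights.astype(float), rng.normal(size=10)))
```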
We present a general framework for preconditioning Hermitian positive definite linear systems based on the Bregman log determinant divergence. This divergence provides a measure of discrepancy between a preconditioner and a target matrix. Given an approximate factorisation of a target matrix, the proposed framework tells us how to construct a low-rank approximation of the typically indefinite factorisation error. The resulting preconditioner is therefore a sum of a Hermitian positive definite matrix given by an approximate factorisation plus a low-rank matrix. Notably, the low-rank term is not generally obtained as a truncated singular value decomposition. This framework leads to a new truncation where principal directions are not based on the magnitude of the singular values. We describe a procedure for determining these \emph{Bregman directions} and prove that preconditioners constructed in this way are minimisers of the aforementioned divergence. Finally, we demonstrate using several numerical examples how the proposed preconditioner performs in terms of convergence of the preconditioned conjugate gradient method (PCG). For the examples we consider, an incomplete Cholesky preconditioner can be greatly improved in this way, and in some cases only a modest low-rank compensation term is required to obtain a considerable improvement in convergence. We also consider matrices arising from interior point methods for linear programming that do not admit such an incomplete factorisation by default, and present a robust incomplete Cholesky preconditioner based on the proposed methodology. The results highlight that the choice of truncation is critical for ill-conditioned matrices. We show numerous examples where PCG converges to a small tolerance by using the proposed preconditioner, whereas PCG with an SVD-based preconditioner fails to do so.
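For reference (this is the standard definition; the paper's notation may differ), the Bregman divergence generated by $-\log\det$ on $n \times n$ Hermitian positive definite matrices is
\[
\mathcal{D}_{\mathrm{LD}}(X, Y) \;=\; \operatorname{tr}\!\left(X Y^{-1}\right) \;-\; \log\det\!\left(X Y^{-1}\right) \;-\; n,
\]
which is non-negative and vanishes exactly when $X = Y$, so minimizing it over structured preconditioners quantifies closeness to the target matrix.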
Recently, text watermarking algorithms for large language models (LLMs) have been proposed to mitigate the potential harms of text generated by LLMs, including fake news and copyright issues. However, current watermark detection algorithms require the secret key used in the watermark generation process, making them susceptible to security breaches and counterfeiting during public detection. To address this limitation, we propose an unforgeable publicly verifiable watermark algorithm that uses two different neural networks for watermark generation and detection, instead of using the same key at both stages. Meanwhile, the token embedding parameters are shared between the generation and detection networks, allowing the detection network to achieve high accuracy very efficiently. Experiments demonstrate that our algorithm attains high detection accuracy and computational efficiency through neural networks with a minimal number of parameters. Subsequent analysis confirms the high complexity involved in forging the watermark from the detection network. Our code and data are available at \href{//github.com/THU-BPM/unforgeable_watermark}{//github.com/THU-BPM/unforgeable\_watermark}.
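A hedged sketch of the two-network design (the architecture details here are illustrative assumptions, not the paper's exact networks): the generation and detection networks share only the token embedding table, so publishing the detection network need not expose the rest of the generation network's parameters:

```python
import torch
import torch.nn as nn

class SharedEmbedding(nn.Module):
    def __init__(self, vocab_size=50_000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

shared = SharedEmbedding()

# Generation network: scores whether a candidate token is "green" in context.
gen_net = nn.Sequential(nn.Linear(2 * 64, 64), nn.ReLU(), nn.Linear(64, 1))

# Detection network: classifies a whole text as watermarked or not.
det_net = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

def green_score(context_ids, candidate_id):
    ctx = shared.emb(context_ids).mean(dim=0)   # summarize the local context
    cand = shared.emb(candidate_id)
    return torch.sigmoid(gen_net(torch.cat([ctx, cand])))

def detect(token_ids):
    return torch.sigmoid(det_net(shared.emb(token_ids).mean(dim=0)))

print(green_score(torch.tensor([1, 2, 3]), torch.tensor(7)))
print(detect(torch.tensor([1, 2, 3, 7, 9])))
```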
This paper proposes the use of causal modeling to detect and mitigate algorithmic bias that is nonlinear in the protected attribute. We provide a general overview of our approach. We use the German Credit data set, which is available for download from the UC Irvine Machine Learning Repository, to develop (1) a prediction model, which is treated as a black box, and (2) a causal model for bias mitigation. In this paper, we focus on age bias and the problem of binary classification. We show that the probability of getting correctly classified as "low risk" is lowest among young people and increases nonlinearly with age. To incorporate the nonlinearity into the causal model, we introduce a higher-order polynomial term. Based on the fitted causal model, the de-biased probability estimates are computed, showing improved fairness with little impact on overall classification accuracy. Causal modeling is intuitive and, hence, its use can enhance explainability and promote trust among different stakeholders of AI.
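As an illustration of the modeling step (with synthetic data and a reference-age intervention of our own choosing, not the paper's exact procedure), one can fit a logistic causal model with a polynomial age term and compute de-biased probabilities by evaluating it at a common reference age:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
age = rng.uniform(19, 75, size=1000)
# Synthetic stand-in for black-box outcomes: P(classified "low risk")
# rises nonlinearly with age.
p_true = 1 / (1 + np.exp(-(-4 + 0.12 * age - 0.0008 * age**2)))
y = rng.binomial(1, p_true)

# Design matrix with a higher-order polynomial term for age.
X = np.column_stack([np.ones_like(age), age, age**2])
causal_model = sm.Logit(y, X).fit(disp=0)

# De-biased estimates: counterfactually set every applicant's age to a
# reference value, removing the (nonlinear) age effect.
ref = np.full_like(age, 40.0)
X_ref = np.column_stack([np.ones_like(age), ref, ref**2])
p_debiased = causal_model.predict(X_ref)
print(p_debiased[:5])
```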
Modern communication systems need to fulfill multiple and often conflicting objectives at the same time. In particular, new applications require high reliability while operating at low transmit powers. Moreover, reliability constraints may vary over time depending on the current state of the system. One solution to address this problem is to use joint transmissions from a number of base stations (BSs) to meet the reliability requirements. However, this approach is inefficient in terms of the total transmit power. In this work, we propose a reinforcement learning-based power allocation scheme for an unmanned aerial vehicle (UAV) communication system with varying communication reliability requirements. In particular, the proposed scheme aims to minimize the total transmit power of all BSs while achieving an outage probability that is less than a tolerated threshold. This threshold varies over time, e.g., when the UAV enters a critical zone with high-reliability requirements. Our results show that the proposed learning scheme uses dynamic power allocation to meet varying reliability requirements, thus effectively conserving energy.
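A hedged sketch of a reward shaping consistent with the stated objective (the paper's exact formulation may differ): penalize total BS transmit power, plus a penalty whenever the outage probability exceeds the current, time-varying threshold:

```python
def reward(bs_powers, outage_prob, outage_threshold, penalty=100.0):
    """Negative total power, with a penalty when reliability is violated."""
    total_power = sum(bs_powers)
    violation = max(0.0, outage_prob - outage_threshold)
    return -total_power - penalty * violation

# A tighter threshold inside a critical zone drives the agent to spend
# more power to stay below it.
print(reward([0.5, 0.3, 0.0], outage_prob=1e-3, outage_threshold=1e-2))
print(reward([0.5, 0.3, 0.0], outage_prob=1e-3, outage_threshold=1e-4))
```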
Linear arrangements of graphs are a well-known type of graph labeling and are found at the heart of many important computational problems, such as the Minimum Linear Arrangement Problem (minLA). A linear arrangement is usually defined as a permutation of the $n$ vertices of a graph. An intuitive geometric setting is that of vertices lying on consecutive integer positions in the real line, starting at 1; edges are typically drawn as semicircles above the real line. In this paper we study the Maximum Linear Arrangement problem (MaxLA), the maximization variant of minLA, which has received less attention than minLA. We devise a new characterization of maximum arrangements of general graphs, and prove that MaxLA can be solved for cycle graphs in constant time, and for $k$-linear trees ($k\le2$) in time $O(n)$. We present a simple algorithm that solves a constrained variant of MaxLA, which we call bipartite MaxLA, in time $O(n)$. This algorithm has two promising characteristics. First, it solves MaxLA for most trees consisting of a few tens of nodes. Second, it produces a high-quality approximation to MaxLA for trees where it fails to solve MaxLA exactly. Furthermore, we conjecture that this algorithm solves MaxLA for at least $50\%$ of all free trees.
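The objective shared by minLA and MaxLA is the cost of an arrangement, the sum of edge lengths on the line; a minimal sketch of its $O(m)$ evaluation (minLA minimizes it, MaxLA maximizes it):

```python
def arrangement_cost(edges, pi):
    """D(pi) = sum over edges {u,v} of |pi[u] - pi[v]|, computed in O(m)."""
    return sum(abs(pi[u] - pi[v]) for u, v in edges)

# A 4-cycle: the identity arrangement vs. one that lengthens the edges.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(arrangement_cost(cycle, {0: 1, 1: 2, 2: 3, 3: 4}))  # 1+1+1+3 = 6
print(arrangement_cost(cycle, {0: 1, 1: 3, 2: 2, 3: 4}))  # 2+1+2+3 = 8
```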
We propose an approach based on machine learning to solve two-stage linear adaptive robust optimization (ARO) problems with binary here-and-now variables and polyhedral uncertainty sets. We encode the optimal here-and-now decisions, the worst-case scenarios associated with the optimal here-and-now decisions, and the optimal wait-and-see decisions into what we denote as the strategy. We solve multiple similar ARO instances in advance using the column and constraint generation algorithm and extract the optimal strategies to generate a training set. We train a machine learning model that predicts high-quality strategies for the here-and-now decisions, the worst-case scenarios associated with the optimal here-and-now decisions, and the wait-and-see decisions. We also introduce an algorithm to reduce the number of different target classes the machine learning algorithm needs to be trained on. We apply the proposed approach to the facility location, the multi-item inventory control, and the unit commitment problems. Our approach solves ARO problems drastically faster than state-of-the-art algorithms while maintaining high accuracy.
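A hedged sketch of the learning step (the features, model choice, and data here are illustrative assumptions): each training example pairs an ARO instance's parameter vector with the index of the optimal strategy found offline by column and constraint generation, and a standard classifier predicts the strategy for new instances:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
instance_params = rng.normal(size=(2000, 30))      # e.g. demands, costs
strategy_labels = rng.integers(0, 15, size=2000)   # indices into a strategy table
# (In practice, labels come from solving each instance offline.)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(instance_params, strategy_labels)

# Online: predict a strategy for a new instance, then verify or repair it
# with a single cheap optimization instead of running the full algorithm.
new_instance = rng.normal(size=(1, 30))
print(model.predict(new_instance))
```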
Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. For example, we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize, but have a strong underestimation bias when estimating large amounts of MI. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and by applying the chain rule on MI between the decomposed views. This expression contains a sum of unconditional and conditional MI terms, each measuring modest chunks of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI which can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and learns better representations in a vision domain and for dialogue generation.
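The decomposition rests on the chain rule of mutual information (a standard identity, written here for one split of a view into subviews $Y_1$ and $Y_2$):
\[
I(X; Y_1, Y_2) \;=\; I(X; Y_1) \;+\; I(X; Y_2 \mid Y_1),
\]
so each term on the right measures only a modest chunk of the total MI and can be approximated well by a contrastive (InfoNCE-style) lower bound.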