For typical first-order logical theories, a satisfying assignment has a straightforward finite representation that can directly serve as a certificate that the assignment satisfies the given formula. For non-linear real arithmetic with transcendental functions, however, no general finite representation of satisfying assignments is available. Hence, in this paper, we introduce a different form of satisfiability certificate for this theory, formulate the satisfiability verification problem as the problem of searching for such a certificate, and show how to perform this search in a systematic fashion. This not only eases the independent verification of results but also enables the systematic design of new, efficient search techniques. Computational experiments document that the resulting method is able to prove satisfiability of a substantially larger number of benchmark problems than existing methods.
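To make the certificate idea concrete, here is a minimal sketch (our illustration, not the paper's actual certificate format) of how an interval box can certify satisfiability of a transcendental constraint: if rigorous interval evaluation shows the constraint holds everywhere on the box, then any point of the box is a satisfying assignment. It assumes mpmath's interval arithmetic module; the formula and the box are made up for illustration.

```python
# Sketch: an interval box as a satisfiability certificate for sin(x)*exp(y) < 1.
from mpmath import iv

def certifies(box):
    """Check that sin(x)*exp(y) < 1 holds everywhere on box = (x, y)."""
    x, y = box
    val = iv.sin(x) * iv.exp(y)   # rigorous interval enclosure of the term
    return val.b < 1              # upper endpoint below 1 => inequality certified

# A candidate certificate: the box [3, 3.2] x [0, 0.5].
box = (iv.mpf([3, 3.2]), iv.mpf([0, 0.5]))
print(certifies(box))  # True: every point of the box satisfies the constraint
```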
Given a (machine learning) classifier and a collection of unlabeled data, how can we efficiently identify misclassification patterns present in this dataset? To address this problem, we propose a human-machine collaborative framework consisting of a team of human annotators and a sequential recommendation algorithm. The recommendation algorithm is conceptualized as a stochastic sampler that, in each round, queries the annotators with a subset of samples for their true labels and obtains feedback on whether those samples are misclassified. The sampling mechanism needs to balance discovering new patterns of misclassification (exploration) against confirming potential patterns of misclassification (exploitation). We construct a determinantal point process, whose intensity balances the exploration-exploitation trade-off through a weighted update of the posterior at each round, to form the generator of the stochastic sampler. Numerical results empirically demonstrate the competitive performance of our framework on multiple datasets at various signal-to-noise ratios.
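The following is a minimal sketch of the exploration-exploitation balance via a quality-weighted DPP kernel. The RBF similarity, the posterior scores, and greedy MAP selection (in place of exact DPP sampling) are our simplifying assumptions, not the paper's exact construction.

```python
# Sketch: greedy MAP selection from a DPP whose kernel trades off diversity
# (exploration) against posterior misclassification scores (exploitation).
import numpy as np

def greedy_dpp(features, post_probs, k):
    """Pick k samples greedily maximizing log-det of the quality-weighted kernel."""
    sq = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    S = np.exp(-sq / sq.mean())               # RBF similarity (diversity term)
    q = post_probs                            # per-sample quality scores
    L = q[:, None] * S * q[None, :]           # L_ij = q_i * S_ij * q_j
    chosen = []
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in range(len(q)):
            if i in chosen:
                continue
            idx = chosen + [i]
            val = np.linalg.slogdet(L[np.ix_(idx, idx)])[1]  # log-det of submatrix
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
p = rng.uniform(0.1, 0.9, size=50)   # posterior prob. of misclassification
print(greedy_dpp(X, p, k=5))
```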
The paper studies a scalar auxiliary variable (SAV) method for solving the Cahn-Hilliard equation with degenerate mobility posed on a smooth closed surface $\Gamma$. The SAV formulation is combined with adaptive time stepping and a geometrically unfitted trace finite element method (TraceFEM), which embeds $\Gamma$ in $\mathbb{R}^3$. Stability is proven to hold in an appropriate sense for both the first- and second-order in time variants of the method. The performance of our SAV method is illustrated through a series of numerical experiments, including a systematic comparison with a stabilized semi-explicit method.
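For context, a generic SAV reformulation of the surface Cahn-Hilliard problem reads as follows; this is a standard form from the SAV literature, with $F$ the double-well potential, $M$ the mobility, and $C$ a constant ensuring positivity, and the paper's precise formulation may differ:
\[
r(t)=\sqrt{E_1(\phi)+C},\qquad E_1(\phi)=\int_\Gamma F(\phi)\,ds,
\]
\[
\partial_t\phi=\nabla_\Gamma\cdot\big(M(\phi)\nabla_\Gamma\mu\big),\qquad
\mu=-\epsilon^2\Delta_\Gamma\phi+\frac{r}{\sqrt{E_1(\phi)+C}}\,F'(\phi),
\]
\[
r_t=\frac{1}{2\sqrt{E_1(\phi)+C}}\int_\Gamma F'(\phi)\,\partial_t\phi\,ds .
\]
The point of the auxiliary variable $r$ is that the nonlinear term becomes linear in $r$, which yields unconditionally stable time discretizations.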
We propose a spectral method for the 1D-1V Vlasov-Poisson system where the discretization in velocity space is based on asymmetrically-weighted Hermite functions, dynamically adapted via a scaling $\alpha$ and shifting $u$ of the velocity variable. Specifically, at each time instant an adaptivity criterion selects new values of $\alpha$ and $u$ based on the numerical solution of the discrete Vlasov-Poisson system obtained at that time step. Once the new values of the Hermite parameters $\alpha$ and $u$ are fixed, the Hermite expansion is updated and the discrete system is further evolved for the next time step. The procedure is applied iteratively over the desired temporal interval. The key aspects of the adaptive algorithm are: the map between approximation spaces associated with different values of the Hermite parameters, which preserves total mass, momentum and energy; and the adaptivity criterion to update $\alpha$ and $u$, based on physics considerations relating the Hermite parameters to the average velocity and temperature of each plasma species. For the discretization of the spatial coordinate, we rely on Fourier functions and use the implicit midpoint rule for time stepping. The resulting numerical method intrinsically possesses the property of fluid-kinetic coupling, where the low-order terms of the expansion are akin to the fluid moments of a macroscopic description of the plasma, while kinetic physics is retained by adding more spectral terms. Moreover, the scheme conserves the discrete total mass, momentum and energy for periodic boundary conditions. A set of numerical experiments confirms that the adaptive method outperforms the non-adaptive one in terms of accuracy and stability of the numerical solution.
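The adaptivity criterion described above can be sketched as follows: the shift $u$ is taken as the mean velocity and the scaling $\alpha$ as the thermal speed, both computed from velocity moments of the distribution. This is a minimal illustration on a uniform velocity grid; the factor relating $\alpha$ to the temperature moment is convention-dependent and assumed here.

```python
# Sketch: update the Hermite shift u and scaling alpha from moments of f(v).
import numpy as np

def hermite_parameters(v, f):
    """Return (u, alpha) from density, momentum and temperature moments of f."""
    dv = v[1] - v[0]                         # uniform velocity grid spacing
    n  = f.sum() * dv                        # density moment
    u  = (v * f).sum() * dv / n              # mean velocity
    T  = ((v - u) ** 2 * f).sum() * dv / n   # temperature moment
    return u, np.sqrt(2.0 * T)               # thermal-speed scaling (assumed convention)

v = np.linspace(-8, 8, 401)
f = np.exp(-(v - 1.5) ** 2)                  # drifting Maxwellian test case
print(hermite_parameters(v, f))              # approximately (1.5, 1.0)
```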
Domination problems in general can capture situations in which some entities have an effect on other entities (and sometimes on themselves). The usual goal is to select a minimum number of entities that can influence a target group of entities or to influence a maximum number of target entities with a certain number of available influencers. In this work, we focus on the distinction between \textit{internal} and \textit{external} domination in the respective maximization problem. In particular, a dominator can dominate its entire neighborhood in a graph, internally dominating itself, while those of its neighbors which are not dominators themselves are externally dominated. We study the problem of maximizing the external domination that a given number of dominators can yield and we present a 0.5307-approximation algorithm for this problem. Moreover, our methods provide a framework for approximating a number of problems that can be cast in terms of external domination. In particular, we observe that an interesting interpretation of the maximum coverage problem can capture a new problem in elections, in which we want to maximize the number of \textit{externally represented} voters. We study this problem in two different settings, namely Non-Secrecy and Rational-Candidate, and provide approximability analysis for two alternative approaches; our analysis reveals, among other contributions, that an earlier resource allocation algorithm is, in fact, a 0.462-approximation algorithm for maximum external domination in directed graphs.
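As a point of reference, the standard greedy heuristic for maximum coverage, adapted to count externally dominated vertices, looks as follows. This is a simplified sketch, not the paper's 0.5307-approximation algorithm, and the example graph is hypothetical.

```python
# Sketch: greedily pick k dominators to maximize externally dominated vertices.
def greedy_external_domination(neighbors, k):
    """neighbors: dict vertex -> set of its neighbors in the graph."""
    dominators, covered = set(), set()
    for _ in range(k):
        # Marginal gain: newly covered neighbors that are not dominators.
        best = max(neighbors,
                   key=lambda v: len((neighbors[v] - covered) - dominators - {v}))
        dominators.add(best)
        covered |= neighbors[best]
    # Externally dominated = covered vertices that are not dominators themselves.
    return dominators, covered - dominators

G = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2, 5}, 5: {4}}
print(greedy_external_domination(G, k=1))  # picks 2; externally dominates {1, 3, 4}
```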
We find two series expansions for Legendre's second incomplete elliptic integral $E(\lambda, k)$ in terms of recursively computed elementary functions. Both expansions converge at every point of the unit square in the $(\lambda, k)$ plane. Partial sums of the proposed expansions form a sequence of approximations to $E(\lambda,k)$ which are asymptotic when $\lambda$ and/or $k$ tend to unity, including when both approach the logarithmic singularity $\lambda=k=1$ from any direction. Explicit two-sided error bounds are given at each approximation order. These bounds yield a sequence of increasingly precise asymptotically correct two-sided inequalities for $E(\lambda, k)$. For the reader's convenience we further present explicit expressions for low-order approximations and numerical examples to illustrate their accuracy. Our derivations are based on series rearrangements, hypergeometric summation algorithms and extensive use of the properties of the generalized hypergeometric functions including some recent inequalities.
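For reference, in the $\lambda=\sin\varphi$ convention used here, Legendre's second incomplete elliptic integral is
\[
E(\lambda,k)=\int_0^{\lambda}\sqrt{\frac{1-k^2t^2}{1-t^2}}\,dt,
\qquad 0\le\lambda\le1,\ \ 0\le k\le1 ,
\]
so that the unit square in the $(\lambda,k)$ plane is precisely the domain on which the expansions converge.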
Standard stochastic Galerkin methods are known to encounter challenges when solving partial differential equations with high-dimensional random inputs, typically caused by the large number of stochastic basis functions required. It therefore becomes crucial to choose effective basis functions so that the dimension of the stochastic approximation space can be reduced. In this work, we focus on the stochastic Galerkin approximation associated with generalized polynomial chaos (gPC), and explore the gPC expansion based on the analysis of variance (ANOVA) decomposition. A concise form of the gPC expansion is presented for each component function of the ANOVA expansion, and an adaptive ANOVA procedure is proposed to construct the overall stochastic Galerkin system. Numerical results demonstrate the efficiency of the proposed adaptive ANOVA stochastic Galerkin method.
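For reference, the ANOVA decomposition underlying the method expands a random function into component functions of increasing interaction order,
\[
f(\xi_1,\dots,\xi_N)=f_0+\sum_{i=1}^{N}f_i(\xi_i)+\sum_{1\le i<j\le N}f_{ij}(\xi_i,\xi_j)+\cdots ,
\]
and the adaptive procedure retains only those component functions whose variance contribution is significant, each expanded in a low-dimensional gPC basis rather than a gPC basis over all $N$ variables at once.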
Much of the literature on optimal design of bandit algorithms is based on minimization of expected regret. It is well known that designs that are optimal over certain exponential families can achieve expected regret growing logarithmically in the number of arm plays, at a rate governed by the Lai-Robbins lower bound. In this paper, we show that when one uses such optimized designs, the regret distribution of the associated algorithms necessarily has a very heavy tail, specifically that of a truncated Cauchy distribution. Furthermore, for $p>1$, the $p$-th moment of the regret distribution grows much faster than poly-logarithmically, in particular as a power of the total number of arm plays. We show that optimized UCB bandit designs are also fragile in an additional sense: when the problem is even slightly mis-specified, the regret can grow much faster than the conventional theory suggests. Our arguments are based on standard change-of-measure ideas and indicate that the most likely way for regret to become larger than expected is for the optimal arm to return below-average rewards in the first few plays, causing the algorithm to believe the arm is sub-optimal. To alleviate the fragility issues exposed, we show that UCB algorithms can be modified to ensure a desired degree of robustness to mis-specification. In doing so, we also provide a sharp trade-off between the amount of UCB exploration and the tail exponent of the resulting regret distribution.
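For orientation, a textbook UCB1 loop is sketched below; this is our illustration, not the paper's optimized or modified design. The exploration constant $c$ is the knob whose tuning the paper relates to the tail exponent of the regret distribution.

```python
# Sketch: textbook UCB1 with Gaussian rewards, tracking cumulative pseudo-regret.
import numpy as np

def ucb_regret(means, horizon, c=2.0, seed=0):
    rng = np.random.default_rng(seed)
    K = len(means)
    counts, sums = np.zeros(K), np.zeros(K)
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= K:
            arm = t - 1                     # initialization: play each arm once
        else:
            arm = int(np.argmax(sums / counts + np.sqrt(c * np.log(t) / counts)))
        reward = rng.normal(means[arm], 1.0)
        counts[arm] += 1
        sums[arm] += reward
        regret += max(means) - means[arm]   # pseudo-regret of this pull
    return regret

print(ucb_regret(means=[0.5, 0.4], horizon=10_000))
```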
A barrier certificate, defined over the states of a dynamical system, is a real-valued function whose zero level set characterizes an inductively verifiable state invariant separating reachable states from unsafe ones. When combined with powerful decision procedures such as sum-of-squares (SOS) programming or satisfiability-modulo-theories (SMT) solvers, barrier certificates enable an automated deductive verification approach to safety. The barrier certificate approach has been extended to refute omega-regular specifications by separating consecutive transitions of omega-automata in the hope of denying all accepting runs. Unsurprisingly, such tactics are bound to be conservative, as refutation of recurrence properties requires reasoning about the well-foundedness of the transitive closure of the transition relation. This paper introduces the notion of closure certificates as a natural extension of barrier certificates from state invariants to transition invariants. We provide SOS- and SMT-based characterizations for automating the search for closure certificates and demonstrate their effectiveness via a paradigmatic case study.
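For context, one common discrete-time form of the barrier-certificate conditions that closure certificates generalize is the following: for dynamics $x^{+}=F(x)$, initial set $X_0$, and unsafe set $X_u$, a function $B$ is a barrier certificate if
\[
B(x)\le 0\ \ \forall x\in X_0,\qquad B(x)>0\ \ \forall x\in X_u,\qquad B(x)\le 0\ \Rightarrow\ B(F(x))\le 0 ,
\]
so that the zero sublevel set of $B$ is an inductive invariant separating reachable states from unsafe ones.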
Graph machine learning has been extensively studied in both academia and industry. However, as the literature on graph learning booms with a vast number of emerging methods and techniques, it becomes increasingly difficult to manually design the optimal machine learning algorithm for different graph-related tasks. To tackle this challenge, automated graph machine learning, which aims to discover the best hyper-parameter and neural architecture configuration for different graph tasks/data without manual design, is attracting increasing attention from the research community. In this paper, we extensively discuss automated graph machine learning approaches, covering hyper-parameter optimization (HPO) and neural architecture search (NAS) for graph machine learning. We briefly overview existing libraries designed for either graph machine learning or automated machine learning, and then introduce in depth AutoGL, our dedicated open-source library and the world's first for automated graph machine learning. Last but not least, we share our insights on future research directions for automated graph machine learning. This paper is the first systematic and comprehensive discussion of approaches, libraries, and directions for automated graph machine learning.
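To illustrate the HPO side of such automation, here is a minimal random-search sketch over a hypothetical GNN search space; the search space and the placeholder train_and_eval are ours, and this is not AutoGL's actual API.

```python
# Sketch: random-search HPO over a hypothetical GNN hyper-parameter space.
import random

SPACE = {
    "hidden_dim":    [16, 64, 128],
    "num_layers":    [2, 3, 4],
    "learning_rate": [1e-3, 5e-3, 1e-2],
    "dropout":       [0.0, 0.3, 0.5],
}

def train_and_eval(config):
    """Placeholder: train a GNN with this config, return validation accuracy."""
    return random.random()

best_cfg, best_acc = None, -1.0
for trial in range(20):
    cfg = {k: random.choice(v) for k, v in SPACE.items()}  # sample a configuration
    acc = train_and_eval(cfg)
    if acc > best_acc:
        best_cfg, best_acc = cfg, acc
print(best_cfg, best_acc)
```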
Link prediction on knowledge graphs (KGs) is a key research topic. Previous work has mainly focused on binary relations, paying less attention to higher-arity relations even though they are ubiquitous in real-world KGs. This paper considers link prediction over n-ary relational facts and proposes a graph-based approach to this task. The key to our approach is to represent the n-ary structure of a fact as a small heterogeneous graph and to model this graph with edge-biased fully-connected attention: the fully-connected attention captures universal inter-vertex interactions, while edge-aware attentive biases specifically encode the graph structure and its heterogeneity. In this fashion, our approach fully models global and local dependencies in each n-ary fact and hence can more effectively capture the associations therein. Extensive evaluation verifies the effectiveness and superiority of our approach: it performs substantially and consistently better than the current state of the art across a variety of n-ary relational benchmarks. Our code is publicly available.
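A minimal sketch of the attention mechanism described above follows; the shapes and the generic additive bias are our assumptions, not the paper's exact parameterization.

```python
# Sketch: fully-connected attention with additive edge-aware biases.
import numpy as np

def edge_biased_attention(H, edge_bias):
    """H: (n, d) vertex features; edge_bias: (n, n) additive per-pair biases."""
    d = H.shape[1]
    scores = H @ H.T / np.sqrt(d) + edge_bias    # logits = scaled dot-product + bias
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ H                           # attention-weighted vertex updates

n, d = 5, 8
rng = np.random.default_rng(0)
H = rng.normal(size=(n, d))
bias = rng.normal(size=(n, n))       # in practice derived from edge types/structure
print(edge_biased_attention(H, bias).shape)      # (5, 8)
```

Here every vertex pair attends (fully-connected), while the bias term is the only place the graph structure enters, which matches the division of labor the abstract describes.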