We demonstrate a new, hybrid symbolic-numerical method for the automatic synthesis of all families of translation operators required for the execution of the Fast Multipole Method (FMM). Our method is applicable in any dimensionality and to any translation-invariant kernel. The Fast Multipole Method, of course, is the leading approach for attaining linear complexity in the evaluation of long-range (e.g. Coulomb) many-body interactions. Low complexity in the translation operators of the FMM is usually achieved by algorithms specialized for a potential obeying a specific partial differential equation (PDE). Absent a PDE or specialized algorithms, Taylor-series-based FMMs or kernel-independent FMMs have been used, at asymptotically higher expense. When symbolically provided with a constant-coefficient elliptic PDE obeyed by the potential, our algorithm can automatically synthesize translation operators requiring $\mathrm{O}(p^d)$ operations, where $p$ is the expansion order and $d$ is the dimension, compared with $\mathrm{O}(p^{2d})$ operations in a naive approach carried out on (Cartesian) Taylor expansions. This is achieved by using a compression scheme that asymptotically reduces the number of terms in the Taylor expansion and then operating directly on this ``compressed'' representation. Judicious exploitation of shared subexpressions permits formation, translation, and evaluation of local and multipole expansions to be performed in $\mathrm{O}(p^{d})$ operations, while an FFT-based scheme permits multipole-to-local translations in $\mathrm{O}(p^{d-1}\log(p))$ operations. We demonstrate the computational scaling of code generation and evaluation as well as numerical accuracy through numerical experiments on a number of potentials from classical physics.
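For intuition, here is a hedged sketch (assuming the 3D Laplace kernel rather than the general constant-coefficient elliptic PDE handled by the method) of why a governing PDE compresses a Cartesian Taylor expansion:

```latex
% Illustration only: assumes a harmonic potential (3D Laplace kernel), not the
% general elliptic PDE handled by the synthesis method described above.
\phi(c+h) \approx \sum_{|\alpha| \le p} \frac{\partial^{\alpha}\phi(c)}{\alpha!}\, h^{\alpha},
\qquad
\partial_x^{2}\phi = -\bigl(\partial_y^{2} + \partial_z^{2}\bigr)\phi .
% The PDE lets every Taylor coefficient with x-order >= 2 be rewritten in terms of
% coefficients with x-order 0 or 1, so only O(p^{d-1}) of the O(p^d) coefficients
% are independent; the translation operators then act on this compressed set.
```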
Cycles are one of the fundamental subgraph patterns, and being able to enumerate them in graphs enables important applications in a wide variety of fields, including finance, biology, chemistry, and network science. However, to enable cycle enumeration in real-world applications, efficient parallel algorithms are required. In this work, we propose scalable parallelisation of state-of-the-art sequential algorithms for enumerating simple, temporal, and hop-constrained cycles. First, we focus on the simple cycle enumeration problem and parallelise the algorithms by Johnson and by Read and Tarjan in a fine-grained manner. We theoretically show that our resulting fine-grained parallel algorithms are scalable, with the fine-grained parallel Read-Tarjan algorithm being strongly scalable. In contrast, we show that straightforward coarse-grained parallel versions of these simple cycle enumeration algorithms that exploit edge- or vertex-level parallelism are not scalable. Next, we adapt our fine-grained approach to enable the enumeration of cycles under time-window, temporal, and hop constraints. Our evaluation on a cluster with 256 CPU cores that can execute up to 1024 simultaneous threads demonstrates near-linear scalability of our fine-grained parallel algorithms when enumerating cycles under the aforementioned constraints. On the same cluster, our fine-grained parallel algorithms achieve, on average, an order of magnitude speedup compared to the respective coarse-grained parallel versions of the state-of-the-art algorithms for cycle enumeration. The performance gap between the fine-grained and the coarse-grained parallel algorithms increases as we use more CPU cores.
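As background, the sketch below shows what simple cycle enumeration means on a directed graph; it is the naive sequential backtracking baseline, not the fine-grained parallel Johnson or Read-Tarjan algorithms studied in the paper:

```python
# Minimal illustrative sketch of simple-cycle enumeration by backtracking.
# This is the naive sequential baseline, NOT the parallel algorithms above.
def simple_cycles(adj):
    """Yield simple cycles of a directed graph given as {vertex: [neighbours]}."""
    for start in sorted(adj):
        path, on_path = [start], {start}

        def extend(v):
            for w in adj.get(v, []):
                if w == start and len(path) > 1:
                    yield list(path)              # closed a simple cycle
                elif w > start and w not in on_path:
                    path.append(w); on_path.add(w)
                    yield from extend(w)          # explore deeper
                    on_path.discard(path.pop())

        yield from extend(start)

# Example: a 4-vertex graph containing two simple cycles.
g = {0: [1], 1: [2], 2: [0, 3], 3: [1]}
print(list(simple_cycles(g)))   # -> [[0, 1, 2], [1, 2, 3]]
```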
Five Cells is a logic puzzle consisting of a rectangular grid, with some cells containing a number. The player has to partition the grid into blocks, each consisting of five cells, such that the number in each cell equals the number of that cell's edges that are borders of blocks. In this paper, we propose a physical zero-knowledge proof protocol for Five Cells using a deck of playing cards, which allows a prover to physically show that he/she knows a solution of the puzzle without revealing it. More importantly, as an optimization we develop a technique to verify, without revealing any information about the coloring, that a graph coloring assigns different colors to every pair of adjacent vertices. This technique reduces the number of required cards in our protocol from quadratic to linear in the number of cells, and can also be used in other protocols related to graph coloring.
In this paper, we develop a MultiTask Learning (MTL) model to achieve dense predictions for comics panels to, in turn, facilitate the transfer of comics from one publication channel to another by assisting authors in the task of reconfiguring their narratives. Our MTL method can successfully identify the semantic units as well as the embedded notion of 3D in comic panels. This is a significantly challenging problem because comics comprise disparate artistic styles, illustrations, layouts, and object scales that depend on the authors' creative process. Typically, dense image-based prediction techniques require a large corpus of data. Finding an automated solution for dense prediction in the comics domain, therefore, becomes more difficult given the lack of ground-truth dense annotations for comics images. To address these challenges, we develop the following solutions: 1) we leverage a commonly-used strategy known as unsupervised image-to-image translation, which allows us to utilize a large corpus of real-world annotations; 2) we utilize the results of the translations to develop our multitasking approach that is based on a vision transformer backbone and a domain transferable attention module; 3) we study the feasibility of integrating our MTL dense-prediction method with an existing retargeting method, thereby reconfiguring comics.
Programming by example (PBE) is an emerging programming paradigm that automatically synthesizes programs specified by user-provided input-output examples. Despite the convenience for end-users, implementing PBE tools often requires strong expertise in programming languages and synthesis algorithms. Such a level of knowledge is uncommon among software developers, which greatly limits the broad adoption of PBE by industry. To facilitate the adoption of PBE techniques, we propose a PBE framework called Bee, which leverages an "entity-action" model based on relational tables to ease PBE development for a wide but restrained range of domains. Implementing PBE tools with Bee only requires adapting domain-specific data entities and user actions to tables, with no need to design a domain-specific language or an efficient synthesis algorithm. The synthesis algorithm of Bee exploits bidirectional searching and constraint-solving techniques to address the challenge of value computation nested in table transformation. We evaluated Bee's effectiveness on 64 PBE tasks from three different domains and its usability through a human study with 12 participants. Evaluation results show that Bee is easier to learn and use than the state-of-the-art PBE framework, and the bidirectional algorithm achieves comparable performance to domain-specifically optimized synthesizers.
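To make the PBE setting concrete, here is a toy, hypothetical example of synthesis from input-output examples by enumerating pipelines over a tiny string DSL; Bee itself replaces such brute force with bidirectional search and constraint solving over relational tables:

```python
from itertools import product

# Toy illustration of programming by example: enumerate compositions of a small,
# hypothetical set of string operations until one matches all examples.
OPS = {
    "lower":   str.lower,
    "upper":   str.upper,
    "strip":   str.strip,
    "reverse": lambda s: s[::-1],
}

def synthesize(examples, max_depth=3):
    """Return a pipeline of op names consistent with all (input, output) examples."""
    for depth in range(1, max_depth + 1):
        for names in product(OPS, repeat=depth):
            def run(s, names=names):
                for name in names:
                    s = OPS[name](s)
                return s
            if all(run(i) == o for i, o in examples):
                return names
    return None

print(synthesize([("  Foo ", "OOF"), (" Bar", "RAB")]))  # -> ('upper', 'strip', 'reverse')
```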
This work considers the problem of noisy binary search in a sorted array. The noise is modeled by a parameter $p$ that dictates that a comparison can be incorrect with probability $p$, independently of other queries. We state two types of upper bounds on the number of queries: for the worst-case and for the expected query complexity. The bounds improve on those known to date, i.e., our algorithms require fewer queries. Additionally, they have simpler statements and work for the full range of parameters. All query complexities for the expected-query scenarios are tight up to lower-order terms. For the problem where the prior over targets is uniform over all possible inputs, we provide an algorithm with expected query complexity upper-bounded by $(\log_2 n + \log_2 \delta^{-1} + 3)/I(p)$, where $n$ is the domain size, $0\le p < 1/2$ is the noise ratio, $\delta>0$ is the failure probability, and $I(p)$ is the information gain function. As a side effect, we close some correctness issues regarding previous work. En route, we also obtain new and improved query complexities for the search problem generalized to arbitrary graphs. This paper continues and improves upon the lines of research of Burnashev and Zigangirov [Prob. Per. Informatsii, 1974], Ben-Or and Hassidim [FOCS 2008], Gu and Xu [STOC 2023], Emamjomeh-Zadeh et al. [STOC 2016], and Dereniowski et al. [SOSA@SODA 2019].
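As an illustration of the setting, the sketch below is a minimal Bayesian (multiplicative-weights) noisy binary search in the spirit of Burnashev-Zigangirov; it is not the exact algorithm analysed in the paper, and the parameters are illustrative:

```python
import random

def noisy_search(n, noisy_less_equal, p, rounds):
    """Locate a hidden target in {0,...,n-1} when each comparison
    'is target <= m?' is answered incorrectly with probability p < 1/2."""
    weights = [1.0 / n] * n                 # posterior over candidate positions
    for _ in range(rounds):
        acc, m = 0.0, 0                     # query the weighted median of the posterior
        for i, w in enumerate(weights):
            acc += w
            if acc >= 0.5:
                m = i
                break
        ans = noisy_less_equal(m)           # noisy answer to "target <= m?"
        for i in range(n):                  # Bayesian update with noise level p
            consistent = (i <= m) == ans
            weights[i] *= (1 - p) if consistent else p
        total = sum(weights)
        weights = [w / total for w in weights]
    return max(range(n), key=lambda i: weights[i])

# Example: target 37 out of 100 positions, comparisons wrong 10% of the time.
target, p = 37, 0.1
oracle = lambda m: (target <= m) if random.random() > p else not (target <= m)
print(noisy_search(100, oracle, p, rounds=60))   # -> 37 with high probability
```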
Modern computer systems are characterized by deep memory hierarchies, composed of main memory, multiple layers of cache, and other specialized types of memory. In parallel and distributed systems, additional memory layers are added to this hierarchy. Achieving good performance for computational science applications, in terms of execution time, depends on the efficient use of this diverse and hierarchical memory. This paper revisits the use of space-filling curves to specify the ordering in memory of data structures used in representative scientific applications executing on parallel machines containing clusters of multicore CPUs with attached GPUs. This work examines the hypothesis that space-filling curves, such as Hilbert and Morton ordering, can improve data locality and hence result in more efficient data movement than row- or column-based orderings. First, performance results are presented that show for which application parameterizations and machine characteristics this is the case, and these results are interpreted in terms of how an application interacts with the computer hardware and low-level software. This research particularly focuses on the use of stencil-based applications that form the basis of many scientific computations. Second, how space-filling curves impact data sharing in nearest-neighbour and stencil-based codes is considered.
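As a concrete example of such an ordering, a Morton (Z-order) index is obtained by interleaving the bits of the row and column indices, as in the short sketch below (illustrative only; the Hilbert curve requires a somewhat longer routine):

```python
# Minimal sketch of Morton (Z-order) indexing for a 2-D array: interleave the bits
# of the row and column indices so that nearby (i, j) pairs tend to map to nearby
# memory addresses.
def morton_index(i, j, bits=16):
    idx = 0
    for b in range(bits):
        idx |= ((i >> b) & 1) << (2 * b + 1)   # row bit goes to the odd position
        idx |= ((j >> b) & 1) << (2 * b)       # column bit goes to the even position
    return idx

# Lay out an 8x8 grid along the Z-order curve instead of row-major order.
layout = sorted(((i, j) for i in range(8) for j in range(8)),
                key=lambda ij: morton_index(*ij))
print(layout[:8])   # [(0,0), (0,1), (1,0), (1,1), (0,2), (0,3), (1,2), (1,3)]
```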
The criticality problem in nuclear engineering asks for the principal eigen-pair of a Boltzmann operator describing neutron transport in a reactor core. Being able to reliably design and control such reactors requires assessing these quantities within quantifiable accuracy tolerances. In this paper we propose a paradigm that deviates from the common practice of approximately solving the corresponding spectral problem with a fixed, presumably sufficiently fine discretization. Instead, the present approach is based on first contriving iterative schemes, formulated in function space, that are shown to converge at a quantitative rate without assuming any a priori excess regularity properties, and that exploit only properties of the optical parameters in the underlying radiative transfer model. We develop the analytical and numerical tools for approximately realizing each iteration step within judiciously chosen accuracy tolerances, verified by a posteriori estimates, so as to still warrant quantifiable convergence to the exact eigen-pair. This is carried out in full first for a Newton scheme. Since this is only locally convergent, we additionally analyze the convergence of a power iteration in function space to produce sufficiently accurate initial guesses. Here we have to deal with intrinsic difficulties posed by compact but unsymmetric operators, which prevent standard arguments used in the finite-dimensional case. Our main point is that we can avoid any condition requiring an initial guess to already lie in a small neighborhood of the exact solution. We close with a discussion of remaining intrinsic obstructions to a certifiable numerical implementation, mainly related to not knowing the gap between the principal eigenvalue and the next smaller one in modulus.
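For orientation, the sketch below shows the familiar finite-dimensional power iteration with a Rayleigh-quotient estimate; the paper's scheme operates in function space and controls the inexact application of the transport operator by a posteriori estimates, which is not reflected here:

```python
import numpy as np

# Finite-dimensional analogue of the power iteration used to obtain an initial
# guess for the principal eigen-pair (illustrative only).
def power_iteration(apply_A, x0, tol=1e-10, max_iter=10_000):
    x = x0 / np.linalg.norm(x0)
    lam = 0.0
    for _ in range(max_iter):
        y = apply_A(x)                      # (possibly inexact) operator application
        lam_new = x @ y                     # Rayleigh-quotient estimate
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol * abs(lam_new):
            break
        lam = lam_new
    return lam_new, x

rng = np.random.default_rng(0)
A = rng.random((50, 50)) + 50 * np.eye(50)  # dominant, well-separated eigenvalue
lam, v = power_iteration(lambda z: A @ z, rng.random(50))
print(lam)
```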
Off-policy evaluation (OPE) aims to estimate the benefit of following a counterfactual sequence of actions, given data collected from executed sequences. However, existing OPE estimators often exhibit high bias and high variance in problems involving large, combinatorial action spaces. We investigate how to mitigate this issue using factored action spaces, i.e., expressing each action as a combination of independent sub-actions drawn from smaller action spaces. This approach facilitates a finer-grained analysis of how actions differ in their effects. In this work, we propose a new family of "decomposed" importance sampling (IS) estimators based on factored action spaces. Given certain assumptions on the underlying problem structure, we prove that the decomposed IS estimators have lower variance than their original non-decomposed versions while remaining unbiased. Through simulations, we empirically verify our theoretical results, probing the validity of various assumptions. Provided with a technique that can derive the action space factorisation for a given problem, our work shows that OPE can be improved "for free" by utilising this inherent problem structure.
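The following hedged sketch illustrates the factorisation that decomposed estimators exploit, using a hypothetical one-step problem with independent binary sub-actions and an additive reward; the paper's estimators handle the general sequential setting:

```python
import numpy as np

# Hypothetical setup: three independent binary sub-actions, additive reward.
rng = np.random.default_rng(1)
mu = np.array([0.5, 0.5, 0.5])          # behaviour policy: P(sub-action i = 1)
pi = np.array([0.8, 0.3, 0.6])          # target policy marginals
coef = np.array([1.0, -0.5, 2.0])       # per-sub-action reward contributions

n = 100_000
a = (rng.random((n, 3)) < mu).astype(float)          # logged joint actions
r = a @ coef + rng.normal(0, 0.1, n)                 # observed rewards

# Joint importance weight factorises into per-sub-action ratios.
ratio = np.where(a == 1, pi / mu, (1 - pi) / (1 - mu))
w = ratio.prod(axis=1)

print("standard IS:  ", (w * r).mean())              # weights the full reward
# Decomposed variant: weight each additive reward piece by its own ratio only.
r_parts = a * coef
print("decomposed IS:", (ratio * r_parts).sum(axis=1).mean())
# Both estimates are close to the target value 0.8*1 - 0.3*0.5 + 0.6*2 = 1.85,
# with the decomposed estimate exhibiting noticeably lower variance.
```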
Variance reduction is a crucial idea in Monte Carlo simulation, and the stochastic Lanczos quadrature method is a dedicated method for approximating the trace of a matrix function. Inspired by their advantages, we combine these two techniques to approximate the log-determinant of large-scale symmetric positive definite matrices. Key questions to be answered for such a method are how to construct or choose an appropriate projection subspace and how to derive a guaranteed theoretical analysis. This paper applies probabilistic approaches, including the projection-cost-preserving sketch and matrix concentration inequalities, to construct a suboptimal subspace. Furthermore, we provide insights on choosing the design parameters of the underlying algorithm by deriving the corresponding approximation error and probabilistic error estimates. Numerical experiments demonstrate our method's effectiveness and illustrate the quality of the derived error bounds.
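For background, the standard stochastic Lanczos quadrature estimator for the log-determinant of an SPD matrix $A \in \mathbb{R}^{n\times n}$ reads as follows; the paper's contribution is the variance-reduced variant built on a projection subspace, which is not shown here:

```latex
% Standard SLQ estimator (background only): draw N Rademacher vectors z_i, run m Lanczos
% steps with unit starting vectors v_i = z_i/\|z_i\|, and let \theta_k^{(i)} and \tau_k^{(i)}
% denote the Ritz values and the first components of the eigenvectors of the tridiagonal T_m.
\log\det(A) \;=\; \operatorname{tr}(\log A)
\;\approx\; \frac{n}{N}\sum_{i=1}^{N}\sum_{k=1}^{m} \bigl(\tau_k^{(i)}\bigr)^{2}\,\log\theta_k^{(i)} .
```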
In this paper we present a layered approach to the multi-agent control problem, decomposed into three stages, each building upon the results of the previous one. First, a high-level plan for a coarse abstraction of the system is computed, relying on parametric timed automata augmented with stopwatches, as they allow simplified dynamics of such systems to be modelled efficiently. In the second stage, an SMT formulation refines the high-level plan, which mainly handles the combinatorial aspects of the problem, into a more dynamically accurate solution. These stages are collectively referred to as the SWA-SMT solver. They are correct by construction but lack a crucial feature: they cannot be executed in real time. To overcome this, we use SWA-SMT solutions as the initial training dataset for our last stage, which aims at obtaining a neural network control policy. We use reinforcement learning to train the policy, and show that the initial dataset is crucial for the overall success of the method.