This paper considers correlation clustering on unweighted complete graphs. We give a combinatorial algorithm that returns a single clustering solution that is simultaneously $O(1)$-approximate for all $\ell_p$-norms of the disagreement vector. This shows that minimal sacrifice is needed to simultaneously optimize different norms of the disagreement vector. Our algorithm is the first combinatorial approximation algorithm for the $\ell_2$-norm objective, and more generally the first combinatorial algorithm for the $\ell_p$-norm objective for any $2 \leq p < \infty$. It is also faster than all previous algorithms that minimize the $\ell_p$-norm of the disagreement vector, running in time $O(n^\omega)$, where $\omega$ is the exponent of $n \times n$ matrix multiplication. When the maximum positive degree in the graph is at most $\Delta$, the running time improves to $O(n\Delta^2 \log n)$.
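For concreteness, the objective can be stated as follows (notation ours, not verbatim from the paper): given a clustering $\mathcal{C}$ of a complete graph whose edges are labeled ``+'' or ``$-$'', the disagreement vector records, for each vertex, the number of incident edges the clustering gets wrong,
\[
  y_u(\mathcal{C}) \;=\; \bigl|\{v : uv \text{ is a ``+'' edge},\ \mathcal{C}(u) \neq \mathcal{C}(v)\}\bigr| \;+\; \bigl|\{v : uv \text{ is a ``$-$'' edge},\ \mathcal{C}(u) = \mathcal{C}(v)\}\bigr|,
  \qquad
  \lVert y(\mathcal{C}) \rVert_p \;=\; \Bigl(\sum_u y_u(\mathcal{C})^p\Bigr)^{1/p}.
\]
Setting $p = 1$ recovers the classical total-disagreement objective, while $p = \infty$ minimizes the worst disagreement at any single vertex.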
Process mining (PM) aims to construct, from event logs, process maps that can help discover, automate, improve, and monitor organizational processes. Robotic process automation (RPA) uses software robots to perform some tasks usually executed by humans. It is usually difficult to determine what processes and steps to automate, especially with RPA, and PM is seen as one way to address this difficulty. This paper aims to assess the applicability of process mining in accelerating and improving the implementation of RPA, along with the challenges encountered throughout the project lifecycle. A systematic literature review was conducted to examine the approaches in which PM techniques were used to understand the as-is processes that can be automated with software robots. Seven databases were used to identify papers on this topic. A total of 32 papers, all published since 2018, were selected from 605 unique candidate papers and then analyzed. There is a steady increase in the number of publications in this domain, especially during the year 2022, which suggests a rising interest in the combined use of PM with RPA. The literature mainly focuses on the methods to record the events that occur at the level of user interactions with the application, and on the preprocessing methods needed to discover routines with the steps that can be automated. Important challenges remain in preprocessing such event logs, and many lifecycle steps of automation projects are weakly supported by existing approaches, suggesting corresponding research areas in need of further attention.
We present a self-supervised variational autoencoder (VAE) that jointly learns disentangled and dependent hidden factors, and then enhances disentangled representation learning with a self-supervised classifier that eliminates coupled representations in a contrastive manner. To this end, we introduce the Contrastive Copula VAE (C$^2$VAE), which requires neither prior knowledge about the data in the probabilistic principle nor strong modeling assumptions on the posterior in the neural architecture. C$^2$VAE simultaneously factorizes the posterior (via the evidence lower bound, ELBO) with a total correlation (TC)-driven decomposition to learn factorized disentangled representations, and extracts the dependencies between hidden features with a neural Gaussian copula to obtain copula-coupled representations. A self-supervised contrastive classifier then differentiates the disentangled representations from the coupled ones, where a contrastive loss regularizes this classification together with the TC loss, eliminating entangled factors and strengthening disentangled representations. C$^2$VAE substantially enhances disentangled representation learning. It further improves optimization by addressing the instability of TC-based VAEs and the trade-off between reconstruction and representation.
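For reference, the standard identity that TC-driven decomposition builds on (as in $\beta$-TCVAE; notation ours) splits the aggregated KL term of the ELBO into three parts, the middle one being the total correlation penalized here:
\[
  \mathbb{E}_{p(x)}\bigl[\mathrm{KL}\bigl(q(z \mid x) \,\|\, p(z)\bigr)\bigr]
  = \underbrace{I_q(x; z)}_{\text{index-code MI}}
  + \underbrace{\mathrm{KL}\Bigl(q(z) \,\Big\|\, \textstyle\prod_j q(z_j)\Bigr)}_{\text{total correlation (TC)}}
  + \underbrace{\textstyle\sum_j \mathrm{KL}\bigl(q(z_j) \,\|\, p(z_j)\bigr)}_{\text{dimension-wise KL}},
\]
where $q(z) = \mathbb{E}_{p(x)}[q(z \mid x)]$ is the aggregate posterior.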
We define an extension of the posterior predictive $p$-value to multiple test statistics and establish a bound on the frequency with which it takes small values under the assumption of model correctness. We argue that the conservativity of the posterior predictive $p$-value increases with model dimension, and we demonstrate the ability of the joint $p$-value to overcome this problem in many cases. We also compare the joint $p$-value to other alternative $p$-values designed to have higher power, and show that it can achieve similar performance for model rejection while maintaining more favorable computational and interpretive properties.
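For reference, the standard single-statistic posterior predictive $p$-value that the joint version generalizes is (notation ours)
\[
  p_T(y) \;=\; \Pr\bigl(T(y^{\mathrm{rep}}) \geq T(y) \,\big|\, y\bigr)
  \;=\; \int \Pr\bigl(T(y^{\mathrm{rep}}) \geq T(y) \,\big|\, \theta\bigr)\, \pi(\theta \mid y)\, d\theta,
\]
where $y^{\mathrm{rep}}$ is a replicate data set drawn from the posterior predictive distribution. The joint $p$-value replaces the scalar statistic $T$ with a vector $(T_1, \dots, T_k)$ of test statistics; the precise form of the extension follows the paper.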
The omnidirectional camera is a cost-effective and information-rich sensor that is highly suitable for many marine applications and for the ocean scientific community, encompassing domains such as augmented reality, mapping, motion estimation, visual surveillance, and simultaneous localization and mapping. However, designing and constructing a high-quality 360$^{\circ}$ real-time streaming camera system for underwater applications is challenging due to the technical complexity of several aspects, including sensor resolution, wide field of view, power supply, optical design, system calibration, and overheating management. This paper presents a novel and comprehensive system that addresses the complexities associated with the design, construction, and implementation of a fully functional 360$^{\circ}$ real-time streaming camera system specifically tailored for underwater environments. Our proposed system, UWA360CAM, can stream video in real time, operate 24/7, and capture 360$^{\circ}$ underwater panorama images. Notably, our work is the first to provide a detailed and replicable account of such a system. The experiments provide a comprehensive analysis of the proposed system.
In this paper, we present an advanced analysis of near-optimal deterministic algorithms that use a small space budget to solve frequency estimation, heavy hitters, frequent items, and top-$k$ approximation in the bounded deletion model. We define the family of SpaceSaving$\pm$ algorithms and explain why the original SpaceSaving$\pm$ algorithm only works when insertions and deletions are not interleaved. We then introduce the new DoubleSpaceSaving$\pm$ and IntegratedSpaceSaving$\pm$ algorithms and prove their correctness. Both extend the popular space-efficient SpaceSaving algorithm, yet they represent different trade-offs: DoubleSpaceSaving$\pm$ distributes the operations across two independent summaries, while IntegratedSpaceSaving$\pm$ fully synchronizes deletions with insertions. Since data streams are often skewed, we present an improved analysis of these two algorithms and show that their errors do not depend on the hot items, only on the cold and warm items. We also demonstrate how to achieve the relative error guarantee under mild assumptions. Moreover, we establish that both algorithms are mergeable, an important property in distributed settings.
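As a point of reference, here is a minimal sketch of the classical insert-only SpaceSaving summary that both variants extend; how the $\pm$ variants handle deletions follows the paper, not this sketch.

```python
class SpaceSaving:
    """Classical insert-only SpaceSaving summary (the baseline that
    DoubleSpaceSaving± and IntegratedSpaceSaving± build on).
    Keeps at most k counters; after n insertions, any estimate
    overshoots the item's true count by at most n/k."""

    def __init__(self, k):
        self.k = k
        self.counters = {}  # item -> estimated count

    def insert(self, item):
        if item in self.counters:
            self.counters[item] += 1
        elif len(self.counters) < self.k:
            self.counters[item] = 1
        else:
            # Evict the item with the minimum count and charge its count
            # (plus one) to the newcomer; this is the source of the
            # n/k overestimate.
            victim = min(self.counters, key=self.counters.get)
            self.counters[item] = self.counters.pop(victim) + 1

    def estimate(self, item):
        if item in self.counters:
            return self.counters[item]  # never underestimates
        # An untracked item's true count is at most the minimum counter.
        return min(self.counters.values(), default=0)


# Usage: summarize a skewed stream with 4 counters.
ss = SpaceSaving(k=4)
for x in "aabacadaeafagaha":
    ss.insert(x)
print(ss.estimate("a"))  # upper bound on the true count of 'a'
```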
In the Continuous Steiner Tree problem (CST), we are given as input a set of points (called terminals) in a metric space and ask for the minimum-cost tree connecting them. Additional points from the metric space (called Steiner points) can be introduced as nodes in the solution. In the Discrete Steiner Tree problem (DST), we are given, in addition to the terminals, a set of facilities, and any solution tree connecting the terminals may only contain Steiner points from this set of facilities. Trevisan [SICOMP'00] showed that CST and DST are APX-hard when the input lies in the $\ell_1$-metric (and Hamming metric). Chleb\'ik and Chleb\'ikov\'a [TCS'08] showed that DST is NP-hard to approximate to a factor of $96/95 \approx 1.01$ in the graph metric (and consequently the $\ell_\infty$-metric). Prior to this work, it was unclear whether CST and DST are APX-hard in essentially any other popular metric! In this work, we prove that DST is APX-hard in every $\ell_p$-metric. We also prove that CST is APX-hard in the $\ell_{\infty}$-metric. Finally, we relate CST and DST by giving a general reduction from CST to DST in $\ell_p$-metrics. As an immediate consequence, this yields a polynomial-time $1.39$-approximation algorithm for CST in $\ell_p$-metrics.
This paper is concerned with the expressivity and denotational semantics of a functional higher-order reversible programming language based on Theseus. In this language, pattern-matching is used to ensure the reversibility of functions. We show how one can encode any Reversible Turing Machine in this language. We then build a sound and adequate categorical semantics based on join inverse categories, with additional structure to capture pattern-matching. Finally, we derive a full completeness result, stating that any computable partial injective function is the image of a term in the language.
The persistent and switchable polarization of HfO$_2$-based ferroelectric compounds, which are compatible with large-scale integration, makes them attractive synaptic elements for neuromorphic computing. To achieve a record current density of 0.01 A/cm$^2$ (at a read voltage of 80 mV) as well as ideal memristive behavior (a linear current-voltage relation and analog resistive switching), devices based on an ultra-thin (2.7 nm thick), polycrystalline HfZrO$_4$ ferroelectric layer are fabricated by Atomic Layer Deposition. The use of a semiconducting oxide interlayer (WO$_{x<3}$) at one of the interfaces induces an asymmetric energy profile upon ferroelectric polarization reversal and thus the desired long-term potentiation / depression (conductance increase / decrease). Moreover, it favors the stable retention of both the low and the high resistive states. Thanks to the low operating voltage (<3.5 V), programming requires less than 10$^{-12}$ J for 20 ns long pulses. Remarkably, the memristors show no wake-up or fatigue effects.
It is well known that parallel manipulators are prone to singularities. However, there is still a lack of distance evaluation functions, referred to as metrics, for computing the distance between two 3-RPR configurations. We propose extrinsic metrics that take the combinatorial structure of the manipulator into account as well as different design options. Utilizing these metrics, we formulate constrained optimization problems aimed at identifying the closest singular configurations for a given non-singular configuration. The solution of the associated system of polynomial equations relies on algorithms from numerical algebraic geometry implemented in the software package \texttt{Bertini}. Furthermore, we have developed a computational pipeline for determining the distance to singularity during a one-parametric motion of the manipulator. To facilitate these computations for the user, an open-source interface is developed between the software packages \texttt{Maple}, \texttt{Bertini}, and \texttt{Paramotopy}. The effectiveness of the presented approach is demonstrated with numerical examples and compared with existing indices for evaluating singularity closeness.
Link prediction on knowledge graphs (KGs) is a key research topic. Previous work has mainly focused on binary relations, paying less attention to higher-arity relations even though they are ubiquitous in real-world KGs. This paper considers link prediction upon n-ary relational facts and proposes a graph-based approach to this task. The key to our approach is to represent the n-ary structure of a fact as a small heterogeneous graph and to model this graph with edge-biased fully-connected attention. The fully-connected attention captures universal inter-vertex interactions, while edge-aware attentive biases specifically encode the graph structure and its heterogeneity. In this fashion, our approach fully models global and local dependencies in each n-ary fact, and hence can more effectively capture the associations therein. Extensive evaluation verifies the effectiveness and superiority of our approach. It performs substantially and consistently better than the current state of the art across a variety of n-ary relational benchmarks. Our code is publicly available.
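A minimal sketch of the edge-biased fully-connected attention idea for a single head (NumPy; the per-pair scalar bias is a hypothetical parameterization looked up from edge types, and the paper's exact biasing scheme may differ):

```python
import numpy as np

def edge_biased_attention(X, edge_bias, W_q, W_k, W_v):
    """Single-head attention over the vertices of one n-ary fact's
    heterogeneous graph, with an additive edge-aware bias.

    X:         (n, d)  vertex embeddings
    edge_bias: (n, n)  scalar bias per vertex pair, e.g. looked up
                       from the edge type connecting the pair
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1]) + edge_bias  # bias injects graph structure
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V

# Toy usage: 4 vertices (e.g. relation, subject, object, attribute value).
rng = np.random.default_rng(0)
n, d = 4, 8
X = rng.normal(size=(n, d))
bias = rng.normal(size=(n, n))  # in practice, learned per edge type
out = edge_biased_attention(X, bias, *(0.1 * rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)  # (4, 8)
```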