We present a new approach leveraging the Sliding Frank--Wolfe algorithm to address the challenge of line recovery in degraded images. Building upon advances in conditional gradient methods for sparse inverse problems with differentiable measurement models, we propose two distinct models tailored to line detection tasks: blurred line deconvolution and ridge detection of linear chirps in spectrogram images.
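For illustration, the sketch below shows the generic shape of a Sliding Frank--Wolfe outer loop for sparse recovery with a differentiable atom: greedily add the atom most correlated with the current residual, then jointly refine ("slide") the amplitudes and positions by local descent. The atom model, candidate grid, and plain least-squares objective are illustrative placeholders, not the paper's measurement models or regularized objective.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of a (Sliding) Frank-Wolfe outer loop for sparse recovery:
# y ~ sum_k a_k * atom(theta_k), with a differentiable atom/forward model.
# The atom and its parameterisation below are illustrative placeholders.

def atom(theta, grid):
    """Example differentiable atom: a blurred 1-D 'line' (Gaussian bump)."""
    return np.exp(-0.5 * ((grid - theta) / 0.05) ** 2)

def sliding_frank_wolfe(y, grid, n_iter=10, cand=np.linspace(0, 1, 200)):
    thetas, amps = [], []
    for _ in range(n_iter):
        residual = y - sum(a * atom(t, grid) for a, t in zip(amps, thetas))
        # 1) Greedy step: add the atom most correlated with the residual.
        scores = [abs(atom(t, grid) @ residual) for t in cand]
        thetas.append(cand[int(np.argmax(scores))])
        amps.append(0.0)
        # 2) Sliding step: jointly refine amplitudes and positions.
        def loss(p):
            a, t = p[:len(thetas)], p[len(thetas):]
            model = sum(ai * atom(ti, grid) for ai, ti in zip(a, t))
            return 0.5 * np.sum((y - model) ** 2)
        p0 = np.concatenate([amps, thetas])
        p = minimize(loss, p0, method="L-BFGS-B").x
        amps, thetas = list(p[:len(thetas)]), list(p[len(thetas):])
    return np.array(amps), np.array(thetas)
```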
This system description introduces an enhancement to the Yices2 SMT solver, enabling it to reason over non-linear polynomial systems over finite fields. Our reasoning approach fits into the model-constructing satisfiability (MCSat) framework and is based on zero decomposition techniques, which find finite basis explanations for theory conflicts over finite fields. As the MCSat solver within Yices2 can support (and combine) several theories via theory plugins, we implemented our reasoning approach as a new plugin for finite fields and extended Yices2's frontend to parse finite field problems, making our implementation the first MCSat-based reasoning engine for finite fields. We present its evaluation on finite field benchmarks, comparing it against cvc5. Additionally, our work leverages the modular architecture of the MCSat solver in Yices2 to provide a foundation for the rapid implementation of further reasoning techniques for this theory.
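As a point of reference only (and not the zero-decomposition or MCSat reasoning described above), a naive brute-force satisfiability check for small polynomial systems over a prime field GF($p$) can serve as a cross-checking oracle:

```python
from itertools import product

# Naive baseline: exhaustively test whether a system of polynomial equations
# has a common root over the prime field GF(p). This is only a reference
# oracle for tiny instances, not the zero-decomposition / MCSat reasoning
# implemented in the Yices2 plugin.

def solve_gf(p, variables, equations):
    """equations: callables mapping an assignment dict to an int; a root
    is an assignment making every equation vanish mod p."""
    for values in product(range(p), repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(eq(assignment) % p == 0 for eq in equations):
            return assignment
    return None

# Example: x^2 + 1 = 0 and x + y = 3 over GF(5).
model = solve_gf(5, ["x", "y"],
                 [lambda a: a["x"] ** 2 + 1,
                  lambda a: a["x"] + a["y"] - 3])
print(model)  # {'x': 2, 'y': 1}
```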
This article discusses the implementation of a software joint velocity limitation dedicated to a Spherical Parallel Manipulator (SPM) with coaxial input shafts (CoSPM) using a speed control loop. The algorithm takes as input the current joint positions as well as the joint reference velocities computed by the speed controller and limits the latter in order to avoid any known singular configuration. This limitation takes into account the workspace properties of the mechanism and the physical characteristics of its actuators. In particular, we take advantage of the coaxiality of the input shafts of the CoSPM and the resulting unlimited bearing.
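Schematically, a limiter of the kind described above clamps the reference velocities to the actuator limits, predicts the next joint configuration, and scales the command down whenever the prediction would cross a singularity margin. The singularity measure, margin, and scaling rule in the sketch below are illustrative placeholders rather than the CoSPM-specific criteria of the article.

```python
import numpy as np

# Schematic joint-velocity limiter: given current joint positions q and the
# reference velocities qdot_ref from the speed controller, scale qdot_ref so
# that (i) actuator velocity limits are respected and (ii) the predicted next
# configuration does not cross a singularity margin. The singularity measure
# and margin below are illustrative placeholders.

def limit_joint_velocities(q, qdot_ref, dt, qdot_max, singularity_measure,
                           margin=0.05):
    qdot = np.clip(qdot_ref, -qdot_max, qdot_max)        # actuator limits
    if singularity_measure(q + dt * qdot) < margin:
        # Shrink the step until the predicted pose stays clear of singularity.
        scale = 1.0
        while scale > 0.0 and singularity_measure(q + dt * scale * qdot) < margin:
            scale -= 0.1
        qdot = max(scale, 0.0) * qdot
    return qdot

# Example (placeholder) singularity measure: |det(J)| of a diagonal Jacobian.
example_measure = lambda q: abs(np.linalg.det(np.diag(np.cos(q))))
```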
6G mobile networks are differentiated from 5G by two new usage scenarios: distributed sensing and edge AI. Their natural integration, termed integrated sensing and edge AI (ISEA), promises to create a platform for environment perception, intelligent decision making, and real-time action. A basic operation in ISEA is for a fusion center to acquire and fuse features of spatial sensing data distributed across many agents. To overcome the communication bottleneck caused by multiple access from numerous agents over hostile wireless channels, we propose a novel framework, called Spatial Over-the-Air Fusion (Spatial AirFusion), which exploits radio waveform superposition to aggregate spatially sparse features over the air. The technology is more sophisticated than conventional Over-the-Air Computing (AirComp) as it supports simultaneous aggregation over multiple voxels, which partition the 3D sensing region, and across multiple subcarriers. Its efficiency and robustness derive from exploiting both spatial feature sparsity and multiuser channel diversity to intelligently pair voxel-level aggregation tasks with subcarriers, maximizing the minimum receive SNR among voxels under instantaneous power constraints. To optimally solve the resulting mixed-integer Voxel-Carrier Pairing and Power Allocation (VoCa-PPA) problem, the proposed approach hinges on two useful results: (1) the optimal power allocation can be derived in closed form as a function of the voxel-carrier pairing, and (2) VoCa-PPA has a useful property that dramatically reduces the dimensionality of the solution space. Both a low-complexity greedy algorithm and an optimal tree-search-based approach are designed for VoCa-PPA. Extensive simulations using real datasets show that Spatial AirFusion achieves significant error reduction and accuracy improvement compared with conventional AirComp, which is unaware of spatial sparsity.
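The low-complexity greedy algorithm mentioned above can be illustrated with a simplified sketch: serve the weakest voxel first and give it the best unused subcarrier, so that the minimum receive SNR is lifted. Power is split equally here for brevity, whereas the paper derives the optimal closed-form power allocation as a function of the pairing; the channel-gain matrix and helper names are assumptions of this sketch.

```python
import numpy as np

# Illustrative greedy pairing of voxels to subcarriers to raise the minimum
# receive SNR. gains[v, c] is an effective (aggregated) channel gain for
# serving voxel v on subcarrier c; power is split equally here, unlike the
# paper's closed-form optimal allocation.

def greedy_voxel_carrier_pairing(gains, total_power):
    n_voxels, n_carriers = gains.shape
    power = total_power / n_voxels                 # simplification
    pairing, used = {}, set()
    # Serve the weakest voxel first so the minimum SNR is lifted.
    order = np.argsort(gains.max(axis=1))
    for v in order:
        best_c = max((c for c in range(n_carriers) if c not in used),
                     key=lambda c: gains[v, c])
        pairing[int(v)] = best_c
        used.add(best_c)
    min_snr = min(power * gains[v, c] for v, c in pairing.items())
    return pairing, min_snr

pairing, min_snr = greedy_voxel_carrier_pairing(np.random.rand(4, 8), 1.0)
```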
Due to the popularity of the FaaS programming model, there is now a wide variety of commercial and open-source FaaS systems. Hence, to compare different FaaS systems and their configuration options, FaaS application developers rely on FaaS benchmarking frameworks. Existing frameworks, however, tend to evaluate only single isolated aspects; a more holistic application-centric benchmarking framework is still missing. In previous work, we proposed BeFaaS, an extensible application-centric benchmarking framework for FaaS environments that focuses on the evaluation of FaaS platforms through realistic and typical examples of FaaS applications. In this extended paper, we (i) enhance our benchmarking framework with additional features for distributed FaaS setups, (ii) design application benchmarks reflecting typical FaaS use cases, and (iii) use them to run extensive experiments with commercial cloud FaaS platforms (AWS Lambda, Azure Functions, Google Cloud Functions) and the tinyFaaS edge serverless platform. BeFaaS now includes four FaaS application-centric benchmarks, is extensible for additional workload profiles and platforms, and supports federated benchmark runs in which the benchmark application is distributed over multiple FaaS systems while collecting fine-grained measurement results for drill-down analysis. Our experiment results show that (i) network transmission is a major contributor to response latency for function chains, (ii) this effect is exacerbated in hybrid edge-cloud deployments, (iii) the trigger delay between a published event and the start of the triggered function ranges from about 100 ms for AWS Lambda to 800 ms for Google Cloud Functions, and (iv) Azure Functions shows the best cold start behavior for our workloads.
We identify potential merits of faster-than-Nyquist (FTN) signaling in the finite blocklength (FBL) regime. A unique aspect of FTN signaling is that it can increase the blocklength by packing more data symbols into the same time and frequency, yielding a strictly higher number of independent signaling dimensions than Nyquist rate signaling. Using finite-blocklength information theory, we provide tight bounds on the maximum channel coding rate (MCCR) of FTN signaling for any finite time-bandwidth product. The merits fall into two operating regions of FTN, i.e., when the time-acceleration factor of FTN, $\tau$, is above or below a certain threshold $\tau_{0}$. When $\tau > \tau_{0}$, FTN has both higher channel capacity and higher MCCR than Nyquist rate signaling whenever the utilized pulse shape is non-sinc. Since the issues associated with the ideal sinc pulse are only exacerbated when packets are short, the benefit of FTN becomes more significant in the FBL regime. On the other hand, when $\tau < \tau_{0}$, the channel capacity is fixed but the MCCR of FTN can continue to increase to a certain degree, thereby reducing the gap between the capacity and the MCCR. This benefit is present regardless of the utilized pulse shape, including the ideal sinc pulse, and is unique to the FBL regime. Instead of increasing the MCCR for fixed block error rates, FTN can alternatively lower the block error rates for fixed channel coding rates. These results imply that FTN can lower the penalty of limited channel coding over short blocklengths and can improve the performance and reliability of short packet communications.
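The blocklength benefit described above can be made concrete with the standard normal approximation to the maximum channel coding rate at blocklength $n$ and block error rate $\epsilon$ (this is the textbook approximation, not the paper's exact bounds):
\[
  R^*(n,\epsilon) \approx C - \sqrt{\frac{V}{n}}\, Q^{-1}(\epsilon) + \frac{\log_2 n}{2n},
\]
where $C$ is the channel capacity and $V$ the channel dispersion. Since FTN with time-acceleration factor $\tau < 1$ packs roughly $n/\tau$ symbols into the same time-bandwidth product, the backoff term shrinks from $\sqrt{V/n}\,Q^{-1}(\epsilon)$ to $\sqrt{V\tau/n}\,Q^{-1}(\epsilon)$, which illustrates the FBL-specific gain described above.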
We propose a new notion of uniqueness for the adversarial Bayes classifier in the setting of binary classification. Analyzing this notion of uniqueness produces a simple procedure for computing all adversarial Bayes classifiers for a well-motivated family of one-dimensional data distributions. This characterization is then leveraged to show that, as the perturbation radius increases, certain notions of regularity improve for adversarial Bayes classifiers. We demonstrate with various examples that the boundary of the adversarial Bayes classifier frequently lies near the boundary of the Bayes classifier.
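For context, one common formulation of the object under study (the abstract does not spell out the paper's exact definitions, so this is only the standard setup): writing $A \subseteq \mathbb{R}^d$ for the region classified as class $1$, $P_0, P_1$ for the class-$0$ and class-$1$ measures, and $\epsilon$ for the perturbation radius, the adversarial classification risk is
\[
  R_\epsilon(A)
  = \int \sup_{\|x' - x\| \le \epsilon} \mathbf{1}_A(x') \, dP_0(x)
  + \int \sup_{\|x' - x\| \le \epsilon} \mathbf{1}_{A^c}(x') \, dP_1(x),
\]
and an adversarial Bayes classifier is any minimizer of $R_\epsilon$; at $\epsilon = 0$ this reduces to the usual Bayes classifier.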
FPGAs are rarely mentioned when discussing the implementation of large machine learning applications, such as Large Language Models (LLMs), in the data center. There is ample evidence that single FPGAs can be competitive with GPUs in performance for some computations, especially at low latency, and often much more efficient when power is considered. This suggests that there is merit to exploring the use of multiple FPGAs for large machine learning applications. The challenge with using multiple FPGAs is that there is no commonly accepted flow for developing and deploying multi-FPGA applications, i.e., there are no tools to describe a large application, map it to multiple FPGAs, and then deploy the application on a multi-FPGA platform. In this paper, we explore the feasibility of implementing large transformers using multiple FPGAs by developing a scalable multi-FPGA platform and some tools to map large applications to the platform. We validate our approach by designing an efficient multi-FPGA version of the I-BERT transformer and implementing one encoder using six FPGAs as a working proof-of-concept to show that our platform and tools work. Based on our proof-of-concept prototype and performance estimates for the latest FPGAs compared to GPUs, we conclude that there can be a place for FPGAs in the world of large machine learning applications. This is a promising first step showing that, with the right infrastructure and tools, it is reasonable to continue exploring the possible benefits of using FPGAs for applications such as LLMs.
This paper presents a novel approach for minimizing the number of teleportations in Distributed Quantum Computing (DQC) using formal methods. Quantum teleportation plays a major role in communicating quantum information. As such, it is desirable to perform as few teleportations as possible when distributing a quantum algorithm over a network of quantum machines. In contrast to most existing methods, which rely on graph-theoretic or heuristic search techniques, we propose a drastically different approach that minimizes the number of teleportations by utilizing formal methods. Specifically, the contributions of this paper include: the formal specification of the teleportation minimization problem in Alloy, the generalizability of the proposed Alloy specifications to quantum circuits with $n$-ary gates, the reusability of the Alloy specifications for different quantum circuits and networks, the simplicity of specifying and solving other problems such as load balancing and heterogeneity, and the compositionality of the proposed approach. We also develop a software tool, called qcAlloy, that takes as input the textual description of a quantum circuit, generates the corresponding Alloy model, and finally solves the minimization problem using the Alloy analyzer. We have experimentally evaluated qcAlloy on some of the circuits in the RevLib benchmark with more than 100 qubits and 1200 layers, and have demonstrated that qcAlloy outperforms one of the most efficient existing methods for most benchmark circuits in terms of minimizing the number of teleportations.
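The quantity being minimized can be illustrated independently of Alloy (the simplified Python sketch below is not the qcAlloy model): under a static assignment of qubits to machines, every gate whose operands span more than one machine requires at least one teleportation. The real problem also allows qubits to migrate between layers, so this count is only a crude upper bound.

```python
# Simplified cost model: with a *static* assignment of qubits to machines,
# every multi-qubit gate whose operands are split across machines requires at
# least one teleportation. qcAlloy's Alloy model additionally lets qubits
# migrate between layers; this is only a crude upper bound.

def count_nonlocal_gates(circuit, placement):
    """circuit: list of gates, each a tuple of qubit indices (n-ary gates ok);
    placement: dict qubit -> machine id."""
    return sum(1 for gate in circuit
               if len({placement[q] for q in gate}) > 1)

# Example: 4 qubits on 2 machines, three gates.
circuit = [(0, 1), (1, 2), (2, 3, 0)]
placement = {0: "m0", 1: "m0", 2: "m1", 3: "m1"}
print(count_nonlocal_gates(circuit, placement))  # 2
```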
The dominant NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications (e.g., sentiment classification, span-prediction-based question answering, or machine translation). However, it builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time. This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information. Moreover, it is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime. The first goal of this thesis is to characterize the different forms this shift can take in the context of natural language processing and to propose benchmarks and evaluation metrics to measure its effect on current deep learning architectures. We then proceed to take steps to mitigate the effect of distributional shift on NLP models. To this end, we develop methods based on parametric reformulations of the distributionally robust optimization framework. Empirically, we demonstrate that these approaches yield more robust models on a selection of realistic problems. In the third and final part of this thesis, we explore ways of efficiently adapting existing models to new domains or tasks. Our contribution to this topic takes inspiration from information geometry to derive a new gradient update rule which alleviates catastrophic forgetting issues during adaptation.
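As a reference point for the robust-optimization part (and explicitly not the parametric reformulation developed in the thesis), a generic KL-constrained DRO objective can be sketched by exponentially upweighting high-loss examples:

```python
import torch

# Generic (non-parametric) KL-constrained DRO objective as a point of
# reference: upweight high-loss examples via an exponential tilt. The thesis
# proposes *parametric* reformulations; this sketch only illustrates the
# underlying robust-optimization idea. 'temperature' controls how aggressive
# the reweighting (i.e., the guarded-against shift) is.

def kl_dro_loss(per_example_losses: torch.Tensor, temperature: float = 1.0):
    weights = torch.softmax(per_example_losses.detach() / temperature, dim=0)
    return (weights * per_example_losses).sum()

losses = torch.tensor([0.2, 1.5, 0.4], requires_grad=True)
kl_dro_loss(losses).backward()   # gradient mass concentrates on the hard example
```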
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirements, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses the methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
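As a concrete instance of the first category (parameter quantization), below is a minimal sketch of symmetric post-training int8 weight quantization; it is a generic textbook example rather than a method from any specific surveyed paper.

```python
import numpy as np

# Minimal symmetric post-training quantization of a weight tensor to int8.
# Storing int8 weights plus one float scale cuts memory roughly 4x versus
# float32; inference dequantizes (or uses integer kernels) on the fly.

def quantize_int8(weights: np.ndarray):
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.max(np.abs(w - dequantize(q, s))))
```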