We present a novel approach to predicting the pressure and flow rate of flexible electrohydrodynamic (EHD) pumps using the Kolmogorov-Arnold Network (KAN). Inspired by the Kolmogorov-Arnold representation theorem, KAN replaces fixed activation functions with learnable spline-based activation functions, enabling it to approximate complex nonlinear functions more effectively than traditional models such as the Multi-Layer Perceptron (MLP) and Random Forest (RF). We evaluated KAN on a dataset of flexible EHD pump parameters and compared its performance against RF and MLP models. KAN achieved superior predictive accuracy, with Mean Squared Errors of 12.186 and 0.001 for pressure and flow rate predictions, respectively. The symbolic formulas extracted from KAN provided insights into the nonlinear relationships between input parameters and pump performance. These findings demonstrate that KAN offers exceptional accuracy and interpretability, making it a promising alternative for predictive modeling in electrohydrodynamic pumping.
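The core KAN idea can be sketched in a few lines. The following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: each input-output edge carries its own learnable univariate function, parameterized here with a Gaussian radial-basis expansion standing in for the B-splines used by KAN; the input dimension and training data are placeholders.

import torch
import torch.nn as nn

class KANLayer(nn.Module):
    """One KAN-style layer: every input-output edge carries a learnable
    univariate function, here a linear combination of Gaussian bumps
    (a stand-in for the B-spline parameterization used in KAN)."""
    def __init__(self, in_dim, out_dim, n_basis=8, x_min=-1.0, x_max=1.0):
        super().__init__()
        self.register_buffer("centers", torch.linspace(x_min, x_max, n_basis))
        self.width = (x_max - x_min) / n_basis
        # learnable coefficients: one set per (input, output) edge
        self.coef = nn.Parameter(0.1 * torch.randn(in_dim, out_dim, n_basis))

    def forward(self, x):                                 # x: (batch, in_dim)
        # Gaussian basis evaluated at each input coordinate
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)
        # output_j = sum_i f_{i->j}(x_i), with f_{i->j} = sum_k coef[i,j,k] * phi_k(x_i)
        return torch.einsum("bik,iok->bo", phi, self.coef)

# Two-layer KAN-style regressor for pump parameters -> pressure (placeholder data)
model = nn.Sequential(KANLayer(4, 8), KANLayer(8, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x, y = torch.randn(64, 4), torch.randn(64, 1)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()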
This paper analyzes the effect of adding a pre-decoder processing function to a receiver that contains a fixed mismatched decoder at the output of a discrete memoryless channel. We study properties of the symbolwise pre-processing function and show that it is a simple yet very powerful tool which makes it possible to obtain reliable transmission at a positive rate for almost every metric. We present lower and upper bounds on the capacity of a channel with mismatched decoding and symbolwise (scalar-to-scalar) pre-processing, and show that the optimal pre-processing function for random coding is deterministic. We also characterize achievable error exponents. Finally, we prove that a separation principle holds for vectorwise (vector-to-vector) pre-processing functions and, further, that deterministic functions maximize the reliably transmitted rate in this case.
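For concreteness, one standard achievable rate of this kind is the generalized mutual information (GMI) evaluated with the fixed decoding metric composed with the pre-processor; this is a textbook bound and not necessarily the exact quantity optimized in the paper:

\[
I_{\mathrm{GMI}}(Q,\varphi) \;=\; \sup_{s \ge 0}\,
\mathbb{E}\!\left[\log
\frac{q\bigl(X,\varphi(Y)\bigr)^{s}}
{\mathbb{E}_{X'}\bigl[q\bigl(X',\varphi(Y)\bigr)^{s}\,\big|\,Y\bigr]}
\right],
\]

where X ~ Q is the channel input, Y the channel output, X' an independent copy of X, q the fixed decoding metric, and \varphi the symbolwise pre-processing map; an achievable rate with pre-processing is then obtained by maximizing this expression over deterministic maps \varphi.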
This paper introduces a novel formulation of the clustering problem, namely the Minimum Sum-of-Squares Clustering of Infinitely Tall Data (MSSC-ITD), and presents HPClust, an innovative set of hybrid parallel approaches for its effective solution. By utilizing modern high-performance computing techniques, HPClust enhances key clustering metrics: effectiveness, computational efficiency, and scalability. In contrast to vanilla data parallelism, which only accelerates processing time through the MapReduce framework, our approach unlocks superior performance by leveraging multi-strategy competitive-cooperative parallelism and the intricate properties of the objective function landscape. Unlike other available algorithms that struggle to scale, our algorithm is inherently parallel, improving solution quality through increased scalability and parallelism, and outperforming even advanced algorithms designed for small and medium-sized datasets. Our evaluation of HPClust, featuring four parallel strategies, demonstrates its superiority over traditional and cutting-edge methods by offering better performance in the key metrics. These results also show that parallel processing not only enhances clustering efficiency but also clustering accuracy. Additionally, we explore the balance between computational efficiency and clustering quality, providing insights into optimal parallel strategies based on dataset specifics and resource availability. This research advances our understanding of parallelism in clustering algorithms, demonstrating that a judicious hybridization of advanced parallel approaches yields optimal results for MSSC-ITD. Experiments on synthetic data further confirm HPClust's exceptional scalability and robustness to noise.
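As a simplified illustration of the competitive side of this parallelism (the cooperative exchange of solutions in HPClust is not shown), the sketch below runs several independently configured k-means strategies in parallel and keeps the one with the smallest sum-of-squares objective; the data, cluster count, and strategy list are placeholders.

from multiprocessing import Pool

import numpy as np
from sklearn.cluster import KMeans

def run_strategy(args):
    """One competing strategy: k-means with its own seed and init scheme."""
    X, k, seed, init = args
    km = KMeans(n_clusters=k, init=init, n_init=1, random_state=seed).fit(X)
    return km.inertia_, km.cluster_centers_    # inertia_ = sum-of-squares objective

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 10))         # placeholder for a very "tall" dataset
    k = 25
    strategies = [(X, k, s, init) for s in range(8) for init in ("k-means++", "random")]
    with Pool() as pool:
        results = pool.map(run_strategy, strategies)
    best_sse, best_centers = min(results, key=lambda r: r[0])
    print(f"best MSSC objective: {best_sse:.1f}")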
We present a novel speaker-independent acoustic-to-articulatory inversion (AAI) model, overcoming the limitations observed in conventional AAI models that rely on acoustic features derived from restricted datasets. To address these challenges, we leverage representations from a pre-trained self-supervised learning (SSL) model to more effectively estimate the global, local, and kinematic pattern information in Electromagnetic Articulography (EMA) signals during the AAI process. We train our model using an adversarial approach and introduce an attention-based multi-duration phoneme discriminator (MDPD) designed to fully capture the intricate relationship among multi-channel articulatory signals. Our method achieves a Pearson correlation coefficient of 0.847, marking state-of-the-art performance among speaker-independent AAI models. The implementation details and code can be found online.
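The reported evaluation metric can be made concrete with a short, hypothetical sketch that computes the channel-averaged Pearson correlation between predicted and measured EMA trajectories; the channel count and data below are placeholders.

import numpy as np

def channelwise_pcc(pred, target):
    """Pearson correlation per EMA channel, averaged over channels.
    pred, target: arrays of shape (frames, channels)."""
    pccs = []
    for c in range(pred.shape[1]):
        pccs.append(np.corrcoef(pred[:, c], target[:, c])[0, 1])
    return float(np.mean(pccs))

# toy check: a lightly perturbed copy of a trajectory yields a high PCC
rng = np.random.default_rng(0)
ema = rng.normal(size=(500, 12))               # 12 articulatory channels (placeholder)
print(channelwise_pcc(ema + 0.1 * rng.normal(size=ema.shape), ema))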
This paper presents novel techniques for improving the error correction performance and reducing the complexity of coarsely quantized 5G-LDPC decoders. The proposed decoder design supports arbitrary message-passing schedules on a base-matrix level by modeling exchanged messages with entry-specific discrete random variables. Variable nodes (VNs) and check nodes (CNs) involve compression operations designed using the information bottleneck method to maximize preserved mutual information between code bits and quantized messages. We introduce alignment regions that assign the messages to groups with aligned reliability levels to decrease the number of individual design parameters. Group compositions with degree-specific separation of messages improve performance by up to 0.4 dB. Further, we generalize our recently proposed CN-aware quantizer design to irregular LDPC codes and layered schedules. The method optimizes the VN quantizer to maximize preserved mutual information at the output of the subsequent CN update, enhancing performance by up to 0.2 dB. A schedule optimization modifies the order of layer updates, reducing the average iteration count by up to 35%. We integrate all new techniques in a rate-compatible decoder design by extending the alignment regions along the rate dimension. Our complexity analysis for 2-bit decoding estimates up to 64% higher throughput versus 4-bit decoding at similar performance.
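The "maximize preserved mutual information" criterion behind these compression operations can be illustrated with a small, hypothetical sketch: given a joint pmf of a code bit and finely discretized LLR cells, it scores candidate quantizer boundaries by the mutual information they preserve. The information bottleneck design in the paper constructs these mappings far more systematically; everything below is a toy.

import numpy as np

def mutual_information(p_bt):
    """I(B;T) in bits from a joint pmf over code bit B and quantizer output T."""
    p_b = p_bt.sum(axis=1, keepdims=True)
    p_t = p_bt.sum(axis=0, keepdims=True)
    mask = p_bt > 0
    return float((p_bt[mask] * np.log2(p_bt[mask] / (p_b @ p_t)[mask])).sum())

def preserved_info(p_bl, boundaries):
    """Merge fine LLR cells into quantizer regions given boundary indices
    and return the mutual information preserved by that quantizer."""
    edges = [0, *boundaries, p_bl.shape[1]]
    p_bt = np.stack([p_bl[:, a:b].sum(axis=1) for a, b in zip(edges, edges[1:])], axis=1)
    return mutual_information(p_bt)

# toy joint pmf of one code bit and 16 fine LLR cells; 2-bit quantizer (4 regions)
rng = np.random.default_rng(1)
p_bl = rng.random((2, 16)); p_bl /= p_bl.sum()
best = max(((preserved_info(p_bl, b), b)
            for b in [(4, 8, 12), (3, 8, 13), (2, 8, 14)]), key=lambda x: x[0])
print("best candidate boundaries:", best)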
I show how to express the question of whether a polyform tiles the plane isohedrally as a Boolean formula that can be tested using a SAT solver. This approach is adaptable to a wide range of polyforms, requires no special-case code for different isohedral tiling types, and integrates seamlessly with existing software for computing Heesch numbers of polyforms.
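The overall pattern (translate the tiling condition into clauses and hand them to an off-the-shelf solver) looks roughly as follows. This hypothetical sketch uses the python-sat package and a made-up coverage table with simple exactly-one constraints; the actual isohedral encoding is based on boundary criteria and is considerably richer.

from itertools import combinations
from pysat.solvers import Glucose3

# Variables x_p = "a copy of the polyform is placed at candidate position p".
# Toy coverage table (made up): cell -> placement variables that cover it.
covers = {0: [1, 2], 1: [2, 3], 2: [3, 4]}

solver = Glucose3()
for cell, lits in covers.items():
    solver.add_clause(lits)                 # at least one placement covers the cell
    for a, b in combinations(lits, 2):
        solver.add_clause([-a, -b])         # at most one does (exactly-one overall)

if solver.solve():
    print("satisfiable, chosen placements:", [v for v in solver.get_model() if v > 0])
else:
    print("no tiling under these constraints")
solver.delete()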
We study the asymptotic frequentist coverage of credible sets based on a novel Bayesian approach for a multiple linear regression model under variable selection. We initially ignore the issue of variable selection, which allows us to put a conjugate normal prior on the coefficient vector. The variable selection step is incorporated directly in the posterior through a sparsity-inducing map and uses the induced prior for making an inference instead of the natural conjugate posterior. The sparsity-inducing map minimizes the sum of the squared l2-distance weighted by the data matrix and a suitably scaled l1-penalty term. We obtain the limiting coverage of various credible regions and demonstrate that a modified credible interval for a component has the exact asymptotic frequentist coverage if the corresponding predictor is asymptotically uncorrelated with other predictors. Through extensive simulation, we provide a guideline for choosing the penalty parameter as a function of the credibility level appropriate for the corresponding coverage. We also present finite-sample numerical results that support the conclusions from the asymptotic theory. Finally, we provide the R package credInt, which implements the method and returns the credible intervals along with the posterior samples.
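One plausible reading of the sparsity-inducing map is a lasso-type projection applied to each conjugate-posterior draw, which can be sketched as follows; the penalty level, design matrix, and draws are placeholders, and this is not necessarily the paper's exact formulation.

import numpy as np
from sklearn.linear_model import Lasso

def sparsify_draws(X, draws, lam):
    """Map each conjugate-posterior draw beta to
    argmin_theta ||X beta - X theta||^2 / (2n) + lam * ||theta||_1 ."""
    sparse = []
    for beta in draws:
        fit = Lasso(alpha=lam, fit_intercept=False).fit(X, X @ beta)
        sparse.append(fit.coef_)
    return np.asarray(sparse)

# toy example: credible interval for one coefficient from the induced posterior
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
draws = rng.normal(size=(1000, p))                    # placeholder posterior draws
induced = sparsify_draws(X, draws, lam=0.05)
lo, hi = np.quantile(induced[:, 0], [0.025, 0.975])   # 95% interval, first component
print(lo, hi)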
This paper introduces a "proof of concept" for a new approach to assistive robotics, integrating edge computing with Natural Language Processing (NLP) and computer vision to enhance the interaction between humans and robotic systems. Our "proof of concept" demonstrates the feasibility of using large language models (LLMs) and vision systems in tandem for interpreting and executing complex commands conveyed through natural language. This integration aims to improve the intuitiveness and accessibility of assistive robotic systems, making them more adaptable to the nuanced needs of users with disabilities. By leveraging the capabilities of edge computing, our system has the potential to minimize latency and support offline capability, enhancing the autonomy and responsiveness of assistive robots. Experimental results from our implementation on a robotic arm show promising outcomes in terms of accurate intent interpretation and object manipulation based on verbal commands. This research lays the groundwork for future developments in assistive robotics, focusing on creating highly responsive, user-centric systems that can significantly improve the quality of life for individuals with disabilities.
This paper introduces a novel method for the stability analysis of positive feedback systems with a class of fully connected feedforward neural network (FFNN) controllers. By establishing sector bounds for fully connected FFNNs without biases, we present a stability theorem that demonstrates the global exponential stability of linear systems under fully connected FFNN control. Utilizing principles from positive Lur'e systems and the positive Aizerman conjecture, our approach effectively addresses the challenge of ensuring stability in highly nonlinear systems. The crux of our method lies in maintaining sector bounds that preserve the positivity and Hurwitz property of the overall Lur'e system. We showcase the practical applicability of our methodology through its implementation in a linear system managed by an FFNN trained on output feedback controller data, highlighting its potential for enhancing stability in dynamic systems.
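For reference, the standard Lur'e setup this analysis builds on can be sketched as follows (a hedged sketch, not the paper's exact theorem statement):

\[
\dot{x} = Ax + B\,\phi(Cx), \qquad 0 \le \frac{\phi_i(u)}{u} \le k_i \;\;\text{for all } u \ne 0,
\]

where \phi stands in for the bias-free fully connected FFNN controller and the k_i collect its sector bounds. In the positive-Aizerman spirit, if A is Metzler and A + B\,\mathrm{diag}(k)\,C is both Metzler and Hurwitz, global exponential stability of the Lur'e interconnection can be concluded.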
Optical imaging and sensing systems based on diffractive elements have seen massive advances over the last several decades. Earlier generations of diffractive optical processors were, in general, designed to deliver information to an independent system that was separately optimized, primarily driven by human vision or perception. With the recent advances in deep learning and digital neural networks, there have been efforts to establish diffractive processors that are jointly optimized with digital neural networks serving as their back-end. These jointly optimized hybrid (optical+digital) processors establish a new "diffractive language" between input electromagnetic waves that carry analog information and neural networks that process the digitized information at the back-end, providing the best of both worlds. Such hybrid designs can process spatially and temporally coherent, partially coherent, or incoherent input waves, providing universal coverage for any spatially varying set of point spread functions that can be optimized for a given task, executed in collaboration with digital neural networks. In this article, we highlight the utility of this exciting collaboration between engineered and programmed diffraction and digital neural networks for a diverse range of applications. We survey some of the major innovations enabled by the push-pull relationship between analog wave processing and digital neural networks, also covering the significant benefits that could be reaped through the synergy between these two complementary paradigms.
This paper introduces a novel solution to the manual control challenge for indoor blimps. The problem's complexity arises from the conflicting demands of executing human commands while maintaining stability through automatic control for underactuated robots. To tackle this challenge, we introduce an assisted-piloting hybrid controller with a preemptive mechanism that seamlessly switches between executing human commands and activating automatic stabilization control. Our algorithm ensures that the automatic stabilization controller operates within the time delay between human observation and perception, providing assistance to the driver in a way that remains imperceptible.
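The preemptive switching logic can be sketched as follows; the class name, reaction-delay value, and stabilizer are hypothetical placeholders rather than the paper's implementation.

import time

class AssistedPilot:
    """Hybrid controller sketch: pass human commands through, and fall back to
    an automatic stabilization controller whenever the pilot has been idle
    for longer than a (placeholder) human-reaction-delay window."""
    def __init__(self, stabilizer, reaction_delay=0.25):
        self.stabilizer = stabilizer           # callable: state -> control
        self.reaction_delay = reaction_delay   # seconds; placeholder value
        self.last_command_time = float("-inf")
        self.last_command = None

    def on_human_command(self, command):
        self.last_command = command
        self.last_command_time = time.monotonic()

    def control(self, state):
        idle = time.monotonic() - self.last_command_time
        if idle < self.reaction_delay and self.last_command is not None:
            return self.last_command           # execute the pilot's command
        return self.stabilizer(state)          # preempt with stabilization control

# usage: pilot = AssistedPilot(stabilizer=lambda s: -0.5 * s); u = pilot.control(state)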