This paper introduces a full-system modeling strategy for a syringe pump and soft pneumatic actuators (SPAs). The soft actuator is conceptualized as a beam structure and described by a second-order bending model. The natural-frequency equation is derived from Euler-Bernoulli bending theory, while the damping ratio is estimated by fitting step responses of the soft pneumatic actuators. An evaluation of model uncertainty underscores the robustness of our modeling methodology. To validate the approach, we apply it to four prototypes that differ in their dimensional parameters. Furthermore, a syringe pump is designed to drive the actuator, and a pressure model is proposed to complete the full-system model. Using this model, a Linear-Quadratic Regulator (LQR) controller is implemented to control the soft actuator, achieving fast responses and high accuracy in both step-response and square-wave tracking tests. Both the modeling method and the LQR controller are thoroughly evaluated through experiments. Finally, a gripper consisting of two actuators with a feedback controller demonstrates stable grasping of delicate objects with a significantly higher success rate.
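To make the beam-based model concrete, the sketch below computes the first natural frequency of a clamped-free (cantilever) beam from Euler-Bernoulli theory and evaluates the corresponding second-order unit-step response. All geometric and material values, and the damping ratio, are illustrative placeholders rather than values from the paper; in practice the damping ratio would be fitted to measured step responses as described above.

import numpy as np

# Illustrative actuator-as-beam parameters (assumed placeholders)
E = 2.0e6          # Young's modulus of the elastomer [Pa]
rho = 1.1e3        # density [kg/m^3]
L = 0.10           # beam length [m]
b, h = 0.02, 0.01  # rectangular cross-section width and height [m]

A = b * h            # cross-sectional area [m^2]
I = b * h**3 / 12.0  # second moment of area [m^4]

# First clamped-free (cantilever) mode from Euler-Bernoulli theory: beta*L ~ 1.875
beta_L = 1.8751
omega_n = (beta_L**2 / L**2) * np.sqrt(E * I / (rho * A))  # [rad/s]
f_n = omega_n / (2 * np.pi)

# Second-order bending model G(s) = omega_n^2 / (s^2 + 2*zeta*omega_n*s + omega_n^2);
# the damping ratio zeta would be fitted from measured step responses.
zeta = 0.3  # assumed damping ratio
t = np.linspace(0.0, 0.5, 1000)
wd = omega_n * np.sqrt(1 - zeta**2)
step = 1 - np.exp(-zeta * omega_n * t) * (
    np.cos(wd * t) + zeta / np.sqrt(1 - zeta**2) * np.sin(wd * t))

print(f"first natural frequency ~ {f_n:.1f} Hz, step response endpoint {step[-1]:.3f}")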
We study a higher-order surface finite-element (SFEM) penalty-based discretization of the tangential surface Stokes problem. Several discrete formulations are investigated that are equivalent in the continuous setting. We discuss how the choice of discretization of the diffusion term and of the divergence term affects numerical accuracy and convergence, as well as implementation. The inf-sup stability of the discrete scheme is analyzed in a generic way by lifting stable finite-element pairs known from the literature. A discretization error analysis in tangential norms then shows optimal-order convergence in an isogeometric setting that requires only geometric knowledge of the discrete surface.
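For orientation, penalty-based schemes of this kind typically work with a full three-component velocity field and weakly suppress its normal component instead of enforcing tangentiality exactly. A schematic weak formulation (a generic sketch under this assumption, not necessarily the exact scheme analyzed here) reads, in LaTeX:

% Schematic penalty formulation: find (u_h, p_h) in V_h x Q_h such that
% for all test functions (v_h, q_h) in V_h x Q_h:
\begin{aligned}
  a_h(u_h, v_h) \;+\; \eta \int_{\Gamma_h} (u_h \cdot \tilde n_h)\,(v_h \cdot \tilde n_h)\,\mathrm{d}s_h \;+\; b_h(v_h, p_h) &= \int_{\Gamma_h} f_h \cdot v_h \,\mathrm{d}s_h, \\
  b_h(u_h, q_h) &= 0,
\end{aligned}

where $a_h$ denotes the chosen discretization of the surface diffusion term, $b_h$ that of the divergence term, $\tilde n_h$ an approximate surface normal, and $\eta$ a penalty parameter scaled with the mesh size and polynomial degree.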
The Image Captioning (IC) technique is widely used to describe images in natural language. Recently, several IC system testing methods have been proposed. However, these methods have three shortcomings. First, they still rely on pre-annotated information and hence cannot truly alleviate the oracle problem in testing. Second, they artificially manipulate objects, which may generate unrealistic images as test cases and thus lead to less meaningful testing results. Third, they impose various eligibility requirements on source test cases and hence cannot fully utilize the given images for testing. To tackle these issues, we propose REIC, which performs metamorphic testing for IC systems with image-level reduction transformations such as image cropping and stretching. Instead of relying on pre-annotated information, REIC uses a localization method to align objects in the caption with the corresponding objects in the image, and checks whether each object is correctly described or deleted in the caption after transformation. Because the image-level reduction transformations do not artificially manipulate any objects, REIC avoids generating unrealistic follow-up images. It also eliminates the eligibility requirements on source test cases in the metamorphic transformation process, reduces ambiguity, and increases diversity among the follow-up test cases, which enables testing to be performed on any test image and reveals more distinct valid violations. We apply REIC to test five popular IC systems. The results demonstrate that REIC sufficiently leverages the provided test images to generate realistic follow-up cases and effectively detects a large number of distinct violations, without requiring any pre-annotated information.
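A minimal sketch of a reduction-based metamorphic check in the spirit of REIC is given below. The captioning call and the object localizer are hypothetical stand-ins passed in as callables, and the crop ratio is an arbitrary illustrative choice; the check flags a violation when an object that survives the crop disappears from the caption, or when a fully cropped-out object is still described.

from PIL import Image

def crop_follow_up(image, keep_ratio=0.7):
    # Image-level reduction: keep only the left portion of the source image.
    w, h = image.size
    return image.crop((0, 0, int(w * keep_ratio), h))

def check_violations(source, caption_image, locate_objects):
    # caption_image: the IC system under test (PIL image -> caption string).
    # locate_objects: hypothetical localizer (image, caption -> {name: (x0, y0, x1, y1)}).
    follow_up = crop_follow_up(source)
    src_caption = caption_image(source)
    fu_caption = caption_image(follow_up).lower()

    src_objects = locate_objects(source, src_caption)
    fu_width = follow_up.size[0]

    violations = []
    for name, (x0, y0, x1, y1) in src_objects.items():
        mentioned = name.lower() in fu_caption
        if x1 <= fu_width and not mentioned:
            violations.append(f"'{name}' kept in the image but missing from the caption")
        if x0 >= fu_width and mentioned:
            violations.append(f"'{name}' cropped out but still described")
    return violations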
This paper presents an effective method for identifying elephant rumbles in infrasonic seismic signals. The design and implementation of electronic circuitry to amplify, filter, and digitize the seismic signals captured through geophones are presented. A dataset of infrasonic seismic elephant rumbles was collected in a free-ranging area of an elephant orphanage in Sri Lanka. The seismic rumbles were converted to spectrograms, and several methods were used for spectral feature extraction. Using LazyPredict, the features extracted by each method were fed into a range of machine-learning algorithms, which were trained for automatic seismic rumble identification. We found that Mel frequency cepstral coefficients (MFCCs) together with a Ridge classifier produced the best performance in identifying seismic elephant rumbles. A novel method for denoising the spectrum, which leads to enhanced accuracy in identifying seismic rumbles, is also presented.
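The reported best pipeline, MFCC features with a Ridge classifier, can be approximated with off-the-shelf tools as sketched below. The sampling rate, MFCC settings, and the (wav_path, label) clip list are illustrative assumptions rather than the paper's exact configuration.

import numpy as np
import librosa
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

def mfcc_features(path, sr=1000, n_mfcc=13):
    # Geophone rumbles are low-frequency, so a low sampling rate is assumed here.
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Summarize over time so every clip yields a fixed-length feature vector.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_rumble_classifier(clips):
    # clips: list of (wav_path, label) pairs; label 1 = seismic elephant rumble.
    X = np.stack([mfcc_features(p) for p, _ in clips])
    y = np.array([label for _, label in clips])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=0)
    clf = RidgeClassifier().fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    return clf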
Visual inspection is a crucial yet time-consuming task across various industries. Many established methods employ machine learning for inspection tasks and require specific training data, such as predefined inspection poses and training images, for model training. Acquiring such data and integrating it into an inspection framework is challenging due to the variety of objects and scenes involved and the additional bottleneck of manual training-data collection, which hinders the automation of visual inspection across diverse domains. This work proposes a solution for automatic path planning using a single depth camera mounted on a robot manipulator. Point clouds obtained from the depth images are processed and filtered to extract object profiles and are then transformed into inspection target paths for the robot end-effector. The approach relies on the geometry of the object and generates an inspection path that follows its shape along the surface normals. Depending on the object's size and shape, inspection paths can be defined as single- or multi-path plans. Results are demonstrated in both simulated and real-world environments, yielding promising inspection paths for objects of varying sizes and shapes. Code and video are available open source at: //github.com/CuriousLad1000/Auto-Path-Planner
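A rough sketch of the geometry-driven idea, estimating surface normals from the captured point cloud and placing inspection waypoints offset along those normals, is shown below using Open3D. The filtering parameters and standoff distance are illustrative assumptions, and ordering the waypoints into single- or multi-path plans is left to downstream planning.

import numpy as np
import open3d as o3d

def plan_inspection_waypoints(cloud_path, standoff=0.10):
    # standoff: assumed camera-to-surface distance for inspection [m].
    pcd = o3d.io.read_point_cloud(cloud_path)
    pcd = pcd.voxel_down_sample(voxel_size=0.005)                 # thin the cloud
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
    pcd.orient_normals_consistent_tangent_plane(k=30)

    points = np.asarray(pcd.points)
    normals = np.asarray(pcd.normals)

    # Each waypoint sits off the surface along the normal, looking back at it.
    waypoints = points + standoff * normals
    view_directions = -normals
    return waypoints, view_directions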
Multifunctional metamaterials (MMMs) hold promise as next-generation material platforms supporting miniaturization and customization. Despite many proof-of-concept demonstrations and the proliferation of deep-learning-assisted design, grand challenges in the inverse design of MMMs, especially those involving heterogeneous fields possibly subject to either mutual meta-atom coupling or long-range interactions, remain largely under-explored. To this end, we present a data-driven design framework that streamlines the inverse design of MMMs involving heterogeneous fields. A core enabler is the implicit Fourier neural operator (IFNO), which predicts heterogeneous fields distributed across a metamaterial array, in general at odds with homogenization assumptions, in a parameter- and sample-efficient fashion. Additionally, we propose a standard formulation of the inverse problem covering a broad class of MMMs, and a gradient-based multitask concurrent optimization that identifies a set of Pareto-optimal architecture-stimulus (A-S) pairs. Fourier multiclass blending is proposed to synthesize inter-class meta-atoms anchored on a set of geometric motifs, while enjoying training-free dimension reduction and built-in reconstruction. Interlocking the three pillars, the framework is validated on light-by-light programmable plasmonic nanoantennas, whose design involves a vast space jointly spanned by quasi-freeform supercells, maneuverable incident phase distributions, and conflicting figures of merit involving on-demand localization patterns. Accommodating all of these challenges without a priori simplifications, our framework could propel future advancements of MMMs.
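For readers unfamiliar with the operator-learning backbone, the sketch below shows the kind of spectral layer that Fourier neural operators are built from; the implicit FNO (IFNO) used here adds implicit, layer-shared iterations on top of such layers. Channel counts, mode truncation, and the 1D setting are illustrative choices, not the paper's configuration.

import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    # Multiplies the lowest Fourier modes of the input by learned complex weights.
    def __init__(self, in_ch, out_ch, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (in_ch * out_ch)
        self.weight = nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes, dtype=torch.cfloat))

    def forward(self, x):                        # x: (batch, in_ch, n)
        x_ft = torch.fft.rfft(x)                 # (batch, in_ch, n // 2 + 1)
        out_ft = torch.zeros(x.size(0), self.weight.size(1), x_ft.size(-1),
                             dtype=torch.cfloat, device=x.device)
        out_ft[..., :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[..., :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))   # back to physical space

layer = SpectralConv1d(in_ch=4, out_ch=4, modes=16)
field = layer(torch.randn(8, 4, 128))            # a field sampled on 128 grid points
print(field.shape)                               # torch.Size([8, 4, 128])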
Our study assesses the adversarial robustness of LiDAR-camera fusion models in 3D object detection. We introduce an attack technique that, by adding only a limited number of physically constrained adversarial points above a car, can make the car undetectable by the fusion model. Experimental results reveal that even without changes to the image data channel, the fusion model can be deceived solely by manipulating the LiDAR data channel. This finding raises safety concerns in the field of autonomous driving. Further, we explore how the number of adversarial points, the distance between the target car in front and the LiDAR-equipped car, and various angular factors affect the attack success rate. We believe our research can contribute to the understanding of multi-sensor robustness and offer insights and guidance for enhancing the safety of autonomous driving.
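The attack setting can be illustrated with the sketch below: a handful of adversarial points, physically constrained to a thin slab above the target car, are appended to the LiDAR scan and tuned to suppress the detection while the image channel is left untouched. The detection_score callable is a hypothetical stand-in for the fusion model's confidence on the target car, and the simple random search is only illustrative of the optimization loop.

import numpy as np

def attack_lidar(scan, car_box, detection_score, n_points=30, iters=200, seed=0):
    # scan: (N, 3) LiDAR points; car_box: (2, 3) min/max corners of the target car.
    # detection_score: hypothetical callable mapping a point cloud to the model's
    # confidence for the target car (lower means the attack is working).
    rng = np.random.default_rng(seed)
    lo, hi = car_box
    slab_lo = np.array([lo[0], lo[1], hi[2]])          # just above the roof
    slab_hi = np.array([hi[0], hi[1], hi[2] + 0.5])    # 0.5 m of headroom (assumed)

    best_pts = rng.uniform(slab_lo, slab_hi, size=(n_points, 3))
    best_score = detection_score(np.vstack([scan, best_pts]))

    for _ in range(iters):                             # simple gradient-free search
        cand = np.clip(best_pts + rng.normal(0.0, 0.05, best_pts.shape),
                       slab_lo, slab_hi)
        score = detection_score(np.vstack([scan, cand]))
        if score < best_score:
            best_pts, best_score = cand, score
    return np.vstack([scan, best_pts])                 # perturbed LiDAR scan only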
This paper aims to define, quantify, and analyze the feature complexity learned by a DNN. We propose a generic definition of feature complexity. Given the feature of a certain layer in the DNN, our method disentangles feature components of different complexity orders from the feature. We further design a set of metrics to evaluate the reliability, the effectiveness, and the significance of over-fitting of these feature components. Furthermore, we discover a close relationship between feature complexity and the performance of DNNs. As a generic mathematical tool, the feature complexity and the proposed metrics can also be used to analyze the success of network compression and knowledge distillation.
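As a hedged illustration of how complexity orders might be disentangled in practice (an assumption for exposition, not necessarily the paper's exact construction), one can approximate a target feature with disentangler networks of increasing depth and read off the increment contributed by each additional depth as the component of that complexity order.

import torch
import torch.nn as nn

def make_net(depth, dim_in, dim_out, width=128):
    layers, d = [], dim_in
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, dim_out))
    return nn.Sequential(*layers)

def complexity_components(x, feat, max_order=4, steps=500):
    # Returns components c_1, ..., c_k whose sum approximates feat, where c_i is
    # the part of the feature first recoverable by a depth-i network.
    approx_prev = torch.zeros_like(feat)
    components = []
    for order in range(1, max_order + 1):
        net = make_net(order, x.size(1), feat.size(1))
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(steps):                     # fit a depth-`order` approximation
            opt.zero_grad()
            loss = ((net(x) - feat) ** 2).mean()
            loss.backward()
            opt.step()
        approx = net(x).detach()
        components.append(approx - approx_prev)    # increment attributed to this order
        approx_prev = approx
    return components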
This paper introduces a method for adapting LoRA adapters in smaller language models to arbitrary downstream tasks. Unlike standard mixture-of-experts architectures, our method employs a gradient-free routing function to choose a weighted combination of experts without increasing the compute requirements for training or inference. The results show that token-level adaptation of LoRA adapters outperforms the base Llama-2-7b model across mathematical (GSM8K), scientific (ARC-Challenge), reading comprehension (SQuAD), and coding (CodeAlpaca-20k) tasks. Further evaluations also show that the average performance of token-level adaptation exceeds that of individual models fine-tuned for each task, with the best performance observed when adapting every other token during inference. The code for this study is made available through a public repository.
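A sketch of gradient-free, token-level routing over a set of LoRA experts is shown below: each token's hidden state is compared against fixed expert prototype vectors, and the resulting weights blend the experts' low-rank updates on top of the frozen base projection. Prototype construction, shapes, and the temperature are illustrative assumptions rather than the paper's exact recipe.

import torch
import torch.nn.functional as F

def route_tokens(hidden, prototypes, temperature=0.1):
    # hidden: (batch, seq, d); prototypes: (n_experts, d) -> weights (batch, seq, n_experts).
    sims = F.cosine_similarity(hidden.unsqueeze(2), prototypes[None, None], dim=-1)
    return F.softmax(sims / temperature, dim=-1)   # no learned router, no extra gradients

def apply_lora_mixture(x, base_W, lora_A, lora_B, weights, scaling=1.0):
    # x: (b, s, d_in); base_W: (d_out, d_in); lora_A: (E, r, d_in); lora_B: (E, d_out, r).
    out = x @ base_W.T
    # Per-expert low-rank update, blended with the per-token routing weights.
    delta = torch.einsum("bsi,eri,eor->bseo", x, lora_A, lora_B)   # (b, s, E, d_out)
    return out + scaling * torch.einsum("bse,bseo->bso", weights, delta)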
We present a new algorithm to learn a deep neural network model that is robust against adversarial attacks. Previous work demonstrates that an adversarially trained Bayesian Neural Network (BNN) provides improved robustness. We recognize that the adversarial learning approach for approximating the multi-modal posterior distribution of a Bayesian model can lead to mode collapse; consequently, the model's robustness and performance are sub-optimal. Instead, we first propose preventing mode collapse to better approximate the multi-modal posterior distribution. Second, based on the intuition that a robust model should ignore perturbations and consider only the informative content of the input, we conceptualize and formulate an information gain objective to measure and force the information learned from benign and adversarial training instances to be similar. Importantly, we prove and demonstrate that minimizing the information gain objective allows the adversarial risk to approach the conventional empirical risk. We believe our efforts provide a step toward a principled method for adversarially training BNNs. Our model demonstrates significantly improved robustness, up to 20%, compared with adversarial training and Adv-BNN under PGD attacks with 0.035 distortion on both the CIFAR-10 and STL-10 datasets.
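The consistency idea behind the information gain objective can be sketched as a divergence penalty between the Monte Carlo averaged predictive distributions on benign and adversarial inputs, so the model extracts similar information from both. The symmetric-KL form, the weighting, and the number of posterior samples below are assumptions for illustration, not the paper's exact objective.

import torch
import torch.nn.functional as F

def predictive(model, x, n_samples=5):
    # Average softmax over stochastic forward passes of a BNN-style model.
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.mean(0).clamp_min(1e-8)

def info_consistency_loss(model, x_benign, x_adv, y, beta=1.0):
    p_b = predictive(model, x_benign)
    p_a = predictive(model, x_adv)
    ce = F.nll_loss(p_a.log(), y)                       # adversarial task loss
    sym_kl = 0.5 * (F.kl_div(p_a.log(), p_b, reduction="batchmean")
                    + F.kl_div(p_b.log(), p_a, reduction="batchmean"))
    return ce + beta * sym_kl                           # encourage similar predictions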
Large language models of code (Code-LLMs) have recently brought tremendous advances to code completion, a fundamental feature of programming assistance and code intelligence. However, most existing works ignore the possible presence of bugs in the code context for generation, which are inevitable in software development. We therefore introduce and study the buggy-code completion problem, inspired by the realistic scenario of real-time code suggestion where the code context contains potential bugs, i.e., anti-patterns that can become bugs in the completed program. To systematically study the task, we introduce two datasets: one with synthetic bugs derived from semantics-altering operator changes (buggy-HumanEval) and one with realistic bugs derived from user submissions to coding problems (buggy-FixEval). We find that the presence of potential bugs significantly degrades the generation performance of high-performing Code-LLMs. For instance, the passing rates of CODEGEN-2B-MONO on test cases of buggy-HumanEval drop by more than 50% given a single potential bug in the context. Finally, we investigate several post-hoc methods for mitigating the adverse effect of potential bugs and find that there remains a significant gap in post-mitigation performance.
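A toy illustration of the setting (not drawn from either dataset) shows how a single semantics-altering operator change in the prefix can derail an otherwise easy completion: a model that faithfully continues the buggy context completes a minimum computation where a maximum was intended.

# Intended task: return the maximum element of a non-empty list.
# Clean prefix, which a Code-LLM would typically complete correctly:
#
#   def max_element(xs):
#       best = xs[0]
#       for x in xs[1:]:
#           if x > best:
#
# Buggy prefix with one flipped comparison (the potential bug), plus a
# completion that faithfully follows the buggy context:
def max_element(xs):
    best = xs[0]
    for x in xs[1:]:
        if x < best:          # '<' should be '>'
            best = x          # completion inherits the bug
    return best

print(max_element([3, 1, 4, 1, 5]))   # prints 1 instead of 5: test cases now fail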