Humans possess a remarkable ability to react to unpredictable perturbations through immediate mechanical responses, which harness the visco-elastic properties of muscles to maintain balance. Inspired by this behaviour, we propose a novel design of a robotic leg utilising fibre-jammed structures as passive compliant mechanisms to achieve variable joint stiffness and damping. We developed multi-material fibre-jammed tendons with tunable mechanical properties, which can be 3D printed in one go without the need for assembly. Through extensive numerical simulations and experimentation, we demonstrate the usefulness of these tendons for shock absorbance and for maintaining joint stability. We investigate how they could be used effectively in a multi-joint robotic leg by evaluating the relative contribution of each tendon to the overall stiffness of the leg. Further, we showcase the potential of these jammed structures for legged locomotion, highlighting how morphological properties of the tendons can be used to enhance stability in robotic legs.
Recent work has developed optimization procedures to find token sequences, called adversarial triggers, which can elicit unsafe responses from aligned language models. These triggers are believed to be universally transferable, i.e., a trigger optimized on one model can jailbreak other models. In this paper, we concretely show that such adversarial triggers are not universal. We extensively investigate trigger transfer amongst 13 open models and observe inconsistent transfer. Our experiments further reveal a significant difference in robustness to adversarial triggers between models Aligned by Preference Optimization (APO) and models Aligned by Fine-Tuning (AFT). We find that APO models are extremely hard to jailbreak even when the trigger is optimized directly on the model. On the other hand, while AFT models may appear safe on the surface, exhibiting refusals to a range of unsafe instructions, we show that they are highly susceptible to adversarial triggers. Lastly, we observe that most triggers optimized on AFT models also generalize to new unsafe instructions from five diverse domains, further emphasizing their vulnerability. Overall, our work highlights the need for more comprehensive safety evaluations for aligned language models.
Near-field propagation, particularly that enabled by reconfigurable intelligent surfaces (RIS), has emerged as a promising research topic in recent years. However, a comprehensive literature review on RIS-based near-field technologies is still lacking. This article aims to fill this gap by providing a brief overview of near-field concepts and a systematic survey of the state-of-the-art RIS-based near-field technologies. The focus is on three key aspects: the construction of ubiquitous near-field wireless propagation environments using RIS, the enabling of new near-field paradigms for 6G networks through RIS, and the challenges faced by RIS-based near-field technologies. This technical review intends to facilitate the development and innovation of RIS-based near-field technologies.
Myocardial infarction is a leading cause of mortality globally, and accurate risk prediction is crucial for improving patient outcomes. Machine learning techniques have shown promise in identifying high-risk patients and predicting outcomes. However, patient data often contain vast amounts of information and missing values, posing challenges for feature selection and imputation methods. In this article, we investigate the impact of the data preprocessing task and compare three ensemble boosted tree methods to predict the risk of mortality in patients with myocardial infarction. Further, we use the Tree Shapley Additive Explanations method to identify relationships among all the features for the performed predictions, leveraging the entirety of the available data in the analysis. Notably, our approach achieved superior performance when compared to other existing machine learning approaches, with an F1-score of 91.2% and an accuracy of 91.8% for LightGBM without data preprocessing.
The constraint satisfaction problem (CSP) on a finite relational structure B is to decide, given a set of constraints on variables where the relations come from B, whether or not there is an assignment to the variables satisfying all of the constraints; the surjective CSP is the variant where one decides the existence of a surjective satisfying assignment onto the universe of B. We present an algebraic framework for proving hardness results on surjective CSPs; essentially, this framework computes global gadgetry that permits one to present a reduction from a classical CSP to a surjective CSP. We show how to derive a number of hardness results for the surjective CSP in this framework, including the hardness of the disconnected cut problem, of the no-rainbow 3-coloring problem, and of the surjective CSP on all 2-element structures known to be intractable (in this setting). Our framework thus allows us to unify these hardness results, and reveal common structure among them; we believe that our hardness proof for the disconnected cut problem is more succinct than the original. In our view, the framework also makes very transparent a way in which classical CSPs can be reduced to surjective CSPs.
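The distinction between the classical and surjective variants can be made concrete with a brute-force sketch (this toy solver is an illustration of the problem definitions only, not the algebraic framework of the paper; the relation and variable names are invented for the example):

```python
from itertools import product

def solve_csp(variables, domain, constraints, surjective=False):
    """Brute-force search for a satisfying assignment.

    constraints: list of (relation, scope), where relation is a set of
    tuples over `domain` and scope is a tuple of variable names.
    If surjective=True, additionally require every domain value to be used.
    """
    for values in product(domain, repeat=len(variables)):
        if surjective and set(values) != set(domain):
            continue  # image of the assignment must be the whole universe
        assignment = dict(zip(variables, values))
        if all(tuple(assignment[v] for v in scope) in rel
               for rel, scope in constraints):
            return assignment
    return None

# Example over the 2-element universe {0, 1}: a binary relation allowing
# only equal pairs. The classical CSP is satisfiable (e.g. all-zero),
# but no satisfying assignment is surjective.
EQ = {(0, 0), (1, 1)}
constraints = [(EQ, ("x", "y")), (EQ, ("y", "z"))]
print(solve_csp(["x", "y", "z"], [0, 1], constraints))        # -> {'x': 0, 'y': 0, 'z': 0}
print(solve_csp(["x", "y", "z"], [0, 1], constraints, True))  # -> None
```

The example shows why surjectivity is a genuinely global condition: it cannot be checked constraint by constraint, which is what makes the surjective variant resistant to standard local reductions.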
We consider a voting problem in which a set of agents have metric preferences over a set of alternatives, and are also partitioned into disjoint groups. Given information about the preferences of the agents and their groups, our goal is to choose an alternative that approximately minimizes an objective function that takes the groups of agents into account. We consider two natural group-fair objectives known as Max-of-Avg and Avg-of-Max, which are different combinations of the max and the average cost within and across the groups. We show tight bounds on the best possible distortion that can be achieved by various classes of mechanisms depending on the amount of information they have access to. In particular, we consider group-oblivious full-information mechanisms that do not know the groups but have access to the exact distances between agents and alternatives in the metric space, group-oblivious ordinal-information mechanisms that again do not know the groups but are given the ordinal preferences of the agents, and group-aware mechanisms that have full knowledge of the structure of the agent groups and also ordinal information about the metric space.
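Under one natural reading of the two objectives (an assumption here, since the abstract does not spell out the formulas), Max-of-Avg takes the worst group's average cost, while Avg-of-Max averages each group's worst-off agent:

```python
def max_of_avg(costs, groups):
    # Worst group, measured by the average cost of its members.
    return max(sum(costs[i] for i in g) / len(g) for g in groups)

def avg_of_max(costs, groups):
    # Average over groups of the cost of each group's worst-off member.
    return sum(max(costs[i] for i in g) for g in groups) / len(groups)

# costs[i]: distance from agent i to some fixed candidate alternative.
costs = [1.0, 3.0, 2.0, 8.0]
groups = [[0, 1], [2, 3]]
print(max_of_avg(costs, groups))  # max(2.0, 5.0) = 5.0
print(avg_of_max(costs, groups))  # (3.0 + 8.0) / 2 = 5.5
```

A mechanism would evaluate such an objective for every alternative and pick the minimizer; the distortion question is how much worse a mechanism with only partial information (no groups, or only ordinal preferences) must do compared to that optimum.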
Recent studies reveal the connection between GNNs and the diffusion process, which has motivated the proposal of many diffusion-based GNNs. However, since these two mechanisms are closely related, one fundamental question naturally arises: is there a general diffusion framework that can formally unify these GNNs? The answer to this question can not only deepen our understanding of the learning process of GNNs, but may also open a new door to designing a broad new class of GNNs. In this paper, we propose a general diffusion equation framework with a fidelity term, which formally establishes the relationship between the diffusion process and a wider class of GNNs. Meanwhile, with this framework, we identify one characteristic of graph diffusion networks: the current neural diffusion process corresponds only to the first-order diffusion equation. However, through an experimental investigation, we show that the labels of high-order neighbors actually exhibit the monophily property, which induces label-based similarity among high-order neighbors without requiring similarity among first-order neighbors. This discovery motivates the design of a new high-order neighbor-aware diffusion equation, from which we derive a new type of graph diffusion network (HiD-Net) based on the framework. With the high-order diffusion equation, HiD-Net is more robust against attacks and works on both homophily and heterophily graphs. We not only theoretically analyze the relation between HiD-Net and high-order random walks, but also provide a theoretical convergence guarantee. Extensive experimental results demonstrate the effectiveness of HiD-Net over state-of-the-art graph diffusion networks.
Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. For example, we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize, but have a strong underestimation bias when estimating large amounts of MI. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and by applying the chain rule on MI between the decomposed views. This expression contains a sum of unconditional and conditional MI terms, each measuring modest chunks of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI which can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and learns better representations in a vision domain and for dialogue generation.
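The decomposition rests on the standard chain rule of mutual information: if one view $Y$ is split into subviews $Y_1, \dots, Y_K$, then

```latex
I(X; Y) = I(X; Y_1, \dots, Y_K)
        = I(X; Y_1) + \sum_{k=2}^{K} I\bigl(X; Y_k \mid Y_1, \dots, Y_{k-1}\bigr)
```

Each term on the right is smaller than the total MI, so each can be estimated by a contrastive bound operating well below its saturation regime; summing the bounds recovers a bound on the full quantity.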
The demand for artificial intelligence has grown significantly over the last decade, and this growth has been fueled by advances in machine learning techniques and the ability to leverage hardware acceleration. However, in order to increase the quality of predictions and render machine learning solutions feasible for more complex applications, a substantial amount of training data is required. Although small machine learning models can be trained with modest amounts of data, the input for training larger models such as neural networks grows exponentially with the number of parameters. Since the demand for processing training data has outpaced the increase in computation power of computing machinery, there is a need for distributing the machine learning workload across multiple machines, turning a centralized system into a distributed one. These distributed systems present new challenges, first and foremost the efficient parallelization of the training process and the creation of a coherent model. This article provides an extensive overview of the current state of the art in the field by outlining the challenges and opportunities of distributed machine learning over conventional (centralized) machine learning, discussing the techniques used for distributed machine learning, and providing an overview of the systems that are available.
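The two core challenges named above, parallelizing training while keeping a single coherent model, can be illustrated by the simplest distributed strategy, data-parallel gradient averaging. This is a generic textbook sketch (simulated workers, synthetic data), not a system from the survey:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data, split evenly across 4 simulated workers.
X = rng.normal(size=(400, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

w = np.zeros(3)  # the single shared ("coherent") model
for step in range(200):
    # Each worker computes the mean-squared-error gradient on its own shard...
    grads = [2 * Xi.T @ (Xi @ w - yi) / len(yi) for Xi, yi in shards]
    # ...and the averaged gradient updates the shared model, exactly as if
    # one machine had processed the full batch.
    w -= 0.1 * np.mean(grads, axis=0)

print(np.round(w, 3))  # -> approximately [ 1. -2.  0.5]
```

In a real system the averaging step is where the difficulty lives: it requires communication (a parameter server or an all-reduce), and relaxing its synchrony is what distinguishes many of the architectures such a survey covers.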
Benefiting from the rapid development of deep learning techniques, salient object detection has achieved remarkable progress recently. However, two major challenges still hinder its application in embedded devices: low-resolution output and heavy model weight. To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very limited convolutional parameters while preserving accuracy. Secondly, we further propose reverse attention to guide such side-output residual learning in a top-down manner. By erasing the currently predicted salient regions from side-output features, the network can eventually explore the missing object parts and details, resulting in high resolution and accuracy. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, with advantages in terms of simplicity, efficiency (45 FPS), and model size (81 MB).
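The erasing step of reverse attention can be sketched in a few lines. This is a minimal NumPy illustration of the idea described above (invert the coarse saliency map and use it to weight side-output features); the shapes and function names are invented for the example and do not reproduce the paper's network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reverse_attention(side_features, coarse_logits):
    """Weight side-output features by where the coarse prediction is NOT salient."""
    # High coarse saliency -> weight near 0 (region erased);
    # background -> weight near 1 (region kept for refinement).
    weight = 1.0 - sigmoid(coarse_logits)
    return side_features * weight

# Toy single-channel feature map and coarse prediction logits.
feat = np.ones((1, 4, 4))
logits = np.full((1, 4, 4), -10.0)  # background everywhere...
logits[0, 1:3, 1:3] = 10.0          # ...except a predicted salient blob
out = reverse_attention(feat, logits)
```

Because the already-detected blob is suppressed, subsequent residual learning is pushed toward the missing boundary details rather than re-predicting what the coarse map already captured.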
Multi-relation question answering is a challenging task, as it requires elaborate analysis of the question and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model called the Interpretable Reasoning Network that employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the current parsed results; utilizes the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model can offer traceable and observable intermediate predictions for reasoning analysis and failure diagnosis.
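The hop loop described above (predict a relation, consume the matching part of the question, follow the relation, repeat) can be caricatured with hand-built vectors. Everything here is a hypothetical stand-in for the model's learned components, the KB triples and embeddings included:

```python
import numpy as np

# Toy KB of (head entity, relation) -> tail entity triples.
kb = {("obama", "spouse"): "michelle", ("michelle", "birthplace"): "chicago"}

# Hypothetical relation embeddings; the question vector is modeled as a
# weighted sum of the embeddings of the relations it mentions.
rel_emb = {"spouse": np.array([1.0, 0.0]), "birthplace": np.array([0.0, 1.0])}

def answer(question_vec, entity, hops):
    q = question_vec.copy()
    for _ in range(hops):
        # 1) Predict the relation best matching the remaining question.
        rel = max(rel_emb, key=lambda r: q @ rel_emb[r])
        # 2) Update the question representation by "consuming" that relation.
        q = q - rel_emb[rel]
        # 3) Follow the relation in the KB to drive the next hop.
        entity = kb[(entity, rel)]
    return entity

# "Where was Obama's spouse born?" -> hop 1: spouse, hop 2: birthplace.
print(answer(np.array([1.0, 0.6]), "obama", hops=2))  # -> chicago
```

The interpretability claim corresponds to step 1 being observable: at every hop one can read off which relation was predicted and which part of the question was consumed, which is what enables the failure diagnosis mentioned above.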