In the control of lower-limb exoskeletons with feet, the phase of the gait cycle can be identified by monitoring the weight distribution at the feet. This phase information can be used in the exoskeleton's controller to compensate for the device's dynamics and to assign impedance parameters. Typically, the weight distribution is calculated using data from sensors such as treadmill force plates or insole force sensors; however, these solutions increase both setup complexity and cost. For this reason, we propose a deep-learning approach that uses a short time window of joint kinematics to predict the weight distribution of an exoskeleton in real time. The model was trained on treadmill walking data from six users wearing a four-degree-of-freedom exoskeleton and tested in real time on three users wearing the same device, two of whom were not present in the training set, to demonstrate the model's ability to generalize across individuals. Results show that the proposed method fits the measured weight distribution with $R^2 = 0.9$ and is suitable for real-time control, with prediction times below 1 ms. Experiments in closed-loop exoskeleton control show that deep-learning-based weight-distribution estimation can replace force sensors in both overground and treadmill walking.
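The abstract does not specify the network, so the sketch below is only one plausible shape of such a regressor: a small 1-D CNN mapping a short window of joint kinematics to a weight-distribution ratio in [0, 1]. The window length, channel counts, and layer sizes are assumptions for illustration, not the authors' architecture.

```python
# Hypothetical sketch: window of joint kinematics -> weight-distribution ratio.
import torch
import torch.nn as nn

class WeightDistributionNet(nn.Module):
    def __init__(self, n_joints=4, window=50):
        super().__init__()
        # Input: (batch, 2 * n_joints, window) -- joint angles and velocities.
        self.net = nn.Sequential(
            nn.Conv1d(2 * n_joints, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1), nn.Sigmoid(),   # fraction of weight on one foot
        )

    def forward(self, x):
        return self.net(x)

model = WeightDistributionNet()
window = torch.randn(1, 8, 50)     # one 50-sample window of 8 kinematic channels
print(model(window).item())        # predicted weight-distribution ratio
```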
The use of neural networks for solving differential equations is practically difficult because the runtime of automatic differentiation grows exponentially when computing high-order derivatives. We propose $n$-TangentProp, the natural extension of the TangentProp formalism \cite{simard1991tangent} to arbitrarily many derivatives. $n$-TangentProp computes the exact derivative $d^n/dx^n f(x)$ in quasilinear, rather than exponential, time for a densely connected, feed-forward neural network $f$ with a smooth, parameter-free activation function. We validate our algorithm empirically across a range of depths, widths, and numbers of derivatives. We demonstrate that our method is particularly beneficial in the context of physics-informed neural networks, where $n$-TangentProp allows for significantly faster training times than previous methods and has favorable scaling with respect to both model size and loss-function complexity as measured by the number of required derivatives. The code for this paper can be found at //github.com/kyrochi/n\_tangentprop.
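As a rough illustration of the scaling problem (not the paper's implementation), the snippet below contrasts nested autodiff with Taylor-mode differentiation, the same single-pass idea of propagating higher-order terms through the network, using JAX's experimental jet interface; the factorial rescaling assumes jet's plain truncated-Taylor-coefficient convention.

```python
import math
import jax
import jax.numpy as jnp
from jax.experimental.jet import jet

def f(x):
    for _ in range(3):                  # a tiny fixed-weight tanh network
        x = jnp.tanh(1.3 * x + 0.1)
    return x

def nth_grad(g, n):                     # nested reverse mode: cost blows up in n
    for _ in range(n):
        g = jax.grad(g)
    return g

n, x0 = 5, 0.7
series = [1.0] + [0.0] * (n - 1)        # expand f(x0 + t) to order n
_, terms = jet(f, (x0,), (series,))     # single Taylor-mode pass
print(math.factorial(n) * terms[-1])    # d^5 f / dx^5 at x0, Taylor mode
print(nth_grad(f, n)(x0))               # same value via nested autodiff
```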
Robotic tasks such as planning and navigation require a hierarchical semantic understanding of a scene, which may span multiple floors and rooms. Current methods primarily focus on object segmentation for 3D scene understanding. However, such methods struggle to segment out topological regions like "kitchen" in the scene. In this work, we introduce a two-step pipeline to solve this problem. First, we extract a topological map, i.e., a floorplan of the indoor scene, using a novel multi-channel occupancy representation. Then, we generate CLIP-aligned features and semantic labels for every room instance based on the objects it contains, using a self-attention transformer. Our language-topology alignment supports natural language querying, e.g., a "place to cook" locates the "kitchen". We outperform the current state of the art on room segmentation by ~20% and room classification by ~12%. Our detailed qualitative analysis and ablation studies provide insights into the problem of joint structural and semantic 3D scene understanding. Project Page: quest-maps.github.io
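The querying step can be shown in miniature: room instances carry CLIP-aligned feature vectors, and a free-form query is matched to the nearest room by cosine similarity. In this self-contained toy, the CLIP text encoder is stubbed out with random unit vectors purely so the snippet runs; with real clip/open_clip embeddings, "a place to cook" would score highest against "kitchen".

```python
import numpy as np

rng = np.random.default_rng(0)
def fake_clip_embed(text):             # stand-in for a real CLIP text encoder
    vec = rng.standard_normal(512)
    return vec / np.linalg.norm(vec)

room_features = {name: fake_clip_embed(name) for name in
                 ["kitchen", "bedroom", "bathroom", "living room"]}

def query_room(query):
    q = fake_clip_embed(query)
    scores = {name: float(feat @ q) for name, feat in room_features.items()}
    return max(scores, key=scores.get)

print(query_room("a place to cook"))   # with real CLIP features: "kitchen"
```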
The potential of Wi-Fi backscatter communication systems is immense, yet challenges such as signal instability and energy constraints impose performance limits. This paper introduces FlexScatter, a Wi-Fi backscatter system that uses a scheduling strategy based on excitation prediction and rateless coding to enhance system performance. First, a Wi-Fi traffic prediction model is constructed by analyzing the variability of the excitation source. Then, an adaptive transmission scheduling algorithm is proposed to meet the low-energy demands of backscatter tags, adjusting the transmission strategy according to the predicted traffic and the channel conditions. Furthermore, leveraging the benefits of low-density parity-check (LDPC) and fountain codes, a novel coding and decoding algorithm tailored to dynamic channel conditions is developed. Experimental validation shows that FlexScatter reduces the bit error rate (BER) by up to 30%, improves energy efficiency by 7%, and increases overall system utility by 11% compared with conventional methods. FlexScatter's ability to balance energy consumption and communication efficiency makes it a robust solution for future IoT applications that rely on unpredictable Wi-Fi traffic.
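The rateless ingredient in isolation looks roughly as follows: an LT-style fountain encoder emits an unbounded stream of coded packets, each the XOR of a random subset of source packets, so the receiver can decode from whichever packets survive the unstable channel. The degree distribution here is a crude toy, not FlexScatter's LDPC/fountain construction.

```python
import random

def lt_encode(source_packets, seed):
    rng = random.Random(seed)          # seed lets the decoder re-derive indices
    k = len(source_packets)
    degree = rng.choice([1, 1, 2, 2, 2, 3, 4])   # toy degree distribution
    indices = rng.sample(range(k), degree)
    coded = 0
    for i in indices:                  # coded packet = XOR of chosen sources
        coded ^= source_packets[i]
    return indices, coded

source = [0x3A, 0x7F, 0x11, 0xC0]      # four source bytes
for seed in range(6):                   # stream as many packets as needed
    print(lt_encode(source, seed))
```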
We introduce a novel, data-driven approach for reconstructing temporally coherent 3D motion from unstructured and potentially partial observations of non-rigidly deforming shapes. Our goal is to achieve high-fidelity motion reconstructions for shapes that undergo near-isometric deformations, such as humans wearing loose clothing. The key novelty of our work is its ability to combine implicit shape representations with explicit mesh-based deformation models, enabling detailed and temporally coherent motion reconstructions without relying on parametric shape models or decoupling shape and motion. Each frame is represented as a neural field decoded from a feature space in which observations over time are fused, thereby preserving geometric details present in the input data. Temporal coherence is enforced with a near-isometric deformation constraint between adjacent frames, applied to the surface underlying the neural field. Our method outperforms state-of-the-art approaches, as demonstrated by its application to human and animal motion sequences reconstructed from monocular depth videos.
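One common way to write a near-isometry term between adjacent frames is to penalize changes in edge lengths of the surface mesh, in the as-isometric-as-possible spirit; the paper's exact formulation on the neural-field surface may differ, so the sketch below is generic.

```python
import torch

def near_isometry_loss(verts_t, verts_t1, edges):
    # verts_*: (V, 3) vertex positions at frames t and t+1; edges: (E, 2) indices
    i, j = edges[:, 0], edges[:, 1]
    len_t  = (verts_t[i]  - verts_t[j]).norm(dim=1)
    len_t1 = (verts_t1[i] - verts_t1[j]).norm(dim=1)
    return ((len_t1 - len_t) ** 2).mean()   # penalize edge-length changes

verts_t  = torch.randn(100, 3)
verts_t1 = verts_t + 0.01 * torch.randn(100, 3)   # slightly deformed next frame
edges = torch.randint(0, 100, (300, 2))
print(near_isometry_loss(verts_t, verts_t1, edges))
```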
Predicting the evolution of complex systems governed by partial differential equations (PDEs) remains challenging, especially for nonlinear, chaotic behaviors. This study introduces Koopman-inspired Fourier Neural Operators (kFNO) and Convolutional Neural Networks (kCNN) to learn solution advancement operators for flame front instabilities. By transforming data into a high-dimensional latent space, these models achieve more accurate multi-step predictions compared to traditional methods. Benchmarking across one- and two-dimensional flame front scenarios demonstrates the proposed approaches' superior performance in short-term accuracy and long-term statistical reproduction, offering a promising framework for modeling complex dynamical systems.
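The Koopman-inspired pattern common to such models is: lift the state into a high-dimensional latent space, advance it there with a learned operator, and decode. The sketch below uses plain MLP liftings and a linear latent step as a generic stand-in; the paper's kFNO/kCNN use Fourier-operator and convolutional liftings, so all sizes and layer choices here are assumptions.

```python
import torch
import torch.nn as nn

class KoopmanSurrogate(nn.Module):
    def __init__(self, n_state=64, n_latent=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_state, n_latent), nn.GELU(),
                                     nn.Linear(n_latent, n_latent))
        self.advance = nn.Linear(n_latent, n_latent, bias=False)  # Koopman-like step
        self.decoder = nn.Linear(n_latent, n_state)

    def forward(self, u0, n_steps):
        z = self.encoder(u0)
        outputs = []
        for _ in range(n_steps):          # multi-step rollout in latent space
            z = self.advance(z)
            outputs.append(self.decoder(z))
        return torch.stack(outputs, dim=1)   # (batch, n_steps, n_state)

model = KoopmanSurrogate()
u0 = torch.randn(8, 64)                   # batch of initial front states
print(model(u0, n_steps=10).shape)        # torch.Size([8, 10, 64])
```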
We consider the numerical approximation of the stochastic complex Ginzburg-Landau equation with additive noise on the one-dimensional torus. The complex nature of the equation means that many of the standard approaches developed for stochastic partial differential equations cannot be applied directly. We use an energy approach to prove an existence and uniqueness result, as well as to obtain moment bounds on the stochastic PDE, before introducing our numerical discretization. For such a well-studied deterministic equation, it is perhaps surprising that its numerical approximation in the stochastic setting has not been considered before. Our method is based on a spectral discretization in space and a Lie-Trotter splitting method in time. We obtain moment bounds for the numerical method before proving our main result: strong convergence on a set of arbitrarily large probability. From this we obtain a result on convergence in probability. We conclude with some numerical experiments that illustrate the effectiveness of our method.
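The two ingredients of such a scheme can be sketched for a standard cubic complex Ginzburg-Landau equation, $u_t = (1 + ia)u_{xx} + u - (1 + ib)|u|^2 u + \text{noise}$, on the torus: the linear part is advanced exactly in Fourier space (the spectral step) and the nonlinear and noise parts are advanced pointwise (the splitting step). The coefficients, noise regularity, and substep ordering below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

N, L, dt = 128, 2 * np.pi, 1e-3
a, b, sigma = 1.0, 0.5, 0.1
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
lin_factor = np.exp(dt * (1.0 - (1.0 + 1j * a) * k**2))   # exact linear flow

rng = np.random.default_rng(0)
u = np.exp(1j * x) + 0.01 * rng.standard_normal(N)

for _ in range(1000):
    u = np.fft.ifft(lin_factor * np.fft.fft(u))           # spectral linear substep
    u = u - dt * (1.0 + 1j * b) * np.abs(u)**2 * u        # nonlinear substep
    u = u + sigma * np.sqrt(dt) * (rng.standard_normal(N)  # additive noise
                                   + 1j * rng.standard_normal(N))
print(np.max(np.abs(u)))
```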
Vehicle telematics provides granular data for dynamic driving-risk assessment, but current methods often rely on aggregated metrics (e.g., harsh-braking counts) and do not fully exploit the rich time-series structure of telematics data. In this paper, we introduce a flexible framework using a continuous-time hidden Markov model (CTHMM) to model and analyze trip-level telematics data. Unlike existing methods, the CTHMM models the raw time-series data without predefined thresholds on harsh driving events or assumptions about accident probabilities. Moreover, our analysis is based solely on telematics data, requiring no traditional covariates such as driver or vehicle characteristics. Through unsupervised anomaly detection based on pseudo-residuals, we identify deviations from normal driving patterns -- defined as the prevalent behaviour observed in a driver's history or across the population -- which are linked to accident risk. Validated on both controlled and real-world datasets, the CTHMM effectively detects abnormal driving behaviour and trips with increased accident likelihood. In the real-data analysis, higher anomaly levels in longitudinal and lateral accelerations consistently correlate with greater accident risk, with classification models using this information achieving ROC-AUC values as high as 0.86 for trip-level analysis and 0.78 for distinguishing drivers with claims. Furthermore, the methodology reveals significant behavioural differences between drivers with and without claims, offering valuable insights for insurance applications, accident analysis, and prevention.
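The defining CTHMM mechanic is that hidden states evolve with a generator matrix $Q$, so the transition matrix over an irregular observation gap $\Delta t$ is $\exp(Q \Delta t)$. The minimal forward-likelihood pass below illustrates this with a hypothetical two-state ("calm"/"aggressive") generator and Gaussian emissions standing in for acceleration features; all parameter values are made up, not fitted.

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import norm

Q = np.array([[-0.5,  0.5],              # generator: "calm" <-> "aggressive"
              [ 1.0, -1.0]])
means, sds = np.array([0.0, 2.0]), np.array([0.5, 1.0])
pi = np.array([0.8, 0.2])                # initial state distribution

def loglik(times, obs):
    alpha = pi * norm.pdf(obs[0], means, sds)        # scaled forward variable
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for t in range(1, len(obs)):
        P = expm(Q * (times[t] - times[t - 1]))      # gap-dependent transitions
        alpha = (alpha @ P) * norm.pdf(obs[t], means, sds)
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll

times = np.array([0.0, 0.4, 1.5, 1.7, 3.0])          # irregular sampling times
obs   = np.array([0.1, 0.3, 2.5, 1.9, 0.2])          # e.g., lateral acceleration
print(loglik(times, obs))
```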
The Knaster-Tarski theorem, also known as Tarski's theorem, guarantees that every monotone function defined on a complete lattice has a fixed point. We analyze the query complexity of finding such a fixed point on the $k$-dimensional grid of side length $n$ under the $\leq$ relation. Specifically, there is an unknown monotone function $f: \{0,1,\ldots, n-1\}^k \to \{0,1,\ldots, n-1\}^k$ and an algorithm must query a vertex $v$ to learn $f(v)$. A key special case of interest is the Boolean hypercube $\{0,1\}^k$, which is isomorphic to the power set lattice -- the original setting of the Knaster-Tarski theorem. Our lower bound, matching the straightforward $O(k)$ upper bound obtained by following $f$ upward from the minimum element, characterizes the randomized and deterministic query complexity of the Tarski search problem on the Boolean hypercube as $\Theta(k)$. More generally, we prove a randomized lower bound of $\Omega\left( k + \frac{k \cdot \log{n}}{\log{k}} \right)$ for the $k$-dimensional grid of side length $n$, which is asymptotically tight in high dimensions when $k$ is large relative to $n$.
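The one-dimensional case shows where a $\log n$ term comes from: for a monotone $f$ on $\{0, \ldots, n-1\}$, binary search on the sign of $f(m) - m$ finds a fixed point (guaranteed to exist by Knaster-Tarski) with $O(\log n)$ queries, since monotonicity keeps $f$ mapping the surviving interval into itself. The hard bounds in the paper concern the $k$-dimensional grid; this sketch is only the textbook warm-up.

```python
def tarski_1d(f, n):
    lo, hi = 0, n - 1                # invariant: f maps [lo, hi] into itself
    while lo < hi:
        m = (lo + hi) // 2
        if f(m) > m:                 # f maps [m+1, hi] into itself
            lo = m + 1
        elif f(m) < m:               # f maps [lo, m-1] into itself
            hi = m - 1
        else:
            return m
    return lo

f = lambda v: min(v + 3, 40)         # monotone example on {0, ..., 99}
print(tarski_1d(f, 100))             # 40, its unique fixed point
```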
Blind estimation of intersymbol interference channels based on the Baum-Welch (BW) algorithm, a specific implementation of the expectation-maximization (EM) algorithm for training hidden Markov models, is robust and does not require labeled data. However, it is known for its high computational cost, slow convergence, and tendency to converge to a local maximum. In this paper, we modify the trellis structure of the BW algorithm by associating the channel parameters with two consecutive states. This modification enables us to reduce the number of required states by half while maintaining the same performance. Moreover, to improve the convergence rate and the estimation performance, we construct a joint turbo-BW-equalization system that exploits the extrinsic information produced by the turbo decoder to refine the BW-based estimator at each EM iteration. Our experiments demonstrate that the joint system achieves convergence in just 4 EM iterations, 8 iterations fewer than a separate design, at a signal-to-noise ratio (SNR) of 6 dB. Additionally, the joint system provides improved estimation accuracy, with a mean square error (MSE) of $10^{-4}$. We also identify scenarios where a joint design is not preferable, especially when the channel is noisy (e.g., SNR = 2 dB) and the turbo decoder is unable to provide reliable extrinsic information for the BW-based estimator.
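For readers unfamiliar with the baseline being modified, the sketch below runs vanilla Baum-Welch channel estimation on a toy two-tap ISI channel with BPSK input and known noise level: the E-step computes branch posteriors by forward-backward over the trellis (state = previous symbol), and the M-step re-estimates the taps by posterior-weighted least squares. The paper's contributions, the halved trellis and the turbo-decoder coupling, are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
sym = np.array([-1.0, 1.0])
h_true, T, sigma = np.array([1.0, 0.6]), 2000, 0.4
s = sym[rng.integers(0, 2, T + 1)]                  # BPSK symbol stream
y = h_true[0] * s[1:] + h_true[1] * s[:-1] + sigma * rng.standard_normal(T)

h = np.array([0.5, 0.1])                            # crude initial estimate
cur = np.tile(sym, (2, 1))                          # cur[i, j]  = sym[j]
prev = cur.T                                        # prev[i, j] = sym[i]
for _ in range(10):                                 # EM iterations
    # E-step: forward-backward over the 2-state trellis
    mu = h[0] * cur + h[1] * prev                   # branch means, shape (2, 2)
    like = np.exp(-(y[:, None, None] - mu) ** 2 / (2 * sigma**2))
    alpha = np.zeros((T + 1, 2)); alpha[0] = 0.5
    for t in range(T):
        a = alpha[t] @ (0.5 * like[t]); alpha[t + 1] = a / a.sum()
    beta = np.ones((T + 1, 2))
    for t in range(T - 1, -1, -1):
        b = (0.5 * like[t]) @ beta[t + 1]; beta[t] = b / b.sum()
    xi = alpha[:-1, :, None] * 0.5 * like * beta[1:, None, :]  # branch posteriors
    xi /= xi.sum(axis=(1, 2), keepdims=True)
    # M-step: posterior-weighted least squares for the two taps
    A = np.array([[(xi * cur * cur).sum(),  (xi * cur * prev).sum()],
                  [(xi * cur * prev).sum(), (xi * prev * prev).sum()]])
    rhs = np.array([(xi * y[:, None, None] * cur).sum(),
                    (xi * y[:, None, None] * prev).sum()])
    h = np.linalg.solve(A, rhs)
print(h)   # tap estimates; data generated with h_true = (1.0, 0.6)
```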
Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes with the same class are prone to connect to each other), while ignoring the heterophily that exists in many real-world networks (i.e., nodes with different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining immediate representations, which introduces noise and irrelevant information into the result. However, these methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs; this makes it difficult to distinguish the representations of nodes from different classes. To address this problem, we design a novel propagation mechanism that can automatically adapt the propagation and aggregation process according to the homophily or heterophily between node pairs. To learn the propagation process adaptively, we introduce two measures of the homophily degree between node pairs, which are learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, trained in an end-to-end scheme, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of node representations according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms state-of-the-art methods under heterophily or low homophily, and achieves competitive performance under homophily.
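An illustrative layer in this spirit is sketched below: a learned per-edge score in [-1, 1] reweights propagation, so heterophilous edges can push neighbor representations apart rather than average them together. This single attribute-based score is a simplification of the paper's two separate topology- and attribute-based homophily measures.

```python
import torch
import torch.nn as nn

class AdaptivePropagation(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.score = nn.Linear(2 * in_dim, 1)   # attribute-based homophily score

    def forward(self, x, edge_index):           # edge_index: (2, E)
        src, dst = edge_index
        # per-edge homophily in [-1, 1]: +1 aggregates, -1 differentiates
        g = torch.tanh(self.score(torch.cat([x[src], x[dst]], dim=1))).squeeze(1)
        msg = g.unsqueeze(1) * x[src]            # signed, weighted messages
        agg = torch.zeros_like(x).index_add_(0, dst, msg)
        deg = torch.zeros(x.size(0), device=x.device).index_add_(
            0, dst, torch.ones_like(g)).clamp(min=1).unsqueeze(1)
        return torch.relu(self.lin(x + agg / deg))

layer = AdaptivePropagation(16, 16)
x = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
print(layer(x, edge_index).shape)    # torch.Size([5, 16])
```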