There is an increasing need to comprehensively characterize the kinematic performance of different Micromobility Vehicles (MMVs). This study aims to: 1) characterize the kinematic behaviors of different MMVs during emergency maneuvers; 2) explore the influence of different MMV power sources on device performance; 3) investigate whether piecewise linear models are suitable for modeling MMV trajectories. A test track experiment was conducted in which 40 frequent riders performed emergency braking and swerving maneuvers riding a subset of electric MMVs, their traditional counterparts, and, in some cases, behaving as running pedestrians. A second experiment was conducted to determine the MMVs' swerving lower boundaries. The device power source had a statistically significant influence on the kinematic capabilities of the MMVs: while e-MMVs displayed superior braking capabilities compared to their traditional counterparts, the opposite was observed in terms of swerving performance. Furthermore, performance varied significantly across the different MMV typologies, with handlebar-based devices consistently outperforming handlebar-less devices across the metrics considered. The piecewise linear models used for braking profiles fit well for most MMVs, except for skateboards and pedestrians, due to the foot-ground engagement involved in their deceleration. These findings underscore that the effectiveness of steering or braking in preventing collisions may vary depending on the type and power source of the device. This study also demonstrates the applicability of piecewise linear models for generating parameterized functions that accurately model braking trajectories, providing a valuable resource for developers of automated systems. The model, however, also reveals that the single-brake-ramp assumption does not hold for certain types of MMVs or for pedestrians, indicating the need for further refinement.
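To illustrate the kind of piecewise linear braking model described, here is a minimal sketch (not the study's actual fitting pipeline) that fits a single-ramp speed profile, a constant cruising speed followed by one linear deceleration ramp, to synthetic data; the function names, parameter values, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def braking_profile(t, t0, v0, a):
    """Speed over time: constant v0 until brake onset t0, then one linear ramp
    of slope -a, clipped at zero."""
    return np.where(t < t0, v0, np.maximum(v0 - a * (t - t0), 0.0))

# Synthetic run: 5 m/s cruise, braking at t = 1.0 s with 3.5 m/s^2 deceleration, plus noise.
t = np.linspace(0, 3, 120)
v = braking_profile(t, 1.0, 5.0, 3.5) + np.random.normal(0, 0.05, t.size)

(t0, v0, a), _ = curve_fit(braking_profile, t, v, p0=[0.5, 4.0, 2.0])
print(f"brake onset ~ {t0:.2f} s, initial speed ~ {v0:.2f} m/s, deceleration ~ {a:.2f} m/s^2")
```

Under this single-ramp assumption the fit recovers the brake onset time and mean deceleration directly, which is exactly where skateboards and pedestrians deviate: repeated foot-ground engagement produces multiple ramps that one linear segment cannot capture.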
Mixture of Experts (MoE) models have emerged as a primary solution for reducing the computational cost of Large Language Models. In this work, we analyze their scaling properties, incorporating an expanded range of variables. Specifically, we introduce a new hyperparameter, granularity, whose adjustment enables precise control over the size of the experts. Building on this, we establish scaling laws for fine-grained MoE, taking into account the number of training tokens, model size, and granularity. Leveraging these laws, we derive the optimal training configuration for a given computational budget. Our findings not only show that MoE models consistently outperform dense Transformers but also highlight that the efficiency gap between dense and MoE models widens as we scale up the model size and training budget. Furthermore, we demonstrate that the common practice of setting the size of experts in MoE to mirror the feed-forward layer is not optimal at almost any computational budget.
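As a rough sketch of what the granularity hyperparameter controls (a simplified reading, not the paper's training setup), the PyTorch layer below shrinks each expert's hidden width by the granularity factor while multiplying the expert count by the same factor, so total expert parameters stay roughly constant; all dimensions and the top-k routing scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FineGrainedMoE(nn.Module):
    """Sketch of a fine-grained MoE layer: granularity G shrinks each expert's
    hidden width from d_ff to d_ff // G while multiplying the expert count by G,
    keeping total expert parameters roughly constant as G varies."""
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, granularity=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        hidden = d_ff // granularity
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, hidden), nn.GELU(), nn.Linear(hidden, d_model))
            for _ in range(n_experts * granularity)
        ])
        self.router = nn.Linear(d_model, n_experts * granularity)

    def forward(self, x):                                   # x: (tokens, d_model)
        weights, idx = self.router(x).softmax(-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):                         # route each token to its top-k experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

x = torch.randn(16, 512)
print(FineGrainedMoE()(x).shape)  # torch.Size([16, 512])
```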
Learned Sparse Retrieval (LSR) is a group of neural methods designed to encode queries and documents into sparse lexical vectors. These vectors can be efficiently indexed and retrieved using an inverted index. While LSR has shown promise in text retrieval, its potential in multi-modal retrieval remains largely unexplored. Motivated by this, we explore the application of LSR in the multi-modal domain, focusing on Multi-Modal Learned Sparse Retrieval (MLSR). We conduct experiments using several MLSR model configurations and evaluate the performance on the image suggestion task. We find that solving the task solely based on the image content is challenging. Enriching the image content with its caption improves the model performance significantly, underscoring the importance of image captions in providing fine-grained concepts and contextual information about images. Our approach presents a practical and effective solution for training LSR retrieval models in multi-modal settings.
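To make the retrieval mechanism concrete, here is a minimal, self-contained sketch (toy data, not an MLSR model) of how sparse lexical vectors are served from an inverted index: each term maps to a posting list, and scoring a query touches only the lists of its nonzero terms.

```python
from collections import defaultdict

# Toy sparse lexical vectors, term -> weight, as an LSR encoder might emit.
docs = {
    "d1": {"dog": 1.4, "park": 0.8, "leash": 0.5},
    "d2": {"cat": 1.1, "sofa": 0.9},
    "d3": {"dog": 0.7, "beach": 1.2},
}

# Build an inverted index: term -> list of (doc_id, weight) postings.
index = defaultdict(list)
for doc_id, vec in docs.items():
    for term, w in vec.items():
        index[term].append((doc_id, w))

def search(query_vec, top_n=2):
    """Score docs by sparse dot product, touching only posting lists of query terms."""
    scores = defaultdict(float)
    for term, qw in query_vec.items():
        for doc_id, dw in index.get(term, []):
            scores[doc_id] += qw * dw
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

print(search({"dog": 1.0, "beach": 0.6}))  # d3 ranks first
```

In the multi-modal setting, such term-weight vectors would be produced for images (optionally enriched with their captions) as well as for text queries; the serving path above is unchanged.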
The biaffine parser of Dozat and Manning (2017) was successfully extended to semantic dependency parsing (SDP) (Dozat and Manning, 2018). Its performance on graphs is surprisingly high given that, without the constraint of producing a tree, all arcs for a given sentence are predicted independently of each other (modulo a shared representation of tokens). To mitigate this independence of decisions, while retaining the O(n^2) complexity and highly parallelizable architecture, we propose simple auxiliary tasks that introduce some form of interdependence between arcs. Experiments on the three English acyclic datasets of SemEval 2015 Task 18 (Oepen et al., 2015) and on French deep syntactic cyclic graphs (Ribeyre et al., 2014) show modest but systematic performance gains over a near state-of-the-art baseline using transformer-based contextualized representations. This provides a simple and robust method for boosting SDP performance.
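For readers unfamiliar with the underlying scorer, the sketch below shows a generic biaffine arc scorer with independent per-arc decisions, the property the auxiliary tasks are meant to compensate for; it is a simplified sketch, not the paper's parser, and the dimensions and threshold are illustrative.

```python
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    """Scores every head/dependent pair independently, in the biaffine style:
    s(i, j) = h_i^T U d_j + w^T [h_i; d_j]."""
    def __init__(self, dim=128):
        super().__init__()
        self.U = nn.Parameter(torch.randn(dim, dim) * 0.01)
        self.w = nn.Linear(2 * dim, 1)

    def forward(self, heads, deps):            # (n, dim) each
        n = heads.size(0)
        bilinear = heads @ self.U @ deps.t()   # (n, n) pairwise scores
        pairs = torch.cat([heads.unsqueeze(1).expand(n, n, -1),
                           deps.unsqueeze(0).expand(n, n, -1)], dim=-1)
        return bilinear + self.w(pairs).squeeze(-1)

tokens = torch.randn(6, 128)                   # contextualized token vectors
arc_scores = BiaffineArcScorer()(tokens, tokens)
arcs = arc_scores.sigmoid() > 0.5              # each arc decided on its own, no tree constraint
print(arcs.shape)  # torch.Size([6, 6])
```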
The Fokker-Planck (FP) equation is a foundational PDE in stochastic processes. However, the curse of dimensionality (CoD) poses a challenge when dealing with high-dimensional FP PDEs. Although Monte Carlo methods and vanilla Physics-Informed Neural Networks (PINNs) have shown the potential to tackle the CoD, both exhibit numerical errors in high dimensions when dealing with the probability density function (PDF) associated with Brownian motion. Point-wise PDF values tend to decrease exponentially as the dimension increases, surpassing the precision of numerical simulations and resulting in substantial errors. Moreover, due to its massive sampling requirements, Monte Carlo cannot offer fast sampling. Modeling the log-likelihood (LL) via vanilla PINNs transforms the FP equation into a difficult HJB equation, whose error grows rapidly with dimension. To this end, we propose a novel approach utilizing a score-based solver to fit the score function in SDEs. The score function, defined as the gradient of the LL, plays a fundamental role in inferring the LL and PDF and enables fast SDE sampling. Three fitting methods, Score Matching (SM), Sliced SM (SSM), and Score-PINN, are introduced. The proposed score-based SDE solver operates in two stages: first, employing SM, SSM, or Score-PINN to acquire the score; and second, solving the LL via an ODE using the obtained score. Comparative evaluations across these methods showcase varying trade-offs. The proposed method is evaluated across diverse SDEs, including anisotropic OU processes, geometric Brownian motion, and Brownian motion with varying eigenspace. We also test various distributions, including Gaussian, Log-normal, Laplace, and Cauchy. The numerical results demonstrate the score-based SDE solver's stability, speed, and performance across different settings, solidifying its potential as a solution to the CoD for high-dimensional FP equations.
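As a concrete example of one of the three fitting methods, the snippet below sketches a Sliced Score Matching loss, which estimates the divergence term with random projections instead of a full Jacobian; the network, data, and number of slices are toy assumptions, not the paper's experimental configuration.

```python
import torch

def sliced_score_matching_loss(score_net, x, n_slices=1):
    """SSM objective: E[ v^T (d s(x)/dx) v + 0.5 (v^T s(x))^2 ] over random
    projection vectors v, avoiding an explicit Jacobian of the score."""
    x = x.detach().requires_grad_(True)
    s = score_net(x)                                    # (batch, dim) score estimate
    loss = 0.0
    for _ in range(n_slices):
        v = torch.randn_like(x)
        sv = (s * v).sum()                              # scalar, per-sample v^T s(x) summed
        grad_sv = torch.autograd.grad(sv, x, create_graph=True)[0]
        loss = loss + (grad_sv * v).sum(-1).mean() + 0.5 * ((s * v).sum(-1) ** 2).mean()
    return loss / n_slices

# Toy check: a linear score net on standard Gaussian data (true score is -x).
net = torch.nn.Linear(4, 4)
data = torch.randn(256, 4)
print(sliced_score_matching_loss(net, data).item())
```

In the two-stage solver, a score network trained with such a loss would then feed the ODE that integrates the log-likelihood along SDE trajectories.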
This paper presents a methodology for the discretization and reduction of a class of one-dimensional Partial Differential Equations (PDEs) with inputs and outputs collocated at the spatial boundaries. The class of systems we consider is known as Boundary-Controlled Port-Hamiltonian Systems (BC-PHSs) and covers a wide range of hyperbolic PDEs with a large variety of boundary inputs and outputs. This is, for instance, the case for waves and beams with Neumann, Dirichlet, or mixed boundary conditions at both sides. In addition, we recall the Loewner framework to reduce the discretized model. We show that if the initial PDE is {\it passive}, the discretized model is as well; moreover, if the initial PDE is {\it impedance energy preserving}, so is the discretized model. The {\it passive} structure is also preserved in the reduced-order model if the selected frequency data have positive real parts. We use the one-dimensional wave equation and the Timoshenko beam as examples to show the versatility of the proposed approach.
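To give a flavor of the Loewner step, the numpy sketch below builds the Loewner and shifted Loewner matrices from frequency samples of a stand-in second-order transfer function and assembles the standard raw Loewner realization; the transfer function and sample points are illustrative, with the points placed in the open right half-plane to echo the positive-real-part condition for passivity preservation.

```python
import numpy as np

# Stand-in transfer function for the PDE (order 2, strictly proper).
H = lambda s: 1.0 / (s**2 + 0.2 * s + 1.0)

# Frequency data in the open right half-plane (Re > 0), matching the condition
# under which the reduced model is claimed to remain passive.
mu = np.array([0.1 + 0.5j, 0.1 + 1.5j])     # left interpolation points
lam = np.array([0.1 + 1.0j, 0.1 + 2.0j])    # right interpolation points
v, w = H(mu), H(lam)

# Loewner and shifted Loewner matrices built from the tangential data.
L = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / (mu[:, None] - lam[None, :])

# Raw Loewner realization (E, A, B, C); order reduction would truncate via the SVD of L.
E, A, B, C = -L, -Ls, v, w
s0 = 0.1 + 1.2j
print(C @ np.linalg.solve(s0 * E - A, B), H(s0))  # the two values agree
```

Since the stand-in system has order two and four interpolation conditions are imposed, the raw realization here reproduces H exactly; for data sampled from a PDE, the SVD-based truncation does the actual reduction.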
Pareto Set Learning (PSL) is a promising approach for approximating the entire Pareto front in multi-objective optimization (MOO) problems. However, existing derivative-free PSL methods are often unstable and inefficient, especially for expensive black-box MOO problems where objective function evaluations are costly. In this work, we address the instability and inefficiency of existing PSL methods with a novel controllable PSL method, called Co-PSL. In particular, Co-PSL consists of two stages: (1) warm-starting Bayesian optimization to obtain high-quality Gaussian process priors, and (2) controllable Pareto set learning to accurately acquire a parametric mapping from preferences to the corresponding Pareto solutions. The former helps stabilize the PSL process and reduces the number of expensive function evaluations. The latter supports real-time trade-off control between conflicting objectives. Performance across synthetic and real-world MOO problems showcases the effectiveness of Co-PSL for expensive multi-objective optimization tasks.
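The sketch below illustrates stage (2) only: learning a preference-to-solution mapping on a cheap toy bi-objective problem with Chebyshev scalarization. Stage (1), the warm-started Gaussian process surrogate for expensive objectives, is elided, and the architecture and problem are illustrative assumptions rather than the Co-PSL implementation.

```python
import torch
import torch.nn as nn

# Toy bi-objective problem (both minimized); in Co-PSL these would be expensive
# black boxes approximated by warm-started Gaussian processes (stage 1).
def objectives(x):
    f1 = (x ** 2).sum(-1)
    f2 = ((x - 1.0) ** 2).sum(-1)
    return torch.stack([f1, f2], dim=-1)

# Stage 2: learn a mapping from a preference vector to a Pareto solution.
psl_net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(psl_net.parameters(), lr=1e-3)

for step in range(2000):
    pref = torch.distributions.Dirichlet(torch.ones(2)).sample((128,))  # random trade-offs
    f = objectives(psl_net(pref))
    loss = (pref * f).max(dim=-1).values.mean()   # Chebyshev scalarization per preference
    opt.zero_grad()
    loss.backward()
    opt.step()

# Real-time trade-off control: query any preference and get a solution instantly.
print(psl_net(torch.tensor([[0.9, 0.1], [0.1, 0.9]])))
```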
Instruction tuning on a mixture of tasks has improved zero-shot capabilities in natural language processing (NLP). Nevertheless, existing methods often learn features that exhibit correlations between instruction-formatted samples and target labels, rather than causal relationships. Termed ``spurious correlation'' in statistics, such a correlation may change drastically in a new task, making the effect of the learned features misleading. To this end, we develop a meta Structural Causal Model (meta-SCM) to integrate different NLP tasks under a single causal structure of the data. Specifically, the meta-SCM introduces multiple latent factors that represent properties of the source context, only some of which causally influence the target labels for a specific task. The key idea is to learn the task-required causal factors and use only those to make predictions for a given task. Theoretically, we prove that the causal factors can be identified without mixing information from the others. Guided by this identifiability, we propose a Structural Instruction Tuning (SIT) method to learn task-required causal representations that mimic the causal factors for each task. The utility of our approach is verified by improvements in zero-shot ability on a range of unseen datasets and tasks.
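One way to picture the mechanism (a loose sketch of the idea, not the paper's SIT architecture) is a module that encodes the context into several latent factors and applies a learned per-task mask, so that predictions use only the factors the task marks as causally required; all names and sizes below are hypothetical.

```python
import torch
import torch.nn as nn

class TaskCausalSelector(nn.Module):
    """Illustrative take on the SIT idea: encode the source context into
    multiple latent factors, then let each task predict from only the
    factors its (learned) mask marks as causally required."""
    def __init__(self, d_in=256, n_factors=8, d_factor=32, n_tasks=4):
        super().__init__()
        self.encoder = nn.Linear(d_in, n_factors * d_factor)
        self.task_mask_logits = nn.Parameter(torch.zeros(n_tasks, n_factors))
        self.head = nn.Linear(n_factors * d_factor, 2)
        self.n_factors, self.d_factor = n_factors, d_factor

    def forward(self, x, task_id):
        z = self.encoder(x).view(-1, self.n_factors, self.d_factor)
        mask = torch.sigmoid(self.task_mask_logits[task_id])   # soft factor selection
        z_task = z * mask[None, :, None]                       # keep task-required factors only
        return self.head(z_task.flatten(1))

model = TaskCausalSelector()
print(model(torch.randn(5, 256), task_id=2).shape)  # torch.Size([5, 2])
```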
Quantum error correcting codes are of primary interest for the evolution towards quantum computing and the quantum Internet. We analyze the performance of stabilizer codes, one of the most important classes for practical implementations, on both symmetric and asymmetric quantum channels. To this end, we first derive the weight enumerator (WE) for the undetectable errors based on the quantum MacWilliams identities. The WE is then used to evaluate tight upper bounds on the error rate of CSS quantum codes with minimum weight decoding. For surface codes we also derive a simple closed-form expression of the bounds over the depolarizing channel. Finally, we introduce a novel approach that combines the knowledge of the WE with a logical operator analysis. This method allows the derivation of the exact asymptotic performance for short codes. For example, on a depolarizing channel with physical error rate $\rho \to 0$ it is found that the logical error rate $\rho_\mathrm{L}$ is asymptotically $\rho_\mathrm{L} \approx 16 \rho^2$ for the $[[9,1,3]]$ Shor code, $\rho_\mathrm{L} \approx 16.3 \rho^2$ for the $[[7,1,3]]$ Steane code, $\rho_\mathrm{L} \approx 18.7 \rho^2$ for the $[[13,1,3]]$ surface code, and $\rho_\mathrm{L} \approx 149.3 \rho^3$ for the $[[41,1,5]]$ surface code. For larger codes, our bound provides $\rho_\mathrm{L} \approx 1215 \rho^4$ and $\rho_\mathrm{L} \approx 663 \rho^5$ for the $[[85,1,7]]$ and the $[[181,1,10]]$ surface codes, respectively.
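To make the asymptotic form concrete: on the depolarizing channel each Pauli error $X$, $Y$, $Z$ occurs with probability $\rho/3$, so if $N_w$ denotes the number of weight-$w$ error patterns causing a logical error (the kind of count the WE and logical operator analysis delivers), the leading-order behavior is
\[
\rho_\mathrm{L} \approx N_w \left(\frac{\rho}{3}\right)^{w},
\qquad\text{e.g.}\qquad
144\left(\frac{\rho}{3}\right)^{2} = \frac{144}{9}\,\rho^{2} = 16\,\rho^{2},
\]
where $N_2 = 144$ is the pattern count implied by the stated Shor-code coefficient, inferred here from the quoted result rather than taken from the paper's derivation.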
Node Importance Estimation (NIE) is crucial for integrating external information into Large Language Models through Retrieval-Augmented Generation. Traditional methods, which focus on static, single-graph characteristics, lack adaptability to new graphs and user-specific requirements. Our proposed method, CADReN, addresses these limitations by introducing a Contextual Anchor (CA) mechanism. This approach enables the network to assess node importance relative to the CA, considering both structural and semantic features within Knowledge Graphs (KGs). Extensive experiments show that CADReN achieves better performance on the cross-graph NIE task, with zero-shot prediction ability. CADReN also matches the performance of previous models on the single-graph NIE task. Additionally, we introduce and open-source two new datasets, RIC200 and WK1K, specifically designed for cross-graph NIE research, providing a valuable resource for future developments in this domain.
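A minimal sketch of the anchor-relative scoring idea (our reading of the CA mechanism, not the released CADReN code) is shown below: importance is computed as attention of each node against user-supplied anchor embeddings, so the same graph yields different rankings for different anchors; all shapes are illustrative.

```python
import torch

def anchor_relative_importance(node_feats, anchor_feats):
    """Illustrative CA-style scoring: node importance is not a fixed graph
    property but the attention of each node against a user-supplied
    contextual anchor (a small set of anchor embeddings)."""
    d = node_feats.size(-1)
    attn = node_feats @ anchor_feats.t() / d ** 0.5   # (nodes, anchors)
    return attn.softmax(0).mean(-1)                   # importance relative to the CA

nodes = torch.randn(100, 64)     # structural + semantic node embeddings from a KG
anchor = torch.randn(3, 64)      # contextual anchor chosen for the user's request
print(anchor_relative_importance(nodes, anchor).topk(5).indices)
```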
Few-shot Knowledge Graph (KG) completion is a focus of current research, where each task aims at querying unseen facts of a relation given its few-shot reference entity pairs. Recent attempts solve this problem by learning static representations of entities and references, ignoring their dynamic properties, i.e., that entities may exhibit diverse roles within task relations and references may make different contributions to queries. This work proposes an adaptive attentional network for few-shot KG completion that learns adaptive entity and reference representations. Specifically, entities are modeled by an adaptive neighbor encoder to discern their task-oriented roles, while references are modeled by an adaptive query-aware aggregator to differentiate their contributions. Through the attention mechanism, both entities and references capture their fine-grained semantic meanings and thus render more expressive representations, which are more predictive for knowledge acquisition in the few-shot scenario. Evaluation on link prediction over two public datasets shows that our approach achieves new state-of-the-art results with different few-shot sizes.
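As a small illustration of the query-aware aggregation idea (a generic attention sketch, not the paper's exact aggregator), the function below weights each reference entity pair by its attention score against the current query instead of averaging references uniformly; dimensions are illustrative.

```python
import torch

def query_aware_aggregate(references, query):
    """Illustrative adaptive aggregator: weight each few-shot reference pair
    by its attention score against the current query, rather than averaging
    references uniformly as a static representation would."""
    scores = references @ query / query.size(-1) ** 0.5   # (n_refs,)
    weights = scores.softmax(0)
    return weights @ references                           # query-specific relation vector

refs = torch.randn(5, 100)    # embeddings of 5 reference entity pairs
query = torch.randn(100)      # embedding of the query entity pair
print(query_aware_aggregate(refs, query).shape)  # torch.Size([100])
```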