As large language models see increasingly widespread use, techniques like parameter-efficient fine-tuning and other methods for controlled generation are gaining traction for customizing models and managing their outputs. However, precisely controlling how prompts influence these models remains an open challenge. In response, we introduce ControlPE (Continuously Controllable Prompt Engineering). ControlPE enables finer adjustments to prompt effects, complements existing prompt engineering, and effectively controls continuous targets. The approach harnesses LoRA (Low-Rank Adaptation) to create an effect akin to prompt weighting, enabling fine-grained adjustment of a prompt's impact. Our methodology involves generating specialized datasets for prompt distillation, distilling these prompts into a LoRA model, and carefully adjusting the LoRA merging weight to regulate the prompt's influence. This provides a dynamic and adaptable tool for prompt control. Our experiments validate the practicality and efficacy of ControlPE: it proves to be a promising solution for controlling a variety of prompts, ranging from short-response prompts and refusal prompts to chain-of-thought prompts.
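The core mechanism, scaling a distilled low-rank update before merging it into the base weights, can be sketched as follows. This is a minimal illustration under our own assumptions: the function name, shapes, and the interpretation of `alpha` as the merging weight are ours, not the paper's implementation.

```python
import numpy as np

def merge_lora(W, A, B, alpha):
    """Merge a low-rank (LoRA-style) update into base weights.

    W: base weight matrix (d_out x d_in)
    A: (d_out x r) and B: (r x d_in), the low-rank factors
    alpha: merging weight; 0 recovers the base model, 1 applies the
           distilled prompt's full effect, intermediate values
           interpolate continuously between the two.
    """
    return W + alpha * (A @ B)
```

Sweeping `alpha` continuously is what yields prompt-weighting-like behavior without re-running the prompt itself.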
To enable the electrification of transportation systems, it is important to understand how technologies such as grid storage, solar photovoltaic systems, and control strategies can aid the deployment of electric vehicle charging at scale. In this work, we present EV-EcoSim, a co-simulation platform that couples electric vehicle charging, battery systems, solar photovoltaic systems, grid transformers, control strategies, and power distribution systems to perform cost quantification and analyze the impacts of electric vehicle charging on the grid. This Python-based platform can run a receding-horizon control scheme for real-time operation and a one-shot control scheme for planning problems, with multi-timescale dynamics for the different subsystems to simulate realistic scenarios. We demonstrate the utility of EV-EcoSim through a case study focused on the economic evaluation of battery size for reducing electricity costs while considering the impacts of fast charging on the power distribution grid. We present qualitative and quantitative evaluations of candidate battery sizes in tabulated results that delineate the trade-offs between the candidate sizing solutions, providing comprehensive insights for decision-making under uncertainty. Additionally, we demonstrate the implications of battery controller model fidelity for system costs and show that controller fidelity can completely change the decisions made when planning an electric vehicle charging site.
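The receding-horizon pattern (plan over a look-ahead window, apply only the first action, then re-plan) can be illustrated with a toy battery-arbitrage controller. This is not EV-EcoSim's API; the price-vs-window-mean rule and all names here are our own simplification.

```python
def receding_horizon_dispatch(prices, soc0, capacity, rate, horizon):
    # Toy battery arbitrage with a receding horizon: at each step, look
    # `horizon` prices ahead, charge if the current price is below the
    # window mean, discharge if above, then re-plan at the next step.
    soc, schedule = soc0, []
    for t in range(len(prices)):
        window = prices[t:t + horizon]
        mean = sum(window) / len(window)
        if prices[t] < mean and soc + rate <= capacity:
            action = rate            # charge while electricity is cheap
        elif prices[t] > mean and soc - rate >= 0:
            action = -rate           # discharge while it is expensive
        else:
            action = 0.0             # hold state of charge
        soc += action
        schedule.append(action)
    return schedule
```

A real implementation would replace the heuristic with an optimization over the forecast window, but the loop structure is the same.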
A general theory of efficient estimation for ergodic diffusion processes sampled at high frequency with an infinite time horizon is presented. High-frequency sampling is common in many applications, with finance as a prominent example. The theory is formulated in terms of approximate martingale estimating functions and covers a large class of estimators, including most of the previously proposed estimators for diffusion processes. Easily checked conditions ensuring that an estimating function is an approximate martingale are derived, and general conditions ensuring consistency and asymptotic normality of estimators are given. Most importantly, simple conditions are given that ensure rate optimality and efficiency. Rate-optimal estimators of parameters in the diffusion coefficient converge faster than estimators of drift-coefficient parameters because they take advantage of the information in the quadratic variation. The conditions facilitate the choice among the multitude of estimators that have been proposed for diffusion models. Optimal martingale estimating functions in the sense of Godambe and Heyde, and their high-frequency approximations, are shown under weak conditions to satisfy the conditions for rate optimality and efficiency. This provides a natural, feasible method of constructing explicit rate-optimal and efficient estimating functions by solving a linear equation.
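To fix ideas, estimating functions in this setting typically have the following form (a sketch in commonly used notation; the symbols are generic rather than necessarily the paper's):

```latex
G_n(\theta) = \sum_{i=1}^{n} g\left(\Delta_n, X_{t_{i-1}}, X_{t_i}; \theta\right),
\qquad \hat{\theta}_n \ \text{solves} \ G_n(\hat{\theta}_n) = 0,
```

and $g$ is an approximate martingale estimating function when its conditional expectation does not vanish exactly but is of high order in the sampling interval $\Delta_n$:

```latex
E_\theta\!\left[\, g(\Delta_n, X_{t_{i-1}}, X_{t_i}; \theta) \mid X_{t_{i-1}} \,\right]
= \Delta_n^{\kappa}\, R(\Delta_n, X_{t_{i-1}}; \theta)
```

for some power $\kappa$ and a suitably bounded remainder $R$; an exact martingale estimating function corresponds to a right-hand side of zero.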
This article studies efficient/trace-optimal designs for crossover trials with multiple responses recorded from each subject in each time period. A multivariate fixed-effects model is proposed with direct and carryover effects corresponding to the multiple responses, and with an error dispersion matrix allowing for correlations both between and within responses. Two correlation structures, namely the proportional and the generalized Markov covariances, are studied. The corresponding information matrices for direct effects under the two covariances are used to determine efficient designs. The efficiency of orthogonal array designs of Type $I$ and strength $2$ is investigated for the two covariance forms. To motivate the multivariate crossover designs, a gene expression dataset in a $3 \times 3$ framework is utilized.
We propose a method to numerically compute fractional derivatives (or the fractional Laplacian) on the whole real line via Riesz fractional integrals. The compactified real line is divided into a number of intervals, thus amounting to a multi-domain approach; after transformations in accordance with the underlying $Z_{q}$ curve ensuring analyticity of the respective integrands, the integrals over the different domains are computed with a Clenshaw-Curtis algorithm. As an example, we consider solitary waves for fractional Korteweg-de Vries equations and compare these to results obtained with a discrete Fourier transform.
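The Clenshaw-Curtis component is classical; the nodes and weights on $[-1,1]$ can be written down independently of the paper's multi-domain transformations. The sketch below is a direct transcription of the standard algorithm (our own function name).

```python
import numpy as np

def clenshaw_curtis(n):
    # Nodes and weights of Clenshaw-Curtis quadrature on [-1, 1],
    # using the n + 1 Chebyshev-Lobatto points cos(k*pi/n).
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    w = np.zeros(n + 1)
    ii = np.arange(1, n)                  # interior point indices
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n**2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[ii]) / (4 * k**2 - 1)
        v -= np.cos(n * theta[ii]) / (n**2 - 1)
    else:
        w[0] = w[n] = 1.0 / n**2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[ii]) / (4 * k**2 - 1)
    w[ii] = 2.0 * v / n
    return x, w
```

With `x, w = clenshaw_curtis(n)`, the quadrature `w @ f(x)` is exact for polynomials up to degree `n` and converges spectrally for analytic integrands, which is what makes it attractive for the transformed Riesz integrals.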
A technique that allows a formation-enforcing control (FEC) scheme derived from graph rigidity theory to interface with a realistic relative localization system onboard lightweight Unmanned Aerial Vehicles (UAVs) is proposed in this paper. The proposed methodology enables reliable real-world deployment of UAVs in tight formation using real relative localization systems burdened by non-negligible sensory noise, which is typically not fully taken into account in FEC algorithms. The proposed solution decomposes the gradient descent-based FEC command into interpretable elements and then modifies these individually based on the estimated distribution of sensory noise, such that the resulting action limits the probability of overshooting the desired formation. The behavior of the system has been analyzed, and the proposed solution has been compared to pure gradient descent in real-world experiments, where it achieved significantly better performance in terms of oscillations, deviation from the desired state, and convergence time.
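The underlying idea, shrinking a gradient-descent command by a quantile of the sensing noise so the vehicle rarely overshoots the desired geometry, can be illustrated with a one-dimensional toy. This is our own simplification, not the paper's decomposition: the Gaussian noise model, the `p_overshoot` parameter, and the function name are all assumptions.

```python
from statistics import NormalDist

def cautious_step(d_meas, d_des, sigma, gain=1.0, p_overshoot=0.05):
    # Shrink the measured distance error by a Gaussian noise quantile
    # so that the commanded move overshoots the desired distance with
    # probability at most p_overshoot (1-D toy version of the idea).
    z = NormalDist().inv_cdf(1.0 - p_overshoot)
    e = d_meas - d_des
    if e > 0:
        e = max(e - z * sigma, 0.0)   # move, but stop short of the noise band
    elif e < 0:
        e = min(e + z * sigma, 0.0)
    return gain * e
```

When the noise dwarfs the measured error the command collapses to zero, which is exactly the behavior that suppresses noise-driven oscillation around the target formation.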
Within Bayesian nonparametrics, dependent Dirichlet process mixture models provide a highly flexible approach for conducting inference about the conditional density function. However, several formulations of this class make either rather restrictive modelling assumptions or involve intricate algorithms for posterior inference, preventing their widespread use. In response to these challenges, we present a flexible, versatile, and computationally tractable model for density regression based on a single-weights dependent Dirichlet process mixture of normals for univariate continuous responses. We assume an additive structure for the mean of each mixture component and incorporate the effects of continuous covariates through smooth nonlinear functions. The key components of our modelling approach are penalised B-splines and their bivariate tensor-product extension. Our proposed method also seamlessly accommodates parametric effects of categorical covariates, linear effects of continuous covariates, interactions between categorical and/or continuous covariates, varying-coefficient terms, and random effects; we therefore refer to our model as a Dirichlet process mixture of normal structured additive regression models. A noteworthy feature of our method is its efficiency in posterior simulation through Gibbs sampling, as closed-form full conditional distributions for all model parameters are available. Results from a simulation study demonstrate that our approach successfully recovers true conditional densities and other regression functionals in various challenging scenarios. Applications to toxicology, disease-diagnosis, and agricultural studies are provided and further underpin the broad applicability of our modelling framework. An R package, \texttt{DDPstar}, implementing the proposed method is publicly available at \url{//bitbucket.org/mxrodriguez/ddpstar}.
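The penalised B-spline ingredient is standard and can be sketched on its own: a Cox-de Boor basis plus a difference-matrix roughness penalty. Knot placement, penalty order, and function names below are generic placeholders, not the paper's specific choices.

```python
import numpy as np

def bspline_basis(x, knots, degree):
    # Cox-de Boor recursion: evaluate all B-spline basis functions at x.
    m = len(knots) - 1
    B = np.zeros((len(x), m))
    for i in range(m):                      # degree-0 indicator functions
        B[:, i] = (knots[i] <= x) & (x < knots[i + 1])
    for d in range(1, degree + 1):          # raise the degree step by step
        Bn = np.zeros((len(x), m - d))
        for i in range(m - d):
            left = knots[i + d] - knots[i]
            right = knots[i + d + 1] - knots[i + 1]
            if left > 0:
                Bn[:, i] += (x - knots[i]) / left * B[:, i]
            if right > 0:
                Bn[:, i] += (knots[i + d + 1] - x) / right * B[:, i + 1]
        B = Bn
    return B

def difference_penalty(n_basis, order=2):
    # P-spline roughness penalty D'D, with D the order-th difference matrix
    D = np.diff(np.eye(n_basis), order, axis=0)
    return D.T @ D
```

The penalised fit then shrinks adjacent spline coefficients toward each other; the tensor product of two such bases gives the bivariate extension mentioned above.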
Recently, there has been a surge in the popularity of pre-trained large language models (LLMs) such as GPT-4, sweeping across the Natural Language Processing (NLP) and Computer Vision (CV) communities. These LLMs have demonstrated advanced multi-modal understanding capabilities and strong performance across various benchmarks. They have started to embody traits of artificial general intelligence, which offers vital guidance for enhancing brain-like characteristics within visual encoding models. Hence, this paper proposes a new multi-modal training paradigm, aligned with an LLM, for encoding fMRI activity in the visual cortex. Based on this paradigm, we trained an fMRI encoding model named the LLM-Visual Encoding Model (LLM-VEM). Specifically, we use an LLM (miniGPT4) to generate descriptive text for all stimulus images, forming a high-quality textual description set. We then use a pre-trained text encoder (CLIP) to process these detailed descriptions, obtaining text embedding features. Next, we use a contrastive loss function to minimize the distance between the image embedding features and the text embedding features, completing the alignment of the stimulus image and text information. With the assistance of the pre-trained LLM, this alignment process facilitates better learning of the visual encoding model, resulting in higher precision. The final experimental results indicate that our training paradigm significantly enhances the performance of the visual encoding model.
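The alignment step is in the family of CLIP-style symmetric contrastive objectives; a NumPy sketch of such a batch loss follows. This is a generic illustration, not the paper's training code: the temperature value and function names are our assumptions.

```python
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    # L2-normalize both batches, then form cosine-similarity logits;
    # matching image/text pairs sit on the diagonal.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    labels = np.arange(len(img))

    def xent(l):
        # cross-entropy that rewards the diagonal (matched-pair) entries
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # symmetric: image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls each image embedding toward the embedding of its own description and away from the other descriptions in the batch, which is the alignment effect described above.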
In a topology optimization setting, design-dependent fluidic pressure loads pose several challenges, as their direction, magnitude, and location alter with topology evolution. This paper offers a compact 100-line MATLAB code, TOPress, for topology optimization of structures subjected to fluidic pressure loads using the method of moving asymptotes. The code is intended for pedagogical purposes and aims to ease beginners' and students' learning of topology optimization with design-dependent fluidic pressure loads. TOPress is developed per the approach first reported in Kumar et al. (Struct Multidisc Optim 61(4):1637-1655, 2020). Darcy's law, in conjunction with a drainage term, is used to model the applied pressure load. The consistent nodal loads are determined from the obtained pressure field. The employed approach facilitates inexpensive computation of the load sensitivities using the adjoint-variable method. Compliance minimization problems subject to a volume constraint are solved. The success and efficacy of the code are demonstrated by solving benchmark numerical examples involving pressure loads, wherein the importance of load sensitivities is also demonstrated. TOPress contains six main parts, is described in detail, and can be extended to solve different problems. Steps to include a projection filter are provided to achieve load-bearing designs close to~0-1. The code is provided in Appendix~B and can also be downloaded along with its extensions from \url{//github.com/PrabhatIn/TOPress}.
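The Darcy-with-drainage pressure model can be illustrated in one dimension: the drainage term makes pressure decay through low-permeability (solid) material instead of propagating unchanged. This is a finite-difference toy with made-up coefficients, not TOPress's finite-element formulation.

```python
import numpy as np

def darcy_pressure_1d(n, K, h, p_in=1.0, p_out=0.0):
    # Solve -d/dx(K dp/dx) + h*p = 0 on a uniform grid over [0, 1]
    # with fixed pressures at both ends. K is the permeability and
    # h the drainage coefficient (toy analogue of the Darcy + drainage
    # model used to build design-dependent pressure loads).
    dx = 1.0 / (n - 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0           # Dirichlet boundary conditions
    b[0], b[-1] = p_in, p_out
    for i in range(1, n - 1):           # interior finite-difference rows
        A[i, i - 1] = A[i, i + 1] = -K / dx**2
        A[i, i] = 2 * K / dx**2 + h
    return np.linalg.solve(A, b)
```

With no drainage the pressure varies linearly between the boundary values; increasing `h` localizes the pressure drop, which is what lets the load follow the evolving solid boundary.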
Safety assessment of crash and conflict avoidance systems is important for both the automotive industry and other stakeholders. One type of system that needs such an assessment is a driver monitoring system (DMS) with some intervention (e.g., warning or nudging) when the driver looks off-road for too long. Although using computer simulation to assess safety systems is becoming increasingly common, it is not yet commonly used for systems that affect driver behavior, such as DMSs. Models that generate virtual crashes, taking crash-causation mechanisms into account, are needed to assess these systems. However, few such models exist, and those that do have not been thoroughly validated on real-world data. This study aims to address this research gap by validating a rear-end crash-causation model which is based on four crash-causation mechanisms related to driver behavior: a) off-road glances, b) too-short headway, c) not braking with the maximum deceleration possible, and d) sleepiness (not reacting before the crash). The pre-crash kinematics were obtained from the German GIDAS in-depth crash database. Challenges with the validation process were identified and addressed. Most notably, a process was developed to transform the generated crashes to mimic the crash severity distribution in GIDAS. This step was necessary because GIDAS does not include property-damage-only (PDO) crashes, while the generated crashes cover the full range of severities (including low-severity crashes, of which many are PDOs). Our results indicate that the proposed model is a reasonably good crash generator. We further demonstrated that the model is a valid method for assessing DMSs in virtual simulations: it captures the safety impact of shortening the longest off-road glances. As expected, cutting away long off-road glances substantially reduces the number of crashes and reduces the average delta-v.
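The severity-matching step can be mimicked with a simple bin-and-resample scheme: bin the generated crashes by severity, then draw from the bins with the target database's bin probabilities. This is our own toy sketch; the actual severity measure, bins, and matching procedure used with GIDAS are not specified in this abstract.

```python
import random
from bisect import bisect_right

def resample_to_target(delta_vs, bin_edges, target_probs, n_out, seed=0):
    # Group generated crashes into severity (delta-v) bins, then draw
    # crashes so the output severity distribution follows target_probs.
    rng = random.Random(seed)
    bins = [[] for _ in target_probs]
    for dv in delta_vs:
        i = bisect_right(bin_edges, dv) - 1
        if 0 <= i < len(bins):
            bins[i].append(dv)
    out = []
    for _ in range(n_out):
        i = rng.choices(range(len(bins)), weights=target_probs)[0]
        if bins[i]:                      # skip empty severity bins
            out.append(rng.choice(bins[i]))
    return out
```

In the study's setting, a scheme of this kind compensates for GIDAS lacking property-damage-only crashes: low-severity bins are down-weighted so the generated set mirrors the database's severity mix.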
As a pivotal approach in machine learning and data science, manifold learning aims to uncover the intrinsic low-dimensional structure within complex nonlinear manifolds in high-dimensional space. By exploiting the manifold hypothesis, various techniques for nonlinear dimension reduction have been developed to facilitate visualization, classification, clustering, and the extraction of key insights. Although existing manifold learning methods have achieved remarkable successes, they still suffer from extensive distortions of the global structure, which hinders the understanding of underlying patterns. Scalability issues also limit their applicability to large-scale data. Here, we propose a scalable manifold learning (scML) method that can handle large-scale, high-dimensional data efficiently. It starts by seeking a set of landmarks to construct the low-dimensional skeleton of the entire dataset and then incorporates the non-landmarks into the learned space based on constrained locally linear embedding (CLLE). We empirically validated the effectiveness of scML on synthetic datasets and real-world benchmarks of different types, and applied it to analyze single-cell transcriptomics data and detect anomalies in electrocardiogram (ECG) signals. scML scales well with increasing data sizes and embedding dimensions, and exhibits promising performance in preserving the global structure. The experiments also demonstrate notable robustness of embedding quality as the sampling rate decreases.
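The non-landmark insertion step is in the spirit of locally linear embedding: express each new point as a weighted combination of nearby landmarks in the original space, then reuse those weights in the embedded space. The sketch below uses the standard LLE reconstruction weights as an illustration, not the paper's constrained (CLLE) variant; the regularization constant and neighborhood size are our assumptions.

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    # Solve for weights that best reconstruct x from its neighbors,
    # subject to the weights summing to one (standard LLE step).
    Z = neighbors - x                        # shifted neighborhood (k, d)
    G = Z @ Z.T                              # local Gram matrix
    G += reg * np.trace(G) * np.eye(len(G))  # regularize for stability
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()

def embed_point(x, landmarks_hd, landmarks_ld, k=5):
    # Place a non-landmark point in the learned low-dimensional space
    # using weights computed among its k nearest landmarks.
    d = np.linalg.norm(landmarks_hd - x, axis=1)
    idx = np.argsort(d)[:k]
    w = lle_weights(x, landmarks_hd[idx])
    return w @ landmarks_ld[idx]
```

Because only distances to the landmark skeleton are needed, this insertion step is what lets the overall method scale to large datasets.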