Cluster-randomized trials often involve units that are irregularly distributed in space without well-separated communities. In these settings, cluster construction is a critical aspect of the design due to the potential for cross-cluster interference. The existing literature relies on partial interference models, which take clusters as given and assume no cross-cluster interference. We relax this assumption by allowing interference to decay with geographic distance between units. This induces a bias-variance trade-off: constructing fewer, larger clusters reduces bias due to interference but increases variance. We propose new estimators that exclude the units most exposed to cross-cluster interference and show that this substantially reduces asymptotic bias relative to conventional difference-in-means estimators. We provide formal justification for a new design that chooses the number of clusters to balance the asymptotic bias and variance of our estimators and uses unsupervised learning to automate cluster construction.
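To make the trimming idea concrete, the sketch below is a toy illustration under our own assumptions (k-means for cluster construction, a simple margin-based trimming rule, and placeholder outcomes), not the paper's estimator: it drops units whose nearest and second-nearest cluster centers are about equally close, i.e. those most exposed to cross-cluster interference, before taking a difference in means.

```python
import numpy as np
from sklearn.cluster import KMeans

def trimmed_diff_in_means(coords, outcomes, treated, centers, margin):
    # Keep only units whose gap between the nearest and second-nearest
    # cluster center exceeds `margin` -- interior units, least exposed
    # to cross-cluster interference.
    d = np.sort(np.linalg.norm(coords[:, None, :] - centers[None, :, :],
                               axis=2), axis=1)
    keep = d[:, 1] - d[:, 0] > margin
    t, y = treated[keep], outcomes[keep]
    return y[t].mean() - y[~t].mean()

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(500, 2))             # irregular spatial units
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(coords)
treated = np.isin(km.labels_, rng.permutation(8)[:4])  # cluster-level assignment
outcomes = 1.0 * treated + rng.standard_normal(500)    # placeholder outcomes
print(trimmed_diff_in_means(coords, outcomes, treated,
                            km.cluster_centers_, margin=1.0))
```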
To handle the complexities of irregular and incomplete time series data, we propose an invertible method based on Neural Differential Equations (NDEs). While NDE-based methods are powerful for analyzing irregularly sampled time series, they typically do not guarantee invertible transformations in their standard form. Our method combines Neural Controlled Differential Equations (Neural CDEs) with Neural Flows, which ensures invertibility while keeping the computational burden low. It also enables the training of a dual latent space, improving the modeling of temporal dynamics. Our framework excels in both classification and interpolation tasks. At its core is an enhanced dual-latent-state architecture, carefully designed for high precision across a variety of time series tasks. Empirical analysis demonstrates that our method significantly outperforms existing models. This work advances irregular time series analysis, introducing new techniques and offering a versatile tool for diverse practical applications.
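For intuition about invertibility, here is a minimal, self-contained stand-in (not the paper's architecture): a time-conditioned affine coupling layer that is invertible in the state for any t and reduces to the identity at t = 0, in the spirit of Neural Flows.

```python
import torch
import torch.nn as nn

class TimeCouplingFlow(nn.Module):
    """Toy time-conditioned affine coupling layer: invertible in the state
    for every time t and equal to the identity at t = 0."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(nn.Linear(self.d + 1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 2 * (dim - self.d)))

    def _params(self, a, t):
        h = self.net(torch.cat([a, torch.full_like(a[..., :1], t)], dim=-1))
        return h.chunk(2, dim=-1)                      # (log-scale, shift)

    def forward(self, x, t):
        a, b = x[..., :self.d], x[..., self.d:]
        log_s, shift = self._params(a, t)
        return torch.cat([a, b * torch.exp(t * log_s) + t * shift], dim=-1)

    def inverse(self, y, t):
        a, b = y[..., :self.d], y[..., self.d:]
        log_s, shift = self._params(a, t)
        return torch.cat([a, (b - t * shift) * torch.exp(-t * log_s)], dim=-1)

flow = TimeCouplingFlow(dim=4)
x = torch.randn(8, 4)
assert torch.allclose(flow.inverse(flow(x, 0.7), 0.7), x, atol=1e-5)
```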
Autonomous Micro Aerial Vehicles (MAVs) such as quadrotors equipped with manipulation mechanisms have the potential to assist humans in tasks such as construction and package delivery. Cables are a promising option for manipulation mechanisms due to their low weight, low cost, and simple design. However, designing control and planning strategies for cable mechanisms presents challenges due to indirect load actuation, a nonlinear configuration space, and highly coupled system dynamics. In this paper, we propose a novel Nonlinear Model Predictive Control (NMPC) method that enables a team of quadrotors to manipulate a rigid-body payload in all 6 degrees of freedom via suspended cables. As part of the receding-horizon optimization, our approach can concurrently exploit the available mechanical system redundancies to perform additional tasks such as inter-robot separation and obstacle avoidance while respecting payload dynamics and actuator constraints. To address real-time computational requirements and scalability, we employ a lightweight state vector parametrization that includes only the payload states in all six degrees of freedom. This also enables trajectories to be planned directly on the load's configuration space, the $SE(3)$ manifold, further reducing planning complexity. We validate the proposed approach through simulation and real-world experiments.
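As a rough illustration of the receding-horizon idea (a toy point-mass payload in the plane, not the full $SE(3)$ formulation, and with cable-direction and actuator constraints omitted), the sketch below optimizes stacked cable forces over a short horizon and applies only the first step:

```python
import numpy as np
from scipy.optimize import minimize

# One receding-horizon step for a planar point-mass payload pulled by two
# cable force vectors. Decision variables: stacked forces over the horizon.
dt, H, m, g = 0.1, 10, 1.0, 9.81

def rollout(x0, u):
    u = u.reshape(H, 4)                         # [f1x, f1z, f2x, f2z] per step
    x, traj = x0.copy(), []
    for k in range(H):
        f = u[k, :2] + u[k, 2:]                 # net cable force on the load
        acc = f / m - np.array([0.0, g])        # gravity acts along -z
        x = x + dt * np.concatenate([x[2:], acc])   # state = [px, pz, vx, vz]
        traj.append(x.copy())
    return np.array(traj)

def cost(u, x0, ref):
    traj = rollout(x0, u)
    return np.sum((traj[:, :2] - ref) ** 2) + 1e-3 * np.sum(u ** 2)

x0 = np.zeros(4)
ref = np.tile([0.0, 1.0], (H, 1))               # hover 1 m above the origin
u0 = np.tile([0.0, m * g / 2, 0.0, m * g / 2], H)   # warm start: share weight
sol = minimize(cost, u0, args=(x0, ref), method="L-BFGS-B")
print("first-step forces:", sol.x[:4])          # only this step is applied
```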
Feedback transmissions are used to acknowledge correct packet reception, trigger erroneous packet re-transmissions, and adapt transmission parameters (e.g., rate and power). Despite the paramount role of feedback in establishing reliable communication links, the majority of the literature overlooks its impact by assuming genie-aided systems relying on flawless and instantaneous feedback. The idealized feedback assumption is no longer valid for the large-scale Internet of Things (IoT), whose devices are energy-constrained, susceptible to interference, and serve delay-sensitive applications. Furthermore, feedback-free operation is necessary for IoT receivers with stringent energy constraints. In this context, this paper explicitly accounts for the impact of feedback in energy-constrained and delay-sensitive large-scale IoT networks. We consider a time-slotted system with closed-loop and open-loop rate adaptation schemes, where packets are fragmented to operate at a reliable transmission rate satisfying packet delivery deadlines. In the closed-loop scheme, the delivery of each fragment is acknowledged through an error-prone feedback channel. The open-loop scheme has no feedback mechanism, and hence a predetermined fragment repetition strategy is employed to improve transmission reliability. Using tools from stochastic geometry and queueing theory, we develop a novel spatiotemporal framework to optimize the number of fragments for both schemes and the number of repetitions for the open-loop scheme. To this end, we quantify the impact of feedback on network performance in terms of transmission reliability, latency, and energy consumption.
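A toy version of the open-loop design problem (with a simplified Rayleigh fading model of our own standing in for the paper's SINR analysis) illustrates the fragment/repetition trade-off: more fragments lower the per-fragment rate and raise per-transmission reliability, while repetitions add redundancy at an energy cost.

```python
import numpy as np

# Open-loop design: a packet of R bits/symbol is split into F fragments, each
# repeated r times with no feedback. Under Rayleigh fading, one transmission
# at rate R/F succeeds with probability exp(-(2**(R/F) - 1) / snr).
R, snr, deadline = 8.0, 10.0, 24        # rate budget, mean SNR, slots

best = None
for F in range(1, deadline + 1):
    for r in range(1, deadline // F + 1):
        p = np.exp(-(2 ** (R / F) - 1) / snr)   # per-transmission success
        p_frag = 1 - (1 - p) ** r               # fragment: any of r copies
        reliability = p_frag ** F               # packet: all F fragments
        energy = F * r                          # transmissions per packet
        if reliability >= 0.99 and (best is None or energy < best[0]):
            best = (energy, F, r, reliability)

print("min-energy (energy, F, r, reliability) at 99% target:", best)
```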
Gaussian process state-space models (GPSSMs) are a versatile and principled family of nonlinear dynamical system models. However, existing variational learning and inference methods for GPSSMs often necessitate optimizing a substantial number of variational parameters, leading to inadequate performance and efficiency. To overcome this issue, we propose incorporating the ensemble Kalman filter (EnKF), a well-established model-based filtering technique, into the variational inference framework to approximate the posterior distribution of latent states. This use of the EnKF can effectively exploit the dependencies between latent states and GP dynamics while eliminating the need to parameterize the variational distribution, thereby significantly reducing the number of variational parameters. Moreover, we show that our proposed algorithm allows straightforward evaluation of an approximate evidence lower bound (ELBO) in variational inference by simply summing multiple terms with readily available closed-form solutions. Leveraging automatic differentiation tools, we can therefore maximize the ELBO and train the GPSSM efficiently. We also extend the proposed algorithm to the online setting and provide detailed algorithmic analyses and insights. Extensive evaluation on diverse real and synthetic datasets demonstrates the superiority of our EnKF-aided variational inference algorithms in terms of learning and inference performance compared to existing methods.
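For reference, a generic stochastic EnKF analysis step looks as follows (a standard textbook form with a linear observation matrix H; in the GPSSM setting the forecast ensemble would be propagated through sampled GP dynamics):

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng=None):
    """One stochastic EnKF analysis step. The forecast ensemble (N x dx) is
    corrected toward observation y using sample covariances, so no
    parameterized variational posterior over the latent state is needed."""
    rng = np.random.default_rng(rng)
    N = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)            # centered states
    Yp = ensemble @ H.T                             # predicted observations
    Yc = Yp - Yp.mean(axis=0)
    S = Yc.T @ Yc / (N - 1) + R                     # innovation covariance
    K = (X.T @ Yc / (N - 1)) @ np.linalg.inv(S)     # Kalman gain estimate
    # Perturbed observations keep the analysis-ensemble spread consistent.
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (y_pert - Yp) @ K.T

ens = np.random.default_rng(0).standard_normal((100, 2))   # forecast ensemble
H, R = np.array([[1.0, 0.0]]), np.eye(1) * 0.1
posterior_ens = enkf_update(ens, H, np.array([0.5]), R, rng=1)
```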
The ability to derive useful information by asking clarifying questions (ACQ) is an important element of real-life collaboration on reasoning tasks, such as question answering (QA). Existing natural language ACQ challenges, however, evaluate generations based on word overlap rather than the value of the information itself. Word overlap is often an inappropriate metric for question generation, since many different questions could be useful in a given situation, and a single question can be phrased in many different ways. Instead, we propose evaluating questions pragmatically, based on the value of the information they retrieve. Here we present a definition and framework for natural language pragmatic asking of clarifying questions (PACQ): the problem of generating questions that result in answers useful for a reasoning task. We also present fact-level masking (FLM), a procedure for converting natural language datasets into self-supervised PACQ datasets by omitting particular critical facts. Finally, we generate a PACQ dataset from the HotpotQA dataset using FLM and evaluate several zero-shot language models on it. Our experiments show that current zero-shot models struggle to ask questions that retrieve useful information, compared to human annotators. These results demonstrate an opportunity to use FLM datasets and the PACQ framework to objectively evaluate and improve question generation and other language models.
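A minimal sketch of how FLM could turn a multi-hop QA record into a PACQ instance (field names are HotpotQA-style but illustrative; the actual procedure may differ in its details):

```python
import random

def fact_level_mask(example, rng=random.Random(0)):
    """Hide one supporting fact from a multi-hop QA record. The model must
    ask a clarifying question that recovers the masked fact from an oracle
    before answering the original question."""
    facts = list(example["supporting_facts"])
    masked = facts.pop(rng.randrange(len(facts)))
    return {
        "question": example["question"],
        "visible_facts": facts,      # context with one critical fact removed
        "masked_fact": masked,       # held by the oracle, hidden from the model
        "answer": example["answer"],
    }

ex = {
    "question": "Which country is the band that wrote 'Song X' from?",
    "supporting_facts": ["Band Y wrote 'Song X'.", "Band Y is from Norway."],
    "answer": "Norway",
}
print(fact_level_mask(ex))
```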
The memory constraint of always-on devices is one of the major concerns when deploying speech processing models on such devices. While larger models trained on sufficiently large amounts of data generally perform better, making them fit in device memory is a demanding challenge. In this paper, we aim to reduce model size by reparameterizing model weights across Transformer encoder layers under an assumed weight composition and structure. More specifically, inspired by ResNet and the more recent LoRA work, we propose an approach named ResidualTransformer, where each weight matrix in a Transformer layer comprises 1) a full-rank component shared with its adjacent layers, and 2) a low-rank component unique to itself. The low-rank matrices account for only a small increase in model size. In addition, we add diagonal weight matrices to improve the modeling capacity of the low-rank matrices. Experiments on our 10k-hour speech recognition and speech translation tasks show that the Transformer encoder size can be reduced by ~3x with only slight performance degradation.
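The weight composition is easy to sketch. Below is a minimal single-matrix illustration (for brevity, one full-rank matrix is shared by all layers rather than by adjacent pairs, and the shapes and initializations are our own choices):

```python
import torch
import torch.nn as nn

class ResidualLinear(nn.Module):
    """Linear layer whose weight = shared full-rank matrix + layer-specific
    low-rank term A @ B + layer-specific diagonal term. Only A, B, and the
    diagonal add per-layer parameters."""
    def __init__(self, shared_weight, dim, rank=8):
        super().__init__()
        self.shared = shared_weight                    # shared across layers
        self.A = nn.Parameter(torch.zeros(dim, rank))  # LoRA-style zero init
        self.B = nn.Parameter(torch.randn(rank, dim) * 0.01)
        self.diag = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        W = self.shared + self.A @ self.B + torch.diag(self.diag)
        return x @ W.T

dim = 256
shared = nn.Parameter(torch.randn(dim, dim) * dim ** -0.5)
layers = nn.ModuleList(ResidualLinear(shared, dim) for _ in range(4))
x = torch.randn(2, dim)
for layer in layers:
    x = layer(x)
```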
Economic and policy factors are driving a continuous increase in the adoption and usage of electric vehicles (EVs). However, despite being a cleaner alternative to combustion-engine vehicles, EVs negatively impact the lifespan of microgrid equipment and the energy balance due to increased power demand and the timing of their usage. In our view, grid management should leverage the scheduling flexibility of EVs to support local network balancing through active participation in demand response programs. In this paper, we propose a model-free solution that leverages Deep Q-Learning to schedule the charging and discharging activities of EVs within a microgrid so as to align with a target energy profile provided by the distribution system operator. We adapted the Bellman equation to assess the value of a state based on specific rewards for EV scheduling actions, used a neural network to estimate the Q-values of the available actions, and used the epsilon-greedy algorithm to balance exploitation and exploration in meeting the target energy profile. The results are promising: the proposed solution can effectively schedule EV charging and discharging actions to align with the target profile, achieving a Pearson correlation coefficient of 0.99, and handles the dynamic scheduling situations arising from e-mobility, relying only on data, with no prior knowledge of EV or microgrid dynamics.
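The core DQN ingredients described above, epsilon-greedy action selection and the Bellman-equation target, can be sketched as follows (the state and action encodings are illustrative, not the paper's):

```python
import random
import torch
import torch.nn as nn

# State = (state of charge, hour of day, gap to target profile); actions =
# {charge, idle, discharge} for a single EV. Both encodings are assumptions.
q_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.99, 0.1

def select_action(state):
    if random.random() < epsilon:              # explore
        return random.randrange(3)
    with torch.no_grad():                      # exploit: greedy on Q-values
        return int(q_net(state).argmax())

def td_update(state, action, reward, next_state, done):
    # Bellman target: r + gamma * max_a' Q(s', a'), truncated at episode end.
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max() * (1.0 - done)
    loss = (q_net(state)[action] - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

s = torch.randn(3)
a = select_action(s)
td_update(s, a, reward=1.0, next_state=torch.randn(3), done=0.0)
```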
Multi-sensor data that track system operating behaviors are widely available nowadays from various engineering systems. Measurements from each sensor over time form a curve and can be viewed as functional data. Clustering of these multivariate functional curves is important for studying the operating patterns of systems. One complication in such applications is the possible presence of sensors whose data contain no relevant information. Hence, it is desirable for the clustering method to be equipped with an automatic sensor selection procedure. Motivated by a real engineering application, we propose a functional data clustering method that simultaneously removes noninformative sensors and groups functional curves into clusters using the informative sensors. Functional principal component analysis is used to transform the multivariate functional data into a coefficient matrix for data reduction. We then model the transformed data by a Gaussian mixture distribution to perform model-based clustering with variable selection. Three types of penalties, the individual, variable, and group penalties, are considered to achieve automatic variable selection. Extensive simulations are conducted to assess the clustering and variable selection performance of the proposed methods. The application of the proposed methods to an engineering system with multiple sensors shows the promise of the methods and reveals interesting patterns in the sensor data.
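A toy version of the pipeline (ordinary PCA on discretized curves standing in for FPCA, and plain model-based clustering in place of the penalized, variable-selecting EM):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Project each sensor's curves onto a few principal components, stack the
# scores into a coefficient matrix, and fit a Gaussian mixture.
rng = np.random.default_rng(0)
n_units, n_time, n_sensors = 60, 100, 3
curves = rng.standard_normal((n_units, n_sensors, n_time))
curves[:30, 0] += np.sin(np.linspace(0, 3, n_time))   # informative sensor 0

scores = np.hstack([
    PCA(n_components=4).fit_transform(curves[:, s]) for s in range(n_sensors)
])
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(scores)
print(labels[:30].mean(), labels[30:].mean())   # clusters should split 0-29 / 30-59
```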
Optical lithography is the main enabler of semiconductor manufacturing. It requires extensive processing to perform the Resolution Enhancement Techniques (RETs) needed to transfer design data into working Integrated Circuits (ICs). The processing power and computational runtime for RET tasks are ever-increasing due to the continuous reduction of feature sizes and the expansion of chip areas. State-of-the-art research has sought Machine Learning (ML) technologies to reduce runtime and computational power; however, they are still not used in production. In this study, we analyze the reasons holding back ML computational lithography from being production-ready and present a novel, highly scalable end-to-end flow that enables production-ready ML-RET correction.
Human-in-the-loop learning aims to train an accurate prediction model at minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing work on human-in-the-loop from a data perspective and classify it into three categories with a progressive relationship: (1) improving model performance through data processing, (2) improving model performance through interventional model training, and (3) the design of independent human-in-the-loop systems. Using this categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and provide a brief classification and discussion of applications in natural language processing, computer vision, and other domains. We also present open challenges and opportunities. This survey intends to provide a high-level summary of human-in-the-loop research and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.