In this paper, we study the existence of equilibria in a single-leader-multiple-follower game with decision-dependent chance constraints (DDCCs), where decision-dependent uncertainties (DDUs) appear in the followers' constraints. DDUs are uncertainties whose distributions are affected by the leader's strategy, while the leader cannot capture their exact probability distributions. To address such problems, we first use decision-dependent ambiguity sets built on moment information, together with Cantelli's inequality, to transform the DDCCs into second-order cone constraints. This simplifies the game model by removing the unknown probability distributions. We then prove, via Kakutani's fixed-point theorem, that the game admits at least one equilibrium point. Finally, a numerical example illustrates the impact of DDUs on the equilibrium of such game models.
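As a brief, hedged illustration of this reformulation (the classical moment-based distributionally robust result; the notation is ours, with the leader's decision $x$ entering the moments schematically): for an ambiguity set $\mathcal{P}(\mu(x),\Sigma(x))$ of distributions with mean $\mu(x)$ and covariance $\Sigma(x)$, Cantelli's inequality gives

$$
\inf_{P\in\mathcal{P}(\mu(x),\Sigma(x))}\Pr_{\xi\sim P}\!\left\{\xi^{\top} y\le b\right\}\ge 1-\varepsilon
\;\Longleftrightarrow\;
\mu(x)^{\top} y+\sqrt{\tfrac{1-\varepsilon}{\varepsilon}}\,\bigl\|\Sigma(x)^{1/2}y\bigr\|_{2}\le b,
$$

which is a second-order cone constraint in the follower's decision $y$ once the leader's decision $x$ is fixed.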
In this paper, we consider the contamination of modern large language models by code generation test sets. We discuss three possible sources of such contamination and show findings supporting each of them: (i) direct data leakage, (ii) indirect data leakage through the use of synthetic data, and (iii) overfitting to evaluation sets during model selection. To address this, we release Less Basic Python Problems (LBPP): an uncontaminated new benchmark of 161 prompts with their associated Python solutions. LBPP is released at https://huggingface.co/datasets/CohereForAI/lbpp.
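For reference, the benchmark should be loadable with the Hugging Face `datasets` library (a minimal sketch using the dataset ID from the URL above; the exact split and column names are assumptions to be checked against the dataset card):

```python
from datasets import load_dataset

# Load LBPP from the Hugging Face Hub.
ds = load_dataset("CohereForAI/lbpp")

# Inspect the available splits and columns; the schema
# (e.g., prompt/solution field names) should be read off the
# dataset card rather than assumed.
print(ds)
```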
In this paper, we propose a generalized shift-splitting (GSS) preconditioner, along with two relaxed variants, to solve the double saddle point problem (DSPP). The convergence of the associated GSS iterative method is analyzed, and sufficient conditions for its convergence are established. Spectral analyses are performed to derive sharp bounds on the eigenvalues of the preconditioned matrices. Numerical experiments on examples arising from PDE-constrained optimization problems demonstrate the effectiveness and robustness of the proposed preconditioners compared with existing state-of-the-art preconditioners.
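For background, here is a sketch of the classical shift-splitting idea (due to Bai, Yin, and Su, originally for non-Hermitian positive definite systems; the paper's GSS variant for the $3\times 3$ block DSPP will differ in its choice of shifts). One splits the coefficient matrix $\mathcal{A}$ as

$$
\mathcal{A}=\tfrac12(\Omega+\mathcal{A})-\tfrac12(\Omega-\mathcal{A}),\qquad
\mathcal{P}_{\mathrm{SS}}=\tfrac12(\Omega+\mathcal{A}),
$$

with $\Omega$ a positive definite (typically block-diagonal) shift matrix; this induces the stationary iteration $\tfrac12(\Omega+\mathcal{A})z^{(k+1)}=\tfrac12(\Omega-\mathcal{A})z^{(k)}+f$ and, equivalently, the preconditioner $\mathcal{P}_{\mathrm{SS}}$ for Krylov subspace methods.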
In this paper, we address the problem of lossy semantic communication, aiming to reduce uncertainty about the State of the World (SotW) for deductive tasks in point-to-point communication. A key challenge is transmitting the maximum semantic information with minimal overhead suitable for downstream applications. Our solution maximizes semantic content information within a constrained bit budget, where the SotW is described using First-Order Logic and content informativeness is measured by how useful the transmitted information is in reducing the receiver's uncertainty about the SotW. Calculating content information requires computing inductive logical probabilities of state descriptions; naive approaches, however, are infeasible due to the massive size of the state space. To address this, our algorithm draws inspiration from state-of-the-art model counters and employs tree-search-based model counting to reduce the computational burden. These algorithmic model counters, designed to count the number of models that satisfy a Boolean formula, efficiently estimate the number of world states that validate the observed evidence. Empirical validation on the FOLIO and custom deduction datasets demonstrates that our algorithm reduces uncertainty and improves task performance with fewer bits than the baselines.
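To make the model-counting step concrete, here is a minimal DPLL-style propositional model counter (illustrative only; the paper's counter operates over first-order state descriptions and modern counters add component caching and other optimizations):

```python
def count_models(clauses, num_vars, assignment=None):
    """Count assignments over num_vars variables satisfying a CNF formula.

    clauses: list of clauses; each clause is a list of nonzero ints,
             where k means variable k is true and -k means it is false.
    """
    if assignment is None:
        assignment = {}
    # Simplify clauses under the current partial assignment.
    simplified = []
    for clause in clauses:
        satisfied = False
        remaining = []
        for lit in clause:
            var = abs(lit)
            if var in assignment:
                if (lit > 0) == assignment[var]:
                    satisfied = True
                    break
            else:
                remaining.append(lit)
        if satisfied:
            continue
        if not remaining:
            return 0  # Clause falsified: no models on this branch.
        simplified.append(remaining)
    if not simplified:
        # All clauses satisfied: the unassigned variables are free.
        return 2 ** (num_vars - len(assignment))
    # Branch on the first unassigned variable appearing in a clause.
    var = abs(simplified[0][0])
    total = 0
    for value in (True, False):
        assignment[var] = value
        total += count_models(simplified, num_vars, assignment)
        del assignment[var]
    return total
```

For example, `count_models([[1, -2]], num_vars=2)` returns 3, the number of assignments over two variables satisfying $x_1 \lor \lnot x_2$.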
In this paper, we study the fundamental statistical efficiency of reinforcement learning in Mean-Field Control (MFC) and Mean-Field Games (MFG) with general model-based function approximation. We introduce a new concept called the Mean-Field Model-Based Eluder Dimension (MF-MBED), which characterizes the inherent complexity of mean-field model classes. We show that a rich family of Mean-Field RL problems exhibits low MF-MBED. Additionally, we propose algorithms based on maximum likelihood estimation that return an $\epsilon$-optimal policy for MFC or an $\epsilon$-Nash equilibrium policy for MFG. The overall sample complexity depends only polynomially on MF-MBED, which can be much smaller than the size of the state-action space. Compared with previous works, our results require only minimal assumptions, namely realizability and Lipschitz continuity.
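For context, MF-MBED builds on the eluder dimension of Russo and Van Roy (2013); the following is the classical definition, given as background rather than the paper's formal mean-field version. An input $x$ is $\epsilon$-independent of $x_1,\dots,x_k$ with respect to a function class $\mathcal{F}$ if there exist $f,\tilde f\in\mathcal{F}$ with

$$
\sqrt{\sum_{i=1}^{k}\bigl(f(x_i)-\tilde f(x_i)\bigr)^{2}}\le\epsilon
\qquad\text{and}\qquad
\bigl|f(x)-\tilde f(x)\bigr|>\epsilon,
$$

and (ignoring the technical supremum over $\epsilon'\ge\epsilon$) the $\epsilon$-eluder dimension is the length of the longest input sequence in which every element is $\epsilon$-independent of its predecessors.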
In this paper, we consider joint antenna allocation and transmit beamforming design in secure integrated sensing and communication (ISAC) systems. A dual-function base station (DFBS) aims to securely deliver messages to a single-antenna receiver while detecting potential eavesdroppers. To prevent eavesdropping, we incorporate dedicated sensing signals and intentionally reduce the communication signal power radiated toward suspicious targets, which also improves sensing. We prioritize minimizing the matching error between the transmitted and desired beampatterns for sensing and communication. Our design optimizes the antenna allocation and beamforming at the DFBS subject to minimum secrecy rate and transmit power constraints. We propose solvers based on alternating optimization for the resulting non-convex design problem. Simulations show that the antenna allocation scheme significantly improves the secrecy performance.
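As a generic template for the matching-error criterion (standard in MIMO radar and ISAC beampattern design; the paper's actual objective additionally involves the antenna allocation variables and secrecy constraints), one minimizes

$$
\min_{\alpha,\;\mathbf{R}\succeq 0}\;\sum_{l=1}^{L}\bigl|\alpha\,d(\theta_l)-\mathbf{a}^{H}(\theta_l)\,\mathbf{R}\,\mathbf{a}(\theta_l)\bigr|^{2},
$$

where $d(\theta_l)$ is the desired beampattern on an angular grid $\{\theta_l\}$, $\mathbf{a}(\theta)$ is the array steering vector, $\mathbf{R}$ is the transmit covariance matrix, and $\alpha$ is a scaling factor.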
In this paper, we consider the supervised pre-trained transformer for a class of sequential decision-making problems. The class of considered problems is a subset of the general formulation of reinforcement learning in that there is no transition probability matrix; though seemingly restrictive, this subset covers bandits, dynamic pricing, and newsvendor problems as special cases. Such a structure enables the use of optimal actions/decisions in the pre-training phase, and this usage also provides new insights into the training and generalization of the pre-trained transformer. We first note that the training of the transformer model can be viewed as a performative prediction problem, and that existing methods and theories largely ignore or cannot resolve the resulting out-of-distribution issue. We propose a natural solution that includes the transformer-generated action sequences in the training procedure, and it enjoys better properties both numerically and theoretically. The availability of the optimal actions in the considered tasks also allows us to analyze the properties of the pre-trained transformer as an algorithm, explaining why it may lack exploration and how this can be automatically resolved. Numerically, we categorize the advantages of pre-trained transformers over structured algorithms such as UCB and Thompson sampling into three cases: (i) they better utilize the prior knowledge in the pre-training data; (ii) they can elegantly handle the misspecification issue suffered by the structured algorithms; and (iii) for short time horizons such as $T\le50$, they behave more greedily and enjoy much better regret than structured algorithms designed for asymptotic optimality.
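For concreteness, the structured baseline referenced above can be illustrated with a textbook UCB1 sketch for Bernoulli bandits (a standard algorithm, not the paper's transformer-based method):

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Run UCB1 on Bernoulli arms and return the action sequence."""
    rng = random.Random(seed)
    n_arms = len(arm_means)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    actions = []
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # Pull each arm once to initialize.
        else:
            # Choose the arm maximizing empirical mean plus exploration bonus.
            arm = max(
                range(n_arms),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2.0 * math.log(t) / counts[a]),
            )
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        actions.append(arm)
    return actions

# At a short horizon such as T = 50, UCB spends a large share of pulls on
# forced exploration; this is the regime where the abstract reports the
# pre-trained transformer behaving more greedily.
print(ucb1([0.4, 0.6], horizon=50))
```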
In this paper, we demonstrate how to enhance the validity of causal inference with unstructured high-dimensional treatments, such as texts, by leveraging the power of generative artificial intelligence. Specifically, we propose to use a deep generative model, such as a large language model (LLM), to efficiently generate treatments and to use its internal representation for subsequent causal effect estimation. We show that knowledge of this true internal representation helps separate the treatment features of interest, such as specific sentiments and certain topics, from other possibly unknown confounding features. Unlike existing methods, our proposed approach eliminates the need to learn a causal representation from the data and hence produces more accurate and efficient estimates. We formally establish the conditions required for the nonparametric identification of the average treatment effect, propose an estimation strategy that avoids violating the overlap assumption, and derive the asymptotic properties of the proposed estimator through the application of double machine learning. Finally, using an instrumental variables approach, we extend the proposed methodology to settings in which the treatment feature is based on human perception rather than assumed to be fixed given the treatment object. We conduct simulation studies using text data generated with an open-source LLM, Llama3, to illustrate the advantages of our estimator over state-of-the-art causal representation learning algorithms.
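To illustrate the kind of internal representation involved, here is a minimal sketch using the Hugging Face `transformers` API; the checkpoint name and the mean-pooling choice are our assumptions, not the paper's pipeline:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical choice of Llama3 checkpoint (access may be gated).
model_name = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

text = "The service was friendly and the food arrived quickly."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Mean-pool the last hidden layer over tokens to get a fixed-length vector;
# downstream, such vectors would feed the causal effect estimator.
representation = outputs.hidden_states[-1].mean(dim=1)
print(representation.shape)  # (1, hidden_size)
```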
In this paper, we investigate the performance of the cross-domain iterative detection (CDID) framework with orthogonal time frequency space (OTFS) modulation, where two distinct CDID algorithms are presented. The proposed schemes estimate/detect the information symbols iteratively across the frequency domain and the delay-Doppler (DD) domain by passing either the a posteriori or the extrinsic information. Building upon this framework, we investigate the error performance by considering the bias evolution and state evolution. Furthermore, we discuss the error performance at convergence and the DD domain error state lower bounds in each iteration. Specifically, we demonstrate that, at convergence, the ultimate error performance of the CDID passing the a posteriori information can be characterized by two potential convergence points. In contrast, the ultimate error performance of the CDID passing the extrinsic information has only one convergence point, which, interestingly, aligns with the matched filter bound. Our numerical results confirm our analytical findings and unveil the promising error performance achieved by the proposed designs.
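In log-likelihood-ratio terms (standard iterative-receiver notation, not taken from the paper), the two message choices differ by a single subtraction:

$$
L_{\mathrm{ext}} = L_{\mathrm{post}} - L_{\mathrm{pri}},
$$

i.e., the extrinsic message removes the a priori input that the current domain's detector received, so no information is fed back to its source, whereas the a posteriori variant passes $L_{\mathrm{post}}$ unmodified; the two convergence behaviors described above contrast exactly these choices.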
In this paper, we address the optimal sampling of a Wiener process under sampling and transmission costs, with the samples forwarded to a remote estimator over a channel with random i.i.d. delay. The goal of the estimator is to reconstruct an estimate of the real-time signal value from causally received samples. We focus on the optimal online strategy for both sampling and transmission, aiming to minimize the mean square estimation error. We establish that the optimal strategy involves threshold policies for both sampling and transmission, and we derive the optimal thresholds. Our methodology relies on Lagrangian relaxation and backward induction, first addressing the problem of minimizing the estimation error under the assumption that the sampling and transmission times are independent of the observed Wiener process. Our comparative analysis demonstrates that the estimation error achieved by the optimal joint sampling and transmission policy is significantly lower than that of age-optimal sampling, zero-wait sampling, periodic sampling, and policies that optimize only the sampling times.
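A threshold policy of the kind described can be sketched in a short simulation (illustrative only; the paper derives the optimal thresholds jointly with transmission under random delay, while this toy assumes instant delivery and ignores cost accounting):

```python
import random

def simulate_threshold_sampling(beta, horizon=10_000, dt=1e-2, seed=0):
    """Sample a simulated Wiener process whenever the estimation
    error |W_t - last_sample| exceeds the threshold beta."""
    rng = random.Random(seed)
    w, last_sample = 0.0, 0.0
    n_samples, sq_err = 0, 0.0
    for _ in range(horizon):
        w += rng.gauss(0.0, dt ** 0.5)   # Wiener increment over dt.
        err = w - last_sample            # Estimator holds the last sample.
        if abs(err) >= beta:
            last_sample = w              # Take (and instantly deliver) a sample.
            n_samples += 1
            err = 0.0
        sq_err += err * err
    return sq_err / horizon, n_samples

mse, n = simulate_threshold_sampling(beta=0.5)
print(f"empirical MSE ~ {mse:.4f} with {n} samples")
```

Raising the threshold `beta` trades fewer (costly) samples for a larger estimation error, which is exactly the tension the optimal policy balances.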
In this study, we consider the problem of predicting task success for open-vocabulary manipulation by a manipulator, based on instruction sentences and egocentric images taken before and after manipulation. Conventional approaches, including multimodal large language models (MLLMs), often fail to appropriately capture detailed object characteristics and/or subtle changes in object positions. We propose Contrastive $\lambda$-Repformer, which predicts task success for tabletop manipulation tasks by aligning images with instruction sentences. Our method integrates three key types of features into a multi-level aligned representation: features that preserve local image information, features aligned with natural language, and features structured through natural language. This allows the model to focus on important changes by looking at differences in the representation between the two images. We evaluate Contrastive $\lambda$-Repformer on a dataset built on a large-scale standard dataset, the RT-1 dataset, and on a physical robot platform. The results show that our approach outperforms existing approaches, including MLLMs; our best model improved accuracy by 8.66 points over the representative MLLM-based model.
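The difference-of-representations idea can be sketched as follows (a schematic with stub encoders; the actual model uses specific pretrained image and vision-language encoders that we do not reproduce here):

```python
import numpy as np

def encode_multilevel(image_path, rng):
    """Stub for the three feature types: local image features,
    language-aligned features, and language-structured features.
    Real encoders would replace these random placeholder vectors."""
    local = rng.standard_normal(128)       # e.g., a conv/ViT patch feature
    aligned = rng.standard_normal(128)     # e.g., a CLIP-style embedding
    structured = rng.standard_normal(128)  # e.g., caption-derived features
    return np.concatenate([local, aligned, structured])

rng = np.random.default_rng(0)
rep_before = encode_multilevel("before.png", rng)
rep_after = encode_multilevel("after.png", rng)

# The success predictor consumes the change between the two images,
# highlighting what the manipulation altered.
delta = rep_after - rep_before
print(delta.shape)  # (384,)
```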