We study off-policy evaluation (OPE) in partially observable environments with complex observations, with the goal of developing estimators whose guarantees avoid exponential dependence on the horizon. While such estimators exist for MDPs, and POMDPs can be converted to history-based MDPs, the estimation error in the MDP case depends on the state-density ratio, which becomes a history-density ratio after conversion, an object that is exponential in the horizon. Recently, Uehara et al. [2022a] proposed future-dependent value functions as a promising framework for addressing this issue, where the guarantee for memoryless policies depends on the density ratio over the latent state space. However, it also depends on the boundedness of the future-dependent value function and other related quantities, which we show can be exponential in the horizon, erasing the advantage of the method. In this paper, we discover novel coverage assumptions tailored to the structure of POMDPs, such as outcome coverage and belief coverage, which enable polynomial bounds on the aforementioned quantities. As a byproduct, our analyses also lead to the discovery of new algorithms with complementary properties.
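To make the framework concrete, the following is a schematic rendering of the future-dependent value function idea from Uehara et al. [2022a]; the notation is paraphrased, and the exact form of the importance weight and moment restriction should be checked against the paper.

```latex
% A future-dependent value function g acts on future proxies F
% (upcoming observations and actions) and matches the latent-state
% value function in conditional expectation:
\mathbb{E}\bigl[\, g(F) \mid S = s \,\bigr] \;=\; V^{\pi}(s)

% Since S is unobserved, g is identified using histories H as
% instruments, via a conditional moment restriction of the form
\mathbb{E}\bigl[\, \mu(O, A)\,\{\, R + \gamma\, g(F') \,\} - g(F) \;\bigm|\; H \,\bigr] \;=\; 0
% where \mu(O, A) = \pi(A \mid O) / \pi_b(A \mid O) is the per-step
% importance weight of the memoryless evaluation policy \pi against
% the behavior policy \pi_b.  The boundedness of g and of related
% ratios is exactly what the coverage assumptions above control.
```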
Learning representations of underlying environmental dynamics from partial observations is a critical challenge in machine learning. In the context of Partially Observable Markov Decision Processes (POMDPs), state representations are often inferred from the history of past observations and actions. We demonstrate that incorporating future information is essential to accurately capture causal dynamics and enhance state representations. To this end, we introduce a Dynamical Variational Auto-Encoder (DVAE) designed to learn causal Markovian dynamics from offline trajectories in a POMDP. Our method employs an extended hindsight framework that integrates past, current, and multi-step future information within a factored-POMDP setting. Empirical results reveal that this approach uncovers the causal graph governing hidden state transitions more effectively than history-based and typical hindsight-based models.
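As a minimal sketch of the hindsight idea (module names, sizes, and the exact factorization are illustrative assumptions, not the paper's architecture), the inference network below conditions the latent state on past, current, and multi-step future observations, while the generative transition remains Markovian:

```python
import torch
import torch.nn as nn

class HindsightDVAE(nn.Module):
    """Sketch of a DVAE whose encoder uses past, current, and k future
    observations, while the generative model keeps Markovian dynamics
    p(z_t | z_{t-1}, a_{t-1}) and emissions p(o_t | z_t)."""

    def __init__(self, obs_dim, act_dim, z_dim=16, hidden=64, k_future=3):
        super().__init__()
        self.k_future = k_future
        self.past_rnn = nn.GRU(obs_dim + act_dim, hidden, batch_first=True)
        self.future_rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.enc = nn.Linear(2 * hidden + obs_dim, 2 * z_dim)  # q(z_t | past, o_t, future)
        self.trans = nn.Linear(z_dim + act_dim, 2 * z_dim)     # prior p(z_t | z_{t-1}, a_{t-1})
        self.dec = nn.Linear(z_dim, obs_dim)                   # p(o_t | z_t)

    def encode(self, past_oa, o_t, future_o):
        _, h_past = self.past_rnn(past_oa)    # summary of the history
        _, h_fut = self.future_rnn(future_o)  # summary of k future observations
        mu, log_var = self.enc(torch.cat([h_past[-1], h_fut[-1], o_t], -1)).chunk(2, -1)
        z_t = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterization
        return z_t, mu, log_var
```

Training would maximize the usual ELBO (reconstruction plus a KL term between the encoder and the Markovian prior); the future summary is used only at training time, so the learned transition model stays causal.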
Thanks to the explosive growth of data and the development of computational resources, it is possible to build pre-trained models that achieve outstanding performance on a variety of tasks, such as natural language processing, computer vision, and more. Despite their powerful capabilities, pre-trained models have also drawn attention to the emerging security challenges associated with their real-world applications. Security and privacy issues, such as leaking private information and generating harmful responses, have seriously undermined users' confidence in these powerful models, and concerns continue to grow as model performance improves dramatically. Researchers are eager to explore the unique security and privacy issues that have emerged, the factors that distinguish them, and how to defend against them. However, the current literature lacks a clear taxonomy of emerging attacks and defenses for pre-trained models, which hinders a high-level and comprehensive understanding of these issues. To fill the gap, we conduct a systematic survey of the security risks of pre-trained models, proposing a taxonomy of attack and defense methods based on the accessibility of pre-trained models' input and weights in various security test scenarios. This taxonomy categorizes attacks and defenses into No-Change, Input-Change, and Model-Change approaches. With this taxonomy, we capture the unique security and privacy issues of pre-trained models, categorizing and summarizing existing security issues based on their characteristics. In addition, we offer a timely and comprehensive review of each category's strengths and limitations. Our survey concludes by highlighting potential new research opportunities in the security and privacy of pre-trained models.
In this work, we address unconstrained finite-sum optimization problems, with a particular focus on instances arising in large-scale deep learning. Our main interest lies in exploring the relationship between recent line search approaches for stochastic optimization in the overparametrized regime and momentum directions. First, we point out that combining these two elements while retaining their computational benefits is not straightforward. To this end, we propose a solution based on mini-batch persistency. We then introduce an algorithmic framework that combines data persistency, conjugate-gradient-type rules for setting the momentum parameter, and stochastic line searches. The resulting algorithm is empirically shown to outperform other popular methods from the literature, obtaining state-of-the-art results on both convex and nonconvex large-scale training problems.
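A minimal NumPy sketch of how these ingredients can fit together (the Polak-Ribière-style beta, the Armijo backtracking rule, and the persistency length are illustrative choices, not the paper's exact algorithm):

```python
import numpy as np

def train(loss_grad, w, batches, persist=2, lr0=1.0, c=0.5, shrink=0.5):
    """Momentum with a conjugate-gradient-type beta, an Armijo stochastic
    line search, and mini-batch persistency: each batch is reused for
    `persist` consecutive steps, so beta and the line search compare
    gradients and losses evaluated on the same data."""
    d, g_prev = None, None
    for batch in batches:
        for _ in range(persist):                 # mini-batch persistency
            f, g = loss_grad(w, batch)           # loss and gradient on this batch
            if g_prev is None:
                d = -g
            else:
                beta = max(0.0, g @ (g - g_prev) / (g_prev @ g_prev + 1e-12))
                d = -g + beta * d                # CG-type momentum direction
            if g @ d >= 0:                       # safeguard: fall back to steepest descent
                d = -g
            t = lr0                              # Armijo backtracking line search
            while loss_grad(w + t * d, batch)[0] > f + c * t * (g @ d):
                t *= shrink
            w = w + t * d
            g_prev = g
    return w
```

Without persistency, the gradients entering the beta rule would come from different mini-batches, which is precisely the complication the abstract alludes to.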
This study focuses on Intelligent Fault Diagnosis (IFD) in rotating machinery using a single microphone and a data-driven methodology, effectively diagnosing 42 classes of fault types and severities. The research leverages sound data from the imbalanced MaFaulDa dataset, aiming to strike a balance between high performance and low resource consumption. The testing phase encompassed a variety of configurations, including sampling, quantization, signal normalization, silence removal, Wiener filtering, data scaling, windowing, augmentation, and classifier tuning using XGBoost. Through the analysis of time, frequency, mel-frequency, and statistical features, we achieved an accuracy of 99.54% and an F-Beta score of 99.52% with just 6 boosting trees in an 8 kHz, 8-bit configuration. Moreover, when using only MFCCs along with their first- and second-order deltas, we recorded an accuracy of 97.83% and an F-Beta score of 97.67%. Lastly, by implementing a greedy wrapper approach, we obtained an accuracy of 96.82% and an F-Beta score of 98.86% using 50 selected features, nearly all of which were first- and second-order deltas of the MFCCs.
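A minimal sketch of the MFCC-plus-deltas variant reported above (file handling, aggregation by mean and standard deviation, and the classifier settings are illustrative assumptions):

```python
import numpy as np
import librosa
from xgboost import XGBClassifier

def mfcc_delta_features(path, sr=8000, n_mfcc=13):
    """Load audio at 8 kHz and build MFCCs plus first- and second-order
    deltas, aggregated over time by mean and standard deviation."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    d1 = librosa.feature.delta(mfcc)
    d2 = librosa.feature.delta(mfcc, order=2)
    feats = np.vstack([mfcc, d1, d2])            # (3 * n_mfcc, n_frames)
    return np.concatenate([feats.mean(axis=1), feats.std(axis=1)])

# paths_and_labels: placeholder list of (wav_path, fault_label) pairs
# covering the 42 fault-type/severity classes from MaFaulDa
X = np.array([mfcc_delta_features(p) for p, _ in paths_and_labels])
y = np.array([lbl for _, lbl in paths_and_labels])
clf = XGBClassifier(n_estimators=6)              # "just 6 boosting trees"
clf.fit(X, y)
```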
Enterprise networks are becoming increasingly complex, posing challenges for traditional WANs in terms of scalability, management, and operational costs. Software Defined Networking (SDN) and its application to Wide Area Networks (SD-WAN) offer solutions by decoupling the control plane from the data plane, providing centralized management, enhanced flexibility, and automated provisioning. This research investigates the challenges of applying SD-WAN to optimize traditional multi-site enterprise networks. Experimental scenarios are designed in which SD-WAN is deployed on a traditional multi-site network topology with a complex architecture, followed by comprehensive evaluations of its performance across several critical aspects, including hardware status, transmission performance, and security.
Governments are increasingly employing funding for open source software (OSS) development as a policy lever to support the security of software supply chains, digital sovereignty, economic growth, and national competitiveness in science and innovation, among others. However, the impacts of public funding on OSS development remain poorly understood, with a lack of consensus on how to meaningfully measure them. This gap hampers assessments of the return on public investment and impedes the optimisation of public-interest funding strategies. We address this gap with a toolkit of methodological considerations that may inform such measurements, drawing on prior work on OSS valuations and community health metrics by the Community Health Analytics Open Source Software (CHAOSS) project as well as our first-hand learnings as practitioners tasked with evaluating funding programmes by the Next Generation Internet initiative and the Sovereign Tech Agency. We discuss salient considerations, including the importance of accounting for funding objectives, project life stage and social structure, and regional and organisational cost factors. Next, we present a taxonomy of potential social, economic, and technological impacts that can be both positive and negative, direct and indirect, internal (i.e. within a project) and external (i.e. among a project's ecosystem of dependents and users), and manifest over various time horizons. Furthermore, we discuss the merits and limitations of qualitative, quantitative, and mixed-methods approaches, as well as options for and hazards of estimating multiplier effects. With this toolkit, we contribute to the multi-stakeholder conversation about the value and impacts of funding on OSS developers and society at large.
Conventional economic and socio-behavioural models assume perfectly symmetric access to information and rational behaviour among interacting agents in a social system. However, real-world events and observations appear to contradict such assumptions, suggesting that more complex interaction rules may exist between such agents. We investigate this possibility by creating two different models of a doctor-patient system: one retains the established assumptions, while the other incorporates principles of reflexivity theory and cognitive social structures. In addition, we use a microbial genetic algorithm to optimize the behaviour of the physician and patient agents in both models. The differences in results between the two models suggest that social systems may not always exhibit the behaviour, or even accomplish the purpose, for which they were designed, and that modelling the social and cognitive influences in a social system may capture the various ways a social agent balances complementary and competing information signals when making choices.
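The microbial genetic algorithm referenced above is a compact steady-state GA (Harvey's microbial GA); a minimal sketch, with the agent-behaviour fitness function left abstract:

```python
import random

def microbial_ga(fitness, genome_len, pop_size=50, tournaments=2000,
                 rec_rate=0.5, mut_rate=0.02):
    """Microbial GA: repeatedly pick two random genomes, let the loser of
    the fitness comparison be partially overwritten ("infected") by the
    winner's genes, then mutate the loser.  The winner is never changed."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(tournaments):
        a, b = random.sample(range(pop_size), 2)
        win, lose = (a, b) if fitness(pop[a]) >= fitness(pop[b]) else (b, a)
        for i in range(genome_len):
            if random.random() < rec_rate:   # recombination by infection
                pop[lose][i] = pop[win][i]
            if random.random() < mut_rate:   # mutation on the loser only
                pop[lose][i] ^= 1
    return max(pop, key=fitness)
```

Here `fitness` would score a candidate encoding of a physician or patient agent's decision rules by simulating the doctor-patient system; that simulation is the model-specific part and is not sketched here.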
Thanks to the unprecedented language understanding and generation capabilities of large language models (LLMs), Retrieval-augmented Code Generation (RaCG) has recently been widely adopted among software developers. While this has increased productivity, incorrect code is still frequently generated. In particular, plausible yet incorrect code can be produced for user queries that cannot be answered with the retrieved API descriptions. This study proposes the task of answerability evaluation, which assesses whether a valid answer can be generated from a user's query and the retrieved APIs in RaCG. Additionally, we build a benchmark dataset called Retrieval-augmented Code Generability Evaluation (RaCGEval) to evaluate the performance of models on this task. Experimental results show that the task remains very challenging, with baseline models achieving a low performance of 46.7%. Furthermore, we discuss methods that could significantly improve performance.
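A minimal sketch of what an answerability check might look like in a RaCG pipeline (`generate` is a placeholder for any LLM call; the prompt and labels are illustrative assumptions, not the RaCGEval protocol):

```python
def judge_answerability(query, api_descriptions, generate):
    """Ask a model whether the retrieved APIs suffice to answer the user's
    query before attempting code generation.  `generate` is a placeholder
    LLM callable mapping a prompt string to a completion string."""
    apis = "\n".join(f"- {d}" for d in api_descriptions)
    prompt = (
        "Given the user query and the retrieved API descriptions below, "
        "answer ANSWERABLE if valid code can be written using only these "
        "APIs, and UNANSWERABLE otherwise.\n\n"
        f"Query: {query}\n\nRetrieved APIs:\n{apis}\n\nAnswer:"
    )
    reply = generate(prompt).strip().upper()
    return "UNANSWERABLE" if "UNANSWERABLE" in reply else "ANSWERABLE"
```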
This paper argues for the strategic treatment of artificial intelligence as a key industry within Pakistan's broader industrial policy framework, underscoring the importance of aligning it with national goals such as economic resilience and the preservation of autonomy. The paper begins by defining industrial policy as a set of targeted government interventions that shape specific sectors for strategic outcomes, and argues for its application to AI in Pakistan given AI's enormous potential, the risks of unregulated adoption, and prevailing market inefficiencies. The paper conceptualizes AI as a layered ecosystem comprising foundational infrastructure, core computing, development platforms, and service and product layers, supported by education, government policy, and research and development. The analysis highlights that Pakistan's AI sector is predominantly service-oriented, with limited product innovation and dependence on foreign technologies, posing risks to economic independence, national security, and employment. To address these challenges, the paper recommends educational reforms, support for local AI product development, initiatives for indigenous cloud and hardware capabilities, and public-private collaboration on foundational models. Additionally, it advocates public procurement policies and infrastructure incentives to foster local solutions and reduce reliance on foreign providers. This strategy aims to position Pakistan as a competitive, autonomous player in the global AI ecosystem.
We introduce a multi-task setup for identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports the construction of a scientific knowledge graph, which we use to analyze information in the scientific literature.
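A minimal sketch of the shared-span-representation idea (dimensions, the width embedding, and the scoring heads are illustrative, not SciIE's exact configuration):

```python
import torch
import torch.nn as nn

class SharedSpanModel(nn.Module):
    """One span representation (endpoint states plus a width embedding)
    is shared by the entity, relation, and coreference heads, so the
    three tasks are trained jointly over the same span space."""

    def __init__(self, hidden=256, max_width=10, n_ent=6, n_rel=7):
        super().__init__()
        self.width_emb = nn.Embedding(max_width, 20)
        span_dim = 2 * hidden + 20
        self.entity_head = nn.Linear(span_dim, n_ent)         # entity typing
        self.relation_head = nn.Linear(2 * span_dim, n_rel)   # relation typing
        self.coref_head = nn.Linear(2 * span_dim, 1)          # antecedent scoring

    def forward(self, states, starts, ends):
        # states: (seq_len, hidden) contextual encoder outputs;
        # starts/ends: (n_spans,) token indices of candidate spans
        width = torch.clamp(ends - starts, max=self.width_emb.num_embeddings - 1)
        s = torch.cat([states[starts], states[ends], self.width_emb(width)], -1)
        n = s.size(0)
        pair = torch.cat([s.unsqueeze(1).expand(n, n, -1),
                          s.unsqueeze(0).expand(n, n, -1)], -1)
        return self.entity_head(s), self.relation_head(pair), self.coref_head(pair)
```

Because the relation and coreference heads consume the same span vectors as the entity head, errors no longer cascade through a pipeline; all three losses shape one shared representation.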