In the past several years, the last-iterate convergence of the Stochastic Gradient Descent (SGD) algorithm has attracted considerable interest due to its strong performance in practice coupled with a lack of theoretical understanding. For Lipschitz convex functions, different works have established the optimal $O(\log(1/\delta)\log T/\sqrt{T})$ or $O(\sqrt{\log(1/\delta)/T})$ high-probability convergence rates for the final iterate, where $T$ is the time horizon and $\delta$ is the failure probability. However, to prove these bounds, all existing works are either limited to compact domains or require almost surely bounded noise. It is natural to ask whether the last iterate of SGD can still guarantee the optimal convergence rate without these two restrictive assumptions. Beyond this important question, many theoretical problems still lack an answer. For example, in contrast to the last-iterate convergence of SGD for non-smooth problems, only a few results for smooth optimization have been developed. Additionally, the existing results are all limited to a non-composite objective and the standard Euclidean norm. It remains unclear whether the last-iterate convergence can be provably extended to broader composite optimization and non-Euclidean norms. In this work, to address the issues mentioned above, we revisit the last-iterate convergence of stochastic gradient methods and provide the first unified way to prove the convergence rates both in expectation and in high probability, accommodating general domains, composite objectives, non-Euclidean norms, Lipschitz conditions, smoothness, and (strong) convexity simultaneously. Additionally, we extend our analysis to obtain the last-iterate convergence under heavy-tailed noise.
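To make the object of study concrete, the following minimal sketch (not taken from the paper) contrasts the last iterate with the classical averaged output: it runs projected SGD with the standard $c/\sqrt{t}$ step sizes on a Lipschitz convex toy objective and returns $x_T$ itself. The step-size schedule and the toy problem are illustrative assumptions.

```python
import numpy as np

def projected_sgd_last_iterate(grad_oracle, project, x0, T, c=1.0):
    """Run projected SGD for T steps and return the LAST iterate x_T
    (rather than the averaged iterate used in many classical analyses).

    grad_oracle(x) returns a stochastic subgradient at x;
    project(x) projects back onto the feasible domain.
    Step sizes eta_t = c / sqrt(t) are a standard choice for Lipschitz
    convex objectives (an illustrative assumption here).
    """
    x = np.asarray(x0, dtype=float)
    for t in range(1, T + 1):
        g = grad_oracle(x)
        x = project(x - (c / np.sqrt(t)) * g)
    return x  # the last iterate, whose convergence the paper analyzes

# Toy usage: minimize E[|x - z|] with z ~ N(0, 1) over the interval [-1, 1].
rng = np.random.default_rng(0)
grad = lambda x: np.sign(x - rng.normal())  # stochastic subgradient of |x - z|
proj = lambda x: np.clip(x, -1.0, 1.0)      # projection onto [-1, 1]
print(projected_sgd_last_iterate(grad, proj, x0=1.0, T=10_000))
```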
We investigate the constant-depth circuit complexity of the Isomorphism Problem, the Minimum Generating Set Problem (MGS), and the Sub(quasi)group Membership Problem (Membership) for groups and quasigroups (= Latin squares), given as input in terms of their multiplication (Cayley) tables. Despite decades of research on these problems, lower bounds even against depth-$2$ AC circuits remain unknown. Perhaps surprisingly, Chattopadhyay, Tor\'an, and Wagner (FSTTCS 2010; ACM Trans. Comput. Theory, 2013) showed that Quasigroup Isomorphism can be solved by AC circuits of depth $O(\log \log n)$ using $O(\log^2 n)$ nondeterministic bits, a class we denote $\exists^{\log^2(n)}FOLL$. We narrow this gap by improving the upper bound for many of these problems to $quasiAC^0$, thus decreasing the depth to constant. In particular, we show:
- MGS for quasigroups is in $\exists^{\log^2(n)}\forall^{\log n}NTIME(\mathrm{polylog}(n))\subseteq quasiAC^0$. Papadimitriou and Yannakakis (J. Comput. Syst. Sci., 1996) conjectured that this problem was $\exists^{\log^2(n)}P$-complete; our results refute a version of that conjecture for completeness under $quasiAC^0$ reductions unconditionally, and under polylog-space reductions assuming EXP $\neq$ PSPACE.
- MGS for groups is in $AC^{1}(L)$, improving on the previous upper bound of $P$ (Lucchini & Thakkar, J. Algebra, 2024).
- Quasigroup Isomorphism belongs to $\exists^{\log^2(n)}AC^0(DTISP(\mathrm{polylog},\log))\subseteq quasiAC^0$, improving on the previous bound of $\exists^{\log^2(n)}L\cap\exists^{\log^2(n)}FOLL\subseteq quasiFOLL$ (Chattopadhyay, Tor\'an, & Wagner, ibid.; Levet, Australas. J. Combin., 2023).
Our results suggest that understanding constant-depth circuit complexity may be key to resolving the complexity of problems concerning (quasi)groups in the multiplication table model.
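For readers unfamiliar with the input model, the following small sketch (ours, not the paper's) checks whether a multiplication table given as input is a quasigroup, i.e., a Latin square. The paper's results concern constant-depth parallel circuits; this is merely a straightforward sequential check of the same property.

```python
def is_quasigroup(table):
    """Check whether an n x n multiplication table (entries in 0..n-1)
    is a Latin square, i.e., defines a quasigroup: every element appears
    exactly once in each row and each column.
    """
    n = len(table)
    elems = set(range(n))
    rows_ok = all(set(row) == elems for row in table)
    cols_ok = all({table[i][j] for i in range(n)} == elems for j in range(n))
    return rows_ok and cols_ok

# The Cayley table of the cyclic group Z_3 is in particular a quasigroup:
z3 = [[(i + j) % 3 for j in range(3)] for i in range(3)]
print(is_quasigroup(z3))  # True
```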
Language models have improved by orders of magnitude with the recent emergence of Transformer-based Large Language Models (LLMs). LLMs have demonstrated the ability to generate natural code that is highly similar to code written by professional developers. One intermediate value an LLM can emit is entropy, which measures the naturalness of a token of code. We hypothesize that entropy can be used to improve the performance of Automated Program Repair (APR) tasks. While much progress has been made in APR, fault localization techniques suffer from a lack of diversity in ranking scores, patch generation tools tend to be inefficient because all tests must run before determining whether a patch is likely to be correct, and patch ranking often suffers from the test-suite over-fitting problem. However, using an LLM directly for APR raises concerns about training data leakage. In this work, we introduce a novel way of using the entropy of LLMs in combination with prior APR tools to improve all stages of APR. We show that entropy is highly complementary to prior fault localization tools: our proposed re-ranking method achieves a 50% Top-5 score improvement over SBFL. We propose a patch-naturalness measurement, entropy-delta, to improve the efficiency of template-based repair techniques by ranking plausible patches before they undergo testing. When using entropy-delta for patch ranking and classification, our proposed method ranks correct patches more effectively than state-of-the-art machine learning tools, with a 49% improvement in Top-1. Our work suggests that LLMs can be an effective complement to prior APR tools while minimizing both the test-suite over-fitting problem and the LLM data leakage problem.
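As a rough illustration of the quantities involved, the sketch below computes a sequence-level entropy from per-token log-probabilities and an entropy-delta between original and patched code. The exact aggregation and the `lm_logprobs` interface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sequence_entropy(token_logprobs):
    """Mean negative log-probability (cross-entropy) of a token sequence
    under a language model: lower values mean more 'natural' code."""
    return -float(np.mean(token_logprobs))

def entropy_delta(lm_logprobs, original_tokens, patched_tokens):
    """Entropy-delta in the spirit of the paper (illustrative formulation):
    how much more natural the patched code is than the original.
    Positive values suggest the patch made the code more natural.

    lm_logprobs(tokens) is assumed to return per-token log-probabilities
    from some LLM; it is a placeholder, not a real API.
    """
    return (sequence_entropy(lm_logprobs(original_tokens))
            - sequence_entropy(lm_logprobs(patched_tokens)))

# Toy stand-in model: pretends common tokens are more probable.
freq = {"return": 0.2, "x": 0.1, "+": 0.1, "1": 0.05, "qzv": 0.001}
fake_lm = lambda toks: np.log([freq.get(t, 0.01) for t in toks])

print(entropy_delta(fake_lm, ["return", "qzv"], ["return", "x", "+", "1"]))
```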
According to the World Health Organization, the involvement of Vulnerable Road Users (VRUs) in traffic accidents remains a significant concern, with VRUs accounting for over half of traffic fatalities. The increasing automation and connectivity of vehicles still has an uncertain impact on VRU safety. By deploying the Collective Perception Service (CPS), vehicles can include information about VRUs in Vehicle-to-Everything (V2X) messages, thus improving the overall perception of the environment. Although increased awareness is considered positive, one could argue that the awareness ratio, the metric used to measure perception, is only implicitly connected to VRU safety. This paper introduces a tailored metric, the Risk Factor (RF), to measure the risk level of interactions between Connected Automated Vehicles (CAVs) and VRUs. By evaluating the RF, we assess the impact of V2X communication on VRU risk mitigation. Our results show that high V2X penetration rates can reduce the mean risk, quantified by our proposed metric, by up to 44%. Although the median risk value shows a significant decrease, suggesting a reduction in overall risk, the distribution of risk values reveals that the CPS's mitigation effectiveness is overestimated, as indicated by the divergence between the RF and the awareness ratio. Additionally, by analyzing a real-world traffic dataset, we pinpoint high-risk locations within a scenario, identifying areas near intersections and behind parked cars as especially dangerous. Our methodology can be ported to other scenarios to identify high-risk areas. We consider the proposed RF an insightful metric for quantifying VRU safety in a highly automated and connected environment.
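The abstract does not define the RF, so the following is a purely hypothetical illustration of how a per-encounter risk score might be derived from a time-to-collision proxy; it should not be read as the paper's actual metric.

```python
def risk_factor(distance_m, closing_speed_ms, ttc_threshold_s=4.0):
    """Hypothetical risk proxy for one CAV-VRU encounter (NOT the paper's
    RF definition, which the abstract does not specify): map the
    time-to-collision onto [0, 1], where 1 means imminent collision.
    """
    if closing_speed_ms <= 0:          # moving apart: no collision course
        return 0.0
    ttc = distance_m / closing_speed_ms
    return max(0.0, min(1.0, 1.0 - ttc / ttc_threshold_s))

print(risk_factor(distance_m=10.0, closing_speed_ms=5.0))  # TTC = 2 s -> 0.5
```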
The capabilities of generative AI (genAI) have dramatically increased in recent times, and there are opportunities for children to leverage new features for personal and school-related endeavors. However, while the future of genAI is taking form, there remain potentially harmful limitations, such as the generation of outputs containing misinformation and bias. We ran a workshop study focused on ChatGPT to explore middle school girls' (N = 26) attitudes and reasoning about how genAI works. We focused on girls, who are often disproportionately impacted by algorithmic bias. We found that: (1) middle school girls were initially overtrusting of genAI, (2) deliberate exposure to the limitations and mistakes of genAI shifted this overtrust to disillusionment about genAI capabilities, though they remained optimistic about the future possibilities of genAI, and (3) their ideas about school policy were nuanced. This work informs how children think about genAI like ChatGPT and its integration in learning settings.
In recent years, the field of Transfer Evolutionary Optimization (TrEO) has witnessed substantial growth, fueled by the realization of its profound impact on solving complex problems. Numerous algorithms have emerged to address the challenges posed by transferring knowledge between tasks. However, the recently highlighted ``no free lunch theorem'' in transfer optimization makes clear that no single algorithm reigns supreme across diverse problem types. This paper addresses this conundrum by adopting a benchmarking approach to evaluate the performance of various TrEO algorithms in realistic scenarios. Despite the growing methodological focus on transfer optimization, existing benchmark problems often fall short due to inadequate design, predominantly featuring synthetic problems that lack real-world relevance. This paper pioneers a practical TrEO benchmark suite, integrating problems from the literature categorized according to the three essential aspects of Big Source Task-Instances: volume, variety, and velocity. Our primary objective is to provide a comprehensive analysis of existing TrEO algorithms and pave the way for the development of new approaches to tackle practical challenges. By introducing realistic benchmarks that embody these three dimensions, we aim to foster a deeper understanding of algorithmic performance in the face of diverse and complex transfer scenarios. This benchmark suite is poised to serve as a valuable resource for researchers, facilitating the refinement and advancement of TrEO algorithms in the pursuit of solving real-world problems.
Air pollution is one of the leading causes of mortality globally, resulting in millions of deaths each year. Efficient monitoring is important to measure exposure and enforce legal limits. New low-cost sensors can be deployed in greater numbers and in more varied locations, motivating the problem of efficient automated placement. Previous work suggested Bayesian optimisation is an appropriate method, but considered only a satellite data set, with data aggregated over all altitudes. It is ground-level pollution, which humans breathe, that matters most. We improve on those results using hierarchical models and evaluate our models on urban pollution data in London to show that Bayesian optimisation can be successfully applied to the problem.
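As a hedged illustration of the approach, the sketch below runs a minimal Bayesian optimisation loop that sequentially places sensors where a Gaussian-process upper confidence bound on ground-level pollution is highest. The synthetic pollution field, kernel, and acquisition rule are assumptions for the example, not the paper's models.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical ground-level pollution field over a 1-D street segment
# (a stand-in for the London data; the real objective is a measured field).
pollution = lambda x: 50 + 30 * np.exp(-((x - 0.7) ** 2) / 0.02)

candidates = np.linspace(0, 1, 201).reshape(-1, 1)   # possible sensor sites
X = [[0.1], [0.5], [0.9]]                            # initial sensor sites
y = [pollution(x[0]) for x in X]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), normalize_y=True)
for _ in range(10):                                  # place 10 more sensors
    gp.fit(np.array(X), np.array(y))
    mu, sigma = gp.predict(candidates, return_std=True)
    ucb = mu + 2.0 * sigma                           # upper confidence bound
    x_next = candidates[int(np.argmax(ucb))]
    X.append(list(x_next))
    y.append(pollution(x_next[0]))

print("highest pollution found near x =", X[int(np.argmax(y))])
```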
Aspect-based sentiment analysis aims to predict sentiment polarity at a fine granularity. While Graph Convolutional Networks (GCNs) are widely utilized for sentiment feature extraction, their naive application to syntactic feature extraction can compromise information preservation. This study introduces an innovative edge-enhanced GCN, named SentiSys, that navigates the syntactic graph while preserving intact feature information, leading to enhanced performance. Specifically, we first integrate a bidirectional long short-term memory (Bi-LSTM) network and a self-attention-based transformer. This combination enables effective text encoding, preventing information loss and capturing long-range dependencies. A bidirectional GCN (Bi-GCN) with message passing is then employed to encode relationships between entities. Additionally, unnecessary information is filtered out using an aspect-specific masking technique. To validate the effectiveness of the proposed model, we conduct extensive evaluation experiments and ablation studies on four benchmark datasets. The results consistently demonstrate improved performance in aspect-based sentiment analysis when employing SentiSys. This approach successfully addresses the challenges of syntactic feature extraction, highlighting its potential for advancing sentiment analysis methodologies.
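A hedged sketch of the described pipeline is given below: Bi-LSTM plus self-attention encoding, one GCN message-passing step over the syntactic graph, and aspect-specific masking before classification. All layer sizes and simplifications (e.g., a single unidirectional GCN step instead of a full Bi-GCN) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SentiSysSketch(nn.Module):
    """Sketch of the pipeline described above (not the authors' code):
    Bi-LSTM + self-attention encoding, a GCN pass over the syntactic
    graph, and aspect-specific masking before classification."""
    def __init__(self, vocab=10_000, emb=100, hid=64, classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        self.attn = nn.MultiheadAttention(2 * hid, num_heads=4,
                                          batch_first=True)
        self.gcn = nn.Linear(2 * hid, 2 * hid)   # one message-passing step
        self.out = nn.Linear(2 * hid, classes)

    def forward(self, tokens, adj, aspect_mask):
        # tokens: (B, L) ids; adj: (B, L, L) syntactic graph (e.g. from a
        # dependency parse); aspect_mask: (B, L), 1 on aspect tokens.
        h, _ = self.lstm(self.embed(tokens))          # (B, L, 2*hid)
        h, _ = self.attn(h, h, h)                     # self-attention
        deg = adj.sum(-1, keepdim=True).clamp(min=1)  # degree normalization
        h = torch.relu(self.gcn(adj @ h / deg))       # GCN message passing
        h = (h * aspect_mask.unsqueeze(-1)).sum(1)    # aspect-specific mask
        return self.out(h / aspect_mask.sum(1, keepdim=True).clamp(min=1))

B, L = 2, 12
model = SentiSysSketch()
logits = model(torch.randint(0, 10_000, (B, L)),
               torch.eye(L).repeat(B, 1, 1),
               torch.zeros(B, L).index_fill_(1, torch.tensor([3]), 1.0))
print(logits.shape)  # (2, 3)
```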
We study the output length of one-way state generators (OWSGs), their weaker variants, and EFIs.
- Standard OWSGs. Recently, Cavalar et al. (arXiv:2312.08363) gave OWSGs with $m$-qubit outputs for any $m=\omega(\log \lambda)$, where $\lambda$ is the security parameter, and conjectured that there do not exist OWSGs with $O(\log \log \lambda)$-qubit outputs. We prove their conjecture in a stronger form by showing that there do not exist OWSGs with $O(\log \lambda)$-qubit outputs. This means that their construction is optimal in terms of output length.
- Inverse-polynomial-advantage OWSGs. Let $\epsilon$-OWSGs be a parameterized variant of OWSGs where a quantum polynomial-time adversary's advantage is at most $\epsilon$. For any constant $c\in \mathbb{N}$, we construct $\lambda^{-c}$-OWSGs with $((c+1)\log \lambda+O(1))$-qubit outputs assuming the existence of OWFs. We show that this is almost tight by proving that there do not exist $\lambda^{-c}$-OWSGs with at most $(c\log \lambda-2)$-qubit outputs.
- Constant-advantage OWSGs. For any constant $\epsilon>0$, we construct $\epsilon$-OWSGs with $O(\log \log \lambda)$-qubit outputs assuming the existence of subexponentially secure OWFs. We show that this is almost tight by proving that there do not exist $O(1)$-OWSGs with $((\log \log \lambda)/2+O(1))$-qubit outputs.
- Weak OWSGs. We refer to $(1-1/\mathsf{poly}(\lambda))$-OWSGs as weak OWSGs. We construct weak OWSGs with $m$-qubit outputs for any $m=\omega(1)$ assuming the existence of exponentially secure OWFs with linear expansion. We show that this is tight by proving that there do not exist weak OWSGs with $O(1)$-qubit outputs.
- EFIs. We show that there do not exist $O(\log \lambda)$-qubit EFIs. We show that this is tight by proving that there exist $\omega(\log \lambda)$-qubit EFIs assuming the existence of exponentially secure PRGs.
While Centralized Training with Decentralized Execution (CTDE) has become the prevailing paradigm in Multi-Agent Reinforcement Learning (MARL), it may not be suitable for scenarios in which agents can fully communicate and share observations with each other. Fully centralized methods, also known as Centralized Training with Centralized Execution (CTCE) methods, can fully utilize the observations of all agents by treating the entire system as a single agent. However, traditional CTCE methods suffer from scalability issues due to the exponential growth of the joint action space. To address these challenges, in this paper we propose JointPPO, a CTCE method that uses Proximal Policy Optimization (PPO) to directly optimize the joint policy of the multi-agent system. JointPPO decomposes the joint policy into conditional probabilities, transforming the decision-making process into a sequence generation task. A Transformer-based joint policy network is constructed and trained with a PPO loss tailored to the joint policy. JointPPO effectively handles a large joint action space and extends PPO to the multi-agent setting with theoretical clarity and conciseness. Extensive experiments on the StarCraft Multi-Agent Challenge (SMAC) testbed demonstrate the superiority of JointPPO over strong baselines. Ablation experiments and analyses are conducted to explore the factors influencing JointPPO's performance.
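The core decomposition can be made concrete with a short sketch: the joint action probability factorizes by the chain rule into per-agent conditionals, so the joint log-probability needed for the PPO ratio is just a sum of small per-agent terms. The toy logits below stand in for the outputs of the Transformer-based joint policy network.

```python
import torch

def joint_log_prob(conditional_logits, actions):
    """Chain-rule decomposition used to avoid the exponential joint action
    space: pi(a_1, ..., a_n | s) = prod_i pi(a_i | s, a_1, ..., a_{i-1}).
    Here conditional_logits[i] are the logits for agent i produced after
    seeing the state and the previous agents' actions (in JointPPO this
    sequence comes from a Transformer; the toy tensors below are
    illustrative assumptions)."""
    log_p = 0.0
    for logits, a in zip(conditional_logits, actions):
        log_p = log_p + torch.log_softmax(logits, dim=-1)[a]
    return log_p  # log pi(a_1,...,a_n | s), usable in the PPO ratio

# Toy example: 3 agents with 4 actions each -> 3 small tensors instead of
# one 4^3-way joint distribution.
torch.manual_seed(0)
logits = [torch.randn(4) for _ in range(3)]
print(joint_log_prob(logits, [2, 0, 3]))
```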
Recent regulatory initiatives like the European AI Act and relevant voices in the Machine Learning (ML) community stress the need to describe datasets along several key dimensions for trustworthy AI, such as provenance processes and social concerns. However, this information is typically presented as unstructured text in the accompanying documentation, hampering its automated analysis and processing. In this work, we explore using large language models (LLMs) and a set of prompting strategies to automatically extract these dimensions from documents and enrich the dataset description with them. Our approach could aid data publishers and practitioners in creating machine-readable documentation to improve the discoverability of their datasets, assess their compliance with current AI regulations, and improve the overall quality of ML models trained on them. We evaluate the approach on 12 scientific dataset papers published in two scientific journals (Nature's Scientific Data and Elsevier's Data in Brief) using two different LLMs (GPT3.5 and Flan-UL2). Results show good accuracy with our prompt extraction strategies. Concrete results vary depending on the dimension, but overall, GPT3.5 shows slightly better accuracy (81.21%) than Flan-UL2 (69.13%), although it is more prone to hallucinations. We have released an open-source tool implementing our approach and a replication package, including the experiments' code and results, in an open-source repository.
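A minimal sketch of one possible prompting strategy is shown below; the prompt template, dimension names, and `call_llm` interface are illustrative assumptions rather than the paper's exact prompts or released tool.

```python
import json

def build_extraction_prompt(dimension, paper_text):
    """One possible prompting strategy (illustrative, not the paper's exact
    prompts): ask the model to extract a single documentation dimension
    and to answer in machine-readable JSON."""
    return (
        f"From the dataset paper below, extract the '{dimension}'.\n"
        "Answer ONLY with JSON of the form "
        '{"found": true/false, "value": "..."}.\n\n'
        f"Paper:\n{paper_text}"
    )

def extract_dimensions(call_llm, paper_text, dimensions):
    """call_llm(prompt) -> str is a placeholder for GPT3.5/Flan-UL2;
    it is an assumed interface, not a real API."""
    results = {}
    for dim in dimensions:
        try:
            reply = call_llm(build_extraction_prompt(dim, paper_text))
            results[dim] = json.loads(reply)
        except json.JSONDecodeError:
            results[dim] = {"found": False, "value": None}  # malformed reply
    return results

# Example dimensions in the spirit of the paper (the exact list may differ):
dims = ["provenance process", "social concerns", "collection procedure"]
fake_llm = lambda prompt: '{"found": true, "value": "crowdsourced labels"}'
print(extract_dimensions(fake_llm, "example paper text", dims))
```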