This paper shows that the Heterogeneous Multiscale Method can be applied to elliptic problems without scale separation. The Localized Orthogonal Decomposition method is shown to be a special case of the Heterogeneous Multiscale Method.
Large Language Models (LLMs) have demonstrated remarkable performance across various tasks. A promising but largely under-explored area is their potential to facilitate human coordination with many agents. Such capabilities would be useful in domains including disaster response, urban planning, and real-time strategy scenarios. In this work, we introduce (1) a real-time strategy game benchmark designed to evaluate these abilities and (2) a novel framework we term HIVE. HIVE empowers a single human to coordinate swarms of up to 2,000 agents using natural language dialog with an LLM. We present promising results on this multi-agent benchmark, with our hybrid approach solving tasks such as coordinating agent movements, exploiting unit weaknesses, leveraging human annotations, and understanding terrain and strategic points. However, our findings also highlight critical limitations of current models, including difficulties in processing spatial visual information and challenges in formulating long-term strategic plans. This work sheds light on the potential and limitations of LLMs in human-swarm coordination, paving the way for future research in this area. The HIVE project page, which includes videos of the system in action, can be found here: hive.syrkis.com.
There is strong agreement that generative AI should be regulated, but strong disagreement on how to approach regulation. While some argue that AI regulation should mostly rely on extensions of existing laws, others argue that entirely new laws and regulations are needed to ensure that generative AI benefits society. In this paper, I argue that the debates on generative AI regulation can be informed by the debates and evidence on social media regulation. For example, AI companies have faced allegations of political bias regarding the images and text their models produce, similar to the allegations social media companies have faced regarding content ranking on their platforms. First, I compare and contrast the affordances of generative AI and social media to highlight their similarities and differences. Then, I discuss specific policy recommendations based on the evolution of social media and their regulation. These recommendations include investments in: efforts to counter bias and perceptions thereof (e.g., via transparency, researcher access, oversight boards, democratic input, research studies), specific areas of regulatory concern (e.g., youth wellbeing, election integrity) and trust and safety, computational social science research, and a more global perspective. Applying lessons learnt from social media regulation to generative AI regulation can save effort and time, and prevent avoidable mistakes.
In industrial contexts, effective workforce allocation is crucial for operational efficiency. This paper presents an ongoing project to develop a decision-making tool for workforce allocation, with an emphasis on explainability to enhance trustworthiness. Our objective is a system that not only optimises the allocation of teams to scheduled tasks but also provides clear, understandable explanations for its decisions, particularly when the problem is infeasible. By incorporating human-in-the-loop mechanisms, the tool aims to enhance user trust and facilitate interactive conflict resolution. We implemented our approach in a prototype digital demonstrator, to be evaluated in a real industrial scenario in terms of both performance and user acceptability.
We consider the two-dimensional Stokes problem for incompressible fluids in the stream function-vorticity formulation. For this problem, the classical finite element method of degree one converges only at order one-half in the L2 norm of the vorticity. We propose using harmonic functions to approximate the vorticity along the boundary. Discrete harmonic functions, which are computable in practice, are used to derive a new numerical method. We prove that this scheme achieves an error of order one in the L2 norm of the vorticity.
In this paper, we present a novel keypoint-based classification model designed to recognise British Sign Language (BSL) words within continuous signing sequences. Our model's performance is assessed on the BOBSL dataset, revealing that the keypoint-based approach surpasses its RGB-based counterpart in computational efficiency and memory usage, while also offering faster training and demanding fewer computational resources. To the best of our knowledge, this is the first application of a keypoint-based model to BSL word classification, so no directly comparable prior work exists.
We study the problem of testing whether the missing values of a potentially high-dimensional dataset are Missing Completely at Random (MCAR). We relax the problem of testing MCAR to the problem of testing the compatibility of a collection of covariance matrices, motivated by the fact that this procedure is feasible when the dimension grows with the sample size. Our first contributions are to define a natural measure of the incompatibility of a collection of correlation matrices, which can be characterised as the optimal value of a Semi-definite Programming (SDP) problem, and to establish a key duality result allowing its practical computation and interpretation. By analysing the concentration properties of the natural plug-in estimator for this measure, we propose a novel hypothesis test, which is calibrated via a bootstrap procedure and demonstrates power against any distribution with incompatible covariance matrices. By considering key examples of missingness structures, we demonstrate that our procedures are minimax rate optimal in certain cases. We further validate our methodology with numerical simulations that provide evidence of validity and power, even when data are heavy-tailed. Furthermore, tests of compatibility can be used to test the feasibility of positive semi-definite matrix completion problems with noisy observations, and thus our results may be of independent interest.
The digital twin approach has gained recognition as a promising solution to the challenges faced by the Architecture, Engineering, Construction, Operations, and Management (AECOM) industries. However, its broader application across AECOM sectors remains limited. One significant obstacle is that traditional digital twins rely on deterministic models, which require deterministic input parameters. This limits their accuracy, as they do not account for the substantial uncertainties inherent in AECOM projects. These uncertainties are particularly pronounced in geotechnical design and construction. To address this challenge, we propose a Probabilistic Digital Twin (PDT) framework that extends traditional digital twin methodologies by incorporating uncertainties, and is tailored to the requirements of geotechnical design and construction. The PDT framework provides a structured approach to integrating all sources of uncertainty, including aleatoric, data, model, and prediction uncertainties, and propagates them throughout the entire modeling process. To ensure that site-specific conditions are accurately reflected as additional information is obtained, the PDT leverages Bayesian methods for model updating. The effectiveness of the probabilistic digital twin framework is showcased through an application to a highway foundation construction project, demonstrating its potential to improve decision-making and project outcomes in the face of significant uncertainties.
This paper studies a generalized variant of the Colonel Blotto game, referred to as the Colonel Blotto game with costs. Unlike the classic Colonel Blotto game, which imposes the use-it-or-lose-it budget assumption, the Colonel Blotto game with costs captures the strategic importance of costs related both to obtaining resources and to assigning them across battlefields. We show that every instance of the Colonel Blotto game with costs is strategically equivalent to an instance of the zero-sum Colonel Blotto game with one additional battlefield. This enables the computation of Nash equilibria of the Colonel Blotto game with costs in polynomial time with respect to the game size: the number of battlefields plus the number of resources available to the players.
Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs. A. Jolicoeur-Martineau, I. Mitliagkas [Mila], 2019.
Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks. Recently, an upgraded version of BERT has been released with Whole Word Masking (WWM), which mitigates the drawbacks of masking only partial WordPiece tokens when pre-training BERT. In this technical report, we adapt whole word masking to Chinese text, masking whole words instead of individual Chinese characters, which poses a new challenge for the Masked Language Model (MLM) pre-training task. The model was trained on the latest Chinese Wikipedia dump. We aim to provide easy extensibility and better performance for Chinese BERT without changing the neural architecture or even the hyper-parameters. The model is verified on various NLP tasks, from sentence level to document level, including sentiment classification (ChnSentiCorp, Sina Weibo), named entity recognition (People's Daily, MSRA-NER), natural language inference (XNLI), sentence pair matching (LCQMC, BQ Corpus), and machine reading comprehension (CMRC 2018, DRCD, CAIL RC). Experimental results on these datasets show that whole word masking brings another significant gain. Moreover, we also examine the effectiveness of the Chinese pre-trained models BERT, ERNIE, and BERT-wwm. We release the pre-trained model (both TensorFlow and PyTorch) on GitHub: https://github.com/ymcui/Chinese-BERT-wwm
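The whole-word constraint described in the abstract above (masking every sub-token of a word together, rather than independent pieces) can be sketched as follows. This is an illustrative toy, not the authors' released training code: it uses the English WordPiece convention, where continuation sub-tokens carry a "##" prefix, whereas the Chinese variant instead relies on a word segmenter to group characters into words.

```python
import random

def whole_word_mask(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Toy whole-word masking over a WordPiece token sequence.

    Sub-tokens prefixed with '##' are grouped with the preceding token,
    so a word is always masked (or kept) as a single unit.
    """
    rng = random.Random(seed)

    # Group token indices into whole words.
    words = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)   # continuation piece joins the current word
        else:
            words.append([i])     # start of a new word

    # Decide masking per word, then apply it to every piece of that word.
    masked = list(tokens)
    for word in words:
        if rng.random() < mask_prob:
            for i in word:
                masked[i] = mask_token
    return masked
```

For example, `whole_word_mask(["un", "##afford", "##able", "prices"], mask_prob=1.0)` masks all four pieces, so the fragments "##afford" and "##able" can never survive while their head token "un" is hidden; character-level masking in the original Chinese BERT had exactly this leakage problem at the word level.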