The standard formulation of the PDE system of Mean Field Games (MFG) requires the differentiability of the Hamiltonian. In many cases, however, the structure of the underlying optimal control problem leads to a convex but non-differentiable Hamiltonian. For time-dependent MFG systems, we introduce a generalization of the problem as a Partial Differential Inclusion (PDI) by interpreting the derivative of the Hamiltonian in terms of its subdifferential set. In particular, we prove the existence and uniqueness of weak solutions to the resulting MFG PDI system under standard assumptions from the literature. We propose a monotone stabilized finite element discretization of the problem, using conforming affine elements in space and an implicit Euler discretization in time with mass-lumping. We prove strong convergence in $L^2(H^1)$ of the value function approximations and strong convergence in $L^p(L^2)$ of the density function approximations, together with strong $L^2$-convergence of the value function approximations at the initial time.
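To make the PDI formulation concrete, a schematic version of such a time-dependent system reads as follows; the viscosity $\nu$, coupling $f$, and data $u_T$, $m_0$ are placeholder notation, not taken from the paper:
\[
\begin{cases}
-\partial_t u - \nu \Delta u + H(x, \nabla u) = f(x, m), \\
\partial_t m - \nu \Delta m - \operatorname{div}(m\, b) = 0, \qquad b(t,x) \in \partial_p H(x, \nabla u(t,x)) \ \text{a.e.}, \\
u(T, \cdot) = u_T, \qquad m(0, \cdot) = m_0,
\end{cases}
\]
where the single-valued drift $D_p H(x, \nabla u)$ of the classical system is replaced by a measurable selection $b$ from the subdifferential $\partial_p H$; this replacement is exactly what turns the PDE system into a partial differential inclusion.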
Surjectivity and injectivity are among the most fundamental decision problems for cellular automata (CA). We simplify and optimize Amoroso's surjectivity algorithm and extend it to fixed, periodic, and reflective boundary conditions. We also propose a new algorithm for injectivity, the injectivity tree algorithm. Theoretical analysis and experiments show that, compared with Amoroso's injectivity algorithm, our algorithm saves substantial space and 90% or more of the running time, so it can support deciding CA with larger neighborhood sizes. Finally, we prove that reversibility under the periodic boundary and global injectivity of one-dimensional CA are equivalent.
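For intuition, here is a minimal brute-force baseline (not the injectivity tree algorithm) that decides injectivity of the global map on a ring of fixed length $n$ under the periodic boundary; it illustrates the exponential blow-up that more careful algorithms are designed to avoid:

```python
from itertools import product

def eca_rule(number):
    """Lookup table of an elementary CA (k=2, radius 1) from its Wolfram number."""
    return {(a, b, c): (number >> (4*a + 2*b + c)) & 1
            for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(config, rule, r=1):
    """One synchronous update on a ring (periodic boundary)."""
    n = len(config)
    return tuple(rule[tuple(config[(i + j) % n] for j in range(-r, r + 1))]
                 for i in range(n))

def injective_on_ring(rule, n, k=2, r=1):
    """Brute force: check that no two length-n ring configurations share an image."""
    seen = set()
    for c in product(range(k), repeat=n):
        image = step(c, rule, r)
        if image in seen:
            return False
        seen.add(image)
    return True

print(injective_on_ring(eca_rule(170), n=8))  # True: rule 170 is the shift map
print(injective_on_ring(eca_rule(90), n=8))   # False: rule 90 (XOR of neighbors)
```

The baseline enumerates all $k^n$ configurations, which is exactly why space- and time-saving algorithms matter for larger neighborhood sizes.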
Industry 4.0 has brought to attention the need for a connected, flexible, and autonomous production environment. The New Radio (NR) sidelink, introduced by the 3rd Generation Partnership Project (3GPP) in Release 16, can be particularly helpful for factories that need to facilitate cooperative and close-range communication. Automated Guided Vehicles (AGVs) are important for material handling and carriage within these environments, and NR-sidelink communication can further enhance their performance. An efficient resource allocation mechanism is required to ensure reliable communication and to avoid interference between AGVs and other wireless systems in a factory using NR-sidelink. This work evaluates the 3GPP-standardized resource allocation algorithm for NR-sidelink in a cooperative-carrying AGV use case. We suggest further improvements tailored to the quality of service (QoS) requirements of an indoor factory communication scenario with cooperative AGVs. NR-sidelink communication has the potential to help meet the QoS requirements of different Industry 4.0 use cases, and this work can serve as a foundation for further improvements to NR-sidelink in 3GPP Release 18 and beyond.
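As a rough illustration of what sensing-based sidelink resource allocation involves, the sketch below caricatures a Mode 2-style selection step: resources whose decoded reservations exceed an RSRP threshold are excluded, the threshold is relaxed in 3 dB steps until enough candidates remain, and one candidate is picked at random. All function names, parameter names, and values are illustrative assumptions, not taken from the 3GPP specification or from this work:

```python
import random

def select_resource(sensed_rsrp, rsrp_threshold_dbm=-110.0, min_candidate_ratio=0.2):
    """Simplified sensing-based selection over a window of candidate resources.
    sensed_rsrp maps a resource id to the RSRP (dBm) of a decoded reservation."""
    threshold = rsrp_threshold_dbm
    while True:
        candidates = [r for r, p in sensed_rsrp.items() if p < threshold]
        if len(candidates) >= min_candidate_ratio * len(sensed_rsrp):
            return random.choice(candidates)
        threshold += 3.0  # relax the exclusion threshold until enough remain

pool = {f"slot-{i}": random.uniform(-130.0, -90.0) for i in range(20)}  # toy RSRP map
print(select_resource(pool))
```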
Graph Neural Networks (GNNs) have emerged as one of the leading approaches for machine learning on graph-structured data. Despite their great success, critical computational challenges such as over-smoothing, over-squashing, and limited expressive power continue to impact the performance of GNNs. In this study, inspired by the time-reversal principle commonly utilized in classical and quantum physics, we reverse the time direction of the graph heat equation. The resulting reversed process yields a class of high-pass filtering functions that enhance the sharpness of graph node features. Leveraging this concept, we introduce the Multi-Scaled Heat Kernel based GNN (MHKG), which amalgamates the effects of diverse filtering functions on node features. To explore more flexible filtering conditions, we further generalize MHKG into a model termed G-MHKG and thoroughly analyze the role of each element in controlling over-smoothing, over-squashing, and expressive power. Notably, we show that all of the aforementioned issues can be characterized and analyzed via the properties of the filtering functions, and we uncover a trade-off between over-smoothing and over-squashing: enhancing node feature sharpness makes the model suffer more from over-squashing, and vice versa. Furthermore, we manipulate the time direction again to show how G-MHKG can handle both issues under mild conditions. Our experiments highlight the effectiveness of the proposed models: they surpass several GNN baselines in performance across graph datasets characterized by both homophily and heterophily.
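The time-reversal idea can be seen directly in the spectral domain. Writing $L = U \Lambda U^\top$ for the graph Laplacian, the graph heat equation and its solution are
\[
\frac{\partial X(t)}{\partial t} = -L X(t) \quad\Longrightarrow\quad X(t) = e^{-tL} X(0) = U \operatorname{diag}\big(e^{-t\lambda_1}, \dots, e^{-t\lambda_n}\big) U^\top X(0),
\]
so running time forward applies the low-pass spectral filter $g(\lambda) = e^{-t\lambda}$, which smooths node features, while reversing time ($t \mapsto -t$) applies $g(\lambda) = e^{t\lambda}$, which amplifies the large-$\lambda$ (high-frequency) components and sharpens them. (This is a standard spectral illustration of the mechanism, not the paper's exact filter family.)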
We argue for the application of bibliometric indices to quantify the long-term uncertainty of outcome in sports. The Euclidean index is proposed to reward quality over quantity, while the rectangle index can be an appropriate measure of core performance. Their differences are highlighted through an axiomatic analysis and several examples. Our approach also requires a weighting scheme to compare different achievements. The methodology is illustrated by studying the knockout stage of the UEFA Champions League in the 20 seasons played between 2003 and 2023: club and country performances, as well as three types of competitive balance, are considered. Measuring competition at the level of national associations is a novelty. All results are remarkably robust with respect to the choice of bibliometric index and the assigned weights. Since the performances of national associations are more stable than the results of individual clubs, it would be better to build the seeding in the UEFA Champions League group stage upon association coefficients adjusted for league finishing positions rather than upon club coefficients.
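Under common definitions (the Euclidean index as the Euclidean norm of the achievement vector, and the rectangle index as the largest rectangle fitting under the ranked achievement curve), both indices are a few lines of code; the per-season weights below are purely hypothetical:

```python
import math

def euclidean_index(scores):
    """Euclidean norm of the achievement vector: rewards quality over quantity."""
    return math.sqrt(sum(s * s for s in scores))

def rectangle_index(scores):
    """Largest rectangle under the ranked achievement curve: core performance."""
    ranked = sorted(scores, reverse=True)
    return max((i + 1) * s for i, s in enumerate(ranked))

# Hypothetical weights for one club's knockout-stage results across seasons,
# e.g. 1 = round of 16, 2 = quarterfinal, 4 = semifinal, 8 = final, 16 = title.
club_seasons = [16, 8, 4, 4, 2, 1, 1]
print(round(euclidean_index(club_seasons), 1))  # 18.9, dominated by the best seasons
print(rectangle_index(club_seasons))            # 16, e.g. four seasons scoring >= 4
```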
The completely randomized experiment is the gold standard for causal inference. When covariate information is available for each experimental unit, a typical approach is to include it in covariate adjustment for more accurate treatment effect estimation. In this paper, we investigate this problem under the randomization-based framework, i.e., the covariates and potential outcomes of all experimental units are treated as deterministic quantities and the randomness comes solely from the treatment assignment mechanism. Under this framework, to achieve asymptotically valid inference, existing estimators usually require either (i) that the dimension of covariates $p$ grows at a rate no faster than $O(n^{2/3})$ as the sample size $n \to \infty$; or (ii) certain sparsity constraints on the linear representations of potential outcomes constructed via possibly high-dimensional covariates. In this paper, we consider the moderately high-dimensional regime where $p$ is allowed to be of the same order of magnitude as $n$. We develop a novel debiased estimator with a corresponding inference procedure and establish its asymptotic normality under mild assumptions. Our estimator is model-free and does not require any sparsity constraint on the potential outcomes' linear representations. We also discuss its asymptotic efficiency improvements over the unadjusted treatment effect estimator under different dimensionality constraints. Numerical analysis confirms that, compared to other regression-adjustment-based treatment effect estimators, our debiased estimator performs well in moderately high dimensions.
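For context, the estimators being improved upon have the familiar randomization-based form below; the paper's contribution is a debiasing correction that keeps such adjusted estimators valid when $p$ is of the same order as $n$, and that correction is not reproduced here:
\[
\hat{\tau}_{\mathrm{unadj}} = \bar{Y}_1 - \bar{Y}_0, \qquad
\hat{\tau}_{\mathrm{adj}} = \big(\bar{Y}_1 - (\bar{X}_1 - \bar{X})^{\top} \hat{\beta}_1\big) - \big(\bar{Y}_0 - (\bar{X}_0 - \bar{X})^{\top} \hat{\beta}_0\big),
\]
where $\bar{Y}_a$ and $\bar{X}_a$ are the outcome and covariate means in arm $a$, $\bar{X}$ is the overall covariate mean, and $\hat{\beta}_a$ are within-arm least-squares coefficients.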
We present a method for balancing between the Local and Global Structures (LGS) in graph embedding via a tunable parameter. Some embedding methods aim to capture global structures, while others attempt to preserve local neighborhoods. Few methods attempt to do both, and it is not always possible to capture both local and global information well in two dimensions, which is where most graph drawings live. The choice of a local or a global embedding for visualization depends not only on the task but also on the structure of the underlying data, which may not be known in advance. For a given graph, LGS aims to find a good balance between the local and global structure to preserve. We evaluate the performance of LGS on synthetic and real-world datasets, and our results indicate that it is competitive with state-of-the-art methods according to established quality metrics such as stress and neighborhood preservation. We also introduce a novel quality metric, cluster distance preservation, to assess the capture of intermediate structure. All source code, datasets, experiments, and analyses are available online.
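One plausible form of such a tunable objective (the actual formulation in LGS may differ) interpolates between a neighborhood-preserving term and full stress with a parameter $\alpha \in [0, 1]$:
\[
\mathcal{L}_\alpha(x) = (1-\alpha) \sum_{(i,j)\,:\, d_{ij} \le \delta} \big(\lVert x_i - x_j \rVert - d_{ij}\big)^2 \;+\; \alpha \sum_{i<j} \frac{\big(\lVert x_i - x_j \rVert - d_{ij}\big)^2}{d_{ij}^{2}},
\]
where $d_{ij}$ are graph-theoretic distances, $\delta$ restricts the first sum to near pairs (local structure), and the second sum is the standard normalized stress (global structure); $\alpha = 0$ yields a purely local embedding and $\alpha = 1$ a purely global one.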
In Generalized Linear Models (GLMs), the predictor variables are assumed to have a linear effect on the outcome. However, this assumption is often too strict, because in many applications predictors have a nonlinear relation with the outcome. Optimal Scaling (OS) transformations combined with GLMs can handle such nonlinear relations. Transformations of the predictors have been integrated into GLMs before, e.g., in Generalized Additive Models. However, the OS methodology has several benefits. For example, the levels of categorical predictors are quantified directly, so that they can be included in the model without defining dummy variables. This approach enhances the interpretation and visualization of the effect of different levels on the outcome. Furthermore, monotonicity restrictions can be applied to the OS transformations so that the original ordering of the category values is preserved; this improves the interpretability of the effect and may prevent overfitting. The scaling level can be chosen for each individual predictor, so models can include mixed scaling levels; in this way, a suitable transformation can be found for each predictor in the model. The implementation of OS in logistic regression is demonstrated using three datasets that contain a binary outcome variable and a set of categorical and/or continuous predictor variables.
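The following toy sketch conveys the quantification idea for a single categorical predictor in logistic regression: alternate between fitting the GLM on the current quantifications and updating each category's score by a one-dimensional Newton step. It is a schematic illustration, not the authors' algorithm, and the data-generating process is hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, k = 1000, 4
cats = rng.integers(0, k, size=n)           # one categorical predictor with k levels
true_q = np.array([-1.0, -0.2, 0.4, 1.5])   # hypothetical "true" level effects
y = (rng.random(n) < 1 / (1 + np.exp(-true_q[cats]))).astype(int)

q = np.arange(k, dtype=float)               # start from the raw category codes
for _ in range(25):
    q = (q - q.mean()) / q.std()            # fix location/scale for identifiability
    clf = LogisticRegression(C=1e6).fit(q[cats].reshape(-1, 1), y)  # ~unpenalized
    b0, b1 = clf.intercept_[0], clf.coef_[0, 0]
    for c in range(k):                      # 1-D Newton update of each category score
        mask = cats == c
        p = 1 / (1 + np.exp(-(b0 + b1 * q[c])))
        q[c] += (b1 * np.sum(y[mask] - p)) / (b1**2 * mask.sum() * p * (1 - p))
    # a monotone scaling level would additionally project q onto an ordered sequence

print(np.round(q, 2))  # recovered quantifications, up to location and scale
```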
Text-to-SQL is the task of converting a natural language question into a structured query language (SQL) query to retrieve information from a database. Large language models (LLMs) perform well in natural language generation tasks, but they are not specifically pre-trained to understand the syntax and semantics of SQL commands. In this paper, we propose an LLM-based framework for Text-to-SQL that retrieves helpful demonstration examples to prompt LLMs. However, questions over different database schemas can vary widely even when the intentions behind them are similar and the corresponding SQL queries exhibit similarities. Consequently, it is crucial to identify SQL demonstrations that align with our requirements. We design a de-semanticization mechanism that extracts question skeletons, allowing us to retrieve similar examples based on their structural similarity. We also model the relationships between question tokens and database schema items (i.e., tables and columns) to filter out schema-related information. Our framework adapts the range of the database schema in prompts to balance length and valuable information. A fallback mechanism allows a more detailed schema to be provided if the generated SQL query fails. Our framework outperforms state-of-the-art models and demonstrates strong generalization ability on three cross-domain Text-to-SQL benchmarks.
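A minimal sketch of the skeleton-extraction idea is shown below; function names and masking rules are illustrative assumptions, and the paper's mechanism is more sophisticated (e.g., it also models question-schema relationships). Literals are masked so that retrieval keys off the question's structure rather than its surface semantics:

```python
import re
from difflib import SequenceMatcher

def skeleton(question: str) -> str:
    """Crude de-semanticization: mask literals so only the structure remains.
    A real system would also mask schema-linked entities (e.g., via NER)."""
    s = re.sub(r'"[^"]*"|\'[^\']*\'', " <val> ", question)  # quoted string values
    s = re.sub(r"\b\d+(?:\.\d+)?\b", " <num> ", s)          # numeric literals
    return re.sub(r"\s+", " ", s).strip().lower()

def retrieve_demos(query: str, pool: list, top_k: int = 3) -> list:
    """Rank candidate demonstration questions by skeleton similarity."""
    qs = skeleton(query)
    return sorted(pool, reverse=True,
                  key=lambda d: SequenceMatcher(None, qs, skeleton(d)).ratio())[:top_k]

pool = ["How many singers are older than 30?",
        "List the names of teams founded before 1990.",
        "What is the average price of products in 'Toys'?"]
print(retrieve_demos("How many employees earn more than 50000?", pool, top_k=1))
```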
Connections between Support Vector Machines, Wasserstein Distance and Gradient-Penalty GANs. A. Jolicoeur-Martineau, I. Mitliagkas (Mila), 2019.
Within the rapidly developing Internet of Things (IoT), numerous and diverse physical devices, Edge devices, Cloud infrastructure, and their quality of service (QoS) requirements need to be represented within a unified specification in order to enable rapid IoT application development, monitoring, and dynamic reconfiguration. However, heterogeneities among different configuration knowledge representation models limit the acquisition, discovery, and curation of configuration knowledge for coordinated IoT applications. This paper proposes a unified data model to represent IoT resource configuration knowledge artifacts. It also proposes IoT-CANE (Context-Aware recommendatioN systEm) to facilitate incremental knowledge acquisition and declarative, context-driven knowledge recommendation.
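As an illustration of what a unified configuration artifact might look like, the sketch below models device-, Edge-, and Cloud-layer resources with one schema; all class and field names are assumptions for illustration, not the paper's actual data model:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class QoS:
    latency_ms: float       # end-to-end latency budget
    availability: float     # e.g., 0.999

@dataclass
class IoTResource:
    name: str
    layer: str                                    # "device" | "edge" | "cloud"
    capabilities: Dict[str, str] = field(default_factory=dict)
    qos: Optional[QoS] = None

# One schema across all three layers lets a recommender match artifacts uniformly.
camera = IoTResource("cam-01", "device", {"video": "1080p"}, QoS(50.0, 0.99))
broker = IoTResource("edge-broker-01", "edge", {"protocol": "MQTT"})
print(camera, broker, sep="\n")
```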