We introduce a new class of totally balanced cooperative TU games, namely p-additive games. It is inspired by the class of inventory games that arises from inventory situations with temporary discounts (Toledo, 2002) and contains the class of inventory cost games (Meca et al., 2003). It is shown that every p-additive game and all of its subgames have a nonempty core. We also study the concavity/convexity and monotonicity properties of p-additive games. In addition, the modified SOC-rule is proposed as a solution for p-additive games. This solution is suitable for p-additive games since it is a core allocation that can be reached through a population monotonic allocation scheme. Moreover, two characterizations of the modified SOC-rule are provided.
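For readers less familiar with TU games, the non-emptiness result above refers to the standard core condition for cost games, which can be stated as follows (the notation c for the characteristic cost function and x for allocations is illustrative, not taken from the paper):

```latex
% Core of a cooperative cost TU game (N, c): the standard definition.
% An allocation x shares the total cost c(N) so that no coalition S
% would pay less by acting on its own.
\[
  \mathrm{Core}(N,c) \;=\;
  \Bigl\{\, x \in \mathbb{R}^{N} :
    \sum_{i \in N} x_i = c(N), \quad
    \sum_{i \in S} x_i \le c(S) \;\; \forall\, S \subseteq N
  \,\Bigr\}
\]
```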
Lattices are architected metamaterials whose properties strongly depend on their geometrical design. The analogy between lattices and graphs enables the use of graph neural networks (GNNs) as a faster surrogate model compared to traditional methods such as finite element modelling. In this work, we generate a large dataset of structure-property relationships for strut-based lattices. The dataset is made available to the community and can fuel the development of methods anchored in physical principles for fitting fourth-order tensors. In addition, we present a higher-order GNN model trained on this dataset. The key features of the model are (i) SE(3) equivariance and (ii) consistency with the thermodynamic law of conservation of energy. We compare the model to non-equivariant models using a number of error metrics and demonstrate its benefits in terms of predictive performance and reduced training requirements. Finally, we demonstrate an example application of the model to an architected-material design task. The methods we develop are applicable to fourth-order tensors beyond elasticity, such as the piezo-optical tensor.
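A minimal sketch of the lattice-to-graph analogy the abstract relies on: joints become nodes carrying coordinates, struts become edges, and one message-passing step aggregates neighbor features. All names and values here are illustrative; this is not the paper's model.

```python
import numpy as np

# Illustrative strut lattice: 4 joints (nodes) and the struts (edges)
# connecting them. Coordinates and connectivity are made up.
node_xyz = np.array([[0., 0., 0.],
                     [1., 0., 0.],
                     [0., 1., 0.],
                     [0., 0., 1.]])
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]  # strut endpoints

# One generic message-passing step: each node feature becomes the
# mean of its own and its neighbors' features (a plain GNN layer
# without learned weights, just to show the data flow).
feats = node_xyz.copy()
agg = feats.copy()
deg = np.ones(len(feats))
for i, j in edges:
    agg[i] += feats[j]
    agg[j] += feats[i]
    deg[i] += 1
    deg[j] += 1
updated = agg / deg[:, None]
print(updated)
```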
To succeed in their objectives, groups of individuals must be able to make quick and accurate collective decisions about the best option among a set of alternatives of different qualities. Group-living animals face this problem constantly, and plants and fungi are thought to do so too. Swarms of autonomous robots can also be programmed to make best-of-n decisions for solving tasks collaboratively. Humans, too, critically depend on such decisions and could often stand to be better at them. Thanks to their mathematical tractability, simple models such as the voter model and the local majority rule model have proven useful for describing the dynamics of such collective decision-making processes. To reach a consensus, individuals change their opinion by interacting with neighbors in their social network. At least among animals and robots, options of higher quality are exchanged more often and therefore spread faster than lower-quality options, leading to the collective selection of the best option. In this work, we study the impact of individuals making errors when pooling others' opinions, caused, for example, by the need to reduce cognitive load. Our analysis is grounded in a model that generalizes the two existing models (the local majority rule and the voter model) and exhibits a speed-accuracy trade-off regulated by the cognitive effort of individuals. We also investigate the impact of the interaction network topology on the collective dynamics. To do so, we extend our model and, using the heterogeneous mean-field approach, show the presence of another speed-accuracy trade-off, regulated by network connectivity. Interestingly, reduced network connectivity corresponds to increased collective decision accuracy.
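A minimal simulation sketch of the two baseline models the abstract generalizes: in the voter model an agent copies one random neighbor, while under the local majority rule it adopts the majority opinion of a sampled group. A well-mixed population and all parameter values are illustrative simplifications.

```python
import random

N = 100                      # population size (illustrative)
opinions = [random.choice([0, 1]) for _ in range(N)]

def voter_step(opinions):
    # Voter model: a random agent copies one random other agent
    # (well-mixed population for simplicity).
    i, j = random.sample(range(len(opinions)), 2)
    opinions[i] = opinions[j]

def majority_step(opinions, group_size=5):
    # Local majority rule: a random agent adopts the majority
    # opinion of a randomly sampled group.
    i = random.randrange(len(opinions))
    group = random.sample(range(len(opinions)), group_size)
    votes = sum(opinions[g] for g in group)
    opinions[i] = 1 if votes > group_size / 2 else 0

for _ in range(10_000):
    voter_step(opinions)     # or majority_step(opinions)
print(sum(opinions) / N)     # fraction holding opinion 1
```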
The usual way of testing probability forecasts in game-theoretic probability is via the construction of test martingales. The standard assumption is that all forecasts are output by the same forecaster. In this paper I discuss possible extensions of this picture to testing probability forecasts output by several forecasters, which corresponds to multiple hypothesis testing in statistics. One interesting phenomenon is that even a slight relaxation of the requirement of family-wise validity leads to a very significant increase in the efficiency of testing procedures. The main goal of this paper is to report the results of preliminary simulation studies and to list some directions for further research.
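For intuition, a test martingale against a single forecaster can be built as a betting process whose running product is a martingale under the forecaster's announced probabilities; a large final value is evidence against the forecaster. The likelihood-ratio bet below is a generic textbook construction, not the paper's procedure, and all values are illustrative.

```python
import random

# Binary outcomes with true probability 0.7; the forecaster
# (wrongly) always announces 0.5.
random.seed(0)
forecasts = [0.5] * 1000
outcomes = [1 if random.random() < 0.7 else 0 for _ in forecasts]

# Likelihood-ratio test martingale: bet with an alternative
# probability q against the announced forecast p. Under the
# forecaster's own probabilities each factor has expectation 1,
# so the product is a test martingale.
q = 0.7
M = 1.0
for p, y in zip(forecasts, outcomes):
    M *= (q / p) if y == 1 else ((1 - q) / (1 - p))
print(f"final martingale value: {M:.3g}")  # large => forecasts rejected
```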
The digital divide is the gap among population sub-groups in accessing and/or using digital technologies. For instance, older people show a lower propensity than younger people to have a broadband connection, use the Internet, and adopt new technologies. Motivated by the analysis of heterogeneity in the use of digital technologies, we build a bipartite network describing the presence of various digital skills in individuals from three European countries: Finland, Italy, and Bulgaria. Bipartite networks provide a useful structure for representing relationships between two disjoint sets of nodes, formally called sending and receiving nodes. The goal is to cluster individuals (sending nodes) based on their digital skills (receiving nodes) for each country. To this end, we employ a Mixture of Latent Trait Analyzers (MLTA) accounting for concomitant variables, which allows us to (i) cluster individuals according to their individual profiles and (ii) analyze how socio-economic and demographic characteristics, as well as intergenerational ties, influence individual digitalization. Results show that the type of digitalization depends substantially on age, income, and level of education, while the presence of children in the household appears to play an important role in the digitalization process in Italy and Finland only.
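To fix ideas, the bipartite data structure above can be encoded as a binary incidence matrix with individuals as rows (sending nodes) and skills as columns (receiving nodes). The skills and values below are invented for illustration; the paper's MLTA is a latent trait mixture model and is not reproduced here.

```python
import numpy as np

# Illustrative individuals-by-skills incidence matrix: rows are
# sending nodes (individuals), columns receiving nodes (skills).
skills = ["email", "spreadsheets", "online_banking", "social_media"]
X = np.array([[1, 0, 0, 1],
              [1, 1, 1, 0],
              [0, 0, 0, 1],
              [1, 1, 1, 1]])

# A crude profile summary (number of skills per individual), just to
# show how the two node sets relate in the matrix encoding.
profile = X.sum(axis=1)
print(dict(enumerate(profile)))
```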
Traffic Weaver is a Python package developed to generate a semi-synthetic signal (time series) with finer granularity from an averaged time series, such that the generated signal, once averaged, closely matches the original. The key steps used to recreate the signal are oversampling with a given strategy, stretching to match the integral of the original time series, smoothing, repeating, applying a trend, and adding noise; a sketch of this idea is given below. The primary motivation behind Traffic Weaver is to provide semi-synthetic time-varying traffic in telecommunication networks, facilitating the development and validation of traffic prediction models and aiding the deployment of network optimization algorithms tailored to time-varying traffic.
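A minimal numpy sketch of a subset of the recreation steps listed above (oversample, smooth, stretch to preserve the per-interval integral, add noise). This is illustrative only and does not use Traffic Weaver's actual API.

```python
import numpy as np

avg = np.array([10., 30., 20.])   # averaged signal, one value/interval
k = 8                              # oversampling factor

# 1) Oversample: repeat each averaged value k times, then smooth.
fine = np.repeat(avg, k)
kernel = np.ones(3) / 3
fine = np.convolve(fine, kernel, mode="same")

# 2) Stretch: rescale each block so its mean (hence its integral over
#    the interval) matches the original averaged value again.
for i, v in enumerate(avg):
    block = slice(i * k, (i + 1) * k)
    fine[block] *= v / fine[block].mean()

# 3) Add noise; block means are preserved approximately.
fine += np.random.default_rng(0).normal(0, 0.5, size=fine.size)

# Sanity check: averaging the fine series recovers the original.
print(fine.reshape(-1, k).mean(axis=1))  # ~ [10, 30, 20]
```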
In this note, we investigate the robustness of Nash equilibria (NE) in multi-player aggregative games with coupling constraints. Many algorithms exist for computing an NE of an aggregative game when the aggregator is known. When the coupling parameters are affected by uncertainty, robust NE need to be computed. We consider a scenario in which the players' weights in the aggregator are unknown, effectively making the aggregator a black box. We pursue a suitable learning approach to estimate the unknown aggregator by proposing an inverse variational inequality-based relationship, and we then use this learned counterpart to reconstruct the game and obtain first-order conditions for robust NE in the worst case. Furthermore, we characterize the generalization property of the learning methodology via an upper bound on the violation probability. Simulation experiments show the effectiveness of the proposed inverse learning approach.
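To make the setting concrete, here is a toy quadratic aggregative game in which each player's cost depends on its own action and a weighted aggregate; a best-response iteration finds the NE. The weights w are exactly the kind of unknown the abstract's inverse approach would estimate. Everything here (costs, ranges, targets) is an illustrative assumption, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
w = rng.uniform(0.05, 0.2, n)   # aggregator weights (unknown in the paper)
d = rng.uniform(1.0, 2.0, n)    # individual targets (illustrative)

# Cost of player i: 0.5*(x_i - d_i)^2 + x_i * sum_j w_j x_j.
# Setting the gradient to zero gives the best response
# x_i = (d_i - s_minus_i) / (1 + 2*w_i),
# where s_minus_i is the weighted aggregate excluding player i.
x = np.zeros(n)
for _ in range(200):            # best-response iteration to a NE
    for i in range(n):
        s_minus_i = w @ x - w[i] * x[i]
        x[i] = (d[i] - s_minus_i) / (1 + 2 * w[i])
print("NE actions:", np.round(x, 4))
print("aggregate :", round(float(w @ x), 4))
```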
Cardiac valve event timing plays a crucial role when conducting clinical measurements using echocardiography. However, established automated approaches are limited by the need for external electrocardiogram sensors, and manual measurements often rely on timing from different cardiac cycles. Recent methods have applied deep learning to cardiac timing, but they have mainly been restricted to detecting only two key time points, namely end-diastole (ED) and end-systole (ES). In this work, we propose a deep learning approach that leverages triplane recordings to enhance detection of valve events in echocardiography. Our method demonstrates improved performance in detecting six different events, including the valve events conventionally associated with ED and ES. Across all events, we achieve an average absolute frame difference (aFD) of at most 1.4 frames (29 ms), for the start of diastasis, down to 0.6 frames (12 ms) for mitral valve opening, in a ten-fold cross-validation with test splits on triplane data from 240 patients. On an external, independent test set consisting of apical long-axis data from 180 other patients, the worst-performing event detection had an aFD of 1.8 frames (30 ms). The proposed approach has the potential to significantly impact clinical practice by enabling more accurate, rapid, and comprehensive event detection, leading to improved clinical measurements.
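The aFD numbers quoted above are straightforward to pin down: the metric is the mean absolute difference, in frames, between predicted and annotated event frames. A short sketch follows; the frame values are invented and the 50 fps frame rate is an assumption consistent with the millisecond figures in the abstract.

```python
# Average absolute frame difference (aFD) between predicted and
# annotated event frames; values below are illustrative.
pred_frames = [12, 45, 46, 30]
true_frames = [13, 44, 48, 30]

afd = sum(abs(p - t) for p, t in zip(pred_frames, true_frames)) / len(pred_frames)
fps = 50  # assumed frame rate; 1 frame = 20 ms at 50 fps
print(f"aFD = {afd:.2f} frames ({afd * 1000 / fps:.0f} ms)")
```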
There are now many explainable AI methods for understanding the decisions of a machine learning model. Among these are methods based on counterfactual reasoning, which involve simulating feature changes and observing their impact on the prediction. This article proposes viewing this simulation process as a source of knowledge that can be stored and later reused in different ways. The process is illustrated for additive models and, more specifically, for the naive Bayes classifier, whose properties are shown to be particularly well suited to this purpose.
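A minimal sketch of the counterfactual probing the article builds on: change one feature of an instance, re-score the model, and store the effect for later reuse. It uses scikit-learn's GaussianNB on synthetic data; the data, the unit shift, and the storage scheme are all illustrative assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic labels

clf = GaussianNB().fit(X, y)

x = X[0].copy()
base = clf.predict_proba([x])[0, 1]

# Counterfactual probing: perturb each feature in turn and store the
# resulting change in the predicted probability of class 1.
knowledge = {}
for j in range(x.size):
    x_cf = x.copy()
    x_cf[j] += 1.0                              # illustrative shift
    knowledge[j] = clf.predict_proba([x_cf])[0, 1] - base
print(knowledge)   # stored effects, reusable for later explanations
```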
We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. It is a simple residual network that alternates (i) a linear layer in which image patches interact, independently and identically across channels, and (ii) a two-layer feed-forward network in which channels interact independently per patch. When trained with a modern training strategy using heavy data-augmentation and optionally distillation, it attains surprisingly good accuracy/complexity trade-offs on ImageNet. We will share our code based on the Timm library and pre-trained models.
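For concreteness, the alternation described above can be sketched as a single residual block: a linear layer acting across the patch dimension, then a two-layer MLP acting per patch across channels. This simplified PyTorch sketch uses LayerNorm where ResMLP uses affine normalization layers, and omits initialization details.

```python
import torch
import torch.nn as nn

class SimplifiedResMLPBlock(nn.Module):
    """One block: cross-patch linear mixing + per-patch channel MLP.
    Simplified: the actual ResMLP uses Affine layers, not LayerNorm."""
    def __init__(self, num_patches, dim, hidden):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.patch_mix = nn.Linear(num_patches, num_patches)
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):                    # x: (batch, patches, dim)
        # (i) patches interact, identically across channels
        x = x + self.patch_mix(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        # (ii) channels interact, independently per patch
        return x + self.channel_mlp(self.norm2(x))

x = torch.randn(2, 16, 64)                   # 16 patches, 64 channels
print(SimplifiedResMLPBlock(16, 64, 256)(x).shape)
```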
Time Series Classification (TSC) is an important and challenging problem in data mining. With the increasing availability of time series data, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) for this task. This is surprising, as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance in document classification and speech recognition. In this article, we study the current state of the art in deep learning for TSC by presenting an empirical study of the most recent DNN architectures for the task. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide the TSC community with an open-source deep learning framework in which we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we present the most exhaustive study of DNNs for TSC to date.
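As a flavor of the architectures typically benchmarked in this line of work, here is a generic fully convolutional baseline for univariate TSC in PyTorch. It follows the common FCN pattern (convolutional blocks followed by global average pooling) but is a sketch, not the study's exact implementation.

```python
import torch
import torch.nn as nn

class FCNBaseline(nn.Module):
    """Generic FCN for univariate TSC: conv blocks + global avg pool.
    Layer sizes follow the common FCN pattern and are illustrative."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 128, 8, padding="same"), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, 5, padding="same"), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, 128, 3, padding="same"), nn.BatchNorm1d(128), nn.ReLU())
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):              # x: (batch, 1, series_length)
        return self.head(self.features(x).mean(dim=-1))  # global avg pool

model = FCNBaseline(n_classes=5)
print(model(torch.randn(4, 1, 96)).shape)   # (4, 5) class logits
```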