The fingerprinting game is analysed when the coalition size $k$ is known to the tracer, but the colluders can distribute themselves across $L$ TV channels. The collusion channel is introduced, and the extra degrees of freedom available to the coalition are made explicit in our formulation. We introduce a payoff functional analogous to that of the single-TV-channel case, which we conjecture to be closely related to the fingerprinting capacity. For the binary alphabet under the marking assumption, with access restricted to one TV channel per person per segment, we derive the asymptotic behavior of the payoff functional. We find that the value of the maximin game for our payoff is asymptotically equal to $L^2/(2 k^2 \ln 2)$, with the optimal strategy for the tracer being the arcsine distribution and, for the coalition, the interleaving attack across all TV channels combined with an equal division of the colluders over the $L$ TV channels.
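As a rough, self-contained illustration of the two strategies named above (not code from the paper), the sketch below samples per-segment biases from the arcsine distribution, splits a toy coalition evenly over the channels, and applies the interleaving attack per channel; all sizes (`k = 12`, `L = 3`, `n` segments) are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def arcsine_bias(n, rng):
    # Arcsine density 1/(pi*sqrt(p(1-p))) on (0, 1), sampled as sin^2 of a
    # uniform angle; the tracer's conjectured-optimal bias distribution.
    return np.sin(rng.uniform(0.0, np.pi / 2, size=n)) ** 2

def interleaving_attack(symbols, rng):
    # For each segment, copy the symbol of a uniformly chosen colluder among
    # those assigned to this channel (the interleaving attack).
    k, n = symbols.shape
    return symbols[rng.integers(0, k, size=n), np.arange(n)]

k, L, n = 12, 3, 10_000                          # toy sizes: 12 colluders, 3 channels
per_channel = np.array_split(np.arange(k), L)    # equal split of colluders over channels
forgeries = []
for users in per_channel:
    p = arcsine_bias(n, rng)                                 # per-segment biases
    code = (rng.random((len(users), n)) < p).astype(int)     # binary codewords
    forgeries.append(interleaving_attack(code, rng))
print([f[:10] for f in forgeries])
```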
We present a data-driven approach to characterizing nonidentifiability of a model's parameters and illustrate it through dynamic as well as steady-state kinetic models. By employing Diffusion Maps and their extensions, we discover the minimal combinations of parameters required to characterize the output behavior of a chemical system: a set of effective parameters for the model. Furthermore, we introduce and use a Conformal Autoencoder Neural Network technique, as well as a kernel-based Jointly Smooth Function technique, to disentangle the redundant parameter combinations that do not affect the output behavior from the ones that do. We discuss the interpretability of our data-driven effective parameters, and demonstrate the utility of the approach both for behavior prediction and parameter estimation. In the latter task, it becomes important to describe level sets in parameter space that are consistent with a particular output behavior. We validate our approach on a model of multisite phosphorylation, where a reduced set of effective parameters (nonlinear combinations of the physical ones) has previously been established analytically.
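A minimal sketch of the core idea, under simplifying assumptions: the toy "kinetic" model below has three physical parameters but its outputs depend only on two combinations (a = p1*p2 and b = p3, an assumption made purely for illustration), and a plain-vanilla diffusion map on the sampled outputs reveals that only a few eigendirections are needed. The paper's output-informed extensions and the phosphorylation model are far richer than this.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a kinetic model: 3 physical parameters, but the observed
# outputs only depend on the effective combinations a = p1*p2 and b = p3.
P = rng.uniform(0.5, 2.0, size=(600, 3))
a, b = P[:, 0] * P[:, 1], P[:, 2]
Y = np.column_stack([a / (1.0 + a), b, a * b])     # observed output behavior

# Plain diffusion maps on the outputs: Gaussian kernel, row-normalised Markov
# matrix, leading eigenvectors give data-driven coordinates.
d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / np.median(d2))
M = K / K.sum(axis=1, keepdims=True)
vals, vecs = np.linalg.eig(M)
idx = np.argsort(-vals.real)[:6]
print(np.round(vals.real[idx], 3))   # a spectral gap hints at how many independent
                                     # coordinates (effective parameters) the outputs need
```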
Age of information (AoI) is an effective performance metric for measuring the freshness of information and is popular in applications involving status updates. Most existing works adopt the average AoI as the metric, which cannot provide strict performance guarantees. In this work, the outage probability of the peak AoI exceeding a given threshold is analyzed in a multi-source system under round-robin scheduling. Two queueing disciplines are considered, namely the first-come-first-served (FCFS) queue and the single-packet queue. For FCFS, upper and lower bounds on the outage probability are derived which coincide asymptotically, characterizing its exact scaling. For the single-packet queue, an upper bound is derived whose effectiveness is validated by simulation results. The analysis concretizes the common belief that single-packet queueing achieves better AoI performance than FCFS. It also reveals that the two disciplines have similar asymptotic performance when the inter-arrival time is much larger than the total transmission time.
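The following sketch is a single-source toy simulation (not the multi-source round-robin model analyzed in the paper) that estimates the peak-AoI outage probability under the two disciplines compared above; the exponential inter-arrival and transmission times and the threshold are arbitrary choices, meant only to illustrate the FCFS versus single-packet comparison.

```python
import numpy as np

rng = np.random.default_rng(2)

def sim_fcfs(arrivals, services):
    # FCFS queue: peak AoI before each delivery is (departure time) minus
    # (generation time of the previously delivered packet).
    server_free, last_gen, peaks = 0.0, None, []
    for t, s in zip(arrivals, services):
        depart = max(server_free, t) + s
        if last_gen is not None:
            peaks.append(depart - last_gen)
        last_gen, server_free = t, depart
    return np.array(peaks)

def sim_single_packet(arrivals, services):
    # Single-packet queue: at most one packet waits; a fresher arrival replaces it.
    server_free, waiting, last_gen, peaks = 0.0, None, None, []

    def deliver(gen, svc, start):
        nonlocal server_free, last_gen
        depart = start + svc
        if last_gen is not None:
            peaks.append(depart - last_gen)
        last_gen, server_free = gen, depart

    for t, s in zip(arrivals, services):
        if waiting is not None and server_free <= t:
            deliver(*waiting, server_free)    # waiting packet entered service when server freed
            waiting = None
        if server_free <= t:
            deliver(t, s, t)                  # server idle: serve the new packet now
        else:
            waiting = (t, s)                  # replace any staler waiting packet
    return np.array(peaks)

n, threshold = 200_000, 12.0
arrivals = np.cumsum(rng.exponential(5.0, n))   # mean inter-arrival time 5
services = rng.exponential(1.0, n)              # mean transmission time 1
for name, sim in [("FCFS", sim_fcfs), ("single-packet", sim_single_packet)]:
    peaks = sim(arrivals, services)
    print(f"{name}: P(peak AoI > {threshold}) ~ {np.mean(peaks > threshold):.4f}")
```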
In this paper we provide a rigorous convergence analysis for the renowned particle swarm optimization method, using tools from stochastic calculus and the analysis of partial differential equations. Based on a time-continuous formulation of the particle dynamics as a system of stochastic differential equations, we establish convergence to a global minimizer of a possibly nonconvex and nonsmooth objective function in two steps. In the first step, we prove consensus formation of an associated mean-field dynamics by analyzing the time evolution of the variance of the particle distribution, and then show that this consensus is close to a global minimizer by employing the asymptotic Laplace principle and a tractability condition on the energy landscape of the objective function. These results allow for the use of memory mechanisms and hold for a rich class of objectives, provided certain well-preparation conditions on the hyperparameters and the initial datum are satisfied. In the second step, at least for the case without memory effects, we provide a quantitative result on the mean-field approximation of particle swarm optimization, which specifies the convergence of the interacting particle system to the associated mean-field limit. Combining these two results yields global convergence guarantees for the numerical particle swarm optimization method with provable polynomial complexity. To demonstrate the applicability of the method we propose an efficient and parallelizable implementation, which is tested in particular on a competitive and well-understood high-dimensional benchmark problem in machine learning.
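To make the time-continuous formulation concrete, here is a minimal Euler-Maruyama sketch of a memory-free, second-order PSO-type system of SDEs with a Gibbs-weighted consensus point, run on the Rastrigin benchmark; the specific coefficients are toy choices and do not reflect the paper's well-preparation conditions or its parallel implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def rastrigin(x):
    # Standard nonconvex benchmark; global minimizer at the origin.
    return 10.0 * x.shape[-1] + np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x), axis=-1)

# Euler-Maruyama discretisation of a memory-free second-order PSO-type SDE:
#   dX_i = V_i dt,
#   m dV_i = [-gamma V_i + lam (v_alpha - X_i)] dt + sigma diag(v_alpha - X_i) dB_i,
# where v_alpha is the Gibbs-weighted consensus point with weights exp(-alpha f).
N, d, dt, steps = 200, 10, 0.01, 2000
m, gamma, lam, sigma, alpha = 1.0, 1.0, 1.0, 0.7, 50.0   # toy hyperparameters
X = rng.uniform(-4, 4, size=(N, d))
V = np.zeros((N, d))

for _ in range(steps):
    f = rastrigin(X)
    w = np.exp(-alpha * (f - f.min()))                 # stabilised Gibbs weights
    v_alpha = (w[:, None] * X).sum(axis=0) / w.sum()   # consensus point
    noise = sigma * (v_alpha - X) * rng.normal(size=(N, d)) * np.sqrt(dt)
    X, V = X + V * dt, V + ((-gamma * V + lam * (v_alpha - X)) / m) * dt + noise / m

print("consensus point norm:", float(np.linalg.norm(v_alpha)),
      " objective:", float(rastrigin(v_alpha[None])[0]))
```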
Games played in the last round of a round-robin tournament invite match-fixing or tacit collusion if the two opposing teams can benefit from a particular result at the expense of other teams. For the case of four teams, the current study identifies all such situations that arise when head-to-head records are used as the primary tie-breaking principle. Simulations based on the 2016 UEFA European Football Championship reveal that the official tie-breaking policy substantially increases the risk of collusion, but this risk can be mitigated by choosing an optimal order of matches. Following the proposed schedule improves the competitiveness of the two games played in the last round without compromising any other aspect of the competition.
This work considers mitigation of information leakage between communication and sensing operations in joint communication and sensing systems. Specifically, a discrete memoryless state-dependent broadcast channel model is studied in which (i) the presence of feedback enables a transmitter to simultaneously achieve reliable communication and channel state estimation; (ii) one of the receivers is treated as an eavesdropper whose state should be estimated but which should remain oblivious to a part of the transmitted information. The model abstracts the challenges behind security for joint communication and sensing if one views the channel state as a characteristic of the receiver, e.g., its location. For independent identically distributed (i.i.d.) states, perfect output feedback, and when part of the transmitted message should be kept secret, a partial characterization of the secrecy-distortion region is developed. The characterization is exact when the broadcast channel is either physically-degraded or reversely-physically-degraded. The characterization is also extended to the situation in which the entire transmitted message should be kept secret. The benefits of a joint approach compared to separation-based secure communication and state-sensing methods are illustrated with a binary joint communication and sensing model.
In precision medicine, identifying optimal sequences of decision rules, termed dynamic treatment regimes (DTRs), is an important undertaking. One approach investigators may take to infer about optimal DTRs is via Bayesian dynamic Marginal Structural Models (MSMs). These models represent the expected outcome under adherence to each DTR in a family indexed by a parameter $ \psi $; the function mapping regimes in the family to their expected outcomes under adherence is known as the value function. Models that allow for straightforward identification of an optimal DTR may lead to biased estimates. If a more flexible model is computationally tractable, common wisdom says that a grid search for the optimal DTR may obviate this difficulty. In a Bayesian context, computational difficulties may be compounded if a posterior mean must be calculated at each grid point. We seek to alleviate these inferential challenges by implementing Gaussian Process ($ \mathcal{GP} $) optimization methods for estimators of the causal effect of adherence to a specified DTR. We examine how to identify optimal DTRs in settings where the value function is multimodal, a situation that is often not addressed in the DTR literature. We conclude that a $ \mathcal{GP} $ modeling approach that acknowledges noise in the estimated response surface leads to improved results. Additionally, we find that a grid search may not always yield a robust solution and that it is often less efficient than a $ \mathcal{GP} $ approach. We illustrate the use of the proposed methods by analyzing a clinical dataset with the aim of quantifying the effect of different patterns of HIV therapy.
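A minimal sketch of $ \mathcal{GP} $ optimization over a noisy, multimodal value surface, assuming a one-dimensional regime index $ \psi $ and a hypothetical noisy estimator `value_estimate` standing in for a posterior-mean evaluation (the actual value function, noise level, and acquisition rule in the paper may differ); it uses standard expected improvement with a white-noise kernel to acknowledge evaluation noise.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(4)

def value_estimate(psi):
    # Hypothetical noisy estimator of the value function V(psi): multimodal,
    # observed with sampling noise (a stand-in for a posterior-mean evaluation).
    return np.sin(3 * psi) + 0.5 * np.cos(7 * psi) + rng.normal(0, 0.15, np.shape(psi))

grid = np.linspace(0, 2, 400)[:, None]          # candidate regimes indexed by psi
psi_obs = rng.uniform(0, 2, 6)[:, None]         # a few initial evaluations
v_obs = value_estimate(psi_obs.ravel())

kernel = Matern(nu=2.5) + WhiteKernel(0.15**2)  # WhiteKernel models evaluation noise
for _ in range(15):
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(psi_obs, v_obs)
    mu, sd = gp.predict(grid, return_std=True)
    best = v_obs.max()
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    psi_next = grid[np.argmax(ei)]
    psi_obs = np.vstack([psi_obs, psi_next])
    v_obs = np.append(v_obs, value_estimate(psi_next.ravel()))

print("estimated optimal regime psi* ~", float(psi_obs[np.argmax(v_obs)]))
```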
Content moderation research typically prioritizes representing and addressing challenges for one group of stakeholders or communities in one type of context. While taking a focused approach is reasonable or even favorable for empirical case studies, it does not address how content moderation works in multiple contexts. Through a systematic literature review of 86 content moderation papers that document empirical studies, we seek to uncover patterns and tensions within past content moderation research. We find that content moderation can be characterized as a series of trade-offs around moderation actions, styles, philosophies, and values. We discuss how facilitating cooperation and preventing abuse, two key elements in Grimmelmann's definition of moderation, are inherently dialectical in practice. We close by showing how researchers, designers, and moderators can use our framework of trade-offs in their own work, and arguing that trade-offs should be of central importance in investigating and designing content moderation.
Accurate diagnosis and prognosis of Alzheimer's disease are crucial for developing new therapies and reducing the associated costs. Recently, with advances in convolutional neural networks, deep learning methods have been proposed to automate these two tasks using structural MRI. However, these methods often suffer from a lack of interpretability and generalization and have limited prognosis performance. In this paper, we propose a novel deep framework designed to overcome these limitations. Our pipeline consists of two stages. In the first stage, 125 3D U-Nets are used to estimate voxelwise grade scores over the whole brain. The resulting 3D maps are then fused to construct an interpretable 3D grading map indicating the disease severity at the structure level. As a consequence, clinicians can use this map to detect the brain structures affected by the disease. In the second stage, the grading map and the subject's age are used to perform classification with a graph convolutional neural network. Experimental results based on 2106 subjects demonstrated competitive performance of our deep framework compared to state-of-the-art methods on different datasets for both AD diagnosis and prognosis. Moreover, we found that using a large number of U-Nets processing different overlapping brain areas improved the generalization capacity of the proposed method.
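To illustrate the first-stage fusion step, here is a sketch that averages overlapping subvolume grade maps into one whole-brain grading map; the U-Net outputs are mocked with random scores, and the geometry (a 96^3 volume, 32^3 patches, stride 16, hence 5^3 = 125 overlapping subvolumes) is an assumption chosen to be consistent with the count in the abstract, not necessarily the paper's exact configuration or fusion rule.

```python
import numpy as np

rng = np.random.default_rng(5)

brain_shape = (96, 96, 96)
patch, stride = 32, 16                      # overlapping 32^3 subvolumes, stride 16
acc = np.zeros(brain_shape)                 # sum of grade predictions per voxel
cnt = np.zeros(brain_shape)                 # number of subvolumes covering each voxel

starts = range(0, brain_shape[0] - patch + 1, stride)   # 5 positions per axis -> 125 patches
for x in starts:
    for y in starts:
        for z in starts:
            grades = rng.uniform(-1, 1, (patch,) * 3)   # mock "U-Net" grade scores in [-1, 1]
            acc[x:x+patch, y:y+patch, z:z+patch] += grades
            cnt[x:x+patch, y:y+patch, z:z+patch] += 1

grading_map = acc / np.maximum(cnt, 1)      # voxelwise average over overlapping predictions
print(grading_map.shape, float(grading_map.min()), float(grading_map.max()))
```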
Estimation of a conditional mean (linking a set of features to an outcome of interest) is a fundamental statistical task. While there is an appeal to flexible nonparametric procedures, effective estimation in many classical nonparametric function spaces (e.g., multivariate Sobolev spaces) can be prohibitively difficult -- both statistically and computationally -- especially when the number of features is large. In this paper, we present (penalized) sieve estimators for regression in nonparametric tensor product spaces: These spaces are more amenable to multivariate regression, and allow us to, in part, avoid the curse of dimensionality. Our estimators can be easily applied to multivariate nonparametric problems and have appealing statistical and computational properties. Moreover, they can effectively leverage additional structures such as feature sparsity. In this manuscript, we give theoretical guarantees indicating that the predictive performance of our estimators scales favorably with dimension. In addition, we present numerical examples comparing the finite-sample performance of the proposed estimators with several popular machine learning methods.
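A minimal sketch of a penalized tensor-product sieve fit, under assumptions of my own choosing: a univariate cosine sieve basis on [0, 1], full tensor products truncated at level K, and an l1 penalty (LassoCV) to exploit feature sparsity in a toy regression where the signal involves only two of the four features; the paper's specific basis, penalty, and tuning may differ.

```python
import numpy as np
from itertools import product
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(6)

def cosine_features(x, K):
    # Univariate sieve basis on [0, 1]: {1, cos(pi k x), k = 1..K-1}.
    return np.column_stack([np.ones_like(x)] + [np.cos(np.pi * k * x) for k in range(1, K)])

def tensor_product_features(X, K):
    # Tensor-product sieve: all products of one univariate basis function per
    # coordinate (K^d features when truncated at level K in each coordinate).
    n, d = X.shape
    uni = [cosine_features(X[:, j], K) for j in range(d)]
    cols = []
    for idx in product(range(K), repeat=d):
        col = np.ones(n)
        for j, k in enumerate(idx):
            col = col * uni[j][:, k]
        cols.append(col)
    return np.column_stack(cols)

# Toy regression: 4 features, but the signal only involves x1 and x3
# (the kind of feature sparsity the penalty can exploit).
n, d, K = 600, 4, 4
X = rng.uniform(0, 1, (n, d))
y = np.sin(2 * np.pi * X[:, 0]) * X[:, 2] + rng.normal(0, 0.2, n)

Phi = tensor_product_features(X, K)          # 4^4 = 256 tensor-product features
model = LassoCV(cv=5).fit(Phi, y)            # penalized sieve fit
print("nonzero coefficients:", int(np.sum(model.coef_ != 0)), "of", Phi.shape[1])
```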
Democratization of AI involves training and deploying machine learning models across heterogeneous and potentially massive environments. Diversity of data opens up a number of possibilities for advancing AI systems, but also introduces pressing concerns such as privacy, security, and equity that require special attention. This work shows that it is theoretically impossible to design a rational learning algorithm that can successfully learn across heterogeneous environments, which we call collective intelligence (CI). By representing learning algorithms as choice correspondences over a hypothesis space, we are able to axiomatize them with essential properties. Unfortunately, the only feasible algorithm compatible with all of the axioms is standard empirical risk minimization (ERM), which learns arbitrarily from a single environment. Our impossibility result reveals informational incomparability between environments as one of the foremost obstacles for researchers designing novel algorithms that learn from multiple environments, and sheds light on prerequisites for success in critical areas of machine learning such as out-of-distribution generalization, federated learning, algorithmic fairness, and multi-modal learning.
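To make the choice-correspondence view concrete, here is a toy example of my own construction (not from the paper): a finite hypothesis space of threshold classifiers and two heterogeneous environments, where standard ERM pools the samples, ignores the environment labels, and returns the set of empirical-risk minimizers.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two heterogeneous "environments" with different optimal thresholds; the
# hypothesis space is a finite set of threshold classifiers h_t(x) = 1{x > t}.
def make_env(shift, n):
    x = rng.normal(shift, 1.0, n)
    y = (x > shift).astype(int)
    return x, y

envs = [make_env(0.0, 500), make_env(2.0, 500)]
hypotheses = np.linspace(-2, 4, 61)

# ERM as a choice correspondence: pool the data from all environments into a
# single sample, compute empirical risk per hypothesis, return all minimizers.
x_pool = np.concatenate([x for x, _ in envs])
y_pool = np.concatenate([y for _, y in envs])
risks = np.array([np.mean((x_pool > t).astype(int) != y_pool) for t in hypotheses])
chosen = hypotheses[risks == risks.min()]
print("ERM-selected thresholds:", np.round(chosen, 2))
```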