Numerous network models have been investigated to gain insights into the origins of fractality. In this work, we introduce two novel network models to better understand the growth mechanism and structural characteristics of fractal networks. The Repulsion Based Fractal Model (RBFM) is built on the well-known Song-Havlin-Makse (SHM) model, but in RBFM repulsion is always present among a specific group of nodes. The model resolves the contradiction between the SHM model and the Hub Attraction Dynamical Growth model by showing that repulsion is the characteristic that induces fractality. The Lattice Small-world Transition Model (LSwTM) is motivated by the fact that repulsion directly influences node distances. Through LSwTM we study the fractal-small-world transition. The model illustrates the transition on a fixed number of nodes and edges using a preferential-attachment-based edge-rewiring process. It shows that a small average distance works against fractal scaling, and it also demonstrates that fractality is not a dichotomous property: a continuous transition can be observed between pure fractal and non-fractal characteristics.
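A minimal sketch can make the rewiring mechanism concrete. The snippet below, written with networkx, starts from a 2D lattice and reattaches edge endpoints in proportion to degree while keeping the node and edge counts fixed; the function name, parameter values, and exact reattachment rule are illustrative assumptions, not the authors' LSwTM specification.

```python
import random
import networkx as nx

def lswtm_rewire(n=400, steps=300, seed=0):
    """Toy lattice-to-small-world transition: fixed node/edge counts,
    degree-preferential edge rewiring (illustrative, not the paper's model)."""
    random.seed(seed)
    side = int(n ** 0.5)
    G = nx.convert_node_labels_to_integers(nx.grid_2d_graph(side, side))
    for _ in range(steps):
        u, v = random.choice(list(G.edges()))
        nodes, degs = zip(*G.degree())
        w = random.choices(nodes, weights=degs)[0]   # degree-preferential target
        if w not in (u, v) and not G.has_edge(u, w):
            G.remove_edge(u, v)
            G.add_edge(u, w)
    # Average distance on the largest component shrinks as rewiring proceeds.
    H = G.subgraph(max(nx.connected_components(G), key=len))
    return nx.average_shortest_path_length(H)

print(lswtm_rewire(steps=0), lswtm_rewire(steps=300))
```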
When used in complex engineered systems, such as communication networks, artificial intelligence (AI) models should be not only as accurate as possible but also well calibrated. A well-calibrated AI model is one that can reliably quantify the uncertainty of its decisions, assigning high confidence levels to decisions that are likely to be correct and low confidence levels to decisions that are likely to be erroneous. This paper investigates the application of conformal prediction as a general framework to obtain AI models that produce decisions with formal calibration guarantees. Conformal prediction transforms probabilistic predictors into set predictors that are guaranteed to contain the correct answer with a probability chosen by the designer. Such formal calibration guarantees hold irrespective of the true, unknown distribution underlying the generation of the variables of interest, and they can be defined in terms of ensemble or time-averaged probabilities. In this paper, conformal prediction is applied for the first time to the design of AI for communication systems, in conjunction with both frequentist and Bayesian learning, focusing on demodulation, modulation classification, and channel prediction.
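To make the set-predictor construction concrete, here is a minimal split conformal prediction sketch for a generic probabilistic classifier; the nonconformity score (one minus the probability of the true label) is a standard choice assumed for illustration, not a detail taken from the paper.

```python
import numpy as np

def split_conformal_sets(probs_cal, y_cal, probs_test, alpha=0.1):
    """Turn a probabilistic classifier into a set predictor whose sets
    contain the true label with probability >= 1 - alpha (marginally).
    probs_*: (n, num_classes) predicted probabilities; y_cal: integer labels."""
    n = len(y_cal)
    # Nonconformity score: one minus the probability assigned to the true label.
    scores = 1.0 - probs_cal[np.arange(n), y_cal]
    # Finite-sample-corrected empirical quantile of the calibration scores.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # The prediction set keeps every label scoring no worse than the threshold.
    return [np.where(1.0 - p <= q)[0] for p in probs_test]
```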
Computing the agreement between two continuous sequences is of great interest in statistics when comparing two instruments or one instrument with a gold standard. The probability of agreement (PA) quantifies the similarity between two variables of interest, and it is useful for accounting for what constitutes a practically important difference. In this article we introduce a generalization of the PA for the treatment of spatial variables. Our proposal makes the PA dependent on the spatial lag. As a consequence, for isotropic stationary and nonstationary spatial processes, we establish the conditions under which the PA decays as a function of the distance lag. Estimation is addressed through a first-order approximation that guarantees the asymptotic normality of the sample version of the PA. The sensitivity of the PA with respect to the covariance parameters is studied for finite sample sizes. The new method is described and illustrated with real data involving autumnal changes in the green chromatic coordinate (Gcc), an index of "greenness" that captures the phenological stage of tree leaves, is associated with carbon flux from ecosystems, and is estimated from repeated images of forest canopies.
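For a stationary, isotropic Gaussian process the lag-dependent PA admits a closed form, since the difference Z(s) - Z(s+h) is Gaussian with variance 2(C(0) - C(h)). The sketch below evaluates it under an exponential covariance; the covariance family and parameter values are assumptions for illustration, not the article's estimates.

```python
import numpy as np
from scipy.stats import norm

def prob_agreement(h, c=1.0, sigma2=1.0, phi=0.5):
    """PA(h) = P(|Z(s) - Z(s+h)| <= c) for a stationary, isotropic Gaussian
    process with exponential covariance C(h) = sigma2 * exp(-phi * h)."""
    var_diff = 2.0 * sigma2 * (1.0 - np.exp(-phi * np.asarray(h)))
    sd = np.sqrt(np.maximum(var_diff, 1e-12))   # guard against h = 0
    return 2.0 * norm.cdf(c / sd) - 1.0

print(prob_agreement([0.1, 1.0, 5.0]))          # PA decays with the distance lag
```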
Due to the adoption of horizontal business models following the globalization of semiconductor manufacturing, the overproduction of integrated circuits (ICs) and the piracy of intellectual properties (IPs) can inflict significant damage on the integrity of the semiconductor supply chain. Logic locking has emerged as a primary design-for-security measure to counter these threats: ICs become fully functional only when unlocked with a secret key. However, Boolean satisfiability (SAT)-based attacks have rendered most locking schemes ineffective, giving rise to numerous defenses and new locking methods that aim for SAT resiliency. This paper provides a unique perspective on SAT attack efficiency based on the conjunctive normal form (CNF) clauses stored in the SAT solver. First, we show how the attack learns new relations between keys in every iteration using distinguishing input patterns and the corresponding oracle responses. These input-output pairs yield new CNF clauses over the unknown key bits that are appended to the SAT solver, leading to an exponential reduction in the number of incorrect key values. Second, we demonstrate that the SAT attack can break any locking scheme with iteration complexity linear in the key size. Moreover, we show how key constraints on point functions affect the SAT attack complexity. We explain why a proper key constraint on AntiSAT effectively reduces the complexity to a single iteration, and why the same constraint brings the attack on CAS-Lock down to linear iteration complexity. Our analysis provides a new perspective on the capability of the SAT attack against the multiplier benchmark c6288, and we suggest new directions for achieving SAT resiliency.
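The iteration structure of the oracle-guided attack can be illustrated without a real SAT solver. The toy sketch below replaces the CNF miter with explicit key enumeration on a two-bit locked function of our own construction, showing how each distinguishing input pattern (DIP) and oracle response prunes the inconsistent keys; it is a conceptual stand-in, not a CNF-based implementation.

```python
from itertools import product

def locked(x, k):
    """Toy 2-bit 'locked circuit' (our own construction, not a benchmark)."""
    return (x[0] ^ k[0]) & (x[1] ^ k[1])

K_STAR = (1, 0)                                  # secret key inside the oracle
keys = set(product((0, 1), repeat=2))            # candidate key space
iterations = 0
while len(keys) > 1:
    # A DIP is an input on which at least two surviving keys disagree.
    dip = next(x for x in product((0, 1), repeat=2)
               if len({locked(x, k) for k in keys}) > 1)
    y = locked(dip, K_STAR)                      # oracle query on the real chip
    # The I/O pair plays the role of newly appended CNF clauses:
    keys = {k for k in keys if locked(dip, k) == y}
    iterations += 1
print(iterations, keys)                          # prints: 2 {(1, 0)}
```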
Existing analyses of neural network training often operate under the unrealistic assumption of an extremely small learning rate. This stands in stark contrast to practical wisdom and empirical studies, such as the work of J. Cohen et al. (ICLR 2021), which exhibit startling new phenomena (the "edge of stability" or "unstable convergence") and potential benefits for generalization in the large-learning-rate regime. Despite a flurry of recent works on this topic, however, the latter effect is still poorly understood. In this paper, we take a step towards understanding genuinely non-convex training dynamics with large learning rates by performing a detailed analysis of gradient descent for simplified models of two-layer neural networks. For these models, we provably establish the edge-of-stability phenomenon and discover a sharp phase transition for the step size, below which the neural network fails to learn "threshold-like" neurons (i.e., neurons with a non-zero first-layer bias). This elucidates one possible mechanism by which the edge of stability can in fact lead to better generalization, as threshold neurons are basic building blocks with a useful inductive bias for many tasks.
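The "stability" in question is the classical step-size threshold of gradient descent, which a one-dimensional quadratic makes explicit. The sketch below illustrates only the threshold eta = 2/lambda, not the paper's two-layer analysis.

```python
# Gradient descent on f(x) = (lam / 2) * x**2 gives x_{t+1} = (1 - eta*lam) * x_t,
# so the iteration is stable iff eta < 2 / lam ("edge of stability" at eta = 2/lam).
lam = 4.0
for eta in (0.45, 0.50, 0.55):         # the edge here is 2 / lam = 0.5
    x = 1.0
    for _ in range(50):
        x -= eta * lam * x             # one gradient step
    print(f"eta = {eta:.2f}: |x_50| = {abs(x):.3g}")
# Below the edge the iterate decays, at the edge it oscillates with constant
# magnitude, above it diverges; in the regime the paper studies, training
# hovers near sharpness ~ 2 / eta instead of diverging.
```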
To increase the adoption of counterfactual explanations in practice, several criteria that such explanations should adhere to have been put forward in the literature. We propose counterfactual explanations using optimization with constraint learning (CE-OCL), a generic and flexible approach that addresses all of these criteria and leaves room for further extensions. Specifically, we discuss how an optimization with constraint learning framework can be leveraged to generate counterfactual explanations, and how the components of this framework map readily onto the criteria. We also propose two novel modeling approaches to address data manifold closeness and diversity, two key criteria for practical counterfactual explanations. We test CE-OCL on several datasets and present our results in a case study. Compared with current state-of-the-art methods, CE-OCL allows for more flexibility and achieves overall superior performance on several evaluation metrics proposed in related work.
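As a hedged illustration of the underlying optimization view (not CE-OCL itself, which additionally handles constraint learning, data manifold closeness, and diversity), a counterfactual for a simple linear classifier can be computed as the nearest point whose prediction flips:

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear classifier: predict class 1 iff w @ x + b >= 0.
w, b = np.array([1.0, -2.0]), 0.5
x0 = np.array([-1.0, 1.0])                     # factual instance, class 0

# Closest point (squared L2) on the other side of the decision boundary.
margin = 1e-3                                  # small margin so the flip is strict
res = minimize(
    fun=lambda x: np.sum((x - x0) ** 2),
    x0=x0,
    constraints=[{"type": "ineq", "fun": lambda x: w @ x + b - margin}],
)
print(res.x, w @ res.x + b >= 0)               # counterfactual, prediction flipped
```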
Linear computation broadcast (LCBC) refers to a setting with $d$-dimensional data stored at a central server, where $K$ users, each with some prior linear side-information, wish to retrieve various linear combinations of the data. The goal is to determine the minimum amount of information that must be broadcast to satisfy all the users. The reciprocal of the optimal broadcast cost is the capacity of LCBC. The capacity is known for up to $K=3$ users. Since LCBC includes index coding as a special case, large-$K$ settings of LCBC are at least as hard as the index coding problem. Instead of the general setting (all instances), this work focuses on the generic setting (almost all instances) and shows that the generic capacity of the symmetric LCBC (where every user has $m'$ dimensions of side-information and $m$ dimensions of demand) for a large number of users ($K>d$ suffices) is $C_g=1/\Delta_g$, where $\Delta_g=\min\left\{\max\{0,d-m'\}, \frac{dm}{m+m'}\right\}$ is the broadcast cost that is both achievable and, asymptotically almost surely for large $n$, unbeatable among all LCBC instances with the given parameters $p,K,d,m,m'$. Relative to baseline schemes of random coding or separate transmissions, $C_g$ shows an extremal gain by a factor of $K$ as a function of the number of users, and by a factor of $\approx d/4$ as a function of the data dimension, when optimized over the remaining parameters. For an arbitrary number of users, the generic capacity of the symmetric LCBC is characterized within a factor of $2$.
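A quick numeric check of the generic broadcast cost may help; the parameter values below are arbitrary illustrations, not cases studied in the paper.

```python
# Generic broadcast cost from the abstract:
#   Delta_g = min( max(0, d - m'), d * m / (m + m') ),   C_g = 1 / Delta_g.
def delta_g(d, m, m_prime):
    return min(max(0, d - m_prime), d * m / (m + m_prime))

# Hypothetical parameters: d = 12 data dimensions, each user demands m = 2
# dimensions and holds m' = 4 dimensions of side-information.
d, m, mp = 12, 2, 4
print(delta_g(d, m, mp), 1 / delta_g(d, m, mp))   # Delta_g = 4.0, C_g = 0.25
```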
Prior work has identified a resilient phenomenon that threatens the performance of human-AI decision-making teams: overreliance, whereby people agree with an AI even when it is incorrect. Surprisingly, overreliance is not reduced when the AI produces explanations for its predictions, compared to providing predictions alone. Some have argued that overreliance results from cognitive biases or uncalibrated trust, attributing it to an inevitability of human cognition. By contrast, our paper argues that people strategically choose whether or not to engage with an AI explanation, and we demonstrate empirically that there are scenarios where AI explanations reduce overreliance. To achieve this, we formalize this strategic choice in a cost-benefit framework, where the costs and benefits of engaging with the task are weighed against the costs and benefits of relying on the AI. We manipulate the costs and benefits in a maze task, where participants collaborate with a simulated AI to find the exit of a maze. Through 5 studies (N = 731), we find that costs such as task difficulty (Study 1) and explanation difficulty (Studies 2 and 3), and benefits such as monetary compensation (Study 4), affect overreliance. Finally, Study 5 adapts the Cognitive Effort Discounting paradigm to quantify the utility of different explanations, providing further support for our framework. Our results suggest that some of the null effects found in the literature could be due in part to explanations not sufficiently reducing the costs of verifying the AI's prediction.
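A toy version of the cost-benefit account can be written as a one-line decision rule; all quantities below are illustrative assumptions (including the simplification that verification always yields the correct answer), not the paper's fitted values.

```python
# A decision-maker engages with the AI's explanation only when verification
# pays off in expectation relative to blind reliance.
def engages(reward, p_ai_correct, verify_cost):
    rely_utility = p_ai_correct * reward     # accept the AI's answer unchecked
    engage_utility = reward - verify_cost    # verify (assumed perfect), then answer
    return engage_utility > rely_utility

print(engages(reward=1.0, p_ai_correct=0.9, verify_cost=0.05))  # True: engage
print(engages(reward=1.0, p_ai_correct=0.9, verify_cost=0.30))  # False: overrely
```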
Reducing the global burden of stillbirths is important for improving child and maternal health. Of particular interest is understanding patterns in the timing of stillbirths -- that is, whether they occur in the intrapartum or antepartum period -- because stillbirths that occur intrapartum are largely preventable. However, the availability of data on the timing of stillbirths varies widely across the world, with low- and middle-income countries generally having few reliable observations. In this paper we develop a Bayesian penalized splines regression framework to estimate the proportion of stillbirths that are intrapartum for all countries worldwide. The model accounts for known relationships with neonatal mortality, pools information across geographic regions, incorporates different error structures based on data attributes, and allows for data-driven temporal trends. A weighting procedure is proposed to account for unrepresentative subnational data. Results suggest that the intrapartum proportion is generally decreasing over time, but progress is slower in some regions, particularly Sub-Saharan Africa.
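For intuition about the penalized splines component, the following is a minimal frequentist analogue (a P-spline fit with a second-order difference penalty); the paper's Bayesian model additionally includes hierarchical pooling, data-quality error terms, and the weighting procedure, none of which are sketched here.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_knots=20, degree=3, lam=1.0):
    """Penalized B-spline regression: ridge penalty on second differences
    of the spline coefficients (the classic P-spline smoother)."""
    knots = np.r_[[x.min()] * degree,
                  np.linspace(x.min(), x.max(), n_knots),
                  [x.max()] * degree]
    B = BSpline.design_matrix(x, knots, degree).toarray()
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)   # second-difference operator
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ coef

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(6 * x) + 0.1 * rng.standard_normal(200)
smooth = pspline_fit(x, y, lam=5.0)              # larger lam -> smoother trend
```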
Game theory has by now found numerous applications in various fields, including economics, industry, jurisprudence, and artificial intelligence, where each player cares only about its own interest, whether in a noncooperative or cooperative manner, but without obvious malice toward other players. However, in many practical applications, such as poker, chess, pursuit-evasion, drug interdiction, coast guard operations, cyber-security, and national defense, players often take apparently adversarial stances; that is, the selfish actions of each player inevitably or intentionally inflict loss on, or wreak havoc upon, other players. Along this line, this paper provides a systematic survey of three main game models widely employed in adversarial games, namely zero-sum normal-form and extensive-form games, Stackelberg (security) games, and zero-sum differential games, from an array of perspectives: basic knowledge of the game models, (approximate) equilibrium concepts, problem classifications, research frontiers, (approximate) optimal strategy seeking techniques, prevailing algorithms, and practical applications. Finally, promising directions for future research on relevant adversarial games are discussed.
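As one example of the strategy-seeking techniques such surveys cover, the optimal mixed strategy of a zero-sum normal-form game can be computed by linear programming; the transformation below is the textbook construction, sketched with scipy, not a method attributed to this survey.

```python
import numpy as np
from scipy.optimize import linprog

def maximin_strategy(A):
    """Row player's optimal mixed strategy in a zero-sum matrix game with
    payoff matrix A (row player maximizes), via the textbook LP transform."""
    m, n = A.shape
    shift = A.min()
    B = A - shift + 1.0                # positive payoffs so the value is > 0
    # max_x min_j x @ B[:, j] over the simplex  <=>
    # min sum(u) s.t. B.T @ u >= 1, u >= 0, with x = u/sum(u), v = 1/sum(u).
    res = linprog(c=np.ones(m), A_ub=-B.T, b_ub=-np.ones(n), bounds=(0, None))
    x = res.x / res.x.sum()
    value = 1.0 / res.x.sum() + shift - 1.0
    return x, value

# Matching pennies: uniform mixing is optimal and the game value is 0.
print(maximin_strategy(np.array([[1.0, -1.0], [-1.0, 1.0]])))
```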
Structural data is ubiquitous in Web applications, appearing as social networks in social media, citation networks on academic websites, and thread data in online forums. Due to its complex topology, it is difficult to process and make use of the rich information within such data. Graph Neural Networks (GNNs) have shown great advantages in learning representations for structural data. However, the non-transparency of deep learning models makes it non-trivial to explain and interpret the predictions made by GNNs. Meanwhile, it is also a big challenge to evaluate GNN explanations, since in many cases the ground-truth explanations are unavailable. In this paper, we draw on insights from Counterfactual and Factual (CF^2) reasoning in causal inference theory to solve both the learning and evaluation problems in explainable GNNs. For generating explanations, we propose a model-agnostic framework that formulates an optimization problem based on both of the two causal perspectives; this distinguishes CF^2 from previous explainable GNNs that consider only one of them. Another contribution of the work is the evaluation of GNN explanations: to quantitatively evaluate the generated explanations without requiring ground truth, we design metrics based on Counterfactual and Factual reasoning that assess the necessity and sufficiency of the explanations. Experiments show that, whether or not ground-truth explanations are available, CF^2 generates better explanations than previous state-of-the-art methods on real-world datasets. Moreover, statistical analysis justifies the correlation between performance on ground-truth evaluation and our proposed metrics.
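A toy sketch can illustrate the two causal checks behind such metrics. Everything below (the stand-in "GNN", the graph, and the edge mask) is a hypothetical placeholder meant to show the logic of sufficiency (factual) and necessity (counterfactual); it is not the paper's architecture or its exact metric definitions.

```python
import torch

# Placeholder 'GNN': one round of message passing plus a scalar readout.
def model(x, edge_index):
    agg = torch.zeros_like(x)
    src, dst = edge_index
    agg.index_add_(0, dst, x[src])               # sum incoming messages
    return (x + agg).sum(-1, keepdim=True)       # scalar 'logit' per node

x = torch.ones(4, 2)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])  # shape (2, E)
mask = torch.tensor([True, True, False, False])  # candidate explanation edges

full = model(x, edge_index)
factual = model(x, edge_index[:, mask])          # keep only the explanation
counterfactual = model(x, edge_index[:, ~mask])  # remove the explanation
# Sufficiency: the explanation alone preserves the output.
# Necessity: removing the explanation changes the output.
print(torch.allclose(factual, full), not torch.allclose(counterfactual, full))
```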