Pairwise metrics are often employed to estimate statistical dependencies between brain regions; however, they do not capture higher-order information interactions. It is critical to explore higher-order interactions that go beyond pairs of brain areas in order to better understand information processing in the human brain. To address this problem, we applied multivariate mutual information measures, specifically Total Correlation and Dual Total Correlation, to reveal higher-order information in the brain. In this paper, we estimate these metrics using matrix-based R\'enyi's entropy, which offers a direct and easily interpretable approach that does not require explicit assumptions about the probability distribution functions of the multivariate time series. We applied these metrics to resting-state fMRI data to examine higher-order interactions in the brain. Our results show that the amount of higher-order information captured increases gradually as the interaction order increases. Furthermore, we observed a gradual increase in the correlation between Total Correlation and Dual Total Correlation as the interaction order increased. In addition, the predominance of Dual Total Correlation over Total Correlation values indicates that the human brain exhibits synergy dominance during the resting state.
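For reference, the standard Shannon-form definitions of the two measures named above (the paper estimates the entropies with the matrix-based R\'enyi functional rather than in this form) are
\[
\mathrm{TC}(X_1,\dots,X_n) \;=\; \sum_{i=1}^{n} H(X_i) \;-\; H(X_1,\dots,X_n),
\qquad
\mathrm{DTC}(X_1,\dots,X_n) \;=\; H(X_1,\dots,X_n) \;-\; \sum_{i=1}^{n} H\!\left(X_i \mid X_{\setminus i}\right),
\]
where $X_{\setminus i}$ denotes all variables except $X_i$; both quantities vanish exactly when the variables are mutually independent.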
The concept of image similarity is ambiguous: images can be similar in one context and not in another. This ambiguity motivates the creation of metrics for specific contexts. This work explores the ability of deep perceptual similarity (DPS) metrics to adapt to a given context. DPS metrics use the deep features of neural networks to compare images. These metrics have been successful on datasets that leverage average human perception in limited settings, but the question remains whether they can be adapted to specific similarity contexts. No single metric can suit all similarity contexts, and previous rule-based metrics are labor-intensive to rewrite for new contexts. DPS metrics, on the other hand, use neural networks that can be retrained for each context. However, retraining a network requires resources and may degrade performance on previous tasks. This work examines the adaptability of DPS metrics by training ImageNet-pretrained CNNs to measure similarity according to given contexts. Contexts are created by randomly ranking six image distortions; distortions later in the ranking are considered more disruptive to similarity when applied to an image in that context. This also gives insight into whether the pretrained features capture different similarity contexts. The adapted metrics are evaluated on a perceptual similarity dataset to assess whether adapting to a ranking affects their prior performance. The findings show that DPS metrics can be adapted with high performance. While the adapted metrics have difficulties with the same contexts as the baselines, performance is improved in 99% of cases. Finally, it is shown that the adaptation is not significantly detrimental to prior performance on perceptual similarity. The implementation of this work is available online: https://github.com/LTU-Machine-Learning/Analysis-of-Deep-Perceptual-Loss-Networks
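As a concrete illustration of the kind of metric discussed above, the following minimal sketch compares two images via distances between channel-normalized feature maps of an ImageNet-pretrained CNN. The backbone (VGG-16) and the chosen layers are hypothetical examples, not the configuration used in this work.

```python
# Minimal sketch (not this work's implementation) of a deep perceptual
# similarity score: compare two images via squared differences between
# channel-normalized feature maps of an ImageNet-pretrained CNN.
import torch
import torchvision.models as models
import torchvision.transforms as T

# Hypothetical backbone and layer choices for illustration only.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
layer_ids = {3, 8, 15, 22}  # outputs after selected ReLU layers

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def deep_features(img):
    x = preprocess(img).unsqueeze(0)
    feats = []
    for i, layer in enumerate(backbone):
        x = layer(x)
        if i in layer_ids:
            # Normalize each spatial feature vector across channels.
            feats.append(x / (x.norm(dim=1, keepdim=True) + 1e-8))
    return feats

@torch.no_grad()
def dps_distance(img_a, img_b):
    fa, fb = deep_features(img_a), deep_features(img_b)
    # Mean squared difference per layer, summed over layers (lower = more similar).
    return sum(((a - b) ** 2).mean().item() for a, b in zip(fa, fb))

# Usage: dps_distance(PIL.Image.open("a.png"), PIL.Image.open("b.png"))
```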
We consider an atomic congestion game in which each player $i$ either participates in the game with an exogenous and known probability $p_{i}\in(0,1]$, independently of everybody else, or stays out and incurs no cost. We compute the parameterized price of anarchy to characterize the impact of demand uncertainty on the efficiency of selfish behavior, considering two different notions of a social planner. A prophet planner knows the realization of the random participation in the game; an ordinary planner does not. As a consequence, a prophet planner can compute an adaptive social optimum that selects different solutions depending on the players that turn out to be active, whereas an ordinary planner faces the same uncertainty as the players and can only compute social optima with respect to the player participation distribution. For both planners, we derive the precise price of anarchy, which arises from an optimization problem parameterized by the maximum participation probability $q=\max_{i} p_{i}$. For the case of affine costs, we provide analytic expressions for the ordinary and prophet prices of anarchy as functions of $q$.
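Schematically (a hedged restatement in notation not taken from the paper), if $C(x)$ denotes the social cost of a strategy profile $x$, $x^{\mathrm{eq}}$ a worst equilibrium, and the expectation is over the random set of active players, the two benchmarks differ only in where the optimization sits relative to the expectation:
\[
\mathrm{PoA}_{\mathrm{ordinary}}(q) \;=\; \sup \frac{\mathbb{E}\!\left[C(x^{\mathrm{eq}})\right]}{\min_{x}\,\mathbb{E}\!\left[C(x)\right]},
\qquad
\mathrm{PoA}_{\mathrm{prophet}}(q) \;=\; \sup \frac{\mathbb{E}\!\left[C(x^{\mathrm{eq}})\right]}{\mathbb{E}\!\left[\min_{x} C(x)\right]},
\]
with the suprema taken over instances whose maximum participation probability is at most $q$.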
We explore the analytic properties of the density function $ h(x;\gamma,\alpha) $, $ x \in (0,\infty) $, $ \gamma > 0 $, $ 0 < \alpha < 1 $, which arises from the domain of attraction problem for a statistic interpolating between the supremum and the sum of random variables. The parameter $ \alpha $ controls the interpolation between these two cases, while $ \gamma $ parametrises the type of extreme value distribution from which the underlying random variables are drawn. For $ \alpha = 0 $ the Fr\'echet density applies, whereas for $ \alpha = 1 $ we identify a particular Fox H-function; Fox H-functions are a natural extension of hypergeometric functions into the realm of fractional calculus. In contrast, for intermediate $ \alpha $ an entirely new function appears, which is not among the extensions of the hypergeometric function considered to date. We derive series, integral and continued fraction representations of this latter function.
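For orientation, the Fr\'echet density referred to in the $ \alpha = 0 $ case is the standard one with shape parameter $ \gamma $ (up to the paper's normalization conventions):
\[
f_{\gamma}(x) \;=\; \gamma\, x^{-1-\gamma}\, e^{-x^{-\gamma}}, \qquad x > 0 .
\]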
Modern cellular networks are multi-cell and use universal frequency reuse to maximize spectral efficiency, which results in high inter-cell interference. The problem is growing as cellular networks become three-dimensional with the adoption of unmanned aerial vehicles (UAVs), because the line-of-sight channels in UAV communications rapidly increase both the strength and the number of interference links. Existing interference management solutions require each transmitter to know the channel information of the interfering signals, rendering them impractical due to excessive signaling overhead. In this paper, we propose leveraging deep reinforcement learning for interference management to tackle this shortcoming. In particular, we show that interference can still be effectively mitigated even without knowing its channel information. We then discuss novel approaches to scale the algorithms with linear or sublinear complexity and to decentralize them using multi-agent reinforcement learning. By harnessing interference, the proposed solutions enable the continued growth of civilian UAVs.
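The following toy sketch (everything in it is assumed for illustration: the two-link channel model, the reward, and the single-state Q-learning agent) conveys the basic idea that a transmitter can learn a power-control policy purely from measured SINR-based rewards, without access to the interfering channel's state. It is not the algorithm proposed in the paper.

```python
# Toy illustration: learn a transmit-power choice from SINR rewards only,
# without knowing the interfering channel gains (a bandit-style simplification
# of reinforcement-learning-based interference management).
import numpy as np

rng = np.random.default_rng(0)
powers = np.array([0.1, 0.5, 1.0])   # candidate transmit powers (W), assumed
n_actions = len(powers)
noise = 1e-3

def measured_reward(p_own, p_other):
    # Channel gains are random and unknown to the agent; only the resulting
    # SINR-based reward is observed.
    g_direct, g_interf = rng.exponential(1.0), rng.exponential(0.3)
    sinr = g_direct * p_own / (g_interf * p_other + noise)
    return np.log2(1.0 + sinr) - 0.5 * p_own   # rate minus a power penalty

# Single-state Q-learning (epsilon-greedy).
q = np.zeros(n_actions)
eps, alpha = 0.1, 0.05
for t in range(5000):
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q))
    p_other = powers[rng.integers(n_actions)]   # the other link acts arbitrarily
    r = measured_reward(powers[a], p_other)
    q[a] += alpha * (r - q[a])

print("learned preference over power levels:", q)
```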
Real-time perception and motion planning are two crucial tasks for autonomous driving. While much research has focused on improving the performance of perception and motion planning individually, it remains unclear how a perception error may adversely impact the motion planning results. In this work, we propose a joint simulation framework with LiDAR-based perception and motion planning for real-time automated driving. Taking the sensor input from the CARLA simulator with additive noise, a LiDAR perception system is designed to detect and track all surrounding vehicles and to provide precise orientation and velocity information. Next, we introduce a new collision-bound representation that reduces the communication cost between the perception module and the motion planner. A novel collision checking algorithm is implemented using line intersection checking, which is more efficient over long distance ranges than the traditional occupancy-grid method. We evaluate the joint simulation framework in CARLA for urban driving scenarios. Experiments show that our proposed automated driving system can execute at 25 Hz, which meets the real-time requirement. The LiDAR perception system has high accuracy within 20 meters when evaluated against the ground truth, and the motion planner consistently maintains a safe distance in the CARLA urban driving scenarios.
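For readers unfamiliar with line-intersection collision checking, the sketch below shows the kind of geometric primitive such a checker could rely on (this is the textbook orientation-test algorithm, not the paper's implementation): testing whether a planned path edge crosses an obstacle boundary edge.

```python
# Minimal sketch of segment-segment intersection via orientation tests,
# the kind of primitive a line-intersection collision check can use in
# place of an occupancy grid.
def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    val = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return 0 if abs(val) < 1e-12 else (1 if val > 0 else -1)

def on_segment(p, q, r):
    """True if r lies on segment pq (assuming p, q, r are collinear)."""
    return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and
            min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

def segments_intersect(p1, p2, p3, p4):
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    if d1 != d2 and d3 != d4:
        return True
    # Collinear special cases.
    return ((d1 == 0 and on_segment(p3, p4, p1)) or
            (d2 == 0 and on_segment(p3, p4, p2)) or
            (d3 == 0 and on_segment(p1, p2, p3)) or
            (d4 == 0 and on_segment(p1, p2, p4)))

# Example: does a planned path edge cross an obstacle boundary edge?
print(segments_intersect((0, 0), (4, 4), (0, 4), (4, 0)))  # True
```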
Back-support exoskeletons are commonly used in the workplace to reduce the risk of low back pain for workers performing demanding activities. However, for tasks other than lifting, the potential of back-support exoskeletons has not been exploited extensively. This work focuses on the use of an active back-support exoskeleton to assist carrying. Two control strategies are designed that modulate the exoskeleton torques to comply with the assistance requirements of the task. In particular, two gait-phase detection frameworks are exploited to adapt the assistance according to the legs' motion. The two strategies are assessed through an experimental analysis on ten subjects, who performed the carrying task with and without exoskeleton assistance. The results demonstrate the potential of the presented controllers to assist the task without hindering gait while improving the usability experienced by users. Moreover, the exoskeleton assistance significantly reduces the lumbar load associated with the task, demonstrating its promise for risk mitigation in the workplace.
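The sketch below is purely illustrative of the general idea of gait-phase-dependent torque modulation; the signal, the peak-based phase estimator, and the torque profile are all invented for the example and do not correspond to either of the controllers studied here.

```python
# Illustrative only: modulate an assistive torque as a function of a gait
# phase estimated from a (hypothetical) periodic thigh-angle signal.
import numpy as np

def estimate_gait_phase(thigh_angle):
    """Return a phase in [0, 1) per sample, resetting at detected peaks."""
    peaks = [i for i in range(1, len(thigh_angle) - 1)
             if thigh_angle[i] > thigh_angle[i - 1]
             and thigh_angle[i] >= thigh_angle[i + 1]]
    phase = np.zeros(len(thigh_angle))
    for start, end in zip(peaks[:-1], peaks[1:]):
        phase[start:end] = np.linspace(0.0, 1.0, end - start, endpoint=False)
    return phase

def assistive_torque(phase, peak_torque=20.0):
    """Smooth torque profile peaking mid-cycle (hypothetical shape, in N*m)."""
    return peak_torque * np.sin(np.pi * phase) ** 2

# Example with a synthetic 1 Hz gait sampled at 100 Hz.
t = np.arange(0, 5, 0.01)
thigh = np.sin(2 * np.pi * 1.0 * t)
tau = assistive_torque(estimate_gait_phase(thigh))
print("peak commanded torque:", tau.max())
```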
Measuring the similarity of neural networks has become an important research topic for understanding and exploiting differences between neural networks. While there are several perspectives on how neural networks can be similar, we specifically focus on two complementary perspectives: (i) representational similarity, which considers how the activations of intermediate neural layers differ, and (ii) functional similarity, which considers how models differ in their outputs. In this survey, we provide a comprehensive overview of these two families of similarity measures for neural network models. In addition to providing detailed descriptions of existing measures, we summarize and discuss results on the properties and relationships of these measures, and point to open research problems. Further, we provide practical recommendations that can guide researchers as well as practitioners in applying the measures. We hope our work lays a foundation for our community to engage in more systematic research on the properties, nature and applicability of similarity measures for neural network models.
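As a concrete (and non-canonical) illustration of the two families, the sketch below computes one representative measure from each: linear centered kernel alignment (CKA) on activation matrices for representational similarity, and prediction agreement for functional similarity. The random inputs merely stand in for real activations and logits.

```python
# One assumed example per family: linear CKA (representational) and
# prediction agreement (functional).
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between activation matrices.

    X: (n_samples, d1) activations of model A at some layer.
    Y: (n_samples, d2) activations of model B at some layer.
    """
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

def prediction_agreement(logits_a, logits_b):
    """Fraction of inputs on which two models predict the same class."""
    return float(np.mean(logits_a.argmax(axis=1) == logits_b.argmax(axis=1)))

# Toy usage with random data standing in for real activations/logits.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(100, 64)), rng.normal(size=(100, 32))
print(linear_cka(X, Y))
print(prediction_agreement(rng.normal(size=(100, 10)), rng.normal(size=(100, 10))))
```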
A core assumption of explainable AI systems is that explanations change what users know, thereby enabling them to act within their complex socio-technical environments. Despite the centrality of action, explanations are often organized and evaluated based on technical aspects. Prior work varies widely in the connections it traces between the information provided in explanations and the resulting user actions. An important first step in centering action in evaluations is understanding what the XAI community collectively recognizes as the range of information that explanations can present and the actions associated with that information. In this paper, we present a framework that maps prior work in terms of the information presented in explanations and the user actions it supports, and we discuss the gaps we uncovered concerning the information presented to users.
The fidelity-based smooth min-relative entropy is a distinguishability measure that has appeared in a variety of contexts in prior work on quantum information, including resource theories such as thermodynamics and coherence. Here we provide a comprehensive study of this quantity. First, we prove that it satisfies several basic properties, including the data-processing inequality. We also establish connections between the fidelity-based smooth min-relative entropy and other widely used information-theoretic quantities, including the smooth min-relative entropy and the smooth sandwiched R\'enyi relative entropy, the latter of which includes the sandwiched R\'enyi relative entropy and the smooth max-relative entropy as special cases. We then use these connections to establish the second-order asymptotics of the fidelity-based smooth min-relative entropy and all smooth sandwiched R\'enyi relative entropies, finding that the first-order term is the quantum relative entropy and the second-order term involves the quantum relative entropy variance. Utilizing the derived properties, we also show how the fidelity-based smooth min-relative entropy provides one-shot bounds for operational tasks in general resource theories in which the target state is mixed, with randomness distillation as a particular example. These observations then lead to second-order expansions of the upper bounds on distillable randomness, as well as the precise second-order asymptotics of the distillable randomness of particular classical-quantum states. Finally, we establish semi-definite programs for the smooth max-relative entropy and the smooth conditional min-entropy, as well as a bilinear program for the fidelity-based smooth min-relative entropy, which we subsequently use to explore the tightness of a bound relating the last of these quantities to the first.
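As a hedged point of reference (using standard notation rather than the paper's precise conventions), the building blocks are the fidelity and the non-smoothed min-relative entropy,
\[
F(\rho,\sigma) \;=\; \bigl\| \sqrt{\rho}\,\sqrt{\sigma} \bigr\|_{1}^{2},
\qquad
D_{\min}(\rho\|\sigma) \;=\; -\log F(\rho,\sigma),
\]
with the smooth, fidelity-based variant obtained, roughly speaking, by optimizing $D_{\min}(\tilde{\rho}\|\sigma)$ over states $\tilde{\rho}$ within distance $\varepsilon$ of $\rho$; the exact choice of ball and normalization follows the paper's definitions.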
With the rapid growth of knowledge bases (KBs), question answering over knowledge bases (KBQA) has drawn considerable attention in recent years. Most existing KBQA methods follow the so-called encoder-compare framework: they map the question and the KB facts to a common embedding space, in which the similarity between the question vector and the fact vectors can be conveniently computed. This, however, inevitably loses the original word-level interaction information. To preserve more of the original information, we propose an attentive recurrent neural network with a similarity-matrix-based convolutional neural network (AR-SMCNN), which captures comprehensive hierarchical information by exploiting the complementary strengths of RNNs and CNNs. We use the RNN to capture semantic-level correlation through its sequential modeling nature, together with an attention mechanism that keeps track of entities and relations simultaneously. Meanwhile, we use a similarity-matrix-based CNN with two-directional pooling to extract literal-level word-interaction matching, exploiting the CNN's strength in modeling spatial correlations. Moreover, we develop a new heuristic extension method for entity detection that significantly decreases the effect of noise. Our method outperforms the state of the art on the SimpleQuestions benchmark in both accuracy and efficiency.
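To make the similarity-matrix idea concrete, the hedged sketch below (not the authors' AR-SMCNN; the layer sizes and pooling choices are assumptions) builds a word-by-word cosine-similarity matrix between a question and a candidate fact, applies a small CNN, and pools along both directions to produce a matching score.

```python
# Hedged sketch of a similarity-matrix CNN with two-directional pooling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityMatrixCNN(nn.Module):
    def __init__(self, channels=8):
        super().__init__()
        self.conv = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.out = nn.Linear(2 * channels, 1)

    def forward(self, q_emb, f_emb):
        # q_emb: (Lq, d) question word embeddings; f_emb: (Lf, d) fact embeddings.
        q = F.normalize(q_emb, dim=-1)
        f = F.normalize(f_emb, dim=-1)
        sim = (q @ f.t()).unsqueeze(0).unsqueeze(0)        # (1, 1, Lq, Lf)
        h = F.relu(self.conv(sim))                         # (1, C, Lq, Lf)
        row_pool = h.max(dim=3).values.mean(dim=2)         # pool over fact words
        col_pool = h.max(dim=2).values.mean(dim=2)         # pool over question words
        return self.out(torch.cat([row_pool, col_pool], dim=1))  # matching score

# Toy usage with random embeddings standing in for real word vectors.
model = SimilarityMatrixCNN()
score = model(torch.randn(6, 50), torch.randn(4, 50))
print(score.item())
```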