There is fierce competition between two-sided mobility platforms (e.g., Uber and Lyft), fueled by massive subsidies, yet the underlying dynamics and interactions between the competing platforms are largely unknown. These platforms rely on cross-side network effects to grow: they need to attract agents from both sides to kick off, since travellers are needed to attract drivers and drivers are needed to attract travellers. We use our coevolutionary model, featuring S-shaped learning curves, to simulate the day-to-day dynamics of the ride-sourcing market at the microscopic level. We run three scenarios to illustrate the possible equilibria in the market. Our results underline how the correlation inside the ride-sourcing nest of the agents' choice set significantly affects the platforms' market shares. While late entry to the market decreases the chance of platform success and may result in a "winner-takes-all" outcome, heavy subsidies can keep the new platform in competition, giving rise to a "market sharing" regime.
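The S-shaped learning curves the model attributes to agents can be illustrated with a standard logistic function. This is a minimal sketch for intuition only; the steepness `k` and midpoint `t0` are hypothetical parameters, not those of the paper's coevolutionary model:

```python
import math

# Illustrative S-shaped (logistic) learning curve: an agent's propensity to
# adopt a platform rises slowly at first, then steeply, then saturates.
# k (steepness) and t0 (midpoint) are hypothetical parameters.
def s_curve(experience, k=1.0, t0=5.0):
    return 1.0 / (1.0 + math.exp(-k * (experience - t0)))
```

At the midpoint the curve crosses 0.5, below it adoption is rare, and well above it adoption saturates near 1, which is what makes early agent acquisition (and hence subsidies) so decisive in such models.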
Autism Spectrum Disorder (ASD) is characterized by challenges in social communication and restricted, repetitive patterns of behaviour, with motor abnormalities gaining traction for early detection. However, kinematic analysis in ASD is limited, often lacking robust validation and relying on hand-crafted features for single tasks, leading to inconsistencies across studies. End-to-end models have therefore emerged as promising methods for overcoming the need for feature engineering. Our aim is to assess both approaches across various kinematic tasks to measure the efficacy of commonly used features in ASD assessment, while comparing them to end-to-end models. Specifically, we developed a virtual reality (VR) environment with multiple motor tasks and trained models using both classification approaches. We prioritized a reliable validation framework with repeated cross-validation. Our comparative analysis revealed that hand-crafted features outperformed our deep learning approach on specific tasks, achieving a state-of-the-art area under the curve (AUC) of 0.90$\pm$0.06. Conversely, end-to-end models provided more consistent results with less variability across all VR tasks, demonstrating domain generalization and reliability, with a maximum task AUC of 0.89$\pm$0.06. These findings show that end-to-end models enable less variable and context-independent ASD assessments without requiring domain knowledge or task specificity. However, they also confirm the effectiveness of hand-crafted features in task-specific scenarios.
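The repeated cross-validation scheme emphasized above can be sketched as follows. The fold count, number of repeats, and seed are illustrative choices, not the study's exact protocol:

```python
import random

# Minimal sketch of repeated k-fold cross-validation: k-fold splitting is
# repeated R times with fresh shuffles, so performance (e.g. AUC) can be
# reported as mean +/- std over all k * R folds, reducing split-dependent
# variance. Assumes n_samples is divisible by k for simplicity.
def repeated_kfold(n_samples, k=5, repeats=10, seed=0):
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n_samples))
        rng.shuffle(idx)
        fold = n_samples // k
        for f in range(k):
            test = idx[f * fold:(f + 1) * fold]
            train = idx[:f * fold] + idx[(f + 1) * fold:]
            yield train, test
```

Averaging a metric over all yielded folds is what allows reporting results in the `mean$\pm$std` form used above.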
Automation can transform productivity in research activities that use liquid handling, such as organic synthesis, but it has made less impact in materials laboratories, which require sample preparation steps and a range of solid-state characterization techniques. For example, powder X-ray diffraction (PXRD) is a key method in materials and pharmaceutical chemistry, but its end-to-end automation is challenging because it involves solid powder handling and sample processing. Here we present a fully autonomous solid-state workflow for PXRD experiments that can match or even surpass manual data quality. The workflow involves 12 steps performed by a team of three multipurpose robots, illustrating the power of flexible, modular automation to integrate complex, multitask laboratories.
Open Radio Access Network (RAN) was introduced recently to incorporate intelligence and openness into the upcoming generation of RAN. Open RAN offers standardized interfaces and the capacity to accommodate network applications from external vendors through extensible applications (xApps), which enhance network management flexibility. The Near-Real-Time RAN Intelligent Controller (Near-RT RIC) employs specialized and intelligent xApps to achieve time-critical optimization objectives, but conflicts may arise when different vendors' xApps modify the same parameters or indirectly affect each other's performance. A standardized Conflict Management System (CMS) is absent from most popular Open RAN architectures, including the most prominent one, that of the O-RAN Alliance. To address this, we propose a CMS with independent controllers for conflict detection and mitigation between xApps in the Near-RT RIC. We employ cooperative bargaining game theory, including the Nash Social Welfare Function (NSWF) and the Equal Gains (EG) solution, to find optimal configurations for conflicting parameters. Experimental results demonstrate the effectiveness of the proposed Conflict Management Controller (CMC) in balancing conflicting parameters and mitigating adverse impacts in the Near-RT RIC in a theoretical example scenario.
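The Nash Social Welfare criterion mentioned above can be sketched for a single conflicting parameter as follows. The xApp utility functions and disagreement points are hypothetical illustrations, not taken from the paper:

```python
# Illustrative sketch (not the paper's implementation): choosing a value for one
# conflicting parameter by maximizing the Nash Social Welfare, i.e. the product
# of each xApp's utility gain over its disagreement point.

def nash_social_welfare(candidates, utilities, disagreement):
    """Return the candidate x maximizing prod_i (u_i(x) - d_i)."""
    best, best_val = None, float("-inf")
    for x in candidates:
        gains = [u(x) - d for u, d in zip(utilities, disagreement)]
        if any(g <= 0 for g in gains):  # infeasible: some xApp worse off
            continue
        val = 1.0
        for g in gains:
            val *= g
        if val > best_val:
            best, best_val = x, val
    return best

# Two hypothetical xApps tuning a transmit-power parameter: one favors
# throughput (increasing in power), the other energy saving (decreasing).
throughput = lambda p: p / 10.0
energy = lambda p: 1.0 - p / 12.0
cfg = nash_social_welfare(range(1, 11), [throughput, energy], [0.0, 0.0])
```

The product form is what makes the bargaining solution balance the two xApps: an extreme setting that zeroes out one side's gain scores worse than a compromise, which is the behavior a conflict mitigation controller wants.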
As a fundamental information fusion approach, arithmetic average (AA) fusion has recently been investigated for the fusion of various random finite set (RFS) filters in the context of multi-sensor multi-target tracking. Extending the ordinary density-AA fusion to RFS distributions is not straightforward, as the form of the fused multi-target density must be preserved. In this work, we first propose a statistical concept, probability hypothesis density (PHD) consistency, and explain how it can be achieved by PHD-AA fusion and lead to more accurate and robust detection and localization of the present targets. This provides a theoretically sound and practically meaningful rationale for performing inter-filter PHD AA-fusion/consensus while preserving the form of the fusing RFS filter. We then derive and analyze the proper AA fusion formulations for most existing unlabeled and labeled RFS filters based on (labeled) PHD-AA consistency. These derivations are theoretically unified and exact, need no approximation, and greatly facilitate heterogeneous fusion of unlabeled and labeled RFS densities, which is demonstrated separately in two subsequent companion papers.
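For intuition, AA fusion of Gaussian-mixture PHDs reduces to a weighted union of mixture components. This minimal sketch (uniform fusion weights, simple `(weight, mean, cov)` tuples) is an illustration of the general idea, not the paper's formulation:

```python
# Minimal sketch of arithmetic-average (AA) fusion for Gaussian-mixture PHDs:
# D(x) = sum_i w_i * D_i(x). For Gaussian mixtures this is just the union of
# all local components with their weights scaled by the fusion weights w_i.
def aa_fuse_gm_phds(phds, fusion_weights=None):
    n = len(phds)
    if fusion_weights is None:
        fusion_weights = [1.0 / n] * n  # uniform fusion weights assumed
    fused = []
    for w_i, phd in zip(fusion_weights, phds):
        for (c_w, mean, cov) in phd:
            fused.append((w_i * c_w, mean, cov))
    return fused
```

Because the PHD integrates to the expected number of targets, the fused intensity's total weight is the weighted average of the local expected target counts, which gives a flavor of the cardinality behavior behind the PHD-consistency notion discussed above.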
We prove lower bounds for the Minimum Circuit Size Problem (MCSP) in the Sum-of-Squares (SoS) proof system. Our main result is that for every Boolean function $f: \{0,1\}^n \rightarrow \{0,1\}$, SoS requires degree $\Omega(s^{1-\epsilon})$ to prove that $f$ does not have circuits of size $s$ (for any $s > \mathrm{poly}(n)$). As a corollary we obtain that there are no low degree SoS proofs of the statement NP $\not \subseteq $ P/poly. We also show that for any $0 < \alpha < 1$ there are Boolean functions with circuit complexity larger than $2^{n^{\alpha}}$ but SoS requires size $2^{2^{\Omega(n^{\alpha})}}$ to prove this. In addition we prove analogous results on the minimum \emph{monotone} circuit size for monotone Boolean slice functions. Our approach is quite general. Namely, we show that if a proof system $Q$ has strong enough constraint satisfaction problem lower bounds that only depend on good expansion of the constraint-variable incidence graph and, furthermore, $Q$ is expressive enough that variables can be substituted by local Boolean functions, then the MCSP problem is hard for $Q$.
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data and their success in practical applications. However, many of these models prioritize high utility, such as accuracy, with little consideration for privacy, a major concern in modern society where privacy attacks are rampant. To address this issue, researchers have started to develop privacy-preserving GNNs. Despite this progress, a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain is still lacking. In this survey, we aim to address this gap by summarizing the attacks on graph data according to the targeted information, categorizing the privacy preservation techniques in GNNs, and reviewing the datasets and applications that could be used for analyzing and solving privacy issues in GNNs. We also outline potential directions for future research to build better privacy-preserving GNNs.
Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing end-users' trust in machines. As the number of connected devices keeps growing, the Internet of Things (IoT) market needs to be trustworthy for end-users. However, the existing literature still lacks a systematic and comprehensive survey of the use of XAI for IoT. To bridge this gap, in this paper we review XAI frameworks with a focus on their characteristics and support for IoT. We illustrate widely used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), the Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also suggest implementation choices for XAI models in IoT systems for these applications, with appropriate examples, and summarize the key inferences for future work. Moreover, we present cutting-edge developments in edge XAI architectures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored to the demands of future IoT use cases.
Australia is a leading AI nation with strong allies and partnerships. It has prioritised robotics, AI, and autonomous systems to develop sovereign capability for the military, and commits to Article 36 reviews of all new means and methods of warfare to ensure that weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undertaken significant reviews of the risks of AI to human rights and within intelligence organisations, and has committed to producing ethics guidelines and frameworks in Security and Defence. Australia is committed to the OECD's values-based principles for the responsible stewardship of trustworthy AI and has adopted a set of National AI ethics principles. While Australia has not adopted an AI governance framework specifically for Defence, Defence Science has published the 'A Method for Ethical AI in Defence' (MEAID) technical report, which includes a framework and pragmatic tools for managing ethical and legal risks in military applications of AI.
Recently, Mutual Information (MI) has attracted attention for bounding the generalization error of Deep Neural Networks (DNNs). However, accurately estimating the MI in DNNs is intractable, so most previous works have had to relax the MI bound, which in turn weakens the information-theoretic explanation for generalization. To address this limitation, this paper introduces a probabilistic representation of DNNs for accurately estimating the MI. Leveraging the proposed MI estimator, we validate the information-theoretic explanation for generalization and derive a tighter generalization bound than the state-of-the-art relaxations.
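For context, the classical MI-based generalization bound that such relaxations build on (Xu and Raginsky, 2017) can be stated as follows; whether this is the exact bound tightened here is not specified in the abstract:

```latex
% Standard MI generalization bound for a $\sigma$-sub-Gaussian loss:
\[
  \Bigl|\mathbb{E}\bigl[L_\mu(W) - L_S(W)\bigr]\Bigr|
  \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S;W)},
\]
% where $S$ is the training set of $n$ i.i.d. samples, $W$ the learned
% weights, $L_S$ the empirical risk, and $L_\mu$ the population risk.
```

Estimating $I(S;W)$ for a DNN is the intractable step alluded to above, which is why prior works relax the bound and why an accurate MI estimator tightens it.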
Graph Neural Networks (GNNs) have been studied through the lenses of expressive power and generalization. However, their optimization properties are less well understood. We take the first step towards analyzing GNN training by studying the gradient dynamics of GNNs. First, we analyze linearized GNNs and prove that, despite the non-convexity of training, convergence to a global minimum at a linear rate is guaranteed under mild assumptions that we validate on real-world graphs. Second, we study what may affect GNN training speed. Our results show that GNN training is implicitly accelerated by skip connections, greater depth, and/or a good label distribution. Empirical results confirm that our theoretical findings for linearized GNNs align with the training behavior of nonlinear GNNs. Our results provide the first theoretical support for the success of GNNs with skip connections in terms of optimization, and suggest that deep GNNs with skip connections would be promising in practice.