The available evidence suggests that dynamic functional connectivity (dFC) can capture time-varying abnormalities in brain activity in resting-state functional magnetic resonance imaging (rs-fMRI) data and therefore has a natural advantage in uncovering the mechanisms of abnormal brain activity in patients with schizophrenia (SZ). Hence, an advanced dynamic brain network analysis model, the temporal brain category graph convolutional network (Temporal-BCGCN), was employed. First, a unique dynamic brain network analysis module, DSF-BrainNet, was designed to construct dynamic synchronization features. Subsequently, a novel graph convolution method, TemporalConv, was proposed based on the synchronous temporal properties of the features. Finally, the first deep learning tool for modular testing of abnormal hemispheric lateralization based on rs-fMRI data, named CategoryPool, was proposed. The approach was validated on the COBRE and UCLA datasets, achieving average accuracies of 83.62% and 89.71%, respectively, and outperforming the baseline model and other state-of-the-art methods. The ablation results also demonstrate the advantage of TemporalConv over the traditional edge-feature graph convolution approach and the improvement of CategoryPool over classical graph pooling approaches. Interestingly, this study showed that the lower-order perceptual systems and higher-order network regions of the left hemisphere are more severely dysfunctional than those of the right hemisphere in SZ, and it reaffirmed the importance of the left medial superior frontal gyrus in SZ. Our core code is available at: //github.com/swfen/Temporal-BCGCN.
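As a rough illustration of the dynamic-connectivity idea underlying this line of work, the sketch below computes sliding-window correlation matrices from ROI time series. The window length, stride, and function names are illustrative assumptions only and do not reproduce the authors' DSF-BrainNet module.

```python
# Minimal sketch of sliding-window dynamic functional connectivity (dFC).
# Window length, stride, and sizes are hypothetical, for illustration only.
import numpy as np

def sliding_window_dfc(ts, window=30, stride=5):
    """ts: (T, R) ROI time series -> (n_windows, R, R) correlation matrices."""
    mats = []
    for start in range(0, ts.shape[0] - window + 1, stride):
        seg = ts[start:start + window]                   # (window, R) segment
        mats.append(np.corrcoef(seg, rowvar=False))      # ROI-by-ROI correlation
    return np.stack(mats)

# Example with hypothetical sizes: 150 time points, 90 ROIs
dfc = sliding_window_dfc(np.random.randn(150, 90))
```

Each slice of the resulting array can then serve as the adjacency matrix of one time-resolved brain graph fed to a graph convolutional model.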
A significant challenge in control theory and technology is to devise agile and less resource-intensive experiments for evaluating the performance and feasibility of control algorithms for the collective coordination of large-scale complex systems. Many new methodologies are based on macroscopic representations of the emergent system behavior and, because of the inherent hurdle of developing full-scale experimental platforms, can easily be validated only through numerical simulations. In this paper, we introduce a novel hybrid mixed-reality set-up for testing swarm robotics techniques, focusing on the collective motion of robotic swarms. The hybrid apparatus combines real differential-drive robots and virtual agents to create a heterogeneous swarm of tunable size. We validate the methodology by extending continuification-based control methods for swarms to higher dimensions and investigating them experimentally. Our study demonstrates the versatility and effectiveness of the platform for conducting large-scale swarm robotics experiments, and it contributes new theoretical insights into control algorithms exploiting continuification approaches.
Generalized variational inference (GVI) provides an optimization-theoretic framework for statistical estimation that encapsulates many traditional estimation procedures. The typical GVI problem is to compute a distribution of parameters that maximizes the expected payoff minus the divergence of the distribution from a specified prior. In this way, GVI enables likelihood-free estimation with the ability to control the influence of the prior by tuning the so-called learning rate. Recently, GVI was shown to outperform traditional Bayesian inference when the model and prior distribution are misspecified. In this paper, we introduce and analyze a new GVI formulation based on utility theory and risk management. Our formulation is to maximize the expected payoff while enforcing constraints on the maximizing distribution. We recover the original GVI distribution by choosing the feasible set to include a constraint on the divergence of the distribution from the prior. In doing so, we automatically determine the learning rate as the Lagrange multiplier for the constraint. In this setting, we are able to transform the infinite-dimensional estimation problem into a two-dimensional convex program. This reformulation further provides an analytic expression for the optimal density of parameters. In addition, we prove asymptotic consistency results for empirical approximations of our optimal distributions. Throughout, we draw connections between our estimation procedure and risk management. In fact, we demonstrate that our estimation procedure is equivalent to evaluating a risk measure. We test our procedure on an estimation problem with a misspecified model and prior distribution, and conclude with some extensions of our approach.
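In hedged notation (the symbols are ours, not necessarily the paper's), the standard GVI problem described above can be written as
\[
\max_{q}\; \mathbb{E}_{\theta\sim q}\big[u(\theta)\big] \;-\; \tfrac{1}{\beta}\, D\big(q \,\|\, \pi\big),
\]
where \(u\) is the payoff, \(\pi\) the prior, \(D\) a divergence, and \(\beta\) one common placement of the learning rate controlling the influence of the prior. The constrained reformulation proposed in the paper then takes the schematic form
\[
\max_{q}\; \mathbb{E}_{\theta\sim q}\big[u(\theta)\big] \quad\text{s.t.}\quad D\big(q \,\|\, \pi\big) \le \varepsilon,
\]
with the learning rate recovered as the Lagrange multiplier associated with the divergence constraint.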
Counterfactual explanations play an important role in detecting bias and improving the explainability of data-driven classification models. A counterfactual explanation (CE) is a minimally perturbed data point for which the decision of the model changes. Most existing methods can only provide one CE, which may not be achievable for the user. In this work, we derive an iterative method to calculate robust CEs, i.e., CEs that remain valid even after the features are slightly perturbed. To this end, our method provides a whole region of CEs, allowing the user to choose a suitable recourse to obtain a desired outcome. We use algorithmic ideas from robust optimization and prove convergence results for the most common machine learning methods, including logistic regression, decision trees, random forests, and neural networks. Our experiments show that our method can efficiently generate globally optimal robust CEs for a variety of common datasets and classification models.
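To make the notion of robustness concrete (our notation, not necessarily the paper's), a robust CE for an input \(x\) under a classifier \(f\) can be written schematically as
\[
\min_{x'} \; \|x' - x\| \quad \text{s.t.} \quad f(x' + \delta) = y_{\text{target}} \;\; \text{for all } \|\delta\| \le \epsilon,
\]
so that every point in the \(\epsilon\)-ball around \(x'\) still yields the desired outcome; the set of all such \(x'\) forms the region of recourses offered to the user.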
Leveraging non-terrestrial platforms in 6G networks holds immense significance, as it opens up opportunities to expand network coverage, enhance connectivity, and support a wide range of innovative applications, including global-scale Internet of Things and ultra-high-definition content delivery. To accomplish the seamless integration of terrestrial and non-terrestrial networks, substantial changes in radio access network (RAN) architecture are required. These changes involve the development of new RAN solutions that can efficiently manage the diverse characteristics of both terrestrial and non-terrestrial components, ensuring smooth handovers, resource allocation, and quality of service across the integrated network ecosystem. The establishment of robust interconnection and communication protocols between terrestrial and non-terrestrial elements will also be pivotal to realizing the full potential of 6G technology. In addition, innovative approaches have been introduced to split the functionalities within the RAN into centralized and distributed domains. These paradigms are designed to enhance the RAN's flexibility while lowering the costs associated with infrastructure deployment, all without affecting the quality of service for end users. In this work, we provide an extensive examination of various non-terrestrial network (NTN) architectures and the adaptations required in the existing 5G RAN architecture to align with the distinct attributes of NTN. Of particular significance, we emphasize the RAN functional split choices essential for the seamless integration of terrestrial and non-terrestrial components within advanced 6G networks.
We develop a partially explicit time discretization based on the framework of the constraint energy minimizing generalized multiscale finite element method (CEM-GMsFEM) for the problem of linear poroelasticity with high contrast. First, dominant basis functions generated by the CEM-GMsFEM approach are used to capture the important degrees of freedom; this approach is known to give contrast-independent convergence that scales with the mesh size. In typical situations, the dominant basis functions involve very few degrees of freedom, and this part is treated implicitly. Second, we design and introduce an additional space in the complement space, and these degrees of freedom are treated explicitly. We also investigate the CFL-type stability restriction for this problem and show that the restriction on the time step is independent of the contrast.
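For intuition only, consider a schematic model-problem version of such a splitting (not the authors' specific poroelasticity scheme; the notation is ours). Writing the solution of a parabolic equation as \(u = u_1 + u_2\) with \(u_1 \in V_{H,1}\) (the dominant CEM-GMsFEM space) and \(u_2 \in V_{H,2}\) (the complementary space), a partially explicit step may take the form
\[
\Big(\tfrac{u_1^{n+1}-u_1^{n}}{\Delta t},\, v_1\Big) + a\big(u_1^{n+1}+u_2^{n},\, v_1\big) = (f,\, v_1) \quad \forall\, v_1\in V_{H,1},
\]
\[
\Big(\tfrac{u_2^{n+1}-u_2^{n}}{\Delta t},\, v_2\Big) + a\big(u_1^{n}+u_2^{n},\, v_2\big) = (f,\, v_2) \quad \forall\, v_2\in V_{H,2},
\]
so that only the few dominant degrees of freedom require an implicit solve, and the CFL-type restriction arising from the explicit part can be made independent of the contrast.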
Face recognition technology has advanced significantly in recent years, due largely to the availability of large and increasingly complex training datasets for use in deep learning models. These datasets, however, typically comprise images scraped from news sites or social media platforms and therefore have limited utility in more advanced security, forensics, and military applications. These applications involve lower resolutions, longer ranges, and elevated viewpoints. To meet these critical needs, we collected and curated the first and second subsets of a large multi-modal biometric dataset designed for use in the research and development (R&D) of biometric recognition technologies under extremely challenging conditions. Thus far, the dataset includes more than 350,000 still images and over 1,300 hours of video footage of approximately 1,000 subjects. To collect these data, we used Nikon DSLR cameras, a variety of commercial surveillance cameras, specialized long-range R&D cameras, and Group 1 and Group 2 UAV platforms. The goal is to support the development of algorithms capable of accurately recognizing people at ranges up to 1,000 m and from high angles of elevation. These advances will include improvements to the state of the art in face recognition and will support new research in the area of whole-body recognition using methods based on gait and anthropometry. This paper describes the methods used to collect and curate the dataset and the dataset's characteristics at the current stage.
Autonomic computing investigates how systems can achieve (user-)specified control outcomes on their own, without the intervention of a human operator. The fundamentals of autonomic computing have been substantially influenced by those of control theory for closed- and open-loop systems. In practice, complex systems may exhibit a number of concurrent and inter-dependent control loops. Despite research into autonomic models for managing computer resources, ranging from individual resources (e.g., web servers) to resource ensembles (e.g., multiple resources within a data center), integrating Artificial Intelligence (AI) and Machine Learning (ML) to improve resource autonomy and performance at scale remains a fundamental challenge. The integration of AI/ML to achieve such autonomic self-management of systems can be realized at different levels of granularity, from full automation to human-in-the-loop automation. In this article, leading academics, researchers, practitioners, engineers, and scientists in the fields of cloud computing, AI/ML, and quantum computing come together to discuss current research and potential future directions for these fields. Further, we discuss challenges and opportunities for leveraging AI and ML in next-generation computing for emerging computing paradigms, including cloud, fog, edge, serverless, and quantum computing environments.
High spectral dimensionality and the shortage of annotations make hyperspectral image (HSI) classification a challenging problem. Recent studies suggest that convolutional neural networks can learn discriminative spatial features, which play a paramount role in HSI interpretation. However, most of these methods ignore the distinctive spectral-spatial characteristics of hyperspectral data. In addition, a large amount of unlabeled data remains an unexploited gold mine for efficient data use. Therefore, we propose an integration of generative adversarial networks (GANs) and probabilistic graphical models for HSI classification. Specifically, we use a spectral-spatial generator and a discriminator to identify the land cover categories of hyperspectral cubes. Moreover, to take advantage of the large amount of unlabeled data, we adopt a conditional random field to refine the preliminary classification results generated by the GAN. Experimental results obtained on two commonly studied datasets demonstrate that the proposed framework achieves encouraging classification accuracy using only a small number of training samples.
Multi-relation question answering is a challenging task due to the need for elaborate analysis of questions and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model, the Interpretable Reasoning Network, that employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the currently parsed results; utilizes the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model can offer traceable and observable intermediate predictions for reasoning analysis and failure diagnosis.
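The hop-by-hop loop described above can be sketched roughly as follows; the module names, dimensions, and update rules are illustrative assumptions rather than the authors' architecture.

```python
# Illustrative sketch of a hop-by-hop reasoning loop (not the paper's exact model).
import torch
import torch.nn as nn

class HopReasoner(nn.Module):
    def __init__(self, dim, n_relations, n_hops=3):
        super().__init__()
        self.n_hops = n_hops
        self.attend = nn.Linear(2 * dim, 1)          # which question words to focus on
        self.rel_clf = nn.Linear(dim, n_relations)   # per-hop relation prediction
        self.rel_emb = nn.Embedding(n_relations, dim)
        self.state_update = nn.GRUCell(dim, dim)     # reasoning-state update

    def forward(self, q_words, state):
        """q_words: (L, dim) question token vectors; state: (dim,) reasoning state."""
        relations = []
        for _ in range(self.n_hops):
            # 1) attend to the part of the question relevant at this hop
            scores = self.attend(torch.cat(
                [q_words, state.expand_as(q_words)], dim=-1)).softmax(dim=0)
            focus = (scores * q_words).sum(dim=0)
            # 2) predict the relation for the current hop
            rel = self.rel_clf(focus).argmax()
            relations.append(rel.item())
            # 3) update the question representation and the reasoning state
            q_words = q_words - scores * self.rel_emb(rel)   # "consume" the analyzed part
            state = self.state_update(self.rel_emb(rel).unsqueeze(0),
                                      state.unsqueeze(0)).squeeze(0)
        return relations  # traceable intermediate predictions, one per hop

# Example with hypothetical sizes: a 6-word question, 64-dim vectors, 200 relations
model = HopReasoner(dim=64, n_relations=200)
hops = model(torch.randn(6, 64), torch.zeros(64))
```

The per-hop relation predictions returned by the loop are what make the intermediate reasoning steps traceable for analysis and failure diagnosis.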
Detecting carried objects is one of the requirements for developing systems that reason about activities involving people and objects. We present a novel method to detect carried objects from a single video frame by incorporating features from multiple scales. Initially, the foreground mask in a video frame is segmented into multi-scale superpixels. Then, human-like regions in the segmented area are identified by matching features extracted from the superpixels against learned features in a codebook. A carried-object probability map is generated using the complement of the superpixels' matching probabilities to human-like regions, together with background information. A group of superpixels with high carried-object probability and strong edge support is then merged to obtain the shape of the carried object. We applied our method to two challenging datasets, and the results show that our method is competitive with or better than the state of the art.