This study explores the effectiveness of graph neural networks (GNNs) for vulnerability detection in software code, utilizing a real-world dataset of Java vulnerability-fixing commits. The dataset's structure, based on the number of modified methods in each commit, offers a natural partition that facilitates diverse investigative scenarios. The primary focus is to evaluate the general applicability of GNNs in identifying vulnerable code segments and distinguishing these from their fixed versions, as well as from random non-vulnerable code. Through a series of experiments, the research addresses key questions about the suitability of different configurations and subsets of data in enhancing the prediction accuracy of GNN models. The experiments indicate that certain model configurations, such as pruning specific graph elements and excluding particular types of code representation, significantly improve performance. Additionally, the study highlights the importance of including random data in training to optimize the detection capabilities of GNNs.
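As a concrete illustration of the setup, the sketch below shows a minimal graph-level classifier of the kind such experiments typically use, assuming PyTorch Geometric is available; the layer choice, feature dimension, and two-class labels (vulnerable vs. fixed or random code) are illustrative assumptions, not the study's exact configuration.

```python
# Minimal sketch of a GNN classifier over per-method code graphs,
# assuming torch and torch_geometric; hyperparameters are placeholders.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class CodeGraphGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden=64, classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.lin = torch.nn.Linear(hidden, classes)

    def forward(self, x, edge_index, batch):
        # Two rounds of message passing over the code graph.
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        # One embedding per method graph, then a class score
        # (e.g., vulnerable vs. fixed/random).
        x = global_mean_pool(x, batch)
        return self.lin(x)
```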
Efficient and effective modeling of complex systems, incorporating cloud physics and precipitation, is essential for accurate climate modeling and forecasting. However, simulating these systems is computationally demanding since microphysics makes crucial contributions to the dynamics of moisture and precipitation. In this paper, appropriate stochastic models are developed for the phase-transition dynamics of water, focusing on the precipitating quasi-geostrophic (PQG) model as a prototype. By treating the moisture, phase transitions, and latent heat release as integral components of the system, the PQG model constitutes a set of partial differential equations (PDEs) that involve Heaviside nonlinearities due to phase changes of water. Although this formulation systematically characterizes the precipitation physics, expensive iterative algorithms are needed to solve a PDE inversion at each numerical integration time step. As a crucial step toward building an effective stochastic model, a computationally efficient Markov jump process is designed to randomly simulate transitions between saturated and unsaturated states while avoiding the expensive iterative solver. The transition rates, which are deterministic, are derived from the physical fields, guaranteeing physical and statistical consistency with nature. Furthermore, to maintain a consistent spatial pattern of precipitation, the stochastic model incorporates an adaptive parameterization that automatically adjusts the transitions based on spatial information. Numerical tests show that the stochastic model retains critical properties of the original PQG system while significantly reducing computational demands. It accurately captures observed precipitation patterns, including the spatial distribution and temporal variability of rainfall, alongside reproducing essential dynamic features such as potential vorticity fields and zonal mean flows.
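A minimal sketch of one step of such a two-state Markov jump process, in NumPy; the rate fields are abstract inputs standing in for the deterministic rates the paper derives from the physical fields, and the exact discretization is an assumption.

```python
import numpy as np

def jump_step(state, rate_to_sat, rate_to_unsat, dt, rng):
    """One time step of a two-state Markov jump process on a grid.

    state: boolean array, True = saturated, False = unsaturated.
    rate_to_sat / rate_to_unsat: nonnegative transition rates, assumed
    derived from the resolved physical fields (e.g., moisture relative
    to saturation), so no iterative PDE inversion is needed here.
    A transition fires in [t, t + dt) with probability 1 - exp(-rate * dt).
    """
    p_sat = 1.0 - np.exp(-rate_to_sat * dt)      # unsaturated -> saturated
    p_unsat = 1.0 - np.exp(-rate_to_unsat * dt)  # saturated -> unsaturated
    u = rng.random(state.shape)
    # Saturated cells stay saturated unless their jump fires, and vice versa.
    return np.where(state, u >= p_unsat, u < p_sat)

# Usage sketch: evolve a 128x128 saturation indicator field.
rng = np.random.default_rng(0)
state = rng.random((128, 128)) < 0.3
state = jump_step(state, rate_to_sat=0.5, rate_to_unsat=0.2, dt=0.1, rng=rng)
```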
Spatial autoregressive (SAR) models are important tools for studying network effects. However, with an increasing emphasis on data privacy, data providers often implement privacy protection measures that make classical SAR models inapplicable. In this study, we introduce a privacy-protected SAR model with noise-added response and covariates to meet privacy-protection requirements. In this scenario, however, the traditional quasi-maximum likelihood estimator becomes infeasible because the likelihood function cannot be directly formulated. To address this issue, we first consider an explicit expression for the likelihood function with only noise-added responses. Then, we develop techniques to correct the biases in the derivatives introduced by the noise. Correspondingly, a Newton-Raphson-type algorithm is proposed to obtain the estimator, leading to a corrected likelihood estimator. To further enhance computational efficiency, we introduce a corrected least squares estimator based on the idea of bias correction. These two estimation methods ensure both data security and the attainment of statistically valid estimators. Theoretical analysis of both estimators is carefully conducted, and statistical inference methods and model extensions are discussed. The finite-sample performances of the different methods are demonstrated through extensive simulations and the analysis of a real dataset.
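To illustrate the bias-correction idea behind a corrected least squares estimator, here is a minimal sketch for an ordinary regression with noise-added covariates (not the paper's full SAR network estimator); the isotropic, known noise variance `sigma2_x` is an assumed privacy parameter.

```python
import numpy as np

def corrected_ls(X_noisy, y_noisy, sigma2_x):
    """Bias-corrected least squares with noise-added data (sketch).

    With X_noisy = X + E, E ~ N(0, sigma2_x * I) independent of X and y,
    the Gram matrix X_noisy' X_noisy overestimates X'X by n * sigma2_x * I
    in expectation, which attenuates the naive estimator. Subtracting that
    term before solving the normal equations removes the bias; independent
    response noise in y_noisy leaves the cross-moment unbiased.
    """
    n, p = X_noisy.shape
    gram = X_noisy.T @ X_noisy - n * sigma2_x * np.eye(p)
    return np.linalg.solve(gram, X_noisy.T @ y_noisy)
```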
This work presents a novel and effective method for fitting multidimensional ellipsoids to scattered data contaminated by noise and outliers. We approach the problem as a Bayesian parameter estimation process and maximize the posterior probability of a certain ellipsoidal solution given the data. We establish a more robust association between the data points and the model based on the predictive distribution within the Bayesian framework. We incorporate a uniform prior distribution to constrain the search for primitive parameters within an ellipsoidal domain, ensuring ellipsoid-specific results regardless of the input. We then establish the connection between measurement points and model data via Bayes' rule to enhance the method's robustness against noise. Because it is independent of the spatial dimension, the proposed method not only delivers high-quality fittings for challenging elongated ellipsoids but also generalizes well to multidimensional spaces. To address outlier disturbances, often overlooked by previous approaches, we further introduce a uniform distribution on top of the predictive distribution to significantly enhance the algorithm's robustness against outliers. We also introduce an ε-accelerated technique to considerably expedite the convergence of the expectation-maximization (EM) algorithm. To the best of our knowledge, this is the first comprehensive method capable of performing multidimensional ellipsoid-specific fitting within the Bayesian optimization paradigm under diverse disturbances. We evaluate it across lower- and higher-dimensional spaces in the presence of heavy noise, outliers, and substantial variations in axis ratios. We also apply it to a wide range of practical applications, such as microscopy cell counting, 3D reconstruction, geometric shape approximation, and magnetometer calibration.
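As a sketch of how a uniform outlier component enters an EM iteration, the snippet below computes E-step inlier responsibilities for a Gaussian-plus-uniform mixture over residuals; the residual model and the `outlier_density` constant are illustrative assumptions, not the paper's exact predictive distribution.

```python
import numpy as np

def outlier_responsibilities(residuals, sigma2, outlier_density, w_in):
    """E-step sketch: posterior inlier probabilities under a mixture of a
    Gaussian residual model and a uniform outlier component.

    residuals: distances of points from the current ellipsoid estimate.
    sigma2: current Gaussian noise variance.
    outlier_density: constant density of the uniform component over the
    bounding region (a stand-in for the paper's uniform distribution).
    w_in: current mixing weight of the inlier component.
    Points with large residuals get responsibilities near zero, so they
    barely influence the subsequent M-step ellipsoid update.
    """
    inlier = w_in * np.exp(-0.5 * residuals**2 / sigma2) \
             / np.sqrt(2.0 * np.pi * sigma2)
    outlier = (1.0 - w_in) * outlier_density
    return inlier / (inlier + outlier)
```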
Objectives: This work aims to explore the impact of multicenter data heterogeneity on deep learning brain metastases (BM) autosegmentation performance, and to assess the efficacy of an incremental transfer learning technique, namely learning without forgetting (LWF), for improving model generalizability without sharing raw data. Materials and methods: A total of six BM datasets from University Hospital Erlangen (UKER), University Hospital Zurich (USZ), Stanford, UCSF, NYU, and the BraTS Challenge 2023 on BM segmentation were used for this evaluation. First, the multicenter performance of a convolutional neural network (DeepMedic) for BM autosegmentation was established for exclusive single-center training and for training on pooled data, respectively. Subsequently, bilateral collaboration was evaluated, in which a UKER-pretrained model was shared with another center for further training using transfer learning (TL), either with or without LWF. Results: For single-center training, average F1 scores of BM detection ranged from 0.625 (NYU) to 0.876 (UKER) on the respective single-center test data. Mixed multicenter training notably improved F1 scores at Stanford and NYU, with negligible improvement at the other centers. When the UKER-pretrained model was applied to USZ, LWF achieved a higher average F1 score (0.839) than naive TL (0.570) and single-center training (0.688) on combined UKER and USZ test data. Naive TL improved sensitivity and contouring accuracy but compromised precision. Conversely, LWF demonstrated commendable sensitivity, precision, and contouring accuracy. Similar performance was observed when the model was applied to Stanford. Conclusion: Data heterogeneity results in varying performance in BM autosegmentation, posing challenges to model generalizability. LWF is a promising approach to peer-to-peer privacy-preserving model training.
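A minimal sketch of a learning-without-forgetting objective of the kind described, in PyTorch; the distillation form, temperature, and mixing weight are standard LwF choices used here as assumptions, not necessarily the study's exact loss.

```python
import torch
import torch.nn.functional as F

def lwf_loss(new_logits, old_logits, target, temp=2.0, alpha=0.5):
    """Learning-without-forgetting loss sketch for segmentation logits.

    new_logits: outputs of the model being fine-tuned on the new center.
    old_logits: outputs of the frozen pretrained (e.g., UKER) model on the
    same inputs, used as soft targets so prior knowledge is retained.
    temp and alpha are illustrative hyperparameters.
    """
    # Fit the new center's labels.
    task = F.cross_entropy(new_logits, target)
    # Stay close to the old model's predictions (knowledge distillation).
    distill = F.kl_div(
        F.log_softmax(new_logits / temp, dim=1),
        F.softmax(old_logits / temp, dim=1),
        reduction="batchmean",
    ) * temp**2
    return alpha * task + (1.0 - alpha) * distill
```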
This work presents a framework for a robot with a multi-fingered hand to freely utilize daily tools, including functional parts such as buttons and triggers. An approach heatmap is generated by selecting a functional finger; it indicates the optimal palm positions on the object's surface that enable the functional finger to contact the tool's functional part. Once the palm position is identified through the heatmap, achieving the functional grasp becomes a straightforward process in which the fingers stably grasp the object with low-dimensional inputs using the eigengrasp. As our approach does not need human demonstrations, it can easily adapt to various sizes and designs, extending its applicability to different objects. In our approach, we use directional manipulability to obtain the approach heatmap. In addition, we add two kinds of energy functions, i.e., a palm energy function and a functional energy function, to realize the eigengrasp. Using this method, each robotic gripper can autonomously identify its optimal workspace for functional grasping, extending its applicability to non-anthropomorphic robotic hands. We show that several daily tools, such as a spray, a drill, and remotes, can be used efficiently not only by an anthropomorphic Shadow hand but also by a non-anthropomorphic Barrett hand.
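One ingredient of such a heatmap can be sketched as the standard directional manipulability measure; the fingertip Jacobian `J` and the approach direction are hypothetical inputs, and the full heatmap construction in the paper involves more than this single score.

```python
import numpy as np

def directional_manipulability(J, direction):
    """Manipulability of the functional finger along an approach direction.

    J: fingertip Jacobian (3 x n_joints) at the candidate palm pose
       (a hypothetical input for this sketch).
    direction: desired Cartesian approach direction (length-3 vector).
    Uses the radius of the velocity manipulability ellipsoid along the
    direction, 1 / sqrt(d' (J J')^{-1} d): large values mean the fingertip
    moves easily along that direction, so the palm pose scores highly.
    """
    d = direction / np.linalg.norm(direction)
    m = d @ np.linalg.inv(J @ J.T) @ d
    return 1.0 / np.sqrt(m)
```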
The lack of transparency in the decision-making processes of deep learning systems presents a significant challenge in modern artificial intelligence (AI), as it impairs users' ability to rely on and verify these systems. To address this challenge, Concept Bottleneck Models (CBMs) have made significant progress by incorporating human-interpretable concepts into deep learning architectures. This approach allows predictions to be traced back to specific concept patterns that users can understand and potentially intervene on. However, existing CBMs' task predictors are not fully interpretable, preventing a thorough analysis and any form of formal verification of their decision-making process prior to deployment, thereby raising significant reliability concerns. To bridge this gap, we introduce Concept-based Memory Reasoner (CMR), a novel CBM designed to provide a human-understandable and provably-verifiable task prediction process. Our approach is to model each task prediction as a neural selection mechanism over a memory of learnable logic rules, followed by a symbolic evaluation of the selected rule. The presence of an explicit memory and the symbolic evaluation allow domain experts to inspect and formally verify the validity of certain global properties of interest for the task prediction process. Experimental results demonstrate that CMR achieves accuracy-interpretability trade-offs comparable to those of state-of-the-art CBMs, discovers logic rules consistent with ground truths, allows for rule interventions, and enables pre-deployment verification.
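A minimal sketch of the select-then-evaluate idea: a neural selector scores rules stored in memory, and the chosen rule is evaluated symbolically on the predicted concepts. The rule encoding (+1/-1/0 for required-true, required-false, and irrelevant literals) and hard argmax selection are illustrative assumptions, not CMR's exact parameterization.

```python
import torch

def cmr_style_predict(concepts, rule_memory, selector_logits):
    """Sketch of CMR-style prediction over a memory of logic rules.

    concepts: (batch, n_concepts) hard concept predictions in {0, 1}.
    rule_memory: (n_rules, n_concepts) with entries in {-1, 0, +1}:
      +1 = concept must be true, -1 = must be false, 0 = irrelevant.
    selector_logits: (batch, n_rules) from a neural rule selector.
    Because the memory is explicit, an expert can inspect rule_memory
    directly and check global properties of the rules before deployment.
    """
    idx = selector_logits.argmax(dim=1)        # selected rule per example
    rule = rule_memory[idx]                    # (batch, n_concepts)
    pos_ok = (rule != 1) | (concepts == 1)     # required-true literals hold
    neg_ok = (rule != -1) | (concepts == 0)    # required-false literals hold
    return (pos_ok & neg_ok).all(dim=1)        # rule fires => positive class
```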
Semantic communication leveraging advanced deep learning (DL) technologies enhances the efficiency, reliability, and security of information transmission. The emerging stacked intelligent metasurface (SIM), which has a diffractive neural network (DNN) architecture, allows complex calculations to be performed at the speed of light. In this letter, we introduce an innovative SIM-aided semantic communication system for image recognition tasks. In the considered model, a SIM is positioned in front of the transmitting antenna. In contrast to conventional communication systems, which transmit modulated signals carrying the image information or compressed semantic information, the proposed system directly transmits the carrier electromagnetic (EM) wave from the source. The input layer of the SIM is utilized for source encoding, while the remaining multi-layer architecture constitutes a DNN for semantic encoding. Specifically, the semantic encoder aims to transform the signals passing through the input layer of the SIM into a unique beam towards the receiving antenna corresponding to the image class. Remarkably, both the source and semantic encoding occur naturally as the EM waves propagate through the SIM. At the receiver, the image is recognized by probing the received signal magnitude across the receiving array. To this end, we develop an efficient algorithm to train the transmission coefficients of the SIM's meta-atoms to learn the semantic representation of the image. Extensive numerical results verify the effectiveness of utilizing the SIM-based DNN for image recognition task-oriented semantic communications, achieving more than 90% recognition accuracy.
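A sketch of the forward model such a system implies: each SIM layer multiplies the complex field by per-meta-atom transmission coefficients, free-space propagation between layers is a linear map, and the class is read off the receiving antenna with the largest magnitude. The propagation matrices and shapes are assumptions, not the letter's exact physical model.

```python
import numpy as np

def sim_forward(field, layers, propagators):
    """Forward pass sketch of a SIM-style diffractive network.

    field: complex amplitudes of the EM wave over the first layer's atoms.
    layers: list of complex per-meta-atom transmission coefficients
            (unit-modulus phase shifts in practice), the trainable weights.
    propagators: list of complex matrices modeling free-space propagation
            between consecutive layers (assumed precomputed from geometry).
    Returns received signal magnitudes across the receiving array.
    """
    for t, H in zip(layers, propagators):
        field = H @ (t * field)   # modulate by meta-atoms, then propagate
    return np.abs(field)

# Usage sketch: the predicted class is the antenna with peak magnitude.
# predicted_class = int(np.argmax(sim_forward(field, layers, propagators)))
```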
This study tackles the challenges of adversarial corruption in model-based reinforcement learning (RL), where the transition dynamics can be corrupted by an adversary. Existing studies on corruption-robust RL mostly focus on the model-free setting, where robust least-squares regression is often employed for value function estimation. However, these techniques cannot be directly applied to model-based RL. In this paper, we focus on model-based RL and take the maximum likelihood estimation (MLE) approach to learn the transition model. Our work encompasses both online and offline settings. In the online setting, we introduce an algorithm called corruption-robust optimistic MLE (CR-OMLE), which leverages total-variation (TV)-based information ratios as uncertainty weights for MLE. We prove that CR-OMLE achieves a regret of $\tilde{\mathcal{O}}(\sqrt{T} + C)$, where $C$ denotes the cumulative corruption level after $T$ episodes. We also prove a lower bound showing that the additive dependence on $C$ is optimal. We extend our weighting technique to the offline setting and propose an algorithm named corruption-robust pessimistic MLE (CR-PMLE). Under a uniform coverage condition, CR-PMLE exhibits a suboptimality worsened by $\mathcal{O}(C/n)$, nearly matching the lower bound. To the best of our knowledge, this is the first work on corruption-robust model-based RL algorithms with provable guarantees.
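The weighting idea can be sketched as a per-sample down-weighting of the log-likelihood; the `uncertainties` input is an abstract stand-in for the TV-based information ratios, and the cap `alpha` is a hypothetical tuning parameter rather than the paper's exact weighting rule.

```python
import numpy as np

def weighted_nll(log_liks, uncertainties, alpha):
    """Sketch of uncertainty-weighted MLE in the spirit of CR-OMLE.

    log_liks: per-transition log-likelihoods under a candidate model.
    uncertainties: per-sample information-ratio proxies (the paper uses
    total-variation-based ratios; here they are abstract inputs).
    Samples with large uncertainty are down-weighted, capping the damage
    any single (possibly corrupted) transition can do to the estimate.
    """
    w = np.minimum(1.0, alpha / np.maximum(uncertainties, 1e-12))
    return -np.sum(w * log_liks)   # minimize over candidate models
```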
We present a large-scale study on unsupervised spatiotemporal representation learning from videos. With a unified perspective on four recent image-based frameworks, we study a simple objective that can easily generalize all these methods to space-time. Our objective encourages temporally-persistent features in the same video, and in spite of its simplicity, it works surprisingly well across (i) different unsupervised frameworks, (ii) pre-training datasets, (iii) downstream datasets, and (iv) backbone architectures. We draw a series of intriguing observations from this study, e.g., we discover that encouraging long-spanned persistency can be effective even if the timespan is 60 seconds. In addition to state-of-the-art results on multiple benchmarks, we report a few promising cases in which unsupervised pre-training can outperform its supervised counterpart. Code is made available at //github.com/facebookresearch/SlowFast.
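A minimal sketch of a temporal-persistence objective in its contrastive (InfoNCE-style) instantiation, in PyTorch; the study applies the same idea across several frameworks, some of which use no negatives, so this is one illustrative form rather than the single method.

```python
import torch
import torch.nn.functional as F

def persistence_loss(z1, z2, temp=0.1):
    """Temporal persistence as a contrastive objective (sketch).

    z1, z2: (batch, dim) embeddings of two clips sampled from the same
    video, possibly far apart in time; other videos in the batch act as
    negatives. Minimizing this pulls clips of the same video together.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temp                          # pairwise similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
    return F.cross_entropy(logits, labels)
```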
This paper presents a new multi-objective deep reinforcement learning (MODRL) framework based on deep Q-networks. We propose the use of linear and non-linear methods to develop the MODRL framework, which includes both single-policy and multi-policy strategies. Experimental results on two benchmark problems, the two-objective deep sea treasure environment and the three-objective mountain car problem, indicate that the proposed framework is able to converge to the optimal Pareto solutions effectively. The proposed framework is generic, allowing the implementation of different deep reinforcement learning algorithms in different complex environments. It therefore overcomes many of the difficulties of standard multi-objective reinforcement learning (MORL) methods in the current literature. The framework provides a testbed platform for developing methods that solve various problems associated with current MORL. Details of the framework implementation are available at //www.deakin.edu.au/~thanhthi/drl.htm.
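The single-policy linear-scalarization strategy can be sketched in a few lines: the deep Q-network outputs a Q-vector per action, and the agent acts greedily on a preference-weighted sum of objectives; the shapes and the greedy rule are illustrative assumptions, not the framework's exact implementation.

```python
import numpy as np

def scalarized_action(q_values, weights):
    """Greedy action under linear scalarization (sketch).

    q_values: (n_actions, n_objectives) array from the multi-objective
    deep Q-network, e.g., treasure value vs. time cost in deep sea treasure.
    weights: preference vector over objectives, assumed to sum to one.
    """
    scalar_q = q_values @ weights     # (n_actions,) scalarized values
    return int(np.argmax(scalar_q))

# Usage sketch: two actions, two objectives, mild preference for the first.
q = np.array([[1.0, -3.0], [0.5, -1.0]])
action = scalarized_action(q, weights=np.array([0.6, 0.4]))
```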