Recently, deep neural networks have been shown to be vulnerable to backdoor attacks. In this attack paradigm, a backdoor is inserted into the network, compromising its integrity. Once an attacker presents a trigger during the testing phase, the backdoor is activated, causing the network to make specific wrong predictions. Defending against backdoor attacks is extremely important because they are stealthy and dangerous. In this paper, we propose a novel defense mechanism, Neural Behavioral Alignment (NBA), for backdoor removal. Following the characteristics of backdoor defense, NBA optimizes the distillation process in terms of both the form of the transferred knowledge and the distillation samples. NBA builds high-level representations of neural behavior within networks to facilitate knowledge transfer. Additionally, NBA crafts pseudo samples to induce student models to exhibit backdoor neural behavior. By aligning the backdoor neural behavior of the student network with the benign neural behavior of the teacher network, NBA enables the proactive removal of backdoors. Extensive experiments show that NBA can effectively defend against six different backdoor attacks and outperforms five state-of-the-art defenses.
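As a rough illustration of this alignment idea, the sketch below distills a teacher's benign outputs into a student on crafted pseudo samples; the loss form, temperature, and module names are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def behavior_alignment_loss(student, teacher, pseudo_x, temperature=4.0):
    """Align the student's (backdoor) behavior on crafted pseudo samples with
    the teacher's benign behavior on the same inputs via soft-label KL."""
    with torch.no_grad():
        benign_logits = teacher(pseudo_x)    # benign neural behavior (frozen teacher)
    backdoor_logits = student(pseudo_x)      # behavior elicited by pseudo samples
    p_teacher = F.softmax(benign_logits / temperature, dim=1)
    log_p_student = F.log_softmax(backdoor_logits / temperature, dim=1)
    # Temperature-scaled KL, as in standard knowledge distillation.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```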
Diffractive deep neural networks (D2NNs) are composed of successive transmissive layers optimized using supervised deep learning to all-optically implement various computational tasks between an input and output field-of-view (FOV). Here, we present a pyramid-structured diffractive optical network design (which we term P-D2NN), optimized specifically for unidirectional image magnification and demagnification. In this design, the diffractive layers are pyramidally scaled in alignment with the direction of the image magnification or demagnification. The P-D2NN design creates high-fidelity magnified or demagnified images in only one direction while inhibiting image formation in the opposite direction, achieving the desired unidirectional imaging operation with a much smaller number of diffractive degrees of freedom within the optical processor volume. Furthermore, the P-D2NN design maintains its unidirectional image magnification/demagnification functionality across a large band of illumination wavelengths despite being trained with a single wavelength. We also designed a wavelength-multiplexed P-D2NN, where a unidirectional magnifier and a unidirectional demagnifier operate simultaneously in opposite directions at two distinct illumination wavelengths. In addition, we demonstrate that cascading multiple unidirectional P-D2NN modules achieves higher magnification factors. The efficacy of the P-D2NN architecture was also validated experimentally using terahertz illumination, successfully matching our numerical simulations. P-D2NN offers a physics-inspired strategy for designing task-specific visual processors.
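For intuition about pyramidally scaled diffractive layers, the following toy simulation propagates a field through apertures that grow layer by layer, using the standard angular spectrum method; all dimensions and the flat phase profiles are placeholders (trained phase values would replace them), not the paper's parameters.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Free-space propagation of a sampled complex field over distance z."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

wavelength, dx, z = 0.75e-3, 0.4e-3, 30e-3           # toy THz-scale values (meters)
n = 128
field = np.ones((n, n), dtype=complex)               # toy input aperture
# Pyramidal scaling: each layer's active aperture grows along the magnification
# direction; the zero phases below stand in for trained phase profiles.
for scale in [0.25, 0.5, 0.75, 1.0]:
    phase = np.zeros((n, n))
    half, c = int(n * scale) // 2, n // 2
    aperture = np.zeros((n, n))
    aperture[c - half:c + half, c - half:c + half] = 1.0
    field = angular_spectrum(field * aperture * np.exp(1j * phase), wavelength, dx, z)
```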
Recent advancements in deep convolutional neural networks have significantly improved the performance of saliency prediction. However, manually configuring neural network architectures requires domain expertise and can still be time-consuming and error-prone. To address this, we propose a new Neural Architecture Search (NAS) framework for saliency prediction with two contributions. Firstly, we build a supernet for saliency prediction, termed SalNAS: a weight-sharing network containing all candidate architectures, with a dynamic convolution integrated into its encoder-decoder. Secondly, although SalNAS is highly efficient (20.98 million parameters), it can suffer from a lack of generalization. To address this, we propose a self-knowledge distillation approach, termed Self-KD, which trains the student SalNAS on a weighted average of the ground truth and the prediction from the teacher model. The teacher model, while sharing the same architecture, contains the best-performing weights chosen by cross-validation. Self-KD generalizes well without requiring gradient computation in the teacher model, enabling an efficient training system. With Self-KD, SalNAS outperforms other state-of-the-art saliency prediction models on most evaluation metrics across seven benchmark datasets while remaining a lightweight model. The code will be available at //github.com/chakkritte/SalNAS
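A minimal sketch of the Self-KD target follows, assuming a convex combination weight alpha and a KL loss over normalized saliency maps; the exact loss composition in the paper may differ.

```python
import torch
import torch.nn.functional as F

def self_kd_loss(student_pred, teacher_pred, ground_truth, alpha=0.5):
    # Teacher shares the architecture; its weights are the fixed best
    # cross-validation checkpoint, so no teacher gradients are computed.
    target = alpha * ground_truth + (1.0 - alpha) * teacher_pred.detach()
    # KL divergence between normalized saliency distributions (a common
    # saliency-prediction loss; illustrative, possibly one of several terms).
    s = student_pred.flatten(1).softmax(dim=1)
    t = target.flatten(1).softmax(dim=1)
    return F.kl_div(s.log(), t, reduction="batchmean")
```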
Which dynamic queries can be maintained efficiently? For constant-size changes, it is known that constant-depth circuits or, equivalently, first-order updates suffice for maintaining many important queries, among them reachability, tree isomorphism, and the word problem for context-free languages. In other words, these queries are in the dynamic complexity class DynFO. We show that most of the existing results for constant-size changes can be recovered for batch changes of polylogarithmic size if one allows circuits of depth O(log log n) or, equivalently, first-order updates that are iterated O(log log n) times.
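As a concrete textbook example of a first-order update (not specific to this paper): after inserting a single edge, reachability (transitive closure) can be maintained by one first-order formula, sketched below; handling deletions and polylogarithmic-size batch changes is the substantially harder part.

```python
def insert_edge(reach, u, v, nodes):
    """reach[x][y] is True iff y is reachable from x (reflexive: reach[x][x]).
    After inserting edge (u, v), the first-order update is
    reach'(x, y) = reach(x, y) OR (reach(x, u) AND reach(v, y))."""
    return {x: {y: reach[x][y] or (reach[x][u] and reach[v][y]) for y in nodes}
            for x in nodes}
```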
Image edge detection (ED) is a fundamental task in computer vision. While the performance of ED algorithms has improved greatly with the introduction of CNN-based models, current models still suffer from unsatisfactory precision rates, especially when only a low error tolerance distance is allowed. Therefore, model architectures for more precise predictions still need investigation. On the other hand, the unavoidable noise in human-provided training data leads to unsatisfactory model predictions even when the inputs are edge maps themselves, which also needs improvement. In this paper, more precise ED models are presented with cascaded skipping density blocks (CSDB). Our models achieve state-of-the-art (SOTA) predictions on several datasets, especially in average precision rate (AP), as confirmed by extensive experiments. Moreover, our models include no down-sampling operations, demonstrating that these widely adopted operations are not necessary. In addition, a novel modification to data augmentation is introduced, which allows noiseless data to be used in model training and thus improves the performance of models predicting on edge maps themselves.
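Since the CSDB internals are not spelled out in this summary, the block below is only a generic densely connected block with a skip connection and no down-sampling, illustrating the kind of resolution-preserving design described; its structure and hyperparameters are assumptions, not the paper's exact block.

```python
import torch
import torch.nn as nn

class DenseSkipBlock(nn.Module):
    """Dense connections plus an outer skip; stride 1 everywhere, so the
    spatial resolution is fully preserved (no down-sampling)."""
    def __init__(self, channels, growth=16, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(layers))
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))   # skip connection
```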
As networks grow in size and complexity, backbones become an essential network representation. Indeed, they provide a simplified yet informative overview of the underlying organization by retaining the most significant and structurally influential connections within a network. Network heterogeneity often results in complex and intricate structures, making it challenging to identify the backbone. In response, we introduce the Multilevel Backbone Extraction Framework, a novel approach that diverges from conventional backbone methodologies. This generic approach prioritizes the mesoscopic organization of networks. First, it splits the network into homogeneous-density components. Second, it extracts an independent backbone for each component using any classical backbone technique. Finally, the various backbones are combined. This strategy effectively addresses the heterogeneity observed in network groupings. Empirical investigations on real-world networks underscore the efficacy of the Multilevel Backbone approach in preserving essential network structures and properties. Experiments demonstrate its superiority over classical methods in handling network heterogeneity and enhancing network integrity. The framework is adaptable to various types of networks and backbone extraction techniques, making it a versatile tool for network analysis and backbone extraction across diverse network applications.
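The three-step strategy could be prototyped as below, with Louvain communities standing in for the homogeneous-density split and a toy "keep the heaviest edges" filter standing in for a classical backbone technique; both choices are illustrative, and note that this simple combination drops inter-component edges.

```python
import networkx as nx

def multilevel_backbone(G, extract_backbone, keep_fraction=0.3):
    # 1) Split the network into denser, more homogeneous components.
    components = nx.community.louvain_communities(G, seed=0)
    backbone = nx.Graph()
    for nodes in components:
        sub = G.subgraph(nodes)
        # 2) Extract an independent backbone per component with any
        #    classical technique passed in as `extract_backbone`.
        backbone.add_edges_from(extract_backbone(sub, keep_fraction).edges())
    # 3) Combine the per-component backbones into one graph.
    return backbone

def strongest_edges(sub, keep_fraction):
    """Toy stand-in for a classical backbone filter: keep the heaviest edges."""
    edges = sorted(sub.edges(data="weight", default=1.0),
                   key=lambda e: e[2], reverse=True)
    k = max(1, int(len(edges) * keep_fraction))
    H = nx.Graph()
    H.add_weighted_edges_from(edges[:k])
    return H
```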
Uncertainty reduction is vital for improving system reliability and reducing risks. To identify the best target for uncertainty reduction, an uncertainty importance measure is commonly used to prioritize the significance of input variable uncertainties. Designers then take steps to reduce the uncertainties of variables with high importance. However, for variables with minimal uncertainty, the cost of further controlling their uncertainties can be unacceptable. Therefore, uncertainty magnitude should also be considered when developing uncertainty reduction strategies. Although variance-based methods have been developed for this purpose, they depend on statistical moments and have limitations when dealing with highly skewed distributions, which are commonly encountered in practical applications. Motivated by this problem, we propose a new uncertainty importance measure based on cumulative residual entropy. The proposed measure is moment-independent, building on the cumulative distribution function, and can therefore handle highly skewed distributions properly. Numerical implementations for estimating the proposed measure are devised and verified. A real-world engineering case involving highly skewed distributions is introduced to illustrate the procedure of developing uncertainty reduction strategies that consider both uncertainty magnitude and the corresponding cost. The results demonstrate that, owing to its moment-independent characteristic, the proposed measure can yield uncertainty reduction recommendations different from those of the variance-based approach.
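For reference, the cumulative residual entropy of a nonnegative variable is CRE(X) = -∫ P(X > x) log P(X > x) dx. A simple sample-based estimator is sketched below; this is an illustrative construction, not necessarily the paper's numerical implementation.

```python
import numpy as np

def cumulative_residual_entropy(samples):
    """Estimate CRE(X) = -∫ P(X > x) log P(X > x) dx from i.i.d. samples."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    # Empirical survival function just right of each sorted point.
    survival = 1.0 - np.arange(1, n + 1) / n
    s = survival[:-1]            # survival on each interval (x_i, x_{i+1})
    dx = np.diff(x)
    integrand = np.where(s > 0, -s * np.log(s), 0.0)
    return float(np.sum(integrand * dx))

rng = np.random.default_rng(0)
# A highly skewed input (lognormal), where moment-based measures can mislead.
print(cumulative_residual_entropy(rng.lognormal(0.0, 1.0, 100_000)))
```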
Most intrusion detection methods in computer networks are based on traffic flow characteristics. However, this approach may not fully exploit the potential of deep learning algorithms to extract features and patterns directly from raw packets. Moreover, it impedes real-time monitoring, since it requires waiting for the processing pipeline to complete, and introduces dependencies on additional software components. In this paper, we investigate deep learning methodologies capable of detecting attacks in real time directly from raw packet data within network traffic. We propose a novel approach in which packets are stacked into windows and recognised separately, using a 2D image representation suitable for processing with computer vision models. Our investigation utilizes the CIC IDS-2017 dataset, which includes both benign traffic and prevalent real-world attacks, providing a comprehensive foundation for our research.
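A minimal sketch of the windowing idea, assuming one packet per row with byte values as pixels; the window size, width, and padding scheme are placeholders rather than the paper's exact preprocessing.

```python
import numpy as np

def packets_to_window(packets, window=16, width=256):
    """Stack up to `window` raw packets into a (window, width) uint8 array:
    one packet per row, truncated or zero-padded to `width` bytes."""
    img = np.zeros((window, width), dtype=np.uint8)
    for i, pkt in enumerate(packets[:window]):
        row = np.frombuffer(bytes(pkt)[:width], dtype=np.uint8)
        img[i, :len(row)] = row
    return img

# Toy usage with fake payloads; real input would be captured raw frames.
fake_packets = [bytes(range(60)), b"\x45\x00" * 40]
window = packets_to_window(fake_packets)   # shape (16, 256), ready for a CNN
```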
Blockchain networks are critical for safeguarding digital transactions and assets, but they are increasingly targeted by ransomware attacks exploiting zero-day vulnerabilities. Traditional detection techniques struggle due to the complexity of these exploits and the lack of comprehensive datasets. The UGRansome dataset addresses this gap by offering detailed features for analysing ransomware and zero-day attacks, including timestamps, attack types, protocols, network flows, and financial impacts in bitcoins (BTC). This study uses the Lazy Predict library to automate machine learning (ML) on the UGRansome dataset, aiming to enhance blockchain security through ransomware detection based on zero-day exploit recognition. Lazy Predict streamlines the comparison of different ML models and identifies effective algorithms for threat detection. Key features such as timestamps, protocols, and financial data are used to predict anomalies as zero-day threats and to classify known signatures as ransomware. Results demonstrate that ML can significantly improve cybersecurity in blockchain environments. The DecisionTreeClassifier and ExtraTreeClassifier, with their high performance and low training times, are ideal candidates for deployment in real-time threat detection systems.
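Lazy Predict's documented interface makes such a comparison a few lines; the sketch below uses synthetic stand-in features, since the exact UGRansome preprocessing is not described here.

```python
from lazypredict.Supervised import LazyClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Stand-in features (e.g., encoded timestamps, protocols, BTC amounts).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LazyClassifier(verbose=0, ignore_warnings=True, custom_metric=None)
models, predictions = clf.fit(X_train, X_test, y_train, y_test)
print(models.head())   # leaderboard incl. DecisionTreeClassifier, ExtraTreeClassifier
```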
We are witnessing a widespread adoption of artificial intelligence in healthcare. However, most of the advancements in deep learning in this area consider only unimodal data, neglecting other modalities whose joint interpretation is necessary for supporting diagnosis, prognosis and treatment decisions. In this work we present a deep architecture that jointly learns modality reconstructions and sample classifications using tabular and imaging data. The explanation of the decision is computed by applying a latent shift that simulates a counterfactual prediction, revealing the features of each modality that contribute most to the decision, together with a quantitative score indicating the importance of each modality. We validate our approach in the context of the COVID-19 pandemic using the AIforCOVID dataset, which contains multimodal data for the early identification of patients at risk of severe outcome. The results show that the proposed method provides meaningful explanations without degrading classification performance.
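The latent-shift idea can be sketched as follows, shifting the latent code along the classifier's gradient and decoding the resulting counterfactuals; the encoder/decoder/classifier interfaces and the shift magnitudes are assumptions for illustration, not the paper's exact procedure.

```python
import torch

def latent_shift_explanation(encoder, decoder, classifier, x, lambdas):
    """Shift the latent code along the classifier gradient and decode the
    counterfactuals; large reconstruction changes mark influential features."""
    z = encoder(x).detach().requires_grad_(True)
    score = classifier(decoder(z)).sum()
    grad = torch.autograd.grad(score, z)[0]
    counterfactuals = [decoder(z + lam * grad).detach() for lam in lambdas]
    # Per-modality importance could then be scored by how much each modality's
    # reconstruction changes across the shifted predictions.
    return counterfactuals
```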
Bayesian neural networks (BNNs) promise to combine the predictive performance of neural networks with principled uncertainty modeling, which is important for safety-critical systems and decision making. However, posterior uncertainty estimates depend on the choice of prior, and finding informative priors in weight space has proven difficult. This has motivated variational inference (VI) methods that pose priors directly on the function generated by the BNN rather than on its weights. In this paper, we address a fundamental issue with such function-space VI approaches pointed out by Burt et al. (2020), who showed that the objective function (ELBO) is negative infinity for most priors of interest. Our solution builds on generalized VI (Knoblauch et al., 2019) with the regularized KL divergence (Quang, 2019) and is, to the best of our knowledge, the first well-defined variational objective for function-space inference in BNNs with Gaussian process (GP) priors. Experiments show that our method incorporates the properties specified by the GP prior on synthetic and small real-world data sets, and provides competitive uncertainty estimates for regression, classification and out-of-distribution detection compared to BNN baselines with both function- and weight-space priors.
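On a finite measurement set, where both the variational posterior and the GP prior reduce to multivariate Gaussians, a regularized KL term can be sketched by adding a small multiple of the identity to both covariances before taking the standard Gaussian KL; this is a simplified illustration of the regularized-KL idea, not the paper's full estimator.

```python
import torch

def regularized_kl(mu_q, cov_q, mu_p, cov_p, gamma=1e-3):
    """KL( N(mu_q, cov_q + gamma*I) || N(mu_p, cov_p + gamma*I) ) on a
    finite measurement set; gamma > 0 keeps the divergence well-defined."""
    eye = gamma * torch.eye(mu_q.shape[0])
    q = torch.distributions.MultivariateNormal(mu_q, cov_q + eye)
    p = torch.distributions.MultivariateNormal(mu_p, cov_p + eye)
    return torch.distributions.kl_divergence(q, p)
```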