This research analyzes multiple factors that inhibit the implementation of an Information Security Management System (ISMS). The data were collected from 143 respondents at two universities in northeastern Mexico, from engineering faculties in related areas. In this study, the Information Security Management System Measurement Instrument (IM-ISMS) was validated. A scale of 24 items was obtained, divided into four factors: organizational policies and regulations, privacy, integrity, and authenticity. The results of this study agree with those of [10], who present a model that complies with ISO/IEC 27002:2013 controls and with security and privacy criteria to improve the ISMS. [48] mentioned that the implementation of controls based on ISO standards can meet the requirements for cybersecurity best practices. This version of the instrument meets the criteria established for its validity (KMO, Bartlett's test of sphericity). Extraction was performed by the minimum-residuals method and an oblique rotation by the promax method; after rotation, 17 of the 24 items were grouped in their corresponding factors. The final reliability of the scale was calculated with the Omega coefficient; the coefficients exceeded .70 in all dimensions, so the reliability of the instrument is good.
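The validity checks named above (Bartlett's test of sphericity and the KMO measure of sampling adequacy) can be computed directly from the item correlation matrix. The following is a minimal sketch on synthetic data with the study's dimensions (143 respondents, 24 items); the single-factor data are illustrative, not the IM-ISMS responses.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic responses: 143 respondents x 24 items with a common factor,
# so the correlation matrix is far from the identity (as in the study).
n, p = 143, 24
factor = rng.normal(size=(n, 1))
X = factor @ rng.normal(size=(1, p)) + rng.normal(size=(n, p))

R = np.corrcoef(X, rowvar=False)

# Bartlett's test of sphericity: H0 is that R is an identity matrix.
chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
dof = p * (p - 1) / 2
p_value = stats.chi2.sf(chi2, dof)

# KMO: compares observed correlations against partial correlations,
# which are obtained from the inverse correlation matrix.
Rinv = np.linalg.inv(R)
d = np.sqrt(np.outer(np.diag(Rinv), np.diag(Rinv)))
partial = -Rinv / d
mask = ~np.eye(p, dtype=bool)
kmo = (R[mask] ** 2).sum() / ((R[mask] ** 2).sum() + (partial[mask] ** 2).sum())

print(f"Bartlett chi2={chi2:.1f}, p={p_value:.3g}, KMO={kmo:.3f}")
```

A significant Bartlett statistic and a KMO well above 0.5 are the usual thresholds before proceeding to factor extraction.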
Cooperative Intelligent Transport Systems (C-ITS) create, share, and process massive amounts of data that must be managed in real time to enable new cooperative and autonomous driving applications. Vehicle-to-Everything (V2X) communications facilitate information exchange among vehicles and infrastructure using various protocols. By providing computing power, data storage, and low-latency capabilities, Multi-access Edge Computing (MEC) has become a key enabling technology in the transport industry. The Local Dynamic Map (LDM) concept has consequently been extended for use in MECs, into an efficient, collaborative, and centralised Edge Dynamic Map (EDM) for C-ITS applications. This research presents an EDM architecture for V2X communications and implements a real-time proof of concept using a Time-Series Database (TSDB) engine to store vehicular message information. The performance evaluation includes data insertion and querying, assessing the system's capacity and scalability for low-latency Cooperative Awareness Message (CAM) applications. Traffic simulations using SUMO have been employed to generate virtual routes for thousands of vehicles, demonstrating the transmission of virtual CAM messages to the EDM.
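As an illustration of the insert-and-query workload described above, here is a minimal sketch using SQLite as a stand-in for the TSDB engine; the schema, field names, and message rates are assumptions for illustration, not the paper's actual implementation.

```python
import sqlite3

# A minimal stand-in for the TSDB-backed Edge Dynamic Map: CAM records
# are appended with a timestamp index and queried over a recent window.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE cam (
    ts REAL, station_id INTEGER, lat REAL, lon REAL, speed REAL)""")
conn.execute("CREATE INDEX idx_ts ON cam(ts)")

def insert_cam(ts, station_id, lat, lon, speed):
    conn.execute("INSERT INTO cam VALUES (?, ?, ?, ?, ?)",
                 (ts, station_id, lat, lon, speed))

def recent_cams(now, window_s=1.0):
    """Return CAMs received within the last `window_s` seconds."""
    cur = conn.execute("SELECT station_id, lat, lon, speed FROM cam "
                       "WHERE ts >= ?", (now - window_s,))
    return cur.fetchall()

# Three simulated vehicles broadcasting CAMs at 10 Hz for one second.
t0 = 1000.0
for i in range(3):
    for k in range(10):
        insert_cam(t0 + k * 0.1, i, 25.0 + i, -100.0, 13.9)

print(len(recent_cams(t0 + 1.0)))  # messages seen in the last second
```

The timestamp index plays the role of the TSDB's time partitioning: both the insertion path and the sliding-window query stay cheap as the table grows.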
We investigate the error of the Euler scheme in the case when the right-hand-side function of the underlying ODE satisfies nonstandard assumptions, such as a local one-sided Lipschitz condition and local H\"older continuity. Moreover, we consider two cases regarding information availability: exact and noisy information about the right-hand-side function. An optimality analysis of the Euler scheme is also provided. Finally, we present the results of some numerical experiments.
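For concreteness, the explicit Euler scheme can be sketched as follows; the test problem $y' = -y|y|$ is a textbook example of a right-hand side that is one-sided Lipschitz but not globally Lipschitz, and is our illustrative choice, not necessarily one used in the paper.

```python
# Explicit Euler for y' = f(t, y): y_{k+1} = y_k + h * f(t_k, y_k).
def euler(f, t0, y0, T, n):
    h = (T - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
    return y

# f(t, y) = -y|y| is one-sided Lipschitz but not globally Lipschitz.
f = lambda t, y: -y * abs(y)

# For y(0) = 1 the solution stays positive and solves y' = -y^2,
# so y(t) = 1 / (1 + t) and y(1) = 0.5.
approx = euler(f, 0.0, 1.0, 1.0, 10_000)
exact = 0.5
print(abs(approx - exact))  # discretization error, O(h)
```

Halving the step size roughly halves the error here, consistent with the scheme's first-order convergence on this problem.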
Large enterprises face a crucial imperative to achieve the Sustainable Development Goals (SDGs), especially goal 13, which focuses on combating climate change and its impacts. To mitigate the effects of climate change, reducing enterprise Scope 3 (supply chain) emissions is vital, as they account for more than 90\% of total emission inventories. However, tracking Scope 3 emissions proves challenging, as data must be collected from thousands of upstream and downstream suppliers. To address the aforementioned challenges, we propose a first-of-its-kind framework that uses domain-adapted NLP foundation models to estimate Scope 3 emissions by utilizing financial transactions as a proxy for purchased goods and services. We compared the performance of the proposed framework with state-of-the-art text classification models such as TF-IDF, word2vec, and zero-shot learning. Our results show that the domain-adapted foundation model outperforms state-of-the-art text mining techniques and performs as well as a subject matter expert (SME). The proposed framework could accelerate Scope 3 estimation at enterprise scale and will help in taking appropriate climate actions to achieve SDG 13.
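The TF-IDF baseline mentioned above can be sketched with scikit-learn as follows; the transaction descriptions and spend categories are invented for illustration and do not come from the paper's data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy financial-transaction descriptions mapped to purchased-goods
# categories (each category would later map to an emission factor).
texts = ["air travel london to new york", "laptop purchase dell",
         "office paper supplies", "flight ticket berlin",
         "printer paper ream", "desktop computer lenovo"]
labels = ["travel", "it_equipment", "office_supplies",
          "travel", "office_supplies", "it_equipment"]

# TF-IDF features plus a linear classifier: the classical baseline the
# domain-adapted foundation model is compared against.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["round trip flight to madrid"]))
```

In the Scope 3 pipeline, the predicted category would be joined with spend amounts and per-category emission factors to estimate emissions per transaction.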
Interest in the network analysis of bibliographic data has increased significantly in recent years. Yet, appropriate statistical models for examining the full dynamics of scientific citation networks, connecting authors to the papers they write and papers to the other papers they cite, are not available. Very few studies have examined how the social network among co-authors and the citation network among papers shape one another and co-evolve. As a consequence, our understanding of scientific citation networks remains incomplete. In this paper, we extend recently derived relational hyperevent models (RHEM) to the analysis of scientific networks, providing a general framework for modeling the multiple dependencies involved in the relations linking multiple authors to the papers they write and papers to the multiple references they cite. We demonstrate the empirical value of our model in an analysis of publicly available data on a scientific network comprising millions of authors and papers, and we assess the relative strength of various effects explaining scientific production. We outline the implications of the model for the evaluation of scientific research.
We consider the degree-Rips construction from topological data analysis, which provides a density-sensitive, multiparameter hierarchical clustering algorithm. We analyze its stability to perturbations of the input data using the correspondence-interleaving distance, a metric for hierarchical clusterings that we introduce. Taking certain one-parameter slices of degree-Rips recovers well-known methods for density-based clustering, but we show that these methods are unstable. However, we prove that degree-Rips, as a multiparameter object, is stable, and we propose an alternative approach to taking slices of degree-Rips, which yields a one-parameter hierarchical clustering algorithm with better stability properties. Using the correspondence-interleaving distance, we prove that this algorithm is consistent. We also provide an algorithm for extracting a single clustering from a one-parameter hierarchical clustering, which is stable with respect to the correspondence-interleaving distance. Finally, we integrate these methods into a pipeline for density-based clustering, which we call Persistable. Adapting tools from multiparameter persistent homology, we propose visualization tools that guide the selection of all parameters of the pipeline. We demonstrate Persistable on benchmark datasets, showing that it identifies multi-scale cluster structure in data.
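A one-parameter slice of degree-Rips at scale s and degree threshold k can be sketched naively as follows: keep the points with at least k neighbors within distance s, then take connected components of the Rips graph on the surviving points. This is an illustrative re-implementation of the idea, not the Persistable package's code.

```python
import numpy as np

def degree_rips_slice(X, s, k):
    """Cluster points with >= k neighbors within distance s via the
    connected components of the Rips graph at scale s."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    adj = (D <= s) & ~np.eye(len(X), dtype=bool)
    keep = np.flatnonzero(adj.sum(1) >= k)   # density filter
    # Union-find over the surviving points.
    parent = {i: i for i in keep}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in keep:
        for j in keep:
            if adj[i, j]:
                parent[find(i)] = find(j)
    comp = {}
    return [comp.setdefault(find(i), len(comp)) for i in keep], keep

rng = np.random.default_rng(1)
# Two well-separated blobs plus one far-away noise point.
X = np.vstack([rng.normal(0, 0.1, (20, 2)),
               rng.normal(5, 0.1, (20, 2)),
               [[10.0, 10.0]]])
labels, keep = degree_rips_slice(X, s=0.5, k=3)
print(len(set(labels)), len(keep))  # two clusters; noise point filtered out
```

Varying s and k sweeps out the two-parameter hierarchy; the instability results above concern fixing one of these parameters in certain ways.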
The EM algorithm is a popular tool for maximum likelihood estimation but has not been widely used for high-dimensional regularization problems in linear mixed-effects models. In this paper, we introduce the EMLMLasso algorithm, which combines the EM algorithm with the popular and efficient R package glmnet for Lasso variable selection of fixed effects in linear mixed-effects models. We compare the performance of the proposed EMLMLasso algorithm with that implemented in the well-known R package glmmLasso through analyses of both simulated and real-world data. The simulations and applications demonstrate good properties of the proposed variable selection procedure, such as consistency and effectiveness, for both $p < n$ and $p > n$. Moreover, in all evaluated scenarios, the EMLMLasso algorithm outperformed glmmLasso. The proposed method is quite general and can easily be extended to ridge and elastic net penalties in linear mixed-effects models.
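The EM-plus-Lasso idea can be sketched in a highly simplified form for a random-intercept model, alternating a BLUP-style update of the group intercepts with a Lasso fit of the fixed effects. This Python sketch uses scikit-learn's Lasso in place of glmnet, holds the variance components fixed, and is illustrative only, not the EMLMLasso implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Simulate y_ij = x_ij' beta + b_i + eps with a sparse beta.
rng = np.random.default_rng(0)
n_groups, per, p = 20, 10, 15
groups = np.repeat(np.arange(n_groups), per)
X = rng.normal(size=(n_groups * per, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]                 # only 3 active effects
b_true = rng.normal(0, 1.0, n_groups)            # random intercepts
y = X @ beta_true + b_true[groups] + rng.normal(0, 0.5, len(groups))

b = np.zeros(n_groups)
lasso = Lasso(alpha=0.05)
for _ in range(20):
    # M-step analogue: Lasso on the response with intercepts removed.
    lasso.fit(X, y - b[groups])
    resid = y - lasso.predict(X)
    # E-step analogue: BLUP shrinkage with sigma^2 / sigma_b^2 = 0.25.
    shrink = 1.0 / (1.0 + 0.25 / per)
    for g in range(n_groups):
        b[g] = shrink * resid[groups == g].mean()

print(np.flatnonzero(np.abs(lasso.coef_) > 0.1))  # indices of large coefficients
```

The full algorithm would also update the variance components inside the EM loop; fixing them here keeps the sketch short.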
The paper presents an error analysis of an Eulerian finite element method for solving a linearized Navier--Stokes problem in a time-dependent domain. The domain's evolution is assumed to be known and independent of the solution to the problem at hand. The numerical method combines a standard Backward Differentiation Formula (BDF)-type time-stepping procedure with a geometrically unfitted finite element discretization technique, and Nitsche's method is utilized to enforce the boundary conditions. A convergence estimate is derived for several inf-sup stable velocity--pressure elements. The estimate demonstrates optimal-order convergence in the energy norm for the velocity component and in a scaled $L^2(H^1)$-type norm for the pressure component.
We consider the information fiber-optic channel modeled by the nonlinear Schr\"odinger equation with additive Gaussian noise. Using the path-integral approach and perturbation theory in the small dimensionless second-dispersion parameter associated with the input signal bandwidth, we calculate the conditional probability density functional in the leading and next-to-leading order in this parameter. Taking into account the specific filtering of the output signal by the receiver, we calculate the mutual information in the leading and next-to-leading order in the dispersion parameter and in the leading order in the signal-to-noise ratio (SNR). Further, we find an explicit expression for the mutual information in the case of a modified Gaussian input signal distribution, taking into account the limited frequency bandwidth of the input signal.
Graph Neural Networks (GNNs) are becoming increasingly popular due to their superior performance on critical graph-related tasks. While quantization is widely used to accelerate GNN computation, quantized training faces unprecedented challenges. Current quantized GNN training systems often take longer to train than their full-precision counterparts for two reasons: (i) addressing the accuracy challenge leads to excessive overhead, and (ii) the optimization potential exposed by quantization is not adequately leveraged. This paper introduces Tango, which rethinks the challenges and opportunities of quantization for graph neural network training on GPUs, with three contributions. First, we introduce efficient rules to maintain accuracy during quantized GNN training. Second, we design and implement quantization-aware primitives and inter-primitive optimizations that speed up GNN training. Finally, we integrate Tango with the popular Deep Graph Library (DGL) system and demonstrate its superior performance over state-of-the-art approaches on various GNN models and datasets.
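As background, a generic symmetric int8 quantize/dequantize step, of the kind applied to tensors in quantized training, can be sketched as follows; Tango's actual accuracy-maintaining rules and quantization-aware primitives are more involved than this illustration.

```python
import numpy as np

def quantize(x):
    """Symmetric per-tensor int8 quantization: map [-max|x|, max|x|]
    onto the integer range [-127, 127]."""
    scale = max(np.abs(x).max() / 127.0, 1e-12)
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Quantize a small feature tensor and measure the round-trip error.
x = np.random.default_rng(0).normal(size=(4, 8)).astype(np.float32)
q, s = quantize(x)
err = np.abs(dequantize(q, s) - x).max()
print(err <= s)  # round-trip error stays within one quantization step
```

The appeal for training systems is that the int8 payload is a quarter the size of float32, cutting memory traffic between primitives; the challenge, as noted above, is keeping this error from degrading accuracy without introducing overhead that erases the speedup.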
Recent advances in 3D fully convolutional networks (FCN) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from the large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital that includes 150 CT scans, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve a significantly higher performance in small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download at https://github.com/holgerroth/3Dunet_abdomen_cascade.
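The coarse-to-fine step can be sketched as follows: the stage-one mask defines a bounding box (with a safety margin), and only that sub-volume is passed to the second-stage FCN. The volume sizes and margin here are illustrative, not the paper's settings.

```python
import numpy as np

def candidate_region(mask, margin=2):
    """Bounding-box slices around the nonzero voxels of a coarse mask,
    padded by `margin` voxels and clipped to the volume bounds."""
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(0) - margin, 0)
    hi = np.minimum(idx.max(0) + 1 + margin, mask.shape)
    return tuple(slice(a, b) for a, b in zip(lo, hi))

volume = np.zeros((64, 64, 64), dtype=np.float32)
coarse_mask = np.zeros_like(volume, dtype=bool)
coarse_mask[20:30, 25:35, 30:40] = True        # stage-1 candidate voxels

roi = candidate_region(coarse_mask)
sub = volume[roi]                              # input for the stage-2 FCN
print(sub.size / volume.size)                  # fraction of voxels retained
```

Only about 1% of the voxels survive the crop in this toy example, which is the mechanism behind the ~10% reduction reported above: the second network spends its capacity on a tight neighborhood of the organ instead of the whole scan.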