
This paper introduces a value-driven cybersecurity innovation framework for the transportation and infrastructure sectors, as opposed to the traditional market-centric approaches that have dominated the field. Recontextualizing innovation categories into sustaining, incremental, disruptive, and transformative, we aim to foster a culture of self-innovation within organizations, enabling a strategic focus on cybersecurity measures that directly contribute to business value and strategic goals. This approach primarily enhances the operational effectiveness and efficiency of cyber defences, while also aligning cybersecurity initiatives with mission-critical objectives. We detail a practical method for evaluating the business value of cybersecurity innovations and present a pragmatic approach for organizations to funnel innovative ideas in a structured and repeatable manner. The framework is designed to reinforce cybersecurity capabilities against an evolving cyber threat landscape while maintaining infrastructural integrity. By shifting the focus from general market appeal to sector-specific needs, our framework provides cybersecurity leaders with the strategic cyber-foresight necessary for prioritizing impactful initiatives, thereby making cybersecurity a core business enabler rather than a burden.

Related Content

The journal Artificial Intelligence in Medicine (AIM) publishes original articles from multiple disciplines on the theory and practice of artificial intelligence in medicine, medically oriented human biology, and health care. Artificial intelligence in medicine can be described as the scientific discipline concerned with research, projects, and applications that aim to support decision-based medical tasks through knowledge-based or data-intensive computational solutions, ultimately supporting and improving the performance of human care providers. Official website:

This paper investigates the positioning of the pilot symbols, as well as the power distribution between the pilot and the communication symbols, in the orthogonal time frequency space (OTFS) modulation scheme. We analyze the pilot placements that minimize the mean squared error (MSE) in estimating the channel taps. In addition, we optimize the average channel capacity by adjusting the power balance between pilot and data, and we show that this leads to a significant increase in average capacity. The results provide valuable guidance for designing the OTFS parameters to achieve maximum capacity. Numerical simulations are performed to validate the findings.
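As a rough illustration of the pilot/data power-balance trade-off described above, the sketch below sweeps the fraction of frame power assigned to an embedded pilot and evaluates an approximate average capacity. The channel-estimation and effective-SNR model is a deliberately simplified assumption for intuition only, not the paper's OTFS derivation; all parameter names (`alpha`, `snr`, `n_taps`, `frame_len`) are hypothetical.

```python
import numpy as np

# Toy pilot/data power trade-off in a pilot-aided scheme (hypothetical model).
# alpha : fraction of total frame power assigned to the pilot symbol
# snr   : per-symbol SNR budget (linear)
def avg_capacity(alpha, snr, n_taps=4, frame_len=64):
    pilot_snr = alpha * snr * frame_len          # pilot energy concentrated on one symbol
    data_snr = (1.0 - alpha) * snr
    mse = n_taps / (n_taps + pilot_snr)          # crude LMMSE-style estimation error
    eff_snr = data_snr * (1.0 - mse) / (1.0 + data_snr * mse)  # estimation error acts as noise
    overhead = 1.0 / frame_len                   # one pilot symbol per frame
    return (1.0 - overhead) * np.log2(1.0 + eff_snr)

alphas = np.linspace(0.01, 0.99, 99)
caps = [avg_capacity(a, snr=10.0) for a in alphas]
best = alphas[int(np.argmax(caps))]
print(f"best pilot power fraction ~ {best:.2f}, capacity ~ {max(caps):.2f} bit/s/Hz")
```

Even in this toy model the capacity is maximized at an interior power split: too little pilot power degrades channel estimation, too much starves the data symbols.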

This paper proposes a novel reinforcement learning (RL) framework for credit underwriting that tackles contextual challenges which traditional approaches fail to generalize across. We adapt RL principles for credit scoring, incorporating action-space renewal and multi-choice actions. Our work demonstrates that the traditional underwriting approach corresponds to the RL greedy strategy. We introduce two new RL-based credit underwriting algorithms to enable more informed decision-making. Simulations show these new approaches outperform the traditional method when the data aligns with the model's assumptions; in more complex situations, however, model limitations emerge, underscoring the importance of powerful machine learning models for optimal performance. Future research directions include exploring more sophisticated models alongside efficient exploration mechanisms.
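To give intuition for why a purely greedy policy (the analogue of traditional underwriting noted above) can underperform an exploring RL policy, here is a minimal bandit-style toy. The applicant model, reward scheme, and bucketed estimator are hypothetical assumptions for illustration; this is not one of the paper's two algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy underwriting bandit: each applicant has a feature x in (-1, 1);
# true repayment probability = sigmoid(2x - 0.5).
# Action approve: reward +1 if repaid, -1 if default. Action deny: reward 0.
def true_repay_prob(x):
    return 1.0 / (1.0 + np.exp(-(2.0 * x - 0.5)))

def run(epsilon, n=5000):
    counts = np.ones(2)          # approved applicants observed per score bucket
    repaid = np.ones(2) * 0.5    # neutral prior on the repayment rate
    total = 0.0
    for _ in range(n):
        x = rng.uniform(-1, 1)
        b = int(x > 0)                      # coarse score bucket
        approve = repaid[b] / counts[b] > 0.5   # greedy rule
        if rng.random() < epsilon:              # occasional exploration
            approve = rng.random() < 0.5
        if approve:                              # outcomes observed only if approved
            r = 1.0 if rng.random() < true_repay_prob(x) else -1.0
            counts[b] += 1
            repaid[b] += (r + 1) / 2
            total += r
    return total / n

# With no exploration the estimates never move off the prior and nothing is
# ever approved; a little exploration lets the estimates improve.
print("greedy avg reward     :", round(run(0.0), 3))
print("eps-greedy avg reward :", round(run(0.1), 3))
```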

Ransomware presents a significant and increasing threat to individuals and organizations, encrypting their systems and withholding access until a large ransom is paid. To bolster preparedness against potential attacks, organizations commonly conduct red teaming exercises, which involve simulated attacks to assess existing security measures. This paper proposes a novel approach utilizing reinforcement learning (RL) to simulate ransomware attacks. By training an RL agent in a simulated environment mirroring real-world networks, effective attack strategies can be learned quickly, significantly streamlining traditional, manual penetration testing processes. The attack pathways revealed by the RL agent can provide valuable insights to the defense team, helping them identify network weak points and develop more resilient defensive measures. Experimental results on a 152-host example network confirm the effectiveness of the proposed approach, demonstrating the RL agent's capability to discover and orchestrate attacks on high-value targets while evading honeyfiles (decoy files strategically placed to detect unauthorized access).
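A minimal sketch of the idea, assuming a tabular Q-learning agent on a toy six-host network with one high-value target and one honeyfile host. The topology, rewards, and hyperparameters are invented for illustration and bear no relation to the paper's 152-host environment or its agent architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy attack-path environment: 6 hosts, host 5 is the high-value target (+10),
# host 3 holds a honeyfile decoy (-10 and the episode ends if touched).
adj = {0: [1, 2], 1: [0, 3, 4], 2: [0, 4], 3: [1], 4: [1, 2, 5], 5: [4]}
TARGET, HONEYPOT = 5, 3

Q = np.zeros((6, 6))             # Q[state, action]; action = host to pivot to
alpha, gamma, eps = 0.1, 0.95, 0.2

for episode in range(2000):
    s = 0                                    # initial foothold
    for _ in range(20):
        moves = adj[s]
        if rng.random() < eps:
            a = rng.choice(moves)                        # explore
        else:
            a = moves[int(np.argmax(Q[s, moves]))]       # exploit
        r = 10.0 if a == TARGET else (-10.0 if a == HONEYPOT else -0.1)
        done = a in (TARGET, HONEYPOT)
        td_target = r if done else r + gamma * Q[a].max()
        Q[s, a] += alpha * (td_target - Q[s, a])
        s = a
        if done:
            break

# Greedy rollout of the learned attack path from the foothold host 0:
# a shortest path to host 5 that avoids the honeyfile on host 3.
s, path = 0, [0]
while s != TARGET and len(path) < 10:
    s = adj[s][int(np.argmax(Q[s, adj[s]]))]
    path.append(s)
print("learned attack path:", path)
```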

As speech synthesis systems have made remarkable advances in recent years, the importance of robust deepfake detection systems that perform well against unseen synthesis systems has grown. In this paper, we propose a novel adaptive centroid shift (ACS) method that updates the centroid representation by continually shifting it as the weighted average of bonafide representations. Our approach uses only bonafide samples to define the centroid, yielding a specialized centroid for one-class learning. Integrating ACS with one-class learning gathers bonafide representations into a single cluster, forming well-separated embeddings that are robust to unseen spoofing attacks. Our proposed method achieves an equal error rate (EER) of 2.19% on the ASVspoof 2021 deepfake dataset, outperforming all existing systems. Furthermore, t-SNE visualization illustrates that our method effectively maps the bonafide embeddings into a single cluster and successfully disentangles the bonafide and spoof classes.
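A minimal sketch of the centroid-update idea, assuming the centroid is maintained as a running, count-weighted average of L2-normalized bonafide embeddings and that test utterances are scored by cosine similarity to it. The weighting scheme and scoring rule here are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

class AdaptiveCentroid:
    """Running weighted-average centroid of bonafide embeddings (sketch)."""
    def __init__(self, dim):
        self.centroid = np.zeros(dim)
        self.count = 0

    def update(self, bonafide_emb):
        # bonafide_emb: (n, dim) array containing bonafide embeddings only
        emb = bonafide_emb / np.linalg.norm(bonafide_emb, axis=1, keepdims=True)
        n = emb.shape[0]
        self.centroid = (self.count * self.centroid + emb.sum(axis=0)) / (self.count + n)
        self.count += n

    def score(self, emb):
        # cosine similarity to the centroid; low similarity suggests a spoof
        c = self.centroid / (np.linalg.norm(self.centroid) + 1e-12)
        e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
        return e @ c

ac = AdaptiveCentroid(dim=128)
ac.update(np.random.randn(32, 128))        # bonafide training batch only
print(ac.score(np.random.randn(4, 128)))   # scores for test utterances
```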

This paper presents novel techniques for improving the error correction performance and reducing the complexity of coarsely quantized 5G-LDPC decoders. The proposed decoder design supports arbitrary message-passing schedules on a base-matrix level by modeling exchanged messages with entry-specific discrete random variables. Variable nodes (VNs) and check nodes (CNs) involve compression operations designed using the information bottleneck method to maximize preserved mutual information between code bits and quantized messages. We introduce alignment regions that assign the messages to groups with aligned reliability levels to decrease the number of individual design parameters. Group compositions with degree-specific separation of messages improve performance by up to 0.4 dB. Further, we generalize our recently proposed CN-aware quantizer design to irregular LDPC codes and layered schedules. The method optimizes the VN quantizer to maximize preserved mutual information at the output of the subsequent CN update, enhancing performance by up to 0.2 dB. A schedule optimization modifies the order of layer updates, reducing the average iteration count by up to 35 %. We integrate all new techniques in a rate-compatible decoder design by extending the alignment regions along a rate-dimension. Our complexity analysis for 2-bit decoding estimates up to 64 % higher throughput versus 4-bit decoding at similar performance.
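As a loose illustration of the quantizer-design objective mentioned above (maximizing the mutual information preserved between code bits and coarsely quantized messages), the sketch below grid-searches a symmetric threshold for a 2-bit quantizer of the received BPSK value over an AWGN channel. This is an assumption-laden stand-in for the information bottleneck construction in the paper; the channel model, threshold parameterization, and numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# 2-bit quantization of the channel output y ~ N(x, sigma^2) for BPSK bits
# x in {-1, +1}; choose the threshold that maximizes I(X; Q).
def mutual_info_2bit(threshold, sigma):
    edges = np.array([-np.inf, -threshold, 0.0, threshold, np.inf])
    cond = np.zeros((2, 4))
    for i, x in enumerate([-1.0, 1.0]):                 # equiprobable code bits
        cond[i] = np.diff(norm.cdf(edges, loc=x, scale=sigma))  # P(Q = q | X = x)
    p_q = 0.5 * cond.sum(axis=0)
    mi = 0.0
    for i in range(2):
        for q in range(4):
            if cond[i, q] > 0:
                mi += 0.5 * cond[i, q] * np.log2(cond[i, q] / p_q[q])
    return mi

sigma = 0.8
ts = np.linspace(0.05, 3.0, 60)
best_t = ts[int(np.argmax([mutual_info_2bit(t, sigma) for t in ts]))]
print(f"best 2-bit threshold ~ {best_t:.2f}, I ~ {mutual_info_2bit(best_t, sigma):.3f} bits")
```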

Accurate estimation of queuing delays is crucial for designing and optimizing communication networks, particularly in the context of Deterministic Networking (DetNet) scenarios. This study investigates the approximation of Internet queuing delays using an M/M/1 envelope model, which provides a simple methodology to find tight upper bounds of real delay percentiles. Real traffic statistics collected at large Internet Exchange Points (such as those in Amsterdam and San Francisco) have been used to fit polynomial regression models for transforming packet queuing delays into the M/M/1 envelope models. Finally, we propose a methodology for providing delay percentiles in DetNet scenarios where tight latency guarantees need to be assured.
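For intuition, the closed-form percentile of the M/M/1 sojourn time is what makes the envelope convenient as an upper bound: once a service rate and utilization have been fitted, any delay percentile follows in one line. The sketch below computes such a percentile; the numerical parameters are placeholders, not values fitted from the paper's Internet Exchange Point traces.

```python
import math

def mm1_delay_percentile(mu, rho, p):
    """p-th percentile of the M/M/1 sojourn time T, where P(T > t) = exp(-mu*(1-rho)*t)."""
    if not (0 < rho < 1):
        raise ValueError("utilization must satisfy 0 < rho < 1")
    return -math.log(1.0 - p) / (mu * (1.0 - rho))

# e.g. service rate 1e5 packets/s, utilization 0.7, 99.9th percentile (placeholder values)
print(f"{mm1_delay_percentile(mu=1e5, rho=0.7, p=0.999) * 1e6:.1f} microseconds")
```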

This paper introduces a novel framework for matrix diagonalization, recasting it as a sequential decision-making problem and applying the power of Decision Transformers (DTs). Our approach determines optimal pivot selection during diagonalization with the Jacobi algorithm, leading to significant speedups compared to the traditional max-element Jacobi method. To bolster robustness, we integrate an epsilon-greedy strategy, enabling success in scenarios where deterministic approaches fail. This work demonstrates the effectiveness of DTs in complex computational tasks and highlights the potential of reimagining mathematical operations through a machine learning lens. Furthermore, we establish the generalizability of our method by using transfer learning to diagonalize matrices smaller than those seen during training.
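For reference, the baseline the learned pivot policy is compared against is the classical Jacobi eigenvalue method with max-element pivoting. The sketch below implements that baseline only; a learned policy such as the paper's Decision Transformer would replace the pivot-selection line, and the matrix size and tolerances here are illustrative assumptions.

```python
import numpy as np

def jacobi_max_element(A, tol=1e-10, max_rotations=10000):
    """Diagonalize a symmetric matrix with Jacobi rotations, max-element pivoting."""
    A = A.copy().astype(float)
    n = A.shape[0]
    for _ in range(max_rotations):
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)   # pivot: largest off-diagonal
        if off[p, q] < tol:
            break
        theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p], J[q, q], J[p, q], J[q, p] = c, c, s, -s
        A = J.T @ A @ J                                       # one Jacobi rotation
    return np.sort(np.diag(A))

M = np.random.randn(5, 5)
M = (M + M.T) / 2
print(np.allclose(jacobi_max_element(M), np.sort(np.linalg.eigvalsh(M))))
```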

This paper proposes a generative model to detect change points in time series of graphs. The proposed framework consists of learnable prior distributions for low-dimensional graph representations and a decoder that can generate graphs from the latent representations. The informative prior distributions in the latent spaces are learned from the observed data in an empirical Bayes fashion, and the expressive power of the generative model is exploited to assist multiple change point detection. Specifically, the model parameters are learned via maximum approximate likelihood, with a Group Fused Lasso regularization on the prior parameters. The optimization problem is then solved via the Alternating Direction Method of Multipliers (ADMM), and Langevin Dynamics are employed for posterior inference. Experiments on both simulated and real data demonstrate the ability of the generative model to support change point detection with good performance.
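A minimal sketch of the Group Fused Lasso mechanism that drives the change point detection: penalizing the l2 norm of consecutive differences of the per-time prior parameters keeps them piecewise constant, and change points are read off wherever the difference norm is non-zero. The solver, data, and thresholds in the paper are of course different; this toy only shows the penalty and the read-out step.

```python
import numpy as np

def group_fused_lasso_penalty(theta, lam):
    # theta: (T, d) prior parameters over T time steps
    diffs = np.linalg.norm(np.diff(theta, axis=0), axis=1)   # ||theta_{t+1} - theta_t||_2
    return lam * diffs.sum(), diffs

def change_points(theta, eps=1e-6):
    _, diffs = group_fused_lasso_penalty(theta, lam=1.0)
    return np.where(diffs > eps)[0] + 1      # time indices where the prior shifts

# toy piecewise-constant prior parameters with a single shift at t = 5
theta = np.vstack([np.zeros((5, 3)), np.ones((5, 3))])
print(change_points(theta))                  # -> [5]
```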

Intelligent transportation systems play a crucial role in modern traffic management and optimization, greatly improving traffic efficiency and safety. With the rapid development of generative artificial intelligence (generative AI) technologies in the fields of image generation and natural language processing, generative AI has also come to play a crucial role in addressing key issues in intelligent transportation systems, such as data sparsity, the difficulty of observing abnormal scenarios, and the modeling of data uncertainty. In this review, we systematically investigate the relevant literature on generative AI techniques for addressing key issues across different types of tasks in intelligent transportation systems. First, we introduce the principles of different generative AI techniques and their potential applications. Then, we classify tasks in intelligent transportation systems into four types: traffic perception, traffic prediction, traffic simulation, and traffic decision-making. We systematically illustrate how generative AI techniques address key issues in these four types of tasks. Finally, we summarize the challenges faced in applying generative AI to intelligent transportation systems and discuss future research directions for different application scenarios.

We describe ACE0, a lightweight platform for evaluating the suitability and viability of AI methods for behaviour discovery in multi-agent simulations. Specifically, ACE0 was designed to explore AI methods for multi-agent simulations used in operations research studies related to new technologies such as autonomous aircraft. Simulation environments used in production are often high-fidelity and complex, require significant domain knowledge, and as a result have high R&D costs. Minimal and lightweight simulation environments can help researchers and engineers evaluate the viability of new AI technologies for behaviour discovery in a more agile and potentially more cost-effective manner. In this paper we describe the motivation for the development of ACE0. We provide a technical overview of the system architecture, describe a case study of behaviour discovery in the aerospace domain, and provide a qualitative evaluation of the system. The evaluation includes a brief description of collaborative research projects with academic partners exploring different AI behaviour discovery methods.
