
This paper proposes a novel Self-Supervised Intrusion Detection (SSID) framework, which enables a fully online Machine Learning (ML) based Intrusion Detection System (IDS) that requires no human intervention or prior offline learning. The proposed framework analyzes and labels incoming traffic packets based only on the decisions of the IDS itself, made using an Auto-Associative Deep Random Neural Network, together with an online estimate of their statistically measured trustworthiness. The SSID framework enables the IDS to adapt rapidly to the time-varying characteristics of network traffic and eliminates the need for offline data collection. This approach avoids human errors in data labeling, as well as the human labor and computational costs of model training and data collection. The approach is experimentally evaluated on public datasets and compared with well-known ML models, showing that the SSID framework is accurate and advantageous as an online-learning ML-based IDS for IoT systems.
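The self-labeling idea above can be illustrated with a deliberately simplified caricature: a model labels incoming traffic with its own decision, but only when an estimate of its trustworthiness exceeds a gate. The single-feature logistic model, warm-up prior, and confidence-as-trust gate below are illustrative assumptions, not the paper's Auto-Associative Deep Random Neural Network or its trust statistic.

```python
import math
import random

def ids_score(x, w):
    """Toy one-feature logistic 'IDS' standing in for the paper's model."""
    return 1.0 / (1.0 + math.exp(-w * x))

w = 0.5                    # weak prior, e.g. from a brief warm-up period (assumption)
lr, trust = 0.1, 0.8       # learning rate; trustworthiness gate (assumption)
random.seed(0)

for _ in range(500):
    x = random.uniform(-3.0, 3.0)        # incoming packet feature
    p = ids_score(x, w)
    if max(p, 1.0 - p) >= trust:         # self-label only trusted decisions
        y = 1 if p >= 0.5 else 0         # the IDS's own decision becomes the label
        w += lr * (y - p) * x            # online logistic update, no human labels
```

The gate is what keeps the loop from reinforcing uncertain decisions: updates occur only where the model's current decision boundary is already confident.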

Related Content

This paper proposes a novel Attention-based Encoder-Decoder network for End-to-End Neural speaker Diarization (AED-EEND). In the AED-EEND system, we incorporate the target-speaker enrollment information used in target-speaker voice activity detection (TS-VAD) to calculate the attractor, which mitigates the speaker permutation problem and facilitates model convergence. During training, we propose a teacher-forcing strategy that obtains the enrollment information from the ground-truth labels. Furthermore, we propose three heuristic decoding methods to identify the enrollment area for each speaker during evaluation. Additionally, we enhance the LSTM-based attractor calculation network used in the end-to-end encoder-decoder based attractor calculation (EEND-EDA) system by incorporating an attention-based model. By utilizing this attention-based attractor decoder, our proposed AED-EEND system outperforms both the EEND-EDA and TS-VAD systems with only 0.5 s of enrollment data.
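The attention-based attractor computation can be sketched in miniature: an enrollment-derived query attends over frame embeddings to pool a speaker attractor, whose dot product with each frame gives that speaker's activity. Everything here (dimensions, the mean-pooled enrollment summary, single-head attention) is an illustrative assumption, not the AED-EEND architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 200, 32                            # frames, embedding dimension (assumed)
frames = rng.standard_normal((T, d))      # stand-in for encoder outputs
enroll = frames[:25].mean(axis=0)         # summary of a short enrollment segment

# Single-head attention: the enrollment query attends over all frames.
scores = frames @ enroll / np.sqrt(d)
weights = np.exp(scores - scores.max())
weights /= weights.sum()
attractor = weights @ frames              # attention-pooled speaker attractor

# Frame-level activity for this speaker: dot product followed by a sigmoid.
activity = 1.0 / (1.0 + np.exp(-(frames @ attractor)))
```

The contrast with an LSTM-based attractor decoder is that the pooling weights here depend directly on frame-enrollment similarity rather than on a recurrent state.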

We introduce a new, open-source computational general relativity framework for the Wolfram Language called Gravitas, which boasts a number of novel and distinctive features compared with the many pre-existing computational and numerical relativity frameworks available in the open-source community. These include, but are not limited to: seamless integration of its powerful symbolic and numerical subsystems, and, by extension, seamless transition between analytic/continuous representations and numerical/discrete representations of arbitrary spacetime geometries; highly modular, general and extensible representations of spacetime geometries, spacetime topologies, gauge conditions, coordinate systems, matter fields, evolution equations and initial data; the ability to set up and run complex numerical relativity simulations, and to perform 2D and 3D visualizations, symbolic computations and numerical analysis (including the extraction of gravitational wave signals) on the resulting data, all from within a single notebook environment; and a totally unstructured adaptive refinement scheme based on hypergraph rewriting, allowing for exceedingly efficient discretization and numerical evolution of Cauchy initial data for a wide range of challenging computational problems involving strong relativistic field dynamics. In this first of a two-article series covering the framework, we focus on the design and capabilities of Gravitas's symbolic subsystem, including its general and flexible handling of arbitrary geometries parametrized by arbitrary curvilinear coordinate systems (along with an in-built library of standard metrics and coordinate conditions), as well as its various high-level tensor calculus and differential geometry features. We proceed to show how this subsystem can be used to solve the Einstein field equations both analytically and numerically.

There is an emerging effort to combine two popular 3D reconstruction approaches, Multi-View Stereo (MVS) and Neural Implicit Surfaces (NIS), with a specific focus on the few-shot / sparse-view setting. In this paper, we introduce a novel integration scheme that combines multi-view stereo with neural signed distance function representations, which potentially overcomes the limitations of both methods: MVS uses per-view depth estimation and cross-view fusion to generate accurate surfaces, while NIS relies on a common coordinate volume. Based on this strategy, we propose to construct a per-view cost frustum for finer geometry estimation, and then fuse cross-view frustums and estimate implicit signed distance functions to tackle artifacts due to noise and holes in the produced surface reconstruction. We further apply a cascade frustum fusion strategy to effectively capture global-local information and structural consistency. Finally, we apply cascade sampling and a pseudo-geometric loss to foster stronger integration between the two architectures. Extensive experiments demonstrate that our method reconstructs robust surfaces and outperforms existing state-of-the-art methods.

This paper proposes an analysis of the prospects of the cyber security industry and educational ecosystems in four Southeast Asian countries, namely Vietnam, Singapore, Malaysia, and Indonesia, which are along the Maritime Silk Road, by using two novel metrics: the "Cybersecurity Education Prospects Index" (CEPI) and the "Cybersecurity Industry Prospects Index" (CIPI). The CEPI evaluates the state of cybersecurity education by assessing the availability and quality of cybersecurity degrees together with their ability to attract new students. On the other hand, the CIPI measures the potential for the cybersecurity industry's growth and development by assessing the talent pool needed to build and sustain its growth. Ultimately, this study emphasizes the vital importance of a healthy cybersecurity ecosystem where education is responsible for supporting the industry to ensure the security and reliability of commercial operations in these countries against a complex and evolving cyber threat landscape.

The travelling salesman problem (TSP) is one of the most well-studied NP-hard problems in the literature. The state-of-the-art inexact TSP solvers are the Lin-Kernighan-Helsgaun (LKH) heuristic and Edge Assembly Crossover (EAX). A recent study suggests that EAX with restart mechanisms performs well on a wide range of TSP instances; however, that study is limited to problems of up to 2,000 cities. We study problems ranging from 2,000 to 85,900 cities and find that solver performance varies with the type of problem. By combining these solvers in an ensemble setup, however, we are able to outperform each individual solver. We see the ensemble setup as an efficient way to make use of abundant compute resources. In addition to EAX and LKH, we use several versions of a hybrid of EAX and the Mixing Genetic Algorithm (MGA); such a hybrid is known to solve some hard instances. The ensemble including the hybrid versions outperforms the state-of-the-art solvers on problems larger than 10,000 cities.
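The ensemble (algorithm-portfolio) idea can be sketched with toy solvers: run each solver on the same instance and keep the best tour found. The two heuristics below are deliberately simple stand-ins, not EAX, LKH, or MGA.

```python
import math
import random

def tour_length(tour, pts):
    """Total length of a closed tour over 2D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(pts):
    """Greedy construction: always visit the closest unvisited city."""
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def random_restart(pts, tries=50, seed=0):
    """Baseline: best of several random permutations."""
    rng = random.Random(seed)
    best = list(range(len(pts)))
    for _ in range(tries):
        cand = best[:]
        rng.shuffle(cand)
        if tour_length(cand, pts) < tour_length(best, pts):
            best = cand
    return best

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(30)]
solvers = {"NN": nearest_neighbour, "RR": random_restart}
results = {name: tour_length(s(pts), pts) for name, s in solvers.items()}
ensemble_best = min(results.values())     # portfolio keeps the best tour
```

With abundant compute, the solvers can run in parallel, so the portfolio's wall-clock time is that of a single solver while its solution quality is the per-instance best.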

We give novel Python and R interfaces for the (Java) Tetrad project for causal modeling, search, and estimation. The Tetrad project is a mainstay in the literature, having been under consistent development for over 30 years. Some of its algorithms are now classics, like PC and FCI; others are recent developments. It is increasingly the case, however, that researchers need to access the underlying Java code from Python or R. Existing methods for doing this are inadequate. We provide new, up-to-date methods using the JPype Python-Java interface and the Reticulate Python-R interface, directly solving these issues. With the addition of some simple tools and the provision of working examples for both Python and R, using JPype and Reticulate to interface Python and R with Tetrad is straightforward and intuitive.

This paper proposes a non-linear Model Predictive Contouring Control (MPCC) for obstacle avoidance in automated vehicles driven at the limit of handling. The proposed controller integrates motion planning, path tracking and vehicle stability objectives, prioritising obstacle avoidance in emergencies. The controller's prediction model is a non-linear single-track vehicle model with the Fiala tyre model to capture the vehicle's non-linear behaviour. The MPCC computes the optimal steering angle and brake torques to minimise tracking error in safe situations and maximise the vehicle-to-obstacle distance in emergencies. Furthermore, the MPCC is extended with the tyre friction circle to fully exploit the vehicle's manoeuvrability and stability. The MPCC controller is tested on real-time rapid prototyping hardware to demonstrate its real-time capability, and its performance is compared with a state-of-the-art Model Predictive Control (MPC) approach in a high-fidelity simulation environment. The double lane change scenario results demonstrate a significant improvement in successfully avoiding obstacles and maintaining vehicle stability.

This paper introduces a novel approach to detour management in Urban Air Traffic Management (UATM) using knowledge representation and reasoning. It aims to understand the complexities and requirements of UAM detours, enabling a method that quickly identifies safe and efficient routes in a carefully sampled environment. The method, implemented in Answer Set Programming, uses non-monotonic reasoning and a two-phase conversation between a human manager and the UATM system, considering factors such as safety and potential impacts. The robustness and efficacy of the proposed method were validated through several queries from two simulation scenarios, contributing to the symbiosis of human knowledge and advanced AI techniques. The paper comprises an introduction citing relevant studies, the problem formulation, the proposed solution, discussion, and concluding remarks.

Randomized Controlled Trials (RCTs) often adjust for baseline covariates in order to increase power. This technical note provides a short derivation of a simple rule of thumb for approximating the ratio of the power of an adjusted analysis to that of an unadjusted analysis. Specifically, if the unadjusted analysis is powered to approximately 80\%, then the ratio of the power of the adjusted analysis to the power of the unadjusted analysis is approximately $1 + \frac{1}{2} R^2$, where $R$ is the correlation between the baseline covariate and the outcome.
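The rule of thumb can be checked numerically under the standard normal approximation to a two-arm comparison: adjusting for a covariate with correlation $R$ shrinks the residual outcome standard deviation by $\sqrt{1-R^2}$, which inflates the standardized effect by the same factor. The 80% baseline follows the note; the normal-approximation setup below is a generic sketch, not the note's derivation.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

Z_ALPHA = 1.959964  # two-sided 5% critical value
Z_BETA = 0.841621   # quantile giving 80% unadjusted power

def power_ratio(r):
    """Power of adjusted vs unadjusted analysis when the latter has 80% power."""
    z_eff = Z_ALPHA + Z_BETA                        # unadjusted standardized effect
    adj_power = phi(z_eff / math.sqrt(1 - r**2) - Z_ALPHA)
    return adj_power / 0.80

for r in (0.3, 0.5):
    print(r, round(power_ratio(r), 3), round(1 + r**2 / 2, 3))
```

For moderate $R$ the exact ratio and the $1 + \frac{1}{2}R^2$ approximation agree to within a few tenths of a percent, which is the sense in which the rule of thumb holds near 80% power.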

This paper presents a comprehensive and practical guide for practitioners and end-users working with Large Language Models (LLMs) in their downstream natural language processing (NLP) tasks. We provide discussions and insights into the usage of LLMs from the perspectives of models, data, and downstream tasks. Firstly, we offer an introduction and brief summary of current GPT- and BERT-style LLMs. Then, we discuss the influence of pre-training data, training data, and test data. Most importantly, we provide a detailed discussion of the use and non-use cases of large language models for various natural language processing tasks, such as knowledge-intensive tasks, traditional natural language understanding tasks, natural language generation tasks, emergent abilities, and considerations for specific tasks. We present various use cases and non-use cases to illustrate the practical applications and limitations of LLMs in real-world scenarios. We also try to understand the importance of data and the specific challenges associated with each NLP task. Furthermore, we explore the impact of spurious biases on LLMs and delve into other essential considerations, such as efficiency, cost, and latency, to ensure a comprehensive understanding of deploying LLMs in practice. This comprehensive guide aims to provide researchers and practitioners with valuable insights and best practices for working with LLMs, thereby enabling the successful implementation of these models in a wide range of NLP tasks. A curated list of practical guide resources for LLMs, regularly updated, can be found at \url{//github.com/Mooler0410/LLMsPracticalGuide}.
