
Cellular communication technologies such as 5G are deployed on a large scale around the world. Compared to other communication technologies such as WiFi, Bluetooth, or Ultra Wideband, the 5G communication standard supports a large variety of use cases, e.g., Internet of Things, vehicular, industrial, and campus-wide communications. An organization can operate a Private 5G network to provide connectivity to devices in its manufacturing environment. Physical Layer Key Generation (PLKG) is a method to generate a symmetric secret on two nodes despite the presence of a potential passive eavesdropper. To the best of our knowledge, this work is one of the first to implement PLKG in a real Private 5G network, and it thereby highlights the possibility of integrating PLKG into a communication technology that is highly relevant for industrial applications. This paper exemplifies the establishment of a long-term symmetric key between an aerial vehicle and IT infrastructure, both located in a manufacturing environment and communicating via the radio interface of the Private 5G network.
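As an illustration of the core PLKG mechanism, the following minimal Python sketch quantizes reciprocal channel measurements (e.g., RSSI samples) into key bits at two nodes. The guard band, noise levels, and variable names are illustrative assumptions; a real deployment would add information reconciliation and privacy amplification, which are omitted here.

```python
import numpy as np

def key_bits(meas, guard=0.5):
    """Threshold channel measurements into candidate key bits.
    Samples inside a guard band around the mean are flagged unreliable."""
    meas = np.asarray(meas, dtype=float)
    mu, sigma = meas.mean(), meas.std()
    reliable = np.abs(meas - mu) > guard * sigma
    bits = (meas > mu).astype(int)
    return bits, reliable

rng = np.random.default_rng(0)
channel = rng.normal(0.0, 1.0, 256)            # shared reciprocal fading samples
alice = channel + rng.normal(0.0, 0.05, 256)   # Alice's noisy observation
bob   = channel + rng.normal(0.0, 0.05, 256)   # Bob's noisy observation

bits_a, rel_a = key_bits(alice)
bits_b, rel_b = key_bits(bob)
common = rel_a & rel_b                         # indices both nodes keep
print("key length:", int(common.sum()))
print("bit disagreement rate:", float(np.mean(bits_a[common] != bits_b[common])))
```

In practice the nodes would exchange only the kept indices over the public channel, then reconcile the remaining bit mismatches before hashing the agreed bits into the long-term key.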

Related Content

Networking: IFIP International Conference on Networking. Explanation: international networking conference. Publisher: IFIP. SIT:

Molecular communication (MC) is a paradigm that employs molecules as information carriers and hence requires unconventional transceivers and detection techniques for the Internet of Bio-Nano Things (IoBNT). In this study, we provide a novel MC model that incorporates a spherical transmitter and a partially absorbing spherical receiver. This model offers a more realistic representation than the receiver architectures common in the literature, e.g., passive or fully absorbing configurations. An optimization-based technique utilizing particle swarm optimization (PSO) is employed to accurately estimate the cumulative number of molecules received. This technique yields nearly constant correction parameters and achieves a fivefold improvement in root mean square error (RMSE). The estimated channel model provides an approximate analytical impulse response and is therefore used to estimate channel parameters such as the distance, the diffusion coefficient, or a combination of both. We apply iterative maximum likelihood estimation (MLE) for the parameter estimation, which gives errors consistent with the estimated Cramer-Rao Lower Bound (CRLB).
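The sketch below illustrates the kind of fit the abstract describes: a minimal hand-rolled PSO estimates hypothetical correction parameters (alpha, beta) that scale the standard hitting-fraction formula for a fully absorbing spherical receiver. The parameter values, units, and the reference curve are placeholders standing in for particle-based simulation data, not the paper's model.

```python
import numpy as np
from scipy.special import erfc

def hitting_fraction(t, d, rr, D, alpha, beta):
    """Fully absorbing spherical-receiver hitting fraction with hypothetical
    correction parameters (alpha, beta) accounting for partial absorption."""
    return alpha * (rr / d) * erfc((d - rr) / np.sqrt(4.0 * beta * D * t))

# Synthetic reference curve standing in for particle-based simulation data.
d, rr, D = 10e-6, 5e-6, 8e-11           # distance, receiver radius, diffusion coeff.
t = np.linspace(0.01, 2.0, 200)
reference = hitting_fraction(t, d, rr, D, alpha=0.6, beta=0.8)

def rmse(params):
    a, b = params
    return np.sqrt(np.mean((hitting_fraction(t, d, rr, D, a, b) - reference) ** 2))

def pso(cost, lo, hi, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, len(lo)))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

best, err = pso(rmse, lo=np.array([0.1, 0.1]), hi=np.array([2.0, 2.0]))
print("estimated correction parameters:", best, "RMSE:", err)
```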

Culture fundamentally shapes people's reasoning, behavior, and communication. Generative artificial intelligence (AI) technologies may cause a shift towards a dominant culture. As people increasingly use AI to expedite and even automate various professional and personal tasks, cultural values embedded in AI models may bias authentic expression. We audit large language models for cultural bias, comparing their responses to nationally representative survey data, and evaluate country-specific prompting as a mitigation strategy. We find that GPT-4, GPT-3.5, and GPT-3 exhibit cultural values resembling those of English-speaking and Protestant European countries. Our mitigation strategy reduces cultural bias in recent models, but not for all countries and territories. To avoid cultural bias in generative AI, especially in high-stakes contexts, we suggest using culture matching and ongoing cultural audits.
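A hypothetical illustration of the kind of audit metric such a comparison might use, with placeholder survey means and model answers rather than real data:

```python
import numpy as np

# Hypothetical example: each entry is one value-survey item (e.g., on a 1-10
# scale); columns compare the national survey mean with the model's mean
# answer, with and without a country-specific prompt. All values are placeholders.
survey_means   = np.array([6.2, 3.1, 7.8, 4.5])
model_default  = np.array([7.9, 2.0, 8.9, 6.8])
model_prompted = np.array([6.8, 2.9, 8.1, 5.0])

def cultural_distance(model, survey):
    """Mean absolute deviation between model answers and survey means."""
    return float(np.mean(np.abs(model - survey)))

print("bias without country prompt:", cultural_distance(model_default, survey_means))
print("bias with country prompt:   ", cultural_distance(model_prompted, survey_means))
```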

Agricultural robots must navigate challenging, dynamic, and semi-structured environments. Recently, environmental modeling using LiDAR-based SLAM has shown promise in providing highly accurate geometry. However, how this chaotic environmental information can be used to achieve effective robot automation in the agricultural sector remains unexplored. In this study, we propose a novel semantic mapping and navigation framework for achieving robotic autonomy in orchards. It consists of two main components: a semantic processing module and a navigation module. First, we present a novel 3D detection network architecture, 3D-ODN, which can accurately extract object instance information from point clouds. Second, we develop a framework to construct the visibility map by incorporating semantic information and terrain analysis. Combining these two critical components, our framework is evaluated in a number of key horticultural production scenarios, including in-situ phenotyping and daily monitoring with a robotic system, and selective harvesting in apple orchards. The experimental results show that our method ensures high accuracy in understanding the environment and enables reliable robot autonomy in agricultural environments.
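The abstract does not specify how the visibility map is built; the sketch below only illustrates, under simple assumptions, how semantically labeled 3D points could be rasterized into a 2D traversability grid that combines obstacle labels with a crude terrain-roughness check.

```python
import numpy as np

def traversability_grid(points, labels, cell=0.5, max_rough=0.15,
                        x_range=(0, 20), y_range=(0, 20)):
    """Mark grid cells traversable (1) or not (0) from labeled points:
    label 0 = ground, nonzero = obstacle (trunk, canopy, ...)."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=int)
    ground_z = [[[] for _ in range(ny)] for _ in range(nx)]
    for (x, y, z), lab in zip(points, labels):
        i, j = int((x - x_range[0]) / cell), int((y - y_range[0]) / cell)
        if not (0 <= i < nx and 0 <= j < ny):
            continue
        if lab != 0:
            grid[i, j] = -1                  # obstacle observed in this cell
        else:
            ground_z[i][j].append(z)
    for i in range(nx):
        for j in range(ny):
            if grid[i, j] == -1 or not ground_z[i][j]:
                grid[i, j] = 0               # blocked or unobserved
            else:
                rough = max(ground_z[i][j]) - min(ground_z[i][j])
                grid[i, j] = 1 if rough < max_rough else 0
    return grid

# Toy usage with a few hand-placed points (hypothetical).
pts = [(1.0, 1.0, 0.0), (1.1, 1.2, 0.05), (3.0, 3.0, 0.0), (3.0, 3.1, 1.5)]
labs = [0, 0, 0, 1]                          # last point is a trunk
print(traversability_grid(pts, labs)[0:8, 0:8])
```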

Age of Information (AoI) and reliability are two critical metrics to support real-time applications in the Industrial Internet of Things (IIoT). These metrics reflect different notions of timely delivery of sensor information. Monitoring traffic serves to maintain fresh status updates, expressed as a low AoI, which is important for proper control and actuation. On the other hand, safety-critical information, e.g., emergency alarms, is generated sporadically and must be delivered with high reliability within a predefined deadline. In this work, we investigate the AoI-reliability trade-off in a real-time monitoring scenario that supports two traffic flows, namely AoI-oriented traffic and deadline-oriented traffic. Both traffic flows are transmitted to a central controller over an unreliable shared channel. Using a Discrete-Time Markov Chain (DTMC), we derive expressions for the average AoI of the AoI-oriented traffic and for the reliability of the deadline-oriented traffic, represented by the Packet Loss Probability (PLP). We also conduct discrete-event simulations in MATLAB to validate the analytical results and evaluate the interaction between the two traffic flows. The results clearly demonstrate the trade-off between AoI and PLP in such heterogeneous IIoT networks and give insights into how to configure the network to achieve a target pair of AoI and PLP.
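A minimal slot-level simulation of a two-flow setup of this kind is sketched below in Python (the paper's own simulations are in MATLAB). The alarm-priority scheduling, generate-at-will updates, arrival model, and parameter values are assumptions for illustration, not the paper's exact DTMC model.

```python
import numpy as np

def simulate(slots=200_000, p_succ=0.8, p_alarm=0.05, deadline=5, seed=1):
    """Two flows share one unreliable channel: a monitoring flow that always
    has a fresh update to send, and a sporadic alarm flow with a hard
    deadline. Pending alarms are assumed to get priority."""
    rng = np.random.default_rng(seed)
    aoi, aoi_sum = 1, 0
    alarm_age = None                          # slots since pending alarm arrived
    alarms, losses = 0, 0
    for _ in range(slots):
        if alarm_age is None and rng.random() < p_alarm:
            alarm_age, alarms = 0, alarms + 1
        if alarm_age is not None:             # alarm occupies the channel
            if rng.random() < p_succ:
                alarm_age = None              # delivered in time
            else:
                alarm_age += 1
                if alarm_age >= deadline:     # deadline expired: packet lost
                    losses, alarm_age = losses + 1, None
            aoi += 1                          # no status update this slot
        else:                                 # monitoring flow sends a fresh update
            aoi = 1 if rng.random() < p_succ else aoi + 1
        aoi_sum += aoi
    return aoi_sum / slots, losses / max(alarms, 1)

avg_aoi, plp = simulate()
print(f"average AoI: {avg_aoi:.2f} slots, alarm PLP: {plp:.4f}")
```

Sweeping p_alarm or the deadline in this sketch exposes the same qualitative trade-off the abstract describes: protecting the deadline-oriented flow steals channel slots from the monitoring flow and raises its average AoI.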

Enabling real-time communication in Industrial Internet of Things (IIoT) networks is crucial to support autonomous, self-organized, and re-configurable industrial automation for Industry 4.0 and the forthcoming Industry 5.0. In this paper, we consider a SIC-assisted real-time IIoT network, in which sensor nodes generate reports according to an event-generation probability that is specific to the monitored phenomenon. The reports are delivered over a block-fading channel to a common Access Point (AP) in slotted ALOHA fashion; the AP leverages the imbalances in the received powers among the contending users and applies successive interference cancellation (SIC) to decode user packets from collisions. We provide an extensive analytical treatment of this setup, deriving the Age of Information (AoI), throughput, and deadline violation probability when the AP has access to either perfect or imperfect channel-state information. We show that adopting SIC improves all performance parameters with respect to standard slotted ALOHA, as well as to an age-dependent access method. The analytical results agree with the simulation-based ones, demonstrating that investing in SIC capability at the receiver enables this simple access method to support timely and efficient information delivery in IIoT networks.
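The following Monte Carlo sketch illustrates SIC-assisted slotted ALOHA under assumed parameters (Rayleigh block fading, unit noise power, a fixed SINR threshold); it is a generic illustration of the access mechanism, not the paper's analytical model.

```python
import numpy as np

def sic_slotted_aloha(n_users=20, p_tx=0.1, snr_db=15.0, sinr_th_db=3.0,
                      slots=100_000, seed=2):
    """Monte Carlo throughput of slotted ALOHA where the receiver decodes the
    strongest user first and successively cancels decoded packets (SIC)."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    th = 10 ** (sinr_th_db / 10)
    decoded_total = 0
    for _ in range(slots):
        active = rng.random(n_users) < p_tx
        powers = snr * rng.exponential(1.0, active.sum())  # Rayleigh block fading
        powers = np.sort(powers)[::-1]                     # strongest first
        interference = powers.sum()
        for p in powers:                                   # SIC decoding chain
            interference -= p
            if p / (interference + 1.0) >= th:             # unit noise power
                decoded_total += 1                         # decoded and cancelled
            else:
                break                                      # chain stops at first failure
    return decoded_total / slots

print("throughput (packets/slot):", sic_slotted_aloha())
```

Setting the SINR threshold high enough that at most one packet per slot can be decoded reduces the sketch to standard slotted ALOHA, which makes the gain from power imbalances easy to see.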

Language serves as a vehicle for conveying thought, enabling communication among individuals. The ability to distinguish between diverse concepts, identify fairness and injustice, and comprehend a range of legal notions fundamentally relies on logical reasoning. Large Language Models (LLMs) attempt to emulate human language understanding and generation, but their competency in logical reasoning remains limited. This paper seeks to address the philosophical question: How can we effectively teach logical reasoning to LLMs while maintaining a deep understanding of the intricate relationship between language and logic? By focusing on bolstering LLMs' capabilities in logical reasoning, we aim to expand their applicability in law and other logic-intensive disciplines. To this end, we propose a Reinforcement Learning from Logical Feedback (RLLF) approach, which serves as a potential framework for refining LLMs' reasoning capacities. Through RLLF and a revised evaluation methodology, we explore new avenues for research in this domain and contribute to the development of LLMs capable of handling complex legal reasoning tasks while acknowledging the fundamental connection between language and logic.
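The RLLF framework itself is not specified in the abstract; as a heavily simplified illustration, the sketch below shows how a symbolic checker (a toy Horn-clause forward chainer over hypothetical atoms) could supply a scalar "logical feedback" reward to an RL fine-tuning loop.

```python
def forward_chain(facts, rules):
    """Tiny Horn-clause forward chaining: rules are (premises, conclusion)."""
    derived, changed = set(facts), True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

def logical_reward(claimed_conclusions, facts, rules):
    """Reward in [-1, 1]: fraction of claimed conclusions actually entailed,
    minus the fraction that are not (a hypothetical shaping choice)."""
    if not claimed_conclusions:
        return 0.0
    entailed = forward_chain(facts, rules)
    good = sum(c in entailed for c in claimed_conclusions)
    return (2.0 * good / len(claimed_conclusions)) - 1.0

# Toy legal-style example (hypothetical atoms).
facts = {"signed_contract", "breach"}
rules = [({"signed_contract", "breach"}, "liable"),
         ({"liable"}, "owes_damages")]
print(logical_reward(["owes_damages"], facts, rules))                 # 1.0
print(logical_reward(["owes_damages", "not_liable"], facts, rules))   # 0.0
```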

Mediumband wireless communication refers to wireless communication through a class of channels, known as mediumband channels, that exists on the TmTs-plane. Through statistical analysis and computer simulations, this paper studies the performance limits of this class of channels in terms of uncoded bit error rate (BER) and diversity order. We show that, owing mainly to the effect of deep fading avoidance, which is unique to channels in the mediumband region, judiciously designed mediumband wireless systems have the potential to achieve significantly lower error rates and higher-order diversity, even in non-line-of-sight (NLoS) propagation environments where the achievable diversity order is otherwise low.
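The mediumband channel model is not reproduced here; the sketch below only shows the generic Monte Carlo methodology for estimating uncoded BER and diversity order (illustrated for BPSK over flat Rayleigh fading, which has diversity order one, as a baseline for comparison).

```python
import numpy as np

def bpsk_ber(snr_db, n_bits=2_000_000, fading=True, seed=3):
    """Monte Carlo uncoded BER of BPSK, optionally over flat Rayleigh fading."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1.0
    h = np.abs(rng.normal(0, np.sqrt(0.5), n_bits) +
               1j * rng.normal(0, np.sqrt(0.5), n_bits)) if fading else 1.0
    noise = rng.normal(0, np.sqrt(0.5 / snr), n_bits)   # real AWGN, Es/N0 = snr
    y = h * symbols + noise
    return float(np.mean((y > 0).astype(int) != bits))

snrs = np.array([10.0, 20.0, 30.0])
bers = np.array([bpsk_ber(s) for s in snrs])
# Diversity order ~ negative slope of log10(BER) per decade of SNR at high SNR.
slope = -(np.log10(bers[-1]) - np.log10(bers[-2])) / ((snrs[-1] - snrs[-2]) / 10)
print("BER:", bers, "estimated diversity order:", round(slope, 2))
```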

Existing knowledge graph (KG) embedding models have primarily focused on static KGs. However, real-world KGs do not remain static; rather, they evolve and grow in tandem with the development of KG applications. Consequently, new facts and previously unseen entities and relations continually emerge, necessitating an embedding model that can quickly learn and transfer new knowledge as the KG grows. Motivated by this, in this paper we delve into the expanding field of lifelong KG embedding. We consider knowledge transfer and retention when learning on growing snapshots of a KG, without having to learn embeddings from scratch. The proposed model includes a masked KG autoencoder for embedding learning and update, an embedding transfer strategy to inject the learned knowledge into the embeddings of new entities and relations, and an embedding regularization method to avoid catastrophic forgetting. To investigate the impacts of different aspects of KG growth, we construct four datasets to evaluate the performance of lifelong KG embedding. Experimental results show that the proposed model outperforms state-of-the-art inductive and lifelong embedding baselines.
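A sketch of the two auxiliary ideas mentioned above (embedding transfer for new entities and an anti-forgetting regularizer), with hypothetical entity ids and without the paper's masked KG autoencoder:

```python
import numpy as np

def transfer_init(neighbors, old_emb, dim=64, rng=None):
    """Initialize a new entity's embedding as the mean of its already-embedded
    neighbors (a simple embedding-transfer heuristic); fall back to random."""
    rng = rng or np.random.default_rng(0)
    known = [old_emb[e] for e in neighbors if e in old_emb]
    return np.mean(known, axis=0) if known else rng.normal(0, 0.1, dim)

def retention_penalty(new_emb, old_emb, weight=1.0):
    """L2 regularizer keeping previously learned embeddings close to their old
    values while training on a new snapshot (anti-forgetting term)."""
    shared = set(new_emb) & set(old_emb)
    return weight * sum(float(np.sum((new_emb[e] - old_emb[e]) ** 2)) for e in shared)

# Toy usage with hypothetical entity ids.
rng = np.random.default_rng(0)
old = {"e1": rng.normal(0, 0.1, 64), "e2": rng.normal(0, 0.1, 64)}
new = {k: v + rng.normal(0, 0.01, 64) for k, v in old.items()}
new["e3"] = transfer_init(["e1", "e2"], old)       # entity new in this snapshot
print("retention penalty:", round(retention_penalty(new, old), 4))
```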

Knowledge graphs represent factual knowledge about the world as relationships between concepts and are critical for intelligent decision making in enterprise applications. New knowledge is inferred from the existing facts in the knowledge graphs by encoding the concepts and relations into low-dimensional feature vector representations. The most effective representations for this task, called Knowledge Graph Embeddings (KGE), are learned through neural network architectures. Due to their impressive predictive performance, they are increasingly used in high-impact domains like healthcare, finance and education. However, are the black-box KGE models adversarially robust for use in domains with high stakes? This thesis argues that state-of-the-art KGE models are vulnerable to data poisoning attacks, that is, their predictive performance can be degraded by systematically crafted perturbations to the training knowledge graph. To support this argument, two novel data poisoning attacks are proposed that craft input deletions or additions at training time to subvert the learned model's performance at inference time. These adversarial attacks target the task of predicting the missing facts in knowledge graphs using KGE models, and the evaluation shows that the simpler attacks are competitive with or outperform the computationally expensive ones. The thesis contributions not only highlight and provide an opportunity to fix the security vulnerabilities of KGE models, but also help to understand the black-box predictive behaviour of KGE models.
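As a toy illustration of a deletion-style poisoning attack, the sketch below scores neighborhood triples with DistMult on random (untrained) embeddings and picks a deletion candidate by a naive highest-score heuristic; the heuristic, identifiers, and data are hypothetical and do not reproduce the thesis's proposed attacks.

```python
import numpy as np

def distmult_score(h, r, t, ent, rel):
    """DistMult triple score: <e_h, w_r, e_t>."""
    return float(np.sum(ent[h] * rel[r] * ent[t]))

def deletion_candidate(target, train_triples, ent, rel):
    """Naive deletion heuristic: among training triples sharing an entity with
    the target, remove the highest-scoring one (assumed most influential)."""
    th, _, tt = target
    neighbors = [x for x in train_triples
                 if x != target and (th in (x[0], x[2]) or tt in (x[0], x[2]))]
    return max(neighbors, key=lambda x: distmult_score(*x, ent, rel), default=None)

# Toy setup with random (untrained) embeddings and hypothetical ids.
rng = np.random.default_rng(0)
ent = {e: rng.normal(0, 1, 16) for e in ["alice", "bob", "acme", "paris"]}
rel = {r: rng.normal(0, 1, 16) for r in ["works_for", "lives_in", "knows"]}
train = [("alice", "works_for", "acme"), ("alice", "lives_in", "paris"),
         ("bob", "knows", "alice")]
target = ("alice", "lives_in", "paris")
print("triple to delete:", deletion_candidate(target, train, ent, rel))
```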

Deployment of Internet of Things (IoT) devices and Data Fusion techniques have gained popularity in public and government domains. This usually requires capturing and consolidating data from multiple sources. Because datasets do not necessarily originate from identical sensors, fused data typically results in a complex data problem. Since the military is investigating how heterogeneous IoT devices can aid its processes and tasks, we investigate a multi-sensor approach. Moreover, we propose a signal-to-image encoding approach that fuses data from IoT wearable devices into an image that is invertible and easier to visualize, thereby supporting decision making. Furthermore, we investigate the challenge of enabling intelligent identification and detection, and demonstrate the feasibility of the proposed Deep Learning and Anomaly Detection models for supporting future applications that utilize hand-gesture data from wearable devices.
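One simple, invertible signal-to-image encoding is sketched below, assuming per-channel min-max scaling of a multi-sensor window to 8-bit pixels; the paper's specific encoding is not given in the abstract, so this is only an illustration of the invertibility idea.

```python
import numpy as np

def encode_to_image(signals):
    """Encode a (channels, samples) multi-sensor window into an 8-bit image.
    Per-channel min/max are returned so the mapping stays invertible."""
    signals = np.asarray(signals, dtype=float)
    lo = signals.min(axis=1, keepdims=True)
    hi = signals.max(axis=1, keepdims=True)
    scaled = (signals - lo) / np.where(hi - lo == 0, 1, hi - lo)
    return (scaled * 255).round().astype(np.uint8), lo, hi

def decode_from_image(image, lo, hi):
    """Invert the encoding (up to 8-bit quantization error)."""
    return image.astype(float) / 255.0 * (hi - lo) + lo

# Toy window of accelerometer/gyroscope channels (hypothetical values).
rng = np.random.default_rng(0)
window = rng.normal(0, 1, (6, 128))          # 6 sensor channels, 128 samples
img, lo, hi = encode_to_image(window)
rec = decode_from_image(img, lo, hi)
print("image shape:", img.shape,
      "max reconstruction error:", float(np.abs(rec - window).max()))
```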
