
Accurately estimating Origin-Destination (OD) matrices is a topic of increasing interest for efficient transportation network management and sustainable urban planning. Traditionally, travel surveys have supported this process; however, their availability and comprehensiveness can be limited. Moreover, the recent COVID-19 pandemic has triggered unprecedented shifts in mobility patterns, underscoring the urgency of accurate and dynamic mobility data to support policies and decisions with data-driven evidence. In this study, we tackle these challenges by introducing an innovative pipeline for estimating dynamic OD matrices. The motivating problem concerns the Trenord railway network in Lombardy, Italy, where we apply a novel approach that integrates ticket and subscription sales data with passenger counts obtained from Automated Passenger Counting (APC) systems, using the Iterative Proportional Fitting (IPF) algorithm. Our work effectively addresses the complexities posed by incomplete and diverse data sources, showcasing the adaptability of our pipeline across various transportation contexts. Ultimately, this research bridges the gap between available data sources and the escalating need for precise OD matrices. The proposed pipeline fosters a comprehensive understanding of transportation network dynamics, providing a valuable tool for transportation operators, policymakers, and researchers. To highlight the potential of dynamic OD matrices, we also showcase methods for detecting anomalies in mobility trends in the network through such matrices and interpret them in light of events that occurred in the last months of 2022.
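To make the fitting step concrete, here is a minimal NumPy sketch of classical IPF applied to a seed OD matrix, with row and column targets standing in for station-level boarding and alighting counts; the function and variable names are illustrative and not taken from the paper's pipeline.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, tol=1e-6, max_iter=1000):
    """Iterative Proportional Fitting: rescale a seed OD matrix until its
    row and column sums match the target marginals."""
    od = seed.astype(float).copy()
    for _ in range(max_iter):
        # Scale each row so origin totals match (e.g., boardings per station).
        row_sums = od.sum(axis=1, keepdims=True)
        r = np.ones_like(row_sums)
        np.divide(row_targets[:, None], row_sums, out=r, where=row_sums > 0)
        od *= r
        # Scale each column so destination totals match (e.g., alightings).
        col_sums = od.sum(axis=0, keepdims=True)
        c = np.ones_like(col_sums)
        np.divide(col_targets[None, :], col_sums, out=c, where=col_sums > 0)
        od *= c
        if (np.abs(od.sum(axis=1) - row_targets).max() < tol and
                np.abs(od.sum(axis=0) - col_targets).max() < tol):
            break
    return od

# Toy example: a 3-station seed from ticket sales, marginals from APC counts.
seed = np.array([[0., 40., 10.], [30., 0., 20.], [15., 25., 0.]])
od = ipf(seed, np.array([60., 55., 45.]), np.array([50., 70., 40.]))
```

Note that IPF preserves the zeros of the seed (no intra-station trips here) and converges only when the two marginal vectors have equal totals.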

Related Content

Networking: IFIP International Conferences on Networking. An international networking conference series published by IFIP.

Spiking neural networks (SNNs), inspired by the spiking behavior of biological neurons, provide a unique pathway for capturing the intricacies of temporal data. However, applying SNNs to time-series forecasting is challenging due to difficulties in effective temporal alignment, complexities in encoding processes, and the absence of standardized guidelines for model selection. In this paper, we propose a framework for SNNs in time-series forecasting tasks, leveraging the efficiency of spiking neurons in processing temporal information. Through a series of experiments, we demonstrate that our proposed SNN-based approaches achieve comparable or superior results to traditional time-series forecasting methods on diverse benchmarks with much less energy consumption. Furthermore, we conduct detailed analysis experiments to assess the SNN's capacity to capture temporal dependencies within time-series data, offering valuable insights into its nuanced strengths and effectiveness in modeling the intricate dynamics of temporal data. Our study contributes to the expanding field of SNNs and offers a promising alternative for time-series forecasting tasks, presenting a pathway for the development of more biologically inspired and temporally aware forecasting models.
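As background, the following sketch implements a single leaky integrate-and-fire (LIF) neuron, the generic textbook unit behind most SNNs; it shows how a time series can be rate-encoded into input currents and turned into a spike train, but it is not the specific architecture proposed in the paper.

```python
import numpy as np

def lif_forward(inputs, beta=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays by
    `beta` each step, integrates the input current, and emits a binary
    spike (followed by a soft reset) whenever it crosses `threshold`."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = beta * v + x              # leak, then integrate
        s = 1.0 if v >= threshold else 0.0
        spikes.append(s)
        v -= s * threshold            # soft reset after a spike
    return np.array(spikes)

# Encode a normalized time series as input currents and emit spikes.
series = np.sin(np.linspace(0, 4 * np.pi, 100)) * 0.5 + 0.5
spike_train = lif_forward(series)
```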

A structure-preserving Finite Element Method (FEM) for the transport equation in one- and two-dimensional domains is presented. This Distributed Parameter System (DPS) has non-collocated boundary control and observation, and exhibits a scattering-energy preserving structure. We show that the discretized model preserves this structure from the original infinite-dimensional system. Moreover, we analyse the case of moving meshes for the one-dimensional case. The moving mesh requires fewer states than the fixed one to produce solutions of comparable accuracy, and it can also reduce the overshoot and oscillations of the Gibbs phenomenon that arise when using the FEM. Numerical simulations are provided for a one-dimensional transport equation with fixed and moving meshes.
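For reference, a standard formulation of the one-dimensional problem reads as follows (the notation here is generic and may differ from the paper's):

```latex
\begin{aligned}
&\partial_t x(\zeta, t) + c\,\partial_\zeta x(\zeta, t) = 0,
  \qquad \zeta \in (0, L),\ t \ge 0,\ c > 0,\\
&u(t) = x(0, t) \quad \text{(boundary control at the inflow)},\\
&y(t) = x(L, t) \quad \text{(boundary observation at the outflow)},
\end{aligned}
```

with the energy $E(t) = \tfrac{1}{2}\int_0^L x(\zeta, t)^2 \,\mathrm{d}\zeta$ satisfying the balance $\dot{E}(t) = \tfrac{c}{2}\bigl(u(t)^2 - y(t)^2\bigr)$, which is the scattering-energy structure a structure-preserving discretization should reproduce at the discrete level.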

Semantic communication focuses on optimizing the exchange of information by transmitting only the most relevant data required to convey the intended message to the receiver and achieve the desired communication goal. For example, if we consider images as the information and the communication goal is object detection at the receiver side, the semantics of the information are the objects in each image; transferring only these semantics suffices to achieve the goal. In this paper, we propose a design framework for implementing semantic-aware and goal-oriented communication of images. To achieve this, we first define the baseline problem as a set of mathematical problems that can be optimized to improve the efficiency and effectiveness of the communication system. We consider two scenarios in which either the data rate or the error at the receiver is the limiting constraint. Our proposed system model and solution are inspired by the concept of auto-encoders, where the encoder and the decoder are implemented at the transmitter and receiver, respectively, to extract semantic information for specific object detection goals. Our numerical results validate the proposed design framework, achieving low error or near-optimal performance in a goal-oriented communication system while reducing the amount of data transferred.
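As an illustration of the auto-encoder split described above, here is a minimal PyTorch sketch with the encoder at the transmitter and the decoder at the receiver for a classification-style goal; the layer sizes, the 32×32 input, and the additive channel noise are assumptions for the example, not the paper's design.

```python
import torch
import torch.nn as nn

class SemanticEncoder(nn.Module):
    """Transmitter side: compress an image into a low-dimensional
    semantic code; only this code is sent over the channel."""
    def __init__(self, code_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, code_dim),
        )

    def forward(self, img):          # img: (B, 3, 32, 32)
        return self.net(img)         # code: (B, code_dim)

class SemanticDecoder(nn.Module):
    """Receiver side: map the received code to task-relevant output,
    here class logits for a detection/classification goal."""
    def __init__(self, code_dim=64, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, code):
        return self.net(code)

enc, dec = SemanticEncoder(), SemanticDecoder()
x = torch.randn(4, 3, 32, 32)                        # toy image batch
code = enc(x)                                        # transmit only this
logits = dec(code + 0.01 * torch.randn_like(code))   # noisy channel
```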

The Critical Node Problem (CNP) is concerned with identifying the critical nodes in a complex network. These nodes play a significant role in maintaining the connectivity of the network, and removing them can negatively impact network performance. CNP has been studied extensively due to its numerous real-world applications. Among the different versions of CNP, CNP-1a has gained the most popularity. The primary objective of CNP-1a is to minimize the pairwise connectivity in the remaining network after deleting a limited number of nodes from a network. Due to the NP-hard nature of CNP-1a, many heuristic/metaheuristic algorithms have been proposed to solve this problem. However, most existing algorithms start from a random initialization, leading to a high cost of obtaining an optimal solution. To improve the efficiency of solving CNP-1a, a knowledge-guided genetic algorithm named K2GA is proposed. Unlike the standard genetic algorithm framework, K2GA has two main components: a pretrained neural network that provides prior knowledge on possible critical nodes, and a hybrid genetic algorithm with local search that finds an optimal set of critical nodes based on the knowledge given by the trained neural network. The local search process utilizes a cut-node-based greedy strategy. The effectiveness of the proposed knowledge-guided genetic algorithm is verified by experiments on 26 real-world instances of complex networks. Experimental results show that K2GA outperforms the state-of-the-art algorithms regarding the best, median, and average objective values, and improves the best upper bounds on the best objective values for eight real-world instances.
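For concreteness, the CNP-1a objective, the pairwise connectivity of the residual graph, can be computed with networkx as in the sketch below; the graph and the removed set are toy examples, not instances from the paper.

```python
import networkx as nx

def pairwise_connectivity(graph, removed):
    """CNP-1a objective: the number of node pairs that remain connected
    after deleting `removed`, i.e. the sum over connected components C
    of |C| * (|C| - 1) / 2."""
    residual = graph.copy()
    residual.remove_nodes_from(removed)
    return sum(len(c) * (len(c) - 1) // 2
               for c in nx.connected_components(residual))

# Removing the articulation node 2 splits the toy graph in two.
g = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (2, 4)])
print(pairwise_connectivity(g, removed=[2]))   # 2: pairs (0,1) and (3,4)
```

The cut nodes exploited by the greedy local search are exactly the articulation points of the residual graph, which networkx exposes via `nx.articulation_points`.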

There is strong interest in developing mathematical methods for understanding the complex neural networks used in image analysis. In this paper, we introduce techniques from linear algebra to model neural network layers as maps between signal spaces. First, we demonstrate how signal spaces can be used to visualize weight spaces and convolutional layer kernels. We also demonstrate how residual vector spaces can be used to further visualize the information lost at each layer. Second, we introduce the concept of invertible networks and an algorithm for computing input images that yield specific outputs. We demonstrate our approach on two invertible networks and ResNet18.
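A small NumPy example conveys the flavor of this signal-space view for a single linear layer: the pseudoinverse yields the minimum-norm preimage of a given output, and the residual of the true input lies in the layer's null space, i.e. the information the layer discards. This is a generic linear-algebra illustration, not the paper's inversion algorithm for full networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy layer y = W x + b, viewed as a map between signal spaces.
W = rng.normal(size=(8, 16))      # wide layer: non-trivial null space
b = rng.normal(size=8)
x_true = rng.normal(size=16)
y = W @ x_true + b

# Minimum-norm preimage of y via the Moore-Penrose pseudoinverse.
x_hat = np.linalg.pinv(W) @ (y - b)

# The residual lies in the null space of W: the component of the input
# the layer cannot see, which residual vector spaces make visible.
residual = x_true - x_hat
print(np.allclose(W @ x_hat + b, y))             # True: x_hat maps to y
print(np.allclose(W @ residual, 0, atol=1e-9))   # True: lost component
```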

User expectations impact the evaluation of new interactive systems. Elevated expectations may enhance the perceived effectiveness of interfaces in user studies, similar to the placebo effect observed in medical studies. To showcase this placebo effect, we conducted a user study with 18 participants who performed a reaction time test under two different computer screen refresh rates. Participants saw a stated screen refresh rate before every condition, which corresponded to the true refresh rate in only half of the conditions and was lower or higher in the other half. Results revealed successful priming: participants believed they performed better or worse according to the stated narrative, even when the actual refresh rate was the opposite. Post-experiment questionnaires confirmed that participants still held onto the initial narrative. Interestingly, objective performance remained unchanged between the two refresh rates. We discuss how study narratives can influence subjective measures and suggest strategies to mitigate placebo effects in user-centered study designs.

Coherent point-to-multipoint (PtMP) optical networks based on digital subcarrier multiplexing (DSCM) are a promising technology for metro and access networks, offering cost savings, low latency, and high flexibility. In-phase and quadrature (IQ) impairments of the coherent transceiver (e.g., IQ skew and power imbalance) cause severe performance degradation. In DSCM-based coherent PtMP optical networks, far-end IQ-impairments estimation for the hub transmitter is hard to realize because a leaf on one subcarrier cannot acquire the signal on the symmetrical subcarrier. In this paper, we propose a far-end IQ-impairments estimation method based on specially designed time-and-frequency interleaving tones (TFITs), which can simultaneously estimate the IQ skews and power imbalances of the hub transmitter and leaf receiver at an individual leaf. The feasibility of TFITs-based IQ-impairments estimation has been experimentally verified on an 8 GBaud/subcarrier × 4-subcarrier DSCM-based coherent PtMP optical network. The experimental results show that the absolute errors in the estimated IQ skew and power imbalance are within ±0.5 ps and ±0.2 dB, respectively. In conclusion, TFITs-based IQ-impairments estimation has great potential for DSCM-based coherent PtMP optical networks.
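To fix ideas, the sketch below applies the two impairments in question, IQ skew (modeled as a fractional delay of the Q rail) and power imbalance, to a complex baseband signal; the sampling rate and impairment values are assumptions, and the code models the impairments themselves, not the proposed TFITs estimator.

```python
import numpy as np

def apply_iq_impairments(signal, fs, skew_ps=0.5, imbalance_db=0.2):
    """Toy transmitter model: delay the Q rail by `skew_ps` picoseconds
    (a frequency-domain phase ramp) and split `imbalance_db` of power
    imbalance between the I and Q rails."""
    i, q = signal.real, signal.imag
    # Fractional delay of Q implemented in the frequency domain.
    f = np.fft.fftfreq(len(q), d=1 / fs)
    ramp = np.exp(-2j * np.pi * f * skew_ps * 1e-12)
    q = np.fft.ifft(np.fft.fft(q) * ramp).real
    # Power imbalance between the rails.
    g = 10 ** (imbalance_db / 20)
    return g * i + 1j * q / g

fs = 32e9                                      # assumed 32 GSa/s rate
n = np.arange(1024)
sig = np.exp(2j * np.pi * 2e9 * n / fs)        # 2 GHz test tone
impaired = apply_iq_impairments(sig, fs)
```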

Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields for their effectiveness in learning over graphs. To scale GNN training up for large-scale and ever-growing graphs, the most promising solution is distributed training, which distributes the workload of training across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training are still only preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating the various optimization techniques it uses. First, distributed GNN training is classified into several categories according to workflow; their computational patterns and communication patterns, as well as the optimization techniques proposed by recent work, are then introduced. Second, the software frameworks and hardware platforms for distributed GNN training are introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing what makes distributed GNN training unique. Finally, interesting issues and opportunities in this field are discussed.
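The communication bottleneck is easy to see in a toy partition-parallel step: each "worker" owns a block of nodes and must fetch the features of neighbors owned by other workers (the halo exchange). The sketch below is schematic, single-process Python, not one of the surveyed frameworks.

```python
import numpy as np

def partitioned_aggregate(adj, feats, parts):
    """One mean-aggregation step executed partition by partition, counting
    how many neighbor features would have to cross worker boundaries."""
    out = np.zeros_like(feats)
    comm = 0                                  # remote feature fetches
    for part in parts:                        # one loop body per worker
        owned = set(part)
        for v in part:
            neigh = adj[v]
            comm += sum(u not in owned for u in neigh)
            out[v] = feats[neigh].mean(axis=0) if neigh else feats[v]
    return out, comm

adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
feats = np.eye(4)
h1, comm = partitioned_aggregate(adj, feats, parts=[[0, 1], [2, 3]])
print(comm)   # cross-worker feature fetches incurred by this step
```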

Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing the trust of end-users in machines. As the number of connected devices keeps growing, the Internet of Things (IoT) market needs to be trustworthy for end-users. However, the existing literature still lacks a systematic and comprehensive survey of the use of XAI for IoT. To bridge this gap, in this paper we review XAI frameworks with a focus on their characteristics and support for IoT. We illustrate the widely used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also suggest implementation choices for XAI models over IoT systems in these applications, with appropriate examples, and summarize the key inferences for future work. Moreover, we present cutting-edge developments in edge XAI architectures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored to the demands of future IoT use cases.

Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes with the same class are prone to connect to each other), while ignoring the heterophily that exists in many real-world networks (i.e., nodes with different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining intermediate representations, which introduces noise and irrelevant information into the result. These methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs, making it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a novel propagation mechanism that can automatically adjust the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measurements of the homophily degree between node pairs, learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, which is trained end-to-end, enabling it to go beyond the homophily assumption. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms state-of-the-art methods under heterophily or low homophily, and achieves competitive performance under homophily.
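A toy version of the idea is to weight each neighbor by an estimated homophily degree instead of aggregating uniformly; here cosine similarity of the raw attributes stands in for the paper's learned topology-based and attribute-based measures.

```python
import numpy as np

def adaptive_propagate(adj, x):
    """One propagation step where each edge carries a homophily weight in
    [-1, 1]: similar neighbors contribute positively, dissimilar ones are
    down-weighted or pushed negative, rather than averaged uniformly."""
    xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
    out = x.copy()
    for v, neigh in adj.items():
        if not neigh:
            continue
        w = xn[neigh] @ xn[v]                 # cosine-similarity weights
        out[v] = x[v] + (w[:, None] * x[neigh]).sum(0) / len(neigh)
    return out

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
x = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
h = adaptive_propagate(adj, x)
```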
