
Fifth-generation (5G) mobile communication networks have recently been deployed in various settings, including high-speed trains. However, the dense deployment of 5G millimeter-wave (mmWave) base stations (BSs) and the high speed of moving trains lead to frequent handovers (HOs), which can adversely affect the Quality-of-Service (QoS) of mobile users. As a result, HO optimization and resource allocation are essential for mobility management in high-speed train systems. In this paper, we model the performance of a high-speed train system with a machine learning (ML) approach based on a nested cross-validation scheme, which prevents information leakage from model evaluation into model parameter tuning, thereby avoiding overfitting and yielding better generalization error. Handover Margin (HOM) and Time-to-Trigger (TTT) values are used as features, several key performance indicators (KPIs) are used as outputs, and several ML methods, including Gradient Boosting Regression (GBR), Adaptive Boosting Regression (ABR), CatBoost Regression (CBR), Artificial Neural Network (ANN), Kernel Ridge Regression (KRR), Support Vector Regression (SVR), and k-Nearest Neighbor Regression (KNNR), are applied to the problem. Finally, the cross-validation schemes are compared across these methods in terms of mean absolute error (MAE) and mean squared error (MSE). The results show that the boosting methods (ABR, CBR, GBR) with the nested cross-validation scheme substantially outperform their counterparts trained with the conventional cross-validation scheme. Likewise, SVR, KNNR, KRR, and ANN with the nested scheme produce promising results for predicting some KPIs relative to their conventional-scheme counterparts.
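To make the nested scheme concrete, here is a minimal sketch using scikit-learn; the toy HOM/TTT features, the synthetic KPI target, and the SVR hyperparameter grid are illustrative assumptions rather than the paper's actual setup.

```python
# Minimal nested cross-validation sketch (scikit-learn). The HOM/TTT
# features, KPI target, and SVR grid are illustrative assumptions, not
# the paper's actual data or configuration.
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))            # columns stand in for HOM, TTT
y = X @ np.array([1.5, -0.7]) + rng.normal(scale=0.1, size=200)  # toy KPI

param_grid = {"C": [0.1, 1.0, 10.0], "gamma": ["scale", 0.1]}

# Inner loop tunes hyperparameters; the outer loop never sees the tuning
# data when scoring, which is what prevents the information leakage.
inner_cv = KFold(n_splits=3, shuffle=True, random_state=0)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0)

tuned_svr = GridSearchCV(SVR(), param_grid, cv=inner_cv,
                         scoring="neg_mean_absolute_error")
scores = cross_val_score(tuned_svr, X, y, cv=outer_cv,
                         scoring="neg_mean_absolute_error")
print(f"nested-CV MAE: {-scores.mean():.4f} +/- {scores.std():.4f}")
```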

Related Content

Cross-validation, sometimes called rotation estimation or out-of-sample testing, is any of various similar model-validation techniques for assessing how the results of a statistical analysis will generalize to an independent data set. It is mainly used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice. In a prediction problem, a model is usually given a dataset of known data on which training is run (the training dataset) and a dataset of unknown data (or first-seen data) against which the model is tested (called the validation dataset or test set). The goal of cross-validation is to test the model's ability to predict new data that was not used in estimating it, in order to flag problems such as overfitting or selection bias, and to give insight into how the model will generalize to an independent dataset (e.g., an unknown dataset, such as one from a real problem). One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset (called the validation set or test set). To reduce variability, most methods perform multiple rounds of cross-validation using different partitions, and the validation results are combined (e.g., averaged) over the rounds to estimate the model's predictive performance. In summary, cross-validation combines (averages) measures of fitness in prediction to derive a more accurate estimate of model prediction performance.
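As a concrete illustration of the procedure described above, the following sketch performs five rounds of splitting, training, and validation and averages the per-round scores; the data and the ridge model are toy placeholders.

```python
# Plain k-fold cross-validation: split the sample into complementary
# subsets, train on one, validate on the other, and average the scores
# over the rounds. Data and model are toy placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
y = X.sum(axis=1) + rng.normal(scale=0.2, size=100)

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True,
                                 random_state=42).split(X):
    model = Ridge().fit(X[train_idx], y[train_idx])   # analysis on training set
    scores.append(mean_squared_error(y[test_idx],     # validate on held-out set
                                     model.predict(X[test_idx])))

print(f"mean MSE over 5 rounds: {np.mean(scores):.4f}")
```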

Heterogeneous graph neural networks (HGNNs) are a popular technique for modeling and analyzing heterogeneous graphs. Most existing HGNN-based approaches are supervised or semi-supervised learning methods that require annotated graphs, which are costly and time-consuming to obtain. Self-supervised contrastive learning has been proposed to address the need for annotated data by mining the intrinsic information hidden within the given data. However, existing contrastive learning methods are inadequate for heterogeneous graphs because they construct contrastive views based only on data perturbation or pre-defined structural properties (e.g., meta-paths) while ignoring the noise that may exist in both node attributes and graph topologies. We develop a novel and robust heterogeneous graph contrastive learning approach, namely HGCL, which, for the first time, introduces two views guided respectively by node attributes and graph topologies, and then integrates and enhances them through a reciprocal contrastive mechanism to better model heterogeneous graphs. In this new approach, we adopt distinct but well-suited attribute and topology fusion mechanisms in the two views, which are conducive to mining the relevant information in attributes and topologies separately. We further use both attribute similarity and topological correlation to construct high-quality contrastive samples. Extensive experiments on three large real-world heterogeneous graphs demonstrate the superiority and robustness of HGCL over state-of-the-art methods.
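As a rough illustration of what a reciprocal contrastive mechanism between two views can look like, here is a hypothetical InfoNCE-style loss in PyTorch; the view encoders are abstracted away, and this is not the authors' implementation.

```python
# Hypothetical two-view contrastive loss: embeddings of the same node
# from an attribute-guided view and a topology-guided view act as
# positives, all other nodes as negatives, contrasted in both directions.
import torch
import torch.nn.functional as F

def reciprocal_contrastive_loss(z_attr, z_topo, temperature=0.5):
    z_attr = F.normalize(z_attr, dim=1)
    z_topo = F.normalize(z_topo, dim=1)
    logits = z_attr @ z_topo.t() / temperature   # cross-view similarities
    labels = torch.arange(z_attr.size(0))        # node i matches node i
    # Reciprocal: attr -> topo and topo -> attr.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

z1, z2 = torch.randn(8, 16), torch.randn(8, 16)  # toy view embeddings
print(reciprocal_contrastive_loss(z1, z2).item())
```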

The advent of predictive methodologies has catalyzed the emergence of data-driven decision support across various domains. However, developing models capable of effectively handling time series inputs remains an enduring challenge. This study presents novel preference learning approaches to multiple criteria sorting (MCS) problems in the presence of temporal criteria. We first formulate a convex quadratic programming model characterized by fixed time discount factors, operating within a regularization framework. To enhance scalability and accommodate learnable time discount factors, we introduce a novel monotonic Recurrent Neural Network (mRNN). It is designed to capture the evolving dynamics of preferences over time while upholding critical properties inherent to MCS problems, including criteria monotonicity, preference independence, and the natural ordering of classes. The proposed mRNN can describe preference dynamics by depicting marginal value functions and personalized time discount factors over time, effectively combining the interpretability of traditional MCS methods with the predictive power of deep preference learning models. Comprehensive assessments of the proposed models are conducted, encompassing synthetic data scenarios and a real-world case study on classifying valuable users of a mobile gaming app based on their historical in-app behavioral sequences. Empirical findings underscore the notable performance improvements achieved by the proposed models when compared to a spectrum of baseline methods spanning machine learning, deep learning, and conventional multiple criteria sorting approaches.
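Two of the ingredients the abstract mentions, criteria monotonicity and a learnable time discount factor, can be sketched as follows; the softplus/sigmoid parameterizations are our own illustrative assumptions, not the paper's mRNN architecture.

```python
# Toy recurrent cell with (a) a discount factor kept in (0, 1) via a
# sigmoid and (b) monotonicity in the criteria enforced by passing the
# weights through softplus so they stay non-negative. Not the paper's mRNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicDiscountCell(nn.Module):
    def __init__(self, n_criteria):
        super().__init__()
        self.raw_w = nn.Parameter(torch.zeros(n_criteria))   # -> non-negative
        self.raw_discount = nn.Parameter(torch.zeros(1))     # -> (0, 1)

    def forward(self, x_seq):                 # x_seq: (batch, time, n_criteria)
        w = F.softplus(self.raw_w)            # non-negative => monotonic in x
        discount = torch.sigmoid(self.raw_discount)
        value = torch.zeros(x_seq.size(0))
        for t in range(x_seq.size(1)):        # older observations decay
            value = discount * value + x_seq[:, t] @ w
        return value

cell = MonotonicDiscountCell(n_criteria=4)
print(cell(torch.rand(2, 5, 4)))              # one value per sequence
```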

Large language models (LLMs) have shown impressive performance in completing various real-world tasks. The current knowledge-learning paradigm of LLMs is mainly based on learning from examples, in which LLMs learn internal rules implicitly from a certain number of supervised examples. However, this paradigm may not learn complicated rules well, especially when training examples are limited. We are inspired by the fact that humans can learn new tasks or knowledge in another way: by learning from rules. That is, humans can grasp new tasks or knowledge quickly and generalize well given only a detailed rule and a few optional examples. Therefore, in this paper, we explore the feasibility of this new learning paradigm, which encodes rule-based knowledge into LLMs. We propose rule distillation, which first uses the strong in-context abilities of LLMs to extract knowledge from textual rules, and then explicitly encodes that knowledge into the LLMs' parameters by learning from the in-context signals produced inside the model. Our experiments show that making LLMs learn from rules with our method is much more efficient than example-based learning in terms of both sample size and generalization ability.
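A minimal sketch of the kind of objective this suggests: align the student distribution (query without the rule) with the teacher distribution (the same model with the rule in context). The model calls are replaced by toy logits here, and the KL form is an assumption rather than the paper's exact recipe.

```python
# Illustrative distillation objective: the "teacher" pass conditions on
# [rule + query], the "student" pass on [query] alone; the student is
# trained to match the teacher's next-token distribution. Logits are
# toy stand-ins for real model outputs.
import torch
import torch.nn.functional as F

def rule_distillation_loss(teacher_logits, student_logits, temperature=1.0):
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1).detach()
    student_logp = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(student_logp, teacher_probs, reduction="batchmean")

vocab = 50257
teacher = torch.randn(4, vocab)                       # with rule in context
student = torch.randn(4, vocab, requires_grad=True)   # without the rule
loss = rule_distillation_loss(teacher, student)
loss.backward()
print(loss.item())
```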

The ability to follow instructions is crucial for Large Language Models (LLMs) to handle various real-world applications. Existing benchmarks primarily focus on evaluating pure response quality rather than assessing whether the response follows the constraints stated in the instruction. To fill this research gap, we propose FollowBench, a Multi-level Fine-grained Constraints Following Benchmark for LLMs. FollowBench comprehensively covers five different types of fine-grained constraints (i.e., Content, Situation, Style, Format, and Example). To enable precise estimation of constraint following across diverse difficulties, we introduce a multi-level mechanism that incrementally adds a single constraint to the initial instruction at each level. To assess whether LLMs' outputs satisfy every individual constraint, we propose prompting strong LLMs with constraint-evolution paths to handle challenging open-ended instructions. By evaluating ten popular closed-source and open-source LLMs on FollowBench, we highlight the weaknesses of LLMs in instruction following and point toward potential avenues for future work. The data and code are publicly available at //github.com/YJiangcm/FollowBench.
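A hypothetical sketch of the multi-level mechanism: level k appends the first k constraints to the initial instruction, and a judge checks each constraint along the path. `ask_model` and `judge_constraint` are placeholder callables, not FollowBench's actual API.

```python
# Level k = initial instruction + first k constraints; a level passes
# only if every active constraint is individually satisfied.
def build_levels(initial_instruction, constraints):
    levels = []
    for k in range(1, len(constraints) + 1):
        prompt = initial_instruction + "".join(
            f"\n- Constraint: {c}" for c in constraints[:k])
        levels.append((k, prompt, constraints[:k]))
    return levels

def evaluate(initial_instruction, constraints, ask_model, judge_constraint):
    results = {}
    for k, prompt, active in build_levels(initial_instruction, constraints):
        response = ask_model(prompt)
        results[k] = all(judge_constraint(response, c) for c in active)
    return results

demo = evaluate("Write a haiku about autumn.",
                ["use exactly three lines", "mention a maple leaf"],
                ask_model=lambda p: "Red maple leaf falls\n...\n...",
                judge_constraint=lambda r, c: True)  # stub judge
print(demo)   # {1: True, 2: True}
```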

Traditional recommender systems have predominantly relied on identity (ID) representations to characterize users and items. In contrast, the emergence of pre-trained language model (PLM) encoders has significantly enriched the modeling of contextual item descriptions. While PLMs excel in few-shot, zero-shot, and unified modeling scenarios, they often overlook the critical collaborative filtering signal. This omission gives rise to two pivotal challenges: (1) collaborative contextualization, i.e., the seamless integration of collaborative signals with contextual representations; and (2) the necessity of bridging the representation gap between ID-based and contextual representations while preserving their contextual semantics. In this paper, we introduce CollabContext, a novel model that merges collaborative filtering signals with contextual representations, aligning these representations within the contextual space while retaining essential contextual semantics. Experimental results on three real-world datasets show substantial improvements. Through its capability for collaborative contextualization, CollabContext demonstrates remarkable gains in recommendation performance, particularly in cold-start scenarios. The code will be released upon acceptance of the paper.
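One plausible reading of "aligning representations within the contextual space" is sketched below: a trainable projection maps ID embeddings into a frozen PLM space under a cosine alignment loss. The dimensions and loss choice are illustrative assumptions, not the CollabContext model itself.

```python
# Project collaborative ID embeddings into the contextual (PLM) space and
# align matched pairs while the PLM side stays fixed, so contextual
# semantics are preserved. All shapes and the loss are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_items, id_dim, plm_dim = 1000, 64, 768
id_emb = nn.Embedding(n_items, id_dim)          # trainable collaborative IDs
projector = nn.Linear(id_dim, plm_dim)          # bridge to contextual space
plm_item_repr = torch.randn(n_items, plm_dim)   # frozen PLM description vectors

items = torch.randint(0, n_items, (32,))
projected = projector(id_emb(items))
target = plm_item_repr[items].detach()          # PLM side not updated

loss = 1 - F.cosine_similarity(projected, target, dim=-1).mean()
loss.backward()
print(loss.item())
```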

Domain fronting is a network communication technique that involves leveraging (or abusing) content delivery networks (CDNs) to disguise the final destination of network packets by presenting them as if they were intended for a different domain than their actual endpoint. This technique can be used for both benign and malicious purposes, such as circumventing censorship or hiding malware-related communications from network security systems. Since domain fronting has been known for a few years, some popular CDN providers have implemented traffic filtering approaches to curb its use at their CDN infrastructure. However, it remains unclear to what extent domain fronting has been mitigated. To better understand whether domain fronting can still be effectively used, we propose a systematic approach to discover CDNs that are still prone to domain fronting. To this end, we leverage passive and active DNS traffic analysis to pinpoint domain names served by CDNs and build an automated tool that can be used to discover CDNs that allow domain fronting in their infrastructure. Our results reveal that domain fronting is feasible in 22 out of 30 CDNs that we tested, including some major CDN providers like Akamai and Fastly. This indicates that domain fronting remains widely available and can be easily abused for malicious purposes.
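The essence of such a probe can be sketched as follows: the TLS layer (SNI and certificate validation) sees one front domain while the HTTP Host header names another, and the response reveals which one the CDN routed on. The domains below are placeholders, and any testing should be limited to infrastructure one is authorized to probe.

```python
# Sketch of a single domain-fronting check: the URL (and thus TLS SNI)
# carries the front domain, while the Host header carries a different
# domain on the same CDN. If the response serves the Host-header domain,
# the CDN routed on the inner header. Placeholder domains only.
import requests

def fronting_probe(front_domain: str, hidden_domain: str) -> bool:
    resp = requests.get(f"https://{front_domain}/",
                        headers={"Host": hidden_domain}, timeout=10)
    # Heuristic: content for the hidden domain came back, not the front's.
    return resp.ok and hidden_domain in resp.text

# Substitute domains served by the CDN under test (with authorization):
# print(fronting_probe("front.example-cdn.net", "hidden.example.org"))
```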

The notion of group invariance helps neural networks recognize patterns and features under geometric transformations. Indeed, it has been shown that group invariance can largely improve deep learning performance in practice, where such transformations are very common. This research studies affine invariance in continuous-domain convolutional neural networks. While other research has considered isometric or similarity invariance, we focus on the full structure of affine transforms generated by the general linear group $\mathrm{GL}_2(\mathbb{R})$. We introduce a new criterion to assess the similarity of two input signals under affine transformations. Then, unlike conventional methods that involve solving complex optimization problems on the Lie group $G_2$, we analyze the convolution of lifted signals and compute the corresponding integration over $G_2$. In sum, our research could eventually extend the scope of geometric transformations that practical deep-learning pipelines can handle.
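For orientation, a standard form of the convolve-and-integrate step over a group is given below; this is textbook notation under the assumption of an affine group with a left Haar measure, not necessarily the paper's exact definition.

```latex
% Group convolution of a lifted signal f with a filter \psi, for
% g, h in the affine group G = GL_2(\mathbb{R}) \ltimes \mathbb{R}^2
% with left Haar measure \mu (illustrative standard form):
\[
  (f \ast_{G} \psi)(g) \;=\; \int_{G} f(h)\,\psi\!\left(h^{-1} g\right)\, d\mu(h).
\]
% A signal x on \mathbb{R}^2 transforms under g = (A, b) by the left action
\[
  (g \cdot x)(u) \;=\; x\!\left(A^{-1}(u - b)\right),
\]
% which is the action used to lift planar signals to functions on G.
```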

Graph neural networks (GNNs) have proven effective in various network-related tasks. Most existing GNNs exploit only the low-frequency signals of node features, which raises one fundamental question: is low-frequency information all we need in real-world applications? In this paper, we first present an experimental investigation assessing the roles of low-frequency and high-frequency signals; the results clearly show that exploring the low-frequency signal alone is far from sufficient for learning effective node representations in different scenarios. How can we adaptively learn more than the low-frequency information in GNNs? A well-informed answer can help GNNs enhance their adaptability. We tackle this challenge and propose a novel Frequency Adaptation Graph Convolutional Network (FAGCN) with a self-gating mechanism that can adaptively integrate different signals in the process of message passing. For a deeper understanding, we theoretically analyze the roles of low-frequency and high-frequency signals in learning node representations, which further explains why FAGCN can perform well on different types of networks. Extensive experiments on six real-world networks validate that FAGCN not only alleviates the over-smoothing problem but also has advantages over state-of-the-art methods.
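The low-/high-frequency distinction can be made concrete with the common ε·I ± normalized-adjacency filter pair, sketched below on a toy graph; the fixed gating scalar stands in for the learned self-gate, and this is not a reimplementation of FAGCN.

```python
# Low-pass vs. high-pass graph filters on a toy graph: with the
# symmetrically normalized adjacency A_hat, (eps*I + A_hat) smooths
# neighboring features (low frequency) while (eps*I - A_hat) sharpens
# their differences (high frequency). A GNN would learn the gate per edge.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)          # toy undirected graph
X = np.random.default_rng(0).normal(size=(4, 2))   # node features

d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

eps = 0.3
low_pass = (eps * np.eye(4) + A_hat) @ X    # smooths features
high_pass = (eps * np.eye(4) - A_hat) @ X   # sharpens differences

gate = 0.5                                  # fixed here; learned in practice
print(gate * low_pass + (1 - gate) * high_pass)
```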

Most recent semantic segmentation methods adopt a fully convolutional network (FCN) with an encoder-decoder architecture. The encoder progressively reduces the spatial resolution and learns more abstract/semantic visual concepts with larger receptive fields. Since context modeling is critical for segmentation, the latest efforts have focused on enlarging the receptive field, through either dilated/atrous convolutions or inserted attention modules. However, the encoder-decoder-based FCN architecture remains unchanged. In this paper, we aim to provide an alternative perspective by treating semantic segmentation as a sequence-to-sequence prediction task. Specifically, we deploy a pure transformer (i.e., without convolution and resolution reduction) to encode an image as a sequence of patches. With global context modeled in every layer of the transformer, this encoder can be combined with a simple decoder to provide a powerful segmentation model, termed SEgmentation TRansformer (SETR). Extensive experiments show that SETR achieves a new state of the art on ADE20K (50.28% mIoU) and Pascal Context (55.83% mIoU), and competitive results on Cityscapes. In particular, we achieved first place (44.42% mIoU) on the highly competitive ADE20K test server leaderboard.
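A minimal convolution-free sketch of the patch-sequence idea: linearly embed patches, encode the sequence with a transformer, and upsample per-token class scores to pixels. The sizes and one-linear-layer head are toy choices, not the SETR configuration.

```python
# Segmentation as sequence-to-sequence: image -> patch tokens ->
# transformer encoder (no convolution, no downsampling) -> per-token
# class scores -> bilinear upsampling to per-pixel logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

image_size, patch, dim, n_classes = 64, 8, 128, 21
p = image_size // patch                       # 8 patches per side, 64 tokens

embed = nn.Linear(3 * patch * patch, dim)     # linear patch projection
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=2)
head = nn.Linear(dim, n_classes)              # simple per-token decoder head

x = torch.randn(1, 3, image_size, image_size)
patches = (x.reshape(1, 3, p, patch, p, patch)
             .permute(0, 2, 4, 1, 3, 5)
             .reshape(1, p * p, 3 * patch * patch))
encoded = encoder(embed(patches))             # global context in every layer
grid = head(encoded).transpose(1, 2).reshape(1, n_classes, p, p)
logits = F.interpolate(grid, size=(image_size, image_size),
                       mode="bilinear", align_corners=False)
print(logits.shape)                           # torch.Size([1, 21, 64, 64])
```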

Translational distance-based knowledge graph embedding has shown progressive improvements on the link prediction task, from TransE to the latest state-of-the-art RotatE. However, N-1, 1-N, and N-N predictions remain challenging. In this work, we propose a novel translational distance-based approach for knowledge graph link prediction. The proposed method is two-fold: first, we extend RotatE from the 2D complex domain to a high-dimensional space with orthogonal transforms, for greater modeling capacity. Second, the graph context is explicitly modeled via two directed context representations, which are used as part of the distance scoring function to measure the plausibility of triples during training and inference. The proposed approach effectively improves prediction accuracy on the difficult N-1, 1-N, and N-N cases of the knowledge graph link prediction task. Experimental results show that it outperforms the RotatE baseline on two benchmark data sets, especially on the data set (FB15k-237) with many high in-degree nodes.
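For reference, RotatE's well-known scoring function, together with one natural orthogonal-transform generalization (our illustrative reading of the abstract's first extension), reads as follows.

```latex
% RotatE's distance-based score for a triple (h, r, t), with entity
% embeddings h, t in C^k and a relation embedding r with |r_i| = 1:
\[
  d_r(h, t) \;=\; \lVert h \circ r - t \rVert ,
\]
% where \circ is the element-wise (Hadamard) product; each unit-modulus
% r_i acts as a 2D rotation. A high-dimensional generalization, sketched
% here as an illustration rather than the paper's exact formulation,
% replaces the per-coordinate rotation with an orthogonal matrix
% M_r (M_r^\top M_r = I):
\[
  d_r(h, t) \;=\; \lVert M_r\, h - t \rVert .
\]
```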
