
Temporal Action Detection (TAD) is fundamental yet challenging for real-world video applications. Leveraging the unique benefits of transformers, various DETR-based approaches have been adopted in TAD. However, it has recently been identified that the attention collapse in self-attention causes the performance degradation of DETR for TAD. Building upon previous research, this paper newly addresses the attention collapse problem in cross-attention within DETR-based TAD methods. Moreover, our findings reveal that cross-attention exhibits patterns distinct from predictions, indicating a short-cut phenomenon. To resolve this, we propose a new framework, Prediction-Feedback DETR (Pred-DETR), which utilizes predictions to restore the collapse and align the cross- and self-attention with predictions. Specifically, we devise novel prediction-feedback objectives using guidance from the relations of the predictions. As a result, Pred-DETR significantly alleviates the collapse and achieves state-of-the-art performance among DETR-based methods on various challenging benchmarks including THUMOS14, ActivityNet-v1.3, HACS, and FineAction.
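
A minimal sketch of the kind of prediction-feedback alignment the abstract describes: a relation matrix is built from the pairwise temporal IoU of the predicted segments and the decoder self-attention is pulled toward it. The IoU-based relation target and the KL form are illustrative assumptions, not the paper's exact objective.

```python
# Hedged sketch (not Pred-DETR's exact loss): align decoder self-attention
# with a "prediction relation" matrix built from pairwise temporal IoU of
# the predicted segments, so collapsed attention is pushed toward the
# structure that the predictions themselves exhibit.
import torch
import torch.nn.functional as F

def segment_iou(segments: torch.Tensor) -> torch.Tensor:
    """Pairwise temporal IoU for segments given as (start, end), shape [Q, 2]."""
    s, e = segments[:, 0], segments[:, 1]
    inter = (torch.min(e[:, None], e[None, :]) - torch.max(s[:, None], s[None, :])).clamp(min=0)
    union = (e - s)[:, None] + (e - s)[None, :] - inter
    return inter / union.clamp(min=1e-6)

def prediction_feedback_loss(attn: torch.Tensor, pred_segments: torch.Tensor) -> torch.Tensor:
    """attn: [Q, Q] softmax-normalized attention rows; pred_segments: [Q, 2]."""
    with torch.no_grad():
        relation = segment_iou(pred_segments)
        target = relation / relation.sum(dim=-1, keepdim=True).clamp(min=1e-6)
    # KL divergence between attention rows and prediction-relation rows.
    return F.kl_div(attn.clamp(min=1e-8).log(), target, reduction="batchmean")

# Toy usage
attn = torch.softmax(torch.randn(5, 5), dim=-1)
segs = torch.sort(torch.rand(5, 2), dim=-1).values
print(prediction_feedback_loss(attn, segs))
```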

Related content

 DeepSeek-AI,Aixin Liu,Bei Feng,Bing Xue,Bingxuan Wang,Bochao Wu,Chengda Lu,Chenggang Zhao,Chengqi Deng,Chenyu Zhang,Chong Ruan,Damai Dai,Daya Guo,Dejian Yang,Deli Chen,Dongjie Ji,Erhang Li,Fangyun Lin,Fucong Dai,Fuli Luo,Guangbo Hao,Guanting Chen,Guowei Li,H. Zhang,Han Bao,Hanwei Xu,Haocheng Wang,Haowei Zhang,Honghui Ding,Huajian Xin,Huazuo Gao,Hui Li,Hui Qu,J. L. Cai,Jian Liang,Jianzhong Guo,Jiaqi Ni,Jiashi Li,Jiawei Wang,Jin Chen,Jingchang Chen,Jingyang Yuan,Junjie Qiu,Junlong Li,Junxiao Song,Kai Dong,Kai Hu,Kaige Gao,Kang Guan,Kexin Huang,Kuai Yu,Lean Wang,Lecong Zhang,Lei Xu,Leyi Xia,Liang Zhao,Litong Wang,Liyue Zhang,Meng Li,Miaojun Wang,Mingchuan Zhang,Minghua Zhang,Minghui Tang,Mingming Li,Ning Tian,Panpan Huang,Peiyi Wang,Peng Zhang,Qiancheng Wang,Qihao Zhu,Qinyu Chen,Qiushi Du,R. J. Chen,R. L. Jin,Ruiqi Ge,Ruisong Zhang,Ruizhe Pan,Runji Wang,Runxin Xu,Ruoyu Zhang,Ruyi Chen,S. S. Li,Shanghao Lu,Shangyan Zhou,Shanhuang Chen,Shaoqing Wu,Shengfeng Ye,Shengfeng Ye,Shirong Ma,Shiyu Wang,Shuang Zhou,Shuiping Yu,Shunfeng Zhou,Shuting Pan,T. Wang,Tao Yun,Tian Pei,Tianyu Sun,W. L. Xiao,Wangding Zeng,Wanjia Zhao,Wei An,Wen Liu,Wenfeng Liang,Wenjun Gao,Wenqin Yu,Wentao Zhang,X. Q. Li,Xiangyue Jin,Xianzu Wang,Xiao Bi,Xiaodong Liu,Xiaohan Wang,Xiaojin Shen,Xiaokang Chen,Xiaokang Zhang,Xiaosha Chen,Xiaotao Nie,Xiaowen Sun,Xiaoxiang Wang,Xin Cheng,Xin Liu,Xin Xie,Xingchao Liu,Xingkai Yu,Xinnan Song,Xinxia Shan,Xinyi Zhou,Xinyu Yang,Xinyuan Li,Xuecheng Su,Xuheng Lin,Y. K. Li,Y. Q. Wang,Y. X. Wei,Y. X. Zhu,Yang Zhang,Yanhong Xu,Yanhong Xu,Yanping Huang,Yao Li,Yao Zhao,Yaofeng Sun,Yaohui Li,Yaohui Wang,Yi Yu,Yi Zheng,Yichao Zhang,Yifan Shi,Yiliang Xiong,Ying He,Ying Tang,Yishi Piao,Yisong Wang,Yixuan Tan,Yiyang Ma,Yiyuan Liu,Yongqiang Guo,Yu Wu,Yuan Ou,Yuchen Zhu,Yuduan Wang,Yue Gong,Yuheng Zou,Yujia He,Yukun Zha,Yunfan Xiong,Yunxian Ma,Yuting Yan,Yuxiang Luo,Yuxiang You,Yuxuan Liu,Yuyang Zhou,Z. F. Wu,Z. Z. Ren,Zehui Ren,Zhangli Sha,Zhe Fu,Zhean Xu,Zhen Huang,Zhen Zhang,Zhenda Xie,Zhengyan Zhang,Zhewen Hao,Zhibin Gou,Zhicheng Ma,Zhigang Yan,Zhihong Shao,Zhipeng Xu,Zhiyu Wu,Zhongyu Zhang,Zhuoshu Li,Zihui Gu,Zijia Zhu,Zijun Liu,Zilin Li,Ziwei Xie,Ziyang Song,Ziyi Gao,Zizheng Pan

We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token. To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2. Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance. We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities. Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models. Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training. In addition, its training process is remarkably stable: throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks. The model checkpoints are available at //github.com/deepseek-ai/DeepSeek-V3.
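
A minimal sketch in the spirit of the auxiliary-loss-free load-balancing idea mentioned above: each expert carries a routing bias that only influences top-k selection and is nudged toward balance from the observed load, with no extra loss term. The class name, update rule, and hyperparameters are illustrative assumptions, not DeepSeek-V3's implementation.

```python
# Hedged sketch: bias-adjusted top-k routing for load balancing without an
# auxiliary loss. The bias affects which experts are chosen, while the gating
# weights come from the raw affinity scores of the chosen experts.
import torch

class BiasBalancedRouter:
    def __init__(self, num_experts: int, top_k: int, bias_lr: float = 0.001):
        self.bias = torch.zeros(num_experts)
        self.top_k = top_k
        self.bias_lr = bias_lr

    def route(self, scores: torch.Tensor):
        """scores: [tokens, experts] affinity scores in [0, 1]."""
        # Bias influences which experts are selected...
        _, idx = torch.topk(scores + self.bias, self.top_k, dim=-1)
        # ...but the combination weights use the raw scores of the chosen experts.
        gate = torch.gather(scores, -1, idx)
        gate = gate / gate.sum(dim=-1, keepdim=True).clamp(min=1e-9)
        # Update the bias from observed load: push under-loaded experts up.
        load = torch.zeros_like(self.bias).scatter_add_(
            0, idx.flatten(), torch.ones(idx.numel()))
        self.bias += self.bias_lr * torch.sign(load.mean() - load)
        return idx, gate

router = BiasBalancedRouter(num_experts=8, top_k=2)
idx, gate = router.route(torch.rand(16, 8))
print(idx.shape, gate.shape)
```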

Existing knowledge graph (KG) embedding models have primarily focused on static KGs. However, real-world KGs do not remain static, but rather evolve and grow in tandem with the development of KG applications. Consequently, new facts and previously unseen entities and relations continually emerge, necessitating an embedding model that can quickly learn and transfer new knowledge through growth. Motivated by this, we delve into an expanding field of KG embedding in this paper, i.e., lifelong KG embedding. We consider knowledge transfer and retention of the learning on growing snapshots of a KG without having to learn embeddings from scratch. The proposed model includes a masked KG autoencoder for embedding learning and update, with an embedding transfer strategy to inject the learned knowledge into the new entity and relation embeddings, and an embedding regularization method to avoid catastrophic forgetting. To investigate the impacts of different aspects of KG growth, we construct four datasets to evaluate the performance of lifelong KG embedding. Experimental results show that the proposed model outperforms the state-of-the-art inductive and lifelong embedding baselines.
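
A minimal sketch of the embedding-regularization idea described above: when training on a new KG snapshot, entities and relations that already existed are penalized for drifting from their previously learned embeddings. This is a generic anchoring loss under assumed shapes, not the paper's exact regularizer.

```python
# Hedged sketch: anchor old entity/relation embeddings to the previous
# snapshot to mitigate catastrophic forgetting during lifelong KG embedding.
import torch

def retention_loss(current: torch.Tensor,
                   previous: torch.Tensor,
                   old_ids: torch.Tensor,
                   weight: float = 1.0) -> torch.Tensor:
    """current: [N_new, d] embeddings, previous: [N_old, d] frozen snapshot,
    old_ids: indices of entities/relations present in the old snapshot."""
    drift = current[old_ids] - previous.detach()
    return weight * drift.pow(2).sum(dim=-1).mean()

# Toy usage: 100 old entities plus 20 newly emerged ones.
prev = torch.randn(100, 64)
curr = torch.nn.Parameter(torch.cat([prev.clone(), torch.randn(20, 64)]))
loss = retention_loss(curr, prev, torch.arange(100))
loss.backward()
print(loss.item())
```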

Graph Neural Networks (GNNs) have been shown as promising solutions for collaborative filtering (CF) with the modeling of user-item interaction graphs. The key idea of existing GNN-based recommender systems is to recursively perform the message passing along the user-item interaction edge for refining the encoded embeddings. Despite their effectiveness, however, most of the current recommendation models rely on sufficient and high-quality training data, such that the learned representations can well capture accurate user preference. User behavior data in many practical recommendation scenarios is often noisy and exhibits skewed distribution, which may result in suboptimal representation performance in GNN-based models. In this paper, we propose SHT, a novel Self-Supervised Hypergraph Transformer framework which augments user representations by exploring the global collaborative relationships in an explicit way. Specifically, we first empower the graph neural CF paradigm to maintain global collaborative effects among users and items with a hypergraph transformer network. With the distilled global context, a cross-view generative self-supervised learning component is proposed for data augmentation over the user-item interaction graph, so as to enhance the robustness of recommender systems. Extensive experiments demonstrate that SHT can significantly improve the performance over various state-of-the-art baselines. Further ablation studies show the superior representation ability of our SHT recommendation framework in alleviating the data sparsity and noise issues. The source code and evaluation datasets are available at: //github.com/akaxlh/SHT.
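
A minimal sketch of hypergraph-style global message passing in the spirit of the description above: node embeddings are softly aggregated into a small set of learnable hyperedges and then propagated back, so every node receives a global collaborative signal. The layer sizes and the two-step linear scheme are illustrative assumptions, not SHT's exact architecture.

```python
# Hedged sketch: soft node-to-hyperedge assignment followed by propagation
# back to the nodes, giving each node access to global structure.
import torch
import torch.nn as nn

class HypergraphLayer(nn.Module):
    def __init__(self, dim: int, num_hyperedges: int = 32):
        super().__init__()
        self.hyperedges = nn.Parameter(torch.randn(num_hyperedges, dim) * 0.02)

    def forward(self, node_emb: torch.Tensor) -> torch.Tensor:
        # node_emb: [N, d]. Soft assignment of nodes to hyperedges.
        assign = torch.softmax(node_emb @ self.hyperedges.t(), dim=-1)   # [N, H]
        edge_msg = assign.t() @ node_emb                                 # [H, d]
        # Propagate the global hyperedge messages back to the nodes.
        out = assign @ edge_msg                                          # [N, d]
        return node_emb + out  # residual connection keeps local information

layer = HypergraphLayer(dim=64)
users = torch.randn(1024, 64)
print(layer(users).shape)  # torch.Size([1024, 64])
```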

We present a large-scale study on unsupervised spatiotemporal representation learning from videos. With a unified perspective on four recent image-based frameworks, we study a simple objective that can easily generalize all these methods to space-time. Our objective encourages temporally-persistent features in the same video, and in spite of its simplicity, it works surprisingly well across: (i) different unsupervised frameworks, (ii) pre-training datasets, (iii) downstream datasets, and (iv) backbone architectures. We draw a series of intriguing observations from this study, e.g., we discover that encouraging long-spanned persistency can be effective even if the timespan is 60 seconds. In addition to state-of-the-art results in multiple benchmarks, we report a few promising cases in which unsupervised pre-training can outperform its supervised counterpart. Code is made available at //github.com/facebookresearch/SlowFast.
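
A minimal sketch of the "temporally-persistent features" objective described above, in its generic contrastive (InfoNCE) form: two clips sampled from the same video are treated as positives, while clips from other videos in the batch act as negatives. This is the common contrastive formulation, not any single framework's exact loss.

```python
# Hedged sketch: InfoNCE between two clips of the same video encourages
# temporally-persistent representations.
import torch
import torch.nn.functional as F

def temporal_persistence_loss(z1: torch.Tensor, z2: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: [B, d] embeddings of two clips from the same B videos."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature          # [B, B] cosine similarities
    labels = torch.arange(z1.size(0))           # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

loss = temporal_persistence_loss(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```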

Few-shot Knowledge Graph (KG) completion is a focus of current research, where each task aims at querying unseen facts of a relation given its few-shot reference entity pairs. Recent attempts solve this problem by learning static representations of entities and references, ignoring their dynamic properties, i.e., entities may exhibit diverse roles within task relations, and references may make different contributions to queries. This work proposes an adaptive attentional network for few-shot KG completion by learning adaptive entity and reference representations. Specifically, entities are modeled by an adaptive neighbor encoder to discern their task-oriented roles, while references are modeled by an adaptive query-aware aggregator to differentiate their contributions. Through the attention mechanism, both entities and references can capture their fine-grained semantic meanings, and thus render more expressive representations. This makes the representations more predictive for knowledge acquisition in the few-shot scenario. Evaluation in link prediction on two public datasets shows that our approach achieves new state-of-the-art results with different few-shot sizes.
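
A minimal sketch of a query-aware reference aggregator in the spirit of the description above: each few-shot reference pair is weighted by an attention score against the query pair, so references contribute unequally. The bilinear scoring function and dimensions are illustrative assumptions, not the paper's exact module.

```python
# Hedged sketch: attention-weighted aggregation of few-shot reference pairs,
# conditioned on the query pair.
import torch
import torch.nn as nn

class QueryAwareAggregator(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Bilinear(dim, dim, 1)

    def forward(self, query: torch.Tensor, references: torch.Tensor) -> torch.Tensor:
        # query: [d] embedding of the query entity pair,
        # references: [K, d] embeddings of the K reference pairs.
        q = query.unsqueeze(0).expand_as(references)
        weights = torch.softmax(self.score(q, references).squeeze(-1), dim=0)  # [K]
        return (weights.unsqueeze(-1) * references).sum(dim=0)                 # [d]

agg = QueryAwareAggregator(dim=100)
print(agg(torch.randn(100), torch.randn(5, 100)).shape)  # torch.Size([100])
```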

Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks. Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep, which limit model scalability. In this work, we propose a novel Attentive GNN to tackle these challenges by incorporating a triple-attention mechanism, i.e., node self-attention, neighborhood attention, and layer memory attention. We explain why the proposed attentive modules can improve GNN for few-shot learning with theoretical analysis and illustrations. Extensive experiments show that the proposed Attentive GNN outperforms state-of-the-art GNN-based methods for few-shot learning on the mini-ImageNet and Tiered-ImageNet datasets, in both inductive and transductive settings.
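
A minimal sketch of one of the three attention pieces named above (node self-attention over the node features in an episode); the neighborhood and layer-memory attentions would follow the same pattern. The module is an illustrative stand-in, not the paper's exact design.

```python
# Hedged sketch: residual multi-head self-attention over the nodes of a
# few-shot episode graph.
import torch
import torch.nn as nn

class NodeSelfAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        # node_feats: [episodes, nodes, dim]; each node attends to all others.
        out, _ = self.attn(node_feats, node_feats, node_feats)
        return node_feats + out  # residual connection keeps the original features

layer = NodeSelfAttention(dim=128)
print(layer(torch.randn(2, 30, 128)).shape)  # torch.Size([2, 30, 128])
```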

Convolutional neural networks (CNNs) have shown dramatic improvements in single image super-resolution (SISR) by using large-scale external samples. Despite their remarkable performance based on the external dataset, they cannot exploit internal information within a specific image. Another problem is that they are applicable only to the specific data conditions under which they were supervised; for instance, the low-resolution (LR) image should be a "bicubic" downsampled noise-free image from a high-resolution (HR) one. To address both issues, zero-shot super-resolution (ZSSR) has been proposed for flexible internal learning. However, it requires thousands of gradient updates, i.e., long inference time. In this paper, we present Meta-Transfer Learning for Zero-Shot Super-Resolution (MZSR), which leverages ZSSR. Precisely, it is based on finding a generic initial parameter that is suitable for internal learning. Thus, we can exploit both external and internal information, where a single gradient update can yield quite considerable results (see Figure 1). With our method, the network can quickly adapt to a given image condition. In this respect, our method can be applied to a large spectrum of image conditions within a fast adaptation process.
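
A minimal sketch of the test-time adaptation step implied above: starting from a meta-learned initialization, the network takes a single gradient step on a pseudo pair built from the given LR image itself (a further-downscaled copy as input, the LR image as target). The function name, interpolation choices, and optimizer settings are simplifying assumptions, not MZSR's exact pipeline.

```python
# Hedged sketch: one internal-learning gradient step on the test image.
import torch
import torch.nn.functional as F

def adapt_one_step(model: torch.nn.Module, lr_img: torch.Tensor,
                   scale: int = 2, lr: float = 1e-2) -> torch.nn.Module:
    """lr_img: [1, C, H, W] low-resolution test image."""
    son = F.interpolate(lr_img, scale_factor=1.0 / scale, mode="bicubic",
                        align_corners=False)          # LR-of-LR pseudo input
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    pred = model(F.interpolate(son, size=lr_img.shape[-2:], mode="bicubic",
                               align_corners=False))  # predict the LR image back
    loss = F.l1_loss(pred, lr_img)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return model

# Toy usage with a tiny conv layer standing in for the SR network.
net = torch.nn.Conv2d(3, 3, 3, padding=1)
net = adapt_one_step(net, torch.rand(1, 3, 64, 64))
```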

Learning with limited data is a key challenge for visual recognition. Few-shot learning methods address this challenge by learning an instance embedding function from seen classes and applying the function to instances from unseen classes with limited labels. This style of transfer learning is task-agnostic: the embedding function is not learned to be optimally discriminative with respect to the unseen classes, where discerning among them is the target task. In this paper, we propose a novel approach to adapt the embedding model to the target classification task, yielding embeddings that are task-specific and discriminative. To this end, we employ a type of self-attention mechanism called the Transformer to transform the embeddings from task-agnostic to task-specific by relating the test instances to the training instances in both seen and unseen classes. Our approach also extends to both transductive and generalized few-shot classification, two important settings that have essential use cases. We verify the effectiveness of our model on two standard benchmark few-shot classification datasets, MiniImageNet and CUB, where our approach demonstrates state-of-the-art empirical performance.
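
A minimal sketch of the embedding-adaptation idea described above: class prototypes computed from the support set are passed through a Transformer encoder so each prototype is contextualized by the other classes in the task, and queries are scored by distance to the adapted prototypes. Layer sizes and the nearest-prototype scoring are illustrative assumptions.

```python
# Hedged sketch: set-to-set adaptation of class prototypes with a Transformer
# encoder, followed by distance-based classification of queries.
import torch
import torch.nn as nn

class SetToSetAdapter(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, support: torch.Tensor, queries: torch.Tensor) -> torch.Tensor:
        # support: [N_way, K_shot, dim], queries: [Q, dim]
        protos = support.mean(dim=1, keepdim=True).transpose(0, 1)  # [1, N_way, dim]
        protos = self.encoder(protos).squeeze(0)                    # task-specific prototypes
        logits = -torch.cdist(queries, protos)                      # nearest-prototype scores
        return logits

adapter = SetToSetAdapter()
print(adapter(torch.randn(5, 1, 64), torch.randn(15, 64)).shape)  # torch.Size([15, 5])
```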

We propose a novel single shot object detection network named Detection with Enriched Semantics (DES). Our motivation is to enrich the semantics of object detection features within a typical deep detector, by a semantic segmentation branch and a global activation module. The segmentation branch is supervised by weak segmentation ground-truth, i.e., no extra annotation is required. In conjunction with that, we employ a global activation module which learns the relationship between channels and object classes in a self-supervised manner. Comprehensive experimental results on both PASCAL VOC and MS COCO detection datasets demonstrate the effectiveness of the proposed method. In particular, with a VGG16 based DES, we achieve an mAP of 81.7 on VOC2007 test and an mAP of 32.8 on COCO test-dev with an inference speed of 31.5 milliseconds per image on a Titan Xp GPU. With a lower resolution version, we achieve an mAP of 79.7 on VOC2007 with an inference speed of 13.0 milliseconds per image.
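
A minimal sketch of a global-activation-style module in the spirit of the description above: feature channels are globally pooled and re-weighted by a small bottleneck network so that class-relevant channels can be emphasized. This is a squeeze-and-excitation-like stand-in under assumed dimensions, not DES's exact module.

```python
# Hedged sketch: global pooling plus a channel-gating bottleneck that
# re-weights detection feature channels.
import torch
import torch.nn as nn

class GlobalActivation(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [B, C, H, W] detection feature map.
        weights = self.fc(x.mean(dim=(2, 3)))            # [B, C] channel weights
        return x * weights.unsqueeze(-1).unsqueeze(-1)   # re-weighted features

gate = GlobalActivation(channels=512)
print(gate(torch.randn(2, 512, 38, 38)).shape)  # torch.Size([2, 512, 38, 38])
```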

We investigate the problem of automatically determining what type of shoe left an impression found at a crime scene. This recognition problem is made difficult by the variability in types of crime scene evidence (ranging from traces of dust or oil on hard surfaces to impressions made in soil) and the lack of comprehensive databases of shoe outsole tread patterns. We find that mid-level features extracted by pre-trained convolutional neural nets are surprisingly effective descriptors for this specialized domain. However, the choice of similarity measure for matching exemplars to a query image is essential to good performance. For matching multi-channel deep features, we propose the use of multi-channel normalized cross-correlation and analyze its effectiveness. Our proposed metric significantly improves performance in matching crime scene shoeprints to laboratory test impressions. We also show its effectiveness in other cross-domain image retrieval problems: matching facade images to segmentation labels and aerial photos to map images. Finally, we introduce a discriminatively trained variant and fine-tune our system through our proposed metric, obtaining state-of-the-art performance.
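
A minimal sketch of multi-channel normalized cross-correlation as described above: each feature channel of query and exemplar is normalized to zero mean and unit norm, the channels are correlated independently, and the per-channel scores are averaged. Simplified to same-size inputs and a single offset, intended only to show the metric's structure rather than the full sliding-window matcher.

```python
# Hedged sketch: per-channel normalized cross-correlation averaged across
# channels, for a single alignment of same-size feature maps.
import torch

def mcncc(query: torch.Tensor, exemplar: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """query, exemplar: [C, H, W] deep feature maps of the same size."""
    def normalize(f):
        f = f - f.mean(dim=(1, 2), keepdim=True)                      # zero mean per channel
        return f / f.flatten(1).norm(dim=1).clamp(min=eps)[:, None, None]
    q, e = normalize(query), normalize(exemplar)
    per_channel = (q * e).sum(dim=(1, 2))                             # NCC per channel
    return per_channel.mean()                                         # average across channels

score = mcncc(torch.randn(256, 32, 32), torch.randn(256, 32, 32))
print(score.item())
```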
