Real-time Federated Evolutionary Neural Architecture Search (Zhu and Jin 2020) https://arxiv.org/abs/2003.02793
BATS: Binary ArchitecTure Search (Bulat et al. 2020)
ADWPNAS: Architecture-Driven Weight Prediction for Neural Architecture Search (Zhang et al. 2020)
NAS-Count: Counting-by-Density with Neural Architecture Search (Hu et al. 2020)
ImmuNetNAS: An Immune-network approach for searching Convolutional Neural Network Architectures (Kefan and Pang 2020)
Neural Inheritance Relation Guided One-Shot Layer Assignment Search (Meng et al. 2020)
Automatically Searching for U-Net Image Translator Architecture (Shu and Wang 2020)
AutoEmb: Automated Embedding Dimensionality Search in Streaming Recommendations (Zhao et al. 2020)
Memory-Efficient Models for Scene Text Recognition via Neural Architecture Search (Hong et al. 2020; accepted at WACV’20 workshop)
Search for Winograd-Aware Quantized Networks (Fernandez-Marques et al. 2020)
Semi-Supervised Neural Architecture Search (Luo et al. 2020)
Neural Architecture Search for Compressed Sensing Magnetic Resonance Image Reconstruction (Yan et al. 2020)
DSNAS: Direct Neural Architecture Search without Parameter Retraining (Hu et al. 2020)
Neural Architecture Search For Fault Diagnosis (Li et al. 2020; accepted at ESREL’20)
Learning Architectures for Binary Networks (Singh et al. 2020)
Efficient Evolutionary Architecture Search for CNN Optimization on GTSRB (Johner and Wassner 2020; accepted at ICMLA’19)
Automating Deep Neural Network Model Selection for Edge Inference (Lu et al. 2020; accepted at CogMI’20)
Neural Architecture Search over Decentralized Data (Xu et al. 2020)
Automatic Structural Search for Multi-task Learning VALPs (Garciarena et al. 2020; accepted at OLA’20)
RandomNet: Towards Fully Automatic Neural Architecture Design for Multimodal Learning (Alletto et al. 2020; accepted at Meta-Eval 2020 workshop)
Classifying the classifier: dissecting the weight space of neural networks (Eilertsen et al. 2020)
Stabilizing Differentiable Architecture Search via Perturbation-based Regularization (Chen and Hsieh 2020)
Best of Both Worlds: AutoML Codesign of a CNN and its Hardware Accelerator (Abdelfattah et al. 2020; accepted at DAC’20)
Variational Depth Search in ResNets (Antoran et al. 2020)
Co-Exploration of Neural Architectures and Heterogeneous ASIC Accelerator Designs Targeting Multiple Tasks (Yang et al. 2020; accepted at DAC’20)
FPNet: Customized Convolutional Neural Network for FPGA Platforms (Yang et al. 2020; accepted at FPT’20)
AutoFCL: Automatically Tuning Fully Connected Layers for Transfer Learning (Basha et al. 2020)
NASS: Optimizing Secure Inference via Neural Architecture Search (Bian et al. 2020; accepted at ECAI’20)
Search for Better Students to Learn Distilled Knowledge (Gu et al. 2020)
Bayesian Neural Architecture Search using A Training-Free Performance Metric (Camero et al. 2020)
NAS-Bench-1Shot1: Benchmarking and Dissecting One-Shot Neural Architecture Search (Zela et al. 2020; accepted at ICLR’20)
Convolution Neural Network Architecture Learning for Remote Sensing Scene Classification (Chen et al. 2020)
Multi-objective Neural Architecture Search via Non-stationary Policy Gradient (Chen et al. 2020)
Efficient Neural Architecture Search: A Broad Version (Ding et al. 2020)
ENAS U-Net: Evolutionary Neural Architecture Search for Retinal Vessel (Fan et al. 2020)
FlexiBO: Cost-Aware Multi-Objective Optimization of Deep Neural Networks (Iqbal et al. 2020)
Up to two billion times acceleration of scientific simulations with deep neural architecture search (Kasim et al. 2020)
Latency-Aware Differentiable Neural Architecture Search (Xu et al. 2020)
MixPath: A Unified Approach for One-shot Neural Architecture Search (Chu et al. 2020)
Neural Architecture Search for Skin Lesion Classification (Kwasigroch et al. 2020; accepted at IEEE Access)
AdaBERT: Task-Adaptive BERT Compression with Differentiable Neural Architecture Search (Chen et al. 2020)
Neural Architecture Search for Deep Image Prior (Ho et al. 2020)
Fast Neural Network Adaptation via Parameter Remapping and Architecture Search (Fang et al. 2020; accepted at ICLR’20)
FTT-NAS: Discovering Fault-Tolerant Neural Architecture (Li et al. 2020; accepted at ASP-DAC 2020)
Deeper Insights into Weight Sharing in Neural Architecture Search (Zhang et al. 2020)
EcoNAS: Finding Proxies for Economical Neural Architecture Search (Zhou et al. 2020; accepted at CVPR’20)
DeepMaker: A multi-objective optimization framework for deep neural networks in embedded systems (Loni et al. 2020; accepted at Microprocessors and Microsystems)
Auto-ORVNet: Orientation-boosted Volumetric Neural Architecture Search for 3D Shape Classification (Ma et al. 2020; accepted at IEEE Access)
NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search (Dong and Yang 2020; accepted at ICLR’20)
A collection of CVPR 2020 papers with open-source code; issues sharing additional CVPR 2020 open-source projects are welcome.
Spatially Attentive Output Layer for Image Classification
Paper: not yet released
Code:
Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection
BiDet: An Efficient Binarized Object Detector
Point-GNN: Graph Neural Network for 3D Object Detection in a Point Cloud
MAST: A Memory-Augmented Self-supervised Tracker
Paper:
Code:
Cars Can't Fly up in the Sky: Improving Urban-Scene Segmentation via Height-driven Attention Networks
Paper:
Code:
PolarMask: Single Shot Instance Segmentation with Polar Representation
CenterMask: Real-Time Anchor-Free Instance Segmentation
Deep Snake for Real-Time Instance Segmentation
Paper:
Code:
State-Aware Tracker for Real-Time Video Object Segmentation
Paper:
Code:
Learning Fast and Robust Target Models for Video Object Segmentation
Rethinking Performance Estimation in Neural Architecture Search
CARS: Continuous Evolution for Efficient Neural Architecture Search
Watch your Up-Convolution: CNN Based Generative Deep Neural Networks are Failing to Reproduce Spectral Distributions
Weakly supervised discriminative feature learning with state information for person identification
FPConv: Learning Local Flattening for Point Convolution
D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features
Searching Central Difference Convolutional Networks for Face Anti-Spoofing
Paper:
Code:
Suppressing Uncertainties for Large-Scale Facial Expression Recognition
Paper:
Code (to be open-sourced soon):
The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation
Distribution-Aware Coordinate Representation for Human Pose Estimation
Homepage:
Paper:
Code:
Compressed Volumetric Heatmaps for Multi-Person 3D Pose Estimation
Paper: not available yet
Code:
VIBE: Video Inference for Human Body Pose and Shape Estimation
Back to the Future: Joint Aware Temporal Deep Learning 3D Human Pose Estimation
Cross-View Tracking for Multi-Human 3D Pose Estimation at over 100 FPS
PointAugment: an Auto-Augmentation Framework for Point Cloud Classification
ABCNet: Real-time Scene Text Spotting with Adaptive Bezier-Curve Network
Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution
HRank: Filter Pruning using High-Rank Feature Map
Domain Decluttering: Simplifying Images to Mitigate Synthetic-Real Domain Shift and Improve Depth Estimation
Paper:
Code:
Cascaded Deep Video Deblurring Using Temporal Sharpness Prior
VC R-CNN: Visual Commonsense R-CNN
Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training
Learning for Video Compression with Hierarchical Quality and Recurrent Enhancement
Social-STGCNN: A Social Spatio-Temporal Graph Convolutional Neural Network for Human Trajectory Prediction
IntrA: 3D Intracranial Aneurysm Dataset for Deep Learning
GhostNet: More Features from Cheap Operations
Paper:
Code:
AdderNet: Do We Really Need Multiplications in Deep Learning?
Deep Image Harmonization via Domain Verification
Blurry Video Frame Interpolation
Extremely Dense Point Correspondences using a Learned Feature Descriptor
Filter Grafting for Deep Neural Networks
Action Segmentation with Joint Self-Supervised Temporal Domain Adaptation
Detecting Attended Visual Targets in Video
Paper:
Code:
Deep Image Spatial Transformation for Person Image Generation
Rethinking Zero-shot Video Classification: End-to-end Training for Realistic Applications
In this paper, we propose a residual non-local attention network for high-quality image restoration. Previous methods are restricted by local convolutional operations and the equal treatment of spatial- and channel-wise features, and they do not account for the uneven distribution of information in corrupted images. To address this issue, we design local and non-local attention blocks that extract features capturing long-range dependencies between pixels and pay more attention to the challenging parts. Specifically, each (non-)local attention block contains a trunk branch and a (non-)local mask branch. The trunk branch extracts hierarchical features, while the local and non-local mask branches adaptively rescale these hierarchical features with mixed attention. The local mask branch concentrates on local structures via convolutional operations, whereas non-local attention focuses on long-range dependencies across the whole feature map. Furthermore, we propose residual local and non-local attention learning to train the very deep network, which further enhances its representation ability. The proposed method generalizes to various image restoration tasks, such as image denoising, demosaicing, compression artifact reduction, and super-resolution. Experiments demonstrate that our method achieves results comparable to or better than those of recent leading methods, both quantitatively and visually.
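The trunk-and-mask structure described above can be summarized with a short sketch. The following is a minimal, illustrative PyTorch version, not the authors' code: a trunk branch extracts features, a sigmoid-gated mask branch rescales them, and a residual connection carries the input through. The class name `LocalAttentionBlock`, the layer sizes, and the two-convolution branches are assumptions made for illustration; the non-local variant would additionally place a non-local operation inside the mask branch.

```python
# Minimal sketch of a trunk/mask attention block in the spirit of the abstract above.
# All layer choices here are illustrative assumptions, not the authors' architecture.
import torch
import torch.nn as nn

class LocalAttentionBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Trunk branch: plain convolutional feature extraction.
        self.trunk = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Local mask branch: convolutions followed by a sigmoid gate,
        # producing per-position, per-channel attention weights in [0, 1].
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = self.trunk(x)
        m = self.mask(x)
        # Residual attention learning: the identity path keeps very deep
        # networks trainable while the mask rescales the trunk features.
        return x + t * m

if __name__ == "__main__":
    block = LocalAttentionBlock(64)
    out = block(torch.randn(1, 64, 32, 32))
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```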
This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques.
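As a concrete illustration of the continuous relaxation, here is a minimal sketch (not the official DARTS implementation) of a single mixed edge: each candidate operation's output is weighted by a softmax over learnable architecture parameters, so the architecture choice becomes differentiable and can be optimized by gradient descent. The candidate operation set, channel count, and the name `MixedOp` are illustrative assumptions; the real method uses a larger search space and bi-level optimization of weights and architecture parameters.

```python
# Sketch of the softmax-based continuous relaxation over candidate operations.
# The operation set below is a toy stand-in for the full DARTS search space.
import torch
import torch.nn as nn
import torch.nn.functional as F

def candidate_ops(channels: int) -> nn.ModuleList:
    # Tiny illustrative candidate set; DARTS uses separable/dilated convs,
    # pooling, identity, and a zero operation.
    return nn.ModuleList([
        nn.Identity(),
        nn.Conv2d(channels, channels, 3, padding=1, bias=False),
        nn.Conv2d(channels, channels, 5, padding=2, bias=False),
        nn.AvgPool2d(3, stride=1, padding=1),
    ])

class MixedOp(nn.Module):
    """One edge of the cell: a soft mixture over all candidate operations."""
    def __init__(self, channels: int):
        super().__init__()
        self.ops = candidate_ops(channels)
        # Architecture parameters: one logit per candidate op, learned jointly
        # with the network weights via gradient descent.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

if __name__ == "__main__":
    edge = MixedOp(16)
    y = edge(torch.randn(2, 16, 8, 8))
    print(y.shape)  # torch.Size([2, 16, 8, 8])
    # After search, the edge is discretized by keeping the op with the largest alpha.
```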