Deep Learning 25
- COOT: Cooperative Hierarchical Transformer for Video-Text Representation Learning
- Data2Vis: Automatic Generation of Data Visualizations Using Sequence-to-Sequence Recurrent Neural Networks
- Stanford CS231n Lec 02. Image Classification
- Pre-training of Deep Bidirectional Transformers for Language Understanding (BERT)
- Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
- Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks (DCGAN)
- Unsupervised Intra-domain Adaptation for Semantic Segmentation
- Multimodal Unsupervised Image-to-Image Translation (MUNIT)
- DETR: End-to-End Object Detection with Transformers
- Using MMDetection
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (ViT)
- StyleGAN: A Style-Based Generator Architecture for Generative Adversarial Networks
- HarDNet: A Low Memory Traffic Network
- Neural Architecture Search With Reinforcement Learning
- Handlang: A Sign Language Education Web Application Using Deep Learning Models (2)
- Handlang: A Sign Language Education Web Application Using Deep Learning Models (1)
- Meta Reinforcement Learning As Task Inference
- Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
- Attention Is All You Need
- Generative Adversarial Nets
- PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization
- You Only Look Once (YOLO): Unified, Real-Time Object Detection
- Text Summarization with Pretrained Encoders
- Fine-tune BERT for Extractive Summarization
- Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks