Lab Seminar on Advanced Deep Learning and Reinforcement Learning
Page Information

Author: Administrator
Comments: 0 | Views: 1,309 | Date: 2022-02-12 03:39
Body
Deep Learning
<Attention>
Week 1: Attention
- Why Attention?
- Seq2Seq (Sequence to Sequence)
- Seq2Seq with Attention
- Query, Key, Value (see the sketch after Week 4)
Week 2: Transformer
- Self-Attention Mechanism
- Structure of the Transformer (3 Different Self-Attention Mechanisms)
- Similarities and Differences between Seq2Seq and the Transformer
- Multi-Head Attention
Week 3: Transformer in Vision 1
- ViT (Vision Transformer)
- DeiT (Data-efficient Image Transformer)
- MLP-Mixer
Week 4: Transformer in Vision 2
- CNN (Convolutional Neural Network) vs ViT vs MLP-Mixer
- DETR (Detection Transformer)
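For reference, here is a minimal NumPy sketch of the core idea from Weeks 1 and 2: queries, keys, and values combined by a softmax-weighted sum (scaled dot-product attention). The token count, embedding size, and random projection matrices are illustrative placeholders only, not taken from any of the papers listed above.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of values

# Toy self-attention: 4 tokens with 8-dimensional embeddings (shapes chosen for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(X @ W_q, X @ W_k, X @ W_v)
print(out.shape)  # (4, 8): one context vector per token
```

Multi-head attention (Week 2) repeats this computation with several independent projection matrices and concatenates the per-head outputs.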
<Graph Neural Networks>
Week 1: Graph Generation
- Radius-graph
- kNN-graph (see the sketch after Week 3)
- Farthest Point Sampling
Week 2: Graph Convolutional Networks (GCN)
- Neural Network for Graphs (NN4G)
- node2vec
- GraphSAGE
- Kipf and Welling’s GCN
- Message Passing Neural Network (MPNN)
Week 3: Applications of GNNs
- Feature Learning on Point Clouds (PointNet++, DGCNN)
- 3D Object Detection (Point-GNN)
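To make the graph track concrete, here is a rough NumPy sketch combining Week 1's kNN-graph construction with a single mean-aggregation message-passing step in the spirit of Week 2's GCN/GraphSAGE. The learnable weight matrices, nonlinearities, and the hierarchical grouping of PointNet++/DGCNN are omitted, and the point cloud is made up for illustration.

```python
import numpy as np

def knn_graph(points, k=3):
    """Connect each point to its k nearest neighbours; returns directed (i, j) edges."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)                                     # exclude self-loops
    nbrs = np.argsort(d, axis=1)[:, :k]                             # k closest nodes per point
    return [(i, j) for i in range(len(points)) for j in nbrs[i]]

def mean_aggregate(features, edges, num_nodes):
    """One message-passing step: each node averages its neighbours' features."""
    agg = np.zeros_like(features)
    count = np.zeros((num_nodes, 1))
    for i, j in edges:
        agg[i] += features[j]
        count[i] += 1
    return agg / np.maximum(count, 1)

# Toy point cloud: 6 random 3-D points, using coordinates as the initial node features
rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 3))
edges = knn_graph(pts, k=3)
h1 = mean_aggregate(pts, edges, num_nodes=6)  # smoothed per-point features after one hop
print(len(edges), h1.shape)  # 18 edges, (6, 3)
```

A radius-graph (also Week 1) would replace the top-k selection with a distance threshold, and a full GCN layer would follow the aggregation with a learned linear map and a nonlinearity.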
Reinforcement Learning
<Meta-Learning and Self-Supervision>
Week 1: Model Predictive Control and Meta-Learning
- Concept of MPC, Sampling-based MPC (Random Shooting, MPPI); a minimal sketch follows Week 4
- Meta-Learning (Recurrence-based, MAML, FOMAML, Reptile)
Week 2: Meta-Learning for System Identification in Sampling-based MPC
- Meta-Learning + MPC for Fast Adaptation (ReBAL, GrBAL, FAMLE)
- Additional Adaptation Methods (PETS, CARL)
Week 3: Self-Supervision in Reinforcement Learning
- Self-supervised Learning
- Hindsight Goal Relabeling
- Representation Learning
Week 4: Unsupervised Skill Discovery
- What is a Skill?
- Unsupervised Skill Discovery Methods
- How to Utilize Skills?
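As a small illustration of the Week 1 RL topic, here is a random-shooting MPC sketch on a made-up one-dimensional system. The dynamics, cost, horizon, and sample count are stand-ins for a learned model and task; methods such as MPPI, or the meta-learning approaches of Week 2, change how action sequences are sampled and how the model adapts, but the replanning loop looks the same.

```python
import numpy as np

def random_shooting_mpc(state, dynamics, cost, horizon=10, n_samples=256, rng=None):
    """Sample random action sequences, roll them out through the model,
    and return the first action of the lowest-cost sequence."""
    rng = rng or np.random.default_rng()
    actions = rng.uniform(-1.0, 1.0, size=(n_samples, horizon))  # candidate action sequences
    total_cost = np.zeros(n_samples)
    s = np.full(n_samples, state, dtype=float)
    for t in range(horizon):
        s = dynamics(s, actions[:, t])         # roll every candidate forward in parallel
        total_cost += cost(s, actions[:, t])
    return actions[np.argmin(total_cost), 0]   # execute only the first action, then replan

# Toy task (assumed for illustration): drive a scalar state toward zero
dynamics = lambda s, a: s + 0.1 * a            # stand-in for a learned dynamics model
cost = lambda s, a: s**2 + 0.01 * a**2
s = 2.0
for _ in range(25):
    a = random_shooting_mpc(s, dynamics, cost)
    s = dynamics(s, a)
print(round(float(s), 3))  # approaches 0
```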