
Lab Advanced Deep Learning and Reinforcement Learning Seminar

Author: Administrator · Posted 2022-02-12 03:39


Deep Learning

<Attention>

Week 1: Attention

  • Why Attention?
  • Seq2Seq (Sequence to Sequence)
  • Seq2Seq with Attention
  • Query, Key, Value
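The Query/Key/Value mechanism listed above can be summarized in a few lines of NumPy. This is a minimal sketch of scaled dot-product attention, not any particular library's implementation; the shapes and names are illustrative:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (n_q, n_k) similarities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                 # row-wise softmax
    return w @ V, w

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 queries, d_k = 4
K = rng.normal(size=(3, 4))   # 3 keys
V = rng.normal(size=(3, 4))   # 3 values
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)              # (2, 4): one weighted value per query
```

Each query produces a convex combination of the values, weighted by query-key similarity; the sqrt(d_k) scaling keeps the softmax from saturating.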

Week 2: Transformer

  • Self-Attention Mechanism
  • Structure of the Transformer (3 Different Self-Attention Mechanisms)
  • Similarities and Differences between Seq2Seq & Transformer
  • Multi-Head Attention
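Multi-head attention runs the single-head computation in several learned subspaces in parallel. A minimal NumPy sketch, assuming the standard split-attend-concatenate-project formulation (weight names are illustrative):

```python
import numpy as np

def multi_head_attention(X, W_q, W_k, W_v, W_o, n_heads):
    """Split d_model into n_heads subspaces, attend in each, concat, project."""
    n, d_model = X.shape
    d_head = d_model // n_heads
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    heads = []
    for h in range(n_heads):
        sl = slice(h * d_head, (h + 1) * d_head)
        q, k, v = Q[:, sl], K[:, sl], V[:, sl]
        scores = q @ k.T / np.sqrt(d_head)             # per-head attention
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        heads.append(w @ v)
    return np.concatenate(heads, axis=-1) @ W_o        # merge heads

rng = np.random.default_rng(0)
d_model, n = 8, 5
X = rng.normal(size=(n, d_model))                      # 5 tokens, d_model = 8
Ws = [rng.normal(size=(d_model, d_model)) for _ in range(4)]
out = multi_head_attention(X, *Ws, n_heads=2)
print(out.shape)                                       # (5, 8)
```

Each head can specialize in a different relation between tokens; the output projection W_o mixes the heads back into one representation.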

Week 3: Transformer in Vision 1

  • ViT (Vision Transformer)
  • DeiT (Data Efficient Image Transformer)
  • MLP-Mixer

Week 4: Transformer in Vision 2

  • CNN (Convolutional Neural Network) vs ViT vs MLP-Mixer
  • DETR (Detection Transformer)

<Graph Neural Networks>

Week 1: Graph Generation

  • Radius-graph
  • kNN-graph
  • Farthest Point Sampling
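The three graph-construction primitives above are short enough to sketch directly. A minimal NumPy version (brute-force distances; real point-cloud pipelines use spatial indexing):

```python
import numpy as np

def knn_graph(points, k):
    """Directed edge (i, j) from each point i to its k nearest neighbours."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                        # never self-connect
    nbrs = np.argsort(d, axis=1)[:, :k]
    return [(i, int(j)) for i in range(len(points)) for j in nbrs[i]]

def radius_graph(points, r):
    """Edge (i, j) whenever distinct points i and j are closer than r."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    i, j = np.where((d < r) & (d > 0))
    return list(zip(i.tolist(), j.tolist()))

def farthest_point_sampling(points, m):
    """Greedily pick m points, each maximally far from those already chosen."""
    chosen = [0]
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(m - 1):
        nxt = int(dist.argmax())                       # farthest from chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return chosen

pts = np.random.default_rng(1).normal(size=(20, 3))    # toy point cloud
edges = knn_graph(pts, k=3)                            # 20 * 3 directed edges
centers = farthest_point_sampling(pts, m=5)            # 5 well-spread indices
```

Note the trade-off: a kNN-graph has a fixed degree regardless of density, while a radius-graph has a fixed spatial scale; farthest point sampling is the usual way to pick well-spread centroids before building either.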

Week 2: Graph Convolutional Networks (GCN)

  • Neural Network for Graphs (NN4G)
  • node2vec
  • GraphSAGE
  • Kipf and Welling’s GCN
  • Message Passing Neural Network (MPNN) 
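Kipf and Welling's GCN layer from the list above reduces to one normalized matrix product. A minimal dense NumPy sketch of H' = ReLU(D^-1/2 (A + I) D^-1/2 H W), using a toy 3-node path graph:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One Kipf & Welling GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)             # ReLU

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)                 # 3-node path graph
H = np.eye(3)                                          # one-hot node features
W = np.ones((3, 2))                                    # toy weight matrix
H1 = gcn_layer(A, H, W)
print(H1.shape)                                        # (3, 2)
```

In MPNN terms, the normalized adjacency performs the message-passing/aggregation step and W the node-update step, which is why GCN is often presented as a special case of MPNN.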

Week 3: Applications of GNNs

  • Feature Learning on Point Cloud (PointNet++, DGCNN)
  • 3D Object Detection (Point-GNN)


Reinforcement Learning

<Meta-Learning and Self-Supervision>

Week 1: Model Predictive Control and Meta-Learning

  • Concept of MPC, Sampling-based MPC (Random Shooting, MPPI)
  • Meta-Learning (Recurrence-based, MAML, FOMAML, Reptile)
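Random shooting, the simplest sampling-based MPC listed above, is easy to sketch: sample action sequences, roll each through a dynamics model, execute the first action of the cheapest one. The toy dynamics and cost below are assumed purely for illustration:

```python
import numpy as np

def random_shooting_mpc(dynamics, cost, state, horizon=10, n_samples=256,
                        act_dim=1, rng=None):
    """Sample random action sequences, roll each out through the model,
    and return the first action of the cheapest sequence (re-plan each step)."""
    rng = rng or np.random.default_rng(0)
    seqs = rng.uniform(-1.0, 1.0, size=(n_samples, horizon, act_dim))
    best_cost, best_first = np.inf, None
    for seq in seqs:
        s, c = state, 0.0
        for a in seq:
            s = dynamics(s, a)                 # simulate one step
            c += cost(s, a)                    # accumulate trajectory cost
        if c < best_cost:
            best_cost, best_first = c, seq[0]
    return best_first

# Hypothetical toy system: drive a scalar state toward 0.
dyn = lambda s, a: s + 0.1 * a                 # assumed linear dynamics model
cst = lambda s, a: float(s[0] ** 2)            # quadratic state cost
a0 = random_shooting_mpc(dyn, cst, state=np.array([1.0]))
```

MPPI refines this by reweighting the samples with exponentiated costs instead of taking the argmin, and the meta-learning methods in the next week adapt the `dynamics` model itself online.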

Week 2: Meta-Learning for System Identification in Sampling-based MPC

  • Meta-Learning + MPC for Fast Adaptation (ReBAL, GrBAL, FAMLE)                 
  • Additional Adaptation Methods (PETS, CARL)

Week 3: Self-Supervision in Reinforcement Learning

  • Self-supervised Learning
  • Hindsight Goal Relabeling
  • Representation Learning 
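Hindsight goal relabeling can be shown in a few lines: pretend the goal was whatever the agent actually achieved, so failed episodes still produce reward signal. A minimal sketch of the "final" relabeling strategy, with a made-up transition format:

```python
def hindsight_relabel(trajectory):
    """Relabel every transition's goal with the finally achieved state
    ('final' strategy) and recompute the sparse goal-reaching reward."""
    new_goal = trajectory[-1]["achieved"]
    return [{**t, "goal": new_goal,
             "reward": 1.0 if t["achieved"] == new_goal else 0.0}
            for t in trajectory]

# A failed episode: the agent never reached its original goal (3, 3)...
traj = [{"achieved": (0, 0), "goal": (3, 3), "reward": 0.0},
        {"achieved": (1, 0), "goal": (3, 3), "reward": 0.0},
        {"achieved": (1, 1), "goal": (3, 3), "reward": 0.0}]
# ...but relabeled as an attempt to reach (1, 1), it becomes a success.
relabeled = hindsight_relabel(traj)
```

The relabeled transitions are added to the replay buffer alongside the originals, turning sparse-reward failures into dense self-supervised training signal.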

Week 4: Unsupervised Skill Discovery

  • What is a skill?
  • Unsupervised skill discovery methods
  • How can skills be utilized?
