Winter Break Undergraduate Researcher Deep Learning Seminar

Posted by Administrator on 2023-02-03 16:20
Seminar date: February 2, 2023
[Adeeb Mohammed Islam]
"Transformer based Perception"
The seminar showcases the advantages and disadvantages of image transformers in comparison to CNNs for vision tasks. The study shows that vision transformers generalize better than CNNs given a sufficiently large training dataset. Because large labelled datasets are expensive, a Vision Transformer variant called BEiT uses masked image modeling to train in a self-supervised manner on any unlabeled dataset. It was also shown that the self-attention mechanism learns to produce a rough segmentation map without any human annotation, which makes it easy to fine-tune the pretrained model on a small dataset for any downstream task. The demo at the end shows a BEiT model fine-tuned on 3,000 CARLA images for semantic segmentation.
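As a rough illustration of this fine-tuning setup, the sketch below loads a masked-image-modeling-pretrained BEiT checkpoint into a segmentation head with the HuggingFace transformers library. The class count and the training step are placeholders, not the actual configuration used in the demo.

import torch
from transformers import BeitForSemanticSegmentation

NUM_CLASSES = 23  # hypothetical: number of CARLA semantic classes

# self-supervised (MIM-pretrained) backbone; the segmentation decode head
# is newly initialized and learned during fine-tuning
model = BeitForSemanticSegmentation.from_pretrained(
    "microsoft/beit-base-patch16-224-pt22k",
    num_labels=NUM_CLASSES,
    ignore_mismatched_sizes=True,
)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def train_step(pixel_values, labels):
    # pixel_values: (B, 3, H, W) normalized images
    # labels: (B, H, W) integer class ids (dtype long)
    outputs = model(pixel_values=pixel_values, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()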
[Manoj Pandey]
"Low Cost Perception Enhancement with Optical Flow and Sensor Fusion: An Comprehensive study of Optical Flow, its Use Case and Future Challenges of Optical Flow In Autonomous Vehicles"
Optical flow is the field of computer vision that deals with estimating the motion of pixel brightness patterns across an image or video sequence. It has various applications such as autonomous driving, action recognition, and scene flow estimation. Several optical flow algorithms now show promising results, performing better and faster than traditional methods.
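To make concrete what a flow field is, here is a minimal sketch using OpenCV's classical Farneback method: the output is a per-pixel (dx, dy) displacement between two consecutive frames. The file names are placeholders.

import cv2

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# flow has shape (H, W, 2): horizontal and vertical displacement per pixel
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# convert (dx, dy) to magnitude/angle for inspection or visualization
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean displacement (pixels):", magnitude.mean())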
FlowNet is a deep-learning-based flow algorithm that uses convolutional and deconvolutional layers to estimate the flow vectors between two consecutive frames. This network showed that deep-learning-based optical flow methods can also achieve strong results on benchmark datasets.
RAFT is another state-of-the-art optical flow algorithm that uses Recurrent All-Pairs Field Transforms to capture the correlation between every pair of pixels across multiple scales. It uses a recurrent update operator to iteratively refine the optical flow field in a robust manner.
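A pretrained RAFT is available directly in torchvision (0.12 or newer); the sketch below runs it on dummy frames. In practice the inputs would be real consecutive video frames as float RGB tensors in [0, 1], with height and width divisible by 8.

import torch
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

weights = Raft_Large_Weights.DEFAULT
model = raft_large(weights=weights).eval()
transforms = weights.transforms()

# dummy consecutive frames standing in for real video data
img1 = torch.rand(1, 3, 240, 320)
img2 = torch.rand(1, 3, 240, 320)
img1, img2 = transforms(img1, img2)  # normalizes both frames to [-1, 1]

with torch.no_grad():
    flow_predictions = model(img1, img2)  # list of iterative refinements
flow = flow_predictions[-1]               # final flow field, (B, 2, H, W)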
With optical flow, we can transform sparse radar returns into denser point clouds by fusing the flow field with the radial velocity information in the radar data. This allows us to accumulate radar data over several frames and form a dense point-cloud map that can serve downstream tasks similar to those served by lidar data.
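The sketch below is a heavily simplified illustration of the accumulation idea, not the fusion pipeline from the seminar: detections from several past frames, each given here as (x, y, radial_velocity) in the ego frame, are motion-compensated along their radial direction and merged into one denser cloud. The frame interval dt and the radial-only compensation are simplifying assumptions standing in for the full optical-flow-based association.

import numpy as np

def accumulate_radar(frames, dt=0.05):
    # frames: list of (N_i, 3) arrays [x, y, radial_velocity], oldest first
    merged = []
    for age, points in enumerate(reversed(frames)):  # age 0 = newest frame
        xy = points[:, :2]
        v_r = points[:, 2:3]
        # unit vector from the sensor to each detection (radial direction)
        radial = xy / np.linalg.norm(xy, axis=1, keepdims=True)
        # advance each older detection along its radial direction to "now"
        merged.append(xy + radial * v_r * dt * age)
    return np.vstack(merged)  # denser accumulated point cloud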
However, adversarial attacks on optical flow networks have raised serious questions about the robustness of such models. Given the safety-critical nature of optical flow applications, this is an important research topic for the reliable use of optical flow.
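For context, the sketch below shows the flavor of white-box attack studied in this line of work: an FGSM-style perturbation nudging one input frame to increase the network's end-point error. It assumes a RAFT-like model returning a list of flow refinements and images in [0, 1]; it is an illustration, not a reproduction of any published attack.

import torch

def fgsm_flow_attack(model, img1, img2, flow_gt, epsilon=2 / 255):
    img1 = img1.clone().requires_grad_(True)
    flow_pred = model(img1, img2)[-1]
    # end-point error between predicted and reference flow
    epe = torch.norm(flow_pred - flow_gt, dim=1).mean()
    epe.backward()
    # step in the direction that increases the error, keep pixels valid
    adv = (img1 + epsilon * img1.grad.sign()).clamp(0, 1)
    return adv.detach()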
[Sanjar Atamuraolov]
"Sensor Fustion for Localization & Object Tracking"
My work explores the use of sensor fusion for localization and object tracking. An Extended Kalman Filter (EKF) is used in this project to fuse lidar and radar data and accurately track objects around the vehicle. The low-level Kalman filter equations and sensor fusion logic are implemented in C++ as a best practice. Particle-filter-based localization is employed to combine the remaining sensors and provide an accurate real-time estimate of the ego vehicle's position and velocity. Performance is evaluated with the RMSE metric between the estimated results and ground-truth data.
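The project code is C++; the following is only a compact Python sketch of the same predict/update cycle for a constant-velocity state x = [px, py, vx, vy]. The lidar update shown here is linear; the radar update (not shown) linearizes the polar measurement with its Jacobian, which is what makes the filter an *extended* Kalman filter. Noise values are illustrative, not the project's tuned parameters.

import numpy as np

x = np.zeros(4)             # state estimate [px, py, vx, vy]
P = np.eye(4) * 1000.0      # state covariance (very uncertain at start)
H = np.array([[1., 0., 0., 0.],   # lidar measures position only
              [0., 1., 0., 0.]])
R = np.eye(2) * 0.0225      # assumed lidar measurement noise

def predict(x, P, dt, q=9.0):
    F = np.eye(4); F[0, 2] = dt; F[1, 3] = dt  # constant-velocity model
    Q = np.eye(4) * q * dt                     # simplified process noise
    return F @ x, F @ P @ F.T + Q

def update_lidar(x, P, z):
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

def rmse(estimates, ground_truth):
    err = np.asarray(estimates) - np.asarray(ground_truth)
    return np.sqrt((err ** 2).mean(axis=0))  # per-component RMSE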