Autonomous Driving

FSG Driverless Workshop - Powered by Waymo

We gave a tutorial on training computer vision and LiDAR perception networks during a workshop hosted by Formula Student Germany and Waymo.

MIT Driverless Overview for Recruiting

Recruiting session of MIT Driverless for the 2020-2021 academic year. Find more details in the video and slides.

End of Summer Showcase of MIT Driverless's Perception Team

The perception team presented our summer projects to our sponsors, families, and potential future members! Find more details in the video and slides.

Faster LiDAR Based on Predictive Deep Learning Model

A typical commercial LiDAR runs at 10 Hz, which is sufficient for a state-of-the-art autonomous road vehicle but not for an autonomous racing vehicle traveling at 180 mph. To build a "faster LiDAR", and inspired by the fact that cameras (30+ Hz) and LiDAR (10 Hz) operate at different frequencies, we propose a method that uses both camera and LiDAR history to predict future LiDAR frames.
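
Below is a minimal sketch of the idea in PyTorch, not our actual architecture: an encoder-decoder that fuses a stack of recent camera frames with the latest LiDAR sweep (rendered here as a range image) to predict the next sweep. The layer sizes, the range-image representation, and the shared input resolution are all illustrative assumptions.

```python
# Sketch only: fuse camera history with the latest LiDAR range image to
# predict the next range image. All shapes/sizes are assumptions.
import torch
import torch.nn as nn

class LidarFramePredictor(nn.Module):
    def __init__(self, cam_frames=3):
        super().__init__()
        # Encode a stack of recent camera frames (cam_frames * 3 RGB channels).
        self.cam_encoder = nn.Sequential(
            nn.Conv2d(cam_frames * 3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Encode the latest LiDAR sweep as a single-channel range image.
        self.lidar_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decode the fused features into the predicted next range image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, cam_stack, lidar_range):
        # cam_stack: (B, cam_frames*3, H, W); lidar_range: (B, 1, H, W)
        fused = torch.cat([self.cam_encoder(cam_stack),
                           self.lidar_encoder(lidar_range)], dim=1)
        return self.decoder(fused)  # predicted next range image, (B, 1, H, W)

# Example: predict the next 10 Hz sweep from three 30 Hz camera frames.
model = LidarFramePredictor()
pred = model(torch.randn(1, 9, 64, 512), torch.randn(1, 1, 64, 512))
```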

Replication of “PointPainting”

After replicating the state-of-the-art sensor fusion detection model ["PointPainting"](https://arxiv.org/pdf/1911.10150.pdf), we use it to test and evaluate both our 3D point cloud prediction model and our end-to-end extrinsic sensor calibration model.
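
The core painting step in PointPainting is simple: project each LiDAR point into the image and append the class scores of the pixel it lands on, then feed the "painted" points to any LiDAR detector. Below is a minimal sketch of that step; it assumes the points are already transformed into the camera frame and that `K` is the 3x3 camera intrinsic matrix.

```python
# Sketch of the PointPainting "painting" step. `seg_scores` is an (H, W, C)
# per-pixel class-score map from any image segmentation network.
import numpy as np

def paint_points(points, seg_scores, K):
    """Append per-pixel segmentation scores to each LiDAR point.

    points: (N, 3) xyz in the camera frame (z forward).
    Returns: (N, 3 + C) "painted" points; out-of-image points get zero scores.
    """
    H, W, C = seg_scores.shape
    scores = np.zeros((points.shape[0], C), dtype=seg_scores.dtype)

    # Project points into the image plane with the pinhole model.
    uvw = points @ K.T                      # (N, 3)
    with np.errstate(divide="ignore", invalid="ignore"):
        u = uvw[:, 0] / uvw[:, 2]
        v = uvw[:, 1] / uvw[:, 2]

    # Keep points that land inside the image and in front of the camera.
    valid = (uvw[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    scores[valid] = seg_scores[v[valid].astype(int), u[valid].astype(int)]

    return np.hstack([points, scores])      # feed to any LiDAR detector
```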

Accurate, Low-Latency Visual Perception for Autonomous Racing: Challenges, Mechanisms, and Practical Solutions

Autonomous racing provides the opportunity to test safety-critical perception pipelines at their limit. This paper describes the practical challenges and solutions to applying state-of-the-art computer vision algorithms to build a low-latency, …

"Point-Voxel CNN" Deployment on Full Scale Auotnomous Racing Vehicle

We deployed the state-of-the-art LiDAR perception model ["Point-Voxel CNN"](http://papers.nips.cc/paper/8382-point-voxel-cnn-for-efficient-3d-deep-learning.pdf) on MIT Driverless's full-scale autonomous racing vehicle. The deployment involved converting the model's task from segmentation to classification, integrating it with ROS, and testing on the full-scale vehicle. Find the full story in the video provided here.
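
As a rough illustration of the segmentation-to-classification conversion (not the code we actually deployed), the sketch below replaces a per-point head with a per-cloud classifier by pooling the backbone's per-point features; the stand-in backbone, feature size, and class count are assumptions.

```python
# Sketch: turning a per-point (segmentation) head into a per-cloud
# (classification) head by pooling over the point dimension.
import torch
import torch.nn as nn

class PVCNNClassifier(nn.Module):
    def __init__(self, backbone, feat_dim=128, num_classes=4):
        super().__init__()
        self.backbone = backbone             # emits (B, F, N) point features
        # Per-cloud classifier in place of the per-point segmentation head.
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))

    def forward(self, points):
        feats = self.backbone(points)          # (B, F, N) per-point features
        global_feat = feats.max(dim=2).values  # (B, F) order-invariant pooling
        return self.head(global_feat)          # (B, num_classes) logits

# Example with a stand-in backbone (a pointwise 1D conv) for illustration.
backbone = nn.Conv1d(3, 128, kernel_size=1)
model = PVCNNClassifier(backbone)
logits = model(torch.randn(2, 3, 1024))      # 2 clouds of 1024 xyz points
```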

Perception System of MIT Driverless's Autonomous Racing Vehicle

We presented the perception system design of our autonomous racing vehicle, including camera triggering and deep learning model design, to the team at Sea Machines Robotics. Find more details in the slides.

Grad-CAM with Object Detection (YOLOv3)

As my first project at [MIT Driverless](https://driverless.mit.edu/), my task was to produce visual explanations of the CNN-based object detection model the perception team uses, [YOLOv3](https://arxiv.org/pdf/1804.02767.pdf). After reviewing the results, we concluded that the network focuses most of its attention on the bottom part of the object (a traffic cone) and, in some cases, on the boundary between the cone and the ground.
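
For reference, the core Grad-CAM computation looks roughly like the sketch below. This is not our exact YOLOv3 integration: `target_layer` and `score_fn` (which reduces the detector output to a scalar, e.g. one predicted box's objectness logit) are placeholders.

```python
# Sketch of Grad-CAM for a detector: weight a conv layer's activation maps
# by the average gradient of a detection score, then ReLU and normalize.
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, score_fn):
    acts, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.append(go[0]))
    try:
        score = score_fn(model(image))   # scalar, e.g. one box's objectness
        model.zero_grad()
        score.backward()
    finally:
        h1.remove(); h2.remove()

    # Channel weights = spatially averaged gradients (Grad-CAM).
    weights = grads[0].mean(dim=(2, 3), keepdim=True)  # (B, C, 1, 1)
    cam = F.relu((weights * acts[0]).sum(dim=1))       # (B, H, W)
    return cam / (cam.max() + 1e-8)      # normalized heatmap to overlay
```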