Computer Vision

FSG Driverless Workshop - Powered by Waymo

We gave a tutorial on training computer vision and LiDAR perception networks at a workshop hosted by Formula Student Germany and Waymo.

Accurate, Low-Latency Visual Perception for Autonomous Racing: Challenges, Mechanisms, and Practical Solutions

Autonomous racing provides an opportunity to test safety-critical perception pipelines at their limit. This paper describes the practical challenges and solutions involved in applying state-of-the-art computer vision algorithms to build a low-latency, …

Perception System of MIT Driverless's Autonomous Racing Vehicle

We presented the perception system design of our autonomous racing vehicle, including camera triggering and deep learning model design, at Sea Machines Robotics. Find more details in the slides.

Grad-CAM with Object Detection (YOLOv3)

As my first project at [MIT Driverless](https://driverless.mit.edu/), my task was to find a visual explanation of the CNN-based object detection model the perception team uses, [YOLOv3](https://arxiv.org/pdf/1804.02767.pdf). After reviewing the results, we concluded that the network focuses most of its attention on the bottom part of the object (the traffic cone) and, in some cases, on the boundary between the cone and the ground.
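The core of Grad-CAM can be sketched independently of the detector: weight each feature map of a chosen convolutional layer by the global-average-pooled gradient of the target score with respect to that map, sum the weighted maps, and apply ReLU. A minimal NumPy sketch follows; the activation and gradient arrays stand in for tensors that would be captured via framework hooks, and all shapes and names here are illustrative, not the project's actual code:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap from a conv layer's activations and
    the gradients of the target score w.r.t. those activations.

    activations, gradients: arrays of shape (K, H, W), one map per channel.
    Returns an (H, W) heatmap normalised to [0, 1].
    """
    # alpha_k: global-average-pool each gradient map over the spatial dims
    weights = gradients.mean(axis=(1, 2))                      # shape (K,)
    # weighted sum of activation maps, then ReLU to keep positive evidence
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()                                  # scale to [0, 1]
    return cam

# toy example: 4 channels of 8x8 feature maps with random values
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 8))
grads = rng.standard_normal((4, 8, 8))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (8, 8)
```

In practice the heatmap is upsampled to the input resolution and overlaid on the image, which is how spatial attention patterns like the one described above become visible.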

Motion Blur Detection

To make custom high-speed cameras that can cope with small patches of motion blur, I proposed a custom convolutional model that detects motion-blurred patches within images, achieving two-sigma accuracy.
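The project uses a learned convolutional model; as a minimal classical baseline for the same patch-level task (not the model described above), one can slide a fixed patch grid over the image and flag patches whose Laplacian variance, a common sharpness measure, falls below a threshold. The patch size and threshold here are illustrative assumptions:

```python
import numpy as np

# 3x3 Laplacian kernel: responds to edges, near zero on smooth regions
LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def laplacian_variance(patch):
    """Variance of the Laplacian response; low values suggest blur."""
    h, w = patch.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += LAPLACIAN[i, j] * patch[i:i + h - 2, j:j + w - 2]
    return out.var()

def blurred_patches(image, patch=32, threshold=1e-3):
    """Return ((row, col), is_blurred) for each non-overlapping patch."""
    flags = []
    for y in range(0, image.shape[0] - patch + 1, patch):
        for x in range(0, image.shape[1] - patch + 1, patch):
            score = laplacian_variance(image[y:y + patch, x:x + patch])
            flags.append(((y, x), bool(score < threshold)))
    return flags

# demo: noise image with one featureless (blur-like) quadrant
rng = np.random.default_rng(0)
img = rng.random((64, 64))
img[:32, :32] = 0.5                      # simulate a blurred region
result = dict(blurred_patches(img))
print(result[(0, 0)], result[(32, 32)])  # True False
```

A learned model can go beyond this baseline by distinguishing genuine motion blur from merely low-texture regions, which a variance threshold cannot.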