We gave a tutorial on training Computer Vision and LiDAR perception networks at a workshop hosted by Formula Student Germany and Waymo.
Autonomous racing provides the opportunity to test safety-critical perception pipelines at their limit. This paper describes the practical challenges of, and our solutions to, applying state-of-the-art computer vision algorithms to build a low-latency, …
We presented the perception system design of our autonomous racing vehicle, including camera triggering and deep learning model design, to the team at Sea Machines Robotics. More details are in the slides.
As my first project at [MIT Driverless](https://driverless.mit.edu/), my task was to find a visual explanation of the CNN-based object detection model the perception team uses, [YOLOv3](https://arxiv.org/pdf/1804.02767.pdf). After reviewing the results, we concluded that the network focuses most of its attention on the bottom part of the object (the traffic cone), and in some cases on the boundary between the cone and the ground.
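The idea behind such a visual explanation can be sketched with a minimal Grad-CAM-style saliency pass: weight the last convolutional feature maps by the gradient of a detection score and collapse them into a heatmap. The tiny network and `grad_cam` helper below are illustrative stand-ins, not the actual YOLOv3 pipeline.

```python
# Minimal Grad-CAM-style saliency sketch (hypothetical stand-in for the
# actual YOLOv3 analysis): feature maps weighted by score gradients.
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    # stand-in backbone: two conv layers plus a linear "cone score" head
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        fmap = self.features(x)                    # (N, 16, H, W)
        score = self.head(fmap.mean(dim=(2, 3)))   # pooled detection score
        return score, fmap

def grad_cam(model, image):
    score, fmap = model(image)
    fmap.retain_grad()                     # keep gradients on a non-leaf tensor
    score.sum().backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)  # channel importance
    cam = torch.relu((weights * fmap).sum(dim=1))       # (N, H, W) heatmap
    return cam / (cam.max() + 1e-8)                     # normalize to [0, 1]

cam = grad_cam(TinyDetector(), torch.rand(1, 3, 32, 32))
```

On a trained detector, high values in `cam` mark the image regions the network relied on, which is how a bottom-of-cone bias would show up.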
To help our custom high-speed cameras cope with localized motion blur, I proposed a convolutional model that detects motion-blurred patches within images, achieving two-sigma accuracy.
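The patch-level formulation can be sketched as follows: tile each frame into fixed-size patches and score every patch for blur with a small CNN. The `BlurPatchNet` architecture and patch size here are illustrative assumptions, not the model described above.

```python
# Hypothetical sketch of patch-level blur detection: tile a grayscale frame
# into non-overlapping patches and score each with a small CNN.
import torch
import torch.nn as nn

class BlurPatchNet(nn.Module):
    def __init__(self, patch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(16 * (patch // 4) ** 2, 1),  # one blur logit per patch
        )

    def forward(self, patches):                 # (N, 1, patch, patch)
        return torch.sigmoid(self.net(patches)) # blur probability in [0, 1]

def patchify(frame, patch=32):
    # split an (H, W) frame into non-overlapping patch x patch tiles
    tiles = frame.unfold(0, patch, patch).unfold(1, patch, patch)
    return tiles.reshape(-1, 1, patch, patch)

frame = torch.rand(128, 128)            # a 128x128 frame -> 16 patches
probs = BlurPatchNet()(patchify(frame))
```

Scoring patches rather than whole frames keeps the per-inference cost low and tells the downstream pipeline which regions of a frame are trustworthy.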