Autonomous Driving: Boosting Optical Flow with Synthetic Data
Optical flow is the task of estimating per-pixel motion between video frames. Optical flow models take two sequential frames as input and output a dense flow field: a vector for each pixel in the first frame predicting where that pixel moves to in the second frame. Optical flow is an important task for autonomous driving, but real-world flow data is very hard to label.
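The input/output contract is simple to see in code. The sketch below is purely illustrative and uses OpenCV's classical Farneback estimator rather than the learned models discussed in this post; the frame file names are hypothetical, but the shape of the result (two frames in, a dense per-pixel flow field out) is the same.

```python
import cv2

# Hypothetical consecutive frames from a driving sequence.
frame0 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
frame1 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Classical dense flow estimate: one (dx, dy) vector per pixel.
flow = cv2.calcOpticalFlowFarneback(
    frame0, frame1, None,
    0.5,   # pyramid scale
    3,     # pyramid levels
    15,    # window size
    3,     # iterations
    5,     # polynomial neighborhood
    1.2,   # polynomial sigma
    0,     # flags
)

# flow[y, x] gives the predicted displacement of the pixel at (x, y)
# from the first frame to the second.
print(flow.shape)  # (H, W, 2)
```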
In fact, it is effectively impossible for humans to label by hand. Labels must instead be derived from LiDAR, using the ego trajectory to estimate the motion of both static and dynamic points in the scene. Because LiDAR scans are inherently sparse, the few public real-world optical flow datasets are sparse as well. A way around this problem is synthetic data, where dense flow labels are readily available. This post covers how synthetic data can improve optical flow estimation and how tuning Parallel Domain’s synthetic data to close key domain gaps can lead to major performance improvements.
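To make the contrast concrete, here is a minimal sketch of why dense flow labels come almost for free from a synthetic renderer: given ground-truth depth and exact camera poses, the flow induced by ego-motion over the static scene follows from simple reprojection. This is an illustrative assumption, not Parallel Domain's pipeline; dynamic objects would additionally need per-object motion, and the inputs `depth0`, `K`, and `cam1_T_cam0` are assumed to be provided by the rendering engine.

```python
import numpy as np

def static_scene_flow(depth0, K, cam1_T_cam0):
    """Dense flow induced by ego-motion over a static scene.

    depth0:      (H, W) ground-truth depth of frame 0, in meters
    K:           (3, 3) camera intrinsics
    cam1_T_cam0: (4, 4) rigid transform from the frame-0 camera to the frame-1 camera
    returns:     (H, W, 2) flow in pixels
    """
    H, W = depth0.shape
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # (3, H*W)

    # Back-project every frame-0 pixel to a 3D point in the frame-0 camera.
    pts0 = np.linalg.inv(K) @ pix * depth0.reshape(1, -1)

    # Transform into the frame-1 camera and project back to pixel coordinates.
    pts0_h = np.vstack([pts0, np.ones((1, pts0.shape[1]))])
    pts1 = (cam1_T_cam0 @ pts0_h)[:3]
    pix1 = K @ pts1
    pix1 = pix1[:2] / pix1[2:3]

    # Flow is the displacement from the original pixel grid to the reprojection.
    return (pix1 - pix[:2]).T.reshape(H, W, 2)
```

With real sensors, neither the dense depth nor the exact poses are available at every pixel, which is why real labels inherit the sparsity of the LiDAR scans.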
Read the full article here.