Event-based cameras promise a shift towards more robust perception, offering resilience to motion blur and high dynamic range. At the same time, autonomous driving is becoming a prominent application of cutting-edge mobile robotics, with safety and robustness as its core metrics. Naturally, interest in event cameras is growing in the autonomous driving community.
During a semester project at the Robotics and Perception Group (UZH & ETH Zurich), I tackled the lack of ground-truth data for motion segmentation with this new type of vision sensor in urban driving scenarios. I then approached the motion segmentation problem with two new baselines that incorporate geometric knowledge. A preliminary evaluation on the new ground truth serves as a successful proof of concept.
The full report and the final presentation will be made available in the future.