Data Efficient End-to-end Computational Videography Pipeline
Supervisor: Professor Greg Slabaugh
Advances in deep neural networks applied to computational photography have narrowed the gap between smartphone image quality and that achieved with a DSLR. However, these advances have relied heavily on labelled datasets, which are difficult to collect, especially in the context of video. This project will investigate new ways to learn the Image Signal Processor (ISP) mapping from RAW sensor data to RGB in the video domain.
The project will explore no-reference and unpaired training techniques to build data-efficient networks, trained end-to-end, that produce high-quality video. Central to this mapping will be careful modelling of motion between frames to best exploit temporal redundancy in the video.
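To make the problem concrete, the sketch below shows one plausible shape such a learned video ISP could take; it is an illustrative assumption, not the project's actual method. A tiny convolutional network maps a packed 4-channel RAW frame to full-resolution RGB, fusing a motion-compensated neighbouring frame (warped by a given optical flow) to exploit temporal redundancy. All module and function names here (`warp`, `TinyVideoISP`) are hypothetical.

```python
# Hypothetical sketch of a learned video ISP: packed RAW + an aligned
# neighbouring frame -> full-resolution RGB. Not the project's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(frame, flow):
    """Backward-warp `frame` (N,C,H,W) with a dense optical flow (N,2,H,W)."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame)   # (2,H,W), xy order
    coords = grid.unsqueeze(0) + flow                       # displaced coordinates
    # Normalise pixel coordinates to [-1, 1] for grid_sample.
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    return F.grid_sample(frame, torch.stack((gx, gy), dim=-1),
                         align_corners=True)


class TinyVideoISP(nn.Module):
    """Maps a packed RAW frame plus an aligned neighbour to RGB."""
    def __init__(self, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(8, feat, 3, padding=1), nn.ReLU(inplace=True),  # 4+4 RAW ch.
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 12, 3, padding=1),  # 12 = 3 RGB x (2x2) subpixels
            nn.PixelShuffle(2),                 # half-res packed RAW -> full-res RGB
        )

    def forward(self, raw_t, raw_prev, flow):
        aligned = warp(raw_prev, flow)          # motion-compensate previous frame
        return self.net(torch.cat([raw_t, aligned], dim=1))


# Two packed RAW frames (e.g. RGGB as 4 channels at half resolution) and a
# zero flow, standing in for real sensor data and an estimated flow field.
raw_t = torch.rand(1, 4, 64, 64)
raw_prev = torch.rand(1, 4, 64, 64)
flow = torch.zeros(1, 2, 64, 64)
rgb = TinyVideoISP()(raw_t, raw_prev, flow)
print(rgb.shape)  # torch.Size([1, 3, 128, 128])
```

In practice the flow would itself be estimated (and possibly learned jointly), and the supervision would come from no-reference quality losses or unpaired RGB video rather than pixel-aligned ground truth.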