Image restoration, the problem of recovering a distortion-free image from its corrupted observation, is a classic and fundamental task in computer vision. Recently, the performance of image restoration methods has improved drastically with the adoption of deep learning-based approaches.
However, most previous deep learning-based methods assume that an image is corrupted by only a single distortion. In practice, many different types of distortions affect image quality, and an image can be contaminated under various scenarios. In particular, we need to handle the more complex case in which multiple distortions occur simultaneously within a single image. In high-level applications such as medical image processing or 3D object reconstruction, an image is often reconstructed by merging several images obtained from different devices. Because these devices differ in performance, the reconstructed image may contain spatially-varying distortions.
To expand the coverage of image restoration, we propose a new image restoration task and a dataset that address images degraded by spatially-heterogeneous distortions. We also propose a novel deep learning-based restoration method for this new task. It is designed by complementarily merging two Multi-Task Learning (MTL) approaches: Mixture of Experts and Parameter Sharing. In our method, each parameter-shared expert learns meaningful features by dividing the complex restoration problem among the experts. Experimental results show that our proposed method outperforms existing image restoration methods.
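The combination of the two MTL approaches described above can be illustrated with a minimal sketch: experts that share a common backbone (Parameter Sharing) are weighted per input by a gating network (Mixture of Experts). This is an illustrative NumPy toy, not the authors' architecture; all layer sizes, the ReLU backbone, and the linear expert heads are assumptions for demonstration.

```python
# Illustrative sketch (NOT the paper's implementation): a mixture-of-experts
# layer whose experts share a common backbone, combining the two MTL ideas
# named in the text (Mixture of Experts + Parameter Sharing).
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class SharedExpertMoE:
    """Each expert = shared backbone + expert-specific head.
    A gating network produces per-input mixture weights over experts."""
    def __init__(self, d_in, d_hidden, n_experts):
        # Backbone parameters shared by all experts (Parameter Sharing).
        self.W_shared = rng.normal(0.0, 0.1, (d_in, d_hidden))
        # Small expert-specific heads (one per expert).
        self.heads = [rng.normal(0.0, 0.1, (d_hidden, d_in))
                      for _ in range(n_experts)]
        # Gating network weights (Mixture of Experts).
        self.W_gate = rng.normal(0.0, 0.1, (d_in, n_experts))

    def forward(self, x):
        h = np.maximum(x @ self.W_shared, 0.0)          # shared features (ReLU)
        outs = np.stack([h @ Wh for Wh in self.heads],  # (batch, n_experts, d_in)
                        axis=1)
        gate = softmax(x @ self.W_gate)                 # (batch, n_experts)
        # Gate-weighted combination of expert outputs.
        y = (gate[..., None] * outs).sum(axis=1)        # (batch, d_in)
        return y, gate

moe = SharedExpertMoE(d_in=8, d_hidden=16, n_experts=4)
x = rng.normal(size=(2, 8))
y, gate = moe.forward(x)
```

In this toy, the gating network can learn to route inputs corrupted by different distortion types to different experts, while the shared backbone keeps the total parameter count low.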