Self-Supervised Blind Denoising
12 June 2021
This post describes a novel state-of-the-art blind denoising method, based on self-supervised deep neural networks[1], that I am currently developing with Charles Ollion, Sylvain Le Corff (CMAP, Ecole Polytechnique, Université Paris-Saclay), Elisabeth Gassiat (Université Paris-Saclay, CNRS, Laboratoire de mathématiques d’Orsay) and Luc Lehéricy (Laboratoire J. A. Dieudonné, Université Côte d’Azur, CNRS). The preprint is available here: https://arxiv.org/abs/2102.08023
Introduction
- Our method is self-supervised, meaning that a dataset of noisy images is sufficient to train the neural network (provided the images are corrupted by the same process). This makes the method very useful when no pairs of high-quality (clean) and noisy images are available to train supervised methods such as CARE[3], which corresponds to most real-world use cases.
- Our method performs blind denoising: it doesn’t require prior knowledge of the corruption process and is even able to characterize it.
- It is inspired by the recent Noise2Self[4] and Noise2Void[5] methods, and performs much better on all tested public datasets. It even outperforms the supervised method CARE[3].
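The masking idea underlying Noise2Self and Noise2Void can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the implementation from the paper: a simple 3×3 mean filter stands in for the denoising network, and the self-supervised loss is evaluated only on a random subset of pixels that were hidden from the denoiser's input.

```python
import numpy as np

def masked_self_supervised_loss(noisy, denoise_fn, mask_frac=0.05, seed=None):
    """Noise2Self-style loss: hide a random subset of pixels, replace them
    with the mean of their 4-neighbourhood, denoise the result, and score
    the prediction only on the hidden pixels (against their noisy values)."""
    rng = np.random.default_rng(seed)
    mask = rng.random(noisy.shape) < mask_frac
    # Replace masked pixels by a neighbourhood average so the denoiser
    # cannot simply copy its input at those locations.
    padded = np.pad(noisy, 1, mode="edge")
    neigh_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1]
                  + padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    masked_input = np.where(mask, neigh_mean, noisy)
    pred = denoise_fn(masked_input)
    # Mean squared error restricted to the masked pixels.
    return np.mean((pred[mask] - noisy[mask]) ** 2)

def mean_filter(img):
    """Stand-in 'denoiser': a 3x3 box filter (a real method uses a CNN)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

# Toy usage: a constant image corrupted by Gaussian noise.
rng = np.random.default_rng(0)
noisy = 0.5 + 0.1 * rng.standard_normal((64, 64))
loss = masked_self_supervised_loss(noisy, mean_filter, seed=0)
```

Minimizing this loss over a family of denoisers never rewards copying the noisy input, since the scored pixels are never visible to the denoiser, which is the key property these self-supervised methods exploit.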
A good denoising method efficiently reduces noise without removing other high-frequency features such as gradients. As a comparison, Gaussian blur is one of the simplest denoising methods, but it strongly affects gradients (i.e. the image looks blurred). The following figure displays the result of a Gaussian blur filter on the same image, with the scale adjusted so that the denoising efficiency is equivalent to our method's. One can see that contrasts are reduced much more by Gaussian blur than by our method.
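This trade-off is easy to reproduce numerically. The sketch below (using SciPy's `gaussian_filter`; the value of `sigma` is an arbitrary choice for illustration, not the scale used in the figure) blurs a noisy step edge and measures both effects: the noise standard deviation in a flat region drops, but so does the maximum gradient across the edge.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

# Synthetic test image: a sharp vertical step edge plus Gaussian noise.
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)

# Gaussian blur with an arbitrary scale.
blurred = gaussian_filter(noisy, sigma=2.0)

# Noise is reduced in a flat region far from the edge...
flat_noise_before = noisy[:, :16].std()
flat_noise_after = blurred[:, :16].std()

# ...but the step edge is flattened: its maximum gradient shrinks.
grad_before = np.abs(np.diff(clean, axis=1)).max()    # sharp step
grad_after = np.abs(np.diff(blurred, axis=1)).max()   # smeared step
```

Both quantities decrease together: any single blur scale trades noise suppression against edge contrast, which is exactly the behavior a learned denoiser is meant to avoid.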
[Figure: Gaussian blur vs. our method on the same image]
The figure below shows the result of our method on another dataset.
[Figure: results of our method on another dataset]