Bit-Depth Enhancement (PyTorch Code)

Restore low bit-depth images to high bit-depth images.

The neural network is based on the U-Net architecture.
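For orientation, a minimal U-Net sketch in PyTorch is shown below; the channel widths, depth, and layer choices here are assumptions for illustration, not the exact architecture used in this repository.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in the original U-Net
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.out = nn.Conv2d(32, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                   # encoder level 1
        e2 = self.enc2(self.pool(e1))                       # encoder level 2
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1)) # decoder with skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)

# e.g. MiniUNet()(torch.rand(1, 3, 256, 256)) -> tensor of shape (1, 3, 256, 256)
```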

Introduction

The task of bit-depth enhancement is to recover the significant bits lost by quantization. It has important applications in high bit-depth display and photo editing. Although most displays are 8-bit, many TVs and smartphones (such as the Samsung Galaxy S10 and iPhone X) already support 10-bit displays to meet high dynamic range standards, driven by growing consumer demand for finer tonal values. However, these displays are not fully utilized because most available image and video content is still 8-bit. If 8-bit data is directly stretched onto a 10-bit display, obvious contour artifacts, color distortion, and detail loss appear. Therefore, studying bit-depth enhancement is of great significance.
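To make the stretching problem concrete, the snippet below shows the two common naive expansions from 8 bits to 10 bits (zero padding and bit replication). Both merely rescale the existing 256 levels, so the missing low-order information is not recovered; this is only an illustration, not code from this repository.

```python
import numpy as np

# 8-bit values, stored in uint16 so the shifts do not overflow
img8 = np.array([[127, 128], [200, 201]], dtype=np.uint16)

zero_pad = img8 << 2                  # zero padding:    127 -> 508, 128 -> 512
bit_rep  = (img8 << 2) | (img8 >> 6)  # bit replication: 127 -> 509, 128 -> 514

# Neighbouring 8-bit levels stay about 4 codes apart in the 10-bit range,
# so smooth gradients turn into visible banding (contour artifacts);
# bit-depth enhancement tries to predict the missing in-between values.
print(zero_pad, bit_rep, sep="\n")
```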

Requirements

  • Environment: Python 3.7 (or higher)
  • Deep learning framework: PyTorch 1.4.0 and torchvision 0.4.0
  • NVIDIA Turing GPU is strongly recommended
  • Linux/Windows operating system
  • A machine with a 4-core processor and 8 GB of memory (or higher)
  • A GPU with at least 4 GB of memory
  • Use a batch size of 2 (or higher) with training images cropped to 256x256
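A quick sanity check that the environment meets these requirements (a small helper snippet, not part of this repository):

```python
import torch
import torchvision

print("PyTorch:", torch.__version__)            # expected 1.4.0 or newer
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("GPU:", props.name)
    print("GPU memory (GB):", round(props.total_memory / 1024**3, 1))
```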

Quantize

Before training, the dataset must be preprocessed. The Preprocess folder provides two methods to quantize the images: one based on K-means and one based on linear quantization. Linear quantization is recommended. Run Quantilize.py to generate the low bit-depth images.
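A minimal sketch of linear quantization, assuming 8-bit sources reduced to 4 bits; the actual Quantilize.py may differ in target bit depth, rounding, and I/O, and the file name example.png is only a placeholder.

```python
import numpy as np
from PIL import Image

def linear_quantize(img, src_bits=8, dst_bits=4):
    """Drop the low-order bits, then stretch back to the source range for saving."""
    arr = np.asarray(img, dtype=np.uint8)
    shift = src_bits - dst_bits
    quantized = arr >> shift                        # keep only the most significant bits
    return (quantized << shift).astype(np.uint8)    # re-expand so the file stays viewable

low = Image.fromarray(linear_quantize(Image.open("example.png").convert("RGB")))
low.save("example_4bit.png")
```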

Train

The code uses the MIT-Adobe FiveK dataset. The FiveK images are divided into training and validation data. Training images are cropped to 256x256 and validation images are cropped to 512x512 (if the images are too large, they consume a lot of GPU memory and the machine may report RuntimeError: CUDA out of memory). Before running training.py, change the args such as batch_size, model_dir, train_dir, test_dir, and label_dir.
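The actual data pipeline lives in the repository's code; as a rough sketch of the setup described above (paired low/high bit-depth folders, random 256x256 crops, batch size 2), something like the following would work. The directory names and the dataset class itself are hypothetical.

```python
import os
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
import torchvision.transforms.functional as TF

class PairedBitDepthDataset(Dataset):
    """Hypothetical paired dataset: low bit-depth inputs and high bit-depth
    labels share file names across two directories."""
    def __init__(self, input_dir, label_dir, crop=256):
        self.names = sorted(os.listdir(input_dir))
        self.input_dir, self.label_dir = input_dir, label_dir
        self.crop_size = (crop, crop)

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        x = Image.open(os.path.join(self.input_dir, self.names[idx])).convert("RGB")
        y = Image.open(os.path.join(self.label_dir, self.names[idx])).convert("RGB")
        # Apply the same random crop to the input and its label
        i, j, h, w = transforms.RandomCrop.get_params(x, self.crop_size)
        return TF.to_tensor(TF.crop(x, i, j, h, w)), TF.to_tensor(TF.crop(y, i, j, h, w))

train_loader = DataLoader(PairedBitDepthDataset("data/train_4bit", "data/train_8bit"),
                          batch_size=2, shuffle=True, num_workers=4)
```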

Test

After training, save the checkpoint.tar file, which contains the model parameters and variables. Test data are also needed; in the code, the Sintel dataset is downloaded as the test data. Do not forget to change the args accordingly.
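A hedged sketch of loading the saved checkpoint for inference, reusing the MiniUNet sketch from above as a stand-in; the checkpoint key ("state_dict") is an assumption, so check how training.py actually saves the model.

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = MiniUNet().to(device)        # stand-in: use the network defined in this repository

checkpoint = torch.load("checkpoint.tar", map_location=device)
model.load_state_dict(checkpoint["state_dict"])   # key name is an assumption
model.eval()

with torch.no_grad():
    low = torch.rand(1, 3, 512, 512, device=device)  # placeholder for a Sintel test frame
    restored = model(low).clamp(0, 1)                # predicted high bit-depth image
```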

Results

  • LOSS, PSNR, SSIM


  • Input, Result, Label

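For reference, the PSNR reported above can be computed as below (SSIM is usually taken from a library such as scikit-image); this is a generic illustration, not the evaluation code of this repository.

```python
import torch

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio for tensors scaled to [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)
```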

Contact me
