Image colorization using AI and Python

April 26, 2022

DeOldify is a Deep Learning (DL) based project for colorizing and restoring old images and videos. It adds color to old black and white photos, bringing them to life. Under the hood, the model is trained using a unique technique called NoGAN.

We will use this model to convert some old black and white photos of a city by adding color to them.


To follow along, you need to be familiar with:

  • Machine Learning algorithms.
  • Google Colab.


The DeOldify model

DeOldify uses a Generative Adversarial Network (GAN). Specifically, it uses a special type of GAN called a self-attention GAN.

Aside from self-attention and some special transformations, the model is also trained with a technique known as NoGAN, a highly efficient way of training GANs.

Most GANs have two parts: a Generator and a Discriminator.

The Generator is the part that creates the image. The Discriminator tries to tell real color images apart from fake recolored ones. The NoGAN technique works by first training the Generator and the Discriminator in isolation, similar to how you would train a normal neural network. This differs from conventional GANs, where the two are trained side by side. The two models are then briefly fine-tuned together, in the typical GAN fashion.
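The training schedule described above can be sketched in plain Python. This is purely illustrative: the phase names and step counts are made up for demonstration and are not DeOldify's actual training code.

```python
# Schematic of the NoGAN training schedule (illustrative only).

def train_generator_alone(steps):
    # Phase 1: train the Generator with a conventional feature loss,
    # exactly like an ordinary supervised network.
    return [("generator", s) for s in range(steps)]

def train_discriminator_alone(steps):
    # Phase 2: train the Discriminator to tell real color photos
    # from the Generator's recolored outputs, Generator frozen.
    return [("discriminator", s) for s in range(steps)]

def fine_tune_together(steps):
    # Phase 3: a short burst of conventional side-by-side GAN training.
    return [("gan", s) for s in range(steps)]

schedule = (
    train_generator_alone(3)
    + train_discriminator_alone(2)
    + fine_tune_together(1)
)
phases = [name for name, _ in schedule]
print(phases)
```

The key point the sketch captures is ordering: most of the work happens in the isolated phases, and only a small final portion is true GAN training.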

The model works by taking a black and white image as input and producing a colored image as output. Because it has been trained on a large number of color images, it does a great job of producing plausible colors.

That’s the DeOldify model in a nutshell. Please visit this GitHub documentation to learn more.

Cloning the GitHub Repository

We are going to use the GitHub repository that contains the actual model. Inside our Google Colab, let’s type in the following code:

!git clone https://github.com/jantic/DeOldify.git DeOldify

The above code clones the DeOldify repository into the DeOldify folder. We will be working inside this folder. To get into this folder, we write the following code:

cd DeOldify

Once inside, we can now install the dependencies needed for the project.

Installing the necessary dependencies

To use the model, we need to install a couple of dependencies.

!pip install -r colab_requirements.txt

By running the above command, all the dependencies listed in the colab_requirements.txt file inside the cloned folder get installed. These dependencies include:

  • fastai==1.0.51
  • wandb
  • tensorboardX==1.6
  • ffmpeg-python
  • youtube-dl>=2019.4.17
  • jupyterlab
  • pillow>=8.0.0

All these dependencies are necessary for the model to work. They all get installed automatically, and there’s no need to install them manually. Once done, we can go ahead and download the model.

Downloading the model

Next, we will need to download the pre-trained model.

!mkdir 'models'
!wget https://data.deepai.org/deoldify/ColorizeArtistic_gen.pth -O ./models/ColorizeArtistic_gen.pth

We have created a new folder called models inside the main DeOldify folder. Using wget, a command-line utility for retrieving files over HTTP, HTTPS, FTP, and FTPS, we download the pre-trained Artistic model into that newly created folder.

Let’s create a variable colorizer to store our model. Since we downloaded the Artistic weights, we pass artistic=True:

from deoldify.visualize import get_image_colorizer

colorizer = get_image_colorizer(artistic=True)

Performing colorization on old black and white photos

Let’s take black and white images and add some color to them. We will use old images of iconic buildings that still stand to date in the city of Nairobi, Kenya.

These are the images we will use:

Image of KICC:


Image Source: Pinterest

Image of Nairobi Railway Station:

Nairobi Railway Station

Image Source: African Digital Heritage

Image of Stanley Hotel:

New Stanley hotel

Image Source: Pinterest

Image of the Norfolk Hotel:

Norfolk hotel

Image Source: Arxiv

Inside the test_images folder located in the main DeOldify folder, upload all the images you want to colorize.

Using the plot_transformed_image method, we can pass in our images, and colored output images will be generated. The figsize=(8,8) argument sets the displayed figure to 8 by 8 inches (a Matplotlib figure size, not pixels). You can change these values if you wish.

colorizer.plot_transformed_image('test_images/image-name.jpg', render_factor=35, display_render_factor=True, figsize=(8,8))

The default value of 35 for render_factor works well in most scenarios. The render_factor determines the resolution at which the color portion of the image is rendered. A lower render_factor is ideal for lower-resolution images, while a higher render_factor suits high-resolution images.

At lower render_factor values, colors tend to come out more vibrant; at higher values, the colors can look washed out.
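The trade-off above can be turned into a simple rule of thumb. The helper below is a hypothetical heuristic of our own, not part of the DeOldify API; the thresholds are rough guesses you would tune for your own images:

```python
def pick_render_factor(width, height, lo=10, hi=45):
    """Hypothetical heuristic mapping image resolution to a render_factor.

    DeOldify accepts render_factor values of roughly 7-45; the mapping
    below is our own rough rule of thumb, not part of the library.
    """
    longest = max(width, height)
    if longest < 500:
        return lo    # low-res scans: keep colors vibrant
    if longest < 1500:
        return 35    # the library default works well here
    return hi        # high-res photos tolerate a higher factor

print(pick_render_factor(400, 300))    # a small scan
print(pick_render_factor(3000, 2000))  # a high-resolution photo
```

You could then pass the result as the render_factor argument to plot_transformed_image.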

These are the generated colored images:

Colored image of KICC:

New colored KICC

Colored image of Nairobi Railway Station:

Colored Nairobi Railway Station

Colored image of Stanley Hotel:

New colored Stanley hotel

Colored image of the Norfolk Hotel:

Colored Norfolk hotel

We can see that the DeOldify model has added some color to our images. We achieved these results with only a few lines of code. Amazing, right?

Of course, the results are not perfect. But they show what is possible with technologies such as the one used in this experiment.

Please find the complete code for this tutorial here.

Wrapping up

The DeOldify model lets you recolor old images and videos of family members or even cities. The model is open-source and available on GitHub. You can easily experiment with old photos from your childhood and add color to them.


Peer Review Contributions by: Wilkister Mumbi