
Neural network architecture (from arXiv:1603.08511)

As you can see on the network architecture diagram, the neural network uses an unusual colour space: LAB. There are many 3-dimensional spaces to code colours: RGB, HSL, HSV, etc. The LAB format is relevant here as it fully isolates the colour information from the lightness information. Therefore, a grayscale image can be coded with only the L (for Lightness) axis. We will send only the L axis to the neural network's input, and it will generate predictions for the colours coded on the two remaining axes: A and B.

From the architecture diagram, we can also see that the model expects a 256×256 pixels input size. For these reasons, we cannot just send our RGB-coded grayscale picture in its original size to the network.
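
For illustration, here is a minimal sketch of that input preparation using OpenCV and NumPy; the libraries, normalisation and tensor layout are assumptions rather than the demo's exact code.

import cv2
import numpy as np

def prepare_input(image_path, size=256):
    # Load the picture and resize it to the 256x256 input the model expects
    bgr = cv2.resize(cv2.imread(image_path), (size, size))

    # Convert to the LAB colour space; with float32 input in [0, 1],
    # OpenCV returns L in [0, 100] and A/B roughly in [-127, 127]
    lab = cv2.cvtColor(bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2LAB)

    # Keep only the L (lightness) axis and add batch/channel dimensions,
    # e.g. (1, 1, 256, 256), ready to be sent to the model server
    return lab[:, :, 0][np.newaxis, np.newaxis, :, :]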

We compare the results of two different model versions for the demo. Let them be called ‘V1’ (Siggraph) and ‘V2’. The models are served with the same instance of the OpenVINO™ Model Server as two different models. (We could also have done it with two different versions of the same model – read more in the documentation.)
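
In practice, serving both models from a single instance comes down to a configuration file that lists them. A sketch of such a config.json – the model names and base paths here are placeholders – looks like this:

{
  "model_config_list": [
    { "config": { "name": "colorization-v1", "base_path": "/models/colorization-v1" } },
    { "config": { "name": "colorization-v2", "base_path": "/models/colorization-v2" } }
  ]
}

The backend can then address either model by name in its gRPC requests.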


Finally, to build the Docker image, we use the first stage from the Ubuntu-based development kit to download and convert the model. We then rebase on the more lightweight Model Server image.

# copy the model files and configure the Model Server
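
A hedged sketch of what such a two-stage Dockerfile can look like follows – the comment above corresponds to the copy step in the second stage; the image tags, model name and conversion commands are assumptions rather than the project's exact ones.

# Stage 1: Ubuntu-based development image with the Open Model Zoo tools
FROM openvino/ubuntu20_dev:latest AS build
RUN omz_downloader --name colorization-v2 --output_dir /models && \
    omz_converter --name colorization-v2 --download_dir /models --output_dir /models/ir

# Stage 2: rebase on the lighter Model Server image
FROM openvino/model_server:latest
# copy the model files and configure the Model Server
COPY --from=build /models/ir /models
COPY config.json /models/config.json
CMD ["--config_path", "/models/config.json", "--port", "9000"]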

Backend – Ubuntu-based Flask app (Python)

For the backend microservice that interfaces between the user-facing frontend and the Model Server hosting the neural network, we chose to use Python. … There are many valuable libraries to manipulate data, including images, specifically for machine learning applications. To provide web serving capabilities, Flask is an easy choice. The backend takes an HTTP POST request with the to-be-colourised picture. It synchronously returns the colourised result using the neural network predictions. In between – as we’ve just seen – it needs to convert the input to match the model architecture and to prepare the output to show a displayable result. Here’s how the transformation pipeline looks on the input:
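
As an illustration, here is a sketch of what that request path can look like, assuming OpenCV for the colour-space work and the ovmsclient package for the gRPC call; the route, tensor names, shapes and model name are illustrative guesses, not the demo's actual code.

import io

import cv2
import numpy as np
from flask import Flask, request, send_file
from ovmsclient import make_grpc_client

app = Flask(__name__)
client = make_grpc_client("model-server:9000")  # gRPC address of the Model Server

@app.route("/colourise", methods=["POST"])
def colourise():
    # The frontend POSTs the to-be-colourised picture as a file upload
    raw = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
    bgr = cv2.imdecode(raw, cv2.IMREAD_COLOR)
    height, width = bgr.shape[:2]

    # Convert the input to match the model: 256x256, LAB colour space, L axis only
    lab = cv2.cvtColor(cv2.resize(bgr, (256, 256)).astype(np.float32) / 255.0,
                       cv2.COLOR_BGR2LAB)
    l_input = lab[:, :, 0][np.newaxis, np.newaxis, :, :]

    # Ask the Model Server for the A and B predictions (output shape assumed (1, 2, 256, 256))
    ab = client.predict(inputs={"data_l": l_input}, model_name="colorization-v2")
    ab = np.asarray(ab)[0].transpose(1, 2, 0)

    # Merge L with the predicted A/B, convert back to an 8-bit image at the original size
    lab_out = np.concatenate([lab[:, :, :1], ab], axis=2)
    bgr_out = (cv2.cvtColor(lab_out, cv2.COLOR_LAB2BGR) * 255.0).clip(0, 255).astype(np.uint8)
    bgr_out = cv2.resize(bgr_out, (width, height))

    # Return a displayable PNG to the frontend
    ok, png = cv2.imencode(".png", bgr_out)
    return send_file(io.BytesIO(png.tobytes()), mimetype="image/png")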

To containerise our Python Flask application, we use the first stage with all the development dependencies to prepare our execution environment. We copy it onto a fresh Ubuntu base image to run it, configuring the model server’s gRPC connection (as detailed in the previous blog).

Frontend – Ubuntu-based NGINX container and Svelte app

Finally, I put together a fancy UI for you to try the solution out. I used Svelte to build the demo as a dynamic frontend. It’s an effortless single-page application with a file input field. It can display side-by-side the results from the two different colourisation models.