Introduction: What If, I Get an 'Invisibility Cloak' As in Harry Potter Movies!
I hope most Harry Potter fans remember this scene from the movies. I know most of the stories from the books as told by my friends (I don't read books much) and have watched a few of the films; I found them fascinating and got interested as well.
Very recently, I had the experience of playing with the 'invisibility cloak' in real time, and I wish to share it here in this Instructables guide! Do watch the video available in the link for reference, and let me know your thoughts.
The full tutorial on making it is written out in detail in this Instructable!
Let me give a glimpse of the original story behind it.
"It was only when he had attained great age that the youngest brother finally took off the Cloak of Invisibility and gave it to his son."—Ignotus Peverell passing the cloak on to his son
The Cloak of Invisibility was passed down to Ignotus' son. Ignotus' son had no male heirs, so his oldest daughter, Iolanthe, inherited it instead. The Peverell family died out in the male line around this time, but the heirloom was passed down the generations through the female line, the Potters, as Iolanthe had married Hardwin Potter from Stinchcombe.
In the 20th century, the Cloak eventually ended up in the hands of Henry Potter, a Wizengamot member, who passed it to his eldest son Fleamont. Fleamont was the father of James Potter, Harry Potter's father. James used the Cloak of Invisibility in many of his misdeeds at Hogwarts School of Witchcraft and Wizardry and kept it afterwards. Around the time that Lord Voldemort was hunting the Potters for their son, the Cloak of Invisibility came to the attention of Albus Dumbledore when James showed him the Cloak. Dumbledore, who had searched for the Deathly Hallows in youth, asked to borrow the Cloak from James to study it. After James was killed, the Cloak was left in Dumbledore's possession.
Ten years later, Dumbledore gave Harry Potter the Cloak of Invisibility as a Christmas present anonymously and told him to "use it well." This would be one piece of advice that Harry would use quite well over his school life and beyond, as the Cloak of Invisibility aided Harry on countless trips and missions, including his hunt for Lord Voldemort's Horcruxes. It was not until 1998, that Harry learned the true nature of his own Invisibility Cloak and its true identity as the Cloak of Invisibility, as spoken of in the legend of the Deathly Hallows. As Ignotus' last remaining descendant, the Cloak was rightfully Harry's and was kept by him after Lord Voldemort's defeat. Harry resolved to pass it on to his children one day, just as his ancestors had done.
Before the start of his eldest son's sixth year at Hogwarts, Harry gave James Sirius the Cloak as a present. After accidentally turning his hair pink with a joke comb that his Uncle Ron had given him, James complained that he'd have to use the Cloak to hide his hair. The Cloak was stolen from James' trunk by his younger brother Albus during the same year, who used it to hide from Professor McGonagall in the library.
Step 1: Background of Making It in Reality!
Many of you must have heard about Artificial Intelligence, and anyone involved in related studies will likely also have heard of or learnt OpenCV. The experience of making the invisibility cloak is made possible through open-source technologies: Python (a programming language) and OpenCV (a computer vision library).
OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code.
Along with well-established companies like Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda, and Toyota that employ the library, there are many startups, such as Applied Minds, VideoSurf, and Zeitera, that make extensive use of OpenCV. OpenCV's deployed uses span the range from stitching streetview images together, detecting intrusions in surveillance video in Israel, monitoring mine equipment in China, helping robots navigate and pick up objects at Willow Garage, detecting swimming pool drowning accidents in Europe, running interactive art in Spain and New York, checking runways for debris in Turkey, and inspecting labels on products in factories around the world, to rapid face detection in Japan.
Step 2: Prerequisites (Supplies)
Now, let's start learning how to create this magical experience in real time.
For that we need a computer with OpenCV installed, or you could use a Google Colab notebook (maybe I will try it there as well and update my post).
Here are the details of the machine I used to build this model.
Machine : Laptop with webcam and .avi file support
OS : Ubuntu 16.04
Required Software installations:
Python >= 3.6
OpenCV, latest version
Any plain, preferably person-sized, cloth of any colour (except white).
Step 3: Installation Procedures : Python and OpenCV
Now, let's install Python 3.6 and OpenCV.
Python 3.6 is included in the universe repository of Ubuntu 16.10 and Ubuntu 17.04, so you can install it with the commands below.
sudo apt update
sudo apt install python3.6
You may follow any reliable guide available on the web that is applicable to your OS version.
For my OS version, Ubuntu 16.04, I will install Python from the source code. Alternatively, you can install it from a PPA or from the python.org/GitHub repositories.
Step 4: Installation Procedures : Part1 - Python 3.6
We can use any Python version >= 3.6 for this model.
1. First, we need to install some build dependencies using the commands below.
sudo apt install build-essential checkinstall
sudo apt install libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev
2. Then, download the Python source code from https://www.python.org/downloads/ (I used Python 3.7.4; the link https://www.python.org/ftp/python/3.7.4/Python-3.7... downloads the source code directly).
3. Next, extract the tarball.
tar xvf Python-3.7.4.tar.xz
4. Now cd into the source directory, configure the build environment and install.
$ cd Python-3.7.4/
$ ./configure
$ sudo make altinstall
5. The make altinstall target skips creating the python symlink, so /usr/bin/python still points to the old version of Python and your Ubuntu system won't break.
Once that's done, you can check the Python installation with the command shown in the screenshot (python3.7 -V).
Step 5: Installation Procedures : Part2 - OpenCV(Building OpenCV From Source)
This process may seem a little complicated at first, but once you have succeeded, there is nothing complicated about it.
Required build dependencies
We need CMake to configure the installation, GCC for compilation, and Python-devel and NumPy for building the Python bindings.
sudo apt-get install cmake
sudo apt-get install gcc g++
Step 6: Installation Procedures : Part2 - OpenCV(Building OpenCV From Source)
As I also work with Python 2, I am installing the dependencies for Python 2 as well.
sudo apt-get install python-dev python-numpy
sudo apt-get install python3-dev python3-numpy
Step 7: Installation Procedures : Part2 - OpenCV(Building OpenCV From Source)
Next we need GTK support for GUI features, camera support (v4l), and media support (FFmpeg, GStreamer).
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev
(GTK is a free and open-source cross-platform widget toolkit for creating graphical user interfaces. Find more about it here: https://www.gtk.org/)
sudo apt-get install libgtk2.0-dev
sudo apt-get install libgtk-3-dev
The above dependencies are sufficient to install OpenCV on an Ubuntu machine.
Step 8: (Optional) Installation Procedures : Part2 - OpenCV(Building OpenCV From Source)
Below is the list of optional dependencies I have installed:
sudo apt-get install libpng-dev
sudo apt-get install libjpeg-dev
sudo apt-get install libopenexr-dev
sudo apt-get install libtiff-dev
sudo apt-get install libwebp-dev
sudo apt-get install libjasper-dev
(libjasper-dev adds JPEG 2000 support and is available only up to Ubuntu 16.04.)
Step 9: Installation Procedures : Part2 - OpenCV(Building OpenCV From Source)
Download the latest source from OpenCV's GitHub Repository:
And for that, you need to install Git first:
$ sudo apt-get install git
$ git clone https://github.com/opencv/opencv.git
This will create a folder named "opencv" in the current directory.
Cloning may take some time depending upon your internet connection; it took 3 minutes for me.
Navigate to the downloaded "opencv" folder.
Create a new "build" folder and navigate to it.
$ mkdir build
$ cd build
Step 10: Installation Procedures : Part2 - OpenCV(Building OpenCV From Source)
Configuring and Installing
Now that we have all the required dependencies, let's install OpenCV.
The installation has to be configured with CMake. It specifies which modules are to be installed, the installation path, which additional libraries to use, whether documentation and examples are to be compiled, etc. Most of this work is done automatically through well-configured default parameters.
Below command is normally used for configuration of OpenCV library build (executed from build folder):
$ cmake ../
You should see lines in your CMake output indicating that Python was properly found. Refer to the image in this step, which lists the interpreter and libraries for Python 2, Python 3, etc.
Now build the files using the "make" command and install them using the "make install" command.
$ make
$ sudo make install
Bingo!!! The installation is over. All files are installed in the "/usr/local/" folder.
Open a Python interpreter and try importing "cv2":
import cv2 as cv
print(cv.__version__)
Step 11: Do's and Don'ts: Hints to Note Down
- Do not open the webcam separately (e.g. to check its position) while you run the program. I made that mistake and found that my program did not start the webcam when I executed it.
- Do not alter the camera position once the code has started executing.
- Stay out of the camera's view for the initial 10-20 seconds so the camera can capture the background. After that time passes, you may enter the frame.
- Do check the HSV/HSB values (the low and high ends of the range) of the specific cloth colour you use as the cloak. That HSV/HSB range is what the program uses to detect that specific colour.
- Do try both indoors and outdoors for better noise reduction, since luminosity strongly affects the captured colour values.
- If you face any issues, check the OpenCV version and recompile the FFmpeg package. Do check this thread for more information about it:
More information about FFmpeg: https://ffmpeg.org/about.html
Step 12: Source Code, How Will It Work?
For convenience, I have added the red-cloth masking code to the GitHub repository:
Logic of the model:
- We process the video frame by frame, using colour segmentation.
- We separate the background and foreground of each frame.
- We replace the foreground of a particular colour (red, in this case) with the background, which gives the illusion of disappearing.
Workflow of this project:
1. Importing needed libraries and generate the output video
2. Recording and caching the background for each frame
3. Detecting the red portion in each frame
4. Replacing the red portion with a mask image in each frame
5. Creating the magical output
I will be giving explanation for some parts of the code, for better understanding.
Step 13: Part 1 - Overview of the Source Code - Importing Needed Libraries and Generate the Output Video
Below are the libraries that are imported for this program:
- cv2 for computer vision operations
- time for time-related operations in the program
- numpy for numerical computations
We create the video writer with a FourCC codec code and save the (intermediate) output video.
Step 14: Part 2 - Overview of the Source Code - Recording and Caching the Background for Each Frame
We are replacing the red coloured pixels with the background pixels to create the invisible effect in the video. For doing this, we have to store the background image for each frame.
- For capturing the current frame, we use the cap.read() function and store the frame in the variable named 'background'.
- The variable 'ret' stores a Boolean: True if the frame was read correctly, otherwise False.
- We capture the background in a for loop so that we have several frames of it, since averaging over multiple frames also reduces noise.
Step 15: Part 3 - Overview of the Source Code - Detecting the Red Portion in Each Frame
We shall focus on detecting the red part in the image.
We convert from RGB (red-green-blue) to HSV (hue-saturation-value), also called HSB (hue-saturation-brightness), because RGB values are highly sensitive to illumination. After the conversion, it is time to specify the range of colour values that detects red in the video.
On OpenCV's hue scale, red lies roughly in the ranges 0-30 and 150-180. To avoid detecting skin as red, we use the tighter ranges 0-10 and 170-180 and combine the two masks with the OR operation (+ on masks in Python).
The RGB value of pure red is (255, 0, 0), but in real-world images there are always variations in the colour values due to lighting conditions, shadows, and even noise added by the camera while capturing and processing the image.
Step 16: Part 4 - Overview of the Source Code - Replacing the Red Portion With a Mask Image in Each Frame
Now that we have the red part of the video in the 'mask' image, we segment the mask part out of the frames. We do a morphological open and a dilation for that.
Step 17: Part 5 - Overview of the Source Code - Creating the Magical Output
The final step will be the replacement of the pixels of the detected red colour region in the frames with the pixel values of the static background which was saved earlier.
Step 18: How to Execute the Program!!
I have stored the red-colour masking code as redcode.py in the source folder. So open a terminal in the source folder and execute: python3 redcode.py
The output then gets recorded as a video in .avi format in the same folder.
Step 19: To Try at Different Light Environment!
The output may vary depending on the ambient light. I tried the shoot at:
indoor - my dimly lit room
outdoor - mild sunlight (it rains often here) on the terrace
entrance of the house - no direct sunlight, but bright enough
Apart from all these, I found that the corridor output produced less noise than the rest. More experimentation is required to improve the output/algorithm.
The little patches that appear on my skin in places are due to light variance. At times the skin colour also has a mild red tint, so it gets considered a shade of red and the program picks up that part, though only to a negligible extent.
Step 20: And for Other Colours!
Here is source code for other colours. I am yet to test them; if anyone is interested, please try them out and post your results. Do reach out if you face any struggle in replicating this project.
Blue:
low_blue = np.array([94, 80, 2])
high_blue = np.array([126, 255, 255])
blue_mask = cv2.inRange(hsv_frame, low_blue, high_blue)
blue = cv2.bitwise_and(frame, frame, mask=blue_mask)
Green:
low_green = np.array([25, 52, 72])
high_green = np.array([102, 255, 255])
green_mask = cv2.inRange(hsv_frame, low_green, high_green)
green = cv2.bitwise_and(frame, frame, mask=green_mask)
Every colour except white:
low = np.array([0, 42, 0])
high = np.array([179, 255, 255])
mask = cv2.inRange(hsv_frame, low, high)
result = cv2.bitwise_and(frame, frame, mask=mask)