Introduction: Tech-Magic: Home Automation Using Interactive Wand From Wizarding World of Harry Potter

About: Tinkerer, dreamer, nerd. Professionally, I write code for robots (the hardware kind).

Last Christmas I visited the Wizarding World of Harry Potter at Universal Orlando and was blown away by how faithfully everything replicates its real counterparts in Europe, and also by the magic created by muggles. I purchased one of their 'interactive' (muggle-magic) wands, which makes stuff happen at strategic spots throughout the park. Though it felt like going back to the basics of wandlore, it's a really neat setup! The funny tip of the wand is actually an IR reflector, and the magical spots around the park have an IR transceiver system capable of tracking the wand using these reflectors. Now, being the tech nerd that I am, I decided to create some of my own magic using this wand. So follow along if you want to create some too.

Please note that I built this project using stuff I already owned, so some of my choices might not be the best, or even feasible, for everyone. Therefore, instead of elaborating on my code, I will explain the logic of each step in detail so you can build your own version using your own tools. If you are using the same tools, you should be able to follow along easily using this overview.

*Familiarity with coding and setting up IDEs is assumed*

Project code: https://github.com/sanni-t/techMagicApp

Equipment & tools used:

1. Windows PC (Win 7+)

2. Kinect (v2) for Windows [or XBox One Kinect + windows adapter]

3. Interactive wand from Universal Studios' Wizarding World of Harry Potter

4. Hue lights (w/ bridge)

5. Spotify premium account

6. A pair of Simblee boards

PLEASE NOTE: This instructable contains moving images (a.k.a. GIFs in the muggle world) which don't seem to show well on mobile browsers, so it's best viewed on a computer or tablet.

Step 1: Setting Up Visual Studio & Kinect

Like the engineers at Orlando, we are going to use an IR transceiver to track the wand. More on how this works in the next step.

So now,

1. Install Visual Studio 2015 (Community version is free)

The project has not been tested with other versions, so if you plan on using VS 2017 you might have to modify some of the code.

2. Install the Kinect for Windows SDK 2.0.

3. The SDK should come with a program called Kinect Studio. Use it to test your Kinect.

4. Once the Kinect is set up, open Visual Studio and create a new project (C++ console application).

5. Add Kinect dependencies as shown in the video above.

6. Add OpenCV 3.1 (a computer vision library) to the project:

  • Go to: Tools -> NuGet Package Manager -> Manage NuGet Packages for Solution -> search for opencv3.1
  • Refer to OpenCV's website to familiarize yourself with the library.

7. If you are going to connect to HTTP services, you will need to add cURL (a library for transferring data with URLs) the same way:

  • Go to: Tools -> NuGet package manager -> Manage NuGet packages for Solution -> Add:
    • rmt_curl (version 7.51.0)
    • rmt_zlib (version 1.2.8.7)
  • Refer to the cURL website for more info on how it works.

8. To test your setup, add the header files KinectHandler.h, ImageProcessor.h, and httpService.h from the techMagic directory to your project, include them in the main header file, and build the project. (Make sure to not just copy them into your project folder but also add them to the project by choosing 'Add > Existing Item' in Solution Explorer.) If you get errors, go back and check that all packages were installed properly. If everything compiles, congrats! You can proceed to the next steps.
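As a sanity check, the top of your main header should end up looking something like this (a minimal sketch; techMagic.h in the repo is the authoritative version):

```cpp
// Sketch of the main project header after adding the techMagic files;
// see techMagic.h in the repo for the real contents.
#pragma once

#include "KinectHandler.h"
#include "ImageProcessor.h"
#include "httpService.h"
```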

Step 2: Project Outline

There are quite a few IR cameras on the market, but we need one that has an IR light transmitter too. The Kinect for Windows, or Kinect v2, fits this bill perfectly: it has quite a powerful transceiver, giving us a longer range for wand placement, at an affordable price. Note that the older Kinect won't work because it projects IR light in a dot-grid array (as opposed to the Xbox One Kinect's steady IR light, as shown in this blog). That makes it difficult for enough light rays to be incident on the small wand tip and be reflected back to the receiver.

Picture 2 shows how the wand tip looks illuminated in the Kinect's IR camera.

*Picture 3 gives the code outline.*

Before proceeding to the next steps, copy the file 'config-template.h' to the solution folder, rename it to config.h, and add it to your solution. This file will hold all your personal configuration data: IP addresses of wireless devices, API account IDs, authentication keys, etc. So if you are saving your project to a public repo, you will want to add it to the ignore list.

NOTE: Use the DEBUG macro in config.h to switch to/from debug mode (a better term: unfinished mode), which selects whether wand movements or keypresses trigger actions. This is useful while setting up device interactions.
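For orientation, here is an illustrative sketch of the kind of settings config.h holds, collecting the macros mentioned throughout this instructable. Only DEBUG, ENABLE_SAVE_IMAGE, ENABLE_SPELL_TRAINING, GESTURE_TRINER_IMAGE, TRAINED_SPELL_MODEL, and SPOTIFY_PLAY_CONTENT appear in the text; the other names and all values are placeholders I made up, so treat config-template.h as the authoritative reference:

```cpp
// Illustrative config.h sketch; all values below are placeholders
// you must replace with your own.
#pragma once

#define DEBUG true                      // trigger actions via keypresses

// Hue (Step 5) -- your bridge's IP address and API username
#define HUE_IP       "192.168.1.2"
#define HUE_USERNAME "your-hue-username"

// Spotify (Step 6)
#define SPOTIFY_AUTH_STRING   "your-auth-string"
#define SPOTIFY_REFRESH_TOKEN "your-refresh-token"
#define SPOTIFY_PLAY_CONTENT  "spotify:playlist:your-playlist-id"

// Spell training (Step 4, 'addSpellTraining' branch)
#define ENABLE_SAVE_IMAGE     false
#define ENABLE_SPELL_TRAINING false
#define GESTURE_TRINER_IMAGE  "spellTrainer.png"
#define TRAINED_SPELL_MODEL   "spellmodel.yml"
```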

Step 3: Image Processing

Picture 1 explains the image-processing pipeline for detecting wand movements/gestures (code source: ImageProcessor.cpp).
Picture 2 shows how the blob detection function detects the wand tip as a 'blob'.

The problem with using just blob detection is that it picks up reflections from the surroundings as well. So we minimize this interference by performing background elimination (Picture 3), which gets rid of almost all static reflections. Reflections from items like eyeglasses and watches will still cause interference, though, so you will want to take them off while testing.

Next, to filter out unwanted blobs that weren't removed by background elimination, we check whether each blob is moving at a reasonable speed. Spurious blobs (noise) jump in and out of the frame, whereas our wand movement is expected to follow a fairly constant speed.

Once we eliminate all the noise, we will hopefully get a nice, clean trace of the wand.
NOTE: To make the wand detection code work, you will need to add ImageProcessor.cpp to your solution, include the appropriate header file and class instance in your mainProgram.h (see techMagic.h), and copy spellmodel.yml to the project folder.
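To make the pipeline concrete, here is a minimal sketch of the three stages described above (background elimination, blob detection, speed check) using OpenCV 3.1. All parameter values are illustrative and the function is mine; the real implementation lives in ImageProcessor.cpp:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Persistent state across frames.
static cv::Ptr<cv::BackgroundSubtractor> bgSubtractor =
    cv::createBackgroundSubtractorMOG2(500, 16, /*detectShadows=*/false);
static cv::Point2f lastTip(-1.0f, -1.0f);

// Returns true and sets 'tip' if a plausible wand tip is found in
// 'irFrame' (an 8-bit grayscale image derived from the Kinect IR stream).
bool detectWandTip(const cv::Mat& irFrame, cv::Point2f& tip)
{
    // 1. Background elimination: keep only recently changed pixels.
    cv::Mat foreground;
    bgSubtractor->apply(irFrame, foreground);

    // 2. Blob detection on the foreground mask.
    cv::SimpleBlobDetector::Params params;
    params.filterByColor = true;
    params.blobColor = 255;   // bright blobs only
    params.filterByArea = true;
    params.minArea = 4;       // the wand tip is tiny
    params.maxArea = 200;
    cv::Ptr<cv::SimpleBlobDetector> detector =
        cv::SimpleBlobDetector::create(params);
    std::vector<cv::KeyPoint> blobs;
    detector->detect(foreground, blobs);

    // 3. Speed check: reject blobs that jump too far between frames.
    const float maxPixelsPerFrame = 40.0f;
    for (const cv::KeyPoint& kp : blobs) {
        float dx = kp.pt.x - lastTip.x;
        float dy = kp.pt.y - lastTip.y;
        if (lastTip.x < 0 ||
            dx * dx + dy * dy < maxPixelsPerFrame * maxPixelsPerFrame) {
            lastTip = kp.pt;
            tip = kp.pt;
            return true;
        }
    }
    return false;
}
```

Disabling shadow detection keeps the foreground mask strictly binary (0/255), which is what the blob detector expects here.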

Step 4: Spells, Wand Movements and Spell (character) Recognition

Picture 1 shows the wand movements for the four spells we will be using.
This picture serves another important purpose: it is used by OpenCV's machine learning algorithm to train for character recognition. Upon successful training, the program creates a .yml file which is then referenced while performing character recognition. This blog gives a nice explanation of how it works.
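For a flavour of the recognition side, here is a hedged sketch assuming the model is OpenCV's k-nearest-neighbour classifier (cv::ml::KNearest) trained on flattened 64x64 images; the actual algorithm and sample size used in ImageProcessor.cpp may differ:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>

// Returns the index of the spell the finished trace best matches.
int recognizeSpell(const cv::Mat& trace,
                   const cv::Ptr<cv::ml::KNearest>& model)
{
    // Resize the trace to the training sample size and flatten to one row.
    cv::Mat sample;
    cv::resize(trace, sample, cv::Size(64, 64));
    sample.convertTo(sample, CV_32F);
    sample = sample.reshape(1, 1);  // 1 row x 4096 columns

    cv::Mat results;
    float label = model->findNearest(sample, 3 /*k*/, results);
    return static_cast<int>(label);
}

// Usage: load the trained model once at startup, e.g.
// cv::Ptr<cv::ml::KNearest> model =
//     cv::Algorithm::load<cv::ml::KNearest>("spellmodel.yml");
```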

*The ImageProcessor code contains training modules too, so you can add your own spells. I will update this page once I clean up that section of the code.*

Practice the spells until you can draw them convincingly a good number of times so that the algorithm can find a proper match in the trained model.

Update:

I have updated the code to enable 'spell training', that is, training the machine learning algorithm to recognize your own spells. To do this, use the branch 'addSpellTraining' from the techMagic repository.

Here you will find some updated options in the config file. The steps for spell training are:

1. Save images to use for spell training.
In the config file, set ENABLE_SAVE_IMAGE to true. When you run the code now, you will be able to use your wand to draw more freely. It feels a lot like drawing on a high-viscosity liquid, where the trace slowly disappears as you keep drawing. I find that this makes it easy to correct mistakes: just hold the wand stationary for a few seconds, which erases the trace, and start the drawing all over again. When you draw a spell satisfactorily, hit the SPACE bar to save it to the project directory (you can change this path in the config file). The spell drawing will be saved as a 64x64-pixel PNG image. Save as many images as you would like for each spell; the more you use, the larger the training sample set, which increases the spell recognition model's accuracy. The code uses a default of 20 images per spell, which I have found to be a good balance between accuracy and number of samples.

2. Create the master image to be used for spell training.
Once you have saved the images, stitch them together using either photo-editing software or your own script. You can use the above image that I used for spell training as a reference for creating your own set. If you don't want to replace the spells in my image but just append a few more, simply edit the image to add yours to it.

3. Run the training algorithm.
When you have your edited image ready, switch ENABLE_SPELL_TRAINING in the config file to true and edit the file names for GESTURE_TRINER_IMAGE and TRAINED_SPELL_MODEL if required. Run the code and voilà! You should have your .yml model file ready to use in normal mode (see the sketch below for what this step does under the hood).
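For reference, here is a hedged sketch of what such a training step could look like, assuming a k-nearest-neighbour model and a master image laid out as one spell per row of 64x64 tiles; check ImageProcessor.cpp on the 'addSpellTraining' branch for the real layout and algorithm:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>
#include <string>

// Trains a KNN model from a master image and writes it to a .yml file.
// The row-per-spell tile layout is an assumption for illustration.
void trainSpells(const std::string& trainerImage,
                 const std::string& modelFile,
                 int samplesPerSpell)
{
    cv::Mat master = cv::imread(trainerImage, cv::IMREAD_GRAYSCALE);
    cv::Mat samples, labels;

    for (int r = 0; r < master.rows / 64; ++r) {
        for (int c = 0; c < samplesPerSpell; ++c) {
            cv::Mat tile = master(cv::Rect(c * 64, r * 64, 64, 64)).clone();
            tile.convertTo(tile, CV_32F);
            samples.push_back(tile.reshape(1, 1)); // flatten to one row
            labels.push_back(r);                   // label = spell index
        }
    }

    cv::Ptr<cv::ml::KNearest> knn = cv::ml::KNearest::create();
    knn->train(samples, cv::ml::ROW_SAMPLE, labels);
    knn->save(modelFile);  // writes the trained .yml model
}
```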

Step 5: Interfacing With Hue Lights

- Go to the Hue API documentation page and follow the steps to get your Hue bridge's IP address and your username. Add them to config.h as shown in the picture above.
- Add httpService.cpp to your solution. You can also look up the JSON data corresponding to the different light configurations you want and change the .cpp file accordingly.
- The default setting is to turn light number 1 on/off, as in the sketch below.
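For illustration, here is a minimal libcurl sketch of that on/off call, a PUT to the bridge's REST API for light 1. It uses the placeholder HUE_IP and HUE_USERNAME macros from the config sketch in Step 2, and omits error handling; the project's httpService.cpp is the real implementation:

```cpp
#include <curl/curl.h>
#include <string>

// Turns Hue light 1 on or off via the bridge's REST API.
void setLight1(bool on)
{
    CURL* curl = curl_easy_init();
    if (!curl) return;

    std::string url = std::string("http://") + HUE_IP +
                      "/api/" + HUE_USERNAME + "/lights/1/state";
    std::string body = on ? "{\"on\":true}" : "{\"on\":false}";

    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
}
```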

Step 6: Interfacing With Spotify

NOTE: Using the play/pause feature described here requires a premium account.
1. Go to the Spotify API tutorial and follow all the steps to the end to get your app's authentication string and refresh token. **IMPORTANT: Use the Spotify files provided in the techMagic repo instead of the OAuth example from the Spotify repo (Picture 1).**

2. When you get to the last step ('Running the application') of the tutorial and log into your account, your auth string and refresh token will be printed to the command line. Copy these two strings to the corresponding fields in config.h as shown in Picture 3 above.

3. To play a specific track/playlist/album, copy its Spotify URI to the SPOTIFY_PLAY_CONTENT field in config.h

4. Test the code by performing the correct spell, or with the 'm' keypress in debug mode (a sketch of the underlying request follows this list).
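For illustration, here is a hedged libcurl sketch of the 'play' request against Spotify's Web API, assuming you already have a valid access token from the refresh-token flow above. SPOTIFY_PLAY_CONTENT comes from config.h; the project's httpService code handles this for real:

```cpp
#include <curl/curl.h>
#include <string>

// Starts playback of the configured album/playlist URI.
// (For a single track, the body would use "uris" instead of "context_uri".)
void playContent(const std::string& accessToken)
{
    CURL* curl = curl_easy_init();
    if (!curl) return;

    std::string body =
        std::string("{\"context_uri\":\"") + SPOTIFY_PLAY_CONTENT + "\"}";

    struct curl_slist* headers = nullptr;
    headers = curl_slist_append(headers,
        ("Authorization: Bearer " + accessToken).c_str());
    headers = curl_slist_append(headers, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://api.spotify.com/v1/me/player/play");
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
    curl_easy_perform(curl);
    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
}
```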

Step 7: Blinds Controller

1. Motor used: https://www.pololu.com/product/2365

2. You basically want to attach something to the motor shaft that can couple with the blinds' tilter. I just used a spare gear attachment I had and drilled a hole along its axis of rotation.

3. Then I used a bendable metal strip to hold the motor at a slight angle so the tilter is along the motor's rotational axis. It's important to use something sturdy but bendable, to allow the motor to adjust its alignment as required.

4. If you are new to Simblee, I recommend reading its documentation. Basically, it is a BLE module that also has its own mesh communication protocol, 'SimbleeCOM'. I used this feature to control two devices, the blinds and the toy robot, from one central module.

5. Logic: A central Simblee board, connected to the computer, listens for commands sent by techMagicApp and broadcasts them over SimbleeCOM. The blinds controller's Simblee board listens for messages on SimbleeCOM, and when it encounters a message intended for it (e.g., 'TURN_BLINDS'), it moves the motors (see the sketch after this list). When DEBUG is true in techMagic, you can use the keys 1, 2, 3 to rotate motors 1-3 clockwise, and !, @, # to rotate them counterclockwise. This is a handy trick for initializing the blinds' position.

6. The blinds controller schematic is attached below; the code for it and for the central Simblee board is in the 'Arduino files' folder in the project repo. The Serial class (Serial.h and .cpp) in the techMagic project facilitates the serial communication.
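To illustrate the receiving side of point 5, here is a hedged Arduino-style sketch of a blinds controller listening on SimbleeCOM. The pin choices, message format, and motor-drive timing are assumptions of mine; the actual code is in the repo's 'Arduino files' folder:

```cpp
#include <SimbleeCOM.h>
#include <string.h>

// Hypothetical H-bridge pins driving the blinds motor.
const int MOTOR_A = 2;
const int MOTOR_B = 3;

void setup()
{
  pinMode(MOTOR_A, OUTPUT);
  pinMode(MOTOR_B, OUTPUT);
  SimbleeCOM.begin();   // join the SimbleeCOM mesh
}

void loop()
{
  // All work happens in the receive callback below.
}

// Called by the Simblee stack whenever a mesh packet arrives.
void SimbleeCOM_onReceive(unsigned int esn, const char *payload,
                          int len, int rssi)
{
  // Act only on messages intended for the blinds controller.
  if (len >= 11 && strncmp(payload, "TURN_BLINDS", 11) == 0) {
    // Pulse the motor briefly to tilt the blinds (timing is illustrative).
    digitalWrite(MOTOR_A, HIGH);
    digitalWrite(MOTOR_B, LOW);
    delay(500);
    digitalWrite(MOTOR_A, LOW);
  }
}
```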

*I didn't include the walking toy robot since it's a whole other project, and its interface is the same as the blinds controller's.*

Step 8: Put It All Together

That's all there is to it! Put it all together and you should have a project folder quite similar to the techMagic repo (sans the documentation pictures). Get your wands out and bring the objects around you to life!

End Notes:
Going through the code while following this instructable will help you understand the project better. I have tried to make my code easy to read, with a lot of comments; still, improvements and suggestions are welcome. This project was only a fun experiment, and I'm sharing it so we can collectively make the world more magical. It's not a turnkey project, but I hope creative tech enthusiasts will be able to learn some valuable things from it about technology-powered wandlore ;)

Step 9: Fin

Special thanks to:
..my friends Alexis and Natasha for helping me with documentation
..my roommate Aish, for pointing me towards cURL
..and my cousin Manasi, for the amazing trip to the Wizarding World!

Thank you for reading!!

Participated in the Wireless Contest