Introduction: RoboPhoto - a Mosaic Generator for the Public

RoboPhoto is a real-time photomosaic generator

RoboPhoto creates a photomosaic of its users – while you wait.

By using modern digital techniques such as image processing, face recognition and artificial intelligence, RoboPhoto is capable of creating a photomosaic of all visitors who walk past and press its button – in real time.

Each time the button is pressed, a photo is taken of the person in front of the camera. Each photo is instantly scanned and interpreted by RoboPhoto. The RoboPhoto software then alters each individual image so that it becomes part of a larger image, and prints the altered picture onto a sticker labelled with a set of coordinates that indicate the location of that photo within the larger image. Each visitor is then asked to place their own photo-sticker onto a larger canvas containing only the corresponding grid.

During the operation of RoboPhoto, a new image is created: a photomosaic composed of these individual photographs that mimics a predefined 'target image'.

RoboPhoto also operates in single-user mode. When configured this way, RoboPhoto creates a full mosaic of a single user.

Supplies

  • A Windows 10 PC with Visual Studio and IoT packages installed
  • A Raspberry Pi 3B+ with Microsoft Windows 10 IoT installed
  • A colour label printer (Brother VC-500W)
  • A big red pushbutton mounted on a pedestal for user input
  • A HDMI screen for user-feedback
  • A Microsoft Xbox Kinect v2 camera – stolen from my son – to take photographs
  • A network (Wifi, LAN)
  • A target grid: a sheet of paper with a coordinate grid printed on it. This paper grid is used as the canvas onto which visitors stick their photographs at the designated coordinates. Together, these stickers will eventually form the end result: a beautiful new photomosaic.

A Microsoft Kinect 2.0 camera was used because it can take depth images. This feature is used to create a virtual greenscreen on each individual photograph. This way, RoboPhoto can repaint the background of each individual photograph to match the colour of a target piece within the mosaic-to-be.

Step 1: How It Operates

RoboPhoto is an installation consisting of a pedestal with a big red button on it, a computer with a label printer attached, and a small IoT device handling the user interface (screen and button). In my case: a Raspberry Pi 3B+.

  1. RoboPhoto operates in a publicly accessible location and is (after switching it on) self-operating. When running, passing visitors are encouraged by RoboPhoto to press its big red button.
  2. Whenever that big red button is pressed, RoboPhoto takes a photograph of the visitor who just pressed it, using the Kinect camera.
  3. Then RoboPhoto uses its advanced A.I. and image-processing skills to alter each photo to match a piece within the mosaic-to-be. To achieve this, RoboPhoto repaints the background of each photo to match the colour of a target piece within a pre-loaded image. After editing, RoboPhoto prints the edited photo onto a sticker, together with a set of coordinates that pinpoint the location of this one sticker within the mosaic.

  4. Then the user is asked to place the sticker on the mosaic target-sheet.

  5. And thus – after many people have visited – a new piece of art will emerge. To create a mosaic you will need a lot of individual pieces; I got decent results with 600 pieces.
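The coordinate printed on each sticker follows from a running piece counter and the grid size. Here is a minimal sketch of that mapping (in Python for illustration; the actual solution is C#, and the letter-plus-number labelling scheme is my own assumption):

```python
def piece_coordinate(index, columns):
    """Map a running piece index (0-based) to a grid label like 'C7'.

    Pieces are assumed to fill the grid row by row, left to right.
    The column becomes a letter, the row a 1-based number.
    """
    row, col = divmod(index, columns)
    return f"{chr(ord('A') + col)}{row + 1}"

# A 20-column by 30-row grid covers the 600 pieces mentioned above.
first = piece_coordinate(0, 20)    # top-left corner of the grid
last = piece_coordinate(599, 20)   # bottom-right corner
```

With 20 columns the letters stay within A–T; a wider mosaic would need double letters (AA, AB, ...), which this sketch does not handle.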

RoboPhoto can also operate in single-user-mode.

In this configuration, RoboPhoto creates a full mosaic out of edited photographs of one single user. After hitting the button, RoboPhoto will shoot about 600 different photographs of the user, then edit and arrange them all to form one single new mosaic, based on a pre-selected target image.

Step 2: Assembling the Hardware

As shown in the picture above, the Win 10 PC is connected to the Kinect camera. Kinect must be connected via USB 3.0. At the time I created RoboPhoto, no Raspberry Pi with USB 3.0 was available.*

The PC also handles printing to the attached label printer, in my case a Brother VC-500W: a fairly cheap household colour label printer. It is, however, very, very slow. Better use a professional one if you can.

The Big Red Button is attached to a Raspberry Pi 3B+. Only 4 wires are attached to the GPIO; this is the only soldering needed in this Instructable. The Pi also provides feedback to our visitor by means of a 7'' TFT screen over HDMI.
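A physical button on GPIO needs debouncing, or one press registers as several. On Windows 10 IoT the UWP GpioPin class handles this through its DebounceTimeout property; the idea behind it can be sketched in plain Python (the sample sequence and stability threshold below are made-up illustrations, not values from the RoboPhoto code):

```python
def debounce(samples, stable_count=3):
    """Return the debounced level transitions from a list of raw GPIO samples.

    A new level is accepted only after it has been seen `stable_count`
    times in a row, which filters out mechanical contact bounce.
    """
    if not samples:
        return []
    events = []
    current = samples[0]          # the last accepted stable level
    run_value, run_length = current, 0
    for s in samples:
        if s == run_value:
            run_length += 1
        else:
            run_value, run_length = s, 1
        if run_length >= stable_count and run_value != current:
            current = run_value
            events.append(current)
    return events

# A noisy press-and-release: the lone glitches are filtered out.
presses = debounce([0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0])
```

Here `presses` ends up as `[1, 0]`: one clean press event followed by one clean release.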

To tidy it up, I built a wooden pedestal that holds all these components.

Next to the pedestal, against the wall, a sheet of paper containing the target grid and coordinates is placed (A1/A2). Because the label printer I used maxes out at a label width of 2.5 cm, all squares in this grid measure 2.5 cm x 2.5 cm.

*Today, the Raspberry Pi 4 does offer USB 3.0, and Windows 10 can be run on the device. So it should theoretically be possible to create a RoboPhoto v2.0 without the use of a PC. Perhaps Covid-19 will provide me with enough time on my own to publish such an Instructable soon.

Step 3: Writing the Code

Code

RoboPhoto was created with VisualStudio as a solution with two projects:

  1. A Windows Forms application on the PC, hosting a TCP server and handling Kinect input
  2. A UWP headed application on the Raspberry Pi 3B+ (set as startup app), hosting a TCP client to handle button-press events and provide the user with feedback through its 7'' TFT screen

In the diagram above, I've tried to give you an idea of what my software is doing. The Visual Studio solution I wrote to create this (absolutely 100% working) RoboPhoto is provided with this Instructable. However, I must warn everyone downloading this file: the code I wrote is far from pretty and often bound to my dev PC. So I encourage everyone to create a better, nicer, and steadier solution.

https://1drv.ms/u/s!Aq7eBym1bHDKkKcigYzt8az9WEYOOg...

Network

In the example code, the Pi's code is deployed through Visual Studio to an IP address in my network. You should probably change this to fit your own. To do this, right-click the ARM client project after opening the solution in Visual Studio, choose Properties, and change the value of Remote machine to the IP address of your own Pi. You also need to allow traffic from client to server on port 8123 within the Windows Firewall on the server (PC). If you run the solution from Visual Studio, it should ask you to do this for you.
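The client/server exchange boils down to a tiny request/response pattern: the Pi reports a button press, the PC acknowledges. A minimal Python sketch of that pattern (the real solution uses a Windows Forms TCP server and a UWP client on port 8123; the message text and ACK format here are made up for illustration):

```python
import socket
import threading

def serve_once(server_sock):
    """Server side: accept a single client, read its message, answer with an ACK."""
    conn, _ = server_sock.accept()
    with conn:
        message = conn.recv(1024).decode().strip()
        conn.sendall(f"ACK {message}\n".encode())

def send_event(host, port, message):
    """Client side: report a button press and wait for the reply."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(f"{message}\n".encode())
        return sock.recv(1024).decode().strip()

# RoboPhoto listens on port 8123; an ephemeral port (0) is used here so
# the sketch can run anywhere without firewall changes.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

reply = send_event("127.0.0.1", server.getsockname()[1], "BUTTON_PRESSED")
server.close()
```

The same pattern works across the LAN once the firewall rule for port 8123 is in place.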

While coding, I had lots of trouble getting the Win32 and UWP sides to communicate properly. I got it working by using two separate classes in client and server: MyEchoClient.cs (in the ARM client) and ConnectionClient.cs (handling client connections in the server), respectively.

Mosaic files - custom class

RoboPhoto creates mosaics to mimic a target image. This target image, all individual photographs that together make up the mosaic-to-be, and some other properties of each RoboPhoto project are stored as files in the filesystem. My accompanying code uses a set of files and folders in the directory c:\tmp\MosaicBuilder. Within this folder, the code will read all subfolders whose name starts with [prj_] as mosaic project folders. Within each of these [prj_] folders, it will try to open a project file named [_projectdata.txt] that contains all information required for that project.

Such a projectfile consists of:

  1. The full path and filename of the target image of this project
  2. The full path where the individual photographs (pieces) of this project are stored
  3. The number of columns the mosaic will contain
  4. The number of rows the mosaic will contain
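Reading such a project file can be sketched as follows (Python for illustration; the actual code is C#. The line order follows the list above, but the dictionary key names are my own invention):

```python
from pathlib import Path

def read_project_file(path):
    """Parse a _projectdata.txt: one value per line, in the order
    target image, pieces folder, columns, rows."""
    lines = Path(path).read_text().splitlines()
    return {
        "target_image": lines[0].strip(),
        "pieces_folder": lines[1].strip(),
        "columns": int(lines[2]),
        "rows": int(lines[3]),
    }

def find_projects(root):
    """Collect all [prj_] subfolders of the mosaic root directory."""
    return sorted(p for p in Path(root).iterdir()
                  if p.is_dir() and p.name.startswith("prj_"))
```

A call like `find_projects(r"c:\tmp\MosaicBuilder")` would then yield each project folder, and `read_project_file` the settings inside it.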

Example projects are provided in the zip file: \slnBBMosaic2\wfMosaicServerKinect\bin\x86\Debug\prj_xxx

In the C# server code, all mosaic handling is done via a custom class: BBMosaicProject.cs


Microsoft Kinect v2.0 - Greenscreen

To just take photographs, any camera will do. But I used a Microsoft Kinect v2.0 to combine colour images and depth images. This way, a greenscreen effect can be created: the background in every colour image received from the Kinect is replaced with a uniform green surface (BBBackgroundRemovalTool.cs).
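The greenscreen step boils down to masking the colour image with the depth image: every pixel beyond some distance is treated as background. A minimal Python/NumPy sketch, assuming the depth frame is already registered to the colour frame (the Kinect SDK's coordinate mapper does that in the real code) and an arbitrary 1.5 m cut-off:

```python
import numpy as np

GREEN = np.array([0, 255, 0], dtype=np.uint8)

def apply_greenscreen(color, depth, max_distance_mm=1500):
    """Replace every pixel farther away than max_distance_mm with green.

    color: HxWx3 uint8 colour frame; depth: HxW depth map in millimetres,
    aligned to the colour frame. The 1.5 m cut-off is an assumption.
    """
    background = depth > max_distance_mm   # boolean mask of far-away pixels
    result = color.copy()
    result[background] = GREEN
    return result
```

In RoboPhoto the green surface is later swapped for the target-piece colour, so the exact shade used here is irrelevant as long as it is uniform.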

A reference to Microsoft.Kinect was added to the server project.

EMGU

Because we need to be sure a person is in the photograph that was taken when the button was pressed, facial recognition capabilities were added to RoboPhoto.

https://www.nuget.org/packages/Emgu.CV/3.4.3.3016

Only when a person is in the picture will the greenscreen in that picture be replaced by a uniformly coloured surface, with a colour code equal to the average colour of the target piece in the mosaic-to-be that this picture will become.
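Computing that average colour per target piece can be sketched like this (Python for illustration; the actual code is C#. The cell boundaries are simply the target image divided evenly by the row and column counts):

```python
import numpy as np

def target_piece_colour(target, rows, columns, row, col):
    """Average colour of one grid cell of the target image.

    target: HxWx3 uint8 array of the pre-loaded target image.
    Returns the mean colour of cell (row, col) as a uint8 triple.
    """
    h, w = target.shape[:2]
    y0, y1 = row * h // rows, (row + 1) * h // rows
    x0, x1 = col * w // columns, (col + 1) * w // columns
    cell = target[y0:y1, x0:x1]
    return cell.reshape(-1, 3).mean(axis=0).astype(np.uint8)
```

Painting each photo's background with this per-cell colour is what makes the stickers blend into the target image once they are all placed on the grid.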

Step 4: Thank You

Thank you for reading my Instructable. This was my first. I hope you enjoyed it.