A Raspberry Pi Multispectral Camera

Introduction: A Raspberry Pi Multispectral Camera

A multispectral camera can be a handy tool for detecting stress in plants, or for recognising different species based on the differences in their reflectance signatures. Combined with a drone, the camera can provide the data for quick NDVIs (Normalised Difference Vegetation Index), create mosaics of farms, forests or woodlands, help understand nitrogen consumption, create yield maps and so on. But multispectral cameras can be costly, and their price is directly proportional to the sort of technology they implement.

A traditional approach to multispectral imaging is to use several cameras fitted with long- or short-pass filters that let the required spectrum through while blocking the rest. There are two challenges with that approach: first, you need to trigger the cameras at the same time, or as close to it as possible; and second, you need to register the images (align and stack them layer by layer) so they form one final composite containing the desired bands. This means a great deal of post-processing, which consumes time and resources (often using expensive software such as ArcMap, though not necessarily).

Other approaches deal with this in different ways. Recent developments at the processor level have allowed the creation of CMOS sensors with band filters integrated into the sensor's layout. Another approach is to use a beam splitter (prism) that directs the different beams of light to separate sensors. All of these technologies are extremely expensive and therefore out of reach for explorers and makers. The Raspberry Pi Compute Module and its development board offer a cheap answer to some of these problems (though not all of them).

Step 1: Enabling the Cameras

Make sure you follow the steps for setting up the cameras in the CM as indicated in the following tutorials:

https://www.raspberrypi.org/documentation/hardware...

Trigger both cameras at the same time using:

sudo raspistill -cs 0 -o test1.jpg & raspistill -cs 1 -o test2.jpg

If for any reason that doesn't work, see the following forum topic:

https://www.raspberrypi.org/forums/viewtopic.php?f...

Further instructions, in case you are starting from scratch with the CM, can be found here:

https://www.raspberrypi.org/documentation/hardware...
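Once both cameras respond, you can wrap the trigger command in a small script. The following is only a minimal sketch (the file names, output directory and 1-second timeout are illustrative) that saves a timestamped pair from both camera ports:

#!/bin/bash
# Capture a near-simultaneous pair from both CM camera ports.
STAMP=$(date +%Y%m%d_%H%M%S)
OUTDIR=/home/pi/captures           # illustrative path, change to taste
mkdir -p "$OUTDIR"
raspistill -cs 0 -t 1000 -o "$OUTDIR/cam0_$STAMP.jpg" &
raspistill -cs 1 -t 1000 -o "$OUTDIR/cam1_$STAMP.jpg" &
wait                               # wait for both captures to finish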

Step 2: Wireless Serial Communication

Buy a set of telemetry radios like these:

http://hobbyking.co.uk/hobbyking/store/__55559__HK...

These radios have four wires: ground (black), TX, RX and VCC (red). Strip one end of the cables and fit female connectors that match the GPIO pins. Connect the black wire to ground, the red wire to 5V, TX to pin 15, and RX to pin 14 of the J5 GPIO header on the Compute Module development board.

Make sure you set the baud rate to 57600, and that your host computer has recognised the radio and added it as a COM port (on Windows, use Device Manager to check). If using PuTTY, choose Serial, select the COM port (3, 4 or whatever it is on your computer), and set the baud rate to 57600. Switch your CM on and, after it finishes booting, press Enter on your computer if you don't see any text coming through the connection. If you see garbled text, check /boot/cmdline.txt; the baud rate there should be 57600. If any further problems arise, please check the following tutorial:

http://www.hobbytronics.co.uk/raspberry-pi-serial-...
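To double-check the link from the CM side, something like this rough sketch can help; the device name is an assumption (it may be /dev/ttyAMA0 on older images or /dev/serial0 on newer ones):

# Set the port to 57600 baud, 8 data bits, no parity, 1 stop bit
stty -F /dev/ttyAMA0 57600 cs8 -cstopb -parenb
echo "hello from the CM" > /dev/ttyAMA0
cat /dev/ttyAMA0    # anything typed in PuTTY on the host should appear here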

Step 3: The Cameras...

You can actually use the cameras in their original configuration, but if not, you will need to modify them in order to accommodate the M12 lenses. Bear in mind that the Raspberry Pi cameras V1 and V2 are slightly different, so old M12 holders won't work on the new cameras. Also, there were some problems when triggering the new cameras in parallel; if you experience any of these problems, please check this topic on the Raspberry Pi forum:

https://www.raspberrypi.org/forums/viewtopic.php?t...

In any case, a sudo rpi-update should fix the issue.

The M12 lens holder can be ground down with a Dremel so that the connector of the CMOS sensor fits against the camera board. Unscrew the original lens, and place the new lens in the M12 holder. For better results you can get rid of the original lens adapter altogether, but it might not be worth the effort given the risk of damaging the sensor. I destroyed at least six camera boards before managing to remove the plastic holder that sits above the CMOS sensor.

Step 4: Wifi Connection and Extra Storage

The CM development board has just one USB port, so you have to use it wisely, e.g. for the wifi connection. If you want to get around that, you will have to use your soldering skills and attach a dual USB connector under the development board, where the existing USB port is soldered. If you are using the same parts as I have:

https://www.amazon.co.uk/gp/product/B00B4GGW5Q/ref...

https://www.amazon.co.uk/gp/product/B005HKIDF2/ref...

Just follow the cable order in the picture.

Once done, attach your wifi module to the dual port, power the CM on and check that the wifi module is working correctly.
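A quick way to confirm the dongle is alive (standard tools, nothing specific to this build):

lsusb               # the wifi dongle should appear in the USB device list
ip a                # look for a wlan0 interface
iwconfig wlan0      # optional: shows wireless link details if wireless-tools is installed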

It is easier to attach an SD card than a USB drive, so buy something like this:

https://www.amazon.co.uk/gp/product/B00KX4TORI/ref...

To mount the new external storage, follow this tutorial carefully:

http://www.htpcguides.com/properly-mount-usb-stora...
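In case that link ever disappears, the basic steps look roughly like this; the device name /dev/sda1, the ext4 format and the mount point are all assumptions you should check against the output of blkid:

sudo blkid                          # note the device and UUID of the card reader
sudo mkdir -p /mnt/storage
sudo mount /dev/sda1 /mnt/storage   # quick manual test
# For mounting at boot, add a line like this to /etc/fstab (UUID is illustrative):
# UUID=your-uuid-here  /mnt/storage  ext4  defaults,noatime,nofail  0  2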

Now you have two USB ports, extra storage and a wifi connection.

Step 5: Print the Case

Use ABS filament.

Step 6: Put the Pieces Together

Before you assemble the camera, connect a monitor and keyboard to the CM, and focus the lenses. The best way to do that is to use the following command:

raspistill -cs 0 -t 0 -k -o my_pics%02d.jpg

That runs the camera preview indefinitely, so, watching your screen, turn the lens until the image is in focus. Remember to do the same with the other camera by changing the -cs option from 0 to 1.

Once your lenses are focused put a small drop of glue between the lens and the M12 lens holder to prevent any movement of the lens. Do the same while attaching the lenses to the case. Make sure that both lenses are aligned as much as possible.

Use a drill to open a hole in the side of the case and feed the radio antenna through it. Fix the radio securely with double-sided tape and connect it to the GPIO.

Place the CM development board inside the case and secure it with four 10 mm metal hexagonal standoffs. Secure the camera connector adapters so they don't bounce around freely inside.

Step 7: Configure Dropbox-Uploader and Install the Camera Script

Install dropbox_uploader following the instructions provided here:

https://github.com/andreafabrizi/Dropbox-Uploader

Use a script similar to that in the picture.
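Since the original script is only shown as a photo, here is a rough sketch of what such a capture-and-upload loop might look like. It assumes Dropbox-Uploader is cloned to /home/pi/Dropbox-Uploader and already configured, and that the external storage is mounted at /mnt/storage; the interval and Dropbox folder are illustrative:

#!/bin/bash
# Capture a pair of images from both cameras and push them to Dropbox, forever.
OUTDIR=/mnt/storage/captures
UPLOADER=/home/pi/Dropbox-Uploader/dropbox_uploader.sh
mkdir -p "$OUTDIR"
while true; do
    STAMP=$(date +%Y%m%d_%H%M%S)
    raspistill -cs 0 -t 1000 -o "$OUTDIR/cam0_$STAMP.jpg" &
    raspistill -cs 1 -t 1000 -o "$OUTDIR/cam1_$STAMP.jpg" &
    wait
    "$UPLOADER" upload "$OUTDIR/cam0_$STAMP.jpg" /multispectral/
    "$UPLOADER" upload "$OUTDIR/cam1_$STAMP.jpg" /multispectral/
    sleep 5    # pause between capture cycles
done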

Step 8: Final Product

The final camera can be mounted under a medium-sized drone (650 mm ⌀) or even a smaller one, depending on the configuration. The camera weighs no more than 350-400 grams.

To power the camera, you will have to provide a separate battery, or connect the camera to the power board of your drone. Be careful not to exceed the power requirements of the CM board. You can use the following items to power your camera:

https://www.adafruit.com/products/353

https://www.amazon.co.uk/USB-Solar-Lithium-Polymer...

You can also build the mount and the anti-vibration dampers according to your drone's specifications.

Once you have taken the first pictures, use a GIS program such as QGIS or ArcGIS to register your images. You can also use MATLAB.

Happy flight!

24 Comments

Hi. Congrats on this fantastic work. Just a couple of questions before I purchase some parts for my own project.

a) Regarding focus... have you focused them at infinity, or will I have problems with different flight altitudes?

b) These are rolling shutter cameras... did you experience any blurring or aliasing? What speed did you test it at?

c) As it is virtually impossible for both cameras to get the same FOV, you will have to align and probably crop both images to make them fit... did you use any software for this? If so, on board or after downloading the photos to the PC?

I am planning to build a global shutter camera like yours and those questions are still open for me.

Thanks in advance!

Jose

Hi @Kankamuso,


a) You can definitely focus them at infinity.
b) Yes, I did experience blurring. It's not suitable for taking pictures while the drone is moving. You could potentially stop over your AOI and take some photographs. The more serious problem is that the cameras do not trigger at exactly the same time; there is a lag of around 8 milliseconds between them, which is disastrous if you are moving too fast.
c) FOV should not be a problem, at least if you are flying above 50 m. You don't need to crop the images, but you'll have to align them, either using commercial software like ArcMap or by following a few tutorials on using OpenCV with C or Python for that purpose. Either way, you should get good results. You'll also have to take into consideration things like the vignetting effect, row gradient and irradiance calibration if you DO want to make something serious out of it.
d) Remember that you get what you pay for, so do not get too excited about excellent results with this system. A commercial multispectral camera costs around $5K, at least 10 times more than this simple camera, but on the other hand you get monochrome sensors, perfect sync and technical assistance. I still believe this system should be enough to introduce hobbyists to multispectral imaging, let them learn the basics, and then move on.

I hope this helps.

So this is getting complicated! XD. I currently have some DJI cameras on board my drones. Some of them are used for NDVI calculation and are CMOS/RGB based, but with the IR filter removed. Therefore I suppose I can use a filtered RGB to compute the NDVI (or at least a variant). I want to go serious with this, so I will have a look at the FPGA option, though I don't know where to start... I also own a Parrot Sequoia but it is failing, so I decided I could do something similar as a computer scientist, but I lack expertise in this field as well as in special-purpose optics.

In fact, I feel there is still room for such a compact and inexpensive camera for drone work... don't you think so?

Thanks a lot!

The reality is, these things are complicated. The problem with RGB sensors is that they let you play with images but, as you progress and need to do more complicated analyses, you'll realise you need a sensor that is capable of picking up the right wavelength. Read this post for example: https://publiclab.org/notes/khufkens/11-02-2015/ov5647-raspberry-pi-camera-spectral-response-quantum-efficiency/
Now, I'm a bit intrigued. If you own the DJI cameras and a Parrot Sequoia, they should be more than enough for NDVIs. What exactly is it that you want to do? What do you mean by the Parrot Sequoia failing?

The Sequoia is a second-hand one and whenever I connect the sunshine sensor it hangs on shooting :-(. I still don't know the cause, but something is causing it to reboot or just freeze (and drop the wifi connection), so it is unusable at this time. The DJI cameras can only be mounted one at a time on my drones (Matrice 100 and Matrice 200), so I have to decide whether I get an NDVI or a visible orthomosaic, which means I have to fly twice to get both. This is an enormous waste of time (and batteries), so a single camera having it all would be nice. But, with all of that, my main motivation is testing my abilities and creating something on my own. If I can use it later commercially, that would be an extra and welcome outcome :-). But... I do not have much time and there is not much information out there. At the moment I have access to FPGAs, but I don't know how to connect available cameras ("cheap" ones with global shutter). I have seen the ArduCam products and they seem promising for my needs, with lag being the only apparent problem at this moment...

In short, I am looking to create something and see it work ! :-)

The irradiance (sunshine) sensor is not fully usable with those cameras; the only function it really performs is to add some more information to the metadata of the images. I think you should play with the camera without connecting it to the irradiance sensor. You can use a Lambertian surface to calibrate the camera instead. It will work better than the sensor, and it's recommended by MicaSense. You don't really need an RGB mosaic in most cases. In fact, you can create a true-colour image with the Sequoia by mixing the four bands. It may not look as nice and sharp as the RGB mosaic of the Zenmuse Z3 (I don't know which camera you own).
This is my platform BTW.

Nice one! BTW, what are you using the Jetson for? (I have been working in HPC for many years and my group currently has some of them.) And the black boxes? My cameras are X3s (two of them, with and without the NIR filter), one X4S and a new XT (coming this week) for inspection purposes.

I can also see we are not the only ones dealing with persistent dust everywhere... XD.

Drop me an email at maykef at gmail dot com

Thanks a lot! Delay is a serious problem, as I am planning to use it on a flying wing. I am intrigued as to how the commercial cameras solve this issue... perhaps an intermediate buffer... I don't know. I will definitely go for a global shutter camera (not really sure whether monochrome, or just an RGB and a NIR). Any suggestions based on your experience?

Regards,

Jose

Hi Jose,

It depends on what issue you refer to. Camera sync? That's not difficult if you can design your electronics from scratch, but that costs money. The cameras I've seen have an FPGA for each sensor, so obviously you hold the data there while channelling it through the bus to the SD card. But they do much more than just that.

As I said above, RGBs are not suitable for multispectral imaging; and most modifications I've seen are just for amateur projects. If you are serious about multispectral imaging you need to have all your bands separated, calibrated and aligned.

Good luck with your project!