Autonomous Drone With Infrared Camera to Assist First Responders

About: I'm just a regular 15-year-old who likes to do random science and engineering experiments! If you have any questions about anything at all, my email is

According to a World Health Organization report, natural disasters kill around 90,000 people every year and affect close to 160 million people worldwide. Natural disasters include earthquakes, tsunamis, volcanic eruptions, landslides, hurricanes, floods, wildfires, heat waves and droughts. Time is of the essence, as the chance of survival goes down with every minute that passes. First responders can have trouble locating survivors in damaged houses and put their lives at risk while looking for them. A system that can remotely locate people would greatly increase the speed at which first responders are able to evacuate them from buildings. After researching other systems, I found that some companies have created robots that are land based, or drones that can track people but only function outside of buildings. The combination of depth cameras with a special infrared camera allows for accurate tracking of the indoor area and detection of temperature changes representing fire, people, and animals. By implementing these sensors with a custom algorithm on an unmanned aerial vehicle (UAV), it is possible to autonomously inspect houses and identify the location of people and animals so they can be rescued as quickly as possible.

Please vote for me in the Optics contest!

Step 1: Design Requirements

After researching the technologies available, I discussed possible solutions with machine vision experts and a first responder to find the best method of detecting survivors in dangerous areas. The list below covers the most important features and design elements required for the system.

  • Vision Processing - The system needs to provide a fast processing speed for the exchanged information between the sensors and Artificial Intelligence (AI) response. For example, the system needs to be able to detect walls and obstacles to avoid them while also finding people who are in danger.
  • Autonomous - The system needs to be able to function without input from a user or an operator. Personnel with minimal experience with UAV technology should be able to press one or a few buttons to have the system start scanning by itself.
  • Range - The range is the distance between the system and all other objects in proximity. The system should be able to detect hallways and entrances from at least 5 meters away. The ideal minimum range is 0.25 m so that close objects can be detected. The greater the detection range, the shorter the detection time for survivors.
  • Navigation and Detection Accuracy - The system should be able to accurately find all entrances and not hit any objects while also detecting the sudden appearance of objects. The system needs to be able to find the difference between people and non-living objects through various sensors.
  • Duration of Operation - The system should be able to last 10 minutes or longer depending on how many rooms it needs to scan.
  • Speed - It should be able to scan the entire building in less than 10 minutes.
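As an illustration, the range requirement above can be captured as a quick sanity check for candidate sensors. This is a minimal sketch; the constant and function names are my own and not part of the actual build:

```python
# Range requirement from the design list: the system must resolve objects
# as close as 0.25 m and detect entrances from at least 5 m away.
REQUIRED_MIN_RANGE_M = 0.25
REQUIRED_ENTRANCE_RANGE_M = 5.0

def meets_range_requirement(sensor_min_m: float, sensor_max_m: float) -> bool:
    """Check whether a sensor's usable window covers the required range."""
    return (sensor_min_m <= REQUIRED_MIN_RANGE_M
            and sensor_max_m >= REQUIRED_ENTRANCE_RANGE_M)
```

For example, a depth camera rated for 0.2 m to 20 m would pass this check, while a short-range sensor covering only 0.5 m to 3 m would not.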

Step 2: Equipment Selection: Method of Mobility

The quadcopter was chosen over a remote control car because, although the quadcopter is fragile, it is easier to control and can change height to avoid obstacles. The quadcopter can hold all the sensors and stabilize them so that they are more accurate while moving from room to room. The propellers are made of carbon fiber, which is heat resistant. The sensors steer the quadcopter away from walls to prevent accidents.

  • Remote Control Land Vehicle
    • Pros - Can move quickly without falling and is not affected by temperature
    • Cons - The vehicle would put the sensors low to the ground covering less area at a time and can be blocked by obstacles
  • Quadcopter
    • Pros - Lifts sensors into the air to get a 360° view of surroundings
    • Cons - If it runs into a wall, it can fall and not recover

Step 3: Equipment Selection: Microcontrollers

The two main requirements for the microcontrollers are small size, to reduce the payload on the quadcopter, and speed, to process the incoming information rapidly. The combination of the Rock64 and the DJI Naza fits these requirements well: the Rock64 has sufficient processing power to quickly detect people and keep the quadcopter from running into walls and obstacles, and the DJI Naza complements it by handling all of the stabilization and motor control that the Rock64 can't do. The microcontrollers communicate through a serial port and allow for user control if necessary. The Raspberry Pi would have been a good alternative, but since the Rock64 had a better processor and better connectivity to the sensors listed below, the Pi was not selected. The Intel Edison and Pixhawk were not selected because of their lack of support and connectivity.

  • Raspberry Pi
    • Pros - Can detect walls and fixed objects
    • Cons - Struggles to keep up with data from all sensors so cannot see entrances quickly enough. Can't output motor signals and does not have any stabilizing sensors for the quadcopter
  • Rock64
    • Pros - Able to detect walls and entrances with little latency and able to guide the system through the house without running into anything using all of the sensors
    • Cons - Unable to send signals quickly enough to control motor speed and does not have any stabilizing sensors for the quadcopter
  • Intel Edison
    • Pros - Able to detect walls and entrances with some lag
    • Cons - Older technology, many of the sensors would need new libraries which is very time consuming to create
  • DJI Naza
    • Pros - Has integrated gyroscope, accelerometer, and magnetometer, to allow for quadcopter to be stable in the air with micro adjustments to motor speed
    • Cons - Unable to do any sort of vision processing
  • Pixhawk
    • Pros - Compact and compatible with sensors used in project by using the General Purpose Input Output (GPIO)
    • Cons - Unable to do any sort of vision processing
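To give a feel for the serial link between the two boards, here is a minimal sketch of how the Rock64 could frame control values before sending them to the flight controller. The byte layout (header, four channel values, checksum) is my own illustration for this article, not the Naza's actual protocol:

```python
import struct

# Hypothetical 10-byte command frame: 1 header byte, four 16-bit channel
# values (roll, pitch, yaw, throttle as 1000-2000 us pulse widths), and a
# 1-byte checksum over the payload.
HEADER = 0xA5

def pack_command(roll: int, pitch: int, yaw: int, throttle: int) -> bytes:
    """Pack four channel values into a framed command packet."""
    payload = struct.pack('<4H', roll, pitch, yaw, throttle)
    checksum = sum(payload) & 0xFF
    return bytes([HEADER]) + payload + bytes([checksum])

def unpack_command(frame: bytes) -> tuple:
    """Validate the header and checksum, then return the channel values."""
    assert frame[0] == HEADER and (sum(frame[1:9]) & 0xFF) == frame[9]
    return struct.unpack('<4H', frame[1:9])
```

A frame built this way could then be written to the serial port with a library such as pyserial; the checksum lets the receiving side discard corrupted packets.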

Step 4: Equipment Selection: Sensors

A combination of several sensors is used to obtain all of the information required to find people in dangerous areas. The two main sensors selected are the stereo infrared camera and SOund Navigation And Ranging (SONAR). After some testing, I decided to use the Realsense D435 camera because it is small and is able to accurately track distances up to 20 meters away. It runs at 90 frames per second, which allows many measurements to be taken before deciding where objects are and which direction to point the quadcopter in. SONAR sensors are placed on the top and bottom of the system so that the quadcopter knows how high or low it can go before making contact with a surface. Another is placed facing forward so the system can detect objects like glass, which the stereo infrared camera can't detect. People and animals are detected using motion and object recognition algorithms. A FLIR camera will be implemented to help the stereo infrared camera distinguish what is living from what is not, increasing the efficiency of scanning in adverse conditions.

  • Kinect V1
    • Pros - Can track 3D objects easily up to 6 meters away
    • Cons - Has only 1 infrared sensor and is too heavy for the quadcopter
  • Realsense D435
    • Pros - Has 2 infrared cameras and a Red, Green, Blue, Depth (RGB-D) camera for high precision 3D object detection up to 25 meters away. It is 6 cm wide allowing for easy fit in quadcopter
    • Cons - Can heat up and may need a cooling fan
  • LIDAR
    • Pros - Beam that can track locations up to 40 meters away in its line of sight
    • Cons - Heat in the environment can affect measurement precision
  • SONAR
    • Pros - Beam that can track objects up to 15 m away and is able to detect transparent objects like glass and acrylic
    • Cons - Only points in one line of sight but can be moved by the quadcopter to scan the area
  • Ultrasonic
    • Pros - Has a range of up to 3 m and is very inexpensive
    • Cons - Only points in one line of sight and can be out of range of distance sensing very easily
  • FLIR Camera
    • Pros - Able to take depth pictures through smoke without interference and can detect living people through heat signatures
    • Cons - If anything interferes with the sensors, the distance calculations can be incorrect
  • PIR sensor
    • Pros - Able to detect change in temperature
    • Cons - Unable to pinpoint where the temperature difference is
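The reason for pairing the camera with a forward SONAR is that the stereo camera misses transparent surfaces. A minimal sketch of that fusion idea, where the closest reading from either sensor wins (the function and parameter names are my own illustration):

```python
# Beyond this distance the D435 depth estimate is not trusted.
CAMERA_MAX_M = 20.0

def safe_forward_distance(camera_m, sonar_m):
    """Return the closest obstacle reported by either forward sensor.

    camera_m: depth-camera distance in meters, or None if nothing detected
    sonar_m:  forward SONAR distance in meters, or None if nothing detected
    """
    readings = []
    if camera_m is not None and camera_m <= CAMERA_MAX_M:
        readings.append(camera_m)
    if sonar_m is not None:
        readings.append(sonar_m)
    return min(readings) if readings else None
```

For example, if the camera sees the far wall at 5 m but the SONAR returns 1.2 m because of a glass pane, the system treats 1.2 m as the nearest obstacle.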

Step 5: Equipment Selection: Software

I used the Realsense SDK alongside the Robot Operating System (ROS) to create a seamless integration between all the sensors and the microcontroller. The SDK provided a steady stream of point cloud data, which was ideal for tracking all the objects and boundaries around the quadcopter. ROS helped me send all of the sensor data to the program that I created, which implements Artificial Intelligence. The AI consists of object detection and motion detection algorithms that allow the quadcopter to find movement in its environment. The controller uses Pulse Width Modulation (PWM) to control the position of the quadcopter.

  • Freenect
    • Pros - Has a lower level of access for controlling everything
    • Cons - Only supports the Kinect V1
  • Realsense SDK
    • Pros - Can easily create the point cloud data from the information stream from the Realsense Camera
    • Cons - Only supports Realsense D435 camera
  • FLIR Linux Driver
    • Pros - Can retrieve data stream from FLIR camera
    • Cons - Documentation is very limited
  • Robot Operating System (ROS)
    • Pros - Robotics framework ideal for programming camera functions
    • Cons - Needs to be installed on a fast SD card for efficient data collection
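To illustrate the PWM control mentioned above, here is a minimal sketch of how a normalized control command could be mapped to a standard RC pulse width, with 1500 µs as neutral. The helper name and the [-1, 1] command convention are my own illustration:

```python
# Standard RC servo/ESC pulse widths in microseconds.
PWM_MIN_US, PWM_MID_US, PWM_MAX_US = 1000, 1500, 2000

def command_to_pulse(command: float) -> int:
    """Map a command in [-1, 1] to a pulse width; out-of-range is clamped."""
    command = max(-1.0, min(1.0, command))
    return int(PWM_MID_US + command * (PWM_MAX_US - PWM_MID_US))
```

A zero command holds the neutral 1500 µs pulse, while full positive or negative commands saturate at 2000 µs and 1000 µs, so a runaway control value can never push the motors past their limits.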

Step 6: System Development

The “eyes” of the device are the Realsense D435 stereo infrared sensor, an off-the-shelf sensor mainly used for robotic applications such as 3D mapping (Figure 1). When this sensor is installed on the quadcopter, the infrared camera can guide the quadcopter and allow it to move autonomously. The data generated by the camera is called a point cloud, which consists of a series of points in space that carry information about the position of objects in the camera's view. This point cloud can be converted to a depth map that shows different depths as different colors (Figure 2): red is further away, while blue is closer.
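The near-blue to far-red coloring can be sketched as a simple linear mapping from depth to an RGB value. This is my own illustration of the idea; the scale limits are assumptions, not values from the actual software:

```python
# Assumed depth scale: anything at or below NEAR_M renders pure blue,
# anything at or beyond FAR_M renders pure red.
NEAR_M, FAR_M = 0.25, 20.0

def depth_to_rgb(depth_m: float) -> tuple:
    """Map a depth in meters to an (r, g, b) color, blue-near to red-far."""
    t = (depth_m - NEAR_M) / (FAR_M - NEAR_M)
    t = max(0.0, min(1.0, t))  # clamp to the displayable range
    return (int(255 * t), 0, int(255 * (1 - t)))
```

Applying this function to every point in the cloud produces the false-color depth image shown in Figure 2.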

To make this system seamless, an open-source robotics framework called ROS, which is typically used on robots, was used. It allows the system to perform low-level device control, access all of the sensors, and compile their data for use by other programs. ROS communicates with the Realsense SDK, which can turn the different cameras on and off to track how far away objects are from the system. The link between the two gives me access to the camera's data stream, which creates a point cloud. The point cloud information can determine where boundaries and objects are within 30 meters, with an accuracy of 2 cm. The other sensors, such as the SONAR sensors and the sensors embedded in the DJI Naza controller, allow for more accurate positioning of the quadcopter. My software uses AI algorithms to access the point cloud and, through localization, create a map of the entire space surrounding the device. Once the system is launched and begins scanning, it travels through hallways and finds entrances to other rooms, where it can then do a sweep of the room specifically looking for people. The system repeats this process until all of the rooms have been scanned. Currently, the quadcopter can fly for around 10 minutes, which is enough to do a full sweep but could be improved with different battery arrangements. First responders get notifications when people are spotted so that they can focus their efforts on select buildings.
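The entrance-finding step can be sketched in miniature: to the depth camera, a doorway looks like a run of readings that jump well past the surrounding wall. A minimal sketch, assuming one horizontal row of depth samples; the thresholds and names are my own illustration, not the actual algorithm:

```python
def find_entrances(depth_row, wall_m=3.0, jump_m=2.0, min_width=3):
    """Return (start, end) index pairs of candidate openings in a depth row.

    depth_row: distances in meters across one horizontal scan line
    wall_m:    assumed distance to the wall being scanned
    jump_m:    how much deeper a reading must be to count as an opening
    min_width: minimum opening width in samples, to ignore noise
    """
    openings, start = [], None
    for i, d in enumerate(depth_row):
        if d > wall_m + jump_m:          # reading is much deeper than the wall
            if start is None:
                start = i                # an opening begins here
        else:
            if start is not None and i - start >= min_width:
                openings.append((start, i))
            start = None
    if start is not None and len(depth_row) - start >= min_width:
        openings.append((start, len(depth_row)))  # opening runs to the edge
    return openings
```

A wall at 3 m with a doorway into a deeper room would show up as one wide run of large readings, which this sweep reports as a single candidate entrance for the quadcopter to fly through.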

Step 7: Discussion and Conclusion

After many trials, I created a working prototype that fulfilled the requirements listed in Step 1. By using the Realsense D435 stereo infrared camera with the Realsense SDK, a high-resolution depth map of the area in front of the quadcopter was created. At first I had some issues with the infrared camera not being able to detect certain objects like glass; by adding a SONAR sensor, I was able to overcome this problem. The combination of the Rock64 and DJI Naza was successful, as the system was able to stabilize the quadcopter while detecting objects and walls through custom computer vision algorithms built with OpenCV. Although the current system is functional and fulfills the requirements, it could benefit from some future prototypes.

This system could be improved by using higher quality cameras to more accurately detect people. Some of the more expensive FLIR cameras can detect heat signatures, which would allow for more accurate detection. The system could also be made to function in different environments, such as rooms that are dusty or filled with smoke. With new technology and fireproofing, this system could be sent into burning houses to quickly detect where people are so that first responders can retrieve survivors from danger.

Thanks for reading! Don't forget to vote for me in the Optics contest!



    • PCB Contest
    • Make it Glow Contest 2018
    • First Time Author