Introduction: LoRa + Neural Network Security System

This blog shows how to build a security system based on detecting, or 'inferring', the location of a person using a neural network USB plug-in stick on a Raspberry Pi. An Arduino-type receiver is built from scratch to get data from the Raspberry Pi transmitter via the long range LoRa radio frequency protocol and show/sound an alarm. I've tried to show some of the thinking behind the project by laying it out 'as it happened', and there is a full set of Python, Arduino, PCB etc. files for implementation. The most up to date files will be here on GitHub:

Level of difficulty: Good, but not expert, Raspberry Pi skills required. Soldering skills = easy.

Most info on the interwebs about LoRa seems to point to the 'Things Network', but what if we just want basic data transfer between 2 devices and don't need the whole world to witness our data? Peer to peer is quite possibly the answer.

In this case, a Raspberry Pi with a Dragino LoRa/GPS hat will send data about the security state of a remote location (e.g. a farm gate) to tell us if people are coming in or even if someone has stolen one of your cows. The receiver is an Arduino MKRWAN 1300, which has a dedicated LoRa chip soldered onto it. BE WARNED: This Arduino is a 3.3V device and will be destroyed by applying 5V to any (or most) of the pins. Also, never operate either the Dragino hat or the Arduino device without an antenna attached! As far as code is concerned, both instances are really simple, although it took me a while to work out that I had to flash the Arduino with a firmware upgrade to get it to work properly. After plugging in the Dragino hat, the Raspberry Pi was processed as follows:

$ wget [repository zip URL]
$ unzip master
$ cd rpi-lora-tranceiver-master/dragino_lora_app
$ make
$ ./dragino_lora_app sender

This data is NOT secure, but there are python scripts that can be used if needed.
NB. The RPi MUST have a proper power supply and SPI needs to be activated in the settings. The OS used is Raspbian Stretch. Setting up the Arduino is just as easy. There are just a couple of things to watch out for: firstly, the RPi in my case attempted to transmit 'HELLO' every 3 seconds on 868.1 MHz, so the Arduino needs to be configured accordingly ...... 868.1 MHz = 8681 x 10^5 Hz, written as 8681E5 in the code. Other regions, e.g. the USA, will use different sets of frequencies. Download the Arduino LoRa libraries here (both are required):
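The MHz-to-Hz conversion behind the 8681E5 figure is easy to get wrong, so here's a quick sanity check (Python, just for illustration):

```python
# The Arduino LoRa library takes the frequency in Hz:
# 868.1 MHz = 868,100,000 Hz, written as 8681E5 in the sketch.
freq_hz = 8681 * 10**5
print(freq_hz)  # 868100000
```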

After installing the libraries in the normal way, open the MKRWAN example set, upload 'MKRWANFWUpdate_standalone' to the Arduino and open the serial console. You should see the update as it progresses. Next, find the 'LoRa' example set, select 'LoRaReceiver' and upload. Don't forget to edit the frequency as mentioned before! Open the serial console and you should see the HELLO sent from the RPi.

Step 1: Components

Step 2: Port the Code Over to Python

Since the neural network module uses Python 3, I thought it would be a good idea to get the LoRa transmitter hat on the Raspberry Pi, the Dragino, to also be controlled using this version of Python. Fortunately, people have already done this and it's well documented here:

However, there are a couple of extra steps that are skipped, so I'll write out the whole procedure here:

1. Remove the SD card from the RPi and insert it into a suitable PC.

2. Copy and paste the config.txt file from the /boot folder to your desktop folder.

3. Change the permissions using chmod 777 on the command line, or whatever is convenient, and edit the file by adding:


to the very top.

4. Save, and paste back onto the SD card into boot again. This is the only way to quickly and easily edit this file!

5. Download the Python files from here: , extract, and open up the '' in a text editor.

6. Use the following values in board_config:

DIO0 = 4     
DIO1 = 23     
DIO2 = 24     
DIO3 = 21     
LED = 18     

...... Oh, I nearly forgot: if you're in Europe, we use 868 (ish) MHz, so where board_config says:

low_band = True

this needs to be changed to 'False'. It's pretty self explanatory if you read the comments next to it.
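For reference, the edited section of board_config ends up looking something like this - a sketch only, with the rest of the pySX127x BOARD class omitted; the attribute names come from the values in step 6 above:

```python
class BOARD:
    # Dragino hat pin mapping (BCM numbering), from step 6 above
    DIO0 = 4
    DIO1 = 23
    DIO2 = 24
    DIO3 = 21
    LED = 18
    # 868 MHz (Europe) is in the high band, hence False here
    low_band = False
```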

7. Find the '' file and edit it by adding the following, being careful to use exactly 4 spaces of indentation to be compatible with Python formatting:

class SPI_BAUD_RATE:
    MAX_SPEED_HZ = 5000
class SPI_MODE:
    SPI_MODE = 0b01

8. Find the file and find the line:

spi = BOARD.SpiDev()

insert these lines right underneath it:

spi.max_speed_hz = SPI_BAUD_RATE.MAX_SPEED_HZ 
spi.mode = SPI_MODE.SPI_MODE 

9. Open a terminal and cd to the directory that contains the '' file, e.g. cd /home/pi/Desktop/dragonino/pySX127x-master/

10. Run the beacon program using: python -f 869 -s 7, where '869' is the frequency in MHz and '7' is the spreading factor.

11. Tune the Arduino to 8690E5 (869.0 MHz) and you should see something like this in the serial console:

Received packet ' ' with RSSI -33

Received packet ' ' with RSSI -23

Received packet ' ' with RSSI -33

Received packet ' ' with RSSI -26

Received packet ' ' with RSSI -25

It's working!

12. If you want to see something a bit more meaningful, open '' with a text editor and, near line 65, find:


Change this to:


Step 3: Making Data Available for Transmission in Python

I've not programmed in Python before, but compared to C++ it already seems much easier and a lot more intuitive.

I worked out that the LoRa beacon was probably the right place to start tinkering, and that it needed a 'list' of numbers that represent ASCII characters, which can be decimal or hex values. The actual phrase in the beacon program that transmits the data is on line 65:


where j is the payload list, which would normally look something like this for 'Hello World':

([72, 101, 108, 108, 111, 32, 87, 111, 114, 108, 100])

To get 'Hello World' into this list format the following code is used:

import array as arr
import numpy as np

c = 'Hello World'
g = arr.array('i', [])       # integer array, type code 'i'
for x in c:
    g.append(ord(x))         # convert each character to its ASCII integer
j = np.array(g).tolist()     # turn the array into a plain list

The code converts each of the characters in the string to an integer and stores it in an integer array, denoted by the type code 'i'. For each character in the string, the 'ord' command does the actual character to integer transformation and 'append' adds it to the end of the array, so the array grows to the length of the string, spaces included. Last is the tolist() command that turns the whole array into the list format. Simples!
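Incidentally, the same character-to-integer conversion can be written as a one-line list comprehension, which skips the intermediate array entirely:

```python
c = 'Hello World'
j = [ord(ch) for ch in c]  # one integer per character
print(j)  # [72, 101, 108, 108, 111, 32, 87, 111, 114, 108, 100]
```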

The new LoRa beacon file is called Tegwyns_LoRa_Beacon in the files section of this blog and, assuming it's located in the same place as the original beacon file, it would be run from the command line with:

cd /home/pi/Desktop/dragonino_python_fix/pySX127x-master/ && python3 Tegwyns_LoRa_Beacon.py -f 869 -s 7

At this stage it's a good idea, though not essential, to use an RTL-SDR USB dongle to detect and analyse the transmitted signal using software like CubicSDR and its spectrum analyser function.

Step 4: Getting 'Person Detector' Working Using the Neural Compute Stick

A couple of key things to note before getting started:

The correct Neural Compute Stick for the Raspberry Pi is the NCSM2450.DK1 and currently (2018) no other Intel sticks will work on the Pi. Be careful which version of the stick SDK or API is downloaded - V2 and above is NOT for Raspbian Stretch, only Ubuntu 16.04.


1. I installed the full version of the latetst version 1 of the SDK and the APi and it did not take too long to install:

$ sudo apt-get update
$ sudo apt-get install
$ git clone
$ cd /home/pi/ncsdk && sudo make install
$ cd /home/pi/ncsdk && sudo make examples
2. Test that the stick is working ok:
$ git clone
$ cd /home/pi/ncappzoo/apps/hello_ncs_py
$ python3 

3. Download this file: and paste it into the /home/pi/ncappzoo/caffe/SSD_MobileNet folder. Do not change its name or extension.

4. Make and run the demo using the following commands:
$ cd /home/pi/ncappzoo/apps/security-cam
$ make run

..... Obviously a camera is required. Mine is a USB Logitech and it worked straight away.

Step 5: Combining Two Python Packages to Get LoRa and Security Cam Working Together

It's been a bit of a battle, with a steep Python learning curve, but finally I created a single Python file that enables a time stamp, detection class, confidence and bounding box coordinates to be sent to the Arduino base station. Obviously, there's still a number of dependencies in various directories - some close by and others deeply embedded in the Python system somewhere, but here's my 'top of the stack' code:


# ****************************************************************************
# Copyright(c) 2017 Intel Corporation. 
# License: MIT See LICENSE file in root directory.
# ****************************************************************************

# Detect objects on a LIVE camera feed using and send data over LoRa network
# Intel® Movidius™ Neural Compute Stick (NCS)

import os
import cv2
import sys
import numpy
import ntpath
import argparse

import mvnc.mvncapi as mvnc

from time import localtime, strftime
from utils import visualize_output
from utils import deserialize_output
from pySX127x_master import basic_test01

import datetime
import array as arr
import numpy as np
from time import sleep

from pySX127x_master.SX127x.LoRa import *
from pySX127x_master.SX127x.LoRaArgumentParser import LoRaArgumentParser
from pySX127x_master.SX127x.board_config import BOARD

#from pySX127x_master import Tegwyns_LoRa_Beacon


parser = LoRaArgumentParser("A simple LoRa beacon")
parser.add_argument('--single', '-S', dest='single', default=False, action="store_true", help="Single transmission")
parser.add_argument('--wait', '-w', dest='wait', default=1, action="store", type=float, help="Waiting time between transmissions (default is 0s)")

myString = ""
#from pySX127x_master.Tegwyns_LoRa_Beacon import *

class LoRaBeacon(LoRa):

    def __init__(self, verbose=False):
        super(LoRaBeacon, self).__init__(verbose)
    def on_rx_done(self):
        print(map(hex, self.read_payload(nocheck=True)))

    def on_cad_done(self):
        pass

    def on_rx_timeout(self):
        pass

    def on_valid_header(self):
        pass

    def on_payload_crc_error(self):
        pass

    def on_fhss_change_channel(self):
        pass

    def start(self):
        global args
        stamp = str(datetime.datetime.now())
        text = bytearray('PING LoRa Test PI: ' + stamp + ('  ') + myString, 'utf-8')
        self.write_payload([0x00, 0x00, 0x00, 0x00] + list(text))
lora = LoRaBeacon(verbose=False)
args = parser.parse_args(lora)

#lora.set_pa_config(max_power=0x04, output_power=0x0F)
#lora.set_pa_config(max_power=0x04, output_power=0b01000000)

#assert(lora.get_lna()['lna_gain'] == GAIN.NOT_USED)
assert(lora.get_agc_auto_on() == 1)

print("Security cam config:")
print("  Wait %f s" % args.wait)
print("  Single tx = %s" % args.single)

# "Class of interest" - Display detections only if they match this class ID
CLASS_PERSON         = 15

# Detection threshold: Minimum confidence to tag as valid detection
CONFIDANCE_THRESHOLD = 0.60 # 60% confident

# Variable to store commandline arguments
ARGS                 = None

# OpenCV object for video capture
camera               = None

# ---- Step 1: Open the enumerated device and get a handle to it -------------

def open_ncs_device():

    # Look for enumerated NCS device(s); quit program if none found.
    devices = mvnc.EnumerateDevices()
    if len( devices ) == 0:
        print( "No devices found" )
        quit()

    # Get a handle to the first enumerated device and open it
    device = mvnc.Device( devices[0] )
    device.OpenDevice()

    return device

# ---- Step 2: Load a graph file onto the NCS device -------------------------

def load_graph( device ):

    # Read the graph file into a buffer
    with open( ARGS.graph, mode='rb' ) as f:
        blob = f.read()

    # Load the graph buffer into the NCS
    graph = device.AllocateGraph( blob )

    return graph

# ---- Step 3: Pre-process the images ----------------------------------------

def pre_process_image( frame ):

    # Resize image [Image size is defined by choosen network, during training]
    img = cv2.resize( frame, tuple( ARGS.dim ) )

    # Convert RGB to BGR [OpenCV reads image in BGR, some networks may need RGB]
    if( ARGS.colormode == "rgb" ):
        img = img[:, :, ::-1]

    # Mean subtraction & scaling [A common technique used to center the data]
    img = img.astype( numpy.float16 )
    img = ( img - numpy.float16( ARGS.mean ) ) * ARGS.scale

    return img

# ---- Step 4: Read & print inference results from the NCS -------------------

def infer_image( graph, img, frame ):
    #from pySX127x_master.Tegwyns_LoRa_Beacon import LoRaBeacon
    #from pySX127x_master import Tegwyns_LoRa_Beacon
    # Load the image as a half-precision floating point array
    graph.LoadTensor( img, 'user object' )

    # Get the results from NCS
    output, userobj = graph.GetResult()

    # Get execution time
    inference_time = graph.GetGraphOption( mvnc.GraphOption.TIME_TAKEN )

    # Deserialize the output into a python dictionary
    output_dict = deserialize_output.ssd( 
                      output, CONFIDANCE_THRESHOLD, frame.shape )

    # Print the results (each image/frame may have multiple objects)
    for i in range( 0, output_dict['num_detections'] ):

        # Filter a specific class/category
        if( output_dict.get( 'detection_classes_' + str(i) ) == CLASS_PERSON ):

            cur_time = strftime( "%Y_%m_%d_%H_%M_%S", localtime() )
            print( "Person detected on " + cur_time )
            print(".... Press q to quit ..... ")

            # Extract top-left & bottom-right coordinates of detected objects 
            (y1, x1) = output_dict.get('detection_boxes_' + str(i))[0]
            (y2, x2) = output_dict.get('detection_boxes_' + str(i))[1]
            #print (y1, x1)
            # Prep string to overlay on the image
            display_str = ( 
                labels[output_dict.get('detection_classes_' + str(i))]
                + ": "
                + str( output_dict.get('detection_scores_' + str(i) ) )
                + "%" )
            print (display_str)
            print (y1, x1)
            print (y2, x2)
            # Overlay bounding boxes, detection class and scores
            frame = visualize_output.draw_bounding_box( 
                        y1, x1, y2, x2, 
                        color=(255, 255, 0),
                        display_str=display_str )
            global myString
            myString = display_str + " , " + "(" + str(y1) + "," + str(x1) +")" + "," + "(" + str(y2) + "," + str(x2) +")"

            # Capture snapshots
            photo = ( os.path.dirname(os.path.realpath(__file__))
                      + "/captures/photo_"
                      + cur_time + ".jpg" )
            cv2.imwrite( photo, frame )

    # If a display is available, show the image on which inference was performed
    if 'DISPLAY' in os.environ:
        cv2.imshow( 'NCS live inference', frame )
# ---- Step 5: Unload the graph and close the device -------------------------

def close_ncs_device( device, graph ):
    camera.release()
    cv2.destroyAllWindows()
    graph.DeallocateGraph()
    device.CloseDevice()

# ---- Main function (entry point for this script ) --------------------------

def main():

    device = open_ncs_device()
    graph = load_graph( device )

    # Main loop: Capture live stream & send frames to NCS
    while( True ):
        ret, frame = camera.read()
        img = pre_process_image( frame )
        infer_image( graph, img, frame )

        # Display the frame for 5ms, and close the window so that the next
        # frame can be displayed. Close the window if 'q' or 'Q' is pressed.
        if( cv2.waitKey( 5 ) & 0xFF == ord( 'q' ) ):
            break

    close_ncs_device( device, graph )

# ---- Define 'main' function as the entry point for this script -------------

if __name__ == '__main__':

    parser = argparse.ArgumentParser(
                         description="DIY smart security camera PoC using \
                         Intel® Movidius™ Neural Compute Stick." )

    parser.add_argument( '-g', '--graph', type=str,
                         help="Absolute path to the neural network graph file." )

    parser.add_argument( '-v', '--video', type=int,
                         help="Index of your computer's V4L2 video device. \
                               ex. 0 for /dev/video0" )

    parser.add_argument( '-l', '--labels', type=str,
                         help="Absolute path to labels file." )

    parser.add_argument( '-M', '--mean', type=float,
                         default=[127.5, 127.5, 127.5],
                         help="',' delimited floating point values for image mean." )

    parser.add_argument( '-S', '--scale', type=float,
                         help="Scale factor applied during image pre-processing." )

    parser.add_argument( '-D', '--dim', type=int,
                         default=[300, 300],
                         help="Image dimensions. ex. -D 224 224" )

    parser.add_argument( '-c', '--colormode', type=str,
                         help="RGB vs BGR color sequence. This is network dependent." )

    ARGS = parser.parse_args()

    # Create a VideoCapture object
    camera = cv2.VideoCapture( )

    # Set camera resolution
    camera.set( cv2.CAP_PROP_FRAME_WIDTH, 620 )
    camera.set( cv2.CAP_PROP_FRAME_HEIGHT, 480 )

    # Load the labels file
    labels =[ line.rstrip('\n') for line in
              open( ARGS.labels ) if line != 'classes\n']

    main()

# ==== End of file ===========================================================

Step 6: Upgrading Camera to Raspberry Pi NoIR V2

Obviously, we're going to want to use this gadget in the dark with
infra-red lights, so we need a decent camera with IR capabilities, i.e. no IR filter. Fortunately, these cameras are a lot cheaper and more compact than the Logitech USB camera and they're also very easy to install:

Firstly, check that camera is enabled in the RPi settings and then, after plugging it into the board, check it works with:

$ raspistill -o image.jpg
Next, install the following Python dependencies:
$ sudo apt-get install python3-picamera
$ pip3 install "picamera[array]"
$ pip3 install imutils

Lastly, use the Pi cam version of the security_cam file, downloadable from here:

Run the file using the following command:
$ cd && cd /home/pi/ncappzoo/apps/securityCam && python3

The security camera is now ready to test out in the wild, although obviously not in the rain! It'll be interesting to see what the transmitter range will be with a decent antenna :)

One thing to note is that the NoIR camera gives very different colour balance results than the USB camera; this is perfectly normal due to the lack of an IR filter on the lens.

Other than waterproofing, another issue is where to collect the captured photos produced when the device spots a person - maybe a USB stick, to prevent filling up the Raspberry Pi SD card?

Step 7: Save Camera Snapshots to a USB Drive

The security cam Python file can be easily adapted to save the repeated
snapshots of detected people by modifying line 235 to something like the following, where my USB drive is called 'KINGSTON':

photo = ( "/media/pi/KINGSTON" + "/captures/photo_" + cur_time + ".jpg" )

In reality, using this USB stick slowed down the program rather drastically! Images can be transferred / deleted easily from the Pi if another PC is used to SSH into the Pi once it's deployed. A better solution might be to create a separate partition on the micro SD card so that if it becomes full of images, it won't clog up the OS and camera Python scripts.
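Another way to guard against a full drive is to check free space before each snapshot and skip the save when the drive is nearly full - a sketch only, not part of the original script, and the 50 MB threshold is an arbitrary assumption:

```python
import shutil

def safe_to_save(path, min_free_mb=50):
    """Return True if the filesystem holding 'path' has enough free space."""
    free_mb = shutil.disk_usage(path).free / (1024 * 1024)
    return free_mb > min_free_mb

# e.g. wrap the cv2.imwrite() call:
# if safe_to_save("/media/pi/KINGSTON"):
#     cv2.imwrite(photo, frame)
print(safe_to_save("/"))
```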

Step 8: Receiver PCB Population

Other than the Arduino MKRWAN 1300, the PCB features an L293E
chip for stepping up the voltage and current required for the alarm system, which is itself a block of 8 LEDs and 3 buzzer beeper chips. Attempting to run these devices directly off the Arduino would instantly frazzle the device! After assembly and testing, the whole thing worked perfectly and, after a bit of experimentation, the best resistor for the red LEDs was 39 Ohms.

Although most of the components are surface mount, they are all very
large and no stencil is required. After checking the polarity of the LEDs, the PCB was pasted up with solder, populated with the surface mount components and chucked in the toaster oven for cooking to 260 degrees C. Using a hot air gun is possible, but not recommended.

Step 9: Preparing for Deploying the Camera

As can be seen, this unit is very easy to assemble and just needed to be
located in a waterproof case with a transparent front. The above components were wedged tightly into the box using thick cardboard and a bit of judicious origami.

NB. The camera MUST be the right way up for the neural network to work properly.

Step 10: Arduino MKRWAN 1300 Code

Nothing too fancy about the code, except I wanted the tone of the beeper
to change according to how close the detected person was to the camera. This would be useful for discerning the difference between someone posting a letter in the postbox and someone actually walking up the drive. The code uses string analysis functions to, firstly, confirm that the data is coherent by searching for the word 'Box' and then finding the two pairs of coordinates that represent the detection box. If the detected person is close to the camera, the area of the detection box is greater and the resulting alarm tone is of higher frequency:

#include <LoRa.h>

String myString = " ";
String myStringReversed = " ";

void setup() {
  pinMode(4, OUTPUT);
  digitalWrite(LED_BUILTIN, HIGH);
  digitalWrite(4, HIGH);
  digitalWrite(LED_BUILTIN, LOW);
  digitalWrite(4, LOW);
  Serial.begin(9600);
//  while (!Serial);

  Serial.println("LoRa Receiver");

  if (!LoRa.begin(8690E5)) {
    Serial.println("Starting LoRa failed!");
    while (1);
  }
}

void loop() {
  //delay (1000);
  // try to parse packet
  int packetSize = LoRa.parsePacket();
  if (packetSize) {
    // received a packet
    Serial.print("Received packet '");
    digitalWrite(LED_BUILTIN, HIGH);
    digitalWrite(4, HIGH);
    digitalWrite(LED_BUILTIN, LOW);
    digitalWrite(4, LOW);
    // read packet
    myString = " ";
    myStringReversed = " ";
    char c;
    while (LoRa.available()) {
      myString = (char)LoRa.read() + myString;
      // Reverse the string:
      c = myString.charAt(0);
      myStringReversed = myStringReversed + c;
    }
    //Serial.print("My string: "); Serial.print(myString);
    processString();
  }
}

void processString() {
    Serial.print("My string reversed: "); Serial.print(myStringReversed);
    // print RSSI of packet
    Serial.print("' with RSSI "); Serial.println(LoRa.packetRssi());
    int len = myStringReversed.length();
    int j = 0;
    char a, b, c;
    String coord1 = "";
    String coord2 = "";
    String coord3 = "";
    String coord4 = "";
    int k = 0;
    char x = ',';
    int v = 0;
    while (j < len) {
      a = myStringReversed.charAt(j);
      b = myStringReversed.charAt(j + 1);
      c = myStringReversed.charAt(j + 2);
      if ((a == 'B') && (b == 'o') && (c == 'x')) {               // The word 'Box' has been identified in the string - k is now greater than 0.
        k = j + 5;
        Serial.print("Character B was found at: "); Serial.println(j);
      }
      j++;
    }
    if (k > 0) {
      v = 0;                                                      // int v stops perpetual loops occurring.
      while ((myStringReversed.charAt(k) != x) && (v < 10)) {     // Build up string 'coord1' until a comma is reached.
        coord1 = coord1 + myStringReversed.charAt(k);
        k++; v++;
      }
      k++; v = 0;
      while ((myStringReversed.charAt(k) != ')') && (v < 10)) {   // Build up 'coord2' until a closing bracket is reached.
        coord2 = coord2 + myStringReversed.charAt(k);
        k++; v++;
      }
      k = k + 3; v = 0;                                           // Takes account of two brackets and a comma.
      while ((myStringReversed.charAt(k) != x) && (v < 10)) {     // Build up 'coord3' until a comma is reached.
        coord3 = coord3 + myStringReversed.charAt(k);
        k++; v++;
      }
      k++; v = 0;
      while ((myStringReversed.charAt(k) != ')') && (v < 10)) {   // Build up 'coord4' until a closing bracket is reached.
        coord4 = coord4 + myStringReversed.charAt(k);
        k++; v++;
      }
      Serial.print("coord1: "); Serial.println(coord1);
      Serial.print("coord2: "); Serial.println(coord2);
      Serial.print("coord3: "); Serial.println(coord3);
      Serial.print("coord4: "); Serial.println(coord4);
      int coord10 = coord1.toInt();
      int coord20 = coord2.toInt();
      int coord30 = coord3.toInt();
      int coord40 = coord4.toInt();
      int area = (coord40 - coord20) * (coord30 - coord10);
      Serial.print("Box area: "); Serial.println(area);
    }
}
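The same string analysis can be sketched in Python, which also shows one plausible area-to-tone mapping - the regex and the tone scaling here are illustrative assumptions, not a transcription of the Arduino code:

```python
import re

def box_area(msg):
    # Pull the two "(y,x)" corner pairs out of the transmitted string
    pairs = re.findall(r'\(([\d.]+),([\d.]+)\)', msg)
    (y1, x1), (y2, x2) = [(float(a), float(b)) for a, b in pairs]
    return abs((y2 - y1) * (x2 - x1))

def alarm_tone_hz(area, base=440.0):
    # Bigger box = person closer to camera = higher tone; scaling is arbitrary
    return base + area / 100.0

msg = "person: 74% , (120,40),(300,200)"
print(box_area(msg))           # 28800.0
print(alarm_tone_hz(28800.0))  # 728.0
```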

Step 11: Detecting Other Objects and Animals

The '' file is a generic 'live-object-detection'
script and can be modified very easily to detect a total of 20 different objects.

If we look at line 119:

# "Class of interest" - Display detections only if they match this class ID
CLASS_PERSON         = 15
To detect dogs, just change this to:
# "Class of interest" - Display detections only if they match this class ID
CLASS_DOG         = 12
Also, line 200 needs to be changed to match:
        # Filter a specific class/category
        if( output_dict.get( 'detection_classes_' + str(i) ) == CLASS_DOG ):
Although the person class worked really well and was quite impressive, the dog class was a little bit underwhelming and not nearly as good as some of the other models I've tested. Nonetheless, here's the full list of classes available with this model: Aeroplane, Bicycle, Bird, Boat, Bottle, Bus, Car, Cat, Chair, Cow, Diningtable, Dog, Horse, Motorbike, Person, Pottedplant, Sheep, Sofa, Train, TVmonitor.
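For reference, here's the full ID-to-class mapping implied by the numbers above (index 0 is the background class, so 'person' lands at 15 and 'dog' at 12, matching the edits):

```python
CLASSES = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle',
           'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',
           'horse', 'motorbike', 'person', 'pottedplant', 'sheep',
           'sofa', 'train', 'tvmonitor']
print(CLASSES.index('person'))  # 15
print(CLASSES.index('dog'))     # 12
```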

Step 12: Testing the System

After putting the system into headless mode and logging in from a laptop via SSH, I was able to test the system 'in the field'. Initially, there was a bug that caused the camera to turn off after 15 minutes; this was cured by installing 'screen' on the Raspberry Pi and typing 'screen' into the command line before initiating the Python files. What screen does is open another terminal on the Pi, so the Pi has its own live terminal which does not get shut down when my laptop terminal shuts down. It's a really good solution and avoids having to mess about with other 'run on boot' solutions, which can potentially wreck the whole system.

A separate video camera was set up in my office, 200 m away, synchronised with the main camera and focused on the receiver gadget (bottom right of video). On testing, the system did not respond to the dog in the camera frame, but did respond to me (a person) ….. Success!

I'm going to upgrade the whole system to the Movidius neural stick 2 at some stage and use a much bigger micro SD card with partitions to prevent captured images clogging everything up.

Participated in the PCB Contest.