Introduction: Seroma: Server Room Manager

Seroma is an all-in-one server room manager that lets users check the status of the servers (temperature and humidity), view the server room's access logs, and monitor the room itself for any security breaches.

Step 1: Log In to Your AWS Account

  1. We logged in through the AWS Educate student gateway, as we have a student AWS account.
  2. Head over to the “AWS Account” tab on the navigation menu at the top-right.
  3. Click on “Go to your AWS Educate Starter Account”
  4. Click "Open Console" to access your AWS Management Console.

Step 2: Getting Started With AWS IoT "Things"

  1. Search for “AWS IoT” in the AWS services search bar.
  2. Click on “Get started” to proceed to the AWS IoT Console dashboard where you can view all the IoT devices registered in your AWS account.

Step 3: Registering an AWS IoT "Thing"

  1. In the navigation bar, navigate to manage your IoT “Things”.
  2. Click on “Register a thing” if you do not have a thing yet. (If you already have a thing then click on the “Create” button on the top right of the screen next to the search tab.)
  3. Click on the first button called “Create a single thing”.
  4. Type “RaspberryPi” as the name of the thing. For this step, no input other than the “Name” is required. After doing so, click next.

Step 4: Activating a Certificate

  1. On the next page, click on the “Create certificate” button.
  2. Download all four files linked on the next page into a working directory or folder. To save the root CA file, right-click its link and choose "Save as".
  3. Click on “Activate” and a success message should appear.
  4. Give the files friendly names by removing the numbers at the front of each file name, and rename the root CA file to "rootca.pem".
  5. Click on "Attach a policy" to proceed.

Step 5: Adding a Policy to Your Certificate

  1. On the next page, if you do not have a policy, you will be prompted to create one via the “Create a Policy” button.
  2. If you already have an existing policy, click on the “Create new policy” button below instead.
  3. Insert the following information into the policy creation form.

    Name: RaspberryPiSecurityPolicy

    Action: iot:*

    Resource ARN: *

    Effect: Allow

  4. Your policy should then appear in the “Policy” tab under “Security”.
  5. Next, go to the “Certificates” tab that is also under “Security”, and attach your policy to the certificate you created previously.

  6. On the next page, click on your policy and then click “Attach”.

  7. In the Details page of the thing you created, under the “Interact” tab, there is a REST API endpoint; copy and save it.

  8. AWS should now have a Thing that is attached to a policy and has a certificate.
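The form values above correspond to a policy document like the following. (The broad iot:* action on all resources is fine for this tutorial, but should be narrowed for production use.)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:*",
      "Resource": "*"
    }
  ]
}
```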

Step 6: Initial Set-up for AWS SNS Topic

SSH into the Raspberry Pi and install the AWS CLI using the following pip command:

sudo pip install awscli

The AWS CLI includes a command-completion feature, but it is not enabled by default. Use the following command to enable command completion in the Raspberry Pi’s shell:

complete -C aws_completer aws

Configure the AWS CLI with your Access Key ID, Secret Access Key, AWS region name, and command output format using the following command:

aws configure

The console will then prompt you to fill in the following information:

pi@raspberrypi:~ $ aws configure

AWS Access Key ID [None]: "Put your User's Access Key ID here"

AWS Secret Access Key [None]: "Put your User's Secret Access Key here"

Default region name [None]: eu-central-1

Default output format [None]: json

pi@raspberrypi:~ $

Step 7: Creating the Iot-role-trust.json File

  1. Create a JSON file containing the IAM trust policy, with the filename iot-role-trust.json.
  2. Create the role with the AWS CLI using the following command:

aws iam create-role --role-name my-iot-role --assume-role-policy-document file://iot-role-trust.json
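The iot-role-trust.json file is the role's trust policy. The standard document that lets AWS IoT assume the role looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "iot.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```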

Step 8: Creating the Iot-policy.json File

  1. Create a JSON file containing the role policy, with the filename iot-policy.json.
  2. Create the role policy with the AWS CLI using the following command:

aws iam put-role-policy --role-name my-iot-role --policy-name iot-policy --policy-document file://iot-policy.json
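Since the rule created later will publish to SNS, iot-policy.json needs to grant sns:Publish. A minimal version looks like this (you can narrow Resource to your topic's ARN once it exists):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sns:Publish",
      "Resource": "*"
    }
  ]
}
```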

Step 9: Create an AWS SNS Topic (Part 1)

  1. In the AWS services search bar, search for the “SNS” service.
  2. If you have no topics yet, click “Create new topic” to create one.
  3. Type in your topic name and display name, then click “Create topic”; the new topic will appear once all the steps are successful.
  4. Click on the “Actions” drop-down button and select “Edit topic policy”.

Step 10: Create an AWS SNS Topic (Part 2)

  1. Set the policy to allow everyone to publish and to subscribe, as this is a limitation of an AWS Educate account.
  2. Subscribe to this topic to receive updates published to it.
  3. Change the protocol to “Email” and enter your email address as the endpoint.

  4. Head over to the inbox of the email address you entered as the endpoint, and click on the confirmation link to confirm your subscription to the topic.

  5. Navigate to the “AWS IoT” service and, in the navigation menu on the left, click on “Act”. This page is where your rules are displayed and available for you to view and edit. Currently there are no rules for your IoT thing, so click on “Create a rule”.

Step 11: Create an AWS SNS Topic (Part 3)

  1. Type a name for your rule in the Name field, and a description in the Description field. In the Message source section, choose the most recent SQL version under “Using SQL version”. Type * in the Attribute field to select the entire MQTT message from the topic; in our case, the topic is “TempHumid”.
  2. Then add an “SNS” notification action for your rule and click “Configure action”.
  3. On the “Configure action” page, choose the SNS topic you just created and set the message format to RAW. After that, choose the role you created earlier with the AWS CLI and click “Add action”.
  4. Your action is now configured; you will be returned to the “Create a rule” page.
  5. Click “Edit” if you wish to edit the rule.

Step 12: Create a Bucket on Amazon S3

  1. Search for S3 in the AWS search bar.

  2. In the Amazon S3 page, click on the “Create Bucket” button to get started.

  3. Fill up the pop-up form that appears with the following information:
    • Bucket Name: seroma-bucket (this must be unique across all existing Amazon S3 buckets)
    • Region: US West (Oregon)
    • Copy Settings: (Ignore)
  4. Skip steps 2 and 3 by clicking “Next”, as nothing needs to be changed there. On step 4, click “Create bucket”.
  5. After creation, you should see your bucket on the home page.

Step 13: Generate an AWS Policy (Part 1)

  1. Click on the bucket you created to enter the above page, then proceed to “Bucket Policy” under the “Permissions” tab.
  2. Next, click on the “Policy Generator” link on the bottom of the page to generate your AWS policy.
  3. In the form, input the following values:
    • Type of Policy: S3 Bucket Policy

    • Effect: Allow

    • Principal: *

    • AWS Service: Amazon S3

    • Actions: GetObject

    • Amazon Resource Name (ARN): arn:aws:s3:::seroma-bucket

  4. After filling in the information, click on Add Statement.
  5. Click on the “Generate Policy” button.

Step 14: Generate an AWS Policy (Part 2)

  1. Copy the generated policy code and click “Close”.

  2. Return to your Amazon S3 bucket policy editor and paste the code you just copied.

  3. Append “/*” to the end of the Resource value, as in the image above, then click “Save”.
  4. After doing so, your bucket will be successfully set up and ready for use.
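After appending "/*", the final bucket policy should read roughly as follows; note that Resource now ends in /*, so the policy applies to the objects inside the bucket rather than the bucket itself:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::seroma-bucket/*"
    }
  ]
}
```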

Step 15: Creating Tables for DynamoDB

  1. Search for DynamoDB in the AWS services search bar.
  2. Click on "Create table" and create 3 tables with the information below (only the "table name" and "primary key" differ between them):
    • accesslog, pk datetimevalue
    • roomstatus, pk datetimevalue
    • staffdata, pk username
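If you prefer the SDK to the console, the same three tables can be created with boto3. A sketch under stated assumptions: the helper only builds the create_table arguments (string hash keys), and the region and throughput values are assumptions, not values from this guide.

```python
def table_spec(name, pk):
    """Build create_table arguments for a table with a single string hash key."""
    return {
        "TableName": name,
        "KeySchema": [{"AttributeName": pk, "KeyType": "HASH"}],
        "AttributeDefinitions": [{"AttributeName": pk, "AttributeType": "S"}],
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }

# (table name, primary key) pairs from the step above
TABLES = [("accesslog", "datetimevalue"),
          ("roomstatus", "datetimevalue"),
          ("staffdata", "username")]

# Actual creation (requires boto3 and configured credentials):
# import boto3
# dynamodb = boto3.client("dynamodb", region_name="eu-central-1")
# for name, pk in TABLES:
#     dynamodb.create_table(**table_spec(name, pk))
```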

Step 16:

This section contains the code for the main monitoring script, which records all data about the server room itself every minute: the temperature, humidity, motion (with images and videos if motion is detected), and access logs. It writes data to a Google Spreadsheet and to DynamoDB, uploads images and videos (if any) to S3, displays information on the LCD screen, and sends an SMS and email when there is a suspected breach or when the temperature or humidity is irregular.

To run a Python file, change directory to where the file is located and type in the console: "sudo python <filename>"

Pic 2: Functions declared to allow SMS and Email alerts, and uploading to S3

Pic 3: Variables declared for functions and RPi to work

Pic 4: Start of the loop that gets the temperature and humidity values from the RPi. It also writes the data to a Google spreadsheet

Pic 5: The security part of the loop. It only activates from 7pm to 7am (off hours), and checks for motion over a one-minute span. If motion is detected, it captures an image and a video, uploads them to S3, and writes the information to DynamoDB for later reference. Afterwards, it sends an SMS and email if anything is irregular.

Pic 6: The end of the loop. It writes data to DynamoDB and sends alerts accordingly. The last line of the loop makes the script sleep until the next minute is reached.
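The once-a-minute cadence described above can be sketched like this. The sleep helper is the testable core; the commented loop body assumes the Adafruit_DHT library and a DHT22 sensor on GPIO pin 4, which are assumptions rather than details from this guide:

```python
import time

def seconds_until_next_minute(now=None):
    """Return how long to sleep so the loop wakes at the top of the next minute."""
    now = time.time() if now is None else now
    return 60 - (now % 60)

# Loop body sketch (requires the Adafruit_DHT library; pin 4 is an assumption):
#
# import Adafruit_DHT
# while True:
#     humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.DHT22, 4)
#     # ... write to Google Sheets / DynamoDB, check thresholds, send alerts ...
#     time.sleep(seconds_until_next_minute())
```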

Step 17:

This section contains the code for the RFID access script, which adds the ability to track when a member of staff accesses the server room. It is also part of the security aspect of Seroma: a member of staff is not allowed to access the server room after office hours, to prevent a data breach. The script also sends an email and an SMS to all staff if a breach is suspected.

Pic 2: Start of the RFID reader logic. Whenever a card is scanned against the reader, the unique id (uid) of the card is taken. Afterwards, we try to find the uid value of the card in the staffdata table to see if the card belongs to any of the staff.
Pic 3: If the uid of the card exists in the database, the script checks whether it is currently office off-hours. If it is, it alerts the rest of the employees through SMS and emails the subscribed email addresses. If it is still during office hours, it writes a row to the accesslog table in the database with the relevant data, and displays a welcome message on the LCD display.
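The decision logic described in Pics 2 and 3 reduces to two small helpers. This is a sketch of that logic only; the function names and the "alert"/"log" outcome labels are illustrative, and the actual staff lookup against the staffdata table happens before this point:

```python
def is_off_hours(hour):
    """True from 19:00 to 06:59, when server room access is not allowed."""
    return hour >= 19 or hour < 7

def handle_scan(hour, staff_found):
    """Decide what to do after a card scan (hypothetical outcome labels)."""
    if not staff_found:
        return "alert"   # unknown card: suspected breach
    if is_off_hours(hour):
        return "alert"   # staff card, but outside office hours
    return "log"         # normal access: write a row to accesslog
```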


Step 18:

This is the web portal file. We use the Flask framework for the web portal. The HTML files, which belong in /templates, are attached as well.

Pic 1: The first route defined for Flask. It redirects the user to the login page if they are not logged in, and to the dashboard page if they are. It also defines a function used by the livestream feature.

Pic 2, 3, 4: Routes for Flask. They get data from the DynamoDB tables and return it to the HTML files so that it can be used there.

Pic 5: The last two routes for Flask. They handle the logout function and the livestream function, and also specify the port the website runs on.
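The redirect logic of the first route can be sketched as follows. The session key name "logged_in" and the endpoint names are assumptions; in the real file the two branches are Flask redirect() calls:

```python
def landing_page(session):
    """Pick the page for the root route based on login state (assumed key name)."""
    return "dashboard" if session.get("logged_in") else "login"

# In Flask this becomes roughly:
#
# from flask import Flask, session, redirect, url_for
# app = Flask(__name__)
#
# @app.route("/")
# def index():
#     return redirect(url_for(landing_page(session)))
#
# if __name__ == "__main__":
#     app.run(host="0.0.0.0", port=5000)   # the port number is an assumption
```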


Step 19:

This section includes the code for Seroma’s Telegram bot. It uses the telepot library to tap into Telegram’s Bot API. The bot works by accepting queries and displaying the corresponding information to the user. The user can type ‘help’ for a full list of commands.

Pic 1, 2: To set up a Telegram bot, you need to use BotFather. Just follow the instructions to get the HTTP API token that we need in our code.

Pic 4: Example of a function that takes a certain number of rows of data from the database based on the user's request

Pic 5: How we take the user's input and decide what to run accordingly.
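The query handling in Pic 5 amounts to dispatching on the first word of the user's message. A sketch of that pattern; the command names other than 'help', and the reply strings, are hypothetical, and the real handlers query the DynamoDB tables:

```python
def dispatch(text, handlers, fallback="Unknown command, type 'help'"):
    """Route a Telegram message to the handler for its first word."""
    words = text.strip().split()
    command = words[0].lower() if words else ""
    handler = handlers.get(command)
    return handler(text) if handler else fallback

# Hypothetical handlers for illustration:
handlers = {
    "help": lambda _: "Commands: help, status, logs <n>",
    "status": lambda _: "Room OK: 24.5C, 40% humidity",
}
```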

Step 20: Livestream

We have implemented a new feature for our server room monitoring system: a live stream of what is going on in the server room, which can be accessed at any time, from anywhere.
How the live stream works: it is built with Flask, together with the Pi Camera. Video frames are downloaded as events happen in real life, so there is a slight delay (1-2 seconds) while the frames are downloaded and pieced together. This could not be done without threading: a background thread reads frames from the camera and stores the current frame, and piecing these frames together produces the live stream.

Pic 2: This is a separate file where the video frames are captured. We use the picamera module to access the Raspberry Pi camera, as it is what we are most familiar with. We wrap it in a Camera class so that the main application file can treat the output as a live stream, without having to worry that behind the scenes it is really multiple images pieced together.

Pic 3: This is the part of our main application file where the live stream is coded. The main class imported for this is Camera, from our camera file, at the top of the file. We define a generator function, gen, which only comes into use when the user heads to /video_feed, where the live stream is served: Flask loops through this function and returns the live stream on the webpage.
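The gen/video_feed combination described in Pic 3 is the standard multipart MJPEG streaming pattern. A sketch, assuming a Camera class whose get_frame() returns one JPEG frame as bytes (as in the camera file above); the Flask wiring is shown commented out:

```python
def gen(camera):
    """Yield camera frames as a multipart/x-mixed-replace MJPEG stream."""
    while True:
        frame = camera.get_frame()
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + frame + b"\r\n")

# Wiring it into Flask:
#
# from flask import Response
#
# @app.route("/video_feed")
# def video_feed():
#     return Response(gen(Camera()),
#                     mimetype="multipart/x-mixed-replace; boundary=frame")
```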