Neural Network Powered Planetarium Using Python, Electron, and Keras

Introduction

In this instructable, I'll show you how I wrote an automatic 3D planetarium generator using Python and Electron.

The video above shows one of the random planetariums the program generated.

**Note: This program is by no means perfect, and in some places not very Pythonic. The neural net discriminator is only ~89% accurate, so some odd images will make it into the planetarium.**

Specifics

The planetarium queries a NASA API for space-related images and utilises a convolutional neural network to determine whether each image is suitable for processing. The program then uses OpenCV to remove the background from the image, and finally the images are stitched together into one large equirectangular image. This image is saved, and an Electron Node.js application opens it and uses the PhotoSphere.js package to view the image in a planetarium-style 3D format.

Dependencies

Python:

  • Keras
  • Pillow
  • OpenCV (cv2)
  • NumPy
  • Requests
  • urllib
  • random
  • time
  • io

Electron:

  • PhotoSphere

Step 1: Setting Up Your Environment

Installing Electron and Python

First, ensure you have Node.js and npm installed (if not, you can download them from the Node.js website).

Next, you need to install Electron. Open a command prompt, and enter the following command:

npm install electron -g
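
You can verify the installation with:

electron --version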

Next, you need Python, which can be downloaded from the Python website.

Setting up a Virtual Environment

Open a command prompt, then enter the following commands to set up your virtual environment:

pip install virtualenv
virtualenv space
cd space
scripts\activate
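
These commands assume Windows. On macOS or Linux, the activation step differs (a sketch, assuming the same virtualenv layout):

pip install virtualenv
virtualenv space
cd space
source bin/activate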

Installing Python Dependencies

Run these commands in the command prompt to install your python dependencies:

pip install keras
pip install tensorflow
pip install pillow
pip install numpy
pip install requests
pip install opencv-python

(tensorflow is included because net.py imports it directly, as well as Keras using it as its backend.)

If you want to train the network yourself, make sure to set up GPU acceleration for Keras.

Step 2: Querying the NASA Search API

Overview

NASA has a lot of really useful APIs that you can use with your projects. For this project, we will be using the search API, which allows us to search NASA's image database for space-related images.

The Code

First, we need to define a python function to accept an argument that will act as the search term:

def get_image_search(phrase):
    pass

Next, we will convert the search term to URL format, then use the requests library to query the API:

def get_image_search(phrase):
    params = {"q": urllib.parse.quote(phrase), "media_type": "image"}
    results = requests.get("https://images-api.nasa.gov/search", params=params)

Finally, we will decode the collection+JSON response that the API returned and extract a list of links (each link points to a manifest listing the files for one image), then return that list:

import urllib.parse
import requests

def get_image_search(phrase):
    params = {"q": urllib.parse.quote(phrase), "media_type": "image"}
    results = requests.get("https://images-api.nasa.gov/search", params=params)
    data = [result['href'] for result in results.json()["collection"]["items"]]
    return data

There we go! We now have a code snippet that queries the NASA image search API and returns a list of links related to our search term.
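
As a quick test (this queries the live API, so it needs an internet connection; "nebula" is just an example term):

links = get_image_search("nebula")
print(len(links), "results")
print(links[:3])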

Step 3: The Convolutional Neural Network

Overview


The job of the neural network is to classify whether or not an image is of something in space. To do this, we will use a convolutional neural network (CNN) to perform a series of matrix operations on the image and determine how space-y it is. I won't explain all of it, because there is a lot of theory behind it, but if you want to learn about neural networks, I suggest "Machine Learning Mastery".


The Code

First, we need to import our dependencies:

import os
# Fix for an issue during the training step on GPU
os.environ['CUDA_VISIBLE_DEVICES'] = ''
import tensorflow as tf
if tf.test.gpu_device_name():
    print('GPU found')
else:
    print("No GPU found")
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
from PIL import Image
import numpy as np

Next we need to define our model:

img_width, img_height = 1000, 500

train_data_dir = 'v_data/train'
validation_data_dir = 'v_data/test'
nb_train_samples = 203
nb_validation_samples = 203
epochs = 10
batch_size = 8

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

model = Sequential()
model.add(Conv2D(32, (2, 2), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
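
At this point you can sanity-check the architecture with Keras' built-in layer-by-layer summary:

model.summary()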

I have trained the model for you, but if you would like to train it yourself on your own dataset, I have attached the training code. Otherwise, you can download the trained model's HDF5 file. Due to Instructables file restrictions, I have had to rename it with a ".txt" extension. To use it, rename the file to a ".h5" extension, and load it with this code:

model.load_weights("model_saved.h5")
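
For reference, a minimal training loop in the classic Keras style, reusing the constants and generators imported above, would look roughly like this (a sketch of my own; the attached training code is the authoritative version):

train_datagen = ImageDataGenerator(rescale=1. / 255)
test_datagen = ImageDataGenerator(rescale=1. / 255)

# flow_from_directory expects target_size as (rows, cols), which matches
# input_shape given the script's (img_width, img_height) naming
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

model.save_weights('model_saved.h5')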

To use the network to predict how space-y an image is, we will define this function:

def predict(image_path):
    # Load the image at the size the network expects
    img = image.load_img(image_path, target_size=(1000, 500))
    # Add a batch dimension before passing it to the model
    img = np.expand_dims(img, axis=0)
    result = model.predict_classes(img)
    return result[0][0]
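
Given how api.py uses the return value later (it rejects an image when the result is 0), a 1 means the network considers the image space-like:

space = predict("pilbuffer.png")  # 1 = space-like, 0 = rejected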

Step 4: Processing the Image

Overview

For image processing, I am using the OpenCV (cv2) library. First we blur the edges of the image, then we remove the background by creating a mask and changing the alpha values of the darker colours.


The Code

This is the part of the function that blurs the edges:

from PIL import Image, ImageFilter
import cv2
import numpy as np

def processImage(img):
    RADIUS = 20
    # Open the image (api.py saves it to pilbuffer.png first)
    im = Image.open(img)
    # Paste the image onto a slightly larger black background
    diam = 2 * RADIUS
    back = Image.new('RGB', (im.size[0] + diam, im.size[1] + diam), (0, 0, 0))
    back.paste(im, (RADIUS, RADIUS))

    # Create blur mask
    mask = Image.new('L', (im.size[0] + diam, im.size[1] + diam), 255)
    blck = Image.new('L', (im.size[0] - diam, im.size[1] - diam), 0)
    mask.paste(blck, (diam, diam))

    # Blur image and paste blurred edge according to mask
    blur = back.filter(ImageFilter.GaussianBlur(RADIUS / 2))
    back.paste(blur, mask=mask)
    back.save("transition.png")
    back.close()

Next, we will set the darker colours to transparent, and save the image temporarily:

    # Create an HSV mask and make the near-black background transparent
    image = cv2.imread("transition.png")
    hMin = 0
    sMin = 0
    vMin = 20
    hMax = 180
    sMax = 255
    vMax = 255
    lower = np.array([hMin, sMin, vMin])
    upper = np.array([hMax, sMax, vMax])
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)

    output = cv2.bitwise_and(image, image, mask=mask)
    # Reuse the last colour channel as an alpha channel
    *_, alpha = cv2.split(output)
    dst = cv2.merge((output, alpha))
    output = dst
    with open("buffer.png", "w+") as file:
        pass
    cv2.imwrite("buffer.png", output)
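
Putting the two halves together: the function reads the image at the path it is given and leaves the blurred, background-removed result in buffer.png, which is exactly how api.py calls it later:

processImage("pilbuffer.png")
img = Image.open("buffer.png")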

Step 5: Stitching Images Together Into an Equirectangular Projection

Overview


This function takes multiple images and stitches them into a format that can be interpreted by the PhotoSphere.js package, using the PIL (Pillow) library.


The Code

First, we need to create an image that can act as the host for the other images:

new = Image.new("RGBA", (8000, 4000), color=(0,0,0))

Next, we need to iterate through the array of images (all of which have been resized to 1000x500) and paste them into the host image. Since the canvas is 8000x4000 and each tile is 1000x500, the grid holds 8 columns and 8 rows, 64 tiles in total, which is why the main script requests 64 images:

    h = 0
    w = 0
    i = 0
    for img in img_arr:
        new.paste(img, (w, h), img)
        w += 1000
        if w == 8000:
            h += 500
            w = 0
        i += 1

Now we just wrap this up in a function that takes an array of images as its argument, and returns the new image:

def stitch_beta(img_arr):
    new = Image.new("RGBA", (8000, 4000), color=(0,0,0))
    h = 0
    w = 0
    i = 0
    for img in img_arr:
        new.paste(img, (w, h), img)
        w += 1000
        if w == 8000:
            h += 500
            w = 0
        i += 1
    return new

Step 6: The Full Python Script

This is the full Python neural network script, which is saved as net.py and imported by the main script:

# importing libraries
import os
# Fix for an issue during the training step on GPU
os.environ['CUDA_VISIBLE_DEVICES'] = ''
import tensorflow as tf
if tf.test.gpu_device_name():
    print('GPU found')
else:
    print("No GPU found")
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
from PIL import Image
import numpy as np


img_width, img_height = 1000, 500

train_data_dir = 'v_data/train'
validation_data_dir = 'v_data/test'
nb_train_samples = 203
nb_validation_samples = 203
epochs = 10
batch_size = 8

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

model = Sequential()
model.add(Conv2D(32, (2, 2), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

model.load_weights("model_saved.h5")

def predict(image_path):
    # Load the image at the size the network expects
    img = image.load_img(image_path, target_size=(1000, 500))
    # Add a batch dimension before passing it to the model
    img = np.expand_dims(img, axis=0)
    result = model.predict_classes(img)
    return result[0][0]

This is the main Python file, api.py:

import requests, sys, random, urllib.parse, cv2
from PIL import Image, ImageFilter
from io import BytesIO
import numpy as np
import net


def get_image_search(num, phrase):
    count = 0
    img_arr = []
    for arg in phrase:
        print(arg)
        print(f"Current image count: {count}")
        i = 0
        params = {"q": urllib.parse.quote(arg), "media_type": "image"}
        results = requests.get("https://images-api.nasa.gov/search", params=params)
        data = [result['href'] for result in results.json()["collection"]["items"]]
        print(len(data))
        if num > len(data): num = len(data)
        while count < num+1:
            if i == len(data)-1:
                break
            try:
                result = requests.get(data[i]).json()
                result = requests.get(result[0]).content
                img = Image.open(BytesIO(result))
                img = img.resize((1000,500))
                img = img.convert("RGBA")
                with open("pilbuffer.png", "w+") as file: pass
                img.save("pilbuffer.png")
                space = net.predict("pilbuffer.png")
                if space == 0:
                    raise Exception
                processImage("pilbuffer.png")
                img = Image.open("buffer.png")
                img_arr.append(img.copy())
                img.close()
                count += 1
                print(f"\n{arg}: Image {i} added ({count}/{num})")
            except KeyboardInterrupt:
                print("\nSkipping Image")
                break
            except:
                print(f"\n{arg}: Image {i} did not pass neural filter", end="\r")
                pass
            i += 1
        if count >= num:
            break
    print(f"\n{count} images retreived")
    return img_arr


def stitch_beta(img_arr):
    new = Image.new("RGBA", (8000, 4000), color=(0,0,0))
    h = 0
    w = 0
    i = 0
    for img in img_arr:
        #pbar.set_description(f"Processing image {i+1}")
        new.paste(img, (w, h), img)
        w += 1000
        if w == 8000:
            h += 500
            w = 0
        i += 1
    return new

def processImage(img):
    RADIUS = 20
    # Open the image (get_image_search saves it to pilbuffer.png first)
    im = Image.open(img)
    # Paste the image onto a slightly larger black background
    diam = 2 * RADIUS
    back = Image.new('RGB', (im.size[0] + diam, im.size[1] + diam), (0, 0, 0))
    back.paste(im, (RADIUS, RADIUS))

    # Create blur mask
    mask = Image.new('L', (im.size[0] + diam, im.size[1] + diam), 255)
    blck = Image.new('L', (im.size[0] - diam, im.size[1] - diam), 0)
    mask.paste(blck, (diam, diam))

    # Blur image and paste blurred edge according to mask
    blur = back.filter(ImageFilter.GaussianBlur(RADIUS / 2))
    back.paste(blur, mask=mask)
    back.save("transition.png")
    back.close()
    # Create an HSV mask and make the near-black background transparent
    image = cv2.imread("transition.png")
    hMin = 0
    sMin = 0
    vMin = 20
    hMax = 180
    sMax = 255
    vMax = 255
    lower = np.array([hMin, sMin, vMin])
    upper = np.array([hMax, sMax, vMax])
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)

    output = cv2.bitwise_and(image, image, mask=mask)
    *_, alpha = cv2.split(output)
    dst = cv2.merge((output, alpha))
    output = dst
    with open("buffer.png", "w+") as file:
      pass
    cv2.imwrite("buffer.png", output)


if __name__ == "__main__":
    search_terms = ["supernova", "planet", "galaxy", "milky way", "nebula", "stars"]

    #The search terms can be altered to whatever you want the planetarium to include

    img_arr = get_image_search(64, search_terms)
    print("Images retrieved and neural filtered")
    img = stitch_beta(img_arr)
    print("Images stitched")
    img.save("stitched.png")

Step 7: The Electron App

Overview

We will create a simple Electron app that just positions and loads the PhotoSphere element. The main.js and package.json files are taken straight from the Electron website, and the HTML is a slightly modified version of the HTML provided on the PhotoSphere website. I have included the files, but renamed them all to .txt, as Instructables does not allow those file types. To use the files, rename them with the appropriate extensions.

The Code

main.js

const { app, BrowserWindow } = require('electron')

function createWindow () {
  const win = new BrowserWindow({
    width: 800,
    height: 600,
    webPreferences: {
      nodeIntegration: true
    }
  })
  win.loadFile('index.html')
}

app.whenReady().then(createWindow)

app.on('window-all-closed', () => {
  if (process.platform !== 'darwin') {
    app.quit()
  }
})

app.on('activate', () => {
  if (BrowserWindow.getAllWindows().length === 0) {
    createWindow()
  }
})

package.json

{
  "name": "space",
  "version": "0.1.0",
  "main": "main.js",
  "scripts": {
    "start": "electron ."
  }
}

index.html
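
The index.html source did not survive here; the actual file is included in the attachments (renamed to .txt). As a rough sketch only, a minimal page built against the Photo Sphere Viewer documentation might look like the following. The element id, script paths, and the stitched.png location are my assumptions, and the exact constructor name depends on the library version:

<!DOCTYPE html>
<html>
<head>
  <!-- Photo Sphere Viewer styles; path assumes an npm install of photo-sphere-viewer -->
  <link rel="stylesheet" href="node_modules/photo-sphere-viewer/dist/photo-sphere-viewer.min.css">
  <style>#viewer { width: 100vw; height: 100vh; }</style>
</head>
<body>
  <div id="viewer"></div>
  <script src="node_modules/three/build/three.min.js"></script>
  <script src="node_modules/photo-sphere-viewer/dist/photo-sphere-viewer.min.js"></script>
  <script>
    // Load the equirectangular image produced by api.py
    new PhotoSphereViewer.Viewer({
      container: document.querySelector('#viewer'),
      panorama: 'stitched.png'
    })
  </script>
</body>
</html>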

Step 8: Execution

Creating the equirectangular image

To create the image, run the api.py script from the command prompt, with the virtual environment activated:

python api.py

After the script has finished executing, run the Electron app (from the directory containing package.json) using:

npm start

Voila! Your planetarium is active! Thanks for reading :)