Today we are gonna go through making an Augmented Reality web browser for Android.
This idea started when ExpressVPN asked me to do a sponsored YouTube video. Since this is my first one, I wanted to do something that was relevant to their product.
Pretty much immediately I thought, ohh, I’ll just make an augmented reality web browser so we can browse the web in AR over a VPN. It can’t be that hard, right? Wrong. I set myself some limitations for this project because I wanted to use it to learn some new things.
Number one, I wanted it to be for Android because I always do stuff with iOS.
Number two, I didn’t want to use any paid APIs. I wanted everyone to be able to just download this project and run it without having to pay for anything online. So no IBM Watson, no Google APIs, and nothing from the Unity Asset Store.
LET'S GET STARTED!
Step 1: First Things First.
The first thing I wanted to get working was a good solution for speech-to-text so we could do the online searches with our voice. Also, I think voice is a great method of interaction in AR, at least until we have a good hand-tracking solution. I know that Android has some native speech-to-text functionality, so a quick Google search will help us find some plugins for Unity.
I first came upon this plugin for Unity:
I tried this out and it worked great. The only problem was that when you use it with ARCore, it generates a native popup box that seems to send Unity to the background, and you end up losing tracking.
This was less than ideal.
Step 2: Getting Speech to Text Working for Android.
So I started searching for plugins that did not bring up the native popup box. I could not find much, but I did end up finding this Android library:
Now, I know literally nothing about native Android development, but I wanted to challenge myself, so I figured I would just try to write some bridge code for this library and turn it into an Android plugin for use in Unity.
Again, this was a mistake and led to hours of frustration.
Then finally it worked…
Step 3: Lessons Learned.
So there are two things I learned in this process that are not immediately apparent from just googling how to make an Android plugin for Unity.
Number one: you’ll probably need to get a reference to the Android app context if your plugin is going to do anything interesting. You can do this by adding the classes.jar file from your Unity install to your Android project as a library. Go to File > Project Structure and choose the Dependencies tab for the app module. Click the plus button to add the JAR file, which lives in your Unity install under PlaybackEngines/AndroidPlayer/Variations/mono/Development/Classes/classes.jar. Change the scope to Compile only. Now, in a new Java file you can do:
and use that reference wherever you need it.
The next weird issue is that this voice functionality can only be run on the main thread, or else you will get errors. To do this from Unity, you have to wrap the plugin calls in an AndroidJavaRunnable and run them on the UI thread, like the picture above shows.
Step 4: Struggles.
At this point I’m thinking I’m an Android expert.
I’m online applying for Android dev jobs, I’m ordering Android stickers and t-shirts. Life is good. Now I’m ready to move on to figuring out how to render a webpage in Unity. After doing a little research, I see that the accepted solution is to use an Android WebView. This is just an Android class that lets you render interactable websites inside an Android app without sending everything to the browser. Basically, it’s so you can keep users in your app. The first order of business is to see if anyone has made an open-source Unity plugin for this. I first try this plugin:
but it only renders a WebView to the Unity GUI layer, so that’s not gonna work. Then I find this plugin for VR:
This allows you to render a WebView to a texture, and it’s even interactable, which is great.
I thought this was the answer until I tried it and found out that it was blocking all my clicks from Unity.
Step 5: Back to the Drawing Board.
I’m just gonna try to make my own plugin for this, because all I really need is to send an image of the website to Unity. Doing some research on that, I find out that I can save an Android canvas to a bitmap, encode it to a PNG, and just send those bytes to Unity, where I create a new texture from that array of bytes and I’m good. After another few hours of frustration and questioning my existence…
It finally worked.
So now I can get a screenshot from a website, so let’s see how it works with ARCore…
I mean, I am using a Galaxy S7, which is not the newest phone, but this WebView stuff is still freezing the whole app and is basically unusable. I’m assuming it’s because the WebView and ARCore are both overloading the main thread, but I don’t really know. Back to the drawing board. If we wanna make this work, we are going to have to offload the heavy lifting to some kind of server. After doing some googling, it turns out you can take a screenshot of a website with a Node.js library called WebShot, which uses PhantomJS, a scriptable headless browser.
Step 6: Finally We Are Getting Somewhere.
Now I have to figure out how the hell to use Node.js….
Turns out you can make a Node.js script that listens on a particular port and, when it gets a hit on that port, returns some information. We can test this out by creating a little hello world script that listens on port 3000. We cd into the directory with the script and run it with node followed by the script name. If we navigate to our IP address and port 3000 in our browser, we can see it return hello world. Now that I have a small grasp on Node, I can get it working on the server I host my websites on, which is hawkhost.com. I SSH into my server and try to run a few hello world Node.js scripts… and nothing works. After another few hours of messing around, I find out that my particular hosting server only has two ports open for use: 3000 and 12001.
So using those ports and my hosting server’s IP, I can get a hello world example working. Next, I install the WebShot module and create a small script that I can pass a URL, and it will return an image of the website at that address. Now I can start that Node script and send an HTTP POST request from Unity to my server’s IP and port number, and it will get back a byte array that is the image of that website.
Thank GOD. Now another problem: when I close my terminal, the process ends and quits listening. I do some more research and find a module called forever. I npm install forever, and now I can do forever start followed by the script name, and it will continue running until I log in and stop it again.
Step 7: It Works!
Great. But it’s not cool enough.
When I think about the value of browsing the web in AR, it comes from the addition of space. We are no longer confined to a single screen, so I want to make something that lets me visualize my search trail right in front of me. So let’s load that first search page, then crawl it and extract every search result as a link, which we can then load as an image above our main screen. We can do this with another Node.js script that scrapes the first page of the Google results, and run it continuously with forever. This could be done much more efficiently with the Google search API, but rule number two for this project was no paid APIs, so we are gonna do it like this for now. Now that we have the images for each link, we can load them on a bigger screen every time we click them and boom, we have a nice little browser here. It’s not fully functional, but I’ll take it. Alright, if you want to run this project yourself, go to my GitHub and download the ExpressVPN project:
Step 8: Getting Everything Working.
Open it up in Unity and let’s get everything running locally on your machine. First, you need to find the IP address of your machine. If you are on a Mac, just hold Option and click the Wi-Fi symbol to reveal your IP.
Go back to Unity, open up the browser controller script, put in your IP address there, and copy it to your clipboard. Find the nodeScripts folder and put it on your desktop, open up the folder, and change both file extensions to .js. Open up each script and change the IP address to your IP. Now open up Terminal; we have to install some things. Install Homebrew if you don’t already have it.
-brew install node
-npm install webshot
-npm install flatiron
-npm install union
-npm install cheerio
Now we can start both scripts. cd into the nodeScripts folder and do node getimage.js, then open up a new terminal window and do node getlinks.js. Leave both terminal windows running and go back to the editor. If we press play, everything should work. We can also go to File > Build Settings and hit Build And Run to get it on our phone! If you wanna stop the servers, just hit Control-C, or Command-Q to close the whole terminal.
This is an entry in the Epilog X Contest.