Introduction: Build Your Own AI (Artificial Intelligence) Assistant 101
Remember when you were watching Iron Man and wondered how cool it would be to have your own J.A.R.V.I.S.? Well, it's time to turn that dream into a reality.
Artificial intelligence is the next big thing. Imagine how cool it would be if you had your friends over at your house, and all of a sudden you go, "Hey JARVIS, show me some memes.", and JARVIS goes "Sure, sir. Here are the latest Italian memes.", whilst showing you the funniest Italian memes. Cool, right?
(Your friends would look up to you like you are Elon Musk.)
In this Instructable, I will show you how to build your very own Artificial Intelligence (AI) assistant using a free online tool (YAAY!) called API.AI.
UPDATE 12/12/17: API.AI has changed its name to "Dialogflow", but the working procedure and concepts are still the same.
I will be going through only the basics, as the possibilities of API.AI are endless. My aim is to build an assistant, JARVIS, that responds to basic conversation like greetings and can crack a few jokes. However, you can add features such as getting weather details, setting alarms, and much more.
API.AI is an organization that specializes in Artificial Intelligence and Natural Language Processing. It was acquired by Google (hence the free) in 2016 and helps developers (You da Tony Stark now!) make AI assistants for a variety of needs. Its dynamic and easy-to-use interface allows everyone to develop bots for businesses, games, and much more. And now..
Enough Talk! Let's change the World!
P.S: I have added a .zip file of the AI that I'll be making in this tutorial (refer to Step 8), which you can upload and use as a head-start while toying with API.AI, or you could start from scratch and go along with me :)
P.S.2 : It would give me great joy if you would vote this Instructable for the First-Time Author Contest. It's simple. Just click on the VOTE button xD. Thanks a Million !
Step 1: API.AI - What Can It Do?
API.AI is a framework for developing Artificial Intelligence bots that makes use of "Natural Language Processing" (NLP). But what exactly is Natural Language Processing?
Consider this example,
You are on your first day of school learning Trigonometry (Tony Stark stuff). You have no prior knowledge of what the subject is about, what kind of questions you will be asked, or how to answer them. You know nothing! (Game of Thrones reference :P). Soon your teacher teaches you how to solve ONE kind of problem, and you find that you can solve that problem on your own. You also find that you can solve, on your own, any problem that follows a pattern similar to the one taught by your teacher. This is exactly how API.AI works.
In the beginning, your Assistant (a.k.a Bot, Friend, etc.) starts afresh with no knowledge. By teaching your Assistant how to reply to specific phrases, you make your Assistant adaptable, such that it learns how to respond to those specific phrases, as well as other phrases that convey the same meaning.
API.AI is also super versatile, i.e. your bots can be deployed to various platforms that support AI with a single click. They also provide SDKs for Python, Ruby, C++, and much more. Facebook Messenger, Kik, Slack, and Google Assistant are a few examples of platforms into which you can deploy your bot. Consider it a bonus, being able to control your Assistant via voice as well as a text service. (Everything is Awesome!)
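To give you a taste of those SDKs, here is a minimal sketch of talking to an agent from Python over the legacy API.AI (Dialogflow v1) HTTP API. The endpoint URL, the `v=` version date, and the `CLIENT_ACCESS_TOKEN` placeholder are assumptions based on the old v1 REST API; check the current Dialogflow docs before using this for real.

```python
import json
import urllib.request

# Sketch of querying an API.AI agent over HTTP. The endpoint and version
# date below are assumptions from the legacy v1 REST API -- verify them
# against the current documentation before relying on this.
API_URL = "https://api.api.ai/v1/query?v=20170712"

def build_query(text, session_id, token):
    """Assemble the URL, headers, and JSON body for one text query."""
    headers = {
        "Authorization": "Bearer " + token,  # your agent's client access token
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "query": text,          # what the user said
        "lang": "en",
        "sessionId": session_id,  # keeps follow-up context per conversation
    })
    return API_URL, headers, body

def ask_agent(text, session_id, token):
    """Send the query and return the agent's spoken reply."""
    url, headers, body = build_query(text, session_id, token)
    req = urllib.request.Request(url, data=body.encode("utf-8"), headers=headers)
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read().decode("utf-8"))
    return result["result"]["fulfillment"]["speech"]
```

With a valid token, `ask_agent("Hi JARVIS", "session-1", CLIENT_ACCESS_TOKEN)` would return whatever response your agent picks, the same text you see in the Test Console.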
Step 2: Getting Started
(The previous step was more like a Step 0 :P)
So you're finally ready to make your own Assistant! Head over to API.AI and sign in with your Google account.
You will be greeted with a Terms of Service page after authentication. Click on "Accept".
You are now in the Console (A fancy word for Work-space) where you will manage and make your own AI Assistant.
In API.AI, Assistants are commonly referred to as "Agents". An Agent as a whole is your Assistant (Bot, etc.). For example, Siri as a whole is an Agent. Google Assistant, Cortana, Siri, etc., are all different kinds of Agents, having different personalities based on how You (the Developer) program them.
Click on "Create Agent" inside the blue box, below the Introductory video to make your very first Agent (I will be referring to the AI Assistant as Agent or by "JARVIS" from now on.)
P.S: Watch the Introductory video to get a better idea on the features API.AI offers.
Step 3: Birth of Your AI
Once you click on "Create Agent", you will be presented with a screen to add basic details of your Agent. My agent details are as follows, but you can customize them to suit your needs:
- Agent Name: JARVIS
- Agent Description: Just Another Rather Very Intelligent System.
- Add Sample data: (Leave Blank)
- Language: English (API.AI supports many popular languages)
- Default Time Zone: (Choose your Location from the Drop Down)
You may choose to add sample data, such as Alarms, Easter Eggs, etc., but for the sake of this tutorial, I will be making JARVIS from scratch so that you developers can learn how to implement these features on your own. :D
Click on the "SAVE" button at the top-right to save your Agent. Get used to Saving your agent often, as API.AI doesn't provide an Autosave feature, and you have to manually save your agent every time changes are made.
CONGRATS!!! You now have your very own Assistant (Kinda) !!!
Step 4: Familiarizing Yourself With the Console
The Console is where all the magic happens.
The bar on the left (Intents, Entities, etc.) is where you get to train your bot to respond to phrases a User might say. The "Test Console" on the right is where you, as a user, will test your assistant to see if it is giving the right response.
In this tutorial, our focus is on only 2 sections: INTENTS & ENTITIES
INTENTS: This is where you provide details of phrases a User might ask, and train your Assistant on how to respond to those phrases.
ENTITIES: Consider these as variables that store data, which can be retrieved and used later. API.AI provides a vast range of prebuilt entities such as location, time, etc. You are also free to make your own entities to store names, movie lists, etc.
Default Fallback Intent: By now you must be wondering what the Default Fallback Intent (DFI) on your screen is. The DFI is an Intent that will be triggered if your Assistant fails to match the phrases said by the user. Think of it like you saying "I Don't Know", when someone asks you a question you have no idea how to answer.
Since right now, your Assistant is like a baby with no knowledge, talking to it would result in the DFI being triggered. Go ahead and try talking to it in the Test Console on the right.
P.S: You can ignore the Default Welcome Intent for now. It is also an Intent, often used when integrated with FB Messenger, etc., and contains phrases your bot would say first to initiate a conversation when it is called.
Step 5: Making Intents
Now is when things start getting real.
Click on "Create Intent", at the top of the console, to create your very first Intent.
I will be naming this Intent "startconvo.Hi" (in the topmost blank), and the purpose of this Intent will be to respond to greetings such as Hi, Hello, etc.
Intents have 2 main sections:
USER SAYS: In this section, you'll provide various phrases a User may say. The more phrases you add, the better your Assistant can learn and respond to similar phrases. (Try to add at least half a dozen phrases so that your Agent can understand and recognize other similar phrases.)
RESPONSE: Here, you'll provide answers for the said User phrases. You can add multiple responses in this section, and your Agent will pick one at random. This is done to avoid redundancy and make the conversation more natural. Responses can also be rich messages like Cards, Images, etc., that are displayed on devices that support them. (Refer to the docs for more info: Rich Messages)
For JARVIS this is what the 2 sections contain:
User Says : Hi, Hey, Hello, Yo
Responses : My man! , Hey! , Hi There! , Yo Dawg!
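If you are curious what this Intent looks like under the hood, here is a rough sketch of how it could appear in the exported agent .zip (Step 8). The field names below are approximations of the legacy API.AI export format, not the exact schema; export your own agent to see the real shape.

```python
import random

# Rough sketch of the "startconvo.Hi" intent as it might appear in the
# exported agent .zip (e.g. intents/startconvo.Hi.json). Field names are
# approximations of the legacy API.AI export format, not the exact schema.
intent = {
    "name": "startconvo.Hi",
    "userSays": [{"data": [{"text": p}]} for p in ["Hi", "Hey", "Hello", "Yo"]],
    "responses": [
        {"speech": ["My man!", "Hey!", "Hi There!", "Yo Dawg!"]}
    ],
}

# The agent picks one of the listed responses at random, which is why
# JARVIS doesn't repeat himself every time you say Hi:
print(random.choice(intent["responses"][0]["speech"]))
```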
Don't forget to Save after adding changes.
YOU NOW HAVE AN AI ASSISTANT (YAAAAAAAY!!!). Try talking to it in the test console.
P.S: If you are using Chrome Browser, you can click on the mic icon in the Test Console to talk to your Agent and get your response.
P.S.2: Notice how JARVIS responds when I say "Hey Jarvis!" (or) "Hola Jarvis!" even though I haven't fed that phrase in the User says section. (It's a Magic Trick! xD)
Step 6: Follow-up Intents
Now that you have an idea of how to make Intents, let's make some follow-up intents.
Follow-up Intents are a branch of a main Intent that carry the conversation deeper into a specific topic. For example, your Agent could be showing you a Cat video, and if you say, "Show me more similar videos.", your Agent must show you more Cat videos only, and not something else. In such cases, normal Intents cannot be used, as they refer to a much broader category. Follow-up Intents follow up on the context in which your conversation was heading.
Let's play around with follow up intents. Create an intent with the name "Jokes" and populate the User says section with common phrases people say when they want to hear a joke. Ex: "Tell me a Joke" , "Make me Laugh", etc. Fill in the Response section with a few hilarious jokes. Don't forget to hit Save when done.
Now head over to the Intents tab, hover your mouse over the right side of the Jokes Intent, and click on "Add Follow-up Intent". From the drop-down list, select "More", and you will find that a Follow-up Intent has been created under your main Jokes Intent. The purpose of this Follow-up Intent is to give more jokes when the User says, "Another one". Hence, in this context, "Another one" means to tell another Joke (not DJ Khaled xD).
Note: Even though the user does not SAY Joke, he/she IMPLIES it.
The Follow-up Intent you have created is just like your normal Intent. Add phrases a User might say when asking for more Jokes, and fill in the Response section with the same Jokes you used in the Main Intent.
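Under the hood, follow-up intents are wired together with contexts: the parent Intent sets an output context, and the follow-up can only trigger while that context is active. Here is a toy sketch of that matching logic; the "Jokes-followup" name mirrors API.AI's "<intent>-followup" naming convention but is an assumption, and the dicts are hand-written, not exported files.

```python
# Toy sketch of context-based matching, the mechanism behind follow-up
# intents. The "Jokes-followup" context name follows API.AI's naming
# convention but is an assumption, not real exported data.
intents = [
    {"name": "Jokes",
     "input_contexts": [],                      # main intent: always matchable
     "output_contexts": ["Jokes-followup"]},    # activates the follow-up
    {"name": "Jokes - more",
     "input_contexts": ["Jokes-followup"],      # only fires after a joke
     "output_contexts": ["Jokes-followup"]},    # keeps the chain alive
]

def matchable(active_contexts):
    """Return the names of intents whose input contexts are all active."""
    return [i["name"] for i in intents
            if all(c in active_contexts for c in i["input_contexts"])]

# Before any joke is told, only the main intent can fire:
print(matchable(set()))               # ['Jokes']
# After "Tell me a joke", the follow-up becomes available too:
print(matchable({"Jokes-followup"}))  # ['Jokes', 'Jokes - more']
```

This is why "Another one" tells another joke instead of confusing your Agent: the follow-up only competes for matches while the Jokes context is active.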
Voila. You now have an Assistant, who is your personal Stand-up Comedian :)
Step 7: Adding Entities
Now that you have a talking Agent, let's teach it your name! (I will be using API.AI's inbuilt entities to keep this tutorial short, but you can add your own entities as well)
Create a new Intent and call it: Intro.mynameis
In the User says section, try adding the following phrases:
- My name is Antony
- I am Tony
- Call me Bruce
You will notice that, when you enter the above sentences, the names get highlighted in a color and are stored in an Entity called "sys.given-name". This is another one of the many cool features API.AI offers. API.AI can recognize names, locations, time, etc., from phrases, and can categorize and store them into its pre-built Entities automatically. (Such helps, Much Wows)
Some names may not be highlighted in the phrases, but you can add them to Entities by highlighting the information to be stored and selecting the respective Entity from the list.
In the Response section, you can call the values stored in the Entities, using the syntax $entityname
For JARVIS, the Response section goes like: Hi $given-name! A pleasure to meet you.
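At runtime, API.AI fills that $given-name placeholder with whatever the Entity matched. Here is a toy sketch of that substitution; the `fill_template` helper and the parameters dict are made-up illustrations, not the real API.AI internals or response format.

```python
# Toy sketch of how API.AI fills $entity placeholders in a response.
# fill_template and the params dict are hypothetical illustrations,
# not API.AI's actual implementation or response schema.
def fill_template(template, parameters):
    """Replace each $name placeholder with its matched parameter value."""
    # Longest names first, so "$given-name" isn't clobbered by a
    # hypothetical shorter parameter like "$given".
    for name in sorted(parameters, key=len, reverse=True):
        template = template.replace("$" + name, parameters[name])
    return template

params = {"given-name": "Tony"}  # what the phrase "I am Tony" would match
print(fill_template("Hi $given-name! A pleasure to meet you.", params))
# -> Hi Tony! A pleasure to meet you.
```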
Step 8: Importing and Exporting Agents
Agents can be Imported/Exported, and it is always best to keep a copy of your agent on your local machine, in case some unforeseen error occurs.
Like I mentioned in the Introduction, here are the files of the Agent we just built together. You can import the .zip file into API.AI and just follow along with this crash course, whilst eating spicy Doritos. (You're Welcome ;) )
- Download the .zip file provided below.
- Create a new Agent in API.AI, and click on the Gear icon next to your Agent name, in the top left corner.
- Head over to the Export and Import tab, just below your Agent name.
- Choose Import from zip >> Select File >> and select the .zip file you had downloaded.
- Type in "IMPORT" in the box provided below, and click on Import.
- You have successfully Imported an Agent. (Sounds like The Matrix, right?)
You can now customize the Agent to your needs, by adding Intents and Entities.
Step 9: Integrations and Other Stuff
API.AI offers one-click integrations so you can deploy your app to various services such as Facebook Messenger, Google Assistant, Kik, Slack, etc. The process for implementing them varies across services, so here's the link to a detailed guide on how to implement them: Integrations with API.AI
The most common method of Integrating your Agent as a chatbot is to use the API.AI web interface.
- Click on the Integrations tab in the left hand side of the Console
- Under the One-click Integrations, toggle the Web Demo switch.
- Use the link provided to talk to your chatbot, or embed it to a website, or share it with the world.
Sharing the link with others helps you understand how your Agent responds to different kinds of conversation, and fix errors when it encounters random talk.
API.AI offers a wide range of pre-built Agents such as Home Automation, Car System Control, Web Search, Flight Booking, etc, which can be imported as an Agent into your console to use as a head-start while working on your Awesome agent.
To Import pre-built agents:
Head over to the Prebuilt Agents tab on the left hand side of the console >> Hover over the Agent you wish to Import and click on Import >> Choose your Google Project (Leave blank to create a new Project) and click on Proceed to Agent. You have successfully imported a Prebuilt Agent.
Step 10: ...And That's a Wrap!
YOU'VE DONE IT. YOU'VE MADE YOUR VERY FIRST ARTIFICIAL INTELLIGENCE ASSISTANT!!! I'm so proud :")
Anyways, it was great fun writing my Very First Instructable. Please do Like, Share and Comment if you have any queries. Would be super glad to help!
Also comment down below if you liked this Instructable, or have suggestions on how to improve my writing. :)
And also, Sorry for the long Post :P. Here's a Meme xD.
Spread Love and Happiness. I'll talk to you guys in the next one.
(Subtle MKBHD Reference xD)
Can I add my own voice? I have created a digital one that I would like to use. Can I use it?
How did you create your own voice? There are many factors to consider while implementing your own voice to the Dialogflow dataset.
Apparently lyrebird.ai has their own process of integrating their services with your Chatbot. This can be achieved only by contacting their support center. The Lyrebird API is something that is new, and integrating it with your Chatbot would be an experiment for you to consider. :)
Does this program actually learn on its own? Please respond because it would be amazing if it does.
How do I get the AI to gather information for me?
It's AI. Its job is TO LEARN. It learns similar phrases by noting patterns in the user's speech, and provides responses accordingly.
How do I connect this bot with AI?
AND ALSO WITH THE INTERNET?
How do I change the AI voice?
You can't. Sorry! JARVIS voice could be a reality, probably in the next 5 years :P
I made Jarvis, but how do I use it? Is it offline or online?
Please refer to Integrations (Step 9) to use your bot online.
It's not speaking.
OK, so can't I extract it into a .exe file?
I don't want this to be used only as an extension.