Digital Assistant

Introduction: Digital Assistant

Ever wanted your own personal secretary to do all the things you don't have time for?

Robotic Process Automation (RPA) is a handy piece of technology that lets people automate repetitive, menial tasks so that their valuable attention can go where it's needed. Now, I know what you're thinking:

"Surely you don't contact your secretary by clicking a button to run a script!"

Well, you're absolutely right, which is why this digital secretary can be contacted through Slack or on a website.

I am going to show you how I created my own Digital Assistant using RPA integrated with a chatbot, a project idea that came to fruition thanks to the chatbot integration for RPA provided by UiPath and Google Dialogflow. Built for Ms. Berbawy's Principles of Engineering class, this project ultimately only requires enough time, a little less than 50 dollars, and a basic understanding of programming concepts.


Supplies

1. Working laptop or computer

2. Access to Google Dialogflow

Google Dialogflow is a tool for building chatbots that serve as a conversational user interface. The pricing for the Google Cloud service can be found here.

3. Access to UiPath Studio and Orchestrator Services

RPA is the process of using digital robots or systems to complete certain interactions with user interfaces. Robotic process automations often take in data and manipulate information in order to interact with a user interface just as we do. UiPath is a provider of RPA tools that aims to reduce the amount of time people spend on boring, repetitive tasks through a wide range of unique tools and features. UiPath Studio and Orchestrator are the applications that give you access to these tools and features, and both are completely free for individual use!

Step 1: Setup

Before I started creating my processes and chatbot, I first needed to set up all of the tools required to build a fully functional digital assistant.

1. UiPath Studio Installation

I followed these steps from the UiPath website to install UiPath Studio; the enterprise edition wasn't necessary for this project since I personalized all my processes.

2. UiPath Orchestrator Set-up

I then navigated to this documentation in order to get my UiPath Orchestrator portal set up and working. I created a dedicated 'tenant' for all my chatbot processes, as well as setting up my machine so that whatever processes I ran through Orchestrator would be directed back to my computer.

*Make sure you create an environment for all your processes to be under.

3. Google Dialogflow Set-up

For Google Dialogflow, I first made sure that I could create a new agent and access some form of integration via the Dialogflow portal. I created my agent and chose a name for it. Here's the guide that I followed to get my chatbot agent fully set up.

Step 2: Creating the Processes

One critical part of this project is the creation of each individual process. What does this mean?

Well, in order for my digital assistant to complete specific tasks, I needed to instruct it on what to do by providing it with code. I personally preferred to start working on the back-end (which in this case is the UiPath commands) before creating an established front-end (the chatbot). However, this is ultimately a matter of preference, and you can skip ahead to the other steps if you feel that would be easier for you.

I started my project by breaking the task creation process down into a handful of steps, all based on the classic five stages of the design thinking process.

1. The Ideation Process

The first part of creating my UiPath process was, obviously, selecting a task that I wanted to automate. A good candidate for automation is anything in my daily life that is repetitive and tedious, so the first thing I did was compile a list of things I find repetitive and tedious throughout my day.

For example, my average day contains repetitive tasks such as:

  • Uploading each of my assignments to my school's submission portal
  • Checking what assignments my teachers have posted
  • Logging in to each individual Zoom Meeting
  • Taking notes for some of my classes

*Note that most of my example tasks relate to school because that is what I spend most of my time focused on, but your tasks can be virtually anything ranging from checking the stock market to advertising your business by messaging countless people about it on social media.

Once I had formed a general idea of the tasks I was going to automate, the final step of the ideation process was narrowing my long list down to a couple of high-quality, high-feasibility tasks that I knew I could complete with a bot or a script without too much trouble.

Of course, this step is optional if you have enough time to dedicate to making each task, which can go on for an indefinite period of time.

2. Planning

The next step I took was breaking down each of my larger automation tasks into smaller pieces that a computer would feasibly be able to understand.

The three processes I chose to create were:

  1. Assignment Upload
  2. Homework Consolidation
  3. Zoom Meeting Log-in

Using my example process of uploading assignments into my school submission portal, the breakdown would be:

  1. Identify and open a browser window
  2. Navigate to the submission portal website
  3. Open a pre-existing Excel sheet
  4. Grab login information from the Excel sheet
  5. Enter each login field with the appropriate information
  6. Click log-in
  7. Navigate to the section of the website containing assignments
  8. Find the assignment that you need to submit
  9. Click on the assignment
  10. Click on the submit assignment button
  11. Click on the upload file button
  12. In the file explorer, navigate to the homework folder
  13. Find the file that corresponds to the assignment
  14. Select the file
  15. Back on the submission site, wait for the file to finish uploading
  16. Click submit
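
To show why this level of detail pays off, steps 12-14 of the breakdown (finding the file that corresponds to the assignment) map almost directly onto code. Here is a minimal Python sketch of that one piece, assuming assignment titles roughly match file names; the function name and matching threshold are my own illustrative choices, not part of the actual UiPath process:

```python
from difflib import get_close_matches
from pathlib import Path

def find_assignment_file(assignment_name, homework_folder):
    """Return the file in homework_folder whose name best matches the assignment title."""
    files = [p for p in Path(homework_folder).iterdir() if p.is_file()]
    # Compare lower-cased file name stems against the lower-cased assignment title
    matches = get_close_matches(assignment_name.lower(),
                                [p.stem.lower() for p in files],
                                n=1, cutoff=0.4)
    if not matches:
        return None  # nothing close enough; step 13 of the plan fails gracefully
    for p in files:
        if p.stem.lower() == matches[0]:
            return p
    return None
```

In UiPath this logic ends up as a couple of activities rather than hand-written code, but the planning step is what tells you those activities are needed at all.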

In general, I needed to be extremely detailed when planning my tasks so that they would be easier to code later on without too much extra thought.

3. Prototype

Once I had broken down my selected tasks substantially, I went ahead and opened up UiPath Studio to start actually creating my first process. Here is where that step-by-step outline becomes especially useful, because UiPath has you code using blocks, or "activities", each of which performs an individual action (occasionally with a bit of code inside, though that is not mandatory).

Also, when I was coding, I needed to make sure my processes had inputs and outputs, since these would later be integrated with the chatbot part of the project.

For me, this was very intimidating at first, but UiPath is extremely intuitive, and I found that you should be able to pick up most of the platform's important nuances within a couple of weeks of consistent use. I personally went through a lot of the lessons on the UiPath Academy website, and I think watching at least some of the video lessons there is extremely helpful when you are creating your own processes.

For reference, I went ahead and published my code for the Assignment Upload, Zoom Meeting Log-in, and Homework Consolidation bots on GitHub, so if you get lost at any point you can look at some of the activities and code I used to create my processes.

If you do have any questions, I found that the best place to go is the UiPath Forums, where other RPA developers may have already resolved your specific issue, and if not, someone will likely be willing to answer your questions.

*Note that coding in UiPath is somewhat different from coding in other languages: since the process actually runs live on your computer, the best way to ensure you end up with a functional process is to test it constantly at every step of the way. This especially holds true if you are new to UiPath like I was.

4. Completion

Congratulations! You have created your first process! Now that you are finally done with the hardest part, and hopefully have a solid understanding of UiPath, you can begin creating new processes and automating more tasks using what you learned. Once you feel that you are finished, you can go ahead and hit "publish" to send your processes to the Orchestrator tenant you set up earlier.

Step 3: Orchestrator

Hopefully you already set up your machine in Orchestrator, but if you didn't, go back to Step 1 and ensure that your machine is linked to UiPath Orchestrator. If it is, you should be able to click the publish button in Studio and publish each of your processes to Orchestrator. However, that's not all: you still need to create those processes in Orchestrator.

When I first navigated to the Orchestrator cloud website, I needed to select the tenant I was going to use for my processes and then navigate to the "Automations" tab. Once there, I could create a new automation, and under "Package Overview" I was able to select my process and the environment I set up earlier.

After clicking continue, I chose a name for my process and created it on the spot, since I found that I didn't need to change any of the other settings for now.

After following these steps for each of my processes, I was able to successfully deploy them once my chatbot was complete.
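
Under the hood, the integration in the later steps starts these Orchestrator jobs through Orchestrator's REST API. You never need to call it by hand for this project, but as a rough sketch of what a start-job request body looks like, here is a small Python helper; the endpoint and field names follow UiPath's public Orchestrator API, but treat the exact shape as an assumption to verify against the current docs, and the release key below is a made-up placeholder:

```python
import json

def build_start_job_payload(release_key, input_arguments):
    """Build the JSON body for Orchestrator's StartJobs endpoint.

    Note that InputArguments must be a JSON *string*, not a nested
    object, which is an easy detail to trip over.
    """
    return json.dumps({
        "startInfo": {
            "ReleaseKey": release_key,      # identifies the published process
            "Strategy": "ModernJobsCount",  # run N jobs on available robots
            "JobsCount": 1,
            "InputArguments": json.dumps(input_arguments),
        }
    })
```

The chatbot integration builds and sends a request like this for you whenever an intent mapped to a process is triggered.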

Step 4: Creating Your Chatbot

Once I was finished creating the processes I wanted for my digital assistant, I needed to move on to the chatbot portion of this project.

I first navigated to the Google Dialogflow page and successfully created a new agent. Once on the Google Dialogflow home page after creating my agent, I was able to navigate to the "Intents" page, which was one of the key parts of the chatbot creation process as a whole.

But what are intents exactly?

Well, intents in Google Dialogflow are essentially how your chatbot knows what the user is talking about. For example, if you ask the chatbot for the weather right now, it won't understand the intent of the question or know what response it should give.

Before creating any of my custom intents, I saw that two intents were already present: the Default Fallback Intent and the Default Welcome Intent. In the Welcome Intent, I was able to add additional input phrases for my chatbot to receive from a user, as well as add some unique responses.

After looking into what intents are and how they work, I immediately began creating the necessary intents for each respective process.

Since I started with the first thing my hypothetical user would say (the user's initial intent), I didn't need to put anything under the "Contexts" section for this first intent, so I skipped ahead to the "Training phrases" section. There, I added any questions or phrases my user might say when they want to trigger one of my processes.

For my homework submission intent, for example, my training phrases were all focused around the question "Can you submit my homework for me?". The main goal of adding multiple variations of training phrases is so that the language machine learning built into Dialogflow is able to recognize a pattern.
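
To get an intuition for what the training phrases are doing, here is a toy Python sketch of intent matching by word overlap. Dialogflow's actual NLU is far more sophisticated (it generalizes well beyond the literal phrases), but the idea of scoring user input against each intent's example phrases is similar; the intent names and phrases here are just examples:

```python
def match_intent(user_text, intents):
    """Pick the intent whose training phrases share the most words with user_text.

    intents: dict mapping intent name -> list of training phrases.
    Returns the best-matching intent name, or None if nothing overlaps.
    """
    user_words = set(user_text.lower().split())
    best_intent, best_score = None, 0
    for name, phrases in intents.items():
        for phrase in phrases:
            score = len(user_words & set(phrase.lower().split()))
            if score > best_score:
                best_intent, best_score = name, score
    return best_intent
```

Adding more phrase variations to an intent is, in this toy model, simply giving the matcher more chances to score highly, which is roughly why variety in training phrases helps.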

Then, in the responses section, I added a response to the question or phrase from my user. For my processes, this was usually a follow-up question when I needed to take input; when I didn't need any input at all, I found it best to have a follow-up verification anyway. This was helpful in the early stages of creating these intents, as it let me see which intent was getting triggered and by what phrases.

Looking at the homework submission question once again, my sample response was a verification of what the user's intent was.

Once I had either created a follow-up question or a verification of the intent, I needed to process the user's next intent after the follow-up question. To do this, I navigated back to the intents page, hovered over the intent I created, and clicked "Add follow-up intent".

At this point the page prompted me with many different options for the type of follow-up intent to create, and I selected the most relevant one, since the follow-up intent titles are pretty self-explanatory (if the response from the user is "yes", for example). If my bot asks a yes-or-no question, it needs to recognize and respond to two follow-up intents, yes and no, underneath the main initial intent. If my process needed the chatbot to ask for specific input, I used the custom intent feature so I could create my own training phrases and responses.

After messing around with the creation of my intents for a while, I had the general gist of how intents work and how to use them, so I continued adding follow-up intents for each input my program needed or for each yes-or-no question my chatbot asked. As I created the different branches of intents, things started to seem a bit complex, so at the top of this step I attached a diagram of what my intent inheritance looked like for the homework submission bot I created.
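
The branching in that diagram can be pictured as a small tree: each intent has a bot response plus follow-up intents keyed by the kind of reply ("yes", "no", or a custom input). Here is a toy Python sketch of walking such a tree; the intent text is illustrative, not my exact Dialogflow configuration:

```python
# Each node is (bot_response, {reply_type: child_node})
intent_tree = (
    "Do you want me to submit your homework?",
    {
        "yes": ("Which assignment should I submit?",
                {"custom": ("Okay, submitting now!", {})}),
        "no": ("Alright, let me know if you need anything else.", {}),
    },
)

def walk(tree, replies):
    """Follow a list of reply types down the tree, returning each bot response."""
    responses = [tree[0]]
    node = tree
    for reply in replies:
        node = node[1][reply]  # descend into the matching follow-up intent
        responses.append(node[0])
    return responses
```

Each process I automated got its own branch like this, which is why the full diagram grows quickly.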

Once all my intents were complete, I needed to figure out how to process and move input in Dialogflow for each of my custom intents. When I created each custom follow-up intent, I accounted for the possibility of the chatbot scraping the information my process needed out of the user's response. Dialogflow makes this very easy: I only needed to highlight the section of each sample training phrase that contains the desired information and assign those values to a new "entity", which is essentially just an argument. The entity you use to capture and store these values can be one of many types, but in most cases you will want the "any" type, as it is the most generic*. There is an image above of the page I saw when creating my entities for inputs.
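
Conceptually, highlighting part of a training phrase is like defining a capture pattern over the user's reply. A toy Python analogue using a regex with a named group, which is my own illustration and not how Dialogflow is implemented:

```python
import re

def extract_entity(user_text, pattern):
    """Pull named values out of a user reply, similar in spirit to a Dialogflow entity.

    pattern is a regex with one or more named groups, e.g. (?P<assignment>...).
    Returns a dict of captured values, or an empty dict on no match.
    """
    m = re.search(pattern, user_text, flags=re.IGNORECASE)
    return m.groupdict() if m else {}
```

The captured dict plays the role of the entity values that get passed along as the process's input arguments.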

Once I gathered all the information I needed, I just left the final response from the chatbot blank for now, since I would add it in the integration step.

*Note: all your input arguments or "entities" that you scrape in Dialogflow must have the same variable names as the inputs that you created for your process, otherwise the connection will not work.

Step 5: Integration

Once I finished creating both my process automations and my chatbot user interface, all I needed to do was integrate my processes with the chatbot so that they could be triggered by certain requests. Luckily, UiPath provides a website to easily link chatbots with processes without any actual coding required. The process of linking a Google Dialogflow chatbot to UiPath Orchestrator isn't too difficult, but I ended up using this documentation to walk me through connecting the two.

Once my chatbot and account were linked to the integration website, all I needed to do was map my processes to the appropriate intents. Usually, I mapped each process to the last intent in its branch, i.e. the point at which I had extracted all the information I needed. After mapping each process, I double-checked all the necessary parameters in both the chatbot and the code, ensuring they were named correctly.

I then went back to the Dialogflow portal, where my chatbot intents were all configured except for the final response, and inserted the output from my processes into each intent. After connecting UiPath and Dialogflow, I could transfer a process's output into the chatbot by calling the output variable name in the response, enclosed in the "<" and ">" symbols.

For reference, the output from my Homework Submission process is named "Complete", so in Dialogflow the response field contains the text "<Complete>".
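
Conceptually, this placeholder works like simple string templating: the integration substitutes each bracketed name in the response with the matching process output. A minimal Python sketch of that substitution, where the function itself is hypothetical but the "Complete" variable comes from my example above:

```python
import re

def fill_response(template, outputs):
    """Replace every <name> placeholder with the matching value from outputs.

    Placeholders with no matching output are left untouched.
    """
    return re.sub(r"<(\w+)>",
                  lambda m: str(outputs.get(m.group(1), m.group(0))),
                  template)
```

This is also why the variable names must match exactly: a misspelled placeholder simply never gets substituted.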

I did this for each of my processes, making sure everything was spelled and placed correctly, because otherwise the chatbot won't print out the right output.

Finally, once I had finished all of the integration on the portal and mapped each process, assuming everything was set up correctly, I was ready to move to the testing tab to try out my chatbot in a browser tab!

Step 6: Tips

Here are some issues I ran into and some tips that I discovered whilst testing and creating my code. I hope you find them to be useful, and if you have any questions about this project, go ahead and comment them down below--I will do my best to answer them.

Process Creation

- If you are trying to extract structured data from a webpage and the activity is not being created, try restarting your computer to clear the data/cache so that the function works again

- There are many different types of activities, and therefore many variable types that don't normally exist in your average programming language; you can see the variable type an activity needs by hovering over each field

- If you are having trouble getting text from a webpage, you can edit the selectors for a scrape function in the inspector tab on the right of the process when you select the activity

- Since UiPath is built largely using VB.NET, you can look up the scripts or variable type conversions you need by researching the language itself and placing code inside the input fields of each activity
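
On the selectors tip: a UiPath selector is just a small XML fragment describing the target UI element. A hypothetical selector for a submit button on a webpage might look like the fragment below; the app, title, and button name are made-up examples, and loosening attributes (or replacing parts with the * wildcard) in the selector editor is often how you fix a scrape that stops working:

```xml
<html app='chrome.exe' title='Submission Portal *' />
<webctrl tag='BUTTON' aaname='Submit Assignment' />
```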

If you made it this far, thank you! I once again want to give a big shoutout to Ms. Berbawy for providing me with a platform to create this project in her Principles of Engineering class.
