Introduction: Event-Driven Programming in FTC
This year, our team has done a great deal of work with event-driven software development for our robot. This approach has allowed us to develop accurate autonomous programs and even repeatable tele-op actions. Because the software work it calls for is complex, we decided to share the knowledge we've gained developing event-driven code for FTC robots.
Step 1: What Is Event-Driven Programming?
In general terms, event-driven programming, according to Techopedia, is the development of programs that respond to user input. In this sense, many programs are event-driven, including a team's tele-op program, which relies on input from a human-operated controller to perform any action. In the context of our team's work, however, event-driven programming means creating software from a variety of inputs: we record events based on the inputs of controllers and sensors, queue those events to a file, and then use that file to rerun the recorded events.
This method of developing programs for our robot has several advantages:
- It allows us to create accurate autonomous programs. Since the software is created in real time while the robot performs the action, the sensor values collected come directly from the original run and are therefore very accurate.
- It allows us to create autonomous programs quickly. Making an autonomous program is as simple as recording a series of events and adjusting them as necessary.
- It allows us to create automatic processes for tele-op. For repeated actions in tele-op, event-driven programming lets us record those actions and assign the event to a button for the driver-controlled periods of matches. These automated events can be influenced by sensors so that they execute accurately.
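As a sketch of that last idea, here is one way a recorded event might be bound to a button in a tele-op loop. Everything here is hypothetical: the `RecordedEvent` class is not from our actual code, and the boolean stands in for a gamepad button, which in a real FTC op mode would be a check of `gamepad1.a` inside the op mode loop.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a recorded tele-op event.
class RecordedEvent {
    private final List<String> actions = new ArrayList<>();

    void record(String action) {
        actions.add(action);
    }

    // Replaying re-issues each recorded action in order.
    List<String> replay() {
        return new ArrayList<>(actions);
    }
}

public class TeleOpMacroDemo {
    public static void main(String[] args) {
        RecordedEvent scoringMacro = new RecordedEvent();
        scoringMacro.record("raiseLift");
        scoringMacro.record("openClaw");

        boolean buttonA = true; // stands in for gamepad1.a in a real op mode
        if (buttonA) {
            // One button press replays the whole recorded sequence.
            System.out.println(scoringMacro.replay());
        }
    }
}
```

The point of the sketch is the shape of the idea: a whole sequence of driver actions collapses into a single button press.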
Step 2: Logic Flow of Event-Driven Programming
The following depicts the logical flow of an event-driven program: red depicts the creation of an event, and blue depicts the calling of the event. For creating an event, a sequence of inputs is taken in through robot action and recorded as events; these events are written to a file. For calling an event, that file is read, and the inputs are sent to an event processor to turn the file code into robot action.
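To make the "written to a file" step concrete, a recorded event file might look something like the following. The field names here are illustrative, not our actual schema:

```json
[
  { "type": "DRIVE",    "distanceInches": 24.0, "power": 0.5 },
  { "type": "IMU_TURN", "targetHeading": 90.0,  "power": 0.3 },
  { "type": "STRAFE",   "distanceInches": 12.0, "power": 0.5 }
]
```

Replaying the run is then just a matter of reading this file back and handing each entry to the event processor in order.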
Step 3: Event Creator
Event creators are used to document actions, or “events,” based on a variety of sensors and buttons. As the robot performs actions on the field, an event creator class creates an event for each of those actions in parallel, using the event types defined in an event class. After being created, each event is placed in a queue of events in the events class: the first event takes the top spot, the next event takes the top spot and pushes down any events under it, and this continues until the program stops. When the program stops, the events are written out to a human-readable file, such as a JSON file, which can then be used to refine autonomous routines.
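A minimal sketch of such an event creator and queue in plain Java follows. The `Event` and `EventQueue` classes here are hypothetical simplifications, not our actual event class:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical event: a type plus the sensor value recorded for it.
class Event {
    String type;
    double value;

    // Reset the event so it can be reused for the next action.
    void truncate() {
        type = null;
        value = 0.0;
    }
}

// Queue of events in the order they occurred.
class EventQueue {
    private final Deque<Event> events = new ArrayDeque<>();

    void queue(Event e) {
        // Copy the fields so truncating the working event later
        // does not erase what was queued.
        Event copy = new Event();
        copy.type = e.type;
        copy.value = e.value;
        events.addLast(copy);
    }

    int size() {
        return events.size();
    }
}

public class EventCreatorDemo {
    public static void main(String[] args) {
        EventQueue queue = new EventQueue();
        Event working = new Event();

        // Set up the parameters for the event: a turn using an IMU sensor.
        working.type = "IMU_TURN";
        working.value = 90.0; // heading read from the IMU at record time
        queue.queue(working);

        // Truncate (reset) the event so it can record the next action.
        working.truncate();
    }
}
```

The same working event object is reused for every action, which is why the truncate step matters: without it, leftover parameters from one action would bleed into the next.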
The example code above sets up the parameters for the event, in this case a turn using an IMU sensor. We then add the event to the event queue. Finally, we truncate the event, which essentially resets it so that it can be reused to queue future events.
Step 4: Event Processor
The event class takes the human-readable file produced by the event creator and carries out whatever each queued event specifies by calling methods defined in an event processor class. The event processor class then tells the robot which event to replay. Whether it is a simple "drive forward" event or a complex sequence of distances, turns, and strafes, the processor will replay any event given to it. This process, called Memory Replay, is very useful during autonomous: a team can record sensor values and tele-op actions before the match, then simply replay the events in autonomous. It makes an autonomous program 100% configurable through a single file. Once the event creator and processor are established, a team can swap autonomous routines simply by changing the human-readable file.
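A sketch of what the processing side might look like, assuming the events have already been read out of the JSON file into objects. The `ReplayEvent` class and the logged action strings are hypothetical; in real robot code the branches would drive motors rather than append to a log:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical event read back from the human-readable file.
class ReplayEvent {
    final String type;
    final double value;

    ReplayEvent(String type, double value) {
        this.type = type;
        this.value = value;
    }
}

// Turns queued events back into robot actions.
class EventProcessor {
    // Log of actions taken, standing in for real motor commands.
    final List<String> actionLog = new ArrayList<>();

    void process(ReplayEvent e) {
        // The case statement: decide what kind of event this is.
        switch (e.type) {
            case "IMU_TURN":
                // Re-run the turn using the recorded heading.
                actionLog.add("turn to " + e.value + " degrees");
                break;
            case "DRIVE":
                actionLog.add("drive " + e.value + " inches");
                break;
            default:
                actionLog.add("unknown event: " + e.type);
        }
    }
}

public class EventProcessorDemo {
    public static void main(String[] args) {
        EventProcessor processor = new EventProcessor();
        // Events as they would appear after reading the JSON file.
        processor.process(new ReplayEvent("IMU_TURN", 90.0));
        processor.process(new ReplayEvent("DRIVE", 24.0));
        System.out.println(processor.actionLog);
    }
}
```

Because the processor only ever sees event objects, swapping autonomous routines really is just swapping the file that feeds it.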
The example above first checks the JSON file for an event, then uses a case statement to determine what kind of event it is, in this case a turn using the IMU. Once it identifies the IMU-turn event, it processes it, which usually means re-running the code the event came from, with the event's recorded variables passed in to replicate the original action.