Introduction: Animatronics Face of Sasuke Uchiha
This is an animatronic face of Sasuke Uchiha, one of the main characters from Naruto. I tried to imitate a few of his abilities in response to sensors, similar to how they work in the anime. He has different eyes with different abilities that he can activate: using an ultrasonic sensor, if someone comes closer than a set distance, or if someone grabs his sword, he activates the Sharingan. For the sword, I've used a photoresistor paired with an LED. If both of these conditions are triggered, he activates the Mangekyo Sharingan. If the touch sensor detects a touch, he shoots a fireball through his mouth. With each of these interactions he says a line of dialogue, and his mouth moves in sync with the voice.
Supplies
Based on the build described in the steps below, you will need roughly:
- Arduino (plus breadboards and jumper wires)
- 3 servo motors (two for the eyes, one for the mouth)
- Ultrasonic sensor
- Photoresistor and an LED (for the sword)
- Touch sensor
- Red LED (for the fireball effect)
- DFPlayer Mini MP3 module, SD card, and a speaker
- 3D-printed frame
Step 1: Character Selection and the Interactions
To build an animatronic we need to know the basic idea behind it. Animatronics is the use of robotic devices to create lifelike movement in inanimate objects. So first we have to decide which character we are building and what the interactions will be. I selected Sasuke Uchiha as my character, and he has the following interactions:
- Introduction: The character introduces himself when anyone comes in front of him.
- Sharingan: On a certain sensor trigger he activates his Sharingan eyes.
- Mangekyo Sharingan: On another sensor trigger he activates his Mangekyo Sharingan.
- Fireball Jutsu: On a third sensor trigger he charges up a fireball and shoots it through his mouth.
Along with the interactions, if we can make him say some dialogue it will be more fun and realistic! For this we have to sync the mouth movements with the voice lines.
Step 2: Select the Hardware for Each Interaction
Once you are done with the selection of your character and its interactions, you have to select the hardware you will employ for each of them.
- Sensor selection:
- For the introduction part I used an ultrasonic sensor, which returns the distance of an object from the sensor. So if I set a threshold and someone comes closer than that value, it triggers the introduction (a sketch that reads all three sensors follows this list).
- For the Sharingan and Mangekyo Sharingan I used the photoresistor and the ultrasonic sensor in combination to activate those eyes.
- For the fireball I chose a touch sensor to depict someone hitting him, and in return he shoots the fireball.
- Actuator selection:
- To turn the eyeballs you need to rotate through a specific angle, so a servo motor is a good choice.
- The mouth likewise needs a to-and-fro movement, for which a servo motor is also a good fit.
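To make the sensor side concrete, here is a minimal sketch that reads all three sensors and prints their values over serial. All pin numbers are hypothetical and should be matched to your own wiring.

```cpp
// Minimal sensor test; all pin numbers are hypothetical.
const int TRIG_PIN  = 12;  // ultrasonic trigger
const int ECHO_PIN  = 13;  // ultrasonic echo
const int LDR_PIN   = A0;  // photoresistor in a voltage divider
const int TOUCH_PIN = 7;   // touch sensor module, digital output

long readDistanceCm() {
  // Fire a 10 us pulse and time the echo; ~58 us round trip per cm.
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000); // 30 ms timeout
  return duration / 58;                           // returns 0 on timeout
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(TOUCH_PIN, INPUT);
  Serial.begin(9600);
}

void loop() {
  Serial.print("distance(cm): ");  Serial.println(readDistanceCm());
  Serial.print("light(0-1023): "); Serial.println(analogRead(LDR_PIN));
  Serial.print("touch: ");         Serial.println(digitalRead(TOUCH_PIN));
  delay(250);
}
```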
Step 3: Designing the Structure
Now that you have the hardware figured out, it's time to design the structure that will hold the entire animatronic face. I designed a frame that houses all three motors, the breadboards, and the Arduino, then had the frame 3D printed and assembled it. Beyond this, you can use any material to fix the rest of the required parts onto the structure.
Step 4: The Algorithm for Logic
We are all set with the hardware. Now it's time for the logic of our interactions.
- Initially we turn the eye servos in response to the photoresistor and the ultrasonic sensor.
- Then we add the touch response to the code.
- Once that is done we integrate the voice lines for each sensor trigger.
- Finally we sync the mouth movement to the voiceover (a skeleton of the resulting loop is sketched below).
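Here is a minimal sketch of the loop this plan builds toward, with the action routines as hypothetical placeholders that Steps 5 and 6 fill in. The threshold values are assumptions to calibrate yourself; note that the AND case must be checked before the OR case, otherwise the Mangekyo branch would never run.

```cpp
const int LDR_PIN   = A0;            // hypothetical pins
const int TOUCH_PIN = 7;
const int DIST_THRESHOLD_CM = 10;    // from Step 5
const int LIGHT_THRESHOLD   = 400;   // hypothetical; calibrate for your lighting

long readDistanceCm() { return 999; }     // replace with the Step 2 version
void activateSharingan()         { /* Step 5 */ }
void activateMangekyoSharingan() { /* Step 5 */ }
void fireballJutsu()             { /* Step 5 */ }

void setup() { pinMode(TOUCH_PIN, INPUT); }

void loop() {
  long dist  = readDistanceCm();
  bool close = dist > 0 && dist < DIST_THRESHOLD_CM;   // ultrasonic trigger
  bool sword = analogRead(LDR_PIN) < LIGHT_THRESHOLD;  // photoresistor trigger

  if (digitalRead(TOUCH_PIN) == HIGH)  fireballJutsu();
  else if (close && sword)             activateMangekyoSharingan();  // AND: both
  else if (close || sword)             activateSharingan();          // OR: either
}
```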
Step 5: Low-Level Explanation of the Logic
- Eye activation:
- For activating the Sharingan we have two sensor triggers with an OR condition, which means the Sharingan activates if either the photoresistor or the ultrasonic sensor is triggered. The ultrasonic sensor triggers if it detects anything closer than 10 cm; similarly, the photoresistor triggers if its voltage reading falls below the threshold value.
- For the Mangekyo Sharingan we have an AND condition, which means the Mangekyo activates only when both of the above conditions are triggered.
- Since the sensors are not highly reliable, we need some delay after one of the eyes is activated. Otherwise, if the sensor triggers once and then fires again before the voice line and eye activation have finished, the previous activity is interrupted and the new one starts, which is not the ideal scenario (the sketch after this list implements this lockout with a blocking delay).
- Fireball:
- To use the fireball, the touch sensor must sense a touch. Once that happens, the voice line plays and the mouth opens with a red LED turning on inside it. After a 5-second delay the LED turns off and the mouth shuts.
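Here is a minimal sketch of these action routines, assuming two eye servos, a mouth servo, and a red LED on hypothetical pins, with hypothetical angles. The blocking delay() inside each routine doubles as the lockout described above, since loop() cannot read the sensors again until the routine returns.

```cpp
#include <Servo.h>

Servo leftEye, rightEye, mouthServo;
const int RED_LED_PIN = 4;                  // hypothetical pin for the mouth LED

void setup() {
  leftEye.attach(9);                        // hypothetical servo pins
  rightEye.attach(8);
  mouthServo.attach(6);
  pinMode(RED_LED_PIN, OUTPUT);
}

void activateSharingan() {
  // sayLine(1);                            // voice line first (see Step 6)
  leftEye.write(90);  rightEye.write(90);   // hypothetical sharingan angles
  delay(5000);                              // lockout: new triggers are ignored
  leftEye.write(0);   rightEye.write(0);    // back to the normal eyes
}

void activateMangekyoSharingan() {
  // sayLine(2);
  leftEye.write(180); rightEye.write(180);  // hypothetical mangekyo angles
  delay(5000);
  leftEye.write(0);   rightEye.write(0);
}

void fireballJutsu() {
  // sayLine(3);
  mouthServo.write(60);                     // open the mouth (hypothetical angle)
  digitalWrite(RED_LED_PIN, HIGH);          // the "fireball" glows
  delay(5000);                              // hold for 5 seconds
  digitalWrite(RED_LED_PIN, LOW);
  mouthServo.write(0);                      // shut the mouth
}

void loop() {}                              // call the routines from your triggers
```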
Step 6: Integrating the Voice Lines
You can use the DFRobotDFPlayerMini library (DFRobotDFPlayerMini.h) to play the audio. To play a specific clip for a specific sensor trigger, we pass an integer to the play function, and for every integer it reads the correspondingly numbered audio file from the SD card. The character first says his dialogue and then performs the action, so we have him play a specific voice line and place the motor commands after that.
Now, to sync the mouth movements, we use a while loop with a flag that tells us whether the MP3 player is running. While the MP3 player is playing, its BUSY pin is pulled LOW (it sits HIGH when idle), so we can use this pin as the flag and move the mouth for as long as it reads LOW. This syncs the voice with the mouth movement.
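Here is a minimal sketch of that play-and-sync routine using the DFRobotDFPlayerMini library over SoftwareSerial. The pins, angles, and timings are hypothetical; the key fact is that the BUSY pin reads LOW while a track is playing.

```cpp
#include <SoftwareSerial.h>
#include <DFRobotDFPlayerMini.h>
#include <Servo.h>

SoftwareSerial mp3Serial(10, 11);   // RX, TX to the DFPlayer Mini (hypothetical)
DFRobotDFPlayerMini player;
Servo mouthServo;
const int BUSY_PIN = 3;             // hypothetical pin wired to the BUSY output

void setup() {
  mp3Serial.begin(9600);
  player.begin(mp3Serial);          // init the player over software serial
  player.volume(25);                // volume range is 0-30
  mouthServo.attach(6);             // hypothetical mouth servo pin
  pinMode(BUSY_PIN, INPUT);
}

// Play track N (e.g. 0001.mp3 on the SD card) and flap the mouth until done.
void sayLine(int track) {
  player.play(track);
  delay(200);                             // let the module assert the busy flag
  while (digitalRead(BUSY_PIN) == LOW) {  // LOW = still playing
    mouthServo.write(40);                 // open (hypothetical angle)
    delay(150);
    mouthServo.write(0);                  // close
    delay(150);
  }
}

void loop() {}                      // call sayLine(n) from your trigger logic
```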
You can refer to the following link for more info on the DFPlayer Mini:
https://wiki.dfrobot.com/DFPlayer_Mini_SKU_DFR0299
Step 7: Wiring the Circuit
Now that we have the coding done, it's time to wire the circuit according to the pins declared in the code. To keep the setup clean, choose the pins for the components so that the wires do not tangle into each other and make a mess; for example, components on the upper half of the face can be connected to the higher-numbered pins like 10, 11, and 12.
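One tidy way to do this is to collect every pin assignment in a single block at the top of the sketch, so the wiring plan and the code agree in one place. The numbers below are hypothetical and follow the upper-half/higher-pins rule of thumb:

```cpp
// Hypothetical pin map: upper-half components on the higher-numbered pins.
const int TRIG_PIN      = 12;  // ultrasonic trigger (upper half)
const int ECHO_PIN      = 13;  // ultrasonic echo    (upper half)
const int MP3_RX_PIN    = 10;  // SoftwareSerial to the DFPlayer
const int MP3_TX_PIN    = 11;
const int LEFT_EYE_PIN  = 9;   // eye servos (upper half)
const int RIGHT_EYE_PIN = 8;
const int TOUCH_PIN     = 7;
const int MOUTH_PIN     = 6;   // mouth servo (lower half)
const int RED_LED_PIN   = 4;   // fireball LED in the mouth
const int BUSY_PIN      = 3;   // DFPlayer BUSY output
const int LDR_PIN       = A0;  // photoresistor (analog)
```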
Step 8: Testing
Now you have your animatronic ready, and it's time to test all the features you have combined, like eye movement, mouth movement, and audio sync. Try to cover all possible test cases and handle every scenario in which it may fail; this takes multiple rounds of testing. Also, to avoid facing a ton of bugs at the final testing, write a block of code and test it as soon as it's written. That way, once you combine all the code, the only part you need to debug is the integration, not the individual hardware.