Introduction: ESP-12 Infra Red Blaster
An infra-red remote control blaster using the ESP8266.
It transmits remote control codes received over the web and supports multiple output devices.
A simple web page is built in, mainly for testing.
Normal use is via POST messages, which can come from web pages or from IFTTT / Alexa voice control.
It supports an Amazon Echo/Dot activate detector to mute or quieten devices as soon as the activate word is spoken.
Commands are either single commands or sequences. Sequences can be stored as named macros, which can then be used as commands or in other sequences.
Recent history and the list of macros may be obtained via the web interface.
It supports OTA uploading of new firmware and uses the WifiManager library for initial wifi set-up.
Step 1: Hardware
Uses the following components:
- ESP-12F module
- 3.3V regulator (MP2307 mini buck regulator)
- MOSFET switches (AO3400)
- Infra Red emitter (3mm)
- Light Dependent Resistor GL2258 (Optional Alexa activity detector)
- Decoupling capacitor (20uF)
- USB female socket (preferably solder-friendly, with sleeve)
- 3 pin IC socket strip for Alexa detector
- Mechanical parts (can be 3D printed)
The circuit can be assembled into an ESP-12F project box.
- Attach the regulator to the USB connector and insert into the box
- Make up the IR driver on a small piece of vero board (3 wires: +5V, 0V, gate input)
- Connect the IR driver to the USB +5V and 0V
- If using the Alexa detector, insert the 3 pin IC socket into the project box and connect it to +3.3V and 0V, with a wire for the detector input
- Make up the ESP-12F with 2.2K from GPIO15 to GND, EN to Vdd, 4.7K from GPIO13 to Vdd, the Alexa detector input to GPIO13, the IR driver to GPIO14, and 0V and Vdd to the 3.3V supply
- Make up the Alexa detector and support buffer if required
Note it can be easier to program the ESP-12F first if you have some sort of serial programming facility, or a temporary breadboard arrangement, to connect to the serial pins.
Subsequent programming can be done using the built in OTA update.
Step 2: Software
The IR Blaster uses an Arduino sketch available on GitHub.
This needs to be adjusted to suit local conditions and then compiled in an ESP8266 Arduino environment.
The libraries below are included in the git; any others the sketch needs are standard or can be added through the library manager.
BitTx (included in Git)
BitMessages (included in Git)
Items in the sketch to be changed include
- Authorisation code for web access (AP_AUTHID)
- WiFi manager password (WM_PASSWORD)
- Firmware OTA password (update_password)
- New IR devices / button codes (see later)
Once this is done, the sketch should first be uploaded using a conventional serial upload.
As SPIFFS is used, the flash filesystem should be prepared by installing and using the Arduino ESP8266 Sketch Data Upload tool. This will upload the data folder as the initial SPIFFS content.
When the device can't connect to the local network (as will happen the first time), WifiManager will create an access point (192.168.4.1). Connect to this network from a phone or tablet, then browse to 192.168.4.1. You will get a web interface for connecting to the local wifi, and subsequent accesses will use this connection. If the local network changes, the device will switch back to this config mode.
Subsequent updates may be done by compiling an export binary in the Arduino environment and then accessing the OTA interface at ip/firmware.
Step 3: Add Device / Button Codes
The device and button definitions are included in the BitDevices.h file in the BitMessages library. As supplied this contains details for the remotes I use. For other remotes new entries can be added.
As supplied it has 10 devices defined (NUMBER_DEVICES); 5 are fully defined and the other 5 are just placeholders, which can be used as starting points for new entries.
Each device definition consists of 2 parts.
- A char* table defining the buttons/codes for the remote, with each entry being a name/code pair. The last entry should have a NULL name. The names can be anything meaningful; they are used to refer to the button press to send. The codes are normally hex strings, although escapes can be used to supply specific raw codes and bit timings if required.
- An entry in the devices structure giving the basic characteristics and a reference to the button table to use.
Most remotes belong to one of 3 protocol categories (nec, rc5 and rc6). nec is probably the most common and has a simple header structure and bit timing. There is a slight variant of this which differs only in the header pulse timing. rc5 and rc6 are protocols defined by Philips but also used by some other manufacturers. They are a little more complicated and rc6 in particular has a special timing requirement for one of the bits.
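To make the "simple header structure and bit timing" concrete, here is a sketch of how a hex code turns into mark/space durations under the standard published NEC timing. This is an assumption based on the public NEC specification, not taken from the BitMessages library, and the MSB-first bit order here is just one convention (real remotes vary):

```python
# Standard NEC timing (an assumption from the public spec, not from
# the BitMessages library): 9 ms header mark, 4.5 ms header space,
# then each bit is a 562 us mark followed by a 562 us space for 0
# or a 1687 us space for 1, plus a final 562 us stop mark.

NEC_HDR_MARK, NEC_HDR_SPACE = 9000, 4500   # microseconds
NEC_BIT_MARK = 562
NEC_ZERO_SPACE, NEC_ONE_SPACE = 562, 1687

def nec_pulses(code, bits=32):
    """Return alternating mark/space durations (us) for a NEC code.

    Bit order is MSB-first here purely for illustration.
    """
    pulses = [NEC_HDR_MARK, NEC_HDR_SPACE]
    for i in range(bits - 1, -1, -1):
        pulses.append(NEC_BIT_MARK)
        pulses.append(NEC_ONE_SPACE if (code >> i) & 1 else NEC_ZERO_SPACE)
    pulses.append(NEC_BIT_MARK)             # stop bit
    return pulses

p = nec_pulses(0x20DF10EF)   # a typical 32-bit code
print(len(p))                # 2 header + 64 bit entries + 1 stop = 67
```

The rc5/rc6 protocols use Manchester-style encoding instead, so their pulse trains are built differently; this sketch only covers the nec case.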
To capture the codes for a new remote I use an IR receiver (TSOP) of the type commonly used in plug-in remote receivers. This does the basic decoding and gives a logic-level output. They normally come with a 3.5mm jack carrying +5V, GND, and DATA connections. I sacrificed one, shortened the lead, and put it through an inverting 3.3V buffer to feed a GPIO pin on a Raspberry Pi.
I then use a python tool, rxir.py (in the git tools folder), to capture codes. To make it easier to capture a large number of buttons, the tool uses text definition files, each just listing the names of the buttons in one group on the remote. For example, for a new Sony remote one might set up 3 text files called sonytv-cursor, sonytv-numbers, and sonytv-playcontrols, each containing the relevant button names. The tool prompts for the device (sonytv), the section (cursor), and the protocol to use (nec, nec1, rc5, rc6). It then prompts sequentially for each button press and writes the results to a sonytv-ircodes file. Sections can be repeated if required to check that the captures are good. Entries from the .ircodes file can then be edited into the BitDevices tables.
Step 4: Web Control and Macros
The basic web control is either a single GET or a JSON POST, which may contain a sequence.
The GET has 6 parameters:
- auth - containing the authorisation code
- device - the name of the remote device
- parameter - the name of the button
- bits - an optional bit count
- repeat - an optional repeat count
- wait - a delay in milliseconds before the next command can be executed.
The device can also be 'null' to get just a delay, 'macro' to use the macro referred to by the parameter, or 'detect' to use the Alexa detect feature (see later).
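As an illustration of the GET form, the snippet below builds such a request from the parameters listed above. The IP address, port, /ir path, device name, and auth value are all placeholders, not taken from the sketch; check the sketch source for the real endpoint:

```python
# Hypothetical example: the host, port and /ir path are assumptions;
# the parameter names come from the GET parameter list above.
import urllib.parse

BLASTER = "http://192.168.0.100:80"   # IP of the blaster (placeholder)
AUTH = "1234"                         # your AP_AUTHID value (placeholder)

params = urllib.parse.urlencode({
    "auth": AUTH,
    "device": "yamahaAV",    # a remote defined in BitDevices.h (assumed name)
    "parameter": "volumeup", # button name from that device's table
    "repeat": 2,             # optional: send the code twice
    "wait": 100,             # optional: 100 ms before the next command
})
url = f"{BLASTER}/ir?{params}"
print(url)
# urllib.request.urlopen(url)  # uncomment to actually send it
```

Using device=macro with parameter set to a macro name, or device=null with just a wait, follows the same pattern.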
The POST consists of a JSON structure holding the authorisation code and a sequence of commands, each command using the same fields as the GET parameters.
The sequence can be any length, and devices may be macro references.
The same structure may be used to define macros. Just include macro:"macroname" at the top level, e.g. after auth.
Macros can be deleted by defining them with no "commands".
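A hedged sketch of such a POST is below. The /irjson path and the exact top-level field names are assumptions pieced together from the description above, so verify them against the sketch before relying on them:

```python
# Sketch of a JSON POST sequence; path and field names are assumptions.
import json
import urllib.request

payload = {
    "auth": "1234",                  # AP_AUTHID value (placeholder)
    "commands": [
        {"device": "tv", "parameter": "power", "wait": 500},
        {"device": "macro", "parameter": "soundon"},  # macro reference
        {"device": "null", "wait": 1000},             # pure delay
    ],
}
# To define (or redefine) a macro, name it at the top level:
#   payload["macro"] = "watchtv"
# To delete a macro, send the same structure with no "commands".

req = urllib.request.Request(
    "http://192.168.0.100/irjson",   # host and path are placeholders
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment to actually send it
print(len(payload["commands"]))
```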
Step 5: Alexa Voice Control Using IFTTT
The simplest way to use the IR Blaster with Alexa is to use IFTTT as a gateway.
First, port forward the port used by your blaster in your router so it is accessible from the internet. It can be useful to use a DNS service like FreeDNS to give your router's external IP a name, which makes things easier to handle if this IP changes.
Set up an IFTTT account and enable the Maker Webhooks channel and the Alexa channel. You will need to log on to the Amazon site when you do this to enable the IFTTT access.
Create an IF trigger using the IFTTT Alexa channel, choose the action based on a phrase, and enter the phrase you want (e.g. volume up).
Create the action by choosing the Maker Webhooks channel and enter the blaster URL, including the auth code and the macro to run, into the URL field.
This action will be sent to the IR blaster, where it will try to execute the macro volumeup. Specific devices/buttons can be used here if wanted, but I find it better to define and use macros, because the action sequence can then be changed easily just by redefining the macro.
A separate IFTTT applet is needed for each command.
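The webhook URL follows the GET format from Step 4. As a hedged illustration (the DNS name, port, /ir path, and auth value below are all placeholders):

```python
# Placeholder values throughout; the /ir path and parameter names are
# assumptions based on the GET format described in Step 4.
host = "myrouter.freedns.example:8080"   # DNS name pointing at your router
auth = "1234"                            # your AP_AUTHID value
url = f"http://{host}/ir?auth={auth}&device=macro&parameter=volumeup"
print(url)
```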
Step 6: Native Alexa Voice Skill
Instead of IFTTT, one can build a custom skill in the Alexa developer console. This centralises all the processing in one place and means you don't have to create a separate applet for each button.
You need to register as an Amazon Alexa developer, and you need to register with the Amazon AWS console Lambda service. You will also need to look at the tutorials to understand the process a bit.
On the Alexa developer side you need to create a new custom skill, enter its trigger word, and create a list of command phrases like volume up, guide, etc.
Alexa then sends the phrase to a function running on the Lambda service, which interprets the phrase and makes a URL call to the IR blaster to action it.
I have included the Alexa intent schema and the Lambda function I use in the git. The URL will need to be modified to reference the appropriate IP and have the right authorisation. To keep it simple, the Lambda function calls a macro named after a space-stripped, lower-case version of the phrase. It also tries to remove the trigger keyword, which can sometimes be included; e.g. 'blaster VOLUME up' will call a macro called volumeup if the trigger word was blaster.
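The phrase-to-macro step above can be sketched as follows. This is not the function from the git, just a minimal illustration of the same idea; the URL, auth value, trigger word, and the event slot layout are all assumptions:

```python
# Minimal sketch of the phrase-to-macro mapping described above.
# All names and the event structure are assumptions, not the git code.
import urllib.request  # only needed if the call at the end is enabled

BLASTER_URL = "http://myrouter.freedns.example:8080/ir"  # placeholder
AUTH = "1234"                                            # placeholder
TRIGGER = "blaster"                                      # skill trigger word

def phrase_to_macro(phrase):
    """Lower-case the phrase, drop the trigger word, strip spaces."""
    words = [w for w in phrase.lower().split() if w != TRIGGER]
    return "".join(words)

def handler(event, context=None):
    # A real skill pulls the phrase out of the intent slots; this
    # slot layout is an assumption for illustration only.
    phrase = event["request"]["intent"]["slots"]["phrase"]["value"]
    macro = phrase_to_macro(phrase)
    url = f"{BLASTER_URL}?auth={AUTH}&device=macro&parameter={macro}"
    # urllib.request.urlopen(url)  # uncomment to actually call the blaster
    return macro

print(phrase_to_macro("blaster VOLUME up"))  # → volumeup
```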
Step 7: Alexa Activate Detector
Although the Echo/Dot voice recognition is good, it can sometimes get confused if sound is playing from, say, a TV, unless you get close and speak loudly.
To improve this I added an activate detector to my Dot. As soon as the keyword (Alexa) is said, the ring of LEDs lights up. The detector feeds this into the blaster, which runs the alexaon macro to mute the TV; similarly, at the end of processing a command the lights go off and the alexaoff macro restores the sound.
The 'detect' command can also be used to turn this on and off. So, for example, I use the initial turnon macro to enable the detection and the turnoff macro to disable it. This can also be used within the action macros to support real mute and unmute commands, which would otherwise be problematic.
The physical detector is a light-dependent resistor, which the circuit supports. I mount mine on the Dot with a 3D-printed bracket.