In order to properly fake thermal imaging video, I had to develop a decent understanding of what it was and how it worked. The electromagnetic spectrum is divided into several regions; in order of decreasing wavelength they are radio, microwave, infrared, visible light, ultraviolet, x-ray, and gamma rays.
There are basically three ways for a camera to "see in the dark". Light amplification is the most commonly used method. It's essentially the camera's gain: when the light source is faint, the camera amplifies the signal from what little visible light it collects. Also common on consumer camcorders is the nightshot feature, which lights the area in front of the camera with infrared light that human eyes can't see but the camera can, and then converts what it sees into video the same way it normally would. These images tend to be pretty desaturated with a tint of green. Think Blair Witch Project, and you'll probably have the right idea.
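If you wanted to fake that nightshot look in post, a rough sketch (my own approximation, not anything from an actual camcorder's pipeline) is just: boost the gain on a dark frame, desaturate it, and tint it green. The filename and gain value below are placeholders.

```python
# Approximate a consumer camcorder's "nightshot" look on a single frame.
# Uses Pillow and NumPy; input/output filenames are just placeholders.
import numpy as np
from PIL import Image

def nightshot_look(frame: Image.Image, gain: float = 4.0) -> Image.Image:
    """Gain up a dark frame, desaturate it, and tint it green."""
    gray = np.asarray(frame.convert("L"), dtype=np.float32)

    # Gain: amplify the faint signal, clipping whatever blows out to white.
    gray = np.clip(gray * gain, 0, 255)

    # Tint: put most of the energy into the green channel so the result
    # reads as the familiar desaturated green night-vision image.
    out = np.zeros((*gray.shape, 3), dtype=np.uint8)
    out[..., 0] = (gray * 0.4).astype(np.uint8)  # red
    out[..., 1] = gray.astype(np.uint8)          # green
    out[..., 2] = (gray * 0.4).astype(np.uint8)  # blue
    return Image.fromarray(out)

if __name__ == "__main__":
    nightshot_look(Image.open("dark_frame.png")).save("nightshot_frame.png")
```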
That leaves thermal imaging, which uses cameras that don't rely on CCDs (charge-coupled devices) at all. Instead, most thermal cameras use a technology called FLIR (forward-looking infrared), which captures thermal radiation and builds an image from the temperatures it detects. Those black-and-white images are then assigned color values, producing the easily recognizable thermal images we all know. FLIR isn't the only thermal imaging system, but I'm guessing it's the most common, and it's the one I studied to understand the process.
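That color-assignment step is the part you can imitate directly: treat a grayscale frame as relative temperature and push each value through a palette lookup. Here's a minimal sketch of that idea; the palette anchors are my own rough approximation of an "ironbow"-style ramp (dark blue through purple, red, yellow, white), not an actual FLIR palette, and the filenames are placeholders.

```python
# False-color a grayscale "temperature" frame with a thermal-style palette.
import numpy as np
from PIL import Image

# Palette anchor points: (gray level 0-255, (R, G, B)).
# These are hypothetical values chosen to look roughly like a thermal ramp.
ANCHORS = [
    (0,   (0, 0, 0)),
    (60,  (32, 0, 128)),
    (120, (160, 0, 128)),
    (180, (255, 96, 0)),
    (230, (255, 220, 0)),
    (255, (255, 255, 255)),
]

def thermal_palette() -> np.ndarray:
    """Build a 256x3 lookup table by interpolating between the anchors."""
    levels = np.array([a[0] for a in ANCHORS])
    colors = np.array([a[1] for a in ANCHORS], dtype=np.float32)
    lut = np.stack(
        [np.interp(np.arange(256), levels, colors[:, c]) for c in range(3)],
        axis=1,
    )
    return lut.astype(np.uint8)

def false_color(frame: Image.Image) -> Image.Image:
    """Map each grayscale (relative temperature) value to a palette color."""
    gray = np.asarray(frame.convert("L"))
    return Image.fromarray(thermal_palette()[gray])

if __name__ == "__main__":
    false_color(Image.open("thermal_gray.png")).save("thermal_color.png")
```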