Phenomenal Augmented Reality Allows Us to Watch How Things Are Watching Us!





In my childhood I discovered an interesting phenomenon: if I connected a light source to a sufficiently amplified television receiver, and waved the light around in front of a video camera, I could get the light to function as a 3D augmented reality display that would overlay the sightfield of the camera, as virtual information, on top of physical reality. I grew up at a time when technology was transparent and easy to understand. Radios and televisions used transparent electronic lamps called "vacuum tubes": you could see inside them, and they revealed all their secrets. I then witnessed a transition into an era of opacity, in which scientific principles were concealed inside integrated circuits with closed-source firmware. And while machines became more "secretive", we entered a new era in which they also began to watch us and sense our presence, yet reveal nothing about themselves (see a short essay I wrote about this metaphenomenon). I wanted to be able to sense sensors and see what they can see. So I invented something I called the PHENOMENAmplifier, a device that amplifies physical phenomena in a feedback loop of revelation.

It worked by video feedback, and because of the feedback loop, it solved one of the most difficult problems in augmented reality: alignment between the real and virtual worlds. As a result I was able to make artistic "lightpaintings" or "light drawings/graphings" as scientific visualizations in which the degree of visibility of each sampled point in space to a surveillance camera could itself be made visible. I called this "Metasensing", i.e. seeing what a camera can see (sensing sensors and sensing their capacity to sense). As a professor, I teach this to the students in my Wearable Computing and Augmented Reality class every year, but a number of people who have attempted to reproduce this interesting scientific result have had difficulty (it is somewhat tricky to get all the conditions right for the phenomenon to occur). Therefore I came up with a very simple way to teach and explain it: each student builds a very simple one-pixel camera, and a very simple 3D augmented reality system with a light bulb or LED, to learn about this interesting effect. Once understood, this effect has many uses both artistically and scientifically. See for example IEEE Consumer Electronics Magazine, 4(4), pp. 92-97, 2015.

Step 1: Get and Prepare the Materials

Get the materials together and prepare them for assembly:

  • Some black cardboard or other material to make a simple box camera;
  • Glue, scissors, tape, or the like, to make the box (or black paint to paint an existing box black);
  • A lens (or you can also just make a hole in the box to let light in, e.g. like a pinhole camera);
  • Solderless "breadboard" or a circuit board, soldering iron, etc.;
  • Wires and connectors;
  • A suitable op amp such as TLC271 or TLC272;
  • A transistor suitable to drive the light bulb or LED of your choice (I used a 2N6058 to drive a 50 watt light bulb, or a 2SD261 to drive a quarter-amp LED);
  • Heatsink, socket, and mica insulator if you're using the larger transistor;
  • A light bulb or LED (the LED may also require a series resistor or the like, if no current limiting or ballast circuit is built-in);
  • Resistors: one high valued resistor in the 1 to 100 megohm range works best (I used 47 megohms), and two low-valued resistors (e.g. 1000 ohms);
  • A capacitor of suitable value to limit the bandwidth of the feedback (I used 6800pF);
  • A photodetector, photodiode, solar cell, or the like, to use as the pixel (light sensor). Preferably this optical detector has sufficient surface area to provide a reasonable field-of-view for the 1-pixel camera.

Prepare the components by trimming the leads to an appropriate length, especially for the parts in the feedback circuit, e.g. the 47 megohm resistor and the capacitor, as well as the photodiode.

To identify the polarity of the photodiode, connect it to a voltmeter. The anode terminal is the one that will provide a positive voltage when light is incident upon it. As shown in the picture, you can see more than 0.3 volts under illumination of a typical desk lamp. Sometimes the polarity is indicated by the lengths of the leads, so you might want to mark the positive (anode) lead with a red sharpie, as indicated, prior to trimming the leads to a shorter length.

Step 2: Make the Camera to Show the Feedback-based AR Phenomenon

Make a simple one-pixel camera. You can learn a lot from a 1-pixel camera because it forces you to think fundamentally about what a pixel is and what it measures or senses. This is a great way to learn about a camera in its simplest form. This is also what led me to the invention of HDR imaging (mathematically turning a camera into an array of light meters).

The camera can be made from a black cardboard box, or from an existing housing painted black inside, with a lens affixed to the front. Focus the lens so that it forms an image at the plane where the sensor sits. The sensor is a photodiode.

So you will end up with a black box that has a lens on the front to allow light in, and two wires coming out, e.g. red and black, red for positive and black for negative.

If you have a dark enough room (and some black cardboard on your workbench, surroundings, etc.) you may also omit the box and just have a lens that's the right distance from the sensor, along with a couple of pieces of black cardboard on either side, to keep most of the light out of the space between the lens and the sensor. In one of the pictures you can see that I have merely leaned a piece of black foam rubber against the lens to block light from getting to the sensor without first going through the lens. In this case I used an HP plotter to move an RGB LED back and forth in a known plane of motion, and since I know the light will only be coming from the sides (not the top or bottom) I only need 2 sides to the "box".

In fact you can perform a quick test of the system with just a lens in front of the sensor (no box around it). See picture above (lens glued to side of heavy battery sitting on the workbench, so it can be focused by moving the battery forward and backward).

Next you will build the amplifier that the camera connects to. The camera connects to the amplifier and the amplifier connects to the light source.

Note also that the camera sensor (photodiode) should be reverse-biased (i.e. reverse-connected). The red wire (positive, i.e. anode) will be connected to the ground pin of the amplifier, and the black wire will be connected to the amplifier's input.

Now let's build the amplifier!

Step 3: Build the "PHENOMENAmplifier" (Phenomenological Video Feedback Amplifier)

You will now learn how to build a very simple amplifier with massively high gain, enough to exhibit the "Phenomenological Video Feedback Effect" that allows an otherwise invisible physical phenomenon to be made visible, through electro-optical feedback with a moving light source.

The amplifier accepts input from the camera, and drives the light source. When the light is moved, it makes the phenomenon visible through a combination of two concepts:

  • Video feedback. Instead of the usual fractal patterns on a TV screen, we have only a one-pixel display. This gives rise to a reversal of how video feedback usually works: rather than moving a camera around and pointing it at a stationary screen, we move our "screen" (in this case our light is a 1-pixel "screen") around while the camera remains stationary; and
  • Persistence of Exposure (PoE). Persistence of Exposure can take place within another camera (e.g. set to long-exposure) or within the human eye itself, which functions very much like a camera.
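Before building anything, it can help to see these two concepts combine numerically. Below is a toy sketch (my own illustration, not part of the apparatus): a hypothetical 1-D "sightfield" stands in for the camera's sensitivity, the waved 1-pixel display glows in proportion to how well the camera sees it, and the long exposure integrates the whole sweep into a trail.

```python
# Toy sketch (my own illustration, not the author's code) of metasensing
# via Persistence of Exposure: a 1-D "sightfield" stands in for the camera,
# and the waved light glows in proportion to how well the camera sees it.
import numpy as np

def sightfield(x):
    """Hypothetical camera sensitivity: strongest near x = 0.5."""
    return np.exp(-((x - 0.5) ** 2) / (2 * 0.1 ** 2))

positions = np.linspace(0.0, 1.0, 201)   # path of the waved light
trail = sightfield(positions)            # feedback: brightness where seen

# The long-exposure photograph (or the eye, via Persistence of Exposure)
# integrates the whole sweep, so the trail IS the visibility map.
peak = positions[np.argmax(trail)]
print(peak)  # brightest point of the trail marks the sightfield centre: 0.5
```

In the real apparatus, of course, the sightfield is not a formula you wrote down; the whole point is that the feedback loop reveals it.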

Ordinary amplifiers that amplify voltage are unstable or unreliable at extremely high gains, and require many stages to get extremely high gain, but there are other kinds of amplifiers in which you can obtain an extremely high gain from one single stage! There are four main categories of amplifiers:

  1. voltage amplifiers. Voltage input gets amplified to voltage output;
  2. current (amperage) amplifiers. Current input gets amplified to current output;
  3. those that convert voltage to current; and
  4. those that convert current to voltage.

Amplifiers are characterized by their transfer function, i.e. "h" = "output" divided by "input".


So an amplifier (type 3) that converts voltage to current is called a "transconductance" amplifier, since the units of "h" (units of the output/input) = amps/volts = conductance. Vacuum tubes are examples of this kind of amplifier: grid voltage is converted to plate current.

The amplifier we'll use in this Instructable is the fourth kind of amplifier. This is the one that I drew out on the chalkboard above (ECE516 class, 2016 Jan 28). It is called a "TransImpedance Amplifier" or "TIA". It converts current to voltage, so the units of its transfer function "h" (units of output/input) are volts divided by amps, i.e. ohms (units of impedance).

In our application of the TIA, we're converting the photocurrent (current from the photodiode) to voltage. In this configuration the amplifier can operate reliably at extremely high gain with only one stage required! Here the gain is 47,000,000 volts per amp. Obviously if you put one amp in, you won't get 47,000,000 volts out, because the output will saturate at the 12 volts or so of supply voltage. But if you have 200 nanoamperes of photocurrent, you'll get about 9.4 volts of output. Typically I use an 11-position rotary switch to select from the following resistors in a 1-3 sequence: 10k, 30k, 100k, 300k, 1M, 3M, 10M, 30M, 100M, 300M, and 1 gigohm.
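As a quick check of that arithmetic, here is a tiny sketch (my own, using the 47 megohm and 12 volt values from the text) of the ideal transimpedance relationship, Vout = Iphoto * Rf, clipped at the supply rail:

```python
# Ideal transimpedance amplifier output (a simplification: real op amps
# saturate a little below the rail, and input offsets are ignored).
def tia_output(photocurrent_amps, rf_ohms=47e6, supply_volts=12.0):
    """Vout = Iphoto * Rf, limited by the supply voltage."""
    return min(photocurrent_amps * rf_ohms, supply_volts)

print(round(tia_output(200e-9), 2))  # 200 nA * 47 Mohm = 9.4 V
print(tia_output(1.0))               # 1 A would "want" 47 MV; saturates at 12.0 V
```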

Connect the camera to the input of the amplifier as shown above. Additional pictures, including the 46 still images that correspond to the .gif animation above, are available here (link). Connect a voltmeter to the output of the op amp (pin 6 if you're using a TLC271). Cap the camera lens (e.g. with black tape that's black all the way through, or heavy black cardboard). You should see a very low near-zero voltage on the meter. Uncap the lens and let some light in. The output should vary linearly with the quantity of light. You can leave the meter connected, allowing you to use this apparatus as an accurate light meter.

You can vary the gain by varying Rf, depicted in the chalk drawing above. As the gain increases the circuit can become unstable. Use a feedback capacitor, Cf, in parallel with Rf, to reduce the gain at high frequencies, while keeping it high at low frequencies. Experiment with and without the capacitor to see its effect.
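To get a feel for what the capacitor does before experimenting, note that the corner frequency of the parallel Rf/Cf feedback network is f_c = 1/(2*pi*Rf*Cf). Here is a small sketch (my own, using the component values mentioned in the text) showing how one fixed Cf interacts with different switched Rf values:

```python
# Corner frequency of the parallel Rf/Cf feedback network: above f_c the
# capacitor dominates and the gain rolls off, stabilizing the loop.
import math

def cutoff_hz(rf_ohms, cf_farads):
    """f_c = 1 / (2*pi*Rf*Cf) for the parallel feedback network."""
    return 1.0 / (2.0 * math.pi * rf_ohms * cf_farads)

# One fixed Cf (6800 pF) gives very different bandwidths across the
# switched Rf values, which is one reason to experiment with Cf.
for rf in (1e6, 10e6, 47e6):
    print(f"Rf = {rf:.0e} ohm -> f_c = {cutoff_hz(rf, 6800e-12):.2f} Hz")
```

A lower f_c means a more stable but slower-responding light, which matters when you wave the light quickly.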

I prefer not to use a potentiometer (variable resistor) for Rf, because the wires or leads to and from it, along with its larger electromagnetic cross-section, can pick up stray noise signals.

Once you have confirmed that your 1-pixel camera is functioning as a light meter (which is the general philosophy of comparametric analysis), connect it to the transistor in common-emitter configuration as shown in the drawing. A base resistor, Rt, limits the current into the transistor's base. I chose the 2SD261 for driving an LED because it has good current gain and I didn't need a high-frequency transistor. I found a surplus version that came with a nice heatsink. Double-check the pinouts: on the NEC D261 version of this transistor the collector is in the middle (unusual, since the base is usually in the center of a TO-92 package), while the version available on Digikey has a different pinout (base in the middle).

Rb is a bias resistor. The purpose of this resistor is to keep the light glowing slightly even in complete darkness. This is required to initiate the metasensing optical feedback effect.

You may want to experiment with this value, or even substitute a potentiometer to be able to vary the "bias".

Now let's do some AR!

Step 4: Testing and Using the Metasensing AR Feedback System

In a dark room, you should be able to wave the light around. I like to use an LED because it responds quickly and allows you to wave the light around more quickly. Note that if the room is not dark enough, the LED might go to full brightness even when it is not in view of the camera. In this case you might have to decrease the gain from 47,000,000 ohms to something less. Try 1,000,000 ohms for example, until the system works properly, and then increase the gain a little bit to see how much gain you can get away with for a given light level.
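One way to think about this gain-reduction step is as a constraint: the ambient (stray) photocurrent times Rf must stay below the level that turns the light on. Here is a hypothetical helper (my own sketch; the 0.6 V turn-on threshold is an assumed typical base-emitter value, not a figure from the article) that picks the largest usable Rf from the rotary-switch sequence:

```python
# Rotary-switch Rf values from the 1-3 sequence described earlier.
RF_SEQUENCE = [10e3, 30e3, 100e3, 300e3, 1e6, 3e6,
               10e6, 30e6, 100e6, 300e6, 1e9]

def max_usable_rf(ambient_amps, v_threshold=0.6):
    """Largest Rf keeping the ambient-light output below the (assumed)
    transistor turn-on threshold, so the LED stays off out of view."""
    usable = [rf for rf in RF_SEQUENCE if ambient_amps * rf < v_threshold]
    return max(usable) if usable else None

# With 5 nA of stray room light, 100 Mohm keeps the output at 0.5 V:
print(max_usable_rf(5e-9))
```

In practice you would do this empirically with the rotary switch, but the calculation shows why a darker room lets you get away with more gain.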

If you have a darkroom, that's a good place to begin experimenting with what we call "Phenomenal Augmented Reality" (augmented reality of physical phenomenology).

With your eyes adjusted to the dark you should be able to observe the effect. Try not to let your eyes follow the light. You might want to affix a separate small indicator light, or stare at something like a darkroom timer with glowing hands; as you wave the light back and forth it can be quite amazing and educational to see the phenomenological overlays imprinted on your retina by Persistence of Exposure.

If you have a photographic camera such as a DSLR that has a "B" ("Bulb", or long-exposure) setting, try photographing the trails of light from the apparatus. A combination of flash and long exposure can also be quite amazing, as the flash will make visible the system under test, while the long exposure will make visible the otherwise invisible virtual material (e.g. the sightfield of the camera under test). In this case you have 2 cameras: your test-camera (the one that's exhibiting the phenomenon) and your photographic camera (the one that's photographing the other camera that's exhibiting the phenomenon).

I'd love to see what you come up with!

Step 5: Conclusions and Going Further

Now that you've built a simple example of Phenomenal Augmented Reality, this opens the door to understanding a whole new world!

Try tracking the light source with Meta Spaceglasses (Metaglasses): you can capture the sightfield into the eyeglasses, track and map a camera, and walk around to see its sightfield from multiple angles. See for example the "abakographic principle" applied to Metaglasses. You can also build robotic systems that implement the abakographic video feedback AR metasensing principle. Back in the 1970s I built something I called the "Sequential Wave Imprinting Machine" (SWIM) that allowed me to see radio waves, as well as surveillance camera sightfields, by receiving the signals from cameras through wearable computing and a wearable antenna array. In building these early amplifiers I used triacs to drive even bigger lights, such as arc lamps or other devices up to 12,000 watts (see the above example in which the light is bright enough to melt snow, creating a snow sculpture in which a surveillance camera's gaze burns through the snow).

Have fun and experiment safely, especially when melting snow or steel with your camera's gaze!


46 Discussions


Reply 3 years ago

Yes, when you point a TV camera at a TV display you get fractal patterns. When you point a TV display at a TV camera you get Figure 1 of this paper:

What I did there is simply reverse what people normally do: people normally do video feedback by walking around with a camera plugged into a stationary TV. What I did back in my childhood was walk around with a TV receiving the output of a stationary surveillance camera (tuned to the same frequency to pick up its signal). This resulted in video feedback, creating what I call the "Abakographic phenomenological video feedback effect". Once I discovered that effect, I realized that I could replicate it with just a one-pixel display rather than a whole TV set. So my display became just one light bulb, and then a linear array of light bulbs sequentially selected to feed back one at a time, thus creating SWIM (Sequential Wave Imprinting Machine), which I made to also display the radio waves of the television carrier frequency as well.


Question 1 year ago

Thank you for this amazing project. I will be doing it for my physics competition, but I have a couple of questions:
1) Can we see the waves with the naked eye?
2) If they can't be seen with the naked eye, how can I enable that feature?
3) Can you please tell me the full procedure with measurements....
Thank you....

2 answers

Reply 1 year ago

Yes, the waves are very visible to the naked eye. If you wave a SWIM fast enough you can see it very clearly.


Reply 1 year ago

Thank you. Can you send me the full procedure for making this project, and how to make the SWIM faster? We are in grade 10 and don't know much about electric circuits or what to do... and how do we use a security camera instead of the lens?
Thank you


1 year ago

Hi, I hope it's not too late to ask.

How do you manage to sense the field of toilet proximity sensors if you can't access their signal?

Thank you!

2 replies

Reply 1 year ago

Sensors like these transmit infrared energy.

More generally, some sensors transmit energy whereas others don't, and for the latter it is harder to pick up a signal.

There is a lot of literature about bug sweepers in general, and detecting cameras in particular.

My contribution is less about new detection methods and more about how to present the data once detected, i.e. the concept of phenomenological augmented reality ("Real Reality").



Reply 1 year ago

Thank you for your quick response.
I'm asking more about the hardware implementation of it. I want to reproduce this phenomenon with my students. I understand the working principle explained in the 1-pixel camera instructable, but I'm unable to figure out the best way to make the IR FOV "detector" with an LED array.
The closest/simplest thing I can imagine is a wide-bandwidth IR sensor fixed to a single LED, and sweeping that assembly both perpendicular and parallel in front of the proximity sensors, but how does the array work?

Thank you again :)


3 years ago

Hi there, just wondering, could this be used to determine if the "security cameras" that my neighbour installed on his house are actually filming my movements on my own property? One of the cameras is almost directly opposite my side door (which has a window in the door) and I fear he can "see" into my house. Would this device be able to determine if this is so, and could I do it without touching the neighbour's security camera? Thanks in advance for your advice.

1 reply

Reply 3 years ago

Thanks, that's a very interesting question, and it touches specifically on the new field of Veillametrics which Ryan Janzen and I introduced:

[Janzen and Mann, IEEE CCECE 2014].

There's a lot of existing work on bug sweeping, but what we do is add a scientific visualization twist to this, e.g. an add-on to existing sensing methods that makes their results visible.

A lot of cameras these days are hidden inside domes to make it harder to see which way they are aimed. The watchers don't want to be watched!

Therein lies an inherent hypocrisy (i.e. a lack of integrity);

see also

and look at Fig. 1 of this paper; I'm sure you'll find it amusing:

Also, a lot of cameras are being installed, so you might find your house under the veillance of a streetlight with a hidden camera in it, or a traffic camera that can see into your house, etc.


Reply 3 years ago

Yes, I made some versions of it that drive SLMs (Spatial Light Modulators), which, when soaked in an index-matching xylene bath, make a good clean diffraction grating.

You can see some of my other holography-related work in this paper:


3 years ago

First, your works are truly inspiring! Thanks so much for this tutorial!

I have a general (and naive) question: I saw that you have been using your invention to visualize different forms of invisible waves. However, a camera is a device that senses light (rather than emitting light/energy). When you visualize the FOV of a surveillance camera/IR faucet, how do you know the feedback is not from the IR emitter?

Sorry, the concept of measuring a veillance field is very fresh to me. It would be awesome if you could explain briefly why vision is measurable.

1 reply

Reply 3 years ago

Thank you for your interest in our work. There are various means by which surveillance devices can be detected, and there is a huge field of study and array of products out there for bug sweepers and other systems to find surveillance. My contribution is distinct from this: whereas none of the other work has ever provided a visualization of veillance, what I provide is a means for visualizing that veillance once detected or measured.

Veillametrics is also a new field of scientific measurement; see also


3 years ago

THIS. IS. AWESOME!!!!!!!!!


3 years ago

I will have to revisit this ible later. Too complex for a quick scan on the smartphone.

Eh Lie Us!

3 years ago

This is pretty fascinating stuff. thanks


3 years ago

Thank you for the fascinating article and clear description of your process! A few years ago, I built a similar system for visualizing how sound moved through an environment. I am a sound designer by trade and was interested in seeing how my computer's monitors affected the sound from my speakers. I built an Arduino-based device that drove an RGB LED that was mounted at the end of a long stick along with a microphone. The color of the LED was deep red for lower frequencies on up to deep blue for the highs. The audio spectrum was represented by the visible light spectrum and the brightness was directly proportional to the loudness of the signal. It was essentially a single-pixel realtime analyzer.

I played pink noise from the center channel speaker and then slowly swept the stick up and down to paint a light painting of the dispersion of the sound using my DSLR's bulb mode. It was able to show the refraction of the higher frequencies over the top edges of my monitors while the lower frequencies were largely unaffected.

It was a fun experiment that I now regret not documenting to share with others.

1 reply

Reply 3 years ago

Thanks for the kind words and the comment!

Fascinating work. I'd love to see some of your pictures.

In my high school physics class, back when I was a student, I did a demonstration of interference patterns using 2 separated speakers playing the same note. Back and forth movement of an array of lights made visible the nodes and antinodes, e.g. constructive interference versus destructive interference through Persistence of Exposure. See

I'm a lot like you in the sense that I'd rather do something than write about it, so I ended up with many things I built but never documented. Now that I'm a professor, I find a need to communicate with my students and others, so I'm getting better at documenting things (and archiving things, like scanning some 40-year-old photographic films and plates and organizing the data, etc.).

Keep up the great work, and keep making and building and tinkering: tinkering as a form of inquiry!