In my childhood I discovered an interesting phenomenon: if I connected a light source to a sufficiently amplified television receiver, and waved the light around in front of a video camera, I could get the light to function as a 3D augmented reality display that would overlay the sightfield of the camera, as virtual information on top of physical reality. I grew up at a time when technology was transparent and easy to understand. Radios and televisions used transparent electronic lamps called "vacuum tubes": you could see inside them, and they revealed all their secrets. I then witnessed a transition into an era of opacity, in which scientific principles were concealed inside integrated circuits with closed-source firmware. And while machines became more "secretive", we entered a new era in which they also began to watch us and sense our presence, yet reveal nothing about themselves (see a short essay I wrote about this metaphenomenon). I wanted to be able to sense sensors and see what they can see. So I invented something I called the PHENOMENAmplifier, a device that amplifies physical phenomena in a feedback loop of revelation.

It worked by video feedback, and because of the feedback loop, it solved one of the most difficult problems in augmented reality: alignment between the real and virtual worlds. As a result I was able to make artistic "lightpaintings" or "light drawings/graphings" as scientific visualizations in which the degree of visibility of each sampled point in space to a surveillance camera could itself be made visible. I called this "Metasensing", i.e. seeing what a camera can see (sensing sensors and sensing their capacity to sense). As a professor, I teach this to the students in my Wearable Computing and Augmented Reality class every year, but a number of people have attempted to reproduce this interesting scientific result and have had trouble (it is somewhat difficult to get all the conditions right for the phenomenon to occur). Therefore I came up with a very simple way to teach and explain this phenomenon: each student builds a very simple one-pixel camera and a very simple 3D augmented reality system with a light bulb or LED to learn about this interesting effect. Once understood, this effect has many uses, both artistic and scientific. See for example IEEE Consumer Electronics, 4(4), pp. 92-97, 2015.

Step 1: Get and Prepare the Materials

Get the materials together and prepare them for assembly:

  • Some black cardboard or other material to make a simple box camera;
  • Glue, scissors, tape, or the like, to make the box (or black paint to paint an existing box black);
  • A lens (or you can also just make a hole in the box to let light in, e.g. like a pinhole camera);
  • Solderless "breadboard" or a circuit board, soldering iron, etc.;
  • Wires and connectors;
  • A suitable op amp such as TLC271 or TLC272;
  • A transistor suitable to drive the light bulb or LED of your choice (I used a 2N6058 to drive a 50-watt light bulb, or a 2SD261 to drive a quarter-amp LED);
  • Heatsink, socket, and mica insulator if you're using the larger transistor;
  • A light bulb or LED (the LED may also require a series resistor or the like, if no current limiting or ballast circuit is built-in);
  • Resistors: one high valued resistor in the 1 to 100 megohm range works best (I used 47 megohms), and two low-valued resistors (e.g. 1000 ohms);
  • A capacitor of suitable value to limit the bandwidth of the feedback (I used 6800pF);
  • A photodetector, photodiode, solar cell, or the like, to use as the pixel (light sensor). Preferably this optical detector has sufficient surface area to provide a reasonable field-of-view for the 1-pixel camera.

Prepare the components by cutting off the leads to an appropriate length, especially the parts that are in the feedback circuit, e.g. the 47 Mohm resistor and the capacitor, as well as the photodiode.

To identify the polarity of the photodiode, connect it to a voltmeter. The anode terminal is the one that will provide a positive voltage when light is incident upon it. As shown in the picture, you can see more than 0.3 volts under illumination of a typical desk lamp. Sometimes the polarity is indicated by the lengths of the leads, so you might want to mark the positive (anode) lead with a red sharpie, as indicated.

Step 2: Make the Camera to Show the Feedback-based AR Phenomenon

Make a simple one-pixel camera. You can learn a lot from a 1-pixel camera because it forces you to think fundamentally about what a pixel is and what it measures or senses. This is a great way to learn about a camera in its simplest form. This is also what led me to the invention of HDR imaging (mathematically turning a camera into an array of light meters).

The camera can be made in a black cardboard box, or take an existing housing and paint it black inside, and affix a lens to the front of it. Focus the lens so that it forms an image on the place where the sensor is. The sensor is a photodiode.
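To place the sensor at the focus, the thin-lens equation is a useful guide. Here's a minimal sketch; the 50 mm focal length and 1 m subject distance are example values I've assumed, not values from the build above:

```python
def image_distance(f_mm, subject_mm):
    """Thin-lens equation: 1/f = 1/d_o + 1/d_i, solved for the
    lens-to-sensor distance d_i (all distances in millimetres)."""
    return 1.0 / (1.0 / f_mm - 1.0 / subject_mm)

# Example: a 50 mm lens focused on a light 1 m away needs the sensor
# about 52.6 mm behind the lens (slightly more than the focal length).
print(round(image_distance(50.0, 1000.0), 1))  # → 52.6
```

In practice you can simply slide the lens (or sensor) back and forth until the spot of light on the photodiode is sharpest, which is what the battery-mounted lens in the picture accomplishes.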

So you will end up with a black box that has a lens on the front to allow light in, and two wires coming out, e.g. red and black, red for positive and black for negative.

If you have a dark enough room (and some black cardboard on your workbench, surroundings, etc.) you may also omit the box and just have a lens that's the right distance from the sensor, along with a couple of pieces of black cardboard on either side, to keep most of the light out of the space between the lens and the sensor. In one of the pictures you can see that I have merely leaned a piece of black foam rubber against the lens to block light from getting to the sensor without first going through the lens. In this case I used an HP plotter to move an RGB LED back and forth in a known plane of motion, and since I know the light will only be coming from the sides (not the top or bottom) I only need 2 sides to the "box".

In fact you can perform a quick test of the system with just a lens in front of the sensor (no box around it). See picture above (lens glued to side of heavy battery sitting on the workbench, so it can be focused by moving the battery forward and backward).

Next you will build the amplifier that the camera connects to. The camera connects to the amplifier and the amplifier connects to the light source.

Note also that the camera sensor (photodiode) should be reverse-biased (e.g. reverse-connected). The red wire (positive, i.e. anode) will be connected to the ground pin of the amplifier and the black wire will be connected to the amplifier's input.

Now let's build the amplifier!

Step 3: Build the "PHENOMENAmplifier" (Phenomenological Video Feedback Amplifier)

You will now learn how to build a very simple amplifier with extremely high gain, enough to exhibit the "Phenomenological Video Feedback Effect" that allows an otherwise invisible physical phenomenon to be made visible through electro-optical feedback with a moving light source.

The amplifier accepts input from the camera, and drives the light source. When the light is moved, it makes the phenomenon visible through a combination of two concepts:

  • Video feedback. Instead of the usual fractal patterns on a TV screen, we have only a one-pixel display. This gives rise to a reversal of how video feedback usually works: rather than moving a camera around and pointing it at a stationary screen, we move our "screen" (in this case our light is a 1-pixel "screen") around while the camera remains stationary; and
  • Persistence of Exposure (PoE). Persistence of Exposure can take place within another camera (e.g. set to long-exposure) or within the human eye itself, which functions very much like a camera.
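The Persistence of Exposure idea can be sketched numerically. This is a toy NumPy example of my own (not from the original build): a long exposure is just the per-pixel sum of many frames, so a moving 1-pixel light leaves a trail.

```python
import numpy as np

# A long exposure integrates light over time: the trail left by the moving
# 1-pixel "display" is the per-pixel sum of all the frames.
frames = np.zeros((8, 16, 16))       # 8 frames of a dark 16x16 scene
for t in range(8):
    frames[t, 8, 2 * t] = 1.0        # the light moves left to right

exposure = frames.sum(axis=0)        # Persistence of Exposure: the trail
print(int(exposure.sum()))           # 8 bright points accumulated → 8
```

The human eye does something similar over short time scales, which is why waving the light quickly in a dark room paints a visible curve on the retina.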

Ordinary amplifiers that amplify voltage are unstable or unreliable at extremely high gains, and require many stages to get extremely high gain, but there are other kinds of amplifiers in which you can obtain an extremely high gain from one single stage! There are four main categories of amplifiers:

  1. voltage amplifiers. Voltage input gets amplified to voltage output;
  2. current (amperage) amplifiers. Current input gets amplified to current output;
  3. those that convert voltage to current; and
  4. those that convert current to voltage.

Amplifiers are characterized by their transfer function, i.e. "h" = "output" divided by "input".


So an amplifier (type 3) that converts voltage to current is called a "transconductance" amplifier, since the units of "h" (units of the output/input) = amps/volts = conductance. Vacuum tubes are examples of this kind of amplifier: grid voltage is converted to plate current.

The amplifier we'll use in this Instructable is the fourth kind of amplifier. This is the one that I drew out on the chalkboard above (ECE516 class, 2016 Jan 28). It is called a "TransImpedance Amplifier" or "TIA". It converts current to voltage, so the units of its transfer function "h" (units of output/input) are volts divided by amps, i.e. ohms (units of impedance).

In our application of the TIA, we're converting the photocurrent (current from the photodiode) to voltage. In this configuration the amplifier can operate reliably at extremely high gain with only one stage required! Here the gain is 47,000,000 volts per amp. Obviously if you put one amp in, you won't get 47,000,000 volts out, because it will saturate at the 12 volts or so supply voltage. But with a fifth of a microamp (0.2 µA) of photocurrent, you'll get about 10 volts of output (0.2 µA × 47 MΩ ≈ 9.4 V).
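The TIA arithmetic above can be sketched in a few lines (a simple ideal-amplifier model with the values from this build; real op amps have offsets and noise this ignores):

```python
def tia_output(photocurrent_a, rf_ohms=47e6, v_supply=12.0):
    """Ideal transimpedance amplifier: V_out = I_photo * R_f,
    clipped at the supply rail."""
    return min(photocurrent_a * rf_ohms, v_supply)

# 0.2 microamps of photocurrent gives about 9.4 V...
print(round(tia_output(0.2e-6), 1))  # → 9.4
# ...while a full amp would in theory give 47 MV, but the output
# saturates near the supply voltage instead:
print(tia_output(1.0))               # → 12.0
```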

Connect the camera to the input of the amplifier as shown above. Additional pictures, including the 46 still images that correspond to the .gif animation above, are available here (link). Connect a voltmeter to the output of the op amp (pin 6 if you're using a TLC271). Cap the camera lens (e.g. with black tape that's black all the way through, or heavy black cardboard). You should see a very low near-zero voltage on the meter. Uncap the lens and let some light in. The output should vary linearly with the quantity of light. You can leave the meter connected, allowing you to use this apparatus as an accurate light meter.

You can vary the gain by varying Rf, depicted in the chalk drawing above. As the gain increases the circuit can become unstable. Use a feedback capacitor, Cf, in parallel with Rf, to reduce the gain at high frequencies, while keeping it high at low frequencies. Experiment with and without the capacitor to see its effect.
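The effect of Cf can be estimated with the usual single-pole RC formula (a sketch, using the component values from this build):

```python
import math

def corner_frequency_hz(rf_ohms, cf_farads):
    """Single-pole estimate: Cf in parallel with Rf rolls the gain off
    above f_c = 1 / (2 * pi * Rf * Cf)."""
    return 1.0 / (2.0 * math.pi * rf_ohms * cf_farads)

# With the values used here (Rf = 47 Mohm, Cf = 6800 pF) the gain starts
# rolling off below 1 Hz, giving a heavily smoothed, slow-responding loop:
print(round(corner_frequency_hz(47e6, 6800e-12), 2))  # → 0.5
```

A smaller Cf raises the corner frequency and lets the loop respond faster, at the cost of more susceptibility to instability and noise.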

I prefer not to use a potentiometer (variable resistor) for Rf, because the wires or leads to and from it, along with its larger electromagnetic cross-section, can pick up stray noise signals.

Once you have confirmed that your 1-pixel camera is functioning as a light meter (which is the general philosophy of comparametric analysis), connect it to the transistor, in common emitter configuration as shown in the drawing. A base resistor for the transistor, Rt, limits the current into the base. I chose the 2SD261 for driving an LED because it has good current gain, and I didn't need a high frequency transistor. I found a surplus version that came with a nice heatsink. Double-check the pinouts. For the NEC D261 version of this transistor, the collector is in the middle (unusual for this TO-92 package where base is usually in the center). The one available on Digikey has a different pinout (on the Digikey version the base is in the middle).
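The common-emitter drive stage can be sketched with simple first-order transistor arithmetic. The current gain (beta = 200) and the 1 kΩ value for Rt are assumptions of mine for illustration, not measured values for the 2SD261:

```python
def led_drive_current_a(v_amp_out, rt_ohms=1000.0, v_be=0.7, beta=200.0,
                        i_limit_a=0.25):
    """Common-emitter sketch: the base resistor Rt sets the base current
    (after the ~0.7 V base-emitter drop), and the transistor multiplies
    it by beta, up to the LED's quarter-amp current limit."""
    i_base = max(v_amp_out - v_be, 0.0) / rt_ohms
    return min(beta * i_base, i_limit_a)

# With ~10 V from the op amp and a 1 kohm base resistor, the transistor
# is driven well past the quarter-amp LED limit, i.e. fully on:
print(led_drive_current_a(10.0))  # → 0.25
```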

Rb is a bias resistor. The purpose of this resistor is to keep the light glowing slightly even in complete darkness. This is required to initiate the metasensing optical feedback effect.

You may want to experiment with this value, or even substitute a potentiometer to be able to vary the "bias".

Now let's do some AR!

Step 4: Testing and Using the Metasensing AR Feedback System

In a dark room, you should be able to wave the light around. I like to use an LED because it responds quickly and allows you to wave the light around more quickly. Note that if the room is not dark enough, the LED might go to full brightness even when it is not in view of the camera. In this case you might have to decrease the gain from 47,000,000 ohms to something less. Try 1,000,000 ohms for example, until the system works properly, and then increase the gain a little bit to see how much gain you can get away with for a given light level.
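Why too much ambient light forces you to reduce the gain can be shown with a toy discrete-time model of the loop (made-up units and coupling, purely illustrative): the LED drive follows the amplified sensor signal, and the sensor sees ambient light plus whatever couples back from the LED when it is in the camera's view.

```python
def settle(gain, ambient, in_view, v_max=12.0, bias=0.2, steps=200):
    """Iterate the optical feedback loop to its steady state.
    sensed = ambient + (light coupled back from our own LED, if the LED
    is in the camera's view); LED drive = bias + gain * sensed, clipped."""
    led = bias
    coupling = 1.0 if in_view else 0.0
    for _ in range(steps):
        sensed = ambient + coupling * led
        led = min(bias + gain * sensed, v_max)
    return led

# With modest ambient light the LED latches bright only in view...
print(settle(gain=2.0, ambient=0.1, in_view=True) > 10)    # → True
print(settle(gain=2.0, ambient=0.1, in_view=False) < 1)    # → True
# ...but if the room is too bright, it saturates even out of view:
print(settle(gain=2.0, ambient=6.0, in_view=False) > 10)   # → True
```

Lowering `gain` (i.e. reducing Rf) in this model is exactly the fix suggested above: it keeps the out-of-view drive below saturation while still letting in-view feedback latch the LED bright.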

If you have a darkroom, that's a good place to begin experimenting with what we call "Phenomenal Augmented Reality" (augmented reality of physical phenomenology).

With your eyes adjusted to the dark you should be able to observe the effect. Try not to let your eyes follow the light. You might want to affix a separate small indicator light, or stare at something like a darkroom timer with glowing hands; as you wave the light back and forth it can be quite amazing and educational to see the phenomenological overlays imprinted on your retina by Persistence of Exposure.

If you have a photographic camera, such as a DSLR, that has a "B" ("Bulb", or long-exposure) setting, try photographing the trails of light from the apparatus. A combination of flash and long exposure can also be quite amazing, as the flash will make visible the system under test, while the long exposure will make visible the otherwise invisible virtual material (e.g. the sightfield of the camera under test). In this case you have 2 cameras: your test camera (the one that's exhibiting the phenomenon) and your photographic camera (the one that's photographing the other camera that's exhibiting the phenomenon).

I'd love to see what you come up with!

Step 5: Conclusions and Going Further

Now that you've built a simple example of Phenomenal Augmented Reality, this opens the door to understanding a whole new world!

Try tracking the light source with Meta Spaceglasses (Metaglasses): you can capture the sight into the eyeglasses, track and map a camera, and walk around to see its sightfield from multiple angles. See for example the "abakographic principle" applied to Metaglasses. You can also build robotic systems that implement the abakographic video feedback AR metasensing principle. Back in the 1970s I built something I called the "Sequential Wave Imprinting Machine" that allowed me to see radio waves, as well as surveillance camera sightfields (by receiving the signals from the cameras), through wearable computing and a wearable antenna array. In building these early amplifiers I used triacs to drive even bigger lights, such as arc lamps or other devices up to 12,000 watts (see the above example in which the light is bright enough to melt snow, creating a snow sculpture in which a surveillance camera's gaze burns through the snow).

Have fun and experiment safely, especially when melting snow or steel with your camera's gaze!

Hi, I made this. Results look beautiful. I made a few changes in the circuit: I used an LDR in a resistor-divider circuit instead of a photodiode, and I used the op amp without feedback, as a comparator.

A lot of fun in making this :D And I have a question, Sir: how can I make the range of the camera wider? You can see in my photos that the range of the brighter part is narrow (maybe because of my lens, I guess). Believe me, it's really an inspiring project! Here are my materials: Photodiode: BPW20RF; Op amp: UA741; Transistor: S9018; Capacitor: 6800 pF; Resistors: 4.8 Mohm x1, 1 kohm x2, 2 kohm x1 (to get 3 kohm), 150 ohm x1.

Try experimenting with different resistor values. Larger values will increase sensitivity. See if you can get a nice-looking picture, like the examples.

Hello Sir, this is an amazing project. I learned a lot during the process but could not see the wave properly when I flashed an LED. I am a novice in this field but will hopefully learn more with time. I have put up a picture showing my circuit. Any advice would be great. Thank you.

Hi! This is an awesome project! I tried to construct the circuit; however, the 12 V power supply I was able to obtain is only capable of providing 229 mA of current, and the indicator LED I found is very far from the quarter-amp requirement, which resulted in not enough brightness to enable the feedback cycle, even when I changed the amplification ratio to 1/47 of the original. However, I was able to measure the voltage around the LED and confirmed a change in voltage when I moved a flashlight in and out of the "view angle" of my pinhole camera. In this case, do you have any suggestion on how to take a step further to complete this apparatus?

I see what looks like an infrared photodiode in your circuit. Where is the LED (also infrared, presumably)? It doesn't seem to be visible in the circuit.

Oh yes, I'm sorry, I didn't install it when taking the picture; this is the complete version of my circuit. I could only find this IR photodiode and had no luck looking for an infrared LED, so I was using a red LED, hoping it could produce sufficient IR to make the circuit work. However, with everything set up, the change in voltage across the LED was only a few mV, making the outcome not visible at all. Do you have any suggestions in this situation?

Try using a photodiode/LED pair that have similar spectral responses. If you're trying to sense veillance flux from an IR camera, it makes sense to use an IR LED. If you're trying to sense veillance flux from a visible-light camera, it makes sense to use a visible LED (e.g. red, yellow, green, blue, or white).
Hi there, just wondering: could this be used to determine if the "security cameras" that my neighbour installed on his house are actually filming my movements on my own property? One of the cameras is almost directly opposite my side door (which has a window in it) and I fear he can "see" into my house. Would this device be able to determine if this is so, and could I do it without touching the neighbour's security camera? Thanks in advance for your advice.

Thanks, a very interesting question, and it touches specifically on the new field of Veillametrics, which Ryan Janzen and I introduced: http://veillametrics.com/Veillametrics_JanzenMann2014pub.pdf [Janzen and Mann, IEEE CCECE 2014]. There's a lot of existing work on bug sweeping, but what we do is add a scientific visualization twist to this, e.g. an add-on to existing sensing methods that makes their results visible. A lot of cameras these days are hidden inside domes to make it harder to see which way they are aimed. The watchers don't want to be watched! Therein lies an inherent hypocrisy (i.e. a lack of integrity); see also http://wearcam.org/declaration.pdf and look at Fig. 1 of this paper, which I'm sure you'll find amusing: http://wearcam.org/suicurity.pdf. Also, a lot of cameras are being installed, so you might find your house under the veillance of a streetlight with a hidden camera in it, or a traffic camera that can see into your house, etc.

You could adjust it to be a hologram projector.

Yes, I made some versions of it that drive SLMs (Spatial Light Modulators), which, when soaked in an index-matching xylene bath, make a good clean diffraction grating. You can see some of my other holography-related work in this paper: http://wearcam.org/margoloh2538.pdf

First, your works are truly inspiring! Thanks so much for this tutorial! I have a general (and naive) question: I saw that you have been using your invention to visualize different forms of invisible waves. However, a camera is a device that senses light (rather than emitting light/energy). When you visualize the FOV of a surveillance camera or an IR faucet, how do you know that the feedback is not from the IR emitter? Sorry that the concept of measuring a veillance field is very fresh to me. It would be awesome if you could explain briefly why vision is measurable.

Thank you for your interest in our work. There are various means by which surveillance devices can be detected, and there is a huge field of study and array of products out there for bug sweepers and other systems to find surveillance. My contribution is distinct from this: whereas none of the other work has ever provided a visualization of veillance, what I provide is a means for visualizing that veillance once detected or measured. See http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2014/W17/papers/Mann_The_Sightfield_Visualizing_2014_CVPR_paper.pdf Veillametrics is also a new field of scientific measurement; see also http://www.eyetap.org/docs/Veillametrics_JanzenMann2014.pdf
I had a lot of fun! Thanks for sharing. I tried different photoresistors to see how the sensor size can affect the "quality" (range of detection) of my 1-pixel camera.

Looks great. Definitely exhibits the expositive abakographic visual feedback effect.

Could you make a video of how you made it, please?

Here's a .gif image that shows an animation. I took one picture after each part I placed on the breadboard. I begin with a blank breadboard, then build out to a larger power transistor (on a separate heatsink, out of frame) driving a large incandescent light bulb. Then I pull off that bulb and transistor, and insert the smaller transistor to drive the LED.

Here's a lower-resolution .gif in case that takes too long to load. See http://wearcam.org/instructable_ECE516_lab3_2016/ for download of the .gif file plus the 46 still images that generated it (at full resolution).
This was a fun one for sure! I set up two cameras, one with its sightfield revealed in green light and the other with its sightfield revealed in red light. The area where their sightfields intersected was revealed in the combination of red and green light, which is yellow light. An RGB LED did the light painting.

This is totally awesome. Really fantastic! Keep up the great work.

A simple yet amazing project that demonstrates phenomenal AR! Had a great time making it :)

Excellent! Keep up the great work!

Antony Irudayaraj and I made an implementation of Phenomenal Augmented Reality. We came up with an array of flashing lights that, when swung across the camera, would let us visualize its sightfield.

Excellent! Nice to see an implementation of SWIM (Sequential Wave Imprinting Machine). Keep up the great work!

This is great: especially in the rightmost photo you can see that the two middle traces (the ones that are within the field of view of the camera lens you're holding) are a lot brighter than the others near the camera, but as the traces sweep away from the camera they all start to brighten up. Nice use of PWM, and a nice SWIM (Sequential Wave Imprinting Machine) implementation!

Made this project with multiple photoresistors to detect whether the LED light is out of sight (red), partially visible (blue), or fully visible (green). The emitted light is drawn on a black canvas with long-exposure shots.

This looks great! See also some more examples at http://wearcam.org/ece516/ (click on the Lab 3 link), http://wearcam.org/ece516/ECE516_lab3_2016feb01/

THIS. IS. AWESOME!!!!!!!!!
I will have to revisit this 'ible later. Too complex for a quick scan on the smartphone.

Tried it at home last night but could not get the LED to glow outside the range, as the resistors I have on hand are very limited. I will definitely try it again.

This looks great! Wonderful to see you having fun with this project! Looking forward to seeing any more images you might post.

This is pretty fascinating stuff. Thanks.

Thank you for the fascinating article and clear description of your process! A few years ago, I built a similar system for visualizing how sound moved through an environment. I am a sound designer by trade and was interested in seeing how my computer's monitors affected the sound from my speakers. I built an Arduino-based device that drove an RGB LED mounted at the end of a long stick along with a microphone. The color of the LED was deep red for lower frequencies on up to deep blue for the highs. The audio spectrum was represented by the visible light spectrum, and the brightness was directly proportional to the loudness of the signal. It was essentially a single-pixel realtime analyzer. I played pink noise from the center channel speaker and then slowly swept the stick up and down to paint a light painting of the dispersion of the sound using my DSLR's bulb mode. It was able to show the refraction of the higher frequencies over the top edges of my monitors while the lower frequencies were largely unaffected. It was a fun experiment that I now regret not documenting to share with others.

Thanks for the kind words and the comment! Fascinating work; I'd love to see some of your pictures. In my high school physics class, back when I was a student, I did a demonstration of interference patterns using 2 separated speakers playing the same note. Back-and-forth movement of an array of lights made visible the nodes and antinodes, e.g. constructive versus destructive interference, through Persistence of Exposure. See http://wearcam.org/fieldary.pdf. I'm a lot like you in the sense that I'd rather do something than write about it, so I ended up with many things I built but never documented. Now that I'm a professor, I find a need to communicate with my students and others, so I'm getting better at documenting things (and archiving things, like scanning some 40-year-old photographic films and plates and organizing the data, etc.). Keep up the great work, and keep making and building and tinkering: tinkering as a form of inquiry!

Nice, I like it. You are smart.

That's a nice technique to see the range of a lens' view. From what I could gather from the instructions, you are creating a feedback loop that controls the brightness of an LED. The LED itself is seen through the lens and detected, and the more visible it is, the brighter it glows. How did you manage to make an LED strip respond to the lens' view? Did you cycle through the LEDs in the strip to see the effect on the detector?

Yes, I built an amplifier and system that cycles through lights. Back in 1974 (42 years ago) I made one that could drive 35 lights at up to a total of 2500 watts, and it had various modes of operation, like forward, backward, auto (bidirectional), sensitivity control, bias control, etc. I called it the "SWIMwear" (Sequential Wave Imprinting Machine wearable computer), because it sequentially imprinted Phenomenological Augmented Reality onto the retina (or film). See http://wearcam.org/swim/

Woah. That's pretty badass considering the times when it was made.
It's not so much that machines are watching us; the thing is, there are people who run and watch the machines that watch us.

Yes, that's a very good point. From the perspective of being watched, we have no way of knowing if it is machines, or people through machines, or machine intelligence of people, or people intelligence of machines, or self-aware machines watching us. The whole system is opaque in that regard. Marvin Minsky, Ray Kurzweil, and I wrote a paper about this kind of thing: while the Singularity is near, the "Sensularity" ("Sensory Singularity") is nearer, upon us in fact. We don't need to wait for machines to become self-aware before harm can be done; people interacting through the machines can do potentially bad things right now, so we should really focus on the here-and-now of the Sensularity before we worry about the Singularity. http://www.eyetap.org/papers/docs/IEEE_ISTAS13_Sensularity_Minsky_etal.pdf

People are smart enough that they can come up with notions that don't make any sense.

I hope anyone here has understood that these images are fake. You can always try to reproduce them, but don't be disappointed if you don't succeed. This is simply an almost-open-loop amplifier, which is called a comparator. Light on the photodiode (mounted reversed!) will light the LED; that's it. It's always good to laugh anyway.

I'm not sure what you mean by fake, but imagine what you could do if you replaced the photodiode with a microphone and waved it in front of a speaker, or used an IR detector on an automatic door opener, or used a magnetic field sensor to visualize a magnet. Using the same principle demonstrated here, you can visualize any physical parameter that has an electronic sensor: light, IR, heat, magnetic field, ultrasound, sound, etc.

This is called the Larsen effect.

The Larsen effect refers to the feedback you get when you walk around with a microphone that is connected to a PA (Public Address) system and get too close to one of the speakers, for example. I used to do the exact opposite of this: I put the microphone on a stand and walked around with a speaker. In parallel with the speaker I connected a light bulb. Then I had a camera take a long exposure while I walked around with the speaker+bulb. The bulb glowed brighter when and where there was more feedback, and thus traced out the pattern of the microphone's receptive field on a long-exposure picture. This was a combination of the microphone itself (i.e. its "polar pattern") and the room it was in (sound reflections, the environment, etc.). Another example I made in the early 1980s was a wearable bug sweeper with an array of LED light sources that I could use with long-exposure photography to capture a "bugginess field". I reported on this work in the literature [Mann et al., http://wearcam.org/tei2015/p497-mann.pdf]. This is somewhat different than the Larsen effect. This effect doesn't have a name yet, so I call it the "abakographic phenomenological augmented reality feedback effect", or the like. I've created a number of different experiments in which I've discovered various forms of this effect, with audio, video, radio, water, and many other kinds of fields. The phenomenological visual augmented reality feedback effect (with video feedback) is just one of many examples I discovered and explored. See also http://wearcam.org/fieldary.pdf

I teach this every year as part of my Wearable Computing and Augmented Reality (Intelligent Image Processing) course, and my students don't seem to have any difficulty reproducing these results. See for example this year's results: http://wearcam.org/ece516/ece516_lab2_2016jan18/ Here, for example, a student has built a camera and visualized its sightfield: http://wearcam.org/ece516/ece516_lab2_2016jan18/HeltonChen_feedbaq3.jpg

You got a laugh out of this?

SteveMann: You are really thinking out of the box with this stuff. It is fantastic to be able to look at things so differently from others. I understand how you could have mapped out the IR beam in front of those faucets by using a stick with a number of IR-controlled LEDs mounted along it. But how did you take this picture? Did you take multiple exposures in a darkened room? (Digital or film?) How did you get all 3 faucets in one picture?
