Introduction: HDR EyeGlass: From Cyborg Welding Helmets to Wearable Computing in Everyday Life

This Instructable is not a lesson on how to use existing HDR (High Dynamic Range) software. Instead it gives you a DIY (Do-It-Yourself) approach to writing your own HDR software, and creating your own systems that can potentially go beyond what's out there already, or at the very least give you a sense of personal fulfillment that can't be had simply by using an existing product. Long live DIY and power to the people!

Seen through the Glass, Darkly

My grandfather taught me how to weld when I was 4 years old, and it was a wonderful, fun experience: exhilarating, but in some ways a bit terrifying, because you need to wear a helmet with a darkglass (a single pane of very dark glass through which both eyes see the world). The whole world seems almost completely black except for a little pinprick of blindingly bright light. So from an early age I gave a great deal of thought to how we see, hear, and more generally sense and make sense of the world. I spent a great deal of my childhood inventing, designing, and building wearable computers to mediate my senses, through something I called "mediated reality".

Mediated Reality allows us to augment some aspects of the world, diminish other aspects, and more generally, modify the view of the world.

So I wanted to build a digital eye glass that could do these three things:

  1. Augment where appropriate: e.g. see in complete darkness plus annotate with virtual markers or make visible otherwise invisible fields and sensory information;
  2. Diminish where appropriate: e.g. tone down the bright areas of the scene, such as the electric arc of my TIG welder, the glare of light bulbs, the sun, glints off specular objects, etc.; and
  3. Modify: help people see by modifying the visual field, not merely adding to it. This included things like sensory substitution, e.g. seeing into the infrared to be able to see where a workpiece was heating up, as well as being able to see radio waves, and the like.

More generally I envisioned the eye glass in conjunction with a general-purpose wearable computer for multisensory integration (HDR audio and video), as well as for synthetic synesthesia (sensory substitution) including the addition of new senses (e.g. adding a "Sixth Sense").

This is what led me toward the invention of HDR (High Dynamic Range) sensing for audio, video, radar, and metasensing.

Understanding HDR (High Dynamic Range) Sensing

There are many really great Instructables that teach you how to do HDR.

So what I'm going to provide here is how to understand HDR. This deeper insight will help you take and make better HDR pictures and videos and "audios" (HDR is great for sound recording too, making the HDR video experience complete!). This insight will also allow you to apply HDR to many other sensing tasks such as radar, sonar, and even metasensing (the sensing of sensors and the sensing of their capacity to sense).

Rather than providing or suggesting specific HDR software (I'm actually involved with a number of companies making software, hardware, and related systems), the purpose of this Instructable is to inspire you to create your own!

Try to think beyond the confines of existing SDKs and APIs, and come up with something unique, original, and fun.

Step 1: Collect a Plurality of Differently Exposed Records, Sort Them by Exposure, and Compute Comparagrams


A simple example is a set of differently exposed pictures of the same subject matter. But you can apply this philosophy to just about any kind of recording.

For example, in my childhood, in the early days of audio recording, I remember that most audio devices were monophonic. In our household the record player had just one speaker in it, and so did the radio and the television receiver. But as an audio hobbyist I had a stereo tape recorder, and I even rigged up a wearable computer to record stereo sound in the late 1970s. When recording monophonic material such as my own voice, I connected the two stereo channels in parallel (both fed from the same microphone), and set the left channel very quiet and the right channel very loud. Thus when I was speaking, the left channel never saturated, but the right channel did. But when others far from me were speaking, the left channel was too quiet (lost in background noise), while the right channel was just perfect. Later I could combine these two recordings to get a single recording having a massive dynamic range, way beyond what any sound recording device of that day could produce.

I'd discovered something new: a way to combine differently exposed recordings of the same subject matter to obtain extended dynamic range. I also applied this method to photography and video, e.g. to combine underexposed and overexposed video recordings.
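
In the true spirit of DIY, here's a minimal Octave sketch of that two-channel trick. The filenames, the gain ratio k, and the saturation threshold are all illustrative assumptions; the recordings are assumed to be time-aligned and from the same microphone:

[xq, fs] = audioread('quiet.wav');      % the quiet (never-saturating) channel
[xl, fs] = audioread('loud.wav');       % the loud channel, recorded at k times the gain
k = 10;                                 % assumed gain ratio between the channels
thresh = 0.95;                          % treat samples above this as saturated
mask = abs(xl) < thresh;                % where the loud channel is trustworthy
xhat = mask .* (xl / k) + ~mask .* xq;  % combined recording, extended dynamic range
audiowrite('combined.wav', xhat, fs);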

I was also fascinated by Charles Wyckoff's pictures of nuclear explosions featured on the cover of Life Magazine. Because Wyckoff's work came out of MIT, I applied there and was accepted; I became good friends with Wyckoff, and showed him my HDR audiovisual work.

Dynamic range versus Dynamage range:

In addition to audiovisual work, consider other kinds of sensing or metasensing. The main principle here applies whenever sensors can be overexposed without damage, i.e. whenever their dynamage range is greater than their dynamic range. For example, HDR video was not possible back in the old days when video cameras were easily damaged by exposure to excessive light.

Try to find a situation where a sensor saturates and provides poor readings, in the presence of overexposure, but is not damaged by the overexposure.

Modern cameras are like this, as are many microphones, antennas, and other sensors.

At the top of this page is an example picture that I made from two differently exposed pictures of the same subject matter. The two pictures appear below it. The one on the left is taken with an exposure suitable for the bright background behind the people in the picture. The one on the right is taken with an exposure suitable for the architectural details of the building.

Saturation and cutoff

In the leftmost image, many of the details are cut off in the shadow areas.

In the rightmost image, many other details are saturated in the highlight areas.

Try some of the different ways of combining differently exposed pictures: see if you can merge these two images into an image like the one at the top of this page.

Try capturing some of your own datasets in which there are differently exposed records.

Try to understand the mathematical relationship between these differently exposed records.

Let v1 be the first record (without loss of generality we can sort the records according to exposure, so let's say v1 is the record with less exposure). Let v2 be a record of greater exposure, by a factor of some constant k. There is an underlying quantity, q, which we are trying to measure, through some sensor response function, f. So we have v1 = f(q(x,y)), let's say (e.g. if it is a picture or image as a function of (x,y) pixel coordinates), and v2 = f(kq(x,y)).

Now we want to try and understand the relationship between v1 and v2, two differently exposed images. The fundamental way of doing this is through something called the comparagram, which is a powerful yet simple (fundamental) mathematical tool for comparing differently exposed recordings of the same subject matter.
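
To make the notation concrete, here is a tiny Octave sketch that simulates a comparametric pair; the gamma-like response function and the value of k are stand-in assumptions, not a real camera's curve:

q = rand(100, 100);       % the underlying photoquantity q(x,y)
k = 2;                    % exposure ratio between the two records
f = @(u) u .^ (1/2.2);    % an assumed gamma-like response function
v1 = f(q);                % the record with less exposure
v2 = f(min(k * q, 1));    % the record with more exposure, clipped at saturation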

Step 2: Understanding the Comparagram Is the Key to Understanding HDR


Once you understand the comparagram, you're well on your way to understanding the fundamental concept behind HDR, and behind comparametric sensing in general. Comparagrams are the key to understanding the relationship between differently exposed recordings through a sensing apparatus of any arbitrarily-shaped response curve.

The first thing to try and understand is the relationship between two images, v1 and v2, and then later between more than two images, by considering them pairwise. We get this understanding by computing the comparagram between pairs of images. A comparagram is a joint histogram of two records that differ only in exposure. Compute the comparagram of the two images above. You can write your own program to do this, or use one from our VideoOrbits toolkit, http://wearcam.org/orbits/v1.23.tgz

Alternatively, assuming you're using GNU Linux (like most sane Do-It-Yourselfers), you can compute a comparagram in Octave as follows:

Cg = full(sparse(v1+1,v2+1,1,256,256));
assuming you have two greyscale images that each have 256 greyscale values.
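
Here is a slightly fuller sketch along the same lines. Note that sparse() expects double-precision inputs, so uint8 images must be converted first; the filenames are placeholders:

v1 = double(imread('dark.jpg'));    % the lesser exposure
v2 = double(imread('light.jpg'));   % the greater exposure
g1 = round(mean(v1, 3));            % quick greyscale conversion
g2 = round(mean(v2, 3));
Cg = full(sparse(g1(:)+1, g2(:)+1, 1, 256, 256));
imagesc(log(1 + Cg)); axis xy;      % log scale makes the ridge easier to see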

A more computationally efficient way of doing the computation is to use an external C++ file, called from Octave. Here is a simple one that computes comparagrams more quickly: http://wearcam.org/comparagram.cc

Compile it as follows:

$ mkoctfile comparagram.cc

You may need to install liboctave-dev if you get the following message:

The program 'mkoctfile' is currently not installed. To run 'mkoctfile' please ask your administrator to install the package 'liboctave-dev'

If you are your own administrator (as most GNU Linux DIY enthusiasts are), then install it:

$ sudo apt-get install liboctave-dev

In our case, since the images are color (RGB), you will get 3 channels of comparagram data: one that compares the red channel of v1 with the red channel of v2, the next comparing the green channels, and the third comparing the blue channels, thus making the comparagram itself an RGB entity. You can also convert the images to greyscale, in which case the comparagram will have only one channel, since the response function of the camera is roughly the same for each of the three channels.
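
A per-channel comparagram might be computed along these lines (a sketch, assuming v1 and v2 are uint8 RGB images of the same subject matter):

Cg = zeros(256, 256, 3);
for c = 1:3
  a = double(v1(:,:,c));
  b = double(v2(:,:,c));
  Cg(:,:,c) = full(sparse(a(:)+1, b(:)+1, 1, 256, 256));
end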

Here is a textbook definition of the comparagram:

"The comparagram between two images is a matrix of size M by N, where M is the number of gray levels in the first image and N is the number of gray levels in the second image. The comparagram, which is assumed to be taken over differently exposed pictures of the same subject matter, is a generalization of the concept of a histogram to a joint histogram bin count of corresponding pixels in each of the two images. The convention is to have each pixel value from the first image plotted on the first (i.e., “x” ) axis, against the second corresponding pixel (e.g., at the same coordinates) of the second image being plotted on the second axis (i.e., the “y” axis). Since the number of gray levels in both images is usually the same (i.e., 256), the comparagram is usually a square matrix (i.e., of dimensions 256 by 256)." [Intelligent Image Processing, S. Mann, 2001].

If you're going to write your own comparagram program (which I suggest you do, so you learn about it better, and also in the true DIY spirit), here is a nice very simple example to help get you started:

Consider two pictures that are each 3 pixels high and 4 pixels wide:
v1=[
1 3 2 3;
3 2 1 2;
0 0 2 0
]

and

v2=[
2 3 3 3;
2 2 2 2;
0 1 3 0
].

The comparagram is a two dimensional array of size M by N where M is the number of grey values in the first image, and N is the number of grey values in the second image, where entry C[m, n] is a count of how many times a pixel in image 1 has greyvalue m and the corresponding pixel in image 2 has greyvalue n. In this case both images have 4 grey values, so the comparagram is a 4 by 4 matrix, given by:

Cg = full(sparse(v1+1, v2+1, 1, 4, 4))   % typed in Octave or MATLAB

Cg=[
2 1 0 0;
0 0 2 0;
0 0 2 2;
0 0 1 2
].

Summing across rows of the comparagram gives the histogram of the first image: h1 = [3 2 4 3], and summing down columns of the comparagram gives the histogram of the second image: h2 = [2 1 5 4]. Summing all the entries in the comparagram gives 12, which is the total number of pixels.
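
You can verify those sums directly in Octave:

v1 = [1 3 2 3; 3 2 1 2; 0 0 2 0];
v2 = [2 3 3 3; 2 2 2 2; 0 1 3 0];
Cg = full(sparse(v1(:)+1, v2(:)+1, 1, 4, 4));
h1 = sum(Cg, 2)'    % rows    -> [3 2 4 3], the histogram of v1
h2 = sum(Cg, 1)     % columns -> [2 1 5 4], the histogram of v2
npix = sum(Cg(:))   % -> 12, the total number of pixels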

A simple exercise to help you understand comparagrams, what they can do, and how to use them:

Do this simple exercise and I can promise a new world of insight: a kind of "aha" moment that, for many of my students, has marked the beginning of a new way of looking at the world of comparametric sensing.

Take any image, like the one at the top of this page.

Go into an image editor like GIMP (I prefer Open Source GNU Linux) and select "Curves" from the "Colors" menu.

This lets you "Adjust Color Curves".

Since the image is greyscale (I suggest starting with a greyscale image) you're simply adjusting greylevels.

Create whatever shape you want, on the curve.

Now save the result under a new file name.

Compute the comparagram of this new image against the original image.

What you get (see above) is the curve that you created.

In other words, the comparagram extracts (recovers) the curve from the image data.

Note that in the literature the "X-axis" (first axis) runs from left to right and the "Y-axis" runs from bottom to top, whereas computer arrays (e.g. in the C programming language) are indexed with the first axis running top to bottom and the second running left to right, so you may have to rotate the comparagram 90 degrees to get it to line up with the Curves.

Try this a couple of times with a couple of differently shaped Curves.
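
If you'd rather stay on the command line than use GIMP, the same exercise can be sketched in Octave; the filename and the particular curve shape here are arbitrary assumptions:

orig = double(imread('grey_original.png'));    % any greyscale image
curve = round(255 * ((0:255) / 255) .^ 0.5);   % an arbitrary "Curves" shape
bent = curve(orig + 1);                        % apply the curve as a lookup table
Cg = full(sparse(orig(:)+1, bent(:)+1, 1, 256, 256));
imagesc(log(1 + Cg)); axis xy;                 % the ridge traces the curve you drew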

Now you can understand the comparagram as a fundamental tool for understanding the relationship between images of identical subject matter that differ only in tonality or greyscale. Differently exposed images exhibit changes in tonality, which become evident in the comparagram. The comparagram captures the essence of two things simultaneously:

  1. A camera's response function;
  2. The difference in photographic exposure across multiple images.

Step 3: Align (register) the Records


You have two choices at this step:

  • use a tripod (or a surveillance camera), which is fixed; or
  • use a wearable camera and align the images.

Once aligned, you are ready to use the comparagram as a way of tonally aligning multiple differently exposed pictures of the same subject matter.

The comparagram is a record of how one of the pictures relates to the other. In a sense it is a recipe (lookup table) that allows you to convert one image to the other. If you can find the ridge along the comparagram, that gives you a lookup table to convert one way, along rows of the comparagram, or the other way along columns.
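
Here is a minimal sketch of that idea, assuming Cg is the comparagram of the darker image v1 against the lighter image v2, computed as in Step 2:

[peak, ridge] = max(Cg, [], 2);   % for each grey level m of v1, the most likely level of v2
lut = ridge - 1;                  % back to 0-based grey values
v2_predicted = lut(v1 + 1);       % v1's tones mapped toward v2's tonality
% (rows of Cg with no data give meaningless entries; in practice the ridge
% should be smoothed, or fit with a comparametric function)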

With HDR photography we often use a tripod or otherwise mount the camera securely in the environment. Likewise with surveillance video the camera is often affixed to a building. But with wearable cameras the camera is moving around, as part of an EyeTap or other Digital Eye Glass, for example. In this case, our alignment problem is one of spatial as well as tonal registration. The comparagram gives us the tonal alignment, but more generally we also need spatial alignment, e.g. as is done in panoramic imaging (http://wearcam.org/orbits/). Here is a research paper on that topic: http://www.eyetap.org/papers/docs/icip1996.pdf

Orbits are 360-degree maps, such as spherical or other projections, that allow the eyeglass wearer to look around and see things from every angle.

Start simple: try an example with just two images.

Different exposures can arise naturally with automatic gain control (AGC) or automatic exposure. For example, when you point a camera at something really bright, like a light source or, in the example above, an open doorway to a bright outdoor scene, the camera will "dim down", revealing highlight detail. As you swing the camera away from the light source (e.g. to the right in the example above), the camera will "brighten up", revealing shadow detail.

It is in the areas of overlap where interesting things happen. Here we get differently exposed images of the same subject matter, as the camera moves around.

Step 4: Combine the Aligned Images


An important result in comparametric image analysis is being able to "stitch together" multiple images of the same scene to make sense of the world.

Once you have images that are aligned, you essentially have multiple measurements of the same quantity.

Think of it like a voltmeter where you measure voltages on the different settings of the meter, and then you want to combine them all together into a single reading, at each pixel.

It's kind of like voting, where each image gets to "vote" on what a pixel value should be at a particular location. But not all votes are equal. In the dark areas, we want the brighter images to have a stronger vote, because they have a better rendition of those areas. In the bright areas, we want the darker images to have a stronger vote. In the midtones, we want the medium exposures to get the strongest vote, but still with a good contribution from the light and dark images.

So we have a weighted sum, as indicated in the formula above. See http://wearcam.org/comparam.pdf
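
As a minimal sketch of such a weighted combination for two greyscale exposures v1 and v2 (with v2 exposed k times more than v1): the inverse response and the hat-shaped certainty weight below are stand-in assumptions; in the comparametric papers the response is estimated from the comparagram and the certainty comes from its derivative.

k = 2;                                        % assumed exposure ratio
finv = @(v) (double(v) / 255) .^ 2.2;         % assumed inverse response function
c = @(v) 1 - abs(double(v) - 127.5) / 127.5;  % hat-shaped certainty: midtones count most
q_hat = (c(v1) .* finv(v1) + c(v2) .* (finv(v2) / k)) ./ (c(v1) + c(v2) + eps);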

Alternatively, a much better and much faster way of combining the images is to use a method developed by Mir Adnan Ali and me, in which a very simple LUT (Look-Up Table) is used. The LUT is much like a comparagram (same dimensions and same axes as a comparagram). In this way each pair of images is combined almost instantly (by quick and simple computation: merely look up the result), so it can run at video frame rates. This method runs about 5,000 times faster than any other HDR algorithm currently in use, and can also be implemented in an FPGA.
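
The spirit of that method can be sketched as follows. Building a good lut (from many comparagrams, as in the CCRF papers) is the real work and is not shown here; this just illustrates why the per-pixel cost is so low:

% assume lut is a precomputed 256-by-256 table whose (m+1, n+1) entry
% gives the combined output for the tone pair (m, n)
idx = sub2ind(size(lut), double(v1) + 1, double(v2) + 1);
v_out = lut(idx);   % one table lookup per pixel: fast enough for video rates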

If the result is being used for computer vision, machine learning, or the like (e.g. face recognition by computer), we're done: just present the recovered q(x,y) to the algorithm. If we want to print or display the result, we'll want some spatiotonal mapping (filtering, such as sharpening) to compress the dynamic range down to that of the print or display medium. There are lots of filtering programs and software packages available, but in the true spirit of DIY, try to implement or write something of your own. That way you'll learn a lot more, and have more fun, regardless of how well your result turns out.

Step 5: DIY: Have Fun and Learn by Inventing Something New


By doing it yourself, you have the potential to have a lot more fun and also to learn a lot more.

Moreover, you'll be better prepared to envision and solve totally new problems or dream up completely new ways of applying these concepts. For example, we can apply HDR to scientific sensing and visualization. Above is an example of work undertaken jointly with Ryan Janzen on biological visual metasensing. Metasensing is the sensing of sensing (see my previous Instructable on Metasensing). Here we're visualizing vision. This is like a visual acuity test, and since visual acuity varies widely over our field of view, it is the perfect candidate for HDR metasensing. Here we use a pseudocolor scale to visualize a massive dynamic range in vision. With HDR sensing and metasensing, we can finally see and understand many physical phenomena beyond what was previously possible.

Step 6: Click on "I Made It!" If You Got As Far As the Comparagram

Even if you don't make it all the way through this Instructable, if you got as far as generating a comparagram, please share your results.

Try one or more of the following:

  1. Capture a plurality of differently exposed recordings, such as some differently exposed images;
  2. Construct a comparagram from an image with itself (this will help you understand comparagrams, and the result should be the histogram of the image along the diagonal; see the sketch after this list);
  3. Construct a comparagram from two different images that have the same exposure. Yes, take the same picture twice with exactly the same camera settings, and then compute the comparagram of the two images. You should see some off-diagonal elements due to statistical noise: the "diagonal" line has fattened!;
  4. Construct a comparagram from two differently exposed images, and upload the two images and the resulting comparagram under "I made it!";
  5. Use the comparagram to combine the images;
  6. Generate a CCRF from two images. This will have the same dimensions as the comparagram, and the axes have the same meaning. The CCRF is a close relative of the comparagram and is an efficient way of representing and computing comparametric information;
  7. Use the CCRF to combine two differently exposed images of the same subject matter. Compare the computational efficiency and image quality with (5) above. You should find that the images look better and are computed thousands of times faster.
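
Here is a quick Octave check for exercise 2 above (the filename is a placeholder for any greyscale image):

g = double(imread('any_grey_image.png'));
Cg = full(sparse(g(:)+1, g(:)+1, 1, 256, 256));
h = histc(g(:), 0:255);    % the image's histogram
isequal(diag(Cg), h)       % should print ans = 1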

Comments

LangQ made it! (author), 2016-06-21

Used the sample images in this Instructable and had a try according to my basic understanding. The first image is the processed image and the second one is the comparagram. The .m file attached is the code in Octave, which uses the function from the Image package.

SteveMann (author), replying to LangQ, 2016-06-21

Looks good.

A good next step would be using the certainty functions, then CCRF.

Looking forward to seeing what you come up with next.

JaysonP12 made it! (author), 2016-05-12

Task1: 4 differently exposed images (row1: 1, 2, 3, 4)

Task2: comparagrams of each image with itself (row1: 5, 6)

Task3: comparagram of two different images with the same exposure (row2: 1)

Task4: comparagram of two differently exposed images (row2: 2, 3, 4, 5)

Task5: Use the comparagram to combine the images (row3: 1+2 => 3). I wrote code in MATLAB to combine images 1 and 2 and get image 3. I think it works; image 3 shows more detail than images 1 and 2.

I also wrote the comparagram function in Java (row3: 4)

SteveMann (author), replying to JaysonP12, 2016-06-01

Looks great!

Keep up the good work!

As a next step, maybe try computing a CCRF.

RainS4 made it! (author), 2016-02-18

Lingkai Shen

So I find an orange and try my camera

Image 1: the resultant HDR image from images 2, 3, 4

Image 2: properly exposed picture, as determined by the camera

Image 3: underexposed picture with -1 exposure compensation

Image 4: overexposed picture with +1 exposure compensation

Image 5: comparagram between image 3 (under) and image 2 (proper)

Image 6: comparagram between image 2 (proper) and image 4 (over)

I wrote my own code for computing the comparagram and HDR in MATLAB. My HDR algorithm is a simplistic one: it determines whether a pixel shows detail better brighter or darker. It works! As you can see in the upper left of the orange in Image 1, the texture of the orange is more detailed than that part of Image 2 (which is overexposed).

SteveMann (author), replying to RainS4, 2016-02-18

This looks great.

Very nice to see some good results arrived at in a nice elegant and simple way.

In the Octave "m" file, you can get a good speedup if you use the "sparse" function, e.g. full(sparse(...

SinaPan made it! (author), 2016-02-13

Thanks! This was a lot of fun. Learned a few new concepts and tricks along the way.

After reading the book description of the comparagram, I wrote an algorithm which gave me slightly different results than what I expected, i.e. for the example in your description I get:

Cg' = [
2 1 0 0;
0 0 2 0;
0 0 3 1;
0 0 0 3
]

instead of:

Cg = [
2 1 0 0;
0 0 2 0;
0 0 2 2;
0 0 1 2
]

which seems to have a slightly different distribution. I'm planning to further investigate the differences and will post an update here. However, for the remainder of the experiment I looked at "comparagram.cc" and patched up my algorithm.

I experimented with different exposures and observed how the comparagram changes.

SteveMann (author), replying to SinaPan, 2016-02-13

This looks good. I like how you animated the comparagram, presumably across different k values.

Is this 2, 4, and 8?

Also you might try comparasum (adding or averaging comparagrams to get better statistics).

SinaPan made it! (author), replying to SteveMann, 2016-02-16

If you're referring to the exposure level as K, then yes. All of the comparagrams are between two pictures with different exposure settings. The first one is between -2 and -1; the second one between -1 and 0; the third one between 0 and +1; and finally the last is between +1 and +2.

I was able to get better results by getting the comparasum (averaging) and further using a high pass filter to show where most of the data is being concentrated.

SteveMann (author), replying to SinaPan, 2016-02-17

Excellent!

This looks great!

By the way, I usually use lowercase "k" to represent the ratio between exposures, and uppercase "K" to represent the log of that ratio, e.g. I'd say "k=2" or "K=1". K = log2(k).

hanwang92 made it! (author), 2016-02-08

Han Wang

Image 1: comparagram of same image

Image 2: comparagram of different images same exposure

Image 3: comparagram of different image different exposure

Image 4: combined images with simple weighted algorithm

SteveMann (author), replying to hanwang92, 2016-02-08

You've got 7 images, so you should have 6 comparagrams, with 1 step between each.

Now take those six comparagrams and add them all up (or average them), and you'll have the "comparasum" or average. That should fill in much more.

What you have now looks like the comparagrams of only the darker images.

Comparagrams of lighter images show up near the upper right (top areas), so if you combine them all, it will fill out nicely.

MattK52 made it! (author), 2016-02-08

I took a set of photos of Ryerson from my building at many different exposures and created a program in MATLAB to create the comparagram. For the simple case I converted the images to grayscale and based it on the one channel.


I ran it against varying differences in exposure, from no change (the exact same photo) to a huge change, the max being from 1/10 of a second to 1/640 of a second. The comparagram goes from a straight line to a polynomial curve, and then the curve seems to increase in its exponential factor.

SteveMann (author), replying to MattK52, 2016-02-08

Looks great. Thanks for sharing.

Next try doing a sum of like-comparagrams (i.e. of comparagrams for which the "k" value is the same). This will fill it in more all the way along.

MattK52 made it! (author), replying to SteveMann, 2016-02-08

Alright, here are the comparagrams mixed together; I ended up putting 11 images together in total.

SteveMann (author), replying to MattK52, 2016-02-08

Looks good. As you can see, the data is filled out much nicer.

It looks like the 2nd image is darker than the 1st image.

Usually I begin by sorting the images from darkest to lightest.

MattK52 (author), replying to MattK52, 2016-02-08

The link in step 4 (http://wearcam.org/comparam.pdf) doesn't work :(

SteveMann (author), replying to MattK52, 2016-02-08

Thanks for letting me know.

I've now fixed it by making a symbolic link to this abbreviation:

htdocs> ln -s comparametrics.pdf comparam.pdf

Steve

Helton Chen made it! (author), 2016-02-08

A very educational and fun lab! I did a series of comparagrams with different exposure gaps (f(2q) vs f(q), f(4q) vs f(q), f(8q) vs f(q)) and another series of comparagrams for f(2q) vs f(q), but varying in exposure time (shutter), gain (ISO), or aperture. The results are quite fascinating!

I was able to then merge the images with the comparagram and CCRF. I had some calculation error in generating the CCRF, but the end result of the merged HDR image still turned out acceptable.

Will update my results once I've generated a properly calculated CCRF.

SteveMann (author), replying to Helton Chen, 2016-02-08

Looks great.

Nice to see you got the CCRF. Try building the CCRF with more statistics (larger number of images and comparagrams). You need about 100 or so pairs to get a really good CCRF.

maker_m made it! (author), 2016-02-06

These are the comparagrams constructed from the photos I took just now. Because the contrast of the first pair is very large, the curve in the comparagram is situated very close to the left. Photos in the second set have some minor differences, because people and cars were passing by on the street and clouds were moving, contributing to the noise (wider band) in the comparagram.

Right now, I have a basic understanding of how to combine the 2 images, using the response function and its derivative from the unrolled comparagram to pick the pixel from either image based on their sensitivity, but I haven't worked through all the math and programming to get to that point. -- Annie

SteveMann (author), replying to maker_m, 2016-02-06

Your result looks good so far.

Try an example where there is only a slight difference in exposure, and also where the two images differ only in exposure.

As you do a few examples you'll develop a deeper understanding of the principles of multiple exposures....

maker_m (author), replying to SteveMann, 2016-02-06

Photos from this set differ only in exposure, 1/2 and 1 sec respectively. The curve in the comparagram looks a lot cleaner; I will try curve fitting with this one.

SteveMann (author), replying to maker_m, 2016-02-06

This looks great.

Now try "Comparasum".

That means take a whole bunch of pictures that differ by 1 f-stop (e.g. one is twice the exposure of the other).

Then compute a whole bunch of comparagrams, all from pictures that differ by exactly 1 f-stop.

Then average (or merely sum) the comparagrams together.

What you get will approach the comparagraph in the limit, more or less.

maker_m (author), replying to SteveMann, 2016-02-08

I made more progress on getting the response function f1 today. I first did a curve fit for one of my comparagraphs. Then I used the equation of the curve to unroll points and get the response function in log scale. Then I plotted the derivative of the response function. The next step would be to overlay the derivative curves for different exposures to get the certainty function, so that I can construct HDR photos out of it.

SteveMann (author), replying to maker_m, 2016-02-08

This looks great.

The curves definitely have the right shape.

The certainty function should probably lean a bit more to the right, though.

Try plotting also the certainty in the range (what you have here is domain certainty).

The certainty in range will go from 0 to 255 for a typical 8 bit image.

maker_m (author), replying to maker_m, 2016-02-08

I tried to combine the above images using a step function as the certainty function (attached the result here). I would expect that once I properly construct the certainty function, the combined photo would look better (with less abrupt transitions).

SteveMann (author), replying to maker_m, 2016-02-08

Good work so far!

Yes, if you can soften the transition (i.e. using a less abrupt certainty function), you'll get nicer gradation across regions.

SenYang made it! (author), 2016-02-08

I had some trouble learning the principles of HDR at first, and I am motivated to make an application that attempts to educate others and reduce the learning curve for this subject. Currently the application animates both a comparametric plot, and one of the two images being compared, with pixel trackers in real-time.

As you can see in the screenshots, as the darker parts of the images are being processed, the lower left corner of the comparametric graph is being built. As the brighter parts of the image are being processed, you can see the graph extending toward the upper right along the diagonal. This demonstrates the parts and significance of the plot, and how it can relate to the composition of the images (i.e. the lower-left parts are more concentrated, since most of the image is in the dark). It also demonstrates lighting and noise.

The user may also track and see the points identified on the plot as they move their cursor across the image, as shown.

I am currently finishing up my work on animating the creation of a camera response graph using comparametrics step by step during the lab today.

SteveMann (author), replying to SenYang, 2016-02-08

Excellent! This is very nice to see.

I like the idea of creating teaching tools to teach the concepts of comparametric imaging!

marc_alain made it! (author), 2016-02-07

The first set of photos that I worked with were of this armoire.

There are four comparagrams beside it. In order left-to-right then top-to-bottom:
- a comparagram of one image compared to itself
- a comparagram of two different images with same exposure values taken with the camera set at 50 ISO
- another pair of images with the camera set at 100 ISO
- another pair of images with the camera set at 400 ISO

Since the "fatness" of these lines indicate noise, I wanted to see whether I could capture the increased noise that the camera produces when the ISO is set at higher values. Now we have a tool that can identify the best camera in the house for low light conditions :-)

Afterwards, I took some photos of the school across the street. The trio across the top of the composite are the three original images, each taken in increments of two exposure values. The larger image below is my attempt at writing a weighting algorithm for combining images.

Finally, the last comparagram compares the under-exposed photo with the "properly" exposed photo.

SteveMann (author), replying to marc_alain, 2016-02-07

Looks great! Try summing together comparagrams, e.g. of the medium and dark, and then of the light and medium images: sum the two comparagrams together.

Notice how this fills out the space more nicely along the curve....

KryptoTSD (author), 2016-02-04

I might try this...

SteveMann (author), replying to KryptoTSD, 2016-02-06

If you get as far as computing a comparagram, please share your results.

You can learn a lot by just capturing two images and computing their comparagram.

That's the first step toward understanding comparametric sensing and comparagraphic image compositing.

j.middlefinger (author), 2016-02-03

Wow. I'm trying to pick my jaw up off the floor. What a great write-up. I absolutely love the AR TIG welding portion of the video. To have that kind of instant feedback would be incredible. Having to stop and sharpen tungsten all the time really gets in the way of practice. Are you still in the Boston area?

SteveMann (author), replying to j.middlefinger, 2016-02-03

Thanks for the encouraging words!

In answer to your question, I now split my time between Toronto and Redwood City, California (near Stanford).

TheThinker (author), 2016-02-03

Very nicely done! Inspirational.

KenY11 (author), 2016-02-02

A whole new way to see beyond our senses!

Bullfrogerwytsch. (author), 2016-02-02

Wicked cool
