## Introduction: Shooting for a Homepage Feature: Timelapse and Multi-exposure Photography the DIY Way (Make or Write Your Own Code!)

What I love about Instructables is that it is photo-centric: the first thing you see when creating a new Instructable is "Add Images", before any text entry dialog appears! In the world we live in today, pictures are everything.

My last Instructable was featured on the Instructables homepage. The editors wrote that "it's excellent, wonderful, and just plain awesome."

People have been asking me how I made my pictures, so in this Instructable I show how to make animated .gif images and multiple exposures in a true DIY way, using very simple computer programs you can build or write yourself. I also note some simple tips on shooting for Instructables, e.g. give image borders RGB=(246,246,246) and make sure images uploaded to Instructables do not exceed 10,000,000 bytes.

Since childhood I've been fascinated by the passage and stoppage of time, inspired by the work of Harold Edgerton (pictures of bullets going through apples, etc.) and I've been particularly interested in timelapse on the spacetime continuum of radio waves that normally travel at the speed of light, i.e. making them visible.

Whatever photographic subject you're shooting, timelapse is a good medium in which to express it, and especially if your project emits light, there are some great opportunities for multiple exposure photography and something I invented many years ago I call the Computer Enhanced Multiple Exposure Numerical Technique (CEMENT), a generalization of one of my other inventions, HDR (High Dynamic Range) Imaging.

Whereas HDR uses comparametric equations to analyze and combine differently exposed pictures of the same subject matter, CEMENT uses superposimetric equations to analyze and combine differently illuminated pictures of the same subject matter.

## Step 1: Make or Improvise an Optimal Environment for Shooting Great Photos

There are two key tricks to good documentation of an Instructable: (1) consistency (e.g. so pictures are in good alignment), and (2) good lighting.

For consistency, a good tripod is useful, but if you don't have a tripod, or if a tripod is getting in the way of the shot (getting in the way of your working, or casting shadows on your work), you can affix the camera to a home made mount. If I'm using a camera phone, I usually secure it to a support overhanging my work area.

I also usually affix things to the work area, e.g. I glue the breadboard to the desk temporarily (using hot melt glue -- enough to hold the board securely but not so much as to make it difficult to remove).

I have a DIY-style holography lab with an optical table that has threaded holes for securing objects, but any good solid workbench will work quite well. Try to avoid flimsy tables that shake between exposures.

For lighting I prefer to experiment and improvise with cheap and simple home-made fixtures, e.g. simple lamp holders, lamps, etc. DC light sources tend to give better results (less flicker, etc.). We like to build our own LED power supplies so we have better control of the lighting. In this way, for example, the lights can be blanked more quickly than by simply cutting the power to a light that has a filter capacitor in it (and therefore a lot of afterglow). Blanking can even be done on a per-frame basis (e.g. to have the lights on in alternating frames of video, using an LM1881 sync separator, thereby generating lightspaces).

The more control you can establish over lighting, the better you can manage it creatively and artistically. Plus I simply prefer the DIY approach of building my own systems rather than using expensive professional photography equipment.

In the old days I used to shoot with film which had some gate weave as the film moved around from frame-to-frame, requiring pins to register the sprocket holes, but with modern filmless cameras, getting good stability in an image sequence is much simpler.

For manual cameras I usually affix the zoom and focus with a small drop of hot melt glue to keep the lens from jiggling around while I'm working.

I also use manual exposure settings so that the exposure doesn't dance around from frame-to-frame as subject matter varies. A manually operated remote trigger is very helpful, to grab a frame for each step of a sequence. Typically I like to grab at least one frame for every component or wire inserted onto a breadboard. These can be downsampled later if desired.

In my studio, lab, or the like, I usually paint the walls black, and wear black clothing so that I have better control of the lighting. If you don't want to commit to black walls, you can temporarily use some black cloth to make a "blackground" or "blackdrop" (black background or black backdrop). I find that the choice of lighting is far more important than the choice of camera; most modern cameras have enough resolution, so the difference between a great picture and a good picture is in lightspace rather than imagespace.

For the pictures in my last Instructable I glued a piece of black acrylic to my table, and then glued my breadboard to the acrylic. I also glued two desk lamps to the acrylic and glued one floor lamp to the floor to keep it stable. After completing the circuit on the breadboard I pried the glue away so that I could wave the breadboard back and forth and show the radio waves as an augmented reality overlay.

The human eye does a really good job of integrating light, so that when you wave something back and forth the eye can see it nicely. But many cameras do a poor job of capturing a true and accurate rendition of what the eye sees.

If your project produces light, you have a great opportunity to make it really shine, by using multiple exposures to capture the project the way the human eye perceives it. In my case I took a set of pictures in ambient light, which I saved with filenames like a1.jpg, a2.jpg, a3.jpg, etc. Then I shot another sequence of images in the dark, with longer exposures, to show light trails the same way that the eye sees them. I labeled these h1.jpg, h2.jpg, h3.jpg, ... for "head", and t1.jpg, t2.jpg, t3.jpg, etc., for "tail". The above example shows three ambient light images in the top row, three "heads" in the middle row, and three "CEMENTs" in the bottom row. Each CEMENT was made by CEMENTing the two images above it.

## Step 2: CEMENT (Computer Enhanced Multiple Exposure Technique)

CEMENT (Computer Enhanced Multiple Exposure Numerical Technique) is a concept and simple computer program that I created about 30 years ago, in the 1980s, in FORTRAN and then ported to "C". I still use it regularly (several times a day in a typical workday) and in true DIY style it is best kept raw and simple (e.g. command line interface, nothing too fancy or sophisticated). This is all so simple in fact that you can easily write it yourself without being held prisoner to any API or SDK!

Yet it gives you a powerful tool for managing lighting and exposures.

Over the years I've found that pixel count (more megapixels) matters less than dynamic range, the range of lightspace, and lighting in general. My HDR eyeglass runs at only NTSC resolution, yet allows me to see better than most cameras, owing to a dynamic range of more than 100,000,000:1, even though the pixel count is not very high.

The best way to get control over exposures is to use multiple exposures, and manage each exposure separately. When shooting something that has LED lights on it, or a video display, or TV screen, for example, one shot taken with flash or ambient light, and another taken without flash or without the ambient light (e.g. in the dark) can be combined using the Computer Enhanced Multiple Exposure Numerical Technique (CEMENT) that I invented for combining multiple differently illuminated pictures of the same scene or subject matter.

Above you can see examples of pictures I took with a 4-hour long exposure, and a ten-year-long exposure, using CEMENT (HDR with 9 exposure brackets every 2 minutes for 10 years).

I spent most of my time working through the philosophical, inventive, and mathematical aspects of CEMENT and less time polishing code, so the programs are very primitive and simple, in true DIY style; don't expect great code. You can download it from http://wearcam.org/cement.tgz

Here's also a mirror site in case wearcam.org is busy serving requests:

http://www.eyetap.org/cement.tgz

CEMENT is meant to be run on a simple GNU Linux computer system.

Make (compile) the program using gcc.
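Here's a rough sketch of what that might look like from the command line. The exact source filenames, and whether a Makefile is included, may differ from what's actually in the archive, so check any README inside:

wget http://wearcam.org/cement.tgz
tar xzf cement.tgz
cd cement                    # directory name is an assumption; use whatever tar created
make                         # if a Makefile is provided
# otherwise compile the tools directly, e.g. (filename is hypothetical):
# gcc -O2 -o cementinit cementinit.c -lm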

If you have too much trouble getting it to compile, you can skip ahead to Step 3, and do it using Octave instead.

In the main CEMENT directory there are some example images you can learn and test with. See that these are present:

$ ls *.jpg

sv035.jpg sv080.jpg sv097.jpg sv100.jpg sv101.jpg

Now you can try CEMENT.

First generate a lookup table:

$ makeLookup

With CEMENT, images are combined in lightspace, so you first convert one of the images to lightspace, CEMENT it to another image, and then convert the result back to imagespace.

If you care about this you can read more about comparametric and superposimetric equations, or you can just assume we're doing the math right, and continue.

Once you generate the lookup table, you can apply it to the first image, e.g. let's say we want to CEMENT 35 and 80 together, we'll begin by initializing with sv035.jpg using RGB (Red, Green, Blue) values 1 1 1 (white):

$ cementinit sv035.jpg 1 1 1 -o spimelapse.plm
Init sv035.jpg (powLookup22.txt) 1 1 1 100%

If you forgot to makeLookup you'll get an error message:

Unable to open powLookup22.txt.
Segmentation fault

I love machines, so rather than exit gracefully, I print a warning message and then let the raw ungraceful exit occur.

Once you get cementinit going on sv035.jpg you've created a Portable Lightspace Map, with filename spimelapse.plm

Now CEMENT the second image into that PLM:

$ cementi spimelapse.plm sv080.jpg 1 1 1
p: 2.2 exp: 22 filename: powLookup22.txt
Add sv080.jpg 1 1 1 100%

and convert the result back to imagespace:

$ plm2pnm spimelapse.plm -o spimelapse.jpg
Create spimelapse.jpg (powLookup22.txt) -1 -1 -1 100%

Now you've just CEMENTed two pictures together!
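If you find yourself doing this a lot, you can wrap the three steps into one little shell script. Here's a minimal sketch using the same commands and arguments shown above; it assumes the CEMENT binaries are on your PATH. Save it as something like cement2.sh and run it as "./cement2.sh sv035.jpg sv080.jpg":

#!/bin/sh
# CEMENT two images together: $1 is the first (init) image, $2 the second.
makeLookup
cementinit "$1" 1 1 1 -o spimelapse.plm
cementi spimelapse.plm "$2" 1 1 1
plm2pnm spimelapse.plm -o spimelapse.jpg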

If you got this far, please click "I made it!" and upload the two input images and the CEMENTed result.

## Step 3: Check Your Results: How to Write Your Own Version of CEMENT and Test It to See How Well It Works!

How do we know how well CEMENT works?

One way to test it is to take 3 pictures of a scene or object lit by 2 lights, as shown above (from our ICIP2004 paper; see the reference at the end of this step).

The first picture, call it "v1.jpg", is a picture taken with one light turned on. Call that one light lamp 1. In our case, that's the lamp to the left of our studio space (notice how it casts shadows to the right of their corresponding objects).

The second picture, call it "v2.jpg", is a picture with that light turned off and another light turned on, say lamp 2, so v2 is the picture as lit only by lamp 2. In our case, lamp 2 is to the right of our studio space (notice how it casts shadows to the left of their corresponding objects).

The third picture, call it "v3.jpg", is a picture with both lights turned on together. Notice how we see double shadows in this picture.

Now try CEMENTing v1 and v2 together, call the result "v12.jpg".

Now test to see how similar v12 is to v3.

The easiest way to read these images into an array is to download the raw images:

http://wearcam.org/instructableCEMENT/octave_scrip...

http://wearcam.org/instructableCEMENT/octave_scrip...

http://wearcam.org/instructableCEMENT/octave_scrip...

but if you have a slow net connection, just grab the .jpeg images and decompress them:

djpeg -gray v1.jpg > v1.pgm
djpeg -gray v2.jpg > v2.pgm
djpeg -gray v3.jpg > v3.pgm

then edit out the header so you have the raw data, saved, let's say, as files "v1", "v2", and "v3".
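If you'd rather not edit the header out by hand, here's a minimal sketch that strips it with ordinary shell tools. It assumes djpeg wrote a binary (P5) .pgm whose header is exactly three lines ("P5", then "width height", then "255"); adjust it if your header contains comment lines:

for f in v1 v2 v3; do
    hdr=$(head -n 3 $f.pgm | wc -c)       # header length in bytes
    tail -c +$((hdr + 1)) $f.pgm > $f     # keep only the raw pixel data
done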

You can do this in Matlab, but if you're in the true DIY spirit, you'll prefer to use the free+opensource program "octave": apt-get install octave, and then try this:

fid1=fopen('v1'); v1=fread(fid1); fclose(fid1);
fid2=fopen('v2'); v2=fread(fid2); fclose(fid2);
fid3=fopen('v3'); v3=fread(fid3); fclose(fid3);
V1=reshape(v1,2000,1312); % these dimensions are assuming you downloaded from wearcam
V2=reshape(v2,2000,1312);
V3=reshape(v3,2000,1312);
colormap("gray");
image(V1/4);
image(V2/4);
image(V3/4);
V12=V1+V2;
e=sum(sum((V12-V3).^2))

Which returns:

e = 9.0995e+09

If you downloaded from Instructables, the image dimensions may have changed, e.g. if the dimensions are something like 1024 by 672, then change the above reshape commands to:

V1=reshape(v1,1024,672);
and the same for V2 and V3.

We have just CEMENTed the two single-lamp images together in Octave, by simply adding them, and tested to see how similar the result is to the picture with both lights on.

Now instead of adding them, try taking the square root of the sum of their squares, i.e. like a "distance" metric:

V12=sqrt(V1.^2+V2.^2);
e=sum(sum((V12-V3).^2))

and what you get is a much lower error:

ans = 6.5563e+08

Now try cubing them and taking the cube root; here the error is a little bit lower still:

ans = 2.2638e+08

More generally, we can raise them to some exponent, n, and then take the nth root. Of course n needn't necessarily be an integer. So let's try a whole bunch of different "n" values and plot a graph of the error as a function of "n". We can do this nicely by writing a simple Octave function in a file named "err.m":

function err=err(v1,v2,v3,N)
% err.m -- for each exponent n in the vector N, CEMENT v1 and v2 as
% (v1.^n + v2.^n).^(1/n) and return the squared error relative to v3.
if(nargin~=4)
disp("err must have exactly 4 input arguments: v1,v2,v3,N");
return;
end%if
if(min(size(N))>1)
disp("err only deals with vector N, not arrays of N");
return;
end%if
for k=1:length(N)
n=N(k);
v12=(v1.^n+v2.^n).^(1./n);
err(k)=sum(sum( (v12-v3).^2 ));
end%for

Now we can test CEMENT for a whole bunch of "N" values in a long list, e.g. let's try N values going from 1 to 10 in steps of 0.01 (901 values in all):

N=(1:.01:10).';

The error for each of these is in:

e=err(V1,V2,V3,N);

which is at a minimum around 3.27 or 3.28 (close to equal for those values of N), so let's say that the optimal value of "N" is 3.275.

The optimal value of "N" depends on the response function of a particular camera, which in my case is the Nikon D2h.

Others who have done this Instructable report "N" values for other cameras, so I propose the creation of a "Massive Superposimetric Chart" much like "The Massive Dev Chart" for film:

Massive Superposimetric Chart:

Camera make and model      "n" (response function exponent)
Nikon D2H                  3.275
Nikon D60                  3.3
Sony RX100                 2.16
Canon PowerShot S50        2.1875

Going further:

We've used a simple power law here for illustrative purposes, but in fact, we can do something a lot more powerful: we can actually unlock the secrets of any camera, non-parametrically, i.e. determine its true response function, from three pictures, as above, but instead of solving for one "n" we solve for the 256 quantimetric camera response function entries. See for example:

Manders, Corey, Chris Aimone, and Steve Mann. "Camera response function recovery from different illuminations of identical subject matter." Proceedings of the IEEE International Conference on Image Processing (ICIP 2004), vol. 5, pp. 2965-2968, 2004.

## Step 4: Automate CEMENT With TROWEL

TROWEL is a tool for applying CEMENT.

In true DIY spirit TROWEL and CEMENT are command-line based. Keep things pure and simple to start with. Then add fancy GUIs later (we wrote something called X-CEMENT, an X Windows front end, and eCEMENT, an online web-based interactive CEMENT, etc., but let's not go there yet!).

TROWEL is an acronym for To Render Our Wonderful Excellent Lightvectors, and it is simply a PERL script that reads a file named "cement.txt" and calls CEMENT for each line of the cement.txt file that specifies a filename and RGB (Red, Green, and Blue) values.

So for the previous example, create a cement.txt file like this:

sv035.jpg 1 1 1
sv080.jpg 1 1 1

and then run TROWEL with that cement.txt file in the current working directory:

$ trowel
Init sv035.jpg (powLookup22.txt) 1 1 1 100% p: 2.2 exp: 22 filename: powLookup22.txt

Add sv080.jpg 1 1 1 100%

Create trowel_out.ppm (powLookup22.txt) -1 -1 -1 100%

Try experimenting with different colors and different RGB values, e.g. try changing the cement.txt file to:

sv035.jpg 1 1 0
sv080.jpg 1 2 9

and you will get something with nice yellow light coming from the window, and a bluish sky and building.

These are just low-resolution test images to run quickly, and come with the cement.tgz file for testing purposes.

You can get the raw data for the above picture at full resolution from http://wearcam.org/convocationhall.htm

and click "index of lightvectors" to see the individual exposures that made this multi-exposure picture,

and if you want to reproduce my result exactly, use this textfile: http://wearcam.org/ece1766/lightspace/dusting/conv...

and rename it to "cement.txt" and then run trowel on those lightvectors.
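If you have a whole directory full of lightvector images you don't need to type cement.txt by hand. Here's a minimal sketch that builds it in a loop and then runs TROWEL; the v*.jpg filename pattern and the unit (white) weights are just examples:

#!/bin/sh
# Build cement.txt with one line per lightvector, then run TROWEL on it.
rm -f cement.txt
for f in v*.jpg; do
    echo "$f 1 1 1" >> cement.txt
done
trowel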

## Step 5: Making Image Sequences With CEMENT

Instructables.com creates a light grey border around each picture. If you're creating diagrams for an Instructable, like the above diagram, you should set the background color to this same light grey, specifically RGB=(246,246,246)=#F6F6F6, because if you leave it as NaN (transparent or undefined) it gets set to black. I created the above drawing using Inkscape and then converted the SVG to a PNG file (it would be nice if Instructables had native SVG support for vector graphics).
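One possible way to do that conversion from the command line is with ImageMagick's convert, flattening the transparent background onto the Instructables grey. This is just a sketch (the filename diagram.svg is an example; Inkscape's own PNG export with the background set to #F6F6F6 works just as well):

convert diagram.svg -background "#F6F6F6" -flatten diagram.png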

In making a sequence of images, I usually use CEMENT to make each frame of the image sequence, doing this in a shell script, i.e. calling TROWEL from within a shell script, usually BASH, usually with a file named "cements.sh" in the same directory as the images being CEMENTed.

Here's an example "cements.sh" file where I generate 2 frames called out101.jpg and out102.jpg. The first frame is made by CEMENTing a1.jpg (ambient picture #1) and h1.jpg ("head" picture #1), into output frame 101, and then the second frame is made by CEMENTing a2 and h2 into output frame 102. The other image "r.jpg" is just the radar lit up and nothing else.

#!/bin/sh
echo "a1.jpg 1 1 1" > cement.txt
echo "h1.jpg 1 1 1" >> cement.txt
echo "r.jpg 1 1 1" >> cement.txt # radar only lit up
trowel
cjpeg -quality 98 trowel_out.ppm > out101.jpg
echo "a2.jpg 1 1 1" > cement.txt
echo "h2.jpg 1 1 1" >> cement.txt
echo "r.jpg 1 1 1" >> cement.txt # radar only lit up
trowel
cjpeg -quality 98 trowel_out.ppm > out102.jpg
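With more than a couple of frames it's easier to write the same thing as a loop. Here's a minimal sketch under the same naming convention (a1.jpg/h1.jpg, a2.jpg/h2.jpg, ..., plus r.jpg); the frame count of 10 is just an example:

#!/bin/sh
# Loop over frames 1..10, CEMENTing each ambient/head pair (plus the
# radar-only lightvector) into out101.jpg, out102.jpg, ...
n=1
while [ $n -le 10 ]; do
    echo "a$n.jpg 1 1 1"  > cement.txt
    echo "h$n.jpg 1 1 1" >> cement.txt
    echo "r.jpg 1 1 1"   >> cement.txt    # radar only lit up
    trowel
    cjpeg -quality 98 trowel_out.ppm > out$((100 + n)).jpg
    n=$((n + 1))
done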

You can get that raw data and images I used from http://wearcam.org/swim/swimstructable/swimstruct...

Here's the shell script I wrote to make the main image used in my homepage Feature last week:

## Step 6: Now Make a ".gif" File But Be Sure Not to Exceed 10,000,000 Bytes!

One thing I really love about the Instructables.com website is its excellent handling of .gif images.

In the self-portrait above, I took one picture with flash, and then 35 long exposure pictures with a light bulb, with the cord defining an arc in front of the surveillance camera. I'm using a tube-type television receiver and amplifier system I created more than 30 years ago, in which four 6BQ5 tubes in a push-pull configuration (two in parallel for push and two in parallel for pull) drive a 220-volt light bulb directly with an amplified NTSC television signal. This results in video feedback as described in one of my earlier Instructables.

A nice feature of CEMENT is that you can keep CEMENTing in new lightvectors. In the above sequence, the first image is the one with flash, then the next one has the first bulb trace CEMENTed in, then the next bulb trace is CEMENTed into that total, and so on.

Here's the simple two-line shell script called "cementij.sh" I wrote to do the above (calling cementij.sh from another script, once for each output image):

#!/bin/sh
cementi temp.plm img$1.jpg 1 1 1
plm2pnm temp.plm -o steve$1.jpg
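The outer script that calls cementij.sh isn't shown here, but here's a minimal sketch of what it might look like, assuming the flash exposure is named flash.jpg and the bulb traces are img1.jpg through img35.jpg (both names are assumptions; substitute your own). It initializes the running PLM from the flash picture, then CEMENTs each bulb trace in turn:

#!/bin/sh
# Initialize the running lightspace map from the flash exposure...
cementinit flash.jpg 1 1 1 -o temp.plm
# ...then accumulate each of the 35 bulb-trace exposures.
i=1
while [ $i -le 35 ]; do
    ./cementij.sh $i
    i=$((i + 1))
done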

Finally, at the end, I CEMENTed in the whole thing at double weight, and quadruple weight, etc., to build it up to a final crescendo of "(sur)veillance flux". Lastly, if the final image is the most interesting, rename it so it comes up first in the globbing of filenames when generating the .gif image. In this way, when the .gif file is not animated (e.g. while loading initially, or when iconified) the first frame (and sometimes the only frame visible) will be the most interesting of the frames.

Make sure none of your .gif images exceeds 10,000,000 bytes, or it just won't work when you upload to Instructables, and there's no warning message (it simply fails to load).

In the true DIY spirit, I like simple command-line interfaces, and my command of choice to generate the .gif files is "convert".
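For example, here's a minimal sketch of turning a directory of CEMENTed frames into an animated .gif with ImageMagick's convert; the frame pattern, delay, and output size are just examples, and you should check the file size before uploading:

convert -delay 20 -loop 0 out1*.jpg -resize 960x640 animation.gif
ls -l animation.gif    # check the size before uploading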

The main picture from my previous homepage Feature Instructable was generated using the following script:

convert.sh

This generates various sizes of .gif files I used internally, with one of them being generated at just under 10,000,000 bytes.

To get the exact size, I simply generate something near the size I think it should be, and then correct it. For example, the picture at the top of this page at original resolution of 1024 lines (steve1536x1024, close to HDTV resolution) was too big (22894455 bytes) by a factor of 22894455/10,000,000, i.e. 2.289... times too big.

Take the square root of that ratio, and you get about the size reduction you need to hit the target size: cut each linear dimension by a factor of about 1.5131 and you get 1015x677.
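Here's a minimal sketch of that arithmetic in the shell, assuming the oversized .gif is 22894455 bytes, the target is 10,000,000 bytes, and the frames are 1536x1024:

echo 22894455 10000000 1536 1024 | awk '{
    s = sqrt($1 / $2)                        # linear shrink factor, about 1.5131
    printf "shrink by %.4f: %.0f x %.0f\n", s, $3 / s, $4 / s
}'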

Odd sizes like that don't tend to handle well on computers. So pick the next size down that results in image dimensions that are reasonably composite numbers, e.g. let's take those dimensions, divide by 32 (a typical blocksize in image processing and handling), round down, and then multiply again by 32.

This gives us 960x640, which ends up giving a .gif file that's 9,100,702 bytes, i.e. just under 10,000,000 bytes.

Have fun and make some great .gif pictures the DIY way (e.g. with code you make or write yourself)!

EllenZhu made it! (author)2016-03-03

Camera: iphone 6

Still trying to figure out the best N value

I took some pictures of my desk:

First one: turned on the room light only

Second one: turned on the desk lamp only

Third one: turned on both of the lights

Fourth one: CEMENTed from the first photo and second photo and it seems a little brighter compared to the third photo.

Fifth one: played with different color distributions

Really exciting project! So much fun :)

vimalzxc made it! (author)2016-03-01

From Vimal and Antony Albert Raj Irudayaraj.

We tried to find the n values for various mobile cameras that resulted in the least error while combining two images that differed in the illumination. The iphone and Samsung S6 had n values of 3.08 and 2.78 respectively while the others had relatively higher values of n.

The images are labelled as v1,v2,v12 and v3 as per the convention followed in the instructable.

SenYang made it! (author)2016-02-29

Made similar application that works on Windows. I believe the best way to understand how something works is trying to implement that idea.

The application contains 7 panels. Going from the top left corner to the bottom right I have:

1. picture of the stove light on
2. picture of kitchen light on
3. the plot on the relationship of pixel intensity parameter N, and display of N value when optimized versus the squared error
4. the picture of both lights on
5. application generated image space composition of when merging the two images
6. light space composition (to be done soon) of merged images.
7. user input that specifies RGB composition of both images.

For (Sony RX100 M4) it is computed that the optimal N value is 2.16.

One generated image has image 1 RGB = (100%, 20%, 20%) and image 2 RGB = (20%, 20%, 100%). The other one is (20%, 20%, 100%) and (100%, 20%, 20%).

SteveMann (author)2016-02-29

Looks great!

Very nice and creative!

SenYang (author)2016-02-29

Application of the 'senment' application on previous project on SWIM.

I have one image that is a long exposure of my machine swiped horizontally in space while changing its display to demonstrate my name. And another image of the ambient background, or the setup.

The images are combined into compositions, which are visible on the top left corner of each produced image. I have one that is simply combined, one that changes the colour of the name to cyan and the other one to enhance the 'redness' of the display.

Photocredits: Helton and Steve.

SteveMann (author)2016-02-29

Looks great. Nice use of colour to make the lights stand out better!

SenYang (author)2016-03-01

Update: Implemented light space composition. It looks very similar to that of the image space composition.

SenYang (author)2016-03-01

Under a closer examination it appears that the edges are sharper in the light space composition with respect to the image space composition.

SteveMann (author)2016-03-01

Yes, and also the lightspace composition will look more natural over a wider range of image examples.

asahoo3 made it! (author)2016-02-29

Camera: Nikon D5100

Best N value : 2.17

SteveMann (author)2016-03-01

Looks great!

It looks like there's some relative movement between the camera and the subject matter. Maybe try with better stability....

Helton Chen made it! (author)2016-02-29

I tried finding the n values on multiple cameras; to keep things simple I will separate them into different posts.

First I tried to find the n value for my phone which was a Nexus 5, you can see the plot on image 2 showing the error function output versus different n values. I find 2.06 to be the best for combining the images, as seen in image 1.

The n value we get measures the camera system as a whole, and this includes the image signal processing (ISP) in the camera. Since the image signal processor could alter the image heavily, I decided to test the same image again but in the raw format.

An example of the raw format can be seen in image 3, which is the raw Bayer pattern in the RGGB configuration (therefore it's B&W). The result I got is shown in image 4, which has the lowest error when n is 1.89.

It is interesting to note the n values are different for the same set of images, though it really does not provide much aid when combining images in the compressed JPG format.

SteveMann (author)2016-02-29

Nice to see a range of different cameras and their optimum response functions.

Helton Chen made it! (author)2016-02-29

I then moved on to a bigger sensor, from the 1/3.2-inch phone camera to a 1-inch compact camera, the Sony RX100 M4. I conducted the same experiment and got an n value of 2.35 for JPG and 1.89 for raw.

I was curious to see if the n value would be scene dependent, so I tried a scene with dramatically different lighting as shown in image 4; this results in an interesting error plot (image 5) with the tail of the plot growing much faster than normal. The resulting n for this set of images is 1.45, much lower than the 2.35 measured from image 1.

Personally I think it's reasonable for the n value to go closer to 1 for this specific scene, as the lights are each lighting up nearly independent portions of the image, which means simply adding (equivalent to n=1) will yield the most similar image to the final result.

Helton Chen made it! (author)2016-02-29

Lastly I tried a 35mm full-frame Canon 5D Mark III, with which I got the lowest error when n is 2.96 for JPG and 6.63 for raw.

I kept the scene as similar as possible across the different devices I tested, and it is interesting to see how the n value increases in proportion to the sensor size of the camera (I have yet to figure out why...).

As an interesting application of this project, I created a phone application that simulates a long-exposure image in real time by using the same method to combine viewfinder frames. You can see it in action in image 4, which is me moving an array of LEDs across the frame. It still needs some work but it is great to visualize what the sensor actually sees over time!

marc_alain made it! (author)2016-02-29

The first set of this series was used to determine the N value for my Canon Powershot S50.

The next two composite images show the result of applying the N value for cementing 3 images into one.

WeigenY made it! (author)2016-02-29

I used my iphone 6 camera to take the picture.

The error becomes minimum when N is around 24.1, so I guess I should use a better camera, since it is not a very professional one.

SteveMann (author)2016-02-29

I think your camera might have AGC or automatic something... which needs to be disabled or set to manual, as you might be getting erroneous readings from the fact that the image on the right is darker in places than the middle image for example.

Adding light should never make the image darker.

asahoo3 made it! (author)2016-02-29

Camera : One Plus One phone camera (13 MP)

Best N : 3.12

SteveMann (author)2016-02-29

Looks great!

It is interesting to see that the "n" value is around 3, rather than 2 as with many of the small hand-held cameras.

Is it "OnePlus One" (no space between the "One" and the "Plus")?

asahoo3 (author)2016-02-29

Oh you're right, it is OnePlus One!

It was also interesting to note that the TV anchors are different in the first two pics in the top left corner and the CEMENTed pic converged towards the brighter looking one! :)

SteveMann (author)2016-02-29

Yes that might have thrown it off a bit.

How about trying it again without the TV or anything else in the background that changes from exposure to exposure.

That will probably give you a more true and accurate result.

Li Ren made it! (author)2016-02-29

Camera Model: Canon 5D Mark III

Best n: 1.90

SteveMann (author)2016-02-29

This looks fantastic!

Excellent work!

Interesting to note the quantimetric exponent of 1.9.

It seems like the high end and low end Canon cameras have a quantimetric exponent around 2, but the mid-priced Canon cameras have a quantimetric exponent a little over 3.

SteveMann (author)2016-02-29

What was the "n" value you determined?

Also the lower right image looks a bit darker than the upper right and lower left.

Are you sure you have AGC or ELC or automatic exposure turned off?

Camera should be set to manual exposure.

Neeraj Juneja (author)2016-02-23

Seriously impressive work!

SteveMann (author)2016-02-24

Thank you for the kind words!

maker_m (author)2016-02-24

As mentioned in class, we want to combine 2 images with identical scenery but different light sources as accurately as possible. In other words, the combined image should look very similar to the image taken with both light sources on.

Here is my result:

v1: photo taken with only left light on

v2: photo taken with only right light on

v3: photo taken with both lights on

v12: constructed photo from v1 and v2 at n = 3.3

I combined v1 and v2 using the equation mentioned in class: (v1^n+v2^n)^(1/n) with different n values and used error function to calculate the difference between combined image and v3. I plotted the difference value for different n, and found out that for my Nikon D60 camera n = 3.3 gives me the best result.

Note: taking photos with the smallest frame size and using multithreading in python for batch image processing saved me quite some time.

-- Annie

SteveMann (author)2016-02-24

This looks great!

Also since it is so easy to get n=1 (simply add the two images) and n=infinity (simply take the maximum of the two images) it would be good to also include those data points in the analysis.

ajoyraman (author)2016-02-24

Great! Must try.