Desktop Gigapixel Microscope





In optical microscopes, there is a fundamental trade-off between field-of-view and resolution: the finer the detail, the smaller the region the microscope can image. One way to overcome this limitation is to translate the sample, acquire many high resolution images across a larger area, and stitch them together into a single large field-of-view. In the resulting image, you get to see both the full sample and fine detail in any portion of it. The result is an image consisting of about a billion pixels, far more than the 10 to 50 million pixels in a typical dSLR or smartphone photo. Check out these gigapixel landscapes for an impressive demonstration of the massive amount of information in these images.

In this instructable, I will go over how to build a microscope capable of imaging a 90mm x 60mm field-of-view with pixels corresponding to 2μm at the sample (although I think the resolution is probably closer to 15μm). The system uses camera lenses, but the same concept can be applied with microscope objectives to get even finer resolution.

I uploaded the gigapixel images I acquired with the microscope on EasyZoom:

1970 National Geographic magazine image

Crochet tablecloth my wife made

Miscellaneous electronics

Other resources:

Optical microscopy tutorials:

Optical resolution:

In addition to image stitching, recent progress in computational imaging makes gigapixel microscopy possible without even moving the sample!

Step 1: Supply List


1. Nikon dSLR (I used my Nikon D5000)

2. 28mm focal length lens with 52mm threading

3. 80mm focal length lens with 58mm threading

4. 52mm to 58mm reverse coupler

5. Tripod

6. Seven sheets of 3mm thick plywood

7. Arduino Nano

8. Two L9110 H-bridge modules

9. Two IR emitters

10. Two IR receivers

11. Push button

12. Two 2.2kOhm resistors

13. Two 150Ohm resistors

14. One 1kOhm resistor

15. Remote release for Nikon camera

16. Black poster board

17. Hardware kit:

18. Two stepper motors (I used Nema 17 Bipolar step motor 3.5V 1A)

19. Two 2mm lead screws

20. Four pillow blocks

21. Two lead screw nuts

22. Two bearing slide bushing and 200mm linear shafts:

23. 5V power supply:

24. Wire wrap wire


Tools:

1. Laser cutter

2. 3D printer

3. Allen wrenches

4. Wire cutters

5. Wire wrap tool

Step 2: System Overview

To translate the sample, two stepper motors aligned in orthogonal directions move a stage in the x and y directions. The motors are controlled using two H-bridges and an Arduino. An IR sensor positioned at the base of each stepper motor is used to zero the stages so they don't run into either end of their travel. The dSLR microscope is positioned above the XY stage.

Once the sample is positioned and the stage is centered, you push a button to begin the acquisition. The motors move the stage to the bottom left corner and the camera is triggered. The motors then translate the sample in small steps, as the camera takes a photo at each position.

After all the images are taken, they are stitched together to form a gigapixel image.

Step 3: Microscope Assembly

I made a low magnification microscope with a dSLR (Nikon D5000), a Nikon 28mm f/2.8 lens, and a Nikon 28-80mm zoom lens. The zoom lens was set to an 80mm focal length. Together, the two lenses act like a microscope tube lens and objective lens. The total magnification is the ratio of the focal lengths, around 3X. These lenses are really not designed for this configuration, so to make the light propagate like a microscope, you have to position an aperture stop between the two lenses.

First, mount the longer focal length lens to the camera. Cut a circle out of black poster board with a diameter roughly the size of the front surface of the lens. Then cut a small circle in the middle (I chose about 3mm diameter). The size of this hole determines the amount of light that enters the system, characterized by the numerical aperture (NA). In a well designed microscope, the NA determines the lateral resolution. So why not use a high NA for this setup? There are two major reasons. First, as the NA increases, the optical aberrations of the system become more prominent and limit the resolution. In an unconventional setup like this, aberrations will likely dominate, so increasing the NA eventually stops improving the resolution. Second, the depth of field also depends on the NA: the higher the NA, the shallower the depth of field. This makes it difficult to bring objects that aren't flat entirely into focus. If the NA gets too high, you will be limited to imaging thin samples such as microscope slides.

The positioning of the aperture stop between the two lenses makes the system roughly telecentric. That means the magnification of the system is independent of the object distance. This becomes important for stitching images together. If the object has varying depth, then the view from two different positions will have shifted perspective (like human vision). Stitching images together that are not from a telecentric imaging system is challenging, especially with such high magnification.

Use the 58mm to 52mm lens reverse coupler to attach the 28mm lens to the 80mm lens with the aperture positioned in the middle.

Step 4: XY Stage Design

I designed the stage using Fusion 360. For each scan direction, there are four parts that need to be 3D printed: a motor mount, two slide unit extenders, and a lead screw mount. The base and platforms of the XY stage are laser cut from 3mm thick plywood. The base holds the X-direction motor and sliders, the X-platform holds the Y-direction motor and sliders, and the Y-platform holds the sample. The base consists of three sheets and each platform consists of two sheets. The files for laser cutting and 3D printing are provided in this step. After cutting and printing these parts, you are ready for the next steps.

Step 5: Motor Mount Assembly

Using a wire-wrap tool, wrap wire around the leads of the two IR emitters and two IR receivers. Color code the wires so you know which end is which. Then cut the leads off the diodes so just the wire wrap wires run from them. Slide the wires through the guides in the motor mount and then push the diodes into place. The wires are routed so they are not visible until they exit the rear of the unit, where they can be bundled with the motor wires. Now mount the stepper motor using four M3 bolts. Repeat this step for the second motor.

Step 6: Stage Assembly

Glue together the Base 1 and Base 2 cuts, one of them with hexagonal openings for the M3 nuts. Once the glue has dried, hammer the M3 nuts into position. The nuts will not rotate when pressed into the board, so you will be able to screw in the bolts later. Now glue the third base sheet (Base 3) to cover the nuts.

Now it is time to assemble the lead-nut mount. Clear any extra filament from the mount and then push four M3 nuts into position. They are a tight fit, so make sure you clear out the bolt and nut spaces with a small screwdriver. Once the nuts are aligned, push the lead nut into the mount and attach it with four M3 bolts.

Attach the pillow blocks, slider mounts, and motor mount for the X-direction linear translator onto the base. Put the lead nut assembly onto the lead screw and then slide the lead screw into place. Use the coupler to connect the motor to the lead screw. Place the slider units onto the rods and then push the rods into the slider mounts. Finally, attach the slider mount extenders with M3 bolts.

The X1 and X2 plywood sheets are glued together in a similar way to the base. The same procedure is repeated for the Y-direction linear translator and the sample stage.

Step 7: Scanner Electronics

Each stepper motor has four cables that are connected to an H-bridge module. The four cables from the IR emitters and receivers are connected to the resistors according to the diagram above. The outputs of the receivers are connected to analog inputs A0 and A1. The two H-bridge modules are connected to pins 4-11 on the Arduino Nano. A pushbutton is connected to pin 2 with a 1kOhm resistor for simple user input.

Finally, the trigger for the dSLR is connected through a remote shutter release, as I did for my CT scanner (see step 7 of that project). Cut the remote shutter cable. The wires are labeled as follows:

Yellow – focus

Red – shutter

White – ground

In order to focus the shot, the yellow wire must be connected to ground. To snap a photo, both the yellow and red wires must be connected to ground. I connected a diode and the red cable to pin 12, and another diode and the yellow cable to pin 13. The setup is as described in the DIY Hacks and How-Tos instructable.

Step 8: Acquiring Gigapixel Images

Attached is the code for the gigapixel microscope. I used the Stepper library to control the motors through the H-bridges. At the beginning of the code, you must specify the field-of-view of the microscope and the number of images you want to acquire in each direction.

For example, the microscope I made had a field-of-view of about 8.2mm x 5.5mm. Therefore, I directed the motors to shift 8mm in the x-direction and 5mm in the y-direction. 11 images are acquired in each direction, totaling 121 images for the full gigapixel image (more details about this in step 11). The code then calculates the number of steps the motors need to make to translate the stage by this amount.

How do the stages know where they are relative to the motor? How do they translate without hitting either end? In the setup code, I wrote a function that moves each stage until it breaks the path between the IR emitter and IR receiver. When the signal on the IR receiver drops below a threshold, the motor stops. The code then tracks the position of the stage relative to this home position and never translates so far that the stage runs into the other end of the lead screw.

Once each stage is calibrated, the stage is translated to the center. Using a tripod, I positioned my dSLR microscope over the stage. It is important to align the camera field with the crossed lines on the sample stage. Once the stage is aligned with the camera, I taped down the stage with some painter's tape and then placed the sample on it. The focus was adjusted by moving the tripod in the z-direction. The user then presses the pushbutton to begin acquisition: the stage translates to the bottom left corner and the camera is triggered, and the stage then raster scans the sample while the camera snaps a photo at each position.

Also attached is some code for troubleshooting the motors and IR sensors.

Step 9: Stitching Images

With all the images acquired, you are now faced with the challenge of stitching them all together. One way to handle image stitching is by manually aligning all the images in a graphic program (I used Autodesk's Graphic). This will definitely work, but it can be a painful process and the edges of the images are noticeable in the gigapixel images.

Another option is to use image processing techniques to stitch the images together automatically. The idea is to find similar features in the overlapping section of adjacent images and then apply a translation transform to the image so the images are aligned with one another. Finally, the edges can be blended together by multiplying the overlapping section by a linear weight factor and adding them together. This can be a daunting algorithm to write if you are new to image processing. I worked for a while on the problem, but I could not get a fully reliable result. The algorithm struggled most with samples that had very similar features throughout, such as the dots in the magazine image. Attached is the code I wrote in Matlab, but it needs some work.

The last option is to use gigapixel photography stitching programs. I don't have any to suggest, but I know they are out there.

Step 10: Microscope Performance

In case you missed it, here are the results: magazine image, crochet tablecloth, and miscellaneous electronics.

The specs of the system are listed in the table above. I tried imaging with both a 28mm and a 50mm focal length lens. I estimated the best possible resolution of the system based on the diffraction limit (around 6μm). It is difficult to test this experimentally without a high resolution target. I tried printing a vector file listed on a large format photography forum, but I was limited by my printer resolution. The best I could determine with this printout was that the system had a resolution finer than 40μm. I also looked for small, isolated features on the samples. The smallest feature in the print from the magazine is an ink spot, which I estimated to also be about 40μm, so I couldn't use it to get a better estimate of the resolution. There were small divots in the electronics that were fairly well isolated. Because I knew the field-of-view, I could count the number of pixels spanning a divot to get an estimate of the resolution: about 10-15μm.

Overall, I was happy with the performance of the system, but I have a few notes in case you want to try this project out.

Stability of the stage: Firstly, get high quality linear stage components. The components I used had far more play than I expected. I only used one of the slider mounts in the kit for each rod, so maybe that is why the stage didn't feel very stable. The stage worked well enough for me, but this would become more of an issue for higher magnification systems.

Optics for higher resolution: The same idea can be used for higher magnification microscopes. However, smaller motors with a finer step size will be required. For example, 20X magnification with this dSLR would result in a field-of-view of about 1mm (if the optics can cover that field without vignetting). Electronupdate used stepper motors from a CD player in a nice build for a higher magnification microscope. Another tradeoff will be the shallow depth of field, which means imaging will be limited to thin samples and you will need a finer translation mechanism in the z-direction.

Stability of the tripod: This system would work better with a more stable camera mount. The lens system is heavy and the tripod is tilted 90deg from the position for which it is designed. I had to tape down the feet of the tripod to help with stability. The shutter could also shake the camera enough to blur the images.

Participated in the
Arduino Contest 2019

    26 Discussions


    4 weeks ago

    Hey! First off, congrats on getting famous on Hackaday (you'll definitely want to hit the comments section there for some great ideas)

    Secondly, some thoughts:
    - I think that a common bed scanner would make for a good platform for X/Y movement, as it is already set up for one and you would only need to add the other.
    - To decrease noise by a factor of 2/3/4, for each final shot, take 4/9/16 shots, then average them before using them for any other operation.
    - Using a Super Resolution technique will drastically enhance the quality of the final image. I would follow that procedure right up to the reduction step for each separate image, align the large images, then reduce to final size. (I'm honestly puzzled by why they would use 'Nearest Neighbour' for scaling up, as 'Linear' would seem to me to be the better option.)
    - For better alignment, more overlap is better. Most apps seem to like 60-80%. That would also help with SR.
    - If a Canon camera running CHDK was used, it would be trivial to create multi-shot-averaged, focus-stacked, exposure-bracketed image sets. (Your camera can then be remotely accessed from your PC. The script I used to take perfectly focussed photos of miniatures--photo attached--used 21 shots for a distance of 30mm in 'Super Macro' mode.) The nicest part about using CHDK would be the Arduino itself being able to trigger the camera after moving it to each location; very important if you are taking--for example--16 * 21 * 3 MSA/FS/EB shots (with a very shallow depth of field) per position.
    - Here are some software packages that should help you stitch your photos together:

    Hope this helps!


    Reply 25 days ago

    Thanks for your thoughtful note and letting me know about the Hackaday article! Lots to read over at that link.

    The image averaging idea would work very well for this application because long exposures aren't ideal for my shaky tripod setup. It is a great tip in general for imaging projects.

    Given the optical resolution of the microscope (15 microns), I am oversampling by a factor of about five (3-micron pixels). The super-resolution technique you shared would improve the sampling even more to make the images sharper. I am unsure how much oversampling can push the quality of the image given the resolution of the optical system. From Nyquist theory, the sampling must be at least twice as fine as the optical resolution to avoid aliasing. I wonder how the pixel size of the cameras you mentioned compares to the optical resolution.

    I haven't heard of this focus stacking technique and I am really interested to try it out. Your results look great!


    Reply 24 days ago

    Wow, I'm so surprised to find your project!

    I have been working for several months now on a design implementing this focus stacking and stitching technique with a CNC gantry. I'm thrilled to find others pursuing this at the exact same time; it is quite a moment of convergent thought and discovery. Your tutorial is excellent and well explained. Thank you for sharing your experience. I look forward to sharing and hearing about any further updates as they come along.



    3 months ago

    This is an extraordinarily clever idea. I wondered if perhaps a low-cost 3D printer might work as a positioner, since you already have the XYZ axes; with a suitably lightweight camera and an appropriate bracket, something like the Creality Ender 3 (which I own) should be able to position to approx. 0.05mm accuracy with minimal backlash. After all, that's what it's gotta do as a good 3D printer. Then just use gcode files to control the translation sequence. Camera weight could be offset with a spring assembly mounted to the top of the printer frame, I would think, so that the gantry doesn't have to carry the camera weight; you could virtually 'zero out' the weight, and inertia isn't an issue as you will be moving at very low velocity.


    Reply 4 weeks ago

    I would reverse things, and have the X/Y axes move the item and have the camera itself be locked in place, as whatever you would be photographing would be much lighter than any camera.


    Reply 3 months ago

    I think modifying a cheap 3D printer would definitely work for this project. Good idea to use a spring assembly for the camera mounting; I think this would be the toughest part. Another advantage of using 3D printer hardware is easy control in the z-direction for adjusting the focus, which was a pain to do manually with the tripod.

    Mark Rehorst

    4 weeks ago

    The XY stage would be more stable if you used three bearing blocks instead of two on each axis, and before someone else comments, no, using four would not be even better.

    If you had fine enough position control, and a rigid enough set-up, you could bump the object by fractions of the image sensor's pixel width and maybe get increased resolution in a small area.

    jbumstead, replying to Mark Rehorst

    Reply 4 weeks ago

    Thanks for the feedback on providing stability to the XY stage.

    Your idea for improved resolution using sensor or fine XY stage movement is a really good one. Optotune has a camera sensor that does something similar by tilting the beam using a beamsplitter. I wonder if some of the newer SLRs or mirrorless cameras could achieve this with sensor tilting.

    Mark Rehorst, replying to jbumstead

    Reply 4 weeks ago

    Years ago, when high resolution sensors were rare and expensive, I read about a digital camera that shifted the image sensor laterally by 1/2 pixel horizontally and vertically to quadruple the resolution of the image. I don't know if it became a real product. Digital cameras that have image stabilization in the body use small piezo actuators to move the image sensor to keep it in sync with the moving image from the lens.


    3 months ago

    Thanks! I personally use a camera lens with my phone.


    Tip 3 months ago

    For stitching images you can also use Microsoft Image Composite Editor; it's free.


    Reply 3 months ago

    Just tried out Microsoft ICE. It is incredible! Thanks for the suggestion.


    Reply 3 months ago

    Awesome, thanks for the tip. I will try it out.


    3 months ago

    I use Microsoft ICE for stitching images.
    It is one of the better programs, and it was free last time I checked.


    Reply 3 months ago

    Just tried out Microsoft ICE. It is incredible! Thanks for the suggestion.


    3 months ago

    Hmmm, now I want to mod my ultimaker for this :D


    3 months ago

    There is a freely available (registration/login required) viewer for storing and sharing your gigapixel images via a web browser at
