This instructable explains how to make a monochrome camera using an Omnivision OV7670 image sensor, an Arduino microcontroller, a few jumper wires, and Processing 3 software.
Experimental software for obtaining a color image is also presented.
Press the “c” key to capture a 640*480 pixel image ... press the “s” key to save the image to file. Successive images are sequentially numbered should you wish to create a short time-lapse movie.
The camera is not fast (each scan takes 6.4 seconds) and is only suitable for use in fixed lighting.
The cost, excluding your Arduino and PC, is less than a cup of coffee.
The component parts, without jumper wiring, are shown in the opening photo.
The second photo is a screen-shot showing the Arduino camera software and the Processing 3 frame-grabber. The inset shows how the camera is connected.
The video demonstrates the camera in action. When the “c” capture key is pressed there is a brief flash followed by a burst of activity as the image is scanned. The image automatically appears in the display window once the scan is complete. The images are then seen to appear in the Processing folder following each press of the “s” key. The video concludes by cycling rapidly through each of the three saved images.
Step 1: Circuit Diagram
The circuit diagram, for all versions of this camera, is shown in photo 1.
Photos 2, 3 show how the jumpers-wires and components are connected.
Without the aluminium bracket the images would lie on their side.
Program your Arduino BEFORE attaching any jumper wires to the OV7670 camera chip. This prevents any pins left configured as 5 volt outputs by a previous program from destroying the 3v3 volt OV7670 camera chip.
Step 2: Parts List
The following parts were obtained from https://www.aliexpress.com/
- 1 only OV7670 300KP VGA Camera Module for arduino DIY KIT
- 1 only camera bracket complete with nuts and bolts
- 1 only UNO R3 for arduino MEGA328P 100% original ATMEGA16U2 with USB Cable
The following parts were obtained locally
- 18 only Arduino male-female jumper cables
- 3 only Arduino female-female jumper cables
- 1 only mini bread-board
- 4 only 4K7 ohm 1/2 watt resistors
- 1 only scrap aluminium stand.
You will also need the following datasheets:
Step 3: Theory
OV7670 camera chip
The default output from the OV7670 camera chip comprises a YUV (4:2:2) video signal and 3 timing waveforms. Other output formats are possible by programming the internal registers via an I2C compatible bus.
The YUV (4:2:2) video signal (photo 1) is a continuous sequence of monochrome (black & white) pixels separated by U (blue color difference) and V (red color difference) color information.
This output format is known as YUV (4:2:2) since each group of 4 bytes contains 2 monochrome bytes and 2 color bytes.
To obtain a monochrome image we must sample every second data byte.
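Sampling every second byte can be sketched in a few lines of Java. This is illustrative only: it assumes luma occupies every second byte, and whether the stream starts with a Y byte or a chroma byte (YUYV versus UYVY) depends on how the OV7670 registers are configured.

```java
// Sketch: recovering the monochrome image from a YUV (4:2:2) byte stream.
// Assumes luma (Y) occupies every second byte; whether the stream starts
// with Y or with chroma (YUYV vs UYVY) depends on the OV7670 register setup.
public class LumaSample {
    // Return only the Y (luminance) bytes from an interleaved Y/U/Y/V stream.
    static byte[] extractLuma(byte[] stream, int firstLumaIndex) {
        byte[] luma = new byte[stream.length / 2];
        for (int i = 0; i < luma.length; i++) {
            luma[i] = stream[firstLumaIndex + 2 * i];
        }
        return luma;
    }

    public static void main(String[] args) {
        // One group of 4 bytes = 2 pixels: Y0 U Y1 V (luma-first order assumed)
        byte[] group = {10, 20, 30, 40};
        byte[] y = extractLuma(group, 0);
        System.out.println(y[0] + "," + y[1]); // prints "10,30"
    }
}
```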
An Arduino only has 2K of random access memory, but each frame comprises 640*2*480 = 307,200 data bytes. Unless we add a frame-grabber to the OV7670, all data must be sent to the PC line-by-line for processing.
There are two possibilities:
For each of 480 successive frames, we can capture one line to the Arduino at high speed before sending it to the PC at 1Mbps. Such an approach would see the OV7670 working at full speed but would take a long time (well over a minute).
The approach that I have taken is to slow the PCLK period down to 8uS and send each sample as it comes. This approach is significantly faster (6.4 seconds per frame).
Step 4: Design Notes
The OV7670 camera chip is a 3v3 volt device. The data sheet indicates that voltages above 3.5 volts will damage the chip.
To prevent your 5 volt Arduino from destroying the OV7670 camera chip:
- The external clock (XCLK) signal from the Arduino must be reduced to a safe level by means of a voltage divider.
- The internal Arduino I2C pull-up resistors to 5 volts must be disabled and replaced with external pull-up resistors to the 3v3 volt supply.
- Program your Arduino BEFORE attaching any jumper-wires as some of the pins may still be programmed as an output from an earlier project !!! (I learnt this the hard way ... fortunately I bought two as they were so cheap).
The OV7670 camera chip requires an external clock in the frequency range 10MHz to 24MHz.
The highest frequency we can generate from a 16MHz Arduino is 8MHz but this seems to work.
It takes at least 10 uS (microseconds) to send 1 data byte across a 1Mbps (million bits per second) serial link. This time is made up as follows:
- 8 data bits (8us)
- 1 start-bit (1uS)
- 1 stop-bit (1uS)
The internal pixel clock (PCLK) frequency within the OV7670 is set by bits[5:0] within register CLKRC (see photo 1).
If we set bits[5:0] = B111111 = 63 and apply the formula from the datasheet then:
- F(internal clock) = F(input clock)/(Bits[5:0]+1)
- = 8000000/(63+1)
- = 125000 Hz
- which corresponds to a PCLK period of 8uS
Since we are only sampling every second data byte, a PCLK period of 8uS gives a 16uS sample interval, which is sufficient time to transmit 1 data byte (10uS) and leaves 6uS for processing.
Each VGA video frame comprises 784*510 pixels (picture elements) of which 640*480 pixels are displayed. Since the YUV (4:2:2) output format has an average of 2 data bytes per pixel, each frame will take 784*2*510*8 uS = 6.4 seconds.
This camera is NOT fast !!!
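The timing arithmetic above (serial byte time, CLKRC prescaler, and total frame time) can be checked in a few lines of Java; all values come from the text:

```java
// Check of the timing arithmetic used above.
public class Ov7670Timing {
    public static void main(String[] args) {
        // Serial link: 1 start bit + 8 data bits + 1 stop bit at 1 Mbps.
        double bitTimeUs = 1.0;                       // 1 uS per bit at 1 Mbps
        double byteTimeUs = (1 + 8 + 1) * bitTimeUs;  // 10 uS per byte

        // CLKRC prescaler: F(internal) = F(input) / (bits[5:0] + 1)
        double xclkHz = 8_000_000.0;                  // 8 MHz from the Arduino
        int clkrc = 63;                               // bits[5:0] = B111111
        double pclkHz = xclkHz / (clkrc + 1);         // 125000 Hz
        double pclkUs = 1e6 / pclkHz;                 // 8 uS per PCLK period

        // Full frame: 784*510 pixel slots, 2 bytes per pixel, 1 byte per PCLK.
        double frameSec = 784.0 * 2 * 510 * pclkUs / 1e6;

        System.out.printf(java.util.Locale.ROOT,
                "byte=%.0fuS pclk=%.0fuS frame=%.2fs%n",
                byteTimeUs, pclkUs, frameSec);        // byte=10uS pclk=8uS frame=6.40s
    }
}
```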
The image may be moved horizontally if we change the HSTART and HSTOP values while maintaining a 640 pixel difference.
When moving your image left, it is possible for your HSTOP value to be less than the HSTART value!
Don’t be alarmed ... it is all to do with counter overflows as explained in photo 2.
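The counter overflow can be sketched as a modulo operation. This is an illustrative model only: it assumes the horizontal pixel counter wraps at 784, the total (displayed plus blanked) line length.

```java
// Sketch of the HSTOP "overflow" effect. Assumes the horizontal pixel
// counter wraps at 784, the total (displayed + blanked) line length.
public class HorizontalWindow {
    static final int LINE_LENGTH = 784;   // total pixel slots per line
    static final int WINDOW = 640;        // displayed pixels per line

    // HSTOP is HSTART + 640, modulo the line length.
    static int hstop(int hstart) {
        return (hstart + WINDOW) % LINE_LENGTH;
    }

    public static void main(String[] args) {
        System.out.println(hstop(100)); // 740: HSTOP > HSTART, nothing unusual
        System.out.println(hstop(200)); // 56:  counter wrapped, HSTOP < HSTART
    }
}
```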
The OV7670 has 201 eight-bit registers for controlling things such as gain, white balance, and exposure.
One data byte only allows for 256 values in the range 0 to 255. If we require finer control then we must cascade several registers. Two bytes give us 65,536 possibilities ... three bytes give us 16,777,216.
The 16 bit AEC (Automatic Exposure Control) register shown in photo 3 is such an example and is created by combining portions of the following three registers.
- AECHH[5:0] = AEC[15:10]
- AECH[7:0] = AEC[9:2]
- COM1[1:0] = AEC[1:0]
Be warned ... the register addresses are not grouped together !
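Assembling the 16-bit AEC value from its three register fields is straightforward bit-shifting. A small Java sketch (register names AECHH, AECH, and COM1 follow the OV7670 datasheet):

```java
// Assembling the 16-bit AEC exposure value from its three register fields.
// Register names (AECHH, AECH, COM1) follow the OV7670 datasheet.
public class AecRegisters {
    // Combine AECHH[5:0], AECH[7:0] and COM1[1:0] into AEC[15:0].
    static int packAec(int aechh, int aech, int com1) {
        return ((aechh & 0x3F) << 10)   // AECHH[5:0] -> AEC[15:10]
             | ((aech  & 0xFF) << 2)    // AECH[7:0]  -> AEC[9:2]
             |  (com1  & 0x03);         // COM1[1:0]  -> AEC[1:0]
    }

    public static void main(String[] args) {
        // All bits set in each field gives the maximum 16-bit exposure value.
        System.out.println(packAec(0x3F, 0xFF, 0x03)); // prints 65535
    }
}
```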
A slow frame rate introduces a number of unwanted side effects:
For correct exposure, the OV7670 expects to work at a frame rate of 30 fps (frames per second). Since each frame takes 6.4 seconds, the electronic shutter is open roughly 190 times longer than normal, which means all images will be over-exposed unless we alter some register values.
To prevent over-exposure I have set all of the AEC (auto exposure control) register bits to zero. Even so a neutral density filter is needed in front of the lens when the lighting is bright.
A long exposure also appears to affect the UV data. As I have yet to find register combinations that produce correct colours ... consider this to be work in progress.
The formula shown in the data sheet (photo 1) is correct but the range only shows bits[4:0] ?
Step 5: Timing Waveforms
The note in the bottom left corner of the “VGA Frame Timing” diagram (photo 1) reads:
For YUV/RGB, tp = 2 x TPCLK
Figures 1, 2, & 3 verify the data sheet(s) and confirm that Omnivision treats every 2 data bytes as being the equivalent of 1 pixel.
The oscilloscope waveforms also verify that HREF remains LOW during the blanking intervals.
Fig.4 confirms that the XCLK output from the Arduino is 8MHz. We see a sinewave, rather than a squarewave, because the odd harmonics that form the square edges (24MHz and above) are beyond the bandwidth of my 20MHz sampling oscilloscope.
Step 6: Frame Grabber
The image sensor within an OV7670 camera chip comprises an array of 656*486 pixels of which a grid of 640*480 pixels are used for the photo.
The HSTART, HSTOP, HREF, and VSTRT, VSTOP, VREF register values are used to position the image over the sensor. If the image is not positioned correctly over the sensor you will see a black band over one or more edges as explained in the “Design Notes” section.
The OV7670 scans each line of the picture one pixel at a time starting from the top left corner until it reaches the bottom right pixel. The Arduino simply passes these pixels to the PC via the serial link as shown in photo 1.
The frame-grabber’s task is to capture each of these 640*480=307200 pixels and display the contents in an “image” window.
Processing 3 achieves this using the following four lines of code !!
Code line 1:
- byte[] byteBuffer = new byte[maxBytes+1]; // where maxBytes=307200
The underlying code in this statement creates:
- a 307201 byte array called “byteBuffer”
- The extra byte is for a termination (linefeed) character.
Code line 2:
- size(640, 480);
The underlying code in this statement creates:
- a variable called “width” (640)
- a variable called “height” (480)
- a 307200 pixel array called “pixels”
- a 640*480 pixel “image” window in which the contents of pixels array are displayed. This “image” window is continuously refreshed at a frame rate of 60 fps.
Code line 3:
- byteCount = myPort.readBytesUntil(lf, byteBuffer);
The underlying code in this statement:
- buffers the incoming data locally until it sees a “lf” (linefeed) character.
- after which it dumps the first 307200 bytes of local data into the byteBuffer array.
- It also saves the number of bytes received (307201) into a variable called “byteCount”.
Code line 4:
- pixels[i] = color(byteBuffer[i]);
When placed in a for loop, the underlying code in this statement:
- copies the contents of the “byteBuffer” array to the “pixels” array
- the contents of which appear in the image window.
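Code line 4 relies on Processing’s color() function. In plain Java the same grey pixel can be built by replicating the luminance byte into the red, green, and blue channels of an ARGB int; this is a sketch of what color() does when given a single grey value:

```java
// Plain-Java equivalent of code line 4: turning each luminance byte into
// an opaque grey ARGB pixel, as Processing's color() does for one grey value.
public class GreyPixels {
    // Java bytes are signed, so mask to 0..255 before building the pixel.
    static int greyToArgb(byte luma) {
        int g = luma & 0xFF;
        return 0xFF000000 | (g << 16) | (g << 8) | g;  // alpha, red, green, blue
    }

    public static void main(String[] args) {
        int[] pixels = new int[3];
        byte[] byteBuffer = {0, (byte) 128, (byte) 255};
        for (int i = 0; i < pixels.length; i++) {
            pixels[i] = greyToArgb(byteBuffer[i]);     // copy buffer into pixels
        }
        System.out.printf("%08X%n", pixels[1]);        // prints FF808080
    }
}
```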
The frame-grabber recognises the following keystrokes:
- ‘c’ = capture the image
- ‘s’ = save the image to file.
Step 7: Software
Download and install each of the following software packages if not already installed:
- “Arduino” from https://www.arduino.cc/en/main/software
- “Java 8” from https://java.com/en/download/ 
- "Processing 3” from https://processing.org/download/
Installing the Arduino sketch:
- Remove all OV7670 jumper wires 
- Connect a USB cable to your Arduino
- Copy the contents of “OV7670_camera_mono_V2.ino“ (attached) into an Arduino “sketch” and save.
- Upload the sketch to your Arduino.
- Unplug the Arduino
- You can now safely reconnect the OV7670 jumper wires
- Reconnect the USB cable.
Installing and running the Processing sketch:
- Copy the contents of “OV7670_camera_mono_V2.pde” (attached) into a Processing “sketch” and save.
- Click the top-left “run” button ... a black image window will appear
- Click the “black” image-window
- Press the “c” key to capture an image. (approx 6.4 seconds).
- Press the “s” key to save the image in your processing folder
- Repeat steps 4 & 5
- Click the “stop” button to exit the program.
Processing 3 requires Java 8
This is a “once only” safety step to avoid damaging your OV7670 camera chip.
Until the sketch “OV7670_camera_mono_V2.ino” has been uploaded to your Arduino, the internal pull-up resistors are connected to 5 volts, plus there is the possibility that some of the Arduino data lines may be 5 volt outputs ... all of which are fatal to the 3v3 volt OV7670 camera chip.
Once the Arduino has been programmed there is no need to repeat this step and the register values may be safely changed.
Step 8: Obtaining a Color Image
The following software is purely experimental and is posted in the hope that some of the techniques will prove useful. The colors appear to be inverted ... I have yet to find the correct register settings. If you find a solution please post your results.
If we are to obtain a color image, all data bytes must be captured and the following formulas applied.
The OV7670 uses the following formulas to convert RGB (red, green, blue) color information into YUV (4:2:2): 
- Y = 0.31*R + 0.59*G + 0.11*B
- U = B – Y
- V = R – Y
- Cb = 0.563*(B-Y)
- Cr = 0.713*(R-Y)
The following formulas may be used to convert YUV (4:2:2) back to RGB color: 
- R = Y + 1.402* (Cr – 128)
- G = Y – 0.344136*(Cb -128) – 0.714136*(Cr -128)
- B = Y + 1.772*(Cb -128)
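The YUV-to-RGB formulas above can be sketched directly in Java (results clamped to the 0..255 byte range, with Cb and Cr centred on 128 as in the formulas):

```java
// Sketch of the YUV (4:2:2) to RGB conversion, using the formulas above
// (Cb/Cr centred on 128). Results are clamped to the 0..255 byte range.
public class YuvToRgb {
    static int clamp(double v) {
        return (int) Math.max(0, Math.min(255, Math.round(v)));
    }

    // Convert one pixel: y is luminance, cb and cr are the chroma bytes.
    static int[] toRgb(int y, int cb, int cr) {
        int r = clamp(y + 1.402 * (cr - 128));
        int g = clamp(y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128));
        int b = clamp(y + 1.772 * (cb - 128));
        return new int[]{r, g, b};
    }

    public static void main(String[] args) {
        // Neutral chroma (128,128) should give a grey pixel equal to Y.
        int[] grey = toRgb(100, 128, 128);
        System.out.println(grey[0] + "," + grey[1] + "," + grey[2]); // 100,100,100
    }
}
```

Each group of four data bytes yields two pixels: both share the same Cb and Cr values but carry their own Y values.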
The attached software is simply an extension of the monochrome software:
- A “c” capture request is sent to the Arduino
- The Arduino sends the even numbered (monochrome) bytes to the PC
- The PC saves these bytes into an array
- The Arduino next sends the odd numbered (chroma) bytes to the PC.
- These bytes are saved into a second array ... we now have the entire image.
- The above formulas are now applied to each group of four UYVY data bytes.
- The resulting color pixels are then placed in the “pixels” array
- The PC scans the “pixels” array and an image appears in the “image” window.
The Processing 3 software briefly displays each scan and the final results:
- Photo 1 shows the U & V chroma data from scan 1
- Photo 2 shows the Y1 & Y2 luminance data from scan 2
- Photo 3 shows the color image ... only one thing is wrong ... the bag should be green !!
I will post new code once I have solved this problem ...
https://en.wikipedia.org/wiki/YCbCr (JPEG conversion)
Click here to view my other instructables.