Introduction: How to Stretch Images Through Time With Space-time Camera and Processing
One of the primary ways we perceive our environment is through vision. Each of our eyes contains a lens that projects the 3-dimensional space around us into a 2-dimensional image on the retina. As time evolves, we see the space change around us. Therefore, time is sometimes considered another dimension along which we can perceive our environment. Like our eyes, cameras also consist of a lens that projects the 3D environment onto a sensor, with the added ability of storing the 2D projection as a picture.
Instead of visualizing the universe as a 2D spatial projection onto a sensor or retina, you can also imagine viewing the universe along the time dimension. In this visualization, an image has one space axis and one time axis instead of two spatial axes (see the diagram). These images can be called space-time images, and they let you visualize multiple moments in time within a single picture, similar to long exposure photography or Abakography. Space-time thinking is also a key component in the field of computational photography (e.g. video synopsis), which is how I first heard about it. Later I found a couple of artists applying this idea to create cool space-time videos and images: Hiroshi Kondo and Adam Magyar.
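As a concrete sketch of the idea: if a video is a stack of frames, a space-time image is built by sampling the same column from every frame. The NumPy snippet below uses made-up dimensions just to show the reshuffling of axes.

```python
import numpy as np

# Hypothetical clip: 90 frames of 48 x 64 RGB video
n_frames, height, width = 90, 48, 64
video = np.random.randint(0, 256, (n_frames, height, width, 3), dtype=np.uint8)

column = 20  # the spatial column (the "space-line") sampled from every frame

# Each frame contributes one column; columns are laid out left to right in time
space_time = np.stack([video[t, :, column, :] for t in range(n_frames)], axis=1)

print(space_time.shape)  # (48, 90, 3): one space axis, one time axis
```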
In this Instructable, I will go over how I constructed a space-time imaging camera with a Raspberry Pi that converts space-space images into space-time images in real-time. In other words, it’s a video camera that can take pictures with time as one of its dimensions. The camera frame and module I designed could also be used for other real-time image processing projects.
In addition to the space-time camera, I also wrote Matlab and Python code for converting any video file, like a .MOV video from your smartphone, into a space-time video. With this code, you can create trippy, mind-bending space-time videos without the space-time camera. This code could also be adapted to create a space-time video app for smartphones.
Step 1: Materials and Tools
Materials:
1. Raspberry Pi (Adafruit)
2. Pi camera (Adafruit)
3. Cooling system for Pi (Amazon)
4. 10 k potentiometer: (Adafruit)
5. Potentiometer knob: (Adafruit)
6. Blue LED push button for shutter (Adafruit)
7. Push button for playback: (Amazon)
8. Analog to digital converter (MCP3008, Amazon)
9. MicroUSB extension cable (male to female, Amazon)
10. Pi foundation touchscreen display (Adafruit)
11. Hardware for mounting:
8-32 x 3/8” cap screws
8-32 nuts
M1.6x5mm cap screws
¼” – 20 nut
PCB Stand-off Spacer M3 Male x M3 Female 6mm
12. Wire wrapping wire (Amazon)
13. 3D printed parts (see Step 3)
14. Extra lens (optional, Amazon)
Tools:
1. Soldering iron and solder
2. 3D printer
3. Allen wrenches
4. Wire cutter for 20-30 AWG
5. Electrical tape
6. Wire wrapping tool
7. Needle nose pliers
8. Box cutter
9. Super glue, gorilla glue, or epoxy
10. Electronics screw driver kit
11. Keyboard and mouse for coding the Pi
12. Calipers (optional, for measuring any new 3D printed parts)
Step 2: Schematic of Space-time Camera Electronics
I used a Raspberry Pi and picamera for capturing a live stream of images and processing them into a live space-time image. For help on setting up a picamera for the first time, check out this link. A single column is saved from each image captured by the picamera and stored as a column in a space-time image. Note that images are made up of columns (shown as red in the gif in the intro) and rows (shown as green). As time progresses, the stored column moves from left to right in the space-time image, until the space-time image is complete. Check out the video for the completed product.
A touchscreen is used to display the live-stream of images next to the space-time image being constructed in real time. The column of the live-stream images used for the space-time image is highlighted with a red line, and controlled by the user with a 10k potentiometer. Because the pi does not have an analog to digital converter (ADC), the output of the potentiometer is run to an ADC (MCP3008) before being sent to the Pi. The ADC clock, input signal, CS signal, and output are wired to pins 18, 24, 25, and 23 respectively. I followed this tutorial for setting up the ADC.
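The code later in this Instructable calls a readadc helper that comes from that tutorial and is not reprinted here. As a sketch of what it does, here is a bit-banged MCP3008 read; note that, unlike the tutorial's version (which uses RPi.GPIO directly), this sketch takes the GPIO object as a parameter so the protocol logic can be followed and tested without hardware.

```python
def readadc(adcnum, clockpin, mosipin, misopin, cspin, gpio):
    """Bit-bang one 10-bit sample from an MCP3008 channel (0-7).
    Follows the start-bit / single-ended / channel-select sequence from
    the MCP3008 datasheet; `gpio` is an RPi.GPIO-like object."""
    if adcnum > 7 or adcnum < 0:
        return -1
    gpio.output(cspin, True)
    gpio.output(clockpin, False)  # start clock low
    gpio.output(cspin, False)     # bring CS low to begin conversion

    commandout = adcnum
    commandout |= 0x18            # start bit + single-ended bit
    commandout <<= 3              # only 5 bits need to be sent
    for _ in range(5):
        gpio.output(mosipin, bool(commandout & 0x80))
        commandout <<= 1
        gpio.output(clockpin, True)
        gpio.output(clockpin, False)

    adcout = 0
    # read one empty bit, one null bit and 10 ADC bits
    for _ in range(12):
        gpio.output(clockpin, True)
        gpio.output(clockpin, False)
        adcout <<= 1
        if gpio.input(misopin):
            adcout |= 0x1
    gpio.output(cspin, True)
    adcout >>= 1  # first bit is null, drop it
    return adcout
```

On the Pi itself you would pass RPi.GPIO in as the `gpio` argument, e.g. readadc(0, SPICLK, SPIMOSI, SPIMISO, SPICS, GPIO).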
In order to take space-time photos, I used a blue LED push button that acts as the camera shutter. When the button is pushed, a signal is sent to pin 16 of the Pi and the photo is saved to a folder on the Pi desktop. Wiring of the push button (which is a little more complicated than a normal button) is shown in the figure. The camera design also includes a playback button with output to pin 20, so that you can look at previously saved space-time images. When the button is pressed, playback mode is activated and you can scroll through the images using the potentiometer. If you don’t like a photo, you can delete it by clicking the shutter button. To return to live mode, you press the playback button again. The display and picamera are connected to the display and camera ribbon ports on the Pi.
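Under the hood, a single press has to trigger a single action, so the code compares each button reading with the previous one and reacts only to the transition. A minimal sketch of that edge-detection idea (the shutter input happens to be active-low in this build):

```python
def falling_edge(current, previous):
    """True exactly once per press for an active-low button (like the shutter)."""
    return current == 0 and previous == 1

# Simulated pin readings over time: idle high, pressed (low), held, released
readings = [1, 1, 0, 0, 1]
events = []
prev = 1
for level in readings:
    if falling_edge(level, prev):
        events.append('pressed')
    prev = level

print(events)  # ['pressed'] - holding the button down does not retrigger
```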
In the following steps, I will go over how to wire the space-time camera step-by-step in parallel with assembling the camera chassis. I recommend assembling the system shown in the schematic on a breadboard first.
Step 3: Overview of Camera Chassis
The camera chassis consists of six parts: a top frame, bottom frame, front frame, rear frame, picamera mount, and power supply extension mount. These parts were designed using Autocad and 3D printed using a Prusa i3 MK2. Because of the cooling system and 7” display screen, the camera chassis had to be pretty big, so I tried to give it some style with inspiration from the Nikon FM2 camera body. Most of the parts are held together with 8-32 x 3/8” cap screws, so it can easily be taken apart to access the electronics.
Step 4: Rear Camera Frame Assembly
After 3D printing the six parts with color filament of your choosing, it is time to assemble the camera. Take the rear camera frame and insert the large nut from the blue LED push button into the top left corner of the part (see photo). Insert six 8-32 nuts into the slots at the front of the frame. These will be used to attach the front panel of the camera.
Next, tap the six holes at the bottom of the frame for 8-32 x 3/8” cap screws using a tap or screw. You may have to sand down the frame over the holes with fine sandpaper afterwards. There are an additional two holes to tap on the left side of the frame where the power supply mount is inserted (see photo).
Take the power supply extension cable, add epoxy around the edges, and insert it into the 3D printed power supply extension mount (see photo). The mount and extension cable can then be inserted into the rear camera frame and attached using two 8-32 x 3/8” cap screws.
Step 5: Soldering Switches, Potentiometer, and ADC
Before building up the other frames, set the front frame aside and grab the shutter switch (blue LED switch), playback switch (black switch), potentiometer, and ADC. Wrap a resistor around one of the terminals of the playback switch, then solder three strips of wire wrap wire, each about 10-15 cm long: one to the positive terminal, one to the negative terminal, and one after the resistor. Label or color-code each wire. There should be three different wires leaving the switch.
Moving on to the shutter switch, solder wire wrap wire between the positive terminal and NO1, and then another wire between C and the negative terminal (see the schematic in Step 2). Then connect wire wrap wire to each of the following pins: positive terminal, NC1, and negative terminal. Again, there should be three different wires leaving the switch.
Solder three cables to the three pins of the potentiometer and label them as shown in the schematic (Step 2).
Lastly, you need to prepare the ADC. Score a piece of prototype board to dimensions of 68 x 30mm using a box cutter, and snap the board. Drill two holes in the board about 60mm apart with a power drill. Place the ADC in the center of the prototype board and solder male pin headers next to the pins on the board.
Step 6: Top and Bottom Camera Frame Assembly
Next, take the top frame (the one that mounts the switches) and tap the three holes for 8-32 x 3/8” cap screws. Insert the trim potentiometer through the standalone hole and tighten the nut using needle nose pliers. The playback push button is attached in the same way.
On the bottom frame is a place to insert a nut for mounting the camera to a tripod. Place epoxy around the cavity and the nut, and slide the nut into position. Make sure not to get epoxy into the center of the nut.
Step 7: Assembling Rear, Top, and Bottom Camera Frames
It is time to start connecting the frame. Screw in three bolts to connect the top frame to the rear frame, and pull the shutter switch through the top frame into the rear frame. Push it down as far as possible until it hits the nut you slid into the frame in Step 4. Screw in the push button. The top should now be firmly connected to the rear frame. Then, screw the bottom frame into the rear frame using six 8-32 x 3/8” cap screws.
Step 8: Mounting the Touchscreen, Pi, and Fan to the Camera Chassis
Connect the long PCB stand-off spacers to the acrylic frame of the Raspberry Pi cooling system. Tighten with pliers. Then attach two cables to the 5V and ground pins on the touchscreen (here is a video on assembling the touchscreen). The Pi is then positioned over the display screen using the shorter PCB stand-off spacers.
Before attaching the touchscreen to the camera frame, it is useful to set up the picamera. Attach the picamera to its mount with M1.6x5mm cap screws (see picture). The 3D printed frame should not block the picamera ribbon. Run the picamera ribbon through the top acrylic fan frame into the camera slot before connecting it to the camera ribbon connector on the Pi. Then temporarily hold down the top acrylic part onto the PCB stand-offs with screws.
After connecting the rear, top, and bottom frames, lay the entire assembly down on top of the 7” touch screen, and screw in the four screws with a small screwdriver. You will have to take off the cooling system’s top acrylic plate later so that you can connect the power for the fan and touchscreen.
Step 9: Wiring Up the Space-time Camera
This part is the toughest. You have to solder or wire wrap all the connections shown in the schematic, and it can be a tight space to work in. Start by connecting the playback button and shutter button, each of which has three wires: 5V, Gnd, and signal (the shutter signal goes to GPIO 16 and the playback signal goes to GPIO 20).
Take the ADC and connect wire wrap wire to the pin headers as shown in the schematic diagram, and pull them through the top acrylic plate to the appropriate pins. Then connect the analog output of the potentiometer to the analog input of the ADC, and the 3V and Gnd to the appropriate pins on the Pi.
Now remove the acrylic top plate and connect the 5V and Gnd cables from the fan and touchscreen. Once you have double-checked all the connections, you can attach the top plate with two screws and two short PCB stand-offs. Connect the display ribbon cable from the touchscreen through the acrylic plate and into the Pi. The ADC is then attached onto the risers with two screws. In the photo and video, you can see that I had to cut some of the prototype board with the ADC, because the 3D printed front cover was a little off. The mistake in the 3D printed front part should be corrected in the file uploaded in Step 3 of this Instructable.
Step 10: Front Frame Assembly
Tap the holes in the front frame using a tap or screws and sand the surface if necessary. Then bring the front frame up to the camera chassis and attach the picamera mount onto the front frame. Make sure the camera is oriented correctly.
Now place the front panel and picamera over the top of the electronics. It is a tight fit, so slide the bottom part of the front frame into the rest of the camera chassis first, and then push the part down into place. Screw six 8-32 x 3/8” cap screws into the holes. They should align with the nuts you slid into the rear frame in Step 4.
Step 11: Code for Real-time Space-time Imaging With a Pi
Now that the camera chassis and electronics are assembled, we can start coding the space-time camera. To start, it is a good idea to check that the camera is working; attached is a simple script for running the camera. In the following steps, I will go over how the code and electronics work together. You can also just download the completed space-time camera program and run through the code yourself.
The code needs to collect an image from the picamera, save a column from the image for the space-time image, and display the image and space-time image side by side. When the shutter is pressed, a photo needs to be saved to disc. To see the saved images, the user clicks the playback button and enters the playback mode of the camera. I decided to call the column that is used to construct space-time images the space-line, which is set by the position of the potentiometer. A red line is overlaid onto the image so that the user knows how the space-time image is being created. Here is an outline of the space-time program:
Import necessary packages
Set up pins and pin states
Set up trim pot conditions and tolerances
Define live video parameters
Set space line conditions
Read disc for saved images and store filenames
Live space-time mode:
Read file and store as matrix
Crop image to square
Update space-line position
Store space-line into space-time image
Concatenate both images into display image with dimensions of touchscreen
Display image on touch screen
Read input from shutter button
Read input from playback button
Read ADC to get potentiometer position
Save photo if shutter button is in right state change
Enter playback mode if playback button is in right state change
Read in files of photos currently saved to disc
If no files saved, exit playback
Read potentiometer change and cycle left or right through saved images
Check if shutter button is clicked to see if user wants photo deleted
Check if playback button has been pushed again to leave playback mode
This is my first time coding in Python, so let me know if you think there is a more efficient way to write the code!
Step 12: Necessary Packages and Fast Video Stream
One of the major challenges in this project was generating space-time images in real-time. Because images are made of so many pixels, it takes a long time to construct a space-time image. For example, if you wanted a space-time image with 1000 columns and could image at 30 Hz, it would take more than 33 seconds to construct the space-time image. That feels really slow when you are used to video-rate updates on digital cameras!
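The arithmetic behind that estimate is just the number of columns divided by the frame rate:

```python
columns = 1000      # width of the desired space-time image
frame_rate = 30.0   # frames per second from the camera

seconds = columns / frame_rate  # each frame contributes exactly one column
print(round(seconds, 1))  # 33.3 - over half a minute for a single picture
```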
After doing a little searching, I found a great write-up by Adrian Rosebrock on increasing the picamera frame rate. I was grateful for how well this worked, so it is used in the space-time camera code. Another way to get around this challenge is to use low resolution images.
In addition to Adrian's approach for speeding up the picamera, you will need several other packages for space-time imaging:
from __future__ import print_function
import cv2
import time
from time import sleep
from picamera import PiCamera, Color
import numpy as np
from imutils.video.pivideostream import PiVideoStream
from imutils.video import FPS
from picamera.array import PiRGBArray
from skimage.transform import resize
import matplotlib.pyplot as plt
import argparse
import imutils
import os, os.path
from glob import glob
Many of these libraries can be installed in the terminal using pip install. For example:
pip install imutils
The cv2 library, which is the workhorse for image manipulation in this project, was actually the most difficult to get working. I think this tutorial was the best for getting it working on a Mac. Again by Adrian Rosebrock!
Step 13: Space-time Camera Code
In this section, I have listed explanations of the three major parts of the code: setup, space-time live mode, and playback mode. In the setup, the code defines IO pins on the Pi for the space-time camera (switches, potentiometer, ADC) and the matrices that will store information from the images acquired with the picamera. This code is executed only once when the space-time camera starts up. The other two parts are the main code that gets cycled through continuously until the camera is turned off.
Define pins and set initial pin states. The code tracks the previous state of the button to determine if the switch is coming from high to low or low to high, instead of just tracking if the button is pressed or not.
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(16, GPIO.IN)  # shutter switch pin
GPIO.setup(20, GPIO.IN)  # playback switch pin

SPICLK = 18   # ADC clock
SPIMISO = 23  # ADC data out
SPIMOSI = 24  # ADC data in
SPICS = 25    # ADC CS

GPIO.setup(SPIMOSI, GPIO.OUT)
GPIO.setup(SPIMISO, GPIO.IN)
GPIO.setup(SPICLK, GPIO.OUT)
GPIO.setup(SPICS, GPIO.OUT)

buttonState = 1   # initialize state of shutter switch
prevState = 0     # initialize previous state of shutter switch
buttonState2 = 0  # initialize state of playback switch
prevState2 = 0    # initialize previous state of playback switch
foldOut = '/home/pi/Desktop/spaceTimeCode/Images/'  # folder images are saved to
Set up potentiometer conditions and tolerances. The analog output of the potentiometer is proportional to the position of its knob. There may be slight fluctuations in this signal that you will not want to trigger any action. Therefore, the code compares the difference between the current and previous potentiometer measurements to a threshold (the tolerance).
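The check itself is a simple deadband, sketched here as a standalone function (the real code does the comparison inline):

```python
def exceeds_tolerance(reading, last_reading, tolerance):
    """Report a change only when the knob has really moved."""
    return abs(reading - last_reading) > tolerance

print(exceeds_tolerance(512, 510, 5))  # False: jitter inside the deadband
print(exceeds_tolerance(540, 510, 5))  # True: a deliberate turn
```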
last_read = 0            # keeps track of the last potentiometer value
tolerancePlayBack = 7    # tolerance of trim pot before changing image in playback
toleranceLine = 5        # tolerance of trim pot before changing the space-line
maxPot = 1000            # max trim pot value
potentiometer_adc = 0    # ADC channel the potentiometer is wired to
Define live video parameters. The display of the space-time camera is split in two: on the left is the live video stream, and on the right is the space-time image being constructed. The dimensions of these images can be chosen by the user, with a trade-off between resolution and speed. Both the normal image and the space-time image are stored in 3-dimensional matrices, as shown in the code below.
# Screen dimensions: 88 x 154 mm (ratio of 7/4 = 1.75)
ratScreen = float(800)/(480)  # ratio of LCD screen dimensions
nPy = int(480)                # number of vertical pixels for display image
nPx = int(ratScreen*nPy)      # number of horizontal pixels for display image
nPyE = int(240)               # number of vertical pixels for image captured (try 160 for more speed)
nPxE = int(ratScreen*nPyE)    # number of horizontal pixels for image captured
indDiff = (nPxE-nPx/2)/2      # pixel difference between image captured and image displayed
indArr = np.arange(indDiff, (indDiff+nPx/2))
A = np.zeros((nPy, nPx, 3), dtype=np.uint8)       # frame to display
At = np.zeros((nPyE, nPxE, 3), dtype=np.uint8)    # captured frame
ST = np.zeros((nPy, nPx/2, 3), dtype=np.uint8)    # space-time frame
vs = PiVideoStream(resolution=(nPxE, nPyE), framerate=30).start()  # set up Pi video stream object
time.sleep(2.0)
fps = FPS().start()
Set space-line conditions. The space-time image is a collection of columns from the normal image through time. To select the column used in the space-time image, the user turns the potentiometer. In the setup part of the code, the position of the potentiometer is initialized. If the user turns the knob all the way clockwise, the space-line continually scans across the image.
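The mapping from ADC reading to column index can be written as a small pure function (the names and the 426-column default here are illustrative; the real code computes this inline from nPxE):

```python
def space_line(reading, max_pot=1000, n_cols=426):
    """Map a 0..max_pot ADC reading to a column index; readings at or
    beyond full scale select the continuously scanning mode instead."""
    col = int(float(reading) / max_pot * n_cols)
    scanning = col > n_cols - 1
    return col, scanning

print(space_line(500))   # (213, False): a fixed column near the middle
print(space_line(1000))  # (426, True): knob fully clockwise, auto-scan
```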
last_read = readadc(potentiometer_adc, SPICLK, SPIMOSI, SPIMISO, SPICS)  # read trim pot
z = 0  # initialize space-line position
LineCapture = int(float(last_read)/maxPot*nPxE)  # column captured for space-time image
LineCaptureLoop = False  # whether the line scans continually across the screen
if (LineCapture > nPxE-1):
    LineCaptureLoop = True
Read disc for saved images and store filenames.
files_list = glob(os.path.join(foldOut, '*.jpg'))  # list of filenames of saved space-time images
numPicDir = len(files_list)  # number of images saved to disc
fileNum = []
for a_file in sorted(files_list):  # arrange filenames in numerical order
    d = filter(str.isdigit, a_file)
    fileNum.append(d)
fileNum = sorted(fileNum, key=int)
if not fileNum:  # no images are saved to disc
    p = 1
    fileNum = ['1']
else:
    p = int(fileNum[numPicDir-1])+1  # number of the next image to be saved
MAIN LOOP - Live space-time mode:
Display the current image in the live stream and the space-time image. An image is captured by the picamera and cropped to a square for the display. Then the space-line is updated, and the specified column is selected from the normal image and stored in the space-time image. Both images (matrices) are concatenated into a display image with the dimensions of the touchscreen and displayed on the screen using cv2.imshow:
At = vs.read()  # capture image from Pi camera
At = imutils.resize(At, width=nPx)  # resize image
At = At[:, indDiff:(indDiff+nPx/2), :]  # crop image to a square
if (LineCaptureLoop == True):  # if true, column increases automatically with each frame
    LineCapture = z
ST[:, z, :] = At[:, LineCapture, :]  # take a column of the captured image and store it in the space-time image
A[0:nPy, 0:nPx/2, :] = At  # set left half of frame to the image captured
A[0:nPy, nPx/2:nPx, :] = ST  # set the other half of the frame to the space-time image
A[:, LineCapture:LineCapture+2, :] = [0, 0, 255]  # color the space-line red (BGR)
cv2.imshow("test", A)  # display image
Read input from shutter button, playback button, and potentiometer. From the pin inputs, get shutter state, playback button state, and potentiometer position. The change in the potentiometer position is calculated to be compared to the tolerances set by the user.
buttonState = GPIO.input(16)  # read shutter switch
buttonState2 = GPIO.input(20)  # read playback switch
trim_pot = readadc(potentiometer_adc, SPICLK, SPIMOSI, SPIMISO, SPICS)  # read analog input of trim pot
pot_adjust = abs(trim_pot - last_read)  # difference in trim pot reading
last_read = trim_pot  # save reading on trim pot
Update space-line. If the potentiometer change is greater than the tolerance, then the space-line position is updated. For the next image captured, the column used for the space-time image will come from this position.
if (pot_adjust > toleranceLine):  # trim pot passed tolerance, update space-line position
    LineCapture = int(float(trim_pot)/maxPot*nPxE)  # normalize trim pot reading and convert to column position
    if (LineCapture > nPxE-1):  # space-line is beyond the last column, switch to continuous line scan
        LineCaptureLoop = True
    elif (LineCapture <= nPxE-1):
        LineCaptureLoop = False
Save photo if shutter button is in the right state change. If the button input has switched from high to low, then it has been pressed. The image filename is created and the current display image is saved to disc.
if (buttonState == 0 and prevState == 1):  # shutter pressed and was not pressed on last cycle: take picture
    prevState = 0
    filename = foldOut+'image'+str(p)+'.jpg'  # file location of saved image
    cv2.imwrite(filename, A)  # save photo
    p += 1  # increment p for the next photo
elif (buttonState == 1 and prevState == 0):  # button has been released, set previous state to high
    prevState = 1
Enter playback mode if playback button is in right state change. If the playback button goes from low to high, then the button has been pressed, and the camera will enter playback mode.
if (buttonState2 == 1 and prevState2 == 0):  # playback button has been pressed
    # ENTER PLAYBACK MODE #
MAIN LOOP - Playback mode:
Read in files of photos currently saved to disc. The filenames of images currently saved on disc are read in, and the first saved image is displayed. The potentiometer position is stored.
# ENTER PLAYBACK MODE
files_list = glob(os.path.join(foldOut, '*.jpg'))  # list of all photos in the save directory
numPicDir = len(files_list)  # number of pictures saved to disc
fileNum = []
for a_file in sorted(files_list):
    d = filter(str.isdigit, a_file)
    fileNum.append(d)
fileNum = sorted(fileNum, key=int)  # filename numbers of photos saved to disc
if not fileNum:  # fileNum is empty: no photos saved!
    p = 1
    fileNum = ['1']
    playBackCond = False  # set playback to false and return to live space-time stream
else:
    p = int(fileNum[numPicDir-1])+1  # number of the next image to be saved
    playBackCond = True
prevState2 = 1
whichFrame = 0
if playBackCond:  # only try to display when there is something saved
    filename = foldOut+'image'+fileNum[whichFrame]+'.jpg'  # first image file to display in playback
    img = cv2.imread(filename)  # load playback file
    cv2.imshow("test", img)  # display playback file
last_read = readadc(potentiometer_adc, SPICLK, SPIMOSI, SPIMISO, SPICS)  # read trim pot
Check if playback button has been pushed again to leave playback mode. If the user has released the playback button and pressed it again, then the code will return to the live space-time mode.
buttonState2 = GPIO.input(20)  # read playback button to check if the user wants to leave playback mode
if (buttonState2 == 0 and prevState2 == 1):  # once button is released, set prevState2 to low
    prevState2 = 0
if (buttonState2 == 1 and prevState2 == 0):  # button pressed again: leave playback mode
    playBackCond = False  # return to the live space-time stream
Read potentiometer change and cycle left or right through saved images. If the potentiometer position change exceeds the tolerance, then the code cycles through the saved images. If the knob is turned to the right/left, then it cycles one image to the right/left.
trim_pot = readadc(potentiometer_adc, SPICLK, SPIMOSI, SPIMISO, SPICS)  # read the analog pin
pot_adjust = abs(trim_pot - last_read)  # difference in trim pot reading
if (pot_adjust > tolerancePlayBack):  # user turned knob
    if (trim_pot > last_read):  # knob turned to the right
        whichFrame = whichFrame + 1  # cycle one saved image to the right
    elif (trim_pot < last_read):  # knob turned to the left
        whichFrame = whichFrame - 1  # cycle one saved image to the left
    if (whichFrame > numPicDir-1) or (trim_pot > maxPot):  # reached the last image
        whichFrame = numPicDir-1
    elif (whichFrame < 0) or (trim_pot < 0):  # reached the first image
        whichFrame = 0
    filename = foldOut+'image'+fileNum[whichFrame]+'.jpg'  # filename of saved image to display
    img = cv2.imread(filename)  # load image
    cv2.imshow("test", img)  # display image
    cv2.waitKey(50)  # pause
last_read = trim_pot  # save trim pot value after the comparisons above
Check if shutter button is clicked to see if user wants photo deleted. If the shutter is pressed in playback mode it acts as a delete button. The image currently displayed gets trashed, and the playback goes to the next saved image.
buttonState = GPIO.input(16)  # read shutter button, which acts as a delete button in playback
if (buttonState == 0 and prevState == 1):  # shutter clicked: delete the displayed image
    prevState = 0
    filename = foldOut+'image'+fileNum[whichFrame]+'.jpg'  # filename of image to be deleted
    os.remove(filename)  # delete image
    fileNum[whichFrame:whichFrame+1] = []  # remove image from the file list
    whichFrame = whichFrame - 1
    numPicDir = numPicDir - 1
    if (numPicDir == 0):  # no pictures left on disc
        playBackCond = False  # return to live space-time mode
        time.sleep(0.1)
    if (whichFrame < 0):  # stepped past the first image
        whichFrame = 0
elif (buttonState == 1 and prevState == 0):
    prevState = 1
That is a basic outline of the code, but there may be a few bugs still. Let me know if you discover any problems, and I will update the Instructable!
Step 14: Code for Converting Any Video to Space-time Videos With Matlab
The processing for converting any video into a sequence of space-time images is more straightforward, but takes more time to process. Because we don’t require the space-time images to be rendered in real-time, the results are higher quality and all columns/rows of the image can each be converted to a space-time image. Now we can watch movies that propagate through space instead of time!
i. Import the video file and store it as a matrix V. The dimensions of the 4D matrix will be the number of y pixels (nPy), number of x pixels (nPx), number of colors (three), and number of frames captured through time (N).
ii. Create x space-time images. We now create a matrix Sx with dimensions of (nPx, N, 3, nPy). In other words, we create a space-time image for every row in the image. Check out the gif in the introduction of the Instructable.
iii. Create y space-time images. We now create a matrix Sy with dimensions of (nPy, N, 3, nPx). The process is the same as for Sx, but we now cycle through columns in the images instead of rows.
iv. Save Sx and Sy as videos, so that you can watch them in any video player.
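In NumPy terms, steps ii and iii are just axis transpositions of V; here is a sketch with made-up dimensions (my actual conversion code may differ in details):

```python
import numpy as np

# Hypothetical clip stored as in step i: (nPy, nPx, 3, N)
N, nPy, nPx = 60, 32, 40
V = np.random.randint(0, 256, (nPy, nPx, 3, N), dtype=np.uint8)

# Step ii - x space-time images: one space-time image per row of the frame
Sx = np.transpose(V, (1, 3, 2, 0))  # shape (nPx, N, 3, nPy)

# Step iii - y space-time images: one space-time image per column
Sy = np.transpose(V, (0, 3, 2, 1))  # shape (nPy, N, 3, nPx)

print(Sx.shape, Sy.shape)  # (40, 60, 3, 32) (32, 60, 3, 40)
```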
Step 15: Space-time Imaging Conclusions
The space-time camera was a great first Raspberry Pi and Python project. I think the results have been pretty good too, but there are two major problems with the real-time space-time imaging: slow frame rate and poor resolution. It would also be possible to make this more like a modern digital camera by using the touchscreen (see references below for an example of how to do this). I am also having issues with the Raspberry Pi shutting down after about 10 minutes of streaming, and I haven't gotten to the bottom of it yet. Have I pushed the Pi too far? I think there are other cool image processing projects that could be done with this camera just by writing new code. Any ideas?
I think the post-processing videos are the best, and it can even be done with any video. See the previous step for Matlab and Python code.
7" touchscreen layout with dimensions: http://www.raspiworld.com/viewtopic.php?t=13#top
DIY WiFi Raspberry Pi Touchscreen Camera: http://www.raspiworld.com/viewtopic.php?t=13#top