  • Grasping Gravitational Waves: Augmented Reality Robots teach physics fundamentals to children and adults alike

    Two images are shown: the first is a long-exposure photo of a flashlight attached to a blackboard eraser slid along a chalkboard track. The light was initially angled toward the camera and moved from right to left while keeping that angle unchanged. As a result, the amount of light (intensity) reaching the camera decreases as a function of position, until it eventually becomes undetectable. The second photo shows another flashlight, programmed to flash at a fixed interval, under a similar setup. The light is pushed from right to left, and since the flash interval is constant, the spacing of the pulses shows the acceleration and deceleration of the object in motion at that sample rate.
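
    Below is a minimal sketch (my own illustration, not part of the setup) of how a constant flash interval turns pulse spacing into speed and acceleration; the positions and the 0.1 s interval are made-up numbers.

      import numpy as np

      dt = 0.1                                           # assumed flash interval, seconds
      x = np.array([2.0, 6.5, 10.5, 13.8, 16.2, 17.5])   # hypothetical pulse positions, cm

      v = np.diff(x) / dt    # speed between consecutive pulses
      a = np.diff(v) / dt    # acceleration between consecutive speed estimates

      print("speeds (cm/s):       ", v)
      print("accelerations (cm/s^2):", a)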

    View Instructable »
  • Shooting for a Homepage Feature: Timelapse and multi-exposure photography the DIY way (Make or write your own code!)

    Update: Implemented light space composition. It looks very similar to the image space composition.

    Under closer examination, it appears that the edges are sharper in the light space composition than in the image space composition.
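
    For reference, here is a rough sketch of the two compositions being compared, assuming a simple power-law camera response f(q) = q^(1/N); the actual response and the N value are camera-specific, so the number below is only a placeholder.

      import numpy as np

      def image_space_composite(a, b):
          # average directly in (nonlinear) image space
          return (a + b) / 2.0

      def light_space_composite(a, b, N=2.2):
          # undo the assumed response, combine the photoquantities, reapply the response
          qa, qb = a ** N, b ** N
          return ((qa + qb) / 2.0) ** (1.0 / N)

      a = np.random.rand(480, 640, 3)   # stand-ins for the two exposures, values in [0, 1]
      b = np.random.rand(480, 640, 3)

      img_space = image_space_composite(a, b)
      light_space = light_space_composite(a, b)
      print("max absolute difference:", np.abs(img_space - light_space).max())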

    View Instructable »
  • Shooting for a Homepage Feature: Timelapse and multi-exposure photography the DIY way (Make or write your own code!)

    Application of the 'senment' application to a previous project on SWIM. I have one image that is a long exposure of my machine swept horizontally through space while changing its display to show my name, and another image of the ambient background, or the setup. The images are combined into compositions, which are visible in the top left corner of each produced image. I have one that is simply combined, one that changes the colour of the name to cyan, and one that enhances the 'redness' of the display. Photo credits: Helton and Steve.
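
    A rough sketch of this kind of tinted composition (my own approximation, not the original application; the file names and weight values are placeholders):

      import numpy as np
      from PIL import Image

      def tinted_composite(long_exposure, background, rgb_weights):
          fg = np.asarray(long_exposure, dtype=np.float32) / 255.0
          bg = np.asarray(background, dtype=np.float32) / 255.0
          tinted = fg * np.array(rgb_weights, dtype=np.float32)   # scale each colour channel
          out = np.clip(bg + tinted, 0.0, 1.0)                    # additive blend, clipped
          return Image.fromarray((out * 255).astype(np.uint8))

      # hypothetical file names
      name_img = Image.open("swim_name_long_exposure.jpg").convert("RGB")
      setup_img = Image.open("ambient_setup.jpg").convert("RGB")

      cyan = tinted_composite(name_img, setup_img, (0.0, 1.0, 1.0))   # name shifted to cyan
      red = tinted_composite(name_img, setup_img, (1.0, 0.2, 0.2))    # 'redness' enhanced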

    Made a similar application that works on Windows. I believe the best way to understand how something works is to try to implement the idea. The application contains 7 panels. Going from the top left corner to the bottom right I have:
    1. a picture with the stove light on
    2. a picture with the kitchen light on
    3. a plot of the squared error versus the pixel intensity parameter N, with the optimized N value displayed
    4. a picture with both lights on
    5. the application-generated image space composition from merging the two images
    6. the light space composition of the merged images (to be done soon)
    7. user input that specifies the RGB composition of both images.
    For my camera (Sony RX100 M4) the computed optimal N value is 2.16. One generated image has image 1 RGB = (100%, 20%, 20%) and image 2 RGB = (20%, 20%, 100%); the other has (20%, 20%, 100%) and (100%, 20%, 20%).
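
    A rough sketch of how the optimal N could be searched for, assuming a power-law response f(q) = q^(1/N) and using the two single-light photos against the both-lights photo; the file names and search range are my own placeholders, not the actual Windows application:

      import numpy as np
      from PIL import Image

      def load_norm(path):
          return np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0

      stove = load_norm("stove_light.jpg")       # hypothetical file names
      kitchen = load_norm("kitchen_light.jpg")
      both = load_norm("both_lights.jpg")

      def squared_error(N):
          # undo the assumed response, superpose the photoquantities, reapply the
          # response, then compare against the photo taken with both lights on
          predicted = np.clip(stove ** N + kitchen ** N, 0.0, 1.0) ** (1.0 / N)
          return np.mean((predicted - both) ** 2)

      candidates = np.linspace(1.0, 4.0, 301)
      errors = [squared_error(N) for N in candidates]
      best_N = candidates[int(np.argmin(errors))]
      print(f"optimal N ~ {best_N:.2f}")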

    View Instructable »
  • Imprint invisible sound and radio waves onto your retina: Augmented reality with perfect alignment

    I have added a depth detector (an IR pair) to detect the approximate location of the WIM, and I use the depth data to determine which segment of the text is displayed, making the output space-dependent and time-independent. The images demonstrate that outputs are very similar when they are next to each other in space. There is some distortion from one place to the next, and the sensor input is not linear. Thanks for the input. I may try the signal generator & phase shift technique to detect location. This should eliminate some of the issues I am seeing right now.
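
    A rough sketch of the depth-to-segment mapping idea (my own illustration; the text, sensor range, and names are assumptions, not the project's code):

      TEXT = "HELLO WORLD"
      NEAR_CM, FAR_CM = 5.0, 60.0          # assumed working range of the IR pair

      def segment_for_depth(depth_cm: float) -> str:
          # clamp the reading, normalize to [0, 1), then index into the text,
          # so what is shown depends on position in space rather than on time
          t = min(max((depth_cm - NEAR_CM) / (FAR_CM - NEAR_CM), 0.0), 0.999)
          return TEXT[int(t * len(TEXT))]

      # e.g. sweeping the WIM away from the sensor walks through the characters
      for d in (6, 15, 25, 35, 45, 58):
          print(d, "cm ->", segment_for_depth(d))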

    This is the first step in creating a display that uses parallel (P) LEDs and aims to reproduce (R) AR images by imprinting (I) them in space. It is consistent because the output is a function of space (S) rather than time. PRISM is the acronym for this machine (M). Currently the project is able to display my name using a Parallel WIM. The images aren't reproducible and are not yet a function of space, but of time. This will be the next step.
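
    As a rough illustration of the current time-based behaviour (not the actual PRISM code; the column data and timing are made up), the LED strip is strobed column by column at a fixed rate, so the image only forms when the machine is swept at a matching speed:

      import time

      # hypothetical column data for one letter ("H") on a 7-LED vertical strip,
      # one int per column, one bit per LED
      NAME_COLUMNS = [0b1111111, 0b0001000, 0b0001000, 0b1111111, 0b0000000]

      def write_leds(column_bits: int) -> None:
          # placeholder for the real parallel GPIO write
          print(format(column_bits, "07b"))

      COLUMN_PERIOD_S = 0.002   # assumed strobe rate
      for col in NAME_COLUMNS:
          write_leds(col)
          time.sleep(COLUMN_PERIOD_S)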

    View Instructable »
  • HDR EyeGlass: from cyborg welding helmets to Wearable Computing in everyday life

    I had some trouble trying to learn the principles of HDR at first, and I am motivated to make an application that attempts to educate others and reduce their learning curve on this subject. Currently the application animates both a comparametric plot and one of the two images being compared, with pixel trackers, in real time. As you can see in the screenshots, as the darker parts of the images are being processed, the lower left corner of the comparametric graph is built up. As the brighter parts of the image are processed, you can see the graph extending towards the right along the diagonal. This demonstrates the parts and significance of the plot, and how it relates to the composition of the images (i.e. the left parts are more concentrated, since most of the image is in the dark). It also demonstrates lighting and noise. The user may also track and see the points identified on the plot as they move their cursor across the image, as shown. I am currently finishing up my work on animating the step-by-step creation of a camera response graph using comparametrics during the lab today.
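
    For context, the plot being animated is built from corresponding pixel values of two differently exposed photos of the same scene; a rough sketch of such a comparagram (my own illustration, with placeholder file names, assuming both photos have the same resolution) is:

      import numpy as np
      from PIL import Image
      import matplotlib.pyplot as plt

      dark = np.asarray(Image.open("short_exposure.jpg").convert("L"))    # hypothetical files
      bright = np.asarray(Image.open("long_exposure.jpg").convert("L"))

      # 256x256 count of how often value pair (i, j) occurs at the same pixel location
      comparagram, _, _ = np.histogram2d(dark.ravel(), bright.ravel(),
                                         bins=256, range=[[0, 256], [0, 256]])

      plt.imshow(np.log1p(comparagram).T, origin="lower", cmap="gray")
      plt.xlabel("pixel value in darker exposure")
      plt.ylabel("pixel value in brighter exposure")
      plt.title("comparagram")
      plt.show()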

    View Instructable »