Introduction: CAMXPLORER - Wireless Zoom & Focus for Sony Alpha 7 III

  • We are heavily influenced by Hey Jude's Instructable
  • We are implementing this system to help someone who lacks fine motor control, which prevents her from manually focusing and zooming her camera.
  • A gear + servo + Arduino + app system enables manual zoom and focus control from an app
  • The app integrates the native Sony app plus the zoom and focus functionality to give the user complete control over the camera setup

At the moment we are still working on the app portion. We are hoping to create an open-source app using Sony's API. This will allow users to integrate and modify it according to their needs.

Supplies

Arduino:

We will be using an Arduino Blend V1:

  • You will need to use the following GitHub repository to be able to edit in the Arduino IDE
  • You can use any Arduino that gives you Bluetooth access

15mm DSLR/Camera Rails (Amazon)*

2x Servo Motor (Wide Angle 300 degrees). We used HDKJ D3015

  • 300° rotation
  • 14-15 kg.cm torque
  • Note: We decided that for a future project we will use a continuous-rotation servo. The focus ring can turn indefinitely; this becomes an issue especially with the macro lens, since it requires more turns to focus.

Tools:

  • M3 Nut & Bolt Assortment (Amazon).
  • Specific Sizes required: (8x12mm; 8x15mm; 2x20mm)
  • Small Phillips and Flat-Head Screwdrivers
  • Pin Vice (Amazon)
  • Prototype Board
  • Wires
  • Battery Pack (4x AA) (Amazon)
  • Heat-Shrink (Amazon)
  • DC Barrel Jack Adapter / Male DC Barrel Connector (Amazon)
  • Male-Male Header (Amazon)
  • Multimeter

Optional:

  • 3.5mm Panel Mount Audio Socket (Amazon)
  • Wire-up with 3.5mm Audio Cable (Amazon).


Printed Parts:

2x Lens Rings; 2x Servo Gears; 4x Servo Mounts; 4x Wire Clips.

Step 1: 3D PARTS + CAD

The original CAD files from the Zocus Instructable were designed around a Canon 600D (aka Rebel T3i in the US) with a Tamron 28-300mm lens. We adapted them for the Sony camera and lens, but you should visit their Instructable to see whether they will work with your camera and lens dimensions.


 Step 1: edit Zocus CAD files

  1. Download Zocus Editable CAD files.
  2. Open the files in Rhino; you should have a mesh object in view.
  3. If you need to scale the object in only one direction, use the command “Scale1D”. If you need to scale the object in three dimensions, use “Scale”.
  4. If you need to add parts to the object, use the “Convert objects to NURBS” command to convert the mesh to a polysurface. Use “BooleanUnion” to join parts.
  5. If you need to separate part of the object, create a surface that completely intersects the area where you are making the cut, then use “Trim” or “Split” to remove parts with the surface.

 

Step 2: create a good mesh

  1. Once your polysurfaces are all joined together and “capped” (watertight), use the command “Mesh from surface/polysurface” to convert the object to a mesh.
  2. Move the mesh object to the 0,0,0 position
  3. Use the command “MeshRepair” to check and repair the mesh.
  4. Once it’s a good mesh, export the object as an STL file

 

Step 3: follow the instructions of your 3D printer to print the file

Note: We used an Objet printer with a clear gloss material and 25%-35% black rubber (a digital material).


The only file that we modified directly before printing is attached. The other pieces were modified as specified throughout the Instructable.

Step 2: Servo Assembly

  1. Remove the central screw from the Servo.
  2. Use the Pin Vice to widen any of the holes if your 3D print is a bit tight fitting.
  3. Press the horn into the 3D-printed gear
  4. Check the assembly fits and can rotate freely.
  5. Press the Nut in the hexagonal hole of the 3D-printed Mount.
  6. Screw the small Bolt into place
  7. Do not tighten the screw until you have slid it into the rails of the camera mount
  8. Do the same for all clamps.
  9. Set the other 4 Nuts in the rest of the mount's hexagonal holes
  10. Press Servo into place.
  11. Using bolts, clamp the two Mounts together (by connecting them to the nuts you set in step 9).
  12. Screw the Gear & Horn back on.
  13. Repeat steps 5-11 for the second servo
  14. Note that the setup is mirrored for each of the servos

Step 3: Circuit Diagram

Pins 10 and 11 serve as the control lines for the servos; they output the PWM signals.

Note that both servos are connected to the same Vin and Ground

The batteries will be connected to the DC Barrel. This will be discussed further later.

Step 4: Setting Up the Arduino

To keep the project clean, we decided to set up the Arduino on a PCB

  1. Solder the pins into the PCB
  2. Connect the Vin, Ground to the Male-Male Headers
  3. Connect pins 10 and 11 to the servo control lines
  4. The order should be Vin, Ground, Control, as shown in the image

You will need to use the following GitHub repository to be able to edit in the Arduino IDE

  1. Follow the Instructions in the GitHub
  2. I recommend that you test your Servo's connections to make sure that everything is working well
  3. You can use something like this Tutorial to test the system

NOTE: I struggled to get this to work with a Mac. I recommend using a Windows system (unsure whether a virtual machine would work, since it did not for me)

Step 5: Battery Set Up


  1. Solder the Ground and Vcc to the corresponding wires in the Male DC barrel connector.
  • Make sure that the wire is long enough to reach from the back of the camera mount to the front section
  • Slide a piece of heat-shrink onto the wire before soldering
  • Use the heat-shrink to make sure there is no exposed solder
  • Use the hot, non-tip section of the soldering iron to shrink it (or a lighter/heat gun)
  • Make sure that you can measure the voltage at the connector with your multimeter


Step 6: (Optional) Cleaner Battery Set Up

In this project, we decided to use an audio cable to connect the Arduino to the battery pack


  1. Drill a hole in the battery pack for the Audio Socket to fit through.
  2. We had to use some super glue to hold it in place
  3. Cut the wires from the Battery Pack to size.
  4. Solder them to the Socket.
  5. Slide a piece of heat-shrink on before soldering
  6. Use the hot, non-tip section of the soldering iron to shrink it (or a lighter/heat gun)
  7. Cut the Audio Cable and look at the wires inside.
  8. Using a schematic of the cable, solder them to the corresponding wires of the Male DC barrel connector.

MAKE SURE TO USE A MULTIMETER TO VERIFY THAT ALL YOUR CONNECTIONS ARE CORRECT

Step 7: Assembly on Mount Rails

  1. Place the rings on the camera lens
  2. Make sure that they are the right fit and that they are tightly adjusted (not super tight, but tight enough to ensure they will not slip)
  3. Place the Servos in the front section of the camera mount
  4. Make sure that they line up with the camera rings (as shown in the pictures)
  5. Under them place the Arduino using the 3D printed part
  6. Clip it under the rails (as shown in the image)
  7. You might need to carve a hole in the back of the box to allow the battery pack to be connected (you can fix this by adding the hole in the CAD file before printing)
  8. Slide in the cable management 3D printed parts
  9. This can be done either in the front section or the back section, depending on how long your cables are and what your setup is.
  10. Slide the battery after the clips (in the back section)
  11. Clip it under the rails to prevent it from covering the camera screen (as shown in the image)

NOTE: when I refer to "front" and "back," I am referring to the section in front of where the camera attaches to the mount and the section behind it

Step 8: Set Up Remarks

In our case the mount was held by a robotic arm attached to a wheelchair. Due to the weight distribution of the mount, we decided to use Velcro to hold it down to the arm.

Make sure that you are conscious of how you will attach the camera setup to the wheelchair and keep that in mind when building the mount. Some slight rearrangement of the setup might be beneficial for your particular use case.

Step 9: App Set Up

As we mentioned earlier, we are working on an open-source app that we hope will integrate the Sony app's functionality as well as the zoom and focus system. As of now, we are using the Zocus app created by Hey Jude in the Instructable we mentioned earlier.

These are the setup instructions:

Go to http://www.zocus.co.uk/ and navigate to the App Store, where you can download the App on iOS.

Follow the instructions to install the App. Switch on Bluetooth on your device.

Before you power up the Zocus for the first time, you need to follow a few steps:

  1. Remove the Camera from the Zocus Rig.
  2. Now switch on the Battery/Power to the Zocus Rig.
  3. *It will make a small 'zzz' noise as the servos spin into their Mid-Positions*
  4. With the Camera off the rig, adjust the Zoom and Focus to the mid points of their rotations.
  5. Start the App.
  6. Move the sliders - you should see the motors turn.
  7. Put them back to the centered position.
  8. Put the Camera back on the Zocus Rig (everything should now be in a 'Mid Point' configuration).
  9. Slowly slide the sliders back and forth until the Zoom/Focus rings reach their natural limits.
  10. If you are happy that this covers the full range of the movements, you can then go to calibrate and set the Max and Min points. You can save different lenses.
  11. For storage, slide both sliders to Min. But be aware on start-up, the Zocus will go to Midpoint by default.
  12. You will need to calibrate the setup to the min and max values of your lenses.
  13. You can store multiple lenses, so the calibration needs to happen only once per lens

Step 10: Creating the Open Source App - PROTOTYPING

One of the most important things for us was to make sure that our user could comfortably use the app that we were designing. So we went through multiple rounds of prototyping in Figma to ensure the layout of the app was as optimal for her as possible. You can find the final version of our prototype here.

We were mainly focusing on:

  • Decreasing the number of steps needed to take a photo
  • Increasing their independence when taking a photo
  • Decreasing the amount of exhaustion involved in taking a photo


UX design process: We used the Design Thinking framework of empathize (research and interview), define, ideate, prototype, test, and iterate.

 

Step 1: Primary research and user interviews:


The goal is to fully understand the pain points and needs of the user, so that we can pinpoint what solution to design for.

 Steps: set a goal for the interview, create a list of interview questions based on this goal, meet the user, and conduct the interview. The resource below offers best practices and details on how to do this:

 https://www.nngroup.com/articles/user-interviews/


Step 2: Define and ideate

Using the results from primary research and user interviews, define the problem and solution. Ideate on all possible solutions; team brainstorming sessions and sketching are what we used.

 

Step 3: Prototyping

We used Figma to design the interfaces

Create an account in Figma (it’s free to use)

Use our UI frames https://www.figma.com/file/NZDayrVSJC3sYsHyNQu1eX/CamXplorer_published?node-id=0%3A1&t=o6bGrlsxz68DTTRn-1

or follow these tutorials to design your own: https://www.youtube.com/watch?v=Cx2dkpBxst8&list=PLXDU_eVOJTx7QHLShNqIXL1Cgbxj7HlN4


Design Resources:

We are heavily influenced by Apple's native Camera app for the UI and Sony Imaging Edge for the features.

For UI assets, you can use https://material.io/blog/material-3-figma-design-kit; you can also use any UI kits and assets of your choice. Tons can be found in the Figma community and Figma plugins, such as Iconify and Icons8.

For UX accessibility standards, helpful resources we referred to are: https://www.w3.org/WAI/tips/designing/, https://accessibility.huit.harvard.edu/design-controls-easy-operation, https://digitalcommunications.wp.st-andrews.ac.uk/2019/11/08/designing-for-users-with-physical-or-motor-disabilities/

Because our user will be using the device from a distance, we want to make sure the contrast is high. Great accessibility plugins to check contrast on Figma include Stark, Able, Contrast Checker, and Color Blind.

 

Step 4: Testing

The goal is to have the user complete specific tasks to get feedback. For details, please refer to:

https://www.nngroup.com/articles/usability-testing-101/

 

Step 5: Iterate

  • Iterate steps 2-5 as needed.

Step 11: Making the App

  • We used Sony's API/SDK
  • We used iOS app development since the person we were designing this for has an iPhone
  • If you are interested in learning more about how the app was created, reference the README in the app's GitHub repository

Summary

ppat-camera-remote allows for remote control of ISO, Shutter Speed, and Aperture of a Sony DSLR camera. The app uses Wi-Fi to "talk to" the camera and send event commands that let you change ISO, Shutter Speed, and Aperture. It uses another repo called ROCC (https://github.com/simonmitchell/rocc) to send these commands. ppat-camera-remote is specifically designed with accessibility in mind, with all the buttons being close to the bottom and aligned to one side of the phone. This allows a user to operate the app without having to make large movements.

ppat-camera-remote was written as part of a class at MIT called Principles and Practices of Assistive Technology (PPAT).

Features include:

  • Changing ISO, Shutter Speed, and Aperture from your device
  • Being able to trigger the shutter from your device
  • Viewing current camera settings


Requirements


Running the App in the Xcode Simulator

  1. Download the source code

$ git clone git@github.com:shreya2142/ppat-camera-remote.git

  2. Open PPATCamera in Xcode
  3. Select iPhone 11 as the simulator; this is the iPhone model the app has been tested on. The app was designed to work with other iPhone models as well, so selecting iPhone 11 is not strictly necessary.
  4. Hit the play button at the top left corner of Xcode
  5. Now that you know the app is working in Xcode, we will try to connect it to your camera (see the section below)


Connect App to Camera (Instructions for Sony Alpha 7iii)

  1. Turn on the camera and press the menu button at the top left of the viewfinder. Also turn the top dial to M for manual mode.
  2. Navigate to the icon that looks like a globe and press "Ctrl w/ Smartphone"
  3. Make sure Ctrl w/ Smartphone is on and so is Always Connected
  4. Press the Connection button; a screen that says Connection Info at the top should pop up
  5. On your laptop, disconnect from your local Wi-Fi and connect to the Wi-Fi network printed on the camera. If it is your first time connecting, you will have to connect with a password, which is displayed by pressing the trash can button on the camera
  6. From there, close any simulator instances you already had running and click play. Then click the camera icon on the bottom left of the screen. If an error message shows up, wait for a few seconds and attempt connecting again. More drastic actions include closing and reopening the simulator or turning off the camera and repeating this process again.
  7. Now you're connected and ready to go! Click on ISO / SS / A, which stand for ISO, shutter speed, and aperture respectively, and a scrolling wheel pops up that allows you to change the value. No confirmation is necessary; the values on the camera update in real time. Once you are satisfied, click the capture button in the middle of the screen to take a picture.


ROCC Information

The initial strategy for connecting with the camera was using the Sony native API/SDK. After struggling with that for a while, I went in search of alternate ways to connect to my Sony camera. I came across ROCC (https://github.com/simonmitchell/rocc), which has the same functionality as the Sony native API/SDK but with a significantly more straightforward interface. The documentation for adding ROCC as a dependency in Swift did not work for me; I suspect this is because the documentation is slightly outdated. I will walk through the steps required to add ROCC as a dependency in Xcode (I am using Xcode 14.0.1).

  • Open PPATCamera.xcodeproj in Xcode
  • On the right, there should be a panel with PROJECT and TARGETS as headers. Click the PPATCamera below the PROJECT header.
  • Click on the package dependencies tab
  • Click on the + icon at the bottom of the list
  • Add Rocc using the GitHub link and follow the instructions about version rules
  • Check to make sure Rocc shows up under Package Dependencies when you click on the PPATCamera.xcodeproj icon
  • From there, just put import Rocc at the top of your file and you should be good to go!


Development Process

The first step in the development process was to talk to our codesigner and gain an understanding of what sorts of UI elements they like / do not like. Based on this, we made several Figma mockups to test with our codesigner. From there, we made modifications to the Figma files based on their input and continued iterating on the design of the frontend. See the section Frontend Design Considerations for what we learned from this experience. Given the frontend of this app, it is incredibly easy to do things such as change the size and placement of buttons or increase/change fonts. Things that may be slightly more difficult would be changing the type of UI element, such as going from a scrolling wheel to a manual input.

Once the Figma design was iterated upon to a degree we were satisfied with, the next step was to create this frontend in Xcode. For the development of this app, we used Storyboard, so the creation of the frontend was quite intuitive. Due to a GitHub mishap, we lost the complete frontend and had to start over. Due to time constraints, we decided to go with the functionality that we deemed the highest priority, which was ISO, shutter speed, and aperture. When building out the UI, frequently press play to see your code on the simulated phone. There were many instances where we had "floating" buttons or buttons in the wrong place due to misplaced constraints. Furthermore, check to see how your app functions on devices that are different from what you are planning on running it on.

After completing the UI, we moved to working on the backend. The ROCC repository has several examples of how to get started. One point of confusion for me was that many of these examples had to be contained within delegates. These delegates have certain functions that they are required to have. The first step to the backend is writing the function that connects to the camera. To do this, you make a CameraDiscovererDelegate that is called somewhere within your ViewController. This delegate should have a function called connect that allows you to connect to your Sony camera. I suggest testing that the connection works before proceeding with coding any of the other backend elements. To test the connection, follow the same steps as in 'Connect App to Camera'. I found it exceptionally helpful to print information about the camera, such as its name and connection mode, to help with the debugging process. After that, most of the other backend functionality is just sending commands to the camera using ROCC's performFunction. One limitation of this app is that it CANNOT receive information from the camera. I will discuss this further in the Missing Functionality section.

Because the app cannot receive information from the camera, one important development note is to manually input the possible states and also keep track of the current state. The possible states are just the values that ISO / SS / A can take for a particular camera and lens. A lot of this information is contained within the ROCC repo or online, so there is no need to manually scroll through the camera. To make sure you are keeping track of the current state properly, immediately upon connection send a command to the camera setting ISO / SS / A to known values and save those values in your code. After that, when sending any command to update values in the camera, make sure you update the respective variable in your code.


Frontend Design Considerations


See 'Figma Design' from the previous step to see what this app is working towards!

  • All the buttons are at the bottom and left aligned. This makes it easy to operate the app without lifting your elbow, which can be tiring for some users. Feel free to modify the placement of the buttons to suit your own needs. The placement of the buttons significantly affects the user experience as it determines how taxing it is to do any one operation.
  • Displaying current camera settings in app. This app was designed with the intention that the user would not be looking at the camera or the settings display at all. Therefore, it is critical to make sure that all values are easily displayed on the screen.
  • Our codesigner wanted to ensure that this app would work even if she upgraded her phone. With this in mind, make sure that constraints placed are not dependent on exact distances from points on the screen in a way that would cause the app to change appearance and/or functionality with a phone switch.


Missing Functionality


One large piece of missing functionality is the inability to get information from the camera. Based on the documentation provided in the ROCC repo, this is definitely possible to do; I was just unable to figure it out due to time constraints. The addition of this functionality would allow the app to know the actual values of the camera without manually updating them. It would also allow the user to see what 'the camera sees' on their phone, which was an important criterion for our codesigner that we were unfortunately unable to achieve. The code currently has a CameraEventNotifierDelegate that is supposed to call the eventNotifier function whenever it gets information from the camera. Despite hours of debugging, eventNotifier was never called, and therefore we were unable to receive information from the camera.

Another easy piece of functionality to add is more options to toggle besides ISO / SS / A. Examples include white balance, flash, and timer. Another feature to be added is to give the capture button more built-in functionality. We want a light press of the button to correspond to a light press of the shutter button on the camera, and a long press to correspond to burst mode. Looking at the ROCC documentation, this is also a possibility.

The final piece of missing functionality is for the app to successfully operate on a phone. For our final user prototype testing, we were able to upload the app onto our codesigner's phone by connecting it to the laptop via USB, installing it as a device in Xcode, and then running the simulation on it. Although the frontend of the app was working, our codesigner was unable to communicate with the camera. We tested the app again using the laptop simulator, and it was successfully able to do so. This is a critical part of getting the app into the hands of people who actually want to use it, since it is unreasonable to expect them to run it in a development environment.



Step 12: Final Remarks

Please visit Hey Jude's Instructable. This is an application of his system with some slight modifications to fit the particular needs of a wheelchair user. The main changes were the use of a different Arduino (since the one recommended by them has been discontinued) and the development of an open-source app that integrates remote control of the camera functions with remote control of the zoom and focus system. We have also updated the CAD files to fit the Sony camera and the new Arduino.