Introduction: 3D Metavision Using a 2D Computer Screen by Way of Superposimetric Image Compositing


Meta means "beyond" in the self-referential sense, e.g. a meta conversation is a conversation about conversations and a meta joke is a joke about jokes. Metadata is data about data, such as the JPEG header information in a picture that tells you where it was taken, what the shutter speed and aperture were, etc.
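
For instance, here is a minimal Python sketch that reads this kind of metadata out of a JPEG using the Pillow library; the filename "photo.jpg" is just a placeholder:

    # Metadata = data about data: read a JPEG's EXIF header with Pillow.
    from PIL import Image
    from PIL.ExifTags import TAGS

    exif = Image.open("photo.jpg").getexif()
    # Shutter speed, aperture, etc. live in the Exif sub-IFD (tag 0x8769).
    for tag_id, value in exif.get_ifd(0x8769).items():
        print(TAGS.get(tag_id, tag_id), "=", value)  # e.g. ExposureTime, FNumber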

In my childhood, in June of 1974, I discovered that if I moved a television receiver in front of a surveillance camera, it glowed more brightly when it was tuned to pick up the TV signal, and that a long-exposure photograph of the surveillance camera (taken with another camera, hence the name "meta") shows the capacity of the surveillance camera to "see". I called this "metavision" = the vision of vision, or metaveillance, the sousveillance of surveillance (veillance of veillance). I also founded a company called Meta, together with Meron Gribetz and Ben Sand, and later brought in one of my PhD students as a co-founder (US Patent 9,720,505, an invention of 2012 originally filed Jan. 3, 2013).

Metavision is the predecessor of the metaverse: http://wearcam.org/metaverse/

You can learn more about this work, and about what lies beyond the metaverse (the eXtended uni-omni-meta-Verse = XV), through our effort at IEEE, OpenXV.org, and our research paper: https://arxiv.org/pdf/2212.07960.pdf

See also the video, "Beyond the Metaverse: XV explained in 6 minutes."

In this Instructable we'll learn how to take a simple metavision photograph using multiple exposures. This Instructable builds on some of my other Instructables; for example, we'll use the 1-pixel camera that we build for teaching purposes in my ECE516 course.

The goal is to use another camera to photograph the metaveillance of a simple 1-pixel camera, in order to teach the principles of metavision.

Supplies

- Computer screen
- Workbench upon which the screen can be slid back and forth
- Test camera (e.g. a simple 1-pixel camera built for teaching purposes)
- Mount for the test camera (a cantilever, so it can reach out right to the screen)
- A second camera to photograph the test camera (this is the metavision camera)

The above photos show the various scraps of wood (wooden blocks) that I salvaged from a scrap bin and painted black. I used these to build a cantilever arm that holds the camera out past the keyboard of my laptop computer, so that the laptop can slide under the camera and its screen can come right up to the camera.

The sliding platform is made from 4 skateboard wheels and bearings with 5/16-inch threaded rod, but you can also simply slide a screen across a desk, especially if you have a smooth desk and a piece of cloth that can go under the screen so that it is easy to slide back and forth.

Step 1: Set Up the Test Camera As Well As a Second Camera to Capture the Metavision

Here we will use one camera to photograph another camera together with that other camera's metaveillance. To do this, we'll use a computer screen to display the metaveillance in 2D slices while moving the screen through 3D space.

The first step is to set up the camera-under-test by mounting it directly in front of the computer screen. The computer screen should be placed on a workbench with room to move it back and forth. Optionally, use skateboard wheels, a surface that slides easily, or some object the screen can rest on, to make this movement easy.

In the above picture, the test camera (a homemade 1-pixel camera) is on the left and the metavision camera is on the right.

Step 2: Calibrate the Metavision Camera (Determine Its Response Function)

Since we're going to combine multiple exposures, we'll need to calibrate the metavision camera.

This can be done as shown in previous labs (previous Instructables), according to the following steps:

  1. Use a manual exposure setting (or download a manual-exposure app) and capture a series of pictures that differ only in exposure. To keep everything else constant, affix the metavision camera to a tripod, or otherwise mount or fasten it securely, while capturing the series.
  2. Determine the comparametric equation g(f) by computing comparagrams of image pairs, as sketched below.
  3. Solve the comparametric equation to determine the response function f(q).
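
Here is a minimal Python sketch of step 2, assuming two pixel-aligned 8-bit grayscale images; the filenames are placeholders:

    # Compute the comparagram of two images that differ only in exposure.
    # A comparagram is a 256x256 joint histogram: J[m, n] counts the pixels
    # having value m in the first image and value n in the second.
    import numpy as np
    from PIL import Image

    a = np.asarray(Image.open("expose_1.jpg").convert("L"))  # shorter exposure
    b = np.asarray(Image.open("expose_2.jpg").convert("L"))  # longer exposure

    J, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                             bins=256, range=[[0, 256], [0, 256]])

    # The ridge of J traces the comparametric function g, i.e. b = g(a);
    # a crude estimate of g is the peak of each row.
    g_est = J.argmax(axis=1)
    print(g_est[:16])  # g evaluated at pixel values 0 through 15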

Step 3: Capture 2D Metavision Slices

Capture 2D metavision slices. This can be done in one of two ways:

Beginner method: as in previous labs, determine the field of view of the camera by moving a test light from the center outwards until the camera's output falls off.

Advanced method: plot a white square (if your sensor is square) or rectangle (of your sensor's aspect ratio) and increase the size of the square until it falls beyond your camera's field of view, at which point you'll observe a reduction in the camera's output. Note the size of the biggest square that still falls within the camera's view, by observing the output. You can automate this if you have a microcontroller connected to the camera.
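
If you do automate it, here is a minimal Python sketch, assuming the microcontroller streams one numeric reading per line over a serial port; the port name, baud rate, screen size, and step size are all assumptions to adapt to your setup. If you'd rather work manually, use one of the plotting tools below.

    # Sweep an unfilled white square outline, growing it until the 1-pixel
    # camera's output drops (i.e. the outline has left the field of view).
    import cv2
    import numpy as np
    import serial

    W, H = 1920, 1080                  # assumed screen resolution
    port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # assumed port

    cv2.namedWindow("sweep", cv2.WINDOW_NORMAL)
    cv2.setWindowProperty("sweep", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

    def read_sensor():
        # Average a few readings from the 1-pixel camera.
        vals = [float(port.readline().decode().strip()) for _ in range(5)]
        return sum(vals) / len(vals)

    prev = None
    for size in range(20, min(W, H), 20):
        frame = np.zeros((H, W, 3), np.uint8)
        x0, y0 = (W - size) // 2, (H - size) // 2
        cv2.rectangle(frame, (x0, y0), (x0 + size, y0 + size),
                      (255, 255, 255), 4)   # outline only, like Fill = 0
        cv2.imshow("sweep", frame)
        cv2.waitKey(200)                    # let the display and sensor settle
        v = read_sensor()
        if prev is not None and v < prev:   # output drops: outline left the view
            print("largest square within the field of view =", size - 20, "px")
            break
        prev = v
    cv2.destroyAllWindows()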

Here is a useful Desmos script, https://www.desmos.com/calculator/dyyskbphgk

Set the background to black (what Desmos calls "Reverse contrast"), and set the polygon's "Fill" to 0 so that only the outline is displayed.

You can alternatively use Octave or other plotting programs, or even do it in OpenBrush.

For each of several screen distances from the test camera, determine the size of the square (or rectangle) that sits in the liminal space between just inside and just outside the test camera's view.

For each of these distances, photograph the test camera and the screen while the screen is displaying the square (or rectangle).

Also capture a picture of the test camera by itself. Above are shown three photos: one of the test camera, one of a small test square close in, and one of a large test square farther out.

Step 4: Combine the Exposures Using the Comparametric Law of Composition

We wish to construct a multiple-exposure picture of the two squares together with the picture of the test camera, so there are actually three pictures. Call them v0 (the test camera with no squares showing and the screen off), v1 (the small square), and v2 (the large square).

Compute f(f⁻¹(v0) + f⁻¹(v1) + f⁻¹(v2)), where f is the response function recovered in Step 2 and f⁻¹ is its inverse.

The above picture shows the result.
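
Here is a minimal Python sketch of this composition, assuming for illustration a simple power-law response f(q) = q^(1/gamma); in practice, substitute the response function you actually recovered in Step 2. The filenames are placeholders:

    # Combine the three exposures via the comparametric law of composition:
    # estimate each picture's photoquantity q = f_inv(v), sum, and map back.
    import numpy as np
    from PIL import Image

    gamma = 2.2  # assumed; replace f and f_inv with your calibrated response

    def f(q):      # photoquantity -> pixel value, normalized to [0, 1]
        return q ** (1.0 / gamma)

    def f_inv(v):  # pixel value -> photoquantity
        return v ** gamma

    names = ["v0_camera_only.jpg", "v1_small_square.jpg", "v2_large_square.jpg"]
    imgs = [np.asarray(Image.open(n), dtype=np.float64) / 255.0 for n in names]

    q_total = sum(f_inv(v) for v in imgs)   # superimpose the photoquantities
    out = f(np.clip(q_total, 0.0, 1.0))     # clip, then reapply the response

    Image.fromarray((out * 255).astype(np.uint8)).save("metavision_composite.jpg")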

Step 5: Continue and Have Fun With Metavision

Repeat for more squares (or rectangles)...

Step 6: Metavision in the Metaverse and Beyond

Optional additional step: Repeat using OpenBrush to create metavision in the metaverse and beyond.