The primary result of my 2025 creative technologist residency at SLV LAB was a mixed-reality exhibition titled In Memory | Of Being, which invited visitors to explore photos and audio from State Library Victoria’s digitised collection in spatialised form through mixed-reality goggles. One section of the installation featured a selection of stereograph images from the library’s collection; in this article, I’ll describe what stereographs are and how they work, then demonstrate how to view them with their original depth restored using simple image processing methods.
What is a stereograph?
A stereograph is a pair of photographs of the same scene, taken at the same time from two slightly different points in space, ideally separated by roughly the distance between a pair of human eyes. This produces a left-right set of paired images, like this stereograph view of underground mining at Bendigo, Victoria, taken by J.B. Edwards in 1901.
[Image: stereograph of underground mining at Bendigo (ie1405419)]
At first glance, the two images appear identical, but on close inspection there are subtle differences resulting from the slightly different perspectives on the scene. If you had access to the original stereograph print of this image, as well as a stereoscope, you could view this scene with perceptible depth rather than as a flat image; viewed stereoscopically, the curving pipe that cuts through the scene appears to jump out of the image, and a palpable sense of claustrophobia emerges from the tight confines of the environment.
Viewing this image stereoscopically requires that we somehow separate and isolate the images, so that your left eye sees only the left image, and your right eye sees only the right image.
Stereoscopes
Stereographs were very popular in the late 1800s and early 1900s, driven in no small part by the invention of a hand-held stereoscope by Oliver Wendell Holmes. This was far cheaper to produce than earlier designs, such as those by Charles Wheatstone and David Brewster, and Holmes famously elected not to patent his design, thereby allowing anybody to build their own version.
While the popularity of stereographs eventually waned, similar principles later led to the development of 3D movies and modern virtual and augmented reality headsets.
State Library Victoria's stereograph collection
The State Library Victoria catalogue lists 3,145 stereographs in its collection, of which 2,916 are available for online viewing.
If you happen to own a Brewster-style stereoscope and a printer, you could print out these digitised stereographs and view them more or less as originally intended. You might also be able to perceive the depth in these images by crossing your eyes and trying to fuse the left and right images, but not everyone finds this easy, and it tends to cause eye strain fairly quickly.
Manipulating Stereographs
So, what other options exist for viewing stereographs? Assuming you don’t have a stereoscope or a virtual reality headset (more on that later), the easiest option is a pair of red-cyan anaglyph glasses. These were the original way to watch black-and-white 3D movies, and can be purchased inexpensively or assembled from scratch with a bit of red and cyan cellophane.
Anaglyphs work by taking two monochromatic images of a scene and colourising them separately, so that one image is printed in red and the other in cyan; the two images are then overlaid directly on top of one another. If you now put a red filter (which blocks cyan) in front of one eye, and a cyan filter (which blocks red) in front of the other, each eye sees something different when looking at the same image, and when your eyes attempt to fuse the two, the stereo disparity between them leads you to perceive depth within the image.
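As a minimal sketch of that idea, here is how the two filtered images combine per pixel (the function name and the tiny two-pixel “images” are invented for illustration):

```javascript
// Minimal sketch of the anaglyph principle: the left image's brightness goes
// into the red channel, and the right image's brightness into the green and
// blue channels (green + blue light together reads as cyan).
function makeAnaglyph(leftGray, rightGray) {
  const out = [];
  for (let i = 0; i < leftGray.length; i++) {
    out.push([leftGray[i], rightGray[i], rightGray[i]]); // [R, G, B]
  }
  return out;
}

// A red filter passes only the first value of each triple (the left image);
// a cyan filter passes only the last two (the right image).
console.log(makeAnaglyph([200, 50], [30, 180]));
// [ [ 200, 30, 30 ], [ 50, 180, 180 ] ]
```

Each eye therefore receives a different brightness at the same screen position, which is exactly the disparity the brain needs to reconstruct depth.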
3 ways to view stereographs on your screen
Method 1: Using GIMP to create a red-cyan image from a vintage digitised stereograph
What follows is a relatively simple, if somewhat manual, approach using the open-source image editor GIMP (I used version 3.0.4, but the steps below apply to just about any version).
- First, you’ll need a stereograph; I’m going to use that image of underground mining at Bendigo from earlier, but feel free to browse through the Library’s extensive collection to find something that tickles your fancy. Download a copy of your chosen stereograph, ideally at the highest resolution available.
- Open that image in GIMP. If you’re using the default layout, you should see the Layers window on the right side of the UI; we’re going to make a duplicate layer of our original image by right-clicking the base layer and selecting ‘Duplicate Layer’:
- Avoid future confusion by naming one of these layers “Left” and the other “Right”.
- Next, we’re going to desaturate these layers, which throws away any colour that might be present in the scanned image and converts it to pure black and white. From the Colours menu, select Desaturate > Desaturate (then press OK in the dialogue box) to convert the layer to black and white. Make sure to do this for both layers.
- Next, assuming we’re using conventional red-cyan anaglyph glasses, where the red filter covers the left eye and the cyan filter covers the right eye, we need to convert our left layer to red and our right layer to cyan. We’ll do this by clicking the Colours menu and selecting Channel Mixer, which allows us to apply a multiplier to the red, green, and blue channels within a layer. For the left layer, leave the red multiplier at 1.0 and set the multipliers for the green and blue channels (which combine to make cyan) to 0.0. For the right layer, set the red multiplier to 0.0 and leave green and blue at 1.0:
- At this point, whichever layer is on top will block the layer beneath it; in my case, the right layer sits above the left layer, so all I see is the cyan image. If we change the Layer Mode for the top layer from ‘Normal’ to ‘Addition’, the channels of the two layers will add together instead. Since we’ve effectively divided the channel information from the original image across two separate layers, setting the top layer’s mode to Addition brings us back to a black-and-white image again.
- This might feel somewhat anticlimactic, but if you now select the move tool from the toolbox on the left side of the UI and move the top layer around relative to the bottom layer, you’ll see the separation of the red and cyan components of the image.
- Next, align the right-cyan layer with the left-red layer. To do this, find the furthest-back object in the scene and move the layers until that object overlaps in both images. In my case, I’ve picked one of the wall-mounted candles towards the rear of the scene, and I’m moving my top (right-cyan) layer until the candle overlaps.
- Once your layers are aligned, zoom back out and use the rectangle select tool to crop your image down to the overlapping area of the two images:
- From the Image menu, select Crop to Selection, and you’re done!
- You can export this to an image file, and if you have your red-cyan anaglyph glasses handy, you can pop them on and marvel at the restored depth of this fine vintage stereograph.
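The GIMP steps above boil down to simple per-pixel arithmetic. Here’s a rough sketch in JavaScript (the function names and single-pixel “images” are invented for illustration; GIMP’s Desaturate offers several modes, and a plain channel average is used here):

```javascript
// Desaturate: average the RGB channels to a single grey value.
const desaturate = ([r, g, b]) => Math.round((r + g + b) / 3);

// Channel mixer: keep only red for the left layer, only green/blue (cyan)
// for the right layer.
const toRed  = (grey) => [grey, 0, 0];
const toCyan = (grey) => [0, grey, grey];

// Addition layer mode: add the two layers channel by channel, clamped to 255.
const addLayers = (a, b) => a.map((v, i) => Math.min(255, v + b[i]));

// One pixel from each half of a stereograph scan:
const leftPixel  = toRed(desaturate([120, 130, 140]));  // [130, 0, 0]
const rightPixel = toCyan(desaturate([90, 100, 110]));  // [0, 100, 100]
console.log(addLayers(leftPixel, rightPixel));          // [ 130, 100, 100 ]
```

Because the red and cyan channels never overlap, the Addition blend simply recombines the two filtered layers into one image, just as in the GIMP workflow.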
Method 2: Using GIMP to create GIFs
GIMP can also produce animated GIFs from images with multiple layers, which is another way to (sort of) see depth in a stereographic image on a flat screen:
- Select File > Open as Layers, select all the images you want in the GIF, and click Open.
- In the Layers tab, order your images: the GIF sequence starts with the bottom layer and runs through each layer from bottom to top. (I’ve used only two layers here, but you can use more.)
- From the main menu, select Filters > Animation > Optimize (for GIF).
- To preview the GIF, select Filters > Animation > Playback and press the Play button.
- To save the GIF, select File > Export As, name your file, choose a folder, and under Select File Type choose GIF Image.
- In the dialogue box, tick ‘As Animation’ and ‘Loop Forever’. To change the speed between frames, adjust the Delay.
- Click Export.
Method 3: Code
You could repeat the manual process above for as many stereographs as you have patience for, but we can speed things up considerably with a code-based approach instead of a full-blown image editor like GIMP. Given a scan of a stereograph, we can automate most of those steps; the only manual intervention needed is selecting the corresponding far point in the left and right images, and setting the bounding box for the final crop.
There are any number of ways to approach this problem, but for the purposes of demonstration I’ve built a web-based application that relies on the open-source three.js JavaScript library:
Try out the Stereoscope Viewer prototype
This is a kind of homage to the Stereogranimator project hosted by the now sadly closed NYPL Labs: a web application for generating red-cyan anaglyph and stereo-wiggle images from vintage stereographs in the collection of the New York Public Library. The implementation here is quite different, but the end goal of making stereograph depth accessible via a web application is the same.
The code for this application is all open-source and hosted on the SLV LAB’s GitHub page, and allows for the generation of red-cyan anaglyph or stereo-wiggle images from any stereograph in the library’s digitised collection.
I’m using the IIIF interface to access the raw image data from the SLV’s archives, which means you don’t have to download the images yourself and then upload them into the tool – all you need is the IE number of the stereograph you’d like to convert. These IE numbers can be extracted from the URL of the SLV’s image viewer interface – there are detailed instructions on finding these IE numbers and using the IIIF interface here (the JavaScript code I’m using to grab data from the library’s IIIF server is directly adapted from the Python code presented in that article).
[Image: the IE number highlighted in the URL of the SLV image viewer]
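As a sketch of the idea, constructing a IIIF Image API request from an IE number might look like the following. Note that the base URL here is a placeholder, not the real SLV endpoint; only the general shape of a IIIF image URL is shown, and the linked article has the actual details:

```javascript
// Placeholder base URL — not the real SLV IIIF endpoint.
const IIIF_BASE = 'https://example.org/iiif';

function iiifImageUrl(ieNumber, size = 'max') {
  // IIIF Image API URL pattern:
  // {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
  return `${IIIF_BASE}/${ieNumber}/full/${size}/0/default.jpg`;
}

console.log(iiifImageUrl('IE1405419'));
// https://example.org/iiif/IE1405419/full/max/0/default.jpg
```

Once the URL is built, the image can be fetched and loaded straight into a texture, with no manual downloading or uploading required.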
The image-processing part, in which the original stereograph plate is converted into either a red-cyan anaglyph image or a stereo-wiggle animation, is done using graphics shaders rather than traditional image manipulation techniques. This allows for the image manipulation to happen very quickly: the original image is loaded into texture memory once, and then the remaining operations use computationally inexpensive calculations to produce the final output.
How to use the prototype
- Find the IE number of a stereograph you’re interested in working with, and enter this into the URL string for the stereogram tool. The tool will then acquire a high-resolution image directly from State Library Victoria’s collection and display it in the upper half of the window.
- Pick your corresponding stationary point in the left and right images via a mouse click. If you hold down the space bar while hovering over the image, you’ll get a zoomed-in view allowing for more precise point-picking.
- Once you’ve picked the corresponding stationary point in the left and right images, the bottom output image will show an aligned red-cyan anaglyph image suitable for viewing with 3D glasses. If you’d like to see your stereograph as a stereo-wiggle animation instead, you can press 2 to switch to wiggle mode (press 1 to go back to anaglyph mode). There is also a cropping rectangle that can be helpful for trimming any extra bits you don’t want out of the view: drag the handles at the corners of the rectangle to select only the area of the image that you’re interested in.
- If you make a mistake with selecting your stationary points, or if you’d like to try selecting a different set of stationary points for the same image, hit R to reset the points and start over.
If you’re interested in understanding more about how this works, let’s dig into the shader code for generating the anaglyph image to see how we’re implementing all the manual steps from the GIMP process:
```glsl
uniform sampler2D uTex;   // the full stereograph plate, left and right halves side by side
uniform vec2 uLStat;      // picked stationary point in the left image
uniform vec2 uRStat;      // picked stationary point in the right image
varying vec2 vUv;

// Map a half-image UV into the corresponding half of the full plate.
vec2 halfToPlateUV_L(vec2 uvHalf) { return vec2(uvHalf.x * 0.5, uvHalf.y); }
vec2 halfToPlateUV_R(vec2 uvHalf) { return vec2(0.5 + uvHalf.x * 0.5, uvHalf.y); }

float luma(vec3 c) { return dot(c, vec3(0.2126, 0.7152, 0.0722)); }

void main() {
    // Left eye stays fixed in the output coordinate system
    vec2 uvL = vUv;
    // Shift right eye so that uRStat aligns with uLStat at the same output pixel
    vec2 uvR = vUv + (uRStat - uLStat);

    uvL = clamp(uvL, 0.0, 1.0);
    uvR = clamp(uvR, 0.0, 1.0);

    vec3 colL = texture2D(uTex, halfToPlateUV_L(uvL)).rgb;
    vec3 colR = texture2D(uTex, halfToPlateUV_R(uvR)).rgb;

    float gL = clamp(luma(colL), 0.0, 1.0);
    float gR = clamp(luma(colR), 0.0, 1.0);

    gl_FragColor = vec4(gL, gR, gR, 1.0);
}
```

This is code for a fragment shader: a small graphical program that decides what colour a pixel on your screen should be. To do its job, the shader needs to be able to sample data from the source image, and it needs to know the locations of the stationary points for the left and right sides of the image; these are provided in the three uniforms at the top of the shader block.
At a high level, every time the fragment shader runs for a given pixel in the output window, it samples from the input texture twice, at two different locations based on the picked locations for the stationary points. This is analogous to the copied layer system from the GIMP workflow, except it’s doing it one pixel at a time.
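To make that concrete, the shader’s coordinate arithmetic can be emulated on the CPU in plain JavaScript (UV coordinates run from 0 to 1, as in the shader; the function and variable names here are illustrative, not from the viewer’s codebase):

```javascript
// CPU sketch of the shader's coordinate logic: for one output UV position,
// work out where to sample in each half of the original stereograph plate.
const clamp01 = (v) => Math.min(1, Math.max(0, v));

function sampleCoords(vUv, uLStat, uRStat) {
  // Left eye stays fixed; the right eye is shifted so the two picked
  // stationary points land on the same output pixel.
  const uvL = { x: clamp01(vUv.x), y: clamp01(vUv.y) };
  const uvR = {
    x: clamp01(vUv.x + (uRStat.x - uLStat.x)),
    y: clamp01(vUv.y + (uRStat.y - uLStat.y)),
  };
  // Map each half-image UV into the full plate: the left half spans
  // x ∈ [0, 0.5], the right half spans x ∈ [0.5, 1].
  return {
    left:  { x: uvL.x * 0.5,       y: uvL.y },
    right: { x: 0.5 + uvR.x * 0.5, y: uvR.y },
  };
}

// Stationary point picked further left in the right-hand image:
const c = sampleCoords({ x: 0.5, y: 0.5 }, { x: 0.5, y: 0.5 }, { x: 0.25, y: 0.5 });
console.log(c.left.x, c.right.x); // 0.25 0.625
```

Running this for every output pixel in parallel is exactly what the GPU does, which is why the shader version is so fast.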
Once it has colour values from the left and right images, it converts them to luminance (i.e. to black and white) using a standard formula defined by WCAG 2.0:
L = 0.2126 * R + 0.7152 * G + 0.0722 * B
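In JavaScript, the same weighting looks like this (the green channel dominates because human vision is most sensitive to green):

```javascript
// Luminance weights matching the shader's luma() function.
const luma = ([r, g, b]) => 0.2126 * r + 0.7152 * g + 0.0722 * b;

console.log(luma([0, 1, 0])); // 0.7152 — pure green carries most of the brightness
console.log(luma([1, 1, 1])); // ≈ 1 — pure white stays at full brightness
```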
To produce the final output RGB value for the fragment (pixel), it takes the luminance value for the texture in the left image and puts that in the red channel, and then puts the luminance value for the right image into the green and blue channels:
gl_FragColor = vec4(gL, gR, gR, 1.0);
The outcome is that, rather than repeating a series of manual steps each time we want to make an anaglyph or wiggle animation from a stereograph, we can automate almost every step of the process aside from picking the stationary points and setting the crop rectangle.
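For the wiggle mode, the same aligned pair is used, but instead of combining the two images into one frame, the viewer alternates rapidly between them, and the brain reads the alternation as parallax. A rough sketch of the frame selection (this is not the viewer’s actual code, and the flip rate is an arbitrary choice):

```javascript
// Stereo-wiggle: rapidly alternate between the aligned left and right images.
// At N flips per second, even time slices show the left image and odd slices
// show the right image.
function wiggleFrame(timeSeconds, flipsPerSecond = 6) {
  return Math.floor(timeSeconds * flipsPerSecond) % 2 === 0 ? 'left' : 'right';
}

console.log(wiggleFrame(0.0));  // left
console.log(wiggleFrame(0.25)); // right  (at 6 flips/s, a flip every ~167 ms)
```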