Vertical and Horizontal Stripes

This week I was off the nano-lathe and focused instead on creating images to match the cut planes of the mirror facets. I put both of these into my optical system for testing and discovered that there would be lots of incremental changes to make. I began by printing a series of small-scale translucent images to allow the light to travel through the image and onto the mirror plane.
I created both vertically and horizontally striped images to reflect at 90 degrees to the cut mirror plane. That is, for the mirror with vertical cuts, I reflected/projected a horizontally striped image, and vice versa. This allows me to see how much the depth of cut bends the light, and whether my next program for the nano-lathe requires deeper cuts. These striped images are just for optical testing; once I work out how much the light bends, I will create a series of images and matching mirrors.
Within the images I added a fine black stroke line between each coloured stripe. This allows my eye to pick up the change in direction of the light (stripe) more easily – a trick I picked up from the 19th-century artist-engineer Charles-Émile Reynaud! He used a black background in his image frames so that the viewer could more easily perceive the changes in the characters in the foreground.
Image and mirror holders

After preparing the images, I modelled up and 3D-printed some image holders so I could control their position in the optical system. Originally I thought that the image size needed to be exactly the same as the mirror, but when I tested this, it turned out that the image frame needs to be significantly larger than the mirror to compensate for the distance between the image-object and the surface of the mirror.
Before placing the facets onto the polygon jig, I needed to test a single mirror facet in my optical system and therefore modelled up a small holder for the mirror.
At the same time Tom Cave from the School of Physics Workshop made a brass jig allowing me to easily attach the aluminium facets onto a holder for cutting. This will make centring the placement of the facet on the spindle much easier – for when I’m back cutting again next week on the lathe!
Geoff and I have been developing a Python script that we will use next week once back on the machine. We’re working towards more stepped and deeper cuts on the aluminium facets, to see if this changes the reflected image more dramatically.
Zooming in and out – from nano scale to millimetres

Over the last few weeks I have been on a steep learning curve, venturing into the world of nano fabrication. It’s been quite a disorienting experience adapting my visual thinking to a Cartesian reference system to accommodate the different axes of the nano-lathe and understanding how the g-code relates to the physical behaviour of the machine.
The constant moving back and forth between such a precise machine and 3D printers has forced me to continuously shift my focus. The shiny mirrored parts I’m making on the nano-lathe require my gaze to zoom in beyond what I can see without the aid of a powerful magnification lens. When I jump back onto the 3D printers, I feel my gaze zooming back out again and a millimetre now seems extraordinarily large! This shift in perception has made me think of a quantum physicist or astronomer, constantly oscillating between the ‘zoomed-in’ world of minuscule or faraway objects and the ‘zoomed-out’ world of everyday life.
What I’ve been testing, exploring, making
Cutting the mirrored facets

My collaborator Geoff Campbell has been patiently showing me the steps of using the nano-lathe – and there are many! After learning how to zero the various axes of the machine using a magnifying camera, probe and balancing system, I began facing components for a small 6-sided polygon, which I’ll place in one of my optical systems for testing – to see how these facets affect my image.
Neil Devlin cut a series of blanks for me to use on the nano-lathe and after facing them I’ve now begun to cut splines (patterns) into them. These patterns form the outer surface of the polygon and determine how the image-light, which travels through the image frame, is reflected off the mirrored surface and into the objective lens. By controlling the depth of cut to the mirror plane, the facets shift certain parts of the image. The images below with the wavy blue lines demonstrate how much the cut mirror surface changes the direction of the light. The wavy blue line is the reflection of a (very straight!) blue highlighter pen.
This visual ‘relocation’ of certain areas of the reflected image will enable me to control the movement within the projected image, when placed into my optical system. We’ll be producing lots of iterations of these facets, experimenting with how the light ‘bends’ specific images and consequently creates the illusion of movement in the projected image. As the mirror cuts are so shallow on the lathe, I’m also exploring with Geoff how to calculate the distances required from the image source to the mirror facet to the objective lens and viewer. The next step is to play with how each cut on the mirrored facet bends specific sections of the image, so we can develop a sequence of cuts for the polygon facets.
Rotating the mirrored facets

I’m assembling the polygon components around a single rotating shaft, connected to a small stepper motor, controlled by a driver and some simple Arduino code. I 3D-printed a set of jigs which will hold the individual mirror facets and sit on top of the motor’s shaft collar. As the motor rotates the polygon within the optical system, each uniquely patterned facet will deflect the reflected image-light differently to create the illusion of movement in the projected image.
Rotational speed is important for the perception of movement in the projected image. So far I have only been able to set the motor to rotate as slowly as 10 RPM, so I might need to add some gears to slow it down further. Luke Materne from the Electronics Unit at the Research School of Physics has been helping me with this. More things to learn!
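The gearing question above comes down to simple arithmetic. Here's a quick sketch of it in Python – only the 10 RPM minimum comes from my notes; the target speed and the 6-sided polygon count are illustrative assumptions:

```python
# Assumed figures: only the 10 RPM minimum is from the build notes;
# the 2 RPM target is a hypothetical slower speed for the polygon.
MIN_MOTOR_RPM = 10.0
TARGET_RPM = 2.0

# Required gear reduction ratio (driven teeth / driving teeth)
ratio = MIN_MOTOR_RPM / TARGET_RPM
print(f"reduction needed: {ratio:.1f}:1")

# With a 6-sided polygon, how often does a fresh facet face the lens?
FACETS = 6
seconds_per_rev = 60.0 / TARGET_RPM
seconds_per_facet = seconds_per_rev / FACETS
print(f"{seconds_per_facet:.1f} s per facet")
```

So a modest 5:1 reduction would bring a 10 RPM motor down to 2 RPM, presenting a new facet roughly every five seconds.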
Cutting Splines into the facets

To begin a conversation with Geoff about the mirror cuts, I 3D-modelled and printed some surface profiles. Understanding now how the nano-lathe works has modified my thinking about how to make the mirrored facets. Geoff wrote a Python script to approximate the spline as a series of points and then turned this into g-code for the nano-lathe. The machine also cuts very slowly: for example, a facet of 18mm x 25mm takes about 4 hours to cut to a depth of about 120 microns (0.12mm).
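Geoff's actual script isn't reproduced here, but the general spline-to-g-code approach can be sketched in a few lines of Python. Everything in this sketch is an assumption: a cosine wave stands in for the real spline, and the axis letters (X, Z) and feed rate are placeholders for whatever the nano-lathe actually expects:

```python
import math

def sample_profile(width_mm, n_points, depth_mm):
    """Approximate a smooth profile as a list of (x, z) points.
    A cosine dip stands in for the real spline here."""
    pts = []
    for i in range(n_points):
        x = width_mm * i / (n_points - 1)
        # Depth goes from 0 at the edges down to -depth_mm at the centre
        z = -depth_mm * (0.5 - 0.5 * math.cos(2 * math.pi * x / width_mm))
        pts.append((x, z + 0.0))  # + 0.0 normalises -0.0 to 0.0
    return pts

def to_gcode(points, feed_mm_per_min=2.0):
    """Turn sampled points into linear G1 moves (axis names assumed)."""
    return "\n".join(
        f"G1 X{x:.4f} Z{z:.4f} F{feed_mm_per_min}" for x, z in points
    )

pts = sample_profile(width_mm=18.0, n_points=5, depth_mm=0.12)
print(to_gcode(pts))
```

The idea is just that a smooth curve the machine can't follow directly becomes a dense chain of short straight moves it can.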
Each cut has to be made in several passes, with rough and finish cuts of different depths. It’s been really interesting to discover that the cut in the mirrored facet doesn’t have to be very deep. If it is combined with a lens of the correct focal length (that is, the right distances from the mirrored facet to the objective lens and projected image), one’s visual perception still picks up the deflection (or change) in the image.
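To put rough numbers on why such shallow cuts are still visible: a mirror surface tilted by an angle θ deviates a reflected ray by 2θ, and that doubled angle then grows with distance. In this back-of-envelope sketch only the ~120 micron depth comes from the notes above; the lateral span of the cut and the throw distance are assumed:

```python
import math

# Illustrative numbers (assumptions, not measured values):
cut_depth_mm = 0.12   # ~120 microns, the depth mentioned for a facet
cut_span_mm = 2.0     # lateral run over which the surface falls away
throw_mm = 300.0      # distance from facet to projected image

# A surface tilted by theta deviates the reflected ray by 2*theta.
theta = math.atan(cut_depth_mm / cut_span_mm)
displacement_mm = throw_mm * math.tan(2 * theta)
print(f"surface tilt: {math.degrees(theta):.2f} deg")
print(f"image shift at {throw_mm:.0f} mm: {displacement_mm:.1f} mm")
```

With these assumed figures, a cut barely a tenth of a millimetre deep shifts the reflected image by tens of millimetres at the projection plane – which is why the eye picks it up so readily.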
Conversations with Geoff

There have been lots of three-way conversations between Geoff, myself and the whiteboard! As well as mirrors, we’ve started discussing how we can use the nano-lathe to cut some lenses from optical-grade acrylic. We’ve also begun a dialogue about splitting up our optical system to have 2 mirrored polygons running on perpendicular axes. This will double the effect of the apparent movement in the projected image.
To explore this further I’ve been looking at Charles Wheatstone’s 19th-century stereoscope, which uses 2 image sources and angled mirrors to create a single image with a depth of field. I’ve also been exploring lots of historical optical-device patents to look at different configurations for optically recombining the 2 separate images into a single projected image. The important difference between our system and Wheatstone’s is that ours uses only a single image source.