Yesterday morning, I woke up early. I mean, really early. More night than morning early. I went to sit out on my verandah to enjoy the quiet night for a bit. I often do this. I then go back to sleep, or not. I am used to getting up early, as I love to experience the breaking day.
Anyway, back to last night. The sky was clear, just a few small clouds here and there. And there was the moon, full or at least full-ish.
Beholding the scenery unfolding in front of me, across the sea, I realised that I might see the moon setting behind the horizon line of the water, given its current trajectory – and, importantly, before the day breaks. I quickly checked the app I use to determine sunrise and sunset times for photography purposes, you know, for getting the nice warm light around these times of the day and all that. And yes, moonset was at 4:57, and sunrise at 5:37. Yay.
I took this picture at 4:49. I got lucky with the clouds.
In the above picture you can already see the twilight of the awakening day. Compare it to the following photo, taken only 15 or so minutes earlier. It’s still pitch black. Dawn and dusk happen very quickly here. Also check out the colour differences, in particular of the moon itself. It was fascinating to watch the dramatic changes.
While I was able to capture the first picture, i.e. the one with the setting moon, in one shot, the second one is composed of three photos. When we behold the moon, we are able to see the details of the moon’s surface as well as the surrounding landscape. When you try to take a photo, you realise how bright the moon actually is: the camera cannot capture both the moon’s details and the landscape. If you expose for the moon’s brightness, everything else is underexposed. If you set the camera to capture the surroundings, the moon will be blown out, just a white blob. Technically, our visual “apparatus”, from the eyes to the brain, has a much higher dynamic range than any camera. A camera measures luminosity linearly, while we humans do some logarithmic capturing and processing, but my memory is hazy on how exactly this works for us.
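To get a feel for the linear-vs-logarithmic point, here is a tiny, purely illustrative sketch. The luminance numbers are made up (a rough 1000:1 ratio between the moon’s disc and a moonlit foreground, which is in a plausible ballpark but not a measurement):

```python
import math

# Illustrative luminance values in arbitrary units (assumed, not measured):
# a moonlit foreground vs the bright disc of the moon, roughly 1000:1.
foreground = 1.0
moon = 1000.0

# A linear sensor spreads its scale evenly over brightness values,
# so the foreground ends up crammed into the bottom 0.1% of the range
# once the moon is placed at the top.
linear_fraction = foreground / moon

# A logarithmic response spreads the same ratio over "stops"
# (each stop is a doubling of brightness), which is closer to how
# photographers reason about dynamic range.
stops_apart = math.log2(moon / foreground)

print(f"foreground sits at {linear_fraction:.1%} of the linear scale")
print(f"moon and foreground are {stops_apart:.1f} stops apart")
```

With these made-up numbers, the two subjects are about ten stops apart – comfortably visible to the eye, but a stretch for a single camera exposure.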
For the first picture, the moon was already sufficiently dark, i.e. the brightness of the moon and the landscape were within the dynamic range of the camera – hence, one shot.
However, for the second picture above I took three shots: one exposed for the foreground, one for the water, and one for the moon. Check out the three pictures below. They were taken within a few seconds, the camera mounted on a tripod, with the same aperture and ISO settings of course, but with exposure times of 1 s, 1/4 s, and 1/125 s.1 I then layered the photos, i.e. stacked one upon the next, and used masks to make the right portion of each picture visible. I am not (yet) good at this, but hey, one has to start somewhere. I think the resulting picture above looks acceptable. I struggled with bringing out some of the clouds you see in the first two photos, but could not find a solution that looked natural, mainly due to the shining halo around the moon,2 and my missing skills, I guess. In the end, I decided to forego the clouds for now. Anyway, if you only see the result, you wouldn’t even know there were clouds, right?!
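Out of curiosity, the spacing of those three shutter speeds can be expressed in stops (one stop being a doubling or halving of the exposure time; aperture and ISO were fixed). The little script below just does that arithmetic:

```python
import math

# The three shutter speeds used for the bracket, in seconds.
exposures = [1.0, 1 / 4, 1 / 125]

# Difference in stops between consecutive shots:
# one stop = a factor of two in exposure time.
for longer, shorter in zip(exposures, exposures[1:]):
    print(f"{longer} s -> {shorter} s: {math.log2(longer / shorter):.1f} stops")

# Total span of the bracket, brightest to darkest exposure.
total_span = math.log2(exposures[0] / exposures[-1])
print(f"total span: {total_span:.1f} stops")
```

The bracket spans just under seven stops in total – exactly the kind of range a single exposure struggles with.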
Yes, I did try to “pull out” the dark landscape in the third picture, i.e. the one where the moon is well-exposed, but to no avail. The camera simply did not register anything useful there that could be made visible without unbearable levels of noise.
Earlier that night, when the moon was still higher up, I even needed to use exposure times of 1/4000 or 1/3000 of a second to get a well-exposed picture of the moon. ↩︎
Compared to my memory of the scenery, the shining area around the moon wasn’t even there; it’s a camera artefact. The light on the clouds is there, though. I guess I could go into full-scale compositing mode and create a picture that better reflects my memory, masking each cloud and so on. ↩︎