The Good
NASA are in the process of updating the software on their Voyager spacecraft. At the time of this writing, Voyager 1 is more than 24 × 10⁹ km from Earth, while Voyager 2 is about 20 × 10⁹ km away. The light travel time to Voyager 1 is over 22 hours, to Voyager 2 it’s close to 19 hours – which is the time the signals for updating the software take as well. Both probes are well outside our solar system now, in “interstellar space”; for scale, Neptune is only about 4.5 × 10⁹ km away.
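(A quick sanity check: 24 × 10⁹ km divided by the speed of light, roughly 300,000 km/s, gives about 8 × 10⁴ s, i.e. just over 22 hours.)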
Both spacecraft were launched in 1977, that is, they are now more than 46 years into their journey. Recall the state of technology available in the seventies of the last century, and you can picture the kind of ancient on-board computers, processors, and instruments NASA are dealing with here.
The software update for the Attitude and Articulation Control Subsystem (AACS) is uploaded via a link with a command rate of 16 bits/s. That takes some time. The article says it took 18 hours to transmit the software patch. I don’t know whether NASA use several transmission stations around the globe, since Earth rotates during these 18 hours, and a single transmitter probably loses line of sight during that period.
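For scale: at 16 bits/s, 18 hours of transmission amounts to 16 × 18 × 3600 ≈ 1.04 × 10⁶ bits, i.e. on the order of 130 kilobytes for the whole patch.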
What always fascinates me about this kind of remote update is that there’s basically no margin for error. Engineering turned up to eleven. Effing up the update could mean losing control of the spacecraft, and with it the ability to patch the erroneous patch. There’s no one around to press a reset button.
The Bad
Speaking of well-tested software and procedures, here’s the counterexample.
From the article:
A survey of 500 developers (…) found more than two-thirds (67%) admitted they pushed code into a production environment without testing, with more than a quarter (28%) acknowledging they do so on a regular basis.
However, this takes the cake:
More troubling still, 60% also admitted using untested code generated by ChatGPT, with more than a quarter (26%) doing so regularly.
If you have read that ChatGPT or its ilk sometimes “hallucinate”, i.e. present falsehoods as facts, you need to realise that this is their basic modus operandi. Anything factual you ask of ChatGPT needs to be fact-checked against other sources. Why ask ChatGPT in the first place, then? I think this kind of machinery is great for tasks like translating texts, where no facts beyond those already in its input are needed in the process.
Some weeks back, I asked ChatGPT to “create a scheduler for coroutines in Oberon”. Admittedly not the easiest request, not least because Oberon does not provide native coroutines, but nothing you could not expect from a smart programmer. The proposals were embarrassing, to say the least. Even with a helping hand, the presented “solutions” were, well, not solutions. Not even close. And I didn’t even expect elegant code; just sort of working would have been fine.
For starters, any programmer would know to create an abstract data type for the coroutines. Also, this problem is platform dependent – another added challenge for ChatGPT – since each coroutine’s stack needs to be managed in memory, and thus the processor’s stack pointer must be updated upon a context switch. Depending on the platform, this can mean actually writing to a CPU register, which can easily be encapsulated within the abstract data type and its operations. So I would have expected ChatGPT to point out this platform dependency, and either tell me it has to give up, or ask for more information about the platform. Or present a solution with the platform-dependent bits left out, and a comment on what needs to go there – something like the sketch below.
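To make that concrete, here is a minimal sketch of the kind of skeleton I would have accepted. The module, type, and procedure names are my own invention, the stack size is an arbitrary assumption, and the platform-dependent parts are deliberately reduced to comments:

MODULE Coroutines;  (* hypothetical sketch, not a complete implementation *)

  CONST StackSize = 4096;  (* assumption: fixed stack size per coroutine *)

  TYPE
    Body* = PROCEDURE;  (* the procedure a coroutine executes *)
    Coroutine* = POINTER TO CoroutineDesc;
    CoroutineDesc* = RECORD
      stack: ARRAY StackSize OF BYTE;  (* private stack, managed in memory *)
      sp: INTEGER;   (* saved stack pointer of a suspended coroutine *)
      next: Coroutine  (* link for the scheduler’s ready queue *)
    END;

  PROCEDURE New*(body: Body; VAR c: Coroutine);
  BEGIN
    NEW(c);
    (* platform-dependent: initialise c.stack and c.sp so that the
       first transfer to c starts executing body on c’s own stack *)
  END New;

  PROCEDURE Transfer*(from, to: Coroutine);
  BEGIN
    (* platform-dependent: save the CPU stack pointer into from.sp,
       then load to.sp into the CPU stack pointer register,
       e.g. via the SYSTEM pseudo-module on a concrete target *)
  END Transfer;

END Coroutines.

The point is not the details, but the shape: the abstract data type confines the platform dependency to two clearly marked spots, which is exactly what I would expect a competent answer to do.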
Nope. Spewing out more bullshit was its approach. Which demonstrates that ChatGPT does not actually know anything, not even its boundaries and limitations. I know, I know, I probably should have known better. But an assistant that never says “I don’t know, I cannot help you”, and just continues to present wrong solutions or facts, is of little use to me.
If the above survey is any indication of today’s reality in companies’ programming and operations shops, I don’t feel at ease about the future of our increasingly complex systems that touch, or even handle, so many aspects of our lives, directly or indirectly. Perhaps we should not even mainly blame the software developers responsible for putting untested code into operation, whether handcrafted or generated by a ChatGPT-like system, but the time and cost pressure put on engineers by their companies. Good engineering takes thought and time.
Not that everything has to be tested to the point of being ready for transmission to a remote spacecraft outside our solar system, I get that, but come on.
As an aside, I hate ChatGPT’s continual apologising when you point out trivial and obvious errors. I don’t need that kind of assistant either.