
It’s 20 years since Y2K - have we learnt to be cautious about our reliance on technology?
Towards the end of the last century, computers became an essential tool, integrated into almost every aspect of human life. Financial companies used them to monitor credit fraud, keep track of customer accounts and make data-driven investment decisions; power plants came to depend on IT to monitor water pressure and radiation levels; even airports built them into flight scheduling systems.
Under the hood, in the early years of programming, the IT industry was making its mark as a niche, albeit growing, global community. Programmers were painstakingly setting up systems and asking some of the first questions about what computing could mean for the future. Evidently they weren't thinking far enough ahead, though, because those early programmers, the dedicated individuals who turned human instructions into the 0s and 1s a machine can execute, took a shortcut of sorts with how they handled dates. That shortcut would have scary implications for future computer users.
Threat or hysteria?
By convention, computers were designed to store dates using only the last two digits of the year to save memory space, dropping the ‘19’ at the start of the number and recording 06/15/98 rather than 06/15/1998. The shortcut saved space and sped up processing for any date in the 20th century, all the way up to 1999, but at what cost? Computers wouldn’t be able to handle the 21st; they’d be left with a confusing 00.
It made systems run more efficiently, but it lacked foresight into what would actually happen when the date rolled over from 1999 to the year 2000. Where would that ’00’ go?
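To see why that ’00’ was so troubling, here is a minimal sketch (illustrative only, not any specific system’s code) of how a program storing two-digit years ends up comparing dates backwards:

    # Hypothetical example: dates kept as (YY, MM, DD) tuples, as many legacy systems did.
    def parse_two_digit(mmddyy):
        """Parse MM/DD/YY, keeping only the two year digits."""
        month, day, yy = mmddyy.split("/")
        return (int(yy), int(month), int(day))  # sort key: year, then month, then day

    account_opened = parse_two_digit("06/15/98")  # intended: 1998
    payment_due    = parse_two_digit("01/01/00")  # intended: 2000

    # With only two digits, 00 sorts before 98, so the payment appears to fall
    # due almost a century before the account was even opened.
    print(payment_due < account_opened)  # prints True: the ordering runs backwards

Any calculation built on that ordering, from interest accrual to scheduling, would quietly produce nonsense.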
The problem was that nobody knew what was going to happen when billions of digital clocks struck 2000. The fear was that these date-based codes would warp whole systems, prompting some computers to malfunction and others to stop working altogether.
When businesses started to get wind of the potential threats to their IT systems, there was a flurry of activity to update the old code. Opinions varied. Some predicted mayhem and destruction that would cripple banks and governments when the feared moment arrived and the clocks struck midnight on January 1, 2000.
Plenty of people feared even worse. It soon seemed the whole world was anticipating the threat of Y2K. Hysteria set in and ominous predictions echoed across news stations. Ordinary people stocked up on food, water and even guns; self-proclaimed prophets and conspiracy theorists came out in full force, building emergency bunkers.
As 2000 neared, Y2K lawsuits were being filed and wilderness-survival bootcamps were becoming more popular. Then-president Bill Clinton gave a speech calling Y2K a “massive” threat to private and public organisations and “one of the most complex management challenges in history”. TIME magazine fed the hysteria with the somewhat ironic but provocative January 1999 headline “The End of the World!?!”
As the clock struck 00:00, the millennial fears proved unfounded: digital clocks quietly clicked over to the new millennium and the transition was seamless. Humanity emerged relatively unscathed, aside from the estimated billions spent on fixes.
A fix that was far from simple
Those in the know understood that fixing the issue was not going to be straightforward, and that easy fixes were simply not possible. In many cases the original source code (the instructions the programmer had written) was so old it was impractical to alter it in its original form. Bleaker still was the realisation that there were often no longer any working compilers (the programs that convert a programmer’s code into a form the machine can run) compatible with that code.
Even more complicated, whole programs and data layouts, including parts not directly dependent on dates, would need to be reformatted to make space where the two extra digits were added.
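Where expanding every stored date wasn’t practical, one widely used remediation technique was “windowing”: leaving the two-digit years in place but interpreting them relative to a pivot year. A minimal sketch, with the pivot value of 50 chosen here purely for illustration:

    PIVOT = 50  # assumed pivot: 00-49 are read as 2000-2049, 50-99 as 1950-1999

    def expand_year(yy):
        """Windowing: map a stored two-digit year to a four-digit year around the pivot."""
        return 2000 + yy if yy < PIVOT else 1900 + yy

    print(expand_year(98))  # 1998
    print(expand_year(0))   # 2000
    print(expand_year(49))  # 2049

Windowing bought time rather than solving the problem for good: a system patched this way simply moves its breaking point to the far edge of the window.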
A challenge to confidence in the industry
The Y2K crisis has left the IT community and outsiders alike wondering: was it all a beat-up?
Those outside the IT world doubted that Y2K was a real problem, and many on the fringes continued to insist it was all overhyped (perhaps similar to today’s conspiracy theorists accused of fuelling hysteria about global warming or spreading life-threatening misinformation about COVID-19).
For an industry that prides itself on its response to rapid change, that’s not exactly the sentiment you’d want associated with your work, right?
Asking the big questions
In the 1960s systems were built a single line of code at a time, and storage space was minimal and costly; a single kilobyte holds digital information equivalent to only two or three paragraphs of text. Fast forward to today’s layered code and complex software, and we have scalable cloud network-attached storage measured in exabytes, each a billion gigabytes.
Maybe the fact that we fixed Y2K in time could give us hope about the other global crises facing us today. Y2K shares features with other large-scale problems: it was big, it was expensive to fix, and it had the dangerous quality of inviting short-term thinking, because its worst effects would only be felt in the future.
It seems unlikely that those early programmers anticipated the extent of future computing power. Today, with cloud computing available to some 46 percent of the world’s population, maybe we stand a better chance.
Can we be optimistic about our responsiveness in the years to come? Will we be able to spot and mitigate future risks, and pivot in a way that was just not possible 20 years ago? The answers could involve some serious questions about what global, cloud-based computing systems can do, and what impact they will have on the lives of those to come.