AI And the End of the World (As We Know It)

Written by Jonathan Colby

   After thirty-five years in Information Technology, collecting a few certifications and a lot of experience along the way, I like to think of myself as someone on the technical side; suffice it to say I am not a trained writer. Still, as someone who has dedicated a lifetime to this field, I think I can offer some perspective on the pace of current technology developments. And while there are many facets of AI development we could examine, I’d like to focus on…the end of the world.

   Let me provide some historical context to frame the conversation.

   In the mid-1980s, I was an electrical engineering student at the University of Pittsburgh. Just across town at Carnegie Mellon University, researchers had built something called the Terregator—short for Terrestrial Navigator. It was one of the first vehicles designed to drive itself, and to my mind one of the earliest real attempts at autonomous navigation.

   Fast forward twenty years. In 2004, the DARPA Grand Challenge offered a million-dollar prize to any team whose vehicle could complete a 142-mile course through the Mojave Desert—no humans allowed. Fifteen teams tried. None finished. The farthest anyone made it was 7.32 miles.

   It might sound like failure, but it was a huge step forward. The lessons from that challenge led directly to the breakthroughs we see today. And now, in 2025, Teslas are driving down city streets while their owners scroll through playlists. You might be thinking, “Wait – what do self-driving cars have to do with the end of the world?” And the honest answer? Not a thing. But they do show how fast things can change and how what once seemed impossible can become completely ordinary in just a couple of decades.

   Artificial Intelligence (AI), Robotics, and Neural Interfaces are now advancing as quickly as self-driving cars once did. Examining their rapid development timeline may provide insight into what our future could hold.

Artificial Intelligence

   The story of AI really starts in 1950 with Alan Turing. He proposed the Turing Test—a thought experiment to see if a machine could exhibit intelligent behavior similar to that of a human. Six years later, John McCarthy organized the Dartmouth Conference, officially giving AI its name.

   By the 1960s, early programming languages like LISP (List Processing) and IPL (Information Processing Language) allowed researchers to experiment with symbolic reasoning. Projects like ELIZA (the first chatbot) and SHRDLU (an early natural-language understanding program) could carry on simple conversations and move virtual objects around a simulated world. It was clunky, but it was the beginning.

   Jump ahead to 2017, when a team of Google researchers published the seminal paper “Attention Is All You Need.” That paper introduced the Transformer model, which became the foundation for nearly all modern AI—including the large language models that power systems like ChatGPT. It’s hard to believe that it was less than a decade ago.

   Today, AI is no longer just theory; it is changing how we live and work, and IBM’s Watson is a good example. AI’s real superpower is its ability to dig through massive amounts of data iteratively, spotting patterns that humans might never see. Watson has been trained on healthcare data, and at specific tasks, from examining tissue samples for cancer to interpreting X-rays, it can rival and in some cases outperform experienced pathologists and radiologists. These are skills that normally take years of school, internships, residencies, and decades of hands-on practice, yet the machine performs them faster and, in many cases, with comparable or better accuracy.

Robotics

   Unimate, the first industrial robot, was developed in the 1950s and put to work on an assembly line in 1961. It was simple, just a mechanical arm doing repetitive factory work, but it changed manufacturing forever. That same basic design is still around today, though modern robots are faster, smarter, and far more precise. In 1973, Japan introduced WABOT-1, the first full-scale humanoid robot. It could walk, see, hear, and even hold short conversations. It was the first time machines really started to look and act a little like us.

   Then came ASIMO, Honda’s famous humanoid robot, in 2000. ASIMO could walk as well, but it could also climb stairs, recognize people, and even wave, however stiff and awkward it looked doing so. It ran for 40 minutes before its battery died, couldn’t handle uneven ground, and only understood preprogrammed phrases. Still, it captured imaginations because it represented possibility. Fast forward to today: Boston Dynamics’ Atlas can run, jump, flip, and navigate complex terrain almost like a human athlete. It’s incredibly agile, reacting in real time to obstacles, yet battery life, fine motor skills, and cost still keep it in the research lab. Atlas shows how far robotics has come, and how close we’re getting to something that moves and interacts the way we do.

Neural Interfaces

   Neural interfaces, also called brain-computer interfaces (BCIs), go back to the 1960s and ’70s, when scientists began exploring whether EEG (electroencephalogram) recordings of brain activity could be used to control external devices. In 1973, UCLA’s Jacques Vidal coined the term “brain-computer interface,” imagining a world where thoughts could control machines. By the late 1990s and early 2000s, researchers had proven it possible: monkeys moved robotic arms with their thoughts, and humans used brain signals to move cursors or type letters. Today, that’s no longer science fiction. The BrainGate system, built on Blackrock Neurotech’s implant technology, lets paralyzed patients move robotic arms or type using only their thoughts. Synchron’s Stentrode implant allows users to interact with smart home devices, and Neuralink aims to build high-speed connections between brain and computer. Even non-invasive versions can let people control prosthetics or interfaces without surgery.

   We are, quite literally, learning to think machines into action. It’s the beginning of a world where our thoughts and machine computations start to operate as a single system – what some call hybrid cognition.

The End of the World

   If you’ve guessed where this is heading, yes — it’s Terminator time. Sorry for spoiling a 41-year-old movie plot, but here goes. In some ways, the world imagined in The Terminator mirrors our own: war, famine, and pollution all contribute to what one might describe as a sick and decaying planet. In that fictional universe, an advanced intelligence called Skynet quickly determines that the real problem is… us. Cue global nuclear war and an army of killer robots hunting down the survivors. It’s a grim cautionary tale about machines turning on their creators and how our brilliant inventions can become a little too ambitious. And honestly, it’s hard not to see a reflection of ourselves in that story as we sprint toward faster, smarter, more capable systems.

   Do I think that’s where we’re headed? Not exactly. But I do think the next few decades will bring changes so profound that our world will be almost unrecognizable. Every major technological leap has reshaped civilization in ways we did not anticipate. We ushered in the Industrial Revolution and, along with it, centuries of carbon emissions. We created plastics that now blanket the oceans and circulate through the food chain. We’ve created “forever chemicals” like PFAS that persist in our soil and water. We split the atom to unlock electricity and simultaneously unlocked the capacity for absolute destruction. We developed pesticides that nearly erased entire species and refrigerants that punched a hole in the ozone layer.

“Our ingenuity is extraordinary — and yet our ability to foresee the consequences often lags far behind.”

   That’s why the rise of artificial intelligence demands more than curiosity or caution — it calls for reflection. AI is not merely another tool; it is a kind of mirror we are building, one that reflects our priorities, exposes our blind spots (and sometimes magnifies them), and reveals both our ambitions and our fears. The question is not whether AI will destroy us, but whether we will recognize ourselves in what we create, and whether we will choose to guide that creation with intention rather than inevitability. This story isn’t about killer robots. It’s about whether we learn faster than we invent.

   To borrow a line from R.E.M., “It’s the end of the world as we know it… and I feel fine (sort of).”

Acknowledgement: ChatGPT was used to support content refinement and visual design.