The typical discussion I have with millennials about AI usually involves some reference to the fact that they grew up with movies about AI domination run amok. The great films on that theme start with 2001: A Space Odyssey (1968), when HAL 9000 locks the crew out of the spaceship to save the mission. It’s the first real machine-over-man movie moment. The classic, though, is the whole Terminator line-up, starting with The Terminator (1984) and then T2: Judgment Day (1991), with Skynet as the existential threat.
As a concept, Skynet has become cultural shorthand for existential AI risk: a superintelligent system that develops its own goals, perceives humanity as a threat, and acts decisively to eliminate it. It’s the archetypal “misaligned AI” scenario. The Skynet scenario is taken seriously in AI safety circles as a stylized version of the control problem: how do you ensure a vastly more capable system remains aligned with human values and remains under human oversight? Meanwhile, that didn’t stop Elon Musk from launching Starlink, SpaceX’s satellite internet constellation that has over 6,000 satellites in low Earth orbit (LEO), making it by far the largest satellite constellation ever deployed. It provides broadband to over 100 countries, including remote and rural areas with no prior terrestrial internet access (even my beloved Lodge at Red River Ranch has Starlink and I’m using it this very moment).
Anyone who’s owned property in a remote or otherwise unconnected area not served by cable understands the value Starlink offers. No more complaining about HughesNet or struggling with Verizon over a DSL line (both of which I experienced in Ithaca). Now for $150/month you get access to the world…and perhaps more to the point, technology (and Elon) owns an increasingly meaningful piece of you. But that’s just the beginning…
Starlink is also used for more strategic purposes, as in Ukraine, where it has carried military and civilian communications since Russia’s invasion and demonstrated real geopolitical leverage (Musk’s occasional threats to restrict access have caused serious concern in NATO circles). The “Skynet” connection here is that a private individual controls a global communications infrastructure that militaries and even entire regions like the EU depend on, creating a novel geopolitical risk that analysts debate seriously and that starts to pit man (the collective…not Elon) against machine power (Elon=machine?).
What this also signals is that the “Human Bottleneck” is about to vanish. AI is poised to go beyond essential to our existence as it burrows into our lifestyle and our politics. So, as AI moves forward in leaps and bounds on an accelerating path, are we ready for the AI that builds itself? Until now we’ve treated AI like a high-powered hammer. We design it, we refine it, and we deploy it. But the hammer is starting to build a better hammer without us in the room. We are entering the era of Recursive Self-Improvement (RSI). This isn’t just “automation”; it’s the automation of innovation itself: the Intelligence Explosion era. We are used to linear growth in technology. RSI offers a feedback loop where AI improves itself, which makes it better at improving itself. We could see a decade’s worth of R&D compressed into a single weekend. In that world, how do we guarantee our values aren’t “lost in translation”?
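The difference between linear progress and an RSI-style feedback loop is easy to see with arithmetic. Here is a toy sketch (the numbers and functions are invented for illustration; this is not a model of any real AI system) contrasting fixed-increment improvement with improvement that compounds on itself:

```python
# Toy illustration only: all rates and starting values are made up.

def linear_progress(steps, rate=1.0):
    """Capability grows by a fixed increment each R&D cycle."""
    capability = 1.0
    for _ in range(steps):
        capability += rate
    return capability


def recursive_progress(steps, feedback=0.1):
    """Each cycle's improvement is proportional to current capability:
    the system's gains make it better at making further gains."""
    capability = 1.0
    for _ in range(steps):
        capability += feedback * capability  # compounding, not additive
    return capability


if __name__ == "__main__":
    for steps in (10, 50, 100):
        print(steps, round(linear_progress(steps), 1),
              round(recursive_progress(steps), 1))
```

Even with a modest 10% feedback per cycle, the compounding curve overtakes the linear one quickly and then leaves it behind entirely, which is the intuition behind “a decade’s worth of R&D compressed into a single weekend.”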
My friend Soumitra, who specializes in this kind of thinking, says we are approaching the Data Paradox: some fear “Model Collapse,” with AI feeding on its own output until it becomes an inbred hall of mirrors, while others see a path to “Ground Truth,” where AI simulates reality to find its own breakthroughs.
If the marginal cost of intelligence drops to the cost of electricity (perhaps at the expense of human comfort and the ecosystem), what happens to the value of “human expertise”? Is this the ultimate liberation of human potential, or are we witnessing the moment we become spectators in our own technological evolution?
Sounds a lot like several scenes in T2. If the AI juggernaut can build a version of itself that is 10% better than you could ever design, do you hit “Execute,” or do you pull the plug? These questions keep coming up with great regularity and increasing existential import. Do you remember the look on Miles Dyson’s face (the T2 engineer from Cyberdyne Systems) when he realized what he had created? It was just like the look on Robert Oppenheimer’s face when he went to see Harry Truman to discuss the Atom Bomb and the pending Hydrogen Bomb. These juggernauts do not stop, and while we’ve managed to survive the 80 years since Hiroshima, our little Iranian venture of late should remind us that the nuclear juggernaut is still rolling, just as AI will roll on despite the best efforts of regulators to slow it down or harness it.
We are at a moment. And it’s a Terminator Moment of truth. Thank goodness we have responsible, enlightened and empathic leaders at the helm…oh, shit…

