The last two films in the Mission: Impossible franchise, including the current release Mission: Impossible – The Final Reckoning, revolve around efforts to thwart a world-spanning artificial intelligence with malevolent intentions. Fortunately for humanity, AI is no match for our hero’s emphatic physicality; it may be able to move bits around the world, but Tom Cruise can run and jump and punch, then do more running.
If you’re creating an action movie that has to Raise the Stakes, an all-powerful sentient AI seems like a solid antagonist, both threatening and of the moment. Unfortunately, popular culture has trained us for over half a century — long before anything resembling AI existed — to think about AI through this model: the threat it poses is that it will gain sentience, then try to murder us. The best-known iteration of this story is Skynet, the AI defense system in the Terminator movies that initiates a nuclear holocaust, but the same plot has been presented to us in hundreds of films, TV shows, novels, short stories, comics, and almost any other medium of fiction you can think of.
Ironically, the fact that this narrative has been hammered into our heads so many times is working to the advantage of the Silicon Valley tech barons hoovering up billions of dollars to deliver us to their own version of the future. They say that if we give them enough money and enough power, they will protect us from the dystopia we have been taught to fear. But they are the malevolent actors in the real story.
Not the only ones, however. What they’re bringing us is not mushroom clouds and mass murder, but a widely distributed system of immiseration. We’re already seeing the first signs of what’s to come, and it’s going to get so much worse.
Before we get to what’s happening today, let’s take a look at the trailer for this largely forgotten film from 1970, called Colossus: The Forbin Project.
Colossus is an earlier version of Skynet, which would come to the screen 14 years later (no one ever accused James Cameron of being reluctant to sample from other people’s works — just like AI itself). If the word “colossus” rings a more contemporary bell, it may be because it’s the name Elon Musk gave to a facility his company xAI built in Memphis, which he claims is the world’s largest supercomputer.
The company is currently embroiled in a controversy over the potential environmental harms the facility will do to nearby majority-black communities. And what is Musk’s Colossus currently doing? Training Grok, its large language model, known most recently for weirdly obsessing over “white genocide.” A chip off the old block.
Whiz-banging us to the somewhat less violent dystopia
Last week, OpenAI chief Sam Altman and Jony Ive, the legendary industrial designer behind much of Apple’s most iconic hardware, announced a new partnership. OpenAI is buying Ive’s company, and the two will collaborate to create a physical device that will supposedly be world-changing; Altman says they’ll sell 100 million AI “companions” “faster than any company has ever shipped 100 million of something new before.”
But what is this not-yet-existing thing?
Altman and Ive offered a few hints about the secret project they have been working on. The product will be fully aware of a user’s surroundings and life, will be unobtrusive, able to rest in one’s pocket or on one’s desk, and will be a third core device a person would put on a desk after a MacBook Pro and an iPhone.
The Journal earlier reported that the device won’t be a phone, and that Ive and Altman’s intent is to help wean users from screens. Altman said that the device isn’t a pair of glasses, and that Ive had been skeptical about building something to wear on the body.
A thing that records your entire life? And presumably sends your every conversation, meal, out-loud musing, elevated heartbeat, and session on the toilet back to OpenAI servers for analysis? So intriguing!
It sounds like just as the iPhone was a souped-up BlackBerry, this would be a more sophisticated version of the AI wearables (wristbands, pendants) already on the market. Few people have bought them, because only a psychopath would want to wear a device that is “fully aware of a user’s surroundings and life” at all times. Victoria Song of The Verge wore one for a month; not only could it not distinguish between her own conversations, what she was watching or listening to, and stuff it just made up, but the whole process unsurprisingly made her uncomfortable about her every word being monitored, processed, and summarized. “I no longer spoke as freely as I used to,” she writes, and after the experiment was over, “I’ve never felt more gaslit.”
If you want to read an explanation of why everything they’re saying about the hypothetical future Altman/Ive device is complete bullshit, Ed Zitron will be happy to oblige. But there seems little doubt that this is the direction Silicon Valley wants to move: total surveillance of our lives, reducing us to an endless stream of data that can be processed by their AI systems, then used to sell us stuff and make us watch ads.
When they tell you that AI will be everywhere, they aren’t just offering hype, much as hype is central to the AI business model. I’m still agnostic about whether we’ll ever achieve true artificial general intelligence, a system that matches or exceeds human intelligence in every way. But the technology is advancing fast, and we can see where it’s going.
For instance, here’s an illuminating video from the Wall Street Journal. They used the latest commercial AI video creation systems to make a short film about three minutes long; the rest of the video explains what was involved. While their film is not perfect — if you look closely, you can tell where certain elements have that AI feel — it’s pretty darn close, and much closer than anything they could have made just a year or two ago. Perhaps most notable of all, they made this rather professional-looking film for about a thousand bucks.
The natural reaction upon seeing this is “Wow, that’s amazing.” But your next reaction ought to be “Oh no, this is really going to be abused.” Especially when you can make videos of whatever you want that look absolutely realistic, not for a thousand bucks but for a hundred bucks or ten bucks. Deepfake porn, child sexual abuse materials, fake political scandals, fake hostage videos, fake everything.
Middle schooler angry about getting a C on his algebra exam? What if he can come home and tell his AI video generator to create a perfectly realistic video of his math teacher being raped and murdered, then upload it to the world? Now imagine it happening a million times a day. What a fun and exciting future.
Don’t let the AI barons off the hook
Your online life is already being worsened not by AI itself, but by AI being used in cynical ways by the technology industry. Do you think that if all the brilliant engineers at Google with the company’s limitless resources (Alphabet, Google’s parent company, made $100 billion in profit in 2024 on $350 billion in revenue) wanted to make your search results less awful, they couldn’t do it? Of course they could. But as Cory Doctorow discusses in his terrific upcoming book Enshittification: Why Everything Suddenly Got Worse and What to Do About It, Google figured out that the worse their central product is, the more money they make. Now that you’re locked into their near-monopoly in search, if they make you wade through a bunch of AI slop, that means you’ll have to go through more search results to find what you’re looking for, which means they can show you more ads.
But it’s not just the big guys using AI against you. As it gets cheaper and more ubiquitous, it’ll be everybody, all the world’s worst people enabled to do even worse things far more easily than they can now. Scams, harassment, hoaxes, the relentless pursuit of a simulacrum of genuine human creativity and insight so the real thing can be hounded to the fringe — all of it will become more intense and more common.
From time to time the AI barons do talk about negative consequences, but it’s usually the big, dramatic ones, like Skynet, or the paperclip problem (AI is instructed to maximize the production of paperclips, so it enslaves us and devotes all the world’s resources to paperclip manufacture), or AI making most white-collar jobs obsolete within five years. You’ll notice, however, that the end point of this warning is invariably “And that’s why we must have billions and billions more dollars to build our systems,” so they can do it better than their competitors or better than China or better than whatever your fears are.
There’s no easy solution to this problem, but we can start by forgetting about Skynet and focusing attention on the more realistic ways AI will make our world worse (and is already doing so). And when the tech titans tell us their LLMs will soon cure cancer and free us from drudgery, we should be very skeptical. We’ve already seen what they do, and more of that isn’t necessarily good news.
Thank you for reading The Cross Section. This site has no paywall, so I depend on the generosity of readers to sustain the work I present here. If you find what you read valuable and would like it to continue, consider becoming a paid subscriber.