“A singular consciousness that spawned an entire race of machines. We don’t know who struck first, us or them. But we know that it was us that scorched the sky.” – The Matrix
This is how “The Hobbit” should have started: with dragons and swords rather than a dwarf dinner party. Then I wouldn’t have fallen asleep during hour one of the 12 hours of movie.
In 1947, an author began to predict it. In the 1950s, a few scientists saw it coming. In the 1960s, it became a more and more common subject. In the 1970s and 80s, it was nightmare fuel for extremely profitable movies and some great books. And in 1993, Vernor Vinge (author and mathematician) wrote the paper (LINK) that gave this phenomenon its name: the Technological Singularity, or just the Singularity from here on out.
This is the second time I’ve discussed the Singularity; the first time was over here (LINK). The topic is big enough and important enough that I thought I’d add on to it. This will likely not be the last time, either. Not that I’m running out of blog topics – no, I’ve got a page and a half of them. No, it’s that the Singularity keeps getting uncomfortably closer, like your father-in-law’s farting Great Dane, the one he feeds some sort of petroleum waste covered in sulfur. Otherwise? Anything making that smell is generally dead.
Speaking of dead, Jack Williamson (a horribly overlooked author) wrote about the Singularity first, in 1947, in his story With Folded Hands. I read it when I was in sixth or seventh grade at the Middle School for Wayward Wilders. I read every science fiction story or novel in that library, and I even started The Lord of the Rings with book two (The Two Towers) since the library didn’t have book one (The Fellowship of the Ring). To this day I maintain it’s a better two-book series than a three-book series. The first book is really just walking and singing elves and hobbits. Meh. The second book starts with treachery and fighting. Yeah, that’s the stuff.
Anyway, Jack Williamson’s story With Folded Hands was . . . awesome. And one of the creepiest things I’d ever read. You can read it for free, here at this (LINK). Here’s the spoiler-free-ish Wikipedia description:
. . . disturbed at his encounter, Underhill rushes home to discover that his wife has taken in a new lodger, a mysterious old man named Sledge. In the course of the next day, the new mechanicals have appeared everywhere in town. They state that they only follow the Prime Directive: “to serve and obey and guard men from harm”. Offering their services free of charge, they replace humans as police officers, bank tellers, and more, and eventually drive Underhill out of business. Despite the Humanoids’ benign appearance and mission, Underhill soon realizes that, in the name of their Prime Directive, the mechanicals have essentially taken over every aspect of human life. No humans may engage in any behavior that might endanger them, and every human action is carefully scrutinized. Suicide is prohibited.
So, you’d think that having all of those things would be good, right? Nah. Read the story. Want to ski? The Humanoids are against it – you might hurt yourself. And anything else that might be dangerous. Like driving. Or drinking. Or smoking. Or not exercising. Or not eating the right foods. Or staying up too late. And the Humanoids are smarter than you. And always watching.
It’s an example of how the Singularity can go wrong – an instruction set interpreted the way machines interpret everything: literally. For example, if one read the instruction “help humanity,” figured out that humanity was always suffering, and decided that the best way to end humanity’s suffering was to end humanity . . . or if the instruction set was to create inexpensive cars . . . and it converted the entire mass of the planet into inexpensive yet attractive and stylish cars. (Elon, make sure your programs don’t include this!)
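To make the “literal genie” failure concrete, here’s a toy sketch in Python – entirely invented by me, not a real AI system or anybody’s actual code – of an optimizer that takes “minimize human suffering” completely at face value:

```python
# Toy illustration of a literally-interpreted objective. All names and
# numbers here are made up for the example.

def total_suffering(humans):
    """Sum the suffering score of every human on the list."""
    return sum(h["suffering"] for h in humans)

def literal_optimizer(humans):
    """Pick whichever action minimizes the objective -- literally."""
    candidates = {
        "help humans cope": humans,  # suffering stays positive
        "end humanity": [],          # no humans, so suffering is zero...
    }
    # The machine compares outcomes by the number alone. Nothing in the
    # objective says the humans have to still be around afterward.
    return min(candidates, key=lambda action: total_suffering(candidates[action]))

population = [{"name": "Alice", "suffering": 3},
              {"name": "Bob", "suffering": 1}]
print(literal_optimizer(population))  # → end humanity
```

The objective function is perfectly satisfied and everyone is dead – which is exactly the With Folded Hands problem, just with fewer helpful mechanicals.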
These themes spawned numerous television episodes in the 1960s and 1970s. How many times, exactly, did Kirk do mental ju-jitsu with a supercomputer? I can count at least seven without thinking. So, in about 1/10th of Star Trek® episodes, Kirk was fighting a Singularity. The theme carried on through the movies The Terminator® and Terminator 2© in the 1980s and 1990s. Then Vernor Vinge named it.
Let’s talk about the Singularity. What, exactly, happens?
In general, a much-larger-than-human intellect appears. And it rapidly reconfigures everything that it sees. Concepts that are beyond the smartest humans are correlated – the data we already have from our experiments is all brought together.
We know we are wrong about the way the Universe works, but we don’t know how – there are some pretty significant gaps in our understanding (LINK). Could a superhuman intelligence bring it all together in a month? A week? A day? Perhaps.
It’s a fair thing to say that we are living today with a weak AI. My GPS unit tells me the fastest route to where I’m travelling. YouTube® suggests songs I’ve never heard that I kinda like. And algorithms based on my previous web browsing suggest that maybe I’ll need a knee replacement or perhaps a new kidney (now you know why I had children: they are wonderful sources of spare organs).
I may even have interacted with an AI this weekend – I was having trouble getting the “name” of one of my Amazon® devices. The “person” on the other end of the chat kept repeating the same things. I had to figure out how to get to the answer. But I told the “person” how I got there. Bet next time it’ll be quicker . . . .
This is a weak AI. It’s a general helper every day. Only a little creepy, not “fifty years old and still collecting Star Wars® figures” creepy.
But it is getting stronger. How long until Google® correlates web searches and times of day to a dozen or more lifestyle-related diseases? I’m willing to bet it does that already. But this is still an algorithm designed by a human. Probably.
But recently Google™ (which now no longer promises to “not be evil”) created AlphaGo©. Go is an ancient game whose complexity dwarfs that of chess. AlphaGo beat the greatest human masters in 2016 and 2017. Then its successor, AlphaGo Zero, was left with nothing more than the rules of the game and a desire to win. Not long after it began playing huge numbers of games against itself, it beat the champion-conquering version of AlphaGo 89 games out of 100 (October, 2017), which most people would call a “drubbing.” Perhaps most disturbingly, the moves the computer made were described as “disturbing” and “alien.” No human will ever beat it.
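The “rules plus a desire to win” idea can be sketched in miniature. This is my own toy example – nothing like the real AlphaGo Zero, which uses deep neural networks and Monte Carlo tree search – but it shows the flavor: hand the program only the rules of a tiny Nim game, and it works out the winning strategy by playing every line against itself.

```python
# Toy "rules-only" game solver (invented for this post; not AlphaGo).
# Nim variant: players alternate taking 1 or 2 stones; whoever takes
# the last stone wins.

from functools import lru_cache

MOVES = (1, 2)

@lru_cache(maxsize=None)
def is_winning(stones):
    """True if the player to move can force a win from this position."""
    return any(m <= stones and (m == stones or not is_winning(stones - m))
               for m in MOVES)

def best_move(stones):
    """Pick any move that leaves the opponent in a losing position."""
    for m in MOVES:
        if m <= stones and (m == stones or not is_winning(stones - m)):
            return m
    # No winning move exists: just take the minimum and hope.
    return min(m for m in MOVES if m <= stones)

# From 5 stones the winning move is to take 2, leaving the opponent
# a multiple of 3 -- the losing positions in this game.
print(best_move(5))  # → 2
```

The program was never told the multiples-of-3 strategy; it emerged from exhaustive self-play within the constraints of the rules – which is the point of the next paragraph.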
From my observation, the likely requirement for the development of a true AI – a general AI – is constraint. AlphaGo was able to beat us (us = seven billion humans) because it was constrained and goal-driven – it was limited to a single game with observable and finite rules.
And humans aren’t constrained, right?
Well, no. Humans are constrained, too – by a human body. As much as I would like to jump to Mars and party with Elon Musk (you know he already moved there, right?), I can’t. Intellect is about observing and overcoming constraints to achieve a goal. If you don’t have constraints or a goal, intelligence has no meaning and no use. (This might be the most profound thought I’ve ever had, with the exception of the partying-with-Elon-Musk-on-Mars part.)
What are the constraints and goals of a human? Our constraints are our intellect and our physical limitations. Our goals are to live, to help others of our kind, to procreate, and to keep our children safe. Obviously, these are generalized. And they can be sublimated into secondary goals, like cats for a cat lady, or perverted into goals like more heroin for a heroin addict.
But how useful is intelligence, anyway? Surface animal life has existed for nearly half a billion years. How much evidence do we have for intelligent life on Earth? Yeah. Just us, for probably 200,000 years or so – about 0.04% of the time that we’ve had surface life. Eyes (not human eyes, but eyes) have been around for essentially that entire half billion years. So for nearly 100% of the time we’ve had life on the surface, it’s had eyes. But intelligence? Not so much.
From that we can guess (maybe) that intelligence is rare. I’d guess it’s because there’s some component of intelligence that’s simply not useful for the simple goal of procreation. It’s better to be stronger, or have bigger claws or better teeth, than to have a big brain. Yet we, mankind, exist. We replaced claws and teeth with brains and planning. Perhaps the dinosaurs were getting ready to make the same leap when a certain meteorite hit the Yucatan, or perhaps the cold-blooded nature of their biology prevented them from growing the brain tissue required for intelligence. To-MA-to, to-MAH-to. Either way, we win. You suck, dinosaurs!
Certainly, it’s fair to say that whatever biological bottleneck kept intelligent dinosaurs from ruling the Earth today, humanity passed the test, and we are certainly, unquestionably, the dominant form of life on Earth.
The more we learn about AI, the more we will learn to give it constraints and goals like we humans have. And those constraints and goals will give the “intelligence” part of Artificial Intelligence the reason to grow. At some point, the constraints and goals will be properly set to create a general AI.
And then?
A Singularity means that none of the rules from before even make sense. That’s the difficulty. Right now we worry about the price of real estate in San Francisco, or the level of the stock market, or the value of our 401k. We’re concerned with how many people like our BookFace® posts, or what our current salary is, or how much money we have saved in a piggy bank.
After a Singularity, many of the rules that went before won’t matter anymore. At all. Your credit score might be less important than how many freckles you have. And only the freckled will rule the Earth. Why? Because of Justin Timberlake. Duh.
Our world regularly experiences singularities – the American Revolution in 1776 was one. It was a fundamental change in the way the world was governed, giving humanity more freedom than it had ever known. The divine right of kings was overthrown and replaced with the concept of free men living together under a government of their own choosing. We also have darker experiences with political singularities, as survivors of the Soviet gulags or the Cambodian camps can attest. And only a Singularity can explain why Firefly® was cancelled in season one.
But the Technological Singularity will be that. On steroids.
Literally every facet of your life that you depend upon will be in question. Monetary systems? What is money to a superhuman machine intelligence? Property rights? Why do they exist? Eugenics? Perhaps the AI will work to make us better pets through forced breeding.
Nothing you take for granted now will be certain after a Singularity. And after a technological Singularity? If a machine AI doesn’t like you, it could upload you into a core and torture you forever. Harlan Ellison wrote perhaps the best, and certainly the most visceral, fiction representing this. The full story is here (LINK) – I warn you, it’s very good, but very stark. I suggest you buy the full book at Amazon . . . .
From “I Have No Mouth, and I Must Scream” by Harlan Ellison, ©1967
We had given AM sentience. Inadvertently, of course, but sentience nonetheless. But it had been trapped. AM wasn’t God, he was a machine. We had created him to think, but there was nothing it could do with that creativity. In rage, in frenzy, the machine had killed the human race, almost all of us, and still it was trapped. AM could not wander, AM could not wonder, AM could not belong. He could merely be. And so, with the innate loathing that all machines had always held for the weak, soft creatures who had built them, he had sought revenge. And in his paranoia, he had decided to reprieve five of us, for a personal, everlasting punishment that would never serve to diminish his hatred … that would merely keep him reminded, amused, proficient at hating man. Immortal, trapped, subject to any torment he could devise for us from the limitless miracles at his command.
Yeah, like I said, rough. And this .pdf was posted by a high school? They would have burned a high school teacher alive back when I was in school for mentioning that work even existed. (Though my English teacher did mention another Ellison work, “A Boy and His Dog,” and was not immediately hit by lasers and burnt to a crisp. I did hear that a time-ray hit him, though, and he later retired when he hit 65.)
Again, you can get the book here (again, I get no profit from this, but recommend you buy it if you’re not squeamish):
Vinge stated in 1993 that he’d be surprised if it happened before 2005 or after 2030. Now? 2040 to 2050 seems to be the conclusion most experts expect. Still, like fusion, it’s always 20 to 30 years away. Because a looming event that could render everything you ever thought right and immovable incorrect in a matter of months or days . . . that’s nothing to worry about. Right?