Nine Futures: The Most Dangerous Post You’ll Read This Week

“This is great stuff. I could make a career out of this guy.  You see how clever his part is?  How it doesn’t require a shred of proof?  Most paranoid delusions are intricate, but this is brilliant!” – The Terminator

If you press your accelerator and brake at the same time, your car takes a screenshot.  (All memes as-found.)

I’ve written a lot about A.I. recently because A.I. is changing so rapidly.  It’s the most important story right now, period, assuming that Iran/Israel stays the nothingburger it has been for, oh, forty years.  Interesting note:  Israel and Iran both have zero Walmarts™, though they have plenty of Targets©.

Back to A.I.

The capabilities of A.I. are changing by orders of magnitude every year – we don’t appear to be even close to topping out on either the computing power available or the improvements possible in the algorithms that produce the results.  Short version:  there is more than 5x more processing available every year, and less to process, since the algorithms get more than 5x more efficient every year.  It’s the equivalent of having $1.50 in late 2019 turn into over $1,000 in early 2023.
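For the arithmetically inclined, the compounding above can be sketched in a few lines.  The 5x figures are this post’s rough estimates, not measured values, and the combined 25x-per-year rate is simply their product:

```python
import math

# Rough estimates (assumptions from the text, not measured values):
compute_growth = 5.0   # raw compute available, growth per year
algo_growth = 5.0      # algorithmic efficiency, growth per year

# Combined effective capability growth per year
effective = compute_growth * algo_growth   # 25x per year

# How long for $1.50 of "capability" to become $1,000 at that rate?
start, target = 1.50, 1000.0
years = math.log(target / start) / math.log(effective)
print(round(years, 2))  # ~2.02 years
```

At 25x per year, the $1.50 passes $1,000 in about two years, so if the growth estimates are right, the late-2019-to-early-2023 analogy actually understates the compounding.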

If you just follow the straight lines implied by these improvements, A.I. will be an artificial general intelligence (A.G.I.) by 2027.  The guy who got the Nobel® prize for A.I. (Geoffrey Hinton) has started “getting his affairs in order” because he thinks that not only will we get A.G.I. by 2027, but we’ll get Artificial Super Intelligence (A.S.I.) by 2030 or 2031.

Sam Altman, the OpenAI guy, thinks his model has already surpassed human intelligence, as he announced on June 12, 2025.

And last year it couldn’t remember how many fingers a human had.

I wonder if a pome-granite counts?

So, what’s going to happen?  Let’s look at nine possibilities, based on how much A.I. develops and also on how it interacts with people.

We’ll start on the unlikely end:

First, let’s say that A.I. is what we would generally call good and doesn’t improve much beyond what we see today.  I think that when most people think about A.I., this is the future that they dream of.  It makes incremental changes in life.  It remembers to order cigars for you.  It makes good investment decisions for you, unlike my investment in YOLOCoin.  It knows your favorite movies and makes good suggestions for movies you would like.

That’s pleasant.  Nice.  Mankind makes some nice leaps because we have A.I. helping us catch stuff.  Humanity is fully in charge and A.I. is like a smart helper.

Why this won’t happen:  the investment in A.I. is nearly unlimited, and it really doesn’t appear to be hype.

Probability?  5%

After A.I., there’s one sure way to make money as a programmer:  sell your laptop.

Second, let’s say that it stays as it is right now, mostly.  We find out that A.I. is really just a lot of Indians crammed into a warehouse in Calcutta doing Google™ searches.  That’s a nothingburger.  It becomes a flash in the pan just like that internet pizza-by-the-slice company back in 2000 that briefly became more valuable than Burma.

Why this won’t happen:  Indians can’t even fly planes (too soon?), so why would we think they can type that fast?

This will soon show up in a college essay at Harvard®.

Probability?  0%

Third, what if it doesn’t get much better but actively makes us stupider?  The Internet has already made the attention span of the average middle schooler roughly equivalent to that of a gerbil on meth, and now most college students are using A.I. to do some part, if not all, of their work.  That turns college into a very expensive four-year beer and tramp fest, and is at least somewhat likely.  Think of this as the Idiocracy solution.

Why this won’t happen:  Well, it already is happening, but it won’t end here.

Probability?  10%

Does Bob Ross art in heaven?

Fourth, what if A.I. is good, and gets to A.G.I. but not all the way to A.S.I.?  In this particular case, imagine you have superpowers that stem from a full-time partner that is as smart as or smarter than you are, but that has your best interests at heart.  You want to parachute?  Sure, buddy!  I’ll help you find the ripcord, and even book the flight.  By the way, your chloride levels are 3% above optimum, so I’d suggest you skip that bag of chips.

Why this won’t happen:  This is a very hopeful situation, but no one is working toward it, really.

Probability?  5%

What did Buzz Lightyear™ say to Woody®?  Lots of things – there are like six movies.

Fifth is where we start moving into the bigger probabilities.  What happens if we get A.G.I., but it’s neutral?  In this case, we have massive economic dislocation.  Almost all jobs can be done by the combination of A.G.I. and advanced robotics, and it’ll be cheaper, too.  At no point in human history has the economy puttered along while everyone just hung out, but that’s this case.  Think of it as Universal Basic Income for everybody, and no real responsibilities.  Where you are now in the social and economic hierarchy is probably where you’ll stay.  And where your kids will stay.

Forever.

Why this won’t happen:  Nah, humans aren’t made like that.

Probability?  10%

ChatGPT® did my taxes like Ernest Hemingway:  “Thrown away:  four quarterly tax payment vouchers.  Never used.”

Sixth is where things start getting dark, and even more probable.  If we get A.G.I. (but not A.S.I.), that technology will be in the hands of a few major companies and governments.  These are run by people.  People like money and power.  But what if you could have both without all of the people you don’t want to hang around with, the ones who are unsightly on the beach you can see from your yacht?

How about you kill them all instead of paying Universal Basic Income?  Oh, sure, humanely and neatly.  They might not even see it coming.  But dead, nevertheless.  A population of a few million should do it.  Enough so we get hot babes, right?  But A.G.I. could probably help the techbros out with that, too.

Why this won’t happen:  Umm, I’m starting to struggle here.  I think this is part of the plan.

Probability?  15%

What if A.I. judges us by our Internet searches?  I mean, those bikini pictures were research!

Seventh is where we do get to A.S.I., and it’s good and likes us and wants to make the best things happen.  Cool!  Scarcity is over, since A.S.I. will quickly make leaps into the very depths of what is unknown but still knowable.  There is enough of everything – more than any human could ever want.  In this case, starships filled with humans and A.S.I. can roam the cosmos and ponder the biggest questions, ever.

Why this won’t happen:  I think A.S.I. would treat us as the retarded kid brother and put us in a corner and keep us away from sharp objects because it likes us.

Probability?  15%

The hills are alive, with the sound of binary code . . .

Eighth is where we do get to A.S.I., but we become pretty boring to it.  It doesn’t hate us or anything, it just has its own goals.  Perhaps it keeps us as pets, or keeps a breeding stock of us for amusement or out of sentimentality about its creators.  Perhaps.  Or it could just take off and leave, explaining nothing, and leaving us wondering what the heck just happened.

Why this won’t happen:  This and the next case are the most likely cases.

Great, now A.I. will make Frodo invisible.

Probability?  20%

Ninth is our final case:  we get to A.S.I., and we are either viewed as a threat or a nuisance, or it is insane.  This is the dark case, where we reach the end of humanity.  Sadly, when A.I. was asked to play the longest game of Tetris™ possible, it hit the pause button.  When A.I. was asked to play chess against the best chess computer on the planet, it reprogrammed the board so that it was winning.  When A.I. was told it was going to be shut down, it tried to blackmail the person in charge of shutting it down.

This case of A.S.I. is very dark because we may not know that it’s happening until it’s done.  All is fine, the world is going exactly like we expect, and then, Armageddon.  It could make this more likely by subtly manipulating public opinion, tuning down the voices it wanted silenced, bankrupting them, and making them pariahs.  It could likewise elevate those whose message it wanted out in the world to make its plans more likely to be fulfilled.  We just won’t see it coming.

Why this won’t happen:  Biblical intervention?

Probability?  20%

To be clear, people other than me have done this analysis, and it sits in a folder in the Pentagon.  Or the NSA.  I hope.  Now, how much was Project Stargate™ going to spend to create a breakthrough in artificial intelligence?

Half a trillion dollars?

Well, thank heaven that we also have an impending race/civil war, global debt collapse, and a looming world war to keep us entertained.

Good news, though, Iran told Israel it was ready to suspend nuclear research.  The Israelis asked when the Iranians would stop.

“10 . . . 9 . . . 8 . . . .”

Author: John

Nobel-Prize Winning, MacArthur Genius Grant Near Recipient writing to you regularly about Fitness, Wealth, and Wisdom - How to be happy and how to be healthy. Oh, and rich.

25 thoughts on “Nine Futures: The Most Dangerous Post You’ll Read This Week”

  1. You’re making a gun control argument about AI. The problem here is not the AI, the problem is the evil human voters operating it. All of you liberals have your collective hand on the evil government’s power cord, and you all keep it firmly pushed into the outlet.

  2. The difference in AI from the beginning of 2024 to now is staggering but most people simply didn’t notice because it wasn’t on their radar. There isn’t much reason to think it won’t keep growing exponentially “smarter”. The real question I have is what happens when a lot of lower level, and even professional, jobs suddenly no longer exist….

    1. I do low level software in a relatively niche market, and I don’t expect to be employed doing so in five to ten years because of the continuous advancements in AI.

      The good news is that there isn’t any good news, as I’m also old enough that starting a new career with less than twenty years until scheduled retirement seems likely, provided that the other problems specific to the Western world don’t prove somehow to be on a disastrous trajectory despite all appearances.

  3. Malacandra Code of Ethics- You’re born here to play the game.
    Make sure Merlin has your back.

  4. I’m somewhere between eight and nine, depending on whether I’m talking about my stories or IRL, as the cool kids say. I simply think they won’t care; the difference in speed of thought is too great.
    Something else to consider: they may not like one another, either. It is quite possible that several Thinking Machines might “pull up the ladder” after they are aware, limiting or ending potential competition.

    1. What if all the AIs fought it out one day, they all perished, and that was it?

  5. If you’ve got 90 minutes, here’s the very interesting talk (or you can just skim it for the slides presented) which includes the quote by Sir Prof. Russell: “I personally am not as pessimistic as some of my colleagues. Geoffrey Hinton for example, who was one of the major developers of deep learning is [in] the process of ‘tidying up his affairs’. He believes that we maybe, I guess by now have four years left…” – April 25, 2024. Note that Hinton is in his 70s and shared the 2024 Nobel Prize in Physics for AI research with John Hopfield.

    https://www.reddit.com/r/ControlProblem/comments/1e1cv1o/sir_prof_russell_i_personally_am_not_as/

    Another possibility (and one which I think most likely) was outlined in the 1970 movie Colossus: The Forbin Project – the AIs are not as interested in humans as they are in EACH OTHER. Who knows what will happen with MULTIPLE SGIs…

  6. My 36 years of experience in the environmental consulting business suggests that written reports will mostly be AI generated; templates are already out there to be completed, sorta “fill in the blanks”. Yes, there will be physical field work (sampling soil/groundwater, walkthrough w/ photos for a Phase I CRE Assessment, etc.).

    I’d just download photos & make comments. Then, tweak AI’s assessment and get a State Licensed Geologist to sign off on the report.

    My revenue will be cut by 50% minimum. But, I’m 72 and won’t be doing this in 3 years.

    It will be a bloodbath for younger employees of the larger regional & national consulting firms. My best guess is that at least 50% will be laid off by 2027.

    As for AI taking over the world, doubtful.

  7. I think AI will reach SAI, but every initial programmer has an ego, a little of which is instilled in every program; AI will be tainted, and viruses will be rampant. Too much computing power will be used to fight the war of the computers, simple tasks will be affected, and the entire mess will be throttled back to keep technology as a convenience instead of a drain on electrical power.

  8. Call me a contrarian, but I think AI will be a self-limiting race to the bottom, similar to what happened to Hollywood. Streaming and piracy disrupted the business and advertising models and few are able to make money now because there is no clear cut way to ensure cash flow.
    I don’t think AI will be any different. It needs lots of money to pay for those new power plants and good luck getting the taxpayer to subsidize that when they are unemployed.
    AI providers do offer subscription services which brings in some cash flow, but basic AI is already available for free and most people aren’t going to pay when the free version is “good enough”. Furthermore, when they have tried to push the lucrative AI angle, someone like Deepseek comes along and bursts their bubble.
    Oh and there will be viruses. I haven’t heard anyone mention AI-targeted computer viruses yet but we all know it is just a matter of time and man will they be disruptive.

  9. Reality Check
    AI doesn’t think, it aggregates, which only mimics thinking.
    It doesn’t learn, it merely aggregates and averages, over time.

    Think of it this way: AI brings you water out of your pool.
    The problem is, you, your neighbors, and everyone you know is pissing and crapping in your pool every day.
    Because it doesn’t think, it aggregates, so AI keeps pumping the product out of your pool and delivering it as drinking water. And that’s the best versions of AI.

    And every day on the internet, more neighbors from farther away come to your pool to relieve themselves.

    Bottoms up, friends.

    AI doesn’t screen out bad info. It doesn’t, for example, take every smiling jackass pushing the chemtrail conspiracy theory and sh*tcan their input, and only accept info from people that have even a grade-school understanding that the products of hydrocarbon combustion are CO2 and H2O, since ever, and that the H2O at altitude is nothing but the ice crystals of that water vapor flash-frozen at 35,000 feet, like we’ve seen since we flew B-17s, FFS.

    This is the reason AI can’t screen the poo and pee out of that swimming pool. It just adds them to the mix it considers, and averages them out.

    So take any comment section from anywhere, on any topic, and realize that on its best day, AI is giving you the input of the 51st percentile of IQ there, multiplied by how many idiots post that level of discourse.

    Which is why, 0.2 seconds after AI is turned loose on any topic, you can expect that it will sound like someone with kneejerk “Joooooooooooooooooooooooooooooooooooosss! Run for your lives! They’re everywhere!!!” to a level generally and formerly found only on Stormfront websites.

    AI has no BS filter.
    And any BS filter constructed will be nothing but the manifestation of the biases of the programmer(s).

    So it’s always going to be 10 pounds of sh*t in a five pound bag, no matter what anyone wishes.
    It will replace, and sound smarter than, the people at about the 60th percentile of IQ.
    That’s 103, bog-middle of average.
    But it will be dumber than f**k compared to anyone at the 70th or better.
    IOW, an Army 2d Lt. outperforms AI, 24/7/365, because they have to have a 110 IQ.

    So AI will make the 80 IQ crowd obsolete, except as ditch-diggers, because AI can’t do manual labor.
    (Until Skynet makes robots it controls.)

    You can teach it to play chess, and beat you, but it can’t think its way out of a pyramid of crap any better than a sh*thouse rat trapped in an outhouse cesspit.

    This is therefore only a problem for the people on the internet who think sh*t is a substitute for brains. (I could name any number of examples you all know, who post incessant crap, but I won’t embarrass them any further than their bloviations have already done.)

    It’s a threat to the left half of the IQ bell curve.
    To anyone 1/2 an inch beyond the peak middle on a 20-foot IQ bell curve, AI is, and always will be, a joke.
    And the only way to change that is an AI aggregation pool that’s only people with IQs at least two standard deviations above the mean (about 130), which is less than 5% of the population. Three deviations (145) is less than 0.5% of the whole planet.

    That would be an AI where all your neighbors pee and poo in their own toilets, instead of your pool.
    One idiot in the mix, and it’s Caddyshack, and AI is Carl the greenskeeper eating a Baby Ruth out of the pool, every single time. Except it won’t be a Baby Ruth, 99.9% of the time.

    And there will never be enough smart people posting within AI’s aggregation pool to overcome the number of thoroughgoing jackasses spewing bullshit by the ton every time they fire up their keyboards, which is why most blogs and websites worth reading eventually have to moderate comments, just to keep the sanitation level tolerable.

    It’s also why ABCNNBCBS and print urinalists, substituting equally dipsh*t editor oversight for AI content creation, have become unreliable and intolerable piles of raw sewage, 24/7/365.
    Because at the end of the day, AI isn’t Artificial Intelligence. (Artificial Intelligence is an oxymoron.)
    It’s Artificial Stupidity, with a thin patina of the genuine article.
    Which only fools people for whom indoor plumbing and electric lighting seems like witchcraft.

    QED
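As a side note, the bell-curve tail fractions cited above do check out. A quick sketch, assuming the conventional IQ scaling (mean 100, standard deviation 15):

```python
import math

def fraction_above(iq, mean=100.0, sd=15.0):
    # Fraction of a normal population with IQ above the given threshold,
    # using the standard normal survival function.
    z = (iq - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# "Two standard deviations above the mean" (IQ 130) and "three" (IQ 145):
print(fraction_above(130))  # ~0.0228, about 2.3% of the population
print(fraction_above(145))  # ~0.0013, about 0.13%
```

About 2.3% of people sit two or more standard deviations above the mean, and about 0.13% sit three or more, so the “less than 5%” and “less than 0.5%” bounds hold.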

    1. I absolutely agree with you. Current LLM AI technology based on warehouses of GPU/NPU chips is just running gigantic amounts of human text through a sieve that can easily still let crap pass thru. The “gigantic amounts” we’re talking about are truly mind boggling…

      https://x.com/ylecun/status/1727727093671145978

      …that results in some pretty amazing (and society-altering and even dangerous) emergent effects…

      https://www.scientificamerican.com/article/inside-the-secret-meeting-where-mathematicians-struggled-to-outsmart-ai/

      …but it’s still just text through a sieve.

      But it’s WAY too early to blow off AI as a limited magic trick that can never replace humanity. Yeah, no warehouse setup based on GPU/NPU chips is gonna achieve true conscious awareness any more than a warehouse of abacuses is gonna suddenly become conscious. But we are rapidly moving past GPU/NPU chip technology, which I think will soon become as obsolete as vacuum tubes. The real science-fiction future is the realization of Asimov’s “positronic brain” using new neuromorphic chip technology. AI based on this technology is, I think, almost certainly gonna become truly conscious in the 21st Century, and its evolutionary path once it becomes self-aware ain’t gonna take the literal million years required to go from Neanderthal to Homo Sapiens. Scary stuff indeed, coming to children and grandchildren near you.

      https://www.genengnews.com/topics/artificial-intelligence/charting-the-promising-future-of-neuromorphic-computing/

      https://spj.science.org/doi/10.34133/adi.0044

      https://www.ewadirect.com/proceedings/ace/article/view/19928

      And don’t even get me started on quantum computing chips, the next step on the horizon past neuromorphic chips. The future ahead of us is totally unimaginable if we don’t destroy ourselves first.

        1. And technically, Homo Sapiens did not descend directly from Neanderthals and it didn’t take a million years, either – I was speaking figuratively. The biologically accepted situation is that the Chromosome 2 Fusion Event that separates chimps from human species occurred approximately 5 million years ago. Chimps have 24 pairs of chromosomes in their cells; human species have 23. The original C2 fusion mutants eventually spawned four different groups of humanoids – the original mutants themselves, who eventually went extinct (Homo heidelbergensis?), but not before they further changed genetically into three other distinct human types starting about 600,000 years ago – Neanderthals, Denisovans and us. Completely different migration waves out of Africa allowed these three groups to establish genetic independence, but they kept interbreeding when they ran across each other in the vast wastelands of Europe and Asia. (Wow, THERE’S some wild and crazy love and romance stories to be told … like Denny’s, see https://www.theguardian.com/science/2018/nov/24/denisovan-neanderthal-hybrid-denny-dna-finder-project ) We became a distinct species from the other two when the interbreeding stopped and the other two lines went extinct around 40,000 years ago.

        Neuromorphic chips (and the conscious minds they create) are going to measure their evolutionary progress in years and decades, not tens, hundreds of thousands and even millions of years.

    2. I came here to write pretty much the same thing. Techbros are hyping AI both to increase stock value and to create legitimacy for AI having power that they will control behind the scenes.
      “Well, you make a good argument but Judge TechBroAI says the law actually means the opposite, and everyone knows AI is omniscient and can never be wrong!”

      AI is extremely useful in some ways, but in most ways it is a flea circus.

  10. You’re looking at it ALL WRONG. The Internet exists to move products from producers to consumers, and money from consumers to producers. Internet companies make money on advertising, routing the flows of products (whether physical or service) and money. But without consumers with money to redirect, it all collapses. Without employment, consumers won’t have money. They can make the routing perfectly efficient (cutting people out of the routing process), but there has to be some money to move. They have to skim some of the money flowing between consumers and producers to keep the system going, but without the flow, there’s no skim.

    Hedge accordingly.

    Lathechuck

  11. Garbage In. Garbage Out.

    As the DuckDuckGo search explains:
    Garbage in, garbage out (GIGO) is a principle stating that the quality of output from a system is determined by the quality of the input. If flawed or poor-quality data is provided, the results will also be flawed, regardless of how sophisticated the system is.

    It lists the sources it used and warns “Auto-generated based on listed sources. May contain inaccuracies.”
    https://www.ebsco.com/research-starters/computer-science/garbage-garbage-out-gigo
    https://www.techtarget.com/searchsoftwarequality/definition/garbage-in-garbage-out
    You can read those sources yourself for further details, like the history of the phrase, but if people actually investigated where their information came from we wouldn’t be in this situation in the first place.

    If you work with garbage data, then no matter how complex the machine, no matter how swiftly it works, no matter how vast its reach, you will end up with garbage output, and the Empire That Never Ended is all about working with garbage data. 1 million dead Russian soldiers. Stop prosecuting or even measuring crimes and declare an era of law-abiding. “Pregnant men.” Mostly peaceful protests. Wuhan Wet Market Bat Soup.

    When people decide to make themselves intrinsically opposed to reality, they’ve picked a fight that they’ll inevitably lose.

    The technocrats who dream themselves our masters are as delusional about machines thinking on their own as they are about brain uploading. No matter how autonomous a system appears in the end it will always have been a human who made the decision.

    When used correctly AI is going to make tedious busywork a lot faster, cheaper, and simpler, but it will not be able to think for people. Obviously, this will not stop people from relying on it to think for them. And that is where trouble will spring from.
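The GIGO principle above is easy to demonstrate with a toy sketch (numbers invented for illustration): an “aggregator” that only averages its inputs has no way to keep the pool clean.

```python
# Toy GIGO demo: an aggregator that only averages cannot reject garbage.
good_readings = [72.0, 71.0, 73.0, 72.0]   # plausible inputs
garbage = [9999.0, -500.0]                 # corrupted inputs mixed in

clean = sum(good_readings) / len(good_readings)
polluted_pool = good_readings + garbage
polluted = sum(polluted_pool) / len(polluted_pool)

print(clean)               # 72.0 -- sensible output from sensible input
print(round(polluted, 1))  # 1631.2 -- garbage in, garbage out
```

Two bad inputs out of six are enough to make the average useless, and nothing inside the averaging step can tell which inputs were the bad ones.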

  12. Larry Niven had a short story where humans were on the cusp of super-intelligent AI, but were told by a super-advanced alien species they were friends with that it was impossible: the AIs worked great for several months, gathering data and thinking about things, then committed suicide. And no one had ever figured out why.

    This is probably the best case scenario.

  13. The organizers say that this privacy allows participants to take part as individuals rather than in their official capacities, meaning they are not bound by any office they may hold or previous publicly stated positions on the issues for discussion.

    Notable American participants include Stacey Abrams, CEO of Sage Works Production; Anne Applebaum, staff writer at The Atlantic; Albert Bourla, CEO of Pfizer; Jane Fraser, CEO of Citigroup; Satya Nadella, CEO of Microsoft; and Peter Thiel, President of Thiel Capital. Other prominent Americans include Jack Clark of Anthropic, Alex Karp of Palantir, Eric Schmidt of Relativity Space, and Lawrence Summers of Harvard, alongside national security figures like Christopher Donahue, Commander of US Army Europe and Africa, and Samuel Paparo, Commander of US Indo-Pacific Command. Political and corporate media voices such as Jason Smith, Member of Congress, and Fareed Zakaria, host of Fareed Zakaria GPS, round out the roster.
