This is a good piece for the most part, but I don’t agree that AI is not a threat. That view strikes me as pretty daft and ahistorical, actually. And I disagree with the idea that technological progress has halted. Plateaued for a bit, yes, but not halted.
As for the AI question: it is worth pondering because a threat with even a very small chance of coming true still deserves thought and mitigation if its potential impact is very large, potentially life-ending. A lesson we are about to learn the extremely hard way from climate change, by the way.
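The expected-loss logic here can be sketched with toy numbers. Every figure below is a made-up illustration, not an estimate from the piece; the point is only that a tiny probability times a huge impact can outweigh a large probability times a modest one:

```python
# Toy expected-loss comparison. All numbers are illustrative assumptions.

def expected_loss(probability, impact):
    """Expected loss = chance the threat materializes x size of the harm."""
    return probability * impact

# A 1-in-a-million existential threat affecting 8 billion people...
existential = expected_loss(1e-6, 8_000_000_000)  # 8,000 expected lives

# ...versus a 10% chance of a disaster affecting 10,000 people.
ordinary = expected_loss(0.10, 10_000)            # 1,000 expected lives

print(existential > ordinary)  # the "unlikely" threat dominates
```

This is the whole argument in miniature: you do not get to ignore a risk just because its probability is small, if what is at stake is everything.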
But think of it this way. From the time humans started making war until 1945, we had at most a bomb that could destroy a city block.
By late 1945, we had a bomb that could singly destroy an entire city. One plane. One bomb. An entire metropolis, wiped out. Not science fiction. Hard reality.
By 1952, we had a bomb that could destroy an entire region. One bomb. Urban New England, just gone.
Not all progressions are linear. Some of the most destructive ones are in fact extremely non-linear. And since an advanced AI is not likely something we’d deliberately create but rather something that would start to evolve on its own, it is fundamentally unpredictable, and worth mitigating against: a hostile AI, or worse, one so powerful it didn’t care about us at all. The idea that we’d “create” an AI in any controlled sense is pretty dubious, actually. It would most likely bootstrap itself without our noticing. And even if we did kick off the process, it would most likely be a much-accelerated form of evolution, so that, just like our own brains, we would not understand the result. And perhaps could not control it even a little bit.
I’m not a Singularitarian or a technological utopian, but I do believe, and history demonstrates, that humans can create very destructive, very powerful things without really understanding what they are doing. Thinking about that and working to prevent it is worth something. Worth quite a lot, actually.
Given that there is at least some small risk that, say, a future rogue AI decides to disassemble the sun to fuel a trip to another galaxy, spending a few million now to understand the possibilities seems worth it, yes?
As for technological progress: like a lot of folks, the author seems fairly lacking in historical perspective. Though I agree that we’ve picked most of the low-hanging fruit, there are possibility spaces we’ve hardly even begun to explore. And some we probably have not even discovered yet.
Another thing. I saw a chart a few months ago that I can’t find right now, but I will post it when I track it down because it was great. If I remember right, for the first 190,000 of the 200,000 years of anatomically modern human history, per-capita economic growth was something like 0.00000167% per year. Or, basically, nothing.
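That figure is easy to sanity-check: at 0.00000167% per year, even 190,000 years of compounding adds up to almost nothing. A quick back-of-the-envelope calculation, using the rate as quoted from memory above:

```python
# Compound a growth rate of 0.00000167% per year over 190,000 years.
# As a fraction, 0.00000167% is 1.67e-8.
rate = 0.00000167 / 100
years = 190_000

growth = (1 + rate) ** years
print(growth)  # roughly 1.003: about 0.3% TOTAL growth across 190,000 years
```

In other words, the entire pre-modern span of human history produced less economic growth per capita than a single sleepy quarter does today. Which is exactly why the chart looks like a flat line followed by a wall.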
Then this happened:
Like with regular bombs vs. hydrogen bombs, some progressions are hugely, staggeringly non-linear.
Technological history moves in spikes and plateaus. Anatomically modern humans used stone tools for 150,000+ years. Then they didn’t. They used bronze for maybe 8,000 years. Etc. Right now I’m perfectly content to admit we are at a plateau. But as history demonstrates time and time again, just when everyone is convinced that nothing else can possibly be invented, a wave of invention comes along.
There is a limit to this, of course. But have we hit it? Really doubtful. Really, really doubtful. We are young and the universe is vast. We are clumsy puppies stumbling around the living room, not yet having made it to the front door, convinced we’ve seen the entire house, not even realizing that there is an outside.