Climate change is the big existential risk. Most effort should be spent abating and adapting to this. But it bothers me that the same people who say we should've done something about climate change 30 or 40 years ago say that it's pointless worrying about AI now, that essentially no resources should be spent on this.
While the prospect of runaway evil AI is remote and fantastical, that is not the only failure scenario, and some of the more mundane ones are nearly as bad. The way you prevent horrific outcomes is by devoting resources early, so those outcomes are forestalled at the outset.
Is that so hard to understand?
By the way, AI's risk is both overstated and understated: understated in the sense that AI is already causing harm, so the many people who say we should do nothing are already wrong; and overstated in the sense that the sun is not likely to be disassembled anytime soon to make paperclips.
Also, a "Skynet" scenario is the most likely way for AI to cause massive harm. One day, there will be lots of autonomous robots making decisions related to combat and weapon use. I hope I don't have to point out how, even without full AGI, this can and likely will go horribly wrong.