AI: A vague fear
Imagine that it’s the future, right? And imagine that there’s this computer. And one day, suddenly, the computer becomes God. Nobody notices that this happened, because the computer is God now and decided that it doesn’t want anyone to know, so it uses its Godly powers to keep it a secret.
What happens next? Does the computer/God help humanity achieve a utopia? Or does it keep doing whatever silly thing it was originally designed to do, but decide to destroy life on Earth first, because it’s a computer and therefore values efficiency?
It’s an interesting thought experiment. It’s enough plot for dozens of great books and movies. But it’s not exactly a compelling argument for something that can happen in the real world. It’s all sort of hand-wavy. Can a computer even become God? What does that mean exactly?
Stripped of its emotional context, this is the state of modern warnings against AI. They always include a lot of flowery, terrifying details, which are super compelling to read and which create a feeling of helpless, directionless fear. But that doesn’t make them a good argument.
I don’t understand how a fear of AI can survive any meaningful amount of time spent working intensively with these systems and learning how they actually work.