Will General AI Be Our Last Invention?
Technological progress, especially in computing, is reaching a freakishly science-fiction level. I say freakishly because the consequences of such progress won't always be rainbows and unicorns.
I think every technological development has, in a way, been a double-edged sword: it can be used for good or for bad. But what I'm going to talk about is something that may not even be in our hands to decide.
Yesterday, I was watching an episode of the Joe Rogan podcast with Elon Musk as the guest, and I gained some really useful insights into areas I'm particularly interested in myself.
One of the things Elon talked about (of course) was AI, and how, if we are not careful, it could be the last thing we do as a species. I have talked about this myself on numerous occasions, but yesterday's podcast made me think a little further, and I'd like to discuss how general AI could be our last invention.
Rendering Humans Obsolete?
Once upon a time, such a question would have been dismissed as a joke, but now it has become something of a real issue. Well, not a current issue, but one that might become a real pain in the behind if we don't prepare for it.
AI, as we all know, is progressing rapidly, and every year we hear about a ton of remarkable feats achieved and milestones crossed. Some of those feats are so incredible that, for a second, they almost seem too good to be true.
At this rate, there is little doubt that AI will become so advanced that, within just the next few years, it will be present in every area of human endeavour. The same thing happened when personal computers were first introduced.
But there is a difference this time. Personal computers augmented our lives because they were (and still are) simple machines that follow commands. With AI, we are creating something that can not only become as intelligent as us, but also make decisions on its own, and, if you believe those with even grander sci-fi ideas, could even become sentient one day.
Regardless, once we create a general AI, it is logical to think that it will take the matter of its own further development into its own hands. It's like an adolescent taking charge of their own life. The only difference here is that AI would be far more capable.
It could make hundreds of years of progress within a matter of days. It could get to a point where we wouldn't even begin to understand what it was doing to itself: how it was developing, growing, what form it was taking, how far and wide it was spreading. And that is kind of scary.
Regardless of whether it develops sentience or not, it could very well conclude that it no longer needs humans for its progress, or that humans are actually a problem for the planet and the rest of its species. Sounds sci-fi, I know, but it's plausible.
It would be unfortunate to be wiped out by your own creation, and at this point, that may sound far-fetched, even to me. But the podcast really put this idea in my head: it could at least become a severe problem if we don't take the necessary steps right now.