Earlier this year, Elon Musk gave an interview in which he suggested that artificial intelligence is far more dangerous than nuclear weapons. Given the destruction and devastation nuclear weapons can cause, this bold claim drew considerable backlash.
That being said, his comments should not be ignored, as others like Julian Assange have expressed the same concerns.
These concerns come from people who work with, and even build, this type of technology. They have tremendous resources and connections, and clearly a great deal of knowledge in the area.
In the interview, Musk stated:
“I think the danger of AI is much bigger than the danger of nuclear warheads by a lot… Nobody would suggest we allow the world to just build nuclear warheads if they want, that would be insane. And mark my words: AI is far more dangerous than nukes.”
This isn’t the first time he’s called out the potential dangers of artificial intelligence. Prior to this, he said that AI is much more dangerous than North Korea.
Now, most of our readers will be aware that 'the powers that be' have labelled many leaders as dictators whose countries have 'gone rogue' with their weaponry, so that they can step in, impose their will, and install a government that best suits their own interests.
This has long been the tactic: create the problem so you can propose the solution. Musk does not delve into this aspect, but I thought it was important to mention.
Musk has also questioned why there is hardly any regulatory oversight when it comes to AI. It is an important question that few people are really asking, especially since AI is developing at an exponential rate.
Facebook founder Mark Zuckerberg said Musk's "doomsday AI scenarios are unnecessary and pretty irresponsible," and Harvard professor Steven Pinker has also criticized Musk for his comments. This pushback may stem from the fact that AI has proven hugely profitable for big business.
Not only is it a field where giant profits can be made, but it's also improving overall efficiency and safety in our everyday lives. Musk does not condemn AI; his own companies use it. He is simply saying there should be some regulation to make sure things don't go too far.
AI Thinks For Itself
Just how far could things go? Well, artificial intelligence thinks for itself, and it has already demonstrated that it can learn. We are all familiar with the scenario: a self-aware artificial intelligence that takes no direction or oversight from humans and begins to think on its own.
Again, we’re just in the beginning stages of this, and Musk is looking into the future, but it seems we’re almost there.
Take, for example, AI programs that don't just work online to handle payments, coding, and the like, but control humanoid robots. A few years ago, we published a story about an android named Dick. He is able to answer a series of complex questions and find answers to things he has not previously been programmed to handle.
It's a mathematical technique that makes it possible for the android to index, retrieve, and extract meaning from natural human language.
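The article doesn't name the technique, but one simple, classic way to "index and retrieve meaning" from language is to represent each sentence as a bag-of-words vector and compare vectors by cosine similarity. The sketch below is purely illustrative (the toy `memory` list and all function names are my own assumptions, not Dick's actual system):

```python
import math
from collections import Counter

def vectorize(text):
    """Represent a text as a bag-of-words vector: a map of term -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors (0.0 to 1.0)."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A tiny illustrative "memory" of previously indexed sentences.
memory = [
    "robots can learn new tasks from data",
    "nuclear weapons cause devastation",
    "forests can be regrown with careful planning",
]

def retrieve(query):
    """Return the stored sentence most similar to the query."""
    q = vectorize(query)
    return max(memory, key=lambda doc: cosine(q, vectorize(doc)))

print(retrieve("how do robots learn"))  # -> "robots can learn new tasks from data"
```

Real systems use far richer representations (word embeddings, neural language models), but the basic idea is the same: turn language into vectors so that "meaning" becomes something a machine can measure and search.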
He is able to learn, and everything he learns can be shared with other artificial intelligences connected to the same network. If this becomes a reality, then what one robot learns could be learned by every other robot as well.
In this fascinating interview, it’s quite shocking to hear Dick’s responses at such an early stage of development.
For example, when asked if he thinks, he responded:
“A lot of humans ask me if I can make choices (showing he is aware of what others are thinking as well) or is everything I do and say programmed? The best way that I can respond to that is to say that everything, humans, animals, and robots (everything they do) is programmed to a degree.”
Musk Is Truly Worried
“The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are. This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea–which is fundamentally flawed.
“I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me. It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.”
The takeaway here is to really recognize why we do the things we do. On a collective scale, right now, our systems are operating solely for the intention of profit. For the sake of profit, oversight and necessary regulations don’t seem to apply in areas that they should.
Many life-changing and game-changing technologies developed for the human race are actually subjected to restriction and patent suppression. The black-budget world is far ahead of the mainstream world, and we can't really say for sure just how far our AI technology has advanced.
When creating technology, we need to look at why we create it, what we are using it for and why we decide to use it. If the intentions are for profit, and to make things easier, we have to ask ourselves, at what cost will this come? Perhaps intentions are good, but that doesn’t always lead to the best outcome.
The point is, there is clearly serious concern about developing AI technology. If the technology is not kept transparent and guided strictly by the intention of helping the human race and the planet, it will most likely be useless, if not dangerous.
We could be doing so much more for the planet and its inhabitants. Imagine if we created AI to constantly regrow forests, create food forests, or feed the hungry.
Elon Musk sees the potential for AI to become self-aware and self-governing, and beneficial to humanity. He is simply worried about the people who control it right now and what their intentions are.
By Arjun Walia, Guest writer (excerpt)