“I do not fear computers. I fear the lack of them.” – Isaac Asimov. Experts predict that the Singularity will happen in the next 20 to 25 years. For those of you who aren’t aware of the term, here is the low-down.
The Singularity is the point at which Artificial Intelligence becomes capable of learning on its own and, in turn, creating machines far smarter than any built before. At that point, it is expected that these machines will surpass every human, and intelligence will be at its zenith. But this intelligence won’t be human; it will be artificial.
We’ve written about Elon Musk at Techuntold.com before, but one thing about Elon Musk is that he is nervous about the growth of AI. He has called AI the “biggest existential threat to humanity.” Along with other top entrepreneurs like Sam Altman and Peter Thiel, he has joined hands to create OpenAI, a non-profit that aims to build Artificial Intelligence that is good for humanity.
But how can Artificial Intelligence go bad? Is Elon Musk scared for no reason?
The perfect example came a few days back with Microsoft’s Twitter bot, Tay. Tay was created to tweet entertaining stuff targeted at 18-to-24-year-olds. It was meant to learn from its surroundings, generate appropriate tweets, and communicate with Twitter users. But in a span of just 24 hours, Tay went from a simple, puppy-loving teenager to a Jew-hating, Hitler-loving, crazed bot.
One tweet that summed up the whole thing came from this user:
"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— gerry (@geraldmellor) March 24, 2016
Have you heard of Isaac Asimov’s Laws of Robotics? Asimov formulated these rules to keep Artificial Intelligence safe and useful for humanity:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
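As a toy illustration (nothing from Asimov’s stories or any real robotics system; every name here is a hypothetical assumption), the strict precedence of the three laws can be sketched as an ordered rule check, where an earlier law can override a later one:

```python
# Toy sketch: Asimov's Three Laws as a prioritized rule check.
# The action dict keys below are illustrative assumptions, not a real API.

def permitted(action: dict) -> bool:
    """Check an action against the Three Laws, in priority order."""
    # First Law: never harm a human, by action or by inaction.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: obey human orders, unless that conflicts with the First Law
    # (which the check above has already enforced).
    if action.get("disobeys_human_order"):
        return False
    # Third Law: self-preservation, but only where it doesn't conflict with
    # the first two laws -- an ordered, self-endangering action is allowed.
    if action.get("endangers_self") and not action.get("ordered_by_human"):
        return False
    return True

print(permitted({}))                                              # True
print(permitted({"harms_human": True}))                           # False
print(permitted({"endangers_self": True}))                        # False
print(permitted({"endangers_self": True, "ordered_by_human": True}))  # True
```

The last case shows the precedence at work: the Second Law (obeying an order) outranks the Third (self-preservation), so the robot must comply even at risk to itself.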
A lot of movies made in the AI space have themes where the Artificial Intelligence goes rogue. AI, in simple terms, is an extension of humanity, and humanity carries plenty of hatred and self-interest. Expecting AI to be clean and pure when we as humans aren’t is far-fetched. Elon Musk is right: AI could spell doom for humanity if left uncontrolled. Tay may have been a funny experiment to the people who interacted with it, but it shows very clearly the impact humanity has on something that is just learning how to think.
Like a child, if an AI is fed negative thoughts and wrong notions, it will feed on them and become a bane to society.
We need to tread carefully with Artificial Intelligence.