Sunday, March 22, 2015

How to Survive the Pending AI Apocalypse

"Success in creating A.I. would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." -Stephen Hawking

According to some of the best minds of today, we're doomed. A.I., or artificial intelligence, will someday overtake humans and wipe us out. Pretty sure we all saw that movie, right? Right.

From Stephen Hawking to Elon Musk, and many other super smart people, the current thinking in the thinking business is that we're proper fucked when it comes to thinking computers.

We all know how it works. Humans invent a machine that becomes sentient and can improve itself. Machine begins thinking for itself and innovates so much faster than any human that it no longer needs humans. Humans either a) become enslaved, b) get wiped out for being unnecessary, or c) get sent back in time to stop killer A.I. robots that look like ex-bodybuilder, ex-California governors.

How do we avoid becoming meat toys for some super brain computer?

Easy.

Method One: Don't build it. This is actually a lot simpler than it sounds. It just requires a hardcore government and societal crackdown on AI research. See also, embryonic stem cell research and human cloning.

That wouldn't be any fun at all, though. While embryonic stem cell research is unnecessary (we can most likely get the same results with regular stem cells), and human cloning is a wee bit creepy (except in twisted sex fantasies, of course), AI is freaking cool. It opens up an assload of possibilities and human potential.

Method Two: Just stay ahead of the computer. We continue to augment human potential using the very machines all these very smart people are afraid of. If a person, augmented by machines, is just as capable as the machine by itself, then we're on even ground. I'll take those odds.

"I think there's things that are potentially dangerous out there. ...There's been movies about this, like 'Terminator. There's some scary outcomes and we should try to make sure the outcomes are good, not bad." -Elon Musk

Ultimately, being able to transfer human consciousness into a computer will take care of the problem of rogue AI completely. Unleash the full potential of humanity, operating at the speed of thought, instantly, all at once.

AI is cool, but it's a sideshow. The interesting thinking actually arises out of contemplating Method Two. As a proponent of augmentation, I don't really see the problem as technological in nature. We can certainly do much more with the tech we have to enhance and augment the human body and brain.

I don't have any particular moral issues with the concept. No more than I have with using a hammer instead of my forehead to drive a nail. Using a hammer to drive a nail into someone else's forehead is another matter entirely. The morality is pretty clear cut: it's about the use, not the tool.

You can argue there's a bit of a spiritual issue here. If, that is, you assume the body is required for the spirit. I don't. Hook me up!

"... the fact is, our 'smartest' AI is about as intelligent as a toddler—and only when it comes to instrumental tasks like information recall. Most roboticists are still trying to get a robot hand to pick up a ball or run around without falling over, not putting the finishing touches on Skynet." -Yann LeCun, Facebook AI Research

Luckily, all those smart minds are thinking about it. It's important not to just read the headlines. In every case, from Hawking to Musk, they qualify statements of caution with a rosier outlook on the whole thing. AI is a tool. Good or bad, we choose.

Don't get me wrong. We're not ready to be completely computerized, yet. Not by a long shot. Can you imagine Facebook when everyone can post every thought instantly? The horror. But as long as we stay ahead of the machine, we'll be good.

No worries.

Easy enough.

Sort of.

-CDE
