
Maybe Black Box AI Isn’t So Black After All

November 10, 2017 by "Penguin" Pete Trbovich

Nietzsche warned us about this. He said, “When you gaze long into the black box deep learning AI algorithm, the black box deep learning AI algorithm will also gaze into you.” Close enough to a quote for the Internet, anyway.

“Black Box” AI is this year’s hot TEDx buzzword. We’ve gone from trying to build intelligent systems by hook or by crook to just wiring up a neural net and letting it learn on its own. Deep Learning AI systems have been showing some impressive results, cracking everything from the game of Go to learning how to walk:

[Embedded video: a Google DeepMind agent teaching itself to walk]

Look, that’s a very natural walk for someone who’s jogging while fighting off a swarm of bees while also playing the trombone. But we’re not too sure how far we can trust Black Box learning methods. What we’re really doing is giving the computer a goal and letting it make up its own mind how to accomplish it, through random experimentation. For instance, government agencies using Black Box AI in the court system have come under criticism because nobody fully understands how those systems arrive at their decisions.

Before we get too fuzzy about this problem, let’s explain it with a simple example:

ELI5: Black Box Deep Learning

OK, we’re going to build a chatterbot that can carry on a conversation. We’ve decided to teach it the general rules of grammar and the parts of speech, then turn it loose in a chat room. Our subjects, humans paid to “educate” our AI system, will respond to sentences our chatterbot utters by rating them correct or incorrect, based on how much sense each sentence makes.

To get our AI started, we give it a sample sentence:

* Bob ran through the park.

We tag the parts of speech and map the sentence to one of those sentence diagrams we all fell asleep over in school. So now the system knows the parts of the example sentence:

* Bob (subject) ran (predicate) through (preposition) the park (object of the preposition).

* Bob (subject) ran (predicate) to (?) the park (object of the preposition).

In the second sentence, we’re letting the computer discover that “to” is also a preposition that works in this sentence. Later on, we show it sentences like:

* Alice ran out of the park.

* Alice ran to the bridge.

* Alice ran under the bridge.

When the system guesses something like “Alice bridge under the ran,” we correct it, hoping it comes to understand that “bridge” can never be a predicate and “ran” can never be a noun or object.
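To make this loop concrete, here’s a minimal Python sketch of the kind of learner we’re describing, both the role-learning and the sentence-building halves. It’s entirely invented for illustration; every name in it is hypothetical, and no real system is this simple.

```python
import random
from collections import defaultdict

# roles_for[word] holds every grammatical role we've seen the word fill
# in a sentence the testers accepted.
roles_for = defaultdict(set)

def learn(tagged):
    """Record each word/role pairing from a correctly tagged sentence."""
    for word, role in tagged:
        roles_for[word].add(role)

# Seed examples, tagged the way we tagged them above.
learn([("Bob", "subject"), ("ran", "predicate"),
       ("through", "preposition"), ("park", "object")])
learn([("Alice", "subject"), ("ran", "predicate"),
       ("to", "preposition"), ("bridge", "object")])
learn([("Alice", "subject"), ("ran", "predicate"),
       ("under", "preposition"), ("bridge", "object")])
learn([("Alice", "subject"), ("ran", "predicate"),
       ("out of", "preposition"), ("park", "object")])

def words_with(role):
    """Every word the system believes can fill the given role."""
    return [w for w, roles in roles_for.items() if role in roles]

def generate():
    """Fill a fixed sentence frame with learned words, chosen at random."""
    return "{} {} {} the {}.".format(
        random.choice(words_with("subject")),
        random.choice(words_with("predicate")),
        random.choice(words_with("preposition")),
        random.choice(words_with("object")))

print(generate())  # sooner or later: "Bob ran under the park."
```

Every sentence this frame produces is grammatical by the rules it has learned; whether it’s logical is another matter entirely.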

At this point, we let the program sort out the parts of speech by inference, then ask it to construct sentences for the testers. The first sentence the tester sees is:

* Bob ran under the park.

Now the AI is doing what all Black Box systems seem to veer towards: applying perfectly reasonable rules to come up with an unexpected result. Here’s where we make our mistake: the tester marks this sentence wrong because it’s not logical to run under a park. We’re blindly hoping that the system learns that “under” should never be applied to “the park,” even though it’s fine to speak of running under a bridge. But what if, instead, the system learns a false rule? What if it assumes the problem is in the subject, so that Bob can’t run under anything but Alice can? The system would blindly go on assuming there are special rules for how Bob can run, and it would never unlearn the false rule simply because, through random chance, it never makes the same category of mistake again.
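Continuing the hypothetical sketch from above, the trouble lives in the blame-assignment step: nothing tells the learner which word pairing caused the rejection, so it has to guess.

```python
import random

# Pairings the learner has decided are forbidden.
banned_pairs = set()

def punish(words):
    """React to a 'wrong' verdict by banning one adjacent word pair.
    The learner guesses which pair is at fault -- nothing guarantees
    it guesses the pair we actually meant."""
    pairs = list(zip(words, words[1:]))
    banned_pairs.add(random.choice(pairs))

punish(["Bob", "ran", "under", "park"])
print(banned_pairs)
# If it bans ("under", "park"), it learned the rule we hoped for.
# If it bans ("Bob", "ran") or ("ran", "under"), it has learned a false
# rule about Bob -- and since its future Bob-sentences will now avoid
# that pairing, the mistake may never resurface to be corrected.
```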

That’s what we’re doing with Deep Learning AI; we’re letting the computer solve problems without “showing its work.” And this leads to trouble when we encounter a case the developers never intended. Brace yourself for this classic picture:

[Image: a face-swap app pastes the user’s face onto a hand instead of the Hulk’s face]

This user tries to swap faces with the Incredible Hulk, but the software has evidently scanned the image, concluded “nope, faces are never green,” and moved on to the next closest suspect: the hand. The algorithm concludes, “well, it’s shaped funny for a face, but at least it’s the right color, so we’ll go with that.”
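We have no idea what the app really does under the hood, but a crude color heuristic along these lines would produce exactly this failure. All the numbers and names below are made up for the sketch:

```python
# A toy "find the face" heuristic: score candidate regions by how close
# their average color is to an assumed flesh tone. Invented for
# illustration; this is surely not the real app's algorithm.
SKIN_RGB = (205, 160, 140)  # an assumed average flesh tone

def skin_score(avg_rgb):
    """Higher is more skin-like: negated squared distance to SKIN_RGB."""
    return -sum((c - s) ** 2 for c, s in zip(avg_rgb, SKIN_RGB))

# Made-up average colors for two candidate regions in the photo.
candidates = {
    "Hulk's face": (90, 170, 80),    # green, face-shaped
    "user's hand": (190, 150, 130),  # flesh-colored, wrong shape
}

best = max(candidates, key=lambda region: skin_score(candidates[region]))
print(best)  # "user's hand" -- the right color beats the right shape
```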

Humans Have The Same Bugs!

This is the part where the Black Box looks into you: humans are prone to the same kinds of faults all the time! We see it starting in early childhood development, with the lack of object permanence. As infants, we assume that just because we can’t see something, it’s gone. It’s easy to see why we’d make that mistake at first: we eat food and then we see there’s no more food, so it’s “gone.” It takes time for us to grasp, by about age two or so, that even if we can’t see something, it still exists somewhere in physical space.

The handicap of learning false rules goes well beyond childhood. In a sense, things like superstition, magical thinking, and the placebo effect all arise from the same kinds of bugs in the human mind. Humans, being busy creatures, are mostly content to accept rules of thumb, weak heuristics that work “good enough” in most cases. It’s even to our evolutionary advantage to do so: we’d never get anything done if we stopped to analyze every tiny detail of everything we observed.

Thus, one day somebody saw a black cat skitter across their path and then had an accident, and concluded that black cats are bad luck. People in ancient times believed that what we now understand as diseases were caused by an imbalance of “humours.” For the longest time, we observed bright dots in the sky that moved differently from the stars and dreamed up some wildly clumsy celestial machinery to explain them, because we were still assuming the Earth was the center of the universe. We might think we’re superior to these primitive beliefs now, but then just recently we were all blowing on Nintendo cartridges.

Heck, our brains are so easily fooled that a skilled performer can convince us of something false even while telling us outright that they’re intentionally fooling us. Witness the amazing Rubber Hand Illusion:

[Embedded video: the Rubber Hand Illusion]

Yeah, that’s how far we’ve come. A little hocus-pocus and our stupid brains look down at the table and think “yep, that’s my hand now, even though I just saw it was a rubber prosthetic partially hidden by a cloth.”

What Have We Learned About Black Box AI?

It might turn out that when we try for “human-like AI,” we’re actually setting our sights too low. As Deep Learning algorithms evolve and develop, they’re becoming prone to exactly the same kinds of mistakes humans make. Then again, how do we avoid that? The way our own brains work, perhaps, will limit how well we can program an Artificial Intelligence after all. What’s even spookier is that AI systems might accidentally perform the correct action most of the time, but for completely wrong reasons. That Google DeepMind bot, for instance, flails its arms about so hysterically because none of its arm movements have any effect on its success at walking… but it has evidently “learned” that they don’t hurt either, so it just assumes it might as well hold its arms like that.
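As a toy illustration of that last point, here’s a hill-climbing sketch with two policy knobs, only one of which affects the reward. It’s invented from scratch and bears no relation to DeepMind’s actual training setup:

```python
import random

def reward(stride, arm_angle):
    """Walking 'speed' depends only on stride; arm_angle is ignored."""
    return -(stride - 0.7) ** 2

params = {"stride": 0.0, "arm_angle": 0.0}
best = reward(**params)

for _ in range(5000):
    # Jiggle both knobs at random and keep the trial if it's no worse.
    trial = {k: v + random.gauss(0, 0.05) for k, v in params.items()}
    r = reward(**trial)
    if r >= best:  # arm_angle changes are never rejected on their own merits
        params, best = trial, r

print(params)
# stride converges near 0.7; arm_angle ends up wherever it drifted.
# The arms "learned" nothing -- they just never hurt.
```

Run it a few times and the stride always lands near the same place, while the arm angle ends up somewhere different on every run: accidental behavior, locked in because it never cost anything.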

Let’s propose a new experiment: set the DeepMind bot up with a tiny virtual Nintendo game system, then leave it to play for a while. We’ll check back and see if it starts blowing on the cartridges.
