The Flaw Lurking In Every Deep Neural Net

An interesting article at Data Science Weekly covers research findings showing that neural networks behave in ways that run counter to what we believed.

It seems that deep neural networks are not continuous with respect to the decisions they make and exhibit a new sort of instability. The implication is that an application of a deep neural network, such as in a self-driving car, can misclassify the view of a pedestrian standing in front of the car as a clear road.

Deep neural networks have “blind spots”: there are inputs that are very close to correctly classified examples and yet are misclassified. Or, depending on your point of view, the networks are functioning correctly because they can detect differences that we can’t.

Researchers created slightly perturbed versions of images by modifying each pixel value by a small amount. Because the change was so small, the perturbed photo looked exactly the same to a human, and the presumption was that it would also look the same to a neural network.

In fact, the adversarial examples looked to a human just like the originals, but the network consistently misclassified them. You end up with two photos that look not only like the same cat but like the exact same photo to a human, yet the machine gets one right and the other wrong. See the examples in the link.
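
For concreteness, here is a minimal sketch of how such a perturbation can be produced. It uses the fast gradient sign method, a simple technique of this kind, rather than whatever exact procedure the researchers used; the pretrained model, the file name cat.jpg, and the step size epsilon are all illustrative assumptions.

```python
# Minimal sketch (not the researchers' exact method): the fast gradient sign
# method nudges every pixel slightly in the direction that increases the
# classifier's loss. Model, file name, and epsilon are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(pretrained=True).eval()

# Standard ImageNet sizing, kept in [0, 1] so the perturbation stays in pixel space.
to_tensor = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def predict(x):
    """Normalize and classify a batch of [0, 1] images."""
    return model((x - mean) / std)

img = to_tensor(Image.open("cat.jpg")).unsqueeze(0)   # hypothetical input photo
img.requires_grad_(True)

# Use the model's own prediction on the clean image as the reference label.
logits = predict(img)
label = logits.argmax(dim=1)

# One gradient step per pixel, in the direction that increases the loss.
loss = F.cross_entropy(logits, label)
loss.backward()

epsilon = 0.007                                        # tiny, imperceptible change
adversarial = (img + epsilon * img.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:      ", label.item())
print("adversarial prediction:", predict(adversarial).argmax(dim=1).item())
```

Run against a correctly classified photo, the clean and adversarial predictions often disagree even though the two images are visually indistinguishable, which is exactly the blind spot the article describes.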

So much for my self-driving car for my commute. Still a few kinks to work out.
