
Going deeper into neural networks

in google (#CB4F)
Artificial neural networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little about why certain models work and others don't. How do you check that the network has correctly learned the right features? One way to visualize what goes on is to turn the network upside down: neural networks trained to discriminate between different kinds of images hold quite a bit of the information needed to generate images too, so we can ask a network to produce the image it associates with a given concept. Visualizing the network's representation this way can be revealing; in some cases, it shows that the neural net isn't quite looking for the thing we thought it was.
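
To make "turning the network upside down" concrete, here is a minimal sketch of the idea in PyTorch: start from random noise and run gradient ascent on the input image so a chosen class score increases. The model (torchvision's googlenet), the class index, and the hyperparameters are illustrative assumptions, not the exact setup from the original work.

```python
import torch
import torchvision.models as models

# Freeze a pretrained classifier; we optimize the *input*, not the weights.
model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

image = torch.randn(1, 3, 224, 224, requires_grad=True)  # random-noise start
target_class = 130  # hypothetical ImageNet class index, e.g. a bird

optimizer = torch.optim.Adam([image], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    score = model(image)[0, target_class]  # how "bird-like" the image looks
    (-score).backward()                    # ascend the class score
    optimizer.step()
# (real experiments also normalize the input and add image priors,
#  or the result tends toward high-frequency noise)
```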

Instead of prescribing which feature to amplify, we can also feed the network an arbitrary image, pick a layer, and ask it to enhance whatever it already detects there. If we choose higher-level layers, complex features or even whole objects tend to emerge. We call this technique "Inceptionism". If a cloud looks a little bit like a bird, the network will make it look more like a bird. After several passes, a highly detailed bird appears, seemingly out of nowhere. Of course, we can do more than cloud watching with this technique. For example, horizon lines tend to get filled with towers and pagodas. Rocks and trees turn into buildings. Birds and insects appear in images of leaves. We can even start this process from a random-noise image, so that the output is purely the product of the neural network. The computer "dreams" generated this way offer a fascinating insight into the mind of a machine.
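
Here is a sketch of what one such amplification "pass" might look like, again in PyTorch and again under assumptions of my own (the layer inception4c, the step size, and the normalized-gradient update are illustrative choices, not the published recipe): run the image forward, take the norm of a chosen layer's activations as the objective, and nudge the image to increase it.

```python
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture activations from a mid/high-level layer via a forward hook.
activations = {}
def hook(module, inputs, output):
    activations["target"] = output
model.inception4c.register_forward_hook(hook)

def dream_pass(image, steps=20, lr=0.02):
    # One "pass": amplify whatever the chosen layer already detects.
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        model(image)
        loss = activations["target"].norm()  # "whatever you see, more of it"
        loss.backward()
        with torch.no_grad():
            # Normalized gradient step keeps updates at a steady scale.
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()
```

Repeatedly feeding dream_pass its own output (optionally zooming slightly in between) is roughly how a faint bird-like cloud turns into the highly detailed bird described above.
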
4 comments

want more (Score: 0)

by Anonymous Coward on 2015-06-24 21:50 (#CBBX)

That was an interesting read, though it's not really obvious how you go from the input to the output...

Re: want more (Score: 1)

by pete@pipedot.org on 2015-06-25 00:05 (#CBMM)

I think that's part of the point - we are getting to a place where systems can be so complex that the output is unpredictable (i.e., a dumb brain). This neural network looked for familiar shapes, much like creatures looking at clouds. You might be able to step backwards and determine why it made the choices it did at each moment/step, but I can't imagine you could predict with any accuracy what it will do.

Hopefully someday we can take a look at the code, if it hasn't already destroyed society.