Going deeper into neural networks

Artificial neural networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we understand surprisingly little about why certain models work and others don't. How do you check that a network has correctly learned the right features? One way to find out is to turn the network upside down: networks trained to discriminate between different kinds of images retain quite a bit of the information needed to generate images too, and visualizing the network's representation in this way can help. In some cases, it reveals that the neural net isn't quite looking for the thing we thought it was.
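The "upside down" idea can be sketched in a few lines: instead of adjusting the network's weights, we hold them fixed and adjust the *input* by gradient ascent so that a chosen unit's activation grows. The sketch below is a minimal, hedged illustration on a toy single-layer "network" in NumPy; the real post used Google's deep Inception model, and all names and sizes here are assumptions for demonstration only.

```python
import numpy as np

# Toy illustration of feature visualization by gradient ascent on the input.
# W stands in for a trained network's weights (here: random, 10 "classes"
# over a flattened 8x8 "image"); in practice this would be a deep net.
rng = np.random.default_rng(0)
W = rng.standard_normal((10, 64))
x = rng.standard_normal(64) * 0.01   # start from a near-blank input

target = 3                           # the unit whose activation we amplify
for _ in range(100):
    grad = W[target]                 # d(activation)/dx for a linear unit
    x += 0.1 * grad                  # ascend the gradient: make x "more target-like"
    x *= 0.99                        # mild decay to keep pixel values bounded

print(W[target] @ x)                 # activation is now far above its start
```

With a deep network the gradient would come from backpropagation rather than a closed form, but the loop is the same: the image is optimized until the network "sees" the target feature in it.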

If we choose higher-level layers, complex features or even whole objects tend to emerge. We call this technique "Inceptionism": if a cloud looks a little bit like a bird, the network will make it look more like a bird, and after several passes a highly detailed bird appears, seemingly out of nowhere. Of course, we can do more than cloud watching with this technique. Horizon lines tend to get filled with towers and pagodas, rocks and trees turn into buildings, and birds and insects appear in images of leaves. We can even start this process from a random-noise image, so that the output is produced purely by the neural network. The resulting computer 'dreams' offer a fascinating insight into the mind of a machine.
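The feedback loop described above ("whatever you half-see, make more of it") can be sketched as gradient ascent on the norm of a layer's activations: features the layer already weakly detects get amplified on every pass. This is a hedged toy version in NumPy with a single random ReLU layer standing in for a trained model; the layer, sizes, and step schedule are assumptions, not the method's actual configuration.

```python
import numpy as np

# Toy "dreaming" loop: repeatedly nudge the image toward whatever the
# layer already responds to, so faint patterns grow more pronounced.
rng = np.random.default_rng(1)
W1 = rng.standard_normal((32, 64)) / 8.0   # stand-in for a trained layer
img = rng.standard_normal(64) * 0.1        # start from pure noise

def layer(x):
    return np.maximum(W1 @ x, 0.0)         # ReLU activations of the layer

for _ in range(50):                        # several "passes" over the image
    a = layer(img)
    # Gradient of 0.5*||a||^2 w.r.t. img: amplifies features already present.
    grad = W1.T @ a
    img += 0.05 * grad / (np.abs(grad).max() + 1e-8)   # normalized step

print(np.linalg.norm(layer(img)))          # activations grew over the passes
```

Starting from noise, whatever structure the weights happen to favor is all that can emerge, which is why the full-scale version produces images that are "purely the result of the neural network."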