LXer: Huge flaw found in how facial features are measured from images
by LXer from LinuxQuestions.org on (#507DN)
Published at LXer:
How is it that our brains - the original face recognition program - can recognize somebody we know, even when they're far away? As in, how do we recognize those we know in spite of their faces appearing to flatten out the further they are from us? Cognitive experts say we do it by learning a face's configuration - the specific pattern of feature-to-feature measurements. Then, even as our friends' faces get optically distorted by being closer or further away, our brains employ a mechanism called perceptual constancy that optically "corrects" face shape. At least, it does when we're already familiar with how far apart our friends' features are.
Read More...
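The flattening described above follows from perspective projection: the depth differences between facial features shrink relative to viewing distance, so feature-to-feature measurements taken from an image are not distance-invariant. Below is a minimal, illustrative Python sketch (not from the article; the feature coordinates are made-up numbers, not real anthropometric data) showing how an image-space spacing ratio between two hypothetical features changes with camera distance under a simple pinhole model.

```python
# Minimal sketch: how perspective projection "flattens" a face as viewing
# distance grows. Feature offsets and depths are illustrative, not measured.

def project(x_mm, depth_mm, camera_distance_mm, focal_mm=50.0):
    """Pinhole projection of a point x_mm off the face's centre line,
    protruding depth_mm toward a camera camera_distance_mm away."""
    z = camera_distance_mm - depth_mm  # nearer features have smaller z
    return focal_mm * x_mm / z

# Hypothetical features: (horizontal offset from face centre, protrusion), in mm
features = {"eye": (32.0, 0.0), "ear": (75.0, -60.0)}

for distance in (500.0, 5000.0):  # half a metre vs. five metres
    eye = project(*features["eye"], distance)
    ear = project(*features["ear"], distance)
    # The ear-to-centre vs. eye-to-centre image spacing ratio depends on
    # distance, so the measured "configuration" is not distance-invariant.
    print(f"{distance / 1000:.1f} m: ear/eye image-spacing ratio = {ear / eye:.3f}")
```

Running this prints a ratio of roughly 2.09 at half a metre and roughly 2.32 at five metres; as distance grows, the projected layout converges toward the flat 75/32 arrangement, which is consistent with the distance-dependent distortion of feature-to-feature measurements that the article describes.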