
Face perception/recognition

Facial perception is an individual’s understanding and interpretation of the face.

Facial perception implies the presence of consciousness.

The perception of facial features is an important part of social cognition.

Information gathered from the face helps people understand each other's identity, infer what they are thinking and feeling, anticipate their actions, recognize emotions, build connections, and communicate through body language.

Developing facial recognition is a necessary building block for societal constructs.

Being able to perceive identity, mood, age, sex, and race shapes the way we interact with one another and understand our immediate surroundings.

Even people born blind can learn face perception without vision, supporting the notion of a specialized mechanism for perceiving faces.

Face perception involves several stages: from basic perceptual manipulations of the sensory information to derive details about the person (such as age, gender, or attractiveness), to the ability to recall meaningful details such as their name and relevant past experiences with the individual.

Physical aspects of the face are used to work out age, gender or basic facial expressions.

The initial information is used to create a structural model of the face, which allows it to be compared to other faces in memory.

The ability to produce someone’s name when presented with their face has been shown to be selectively damaged in some cases of brain injury, suggesting that naming may be a separate process from being able to produce other information about a person.

Following brain damage, faces can appear severely distorted: features can droop, enlarge, or become discolored, or the entire face can appear to shift relative to the head, a condition known as prosopometamorphopsia (PMO).

In half of the reported cases of prosopometamorphopsia, distortions are restricted to either the left or the right side of the face; this form of PMO is called hemi-prosopometamorphopsia.

Hemi-PMO often results from lesions to the splenium, which connects the right and left hemispheres.

In the other half of reported cases, features on both sides of the face appear distorted.

The perception of facial expressions can involve many areas of the brain, and damaging certain parts of the brain can cause specific impairments in one’s ability to perceive a face.

Brain imaging studies typically show activity in an area of the temporal lobe known as the fusiform gyrus.

Damage to the fusiform gyrus, which contains the fusiform face area, is known to cause prosopagnosia.

While certain areas of the brain respond selectively to faces, facial processing involves many neural networks which include visual and emotional processing systems.

Prosopagnosia patients provide neuropsychological support for a specialized face perception mechanism: these people have deficits in facial perception while their perception of objects remains intact.

People tend to have greater deficits in task performance when prompted to react to an inverted face than to an inverted object.

Difficulties in facial emotion processing can also be seen in individuals with traumatic brain injury.

Many studies have found that infants will give preferential attention to faces in their visual field, indicating they can discern faces from other objects.

While infants often show particular interest in faces at around three months of age, that preference slowly disappears, re-emerges late in the first year, and slowly declines once more over the next two years of life.

That infants turn their heads towards faces or face-like images suggests rudimentary facial processing capacities.

The re-emergence of interest in faces at three months is likely influenced by a child’s motor abilities.

At around seven months of age, infants show the ability to discern faces by emotion.

Seven-month-olds seem capable of associating emotional prosodies with facial expressions.

By the age of seven months, children are able to recognize an angry or fearful facial expression, but are not yet aware of the emotional content encoded within facial expressions.

Infants can comprehend facial expressions as social cues representing the feelings of other people before they are a year old.

Seven-month-old infants can partially understand the higher level of threat posed by anger directed at them, and they show corresponding activity in the occipital areas of the brain.

By seven months, infants are able to use facial expressions to understand others’ behavior.

The perception of a positive or negative emotion on a face affects the way that an individual perceives and processes that face.

A face perceived to have a negative emotion is processed in a less holistic manner than a face displaying a positive emotion.

Seven-month-olds have been found to focus more on fearful faces, and happy expressions elicit enhanced sympathetic arousal in infants.

Early perceptual experience is crucial to the development of adult visual perception, including the ability to identify familiar people and comprehend facial expressions.

Neurological mechanisms responsible for face recognition are present by age five.

Children process faces in a manner similar to adults, but adults process faces more efficiently because of advances in memory and cognitive functioning.

Infants as young as two days old are capable of mimicking an adult, noting details like mouth and eye shape and moving their own muscles to produce similar patterns.

The fusiform face area, located in the lateral fusiform gyrus (Brodmann area 37, BA37), is involved in the holistic processing of faces and is sensitive to the presence of facial parts as well as the configuration of these parts.

The fusiform face area is also necessary for successful face detection and identification, a finding supported by fMRI activation and by studies of prosopagnosia, which involves lesions in the fusiform face area.

The occipital face area is located in the inferior occipital gyrus, and similar to the fusiform face area, this area is also active during successful face detection and identification, a finding that is supported by fMRI and MEG activation.

This suggests that the occipital face area may be involved in a facial processing step that occurs prior to fusiform face area processing.

The superior temporal sulcus is involved in the recognition of facial parts and is also thought to be involved in gaze perception.

Perceiving an inverted human face involves increased activity in the inferior temporal cortex, while perceiving a misaligned face involves increased activity in the occipital cortex.

The right fusiform gyrus is more important for facial processing in complex situations.

In early processing, the occipital face area contributes to face perception by recognizing the eyes, nose, and mouth as individual pieces.

The occipital face area is activated by the visual perception of single features of the face, for example the nose and mouth, and shows a preference for a combination of two eyes over other combinations.

The occipital face area recognizes the parts of the face at the early stages of recognition.

The fusiform face area shows no preference for single features; it is responsible for "holistic/configural" information, meaning that it puts all of the processed pieces of the face together in later processing.

The fusiform gyri are preferentially responsive to faces, whereas the parahippocampal/lingual gyri are responsive to buildings.


Looking at faces displaying emotions, especially fearful expressions, produces increased activity in the right fusiform gyrus compared with neutral faces, demonstrating connections between the amygdala and facial processing areas.

The activation of multiple regions by similar face components indicates that facial processing is a complex process. Increased activation in the precuneus and cuneus when two faces are easy to differentiate (e.g., kin and familiar non-kin faces) points to a role for posterior medial substrates in the visual processing of faces with familiar features.

Most neuroanatomical substrates for facial processing are perfused by the middle cerebral artery.

During facial recognition tasks, there are greater changes in the right middle cerebral artery than the left.

Men are right-lateralized and women left-lateralized during facial processing tasks.

The brain conceptually needs only about 50 neurons to encode any human face, with facial features projected on individual axes (neurons) in a 50-dimensional face space.
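As a minimal sketch of such an axis code, the Python snippet below assumes an idealized linear model with orthonormal axes; the dimensionality and all values are illustrative, not the published recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 50  # dimensionality of the hypothetical face space (illustrative)
N = 50  # one neuron per axis, matching the ~50-neuron estimate

# Each column is one neuron's preferred axis; QR gives orthonormal axes,
# an idealizing assumption that makes linear decoding exact.
axes, _ = np.linalg.qr(rng.normal(size=(D, N)))

def encode(face):
    """Firing rates: projection of the face's features onto each axis."""
    return face @ axes          # shape (N,)

def decode(rates):
    """Linear reconstruction of the face's features from firing rates."""
    return rates @ axes.T       # shape (D,)

face = rng.normal(size=D)       # a random point in face space
rates = encode(face)
print(np.allclose(face, decode(rates)))  # True: 50 linear units suffice
```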

The fusiform gyrus is also active when study participants are asked to discriminate between different types of birds and cars, suggesting that it has a general role in the recognition of similar visual objects.

Some research groups using different study designs have found that the fusiform gyrus is specific to faces and other nearby regions deal with non-face objects.

During face perception, neural networks link the perceived face with stored memories.

There are three stages of face processing: recognition of the face; recall of memories and information linked with that face; and name recall.

Names are recalled faster than semantic information in cases of highly familiar stimuli.

The face is a powerful identifier, but the voice also helps in recognition.

Thus, findings from experiments that did not control for this factor can lead to misleading conclusions about voice recognition relative to face recognition.

Semantic information can be more accessible to retrieve when individuals are recognizing faces than voices.

For this reason, similar words are used for the speech extracts.

In recognition of faces as it pertains to episodic memory, there is activation in the left lateral prefrontal cortex, parietal lobe, and the left medial frontal/anterior cingulate cortex.

Left lateralization in the parietal cortex during episodic memory retrieval correlates strongly with retrieval success.

The link between face recognition and episodic memory was stronger than that between voice recognition and episodic memory.

Evidence exists for the existence of two separate neural systems for face recognition: one for familiar faces and another for newly learned faces.

Electrophysiological techniques have demonstrated gender-related differences during a face recognition memory task and a facial affect identification task.

Facial perception shows no association with estimated intelligence.

Gender differences in facial recognition may suggest a role for sex hormones: in females, variability in psychological functions may be related to differences in hormonal levels during different phases of the menstrual cycle.

Humans tend to perceive people of races other than their own as all looking alike:

Individuals of a given race are distinguishable from each other in proportion to our familiarity, to our contact with the race as a whole.

Thus, to the uninitiated White American all Asiatics look alike, while to the Asiatics, all White men look alike. This phenomenon is known as the cross-race effect.

There is a reliable positive correlation between the size of the effect and the amount of interaction subjects had with members of the other race.

The cross-race effect seems to appear in humans at around six months of age.

Cross-race effects can be changed through interaction with people of other races.

Other-race experience is a major influence on the cross-race effect.

Participants with greater other-race experience were consistently more accurate at discriminating other-race faces than participants with less experience.

Holistic face processing mechanisms are more fully engaged when viewing own-race faces.

A deficit occurs when viewing people of another race because visual information specifying race takes up mental attention at the expense of individuating information.

The own-race effect likely extends beyond racial membership into in-group favoritism.

Similarly, men tend to recognize fewer female faces than women do, whereas there are no sex differences for male faces.

Autism spectrum disorder is a pervasive neurodevelopmental disorder that produces social, communicative, and perceptual deficits.

Individuals with autism exhibit difficulties with facial identity recognition and recognizing emotional expressions.

These deficits are suspected to arise from abnormalities in both early and late stages of facial processing.

People with autism process face and non-face stimuli with the same speed.

In neurologically normal individuals, a preference for face processing results in a faster processing speed in comparison to non-face stimuli.

Individuals with autism focus on individual features rather than the face as a whole, and direct their gaze primarily to the lower half of the face, specifically the mouth, in contrast to the eye-focused gaze of neurotypical people.

Individuals with autism display difficulty with recognition memory, specifically memory that aids in identifying faces.

Autism often manifests in weakened social ability, due to decreased eye contact, attention, interpretation of emotional expression, and communicative skills.

These face-processing deficits can be seen in infants as young as nine months old.

Some experts use the term "face avoidance" to describe how infants who are later diagnosed with autism preferentially attend to non-face objects.

The difficulty that children with autism have in grasping the emotional content of faces may be the result of a general inattentiveness to facial expression.

Many of the obstacles that individuals with autism face in terms of facial processing may be derived from abnormalities in the fusiform face area and amygdala.

Typically, the fusiform face area in individuals with autism has reduced volume.

This volume reduction has been attributed to deviant amygdala activity that does not flag faces as emotionally salient, and thus decreases activation levels.

As autistic individuals age, scores on behavioral tests assessing the ability to perform face-emotion recognition increase to levels similar to those of controls.

Schizophrenia is known to affect attention, perception, memory, learning, processing, reasoning, and problem solving.

Schizophrenia has been linked to impaired face and emotion perception, with patients showing worse accuracy and slower response times in face perception tasks in which they are asked to match faces, remember faces, and recognize which emotions are present in a face.

Schizophrenia patients are able to easily identify a happy affect but struggle to identify faces that are sad or fearful.

Impairments in face and emotion perception are linked to impairments in social skills, due to the individual’s inability to distinguish facial emotions.

The severity of schizophrenia symptoms has been found to correlate with the severity of impairment in face perception.

Patients diagnosed with both schizophrenia and antisocial personality disorder have been found to have even greater impairment in face and emotion perception than individuals with schizophrenia alone, struggling in particular to identify anger, surprise, and disgust.

Data from magnetic resonance imaging and functional magnetic resonance imaging has shown that a smaller volume of the fusiform gyrus is linked to greater impairments in face perception in schizophrenia.

The degree of schizophrenia correlates with difficulties in self-face perception, unusual perception, and other face recognition difficulties.

Schizophrenia patients report more feelings of strangeness when looking in a mirror than do normal controls.

Hallucinations, somatic concerns, and depression have all been found to be associated with self-face perception difficulties.

A branch of artificial intelligence known as computer vision uses the psychology of face perception to inform software design.
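As a minimal illustration of classical computer-vision face detection, the sketch below uses OpenCV's bundled Haar cascade (Viola-Jones) detector; the filename "photo.jpg" is a placeholder.

```python
import cv2

# Pretrained frontal-face Haar cascade bundled with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")                 # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # the detector expects grayscale

# One (x, y, w, h) bounding box per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"Detected {len(faces)} face(s)")
```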

Noninvasive functional transcranial Doppler spectroscopy has been used to locate specific responses to facial stimuli.

One such system provides a brain-machine interface for facial recognition and has been referred to as "cognitive biometrics."

Current evidence suggests that facial recognition abilities are highly linked to genetic, rather than environmental, bases.

Early research focused on genetic disorders which impair facial recognition abilities, such as Turner syndrome, which results in impaired amygdala functioning.

Scores on a face memory test were twice as similar for monozygotic twins as for dizygotic twins.

The heritability of facial recognition is approximately 61%.
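As a rough sketch of the underlying arithmetic, twin designs often estimate heritability with Falconer's formula, h^2 = 2(r_MZ - r_DZ); the twin correlations below are hypothetical, chosen only to reproduce the ~61% figure.

```python
# Falconer's formula: h^2 = 2 * (r_MZ - r_DZ).
# The twin correlations below are hypothetical, for illustration only.
r_mz = 0.70   # hypothetical correlation between monozygotic twins
r_dz = 0.395  # hypothetical correlation between dizygotic twins

h2 = 2 * (r_mz - r_dz)
print(f"Estimated heritability: {h2:.0%}")  # -> 61%
```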

There is no significant relationship between facial recognition scores and other cognitive abilities.

This suggests that facial recognition abilities are heritable, and have a genetic basis independent from other cognitive abilities.

People make rapid judgements about others based on facial appearance; these judgements are formed very quickly and accurately, with adults correctly categorizing the sex of adult faces with near 100% accuracy from an exposure of only 75 ms.

People also form judgements about others’ personalities from their faces, and there is evidence of at least partial accuracy in this domain too.

 

 
