For recognizing facial expressions from a sequence of images, most previous research deals with the recognition of six basic discrete facial expression categories, such as happy, sad, etc. (changes between facial expression categories); however, it does not adequately address how much emotional intensity is expressed in a given facial expression (changes within facial expressions). Categorizing ‘slightly happy’ and ‘very happy’ (two distinct emotional states) as simply ‘happy’ (a general category covering both) is not an entirely accurate interpretation. Therefore, the motivation and position of this thesis is to address the issue of accurately interpreting distinct levels of underlying emotion from temporal changes in facial expressions.
Temporal changes in facial expressions are analyzed and recognized through the simultaneous intensity estimation of action units (AUs) and facial expressions. To this end, this thesis proposes a generative face model called the Layered Semantic Network of Face (LSNF) and a two-level facial expression analysis method called Hierarchical Expert Rule Analysis (HERA). Unlike previous approaches, the proposed method accurately interprets more detailed levels of underlying emotion in a discrete and intuitive manner. Experimental results show that the proposed method successfully detects, analyzes, and recognizes temporal changes in facial expressions.