This research introduces a novel method for estimating human emotions from facial expressions. By combining the three-dimensional average face with Adaboost, the system recognizes emotions effectively, improving robots' cognitive abilities. The system consists of a training phase and a detecting phase, in which face features and emotion data are extracted and processed. Using adaptive boosting, the system classifies expressions accurately, but Adaboost is sensitive to noisy data, so it must be combined with other algorithms for better performance. Experimental results show a significant improvement in the emotion classification rate, particularly when the head rotates freely.
Human Emotions Estimation by Adaboost Based on User's Facial Expression and Average Face from Different Directions (ISS-P-252)
Jinhui Chen, Tetsuya Takiguchi, Yasuo Ariki (Kobe University)

Overview
• Background
• TV-program and customer content recommendation requires automatic analysis of users' facial expressions.
• The expression on a customer's face is directly related to sales and content recommendation.
• Understanding human emotions can improve robots' cognitive and interactive abilities.

Proposed method (flowchart)
The system consists of two parts. The first is the training system: during this stage, face feature data and emotion feature data are extracted and saved individually as a database. In the other stage, facial expressions are classified and the processing cost is reduced.
• Training system: input video → obtain the face region → recover into the 3D models → face feature data
• Detecting system: recover 3D features → create the average face from 3D features → estimate emotions (emotion feature data)

• Conventional methods
Adaptive Boosting (Adaboost) [1] is a basic method widely used in face feature extraction and recognition. It is easy to operate and its classification is quite precise. However, it is sensitive to noisy data and outliers, and it handles cases that were absent from the training data poorly. It therefore needs to be used in conjunction with other algorithms to improve performance.

• Problems & approaches
• Problem 1: when the user's head rotates significantly, obvious errors occur.
• Approach 1: combine the three-dimensional average face (3DAF) with Adaboost.
• Problem 2: the data-processing and time cost of real-time detection is high.
• Approach 2: the average face models are projected into an 8-bit gray image, which is used instead of the original data.

[1] Yoav Freund and Robert Schapire, 1995. [3] C. Goodall, 1991.
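The Adaboost procedure the poster relies on can be sketched concretely. This is a minimal, generic AdaBoost with decision-stump weak learners, not the authors' implementation; the dataset and function names are illustrative. Its re-weighting step also makes the poster's caveat visible: misclassified (possibly noisy) samples receive ever-larger weights, which is why Adaboost is sensitive to outliers.

```python
import numpy as np

def adaboost_train(X, y, n_rounds=10):
    """Train AdaBoost with one-feature threshold stumps.
    X: (n_samples, n_features) float array; y: labels in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights, renormalized each round
    stumps = []                       # each entry: (feature, threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        # exhaustive search for the stump minimizing the weighted error
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] >= thr, 1, -1)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol)
        err, j, thr, pol = best
        err = max(err, 1e-10)                      # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)      # weak learner's vote weight
        pred = pol * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)             # up-weight misclassified samples
        w /= w.sum()
        stumps.append((j, thr, pol, alpha))
    return stumps

def adaboost_predict(stumps, X):
    """Weighted majority vote over all trained stumps."""
    score = np.zeros(X.shape[0])
    for j, thr, pol, alpha in stumps:
        score += alpha * pol * np.where(X[:, j] >= thr, 1, -1)
    return np.where(score >= 0, 1, -1)
```

In practice the same loop applies to face-expression features: each stump tests one extracted feature against a threshold, and the weighted vote decides the emotion class.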
Features extraction
The original face data is recovered as a 3D model to obtain more feature points. The difference between the model data and the original data is adjusted to control the error: the difference between the original face data I_o and the 3D model data I_m, projected by the projection vector U_T, is kept below the control factor E_I (≤ 0.001). Procrustes analysis [3] is used for alignment. Once the 3D features are obtained, the coordinates S_i and the RGB values T_i are extracted easily.

Average face creation
The data is used to calculate its average value, following the SDAM draft [2]:
• the previous i-th iteration average value,
• adjustment factors,
• N: the total number of elements,
• updated at each iteration.
The average model is projected into an 8-bit gray image, yielding the final average-face features, which are used for emotion classification. The feature points are marked on the face.

Boost training
The extracted features are cut out and used to train the boosted classifier.

[2] T. Wang et al., 1995.

Experiment
• Trained samples: 20 × 3 × 12 + 75
• Persons tested: 2
• Emotion groups: Natural (Nau), Happy (Hap), Unhappy (Unh)
• Conditions: head kept static vs. head freely rotating; 2D features vs. 3D features

Data explanation
• The classification performance using the two methods' features (2D vs. 3D) is compared.
• The correct rates under the two motion states are compared.

Conclusion
In our research, we proposed a novel method for improving emotion estimation. Our experiments show that the approach improves the emotion classification rate, especially when the head is freely rotating.
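The average-face creation and the 8-bit projection described above can be sketched as follows. This is a simplified illustration, not the authors' SDAM procedure: the poster's per-iteration adjustment factors are not fully specified, so a plain running mean stands in for the iterative average update, and the face arrays and sizes are hypothetical.

```python
import numpy as np

def update_average(prev_avg, new_sample, i):
    """Incremental (running) mean after seeing the i-th sample (1-based).
    Stand-in for the poster's per-iteration average update; the SDAM
    adjustment factors are not given in the source."""
    return prev_avg + (new_sample - prev_avg) / i

def to_gray8(face):
    """Project a float-valued average-face array into an 8-bit gray image,
    as the poster does to cut the processing cost of real-time detection."""
    lo, hi = face.min(), face.max()
    scaled = (face - lo) / (hi - lo) if hi > lo else np.zeros_like(face)
    return (scaled * 255).round().astype(np.uint8)

# Accumulate an average face over several (hypothetical) aligned samples.
faces = [np.random.rand(64, 64) for _ in range(5)]
avg = np.zeros((64, 64))
for i, f in enumerate(faces, start=1):
    avg = update_average(avg, f, i)
gray = to_gray8(avg)   # compact uint8 image fed to the classifier
```

Storing the average face as uint8 instead of float data shrinks memory traffic by 4-8x, which matches the poster's motivation for Approach 2.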