
Vol-5 Issue-3 2019    IJARIIE-ISSN(O)-2395-4396

Facial Expression Recognition Using Image Processing

Aditi Bhadane1, Anuja Dixit2, Vivek Ingle3, Disha Shastri4
1,2,3,4 Student, Computer Department, K. K. Wagh Institute of Engineering Education and Research, Nashik, Maharashtra, India

ABSTRACT

Facial expressions are considered one of the channels that convey human emotions. The task of emotion recognition often involves the analysis of human expressions in multi-modal forms such as images, text, audio or video. Different emotion types are identified through the integration of features from facial expressions. This information contains particular feature points that are used to analyze the expressions or emotions of a person. These feature points are extracted using image processing techniques. The proposed system focuses on categorizing a set of 68 feature points into one of the six universal emotions: Happy, Sad, Anger, Disgust, Surprise and Fear. To collect these points, a series of images is given as input to the system. Feature points are extracted and the corresponding co-ordinates of the points are obtained. Based on the distances of the co-ordinates from the centroid, images are classified into one of the universal emotions. Existing systems show recognition accuracy above 90% when an SVM (Support Vector Machine) classifier is used. In the proposed system, an SVM classifier builds the classification model using the 68 feature points extracted from 7 Regions of Interest (ROI). The proposed system recognizes the emotions of a person with high precision.

Keywords: facial expression recognition, feature points, image processing, expressions, emotion analysis, Support Vector Machine

1. INTRODUCTION

Technologies based on human-computer interaction are increasing day by day, and Facial Expression Recognition (FER) is one such technology. Facial expressions are the fastest means of communication for conveying any type of information, and they are considered a way by which humans convey emotions.
Nowadays, devices that interact with humans perform coded reactions. But if a machine could obtain real-time information about the human it is interacting with, it would help improve human-machine interaction. FER, if implemented, may lead to a natural Human Machine Interface (HMI). Knowledge of human emotions also helps in implementing:
- Boosting the mood of an employee in industry
- Training AI agents
- Blue Eyes technology (a technology which identifies the mental state of the user)

10058    www.ijariie.com    266

Fig.: Image showing the 6 universal emotions.

2. MATERIAL AND METHODS

The system needs a set of images for training and building a classification model, so images from the Cohn-Kanade database are used for training.
Computer vision libraries are required for detecting and identifying faces, and the OpenCV library provides functions for the same. Using OpenCV, face detection and feature point location are performed. Dlib is used for training and building the classification model.

Study duration: 12 months.

Sample size calculation: The system works on a series of images. A single image is captured at a time, and a series is formed by capturing an image every 30 seconds. A batch of 10 images is considered one series. Each image in the series is classified, and the output is the emotion into which the maximum number of images from the batch are classified. Thus, the sample size is 10 images, i.e. a single batch.

Inclusion conditions:
1. The operating system required is Windows 7 or above.
2. The Python programming language is used.
3. The .png image format is required for the working of the system.

At least 10 GB of storage is required for the working of the system, considering the temporary storage of the captured images.
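The face detection and feature point extraction described above can be sketched with OpenCV's Haar cascade and dlib's 68-point shape predictor. This is a minimal sketch, not the authors' exact code: the paper does not name the landmark extractor, so the use of `shape_predictor_68_face_landmarks.dat` (a separately downloaded dlib model file) is an assumption, as is the cascade file bundled with OpenCV.

```python
def extract_feature_points(image_path,
                           predictor_path="shape_predictor_68_face_landmarks.dat"):
    """Detect a face in an image and return its 68 (x, y) feature points,
    or None if no face is found. The predictor path is an assumption; the
    dlib model file must be downloaded separately."""
    import cv2   # imported lazily so the sketch loads without the libraries
    import dlib

    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Haar cascade face detection, as named in the Discussion section.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None

    # Use the first detected face and locate the 68 landmarks inside it.
    x, y, w, h = faces[0]
    rect = dlib.rectangle(int(x), int(y), int(x + w), int(y + h))
    shape = dlib.shape_predictor(predictor_path)(gray, rect)
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```

Calling this once per captured image yields the per-image point sets that form one 10-image batch.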

3. PROCEDURE AND METHODOLOGY

3.1 Training:
Faces are detected from the images in the Cohn-Kanade database, whose emotions are known. The feature points of a face refer to 68 points across 7 regions of interest on the face. The feature points of the Cohn-Kanade images are extracted and mapped to their corresponding emotions, and the classification model is built from these values.

3.2 Testing:
A series of images is captured using a web camera. Face detection is performed on those images and each detected face is stored. Feature points are then located and extracted. These extracted feature points must be stored in a proper data structure for classification, so before classification the set of 68 (x, y) co-ordinates is converted to an array.
After the array is formed, the co-ordinates of the centroid are calculated. The distance of each feature point from the centroid is then computed, and those distances are given as input to the classification model. The output of the system is a statement depicting the emotion in the captured images.

Fig.: Process flow

3.3 Feature Points and Regions of Interest
1. For tracing a particular emotion, the feature points of only a few regions of the face are observed. Those regions are known as Regions of Interest (ROI).
2. The 7 Regions of Interest are as follows:
- Left Eyebrow

- Right Eyebrow
- Left Eye
- Right Eye
- Nose
- Lips
- Jawline
3. The 68 feature points are divided and mapped onto these 7 regions. Each eyebrow has 5 points, each eye has 6, the nose has 9, the lips have 20 and the jawline has 17 feature points.

ROI              FEATURE POINTS
Left Eyebrow     23-27
Right Eyebrow    18-22
Left Eye         43-48
Right Eye        37-42
Nose             28-36
Lips             49-68
Jawline          1-17

Table: Mapping of ROIs to feature points

Fig.: Location of feature points on the face
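The ROI-to-index mapping can be expressed as a small lookup. The 1-based ranges below follow the standard 68-point annotation scheme and match the point counts given above (5 per eyebrow, 6 per eye, 9 for the nose, 20 for the lips, 17 for the jawline); the dictionary and function names are illustrative.

```python
# ROI-to-feature-point mapping (1-based indices, standard 68-point scheme).
ROI_POINTS = {
    "jawline":       range(1, 18),    # 17 points
    "right_eyebrow": range(18, 23),   # 5 points
    "left_eyebrow":  range(23, 28),   # 5 points
    "nose":          range(28, 37),   # 9 points
    "right_eye":     range(37, 43),   # 6 points
    "left_eye":      range(43, 49),   # 6 points
    "lips":          range(49, 69),   # 20 points
}

def roi_of(point_index):
    """Return the ROI name for a 1-based feature point index (1-68)."""
    for name, indices in ROI_POINTS.items():
        if point_index in indices:
            return name
    raise ValueError(f"feature point index out of range: {point_index}")
```

The ranges partition all 68 points, so every index maps to exactly one ROI.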

4. ANALYSIS

1454 images from the Cohn-Kanade database are used to train the classification model. Faces are extracted from those images, and the 68 feature points of each face are extracted and used for training.
Extraction yields the (x, y) co-ordinates of the feature points. If only the raw co-ordinate values are used for classification, accuracy decreases sharply: if the face is tilted, the values change abruptly, and because this type of change has no pattern, building a model is difficult. To address this, the Euclidean distance of each feature point from a fixed point is calculated, that fixed point being the centroid of the 68 points. This method solves the problem of changing values.

Phase-wise working of the system after training:
Phase 1: Collection of images using the web camera.
Phase 2: Face detection on the batch of images.
Phase 3: Extraction of feature points from the faces detected in phase 2, and calculation of the centroid and the Euclidean distances.
Phase 4: Classification of the images into a particular emotion using the SVM classifier.
Phase 5: The system outputs the emotion of the face in the images.

5. DISCUSSION

Facial expressions are considered one of the best ways to convey emotions, and human emotions are nowadays used in multiple applications. There are six universal emotions: happy, sad, fear, anger, disgust and surprise, and recognizing them falls under the computer vision domain. The proposed Facial Expression Recognition system uses the OpenCV library and the Haar Cascade algorithm for face detection. Dlib, a C++ toolkit containing machine learning algorithms, is used for training and building the classification model, which uses an SVM classifier to classify the images into the six universal emotions.
Classification is done on a batch (series) of 10 images so that classification accuracy increases. The SVM classifier has an accuracy greater than 90%, so expressions can be identified with high precision. 1454 images from the Cohn-Kanade database are used to build the classification model against which the test images, captured using the web camera, are compared; each image is classified according to the emotion it most closely matches. The output is displayed on the screen as a dialogue box stating whether the person's face shows Sad, Happy, Fear, Surprise, Disgust or Anger as an emotion.
The SVM algorithm efficiently identifies the human emotion with the use of the 7 ROIs. Because each image is converted to feature points before classification, the system finds the correct human emotion even though the image is compressed before the SVM algorithm is applied. The high precision of the algorithm yields effective recognition of human emotions into the six universal emotions.
Many related projects are implemented using predefined grids and many mathematical formulas for extracting features, but the proposed system classifies the emotions without complex calculations. About 50 test images were classified accurately into their specified classes.
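The distance-from-centroid features and the per-batch majority vote described above can be sketched in Python. This is a minimal sketch with illustrative function names; the SVM training step itself (e.g. scikit-learn's `SVC`, an assumption, since the paper only specifies an SVM classifier) is omitted.

```python
import math
from collections import Counter

def distance_features(points):
    """Turn 68 (x, y) feature points into 68 Euclidean distances from their
    centroid -- the tilt-tolerant representation fed to the SVM classifier."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return [math.hypot(x - cx, y - cy) for x, y in points]

def batch_emotion(per_image_labels):
    """Majority vote over the emotions predicted for one batch of 10 images."""
    return Counter(per_image_labels).most_common(1)[0][0]
```

An SVM trained on `distance_features` vectors of the Cohn-Kanade images would supply the per-image emotion labels, and `batch_emotion` then reduces each 10-image batch to the single reported emotion.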

6. CONCLUSION

Facial Expression Recognition provides a way to identify human emotions. The proposed system identifies them from a captured series of images. It detects the face in the collected images for efficient extraction of feature points, and it effectively classifies the images into one of the 6 universal emotions. The Haar Cascade algorithm used for face detection works appropriately, with an accuracy rate of around 90%. The SVM classifier used in the system gives the expected output properly.
