Emoplayer: An Emotion Based Music Player
Abstract: The human face plays an important role in conveying an individual's behaviour and emotional state. Manually segregating a list of songs and generating a playlist matched to an individual's emotional state is a tedious, time-consuming and labour-intensive task. Various algorithms have been proposed and developed to automate the playlist generation process; however, the existing algorithms are computationally slow and less accurate. The proposed system generates a playlist automatically from the user's extracted facial expression, thereby reducing the effort and time involved in performing the process manually. Facial expressions are captured using the inbuilt camera. We use the Viola-Jones algorithm for face detection and a multiclass SVM (Support Vector Machine) for emotion detection.

Keywords: Emotion, Facial Expression, Face Detection, Support Vector Machine.

1. Introduction

A music player plays a vital role in everyone's life. Most music lovers find themselves in a difficult situation when they cannot find songs corresponding to their current mood, so we have developed an emotion-based music player.

The main objective of this paper is to design an efficient and accurate algorithm that generates a playlist based on the current emotional state and behaviour of the user. Face detection and facial feature extraction from an image is the first step in an emotion-based music player. For face detection to work effectively, the input image should not be blurred or tilted. We use the Viola-Jones algorithm for face detection and facial feature extraction, and generate landmark points for the facial features. The next step is the classification of emotion, for which we use multi-class SVM classification: the generated landmark points are provided to the SVM for training, and the emotion classified by the SVM is passed to the music player, which plays music accordingly.

2. Literature Survey

A literature survey is the part of a scholarly paper that summarizes the current knowledge on a particular topic, including substantive findings as well as theoretical and methodological contributions. Literature reviews use secondary sources and do not report new or original experimental work.

Various techniques and approaches have been proposed and developed to classify the human emotional state, most of which focus on only some of the basic emotions. For the purpose of feature recognition, facial features have been categorized into two major categories: appearance-based feature extraction and geometry-based feature extraction. Geometry-based feature extraction considers only the shape or major prominent points of important facial features such as the mouth and eyes.

An accurate and efficient statistical approach for analysing extracted facial expression features was proposed by Renuka R. Londhe. The paper focused mainly on the study of changes in curvature on the face and the intensities of the corresponding pixels of the images. A Support Vector Machine (SVM) was used to classify the extracted features into six major universal emotions: anger, disgust, fear, happiness, sadness and surprise.

Numerous approaches have been designed to extract facial features, or audio features from an audio signal, but very few of the systems designed have the capability to generate an emotion-based music playlist from human emotions, and the existing systems are only capable of generating an automated playlist using additional hardware.
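The end-to-end flow described in the introduction (capture → face detection → landmark extraction → SVM classification → playback) can be sketched as a minimal skeleton. The function bodies, the emotion-to-playlist mapping and the song titles below are illustrative assumptions, not part of the proposed system; in the real pipeline each stub would be backed by Viola-Jones detection and the trained SVM.

```python
# Hypothetical emotion-to-playlist mapping; the six emotions follow the
# universal set mentioned above, the song titles are placeholders.
PLAYLISTS = {
    "anger":    ["calming_track_1", "calming_track_2"],
    "disgust":  ["neutral_track_1"],
    "fear":     ["soothing_track_1"],
    "happy":    ["upbeat_track_1", "upbeat_track_2"],
    "sad":      ["cheerful_track_1"],
    "surprise": ["energetic_track_1"],
}

def detect_face(image):
    """Stub for Viola-Jones face detection: returns the face region."""
    return image  # assume the whole frame is the face in this sketch

def extract_landmarks(face):
    """Stub for landmark extraction: returns (x, y) facial feature points."""
    return [(0.0, 0.0)] * 22  # placeholder landmark points

def classify_emotion(landmarks):
    """Stub for the trained multi-class SVM: returns an emotion label."""
    return "happy"  # placeholder decision

def generate_playlist(image):
    """Wire the modules together: image -> emotion -> playlist."""
    face = detect_face(image)
    landmarks = extract_landmarks(face)
    emotion = classify_emotion(landmarks)
    return emotion, PLAYLISTS[emotion]

emotion, playlist = generate_playlist(image=None)
print(emotion, playlist)
```

The mapping keeps the classifier and the player decoupled: swapping in a real SVM only changes `classify_emotion`.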
3. Methodology

The proposed algorithm is an emotion-based music recommendation system that generates a customized playlist according to the user's emotional state. The proposed method involves the following modules:

1. Input Image
2. Testing Image
3. Face Detection
4. Landmark Point Extraction
5. Training Data
6. Training of SVM & Trained SVM
7. Music Player

This is shown in Figure 3.1 as an architecture diagram.

Input Image: According to the architecture diagram, capturing an image is the first task to be performed. We capture the user's image using a webcam. There are certain conditions while capturing the image: the user should be near the camera in case of multiple users, and the face should not be tilted.

3.1 Viola-Jones Algorithm

The Viola-Jones algorithm is used to detect the face in the user's image. The following steps are involved in the Viola-Jones algorithm:

1. Haar feature selection
2. Integral image
3. AdaBoost training
4. Cascading classifiers

3.2 SVM Classifier
The one-against-all approach decomposes the problem of classifying among K classes into K binary problems, where each problem discriminates a given class from the other K−1 classes. For this approach we require N = K binary classifiers, where the kth classifier is trained with positive examples belonging to class k and negative examples belonging to the other K−1 classes. When testing an unknown example, the classifier producing the maximum output is considered the winner, and its class label is assigned to the example. This approach, although simple, provides performance comparable to more complicated approaches when the binary classifiers are well tuned.

How exactly is the emotion detected?

The tracker we employ uses a face template to initially locate the positions of the 22 facial features of our face model in the image, and uses a filter to track their positions, as shown in Figure 3. For each expression, a vector of feature displacements is calculated by taking the Euclidean distance between feature locations in a neutral frame and a "peak" frame representative of the expression, as illustrated in Figure 3. This allows characteristic feature-motion patterns to be established for each expression, as given by Figure 4. Feature locations are automatically captured when the amount of motion is at a minimum, corresponding to either the initial neutral phase or the final phase of a spontaneous expression, when motion has settled around its peak frame.
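The two steps just described — building a displacement vector from the Euclidean distances between neutral and peak landmark positions, and the one-against-all winner rule — can be sketched as follows. The landmark coordinates and per-class decision values are invented for illustration (a 4-feature toy model instead of the 22-feature one); in the real system each score would come from one trained binary SVM.

```python
import math

def displacement_vector(neutral, peak):
    """Euclidean distance between each feature's neutral and peak location."""
    return [math.dist(p, q) for p, q in zip(neutral, peak)]

def one_vs_all_winner(scores):
    """One-against-all decision: the class whose binary classifier
    produces the maximum output wins."""
    return max(scores, key=scores.get)

# Toy landmark positions for a 4-feature face model (the paper uses 22).
neutral = [(10.0, 20.0), (30.0, 20.0), (20.0, 35.0), (20.0, 45.0)]
peak    = [(10.0, 18.0), (30.0, 18.0), (20.0, 33.0), (20.0, 49.0)]

d = displacement_vector(neutral, peak)   # [2.0, 2.0, 2.0, 4.0]

# Hypothetical decision values of K binary classifiers (one per emotion).
scores = {"happy": 1.3, "sad": -0.4, "surprise": 0.8}
print(one_vs_all_winner(scores))         # happy
```

The displacement vector is what gets fed to the K classifiers; the winner rule then turns their K raw outputs into a single emotion label.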
The system thus aims at providing Windows operating system users with a cheaper, accurate, emotion-based music system that requires no additional hardware. The Emotion Based Music System will be of great advantage to users looking for music based on their mood and emotional behaviour. It helps reduce the time spent searching for music, thereby reducing unnecessary computation and increasing the overall accuracy and efficiency of the system. The system will not only reduce physical stress but will also act as a boon for music therapy systems, and may assist a music therapist in treating a patient.

A future scope of the system would be to design a mechanism helpful in music therapy treatment, providing the music therapist the help needed to treat patients suffering from disorders such as mental stress, anxiety, acute depression and trauma. The proposed system also intends, in future, to avoid the unpredictable results produced under extremely bad lighting conditions and at very poor camera resolutions.
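One simple guard against the poor-lighting failure mode mentioned above would be to reject frames whose average brightness falls outside a usable range before attempting face detection. This is only an illustrative sketch: the thresholds and the grayscale frame representation (a list of pixel rows with values 0–255) are assumptions, not part of the proposed system.

```python
def usable_lighting(gray_frame, low=40, high=220):
    """Return True if the frame's mean gray level (0-255) lies inside a
    usable range; nearly black or washed-out frames are rejected before
    face detection is attempted."""
    pixels = [v for row in gray_frame for v in row]
    mean = sum(pixels) / len(pixels)
    return low <= mean <= high

dark_frame   = [[5, 10], [8, 12]]        # nearly black: reject
normal_frame = [[120, 130], [125, 135]]  # well lit: accept

print(usable_lighting(dark_frame))    # False
print(usable_lighting(normal_frame))  # True
```

Rejected frames could simply trigger a re-capture prompt rather than an unpredictable classification.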
6. Acknowledgement

We would like to thank our guide, Prof. K. R. Pathak, for the support and guidance he gave us at every step of the project's execution. We would also like to thank the project review committee members, Prof. Mrs. G. G. Chiddarwar and Prof. Mrs. D. R. Pawar. We would also like to express our gratitude to the HOD, Prof. P. R. Futane, who helped us to accomplish this work.