

Automate Appliances via Gestures Recognition for Elderly Living Assistance

Muhammad Muneeb, Dept. of Computer Science, Air University, Islamabad, Pakistan, 180954@students.au.edu.pk
Hammad Rustam, Dept. of Computer Science, Air University, Islamabad, Pakistan, 181126@students.au.edu.pk
Ahmad Jalal, Dept. of Computer Science, Air University, Islamabad, Pakistan, ahmadjalal@mail.au.edu.pk

Abstract— Smart homes have grown in popularity not only as a luxury but also because of the numerous benefits they provide. In this research, a home automation system is developed for the elderly: as the number of elderly people rises, so does the probability that patients will develop geriatric problems, an issue society needs to address. The system is especially beneficial for senior citizens and for disabled children. Much research and innovation is being conducted in the field of gesture recognition. In this project, home automation is performed through the use of gestures to control appliances, in contrast to computer vision approaches, since an elderly person cannot be expected to maintain the environment that computer vision techniques require, such as proper lighting conditions and camera angles. A sensor-embedded hand glove that collects hand motions is presented in this study. The wearable device detects and records the tilting, rotation, and acceleration of hand movements using accelerometers and gyroscopes. Our proposed human gestures recognition (HGR) system recognizes nine different hand gestures taken from a benchmark dataset. We used a combination of feature extraction algorithms and a random forest classifier, and compared our system's performance with other well-known classifiers, achieving an accuracy of 94% over the benchmark HGR dataset. Experiments have shown that the proposed approach is capable of recognizing gestures for controlling home appliances and can be used in healthcare, residences, offices, and educational environments.

Keywords—Genetic algorithm, hand gestures recognition, inertial sensors, t-distributed stochastic neighbor embedding.

I. INTRODUCTION
Hand gestures have received a lot of attention in the past few years because of their worth in the development of a variety of applications, including healthcare and rehabilitation [1]; hand motion tracking is one of the most popular forms of human-computer interaction [2]. Hand gestures are a powerful and natural way for people to communicate with one another, and they are traditionally detected and recognized using camera-based computer vision algorithms [3]. Such techniques, on the other hand, can be slow and demand a lot of processing power, which means they use a lot of energy. For resource-constrained devices, using embedded sensors, i.e., inertial sensors, to conduct gesture recognition is a better option. The advantage of using embedded sensors is that gesture detection can be done on the device itself, and the accuracy is unaffected by illumination or space issues. In addition, cost and energy consumption are reduced with embedded sensors [4].

The hand glove prototype that we have designed to recognize gestures is shown in Fig. 1. The glove includes an inertial sensor unit (MPU 9250) connected to an Arduino UNO. The MPU 9250 is configured with the Arduino UNO, its signals are accessed through a port via serial communication, and the signals are directed towards the model, where gestures are recognized and the corresponding action is performed.

Fig. 1. Prototype of the hand glove where inertial sensor MPU 9250 is used for gestures recognition.

Two common approaches for activity recognition are computer vision-based [5] and sensor-based [6,7]. Motion tracking via inertial measurement units (IMUs) on mobile and wearable devices has attracted significant interest in recent years [8]. The processes involved in identifying human activities include the extraction of features, categorization [9], and subject detection. Despite the extensive advancements in inertial sensors and their roles in indoor environments, it remains difficult to reliably identify ordinary human activities and interactions. Thanks to advancements in microelectronics that have increased processing power, smartphones have evolved into wearable mini computers with numerous sensors, which has also enabled new input interfaces such as gesture-based user interfaces [10].

The Internet of Things (IoT) is a system in which connected devices (such as fans, air conditioners, and other home appliances) are integrated with sensors to exchange data and provide services [11]. Improving the home automation experience is one of the most active current IoT [12] research initiatives [13], which emphasizes such devices' importance to their users [14]. Many IoT-related solutions have been presented to give everyday objects [15] communication capabilities [16]. One of these is HGR, which has emerged as a feasible method of communication with automated home systems [17]. A hand gesture embodies an idea associated with some action or reaction that the user performs in order to achieve a goal [18]. As a result, hand gestures are becoming more popular for controlling home automation systems. Gesture recognition techniques can be divided into vision-based techniques, which involve cameras, and sensor-based techniques, which involve inertial sensors. Computer vision (CV)-based HGR recognizes gestures by evaluating pictures acquired from videos [19-23], whereas inertial measurement unit (IMU)-based recognition is an emerging technology [24,25] that recognizes gestures by analyzing signals [26] coming from devices [27] equipped with sensors [28]. While the former may raise privacy problems, lighting requirements, and space restrictions, the latter alleviates privacy concerns as well as environmental constraints such as light or space [29]. The rising usage of mobile devices such as smartphones and smartwatches has further encouraged the latter techniques. A gesture is an expression of emotion, intent, purpose, or demand by the movement of a bodily part [30]; a gesture vocabulary is a collection of gestures and their meanings. A gesture-capturing sensor converts a physical gesture into a digital representation.

Fig. 2. The general scheme of the proposed system architecture
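To make the acquisition layer of Fig. 2 concrete, the following is a minimal host-side sketch of reading the glove's stream with Python and pyserial. The port name and the comma-separated line format (ax,ay,az,gx,gy,gz) are assumptions about the Arduino sketch, not details specified in this paper.

# Minimal sketch: read MPU 9250 samples streamed by the Arduino UNO over
# USB serial. Assumes the Arduino prints one comma-separated sample per
# line: ax,ay,az,gx,gy,gz (hypothetical line format).
import serial  # pyserial

PORT = "/dev/ttyACM0"   # assumption: adjust for your system (e.g. "COM3")
BAUD = 115200

def read_samples(n=200):
    """Collect n accelerometer/gyroscope samples from the glove."""
    samples = []
    with serial.Serial(PORT, BAUD, timeout=1) as ser:
        while len(samples) < n:
            line = ser.readline().decode("ascii", errors="ignore").strip()
            parts = line.split(",")
            if len(parts) == 6:  # ax, ay, az, gx, gy, gz
                samples.append([float(p) for p in parts])
    return samples

if __name__ == "__main__":
    data = read_samples()
    print(f"collected {len(data)} samples")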

Researchers in the field of HGR have made several proposals to contribute to society's well-being. For gesture recognition, different sensors such as accelerometers and magnetometers, or combinations of them, are utilized. Home automation is an essential need in the present world [31]. Paralyzed [32] or injured persons [33], those who depend on others for their daily home tasks [34], and especially the elderly [35] are thereby enabled [36], by wearing the hand glove, to control the appliances according to their own requirements [37].

II. PROPOSED ACTIVITY RECOGNITION METHOD
We provide appropriate hand gestures based on our study of actual smart homes so that users can remember the gestures and operate their homes effectively [38]. This research introduces a smart home automation [39] system that enables users to examine and modify the status of electronic home appliances and the operational state of sensors by wearing a hand glove [40]. This work's major contributions can be divided into three categories:
• A technique is proposed for recognizing gestures through a hand glove for the home automation process.
• For the identification of nine hand gestures, we have applied machine learning classification models to the data obtained from inertial sensors.
• The proposed scheme allows the user to quickly automate a variety of appliances, including a bulb, fan, door, and air conditioner.

III. SYSTEM DESIGN
The suggested approach for hand gesture recognition is detailed in this section. The proposed system is shown in Fig. 2; its general architecture is divided into three layers. The purpose of feature extraction is to obtain features that allow us to discriminate between different sorts of objects; it is the process of reducing the resources needed to describe a huge amount of data. We use linear discriminant analysis (LDA) as a feature extraction method, with the goal of reducing processing costs and minimizing overfitting. The second feature extraction approach that we utilize is t-SNE, as it is capable of efficiently capturing much of the local structure of the high-dimensional data. To achieve excellent accuracy, the features are then processed by a genetic algorithm (GA) after feature reduction.

A) Data Pre-processing
In data pre-processing, it is a critical task to identify and manage missing values effectively during data preparation. A pre-processing technique is therefore applied to the dataset that transforms the raw data into an understandable and readable format. Fig. 3 depicts the HGR dataset's filtered and unfiltered data.

Fig. 3. Filtered and unfiltered data of the HGR dataset
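The paper does not name the filter behind Fig. 3, so the following is only a plausible sketch of this step: imputing missing values and smoothing each IMU channel with a Butterworth low-pass filter (the cutoff, sampling rate, and data shapes are illustrative assumptions).

# Sketch of one plausible pre-processing step: impute missing values,
# then low-pass filter the raw IMU channels. The paper does not specify
# its filter; a 4th-order Butterworth (scipy) is shown as a common choice.
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(signal, cutoff_hz=5.0, fs_hz=50.0, order=4):
    """Zero-phase low-pass filter of a 1-D signal sampled at fs_hz."""
    b, a = butter(order, cutoff_hz / (0.5 * fs_hz))
    return filtfilt(b, a, signal)

rng = np.random.default_rng(0)
raw = rng.normal(size=(500, 6))             # placeholder for glove data
raw[rng.random(raw.shape) < 0.01] = np.nan  # simulate dropped samples
clean = np.where(np.isnan(raw), np.nanmean(raw, axis=0), raw)  # impute
filtered = np.column_stack([lowpass(clean[:, c]) for c in range(6)])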
B) Linear Discriminant Analysis
Feature extraction is the most crucial step in HGR; a good set of attributes can be more important [41] than the classifier in some cases [42]. LDA attempts to shape the dispersion in order to increase classification accuracy: the projection matrix P is chosen by LDA in such a way that the ratio of between-class to within-class dispersion is maximized. Feature extraction reduces the number of resources needed to understand a huge amount of data [43]. This can be accomplished in three steps:

i. Determine the separability of the various classes via the between-class scatter matrix $S_b$, where g is the number of classes, $N_i$ is the sample size of class i, $\bar{X}$ is the overall mean, and $\bar{X}_i$ is the sample mean of class i:

$S_b = \sum_{i=1}^{g} N_i (\bar{X}_i - \bar{X})(\bar{X}_i - \bar{X})^T$  (1)

ii. Determine the distance between each class's mean and its samples via the within-class scatter matrix $S_w$, where $S_i$ is the scatter matrix for class i:

$S_w = \sum_{i=1}^{g} (N_i - 1) S_i = \sum_{i=1}^{g} \sum_{j=1}^{N_i} (X_{i,j} - \bar{X}_i)(X_{i,j} - \bar{X}_i)^T$  (2)

iii. Construct a lower-dimensional space that maximizes the between-class variance while reducing the within-class variation:

$P_{lda} = \arg\max_P \dfrac{|P^T S_b P|}{|P^T S_w P|}$  (3)

To avoid overfitting and save computational costs, the dataset is projected so that class separability is preserved at a lower computational cost. Fig. 4 shows how LDA was used to distinguish the nine classes.

Fig. 4. Discriminating the nine classes using Linear Discriminant Analysis
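As a concrete counterpart to equations (1)-(3), here is a minimal sketch of the LDA projection using scikit-learn; the feature matrix, labels, and dimensions are illustrative placeholders, not the paper's actual data.

# Sketch: LDA projection as in Eqs. (1)-(3), via scikit-learn.
# X is an (n_samples, n_features) matrix of windowed IMU features and
# y holds the nine gesture labels; both are illustrative placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(900, 24))          # placeholder feature vectors
y = rng.integers(0, 9, size=900)        # nine gesture classes

# With g = 9 classes, LDA yields at most g - 1 = 8 discriminant axes.
lda = LinearDiscriminantAnalysis(n_components=8)
X_lda = lda.fit_transform(X, y)
print(X_lda.shape)                      # (900, 8)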

C) t-Distributed Stochastic Neighbor Embedding (t-SNE)
t-SNE is an improved variation of stochastic neighbor embedding (SNE) [44]. The t-SNE feature extraction approach is capable of capturing a considerable portion of the structure of high-dimensional data [45]. In t-SNE, the high-dimensional Euclidean distances between data points are converted into conditional probabilities that express similarities [46], as shown in Fig. 5. Mathematically, the conditional probability $p_{j|i}$ is given by (4), where $\sigma_i$ is the variance of the Gaussian centered on datapoint $x_i$; the corresponding low-dimensional joint probability $q_{ij}$ is given by (5); and the gradient of the cost function C used to fit the embedding is given by (6):

$p_{j|i} = \dfrac{\exp(-\lVert x_i - x_j \rVert^2 / 2\sigma_i^2)}{\sum_{k \neq i} \exp(-\lVert x_i - x_k \rVert^2 / 2\sigma_i^2)}$  (4)

$q_{ij} = \dfrac{(1 + \lVert y_i - y_j \rVert^2)^{-1}}{\sum_{k \neq l} (1 + \lVert y_k - y_l \rVert^2)^{-1}}$  (5)

$\dfrac{\delta C}{\delta y_i} = 4 \sum_j (p_{ij} - q_{ij})(y_i - y_j)(1 + \lVert y_i - y_j \rVert^2)^{-1}$  (6)

Fig. 5 shows the results of discriminating the classes, which include the digits, move up, move down, move counterclockwise, move clockwise, move right, and move left.

Fig. 5. Discriminating the nine classes using t-SNE
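For reference alongside equations (4)-(6), a minimal sketch of producing a 2-D t-SNE embedding of the gesture features with scikit-learn follows; the inputs and the perplexity setting are illustrative assumptions.

# Sketch: 2-D t-SNE embedding (Eqs. (4)-(6)) for visualizing the nine
# gesture classes, using scikit-learn. Inputs are illustrative.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(900, 24))          # placeholder feature vectors

# perplexity controls the effective neighborhood size (sigma_i in Eq. 4)
tsne = TSNE(n_components=2, perplexity=30, random_state=0)
X_2d = tsne.fit_transform(X)
print(X_2d.shape)                       # (900, 2)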
Features Selection via Genetic Algorithm
The hybrid feature selection employs a genetic algorithm (GA)-based filter analysis to select optimal features [47,48]. To achieve good accuracy, the features must be processed by a genetic algorithm after feature reduction. GA is a general-purpose search approach that has been used to solve optimization problems successfully [49]. GA begins by randomly assigning chromosomes to the population. A generation is the process through which the chromosomes are iterated through a fitness function [50]; GA converges to the best solution after numerous iterations [51]. Fig. 6 shows the GA process in detail.

Fig. 6. Flow chart showing the genetic algorithm feature selection method for the proposed system architecture

Algorithm 1: Genetic Algorithm
Input: Feature vectors
Output: List of features
1: Select a random population of individuals.
2: Determine each individual's level of fitness.
3: repeat
4:   Pick the most qualified individuals to apply the genetic operators.
5:   Generate new individuals using crossover and mutation.
6:   Evaluate the fitness of the new individuals.
7:   Replace the worst individuals of the population with the best new individuals.
8: until some stop criterion is met

The GA is given a data partition and uses a cross-validation classification ranking approach to choose features; the final result is an optimally chosen feature set. This is assessed several times over various training and testing splits in order to develop a fitness-based selection.
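As a concrete rendering of Algorithm 1, the sketch below evolves binary feature masks with cross-validated classifier accuracy as the fitness. The population size, mutation rate, and choice of a random-forest fitness classifier are illustrative assumptions, not the paper's exact settings.

# Sketch of Algorithm 1: a small genetic algorithm over binary feature
# masks, with cross-validated classifier accuracy as the fitness.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Cross-validation accuracy using only the selected features."""
    if not mask.any():
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def ga_select(X, y, pop_size=20, generations=15, p_mut=0.05):
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < 0.5            # step 1: random population
    for _ in range(generations):                     # steps 3-8: evolve
        scores = np.array([fitness(m, X, y) for m in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]        # step 4: select fittest
        children = []
        for _ in range(pop_size - len(parents)):     # step 5: crossover + mutation
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut
            children.append(child)
        pop = np.vstack([parents, children])         # step 7: replace the worst
    scores = np.array([fitness(m, X, y) for m in pop])
    return pop[scores.argmax()]                      # best feature mask

# Usage (placeholder data): mask = ga_select(X, y); X_sel = X[:, mask]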
D) Classification of HGR: Random Forest
A random forest classifier is made up of many decision trees [52]. It draws many random samples from a dataset and builds trees based on the chosen features [53]. It then aggregates the findings of the trees constructed over the data and produces a final output: once each tree has produced its outcome, the final result is chosen by majority voting, i.e., the most frequently occurring prediction after the voting stage is completed. As a result, the class most frequently recognized by the forest's trees emerges as the output. In our proposed technique, different features of hand movements are randomly selected from the dataset, and n random answers are created, as illustrated in Fig. 7.

Fig. 7. Random Forest consisting of Decision Trees
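A minimal sketch of the classification stage of Fig. 7 using scikit-learn, whose RandomForestClassifier performs the majority voting over trees internally; the data shapes, split, and hyperparameters are illustrative assumptions.

# Sketch: training a random forest on the reduced gesture features and
# classifying by majority vote over the trees (done internally by sklearn).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(900, 8))           # placeholder reduced features
y = rng.integers(0, 9, size=900)        # nine gesture classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                                random_state=0)
forest.fit(X_tr, y_tr)
print("accuracy:", forest.score(X_te, y_te))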
IV. EXPERIMENTAL RESULTS OVER HGR DATASET
In this section, we describe the experimental results, along with the dataset description, the accuracy achieved for gesture recognition, and a comparison of our approach with other hand gesture recognition systems.

A) Dataset Description
The proposed system helps users control their smart homes with nine gestures, as shown in Fig. 9. The nine gestures (0, 1, 6, right, left, move upwards, move downwards, counterclockwise circle, and clockwise circle) are used to change device states, for example mode selection, left/right, and on/off. Furthermore, we enable continuous hand gesture recognition that detects the start and end of motions automatically. Instead of getting up for daily household chores, such as switching buttons on or off or opening and closing the door, the user simply waves in the air, and the corresponding gesture is matched and performed. The benchmark dataset is taken from [54]. Our system is able to discriminate between events in which the user issues a command by making a trained gesture and untrained actions. To identify continuous hand gestures, a sliding window is employed. The data is presented visually in Fig. 8, which shows the plotting of the different classes presented in Fig. 9.

Fig. 8. Data visualization of nine gestures recorded from sensors

Fig. 9. Different gestures vocabulary for HGR

B) Experimental Evaluation of the HGR Dataset
The suggested system is evaluated across all HGR dataset gestures. Experiments are carried out in Google Colab (TPUs) using Python. Table I shows the confusion matrix of the results over the HGR dataset.

TABLE I. CONFUSION MATRIX OF RANDOM FOREST CLASSIFIER
(rows: true gesture, columns: predicted gesture)

            ZERO  CCW   CW    MOVE  MOVE  MOVE  MOVE  ONE   SIX
                  CIR.  CIR.  DOWN  LEFT  RIGHT UP
ZERO        0.50  0.26  0.04  0.01  0.01  0.02  0.02  0.01  0.13
CCW CIRCLE  0.20  0.60  0.07  0.03  0.01  0.01  0.00  0.00  0.09
CW CIRCLE   0.07  0.02  0.78  0.03  0.01  0.02  0.01  0.00  0.07
MOVE DOWN   0.00  0.00  0.01  0.99  0.00  0.00  0.00  0.00  0.00
MOVE LEFT   0.00  0.00  0.00  0.00  0.89  0.00  0.00  0.00  0.11
MOVE RIGHT  0.00  0.00  0.00  0.00  0.00  0.90  0.00  0.00  0.10
MOVE UP     0.00  0.00  0.00  0.00  0.00  0.00  0.99  0.00  0.01
ONE         0.05  0.07  0.12  0.00  0.00  0.02  0.01  0.31  0.43
SIX         0.00  0.01  0.00  0.00  0.00  0.00  0.00  0.00  0.99

CCW = Counter Clockwise Circle, CW = Clockwise Circle
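A row-normalized confusion matrix like Table I can be computed from a fitted classifier's predictions; the sketch below continues the random forest sketch above (forest, X_te, y_te), and the class-name ordering is an assumption for display purposes.

# Sketch: row-normalized confusion matrix as in Table I.
from sklearn.metrics import confusion_matrix

gesture_names = ["ZERO", "CCW CIRCLE", "CW CIRCLE", "MOVE DOWN",
                 "MOVE LEFT", "MOVE RIGHT", "MOVE UP", "ONE", "SIX"]
y_pred = forest.predict(X_te)
cm = confusion_matrix(y_te, y_pred, normalize="true")  # each row sums to 1
for name, row in zip(gesture_names, cm.round(2)):
    print(f"{name:11s}", row)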


C) Comparison with State-of-the-art Systems
To test the system's performance, we evaluated our suggested method on the HGR dataset. We used different approaches to extract frames from the dataset and trained our models. The accuracy, precision, recall, and F1-scores of our proposed algorithms across the different classifiers on the HGR dataset are shown in Table II.
TABLE II. ACCURACY, PRECISION, RECALL AND F1-SCORE OF OUR PROPOSED CLASSIFIERS OVER HGR DATASET

Dynamic      |      Naïve Bayes       |     Decision Tree      |     Random Forest
Activities   | Prec. Recall F-measure | Prec. Recall F-measure | Prec. Recall F-measure
Zero (0)     | 0.63  0.39   0.48      | 0.64  0.45   0.53      | 0.98  0.85   0.87
One (1)      | 0.48  0.79   0.59      | 0.54  0.63   0.58      | 0.85  0.96   0.90
Six (6)      | 0.61  0.43   0.51      | 0.48  0.47   0.48      | 0.94  0.91   0.92
CCW Circle   | 0.48  0.44   0.46      | 0.41  0.44   0.42      | 0.85  0.84   0.85
CW Circle    | 0.66  0.59   0.63      | 0.62  0.70   0.66      | 0.98  0.93   0.96
Move Down    | 0.92  0.97   0.94      | 0.99  0.96   0.98      | 1.00  1.00   1.00
Move Left    | 0.87  0.99   0.93      | 0.90  0.90   0.89      | 1.00  1.00   1.00
Move Right   | 1.00  0.99   0.99      | 0.99  0.99   0.99      | 1.00  1.00   1.00
Move Up      | 1.00  0.99   1.00      | 1.00  0.99   1.00      | 1.00  1.00   1.00

CCW = Counter Clockwise Circle, CW = Clockwise Circle

In this comparison, we evaluated the accuracy of naïve Bayes [55], random forest [56], and decision tree [57] classifiers to examine the system performance [58]. The comparison is visualized in Fig. 10.

Fig. 10. Comparison chart of accuracy, precision, recall and F-score for the proposed classifiers over the HGR dataset

Fig. 11 lists the error rate for each of the corresponding classifiers. The classifiers utilized in the proposed method (random forest, decision tree, and naïve Bayes) were compared for accuracy improvement purposes.

Fig. 11. Error rate calculated for the different classifiers
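The per-class metrics of Table II and the error rates of Fig. 11 can be reproduced from predictions; this sketch continues the random forest sketch above (forest, X_te, y_te, gesture_names) under the same illustrative assumptions.

# Sketch: per-class precision/recall/F1 (as in Table II) and the error
# rate of Fig. 11, computed with scikit-learn.
from sklearn.metrics import classification_report, accuracy_score

y_pred = forest.predict(X_te)
print(classification_report(y_te, y_pred, target_names=gesture_names, digits=2))
print("error rate:", 1.0 - accuracy_score(y_te, y_pred))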
V. CONCLUSION
In this paper, we presented an approach for the recognition of hand gestures to automate appliances. The proposed HGR system targets human-computer interaction for controlling home appliances and is applicable in healthcare, homes, offices, and other educational settings. In comparison to computer vision-based recognition, wearable sensor (hand glove) based gesture recognition requires only lightweight processing, is low cost, consumes little power, does not violate users' privacy, and is unaffected by the lighting environment. The random forest results showed that our approach outperformed the other classifiers in recognition accuracy.

REFERENCES
[1] S. Perez-Gamboa, Q. Sun and Y. Zhang, "Improved Sensor Based Human Activity Recognition via Hybrid Convolutional and Recurrent Neural Networks," IEEE International Symposium on Inertial Sensors and Systems (INERTIAL), 2021.
[2] P. Meier, K. Rohrmann, M. Sandner and M. Prochaska, "Application of magnetic field sensors for hand gesture recognition with neural networks," 2019 IEEE 1st Global Conference on Life Sciences and Technologies (LifeTech), 2019.
[3] N. Dawar and N. Kehtarnavaz, "Real-Time Continuous Detection and Recognition of Subject-Specific Smart TV Gestures via Fusion of Depth and Inertial Sensing," IEEE Access, vol. 6, 2018.
[4] Y.-T. Wang and H.-P. Ma, "Real-Time Continuous Gesture Recognition with Wireless Wearable IMU Sensors," IEEE 20th International Conference on e-Health Networking, Applications and Services (Healthcom), 2018.
[5] A. Jalal, S. Kamal and D. Kim, "Shape and Motion Features Approach for Activity Tracking and Recognition from Kinect Video Camera," 2015 IEEE 29th International Conference on Advanced Information Networking and Applications Workshops, 2015, pp. 445-450, doi: 10.1109/WAINA.2015.38.
[6] A. Jalal and M. A. K. Quaid, "Wearable sensors based human behavioral pattern recognition using statistical features and reweighted genetic algorithm," Multimedia Tools and Applications 79, pp. 6061-6083, 2020, doi: 10.1007/s11042-019-08463-7.
[7] A. Jalal and S. Kamal, "A Hybrid Feature Extraction Approach for Human Detection, Tracking and Activity Recognition Using Depth Sensors," Arabian Journal for Science and Engineering 41, pp. 1043-1051, 2016, doi: 10.1007/s13369-015-1955-8.
[8] M. Javeed, A. Jalal and K. Kim, "Wearable Sensors based Exertion Recognition using Statistical Features and Random Forest for Physical Healthcare Monitoring," 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST), 2021, pp. 512-517, doi: 10.1109/IBCAST51254.2021.9393014.
[9] A. Jalal, M. A. K. Quaid and K. Kim, "A Wrist Worn Acceleration Based Human Motion Analysis and Classification for Ambient Smart Home System," Journal of Electrical Engineering & Technology 14, pp. 1733-1739, 2019, doi: 10.1007/s42835-019-00187-w.
[10] A. Jalal, M. A. K. Quaid and A. S. Hasan, "Wearable Sensor-Based Human Behavior Understanding and Recognition in Daily Life for Smart Environments," 2018 International Conference on Frontiers of Information Technology (FIT), 2018, pp. 105-110, doi: 10.1109/FIT.2018.00026.
[11] K. Feng and F. Yuan, "Static hand gesture recognition based on HOG characters and support vector machines," 2013 2nd International Symposium on Instrumentation and Measurement, Sensor Network and Automation (IMSNA), 2013, pp. 936-938.
[12] A. Jalal and S. Kim, "The Mechanism of Edge Detection using the Block Matching Criteria for the Motion Estimation," Proc. HCI, pp. 484-489, 2005.
[13] A. Jalal and S. Kim, "A complexity removal in the floating point and rate control phenomenon," Proc. KMS, pp. 48-51, 2005.
[14] A. Jalal and S. Kamal, "Real-time life logging via a depth silhouette-based human activity recognition system for smart home services," 2014 11th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2014, pp. 74-80, doi: 10.1109/AVSS.2014.6918647.
[15] A. Jalal, Y. Kim and D. Kim, "Ridge body parts features for human pose estimation and recognition from RGB-D video data," ICCNT, pp. 1-6, 2014.
[16] A. Jalal and Y. Kim, "Dense Depth Maps-based Human Pose Tracking and Recognition in Dynamic Scenes Using Ridge Data," AVSS, pp. 119-124, 2014.
[17] T. Kim, A. Jalal, H. Han, H. Jeon and J. Kim, "Real-Time Life Logging via Depth Imaging-based Human Activity Recognition towards Smart Homes Services," ISRESHB, p. 63, 2013.
[18] H. Zou, Y. Zhou, J. Yang, H. Jiang, L. Xie and C. J. Spanos, "WiFi-enabled Device-free Gesture Recognition for Smart Home Automation," 2018 IEEE 14th International Conference on Control and Automation (ICCA), 2018, pp. 476-481, doi: 10.1109/ICCA.2018.8444331.
[19] A. Yang, S. M. Chun and J. Kim, "Detection and recognition of hand gesture for wearable applications in IoMT," 20th International Conference on Advanced Communication Technology (ICACT), pp. 1046-1053, 2018.
[20] P. Mahwish, A. Jalal and K. Kim, "Hybrid algorithm for multi people counting and tracking for smart surveillance," IEEE IBCAST, 2021.
[21] A. Ahmed, A. Jalal and K. Kim, "Multi-objects detection and segmentation for scene understanding based on Texton forest and kernel sliding perceptron," JEET, 2020.
[22] A. Jalal, A. Ahmed, A. Rafique and K. Kim, "Scene semantic recognition based on modified Fuzzy c-mean and maximum entropy using object-to-object relations," IEEE Access, 2021.
[23] B. W. Min, H. S. Yoon, J. Soh, Y. M. Yang and T. Ejima, "Hand gesture recognition using hidden Markov models," IEEE International Conference on Systems, Man, and Cybernetics, vol. 5, pp. 4232-4235, 1997.
[24] V. I. Pavlovic, R. Sharma and T. S. Huang, "Visual interpretation of hand gestures for human-computer interaction: A review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, 1997.
[25] R. Bisma and A. Jalal, "Object detection and segmentation for scene understanding via multi-features and random forest," IEEE Conference on Advancements in Computational Sciences (ICACS), 2023.
[26] A. Hamid, S. A. Chelloug and A. Jalal, "3D shape estimation from RGB data using 2.5D features and deep learning," IEEE ICACS, 2023.
[27] A. Faisal, S. A. Chelloug and A. Jalal, "Multi-pedestrians anomaly detection via conditional random field and deep learning," IEEE ICACS, 2023.
[28] A. Shuja and A. Jalal, "Vehicle detection and tracking from aerial imagery via YOLO and centroid tracking," IEEE ICACS, 2023.
[29] W. Manahil, J. Madiha and A. Jalal, "A novel deep learning model for understanding two-person interactions using depth sensors," Proc. ICIC, 2021.
[30] J. Madiha and A. Jalal, "Body-worn hybrid-sensors based motion patterns detection via bag-of-features and Fuzzy logic optimization," Proc. ICIC, 2021.
[31] A. Ayesha and A. Jalal, "Automated body parts estimation and detection using salient maps and Gaussian matrix model," Proc. IBCAST, 2021.
[32] A. Shahzad and A. Jalal, "A smart surveillance system for pedestrian tracking and counting using template matching," Proc. ICRAI, 2021.
[33] T. Kim, A. Jalal, H. Han, H. Jeon and J. Kim, "Real-time life logging via depth imaging-based human activity recognition towards smart homes services," ISRESHB, 2013.
[34] J. Madiha, S. A. Chelloug and A. Jalal, "Deep activity recognition based on patterns discovery for healthcare monitoring," IEEE ICACS, 2023.
[35] T. Fatima, H. Rahman and A. Jalal, "A novel framework for human action recognition based on features fusion and decision tree," IEEE ICACS, 2023.
[36] A. Israr, S. A. Chelloug and A. Jalal, "Abnormal action recognition in crowd scenes via deep data mining and random forest," IEEE ICACS, 2023.
[37] M. Asifa, A. H. Butt and A. Jalal, "Highway traffic surveillance over UAV dataset via blob detection and histogram of gradient," IEEE ICACS, 2023.
[38] N. Amir, A. Jalal and K. Kim, "Automatic human posture estimation for sport activity recognition with robust body parts detection and entropy Markov model," MTA, 2021.
[39] I. Akhter, A. Jalal and K. Kim, "Adaptive pose estimation for gait event detection using context-aware model and hierarchical optimization," JEET, 2021.
[39] U. Azmat and A. Jalal, "Smartphone inertial sensors for human locomotion activity recognition based on template matching and codebook generation," ICT, 2021.
[40] A. Jalal and S. Kamal, "Improved behavior monitoring and classification using cues parameters extraction from camera array images," IJIMAI, 2018.
[41] A. Jalal, S. Kamal and C. Cecer, "Depth maps-based human segmentation and action recognition using full-body plus body color cues via recognizer engine," JEET, 2018.
[42] H. AlShu'eili, G. S. Gupta and S. Mukhopadhyay, "Voice recognition based wireless home automation system," 2011 4th International Conference on Mechatronics (ICOM), 2011, pp. 1-6, doi: 10.1109/ICOM.2011.5937116.
[43] Y. Zhao, W. Wang and Y. Wang, "A real-time hand gesture recognition method," International Conference on Electronics, Communications and Control (ICECC), pp. 2475-2478, 2011.
[44] F.-T. Liu, Y.-T. Wang and H.-P. Ma, "Gesture recognition with wearable 9-axis sensors," IEEE International Conference on Communications (ICC), pp. 1-6, 2017.
[45] J. Wang and F. Chuang, "An Accelerometer-Based Digital Pen With a Trajectory Recognition Algorithm for Handwritten Digit and Gesture Recognition," IEEE Transactions on Industrial Electronics, vol. 59, no. 7, pp. 2998-3007, 2012.
[46] G. E. Hinton and S. T. Roweis, "Stochastic neighbor embedding," Advances in Neural Information Processing Systems, pp. 833-840, 2002.
[47] F. H. M. Oliveira, A. R. P. Machado and A. O. Andrade, "On the Use of t-Distributed Stochastic Neighbor Embedding for Data Visualization and Classification of Individuals with Parkinson's Disease," Computational and Mathematical Methods in Medicine, vol. 2018, Article ID 8019232, 17 pages, 2018.
[48] Md. R. Ahsan, M. I. Ibrahimy and O. O. Khalifa, "Electromyography (EMG) Signal based Hand Gesture Recognition using Artificial Neural Network (ANN)," 4th International Conference on Mechatronics (ICOM), 2011.
[49] R. Islam, S. A. Khan and J. Kim, "Discriminant Feature Distribution Analysis-Based Hybrid Feature Selection for Online Bearing Fault Diagnosis in Induction Motors," Journal of Sensors, Article ID 7145715, 16 pages, 2016.
[50] M. A. K. Quaid and A. Jalal, "Wearable sensors based human behavioral pattern recognition using statistical features and reweighted genetic algorithm," Multimedia Tools and Applications, 2020.
[51] S. Rakesh, G. Kovács, H. Mokayed, R. Saini and U. Pal, "Static Palm Sign Gesture Recognition with Leap Motion and Genetic Algorithm," 2021 Swedish Artificial Intelligence Society Workshop (SAIS), 2021.
[52] S. Canavan, W. Keyes, R. Mccormick, J. Kunnumpurath, T. Hoelzel and L. Yin, "Hand gesture recognition using a skeleton-based feature representation with a random regression forest," 2017 IEEE International Conference on Image Processing (ICIP), 2017.
[53] W. Liu, Y. Fan, T. Lei and Z. Zhang, "Human gesture recognition using orientation segmentation feature on random forest," 2014 IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP), 2014.
[54] K. N. Trong, H. Bui and C. Pham, "Recognizing hand gestures for controlling home appliances with mobile sensors," 2019 11th International Conference on Knowledge and Systems Engineering (KSE), 2019.
[55] A. Jalal, M. Quaid, S. Tahir and K. Kim, "A study of accelerometer and gyroscope measurements in physical life-log activities detection systems," Sensors, 2020.
[56] A. Rafique, A. Jalal and K. Kim, "Automated sustainable multi-object segmentation and recognition via modified sampling consensus and kernel sliding perceptron," Symmetry, 2020.
[57] A. Jalal, I. Akhtar and K. Kim, "Human posture estimation and sustainable events classification via Pseudo-2D stick model and K-ary tree hashing," Sustainability, 2020.
[58] A. Jalal, M. Batool and K. Kim, "Sustainable wearable system: Human behavior modeling for life-logging activities using K-ary tree hashing classifier," Sustainability, 2020.
