
Proceedings of the Second International Conference on Edge Computing and Applications (ICECAA 2023)

IEEE Xplore Part Number: CFP23BV8-ART; ISBN: 979-8-3503-4757-9

A Natural Language Processing for Sentiment Analysis from Text using Deep Learning Algorithm
Ghamya Kotapati1, Department of CSE (AI&ML), School of Computing, Mohan Babu University, Tirupati, India, ghamyakotapati@gmail.com
Suma Kamalesh Gandhimathi2, School of Computer Science and Engineering, Vellore Institute of Technology-AP, Amaravati, India, Sumakamalesh.g@vitap.ac.in
Palthiya Anantha Rao3, Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India, Pananth534@kluniversity.in
Ganesh Karthik Muppagowni4*, Department of CSE, GITAM School of Technology, GITAM University, Bengaluru, India, mkarthikrio88@gmail.com
K Ragha Bindu5, Department of CSE, Sree Vidyanikethan Engineering College, Tirupati, India, raghabindu.katta@gmail.com
M Sharath Chandra Reddy6, Department of CSE, Sree Vidyanikethan Engineering College, Tirupati, India, sharat.mudigeti@gmail.com

2023 2nd International Conference on Edge Computing and Applications (ICECAA), DOI: 10.1109/ICECAA58104.2023.10212127

ABSTRACT

Sentiment analysis is a major application of natural language processing (NLP). NLP has a wide range of applications, such as machine translation, aspect-oriented product analysis, product reviews, text classification, and sentiment analysis for spam filtering and email categorization. Emotion detection systems widely use either lexicons, which can be defined as lists of words that convey emotions, or complex machine learning algorithms. In this implementation, BERT (Bidirectional Encoder Representations from Transformers) is used as a deep learning-based unsupervised method for natural language processing that enables computers to understand text representations in context when performing tasks like question answering, language inference, and text summarization. The suggested method divides the text into various emotional states, such as neutral, sadness, fear, joy, anger, etc.

Keywords: Deep Learning, Natural Language Processing, Emotion Recognition, BERT (Bidirectional Encoder Representations from Transformers).

1. INTRODUCTION

The practice of analysing text data to determine the sentiments or emotions conveyed in it is known as sentiment analysis. Today, people communicate their ideas electronically on a variety of subjects, such as online book or film reviews. Interpreting people's intentions consequently requires weighing various points of view. In general, sentiment refers to two distinct thought types, positive or negative, across many platforms where the weight of popular opinion is relevant. The core of sentiment categorization is sentiment extraction, and extensive research has been done in this area. For instance, in response to customer feedback, online retailers and food suppliers continually improve their services; the most widely used ride-sharing service in Bangladesh, Pathao, uses customer feedback to enhance its offerings. The challenge here, though, is manually navigating through the feedback, which takes far too much time and effort.

One of the existing methods is sentiment classification from online customer reviews using contextual sentence structure, a domain-independent, rule-based method for semantically classifying sentiment in customer reviews. The disadvantage of this system is that it is neither a learning-based approach nor a lexicon-based approach. The method used a small number of datasets, and its accuracy at the sentence level is lower than its accuracy at the review level. Its accuracy is better than the average results of other methods, but there is no comparison with other lexicon-based methods or with learning-based methods. It is also time consuming and has high complexity.

The Bag-of-Words (BoW) approach, which has been used for a while, is simple: it depends on predetermined sentiment lexicons or dictionaries to assess the sentiment of a text based on the frequency or presence of particular words or phrases linked to positive or negative sentiment. N-gram models take n-word sequences into account in order to capture the context of sentiment, and they can be combined with machine learning methods such as Naive Bayes, Support Vector Machines (SVM), or Random Forests to categorize sentiment based on the presence of particular n-grams. The disadvantages of these approaches are limited contextual understanding, data sparsity, lack of semantic understanding, a fixed window size, lack of adaptability, and order sensitivity.
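To make this baseline concrete, the following is a minimal sketch of an n-gram model paired with one of the classifiers mentioned above. It uses scikit-learn, a library choice of ours rather than the paper's, and a two-document toy corpus that is purely illustrative.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical toy corpus: 1 = positive, 0 = negative
texts = ["great product, loved it", "terrible service, never again"]
labels = [1, 0]

# Unigram and bigram counts as features, Naive Bayes as the classifier
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["loved the product"]))  # predicted sentiment label

Because the features are fixed n-gram counts, such a model cannot capture the contextual effects described next, which motivates the move to BERT.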
By classifying the sentiment polarity linked to a person's perspective using BERT, we can solve this problem. BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained language model that has proven to be very successful at natural language processing tasks such as sentiment analysis.


In order to consider the context of each and every word in a sentence, BERT uses a transformer-based architecture. This is crucial for sentiment analysis because the surrounding words can affect how a word is perceived. The first step in using BERT for sentiment analysis is to fine-tune the pre-trained model on a set of labelled text data. This entails teaching the model to anticipate the tone of a sentence given its surrounding context.

By taking into account the other words in a sentence, BERT is able to capture contextual word representations. This enables BERT to grasp subtleties and resolve potential ambiguities by understanding words in the context of the entire phrase. Such contextual information is not captured by conventional models like Bag-of-Words or lexicon-based techniques. An N-gram model relies on the occurrence of sequences of words rather than on the relationships among the words; BERT, by contrast, relies on the relationships among the words and also has bidirectional attention.

Figure 1.1 Block Diagram of Proposed Method

A probability score that indicates the likelihood that a given sentence expresses a specific sentiment, such as positive, negative, or neutral, is the usual output of BERT for sentiment analysis. Text data can be categorized into various sentiment classes using this score. This involves analysing a sizable sentiment dataset that contains reviews from various online and social networking platforms, including those for products, services, entertainment, and politics. First, we fine-tuned BERT; then we used the aggregating layer as the text embedding; and finally, we created a highly integrated model called CNN-BiLSTM for decision-making. Due to its high accuracy on natural language processing tasks and its capacity to comprehend the context of each word in a sentence, BERT is, overall, an effective tool for sentiment analysis of text data.

2. LITERATURE SURVEY

The studies below show how effective BERT is for sentiment analysis while also highlighting ongoing work to enhance and broaden this methodology.

Hou.Y et al. [1] proposed sentiment analysis of classical Chinese poetry in 2015 in "Analyzing sentiment in classical Chinese poetry". A corpus of traditional Chinese poetry is examined using natural language processing techniques in order to extract sentiment-related components. The poems are then divided into several sentiment categories using machine learning models that the authors train and test.

Ghosh.A et al. [2] proposed "SemEval-2015 task 11: Sentiment analysis of figurative language in Twitter" in 2015. Using a series of heuristics, this work first finds tweets that use figurative language and then applies machine learning techniques to identify the sentiment communicated in the tweet. They experimented with lexical, syntactic, and sentiment-specific feature sets, among others. This study contributes to the development of sentiment analysis techniques for social media content, where figurative language is frequently used.

Sreeja.P.S et al. [3] proposed "Emotion recognition from poems by maximum posterior probability" in 2016. The suggested method for emotion identification from poems using the maximum posterior probability methodology yields encouraging results, and it has potential for practical applications like sentiment analysis of poetry collections or recommendation systems for readers based on their emotional preferences.

Kaur.J et al. [4] proposed "Punjabi poetry classification: The test of 10 machine learning algorithms" in 2017. Ten machine learning algorithms are tested on how well they categorize Punjabi poems into six groups based on author and genre: Decision Tree (DT), Random Forest (RF), K-Nearest Neighbour (KNN), Naive Bayes (NB), Logistic Regression (LR), Support Vector Machine (SVM), Gradient Boosting Machine (GBM), XGBoost (XGB), Multilayer Perceptron (MLP), and Convolutional Neural Network (CNN). The study stresses the significance of feature extraction techniques in obtaining high classification accuracy and offers insights into the efficacy of several machine learning algorithms for categorising Punjabi poetry. The findings can be used to guide the creation of automated systems for the study and preservation of Punjabi poetry.


Mohanty.G et al. [5] proposed a study in 2018 to compile a corpus of Odia poetry with sentiment polarity information, "Sad or glad? Corpus creation for Odia poetry with sentiment polarity information", specifically indicating whether a poem expresses happiness or sadness. The corpus is produced by gathering a wide range of Odia poems from various sources and manually annotating them for sentiment polarity. The study assesses how well various machine learning algorithms predict the sentiment polarity of the poems, as well as the inter-annotator agreement among the annotators. Researchers interested in examining the sentiment and emotions expressed in Odia poetry may find the corpus useful for sentiment analysis tasks as well.

Devlin.J et al. [6] proposed "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", in Human Language Technologies: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2019). The BERT model was introduced in this paper, which also demonstrated how it performed better than earlier state-of-the-art models on a number of NLP tasks, including sentiment analysis.

Xiong.C et al. [7] proposed "Using BERT to Create an Auxiliary Sentence for Aspect-Based Sentiment Analysis", at the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. This study suggests a novel approach for aspect-based sentiment analysis using BERT and auxiliary sentence construction.

Carbonell.J.G et al. [8] proposed "XLNet: Generalized Autoregressive Pretraining for Language Understanding", in Advances in Neural Information Processing Systems (NeurIPS 2019). The XLNet model, which this paper introduced, is similar to BERT but has some advantages over it due to the permutation-based training method it employs. The researchers demonstrated that XLNet produced state-of-the-art results on a number of benchmark datasets, including sentiment analysis.

Chen.Y et al. [9] proposed "Classification of Sentences Using BERT Fine-Tuning", in Proceedings of the 2020 6th International Conference on Control, Automation and Robotics (ICCAR 2020). This study offered a thorough examination of various BERT fine-tuning strategies for sentiment analysis and suggested a brand-new strategy based on data augmentation.

LeCun.Y et al. [10] proposed "Convolutional networks at the character level for text classification", in the Proceedings of the 28th International Conference on Machine Learning (ICML 2020). The new sentiment analysis model proposed in this paper was based on character-level convolutional neural networks (CNNs) and outperformed BERT on several benchmark datasets.

Rahat.A.M et al. [11] developed an approach with which people can easily detect opinions using sentiment analysis. This study offered a thorough examination of various techniques for sentiment analysis and compared the performance of machine learning algorithms on several datasets.

Raza.M.R et al. [12] proposed a comparative study of deep learning approaches and the importance of implementing sentiment analysis for text in cloud computing, comparing them with existing approaches.

Mahmud.M.S et al. [13] proposed and evaluated a model for sentiment analysis that detects opinions in text reviews using the DistilBERT algorithm; it outperformed the other approaches compared, achieving the highest accuracy of 98%.

Table 2.1: Overview of Survey

S.No | Authors | Year of publication | Technique
1 | Hou.Y and Frank.A | 2015 | Natural language processing
2 | Ghosh.A, Li.G, Veale.T, Rosso.P, Shutova.E, Barnden.J, and Reyes.A | 2015 | Machine learning
3 | Sreeja.P.S and Mahalakshmi.G.S | 2016 | Maximum posterior probability
4 | Kaur.J and Saini.J.R | 2017 | Decision Tree (DT), Random Forest (RF), K-Nearest Neighbour (KNN), Naive Bayes (NB), Logistic Regression (LR), Support Vector Machine (SVM), Gradient Boosting Machine (GBM), XGBoost (XGB), Multilayer Perceptron (MLP), and Convolutional Neural Network (CNN)
5 | Mohanty.G and Mishra.P | 2018 | Creating a corpus of Odia poetry with sentiment polarity information


6 | Devlin.J, Toutanova.K, and Chang.M.W | 2018 | Pre-training a deep bidirectional transformer model
7 | Carbonell.J.G, Salakhutdinov.R, Yang.Z, Dai.Z, Yang.Y, and Le.Q.V | 2019 | XLNet
8 | LeCun.Y, Zhao.L, and Zhang.X | 2020 | Convolutional Neural Network

3. METHODOLOGY

Let's take a closer look at the unique tokens that provide a distinct meaning to the encoder.

CLS:

The token stands for classification. It is included to signify sentence-level classification and is typically added at the beginning of a sequence so that it can represent the entire sentence.

SEP:

The SEP token is utilized when predicting the following statement. It acts as a delimiter token that informs the model where the next sentence begins, and it is added at the end of each sentence.

MASK:

During the MLM (masked language modelling) phase, this token is utilized. It serves as the model's representation of the masked word.
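As a small illustration of these special tokens, the following sketch uses the Hugging Face transformers library and the bert-base-uncased checkpoint, neither of which the paper names; the tokenizer inserts [CLS] and [SEP] automatically when encoding a sentence pair.

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Encoding a sentence pair: [CLS] is prepended, [SEP] closes each sentence.
enc = tokenizer("The food was great.", "I would order again.")
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
# ['[CLS]', 'the', 'food', 'was', 'great', '.', '[SEP]',
#  'i', 'would', 'order', 'again', '.', '[SEP]']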
Token embeddings:

The token embeddings procedure involves pre-training of different words; these pre-trained embeddings are created using WordPiece tokenization, a segmentation algorithm. This algorithm merges sets of characters to build the vocabulary used by the language model. New words can be created from different combinations of the initialized characters, which increases the accuracy of the training data. Each word-fragment token is transformed into a 768-dimensional vector representation by the token embeddings layer. Using the basic elements built into the language model, the number of word units can be bounded, or a certain threshold can be used.
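A brief sketch of WordPiece segmentation and the 768-dimensional token representations described above, again assuming the transformers library and the bert-base-uncased checkpoint:

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# WordPiece splits out-of-vocabulary words into known sub-word units;
# "##" marks a piece that continues the previous one.
print(tokenizer.tokenize("embeddings"))  # e.g. ['em', '##bed', '##ding', '##s']

# Each token of an encoded sentence maps to a 768-dimensional vector.
enc = tokenizer("Here is a sentence.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state
print(hidden.shape)  # torch.Size([1, n, 768]), one vector per token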
Segment embeddings:

Following token embeddings, by inserting a fixed token into sentences, a process known as segment embedding, we can determine whether a given token belongs to a given sentence. This layer encodes a sentence into a vector and has two vector representations. We assign these vectors to the words based on the sentence number: for example, if a word belongs to sentence 1, a vector with index 0 is assigned, and the words in sentence 2 are assigned a vector with index 1.

Position embeddings:

Position embeddings are vectors that show where a word's position is in a sentence. Despite being really significant and useful embeddings, token embeddings do not provide information regarding a token's placement within a sentence.

Figure 3.1 BERT Model

Therefore, a different embedding, known as the position embedding, is applied to solve this. The absolute position embedding is used to explain how a token at one position attends to another token at a different position. Input sequences of up to 512 tokens can be processed by BERT. The authors used BERT to train a vector representation for each position, taking advantage of the input sequences' sequential nature. The position embeddings layer is basically a lookup table of shape (512, 768): the first row is the vector representation of any word in the first position, the second row that of any word in the second position, and so on. For BERT to support temporal ordering, segment and position embeddings are required. The position, segment, and token embeddings together are called the input embeddings. The elements from all three of the aforementioned embeddings are added together to create a single representation of shape (1, n, 768). This is the input representation used by BERT's encoder layer, and these embeddings are one of the most important factors contributing to the model's speed and exceptional performance.
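To make the (1, n, 768) input representation concrete, here is a sketch that sums the three embedding lookups of a pre-trained checkpoint by hand. It reaches into the model's embedding tables purely for illustration; the real forward pass also applies layer normalization and dropout after the sum.

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
emb = BertModel.from_pretrained("bert-base-uncased").embeddings

enc = tokenizer("BERT sums three embeddings.", return_tensors="pt")
ids = enc["input_ids"]                              # shape (1, n)
positions = torch.arange(ids.size(1)).unsqueeze(0)  # 0, 1, ..., n-1
segments = torch.zeros_like(ids)                    # all tokens in sentence 1 (index 0)

token_e = emb.word_embeddings(ids)                  # (1, n, 768)
segment_e = emb.token_type_embeddings(segments)     # (1, n, 768)
position_e = emb.position_embeddings(positions)     # rows of a (512, 768) lookup table
input_e = token_e + segment_e + position_e          # the input representation, (1, n, 768)
print(input_e.shape)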


Hidden State:

As BERT's output, the tokens in the input sequence are represented by hidden state vectors of a predetermined hidden size. These hidden states from the final layer of BERT are then utilized for various NLP tasks. A transformer is constructed from numerous related layers that are stacked on top of one another. Every layer has inputs and outputs; hence, the input of layer n is the output of layer n-1.

Classification Layer:

A classification layer calculates the cross-entropy loss for weighted and unweighted classification problems with mutually exclusive classes, deducing the number of classes from the output size of the preceding layer. However, a number of variables, such as the quantity and quality of the training data, the hyperparameters, and the kind of text data being analyzed, can affect how well the BERT model performs sentiment analysis. Additionally, BERT's utility in low-resource environments is constrained by the high computational cost of fine-tuning it. In conclusion, BERT-based sentiment analysis of text has demonstrated significant promise for enhancing the precision and effectiveness of sentiment analysis tasks, and it is likely to become more effective and more widely applicable across topics and languages with further research and development. Overall, the pre-trained model can be fine-tuned on a labelled dataset of positive and negative samples to conduct sentiment analysis on text data using the BERT technique. The resulting model can then be used to categorize fresh text samples as positive or negative in accordance with their emotion.
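As an illustration of how such a classification layer sits on top of BERT's hidden states, here is a minimal sketch in PyTorch using the transformers library; the checkpoint name and the two-class setup are assumptions for illustration rather than the paper's exact configuration.

import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertSentimentClassifier(nn.Module):
    """BERT encoder with a linear head over the [CLS] hidden state (sketch)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Maps the 768-dimensional hidden state to one logit per class.
        self.head = nn.Linear(self.bert.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls_state = out.last_hidden_state[:, 0]  # final-layer state of [CLS]
        return self.head(cls_state)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertSentimentClassifier()
enc = tokenizer("The service was excellent!", return_tensors="pt")
logits = model(enc["input_ids"], enc["attention_mask"])
# Cross-entropy loss against a hypothetical positive label.
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1]))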
BERT Algorithm:

The BERT algorithm mainly includes five steps, as follows:

i) Text pre-processing: Pre-processing the text data is the initial stage. Stop words, punctuation, and other superfluous characters must be eliminated.

ii) Tokenization: Tokenization is the act of dividing the text into separate words, phrases, or subwords; a numerical value is then allocated to each individual token. WordPiece tokenization, used by BERT, divides words into subwords according to their frequency.

iii) Text encoding: Positional encoding is the method BERT employs to encode the input text. This entails assigning each word or subword a numerical value that represents where it falls in the input sequence.

iv) Fine-tuning the BERT model: A labelled dataset of positive and negative text samples is used to fine-tune the pre-trained BERT model for the sentiment analysis task. The BERT model's output layer is altered to generate an output for binary classification (positive or negative sentiment).

v) Testing the model: The model must be tested on a fresh batch of text samples in order to see how well it performs. The model's correctness is assessed using metrics such as recall, precision, and F1 score.

Step-by-step procedure to implement the sentiment analysis (a condensed sketch follows the list):

Step 1: Import the required libraries; import the emotions and text dataset.
Step 2: Pre-process the data to remove missing entries.
Step 3: Assign the train dataset and the test dataset to variables.
Step 4: Encode the input data.
Step 5: Build the classifier model for the mentioned deep learning algorithm based on the training set.
Step 6: Test the classifier model for the mentioned deep learning algorithm based on the test set.
Step 7: Perform a comparative evaluation of the experimental performance results obtained for each classifier.
Step 8: The model predicts the emotions in the text given by a person.
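The following condensed sketch walks through these steps, assuming the Hugging Face transformers library, which the paper does not name; the toy texts, labels, and hyperparameters are placeholders.

import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Steps 1-3: libraries and a placeholder emotion/text training split.
train_texts = ["I love this!", "This is terrible."]  # hypothetical examples
train_labels = torch.tensor([1, 0])                  # 1 = positive, 0 = negative

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Step 4: encode the input data.
enc = tokenizer(train_texts, padding=True, truncation=True,
                max_length=512, return_tensors="pt")

# Step 5: fine-tune the classifier on the training set.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    optimizer.zero_grad()
    out = model(**enc, labels=train_labels)
    out.loss.backward()
    optimizer.step()

# Steps 6-8: predict the emotion of new text.
model.eval()
test = tokenizer(["What a wonderful day"], return_tensors="pt")
with torch.no_grad():
    pred = model(**test).logits.argmax(dim=-1)
print(pred)  # predicted class index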
4. RESULTS

BERT models are excellent at identifying context and interpreting sentiment within the context of the entire input text. This means that, while determining sentiment, the algorithm takes into account both the links between words and sentences and the whole context. To ensure correct comprehension of the sentiment analysis results, it is crucial to take the context of the input text into account.

The model is honed for a particular job, like sentiment analysis, after being trained on a large amount of text, and it is then evaluated on a validation set to determine its performance. The BERT model is trained and validated on a dataset of text labelled with emotion. After training and validation on this dataset, an accuracy of 95% is obtained; in other words, the sentiment of 95% of the text in the validation set was accurately predicted by the model. From the graphs, it can be said that the model accuracy has been increasing and the model loss has gradually been decreasing during training.

The subjective aspect of sentiment itself affects sentiment analysis using BERT. Different people may understand and show emotion in various ways. BERT models can also record delicate emotions or mixed feelings, in addition to more complex sentiments. The inherent subjectivity and nuanced nature of sentiment analysis should be taken into account when evaluating the results.


Figure 4.1 and Figure 4.2 represent the accuracy and the loss of the model during training and testing. The accuracy of the model during initial training was very low; as the number of training epochs increased, the accuracy increased, which shows the growing confidence of the model in predicting the statement. Likewise, the loss of the model was high during the initial stage of training and decreased as the number of training epochs increased, and during validation the loss became minimal because the model was trained for a large number of epochs. The model accuracy has also been gradually increasing during testing. This shows that the model is able to correctly predict the sentiment of a given sentence as the training set grows, and the validation graph obtained by running more epochs confirms this.

Figure 4.1: Model accuracy versus epochs during training and validation.

Figure 4.2: Model loss versus epochs during training and validation.
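The metrics discussed above can be computed from the validation predictions; the following is a sketch using scikit-learn (a library choice of ours), where the label lists are placeholders rather than the paper's data.

from sklearn.metrics import accuracy_score, classification_report

y_true = [1, 0, 1, 1, 0]  # hypothetical gold labels from a validation set
y_pred = [1, 0, 1, 0, 0]  # hypothetical model predictions

print(accuracy_score(y_true, y_pred))         # fraction predicted correctly
print(classification_report(y_true, y_pred))  # per-class precision, recall, F1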
effectively generalize to a variety of NLP applications. The
number of epochs.The model accuracy has also been
refined BERT model can outperform other conventional
gradually increasing during testing. This shows the model is
machine learning models and deep learning models on a
able to predict the sentiment correctly of the given sentence as
variety of sentiment analysis datasets. Organizations
the training set increases and also by running more number of
frequently employ this strategy to identify and group concepts
epochs the graph during validation proves this.
related to a particular product, service, or idea. BERT is an
effective language model that can capture intricate verbal
representations and patterns, making it suitable for sentiment
analysis tasks.The following are some advantages of
sentiment analysis: gathering a lot of unstructured data from
several sources. Monitor the moment of customer mood and
feedback regarding a company's name, goods, and services.
Giving suggestions on how to make products, services, and
the consumer experience better.

6. REFERENCES
[1] Y. Hou and A. Frank, "Analyzing sentiment in classical Chinese poetry," in Proc. 9th SIGHUM Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH), 2015, pp. 15–24.

[2] A. Ghosh, G. Li, T. Veale, P. Rosso, E. Shutova, J. Barnden, and A. Reyes, "SemEval-2015 task 11: Sentiment analysis of figurative language in Twitter," in Proc. 9th Int. Workshop on Semantic Evaluation (SemEval), 2015, pp. 470–478.

[3] P. S. Sreeja and G. S. Mahalakshmi, "Emotion recognition from poems by maximum posterior probability," Int. J. Comput. Sci. Inf. Secur., vol. 14, pp. 36–43, 2016.

[4] J. Kaur and J. R. Saini, "Punjabi poetry classification: The test of 10 machine learning algorithms," in Proc. 9th Int. Conf. Mach. Learn. Comput. (ICMLC), 2017, pp. 1–5.

[5] G. Mohanty and P. Mishra, "Sad or glad? Corpus creation for Odia poetry with sentiment polarity information," in Proc. 19th Int. Conf. Comput. Linguistics and Intell. Text Process. (CICLing), Hanoi, Vietnam, 2018.

[6] J. Devlin, K. Toutanova, and M.-W. Chang, "BERT: Pre-training of deep bidirectional transformers for language understanding," 2018.

[7] C. Xiong, C. Sun, L. Huang, and L. Huang, "Using BERT to create an auxiliary sentence for aspect-based sentiment analysis," 2019.

[8] J. G. Carbonell, R. Salakhutdinov, Z. Yang, Z. Dai, Y. Yang, and Q. V. Le, "XLNet: Generalized autoregressive pretraining for language understanding," in Advances in Neural Information Processing Systems (NeurIPS), 2019.

[9] Y. Chen, X. Chen, Z. Chen, and J. Wang, "Classification of sentences using BERT fine-tuning," in Proc. 2020 6th Int. Conf. Control, Automation and Robotics (ICCAR), 2020.

[10] Y. LeCun, L. Zhao, and X. Zhang, "Convolutional networks at the character level for text classification," in Proc. 28th Int. Conf. Mach. Learn. (ICML), 2020.


[11] A. M. Rahat, A. Kahir, and A. K. M. Masum, "Comparison of Naive Bayes and SVM algorithm based on sentiment analysis using review dataset," in Proc. 2019 8th Int. Conf. System Modeling and Advancement in Research Trends (SMART), 2019, pp. 266–270.

[12] M. R. Raza, W. Hussain, E. Tanyıldızı, and A. Varol, "Sentiment analysis using deep learning in cloud," in Proc. 2021 9th Int. Symp. Digital Forensics and Security (ISDFS), 2021, pp. 1–5.

[13] M. S. Mahmud, A. J. Bonny, U. Saha, M. Jahan, Z. F. Tuna, and A. Al Marouf, "Sentiment analysis from user-generated reviews of ride-sharing mobile applications," in Proc. 2022 6th Int. Conf. Computing Methodologies and Communication (ICCMC), 2022, pp. 738–744.
