Towards the Importance of the Type of Deep Neural Network and Employment of Pre-trained Word Vectors for Toxicity Detection: An Experimental Study
DOI: https://doi.org/10.13052/jwe1540-9589.2082

Keywords: word embedding, word vector, deep neural network, convolutional neural network, recurrent neural network, toxic comment detection

Abstract
As a natural consequence of the many advantages they offer their users, social media platforms have become part of daily life. Recent studies emphasize the need for an automated way of detecting offensive posts on social media, since such ‘toxic’ posts have become pervasive. To this end, a novel toxic post detection approach based on deep neural networks was proposed in this study. Given that several word embedding methods exist, we shed light on which word embedding method produces better results when employed with the five most common types of deep neural networks, including a combined architecture. To this end, the word vectors for the given comments were obtained through four different methods: three sets of pre-trained word vectors, and the embedding layer of the deep neural networks themselves. In total, twenty benchmark models were proposed, then trained and evaluated on a gold standard dataset of tweets. According to the experimental results, the best score was obtained by the proposed model that did not employ pre-trained word vectors; it outperformed state-of-the-art works, which implies the effective embedding ability of these networks. Another key finding of the conducted experiments is that the models that constructed word embeddings through their embedding layers obtained higher scores and converged much faster than the models that utilized pre-trained word vectors.
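The contrast at the heart of the abstract — a task-specific embedding layer whose rows are updated during training, versus frozen pre-trained word vectors used for lookup only — can be illustrated with a minimal numpy sketch. This is purely an illustration, not the authors' implementation: the vocabulary size, vector dimension, token ids, and the toy gradient step are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 10, 4

# (a) Trainable embedding layer: a randomly initialised lookup table
# whose rows are adjusted by gradient descent during training.
E_trainable = rng.normal(0.0, 0.05, size=(vocab_size, dim))

# (b) Stand-in for pre-trained word vectors (e.g. word2vec/GloVe-style):
# loaded once and, in the frozen setting, never updated.
E_pretrained = rng.normal(0.0, 0.05, size=(vocab_size, dim))

token_ids = np.array([1, 3, 3, 7])   # a toy tokenised comment
x = E_trainable[token_ids]           # embedding lookup, shape (4, dim)

# One illustrative SGD step: only the rows that were looked up receive a
# gradient, which is what makes a learned embedding layer task-specific.
E_before = E_trainable.copy()        # kept only to inspect the update
grad = np.ones_like(x)               # pretend upstream gradient
lr = 0.1
np.add.at(E_trainable, token_ids, -lr * grad)  # handles repeated ids

# The frozen pre-trained table is used for lookup only.
x_frozen = E_pretrained[token_ids]
```

Note the use of `np.add.at` rather than plain fancy-index assignment: token 3 appears twice in the toy comment, and `np.add.at` accumulates both contributions to that row, mirroring how gradients for repeated tokens are summed in a real embedding layer.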
References
S. Kemp, “Digital 2020,” We Are Social & Hootsuite, 2020. https://wearesocial.com/digital-2020 (accessed Jan. 29, 2021).
E. Whittaker and R. M. Kowalski, “Cyberbullying Via Social Media,” J. Sch. Violence, vol. 14, no. 1, pp. 11–29, 2015, doi: 10.1080/15388220.2014.949377.
S. Hinduja and J. W. Patchin, “Bullying, Cyberbullying, and Suicide,” Arch. Suicide Res., vol. 14, no. 3, pp. 206–221, 2010, doi: 10.1080/13811118.2010.494133.
K. Khieu and N. Narwal, “CS224N: Detecting and Classifying Toxic Comments.”
J. Risch and R. Krestel, “Toxic Comment Detection in Online Discussions,” in Deep Learning-Based Approaches for Sentiment Analysis, Springer, 2020, pp. 1–27.
S. Agrawal and A. Awekar, “Deep Learning for Detecting Cyberbullying Across Multiple Social Media Platforms,” in Proceedings of the Advances in Information Retrieval – 40th European Conference on IR Research (ECIR 2018), 2018, pp. 141–153, doi: 10.1007/978-3-319-76941-7_11.
N. Nandakumar, B. Salehi, and T. Baldwin, “A Comparative Study of Embedding Models in Predicting the Compositionality of Multiword Expressions,” in Proceedings of the Australasian Language Technology Association Workshop 2018 (ALTA 2018), 2018, pp. 71–76.
S. Liu, N. Yang, M. Li, and M. Zhou, “A Recursive Recurrent Neural Network for Statistical Machine Translation,” in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014), 2014, pp. 1491–1500, doi: 10.3115/v1/p14-1140.
D. Tang, F. Wei, N. Yang, M. Zhou, T. Liu, and B. Qin, “Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification,” in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014), 2014, pp. 1555–1565, doi: 10.3115/v1/p14-1146.
H. Li, X. Li, D. Caragea, and C. Caragea, “Comparison of Word Embeddings and Sentence Encodings as Generalized Representations for Crisis Tweet Classification Tasks,” in Proceedings of the ISCRAM Asian Pacific 2018 Conference, 2018, pp. 1–13.
T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean, “Distributed Representations of Words and Phrases and their Compositionality,” in Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013, 2013, pp. 3111–3119.
T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient Estimation of Word Representations in Vector Space,” in Proceedings of the International Conference on Learning Representations (ICLR 2013), 2013, pp. 1–12.
T. Mikolov, E. Grave, P. Bojanowski, C. Puhrsch, and A. Joulin, “Advances in Pre-Training Distributed Word Representations,” in Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC 2018), 2018, pp. 52–55.
J. Pennington, R. Socher, and C. D. Manning, “GloVe: Global Vectors for Word Representation,” in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), 2014, pp. 1532–1543.
T. Davidson, D. Warmsley, M. Macy, and I. Weber, “Automated Hate Speech Detection and the Problem of Offensive Language,” in Proceedings of the 11th International Conference on Web and Social Media (ICWSM 2017), 2017, pp. 1–4.
Z. Zhang, D. Robinson, and J. Tepper, “Detecting Hate Speech on Twitter Using a Convolution-GRU Based Deep Neural Network,” in The Semantic Web – 15th International Conference (ESWC 2018), 2018, pp. 1–15, doi: 10.1007/978-3-319-93417-4_48.
P. Badjatiya, S. Gupta, M. Gupta, and V. Varma, “Deep Learning for Hate Speech Detection in Tweets,” in Proceedings of the 26th International World Wide Web Conference 2017 (WWW ’17 Companion), 2017, pp. 759–760, doi: 10.1145/3041021.3054223.
Z. Waseem and D. Hovy, “Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter,” in Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2016), 2016, pp. 88–93, doi: 10.18653/v1/n16-2013.
A. Arango, J. Pérez, and B. Poblete, “Hate Speech Detection is Not as Easy as You May Think: A Closer Look at Model Validation,” in Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’19), 2019, pp. 45–54, doi: 10.1016/j.is.2020.101584.
M. Zampieri, S. Malmasi, P. Nakov, S. Rosenthal, N. Farra, and R. Kumar, “Predicting the Type and Target of Offensive Posts in Social Media,” in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019, pp. 1415–1420, doi: 10.18653/v1/n19-1144.
J. H. Park and P. Fung, “One-step and Two-step Classification for Abusive Language Detection on Twitter,” in Proceedings of the 1st Workshop on Abusive Language Online (ALW1), held at the annual meeting of the Association for Computational Linguistics (ACL) 2017, 2017, pp. 41–45, doi: 10.18653/v1/w17-3006.
C. Van Hee et al., “Automatic detection of cyberbullying in social media text,” PLoS One, pp. 1–22, 2018, doi: 10.17605/OSF.IO/RGQW8.
N. Chakrabarty, “A Machine Learning Approach to Comment Toxicity Classification,” in Proceedings of the 1st International Conference on Computational Intelligence in Pattern Recognition (CIPR 2019), 2019, pp. 1–10.
K. Reynolds, A. Kontostathis, and L. Edwards, “Using machine learning to detect cyberbullying,” in Proceedings of the 2011 10th International Conference on Machine Learning and Applications and Workshops (ICMLA 2011), 2011, pp. 241–244, doi: 10.1109/ICMLA.2011.152.
R. Rehurek and P. Sojka, “Software Framework for Topic Modelling with Large Corpora,” in Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, 2010, pp. 45–50.
Z. Bairong, W. Wenbo, L. Zhiyu, Z. Chonghui, and T. Shinozaki, “Comparative Analysis of Word Embedding Methods for DSTC6 End-to-End Conversation Modeling Track,” 2017.
S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, 1997, doi: 10.1162/neco.1997.9.8.1735.
F. Chollet, “Keras: the Python deep learning API,” 2021. https://keras.io (accessed Jan. 29, 2021).
M. Abadi et al., “TensorFlow: A System for Large-Scale Machine Learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 2016), 2016, pp. 265–283.
F. Chollet, Deep Learning with Python. Manning Publications, 2017.
T. Zhang, W. Zheng, Z. Cui, Y. Zong, J. Yan, and K. Yan, “A Deep Neural Network-Driven Feature Learning Method for Multi-view Facial Expression Recognition,” IEEE Trans. Multimed., vol. 18, no. 12, pp. 2528–2536, 2016, doi: 10.1109/TMM.2016.2598092.
A. Mollahosseini, D. Chan, and M. H. Mahoor, “Going deeper in facial expression recognition using deep neural networks,” in 2016 IEEE Winter Conference on Applications of Computer Vision (WACV 2016), 2016, pp. 1–10, doi: 10.1109/WACV.2016.7477450.
Y. Liu, J. A. Starzyk, and Z. Zhu, “Optimized Approximation Algorithm in Neural Networks Without Overfitting,” IEEE Trans. Neural Networks, vol. 19, no. 6, pp. 983–995, 2008, doi: 10.1109/TNN.2007.915114.
V. Nair and G. E. Hinton, “Rectified Linear Units Improve Restricted Boltzmann Machines,” in Proceedings of the 27th International Conference on Machine Learning (ICML 2010), 2010, pp. 807–814.
J. Xiong, K. Zhang, and H. Zhang, “A Vibrating Mechanism to Prevent Neural Networks from Overfitting,” in Proceedings of the 2019 15th International Wireless Communications and Mobile Computing Conference (IWCMC 2019), 2019, pp. 1737–1742, doi: 10.1109/IWCMC.2019.8766500.
H. Wang, Y. Zhang, and X. Yu, “An Overview of Image Caption Generation Methods,” Comput. Intell. Neurosci., vol. 2020, pp. 1–13, 2020, doi: 10.1155/2020/3062706.
A. Graves and J. Schmidhuber, “Framewise phoneme classification with bidirectional LSTM and other neural network architectures,” Neural Networks, vol. 18, no. 2005 Special Issue, pp. 602–610, 2005, doi: 10.1016/j.neunet.2005.06.042.
W. Wang and B. Chang, “Graph-based Dependency Parsing with Bidirectional LSTM,” in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), 2016, pp. 2306–2315, doi: 10.18653/v1/p16-1218.
D. P. Kingma and J. L. Ba, “Adam: A Method for Stochastic Optimization,” in Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), 2015, pp. 1–15.
H. Robbins and S. Monro, “A Stochastic Approximation Method,” Ann. Math. Stat., vol. 22, no. 3, pp. 400–407, 1951, doi: 10.1214/aoms/1177729586.
B. Wu, Z. Liu, Z. Yuan, G. Sun, and C. Wu, “Reducing Overfitting in Deep Convolutional Neural Networks Using Redundancy Regularizer,” in 26th International Conference on Artificial Neural Networks (ICANN 2017), 2017, pp. 49–55, doi: 10.1007/978-3-319-68612-7_6.
X. Ying, “An Overview of Overfitting and its Solutions,” in Proceedings of the International Conference on Computer Information Science and Application Technology (CISAT 2018), 2018, pp. 1–6, doi: 10.1088/1742-6596/1168/2/022022.
R. F. Liao, H. Wen, J. Wu, H. Song, F. Pan, and L. Dong, “The Rayleigh Fading Channel Prediction via Deep Learning,” Wirel. Commun. Mob. Comput., vol. 2018, pp. 1–11, 2018, doi: 10.1155/2018/6497340.
“Tweepy,” 2021. https://www.tweepy.org (accessed Jan. 29, 2021).
J. Camacho-Collados and M. T. Pilehvar, “On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis,” in Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 2018, pp. 40–46, doi: 10.18653/v1/w18-5406.
S. Bird, E. Klein, and E. Loper, Natural Language Processing With Python, 1st ed. O’Reilly Media, 2009.
M. Pumperla, “Hyperas by maxpumperla,” 2021. http://maxpumperla.com/hyperas/ (accessed Jan. 29, 2021).
“Colaboratory,” Google, 2021. https://colab.research.google.com (accessed Jan. 29, 2021).
Z. Zhang and L. Luo, “Hate Speech Detection: A Solved Problem? The Challenging Case of Long Tail on Twitter,” Semant. Web, vol. 1, no. 0, pp. 1–21, 2018.