Toxic Message Classification Using Convolution and GRU Network

D. Ramana Kumar, P. Manideep Pasulad, Afreen Shaik, Nimish Reddy

Abstract


Nowadays, derogatory comments are commonplace, not only in offline settings but also, to a great extent, on social networking websites and in online communities. There is therefore a need for an identification model that reads a piece of text from any platform, classifies it, and detects the type of toxicity, such as obscenity, threats, insults, and identity-based hatred. The existing system is based on the LSTM variant of the RNN model; Long Short-Term Memory (LSTM) networks are a type of recurrent neural network capable of learning order dependence in sequence prediction problems. To improve the efficiency and accuracy of the existing system, we propose Concurrent GRU, an ensemble of CNN and GRU models. The motivation for our project is to build a model that detects toxic comments and identifies bias with respect to the mention of selected identities. Data pre-processing consists of text tokenization and normalization, which varies slightly depending on how features are prepared for the model.
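The abstract describes a "Concurrent GRU" ensemble of CNN and GRU branches trained for multi-label toxicity detection, preceded by tokenization and normalization. The paper itself does not include code, so the sketch below is only an illustration under assumed settings: the vocabulary size, sequence length, layer widths, and six-label output are assumptions chosen for demonstration, not the authors' configuration.

```python
# Illustrative sketch (not the authors' implementation): a CNN branch and a GRU
# branch over a shared embedding, concatenated for multi-label toxic comment
# classification. All hyperparameters below are assumed for demonstration.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 200        # assumed maximum comment length in tokens
EMBED_DIM = 128      # assumed embedding dimension
NUM_LABELS = 6       # e.g. toxic, obscene, threat, insult, identity hate, severe toxic

def build_cnn_gru_model():
    inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
    x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)

    # CNN branch: captures local n-gram features.
    cnn = layers.Conv1D(64, kernel_size=3, activation="relu")(x)
    cnn = layers.GlobalMaxPooling1D()(cnn)

    # GRU branch: captures longer-range order dependence in the sequence.
    gru = layers.Bidirectional(layers.GRU(64))(x)

    # Concatenate the two branches; sigmoid outputs give one probability per label.
    merged = layers.concatenate([cnn, gru])
    merged = layers.Dense(64, activation="relu")(merged)
    outputs = layers.Dense(NUM_LABELS, activation="sigmoid")(merged)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
    return model

if __name__ == "__main__":
    # Minimal pre-processing example: lowercase normalization and tokenization.
    texts = ["You are an idiot!", "Have a nice day."]
    tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=VOCAB_SIZE, lower=True)
    tokenizer.fit_on_texts(texts)
    seqs = tf.keras.preprocessing.sequence.pad_sequences(
        tokenizer.texts_to_sequences(texts), maxlen=MAX_LEN)

    model = build_cnn_gru_model()
    model.summary()
    print(model.predict(seqs))  # per-label toxicity probabilities (untrained model)
```

Sigmoid outputs with binary cross-entropy treat each toxicity type as an independent label, which matches the multi-label framing in the abstract (a comment can be both an insult and a threat).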


Copyright (c) 2020 D. Ramana Kumar, P. Manideep Pasulad, Afreen Shaik, Nimish Reddy

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


All published articles are Open Access at https://journals.pen2print.org/index.php/ijr/


Paper submission: ijr@pen2print.org