Comparison of neutrosophic approach to various deep learning models for sentiment analysis
Published in Knowledge-Based Systems (Elsevier B.V.)
2021
Volume: 223
Abstract
Deep learning has been widely used in numerous real-world engineering applications and for classification problems. Real-world data often carries neutrality and indeterminacy, which neutrosophic theory captures explicitly. Although both are actively developing research areas, their interlinking has received little study. We propose a novel framework to implement neutrosophy in deep learning models. Instead of predicting only a single class as output, we quantify sentiments using three membership functions to characterize them better. Our proposed model consists of two blocks: feature extraction and feature classification. Having a separate feature extraction block enables us to use any model as a feature extractor. We experimented with BiLSTM using GloVe (Global Vectors for Word Representation), BERT (Bidirectional Encoder Representations from Transformers), ALBERT (A Lite BERT), RoBERTa (Robustly optimized BERT approach), MPNet, and stacked ensemble models. The feature classification block performs dimensionality reduction of the features and prediction. Experimental analysis was done on the SemEval-2017 Task 4 dataset (Subtask A). We used intermediate-layer features to define the membership functions of Single Valued Neutrosophic Sets (SVNS), and we used these membership functions for prediction as well. We compared our models with the top five teams of the task and with recent state-of-the-art systems. Our proposed stacked ensemble model achieved the best recall score (0.733). © 2021 Elsevier B.V.
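The abstract states that intermediate-layer features are mapped to the three SVNS membership degrees (truth, indeterminacy, falsity) and that these degrees also drive the final prediction, but it does not give the exact formulas. The sketch below is only an illustration of that idea under an assumed mapping: a softmax over three sentiment logits, with truth tied to the positive score, indeterminacy to the neutral score, and falsity to the negative score. The function names and the mapping itself are hypothetical, not the authors' definitions.

```python
# Illustrative sketch only: the paper derives SVNS memberships from
# intermediate-layer features, but the abstract does not give the exact
# formulas. The softmax-based mapping below is an assumption.
import numpy as np

def svns_membership(logits):
    """Map a 3-dimensional sentiment logit vector (positive, neutral,
    negative) to SVNS degrees T, I, F in [0, 1].

    Assumed mapping: truth follows the positive score, indeterminacy the
    neutral score, and falsity the negative score.
    """
    logits = np.asarray(logits, dtype=float)
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    probs = exp / exp.sum()
    truth, indeterminacy, falsity = probs
    return {"T": float(truth), "I": float(indeterminacy), "F": float(falsity)}

def predict_label(svns):
    """Predict by the largest membership degree:
    T -> positive, I -> neutral, F -> negative."""
    return max(svns, key=svns.get)

if __name__ == "__main__":
    # e.g. logits produced by a classification head over encoder features
    scores = svns_membership([2.1, 0.3, -1.2])
    print(scores, "->", predict_label(scores))
```

In this reading, reporting the full (T, I, F) triple rather than a single argmax label is what lets the model express how neutral or indeterminate a sentiment is, which is the motivation the abstract gives for using neutrosophic sets.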
About the journal
Journal: Knowledge-Based Systems
Publisher: Elsevier B.V.
ISSN: 0950-7051