In the contemporary world, people share their thoughts rapidly on social media. Mining and extracting knowledge from this information to perform sentiment analysis is a complex task. Even though automated machine learning algorithms and techniques are available, extracting semantic and relevant key terms from a sparse representation of a review remains difficult. Word embedding improves text classification by addressing both matrix sparsity and word semantics. In this paper, a novel architecture is proposed that combines long short-term memory (LSTM) with word embedding to capture the semantic relationships between neighboring words, and a weighted self-attention mechanism is applied to extract key terms from the reviews. Based on experimental analysis on the IMDB dataset, the authors show that the proposed word-embedding self-attention LSTM architecture achieved an F1 score of 88.67%, while the LSTM and word-embedding LSTM baselines achieved F1 scores of 84.42% and 85.69%, respectively. © 2021, IGI Global.
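The weighted self-attention described in the abstract can be illustrated with a minimal sketch: each LSTM hidden state is scored, the scores are normalized with a softmax, and the hidden states are combined into a single context vector by their attention weights. This is a generic score-and-softmax formulation, not the paper's exact model; the scoring vector `w` and all shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(hidden_states, w):
    """Weighted self-attention pooling (illustrative sketch).

    hidden_states: (T, d) array of LSTM outputs, one row per time step.
    w: (d,) scoring vector (would be learned in training; random here).
    Returns the (d,) context vector and the (T,) attention weights.
    """
    scores = hidden_states @ w           # one scalar score per time step
    weights = softmax(scores)            # weights are non-negative, sum to 1
    context = weights @ hidden_states    # attention-weighted sum of states
    return context, weights

# Toy example: 5 time steps, hidden size 8 (hypothetical dimensions).
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))
w = rng.standard_normal(8)
context, attn = attention_pool(H, w)
```

In the full model, the context vector would feed a final classification layer; time steps carrying key sentiment terms receive larger attention weights and so dominate the pooled representation.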