Sign languages, also popularly called signed languages, are the primary means by which deaf and mute people communicate with one another, and their automatic recognition remains one of the least explored areas. This paper therefore presents a model that translates a given sign language into simple English text so that it can assist people with speech impairments. With recent advances in deep learning, many problems of this kind can now be addressed with neural networks, and sign language recognition is one of them. To meet these requirements, the proposed model applies deep learning techniques, in particular a Convolutional Neural Network (CNN) for image representation and classification. The model in this paper is built on a dataset of 87,002 images, split into training and test data in a 9:1 ratio: training was performed on 78,300 images and testing on 8,700 images, classified into 29 classes. The model achieved a training accuracy of 98.67%. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd 2021.
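As a rough illustration of the 9:1 split reported in the abstract (a hypothetical sketch; the paper does not publish its code), 29 classes of 3,000 labelled images each reproduce the stated training and test counts:

```python
# Hypothetical reconstruction of the 9:1 train/test split described above.
# Assumes 29 gesture classes with 3,000 labelled images per class.
num_classes = 29
images_per_class = 3000           # 29 * 3,000 = 87,000 labelled images
train_ratio = 0.9                 # the 9:1 split from the abstract

train_per_class = int(images_per_class * train_ratio)   # 2,700 per class
test_per_class = images_per_class - train_per_class     # 300 per class

train_total = train_per_class * num_classes             # training images
test_total = test_per_class * num_classes               # test images
print(train_total, test_total)    # → 78300 8700
```

This matches the 78,300 training and 8,700 test images stated in the abstract; the remaining two images of the 87,002 total presumably lie outside the per-class split.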