With the digitization of nearly every domain, automatic speech and emotion recognition has become a widely researched topic within human-computer interaction (HCI), with a broad range of applications. As machines take over many menial jobs, it has become important for the computer to understand us as we understand it. Features such as MFCC, pitch, and amplitude are extracted from a given sample and matched against an existing, growing database of training samples. MFCC is used to identify the speaker and utterance, while an SVM classifier distinguishes the emotion of the sample, differentiating between anger, happiness, fear, and sadness, and updating the database as it goes. © 2018 IEEE.
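A minimal sketch of the classification stage described above, assuming scikit-learn for the SVM; the synthetic feature vectors below are stand-ins for the MFCC, pitch, and amplitude features the abstract mentions (a real front end such as librosa would supply them), and all names and dimensions here are illustrative assumptions, not the paper's actual pipeline.

```python
# Hedged sketch: an SVM classifier over per-sample feature vectors,
# separating the four emotion classes named in the abstract.
# Feature extraction is mocked with synthetic data; a real system
# would derive MFCC/pitch/amplitude features from audio.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
EMOTIONS = ["anger", "happiness", "fear", "sadness"]

# Stand-in features: 13 MFCC coefficients + pitch + amplitude = 15 dims
# (a hypothetical layout chosen for illustration).
X = np.vstack([
    rng.normal(loc=i, scale=1.0, size=(50, 15))  # one cluster per emotion
    for i in range(len(EMOTIONS))
])
y = np.repeat(EMOTIONS, 50)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)  # SVM distinguishing the four emotions
print("predicted emotion:", clf.predict(X_te[:1])[0])
```

In a deployed system, each newly classified sample could be appended to the training set and the SVM periodically refit, mirroring the growing database the abstract describes.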