Evaluating the Importance of each Feature in Classification task
Published in IEEE
Pages: 151 - 155
In machine learning and statistics, attribute/feature selection is used in predictive model construction. It helps the machine interpret features easily by discovering good insights, and it improves the efficiency of predictive modeling. The objective of our research is to improve classification accuracy by identifying the most important features in a given dataset. In this research, we used two techniques, data partitioning and K-fold cross-validation, to evaluate the importance of each feature in a randomly generated dataset with 5399 instances and 20 attributes. In data partitioning, the attribute with the lowest accuracy is filtered out, whereas in K-fold cross-validation, the attribute with the largest error is removed from the original dataset. In our experiments, the evaluation parameters considered are recall, precision, and F-measure. Finally, the accuracy rates of the two techniques are compared. Our findings state that the K-fold approach achieves better accuracy (97.03%) than data partitioning (96.11%) in estimating the importance of features in classification. © 2018 IEEE.
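The abstract does not spell out the exact algorithm, but the K-fold procedure it describes, scoring each attribute by the error observed when the classifier runs without it, can be sketched as follows. This is a minimal illustration under assumptions: a tiny synthetic dataset (not the paper's 5399-instance, 20-attribute one), a nearest-centroid classifier as a stand-in for whatever model the authors used, and accuracy drop as the importance score.

```python
import random

def k_fold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    idx = list(range(n))
    random.Random(0).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def cv_accuracy(X, y, k=5):
    """K-fold accuracy of a simple nearest-centroid classifier."""
    correct, total = 0, 0
    for train, test in k_fold_indices(len(X), k):
        # Per-class centroids computed from the training fold only.
        centroids = {}
        for c in set(y):
            rows = [X[i] for i in train if y[i] == c]
            centroids[c] = [sum(col) / len(rows) for col in zip(*rows)]
        for i in test:
            pred = min(centroids, key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(X[i], centroids[c])))
            correct += pred == y[i]
            total += 1
    return correct / total

# Synthetic two-feature data: feature 0 is informative, feature 1 is noise.
rng = random.Random(42)
X, y = [], []
for _ in range(200):
    label = rng.randint(0, 1)
    X.append([label * 2.0 + rng.gauss(0, 0.5), rng.gauss(0, 1.0)])
    y.append(label)

# Importance of feature j = how much accuracy drops when j is removed;
# the least important (smallest drop) feature would be the removal candidate.
base = cv_accuracy(X, y)
acc_without = {}
for j in range(2):
    reduced = [[v for c, v in enumerate(row) if c != j] for row in X]
    acc_without[j] = cv_accuracy(reduced, y)
    print(f"feature {j}: accuracy without it = {acc_without[j]:.3f} "
          f"(full model: {base:.3f})")
```

Removing the informative feature should hurt accuracy far more than removing the noise feature, which is the signal both techniques in the paper rely on when ranking attributes.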
About the journal
Journal: 2018 8th International Conference on Communication Systems and Network Technologies (CSNT)
Publisher: IEEE