Feature selection – say no to bad attributes

By this point in the book, we have built up a level of comfort when dealing with new datasets: we have under our belt the ability to understand and clean the data in front of us. Once we are able to work with the data given to us, we can start to make bigger decisions, such as deciding at what point a feature is actually just an attribute. Recall that with this distinction, feature versus attribute, the real question is: which columns are not helping my ML pipeline, and are therefore hurting it and should be removed? This chapter focuses on the techniques used to decide which attributes to remove from our dataset. We will explore several statistical and iterative processes that aid in this decision.

Among these processes are:

  • Correlation coefficients
  • Identifying and removing multicollinearity
  • Chi-squared tests
  • ANOVA tests
  • Interpretation of p-values
  • Iterative feature selection
  • Using machine learning to measure entropy and information gain

Each of these procedures will suggest features for removal, and each will give different reasons for doing so. Ultimately, it is up to us, the data scientists, to make the final call on which features are allowed to remain and contribute to our machine learning algorithms.
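
As a quick preview of the simplest of these ideas, correlation-based filtering, here is a minimal sketch. The dataset, column names, and threshold are all hypothetical, invented purely for illustration: we rank each candidate column by the strength of its correlation with the response, and drop the columns that fall below a chosen cutoff.

```python
import numpy as np
import pandas as pd

# Hypothetical dataset: two independent candidate columns, one near-duplicate,
# and a response that depends mostly on feature_a.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'feature_a': rng.normal(size=100),
    'feature_b': rng.normal(size=100),
})
df['feature_c'] = 0.95 * df['feature_a'] + rng.normal(scale=0.1, size=100)
df['response'] = 2 * df['feature_a'] + rng.normal(size=100)

# Rank candidate columns by absolute Pearson correlation with the response.
correlations = df.corr()['response'].drop('response').abs()
print(correlations.sort_values(ascending=False))

# A simple filter: keep only the columns that clear an arbitrary threshold.
threshold = 0.3  # an illustrative cutoff, not a recommended default
selected = correlations[correlations > threshold].index.tolist()
print('Columns kept:', selected)
```

Notice that a naive correlation filter would keep both feature_a and its near-duplicate feature_c; catching that kind of redundancy is exactly what the multicollinearity techniques later in this chapter address.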