Introducing machine learning concepts

Machine learning is a concept that was defined by Arthur Samuel in 1959 as a field of study that gives computers the ability to learn without being explicitly programmed. Tom M. Mitchell provided a more formal definition, linking experience data, tasks, and the performance measurement of an algorithm: a program is said to learn from experience E with respect to a class of tasks T and a performance measure P if its performance at tasks in T, as measured by P, improves with experience E.

The machine learning definition by Arthur Samuel is referenced in Some Studies in Machine Learning Using the Game of Checkers, IBM Journal of Research and Development (Volume 3, Issue 3), p. 210. It was also referenced in The New Yorker and Office Management in the same year.
The more formal definition from Tom M. Mitchell is referenced in his book Machine Learning, McGraw Hill, 1997: ( http://www.cs.cmu.edu/afs/cs.cmu.edu/user/mitchell/ftp/mlbook.html ).

Machine learning involves pattern recognition and learning theory in artificial intelligence, and is related to computational statistics. It is used in hundreds of applications, such as optical character recognition (OCR), spam filtering, and search engines, and in thousands of computer vision applications, such as the example that we will develop in this chapter, where a machine learning algorithm tries to classify the objects that appear in an input image.

Depending on how machine learning algorithms learn from the input data, we can divide them into three categories:

  • Supervised learning: The computer learns from a set of labeled data. The goal here is to learn the parameters of a model, and the rules that allow the computer to map the relationship between the input data and the output labels.
  • Unsupervised learning: No labels are given, and the computer tries to discover the structure of the input data.
  • Reinforcement learning: The computer interacts with a dynamic environment, working toward its goal and learning from its mistakes.
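The difference between the first two categories can be made concrete with a toy sketch. The following pure Python example (the numbers and labels are made up purely for illustration) learns a nearest-centroid rule from labeled 1-D samples, and then discovers the same two groups from unlabeled data with a tiny 2-means clustering loop:

```python
# Supervised: labeled samples -> learn a decision rule (nearest centroid).
labeled = [(1.0, "small"), (1.2, "small"), (8.0, "large"), (8.5, "large")]
groups = {}
for value, label in labeled:
    groups.setdefault(label, []).append(value)
centroids = {lbl: sum(vs) / len(vs) for lbl, vs in groups.items()}

def classify(x):
    # Predict the label whose centroid is closest to x.
    return min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

print(classify(2.0))  # -> small

# Unsupervised: no labels -> discover the structure with 2-means clustering.
data = [1.0, 1.2, 8.0, 8.5, 1.1, 8.2]
c0, c1 = min(data), max(data)          # initial cluster centers
for _ in range(10):
    g0 = [x for x in data if abs(x - c0) <= abs(x - c1)]
    g1 = [x for x in data if abs(x - c0) > abs(x - c1)]
    c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
print(sorted(g0), sorted(g1))          # two groups found without labels
```

Note that the clustering loop recovers the same grouping as the supervised rule, but without ever seeing a label.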

Depending on the results we wish to obtain from our machine learning algorithm, we can categorize the results as follows:

  • Classification: The space of the inputs can be divided into N classes, and the prediction for a given sample is one of these classes. This is one of the most widely used categories. A typical example is email spam filtering, where there are only two classes: spam and non-spam. Another example is OCR, where only N characters are available and each character is one class.
  • Regression: The output is a continuous value instead of a discrete value like a classification result. One example of regression could be the prediction of a house price given the house's size, number of years since it was built, and location.
  • Clustering: The inputs are divided into N groups, which is typically done using unsupervised learning.
  • Density estimation: Finds the (probability) distribution of inputs.
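To see how regression differs from classification, consider the house-price example above. The following sketch fits price = a * size + b by ordinary least squares; the sizes and prices are invented numbers (deliberately chosen to lie exactly on a line), used only to show that the output is a continuous value rather than a class label:

```python
# Made-up training data: house sizes (square meters) and prices
# (in thousands); here price is exactly 3 * size, so the fit is exact.
sizes  = [50.0, 70.0, 90.0, 110.0]
prices = [150.0, 210.0, 270.0, 330.0]

# Closed-form ordinary least squares for a single input variable.
n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(prices) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, prices)) \
    / sum((x - mean_x) ** 2 for x in sizes)
b = mean_y - a * mean_x

predicted = a * 80.0 + b  # a continuous output, not a class label
print(predicted)          # -> 240.0
```

A classifier would instead map the same 80.0 square-meter input onto one of a fixed set of labels, such as "cheap" or "expensive".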

In our example, we are going to use a supervised learning classification algorithm, where a training dataset with labels is used to train a model, and the result of the model's prediction is one of the possible labels. In machine learning, there are several approaches and methods for this. Some of the more popular ones include the following: support vector machines (SVM), artificial neural networks (ANN), clustering, k-nearest neighbors, decision trees, and deep learning. Almost all of these methods and approaches are supported, implemented, and well documented in OpenCV. In this chapter, we are going to explain support vector machines.
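Before turning to OpenCV's SVM implementation, it helps to see the underlying idea in isolation. The following pure Python sketch trains a linear SVM by sub-gradient descent on the hinge loss; the 2-D points, labels, learning rate, and regularization strength are all made-up values for illustration, and a real implementation (such as OpenCV's cv::ml::SVM) uses a far more robust solver:

```python
# Toy training set: 2-D points with labels in {-1, +1}, linearly separable.
samples = [(1.0, 1.0, -1), (1.5, 0.5, -1), (4.0, 4.0, +1), (4.5, 3.5, +1)]
w = [0.0, 0.0]        # weight vector of the separating hyperplane
b = 0.0               # bias term
lr, lam = 0.1, 0.01   # learning rate and regularization strength

for epoch in range(200):
    for x1, x2, y in samples:
        margin = y * (w[0] * x1 + w[1] * x2 + b)
        if margin < 1:                     # inside the margin: hinge gradient
            w[0] += lr * (y * x1 - lam * w[0])
            w[1] += lr * (y * x2 - lam * w[1])
            b    += lr * y
        else:                              # correctly classified: only shrink w
            w[0] -= lr * lam * w[0]
            w[1] -= lr * lam * w[1]

def predict(x1, x2):
    # The sign of the decision function gives the predicted class.
    return 1 if w[0] * x1 + w[1] * x2 + b >= 0 else -1

print(predict(0.5, 1.0), predict(5.0, 4.0))
```

The regularization term is what distinguishes this from a plain perceptron: it pushes the solver toward the hyperplane with the widest margin between the two classes, which is the defining idea of SVMs that we will explore in this chapter.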