You have probably already used this form of machine learning as a consumer, even if you were not aware of it. If you have any modern e-mail system, it will likely have the ability to automatically detect spam. That is, the system will analyze all incoming e-mails and mark them as either spam or not-spam. Often, you, the end user, will be able to manually tag e-mails as spam or not, in order to improve its spam detection ability. This is a form of machine learning where the system takes examples of two types of messages, spam and ham (the typical term for "non-spam e-mails"), and uses these examples to automatically classify incoming e-mails.

The general method of classification is to use a set of examples of each class to learn rules that can be applied to new examples.
This is one of the most important machine learning modes and is the topic of this chapter.

Working with text such as e-mails requires a specific set of techniques and skills, and we discuss those in the next chapter. For the moment, we will work with a smaller, easier-to-handle dataset. The example question for this chapter is, "Can a machine distinguish between flower species based on images?" We will use two datasets where measurements of flower morphology are recorded along with the species for several specimens.

We will explore these small datasets using a few simple algorithms. At first, we will write classification code ourselves in order to understand the concepts, but we will quickly switch to using scikit-learn whenever possible.
The goal is to first understand the basic principles of classification and then progress to using a state-of-the-art implementation.

The Iris dataset

The Iris dataset is a classic dataset from the 1930s; it is one of the first modern examples of statistical classification. The dataset is a collection of morphological measurements of several Iris flowers. These measurements will enable us to distinguish multiple species of the flowers. Today, species are identified by their DNA fingerprints, but in the 1930s, DNA's role in genetics had not yet been discovered.

The following four attributes of each plant were measured:

• sepal length
• sepal width
• petal length
• petal width

In general, we will call the individual numeric measurements we use to describe our data features.
These features can be directly measured or computed from intermediate data. This dataset has four features. Additionally, for each plant, the species was recorded. The problem we want to solve is, "Given these examples, if we see a new flower out in the field, could we make a good prediction about its species from its measurements?"

This is the supervised learning or classification problem: given labeled examples, can we design a rule to be later applied to other examples? A more familiar example to modern readers who are not botanists is spam filtering, where the user can mark e-mails as spam, and systems use these as well as the non-spam e-mails to determine whether a new, incoming message is spam or not.

Later in the book, we will look at problems dealing with text (starting in the next chapter).
For the moment, the Iris dataset serves our purposes well. It is small (150 examples, four features each) and can be easily visualized and manipulated.

Visualization is a good first step

Datasets, later in the book, will grow to thousands of features. With only four in our starting example, we can easily plot all two-dimensional projections on a single page. We will build intuitions on this small example, which can then be extended to large datasets with many more features. As we saw in the previous chapter, visualizations are excellent at the initial exploratory phase of the analysis, as they allow you to learn the general features of your problem as well as catch problems that occurred with data collection early.

Each subplot in the following plot shows all points projected into two of the dimensions.
The outlying group (triangles) are the Iris Setosa plants, while Iris Versicolor plants are in the center (circles) and Iris Virginica are plotted with x marks. We can see that there are two large groups: one is of Iris Setosa and another is a mixture of Iris Versicolor and Iris Virginica.

In the following code snippet, we present the code to load the data and generate the plot:

>>> from matplotlib import pyplot as plt
>>> import numpy as np
>>> # We load the data with load_iris from sklearn
>>> from sklearn.datasets import load_iris
>>> data = load_iris()
>>> # load_iris returns an object with several fields
>>> features = data.data
>>> feature_names = data.feature_names
>>> target = data.target
>>> target_names = data.target_names
>>> for t in range(3):
...     if t == 0:
...         c = 'r'
...         marker = '>'
...     elif t == 1:
...         c = 'g'
...         marker = 'o'
...     elif t == 2:
...         c = 'b'
...         marker = 'x'
...     plt.scatter(features[target == t, 0],
...                 features[target == t, 1],
...                 marker=marker,
...                 c=c)
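The snippet above draws a single projection (the first two features). As a rough sketch of how the full grid of pairwise projections could be produced, reusing the variables loaded above (the subplot layout and figure size here are our own choices, not the book's plotting code):

from itertools import combinations

# Four features give six distinct pairs of dimensions
pairs = list(combinations(range(features.shape[1]), 2))
fig, axes = plt.subplots(2, 3, figsize=(12, 8))
for ax, (i, j) in zip(axes.flat, pairs):
    # Same colors and markers as in the loop above
    for t, marker, c in zip(range(3), '>ox', 'rgb'):
        ax.scatter(features[target == t, i],
                   features[target == t, j],
                   marker=marker, c=c)
    ax.set_xlabel(feature_names[i])
    ax.set_ylabel(feature_names[j])
fig.tight_layout()
plt.show()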
Building our first classification model

If the goal is to separate the three types of flowers, we can immediately make a few suggestions just by looking at the data. For example, petal length seems to be able to separate Iris Setosa from the other two flower species on its own. We can write a little bit of code to discover where the cut-off is:

>>> # We use NumPy fancy indexing to get an array of strings:
>>> labels = target_names[target]
>>> # The petal length is the feature at position 2
>>> plength = features[:, 2]
>>> # Build an array of booleans:
>>> is_setosa = (labels == 'setosa')
>>> # This is the important step:
>>> max_setosa = plength[is_setosa].max()
>>> min_non_setosa = plength[~is_setosa].min()
>>> print('Maximum of setosa: {0}.'.format(max_setosa))
Maximum of setosa: 1.9.
>>> print('Minimum of others: {0}.'.format(min_non_setosa))
Minimum of others: 3.0.

Therefore, we can build a simple model: if the petal length is smaller than 2, then this is an Iris Setosa flower; otherwise it is either Iris Virginica or Iris Versicolor.
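As a quick illustration, that rule can be written as a one-line function (a sketch of ours; the book does not name this function):

def is_setosa_test(example):
    "Classify as Iris Setosa if petal length (feature 2) is below 2"
    return example[2] < 2.0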
This is our first model, and it works very well in that it separates Iris Setosa flowers from the other two species without making any mistakes. In this case, we did not actually do any machine learning. Instead, we looked at the data ourselves, looking for a separation between the classes. Machine learning happens when we write code to look for this separation automatically.

The problem of telling Iris Setosa apart from the other two species was very easy.
However, we cannot immediately see what the best threshold is for distinguishing Iris Virginica from Iris Versicolor. We can even see that we will never achieve perfect separation with these features. We could, however, look for the best possible separation, the separation that makes the fewest mistakes. For this, we will perform a little computation.

We first select only the non-Setosa features and labels:

>>> # ~ is the boolean negation operator
>>> features = features[~is_setosa]
>>> labels = labels[~is_setosa]
>>> # Build a new target variable, is_virginica
>>> is_virginica = (labels == 'virginica')

Here we are heavily using NumPy operations on arrays.
The is_setosa array is a Boolean array and we use it to select a subset of the other two arrays, features and labels. Finally, we build a new boolean array, is_virginica, by using an equality comparison on labels.
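To make the selection step concrete, here is a toy illustration of boolean indexing (the array values are ours, purely for demonstration):

>>> arr = np.array([10, 20, 30, 40])
>>> mask = np.array([True, False, True, False])
>>> arr[mask]
array([10, 30])
>>> arr[~mask]
array([20, 40])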
Now, we run a loop over all possible features and thresholds to see which one results in the best accuracy. Accuracy is simply the fraction of examples that the model classifies correctly.

>>> # Initialize best_acc to impossibly low value
>>> best_acc = -1.0
>>> for fi in range(features.shape[1]):
...     # We are going to test all possible thresholds
...     thresh = features[:, fi]
...     for t in thresh:
...         # Get the vector for feature `fi`
...         feature_i = features[:, fi]
...         # apply threshold `t`
...         pred = (feature_i > t)
...         acc = (pred == is_virginica).mean()
...         rev_acc = (pred == ~is_virginica).mean()
...         if rev_acc > acc:
...             reverse = True
...             acc = rev_acc
...         else:
...             reverse = False
...         if acc > best_acc:
...             best_acc = acc
...             best_fi = fi
...             best_t = t
...             best_reverse = reverse
We need to test two types of thresholds for each feature and value: we test a greater than threshold and the reverse comparison. This is why we need the rev_acc variable in the preceding code; it holds the accuracy of reversing the comparison.

The last few lines select the best model. First, we compare the predictions, pred, with the actual labels, is_virginica. The little trick of computing the mean of the comparisons gives us the fraction of correct results, the accuracy.
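To see why the mean of a boolean comparison is the accuracy, consider a toy example (the values here are ours, purely illustrative):

>>> pred = np.array([True, False, True, True])
>>> truth = np.array([True, True, True, False])
>>> (pred == truth)
array([ True, False,  True, False])
>>> (pred == truth).mean()
0.5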
At the end of the for loop, all the possible thresholds for all the possible features have been tested, and the variables best_fi, best_t, and best_reverse hold our model. This is all the information we need to be able to classify a new, unknown object, that is, to assign a class to it. The following code implements exactly this method:

def is_virginica_test(fi, t, reverse, example):
    "Apply threshold model to a new example"
    test = example[fi] > t
    if reverse:
        test = not test
    return test

What does this model look like? If we run the code on the whole data, the model that is identified as the best makes decisions by splitting on the petal width. One way to gain intuition about how this works is to visualize the decision boundary.
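Before visualizing, a quick sanity check is possible (our own addition, not the book's code): applying the learned model back to the non-Setosa examples should reproduce best_acc from the search loop:

>>> predictions = np.array([is_virginica_test(best_fi, best_t,
...                                           best_reverse, ex)
...                         for ex in features])
>>> print('Training accuracy: {0:.3f}'.format(
...     (predictions == is_virginica).mean()))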