Support Vector Machine in scikit-learn – part 1

Posted in Machine Learning, scikit-learn

Support Vector Machines

Support Vector Machines (SVMs) have become one of the state-of-the-art machine learning models for many tasks, with excellent results in many practical applications. One of their greatest advantages is that they are very effective in high-dimensional spaces, that is, on problems with a lot of features to learn from. They are also very effective when the data is sparse (think of a high-dimensional space with very few instances). Besides, they are very efficient in terms of memory, since only a subset of the points in the learning space is used to represent the decision surfaces.

Support Vector Machines (SVM) are supervised learning methods that separate classes with hyperplanes, chosen in an optimal way: among all possible separating hyperplanes, they select the ones that pass through the widest possible gaps (margins) between instances of different classes. New instances are then classified as belonging to a certain category based on which side of the surfaces they fall on.
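
As a small aside, here is a minimal sketch on a hypothetical toy dataset (not part of the walkthrough below): a linear SVC exposes the learned hyperplane through its coef_ and intercept_ attributes, and stores only the support vectors that define the decision surface.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical toy data: two linearly separable clusters in 2D
X = np.array([[0.0, 0.0], [0.5, 0.5], [3.0, 3.0], [3.5, 3.0]])
y = np.array([0, 0, 1, 1])

clf = SVC(kernel='linear')
clf.fit(X, y)

# The separating hyperplane satisfies w . x + b = 0
w, b = clf.coef_[0], clf.intercept_[0]
print(w, b)

# Only a subset of the training points (the support vectors) is stored
print(clf.support_vectors_)
```

This is why SVMs are memory efficient: the decision surface is represented by the support vectors alone, not the full training set.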

To mention some disadvantages: SVM models can be very computationally intensive to train, and they do not directly return a numerical indicator of how confident they are about a prediction.
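
That said, scikit-learn offers two partial workarounds, sketched here on hypothetical toy data: decision_function returns the signed distance of a sample to the separating hyperplane, and constructing SVC with probability=True adds Platt-scaled probability estimates at extra training cost.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical toy data: two well-separated 1-D clusters
rng = np.random.RandomState(0)
X = np.concatenate([rng.normal(0.0, 0.5, (10, 1)),
                    rng.normal(5.0, 0.5, (10, 1))])
y = np.array([0] * 10 + [1] * 10)

# decision_function: signed distance to the hyperplane (not a probability)
clf = SVC(kernel='linear').fit(X, y)
print(clf.decision_function([[0.0], [5.0]]))

# probability=True fits an extra Platt-scaling step via internal cross-validation
clf_p = SVC(kernel='linear', probability=True).fit(X, y)
print(clf_p.predict_proba([[0.0], [5.0]]))
```

Note that the probability estimates come from a calibration step fitted after the SVM itself, so they can disagree with the raw predictions on borderline samples.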

We will apply SVM to image recognition, a classic problem with a very high-dimensional space.

Let us start by importing the dataset and printing its description:

In [3]:
import sklearn as sk
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import fetch_olivetti_faces
faces = fetch_olivetti_faces()
print (faces.DESCR)
downloading Olivetti faces to C:\Users\piush\scikit_learn_data
Modified Olivetti faces dataset.

The original database was available from (now defunct)

The version retrieved here comes in MATLAB format from the personal
web page of Sam Roweis:

There are ten different images of each of 40 distinct subjects. For some
subjects, the images were taken at different times, varying the lighting,
facial expressions (open / closed eyes, smiling / not smiling) and facial
details (glasses / no glasses). All the images were taken against a dark
homogeneous background with the subjects in an upright, frontal position (with
tolerance for some side movement).

The original dataset consisted of 92 x 112 images, while the Roweis version
consists of 64 x 64 images.

In [4]:
print(faces.keys())
print(faces.images.shape)
print(faces.data.shape)
print(faces.target.shape)
dict_keys(['data', 'images', 'target', 'DESCR'])
(400, 64, 64)
(400, 4096)
(400,)

The dataset contains 400 images of 40 different persons. The photos were taken under different lighting conditions and with different facial expressions (including open/closed eyes, smiling/not smiling, and with glasses/no glasses).

Looking at the content of the faces object, we find three properties: images, data, and target. images contains the 400 images represented as 64 x 64 pixel matrices; data contains the same 400 images, each flattened into an array of 4096 pixels; and target is, as expected, an array with the target classes, ranging from 0 to 39.
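
The relation between images and data is a plain reshape: each 64 x 64 matrix is flattened row by row into a 4096-element vector. A quick sketch with a synthetic array (used here only to avoid re-downloading the dataset) illustrates this:

```python
import numpy as np

# Synthetic stand-in for faces.images: 400 images of 64 x 64 pixels
rng = np.random.RandomState(0)
images = rng.rand(400, 64, 64).astype(np.float32)

# Flatten each image row by row, as faces.data does
data = images.reshape(400, -1)
print(data.shape)  # (400, 4096)

# Each row of data is the corresponding image unrolled
print(np.array_equal(images[0].ravel(), data[0]))  # True
```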

Do we need to normalize?

Normalizing the input data is important for SVM to obtain good results, so let us check the range of the pixel values.

In [28]:
print(np.max(faces.data))
print(np.min(faces.data))
print(np.mean(faces.data))

The pixel values already lie in the [0, 1] range, therefore we do not have to normalize the data.

Let us plot the first 20 images. We can see faces of two persons; in total, we have 40 individuals with 10 different images each.
In [6]:
def print_faces(images, target, top_n):
    # set up the figure size in inches
    fig = plt.figure(figsize=(12, 12))
    fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
    for i in range(top_n):
        # plot the images in a matrix of 20x20
        p = fig.add_subplot(20, 20, i + 1, xticks=[], yticks=[])
        # show the face image in grayscale
        p.imshow(images[i], cmap=plt.cm.bone)
        # label the image with the target value and the index
        p.text(0, 14, str(target[i]))
        p.text(0, 60, str(i))
In [7]:
print_faces(faces.images, faces.target, 20)
