# Support Vector Machines in scikit-learn: Part 2 (continued from Part 1)

In :
```python
print_faces(faces.images, faces.target, 400)
```

### Training a Support Vector Machine

###### The Support Vector Classifier (SVC) will be used for classification

The SVC implementation has several important parameters; probably the most relevant is `kernel`, which defines the kernel function to be used in our classifier.

In :
```python
from sklearn.svm import SVC

svc_1 = SVC(kernel='linear')
print(svc_1)
```
```
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
    decision_function_shape=None, degree=3, gamma='auto', kernel='linear',
    max_iter=-1, probability=False, random_state=None, shrinking=True,
    tol=0.001, verbose=False)
```
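The `kernel` argument is the main knob to turn here. As a minimal sketch on hypothetical toy data (two well-separated clusters, not the faces set), each kernel can be swapped in the same way:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical toy data (not the faces set): two well-separated clusters
X = np.array([[0.0, 0.0], [0.1, 0.1], [3.0, 3.0], [3.1, 2.9]])
y = np.array([0, 0, 1, 1])

# The same fit/predict interface works for every kernel choice
for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel).fit(X, y)
    print(kernel, clf.predict([[0.05, 0.05], [3.0, 3.1]]))
```

On data this cleanly separated, all three kernels assign the query points to the nearby cluster.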
###### Split our dataset into training and testing datasets
In :
```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, random_state=0)
```
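`train_test_split` shuffles the samples and holds out the requested fraction. A quick sketch on synthetic data (hypothetical arrays, chosen so 25% divides evenly) shows the split sizes:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical data: 48 samples, so test_size=0.25 gives exactly 12 test rows
X = np.arange(96).reshape(48, 2)
y = np.arange(48)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
print(len(X_tr), len(X_te))  # 36 12
```

Fixing `random_state` makes the shuffle reproducible, which is why the tutorial's numbers can be replicated exactly.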
###### A function to evaluate K-fold cross-validation.
In :
```python
from sklearn.model_selection import cross_val_score, KFold
from scipy.stats import sem
import numpy as np

def evaluate_cross_validation(clf, X, y, K):
    # create a k-fold cross-validation iterator
    cv = KFold(n_splits=K, shuffle=True, random_state=0)
    # by default, the score used is the one returned by the estimator's score method (accuracy)
    scores = cross_val_score(clf, X, y, cv=cv)
    print(scores)
    print("Mean score: {0:.3f} (+/-{1:.3f})".format(np.mean(scores), sem(scores)))
```

#### Cross-validation with five folds

In :
```python
evaluate_cross_validation(svc_1, X_train, y_train, 5)
```
```
[ 0.93333333  0.86666667  0.91666667  0.93333333  0.91666667]
Mean score: 0.913 (+/-0.012)
```

#### Mean accuracy of 0.913
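To make that summary line concrete, the mean and the standard error of the mean (`scipy.stats.sem`) can be recomputed directly from the five fold scores printed above:

```python
import numpy as np
from scipy.stats import sem

# The five fold scores printed above, reproduced by hand
scores = np.array([0.93333333, 0.86666667, 0.91666667, 0.93333333, 0.91666667])

# Mean score: 0.913 (+/-0.012)
print("Mean score: {0:.3f} (+/-{1:.3f})".format(np.mean(scores), sem(scores)))
```

The `+/-` value is the standard error (sample standard deviation divided by the square root of the number of folds), not a confidence interval.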

###### Function to perform training on the training set and evaluate the performance on the testing set
In :
```python
from sklearn import metrics

def train_and_evaluate(clf, X_train, X_test, y_train, y_test):
    clf.fit(X_train, y_train)

    print("Accuracy on training set:")
    print(clf.score(X_train, y_train))
    print("Accuracy on testing set:")
    print(clf.score(X_test, y_test))

    y_pred = clf.predict(X_test)

    print("Classification Report:")
    print(metrics.classification_report(y_test, y_pred))
    print("Confusion Matrix:")
    print(metrics.confusion_matrix(y_test, y_pred))
```
In :
```python
train_and_evaluate(svc_1, X_train, X_test, y_train, y_test)
```
```
Accuracy on training set:
1.0
Accuracy on testing set:
0.99
Classification Report:
precision    recall  f1-score   support

0       0.86      1.00      0.92         6
1       1.00      1.00      1.00         4
2       1.00      1.00      1.00         2
3       1.00      1.00      1.00         1
4       1.00      1.00      1.00         1
5       1.00      1.00      1.00         5
6       1.00      1.00      1.00         4
7       1.00      0.67      0.80         3
9       1.00      1.00      1.00         1
10       1.00      1.00      1.00         4
11       1.00      1.00      1.00         1
12       1.00      1.00      1.00         2
13       1.00      1.00      1.00         3
14       1.00      1.00      1.00         5
15       1.00      1.00      1.00         3
17       1.00      1.00      1.00         6
19       1.00      1.00      1.00         4
20       1.00      1.00      1.00         1
21       1.00      1.00      1.00         1
22       1.00      1.00      1.00         2
23       1.00      1.00      1.00         1
24       1.00      1.00      1.00         2
25       1.00      1.00      1.00         2
26       1.00      1.00      1.00         4
27       1.00      1.00      1.00         1
28       1.00      1.00      1.00         2
29       1.00      1.00      1.00         3
30       1.00      1.00      1.00         4
31       1.00      1.00      1.00         3
32       1.00      1.00      1.00         3
33       1.00      1.00      1.00         2
34       1.00      1.00      1.00         3
35       1.00      1.00      1.00         1
36       1.00      1.00      1.00         3
37       1.00      1.00      1.00         3
38       1.00      1.00      1.00         1
39       1.00      1.00      1.00         3

avg / total       0.99      0.99      0.99       100

Confusion Matrix:
[[6 0 0 ..., 0 0 0]
[0 4 0 ..., 0 0 0]
[0 0 2 ..., 0 0 0]
...,
[0 0 0 ..., 3 0 0]
[0 0 0 ..., 0 1 0]
[0 0 0 ..., 0 0 3]]
```
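In a confusion matrix, rows are true classes and columns are predictions, so correct predictions sit on the diagonal. A minimal sketch with hypothetical 3-class labels (not the faces output) shows how the diagonal relates to accuracy:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical labels: one sample of true class 0 was predicted as class 1
y_true = [0, 0, 1, 1, 2]
y_pred = [0, 1, 1, 1, 2]

cm = confusion_matrix(y_true, y_pred)
print(cm)
# Correct predictions sit on the diagonal, so accuracy = trace / total
print(np.trace(cm) / cm.sum())  # 0.8
```

Reading the big matrix above the same way, the single off-diagonal entry is the one test face assigned to the wrong person.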

### Classify the faces of people with and without glasses

The first thing to do is to define the ranges of image indexes that show faces wearing glasses.

In :
```python
# the index ranges of images of people with glasses
glasses = [
    (10, 19), (30, 32), (37, 38), (50, 59), (63, 64),
    (69, 69), (120, 121), (124, 129), (130, 139), (160, 161),
    (164, 169), (180, 182), (185, 185), (189, 189), (190, 192),
    (194, 194), (196, 199), (260, 269), (270, 279), (300, 309),
    (330, 339), (358, 359), (360, 369)
]
```

We'll define a function that, given those segments, returns a new target array marking faces with glasses as 1 and faces without glasses as 0 (our new target classes):

In :
```python
def create_target(segments):
    # create a new y array of target size, initialized with zeros
    y = np.zeros(faces.target.shape)
    # put 1 in the specified segments (the end index is inclusive)
    for (start, end) in segments:
        y[start:end + 1] = 1
    return y
```
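Because each `(start, end)` pair is inclusive, the slice uses `end + 1`. A self-contained sketch (with a hypothetical `create_target_demo` helper that takes the array length explicitly instead of reading `faces.target`) makes this visible:

```python
import numpy as np

# Standalone version of the same idea: note the inclusive end index
def create_target_demo(segments, n_samples):
    y = np.zeros(n_samples)
    for (start, end) in segments:
        y[start:end + 1] = 1
    return y

print(create_target_demo([(2, 4), (7, 7)], 10))
# [0. 0. 1. 1. 1. 0. 0. 1. 0. 0.]
```

A single-element range like `(7, 7)` marks exactly one sample, matching pairs such as `(69, 69)` in the glasses list above.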
In :
```python
target_glasses = create_target(glasses)
```
##### Perform the training/testing split
In :
```python
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, target_glasses, test_size=0.25, random_state=0)
```
In :
```python
# a new SVC classifier
svc_2 = SVC(kernel='linear')
```
In :
```python
# check the performance with cross-validation
evaluate_cross_validation(svc_2, X_train, y_train, 5)
```
```
[ 1.          0.95        0.98333333  0.98333333  0.93333333]
Mean score: 0.970 (+/-0.012)
```
In :
```python
train_and_evaluate(svc_2, X_train, X_test, y_train, y_test)
```
```
Accuracy on training set:
1.0
Accuracy on testing set:
0.99
Classification Report:
precision    recall  f1-score   support

0.0       1.00      0.99      0.99        67
1.0       0.97      1.00      0.99        33

avg / total       0.99      0.99      0.99       100

Confusion Matrix:
[[66  1]
[ 0 33]]
```

Let's now separate all the images of one person who sometimes wears glasses and sometimes does not: the instances with indexes from 30 to 39. We will train on the remaining instances and evaluate on this new 10-instance set. With this experiment we try to rule out the possibility that the classifier is remembering faces rather than glasses-related features.

In :
```python
X_test = faces.data[30:40]
y_test = target_glasses[30:40]
print(y_test.shape)

select = np.ones(target_glasses.shape)
select[30:40] = 0
X_train = faces.data[select == 1]
y_train = target_glasses[select == 1]
print(y_train.shape)
```
```
(10,)
(390,)
```
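The `select == 1` boolean mask is what removes the held-out person from the training set. The same trick on a tiny synthetic array (hypothetical values standing in for `faces.data`):

```python
import numpy as np

# Hypothetical small array standing in for faces.data
data = np.arange(10) * 10
select = np.ones(10)
select[3:6] = 0          # hold out indexes 3..5

train = data[select == 1]  # boolean mask keeps only unmarked rows
held_out = data[3:6]

print(train)     # [ 0 10 20 60 70 80 90]
print(held_out)  # [30 40 50]
```

The mask keeps every row whose flag is 1, so train and held-out sets are guaranteed to be disjoint.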
In :
```python
svc_3 = SVC(kernel='linear')
```
In :
```python
train_and_evaluate(svc_3, X_train, X_test, y_train, y_test)
```
```
Accuracy on training set:
1.0
Accuracy on testing set:
0.9
Classification Report:
precision    recall  f1-score   support

0.0       0.83      1.00      0.91         5
1.0       1.00      0.80      0.89         5

avg / total       0.92      0.90      0.90        10

Confusion Matrix:
[[5 0]
[1 4]]
```

From the 10 images there was only one error, which is still a pretty good result. Let's check which one was incorrectly classified. First, we have to reshape the data from flat arrays back to 64 x 64 matrices:

In :
```python
y_pred = svc_3.predict(X_test)
eval_faces = [np.reshape(a, (64, 64)) for a in X_test]
print_faces(eval_faces, y_pred, 10)
```

Image number 8 in the preceding figure has glasses but was classified as having no glasses. If we look at that instance, we can see that it differs from the rest of the images with glasses (the border of the glasses cannot be seen clearly and the person is shown with closed eyes), which could be the reason it was misclassified.

In the particular case of SVM, we can try different kernel functions; if the linear kernel does not give good results, we can try polynomial or RBF kernels. The `C` and `gamma` parameters may also affect the results.
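One systematic way to explore kernels together with `C` and `gamma` is a grid search over the parameter space. A sketch on hypothetical XOR-like data, where a linear kernel cannot do well but an RBF kernel can:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Hypothetical non-linearly-separable data (XOR-like pattern)
rng = np.random.RandomState(0)
X = rng.randn(80, 2)
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# Try every combination of kernel, C, and gamma with 3-fold cross-validation
param_grid = {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1]}
search = GridSearchCV(SVC(), param_grid, cv=3).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

`GridSearchCV` cross-validates every combination and refits the best one, so `search` can then be used like any fitted classifier.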
