Build a Classification Model using Tree-Based Algorithms
In this implementation, the features selected to build the classification model include the subject's education, gender, age and a small list of biomarkers discussed in several studies (Li et al., 2017), (Goyal et al., 2018), (Azvan et al., 2018). The biomarkers are cognitive tests such as the Clinical Dementia Rating Sum of Boxes (CDRSB) and the Mini-Mental State Exam (MMSE); MRI volumes of the whole brain, hippocampus, middle temporal gyrus and entorhinal cortex; PET measures of fludeoxyglucose (FDG) and florbetapir (AV45); and cerebrospinal fluid (CSF) measures of amyloid-beta, tau and phosphorylated tau levels. A new feature is created from age to reflect the subject's age at the time of the clinic visit. Gender and education are converted into dummy variables. Missing values are imputed and then all values are scaled. The data is then divided into training and test sets; the model is trained on the training set and evaluated on the test set. This is a continuation of an earlier part of the report.
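The preprocessing steps described above can be sketched as follows. This is a minimal illustration with hypothetical column names and toy values, not the actual ADNI data or the author's exact pipeline:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical column names and toy values; the real feature names differ.
df = pd.DataFrame({
    "AGE": [71.2, 68.5, 75.0, 80.3],            # age at baseline
    "YEARS_SINCE_BL": [2.0, 1.5, 0.0, 3.0],     # years since baseline visit
    "PTGENDER": ["Male", "Female", "Male", "Female"],
    "PTEDUCAT": [16, 12, 18, 14],
    "MMSE": [29.0, np.nan, 24.0, 20.0],         # one missing value
    "DX": ["NL", "NL", "MCI", "Dementia"],
})

# New feature: age at the time of the clinic visit.
df["AGE_AT_VISIT"] = df["AGE"] + df["YEARS_SINCE_BL"]

# Convert gender and education into dummy variables.
X = pd.get_dummies(df.drop(columns=["DX"]), columns=["PTGENDER", "PTEDUCAT"])
y = df["DX"]

# Impute missing values, then scale all values.
X_imputed = SimpleImputer(strategy="median").fit_transform(X)
X_scaled = StandardScaler().fit_transform(X_imputed)

# Divide into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.25, random_state=0)
```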
Implementation, Evaluation and Results of Decision Tree using Different Numbers of Leaf Nodes
A decision tree is a non-linear, non-parametric algorithm that uses a tree-like graph in which each branch is the outcome of a conditional test and each leaf node is a class label. Its advantages include ease of understanding, the ability to identify relationships between two or more features, and support for both numeric and categorical features. However, the learner can create complex trees that overfit, and trees are unstable because a small variation in the data can result in a very different tree. These weaknesses are handled through methods such as bagging and boosting. The model is implemented with the scikit-learn library using DecisionTreeClassifier(). It is trained with 5, 50, 500, 5,000 and 50,000 leaf nodes, yielding average AUROC scores of 0.845, 0.850, 0.78, 0.795 and 0.795 respectively. The metrics show that increasing the number of leaf nodes beyond a point degrades the model's performance.
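The leaf-node sweep can be sketched as below. A synthetic three-class dataset stands in for the prepared features, and the macro-averaged one-vs-rest AUROC is used as the score; the exact scoring setup in the original report is not specified, so treat this as an illustrative sketch:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the prepared feature matrix (3 diagnosis classes).
X, y = make_classification(n_samples=1000, n_features=15, n_informative=8,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scores = {}
for n_leaves in [5, 50, 500, 5000, 50000]:
    tree = DecisionTreeClassifier(max_leaf_nodes=n_leaves, random_state=0)
    tree.fit(X_train, y_train)
    # Macro-averaged one-vs-rest AUROC across the three classes.
    proba = tree.predict_proba(X_test)
    scores[n_leaves] = roc_auc_score(y_test, proba, multi_class="ovr")
```

With a cap far above the number of training samples (e.g. 50,000 leaves), the tree simply grows until it fits the training data, which is where the overfitting the text describes comes from.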
Implementation, Evaluation and Results of Random Forest
Random forest and gradient boosting trees are ensemble learning methods that combine the outputs of multiple individual trees. Random forest trains each tree independently on a random sample of the data, while gradient boosting builds each new tree to correct the errors of the previous ones. Random forest is less likely to overfit than gradient boosting trees. However, the algorithm is slow to make predictions, is biased towards features with more levels, and, when the data contain groups of correlated features of similar relevance, favours smaller groups over larger ones. It is implemented with scikit-learn using RandomForestClassifier(). With no fixed threshold, the model predicts normal against dementia and MCI with an AUROC score of 0.728, MCI against normal and dementia with an AUROC score of 0.60, and dementia against normal and MCI with an AUROC score of 0.877.
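The per-class one-vs-rest AUROC scores quoted above can be computed as sketched below, again on a synthetic stand-in dataset with hypothetical class labels (0=NL, 1=MCI, 2=Dementia):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the prepared features; labels 0=NL, 1=MCI, 2=Dementia.
X, y = make_classification(n_samples=1000, n_features=15, n_informative=8,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
proba = forest.predict_proba(X_test)

# One-vs-rest AUROC per class, computed from the predicted probabilities
# rather than from hard labels at a fixed 0.5 threshold.
class_auroc = {label: roc_auc_score((y_test == k).astype(int), proba[:, k])
               for k, label in enumerate(["NL", "MCI", "Dementia"])}
```

Using probabilities rather than thresholded labels is what "the threshold is unfixed" refers to: AUROC summarizes performance over all possible thresholds.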
Implementation, Evaluation and Results of XGBoost
Gradient boosting trees can solve ranking problems because a gradient can be written for the ranking loss, but they take a long time to train as the trees are built sequentially. Extreme Gradient Boosting (XGBoost) uses parallel computing to implement the gradient boosting algorithm. It includes regularization to reduce overfitting and built-in methods for handling missing values and cross-validation. It is implemented with the XGBoost library using XGBClassifier(), with the number of estimators set to 100.
The figure is a normalized confusion matrix; the diagonal shows the fraction of each class predicted correctly, i.e., 0.90 for normal (NL), 0.38 for MCI and 1.0 for dementia. The off-diagonal elements are the fractions mistakenly assigned to the other classes. The model is therefore better at classifying the clinical stages of normal and dementia than MCI. The value of 1.0 for dementia suggests that the model overfits when the threshold is fixed at 0.5; hence, the threshold should be selected carefully.
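A row-normalized confusion matrix of this kind can be produced as sketched below; a scikit-learn gradient-boosted classifier on synthetic data stands in for the report's XGBoost model:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Stand-in classifier and data; the report's model is XGBoost on ADNI features.
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# normalize="true" divides each row by its total, so every row sums to 1
# and the diagonal holds the fraction of each true class predicted correctly.
cm = confusion_matrix(y_test, clf.predict(X_test), normalize="true")
```

Note that `clf.predict` applies a fixed decision rule (the highest-probability class), which is why a confusion matrix reflects one particular threshold while AUROC does not.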
The model predicts normal against dementia and MCI with an AUROC score of 0.908, MCI against normal and dementia with an AUROC score of 0.659, and dementia against normal and MCI with an AUROC score of 0.853.
The figure shows the AUROC curves: the classifier distinguishes normal from the other two classes better than it distinguishes dementia or MCI.
Comparison of Developed Models
The decision tree with 50 leaf nodes shows the best performance, but it tends to overfit. XGBoost is the best model for distinguishing each class from the others, with AUROC scores of 0.908 for normal, 0.659 for MCI and 0.853 for dementia. All of the algorithms achieve higher AUROC scores for normal and dementia than for MCI.
Interpreting Machine Learning Model
The figure shows the interpretation of the model's prediction.
The report continues in the next part.