This post provides a link to my Github repository for my submissions for Udacity’s AI for Healthcare Nanodegree Program.
I learned to build, evaluate, and integrate predictive models that have the power to transform patient outcomes. I started by classifying and segmenting 2D and 3D medical images to augment diagnosis, then moved on to modelling patient outcomes with electronic health records to optimize clinical trial testing decisions. Finally, I built an algorithm that uses data collected from wearable devices to estimate the wearer’s pulse rate in the presence of motion.
Applying AI to 2D Medical Imaging Data
I learned the fundamental skills needed to work with 2D medical imaging data and how to use AI to derive clinically relevant insights from data gathered via different types of 2D medical imaging, such as x-ray, mammography, and digital pathology. In this project, I analyzed data from the NIH Chest X-ray dataset and trained a CNN to classify a given chest X-ray for the presence or absence of pneumonia. First, I curated training and testing sets appropriate for the clinical question at hand from a large collection of medical images. Then, I created a pipeline to extract images from DICOM files so they could be fed into a CNN for model training. Lastly, I wrote an FDA 510(k) validation plan that formally describes my model, the data it was trained on, and a validation plan that meets FDA criteria for obtaining clearance of the software as a medical device.
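The DICOM-to-CNN preprocessing step can be sketched roughly as follows. This is a minimal illustration, not the project's actual pipeline: the normalization constants are hypothetical placeholders for training-set statistics, and the raw pixel array would in practice come from a library such as pydicom (`pydicom.dcmread(path).pixel_array`).

```python
import numpy as np

def preprocess_xray(pixel_array, img_mean=0.49, img_std=0.25):
    """Scale a raw X-ray pixel array to [0, 1], then standardize with
    (hypothetical) training-set statistics, and add the batch and
    channel dimensions a CNN expects."""
    img = pixel_array.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # scale to [0, 1]
    img = (img - img_mean) / img_std                          # standardize
    return img[np.newaxis, ..., np.newaxis]                   # (1, H, W, 1)

# Dummy 8-bit image standing in for a real DICOM pixel array
dummy = np.random.randint(0, 255, size=(1024, 1024))
batch = preprocess_xray(dummy)
print(batch.shape)  # (1, 1024, 1024, 1)
```

In the real project, the same pipeline would also check DICOM header fields (modality, body part, patient position) to confirm each file is a valid input for the model before feeding it to the network.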
Applying AI to 3D Medical Imaging Data
I learned the fundamental skills to work with 3D medical imaging datasets and frame insights derived from the data in a clinically relevant context. In this project, I went through the steps to create an algorithm that helps clinicians assess hippocampal volume in an automated way, and integrated this algorithm into a clinician’s working environment. The hippocampus is one of the major structures of the human brain, with functions primarily connected to learning and memory. The volume of the hippocampus may change over time, with age, or as a result of disease. In order to measure hippocampal volume, a 3D imaging technique with good soft-tissue contrast is required. MRI provides such imaging characteristics, but manual volume measurement still requires careful and time-consuming delineation of the hippocampal boundary.
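Once a segmentation model has produced a binary mask of the hippocampus, turning it into a volume is straightforward: count the labelled voxels and multiply by the physical volume of one voxel. A minimal sketch, assuming the voxel spacing comes from the scan's NIfTI or DICOM header:

```python
import numpy as np

def hippocampal_volume_mm3(mask, voxel_spacing=(1.0, 1.0, 1.0)):
    """Estimate volume from a binary 3D segmentation mask.
    voxel_spacing is (dx, dy, dz) in mm, taken from the image header."""
    voxel_volume = float(np.prod(voxel_spacing))       # mm^3 per voxel
    return np.count_nonzero(mask) * voxel_volume

# Dummy mask: 2000 labelled voxels at 1 mm isotropic spacing -> 2000 mm^3
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask.flat[:2000] = 1
print(hippocampal_volume_mm3(mask))  # 2000.0
```

The clinically useful output is this number tracked over time, which is why integrating the measurement into the clinician's existing workflow mattered in the project.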
Applying AI to EHR Data
I learned the fundamental skills to work with EHR data and build and evaluate compliant, interpretable models.
In this project, I worked with real, de-identified EHR data to build a regression model that predicts the estimated hospitalization time for a patient and selects/filters patients for the study. I analyzed an EHR dataset, transformed it to the right level, built powerful features with TensorFlow, and modelled the uncertainty and bias with TensorFlow Probability and Aequitas.
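"Transforming the data to the right level" means collapsing line-level EHR records (one row per medication or lab line) into one row per encounter, the level at which a hospitalization-time label lives. A minimal pandas sketch with hypothetical column names, not the project's actual schema:

```python
import pandas as pd

# Hypothetical line-level EHR data: several rows per encounter
lines = pd.DataFrame({
    "encounter_id": [1, 1, 2, 2, 2],
    "ndc_code":     ["a", "b", "a", "c", "c"],   # medication codes
    "los_days":     [3, 3, 7, 7, 7],             # label repeated on every line
})

# Collapse to one row per encounter: the level a regression model expects
encounters = (
    lines.groupby("encounter_id")
         .agg(n_medications=("ndc_code", "nunique"),
              los_days=("los_days", "first"))
         .reset_index()
)
print(encounters)
```

Getting this aggregation wrong (e.g. training on line-level rows) leaks duplicated labels into the model, which is why dataset-level analysis came before feature engineering in the project.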
Applying AI to Wearable Device Data
I learned how to build algorithms that process the data collected by wearable devices and surface insights about the wearer’s health. In this project, I built an algorithm that combines information from two of the sensors – the IMU and PPG sensors. The algorithm can estimate the wearer’s pulse rate in the presence of motion. I relied on my knowledge of the sensors, the techniques that I learned in this course, and my own creativity to design and implement an algorithm that accomplishes the task.
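The core idea of combining the two sensors can be sketched in the frequency domain: pick the strongest PPG frequency in the plausible heart-rate band, but discount candidates that coincide with the dominant accelerometer frequency, since those are likely motion artifacts (e.g. arm swing during running). This is a simplified illustration of that idea, not the submitted algorithm; the penalty width and weight are arbitrary choices.

```python
import numpy as np

def estimate_pulse_rate(ppg, acc_mag, fs, lo=40 / 60.0, hi=240 / 60.0):
    """Estimate pulse rate (BPM) from a PPG signal, discounting the
    dominant accelerometer frequency to suppress motion artifacts."""
    freqs = np.fft.rfftfreq(len(ppg), 1.0 / fs)
    ppg_spec = np.abs(np.fft.rfft(ppg - np.mean(ppg)))
    acc_spec = np.abs(np.fft.rfft(acc_mag - np.mean(acc_mag)))

    band = (freqs >= lo) & (freqs <= hi)               # 40-240 BPM band
    candidates = freqs[band]
    motion_f = candidates[np.argmax(acc_spec[band])]   # dominant motion frequency

    scores = ppg_spec[band].copy()
    scores[np.abs(candidates - motion_f) < 0.2] *= 0.25  # penalize motion peaks
    return candidates[np.argmax(scores)] * 60.0          # Hz -> BPM

# Synthetic 8 s example: 1.5 Hz (90 BPM) pulse plus 2.5 Hz arm-swing motion
fs = 125
t = np.arange(0, 8, 1 / fs)
ppg = np.sin(2 * np.pi * 1.5 * t) + 0.8 * np.sin(2 * np.pi * 2.5 * t)
acc = np.sin(2 * np.pi * 2.5 * t)
print(round(estimate_pulse_rate(ppg, acc, fs)))  # 90
```

Without the accelerometer penalty, the 2.5 Hz motion component could be mistaken for a 150 BPM pulse; using the IMU as a reference for what to ignore is what makes the estimate robust to motion.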