Medical image segmentation for detecting structural anomalies in data-constrained settings
Faculty: Dr. Vaanathi Sundaresan (Computational and Data Science) & Dr. Jaya Prakash (Instrumentation and Applied Physics)
Proposal: Medical imaging plays a central role in healthcare; being non-invasive, it is widely used in the diagnosis and treatment of a broad range of diseases. Widely used imaging modalities such as MRI, CT, X-ray and ultrasound provide a wealth of information about different disease signs, supporting precise diagnosis and long-term prognosis. Computer vision and deep learning methods for ‘outlier’ detection (or, more generally, ‘out-of-distribution’ detection) have been proposed to detect anomalies based on their deviation from normal structures in different modalities (e.g., cardiac MRI, abdominal CT scans) and different organs of the body. In particular, deep learning methods have been widely used for the detection, segmentation, spatial modelling and prognosis of pathological signs in neuroimaging data. Convolutional neural networks, one of the most popular feed-forward deep learning architectures, have been shown to outperform classical machine learning methods. However, in real-world clinical applications, it is often difficult to obtain large amounts of manually annotated data from radiologists that contain a representative set of lesions. This is further complicated by practical limitations such as missing modalities or inconsistent sets of modalities across datasets.
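To make the ‘out-of-distribution’ idea concrete under scarce annotations, one common approach is to train a model only on scans without visible pathology and flag regions that it fails to explain at test time. The sketch below is purely illustrative (the PyTorch framework, autoencoder architecture, image size and training loop are assumptions, not the project's prescribed method):

```python
# Minimal sketch (assumed PyTorch): a convolutional autoencoder trained only on
# "normal" 2D slices; high reconstruction error marks candidate anomalies.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 64
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),              # 64 -> 128
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy stand-in for a batch of pathology-free slices (single channel, 128x128).
normal_batch = torch.rand(8, 1, 128, 128)

for step in range(100):                      # train only on normal anatomy
    recon = model(normal_batch)
    loss = F.mse_loss(recon, normal_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At test time, the per-voxel reconstruction error serves as an anomaly map.
test_slice = torch.rand(1, 1, 128, 128)
with torch.no_grad():
    anomaly_map = (model(test_slice) - test_slice) ** 2
candidate_mask = anomaly_map > anomaly_map.mean() + 3 * anomaly_map.std()
```

Because the model never sees lesions during training, large reconstruction errors act as a label-free anomaly score, which is what makes this family of methods attractive when radiologist annotations are limited.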
Hence, the aim of this project is to develop robust, open-source artificial intelligence (AI)-based deep learning tools for the accurate, automated segmentation of pathological findings across medical imaging modalities, supporting the early and differential diagnosis of various diseases. To develop such methods, the idea is to use semi-, self- and weakly-supervised computer-aided diagnostic techniques built on machine learning and computer vision. When applying deep learning models in clinical settings, adaptation across domains is particularly useful in multimodal tasks, for example when one of the modalities is missing, has low image resolution, or needs to be adapted to the modality for which manual labels are available.
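As a concrete illustration of the semi-supervised direction, pseudo-labelling lets a model trained on a small annotated set supply targets for unlabelled scans, keeping only high-confidence voxels. The sketch below uses a deliberately tiny stand-in network and assumed names; it is not the project's fixed design:

```python
# Minimal pseudo-labelling sketch (assumed PyTorch); a U-Net-style segmentation
# network would replace the toy model in practice.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 2, 1),            # 2 classes: background / lesion
)

unlabelled_scans = torch.rand(4, 1, 128, 128)   # scans without manual annotations

# Step 1: predict on unlabelled data and record per-voxel confidence.
with torch.no_grad():
    probs = torch.softmax(model(unlabelled_scans), dim=1)
    confidence, pseudo_labels = probs.max(dim=1)
keep = confidence > 0.9            # confidence threshold is a tunable assumption

# Step 2: mark low-confidence voxels so the loss ignores them
# (-100 is the default ignore_index of nn.CrossEntropyLoss).
pseudo_labels[~keep] = -100

# Step 3: retrain on the pseudo-labelled scans (guarding against a batch
# in which no voxel passed the confidence threshold).
if keep.any():
    criterion = nn.CrossEntropyLoss(ignore_index=-100)
    loss = criterion(model(unlabelled_scans), pseudo_labels)
    loss.backward()
```

In practice this unlabelled-data loss would be combined with the supervised loss on the annotated subset, and the confidence threshold tuned on a validation set.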
The project also aims at the detection of anomalies (abnormalities) across multiple, diverse imaging modalities, along with their classification and quantification. The performance of the algorithms will be evaluated on multiple datasets and compared against existing state-of-the-art methods.
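For the evaluation stage, overlap metrics such as the Dice similarity coefficient are the standard yardstick for segmentation quality. The small sketch below (function name and toy masks are illustrative) shows how it would be computed against a manual reference mask:

```python
# Illustrative Dice similarity coefficient between a predicted and a manual mask.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps))

# Toy example: two partially overlapping binary lesion masks.
pred = np.zeros((64, 64), dtype=np.uint8);  pred[10:30, 10:30] = 1
truth = np.zeros((64, 64), dtype=np.uint8); truth[15:35, 15:35] = 1
print(f"Dice = {dice_coefficient(pred, truth):.3f}")   # ≈ 0.56
```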
Background needed: Linear algebra, signal processing, machine learning, programming.
Basic Qualifications: B.E./B.Tech. in EE/ECE/IN/CS/IT/BME (or) M.Sc. (Mathematics/Physics)