World Class Interactive Webinars Congress

Speakers

Yizhou Yu

Professor at The University of Hong Kong; formerly a faculty member at the University of Illinois at Urbana-Champaign

Biography: Yizhou Yu is a professor at The University of Hong Kong and was a faculty member at the University of Illinois at Urbana-Champaign for more than ten years. Prof. Yu has served on the editorial boards of IEEE Transactions on Image Processing, IET Computer Vision, IEEE Transactions on Visualization and Computer Graphics, and The Visual Computer. He has also served on the program committees of many leading international conferences. His current research interests include computer vision, deep learning, AI for medicine, and geometric computing.

Title: Deep Learning-Based Diagnostic Systems for Chest Abnormalities and Major Respiratory Diseases

Abstract: Respiratory diseases impose a tremendous global health burden on large patient populations. In this talk, I first introduce DeepMRD, a deep learning-based medical image interpretation system for the diagnosis of major respiratory diseases that automatically identifies a wide range of radiological abnormalities in computed tomography (CT) and chest X-ray (CXR) images from real-world, large-scale datasets. DeepMRD comprises four deep neural networks (two CT-Nets and two CXR-Nets) that exploit contrastive learning to generate pre-training parameters, which are then fine-tuned on a retrospective dataset collected from a single institution. The performance of DeepMRD was evaluated for abnormality identification and disease diagnosis on data from two institutions: an internal testing dataset from the same institution as the training data, and an external dataset collected from another institution to evaluate the model's generalizability and robustness on an unrelated population. In this difficult multi-class diagnosis task, our system achieved an average area under the receiver operating characteristic curve (AUC) of 0.856 (95% confidence interval (CI): 0.843–0.868) and 0.841 (95% CI: 0.832–0.887) for abnormality identification, and 0.900 (95% CI: 0.872–0.958) and 0.866 (95% CI: 0.832–0.887) for the diagnosis of major respiratory diseases on the CT and CXR datasets, respectively. Furthermore, to achieve clinically actionable diagnosis, we deployed a preliminary version of DeepMRD into the clinical workflow, where it performed on par with senior experts in disease diagnosis. These findings demonstrate the potential of DeepMRD as a triage tool that accelerates the medical workflow and facilitates early diagnosis of respiratory diseases, supporting improved clinical diagnosis and decision-making.
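The abstract above describes a two-stage training paradigm: contrastive learning generates pre-training parameters, which are then fine-tuned for abnormality identification. The PyTorch sketch below illustrates that paradigm in a minimal form; the module names, the SimCLR-style NT-Xent loss, the ResNet-50 backbone, and the number of abnormality labels are illustrative assumptions, not the published DeepMRD implementation.

    # Minimal sketch: contrastive pre-training followed by supervised
    # fine-tuning for multi-label abnormality identification.
    # All names and design choices are illustrative, not DeepMRD's code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision.models import resnet50

    class ContrastiveEncoder(nn.Module):
        """Backbone plus projection head for SimCLR-style contrastive learning."""
        def __init__(self, feat_dim=128):
            super().__init__()
            self.backbone = resnet50(weights=None)
            self.backbone.fc = nn.Identity()          # keep the 2048-d features
            self.projector = nn.Sequential(
                nn.Linear(2048, 512), nn.ReLU(inplace=True), nn.Linear(512, feat_dim)
            )

        def forward(self, x):
            return F.normalize(self.projector(self.backbone(x)), dim=1)

    def nt_xent_loss(z1, z2, temperature=0.1):
        """NT-Xent contrastive loss over two augmented views of the same batch."""
        z = torch.cat([z1, z2], dim=0)                       # (2N, d), unit-normalized
        sim = z @ z.t() / temperature                        # cosine similarities
        n = z1.size(0)
        mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
        sim.masked_fill_(mask, float('-inf'))                # exclude self-pairs
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
        return F.cross_entropy(sim, targets)

    class AbnormalityClassifier(nn.Module):
        """Stage 2: reuse the pre-trained backbone for multi-label fine-tuning."""
        def __init__(self, pretrained_encoder, num_abnormalities=20):
            super().__init__()
            self.backbone = pretrained_encoder.backbone      # contrastively pre-trained weights
            self.head = nn.Linear(2048, num_abnormalities)

        def forward(self, x):
            return self.head(self.backbone(x))               # logits; train with BCEWithLogitsLoss

In stage one the encoder would be trained with nt_xent_loss on two augmentations of each unlabeled image; in stage two the backbone is reused inside AbnormalityClassifier and trained with a multi-label binary cross-entropy loss on the annotated retrospective data.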

Pre-training lays the foundation for the recent successes of deep learning in radiograph analysis. It learns transferable image representations by conducting large-scale fully supervised or self-supervised learning on a source domain; however, supervised pre-training requires a complex and labour-intensive two-stage human-assisted annotation process, whereas self-supervised learning cannot compete with the supervised paradigm. To tackle these issues, in the second part of this talk I introduce a cross-supervised methodology called reviewing free-text reports for supervision (REFERS), which acquires free supervision signals from the original radiology reports accompanying the radiographs. The proposed approach employs a vision transformer and is designed to learn joint representations from the multiple views within every patient study. REFERS outperforms its transfer learning and self-supervised learning counterparts on four well-known X-ray datasets under extremely limited supervision. Moreover, REFERS even surpasses methods based on a source domain of radiographs with human-assisted structured labels, and therefore has the potential to replace canonical pre-training methodologies.
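As a rough illustration of the report-supervised idea behind REFERS, the sketch below encodes each view of a patient study with a vision transformer, fuses the views into a single study representation, and draws its training signal from matching that representation to the study's free-text report. The component names, the attention-based view fusion, the stand-in report encoder, and the InfoNCE-style matching loss are assumptions for illustration only, not the published method.

    # Illustrative sketch of report-supervised pre-training: per-view ViT
    # encoding, cross-view fusion, and study-report matching as supervision.
    # All modules and the loss are assumptions, not the REFERS implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision.models import vit_b_16

    class StudyEncoder(nn.Module):
        """ViT per view plus attention pooling across the views of one study."""
        def __init__(self, embed_dim=768):
            super().__init__()
            self.vit = vit_b_16(weights=None)
            self.vit.heads = nn.Identity()                  # expose 768-d class-token features
            self.view_attn = nn.Linear(embed_dim, 1)        # scalar weight per view

        def forward(self, views):                           # views: (V, 3, 224, 224)
            feats = self.vit(views)                         # (V, 768)
            weights = torch.softmax(self.view_attn(feats), dim=0)
            return (weights * feats).sum(dim=0)             # (768,) study representation

    class ReportEncoder(nn.Module):
        """Small text transformer standing in for a radiology-report encoder."""
        def __init__(self, vocab_size=30522, embed_dim=768, layers=4):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            layer = nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

        def forward(self, token_ids):                       # token_ids: (B, L)
            h = self.encoder(self.embed(token_ids))
            return h.mean(dim=1)                            # (B, 768) report representation

    def study_report_matching_loss(study_emb, report_emb, temperature=0.07):
        """Symmetric InfoNCE: each study should match its own report, not others'."""
        s = F.normalize(study_emb, dim=1)
        r = F.normalize(report_emb, dim=1)
        logits = s @ r.t() / temperature
        targets = torch.arange(s.size(0), device=s.device)
        return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

In practice the per-study representations and tokenized reports would be batched, and the symmetric loss pulls matching study-report pairs together while pushing mismatched pairs apart; the resulting image encoder can then be fine-tuned on downstream X-ray classification tasks, mirroring the limited-supervision transfer setting described above.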