Medical Imaging Convention

Olympia London



News & Press Releases


31 Mar 2021

Covid-focused AI – how useful is it?

Seamus Daley-Dee

Since the beginning of the pandemic, AI and machine learning technologies have played an increasing role in how we diagnose patients and what treatments can be offered as a result. Most of the stories we have seen over the last year have been largely positive, but now a team of academic researchers in the UK says otherwise.


As AI in Healthcare first reported a few days ago, a team of academic researchers in the UK has completed a systematic review of 62 representative studies on the use of AI for Covid-19 diagnostics and prognostics on X-rays and CT scans. Their findings took them by surprise: the reviewers found “deficiencies so widespread that, by the team’s lights, the entire body of research is rendered moot.” Quite the surprise, considering there have been stories such as AI outperforming clinicians in successfully diagnosing patients with breast cancer.


The researchers went on to say that not one of the machine learning models described in the 62 studies is “of potential clinical use due to methodological flaws and/or underlying biases.” Michael Roberts of the University of Cambridge added: “This is a major weakness, given the urgency with which validated Covid-19 models are needed.”


Among the flaws Roberts and colleagues describe in their review, published this month in Nature Machine Intelligence: 

  • Almost all papers had a high (45 of 62) or unclear (11 of 62) risk of bias for their participants. The reviewers deemed only six as having a low risk of bias. 

  • For 38 of the 62 papers, the reviewers could not judge biases in predictors because the predictors were based on either unknown or abstract features in the medical images. 

  • Just 10 papers had a low risk of bias in their authors’ analysis. 
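As a quick sanity check, the participant-bias counts reported above can be tallied in a few lines (a minimal sketch; the category labels here are shorthand, not the review's own terminology):

```python
# Participant risk-of-bias counts quoted above from the review
# of 62 studies: 45 high risk, 11 unclear, 6 low risk.
bias_counts = {"high": 45, "unclear": 11, "low": 6}

total = sum(bias_counts.values())
assert total == 62  # the three categories account for every study

# Share of studies in each category, as a percentage to one decimal place
shares = {k: round(100 * v / total, 1) for k, v in bias_counts.items()}
print(shares)  # prints {'high': 72.6, 'unclear': 17.7, 'low': 9.7}
```

In other words, roughly nine in ten studies carried a high or unclear risk of participant bias.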


Such bias inevitably distorts the data and the quality of the evaluation when assessing whether AI models are ready for clinical practice. In the review, Roberts and colleagues offer corresponding recommendations for most of the shortcomings they identify. Those recommendations fall into five primary areas: the data used for model development and common pitfalls; the evaluation of trained models; reproducibility; documentation in manuscripts; and the peer-review process. Overall, the review suggests that AI and machine learning systems “can be continuously improved if researchers worldwide submit their data for public review.”


To add insult to injury, a separate review has now published its findings in Science Translational Medicine, and it does not make for pretty reading. The researchers behind this second exercise found that just 23% of healthcare machine-learning studies were reproducible on differing datasets. By comparison, 80% of computer vision studies and 58% of NLP studies had such conceptual reproducibility. IEEE Spectrum analysed both literature reviews: “Healthcare is an especially challenging area for machine learning research because many datasets are restricted due to health privacy concerns and even experts may disagree on a diagnosis for a scan or patient,” writes freelance journalist Megan Scudellari. “Still, researchers are optimistic that the field can do better.”


It would seem that much of the progress made over the last year with Covid-focused AI has been of little practical use across the globe. However, new technology clearly does have a remit in the healthcare industry, with the AI healthcare market expected to be worth around 26 billion globally by 2026. It is equally clear from these reviews that more data, and more importantly quality, unbiased data, is needed before we can implement AI into clinical practice.
