Stanford, U.S.A.: In a major technological breakthrough, researchers at Stanford University have developed a new artificial intelligence (AI) algorithm that can reliably screen chest X-rays for multiple types of disease.
The algorithm, named CheXNeXt, is the first to simultaneously evaluate X-rays for a multitude of possible maladies and to return results consistent with the readings of radiologists.
The scientists trained the algorithm to detect 14 different pathologies. For 10 diseases, the algorithm performed just as well as radiologists; for three, it underperformed compared with radiologists; and for one, the algorithm outdid the experts.
“Usually, we see AI algorithms that can detect a brain haemorrhage or a wrist fracture — a very narrow scope for single-use cases,” said Matthew Lungren, MD, MPH, assistant professor of radiology. “But here we’re talking about 14 different pathologies analyzed simultaneously, and it’s all through one algorithm.”
In a similar study, published on arXiv.org, Cornell University's online distribution system for research papers, Speciality Medical Dialogues reported that artificial intelligence using a deep learning algorithm trained on a large quantity of labelled data can accurately detect abnormalities on chest X-rays.
The scientists used about 112,000 X-rays to train the algorithm. A panel of three radiologists then reviewed a different set of 420 X-rays, one by one, for the 14 pathologies. Their conclusions served as a “ground truth” — a diagnosis that experts agree is the most accurate assessment — for each scan. This set would eventually be used to test how well the algorithm had learned the telltale signs of disease in an X-ray. It also allowed the team of researchers to see how well the algorithm performed compared with the radiologists.
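The evaluation set-up described above — scoring both the algorithm and individual radiologists against a consensus “ground truth” on a held-out set — can be sketched roughly as follows. This is a minimal illustration with toy data for a single pathology, not the study’s actual code, data, or metrics:

```python
# Hypothetical sketch: comparing an algorithm and a human reader against a
# radiologist-consensus "ground truth" on a held-out test set (toy data).

def sensitivity_specificity(truth, preds):
    """Return (sensitivity, specificity) for binary disease labels."""
    tp = sum(1 for t, p in zip(truth, preds) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(truth, preds) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(truth, preds) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(truth, preds) if t == 1 and p == 0)
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec

# Consensus ground truth for one pathology on a small held-out set (illustrative)
ground_truth = [1, 0, 1, 1, 0, 0, 1, 0]
algorithm    = [1, 0, 1, 0, 0, 0, 1, 0]   # model's binarized predictions
radiologist  = [1, 0, 0, 1, 0, 1, 1, 0]   # one human reader's calls

print("algorithm  :", sensitivity_specificity(ground_truth, algorithm))
print("radiologist:", sensitivity_specificity(ground_truth, radiologist))
```

In the actual study this kind of comparison was carried out per pathology across all 14 conditions, with several radiologists taking the same “final exam” as the model.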
“We treated the algorithm like it was a student; the NIH data set was the material we used to teach the student, and the 420 images were like the final exam,” Lungren said. To further evaluate the performance of the algorithm compared with human experts, the scientists asked an additional nine radiologists from multiple institutions to also take the same “final exam.”
The research team is working on a subsequent version of CheXNeXt that will bring the researchers even closer to in-clinic testing.