Robot radiology: Low-cost AI could screen for cervical cancer better than humans
Artificial intelligence, commonly known as A.I., is already exceeding human abilities. Self-driving cars use A.I. to perform some tasks more safely than people. E-commerce companies use A.I. to tailor product ads to customers' tastes more quickly and precisely than any breathing marketing analyst.
And, soon, A.I. will be used to "read" biomedical images more accurately than medical personnel alone, providing better early cervical cancer detection at lower cost than current methods.
However, this does not necessarily mean radiologists will soon be out of business.
"Humans and computers are very complementary," says Sharon Xiaolei Huang, associate professor of computer science and engineering at Lehigh University in Bethlehem, PA. "That's what A.I. is all about."
Huang directs the Image Data Emulation & Analysis Laboratory at Lehigh, where she works on artificial intelligence related to vision and graphics, or, as she says, "creating techniques that enable computers to understand images the way humans do." Among Huang's primary interests is training computers to understand biomedical images.
Now, as a result of 10 years' work, Huang and her team have created a cervical cancer screening technique that, based on an analysis of a very large data set, has the potential to perform as well as or better than human interpretation of traditional screening results, such as Pap tests and HPV tests, at a much lower cost. The technique could be used in less-developed countries, where 80% of deaths from cervical cancer occur.
The researchers are currently seeking funding for the next step in their project, which is to conduct clinical trials using this data-driven detection method.
A more accurate screening tool, at lower cost
Huang's screening system is built on image-based classifiers (algorithms that classify data) constructed from a large number of Cervigram images. Cervigrams are images taken by digital cervicography, a noninvasive visual examination method that photographs the cervix. When read, the images are used to detect cervical intraepithelial neoplasia (CIN), the potentially precancerous change and abnormal growth of squamous cells on the surface of the cervix.
"Cervigrams have great potential as a screening tool in resource-poor regions where clinical tests such as Pap and HPV are too expensive to be made widely available," says Huang. "However, there is concern about Cervigrams' overall effectiveness due to reports of poor correlation between visual lesion recognition and high-grade disease, as well as disagreement among experts when grading visual findings."
Huang thought that computer algorithms could help improve the accuracy of grading lesions from visual information, a suspicion that, so far, is proving correct.
Because Huang's technique has been shown, via an analysis of the very large data set, to be both more sensitive (better able to detect abnormality) and more specific (producing fewer false positives), it could also be used to improve cervical cancer screening in developed countries like the U.S.
"Our method would be an effective low-cost addition to a battery of tests helping to lower the false positive rate since it provides 10% better sensitivity and specificity than any other screening method, including Pap and HPV tests," says Huang.
Correlating visual features and patient data to cancer
To identify the characteristics that are most helpful in screening for cancer, the team created hand-crafted pyramid features (basic components of recognition systems) and investigated the performance of a common deep learning framework known as convolutional neural networks (CNNs) for cervical disease classification.
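To give a flavor of what a hand-crafted pyramid feature looks like, here is a minimal sketch of a PHOG-style descriptor (pyramid histogram of oriented gradients), one of the feature families the paper draws on. The pyramid levels and orientation count are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a PHOG-style pyramid descriptor.
# Pyramid levels and orientation count are illustrative assumptions,
# not the configuration used in the paper.
import numpy as np
from skimage.feature import hog

def phog(gray_image, levels=(1, 2, 4), orientations=9):
    """Concatenate gradient-orientation histograms over a spatial pyramid:
    a 1x1, 2x2, and 4x4 grid of cells, coarse to fine."""
    h, w = gray_image.shape
    features = []
    for level in levels:
        for i in range(level):
            for j in range(level):
                cell = gray_image[i * h // level:(i + 1) * h // level,
                                  j * w // level:(j + 1) * w // level]
                # One orientation histogram per cell: treat the whole
                # cell as a single HOG cell.
                features.append(hog(cell, orientations=orientations,
                                    pixels_per_cell=cell.shape,
                                    cells_per_block=(1, 1)))
    return np.concatenate(features)  # length = orientations * (1 + 4 + 16)
```

Concatenating several such pyramid descriptors (texture, color, gradients) and feeding them to a classic classifier is, in broad strokes, the hand-crafted half of the benchmark the team describes.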
They describe their results in an article in the March issue of Pattern Recognition titled "Multi-feature based benchmark for cervical dysplasia classification." The researchers have also released the multi-feature data set, along with extensive evaluations using seven classic classifiers.
To build the screening tool, Huang and her team used data from 1,112 patient visits, where 345 of the patients were found to have lesions that were positive for moderate or severe dysplasia (considered high-grade and likely to develop into cancer) and 767 had lesions that were negative (considered low-grade with mild dysplasia typically cleared by the immune system).
These data were selected from a large medical archive collected by the U.S. National Cancer Institute, consisting of information from 10,000 anonymized women who were screened using multiple methods, including Cervigrams, over a number of visits. The data also contain the diagnosis and outcome for each patient.
"The program we've created automatically segments tissue regions seen in photos of the cervix, correlating visual features from the images to the development of precancerous lesions," says Huang. "In practice, this could mean that medical staff analyzing a new patient's Cervigram could retrieve data about similar cases not only in terms of optics, but also pathology since the dataset contains information about the outcomes of women at various stages of pathology."
From the study: ." with respect to accuracy and sensitivity, our hand-crafted PLBP-PLAB-PHOG feature descriptor with random forest classifier (RF.PLBP-PLAB-PHOG) outperforms every single Pap test or HPV test, when achieving a specificity of 90%. When not constrained by the 90% specificity requirement, our image-based classifier can achieve even better overall accuracy. For example, our fine-tuned CNN features with Softmax classifier can achieve an accuracy of 78.41% with 80.87% sensitivity and 75.94% specificity at the default probability threshold 0.5. Consequently, on this data set, our lower-cost image-based classifiers can perform comparably or better than human interpretation based on widely-used Pap and HPV tests."
According to the researchers, their classifiers achieve higher sensitivity in a particularly important area: detecting moderate and severe dysplasia or cancer.
Exploring classification with an improved imaging technique
Among Huang's other projects is a collaboration with Chao Zhou, assistant professor of electrical and computer engineering at Lehigh. They are working on using an established medical imaging technique called optical coherence microscopy (OCM), most commonly used in ophthalmology, to analyze breast tissue and produce computer-aided diagnoses. Their analysis is designed to help surgeons minimize the tissue removed while operating on cancer patients by providing highly accurate, real-time information about the health of the excised tissue.
They recently conducted a feasibility study with promising results, which have been published in Medical Image Analysis in an article titled "Integrated local binary pattern texture features for classification of breast tissue imaged by optical coherence microscopy."
Huang and Zhou used multi-scale and integrated image features to improve classification accuracy, achieving high sensitivity (100%) and specificity (85.2%) for cancer detection with OCM images.
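A minimal sketch of the multi-scale idea is below, assuming uniform LBP histograms computed at several radii and concatenated before a standard classifier. The (points, radius) scale choices and the SVM are illustrative assumptions, not the published pipeline.

```python
# Sketch: multi-scale LBP descriptor for a grayscale OCM image patch.
# The (n_points, radius) scales and the SVM are illustrative assumptions.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def multiscale_lbp(gray_patch, scales=((8, 1), (16, 2), (24, 3))):
    """Concatenate uniform-LBP histograms computed at several (P, R) scales."""
    feats = []
    for n_points, radius in scales:
        lbp = local_binary_pattern(gray_patch, n_points, radius, method="uniform")
        n_bins = n_points + 2  # uniform patterns plus one non-uniform bin
        hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
        feats.append(hist)
    return np.concatenate(feats)

# Toy training data: descriptors of labeled tissue patches.
rng = np.random.default_rng(0)
patches = rng.random((40, 64, 64))
labels = rng.integers(0, 2, 40)     # 1 = cancerous, 0 = benign (toy labels)
X = np.stack([multiscale_lbp(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
```

Pooling texture statistics across scales in this way is one common route to the kind of robustness the feasibility study reports.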
"Chao has done a lot of work in new instrumentation improving the quality of biomedical images," says Huang. "Since he works on the images or data inputs and I work on the results of the data analysis or outputs, our collaboration is a natural fit."
You can read the full article here:
Sunhua Wan, Hsiang-Chieh Lee, Xiaolei Huang, Ting Xu, Tao Xu, Xianxu Zeng, Zhan Zhang, Yuri Sheikine, James L. Connolly, James G. Fujimoto, Chao Zhou. Integrated local binary pattern texture features for classification of breast tissue imaged by optical coherence microscopy. Medical Image Analysis, 2017; 38: 104. DOI: 10.1016/j.media.2017.03.002