To find abnormal tissue in mammograms, radiologists have increasingly relied on computers to spot potential tumors. In the U.S., three out of four screening mammograms now include computer-assisted detection (CAD), and Medicare pays for the procedure to the tune of $30 million annually. A group of researchers reports, however, that the technology does not improve a doctor's chance of detecting cancer. Nor did CAD significantly decrease false positives (when an initial exam erroneously suggests a tumor is present), which lead to more screenings and biopsies.
The U.S. Food and Drug Administration approved CAD in 1998 in the hope that artificial intelligence software and digital image-processing algorithms would help physicians more accurately identify abnormalities in radiological images. CAD generally relies on pattern recognition and highlights possible areas of concern for radiologists to review.
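The basic idea, flagging image regions that stand out from normal tissue, can be illustrated with a toy sketch. This is not a real CAD algorithm; actual systems use far more sophisticated pattern recognition, and the image and threshold below are invented for illustration only.

```python
# Toy sketch of the "highlight areas of concern" step: scan a tiny
# grayscale image (a list of rows of pixel intensities, 0-255) and mark
# every pixel whose brightness exceeds a fixed threshold. Real CAD
# software uses trained pattern-recognition models, not a raw threshold.

def flag_suspicious_pixels(image, threshold=200):
    """Return (row, col) coordinates of pixels brighter than threshold."""
    marks = []
    for r, row in enumerate(image):
        for c, intensity in enumerate(row):
            if intensity > threshold:
                marks.append((r, c))
    return marks

# Invented 4x4 "scan" with one bright cluster standing in for a
# potential abnormality.
toy_scan = [
    [10,  12,  11, 13],
    [14, 230, 228, 12],
    [11, 225,  13, 10],
    [12,  11,  10, 14],
]

print(flag_suspicious_pixels(toy_scan))  # [(1, 1), (1, 2), (2, 1)]
```

A radiologist would then review each flagged location, which is where the false-positive problem discussed below arises: every mark, genuine or spurious, demands attention.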
U.S. lawmakers bought into the technology—earlier this year they amended the Social Security Act to provide an increased payment for chest X-rays using CAD for early detection of lung cancer. Medicare has also been amended to include supplemental coverage for CAD use, according to the researchers, led by Joshua Fenton, an assistant professor of family and community medicine at the University of California, Davis.
The researchers, whose study appeared online July 27 in the Journal of the National Cancer Institute, analyzed data from 684,956 women and more than 1.6 million mammograms administered between 1998 and 2006 at Breast Cancer Surveillance Consortium facilities that used CAD. They examined the relationship between CAD use and screening performance, cancer detection rates and breast cancer prognosis, concluding that CAD was not associated with higher breast cancer detection rates or with more favorable stage, size or lymph node status of invasive breast cancers.
In short, "it is unclear if the benefits of CAD during screening mammography outweigh its potential risks and costs," the researchers wrote.
Fenton’s concerns about CAD use are not new—he expressed them in the April 5, 2007, issue of the New England Journal of Medicine after conducting a study sponsored by the National Cancer Institute. Every time the CAD software marks a real cancer, according to the 2007 research, a radiologist has to consider about 2,000 additional false-positive marks, "making it very difficult to distinguish between real cancers and those that are not cancer."
Meanwhile, researchers such as those at the Ohio Supercomputer Center are working to improve the quality of CAD.