Deep neural networks (DNNs) help radiologists locate, characterize, and segment tumors. These software tools enhance imagery in regions otherwise obscured by signal noise or streak artifacts, and they shorten imaging sessions, thereby reducing radiation exposure. However, "black box" DNNs defy clinical scrutiny, making patient consultations confusing and informed consent nearly impossible. For DNN-based automated tasks to be medically sound, physicians must be able to explain them. Moreover, many X-rays, MRIs, and other medical images are difficult for either clinicians or machines to classify as healthy or diseased. Automatic decisions made by uncertain DNNs carry serious consequences: incorrect diagnoses, unnecessary procedures, and unacceptable radiation risk. Unfortunately, few DNNs disclose their level of uncertainty.
A Rowan University (RWN) team has developed eVI, a tool that quantifies and reduces uncertainty in DNNs. eVI promises to increase the transparency, trustworthiness, and accuracy of machine learning-based medical imaging.
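This summary does not disclose how eVI quantifies uncertainty, so the sketch below illustrates one generic, well-known approach instead: Monte Carlo dropout, where a network is run many times with random dropout left on and the spread of its outputs serves as an uncertainty estimate. All names, sizes, and values are illustrative assumptions, not eVI's actual algorithm.

```python
import numpy as np

# Hypothetical sketch: Monte Carlo dropout as a generic way to estimate a
# network's predictive uncertainty. Not eVI's method; all values here are
# illustrative assumptions.

rng = np.random.default_rng(0)

# A tiny fixed two-layer network (weights would normally be learned).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def predict_once(x, drop_p=0.5):
    """One stochastic forward pass with dropout left on at inference."""
    h = np.maximum(x @ W1, 0.0)           # ReLU hidden layer
    mask = rng.random(h.shape) >= drop_p  # random dropout mask
    h = h * mask / (1.0 - drop_p)         # inverted-dropout scaling
    return (h @ W2).item()

def predict_with_uncertainty(x, n_samples=200):
    """Mean prediction and standard deviation across stochastic passes."""
    samples = np.array([predict_once(x) for _ in range(n_samples)])
    return samples.mean(), samples.std()

x = rng.normal(size=(1, 8))
mean, std = predict_with_uncertainty(x)
```

A large standard deviation flags an input the model is unsure about, for example an ambiguous scan that should be routed to a clinician rather than classified automatically.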
Competitive Advantages
Opportunity
Rowan University is seeking one or more partners for the further development and commercialization of this technology. The inventor is available to collaborate with interested companies.