The Computer Scientist Peering Inside AI’s Black Boxes
Source: https://www.quantamagazine.org/cynthia-rudin-builds-ai-that-humans-can-understand-20230427/#comments

But why does knowing what’s going on under the hood matter?

If you want to trust a prediction, you need to understand how all the computations work. For example, in health care, you need to know if the model even applies to your patient. And it’s really hard to troubleshoot models if you don’t know what’s in them. Sometimes models depend on variables in ways that you might not like if you knew what they were doing. For example, with the power company in New York, we gave them a model that depended on the number of neutral cables. They looked at it and said, “Neutral cables? That should not be in your model. There’s something wrong.” And of course there was a flaw in the database, and if we hadn’t been able to pinpoint it, we would have had a serious problem. So it’s really useful to be able to see into the model so you can troubleshoot it.

When did you first get concerned about non-transparent AI models in medicine?

My dad is a medical physicist. Several years ago, he was going to medical physics and radiology conferences. I remember calling him on my way to work, and he was saying, “You’re not going to believe this, but all the AI sessions are full. AI is taking over radiology.” Then my student Alina [Barnett] roped us into studying [AI models that examine] mammograms. Then I realized, OK, hold on. They’re not using interpretable models. They’re using just these black boxes; then they’re trying to explain their results. Maybe we should do something about this.

So we decided we would try to prove that you could construct interpretable models for mammography that did not lose any accuracy compared to their black box counterparts. We just wanted to prove that it could be done.

How do you make a radiology AI that shows its work?

We decided to use case-based reasoning. That’s where you say, “Well, I think this thing looks like this other thing that I’ve seen before.” It’s like what Dr. House does with his patients in the TV show. Like: “This patient has a heart condition, and I’ve seen her condition before in a patient 20 years ago. This patient is a young woman, and that patient was an old man, but the heart condition is similar.” And so I can reason about this case in terms of that other case.

We decided to do that with computer vision: “Well, this part of the image looks like that part of that image that I’ve seen before.” This would explain the reasoning process in a way that is similar to how a human might explain their reasoning about an image to another human.

These are high-complexity models. They’re neural networks. But as long as they’re reasoning about a current case in terms of its relationship to past cases, that’s a constraint that forces the model to be interpretable. And we haven’t lost any accuracy compared to the benchmarks in computer vision.
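
To make the "this part looks like that part" idea concrete, here is a minimal sketch of prototype-based, case-based scoring. It is not Rudin's actual model; the dimensions, the random "prototypes," and the cosine-similarity scoring are illustrative assumptions. The point it shows is the interpretability constraint: every class score decomposes into similarities between parts of the new image and prototype parts drawn from past cases.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each image is represented as a grid of patch embeddings
# (in a real system these would come from a convolutional backbone).
n_patches, embed_dim = 49, 32          # e.g. a 7x7 feature map
n_prototypes, n_classes = 10, 2        # prototype parts taken from past cases

# Prototypes stand in for parts of previously seen training images.
prototypes = rng.normal(size=(n_prototypes, embed_dim))
# A linear layer turns prototype similarities into class scores.
class_weights = rng.normal(size=(n_prototypes, n_classes))

def prototype_scores(patch_embeddings):
    """Score a new case by how much each of its parts resembles known prototypes."""
    # Cosine similarity between every patch and every prototype.
    p = patch_embeddings / np.linalg.norm(patch_embeddings, axis=1, keepdims=True)
    q = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = p @ q.T                      # (n_patches, n_prototypes)
    # For each prototype, keep the best-matching patch: the evidence that
    # "this part of the image looks like that part of a past image".
    evidence = sims.max(axis=0)         # (n_prototypes,)
    best_patch = sims.argmax(axis=0)    # which patch matched each prototype
    logits = evidence @ class_weights   # (n_classes,)
    return logits, evidence, best_patch

# Classify one new image and inspect the reasoning behind the score.
patches = rng.normal(size=(n_patches, embed_dim))
logits, evidence, best_patch = prototype_scores(patches)
print("class scores:", logits)
for k in np.argsort(evidence)[::-1][:3]:
    print(f"prototype {k}: similarity {evidence[k]:.2f} at patch {best_patch[k]}")
```

Because the final score is just a weighted sum of these prototype similarities, the prediction can be read back as a list of "which part of which past case this new case resembles," which is the constraint that keeps the network interpretable.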

Would this ‘Dr. House’ technique work for other areas of health care?

You could use case-based reasoning for anything. Once we had the mammography project established, my students Alina Barnett and Stark Guo, and a physician collaborator named Brandon Westover, transferred the approach directly to EEG scans for critically ill patients. It's a similar neural architecture, and they trained it within a couple of months, very quickly.

