The Researcher Who Would Teach Machines to Be Fair
Source: https://www.quantamagazine.org/he-protects-privacy-and-ai-fairness-with-statistics-20230310/

How did you eventually come to study fairness?

I taught a fairness and machine learning course in 2017. That gave me a good idea of the open problems in the field. Around the same time, I gave a talk called “21 Fairness Definitions and Their Politics.” I explained that the proliferation of technical definitions was not for technical reasons, but because there are genuine moral questions at the heart of all this. There’s no way you can have one single statistical criterion that captures all normative desiderata — all the things you want. The talk was well received, so those two experiences together convinced me that I should get into this topic.
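To make that concrete (this sketch is mine, not from the interview): even two of the most cited statistical fairness criteria, demographic parity and equal false positive rates, measure different things, and when base rates differ across groups a classifier generally cannot satisfy both at once. A minimal illustration in Python, with made-up predictions:

```python
import numpy as np

# Toy data, invented for illustration: binary predictions,
# true outcomes, and a group attribute for each individual.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_true = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 0])
group  = np.array(['a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b'])

def demographic_parity(y_pred, group):
    """Positive-prediction rate per group; equal rates = demographic parity."""
    return {g: y_pred[group == g].mean() for g in np.unique(group)}

def false_positive_rate(y_pred, y_true, group):
    """FPR per group; equal FPRs is one component of 'equalized odds'."""
    rates = {}
    for g in np.unique(group):
        negatives = (group == g) & (y_true == 0)  # truly negative cases in group g
        rates[g] = y_pred[negatives].mean()
    return rates

print(demographic_parity(y_pred, group))        # {'a': 0.6, 'b': 0.4}
print(false_positive_rate(y_pred, y_true, group))  # {'a': 0.33, 'b': 0.33}
```

Here the same classifier has equal false positive rates across groups but unequal positive-prediction rates, so it looks fair under one definition and unfair under the other. Which criterion should govern is exactly the kind of normative question the talk was about, not a technical one.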

You also gave a talk on detecting AI snake oil, which was well received. How does that relate to fairness in machine learning?

So the motivation for this was that there’s clearly a lot of genuine technical innovation happening in AI, like the text-to-image program DALL·E 2 or the chess program AlphaZero. It’s really amazing that this progress has been so rapid. A lot of that innovation deserves to be celebrated.

The problem comes when we use this very loose and broad umbrella term “AI” for things like that as well as more fraught applications, such as statistical methods for criminal risk prediction. In that context, the type of technology involved is very different. These are two very different kinds of applications, and the potential benefits and harms are also very different. There is almost no connection at all between them, so using the same term for both is thoroughly confusing.

People are misled into thinking that all this progress they’re seeing with image generation would actually translate into progress toward social tasks like predicting criminal risk or predicting which kids are going to drop out of school. But that’s not the case at all. First of all, we can only do slightly better than random chance at predicting who might be arrested for a crime. And that accuracy is achieved with really simple classifiers. It’s not getting better over time, and it’s not getting better as we collect more data sets. All of these observations stand in contrast to the use of deep learning for image generation, for instance.
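One way to read “slightly better than random chance, with really simple classifiers” (again my illustration, not the interviewee’s code): such claims typically compare a plain model’s AUC against a trivial chance-level baseline. A hedged sketch with scikit-learn, using synthetic stand-in data where a weak signal is built in by construction:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in data: in a real study X would be a handful of tabular features
# and y a future social outcome such as re-arrest. Here the signal is
# deliberately weak and noisy, to mimic that setting.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + rng.normal(scale=3.0, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Chance-level baseline: ignores the features entirely (AUC = 0.5).
baseline = DummyClassifier(strategy="prior").fit(X_train, y_train)

# The "really simple classifier": plain logistic regression.
model = LogisticRegression().fit(X_train, y_train)

print("baseline AUC:", roc_auc_score(y_test, baseline.predict_proba(X_test)[:, 1]))
print("logistic AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The pattern being described is the small, stable gap between the two numbers: on tasks like this, more elaborate models and larger data sets do not move the needle much past the simple baseline.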

How would you distinguish different types of machine learning problems?

This is not an exhaustive list, but there are three common categories. The first category is perception, which includes tasks like describing the content of an image. The second category is what I call “automating judgment,” such as when Facebook wants to use algorithms to determine which speech is too toxic to remain on the platform. And the third one is predicting future social outcomes among people — whether someone would be arrested for a crime, or if a kid is going to drop out of school.

In all three cases, the achievable accuracies are very different, the potential dangers of inaccurate AI are very different, and the ethical implications that follow are very different.

For instance, face recognition, in my classification, is a perception problem. A lot of people talk about face recognition being inaccurate, and sometimes they’re right. But I don’t think that’s because there are fundamental limits to the accuracy of face recognition. That technology has been improving, and it’s going to get better. That’s precisely why we should be concerned about it from an ethical perspective — when you put it into the hands of the police, who might be unaccountable, or of states that are not transparent about its use.
