Twitter said they were delighted by the results, even though those results implied that "biases seem to be embedded in the core saliency model." That seems like a strange thing to be delighted about, but apparently they're pleased that people actually took the time to investigate their biased AI.
What’s Next?
Twitter announced they're "already working towards no longer using saliency-based cropping on Twitter," but noted that "saliency modeling is not unique to Twitter, and there are likely many places where saliency is still in use today." I'm not sure how hard it is to, let me make sure I have this right, "stop using AI" and instead "just let me crop my own images," but considering the Photos app does a decent job of it, I think Twitter's Machine Learning team can figure it out.
Twitter is definitely right that they're not the only company using saliency models, and theirs isn't the only app suffering because of it. Instagram has a long history of stifling women's posts about sexuality, even banning a woman for reporting men who sent her dick pics. Instagram's history with algorithmic sexism is even longer than Twitter's, and instead of centering on image cropping, it comes down to what content is allowed on the app at all.
Meanwhile, TikTok is so plagued by discrimination claims that it seems to treat promoting lighter-skinned content creators as a feature, not a bug. While all of this was kicking off at the older social networks, TikTok was actually telling moderators to suppress posts from users it deemed poor and ugly. As someone who's poor and ugly on TikTok, I can confirm the policy is still in effect.
Top Image: Pexels/Pixabay