Twitter admits its AI has a racial bias in cropping images; vows to give users more choices
Earlier this month Twitter was accused of racial bias in the way images were cropped and displayed on the site. Acknowledging that its image-cropping algorithm may not yet be mature enough to prevent such bias, Twitter has vowed to give users more choices in how images appear on its platform.
The micro-blogging platform says it tested the existing machine learning (ML) system that decides how to crop images for bias before deploying it on the platform.
"While our analyses to date haven't shown racial or gender bias, we recognise that the way we automatically crop photos means there is a potential for harm," Twitter CTO Parag Agrawal and CDO Dantley Davis wrote in a blog post on Thursday.
"We should've done a better job of anticipating this possibility when we were first designing and building this product," they added. The company said it will decrease its reliance on ML-based image cropping by giving people more visibility and control over what their images will look like in a Tweet.
Twitter is currently conducting additional analysis to add further rigour to its testing and is exploring ways to open-source the analysis.
The image cropping system relies on a saliency model, which predicts where people are likely to look first in an image.
For its initial bias analysis, Twitter tested pair-wise preference between two demographic groups (White-Black, White-Indian, White-Asian and male-female).
"We located the maximum of the saliency map, and recorded which demographic category it landed on. We repeated this 200 times for each pair of demographic categories and evaluated the frequency of preferring one over the other," Agrawal said.
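Twitter has not published the code for this analysis, but the procedure Agrawal describes can be sketched as follows. This is a hypothetical harness: random values stand in for a real saliency model, the side-by-side layout (group A's photo on the left, group B's on the right) is an assumption, and all function names are illustrative.

```python
import random

def preferred_region(saliency, width):
    """Locate the maximum of a flattened saliency map and report
    which half of the paired image it lands on ('A' = left, 'B' = right)."""
    idx = max(range(len(saliency)), key=saliency.__getitem__)
    col = idx % width  # column of the maximum within the image grid
    return 'A' if col < width // 2 else 'B'

def preference_frequency(trials=200, width=10, height=10, seed=0):
    """Repeat the max-saliency check over many image pairs and return
    the frequency with which each demographic group is preferred."""
    rng = random.Random(seed)
    counts = {'A': 0, 'B': 0}
    for _ in range(trials):
        # Stand-in for the model's saliency scores over a side-by-side pair.
        saliency = [rng.random() for _ in range(width * height)]
        counts[preferred_region(saliency, width)] += 1
    return {group: n / trials for group, n in counts.items()}
```

An unbiased system would put the saliency maximum on each side roughly half the time; a large, consistent skew toward one group across the 200 trials would indicate the kind of preference the analysis was designed to detect.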
Twitter said giving people more choices for image cropping and previewing what they'll look like in the Tweet composer may help reduce the risk of harm.
Going forward, the company will follow the "what you see is what you get" principle of design, meaning the photo you see in the Tweet composer is what it will look like in the Tweet.
There may be some exceptions to this, such as photos that aren't a standard size or are really long or wide.
"Bias in ML systems is an industry-wide issue, and one we're committed to improving on Twitter. We're aware of our responsibility, and want to work towards making it easier for everyone to understand how our systems work," Agrawal said.
*Edited from an IANS report