
Apple’s AI-Powered Image Playground App Reportedly Has Bias Issues


Apple’s Image Playground app is said to have bias issues. A machine learning scientist recently shared several outputs generated with the artificial intelligence (AI) app and claimed that they depicted inaccurate skin tones and hair textures on several occasions. These inaccuracies were also said to be paired with specific racial stereotypes, compounding the problem. It is difficult to say whether the alleged issue is a one-off incident or a widespread one. Notably, the Cupertino-based tech giant first introduced the app as part of the Apple Intelligence suite with the iOS 18.2 update.

Apple’s Image Playground App Might Have Bias Issues

Jochem Gietema, the Machine Learning Science Lead at Onfido, shared a blog post highlighting his experiences with Apple’s Image Playground app. In the post, he shared several sets of outputs generated using the app and pointed out instances of racial bias by the AI model powering it. Notably, Gadgets 360 staff members did not notice any such biases while testing the app.

“While experimenting, I noticed that the app altered my skin tone and hair depending on the prompt. Professions like investment banker vs. farmer produce images with very different skin tones. The same goes for skiing vs. basketball, streetwear vs. suit, and, most problematically, affluent vs. poor,” Gietema said in a LinkedIn post.

Alleged biased outputs generated using the Image Playground app (Photo Credit: Jochem Gietema)

Such inaccuracies and biases are not unusual with AI models, which are trained on large datasets that might contain similar stereotypes. Last year, Google’s Gemini AI model faced backlash for similar biases. However, companies are not helpless against such outputs and often implement multiple layers of safeguards to prevent them.

Apple’s Image Playground app also comes with certain restrictions to prevent issues associated with AI-generated images. For instance, the app only supports cartoon and illustration styles to avoid deepfakes. Additionally, images are generated with a narrow field of vision that usually captures only the face along with a small amount of surrounding detail, which is also intended to limit such biases and inaccuracies.

The tech giant also does not allow prompts containing negative words, the names of celebrities or public figures, and similar content, to prevent users from abusing the tool for unintended use cases. However, if the allegations are true, the iPhone maker will need to add further layers of safety to ensure users do not feel discriminated against while using the app.