Human rights advocacy group Amnesty International has come under fire for using artificial intelligence (AI) to produce images for its social media accounts.
The group drew criticism over the authenticity of the images, which were created to publicize police brutality during Colombia's 2021 national protests. Discrepancies in the images, such as uncanny-looking faces, outdated police uniforms, and incorrectly rendered Colombian flags, exposed them as AI-generated and fueled the backlash.
Amnesty International ultimately retracted the images, which had carried a disclaimer at the bottom of each one stating that they were generated using AI.
The use of AI to create images and visual media is becoming increasingly common, and the controversy surrounding the images has sparked a wider debate on the ethical use of AI in advocacy campaigns.
Media scholar Roland Meyer observed that “image synthesis reproduces and reinforces visual stereotypes almost by default,” adding that the images were “ultimately nothing more than propaganda.”
The episode highlights the potential dangers of using AI to create media, particularly in advocacy campaigns, where authenticity and accuracy are paramount. It follows similar concerns raised by HustleGPT founder Dave Craige, who recently posted a video of the United States Republican Party using AI-generated imagery in a political campaign.
As AI continues to advance and become more widely used, it is essential that we consider its ethical implications and ensure that it is used responsibly and transparently.