We need to think about language – Animated Inclusive Personae (Part 2)

By Katie Stripe, Imperial College London.

This post is the second in a series based on the Animated Inclusive Personae (AIP) project (Stripe and Meadows, 2024) which, at its heart, is about creating digital personae that genuinely represent the diversity of our students. This post is less an update on the project and more an exploration of some of the issues that have arisen, primarily around language, and a request to the community for reference material, thoughts, and ideas for collaboration.

In part one of this series (Stripe, 2024), we discussed the difficulty of finding appropriate images, which was the driver for the project. In response, we are commissioning artwork from human artists, which is going really well, but there is another issue: scalability, and the language we use to describe what we want from images, particularly when it comes to racial identity.

As part of the project we run a workshop on developing inclusive curricula using digital personas (Stripe and Dallison, 2024), which explores how to create an inclusive persona and in which we share the stock images we used for the initial project.

[Image: a set of 14 stock photos showing a group of people with a range of racial diversity but limited diversity in terms of age and body type]

There is a broad range of visual diversity represented, but one of the most frequent comments is that they are all ‘beautiful’. They all look like models. In some ways that is inevitable, because they probably are. How else does an image get into a photo library? This is why we started the project and hired artists to create illustrations that we control. However, this has raised a different set of questions, revolving around the language used to instruct an artist to create an illustration that represents a certain demographic.

This has wider implications than commissioning artwork alone. As we discussed in the last post, one of the issues with searching image banks is the way assets are tagged. The language used, and the elements described in the metadata, are important for finding appropriate images, but metadata is by no means the only place this descriptive language appears. We have not yet explored generative AI for creating images, partly because supporting artists by commissioning them to create assets is a good thing to do if you have the resources, but there is also the question of what exactly you would ask it to generate.

What language would you use?

Finally, however the images are generated, the assets need alt text, which again poses the question: what elements do you describe, and what language do you use?

Shutterstock’s metadata requirements (Shutterstock, Inc., 2024) for submitted images ask for a minimum of seven and a maximum of fifty keywords, and for the image to be classified against a set of categories. The categories are a finite list, but there is no real guidance on what should go in the keywords. Nappy.co (SHADE and Boogie Brands, 2024) is an image library of ‘beautiful photos of Black and Brown people’ which references the colour of the person’s skin in the metadata in some instances, but not all. With such a diverse range of skin tones, representing broad and diverse communities, is it enough to simply say ‘Black’ when defining an image? If not, what do you say?
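To make the gap concrete: the keyword-count rule is easy to express in code, but nothing in it constrains what the keywords say. The sketch below is purely illustrative; the field names, category, and example tags are my own invention, not Shutterstock’s actual schema.

```python
# Illustrative sketch of a stock-image metadata record and the
# seven-to-fifty keyword rule described above. Field names and
# example tags are hypothetical, not Shutterstock's schema.

MIN_KEYWORDS, MAX_KEYWORDS = 7, 50

def validate_keywords(keywords):
    """Return True if the keyword list meets the 7-50 count rule.

    Note that this is the only check the rule supports: nothing here
    says anything about which descriptive terms the keywords should use.
    """
    return MIN_KEYWORDS <= len(keywords) <= MAX_KEYWORDS

record = {
    "title": "Student working in a university library",
    "category": "Education",  # chosen from a finite category list
    "keywords": [
        "student", "library", "studying", "university",
        "young adult", "East Asian", "casual clothing",
    ],
}

print(validate_keywords(record["keywords"]))  # True: exactly 7 keywords
```

The count validates, but the hard question remains entirely in the human-chosen strings: ‘East Asian’ passes the check just as easily as a more (or less) specific term would.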

When tagging images, adding alt text, or using AI, the advice is almost always to be as specific as possible. When discussing race and ethnicity we are being steered away from the term BAME (Cabinet Office, 2021), and rightly so, as it covers so many identities that it is unhelpful. However, when tagging, and hence searching, we are forced to write statements like ‘East Asian’ because, unless the image was tagged with knowledge of the person in it, we do not know for sure whether it shows a Chinese person, a Japanese person, or an American. So, if you want to represent a Chinese American, what options do you have?

Discussing with our artists (all students) what they needed in order to generate images of racial identities different from their own resulted in an image trawl of different identities for research. A valid approach, and essentially what a generative AI tool would do, but in doing so you could be forgiven for concluding that all Korean people look like K-pop stars. Consider who has their images on the internet, or in image libraries, and what metadata will be associated with them. As already discussed, image libraries provide a certain type of image, and anyone with their biographical details on the internet is likely to be famous in some way and therefore unlikely to represent the range of ‘normal’ people in that demographic. This brings us back to the question: if you want to represent a Chinese American, what options do you have?

One could ask, and legitimately so, why this is important. The ethos of this project is to represent the broad range of diversity in our student cohort, and for us that means creating images which, while not photorealistic, still need to be appropriate when it comes to racial diversity. We are also developing backgrounds and stories for these characters, so it is important that each image matches its story.

Furthermore, once these assets are produced, they need to be tagged appropriately and given relevant and descriptive alt text. The argument for describing diverse traits in alt text is clear and underpins the whole reason for the project and the need to ask these questions. ‘When we don’t describe the race of someone in an image, we push the narrative that what our society deems as the default (usually a white person), is the default.’ (Adegbite, 2022)

However, with such a sensitive subject I do not feel that we, as a society, or a group of educators and designers, have enough language to describe, safely and confidently, what we need to in order to change the way we tag images, commission artwork (from humans or AI) or provide details for assistive technology.

And we need that.

References

