Is Siri sexist?
New “woke” world
Bias affects the entire world, and media is no exception. There has been a significant shift in representation in media: brands are creating campaigns like Always’ #LikeAGirl; conferences like SXSW, which historically focused on tech trends, now examine how inclusion, representation and bias are reflected in tech; and then there is the record-breaking box office release, Black Panther.
However, even in this new “woke” world, inherent bias is still rampant, and people, brands and agencies are still getting it wrong. Perfect examples are Pepsi’s ad with Kendall Jenner, the token feminism of BIC’s “For Her” pens, and McDonald’s turning its M upside-down for International Women’s Day.
While creative work will always be open to human bias, there is something meant to take human bias out of our decisions: the great AI and all the algorithms that come with it.
AI and algorithms have been built by humans
AI is meant to spit out objective facts that we can use to make perfectly unbiased decisions. But here’s the thing: AI and its algorithms have been built by humans (who, I think we’ve made clear, are undeniably biased). See the problem with that? Every time someone programs anything, years of personal and communal bias are woven in alongside the code.
There have already been examples of this bias in AI creating problems. Certain voice and facial recognition technology works better for some groups of people than others, failing to account for accents, dialects and nuances in facial features. Most recently, Facebook’s algorithms influenced a presidential election in one of the most influential countries in the world by serving fake content to the most easily influenced users.
These “bugs in the system” can all be traced back to one thing: the bias, whether intentional or not, of the companies and individuals making the AI, who take only certain faces, voices and ideologies into account when creating these tools. The result is AI that represents and includes only a very specific demographic, which is to say it isn’t representative or inclusive at all.
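To make the mechanism concrete, here is a minimal, purely hypothetical sketch (not any vendor’s real system): a toy “recognition” model that only recognises inputs resembling something it saw in training. Group A is well represented in the training data; group B is not, mirroring the skewed datasets behind real voice and face recognition failures. The group names, centres and radius are all illustrative assumptions.

```python
import random

random.seed(42)

def sample(center, n, spread=1.0):
    """Draw n 2-D feature vectors scattered around a group's true centre."""
    return [(center[0] + random.gauss(0, spread),
             center[1] + random.gauss(0, spread)) for _ in range(n)]

# Unbalanced training set: 500 examples of group A, only 5 of group B.
# The model never chose to exclude group B; its builders just never
# collected enough of that group's data.
train = sample((0.0, 0.0), 500) + sample((5.0, 5.0), 5)

def recognised(point, radius=1.0):
    """The model 'recognises' a point only if it is close to a training example."""
    return any((point[0] - tx) ** 2 + (point[1] - ty) ** 2 <= radius ** 2
               for tx, ty in train)

def recognition_rate(center, n_test=1000):
    """Fraction of unseen samples from a group that the model recognises."""
    test = sample(center, n_test)
    return sum(recognised(p) for p in test) / n_test

rate_a = recognition_rate((0.0, 0.0))  # well-represented group
rate_b = recognition_rate((5.0, 5.0))  # under-represented group
print(f"group A recognised: {rate_a:.1%}")
print(f"group B recognised: {rate_b:.1%}")
```

Running the sketch shows the well-represented group being recognised far more reliably than the under-represented one, even though nothing in the code singles anyone out: the bias lives entirely in what data was collected.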
Gender stereotypes reinforced
Does the fact that Siri and Alexa, bots designed as personal assistants, are both voiced by very polite-sounding females confirm and reinforce gender stereotypes? Does the fact that both companies that built these technologies are run by men do so even more? You can probably guess my opinion, but then again, I’m female, so I’m biased.
From a South African media perspective, we rely heavily on systems and tools fed by data collected and collated by humans and, given our history, probably by a very specific human demographic. It’s therefore important to acknowledge and account for the bias that might exist in these tools and data sets when using them.
There are ways to combat this bias and ensure that one day AI will be the undeniable provider of objective fact it’s meant to be, and it comes down to making sure everything we do, build and create is done by companies that are representative and inclusive.
We have to ask ourselves: are we comfortable treating technology built, more than likely, by traditional Silicon Valley programmers who represent a very specific demographic as the undeniable truth? I’m not. I’m certainly not saying don’t use it; it’s useful and it makes our lives easier. Just remember its bias and, more importantly, your own.