Racism and Facial Recognition

Is the technology racist?

My morning routine consists of waking up and checking the news for the latest damage report. This morning, as I was scrolling through Twitter, this headline caught my attention:

George Floyd: Amazon bans police use of facial recognition tech

“Civil rights advocates raised concerns about potential racial bias in surveillance technology.”

Speaking as both a journalist and the PR co-ordinator for a facial recognition company (and yes, with some obvious added bias thrown into the mix), I’d argue that the implied ‘racism’ in facial recognition simply doesn’t apply.

Artificial intelligence, and facial recognition technology in particular, cannot be racially biased. Bias implies societal influence, and AI doesn’t have a conscience. It can’t be inherently racist, because racism is a social construct.

The best way to explain AI as a concept, without getting all techy and nerdy, is to compare it to a puppy going to training classes. In its initial stages, the puppy has no idea what you’re asking of it when you command it to ‘sit’. It’s running all over the place and peeing on your carpet. The job of the owner is to encourage the puppy to connect the command with the action: ‘sit’ means sit.

Equally, before AI has been asked a question, it doesn’t know what it’s being asked to identify. It needs to be thrown a load of data, and asked a question about that data, before it’ll comply. It’s a learning process.
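If you want to see what that learning process looks like in code, here’s a minimal sketch using the scikit-learn library. The data is a stand-in (handwritten digits rather than faces) and the model choice is purely illustrative; this isn’t TouchByte’s software, just the general shape of the idea:

```python
# A toy "puppy training" loop: show the model labelled examples,
# then ask it the same question about data it has never seen before.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()  # stand-in data: digit images instead of faces
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = KNeighborsClassifier()
model.fit(X_train, y_train)          # the training classes
print(model.score(X_test, y_test))   # how reliably it "sits" on command
```

The model only ever learns to answer the question it was shown examples of; what it knows is entirely a product of the data it was fed.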

So, you start by teaching the puppy to sit. That’s one command; one question you ask of the puppy. Once you’ve got that nailed, you can begin to introduce other commands. Before you know it, you’re asking it to sit, wait, heel, roll over, lie down and stay, and even to respond to ‘no!’.

AI is the same. Before you know it, you’re throwing a bunch of data at it and asking it to identify eyes, a nose, a mouth, a jawline, lips, the contours of bone structure. The more data you show it, so the more faces it sees, the better it becomes at answering each of these questions. Equally, the more ethnically diverse that data is, the better it becomes at answering your questions when the environment, or the data itself, is different.
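That also means you can measure how well the training ‘took’ for different kinds of faces. A simple sanity check, sketched below with invented numbers rather than real benchmark results, is to break accuracy down per group instead of reporting a single overall score:

```python
# Break overall accuracy down by group to spot skewed training data.
# The predictions, labels and group names below are invented for illustration.
from collections import defaultdict

predictions = ["match", "match", "no-match", "match", "no-match", "match"]
truth       = ["match", "match", "match",    "match", "no-match", "no-match"]
groups      = ["A",     "B",     "B",        "A",     "A",        "B"]

correct, total = defaultdict(int), defaultdict(int)
for pred, label, group in zip(predictions, truth, groups):
    total[group] += 1
    correct[group] += (pred == label)

for group in sorted(total):
    print(f"group {group}: {correct[group] / total[group]:.0%} accurate")
```

A large gap between groups points at skewed training data, not at the maths having an opinion about skin colour.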

Harrison, our software developer here at TouchByte, explained this to me this morning: “As a programmer, I would want to throw as many ethnically diverse faces at my software as possible, to make the most optimal algorithm.”

“Race is completely irrelevant. It’s simply a case of teaching the software to work with humans; not skin colour.”
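In practice, one way a developer might act on that, sketched here with hypothetical file names and group labels, is to balance the training set so that no one group dominates before the model ever sees it:

```python
# Rebalance a training set so each group contributes equally.
# File paths and group labels are hypothetical placeholders.
import random

dataset = [
    ("face_001.jpg", "group_a"), ("face_002.jpg", "group_a"),
    ("face_003.jpg", "group_a"), ("face_004.jpg", "group_b"),
]

by_group = {}
for path, group in dataset:
    by_group.setdefault(group, []).append(path)

# Downsample every group to the size of the smallest one.
smallest = min(len(paths) for paths in by_group.values())
balanced = [
    (path, group)
    for group, paths in by_group.items()
    for path in random.sample(paths, smallest)
]
print(balanced)  # here: one face from group_a, one from group_b
```

Downsampling is the crudest option; in a real pipeline you’d more likely gather additional data for the under-represented groups, exactly as Harrison suggests.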

In short, the better you train your puppy and the more you allow it to learn, the better you can expect it to behave when it’s out in the world, working with the police force on mass surveillance. It’s as simple as that.
