'Coded Bias' Paints a Terrifying Picture Of Facial Recognition's Real Impact On Lives

Algorithms are power, and that power rests with governments and a handful of corporations.

Scientific thought is charming for many reasons. It expands human capabilities. It shrinks the world to fit our palms. It accelerates the passage of time: thanks to the explosion of the internet, smartphones and social media, the last two decades feel much longer and richer. It also carries a moral halo, because science, unlike humans, doesn’t discriminate. Two plus two will always be four, irrespective of your gender, race, or nationality.

But what if the interpretation of raw data changes depending on who you are? A new Netflix documentary, Coded Bias, examines the disturbing prejudices of machines. Such biases impact hiring, housing, criminal profiling – the very lives of people.

The documentary is a slim 85-minute piece, but it digs deep. It probes the historical definition of intelligence, which often hinged on a narrow conception – the ability to excel at chess, for instance. It traces the genesis of Artificial Intelligence (AI) to Dartmouth College in 1956, where a small group of (white) men laid the foundations of the technology. It discusses science fiction, the default domain of straight white men for decades.

This approach befits the material because history is hardcoded in data – especially the kinds fed to algorithms to understand human behaviour. The new machines are not creating a new society, argues the documentary, but merely replicating the discriminatory patterns of the old world, such as sexism, classism, racism.

Filmmaker Shalini Kantayya both portrays and pricks this story. The movie is centred on three main characters: Joy Buolamwini, a computer scientist at the Massachusetts Institute of Technology; Cathy O’Neil, a data scientist and author of the bestselling book Weapons of Math Destruction; and Silkie Carlo, the director of Big Brother Watch, a UK-based non-profit organisation campaigning against state surveillance. A crucial commonality binds them: they are all women – and Buolamwini is a person of colour – which is to say, they belong to the groups most often victimised by discriminatory automated software. Buolamwini, who eventually became a digital activist, has a personal story, too. She realised that facial recognition software couldn’t register her presence, as it preferred lighter-skinned subjects. The absence of acknowledgement is an erasure of identity. Racism by any other means – or any other algorithm – smells as rotten.

Coded Bias asks perceptive questions about the state of our world, juxtaposing technological advancement with social fissures. When Amazon used automated software for hiring, it systematically downgraded women applicants. That discrimination was not random: women hold less than 14% of senior positions at the tech giant. Facial recognition software disproportionately victimises Black men, targeting them for crimes they didn’t commit. AI tools are also more likely to be deployed first in poor neighbourhoods, turning marginalised people into test subjects.

The reach of AI – and its use to control large populations – is not restricted to one country. China gets a lot of flak for its social credit score, but move your gaze westwards: 117 million people in the US have their faces in law enforcement databases – around one in three people in the country. Street surveillance cameras have become ubiquitous in the UK, making every citizen a suspect. Hong Kong protesters recently spray-painted such cameras in defiance and in defence.

Analysing different facets of technological growth, Coded Bias depicts a comprehensive – at times terrifying – picture of a world struggling in technology’s grasp. It examines the vicious side of machines, pondering the widening chasm between the mathematical and the ethical. Technology, in the universe of Coded Bias, is not something that will save us – rather, it is something we should save ourselves from. The unchecked faith in AI also encourages subservience – and “being fully efficient, always doing what you’re told”, says Zeynep Tufekci, the Turkish sociologist, “is not always the most human thing”.

A facial recognition system at work. Photo: Reuters/Bobby Yip

Which is one of the documentary’s main preoccupations: machines dehumanising people. Automated software, for instance, has begun grading teachers in Texas, determining their competency. But its yardsticks are not disclosed, making the evaluation process all the more humiliating. “How can this algorithm define me?” asks a teacher in Houston. “How dare it?” Especially if it’s an unfair “black box” algorithm.

Coded Bias presents a disconcerting vision of the future: machines shaping a tyrannical world that denies people their basic rights. But the documentary extends that question: who is orchestrating the real control? Corporations, intent on selling people in one way or another. First they profiled and monetised our buying preferences; now they’re stifling our civil liberties. “It’s not what AI will do to us on its own,” says Tufekci. “It’s what the powerful will do to us with AI.” Our masters haven’t changed; they’ve just delegated machines to do the dirty work for them.

Even though Kantayya follows three characters – besides interviewing other experts and splicing in archival footage – Coded Bias doesn’t have a dense plot. It often relies on explaining key information, making it talk-heavy. Some of it seems repetitive; some of it recalls another accomplished documentary, The Social Dilemma. Kantayya is aware of this limitation, constantly seeking ways to dramatise her scenes. She falters only occasionally, especially when adopting a fly-on-the-wall approach, where snippets of conversation among characters look staged for the camera.


But these quibbles fade in light of the remarkable revelations. Growing up, we were taught that “knowledge is power”. That aphorism faces its most crucial test in the 21st century. Because if knowledge is linked to power, then who gets to wield that power — and based on what kind of knowledge? If scientific advancement still produces software that reproduces the same hierarchies and segregations, then maybe we should ask ourselves, “What does this progress even mean?” Some, like Buolamwini, who testified before Congress advocating federal regulation, are starting to challenge that narrative. Amazon has paused police use of its facial recognition program for a year. Some American cities have banned the technology. So, maybe there’s hope. Maybe people can win this time. But even if we resolve this particular tussle, the documentary hints that the bigger questions – around individual privacy, institutional discrimination, industrial dehumanisation – will persist. Even if we save ourselves from machines, who will save us from… ourselves?