
‘We’ll Never Undermine the Privacy Promise of People Who Rely on Us’: Signal’s Meredith Whittaker

Whittaker spoke to The Wire at RightsCon in San José, Costa Rica. She said AI can and must be regulated, and that Big Tech, which speaks of AI in apocalyptic terms, is trying to distract from the pressing issues that need attention. The only thing intelligent in ‘AI’, she says, is the human labour that goes into it.

San Jose (Costa Rica): The president of Signal, Meredith Whittaker, is a scholar and long-time tech worker who was formerly with Google and advised governments before taking up her current role. An impassioned advocate of privacy and of the role of humans in so-called ‘Artificial Intelligence’, she spoke to The Wire about the ground rules of the popular messaging app Signal and the tumult currently roiling the world of AI.

When asked for her response to governments that want to break encryption by saying that messaging apps cannot be agnostic about the messages they relay, Whittaker said, “We push back, as for thousands of years the default was not being surveilled by a large corporation or a powerful government when we communicated with each other. That was the expectation. We need to be clear that the right to privacy does not end because we have been forced to adopt digital technology. It is very important to remember that.”

Whittaker, a scholar who has worked on the history of computational tech, said that with its proposed Bill demanding that apps break encryption, the UK government was setting a “damaging precedent” based on “magical thinking on what technology can do.”

The UK government’s claim that it has the right to conduct mass surveillance – something not seen before – will be copied and pasted by other jurisdictions watching, she said. She was categorical that it is not possible to “see all messages” without breaking encryption; the two demands are “fundamentally contradictory.”

The surveillance model

Speaking specifically on India’s controversial IT rules notified in April, Whittaker said Signal will push back if required to break encryption.

“Absolutely. We will never undermine the privacy promise of people who rely on us. We would shut down before we would do that. It is a red line, no compromise, but we will fight. We are a small organisation. But we are a non-profit, so we have no other reason to exist than to provide a truly private communication app to people around the world. And, if we cannot do that, then we will go out and do something else. That is our standard.”

Also read: The Amendments to the IT Rules Approach Censorship but Are More Complicated Than Apparent

Would Signal, like WhatsApp (which has partnered with Reliance), contemplate getting into a business model of commerce linked to the messaging platform – a move that has raised all manner of concerns?

“We would want to be very careful. The dominant business model – the one that has dominated the tech industry for the past 20 years – has been the surveillance model: selling ads, using user data to train AI or for other derivative functions. So, that is the dominant business model, and that’s why Signal chose to be a non-profit. We never want pressure from investors or shareholders to undermine privacy – they may say, ‘we are not making money, so we can undermine a bit of that privacy to make more money.’ So, we are never going to adopt the surveillance business model.”

“We will never undermine the privacy promise of people who rely on us. We would shut down before we would do that. It is a red line, no compromise, but we will fight,” says Whittaker. Credit: Pixabay

The term ‘AI’

Speaking on her understanding of Artificial Intelligence or AI, Whittaker said, “Privacy and AI hype are related. We are seeing many tech companies making claims that AI is conscious, and that it will have the power to end human life. [They are] making claims that have no scientific evidence.”

This fuels the idea of “magical thinking by machines – which politicians also believe”, which hampers the setting of parameters for regulation, often threatening the very framework needed to control Big Tech.

The famous science fiction writer Ted Chiang said in a recent interview with the Financial Times that the term ‘AI’ was problematic and that it would have been more appropriate to call it what it is: ‘applied statistics’.

Whittaker said, “We don’t have a counterfactual history – while speaking at Re:publica in Berlin recently, I reflected on the history of that name – how contingent the name AI was. It was chosen by computer scientist John McCarthy in the mid-1950s, largely because he wanted something attractive to military funders, who love overstatement, and to exclude his academic competitor Norbert Wiener, who coined the term ‘cybernetics’, under which, at the time, most of the field was organised.”

“Going back to Ted Chiang, this was a term revived in the late 2010s to describe a whole host of corporate tech that needed huge amounts of data – which we need to understand as surveillance – and huge amounts of computational infrastructure, resources only a handful of companies have. So, it is a marketing term more than a technical term, and a powerful one, as calling something intelligent every day ascribes magical qualities to it.”

Also read: What India Should Remember When It Comes to Experimenting With AI

Regulating AI

Whittaker does not look kindly on the dramatic one-line statement put out by several investors and those at the forefront of AI, warning of the threat of human “extinction” at the hands of AI in the future. “I read it as doing two things – I am not saying that all who believe this are disingenuous, but the work it is doing is harming our chances of democratically governing these companies and this technology. Alarmism about the far future – 20, 40, 70 years away – ignores harms through monopolisation, surveillance, and the harms [that are] already happening as this technology removes discretion from people’s everyday lives.”

She said that another thing they are doing is “reframing the problem from a regulatory perspective.”

“We have been talking about workers’ rights, about discretion being taken away by automation – fundamental issues that are happening now, and that was the problem regulators were tasked with addressing. They have now reshaped the problem – pushing it far into the future and asking regulators to regulate based on a fantasy, not reality. This is a very classic tactic of hollowing out regulation – trying to co-opt regulation when businesses recognise that it is inevitable.”

Former Prime Minister of New Zealand Jacinda Ardern recently likened the task of regulating AI to trying to fix a rocket while it is flying. Does Whittaker see the regulation of AI as an impossible task too?

“I don’t see it as an impossible task – there are many pressures against it, but the pressures are political, not technological. If there is a law in Europe to ban surveillance, we will be halting the AI rocket. Because we would be cutting off flows of data and economic imperative at the centre of AI. With a law mandating structural separation, or breaking up these consolidated [tech] monopolies, we would go a long way towards curbing these harms. There are many practical ways we can do this.”

She said, “This is not science run amok but corporate entities with too much power and we know how to address that.”

The role of workers in AI

As a scholar, Whittaker has worked on the role of labour in modern technology and the systematic drive to minimise the visibility of the human effort that goes into building large ‘automatic’ and so-called ‘artificial’ systems.

“If you begin to trace the labour required to create AI systems, we tell a much different story of what those systems are and where intelligence lives. So many workers are required to create the data sets which AI is trained on, to label those data sets, to calibrate and train those systems, to make them acceptable for everyday use, so they are not spitting out racist, misogynist garbage all the time, which they will tend to do because they are trained on the internet, right? And then you need workers at the other end, who are checking their outputs, recalibrating them for the dynamic environment in which they are placed. You need people tending to them because they never quite work – because the world is infinitely more complex and interesting than an automated system trained to make statistical predictions will be able to handle.”

“If there is a law in Europe to ban surveillance, we will be halting the AI rocket. Because we would be cutting off flows of data and economic imperative at the centre of AI,” says Whittaker. Photo: Frank V./Unsplash, (CC BY-SA)

Whittaker is clear that “the intelligence in AI is the perception of the workers; it is the embodied skill of the workers, it is the ability of the workers to intervene with their real intelligence to put the statistical model back on track so that it appears to be intelligent. And the attribution of intelligence to the machine, while alienating the worker, is not a new phenomenon. It is something, however, [that is] supercharged by these technologies.”

So can we pull back from where we are and bring workers, and the human element, back into the equation?

“We are going to do what we can,” she said.

AI and democracy

Whittaker is hopeful about the Writers Guild of America, the Hollywood writers on strike over, among other things, the huge issues raised by the potential application of AI. They are demanding the right to decide whether AI is used. They are saying that they won’t allow production houses to use this tech to undermine their work, their abilities and their livelihoods, or to turn their work into gig work. The Screen Actors Guild has also authorised a strike.

So, is there hope for regulation? That is the hope – these strikes are at the frontlines of AI policy right now.

Also read: Can AI-Based Tools Like ChatGPT Function as Moral Agents?

With the enormous amounts of content that the many versions of ChatGPT and other tools are spewing out, there is a flood of ‘information’ out there. And an informed citizenry is the essence of a healthy democracy.

When asked how she sees the connection of AI with information and eventually democracy panning out, she said, “I am very concerned. Because we already have a very fractured information system thanks in large part to the platform business model that has undermined local news and our ability to support journalism, so that has been ongoing.”

“Now we add on top of that these opaque, centralised systems that are very good at spitting out meaningless, factually vacant content that looks plausible but has no relationship to truth and no understanding of veracity. So this certainly supercharges campaigns to spread misinformation – deep fakes and other such tech. I think it is extraordinarily irresponsible for these companies to have deployed these tools at this moment.”

Any hope? Yes, she nods. “Organising – with the unions on the frontline.”