
Facebook whistleblower Frances Haugen raises trust and security questions over its e2e encryption

Frances Haugen, one of several Facebook whistleblowers who have come forward in recent years with damning testimony about product safety, testified before the UK parliament today. In one key moment, she was invited to clarify her views on end-to-end encryption following a report published yesterday in the British newspaper the Telegraph.

The report cast Facebook’s plan to extend its use of e2e encryption as “controversial”, aligning the newspaper’s editorial spin with long-running UK government pressure on tech giants not to expand their use of strong encryption, so that platforms can still be ordered to decrypt and hand over message content on request.

In its interview with Haugen, the Telegraph sought to link her very public concerns about Facebook’s overall lack of accountability to this UK government anti-e2ee agenda, claiming she had suggested Facebook’s use of e2e encryption could disrupt efforts to protect Uighur dissidents from Chinese state attempts to infect their devices with malware.

The reported remarks were quickly seized upon by certain corners of the Internet (and at least one other ex-Facebook staffer, who actually worked on adding e2e encryption to Messenger and is now self-styling as a ‘whistleblower’), with concerns flying that her comments could be used to undermine e2e encryption generally and, therefore, the safety of vast numbers of Internet users.

Sounding unimpressed with the Telegraph’s spin, Haugen told UK lawmakers that her views on e2e encryption had been “misrepresented”, saying she fully supports “e2e open source encryption software” and, indeed, uses it herself on a daily basis.

What she said she had actually been querying was whether Facebook’s claim to be implementing e2e encryption can be trusted, given that the tech giant does not allow full external inspection of its code, as is possible with fully open source e2ee alternatives.

This is another reason why public oversight of the tech giant is essential, Haugen told the joint committee of the UK parliament which is scrutinizing (controversial) draft online safety legislation.
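Her distinction between claimed and verifiable e2e encryption comes down to where the keys live and who can check. As a minimal sketch of the property in question, the snippet below uses PyNaCl, an open source Python binding to libsodium, chosen here purely for illustration (it is not what any Facebook product uses): private keys exist only on the two endpoints, a relaying server handles nothing but opaque ciphertext, and because the library’s source is public, anyone can audit that this is really what happens.

```python
from nacl.public import PrivateKey, Box  # PyNaCl: open source libsodium bindings

# Each endpoint generates its own keypair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob with her private key and his public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at 6pm")

# A relaying server only ever sees `ciphertext`; without a private key
# it cannot read the message. Bob decrypts on his own device.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at 6pm"
```

With a closed implementation, users must take the vendor’s word that the equivalent of this key handling is what actually ships, which is precisely the trust gap Haugen went on to describe.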

“I want to be very, very clear. I was mischaracterised in the Telegraph yesterday on my opinions around end-to-end encryption,” she said. “I am a strong supporter of access to open source end to end encryption software.

“I support access to end-to-end encryption and I use open source end-to-end encryption every day. My social support network is currently on an open source end-to-end encryption service.”

“Part of why I am such an advocate for open source software in this case is that if you’re an activist, if you’re someone who has a sensitive need, a journalist, a whistleblower — my primary form of social software is an open source, end-to-end encryption chat platform,” she also said, without naming exactly which platform she uses for her own e2ee messaging. (Signal seems likely: a not-for-profit rival to Facebook-owned WhatsApp, it has benefited from millions of dollars of investment from WhatsApp founder Brian Acton, another former Facebook staffer turned critic; so maybe ‘meta’ would in fact be a perfect new brand name for Facebook.)

“But part of why that open source part is so important is you can see the code, anyone can go and look at it — and for the top open source end-to-end encryption platform those are some of the only ways you’re allowed to do chat in say the defence department in the US.

“Facebook’s plan for end-to-end encryption — I think — is concerning because we have no idea what they’re going to do. We don’t know what it means, we don’t know if people’s privacy is actually protected. It’s super nuanced and it’s also a different context. On the open source end-to-end encryption product that I like to use there is no directory where you can find 14 year olds, there is no directory where you can go and find the Uighur community in Bangkok. On Facebook it is trivially easy to access vulnerable populations and there are national state actors that are doing this.

“So I want to be clear, I am not against end-to-end encryption in Messenger but I do believe the public has a right to know what does that even mean? Are they really going to produce end-to-end encryption? Because if they say they’re doing end-to-end encryption and they don’t really do that, people’s lives are in danger. And I personally don’t trust Facebook currently to tell the truth… I am concerned about them misconstruing the product that they’ve built — and they need regulatory oversight for that.”

In additional remarks to the committee she summarized her position by saying: “I am concerned on one side that the constellation of factors related to Facebook makes it even more necessary for public oversight of how they do encryption there — that’s things like access to the directory, those amplification settings. But the second one is just about security. If people think they’re using an end-to-end encryption product and Facebook’s interpretation of that is different than what, say, an open source product would do — because with an open source product we can all look at it and make sure that what it says on the label is in the can.

“But if Facebook claims they’ve built an end-to-end encryption thing and there’s really vulnerabilities, people’s lives are on the line — and that’s what I’m concerned about. We need public oversight of anything Facebook does around end-to-end encryption because they are making people feel safe when they might be in danger.”

Haugen, a former member of Facebook’s civic integrity team, is the source of a tsunami of recent stories about Facebook’s business, having leaked thousands of pages of internal documents and research reports to the media. She initially provided information to the Wall Street Journal, which published a slew of stories last month (aka the ‘Facebook Files’), including on the toxicity of Instagram for teens, and subsequently released the material to a number of media outlets, which have followed up today with reports on what they’re calling the Facebook Papers.

The tl;dr of all these stories is that Facebook prioritizes the growth of its business over product safety, leading to a slew of harms that can affect individuals, other businesses and society more broadly. Those harms range from inadequate AI systems that cannot properly identify and remove hate speech (leaving its platform able to whip up ethnic violence), to engagement-based ranking systems that routinely amplify extreme, radicalizing content without proper regard for the risks (such as conspiracy-theory-touting echo chambers forming around vulnerable individuals and isolating them from wider society), to overestimates of its ad reach that have led to advertisers being systematically overcharged.
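To make the amplification mechanism concrete, here is a toy sketch of engagement-based ranking; the signal names, weights and example posts are hypothetical and not drawn from Facebook’s systems. The point is simply that whatever provokes the most reaction rises to the top, with the content itself never examined.

```python
# Toy engagement-based ranker; signals and weights are hypothetical.
ENGAGEMENT_WEIGHTS = {"like": 1.0, "comment": 5.0, "reshare": 30.0}

def rank_score(post: dict) -> float:
    """Sum of weighted engagement counts; note the content is never inspected."""
    return sum(weight * post.get(signal, 0)
               for signal, weight in ENGAGEMENT_WEIGHTS.items())

posts = [
    {"id": "measured-take", "like": 120, "comment": 8, "reshare": 2},
    {"id": "outrage-bait", "like": 40, "comment": 90, "reshare": 35},
]

# The post provoking the most reaction wins the feed slot, and the extra
# visibility earns it still more engagement on the next ranking pass.
for post in sorted(posts, key=rank_score, reverse=True):
    print(post["id"], rank_score(post))
```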

During her testimony today, Haugen suggested Facebook’s AIs are unlikely even to be able to properly distinguish dialectal differences and nuances of meaning between UK English and US English, let alone handle the scores of languages spoken in countries where it directs far fewer resources.

Parliamentarians probed her on myriad harms during around 2.5 hours of testimony — and some of her answers repeated earlier testimony she gave to lawmakers in the US.

Many of the UK committee’s questions sought her view on what might be effective regulatory measures to close the accountability gap, both for Facebook and for social media more generally, as MPs looked for productive avenues for amending the draft online safety legislation.

“The danger with Facebook is not individuals saying bad things, it is about the systems of amplification that disproportionately give people saying extreme polarising things the largest megaphone in the room,” argued Haugen.

Her list of suggestions for fixing what she characterized as a system of broken incentives under Facebook’s current leadership included mandatory risk assessments. These, she warned, need to cover organizational structure as well as product safety, since she said much of the blame for Facebook’s problems lies with its “flat” organizational structure and a leadership team that rewards (and thus incentivizes) growth above all else, leaving no one internally accountable for improving safety metrics.

Such risk assessments would need to be carefully overseen by regulators, she said, to stop Facebook reaching for its customary tactic in the face of critical scrutiny: marking its own homework, or “dancing with data” as she put it.

Risk assessments should also involve the regulator “gathering from the community and saying are there other things that we should be concerned about”, she said, rather than letting tech giants like Facebook define blinkered parameters that yield only partial oversight. She suggested “a tandem approach like that that requires companies to articulate their solutions”.

“I think that’s a flexible approach; I think that might work for quite a long time. But it has to be mandatory and there have to be certain quality bars because if Facebook can phone it in, I guarantee you they’ll phone it in,” she also told the committee.

Another recommendation Haugen had was for mandatory moderation of Facebook Groups when they exceed a certain number of users.

Left unmoderated, she said, groups can easily be misappropriated and/or misused (via techniques like ‘virality hacking’) to act as an “amplification point” for spreading discord or disseminating disinformation, including by foreign information operations.

“I strongly recommend that above a certain sized group they should be required to provide their own moderators and moderate every post,” she said. “This would naturally — in a content-agnostic way — regulate the impact of those large groups. Because if that group is actually valuable enough they will have no trouble recruiting volunteers.”
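As a rough sketch of how content-agnostic such a rule would be, consider the toy policy check below. The 10,000-member threshold and the Group type are hypothetical (Haugen named no figure), and nothing here reflects how Facebook’s systems actually work: the check looks only at audience size, never at what a post says.

```python
from dataclasses import dataclass, field

MODERATION_THRESHOLD = 10_000  # hypothetical cutoff; Haugen proposed no number

@dataclass
class Group:
    name: str
    member_count: int
    moderators: list[str] = field(default_factory=list)

def can_publish_unreviewed(group: Group) -> bool:
    """Content-agnostic rule: small groups post freely, large groups must
    route every post through their own volunteer moderators first."""
    return group.member_count <= MODERATION_THRESHOLD

big_group = Group("city-news", member_count=250_000, moderators=["vol1"])
print(can_publish_unreviewed(big_group))  # False: posts need human review
```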

Haugen also suggested that Facebook should be forced to make a firehose of information available, in a privacy-safe way, to external researchers (as Twitter, for example, already does), allowing outside academics and experts to drive accountability by investigating potential issues and identifying concerns free of Facebook’s internal growth-focused lens.
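For a sense of what researcher firehose access looks like in practice, here is a minimal sketch against Twitter’s v2 sampled stream, the existing example the comparison points to. It assumes a valid researcher bearer token in a TWITTER_BEARER_TOKEN environment variable and the third-party requests library; any Facebook equivalent would be a new product, so this is purely illustrative.

```python
import os
import requests  # third-party HTTP client: pip install requests

# Twitter's v2 sampled stream delivers roughly 1% of public tweets in real time.
URL = "https://api.twitter.com/2/tweets/sample/stream"
headers = {"Authorization": f"Bearer {os.environ['TWITTER_BEARER_TOKEN']}"}

with requests.get(URL, headers=headers, stream=True, timeout=90) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:  # skip keep-alive newlines
            print(line.decode("utf-8"))  # one JSON-encoded tweet per line
```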

Another of her recommendations was for regulators to demand segmented analysis from Facebook — so that oversight bodies get full transparency into populations that disproportionately experience harms on its platform.

“The median experience on Facebook is a pretty good experience — the real danger is that 20% of the population has a horrible experience or an experience that is dangerous,” she suggested.

She went on to argue that many of Facebook’s problems stem from the subset of users who get “hyper exposed” to toxicity or abuse, a consequence of engagement-driven design and a growth-focused mindset that rejects even small tweaks to inject friction and reduce virality. Such tweaks, she suggested, would cost Facebook only “small slivers” of growth in the short term while yielding a much more pleasant, and probably more profitable, product over the longer term.
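Her point about medians masking tail harm is easy to see in numbers. The sketch below uses synthetic, hypothetical exposure counts (nothing here comes from Facebook data) to show why a regulator who asks only for the median would miss the hyper-exposed tail that segmented analysis is meant to surface.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-user weekly counts of harmful-content exposures:
# heavy-tailed, so most users see little while a small tail sees a lot.
exposures = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)

print(f"median: {np.median(exposures):6.1f}")          # the 'typical' user
print(f"mean:   {exposures.mean():6.1f}")              # pulled up by the tail
print(f"p80:    {np.percentile(exposures, 80):6.1f}")  # Haugen's worst ~20%
print(f"p99:    {np.percentile(exposures, 99):6.1f}")  # the hyper-exposed
```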

“As we look at the harms of Facebook we need to think about these things as system problems — like the idea that these systems are designed products, these are intentional choices and that it’s often difficult to see the forest for the trees. That Facebook is a system of incentives, it’s full of good, kind, conscientious people who are working with bad incentives. And that there is a lack of incentives inside the company to raise issues about flaws in the system and there are lots of rewards for amplifying and making things grow more,” she told the committee.

“So I think a big challenge of Facebook’s management philosophy is that they can just pick good metrics and then let people run free. And so they have found themselves in a trap, where in a world like that how do you propose changing the metric? It’s very, very hard, because 1,000 people might have directed their labor for six months trying to move that metric, and changing the metric will disrupt all of that work.

“I don’t think any of it was intentional — I don’t think they set out to go down this path. And that’s why we need regulation — mandatory regulation, mandatory actions — to help pull them away from that spiral that they’re caught in.”

Legislation that seeks to rein in online harms by regulating platform giants like Facebook must not focus only on individual harms, she also emphasized; it needs to respond to societal harms as well.

“I think it is a grave danger to democracy and societies around the world to omit societal harm. A core part of why I came forward was I looked at the consequences of choices Facebook was making and I looked at things like the global south and I believe situations like Ethiopia are just part of the opening chapters of a novel that’s going to be horrific to read. We have to care about societal harm — not just for the global south but for our own societies.

“When an oil spill happens it doesn’t make it harder for us to regulate oil companies. But right now Facebook is closing the door on us being able to act — we have a slight window of time to regain people-control over AI; we have to take advantage of this moment.”

Facebook has been contacted for comment.
source https://techcrunch.com/2021/10/25/facebook-whistleblower-frances-haugen-raises-trust-and-security-questions-over-its-e2e-encryption/
