Disposing of Big Tech: Free speech is not disposable (Part One)
December 22, 2021 / Session / By Alex Linton
Originally published on Session.
In the United States, the capital of the tech world, there is an ongoing struggle between the belief in people's right to free speech and concerns over the violent and malicious content being spread online. The outcome of the conflict between big tech companies, regulators, and digital rights advocates on the US battleground will inevitably ripple out to the rest of the global community, so it's important to keep a close eye on the news coming out of the US.
In the US, Section 230 is the shield protecting online speech: a key piece of legislation that prevents companies providing internet services, like social media and messaging apps, from being held liable for user-generated content on their platforms. Section 230 has allowed innovation and free speech to flourish on the internet; without it, allowing user-generated content would be a risk no company could reasonably take. No content creators. No comment sections. Nothing that makes the internet the living, breathing organism that it is.
“The most important law protecting internet speech”
EFF, on Section 230
Needless to say, many regulators and lawmakers aren't supporters of Section 230. Donald Trump tried to repeal it, and Joe Biden has called for it to be revoked as well. Section 230 is well and truly under fire, and the campaign against it is part of a wider push to limit and moderate the content that is posted and promoted online.
The content moderation approach
When Biden called for Section 230 to be repealed, he said it was because internet companies were 'promoting falsehoods they know to be false.' Biden's concern is undoubtedly that misinformation, malicious information, and fake news are spreading like wildfire on social media platforms like Facebook. His view, shared by others, is that content needs to be more heavily limited and moderated to prevent this information from spreading so easily.
‘The only thing unifying both sides is a desire for greater regulatory control of media. In today’s hyper-partisan world, tech platforms have become just another plaything to be dominated by politics and regulation.’
Adam Thierer in The Hill
The problem, of course, is that content moderation is a band-aid fix for the problem in front of us. Since the foundation of modern democracies, the value of free speech has been recognised, acknowledged, and enshrined in our laws and codes. While some limitations on that right have been accepted, they are not taken lightly. Moderating people's online speech is a sweeping, catch-all limitation on speech, and it's not a good way to solve the societal problems we face. Not only does content moderation fail to get at the root cause of the problem, it seems largely impossible for it to be effective. There are over 4 billion active internet users in the world, and many of them are actively creating and sharing content all the time. There is no effective, efficient way to moderate the content being made by 4 billion people. Manual moderation is simply infeasible, and it's possible that algorithmic moderation would actually make the problem worse.
‘Despite the potential promise of algorithms or ‘AI’, we show that even ‘well optimized’ moderation systems could exacerbate, rather than relieve, many existing problems with content policy as enacted by platforms…’
From the abstract of 'Algorithmic content moderation: Technical and political challenges in the automation of platform governance'
It seems that online platforms cannot target hateful content and misinformation with any precision, which means moderation would need to be so sweeping that it would likely limit normal, everyday people's free speech along the way.
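To get a feel for the scale problem, here's a rough back-of-envelope calculation. Every figure in it is an illustrative assumption, not a sourced statistic:

```python
# Back-of-envelope: can human moderators keep up with 4 billion users?
# Every figure below is an illustrative assumption, not a sourced statistic.

users = 4_000_000_000
posts_per_user_per_day = 0.5   # assume half of all users post once a day
posts_per_day = users * posts_per_user_per_day        # 2 billion posts/day

moderators = 100_000           # a deliberately generous workforce estimate
reviews_per_moderator_per_day = 300

reviewed_per_day = moderators * reviews_per_moderator_per_day   # 30 million
coverage = reviewed_per_day / posts_per_day

print(f"Posts per day:    {posts_per_day:,.0f}")
print(f"Reviewed per day: {reviewed_per_day:,.0f}")
print(f"Coverage:         {coverage:.1%}")  # 1.5% -- the rest is never seen
```

Even with generous assumptions, the overwhelming majority of content would never be seen by a human reviewer, which is exactly why platforms reach for algorithmic moderation, with all the problems the quote above describes.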
Enter Facebook whistleblower Frances Haugen. Between the pages of leaked documents and her own testimony, Haugen has added new dimensions to the conversation about how to deal with Big Tech.
The algorithm approach
Haugen has helped shine a spotlight on the thinking that created the Big Tech calamity. The assumption was that scale is the ultimate good, and that whatever evils are committed in pursuit of scale are justified. Make everything efficient, frictionless, and completely optimised, allowing platforms to grow, and grow, and grow using the network effect.
Companies like Facebook have actively promoted outrageous content because it performs better according to their algorithms. More engagement, more growth, more money. From the outside looking in, algorithms are opaque, vague, and unaccountable; it has been hard to point the finger at them because there is no transparency around how they operate.
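To make the incentive concrete, here is a minimal, hypothetical sketch of engagement-optimised feed ranking. The weights, field names, and example posts are all invented for illustration; real ranking systems are vastly more complex, but the underlying incentive structure is the same:

```python
# Hypothetical sketch of engagement-optimised feed ranking.
# The weights and example posts are invented for illustration.

posts = [
    {"text": "Photo of my dog at the beach", "likes": 500, "comments": 12, "shares": 5},
    {"text": "OUTRAGEOUS claim you won't believe", "likes": 90, "comments": 400, "shares": 250},
]

def engagement_score(post):
    # Comments and shares are stronger engagement signals than likes,
    # so they get heavier weights -- and divisive, reaction-provoking
    # content is precisely what generates comments and shares.
    return post["likes"] + 5 * post["comments"] + 10 * post["shares"]

# Rank the feed by engagement: the outrage-bait wins comfortably.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["text"])
```

Nothing in a scorer like this checks whether a claim is true; it only measures how strongly people react to it, which is the crux of the amplification problem.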
The documents released by Frances Haugen have demystified some of the inner workings of these algorithms and shown how deliberate choices are worsening online hate and outrage. In light of the documents, the conversation has shifted: instead of moderating content itself, moderate the amplification of content. The algorithm is the vehicle for amplification, and now that it has finally been identified as the culprit behind the misinformation epidemic, there is a target on its back. Former Facebook data scientist Roddy Lindsay wrote in the New York Times that Congress should 'craft a simple reform: make social media companies liable for content that their algorithms promote'.
This simple reform could flick the switch on misinformation: suddenly, promoting misinformation and outrageous, violent content wouldn't be the most profitable option for Big Tech. With one move, the algorithms would have to be retooled. It could be a shot to the heart of misinformation. Read more about this kind of reform in Part Two.
Preserving free speech in online spaces is paramount
The algorithm approach is elegant because it doesn't limit people's ability to speak and express themselves online; it just stops companies from incentivising harmful, malicious content. Of course, there are significant, systemic issues with Big Tech that can't be solved just by tweaking algorithms (there is more detail on the issues with algorithms in Part Two of Disposing of Big Tech), but the kind of reform suggested by Lindsay would create some level of accountability for internet companies without leaving free speech on its knees.