Disposing of Big Tech: Building better algorithms (Part Two)
Maths is maths — it can’t lie, it can’t be biased, it’s a force of nature too pure and honest to be meddled with by mere humans. And algorithms…well, algorithms are just a special kind of maths, so the same applies to them too, right? For a long time, people lived under the assumption that algorithms were neutral things that just kind of…existed. Most people understood the vague concept: a series of cogs turning somewhere in the background which miraculously presented you with all the viral content your little human heart could desire — like a mechanical magician pulling ‘Charlie bit my finger’ out of a hat. It was all just fun and games — a silly meme machine built to help us pass the time. In Part One we talked about how the mirth and cheer of the early internet have twisted into something dark — and suddenly we find ourselves battling over our basic rights and freedoms, like free speech and privacy.
In case you missed part one: The issue is that the internet is allowing harmful and violent content and misinformation to spread easily and efficiently. This is having a tremendous negative effect on the social and political landscape of the entire world. At the centre of this issue are the internet’s biggest content platforms — social media companies. Lots of discussion has gone into how social media companies are contributing to the age of misinformation, and how to reverse the trend. Early on, regulators hoped content moderation would be the answer — but it’s clear that content moderation isn’t an effective solution (at least, not on its own). Now, the conversation has turned — and people are exploring whether making changes to social media algorithms could get to the heart of the problem.
The world can be a better place if we make more deliberate, ethical choices about how algorithms work.
Why would we regulate algorithms? How would we do it? And, what will the effects actually be? These are some of the most important questions in tech right now. First up, let’s talk about just how powerful algorithms are as a tool for disseminating information and content.
The Facebook Papers have spawned an endless cascade of articles about the company’s toxic corporate culture, ego-driven leadership, and the harm its products cause. Facebook does have a team dedicated to stemming the flow of harmful content on its platforms—the Integrity team—but it has been fighting a losing battle against both its own company and the algorithms it was meant to keep in check. The reportage on the leaked documents gives the impression that the importance and value of the Integrity team’s mission was lost on the other arms of the company — the documents show the Integrity team was expected to justify its initiatives and projects in terms of their impact on growth and engagement.
“I worry that Feed is becoming an arms race”
Still — the Integrity team did have the ability to tip the scales by artificially ‘downranking’ certain content, but…the algorithm just kept promoting it. The machine was fighting back — and it was doing exactly what it was designed to do: boost engagement. It wasn’t built to measure truthfulness, harm, or anything else. Even when harmful content was demoted by 90 per cent, the algorithm continued to amplify it hundreds of times over because, by the only metric it cared about, the content was performing so damn well. It just couldn’t be stopped.
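To make that predicament concrete, here is a minimal, purely illustrative sketch of how an engagement-only ranking score can swamp even a heavy manual demotion. Nothing here reflects Facebook’s actual system: the scoring function, the signal weights, and all the numbers are assumptions chosen for illustration.

```python
# Illustrative sketch (assumed, not Facebook's real ranking) of why a
# 90% demotion can fail to suppress viral content under an
# engagement-only scoring scheme.

def rank_score(likes, shares, comments, demotion=1.0):
    """Score a post purely on engagement signals, then apply a manual
    demotion multiplier (1.0 = no demotion, 0.1 = demoted by 90%).
    The signal weights below are invented for this example."""
    engagement = likes + 3 * shares + 2 * comments
    return engagement * demotion

# A harmful-but-viral post, demoted by 90%...
harmful = rank_score(likes=50_000, shares=20_000, comments=10_000, demotion=0.1)

# ...versus a typical benign post with no demotion at all.
benign = rank_score(likes=500, shares=100, comments=50)

print(harmful, benign)
```

Under these assumed numbers the demoted viral post still scores far higher than the ordinary post, so it keeps getting surfaced: the demotion changes the magnitude of the score, but not the fact that raw engagement dominates the ranking.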
A side note on incentivising behaviour: Amplification is a reward. If harmful content is being amplified by the algorithm (because it also happens to have high engagement) then people are incentivised to keep making similar kinds of content. If we change the algorithm to amplify content which is productive and meaningful — we incentivise people to create and share the kind of content we ‘want’ to see.
Building a benevolent algorithm
Social media has become a part of the fabric of our lives – we’ve spent a lot of time lamenting the negative impacts of social media. But if we fundamentally change what our feeds look like with strict quality control and regulation of social media algorithms, what kind of positive changes can we expect to see in the world?
Firstly, it’s important to note there is no easy way to decide exactly how the process of changing and regulating algorithms will work (i.e. how to quantify value). That’s a question we will explore a bit more in the third part of this series! For now, let’s think about why it’s worth putting in the hard work to figure out how.
Better connected, better informed
Because of the unreliability of information on social media, discerning users surf the web with their guard up — skeptical of everything they encounter. Many have abandoned social media as an information source entirely, when (in theory) it could be an incredible one. There is simply so much misinformation floating around that it undermines the factual information alongside it. Getting both sides of the story is one thing — but circulating mutually exclusive, conflicting claims only muddies the public discourse and stalls societal progress.
By incentivising and amplifying truthful, useful, and meaningful content, we can increase the overall reliability and trustworthiness of the information that shows up in people’s feeds. Regulating algorithms could make social media a convenient and reliable way to inform oneself about the facts of the day, allowing people to make more informed, mindful decisions about what they consume and the causes they care about.
Having an informed, educated, and connected population is enormously valuable for our global community — and critical for any healthy democracy. For a democracy to function properly, the population needs to be well-informed enough to make meaningful choices about voting and governance. The more informed the population is, the better.
At the moment, social media is often thought of as a threat to democracy — because of its deleterious effect on the quality of information people are able to access. Algorithm regulation could flip this on its head completely — making algorithms a defender of democracy, not a detractor.
Shielding against censorship
Speaking of democracy, a better-behaved algorithm could also protect one of the most important democratic rights — free speech. Facebook has actively censored anti-government content before — and has generally only advocated for free speech when it supports the company’s bottom line. But the story of the Facebook Integrity team’s struggle against its own algorithm (described above) could actually cut the other way if the algorithm had a reliable way of programmatically amplifying the right content.
Social media platforms would have a lot more legitimacy as platforms for online speech if they couldn’t so easily pick and choose what to allow and what to censor based on personal whims or the demands of authoritarian governments. Suddenly, important information could survive even manual censorship attempts — if the algorithm judged it worth amplifying. When the content is harmful, that’s obviously a bad thing; but when it has value, it could be a huge step in the right direction for online speech.
Let’s get started
Instead of being an incredible drain, it’s possible to make algorithms work for us to help build a better future for the world. But how do we actually get there? There’s lots to consider, and it’ll probably require a huge amount of work and cooperation — but it could be worth it.
Technology has done an incredible job of connecting people who never would’ve crossed paths without things like the internet, but as more and more of our lives become digital (especially with the metaverse on the horizon) — we need to think of better solutions. The way that social media exists in its current form is unsustainable, but it can still be an important part of the internet’s future.