Dialogues on digital rights: On the internet, trust is a noun

April 12, 2022 / Digital Rights / By Mallory Knodel

This article is a part of a series of pieces commissioned by the OPTF, written by people from all around the world. Mallory Knodel is the Chief Technology Officer of the Center for Democracy and Technology and a member of the Internet Architecture Board at the IETF.

Trust is hard. Trust as an action can be “placed” in someone only after it is “earned,” as in you trust your doctor’s advice. You might place your trust in your doctor because a medical school diploma on display has assured you of their expertise; that certificate is an artefact of trust. And when it comes to digital trust, generally the best we should expect from technology is an “assurance,” the presence of which is a requirement of trust but not itself a replacement for the act of trusting. At the same time, it’s really important for digital technologies to help us place trust, an active verb, in the publications we read, the institutions we transact with and the people we talk to.

In the 1997 book, “The Problem of Trust,” author Adam Seligman discusses how transactional intermediaries and social rules meant to enhance social stability have actually had the opposite effect of undermining trust and eroding social cohesion. Today, we see this problem exacerbated by “technology-enabled intermediaries” which take the form of keys, tokens, certificates, proxies, and many other tools that work between computers, between computers and people, and even between people.

Computing might never be able to replace the need for trusted intermediaries, though there are an enormous number of digital applications of cryptography, usually in the form of certificates, signatures, keys, authentication tokens and validation techniques, among others. These mechanisms, curiously, are sometimes called “zero trust” models, as if trust itself were the cause of, rather than the cure for, fraud and deception. Another issue with these “trustless” models is their inability to locate untrustworthy behaviour or even acknowledge a state of uncertainty.

In order to be meaningful, trust requires the definition of limits: what is untrustworthy. As long as there is a need for trusted intermediaries, trust cannot be a static concept. Revising a trust relationship becomes important, as does the entire revocation of trust. Similarly, it is important to define what is not, itself, trust, but rather an assurance or a verification of identity only.

It is important to clearly separate assurance and trust, while recognising their interrelationship. Our devices and the software running on them intermediate our digital interactions (which, these days, is most of them). Even if you manage to find the time to pay a parking ticket in person at the DMV, the final steps of your transaction will be performed with the help of a computer connected to the internet. Assurances include identity, authentication and verification of people, credentials, and transactions between individuals, devices, and platforms. So it would be impossible to place trust in an institution, service or person without the assurances that underpin these interrelationships.

To explain how assurance and trust are separate but interrelated, three examples of digitised trust can make this mental model more complete. They look at trust between computers, the trust a person has in a computer, and trust between people intermediated by computers.

TLS certificates assure trust between computers

We use TLS to safely view websites, open apps, and send and receive messages, and often we don’t even know it’s being used, because the green lock is a convention of web browsers but not of other software, including in-app browsers. When we do visit a website safely, a green lock appears to indicate that a Transport Layer Security (TLS) certificate is valid, because our personal device has been assured of the identity of the computer hosting the website. The entities involved are:

  • A user viewing a website

  • A website host providing encrypted transport of data between the server and the user

  • A certificate authority (CA) assuring the user of the website host’s identity

  • And a web browser developer deciding it is okay to trust that CA.

Trust in the web was vastly improved when it was made easier for every website to get a TLS certificate that allows users to connect securely. But what does a certificate authority like “Let’s Encrypt” really do? A CA verifies that the applicant for a certificate is the website owner and bundles that information cryptographically when it issues the TLS certificate. Web browsers are another level of assurance: they decide which of the more than 100 CAs are accountable and responsive enough to trust.
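The chain of assurances above can be sketched in a few lines of Python. This is a minimal illustration, not a browser implementation: the standard library’s ssl module loads the platform’s CA bundle, and the handshake fails unless some trusted CA vouches for the server’s certificate. The function name `fetch_verified_cert` is illustrative, not from any spec.

```python
import socket
import ssl

def fetch_verified_cert(host: str, port: int = 443) -> dict:
    """Connect to `host` over TLS and return its certificate,
    but only if the chain validates against a trusted CA."""
    # The default context loads the platform's CA bundle: the list of
    # authorities the OS or browser vendor has chosen to trust for us.
    ctx = ssl.create_default_context()
    assert ctx.verify_mode == ssl.CERT_REQUIRED  # unverified peers are rejected
    assert ctx.check_hostname                    # the cert must match `host`
    with socket.create_connection((host, port), timeout=10) as sock:
        # The handshake raises ssl.SSLCertVerificationError if no
        # trusted CA vouches for the server's certificate.
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# Example (requires network access):
#   cert = fetch_verified_cert("letsencrypt.org")
```

Note that the user never inspects the certificate themselves; the assurance is delegated twice, once to the CA and once to whoever curated the CA bundle.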

Neither the green lock nor TLS itself is an assurance that the web server won’t deliver you fake news or download malware onto your device, nor does it verify the legal entity of the website owner (that is done with “extended validation” TLS certificates, which largely don’t work because users can’t tell the difference).

C2PA assures people’s trust in information

Content provenance is a problem that some intermediaries are helping to solve with authenticity assurances, much like fact checking but for digital data. The Coalition for Content Provenance and Authenticity (C2PA) assertions help people query the technical details and history of digital media and information from the time it was created until publication.

It is pointed out in the specification that, “C2PA specifications do not provide value judgments about whether a given set of provenance data is ‘true’, but instead merely whether the provenance information can be verified”.

The consumer might view an image on a website displayed with a watermark or other indication of a C2PA assurance. The watermark is displayed by the publishing website, a validator of the assurances, and indicates further information about the image. That information’s validity has been assured by a signer and bundled cryptographically. The signer’s identity has been assured by an identity issuer.
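A rough sketch of the signer and validator roles above might look like the following. This is not the C2PA format (real manifests are signed with asymmetric certificates; an HMAC stands in for the signer’s signature here) and the function names are hypothetical. The point it illustrates is the one the specification makes: validation tells you the assertions are intact and signed, not that they are true.

```python
import hashlib
import hmac
import json

def sign_manifest(assertions: dict, signer_key: bytes) -> dict:
    """Signer role: bundle provenance assertions with a signature."""
    payload = json.dumps(assertions, sort_keys=True).encode()
    sig = hmac.new(signer_key, payload, hashlib.sha256).hexdigest()
    return {"assertions": assertions, "signature": sig}

def validate_manifest(manifest: dict, signer_key: bytes) -> bool:
    """Validator role: check only that the assertions are intact and
    signed; this says nothing about whether they are *true*."""
    payload = json.dumps(manifest["assertions"], sort_keys=True).encode()
    expected = hmac.new(signer_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

key = b"signer-secret"
manifest = sign_manifest(
    {"creator": "camera-1234", "edits": ["crop", "blur-faces"]}, key)
assert validate_manifest(manifest, key)      # intact: provenance verifiable
manifest["assertions"]["edits"] = ["none"]
assert not validate_manifest(manifest, key)  # tampered: validation fails
```

A manifest whose “creator” field was a lie from the start would still validate perfectly; the scheme only detects changes made after signing.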

The scheme is very complicated considering “truth” is an explicit non-goal of the specification. And yet at any point from camera to news article the data can be manipulated, sometimes intentionally, as in the case of obfuscating sensitive personal details.

COVID apps assure people’s trust in vaccination

COVID-19 vaccination certificate systems, like the one in the EU, use digital signatures from local health authorities at the creation, storage and scanning levels.

Vaccinated people have a “passport” in print or digital form on their smartphones that contains a QR code. This code contains some basic information about the person, their vaccination status and any test results. Businesses can use a reader application to scan these QR codes to obtain and verify the information against a photo ID, for example. The passport, and information updates, are generated with software issued only to local health authorities.

What you can be assured of: An entity with access to a digital signature generating application has bundled together a first and last name with COVID-19 health data.

What this trust implies: A local health authority has verified the vaccine status of a particular patient.

How this trust might be exploited: Someone might have given false information when they received their vaccine. Or someone who is not a local health authority might have obtained the means to generate digital signatures and issued passports not based on truthful data.
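The three statements above can be made concrete with a toy issuer and reader. This is a simplified stand-in, assuming an HMAC in place of the real asymmetric signature (the EU system signs CBOR-encoded payloads), and all names are illustrative. What it shows is that the signature binds a name to health data, but says nothing about whether the underlying information was truthful.

```python
import base64
import hashlib
import hmac
import json

AUTHORITY_KEY = b"health-authority-secret"  # stand-in for the authority's signing key

def issue_pass(name: str, status: str) -> str:
    """Health-authority side: bundle identity with health data, sign it,
    and return the string a QR code would carry."""
    payload = json.dumps({"name": name, "status": status}).encode()
    sig = hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + sig

def scan_pass(token: str):
    """Reader side: return the signed data, or None if the signature
    was not produced with the authority's key."""
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.b64decode(payload_b64)
    expected = hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None  # not issued (or since altered) by the authority
    return json.loads(payload)  # the reader still checks `name` against photo ID
```

If the status was false when the authority signed it, `scan_pass` will still happily return it; the exploit lives outside the cryptography.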

Ultimately vaccine passports are trying to make it convenient and consistent to play one’s role in a global health crisis by standardising vaccination records (local health authorities), getting vaccinated and showing proof (patients), and setting high standards for vaccination requirements (businesses, schools, airlines, et al). That we trust people not to be antisocial or to lie isn’t something a digital signature could solve, or tries to solve.

We might conclude here with the introductory statement that trust is hard. That is not because we should be wary of digital trust mechanisms or stop proliferating them, but because we need a more practical view of what digital trust is, and what it is not. As for what digital trust is not, that is where institutions, organisations and people come in. Because while the digital age proliferates, diversifies and confuses the ways “trust” can be applied, everyone is in greater need of a world that is more trustful.
