How ‘Analog Privilege’ Spares Elites From The Downsides Of Flawed AI Decision-Making Systems
We live in a world where there are often both analog and digital versions of a product. For example, we can buy books or ebooks, and choose to listen to music on vinyl or via streaming services. The fact that digital goods can be copied endlessly and perfectly, while analog ones can't, has led some people to prefer the latter for what often amounts to little more than snobbery. But a fascinating Tech Policy Press article by Maroussia Levesque points out that in the face of increased AI decision-making, there are very real advantages to going analog:
The idea of "analog privilege" describes how people at the apex of the social order secure manual overrides from ill-fitting, mass-produced AI products and services. Instead of dealing with one-size-fits-all AI systems, they mobilize their economic or social capital to get special personalized treatment. In the register of tailor-made clothes and ordering off menu, analog privilege spares elites from the reductive, deterministic and simplistic downsides of AI systems.
One example given by Levesque concerns the use of AI by businesses to choose new employees, monitor them as they work, and pass judgement on their performance. As she points out:
Analog privilege begins before high-ranking employees even start working, at the hiring stage. Recruitment likely occurs through headhunting and personalized contacts, as opposed to applicant tracking systems automatically sorting through resumes. The vast majority of people have to jump through automated one-size-fits-all hoops just to get in the door, whereas candidates for positions at the highest echelons are ushered through a discretionary and flexible process.
Another example in the article involves the whitelisting of material on social networks when it is posted by high-profile users. Masnick's Impossibility Theorem pointed out five years ago that moderation at scale is impossible to do well. One response to that problem has been the increasing use of AI to make quick but often questionable judgements about what is and isn't acceptable online. This, in its turn, has led to another kind of analog privilege:
In light of AI's limitations, platforms give prominent users special analog treatment to avoid the mistakes of crude automated content moderation. Meta's cross-check program adds up to four layers of human review for content violation detection to shield high-profile users from inadvertent enforcement. For a fraction of one percent of users, the platform dedicates special human reviewers to tailor moderation decisions to each individual context. Moreover, the content stays up pending review.
In terms of addressing analog privilege, wherever it may be found, Levesque suggests that creating a "right to be an exception" might be a good start. But she also notes that implementing such a right in AI laws won't be enough, and that the creators of AI systems need to "improve intelligibility so people subject to AI systems can actually understand and contest decisions." More generally:
looking at analog privilege and the detrimental effects of AI systems side by side fosters a better understanding of AI's social impacts. Zooming out from AI harms and contrasting them with corresponding analog privileges makes legible a subtle permutation of longstanding patterns of exceptionalism. More importantly, putting the spotlight on privilege provides a valuable opportunity to interrupt unearned advantages and replace them with equitable, reasoned approaches to determining who should be subject to or exempt from AI systems.
Well, we can dream, can't we?