The Coming Crisis of Authenticity
Imagine…
A Human Resources representative, charged with hiring an employee for a new technology initiative, puts together a job ad. The HR rep partially understands the technology, but not completely. As they start writing the job ad, they realize they need help. It's late on a Friday afternoon, and the IT director left for a conference yesterday. Not wanting to waste any more time, the HR rep asks an AI to help sculpt the ad.
A prospective employee sees the job ad. Some of the skills are right up their alley. Others not so much. Looking at their resume, the prospect realizes they can tweak their resume to better fit the ad. They struggle with wording those tweaks and ask AI to help. The AI does a masterful job of matching the keywords in the job ad to the resume. The prospect submits their resume.
The HR rep, overwhelmed with the number of applications, filters the submissions with an AI tool that compares keywords in the job ad with keywords in the resumes. The prospect’s resume comes out near the top. The HR rep calls the prospect and arranges a virtual interview.
The prospect, worried about landing a job because their rent is due soon, wants to put their best foot forward. They hear about an app that can suggest responses to interview questions. Figuring a little help is okay as long as they learn on the job quickly, they subscribe to the app.
The day of the interview arrives, and the app helps the job applicant crush the interview questions. The HR rep, suitably impressed, waves off a slight unease that a couple of the answers were generic but polished in phrasing.
To verify the capability of the applicant, the HR rep emails one of the references and asks for a recommendation. The reference likes the prospect but worries the job requirements don't quite fit the applicant's background. However, they figure that's up to the company to decide, not them. Short on time, the reference seeks help writing the recommendation. With a prompt to an AI and some tweaks afterward, the reference completes a glowing recommendation.
Reading the recommendation and reflecting on the interview, the HR rep decides the prospect fits the job. They send an offer. The prospect accepts the offer and starts the job.
Within two months, it becomes clear that the prospect can’t do essential elements of the job. With no time to hire a new employee, the new technology initiative fails. Because of the failure, the business struggles financially. It lays off some of its staff, including the HR rep.
Authenticity
Objectivity is necessary for decision making. Good decisions require knowing true and authentic facts; without them, mistakes are made. Unfortunately, people introduce biases, occasionally stretch the truth, and, in a few morally corrupt instances, attempt to deceive. Knowing the human propensity for fabricating facts, we develop systems and practices for discovering the truth, verifying facts, and establishing objectivity.
Resumes can be fabricated, so we require interviews. Applicants can lie during interviews, so we demand references.
The problem today stems from our new AI tools. AI has made it possible to fake so many things. The authenticity of all of these activities can no longer be trusted. Even when people use AI tools in well-meaning and ethical ways, the outcomes of these activities change the facts, sometimes subtly, sometimes in significant ways. At what point can we no longer trust the outcome?
At what point can we no longer trust the authenticity?
We face a crisis of authenticity.
What’s different this time?
It's not that people haven't received help before. We get help improving our resumes, our interview skills, and our recommendations. I've gone to resume-writing workshops and participated in mock interviews. I use templates when writing recommendations so I don't forget major points. We've all done it.
The difference this time: the cost of getting help has dropped to zero.
When the cost is high, we are motivated to make what we say count. This discourages inauthentic responses because getting caught saying false things is expensive. It impacts our reputation, it impacts our pocketbooks, and it impacts our lives.
When the cost drops to zero, what do people lose if they fail?
What if a job applicant lets AI construct their resume a thousand times? 999 companies may reject that resume. But if the applicant lands that one job, they can extract significant value from that business until the business discovers the fraud. By then, the job applicant can find the next sucker.
What if a political advocate develops a thousand deep fake political videos with AI? Social media companies might flag 999 of them. But if one succeeds, all those failures cost them nothing.
What if a scammer emails a million people with personalized AI-generated messages made to sound like a relative of the target? 999,999 of those emails might fail. But the one that succeeds could drain the savings from the target's bank account.
In isolation, one lie is manageable. When thousands start lying all the time because it's so easy, how do we proceed?
How to authenticate
Our current system works because lying is costly. But when dishonesty becomes frictionless, bad actors exploit the system. AI enables that exploitation at scale.
Yes, AI will change the nature of work in many positive ways. However, we will also need new forms of authentication. New methods of trust-building. New systems for verifying competence, facts, and truth.
If resumes, interviews, and references become unreliable, what comes next?
We’ll need to rethink how we signal truth and value. And we’ll need to do it soon.
Because this shift is already underway.