Marcus spent four months applying to 200 jobs. He customized every resume. He researched every company. He followed up. In almost every case, he received the same automated response within minutes of submitting, sometimes seconds. A form rejection. No reason given. No human name attached.
When he finally landed an interview, the recruiter told him she had never seen his resume. Their ATS (applicant tracking system) had filtered him out automatically. She had pulled him from a different pool entirely, on a hunch, after a colleague mentioned his name.
He had made it not by being qualified, but by knowing someone who knew someone. The algorithm had already decided he wasn't worth a look.
The Screen Before the Screen
Most large employers no longer read resumes as a first step. They run them through automated screening software that scores candidates against a profile, filters by keyword match, ranks by inferred attributes, and passes a narrow shortlist to human eyes. The rest go into a folder no one opens.
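To make the mechanism concrete, here is a deliberately toy sketch of the kind of keyword scoring and shortlist truncation described above. Everything in it, the keywords, weights, cutoff, and field names, is an assumption made for illustration, not any vendor's actual logic.

```python
# Illustrative sketch only: a toy keyword-scoring screen of the kind
# described above. Every keyword, weight, and threshold is invented
# for demonstration, not taken from any real product.

ROLE_KEYWORDS = {"python": 3, "kubernetes": 2, "sql": 2, "agile": 1}
SCORE_CUTOFF = 5       # resumes scoring below this are never seen by a human
SHORTLIST_SIZE = 20    # only the top handful reach a recruiter

def score_resume(text: str) -> int:
    """Count weighted keyword hits; everything else about the candidate is invisible."""
    lowered = text.lower()
    return sum(weight for kw, weight in ROLE_KEYWORDS.items() if kw in lowered)

def screen(applications: list[dict]) -> list[dict]:
    """Score, filter, rank, truncate. The rejected majority get no explanation."""
    scored = [(score_resume(app["resume_text"]), app) for app in applications]
    passed = [(s, app) for s, app in scored if s >= SCORE_CUTOFF]
    passed.sort(key=lambda pair: pair[0], reverse=True)
    return [app for _, app in passed[:SHORTLIST_SIZE]]
```

The details do not matter; the shape does. A hard cutoff, a ranking, a truncation, and silence for everyone below the line.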
This is framed as a solution to volume. Companies receive thousands of applications for a single role. No recruiter can read them all. AI makes the triage possible.
That framing is technically accurate. It is also a way of not saying the other thing.
What the Algorithm Actually Does
When a human hiring manager rejects your application, they are making a judgment call that can be questioned. If patterns emerge (Black applicants rejected at higher rates, women filtered out of technical roles, candidates with foreign-sounding names disappearing before interviews), that pattern can surface in litigation. A decision-maker can be deposed. An HR director can be put on the stand. There is a chain of accountability.
When an algorithm rejects you, the chain breaks.
The company can say: we did not reject you. The system scored you. The system is neutral. The system has no intent. You cannot depose a model. You cannot prove what a black-box ranking function was optimizing for. You cannot compel a vendor to disclose proprietary weights. And even if you could, the company can point to the vendor, and the vendor can point to the training data, and the training data points back to the historical hiring decisions of companies like the one that just rejected you.
The feedback loop is perfect and perfectly insulated.
Why "Neutral" Is a Choice
The standard defense of automated screening is that it removes human bias. This defense depends on what you mean by bias.
If bias means a recruiter making a snap judgment based on a name, automated systems can reduce that specific failure. But they replace it with a different one: the assumption that whoever got hired in the past is the model for who should get hired in the future.
Every AI screening system is trained on historical data. Historical hiring data reflects historical biases. When the model learns what a "qualified" candidate looks like, it is learning from a dataset shaped by decades of decisions that courts have already found, repeatedly, to be discriminatory.
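Here is what that loop looks like reduced to a few lines. This is a toy sketch with invented features and labels, assumed for illustration rather than drawn from any real system, but the structure is the point: past human decisions become the ground truth the model is rewarded for reproducing.

```python
# Illustrative sketch: training a screening model on past hiring outcomes.
# The features, data, and labels below are invented to show the feedback
# loop, not taken from any real vendor or dataset.
from sklearn.linear_model import LogisticRegression

# Historical records: features extracted from past applicants, plus the
# label "was this person hired?" Whatever bias shaped those decisions is
# now encoded as the correct answer.
X_past = [
    [1, 0, 12],   # e.g. [attended_elite_school, employment_gap, years_experience]
    [0, 1, 8],
    [1, 0, 5],
    [0, 1, 10],
]
y_past = [1, 0, 1, 0]   # 1 = hired, 0 = rejected, as decided by past humans

model = LogisticRegression().fit(X_past, y_past)

# A new applicant is scored against the pattern of who got hired before.
# "Qualified" here means "resembles previous hires", nothing more.
new_applicant = [[0, 1, 15]]
print(model.predict_proba(new_applicant))  # probability of looking like a past hire
```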
Neutrality is not a property of the training data. It is a marketing claim.
The Person on the Other End
There is no human to appeal to. This is not a design flaw. It is the design.
When rejection is automated, the company is not required to offer a reason. There is no fair chance ordinance for algorithms in most jurisdictions. There is no civil rights framework that has successfully compelled algorithmic transparency in private hiring. The vendors who build these systems consider their models trade secrets.
You applied. The system scored you. The score was not high enough. No further information is available.
This is not a neutral administrative outcome. It is a decision made about your life and livelihood by a system that has been deliberately placed outside the reach of accountability, and then described in press releases and at HR conferences as a step toward fairness.
What "Efficiency" Costs
The efficiency framing is worth examining directly.
Automated screening does reduce recruiter workload. It does process high volumes. These are real things. But efficiency for whom? The company processes more applications with fewer hours of labor. The candidate spends weeks crafting materials that a model discards in milliseconds. The cost of the system's efficiency is externalized entirely onto the people it screens out. They never know why, never have recourse, and are left to iterate blindly.
When someone tells you a system is efficient, the useful question is: efficient for which party, and at whose expense?
The Accountability Gap
AI hiring tools are sold to HR departments as a way to make better decisions faster. What they also do, and what rarely appears in the sales deck, is transfer the legal and moral weight of rejection away from the employer.
The employer did not reject you. An algorithm did. The algorithm is a product. The product was built by a vendor. The vendor used industry-standard methodology. Everyone made reasonable decisions and no one is responsible for the outcome.
This structure is not unique to hiring. It is a pattern: using automation not to improve decisions but to diffuse accountability for them so thoroughly that no one can be held to anything.
When you cannot find anyone to answer for a decision that affected your life, the question worth asking is not: was this an accident?
The question is: who benefits from the fact that no one has to answer?