The main message of my new paper “Better Applications, Worse Matching: Artificial Intelligence and Talent Allocation” is simple but important: generative AI can make everyone look better on paper, and can even make everyone more productive at everything they do, yet the labor market may still be worse off. AI can help applicants write cleaner resumes, sharper cover letters, and more polished work samples. But hiring does not happen after firms fully know who will perform best in the job. It happens earlier, based on imperfect screening materials. The paper shows that if AI improves those materials’ appearance while making them less informative about actual fit, then matching workers to jobs can deteriorate even as AI raises productivity inside every single job.

The core contribution is to separate two effects that are often blurred together in public discussion. One is what AI does inside a job once someone is hired: it may help them write faster, code better, or complete tasks more efficiently. The other is what AI does before hiring: it may change how informative applications, interviews, or work samples are. The paper argues that these are not the same thing. A labor market can continue to sort efficiently on the basis of the information it sees, yet still produce worse matches if the visible ranking of applicants becomes a poorer guide to underlying talent and job fit.
That distinction leads to an interesting theoretical result. Even if AI makes every possible worker–firm pairing more productive, total output in the economy can still fall if the screening stage gets worse enough. In other words, better applications do not necessarily mean better matching. The paper also argues that this creates an “arms race” on the applicant side. While using AI to polish applications may be privately attractive for each worker, it becomes socially excessive when everyone does it. Once firms can no longer rely as much on first-round materials, they are predicted to respond by using more verification, more work-sample tests, more probationary hiring, and more early post-hire evaluation.
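The logic of that result can be illustrated with a small stylized simulation. This is my own sketch, not the paper’s model: I assume one-dimensional worker ability, assortative matching of workers to jobs based on a noisy signal, and complementarity between ability and job quality, so output per match is productivity × ability × job quality. “AI” raises the productivity multiplier for every match but makes the screening signal noisier. All parameter values (noise levels, the 10% productivity gain) are illustrative, not estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
ability = rng.normal(size=n)               # true worker talent (unobserved by firms)
job_quality = np.sort(rng.normal(size=n))  # jobs ordered from worst to best

def total_output(noise_sd, productivity):
    # Firms only see a noisy signal of ability, and the market matches
    # assortatively on that signal: best-looking worker to best job, etc.
    signal = ability + rng.normal(scale=noise_sd, size=n)
    ranked_ability = ability[np.argsort(signal)]
    # Complementarity: output of each match = productivity * ability * job quality.
    return productivity * np.sum(ranked_ability * job_quality)

# Pre-AI: informative signals, baseline productivity.
pre_ai = total_output(noise_sd=0.3, productivity=1.0)
# Post-AI: every match is 10% more productive, but polished
# applications make the signal much less informative about fit.
post_ai = total_output(noise_sd=3.0, productivity=1.1)

print(f"pre-AI total output:  {pre_ai:.0f}")
print(f"post-AI total output: {post_ai:.0f}")
```

With these illustrative numbers, the loss from worse sorting swamps the uniform 10% productivity gain, so total output falls even though every individual worker–firm pairing became more productive, which is exactly the shape of the paper’s result.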
For a broader audience, the paper’s contribution is to reframe the conversation about AI and work. Much of the debate asks whether AI makes workers more productive. This paper says that is only half the question. The other half is whether AI improves or degrades the information used to allocate people to opportunities in the first place. That question matters for anyone trying to understand, and empirically test, whether AI improves the performance of a given labor market.










