Carl Benedikt Frey and Michael Osborne on how AI benefits lower-skilled workers
Machines make producing average content easier, conclude the two academics

TEN YEARS ago we published a paper, “The Future of Employment”, highlighting how artificial intelligence (AI) was broadening the scope of what computers could do, and so broadening the possibilities for automation. The prevailing narrative at the time was that the era of the “average” worker was ending, with machines progressively replacing routine and administrative jobs. Highly skilled professionals were the ones reaping the benefits, as computers made them more productive and enabled them to sell their services locally and around the world. A decade on, what is known as generative AI seems to be disrupting such trends: the era of “average” is making a comeback.
It is hard not to be impressed by the capabilities of generative AI. Large language models (LLMs) such as GPT-4 can now answer questions in a human-like way and write plausible essays. Image-generators like DALL-E 2 are also advancing rapidly, aiding, or in some cases replacing, designers and advertising executives. Add to this GitHub’s Copilot, an AI-powered “pair programmer” that can write computer code, and the potential for automation seems almost boundless.
Yet to understand how labour markets will evolve in response, we need to take a closer look under the bonnet. First, AI produces content of a quality similar to that on which it is trained—“garbage in, garbage out”. Second, although no one outside OpenAI, GPT-4’s creator, knows the exact underpinnings of the model, we know this much: LLMs, in their current form, are astonishingly data-hungry. They need to be trained on very large datasets such as broad swathes of the internet, rather than smaller, curated datasets produced by experts. As a result, LLMs tend to produce text of a quality equal to the average—rather than the exceptional—bits of the internet. Average in, average out.
True, fine-tuning can further improve generative AI’s quality. One way to do this is through “reinforcement learning from human feedback” (RLHF), which updates a model using human judgment of whether its response to a prompt was appropriate. But this is labour-intensive: OpenAI reportedly outsourced much of this work to Kenyans earning less than $2 per hour. What’s more, there is evidence of a recent decline in the effectiveness of LLMs, suggesting that RLHF might be reaching its limits.
Even with improvements from fine-tuning, the upper bound on performance of the current approach to LLMs may not be far from that of current models. Unless a breakthrough occurs that allows algorithms to learn from smaller datasets, the “average in, average out” dilemma is likely to persist.
What does this mean for the future of work? First, although many jobs can be automated, the most recent wave of generative AI will continue to need a human in the loop. Second, low-skilled workers are poised to benefit disproportionately, as they are now able to produce content that meets the “average” standard.
In the world of software development, the introduction of Copilot has changed the game, cutting completion times in half. But the real story lies in the beneficiaries of this revolution. It’s not the seasoned experts whose productivity is increasing the most, but rather those with the least experience in programming.
A similar story is unfolding in other industries. ChatGPT, for instance, has been found to boost productivity in writing tasks, with the worst writers benefiting the most. AI is having a big impact in customer service, too. Erik Brynjolfsson of Stanford University and his co-authors find that AI assistants increase productivity by 14% by automating routine tasks and providing support to human agents—and it is novices and low-skilled workers who reap the greatest productivity gains.
This shift challenges the conventional wisdom that automation mainly benefits those at the top of the skills ladder, and highlights the potential for technology to democratise access to content-creating industries, from legal and educational services to news and entertainment. What that means, however, is greater competition and probably reduced earnings for incumbents.
A useful analogy is that of Uber, a ride-hailing firm, and its effects on the taxi industry. With the implementation of GPS technology, having a thorough knowledge of each street in San Francisco was no longer a valuable skill for taxi drivers. Consequently, when Uber expanded its operations across America, drivers with only limited familiarity with the cities in which they worked were able to thrive. Heightened competition pushed down earnings for established drivers. Joint research with our Oxford colleagues Thor Berger and Chinchih Chen shows that when Uber entered a new city, drivers’ hourly earnings dropped by around 10%.
Of course, generative AI’s overall impact on wages will depend on how much more people will consume when AI makes content cheaper to produce. This question is akin to asking how much more time one would spend on Netflix if the content were cheaper and better. The answer is probably not very much, as time is a limited resource.
Yet even if the result is not widespread unemployment, lower wages could trigger a backlash, as seen with taxi drivers protesting against Uber, and more recently in Hollywood, where actors and screenwriters have gone on strike in part over generative AI. Historically, when people find their incomes threatened by machines, resistance has followed, whether in Georgian Britain or Qing dynasty China, albeit with different outcomes. Whereas the Luddite riots were brutally quashed, in China industrialisation was delayed by two centuries as powerful guilds halted mechanisation.
The bottom line is that when incumbents have more political clout, opposition is more likely to succeed. And white-collar professionals have more political influence than their working-class counterparts. Blue-collar workers have, for decades, largely failed to hold back the impact of automation in factories; white-collar workers threatened by AI may be able to mount powerful resistance to new technologies.
The prospect of such resistance raises the risk of what one of us has called a “technology trap”. Unless policies are implemented to smooth the ride, Luddite-inspired efforts to avoid the short-term disruption brought about by new technology might inadvertently obstruct access to its long-term benefits, such as AI’s potential to produce personalised tools that help older workers with complex health conditions. Average might be back, but it does not mean that everyone will benefit.■
Carl Benedikt Frey is the Dieter Schwarz Associate Professor of AI & Work at the Oxford Internet Institute and Oxford Martin Citi Fellow at the Oxford Martin School. He is also the author of “The Technology Trap” (2019).
Michael Osborne is a Professor of Machine Learning at the University of Oxford, an Official Fellow of Exeter College, Oxford, and a co-founder of Mind Foundry.