Academia has a lot to answer for; it may have ruined the lives of a generation or more. To understand why, we need to go back to the 1990s.
Once the Berlin Wall fell, America turned its attention to international trade. Governments (and economists) thought growing international trade would bind nation states more closely economically and so reduce the likelihood of war. Parenthetically, in 2022 Russia proved that to be an unfortunate illusion. Business, and manufacturing companies in particular, saw the post-Cold War order as an opportunity to move jobs to lower-wage countries, and offshoring took off.
As manufacturing jobs in the US (and Europe) disappeared, management scholars began lauding the "knowledge economy" as the answer to offshoring. To meet the demand for "knowledge workers", at least 40% (Tony Blair suggested 50%) of high-school graduates would need to get a four-year degree. In the UK, polytechnics became universities with the stroke of a pen.
Fast forward a quarter century and academia (and much of the US) is either up in arms about or enthralled by ChatGPT, a deep-learning artificial intelligence engine. Tellingly, a member of the Wharton business faculty asked it to answer one of his MBA exam questions and judged that its answer, had it been a student's, would have earned a "C". So the debate about whether to use ChatGPT in the classroom or ban it is in a sense moot. If ChatGPT gets a "C" at Wharton, anyone with a Wharton "C" or lower is effectively unemployable. Why hire a Wharton MBA at $300k when you can get as good an answer for free?
What does this all mean? First, ChatGPT provides a universal standard by which to calibrate work across institutions. If a Wharton professor thinks a ChatGPT answer is worth a "C" and a San Jose State prof thinks it's a "B", that suggests a Wharton "C" is about the same as a San Jose State "B".
Second, it creates a performance threshold, an "AI bar": get less than a Wharton "C" (or a San Jose State "B") and you are no better than ChatGPT. So to be competitive with AI in the labor market, students have to do at least as well as ChatGPT; otherwise they're unemployable. That's why the debate about banning or using ChatGPT is moot. Students may use it, but if that's all they do they won't clear the AI bar, and their degree is effectively worthless.
In the longer run, the employment landscape will change radically. Knowledge work will be eviscerated. High-school leavers will eschew four-year degrees for jobs that require physical presence: service and manual jobs. For a while at least, jobs that require individual customization may be immune from automation and offer a temporary respite from the technological tsunami. Those early in their careers will soon be overtaken by AI and will find themselves looking for work outside the knowledge economy.
We are at an inflection point, one that OpenAI has created with the launch of ChatGPT. Suddenly everyone has been given a salutary lesson in AI's potential. CEOs who had either not been paying attention or had not taken it seriously will now be asking how big a threat it poses to their companies if they don't get on board. That will light a fire under AI's adoption and development.
The better AI becomes (and its progress will be ever more rapid), the less knowledge work there will be. Those most affected will be knowledge workers early in their careers, say one to ten years in. But that range will widen as AI improves. Only those with deep experience will be immune from replacement by "intelligent" machines. And as a consequence of academia's hype and enthusiasm over the knowledge economy, we have created a huge group of people in their 30s and 40s who are most at risk of being replaced by AI. That's a big potential social problem for which academia is responsible.
Moreover, that creates a conundrum: if people early in their careers are replaced by AI, fewer and fewer will get the experience needed to stay ahead of the machines. The result will be an increasingly divided society, with a tiny elite rising above the "AI bar" and commanding insane salaries while everyone else is jobless or working for minimum wage. It could be even worse (although I doubt that politically this would be allowed to happen, but that's another story): AI could render even the best and the brightest redundant.
By the 2030s the political divide won't be between red states and blue states, but between a small, insanely wealthy elite who have jobs and the 99% who are out of work or earning minimum wage. How well the country's leaders prepare for that future will determine whether we navigate it peacefully or face a tinderbox of volatile social unrest.