The two-sector framework opens a cultural dimension that I think makes the model even richer than you've stated it here. The human-premium sector depends on consumers carrying a culturally transmitted preference for human authenticity, the ice-skater valued more precisely BECAUSE a human is performing. That preference was formed in a world where all output was human by default. The fascinating question your model raises is whether it reproduces across generations or slowly attenuates.
A generation raised on AI-generated content from age five is forming aesthetic preferences within an AI-saturated environment from the start. If your baseline sense of what good writing, music, performance and cooking feels like was shaped by AI output, the perceived gap between human and machine quality may never fully register. Which means the size of sector two could be generationally variable: large for cohorts who carry pre-AI cultural architecture, potentially much narrower for cohorts whose preferences were formed entirely within it. That's a demographic dimension to the equilibrium that makes the transition dynamics even more interesting to model, because sector two's long-run viability depends not just on redistribution but on whether the cultural preference that sustains it is durable or quietly depreciating across generational cohorts.
Good point; agree with you. Nobody knows whether our preference for humans over non-humans will carry over. I think that, however reluctant I often am to speak of transformational change, we enter here an unknown domain: machines no longer do what we tell them, they do not replace us, they actually, to some extent, become us. It moves from being an economic to being a philosophical or ontological issue.
Your ontological point opens a mechanism that I think is even more specific than it first appears. If machines "to some extent, become us", then the two-sector model eventually faces an Akerlof problem. The human premium in sector two depends entirely on consumers being able to verify that the output is actually human. When that verification becomes impossible, the premium collapses even for output that genuinely IS human, because the uncertainty itself destroys the pricing signal.
This has already played out in one market. A painting's value depends on provenance, on WHO made it. When AI-generated art became indistinguishable from human work, the first instinct was labelling and certification. But the deeper problem is that provenance is becoming unfalsifiable. If you can't reliably prove something was made by a human, the market for human authenticity becomes a lemons market: the unverifiable drives out the verified, and the premium dissolves across the board regardless of actual origin.
That's the mechanism through which your ontological insight feeds back into the economics. Sector two fails when the boundary between human and machine output becomes unverifiable, because without verification the premium has no information-theoretic foundation to stand on. The ice-skater survives because you can watch a physical body on the ice. The writer, the teacher, the coach, the counsellor: these are the categories where verification collapses first, and they are precisely the occupations you placed at the heart of sector two.
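The pooling mechanism behind that collapse can be sketched numerically. This is a toy model, not anything from the original exchange: the valuations, cost, and exit rule are all invented for illustration. Buyers who cannot verify origin pay only the expected value of a random item, and human producers exit whenever that pooled price falls below their cost, which shrinks the human share further.

```python
# Toy Akerlof-style sketch of the human-authenticity premium collapsing
# once provenance is unverifiable. All numbers are illustrative.

V_HUMAN = 100.0    # what buyers would pay for verified human output
V_AI = 40.0        # what buyers would pay for known AI output
COST_HUMAN = 70.0  # cost of producing genuinely human output

def pooled_price(q: float) -> float:
    """Price buyers offer when they only know the SHARE q of human
    output in the market, not the origin of any single item."""
    return q * V_HUMAN + (1.0 - q) * V_AI

def run(q0: float, steps: int = 50) -> float:
    """Adverse-selection loop: human producers exit whenever the
    pooled price no longer covers their cost, shrinking q further."""
    q = q0
    for _ in range(steps):
        if pooled_price(q) < COST_HUMAN:
            q *= 0.5  # half the remaining human producers exit
    return q

# Starting at a 60% human share, the pooled price (76) still covers
# human costs, so the market holds. Starting at 40%, the pooled price
# (64) does not, humans exit, and the price unravels toward the
# all-AI valuation regardless of any individual item's actual origin.
```

The point of the sketch is that there is a threshold: above it the premium is self-sustaining, below it the unraveling is one-way, which matches the "drives out the verified" dynamic described above.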
It is not true that a world without human wages is a world without capitalism or profits. As Kalecki showed, profits are equal to investment plus capitalist consumption. After the initial shock of an aggregate demand loss, a new equilibrium would be possible based only on capitalist consumption.
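For readers who want the identity spelled out, this is the standard Kalecki derivation under the usual simplifying assumptions (closed economy, no government, workers spending all of their wages):

```latex
% National income from the expenditure and income sides:
%   Y = C_w + C_c + I   (workers' consumption, capitalists' consumption, investment)
%   Y = W + P           (wages plus gross profits)
% With workers spending all wages, C_w = W, so:
\begin{align*}
  W + P &= C_w + C_c + I \\
  P &= C_c + I \qquad \text{(since } C_w = W\text{)}
\end{align*}
% Setting W = 0 (no human wages) leaves P = C_c + I intact:
% profits can in principle rest on capitalist consumption and
% investment alone, which is the equilibrium claimed above.
```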
Historically, of course, we have seen automation push people into services, and AI is likely to contribute to this trend. However, if AI becomes an immiserating force by outcompeting humans for jobs, then it will be capitalist consumption, rather than wages, that sustains this demand for services.
The only thing necessary for there to be a tendency for the rate of profit to fall is a rising investment rate (gross investment as a share of gross profit), something which has been historically correlated with the development of the productive forces. The US has been unwilling to allow rising investment rates under neoliberalism in order to preserve its capitalist class, and hence has undermined its productive forces and increased rent-seeking. And the less infrastructure and the fewer factories you have, the less use you'll have for AI to begin with. Already we're seeing the finance world hesitate over how fixed capital expenditure is affecting the free cash flow of major tech companies; I suspect they'll be unable to endure it much longer.
At least regarding the issue of AI's effect on the economy from a Marxist point of view, this is something that I have written about before:
https://cosmonautmag.com/2023/05/artificial-intelligence-universal-machines-and-killing-bourgeois-dreams/
As well as the rate of profit dynamics I mentioned above:
https://cosmonautmag.com/2026/04/the-capitalist-in-the-21st-century/
Bookmarked, can't wait to read this. Looks great.
This is the strongest structural argument for why full automation doesn't collapse capitalism. The two-sector model (labor-intensive services absorbing displaced workers while generating the demand that sustains automated-sector profits) is historically grounded and hard to dismiss.
The critical assumption is transmission. The automated sector's productivity gains need to reach consumers as spending power. Your neoclassical case names this requirement explicitly: without it, aggregate demand and profits go to zero.
The record since the 1970s bears on this. Computing, the internet and mobile each delivered productivity gains comparable to what AI promises. US productivity rose roughly 90% since 1979; typical worker compensation rose 33%. The gains were real. They arrived as asset appreciation and corporate margins rather than household income.
If AI follows the same pattern, the labor-intensive second sector faces the constraint every service sector already faces: customers whose wages haven't kept pace with output. The care workers, coaches and artisan producers your model depends on need clients with money to spend. That base has been shrinking for decades.
The two-sector equilibrium requires a functioning link between productivity and demand. That link has been weakening for fifty years.
I think this is a very useful way to frame the issue, especially because it avoids the simple conclusion that AI automatically means the disappearance of labor income.
One point I would add is that technological progress can also radically change the relative purchasing power of different kinds of labor. Twenty years ago, if a television cost $2,000 and a New York barber earned $10 per hour, he needed 200 hours of work to buy one TV. Today, if a better television costs $1,000 and the same barber earns $40 per hour, he needs only 25 hours of work.
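The barber arithmetic in that example can be written out directly (using the same hypothetical prices and wages as above):

```python
# Labor hours a barber needs to work to buy one TV,
# using the illustrative figures from the example above.

def hours_per_tv(tv_price: float, hourly_wage: float) -> float:
    return tv_price / hourly_wage

then_hours = hours_per_tv(2000, 10)  # twenty years ago: 200 hours
now_hours = hours_per_tv(1000, 40)   # today: 25 hours

# Measured in televisions, the barber's purchasing power rose 8x,
# even though his nominal wage only quadrupled, because the TV's
# price halved at the same time.
improvement = then_hours / now_hours
```

The multiplicative structure is the point: relative purchasing power moves with both the wage in the non-automatable sector and the falling price in the automatable one.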
So the key effect is not simply that machines replace labor. In highly scalable and automatable sectors, technology can drive down prices dramatically. At the same time, in local, personal, trust-based, or human-authentic services, labor may become relatively more valuable: the barber's purchasing power measured in televisions has increased enormously.
This is why I agree that the future of AI capitalism should be analyzed as a restructuring of labor, prices, income distribution, and demand, rather than simply as the end of human labor.
The Marxist interpretation of AI is very simple and elegant.
AI is just an automation technology: it substitutes dead labor for living labor (in money terms, not in the number of human beings employed). As such, it is the same economic phenomenon as the steam engine.
Because we have empirical evidence, we know AI can only be profitable if it automates middle-class jobs, those white-collar, "intellectual" jobs. The reason for that is very clear and simple: the middle class has very low productivity (when it is productive at all), and its elimination will open room for capitalist accumulation and expansion.
The higher a worker's salary, the lower their productivity; AI is very expensive, so it is only profitable if it eliminates the highest wages, which are, by definition, middle-class wages. If AI gets cheap enough, it can become profitable to automate lower-salary (i.e. higher-productivity) labor. Therefore, this is just a matter of quantity turning into a quality of its own.
As to why the capitalist intellectuals can't see AI as simple automation: by a quirk of destiny, the middle class is the one responsible for producing ideology in capitalist society. Therefore, the middle class is simply defending itself by using emotional, cultural, ethical, moral, transcendental, pseudo-scientific arguments against what is simply an automation technology.
Moralism is the tool of the middle class: the capitalist class doesn't need it because it can do whatever it wants, the proletariat and lumpenproletariat don't need it because they have nothing else to lose. Moral superiority is thus a feature of the middle class, a class identity.
This seems logically persuasive - but I am still left struggling to populate the Labour set you predict: “….the creation of occupations where labor skills will exceed today’s level simply because they would have to be superior to the skill levels produced by the AI in order for people to want to purchase such products and services.”
I suspect that the emergence of AI-enabled robots programmed for continuous affection and unflappable good humor will make human nurses seem very second-rate, for example.
When Marx said that the USA could transition beyond capitalism without the need for revolution he implied that capitalism can indeed create the conditions of its own replacement.
The incompatibility of an automated economy with capitalism is indeed the point.
Surplus value goes away amidst a huge growth in "things". Some call this abundance.
The labor-intensive new work you describe might, I think, equate to what the young Marx called "the realm of freedom" for a "total human being", in other words free time and choice. It will certainly involve effort, both physical and mental, but it may not be paid labor.
Not great.
https://paulruth.substack.com/p/lets-stop-pretending-that-ai-is-new?utm_source=share&utm_medium=android&r=atg1
Am I missing something? The issue of c/v and profits should be informed by the Okishio Theorem, which works out the intuition that reducing labor input into produced goods also lowers the labor value of produced inputs as well. The theorem shows that if processes that reduce direct and indirect labor inputs are preferred, the result will be an increase in profits, not a fall. There have been various attempts to do an end run around this theorem, but none of them is convincing.
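The intuition behind the theorem can be shown in a one-commodity sketch. This is my own illustration with invented parameters, and it deliberately simplifies: Okishio's actual result covers multi-sector economies where relative prices adjust, which is where the hard part of the proof lives. In a corn economy, producing one unit of corn takes `a` units of corn as input plus `l` hours of labor paid a real wage `w` in corn, so unit cost is `a + w*l` and the profit rate follows immediately.

```python
# One-commodity illustration of the Okishio intuition: at a fixed
# real wage, any cost-reducing technique raises the profit rate.
# Parameters are invented; this is a sketch, not the full theorem.

def profit_rate(a: float, l: float, w: float) -> float:
    cost = a + w * l           # unit cost of one unit of corn, in corn
    return (1.0 - cost) / cost # surplus per unit over capital advanced

W = 0.05                       # real wage: corn per labor hour

old = profit_rate(a=0.5, l=4.0, w=W)  # unit cost 0.70 -> r ~ 0.43
# New technique: more "dead labor" (a rises), far less living labor.
new = profit_rate(a=0.6, l=1.0, w=W)  # unit cost 0.65 -> r ~ 0.54

# The new technique is adopted because it cuts unit cost at the going
# wage, and precisely for that reason the profit rate RISES, despite
# the higher corn-input (constant capital) intensity.
```

The adoption criterion (lower cost at prevailing prices) and the profit-rate conclusion coincide here by construction; the theorem's content is that this coincidence survives the price adjustments of a multi-sector model.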
As for effective demand in an automated economy, the starting point is Joan Robinson's essay on the robot economy.
I would like to have included in this discussion the idea that AI is in fact only possible through the use of humanity's intellectual commons, and that the profits generated by AI should thus be properly redistributed, even if they go to zero overall.
Useful analysis. To further enrich the framework, we should consider the role of predictive value extraction as a modern driver of surplus. By leveraging AI to decode and anticipate the behaviors of the "mass-man," elites are not just automating labor; they are optimizing the management and coordination of entire populations.
In thermodynamic terms, this represents a reduction in systemic entropy for the ruling class, allowing them to extract a "behavioral surplus" that fits directly into the surplus value equation. It is no longer just about owning the machines, but about owning the algorithmic predictability of human agency itself.
Rather than look too far forward to the point where AI has the potential to replace all existing categories of labor, we should analyze the current situation. That situation looks much more like what Marx encountered at the beginning of the industrial revolution. Capital needed to be acquired in order for capitalists to build the plants that would churn out goods. The capitalist needed to use all his capital and extract surplus value from his workforce to be able to build his enterprise. The enterprise itself, while able to churn out goods at a lower per-unit cost than non-industrial craft production, still had to employ large numbers of workers, so per-unit labor costs remained high and the capitalist paid his workers as close to their survival rate as he could get away with. Competition with other capitalists engaged in similar production remained intense.
Looking at AI today, the capital required to engage in competition has limited AI production to a handful of companies. Those companies have exhausted their cash reserves and are increasingly resorting to debt and to firing unproductive workers to accumulate more investment capital.
It is also not clear that AI can even replace workers, or that AI can be used to reduce the costs of production in the way that the assembly line or the electrification of industry did.
This takes us back to Marx's distinction between sales and marketing workers and production workers. The former do not generate surplus value as the latter do. Who, then, are the productive workers generating surplus value in the age of AI?
It is a topic I am very interested in. In the Grundrisse, in the famous fragment about machines, Marx anticipates, many years earlier, the idea of a nearly fully automated society and presents leisure time as a source of value in this new context. However, that utopian future in which machines work and humans enjoy leisure is not a natural outcome of capitalism. It is necessary to struggle to socialize the ownership of machines (understood broadly, including AI) to avoid artificial scarcity and to prevent leisure time from being in the hands of a few (owners of robots, machines, AI, and platforms) and to ensure that all humanity can enjoy it. Otherwise, as Nicolás mentions below, and following Kalecki's warning, there could be an equilibrium in which the consumption of the capitalist class keeps the economy running.
Invoking Marx's so-called Fragment on Machines in the Grundrisse, Paul Mason in fact argued as far back as 2015 that because the marginal cost of the replication of information is zero or close to zero, and information is of seemingly ever-increasing importance as a factor of production, something like a Marxian end of capitalism was becoming a plausible future scenario. (I'm simplifying, but this sort of thing.) This was before the current AI debates, but it immediately returned to my mind when I read this.
https://www.theguardian.com/books/2015/jul/17/postcapitalism-end-of-capitalism-begun
Typo: 'Therefore, our automated sector’s profit will not be negligible as at first semed'