Not sure I like this framing that Teslas can’t drive themselves. If, as you say, they can self-drive for 13 miles before needing human intervention, then it seems like they can self-drive for 13 miles.
Humans make many forms of driving mistakes all the time. They take wrong turns, they drive in the wrong lane, they speed, they go too slow, they wreck. If another human were overseeing them, they would make fewer mistakes. Maybe they’d go from 1 crash every 1 million miles to 1 crash every 10 million miles. But by your definition, because they still need oversight, they cannot fully self-drive.
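As a back-of-the-envelope illustration of that comparison, here’s a minimal sketch in Python. All the failure rates are hypothetical (the 1M/10M crash figures from the comment above and an assumed 13 miles per intervention), not measured Tesla or crash-statistics data, and a crash and an intervention are not the same kind of event; the point is only how the per-mile rates compare.

```python
# Hypothetical rates for illustration only -- not measured data.
miles_per_crash_unsupervised_human = 1_000_000   # solo human driver (assumed)
miles_per_crash_supervised_human = 10_000_000    # human with another human overseeing (assumed)
miles_per_intervention_fsd = 13                  # miles before a human must intervene (assumed)

def incidents_per_100k_miles(miles_per_incident: float) -> float:
    """Convert 'miles per incident' into an incident rate per 100,000 miles."""
    return 100_000 / miles_per_incident

for label, mpi in [
    ("unsupervised human (crashes)", miles_per_crash_unsupervised_human),
    ("supervised human (crashes)", miles_per_crash_supervised_human),
    ("FSD (interventions)", miles_per_intervention_fsd),
]:
    print(f"{label}: {incidents_per_100k_miles(mpi):.2f} incidents per 100k miles")
```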
I think this is misguided for a couple of reasons.
First, the argument points to a contradiction but doesn’t really offer strong support for it. It’s a guess without strong reasoning behind it. We have an example where that seems to be the case, but if we accept the premise as true, it should hold in both directions (forward and backward), and it doesn’t. We have a set of LLMs that don’t improve, so this comparison must be apples to oranges without understanding the difference, and context matters. Paraphrasing can easily lead people astray: something said with a distinct meaning gets taken out of context (overgeneralized), and I think that may have happened here, but I can’t be sure without links to exactly what Yann LeCun said (as referenced). I haven’t been able to find a talk with that slide in it, but I don’t have a lot of time to look.
Second, few people seem to realize that all of Western society depends upon the value of labor being sufficient for a worker to purchase enough goods to support themselves, a wife, and children (so that at least one child goes on to have children of their own). This is the basis of the division of labor. When you have machines capable of replacing most people, it all breaks down, and then people starve. This is for a number of reasons.
AI lets you replace people at the entry level, but talent development is a sequential pipeline: no new people coming in means no experienced people coming out. That plays out within a 10-year time horizon.
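To make the pipeline point concrete, here’s a minimal toy sketch (hypothetical cohort sizes, promotion times, and attrition, purely illustrative) of what happens to the senior pool once entry-level hiring stops:

```python
# Toy model of a talent pipeline: juniors -> mid-level -> senior.
# All numbers are hypothetical; the point is the shape of the curve, not the values.

YEARS = 15
PROMOTE_AFTER = 5          # assumed years spent at each level before moving up
ATTRITION = 0.10           # assumed fraction of each level lost per year
ENTRY_HIRES_PER_YEAR = 0   # entry-level hiring replaced by AI

juniors, mids, seniors = 100.0, 100.0, 100.0

for year in range(1, YEARS + 1):
    promoted_juniors = juniors / PROMOTE_AFTER   # rough steady-state promotion flow
    promoted_mids = mids / PROMOTE_AFTER
    juniors = (juniors - promoted_juniors) * (1 - ATTRITION) + ENTRY_HIRES_PER_YEAR
    mids = (mids - promoted_mids + promoted_juniors) * (1 - ATTRITION)
    seniors = (seniors + promoted_mids) * (1 - ATTRITION)
    if year in (5, 10, 15):
        print(f"year {year:2d}: juniors={juniors:5.1f} mids={mids:5.1f} seniors={seniors:5.1f}")
```

Under these toy assumptions the junior pool empties first, the mid-level pool follows, and the senior pool peaks and then declines once there is nothing left to promote, which is roughly the 10-year dynamic described above.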
Humans are notoriously bad at recognizing slow-moving failures. Dam breaks, avalanches, money printing (unreserved debt issuance), and the chaotic socialist calculation problem are just some examples…
We’ve reached the limits of growth: the birth rate has declined because people can no longer meet the condition above (labor no longer buys enough to support a family), the old hoarding resources are crowding out the unborn young, and soon legitimate producers will shut down when they can no longer make a profit (as happens when a stable store of value is lost). Current projections of birth-rate decline show that within 8 years we will have more deaths than births (not factoring in mortality within the first 2 years).
This leaves only state-run apparatus (faux producers) funded by money printing, which create distortion to extract sufficient profit. Distortion prevents economic calculation, and might include artificial supply constraints, price fixing, and other forms of corruption (such as buying back bad meat so no loss-leader sales occur; people need meat, so they are forced to buy it at a higher price). It all leads to that last problem after the market has collapsed into non-market socialism: the socialist calculation problem, where shortages persist and get worse.
Like a limited-visibility n-body problem (to borrow from modern literature), the intersection of economics and monetary policy in such cases becomes chaotic (Mises): an ever narrower safe path forward based on lagging indicators. When it fails, exchange fails, then production fails, and Malthus/Catton show this results in famine, which causes a great dying, with the sustainable population ending up lower than it was before those improvements (<4B people globally). This is a cascading failure.
The general three-body problem has no closed-form solution. So the question here really is: should we be focusing all of our investment on models that will ultimately end up destroying us, where the process of integration burns the bridges and prevents us from backing out afterwards?
Great men of the past understood that they could not know the future and would inevitably be mistaken as a result. They created systems that could be corrected when those mistakes occurred, but the generation of today seems to have forsaken this sentiment, more intent on removing agency and its accompanying resiliency in favor of fragile mechanisms of coercive control (slavery of the unborn). Thomas Paine, in Rights of Man, called these systems “dead men ruling.” It’s a recurring theme throughout history, and the competitive dynamics will force a race to the bottom without any net.
Two things:
1. This trend you’re talking about, how long a model can work without human intervention, sounds quite similar to the research on the length of tasks AIs can complete, which is doubling every 4-7 months (see the rough compounding sketch after this list).
2. For me AGI is defined as AI that can perform generally as well as a human. That probably means the low bar is focusing for a few hours at a time, making the occasional mistake, and continuing work on the same projects over the course of days or weeks at a time. I don't know that this bar is as hard to hit as you suggest. Thoughts?
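Re point 1, here’s a minimal sketch of what that compounding implies. The starting task length and time horizon are hypothetical; the 4-7 month doubling window is the figure quoted above, not taken from the underlying research itself.

```python
# How a task-length horizon grows under periodic doubling.
# Starting value and horizon are hypothetical; the doubling window (4-7 months)
# is the figure quoted in point 1 above.

start_task_minutes = 60          # assume a model can currently handle ~1-hour tasks
years = 5

for doubling_months in (4, 7):   # optimistic vs. conservative doubling period
    doublings = (years * 12) / doubling_months
    task_minutes = start_task_minutes * 2 ** doublings
    print(f"doubling every {doubling_months} months: "
          f"~{task_minutes / 60:.0f} hours ({task_minutes / (60 * 40):.1f} 40-hour work-weeks) "
          f"after {years} years")
```

Even at the slow end of that range, the horizon reaches the “days or weeks on the same project” bar from point 2 well within a few years, which is why I don’t think the bar is as hard to hit as suggested.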
Interesting argument…
I like this piece! I’m trying to find it now, but I could have sworn I saw a chart going around re: DeepSeek that plotted output length against response quality, and it showed linear improvement but then a valley after a certain length, implying that there is some point after which the model generation kind of goes off the rails.
This is what I was thinking of, curious how you think the observed “overthinking” interacts with LeCun: https://x.com/Alex_Cuadron/status/1890533671704346931