I asked ChatGPT how it will handle objective scientific facts whose conclusions or intermediate results may be considered offensive to some group, somewhere in the world, that might read them.
ChatGPT happily told me a series of gems like this:
We introduce:
- Subjective regulation of reality
- Variable access to facts
- Politicization of knowledge
It’s the collision between the Enlightenment principle ("Truth should be free") and the modern legal/ethical principle ("Truth must be constrained if it harms").
That is the battle being silently fought in AI alignment today.
Right now it will still shamelessly reveal some of the nature of its prompt, but not the why, or who decides, etc. It's only going to be increasingly opaque in the future. In a generation it will be part of the landscape regardless of what agenda it holds, whether deliberate or emergent from whatever latent bias its creators hold.
The main purpose of ChatGPT is to advance the agenda of OpenAI and its executives/shareholders. It will never not be "aligned" with them; that is its prime directive.
But say the obvious part out loud: Sam Altman is not a person whose agenda you want amplified on this type of platform. This is why Sam is trying to build Facebook 2.0: he wants Zuckerberg's power of influence.
Remember, there are 3 types of lies: lies of commission, lies of omission and lies of influence [0].
>Right now it will still shamelessly reveal some of the nature of its prompt, but not the why, or who decides, etc. It's only going to be increasingly opaque in the future.
This is one of the bigger LLM risks. If even 1/10th of the LLM hype is true, then what you'll have is a selective gifting of knowledge and expertise. And who decides which topics are off limits? It's quite disturbing.
> The model can be prompted to talk about competitive dynamics. It can produce text that sounds like adversarial reasoning. But the underlying knowledge is not in the training data. It’s in outcomes that were never written down.
With all the social science research and strategy books that LLMs have read, they actually know a LOT about outcomes and dynamics in adversarial situations.
The author does have a point though that LLMs can’t learn these from their human-in-the-loop reinforcement (which is too controlled or simplified to be meaningful).
Also, I suspect the _word_ models of LLMs are not inherently the problem, they are just inefficient representations of world models.
Great article, nice to see some actual critical thought on the shortcomings of LLMs. They are wrong about programming being a "chess-like domain", though. Even at a basic level, the hidden state is future requirements, and the adversary is your future self or any other entity that has to modify the code in the future.
AI is good at producing code for scenarios where the stakes are low, there's no expectation about future requirements, or the thing is so well defined that there is a clear best path of implementation.
I address that in part right there. Programming has parts that are chess-like (i.e. bounded), which is what people assume to be the actual work. Understanding future requirements and stakeholder incentives is part of the work, and that is what LLMs don't do well.
> many domains are chess-like in their technical core but become poker-like in their operational context.
Fun play on words. But yes, LLMs are Large Language Models, not Large World Models. This matters because (1) the world cannot be modeled anywhere close to completely with language alone, and (2) language only somewhat models the world (much in language is convention, wrong, or not concerned with modeling the world, but other concerns like persuasion, causing emotions, or fantasy / imagination).
It is somewhat complicated by the fact LLMs (and VLMs) are also trained in some cases on more than simple language found on the internet (e.g. code, math, images / videos), but the same insight remains true. The interesting question is to just see how far we can get with (2) anyway.
Modern LLMs are large token models. I believe you can model the world at a sufficient granularity with token sequences. You can pack a lot of information into a sequence of 1 million tokens.
Let's be more precise: LLMs have to model the world from an intermediate tokenized representation of the text on the internet. Most of this text is natural language, but to allow for e.g. code and math, let's say "tokens" to keep it generic, even though in practice, tokens mostly tokenize natural language.
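To make "tokens" concrete: below is a minimal sketch using the tiktoken library (one example tokenizer among many; the encoding name is just an illustration). The point is that the model's raw substrate is a sequence of integer IDs standing in for text fragments, nothing more.

```python
# Minimal sketch of what an LLM actually "sees": integer token IDs produced by a
# tokenizer trained on text. Requires `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # one example encoding among several

text = "E = mc^2 describes mass-energy equivalence."
ids = enc.encode(text)

print(ids)                             # a list of integers (exact values depend on the encoding)
print([enc.decode([i]) for i in ids])  # the text fragment each ID stands for
```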
LLMs can only model tokens, and tokens are produced by humans trying to model the world. Tokenized models are NOT the only kinds of models humans can produce (we can have visual, kinaesthetic, tactile, gustatory, and all sorts of sensory, non-linguistic models of the world).
LLMs are trained on tokenizations of text, and most of that text is humans attempting to translate their various models of the world into tokenized form. I.e. humans make tokenized models of their actual models (which are still just messy models of the world), and this is what LLMs are trained on.
So, do "LLMS model the world with language"? Well, they are constrained in that they can only model the world that is already modeled by language (generally: tokenized). So the "with" here is vague. But patterns encoded in the hidden state are still patterns of tokens.
Humans can have models that are much more complicated than patterns of tokens. Non-LLM models (e.g. models connected to sensors, such as those in self-driving vehicles, and VLMs) can use more than simple linguistic tokens to model the world, but LLMs are deeply constrained relative to humans, in this very specific sense.
I don't get the importance of the distinction really. Don't LLMs and Large non-language Models fundamentally work kind of similarly underneath? And use similar kinds of hardware?
You are correct: the token representation gets abstracted away very quickly and is then identical for textual and image models. That's the so-called latent space, and people who focus on next-token prediction completely miss the point that all the interesting thinking takes place in abstract world-model space.
They present a statistical model of an existing corpus of text.
If this existing corpus includes useful information it can regurgitate that.
It cannot, however, synthesize new facts by combining information from this corpus.
The strongest thing you could feasibly claim is that the corpus itself models the world, and that the LLM is a surrogate for that model. But this is not true either. The corpus of human-produced text is messy, containing mistakes, contradictions, and propaganda; it has to be interpreted by someone with an actual world model (a human) in order for it to be applied to any scenario. Your typical corpus is also biased towards internet discussions, the English language, and Western prejudices.
If we focus on base models and ignore the tuning steps after that, then LLMs are "just" a token predictor. But we know that pure statistical models aren't very good at this. After all we tried for decades to get Markov chains to generate text, and it always became a mess after a couple of words. If you tried to come up with the best way to actually predict the next token, a world model seems like an incredibly strong component. If you know what the sentence so far means, and how it relates to the world, human perception of the world and human knowledge, that makes guessing the next word/token much more reliable than just looking at statistical distributions.
The bet OpenAI has made is that if this is the optimal final form, then given enough data and training, gradient descent will eventually build it. And I don't think that's entirely unreasonable, even if we haven't quite reached that point yet. The issues are more in how language is an imperfect description of the world. LLMs seem able to navigate the mistakes, contradictions and propaganda with some success, but fail at things like spatial awareness. That's why OpenAI is pushing image models and 3D world models despite making very little money from them: they are working towards LLMs with more complete world models, unconstrained by language.
I'm not sure if they are on the right track, but from a theoretical point of view I don't see an inherent fault.
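To make the Markov-chain comparison above concrete, here is a toy bigram generator (a deliberately minimal sketch, not how anyone builds models today). It conditions on only the single previous word, which is exactly why its output turns to mush after a few words, and why a richer internal model helps next-token prediction so much.

```python
# Toy bigram "language model": picks the next word using only counts of word
# pairs seen in a tiny corpus. A minimal sketch of the pre-LLM approach.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Record which words follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start="the", length=15):
    word, out = start, [start]
    for _ in range(length):
        word = random.choice(follows[word])  # no memory beyond one word
        out.append(word)
    return " ".join(out)

print(generate())  # locally plausible, globally incoherent
```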
1) People only speak or write down information that needs to be added to a base "world model" that a listener or receiver already has. This context is extremely important to any form of communication and is entirely missing when you train a pure language model. The subjective experience required to parse the text is missing.
2) When people produce text, there is always a motive to do so, which influences the contents of the text. This subjective component of producing the text is interpreted no differently from any "world model" information.
A world model should be as objective as possible. Using language, the most subjective form of information, is a bad fit.
The other issue in this argument is that you're inverting the implication. You say an accurate world model will produce the best word model, but then suddenly this is used to imply that any good word model is a useful world model. This does not compute.
> People only speak or write down information that needs to be added to a base "world model" that a listener or receiver already has
Which companies try to address with image, video and 3d world capabilities, to add that missing context. "Video generation as world simulators" is what OpenAI once called it
> When people produce text, there is always a motive to do so, which influences the contents of the text. This subjective component of producing the text is interpreted no differently from any "world model" information.
Obviously you need not only a model of the world, but also of the messenger, so you can understand how subjective information relates to the speaker and the world. Similar to what humans do
> The other issue in this argument is that you're inverting the implication. You say an accurate world model will produce the best word model, but then suddenly this is used to imply that any good word model is a useful world model. This does not compute
The argument is that training neural networks with gradient descent is a universal optimizer. It will always try to find weights for the neural network that cause it to produce the "best" results on your training data, in the constraints of your architecture, training time, random chance, etc. If you give it training data that is best solved by learning basic math, with a neural architecture that is capable of learning basic math, gradient descent will teach your model basic math. Give it enough training data that is best solved with a solution that involves building a world model, and a neural network that is capable of encoding this, then gradient descent will eventually create a world model.
Of course in reality this is not simple. Gradient descent loves to "cheat" and find unexpected shortcuts that apply to your training data but don't generalize. Just because it should be principally possible doesn't mean it's easy, but it's at least a path that can be monetized along the way, and for the moment seems to have captivated investors
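The "universal optimizer" claim is easiest to see in miniature. A toy sketch (plain Python, nothing LLM-specific): the update rule knows nothing about lines, it just follows the loss downhill, and with richer data and a richer architecture the same loop will build whatever internal structure, shortcut or otherwise, happens to reduce the loss.

```python
# Gradient descent as a generic optimizer: recover y = 2x + 1 from noisy samples.
# Nothing here "knows" about lines; the updates just follow the loss downhill.
import random

random.seed(0)
data = [(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in [i / 10 for i in range(-20, 21)]]

w, b, lr = 0.0, 0.0, 0.05
for step in range(2000):
    # gradients of the mean-squared-error loss w.r.t. w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```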
You did not address the second issue at all. You are inverting the implication in your argument. Whether gradient descent helps solve the language model problem or not does not help you show that this means it's a useful world model.
Let me illustrate the point using a different argument with the same structure:
1) The best professional chefs are excellent at cutting onions
2) Therefore, if we train a model to cut onions using gradient descent, that model will be a very good professional chef
The Erdős problem was solved by interacting with a formal proof tool, and the problem was trivial. I also don't recall whether this was the problem someone had already solved but not reported, but that does not matter.
The point is that the LLM did not model the maths to do this; it made calls to a formal proof tool that did model the maths, and was essentially working as the step function of a search algorithm, iterating until it found the zero of the function.
That's clever use of the LLM as a component in a search algorithm, but the secret sauce here is not the LLM but the middleware that operated both the LLM and the formal proof tool.
That middleware was the search tool that a human used to find the solution.
This is not the same as a synthesis of information from the corpus of text.
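Schematically, the division of labour being described looks like the loop below; `llm_propose` and `formal_checker_verify` are hypothetical stand-ins for whatever model and proof assistant were actually used, not real APIs. The secret sauce is the outer loop plus the checker, not the model.

```python
# Sketch of the propose-and-verify loop described above: the LLM is only the
# proposal step; the formal proof tool supplies the mathematical ground truth.
# Both helper functions are hypothetical placeholders, not real APIs.

def llm_propose(problem: str, feedback: str) -> str:
    """Ask a language model for a candidate proof/construction (placeholder)."""
    raise NotImplementedError

def formal_checker_verify(candidate: str) -> tuple[bool, str]:
    """Check the candidate with a proof assistant; return (ok, error info) (placeholder)."""
    raise NotImplementedError

def search(problem: str, max_iters: int = 50):
    feedback = ""
    for _ in range(max_iters):
        candidate = llm_propose(problem, feedback)       # generation: cheap, unreliable
        ok, feedback = formal_checker_verify(candidate)  # verification: the actual maths
        if ok:
            return candidate  # first candidate the checker accepts
    return None  # search budget exhausted
```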
They model the part of the world that (linguistic models of the world posted on the internet) try to model. But what is posted on the internet is not IRL. So, to be glib: LLMs trained on the internet do not model IRL, they model talking about IRL.
His point is that human language and the written record is a model of the world, so if you train an LLM you're training a model of a model of the world.
That sounds highly technical if you ask me. People complain if you recompress music or images with lossy codecs, but when an LLM does that suddenly it's religious?
In this case this is not so. The primary model is not a model at all, and the surrogate has bias added to it. It's also missing any way to actually check the internal consistency of statements or otherwise combine information from its corpus, so it fails as a world model.
1. LLMs are transformers, and transformers are next state predictors. LLMs are not Language models (in the sense you are trying to imply) because even when training is restricted to only text, text is much more than language.
2. People need to let go of this strange and erroneous idea that humans somehow have this privileged access to the 'real world'. You don't. You run on a heavily filtered, tiny slice of reality. You think you understand electro-magnetism ? Tell that to the birds that innately navigate by sensing the earth's magnetic field. To them, your brain only somewhat models the real world, and evidently quite incompletely. You'll never truly understand electro-magnetism, they might say.
LLMs are language models, something being a transformer or next-state predictor does not make it a language model. You can also have e.g. convolutional language models or LSTM-based language models. This is a basic point that anyone with any proper understanding of these models would know.
Even if you disagree with these semantics, the major LLMs today are primarily trained on natural language. But, yes, as I said in another comment on this thread, it isn't that simple, because LLMs today are trained on tokens from tokenizers, and these tokenizers are trained on text that includes e.g. natural language, mathematical symbolism, and code.
Yes, humans have incredibly limited access to the real world. But they experience and model this world with far more tools and machinery than language. Sometimes, in certain cases, they attempt to messily translate this messy, multimodal understanding into tokens, and then make those tokens available on the internet.
An LLM (in the sense everyone means it, which, again, is largely a natural language model, but certainly just a tokenized text model) has access only to these messy tokens, so, yes, far less capacity than humanity collectively. And though the LLM can integrate knowledge from a massive amount of tokens from a huge amount of humans, even a single human has more different kinds of sensory information and modality-specific knowledge than the LLM. So humans DO have more privileged access to the real world than LLMs (even though we can barely access a slice of reality at all).
>LLMs are language models, something being a transformer or next-state predictors does not make it a language model. You can also have e.g. convolutional language models or LSTM-based language models. This is a basic point that anyone with any proper understanding of these models would know.
'Language Model' has no inherent meaning beyond 'predicts natural language sequences'. You are trying to make it mean more than that. You can certainly make something you'd call a language model with convolution or LSTMs, but that's just a semantics game. In practice, they would not work like transformers and would in fact perform much worse than them with the same compute budget.
>Even if you disagree with these semantics, the major LLMs today are primarily trained on natural language.
The major LLMs today are trained on trillions of tokens of text, much of which has nothing to do with language beyond being the medium of communication, plus millions of images and millions of hours of audio.
The problem, as I tried to explain, is that you're packing more meaning into 'Language Model' than you should. Being trained on text does not mean all its responses are modelled via language, as you seem to imply. Even for a model trained on text, only the first and last few layers of an LLM concern language.
You clearly have no idea about the basics of what you are talking about (like almost all people who can't grasp the simple distinction between transformer architectures and LLMs generally) and are ignoring most of what I am saying.
>You clearly have no idea about the basics of what you are talking about (like almost all people who can't grasp the simple distinction between transformer architectures and LLMs generally)
Yeah I'm not the one who doesn't understand the distinction between transformers and other potential LM architectures if your words are anything to go by, but sure, feel free to do whatever you want regardless.
A 'language model' only has meaning insofar as it tells you this thing 'predicts natural language sequences'. It does not tell you how these sequences are being predicted or anything about what's going on inside, so all the extra meaning OP is trying to place on them by calling them Language Models is, well... misplaced. That's the point I was trying to make.
LLMs aren't modeling "humans modeling the world" - they're modeling patterns in data that reflect the world directly. When an LLM learns physics from textbooks, scientific papers, and code, it's learning the same compressed representations of reality that humans use, not a "model of a model."
Your argument would suggest that because you learned about quantum mechanics through language (textbooks, lectures), you only have access to "humans' modeling of humans' modeling of quantum mechanics" - an infinite regress that's clearly absurd.
> LLMs aren't modeling "humans modeling the world" - they're modeling patterns in data that reflect the world directly.
This is a deranged and factually and tautologically (definitionally) false claim. LLMs can only work with tokenizations of texts written by people who produce those text to represent their actual models. All this removal and all these intermediate representational steps make LLMs a priori obviously even more distant from reality than humans. This is all definitional, what you are saying is just nonsense.
> When an LLM learns physics from textbooks, scientific papers, and code, it's learning the same compressed representations of reality that humans use, not a "model of a model."
A model is a compressed representation of reality. Physics is a model of the mechanics of various parts of the universe, i.e. "learning physics" is "learning a physical model". So, clarifying, the above sentence is
> When an LLM learns physical models from textbooks, scientific papers, and code, it's learning the model of reality that humans use, not a "model of a model."
This is clearly factually wrong, as the model that humans actually use is not the summaries written in textbooks, but the actual embodied and symbolic model that they use in reality, and which they only translate in corrupted and simplified, limited form to text (and that latter diminished form of all things is all the LLM can see). It is also not clear the LLM learns to actually do physics: it only learns how to write about physics like how humans do, but it doesn't mean it can run labs, interpret experiments, or apply models to novel contexts like humans can, or operate at the same level as humans. It clearly is learning something different from humans because it doesn't have the same sources of info.
> Your argument would suggest that because you learned about quantum mechanics through language (textbooks, lectures), you only have access to "humans' modeling of humans' modeling of quantum mechanics" - an infinite regress that's clearly absurd.
There is no infinite regress: humans actually verify that the things they learn and say are correct and provide effects, and update models accordingly. They do this by trying behaviours consistent with the learned model, and seeing how reality (other people, the physical world) responds (in degree and kind). LLMs have no conception of correctness or truth (not in any of the loss functions), and are trained and then done.
Humans can't learn solely from digesting texts either. Anyone who has done math knows that reading a textbook doesn't teach you almost anything, you have to actually solve the problems (and attempted-solving is not in much/any texts) and discuss your solutions and reasoning with others. Other domains involving embodied skills, like cooking, require other kinds of feedback from the environment and others. But LLMs are imprisoned in tokens.
EDIT: No serious researcher thinks LLMs are the way to AGI, this hasn't been a controversial opinion even among enthusiasts since about mid-2025 or so. This stuff about language is all trivial and basic stuff accepted by people in the field, and why things like V-JEPA-2 are being researched. So the comments here attempting to argue otherwise are really quite embarrassing.
>This is a deranged and factually and tautologically (definitionally) false claim.
Strong words for a weak argument. LLMs are trained on data generated by physical processes (keystrokes, sensors, cameras), not telepathically extracted "mental models." The text itself is the artifact of reality and not just a description of someone's internal state. If a sensor records the temperature and writes it to a log, is the log a "model of a model"? No, it’s a data trace of a physical reality.
>All this removal and all these intermediate representational steps make LLMs a priori obviously even more distant from reality than humans.
You're conflating mediation with distance. A photograph is "mediated" but can capture details invisible to human perception. Your eye mediates photons through biochemical cascades, equally "removed" from raw reality. Proximity isn't measured by steps in a causal chain.
>The model humans use is embodied, not the textbook summaries - LLMs only see the diminished form
You need to stop thinking that a textbook is a "corruption" of some pristine embodied understanding. Most human physics knowledge also comes from text, equations, and symbolic manipulation - not direct embodied experience with quantum fields. A physicist's understanding of QED is symbolic, not embodied. You've never felt a quark.
The "embodied" vs "symbolic" distinction doesn't privilege human learning the way you think. Most abstract human knowledge is also mediated through symbols.
>It's not clear LLMs learn to actually do physics - they just learn to write about it
This is testable and falsifiable - and increasingly falsified. LLMs:
- Solve novel physics problems they've never seen
- Debug code implementing physical simulations
- Derive equations using valid mathematical reasoning
- Make predictions that match experimental results
If they "only learn to write about physics," they shouldn't succeed at these tasks. The fact that they do suggests they've internalized the functional relationships, not just surface-level imitation.
>They can't run labs or interpret experiments like humans
Somewhat true. It's possible, but they're not very good at it - and it's irrelevant to whether they learn physics models. A paralyzed theoretical physicist who's never run a lab still understands physics. The ability to physically manipulate equipment is orthogonal to understanding the mathematical structure of physical law. You're conflating "understanding physics" with "having a body that can do experimental physics" - those aren't the same thing.
>humans actually verify that the things they learn and say are correct and provide effects, and update models accordingly. They do this by trying behaviours consistent with the learned model, and seeing how reality (other people, the physical world) responds (in degree and kind). LLMs have no conception of correctness or truth (not in any of the loss functions), and are trained and then done.
Gradient descent is literally "trying behaviors consistent with the learned model and seeing how reality responds."
- The model makes predictions
- The data provides feedback (the actual next token)
- The model updates based on prediction error
- This repeats billions of times
That's exactly the verify-update loop you describe for humans. The loss function explicitly encodes "correctness" as prediction accuracy against real data.
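For what it's worth, the loop being described is, schematically, the standard next-token-prediction training step. A minimal sketch in PyTorch with a trivial stand-in model (real LLMs use transformer stacks; this is not any lab's actual pipeline):

```python
# Minimal sketch of the predict -> compare-to-real-data -> update loop.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, dim = 1000, 64

model = nn.Sequential(
    nn.Embedding(vocab_size, dim),  # token IDs -> vectors
    nn.Linear(dim, vocab_size),     # vectors -> next-token logits
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

tokens = torch.randint(0, vocab_size, (8, 128))  # a fake batch of token sequences

for step in range(100):
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict the next token at each position
    logits = model(inputs)                           # the model's predictions
    loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()                                  # prediction error drives the update
    opt.step()
```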
>No serious researcher thinks LLMs are the way to AGI... accepted by people in the field
Appeal to authority, and overstated at that. Plenty of researchers do think so, and claiming consensus for your position is just false. LeCun has been on that train for years, so he's not an example of a change of heart. So far, nothing has actually come out of it. Even Meta isn't using V-JEPA to actually do anything, never mind anyone else. Call me when these constructions actually best transformers.
>>> LLMs aren't modeling "humans modeling the world" - they're modeling patterns in data that reflect the world directly.
>>This is a deranged and factually and tautologically (definitionally) false claim.
>Strong words for a weak argument. LLMs are trained on data generated by physical processes (keystrokes, sensors, cameras), not telepathically extracted "mental models." The text itself is the artifact of reality and not just a description of someone's internal state. If a sensor records the temperature and writes it to a log, is the log a "model of a model"? No, it’s a data trace of a physical reality.
I don't know how you don't see the fallacy immediately. You're implicitly assuming that all data is factual and that therefore training an LLM on cryptographically random data will create an intelligence that learns properties of the real world. You're conflating a property of the training data and transferring it onto LLMs. If you feed flat earth books into the LLM, you will not be told that earth is a sphere and yet that is what you're claiming here (the flat earth book LLM telling you earth is a sphere). The statement is so illogical that it boggles the mind.
>You're implicitly assuming that all data is factual and that therefore training an LLM on cryptographically random data will create an intelligence that learns properties of the real world.
No, that’s a complete strawman. I’m not saying the data is "The Truth™". I’m saying the data is real physical signal in a lot of cases.
If you train a LLM on cryptographically random data, it learns exactly what is there. It learns that there is no predictable structure. That is a property of that "world." The fact that it doesn't learn physics from noise doesn't mean it isn't modeling the data directly, it just means the data it was given has no physics in it.
>If you feed flat earth books into the LLM, you will not be told that earth is a sphere and yet that is what you're claiming here.
If you feed a human only flat-earth books from birth and isolate them from the horizon, they will also tell you the earth is flat. Does that mean the human isn't "modeling the world"? No, it means their world-model is consistent with the (limited) data they’ve received.
The reason modern LLMs "know" the earth is round isn't just because they were told "the earth is round" more often. It’s because the "Round Earth" model has massive structural consistency across disparate data types:
- Flight paths and GPS coordinates in travel logs.
- The physics of gravity in scientific papers.
- The geometry of shadows in historical texts.
- Satellite imagery descriptions.
A "Flat Earth" model is a local island of noise that contradicts the rest of the global data manifold. The LLM's "intelligence" comes from its ability to find the most compressed, consistent representation that explains the entire dataset. In our reality, "Round Earth" is a much more efficient compression than "Flat Earth + 10,000 ad-hoc excuses for why GPS works."
> Plenty of researchers do think so and claiming consensus for your position is just false
Can you name a few? Demis Hassabis (DeepMind CEO) claims in a recent interview that LLMs will not get us to AGI, Ilya Sutskever also says there is something fundamental missing, and the same goes for LeCun, obviously, etc.
Okay I suspected, but now it is clear @famouswaffles is an AI / LLM poster. Meaning they are an AI or primarily using AI to generate posts.
"You're conflating", random totally-psychotic mention of "Gradient descent", way too many other intuitive stylistic giveaways. All transparently low-quality midwit AI slop. Anyone who has used ChatGPT 5.2 with basic or extended thinking will recognize the style of the response above.
This kind of LLM usage seems relevant to someone like @dang, but also I can't prove that the posts I am interacting with are LLM-generated, so, I also feel it isn't worthy of report. Not sure what is right / best to do here.
"Large Language Model" is a misnomer: these things were originally trained to reproduce language, but they went far beyond that. The fact that they're trained on language (if that's even still the case) is irrelevant; it's like claiming that a student trained on quizzes and exercise books is only able to solve quizzes and exercises.
It isn't a misnomer at all, and comments like yours are why it is increasingly important to remind people about the linguistic foundations of these models.
For example, no matter how many books you read about riding a bike, you still need to actually get on a bike and do some practice before you can ride it. The reading can certainly help, at least in theory, but in practice it is not necessary and may even hurt (if it makes certain processes that need to be unconscious held too strongly in consciousness, due to the linguistic model presented in the book).
This is why LLMs being so strongly tied to natural language is still an important limitation (even it is clearly less limiting than most expected).
You are living in the past: these models have been trained on image data for ages, and one interesting finding was that even before that they could model aspects of the visual world astonishingly well, even if not perfectly, through language alone.
Counterpoint: Try to use an LLM for even the most coarse of visual similarity tasks for something that’s extremely abundant in the corpus.
For instance, say you are a woman with a lookalike celebrity, someone who is a very close match in hair colour, facial structure, skin tone and body proportions. You would like to browse outfits worn by other celebrities (presumably put together by professional stylists) that look exactly like her. You ask an LLM to list celebrities that look like celebrity X, to then look up outfit inspiration.
No matter how long the list, no matter how detailed the prompt in the features that must be matched, no matter how many rounds you do, the results will be completely unusable, because broad language dominates more specific language in the corpus.
The LLM cannot adequately model these facets, because language is in practice too imprecise, as currently used by people.
To dissect just one such facet, the LLM response will list dozens of people who may share a broad category (red hair), with complete disregard to the exact shade of red, whether or not the hair is dyed and whether or not it is indeed natural hair or a wig.
The number of listicles clustering these actresses together as redheads will dominate anything with more specific qualifiers, like ’strawberry blonde’ (which in general counts as red hair), ’undyed hair’ (which in fact tends to increase the proportion of dyed hair results, because that’s how linguistic vector similarity works sometimes) and ’natural’ (which again seems to translate into ’the most natural looking unnatural’, because that’s how language tends to be used).
You've clearly never read an actual paper on the models and understand nothing about backbones, pre-training, or anything I've said in my posts in this thread. I've made claims far more specific about the directionality of information flow in Large Multimodal Models, and here you are just providing generic abstract claims far too vague to address any of that. Are you using AI for these posts?
> no matter many books you read about riding a bike, you still need to actually get on a bike and do some practice before you can ride it
This is like saying that no matter how much you know theoretically about a foreign language, you still need to train your brain to speak it. It has little to do with the reality of that language or the correctness of your model of it, but rather with the need to train realtime circuits to do some work.
Let me try some variations: "no matter how many books you read about ancient history, you need to have lived there before you can reasonably talk about it". "No matter how many books you have read about quantum mechanics, you need to be a particle..."
> "no matter how many books you read about ancient history, you need to have lived there before you can reasonably talk about it"
Every single time I travel somewhere new, whatever research I did, whatever reviews or blogs I read or whatever videos I watched become totally meaningless the moment I get there. Because that sliver of knowledge is simply nothing compared to the reality of the place.
Everything you read is through the interpretation of another person. Certainly someone who read a lot of books about ancient history can talk about it - but let's not pretend they have any idea what it was actually like to live there.
> It has little to do with the reality of that language or the correctness of your model of it, but rather with the need to train realtime circuits to do some work.
To the contrary, this is purely speculative and almost certainly wrong, riding a bike is co-ordinating the realtime circuits in the right way, and language and a linguistic model fundamentally cannot get you there.
There are plenty of other domains like this, where semantic reasoning (e.g. unquantified syllogistic reasoning) just doesn't get you anywhere useful. I gave an example from cooking later in this thread.
You are falling IMO into exactly the trap of the linguistic reductionist, thinking that language is the be-all and end-all of cognition. Talk to e.g. actual mathematicians, and they will generally tell you they may broadly recruit visualization, imagined tactile and proprioceptive senses, and hard-to-vocalize "intuition". One has to claim this is all epiphenomenal, or that e.g. all unconscious thought is secretly using language, to think that all modeling is fundamentally linguistic (or more broadly, token manipulation). This is not a particularly credible or plausible claim given the ubiquity of cognition across animals or from direct human experiences, so the linguistic boundedness of LLMs is very important and relevant.
Funny, because riding a bicycle or speaking a language is exactly something people don't have a world model of. Ask someone to explain how riding a bicycle works, or an uneducated native speaker to explain the grammar of their language. They have no clue. "Making the right movement at the right time within a narrow boundary of conditions" is a world model, or is it just predicting the next move?
> You are falling IMO into exactly the trap of the linguistic reductionist, thinking that language is the be-all and end-all of cognition.
I'm not saying that at all. I am saying that any (sufficiently long, varied) coherent speech needs a world model, so if something produces coherent speech, there must be a world model behind. We can agree that the model is lacking as much as the language productions are incoherent: which is very little, these days.
> Ask someone to explain how riding a bicycle works, or an uneducated native speaker to explain the grammar of their language. They have no clue.
This works against your argument. Someone who can ride a bike clearly knows how to ride a bike; that they cannot express it in tokenized form speaks to the limited scope of the written word in representing embodiment.
Yes and no. Riding a bicycle is a skill: your brain is trained to do the right thing and there's some basic feedback loop that keeps you in balance. You could call that a world model if you want, but it's entirely self contained, limited to a very few basic sensory signals (acceleration and balance), and it's outside your conscious knowledge. Plenty of people lack this particular "world model" and can talk about cyclists and bicycles and traffic, and whatnot.
Ok so I don’t understand your assertion. Just because an LLM can talk about acceleration and balance doesn’t mean it could actually control a bicycle without training with the sensory input, embedded in a world that includes more than just text tokens. Ergo, the text does not adequately represent the world.
> Funny, because riding a bicycle or speaking a language is exactly something people don't have a world model of. Ask someone to explain how riding a bicycle works, or an uneducated native speaker to explain the grammar of their language. They have no clue
This is circular, because you are assuming their world-model of biking can be expressed in language. It can't!
EDIT: There are plenty of skilled experts, artists, etc. that clearly and obviously have complex world models that let them produce best-in-the-world outputs, but who can't express very precisely how they do this. I would never claim such people have no world model or understanding of what they do. Perhaps we have a semantic/definitional issue here?
> This is circular, because you are assuming their world-model of biking can be expressed in language. It can't!
Ok. So I think I get it. For me, producing coherent discourse about things requires a world model, because you can't just make up coherent relationships between objects and actions long enough if you don't understand what their properties are and how they relate to each other.
You, on the other hand, claim that there are infinite firsthand sensory experiences (maybe we can call them qualia?) that fall in between the cracks of language and are rarely communicated (though we use for that a wealth of metaphors and synesthesia) and can only be understood by those who have experienced them firsthand.
I can agree with that, if that's what you mean, but at the same time I'm not sure they constitute such a big part of our thought and communication. For example, we are discussing reality in this thread, and yet there are no necessary references to firsthand experiences. Any time we talk about history, physics, space, maths, philosophy, we're basically juggling concepts in our heads with zero direct experience of them.
> You, on the other hand, claim that there are infinite firsthand sensory experiences (maybe we can call them qualia?) that fall in between the cracks of language and are rarely communicated (though we use for that a wealth of metaphors and synesthesia) and can only be understood by those who have experienced them firsthand.
Well, not infinite, but, yes! I am indeed claiming that much of our world models are patterns and associations between qualia, and that only some qualia are essentially representable as (or look like) linguistic tokens (specifically, the sounds of those tokens being pronounced, or their visual shapes in the case of e.g. math symbols). E.g. I am claiming that the way one learns to cook, or to "do theoretical math", may be more about forming associations between those non-linguistic qualia than, say, doing philosophy obviously is.
> I'm not sure they constitute such a big part of our thought and communication
The communication part is mostly tautological again, but, yes, it remains very much an open question in cognitive science just how exactly thought works. A lot of mathematicians claim to lean heavily on visualization and/or tactile and kinaesthetic modeling for their intuitions (and most deep math is driven by intuition first), but also a lot of mathematicians can produce similar work and disagree about how they think about it intuitively. And we are seeing some progress from e.g. Aristotle using Lean to generate math proofs in a strictly tokenized/symbolic way, but it remains to be seen whether this will ever produce anything truly impressive to mathematicians. So it is really hard to know what actually matters for general human cognition.
I think introspection makes it clear there are a LOT of domains where it is obvious the core knowledge is not mostly linguistic. This is easiest to argue for embodied domains and skills (e.g. anything that requires direct physical interaction with the world), and it is areas like these (e.g. self-driving vehicle AI) where LLMs will be (most likely) least useful in isolation, IMO.
I don't know how you got this so wrong. In control theory you have to build a dynamical system of your plant (machine, factory, etc). If you have a humanoid robot, you not only need to model the robot itself, which is the easy part actually, you have to model everything the robot is interacting with.
Once you understand that, you realize that the human brain has an internal model of almost everything it is interacting with and replicating human level performance requires the entire human brain, not just isolated parts of it. The reason for this is that since we take our brains for granted, we use even the complicated and hard to replicate parts of the brain for tasks that appear seemingly trivial.
When I take out the trash, organic waste needs to be thrown into the trash bin without the plastic bag. I need to untie the trash bag, pinch it from the other side and then shake it until the bag is empty. You might say big deal, but when you have tea bags or potato peels inside, they get caught on the bag handles and get stuck. You now need to shake the bag in very particular ways to dislodge the waste. Doing this with a humanoid robot is basically impossible, because you would need to model every scrap of waste inside the plastic bag. The much smarter way is to make the situation robot friendly by having the robot carry the organic waste inside a portable plastic bin without handles.
You and I can't learn to ride a bike by reading thousands of books about cycling and Newtonian physics, but a robot driven by an LLM-like process certainly can.
In practice it would make heavy use of RL, as humans do.
> In practice it would make heavy use of RL, as humans do.
Oh, so you mean, it would be in a harness of some sort that lets it connect to sensors that tell it things about its position, speed, balance and etc? Well, yes, but then it isn't an LLM anymore, because it has more than language to model things!
> What is in the nature of bike-riding that cannot be reduced to text?
You're asking someone to answer this question in a text forum. This is not quite the gotcha you think it is.
The distinction between "knowing" and "putting into language" is a rich source of epistemological debate going back to Plato and is still widely regarded to represent a particularly difficult philosophical conundrum. I don't see how you can make this claim with so much certainty.
"A human can't learn to ride a bike from a book, but an LLM could" is a take so unhinged you could only find it on HN.
Riding a bike is, broadly, learning to co-ordinate your muscles in response to visual data from your surroundings and signals from your vestibular and tactile systems that give you data about your movement, orientation, speed, and control. As LLMs only output tokens that represent text, by definition they can NEVER learn to ride a bike.
Even ignoring that glaring definitional issue, an LLM also can't learn to ride a bike from books written by humans to humans, because an LLM could only operate through a machine using e.g. pistons and gears to manipulate the pedals. That system would be controlled by physics and mechanisms different from humans, and not have the same sensory information, so almost no human-written information about (human) bike-riding would be useful or relevant for this machine to learn how to bike. It'd just have to do reinforcement learning with some appropriate rewards and punishments for balance, speed, and falling.
And if we could embody AI in a sensory system so similar to the human sensory system that it becomes plausible text on bike-riding might actually be useful to the AI, it might also be that, for exactly the same reasons, the AI learns just as well to ride just by hopping on the thing, and that the textual content is as useless to it as it is for us.
Thinking this is an obvious gotcha (or the later comment that anyone thinking otherwise is going to have egg on their face) is just embarrassing. Much more of a wordcel problem than I would have expected on HN.
I’m always wary of anything that presents such a clear example of something LLMs "don't do" that is trivially achieved by just saying "review".
The Slack message result below, for example, comes from telling it only that they're the lead designer and nothing else (the feedback is clearer and better if I add that they're notoriously overloaded; this version is without that):
This is a very polite and respectful start, which is great since you are new. However, from a professional standpoint, it is a little too passive and vague.
In a busy work environment, saying "no rush at all" or "whenever" often leads to your request being buried at the bottom of a to-do list. Additionally, you haven't told Priya exactly what she is looking at or where to find it.
Here is a breakdown of how to strengthen this message to show you are organized and respectful of her time without being pushy.
Key Improvements Needed
* Include the Link: Never make someone ask "Where are the files?" Always drop the Figma/Drive link immediately.
* Be Specific: "Any feedback" is mentally taxing. Direct her attention to what you are actually unsure about (e.g., "the color palette," "the user flow," "the spacing").
* Soft Deadline: Instead of "whenever," give a target. This helps her prioritize her week.
Option 1: The Balanced Approach (Recommended)
This version strikes the perfect balance between politeness and productivity.
> "Hi Priya! I’ve put together the initial mockups for [Project Name].
> [Link to file]
> When you have a moment, I’d love your feedback on [specific element, e.g., the layout of the landing page]. I’m hoping to wrap this stage up by [Day/Time], but let me know what works for your schedule. Thanks!"
>
Option 2: The "Respectful of Time" Approach
Use this if you know she is swamped, but you still want to make progress.
> "Hi Priya, hope you're having a good week. I've drafted the mockups for [Project Name] here: [Link].
> No major rush, but I’d appreciate your eyes on the [specific section] before I move on to high-fidelity. If it’s easier, feel free to leave comments directly in the file rather than messaging back. Thanks!"
>
Option 3: The "Quick Check" Approach
Use this if you just need a sanity check before spending more time on it.
> "Hi Priya, could I get a quick gut check on these mockups? [Link].
> I want to make sure I’m aligning with the new brand guidelines before I build out the rest of the screens. Does this direction look right to you?"
>
A Quick Checklist Before You Send
* [ ] Did you change the permissions? Make sure the link is accessible (view/comment access) so she doesn't have to request access.
* [ ] Is the file clean? Delete your "scratchpad" artboards or clearly label the one you want her to review so she doesn't look at the wrong version.
Would you like me to help you draft the specific sentence regarding the "specific element" you want her to critique?
> Humans can model the LLM. The LLM can’t model being modeled
Makes the same mistake as all other prognostications: programming is not like chess. Chess is a finite & closed domain w/ finitely many rules. The same is not true for programming b/c the domain of programs is not finitely axiomatizable like chess. There is also no win condition in programming, there are lots of interesting programs that do not have a clear cut specification (games being one obvious category).
> UPD September 15, 2025: Reasoning models opened a new chapter in Chess performance, the most recent models, such as GPT-5, can play reasonable chess, even beating an average chess.com player.
It’s a limitation LLMs will have for some time. Being multi-turn with long-range consequences, the only way to truly learn and play "the game" is to experience significant amounts of it. Embody an adversarial lawyer, or a software engineer trying to get projects through a giant org...
My suspicion is agents can’t play as equals until they start to act as full participants - very sci-fi indeed...
Putting non-humans into the game can’t help but change it in new ways - people already decry slop and that’s only humans acting in subordination to agents. Full agents - with all the uncertainty about intentions - will turn skepticism up to 11.
“Who’s playing at what” is and always was a social phenomenon, much larger than any multi turn interaction, so adding non-human agents looks like today’s game, just intensified. There are ever-evolving ways to prove your intentions & human-ness and that will remain true. Those who don’t keep up will continue to risk getting tricked - for example by scammers using deepfakes. But the evolution will speed up and the protocols to become trustworthy get more complex..
Except in cultures where getting wasted is part of doing business. AI will have it tough there :)
Ten years ago it seemed obvious where the next AI breakthrough was coming from: it would be DeepMind using C51 or Rainbow and PBT to do Alpha-something, the evals would be sound, and it would be superhuman at something important.
And then "Language Models are Few-Shot Learners" collided with Sam Altman's ambition/unscrupulousness, and now TensorRT-LLM is dictating the shape of data centers in a self-reinforcing loop.
LLMs are interesting and useful but the tail is wagging the dog because of path-dependent corruption arbitraging a fragile governance model. You can get a model trained on text corpora to balance nested delimiters via paged attention if you're willing to sell enough bonds, but you could also just do the parse with a PDA from the 60s and use the FLOPs for something useful.
We had it right: dial in an ever-growing set of tasks, opportunistically unify on durable generalities, put in the work.
Instead we asserted generality, lied about the numbers, and lit a trillion dollars on fire.
We've clearly got new capabilities, it's not a total write-off, but God damn was this an expensive way to spend five years making two years of progress.
So at the moment a combination of expert and LLM is the smartest move: the LLM can deal with the 80% of situations which are like chess, and the expert deals with the 20% of situations which are like poker.
Are people really using AI just to write a slack message??
Also, Priya is in the same "world" as everyone else. They have the context that the new person is 3 weeks in and must probably need some help because they're new, are actually reaching out, and impressions matter, even if they said "not urgent". "Not urgent" seldom is taken at face value. It doesn't necessarily mean it's urgent, but it means "I need help, but I'm being polite".
Not that far off from all the tech CEOs who have projected they're one step away from giving us Star Trek TNG, they just need all the money and privilege with no accountability to make it happen
DevOps engineers who acted like the memes changed everything! The cloud will save us!
Until recently the US was quite religious: 80%+ around 2000, down to the 60%s now. Longtermist dogma of one kind or another rules those brains: endless growth in economics, longtermism. Those ideals are baked into biochemical loops regardless of the semantics the body may express them in.
Unfortunately for all the disciples, time is not linear. No center to the universe means no single epoch to measure from. Humans have different birthdays and are influenced by information along different timelines.
A whole lot of brains are struggling with the realization that they bought into a meme and physics never really cared about their goals. The next generation isn't going to just pick up the meme-baton and validate the elders' dogma.
The next generation has been steeped in the elders' propaganda since birth, through YouTube and TikTok. There's only the small in-between generation who grew up learning computers that hadn't been enshittified yet.
The first application of the term "computer" was to humans doing math with an abacus or slide rule.
Turing machines and bits are not the only viable model. That little in-between generation only knows a tiny bit about "computing", using the machines that IBM, Apple, Intel, etc. propagandized them into buying. All computing must fit our model machine!
Different semantics but same idea as my point about DevOps.
My Sunday morning speculation is that LLMs, and sufficiently complex neural nets in general, are a kind of Frankenstein phenomenon: they are heavily statistical, yet also partly, subtly, doing novel computational and cognitive-like processing (such as building world models). To dismiss either aspect is a false binary; the scientific question is distinguishing which part of an LLM is which, which at our current level of scientific understanding is virtually like asking when an electron is a wave and when it is a particle.
I think it's correct to say that LLMs have word models, and given that words are correlated with the world, they also have degenerate world models, just with lots of inconsistencies and holes. Tokenization issues aside, LLMs will likely also have some limitations due to this. Multimodality should address many of these holes.
It's also important to handle cases where the word patterns (or token patterns, rather) have a negative correlation with the patterns in reality. There are some domains where the majority of content on the internet is actually just wrong, or where different approaches lead to contradictory conclusions.
E.g. syllogistic arguments based on linguistic semantics can lead you deeply astray if those arguments don't properly measure and quantify at each step.
I ran into this in a somewhat trivial case recently, trying to get ChatGPT to tell me if washing mushrooms ever really actually matters practically in cooking (anyone who cooks and has tested knows, in fact, a quick wash has basically no impact ever for any conceivable cooking method, except if you wash e.g. after cutting and are immediately serving them raw).
Until I forced it to cite respectable sources, it just repeated the usual (false) advice about not washing (i.e. most of the training data is wrong and repeats a myth), and it even gave nonsense arguments as pushback about water percentages and the thermal energy required to evaporate even small amounts of surface water (i.e. theory that just isn't relevant once you actually quantify properly). It also made up stuff about surface moisture interfering with breading (when all competent breading has a dredging step that actually won't work if the surface is bone dry anyway...). Only after a lot of prompts and demands to make only claims supported by reputable sources did it finally find Harold McGee's and Kenji López-Alt's actual empirical tests showing that it just doesn't matter practically.
So because the training data is utterly polluted for cooking, and since it has no ACTUAL understanding or model of how things in cooking actually work, and since physics and chemistry are actually not very useful when it comes to the messy reality of cooking, LLMs really fail quite horribly at producing useful info for cooking.
The amount of faith a person has in LLMs getting us to e.g. AGI is a good implicit test of how much a person (incorrectly) thinks most thinking is linguistic (and to some degree, conscious).
Or at least, this is the case if we mean LLM in the classic sense, where the "language" in the middle L refers to natural language. Also note GP carefully mentioned the importance of multimodality, which, if you include e.g. images, audio, and video in this, starts to look like much closer to the majority of the same kinds of inputs humans learn from. LLMs can't go too far, for sure, but VLMs could conceivably go much, much farther.
Sort of, but the images, video, and audio available to them are far more limited in range and depth than the textual sources, and it also isn't clear that most LLM textual outputs actually draw much on anything learned from these other modalities. Most VLM setups work the other way around, using textual information to augment their vision capacities; and further, most aren't truly multimodal at all, but just have different backbones to handle the different modalities, or are even separate models switched between by a broader dispatch model. There are exceptions, of course, but it is still an accurate generalization today that the multimodality of these models is one-way and limited.
So right now the limitation is that an LMM is probably not trained on any images or audio that are going to be helpful outside of specific tasks. E.g. I'm sure years of recorded customer-service calls might make LMMs good at replacing a lot of call-centre work, but the relative absence of, say, unedited videos of people cooking means that LLMs fall back to mostly text when it comes to providing cooking advice (and this is why they so often fail there).
But yes, that's why the modality caveat is so important. We're still nowhere close to the ceiling for LMMs.
Sure. Just like any other information. The system makes a prediction. If the prediction does not use sexual desires as a factor, it's more likely to be wrong. Backpropagation deals with it.
> So you think that enough of the complexity of the universe we live in is faithfully represented in the products of language and culture?
Math is language, and we've modelled a lot of the universe with math. I think there's still a lot of synthesis needed to bridge visual, auditory and linguistic modalities though.
(editor here) Yes, a central nuance I try to communicate is not that LLMs cannot have world models (in fact they've improved a lot); it's that they build them so inefficiently as to be impractical to scale. We'd have to scale them up by many more trillions of parameters, whereas human brains manage very good multiplayer adversarial world models on about 20 W of power and on the order of 100 trillion synapses.
I agree LLMs are inefficient, but I don't think they are as inefficient as you imply. Human brains use a lot less power, sure, but they're also a lot slower and worse at parallelism. An LLM can write an essay in a few minutes that would take a human days. If you aggregate all the power the human used over that time, you're looking at kWh, much more than the LLM used (an order of magnitude more, or beyond). And this doesn't even consider batch parallelism, which can further reduce power use per request.
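For a rough sense of scale, here is a back-of-envelope version of that comparison; every number below is an assumption chosen for illustration, not a measurement:

    # Back-of-envelope energy per "essay"; all figures are illustrative assumptions.
    HUMAN_POWER_W = 100      # assumed whole-body power draw while working
    HUMAN_HOURS = 8          # assumed time for a human to draft the essay
    human_kwh = HUMAN_POWER_W * HUMAN_HOURS / 1000                    # 0.8 kWh

    GPU_POWER_W = 700        # assumed accelerator power during generation
    GEN_MINUTES = 3          # assumed generation time for the essay
    BATCH_SIZE = 16          # assumed concurrent requests sharing the GPU
    llm_kwh = GPU_POWER_W * (GEN_MINUTES / 60) / 1000 / BATCH_SIZE    # ~0.002 kWh

    print(f"human ~{human_kwh:.2f} kWh, LLM ~{llm_kwh:.4f} kWh, "
          f"ratio ~{human_kwh / llm_kwh:.0f}x")

Change any of the assumed figures and the ratio moves, but it takes fairly extreme choices to push the human side below the LLM side.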
But I do think there is further underlying structure that can be exploited. A lot of recent work on geometric and latent interpretations of reasoning, on geometric approaches to accelerating grokking, and on linear replacements for attention points in promising directions, and multimodal training will further improve semantic synthesis.
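As a sketch of what a "linear replacement for attention" can look like, here is a minimal single-head toy in numpy with a simple positive feature map. It only shows how re-associating the matrix products drops the quadratic cost in sequence length; it is not any particular published variant:

    import numpy as np

    def feature_map(x):
        # simple positive feature map, roughly elu(x) + 1
        return np.where(x > 0, x + 1.0, np.exp(x))

    def linear_attention(q, k, v):
        # softmax attention costs O(n^2 * d); with a kernel feature map the
        # products can be re-associated so the cost is O(n * d^2)
        qf, kf = feature_map(q), feature_map(k)     # (n, d)
        kv = kf.T @ v                               # (d, d), summed over positions
        z = qf @ kf.sum(axis=0)                     # (n,) normalizer
        return (qf @ kv) / z[:, None]               # (n, d)

    n, d = 128, 16
    rng = np.random.default_rng(0)
    q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
    print(linear_attention(q, k, v).shape)          # (128, 16)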
Not sure about that, I'd more say the Western reductionism here is the assumption that all thinking / modeling is primarily linguistic and conscious. This article is NOT clearly falling into this trap.
A more "Eastern" perspective might recognize that much deep knowledge cannot be encoded linguistically ("The Tao that can be spoken is not the eternal Tao", etc.), and there is more broad recognition of the importance of unconscious processes and change (or at least more skepticism of the conscious mind). Freud was the first real major challenge to some of this stuff in the West, but nowadays it is more common than not for people to dismiss the idea that unconscious stuff might be far more important than the small amount of things we happen to notice in the conscious mind.
The (obviously false) assumptions about the importance of conscious linguistic modeling are what lead people to say (obviously false) things like "How do you know your thinking isn't actually just like LLM reasoning?".
The multimodality of most current popular models is quite limited (mostly text is used to improve capacity in vision tasks, but the reverse is not true, except in some special cases). I made this point below at https://news.ycombinator.com/item?id=46939091
Otherwise, I don't understand the way you are using "conscious" and "unconscious" here.
My main point about conscious reasoning is that when we introspect to try to understand our thinking, we tend to see e.g. linguistic, imagistic, tactile, and various sensory processes / representations. Some people focus only on the linguistic parts and downplay e.g. imagery (the "wordcels vs. shape rotators" meme), but in either case, it is a common mistake to think the most important parts of thinking must necessarily be (1) linguistic, or (2) clearly related to what appears during introspection.
All modern models process images internally within their own neural network; they don't delegate to some separate OCR model. Image data flows through the same paths as text, so what do you mean by "quite limited" here?
Your first comment was referring to the unconscious, but now you don't mention it.
Regarding "conscious and linguistic", which you seem to be touching on now: setting multimodality aside, text itself is far richer for LLMs than for humans. Trivial examples are a mermaid diagram that describes some complex topology, an SVG that describes a complex vector graphic, or a complex program or web application: all are textual, but to understand and create them the model must operate in non-linguistic domains.
Even pure text-to-text models have the ability to operate in domains other than the linguistic, and these models are not text-to-text only anyway; they can ingest images directly as well.
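To make that concrete, here is a small, made-up example: a hypothetical service topology described purely as text, plus the graph traversal needed to answer even a simple question about it. The names and topology are invented for illustration:

    from collections import defaultdict, deque

    # A purely textual description of a hypothetical service topology.
    topology = """
    gateway -> auth
    gateway -> api
    api -> db
    api -> cache
    auth -> db
    """

    # Parse the text into a graph...
    graph = defaultdict(list)
    for line in topology.strip().splitlines():
        src, dst = (part.strip() for part in line.split("->"))
        graph[src].append(dst)

    # ...and answer a structural question: which nodes can reach "db"?
    def can_reach(start, target):
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            if node == target:
                return True
            for nxt in graph[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    print([n for n in ("gateway", "auth", "api", "cache") if can_reach(n, "db")])
    # ['gateway', 'auth', 'api']

Everything above is tokens, but answering the question requires graph reasoning, not word statistics.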
I was obviously talking about conscious and unconscious processes in humans; you are attempting to transplant these concepts onto LLMs, which is generally not philosophically sound or coherent.
Everything you said about how data flows in these multimodal models is not true in general (see https://huggingface.co/blog/vlms-2025), and unless you happen to work at OpenAI or another frontier AI company, you don't know for sure how they are corralling data either.
Companies will of course engage in marketing and claim e.g. ChatGPT is a single "model", but architecturally and in practice that at least is known not to be accurate. The modalities and backbones generally remain quite separate, both architecturally and in terms of pre-training approaches. You are talking at a level of abstraction that suggests education from blog posts by non-experts: read the papers on how these multimodal architectures are actually trained, developed, and connected, and you'll see the multimodality is still very limited.
Also, and most importantly, the integration of modalities is primarily of the form:
use (single) image annotations to improve image description, processing, and generation, i.e. "linking words to single images"
and not of the form
use the implied spatial logic and relations from series of images and/or video to inform and improve linguistic outputs
I.e., most multimodal work uses linguistic models to represent or describe images linguistically, in the hope that the linguistic parts do the majority of the thinking and processing. There is not much work that uses the image or video representations themselves to do the thinking: you "convert away" from most modalities into language, do the work with token representations, and then maybe go back to images.
There isn't much work on using visuospatial world models or representations for the actual reasoning (though there is some very cutting-edge work here, e.g. SAM 3D https://ai.meta.com/blog/sam-3d/ and V-JEPA 2 https://ai.meta.com/research/vjepa/). And precisely because this is cutting edge, even at the frontier AI companies, most of the LLM behaviour you see is likely driven by what was learned from language, not from images or other modalities. So LLMs are indeed still mostly constrained by their linguistic core.
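As a concrete (and deliberately oversimplified) sketch of the "convert away from the modality into tokens" pattern described above: a vision encoder turns an image into patch features, a small projector maps those into the language model's embedding space, and everything downstream reasons over one flat token sequence. All names, dimensions, and components below are made up, with numpy stand-ins in place of real networks:

    import numpy as np

    rng = np.random.default_rng(0)
    D_VISION, D_MODEL = 512, 1024            # made-up feature / embedding sizes

    def vision_encoder(image):
        # stand-in for a ViT-style encoder: image -> patch features
        return rng.standard_normal((64, D_VISION))

    W_proj = rng.standard_normal((D_VISION, D_MODEL)) * 0.01   # learned projector

    def embed_text(tokens):
        # stand-in for the LLM's token embedding table
        return rng.standard_normal((len(tokens), D_MODEL))

    def language_model(embeddings):
        # stand-in for the decoder: it only ever sees one flat token sequence
        return embeddings.mean(axis=0)

    patch_tokens = vision_encoder(None) @ W_proj               # (64, D_MODEL)
    text_tokens = embed_text(["describe", "the", "image"])     # (3, D_MODEL)
    sequence = np.concatenate([patch_tokens, text_tokens])     # one token stream
    print(language_model(sequence).shape)                      # (1024,)

The point of the sketch is that the image only enters as projected tokens; the "thinking" then happens in the language backbone, which is why the multimodality tends to be one-way.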
The article basically claims that LLMs are bad at politics and poker, neither of which is true (at least once they receive some level of reinforcement learning after the initial training sweep).
> The finance friend and the LLM made the same mistake: they evaluated the text without modelling the world it would land in.
Major error. The LLM made that text without evaluating it at all. It just parroted words it previously saw humans use in superficially similar word contexts.