Google's Aletheia works like this, and instead of degrading it keeps getting better. I get what you're trying to say, though. The less world knowledge you provide the LLM, which it otherwise lacks, the worse its outputs will be.
> I get what you're trying to say, though. The less world knowledge you provide the LLM, which it otherwise lacks, the worse its outputs will be
... No, I wasn't trying to say that at all. I'm saying that the tokens an LLM produces seem to work much worse as inputs than the tokens a human would produce, regardless of what they actually appear to say.