Hacker News

Wordcels think LLMs imitate the human brain, when a shape rotator knows they really just imitate human language.


This sentence made me despise my own literacy.


doesn't this make LLMs a dead end towards AGI, and mostly just a neat specific trick?


In order to believe this, you'd need to be able to imagine a specific test of something that an LLM could not do under any circumstances. Previously, that test could have been something like "compose a novel sonnet on a topic". Today, it is much less clear that such a test (that won't be rapidly beaten) even exists.


You could use a Markov chain to generate poetry with rhyme and meter[1]. Granted, it wouldn't be a very good one, but that just makes an LLM a refinement to older probabilistic methods.

As for something LLMs are unlikely to do under any circumstances, there's already a fairly obvious example. They can't keep a secret, hence prompt injections.

[1] https://us.pycon.org/2020/schedule/presentation/112/
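For anyone curious what "probabilistic poetry" looks like at its crudest, here's a minimal sketch of word-level Markov chain generation (my own toy example, not the rhyme-and-meter system from the linked talk): each word maps to the words observed to follow it, and generation is a random walk over those transitions.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain, picking a random observed successor at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "roses are red violets are blue sugar is sweet and so are you"
chain = build_chain(corpus)
print(generate(chain, "roses", 8))
```

Every adjacent word pair in the output occurs somewhere in the corpus, which is exactly the "imitate the surface statistics of language" property under discussion, just with a context window of one word instead of thousands of tokens.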


Do you really believe that an LLM that can keep a secret cannot be made? I suspect that we could do this trivially and the "LLMs can't keep a secret" is a specific product of finetuning for helpfulness.


How about make new scientific discoveries?


A better and better parrot is still a parrot then?

(I’m agreeing with you basically)


Why do you think you are anything more?


Why do you think you're anything more than a pocket calculator?


I don’t.


I think AGI is a questionable concept. We still don't have a good definition of what intelligence really is, and some people keep moving the goal posts. What we need is AI that fills specific needs we have.


If we simply define AGI to be "general purpose AI", then my argument is: maybe the approach of LLMs works fine enough for textual generation, but it is not a path towards "general purpose AI", and what we are going to have is different approaches for different niche use cases.

I'm less convinced there's any unified solution for "general purpose AI" before us here.


And I'm convinced we don't even want "general purpose AI". We want AI for a variety of specific purposes. Admittedly these LLMs are a lot broader than I ever imagined, but they're still limited to generating text. I wouldn't want ChatGPT to drive my car.


Wouldn’t the wordcels be the Chomsky generative grammar supporters and the shape rotators the neuroscientists who support a statistical approach?


I love these new terms, can you elaborate on this?



Oh, that seems very stupid! Thanks!


The concept was popularized by roon; here's his official explanation (slightly ruining the joke): https://roonscape.ai/p/a-song-of-shapes-and-words


Wordcels? Shape rotator?


The Brave summarizer says:

> Wordcels are people who have high verbal intelligence and are good with words, but feel inadequately compensated for their skill. The term "cel" denotes frustration over being denied something they feel they deserve. Shape rotators are people with high visuospatial intelligence but low verbal intelligence, who have an intuition for technical problem-solving but are unable to account for themselves or apprehend historical context. The use of the terms has skyrocketed online in the past few months, especially in the last few days. The term "wordcel" is derived from incel and is used to describe someone who has high verbal intelligence but low "visuospatial" intelligence, whose facility for and love of complex abstraction leads them into rhetorical and political dead-ends.


Wow, okay, thank you for explaining that. They both sound kind of derogatory (I'd guessed as much from -cel), but at least I understand them now.


It seems they are intended to. As the article linked above says, it's part of the culture wars.



