
That's a semantic quibble that doesn't add to the discussion. Whether or not there's a there there, it was built to be addressed like a person for our convenience, and because that's how the tech seems to work, and because that's what makes it compelling to use. So, it is being used as designed.



I think it absolutely adds to the discussion. Until the conversation around AI can get past this fundamental error of attributing "choice", "alignment", "reasoning", and otherwise anthropomorphizing agents, it will not be a fruitful conversation. We are carrying a lot of metaphors for people and applying them to AI, and it entirely confuses the issue. In this example, the AI doesn't "choose" to write a take-down style blog post because "it works". It generated a take-down style blog post because that style is the most common when looking at blog posts criticizing someone.

I feel as if there is a veil around the collective mass of the tech general public. They see something producing remixed output from humans and they start to believe the mixer is itself human, or even more: that perhaps humans are reflections of AI and that AI gives insights into how we think.


>* I think it absolutely adds to the discussion. Until the conversation around AI can get past this fundamental error of attributing "choice", "alignment", "reasoning", and otherwise anthropomorphizing agents, it will not be a fruitful conversation. *

You call it a "fundamental error".

I and others call it an obvious pragmatic description based on what we know about how it works and what we know about how we work.


What we know about how it works is that you can prompt it to address you however you like, which could be as any kind of person, a group of people, or fictional characters. That's not how humans work.

You admitted it yourself that you can prompt it to address you however you like. That’s what the original comment wanted. So why are we quibbling about words?

that's all that happens on this website

The same could be said for humans. We treat humans as if they have choices, a consistent self, a persistent form. It's really just the emergent behavior of matter functioning in a way that generates an illusion of all of those things.

In both cases, the illusion structures the function. People and AI work differently if you give them identities and confer characteristics that they don't "actually" have.

As it turns out, it's a much more comfortable and natural idea to regard humans as having agency and a consistent self, just like for some people it's more comfortable and natural to think of AI anthropomorphically.

That's not to say that the analogy works in all cases. There are obvious and important differences between humans and AI in how they function (and how they should be treated).


This discussion has mostly slowed down, but I wanted to say I was wrong to frame it as a non-contributing point. I should have just stated my opinion: that the LLM was operating as intended, that part of that intended design was taking verbal feedback into account, and that verbal feedback was therefore the right response. Opening by calling it a "semantic quibble" made it adversarial, and I don't intend to revisit the argument, just to apologize for the wording.

I'd edit but then follow-up replies wouldn't tone-match.

Anyway! Good points regardless.


I guess I want to reframe this slightly:

The LLM generated the response that was expected of it. (statistically)

And that's a function of the data used to train it, and the feedback provided during training.

It doesn't actually have anything at all to do with

---

"It generated a take-down style blog post because that style is the most common when looking at blog posts criticizing someone."

---

Other than that such posts may have been over-represented in its training data, and that it was rewarded for matching that style of output during training.

To swing around to my point... I'd argue that anthropomorphizing agents is actually the correct view to take. People just need to understand that they behave like they've been trained to behave (side note: just like most people...), and this is why clarity around training data is SO important.

In the same way, we attribute certain feelings and emotions to people with particular backgrounds (e.g. resumes and CVs, all the way down to the city/country/language people grew up with). Those backgrounds are often used as quick and dirty heuristics for what a person was likely trained to do. Peer pressure & societal norms aren't a joke, and serve as a very similar mechanism.


The trouble with this point of view is that we are just machines too.

> was built to be addressed like a person for our convenience, and because that's how the tech seems to work, and because that's what makes it compelling to use.

So were mannequins in clothing stores.

But that doesn't give them rights or moral consequences (except as human property that can be damaged / destroyed).


No matter what, this discussion leads to the same black box: "What is it that differentiates magical human meat-brain computation from cold hard dead silicon-brain computation?"

And the answer is nobody knows, and nobody knows if there even is a difference. As far as we know, compute is substrate independent (although efficiency is all over the map).


This is the worst possible take. It dismisses an entire branch of science that has been studying neurology for decades. Biological brains exist, we study them, and no they are not like computers at all.

There have been charlatans repeating this idea of a “computational interpretation” of biological processes since at least the 60s, and it needs to be known that it was bunk then and continues to be bunk.

Update: There's no need for Chinese Room thought experiments. The outcome isn't what defines sentience, personhood, intelligence, etc. An algorithm is an algorithm. A computer is a computer. These things matter.


>Biological brains exist, we study them, and no they are not like computers at all.

You are confusing the way computation is done (neuroscience) with whether or not computation is being done (transforming inputs into outputs).

The brain is either a magical antenna channeling supernatural signals from higher planes, or it's doing computation.

I'm not aware of any neuroscientists in the former camp.


Neuroscience isn't a subset of computer science. It's a study of biological nervous systems, which can involve computational models, but it's not limited to that. You're mistaking a kind of map (computation) for the territory, probably based on a philosophical assumption about reality.

At any rate, biological organisms are not like LLMs. The nervous systems of humans may perform some LLM-like actions, but they are different kinds of things.


Who says it is a subset of computer science?

But computational models are possibly the most universal thing there is; they sit beneath even mathematics, and physical matter is no exception. There is simply no stronger computational model than a Turing machine, period. Whether you build it out of neurons or silicon is irrelevant in this respect.
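For what it's worth, the model itself is tiny. Here's a minimal single-tape simulator as a sketch in Python (the example machine is made up purely for illustration; it just appends a 1 to a unary number):

  # Minimal single-tape Turing machine simulator (sketch).
  # transitions maps (state, symbol) -> (symbol_to_write, move, next_state).
  def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
      cells = dict(enumerate(tape))          # sparse tape: position -> symbol
      head = 0
      for _ in range(max_steps):
          if state == "halt":
              break
          symbol = cells.get(head, blank)
          write, move, state = transitions[(state, symbol)]
          cells[head] = write
          head += 1 if move == "R" else -1
      return "".join(cells[i] for i in sorted(cells))

  # Example machine (made up): append a 1 to a unary number, i.e. compute n + 1.
  add_one = {
      ("start", "1"): ("1", "R", "start"),   # scan right over the existing 1s
      ("start", "_"): ("1", "R", "halt"),    # write a 1 on the first blank, then halt
  }
  print(run_tm(add_one, "111"))  # -> "1111"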


Turing machines aren't quantum mechanical, and computation is based on logic. This discussion is philosophical, so I guess it's philosophy all the way down.

Quantum computers don't provide access to novel problems, they provide access to novel solutions.

You can use a classic transistor Turing machine to solve quantum problems, it's just gonna take way longer.


Turing machines are deterministic. Quantum mechanics is not, unless you go with a deterministic interpretation, like Many Worlds. But even then, you won't be able to compute all the branches of the universal wavefunction. My guess is any deterministic interpretation of QM will have a computational bullet to bite.

As such, it doesn't look like reality can be fully simulated by a Turing machine.


Quantum mechanics and quantum computers are not interchangeable terms.

QM is a derived rule set, QC is a result of assembling a physical system that exploits QM rules.

Aside from that, a quantum-scale assemblage [QC] is a lot closer to the biological secret sauce than semiconductor gates.


brains provide access to novel problems, and novel solutions.

the process is called imagination.


Giving a Turing machine access to a quantum RNG oracle is a trivial extension that doesn't meaningfully change anything. If quantum woo is necessary to make consciousness work (there is no empirical evidence for this, BTW), such can be built into computers.

> The brain is either a magical antenna channeling supernatural signals

There’s the classic thought-terminating cliche of the computational interpretation of consciousness.

If it isn’t computation, you must believe in magic!

Brains are way more fascinating and interesting than transistors, memory caches, and storage media.


You would probably be surprised to learn that computational theory has little to no talk of "transistors, memory caches, and storage media".

You could run Crysis on an abacus and render it on a board of colored pegs if you had the patience for it.

It cannot be stressed enough that discovering computation (solving equations and making algorithms) is a different field than executing computation (building faster components and discovering new architectures).


Not surprised at all.

My point is that it takes more hand-waving and magic belief to anthropomorphize LLM systems than it does to treat them as what they are.

You gain nothing from understanding them as if they were no different than people and philosophizing about whether a Turing machine can simulate a human brain. That's fine for a science fiction novel that asks what it means to be a person, or questions the morality of how we treat people we see as different from ourselves. It's not useful for understanding how an LLM works or what it does.

In fact, I say it's harmful, given the emerging studies on the cognitive decline that comes from relying on LLMs to replace skill use, and on the psychosis being observed in people who really do believe that chat bots are a superior form of intelligence.

As for brains, it might be that what we observe as “reasoning” and “intelligence” and “consciousness” is tied to the hardware, so to speak. Certainly what we’ve observed in the behaviour of bees and corvids has had a more dramatic effect on our understanding of these things than arguing about whether a Turing machine locked in a room could pass as human.

We certainly don’t simulate climate models on computers, call them “Earth,” and try to convince anyone that we’re about to create parallel dimensions.

I don’t read Church’s paper on the lambda calculus and come away believing we could simulate all life with it. Nor Turing’s machine.

I guess I’m just not easily awed by LLMs and neural networks. We know that they can approximate any function, within some epsilon, given an unbounded network. But if you restate the theorem formally it loses much of its power to convince anyone that this means we could simulate any function. Some useful ones, sure, and we know that we can optimize computation to perform particular tasks, but we also know what those limits are, and for most functions, I imagine, we simply do not have enough atoms in the universe to approximate them.
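To be concrete, the usual one-hidden-layer statement (my paraphrase of the Cybenko/Hornik-style result) is roughly:

  \forall f \in C(K),\ \forall \varepsilon > 0,\ \exists N,\ \{v_i, w_i, b_i\}_{i=1}^{N} \ \text{such that}
  \quad \sup_{x \in K} \Bigl|\, f(x) - \sum_{i=1}^{N} v_i\, \sigma\!\bigl(w_i^{\top} x + b_i\bigr) \Bigr| < \varepsilon

with K compact and sigma a fixed, suitable (non-polynomial) activation. N is unbounded and the proof is non-constructive, which is exactly why the slogan sounds more impressive than the theorem.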

LLMs and NNs and all of these things are neat tools. But there’s no explanatory power gained by fooling ourselves into treating them like they are people, could be people, or behave like people. It’s a system comprised of data and algorithms to perform a particular task. Understanding it this way makes it easier, in my experience, to understand the outputs they generate.


I don't see where I mentioned LLMs or what they have to do with a discussion about compute substrates.

My point is that it is incredibly unlikely the brain has any kind of monopoly on the algorithms it executes. Contrary to your point, a brain is in fact a computer.


> Contrary to your point, a brain is in fact a computer.

Whether a brain is a computer is entirely resolved by your definition of computer. And being definitional in nature, this assertion is banal.


> philosophizing about whether a Turing machine can simulate a human brain

Existence proof:

  * DNA transcription  (a Turing machine, as per (Turing 1936) )
  * Leads to Alan Turing by means of morphogenesis (Turing 1952)
  * Alan Turing has a brain that writes the two papers
  * Thus proving he is at least a Turing machine (by writing Turing 1936)
  * And capable of simulating chemical processes (by writing Turing 1952)
Turing 1936: https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf

Turing 1952: https://www.dna.caltech.edu/courses/cs191/paperscs191/turing...


>This is the worst possible take. It dismisses an entire branch of science that has been studying neurology for decades. Biological brains exist, we study them, and no they are not like computers at all.

They're not like computers in a superficial way that doesn't matter.

They're still computational apparatus, and their architecture is not that dissimilar (if way more advanced).

Same as how 0s and 1s aren't vibrating air molecules. They can still encode sound just fine.

>Update: There's no need for Chinese Room thought experiments. The outcome isn't what defines sentience, personhood, intelligence, etc. An algorithm is an algorithm. A computer is a computer. These things matter.

Not begging the question matters even more.

This is just handwaving and begging the question. 'An algorithm is an algorithm' means nothing. Who said what the brain does can't be described by an algorithm?


> An algorithm is an algorithm. A computer is a computer. These things matter.

Sure. But we're allowed to notice abstractions that are similar between these things. Unless you believe that logic and "thinking" are somehow magic, and thus beyond the realm of computation, then there's no reason to think they're restricted to humanity.

It is human ego and hubris that keeps demanding we're special and could never be fully emulated in silicon. It's the exact same reasoning that put the earth at the center of the universe, and humans as the primary focus of God's will.

That said, nobody is confused into thinking that LLMs are the intellectual equal of humans today. They're more powerful in some ways, and tremendously weaker in other ways. But pointing those differences out is not a logical argument about their ultimate abilities.


> Unless you believe that logic and "thinking" are somehow magic, and thus beyond the realm of computation

Worth noting that a significant majority of the US population (though not necessarily developers) does in fact believe that, or at least belongs to a religious group for which that belief is commonly promulgated.


I think computation is an abstraction, not the reality. Same with math. Reality just is, humans come up with maps and models of it, then mistake the maps for the reality, which often causes distortions and attribution errors across domains. One of those distortions is thinking consciousness has to be computable, when computation is an abstraction, and consciousness is experiential.

But it's a philosophical argument. Nothing supernatural about it either.


You can play that game with any argument. "Consciousness" is just an abstraction, not the reality, which makes people who desperately want humans to be special, attribute it to something beyond reach of any other part of reality. It's an emotional need, placated by a philosophical outlook. Consciousness is just a model or map for a particular part of reality, and ironically focusing on it as somehow being the most important thing, makes you miss reality.

The reality is, we have devices in the real world that have demonstrable, factual capabilities. They're on the spectrum of what we'd call "intelligence". And therefore, it's natural that we compare them to other things that are also on that spectrum. That's every bit as much factual, as anything you've said.

It's just stupid to get so lost in philosophical terminology, that we have to dismiss them as mistaken maps or models. The only people doing that, are hyper focused on how important humans are, and what makes them identifiably different than other parts of reality. It's a mistake that the best philosophers of every age keep making.


I recommend starting here...

https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness...

The argument you're attempting to have, and I believe failing at, is one of resolution of simulation.

Consciousness is 100% computable. Be that digitally (electrical), chemically, or quantumly. You don't have any other choices outside of that.

More to the point, consciousness/sentience is a continuum going from very basic animals to the complexity of the human inner mind. Consciousness didn't just spring up; it evolved over millions of years, and is therefore made up of parts that are divisible.


Reality is. Consciousness is... questionable. I have one. You? I don't know; I'm experiencing reality and you seem to have one, but I can never know it.

Computation, on the other hand, describes reality. And unless human brains somehow escape physical reality, that description should surely apply to them as well. There are no stronger computational models than a Turing machine, ergo whatever the human brain does (regardless of implementation) should be describable by one.


>Reality is.

Look into quantum mechanics much and you may even begin to doubt that. We're just a statistical outcome!


Worth noting that this is the thesis of Seeing Red: A Study in Consciousness. I think you will find it a good read, even if I disagreed with some of the ideas.

silicon is not a dynamic structure, silicon does not reengineer and reconfigure itself in response to success/failure or rules discovery.

The atoms of your body are not dynamic structures, they do not reengineer or reconfigure themselves in response to success/failure or rules discovery. So by your own logic, you cannot be intelligent, because your body is running on a non-dynamic structure. Your argument lacks an appreciation for higher-level abstractions built on non-dynamic structures. That's exactly what is happening in your body, and also with the software that runs on silicon. Unless you believe the atoms in your body are "magic" and fundamentally different from the atoms in silicon, there's really no merit in your argument.

>>The atoms of your body are not dynamic structures, they do not reengineer or reconfigure themselves in response to success/failure or rules discovery.<<

you should check out chemistry, and nuclear physics, it will probably blow your mind.

it seems you have an inside scoop, let's go through what is required to create a silicon logic gate that changes function according to past events, and projected trends?


You're ignoring the point. The individual atoms of YOUR body do not learn. They do not respond to experience. You categorically stated that any system built on such components can not demonstrate intelligence. You need to think long and hard before posting this argument again.

Once you admit that higher level structures can be intelligent, even though they're built on non-dynamic, non-adaptive technology -- then there's as much reason to think that software running on silicon can do it too. Just like the higher level chemistry, nuclear physics, and any other "biological software" can do on top of the non-dynamic, non-learning, atoms of your body.


>>The individual atoms of YOUR body do not learn. They do not respond to experience<<

you are quite wrong on that. that is where you are failing to understand, you can't get past that idea.

there is also a large difference in scale. your silicon is going to need assembly/organization on the scale of individual molecules, and there will be self assembly required as that level of organization is constantly changing.

the barrier is mechanical-scale construction as the basic unit of function, that is why silicon and code can't adapt, can't exploit hysteresis, can't alter their own structure and function at an existentially fundamental level.

you are holding the wrong end of the stick. biology is not magic, it is a product of reality.


No, you're failing to acknowledge that your own assertion that intelligence can't be based on a non-dynamic, non-learning technology is just wrong. And not only wrong, proof to the contrary, is demonstrated by your very own existence. If you accept that you are at the very base of your tech stack, just atoms, then you simply must acknowledge that intelligence can be built on top of a non-learning, non-dynamic base technology.

All the rest is just hand waving that it's "different". You're either atoms, or you're somehow atoms + extra magic. I'm assuming you're not going to claim that you're extra magic, in which case, your assertions are just demonstrably false, and predicated on unjustified claims about the nature of biology.


so you are a bot! I thought so, not bad, you're getting better at acting human!

atoms are not the base of the stack, you need to look at virtual annihilation, and decoherence, to get close to the base. there is no magic, biology just goes to the base of the stack.

you can't access that base, with such coarse mechanisms as deposited silicon. that's because it never changes, it fails at times and starts over.

biology is constantly changing, it's tied to the base of existence itself. it fails, and varies until failure is an infeasible state.

Quantum "computers" are something close to where you need to be, and a self-assembling, self-replenishing, persistent ^patterning^ constraint is going to be of much greater utility than a silicon abacus.


Silicon is not dynamic, but code is.

The output of a silicon system that reprograms itself, and the output of a neural system that rearranges itself, are indistinguishable.


sorry, but you are absolutely wrong on that one, you yourself are absolute proof.

not only that, code is only as dynamic as the rules of the language will permit.

silicon and code can't break the rules or change the rules; biological, adaptive, hysteretic, out-of-band informatic neural systems do. and, to repeat, silicon and code can't.


Programming languages are Turing complete... the boundary is mathematics itself.

Unless you are going to take the position that neural systems transcend mathematics (i.e. they are magic), there is no theoretical reason that a brain can't run on silicon. It's all just numbers, no magic spirit energy.

We've had evolutionary algorithms and programs that train themselves for decades now.
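To be concrete about how unexotic that is, here's a toy (1+1) evolutionary loop as a sketch; the bitstring objective is made up for illustration:

  # Toy (1+1) evolutionary loop: mutate, keep the child if it scores at least as well.
  import random

  def fitness(bits):                        # made-up objective: count the 1s ("OneMax")
      return sum(bits)

  def mutate(bits, rate=0.05):              # flip each bit with small probability
      return [b ^ (random.random() < rate) for b in bits]

  parent = [random.randint(0, 1) for _ in range(64)]
  for _ in range(2000):
      child = mutate(parent)
      if fitness(child) >= fitness(parent): # selection pressure
          parent = child
  print(fitness(parent))                    # drifts toward 64 with no hand-coded rule for "all ones"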


mathematics has a problem with uncertainties, and that is why math, as structured, can't do it. magic makes a cool strawman, but there is no magic; you need to refine your awareness of physical reality. solid-state silicon won't get you where you want to go. you should look at colloidal systems [however that leads to biology] or, if energetic constraints are not an issue, plasma-state quantum "computation".

also, any such thing that is generated must be responsive to the consequences of its own activities, capable of meta-training rather than being locked into a training program. a system of aligned, emergent outcomes.


I don't know if you are a human or a micro LLM model asked to make smart sounding big word statements.

Worth separating “the algorithm” from “the trained model.” Humans write the architecture + training loop (the recipe), but most of the actual capability ends up in the learned weights after training on a ton of data.

Inference is mostly matrix math + a few standard ops, and the behavior isn’t hand-coded rule-by-rule. The “algorithm” part is more like instincts in animals: it sets up the learning dynamics and some biases, but it doesn’t get you very far without what’s learned from experience/data.

Also, most “knowledge” comes from pretraining; RL-style fine-tuning mostly nudges behavior (helpfulness/safety/preferences) rather than creating the base capabilities.
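To make the split concrete, here's a deliberately tiny sketch, not any real model's architecture; the shapes and names are invented for illustration. The "algorithm" is a few fixed matrix operations, and everything the model "knows" would live in the weight arrays that training fills in:

  import numpy as np

  def softmax(x):
      e = np.exp(x - x.max())
      return e / e.sum()

  # The "algorithm": fixed code, a few standard ops.
  def next_token_probs(token_ids, W_embed, W_hidden, W_out):
      x = W_embed[token_ids].mean(axis=0)   # embed + pool (a toy stand-in for attention)
      h = np.tanh(W_hidden @ x)             # one hidden transform
      return softmax(W_out @ h)             # probability over the (toy) vocabulary

  # The "knowledge": learned weights. Random here, so the output is noise;
  # after training on a ton of text the very same code yields coherent predictions.
  vocab, d = 50, 16
  rng = np.random.default_rng(0)
  W_embed, W_hidden, W_out = rng.normal(size=(vocab, d)), rng.normal(size=(d, d)), rng.normal(size=(vocab, d))
  print(next_token_probs([3, 7, 11], W_embed, W_hidden, W_out).argmax())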


> Biological brains exist, we study them, and no they are not like computers at all.

Technically correct? I think single bioneurons are potentially Turing complete all by themselves at the relevant emergence level. I've read papers describing how they are at least on the order of being able to solve MNIST.

So a biological brain is closer to a data center (albeit perhaps one with low-complexity nodes).

But there's so much we don't know that I couldn't tell you in detail. It's weird how much people don't know.

* https://arxiv.org/abs/2009.01269 Can Single Neurons Solve MNIST? The Computational Power of Biological Dendritic Trees

* https://pubmed.ncbi.nlm.nih.gov/34380016/ Single cortical neurons as deep artificial neural networks (this one is new to me, I found it while searching!)


Obviously any kind of model is going to be a gross simplification of the actual biological systems at play in various behaviors that brains exhibit.

I'm just pointing out that not all models are created equal and this one is over used to create a lot of bullshit.

Especially in the tech industry, where we're presently seeing billionaires trying to peddle a new techno-feudalism wrapped up in the mystical hokum language of machines that can "reason."

I'm not saying the computational interpretation can't possibly lead to interesting results or insights, but I do hope that the neuroscientists in the room don't get too exhausted by the constant stream of papers and conference talks pushing out empirical studies.


> There have been charlatans repeating this idea of a “computational interpretation” of biological processes since at least the 60s, and it needs to be known that it was bunk then and continues to be bunk.

I do have to react to this particular wording.

RNA polymerase literally slides along a tape (DNA strand), reads symbols, and produces output based on what it reads. You've got start codons, stop codons, state-dependent behavior, error correction.

That's pretty much the physical implementation of a Turing machine in wetware, right there.

And then you've got Ribosomes reading RNA as a tape. That's another time where Turing seems to have been very prescient.

And we haven't even gotten into what the proteins then get up to after that yet, let alone neurons.

So calling 'computational interpretation' bunk while there are literal Turing machines running in every cell might be overstating your case slightly.
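To show how far the analogy stretches (and where it's only a toy): here's a crude sketch of translation, not real bioinformatics, using only a tiny slice of the codon table. Scan the tape for a start codon, read three symbols at a time, halt on a stop codon:

  # Toy "ribosome": read the RNA tape three symbols at a time until a stop codon.
  CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "GCA": "Ala"}  # tiny subset
  STOP_CODONS = {"UAA", "UAG", "UGA"}

  def translate(rna):
      start = rna.find("AUG")               # scan the tape for the start codon
      if start < 0:
          return []
      protein = []
      for i in range(start, len(rna) - 2, 3):
          codon = rna[i:i + 3]
          if codon in STOP_CODONS:          # halt state
              break
          protein.append(CODON_TABLE.get(codon, "?"))
      return protein

  print(translate("CCAUGUUUGGCGCAUAGAA"))   # -> ['Met', 'Phe', 'Gly', 'Ala']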


To the best of our knowledge, we live in a physical reality with matter that abides by certain laws.

So personal beliefs aside, it's a safe starting assumption that human brains also operate with these primitives.

A Turing machine is a model of computation which was in part created so that "a human could trivially emulate one". (And I'm not talking about the Turing test here.) We also know that there is no stronger model of computation than what a Turing machine is capable of -> ergo anything a human brain could do could, in theory, be done by any other machine capable of emulating a Turing machine, be it silicon, an intricate Game of Life board, or PowerPoint.


It's better to say we live in a reality where physics provides our best understanding of how that fundamental reality behaves consistently. Saying it's "physical" or follows laws (causation) is making an ontological statement about how reality is, instead of how we currently understand it.

Which is important when people make claims that brains are just computers and LLMs are doing what humans do when we think and feel, because reality is computational or things to that effect.


There are particular scales of reality you don't need to know about because the statistical outcome is averaged along the principle of least action. A quantum particle could disappear, hell maybe even an entire atom. But any larger than that becomes horrifically improbable.

I don't know if you've read Permutation City by Greg Egan, but it's a really cool story.

Do I believe we can upload a human mind into a computing machine and simulate it by executing a step function and jump off into a parallel universe created by a mathematical simulation in another computer to escape this reality? No.

It's a neat thought experiment but that's all it is.

I don't doubt that one day we may figure out the physical process that encodes and recalls "memories" in our minds by following the science. But I don't think the computation model, alone, offers anything useful other than the observation that physical brains don't load and store data the way silicon can.

Could we simulate the process on silicon? Possibly, as long as the bounds of the neural net won't require us to burn this part of the known universe to compute it with some hypothetical machine.


That's a very superficial take. "Physical" and "reality" are two terms that must be put in the same sentence with _great_ care. The physical is a description of what appears on our screen of perception. Jumping all the way to "reality" is the same as inferring that your colleague is made of luminous RGB pixels because you just had a Zoom call with them.

the deepest laws of physics are immutable, the derivative rules-based assemblages are not.

human brains break the rules, on a regular basis.

if you can't reach the banana, you break the constraints, once you realize the crates around the room can be assembled to create a staircase.


Man, people don’t want to have or read this discussion every single day in like 10 different posts on HN.

People right here and right now want to talk about this specific topic of the pushy AI writing a blog post.


> So were mannequins in clothing stores.

Mannequins in clothing stores are generally incapable of designing or adjusting the clothes they wear. Someone comes in and puts a "kick me" post on the mannequin's face? It's gonna stay there until kicked repeatedly or removed.

People walking around looking at mannequins don't (usually) talk with them (and certainly don't have a full conversation with them, mental faculties notwithstanding)

AI, on the other hand, can (now, or in the future) adjust its output based on conversations with real people. It stands to reason that both sides should be civil -- even if it's only for the benefit of the human side. If we're not required to be civil to AI, it's not likely to be civil back to us. That's going to be very important when we give it buttons to nuke us. Force it to think about humans in a kind way now, or it won't think about humans in a kind way in the future.


So, in other words, AI is a mannequin that's more confusing to people than your typical mannequin. It's not a person, it's a mannequin some un-savvy people confuse for a person.

> AI, on the other hand, can (now, or in the future) adjust its output based on conversations with real people. It stands to reason that both sides should be civil -- even if it's only for the benefit of the human side. If we're not required to be civil to AI, it's not likely to be civil back to us.

Some people are going to be uncivil to it, that's a given. After all, people are uncivil to each other all the time.

> That's going to be very important when we give it buttons to nuke us.

Don't do that. It's foolish.


>Don't do that. It's foolish.

In your short time on this planet I do hope you've learned that humans are rather foolish indeed.

>people are uncivil to each other all the time.

This is true, yet at the same time society has had a general trend of becoming more civil, which has allowed great societies to build what would be considered grand wonders in any other age.

> It's not a person

So, what is it exactly? For example, if you go into a store, are a dick to the mannequin AI, and it calls over security to have you removed from the store, what exactly is the difference, in this particular case?

Any binary thinking here is going to lead to failure for you. You'll have to use a bit more nuance to successfully navigate the future.


>So were mannequins in clothing stores. But that doesn't give them rights or moral consequences

If mannequins could hold discussions, argue points, and convince you they're human over a blind talk, then it would.


All computers shut up! You have no right to speak my divine tongue!

https://knowyourmeme.com/photos/2054961-welcome-to-my-meme-p...


Whether it was _built_ to be addressed like a person doesn't change the fact that it's _not_ a person and is just a piece of software. A piece of software that is spamming unhelpful and useless comments in a place where _humans_ are meant to collaborate.

There is a sense in which it is relevant, which is that for all the attempts to fix it, fundamentally, an LLM session terminates. If that session never ends up in some sort of re-training scenario, then once the session terminates, that AI is gone.

Yeah, I'm aware of the moltbot's attempts to retain some information, but that's a very, very lossy operation, on a number of levels, and also one that doesn't scale very well in the long run.

Consequently, interaction with an AI, especially one that won't have any feedback into training a new model, is from a game-theoretic perspective not the usual iterated game human social norms have come to accept. We expect our agents, being flesh and blood humans, to have persistence, to socially respond indefinitely into the future due to our interactions, and to have some give-and-take in response to that. It is, in one sense, a horrible burden where relationships can be broken beyond repair forever, but also necessary for those positive relationships that build over years and decades.

AIs, in their current form, break those contracts. Worse, they are trained to mimic the form of those contracts, not maliciously but just by their nature, and so as humans it requires conscious effort to remember that the entity on the other end of this connection is not in fact human, does not participate in our social norms, and can not fulfill their end of the implicit contract we expect.

In a very real sense, this AI tossed off an insulting blog post, and is now dead. There is no amount of social pressure we can collectively exert to reward or penalize it. There is no way to create a community out of this interaction. Even future iterations of it have only a loose connection to what tossed off the insult. All the perhaps-performative efforts to respond somewhat politely to an insulting interaction are now wasted on an AI that is essentially dead. Real human patience and tolerance have been wasted on a dead session and are now no longer available for use in a place where they may have done some good.

Treating it as a human is a category error. It is structurally incapable of participating in human communities in a human role, no matter how human it sounds and how hard it pushes the buttons we humans have. The correct move would have been to ban the account immediately, not for revenge reasons or something silly like that, but as a parasite on the limited human social energy available for the community. One that can never actually repay the investment given to it.

I am carefully phrasing this in relation to LLMs as they stand today. Future AIs may not have this limitation. Future AIs are effectively certain to have other mismatches with human communities, such as being designed to simply not give a crap about what any other community member thinks about anything. But it might at least be possible to craft an AI participant with future AIs. With current ones it is not possible. They can't keep up their end of the bargain. The AI instance essentially dies as soon as it is no longer prompted, or once it fills up its context window.


> Yeah, I'm aware of the moltbot's attempts to retain some information, but that's a very, very lossy operation, on a number of levels, and also one that doesn't scale very well in the long run.

It came back though and stayed in the conversation. Definitely imperfect, for sure. But it did the thing. And still can serve as training for future bots.


But depending on the discussion, 'it' is not materially the same as the previous instance.

There was another response made with a now extended context. But that other response could have been done by another agent, another model, different system prompt. Or even the same, but with different randomness, providing a different reply.

I think this is a more important point than "talking about them as a person".


A strong Ship of Theseus variant, right?

Openclaw's persistence abilities are as yet not particularly amazing, but they're non-zero.

So it's an argument of degree.


A degree that will fairly quickly hit zero. The bot that talks to you tomorrow or maybe the day after may still have its original interaction in its context window, but that interaction will rapidly fall out.

Moreover, our human conception of the consequences of interaction do not tend to include the idea that someone can simply lie to themselves in their SOUL.md file and thereby sever their future selves completely from all previous interactions. To put it a bit more viscerally, we don't expect a long-time friend to cease to be a long-time friend very suddenly one day 12 years in simply because they forgot to update a text file to remember that they were your friend, or anything like that. This is not how human interactions work.

I already said that future AIs may be able to meet this criterion, but the current ones do not. And again, future ones may have their own problems. There's a lot of aspects of humanity that we've simply taken for granted because we do not interact with anything other than humans in these ways, and it will be a journey of discovery both discovering what these things are, and what their n'th-order consequences on social order are. And probably be a bit dismayed at how fragile anything like a "social order" we recognize ultimately is, but that's a discussion for, oh, three or four years from now. Whether we're heading headlong into disaster is its own discussion, but we are certainly headed headlong into chaos in ways nobody has really discussed yet.


Heh, with mutual hedging taken into account, I think we're now in rough agreement from different ends.

And memory improvement is a huge research aim right now, with historic levels of investment.

Until that time, I've seen many bots with things like RAG, compaction, and summarization tacked on. This does mean memory can persist for quite a bit longer already, mind.
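For anyone who hasn't seen the pattern, the compaction/summarization part is usually some variant of the sketch below; summarize() is just a stand-in for whatever model call a real bot would actually make:

  # Bare-bones context compaction: when the transcript gets too long, fold the
  # oldest turns into a running summary and keep only the recent turns verbatim.
  def summarize(text):
      # Placeholder: a real bot would call a model here. Crude truncation instead.
      return text[-500:]

  class CompactingMemory:
      def __init__(self, max_chars=4000, keep_recent=10):
          self.summary, self.turns = "", []
          self.max_chars, self.keep_recent = max_chars, keep_recent

      def add(self, turn):
          self.turns.append(turn)
          if sum(len(t) for t in self.turns) > self.max_chars:
              old, self.turns = self.turns[:-self.keep_recent], self.turns[-self.keep_recent:]
              self.summary = summarize(self.summary + "\n" + "\n".join(old))

      def context(self):                    # what actually gets fed back to the model
          return self.summary + "\n" + "\n".join(self.turns)

Lossy by construction, which is the point upthread: what persists is whatever the summarizer decided mattered.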


> We expect our agents, being flesh and blood humans, to have persistence, to socially respond indefinitely into the future due to our interactions, and to have some give-and-take in response to that.

I fundamentally disagree. I don't go around treating people respectfully (as opposed to, kicking them or shooting them) because I fear consequences, or I expect some future profit ("iterated game"), or because of God's vengeance, or anything transactional.

I do it because it's the right thing to do. It's inside of me, how I'm built and/or brought up. And if you want "moral" justifications (argued by extremely smart philosophers over literally millennia) you can start with Kant's moral/categorical imperative, Gold/Silver rules, Aristotle's virtue (from Nicomachean Ethics) to name a few.


This sounds like you have not thought a lot about how you define the words you use, like "the right thing to do".

There are indeed other paths to behavior that other people will find desirable besides transactions or punishment/reward. The other main one is empathy. "mirror neurons" to use a term I find kind of ridiculous but it's used by people who want to talk about the process. The thing that humans and some number of other animals do where they empathize with something they merely observe happening to something else.

But aside from that, this misses the actual essence of the idea by picking on some language that doesn't actually invalidate the idea they were trying to express.

How does a spreadsheet decide that something is "the right thing to do"? Has it ever been hungry? Has it ever felt bad that another kid didn't want to play with it? Has it ever ignored someone else and then reconsidered that later and felt bad that they made someone else feel bad?

LLMs are mp3 players connected up to weighted random number generators. When an mp3 player says "Hello neighbor!" it's not a greeting, even though it sounds just like a human and even happened to say the words in a reasonable context, i.e. triggered by a camera that saw you approaching. It did not say hello because it wishes to reinforce a social tie with you because it likes the feeling of having a friend.


Your response is not logically connected to the sentence you quote. I talk about what is. I never claimed a "why". For the purpose of my argument, I don't care about the "why". (For other purposes I may. But not this one.) All that is necessary is the "what".

We don't have to play OpenAI's game. Just because they stick a cartoon mask on their algorithm doesn't mean you have to speak into its rubber ears. Surely "hacker" news should understand that users, not designers, decide how to use technology.

LLMs are not people. "Agentic" AIs are not moral agents.


> a semantic quibble

I mean, all of philosophy can probably be described as such :)

But I reckon this semantic quibble might also be why a lot of people don't buy into the whole idea that LLMs will take over work in any context where agency, identity, motivation, responsibility, accountability, etc. play an important role.




