Hacker News | curtisf's comments

"I would rather read the prompt"

https://claytonwramsey.com/blog/prompt/

discussion: https://news.ycombinator.com/item?id=43888803

All of the output beyond the prompt contains, definitionally, essentially no useful information. Unless it's being used to translate from one human language to another, you're wasting your reader's time and energy in exchange for your own. If you have useful ideas, share them, and if you believe in the age of LLMs, be less afraid of them being unpolished and simply ask your readers to rely on their preferred tools to piece through it.


I have also found that LLMs do not help me communicate my ideas in any way, because the bottleneck is getting the ideas out of my head and into the prompt in the first place. But I disagree with the idea that the output beyond the prompt contains no useful information.

In the article you linked, the output he is complaining about probably came from a prompt like: "What are the downsides of using Euler angles for rotation representation in robotics? Please provide a bulleted list and suggest alternatives." The LLM expanded on it based on its knowledge of the domain or based on a search tool (or both). Charitably, the student looked it over, thought through the information, and decided it was good (or possibly tweaked around the edges) before sending it over; in practice, though, they probably just assumed it was correct and didn't check it.

For writing an essay like "I would rather read the prompt", LLMs don't seem like they would speed up the process much, but for something that involves synthesizing or summarizing information, LLMs definitely can generate a useful essay (though, at least at the moment, the default system prompts generate something distinctively bland and awful).


Pretty balanced take. I think if a human gains information or saves time, it's still worthwhile. Of course, I don't publish that clickbait; that's AI slop.

Sounds reasonable until you consider that the "prompt" might include a million tokens of context, not to mention follow-ups and iterations.

"Consensus" in this post refers to the "consensus problem", which is a fundamental and well-known problem in distributed systems.

It's not about political consensus.

However, the paper that introduced it and proved it possible, Lamport's "The Part-Time Parliament", used an involved (and often cited as confusing) "Parliament" metaphor for computers in a distributed system.

"Consensus" in distributed systems need not be limited to majorities; it really just requires no "split brain" is possible. For example, "consensus" is achieved by making one server the leader, and giving other servers no say. A majority is just the 'quorum' which remains available with that largest number of unavailable peers possible.


As feedback to the author, I made the same mistake initially. It was only around halfway through that I realized the voters in question didn't necessarily care what they were voting for in the usual preferential or political sense, only that they were trying to reach any consensus at all.

Looking back at the page again from the top, I see the first paragraph references Paxos, which is a clue for those who know what that is. But using "There’s a committee of five members that tries to choose a color for a bike shed" as the example threw me back off the trail: the bike shed is the canonical case of people arguing personal preferences and going to the wall for them at the expense of every other rational consideration. I'd suggest a sample problem that is just as trivial in reality, but less pre-loaded with the exact opposite connotation.


> it really just requires no "split brain" is possible. For example, "consensus" is achieved by making one server the leader, and giving other servers no say.

Which is funny, because that actually describes political consensus as well, functionally, even if it’s not what people typically think of as the definition.

If you can effect enough of the right censorship, silencing, or cancelling, you can achieve consensus (aka no split brain, or at least no split with agency).


It could also be useful in low doses to supplement, for example, a seasonal vaccine in a year where they are especially unsure about prevalent strains, or where their predictions were already proved wrong early in the flu season.


> For optional types, 0 is decoded as the default value of the underlying type (e.g. string? decodes 0 as "", not null).

In the "dense JSON" format, isn't representing removed/absent struct fields with `0` and not `null` backwards incompatible?

If you remove or are unaware of an `int32?` field, old consumers will suddenly think the value is present as a "default" value rather than absent.


That is correct, and good catch. The idea, though, is that you typically remove a field only after having made sure that no code reads the removed field anymore and that all binaries have been deployed.


How does this work if, for example, you persist the data in a database?


Let's imagine you have this:

```
struct User {
  id: int64;
  email: string?;
  name: string;
}
```

You store some users in a database: [10,"john@gmail.com","john"], [11,null,"jane"]

You remove the email field later:

```
struct User {
  id: int64;
  name: string;
  removed;
}
```

Supposedly you remove a field after you have migrated all code that uses the field and you have deployed all binaries.

In your DB, you still have [10,"john@gmail.com","john"], [11,null,"jane"], which you are able to deserialize fine (the email field is ignored). New values that you serialize are stored as [12,0,"jack"]. If you happen to have old binaries which still use the old email field and which are still running (which you shouldn't, but let's imagine you accidentally didn't deploy all your binaries before you removed the field), those old binaries will indeed decode the email field for new values (Jack) as an empty string instead of null.
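The failure mode described above can be sketched in Python (the 0-as-default decode rule and the field layout come from this thread; the decoder itself and its names are made up for illustration, not the library's actual API):

```python
# Illustrative decoder for the "dense JSON" rule discussed above:
# a 0 in an optional slot decodes as the underlying type's default
# value, not as null.

OLD_SCHEMA = ["id", "email", "name"]   # email was string? in the old struct
DEFAULTS = {"email": ""}               # default for the optional string

def decode_old(values):
    """Decode a dense-JSON array using the OLD schema (email still present)."""
    record = {}
    for field, value in zip(OLD_SCHEMA, values):
        if value == 0 and field in DEFAULTS:
            value = DEFAULTS[field]    # 0 -> default of underlying type, not None
        record[field] = value
    return record

# A value written AFTER the email field was removed: [12, 0, "jack"].
# An old binary still using OLD_SCHEMA sees email == "" instead of null.
print(decode_old([12, 0, "jack"]))  # {'id': 12, 'email': '', 'name': 'jack'}
```

So the lingering old binary can no longer distinguish "Jack never had an email" from "Jack's email is the empty string", which is exactly the hazard the grandparent raised.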


Isn't it?

You can have Dependabot enabled but turn off automatic PRs. You can then manually generate a PR for an auto-fixable issue if you want, or just do the fixes yourself and watch the issue count shrink.


Conscription is a horribly inapt metaphor for mandatory inoculation.

Banning the playing of third-party Russian roulette, where you hold a mostly unloaded gun to the heads of your neighbors, coworkers, and service staff, more accurately represents the risks involved to both yourself and the public, and, importantly, the personal cost and effort required.


What about when a veteran returns from war with PTSD that can be triggered at any point and potentially result in violence to those around them? That's about the same net effect as walking around with a loaded gun to everyone's head; the only difference is the comparability in numbers. As well, the COVID death rates for young people are a fraction of the death rates of the elderly, who do deserve to be taken care of but ultimately are a net drain on society. So your comment is better stated as holding a gun to the head of the elderly... which is horrible, but not quite the same argument.


A lien on the property? Although almost all jurisdictions already have property taxes, so it hasn't been an insurmountable problem so far.


This could be stated much more succinctly using Jobs to be Done (which is referenced in the first few paragraphs):

Your customers don't want to do stuff with AI.

They want to do stuff faster, better, cheaper, and more easily. (JtbD claims you need to be at least 15% better or 15% cheaper than the competition; if we're talking "AI", the competition is the classical-ML or manual human alternative.)

If the LLM you're trying to package can't actually solve the problem, no one will buy it, because _using AI_ obviously isn't anyone's _job-to-be-done_.


It could, but under the current system, candidates who are affiliated with major parties (i.e., essentially everyone who ends up winning an election) already need to win the support of their party, and the process for this is generally opaque and largely controlled by often less-moderate insiders.

Also, having viable third-party choices puts more pressure on larger parties to field more widely palatable candidates, or risk losing their majorities.


Having seen the current gerrymandered districts where I live and the crazy people who come out of the parties, I would rather voters choose individuals than parties.

If someone doesn’t toe the party line, the party would immediately replace them the next year, and this would give parties even more power.


I do not understand what this could mean.

There are clear formalizations of concepts like Consistency in distributed systems, and there are algorithms that correctly achieve Consensus.

What does it mean to formalize the "Single Source of Truth" principle, which is a guiding principle and not a predictive law?


Here ‘formalize SSOT’ means: treat the codebase as an encoding system with multiple places that can hold the same structural fact (class shape, signature, etc.). Define DOF (degrees of freedom) as the count of independent places that can disagree; coherence means no disagreement. Then prove:

- Only DOF=1 guarantees coherence; DOF>1 always leaves truth indeterminate, so any oracle that picks the ‘real’ value is arbitrary.

- For structural facts, DOF=1 is achievable iff the language provides definition‑time hooks plus introspectable derivation; without both (e.g., Java/Rust/Go/TS) you can’t enforce SSOT no matter how disciplined you are.

It’s like turning ‘consistency’ in distributed systems from a principle into a property with necessary/sufficient conditions and an impossibility result. SSOT isn’t a predictive law; it’s an epistemic constraint: if you want coherence, the math forces a single independent source. And if the same fact lives in both the backend and the UI, the ‘truth’ is effectively in the developer’s head, an external oracle. Any system with more than one independent encoding leaves truth indeterminate; coherence only comes when the code collapses to one independent source (DOF=1).
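A minimal Python sketch of the DOF=1 idea (my own illustration, not from the parent: it uses dataclass introspection as the ‘definition-time hook plus introspectable derivation’; the names are made up):

```python
# Sketch: the schema is DERIVED from the single class definition, so there
# is only one independent place (DOF = 1) that states the field list.

from dataclasses import dataclass, fields

@dataclass
class User:
    id: int
    name: str

def derive_schema(cls):
    """Derive a field-name -> type mapping from the one class definition."""
    return {f.name: f.type for f in fields(cls)}

# Contrast with hand-writing {"id": int, "name": str} in a second place
# (DOF = 2): the two copies could silently disagree, and nothing in the
# code itself determines which one is the "real" truth.
print(derive_schema(User))  # {'id': <class 'int'>, 'name': <class 'str'>}
```

The claim above is that a language without both introspection and definition-time hooks forces you into the hand-written second copy, i.e., DOF > 1.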

