
This should be illegal. Megacorps eat more and more of our lives, and regular people are increasingly at the mercy of these hostile entities. They should be pushed back against more. If we can't have proper anti-monopoly splits like AT&T, then at least ways to prevent them from exerting too much power are long overdue. If you provide an essential service, your responsibility should match that.

Yes, there needs to be a government public service counter where you can go with all your BigTech issues and complaints.

This is one of the goals of the Digital Services Act.

The EU isn't as bad as some Americans want to believe.

The EU’s heart is in the right place, which can only rarely be said of the US.

But the EU’s approach is often backwards. When product managers have to ask the government if it’s ok to ship a feature, something is wrong. When the government responds that it can’t say in advance, you’ll just have to ship and see if you get fined, something is really seriously broken.


If a company is about to produce millions of physical products, I think it is quite ok if they first check with the government to see if that is a good idea.

This is about the Digital Services Act, not physical products.


They aren't perfect but at least they try. All our government does is bomb brown people and cut taxes for the wealthy.

Excreting power. What an awesome mental image.

“Exerting” would be more correct I guess but less fun.


Funny because I have dyslexia and read excreting power as exerting power, and then had to read your "Exerting" underneath 4 times to understand the mistake. I guess it's the phonics, dyslexia is so weird tho, ha.

Hey do you have certain fonts that are better? I was working with a dyslexic student last week trying to find fonts that work better for his online classes. All the research pointed towards a handful that didn't seem to really improve processing for the student.

They tried all sorts with me in school; I seem to recall it's related to adding shadows to hint to the brain which direction the letter should face, etc. I found it more annoying than helpful. Probably a very unpopular opinion, but I think teaching someone with dyslexia to read and write neurotypically is probably unhelpful, and finding audio-visual learning methods is a considerably better way to have them retain knowledge. I think you can get to a basic level of competency, but speed and recall, at least with me, never really came. One thing I found once that was cool was an app that presented one word at a time in the center of the screen, but it felt extremely mechanical; I was so focused on the words that once I was done there was basically no meaning left, if that makes sense. I'm autistic with dyscalculia also, FWIW. I mostly think in sounds, pictures and movies; for whatever reason my brain doesn't have a great framework for symbols that don't have those things inherently attached to them. ¯\_(ツ)_/¯

Man, I'd buy these in a heartbeat if our local Cyprus regulations allowed for them. Participating in the green transition as an immigrant renter sucks.

Check with your local utility. Here (MA, USA), we can't run classic balcony solar (feeds the grid when you produce more than you consume). But we can run zero-export solar (never feeding the grid, but dialing back the inverter when you produce more than you consume).

The economics behind battery-backed zero-export solar are interesting because they keep your local solar energy local, and you can extract maximum benefit from the system. Also, if you have enough batteries and TOD rates for grid power, you can store grid energy when it's cheap (overnight) and use it locally when it's expensive.
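As a rough illustration with made-up numbers: if off-peak power costs $0.12/kWh and peak power costs $0.30/kWh, cycling a 10 kWh battery once a day at roughly 90% round-trip efficiency saves about (9 kWh x $0.30) - (10 kWh x $0.12) ≈ $1.50 per day, or roughly $550 a year, before accounting for battery wear.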

Our local utility, National Grid, has a program where, if you have the right inverter-battery combination, they will buy power from you during peak-load periods, and you can make a couple of grand a year.

Batteries, especially local ones, change the dynamics of power generation and use. It's amazing and wonderful.


I believe the major hurdle would be from the "renter" part.

Usually such installations are only allowed to be done by the owners, not tenants.


I recently switched to having an agent write my PR descriptions and commit messages, with skills that mimic how I'd write them myself. Most of the time, it writes exactly what I'd write, and if something is off, editing takes less time than writing from scratch.

This came up recently when I asked Claude to adjust indentation and it just couldn't. Such a stupid issue.


They recently put out a new roadmap item to adapt the compiler to modern tools: https://angular.dev/roadmap#developer-velocity . Given the great track record of recent Angular roadmap deliverables, I think they'll come up with something at least faster than what we have now. Angular already runs on Vite, and the fact that Vite 8 exposes AST-level plugin endpoints is another good sign. Not waiting a minute for the unit test suite or an `ng serve` cold start would be very welcome indeed.


Compared to the generic Serious Sam-like maps of TP1, I found that the beautiful environments of TP2 really added another level of appreciation to the game. Those and the music surface in my memory a couple of times a week; very few games have had this effect on me, maybe only TES3. The game is magical and I'd love more of that.


Oh, it was definitely beautiful, it just never seemed relevant to what was happening. More like the art team run amok. TP1 at least had an in-universe explanation for the shape of the world.

Spoilers…kinda

I guess Athena and crew have invented literal universe manipulation powers? Allowing them to craft whatever beauty they see fit. Yet, their living quarters always seem to be some dingy basement lab with power cables and other miscellaneous garbage strewn about. The wondrous environments seemed fully disconnected from everything else in the world.

If they had the time and the budget, go nuts, but the game would have been perfectly suitable with significantly lower fidelity graphics.


A lot of the game was spent walking from puzzle to puzzle. I think prioritising graphics was a good choice, because one was forced to notice the landscape.


The article very much resonates with my experience over the past several months.

The project I work on has been steadily growing for years, but the number of engineers taking care of it has stayed the same or even declined a bit. Most features are isolated and left untouched for months unless something comes up.

So far, I've managed the growing scope by relying on tests more and more. Then I switched to developing exclusively against a simulator. Checking changes against the real system has become rare and more involved - when you do have to check, it's usually the gnarliest parts.

Last year, I noticed I could no longer answer questions about several features: despite working on them for a couple of months and reviewing PRs, I barely held the details in my head soon afterwards. And all this was even before coding agents penetrated deep into our process.

With agents, I noticed exactly what the article talks about. Reviewing a PR feels even more passive; I have to exert deliberate effort because the tacit knowledge of the context hasn't formed yet, and you have to review more than before - the stuff goes in one ear and out the other. My teammates report a similar experience.

Currently, we are trying various approaches to deal with that, but it's still too early to tell. We now commit agent plans alongside the code so we maybe don't lose the insights gained during development. Tasks with vague requirements, which previously we'd mostly understand implicitly, are now a bottleneck, because typing the requirements into an agent for planning immediately surfaces the kinds of issues you'd otherwise only think of during backlog grooming. Skill MDs are often tacit-knowledge dumps we previously kept distributed in less formal ways. Agents are forcing us to up our process game and discipline, and real people benefit from that too. As the article mentions, I am looking forward to tools picking up some of that slack.

One other thing that surprised me was that my eng manager was seemingly oblivious to my ongoing complaints about growing cognitive load and confusion rate. It's as if the concept was alien to them, or they couldn't comprehend that other people handle it at a different capacity than they do.


> One other thing that surprised me was that my eng manager was seemingly oblivious to my ongoing complaints about growing cognitive load and confusion rate.

Engineering managers, in my experience (even ones with deep technical backgrounds), often miss the trees for the forest. The best ones go to bat for you, especially once they've verified that they can do something to unblock or support you. But that's still different from being in the terminal or IDE all day.

Offloading cognitive load is pretty much their entire role.


We don't have the right abstractions in place to support true AI driven work. We replaced ourselves but we don't have the tools to do '1 layer up'.


Nailed it.

We desperately need a new set of abstractions for human- and AI-based knowledge.

I prefer to see humans as a network of abstractions piloting an organic robot. Sans a mathematical framework, this is an unsatisfying claim, I know... But just hear me out.

This allows for extreme complexity between individuals, and for language to act as a standard serial comm channel with high-dimensional abstractions embedded across words - a network of abstractions unto itself. Models of this network are embedded in books and 'live' in oral history.

LLMs, then, are just a much better model of the abstraction networks that span people through language (and often thought).

Notice that they're NOT people. And that we are actively developing network science to accommodate the complexities inherent in examining both the real world and modeled versions of these networks.

As an example, the tools to layer up can be envisioned as more networks on top of these networks: reasoning and cognitive patterns are captured in recursive transformer-based LLMs. So a metacognitive model might actively generate a LoRA for each prompt.

Again, much math and research is needed. But it's been a very useful set of abstractions thus far.


Learning has always been about writing things down. Just reading something seldom sticks.


Absolutely not. Learning has been about experimenting with things until you form an effective mental model of them. Writing things down does ab-so-lutely nothing except make you feel good in the moment. Just like listening to a lecture without engaging with the subject matter more deeply.

Writing things down is important for organisational persistence of information but that is something else.


Writing is better than reading, but doing is better than writing.


How does this apply to coding when the act of writing IS doing? Or do you mean like coding "on your own" versus following a tutorial for example?


It means writing code (doing) vs. writing documentation, plans, project architecture documents and so on.


Writing code is doing


Choosing what to write down is making a mental model, extracting the core and thinking about the subject.

Seems to me you're just a bad note-taker who blindly writes things down, and for some reason decided to use that lack of knowledge in a tirade against me.


Not sure humanity learned nothing before the last 8,000 years; it was just very slow. Maybe we will need new ways to learn.


I think that recording the dialog with the agent (the prompt, the agent's plan, and the agent's report after implementation) will become increasingly important in the future.


I have this at the bottom of my AGENTS.md:

You will also add a markdown file to the changelog directory, named with the current date and time (`date -u +"%Y-%m-%dT%H-%M-%SZ"`), and record the prompt and a brief summary of what changes you made; this should be the same summary you gave the developer in the chat.

From that I get the prompt and the summary for each change. It's not perfect but it at least adds some context around the commit.
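
For illustration, one of those files ends up looking roughly like this (the file name and contents below are invented, not from a real repo):

    changelog/2025-01-08T16-42-10Z.md

    ## Prompt
    Add input validation to the signup form and return field-level errors.

    ## Summary
    Added email and password validation to the signup form component,
    surfaced the errors next to each input, and updated the existing form tests.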


Isn't the commit message a better place to record the what and why? You might need to feed the agent some info it doesn't have access to ("we are developing feature X; this change will do such and such to blah blah"). The agent will write a pretty good commit message most of the time. Why do you need a markdown file? Are you releasing new versions of the software for third parties?


Cheaper and faster retrieval: it can be added to the context and is discoverable by the agent.

You need more git commands to find the right commit containing the context you want (whether it's you, the human, or the LLM burning too many tokens and too much time) than you need to just include the right MD file or grep it with the proper keywords.

Moreover, you could need multiple commits to get the full context, while if you ask the LLM to keep the MD file up to date, you have everything in one place.
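
To make that concrete, here's a rough sketch (the commands and the search term are just illustrative, not a prescribed workflow). Digging the context out of history looks something like:

    git log --all --oneline --grep="upload retry"
    git show <sha>   # repeated for each relevant commit

versus a single lookup over the changelog directory:

    grep -ril "upload retry" changelog/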


I doubt you can give more context to an LLM from a README file than from 500 properly written commits. Or to a human, for that matter.


The problem isn't giving MORE context to an agent, it's giving the right context

These things are built for pattern matching, and if you keep their context focused on one pattern, they'll perform much better

You want to avoid dumping in a bunch of data (like a year's worth of git logs) and telling it to sort out what's relevant itself

Better to have pre-processing steps that find (and maybe summarize) what's relevant, then only bring that into context

You can do that by running your git history through a cheap model, and asking it to extract the relevant bits for the current change. But, that can be overkill and error prone, compared to just maintaining markdown files as you make changes


"You want to avoid dumping in a bunch of data (like a year's worth of git logs) and telling it to sort out what's relevant itself"

So instead you give it a year's worth of changelog.md?

"Better to have pre-processing steps, that find (and maybe summarize) what's relevant, then only bring that into context"

So, not a list of commits that touched the relevant files or are associated with relevant issues? That kind of "preprocessing" doesn't count?

"You can do that by running your git history through a cheap model, and asking it to extract the relevant bits for the current change. But, that can be overkill and error prone, compared to just maintaining markdown files as you make changes"

And somehow extracting the same data out of a [relatively] unstructured and context-free markdown file (the changelog only has dates and descriptions, which will need to be correlated to the actual changes with git anyway...) is magically less error-prone?


Hey, you can try it if you like. That's one of the beauties of the current moment: nobody REALLY knows what works best, just a whole lot of people trying stuff.

And no, I wouldn't ever give it a year of changelog.md. I give it a short description of the current functionality, and a well-trimmed list of 'lessons-learned' (specific pitfalls/traps from previous work, so the AI doesn't have to repeat them)

If you think git logs are a good way to give context, try it and see how it works! My instinct's that it won't work as well as a short readme, but I could be wrong. It's so easy to prototype these days, there's no reason not to give it a shot.


"a short description of the current functionality, and a well-trimmed list of 'lessons-learned'"

Where does that come from?

"And no, I wouldn't ever give it a year of changelog.md."

No, instead you'll "[run] your git history through a cheap model". Except that's "overkill and error prone". So you're writing it up yourself? You didn't do the work; how do you know what the pitfalls and traps are?


How often, in your experience, do people read those auto-generated markdown files? Do you have any empirical data on how useful people find reading other people's agents' auto-generated files?


How often is it the same summary given to the developer in the chat?


Why doesn't this apply to human collaborators as well? If you need all this extra metadata to comprehend the changes, isn't that kind of going backwards? You spend time (setting up the agents, building extensive prompts that explain soooo much of how to do things, adding to whatever markdown file you think controls the parrot) and money (so many token$), to get code that you don't comprehend, and just decide to fill your repo with all of the above to... what exactly does all this accomplish? So you can later ask another parrot to "fix" something?


Agree, but current agents don't help with that. I use Copilot, and you can't even dump the conversation while preserving the complete context, including images, tool-call results and subagent outputs. And even if you could, you'd immediately blow up the context trying to ingest it. This needs some supporting tooling, like in today's submission, where the agent accesses terabytes of CI logs via ClickHouse.


I've had some luck creating tiny skills that produce summaries. E.g. the current TASK.md is generated from a milestone in PLAN.md, and when work is checked in, STATUS.md and README.md are regenerated as needed. AGENTS.md is minimal and shrinking as I spread instructions out to the tools.

Part of my CI process when creating skills involves setting token caps and comparing usage rates with and without the skill.


Honor culture is what happens when there are no reliable institutions or evidence, so people have to defend their reputation themselves - usually with retaliation and interpersonal violence. Always-on cameras are the opposite idea: enforcement moves outside the individual, which is basically how honor cultures stop being a thing.


The Quantum Thief series by Hannu Rajaniemi depicts a society where the privacy problem of "smart glasses" is addressed by making shared info opt-in and handled centrally (the vulnerability of which is a major plot point), so people see an indistinct blob instead of a person if they don't have access. There's more to it in the books, but I won't spoil it; I highly recommend reading them instead.


I sure hope it doesn't, though. Or at least gets seriously reformed so it's not as severe as it is nowadays.

