The problem is humans are really bad at perceiving externalities at this scale: cause and effect between small actions and large outcomes, and effects that play out over the span of a lifetime rather than the span of a day. The denialist rationale has shifted over the years from doubting the very basis of the science, to claiming it's just a short-term blip, to it's natural long-term cycles, to … everything that involves not looking up.
I think the truth is we won't really take this seriously globally until the changes are so severe that it'll take generations to undo, if ever.
My employer is pretty advanced in its use of these tools for development, and it's absolutely accelerated everything we do, to the point that we are exhausting six months of roadmap in a few weeks. However, I think very few companies are operating like this yet. It takes time for tools and techniques to make it out, and Claude Code alone isn't enough. They are basically planning to let go of most of the product managers and engineering managers, and I expect they're measuring who is using the AI tools most effectively and everyone else will be let go, likely before year's end. Unlike prior iterations I saw at Salesforce, this time I am convinced they're actually going to do it and pull it off. This is the biggest change I've seen in my 35-year career, and I have to say I'm pretty excited to be going through it, even though the collateral damage to people's lives will be immense. I plan to retire after this as well; I think this part is sort of interesting, but I can see clearly that what comes next is not.
Why are you excited for this? They're not going to give YOU those people's salaries. You will get none of it. In fact, it will drag your salary through the floor because of all the available talent.
I'm excited as a computer scientist to see it happening in my lifetime. I am not excited for the consequences once it's played out. Hence my comment about retiring, and my empathy for everyone who is still around once I do. I never got into this for the money - when I started, engineers made about as much as accountants. It's only post-1997 or so that it became "cool" and well paid. I am doing this because I love technology, what it can do, and the science of computing. So in that regard it's an amazing time to be here. But I am also sad to see the black box cover the beauty of it all.
I'm very confused about this. Salary is only one portion of your total compensation. The vast majority of tech companies offer equity in the company. The two ways to increase the FMV of your equity are: increase your equity stake, or increase the value of the total equity available. Hitting the same goals with fewer people means your run rate is lower, which increases the value of your equity (the FMV prices in lower COGS for the same revenue). Also, keeping staff on often means you want to offer them increased equity stakes as part of an employment package. Letting staff go means more of that equity pool is available to distribute to remaining employees.
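The run-rate argument above can be made concrete with a toy valuation. This is a minimal sketch with entirely made-up numbers and a deliberately simplistic "profit times a multiple" model, just to show the direction of the effect the commenter describes:

```python
# Hedged sketch: how a lower run rate can raise the implied value of
# remaining equity. All figures and the valuation model are hypothetical.

def implied_equity_value(revenue, run_rate, multiple):
    """Toy valuation: value annual operating profit at a fixed multiple."""
    operating_profit = revenue - run_rate
    return operating_profit * multiple

# Same revenue hit with a smaller team (lower payroll / run rate):
before = implied_equity_value(revenue=50e6, run_rate=40e6, multiple=10)
after = implied_equity_value(revenue=50e6, run_rate=30e6, multiple=10)

print(before)  # 100000000.0
print(after)   # 200000000.0 - halving margin costs doubles implied value here
```

Real FMV (409A) pricing is far more involved, but the sign of the effect is as claimed: same revenue, lower costs, higher implied value per share.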
We aren't fungible workers in a low-skill industry. And if you find yourself working at a tech company without equity: just don't, leave. Either find a new tech company or do something else altogether.
Equity is negotiable just like salary, and if supply of developer labor increases with the same or less demand, you'll get less equity just like you will get less salary.
I can't believe the person you replied to thinks they're going to get some magically larger amount of equity because the company can hopefully do more with fewer people. That assumes the entire business landscape doesn't also change with AI, disincentivizing so much investment in companies in the first place, because someone else with AI can create a competitor in a shorter amount of time...
In the last three startups I worked at I didn’t bother exercising my vested equity - even a successful exit would at best triple the price of those shares - not worth the risk. One of those three startups already failed.
5.4 is the one fine-tuned for autonomous mass murder, the automated surveillance state, and money grabs at any cost. It's really hard to lump that in with the others, as it's a fairly unique and specialized feature set. You can't really call it that, though, so they have to use the numbers.
I’m pretty glad I’m out of the OpenAI ecosystem in all seriousness. It is genuinely a mess. This marketing page is also just literally all over the place and could probably be about 20% of its size.
This is a huge step over the M4's 32GB and 153GB/s memory bandwidth.
For local LLM use this makes it a replacement for a DGX Spark, which offers a third of the transfer speed and is not something you toss in your backpack as your laptop. It's practically useful for a lot of local use cases, and that, I think, is the 4x factor (memory transfer) - but the 128GB unified headroom tremendously improves the models you can run and the training you can do.
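The reason memory transfer is the headline number for local inference can be shown with a back-of-the-envelope estimate. This is a rough rule of thumb, not a benchmark: autoregressive decode is usually memory-bandwidth-bound, so the ceiling on tokens/sec is roughly bandwidth divided by the bytes of active weights read per token. The specific bandwidth and model figures below are illustrative assumptions:

```python
# Rough ceiling on decode speed for a bandwidth-bound model:
# each generated token requires reading (approximately) all active weights.

def est_tokens_per_sec(bandwidth_gb_s, active_params_b, bytes_per_param=0.5):
    """Upper-bound decode rate assuming one full read of the active
    weights per token. bytes_per_param=0.5 approximates 4-bit quantization."""
    active_weight_gb = active_params_b * bytes_per_param
    return bandwidth_gb_s / active_weight_gb

# Hypothetical comparison for a MoE model with ~3B active parameters at
# 4-bit: 153 GB/s (base-M4 class) vs a 4x bump to ~612 GB/s.
print(est_tokens_per_sec(153, 3))  # 102.0 tokens/s ceiling
print(est_tokens_per_sec(612, 3))  # 408.0 tokens/s ceiling
```

Real throughput lands well under these ceilings (KV cache reads, compute, scheduling), but the scaling with bandwidth is roughly linear, which is why a 4x bandwidth bump matters so much.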
What is truly amazing is the M1 Max is 400GB/s. 5 years later and we still only hit 1.5x on memory bandwidth. It's quite fascinating how high Apple spec'd it back then with apparently little foreknowledge of how important memory bandwidth would become, and then conversely how little they've managed to improve it now when it's so obvious how important it is.
The reason for that is that most memory bandwidth bumps come with new memory generations. For example an early DDR4 platform (e.g. Intel Skylake/Core iX-6000) and a late one (e.g. AMD Zen3/Ryzen 5000) only differ by 1.5x as well, typically.
The same trend is visible in GPUs: for example, my RTX 2070 (GDDR6) has the same memory bandwidth as a 3070 and only a little bit less than a 4070 (GDDR6X). However, a 5070 does get significantly more bandwidth due to the jump to GDDR7. Lower-end cards like the 4060 even stuck to GDDR6, which gave them a bandwidth deficit compared to a 3060 due to the narrower memory buses on the 40 series.
I run Qwen 3.5 30B MoE and it's reasonable at most tasks I would use a local model for, including summarizing things. For instance, I auto-update all my toolchains in the background when I log in, and when they finish I use my local model to summarize everything that was updated, plus any errors or issues, on the next prompt render. It's quite nice because everything stays updated, I know what's been updated, and I'm immediately aware of issues. I also use it for a variety of "auto correct" tasks, "give me the command for," "summarize the man page and explain X," and a bunch of tasks where I would rather not copy and paste, etc.
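A workflow like the one described above can be sketched in a few lines. This is a minimal, hedged sketch: the updater commands, log path, endpoint URL, and model name are all assumptions (an Ollama-style OpenAI-compatible endpoint is used here); substitute your own toolchains and local server:

```python
# Sketch of: run toolchain updates in the background at login, capture the
# logs, then ask a local model to summarize them on the next prompt render.

import json
import subprocess
import urllib.request

LOG = "/tmp/toolchain-update.log"  # assumed log location


def run_updates():
    """Run each updater and append its output to the shared log.
    The commands here are examples; substitute your own toolchains."""
    with open(LOG, "a") as log:
        for cmd in (["rustup", "update"], ["brew", "upgrade"]):
            subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT)


def build_summary_prompt(log_text: str) -> str:
    """The prompt handed to the local model on the next prompt render."""
    return ("Summarize what was updated and list any errors or issues, "
            "briefly:\n\n" + log_text)


def summarize(log_text: str,
              url="http://localhost:11434/v1/chat/completions"):
    """POST the log to a local OpenAI-compatible endpoint (the Ollama-style
    URL and model name are assumptions) and return the summary text."""
    body = json.dumps({
        "model": "qwen3:30b",
        "messages": [{"role": "user",
                      "content": build_summary_prompt(log_text)}],
    }).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Hooking `run_updates()` into a login script and calling `summarize()` from the shell's prompt hook gives the behavior described: updates happen silently, and the summary appears the next time a prompt is drawn.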
Each of those clauses has a DoD policy carve-out as an exception, which basically says they can do whatever they want to do, and nothing they don't want to do.
Yeah, in fact, I'm increasing my subscription to Anthropic and decreasing my spend with OAI. Now, if there were a way to easily port conversation history between one and the other, I'd probably be fine with deleting OpenAI. ChatGPT has years of my and my family's interactions in its history, and those are mostly useless to others, but to me they're valuable. But the knob I have is my spend, so here it goes…
If OpenAI had shown any fidelity or backbone in the least, it would be a different story. A unified industry standing against any one company being bullied into business decisions it doesn't want to make is a wall, and a strengthening of competition. Now the government will use war powers to shape private industry's competitive landscape and turn companies with core business principles into tools of the state through unilateral and likely unlawful actions, and OpenAI's first response is to grab the money and shove their competitors under the government bus.
We are all much less safe, and the AI industry much, much weaker as a result.
Export your data and ask Claude to shove it in a database that you can let it access anytime you want via tool calling.
I agree. This could have been a moment of solidarity across the industry, an acknowledgement that we're all in this together, having fun and building out intelligent systems. Instead we're seeing Sam Altman, yet again, for who he really is.
I find it sad that they used the vanity names "Department of War" and "Secretary of War," given that Congress has not changed the name and the president doesn't get to decide the naming of statutory departments or secretary-level roles. Maybe it's just an appeasement to the thin-skinned people who need powder rooms, the former military journalists working for a draft dodger pretending to be a tough-guy "warrior," trying to glorify violence for political purposes. But no actual war vet I've ever known has glorified war for the sake of war; they felt very seriously that defense is the reason to do what they had to do. My grandfather was highly decorated career special forces (Ranger, Green Beret, Delta Force, four Silver Stars and five Bronze Stars, etc.) from WWII, Korea, and Vietnam, and he was angry when I considered joining the military - he told me he did what he did so I wouldn't have to, and to protect his country, and that there was no glory to be had in following his path. He would be absolutely horrified at what is going on, and I thank God he died before we had these prima donna politicians strutting around, banging their chests, and pretending war is something to be proud of.
Good on Anthropic for standing up for its principles, but boo for the discourtesy to the law of the land in acknowledging those vanity titles.