Hacker News | httpz's comments

In places with a lot of flat empty land, solar farms are a lot cheaper. South Korea doesn't have any flat empty land, though.

One important detail is lost in translation. This law applies to publicly-funded parking lots. Public parking does not mean any parking lot open to the general public in this case.

It's car optimized because the 110F weather makes it un-walkable in the first place. When I lived in a walkable city, I would prefer to walk 30 minutes than drive. When I lived in Phoenix, I did not want to spend more than 30 seconds outside in the summer.

How's the tree situation though? 110F + lots of huge trees = a lot more tolerable. Trees cool shit down big time.

It's a desert so trees can't survive without irrigation. Since water is scarce as well, there aren't enough trees to cover the vast low density area.

You can always start small and grow the area over decades. After all, that's how cities like Copenhagen and Amsterdam became bike friendly: not in just a few years, but through decades of work.

What about https://en.wikipedia.org/wiki/Dracaena_cinnabari or https://en.wikipedia.org/wiki/Chilopsis?


As a fellow European: we're prone to underestimating how uninhabitable some of the places in America where people nonetheless live actually are. Those are port cities, and therefore stable and temperate. You cannot green Arizona.

Just FYI there's a lot of ways to re-green a desert without actually being wasteful with water. There's some really impressive case studies out there. Shaping the land with berms and swales, building walls of trees to prevent water from being leached away by the wind, etc.

This is a classic case of the Productivity Paradox, first observed when personal computers were introduced into workplaces in the '80s.

A famous economist once said, "You can see the computer age everywhere but in the productivity statistics."

There are many reasons for the lag in productivity gains, but they will certainly come.

https://en.wikipedia.org/wiki/Productivity_paradox


That's only certain if investments in tech infrastructure always led to productivity increases. But sometimes they just don't. Lots of firms spent a lot of money on blockchain five years ago, for instance, and that money is just gone now.


I find it odd the universal assumption that AI is going to be good for productivity

The loss of skills, complete loss of visibility and experience with the codebase, and the complete lack of software architecture design, seems like a massive killer in the long term

I have a feeling that we're going to see productivity with AI drop through the floor


I'd claim the opposite. Better models design better software, and they're quickly designing better software than what most software developers were writing.

Just yesterday I asked Opus 4.6 what I could do to make an old macOS AppKit project more testable, too lazy to even encumber the question with my own preferences like I usually do, and it pitched a refactor into Elm architecture. And then it did the refactor while I took a piss.

The idea that AI writes bad software or can't improve existing software in substantial ways is really outdated. Just consider how most human-written software is untested, despite everyone agreeing testing is a good idea, simply because test-friendly architecture takes a lot of thought and test maintenance slows you down. AI will do all of that; just mention something about 'testability' in AGENTS.md.
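For what it's worth, the AGENTS.md hint mentioned here can be as simple as a few bullets. A hypothetical sketch (the exact wording and section name are entirely up to you; AGENTS.md is just a free-form instructions file the agent reads):

```markdown
## Testability
- Prefer dependency injection over globals/singletons so components can be tested in isolation.
- Keep I/O (network, disk, UI) behind thin interfaces so core logic can be tested without touching the outside world.
- Every new feature ships with unit tests; every bug fix ships with a regression test.
```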


OK, so this comes back to the question I started this subthread with: where is this better software? Why isn't someone selling it to me? I've been told for a year that it's coming any day now (though invariably the next month I'm told last month's tools were in fact crap and useless compared to the new generation, so I just have to wait for this round to kick in), and at some point I do have to actually see it if you expect me to believe it's real.


How would you know if all software written in the last six months shipped X% faster and was Y% better?

Why would you think you have your finger on the pulse of general software trends like that when you use the same, what, dozen apps every week?

Just looking at my own productivity, as mere sideprojects this month, I've shipped my own terminal app (replaced iTerm2), btrfs+luks NAS system manager, overhauled my macOS gamepad mapper for the app store, and more. All fully tested and really polished, yet I didn't write any code by hand. I would have done none of that this month without AI.

You'd need some real empirics to pick up productivity stories like mine across the software world, not vibes.


It's on the people pushing AI as the panacea that has changed things to show workings. Not someone saying "I've not seen evidence of it". Otherwise it's "vibes" as you put it.


Right, I'm sympathetic to the idea that LLMs facilitate the creation of software that people previously weren't willing to pay for, but then kind of by definition that's not going to have a big topline economic impact.


Well, we don't know. That's capturing two scenarios: software whose impact is low, as reflected by the lack of investment, and legitimately useful improvements that just weren't valued (fixing slow code, reducing errors and increasing uptime, addressing security concerns) because the cost wasn't appreciated, was papered over by patches, or the company hasn't been bitten yet.


Why did you add that "weren't willing to pay for" condition?

Most of the software I replaced was software I was paying for (iStat Menus, Wispr Flow, Synology/Unraid). That I was paying for a project I could trivially take on with AI was one of the main incentives to do it.


Here's an example: https://eudaimonia-project.netlify.app/

I'm happy to sell it to you, though it is also free. I guided Claude to write this in three weeks, after never having written a line of JavaScript or set up a server before. I'm sure a better JavaScript programmer than I could do this in three weeks, but there's no way I could. I just had a cool idea for making advertising a force for good, and now I have a working version in beta.

I'd say it is better software, but better is doing a lot of heavy lifting there. Claude's execution is average and always will be; that's a function of being a prediction engine. But I genuinely think the idea is better than how advertising works today, and this product would not exist at all if I had to write it myself. And I'm someone who has written code before, enough that I was probably a somewhat early adopter to this whole thing. Multiply that by all the people whose ideas get to live now, and I'm sure some ideas will prove to be better even with average execution. Like an LLM, that's a function of statistics.


I'm glad you made something with it you wanted to make, and as a fan of Aristotle I'm always happy to see the word eudaimonia out there. Best of luck. That said, I don't understand what this does or why I would want the tokens it mentions.


Yeah, I gotta make a video walkthrough. It's basically a goal tracker combined with an ad filter: write what you want out of life and block ads, and it replaces them with ads that actually align with your long-term goals instead of distracting from them. The tokens let you add ads to the network, though you also get some for using the goal tracker.


Though this does suggest one possible answer to me: the new software is largely web applications, and the web is just a space I don't spend much time in anymore, other than a few retro sites like this.


No, you don't need a video walkthrough. You need that damn web page to explain – in plain language – what this is and what it's good for.


They can't, they never did the work to discover what it's good for because they skipped over implementation and concept validation.

This concept will never work outside of their own head. People continue to think producing something is the hard part, my word.


Would the above explanation be better? The website is there because Stripe needs a landing page, and the text is there because I'm trying to communicate the aspiration; the instantiation I can always explain in detail if someone wants to hear how it would work.


> Would the above explanation be better?

No idea. I certainly didn't get it. Goal tracker is one thing, ad blocker is another thing. Why would I want to combine them? And why would I want to see any ads at all? Perhaps I'm just not the target audience...


Maybe not, but you might want to see ads because 1) they fund a huge part of the free internet, so you would at least want other people to see them, and 2) if they were targeted not at what you're most likely to buy today but at what would most help you achieve the goals you're struggling with, they'd be a constant source of useful information and motivation as you go about your day. Aligning incentives between you and advertisers turns ads from friction into tailwind, and advertisers already want to align with what incentivises you if the alternative is having their ads blocked.

That second point is the part that seems obvious to me but I have a hard time communicating.


++1

I didn't get it either on first glance when scrolling down the whole page.


Wow, that is useful feedback, thanks! I'll update that this weekend.


And now you have no idea how any of the code works

AI writes bad software by virtue of it being written by the AI, not you. No actual team member understands what's going on with the code. You can't interrogate the AI for its decision making. It doesn't understand the architecture it's built. There's nobody you can ask about why anything is built the way it is; it just exists.

It's interesting watching people forget that the #1 most important thing is developers who understand a codebase thoroughly. Institutional knowledge is absolutely key to maintaining a codebase and making good decisions in the long term.

It's always been possible to trade long-term productivity for short-term gains like this. But now you simply have no idea what's going on in your code, which is an absolute nightmare for long-term productivity.


You can read as much or as little of the code as you want.

The status quo was that I have no better understanding of code I haven't touched in a year, or code built by other people. Now I have the option to query the code with AI to bootstrap my understanding to exactly the level necessary.

But you're wrong on every claim about LLM capabilities. You can ask the AI exactly why it decided on a given design. You can ask it what the best options were and why it chose that option. You can ask it for the trade-offs.

In fact, this should be part of your Plan feedback loop before you move to Implementation.


You can ask the AI why, but its answer doesn't come from any kind of genuine reasoning. It doesn't know why it did anything, because it doesn't exist as a sentient being. It just makes something up that sounds good

If you choose to take AI reasoning at face value, you're choosing to accept pretty strong technical debt


My own observation is that the initial boost to productivity results in massive crippling technical debt.


That's just because everyone is misusing AI. If you ask AI to do a job and you have no idea what it did, you lost ownership, which means you're asking to be replaced. You need to own the task. If you fully delegate your task to anyone else or to AI, you no longer know what's going on. AI does not necessarily produce more tech debt, but AI might do things you don't expect because it lacks context and specificity to perform accurately.


Having the productivity "drop through the floor" is a bit hyperbolic, no? Humans are still reviewing the PRs before code merge at least at my company (for the most part, for now).


I don't know that it's likely but it's certainly a plausible outcome. If tooling keeps getting built for this and the financial music stops it's going to take a while for everybody to get back up to speed

Remember this famously happened before, in the 1970s


There's an actual working product now, albeit one which is currently loss-leading. In the software world at least, there is definitely enough value for it to be used even if it's just a better search engine. I'm not sure why it would disappear if the financial music stops, as opposed to being commoditised.


Because there's cheaper ways to get an equally good search engine? But yes I imagine some amount of inference will continue even in an AI Winter 3.0 scenario.


Ironically, abstraction bloat eats away any infra gains. We trade more compute to allow people less in tune with the machine to get things done, usually at the cost of the implementation being eh... Suboptimal, shall we say.


I think there's a broad category error where people see that every gain has been an abstraction (true) but conclude from that that every abstraction will be a gain (dubious)


My unfounded hunch for the computing bit is that home computers became more and more commonplace as we approached the 21st century.

A Commodore 64 was a cool gadget, but “the family computer” became a device that commoditized productivity. The opportunity cost of applying a computer to try something new went to near zero.

It might have been harder for someone to improve the productivity of an old factory in Shreveport, Louisiana with a computer than it was for the upstarts at id to make Doom.


> There are many reasons for the lag in productivity gain but it certainly will come.

Predictions without a deadline are unfalsifiable.


Well, the thing with predictions is that they are in general difficult - esp. when it comes to those about the future :-D


There's a famous quote by a cyclist, "It never gets easier, you just go faster"


Not that I agree with the pardons, but former presidents are usually old. Letting your political opponent die in prison can have a massive backlash so most presidents would rather not let that happen.


I'm half joking but if this AI boom continues we're going to see Nvidia exit from consumer GPU business. But Jensen Huang will never do that to us... (I hope)


There are a couple of reasons why Jensen won't take off the gaming leather jacket just yet:

1. Gaming cards are their R&D pipeline for data center cards. Lots of innovation came from gaming cards.

2. It's a market defense to keep other players down and keep them from growing their way into data centers.

3. It's profitable (probably the main reason, but boring).

4. It's a hedge against data center volatility (10 key customers vs. millions).

5. It's an antitrust defense (which they used when they tried to buy ARM).


6. Techies who use NVidia GPUs in their PCs are more likely to play with AI and ultimately contribute to the space as either a developer or a user


7. Maybe just don’t put all your eggs in one basket, especially when that basket is an industry that has yet to materialize its promise.


They'll access GPUs through their company VPN

If they're unemployed, they'll just rent from the cloud

How many of you still manage your own home server?


> 1. Gaming cards are their R&D pipeline for data center cards. Lots of innovation came from gaming cards.

No way that is true any more. Five years ago, maybe.

https://www.reddit.com/r/pcmasterrace/comments/1izlt9w/nvidi...


On the contrary: this is the place where they can try out new tech, new cores, new drivers, new everything, with very little risk. Driver crash? The gamer will just restart their game. The AI workload will stall and cost a lot of money.

Basically, the gaming segment is the beta-test ground for the datacenter segment. And you have beta testers eager to pay high prices!

We see the same in CPUs, by the way, where the datacenter lineups of both Intel and AMD lag behind the consumer lineup. That gives time to iron out BIOS, microcode, and optimizations.


Why would anyone sell a handful of GPUs to nobodies like us when they could sell a million GPUs for thousands apiece to a handful of big companies? We're speedrunning the absolute worst corpo cyberpunk timeline.


Because when you lose even one of those big companies in your handful, it tanks your business. Customer diversity is a good thing.

And they're not selling a handful of GPUs to nobodies like us; they're selling millions of GPUs to millions of nobodies.


Gaming is now less than 10% of nvidia's revenue. We're really not adding any meaningful diversity to their bottom line anymore.


> Customer diversity is a good thing.

Tell that to Micron.


The way things are going no one will be able to afford a PC.

Instead we will be streaming games from our locked down tablets and paying a monthly subscription for the pleasure.


You will own nothing and be happy.


Might almost be a good thing, if it means abandoning overhyped/underperforming high-end game rendering tech, and taking things in a different direction.

The push for 4K with raytracing hasn't been a good thing, as it's pushed hardware costs way up and led to the attempts to fake it with AI upscaling and 'fake frames'. And even before that, the increased reliance on temporal antialiasing was becoming problematic.

The last decade or so of hardware/tech advances haven't really improved the games.


DLSS Transformer models are pretty good. Frame generation can be useful but has niche applications due to the latency increase and artifacts. Global illumination can be amazing but is also pretty niche, as it's very expensive and comes with artifacts.

The biggest flop is UE5 and its Lumen/Nanite. Really, everything would be fine if not for that crap.

And yeah, our hardware is not capable of proper raytracing at the moment.


> Framegen can be useful but has niche application

Somebody should tell that to the AAA game developers that think hitting 60fps with framegen should be their main framerate target.


The latest DLSS and FSR are good actually. Maybe XeSS too.


The push for ray tracing comes from the fact that they've reached the practical limits of scaling more conventional rendering. RT performance is where we are seeing the most gen-on-gen performance improvement, across GPU vendors.

Poor RT performance is more a developer skill issue than a problem with the tech. We've had games like Doom The Dark Ages that flat out require RT, but the RT lighting pass only accounts for ~13% of frame times while pushing much better results than any raster GI solution would do with the same budget.


The literal multi-million dollar question that executives have never bothered asking: When is it enough?

Do I, as a player, appreciate the extra visual detail in new games? Sure, most of the time.

But, if you asked me what I enjoy playing more 80% of the time? I'd pull out a list of 10+ year old titles that I keep coming back to, and more that I would rather play than what's on the market today if they only had an active playerbase (for multiplayer titles).

Honestly, I know I'm not alone in saying this: I'd rather we had more games focused on good mechanics and story, instead of visually impressive works that pile on MTX to recoup insane production costs. Maybe this is just the catalyst we need to get studios to redirect budgets to making games fun instead of spending a bunch of budget on visual quality.


Well, in the case of Doom: The Dark Ages, it's not just about fidelity but about scale and production. To make TDA's levels with the baked GI used in the previous game would have taken their artists considerably more time and resulted in a 2-3x growth in install size, all while providing lighting that is less dynamic. The only benefit would have been the ability to support a handful of GPUs slightly older than the listed minimum spec.

Ray tracing has real implications not just for the production pipeline, but the kind of environments designers can make for their games. You really only notice the benefits in games that are built from the ground up for it though. So far, most games with ray tracing have just tacked it on top of a game built for raster lighting, which means they are still built around those limitations.


I'm not even talking about RT, specifically, but overall production quality. Increased texture detail, higher-poly models, more shader effects, general environmental detail, the list goes on.

These massive production budgets for huge, visually detailed games, are causing publishers to take fewer creative risks, and when products inevitably fail in the market the studios get shuttered. I'd much rather go back to smaller teams, and more reasonable production values from 10+ years ago than keep getting the drivel we have, and that's without even factoring in how expensive current hardware is.


I can definitely agree with that. AAA game production has become bloated with out of control budgets and protracted development cycles, a lot of that due to needing to fill massive overbuilt game worlds with an endless supply of unique high quality assets.

Ray tracing is a hardware feature that can help cut down on a chunk of that bloat, but only when developers can rely on it as a baseline.


I think Nvidia realises that selling GPUs to individuals is useful as it allows them to develop locally with CUDA.


This is a huge reason.


They are already making moves that might suggest that future. They are going to stop packaging VRAM with their GPUs shipped to third-party graphics card makers, who will have to source their own, probably at higher cost.


They will constrain supply before exiting. Exiting outright just isn't smart; they can stop developing and let the business trickle down, which also works as insurance in case AI flops.


In the words of Douglas Adams, there are those who say that this has already happened.


Honestly, I'd prefer it. It might get AMD and Intel more off their ass for GPU development. I already stopped buying Nvidia gpus ages ago before they saw value in the Linux/Unix market, and I'm tired of them sucking up all the air in the room.


Intel GPUs are probably not going to last much longer, considering they did a deal with nvidia for integrated GPUs.


Jensen is too paranoid to do it. But whoever comes after him will do it ASAP.


They did get burned when crypto switched to dedicated hardware and Nvidia was left with huge surpluses of 10xx-series hardware. But what they're selling to AI companies now is a lot more different from their consumer gear.


Keep the retail investors happy so they keep pumping your stock.


Wonder if Google will ever start selling TPUs.




I was thinking large ones, to other AI companies.


The brand-aware "consumers" are really just DIY PC builders, which is a relatively small number. The enterprise DRAM business is doing so great that Micron just doesn't see the consumer market as worth chasing.

This is bad for consumers though since DRAM prices are skyrocketing and now we have one less company making consumer DRAM.


The people who occupy B2B RAM-buying jobs are not aliens from another planet. Brand awareness in consumer markets, especially ones so closely tied to people's jobs (nerds gonna nerd), is going to have a knock-on effect. It's not like a clothing brand or something.


Sometimes reputation and suchlike in the consumer market can directly boost your B2B business. Consumers and professionals alike will look at backblaze drive reliability figures.

Other times professionals will sneer at a consumer product, or a consumer product can diminish your brand. Nobody's wiring a data centre with Monster Cables, and nobody's buying Cisco because they were impressed by Linksys.


Not that it invalidates your point, but Cisco sold Linksys in 2013.


Yes, but the consumer brand has to have a good reputation for that to pan out positively in B2B. Crucial has a decent reputation, but the problem is that there hasn't been any innovation in the consumer DRAM market for two decades that wasn't driven by or copied from the enterprise sector. The difference between a Crucial DIMM and a Micron unbuffered DIMM is which brand's sticker they put on it, and maybe a heatsink and tighter binning/QA. That's not unique to Micron/Crucial. Aside from "moar RGB", what innovation has happened on the consumer side of this space that isn't just a mirror of the enterprise side (e.g. DDR4 to DDR5)? EXPO/XMP? That's AMD/Intel dictating things to DRAM companies. So what impression really are people meant to carry over from Crucial to Micron in this instance? How is Micron meant to leverage the Crucial brand in this space to stand out above others?

Similar story on the SSD side of things regarding reputation/innovation, especially when you consider that Crucial SSDs are no more "Micron" in a hardware sense than a Corsair one built using Micron flash (support is a different matter), as the controllers were contracted out to third parties (Phison) and the flash used was entry-level/previous-gen surplus compared to what's put in enterprise. The demands and use cases for consumers, and even prosumers/enthusiasts, are in general substantially lower than on the enterprise side of things with SSDs, and that gulf is only growing wider. So again, what is meant to carry over? How can Micron leverage Crucial to stand out when the consumer market just doesn't have the demands to support them making a strong investment to stand out?

Frankly, taking what you say further, I think if this is what they want to do (having consumer brand recognition that can carry over in some meaningful way to B2B), then sundowning Crucial now (given the current supply issues) and eventually re-entering the market as Micron when things return to some sense of "normal", so that the consumer and enterprise brands are the same brand, makes much more sense.


Well unless there's some ghost-like life form in a gas state, we sort of need the molecules to stay together to form life.


Obligatory KRAZAM video on microservices https://www.youtube.com/watch?v=y8OnoxKotPQ

