The bottleneck in training and inference isn't matmul, and once a chip isn't a kindergarten toy you don't go from FPGA to tape-out by clicking a button. For local memory he's going to have to learn to either stack DRAM (not "3000 lines of Verilog", and it requires a supply chain which OpenAI just destroyed) or diffuse block RAM / SRAM on-die like Groq, which is astronomically expensive bit-for-bit and torpedoes yields, compounding the issue. Then comes interconnect.
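To make the memory-vs-matmul point concrete, here's a rough back-of-envelope roofline sketch. The peak-FLOPS and bandwidth numbers are illustrative assumptions about an H100-class part (not measurements), and it models the simplest case, single-stream decode of a 70B-parameter model in 16-bit weights:

```python
# Rough roofline sketch: single-stream LLM decode is memory-bound, not matmul-bound.
# All hardware numbers are assumptions (roughly H100-class), purely for illustration.

PEAK_FLOPS = 1000e12   # assumed dense BF16 throughput, FLOP/s
HBM_BW     = 3.35e12   # assumed HBM bandwidth, bytes/s

params      = 70e9           # 70B-parameter model
flops_token = 2 * params     # ~2 FLOPs per parameter per generated token
bytes_token = 2 * params     # every 16-bit weight streamed from HBM once per token

compute_time = flops_token / PEAK_FLOPS   # ~0.14 ms per token if matmul were the limit
memory_time  = bytes_token / HBM_BW       # ~42 ms per token just to move the weights

print(f"compute-limited time per token: {compute_time * 1e3:.2f} ms")
print(f"memory-limited time per token:  {memory_time * 1e3:.2f} ms")
print(f"memory / compute ratio:         {memory_time / compute_time:.0f}x")
```

Under those assumed numbers the chip spends on the order of hundreds of times longer waiting on memory than doing matmuls, which is why the memory system (and then interconnect) is where the real engineering lives.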
There's this curious experience of people bringing up geohot / tinygrad, where you can tell they've been drawn into a personality cult.
I don't mean that pejoratively, and I apologize for the bluntness. It's just that I've been dealing with his nonsense since the iPhone OS 1.0 jailbreaking days, and I hate seeing people taken advantage of.
(nvidia x macs x thunderbolt has been a thing for years and years and years, well before geohot) (the tweet is a non sequitur beyond the bog-standard geohot tells: the odd obsession with LoC, and we're 2 years away from Changing The Game, just like we were 2 years ago)
My deepest apologies, I can't parse this, and I earnestly tried: 5 minutes of my own thinking, then 3 LLMs, then a 10-minute timer of my own thinking over the whole thing.
My guess is you're trying to communicate "tinygrad doesn't need GPU drivers", which maybe gets transmuted into "tinygrad replaces CUDA", and from there you think "CUDA means other GPUs can't be used for LLMs, thus Nvidia has a stranglehold".
I know George has pushed this idea for years now, but you need look no further than AMD and Google making massive deals to understand how it actually works on the ground.
I hope he doesn't victimize you further with his rants. It's cruel of him to use people to assuage his own ego and make them look silly in public.
Look dude, this guy failed his Twitter internship and is not about to take on Jensen Huang. He isn't some young guy anymore, and this isn't the 2000s, where he's about to have another iPhone / Sony moment.
Also: https://x.com/__tinygrad__/status/1983469817895198783