> Originally released in 1993, the Onyx from SiliconGraphics was an absolute powerhouse. The machines were powered by between one and four MIPS processors (originally the R4400’s) — ranging from 100 MHz to 250 MHz.
250 MHz in 1993 is insanity, considering that was the 33 MHz 486 era.
> The RAM on these machines were not industry standard [...] and could handle up to 8 GB of RAM. 16 GB in the rackmount version (yeah, there was a massive rackmount version).
8 gig of RAM at a time when home users didn't even have 1 GB hard drives. 16 GB of RAM at a time when a home user's desktop could read memory at < 100 MB/s.
Having those specs then would be like running a 37 GHz CPU with 16 TB of RAM now.
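(Presumably the arithmetic is something like: 250 MHz was roughly 7.5x the mainstream 33 MHz 486, and 7.5x a modern ~5 GHz desktop CPU gives ~37 GHz; likewise 16 GB was about 1000x a well-equipped 16 MB desktop of the day, and 1000 x 16 GB is 16 TB.)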
I once (in 2001) almost bought an Onyx at an auction selling off the ill-gotten stock (rather, what was left of it) of a con man who had swindled a lot of computer companies out of all sorts of gear. Apparently, this guy had dressed up as an Army general during the first dot-com boom, gone to all sorts of companies, and gotten them to deliver all sorts of 'samples' to a fake dropoff point, which he then sold on.
It was a massive auction, ranging from pallets full of backup tapes to stuff like this fridge-sized Onyx (looking at the Onyx Wikipedia page, maybe it was actually two Onyxes in a rack, like the image for the Onyx2? Or maybe it actually was an Onyx2? I didn't know anything about these things, just that they looked cool, and they were legendary among us first-wave Linux adopters, whom most 'real Unix' folk at the time still looked down on).
Anyway, there was so much stuff to be sold that the normal auction house didn't have enough storage, so they had to move to an actual (disused) pig sty in the same village. Complete with the smell from the still-active animal stables on the rest of the farm (which was still a working farm).
Most of the stuff sold quite quickly, but when the Onyx went up, nobody wanted to bid. Everybody was sort of looking at each other with shifty eyes, waiting for someone else to make a move. As mentioned, I didn't know anything about IRIX or any non-free Unix, basically, nor did I have money or a place to put such a machine, but I just shouted out '250 euros!' and the auctioneer looked relieved that at least someone was going to take this thing off his hands. Then some other people started bidding, I got overexcited and bid along up to a thousand euros or so, and then I came to my senses and gave up. IIRC it eventually went for a few thousand.
That auction hall looked like a 'revenge of the nerds' casting call. Good times.
Their price reflected their capabilities. I think they were something like $20k on the low end. A nice new car in 1993 was maybe $10k. Lots of homes even in good cities sold for $20k.
edit: Wow, these were actually $100-250k! Back in 1993 that was an immense amount of money. I bet you could have bought a nice San Francisco row house in the Mission or another hot area for $100k back then.
> I wonder where all those machines ended up. Would love to get one and play with them.
A lot of them (probably the vast majority) simply got dumped, alas. I personally decided to dump a Challenge (basically an Onyx without the RealityEngine graphics: https://en.wikipedia.org/wiki/SGI_Onyx). As in: I could have had it if I wanted it. I used it for years, but it had to go when it was obsoleted. I could have put the rack in my basement, but I'm not young anymore and it wouldn't be that many years before it would be a headache to get it out. So I said: "Sorry. Dump it."
Smaller is more manageable of course - I have a number of O2, O2+, Indy computers and an Octane. And a Fuel.
At work we had a bunch of Origin servers as well. All dumped.
Back to the Onyx: when it came out in 1993 I had some access to one; I worked in the same computer room for weeks at a time, on and off. Played with the various graphics demos, mostly. But I watched what the guys were doing with it. It was very impressive, a huge step up from anything else, and the sheer performance and memory bandwidth of the thing was one big reason I argued for getting an SGI Challenge for a later technical project. And I was right on that, as it turned out. Nothing else could do the job at the time (in this price segment, at least).
As for the CPU speed: like the R4000, the R4400's internal pipeline ran at a higher rate than the external clock (an internal multiplier): https://www.cpushack.com/MIPSCPU.html
I got a Sun SPARCstation for free off my country's version of Craigslist. I also got some other things, like old Macs, for free this way. This was really out of necessity, because I never had any money.
That was 20 years ago, when these machines were "only" 10 years old. Amazing how quickly things developed during the 90s; the SPARCstation 5 started at $3,995 (over $7k in today's money, and the sales price was probably higher still here in Europe), and ten years later you could get it for free!
I ended up giving it all away to a friend; when I was young I had a lot of time and interest to play around with this sort of stuff. These days, not so much.
There is a story from Weta Digital that by the time LOTR was in production, they were using their first SGI box (used for Heavenly Creatures) as a doorstop.
After high school I got a job as video game artist for Virgin Games (in 92). While there another artist brought in an SGI that he was borrowing from a friend and demo'd it. I fell in love with it.
So a year later, after being rejected from art school, I used my bit of college savings to buy the SGI and Alias.
I used it for a few small video projects and animations, and then started contracting for SGI doing design work for their website. They even gave me an old Indigo to work on.
I kept both machines for about 10 years and then finally gave them away to a collector.
I picked up an Indigo^2 with monitor in 1997 for $250. Had to buy IRIX CDs for $200 on eBay, but it was a nice machine. Gave it to a guy several years later, along with several other classic Unix workstations. Kind of wish I had them now.
I think the house prices were higher. I sold a house in Mountain View (Monta Loma neighborhood) in Jan 1995 for $255k. And that was a bit of a distressed sale as I was the executor for an estate.
What is crazy about those prices is that even though they are high, they weren't higher than in any other wealthy area of a metropolitan region at that time. Neighborhoods in the greater Cleveland area were getting those prices back then too (e.g. Shaker Heights, Cleveland Heights, Hudson, plenty more). The only thing is, those same neighborhoods in Cleveland only went up maybe another $200k or so since, while median prices in SF went up by, what, $1 million or more? Things really flew off the rails for the housing market in certain places just in the last few decades.
Of course they were; the idea that everyone used to be able to buy a house on a minimum wage salary is an internet fantasy. My parents bought a fixer-upper in a crappy neighborhood for $90,000 in 1996.
That's $20K in today's money, so depends on how you define "nice". A nice late model used Honda Civic.
$20K then is $40K now, so that's a bit of an underestimate for home prices, I think. [As it happens, my home was built about then, and the value has stayed in line with inflation figures]
P.S. that Honda Civic had one airbag, crank windows, didn't necessarily have a passenger side mirror, a tachometer, or a wiper on the rear hatch. And A/C was a dealer installed option. Don't ask about side impact protection.
>I bet you could have bought a nice San Francisco row house in the Mission or another hot area for $100k back then.
To be honest, these days 100K wouldn't even cover the down payment for a house like that in SF. Crazy how prices skyrocketed in 30 years (I'm guessing 2008 didn't help much).
The real winners are the boomer generation. When they graduated college in the mid-'60s, wages were enormous relative to the cost of property. Property in all the now hotly competitive areas was dirt cheap too. If you bought in SF back then you've probably made 100x or more on the property value.
There's an interesting theory I came across that a lot of the cheap housing of that era was due to the interstate highway system, built out starting in the 1950s. A lot of land that before was too far from cities for people to commute suddenly came within reach of people with decent cars. So that meant there was a whole lot of usable real estate that hadn't already been built on, and that meant cheap land for about a generation.
If that's true, maybe we're approaching another phase of that, where people move away from cities because they can work remotely. And things like Starlink might make some places tolerable to work from that wouldn't have been before. On the other hand, even rural property is expensive in a lot of places now. Nowhere is immune to real estate speculation.
> On the other hand, even rural property is expensive in a lot of places now. Nowhere is immune to real estate speculation.
That's it; location is becoming less of a factor in housing prices. That said, besides internet you still need good access to things like plumbing and electricity, the social aspects, and of course the climate has to be livable; I can't imagine what that's like since I live in a country with a very moderate climate (western Europe), but conditions are a lot more extreme in the US, and the actual amount of livable space is limited.
Access to services is probably the main thing: schools, hospitals, grocery stores. Plumbing and electricity is a pretty low bar since wells and septic fields are a thing, and even electricity can be generated on site these days if it's not available. But most people wouldn't want to live three hours from the nearest grocery store. That rules out some areas that would otherwise be pretty reasonable.
I drove through central British Columbia once on a road trip. There's probably enough reasonably flat land to fit the population of California with room to spare, but it'd be a pretty hard sell to get many people to move there now. Prince George has about 74,000 people but it's the only moderately large city/town for hundreds of miles in any direction.
Lots of California is this way also. People focus on the SF and LA areas, but there is so much empty land in the rest of the state. Much of the hurdle to living there, as previously noted, is access to services. Mainly medical and grocery for me. Having to drive 2-3+ hours to get to a decent medical facility isn't in the cards for us.
The biggest hurdle to living in those places is actually access to jobs. I guess it works if you can WFH, but if you don't have that option, you want a vibrant job market. With jobs comes wealth and then services like retail and medical care.
Prisons work well for these cases: Susanville is fairly rural (1.5 hours away from Reno?); it is the kind of place where if you drive north you get the dreaded "No services for 130 miles" sign. But they have a prison, so they have jobs and the services that come with the money those jobs bring (although you are still stuck heading to Reno if you need a real hospital or specialty care).
> Property in all the now hotly competitive areas was dirt cheap too.
Yeah, but you're ignoring all the losing bets. How have property prices in Detroit and West Virginia held up? Industry there was booming at the time, who was to know that it would change?
Meanwhile, California and Arizona were practically empty...
Not sure why you are downvoted. Prices in certain places like the Rust Belt have been frozen in time for the most part, and even recent surges have only sent them up so much. I just took a peek at a wealthy neighborhood in Cleveland for an example of a not-so-hot market. There, a 5-bedroom mansion on an acre lot goes for $520k. 22 years ago that home sold for $450k (1). I bet when you factor in inflation, that gain of $70k over 22 years was actually a loss. This is just one example, but it goes to show that even recent surges have only done so much to spike prices after what was honestly decades of stagnation in real estate across much of the country.
In the central valley of California, a house cost $130k in 1993 and today sells for $400k. In those 30 years you probably painted it a few times, changed the roof just recently, changed the kitchen, and so forth. Plus 30 years of property tax. So you're probably in for $200k on that $130k house. And that's not even counting interest on mortgages that most buyers had in 1993.
Doubling your money in 30 years isn't great at all. And this is within about 50 miles of tech jobs, although the drive will take 90 minutes in the morning and 60 minutes at night.
OK, yes, but say this was a rental home and the renters basically paid the mortgage, so you got no personal utility out of it. 30 years later you made $200k, even if you put basically nothing down. As a return, that's a lot percentage-wise. Overall though, it's not life-changing for most people. It only gets to crazy money in the most desirable of areas. And that desirability circle is basically defined by driving range from the highest-paying jobs.
>In the central valley of California, a house cost $130k in 1993 and today sells for $400k
Holy shiz, is California real estate really that cheap?
400K is a very basic house on the outskirts of a city in Austria (Europe), and tech jobs here pay 1/4 of what you can make in California. I feel we're being scammed over here on housing and wages.
If you're willing to basically live in the boonies, yes. But you're not finding housing in or around the cities for less than a million in California. The median house in my area (after a skim on Zillow) is 800k, and I'm a good hour north of downtown Los Angeles. Move another 30 minutes north (pretty much into literal desert) and housing is more around those numbers of 400-500k.
But who knows? With WFH being more accepted, I can see some less city-oriented folk moving out to those areas and gentrifying them. It may be desert, but it still has everything you'd want out of a neighborhood, outside of entertainment.
If houses are so expensive vs labor, you could always buy some land and have one built?
The central valley city I was talking about is Tracy, CA. Without traffic, it takes 1 hour of driving to get to the tech companies. Monday to Friday at rush hour it would take 2 hours; 90 minutes off-peak at 9am instead of 7am. Virtually zero tech workers are willing to make that drive.
Also, quality of life in Tracy is pretty poor. Property crime is pretty big. Virtually nothing for children to do. Young adults go to the same 5-6 average restaurants and that's it. I'd much prefer to live in a town near a nice city in Austria.
Building houses in Germany (and Austria should be the same) is pretty expensive. Houses here are built to last 100+ years out of mostly stone-based materials, with expensive wiring, insulation and heating. Wooden houses with electrical HVAC taking care of cooling and heating are basically nonexistent here. Most houses don't even have an HVAC system, just a heating system (i.e. no ventilation, no cooling).
This sturdy way of building immensely drives up costs for materials and labor.
The high real estate prices in Germany have more to do with the price of land, the bureaucracy around building, and NIMBYism than with material and labor costs.
I wasn't talking about real estate prices. I was literally talking about construction costs. Having just been involved in building a small/medium 3-story + basement house for over 1 million euros (we already owned the land), I agree about the bureaucracy driving costs (mostly through mandated actions adding high labor costs), but your other points were irrelevant. Construction prices are crazy when compared to the wood construction often found in the US.
Labor is expensive though. It's just that your income relative to labor cost is low due to high taxes, health insurance, pension fund and so on. Including your employer's payments, you go home with maybe 40% of what the company pays for your work.
Building even a simple new home will set you back at least 300k.
>I guess quit your job for a year and build your own home!
Who can build a house in a year by himself? Even with a group of skilled professionals it can take longer. Plus you'll be on zero income for a whole year.
>Maybe get into the home building business altogether.
Easier said than done. In Europe this is a credentialed profession, so for insurance purposes not everyone can just roll up their sleeves and start building houses like it's 1886.
California is bigger than most European countries... As with them, some remote areas are much cheaper than others. The Central Valley is the most remote and depressed part of the state, used overwhelmingly for farming.
Seems like that's just standard appreciation. You replace the bits that deteriorate and would otherwise lose value, so by replacing them you just bring the house up to par with what it would ideally be worth since buying it. Although my father did buy his house in '96 for $80k in Canada, hasn't replaced fuck all, and it's now hypothetically worth $250k just because that's what houses are going for in the neighbourhood.
How about a 50x return? A home that cost $100,000 CAD is now well past $5,000,000 where I live. If someone hodl'd their property in Vancouver even longer than that, they would be seeing a 100x return.
Hey, I live there too, I feel ya. Though you have to go pretty far back for that difference. I don't know that you'd have found what is currently a $5m property in Van that was $100k sometime in the last 30 years.
It'd be real swell if the residents of East Van stopped trying to block new ownable properties from being built so they can keep banking off their control over the housing and rental market.
Unfortunately, that's often the case. If you have access to millions in capital for a home early enough in life that banks will give you the mortgage, you're probably coming from intense wealth already, you inherited it, or you were dealt a lucky hand early on by making a lot of cash and not losing the job for the first decade of your adult life.
On the one hand I want to stop complaining about the price of high-end workstations like Apple's stuff; on the other, my employer wouldn't spend $6000 on hardware, let alone >$100K. I'm not that special.
Given the cost of those, did the companies actually own them, or was it more like a lease? And given the cost, would they work on them in shifts to get the most out of them? I can't imagine they would be treated the same as we treat laptops and other work items these days. For one, they'd probably be in a high security section. And I can imagine it'd only be used for the final work that actually needed that power, where most "grunt work" would be done on cheaper machines.
If regulations for building houses were comparable to those for building computers...
Just think: you'd have to ask the most opinionated people in your town every time you wanted to get a new computer, and potentially ask people in your neighborhood when you wanted to make minor changes to your existing computer...
Yeah that's the problem, you can't just make more land, so there's no incentive to do anything other than make any area you own into an expensive area.
Given that land and the like is a limiting factor, the only way to get it cheaper is to develop the middle of the US (assuming it's about the US anyway), and / or to go upwards.
But that too has limits; Singapore, for example, has built upwards for decades and is still one of the most expensive places to live.
And another factor is that landlords and investors will resist oversupply. This is where China went wrong, billions have been invested in brand new cities that are effectively ghost towns; preparing for an influx of people that never came.
> Having those specs then would be like running a 37 GHz CPU with 16 TB of RAM now.
More, much more. We've long hit a point of diminishing returns for a lot of standard consumer workflows. More CPU and RAM helps mainly for "specialist" workflows (including software engineering) and can be really nice, but computers have overall been "fast" for a while now.
Back then you could notice an increase in CPU frequency or cache in almost everything you did, even if it was just writing a letter. And more RAM allowed you to do things that were virtually[1] impossible before. Especially because there were no SSDs: You paid dearly for anything that could not be kept ready in your puny RAM.
[1] Pun sort of intended: Virtual memory might technically allow you to do those things anyway, but often the performance cost was so high that it was no fun, if not pointless. But we're also talking of the DOS/Win 3.x/DOS extender days, where virtual memory was technically possible, but definitely not ubiquitous.
A 33 MHz 486 was not even "standard consumer gear" in 1993; it was a costly, business-oriented machine that could cost as much as a high-end Mac Pro in 2022 (after controlling for inflation). Standard consumer prices would have gotten you a home-computer-class machine like the Atari ST, pushing a handful of MHz. Go even cheaper and you would've had to make do with an 8-bit home computer like the C128, little more than a glorified desk calculator. There's not even a loose equivalent to that class of machine today, aside from deep embedded stuff that a consumer will never have reason to touch.
Not sure about 1993.. I had one at work (486/33) in 1991. 16MB RAM. It wasn't cheap, but it was cheaper than the personal AT 286 I bought in 1987.. relatively speaking the most expensive computer I've ever bought. So in 1993 the price point.. well, I don't know. Not out of range for consumers though (I found an article in the Los Angeles Times, from 1994, which mentioned a "fully equipped 33 MHz 486" at $1388 in 1993 - their point was that it was only $1000 in 1994). That was just a fraction of what I paid in 1987 for that AT.
I tested Linux on that 486/33 in early 1992, and when my office X terminal broke down I replaced it with that Linux box the same year (I can't remember what the experimental X11 was called - this was before XFree86. Something YFX86 I think.. but no search finds it). As soon as it was possible I replaced the CPU with a 486 DX2-66. Probably early 1993. Still only got 16MB though. Enough for me. Now this 16GB laptop runs out of memory all the time.. oh well.
Yes, and even with that "upper class" PC for business, upgrading CPU and RAM in small steps always gave you a massive gain and new possibilities, and you could keep doing that for a long long time. I'm not sure younger people who essentially grew up with supercomputers in their pocket can really feel what that meant. (No shade meant at all. I, in turn, grew up with many other things taken for granted the same way, before.)
Yep. I remember back in the day, getting a few extra KB via config.sys and other configuration files meant you could start a game that would otherwise refuse to start. There were a lot of tricks for that. People nowadays won't believe that. After all, it's rare to find software that requires more than 16GB to start.
Upgrading one's PC was also a big thing even into the mid-2000s. A lot of magazines devoted entire columns to upgrade and installation guides. Hardware manufacturers lured users by bundling extra copies of AAA games with their hardware.
I must say the 90s were really the golden era for players and developers. Lots of opportunities, and developers usually had to learn asm or C or Pascal to do real work, while it was still possible for a small team to push out AAA titles. Both id and Epic were examples.
> Yep. I remember back in the day, getting a few extra KB via config.sys and other configuration files meant you could start a game that would otherwise refuse to start.
I remember discovering Quarterdeck QEMM and found it worked so much better than EMM386 for making programs load into Upper, High, or Conventional memory segments.
Oh yeah, and the whole clusterfuck of "extended memory" versus "expanded memory".
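For anyone who never fought that war, a typical late-DOS memory setup looked something like this (a sketch from memory; the paths and the exact driver list are illustrative, not canonical):

    REM CONFIG.SYS: load the XMS manager, then EMM386 to map upper memory
    DEVICE=C:\DOS\HIMEM.SYS
    DEVICE=C:\DOS\EMM386.EXE NOEMS
    DOS=HIGH,UMB
    REM DEVICEHIGH loads drivers into upper memory blocks
    DEVICEHIGH=C:\DOS\ANSI.SYS

    REM AUTOEXEC.BAT: LOADHIGH (LH) pushes TSRs out of conventional memory
    LH C:\DOS\SMARTDRV.EXE
    LH C:\MOUSE\MOUSE.COM

The catch was that NOEMS gave you upper memory blocks but no expanded (EMS) memory, while games that demanded EMS needed the RAM switch instead, so you ended up juggling boot configurations per game. Automating that juggling well was a big part of why QEMM was worth paying for.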
Yeah man, those were the good/bad days. I had a friend who was a fan of Japanese strategy games, and he constantly bombarded us with tricks he used to modify the game or the RAM to get better attributes for characters. He is also the one who lent me a CD containing a pirated copy of Duke3D, which bootstrapped my modern gameplay life.
I think OP meant it more as an elite rig rather than actually comparing it to a theoretical 37 GHz. At that time most people wouldn't even have a computer at home, never mind one that could actually _run_ stuff. My first PC (an 80286) didn't even have an HDD! I had to load MS-DOS from a floppy disk.
> 250 MHz in 1993 is insanity, considering that was the 33 MHz 486 era.
More precisely, it was the start of the Pentium era. The first Pentiums, at 60/66 MHz, were released in March 1993. But interestingly enough, the R4400 was already 64-bit.
The 250 MHz R4400 wasn't released in 1993, though. It (the NEC one) sampled in mid-'95 on a 0.35 micron process, and the RISCstation 2250 that it shipped in was announced in December '95. The Pentium Pro was available at 180 MHz at about the same time and soon reached 200 MHz.
The EV5 reached around 500 MHz in '96 though! The high-end RISCs really did blow the doors off everything back then. The Pentium Pro was the beginning of the end for them though.
I remember some senior managers from IBM at the time who had workstations with 128 MB of RAM running Windows NT. They did not brag about the machines; they simply stated, as fact, that this was some engineering masterpiece.
That much RAM itself posed some problems. Windows NT was still new, hardly any software ran on it, and that "much" RAM was also new, so what to do with it? A virtual hard disk? For comparison, Windows 95 could only address 64 MB without slowing down.
I miss my W6-LI "Lightning" dual PPro board. Spec built that system with all SCSI internals for $5200 in about '95 (with just a single 180MHz PPro, upgraded later to dual)
OTOH, those machines were already quad processors (at the high end). And I don't know if you can feasibly have 16TB of RAM in a single compute node. In that case you'd need a rack-scale multi-node system like the gear the Oxide folks are pushing for "hyperscaler" workloads. (My guess is that such a thing could be made useful to a single user - along the lines of Alan Kay's quote referenced in a sibling comment - but it would likely need to be something that involves chewing through humongous amounts of data, to really make use of that scale. Not sure if the art-creation use case has any real need for that nowadays. Some sort of compute-heavy data analytics for decision support and the like would be a lot closer to the mark.)
You can have NVMe disk arrays that are faster than the main memory speed of even quite recent desktop PCs these days for reasonable money. My laptop can do 7 GB/s from one drive. I've seen demos of 30GB/s by simply using a 4-port PCIe x16 adapter card. There isn't even a need to venture into exotic technologies like NVRAM DIMMs..
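As a rough sanity check on those numbers (assuming current PCIe 4.0 drives): each drive tops out around 7-8 GB/s, so 4 x ~7.5 GB/s is roughly 30 GB/s, which sits right at the ~31.5 GB/s usable bandwidth of a PCIe 4.0 x16 slot. At that point the slot, not the flash, is the bottleneck.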
But clock speed is a very poor metric for comparing older CPUs. These CPUs were much smaller, the number of instructions they could cram into each clock cycle and the complexity of those instructions were no doubt far lower, and the variety of instructions and ability to specialize far lower too (I'm not even considering the whole RISC and MIPS thing, just the fact that they were working with much smaller dies and far fewer, larger gates)... instructions per second might be a better comparison, but then you still lack that qualitative difference in the variety of instructions, as I said.
I just wish they would find a better port to remaster; the current one is a low-resolution mess and looks far worse on PS4 than the original did on PS1, because it was a bad PC port back in the day and those were the only files they could find.
Unfortunately they lost all the files of the original game, so they can only:
- Use the PC port
- Emulate the PS1 game
- Rebuild it (which would take a lot of time and resources)
Dealing with Japanese companies, printing code and faxing it is a thing that happens even in 2022.
I'm never surprised when I hear about old Japanese game files being lost. They probably kept the data stored entirely on some old laptop on a shelf after release and it was considered "done".
In the sympathetic sense: this was decades ago, wear and tear happens, and corporate orgs are messy. When Squaresoft merged with Enix, there was inevitably going to be some stuff, important stuff even, lost in the move. Even if they were perfectly careful to keep things archived.
> The concept of games preservation is a relatively recent one, and as Matsuda admits: "It's very hard to find them sometimes, because back in the day you just made them and put them out there and you were done – you didn't think of how you were going to sell them down the road. Sometimes customers ask, 'Why haven't you released that [game] yet?' And the truth of the matter is it's because we don't know where it has gone."
The 80s was just a small wing of developers trying to put out a toy. They weren't thinking of the original Final Fantasy as the Sistine Chapel of gaming. They thought as much about preserving it as I did about preserving that game jam game I made almost a decade ago. If it hadn't been as easy for me to preserve it as throwing it on Dropbox, I probably woulda lost my source code too.
>Could the existing copies of the game out there be reverse engineered?
It's physically possible. But the effort to extract assets, scene data, and source code and re-configure that into a clean port is much more gargantuan than using the source from "a bad port" and working from there.
Even if they did manage to do that, there's another thing about video game code that's only now starting to be less true: it's an utter mess of spaghetti code. Games weren't made to be maintained for years by a revolving door of developers like a website, and few people will ever see the code. They get something working and leave it there as long as it's not bothering anyone. Coding standards were very loose.
I'm sure many small intricacies people have come to appreciate were merely products of rushed development at the 11th hour, non-important bugs that just stayed in the final product. Even if they had preserved the code, those imperfections fans appreciate would likely be patched out anyway (see: the backwards long jump in Super Mario 64 being removed in subsequent releases).
As someone who was a video game developer in a past life, you are right about spaghetti code. Video games past were move fast, break everything. Hacks on hacks.
And as soon as a game was out the door it was forgotten about. A build system would stay around only as long as it was needed to write any critical patches.
You'd be straight into another game, and you would go buy a whole new set of dev machines for the entire team because the ones you started the last game with are now horribly out of date. The old ones with all the source code on would get sold or given away to staff to reuse at home. And the central backup tape would eventually get lost or reused.
The fact that source code to 90s games like Wipeout sometimes appears out of nowhere is awesome. I think one of the only times you'll see that is if a developer burned off a copy of the source to keep for himself. I know I did that to use for personal projects and future resumes. But those discs are long gone.
It's the same story with things like the lost Doctor Who episodes; people just didn't think this children's show from a few years back that they were never going to air again would ever attract anyone's interest and they needed the space (storing lots of stuff isn't free!) so in the bin it went.
This problem still exists today, too. How do you know what will be interesting to people in 20, 30, or 50 years time? You don't always know that, and storing everything is expensive and time-consuming. In a few decades people will be lamenting lost media that was created today.
In the case of Doctor Who it also didn't help the video tapes from that era were expensive and reusable. Lots of tapes were reused for other productions.
On the other hand, it's simply impossible to preserve everything produced. Who's going to watch all of youtube again from the beginning to search for interesting material?
Basically all my files from 2011 and older are lost. My files from 2008-2011 were on a laptop which my parents threw away when it was broken, my files from 2003-2008 were on 3 computers which were also thrown away by my parents.
There was no ubiquitous cloud storage back then so most is gone, for basically no real reason.
Rebuilding is basically what they did for all the Pixel Remasters. They are remakes on the Unity Engine. I believe the earlier Final Fantasy VI remaster on PC literally wrapped a ROM.
I once worked on a military simulator, and it ran on very similar hardware; it might even have been the Onyx. Actually, yes, I do believe that's it now that I've googled it! I worked on this in 2011-12, and I can tell you compiling stuff was not fast! It was kind of funny thinking that this had once been a "supercomputer"... by then it was slower than even old desktops!
> 250 MHz in 1993 is insanity, considering that was the 33 MHz 486 era.
I think the 250 MHz ones appeared later on, in subsequent revisions. Still, one of the promises of RISC was that, since the logic was much simpler, you could jack up clock rates further than it was possible with CISC designs.
Much the same promise is made by VLIW today - it should let you "jack up" core counts and clock rates compared to even RISC, because you don't need to do energy-intensive OOO reordering and speculation so it becomes feasible to avoid thermal and power constraints even with otherwise heavy CPU use. (The clearest issue with it is that it leaks way too much of your μarch choices in the programming model, so every ISA is inherently bespoke.)
Or another way to consider it: my 2021-era $399 Jetson Xavier NX dev kit is a better computer in every way than an Onyx. And other than only having 4 GB of RAM, my Jetson Nano is also a more capable computer than an Onyx. Hooray for Moore's law and mass production.
I worked at Rareware in that same era, similarly ridiculous amounts of SGI hardware in the building. As I recall, each artist had an SGI Indigo2, and later on the SGI O2 became the standard artist workstation. I believe our lead artist used an SGI Onyx. Programmers had Indys with the internal N64 development boards.
There were at least 2 rack-mounted SGI machines used for large-scale rendering jobs (i.e. promotional images, magazine covers, etc.). They may have been SGI Challenges (I know one certainly was) and were kept off-limits to most staff; at the time they were rumored to cost $250k each.
>I worked at Rareware in that same era, similarly ridiculous amounts of SGI hardware in the building.
I've always been fascinated with Rareware. For such a tiny studio, the level of quality in the games they put out during that era is completely unparalleled, and many can justifiably still be held up as the greatest ever made.
What was the secret sauce? What was it like working there? How was the culture?
So the secret sauce was... interesting! Unlike any other company I have seen.
Each game team (of which there were 4 while I was there) was isolated from the other teams; only senior management and IT had access to the other teams' floors/buildings. If someone from another team needed to visit for some reason, we were usually instructed to turn off monitors. There was very little sharing of code, and core graphics/gameplay features were fiercely protected by each team. The competition was not other games companies (with the exception of Nintendo) but the team in the next building.
There was almost no recruiting from other games studios, almost everyone was hired out of college (or recently so).
The management (Tim and Chris) generally let the teams do what they thought best and didn't push hard on things like deadlines. Release dates existed but were argued/negotiated with Nintendo when things slipped.
Employees were expected to be in the building at 9am (there was a bonus for punctuality, which I don't think I ever received, and which I was reprimanded about several times). Lunch and dinner were provided, the studio was literally on a farm so there were no lunchtime distractions, and long hours were expected, although not mandated. 70+ hour weeks were not unusual.
Teams got a small bonus for shipping a game but then got a split of game sales (after some 'production cost' was subtracted). The split increased as Rareware negotiated higher payments from Nintendo, for later N64 games the team received something insane like $1 per cartridge, which on a 10-16 person team with millions of sales was crazy money. People were paying cash for houses and Porsches.
Unfortunately, while the generous bonuses drove the work culture, they also caused a large amount of resentment. People on shelved projects, projects on low royalty rates (e.g. GoldenEye), or poor-selling products put in the same or more work but did not get anywhere near the same fiscal rewards. It was a side effect of Tim and Chris being extremely generous and wanting to do right by the staff, but it did cause a number of departures (albeit mostly from the GoldenEye team); I think this has been discussed publicly by many of those folks.
As an outsider, I believe Rareware's secret sauce was a combination of a can-do, down-to-the-metal, fast-feedback-loop game development style that came from the pre-PC British bedroom game coders, a management team that understood how to manage game development and releases, and Nintendo's coaching on mascot development and general game polishing.
Rareware's talents were big advantages in the early 3D game console era. But by the PS2 / Xbox era, their special skills didn't help as much.
Today I'd say that Epic's Fortnite is the spiritual successor of the old Rareware.
To continue on the similarities, Epic and Rareware also had/have problems with worker burnout. Pretty much every N64 Rareware classic drove at least a few people out of the company. By the time Nintendo and Microsoft got into a bidding war over the company there wasn't much talent left in it[0]. Fortnite is the same way: the fast pace of content churn means people are working constant overtime, and the perpetual nature of the game means there's no release that you're crunching for.
I would disagree that this was good management, though. Burning out your talent is how and why game studios fall apart over time. Had they retained talent and kept crunch time low they probably would have continued churning out hits on the GameCube and Wii instead of stinkers on the Xbox. In fact, Nintendo probably understands this[1] - for example, when Retro Studios imploded they bought them out and immediately banned overtime work at the studio.
[0] Microsoft didn't understand this, and this is why they wound up overpaying for Rare.
[1] Or at least it did in the Iwata era. No clue if Kimishima or Furukawa have the same convictions, but given that Nintendo promoted them both from within, I imagine they do.
>I would disagree that this was good management, though.
Well, "effective" management. Not necessarily good. Seems like a story that pretty much all large gen-5 (and many gen-6) studios share. It was this new cutting-edge field right before/after the dotcom bubble, requiring (at the time) very niche talent and passion. A perfect formula for burn and churn.
This was likely one of the many thousand cuts the industry faced when moving to the HD era in gen 7. You couldn't just brute-force a bunch of assets to work at the expected HD fidelity without stepping back and actually understanding what the machine was doing. You couldn't just have two artists doing everything for asset production; you needed an organized pipeline of specialists. You absolutely needed a producer/manager/director to make sure the pieces fit together. A huge wakeup call for game developers on software/business practices most other parts of the industry had employed for years.
> Rareware's talents were big advantages in the early 3D game console era. But by the PS2 / Xbox era, their special skills didn't help as much.
StarFox Adventures on the GameCube was probably their last "holy crap" game from a technical perspective. There wasn't anything else at the time, on any console, that did realistic-looking fur as well as that game.
It also looked better on a CRT. I still remember getting the GameCube with Star Fox Adventures as my first game for my birthday a few weeks after it came out and just being mesmerized by the graphics. (But I also remember a similar mesmerization when I got an N64 with Super Mario 64, the same TV serviced NES, SNES, N64, GC, and Wii.)
From Martin Hollis's Wikipedia page (he was project head for GoldenEye 007):
> Hollis remarked that he worked non-stop on the game, "[averaging] an 80 hour week over the 2 and a half years of the project", and that the team he recruited was very talented and dedicated even though most of it was composed of people who had never worked on video games.
I guess the answer is insane levels of talent + dedication, and don't worry too much about domain expertise. Probably my favorite factoid out of there is that Goldeneye multiplayer was an afterthought, and basically one dude hacked it together in a couple of weeks at the end of the development cycle.
I went to animation school and learned on Indigos and O2s for Maya and Softimage... ~1994 -> 1996
Years later when we were collapsing ILM into the new Presidio Campus, the amount of SGI full-rack sized machines being thrown into the trash was insane.
I believe they turned at least one into a keg-erator...
I could have had an opportunity to get one of the cabinets, but I didn't have a place to put it. Wish I had figured out a place to keep one.
SGI was a key part of some of the most iconic early games (even Nintendo used them!) There isn't much of an emulation scene, probably because of how specialized they were, the weird architecture, and the diversity of workstations. And lots of the old machines were just tossed after they became obsolete. It's a huge shame from a preservation point of view.
I read somewhere recently, from a developer at Rare, that the government was actually concerned as to why this company would effectively have a huge cluster of supercomputers, and wanted to make sure they weren't using them for military stuff, etc.
The interviewee said they'd laughed and replied 'no, we're just making video games'. It gave me a giggle, and it demonstrates just how advanced these systems were.
Ha! I'm long gone from Rare and was on a different team... lead designer of Banjo and a number of the engineers/artists are still there I believe so who knows. Dream can come true!
"Dream can come true!" is not a typo, dear readers.
Here is a video, only released decades later, of the SNES game midnightclubbed refers to, which fell by the wayside as the N64 came about:
https://www.youtube.com/watch?v=w72kj20YNA0
I was surprised to find my jaw dropping at an SNES game after all this time.
Imagine, this and Super Mario World are the same platform!
Ohhh, I fantasize about being a 3D artist/producer in the 90s. I collect whatever I can find from this era. I will make use of my Net Yaroze and release a PS1 game one day!!!
Ah yes, good ol' SGI, it always brings a smile to my face reading these old war stories.
I wish we could get some insight into the development of the first successful 3D game on PC, Quake by id Software; there's a famous picture of John Carmack sitting in front of an SGI workstation with a monitor at a resolution of 1920x1080 (in 1995!).
Also, SGI powered most VFX Studios of that era, so many great movies went through those machines before ending up on the big screen.
It's insane how quickly 3dfx, Nvidia and Intel x86 consumer hardware made SGI workstations overpriced and completely obsolete within the span of just a few years. The '90s were a blast.
But still, I'm sad to see SGI go, as their funky shaped and brightly colored workstations and monitors had the best industrial design[1] in an era of depressing beige, grey or black square boxes.
> It's insane how quickly Nvidia and Intel x86 consumer hardware made SGI workstations overpriced and completely obsolete within the span of just a few years. The '90s were a blast.
Let's at least give some credit to 3dfx Interactive for PCs dethroning the 3D giants. There was a time practically everyone playing Quake had a Voodoo card.
With the advent of semiconductor fabs like TSMC, STM, etc. making their processes more accessible to smaller fabless companies, 3dfx, PowerVR, ATI, Nvidia and other startups in the 3D space back then realized they could replace all those expensive discrete RISC chips SGI was using for its massive 'reality engine' PCBs, and instead design a cheaper and more efficient custom ASIC from the ground up that did nearly the same things SGI's RealityEngine was doing (triangle rendering and texture mapping were enough for a PC 3D accelerator back then), but at 1/100th of the price, and sell it to consumers.
Fast forward, and we all know how the story played out and where the industry is today.
>There was a time practically everyone playing Quake had a Voodoo card.
Yeah they were the most desirable piece of tech back then, plus, the marketing and advertising 3dfx had at the time was wild as hell.[1]
Even the box art on their GPU boxes was the most memorable of any hardware of that era. Anyone remember those eyes glaring down on you from the store shelves?[2] I feel like this is now a lost art.
As a teenager/young adult in the late 90s/early 00s, I was so glad to see 3dfx fail. I hated how popular the Glide API was since you could only run it on a 3dfx card. I had asked for a 3dfx Voodoo for Christmas one year, and my dad got me a Rendition Verite 2200. It supposedly had better performance than a Voodoo while having a lower price, but it couldn't run Glide, so couldn't play half the games I wanted a Voodoo for.
I didn't want to feel ungrateful to my dad, so my frustration got directed at 3dfx for making a proprietary API when OpenGL and Direct3D existed.
I eventually got a Voodoo Banshee, but by that time Glide was falling out of favor.
>my frustration got directed at 3dfx for making a proprietary API when OpenGL and Direct3D existed
Do you happen to know a fruity HW company that today runs its own proprietary graphics API when the open Vulkan or OpenGL exist? /s
All jokes aside, back then it made sense why every 3D HW company was baking its own API. It wasn't just gatekeeping/rent-seeking: the consumer 3D graphics acceleration business was brand new and there was no standardization, so nobody knew where the future was heading, and everyone wanted full control over it as they built it. Plus, they were shipping hardware before Microsoft had come up with DirectX, so they needed some API until then, and I assume they were afraid to touch OpenGL, the API of their biggest competitor.
To be honest, I can't think of anything that's actually exclusive to the fruity company's graphics API? I mean, they have no game market to begin with.
> Do you happen to know a fruity HW company that today runs its own proprietary graphics API when the open Vulkan or OpenGL exist? /s
Yeah, but how many AAA games use it exclusively? Every AAA game I know of is either using Vulkan, OpenGL, or DirectX directly, or they're using a game engine like Unity or Unreal and abstracting away the graphics API.
Abstracting away the graphics API and using the one native to each platform is using it directly.
You forgot the Playstation (LibGNM/LibGNMX) and Nintendo ones (depending on the generation, microcoded GPU, GX (Wii), GX2 (WiiU), NVN (the main one on the Switch), Vulkan and GL 4.6).
> Abstracting away the graphics API and using the one native to each platform is using it directly.
No, it's not. What a silly thing to say.
Is the developer writing Metal code? No. Then they're not using it directly. They're using an abstraction.
"using it directly" means the developer is writing Metal code themselves. Writing Unity code that eventually gets compiled to Metal is not writing Metal any more than writing C that gets compiled into machine code means you're writing machine code.
Think of it this way...if someone wants to hire an experienced Metal developer for a Mac-exclusive game, do you think someone who's only developed using a game engine like Unity would be qualified?
> You forgot the Playstation (LibGNM/LibGNMX) and Nintendo ones (depending on the generation, microcoded GPU, GX (Wii), GX2 (WiiU), NVN (the main one on the Switch), Vulkan and GL 4.6).
Irrelevant to the conversation. We can talk about developing with proprietary APIs and continue to just use Apple's Metal as an example without bringing up all the other proprietary APIs. Not sure why you think I "forgot" all the other APIs.
I also know a couple of game console vendors with their own proprietary APIs.
If networking protocols were managed the same way as ARB and then Khronos do their work, we would only have the RFC for the IP layer, and then lots of extensions to choose from for the upper layers and traffic monitoring from each networking card vendor.
I think you're retrofitting the common clichés about proprietary APIs onto your memories. The first (circa 1995) 3D accelerators were all different and incompatible with each other: the NV1 used Saturn-style quadratic surfaces (and only got used in some Saturn ports), the 3D Rage had its own API (creatively named CIF, from "C interface"), the S3 Virge had its own API (S3D), the Vérité had its own API (RRedline), and the Voodoo had its own API (Glide). Interoperability wasn't even discussed; when (some) games added support for (some of) those, it usually was a completely rewritten executable, i.e. a port to that graphical architecture. The main difference between those accelerators was that most of them targeted the fillrates of top consoles of previous years, still hanging on to an early-nineties 3D vision (a couple of low-poly models; fake perspective, as in racing games; a mix of 3D and 2D sprites and backgrounds; z-buffer? you can simply draw polygons in the proper order!), while the Voodoo competed with professional accelerators (in fillrate, not in shading capabilities), and made a breakthrough.
(By today's standards, those famous early 3D games were laggy crap, both hardware- and software-rendered, on many if not most period-correct computers. It's rarely said, but simple things like texture interpolation (computationally way too much for a CPU) were new, and impressed people more than fps counters. Yes, the vaseline polygons were once the look of the Future!)
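To make "computationally way too much for a CPU" concrete, here's a minimal sketch of a single bilinear texture sample (in C, assuming one 8-bit channel and a row-major texture; real renderers of the time also had perspective-correct coordinates and multiple color channels on top of this):

    #include <stdint.h>

    /* One bilinear sample: 4 texel fetches and ~8 multiplies, paid for
       every pixel of every textured polygon, every frame. Mid-90s CPUs
       simply could not keep up with that per-pixel cost. */
    uint8_t sample_bilinear(const uint8_t *tex, int w, int h, float u, float v)
    {
        float x = u * (float)(w - 1);
        float y = v * (float)(h - 1);
        int x0 = (int)x, y0 = (int)y;       /* top-left texel */
        int x1 = x0 + 1 < w ? x0 + 1 : x0;  /* clamp at the texture edge */
        int y1 = y0 + 1 < h ? y0 + 1 : y0;
        float fx = x - (float)x0;           /* fractional position */
        float fy = y - (float)y0;

        float top = tex[y0 * w + x0] * (1.0f - fx) + tex[y0 * w + x1] * fx;
        float bot = tex[y1 * w + x0] * (1.0f - fx) + tex[y1 * w + x1] * fx;
        return (uint8_t)(top * (1.0f - fy) + bot * fy);
    }

A dedicated accelerator does exactly this in fixed-function hardware, which is why a $300 Voodoo could make textures look "smooth" when a Pentium could not.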
Then two companies leveraged their positions to change that course. First was Microsoft, which really needed everyone to start using system-wide multimedia APIs instead of the dozens of unmanageable DOS-era bypasses offered by this or that hardware manufacturer. So Direct3D was introduced, and then quickly remade in a number of later versions to match the breakneck progress in hardware features (or to directly wrap around them, if you wish). On one hand, it was a universal API; on the other, people would still write "ATi code" and "Nvidia code" years later.
The other was id Software, though not in a straightforward way. Quake I was ported to accelerated hardware (vQuake, glQuake). Partially because of the vQuake experience, Quake II compartmentalized the renderer into a library, and only that library was supposed to be written with the help of hardware makers (or by them). Ironically, OpenGL support was more of a tech demo thing, because id Software had the expensive professional hardware and regular users obviously didn't, despite the calls to support that API in consumer 3D accelerators. Even more ironically, 3dfx decided to make a bare-minimum OpenGL wrapper library for Glide to let the already existing glQuake run on the Voodoo instead of trying to make some deal on the porting. They called it MiniGL. Soon other manufacturers, game makers, and people trying to make this or that existing software work on consumer PCs followed the example, and the world saw a lot of custom hacked "opengl32.dll" semi-implementations tightly coupled to their respective applications. Of course, if any of them happened to be copied into the system directory, everything crashed or broke except the specific software, on the specific hardware, that the library was supposed to support. Those who tried to write new programs by the book using these wrappers probably had some problems, too.
Then Microsoft was forced to act, even though it never really intended to support alternatives to the DirectX family. It made (standardized) OpenGL an official system-wide API, but made the system library just a wrapper over the complete OpenGL implementation provided by the hardware manufacturer (if present). So it continues to this day, and making everything work properly has been AMD's and Nvidia's problem, not Microsoft's. This is why people on Windows can still choose between OpenGL and Direct3D in game settings and see something on screen with both. But when it all got started, both were just thinner or thicker wrappers over what the chip designers initially had in mind.
The first version of Direct3D shipped in DirectX 2.0 (June 2, 1996) and DirectX 3.0 (September 26, 1996) [0], but the 3dfx Voodoo came out in October 1996 [1].
In other words, by the time 3dfx had a product out the door, Direct3D already existed. The fight over proprietary APIs should have been over and Glide should not have existed. DirectX 5, which greatly matured the Direct3D API, came out in August 1997, less than a year after the Voodoo, and still 6 months before the Voodoo 2. Glide should have been dead by then.
Another bit of history: before DirectX, during the days of the Win32s extensions for 16-bit Windows, there was the WinG library, which provided an accelerated framebuffer.
There was a strong pro-OpenGL faction at Microsoft. Their goal was to support ports of Unix OpenGL apps to Windows NT. That the resulting OpenGL driver also supported 3D games was mostly an unintended side effect.
> realized they could replace all those expensive discrete RISC chips SGI was using for their 'reality engines', and instead design a custom ASIC from the ground up that does nearly the same things SGI's workstations were doing
This one. It is amazing seeing how monstrosities of full-length, full-height cards were replaced by, essentially, single-chip solutions. I would recommend something akin to a computer museum to see exactly how it came to be.
On mobile ATM; if someone is interested, leave a reply and I'll provide some links to the most interesting ones.
I did a stint as a co-op for SGI back in the late 90s. What was clear at the time was that there was a flight of the smart people out to the early PC graphics card industry causing serious brain drain in the company. This was also the time the company was making astoundingly overpriced PCs that made the bad bet on RAMBUS.
The reality is the company suffered from the same fundamental market forces that killed off most of the workstation market in the 90s. No niche company could spend what Intel was spending on R&D every year, so their performance advantage was continually eroding while their price points were not. Trying to transition to being a PC manufacturer wasn't totally crazy, but it meant competing in a highly price-competitive market, which SGI was absolutely not equipped to do.
I had the impression that the smart people in the graphics department saw that management was never going to go along with their "let's build far cheaper and better versions of our existing products on a PCI card that you can stuff in a cheap off-the-shelf PC" idea, which would have massively undercut the core business. So they quit the company and started Nvidia.
Even Intel flirted with RAMBUS and paid for it. When I was at Pandemic Studios in 99-02, we'd get lots of prototype hardware from Intel and they sent us PIII's with the RAMBUS-exclusive i820 chipset. The things were impossible to get working stably and the RDRAM was ludicrously expensive. Total dead end from the get-go, and slower than AMD's stuff.
Intel was really on a dumb path starting in the late 90s, with RAMBUS, Itanium, the P4 debacle, and missing the emerging mobile market, and didn't right themselves until 2006 with the Core series. But they were big enough to be able to make a few mistakes unlike SGI.
Yeah the Battlefront games were awesome! They were a bit after my time but built on the foundations of multi-vehicle combat that we worked on in Battlezone II, just supercharged with more open world capabilities and obviously fantastic IP with Star Wars.
Originally we were just x86 PC games but started working on console stuff in mid-2000. I got a chance to head up design on a PS2 game with Sony, and we had one or two other 'keep the company alive' projects as well before having the breakout hits of Full Spectrum Warrior and Battlefront. By then I was well off to college though.
> there was a flight of the smart people out to the early PC graphics card industry causing serious brain drain in the company
Yeah, I imagine you could count on the fingers of your hands the number of people who could design 3D acceleration hardware back then, so it must have been a pretty exclusive club in the Bay Area at the time, where everyone in the field knew each other, I can only assume.
3DFX did not dethrone any CGI giants though. The PC did, but at the time that meant getting an Evans & Sutherland or similarly novel 3D accelerator for like $3000 and a copy of Maya or 3DS Max for like $1500
Which had enough power to dethrone the Indy or O2 at a fraction of the price :)
Had to have a Voodoo to stay competitive in QuakeWorld by seeing people hiding in the water. The peasants with software rendering had opaque water textures. My Voodoo card eventually developed a texture-memory corruption bug that would result in brightly-colored speckles appearing on Quake's dingy industrial textures, turning it into more of a Candyland world. Good times.
As I recall, the SGI monitors were actually the pinnacle of Sony Trinitron CRT tech, rebadged and connected using a 13W3 analog video link, similar to the very high-end Sun monitors of the time. As neither SGI nor Sun actually made CRTs, they went with the state of the art from the world's top CRT maker. There might have been some Mitsubishi Diamondtron in there too.
In 1995 I think there were 16:9 aspect ratio Japanese-model TVs, but I am not sure about monitors; it might have been more like a 4:3 1600x1200 display.
>In 1995 I think there were 16:9 aspect ratio Japanese-model TVs, but I am not sure about monitors; it might have been more like a 4:3 1600x1200 display
Nope, it was definitely 16:9, but I was mistaken, it was an Intergraph[1], not SGI like I originally thought, but still connected to an SGI workstation.
Just look at this beast[1]. Also, the monitor is in the photo as well :)
There's a semi-easy way to spot a Trinitron if you know where and how to look for the tell. There were two horizontal lines, shadows from a bit of internal wiring, that could be seen when certain images/patterns like solid colors were displayed.
I had a Sun workstation with a beast of a 19" Trinitron CRT in the late '90s in my apartment (working remotely for a California startup) and remember it fondly. The lines you're referring to are called "damping wires" in this article: https://en.wikipedia.org/wiki/Aperture_grille
Of course, the final-gen Trinitrons (circa 2000 onward) had truly flat glass, so there was no tell-tale curvature to look for.
And once Sony's patents started expiring, there were competitors like Mitsubishi's Diamondtron displays with glass shaped like Trinitrons. I'm not sure if they had the two horizontal lines like Trinitrons.
I remember that shadow! FWIW it was just one line in smaller Trinitrons, and it was located 1/3rd of the way from the top or bottom edge rather than in the middle.
Apple mounted the Trinitron tube upside down compared to other vendors, so that the faint horizontal line would be in the bottom third of the screen rather than the top third.
You could also "overclock" them to run at higher refresh rates (frame rates, Hz) depending on the resolution. The top CRTs were blowing away LCDs for many, many years: good color, higher refresh, good dynamic range, etc. Those 21" CRTs were massive, but I held onto my second-hand one for as long as I could. I remember getting the VGA adapter for the Dreamcast, and it looked damn good on that CRT!
I still remember buying a dirt cheap SGI monitor back in the day. But it took a while before I figured out how to mod the cable because of sync on green.
Fixed frequency, crafting an X modeline on a cheap 14", no text readable during boot, fingers crossed that X came up OK; if not, swap monitors, boot with "init=/bin/bash", and goto 1...
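For the curious, here's a rough sketch of the arithmetic behind hand-crafting a modeline (the timing numbers below are illustrative, not a tested mode for any real monitor): a fixed-frequency tube only syncs when the horizontal scan rate lands on the frequency it was built for, so you juggle the dot clock and blanking intervals until it does.

    # A minimal sketch of modeline arithmetic, with assumed/illustrative
    # timing numbers rather than values for any real monitor.
    def modeline(name, dotclock_mhz, h, v):
        # h and v are (display, sync_start, sync_end, total) in pixels / lines.
        hdisp, hss, hse, htot = h
        vdisp, vss, vse, vtot = v
        hfreq_khz = dotclock_mhz * 1000.0 / htot   # horizontal scan rate
        vfreq_hz = hfreq_khz * 1000.0 / vtot       # vertical refresh
        print(f'Modeline "{name}" {dotclock_mhz:.2f} '
              f'{hdisp} {hss} {hse} {htot} {vdisp} {vss} {vse} {vtot}')
        print(f'# -> {hfreq_khz:.1f} kHz horizontal, {vfreq_hz:.1f} Hz vertical')

    # Hypothetical 1280x1024 mode; on a fixed-frequency monitor only a
    # matching kHz figure counts, hence the trial and error described above.
    modeline("1280x1024", 110.0, (1280, 1320, 1480, 1728), (1024, 1027, 1030, 1066))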
The SGI monitors were 4:3 but I don't recall the resolution. I do recall that they were beasts, weight of a small elephant and a not dissimilar size - you needed to pull your desk away from the wall to get any kind of distance to the screen.
Over time their timings would drift and you could never get the entire screen completely sharp.
I just finally gave away a 17" Trinitron SGI monitor when cleaning my office.
That thing was a tank! I bought it at a CG house bankruptcy sale in the 90s for $2k, which was less than half the going price at the time.
But you're right, it weighed a ton. It did need periodic degaussing. And when a communication company half a mile away put in some satellite uplinks, I could see when there was heavy communication traffic by a slight color shift on one side.
Degaussing buttons were common on high-end CRT displays. They were always that fun!
In the mid-2000s, I worked at a place where we had several "decommissioned" Indys living a second life as Apache servers for some of our hosting clients - not uncommon in those days. We had one of the big 19" 4:3 CRTs on a KVM, too, and its weight put a noticeable if graceful curve in the MDF desktop on which it stood.
There was a time I had a 20” Apple behemoth on my desk in the open plan office. Degaussing it made the image on every monitor within 3 meters curl up in magnetic agony.
It was an Intergraph monitor, connected to an Intergraph workstation. Carmack didn't use SGIs for development, but they did use their servers for level processing
> But still, I'm sad to see SGI go, as their funky shaped and brightly colored workstations and monitors had the best industrial design[1] in an era of depressing beige, grey or black square boxes.
This is what I miss most about the 90's and proprietary computer vendors: the exotic fun looking cases they had vs boring beige Mac and PC cases.
I have an SGI 230, which was a last-ditch effort to stay relevant by offering a regular x86 machine in an ATX SGI case. Unfortunately SGI had ditched the snazzy cube logo by then, so it only has the lame Fisher-Price sgi logo stenciled on it. It now houses a 12-core Threadripper running Void musl.
Before that workstation, the 320 and 540 used Intel P3/Xeon chips on proprietary motherboards, with a proprietary 3D GPU/chipset and even proprietary RAM modules. They only ran NT4 and were a miserable failure of a machine.
It works for what I need and I like the BSD influence. musl explores a cleaner C library. I run glibc stuff like Steam in containers, and I run an AMD GPU. I don't have any direct need for systemd, so it's interesting to try alternatives.
I recall CRT monitors were capable of some quite high resolutions (often with a compromise to refresh rate), and it took a while for LCD panels to overtake them.
CRTs, being analog, are theoretically capable of _any_ resolution. But in practice, most monitors were limited to between a few and a few dozen common modes. Toward the end of the CRT monitor's reign, most monitors could display higher resolutions than was really practical for their size, as the physical limit is the size of the "dots" that make up the phosphor layer. (Which is probably not at all the right terminology, because I'm not a CRT geek.)
The refresh rate compromise at higher resolutions was due to the limitations of the graphics card, NOT the monitor.
LCDs took a little while to catch up for a few reasons:
1) expense! It was hard and expensive to manufacture a display containing millions of transistors with an acceptable (read: profitable) failure rate.
2) colors! LCDs had a reputation for extremely poor color quality in the beginning. Blacks were medium gray at best and primary colors all looked washed out. Today's LCDs still have a hard time getting to "true black."
3) ghosting! Early LCDs had poor response times. Move your mouse cursor and watch it leave a trail across your screen. Fine for word processing and spreadsheets. Terrible for games.
>as the physical limit is the size of the "dots" that make up the phosphor layer.
Another physical limit was the weight of the glass. The larger the display, the thicker the glass had to get to "lens" the beam correctly so the edges/corners were straight. We had a Sony reference CRT for our film transfer/color correction suite that was a 32" HD monitor. MSRP was >$30k for it. (Sony's reference monitors were roughly $1k per inch in pricing.) The thing was stupid heavy, requiring a minimum of 2 people if their names were Arnie; otherwise it'd take at least 3, maybe 4, typical post house employees. All of the weight was in the front.
CRTs had limits on how quickly the beam could scan across a line (and return to the other side of the screen for the next line), which imposed a tradeoff between the number of lines per frame and the number of frames per second. Within a line, the number of pixels per line was often limited either by the speed of the graphics card's DAC or the analog bandwidth of the VGA cable. But sometimes it wasn't, and you could take a monitor originally intended for something like 1280x1024 and get it to display 1920x1080 with acceptable sharpness, after adjusting the picture height to compensate for the changed aspect ratio.
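To put rough numbers on that tradeoff, here's a back-of-the-envelope sketch (the blanking overheads here are typical guesses, not measurements): the monitor caps the horizontal scan rate, while the card's DAC and the cable cap the pixel clock.

    # Back-of-the-envelope CRT timing math; blanking overheads are assumed
    # typical values, not measured ones.
    def mode_requirements(hdisp, vdisp, refresh_hz, h_blank=0.25, v_blank=0.05):
        htotal = hdisp * (1 + h_blank)           # visible pixels + horizontal blanking
        vtotal = vdisp * (1 + v_blank)           # visible lines + vertical blanking
        hscan_khz = vtotal * refresh_hz / 1000   # lines drawn per second
        dotclock_mhz = htotal * vtotal * refresh_hz / 1e6
        return hscan_khz, dotclock_mhz

    for w, h, hz in [(1280, 1024, 85), (1600, 1200, 75), (1920, 1080, 60)]:
        scan, clock = mode_requirements(w, h, hz)
        print(f"{w}x{h}@{hz}Hz: ~{scan:.0f} kHz scan rate, ~{clock:.0f} MHz dot clock")

Note how 1920x1080@60 actually needs a lower scan rate than 1280x1024@85, which is why the trick described above could work on a good monitor of that era.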
It certainly took a long time (mid-2010s) for panels in their various guises to truly overtake CRTs for film work. The reason given at my workplace was that the calibration control of CRTs took time to be superseded.
The Sony 21" CRTs (and 24" in widescreen) had a very good run, spanning several decades. They were certainly capable of higher resolutions, but 1600x1200 was pretty much the standard (or 1920x1200 on the 24" widescreen) for that entire run. I can't think of any other desktop workstation performance 'metric' that was stationary for so long.
I miss my Sony Trinitron that I gave away in 2015. I remember not having an "HD TV" when I first bought a PS3, so I hooked it up to my Trinitron with component cables and it displayed beautifully. The 1080p TV I bought at a later date felt like a downgrade, despite the larger size.
1600x1200 is 1.92mp, while 1920x1080 is 2.07mp. Technically more pixels, but juuust barely. For quite a few years my setup was a 1080p panel next to an aged ColorSync 20" and the difference was mostly the falloff in the corners of the CRT. (And the incredible weight of the CRT, of course. 78 pounds!)
Late to respond, but 1600 x 1200 was the default resolution on my Trinitron. I believe the max preset resolution was 2048 x 1536 @ 75hz. So it was able to display 1920 x 1080 letterboxed just fine. I wish I never gave it away, but I sure don't miss moving it!
Input latency, flexibility of native resolution (they don't really have one). It's been awhile since I've looked at them side by side, but I think they're still better on black levels. I suspect color reproduction has caught up though.
It's not surprising they fell out of favor though. Once the ghosting stopped being absolutely horrendous the LCD is just superior for office work. Uses less power, saves tons of desk space (and due to less weight, doesn't need a strong desk) and no flickering means a cheapo LCD is probably easier on the eyes than a cheapo CRT.
I bet the CRT would still be used more often if the supply chain to make them hadn't fallen apart when the demand disappeared.
When Sony first brought out their OLED reference monitors to the market, they had a demo at NAB to demonstrate the various screen types. All of the monitors were the equivalent reference version of that series: CRT, LCD, OLED.
It was an interesting demo, as they had the same feed going to each monitor. When showing how each monitor could display black, the OLED looked like it was off, the LCD was just a faint shade of gray, while the CRT was much, much more noticeably not black, with its glowing screen. I started to think to myself how they might be pushing the brightness on the CRT to make it look bad against the others. Right as I was thinking that, the narrator said something to address this thought and then displayed bars. All 3 monitors were correctly adjusted. A gamed CRT with the brightness pushed up would have been obvious at that point to anyone who knows how the test pattern is meant to look.
The only content I'd suggest looks better on a CRT is true interlaced material.
>I'm still stumped by Nvidia's rise and how gaming propelled them into HPC/supercomputing. Who knew playing video games would amplify research that much?
Jensen Huang did. When people think of tech visionaries they think of Jobs or Musk, but Huang is just as great. He's been bang on the money about the future of this industry since he founded Nvidia, which is how they managed to not just consistently stay ahead of their competition (ATI) or put them out of business (3dfx), but leapfrog them (AMD) by branching into several fields (AI/ML, PhysX, computer vision, self-driving, compute, etc.). He saw early on that GPUs should push into general compute and not just be for video games, and he executed well on that.
There are interviews on YouTube with Huang at Stanford IIRC, where he discusses his vision of the GPU industry from the early days of Nvidia. Check them out; the guy's not your typical CEO suit focused on the share price, he's basically a tech genius.
So, to answer your other question about why only Nvidia managed to win compute and not the other GPU companies: it's simple. Huang had the vision for the entire ecosystem, from GPU chips, to drivers, to APIs and SW libraries, to partnerships and cooperation with the people and companies who would use them. Building great GPUs for compute is not enough if you're just gonna throw them on the market without the ecosystem and support behind them and expect it to be a success. That's what Nvidia gets and the rest (AMD/Intel) don't. So while ATI/AMD had tunnel vision and focused only on building gaming chips, Huang was busy building a complete GPU-compute ecosystem around their gaming chips with the rest of the industry.
To be fair, Nvidia hired a lot of people who at some point worked at SGI and 3dfx, so there was already a lot of HPC/server/supercomputing talent working there. There are articles (e.g. https://www.extremetech.com/gaming/239078-ten-years-ago-toda...) showing the transition from having fixed vertex/pixel shaders in the GeForce 7000 GTX series to the generalized stream processors starting with the GeForce 8000 GTX series going forward.
And it's easy to see that Huang and Nvidia put their money where their mouth was. The first GTC was 2009. That's 3 years before the famous AlexNet paper that's often credited with kicking off the current AI on GPUs trend.
Not to take anything away from what you've said, because Huang really is all that and more. Find his interview with Morris Chang to get some more insight. Nvidia, ATI and the other players were for the most part seeded by ex-SGI crew, however. SGI had an instrumental role in the companies that ate it.
The guy really is a visionary - although I gotta say he really needs to diversify his wardrobe. How long are we gonna see him in the same exact look with the black leather jacket?
Rendering 3D graphics for games and the supercomputing used by AI/ML/research both need the same thing: Embarrassingly parallel math calculations with little or no branching in the code.
For example, a feed-forward neural network is just a whole lot of multiplication and addition. Transforming a 3D vertex in space to a 2D screen coordinate is matrix multiplication, which is just a whole lot of multiplication and addition. If you've already designed silicon to perform those operations in a single clock cycle, making it do HPC/SC rather than gaming isn't that big of a switch.
Neural networks have non-linearities. There were hacks for doing some sort of general-purpose computation using the GPU's fixed rendering pipeline, but they weren't enough for neural networks, especially as understood today. You need general shaders, which were a relatively late development.
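To make both points concrete, here's a tiny numpy sketch (with made-up values): a vertex transform and a feed-forward layer are the same multiply-accumulate workload, but the network needs a pointwise non-linearity on top, which the fixed pipeline had no general way to express.

    import numpy as np

    # Transforming a homogeneous 3D vertex: a 4x4 matrix times a 4-vector,
    # i.e. 16 multiplies and 12 adds of pure multiply-accumulate work.
    M = np.eye(4)                       # stand-in for a model-view-projection matrix
    v = np.array([1.0, 2.0, 3.0, 1.0])  # vertex in homogeneous coordinates
    clip = M @ v

    # One feed-forward network layer: the same multiply-accumulate pattern,
    # plus a cheap pointwise non-linearity (ReLU here).
    W = np.random.randn(4, 4)
    b = np.zeros(4)
    activation = np.maximum(W @ v + b, 0.0)

    print(clip, activation)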
Alright, but then why did no other company manage to compete on the numerical core array? Still funny, as if the wealthy gaming market funded Nvidia's venture into serious computing enough to choke off any potential competition.
You need huge money to develop the tech, but the HPC industry is tiny. People are always impressed at these million dollar machines, but only a handful are built every year. There is a lot more money in gaming selling millions of $200-$1000 cards every year.
Silicon is expensive to design, and even more expensive to build. The startup costs are crazy high. By the time people realized it could be a thing, nVidia and AMD already owned the market.
> Who knew playing video games would amplify research that much?
Unix was born to play a video game on a different platform. The curses library for terminals (Windows users: think "widgets" for the console) was born for Rogue.
Today, a damn serious OS like OpenBSD still ships the BSD games set by default in the base install.
Text adventures for the Z-Machine had top tier grammar recognition parsers.
Then there were the many simulation games, such as SimCity, whose features overlapped with serious simulation software.
Nvidia is truly special because their first product was a total market miss. They pushed for a quad rendering architecture, and as a result it could only run _seven_ crappy Saturn game ports. It's amazing they were given 2 more tries! The cancelled NV2 doubled down on quad rendering, and the Riva 128 finally arrived almost two years later.
Every single teenager whose parental units asked where gaming skills would come into use later in life as an adult. At least they were hoping for something like that.
Surprisingly, because most CAD and computer graphics programs are still largely not optimized for multi-threaded processors, the fastest workstations typically use whatever processor tops the single-core compute performance category, not the multicore one. They are then paired with as much RAM as the chipset supports. Also, when you are talking about pro graphics cards like the Radeon Pro and RTX A (formerly Quadro) lines, it only pays to upgrade to the next-gen graphics card once your software vendor has had a year or two to integrate with the new hardware generation's capabilities. The pro gfx card market (at least where it pertains to OpenGL performance) is one area that will actually punish you for being too early an adopter, which is disappointing when cards go for several thousands of dollars new. The whole area of CG software has been stagnating for 5 years while OpenGL driver improvements have fallen out of favor for more bare-metal processing approaches like Vulkan, which are only now reaching feature parity and developer adoption. Hopefully the next few years bring a positive trend in CG price-to-performance again as these new architectures actually start shipping in CG software products. As someone who works daily in CAD, the performance stagnation over the last 5-10 years has been depressing, to say the least.
Thanks. This is totally new to me. I guess it's the same picture for professional level designers or game designers who basically work in a CAD-like environment? For example, the people who design levels and scripts for games such as Skyrim.
Got it. I Googled around and was surprised to find vendors who basically build workstations from off-the-shelf components. Not sure whether they are mainstream though.
This makes me really nostalgic for being introduced to SGI dogfight multiplayer on the local LAN I had occasional access to visiting a family member's employer. It really felt revolutionary for the day compared to the Pentium 1 whatever I had access to at home!
Fun fact, the O2 had an 'optional' expansion port that was an additional ~$1,000 or so... but the thing is, ALL the O2s had this port - it was that if you paid for it, they popped the plastic cover off the case to reveal the port...
I'm not so sure about this, I've owned a lot of O2s.
The only blocking plates on the rear cover of an O2 usually cover the spot where the Flat Panel Adapter or Dual Monitor board goes, and it's not installed by default, or the spot where a PCI card would go, which, well, it's a PCI card.
I can't recall what the port was... a serial port? I can't recall, but it was on all O2s, and it was just a matter of knocking out the plastic cover to get access to it.
Oh, this was the SDI hack, but it needed a separate port expander - it was more involved than just removing a plate.
You could buy the base AV1 "analog" video I/O card and then plug an SDI expansion breakout board (I think the Miranda VIVO was the most popular) into the webcam port, instead of buying the much more expensive AV2 "digital" video I/O card.
There were two video options you could buy, with analog and digital versions, plus there were special parts to provide alternative options for display outs (by default it had sgi-style 13W3 only)
https://github.com/ESWAT/john-carmack-plan-archive/blob/mast... is interesting. It transitions from todo list to a narrative format right after Quake launches :) In subsequent years you see Carmack opine on various high-perf platforms for gamers, and also for Quake level pre-processing (engine licensees had money to spend on productivity)
(tl;dr - Quake was developed on NeXTSTEP + an MS-DOS port of GCC (DJGPP), Quake2 was on Visual Studio + Windows NT (this was the era of that Carmack giant-monitor photo), there was an in-between period of free updates to the original Quake, like WinQuake, QuakeWorld and glQuake).
Such a Lisp Machine is usually one machine with two screens. Typically it would be a XL1200 (or earlier an XL400). It would have a black&white console and a color screen. The color screen would be driven by a color graphics card, possibly a FrameThrower - which is an accelerated graphics card.
The graphics editor seen is just the S-Paint part of S-Graphics - it could use a FrameThrower, but also other graphics cards. There was also S-Paint on the MacIvory running in a Macintosh. S-Graphics also ran on earlier Lisp Machines from Symbolics, like a 3670 from 1984.
A bunch of TV studios, video production companies, animation studios and game developers were customers.
The software it runs is S-Graphics, which was later ported to SGIs and Windows machines as N-World, by Nichimen (then using Allegro CL from Franz Inc.).
I had no idea these machines existed. When tasked with writing a ray tracer way back in college, the first thing I did was create a scene description format based on S-expressions. Yesterday I was nostalgically looking at backups of the assignment and found my first scene file.
I went so far as to make the scenes scriptable with Guile. I had another scene that procedurally generated spheres positioned about a helix, but that seems to be lost to the bit gods.
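For illustration only, here's a sketch in the same spirit (the syntax and field names are invented for this example; the actual format is lost along with the file): procedurally emitting S-expression spheres positioned along a helix.

    import math

    # Hypothetical S-expression scene syntax; invented for illustration,
    # not the poster's actual format.
    def sphere_sexp(x, y, z, radius):
        return f"(sphere (center {x:.2f} {y:.2f} {z:.2f}) (radius {radius:.2f}))"

    spheres = []
    for i in range(24):
        t = i * math.pi / 6                 # angle along the helix
        spheres.append(sphere_sexp(math.cos(t), math.sin(t), i * 0.25, 0.3))

    print("(scene\n  " + "\n  ".join(spheres) + ")")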
To me, there's something very natural about using S-expressions to create graphics. I wish there was a video (with a high enough resolution) that shows the Lisp interactions—especially in the subdivision modeler.
It was a common practice in the 90s for creative engineers to use extremely expensive "supercomputer" workstations to pay for productivity gains & live on the bleeding edge (e.g. Silicon Graphics workstations, NeXT workstations, and so forth). Question: What is the equivalent way to do this today? That is, is there a way to pay a lot of money to use a computer which is 5-10 years ahead of its time?
Ok so I'm pretty biased here, but I think the answer lies in VR computing. There's no doubt VR computers are more expensive than their PC/laptop counterparts, but they allow you to adopt a bleeding edge technology which is essentially 5+ years ahead of its time in terms of where it is on the "commodity computing" frontier.
A good quote from Alan Kay I find pretty inspirational on this front: https://youtu.be/id1WShzzMCQ?t=3345 Here he basically advocates for spending hundreds of thousands of dollars on a computing machine, in order to compute on what will be a commodity product 10-15 years into the future. VR computers aren't this extreme on the cost curve, but I think there is something to this point of view which I find really inspirational.[1]
[1] Caveat: I'm one of the founders of SimulaVR (https://simulavr.com), so admittedly am very biased here. But I do think VR provides a way to convert money into "better compute" in a way that hasn't been available since the 70s-90s super workstation era.
> Question: What is the equivalent way to do this today?
I don't think there is an equivalent, because as I explained here, the very noticeable, very significant gains in almost every aspect that even simple hardware upgrades brought haven't been a thing for a long time: https://news.ycombinator.com/item?id=30951533
It's not just that back then an "extremely expensive supercomputer" workstation effectively enabled you to do a lot of things we take absolutely for granted nowadays in the first place, when not having such a beast meant essentially not being able to do any serious CAD (whether mechanical or electrical engineering), large-scale software engineering, number crunching, or anything related to graphics/visuals...
It's also that super mundane tasks like writing a letter, editing an image, or even just booting your computer received a significant, immediately noticeable boost for comparably plain seeming hardware upgrades like a 33MHz higher clock or a bit more L1 cache.
True, but I think hyperscale gear that's $200k nowadays from specialty vendors (much like those workstations of old) might easily become "commodity" enough in the next 10 to 15 years, and that's what Alan Kay was really trying to get at. What might that be useful for, in the hands of a single person? That's a really interesting question, and I don't have a sure-fire answer. (Even AI inference on the huge models that seem to be popular as of late - an easy choice for something that will chew through a lot of data and compute - feels like way too much of a toy to be genuinely useful.)
I'm surprised nobody has stated the obvious. See the recent post on Dall-E 2. It's likely to change the face of creative design and was done using modern supercomputers (e.g. cloud-based deep learning). To make a direct analogy, I imagine some of the newest game AI-graphics and engines require training on such supercomputers.
I think todays equivalent is more running a desktop with a 64 core CPU and 128 GB RAM (or more), with as many screens as you want (I don't find more than two useful, but preferences vary).
I could see VR computing as the mobile version of that. Laptops can be pretty powerful, but their screens are limiting. VR gives you the screen real estate without requiring you to set up monitors where you are.
Today's equivalent is probably just a cluster and requesting more CPUs or GPUs, versus having some ahead-of-its-time chip locally. I guess you could argue the filesystems on clusters tend to be pretty performant too, so maybe that's ahead of its time in terms of consumer SSD speeds.
Not a lot of people put two RTX 3090 cards in their desktop for over $2000 each, but I know some who do. When rendering times for production work are cut in half, it pays for itself fast.
I'm glad the middle screen is explained, Symbolics always gets overshadowed by SGI. If you want to see it in action, watch this: https://www.youtube.com/watch?v=gV5obrYaogU
I'm astonished by the amount of money Squaresoft was investing in game development at the time. Obviously, it paid off big time for them, but I can't imagine they realized the game would be as successful as it was. If I'm honest, their follow-ups make it seem like they never understood why the game was a success.
Would you say you know why it was a success? This game has had a massive impact on me and I am spending a lot of time trying to understand why (along with FF6 and Chrono Trigger). I have identified 3 things: the music plays a massive role, emotion is conveyed through the posture of the characters, and finally the storytelling mixes story and battles in a way that can hardly be recreated in another medium.
Typing this I realise that does not explain why the follow-ups were not as good.
In a lot of ways I think the big reason the subsequent games were not as good was that Square took the wrong message from the success. They didn't go "we should release more emotionally impactful video games with extremely well-told stories" (although they did do this well into the PS2 era); they went "we need to be the most technically innovative company in the world," and this mindset brought them into the failures of Crystal Tools and the Luminous engine, where they were pulling scenario writers off of games to work on their engine (the saga of Final Fantasy Versus XIII is because of this).
They got the music part down though. That stuck around.
Oh, no. But I can say that the game is still objectively incredible. I recently did a full play through of the Steam version with the 7th Heaven upgrades, and enjoyed every minute of it. And I don't think it's all nostalgia either.
And yes, the music is incredible, as evidenced by the fact that the Shin-Ra Orchestra still tours.
A lot of us had never really played an RPG before, especially if we never had an SNES. Most games had a generic poorly-written story that might be a page in the manual instead of even being in the game itself.
Then suddenly a game comes along on the popular console, and everyone's playing it and talking about it, and it puts you in this 3D living world, and a real story is happening with real characters, with real conversations - there's even swear words in it! It's treating me like an adult! - and yes the music is beautiful too, and the whole thing seems impossibly huge, the world seems to go on forever and now I can fly? And now I can go under the ocean? There was nothing else like it, not on a mainstream console anyway.
I feel like FF7 benefits a lot from its first act being pretty cyberpunk-y. The pre-rendered graphics make it also a game where you see it and are like "wow", as an observer.
Things changed a lot since then but I think that so much of the Playstation's success comes from having these games that are quite high fidelity. You no longer need your brain to fill in too many of the blanks, making it way more mainstream and interesting for a broader spectrum of people.
> If I'm honest, their follow-ups make it seem like they never understood why the game was a success.
This sounds like pretty simplistic reasoning to me. Do you really believe this?
The fact that your follow up game (or movie/book/album/painting) wasn't as successful as the previous one, doesn't mean you don't understand why the latter was a success. Understanding success and replicating it are two different things.
Surely they had some idea, given that every game did better than the last until 7. You can only be the first immersive 3D game on a new platform once. Personally I still find FF6 to be one of the few 10/10 games that never gets old. After the 3D and tonal jump to more sci-fi I just wasn't as interested, and western PC RPGs started to get really good.
I remember being on a stand next to SGI at E3 in 1997. They had a giant black truck in the arena, like the one Knight Rider drove into. They were selling machines that looked way more powerful and expensive than anything the games industry could afford. People at the show were mainly debating when and if Intel could release a 1 GHz processor. Strange what you remember.
Also, there was a Final Fantasy film produced at that time that took 1000 workstations, 200 people and 4 years to render, at a cost of $100m+. Though it made only $80m.
> The movie was created using a 3D CG tool called MAYA as well as original tools created in Honolulu.
> By the time the final shots were rendered some of the earlier ones had to be redone because they did not match anymore. Also, the software used to create them had become more advanced (and hence more detail was possible).
I worked in the same building in Honolulu as these folks. Several of us were able to buy various bits of hardware left over after they wrapped. I ended up with a massive monitor that I used for years after. My recollection is that one of my co-workers bought a rack and a few servers, but sadly I can't remember the details (other than that it was a hopelessly impractical purchase on my co-worker's part).
And despite all that, by the time it was released, it was really not very impressive graphically, which was all it had going for it since it was a terrible film.
Yeah the faces were alright for the time, but everything else wasn’t great. The environments in particular were pretty bad. The opening scenes of the movie have the characters flying through a burnt out wasteland and it really, effectively looks like a 16x16 texture has been draped over several square kilometers of mountains. Texture filtered, so no giant chunky pixels, but it still looked awful. Absolutely no detail. And this was one of the first things you saw in the movie!
Also it was just dreadfully boring, which is basically the worst thing a piece of entertainment can be, even worse than the visuals.
Amen! As an FF fan in the 90s, I was so incredibly hyped about this movie. I remember it getting major press in gaming magazines and going to it with my cousin. I also fondly remember it as the only movie I've ever walked out of.
My friends and I were big Final Fantasy fans in the 90s but really had no interest in this film when it came out, because it was clear that despite being directed by Hironobu Sakaguchi, it didn't really tie in to any of the FF games. Sure, all of the mainline Final Fantasy games are stand-alone stories, but when you base a movie on an existing property, that doesn't really fly.
I could see what they were going for. A lot of the pieces of good or functional ideas and hooks are there: the central mystery of why there are all these ghostly creatures everywhere, trying to solve this mystery before it's too late, the uneasy alliance between the scientists and the military where both are trying to solve the problem but have different ideas about how to do it and each needs resources from the other, trying to convince others of things you know are true but are difficult to understand or sound crazy, trying to have functional and healthy relationships in a crumbling world, etc. There is a lot there that could work, and a lot of media has similar themes and plot points; the film just doesn't do a very good job of it. It's not even really worth a watch to see "it's so bad it's good" or to laugh at Square's hundred-million-dollar mistake, because as I said it's just kind of mediocre and boring.
>The RAM on these machines were not industry standard — they were proprietary, 200 pin SGI RAM modules available in 16MB, 64MB, or 256MB variants. The memory board (known as MC3), had slots for 32 memory modules — and could handle up to 8 GB of RAM. 16 GB in the rackmount version (yeah, there was a massive rackmount version).
>Think about that for just a moment. This was the mid-1990s.
Let's pose the more theoretical question of what the most powerful computer is that could be built if enough resources were summoned to make it. We're talking a single rackmount. What about a rackmount the size of a city?
I would imagine it'd be similar in architecture to existing supercomputers: many identical compute units connected over a network. Scaling a single rackmount computer design up to the size of a city would not be practical; a network would have to be introduced at some point to keep scaling adequately.
Big computers are kind of like that - it’s just that the network is composed of ridiculously fast buses and pipes and other forms of pushing vast amounts of data.
The biggest issue you’ll hit as your computer gets bigger is latency, because even light has a speed limit. You can, however, cheat, by distributing your inputs and outputs in space so that, by the time information flows to the right places, that’s where it should be. That and adding more pipes for increased bandwidth.
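Back-of-the-envelope, the speed-of-light problem looks like this (assuming signals propagate at roughly two-thirds of c, a typical figure for copper and fiber; the sizes are illustrative):

    # Rough latency math: one-way signal time across a machine of a given
    # size, measured in cycles of a 3 GHz clock. Sizes are illustrative.
    C = 3.0e8               # speed of light, m/s
    SIGNAL = 2.0 / 3.0 * C  # assumed propagation speed in copper/fiber
    CLOCK_HZ = 3.0e9

    for label, size_m in [("board", 0.3), ("rack", 2), ("campus", 1000), ("city", 20000)]:
        one_way_s = size_m / SIGNAL
        cycles = one_way_s * CLOCK_HZ
        print(f"{label:>7} ({size_m} m): {one_way_s * 1e9:10.1f} ns one way, ~{cycles:,.0f} cycles")

A city-sized machine spends on the order of a hundred thousand clock cycles just moving a signal from one side to the other, which is why staging data where it will be needed matters more than raw compute at that scale.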
I wonder what the total compute power of Earth is, if you hooked all the CPUs together into a cluster? It would be interesting if in times of extreme need you could harness the Earth's compute and run a job parallelized across all nodes: like if an asteroid were imminent and some complex modelling had to be done with no time to wait for compute, or we contacted intelligent life and had to upgrade our planet's technology as fast as possible to compete. It's also really a shame that so many working CPUs just go to the landfill instead of to some agency that could pull them out and throw them into this would-be Earth supercluster.
SGI was building these enormous supercomputers with the NUMA architecture, kind of like a cluster of super-fast units with fast interconnects, plus an OS and support software so you could actually use it. This is one of the less photogenic setups:
If you can make your problem fit the architecture you can work on enormous tasks. It wasn’t bad stuff at all but probably impossible to make money on. Commodity hardware improved so fast they couldn’t keep up.
At this point it's more about what metric of "powerful" you are comparing on. For example, nobody wants a liquid-nitrogen-cooled 8 GHz CPU any more, and measuring "flops" isn't that useful either. The old Silicon Graphics machines didn't have a single GPU in them and would not be able to render shaders or manipulate rasterized textures of basically any resolution; they just threw a lot of energy at a now-antiquated and useless approach to the problem.
And finally, if you make something with a bunch of custom chips with a bunch of high-bandwidth pins, then you don't have another metric to compare it against anything else.
I'm open to the thought exercise but I can predict which directions the conversation would go, I'm just content with the variety of metrics on CPUmark these days.
Wow this brought back memories! In the late 90's, I got to work on SGI machines at college learning 3D animation in Maya v1! The school had 4 labs with about 30 SGI O2's in each; all networked with no security. I could send messages from my workstation to the teacher's open terminal session.
No one there knew (or cared to learn) IRIX. When they converted the biggest lab to Windows NT4 everyone abandoned the SGI machines. Which worked out great for me, because it was much more peaceful in the SGI labs compared to the NT4 lab. Some of those StarCraft matches could get kinda rowdy!
This sort of archaeology is great fun. I spent a huge amount of time figuring out what was happening on the screens in the first Westworld film (1973): https://www.youtube.com/watch?v=UzvbAm0y8YQ
Oh man the memories of this time. Around 97-98 was when workstations were on the way out and workstation cards were on the way in, but regardless these SGI boxes were just so lustworthy - the style, the performance, the otherness and clear superiority in all dimensions to my lowly hacked together PC.
I was just a teen getting into 3D animation & game design in 1998, and since I couldn't ask my parents to mortgage the house to buy one, I wound up picking up a workstation card instead: a foot-long Dynamic Pictures Oxygen 402 with 32MB RAM and four 3Dlabs chips, for a much more reasonable $750 used. I think about a year and a half earlier these went for $4k new; that was the pace of 3D innovation at the time. It suited me really well to learn Softimage 3D on, until I got a job at Pandemic Studios as an artist/designer. Even this beast of a workstation card couldn't run Quake without errors though; there was still a separation of functionality between consumer 3D accelerators like 3dfx and the $1k+ workstation ones.
How did you find out about 3D animation and the hardware used for it way back when? Were there any particular forums you visited or magazines you subscribed to?
I was part of the Total Annihilation community and a local LAN party scene with the Gamespy folks in 97-99, so I got to meet some industry folks that way and learn what was going on. The big break that I got was going to a local 3D animation meetup in '98, where I met two artists who would become the art leads at Pandemic Studios when it was founded a few months later. I don't remember the exact sources; I just hustled a lot during those times because I wanted to get into the industry and was bored of high school, and I was lucky to be near the action in LA. I eventually took a Softimage 3D animation course in summer '98 and I needed a computer that could handle it! I was only 16-17 at the time, and looking back I think I had a pretty wild experience as a teenage game designer during a really cool time in the industry. Might write about it one day.
In the video game I'm writing, I made the enemy computers blue and purple in color -- as a tribute to SGI in the era when it seemed RISC architecture really was gonna change everything.
RISC did change everything: the primary computing device for most users is an ARM-based phone, dependent though it is on x86 servers in the cloud for much.
Of course it's arguable how RISCy ARM really is, but x86 is the only CISC left in non-embedded computing anymore, and really A) it's a hybrid, with SIMD and a lot of recent instructions, and B) the ISA is essentially virtualized atop a microarchitecture which operates vastly differently from the ISA; all the wacky stuff done for performance is tucked away there until it rears its head with Spectre-like issues and such.
Further, couldn't it be said that RISC changed Intel? One wonders what Intel would have done had RISC not been making gains against it in the 90s.
Microcode has been the state-of-the-art method of implementing CISC CPUs since... the 1960s?
The one x86 that actually had a RISC core inside was the AMD K5, which essentially used a 29050 core with an x86 frontend slapped on it (in great simplification). The Am29k architecture is still used and produced by Honeywell for their avionics systems.
I started a video game company in 1994 that used SGI Indigo workstations to render animations created in StrataVision 3D [1]. The artists and animators would model, surface and animate on Macs and then send the files to the SGI machines for fast rendering. I can't remember the name of the SGI rendering software or find any signs of it on the internet.
If we had had the funds, it would have been nice to put Alias on the SGI boxes. We were later able to purchase LightWave for IRIX, which was pretty nice, but StrataVision wasn't that bad in retrospect.
Hard question to ask, but what was the workflow used in development? Two programmers sitting at many monitors powered by who knows how many different workstations. What are they all doing, say, here?
They were probably posing for a photo there. Pair programming wasn't coined as a term or popularized until Kent Beck and 'Extreme Programming' in the early 2000s.
Extreme Programming seemed to be in vogue among the early adopters who went on to write the "agile manifesto" in the late 1990s.
That said, it looks like a somewhat typical case of discussing a bit of design, with the main artist sitting at the workstation while the other person came over from elsewhere.
> Genera is an operating system, originally developed in the early 1980s, by Symbolics.
> … a fork of the LISP Operating System developed at MIT. Virtual memory, a full GUI and window manager, networking, Emacs… this thing had it all.
I’d never heard of Genera until today. I read about it earlier this morning in an article on non-C operating systems¹.
> Symbolics, the company which owned the first ever dotcom domain, built an entire OS in Lisp, called Genera. The last version, OpenGenera, ran on an emulator on DEC Alpha workstations, and today you can run it on Linux, but sadly the inheritors of the defunct company won't open-source it.
Very sexy setup for that era. As a kid I went to a computer faire near my hometown in the relative middle of nowhere, and not only did I get to check out a Video Toaster, there were two guys there with an Onyx. As an SGI-obsessed dork, I almost fell over. If I recall, they used it primarily for architectural visualization. It was running a scene with trees that were two intersecting rectangles with texture and transparency maps. It's hard to convey how cool this was at the time. Plus, that machine was in the $100,000 range in 1995 dollars. Fun times. I'm bummed that over the years I lost the very slick printed material SGI sent me on their machines, because I had called up and asked.
I received a demo of a $250k SGI when I was about 14 (1990). It powered a military F16 flight simulator and the experience was nothing short of mind blowing. Those machines were tightly packed magic.
My first job out of college was implementing the image generator for the simulator for the landing signal officer (LSO) on the USS Nimitz.
It ran on an 8 CPU SGI Onyx that was about the size of a refrigerator. The view was from the position of the LSO, at the aft end of the carrier deck. In the actual installation, the images were projected onto curved screens giving a 270 degree FOV. I do wish I could have seen the final product!
When I was in Civil Air Patrol, we went to Fallon Naval Air Station in Nevada, when they were still doing Top Gun there.
The flight review theatre was AMAZING. It had a huge screen, and the graphics were 3D wireframe, but they had the entire valley modeled, and with a huge trackball they could review the flight scenes in 3D. This was ~1988/89.
It was amazing... but I am not sure if it was backed by SGI, but based on your comment, I believe it would have been.
---
I bought one of the early OpenGL-capable graphics cards from Evans and Sutherland in ~1997 to run Softimage on Windows NT with a dual PII 266 based machine...
The card had 32MB of graphics RAM. It cost me $1,699, and it was a full-length AT board.
I was trying to get an O2 -- but it was way out of my price range.
Sorry, I've posted this before so I always feel like I'm blabbering if I repeat it - but here goes - I was 14, and my dad had a print business and printed all the cockpit panels for a private company developing an F16 simulator for the Israeli air force. He took me to their offices one weekend.
The setup was a full 180-degree screen projection, a realistic 1:1 F16 cockpit with all the panels and buttons etc., and an SGI running the show. They gave me the spinning Beetle car demo, and then sat me down to fly. That day left a hard imprint (including the price tag on the SGI, which they were proud to mention). I was an Amiga kid, and to top it all off, the other room had what seemed like hundreds of Amigas, which were used to build 3D models for the simulator.
I remember that flight simulator demo, it was something you'd find at events and trade shows or even super fancy arcades. This was back in the first VR 'boom' and tail end of the era of arcades. Some companies used SGI and similar powerful workstations to build simulator game pods, like for mechwarrior and spacecraft racing games. People would pay for a 5-10 minute session in one.
"Demo" can mean a lot of things. Getting hands on a VIP pass at an airshow meant I received, as an 8 year old, a rather comprehensive demo... of JAS-39 Gripen multirole fighter jet. Just the seat I sat for half an hour cost $250k.
Sometimes you can get yourself into really interesting places :)
Something I completely didn’t appreciate as a kid is that they designed an entire 3D scene for every background and rendered a 2D output of it. For some reason I just assumed it was all drawn.
I could say "I'm not that old" but I guess that's subjective, since the article says this is archeology.
I think it's been said elsewhere, but those entry level SGI boxes were around $25,000 in the mid 90s (and oh boy did I ever drool over those things back then).
I made out alright, with some friends on IRC having mailed me warez CDs, from their tiny middle-American town in Iowa to my depressed backwoods in WV... precious gold discs arrived loaded with wintel based 3D modeling and DAW software.
Forget the machines: I'm vaguely impressed by the controllers with lots of weird buttons, with analog knobs, and I think with some LCD screens—casually sitting before the monitors. These days every home video editor can buy such things—but were people doing much software video editing in '96? Wonder how many of them were sold in a year.
I was strolling in the mall when I noticed a really cool movie on a TV inside a random store; it was FFVII. Damn, I thought, this is the best 3D I have ever seen. By the way, I had never played the game by that point, none of the franchise. I kinda got familiar with the lore just so I could watch the movie. Excellent job.
I studied 3D animation in college from 2001-2004. Our lab was outfitted with tons of SGI Octane workstations. By the end, we were getting better performance out of the one lone Mac there, though. It was such an awesome animation lab. I kinda miss those days.
Really wondering how the Indy has this hagiographic reputation. It was, as far as I could tell at the time, the slowest and generally worst workstation you could buy. People bought them because they were waiting for unix technical software to be ported to Windows NT on x86 and didn't want to spend $100k per seat on RISC workstations they knew were already obsolete.
Oh this post brought back so many fun memories of college days playing Flight Simulator on SGI boxes in the computer lab. Having used IRIX on SGI Indy, Windows 3.1 on PCs seemed like a toy.
A bit random, but, the guy on the left must have been nodding or something, and the camera being used had a slow shutter speed. I did a double take in seeing two mouths.
The Onyx RealityEngine was a true beast. I did a lot of computer graphics work in the late 90s and it was a very impressive tool. Everybody laughed at my Linux box running software OpenGL!
There used to be such a wonderful diversity of architectures, operating systems and platforms, in comparison to today's boring landscape of really only 3 end-user platforms and basically 2 viable server environments. Alas, I miss the old days.
Bad management made the wrong bet, thought Itanium and Windows would take over the world.
But what really broke all UNIX workstation manufacturers' backs was the unwillingness to cannibalize their products with affordable machines. SGI workstations were not affordable to students, so they got x86 machines instead and installed Linux. Google was built with x86-based Linux boxes because that's what the founders were using and could afford. UNIX workstation manufacturers lost an entire generation of young engineers that way. Apple eventually offered what they should have: Sleek, affordable machines with a rock-solid UNIX underneath a polished UI.
This is why some people are so excited about RISC-V, BTW - they're re-enacting the exact same market play as x86 did back then. Starting out from low-end hardware only good for single-purpose use (we call that "embedded" these days) and scaling up to something that can run a proper OS, with MMU and virtual memory support. And doing it while beating everyone else on price, as well as potentially on performance.
I think it was 2001? that Industrial Light & Magic (ILM) replaced their SGI workstations with linux boxes running RedHat 7.5 and powered by a Nvidia Quadro2 gpu.
They made most of their money on expensive hardware. Their workstation market was killed by Windows NT and OS X (later Linux too) running on mass-market CPUs once graphics accelerator boards became good enough. Their server market was killed by Windows and Linux running on mass-market CPUs.
In another life I worked on a SGI Onyx for Print PrePress of Rotogravure Cylinders. Now I am working in VFX and sometimes I read up on the history of it usually the Onyx pops up and even if I did something completely different than the VFX artists at the time I get nostalgic.