I recently wrote a "remix" of visual6502 (just for fun) in C (and a bit of C++), compiled to WASM, to see whether the rendering performance of the chip visualization could be improved while still running in browsers, and also to improve the "UX" a bit:
Check Help -> About for a list of dependencies used in that project (lots of good stuff in there), the two most important being the original data sets from visual6502, and a C re-implementation of the transistor-level simulation called perfect6502 (https://github.com/mist64/perfect6502).
There's no display for that; I was mostly interested in the single-stepping capability for investigating the chip behaviour and validating it against my CPU emulators.
But when clicking the "play" button, it's throttled to one half-cycle per 60 Hz display frame (via requestAnimationFrame), so it should "usually" run at 30 Hz.
I haven't checked how fast the WASM version runs unthrottled compared to a natively compiled version of perfect6502, but performance should be somewhat close (much closer than to the JS version, anyway).
From what I've seen, the C rewrite in perfect6502 uses a handful of compact arrays for the simulation state, unlike the JavaScript version, which looks more like a huge graph of linked nodes where each node is a JS object, so the C version should be a lot more cache-friendly.
This page was confusing to me until I followed the GitHub project link and saw this: "Transistor level 6502 Hardware Simulation in Javascript". Why that sentence isn't anywhere on the demo page is a mystery, though.
A bit off-topic, but I'm constantly annoyed by applications using 'x' and 'z' for related operations, like zoom in and out in this case.
The reason is that German keyboards use the QWERTZ layout, and as you can tell from the name, the 'z' key is in the upper row, right in the middle.
Maybe use 'w' and 's' instead? That's the default in first-person-type games. Actually, never mind, that doesn't work for the French who have AZERTY...
Since mouse dragging works fine, I expected the scroll wheel to control zooming. In fact, that was the first thing I tried before reading the instructions.
But AFAIK nobody really knows yet whether it works in all situations, because not all of the "trap transistors" that the Zilog designers put in to make reverse engineering harder have been found yet.
...maybe it would have been better to decap one of the "unlicensed clones" of the Z80, like the East German U880, because that definitely had the trap transistors fixed ;) Then again, the U880 also had some minor differences in the undocumented behaviour.
Visual 6502 is a godsend for emulation. For a brief time I dabbled in emulating the 6502 and every question that couldn’t be answered by the manual was answered by this.
You can actually check this on the webpage while the simulation is running: on my machine it shows around 17 Hz, so it's roughly 60000x slower than a 1 MHz 6502.
For comparison, the C reimplementation of the transistor-level simulation, running unthrottled and without visualization (I think that's the main performance killer), is only about 150x slower than a real 6502 on a modern CPU (according to the readme here: https://github.com/mist64/perfect6502).
Yes, but IMHO spreading the simulation over multiple threads would be quite a challenge. As far as I understand it, the simulation essentially starts with an initial state of high/low nodes, then propagates each change to the connected nodes throughout the node graph until the entire chip simulation "settles down", and only then moves on to the next step.
Maybe this sequential algorithm could be converted into some sort of parallel "cellular automaton", which would probably be a much better fit for GPUs than CPUs.
https://floooh.github.io/visual6502remix/