The Z80 speed vs today’s modern processors


By Grauw

Ascended (10772)


20-01-2023, 18:06

Actually, during my day job working with modern systems and game development, I often wonder how certain things can still be so slow when today’s machines are a million times faster than our humble MSX.

Why is it so hard to run games at 60 fps? Why do we still have to worry about the performance cost of a heap allocation, or about how many objects are registered to receive frame tick events (it’s just a method call)?

Sometimes I’m baffled by it, thinking “I can probably implement this even on MSX with acceptable performance, so why is this such a big deal”.

By santiontanon

Paragon (1810)


20-01-2023, 19:58

Haha, indeed. All the layers of abstraction we use in modern code are mostly to blame, I would say. You write a line of JavaScript code, which then needs to be interpreted/JIT-compiled and then executed on some VM, etc. The second culprit is modern data structures, which are very flexible but slow compared to what we used to do in the 80s :) Adding an element to a fixed-length array is just 2-3 CPU instructions, whereas adding it to a linked list probably requires hundreds :)
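
To make that gap concrete, here is a small hypothetical C sketch (just an illustration I made up, not code from any real project): the array append boils down to a bounds check, a store and an increment, while the linked-list append has to go through the heap allocator, which by itself can cost hundreds of instructions.

    /* Hypothetical illustration: appending to a fixed-size array versus a
     * linked list in C. */
    #include <stdlib.h>

    #define CAPACITY 256

    typedef struct node { int value; struct node *next; } node;

    /* Fixed-size array append: a handful of CPU instructions. */
    static int array_append(int *arr, int *count, int value)
    {
        if (*count >= CAPACITY) return -1;   /* array full */
        arr[(*count)++] = value;
        return 0;
    }

    /* Linked-list prepend: one heap allocation plus pointer bookkeeping. */
    static node *list_prepend(node *head, int value)
    {
        node *n = malloc(sizeof *n);
        if (n == NULL) return head;          /* allocation failed */
        n->value = value;
        n->next = head;
        return n;
    }

    int main(void)
    {
        int arr[CAPACITY];
        int count = 0;
        node *head = NULL;

        array_append(arr, &count, 42);
        head = list_prepend(head, 42);
        free(head);
        return 0;
    }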

By Micha

Expert (103)


20-01-2023, 20:08

Grauw wrote:

Sometimes I’m baffled by it, thinking “I can probably implement this even on MSX with acceptable performance, so why is this such a big deal”.

This is exactly what I'm thinking every time I push a button on my microwave; the screen reacts so slowly... Not to mention the firmware on my TV, and the startup times are bloody annoying too.

I'm working on something in Unity right now and I was copying the project I had just started (nothing fancy yet), and the project already contained 30,000 (!) library files. So by definition I don't know what I'm doing, and neither do all the other people that develop modern software.
I think the MSX1 (and maybe the MSX2) is a computer that one person can eventually fully understand, but with more complex computers that is impossible.

By santiontanon

Paragon (1810)


21-01-2023, 13:08

Btw, I just compiled that benchmark and ran it on my personal computer (with an Apple M1), and got 19230769 dhrystones.

I had to modify the syntax a bit for modern C compilers, and increase the number of loops 10x, as otherwise it was too fast and the measurement was not reliable.

So, the Z80 @ 2.5MHz is 91 and the Apple M1 is 19230769 -> about 211327 times faster. Bearing in mind that I was running it on a single core out of the 8 cores, it might even be 1 million times faster for this benchmark. Moreover, notice that this benchmark only uses integer arithmetic, so if we include floating point arithmetic, vector arithmetic, etc., we can easily get into the 10 million or 100 million range, or even more haha
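
For anyone who has not seen how these figures are produced: the reported number is simply benchmark iterations divided by elapsed time. Below is a minimal sketch of such a driver loop (my own illustration, not the actual Dhrystone source; one_pass() is a stub standing in for the real benchmark work), which also shows why the loop count has to be raised on a fast machine before the timing becomes reliable.

    /* Minimal sketch of a Dhrystone-style driver loop (illustration only, not
     * the real benchmark): the reported score is iterations per second. */
    #include <stdio.h>
    #include <time.h>

    #define LOOPS 100000000L       /* raise until the run lasts a few seconds */

    static volatile long sink;

    /* Stub standing in for the real benchmark work (the genuine Dhrystone does
     * string copies, record assignments and integer arithmetic here). */
    static void one_pass(void)
    {
        sink += 1;
    }

    int main(void)
    {
        long i;
        clock_t start = clock();

        for (i = 0; i < LOOPS; i++)
            one_pass();

        double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
        if (seconds <= 0.0) {
            printf("Run too short to time reliably; increase LOOPS.\n");
            return 1;
        }
        printf("%.0f dhrystones per second\n", (double)LOOPS / seconds);
        return 0;
    }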

By Bengalack

Paladin (747)


21-01-2023, 21:24

Very nice, Santi. A massive number. Probably a reasonable number too, but we should just adjust it towards 3.5MHz. My take on multicore is that you can simply multiply the score by the number of cores. The cores are there, part of the available power, and 8 cores are quite normal today.
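
A rough back-of-the-envelope check, using only the numbers already mentioned in this thread: 91 Dhrystones/s at 2.5MHz scales to about 91 × 3.5 / 2.5 ≈ 127 at 3.5MHz, so 19230769 / 127 ≈ 151000 times faster per core, and multiplying by 8 cores gives roughly 1.2 million.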

I think it makes most sense to compare ints (and to be clear that the comparison covers only that), as we have proper ints on the MSX. A floats comparison will not be as fair, since there is no hardware float support on the MSX.

New CPUs are aided by frequencies that are 1000 times higher, many more registers, a bigger word size, new special opcodes, prefetching, level 1 and level 2 caches, pipelining, branch prediction, superscalar execution, out-of-order processing, and some more juice.

One could also say that PCs/devices today may also get help from dedicated processors for cryptography and video encoding, and from GPUs, which all speed things up massively. Sure, but then we can also claim that the V9938 is like a GPU :) So it makes sense to look at the CPU alone.

By Edevaldo

Master (154)


22-01-2023, 05:05

Quote:

Btw, I just compiled that benchmark and ran it on my personal computer (with an Apple M1), and got 19230769 dhrystones.

This number seems a bit off. The M1 should give about 30000-50000 DMIPS (per core), or 53-88 MDhrystones/s. Optimizations make a difference; it may be that.

The Z80 in the MSX measures at about 226 Dhrystones/s, or ~0.13 DMIPS. I remember measuring those numbers ages ago and finding good agreement with the article below. I measured it on a Gradiente Expert, and on that board it is easy to remove the M1 (not the Apple one :-)) wait state. When removing it I got 0.142 DMIPS, which is the number that he got for a Spectrum. So it seems to make sense.

Z80 Dhrystones

I think the 91 figure for a 2.5MHz Z80 is a bit on the low side. But it really depends on the quality of the compiler and the string routines in the C library. In the old days those used to vary a lot.

By santiontanon

Paragon (1810)


22-01-2023, 14:39

I think the meaning of a "Dhrystone" changed over time. I read in this article ( http://www.roylongbottom.org.uk/dhrystone%20results.htm ) that at some point Dhrystone measurements were changed by dividing the result of the test by 1757, and also that there are later versions of the benchmark for modern C compilers, since, as originally coded, modern compilers would be smart and optimize parts of the operations away. So it could be a combination of both. But I just wanted to run the version of the benchmark for which we had numbers for the Z80 :)
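
As a quick sanity check of that factor (just back-of-the-envelope arithmetic on numbers already quoted in this thread, so take it with a grain of salt): 226 Dhrystones/s ÷ 1757 ≈ 0.13 DMIPS, which matches the MSX figure above, and by the same conversion 19230769 Dhrystones/s would be roughly 10900 DMIPS.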

That being said, it might be interesting to compile newer versions using the existing modern C compilers for MSX, and do the same for modern computers. I am not very familiar with the different C compilers for MSX, but is there any compiler that can generate code for both Z80 and Intel or ARM? That way we could compare more fairly.

Edit: wait, the article you linked has already done part of what I suggest! (I should have read it before commenting, and not the other way around, haha)
