Disk copy protections

Page 2/2

By TomH

Champion (327)


29-11-2018, 18:49

I agree entirely with NYYRIKKI's existing response, but have a few extra pieces of colour to add:

sd_snatcher wrote:
TomH wrote:

So the diversity isn't anywhere near as great as a recitation of part numbers would make it appear.

Take a look at the openMSX FDC source code. Then you'll realise it's not only a matter of the chip(s) used, but also: ...

The issue raised was whether the protection schemes described for the Atari ST might have been used on the MSX, as I argued they easily could. So the question is whether the MSX disk controller offers the same discrepancy between what it can read and what it can write.

That I'd have to write eight versions of the test for different methods of accessing the chip isn't, I think, especially relevant.

sd_snatcher wrote:
Quote:

The WD33C93A is a SCSI controller so isn't used for floppy drives

Back then, I saw quite a few people using LS-120 and floptical drives connected to their MegaSCSI interfaces, along with a hard disk, because this saved a slot by using a single interface. So yes, they can be used as floppy drives, and will even work with 1.44MB disks. The same goes for the IDE versions of the LS-120 drives.

I don't think anybody is seriously discussing whether they should engineer new copy protection now, or should have done after the original commercial death of the MSX.

But that's not the only point: mass storage interfaces can use SofaRunit to emulate floppy drives without any expensive hardware juggling.

sd_snatcher wrote:

My point is: from the MSX software point of view, all that black magic doesn't matter, except maybe for fuzzy sectors/tracks. The majority of the other tricks would just be translated as sector read errors. That's all the user software would receive. IOW, anyone with FDC skills could easily replicate a read error for that sector/region just by using whatever simpler trick they wanted.

The terminology is a hassle, but to my mind:

Assuming fuzzy bits are those placed directly on a window boundary so that they fall one way or the other, you detect them by reading the same sector a few times; it should randomly switch between one of two values, one of which may have a valid CRC. So a controller-independent implementation could read the same single-sector file repeatedly and expect it to fail approximately 50% of the time.
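That controller-independent check can be sketched in Python. Everything here is a stand-in rather than real FDC access: the fuzzy sector is modelled as a read that randomly resolves to one of two 512-byte images, and the seed exists only so the sketch is reproducible.

```python
import random

rng = random.Random(1)  # seeded only so the sketch is reproducible

# Hypothetical stand-in for a real sector read: a fuzzy bit sitting on a
# bit-cell window boundary resolves randomly to one of two values on
# each pass under the head.
def read_fuzzy_sector():
    stable = bytes(512)             # the resolution with a valid CRC
    flipped = bytes(511) + b"\x01"  # the other resolution; CRC fails
    return stable if rng.random() < 0.5 else flipped

def looks_fuzzy(read_sector, tries=32):
    # A fuzzy sector flips between exactly two images, roughly 50/50;
    # an ordinary sector (or any normal copy) reads back identically
    # every time.
    seen = {read_sector() for _ in range(tries)}
    return len(seen) == 2

print(looks_fuzzy(read_fuzzy_sector))   # True on the "protected" disk
print(looks_fuzzy(lambda: bytes(512)))  # False on a plain copy
```

The point of the sketch is that the test needs nothing below the "read this sector" level, which is why it ports across controllers.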

Taking weak bits as those sections of the disk that aren't encoded in any strong fashion, the detection mechanism is to look for a different set of bytes on every read. You don't tend to pay any attention to the CRC.
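As a sketch, again with the hardware modelled rather than read: the free-running data separator over a weak region is simulated as a source of random bytes.

```python
import random

rng = random.Random(7)  # seeded only so the sketch is reproducible

# Hypothetical model of reading a weakly/un-encoded region: with no
# strong flux transitions the data separator free-runs, so every read
# returns a different stream of junk bytes.
def read_weak_region():
    return bytes(rng.getrandbits(8) for _ in range(512))

def looks_weak(read_region, tries=4):
    # Weak region: every read differs from every other. The CRC is
    # ignored entirely; only the byte-to-byte variation matters.
    reads = {read_region() for _ in range(tries)}
    return len(reads) == tries

print(looks_weak(read_weak_region))    # True for the weak region
print(looks_weak(lambda: bytes(512)))  # False for ordinary data
```

A copier that writes back any one captured read produces a region that reads identically every time, which is exactly what the check rejects.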

For sector-in-sector, which is a slight misnomer since the one needn't fully contain the other, just overlap it, you read both sectors and check their exact values, having put enough data on the track that there isn't room for each sector individually. You can engineer it so that the CRCs always pass.
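A toy model of that, with the track as a plain byte array and made-up offsets, lengths and fill values standing in for real sector headers and data:

```python
# Hypothetical sketch of a sector-in-sector check. On the original, two
# sector headers point into overlapping byte ranges of one short track,
# so both "fit"; a stock controller lays sectors out without overlap,
# runs out of track, and truncates the second one.
TRACK_LEN = 820  # assumed usable bytes on the (deliberately short) track

def make_track(overlapping):
    track = bytearray(TRACK_LEN)
    # (offset, length) per sector id; the original overlaps the two.
    layout = ({1: (0, 512), 2: (300, 512)} if overlapping
              else {1: (0, 512), 2: (512, 512)})
    # Master sector 1 then sector 2; on the original, sector 2's head
    # overwrites sector 1's tail.
    for sid, fill in ((1, 0x11), (2, 0x22)):
        off, ln = layout[sid]
        usable = max(0, min(ln, TRACK_LEN - off))  # the copy truncates
        track[off:off + usable] = bytes([fill]) * usable
    def read_sector(sid):
        off, ln = layout[sid]
        return bytes(track[off:off + ln]).ljust(ln, b"\x00")
    return read_sector

def protection_present(read_sector):
    # Read both sectors and check their exact values: sector 1's tail
    # must show sector 2's data, and sector 2 must be intact.
    return (read_sector(1)[300:] == bytes([0x22]) * 212
            and read_sector(2) == bytes([0x22]) * 512)

print(protection_present(make_track(True)))   # True: overlapped original
print(protection_present(make_track(False)))  # False: non-overlapping copy
```

Both reads complete with valid CRCs in this scheme; the copy fails only on the exact-value comparison.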

For variable rate data — storing one sector close to the top of readable tolerance, another close to the bottom, and timing how long each takes to read — that's also not detected as a CRC error. It's taking advantage of the fact that controllers write at a fixed speed but read within a tolerance, so if you copy it using an ordinary machine then the two sectors will take very close to the same amount of time to read, rather than being observably different.
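The timing check can be sketched the same way; the nominal time and the ~4% skews below are assumed illustrative numbers, not measurements from any real drive.

```python
NOMINAL_US = 512 * 32  # assumed nominal time to read one sector, in µs

# Hypothetical read-timing model: a controller writes at a fixed rate
# but reads within a tolerance, so the original can carry one sector
# ~4% fast and one ~4% slow, while any normal copy rewrites both at
# the nominal rate.
def read_times(fast_skew, slow_skew):
    return (NOMINAL_US * (1 + fast_skew), NOMINAL_US * (1 + slow_skew))

def looks_original(t_fast, t_slow, threshold=0.03):
    # Genuine disk: the two timed reads differ by several percent;
    # a copy shows near-identical times.
    return abs(t_slow - t_fast) / t_fast > threshold

print(looks_original(*read_times(-0.04, +0.04)))  # True: original skews
print(looks_original(*read_times(0.0, 0.0)))      # False: fixed-speed copy
```

On real hardware the two times would come from a spin-loop or timer around the read calls, which is why this check survives any copier that re-masters the data at write speed.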
