Questions on writing a new Mega CD emulator

Ask anything you want about Mega/SegaCD programming.

Moderator: Mask of Destiny

Nemesis
Very interested
Posts: 791
Joined: Wed Nov 07, 2007 1:09 am
Location: Sydney, Australia

Re: Questions on writing a new Mega CD emulator

Post by Nemesis » Sat May 04, 2019 8:30 am

byuu wrote:
Sat May 04, 2019 5:49 am
You are correct that the TOC is stored in the Q channel within the lead-in. Pretty sure the CD ripper programs that extract subchannel data don't even include this; that info is stored in the cue file so it can be recreated.
So then, as I presume the Mega CD BIOS expects to be able to read the raw TOC data, it would be nice to be able to recreate it, even if for now we have to HLE the CDD that reports on it. I don't know the format, though.
I can help with that. It's pretty straightforward. First of all, you need the authoritative source, the "Red Book" standard:
http://nemesis.exodusemulator.com/MegaD ... system.pdf
In that, check out section 17.3, which talks about the subcode structure in general. Follow that to section 17.5.1 specifically, and you'll find a section that states "During the lead-in track, the data format shall be:", which shows how the channel Q subcode data is used to encode the TOC information in the lead-in area. Of course, a sample makes it easier to understand, so here's some TOC data from a MegaLD rip:
http://nemesis.exodusemulator.com/MegaLD/TOCSnippet.bin
This is just the subcode data, stored in a de-interleaved format (so that each subcode channel appears as a separate sequence of bytes). There are 96 bytes per sector, with 12 bytes per subcode channel running from P-W. Here's a handy program to interpret it:
http://nemesis.exodusemulator.com/MegaLD/sca073.lzh
Open up the TOCSnippet.bin file with that program, and it'll decode the Q channel into its sections for you. The names are based on the ones used for the standard "audio" sectors, but the standards document above will help you map them correctly. One nice thing about the CloneCD (ccd) format is that it actually lists the separate decoded Q channel fields in its file, at least in the ones I've seen, so it's fairly easy to map the entries in the file to the actual subcode data and rebuild it from that.
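
If a code sketch helps as a starting point, here's roughly how you could pull the raw Q entries out of a file laid out like that. This is just a minimal Python sketch assuming exactly the layout described above (96 de-interleaved subcode bytes per sector, channels ordered P through W, so the Q bytes sit at offset 12 of each group):

Code: Select all

def read_q_entries(path):
    """Collect the 12-byte Q channel entry from each 96-byte subcode group."""
    entries = []
    with open(path, "rb") as f:
        while True:
            group = f.read(96)              # one sector's worth of de-interleaved subcode
            if len(group) < 96:
                break
            entries.append(group[12:24])    # channel Q is the second 12-byte block
    return entries

# Dump the first few Q entries from the sample file for inspection.
for q in read_q_entries("TOCSnippet.bin")[:8]:
    print(q.hex(" "))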

Nemesis
Very interested
Posts: 791
Joined: Wed Nov 07, 2007 1:09 am
Location: Sydney, Australia

Re: Questions on writing a new Mega CD emulator

Post by Nemesis » Sat May 04, 2019 9:25 am

byuu wrote:
Well, you know me (probably, anyway). I've already dealt with this kind of problem before, with the unacceptable way Famicom Disk System images are currently stored, and I'll handle this the same way.

I'll support a new image format that contains all data: lead-in, pregap, postgap, lead-out, subchannel data, etc in scrambled form.

Understandably, we can't yet (or possibly ever) create images in this format, but like with FDS, I can write code that "imports" a Mega CD image from a variety of lossier formats by trying to parse cue sheets, rescramble data, etc. I'll store data at 2448 bytes/sector. Raw channel frames would be pushing it.

I get that the data won't be guaranteed to be accurate when it's not truly ripped from the original CD, but this way I can *support* a proper rip in the event that it ever becomes possible, and the emulation core only has to implement one format instead of 30 formats.

Conversely, people are going to lose it when they find out higan expects them to import entire CDs to play their games. But I think it's too complex to have the emulation core support every image format on the planet and generate the data dynamically.

It sounds like you know what you're doing, so if I can get the Mega CD emulated, I'll defer to your advice on a LaserDisc image format to support.
I'm leaning the same way for the LaserDisc rips, in terms of defining a new format. I do, however, think there's merit in separating the sector data and subcode data into separate files; my suggestion would be something like .rbd (RedBookData) and .rbs (RedBookSubcodes). There are a few good reasons to separate the data like this:
-When users are making their own rips, disks without unrecoverable errors should end up with identical .rbd files. The .rbs files, however, will vary greatly between successive rips, even by a single user. You make life easier for people ripping and reconciling images if you keep the subcode data separated: it's easy to update it with newly corrected rips (i.e., by reconciling a few users' separate corrected rips) without invalidating the much larger .rbd file, and you can store/transmit multiple subcode rips separately from the .rbd content. Release groups and people downloading hashed sets will appreciate this greatly.
-CD images will compress better with the files separated
-The subcode data and sector data are actually separate streams of data anyway; they just happen to be interleaved together for encoding purposes on the CD itself. It's "cleaner" to be able to browse both of them as fully intact data streams rather than a fragmented interleaved stream, if you're working with the raw data.
-Many tools already exist to work with separate subcode files and separate 2352-byte sector dump files. By keeping the data separate, you can still leverage a lot of those tools.
-Most existing images can easily be converted to .rbd files by scrambling/padding out the existing data, but if there's no subcode data, the .rbs file will have to be generated from scratch. It feels nicer to have the truly generated data separate from the known good data when you're generating dummy subcode data.

These are my thoughts anyway. In terms of converting to/from other formats, a CloneCD image with subcode data is a viable target. You can use the "DataTracksScrambled" setting to know/control whether the ccd image has scrambled sectors or not, and the TOC data is easy to convert. IsoBuster at least works with a full pre-gap and scrambled sectors in this format too, but I'm not sure how many other programs do. At any rate, this can be your preferred source format for converting to/from this .rbd/.rbs format.
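
As a rough illustration of getting those decoded Q/TOC fields back out of a ccd file, here's a short Python sketch. The [Entry N] section layout and the field names (Point/ADR/Control/PMin/PSec/PFrame) are assumptions based on the ccd files I've seen, and "game.ccd" is just a placeholder name, so treat this as a starting point rather than a definitive parser:

Code: Select all

import configparser

def read_ccd_toc_entries(path):
    """Collect the decoded Q/TOC fields that CloneCD stores per [Entry N] section."""
    ccd = configparser.ConfigParser()
    ccd.read(path)
    entries = []
    for name in ccd.sections():
        if not name.startswith("Entry"):
            continue
        section = ccd[name]
        entries.append({
            "point":   int(section["Point"], 16),   # track number, or A0/A1/A2 specials
            "adr":     int(section["ADR"], 16),
            "control": int(section["Control"], 16),
            "pmin":    int(section["PMin"]),
            "psec":    int(section["PSec"]),
            "pframe":  int(section["PFrame"]),
        })
    return entries

for entry in read_ccd_toc_entries("game.ccd"):
    print(entry)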

Eke
Very interested
Posts: 884
Joined: Wed Feb 28, 2007 2:57 pm
Contact:

Re: Questions on writing a new Mega CD emulator

Post by Eke » Sat May 04, 2019 11:59 am

byuu wrote:
So then, as I presume the Mega CD BIOS expects to be able to read the raw TOC data,
No, it doesn't. The BIOS only reads the TOC data through CDD commands/status. The Q channel raw data is decoded by the CD DSP chip and sent to the CDD micro-controller (through the SUBQ signal), which extracts the TOC info from it while in the lead-in area, and the current track info while in the program area.

Subcode raw data (P-W) is also sent to the Gate Array by the CD DSP (through other signals) and stored in a buffer area accessible by software, but AFAIK only the R-W bits are used, for CD-G disc processing (the TOC data in the Q channel is, AFAIK, never interpreted by the BIOS). The subcode buffer is still updated on each sector for CD audio or games, but it goes unused unless you call the subcode BIOS functions (which apparently only the CD player software does, and only for CD-G reproduction), hence why subcode data is not really needed for Mega CD games.
Last edited by Eke on Sat May 04, 2019 12:42 pm, edited 1 time in total.

Nemesis
Very interested
Posts: 791
Joined: Wed Nov 07, 2007 1:09 am
Location: Sydney, Australia

Re: Questions on writing a new Mega CD emulator

Post by Nemesis » Sat May 04, 2019 12:39 pm

What Eke says is true, and no existing games rely on subcode data, but if you want to emulate/preserve the hardware itself rather than just the games, and if you want to support homebrew, all the subcode data is made available to programs running on the system and should be properly supported for complete MegaCD emulation. It's also possible to inadvertently pick up some raw TOC data in user code: when you tell the drive to seek to a certain sector and play, it's common for it to pre-seek by a few sectors, at least on the MegaLD hardware (I haven't tested on a normal MegaCD). If you tell it to seek to the start of the pre-gap area and it pre-seeks, you can end up getting a few sectors' worth of subcode data from the lead-in, containing partial TOC data.

Eke
Very interested
Posts: 884
Joined: Wed Feb 28, 2007 2:57 pm
Contact:

Re: Questions on writing a new Mega CD emulator

Post by Eke » Sat May 04, 2019 1:12 pm

I agree, and I'm all for supporting a full image format with lead-in/lead-out sectors, if it's ever possible to get these dumped more accurately than the current TOC info reported in cue/ccd/toc/etc. files, and if the people concerned end up with a format that is 'perfect' enough for everybody.

Personally, I also have a preference for separate 'data' and subcode files, but I think going through the unscrambling and error-correction layers in real time is a little bit too extreme, and probably unnecessary to be indistinguishable from real hardware from the software's point of view... unless you are planning to emulate CD bitstream random errors (or the CD-DSP / RF signal processor / amp / servo driver chips at a low level :roll: )

Sik
Very interested
Posts: 939
Joined: Thu Apr 10, 2008 3:03 pm
Contact:

Re: Questions on writing a new Mega CD emulator

Post by Sik » Sat May 04, 2019 1:33 pm

byuu wrote:
Sat May 04, 2019 5:49 am
I'm gonna laugh if we end up redumping every single CD from scratch using a Mega CD/LD simply because it's capable of dumping all that.
That's gonna get real expensive, real quick with the high and rapid failure rate of Sega CD hardware.
I mean, we probably want to do maintenance on the mechanical parts of the drives anyway (and from experience the custom chips rarely die, so outside the mechanical parts, most of the failures would be in discrete components like capacitors, which are cheap). I wonder how a full refurbish of a drive would go; if it works, it should hopefully last a good many years (no idea if this could also have an effect on the tolerance of the parts, as the Mega CD as-is doesn't seem to like the tighter pitches of newer discs).

The most important part here seems to be the part that controls the drive, not so much the pieces themselves.


EDIT: by the way, I wonder if all this talk about CD accuracy could be relevant to Wonder Library? We could never find a disc that thing is OK with reading in an emulator… I think there was some disc out there that was supposed to be compatible with it, but if it was, we could never get it to load. (Part of the issue is finding a suitable disc in the first place; we aren't even sure what format it used.)
Sik is pronounced as "seek", not as "sick".

Near
Very interested
Posts: 109
Joined: Thu Feb 28, 2008 4:45 pm

Re: Questions on writing a new Mega CD emulator

Post by Near » Sat May 04, 2019 2:08 pm

I'm not opposed to splitting the subcode data, but my question is then, where do we stop?

Should we split data sectors' 304 bytes of extra data to take 2352-byte sectors to 2048-byte sectors, and then have a separate .ecc file?

They would probably compress better without the error correction data.

Then again, Sega CD games definitely mix audio and data tracks, so it'd be pure chaos seeking if you did this.

Another question: is there any reason to store data scrambled on disc? Won't it destroy compression ratios?

Firebell pointed me at a weird issue in Silpheed that I did not understand: http://redump.org/disc/39378/
First 88706 sectors are correct, the next sector has 1104 bytes of data, then 7508 bytes of zeroes, then it goes back to data, but, of course, shifted by 7508 bytes. Therefore, the first 7508 bytes of the audio track also belong to the data track. So the last 68255 sectors of the data track were left scrambled to preserve this anomaly. The 1st pregap sectors (LBA -150 to -1) have the same issue. Technically, the whole CD is mastered incorrectly and defective.
I understand that whether we store scrambled or descrambled, we're going to have to be able to convert the data to the opposite form since the Sega CD can read back data with or without descrambling.

But I don't understand why it would be necessary to store data scrambled. And if we did store it scrambled, then compressibility of subcode data being separate seems moot (but the file checksum concern remains a valid point.)

...

Aside, that TOC PDF format doc looks rather intimidating ... very heavy reading, fun. Well, I'll do my best and see where we get.

F1ReB4LL
Interested
Posts: 15
Joined: Thu Apr 24, 2008 6:46 pm
Contact:

Re: Questions on writing a new Mega CD emulator

Post by F1ReB4LL » Sat May 04, 2019 5:26 pm

Nemesis wrote:
Fri May 03, 2019 11:59 pm
If you can get three or more rips of each disk (preferably more like five or six), you can reconcile these errors and cancel them out, to get a true image of the disk as it really is, apart from surface damage or smudges on the disk surface that are interfering that is. If you can get three or more cleaned up rips like this from disks that have been pressed from the same master, you can then cancel out mastering errors from the individual presses too, building an accurate image of the original master, which will itself contain errors, but those are true errors from the factory, so in a preservation format like that you should preserve them.
That will never work. You will get a more or less 'constant' image, but when you do your 5 or 6 dumps on another unit, the image will have many 'new' differences. Different lasers read certain bits differently; you can't really filter out all the reading errors because of that.
The same story with the subchannels, btw. You can do 10, 20, 30 dumps on a single drive and combine them into a more or less reliable dump, but another drive will give you quite a different result. You can preserve the 2-bit (and 3-bit, 4-bit, etc.) subchannel mastering errors, but not the 1-bit ones; it's just impossible to distinguish them from 1-bit read errors. Dumping different copies of the disc with the same ringcode allows you to move a little further, but, again, there's no way to get a 'perfect' sub with all the pre-recorded mastering errors; they need to be fixed to get a good reproducible dump.
Nemesis wrote:
Sat May 04, 2019 12:05 am
Actually, from what I've seen, the purported reading of the "lead-in" area on Plextors is just the "pre-gap", not the true lead-in. That term is usually mis-used by both users and software in the PC space. There's no way to instruct a PC drive to seek to the lead-in area, just the pre-gap. You can read into the lead-out on a lot of drives though.
Exactly the lead-in. https://www.sendspace.com/file/1kb1ov -- here's a sample dump of lead-in + lead-out + ring for one of the Saturn titles (the 'main' image part is cut, so no copyrighted data there).
Nemesis wrote:
Sat May 04, 2019 12:05 am
The Saturn protection ring can't really be "read" in the traditional sense, as the key on that protection mechanism isn't a data issue, it's a geometric one. Normally data is written to the CD surface in a smooth spiral. For the Saturn copy protection, they "wobbled" the laser as it cut the track. The low-level CD hardware on the Saturn is still able to follow this spiral, but it reports the tracking error information as it goes, and the copy protection requires to "see" this wobble occur in order for the protection to pass. Getting a full data rip wouldn't help with that, you need geometric information to go with it. You'd need some geometric info for multi-session CDs for full preservation too, to report the physical location of the sessions on the physical CD surface, but tracking the wobble is next level above that.
The 'wobble' part could probably be captured by DPM measurements (I haven't experimented with it), but, technically, you don't need the wobble itself, you need the data encoded in it. Like the SCPS/SCES/SCUS encoded on PSX discs or the ATIP data encoded on CD-Rs.

TascoDLX
Very interested
Posts: 262
Joined: Tue Feb 06, 2007 8:18 pm

Re: Questions on writing a new Mega CD emulator

Post by TascoDLX » Sat May 04, 2019 6:05 pm

byuu wrote:
Sat May 04, 2019 2:08 pm
Should we split data sectors' 304 bytes of extra data to take 2352-byte sectors to 2048-byte sectors, and then have a separate .ecc file?
ECC is normally redundant data. Note that the 304 bytes also include header/sync and CRC data. In most cases, all of that data can be faithfully recreated. For instance, Neil Corlett created a utility, ECM, to remove (and subsequently regenerate) this data for compression purposes.

Unless you're archiving a rare image that has explicitly damaged ECC or header, something like ECM would be useful for storage efficiency (depending on your methods). For purposes of emulation, it would be technically proper to handle 2352-byte sectors and cook them down to 2048 as they are processed, but it's up to you if you want to try to optimize this.
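
To give a concrete idea of what "faithfully recreated" means for the non-ECC parts (this is just an illustration of mine, not how ECM itself is written): for an ordinary Mode 1 sector, the 12-byte sync pattern is a constant and the 4-byte header is simply the sector's address in BCD MSF plus a mode byte, so both can be rebuilt from the LBA alone. The EDC/CRC and ECC parity need the proper Red Book maths on top of this, which is what a tool like ECM implements. A minimal Python sketch, assuming the usual 150-sector offset between LBA 0 and MSF 00:02:00:

Code: Select all

SYNC = bytes([0x00] + [0xFF] * 10 + [0x00])    # fixed 12-byte sync pattern

def to_bcd(value):
    return ((value // 10) << 4) | (value % 10)

def mode1_sync_and_header(lba):
    """Rebuild the first 16 bytes of a raw Mode 1 sector from its LBA."""
    position = lba + 150                       # LBA 0 corresponds to MSF 00:02:00
    minute, remainder = divmod(position, 75 * 60)
    second, frame = divmod(remainder, 75)
    header = bytes([to_bcd(minute), to_bcd(second), to_bcd(frame), 0x01])  # 0x01 = Mode 1
    return SYNC + header

print(mode1_sync_and_header(0).hex(" "))       # ends in "00 02 00 01"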

Either way, you should be attempting to gracefully handle disc images with damaged 2352-byte data sectors, whether correctable or not -- though this might be something you'd want to handle in preprocessing rather than doing it live. Adding support for 2048-byte-sector images would be more sketchy due to lack of error detection, so I wouldn't blame you for not supporting those.

Nemesis
Very interested
Posts: 791
Joined: Wed Nov 07, 2007 1:09 am
Location: Sydney, Australia

Re: Questions on writing a new Mega CD emulator

Post by Nemesis » Sat May 04, 2019 10:34 pm

F1ReB4LL wrote:
Sat May 04, 2019 5:26 pm
Nemesis wrote:
Fri May 03, 2019 11:59 pm
If you can get three or more rips of each disk (preferably more like five or six), you can reconcile these errors and cancel them out, to get a true image of the disk as it really is, apart from surface damage or smudges on the disk surface that are interfering that is. If you can get three or more cleaned up rips like this from disks that have been pressed from the same master, you can then cancel out mastering errors from the individual presses too, building an accurate image of the original master, which will itself contain errors, but those are true errors from the factory, so in a preservation format like that you should preserve them.
That will never work. You will get a more or less 'constant' image, but when you do your 5 or 6 dumps on another unit, the image will have many 'new' differences. Different lasers read certain bits differently; you can't really filter out all the reading errors because of that.
The same story with the subchannels, btw. You can do 10, 20, 30 dumps on a single drive and combine them into a more or less reliable dump, but another drive will give you quite a different result. You can preserve the 2-bit (and 3-bit, 4-bit, etc.) subchannel mastering errors, but not the 1-bit ones; it's just impossible to distinguish them from 1-bit read errors. Dumping different copies of the disc with the same ringcode allows you to move a little further, but, again, there's no way to get a 'perfect' sub with all the pre-recorded mastering errors; they need to be fixed to get a good reproducible dump.
No, it works. Think of the random errors on a given disk as noise. Rip the same disk, on the same drive, a few dozen times. Now compare all the rips to each other, using a simple counted voting system: the bits with the highest occurrence win. This successfully cancels out most errors for a given disk. If there's a smudge or damage on the surface, you can get an essentially random result for some bits, and there will be errors that are unique to that individual pressing, but with enough samples this approach filters out the random failure "noise" very well and gets you the most accurate image you can of how an individual disk is reading. Now repeat this process for half a dozen disks which were pressed from the same master, and then compare those cleaned images to each other, using simple bitwise voting again. You'll cancel out errors again, but this time you'll be cancelling out errors from the individual disks, and end up with a representation of what the actual data was on the master pressing. I've already done this for a few MegaLD rips I have extra copies of. I know it works, I can give you a confidence percentage on every error correction, and it's very high.
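
To make the reconciliation step concrete, here's a toy Python sketch of the kind of counted voting described above. It votes per byte rather than per bit for brevity, it assumes the rips are already aligned to the same sector positions (which in practice is the fiddly part), and the rip1.bin/rip2.bin/rip3.bin names are just placeholders:

Code: Select all

from collections import Counter

def reconcile(rips):
    """Majority-vote each byte across several equal-length rips of the same disk.
    Returns the winning image plus a per-byte confidence value
    (winning votes divided by the total number of rips)."""
    length = len(rips[0])
    result = bytearray(length)
    confidence = []
    for offset in range(length):
        votes = Counter(rip[offset] for rip in rips)
        value, count = votes.most_common(1)[0]
        result[offset] = value
        confidence.append(count / len(rips))
    return bytes(result), confidence

# First reconcile the rips of a single disk, then feed the cleaned per-disk
# images back through the same function to approximate the master.
image, confidence = reconcile([open(p, "rb").read() for p in ("rip1.bin", "rip2.bin", "rip3.bin")])
print(min(confidence))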

EDIT: This assumes you can read raw subcode data without the drive attempting to validate or correct it, and that it will happily return subcode data with CRC failures. The part below about TOC data read from a Plextor drive suggests to me that this may be problematic when you're trying to read through software on a PC. It's not a problem on the MegaLD, or if you sample the raw RF from the laser pickup and decode from there.
Nemesis wrote:
Sat May 04, 2019 12:05 am
Actually, from what I've seen, the purported reading of the "lead-in" area on Plextors is just the "pre-gap", not the true lead-in. That term is usually mis-used by both users and software in the PC space. There's no way to instruct a PC drive to seek to the lead-in area, just the pre-gap. You can read into the lead-out on a lot of drives though.
Exactly the lead-in. https://www.sendspace.com/file/1kb1ov -- here's a sample dump of lead-in + lead-out + ring for one of the Saturn titles (the 'main' image part is cut, so no copyrighted data there).
You're right, that's the proper lead-in alright. Sucks you can't seem to read the dodgy sectors though. When I do a read from the MegaLD, I can read the weak sectors back (with corruption) until the sync goes bad and it totally breaks down. Not really required though, you can reconstruct with just a few good repeats of the TOC, and the lead-out is (supposed to be) totally known content.
Nemesis wrote:
Sat May 04, 2019 12:05 am
The Saturn protection ring can't really be "read" in the traditional sense, as the key on that protection mechanism isn't a data issue, it's a geometric one. Normally data is written to the CD surface in a smooth spiral. For the Saturn copy protection, they "wobbled" the laser as it cut the track. The low-level CD hardware on the Saturn is still able to follow this spiral, but it reports the tracking error information as it goes, and the copy protection requires to "see" this wobble occur in order for the protection to pass. Getting a full data rip wouldn't help with that, you need geometric information to go with it. You'd need some geometric info for multi-session CDs for full preservation too, to report the physical location of the sessions on the physical CD surface, but tracking the wobble is next level above that.
The 'wobble' part could probably be captured by DPM measurements (I haven't experimented with it), but, technically, you don't need the wobble itself, you need the data encoded in it. Like the SCPS/SCES/SCUS encoded on PSX discs or the ATIP data encoded on CD-Rs.
Hmmm, that data has been encoded as scrambled. Want to see what it looks like descrambled? It's quite a bit simpler.
http://nemesis.exodusemulator.com/MegaC ... led.img.7z
My understanding of Saturn copy protection, if I understood it correctly from the guy who cracked it, is that the SH1 (with an internal ROM the game can't mess with) that handles low-level communication with the drive gets a tracking error signal directly from the drive. This is effectively an extra data input. Unless this input indicates tracking errors in an expected pattern, the disk is going to fail copy protection. The data may be important too, but the tracking error is also important as I understood it.
Last edited by Nemesis on Sat May 04, 2019 11:14 pm, edited 1 time in total.

Nemesis
Very interested
Posts: 791
Joined: Wed Nov 07, 2007 1:09 am
Location: Sydney, Australia

Re: Questions on writing a new Mega CD emulator

Post by Nemesis » Sat May 04, 2019 11:06 pm

byuu wrote:
I'm not opposed to splitting the subcode data, but my question is then, where do we stop?

Should we split data sectors' 304 bytes of extra data to take 2352-byte sectors to 2048-byte sectors, and then have a separate .ecc file?

They would probably compress better without the error correction data.
I think compressibility is a fairly weak argument for separating the data; I only threw it in there as it's an upside, but the other reasons are far more compelling IMO. If it was only an issue of compressibility, I wouldn't bother.
byuu wrote:
Another question: is there any reason to store data scrambled on disc? Won't it destroy compression ratios?
The main reason to store the data scrambled is that that's how it's actually encoded on the disk, and descrambling it implies you know, precisely and exactly, when the data is actually scrambled. You could be wrong. Just because the subcode data indicates a section is a Mode 1 data region doesn't mean it necessarily is, nor does it mean the data has been encoded that way, no matter what the Red Book standard says should be done. Consoles like the MegaCD in particular are free to do whatever they want, and standards are frequently violated while things still work quite well in practice afterwards.

The good news is that scrambling/descrambling is a simple algorithm, and symmetrical too: if you run unscrambled data through it, you scramble it, and if you run scrambled data through it, you descramble it. All you need to do is pre-calculate a sector's worth of data as the "scrambling key", effectively, then XOR each byte of your sector after the sync header with the corresponding byte in that key. That's it; it boils down to a simple XOR operation per byte, so overhead/complexity isn't an issue here. It's "cleaner" to simply store the data in its true, original form, especially for emulation. If the LC8951 register settings request descrambling, do it (using the XOR table); otherwise pass the data through.
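
For reference, here's a minimal Python sketch of that pre-calculated XOR table. It follows my reading of the ECMA-130 Annex B scrambler (a 15-bit LFSR with polynomial x^15 + x + 1, seeded with 1, output taken LSB first, applied to the 2340 bytes after the 12-byte sync), so verify those details against the spec before relying on them:

Code: Select all

def build_scramble_table(length=2340):
    """Pre-compute the scrambling key: successive LFSR output bytes, LSB first."""
    table = bytearray(length)
    lfsr = 0x0001
    for i in range(length):
        value = 0
        for bit in range(8):
            value |= (lfsr & 1) << bit
            feedback = (lfsr ^ (lfsr >> 1)) & 1        # taps for x^15 + x + 1
            lfsr = (lfsr >> 1) | (feedback << 14)
        table[i] = value
    return bytes(table)

SCRAMBLE_TABLE = build_scramble_table()

def scramble(sector):
    """Symmetrical: scrambles a plain 2352-byte sector, descrambles a scrambled one.
    The 12-byte sync header is passed through untouched."""
    body = bytes(b ^ k for b, k in zip(sector[12:], SCRAMBLE_TABLE))
    return sector[:12] + body
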
byuu wrote:
Firebell pointed me at a weird issue in Silpheed that I did not understand: http://redump.org/disc/39378/
I didn't totally follow what they wrote about this either. I'll track down the image and take a look.
byuu wrote:
Aside, that TOC PDF format doc looks rather intimidating ... very heavy reading, fun. Well, I'll do my best and see where we get.
It is rather poorly written IMO. I'll add some clarity. In that TOCSnippet.bin file, the subcodes are stored in groups of 12 bytes, for subcode channels P-W. We only care about the Q channel, which is 12 bytes in and repeats every 96 bytes for each new sector. The format of the Q channel subcode data in the lead-in, for those 12 bytes, is as follows (showing one char per nibble):

Code: Select all

12 33 44 55 66 77 88 99 AA BB CC CC
1    = Control
2    = 1 [ADR]
33   = 0 [TNO]
44   = Point
55   = Min
66   = Sec
77   = Frame
88   = Zero
99   = PMin
AA   = PSec
BB   = PFrame
CCCC = Checksum

So taking a TOC entry from that file:

Code: Select all

44 00 01 03 01 23 00 00 02 00 7A25
4    = Control
4    = ADR (supposed to be 1)
00   = 0 [TNO]
01   = Point
03   = Min
01   = Sec
23   = Frame
00   = Zero
00   = PMin
02   = PSec
00   = PFrame
7A25 = Checksum
Basically that's it. The Point field tells you the track number, and the PMin/PSec/PFrame fields tell you the seek time on the disk for that track. There are also the "special" Point values A0, A1, and A2 that explicitly record the first track, the last track, and the start of lead-out (end of disk). The TOC just repeats in a rolling fashion throughout the lead-in area. As a bonus, note that the value of the "ADR" field was actually "4" on this disk, when it's explicitly supposed to be "1" in the TOC region. Yep, fun.
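
If it helps, here's the same breakdown expressed as a small Python decoder (my own sketch; the numeric fields are BCD, Point is left as a raw byte so the A0/A1/A2 specials stay recognisable, and the checksum is returned as-is rather than verified):

Code: Select all

def decode_leadin_q(q):
    """Split a 12-byte lead-in Q channel entry into its named fields."""
    def bcd(b):
        return ((b >> 4) * 10) + (b & 0x0F)
    return {
        "control": q[0] >> 4,
        "adr":     q[0] & 0x0F,    # supposed to be 1 in the TOC region
        "tno":     bcd(q[1]),      # 0 while in the lead-in
        "point":   q[2],           # BCD track number, or the A0/A1/A2 specials
        "min":     bcd(q[3]),
        "sec":     bcd(q[4]),
        "frame":   bcd(q[5]),
        "zero":    q[6],
        "pmin":    bcd(q[7]),
        "psec":    bcd(q[8]),
        "pframe":  bcd(q[9]),
        "crc":     q[10:12].hex(), # checksum over the first 10 bytes
    }

print(decode_leadin_q(bytes.fromhex("44 00 01 03 01 23 00 00 02 00 7A 25")))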

TascoDLX
Very interested
Posts: 262
Joined: Tue Feb 06, 2007 8:18 pm

Re: Questions on writing a new Mega CD emulator

Post by TascoDLX » Sun May 05, 2019 8:21 am

Nemesis wrote:
Sat May 04, 2019 11:06 pm
Firebell pointed me at a weird issue in Silpheed that I did not understand: http://redump.org/disc/39378/
I didn't totally follow what they wrote about this either. I'll track down the image and take a look.
Here's how I read it: Basically, there's a break in the data track. 7508 bytes of zero (presumably scrambled) got inserted in the middle of the track, so all the sectors after the break are misaligned. Since those sectors would normally unscramble into garbage as a result, the dumper left those sectors scrambled.

I'm not sure about the comment regarding the 1st pregap sectors, though. Maybe there's a wraparound issue where the last 7508 bytes of the disc that got cut off are pushed into the start of the pregap? Hard to verify, since data from the 1st pregap isn't included in the image. :lol:

F1ReB4LL
Interested
Posts: 15
Joined: Thu Apr 24, 2008 6:46 pm
Contact:

Re: Questions on writing a new Mega CD emulator

Post by F1ReB4LL » Sun May 05, 2019 1:22 pm

Nemesis wrote:
Sat May 04, 2019 10:34 pm
Hmmm, that data has been encoded as scrambled. Want to see what it looks like descrambled?
Yeah, forgot to descramble it (the data is dumped as scrambled by default to avoid the forced descrambling & data altering by the drive's firmware).

http://forum.redump.org/topic/3367/sega-saturn-cp-talk/ -- all the pics are, sadly, gone, but it is possible to burn the rings back with the same readable SEGA text (though it's impossible to burn it as part of the lead-out or to write the proper groove, so it's useless in terms of beating the protection checks, but still a little step toward preserving all the data from the CDs).
Nemesis wrote:
Sat May 04, 2019 11:06 pm
Another question: is there any reason to store data scrambled on disc? Won't it destroy compression ratios?
The main reason to store the data scrambled, is that's how it's actually encoded on the disk, and descrambling it implies you know, precisely and exactly, when the data is actually scrambled.
Erm. The main reason it's scrambled is to provide an additional level of error detection and correction. When you lose a sequence of bytes to some tiny scratch, those damaged bytes are spread across the whole sector after the descrambling, raising the chances of recovering them.

Nemesis
Very interested
Posts: 791
Joined: Wed Nov 07, 2007 1:09 am
Location: Sydney, Australia

Re: Questions on writing a new Mega CD emulator

Post by Nemesis » Sun May 05, 2019 1:57 pm

I was speaking about the reason for storing them scrambled in a CD image file, which I thought was what byuu was asking about, but I can now see he did say "on disc", so yes, you're right, scrambling is done effectively to increase the reliability of error correction, although my understanding is that it achieves this by making it unlikely for various patterns to occur in the data that make error correction harder to perform. The scrambling process itself is just an in-place XOR operation, so it doesn't change the distribution of the sector data or affect the locality of errors.

Nemesis
Very interested
Posts: 791
Joined: Wed Nov 07, 2007 1:09 am
Location: Sydney, Australia

Re: Questions on writing a new Mega CD emulator

Post by Nemesis » Sun May 05, 2019 2:01 pm

Pretty cool that people have managed to burn those rings back and make the visible text readable. I wonder how many coasters the guys at Sega made when they were trying to pull that off?
