New Documentation: An authoritative reference on the YM2612

For anything related to sound (YM2612, PSG, Z80, PCM...)

Moderator: BigEvilCorporation

Nemesis
Very interested
Posts: 791
Joined: Wed Nov 07, 2007 1:09 am
Location: Sydney, Australia

Post by Nemesis » Thu Jun 18, 2009 11:59 pm

That table is correct, but mostly redundant. I use the following table in my core:


const unsigned int YM2612::phaseModIncrementTable[1 << pmsBitCount][1 << (phaseModIndexBitCount - 2)] = {
	{0, 0, 0, 0, 0, 0, 0, 0},   //0
	{0, 0, 0, 0, 1, 1, 1, 1},   //1
	{0, 0, 0, 1, 1, 1, 2, 2},   //2
	{0, 0, 1, 1, 2, 2, 3, 3},   //3
	{0, 0, 1, 2, 2, 2, 3, 4},   //4
	{0, 0, 2, 3, 4, 4, 5, 6},   //5
	{0, 0, 4, 6, 8, 8,10,12},   //6
	{0, 0, 8,12,16,16,20,24}};  //7
This corresponds with the lookup table for fnum bit 9. You'll note that the lookup table for every other bit is simply a shift of this table. For fnum bit 10, shift the values up by 1. For fnum bit 8, shift the values down by 1. For fnum bit 7, shift the values down by 2, etc. Note that I say the table above corresponds with fnum bit 9, but in MAME, since they're giving the values for use with a 12-bit fnum calculation instead of an 11-bit calculation, the MAME table for fnum bit 9 will be shifted up by 1.
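For reference, the shifting rule described above can be sketched in C like this (the table is the one from the post; the helper name is mine, not from any particular core):

```c
/* Base phase-modulation increment table for fnum bit 9, as posted above:
   indexed by [PMS][step within a quarter of the LFO wave]. */
static const unsigned int pmBase[8][8] = {
    {0, 0, 0, 0, 0, 0, 0, 0},   /* PMS 0 */
    {0, 0, 0, 0, 1, 1, 1, 1},   /* PMS 1 */
    {0, 0, 0, 1, 1, 1, 2, 2},   /* PMS 2 */
    {0, 0, 1, 1, 2, 2, 3, 3},   /* PMS 3 */
    {0, 0, 1, 2, 2, 2, 3, 4},   /* PMS 4 */
    {0, 0, 2, 3, 4, 4, 5, 6},   /* PMS 5 */
    {0, 0, 4, 6, 8, 8,10,12},   /* PMS 6 */
    {0, 0, 8,12,16,16,20,24}    /* PMS 7 */
};

/* Increment contribution of a single fnum bit: bits above 9 shift the
   base value up, bits below 9 shift it down. */
static unsigned int pmIncrementForBit(unsigned int pms, unsigned int step,
                                      unsigned int fnumBit)
{
    unsigned int base = pmBase[pms][step];
    return (fnumBit >= 9) ? (base << (fnumBit - 9)) : (base >> (9 - fnumBit));
}
```

For example, `pmIncrementForBit(7, 7, 10)` gives 48: the bit-9 table value 24 shifted up by one place, exactly as described above.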


Looking back on my notes, I've realised there's some testing I still haven't completed for frequency modulation. I've talked about the upper six bits being relevant for frequency modulation, and the lower bits not having an effect. That's not entirely true, however. According to my notes, I measured sign-extension taking effect in the negative portion of the frequency modulation wave, even when the only fnum bits set were below the upper six bits, all the way down to fnum bit 0, which suggests that the frequency modulation value may be calculated at full precision, then the lower bits discarded when it is combined with fnum, e.g.:


//  ---------------------------------------------
//  |               Fnum (11-bit)               |
//  |-------------------------------------------|
//  |10 | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 |
//  ---------------------------------------------
//  ---------------------------------------------------------------------------------
//  |                    Frequency Modulation Value (20-bit)                        |
//  |-------------------------------------------------------------------------------|
//  |19 |18 |17 |16 |15 |14 |13 |12 |11 |10 | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 |
//  ---------------------------------------------------------------------------------
//10                    | 4 | 3 | 2 | 1 | 0 |
// 9                        | 4 | 3 | 2 | 1 | 0 |
// 8                            | 4 | 3 | 2 | 1 | 0 |
// 7                                | 4 | 3 | 2 | 1 | 0 |
// 6                                    | 4 | 3 | 2 | 1 | 0 |
// 5                                        | 4 | 3 | 2 | 1 | 0 |
// 4                                            | 4 | 3 | 2 | 1 | 0 |
// 3                                                | 4 | 3 | 2 | 1 | 0 |
// 2                                                    | 4 | 3 | 2 | 1 | 0 |
// 1                                                        | 4 | 3 | 2 | 1 | 0 |
// 0                                                            | 4 | 3 | 2 | 1 | 0 |
In this case, the lookup table I've given above would correspond with fnum bit 0, and you would shift up from that for each place above bit 0, then negate the value if we're in the negative half of the wave, and finally you'd shift down 9 places and add the result to fnum.

I don't do this in my core currently however. Like MAME, I'm just grabbing the upper bits of fnum and calculating the frequency modulation value directly in the 11-bit target. If the real hardware calculates the frequency modulation adjustment at full precision based on all the bits of fnum however, this may not be accurate, since a carry could be generated from bit 8 of the frequency modulation value for example.

I was going to perform a set of tests to confirm whether this was the case, and then make adjustments to my core as necessary, but I lost time to work on it, and I didn't end up completing the test. I'll try and find some time to carry out this test, so we know for sure how the frequency modulation value needs to be calculated. Personally, I think it's likely the value is calculated at full precision. MAME would also effectively generate a carry from bit 8 currently, due to the extra bit of precision they added to the pipeline. I'll have to do some math and see if I can construct a test where I can measure a carry taking place from bit 7 or lower.
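As an illustration only, the full-precision hypothesis might look like this in C. This is the untested interpretation described above, not a confirmed implementation; the table is the one from the post (here treated as the bit-0 table), and the function name is mine:

```c
/* Phase-modulation increment table, treated as corresponding to fnum
   bit 0 under the full-precision scheme. */
static const unsigned int pmTable[8][8] = {
    {0, 0, 0, 0, 0, 0, 0, 0},
    {0, 0, 0, 0, 1, 1, 1, 1},
    {0, 0, 0, 1, 1, 1, 2, 2},
    {0, 0, 1, 1, 2, 2, 3, 3},
    {0, 0, 1, 2, 2, 2, 3, 4},
    {0, 0, 2, 3, 4, 4, 5, 6},
    {0, 0, 4, 6, 8, 8,10,12},
    {0, 0, 8,12,16,16,20,24}
};

/* Build the 20-bit FM value from all 11 fnum bits, negate it in the
   negative half of the wave, then discard the lower 9 bits before the
   result is added to fnum. Assumes arithmetic right shift for negative
   values, which holds on essentially every real target. */
static int fmAdjustFullPrecision(unsigned int fnum, unsigned int pms,
                                 unsigned int step, int negativeHalf)
{
    int fmValue = 0;
    unsigned int bit;
    for (bit = 0; bit < 11; bit++) {
        if (fnum & (1u << bit))
            fmValue += (int)(pmTable[pms][step] << bit);
    }
    if (negativeHalf)
        fmValue = -fmValue;
    return fmValue >> 9;   /* lower 9 bits discarded when combined with fnum */
}
```

With only fnum bit 9 set, this reduces to the table value itself (e.g. 24 for PMS 7, step 7), so it stays consistent with the bit-9 reading of the table; the open question is whether carries from the lower bits survive on real hardware.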

Eke
Very interested
Posts: 885
Joined: Wed Feb 28, 2007 2:57 pm
Contact:

Post by Eke » Fri Jun 19, 2009 1:40 pm

This corresponds with the lookup table for fnum bit 9. You'll note that the lookup table for every other bit is simply a shift of this table. For fnum bit 10, shift the values up by 1. For fnum bit 8, shift the values down by 1. For fnum bit 7, shift the values down by 2, etc. Note that I say the table above corresponds with fnum bit 9, but in MAME, since they're giving the values for use with a 12-bit fnum calculation instead of an 11-bit calculation, the MAME table for fnum bit 9 will be shifted up by 1.
oh, thanks for clarifying that.
That means that I also need to divide all the values in the MAME table by 2, right? I wasn't sure whether I should do that :-/

This also means that FNUM bit 4 is not taken into consideration (as is currently done in the MAME core) and only the 6 upper bits should be used, which would involve some table size reduction and additional modifications when calculating the current phase modulation.

Do you have some kind of test ROM for this LFO thing, or did you test the audio output directly?
In this case, the lookup table I've given above would correspond with fnum bit 0, and you would shift up from that for each place above bit 0, then negate the value if we're in the negative half of the wave, and finally you'd shift down 9 places and add the result to fnum. I don't do this in my core currently however. Like MAME, I'm just grabbing the upper bits of fnum and calculating the frequency modulation value directly in the 11-bit target. If the real hardware calculates the frequency modulation adjustment at full precision based on all the bits of fnum however, this may not be accurate, since a carry could be generated from bit 8 of the frequency modulation value for example. I was going to perform a set of tests to confirm whether this was the case, and then make adjustments to my core as necessary, but I lost time to work on it, and I didn't end up completing the test. I'll try and find some time to carry out this test, so we know for sure how the frequency modulation value needs to be calculated. Personally, I think it's likely the value is calculated at full precision. MAME would also effectively generate a carry from bit 8 currently, due to the extra bit of precision they added to the pipeline. I'll have to do some math and see if I can construct a test where I can measure a carry taking place from bit 7 or lower.
Hmm, I'm not sure I follow anymore... wouldn't this mean that MAME was already somewhat correct (at least more so than your implementation currently is) by adding this bit of precision and taking bit 4 into consideration?

I mean, the only thing the MAME core was doing was shifting the FNUM value by 1 (x2), then adding the phase modulation value from the LFO table (which was also shifted by 1 compared to your values).
The way the frequency table is initially computed means the frequency increment is also computed correctly (the table is set for a block value of 7 but uses a multiplier of 2^5 instead of 2^6, taking the later bit shift into account).

To me, this seems very similar to the process you describe (except that it's limited to bit 4 and doesn't take all the bits into account).

This certainly needs some confirmation before I keep modifying the current code ;-)

Lord Nightmare
Interested
Posts: 19
Joined: Sun Oct 12, 2008 10:45 pm

Post by Lord Nightmare » Thu Jul 16, 2009 5:26 pm

What happened with this? Did the weird stuff get documented? Eke, did you manage to fix the core?

LN
"When life gives you zombies.... *CHA-CHIK!* ...you make zombie-ade!"

Eke
Very interested
Posts: 885
Joined: Wed Feb 28, 2007 2:57 pm
Contact:

Post by Eke » Mon Jul 20, 2009 3:10 pm

No, I haven't had much time for coding lately, and I'd prefer to wait for Nemesis' definitive documentation anyway.

From memory, here are the few things that still need to be covered:

(1) How do LFO amplitude and phase modulation work (see above posts)?
In this thread, Nemesis once mentioned that even when the LFO was disabled, significant AM (and PM?) values could still be applied depending on which step the LFO was stopped at. Is that correct? One game (Spiderman Separation Anxiety) seems to rely on this feature (intro track). It also seems that the LFO is reset when switched from OFF to ON. For the record, in the MAME implementation, the LFO AM and PM values are reset as soon as the LFO is disabled. MAME gets this track wrong anyway, and seems to update the LFO too early (it should be done AFTER output calculation I think, like the EG and PG updates).

(2) How are operator outputs combined to form a channel output?

- Accumulator: is there an accumulator or not? If not, what combines the operator outputs, and how are channels output to the DAC? It now seems to be acknowledged that channels are indeed multiplexed (interleaved?) and that the "real" output sample rate should therefore not be VCLK/144 (~53 kHz) but VCLK/24 (~320 kHz), with each of the 6 channels being sampled successively. I wonder how much that matters in terms of emulation: doesn't it mean that clipping cannot occur, and that adding the channel outputs and then applying a limit could be wrong? How can we accurately emulate channel multiplexing without multiplying the output sample rate by 6?

- Operator 1 self-feedback: the MAME core sums the last two samples to calculate the self-feedback phase modulation. Steve Snake confirmed it was a common way for Yamaha synthesizers to implement feedback. It's probably correct but still needs confirmation.
Another thing that needs clarification is how the 14-bit operator 1 output is transformed into the 10-bit phase input, in relation to the feedback level (in MAME, this is NOT done the same way as the other output modulation described by Nemesis here).
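A minimal sketch of that scheme as described (sum of the two previous operator 1 outputs, scaled by the feedback level). The exact shift constant relating the feedback level to the attenuation is an assumption here, not a verified hardware value:

```c
/* Self-feedback phase modulation input for operator 1. fb is the 3-bit
   feedback level (0-7); 0 disables feedback. The "9 - fb" scaling is
   illustrative: a larger fb means a smaller shift, i.e. stronger feedback. */
static int feedbackPhaseMod(int prevOut1, int prevOut2, unsigned int fb)
{
    if (fb == 0)
        return 0;
    return (prevOut1 + prevOut2) >> (9 - fb);
}
```

Summing the two previous samples acts as a crude low-pass on the feedback path, which is presumably why it tames the self-oscillation at high feedback levels.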

- Chained operator delayed samples: again, Steve Snake explained that there are in fact 2 operators running in parallel, which would explain why some operator outputs are delayed by one cycle before being output to chained operators. The MAME core indeed implements this for algorithms 0, 1, 2 & 3 when either M2 or C2 needs input from M1 or C1, which seems quite logical (two modulator/carrier pairs running in parallel). Strangely, the M1 (operator 1) output is also always delayed by one sample clock; I'm not sure I understand why (maybe because of self-feedback), or whether this is correct or not.

- Slot (carrier) output combination: algorithm 4 sums two carriers to get the channel output, algorithms 5 & 6 sum 3 carriers, and algorithm 7 sums 4 carriers. We know that an operator output is 14 bits, but what happens when you sum multiple operator outputs? The MAME implementation uses 32-bit integers to hold the result, so there isn't any possible overflow, but what happens on real hardware?

Various possibilities:

(*) the final output (DAC input) is 16 bits, so that a maximal sum of four 14-bit operator outputs fits without overflow. This is how the MAME implementation works.

(*) operator outputs (14 bits) are shifted left by 2 bits to form a 16-bit output. This results in some precision loss at lower volumes, and clipping (or overflow???) is possible when multiple operator outputs are combined. This was once described by Nemesis but never really confirmed. For the record, clipping each channel output to 14 bits fixes a bug in Sonic 3 (the boss ship area where bombs are dropped) where the sound appears very distorted when not clipped.

(*) operators are added inside the DAC by some unknown process

(3) How is the unsigned 8-bit channel 6 DAC value handled and mapped to a (16-bit?) signed channel output?
The current MAME implementation simply transforms the unsigned 8-bit value into a signed one (data - 0x80), then shifts this value left by 6. Shifting by 8 results in too high a volume relative to the other channels.
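In code form, the MAME mapping described above amounts to the following sketch (the function name is mine):

```c
/* Channel 6 DAC: unsigned 8-bit register value -> signed channel-level
   sample. Multiplying by 64 is the "shift by 6" scaling; multiplying by
   256 ("shift by 8") would make the DAC channel too loud relative to
   the FM channels. */
static int dacToChannelOutput(unsigned char data)
{
    return ((int)data - 0x80) * 64;   /* range -8192 .. +8128 */
}
```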

(4) Internal DAC precision: it is said to be 9 bits, but how do you emulate this accurately (emulators generally need to output 16-bit signed samples)?

Well, that's all I could remember (not counting analog filtering effects and differences between the several YM2612 chip variations) :wink:
Last edited by Eke on Fri Oct 23, 2009 11:07 am, edited 3 times in total.

GManiac
Very interested
Posts: 92
Joined: Thu Jan 29, 2009 2:05 am
Location: Russia

Post by GManiac » Sat Jul 25, 2009 2:35 pm

So what's the news about slowing down the YM2612?
TmEE co.(TM) wrote: I recorded some music off my sound system and when speeding it up 14 times, I get the correct speed and pitch for the song, so things work. Due to HW nuances in the analog area, of course things won't sound exactly the same when sped up

The file is too big for me to upload (it would upload forever.....). I'll see how small I can get it, if it becomes small enough I'll put it up.
Can you record a pure sine, one second or even less (at original speed)? Then it will be 15 seconds or less when slowed down. A frequency of 1 Hz will be enough to capture a whole period of the wave; actually we only need a quarter of a period.
Then 50000 samples of YM output (25000 samples from -1 to +1 of the sine) (15 seconds, or 1 second at original speed) will show us the resolution of the DAC.
And the file won't be too large: 15 s * 48000 samples per second * 2 bytes * 1 channel = 1440 kbytes.
And can you make such a recording from an MD1 too?

HardWareMan
Very interested
Posts: 745
Joined: Sat Dec 15, 2007 7:49 am
Location: Kazakhstan, Pavlodar

Post by HardWareMan » Sun Jul 26, 2009 8:20 am

My turn.
First, I tried dividing VCLK by 2 for the YM2612 source CLK. I made 2 recordings: from "Comix Zone" and "Batman & Robin". Why? Because "Comix Zone" uses GEMS, which uses VBlank-interrupt-based timing (but the DAC code still polls the BUSY flag, so the PCM tempo slows down too), while "Batman & Robin" uses YM2612 timer-based timing, so the melody slows down proportionally with the tempo, like a tape, and you can just increase the playback speed (set 96000 Hz as the sample rate instead of 48000 Hz) and get the right result (with some "features": the hardware low-pass filter will pass frequencies twice as high as at normal speed). You can download these recordings: Comix Zone (5.3MB), Batman & Robin (29MB sample, full - 170MB!) (all in lossless WAV, of course). They're huge, but very interesting.
Unfortunately, I can't make the YM2612 work at VCLK/4 or more. Maybe some synchronization gets broken. In the future, I will try to slow down the whole system by dividing MCLK, and test that.
So, make your own test PD ROMs and I will test them on my hardware. ;)

neologix
Very interested
Posts: 122
Joined: Mon May 07, 2007 5:19 pm
Location: New York, NY, USA
Contact:

Post by neologix » Wed Aug 26, 2009 6:33 am

been a while since i read this topic. awesome to see that the ym2612 is one step closer to being spec complete thanks to the ym3438 sheets!

now who has an updated open-source ym2612 core i can use to make a preset previewer? ;) it would be primarily for the vgm-to-midi converter i'd be implementing into vgmtool, which i'm currently testing in a javascript version.

Eke
Very interested
Posts: 885
Joined: Wed Feb 28, 2007 2:57 pm
Contact:

Post by Eke » Wed Aug 26, 2009 7:56 am

http://code.google.com/p/genplus-gx/sou ... d/ym2612.c


should be up-to-date with Nemesis' latest findings regarding the CSM and SSG-EG modes (verified with his test ROM); it also includes the modified behaviour for the LFO (instead of being reset, the current modulation level is held when the LFO is disabled, and is reset when the LFO is enabled)

there is some stuff that is related to the FIR resampler I'm using or that was modified for the emulator, but otherwise it should be straightforward to use in another project with limited changes
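The LFO behaviour described above (level held on disable, reset on enable) boils down to something like this sketch. The structure and names are illustrative, not the actual genplus-gx code:

```c
typedef struct {
    int enabled;
    unsigned int counter;   /* current LFO step, drives the AM/PM levels */
} LFO;

static void lfoWriteEnable(LFO *lfo, int enable)
{
    if (enable && !lfo->enabled)
        lfo->counter = 0;   /* reset on the OFF -> ON transition */
    /* on disable the counter is left alone, so the current AM/PM
       modulation level is held rather than cleared */
    lfo->enabled = enable;
}
```

This is the behaviour the Spiderman Separation Anxiety intro track mentioned earlier would rely on: whatever modulation level the LFO stopped at keeps applying while it is off.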

neologix
Very interested
Posts: 122
Joined: Mon May 07, 2007 5:19 pm
Location: New York, NY, USA
Contact:

Post by neologix » Fri Aug 28, 2009 3:58 am

awesome stuff, eke. will DL & have it ready for vgmtool soon enough :)

Paul Jensen
Interested
Posts: 32
Joined: Mon Apr 06, 2009 4:17 pm
Location: Hiroshima, Japan

Post by Paul Jensen » Tue Sep 08, 2009 4:36 am

Any new discoveries?

I've got some questions in case anybody can answer them for me.

After reading this thread I started working on VGM2MID (now tentatively called VGM Studio as it can also output other types of files now). I've already updated my Midi converter code, and now I'm working on adding PCM audio output.

The core of the Midi code in my program is based on getting an FNote (i.e. frequency in Hz) based on an FNum, Block, and the frequency of the sample clock (FSam):

FNote = FNum * FSam * 2 ^ (Block - 21)

Also, the code for volume changes uses decibels for its calculations.

I'm thinking of going the same route for generating PCM output. That is, using "real" frequencies and decibel values instead of counters and lookup tables. I'm not necessarily concerned with producing binary accurate output, I just want to get output that sounds good.

So here are my questions:

1) For the Envelope Generator, is there any good method for calculating real-time values for AR, SR, DR, and RR (in ms or FM clock cycles)? That is, is there any formula to determine how long it will take for a note to decay, etc., based on the rate value parameter?

2) Also, what's a good method for converting a dB value to a linear value? Unless I'm wrong, you're supposed to be able to calculate linear amplitude using the equation:

A1 = 10 ^ (dB / 20) * A0
where A1 = amplitude; dB = decibel value; A0 = reference amplitude

But I can't figure out how to make that work. Right now I'm using the following formula to calculate output level:

outputLevel = 1 / (2 ^ (attenuationIndB / 6))

The formula seems to work well. I'm using it in my SN76489AN output code, and the output sounds almost identical to Kega and other emulators.

Anyway, this post is getting longer than I thought it would, and I've gotta go to work now. But I'd be really grateful if anyone could help me with those two questions. Thanks.

ETA one more question:

3) I've noticed in my Midi output that some games seem to have missing channels. This seems to happen with games that make a lot of "illegal" writes. Super Hydlide is a prime example of this (the bass line is missing in a lot of tracks). Also, a lot of Namco games seem to do it too. Most of these games also seem to rely on keying on/off operators individually rather than all at the same time. I remember that old versions of Dega exhibited the same behavior. Has anybody encountered this issue while writing a YM2612 core?

ETA another question (related to (3)):

4) What's the best way to handle "illegal" channel settings? On the YM2612, register $28 is used for setting KeyOn/Off, and the lowest three bits are used to select the channel to be keyed on/off. The manual lists the following channel select values:

0 0 0 CH1
0 0 1 CH2
0 1 0 CH3
1 0 0 CH4
1 0 1 CH5
1 1 0 CH6

But what about the following undefined values:

0 1 1 ?
1 1 1 ?

Some games (like Super Hydlide) try to set these values. Does anybody know what effect these two settings have on a real YM2612?
Last edited by Paul Jensen on Tue Sep 08, 2009 2:07 pm, edited 1 time in total.

GManiac
Very interested
Posts: 92
Joined: Thu Jan 29, 2009 2:05 am
Location: Russia

Post by GManiac » Tue Sep 08, 2009 6:15 am

Paul Jensen wrote:Any new discoveries?
I have a mathematical proof of the "mirroring" effect on the spectrogram. :D I'll post it later. ...but maybe it's already a well-known fact? :?
FNote = FNum * FSam * 2 ^ (Block - 21)
* F-Number = 144 * freq * (2^20 / MCLOCK) / 2^(Oct - 1) / multiplier
* freq = multiplier * F-Number * 2^(Oct - 1) * (MCLOCK / 2^20) / 144

With your identifiers, the formula will look like:
FNote = FNum * FSam * Multiplier * 2^(Block - 21) / 144
You only need to keep in mind that the last note (2047, but we'll use 2048) of the last octave (7), with multiplier = 8, will give you the base channel frequency (~53 kHz).
A1 = 10 ^ (dB / 20) * A0
where A1 = amplitude; dB = decibel value; A0 = reference amplitude

But I can't figure out how to make that work. Right now I'm using the following formula to calculate output level:

outputLevel = 1 / (2 ^ (attenuationIndB / 6))
10 ^ (dB/20) should be equal to 2^(dB/6). But in the second case you divide the original amplitude by this value (because it's attenuation, not amplification), while in the first case you multiply by it. Maybe that's where the mistake is? Remember that both the YM and the PSG use attenuation.

Eke
Very interested
Posts: 885
Joined: Wed Feb 28, 2007 2:57 pm
Contact:

Post by Eke » Fri Oct 23, 2009 4:08 pm

Nemesis wrote:I just encountered another quirk with the YM2612 which I wasn't fully aware of. This may be common knowledge for people who've worked with the hardware, but I noticed this isn't emulated in Gens, and I wasn't aware of it until I was running these tests either, so I think it should be mentioned.

There's repeated warnings in numerous places throughout both the Sega documentation and the documentation from Yamaha, regarding how the registers which set the frequency and octave for a channel should be written to. What they say is that you should always write the Block/Fnum ($A4-$A6) byte first, then write the Fnum ($A0-$A2) byte second. What they don't say is what happens when you ignore this advice. :D

It appears that the YM2612 doesn't commit a write to the block/fnum register until the fnum register is written to. If you were for example to perform the following writes:
-Set 0xA6 to 0x1E
-Set 0xA2 to 0xFF
-Set 0xA6 to 0x24
The effective settings for the channel will remain as 0xA6 = 0x1E, 0xA2 = 0xFF. The second write to register 0xA6 will not be seen until 0xA2 is written to again. Here's a test ROM which demonstrates this behaviour:
http://nemesis.hacking-cult.org/MegaDri ... eOrder.bin

And here's the source:
http://nemesis.hacking-cult.org/MegaDri ... eOrder.asm

If register changes were applied immediately, the result would be an A4 (440Hz). Instead, this ROM will play an A6 in the actual hardware.

For the sake of completeness, it's also worth noting that writing an fnum byte for one channel doesn't commit fnum/block writes which have been made to other channels. If you write a block/fnum byte to channel 2, then write an fnum byte to channel 3, the block/fnum byte for channel 2 will not be applied. The update will remain in limbo until the fnum register for channel 2 is written to, regardless of how many writes are made to the registers for other channels.


Once again, this behaviour is emulated correctly in Kega. Does anyone know of something Kega doesn't get right?
For the record, I hadn't noticed until now, but I'm not sure those register quirks are emulated correctly by the MAME implementation. There is indeed a common block/fnum latch value that is used for all channels:


   case 0:    /* 0xa0-0xa2 : FNUM1 */
        {
          UINT32 fn = (((UINT32)((ym2612.OPN.ST.fn_h)&7))<<8) + v;
          UINT8 blk = ym2612.OPN.ST.fn_h>>3;
          /* keyscale code */
          CH->kcode = (blk<<2) | opn_fktable[fn >> 7];
          /* phase increment value (17 bits) */
          CH->fc = ym2612.OPN.fn_table[fn*2]>>(7-blk);

          /* store fnum in clear form for LFO PM calculations */
          CH->block_fnum = (blk<<11) | fn;

          /* phase increment value (20 bits) should be updated */
          CH->SLOT[SLOT1].Incr=-1;
          break;
        }

    case 1:    /* 0xa4-0xa6 : FNUM2,BLK */
          ym2612.OPN.ST.fn_h = v&0x3f;
          break;

This indeed means that if you write a block/fnum byte to channel 2 (0xa5), then write an fnum byte to channel 3 (0xa2):

(1) the block/fnum value for channel 2 will not be changed until the next write to fnum for channel 2

BUT

(2) the block/fnum value for channel 3 will be updated using the block/fnum byte previously written for channel 2 and the fnum byte written for channel 3

I wonder if this is correct?
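To make the two models concrete, here is a sketch of a per-channel latch, as opposed to MAME's single shared fn_h latch above. All names are hypothetical, and which model the real chip implements for cross-channel writes is exactly the open question:

```c
typedef struct {
    unsigned char fnLow;        /* committed fnum low byte */
    unsigned char blockFnHigh;  /* committed block + fnum high bits */
} ChannelFreq;

static unsigned char latchedBlockFnum[3];   /* one latch per channel */

/* 0xA4-0xA6 write: only latched, not committed */
static void writeBlockFnum(unsigned int ch, unsigned char v)
{
    latchedBlockFnum[ch] = v & 0x3F;
}

/* 0xA0-0xA2 write: commits this channel's latched block/fnum byte */
static void writeFnumLow(ChannelFreq *regs, unsigned int ch, unsigned char v)
{
    regs[ch].blockFnHigh = latchedBlockFnum[ch];
    regs[ch].fnLow = v;
}
```

Under this model, a block/fnum write to channel 2 followed by an fnum write to channel 3 leaves channel 2 untouched and commits channel 3's own previously latched byte, matching the cross-channel behaviour Nemesis observed on hardware.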

Also, in the case of the CH3 special mode registers, I'm curious how this works.
Does it use the same latched fnum/block byte for ALL operators (incl. 0xa6), does it use an additional latched value only for 0xac-0xae (this is what MAME does), or does it latch the fnum/block byte for each operator separately?


Again, I doubt any games try to access these registers in such an uncommon way, and the MAME implementation is probably (again) right, but it would be nice to be sure, again "for the sake of completeness"
:wink:

blargg
Newbie
Posts: 7
Joined: Sat Feb 20, 2010 6:27 am

Post by blargg » Sat Feb 20, 2010 6:39 am

I was doing some testing today on a YM2612 emulator of the number of bits in the DAC. I believe it's a 7- or 8-bit DAC, that's time-division multiplexed (as the YM2612 Wikipedia page says). This means that the DAC really operates at 6 times the normal rate, or 1/24 the clock input. So for the Sega Genesis where the YM2612 operates at 7.6704545... MHz, the DAC operates at 319.6022727... kHz. For an emulator, this just means you clear the lower bits of each channel before mixing them together, not after. The result can then be played at 1/6 that rate, 53.26704545... kHz. Making this change finally brings the proper sound to music from games like Wonder Boy in Monster World and Target Earth. In my listening tests, it seems that a 7-bit DAC sounded closer to my recordings, but I can't be really sure so take this with a grain of salt. Someone needs to run some hardware tests.
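In emulator terms, blargg's suggestion is a small change to the mixing order, something like this sketch. The exact DAC width is the open question, so it's a parameter here; the assumption is two's-complement samples, where masking off the low bits preserves the sign:

```c
/* Quantize each channel to the DAC width BEFORE summing, instead of
   masking the final mix. ch holds the six channel samples at sampleBits
   precision; dacBits is the assumed DAC resolution (7, 8, or 9). */
static int mixChannels(const int ch[6], int sampleBits, int dacBits)
{
    int mask = ~((1 << (sampleBits - dacBits)) - 1);  /* clears low bits */
    int sum = 0;
    int i;
    for (i = 0; i < 6; i++)
        sum += ch[i] & mask;
    return sum;
}
```

Quantizing per channel reproduces the distortion of the multiplexed DAC without actually running the mixer at 6x the sample rate.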

Eke
Very interested
Posts: 885
Joined: Wed Feb 28, 2007 2:57 pm
Contact:

Post by Eke » Sat Feb 20, 2010 9:22 am

blargg wrote:I was doing some testing today on a YM2612 emulator of the number of bits in the DAC. I believe it's a 7- or 8-bit DAC, that's time-division multiplexed (as the YM2612 Wikipedia page says). This means that the DAC really operates at 6 times the normal rate, or 1/24 the clock input. So for the Sega Genesis where the YM2612 operates at 7.6704545... MHz, the DAC operates at 319.6022727... kHz. For an emulator, this just means you clear the lower bits of each channel before mixing them together, not after. The result can then be played at 1/6 that rate, 53.26704545... kHz. Making this change finally brings the proper sound to music from games like Wonder Boy in Monster World and Target Earth. In my listening tests, it seems that a 7-bit DAC sounded closer to my recordings, but I can't be really sure so take this with a grain of salt. Someone needs to run some hardware tests.
The thing is: how many of the lower bits need to be cleared?
It is not totally clear what the width of the channel output is before it goes to the DAC: the operator output is 14 bits (incl. sign bit), but what happens with "algorithms" that sum up multiple operators (up to 4)?

In my emulator, limiting the channel output to 14 bits and then summing up the channels seems to fix some music, but I have no confirmation that it's actually correct.

In this case, you mean that "emulating" 8-bit DAC precision is simply a matter of clearing the lowest 6 bits, right (you need to keep the sign bit somehow, though)? I remember Steve Snake explaining it was not exactly like that (something related to a "floating-point DAC": if the sample value fits within the DAC precision width, there is no shift/mask and you lose nothing, i.e. you lose more precision at higher volumes)

Regarding the DAC, I think it was established previously in this topic that channels are indeed time-multiplexed (in a strange order like 1,4,3,2,5,6 I think, must look back) and that the DAC precision is 9 bits (apparently higher on the MD2 ASIC), with some strange behaviour for negative values.
Making this change finally brings the proper sound to music from games like Wonder Boy in Monster World and Target Earth
Do you have some recordings that I could compare with?

blargg
Newbie
Posts: 7
Joined: Sat Feb 20, 2010 6:27 am

Post by blargg » Sat Feb 20, 2010 8:53 pm

Here are some recordings (sorry for mp3, as I lack FLAC or anything modern): genesis_recordings.zip (2.6 MB; let me know if you have problems, as this host sucks)

These are from the original Sega Genesis model, not a later one where sound is really crappy (on those, getting rings in Sonic for example sounds all distorted).
In this case, you mean that "emulating" 8-bit DAC precision is simply clearing the lowest 6-bits, right (you need to keep sign bit somehow though)?
Yeah. I just modified the channel muting mask from ~0 to ~((1 << 7) - 1). This preserves the sign bit. This was in the MAME version, where it applies this mask to each channel, sums them, divides by 2, then outputs that.
I remember Steve Snake explaining it was not exactly like that (something related to "floating-point DAC", that is if the sample value fits in the DAC precision width, there are not shift/mask and you lose nothing, i.e you lose more precision at higher volumes)
I don't think it could be that, because some of the example tracks above have a tone fading out, yet it's still grossly quantized by the DAC. If it were floating, those would fade out smoothly as on most YM2612 emulators.
Regarding DAC, I think it was established previously in the topic that channels were indeed time-multiplexed (in a strange order like 1,4,3,2,5,6 i think, must look back)
Fortunately the order doesn't really matter, since the effects are all well above what is filtered out. That's why you can just mask the channels, add them together, then output at the usual ~53 kHz rate.
and that DAC precision was 9-bit (apparently higher on MD2 ASIC) with some strange behaviour with negative values.
I tried higher precision, but I couldn't get anything close to the distortion in the example recordings.
Last edited by blargg on Mon Feb 22, 2010 10:32 am, edited 1 time in total.
