First, the official documentation says the FM timer periods are:
TIMERA_period = 18 x (1024 - TIMER) microseconds
TIMERB_period = 18 x 16 x (256 - TIMER) microseconds
where TIMER is the 10-bit (Timer A) or 8-bit (Timer B) timer base register value.
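As a sanity check on those formulas, here they are in plain C (helper names are mine, not from any emulator):

```c
/* Documented FM timer periods, using the approximate 18 us base unit */
static unsigned timer_a_period_us(unsigned timer)   /* timer: 10-bit value */
{
    return 18u * (1024u - timer);
}

static unsigned timer_b_period_us(unsigned timer)   /* timer: 8-bit value */
{
    return 18u * 16u * (256u - timer);
}
```

So with both registers loaded with 0 you get the longest possible periods: 18432 us for Timer A and 73728 us for Timer B.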
I believe (tell me if I'm wrong) that the "18" value is an approximation in microseconds, and that the exact value is given by:
- 1000000/(VCLK/144) = 18.77 microseconds for NTSC
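For reference, the exact value falls straight out of that formula (the 7670454 Hz NTSC FM clock, which is also the 68k clock, is my assumption, not from the documentation):

```c
/* Exact timer base unit in microseconds: 1e6 / (VCLK / 144) */
static double timer_base_us(double vclk)
{
    return 1000000.0 / (vclk / 144.0);
}
/* timer_base_us(7670454.0) gives about 18.77 us for NTSC */
```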
Or maybe it is the content of the timer base register itself which is directly incremented, with the overflow flag set when it rolls over to 0?
Also, I am wondering about the best way to emulate those timers with regard to synchronization. From what I know, there seem to be two common approaches:
1/ the first one, used in Genesis Plus, synchronizes the FM timers with CPU execution time: the timer values are kept in microseconds and the counters are incremented by 64 microseconds at each scanline. A timer overflows when its counter goes past the programmed timer period.
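If I understand method 1 correctly, it boils down to something like this (a sketch with my own names, not the actual Genesis Plus code):

```c
/* Method 1 sketch: microsecond counters advanced once per scanline */
typedef struct {
    int count_us;    /* elapsed time since (re)load, in microseconds */
    int period_us;   /* 18 * (1024 - TIMERA), or 18 * 16 * (256 - TIMERB) */
    int enabled;
    int overflow;    /* reflected in the FM status register */
} us_timer_t;

static void timer_update_scanline(us_timer_t *t)
{
    if (!t->enabled) return;
    t->count_us += 64;                  /* one NTSC scanline is ~64 us */
    if (t->count_us >= t->period_us) {  /* past the programmed period */
        t->count_us -= t->period_us;    /* keep the remainder */
        t->overflow = 1;
    }
}
```

The remainder is kept on overflow so the rounding error does not accumulate from one period to the next.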
2/ the second one, used in Gens and other emulators I think, synchronizes the FM timers with the audio sample rate: the timer counter is also incremented at each scanline, but the increment is interpolated from the number of samples expected for that scanline (approximately SOUNDRATE/60/262 for NTSC timings). The increment is then given by Nsamples x (VCLK/144/SOUNDRATE) x 4096, and Timer A, for example, overflows when the counter goes past (1024 - TIMERA) x 4096 (I imagine this factor is applied to limit rounding errors).
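And method 2 would look roughly like this in the same style (hypothetical names again; note that 4096 = 1 << 12, i.e. 20.12 fixed point, which is why I suspect the factor exists purely for rounding precision):

```c
/* Method 2 sketch: counter driven by rendered samples, 4096 = 1<<12 fixed point */
#define SOUNDRATE 48000.0
#define VCLK      7670454.0   /* NTSC FM clock -- my assumption */

typedef struct {
    int count;    /* fixed-point counter, 1 internal tick = 4096 units */
    int limit;    /* e.g. (1024 - TIMERA) * 4096 for Timer A */
    int overflow;
} sample_timer_t;

/* internal timer ticks elapsed per output sample, in <<12 fixed point */
static const int tick_step = (int)((VCLK / 144.0 / SOUNDRATE) * 4096.0);

static void timer_update_samples(sample_timer_t *t, int nsamples)
{
    t->count += nsamples * tick_step;
    if (t->count >= t->limit) {
        t->count -= t->limit;
        t->overflow = 1;
    }
}
```

The fractional part of the per-sample step survives in the fixed-point counter, so the timer stays in step with the FM clock even though it is only updated at the sound output rate.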
I am not sure I understand the advantages/disadvantages of each method; could someone with better knowledge of sound interpolation/sampling explain the difference?
Another thing: I recently added cycle-accurate sample generation to the Genesis Plus NGC port (which means that on each FM write, we look at the number of CPU cycles executed so far to know the exact number of samples that need to be rendered before processing the write).
I would also like to add cycle-accurate FM timer emulation, by updating the counters (and possibly detecting a timer overflow) before each FM status read, according to the CPU cycles executed so far.
My question, which I believe is related to the one above, is the following:
would it be correct to increment the counter value by one every 144 CPU cycles (18.77 microseconds)? Or maybe each time a new sample is rendered (approximately every 160 CPU cycles at 48 kHz with NTSC timings)? In both cases, a timer would overflow when its counter goes past the timer base register value (not the microsecond value).
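The first variant (one tick every 144 CPU cycles) would be something like this sketch, carrying the remainder of the division so no cycles are lost between status reads (names and structure are mine, not tested against hardware):

```c
/* Sketch: catch Timer A up from elapsed M68k cycles before a status read.
 * One internal tick per 144 clocks; overflow is checked against the
 * register value directly, not a microsecond period. */
typedef struct {
    unsigned count;       /* current up-counter value, in ticks */
    unsigned timer_a;     /* 10-bit Timer A base register value */
    unsigned cycle_frac;  /* leftover CPU cycles (< 144) */
    int overflow;
} cycle_timer_t;

static void timer_catch_up(cycle_timer_t *t, unsigned elapsed_cycles)
{
    unsigned total = t->cycle_frac + elapsed_cycles;
    unsigned ticks = total / 144;     /* whole ~18.77 us ticks elapsed */
    t->cycle_frac  = total % 144;     /* remainder carried to next update */
    t->count += ticks;
    if (t->count >= 1024 - t->timer_a) {  /* past (1024 - TIMERA) ticks */
        t->count -= 1024 - t->timer_a;
        t->overflow = 1;
    }
}
```

The "one tick per rendered sample" variant would be the same structure with the per-sample cycle count (about 160 at 48 kHz) as the divisor instead of 144.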
Thanks for any good advice; I am really not confident that I have all of this right.