z80 clocks

For anything related to sound (YM2612, PSG, Z80, PCM...)

Moderator: BigEvilCorporation

TmEE co.(TM)
Very interested
Posts: 2296
Joined: Tue Dec 05, 2006 1:37 pm
Location: Estonia, Rapla City

Post by TmEE co.(TM) » Fri Feb 21, 2014 7:04 am

There's still the other 25µs of that line, and VDP is outputting during that time too. That is where the border and sync periods are.
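Since the thread is about Z80 clocks, it helps to put line timing like this in terms of Z80 cycles. This is a rough sketch using the commonly quoted NTSC Mega Drive clock values, which are assumed here rather than taken from the post:

```python
# Rough scanline arithmetic for an NTSC Mega Drive. The clock values below
# are the commonly quoted ones, assumed rather than taken from this thread.
MASTER = 53_693_175          # NTSC master clock, Hz
Z80_CLK = MASTER / 15        # Z80 clock, ~3.579545 MHz
LINE_US = 63.5556            # duration of one NTSC scanline, microseconds

z80_cycles_per_line = Z80_CLK * LINE_US / 1e6
print(round(z80_cycles_per_line, 1))  # roughly 227.5 Z80 cycles per line
```

So whatever portion of the line is border and sync, it's only a couple of hundred Z80 cycles from start to finish.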
What are you reading? You won't understand it anyway ;)
http://www.tmeeco.eu
Files of all broken links and images of mine are found here : http://www.tmeeco.eu/FileDen

Nemesis
Very interested
Posts: 670
Joined: Wed Nov 07, 2007 1:09 am
Location: Sydney, Australia

Post by Nemesis » Thu Feb 27, 2014 2:27 am

I find the easiest way to think about analog video is to forget about pixels altogether. All a CRT tube is doing is visualizing an analog waveform. It's basically a three-channel oscilloscope, with red, green and blue inputs. Unlike an oscilloscope, rather than varying the beam position based on input voltage, the voltage level on each input is mapped to the brightness or intensity of that colour channel at any given point. The sync inputs determine the speed and position of the raster beam, basically setting your timebase.

In a nutshell, you've got an analog waveform input, and that's it. There are no pixels, nor any fixed sample points of any kind in that waveform. There isn't even any real requirement for the sync to be sensible or stable from one frame to the next, and in the case of interlaced displays, it isn't.

In reality, a graphics processor is a big digital-to-analog converter, and its analog output is usually going to be varied at fixed intervals corresponding with some kind of clock pulse. That clock pulse may vary depending on various video mode settings, but at this level, you can usually quantize the video signal into discrete blocks you could call pixels. The problem is, the "pixels" in this case don't have any inherent size mapping; that's determined by the sync pulses which define the screen geometry, and it's quite common, and perfectly valid, to have pixels that are non-square. In a more extreme example, you could even have pixels which start on one line and finish on another, start on one frame and finish on another, or be offset in any conceivable way on the screen. The screen itself is a big canvas with no defined grid or geometry of any kind. The sync pulses allow the mapping of the pixels to the physical screen surface to be constantly redefined. Interlacing is a simple example of this, where each frame is offset by 0.5 of a "line" from the previous frame, but all kinds of transformations of the screen geometry are possible.
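As a concrete illustration that pixels have no inherent size, here's a toy calculation of how much beam time one pixel occupies under two different dot clocks. The dividers are the nominal Mega Drive H32/H40 values and are assumed, not taken from the post; the same scanline duration gets carved into differently sized "pixels" in each mode:

```python
# Beam time per pixel under two assumed dot clocks (nominal Mega Drive
# H32 and H40 dividers off the NTSC master clock; values are assumptions).
MASTER = 53_693_175          # NTSC master clock, Hz
widths = {}
for name, divider in (("H32", 10), ("H40", 8)):
    dot_clock = MASTER / divider          # pixel clock for this mode, Hz
    widths[name] = 1e6 / dot_clock        # microseconds of beam time per pixel
    print(name, round(widths[name], 4), "us per pixel")
```

Same line, same screen width, but H40 pixels occupy less beam time than H32 pixels; only the sync pulses decide how either maps onto the glass.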

The biggest difficulty in converting analog video to some kind of fixed image in square pixels is actually dealing with the sync pulses. Any correct transformation of the colour data needs to continually calculate the screen geometry at any given point based on the current and past sync input, mapping the current raster position to its own pixel grid in the same way a CRT adjusts its raster beam position. If you do this right, it's then as simple as sampling the red, green and blue channel inputs at that point in time: that gives you your pixel colour, with a transform to give you your canvas location. With this technique, you could even visualize the full overscan region, with no blanking areas where video signal is dropped at all. It takes quite a lot of processing to do this kind of transformation in software in real time, but it's possible. Personally, I'd love to see someone build this as a hardware device. The scanline converters and the like currently on the market are pretty crappy; they all seem to only work for a few screen modes and that's it. There's no reason you couldn't make one decent converter that can handle any analog video signal and convert it to an image. It might be possible to do this in realtime using an off-the-shelf microcontroller, perhaps even on the Raspberry Pi.
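The sync-driven sampling described above could be sketched roughly like this. Everything here is hypothetical (the sample format, the function name, the nearest-sample mapping); a real converter would also need sync filtering, vsync and field handling, and porch timing:

```python
# Toy sketch of sync-driven rasterization: the captured signal is a list of
# (r, g, b, hsync, vsync) samples, and the raster position is derived from
# the hsync edges rather than from any assumed fixed pixel grid.
# All names and the sample format are hypothetical.
def rasterize(samples, out_w, out_h):
    frame = [[(0, 0, 0)] * out_w for _ in range(out_h)]
    line_starts = []                      # sample indices of hsync rising edges
    prev_h = False
    for i, (_, _, _, h, _v) in enumerate(samples):
        if h and not prev_h:
            line_starts.append(i)
        prev_h = h
    # Map each detected line's span of samples onto one row of the canvas,
    # picking the nearest sample for each output pixel.
    for row, start in enumerate(line_starts[:out_h]):
        end = line_starts[row + 1] if row + 1 < len(line_starts) else len(samples)
        span = end - start
        for x in range(out_w):
            r, g, b, _, _ = samples[start + x * span // out_w]
            frame[row][x] = (r, g, b)
    return frame
```

Feeding it a capture containing two hsync pulses yields two rows of the output canvas; the vsync edges (ignored in this sketch) would tell a real converter where each field starts, which is exactly how interlaced offsets would fall out naturally.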

Mechanical Menace
Newbie
Posts: 7
Joined: Mon Apr 07, 2014 6:00 am

Post by Mechanical Menace » Thu May 01, 2014 9:08 am

Nemesis wrote:There's no reason you couldn't make one decent converter that can handle any analog video signal and convert it to an image. It might be possible to do this in realtime using an off-the-shelf microcontroller, perhaps even on the Raspberry Pi.
I'd personally look more at an XMOS for that. If taking the ARM dev-board route, a BeagleBone Black.
