You could also trade resolution for speed by halving vertical resolution:
* Either render at halved vertical resolution, then DMA that to the VDP, then reduce vscroll by 1 every 2nd row so the plane is stretched vertically by a factor of 2.
* Or DMA the gfx with a transparent line between each pair of lines, set both planes to the same tilemap, and shift one of them vertically by 1 pixel so you get doubled pixels for free. (except that the 3D view requires both planes now)
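The first option (the vscroll trick) can be sketched like this — a toy model, not real VDP register code (function names are mine; on real hardware you'd rewrite vscroll from an HBlank interrupt or use per-2-cell column scroll):

```c
#include <assert.h>

/* Toy model of the line-doubling vscroll trick: the displayed plane row
   is (display_row + vscroll), and vscroll drops by 1 on every 2nd row,
   so each rendered half-res line is shown twice. */
int vscroll_for_row(int display_row)
{
    return -((display_row + 1) / 2);   /* 0, -1, -1, -2, -2, ... */
}

int source_row(int display_row)
{
    return display_row + vscroll_for_row(display_row);
}
```

So display rows 0,1,2,3,... map to source rows 0,0,1,1,... — a 112-line render fills a 224-line screen.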
Yes, you could do that, but you'd end up looking much more like SNES Wolf 3D . . . though not QUITE as chunky. (smaller horizontal pixels)
Note that for ray casting, cutting vertical res doesn't help nearly as much as cutting horizontal res. (horizontal res is what determines the number of columns, and thus rays to cast)
With that in mind, dropping to H32 and using a 208x128 window would probably be a better compromise if need be.
For a polygon renderer, cutting vertical resolution helps a bit more than horizontal, since it means fewer lines to draw (though it doesn't cut vertex computation, so it's not as helpful as cutting H-res is for ray casting).
Chilly Willy wrote:While the colors look great here for being dithered colors, it's a little easier than Doom would be because of the lack of shading. Doom maintains 32 shades of many of the colors used (look at a picture of Doom's palette and you'll see multiple runs of shades of the same colors). Doing Doom on a stock MD graphics might be possible, but you'd probably have to eliminate shading (or maybe cut it back drastically). Doom would be more a SCD project than a Genesis one. The SCD 68000 is more than capable of running the game logic - that's what the Jaguar version does, and its 68000 is just barely faster. You'd have to do most of the rendering on the Genesis side. You'd REALLY need to cut down the textures a lot - basically, someone would need to edit the Doom wad file and redo all the level and things graphics. It's a lot of work... much more than W3D.
Posterizing Doom's palette to 12-bit RGB should give you a good idea of how look-up based shading would work in dithered 9-bit color. (since you can approximate any 12-bit RGB color by blending 2 9-bit RGB colors)
I'll bet the actual color count of that palette will drop quite a bit from that truncation/posterization alone, quite possibly low enough to fit into the 120 (or 136) pseudo-color limit a 16-color palette allows. (so no further optimization required)
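To make that arithmetic concrete, here's a rough sketch (function names are mine, not from any real codebase): posterize each 8-bit channel to 4 bits for 12-bit RGB, then approximate each 4-bit level as the average of two 3-bit (9-bit RGB) levels. A 16-color palette gives 16 solid colors plus C(16,2) = 120 two-color blends, hence the 136 figure (120 if you only count the blends).

```c
#include <assert.h>

/* Posterize an 8-bit channel to 4 bits (12-bit RGB = 16 levels/channel). */
int to4bit(int c8) { return c8 >> 4; }

/* Split a 4-bit level into two 3-bit levels whose average approximates it.
   Exact for 0..14; level 15 can't be hit (7+7 = 14 is the max sum). */
int lo3(int v4) { return v4 / 2; }
int hi3(int v4) { int b = v4 - v4 / 2; return (b > 7) ? 7 : b; }

/* Pseudo-colors from an n-color palette: solids + unordered blend pairs. */
int pseudo_colors(int n) { return n + n * (n - 1) / 2; }   /* 16 -> 136 */
```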
Of course, the shading itself would look worse than Doom's 18-bit color, and it would probably be better to cut the light levels to more like 3 or 4 (maybe 3 plus solid black) instead of 8 (SNES does 4, I think), but it should be workable. Besides that, it would also largely be the really far-off objects that get super posterized/desaturated from shading, so not that much detail loss as such.
You'd also probably want to make up completely new shading LUTs rather than just downconverting the PC originals directly.
Doing fewer shades would cut down on overhead too, more so if you opted for per-sector lighting like the 32x version. (which also avoids the super heavy posterization issue of gradient lighting in general -and you could modify light levels to stay within a reasonable color threshold too)
Also, the color is more complex than just considering two pixels... look at this
AB CD EF GH etc
That's how people are considering the pixels to calculate the colors. But that's rather arbitrary... it could just as easily be
A BC DE FG H etc
Yes, which is why you get little fringes on the left/right edges of objects. Which, in the case of composite video, is something you always get anyway to some extent. :p (and to a lesser extent in S-video -and in chroma only)
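The two pairings really do produce different blends at object edges — here's a toy illustration (the example row and function names are mine), blending one channel of adjacent pixels in pairs at either phase:

```c
#include <assert.h>

/* One channel (0..7, as in 9-bit RGB) of a scanline: a bright object
   starting at pixel 1 on a dark background. */
static const int row[6] = { 0, 7, 7, 7, 0, 0 };

/* Phase 0 pairs (row[0],row[1]) (row[2],row[3])...;
   phase 1 pairs (row[1],row[2]) (row[3],row[4])... */
int pair_blend(int pair, int phase)
{
    int i = 2 * pair + phase;
    return (row[i] + row[i + 1]) / 2;
}
```

At phase 0 the left edge blends to a half-bright fringe (3), while at phase 1 the first pair lands entirely on the object (7) — exactly the edge-fringe difference being described.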
In the Apple II, it's also not just blending, but actual color artifacting, so the order of the pixels matters as well. (why you get 4 unique colors from blending 2 colors, rather than the 3 unique possibilities from actual blending -and why you get 16 colors in high-res mode from blending 4 adjacent 2-color pixels)
Same as artifact colors in CGA or A8. (except CTIA and GTIA have different color artifacts, so it couldn't be used consistently -plus the A8 palette is big enough that using lower res 2-bit pixels is usually better anyway)
Chilly Willy wrote:
So was I. On Doom, to draw at a particular shade, they merely pick 1 of 32 colormaps, each one presenting a particular shade. The colormap takes the texture pixel value as an index, and gives the new value that points to the same color at the proper shade in the same palette (assuming there is one). At really low light levels, many of the colors are pointed to the same values, giving the unsaturated look to dim parts of the level.
I thought Doom used 8 light levels, and 8 colormaps corresponding to that. (Quake used 32 though)
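The colormap scheme described above boils down to a table lookup per pixel. A minimal sketch with a cut-down shade count and a toy table (the table contents here are placeholders — a real one would be built offline from the palette):

```c
#include <assert.h>

#define NUM_SHADES 8    /* cut down from PC Doom's colormap count */
#define PAL_SIZE   16   /* one MD palette line */

/* colormap[s][i]: palette index best matching color i at shade s. */
static unsigned char colormap[NUM_SHADES][PAL_SIZE];

/* Toy build: shade 0 collapses everything to index 0 (black), giving
   the desaturated look at low light; other shades are left identity. */
void init_colormaps(void)
{
    for (int s = 0; s < NUM_SHADES; s++)
        for (int i = 0; i < PAL_SIZE; i++)
            colormap[s][i] = (s == 0) ? 0 : (unsigned char)i;
}

/* Shading is then just one lookup per texel. */
unsigned char shade_pixel(unsigned char texel, int shade)
{
    return colormap[shade][texel];
}
```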
Of course, if you are outputting in 15bit color mode, you could make much more accurate colors at all the levels since you aren't limited to the 256 color palette. That's basically what the Jaguar version does, and many PC source ports when set for 15 or 24 bit mode.
The Jaguar does more than that: it uses the full 256 light levels possible in CRY. CRY uses 1 byte for lighting, so there are no colormaps to deal with at all.
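That's the appeal of CRY for shading: with the high byte selecting chrominance and the low byte being intensity, lighting is just a scale of the Y byte, no table lookup. A rough sketch (not actual Jaguar code):

```c
#include <assert.h>

/* Sketch of CRY-style shading: high byte = chrominance, low byte = Y
   intensity.  Lighting scales Y directly -- 256 light levels, no
   colormap needed. */
unsigned short cry_shade(unsigned short pixel, unsigned char light)
{
    unsigned cr = pixel & 0xFF00u;   /* chrominance untouched */
    unsigned y  = pixel & 0x00FFu;   /* intensity */
    return (unsigned short)(cr | (y * light / 255u));
}
```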
As far as 15-bit color goes, you've got some limits there too compared to PC Doom, which I assume is why 32x Doom uses vertical line (strip) dithering. (though TBH, posterizing the palette to 15-bit probably wouldn't have looked much different at all)
Actually, if they WERE going to do dithering like that, it would have been better to convert all the textures themselves to undithered 15-bit values, and then optimize any leftover (redundant) color values around a palette optimized for smoother lighting via dithered pixels. (though if you're just doing per-sector lighting, that doesn't matter either)
Yetti 3D on 32x would be a good example of LUT based shading in 15-bit direct color though. (more so for the non-interpolated version, since the interpolation makes dither colors)
Yeah, Doom uses shade levels; we can't reproduce it the same way, but Toy Story actually uses 3 shade levels for its 3D level, and that does look great.
The palette is really heavily built around gray, purple, and green colors though.
Zero Tolerance is an example of doing gradient lighting with more colors. I'm not sure that's LUT based either. (it looks pretty decent too, especially when blended, and it looks like they manage at least 6 light levels that way -might be 7, probably not 8, but I could easily count 6)
And again, given the color loss you'd get from conversion down to 12-bit colors anyway, I doubt lighting would be much worse than what you could get from a proper 256 color 12-bit palette. (as it is, the limited number of pseudo colors generated might not be the limit so much as the 9/12-bit RGB limits in general)
Zero Tolerance also "cheats" like SNES Doom, with untextured floors using a static shading gradient with lots of complex dithering (not ordered, let alone 1x2 pixel-pair).