danibus wrote: ↑Fri Apr 13, 2018 1:26 am
- VPD1 and VPD2: VPD1 writes in framebuffer. VPD2 reads this framebuffer and copy all this info to one of its layers. Then mix all layers and send result to TV: If this is true, then all "sprites" are in one layer. This seems usual way.
The VDP1 draws one frame into its framebuffer 0 while the VDP2 reads framebuffer 1.
When the VDP1 finishes, the buffers swap: it draws the next frame into framebuffer 1 while the VDP2 reads framebuffer 0.
Repeat.
The VDP1 only ever has one framebuffer active for drawing, because the VDP2 is using the other one to compose the final image sent to the TV, together with the background layers stored in either or both banks of VDP2 VRAM. So the VDP2 doesn't "copy" the framebuffer out; it reads it directly, and thus works out of as many as three 16-bit banks simultaneously.
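The swap described above reduces to a simple parity rule. Here's a minimal sketch, assuming a hypothetical frame counter (helper names are mine; on real hardware the swap happens at VBlank when automatic frame change is configured):

```c
/* Sketch of the VDP1/VDP2 double-buffer handshake: on even frames the VDP1
 * draws into framebuffer 0 while the VDP2 scans out framebuffer 1; on odd
 * frames the roles swap. The two chips never touch the same buffer. */
static int vdp1_draw_fb(unsigned frame) { return (int)(frame & 1u); }
static int vdp2_scan_fb(unsigned frame) { return (int)((frame & 1u) ^ 1u); }
```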
If you use RGB sprites, then yes, all sprites are treated as one layer by the VDP2: 15 bits are for colour, and the MSB tells the VDP2 that this is an RGB sprite, not a palette one.
If you use palette sprites, then you can put priority bits in the framebuffer, which the VDP2 uses to determine the stacking position of each pixel. The sprites are still treated as one layer, but individual pixels can fall below or above the other backgrounds. Kind of like a VDP2-only z-index.
It's a completely stupid setup, because this way you have a mode suitable for 3D (RGB mode) with shading and transparency, but you are limited in how you can use the VDP2. Or you have a mode suitable for 2D (palette mode) where you can mix VDP2 backgrounds better, but you are limited in VDP1 colour calculations (sprite transparency and Gouraud shading only work in RGB mode). I would've used a 12-bit RGB mode, where you have 4096 colours and 4 bits left over for VDP2 priority/transparency/shadowing. It would've made things SO much easier (as an alias of ARGB4444): you could have used all the advantages of both chips together, with the same colour fidelity as palette mode. But, alas.
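To make the pixel formats concrete, here is a sketch of the two framebuffer encodings described above, plus the wished-for ARGB4444-style alternative. The helper names are mine, the priority/index split in the palette example is one hypothetical configuration (the real split depends on the chosen sprite type), and the ARGB4444 mode does not exist on real hardware:

```c
#include <stdint.h>

/* RGB sprite pixel: MSB set, then 5 bits each of blue, green, red (BGR555). */
static uint16_t rgb_pixel(unsigned r, unsigned g, unsigned b) {
    return (uint16_t)(0x8000u | ((b & 31u) << 10) | ((g & 31u) << 5) | (r & 31u));
}

/* Palette sprite pixel: MSB clear; priority bits share the word with the
 * palette index. Hypothetically: 3 priority bits over an 11-bit index. */
static uint16_t pal_pixel(unsigned priority, unsigned index) {
    return (uint16_t)(((priority & 7u) << 11) | (index & 0x7FFu));
}

/* The MSB is how the VDP2 tells the two formats apart. */
static int is_rgb(uint16_t p) { return (p & 0x8000u) != 0; }

/* The proposed (non-existent) mode: 4 control bits + 4:4:4 RGB. */
static uint16_t argb4444(unsigned ctrl, unsigned r, unsigned g, unsigned b) {
    return (uint16_t)(((ctrl & 15u) << 12) | ((r & 15u) << 8) |
                      ((g & 15u) << 4) | (b & 15u));
}
```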
danibus wrote: ↑Fri Apr 13, 2018 1:26 am
But when researching about "BURNING RANGERS" game, seems they make this 2 times.
First VPD1 put transparent elements (like fire) in framebuffer. VPD2 reads and put in a layer.
Second VPD2 erase framebuffer (is this possible) and put sprites. VPD2 reads and put in another layer.
Finally VPD2 mix everything.
Burning Rangers works in a roundabout way in order to display transparent polygons. What it does is:
- draws the explosions etc. with the VDP1, fully opaque, at half resolution
- does some trickery to erase the parts of the screen obscured by other polygons (think of it as software z-indexing)
- SCU-DMAs the VDP1 framebuffer to VDP2 VRAM
- the VDP1 framebuffer is erased, and the VDP1 proceeds to draw the non-transparent parts
- the VDP2 then composes the final image normally from the VDP1 framebuffer and VDP2 VRAM, and applies blending to the explosions, which are now a VDP2 background even though they were drawn by the VDP1.
Essentially you have one VDP2 background, in VDP2 VRAM, being continuously, dynamically re-drawn by the VDP1. Or depending on which way you look at it, you are using the VDP2 to render what the VDP1 drew.
The advantage of this is that it can display transparent explosions which blend with both sprites and backgrounds (normally you could only have one or the other). The disadvantages are that you only get one transparent layer, you have to take care to manually obscure the parts hidden by other geometry (like someone standing in front of the explosion), and the transparent layer runs at half resolution to speed it up. The framerate can also be uneven, since sometimes only one of the two VDP1 passes can be finished in time, not both.
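The two-pass flow can be sketched as an ordered per-frame sequence. This is pure pseudocode with hypothetical step names, just recording the order of operations described above — the real game drives this with interrupts and the SCU DMA controller:

```c
/* Records the per-frame order of the Burning Rangers-style two-pass trick. */
static const char *steps[8];
static int nsteps = 0;
static void step(const char *s) { steps[nsteps++] = s; }

static void render_frame(void) {
    nsteps = 0;
    step("vdp1 draws transparent parts, opaque, half-res"); /* pass 1 */
    step("erase pixels occluded by nearer polygons");       /* software z-index */
    step("scu dma framebuffer -> vdp2 vram");               /* becomes a VDP2 bg */
    step("erase vdp1 framebuffer");
    step("vdp1 draws opaque parts");                        /* pass 2 */
    step("vdp2 composites, blending the dma'd layer");
}
```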
It is really complex but it works. Chris Coffin explained that this was something they came up with after STI dissolved and they went on to work on Saturn devkits. Burning Rangers seems to be the only game it was put into, as far as we know.
danibus wrote: ↑Fri Apr 13, 2018 1:26 am
- SH2 and VPD2: I read that is possible to use only VPD2 (avoiding VPD1) as draws faster. But if VPD1 is the one that make sprites/quads, how SH2 can draw?
Two ways. One is to draw in software and upload the result to VDP2 VRAM. Doom does this for everything but the HUD, AMOK does it for the voxel landscape and draws polygons and sprites on top, and Sonic R does it for the environment mapping on the Sonic R logo and the loading screen.
The other way is to use expansion hardware which gets fed into the VDP2 as a background. The MPEG card does this.
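A minimal sketch of the first approach: the buffer below stands in for a region of VDP2 VRAM configured as a 16-bit RGB bitmap background, which the SH-2 fills in software and the VDP2 then scans out. The dimensions and the gradient fill are purely illustrative:

```c
#include <stdint.h>

enum { W = 64, H = 32 };
static uint16_t bitmap[W * H]; /* stand-in for a VDP2 bitmap bg in VRAM */

/* Software "renderer": fill the bitmap with a horizontal red gradient.
 * Bit 15 set marks each pixel as direct RGB rather than a palette entry. */
static void sw_render(void) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            bitmap[y * W + x] = (uint16_t)(0x8000u | (unsigned)(x * 31 / (W - 1)));
}
```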
danibus wrote: ↑Fri Apr 13, 2018 1:26 am
- Virtua Fighter 1 vs Virtua Fighter REMIX: I see that floor is different. In VF1 seem to have poligons but in VF1 REMIX seem to be a big texture manage by VPD2. Is this true?
This is correct. VF Remix also loses all the lighting effects on the polygons. I think the original VF looked better because of the lighting. If only they could've done it in hi-res.
danibus wrote: ↑Fri Apr 13, 2018 1:26 am
- VPD1: Some people say that VPD1 lose time when "writing" texture in poligon even if the poligon is smaller than texture. I don't understand why (if it's like this). Read this about texture rendering
Normal renderers determine which part of the screen they are drawing to, and then use UV texture coordinates to determine which texel gets written to each destination pixel. So the only texels sampled are the ones actually displayed on screen.
The Saturn works "backwards": it samples every pixel of the texture and determines whether it needs to be written to the framebuffer or not. So if you have a 64x64 texture but only write 32x32 of it, you waste three quarters of your fillrate checking texture pixels that never end up being drawn. There are some mitigating factors, like texture end codes, that can be used to reduce the number of pixels sampled, but it's still bloody stupid either way.
One thing to note, though: while it samples every pixel of the texture, it doesn't write multiple values per line to the framebuffer. So you don't actually get pixels written multiple times (within one line, anyway); the speed is wasted reading the texture. Where you DO get overwrites is when lines intersect, i.e. when the polygon is not a perfect square (4-point transformations, which the manual calls "Distorted Sprites").
I don't know if you get overdrawn pixels when you draw a poly whose right side is longer. Logic would dictate that you'd get dropouts there, but the VDP1 does some anti-aliasing to get around it; I don't know how.
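The cost difference can be illustrated by counting memory operations when a 64x64 texture is shrunk into a 32x32 screen area. This is a hypothetical counter model, not the VDP1's actual pipeline (no setup or end-code costs), but it captures the asymmetry: forward mapping reads every texel, and only per-line duplicates are skipped, while duplicate destination lines still get redrawn.

```c
/* Forward mapping (Saturn-style): walk every texel, compute its destination,
 * and suppress only consecutive writes to the same destination pixel. */
static void forward_map(int tw, int th, int dw, int dh,
                        long *reads, long *writes) {
    *reads = 0; *writes = 0;
    int last_dx = -1, last_dy = -1;
    for (int v = 0; v < th; v++)
        for (int u = 0; u < tw; u++) {
            (*reads)++;                          /* every texel is sampled */
            int dx = u * dw / tw, dy = v * dh / th;
            if (dx != last_dx || dy != last_dy) {
                (*writes)++;                     /* per-line repeats skipped */
                last_dx = dx; last_dy = dy;
            }
        }
}

/* Inverse mapping (conventional): walk destination pixels, sample one texel
 * each, so reads and writes both equal the on-screen pixel count. */
static void inverse_map(int tw, int th, int dw, int dh,
                        long *reads, long *writes) {
    (void)tw; (void)th;
    *reads  = (long)dw * dh;
    *writes = (long)dw * dh;
}
```

For the 64x64-into-32x32 case this model samples 4096 texels (four reads per distinct screen pixel), where a conventional renderer would sample 1024.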
I'm not clear on the actual speed of the VDP1. An old Sega tutorial lists an equation to approximate the cycles the VDP1 takes to draw something, but if I assume the 28.6 MHz main clock and 16x16 sprites, I get something ridiculous, like ~4 MPixel/s. Using no textures and very large sprites (to reduce memory-read and VDP1 setup overhead), I got up to ~10 MPixel/s the last time I checked the equation. This is disturbingly low, and yet it jibes with the few developers who have commented on how slow the VDP1 is compared to the PSX (which has a theoretical peak of 33 MPixel/s, and some demos have reached ~24 MPixel/s in practice).
I also don't know whether that equation accounts for 8-bit versus 16-bit sprites.
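As a sanity check on those figures, the implied per-pixel cost is simple division. The 28.6 MHz clock and the ~4 and ~10 MPixel/s estimates come from the post; everything else here is just arithmetic, not a claim about the real drawing equation:

```c
/* Implied VDP1 cost per pixel at a given clock and throughput estimate:
 * ~7.15 cycles/pixel at 4 MPixel/s, ~2.86 cycles/pixel at 10 MPixel/s. */
static double cycles_per_pixel(double clock_hz, double pixels_per_sec) {
    return clock_hz / pixels_per_sec;
}
```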