3D rasterisation


By hit9918

Prophet (2927)


15-07-2017, 11:00

106k, 2 bytes per pixel, is it this?
{ nextx, color }
The L register of the struct address is x, so nextx - x gives the size of the span.

Without a z-buffer byte I imagine the span logic gets a lot easier.
It is hassle enough to have all the x management.
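A rough C model of that buffer, in case it helps (the exact layout here is my guess, and paint() is just a stand-in for the inner loop):

    #include <stdint.h>

    /* span buffer as described above: indexed by each span's start x, so the
       low byte of the struct address is x and nextx is the exclusive end
       (nextx - x = span length) */
    typedef struct { uint8_t nextx; uint8_t color; } Span;

    Span line[256];                  /* one scanline's worth of spans */

    /* paint one scanline; assumes the spans tile 0..255 and the final span's
       nextx wraps to 0, which ends the loop */
    void paint(uint8_t *dst)
    {
        uint8_t x = 0;
        do {
            Span s = line[x];
            do { dst[x++] = s.color; } while (x != s.nextx);
        } while (x != 0);
    }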

4-5 cycles for external RAM;
it would be funny when nextx is in internal RAM and the rarely accessed color byte is on the cartridge:
add 16k to the H register to offset into another 16k page.

By Lord_Zett

Paladin (807)


15-07-2017, 12:07

But is this way just for making 3D objects, or can you do more world-like gfx?

By bore

Master (161)


15-07-2017, 12:29

For the perspective calculation I used a table holding the perspective of a point at x = 256: ptable[z] = (256 * P)/(z + P), where P is the distance between the screen and the viewer's eye in pixels. (You have to guesstimate that value.)
Then the perspective calculation reduces to x * ptable[z] / 256.
A nice thing here is that skipping the lower byte of the multiplication introduces less than one LSB worth of error, and it is in the opposite direction to the error introduced by the quantization of the table, so in total the error stays within +/-1 LSB.
Since you only need to check the upper byte in the square table during the multiplication, you can save a lot of cycles there.
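As a sketch of that table in C (the value of P and the z range here are placeholder choices, not bore's):

    #include <stdint.h>

    #define P 256                        /* guesstimated eye-to-screen distance in pixels */
    static uint8_t ptable[256];

    void init_ptable(void)
    {
        for (int z = 0; z < 256; z++) {
            int v = (256 * P) / (z + P);
            ptable[z] = (v > 255) ? 255 : (uint8_t)v;   /* only z = 0 needs the clamp */
        }
    }

    /* screen x = x * ptable[z] / 256; dropping the low byte of the product is the "/ 256" */
    int project_x(int x, uint8_t z)
    {
        return (x * ptable[z]) / 256;
    }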

Another way to check which way a surface is facing is to check the sign of its area.
For a triangle between the points A, B, C it would be ((By-Ay)*(Bx+Ax) + (Cy-By)*(Cx+Bx) + (Ay-Cy)*(Ax+Cx))/2, so three multiplications, and the final shift can be omitted if you only care about the sign.
I've never tried this method so I don't know about the caveats.
I guess it should be possible to shift x and y down before the additions to avoid having to deal with overflow.
It will make things a bit "fuzzy" for areas close to 0, but as long as you treat those as visible it would probably be fine.
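A quick C version of that test, as I read it (untested sketch; with int math in C the overflow concern goes away, on the Z80 you would shift x and y down first as described):

    typedef struct { int x, y; } Pt;

    /* sign of the signed area of triangle ABC in screen space; the /2 is
       dropped since only the sign matters */
    int is_front_facing(Pt a, Pt b, Pt c)
    {
        int area2 = (b.y - a.y) * (b.x + a.x)
                  + (c.y - b.y) * (c.x + b.x)
                  + (a.y - c.y) * (a.x + c.x);
        return area2 > 0;    /* flip the comparison if your winding is the other way */
    }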

By hit9918

Prophet (2927)


15-07-2017, 14:48

Lord_Zett: yes, it could make some kind of world; big polygons go very fast, and CLS goes very fast.

By Grauw

Ascended (10713)


16-07-2017, 17:26

hit9918 wrote:

106k, 2 bytes per pixel, is it this?
{ nextx, color }
The L register of the struct address is x, so nextx - x gives the size of the span.

Without a z-buffer byte I imagine the span logic gets a lot easier.
It is hassle enough to have all the x management.

Yeah, the logic definitely took some effort to write :). Now that I have it, it’s such a shame to discard it :P.

hit9918 wrote:

4-5 cycles for external RAM;
it would be funny when nextx is in internal RAM and the rarely accessed color byte is on the cartridge:
add 16k to the H register to offset into another 16k page.

Interesting idea if the buffer does not fit in the internal memory!

By Lord_Zett

Paladin (807)


16-07-2017, 18:09

hit9918 wrote:

Lord_Zett: yes, it could make some kind of world; big polygons go very fast, and CLS goes very fast.

would be cool.

By Grauw

Ascended (10713)


16-07-2017, 19:54

bore wrote:

For the perspective calculation I used a table holding the perspective of a point at x = 256: ptable[z] = (256 * P)/(z + P), where P is the distance between the screen and the viewer's eye in pixels. (You have to guesstimate that value.)
Then the perspective calculation reduces to x * ptable[z] / 256.

A while back I was looking at this, because why use a 4x4 perspective projection matrix when you could use 4x3 matrices and divide by z? But for correct perspective, you need to divide by the distance to the point, in other words sqrt(x*x + y*y + z*z).

So for a point in the center of your view, the z coordinate will be the correct distance, but for points closer to the edges of the screen the z distance will be too low, so it causes some perspective distortion. The amount of shearing would vary based on the view angle. It may be acceptable under certain conditions?

The perspective projection matrix does yield correct perspective whilst avoiding the square root, through the use of homogeneous coordinates. (To hazard a guess, I imagine the w is determined by rotating the z value to the center of the screen?)
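(For reference, and as far as I know: in the standard 4x4 perspective matrix the bottom row is just (0, 0, 1, 0), or (0, 0, -1, 0) depending on handedness, so w comes out as a straight copy of z, and the homogeneous divide is a plain divide by z with no rotation involved.)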

p.s. I reckon the P value can be derived from the view angle and the near clipping plane.

Supposedly a view angle of 90º is a good choice, because the 45º half-angle means that the clipping planes lie directly on x=z and y=z, which makes it easier to do e.g. clipping in world space rather than view space. This can avoid matrix multiplications and the perspective divide for culled polygons.
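Working that out from bore's definition of P as the eye-to-screen distance in pixels: tan(viewangle/2) = (screen width / 2) / P, so P = (screen width / 2) / tan(viewangle/2). With a 90º view angle and a 256-pixel-wide screen that gives P = 128.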

Quote:

Another way to check which way a surface is facing is to check the sign of its area. For a triangle between the points A, B, C it would be ((By-Ay)*(Bx+Ax) + (Cy-By)*(Cx+Bx) + (Ay-Cy)*(Ax+Cx))/2, so three multiplications, and the final shift can be omitted if you only care about the sign. I've never tried this method so I don't know about the caveats.

That looks similar to backface culling based on the winding order of the polygons using a cross product, although that should require only two multiplications? (Bx-Ax)*(Cy-Ay) - (Cx-Ax)*(By-Ay)… Not sure what is different.

The one caveat that I can think of is that the polygon indices must have the correct winding order (CW or CCW, should be a simple checkbox in 3D modeling software).
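For what it's worth, expanding both shows they are the same test. The three-product sum, before the /2:

(By-Ay)*(Bx+Ax) + (Cy-By)*(Cx+Bx) + (Ay-Cy)*(Ax+Cx)
= Ax*By - Ay*Bx + Bx*Cy - By*Cx + Cx*Ay - Cy*Ax

and the two-product cross form:

(Bx-Ax)*(Cy-Ay) - (Cx-Ax)*(By-Ay)
= Ax*By - Ay*Bx + Bx*Cy - By*Cx + Cx*Ay - Cy*Ax

so they give the same sign; the cross product is just the algebraically simplified version, equal to twice the signed area.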

---

To throw a link out here, in addition to Michael Abrash’s Graphics Programming Black Book which I think I mentioned earlier, the Hugi programming articles on 3D graphics are also interesting.

By bore

Master (161)


16-07-2017, 20:23

Grauw wrote:

For a point in the center of your view, the z coordinate will be the correct distance, but for points closer to the edges of the screen the z distance will be too low, so it causes some perspective distortion. The amount of shearing would vary based on the view angle. It may be acceptable under certain conditions?

No distortion. The function just maps 3D coordinates to the front of the screen.
http://imgur.com/hoEof7V
It is simply the intersection between the screen and the pixel-eye line.

By TomH

Champion (343)


28-11-2017, 20:27

Super-late, but...

Correct perspective is indeed to divide by z alone. That's because you're projecting onto a flat plane in front of the viewer, rather than onto a curved surface. One hand-waving persuasive argument is that if your 3D scene itself has a 2D screen in it, and that screen is parallel to the one you're viewing it on, it should be a rectangle no matter where it is. Also, as a corollary, if all three of (x, y, z) dictated scaling, then just transforming the vertices of the geometry and drawing straight edges between them wouldn't produce the correct display, as straight edges would have to be mapped to curves.
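A quick sanity check of that: with a flat-screen projection x' = x*P/z, y' = y*P/z, every point on a plane z = d gets the same scale factor P/d, so a rectangle on that plane projects to a rectangle and straight edges stay straight. Divide by sqrt(x*x + y*y + z*z) instead and the scale varies across the plane, so straight edges bow into curves.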

Even on a stock Z80, the multiplication needn't be a problem for all types of scene as long as you have at least 64kb of RAM. A 3D engine I wrote for another Z80 machine allowed 3D objects to be anywhere in an 8.8-precision scene, but the geometry of each individual object had to be confined to a 1.8 space. So the old x*y = ((x + y) * (x + y) - (x - y) * (x - y))/4 trick involved keeping only a 512-entry x -> x^2 table. By centring that table on address 0, direct LD BC, x / LD HL, y / ADD HL, BC / LD (something), HL-type code gives a speedy lookup. Just transform the object centres with the more expensive multiplication, then do the vertices local to each object via the fast route and sum the two.
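A small C model of that trick, for reference (not the original code; this uses the classic quarter-square variant, floor(n*n/4), instead of a raw x -> x^2 table, which makes the identity exact with no final shift):

    #include <stdint.h>

    #define QS_MIN (-512)
    #define QS_MAX   511
    static int32_t qs_store[QS_MAX - QS_MIN + 1];
    static int32_t *qs = qs_store - QS_MIN;   /* "centred" table: qs[0] sits mid-table */

    void init_qs(void)
    {
        for (int32_t n = QS_MIN; n <= QS_MAX; n++)
            qs[n] = (n * n) / 4;
    }

    /* exact for x, y in roughly [-256, 255] (the confined object space):
       x+y and x-y then stay inside the table and always share parity,
       so the two floors cancel and qs[x+y] - qs[x-y] == x*y */
    int32_t mul_qs(int x, int y)
    {
        return qs[x + y] - qs[x - y];
    }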

If you have multiple instances of the same object with the same rotation, do the object vertices once, then just vary the centre transforms.

Re: scan buffering, I found that manipulating it quickly cost much more than I was saving in drawing when the output is filled spans, either by way of the linear search to insert or, if turned into an ordinary array for a logarithmic search, by the actual cost of shuffling items over. A less memory-efficient but faster solution, if you can accept a reverse painter's algorithm (so, paint front to back), is to insert into a binary tree. At each node in the tree: (i) clip the span being inserted by subtracting the node's span from it; (ii) head left and/or right as needed. So you get a completely logarithmic insert. There's a linear in-order traversal at the end if you want to output as a diff from the previous frame, but it's one linear step per frame, not one per insert. So the net for n spans in a line is O(n + n log n), which is still O(n log n), whereas with a list (whether linked, or linear because of the shuffle step in linear insertion) there is a linear step for each insert, so it's O(n) n times over, for O(n^2).
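A rough C model of that tree (not TomH's code; emit_span() is a hypothetical hook for whatever the rasteriser does with a visible piece, and balancing is not handled, so the logarithmic depth is only on average):

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct Node {
        int x0, x1;                  /* half-open span [x0, x1) */
        uint8_t color;
        struct Node *left, *right;
    } Node;

    void emit_span(int x0, int x1, uint8_t color);

    /* front-to-back insert: spans already in the tree are nearer, so the
       overlap with the incoming span is dropped and only the uncovered
       pieces recurse left/right and become new nodes */
    Node *insert(Node *n, int x0, int x1, uint8_t color)
    {
        if (x0 >= x1)
            return n;                          /* clipped away entirely */
        if (n == NULL) {
            Node *m = malloc(sizeof *m);
            m->x0 = x0; m->x1 = x1; m->color = color;
            m->left = m->right = NULL;
            emit_span(x0, x1, color);          /* this piece is visible */
            return m;
        }
        /* piece to the left of this (nearer) span */
        n->left  = insert(n->left,  x0, x1 < n->x0 ? x1 : n->x0, color);
        /* piece to the right of it; the overlap in between is occluded */
        n->right = insert(n->right, x0 > n->x1 ? x0 : n->x1, x1, color);
        return n;
    }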

On reverse-face detection: if you want to do it ahead of time then a suitable test is (any vector from camera to face) dot normal <= 0. If you prefer to defer it until rasterisation, it's even easier: if the first span you would otherwise draw has negative length, stop.

As to run-slice versus Bresenham: I preferred run-slice for anything where x delta > y delta, Bresenham otherwise. That gives the one point you want for each line at each step, though run slice costs an 8x8 divide per line.

If you're interested in demo effects rather than actual games, and don't care about subpixel precision, there are only 49,152 possible lines on a 256x192 display as there is the complete set of lines from (0, 0) to every other point, and then there are translations of subsets of those. Therefore in a 96kb lookup table you can store the result and remainder of every possible 8x8 division, and always perform run-slice with no set-up cost. Actually much less than that because symmetry considerations apply; e.g. if you reduce yourself into 192x192 then the lookup size is trivially 36kb. You'll just probably have to try to squeeze other lookup tables and code into the gaps unless you want to spend absurdly on lookup costs.
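(Checking the arithmetic: 256 * 192 = 49,152 endpoints reachable from (0, 0), and a quotient byte plus a remainder byte per line is 49,152 * 2 = 98,304 bytes = 96kb; reduced to 192x192 and halved by the dx/dy symmetry, 192 * 192 * 2 / 2 = 36,864 bytes, which is the 36kb figure.)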

By santiontanon

Paragon (1770)


29-11-2017, 01:51

Just a quick note about "correct perspective". If we are talking about how objects look to a pinhole camera, then the equations are exactly as Grauw says: you need to divide by sqrt(x*x + y*y + z*z), since what you need to divide by is the distance the light travels from the object to the camera. In this case, notice that it is not true that a "2D screen" appears rectangular to a camera! For a pinhole camera it will not... However, if your camera is not a pinhole but a plane, then that's different. It all depends on what you assume your camera to be.

For example, in my Tales of Popolon game, I decided to go with a pinhole camera, since that's more similar to the human eye than a flat plane, and since I'm doing pre-calculated raycasting, there was no extra cost anyway in dividing by sqrt(x*x + y*y + z*z)! As a consequence, as you can see in this image, horizontal "lines" appear as "curves", which is how they actually appear to a pinhole camera if you take a picture in the real world:
