Depth Buffer Precision

Depth buffering seems to work, but polygons seem to bleed through polygons that are in front of them. What's going on?

You may have configured your zNear and zFar clipping planes in a way that severely limits your depth buffer precision. Generally, this is caused by a zNear clipping plane value that's too close to 0.0. As the zNear clipping plane is set increasingly closer to 0.0, the effective precision of the depth buffer decreases dramatically. Moving the zFar clipping plane further away from the eye always has a negative impact on depth buffer precision, but the effect is not nearly as dramatic as moving the zNear plane closer to 0.0.

The OpenGL Reference Manual description for glFrustum() relates depth precision to the zNear and zFar clipping planes by saying that roughly $\log_2\!\left(\tfrac{zFar}{zNear}\right)$ bits of precision are lost. Clearly, as zNear approaches zero, this expression approaches infinity.
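
As a quick sanity check, here is a small C snippet evaluating that rule of thumb; the plane values are only illustrative. For zNear = 0.01 and zFar = 1000 it reports roughly 16.6 bits lost:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical clipping planes, chosen only for illustration. */
    double zNear = 0.01, zFar = 1000.0;

    /* Rule of thumb from the Reference Manual: roughly
       log2(zFar / zNear) bits of depth precision are lost. */
    printf("approx. bits lost: %.1f\n", log2(zFar / zNear));
    return 0;
}
```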

While the blue book description is good at pointing out the relationship, it's somewhat inaccurate. As the ratio (zFar/zNear) increases, less precision is available near the back of the depth buffer and more precision is available close to the front of the depth buffer. So primitives are more likely to interact in Z if they are further from the viewer.

It's possible that you simply don't have enough precision in your depth buffer to render your scene. See the last question in this section for more info.

It's also possible that you are drawing coplanar primitives. Round-off errors or differences in rasterization typically create 'Z fighting' for coplanar primitives. See Drawing Lines over Polygons for some solutions.

Why is my depth buffer precision so poor?

The depth buffer precision in eye coordinates is strongly affected by the ratio of zFar to zNear, the zFar clipping plane, and how far an object is from the zNear clipping plane.

You need to do whatever you can to push the zNear clipping plane out and pull the zFar plane in as much as possible.

To be more specific, consider the transformation of depth from eye coordinates

$x_e, y_e, z_e, w_e$

to window coordinates

$x_w, y_w, z_w$

with a perspective projection matrix, as specified by glFrustum() (here $n$ and $f$ are the zNear and zFar plane distances, and $l, r, b, t$ are the frustum's left, right, bottom, and top extents):

$$\begin{pmatrix} \dfrac{2n}{r-l} & 0 & \dfrac{r+l}{r-l} & 0 \\[1ex] 0 & \dfrac{2n}{t-b} & \dfrac{t+b}{t-b} & 0 \\[1ex] 0 & 0 & -\dfrac{f+n}{f-n} & -\dfrac{2fn}{f-n} \\[1ex] 0 & 0 & -1 & 0 \end{pmatrix}$$

and assume the default viewport transform. The clip coordinates $z_c$ and $w_c$ are

$$\begin{aligned} z_c &= -z_e\,\frac{f+n}{f-n} - w_e\,\frac{2fn}{f-n} \\ w_c &= -z_e \end{aligned}$$

Why the negations? OpenGL wants to present to the programmer a right-handed coordinate system before projection and a left-handed coordinate system after projection.

and the NDC coordinate is:

$$\begin{aligned} z_{ndc} = \frac{z_c}{w_c} &= \frac{-z_e\,\dfrac{f+n}{f-n} - w_e\,\dfrac{2fn}{f-n}}{-z_e} \\ &= \frac{f+n}{f-n} + \frac{2fn\,w_e}{z_e\,(f-n)} \end{aligned}$$

The viewport transformation scales and offsets by the depth range (assume it to be [0, 1]) and then scales by $s = 2^b - 1$, where $b$ is the bit depth of the depth buffer:

$$z_w = s\left(\frac{w_e}{z_e}\,\frac{fn}{f-n} + 0.5\,\frac{f+n}{f-n} + 0.5\right)$$

Let's rearrange this equation to express $z_e/w_e$ as a function of $z_w$:

$$\begin{aligned} \frac{z_e}{w_e} &= \frac{\dfrac{fn}{f-n}}{\dfrac{z_w}{s} - 0.5\,\dfrac{f+n}{f-n} - 0.5} \\[1ex] &= \frac{fn}{\dfrac{z_w}{s}(f-n) - 0.5(f+n) - 0.5(f-n)} \\[1ex] &= \frac{fn}{\dfrac{z_w}{s}(f-n) - f} \end{aligned}$$

Now let's look at two points, the zNear clipping plane and the zFar clipping plane:

$$\frac{z_e}{w_e} = \begin{cases} \dfrac{fn}{-f} = -n, & \text{when } z_w = 0 \\[2ex] \dfrac{fn}{(f-n)-f} = -f, & \text{when } z_w = s \end{cases}$$

In a fixed-point depth buffer, $z_w$ is quantized to integers. The next representable z-buffer depths in from the clip planes are 1 and $s-1$:

$$\frac{z_e}{w_e} = \begin{cases} \dfrac{fn}{\tfrac{1}{s}(f-n) - f}, & \text{when } z_w = 1 \\[2ex] \dfrac{fn}{\tfrac{s-1}{s}(f-n) - f}, & \text{when } z_w = s-1 \end{cases}$$

Now let's plug in some numbers, for example, n = 0.01, f = 1000 and s = 65535 (i.e., a 16-bit depth buffer)

$$\frac{z_e}{w_e} = \begin{cases} -0.01000015, & \text{when } z_w = 1 \\ -395.90054, & \text{when } z_w = s-1 \end{cases}$$

Think about this last line. Everything at eye coordinate depths from -395.9 to -1000 has to map into either 65534 or 65535 in the z buffer. Almost two thirds of the distance between the zNear and zFar clipping planes will have one of two z-buffer values!
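
These numbers are easy to reproduce with a few lines of C; a sketch of the formula derived above (the helper name is mine):

```c
#include <stdio.h>

/* Eye-space depth recovered from a window-space depth value zw,
   per the formula above: ze/we = f*n / ((zw/s)*(f-n) - f). */
static double eye_depth(double zw, double n, double f, double s)
{
    return f * n / ((zw / s) * (f - n) - f);
}

int main(void)
{
    double n = 0.01, f = 1000.0, s = 65535.0; /* 16-bit depth buffer */
    printf("zw = 1:    ze/we = %.8f\n", eye_depth(1.0, n, f, s));
    printf("zw = s-1:  ze/we = %.5f\n", eye_depth(s - 1.0, n, f, s));
    return 0;
}
```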

To further analyze the z-buffer resolution, let's take the derivative of $z_e/w_e$ with respect to $z_w$:

$$\frac{d\,(z_e/w_e)}{d z_w} = -fn\,(f-n)\,\frac{\tfrac{1}{s}}{\left(\tfrac{z_w}{s}(f-n) - f\right)^{2}}$$

Now evaluate it at zw = s

$$\frac{d\,(z_e/w_e)}{d z_w}\bigg|_{z_w = s} = -f\,(f-n)\,\frac{\tfrac{1}{s}}{n} = -\frac{f\left(\tfrac{f}{n} - 1\right)}{s}$$

If you want your depth buffer to be useful near the zFar clipping plane, you need to keep this value to less than the size of your objects in eye space (for most practical uses, world space).
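
Plugging in the same numbers as before makes the problem concrete; this small C check prints a slope of about -1526 eye-space units per depth-buffer step at the far plane:

```c
#include <stdio.h>

int main(void)
{
    /* Same n, f, s as in the example above. */
    double n = 0.01, f = 1000.0, s = 65535.0;

    /* d(ze/we)/dzw evaluated at zw = s: -f*(f/n - 1)/s */
    double slope = -f * (f / n - 1.0) / s;
    printf("eye-space units per depth step at zFar: %.2f\n", slope);
    return 0;
}
```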

Why is there more precision at the front of the depth buffer?

After the projection matrix transforms vertices into clip coordinates, the XYZ vertex values are divided by their clip-coordinate W value, which results in normalized device coordinates. This step is known as the perspective divide. The clip-coordinate W value represents the distance from the eye. As the distance from the eye increases, 1/W approaches 0. Therefore, X/W and Y/W also approach zero, causing the rendered primitives to occupy less screen space and appear smaller. This is how computers simulate a perspective view.

As in reality, motion toward or away from the eye has a less profound effect for objects that are already in the distance. For example, if you move six inches closer to the computer screen in front of your face, its apparent size should increase quite dramatically. On the other hand, if the computer screen were already 20 feet away from you, moving six inches closer would have little noticeable impact on its apparent size. The perspective divide takes this into account.

As part of the perspective divide, Z is also divided by W with the same results. For objects that are already close to the back of the view volume, a change in distance of one coordinate unit has less impact on Z/W than if the object is near the front of the view volume. To put it another way, an object coordinate Z unit occupies a larger slice of NDC-depth space close to the front of the view volume than it does near the back of the view volume.
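
A tiny numeric illustration of this (the step sizes are arbitrary): a one-unit step in Z changes 1/Z by 0.5 near the eye, but by only about 0.0001 at a distance of 100:

```c
#include <stdio.h>

int main(void)
{
    /* Change in 1/z for a one-unit step in z, near and far. */
    printf("near: 1/1 - 1/2     = %f\n", 1.0 / 1.0 - 1.0 / 2.0);
    printf("far:  1/100 - 1/101 = %f\n", 1.0 / 100.0 - 1.0 / 101.0);
    return 0;
}
```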

In summary, the perspective divide, by its nature, causes more Z precision close to the front of the view volume than near the back.

A previous question in this section contains related information.

There is no way that a standard-sized depth buffer will have enough precision for my astronomically large scene. What are my options?

The typical approach is to use a multipass technique. The application might divide the geometry database into regions that don't interfere with each other in Z. The geometry in each region is then rendered, starting at the furthest region, with a clear of the depth buffer before each region is rendered. This way the precision of the entire depth buffer is made available to each region.
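
Here is a minimal sketch of the idea in C with OpenGL; set_projection(), draw_region(), and the region names are hypothetical stand-ins for your engine's code:

```c
#include <GL/gl.h>   /* or your GL loader's header */

/* Hypothetical helpers standing in for engine code. */
enum region { REGION_FAR, REGION_NEAR };
void set_projection(double zNear, double zFar);
void draw_region(enum region r);

void draw_scene_multipass(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* Draw the furthest region first, with the clip planes
       bracketing just that region. */
    set_projection(1000.0, 100000.0);
    draw_region(REGION_FAR);

    /* Clear only the depth buffer, keeping the far region's colors,
       so the near region gets the full depth precision to itself. */
    glClear(GL_DEPTH_BUFFER_BIT);
    set_projection(0.1, 1000.0);
    draw_region(REGION_NEAR);
}
```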

Assuming perspective projection, what is the optimal precision distribution for the depth buffer? What is the best depth buffer format?

First, what is the precision in the x and y directions? Consider two identical objects, the first at distance d in front of the camera and the other at distance 2d. Thanks to the perspective projection, the more distant object appears half the size of the other. This means the precision in the x and y directions with which it is drawn is half that of the first object (half as many pixels in each direction for the same object size). So the precision in the x and y directions is proportional to 1/z.

Now I will assume a postulate which defines what I consider to be 'the general case': for any given position in camera space, the precision in the z direction should be roughly equal to the precision in the x and y directions. This means that if you attempt to calculate the position of an object from its rendered image and the values in the depth buffer, all three components of the computed position should be within the same error margin. It also means the maximal camera-space z difference that can cause z-fighting is equal to the camera-space size of one pixel at the same distance (in the z direction).

From all the above it follows that the precision distribution of the depth buffer should be such that values close to z have approximate precision C/z for any given camera-space z (within the valid range), where C is some constant.

Let's assume the depth buffer stores integer values (which means uniform precision over the entire range) and that the z coordinate values are processed by a function f(x) before being stored in the depth buffer (that is, for a given camera-space z, the value f(z) goes into the depth buffer). Let's find this function. Denote its inverse by g(x), and denote the smallest difference between two depth buffer values by s (a constant over the entire depth buffer range because, as we assumed, precision is uniform).

Then, for a given camera-space z, the minimal increment of camera-space depth is g(f(z) + s) - g(f(z)). The minimal increment is the inverse of the precision, so it should be equal to z/C. From here we derive f(z) + s = f(z*(1+C)), where s and C are constants. This is the defining equation of the logarithm function, so f(x) = h*log2(x) for some constant h (h depends on C and s, but since C is an unknown constant itself, the exact formula is of no use).

I prefer log2(x) over ln(x) because, in the context of binary computers, log2 is more 'natural'. In particular, the floating-point format stores values that are roughly close to log2 of the true values, at least when it comes to precision distribution, which is what concerns us here. Thus, assuming the above postulate, the floating-point format is nearly perfect for a depth buffer.
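
The defining equation is easy to verify numerically; in this C sketch, h, C, and the implied step s are arbitrary constants chosen only for illustration:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double C = 0.001;               /* arbitrary precision constant */
    double h = 1.0;                 /* arbitrary scale */
    double s = h * log2(1.0 + C);   /* the step size this implies */

    /* f(z) + s should equal f(z*(1+C)) for f(x) = h*log2(x). */
    for (double z = 1.0; z < 1000.0; z *= 10.0) {
        double lhs = h * log2(z) + s;
        double rhs = h * log2(z * (1.0 + C));
        printf("z = %6.1f  lhs = %.9f  rhs = %.9f\n", z, lhs, rhs);
    }
    return 0;
}
```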

There is one more little detail. With the standard projection matrix, the values output to the depth buffer are not the camera-space z but something proportional to 1/z. The log function has the property that f(1/x) = -f(x), so it is only a matter of a sign change.

So the best depth format would be floating point, but it may be necessary to apply some negation/scaling/offset to get the best precision distribution. glDepthRange() should be enough for this.

Retrieved from 'http://www.khronos.org/opengl/wiki_opengl/index.php?title=Depth_Buffer_Precision&oldid=6423'

Depth precision is a pain in the ass that every graphics programmer has to struggle with sooner or later. Many articles and papers have been written on the topic, and a variety of different depth buffer formats and setups are found across different games, engines, and devices.

Because of the way it interacts with perspective projection, GPU hardware depth mapping is a little recondite and studying the equations may not make things immediately obvious. To get an intuition for how it works, it's helpful to draw some pictures.

This article has three main parts. In the first part, I try to provide some motivation for nonlinear depth mapping. Second, I present some diagrams to help understand how nonlinear depth mapping works in different situations, intuitively and visually. The third part is a discussion and reproduction of the main results of Tightening the Precision of Perspective Rendering by Paul Upchurch and Mathieu Desbrun (2012), concerning the effects of floating-point roundoff error on depth precision.

GPU hardware depth buffers don't typically store a linear representation of the distance an object lies in front of the camera, contrary to what one might naïvely expect when encountering this for the first time. Instead, the depth buffer stores a value proportional to the reciprocal of world-space depth. I want to briefly motivate this convention.

In this article, I'll use d to represent the value stored in the depth buffer (in [0, 1]), and z to represent world-space depth, i.e. distance along the view axis, in world units such as meters. In general, the relationship between them is of the form

$$d = a + \frac{b}{z}$$

where a,b are constants related to the near and far plane settings. In other words, d is always some linear remapping of 1/z.

On the face of it, you can imagine taking d to be any function of z you like. So why this particular choice? There are two main reasons.

First, 1/z fits naturally into the framework of perspective projections. This is the most general class of transformation that is guaranteed to preserve straight lines—which makes it convenient for hardware rasterization, since straight edges of triangles stay straight in screen space. We can generate linear remappings of 1/z by taking advantage of the perspective divide that the hardware already performs:

$$\begin{pmatrix} \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot \\ 0 & 0 & a & b \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} \cdot \\ \cdot \\ az + b \\ z \end{pmatrix} \;\xrightarrow{\ \text{divide by } w\ }\; \begin{pmatrix} \cdot \\ \cdot \\ a + b/z \\ 1 \end{pmatrix}$$

The real power in this approach, of course, is that the projection matrix can be multiplied with other matrices, allowing you to combine many transformation stages together in one.
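
For the standard D3D-style [0, 1] mapping, the constants work out to a = f/(f - n) and b = -fn/(f - n); this is my derivation from the endpoint conditions d(n) = 0 and d(f) = 1, with illustrative plane values:

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative near/far planes. */
    double n = 0.1, f = 10000.0;

    /* d = a + b/z with these constants maps n -> 0 and f -> 1. */
    double a = f / (f - n);
    double b = -f * n / (f - n);

    printf("d(near) = %f\n", a + b / n);  /* prints 0.000000 */
    printf("d(far)  = %f\n", a + b / f);  /* prints 1.000000 */
    return 0;
}
```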

The second reason is that 1/z is linear in screen space, as noted by Emil Persson. So it's easy to interpolate d across a triangle while rasterizing, and things like hierarchical Z-buffers, early Z-culling, and depth buffer compression are all a lot easier to do.

Graphing Depth Maps

Equations are hard; let's look at some pictures!

The way to read these graphs is left to right, then down to the bottom. Start with d, plotted on the left axis. Because d can be an arbitrary linear remapping of 1/z, we can place 0 and 1 wherever we wish on this axis. The tick marks indicate distinct depth buffer values. For illustrative purposes, I'm simulating a 4-bit normalized integer depth buffer, so there are 16 evenly-spaced tick marks.

Trace the tick marks horizontally to where they hit the 1/z curve, then down to the bottom axis. That's where the distinct values fall in the world-space depth range.

The graph above shows the “standard”, vanilla depth mapping used in D3D and similar APIs. You can immediately see how the 1/z curve leads to bunching up values close to the near plane, and the values close to the far plane are quite spread out.
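
To make the bunching concrete, here is a small C program (the near/far values are mine) that inverts the standard mapping for each of the 16 ticks of the simulated 4-bit depth buffer; with n = 1 and f = 100, the last two ticks alone cover everything from z of roughly 13 out to z = 100:

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative planes; d = a + b/z is the standard mapping. */
    double n = 1.0, f = 100.0;
    double a = f / (f - n);
    double b = -f * n / (f - n);

    /* 4-bit normalized integer depth: 16 evenly spaced d values. */
    for (int i = 0; i < 16; i++) {
        double d = i / 15.0;
        double z = b / (d - a);   /* invert d = a + b/z */
        printf("d = %6.4f  ->  z = %8.3f\n", d, z);
    }
    return 0;
}
```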

It's also easy to see why the near plane has such a profound effect on depth precision. Pulling in the near plane will make the d range skyrocket up toward the asymptote of the 1/z curve, leading to an even more lopsided distribution of values:

Similarly, it's easy to see in this context why pushing the far plane all the way out to infinity doesn't have that much effect. It just means extending the d range slightly down to 1/z=0:

What about floating-point depth? The following graph adds tick marks corresponding to a simulated float format with 3 exponent bits and 3 mantissa bits:

There are now 40 distinct values in [0, 1]—quite a bit more than the 16 values previously, but most of them are uselessly bunched up at the near plane where we didn't really need more precision.

A now-widely-known trick is to reverse the depth range, mapping the near plane to d=1 and the far plane to d=0:

Much better! Now the quasi-logarithmic distribution of floating-point somewhat cancels the 1/z nonlinearity, giving us similar precision at the near plane to an integer depth buffer, and vastly improved precision everywhere else. The precision worsens only very slowly as you move farther out.

The reversed-Z trick has probably been independently reinvented several times, but goes at least as far back as a SIGGRAPH ’99 paper by Eugene Lapidous and Guofang Jiao (no open-access link available, unfortunately). It was more recently re-popularized in blog posts by Matt Pettineo and Brano Kemen, and by Emil Persson's Creating Vast Game Worlds SIGGRAPH 2012 talk.

All the previous diagrams assumed [0, 1] as the post-projection depth range, which is the D3D convention. What about OpenGL?

OpenGL by default assumes a [-1, 1] post-projection depth range. This doesn't make a difference for integer formats, but with floating-point, all the precision is stuck uselessly in the middle. (The value gets mapped into [0, 1] for storage in the depth buffer later, but that doesn't help, since the initial mapping to [-1, 1] has already destroyed all the precision in the far half of the range.) And by symmetry, the reversed-Z trick will not do anything here.

Fortunately, in desktop OpenGL you can fix this with the widely-supported ARB_clip_control extension (now also core in OpenGL 4.5 as glClipControl). Unfortunately, in GL ES you're out of luck.
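
Here is a minimal sketch of a reversed-Z setup in desktop GL, assuming GL 4.5 (or ARB_clip_control) and a floating-point depth attachment; the matrix layout is column-major with the camera looking down -z, and the function is mine, not a library call:

```c
#include <math.h>
#include <string.h>
#include <GL/gl.h>   /* or your GL loader's header */

/* Reversed-Z, infinite-far-plane perspective projection:
   the near plane n maps to d = 1, and z -> infinity maps to d = 0. */
void setup_reversed_z(float proj[16], float fovY, float aspect, float n)
{
    /* Use [0, 1] clip-space depth instead of GL's default [-1, 1]. */
    glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);

    /* Clear to 0 (the new "far") and flip the depth test. */
    glClearDepth(0.0);
    glDepthFunc(GL_GREATER);

    float g = 1.0f / tanf(fovY * 0.5f);
    memset(proj, 0, 16 * sizeof(float));
    proj[0]  = g / aspect;  /* x scale */
    proj[5]  = g;           /* y scale */
    proj[14] = n;           /* z_clip = n * w_eye ...            */
    proj[11] = -1.0f;       /* ... and w_clip = -z_eye,          */
                            /* so d = n / -z_eye after dividing. */
}
```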

The 1/z mapping and the choice of float versus integer depth buffer are a big part of the precision story, but not all of it. Even if you have enough depth precision to represent the scene you're trying to render, it's easy to end up with your precision controlled by error in the arithmetic of the vertex transformation process.

As mentioned earlier, Upchurch and Desbrun studied this and came up with two main recommendations to minimize roundoff error:

  1. Use an infinite far plane.
  2. Keep the projection matrix separate from other matrices, and apply it in a separate operation in the vertex shader, rather than composing it into the view matrix.

Upchurch and Desbrun came up with these recommendations through an analytical technique, based on treating roundoff errors as small random perturbations introduced at each arithmetic operation, and keeping track of them to first order through the transformation process. I decided to check the results using direct simulation.

My source code is here—Python 3.4 with numpy. It works by generating a sequence of random points, ordered by depth, spaced either linearly or logarithmically between the near and far planes. Then it passes the points through view and projection matrices and the perspective divide, using 32-bit float precision throughout, and optionally quantizes the final result to 24-bit integer. Finally, it runs through the sequence and counts how many times two adjacent points (which originally had distinct depths) have either become indistinguishable because they mapped to the same depth value, or have actually swapped order. In other words, it measures the rate at which depth comparison errors occur—which corresponds to issues like Z-fighting—under different scenarios.

Here are the results obtained for near = 0.1, far = 10K, with 10K linearly spaced depths. (I tried logarithmic depth spacing and other near/far ratios as well, and while the detailed numbers varied, the general trends in the results were the same.)

In the table, “indist” means indistinguishable (two nearby depths mapped to the same final depth buffer value), and “swap” means that two nearby depths swapped order.

Each cell shows % indist / % swap:

                                    Precomposed view-projection matrix    Separate view and projection matrices
                                    float32         int24                 float32         int24
Unaltered Z values (control test)   0% / 0%         0% / 0%               0% / 0%         0% / 0%
Standard projection                 45% / 18%       45% / 18%             77% / 0%        77% / 0%
Infinite far plane                  45% / 18%       45% / 18%             76% / 0%        76% / 0%
Reversed Z                          0% / 0%         76% / 0%              0% / 0%         76% / 0%
Infinite + reversed Z               0% / 0%         76% / 0%              0% / 0%         76% / 0%
GL-style standard                   56% / 12%       56% / 12%             77% / 0%        77% / 0%
GL-style infinite                   59% / 10%       59% / 10%             77% / 0%        77% / 0%

Apologies for not graphing these, but there are too many dimensions to make it easy to graph! In any case, looking at the numbers, a few general results are clear.

  • There is no difference between float and integer depth buffers in most setups. The arithmetic error swamps the quantization error. In part this is because float32 and int24 have almost the same-sized ulp in [0.5, 1] (because float32 has a 23-bit mantissa), so there actually is almost no additional quantization error over the vast majority of the depth range.
  • In many cases, separating the view and projection matrices (following Upchurch and Desbrun’s recommendation) does make some improvement. While it doesn't lower the overall error rate, it does seem to turn swaps into indistinguishables, which is a step in the right direction.
  • An infinite far plane makes only a minuscule difference in error rates. Upchurch and Desbrun predicted a 25% reduction in absolute numerical error, but it doesn't seem to translate into a reduced rate of comparison errors.

The above points are practically irrelevant, though, because the real result that matters here is: the reversed-Z mapping is basically magic. Check it out:

  • Reversed-Z with a float depth buffer gives a zero error rate in this test. Now, of course you can make it generate some errors if you keep tightening the spacing of the input depth values. Still, reversed-Z with float is ridiculously more accurate than any of the other options.
  • Reversed-Z with an integer depth buffer is as good as any of the other integer options.
  • Reversed-Z erases the distinctions between precomposed versus separate view/projection matrices, and finite versus infinite far planes. In other words, with reversed-Z you can compose your projection matrix with other matrices, and you can use whichever far plane you like, without affecting precision at all.

I think the conclusion here is clear. In any perspective projection situation, just use a floating-point depth buffer with reversed-Z! And if you can't use a floating-point depth buffer, you should still use reversed-Z. It isn't a panacea for all precision woes, especially if you're building an open-world environment that contains extreme depth ranges. But it's a great start.

Nathan is a Graphics Programmer, currently working at NVIDIA on the DevTech software team. You can read more on his blog here.