8 January 2012

Fog sphere implemented in shader

Fog effects are commonly used at the far horizon (the far cut-off plane of the frustum), but local fog effects can also be used for atmosphere. This article is about fogs defined by a centre and a radius, and how to implement them in the fragment shader. It may seem that fogs are similar to lamps, but there are important differences. A lamp has a local effect on its near vicinity, while a fog changes the view along every ray that passes through the fog cloud. That means different parts of the scene will change, depending on where the camera is.

I am using the fog effect as a transparent object with varying alpha, where the alpha is a function of the amount of fog that a ray passes through. The amount of fog thus depends on the entry point of the ray into the sphere and the exit point, which together give the total distance inside. To simplify, it is assumed that the density is the same everywhere in the sphere. Four parameters are needed: the position of the camera V, the position of the pixel that shall be transformed P, the centre of the fog sphere C, and the radius of the sphere r. All coordinates are in the world model, not screen coordinates. For the mathematical background, see line-sphere intersection in Wikipedia. The task is to find the distance that a ray is inside the sphere, and use this to compute an alpha for fog blending.
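In other words, if d_near and d_far are the distances along the ray to the entry and exit points, the fog covered distance is d_far − d_near, and the alpha used for blending is that distance normalized by the sphere diameter: alpha = (d_far − d_near) / (2r). An alpha of 0 means no fog, and 1 means a ray passing through the full diameter.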

Using a normalized vector l for the line from the camera V to the pixel P, the distances d from the camera to the two intersections are:

    d = l · (C − V) ± √( (l · (C − V))² − |C − V|² + r² )

If the value inside the square root is negative, then there is no solution; the line is outside of the sphere, and no fog effects shall be applied.

There are 4 cases that need to be considered:

  1. Camera and pixel are both inside the sphere.
  2. The camera is outside, but the pixel is inside.
  3. The camera is inside, but the pixel is outside.
  4. Both camera and pixel are outside of the sphere.
For the first case, it is trivial to compute the fog covered distance from camera to pixel: "distance(V,P)".

For the last case, with both camera and pixel outside of the sphere, the distance will be the difference between the two intersections. This is the same as twice the value of the square root. There are two non-obvious exceptions that need to be taken care of. If the pixel is on the same side of the sphere as the camera, that is between the camera and the fog, there shall be no fog effect; the fog is then occluded for that pixel. The other special case is when you turn around. There would again be a fog cloud behind you if you don't add a condition for it (l·(C−V) being negative).

For the two other cases, there is a point inside the sphere, and a distance to one intersection with the sphere. The entry or exit point E can be found by multiplying the unit vector l with the near or the far value of d, and adding this to the camera position V. Given E, the effective distance to either P or V can easily be computed. The final fragment shader function looks as follows:

// r: Fog sphere radius
// V: Camera position
// C: Fog sphere centre
// P: Pixel position
// Return alpha to be used for the fog blending.
float fog(float r, vec4 V, vec4 C, vec4 P) {
    float dist = 0.0; // The distance of the ray inside the fog sphere
    float cameraToPixelDist = distance(V, P);
    float cameraToFogDist = distance(V, C);
    float pixelToFogDist = distance(P, C);
    if (cameraToFogDist < r && pixelToFogDist < r) {
        dist = cameraToPixelDist; // Camera and pixel completely inside fog
    } else {
        vec3 l = normalize(vec3(P-V));
        float ldotc = dot(l, vec3(C-V));
        float tmp = ldotc*ldotc - cameraToFogDist*cameraToFogDist + r*r;
        if (cameraToFogDist > r && pixelToFogDist > r && ldotc > 0.0 && tmp > 0.0) {
            // Both camera and pixel outside the fog. The fog is in front of
            // the camera, and the ray is going through the fog.
            float sqrttmp = sqrt(tmp);
            vec3 entrance = vec3(V) + l*(ldotc-sqrttmp);
            if (cameraToPixelDist > distance(vec3(V), entrance)) dist = sqrttmp*2.0;
        } else if (cameraToFogDist > r && pixelToFogDist < r) {
            // Outside of fog, looking at a pixel inside. Thus tmp > 0.
            vec3 entrance = vec3(V) + l*(ldotc-sqrt(tmp));
            dist = distance(entrance, vec3(P));
        } else if (cameraToFogDist < r && pixelToFogDist > r) {
            // Camera inside fog, looking at a pixel on the outside
            vec3 exit = vec3(V) + l*(ldotc+sqrt(tmp));
            dist = distance(exit, vec3(V));
        }
    }
    // Maximum value of 'dist' will be the diameter of the sphere.
    return dist/(r*2.0);
}
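To show how the function might be hooked up, here is a rough usage sketch; the uniform and varying names (fogRadius, fogCentre, cameraPos, fogColour, diffuseTex, texCoord, worldPos) are invented for the example and not part of the original shader:

// Usage sketch only; all names below are example names. The fog() function
// above is assumed to be defined earlier in the same fragment shader.
uniform float fogRadius;        // Fog sphere radius
uniform vec4 fogCentre;         // Fog sphere centre, world coordinates
uniform vec4 cameraPos;         // Camera position, world coordinates
uniform vec4 fogColour;         // Colour of the fog
uniform sampler2D diffuseTex;   // Ordinary surface texture
varying vec4 worldPos;          // Pixel position in world coordinates (w = 1)
varying vec2 texCoord;

void main() {
    vec4 baseColour = texture2D(diffuseTex, texCoord); // Normal shading of the pixel
    float alpha = fog(fogRadius, cameraPos, fogCentre, worldPos);
    gl_FragColor = mix(baseColour, fogColour, alpha);  // Blend fog over the pixel
}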

A test of using a fog sphere. It is clear that rays going through a lot of fog get a bigger fog effect.

Another example, using two fog spheres under ground. The colour of the fog needs to be adapted, depending on how dark the surroundings are. It isn't shown above, but when there are overlapping fogs I use the most dominant alpha, not an accumulated value, as in the sketch below.
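With two spheres that could look something like this (the fog sphere uniforms are again example names, following the usage sketch above):

// Overlapping fog spheres: use the most dominant alpha, not the sum.
float alpha1 = fog(fogRadius1, cameraPos, fogCentre1, worldPos);
float alpha2 = fog(fogRadius2, cameraPos, fogCentre2, worldPos);
float alpha = max(alpha1, alpha2);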

The GPU performance cost for fog can get high if there are many fog spheres. If so, it can be an advantage to use a deferred shader, where fog is only computed for the pixels that will actually be shown.
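A rough sketch of what that could look like in a deferred post pass, reading the world position of the visible pixel back from a G-buffer; all names here (positionTex, litTex, screenCoord and the fog uniforms) are assumptions for the example, not part of the original renderer:

// Deferred variant, sketch only: apply fog as a full-screen post pass so that
// only visible pixels pay the cost. The fog() function is assumed defined above.
uniform sampler2D positionTex;  // World positions stored by the geometry pass (w = 1)
uniform sampler2D litTex;       // Lit scene colour from the lighting pass
uniform float fogRadius;
uniform vec4 fogCentre;
uniform vec4 cameraPos;
uniform vec4 fogColour;
varying vec2 screenCoord;       // Texture coordinate of the full-screen quad

void main() {
    vec4 worldPos = texture2D(positionTex, screenCoord);
    vec4 litColour = texture2D(litTex, screenCoord);
    float alpha = fog(fogRadius, cameraPos, fogCentre, worldPos);
    gl_FragColor = mix(litColour, fogColour, alpha); // Only visible pixels are fog blended
}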