I am using the fog effect as a transparent object with varying alpha, where the alpha is a function of the amount of fog that a ray passes through. The amount of fog thus depends on the ray's entry and exit points on the sphere, which together give the total inside distance. To simplify, the density is assumed to be the same everywhere in the sphere. Four parameters are needed: the position of the camera V, the position of the pixel that shall be transformed P, the centre of the fog sphere C, and the radius of the sphere r. All coordinates are in world space, not screen coordinates. For the mathematical background, see line-sphere intersection on Wikipedia. The task is to find the distance that a ray travels inside the sphere, and to use this to compute an alpha for fog blending.
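The mapping from inside distance to alpha can be sketched with a Beer-Lambert style exponential falloff. Note that the exponential model and the `density` parameter are assumptions for illustration; the text above only requires that alpha grows with the inside distance.

```python
import math

def fog_alpha(inside_distance, density=0.1):
    # Beer-Lambert style extinction: alpha grows with the distance the
    # ray travels through the fog, saturating toward 1 for long paths.
    # 'density' is a hypothetical tuning parameter, constant everywhere
    # in the sphere as stated in the text.
    return 1.0 - math.exp(-density * inside_distance)
```

A zero inside distance gives alpha 0 (no fog), and longer paths approach full opacity, which matches the constant-density assumption.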
Using a normalized vector l for the line from the camera V to the pixel P, the distances from the camera to the two intersections are:

d = -(l · (V - C)) ± sqrt((l · (V - C))² - (|V - C|² - r²))
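The two intersection distances can be sketched directly from that formula. This is a minimal illustration with plain tuples for vectors; the function name and tuple representation are choices made here, not part of the original text.

```python
import math

def sphere_intersections(V, C, l, r):
    """Distances along the normalized direction l from the camera V to
    the two intersections with the sphere (centre C, radius r).
    Returns (d1, d2) with d1 <= d2, or None if the line misses."""
    oc = tuple(v - c for v, c in zip(V, C))      # V - C
    b = sum(li * o for li, o in zip(l, oc))      # l . (V - C)
    disc = b * b - (sum(o * o for o in oc) - r * r)
    if disc < 0.0:
        return None                              # line misses the sphere
    s = math.sqrt(disc)
    return (-b - s, -b + s)
```

For example, a camera at (0, 0, -5) looking along (0, 0, 1) at a unit sphere centred on the origin hits the sphere at distances 4 and 6.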
There are 4 cases that need to be considered:
- Camera and pixel are both inside the sphere.
- The camera is outside, but the pixel is inside.
- The camera is inside, but the pixel is outside.
- Both camera and pixel are outside of the sphere.
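All four cases can be handled uniformly by clamping the two intersection distances to the segment from V to P: the entry distance is clamped to be no less than 0 (camera inside), and the exit distance to be no more than |P - V| (pixel inside). The following is a sketch under those assumptions; the function name and vector representation are invented for illustration.

```python
import math

def fog_distance(V, P, C, r):
    """Length of the segment V->P that lies inside the sphere (C, r).
    Clamping the intersection distances to [0, |P - V|] covers all four
    camera/pixel inside-outside cases with one computation."""
    d = tuple(p - v for p, v in zip(P, V))
    seg_len = math.sqrt(sum(x * x for x in d))
    l = tuple(x / seg_len for x in d)            # normalized V->P
    oc = tuple(v - c for v, c in zip(V, C))      # V - C
    b = sum(li * o for li, o in zip(l, oc))      # l . (V - C)
    disc = b * b - (sum(o * o for o in oc) - r * r)
    if disc <= 0.0:
        return 0.0                               # segment misses the fog
    s = math.sqrt(disc)
    near = max(-b - s, 0.0)                      # entry, not behind the camera
    far = min(-b + s, seg_len)                   # exit, not beyond the pixel
    return max(far - near, 0.0)
```

With camera and pixel both outside (case 4), only the chord inside the sphere counts; with the camera inside (cases 1 and 3), the entry clamps to 0; with the pixel inside (cases 1 and 2), the exit clamps to the segment length.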