Been quite busy, hence the late follow-up. I spent some time learning how to sample random directions and the like. The goal was simply to lay the groundwork for direct and global illumination algorithms, which enable more complex effects. Basically, in direct illumination, the complexity comes from what you do when your camera ray hits the scene. There are a few approaches; the two easiest are the following:
- You can sample the direction in which the reflected ray will be sent, and check whether it hits a light. You assign the sample a probability density corresponding to the surface material, and you divide the color value by this density. The color value itself is the product of the light's emitted value, the effect of the surface material, and geometric correction terms.
You get a result like this on a test scene with a limited sample count (here, 64). As you can see, it doesn't perform well on the right. This is because when you shoot the ray and then pick a random point on the larger emitter, you will very often land on a point whose contribution is negligible due to the geometry. Computation is therefore wasted on useless contributions, so this approach isn't optimal in that kind of case.
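To make the first approach concrete, here is a minimal sketch of direction sampling for a diffuse surface, assuming cosine-weighted hemisphere sampling and a hypothetical `radiance_from(direction)` callback standing in for "trace the ray and see if it hits a light" (all names and constants here are illustrative, not from the original post):

```python
import math
import random

def sample_cosine_hemisphere():
    """Sample a direction on the unit hemisphere (z up) with pdf cos(theta)/pi."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    # z = cos(theta); x^2 + y^2 + z^2 = u1 + (1 - u1) = 1
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(max(0.0, 1.0 - u1)))

def cosine_pdf(direction):
    """Probability density of the sampled direction, in the solid-angle measure."""
    return direction[2] / math.pi

def estimate_direct(radiance_from, n_samples=64, albedo=0.8):
    """Estimate reflected radiance: average of BRDF * L * cos(theta) / pdf."""
    total = 0.0
    for _ in range(n_samples):
        d = sample_cosine_hemisphere()
        cos_theta = d[2]
        # Divide the contribution by the sampling density, as described above.
        total += (albedo / math.pi) * radiance_from(d) * cos_theta / cosine_pdf(d)
    return total / n_samples
```

Note that with cosine-weighted sampling the `cos_theta / pdf` factor cancels to `pi`, so under a constant radiance of 1 every sample contributes exactly `albedo`; the noise comes entirely from which directions actually hit a light.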
- You can also sample points on your emitter surfaces, and add the contribution of a ray going from that point to the intersection of the camera ray with the scene. Here you do the same computation, but with fixed directions. The probability density then has to be expressed in a different measure (area rather than solid angle) to stay consistent.
With this method you end up with a lot of noise, and bad results for the smaller emitter. This comes from the fact that the probability of picking a useful point on it is smaller, so samples are wasted in the opposite situation from the first method.
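A minimal sketch of the second approach, under simplifying assumptions of my own (a Lambertian surface at the origin with normal +z, and a small square emitter facing straight down, centered directly overhead): the point on the light is sampled with density 1/area in the area measure, and the geometry term cos * cos' / dist^2 converts the contribution to the solid-angle measure mentioned above.

```python
import math
import random

def sample_emitter_direct(light_z=1.0, side=0.01, emitted=10.0,
                          albedo=0.8, n_samples=256, rng=None):
    """Estimate reflected radiance at the origin by sampling emitter points."""
    rng = rng or random.Random(0)
    area = side * side
    total = 0.0
    for _ in range(n_samples):
        # Uniform point on the square emitter (pdf = 1/area, area measure).
        lx = (rng.random() - 0.5) * side
        ly = (rng.random() - 0.5) * side
        dx, dy, dz = lx, ly, light_z           # vector: surface -> light point
        dist2 = dx * dx + dy * dy + dz * dz
        dist = math.sqrt(dist2)
        cos_surf = dz / dist                   # angle at the shaded point
        cos_light = dz / dist                  # emitter normal points straight down
        brdf = albedo / math.pi                # Lambertian BRDF
        # Contribution / pdf, with the geometric measure-change term.
        total += brdf * emitted * cos_surf * cos_light / dist2 / (1.0 / area)
    return total / n_samples
```

Because this toy light is tiny relative to its distance, the integrand is nearly constant and the estimate lands very close to the point-light approximation `(albedo/pi) * emitted * area / dist^2`; a large, close emitter is exactly where this estimator gets noisy.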
The idea behind all this, and behind the division by a probability, is that you want to integrate the contribution of the lights over all possible directions using the Monte Carlo method. This method approximates the integral of f(x)p(x) by the mean of f(xi), where the xi are distributed according to the probability density p (I can explain the mathematical details further if that interests you). As this method converges rather slowly (the error decreases as O(1/sqrt(N)), where N is the number of samples), dividing by a well-chosen probability density function helps convergence when that density approximates the shape of the integrand well. That's why we divided by the probabilities in the two methods explained above.
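A tiny self-contained demo of that variance-reduction idea (my own toy example, not from the renderer): both estimators below compute the integral of 3x^2 over [0,1], which is exactly 1, but the importance-sampled one draws x from a density p(x) = 2x that roughly matches the integrand's shape, and so has noticeably lower variance for the same sample count.

```python
import math
import random

def mc_uniform(f, n, rng):
    """Plain Monte Carlo on [0,1]: the mean of f(U) estimates the integral of f."""
    return sum(f(rng.random()) for _ in range(n)) / n

def mc_importance(f, sample_p, pdf_p, n, rng):
    """Importance sampling: draw x ~ p and average f(x)/p(x)."""
    total = 0.0
    for _ in range(n):
        x = sample_p(rng)
        total += f(x) / pdf_p(x)
    return total / n

# Toy integrand: integral of 3x^2 over [0,1] is exactly 1.
f = lambda x: 3.0 * x * x
# Density p(x) = 2x, sampled by inverting its CDF x^2: x = sqrt(u).
sample_p = lambda rng: math.sqrt(rng.random())
pdf_p = lambda x: 2.0 * x
```

With p(x) = 2x the ratio f(x)/p(x) = 1.5x is much flatter than f itself (its variance works out to 0.125 versus 0.8 for the uniform estimator), which is exactly the "density approximates the integrand" effect.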
An interesting thing you can do is combine these two methods, weighting each sample by how well its strategy covers it, to get the best of both worlds. This is called multiple importance sampling.
As you can see, here you get the best of both worlds in the same computation time as the other two.
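Here is a one-dimensional sketch of how that combination works, using the standard balance heuristic (this toy integrand and the two densities are my own illustration, standing in for the direction-sampling and emitter-sampling strategies): each sample is weighted by p_own / (p_a + p_b), and the two weighted estimators sum to an unbiased estimate of the integral.

```python
import math
import random

# Toy integrand with two "lobes", one per strategy.
# Integral of 3x^2 + 3(1-x)^2 over [0,1] is exactly 2.
f = lambda x: 3.0 * x * x + 3.0 * (1.0 - x) ** 2

# Strategy A's density 2x matches the first lobe; strategy B's 2(1-x) the second.
pdf_a = lambda x: 2.0 * x
pdf_b = lambda x: 2.0 * (1.0 - x)
sample_a = lambda rng: math.sqrt(rng.random())
sample_b = lambda rng: 1.0 - math.sqrt(rng.random())

def mis_estimate(n, rng):
    """One sample per strategy per iteration, combined with the balance heuristic."""
    total = 0.0
    for _ in range(n):
        xa = sample_a(rng)
        wa = pdf_a(xa) / (pdf_a(xa) + pdf_b(xa))   # balance-heuristic weight
        total += wa * f(xa) / pdf_a(xa)
        xb = sample_b(rng)
        wb = pdf_b(xb) / (pdf_a(xb) + pdf_b(xb))
        total += wb * f(xb) / pdf_b(xb)
    return total / n
```

The weights sum to 1 at every x, which is what keeps the combined estimator unbiased; wherever one strategy's density is near zero (its "bad case"), its weight vanishes and the other strategy takes over, which is exactly the behavior seen in the combined render.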
Next up: indirect illumination, why and how photon mapping can be useful, and how it relates to other techniques like normal maps or ambient occlusion maps.
(PS: what is the best place to host pictures? This series is very picture-heavy and I'm hitting the upload limit, but I don't want to put them on imgur either.)