
Computer graphics, my journey to understanding


laminutederire


11 hours ago, 0kelvin said:

Does that course include discussing details about APIs or the hardware itself? I mean, the computing power of GPUs is ever increasing, exponentially in some aspects, but it seems that it's never enough. Toy Story, Finding Nemo or Ice Age, at every new incarnation the graphics evolve, but they still require a lot of processing power and hours to render each frame.

It includes details about how to make renders more complex, and which data structures help keep that complexity reasonable. Not that much about the hardware itself though. I'm thinking about reimplementing what I have done in OpenCL so it can run on my GPU, so I'll explain that if I succeed at that goal :)

Graphics complexity evolves with the computing power as well. Sure, we have found ways to render things faster with acceleration data structures, importance sampling, etc., but the time we gain gets spent on rendering more complicated scenes. Just changing the camera model can slow renders down significantly, for instance, because more realistic lens models need more ray samples to converge to a good solution.
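To give an idea of why the camera model alone costs more, here is a little toy sketch (my own code, not from the course) comparing a pinhole ray with a thin-lens ray. With the thin lens, every camera ray also depends on a random point on the aperture, so each pixel gets one more dimension to integrate over and needs more samples before the depth of field stops being noisy. The conventions (camera at the origin looking down -z) and the names apertureRadius / focusDistance are just assumptions for the example.

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return v * (1.0f / len);
}

struct Ray { Vec3 origin, dir; };

// Pinhole camera: one deterministic ray per point on the image plane.
Ray pinholeRay(Vec3 pixelOnImagePlane) {
    return { Vec3{0.0f, 0.0f, 0.0f}, normalize(pixelOnImagePlane) };
}

// Thin-lens camera: jitter the ray origin over the aperture and bend the ray
// so it still passes through the point that is in perfect focus. Averaging
// many of these rays per pixel is what creates depth of field -- and the noise.
Ray thinLensRay(Vec3 pixelOnImagePlane, float apertureRadius,
                float focusDistance, std::mt19937& rng) {
    const float kPi = 3.14159265f;
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);

    // Uniform point on the lens disk.
    float r   = apertureRadius * std::sqrt(uni(rng));
    float phi = 2.0f * kPi * uni(rng);
    Vec3 lensPoint{ r * std::cos(phi), r * std::sin(phi), 0.0f };

    // Point on the focal plane that the corresponding pinhole ray would hit
    // (camera looks down -z in this toy setup).
    Vec3 dir = normalize(pixelOnImagePlane);
    float t = focusDistance / -dir.z;
    Vec3 focusPoint = dir * t;

    return { lensPoint, normalize(focusPoint - lensPoint) };
}
```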

 


  • 2 weeks later...

Been quite busy, hence the late follow-up. I spent some time learning how to randomly sample directions and so on. The goal of that was simply to lay the groundwork for direct and global illumination algorithms and for more complex effects. Basically, in direct illumination, the complexity comes from what you do when your camera ray hits the scene. There are a few approaches; the two easiest are the following:

- You can sample points on your emitter surfaces, and add the contribution of a ray going from the sampled point to the intersection of the camera ray with the scene. You assign that sample a probability corresponding to how the point was chosen on the emitter, and finally you divide the contribution by this probability. The contribution itself is a product of the value of the light, the effect of the surface material, and geometric correction terms.

veach_ems.png.a80eab5e39842c0b0d0da426a07faa5e.png

You get a result like this on a test scene with a limited sample count (here it was 64). As you can see, it doesn't perform well on the right. This is because when you shoot the camera ray and then choose a random point on the larger emitter, you will very often end up with a point whose contribution is negligible because of the geometry. It therefore wastes computation on near-zero contributions, and isn't optimal in that type of case.

- You can instead sample the direction in which the reflected ray is sent, according to the surface material, and check whether that ray hits a light. You do the same kind of computation, but with the probability now taken over directions instead of over the emitter's area, so it has to be expressed in a different measure to stay coherent. (I've put a small sketch of both estimators right after the next result.)

 

veach_mats.png.4040cda15f419be5e312652a70dfab31.png

With this method you end up with a lot of noise, and bad results for the smaller emitter. This comes from the fact that the probability of a sampled direction hitting it is small, so you waste samples in exactly the opposite situation to the previous method.
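To make the "divide by the probability" part concrete, here is a rough sketch of the arithmetic of the two estimators. This is my own simplification, not the course's code: the intersection and shadow-ray plumbing is assumed to exist elsewhere, the function and parameter names are hypothetical, and the quantities are passed in already evaluated.

```cpp
#include <cstdio>

struct Color { float r = 0, g = 0, b = 0; };

Color operator*(Color a, Color b) { return { a.r * b.r, a.g * b.g, a.b * b.b }; }
Color operator*(Color a, float s) { return { a.r * s, a.g * s, a.b * s }; }

// Emitter sampling: pick a point y on a light with pdf pdfArea (area measure).
// The contribution is Le * f * G / pdfArea, where the geometric term G converts
// from area to solid angle (the two cosines over the squared distance), and
// visibility is the result of the shadow-ray test.
Color emitterSampleEstimate(Color Le, Color f, float cosAtShadingPoint,
                            float cosAtLight, float distanceSquared,
                            bool visible, float pdfArea) {
    if (!visible || pdfArea <= 0.0f) return {};
    float G = cosAtShadingPoint * cosAtLight / distanceSquared;
    return Le * f * (G / pdfArea);
}

// BSDF (material) sampling: pick a direction with pdf pdfSolidAngle (solid-angle
// measure) and trace it; only if it happens to hit an emitter does its radiance
// LeIfHit contribute. Same product of light, material and cosine, divided by the
// probability of having chosen that direction.
Color bsdfSampleEstimate(Color LeIfHit, Color f, float cosAtShadingPoint,
                         float pdfSolidAngle) {
    if (pdfSolidAngle <= 0.0f) return {};
    return LeIfHit * f * (cosAtShadingPoint / pdfSolidAngle);
}

int main() {
    // Made-up numbers just to show both estimators produce a color.
    Color Le{10, 10, 10}, f{0.3f, 0.3f, 0.3f};
    Color a = emitterSampleEstimate(Le, f, 0.8f, 0.9f, 4.0f, true, 0.25f);
    Color b = bsdfSampleEstimate(Le, f, 0.8f, 0.5f);
    std::printf("emitter sample: %.2f  bsdf sample: %.2f\n", a.r, b.r);
    return 0;
}
```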

The idea behind all this, and behind the division by a probability, is that you want to integrate the contribution of the lights over all possible directions using the Monte Carlo method. This method approximates the integral of f(x)p(x) by the mean of the f(xi), where the xi are distributed according to the probability density p (I can explain the mathematical details further if that interests you). As this method converges rather slowly (the error decreases as O(1/sqrt(N)), where N is the number of samples), drawing the samples from a well-chosen probability density and dividing by it helps a lot when that density approximates the integrand well. That's why we divided by the probabilities in the two methods explained above.
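If you want to see the effect in isolation from rendering, here is a tiny toy example of mine (not from the course): estimating I = integral of 3x^2 over [0, 1], which is exactly 1, once with uniform samples and once with samples drawn from a pdf proportional to the integrand itself.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    // Integrand: f(x) = 3x^2 on [0, 1], whose exact integral is 1.
    auto f = [](double x) { return 3.0 * x * x; };

    for (int N : {64, 1024, 16384}) {
        double uniformEstimate = 0.0, importanceEstimate = 0.0;
        for (int i = 0; i < N; ++i) {
            // Uniform sampling: p(x) = 1, so the sample weight is just f(x).
            double xu = uni(rng);
            uniformEstimate += f(xu);

            // Importance sampling: draw x with pdf p(x) = 3x^2 by inverting
            // its cdf x^3 (so x = u^(1/3)), then weight by f(x) / p(x).
            double u  = std::max(uni(rng), 1e-12);  // avoid dividing by zero
            double xi = std::cbrt(u);
            importanceEstimate += f(xi) / (3.0 * xi * xi);
        }
        std::printf("N = %5d  uniform = %.4f  importance = %.4f\n",
                    N, uniformEstimate / N, importanceEstimate / N);
    }
    return 0;
}
```

The uniform estimate wobbles around 1 and only improves like 1/sqrt(N), while the importance-sampled one is exact here because f/p is constant. In a renderer we can't match the full integrand, but the material or the emitters are good partial matches, which is exactly what the two strategies above exploit.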

An interesting thing you can do is to combine those two methods, weighting each sample by how likely each strategy was to generate it, to get the best of both worlds. This is called multiple importance sampling.

veach_mis.png.0cf751cf4d5593ccf9dfb74c83a30d51.png

As you can see, here you get the best of both worlds in the same computation time as the other two.
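For reference, here is the balance heuristic, one standard choice for those per-sample weights (a generic sketch, not necessarily exactly what the course uses; the pdf numbers in main are invented for illustration).

```cpp
#include <cstdio>

// Balance heuristic: weight for a sample drawn from the strategy with pdf
// pdfThis, given that the other strategy would have produced the same sample
// with pdf pdfOther. The two weights for a given sample sum to one, so the
// combination stays unbiased and no light is counted twice.
double balanceHeuristic(double pdfThis, double pdfOther) {
    return pdfThis / (pdfThis + pdfOther);
}

int main() {
    // Hypothetical pdfs for one particular direction at some shading point.
    double pdfEmitter = 0.8;   // emitter sampling likes this direction
    double pdfBsdf    = 0.05;  // the material would rarely have chosen it

    double wEmitter = balanceHeuristic(pdfEmitter, pdfBsdf);  // ~0.94
    double wBsdf    = balanceHeuristic(pdfBsdf, pdfEmitter);  // ~0.06

    // Each strategy's contribution is multiplied by its weight before the
    // usual division by its own pdf, then the two are summed.
    std::printf("emitter weight = %.3f, bsdf weight = %.3f, sum = %.3f\n",
                wEmitter, wBsdf, wEmitter + wBsdf);
    return 0;
}
```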

Next will come indirect illumination: why and how photon mapping can be useful, and how it relates to other techniques like normal maps or ambient occlusion maps.

(PS: what is the best place to link pictures from? This thread is very picture heavy and I'm hitting the limit, but I don't want to put those pictures on imgur either.)
