After the announcement of the Reddit + Mapcore mapping contest, the website has welcomed many newcomers. It is proof that, even twelve years on, the Source engine still attracts map makers, and there are plenty of reasons for that. Technology has obviously moved forward since 2003, and many newer engines use techniques and methods that make the Source engine look increasingly dated. Nevertheless, it still has a very specific visual character that makes it appealing. The lighting system in Source is most definitely one of the keys to that, and by the end of this article you will know why.

About the reality...

Light in the real world still raises plenty of open questions: we do not know exactly what it is, but we have a good idea of how it behaves. The most common physical model of light is the **photon**, symbolized as a single-point **particle** moving through space. The more photons there are, the more powerful the light is. But light is at the same time a **wave**: depending on its wavelengths, light can have all kinds of color properties (monochrome or combined colors). Light does not need matter to travel through (space is the best example: even through a vacuum, the sun still lights the earth). And when it encounters matter, several different things can happen:

- Light can **bounce** and continue its travel in another direction
- Light can be **absorbed** by the matter (and the energy can be transformed into heat)
- Light can **go through** the matter, as with air or water; some properties might change, but it passes through

And all these things can be combined or happen individually. If you can see any object outside, it is only because a massive number of photons traveled through space and the earth's atmosphere, bounced off the surfaces of the object you are looking at, and finally reached your eyes.

*How can such a complex physical behavior from nature be simulated and integrated into virtual 3D renderings?*

One of the oldest methods is still used today because of its accuracy: **ray tracing**. To be clear, it is NOT used in game engines because it is incredibly expensive, but I believe it is important to know how and why it was designed the way it was, since it probably influenced the way lighting is handled in Source and most video game engines. Instead of simulating an enormous number of photons traveling from the lights to the eye/camera, it does the exact opposite. If you want a picture at a 1000x1000 resolution, you only need to simulate the travel of 1,000,000 rays, one for each pixel. Each ray is traced individually until it reaches a light origin, and the result is one pixel color in the final picture.
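The per-pixel idea above can be sketched in a few lines. This is only an illustrative skeleton, not a real renderer: the `trace` function here is a stand-in for the expensive part (intersecting geometry and following bounces back to the lights), and all names are invented for the example.

```python
# Minimal sketch of backwards ray tracing: one ray per pixel, traced
# from the camera toward the scene. The trace() body is a placeholder;
# a real tracer would intersect geometry and follow bounces to a light.

def trace(ray_origin, ray_dir):
    """Follow a single ray and return a color for its pixel."""
    # Placeholder shading: brightness from the ray's vertical component.
    brightness = max(0.0, ray_dir[1])
    return (brightness, brightness, brightness)

def render(width, height):
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel to a direction in front of the camera.
            dx = (x + 0.5) / width - 0.5
            dy = 0.5 - (y + 0.5) / height
            row.append(trace((0.0, 0.0, 0.0), (dx, dy, 1.0)))
        image.append(row)
    return image

# A 1000x1000 image needs exactly 1,000,000 traced rays;
# here we render a tiny 4x4 image, i.e. 16 rays.
img = render(4, 4)
```

The key point is the loop structure: exactly one ray per pixel, each traced independently of the others.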

By using the laws of physics discovered centuries ago, we can obtain a physically accurate rendering that looks incredibly realistic. This method is used almost everywhere, from architectural renderings to movies. As an example, you can watch The Third & The Seventh by **Alex Roman**, one of the most famous CGI videos of all time. And because it is such an effective way to render 3D virtual scenes with great lighting, it has influenced other methods, such as **lightmap baking**.

Lightmap baking

OKAY LET’S FINALLY TALK ABOUT THE SOURCE ENGINE, ALRIGHT!

A “lightmap” is a grid that is added to every single brush face in your map. The squares defined by the grid are called **luxels** (they are a kind of “lighting pixel”). Each luxel gets its own two properties: a **color** and a **brightness**. You can see the lightmap grids in Hammer by switching your 3D preview to 3D lightmap grid mode.

You can also see them in-game with the console command **mat_luxels 1** (without and with).
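To make the definition concrete, here is a hypothetical in-memory view of a lightmap grid (the `Luxel` class and dimensions are invented for the example; this is not how the engine actually stores them):

```python
# Hypothetical representation of a lightmap grid: each luxel stores a
# color and a brightness, the two properties VRAD computes per luxel.

from dataclasses import dataclass

@dataclass
class Luxel:
    color: tuple       # (r, g, b), each 0-255
    brightness: float  # scalar intensity, 0.0 before compilation

# One grid per brush face; 16 units per luxel by default, so a
# 128x64-unit face gets a grid of 8x4 luxels.
face_width, face_height, luxel_size = 128, 64, 16
grid = [[Luxel((255, 255, 255), 0.0)
         for _ in range(face_width // luxel_size)]
        for _ in range(face_height // luxel_size)]
```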

During the compilation process, a program named **VRAD.exe** is used. Its role is to find the color and brightness to apply to every single luxel in your map. Light starts from the light entities and from the sky (from the tools/toolsskybox texture actually, using the parameter values that have been filled in the light_environment entity), travels through space, and when it meets a brush face:

- It is **partially absorbed** into the lightmap grid
- A less bright ray **bounces** off the face

Here is an animated picture to show how a lightmap grid can be filled with a single light entity:

When you compile your map, the lightmaps start out fully black, but VRAD progressively computes them from the light entities (one by one) and combines the results at the end. Finally, the resulting lightmaps are applied to the corresponding brush faces, as an additive layer on top of the texture used on each face. Let us take a wall texture as an example.

On the left, you have the texture as you can see it in Hammer. When you compile your map, the lightmaps are generated and you obtain the in-game result on the right. Unfortunately, luxels are much coarser, with a lower resolution, more like this.

On the left you have a lightmap grid with the default luxel size of 16 units generated by VRAD; a blur filter is applied and you obtain something close to the in-game result on the right.

In case you did not know, you can change the lightmap grid scale with the “**Lightmap Scale**” value in the texture tool. It is better to use powers of 2, such as 16, 8, 4 or even 2. Do not go below 2; it might cause issues (with decals, for example). Only use values lower than the default 16 when it is really useful, because precise lightmap grids drastically increase your map file size and compilation time. Of course, you can also use greater values in order to optimize your map, such as 32, 64 or even 128 on very flat areas or surfaces that are far away from the playable areas. You can get more info on Valve’s wiki page about lightmaps.
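The cost of a finer grid is easy to quantify: since the grid is two-dimensional, halving the Lightmap Scale quadruples the number of luxels on a face. A quick sketch (face size and helper name are just for illustration):

```python
# Rough illustration of how Lightmap Scale affects data size:
# halving the scale quadruples the number of luxels on a face.

def luxel_count(face_w, face_h, scale):
    """Number of luxels on a face of face_w x face_h units."""
    return (face_w // scale) * (face_h // scale)

face = (256, 256)
for scale in (128, 64, 32, 16, 8, 4, 2):
    print(scale, luxel_count(*face, scale))

# Going from the default 16 down to 4 multiplies the luxel count by 16:
assert luxel_count(256, 256, 4) == 16 * luxel_count(256, 256, 16)
```

This is why dropping the scale everywhere “just for quality” quickly inflates both file size and compile time.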

But as we said before, light also bounces off surfaces until it meets another brush, using radiosity algorithms. Because of that, even if a room does not contain any light entity, rays can bounce off the floor and light the walls and ceiling, so the room is not fully black.

Here’s an example:

The maximum number of bounces can be set with the VRAD command **-bounce X** (with X being the maximum number of bounces allowed). The default value of 100 should be more than enough.
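To see why 100 bounces is plenty, here is a toy model (not VRAD's actual radiosity solver) where each bounce keeps only a fraction of the ray's energy, the surface "reflectivity". Almost all of the indirect energy arrives within the first handful of bounces:

```python
# Toy model of bounced light: each bounce keeps a fraction of the ray's
# energy. This is NOT VRAD's actual radiosity algorithm, only an
# illustration of why extra bounces contribute less and less.

def indirect_energy(initial, reflectivity, bounces):
    """Total energy deposited by one ray over a number of bounces."""
    total, energy = 0.0, initial
    for _ in range(bounces):
        energy *= reflectivity   # the ray dims at each bounce
        total += energy          # the absorbed part lights a surface
    return total

# With 50% reflectivity, three bounces already deliver 87.5% of the
# energy that 100 bounces would; the series converges very quickly.
print(indirect_energy(1.0, 0.5, 3))
print(indirect_energy(1.0, 0.5, 100))
```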

Another thing taken into account by VRAD is the normal direction of each luxel: light hitting a luxel head-on does not behave the same way as light grazing past it. This is what we call the **angle of incidence** of light.

Let us take the example of a light_spot lighting a cylinder: the light will illuminate the surface gradually, from fully bright at the bottom to barely visible at the top.

*In-hammer view on the left, in-game view on the right*
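The gradient on the cylinder follows the cosine of the angle of incidence (Lambert's cosine law). A minimal sketch, with invented helper names, of how that factor is computed from a surface normal and a direction to the light:

```python
# Lambert's cosine law sketch: the light a luxel receives scales with
# the dot product between its surface normal and the direction toward
# the light, clamped to zero when the light is behind the surface.

import math

def lambert(normal, to_light):
    """Cosine of the angle of incidence, clamped to [0, 1]."""
    dot = sum(n * l for n, l in zip(normal, to_light))
    norm = math.sqrt(sum(n * n for n in normal)) * \
           math.sqrt(sum(l * l for l in to_light))
    return max(0.0, dot / norm)

up = (0.0, 0.0, 1.0)                 # a luxel facing straight up
print(lambert(up, (0.0, 0.0, 1.0)))  # light directly overhead -> 1.0
print(lambert(up, (1.0, 0.0, 1.0)))  # light at 45 degrees -> ~0.707
print(lambert(up, (1.0, 0.0, 0.0)))  # grazing light -> 0.0
```

At the bottom of the cylinder the normal faces the light (factor near 1), and near the top it becomes perpendicular to it (factor near 0), which produces exactly the gradient described above.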

Light Falloff laws

One of the things that made Source engine lighting much more realistic than most others in 2004 is its light falloff system. Alright, we saw that light travels through space until it meets something, but how does it travel? At the same brightness, whatever the distance between the light origin and its destination? Sometimes yes... but most of the time, no.

Imagine a simple situation: a room with a single point light inside. The light is turned on and produces photons that go in all directions around it. As you might imagine, each photon travels in its own direction and has absolutely no reason to deviate from its trajectory.

At one instant, picture billions of photons heading in every possible direction around the light; a moment later, they are all a bit further along their own trajectories, and all the photons are still there, in this “wave”. But as each photon follows its own trajectory, they all spread apart, making the photon density lower and lower.

As we said before, the more photons there are, the more powerful the light is. And the higher the density, the more intense the light. The intensity of light can be expressed like this:

$$I = \frac{N}{A}$$

*(N is the number of photons in the “wave”, A is the area over which they are spread)*

You have to keep in mind that all of this happens in 3D, so the “waves” of photons are not circles but spheres. And the area of a sphere is its surface, expressed like this:

$$A = 4\pi R^2$$

*(R is the radius of the sphere)*

If we substitute that surface area into the previous equation:

$$I = \frac{N}{4\pi R^2} = \frac{k}{R^2}$$

With $k$ being a constant number. We can see that the intensity is therefore proportional to the inverse of the square of the distance between the photons and their light origin.

*So, the further light travels, the lower is its intensity. And the falloff is proportional to the inverse of the square of the distance.*

Consequently, the corners of our room will get darker, because they are farther away from the light (plus they don’t directly face the light, the angle of incidence is lower than the walls/floor/ceiling).

This is what we call the inverse-square law. It is a very well-known behavior of light in the fields of photography and cinema, where people have to deal with it to get the best exposure they can.
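A quick numeric sketch of the law (the function name and values are just for illustration): doubling the distance divides the intensity by four.

```python
# Inverse-square falloff: intensity drops with the square of distance.

def intensity(k, distance):
    """Intensity at a given distance, with k a constant."""
    return k / distance ** 2

for d in (1, 2, 4, 8):
    print(d, intensity(100.0, d))
# Each doubling of the distance divides the intensity by 4:
# 100.0, 25.0, 6.25, 1.5625
```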

This law holds when light spreads in all possible directions, but you can also focus light in one direction and reduce the spread, with lenses for example. This is why, when Valve decided to integrate a lighting falloff law into their engine, they chose a method that not only follows the inverse-square law but also gives map makers the opportunity to alter the law for each light entity.

Constant, Linear, Quadratic... Wait, what?

In math, there is a very common type of function, called a polynomial function. The concept is simple: it is a sum of several terms, like this:

$$P(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \dots + a_n x^n$$

Each term has a constant factor (the “a” values: $a_0$ being the first one, $a_1$ the second, $a_2$ the third...), multiplied by the variable $x$ raised to a certain degree:

- x^0 = 1 : degree 0
- x^1 = x : degree 1
- x^2 : degree 2
- x^3 : degree 3
- ...

And

- a0 is the constant named the “constant coefficient” (associated with degree 0)
- a1 is the constant named the “linear coefficient” (associated with degree 1)
- a2 is the constant named the “quadratic coefficient” (associated with degree 2)

Usually, the function ends at some degree, and we name it by the highest degree of x it uses. For example, a *“polynomial of the second degree”* is written:

$$P(x) = a_0 + a_1 x + a_2 x^2$$

Then, if we take the expression from the inverse-square law, which was:

$$I(D) = \frac{k}{a_2 D^2}$$

with $a_2 = 1$ and **D** being the variable of **distance** from the light origin.

In Source, the constant $k$ is actually the **brightness** (the value you configure here).

It is simply an inverse polynomial of the second degree, with $a_0$ and $a_1$ equal to zero. We could write it like this:

$$I(D) = \frac{k}{a_0 + a_1 D + a_2 D^2}$$

Or...

$$I(D) = \frac{k}{\mathrm{constant} + \mathrm{linear} \cdot D + \mathrm{quadratic} \cdot D^2}$$

And here you have it! This is approximately the equation used by VRAD to determine the intensity of light for each luxel during compilation. And you can alter it by changing the values of the three variables **constant**, **linear** and **quadratic** for any of your light / light_spot entities in your level.

Actually, you set proportions of each variable against the other two, and only a percentage of each variable is kept. For example, constant = 1, linear = 1 and quadratic = 2 gives 25% constant, 25% linear and 50% quadratic. Another example: constant = 50, linear = 0 and quadratic = 50 behaves exactly like constant = 1, linear = 0 and quadratic = 1, since both mean 50% constant and 50% quadratic.

By default, constant and linear are set to 0 and quadratic to 1, which means 100% quadratic lighting attenuation. Therefore, **by default, lights in the Source engine follow the classic inverse-square law**.
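The normalization into percentages is straightforward to sketch (the helper name is invented for the example):

```python
# The three falloff values are only proportions: they are normalized
# so they sum to 1, i.e. percentages of each other.

def normalize(constant, linear, quadratic):
    total = constant + linear + quadratic
    return (constant / total, linear / total, quadratic / total)

print(normalize(0, 0, 1))    # default -> (0.0, 0.0, 1.0): 100% quadratic
print(normalize(1, 1, 2))    # -> (0.25, 0.25, 0.5)
print(normalize(50, 0, 50))  # -> (0.5, 0.0, 0.5): same as (1, 0, 1)
```

This is why only the ratios between the three values matter, never their absolute magnitudes.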

If you look at the page dedicated to the constant-linear-quadratic falloff system on Valve’s wiki, it explains that the intensity of light is boosted by a factor of 100 for the linear part of the equation and 10,000 for the quadratic part. This is because inverse formulas drop drastically close to the origin: without the boost, a light with a brightness of 200 would only be effective within a distance of about 5 units and would therefore be completely pointless.

You would have to boost the brightness considerably in Hammer to make the light visible, so Valve decided to apply that boost automatically.

The following equation is a personal guess at the one used by VRAD:

$$I(D) = \underbrace{B \cdot (\mathrm{constant} + 100 \cdot \mathrm{linear} + 10000 \cdot \mathrm{quadratic})}_{\text{brightness to apply}} \times \underbrace{\frac{1}{\mathrm{constant} + \mathrm{linear} \cdot D + \mathrm{quadratic} \cdot D^2}}_{\text{falloff}}$$

With **constant**, **linear** and **quadratic** being percentage values and **B** the brightness set in Hammer. The first part determines the **brightness to apply**, boosting the value set in Hammer if the light at least partially uses linear or quadratic falloff. The second part is the falloff part of the equation, attenuating the brightness depending on the distance D between the studied point and the light origin.
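That guessed formula can be written as a small function. To be clear, this is a sketch of the personal guess described above, not confirmed VRAD behavior; only the 100 and 10,000 boost factors come from Valve's wiki.

```python
# Sketch of the article's guessed falloff formula (NOT confirmed VRAD
# behavior): brightness boost followed by constant-linear-quadratic
# attenuation over distance.

def source_falloff(brightness, constant, linear, quadratic, distance):
    total = constant + linear + quadratic
    c, l, q = constant / total, linear / total, quadratic / total
    boost = brightness * (c + 100.0 * l + 10000.0 * q)    # brightness part
    falloff = 1.0 / (c + l * distance + q * distance**2)  # falloff part
    return boost * falloff

# Pure quadratic (the default) follows the inverse-square law, and the
# 10,000 boost keeps a brightness-200 light effective at 100 units:
print(source_falloff(200, 0, 0, 1, 100))
# Pure constant falloff never attenuates, whatever the distance:
print(source_falloff(200, 1, 0, 0, 100))
```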

The best way to see how this equation works is to visualize it in a 2D graph:

https://www.desmos.com/calculator/1oboly7cl0

This website provides a great way to visualize the 2D graphs of functions. On the left, you can find all the elements needed, starting with the inputs (in a folder named “INPUTS”), which are:

- a0 is the **Constant** coefficient that you enter in Hammer
- a1 is the **Linear** coefficient
- a2 is the **Quadratic** coefficient
- B is the **Brightness** coefficient

In another folder are the three coefficients constant, linear and quadratic, automatically transformed into percentage form. Finally, the function **I(D)** is the intensity function depending on the distance **D**. The graph of the function is visible in the rest of the page.

**Try to interact with it!**

This concludes the first part; the second part will come in about two weeks. We will see some example applications of this **Constant-Linear-Quadratic falloff system**, as well as a simpler alternative. We will also see how lighting works on models, and the dynamic lighting systems integrated in Source games. Thank you for reading!

Part Two : link
