Reputation Activity

  1. Like
    leplubodeslapin got a reaction from neptune for an article, Source Lighting Technical Analysis: Part Two   
    This is the second part of a technical analysis about Source Lighting, if you haven’t read the first part yet, you can find it here. 
Last time, we studied lightmaps, how they are baked, and how VRAD handles light traveling through space. We ended Part One with an explanation of the Constant-Linear-Quadratic falloff system, along with a website that lets you play with these variables and see how the lighting falloff reacts to them. We will now continue with basic examples of things you can do with these variables. 
     
    Examples of application
    Constant falloff
The simplest type of falloff is the 100% constant one. Whatever the distance, the lighting theoretically keeps the same intensity. This is the kind of (non-)falloff used for sun lighting: the sun is so far away from the map area that light rays are assumed to be parallel, and light keeps its intensity. Constant falloff is also useful for fake lights: lights with a very low brightness that are only there to brighten up an area.
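For reference, here is a minimal sketch of the attenuation keyvalues a 100% constant light would carry in a VMF (keyvalue names as documented for the light entity on the Valve Developer Community; the brightness value is just an example):
_light "255 255 255 200"
_constant_attn "1"
_linear_attn "0"
_quadratic_attn "0"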
     
     

     
    Linear falloff

Another type of falloff is the 100% linear one. With this configuration, light looks a bit artificial: it loses its intensity, but reaches much further than a 100% quadratic falloff. It can be very useful on spots, where the lighting is smooth and powerful. Here is an example:
     

     
    Quadratic falloff

This is the default configuration for any light entity in Hammer, following, as we said before, the classic inverse-square law (100% quadratic falloff). It is considered the most natural and realistic falloff configuration. The biggest issue is that it boosts the brightness so much at short distances that you can easily end up with a big white spot. Here is an example, with a light placed 16 units away from a grey wall:

     
This can also happen with linear falloff, but it is worse with quadratic. Simple solutions exist; the most common is to use not a light entity but a light_spot entity oriented away from the wall or ceiling the light fixture is attached to. You can widen the opening angle of your light_spot with the inner and outer angle parameters (by default the outer one is 45°; increase it to a value of 85°, for example). If needed, you can also add a light with low brightness to light the ceiling/wall a bit.
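As a sketch, the relevant light_spot keyvalues could look like this (keyvalue names per the Valve Developer Community; the angles are example values, and the pitch of -90 assumes a fixture on the ceiling aiming straight down, away from it):
_inner_cone "30"
_cone "85"
pitch "-90"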

     
50% & 0% Falloff
A second light falloff system exists, overriding the constant-linear-quadratic system if used. The concept is much simpler: you only have to configure 2 distances:
- 50 percent falloff distance: the distance at which the light falls off to 50% of its original intensity
- 0 percent falloff distance: the distance at which the light should end. Well... almost: it actually falls off to 1/256th of its original intensity, which is negligible.
The good thing with this falloff system is that you can see the 2 spheres corresponding to the 2 distances you have configured in Hammer. Just make sure to have this option activated: 
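In the VMF, these two distances map to a pair of light keyvalues (names per the Valve Developer Community; the distances are example values):
_fifty_percent_distance "128"
_zero_percent_distance "512"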

     
    Models lighting
Model lighting deserves its own section, because it differs from brush lighting (though the falloff stays the same). In most current game engines, lightmaps can be used on models, and a specific UV unwrap is even made just for them. But in Source Engine 1 (except for Team Fortress 2), you cannot use lightmaps on models. 
The standard lighting method for models is named per-vertex lighting. This time, light won't be applied to faces but to vertices, all of the model's vertices. For each one of them, VRAD will compute a color and brightness to apply. Then the engine draws a gradient between the vertices of each triangle. For example:

    If we take a simple example of a sphere mesh with 2 different light entities next to it, we can see it working.
                
With this lighting method, models are therefore integrated into the environment with appropriate lighting. The good thing is that if part of the model is in a dark area and another part is in a bright area, the situation is handled properly. The only requirement is that the mesh must have a sufficient level of detail; if there is a big planar area without additional vertices on it, the lighting detail can be insufficient. 
    Here is an example of a simple square mesh with few triangles on the left and a lot on the right. With the complex mesh, the lighting is better, but more expensive. 

If you need a complex mesh for your lighting but don't want your model to be too expensive, you have to find a balance. 
Two VRAD commands are needed to make per-vertex lighting work:
- StaticPropLighting
- StaticPropPolys
You have to add them here. You can find more information here.
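For example, in Hammer's expert compile mode, the $light_exe command line could look like this (a sketch using Hammer's standard placeholders; -both and -final are common additions but not required):
$light_exe -both -final -StaticPropLighting -StaticPropPolys -game $gamedir $path\$file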
Another system exists that is much cheaper and simpler. Instead of computing the lighting of every vertex, the engine only deals with the model's origin: the result computed at the origin's location is applied to the whole model. This can be an issue if the model is big, or if it sits in an area with lots of lighting contrast. The best example is at the beginning of Half-Life 2, with trains entering and exiting tunnels. We can see the issue: the model is lit at first, but when it enters the tunnel it suddenly turns dark. That is the moment the train's origin enters the shadow. 
This cheap lighting method will replace per-vertex lighting for 3 types of models:
- prop_dynamic, or any kind of dynamic model used in the game (NPCs, weapon models in hand, any animated models...)
- prop_physics
- ANY MODEL USING A NORMAL MAP (vertex lighting apparently causes issues with normal maps), EVEN IF USED AS A PROP_STATIC
The big problem with these models is their integration into the map: they won't show any shadow, and their lighting will be very flat and boring (because the same result is used for the whole model). Fortunately, there are 2 good things about this cheap lighting method. 
First, the direction light comes from is taken into account: if blue light comes from one direction, all the faces oriented toward that direction will be tinted blue. And if different lighting colors/intensities come from different sides of your model, they should appear in game. 
Here is an example of a train model using a normal map, with 2 lights, one on each side. If you look closely, you'll see some blue lighting on the left, on faces that should be shadowed from the blue light but are oriented toward it.
     

     
The second good thing is that there is still some kind of dynamic per-vertex lighting, but much simpler: it only works with light and light_spot entities (NOT with light_environment), and it can only add light to the prop, never cast a shadow (the only thing evaluated dynamically is the distance between the light and each vertex). Here we use again the high-poly plane mesh from before as a prop_dynamic, parented to a func_rotating that ... rotates. The light dynamically lights the vertices of the prop. There is a limit of 3 dynamic lights per prop; it can't handle more at the same time.

And if you add a normal map to your model's texture, this cheap dynamic lighting works on it:

     
    Projected texture and Cascaded Shadows
A few words on dynamic lighting to finish this study. Projected textures are a technology that appeared with Half-Life 2: Episode Two in 2007. It consists of a point entity projecting a texture in a chosen direction, with a chosen opening angle (FOV). The texture is projected with emissive properties (it can only increase the brightness, never lower it), and it can generate shadows or not. The great thing about this technology is that it's fully dynamic: the env_projectedtexture can move and/or aim at moving targets. It is used, for example, for flashlights in Source games. But as usual, there is also a drawback: most of the time you can only use 1 projected texture at a time. Modders can change this value quite easily, but in Valve games it is always locked at 1. 

The cascaded shadow system is only used in CS:GO. The concept is quite similar to a projected texture, but it doesn't increase the brightness; it only adds finer shadows. It is used for environment lighting, works with much smaller luxels than the lightmaps, and is fully dynamic. It starts from the tools/toolsskybox textures of the map and casts shadows when it meets any obstacle. Shadows from the lightmap are most of the time low resolution, and the transition between a bright and a dark area is blurry and wide. The cascaded shadows can therefore draw a crisp shadow on top of the one from the lightmaps.

When an object is too small to get a shadow in the lightmap, its shadow will still be visible thanks to the cascaded shadows. There are 3 levels of detail for cascaded shadows in CS:GO, and you can configure the maximum distance at which they work with the Max Shadow Distance parameter of the env_cascade_light entity (400 units by default). The levels of detail will be distributed within this range, for example: 

    Since cascaded shadows and projected textures share some technology, you can’t use them both at the same time.
     
    Conclusion
I really hope you have found this article interesting and learned at least a few things from it. I believe most of this information is not the easiest to find, and it's always good to know how your tools work, to understand their behavior. Source Engine 1 is old and its technologies might not be used anymore in the future; more powerful and convincing technologies are released frequently, but it's always good to know your classics, right? 
    I would like to thank Thrik and ’RZL for supporting me to write this article, and long live the Core!
    // Written by Sylvain "Leplubodeslapin" Menguy
    Additional commands for fun
mat_luxels 1                      // Allows you to see the lightmap grids
mat_fullbright 1                  // Disables all lighting (= fullbright). In CS:GO, cascaded shadows remain and you should delete them as well (cf. next command)
ent_fire env_cascade_light kill   // KILL WITH FIRE the cascaded shadows entity
mat_drawgray 1                    // Replaces all the textures with a monochrome grey texture, useful to work on your lighting
mat_fullbright 2                  // Alternative to mat_drawgray 1
Bonus:
mat_showlowresimage 1             // Minecraft mode
  2. Like
    leplubodeslapin reacted to Radix for an article, Static Prop Combine in CS:GO   
    Static Prop Combine in Counter-Strike: Global Offensive
    A step by step guide
    thanks to @untor
    What is Static Prop Combine?
    Static prop combine, or informally speaking "autocombine", is a new feature in CS:GO's VBSP.
    It allows VBSP to merge together multiple static props into a single static prop, either automatically or with user-defined rules.

     
    What is static prop combine good for?
Static prop combine is another feature to optimize your maps. Most people might think that "the less geometry rendered, the better", so if you use small props, it's easier to hide what is not visible.
    That's not wrong. But there is a problem:
    In Source, there is one draw call per model per material. And these draw calls are very performance-hungry.
    That's where static prop combine comes into play:
By combining models sharing the same materials, fewer draw calls are performed, which greatly helps optimization.
    Valve has stated that Nuke runs 40% faster after they implemented static prop combine.
     
    How do I use static prop combine?
The static prop combine feature was added in 2016 with the release of the reworked de_nuke. But since then it has seen little (?) use by community mappers, and there are no (?) guides on the Internet except this documentation.
@untor helped me get the static prop combine feature to do its job, so we decided it was time to publish a step-by-step guide on how to use it.
We assume that you are already familiar with the creation of props.
0. Back up your CS:GO folder (optional)
We do not take responsibility for any damage done to your files, so it's time to back up your game files now if you have not already. In general we recommend duplicating your "Counter-Strike Global Offensive" folder, so you can use a separate installation of CS:GO for mapping while keeping the other one clean for playing.

    1. Source files
You must have the source files of the models you want to combine. Usually there are 3 files for each prop:
- *.qc
- reference mesh (supported formats are *.smd, *.dmx and *.fbx)
- physics mesh
So if you want to combine props made by you, you should already have these files.
If you want to combine props made by Valve, you will need to decompile them first, and then change the names; otherwise, the version of the prop that is packed in the VPK would override your version.
        
    In this guide we will use two different pipe props:

    You can download the example files here (contains the *.qc and *.smd files) :
    example.zip
     
    Browse to "...\Steam\steamapps\common\content\csgo\"
    Create a folder "models". In our example we have another subfolder "example". Save the model source files there:
     

    These are our QCs:
    pipe_straight.qc
    pipe_curved.qc
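The QC contents were originally shown as images that have not survived here. As a stand-in, a minimal QC for the straight pipe could look something like this (the paths follow the example folders above, but the body, material and collision details are assumptions, not the original file):
$modelname "props/example/pipe_straight.mdl"
$body pipe "pipe_straight_ref.smd"
$staticprop
$surfaceprop "metal"
$cdmaterials "models\props\example\"
$sequence idle "pipe_straight_ref.smd"
$collisionmodel "pipe_straight_phys.smd" { $concave }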

    Notes:
    Restrictions for the *.qc:
- Only the first $body is recognized.
- $model is not recognized.
- $appendsource and $addconvexsrc are not recognized.
- You can only use $upaxis Z or Y.
    2. Compile your props
    Your models have to be compiled from this directory now:
    Open your model compile tool (I use Crowbar)
    Then browse to "...\Steam\steamapps\common\content\csgo\models\example\" and compile the QCs.
    The compiled model files should be in "...\Steam\steamapps\common\Counter-Strike Global Offensive\csgo\models\props\example\" now.
     
    3. spcombinerules.txt
    Browse to "...\Steam\steamapps\common\Counter-Strike Global Offensive\csgo\scripts\hammer\spcombinerules\"
     

There you will find "spcombinerules.txt". This file defines the combine rules for Valve's props. It is a standard KeyValues-formatted text file; each entry follows the format shown below.
    Rename it to "spcombinerules_valve.txt" (or whatever you want) and create a new text file "spcombinerules.txt".
    Then copy and paste the following into "spcombinerules.txt" and save it.
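The rules to paste were shown in an image that has not survived here. The sketch below only illustrates the general KeyValues shape with made-up key and group names; it is NOT the real schema, so check the entries in the "spcombinerules_valve.txt" file you just renamed for the exact keys:
// HYPOTHETICAL illustration only, key names are placeholders
"example_pipes"
{
    "qc_template"    "example/pipe_combine"
    "model1"         "models/props/example/pipe_straight.mdl"
    "model2"         "models/props/example/pipe_curved.mdl"
}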
     

    4. Stub QCs
Stub QCs contain the base template for the QCs that static prop combine generates. Generally, they should only include:
- $staticprop
- $surfaceprop
- $cdmaterials
- Any $texturegroups used by the models.
Browse to "...\Steam\steamapps\common\Counter-Strike Global Offensive\csgo\scripts\hammer\spcombinerules\qc_templates\".
    In our example we create a new subfolder "example", open it and then create a text file and rename it to "pipe_combine.qc":


    Copy and paste the following into "pipe_combine.qc" and save it:
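The original snippet was an image; based on the list of allowed contents above, a plausible stub QC (the $surfaceprop and material path are assumptions for this example) would be:
$staticprop
$surfaceprop "metal"
$cdmaterials "models\props\example\"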
     
    5. Compile your map
    Add some of our example props to your map in Hammer and compile the map.
In our example we use the following compile parameters for VBSP (full list here); a combined command line is sketched after the list.
    -StaticPropCombine: Merges static props together according to the rules defined in scripts/hammer/spcombinerules/spcombinerules.txt. This lowers the number of draw calls, increasing performance. It can also be used to lower the number of static props present in a map.
    -StaticPropCombine_AutoCombine: Automatically generate static prop combine rules for props that VBSP deems should be combined. Note: This does not write to spcombinerules.txt.
    -StaticPropCombine_ConsiderVis: Instead of using the distance limit, combine all props in the group that share visclusters.
    -StaticPropCombine_SuggestRules: Lists models sharing the same material that should be added to spcombinerules.txt.
-StaticPropCombine_MinInstances <int>: Set the minimum number of props in a combine group required to create a combined prop. Tip: Valve had this set to 3 for the new Dust 2.
-StaticPropCombine_PrintCombineRules: Presumably prints the combine rules (to be confirmed).
    -StaticPropCombine_ColorInstances: Instances of combined props get colored.
    -KeepSources: Don't delete the autogenerated QCs and unpacked model files after finishing.
    -CombineIgnore_FastReflection: Combine props, even if they have differing Render in Fast Reflections settings.
    -CombineIgnore_Normals: Combine props, even if they have differing Ignore Normals settings.
    -CombineIgnore_NoShadow: Combine props, even if they have differing Disable Shadows settings.
    -CombineIgnore_NoVertexLighting: Combine props, even if they have differing Disable Vertex lighting settings.
    -CombineIgnore_NoFlashlight: Combine props, even if they have differing Disable flashlight settings.
    -CombineIgnore_NoSelfShadowing: Combine props, even if they have differing Disable Self-Shadowing settings.
    -CombineIgnore_DisableShadowDepth: Combine props, even if they have differing Disable ShadowDepth settings.
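Put together, a hypothetical expert-mode VBSP command line for this example (using Hammer's standard $bsp_exe, $gamedir and $path placeholders; the exact set of flags is up to you) might look like:
$bsp_exe -StaticPropCombine -StaticPropCombine_ConsiderVis -StaticPropCombine_MinInstances 2 -KeepSources -game $gamedir $path\$file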
     
    6. Success?
    The combined props look exactly like the single props. So how can you be sure that the static prop combine process was successful?
    - Once the map is compiled, the combined props will be packed into your *.bsp automatically.
    - If you add -keepsources to the compile parameters, you can also find the combined props in "...\Steam\steamapps\common\Counter-Strike Global Offensive\csgo\models\props\autocombine\*name of your map*\"
    and their QCs in "...\Steam\steamapps\common\content\csgo\models\props\autocombine\*name of your map*\".
    - If you add -StaticPropCombine_ColorInstances to the compile parameters, instances of combined props are colored in CS:GO.
     
    7. Additional notes
    Hammer:


    - You can manually disable static prop combine for individual props with the "Disable Prop Combine" keyvalue.
    - Prop scaling (Uniform Scale Override) is not supported yet (?)
    - If the original props don't have a collision model, you will have to set collisions to "Not Solid" in the properties. Otherwise the combined prop will be solid (automatically generated collision mesh; causes problems).
- If the props differ in certain keyvalues, in most cases either the default value (e.g. Alpha) or the higher value (e.g. fade distances) will be used
    - Props that differ in the below keyvalues will NOT be combined, unless manually overridden with the appropriate VBSP option:
  - Render in Fast Reflections (-combineignore_fastreflection)
  - Ignore Normals (-combineignore_normals)
  - Disable Shadows (-combineignore_noshadows)
  - Disable Vertex lighting (-combineignore_novertexlighting)
  - Disable Flashlight (-combineignore_noflashlight)
  - Disable Self-Shadowing (-combineignore_noselfshadowing)
  - Disable ShadowDepth (-combineignore_disableshadowdepth)
- Props that differ in the below keyvalues will NOT be combined:
  - Skin
  - Color
  - Disable Flashlight
    TO DO
- some fps tests with an actual map!
- which gives better results: "-StaticPropCombine_ConsiderVis" or prop combining based on distances?
- Is there a console command to display the number of performed draw calls/props?
- ...
    ______________________________________________________________________
    Sources:
    https://developer.valvesoftware.com/wiki/Static_Prop_Combine
    https://developer.valvesoftware.com/wiki/QC
    https://developer.valvesoftware.com/wiki/VBSP
     
  3. Like
    leplubodeslapin got a reaction from a Chunk for an article, Source Lighting Technical Analysis: Part One   
After the announcement of the Reddit + Mapcore mapping contest, the website has welcomed many newcomers: proof that, even as a twelve-year-old game engine, Source still attracts map makers, and there are lots of reasons for that. Technology has obviously moved forward since 2003, and many new game engines have found techniques and methods to improve their renderings, making the Source engine look older and older. Nevertheless, it still has a very specific visual aspect that makes it appealing. The lighting system is most definitely one of the key reasons for that, and by the end of this article you will know why.
     
    About the reality...
Light in the real world is still a subject with a lot of open questions; we do not know exactly what it is, but we have a good idea of how it behaves. The most common physical model of light is the photon, symbolized as a single-point particle moving through space. The more photons there are, the more powerful the light. But light is at the same time a wave, and depending on its wavelengths it can have all kinds of color properties (monochrome or combined colors). Light travels through space without needing matter to travel through (space itself is the best example: even without matter, the sun can still light the earth). And when it encounters matter, different kinds of things can happen:
- Light can bounce and continue its travel in another direction
- Light can be absorbed by the matter (and the energy can be transformed into heat)
- Light can go through the matter, as with air or water; some properties might change, but it goes through
All these things can be combined or happen individually. If you can see any object outside, it is only because a massive number of photons traveled through space and the earth's atmosphere, bounced off the surfaces of the object you are looking at, and finally reached your eyes.
    How can such a complex physical behavior from nature be simulated and integrated into virtual 3D renderings?
One of the oldest methods is still used today because of its accuracy: ray tracing. Just to be clear, it is NOT used in game engines because it is incredibly expensive, but it is important to know how and why it was made the way it is, since it probably influenced the way lighting is handled in Source and most videogame engines. Instead of simulating an enormous number of photons traveling from the lights to the eye/camera, it does the exact opposite. If you want a picture with a 1000x1000 resolution, you only need to simulate the travel of 1,000,000 photons (or “rays”), one for each pixel. Each ray is traced individually until it reaches a light origin, and the result is one pixel's color integrated into the full picture. 
    By using the laws of physics we discovered centuries ago, we can obtain a physically-accurate rendering that looks incredibly realistic. This method is used almost everywhere, from architectural renderings to movies. As an example, you can watch The Third & The Seventh by Alex Roman, one of the most famous CGI videos of all time. And because it is an efficient way to render 3D virtual elements with great lighting, it will influence other methods, such as the lightmap baking method.
     
    Lightmap baking
    OKAY LET’S FINALLY TALK ABOUT THE SOURCE ENGINE, ALRIGHT!
A “lightmap” is a grid that is laid over every single brush face in your map. The squares defined by the grid are called luxels (they are kind of “lighting pixels”). Each luxel gets 2 properties of its own: a color and a brightness. You can see the lightmap grids in Hammer by switching your 3D preview to 3D lightmap grid mode.

    You can also see them in-game with the console command mat_luxels 1 (without and with).
During the compilation process, a program named VRAD.exe is used. Its role is to find the color and brightness of every single luxel in your map. Light starts from the light entities and from the sky (from the tools/toolsskybox texture actually, using the parameter values that have been filled in the light_environment entity), travels through space, and when it meets a brush face:
- It is partially absorbed into the lightmap grid
- A less bright ray bounces off the face
Here is an animated picture showing how a lightmap grid can be filled by a single light entity:

When you compile your map, the lightmaps all start full black, but progressively VRAD computes the lightmaps for all the light entities (one by one) and combines them at the end. Finally, the lightmaps obtained are applied to the corresponding brush faces, as an additive layer over the texture used on each face. Let us take a look at a wall texture, for example.

On the left, you have the texture as you see it in Hammer. When you compile your map, it generates the lightmaps, and in the end you obtain the result on the right in-game. Unfortunately, luxels are much rougher, with a lower resolution, more like this.

On the left you have a lightmap grid with the default luxel size of 16 units generated by VRAD; a blur filter is applied and you obtain something close to the result on the right in the game.
In case you did not know, you can change the lightmap grid scale with the “Lightmap Scale” value in the texture tool. It is better to use powers of 2, such as 16, 8, 4 or even 2. Do not go below 2; it might cause issues (with decals, for example). Only use values lower than the default of 16 if you think it's really useful, because precise lightmap grids drastically increase your map file size and compilation time (halving the scale quadruples the number of luxels on a face). Of course, you can also use greater values in order to optimize your map, such as 32, 64 or even 128, on very flat areas or surfaces that are far away from the playable areas. You can get more info about lightmaps on Valve's Wiki page.

But as we said before, light also bounces off each surface until it meets another brush, following radiosity algorithms. Because of that, even if a room does not have any light entity in it, rays can bounce off the floor and light the walls and ceiling, so the room is not full black. 
    Here’s an example:

The maximum number of bounces can be set with the VRAD command -bounce X (X being the maximum number of bounces allowed). The default value of 100 should be more than enough.
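For example, in Hammer's expert compile mode, the flag is simply appended to the VRAD line (a sketch using Hammer's standard placeholders):
$light_exe -bounce 100 -game $gamedir $path\$file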
Another thing VRAD takes into account is the normal direction of each luxel: light hitting a luxel head-on and light grazing it will not behave the same way. This is what we call the angle of incidence of light.

Let us take the example of a light_spot lighting a cylinder: the light brightens the surface gradually, from fully bright at the bottom to barely visible at the top.

    In-hammer view on the left, in-game view on the right
     
    Light Falloff laws
One of the things that made Source Engine lighting much more realistic than any other in 2004 is the light falloff system. Alright, we saw that light can travel through space until it meets something, but how does it travel through space? At the same brightness, whatever the distance between the light origin and destination? Maybe sometimes yes... but most of the time, no.

     
Imagine a simple situation: a room with 1 single point light inside. The light is turned on; it produces photons that go in all directions around it. As you might imagine, the photons each go in their own direction and have absolutely no reason to deviate from their trajectories.
     
     
     
At one instant, picture billions of photons going in all the possible directions around the light; a moment later, they are all a bit further along their own trajectories, and all the photons are still there, in this “wave”. But as each photon follows its own trajectory, they all spread apart, making the photon density lower and lower.
As we said before, the more photons there are, the more powerful the light. And the higher the density, the more intense the light. The intensity of light can be expressed like this:
$$I = \frac{\text{amount of photons}}{\text{area they are spread over}}$$
     
You have to keep in mind that all of this happens in 3D, therefore the “waves” of photons aren't circles but spheres. And the area of a sphere is its surface, expressed like this:
$$A = 4 \pi R^2$$
(R is the radius of the sphere)
     
If we integrate that surface area into the previous equation:
$$I = \frac{\text{amount of photons}}{4 \pi R^2} = \frac{\heartsuit}{R^2}$$
With ♥ being a constant number, we can see that the intensity is proportional to the inverse of the square of the distance between the photons and their light origin. 
So, the further light travels, the lower its intensity. And the falloff is proportional to the inverse of the square of the distance.
Consequently, the corners of our room get darker, because they are farther away from the light (plus they don't directly face the light; their angle of incidence is lower than that of the walls/floor/ceiling).

This is what we call the inverse-square law, a very well-known behavior of light in the fields of photography and cinema, where people have to deal with it to get the best exposure they can.
This law holds when light spreads in all possible directions, but you can also focus light in one direction and reduce the spread, with lenses for example. This is why, when Valve decided to integrate a lighting falloff law into their engine, they chose a method that not only follows the inverse-square law but also gives mapmakers the opportunity to alter the law for each light entity.
     
    Constant, Linear, Quadratic... Wait, what?
In math, there is a very common type of function, named the polynomial function. The concept is simple: it's a sum of several terms, like this:
$$f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \dots + a_n x^n$$
Every time, there is a constant factor (the “a” thing, $a_0$ being the first one, $a_1$ the second one, $a_2$ the third one...), multiplied by the variable x raised to a certain degree:
- $x^0 = 1$ : degree 0
- $x^1 = x$ : degree 1
- $x^2$ : degree 2
- $x^3$ : degree 3
- ...
And:
- $a_0$ is the constant named “constant coefficient” (associated with degree 0)
- $a_1$ is the constant named “linear coefficient” (associated with degree 1)
- $a_2$ is the constant named “quadratic coefficient” (associated with degree 2)
Usually, the function has an end, and we name it by the highest degree of x it uses. For example, a “polynomial of the second degree” is written:
$$f(x) = a_0 + a_1 x + a_2 x^2$$
Then, if we take the expression from the inverse-square law, which was:
$$I(D) = \frac{\heartsuit}{a_2 D^2}$$
With $a_2 = 1$ and D being the distance from the light origin.
In Source, the constant ♥ is actually the brightness (the value you configure here).
It is simply an inverse polynomial of the second degree, with $a_0$ and $a_1$ equal to zero. And we could write it like this:
$$I(D) = \frac{\heartsuit}{a_0 + a_1 D + a_2 D^2}$$
Or...
$$I(D) = \frac{Brightness}{constant + linear \cdot D + quadratic \cdot D^2}$$
And here you have it! This is approximately the equation used by VRAD to determine the intensity of light for each luxel during compilation. And you can alter it by changing the values of the 3 variables constant, linear and quadratic, for any light / light_spot entity in your level.
Actually, you set proportions of each variable against the other two, and only a percentage for each variable is saved. For example (illustrative values, as the original screenshots are gone): entering constant = 1, linear = 0, quadratic = 1 is saved as 50% constant, 0% linear, 50% quadratic. Another example: constant = 1, linear = 1, quadratic = 2 is saved as 25%, 25%, 50%.
By default, constant and linear are set to 0 and quadratic to 1, which means a 100% quadratic lighting attenuation. Therefore, by default, lights in the Source engine follow the classic inverse-square law.
If you look at the page dedicated to the constant-linear-quadratic falloff system on Valve's Wiki, it is explained that the intensity of light is boosted by 100 for the linear part of the equation and by 10,000 for the quadratic part. This is because inverse formulas drop drastically right from the start; without the boost, a light with a brightness of 200 would only be effective within a distance of about 5 units and would therefore be completely pointless. You would have to boost your brightness a lot in Hammer to make the light visible, so Valve decided to apply this boost automatically.
The following equation is a personal guess of what could be the one used by VRAD:
$$I(D) = \frac{Brightness \cdot (constant + 100 \cdot linear + 10000 \cdot quadratic)}{constant + linear \cdot D + quadratic \cdot D^2}$$
With constant, linear and quadratic being percentage values. The numerator determines the brightness to apply, boosting the value set in Hammer if the light at least partially uses linear or quadratic falloff. The denominator is the falloff part of the equation, attenuating the brightness depending on the distance between the studied point and the light origin. 
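As a quick sanity check of this (guessed) formula, with illustrative numbers: for a default 100% quadratic light (constant = 0, linear = 0, quadratic = 1) of brightness B, it reduces to $I(D) = 10000 \cdot B / D^2$, so the light still delivers its full brightness B at 100 units away, instead of at only 1 unit without the boost.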
    The best way to see how this equation works is to visualize it in a 2D graph: 
    https://www.desmos.com/calculator/1oboly7cl0
This website provides a great way to visualize the 2D graphs associated with functions. On the left, you can find all the elements needed, starting with the inputs (in a folder named “INPUTS”), which are:
- a0 is the Constant coefficient that you enter in Hammer
- a1 is the Linear coefficient
- a2 is the Quadratic coefficient
- B is the Brightness coefficient
In another folder are the 3 coefficients constant, linear and quadratic, automatically transformed into percentage form. And finally, the function I(D) is the intensity function depending on the distance D. The graph of the function is drawn in the rest of the page. 
    Try to interact with it!
This concludes the first part; the second part will come in about two weeks. We will see some examples of application of this Constant-Linear-Quadratic falloff system, and a simpler alternative. We will also see how lighting works on models, and the dynamic lighting systems integrated into Source games. Thank you for reading!
     
    Part Two : link
  4. Like
    leplubodeslapin reacted to THE OWL for an article, dz_blacksite - info about use "4wayblend" textures   
    Next tutorial > [dz_bs - info about use "4wayblend" textures #2]

Hello! This is my first time posting on Mapcore, so I don't know if this kind of thing is usually written here. I recently ran into a problem I had been thinking about for a long time, and the solution turned out to be the simplest one...

4wayblend textures stumped me; I didn't understand how to work with them. Since I could not find any articles about this issue, I decided to write about them.

What if the texture's detail sprites don't work?
For 4wayblend textures to spawn grass, you must set the path to these detail sprites in the map properties. (The path you must follow is shown below.)

In Hammer, find the “Map” button on the top panel > Map Properties > Detail Material File > enter detail/detailsprites_survival (put this path inside “Detail Material File”)
(!) To place the grass, use “Paint Alpha”
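Under the hood, this just sets a keyvalue on the worldspawn entity; in the VMF it could look like this (keyvalue name as documented for worldspawn on the Valve Developer Community; the path is the one above):
"detailmaterial" "detail/detailsprites_survival"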


How to blend multiple textures?
Go to the displacement settings and select [Sculpt], then find the [Blend] button and select the desired texture there. (Use the left mouse button to paint; holding the right one reduces the radius of the paint area.)

(!) If the camera mode in the 3D view is set to “3D Shaded Textured Polygons”, you will not see the painted part of the texture. The camera mode should be set to “3D Textured” when painting.

  5. Awesome
    leplubodeslapin reacted to Radu for an article, 2018: Mapcore's Year in Review   
Keeping with tradition, I'd say it's about time we took a look at what our community has achieved throughout the year. If last time I was saying how 2017 was a year of immense growth, then 2018 was surely one of significant change. And it hasn't been without its troubles and anxious moments. No change ever is, but I believe it to be for the best. We've seen some of our friends become parents, change work fields or get their first job in the industry. We've even seen a few pursue their dream projects. And for that, we have to applaud them. It takes courage to keep moving forward and to realise when it's time for something new. In the meantime, I hope this article inspires you, and I wish everyone good luck!
     
    2018: Mapcore's Year in Review
     

    SteamVR - Gulping Goat Space Farm
    by @Steve, @marnamai, @The Horse Strangler, @Sersch and others at Scraggy Rascal Studios
    produced in collaboration with Valve
    "Scraggy Rascal has been working with Valve to create all new SteamVR content, we've been given a lot of liberty to create these locations. Our goal was to create interesting and fun locations for the player to explore. These projects, over the last couple months, have been a crash course in Source 2,VR, project management, delivering within deadlines, working together as a team and personal growth. It has been an invaluable experience and great opportunity ... and we're just getting started!" - marnamai
     

    Darksiders III - Art
    by @The Horse Strangler and others at Gunfire Games
    "Probably one of the biggest challenges the artists and designers faced on Darksiders 3 was working with both a platforming and fully connected streamed world. This meant that everything exists all the time. While we streamed levels in and out, areas couldn't intersect and we couldn't do the classic "Small exterior, big interior" swap. This was especially challenging because of how much verticality our design must support. We had a few "vistas", but for the most part every aspect of the level was accessible. If you can see it, you will likely be able to get there, jump on it, fight around it, etc. Fury, the main playable character can double jump, swing, float, glide and even rocket jump over 10 meters high. Personally for me it completely changed how I looked at art filling up a space. Every single mesh we placed impacted design. Art was design, and design was art." - The Horse Strangler
     

    Europa
    by @[HP]
    "Europa is a relaxing narrative experience. The goal with this game is to offer just enough challenge that its rewarding to get from one area to the other for more than just the visuals by using environmental hazards, platforming sequences and light puzzles that you can beat by exploring.The game is split into linear sections and wider areas, that's at the core of the game and as you play, you keep improving your characters moving ability, which will further exploration and give you the ability to solve newer light puzzles. There's none of the typical character upgrading systems, rather, the levels will offer the incremental challenges and the sense of progression. Europa's main focus lies in environmental storytelling and immersing the player in it's universe with passive storytelling, evoking awe and bliss with colorful watercolor-like art and music." - Helder Pinto
     

    Counter-Strike: Global Offensive - Turnpike
    by @Squad
    "For a while the "Highway Restaurant" theme has been sitting in my little Concepts.txt file. When the Wingman Contest was announced, it felt like the perfect opportunity to turn this idea into a map, as its relatively small size would be fitting for the Wingman gamemode. The casual nature of Wingman made me add some elements that I would not normally add to, let's say, a Defusal map, like the TF2-esque team color coding (albeit subtle), the moving vehicles and the silly bomb target. Additionally, since the playable space is (almost) completely indoors, making it nighttime felt right, as it both emphasizes the interiors and makes for an atmospheric blorange background." - Squad
     

    Dying Light - A New Hope
    by @will2k
    "A full-fledged custom single player campaign that ties in to the original story of the main game. It will see the main protagonist, Kyle Crane,leaving the City for the countryside to search for a specific elusive medicinal herb and bring it back to Dr. Camden who believes it could be the cure to the Harran Virus. This campaign is a one man show as I’m doing everything myself: level design, environment art/detailing, story creation, scripting/quest creation, custom dialog, custom audio, custom materials/textures, custom foliage systems, custom brushes for terrain painting/sculpting, lighting, manual nav mesh tuning, scripted NPCs…" - will2k
     

    Prodeus
    by @General Vivi and Michael Voeller
    "Prodeus is the first person shooter of old, re-imagined using modern rendering techniques. Oh, and tons of blood, gore, and secrets. Creating Prodeus has meant a lot to us over the last year. It feels great to finally be doing something for ourselves. It can be pretty ambitious at times since there are just two of us, but I’m confident we can pull it off. Keep an eye out for the end of February for a big announcement." - General Vivi
     

    Counter-Strike: Global Offensive - Ruby
    by @catfood
    "When I was on vacation in Portugal years ago I was so impressed by the city Lisbon that I really wanted to build a map that has the same vibe. At the time I was already working on different projects so I decided whenever I got enough time to work on a map this size I would go back. So early 2017 the moment was finally there, I went back to Lisbon to shoot (~2000) reference photos then made a list of things that are iconic for Lisbon and started working on Ruby. Adding a lot of height differation, warm colors, tile patterns and ofcourse trams was essentiental to get the Lisbon vibe." - catfood
     

    Subnautica
    by @dux, @PogoP and others at Unknown Worlds Entertainment
    "A mix of Survival, story, mystery, resource gathering, base building with some accidental horror and plenty of deep, deep water. We had not long finished up with Natural Selection 2 and were hungry to develop a different kind of game. During development we were (and still are) a small team but the game kept getting bigger and grew into something far larger in scope than originally planned. So we soon realised that what we had could be turned into something really unique if we put our heads down and just cranked on it." - dux
     

    Unreal Tournament 4 - Chamber
    by @Ubuska
    "I used Halo and Warframe artstyle as a reference. The goal of this project was to make fun and cool looking map with 100% custom art that is 100 mb in file size. To achieve that I used several advanced techniques such as custom vertex normals, deferred mesh decals, no bake, tiling base materials and masks. There are basically 5 or so texture maps used in the entire map,  most of the filesize space was taken by lightmaps. I learned a lot doing this project in terms of composition, art direction and optimization. Hope you enjoy this map as much as I do!" - Ubuska
     

    Counter-Strike: Global Offensive - Pitstop
    by @Quotingmc and Quadratic
    "It is not often that CS: GO receives a new game-mode, especially one as competitively focused as Wingman. I was understandably pleased at the announcement of the 2018 CSMapMakers contest for the mode. Pitstop was my entry where I set out to create a thematically bold centre piece for my portfolio. With the help of my teammate Quadratic and support from multiple Mapcore members, I learnt a lot about taking a level from a simple blockout to completion; I can say for certain I’m thrilled with the end result!" - Quoting
     

    Black Mesa - Xen
    by @JeanPaul, Adam Engels and others at Crowbar Collective
    "While building Xen we had to design, iterate, and iterate (then iterate some more). We took what we thought we knew, and put it to the test. We learned how design and scope work together, and how to build momentum as a team. We are extremely proud of what we have accomplished over the year(s)! Despite the long and occasionally frustrating timeline, it has been a real testament to the commitment that this team and this community have for Half-Life." - Adam Engels
     

    Unreal Engine 4 scene
    by @Vorontsov
    "So I decided I would step out of my comfort zone and create a small environment in an engine I've never used before, UE4. Although I think I did a fairly decent job at the time there were ultimately many nuances I could have done better, but that is the artist dilemma. This project taught me the value of properly blocking out your environment, gathering as many references as you can and to have patience and not rush through assets, when breaking any of these rules I was punished for it. Stay tuned for my next project which will be a giant mech, coming soon Valve time TM." - Vorontsov
     

    Counter-Strike: Global Offensive - Opal
    by @MikeGon
    "My goal with this project was to make a fun and compact defuse map, with a simple level flow, ample verticality, and an overlapped layout! I wanted to have interior and exterior, and break the grid a lot, to avoid having that "90 degrees grid" feel in the layout. I needed to have a vista on one side of the map to help with orientation, so I decided to make it a coastal town, inspired by those found on the island of Skopelos, Greece. Expect more updates in the near future, as I'm not yet satisfied with it. Since this is my only CSGO map, I want to put all my time and effort into it, and focus on quality instead of quantity. Thank you everybody for your support and feedback! <3" - MikeGon
     

    Insurgency: Sandstorm - Precinct
    by @Xanthi, @Squad, @Jonny Phive, @LATTEH, @Steppenwolf and others at New World Interactive
    "Precinct, was a fun and challenging map to work on. We decided early on to melt District and Contact two of our very nostalgic maps together into a single large-scale urban environment. The goal was to preserve the nostalgic feeling and at the same time create something unique and fresh not just a 1:1 copy. In the block-out stage we started playing with different terrain heights, which eventually was the key to accomplish our goal. Terrain height was a bit of a trial and error process; I remember driving up a hill and not having enough torque, oops!!" -Xanthi
     

    Counter-Strike: Global Offensive - Killhouse
    by @FMPONE
    "Killhouse showcases brutal duels, player reaction times, and close-quarters combat. A highly vertical layout ensures the sort of unpredictability and replayability ideal for CS:GO’s 2vs.2 "Wingman" game-mode." - FMPONE
     

    Counter-Strike: Global Offensive - Station
    by @Roald and @untor
    "All experiences contribute to where I am at this point. I am just a hobbiest but I think I learned alot about level design just by doing it and enjoying it. Overal my goal is to improve myself on level design, but also enviorment art. I think I archieved a goal on level design and it's now time to continue on enviorment art. This is where untor morozov comes in. I have met untor a while ago. He made this map 'Waterfall' which was pretty populair. I liked his designs and added him as a friend. When I had this wingman map going on with positive feedback I just contacted him again to work on it with me and since this moment we have had a incredible teamwork. I am gameplay orientated and he is art orientated so we were a great couple. We just enjoyed work on this project and respected eachother and had alot of fun." - Roald
     

    The Gap
    by @Yanzl and Sara Lukanc
    "The Gap is a sci-fi thriller first person narrative exploration video game. You play as Joshua Hayes, a neuroscientist trying to figure out what happened, barely remembering anything about his past. It started as a project for our BA thesis and has now grown into a standalone game. It's also my first "real" indie game project, helping me learn a lot about Unreal Engine 4 and game development in general." - Yanzl
     

    Counter-Strike: Global Offensive - Alexandra remake
    by @Serialmapper
    "My first successful map was born 10 years ago for CS1.6. It was done in just 4 days. Since then it has been ported/improved several times on CS:S then finally on CS:GO. It always had a "dust" theme. Initially i wanted to remake it with an "inferno" style but when the new dust2 came i switched the plan to use the new assets. The map was and is frequently played on public servers especially in Eastern Europe so i had plenty of feedback to improve it. For some it's just another "dust" map, but for me it's my dust2." - Serialmapper
     

    Far Cry 5 - Wetland Turmoil
    by @grapen
    "I wanted to try working with location design in an (imaginary) open world game for the first time, so I made this backwater cabin neighborhood. At the time I also wanted to see what the limits were in Farcry Arcade and how far I could push it. The level has fixed spawns (a limitation of the editor), but I toyed with the idea of making it work regardless from which direction the player would have approached it. The pathing and player guidance is more or less shaped like the number eight, with the church acting as an outlook. Your task is to eliminate all the bad guys. In the end I wanted to do so much more, but couldn't due to technical limitations. All in all it was a fun experience to make it." - grapen
     

    Counter-Strike: Global Offensive - Trailerpark
    by @OrnateBaboon and @Skybex
    "We wanted to make a map for CSGO, using a theme that had not been seen in any previous version of Counter-Strike.The map had to incorporate everyday plausibility, provide for enough variety so that things remained visually interesting,  but also be flexible enough to allow for the use of low geometry for easy grenade strategies. Being able to immediately recognize a theme in a map is always important, so with all this criteria in mind, A trailer park fitted the bill perfectly. There is still some way to go before a full release, but 2018 was a great year for progress on this project." - OrnateBaboon
     

    Unreal Engine 4 scene
    by @Corvus
    "I was inspired by games like stalker and the last of us. The goal was to make something photoreal with a lot of foliage. It took a couple of iterations but I think I achieved the goal in the end. While making this project I've had to learn a lot about Speedtree to make all the foliage, it was a really cool experience. Right now I'm in the army so unfortunately I can't make any more scenes right now, but after I'll come back I'll try to make more scenes like that." - Corvus
     

    Overwatch - Busan
    by @Minos, @[HP], @PhilipK, @IxenonI, Phil Wang, Lucas Annunziata and others at Blizzard Entertainment
    "Busan was a challenging map to make. Due to the game having 12 different heroes on screen we have a somewhat limited memory budget for maps, that includes all models, textures, effects, collision data, lighting information, etc... Fitting three radically different areas (Downtown, Sanctuary and MEKA Base) into one single map budget required us to find new ways to optimize our work. In the end, we were even squeezing kilobytes out of collision data to make it all fit, no kidding! But the result speaks for itself, the map was fun to work on and we are very proud of what we accomplished!" - Minos
     

    Counter-Strike: Global Offensive - Highlands
    by @ElectroSheep, @El Moroes and @'RZL
    "We wanted to make a map in Scotland because, thanks to dishonored 2, we were browsing a lot of references froms this area and we really loved it. I also went myself here in holliday after that. We asked one of our close friends to make some special props, like the police van, the taxi, the phonebox and some others. Unfortunatly the hard development of Dishonored 2 put us in a difficult state where we weren't able to work on the map. So we lost motivation. Then RZL contacted us because he didn't want the project to die so we gave him the keys. And RZL became busy too ^^. Life sometime say NO I guess, hehe. Now Highlands Is my only advanced project I still didn't finished and I'm ready to give it a try, I hope." - ElectroSheep
    "Highlands...is this map is a joke? Certainly no but we can say that the development is quite longer than what we expected. Perhaps we learn well how the famous "Valve time" works? :p No seriously I think we can explain that with the motivation. Of course we were motivated to create something cool with this map but with the time and, I think, with what we live in our life we never took the time to do it correctly...I mean we never had a constant rythm on the map. This (and other personal things) led to the current statut of the map; a still "work in progress" map started in 2014. But ElectroSheep came back and his goal is to finish it, and because he's right, I'll come back too to help him. Just, be patient (again) ;)" - El Moroes
     

    Battlefield V - Fjell
    by @Puddy, @Pampers and others at DICE
    "Fjell was an explosive experiment which paired a new Battlefield dynamic, planes and infantry only, with an epic gosh darn mountain top. Tackling this design combination was like dealing with a bear after you've kicked it in the balls. It was a fun challenge and even though its extreme gameplay is quite polarizing when compared to more middle-of-the-road maps, I am happy that we went there!" - Puddy
     

    Counter-Strike: Global Offensive - Iris
    by @BubkeZ and @Oliver
    "Iris was born out of a shared interest in the TV-show "Seinfeld", funnily enough. One day BubkeZ noticed I had changed my Steam profile picture to a photo of "George Costanza" and just like that the wheels were in motion! In the beginning, BubkeZ had the vision of an old city environment with lots of dirty alleyways and brick architecture. We didn't want to fall in the trap of making the map look too bleak, so we came up with the idea of making a mid-century town set in autumn. While the map certainly have visual elements from the 50's, I would say the overall theme of Iris is american auto-industry. Making the old cars was definitely my favorite part of making this map!" - Oliver
     

    Unreal Engine 4 scene
    by @Brightness
    "I have always been a fan of retro and vintage, so this was like a dream to me. After watching the first season of True Detective, I immediately fell in love with the office set and the way the series was shot. I have definitely learned a lot from this project, mostly lighting techniques that can fill your scene with a story. The goal was to recreate their environment in my own style, and I'm pretty satisfied with how it turned out. I definitely wasn't expecting this much of positive feedback and I'm really thankful for this community. I want to do something with the environments, not just as a portfolio piece, but make a short film or make a small adventure game out of them." - Brightness
     

    Counter-Strike: Global Offensive - Insertion 2
    by @Oskmos
    "Being the follow up to the first Insertion it will have the same overall concept with the spawning and open-world like layout. However this time it will be a more urban setting and overall higher quality art assets. I always love to make environments that feels real. And that are familiar. Its all made up. But the details and various elements in Insertion 2 is from my childhood basically. Friends that grew up in the same place I have recognizes it aswell." - Oskmos
     
_______________________________________________________________________________________________________________________________________________
     
    The Door Challenge

    Submission thread
     
    Articles

    Designing Highly Replayable Stealth Levels for Payday 2

    Level Design in Max Payne: Roscoe Street Station

    Effect and Cause - Titanfall 2 Level Breakdown

    2017: Mapcore's Year in Review
     

    Hurg smiles upon you all!
  6. Like
    leplubodeslapin reacted to General Vivi for an article, The Door Challenge - 2018   
    THE DOOR CHALLENGE!
I want to start out by welcoming you to the 2nd Door Challenge! It's been a little over 7 years since we held the first one! A lot has changed in our industry, and new engines have made level design more accessible than ever before. With all the fresh talent coming into our industry, I think it's important that we challenge ourselves and each other to push our creative thinking.
This challenge is meant for everyone to join in, from first-time level designers to senior and lead designers! Everyone is at a different place in their career, and it's always fun to hone your skills on one of the most old-school puzzles of our time: “Get the Door Open!”. The last time we did this, we had a fantastic turnout of completed and submitted puzzles! Especially since we are focusing on JUST design and scripting and NOT on art!
    As this is a scripting challenge, you are encouraged to use Dev textures or simple greyscale materials and only what art assets are absolutely necessary to communicate key ideas. The point is to focus on your Scripting / Presentation / Storytelling / Puzzle Making skills.
    Most entries generally took a few days to build from start to finish, so don't sweat the deadline. If you would like to get a better idea, check out some of the entries from the first door challenge.
    Remembering our Past
    SOLEVAL - First Place
      Magnar Jenssen - Participation
      Robert Yang - Participation
    Jason Mojica - Participation
    Rules and QA
    Build a puzzle and craft a story to creatively open “The Door”! It doesn't matter whether you're entering, exiting, or just moving from one room to another - just get that DOOR OPEN!
    Acceptable Engines : UE4 / Unity / Source SDK
    For UE4 or Unity, you will be REQUIRED to provide an EXE of your game
    For Source SDK, a simple bsp will do with info on the game you built it in. (Eg. Half-life 2, Portal 2, CS:GO, TF2)
    We encourage you DON’T use Art unless needed to sell your idea. Simple meshes / Dev textures / grey textures should do fine.
    You are ALLOWED to use Templates to start yourself off. Example: UE4 has FPS, Third Person, and VR templates.
    You are ALLOWED to use existing scripting / blueprints or code to help you make your puzzle.
    You CAN choose First Person, Third Person, or Virtual Reality (VR)
    DON’T submit anything larger than 250 MB; we want simple entries that everyone can download.
     
    The challenge will begin Friday, August 10th, and end Sunday, September 16th at 11:59PM US CENTRAL time (GMT -6)
     
    Must Haves :
    A zip file including your EXE or map file
    2 screenshots of your scene (ATTACHED! This will help us archive our entries for posterity)
    A video showing the puzzle's intended solution (hosted on YouTube is fine)
     
    Optional :
    Full Name (optional)
    Website or Portfolio (optional)
    The original level source (and any other relevant files) for inquiring minds to examine your scripting
     
    Judging : We will start judging the day after closing. Everyone will get 3 votes, and then we will vote on the top 3 one week later. Things to think about when judging or making an entry:
    Innovation - More than just a simple Door!
    Theme - How close did you stay to the idea of the challenge?
    Readability - Was your idea clear and easy to understand?
    Humor - Did you make someone laugh or enjoy your entry?
    Overall - Wrap everything together! Was it awesome?
    Door - Q: How much Door you got? A: Hell yes
    Prizes
    As with the previous challenge, there will be no prize other than the pride of knowing people thought you were awesome. Woo!
     
  7. Like
    leplubodeslapin reacted to General Vivi for an article, Designing Highly Replayable Stealth Levels for Payday 2   
    The Making of Murky Station: Payday 2
    Payday 2 is a four player cooperative first-person shooter with RPG elements that centers around robbing banks and stealing rare loot. It was released on August 13, 2013 and has since shipped over 50 DLC packs and counting. With a thriving subreddit, it has consistently been in the top ten games played on Steam. Today, I wanted to talk about my adventures designing stealth levels for Payday 2 before leaving Starbreeze in January 2018. While parts of this article cover problems and solutions specific to Payday level design, I made sure to discuss them in a broader sense. This article is aimed at junior to mid-tier level designers; if you are a senior designer, some of it may sound familiar to you.
    I'll start off by saying that Payday's stealth mechanics are not perfect and can be flawed in some areas, but I wanted to focus on the decisions behind the map design, specifically for the heist Murky Station. I'll also break down how we consider using RNG (randomization), and the ways we apply it to objectives and mechanics to keep the level fresh and replayable. This map took 6 weeks to make between 2 people. My partner took the role of Level Builder / Environment Artist and I took the role of Designer / Scripter. Between the two of us, we figured out the scale of the project based on the needs of our studio. The idea was to create a small heist that took around 10-15 minutes to finish with high replayability. There's a lot to go over, so let’s get started!
     

     
    Let's start from the beginning
    Before we start drawing or building layouts, we make the call whether we are going to create a Loud level (combat only), a Stealth level (avoid combat), or a Mixed-style map. Given the short period of time allotted to us, we decided to stick to stealth only. Making this decision early on helped us create better movement options for the player and focus our efforts towards balancing patrols and objective placement. We decided that the theme of the level was a small train depot run by a group of mercenaries shipping large weapons. The main objective was to infiltrate the depot and steal an EMP bomb. Keeping the objective simple and intuitive is important in multiplayer games where players can drop in and out of the experience at any point in time.
    We decided to shoot for 10-15 minutes of gameplay, breaking down our main objective into smaller sub-goals that could take about 2 minutes each (this is based on our extensive knowledge of Payday 2). It should be noted that this time assessment will change once the player has completed the level a few times. These numbers tend to get cut by a third, or in some cases, by half. With our main objective in mind, we can construct a simple flow diagram for the heist and start to think about possible dynamic and RNG elements that can be used to create a replayable experience.

    (This is a scripting example from our editor; each entity has its own function)
    Testing your ideas before scripting them? Wait... What?
    Since 90% of Payday levels are hand scripted, it's important we don't waste time building the wrong things. Testing your objectives and complicated RNG elements has to be fast and efficient. The last thing you want to do is build an entire system and find out it sucks. Most of the time you don't even need animations or even a model to properly test your ideas. At such an early stage some floating debug text will do just fine. You might be asking, what if I don't have debug text or the ability to script? When playtesting levels for Payday 2, a lot of the time we'll get a simple block-out done and then ... here it comes ... pretend we're doing the objectives.
    It might sound crazy (and not everyone can get through it without laughing) but we'll have one of the designers act out the role of Bain, our mission giver, and just spout objectives at us. We'll move through the space and pretend to see guards or hack laptops and delay time based on things we expect to happen. You can basically break down how your systems might work and try out a few possibilities. For example, knowing that you might have two escapes at either side of the map gives you enough knowledge to make pretend decisions. Telling your fellow devs the van is arriving up top and pointing out where to secure loot can help you find out if a location is interesting for the escape or not.
    Even though the artists might giggle, and people from other teams walking by stop and wonder why they can't see that horde of enemies, it really works, and can often steer the level in the right direction and prevent us from investing too much time on the wrong objectives. Now, I know this approach won't work for all studios or situations, but all I gotta say is... don't knock it till you try it...  
       
     
    Constructing our Sandbox Layout
    Now that we've pretended to run through our objectives and have gotten used to our basic block-out, let's talk about the layout we built for Murky Station. We went for what I'd like to call "the onion approach", which is pretty much what it sounds like. You'll have multi-layered rings that give you the sense of progression towards the center (or a goal). Essentially, we use the outer layer as the player start and each sub-objective is based inside a different layer until the player reaches the main objective (at the figurative center). This approach is very useful when working with sandbox type levels, especially when the player can virtually go anywhere they want.

    Side Note: We also layer our music track each time a sub objective is finished, creating more suspense and a sense of agency.

    You can see that the outer onion layer is the player spawn (colored green) on the overpass which gives them a full view of the trainyard. From here they can study patrol routes, train-car positions, and possibly objective locations. The overpass can also be used by a player with a sniper rifle to mark guards in the different lanes, helping provide accurate information on guard positions for the players on the ground floor.

    The next layer is breaking into the train yard through a fence around the perimeter. The fence is there to guide the player and give them a visual boundary for the "safe zone" (where no guards patrol). The next layer is searching the train cars to discover where the main goal is hiding, followed by breaking through the vault doors inside of the trains themselves. These onion layers have to be carefully managed to give the proper impression to the player. Too many layers and you might confuse the player or make them forget what they're doing; too few and you might leave them feeling unchallenged or unaccomplished.
     
    Player Mobility is key!
    Mobility is key to providing players opportunities to express themselves and make better decisions while traversing a level. I felt that it was pretty important for Murky Station to allow for different play styles ranging from slow and methodical to fast and dirty. The last thing I wanted was to force players to play a certain way or for the routes to become predictable and linear. In order to do this, I spent the first week of development prototyping and testing out different layout ideas that would maximize paths and choices for the player.
    (Here is a simplified top-down of the routes in the train yard area)


    It became obvious that we would need to allow players to traverse through and under the trains, as they cover most of the real estate in the train-yard. Unfortunately, the older train assets were not built to go underneath, but lucky for us, the nighttime setting of the level would cover up this fact. With only 2 of us on this project, I took a crash course in Maya and cleaned up the bottom half of the trains by removing collisions and remodeling them for readability purposes.


     
    The next challenge was to teach the player they could hide under trains and be safe. Payday players haven't been under the trains in any other heist up until this point, so we needed to call attention to that but also show them it was a safe place. Making these spaces dark and in the shadows helped create an illusion of safety but also made it harder for players to find them.
    To help solve this issue we added yellow caution tape as a trim and a dim red light under the wheels to catch the player's eye. These combined elements would then be used as visual vocabulary in other parts of the level to teach players that something should be explored.


    One of the other ways we added more routes to the level was to build a ventilation system in the lower tunnels. We leveraged the fact that this was a stealth level to create these smaller spaces, especially since they didn't have to accommodate 40+ police officers. The vents allowed players to safely view guard patrols, search for objectives, and move loot. To prototype this, I built a modular vent system using basic mock-up units that allowed for rapid construction and testing. Funnily enough, the first iteration of the vents was too small and caused players’ bodies to clip through the floor. I was able to rework my mock-up units and we settled on standing height instead of crouching height. Once again we used yellow caution tape as our visual vocabulary to highlight the vent entrance on the wall.
    Modifying the trains and vents is one of the factors that contributed to the map’s success and gave new players more confidence to explore the trainyard and lower claustrophobic tunnels. So now that we've explored the different possibilities for movement and giving the player more choices, it's time to buckle down and get our randomization system built.
     


    Randomizing Objectives to Maximize Replayability
    RNG is one of the core pillars of Payday, so every decision we make is looked at through a lens of RNG. We strongly believe randomization should be meaningful to gameplay and not just added for the sake of it. It’s important to ask questions like: was it worth changing all the cups in your level? Did you gain anything from swapping out all of your cars and buildings? Was creating a third entrance valuable to the level? Maybe one day we'll completely randomize every object in a building down to the smallest cups, but in a game like Payday I personally feel these types of things have diminishing returns and can often ruin a planned design.
    When working with RNG it's important that you ask yourself as many questions as possible to start with a strong foundation, especially if you plan on finishing on time. Something I often see junior to mid-tier level designers forget is to build for scope and set priorities on their objectives. It might sound trivial, but forgetting your priorities can send you down a black-hole that eats away all of your time.
    So how did we go about adding RNG into Murky Station? Breaking down our objectives, we can start to consider what RNG options are available and doable within our one month time frame. I've also labeled them with my personal priorities (low - high).
    Break into the train yard: randomize breach locations (low)
    Locate the Bomb Train: randomize train configurations (high)
    Hack into the train: randomize panel to flip sides (low - medium)
    Open the Vault: 4 different vault door / key types (high)
    Find the Vault keys: the map supported up to 40 hiding locations (med - high)
    Secure the EMP bomb parts: 2 escape locations, 1 chosen per playthrough (medium)
    I focused most of my efforts on randomizing the train configurations, vault doors and key placement. These objectives were critical in influencing how the player would move through the main space and how they could tackle the same area in different ways through multiple playthroughs. In order to accomplish this, I broke down my sub-goals into digestible points of interest and isolated them into their own prefabs (shown below). Doing so allowed me to script one prefab and teleport it to as many locations as I wanted. This approach made the randomization more manageable to script and cut down the amount of bugs that might have formed if I built everything by hand each time.
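    To make the prefab-teleporting idea concrete, here is a minimal sketch in Python of how such a randomizer could work. It is not Payday 2's actual scripting system, and the names (key_prefab_spawns, train_cars, vault_types) are illustrative assumptions only:

```python
import random

def place_key_prefabs(key_prefab_spawns, num_keys=4):
    # One scripted key prefab, teleported to a handful of the (up to 40)
    # candidate hiding spots; every other spot stays empty this playthrough.
    return random.sample(key_prefab_spawns, num_keys)

def configure_trains(train_cars, vault_types):
    # Pick which car hides the main goal, give each car a vault door type,
    # and randomly flip interiors (some layouts are asymmetrical).
    bomb_car = random.choice(train_cars)
    return {
        car: {
            "vault": random.choice(vault_types),
            "flipped": random.random() < 0.5,
            "has_bomb": car is bomb_car,
        }
        for car in train_cars
    }
```

    Because the prefab itself is scripted only once, every location it lands in behaves identically, which is exactly what keeps the bug surface small.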

    Side note: We gave each one of our key / vault prefabs its own unique visual and audio so that players could identify them from a distance or listen if they were close by. Providing them with this level of feedback is critical in helping them make proper decisions while traversing the level.



     
    Now that we have our vault doors and keys figured out, I can begin the planning process of placing them throughout the level. When placing them, each location must meet certain conditions before being finalized. The main goal is to provide the player with a challenge and also encourage them to be creative in tackling the surrounding area. Since the layout was designed with many interesting choke points and traversal options, it was fairly straightforward to decide where to place them. Collecting the keys is one of the more RNG-based objectives in Murky Station; sometimes all of the keys are in different corners of the map and other times they are all next to each other. Eventually there was a script cleanup to prevent overpowered locations or terrible RNG possibilities, but overall it was a huge success for the level.
    We generally kept the key locations central to the layout and tried not to place them too close to the player’s safe zones. Placing several keys along the outskirts was a nice change of pace from the main lanes, providing a different type of challenge due to the openness of the layout.
    This is what the upper train yard looks like and how the keys are distributed. The lower tunnels have the same amount of keys placed.


     
    We also used the same method for spawning the train interiors and vault doors. By creating one prefab and scripting it four times inside the level (one per vault door type) we were able to randomize the location of the players’ main goal with little effort. The engine also allows us to rotate our prefabs, giving us the option to flip the train interiors.  This added a whole new layer to their configurations, since some of the interior layouts were asymmetrical.
    We ended up with roughly 600 train configurations, 2000 vault door combinations, and 256 sub objective configurations. With 1 of 2 exits being chosen randomly each playthrough, this really changed what types of decisions got made by the players. It also influenced how they would flow through the level and took advantage of their diverse set of movement options.
    On top of that we use non-linear objectives, which basically means you can do multiple objectives at the same time or in some cases, different orders. In Murky Station, players can simultaneously be looking for keys, searching through trains, marking guards from the overpass, and securing extra loot they find. This allows 4 players to comfortably split up to cover more ground and work off each other. A well coordinated team might have two players hacking into the trains to find the EMP bomb, while the others are looking for the vault keys. I find it very important to provide all players an opportunity to contribute towards the main goal.
    Side note: With all of this randomization, you might be wondering how QA can test it all. The short answer: they don’t. We need to build efficiently to ensure 90% of the level is solid, and then catch as many edge cases as possible. On the Payday team, the frontline of defense for QA is the designer making the level. It’s our job to test our own work thoroughly! The way the systems above were built, only 1 prefab needed to be maintained for each example. This provides us the freedom to go nutty with the customization in the level, knowing it has a low chance of affecting our prefabs. So, as long as we build smart we can cut down the amount of things QA needs to test and help speed up production.

    With the objectives off to a good start, let's take a look at how RNG might affect our guard patrols and cameras in the level.
     

     
    Guard Patrols and RNG
    Randomization can have a large effect on how smooth or frustrating a level turns out to be. One of the things we have to keep an eye on when designing stealth levels is frustrating the player through poor patrol placement, the number of guards, and how long they pause at each location. The goal is to create a fun puzzle-like challenge, not a terrible waiting game. Bad RNG might have you sitting in a corner for one minute waiting for the guard to leave, only to have another guard take his place when that minute is up. It's our job as level designers to help prevent such situations from happening by adjusting our timings, reworking the layout, or possibly the level’s mechanics. This is why it's so important to create a solid base for player movement options from the beginning.
    Since we don't want our guard patrol RNG to get out of hand, we need to be careful about how they flow through a space. Doing this requires its own attention and multiple iterations. Tilt too far in one direction and you'll end up with bare areas that have no guards; tilt too far in the other direction and you'll have too many guards stacked on each other with no wiggle room. The last thing you want is the possibility of a death chain reaction. This is caused when you kill 1 guard, only to have another guard 10 meters away spot that body... forcing you to kill that guard, who eventually gets spotted by the next, etc. In Payday 2, players have a limit of 4 guards they can kill before the alarm goes off (on all difficulties). In our levels, we have to actively manage the amount of crossover between paths and how often guards might meet.
    In the first test pass for Murky Station I ended up with a good amount of coverage for my level, but the downside was that some sections could randomly get 8 guards piled up.  After a bit of playtesting and redesign, I decided to break up my patrols into smaller loops and add more points. This increased the amount of coverage and kept the patrols more consistent. It also lowered the maximum guard stacking to around 4 and drastically reduced the amount of death chain reactions that could happen.

    First pass patrol locations

    Second pass patrol locations

    (the new paths provide the same amount of level coverage with less chance of guard over-stacking) 
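    As an illustration of why smaller loops help, here is a tiny simulation sketch (hypothetical, not the studio's actual tooling) that walks one guard around each patrol loop and reports the worst-case number of guards sharing a node:

```python
import random

def worst_guard_stacking(patrol_loops, steps=10000):
    # Each loop is a list of node ids; one guard walks each loop in order.
    positions = [random.randrange(len(loop)) for loop in patrol_loops]
    worst = 0
    for _ in range(steps):
        occupied = []
        for i, loop in enumerate(patrol_loops):
            positions[i] = (positions[i] + 1) % len(loop)
            occupied.append(loop[positions[i]])
        worst = max(worst, max(occupied.count(n) for n in set(occupied)))
    return worst
```

    Several short loops that share only a couple of nodes will report a far lower worst case than one long loop threaded through the whole map.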
    A fresh take on an old mechanic
    In most of our stealth levels we use random static security cameras to challenge the players’ skill at avoidance or sabotage. The players have multiple mechanics in order to deal with them in a variety of ways, but we hit a brick wall when discussing options for Murky Station. Due to the hallway nature of the layout and the surrounding structures, we were left with very few options when it came to camera placement. With so few options, the cameras would no longer modify the level in a positive way. We also found them at odds with the design of the level, since you were supposed to be searching for a specific train car. If we had cameras pointing at it, you would be able to identify it too quickly and negate the challenge of finding it.

     
    So how did we fix these issues? Getting rid of the cameras was not really an option, so we began brainstorming and looking for assets that might be of use. It was important that the core camera functionality remain intact and continue to meet our core pillar of randomization. We discovered an old drone asset from one of the previous levels and began prototyping a few ideas. The design we ended up going with provided us the coverage we needed, while also creating a new challenge for the players to overcome.


     
    Each train can spawn up to two drones, which will then fly around the perimeter of the train and scan for players and bodies. Randomly throughout the level, three to four drones will be activated to begin their scan. The loop takes about 30 seconds before they return to their trains and deactivate. The cycle continues like this every few minutes until the level is finished.
    On harder difficulties, more drones will spawn and they will become indestructible.
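    A sketch of the activation cycle described above could look like the following; begin_scan and return_to_train are hypothetical hooks standing in for the real entity script:

```python
import random
import time

def drone_cycle(drones, scan_time=30, rest=150):
    # Every few minutes, wake three to four drones; they scan the
    # perimeter of their train for ~30 seconds, then return and deactivate.
    while True:
        active = random.sample(drones, random.randint(3, 4))
        for drone in active:
            drone.begin_scan()       # hypothetical hook
        time.sleep(scan_time)
        for drone in active:
            drone.return_to_train()  # hypothetical hook
        time.sleep(rest)
```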
    What's great about the drones from a design perspective is that we can dynamically modify how the level gets played and prevent players from getting comfortable using the same routes each playthrough. Some players will avoid lanes with drones, more skilled players will dodge them using their movement options, and some players might even get trapped and need to think of a new route. Let's take a look at the patrols and drones in action.
    (This clip is sped up about 8x and set to the hardest difficulty to help illustrate pathing and drone movement)
    Closing thoughts
    Murky Station was such an enjoyable experience to work on that I still play it to this day. When you break down the objectives and how they influence one another in a co-op space, you can begin to see the bigger picture and how a well-planned level with controlled RNG elements can stay fresh and replayable. Experimenting with different types of RNG is something I find very interesting, especially when you combine it with level design. I hope my article gave you some more insight into how we build with RNG and why we consider it one of our core design pillars. If you found this article helpful, let us know in the comment section!
    Thanks for reading, here is my Info :
    Twitter: @generalvivi 
    Email: generalvivi [at] gmail . com
    Website: www.generalvivi.com
    Before you go!
    If you enjoyed this article and would like to hear how we used RNG in other ways, check out Patrick Murphy's article on the Payday 2 level "Hoxton Breakout".
    I also have a speedrun (1 min) of the level for you to check out and a playthrough on the hardest difficulty (10 mins) by one of the pros from the community.  
    Fastest time 2018 (warning to lower volume)
     
    10 min gameplay video showing off a lot of variety in the heist. 
     
  8. Like
    leplubodeslapin reacted to Radu for an article, 2017: Mapcore's Year in Review   
    (New logo by Yanzl)
    I'm sure that by now most of us have our sleeves rolled up and are ready to tackle yet another year, but before we move forward let's take a moment to look back at what 2017 meant for our community. It was a time of immense growth for professionals and amateurs alike. A time when everyone seemed to have surpassed their former selves. And without slowing down, some have even managed to land their first job in the industry. I don't know what this new year holds, what challenges to overcome will arise, but I know for certain that I'm excited to see everyone become even greater!
     
    2017: Mapcore's Year in Review
     

    Overwatch - Oasis
    by Phillip K, Bram Eulaers, Helder Pinto and others
     

    Dishonored 2: Death of the Outsider - Curator level
    by electrosheep, kikette and others
     

    Payday 2 - Brooklyn Bank level
    by General Vivi
     

    Sniper Elite 4 - Regilino Viaduct
    by Beck Shaw and others
     

    Counter-Strike: Global Offensive - Offtime
    by Squad
     

    Team Fortress 2 - Shoreleave
    Art pass, props and sound by Freyja
     

    Wolfenstein II: The New Colossus - Farmhouse
    Modeled, textured and composed by BJA
     

    Half-Life 2: Downfall
    by marnamai
     

    Counter-Strike: Global Offensive - Studio
    by ZelZStorm, TanookiSuit3 and Hollandje
     

    Portal 2 - Refraction
    by Stract
     

    Counter Strike: Global Offensive - Breach
    by Yanzl and Puddy
     

    Counter-Strike: Global Offensive - Berth
    by grapen
     

    Counter-Strike: Global Offensive - Kaizen
    by Andre Valera and Jakuza
     

    Counter-Strike: Global Offensive - Asylum
    by Libertines
     

    Half-Life 2: Episode 2 - FusionVille: The Shadow over Ravensmouth
    by Klems
     

    Unreal Engine 4 scene
    by Dario Pinto
     

    Counter-Strike: Global Offensive - Grind
    by The Horse Strangler, `RZL and MaanMan
     

    Counter-Strike: Global Offensive - Aurelia remake
    by Serialmapper
     

    Counter-Strike: Global Offensive - Tangerine
    by Harry Poster
     

    Counter-Strike: Global Offensive - Abbey
    by Lizard and TheWhaleMan
     

    Counter-Strike: Global Offensive - Apollo
    by Vaya, CrTech, Vorontsov, JSadones
     

    Counter-Strike: Global Offensive - Sirius
    by El Exodus
     

    Unreal Engine 4 scene
    by Corvus
     

    Counter-Strike: Global Offensive - Subzero
    by FMPONE
     

    Counter-Strike: Global Offensive - Biome
    by jd40
  9. Like
    leplubodeslapin got a reaction from JimWood for an article, Source Lighting Technical Analysis: Part Two   
    This is the second part of a technical analysis about Source Lighting, if you haven’t read the first part yet, you can find it here. 
    Last time, we studied the lightmaps, how they are baked and how VRAD handles the light travel through space. We ended the part 1 with an explanation of what the Constant-Linear-Quadratic Falloff system is, with a website that allows you to play with these variables and see how lighting falloff reacts to them. We will now continue with basic examples of things you can do with these variables. 
     
    Examples of application
    Constant falloff
    The simplest type of falloff is the 100% constant one. Whatever the distance is, the lighting has theoretically the same intensity. This is the kind of (non-)falloff used for the sun lighting, it is so far away from the map area, that light rays are supposed to be parallel and light keep its intensity. Constant falloff is also useful for fake lights, lights with a very low brightness but that are here to brighten up the area.
     
     

     
    Linear falloff

    Another type of falloff is the 100% linear one. With this configuration, light seems to be a bit artificial: it loses its intensity but goes way further than the 100% quadratic falloff. It can be very useful on spots, the lighting is smooth and powerful. Here is an example:
     

     
    Quadratic falloff

    This is the default configuration for any light entity in Hammer, following as we said before the classic Inverse-Square law (100% Quadratic Falloff). It is considered to be the most natural and realistic falloff configuration. The biggest issue is that it boosts the brightness so much on short distances, that you can easily obtain a big white spot. Here is an example, with a light distant of 16 units from a grey wall:

     
    This can also happen with linear falloff but it is worse with quadratic. Simple solutions exist for that, the most common is not to use a light entity but a light_spot entity that is oriented to the opposite direction from the wall/ceiling the light is fixed to. You can make the opening angle of your light_spot wider, with the inner and outer angle parameters (by default the outer one is 45°, increase that to a value of 85° for example). If needed, you can also add a light with low brightness to light the ceiling/wall a bit.

     
    50% & 0% FallOff
    A second light falloff system exists, overriding the constant-linear-quadratic system if used. The concept is much simpler, you have to configure only 2 distances:
    50 percent falloff distance: Distance at which light should fall off to 50% from its original intensity 0 percent fall off distance: Distance at which light should end. Well ... almost, it actually fall off to 1/256% from its original intensity, which is negligible. The good thing with this falloff system is that you can see the 2 spheres according to the 2 distances you have configured in Hammer. Just make sure to have this option activated: 

     
    Models lighting
    An appropriate section for models lighting is needed, because it differs from brush lighting (but the falloff stays the same). In any current game engine, lightmaps can be used on models, a specific UV unwrap is even made specifically for lightmaps. But on Source Engine 1 (except for Team Fortress 2) you cannot use lightmaps on models. 
    The standard lighting method for models is named Per-Vertex Lighting. This time, light won’t be applied to faces but to vertices: all of the model’s vertices. For each one of them, VRAD will compute a color and brightness to apply. Finally, the Source Engine will blend a gradient between the vertices across each triangle. For example:

    If we take a simple example of a sphere mesh with 2 different light entities next to it, we can see it working.
                
    With this lighting method, models will therefore be integrated into the environment with appropriate lighting. The good thing is that, if a part of the model is in a dark area, and another part is in a bright area, the situation will be handled properly. The only requirement for this is that the mesh must have a sufficient level of detail in it; if there is a big planar area without additional vertices on it, the lighting details could be insufficient. 
    Here is an example of a simple square mesh with few triangles on the left and a lot on the right. With the complex mesh, the lighting is better, but more expensive. 

    If you need a complex mesh for your lighting but don’t want your model to be too expensive, you have to find a balance. 
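    Conceptually, per-vertex lighting boils down to evaluating each light at every vertex and letting the renderer interpolate the results across each triangle. Here is a rough Python sketch of that idea (a diffuse, quadratic-falloff approximation, not VRAD's exact code):

```python
import math

def vertex_light(vertex, normal, light_pos, brightness):
    # Diffuse contribution of one light at one vertex: Lambert term
    # (angle of incidence) times an inverse-square falloff.
    to_light = [l - v for l, v in zip(light_pos, vertex)]
    dist = math.sqrt(sum(d * d for d in to_light))
    direction = [d / dist for d in to_light]
    lambert = max(0.0, sum(n * d for n, d in zip(normal, direction)))
    return brightness * lambert / (dist * dist)

def shade_point(vertex_values, barycentric):
    # The engine's job at render time: blend the three precomputed
    # vertex values across the triangle.
    return sum(v * w for v, w in zip(vertex_values, barycentric))
```

    The more vertices the mesh has, the more sample points VRAD gets, which is exactly why the denser mesh above produces the finer gradient.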
    Two VRAD commands are needed to make the Per-Vertex Lighting work:
    -StaticPropLighting
    -StaticPropPolys
    You have to add them here. You can find more information here.
    Another system exists that is much cheaper and simpler. Instead of focusing on the lighting of all the vertices, the engine will only deal with the model’s origin. The result obtained in-game will be displayed on the whole model, using only what has been computed at the model’s origin location. This can be an issue if the model is big or supposed to be present in an area with lots of contrast in lighting. The best example for that is at the beginning of Half-Life 2, with trains entering and exiting tunnels. We can see the issue: the model is illuminated at the beginning, but when it enters the tunnel it suddenly turns dark, at the exact moment the train’s origin passes into the shadow. 
    This cheap lighting method will replace the per-vertex lighting for 3 types of models:
    For prop_dynamic or any kind of dynamic models used in the game (NPCs, weapon models in hand, any animated models...)
    For prop_physics
    For ANY MODEL USING A NORMAL MAP (vertex lighting causes issues with normal maps apparently), EVEN IF USED AS A PROP_STATIC
    The big problem with these models is their integration in the map: they won’t show any shadow and their lighting will be very flat and boring (because it’s the same for the whole model). Fortunately, there are 2 good things about this cheap lighting method. 
    First, the direction light comes from is taken into account: if blue light comes from one direction, all the faces oriented toward that direction will be tinted blue. And if you have different lighting colorations/intensities coming from different sides of your model, they should appear in game. 
    Here is an example of a train model using a normal map, with 2 lights, one on each side. If you look closely, you’ll see some blue lighting on the left, on faces that are supposed to be in the shadow of the blue light but are oriented toward it.
     

     
    The second good thing is that there is still some kind of dynamic per-vertex lighting, but much simpler: it only works with light and light_spot entities (NOT with light_environment), and it just adds some light to the prop; it cannot cast any shadow (it only dynamically takes into account the distance between the light and the vertex). If we use the high-poly plane mesh we had before as a prop_dynamic, parented to a func_rotating that... rotates, light dynamically illuminates the vertices of the prop. There is a limit of 3 dynamic lights per prop; it can’t handle more at the same time.
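    Since only 3 dynamic lights can affect a prop at once, something has to decide which lights count. A plausible selection heuristic (an assumption for illustration, not the engine's confirmed behavior) would rank lights by their influence at the prop:

```python
def pick_dynamic_lights(prop_origin, lights, limit=3):
    # Rank by rough influence (brightness over squared distance) and keep
    # the strongest few; everything past the limit is simply ignored.
    def influence(light):
        d2 = sum((a - b) ** 2 for a, b in zip(light["pos"], prop_origin))
        return light["brightness"] / max(d2, 1.0)
    return sorted(lights, key=influence, reverse=True)[:limit]
```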

    And if you add a normal map to your model’s texture, this cheap dynamic lighting works on it:

     
    Projected texture and Cascaded Shadows
    A few words on dynamic lighting to finish the study. Projected textures are a technology that appeared with Half-Life 2: Episode Two in 2007; it consists of a point entity projecting a texture in a chosen direction, with a chosen opening angle (FOV). The texture is projected with emissive properties (it can only increase the brightness, not lower it) and it can generate shadows or not. The great thing with this technology is that it’s fully dynamic: the env_projectedtexture can move and/or aim at moving targets. This technology is used for example on flashlights in Source games. But as usual, there is also a drawback: most of the time you can use only 1 projected texture at a time; modders can change this value quite easily, but on Valve games it is always locked at 1. 

    The cascaded shadows system is only used in CS:GO. The concept is quite similar to a projected texture, but it doesn’t increase the brightness; it only adds finer shadows. It is used for environment lighting, using much smaller luxels than the lightmaps, and it is fully dynamic. It starts from the tools/toolsskybox textures of the map and casts shadows if it meets any obstacle. Shadows from the lightmap are most of the time low resolution, and the transition between a bright and a dark area is blurry and wide. Therefore, the cascaded shadows will be able to draw a clear shadow around the one from the lightmaps.

    When an object is too small to get a shadow in the lightmap, it will still be visible thanks to the cascaded shadows. There are 3 levels of detail for cascaded shadows on Counter-Strike; you can configure the max distance at which the cascaded shadows will work with the Max Shadow Distance parameter of the env_cascade_light entity (by default it’s 400 units). The levels of detail will be distributed within this range, for example: 

    Since cascaded shadows and projected textures share some technology, you can’t use them both at the same time.
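    To make the idea of distributing detail levels within the Max Shadow Distance concrete, here is a small sketch; the doubling proportions are an assumption for illustration, not CS:GO's actual values:

```python
def cascade_splits(max_shadow_distance=400.0, num_cascades=3):
    # Hypothetical scheme: each cascade covers twice the range of the
    # previous one (1/7, 2/7 and 4/7 of the budget), so shadow resolution
    # is spent where the player is most likely to notice it: up close.
    weights = [2 ** i for i in range(num_cascades)]
    total = sum(weights)
    splits, near = [], 0.0
    for w in weights:
        far = near + max_shadow_distance * w / total
        splits.append((near, far))
        near = far
    return splits

print(cascade_splits())  # [(0, ~57), (~57, ~171), (~171, 400)]
```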
     
    Conclusion
    I really hope you have found this article interesting and learned at least a few things from it. I believe most of this information is not the easiest to find, and it’s always good to know how your tools work, to understand their behavior. Source Engine 1 is old and its technologies might not be used anymore in the future; more powerful and credible technologies are released frequently, but it’s always good to know your classics, right? 
    I would like to thank Thrik and ’RZL for supporting me to write this article, and long live the Core!
    // Written by Sylvain "Leplubodeslapin" Menguy
    Additional commands for fun
    Mat_luxels 1                              // Allows you to see the lightmap grids
    Mat_fullbright 1                         // Disables all the lighting (= fullbright). On CS:GO, cascaded shadows stay and you should delete them as well (cf. next command)
    Ent_fire env_cascade_light kill  // KILL WITH FIRE the cascaded shadows entity
    Mat_drawgray 1                        // Replaces all the textures with a monochrome grey texture, useful to work on your lighting
    Mat_fullbright 2                         // Alternative to Mat_drawgray 1
    Bonus:
    Mat_showlowresimage 1           // Minecraft mode
  10. Like
    leplubodeslapin got a reaction from JimWood for an article, Source Lighting Technical Analysis: Part One   
    After the announcement of the Reddit + Mapcore mapping contest, the website has welcomed many newcomers. It is proof that, even if it is a twelve-year-old game engine, the Source engine attracts map makers, and there are lots of reasons for that. It is common knowledge that technology has moved forward since 2003, and many new game engines have found various techniques and methods to improve their renderings, making the Source Engine look older and older. Nevertheless, it still has its very specific visual aspect that makes it appealing. The lighting system in Source is most definitely one of the key aspects of that, and at the end of this article you will know why.
     
    About the reality...
    Light in the real world is still a subject with a lot of pending questions; we do not know exactly what it is, but we have a good idea of how it behaves. The most common physical model of light is the photon, symbolized as a single-point particle moving in space. The more photons there are, the more powerful light is. But light is at the same time a wave: depending on the wavelength, light can have all kinds of color properties (monochrome or combined colors). Light travels through space without especially needing matter to travel (space is the best example; even without matter the sun can still light the earth). And when it encounters matter, different kinds of things can happen:
    Light can bounce and continue its travel in another direction
    Light can be absorbed by the matter (and the energy can be transformed into heat)
    Light can go through the matter, for example with air or water; some properties might change, but it goes through it
    And all these things can be combined or happen individually. If you can see any object outside, it is only because a massive amount of photons traveled through space, through the earth’s atmosphere, bounced on all the surfaces of the object you are looking at, and finally came into your eyes.
    How can such a complex physical behavior from nature be simulated and integrated into virtual 3D renderings?
    One of the oldest methods is still used today because of its accuracy: the ray-tracing method. Just to be clear, it is NOT used in game engines because it is incredibly expensive, but I believe it is important to know how and why it has been made the way it is, since it probably influenced the way lighting is handled in Source and most videogame engines. Instead of simulating an enormous amount of photons traveling from the lights to the eye/camera, it does the exact opposite. If you want a picture with a 1000x1000 resolution, you will only need to simulate the travel of 1,000,000 photons (or “rays”), 1 for each pixel. Each ray is calculated individually until it reaches the light origins, and at the end the result is 1 pixel color integrated into the full picture. 
    By using the laws of physics we discovered centuries ago, we can obtain a physically-accurate rendering that looks incredibly realistic. This method is used almost everywhere, from architectural renderings to movies. As an example, you can watch The Third & The Seventh by Alex Roman, one of the most famous CGI videos of all time. And because it is an efficient way to render 3D virtual elements with great lighting, it will influence other methods, such as the lightmap baking method.
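    The core of the idea fits in a few lines. In this minimal sketch, camera_ray and trace_ray are hypothetical callbacks (a real tracer is out of scope here), but the structure shows why a 1000x1000 image costs exactly 1,000,000 primary rays:

```python
def render(width, height, camera_ray, trace_ray):
    # One backwards ray per pixel: from the eye, through the pixel,
    # bouncing through the scene until it reaches a light source.
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            ray = camera_ray(x, y)      # origin and direction for this pixel
            row.append(trace_ray(ray))  # returns the final pixel color
        image.append(row)
    return image
```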
     
    Lightmap baking
    OKAY LET’S FINALLY TALK ABOUT THE SOURCE ENGINE, ALRIGHT!
    A “lightmap” is a grid that is added to every single brush face you have in your map. The squares defined by the grid are called luxels (they are kind of “lighting pixels”). Each luxel gets its own 2 properties: a color and a brightness. You can see the lightmap grids in Hammer by switching your 3D preview to 3D lightmap grid mode.

    You can also see them in-game with the console command mat_luxels 1 (without and with).
    During the compilation process, a program named VRAD.exe is used. Its role is to find the color and brightness to apply to every single luxel in your map. Light starts from the light entities and from the sky (from the tools/toolsskybox texture actually, using the parameter values that have been filled in the light_environment entity), travels through space, and when it meets a brush face:
    It is partially absorbed in the lightmap grid
    A less bright ray bounces from the face
    Here is an animated picture to show how a lightmap grid can be filled with a single light entity:

    When you compile your map, at first the lightmaps are all full black, but progressively VRAD will compute the lightmaps with all the light entities (one by one) and combine them all at the end. Finally, the lightmaps obtained are applied to the corresponding brush faces, as an additive layer to the texture used on that face. Let us take a look at a wall texture for example.

    On the left, you have the texture as you can see it in Hammer. When you compile your map, it generates the lightmaps, and at the end you obtain the result on the right in-game. Unfortunately, luxels are much rougher, with a lower resolution, more like this.

    On the left you have a lightmap grid with the default luxel size of 16 units generated by VRAD; a blur filter is applied and you obtain something close to the result on the right in the game.
    In case you did not know, you can change the lightmap grid scale with the “Lightmap Scale” value in the texture tool. It is better to use values that are powers of 2, such as 16, 8, 4 or even 2. Do not go below 2, as it might cause issues (with decals for example). Only use values lower than the default 16 if you think it's really useful, because precise lightmap grids will drastically increase your map's file size and compilation time. Of course, you can also use greater values in order to optimize your map, such as 32, 64 or even 128 on very flat areas or surfaces that are far away from the playable areas. You can get more info about lightmaps on Valve’s Wiki page.
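    To see why lowering the scale gets expensive fast, here is a quick back-of-the-envelope sketch: a luxel covers scale x scale units, so halving the scale quadruples the luxel count of a face.

```python
def luxel_count(face_width, face_height, lightmap_scale=16):
    # Approximate number of luxels on a rectangular face:
    # one luxel per lightmap_scale x lightmap_scale units.
    return (face_width // lightmap_scale) * (face_height // lightmap_scale)

for scale in (32, 16, 8, 4):
    print(scale, luxel_count(256, 128, scale))
# 32 -> 32 luxels, 16 -> 128, 8 -> 512, 4 -> 2048: each halving costs 4x.
```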

    But as we said before, light also bounces from the surface until it meets another brush, using radiosity algorithms. Because of that, even if a room does not have any light entity in it, rays can bounce on the floor and light the walls/ceiling; therefore it is not fully black. 
    Here’s an example:

    The maximum number of bounces can be set with the VRAD command -bounce X (with X being the maximum number of bounces allowed). The default value of 100 should be more than enough.
    Another thing taken into account by VRAD is the normal direction of each luxel: light that hits the luxel head-on and light that merely brushes against it will not behave in the same way. This is what we call the angle of incidence of light.

    Let us take the example of a light_spot lighting a cylinder: the light will brighten the surface gradually, from fully bright at the bottom to barely visible at the top.

    In-hammer view on the left, in-game view on the right
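    In other words, the intensity received by a luxel scales with the cosine of the angle of incidence, the classic Lambert cosine term (whether VRAD uses exactly this formula is an assumption on my part; it is the standard diffuse model):

```latex
I_{\text{luxel}} = I_{\text{light}} \cdot \max\!\left(0,\ \hat{N} \cdot \hat{L}\right)
                 = I_{\text{light}} \cdot \max\!\left(0,\ \cos\theta\right)
```

    where N̂ is the luxel's surface normal, L̂ points toward the light, and θ is the angle between them; grazing light (θ close to 90°) contributes almost nothing, which produces exactly the gradient seen on the cylinder above.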
     
    Light Falloff laws
    One of the things that made Source Engine lighting much more realistic than any other in 2004 is the light falloff system. Alright, we saw that light can travel through space until it meets something, but how does it travel through space? At the same brightness, whatever the distance between the light origin and destination? Maybe sometimes yes… but most of the time no.

     
    Imagine a simple situation: a room with 1 single point light inside. The light is turned on; it produces photons that go in all directions around it. As you might imagine, photons all travel in their own direction and have absolutely no reason to deviate from their trajectory.
     
     
     
    At one instant, picture billions of photons going in all possible directions around the light; a moment later, they are all a bit further along their own trajectories, and all the photons are still there, in this “wave”. But, as each photon follows its own trajectory, they will all spread apart, making the photon density lower and lower.
    As we said before, the more photons there are, the more powerful light is. And the higher the density, the more intense the light is. The intensity of light can be expressed like this:
    Intensity = (number of photons) / (area over which they are spread)
     
    You have to keep in mind that all of this happens in 3D; therefore the “waves” of photons aren’t circles but spheres. And the area of a sphere is its surface, expressed like this:
    A = 4π·R²
    (R is the radius of the sphere)
     
    If we integrate that surface area into the previous equation:
    I = ♥ / R²
    With ♥ being a constant number. We can see the intensity is therefore proportional to the inverse of the square of the distance between the photons and their light origin. 
    So, the further light travels, the lower is its intensity. And the falloff is proportional to the inverse of the square of the distance.
    Consequently, the corners of our room will get darker, because they are farther away from the light (plus they don’t directly face the light, the angle of incidence is lower than the walls/floor/ceiling).

    This is what we call the Inverse-Square law; it’s a very well-known behavior of light in the fields of photography and cinema. People have to deal with it to make sure they get the best exposure they can.
    This law is true when light spreads in all possible directions, but you can also focus light in one direction and reduce the spread, with lenses for example. This is why, when Valve decided to integrate a lighting falloff law in their engine, they decided to use a method not only following the inverse-square law but also giving mapmakers the opportunity to alter the law for each light entity.
     
    Constant, Linear, Quadratic... Wait, what?
    In math, there is a very common type of function, named polynomial functions. The concept is simple: it’s a sum of several terms, like this:
    f(x) = a0 + a1·x + a2·x² + ... + an·xⁿ
    Each time, there is a constant factor (the “a” thing, a0 being the first one, a1 the second one, a2 the third one...), multiplied by the variable x at a certain degree:
    x^0 = 1 : degree 0
    x^1 = x : degree 1
    x^2 : degree 2
    x^3 : degree 3
    ...
    And:
    a0 is the constant named “constant coefficient” (associated with degree 0)
    a1 is the constant named “linear coefficient” (associated with degree 1)
    a2 is the constant named “quadratic coefficient” (associated with degree 2)
    Usually, the function has an end, and we call it by the highest degree of x it uses. For example, a “polynomial of the second degree” is written:
    f(x) = a0 + a1·x + a2·x²
    Then, if we take the expression from the inverse-square law, which was:
    I(D) = ♥ / (a2·D²)
    With a2 = 1 and D being the variable of distance from the light origin.
    In Source, the constant ♥ is actually the brightness (the value you configure here).
    It is simply an inverse polynomial of the second degree, with a0 and a1 equal to zero. And we could write it like this:
    I(D) = ♥ / (a0 + a1·D + a2·D²)
    Or...
    I(D) = ♥ / (constant + linear·D + quadratic·D²)
    And here you have it! This is approximately the equation used by VRAD to determine the intensity of light for each luxel during the compilation. And you can alter it by changing the values of the 3 variables constant, linear and quadratic for any of your light / light_spot entities in your level.
    Actually you set proportions of each variable against the other two, and only a percentage for each variable is saved. For example:

    Another example:

    By default, constant and linear are set to 0 and quadratic to 1, which means a 100% quadratic lighting attenuation. Therefore, by default, lights in the Source Engine follow the classic Inverse-Square law.
    If you look at the page dedicated to the constant-linear-quadratic falloff system on Valve’s Wiki, it’s explained that the intensity of light is boosted by 100 for the linear part of the equation and by 10,000 for the quadratic part. This is because inverse formulas always drop drastically at the beginning; without the boost, a light with a brightness of 200 would only be effective within a distance of 5 units and would therefore be completely pointless.

    You would have to boost your brightness a lot in Hammer to make the light visible; that's what Valve decided to do automatically.
    The following equation is a personal guess of what could be the one used by VRAD:
    I(D) = Brightness · (constant + 100·linear + 10000·quadratic) / (constant + linear·D + quadratic·D²)
    With constant, linear and quadratic being percentage values. The blue part (the numerator) determines the brightness to apply, boosting the value set in Hammer if the light at least partially uses linear or quadratic falloff. The orange part (the denominator) is the falloff part of the equation, attenuating the brightness depending on the distance between the studied point and the light origin. 
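    Translating that guessed equation into code makes it easy to experiment with; this is a sketch of the reconstruction above, not Valve's verified implementation:

```python
def light_intensity(distance, brightness, constant, linear, quadratic):
    # Hammer only keeps the proportions of the three coefficients,
    # so normalize them into percentages first.
    total = constant + linear + quadratic
    c, l, q = constant / total, linear / total, quadratic / total
    # Boosts described on the Valve wiki: x100 for the linear part,
    # x10000 for the quadratic part.
    boosted = brightness * (c + 100.0 * l + 10000.0 * q)
    # Inverse-polynomial falloff over distance.
    return boosted / (c + l * distance + q * distance ** 2)

# Default light (0/0/1, pure quadratic): the boost exactly cancels the
# falloff at 100 units, so the light reads its nominal brightness there.
print(light_intensity(100.0, 200.0, 0, 0, 1))  # 200.0
```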
    The best way to see how this equation works is to visualize it in a 2D graph: 
    https://www.desmos.com/calculator/1oboly7cl0
    This website provides a great way to visualize the 2D graphs associated with functions. On the left, you can find all the elements needed, starting with the inputs (in a folder named “INPUTS”), which are:
    a0 is the Constant coefficient that you enter in Hammer
    a1 is the Linear coefficient
    a2 is the Quadratic coefficient
    B is the Brightness coefficient
    In another folder are the 3 coefficients constant, linear and quadratic, automatically transformed into percentage form. And finally, the function I(D) is the intensity function depending on the distance D. The drawing of the function is visible in the rest of the webpage. 
    Try to interact with it!
    This concludes the first part; the second part will come in about two weeks. We will see some examples of application of this Constant-Linear-Quadratic Falloff system, and a simpler alternative. We will also see how lighting works on models, and the dynamic lighting systems integrated in Source games. Thank you for reading!
     
    Part Two : link
  11. Like
    leplubodeslapin got a reaction from JSadones for an article, Source Lighting Technical Analysis: Part One   
    After the announcement of the Reddit + Mapcore mapping contest, the website has welcomed many newcomers. A proof that, even if it is a twelve year old game engine, Source engine attracts map makers, and there are lots of reasons for that. It is common knowledge that technology has moved forward since 2003, and many new game engines have found various techniques and methods to improve their renderings, making the Source Engine older and older. Nevertheless, it still has its very specific visual aspect that makes it appealing. The lighting system in Source is most definitely one of the key aspects to that, and at the end of this article you will know why.
     
    About the reality...
    Light in the real world is still a subject with a lot of pending questions, we do not know exactly what it is, but we have a good idea of how it behaves. The most common physic model of light element is the photon, symbolized as a single-point particle moving in space. The more photons there are, the more powerful light is. But light is in the same time a wave, depending on the wavelengths light can have all kind of color properties (monochrome or combined colors). Light travels through space without especially needing matter to travel (the space is the best example; even without matter the sun can still light the earth). And when it encounters matter, different kind of things can happen:
    Light can bounce and continue its travel to another direction Light can be absorbed by the matter (and the energy can be transformed to heat) Light can go through the matter, for example with air or water, some properties might change but it goes through it And all these things can be combined or happen individually. If you can see any object outside, it is only because a massive amount of photons traveled into space, through the earth’s atmosphere, bounced on all the surfaces of the object you are looking at, and finally came into your eyes.
    How can such a complex physical behavior from nature be simulated and integrated into virtual 3D renderings?
    One of the oldest method is still used today because of its accuracy: the ray-tracing method. Just to be clear, it is NOT used in game engines because it is incredibly expensive, but I believe it is important to know how and why it has been made the way it is, since it probably influenced the way lighting is handled in Source and most videogame engines. Instead of simulating enormous amount of photons traveling from the lights to the eye/camera, it does the exact opposite. If you want a picture with a 1000x1000 resolution, you will only need to simulate the travel of 1 000 000 photons (or “rays”), 1 for each pixel. Each ray is calculated individually until it reaches the light origins, and at the end the result is 1 pixel color integrated in the full picture. 
    By using the laws of physics we discovered centuries ago, we can obtain a physically-accurate rendering that looks incredibly realistic. This method is used almost everywhere, from architectural renderings to movies. As an example, you can watch The Third & The Seventh by Alex Roman, one of the most famous CGI videos of all time. And because it is an efficient way to render 3D virtual elements with great lighting, it will influence other methods, such as the lightmap baking method.
     
    Lightmap baking
    OKAY LET’S FINALLY TALK ABOUT THE SOURCE ENGINE, ALRIGHT!
    A “lightmap” is a grid that is added on every single brush face you have on your map. The squares defined by the grid are called Luxels (they are kind of “lighting pixels”). Each luxel get its 2 own properties: a color and a brightness. You can see the lightmap grids in hammer by switching your 3D preview to 3D lightmap grid mode.

    You can also see them in-game with the console command mat_luxels 1 (without and with).
    During the compilation process, a program named VRAD.exe is used. Its role is to find the color and brightness to apply for every single luxel in your map. Light starts from the light entities and from the sky (from the tools/toolsskybox texture actually, using the parameter values that has been filled in the light_environment entity), travels through space and when it meets a brush face:
    It is partially absorbed in the lightmap grid A less bright ray bounces from the face Here is an animated picture to show how a lightmap grid can be filled with a single light entity:

    When you compile your map, at first the lightmaps are all full black, but progressively VRAD will compute the lightmaps with all the light entities (one by one) and combine them all at the end. Finally, the lightmaps obtained are applied to the corresponding brush faces, as an additive layer to the texture used on that face. Let us take a look at a wall texture for example.

On the left, you have the texture as you can see it in Hammer. When you compile your map, it generates the lightmaps and at the end you obtain the result on the right in-game. Unfortunately, luxels are much rougher, with a lower resolution, more like this.

On the left you have a lightmap grid with the default luxel size of 16 units generated by VRAD; a blur filter is applied and you obtain something close to the result on the right in the game.
In case you did not know, you can change the lightmap grid scale with the “Lightmap Scale” value in the texture tool. It is better to use values that are powers of 2, such as 16, 8, 4 or even 2. Do not go below 2, it might cause issues (with decals for example). Only use values lower than the default 16 if you think it’s really useful, because precise lightmap grids will drastically increase your map file size and compilation time. Of course, you can also use greater values in order to optimize your map, such as 32, 64 or even 128 on very flat areas or surfaces that are far away from the playable areas. You can get more info about lightmaps on Valve’s Wiki page.

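As a quick back-of-the-envelope illustration of why small scales get expensive (the face size below is an arbitrary example), halving the lightmap scale quadruples the number of luxels stored for a face:

```python
# Luxel count for an example 512x256 unit face at different lightmap scales.
face_w, face_h = 512, 256
for scale in (32, 16, 8, 4, 2):
    luxels = (face_w // scale) * (face_h // scale)
    print(f"lightmap scale {scale:>2}: {luxels:>6} luxels")
# scale 16 ->    512 luxels (default); scale 4 -> 8192 luxels, 16x the data
```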
But as we said before, light also bounces off surfaces until it meets another brush, using radiosity algorithms. Because of that, even if a room does not have any light entity in it, rays can bounce off the floor and light the walls/ceiling, so the room is not fully black.
    Here’s an example:

The maximum amount of bounces can be set with the VRAD command -bounce X (with X being the maximum amount of bounces allowed). The default value of 100 should be more than enough.
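To picture how bounced light fills a room, here is a toy radiosity loop in Python. It is emphatically not VRAD’s code (the three “patches” and their form factors are made up), but the mechanism is the one described above: deposit direct light first, then redistribute a fraction of it, pass after pass:

```python
# Toy radiosity: 3 patches (floor, wall, ceiling); only the floor is lit
# directly, yet every patch ends up with some light after the bounces.
direct = [1.0, 0.0, 0.0]
reflectivity = 0.5            # fraction of received light bounced again
form = [[0.0, 0.5, 0.5],      # form[i][j]: share of patch i's bounced
        [0.5, 0.0, 0.5],      # light that reaches patch j (invented)
        [0.5, 0.5, 0.0]]

radiosity = direct[:]         # light accumulated per patch
emitted = direct[:]           # light still traveling during this pass
for bounce in range(100):     # cf. VRAD's -bounce default of 100
    received = [0.0] * 3
    for i in range(3):
        for j in range(3):
            received[j] += emitted[i] * reflectivity * form[i][j]
    radiosity = [r + x for r, x in zip(radiosity, received)]
    emitted = received

print(radiosity)   # the wall and ceiling are no longer pitch black
```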
Another thing taken into account by VRAD is the normal direction of each luxel: light that hits the luxel head-on does not behave the same way as light that merely brushes against it. This is what we call the angle of incidence of light.

Let us take the example of a light_spot lighting a cylinder: the light will brighten the surface gradually, from fully bright at the bottom to slightly visible at the top.

    In-hammer view on the left, in-game view on the right
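The standard way to express this behavior is a cosine term (Lambert’s cosine law). I would assume VRAD weights incoming light in a similar way, so treat this sketch as an illustration of the principle rather than of its exact code:

```python
import math

# Light arriving head-on (0 degrees from the surface normal) counts fully;
# light brushing along the surface counts almost nothing.
def incidence_factor(angle_deg):
    return max(0.0, math.cos(math.radians(angle_deg)))

for angle in (0, 30, 60, 85, 90):
    print(f"{angle:>2} deg -> {incidence_factor(angle):.2f}")
# 0 deg -> 1.00 (bottom of the cylinder), 85 deg -> 0.09 (near the top)
```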
     
    Light Falloff laws
One of the things that made Source Engine lighting much more realistic than any other engine’s in 2004 is the light falloff system. Alright, we saw that light can travel through space until it meets something, but how does it travel through space? At the same brightness, whatever the distance between the light origin and destination? Maybe sometimes yes… but most of the time no.

     
Imagine a simple situation: a room with 1 single point light inside. The light is turned on and produces photons that go in all directions around it. As you might imagine, each photon goes in its own direction and has absolutely no reason to deviate from its trajectory.
     
     
     
At one moment, picture billions of photons going in all possible directions around the light; the moment after, each one is a bit further along its own trajectory, and all the photons are still there, in this “wave”. But, as each photon follows its own trajectory, they all spread apart, making the photon density lower and lower.
As we said before, the more photons there are, the more powerful light is. And the higher the density, the more intense light is. The intensity of light can be expressed like this:

I = (number of photons) / (area over which they are spread)
     
You have to keep in mind that all of this happens in 3D, therefore the “waves” of photons aren’t circles but spheres, and the area they spread over is the surface of a sphere, expressed like this:

A = 4πR²

(R is the radius of the sphere)
     
If we integrate that surface area into the previous equation:

I = ♥ / (4πR²)

With ♥ being a constant number. We can see that the intensity is therefore proportional to the inverse of the square of the distance between the photons and their light origin.
So, the further light travels, the lower its intensity. And the falloff is proportional to the inverse of the square of the distance.
    Consequently, the corners of our room will get darker, because they are farther away from the light (plus they don’t directly face the light, the angle of incidence is lower than the walls/floor/ceiling).

This is what we call the Inverse-Square law; it’s a very well-known behavior of light in the fields of photography and cinema, where people have to deal with it to get the best exposure they can.
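In numbers (the brightness value is arbitrary):

```python
# Doubling the distance divides the intensity by four: the same photons
# are spread over a sphere with four times the surface area.
brightness = 200
for distance in (1, 2, 4, 8):
    print(distance, brightness / distance ** 2)   # 200.0, 50.0, 12.5, 3.125
```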
    This law is true when light spreads in all possible directions, but you can also focus light in one direction and reduce the spread, with lenses for example. This is why, when Valve decided to integrate a lighting falloff law in their engine, they decided to use a method not only following the inverse-square law but also giving to mapmakers the opportunity to alter the law for each light entity.
     
    Constant, Linear, Quadratic... Wait, what?
In math, there is a very common type of function, named the polynomial function. The concept is simple, it’s a sum of several terms, like this:

f(x) = a0 + a1·x + a2·x² + a3·x³ + ...

Every time, there is a constant factor (the “a” thing, a0 being the first one, a1 the second one, a2 the third one...), multiplied by the variable x at a certain degree:
- x^0 = 1 : degree 0
- x^1 = x : degree 1
- x^2 : degree 2
- x^3 : degree 3
- ...

And:
- a0 is the constant named “constant coefficient” (associated with degree 0)
- a1 is the constant named “linear coefficient” (associated with degree 1)
- a2 is the constant named “quadratic coefficient” (associated with degree 2)

Usually the function has an end, and we name it after the highest degree of x it uses. For example, a “polynomial of the second degree” is written:

f(x) = a0 + a1·x + a2·x²
Then, if we take the expression from the inverse-square law, which was:

I(D) = ♥ / (a2·D²)

With a2 = 1 and D being the distance from the light origin.
    In Source, the constant ♥ is actually the brightness (the value you configure here).
It is simply an inverse polynomial of the second degree, with a0 and a1 equal to zero. And we could write it like this:

I(D) = ♥ / (a0 + a1·D + a2·D²)

Or...

I(D) = ♥ / (constant + linear·D + quadratic·D²)
And here you have it! This is approximately the equation used by VRAD to determine the intensity of light for each luxel during the compilation. And you can alter it by changing the values of the 3 variables constant, linear and quadratic for any light / light_spot entity in your level.
Actually, you set proportions of each variable against the other two, and only a percentage for each variable is saved. For example:

constant = 1, linear = 1, quadratic = 2  →  25% constant, 25% linear, 50% quadratic

Another example:

constant = 0, linear = 5, quadratic = 5  →  0% constant, 50% linear, 50% quadratic
By default, constant and linear are set to 0 and quadratic to 1, which means a 100% quadratic lighting attenuation. Therefore, by default, lights in the Source Engine follow the classic Inverse-Square law.
If you look at the page dedicated to the constant-linear-quadratic falloff system on Valve’s Wiki, it’s explained that the intensity of light is boosted by 100 for the linear part of the equation and by 10 000 for the quadratic part. This is because inverse formulas always drop drastically at the beginning: without the boost, a light with a brightness of 200 would only be effective within a distance of about 5 units, which would make it completely pointless.

You would have to boost your brightness a lot in Hammer to make the light visible, so that is what Valve decided to do automatically.
The following equation is a personal guess of what could be the one used by VRAD:

I(D) = B · (constant% + 100·linear% + 10000·quadratic%) / (constant% + linear%·D + quadratic%·D²)

With constant%, linear% and quadratic% being percentage values and B the brightness. The numerator determines the brightness to apply, boosting the value set in Hammer if the light at least partially uses linear or quadratic falloff. The denominator is the falloff part of the equation, attenuating the brightness depending on how far the studied point is from the light origin.
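Here is that guessed equation as a small Python function, with the same caveat: it mirrors the personal guess above, not Valve’s verified code. c, l and q are the raw values entered in Hammer, and B is the brightness:

```python
def intensity(D, B, c, l, q):
    total = c + l + q
    cp, lp, qp = c / total, l / total, q / total    # percentage form
    boost = cp + 100 * lp + 10000 * qp              # brightness boost part
    return B * boost / (cp + lp * D + qp * D * D)   # falloff part

# Default light (100% quadratic): pure inverse-square behaviour.
print(intensity(100, B=200, c=0, l=0, q=1))   # 200.0
print(intensity(200, B=200, c=0, l=0, q=1))   # 50.0 (double distance -> /4)
```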
    The best way to see how this equation works is to visualize it in a 2D graph: 
    https://www.desmos.com/calculator/1oboly7cl0
This website provides a great way to see the 2D graph associated with a function. On the left, you can find all the elements needed, starting with the inputs (in a folder named “INPUTS”), which are:
- a0 is the Constant coefficient that you enter in Hammer
- a1 is the Linear coefficient
- a2 is the Quadratic coefficient
- B is the Brightness coefficient

In another folder are the 3 coefficients constant, linear and quadratic, automatically transformed into percentage form. And finally, the function I(D) is the Intensity function depending on the distance D. The graph of the function is visible in the rest of the webpage.
    Try to interact with it!
This concludes the first part; the second part will come in about two weeks. We will see some examples of application of this Constant-Linear-Quadratic Falloff system, and a simpler alternative. We will also see how lighting works on models and the dynamic lighting systems integrated in Source games. Thank you for reading!
     
    Part Two : link
  12. Like
    leplubodeslapin reacted to FrieChamp for an article, Finding your own path as a professional Level Designer   
    The following article contains quotes from interviews with Todd Papy, Design Director at Cloud Imperium Games, Geoffrey Smith, Lead Game Designer at Respawn Entertainment, Paul Haynes, Lead Level Designer at Deep Silver Dambuster Studios and Sten Huebler, Senior Level Designer at The Coalition. A big heartfelt 'thank you' goes out to these guys who took the time out of their busy schedules to answer my questions!
    On the MapCore.org forums many amateur level designers ask for feedback on their portfolios or for advice on how to break into the games industry. But once you have signed your first contract and you have your foot in the door you will realize that this step marks merely the beginning of your journey. It is a winding path with many diverging branches and without much information available on the road ahead. This is the reason why I decided to interview professional designers in Senior, Lead or Director positions to share their personal experiences and advice with others trying to navigate this field. It is worth mentioning that the questions were not selected and phrased with the goal in mind to compile a ‘how to get promoted fast’ guide. Instead I wanted to give level designers insights into the careers of others - who have stood at the same crossroads before - in hopes that they get the information to pick the path that is right for them.
    Hands-On VS Management
    At the beginning of his career, Todd Papy started out as a “designer/environment artist” – a job title that dates back to times when team sizes were much smaller and one person could wear both hats at the same time. As the project complexity and team size grew, he specialized in level design at SONY Santa Monica and worked on the God of War titles. During his time there he moved up the ranks to Lead Level Designer, Design Director and eventually Game Director. From level design to directing a game - a career thanks to careful long-term planning and preparation? “It wasn’t even on my radar” says Todd. “I just wanted to build a game with the team and soak up as much information from the people around me as possible.” 
    So how do level designers feel who step into positions where the majority of their daily work suddenly consists of managing people and processes? Do they regret not doing enough hands-on-work anymore? Todd says he misses building and crafting something with his hands, but instead of going back to his roots, he decided to look at the issue from a fresh perspective: “As a Lead or Director, your personal daily and weekly satisfaction changes from pride in what you accomplished to pride in what the team has accomplished.“ Today Todd is designing the universe of 'Star Citizen' as Design Director at Cloud Imperium Games.
    Geoffrey Smith - who created some of the most popular multiplayer maps in the Call of Duty and Titanfall series and who is now Lead of the ‘Multiplayer Geometry’ team at Respawn Entertainment - says his output of levels remains unchanged thus far, but he can “easily see how being so tied up with managing would cut into someone's hands-on work”. Geoffrey calls for companies to provide the necessary training to employees new to management positions: “Managing people and projects is hard work and is normally a vastly different skill set than most of us in games have. Maybe that is why our industry has such problems with meeting deadlines and shipping bug-free games. A lot of guys work for a long time in their respective disciplines and after many years they get moved into a lead position. They certainly know their craft well enough to teach new guys but managing those guys and scheduling would be something brand new to them. Companies need to understand this and get them the training they need to be successful.” At Respawn Entertainment, the studio provides its department leads with training seminars, which helps the staff immensely, according to Geoffrey.
Sten Huebler, currently working as a Senior Level Designer at Microsoft-owned The Coalition, in Vancouver, says he definitely missed the hands-on work when he worked in a Lead capacity on 'Crysis' and 'Crysis 2': “I was longing for a more direct creative outlet again. That is why coming to The Coalition and working on Gears of War 4, I really wanted to be hands on again.” To Sten it was the right move because he enjoyed working directly on many of the levels in the game’s campaign and could then experience the fruit of his labour with others close to him: "After Gears 4 shipped, playing through the campaign, through my levels with my brother in co-op was a blast and a highlight of my career. He actually still lives in Germany. Being able to reconnect with him, on the other side of globe, playing a game together I worked on...So cool!"

'Gears of War 4'  developed by The Coalition and published by Microsoft Studios
    Paul Haynes, Lead Level Designer at Deep Silver Dambuster Studios, encourages designers to negotiate the amount of organizational tasks and hands-on work before being promoted into a position that makes you unhappy: “I always told myself that I wouldn’t take a Lead position unless it could be agreed that I retain some hands-on, creative responsibility, after all that’s where I consider my strongest attributes to lie. I agreed to both Lead positions (Cinematic/Level Design) under that principle - I never understood the concept of promoting someone who is good at a certain thing into a position where they potentially don’t get to do that thing anymore, as they spend all their time organising others to do it. So far I’ve managed to maintain that creativity to some degree, though I would imagine it’s never going to be quite the same as it used to be, as I do have a team to manage now. On the flip side though, being able to control and co-ordinate the level design vision for a project and having a team to support in fulfilling that is quite an exciting new experience for me, so not all the organisation and planning is unenjoyable.”
    Specialization VS Broadening Skillsets
    For the level designers who aren’t afraid of management-related tasks and who are willing to give up hands-on work for bigger creative control, what would the interviewees recommend: specialize and strengthen abilities as an expert in level design further or broaden one’s skillset (e.g. getting into system design, writing etc.)? Paul believes it doesn’t necessarily have to be one or the other: “I think it’s possible to do both (strengthening abilities and broadening skillsets) simultaneously, it would really depend on the individual involved. I would say that a good approach would be to start with the specialisation in your chosen field and then once you feel more comfortable with your day to day work under that specialisation, take on work that utilises different skillsets and experiment to see if you find anything else you enjoy.” He started out as a pure level designer but subsequently held roles that involved game and cinematic design at Codemasters, Crytek and Dambuster Studios. “I’ll always consider myself a level designer at heart”, says Paul, “though it’s been incredibly beneficial for me to gain an understanding of multiple other disciplines, as not only has it widened my personal skillset but it has enabled me to understand what those disciplines have to consider during their day to day job roles, and it has helped me to strengthen the bond with those departments and my level design department as a result.” This advice is echoed by Todd who encourages level designers to learn about the different disciplines as “that knowledge will help solve issues that arise when creating a level.”

    'Homefront: The Revolution' developed by Dambuster Studios and published by Deep Silver
    Sten also gained experience in related disciplines but ultimately decided to return to his passion and do level design. He explains: “It’s a good question and I feel I have been wondering about this myself regularly in my career. I think those priorities might change depending on your current situation, your age, your family situation, but also depending on the experience you gain in your particular field. (…) In my career, I was fortunate enough to try out different positions. For example, I was a Level Designer on Far Cry (PC), Lead Level Designer on Crysis 1 and Lead Game Designer on Crysis 2. Each position had different requirements and responsibilities. As a Lead Level Designer I was more exposed to the overall campaign planning and narrative for it, while on Crysis 2 I was more involved in the system design. However, my true passion is really on the level design side. I love creating places and spaces, taking the player on a cool adventure in a setting I am crafting. My skills and talents also seem to be best aligned on the level design side. I love the combination of art, design, scripting and storytelling that all come together when making levels for 1st or 3rd person games.”
    Picking The Right Studio
    As you can certainly tell by now, all of the interviewees have already made stops at different studios throughout their career. So each one of them has been in the situation of contemplating whether to pass on an offer or put down their signature on the dotted line. This brings up the question what makes them choose one development studio over the other? To Geoffrey it depends on what stage of your career you are in. “If you're trying to just get into the industry for the first time, then cast your net wide and apply to a lot of places. However, ideally, someone should pick a studio that makes the types of games they love to play. Being happy and motivated to work every day is a powerful thing.”
    This is a sentiment that is shared by all interviewees: the project and team are important aspects, but as they have advanced in their career other external factors have come into play: “It’s not just about me anymore, so the location, the city we are going to live in are equally important.” Sten says.
    Paul is also cautious of moving across the globe for a new gig. “The type of games that the company produces and the potential quality of them is obviously quite important – as is the team that I’d be working with and their pedigree.  More and more over the years though it’s become equally important to me to find that balance between work and life outside of it. Working on games and translating your hobby into a career is awesome, but it’s all for nothing if you can’t live the life you want around it.”
    And it is not just about enjoying your leisure time with family and friends, but it will also reflect in your work according to Todd: “If my family is happy and enjoys where we live, it makes it a lot easier for me to concentrate on work.” He also makes another important point to consider if you are inclined to join a different studio solely based on the current project they are working on: “The culture of the studio is extremely important. I consider how the team and management work together, the vibe when walking around the studio, and the desk where I will sit. Projects will come and go, but the culture of the studio will be something that you deal with every day.”

    'Star Citizen' developed and published by Cloud Imperium Games; screenshot by Petri Levälahti
    But it goes the other way around, too: When it comes to staffing up a team of level designers, these are the things that Todd looks for in a candidate: “First and foremost, I look for level designers that can take a level through all of the different stages of development: idea generation, 2D layouts, 3D layouts, idea prototyping, scripting, tuning, and final hardening of the level. People that can think quickly about different ideas and their possible positive and negative impacts.  They shouldn’t get too married to one idea, but if they feel strongly enough about that specific idea they will fight for it. People that approach problems differently than I do. I want people that think differently to help round out possible weaknesses that the team might have.  People who will look for the simplest and clearest solution vs. trying to always add more and more complexity.“
    For lead positions, it goes to show yet again how important a designer's professional network is, as Todd for example only considers people that he already knows: “I try to promote designers to leads who are already on the team and have proven themselves. When I am building a new team, I hire people who I have had a personal working relationship before. Hiring people I have never worked with for such positions is simply too risky.”
    Ups & Downs
    While the career paths of the designers I interviewed seem pretty straightforward in retrospect, it is important to note that their journeys had their ups and downs as well. For instance Geoffrey recalls a very nerve-wracking time during his career when he decided to leave Infinity Ward: “We had worked so hard to make Call of Duty a household name but every day more and more of our friends were leaving. At a certain point it just wasn't the same company because the bulk of the people had left. The choice to leave or stay was even giving me heart palpitations. (…) After I left Infinity Ward, I started working at Respawn Entertainment and by work I mean - sitting in a big circle of chairs with not a stick of other furniture in the office - trying to figure out what to do as a company.” But he also remembers many joyful memories throughout his career: Little things like opening up the map file of multiplayer classic ‘mp_carentan’ for the first time or strangers on the street expressing their love in a game he had worked on. To him, shipping a game is a very joyful experience by itself and the recently released Titanfall 2 takes a special place for him. “The first Titanfall was a great game but we had so many issues going on behind the scenes it felt like we weren't able to make the best game we were capable of. (…) After all the trials and tribulations of starting a new game company, Titanfall 2 is a game I am very proud to have worked on.”

    'Titanfall 2' developed by Respawn Entertainment and published by Electronic Arts
    As a response to the question of what some of the bigger surprises (good or bad) in his career have been thus far, Paul talks about the unexpected benefits of walking through fire during a project’s development and the lessons he learnt from that: “It surprised me how positively I ended up viewing the outcome of the last project I worked on (Homefront: The Revolution). I’d always thought I would aim to work on big, successful titles only, but I guess you don’t really know what’s going to be a success until it’s released. Obviously it was a disappointing process to be part of, and a lot of hard work and effort went into making it, despite the team always knowing that there were some deep lying flaws in the game that weren’t going to be ironed out. We managed to ride the storm of the Crytek financial issues in 2014, coming out on the other side with a mostly new team in place and yet we carried on regardless and managed to actually ship something at the end of it, which is an achievement in itself. I see the positives in the experience as being the lessons I learnt about what can go wrong in games production which stands me in good stead should I decide to take a more authoritative role somewhere down the line. Sometimes the best way to learn is through failure, and I don’t believe I’d be as well rounded as a developer without having experienced what I did on that project.”
    Last Words Of Advice
    At the end I asked the veterans if they had any pieces of advice they would like to share with less experienced designers. To finish this article I will quote these in unabbreviated form below:
    Geoffrey: “I guess the biggest thing for guys coming from community mapping is figuring out if you want to be an Environment Artist or a Geo-based Designer and if you want to work on Single-Player or Multiplayer. Each has its own skills to learn. I think a lot of guys get into mapping for the visual side of things but some companies have the environment artists handle the bulk of that work. So figuring out if making the level look great is more enjoyable to you or thinking it up and laying it out is, will help determine which career you should follow. Other than that, just work hard and always look to improve!”
    Todd: “BUILD, BUILD, BUILD.  Have people play it, find out what they liked about it and what they didn’t.  Build up a thick skin; people will not always like your ideas or levels. Try out new ideas constantly. What you think looks good on paper doesn’t always translate to 3D.  Analyse other games, movies, books, art, etc. Discover what makes an idea or piece of art appeal to you and how you can use that in your craft.”
    Paul: “The games industry is not your regular nine to five job, and everyone is different so it’s difficult to lay down precise markers for success. Different specialisations have different requirements and you can find your choices leading to different routes than your fellow team members. You need to make sure you carve your own path and try everything you can to achieve whatever your personal goals are within the role; success will come naturally as a result of that. You need to be honest with yourself and others, open to criticism and willing to accept change. I’ve seen potential in people over the years hindered by stubbornness, succeeding in the games industry is all about learning and constantly adapting. Also it’s important to keep seeing your work as an extension of a hobby, rather than a job. The moment it starts to feel like a means to an end, you need to change things up to get that passion back.”
    Sten: “I always feel people should follow their passion. I firmly believe that people will always be the best, the most successful at something they love. Of course, it is a job and it pays your bills, but it’s also going to be something you are going to do for gazillions hours in your life, so better pick something you like doing.”
    Written by Friedrich Bode for mapcore.org
    What are your personal experiences? Do you agree with the statements made by the interviewees? Any advice you would like to share with fellow level designers or game developers in general? Let us know in the comments!
  13. Like
    leplubodeslapin got a reaction from Lizard for an article, Source Lighting Technical Analysis: Part Two   
    This is the second part of a technical analysis about Source Lighting, if you haven’t read the first part yet, you can find it here. 
Last time, we studied the lightmaps, how they are baked, and how VRAD handles light traveling through space. We ended part 1 with an explanation of what the Constant-Linear-Quadratic Falloff system is, along with a website that allows you to play with these variables and see how the lighting falloff reacts to them. We will now continue with basic examples of things you can do with these variables.
     
    Examples of application
    Constant falloff
The simplest type of falloff is the 100% constant one. Whatever the distance, the lighting theoretically has the same intensity. This is the kind of (non-)falloff used for sun lighting: the sun is so far away from the map area that light rays are supposed to be parallel, and light keeps its intensity. Constant falloff is also useful for fake lights: lights with a very low brightness that are just there to brighten up an area.
     
     

     
    Linear falloff

Another type of falloff is the 100% linear one. With this configuration, light seems a bit artificial: it loses its intensity but reaches much further than the 100% quadratic falloff. It can be very useful on spots; the lighting is smooth and powerful. Here is an example:
     

     
    Quadratic falloff

This is the default configuration for any light entity in Hammer, following, as we said before, the classic Inverse-Square law (100% Quadratic Falloff). It is considered to be the most natural and realistic falloff configuration. The biggest issue is that it boosts the brightness so much at short distances that you can easily obtain a big white spot. Here is an example, with a light placed 16 units away from a grey wall:

     
This can also happen with linear falloff, but it is worse with quadratic. Simple solutions exist for that; the most common is not to use a light entity but a light_spot entity oriented in the opposite direction from the wall/ceiling the light is fixed to. You can make the opening angle of your light_spot wider with the inner and outer angle parameters (by default the outer one is 45°; increase that to a value of 85° for example). If needed, you can also add a light with low brightness to light the ceiling/wall a bit.

     
    50% & 0% FallOff
    A second light falloff system exists, overriding the constant-linear-quadratic system if used. The concept is much simpler, you have to configure only 2 distances:
- 50 percent falloff distance: the distance at which light falls off to 50% of its original intensity
- 0 percent falloff distance: the distance at which light should end. Well ... almost: it actually falls off to 1/256 of its original intensity, which is negligible.

The good thing with this falloff system is that you can see the 2 spheres corresponding to the 2 distances you have configured in Hammer. Just make sure to have this option activated: 

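To get a feel for what the two distances mean, here is a toy falloff curve in Python. It is only an illustration: this is not VRAD’s actual conversion of the two distances, just a smooth monotonic curve forced through the two constraints (50% at the first distance, 1/256 at the second):

```python
import math

def make_falloff(d50, d0):
    # Exponent chosen so the curve passes through both points:
    # I(d50) = 0.5 and I(d0) = 0.5**8 = 1/256.
    p = math.log(8) / math.log(d0 / d50)
    return lambda D: 0.5 ** ((D / d50) ** p)

I = make_falloff(d50=128, d0=512)
for D in (0, 128, 256, 512):
    print(D, round(I(D), 4))   # 1.0 -> 0.5 -> 0.1405 -> 0.0039 (~1/256)
```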
     
    Models lighting
A dedicated section for model lighting is needed, because it differs from brush lighting (though the falloff stays the same). In any current game engine, lightmaps can be used on models; a specific UV unwrap is even made just for lightmaps. But in Source Engine 1 (except for Team Fortress 2) you cannot use lightmaps on models. 
    The standard lighting method for models is named Per-Vertex Lighting. This time, light won’t be lighting faces but vertices, all of the model’s vertices. For each one of them, VRAD will compute a color and brightness to apply. Finally, Source Engine will make a gradient between the vertices, for each triangle. For example:

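Concretely, “making a gradient between the vertices” means interpolating the three baked vertex colors across each triangle. A minimal sketch, with invented colors:

```python
# Any point inside a triangle is a weighted (barycentric) blend of the
# three vertex colors baked by VRAD; the three weights sum to 1.
def blend(w0, w1, w2, c0, c1, c2):
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))

lit = (255, 255, 255)      # vertex facing the light
shadowed = (30, 30, 30)    # vertices turned away from it
print(blend(0.5, 0.25, 0.25, lit, shadowed, shadowed))
# -> (142.5, 142.5, 142.5): mid-grey halfway across the triangle
```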
    If we take a simple example of a sphere mesh with 2 different light entities next to it, we can see it working.
                
With this lighting method, models will therefore be integrated into the environment with appropriate lighting. The good thing is that if one part of the model is in a dark area and another part is in a bright area, the situation will be handled properly. The only requirement is that the mesh must have a sufficient level of detail: if there is a big planar area without additional vertices on it, the lighting detail could be insufficient. 
    Here is an example of a simple square mesh with few triangles on the left and a lot on the right. With the complex mesh, the lighting is better, but more expensive. 

If you need a complex mesh for your lighting but don’t want your model to be too expensive, you have to find a balance. 
    Two VRAD commands are needed to make the Per-Vertex Lighting work:
-StaticPropLighting
-StaticPropPolys

You have to add them here. You can find more information here.
Another system exists that is much cheaper and simpler. Instead of computing the lighting of all the vertices, the engine only deals with the model’s origin. The result obtained in-game is displayed on the whole model, using only what has been computed at the model’s origin location. This can be an issue if the model is big or is present in an area with a lot of lighting contrast. The best example is at the beginning of Half-Life 2, with trains entering and exiting tunnels. We can see the issue: the model is illuminated at the beginning, but when it enters the tunnel it suddenly turns dark, at the exact moment the train’s origin gets into the shadow. 
    This cheap lighting method will replace the per-vertex lighting for 3 types of models:
- prop_dynamic or any kind of dynamic model used in the game (NPCs, weapon models in hand, any animated models...)
- prop_physics
- ANY MODEL USING A NORMAL MAP (vertex lighting apparently causes issues with normal maps), EVEN IF USED AS A PROP_STATIC
The big problem with these models is their integration into the map: they won’t show any shadow and their lighting will be very flat and boring (because the same lighting is used for the whole model). Fortunately, there are 2 good things about this cheap lighting method. 
First, the direction light comes from is taken into account: if blue light comes from one direction, all the faces oriented toward that direction will be tinted blue. And if you have different lighting colors/intensities coming from different sides of your model, they should all appear in game. 
Here is an example of a train model using a normal map, with 2 lights, one on each side. If you look closely, you’ll see some blue lighting on the left, on faces that are supposed to be in the shadow of the blue light but are oriented toward it.
     

     
The second good thing is that there is still some kind of dynamic per-vertex lighting, but much simpler: it only works with light and light_spot entities (NOT with light_environment), and it can only add light to the prop, never cast shadows (it only takes into account, dynamically, the distance between the light and the vertex). Let’s reuse the high-poly plane mesh from before as a prop_dynamic, parented to a func_rotating that ... rotates. Light dynamically lights the vertices of the prop. There is a limit of 3 dynamic lights per prop; it can’t handle more at the same time.

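A sketch of that cheap dynamic contribution is below. Note that keeping the 3 closest lights is my assumption of how the cap could be applied; the article only states the limit of 3:

```python
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def dynamic_vertex_light(vertex, lights):
    # lights: (brightness, position) pairs; no occlusion test at all,
    # only the distance matters, and at most 3 lights contribute.
    closest = sorted(lights, key=lambda l: dist2(vertex, l[1]))[:3]
    return sum(b / dist2(vertex, p) for b, p in closest)

lights = [(200, (0, 0, 64)), (100, (128, 0, 64)),
          (100, (0, 128, 64)), (50, (256, 256, 64))]
print(dynamic_vertex_light((0, 0, 0), lights))   # the 4th light is ignored
```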
    And if you add a normal-map in your model’s texture, this cheap dynamic lighting works on it:

     
    Projected texture and Cascaded Shadows
A few words on dynamic lighting to finish this study. Projected textures are a technology that appeared with Half-Life 2: Episode Two in 2007. It consists of a point entity projecting a texture in a chosen direction, with a chosen opening angle (fov). The texture is projected with emissive properties (it can only increase the brightness, not lower it) and it can generate shadows or not. The great thing about this technology is that it’s fully dynamic: the env_projectedtexture can move and/or aim at moving targets. This technology is used for flashlights in Source games, for example. But as usual, there is also a drawback: most of the time you can only use 1 projected texture at a time. Modders can change this value quite easily, but in Valve games it is always locked at 1. 

The cascaded shadows system is only used in CS:GO. The concept is quite similar to a projected texture, but it doesn’t increase the brightness; it only adds finer shadows. It is used for environment lighting, works with much smaller luxels than the lightmaps, and is fully dynamic. It starts from the tools/toolsskybox textures of the map and casts shadows when it meets any obstacle. Shadows from the lightmap are most of the time low resolution, and the transition between a bright and a dark area is blurry and wide. The cascaded shadows can therefore draw a clear shadow around the one from the lightmaps.

When an object is too small to get a shadow in the lightmap, its shadow will still be visible thanks to the cascaded shadows. There are 3 levels of detail for cascaded shadows in Counter-Strike; you can configure the maximum distance at which the cascaded shadows will work with the Max Shadow Distance parameter of the env_cascade_light entity (by default it’s 400 units). The levels of detail will be distributed within this range, for example: 

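As a purely hypothetical illustration of such a distribution (the actual split ratios are not documented here), each detail level could cover twice the range of the previous one:

```python
max_shadow_distance = 400              # env_cascade_light default
splits = [max_shadow_distance // 4,    # sharpest shadows: 0-100 units
          max_shadow_distance // 2,    # medium: 100-200 units
          max_shadow_distance]         # coarsest: 200-400 units
print(splits)   # [100, 200, 400]
```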
    Since cascaded shadows and projected textures share some technology, you can’t use them both at the same time.
     
    Conclusion
I really hope you have found this article interesting and learned at least a few things from it. I believe most of this information is not the easiest to find, and it’s always good to know how your tools work, to understand their behavior. Source Engine 1 is old and its technologies might not be used anymore in the future; more powerful and credible technologies are released frequently, but it’s always good to know your classics, right? 
    I would like to thank Thrik and ’RZL for supporting me to write this article, and long live the Core!
    // Written by Sylvain "Leplubodeslapin" Menguy
    Additional commands for fun
Mat_luxels 1                       // Allows you to see the lightmap grids
Mat_fullbright 1                   // Disables all the lighting (= fullbright). On CS:GO, cascaded shadows stay and you should delete them as well (cf next command)
Ent_fire env_cascade_light kill    // KILL WITH FIRE the cascaded shadows entity
Mat_drawgray 1                     // Replaces all the textures with a monochrome grey texture, useful to work on your lighting
Mat_fullbright 2                   // Alternative to Mat_drawgray 1

Bonus:
    Mat_showlowresimage 1           // Minecraft mode
  14. Like
    leplubodeslapin reacted to will2k for an article, Optimizing An Open Map in Source Engine   
    An open map?
Source engine, which is funnily a Quake engine on steroids (a bit of exaggeration, but still), inherited the same limitations as its parent in terms of visibility calculations: BSP and PVS. This fact makes Source, as the Quake engine was before it, more suited to rooms and hallways separated by portals, where the BSP shines in all its glory.
Inherently, Source does not like large open maps where the PVS is of considerable size and over-rendering is a real issue.
    If you work with Source engine, then you already know the importance of optimization in a large, detailed map. Optimization becomes even more imperative when the said map is open.
What’s an open map? Good question. The word “open” is an umbrella term to denote any map that does not have traditional hallways and corridors connecting indoors to outdoors. The map is mostly large and outdoors, with an unbroken skyline; in other words, the same stuff that Source engine nightmares are made of in terms of PVS and BSP.
    In a traditional “hallway’d” map with twisted corridors leading to open areas followed by other hallways, and even if you “forgot” to place hints and areaportals, the geometry itself allows the engine to cut visleaves and limit visibility; granted the visleaves’ cuts will be subpar and messy and the PVS will be in excess, but still, the visibility and fps will be relatively under control. A twisted hallway is a remedy to long sight lines after all.
In an open map, without hallways and enough geometry to help the engine, the PVS risks being huge and the whole map could be rendered at once from any point (over-rendering). We are talking here about a severe fps killer and a potential slideshow on a medium to low range computer. Source does not like over-rendering; I repeat, Source does not like over-rendering.
    I believe a screenshot should be welcome at this stage to illustrate an open map. I’ve chosen a nice medium-size map from CSGO to showcase the issue: de_stmarc.

    The shot is taken in Hammer obviously, and you can immediately see that the skybox is one big unbroken body from one edge of the map to the opposite one. This is the classic definition of open map.
    Let’s see this map in 2D view from the side.

I have highlighted the skybox in blue so you can see the continuous sky body all over the map. Please note that an open map can have varying skybox shapes, but I’ve chosen a simple and classic one where it is easier to see and visualize the concept of an open map.
    In contrast, a “traditional” map will have several skyboxes, often not connected directly but rather through a system of indoor rooms or hallways, varying in size and shape.
    I will have my map de_forlorn as example here.

    I have also highlighted the skybox in blue and you can easily notice several skyboxes for CT spawn, T spawn, and Mid/bombsites. These skyboxes are not directly connected to each other but the areas related to them are linked on the lower levels through various indoor locations, some vast (like garage, tunnels…) and some small (like lab hallway…).
    If you are not that comfortable with source optimization or feel that certain terms are alien to you, then please read my previous optimization papers and articles before proceeding further in this article (Previous papers can be found here Source Engine Optimization roadmap).
    The necessary tools
I’m not revealing a secret when I tell you that the tools used to optimize an open map are exactly the same ones used to optimize any map in Source. If you were expecting some magical additional tools, I’m sorry to burst your bubble.
    Since the tools are the same (nodraw, func_detail, props, hints, areaportals, occluders…), it is more about how to use them in open maps that makes all the difference.
    So, how to properly optimize an open map? Well, you could always pay me to do so for you (joking…not…maybe…I dunno!!)
If the above option is off the table, then read on through the rest of this article.
    Horizontal hints
    While in a traditional map one might get away without using horizontal hints, it is virtually impossible to skip them (pun intended) in an open map unless you want to witness single digit fps burning your eyes on the screen. They are of utmost importance to negate the "tall visleaves across the map" issue.
In a traditional map, even if you bypass adding horizontal hints, the damage in fps will mostly be local, since the skyboxes are not connected and areas are mostly autonomous in terms of PVS. In the case of my map “Forlorn”, and referring to the 2D diagram above, if I remove horizontal hints from CT spawn, then only this area will suffer from tall visleaves and over-rendering. Obviously, this is not cool in terms of optimization, but at least the effect will be somewhat restricted to this area only.
In the case of “Stmarc”, you can certainly see that not including horizontal hints will leave tall visleaves visible from across the map, as the skybox is one unit. The PVS will grow exponentially and the over-rendering will take its toll on the engine.
    Let’s move on to some screenshots and diagrams, shall we.

    This is our glorious open map in side view. The blue lines denote the skybox, the dark grey one is the ground, and the green rectangles represent solid regular world brushes such as building bases for example. The red starfish little-man-with-arms-wide-open is the player. The orange hollow rectangles denote the various visleaves that the engine would probably create in the map (most go from ground level to skybox level and this is what I refer to as “tall visleaf”).
If you know your optimization, then you certainly remember that BSP relies on a “visibility from a region” approach (for a refresher, please consult my papers Demystifying Source Engine Visleaves and Source Engine PVS - A Closer Look). This simply translates to the following: the player is in visleaf A and visleaf A has direct line of sight to visleaves B, C, D, E, F, and G. The PVS for A in this case would be stored as BCDEFG. Once the engine recognizes that the player is in A, and regardless of the exact position in A, it will proceed to render the whole PVS content. Everything in visleaves BCDEFG will be rendered even though the player is at the extreme end of A and has no line of sight to most of this content.
    You can immediately notice the extent of damage you will inflict on your open map if you neglect adding horizontal hints: excess PVS with additional useless content to be rendered at all times.
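In pseudo-Python, the runtime side of this is nothing more than a precomputed lookup (the leaf names match the example above):

```python
# PVS as computed by vvis at compile time: for each leaf, the set of
# leaves it can potentially see.
pvs = {"A": {"B", "C", "D", "E", "F", "G"}}

def leaves_to_render(player_leaf):
    # rendered regardless of where exactly the player stands in the leaf
    return {player_leaf} | pvs[player_leaf]

print(sorted(leaves_to_render("A")))   # the whole BCDEFG content
```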
    Now that we established the importance of these horizontal hints in open maps, the question remains: where shall I put these hints?
    In the diagram above, the most logical places would be on top of the 3 green rectangles.

We added 3 horizontal hints (H1, H2, H3) on top of the 3 regular brushes in our map (the hint face neatly resting on the top of the regular brush while the other faces are textured with “skip”). This will create more visleaves, as can be clearly seen in the above diagram, and vvis will take more time to calculate visibility due to the increased number of leaves and portals, but this is done for the greater good of your map’s fps.
    Now the player is in visleaf A1 and the PVS is reduced to (sit tight in your chair) A2, A3, A4, B1, B2, C3, C4, D1, E4, F3. On top of the nice result of a greatly reduced PVS (and therefore content to render), keep in mind that leaves A4, B2, C4, D1, E4, and F3 are mostly empty since they are way up touching the skybox.
    Some folks will start complaining and whining: what the hell dude, I don’t have 3 green rectangles in my map; where would I put my hints?? My answer would be: deal with it!!
    Joking aside, open maps will greatly differ in size, shape, geometry, and layout. What you need to do is choose 1 to 5 common height locations in your map where you would implement these hints. Medium maps with mostly uniform building heights can get away with 1 horizontal hint, while complex, large maps with various building heights can do with 4-5 hints.
    If your map has a hill made of displacements that separates 2 parts of the map, then it is also a candidate for horizontal hints. You just need to insert a nodraw regular world brush inside the displacement to be used as support for the horizontal hint (the same technique can be used if you have a big non-enterable hollow building made mostly of func_detail/props/displacements).
    Vertical/corner hints
These might not come into play as much as their horizontal siblings; however, they could see growing use depending on the map’s layout and its geometry tightness versus openness.
    I cannot go through all combinations of open maps obviously to show you how to lay vertical and corner hints; what I will do is choose one diagram representing a typical open map scenario with some scattered houses, streets, and surrounding fields. Once you see how I proceed with these hints, it will become a lot easier for you to implement them in your own map regardless of the differing geometry and layout.

    Here’s our typical map viewed from top with grey lines being map borders, green rectangles being houses (solid world brushes), and our tiny red player at the rightmost part of the map. The map has a main street that goes in the middle between houses but the player is not restricted to this path only.
    The diagram below shows how I would proceed with my hints for such setup.

    This is basically what you get when you give a 5-year-old some crayons.
    Seriously though, I just gave each hint a different color so you could discern them on the spot, otherwise it would be hard to tell where each one starts and ends.
    Most of these hints go from one side of the map to the other while going from ground level to skybox top; don’t be afraid of having big hints that cross your entire map.
    Notice that we have both straight vertical hints (shown from above in the diagram obviously) and corner hints; what I did is that I compartmentalized the map so wherever the player is, chances are they will have the least amount of leaves to render in the PVS (this is just a basic hint system and more fine tuning and additions could be done but you get the gist of it).
    To get more details on hint placement, please refer to my paper Hints about Hints - Practical guide on hint brushes placement
    Areaportals
    If your map has enterable buildings, then it is imperative to separate indoors from outdoors using areaportals; this is top priority.
    Make sure to slap an areaportal on each door, doorway, cellar door, window, roof opening, chimney, etc. that leads inside the house in question.
What about outdoor areaportals? Good call. In an open map without many regular world brushes to maneuver, it can get very tricky to set up an outdoor areaportal system to separate areas. However, you should always strive to have one, even if it is just one or two areaportals across the map. The reason is very simple: the view frustum culling effect, which, coupled with hints, will yield the best results in cutting visibility around the map.
    Continuing with our previous diagram, a simple outdoor areaportal system setup could be as follows (top view).

    This setup will make sure that the map is split into 4 areas and whenever you are in one of them as player, the view frustum culling effect will kick in to cull as much detail as possible from the other areas.
    Let me show you the setup from a side view to make it easier to visualize.

    This is the same areaportal that was closest to the player in the top down view diagram but this time viewed from the side. Unlike hints where it’s fine to have one big hint going across the map, for areaportals, it is best to have several smaller ones that tightly follow the contour of the geometry eventually forming one big areaportal system.
    Another possibility for outdoor areaportal system is to have a combination of vertical and horizontal (yes horizontal) areaportals.
    If your map is a village for example with a highly detailed central square where most of the action takes place, a potential system could be made of several vertical areaportals that sit in every entrance to the square from adjacent streets, and a horizontal areaportal that “seals” the area and works as a “roof”.
    For a practical guide on areaportals placement, please check out my article Practical guide on areaportals placement
    Props fade distance
    This is a really, really important tool when optimizing large open maps. In case you got distracted while I was making the announcement, I’ll go again: props fading is definitely vital when tackling open maps optimization.
    What you need to do is to set an aggressive fade distance for all trivial props that do not contribute to gameplay. Players will look closely at how detailed your map is when they check it out solo on the first run; however, when the action starts and the round is underway, adrenaline, focus, and tunnel vision kick in, and all the details become a blur.
    During an intense firefight, players will not notice small props and details up close, let alone at a distance. We need to use this to our advantage to fade props thus releasing engine overhead; a faded prop is not rendered anymore and engine resources will be freed and allocated elsewhere.
    Your map geometry will dictate the proper fade distances, but as a rough guideline, small props could have a fade distance anywhere from 800 to 1200 units (flower pot on a window sill, small bucket at the back door, a bottle on the sidewalk…), while medium props could do with 1400-1800 range (a shrub, a power box on the wall, an antenna on the roof, wood plank, gutter pipe, fire hydrant…).
    Be very careful though not to prematurely fade critical props used for cover or game tactics (car in the middle of the street, sandbags, stack of crates, dumpster on the sidewalk…).
    Cheap assets
    Many people forget about this technique which is more than needed when it comes to open maps that tend to have larger average PVS than traditional maps.
    I showcased in a previous article of mine the fps cost of cheap and expensive assets (Source FPS Cost of Cheap and Expensive Assets).
Get in the habit of using the low-poly model version as well as the cheap texture version in distant non-playable areas and high unreachable areas where players won’t have much close contact with the environment. Potential candidates could include a distant field, the unreachable opposite bank of a river, a garden behind hedges/walls, high rooftops, the 3D sky...
    Fog/Far-z clip plane
    This technique, when correctly used, can provide a big boost to your frame rate as parts of the world beyond the opaque fog won’t be rendered at all.
    For this technique to work properly, your map should have a foggy/rainy/stormy/dusty/hazy/night setting (use as applicable) where a fully opaque fog won’t appear out of place. Obviously, if your map takes place in a sunny and clear day, this technique won’t work much and it will look inappropriate.
    Using this is simple: For example, if your map is set in a rainy and foggy day, you just need to set the fog end distance while having its density set to 1. You will then set the far-z clip plane to something slightly higher than the maximum fog distance (if the fog end distance is 8000 units for example, the far-z could be set to 8200).
    3D skybox
    This is another good technique to reduce engine overhead and the cost of rendering.  
    It is true that the 3D sky is used to expand the limits of your level and decorate its surrounding, however, since it is built at 1/16 scale (and expanded in-game), it is also a nice way to decrease rendering costs. Use this to your own advantage and relocate assets in the non-playable areas with limited player interaction to the 3D sky.
    One thing to keep in mind though, the 3D sky’s visleaf is rendered at all times on top of the PVS in the playable area. Do not go overboard and make an extra complex, highly expensive 3D sky or you would be defeating the purpose of this optimization technique.
    Occluders
    You thought I forgot about occluders? Not a chance as these are the big guns when it comes to large open maps with little world brushes to use for other optimization techniques.
Let’s clear one thing first: if your map is made mostly of brushwork and displacements with little to no props, then there is absolutely no need to resort to occluders, as they’d be totally useless in this case. Only when the map is loaded with models and props, in an open setup with few regular world brushes, do occluders come into play in force.
    To place occluders, you would search for areas where these occluders could make the most impact (low fps, high traffic, props abundance) since they run in real time and are expensive, otherwise their cost would outweigh their benefit in terms of frame rate variation.
    Remember that occluders rely on the player’s position and field of view relative to the occluder to calculate what gets culled. You need to place them in a way to maximize the number of props to be culled behind them when the player stands in front of these occluders.
    Let’s see some examples.

We go back to our famous top-down diagram; the occluder is the dark blue segment placed on the left wall of the large house, while the little black stars represent various props and models. The 2 diagonal black lines denote the player’s FOV relative to the occluder. Anything behind the occluder and within the view frustum will be culled.
    That’s nice; we are able to cull 4 props but is it enough? It is not optimal as we can still do better. What if we move the occluder to the right wall of the house?

    Much better if you ask me. 5 additional props were added to the culling process meaning less overhead and fewer resources to render for the engine. That is why I said earlier it is all about maximizing the impact of the occluder by placing it in a way relative to the player’s position that maximizes the number of culled models.
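For the geometrically inclined, here is a simplified 2D version of the culling test implied by these top-down diagrams (my own simplification, not the engine’s code): a prop is culled when it lies beyond the occluder segment and inside the wedge formed by the player’s sight lines over the segment’s two endpoints:

```python
def side(a, b, p):
    """Sign of the cross product: which side of line a->b point p is on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def occluded(player, occ_a, occ_b, prop):
    behind = side(occ_a, occ_b, prop) * side(occ_a, occ_b, player) < 0
    in_wedge = (side(player, occ_a, prop) * side(player, occ_a, occ_b) > 0
                and side(player, occ_b, prop) * side(player, occ_b, occ_a) > 0)
    return behind and in_wedge

player, wall = (0, 0), ((4, -2), (4, 2))   # occluder along a house wall
print(occluded(player, *wall, (8, 0)))     # True: culled behind the wall
print(occluded(player, *wall, (8, 6)))     # False: visible around the edge
```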
    Here’s another example (still top down view).

    The player has moved to the middle of the central street, and beyond that L-shaped house is an open field with a lot of props scattered around. One way to implement occluders is as showcased in the above diagram. Notice how I arranged 2 perpendicular occluders along the walls for the maximum occlusion effect as all of these props in the field are not rendered from that player location.
    Another way to arrange occluders in this case would be diagonally across the L-shaped house (split into 2 or 3 occluders if needed to accommodate the nearby geometry; they can be floating without the need to seal an area).
    If you’re feeling brave enough (you should be after reaching this far in this article), you could also add an extra occluder along the wall of the house to the left of the L-shaped house to further enhance the view frustum occlusion effect and cover more props in the field.
    The most common places to add occluders in open maps include a displacement hill that separates parts of the map, a hedge that stands between a street and a field full of props, a floating wall between a house garden and the street, the walls of a large house, the walls of a tall building, a ceiling when it separates multiple levels…
To read more about occluder placement and cost, please consult my article Practical guide on occluders placement.
    In conclusion
The foundation of optimization in the Source engine is the same whether it is a traditional map or an open one. You will heavily rely on func_detail, nodraw, displacements, props… to achieve your goals, but it is the way you use these tools in an open map that makes all the difference.
One might get away with being a bit sloppy with optimization in a traditional map; make no mistake, however, that an open map won’t be forgiving at all if you decide to skip a beat in your optimization system.
    Talking about different open maps and formulating varying optimization systems for them could fill articles; I hope this article has shed enough light on the open maps optimization approach to let you easily design a system for your own map.
  15. Like
    leplubodeslapin got a reaction from biXen for an article, Source Lighting Technical Analysis: Part Two   
    This is the second part of a technical analysis about Source Lighting, if you haven’t read the first part yet, you can find it here. 
    Last time, we studied the lightmaps, how they are baked and how VRAD handles the light travel through space. We ended the part 1 with an explanation of what the Constant-Linear-Quadratic Falloff system is, with a website that allows you to play with these variables and see how lighting falloff reacts to them. We will now continue with basic examples of things you can do with these variables. 
     
    Examples of application
    Constant falloff
The simplest type of falloff is the 100% constant one. Whatever the distance is, the lighting theoretically has the same intensity. This is the kind of (non-)falloff used for sun lighting: the sun is so far away from the map area that light rays are supposed to be parallel, and light keeps its intensity. Constant falloff is also useful for fake lights: lights with a very low brightness that are only there to brighten up the area.
     
     

     
    Linear falloff

Another type of falloff is the 100% linear one. With this configuration, light seems a bit artificial: it loses its intensity but reaches way further than the 100% quadratic falloff. It can be very useful on spotlights; the lighting is smooth and powerful. Here is an example:
     

     
    Quadratic falloff

This is the default configuration for any light entity in Hammer, following, as we said before, the classic inverse-square law (100% quadratic falloff). It is considered to be the most natural and realistic falloff configuration. The biggest issue is that it boosts the brightness so much at short distances that you can easily obtain a big white spot. Here is an example, with a light placed 16 units away from a grey wall:

     
This can also happen with linear falloff, but it is worse with quadratic. Simple solutions exist for that; the most common is to use not a light entity but a light_spot entity oriented in the opposite direction from the wall/ceiling the light is fixed to. You can make the opening angle of your light_spot wider with the inner and outer angle parameters (by default the outer one is 45°; increase that to a value of 85° for example). If needed, you can also add a light with low brightness to light the ceiling/wall a bit.
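To get a feel for how the three behaviors compare, here is a small Python sketch of the falloff equation guessed in part one (an approximation for intuition only, not VRAD’s actual code):

def intensity(distance, brightness, constant=0.0, linear=0.0, quadratic=1.0):
    # normalize the three coefficients into percentages, as Hammer does
    total = constant + linear + quadratic
    c, l, q = constant / total, linear / total, quadratic / total
    # the linear and quadratic parts get boosted (x100 and x10000) to stay usable
    boosted = brightness * (c + 100 * l + 10000 * q)
    return boosted / (c + l * distance + q * distance ** 2)

for d in (16, 128, 512):
    print(d,
          round(intensity(d, 200, constant=1, linear=0, quadratic=0)),
          round(intensity(d, 200, constant=0, linear=1, quadratic=0)),
          round(intensity(d, 200, constant=0, linear=0, quadratic=1)))
# constant stays at 200 everywhere; quadratic explodes up close (the white
# spot) and fades fast; linear sits in between and carries much further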

     
    50% & 0% FallOff
A second light falloff system exists, overriding the constant-linear-quadratic system if used. The concept is much simpler; you have to configure only 2 distances:
50 percent falloff distance: the distance at which light falls off to 50% of its original intensity (keyvalue _fifty_percent_distance)
0 percent falloff distance: the distance at which light should end. Well... almost: it actually falls off to 1/256th of its original intensity, which is negligible (keyvalue _zero_percent_distance)
The good thing with this falloff system is that you can see the 2 spheres corresponding to the 2 distances you have configured in Hammer. Just make sure to have this option activated: 

     
    Models lighting
    An appropriate section for models lighting is needed, because it differs from brush lighting (but the falloff stays the same). In any current game engine, lightmaps can be used on models, a specific UV unwrap is even made specifically for lightmaps. But on Source Engine 1 (except for Team Fortress 2) you cannot use lightmaps on models. 
The standard lighting method for models is named per-vertex lighting. This time, light won’t be applied to faces but to vertices, all of the model’s vertices. For each one of them, VRAD will compute a color and brightness to apply. Finally, the Source engine will draw a gradient between the vertices across each triangle. For example:

    If we take a simple example of a sphere mesh with 2 different light entities next to it, we can see it working.
                
With this lighting method, models will therefore be integrated in the environment with appropriate lighting. The good thing is that if part of the model is in a dark area and another part is in a bright area, the situation will be handled properly. The only requirement is that the mesh must have a sufficient level of detail; if there is a big planar area without additional vertices on it, the lighting detail could be insufficient. 
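The gradient itself is plain barycentric interpolation of the three vertex colors across each triangle. A rough sketch of the blend (not VRAD’s actual code; the triangle and colors are made up):

def blend_vertex_light(p, tri, colors):
    # barycentric blend of three per-vertex colors at point p inside tri
    (ax, ay), (bx, by), (cx, cy) = tri
    px, py = p
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    w2 = 1.0 - w0 - w1
    # weighted sum, channel by channel
    return tuple(w0 * c0 + w1 * c1 + w2 * c2 for c0, c1, c2 in zip(*colors))

triangle = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
vertex_colors = ((255, 0, 0), (0, 255, 0), (0, 0, 255))  # one color per vertex
print(blend_vertex_light((0.25, 0.25), triangle, vertex_colors))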
    Here is an example of a simple square mesh with few triangles on the left and a lot on the right. With the complex mesh, the lighting is better, but more expensive. 

If you need a complex mesh for your lighting but don’t want your model to be too expensive, you have to find a balance. 
Two VRAD options are needed to make per-vertex lighting work:
-StaticPropLighting
-StaticPropPolys
You have to add them here (for example, appended to your VRAD command line: vrad -StaticPropLighting -StaticPropPolys mymap). You can find more information here.
Another system exists that is much cheaper and simpler. Instead of computing the lighting of every vertex, the engine only deals with the model’s origin: whatever is computed at the origin is displayed on the whole model. This can be an issue if the model is big or sits in an area with a lot of lighting contrast. The best example is at the beginning of Half-Life 2, with trains entering and exiting tunnels. We can see the issue: the model is illuminated at first, but when it enters the tunnel it suddenly turns dark; that moment is when the train’s origin passes into the shadow. 
    This cheap lighting method will replace the per-vertex lighting for 3 types of models:
For prop_dynamic or any kind of dynamic model used in the game (NPCs, weapon models in hand, any animated models...)
For prop_physics
For ANY MODEL USING A NORMAL MAP (vertex lighting causes issues with normal maps apparently), EVEN IF USED AS A PROP_STATIC
The big problem with these models is their integration in the map: they won’t show any shadow, and their lighting will be very flat and boring (because the same value is used for the whole model). Fortunately, there are 2 good things about this cheap lighting method. 
First, the direction light comes from is taken into account: if blue light comes from one direction, all the faces oriented toward that direction will be tinted blue. And if you have different lighting colors/intensities coming from different sides of your model, they should appear in game. 
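In other words, each face picks up a light’s color according to how much it faces the incoming direction, in a Lambert-like fashion. A tiny sketch of that idea (a simplification of whatever the engine really does; vectors are assumed normalized):

def face_light(normal, to_light, light_color):
    # cosine of the angle between the face normal and the direction to the light
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
    return tuple(ndotl * c for c in light_color)

blue = (0.0, 0.0, 255.0)
print(face_light((-1, 0, 0), (-1, 0, 0), blue))  # faces the light: fully blue
print(face_light((1, 0, 0), (-1, 0, 0), blue))   # faces away: unaffected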
Here is an example of a train model using a normal map, with two lights, one on each side. If you look closely, you’ll see some blue lighting on the left, on faces that are supposed to be in the shadow of the blue light but are oriented toward it.
     

     
The second good thing is that there is still some kind of dynamic per-vertex lighting, but much simpler: it only works with light and light_spot entities (NOT with light_environment), and it can only add light to the prop, never cast any shadow (it only takes into account, dynamically, the distance between the light and the vertex). Take again the high-poly plane mesh we had before, as a prop_dynamic parented to a func_rotating that ... rotates: light dynamically brightens the vertices of the prop. There is a limit of 3 dynamic lights per prop; it can’t handle more at the same time.

And if you add a normal map to your model’s texture, this cheap dynamic lighting works on it:

     
    Projected texture and Cascaded Shadows
A few words on dynamic lighting to finish this study. Projected textures are a technology that appeared with Half-Life 2: Episode Two in 2007; it consists of a point entity projecting a texture in a chosen direction, with a chosen opening angle (fov). The texture is projected with emissive properties (it can only increase the brightness, not lower it) and it can generate shadows or not. The great thing about this technology is that it’s fully dynamic: the env_projectedtexture can move and/or aim at moving targets. It is used, for example, for flashlights in Source games. But as usual, there is also a drawback: most of the time you can only use 1 projected texture at a time. Modders can change this value quite easily, but on Valve games it is always locked at 1. 
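The “chosen direction with a chosen opening angle” part boils down to a cone test. A minimal sketch (invented numbers; the real entity projects a full frustum with a texture, near/far planes, and so on):

import math

def in_projection_cone(origin, aim, fov_deg, point):
    # True if 'point' falls inside the projector's opening angle
    to_p = tuple(p - o for p, o in zip(point, origin))
    d = math.sqrt(sum(c * c for c in to_p))
    a = math.sqrt(sum(c * c for c in aim))
    cos_angle = sum(t * v for t, v in zip(to_p, aim)) / (d * a)
    return cos_angle >= math.cos(math.radians(fov_deg / 2.0))

print(in_projection_cone((0, 0, 0), (1, 0, 0), 90.0, (10, 4, 0)))   # True
print(in_projection_cone((0, 0, 0), (1, 0, 0), 90.0, (10, 20, 0)))  # False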

The cascaded shadows system is only used in CS:GO. The concept is quite similar to a projected texture, but it doesn’t increase the brightness; it only adds finer shadows. It is used for environment lighting, works with much smaller luxels than the lightmaps, and is fully dynamic. It starts from the tools/toolsskybox textures of the map and casts shadows whenever it meets an obstacle. Shadows from the lightmap are most of the time low resolution, and the transition between a bright and a dark area is blurry and wide; the cascaded shadows can therefore draw a crisp shadow around the one from the lightmaps.

When an object is too small to get a shadow in the lightmap, it will still be visible thanks to the cascaded shadows. There are 3 levels of detail for cascaded shadows in Counter-Strike; you can configure the maximum distance at which the cascaded shadows will work in the env_cascade_light entity, with the Max Shadow Distance parameter (by default it’s 400 units). The levels of detail will be distributed within this range, for example: 

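Valve does not document how the three ranges are split, but a common cascaded-shadow-map heuristic blends uniform and logarithmic splits. A sketch of that standard scheme (an assumption for illustration, not CS:GO’s actual values):

def cascade_splits(max_dist, cascades=3, near=8.0, blend=0.6):
    splits = []
    for i in range(1, cascades + 1):
        f = i / cascades
        log_split = near * (max_dist / near) ** f   # finer detail up close
        lin_split = near + (max_dist - near) * f    # even coverage far out
        splits.append(blend * log_split + (1.0 - blend) * lin_split)
    return splits

print([round(s) for s in cascade_splits(400.0)])  # three ranges out to 400 units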
    Since cascaded shadows and projected textures share some technology, you can’t use them both at the same time.
     
    Conclusion
I really hope you have found this article interesting and learned at least a few things from it. I believe most of this information is not the easiest to find, and it’s always good to know how your tools work, to understand their behavior. Source Engine 1 is old and its technologies might not be used anymore in the future; more powerful technologies are released frequently, but it’s always good to know your classics, right? 
I would like to thank Thrik and ’RZL for supporting me in writing this article, and long live the Core!
    // Written by Sylvain "Leplubodeslapin" Menguy
    Additional commands for fun
Mat_luxels 1                     // Allows you to see the lightmap grids
Mat_fullbright 1                 // Disables all the lighting (= fullbright). On CS:GO, cascaded shadows stay and you should delete them as well (cf. next command)
Ent_fire env_cascade_light kill  // KILL WITH FIRE the cascaded shadows entity
Mat_drawgray 1                   // Replaces all the textures with a monochrome grey texture, useful to work on your lighting
Mat_fullbright 2                 // Alternative to Mat_drawgray 1
Bonus:
    Mat_showlowresimage 1           // Minecraft mode
  16. Like
    leplubodeslapin reacted to FMPONE for an article, 2015: Mapcore's Year in Review   
    (Art by Thurnip)
     
    This overview proves how talented our community is. We share, give feedback and learn from one another. Lots of our members have made it into the game industry and continue to make their mark working for high-profile studios. Our articles were shared around the world and our collaborative CS:GO contest was a huge success. We can only conclude that 2015 was again a stellar year for the Core and we are looking forward to an even better 2016!   
     
    2015: Mapcore's Year in Review
    It was a banner year. Here’s a taste of what our community created:

    Temple of Utu by Minos 

    Corridor by JonnyPhive

    Rails by Deh0lise

    Cold Fusion by Rusk

    Half-Life 2 Scene by Psy

    Resort by 'RZL and Yanzl

    Zoo by Squad and Yanzl

    Santorini by FMPONE and Dimsane

    Corridor by RaVaGe

    Seat by penE

    Half-Life 2 UE4 Corridor by PogoP

    Tulip by catfood

    Volcano by 2d-chris

    Chilly UE4 Scene by TheOnlyDoubleF
    Articles
    High-quality original content:






    Grand Prize Winner Announced


    Hurg Smiles Upon You All!
     
     
  17. Like
    leplubodeslapin reacted to will2k for an article, Viability of Hostage Rescue Scenario in CS:GO   
This level design article is about the past and the present of the hostage rescue mode in Counter-Strike. It showcases the inherent issues that accompanied the scenario and allowed the bomb/defuse mode to gain traction and popularity. The article will also present what can be done, level design wise, to remedy some of the shortfalls and make the scenario viable again.
    A historical background
Counter-Strike officially started life in June 1999 with the release of beta 1, and it shipped with four maps, that’s right, four whole maps. They were all hostage rescue maps, and the prefix used for these maps was cs_ as opposed to the standard deathmatch maps starting with dm_. This prefix was an abbreviation of the game’s name (Counter-Strike), which hints at the hostage rescue scenario being the only one in the minds of Gooseman and Cliffe, the creators of CS, at the time of launch.
Fast forward a couple of months: beta 4 rolled out in November 1999, bringing to the table a new scenario, bomb defuse. The new maps carried the prefix de_, and while one would think that the hostage rescue maps would be switched to an hr_ prefix, they kept the same prefix, which started to be referred to as the “Classic Scenario”. Counter-Strike was built on the hostage rescue scenario.
    I started playing CS in beta 2 in August 1999 (I totally missed beta 1, screw me) and maps like Assault and Siege were all the rage at LAN parties. The nearest LAN/internet café was a 5-minute drive from my place, and LAN parties with friends used to be a blast full of shouting, cursing, bluffing, noob-trashing; the standard menu for a CS session. Good times.
    Siege, the oldest CS map (beta 1), and Assault (beta 1.1) were the epitome of the game. You had to dive in as a CT deep into the T stronghold to rescue the hostages and bring them back to safety. These maps were the most played on LANs and embodied the style of early CS gameplay. At the LAN place where I used to wage my virtual battles, Assault equaled CS, literally. A fun fact is that when Dust came out, I started a LAN session with this map and everyone in the room shouted at me: "What the hell is this? We wanna play CS!" For my friends, Assault was CS.
    However, those rosy days for hostage rescue began to turn into grim grey when folks started playing bomb defuse scenario and realized how…fun it was. A map like Dust almost single-handedly pushed the scenario into higher ground with its bright environment/textures, clear/wide paths and its ease of use and noob-friendliness. A year later, around Summer 2000, Counter-Strike was now equivalent to Dust for my friends.
    How did this happen? What went wrong?
    Inherent flaws of hostage rescue
    Hostage rescue is a very delicate and tough scenario for law enforcement operators in the real world. It puts the assailing team at a great disadvantage against heavily-armed barricaded hostage-takers who are probably using civilian hostages as human shields and as a bargaining chip for a later escape.
As you can deduce, transferring this scenario as realistically as possible into the game does not fare well, and this disadvantage carries over to the CT team. The problem is only exacerbated when you add the more or less “flawed” game mechanics to the scenario. This is exactly what went wrong with the hostage rescue scenario, in case you are still wondering about the rhetorical questions at the end of the historical background introduction. The popularity of the cs_ scenario started dwindling, and the rise of the bomb/defuse scenario only made things worse.
Almost all the early cs_ maps featured a relatively tiny hostage zone/room with one entryway, usually sealed with closed doors that the CTs must open to get access inside. This room was typically located behind T spawn, which made the area a camping ground and made camping that zone an obvious and rewarding tactic for Ts. The doors having to be manually opened, with a loud sound, made things worse and negated any surprise or sneaky rush towards the hostages. A classic example is the hostage area and T spawn in cs_assault.

    I dare not think of how many Ts are camping behind those doors
    Another equally important camp fest occurred in the hostage rescue zone. Early designs made the rescue zone relatively small with one or two access paths that can be defended from one location. If the CT team manages to reach the hostages and rescue them, the Ts could easily fall back to the rescue zone to camp and patiently wait for the CTs to show up. The hostage rescue zone in cs_italy is a nice example to showcase how one T could camp in the southernmost spot in the zone allowing him to monitor both entryways, from market and from wine cellar, within the same field of view. CT slaughter was almost a guaranteed thing to happen.

    A CT will show up any second now; imminent slaughter commencing in ...3, 2, 1
    A third flaw was the hostages themselves. They were difficult to escort and protect and were easily stuck or left behind in various parts of the maps between their initial hostage zone and the final rescue zone. I lost count of how many times I rescued the hostages and ran as fast as I could to the rescue zone, reaching it with a big grin on my face only to turn around and find out that only one or two of the four hostages actually followed me; the others were randomly stuck on a ladder, door frame, window ledge, vent, chair, table…I could go on but my blood is starting to boil just thinking of this.
    To add insult to injury, hostages could also be killed or “stolen” for ultimate trolling. When Ts were stacked on money, they could easily kill all the hostages, basically turning the round to a frustrating terrorist hunt for CTs. In early CS versions, a CT teammate could press the “use” key on a hostage that you were already escorting to steal it. This would leave you helplessly wondering where the hell did the 4th hostage go in case you did not catch the teammate performing the action.
Lastly, maps themselves contributed to the issues that were piling up against the hostage rescue scenario. If you are a CS veteran and you were around the early betas in 1999, you would most certainly remember how quickly hostage rescue maps were pruned from one beta to another; some maps even had a life span of 1 week before being discarded from the official roster. Most of these early cs_ maps featured dark, nightly environments that were unfriendly to both newcomers and established players. Other maps had a confusing-as-hell labyrinthine layout that baffled even players with the best sense of direction and made remembering paths nigh impossible. Some of these maps had narrow twisted paths and choke points, vents, and ladders that not only frustrated players (especially CTs) but also made rescuing and escorting the hostages wishful thinking. The icing on the cake was the different gimmicks introduced in some maps that made a frustrating gameplay/layout even more annoying: some maps had a machine gun nest in T spawn allowing Ts to master and perfect the art of CT slaughtering, while other maps had flammable drums that could be shot and blasted for the ultimate carnage right next to the hostage zone. Good example maps include cs_prison, cs_bunker, cs_iraq, cs_hideout, cs_facility, cs_desert, among many others.
    Meanwhile, bomb/defuse scenario was gaining grounds at an increased rate and before too long, hostage rescue was relegated to a distant second place in terms of popularity among players and level designers alike.
    As a small experiment, I tallied the number of custom hostage and defuse maps submitted on Gamebanana for Counter-Strike Source and Global Offensive. For CS:GO, there are 761 de_ maps against 157 hostage maps while for CS:S, the figures are 4060 de_ for 1244 cs_ maps. The disparity is rather meaningful as the ratio in CS:GO is 4.85:1 while for CS:S the number is 3.26:1. This means that for each hostage map in CS:GO there are almost five maps of bomb/defuse whereas this number drops slightly to almost three maps for CS:S. With CS:GO putting extra focus on competitive gameplay, this ratio is bound to further grow widening the rift between bomb/defuse and hostage rescue maps.
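If you want to double-check the arithmetic, the ratios come straight from the raw counts:

# raw Gamebanana submission counts quoted above: (de_ maps, cs_ maps)
map_counts = {"CS:GO": (761, 157), "CS:S": (4060, 1244)}
for game, (de, cs) in map_counts.items():
    print(game, round(de / cs, 2))  # CS:GO 4.85, CS:S 3.26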
    That’s it? Is it done for cs_ maps? Shall we prepare the obituary or is there a magical solution to breathe some fire and life in them?
    Solutions for viability
    There is a magical solution that involves you transferring a large sum of cash to my bank account, then my “guys” will contact your “guys” to deliver the “solution”. The drop point will be at the…apparently, there has been a mix-up, this is for another “deal” …nervous chuckle.
    Seriously though, while there is no magical solution that will lift hostage rescue onto the rainbow, there are a couple of things that level designers can do to start injecting some momentum to the scenario. Luckily for us, Valve has already paved the way (so these “Volvo pls fix pls” do work after all?). In March 2013, Valve introduced a major CS:GO update that completely overhauled the hostage rescue scenario mechanics and introduced cs_militia as well. The update was a game changer and a much needed tweak towards a better hostage rescue gamemode.
We now have two hostages instead of four, and the CTs only need to rescue one of them to win the round. Moreover, the hostage does not stupidly follow the CT, but instead is carried on the CT’s shoulders. Obviously, the movement speed of the CT carrying the hostage is decreased, but this “inconvenience” is countered with added bonus round time and the fact that the CT doesn’t have to glance over his shoulder every five seconds to make sure the hostages are still following him (this kind of distraction can prove fatal to the CT escorting the hostages). The hostages’ spawn locations are randomized and can be controlled by the level designer. A nice change is that hostages can’t die anymore, thus cutting any chance of Ts trolling (you still lose money when you shoot a hostage; shooting a hostage is pretty pointless now, akin to shooting yourself in the foot).
    This is all good news if you ask me; hostage rescue is on the right path to become popular and viable again. With Valve doing the first half of the change, level designers have the duty to continue with the second half.
    Hostage defuse?
As a first suggested solution, let us start treating hostage rescue like bomb defuse. Let’s be honest, bomb defuse works really well, so why not transfer this “experience” into hostage rescue. What we can do is have a hostage rescue map’s layout mimic that of a bomb defuse map; that is, have two hostage zones placed similarly to two bomb sites. We need to start treating a hostage zone like a bomb site, with all the accompanying techniques of rushing, pushing, faking, peeking, holding, smoking, flashing, etc. The good thing about this is that whatever knowledge, skill, and layout awareness players have acquired from defuse scenarios will transfer effortlessly to the hostage rescue scenario; you do not need to learn new tactics and strategies. The roles will simply be inverted: instead of Ts rushing bomb sites and CTs defending, CTs will push hostage zones and Ts will defend and rotate.  
    Sounds logical, right? Some people might argue that having 2 separate hostage zones is not “realistic” and my answer is Counter-Strike was never about realism (carrying and running around with a 7 kg (15.5 lb), 1.2 m (47.2 inch) AWP sniper rifle with 25x telescopic sight, quickscoping and headshotting opponents is the epitome of “realism”). If you want a realistic hostage rescue scenario, then you are better off playing the original Rainbow Six Rogue Spear and SWAT 3 from 1999, or the more recent ARMA and Insurgency for a realistic military setting. I practice what I preach and I already implemented this technique in my last map “cs_calm”. The map was a remake of my CS 1.5 map from 2003 and obviously I made the “mistake” at that time to follow the trend set by official maps of having one hostage zone right behind T spawn. A playtest on Reddit CS:GO servers back in March 2015 confirmed that this setup won’t work well as Ts will inevitably abuse the hostage zone.
    I made some radical layout changes towards T spawn and hostage zone and created two new hostage zones on the upper and lower levels of the map that are connected by a back hallway to allow quick rotations (in addition to the one through T spawn). Obviously, there is no direct line of sight between hostage zones to prevent 1-zone camping. Ts have absolutely no incentive to camp one zone as CTs can reach the other one, rescue the hostage and head back to the rescue zone without being spotted from the other zone. CTs actually have a chance of winning the round by rescuing the hostages.
    I like to believe the new layout worked well. Only time and more hostage rescue maps will tell.

    Layout of the map "cs_calm"
    Rescue zone anti-camping
    We have remedied the hostage zone camping but we still need to tend to the rescue zone camping issue. A solution to this is to have two rescue zones in a similar setup to what is nicely done in cs_office. While Ts can still camp one zone, they risk a big chance of having CTs reach the other rescue zone. Again, CTs will have a viable option to save the hostages without being shredded by camping Ts. If the layout does not allow or facilitate having two rescue zones, then one big rescue zone with multiple entrances (three is a good number) should work fine. The trick here is to have the entrances not easily covered within the same field of view to prevent camping.
    Into the zone
Just as we established that we should treat hostage zones like bomb sites, it goes without saying that each hostage zone should have at least 2 to 3 entry points. It’s pretty pointless to have only one entrance, as this totally defeats the purpose of spreading hostages into two zones. The different entryways should also not all be coverable within the same field of view of one T; if a T decides to camp the zone, he should at most be able to cover two entrances from one point, leaving the third one more or less at a dead angle and viable for a CT rush or a stealthy surprise. 

    Showcase of Hostage Zone A on the map "cs_calm"
    The above screenshot showcases “Hostage Zone A” in cs_calm. A terrorist will typically camp near the hostage covering the two encircled entrances. The third entrance from upper level denoted by the arrow is not in the direct FOV, and is prone to a surprise attack by CTs that could catch the camping T off guard. If possible, try to spread the entrances on different vertical levels to spice things up and keep Ts on their toes.
    Lastly, it is a good idea to have a connector between hostage zones to allow fast rotations but without having a direct line of sight between hostage zones. We want to make the scenario fairer to CTs but not at the expense of Ts, inadvertently making it unfair for them.
    Conclusion
    Hostage rescue is a fun scenario if you ask me. It had many inherited and added flaws that contributed to its waning but it’s nothing that can’t be reversed. We, as level designers, need to push some changes to put the scenario back on track. What I just showcased in this article might not be the only viable solutions but they certainly are a step in the right direction. Level designers are intimidated by players who shun away from cs_ maps, and this turns into a vicious circle where players avoid hostage rescue maps and mappers in return avoid designing them. We need to break this cycle and designers need to bravely embrace the solutions I presented here or come up with their own solutions. The more cs_ maps that come out and get tested, the more we could validate these solutions as viable.
    In either case, we need to get proactive towards hostage rescue scenario; after all, this is the cornerstone that Counter-Strike was built upon.
  18. Like
    leplubodeslapin got a reaction from Pawl for an article, Source Lighting Technical Analysis: Part One   
After the announcement of the Reddit + Mapcore mapping contest, the website has welcomed many newcomers. Proof that, even though it is a twelve-year-old game engine, Source still attracts map makers, and there are lots of reasons for that. It is common knowledge that technology has moved forward since 2003, and many new game engines have found various techniques and methods to improve their renderings, making the Source engine look older and older. Nevertheless, it still has the very specific visual aspect that makes it appealing. The lighting system in Source is most definitely one of the key aspects of that, and by the end of this article you will know why.
     
    About the reality...
Light in the real world is still a subject with a lot of pending questions; we do not know exactly what it is, but we have a good idea of how it behaves. The most common physical model of light is the photon, represented as a single-point particle moving through space. The more photons there are, the more powerful light is. But light is at the same time a wave: depending on the wavelength, light can have all kinds of color properties (monochrome or combined colors). Light travels through space without needing matter to do so (space itself is the best example; even without matter, the sun can still light the earth). And when it encounters matter, different kinds of things can happen:
Light can bounce and continue its travel in another direction
Light can be absorbed by the matter (and the energy transformed into heat)
Light can go through the matter, for example with air or water; some properties might change, but it goes through
All these things can be combined or happen individually. If you can see any object outside, it is only because a massive amount of photons traveled through space, through the earth’s atmosphere, bounced on all the surfaces of the object you are looking at, and finally came into your eyes.
    How can such a complex physical behavior from nature be simulated and integrated into virtual 3D renderings?
One of the oldest methods is still used today because of its accuracy: the ray-tracing method. Just to be clear, it is NOT used in game engines because it is incredibly expensive, but I believe it is important to know how and why it has been made the way it is, since it probably influenced the way lighting is handled in Source and most video game engines. Instead of simulating enormous amounts of photons traveling from the lights to the eye/camera, it does the exact opposite. If you want a picture with a 1000x1000 resolution, you will only need to simulate the travel of 1,000,000 photons (or “rays”), 1 for each pixel. Each ray is calculated individually until it reaches a light origin, and the result is the color of 1 pixel in the full picture. 
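As a toy illustration of the counting argument (trace_ray is a stub standing in for the whole recursive ray-tracing computation):

def trace_ray(x, y):
    # stub: follow the ray from the camera through pixel (x, y) back to
    # the lights, bouncing on surfaces along the way
    return (0, 0, 0)  # the resolved color of that pixel

width = height = 1000
image = [[trace_ray(x, y) for x in range(width)] for y in range(height)]
print(width * height)  # 1,000,000 rays, one per pixel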
    By using the laws of physics we discovered centuries ago, we can obtain a physically-accurate rendering that looks incredibly realistic. This method is used almost everywhere, from architectural renderings to movies. As an example, you can watch The Third & The Seventh by Alex Roman, one of the most famous CGI videos of all time. And because it is an efficient way to render 3D virtual elements with great lighting, it will influence other methods, such as the lightmap baking method.
     
    Lightmap baking
    OKAY LET’S FINALLY TALK ABOUT THE SOURCE ENGINE, ALRIGHT!
A “lightmap” is a grid that is applied to every single brush face in your map. The squares defined by the grid are called luxels (they are a kind of “lighting pixel”). Each luxel gets 2 properties of its own: a color and a brightness. You can see the lightmap grids in Hammer by switching your 3D preview to 3D lightmap grid mode.

    You can also see them in-game with the console command mat_luxels 1 (without and with).
During the compilation process, a program named VRAD.exe is used. Its role is to find the color and brightness to apply to every single luxel in your map. Light starts from the light entities and from the sky (from the tools/toolsskybox texture actually, using the parameter values that have been filled in the light_environment entity), travels through space, and when it meets a brush face:
It is partially absorbed in the lightmap grid
A less bright ray bounces from the face
Here is an animated picture to show how a lightmap grid can be filled with a single light entity:

    When you compile your map, at first the lightmaps are all full black, but progressively VRAD will compute the lightmaps with all the light entities (one by one) and combine them all at the end. Finally, the lightmaps obtained are applied to the corresponding brush faces, as an additive layer to the texture used on that face. Let us take a look at a wall texture for example.

    On the left, you have the texture as you can see it in hammer. When you compile your map, it generates the lightmaps and at the end you obtain the result on the right in-game. Unfortunately, luxels are much rougher, with a lower resolution, more like this.

On the left you have a lightmap grid with the default luxel size of 16 units generated by VRAD; a blur filter is applied and you obtain something close to the result on the right in-game.
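Conceptually, the baking pass that fills these grids boils down to accumulating every light’s contribution into every luxel. A toy sketch (ignoring bounces, occlusion, color and the real data layout):

import math

luxels = [(x * 16.0, 0.0, 0.0) for x in range(8)]                  # a strip of luxels
lights = [((0.0, 64.0, 0.0), 200.0), ((96.0, 32.0, 0.0), 100.0)]  # (origin, brightness)

lightmap = []
for luxel in luxels:
    total = 0.0                        # every luxel starts full black
    for origin, brightness in lights:
        d = math.dist(luxel, origin)
        total += brightness / d ** 2   # one light at a time, combined at the end
    lightmap.append(total)
print([round(v, 4) for v in lightmap])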
In case you did not know, you can change the lightmap grid scale with the “Lightmap Scale” value in the texture tool. It is better to use values that are powers of 2, such as 16, 8, 4 or even 2. Do not go below 2; it might cause issues (with decals for example). Only use values lower than the default 16 if you think it’s really useful, because precise lightmap grids drastically increase your map file size and compilation time. Of course, you can also use greater values in order to optimize your map, such as 32, 64 or even 128 on very flat areas or surfaces that are far away from the playable areas. You can get more info about lightmaps on Valve’s Wiki page.

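To see why lowering the scale gets expensive, count the luxels on a single face. A rough back-of-the-envelope sketch (the real storage layout differs):

def luxel_count(width_units, height_units, lightmap_scale=16):
    # one luxel per grid cell, plus one row/column for the face borders
    cols = width_units // lightmap_scale + 1
    rows = height_units // lightmap_scale + 1
    return cols * rows

print(luxel_count(256, 128))     # 153 luxels at the default scale of 16
print(luxel_count(256, 128, 4))  # 2145 luxels: an order of magnitude more data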
But as we said before, light also bounces off the surface until it meets another brush, following radiosity algorithms. Because of that, even if a room does not have any light entity in it, rays can bounce off the floor and light the walls/ceiling, so the room is not fully black. 
    Here’s an example:

The maximum number of bounces can be set with the VRAD command -bounce X (with X being the maximum number of bounces allowed). The default value of 100 should be more than enough.
Another thing taken into account by VRAD is the normal direction of each luxel: light that hits the luxel head-on and light that merely grazes it will not behave in the same way. This is what we call the angle of incidence of light.

Let us take the example of a light_spot lighting a cylinder: the light will brighten the surface gradually, from fully bright at the bottom to barely visible at the top.

    In-hammer view on the left, in-game view on the right
     
    Light Falloff laws
One of the things that made Source engine lighting much more realistic than any other in 2004 is the light falloff system. Alright, we saw that light can travel through space until it meets something, but how does it travel through space? At the same brightness, whatever the distance between the light origin and destination? Maybe sometimes yes… but most of the time no.

     
    Imagine a simple situation of a room with 1 single point light inside. The light is turned on, it produces photons that are going in all the directions around it. As you might imagine, photons are all going in their own direction and have absolutely no reason to deviate from their trajectory.
     
     
     
At one instant, picture billions of photons going in all possible directions around the light; a moment later, they are all a bit further along their own trajectories, and all the photons are still there, in this “wave”. But as each photon follows its own trajectory, they all spread apart, making the photon density lower and lower.
As we said before, the more photons there are, the more powerful light is. And the higher the density, the more intense the light is. The intensity of light can be expressed like this:
Intensity = (number of photons) / (area over which they are spread)
     
    You have to keep in mind that all of this happens in 3D, therefore the “waves” of photons aren’t circles but spheres. And the area of a sphere is its surface, expressed like this:
A = 4 × π × R²
    (R is the radius of the sphere)
     
    If we integrate that surface area in the previous equation:
I = ♥ / (4 × π × R²) = ♥ / R² (absorbing the constant 4π factor into ♥)
With ♥ being a constant number. We can see the intensity is therefore proportional to the inverse of the square of the distance between the photons and their light origin. 
So, the further light travels, the lower its intensity. And the falloff is proportional to the inverse of the square of the distance.
    Consequently, the corners of our room will get darker, because they are farther away from the light (plus they don’t directly face the light, the angle of incidence is lower than the walls/floor/ceiling).

This is what we call the inverse-square law; it’s a very well-known behavior of light in the fields of photography and cinema, where people have to deal with it to get the best exposure they can.
This law holds when light spreads in all possible directions, but you can also focus light in one direction and reduce the spread, with lenses for example. This is why, when Valve integrated a lighting falloff law into their engine, they chose a method that not only follows the inverse-square law but also gives map makers the opportunity to alter the law for each light entity.
     
    Constant, Linear, Quadratic... Wait, what?
In math, there is a very common type of function, named polynomial functions. The concept is simple: it’s a sum of several terms, like this:
f(x) = a0 + a1×x + a2×x² + … + an×xⁿ
Every time, there is a constant factor (the “a” thing, a0 being the first one, a1 the second one, a2 the third one...), multiplied by the variable x raised to a certain degree:
x^0 = 1 : degree 0
x^1 = x : degree 1
x^2 : degree 2
x^3 : degree 3
...
And
a0 is the constant named “constant coefficient” (associated with degree 0)
a1 is the constant named “linear coefficient” (associated with degree 1)
a2 is the constant named “quadratic coefficient” (associated with degree 2)
Usually, the function has an end, and we name it by the highest degree of x it uses. For example, a “polynomial of the second degree” is written:
f(x) = a0 + a1×x + a2×x²
    Then, if we take the expression from the inverse-square law, which was:
I(D) = ♥ / (a2 × D²)
    With a2 = 1 and D being the variable of distance from the light origin.
    In Source, the constant ♥ is actually the brightness (the value you configure here).
    It is simply an inverse polynomial of the second degree, with a0 and a1 equal to zero. And we could write it like this:
I(D) = ♥ / (0 + 0×D + 1×D²)
    Or...
I(D) = ♥ / (a0 + a1×D + a2×D²)
And here you have it! This is approximately the equation used by VRAD to determine the intensity of light for each luxel during compilation. And you can alter it by changing the values of the 3 variables constant, linear and quadratic for any of your light / light_spot entities in your level.
    Actually you set proportions of each variable against the other two, and only a percentage for each variable is saved. For example:

    Another example:

By default, constant and linear are set to 0 and quadratic to 1, which means a 100% quadratic lighting attenuation. Therefore, by default, lights in the Source engine follow the classic inverse-square law.
If you look at the page dedicated to the constant-linear-quadratic falloff system on Valve’s Wiki, it’s explained that the intensity of light is boosted by 100 for the linear part of the equation and by 10,000 for the quadratic part. This is because inverse formulas always drop drastically at the beginning; without the boost, a light with a brightness of 200 would only be effective within a distance of about 5 units and would therefore be completely pointless.

You would have to boost your brightness a lot in Hammer to make the light visible; that’s what Valve decided to do automatically.
    The following equation is a personal guess of what could be the one used by VRAD:
I(D) = B × (constant + 100×linear + 10000×quadratic) / (constant + linear×D + quadratic×D²)
With constant, linear and quadratic being percentage values. The numerator determines the brightness to apply, boosting the value set in Hammer if the light at least partially uses linear or quadratic falloff. The denominator is the falloff part of the equation, attenuating the brightness depending on the distance D between the studied point and the light origin. 
    The best way to see how this equation works is to visualize it in a 2D graph: 
    https://www.desmos.com/calculator/1oboly7cl0
    This website provides a great way to see 2D graphics associated to functions. On the left, you can find all the elements needed with at first the inputs (in a folder named “INPUTS”), which are:
a0 is the Constant coefficient that you enter in Hammer
a1 is the Linear coefficient
a2 is the Quadratic coefficient
B is the Brightness coefficient
In another folder are the 3 coefficients constant, linear and quadratic, automatically transformed into a percentage form. And finally, the function I(D) is the Intensity function depending on the distance D. The drawing of the function is visible in the rest of the webpage. 
    Try to interact with it!
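And if you prefer code to graphs, the same guessed equation translates directly into a small Python function (same caveat as above: a reconstruction, not VRAD’s source):

def intensity(D, B, a0=0.0, a1=0.0, a2=1.0):
    # normalize the three Hammer coefficients into percentages
    total = a0 + a1 + a2
    constant, linear, quadratic = a0 / total, a1 / total, a2 / total
    # brightness boost: x100 for the linear part, x10000 for the quadratic part
    boost = B * (constant + 100 * linear + 10000 * quadratic)
    return boost / (constant + linear * D + quadratic * D ** 2)

# default Hammer settings (100% quadratic), brightness 200:
for D in (16, 64, 256, 1024):
    print(D, round(intensity(D, 200), 2))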
This concludes the first part; the second part will come in about two weeks. We will see some examples of application of this Constant-Linear-Quadratic Falloff system, and a simpler alternative. We will also see how lighting works on models, and the dynamic lighting systems integrated into Source games. Thank you for reading!
     
    Part Two : link
  19. Like
    leplubodeslapin got a reaction from FRAG for an article, Source Lighting Technical Analysis: Part One   
    After the announcement of the Reddit + Mapcore mapping contest, the website has welcomed many newcomers. A proof that, even if it is a twelve year old game engine, Source engine attracts map makers, and there are lots of reasons for that. It is common knowledge that technology has moved forward since 2003, and many new game engines have found various techniques and methods to improve their renderings, making the Source Engine older and older. Nevertheless, it still has its very specific visual aspect that makes it appealing. The lighting system in Source is most definitely one of the key aspects to that, and at the end of this article you will know why.
     
    About the reality...
    Light in the real world is still a subject with a lot of pending questions, we do not know exactly what it is, but we have a good idea of how it behaves. The most common physic model of light element is the photon, symbolized as a single-point particle moving in space. The more photons there are, the more powerful light is. But light is in the same time a wave, depending on the wavelengths light can have all kind of color properties (monochrome or combined colors). Light travels through space without especially needing matter to travel (the space is the best example; even without matter the sun can still light the earth). And when it encounters matter, different kind of things can happen:
    Light can bounce and continue its travel to another direction Light can be absorbed by the matter (and the energy can be transformed to heat) Light can go through the matter, for example with air or water, some properties might change but it goes through it And all these things can be combined or happen individually. If you can see any object outside, it is only because a massive amount of photons traveled into space, through the earth’s atmosphere, bounced on all the surfaces of the object you are looking at, and finally came into your eyes.
    How can such a complex physical behavior from nature be simulated and integrated into virtual 3D renderings?
    One of the oldest method is still used today because of its accuracy: the ray-tracing method. Just to be clear, it is NOT used in game engines because it is incredibly expensive, but I believe it is important to know how and why it has been made the way it is, since it probably influenced the way lighting is handled in Source and most videogame engines. Instead of simulating enormous amount of photons traveling from the lights to the eye/camera, it does the exact opposite. If you want a picture with a 1000x1000 resolution, you will only need to simulate the travel of 1 000 000 photons (or “rays”), 1 for each pixel. Each ray is calculated individually until it reaches the light origins, and at the end the result is 1 pixel color integrated in the full picture. 
    By using the laws of physics we discovered centuries ago, we can obtain a physically-accurate rendering that looks incredibly realistic. This method is used almost everywhere, from architectural renderings to movies. As an example, you can watch The Third & The Seventh by Alex Roman, one of the most famous CGI videos of all time. And because it is an efficient way to render 3D virtual elements with great lighting, it will influence other methods, such as the lightmap baking method.
     
    Lightmap baking
    OKAY LET’S FINALLY TALK ABOUT THE SOURCE ENGINE, ALRIGHT!
    A “lightmap” is a grid that is added on every single brush face you have on your map. The squares defined by the grid are called Luxels (they are kind of “lighting pixels”). Each luxel get its 2 own properties: a color and a brightness. You can see the lightmap grids in hammer by switching your 3D preview to 3D lightmap grid mode.

    You can also see them in-game with the console command mat_luxels 1 (without and with).
    During the compilation process, a program named VRAD.exe is used. Its role is to find the color and brightness to apply for every single luxel in your map. Light starts from the light entities and from the sky (from the tools/toolsskybox texture actually, using the parameter values that has been filled in the light_environment entity), travels through space and when it meets a brush face:
    It is partially absorbed in the lightmap grid A less bright ray bounces from the face Here is an animated picture to show how a lightmap grid can be filled with a single light entity:

    When you compile your map, at first the lightmaps are all full black, but progressively VRAD will compute the lightmaps with all the light entities (one by one) and combine them all at the end. Finally, the lightmaps obtained are applied to the corresponding brush faces, as an additive layer to the texture used on that face. Let us take a look at a wall texture for example.

    On the left, you have the texture as you can see it in hammer. When you compile your map, it generates the lightmaps and at the end you obtain the result on the right in-game. Unfortunately, luxels are much rougher, with a lower resolution, more like this.

    On the left you have a lightmap grid with the default luxel size of 16 units generated my VRAD, a blur filter is applied and you obtain something close to the result on the right in the game.
    In case you did not know, you can change the lightmap grid scale with the “Lightmap Scale” value with the texture tool. It is better to use values that are squares of 2, such as 16, 8, 4 or even 2. Do not go below 2, it might cause issues (with decals for example). Only use lower values than the default 16 if you think it's really useful, because you will drastically increase your map file size and compilation time with precise lightmap grids. Of course, you can also use greater values in order to optimize your map, with values such as 32, 64 or even 128 on very flat areas or surfaces that are far away from the playable areas. You can get more infos about lightmaps on Valve’s Wiki page.

    But as we said before, light also bounces from the surface until it meets another brush, using radiosity algorithms. Because of that, even if a room does not have any light entity in it, rays can bounce on the floor and light the walls/ceiling, therefore it is not full black. 
    Here’s an example:

    The maximum amount of bounces can be fixed with the VRAD command -bounce X (with X being the maximum amount of bounces allowed). The 100 default value should be more than enough.
    Another thing taken into account by VRAD is the normal direction of each luxel: if the light comes directly against the luxel or brushes against it, it will not behave in the same way. This is what we call the angle of incidence of light.

    Let us take the example of a light_spot lighting a cylinder, the light will bright gradually the surface - from fully bright at the bottom to slightly visible at the top.

    In-hammer view on the left, in-game view on the right
     
    Light Falloff laws
    One of the things that made the Source Engine lighting much more realistic than any others in 2004 is the light falloff system. Alright, we saw that light can travel through space until it meets something, but how does it travel through space? At the same brightness, whatever the distance is between the light origin and destination? Maybe sometimes yes… but most of the time no.

     
    Imagine a simple situation of a room with 1 single point light inside. The light is turned on, it produces photons that are going in all the directions around it. As you might imagine, photons are all going in their own direction and have absolutely no reason to deviate from their trajectory.
     
     
     
    At one time, let’s picture billions of photons going in all the directions possible around the light, the moment after, they are all a bit further in their own trajectory, and all the photons are still there, in this “wave”. But, as each photon follows its own trajectory, they will all spread apart, making the photon density lower and lower.
    As we said before, the more photons there are, the more powerful light is. And the highest the density, the more intense light is. Intensity of light can be expressed like this:

     
    You have to keep in mind that all of this happens in 3D, therefore the “waves” of photons aren’t circles but spheres. And the area of a sphere is its surface, expressed like this:

    (R is the radius of the sphere)
     
    If we integrate that surface area in the previous equation:

    With ♥ being a constant number. We can see the Intensity is therefore proportional to the reverse of the square of the distance between the photons and their light origin. 
    So, the further light travels, the lower is its intensity. And the falloff is proportional to the inverse of the square of the distance.
    Consequently, the corners of our room will get darker, because they are farther away from the light (plus they don’t directly face the light, the angle of incidence is lower than the walls/floor/ceiling).

    This is what we call the Inverse-Square law, it’s a very well-known behavior of the light in the field of photography and cinema. People have to deal with it to make sure to get the best exposure they can get.
    This law is true when light spreads in all possible directions, but you can also focus light in one direction and reduce the spread, with lenses for example. This is why, when Valve decided to integrate a lighting falloff law in their engine, they decided to use a method not only following the inverse-square law but also giving to mapmakers the opportunity to alter the law for each light entity.
     
    Constant, Linear, Quadratic... Wait, what?
    In math, there is a very frequent type of functions, named polynomial functions. The concept is simple, it’s a sum of several terms, like this:

    Every time, there is a constant factor (the “a” thing, a0 being the first one, a1 the second one, a2 the third one...), multiplied with the variable x at a certain degree:
    x^0 = 1 : degree 0 x^1 = x : degree 1 x^2 : degree 2 x^3 : degree 3 ... And
    a0 is the constant named “constant coefficient” (associated to degree 0) a1 is the constant named “linear coefficient” (associated to degree 1) a2 is the constant named “quadratic coefficient” (associated to degree 2) Usually, the function has an end, and we call it by the highest degree of x it uses. For example, a “polynomial of the second degree” is written:

    Then, if we take the expression from the inverse-square law, which was:

    With a2 = 1 and D being the variable of distance from the light origin.
    In Source, the constant ♥ is actually the brightness (the value you configure here).
    It is simply an inverse polynomial of the second degree, with a0 and a1 equal to zero. And we could write it like this:

    Or...

    And here you have it! This is approximately the equation used by VRAD to determine the intensity of light for each luxel during the compilation. And you can alter it by changing the values of the 3 variables constant, linear and quadratic, for any of your light / light_spot entity in your level.
    Actually you set proportions of each variable against the other two, and only a percentage for each variable is saved. For example:

    Another example:

    By default, constant and linear are set to 0 and quadratic to 1, which means a 100%quadratic lighting attenuation. Therefore, by default lights in Source Engine follows the classic Inverse-Square law.
    If you look at the page dedicated to the constant-linear-quadratic falloff system on Valve’s Wiki, it’s explained that the intensity of light is boosted by 100 for the linear part of equation and 10 000 for the quadratic part of equation. This is due to the fact that inverse formulas in equations always drop drastically at the beginning, and therefore a light with a brightness of 200 would only be efficient in a distance of 5 units and therefore completely pointless.

    You would have to boost your brightness a lot in hammer to make the light visible, that's what Valve decided to make automatically.
    The following equation is a personal guess of what could be the one used by VRAD:

    With constant, linear and quadratic being percentage values. The blue part is here to determine the brightness to apply, allowing to boost the value set in hammer if it is as least partially using linear or quadratic falloff. The orange part is the falloff part of equation, making the brightness attenuation depending of the distance the point studied is from the light origin. 
    The best way to see how this equation works is to visualize it in a 2D graph: 
    https://www.desmos.com/calculator/1oboly7cl0
    This website provides a great way to see 2D graphics associated to functions. On the left, you can find all the elements needed with at first the inputs (in a folder named “INPUTS”), which are:
    a0 is the Constant coefficient that you enter in hammer  a1 is the Linear coefficient a2 is the Quadratic coefficient B is the Brightness coefficient In another folder are the 3 coefficients constant, linear and quadratic, automatically transformed into a percentage form. And finally, the function I(D) is the Intensity function depending on the distance D. The drawing of the function is visible in the rest of the webpage. 
    Try to interact with it!
This concludes the first part; the second part will come in about two weeks. We will see some examples of application of this Constant-Linear-Quadratic Falloff system, as well as a simpler alternative. We will also see how lighting works on models, and the dynamic lighting systems integrated into Source games. Thank you for reading!
     
    Part Two : link
  20. Like
    leplubodeslapin got a reaction from FRAG for an article, Source Lighting Technical Analysis: Part Two   
This is the second part of a technical analysis of Source lighting; if you haven't read the first part yet, you can find it here. 
Last time, we studied lightmaps: how they are baked and how VRAD handles light's travel through space. We ended Part One with an explanation of the Constant-Linear-Quadratic Falloff system, along with a website that lets you play with its variables and see how the lighting falloff reacts to them. We will now continue with basic examples of things you can do with these variables.
     
    Examples of application
    Constant falloff
The simplest type of falloff is the 100% constant one: whatever the distance, the lighting theoretically keeps the same intensity. This is the kind of (non-)falloff used for sun lighting; the sun is so far from the map area that its rays are considered parallel and the light keeps its intensity. Constant falloff is also useful for fake lights: lights with a very low brightness that are only there to brighten up an area.
     
     

     
    Linear falloff

Another type of falloff is the 100% linear one. With this configuration, the light looks a bit artificial: it loses intensity with distance, but carries much farther than a 100% quadratic falloff. It can be very useful on spotlights; the lighting is smooth and powerful. Here is an example:
     

     
    Quadratic falloff

This is the default configuration for any light entity in Hammer, following, as we said before, the classic Inverse-Square law (100% quadratic falloff). It is considered the most natural and realistic falloff configuration. The biggest issue is that the brightness is boosted so much at short distances that you can easily end up with a big white spot. Here is an example, with a light placed 16 units from a grey wall:

     
This can also happen with linear falloff, but it is worse with quadratic. Simple solutions exist; the most common is to use not a light entity but a light_spot entity oriented away from the wall or ceiling the light is fixed to. You can widen the opening angle of your light_spot with the inner and outer angle parameters (by default the outer one is 45°; increase it to 85°, for example). If needed, you can also add a light with low brightness to light the ceiling/wall a bit.
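To see numerically why pure quadratic falloff burns out at short range while constant and linear do not, here is a quick comparison using the guessed formula from Part One (illustrative numbers only):

    def intensity(D, constant, linear, quadratic, brightness=200.0):
        # Same guessed formula as in Part One, with the wiki's boost factors.
        boost = constant + 100.0 * linear + 10000.0 * quadratic
        return brightness * boost / (constant + linear * D + quadratic * D * D)

    for D in (16.0, 64.0, 256.0):
        print(f"D={D:6.0f}  constant={intensity(D, 1, 0, 0):8.1f}"
              f"  linear={intensity(D, 0, 1, 0):8.1f}"
              f"  quadratic={intensity(D, 0, 0, 1):8.1f}")

At 16 units the quadratic light sits at roughly 7800 (the white spot from the example above), while the constant light stays at 200 whatever the distance.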

     
    50% & 0% FallOff
A second light falloff system exists, overriding the constant-linear-quadratic system when used. The concept is much simpler; you only have to configure 2 distances:
- 50 percent falloff distance: the distance at which the light should fall off to 50% of its original intensity
- 0 percent falloff distance: the distance at which the light should end. Well... almost: it actually falls off to 1/256th of its original intensity, which is negligible.
The good thing with this falloff system is that you can see the 2 spheres corresponding to the 2 distances you have configured in Hammer; just make sure the corresponding helper option is activated.
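As a side note, here is one plausible way these two distances could map back onto the polynomial falloff (this is my own guess, not confirmed VRAD behavior): fix the constant coefficient to 1 and solve for the linear and quadratic ones, so that intensity drops to 1/2 at the first distance and 1/256 at the second:

    # Guess: with I(D) = B / (1 + l*D + q*D^2), find l and q such that
    # I(d50) = I(0)/2 and I(d0) = I(0)/256, i.e. the linear system
    #   l*d50 + q*d50^2 = 1
    #   l*d0  + q*d0^2  = 255
    def fit_fifty_zero(d50, d0):
        det = d50 * d0 * d0 - d0 * d50 * d50
        l = (d0 * d0 - 255.0 * d50 * d50) / det
        q = (255.0 * d50 - d0) / det
        return l, q

    # This naive fit only yields non-negative coefficients when
    # 16*d50 <= d0 <= 255*d50 (roughly); outside that window it breaks down.
    print(fit_fifty_zero(64.0, 1024.0))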

     
    Models lighting
A dedicated section on model lighting is needed, because it differs from brush lighting (though the falloff stays the same). In most current game engines, lightmaps can be used on models; a specific UV unwrap is even made just for them. But in Source Engine 1 (except for Team Fortress 2), you cannot use lightmaps on models.
The standard lighting method for models is named Per-Vertex Lighting. This time, light is not computed for faces but for vertices: all of the model's vertices. For each one of them, VRAD computes a color and brightness to apply, and the Source Engine then blends a gradient between the vertices across each triangle. For example:

If we take the simple example of a sphere mesh with 2 different light entities next to it, we can see it working.

With this lighting method, models are integrated into the environment with appropriate lighting. The good thing is that if one part of the model is in a dark area and another part is in a bright area, the situation is handled properly. The only requirement is that the mesh must have a sufficient level of detail: if there is a big planar area without additional vertices on it, the lighting detail can be insufficient.
Here is an example of a simple square mesh, with few triangles on the left and many on the right. With the dense mesh, the lighting is better, but more expensive.

A dense mesh improves the lighting, but you don't want your model to be too expensive, so you have to find a balance.
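To make the gradient step concrete, here is a minimal sketch (Python, toy data) of the kind of blending the engine performs: a color is computed at each of a triangle's three vertices, then interpolated across the surface with barycentric weights:

    def blend_vertex_colors(p, tri, colors):
        # Barycentric weights of point p inside a 2D triangle.
        (x1, y1), (x2, y2), (x3, y3) = tri
        det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
        w1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
        w2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
        w3 = 1.0 - w1 - w2
        # Blend the three vertex colors with those weights.
        return tuple(w1 * a + w2 * b + w3 * c for a, b, c in zip(*colors))

    tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    colors = [(255, 0, 0), (0, 0, 255), (20, 20, 20)]  # lit red, lit blue, dark corner
    print(blend_vertex_colors((0.3, 0.3), tri, colors))  # a smooth in-between tint

The fewer vertices a surface has, the coarser this gradient is, which is exactly why the low-poly plane above lights so poorly.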
Two VRAD options are needed to make Per-Vertex Lighting work:
- -StaticPropLighting
- -StaticPropPolys
You have to add them to your VRAD command line in the expert compile configuration (for example: vrad -StaticPropLighting -StaticPropPolys yourmap). You can find more information on Valve's Wiki.
Another system exists that is much cheaper and simpler. Instead of computing the lighting of every vertex, the engine only deals with the model's origin: whatever is computed at the origin is applied to the whole model. This can be an issue if the model is big, or sits in an area with a lot of lighting contrast. The best example is at the beginning of Half-Life 2, with the trains entering and exiting tunnels. We can see the issue: the model is lit at first, but the moment the train's origin passes into the tunnel's shadow, the whole train suddenly turns dark.
This cheap lighting method replaces per-vertex lighting for 3 types of models:
- prop_dynamic, or any kind of dynamic model used in the game (NPCs, weapon view models, any animated model...)
- prop_physics
- ANY MODEL USING A NORMAL MAP (vertex lighting apparently causes issues with normal maps), EVEN IF USED AS A PROP_STATIC
The big problem with these models is their integration into the map: they won't show any shadow, and their lighting will be very flat and boring (because the same result is used for the whole model). Fortunately, there are 2 good things about this cheap lighting method. 
First, the direction light comes from is taken into account: if blue light comes from one direction, all the faces oriented toward that direction will be tinted blue. And if different lighting colors and intensities come from different sides of your model, they should all appear in game.
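A rough sketch of that behavior (Python, purely illustrative, not Source's actual shading code): the incoming light directions and colors are gathered once at the model's origin, and each face is then tinted by how much its normal points toward each light, with no occlusion test at all:

    def shade_face(normal, lights):
        r = g = b = 0.0
        for direction, color in lights:  # direction: unit vector toward the light
            # Lambert term only; nothing ever blocks the light, hence no shadows.
            w = max(0.0, sum(n * d for n, d in zip(normal, direction)))
            r += w * color[0]; g += w * color[1]; b += w * color[2]
        return (r, g, b)

    lights = [(( 1.0, 0.0, 0.0), (0, 0, 255)),    # blue light from the +X side
              ((-1.0, 0.0, 0.0), (255, 128, 0))]  # orange light from the -X side
    print(shade_face((1.0, 0.0, 0.0), lights))    # every +X face turns blue, even
                                                  # faces an obstacle should shadow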
Here is an example of a train model using a normal map, with 2 lights, one on each side. If you look closely, you'll see some blue lighting on the left, on faces that are supposed to be in the shadow of the blue light but are oriented toward it.
     

     
The second good thing is that there is still some kind of dynamic per-vertex lighting, but a much simpler one: it only works with light and light_spot entities (NOT with light_environment), and it can only add light to the prop, never cast shadows (the only thing evaluated dynamically is the distance between the light and each vertex). Take again the high-poly plane mesh from before, used as a prop_dynamic parented to a func_rotating that... rotates. The light dynamically lights the vertices of the prop. There is a limit of 3 dynamic lights per prop; it can't handle more at the same time.

And if you add a normal map to your model's texture, this cheap dynamic lighting works on it:

     
    Projected texture and Cascaded Shadows
A few words on dynamic lighting to finish this study. Projected textures are a technology that appeared with Half-Life 2: Episode Two in 2007. It consists of a point entity projecting a texture in a chosen direction, with a chosen opening angle (FOV). The texture is projected with emissive properties (it can only increase brightness, never lower it), and it can optionally cast shadows. The great thing about this technology is that it is fully dynamic: the env_projectedtexture can move and/or aim at moving targets. It is used, for example, for the flashlights in Source games. But as usual there is a drawback: most of the time you can only use 1 projected texture at a time. Modders can change this value quite easily, but in Valve's games it is always locked at 1.

The cascaded shadows system is only used in CS:GO. The concept is quite similar to a projected texture, but it doesn't increase brightness; it only adds finer shadows. It is used for environment lighting, works with much smaller luxels than the lightmaps, and is fully dynamic. It starts from the tools/toolsskybox textures of the map and casts shadows whenever it meets an obstacle. Shadows from the lightmap are usually low resolution, and the transition between a bright and a dark area is blurry and wide; cascaded shadows can therefore draw a crisp shadow around the one from the lightmaps.

When an object is too small to get a shadow in the lightmap, its shadow will still be visible thanks to the cascaded shadows. There are 3 levels of detail for cascaded shadows in Counter-Strike; you can configure the maximum distance at which they work with the Max Shadow Distance parameter of the env_cascade_light entity (400 units by default). The levels of detail are distributed within this range, for example:
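Here is a hypothetical distribution (my own illustration, not CS:GO's actual values), using a simple geometric split so that each level covers a wider band than the previous one:

    def cascade_splits(max_distance=400.0, levels=3, ratio=2.0):
        # Split [0, max_distance] into geometrically growing cascade bands.
        total = sum(ratio ** i for i in range(levels))  # 1 + 2 + 4 = 7
        bands, start = [], 0.0
        for i in range(levels):
            end = start + max_distance * ratio ** i / total
            bands.append((round(start, 1), round(end, 1)))
            start = end
        return bands

    print(cascade_splits())  # [(0.0, 57.1), (57.1, 171.4), (171.4, 400.0)]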

    Since cascaded shadows and projected textures share some technology, you can’t use them both at the same time.
     
    Conclusion
I really hope you found this article interesting and learned at least a few things from it. I believe most of this information is not the easiest to find, and it is always good to know how your tools work in order to understand their behavior. Source Engine 1 is old and its technologies might not be used much longer; more powerful and convincing technologies are released frequently. But it is always good to know your classics, right?
    I would like to thank Thrik and ’RZL for supporting me to write this article, and long live the Core!
    // Written by Sylvain "Leplubodeslapin" Menguy
    Additional commands for fun
Mat_luxels 1                      // Allows you to see the lightmap grids
Mat_fullbright 1                  // Disables all the lighting (= fullbright). On CS:GO, cascaded shadows remain and you should delete them as well (cf. next command)
Ent_fire env_cascade_light kill   // KILL WITH FIRE the cascaded shadows entity
Mat_drawgray 1                    // Replaces all the textures with a monochrome grey texture, useful to work on your lighting
Mat_fullbright 2                  // Alternative to Mat_drawgray 1
Bonus:
Mat_showlowresimage 1             // Minecraft mode
  21. Like
    leplubodeslapin got a reaction from Radix for an article, Source Lighting Technical Analysis: Part Two   
  22. Like
    leplubodeslapin got a reaction from Radix for an article, Source Lighting Technical Analysis: Part One   
  23. Like
    leplubodeslapin got a reaction from sombrewynot for an article, Source Lighting Technical Analysis: Part Two   
  24. Like
    leplubodeslapin got a reaction from sombrewynot for an article, Source Lighting Technical Analysis: Part One   
    After the announcement of the Reddit + Mapcore mapping contest, the website has welcomed many newcomers. A proof that, even if it is a twelve year old game engine, Source engine attracts map makers, and there are lots of reasons for that. It is common knowledge that technology has moved forward since 2003, and many new game engines have found various techniques and methods to improve their renderings, making the Source Engine older and older. Nevertheless, it still has its very specific visual aspect that makes it appealing. The lighting system in Source is most definitely one of the key aspects to that, and at the end of this article you will know why.
     
    About the reality...
    Light in the real world is still a subject with a lot of pending questions, we do not know exactly what it is, but we have a good idea of how it behaves. The most common physic model of light element is the photon, symbolized as a single-point particle moving in space. The more photons there are, the more powerful light is. But light is in the same time a wave, depending on the wavelengths light can have all kind of color properties (monochrome or combined colors). Light travels through space without especially needing matter to travel (the space is the best example; even without matter the sun can still light the earth). And when it encounters matter, different kind of things can happen:
    Light can bounce and continue its travel to another direction Light can be absorbed by the matter (and the energy can be transformed to heat) Light can go through the matter, for example with air or water, some properties might change but it goes through it And all these things can be combined or happen individually. If you can see any object outside, it is only because a massive amount of photons traveled into space, through the earth’s atmosphere, bounced on all the surfaces of the object you are looking at, and finally came into your eyes.
    How can such a complex physical behavior from nature be simulated and integrated into virtual 3D renderings?
    One of the oldest method is still used today because of its accuracy: the ray-tracing method. Just to be clear, it is NOT used in game engines because it is incredibly expensive, but I believe it is important to know how and why it has been made the way it is, since it probably influenced the way lighting is handled in Source and most videogame engines. Instead of simulating enormous amount of photons traveling from the lights to the eye/camera, it does the exact opposite. If you want a picture with a 1000x1000 resolution, you will only need to simulate the travel of 1 000 000 photons (or “rays”), 1 for each pixel. Each ray is calculated individually until it reaches the light origins, and at the end the result is 1 pixel color integrated in the full picture. 
    By using the laws of physics we discovered centuries ago, we can obtain a physically-accurate rendering that looks incredibly realistic. This method is used almost everywhere, from architectural renderings to movies. As an example, you can watch The Third & The Seventh by Alex Roman, one of the most famous CGI videos of all time. And because it is an efficient way to render 3D virtual elements with great lighting, it will influence other methods, such as the lightmap baking method.
     
    Lightmap baking
    OKAY LET’S FINALLY TALK ABOUT THE SOURCE ENGINE, ALRIGHT!
    A “lightmap” is a grid that is added on every single brush face you have on your map. The squares defined by the grid are called Luxels (they are kind of “lighting pixels”). Each luxel get its 2 own properties: a color and a brightness. You can see the lightmap grids in hammer by switching your 3D preview to 3D lightmap grid mode.

    You can also see them in-game with the console command mat_luxels 1 (without and with).
    During the compilation process, a program named VRAD.exe is used. Its role is to find the color and brightness to apply for every single luxel in your map. Light starts from the light entities and from the sky (from the tools/toolsskybox texture actually, using the parameter values that has been filled in the light_environment entity), travels through space and when it meets a brush face:
    It is partially absorbed in the lightmap grid A less bright ray bounces from the face Here is an animated picture to show how a lightmap grid can be filled with a single light entity:

    When you compile your map, at first the lightmaps are all full black, but progressively VRAD will compute the lightmaps with all the light entities (one by one) and combine them all at the end. Finally, the lightmaps obtained are applied to the corresponding brush faces, as an additive layer to the texture used on that face. Let us take a look at a wall texture for example.

    On the left, you have the texture as you can see it in hammer. When you compile your map, it generates the lightmaps and at the end you obtain the result on the right in-game. Unfortunately, luxels are much rougher, with a lower resolution, more like this.

    On the left you have a lightmap grid with the default luxel size of 16 units generated my VRAD, a blur filter is applied and you obtain something close to the result on the right in the game.
    In case you did not know, you can change the lightmap grid scale with the “Lightmap Scale” value with the texture tool. It is better to use values that are squares of 2, such as 16, 8, 4 or even 2. Do not go below 2, it might cause issues (with decals for example). Only use lower values than the default 16 if you think it's really useful, because you will drastically increase your map file size and compilation time with precise lightmap grids. Of course, you can also use greater values in order to optimize your map, with values such as 32, 64 or even 128 on very flat areas or surfaces that are far away from the playable areas. You can get more infos about lightmaps on Valve’s Wiki page.

    But as we said before, light also bounces from the surface until it meets another brush, using radiosity algorithms. Because of that, even if a room does not have any light entity in it, rays can bounce on the floor and light the walls/ceiling, therefore it is not full black. 
    Here’s an example:

    The maximum amount of bounces can be fixed with the VRAD command -bounce X (with X being the maximum amount of bounces allowed). The 100 default value should be more than enough.
    Another thing taken into account by VRAD is the normal direction of each luxel: if the light comes directly against the luxel or brushes against it, it will not behave in the same way. This is what we call the angle of incidence of light.

    Let us take the example of a light_spot lighting a cylinder, the light will bright gradually the surface - from fully bright at the bottom to slightly visible at the top.

    In-hammer view on the left, in-game view on the right
     
    Light Falloff laws
    One of the things that made the Source Engine lighting much more realistic than any others in 2004 is the light falloff system. Alright, we saw that light can travel through space until it meets something, but how does it travel through space? At the same brightness, whatever the distance is between the light origin and destination? Maybe sometimes yes… but most of the time no.

     
    Imagine a simple situation of a room with 1 single point light inside. The light is turned on, it produces photons that are going in all the directions around it. As you might imagine, photons are all going in their own direction and have absolutely no reason to deviate from their trajectory.
     
     
     
    At one time, let’s picture billions of photons going in all the directions possible around the light, the moment after, they are all a bit further in their own trajectory, and all the photons are still there, in this “wave”. But, as each photon follows its own trajectory, they will all spread apart, making the photon density lower and lower.
    As we said before, the more photons there are, the more powerful light is. And the highest the density, the more intense light is. Intensity of light can be expressed like this:

     
    You have to keep in mind that all of this happens in 3D, therefore the “waves” of photons aren’t circles but spheres. And the area of a sphere is its surface, expressed like this:

    (R is the radius of the sphere)
     
    If we integrate that surface area in the previous equation:

    With ♥ being a constant number. We can see the Intensity is therefore proportional to the reverse of the square of the distance between the photons and their light origin. 
    So, the further light travels, the lower is its intensity. And the falloff is proportional to the inverse of the square of the distance.
    Consequently, the corners of our room will get darker, because they are farther away from the light (plus they don’t directly face the light, the angle of incidence is lower than the walls/floor/ceiling).

    This is what we call the Inverse-Square law, it’s a very well-known behavior of the light in the field of photography and cinema. People have to deal with it to make sure to get the best exposure they can get.
    This law is true when light spreads in all possible directions, but you can also focus light in one direction and reduce the spread, with lenses for example. This is why, when Valve decided to integrate a lighting falloff law in their engine, they decided to use a method not only following the inverse-square law but also giving to mapmakers the opportunity to alter the law for each light entity.
     
    Constant, Linear, Quadratic... Wait, what?
In math, there is a very common type of function, named polynomial functions. The concept is simple: it’s a sum of several terms, like this:

f(x) = a0 + a1·x + a2·x² + a3·x³ + ...

Every time, there is a constant factor (the “a” thing: a0 being the first one, a1 the second one, a2 the third one...), multiplied by the variable x raised to a certain degree:
x^0 = 1 : degree 0
x^1 = x : degree 1
x^2 : degree 2
x^3 : degree 3
...
And:
a0 is the constant named “constant coefficient” (associated with degree 0)
a1 is the constant named “linear coefficient” (associated with degree 1)
a2 is the constant named “quadratic coefficient” (associated with degree 2)
Usually the sum ends somewhere, and we name the function after the highest degree of x it uses. For example, a “polynomial of the second degree” is written:

f(x) = a0 + a1·x + a2·x²
Then, if we take the expression from the inverse-square law, which was:

I(D) = ♥ / (a2·D²)

With a2 = 1 and D being the distance from the light origin.
    In Source, the constant ♥ is actually the brightness (the value you configure here).
It is simply an inverse polynomial of the second degree, with a0 and a1 equal to zero. And we could write it like this:

I(D) = ♥ / (a0 + a1·D + a2·D²)

Or...

I(D) = ♥ / (constant + linear·D + quadratic·D²)
And here you have it! This is approximately the equation used by VRAD to determine the intensity of light for each luxel during compilation. And you can alter it by changing the values of the three variables constant, linear and quadratic for any light / light_spot entity in your level.
Actually, you set proportions of each variable relative to the other two, and only a percentage for each variable is saved. For example:
constant = 1, linear = 1 and quadratic = 2 are saved as 25% constant, 25% linear and 50% quadratic (each value is divided by the sum of the three).
Another example:
constant = 0, linear = 1 and quadratic = 3 are saved as 0% constant, 25% linear and 75% quadratic.
By default, constant and linear are set to 0 and quadratic to 1, which means a 100% quadratic lighting attenuation. Therefore, by default, lights in the Source Engine follow the classic Inverse-Square law.
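As an illustration of that normalization step, here is a small Python sketch (the function name is mine, not VRAD's) that turns the three Hammer keyvalues into the saved percentages:

```python
def clq_percentages(constant, linear, quadratic):
    """Only the proportions of the three keyvalues matter: divide each
    one by their sum to get the percentages that are actually saved."""
    total = constant + linear + quadratic
    return (constant / total, linear / total, quadratic / total)

print(clq_percentages(0, 0, 1))  # default light -> (0.0, 0.0, 1.0): 100% quadratic
print(clq_percentages(1, 1, 2))  # -> (0.25, 0.25, 0.5): 25% / 25% / 50%
print(clq_percentages(0, 1, 3))  # -> (0.0, 0.25, 0.75): 0% / 25% / 75%
```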
If you look at the page dedicated to the constant-linear-quadratic falloff system on Valve’s wiki, it’s explained that the intensity of light is boosted by 100 for the linear part of the equation and by 10 000 for the quadratic part. This is because inverse formulas drop drastically right at the start: without any boost, a light with a brightness of 200 would only be effective within a distance of about 5 units, and therefore completely pointless.

You would have to raise the brightness a lot in Hammer to make the light visible, so that's what Valve decided to do automatically.
The following equation is a personal guess of what could be the one used by VRAD:

I(D) = B × (constant + 100 × linear + 10 000 × quadratic) / (constant + linear × D + quadratic × D²)

With constant, linear and quadratic being the percentage values. The numerator determines the brightness to apply, boosting the value set in Hammer if the light at least partially uses linear or quadratic falloff. The denominator is the falloff part of the equation, attenuating the brightness depending on the distance D between the studied point and the light origin.
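For convenience, here is the same guessed equation written as a Python function; it carries the same caveat as the equation itself: it is a personal guess, not code taken from VRAD.

```python
def intensity(D, constant, linear, quadratic, brightness):
    """Guessed VRAD falloff. constant/linear/quadratic are the
    normalized percentages (they should sum to 1). The numerator
    boosts the Hammer brightness (x100 linear, x10000 quadratic);
    the denominator attenuates it with the distance D."""
    boost = brightness * (constant + 100 * linear + 10_000 * quadratic)
    return boost / (constant + linear * D + quadratic * D ** 2)

# A default 100% quadratic light with brightness 200:
for d in (16, 64, 256):
    print(f"D = {d:>3}: I = {intensity(d, 0, 0, 1, 200):.1f}")
```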
    The best way to see how this equation works is to visualize it in a 2D graph: 
    https://www.desmos.com/calculator/1oboly7cl0
This website provides a great way to visualize the 2D graphs of functions. On the left, you can find all the elements needed, starting with the inputs (in a folder named “INPUTS”), which are:
a0, the Constant coefficient that you enter in Hammer
a1, the Linear coefficient
a2, the Quadratic coefficient
B, the Brightness coefficient
In another folder are the three coefficients constant, linear and quadratic, automatically converted into percentage form. Finally, the function I(D) is the intensity function depending on the distance D. The graph of the function is visible in the rest of the page.
    Try to interact with it!
This concludes the first part; the second part will come in about two weeks. We will see some examples of application of this Constant-Linear-Quadratic Falloff system, as well as a simpler alternative. We will also see how lighting works on models, and the dynamic lighting systems integrated into Source games. Thank you for reading!
     
    Part Two : link
  25. Like
    leplubodeslapin got a reaction from seir for an article, Source Lighting Technical Analysis: Part Two   
This is the second part of a technical analysis of Source lighting; if you haven’t read the first part yet, you can find it here. 
Last time, we studied lightmaps, how they are baked, and how VRAD handles light traveling through space. We ended part one with an explanation of what the Constant-Linear-Quadratic Falloff system is, along with a website that lets you play with these variables and see how the lighting falloff reacts to them. We will now continue with basic examples of things you can do with these variables. 
     
    Examples of application
    Constant falloff
The simplest type of falloff is the 100% constant one. Whatever the distance, the lighting theoretically keeps the same intensity. This is the kind of (non-)falloff used for sunlight: the sun is so far away from the map area that its rays can be considered parallel, and light keeps its intensity. Constant falloff is also useful for fake lights: lights with a very low brightness that are only there to brighten up an area.
     
     

     
    Linear falloff

Another type of falloff is the 100% linear one. With this configuration, light looks a bit artificial: it loses intensity but reaches much further than with 100% quadratic falloff. It can be very useful on spots, where the lighting is smooth and powerful. Here is an example:
     

     
    Quadratic falloff

This is the default configuration for any light entity in Hammer, following, as we said before, the classic Inverse-Square law (100% quadratic falloff). It is considered the most natural and realistic falloff configuration. The biggest issue is that it boosts the brightness so much at short distances that you can easily end up with a big white spot. Here is an example, with a light placed 16 units away from a grey wall:

     
This can also happen with linear falloff, but it is worse with quadratic. Simple solutions exist: the most common is not to use a light entity but a light_spot entity oriented away from the wall or ceiling the light is fixed to. You can make the opening angle of your light_spot wider with the inner and outer angle parameters (by default the outer one is 45°; increase it to a value of 85°, for example). If needed, you can also add a light with low brightness to light the ceiling/wall a bit.
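To build intuition for what those two angles do, here is a generic spotlight-attenuation sketch in Python; this is a common scheme for blending between an inner and an outer cone, not VRAD's exact curve:

```python
import math

def spot_attenuation(angle_deg, inner_deg, outer_deg):
    """Full brightness inside the inner cone, zero outside the outer
    cone, and a smooth linear blend (on cosines) in between."""
    c = math.cos(math.radians(angle_deg))
    c_in = math.cos(math.radians(inner_deg))
    c_out = math.cos(math.radians(outer_deg))
    t = (c - c_out) / (c_in - c_out)
    return max(0.0, min(1.0, t))

# Widening the outer angle from 45 to 85 degrees softens the edge:
for a in (0, 30, 45, 65, 85, 90):
    print(f"{a:>2} deg from axis: {spot_attenuation(a, 45, 85):.2f}")
```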

     
50% & 0% Falloff
A second light falloff system exists, overriding the constant-linear-quadratic system if used. The concept is much simpler: you only have to configure two distances:
50 percent falloff distance: the distance at which light should fall off to 50% of its original intensity.
0 percent falloff distance: the distance at which light should end. Well... almost: it actually falls off to 1/256 of its original intensity, which is negligible.
The good thing with this falloff system is that you can see the two spheres corresponding to the two distances you configured in Hammer. Just make sure to have this option activated: 

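Out of curiosity, we can work out what such a curve has to look like. Here is a hypothetical Python sketch (assuming numpy is available, and assuming the same inverse-polynomial shape as before; the engine's real mapping of these two keyvalues is not shown here) that solves for coefficients satisfying the two constraints:

```python
import numpy as np

def fit_two_distance_falloff(d50, d0):
    """Solve for a, b in I(D) = I0 / (1 + a*D + b*D^2) such that
    I(d50) = I0/2 and I(d0) = I0/256. Purely illustrative."""
    A = np.array([[d50, d50 ** 2],
                  [d0,  d0 ** 2]], dtype=float)
    rhs = np.array([1.0, 255.0])  # from 1 + x = 2 and 1 + x = 256
    a, b = np.linalg.solve(A, rhs)
    return a, b

a, b = fit_two_distance_falloff(d50=128.0, d0=512.0)
print(a, b)  # check: 1 / (1 + a*128 + b*128**2) == 0.5
```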
     
    Models lighting
A dedicated section on model lighting is needed, because it differs from brush lighting (though the falloff stays the same). In most current game engines, lightmaps can be used on models; a specific UV unwrap is even made just for them. But on Source Engine 1 (except for Team Fortress 2), you cannot use lightmaps on models. 
The standard lighting method for models is named Per-Vertex Lighting. This time, light won’t be applied to faces but to vertices: all of the model’s vertices. For each one of them, VRAD computes a color and brightness to apply. Finally, the Source Engine draws a gradient between the vertices of each triangle. For example:

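That gradient step is essentially barycentric interpolation of the three baked vertex colors; here is a minimal, engine-agnostic Python sketch of it:

```python
def shade_triangle_point(weights, vertex_colors):
    """Blend the three baked vertex colors with barycentric weights
    (w0 + w1 + w2 == 1) to get the color at any point of the triangle."""
    w0, w1, w2 = weights
    return tuple(w0 * a + w1 * b + w2 * c
                 for a, b, c in zip(*vertex_colors))

# One bright vertex, two dark ones: the centre ends up dim grey.
colors = [(255, 255, 255), (40, 40, 40), (40, 40, 40)]
print(shade_triangle_point((1/3, 1/3, 1/3), colors))  # ~(111.7, 111.7, 111.7)
```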
    If we take a simple example of a sphere mesh with 2 different light entities next to it, we can see it working.
                
With this lighting method, models are integrated into the environment with appropriate lighting. The good thing is that if one part of the model is in a dark area and another part is in a bright area, the situation is handled properly. The only requirement is that the mesh must have a sufficient level of detail; if there is a big flat area without additional vertices on it, the lighting detail can be insufficient. 
Here is an example of a simple square mesh with a few triangles on the left and a lot of them on the right. With the complex mesh, the lighting is better, but more expensive. 

If you need a complex mesh for your lighting but don’t want your model to be too expensive, you have to find a balance. 
Two VRAD commands are needed to make per-vertex lighting work:
-StaticPropLighting
-StaticPropPolys
You have to add them here. You can find more information here.
Another system exists that is much cheaper and simpler. Instead of computing the lighting at every vertex, the engine only deals with the model’s origin. The result computed at the origin’s location is then applied to the whole model. This can be an issue if the model is big or placed in an area with a lot of lighting contrast. The best example is at the beginning of Half-Life 2, with trains entering and exiting tunnels. We can see the issue: the model is illuminated at first, but when it enters the tunnel it suddenly turns dark, at the exact moment the train’s origin gets into the shadow. 
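The popping is easy to reproduce with a toy model. In this Python sketch (all names and numbers are invented for illustration), the whole train flips to dark the instant its origin crosses the shadow line, while per-vertex sampling would degrade gradually:

```python
# Shadow starts at x = 0: bright before, dark after.
def sample_light(x):
    return 1.0 if x < 0 else 0.05

train_offsets = [-6.0, -2.0, 2.0, 6.0]  # vertex positions along the train

for origin in (-8.0, -1.0, 1.0):
    origin_based = [sample_light(origin)] * len(train_offsets)  # whole model at once
    per_vertex = [sample_light(origin + off) for off in train_offsets]
    print(f"origin at {origin:>5}: origin-based {origin_based}  per-vertex {per_vertex}")
# origin-based lighting jumps from 1.0 to 0.05 everywhere the moment
# the origin crosses x = 0, even while half the train is still in the sun.
```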
This cheap lighting method replaces per-vertex lighting for three types of models:
For prop_dynamic or any kind of dynamic model used in the game (NPCs, weapon models in hand, any animated models...)
For prop_physics
For ANY MODEL USING A NORMAL MAP (vertex lighting apparently causes issues with normal maps), EVEN IF USED AS A PROP_STATIC
The big problem with these models is their integration into the map: they won’t show any shadow, and their lighting will be very flat and boring (because the same result is used for the whole model). Fortunately, there are two good things about this cheap lighting method. 
First, the direction light comes from is taken into account: if blue light comes from one direction, all the faces oriented toward that direction will be tinted blue. And if different lighting colors/intensities come from different sides of your model, they should all appear in game. 
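Here is a toy Python sketch of that direction-aware behavior, using a simple Lambert-style dot product (the engine's actual data structure is more involved, so treat this purely as an illustration):

```python
def face_color(normal, lights):
    """Each light contributes its color scaled by how much the face
    points toward it; faces looking away receive nothing."""
    out = [0.0, 0.0, 0.0]
    for direction, color in lights:  # direction: unit vector toward the light
        w = max(0.0, sum(n * d for n, d in zip(normal, direction)))
        out = [o + w * c for o, c in zip(out, color)]
    return out

blue_light = ((-1.0, 0.0, 0.0), (0.3, 0.3, 1.0))   # blue light to the left
print(face_color((-1.0, 0.0, 0.0), [blue_light]))  # left-facing face: blue tint
print(face_color((1.0, 0.0, 0.0), [blue_light]))   # right-facing face: no tint
```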
Here is an example of a train model using a normal map, with two lights, one on each side. If you look closely, you’ll see some blue lighting on the left, on faces that are supposed to be in the shadow of the blue light but are oriented toward it.
     

     
The second good thing is that there is still some kind of dynamic per-vertex lighting, but much simpler: it only works with light and light_spot entities (NOT with light_environment), and it just adds some light to the prop; it cannot cast any shadow (it only dynamically takes into account the distance between the light and each vertex). We can reuse the high-poly plane mesh from before as a prop_dynamic, parented to a func_rotating that ... rotates. The light dynamically illuminates the vertices of the prop. There is a limit of three dynamic lights per prop; it can’t handle more at the same time.

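A small Python sketch of that cheap scheme, under exactly the constraints described above: at most three lights, attenuation by distance only, and no occlusion test (the helper names are mine):

```python
def cheap_dynamic_light(vertex, lights, max_lights=3):
    """Sum the contributions of the closest `max_lights` lights,
    attenuated by distance alone: nothing can block them, so no
    shadows are ever cast."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    nearest = sorted(lights, key=lambda light: dist(vertex, light[0]))[:max_lights]
    return sum(power / max(1.0, dist(vertex, pos)) ** 2 for pos, power in nearest)

lights = [((0, 0, 64), 5000.0), ((128, 0, 64), 3000.0), ((0, 128, 64), 2000.0)]
print(cheap_dynamic_light((16.0, 16.0, 0.0), lights))
```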
And if you add a normal map to your model’s texture, this cheap dynamic lighting works on it:

     
Projected Textures and Cascaded Shadows
A few words on dynamic lighting to finish this study. Projected textures are a technology that appeared with Half-Life 2: Episode Two in 2007. It consists of a point entity projecting a texture in a chosen direction, with a chosen opening angle (FOV). The texture is projected with emissive properties (it can only increase the brightness, never lower it), and it can generate shadows or not. The great thing about this technology is that it’s fully dynamic: the env_projectedtexture can move and/or aim at moving targets. It is used, for example, for the flashlight in Source games. But as usual, there is also a drawback: you can usually only use one projected texture at a time. Modders can change this value quite easily, but in Valve games it is always locked at 1. 

The cascaded shadows system is only used in CS:GO. The concept is quite similar to a projected texture, but it doesn’t increase the brightness; it only adds finer shadows. It is used for environment lighting, works with much smaller luxels than the lightmaps, and is fully dynamic. It starts from the tools/toolsskybox textures of the map and casts shadows whenever it meets an obstacle. Shadows in the lightmap are usually low resolution, and the transition between a bright and a dark area is blurry and wide. The cascaded shadows can therefore draw a crisp shadow around the one from the lightmaps.

When an object is too small to get a shadow in the lightmap, its shadow will still be visible thanks to the cascaded shadows. There are three levels of detail for cascaded shadows in CS:GO, and you can configure the maximum distance at which they work with the Max Shadow Distance parameter of the env_cascade_light entity (400 units by default). The levels of detail are distributed within this range, for example: 

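For illustration, here is one common way renderers distribute cascade boundaries within such a range, blending a uniform and a logarithmic split; CS:GO's exact distribution is not documented here, so treat the numbers as an example only:

```python
def cascade_splits(max_distance=400.0, cascades=3, near=8.0, blend=0.6):
    """Boundary distances for each cascade: closer cascades get more
    resolution by spacing the splits logarithmically, softened by a
    uniform term."""
    splits = []
    for i in range(1, cascades + 1):
        f = i / cascades
        uniform = near + (max_distance - near) * f
        logarithmic = near * (max_distance / near) ** f
        splits.append(blend * logarithmic + (1.0 - blend) * uniform)
    return splits

print(cascade_splits())  # three increasing boundaries, the last at 400 units
```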
    Since cascaded shadows and projected textures share some technology, you can’t use them both at the same time.
     
    Conclusion
I really hope you have found this article interesting and learned at least a few things from it. Most of this information is not the easiest to find, and it’s always good to know how your tools work, to understand their behavior. Source Engine 1 is old and its technologies might not be used much longer; more powerful and convincing technologies are released frequently, but it’s always good to know your classics, right? 
I would like to thank Thrik and ’RZL for supporting me in writing this article, and long live the Core!
    // Written by Sylvain "Leplubodeslapin" Menguy
    Additional commands for fun
Mat_luxels 1                              // Allows you to see the lightmap grids
Mat_fullbright 1                         // Disables all lighting (= fullbright). In CS:GO, the cascaded shadows remain, so you should remove them as well (see next command)
Ent_fire env_cascade_light kill   // KILL WITH FIRE the cascaded shadows entity
Mat_drawgray 1                        // Replaces all textures with a monochrome grey texture, useful to work on your lighting
Mat_fullbright 2                         // Alternative to Mat_drawgray 1
Bonus:
Mat_showlowresimage 1           // Minecraft mode