
Leaderboard

Popular Content

Showing content with the highest reputation on 04/28/2021 in all areas

  1. Thank you so much for including the math, as I wasn't sure how the engine calculated light spread and diffusion. Even after looking through all sorts of Source articles, your "personal guess" is literally the closest thing I have found, even to this day!
    2 points
  2. Aye, this is still a very impressive read, really well put together.
    1 point
  3. After the announcement of the Reddit + Mapcore mapping contest, the website has welcomed many newcomers. This is proof that, even though it is a twelve-year-old game engine, Source still attracts map makers, and there are plenty of reasons for that. It is common knowledge that technology has moved on since 2003: many newer engines have adopted techniques and methods that improve their rendering, making the Source engine look older and older. Nevertheless, it still has a very specific visual character that makes it appealing. The lighting system is most definitely one of the key reasons for that, and by the end of this article you will know why.

About the reality...

Light in the real world is still a subject with many open questions. We do not know exactly what it is, but we have a good idea of how it behaves. The most common physical model of light is the photon, pictured as a single-point particle moving through space. The more photons there are, the more powerful the light is. But light is also a wave, and depending on its wavelengths it can have all kinds of color properties (monochrome or combined colors). Light travels through space without needing matter to carry it (space itself is the best example: even without matter, the sun still lights the earth). When it encounters matter, different things can happen:

- Light can bounce and continue its travel in another direction
- Light can be absorbed by the matter (and the energy can be transformed into heat)
- Light can pass through the matter, as with air or water; some properties might change, but it goes through

All of these can happen individually or in combination. If you can see any object outside, it is only because a massive amount of photons traveled through space, through the earth's atmosphere, bounced off all the surfaces of the object you are looking at, and finally reached your eyes.

How can such complex physical behavior be simulated and integrated into virtual 3D rendering? One of the oldest methods is still used today because of its accuracy: ray tracing. To be clear, it is NOT used in game engines because it is incredibly expensive, but it is important to know how and why it was designed the way it was, since it probably influenced the way lighting is handled in Source and in most video game engines. Instead of simulating an enormous number of photons traveling from the lights to the eye/camera, it does the exact opposite. If you want a picture with a 1000x1000 resolution, you only need to simulate the travel of 1,000,000 photons (or "rays"), one for each pixel. Each ray is traced individually until it reaches a light source, and the result is one pixel color integrated into the full picture. By using the laws of physics discovered centuries ago, we can obtain a physically accurate rendering that looks incredibly realistic. This method is used almost everywhere, from architectural renderings to movies. As an example, you can watch The Third & The Seventh by Alex Roman, one of the most famous CGI videos of all time. And because it is an efficient way to render 3D virtual scenes with great lighting, it influenced other methods, such as lightmap baking.
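To make the "one ray per pixel" idea a bit more concrete, here is a deliberately tiny Python sketch (not something from the article or from Source): a single hard-coded sphere and point light rendered as ASCII. A real ray tracer keeps following each ray toward the lights and across bounces; this one stops at the first hit and applies a single shading term.

```python
import math

# Toy backward ray tracer: one ray per pixel, traced from the camera into the
# scene instead of from the lights. The scene (one sphere, one point light) is
# made up for illustration only.
WIDTH, HEIGHT = 64, 32
SPHERE_C, SPHERE_R = (0.0, 0.0, 5.0), 1.5
LIGHT = (3.0, 3.0, 2.0)

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):
    length = math.sqrt(dot(a, a))
    return tuple(c / length for c in a)

def hit_sphere(origin, direction):
    """Distance along the ray to the sphere, or None if the ray misses it."""
    oc = sub(origin, SPHERE_C)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_R ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

rows = []
for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        direction = norm((x / WIDTH - 0.5, 0.5 - y / HEIGHT, 1.0))  # pinhole camera
        t = hit_sphere((0.0, 0.0, 0.0), direction)
        if t is None:
            row += " "                      # the ray escaped: background
        else:
            point = tuple(t * d for d in direction)
            normal = norm(sub(point, SPHERE_C))
            to_light = norm(sub(LIGHT, point))
            row += "#" if dot(normal, to_light) > 0.3 else "."  # crude shading
    rows.append(row)
print("\n".join(rows))
```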
Lightmap baking

OKAY, LET'S FINALLY TALK ABOUT THE SOURCE ENGINE, ALRIGHT!

A "lightmap" is a grid that is laid over every single brush face in your map. The squares defined by the grid are called luxels (a kind of "lighting pixel"). Each luxel gets two properties of its own: a color and a brightness. You can see the lightmap grids in Hammer by switching your 3D preview to 3D lightmap grid mode. You can also see them in-game with the console command mat_luxels 1 (without and with).

During the compilation process, a program named VRAD.exe is used. Its role is to find the color and brightness to apply to every single luxel in your map. Light starts from the light entities and from the sky (from the tools/toolsskybox texture actually, using the parameter values that have been filled in on the light_environment entity), travels through space and, when it meets a brush face:

- It is partially absorbed into the lightmap grid
- A less bright ray bounces off the face

Here is an animated picture showing how a lightmap grid can be filled by a single light entity. When you compile your map, the lightmaps start out completely black, but VRAD progressively computes them for all the light entities (one by one) and combines them all at the end. Finally, the lightmaps obtained are applied to the corresponding brush faces, as an additive layer on top of the texture used on each face.

Let us take a wall texture as an example. On the left, you have the texture as you see it in Hammer. When you compile your map, the lightmaps are generated and you obtain the result on the right in-game. Unfortunately, luxels are much rougher, with a lower resolution, more like this: on the left you have a lightmap grid with the default luxel size of 16 units generated by VRAD; a blur filter is applied and you obtain something close to the result on the right in-game.

In case you did not know, you can change the lightmap grid scale with the "Lightmap Scale" value in the texture tool. It is better to use powers of two, such as 16, 8, 4 or even 2. Do not go below 2, as it might cause issues (with decals for example). Only use values lower than the default 16 if you think it is really useful, because precise lightmap grids drastically increase your map file size and compilation time. Of course, you can also use larger values to optimize your map, such as 32, 64 or even 128 on very flat areas or surfaces that are far away from the playable areas. You can find more information on Valve's wiki page about lightmaps.

But as we said before, light also bounces off the surface until it meets another brush, following radiosity algorithms. Because of that, even if a room does not contain any light entity, rays can bounce off the floor and light the walls and ceiling, so the room is not completely black. Here's an example. The maximum number of bounces can be set with the VRAD command -bounce X (with X being the maximum number of bounces allowed). The default value of 100 should be more than enough.

Another thing taken into account by VRAD is the normal direction of each luxel: light hitting a luxel head-on and light grazing it do not behave the same way. This is what we call the angle of incidence of light. Take the example of a light_spot lighting a cylinder: the light brightens the surface gradually, from fully bright at the bottom to barely visible at the top. In-Hammer view on the left, in-game view on the right.
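As a rough illustration of what "finding a color and brightness for every luxel" means, here is a hedged Python sketch: each luxel sums the contribution of every light, weighted by the angle of incidence and an inverse-square distance term. Visibility tests and bounced light are deliberately left out, and none of the data structures or values come from VRAD itself.

```python
import math

# Hypothetical, simplified "luxel" pass: sum direct light per luxel, weighted by
# the angle of incidence (dot of face normal and light direction) and by an
# inverse-square distance falloff. Not VRAD's actual code.
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def length(a): return math.sqrt(dot(a, a))

def shade_luxel(luxel_pos, face_normal, lights):
    """Return the (r, g, b) value accumulated on one luxel from direct lighting."""
    result = [0.0, 0.0, 0.0]
    for light in lights:
        to_light = sub(light["pos"], luxel_pos)
        dist = length(to_light)
        direction = tuple(c / dist for c in to_light)
        incidence = max(0.0, dot(face_normal, direction))  # angle of incidence
        falloff = 1.0 / (dist * dist)                      # inverse-square law
        for i in range(3):
            result[i] += light["color"][i] * light["brightness"] * incidence * falloff
    return tuple(result)

# Example: one white light above a floor luxel (all values are made up).
lights = [{"pos": (0, 0, 128), "color": (1.0, 1.0, 1.0), "brightness": 200.0}]
print(shade_luxel((32, 0, 0), (0, 0, 1), lights))
```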
Light Falloff laws

One of the things that made Source Engine lighting much more realistic than its competitors in 2004 is the light falloff system. Alright, we saw that light travels through space until it meets something, but how does it travel through space? At the same brightness, whatever the distance between the light origin and its destination? Maybe sometimes... but most of the time, no.

Imagine a simple room with one single point light inside. The light is turned on and produces photons that go in all directions around it. As you might imagine, each photon goes in its own direction and has absolutely no reason to deviate from its trajectory. Picture billions of photons leaving the light in every possible direction at one instant; a moment later, they are all a bit further along their own trajectories, and all of them are still there, in this "wave". But, as each photon follows its own trajectory, they spread apart, making the photon density lower and lower. As we said before, the more photons there are, the more powerful the light is: the higher the density, the more intense the light. The intensity of light can be expressed like this:

I = N / A (with N the number of photons in the wave and A the area over which they are spread)

Keep in mind that all of this happens in 3D, so the "waves" of photons are not circles but spheres. And the area of a sphere is its surface:

A = 4·π·R^2 (R is the radius of the sphere)

If we plug that surface area into the previous equation:

I = N / (4·π·R^2) = ♥ / R^2

With ♥ being a constant number. We can see that the intensity is proportional to the inverse of the square of the distance between the photons and their light origin. So, the further light travels, the lower its intensity, and the falloff is proportional to the inverse of the square of the distance. Consequently, the corners of our room get darker, because they are farther away from the light (plus they do not directly face the light: their angle of incidence is lower than that of the walls, floor and ceiling). This is what we call the inverse-square law, a very well-known behavior of light in the fields of photography and cinema, where people have to deal with it to get the best exposure they can. This law holds when light spreads in all possible directions, but you can also focus light in one direction and reduce the spread, with lenses for example. This is why, when Valve decided to integrate a lighting falloff law into their engine, they chose a method that not only follows the inverse-square law but also gives map makers the opportunity to alter the law for each light entity.

Constant, Linear, Quadratic... Wait, what?

In math, there is a very common type of function, the polynomial function. The concept is simple: it is a sum of several terms, like this:

f(x) = a0 + a1·x + a2·x^2 + a3·x^3 + ...

Each term is a constant factor (the "a" thing: a0 being the first one, a1 the second, a2 the third...) multiplied by the variable x at a certain degree:

- x^0 = 1 : degree 0
- x^1 = x : degree 1
- x^2 : degree 2
- x^3 : degree 3
- ...

And:

- a0 is the constant named the "constant coefficient" (associated with degree 0)
- a1 is the constant named the "linear coefficient" (associated with degree 1)
- a2 is the constant named the "quadratic coefficient" (associated with degree 2)

Usually the function has an end, and we name it after the highest degree of x it uses. For example, a "polynomial of the second degree" is written:

f(x) = a0 + a1·x + a2·x^2

Now take the expression from the inverse-square law, which was:

I(D) = ♥ / (a2·D^2)

With a2 = 1 and D being the distance from the light origin. In Source, the constant ♥ is actually the brightness (the value you configure in the light entity's properties). It is simply an inverse polynomial of the second degree, with a0 and a1 equal to zero, and we could write it like this:

I(D) = ♥ / (a0 + a1·D + a2·D^2)

Or, with those values filled in:

I(D) = ♥ / D^2

And here you have it! This is approximately the equation used by VRAD to determine the intensity of light for each luxel during the compilation.
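Just to show how brutally a pure inverse-square curve drops, here is a quick numerical check (the brightness of 200 is only an example value, and the distances are in Hammer units):

```python
# Hedged illustration of the pure inverse-square falloff I(D) = B / D^2.
# The brightness of 200 is only an example value; units are Hammer units.
B = 200.0
for distance in (1, 2, 5, 10, 50, 100, 500):
    intensity = B / distance ** 2
    print(f"D = {distance:>4} units -> I = {intensity:.4f}")
```

Past a handful of units the raw value is already tiny, which is exactly why the brightness boosts described next exist.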
And you can alter it by changing the values of the three variables constant, linear and quadratic, for any light / light_spot entity in your level. In practice you set proportions of each variable against the other two, and only a percentage of each is saved. For example, constant = 1, linear = 1 and quadratic = 2 would be stored as 25% constant, 25% linear and 50% quadratic; constant = 0, linear = 0 and quadratic = 1 is stored as 100% quadratic. By default, constant and linear are set to 0 and quadratic to 1, which means a 100% quadratic lighting attenuation. Therefore, by default, lights in the Source Engine follow the classic inverse-square law.

If you look at the page dedicated to the constant-linear-quadratic falloff system on Valve's wiki, it explains that the intensity of light is boosted by 100 for the linear part of the equation and by 10,000 for the quadratic part. This is because inverse formulas drop drastically right at the start: without the boost, a light with a brightness of 200 would only be effective within a distance of about 5 units, which would make it completely pointless. You would have to raise the brightness a lot in Hammer just to make the light visible, so Valve decided to apply that boost automatically. The following equation is a personal guess of what the one used by VRAD could be:

I(D) = B · (constant + 100·linear + 10000·quadratic) / (constant + linear·D + quadratic·D^2)

With constant, linear and quadratic being the percentage values (expressed as fractions summing to 1). The first part (the numerator) determines the brightness to apply, boosting the value set in Hammer if the light at least partially uses linear or quadratic falloff. The second part (the denominator) is the falloff itself, attenuating the brightness depending on the distance D between the studied point and the light origin.

The best way to see how this equation works is to visualize it in a 2D graph: https://www.desmos.com/calculator/1oboly7cl0
This website provides a great way to plot functions. On the left, you can find all the elements needed, starting with the inputs (in a folder named "INPUTS"), which are:

- a0 is the Constant coefficient that you enter in Hammer
- a1 is the Linear coefficient
- a2 is the Quadratic coefficient
- B is the Brightness

In another folder are the three coefficients constant, linear and quadratic, automatically transformed into percentage form. And finally, the function I(D) is the intensity as a function of the distance D. The graph of the function is visible in the rest of the page. Try to interact with it!

This concludes the first part; the second part will come in about two weeks. We will see some examples of application of this constant-linear-quadratic falloff system, and a simpler alternative. We will also see how lighting works on models, and the dynamic lighting systems integrated into Source games. Thank you for reading!

Part Two : link
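If you want to play with the guessed formula outside of Desmos, here is a small Python sketch of it. It mirrors the personal guess above (percentage normalization plus the x100 / x10,000 boosts); it is not VRAD's actual code, and the fallback for all-zero inputs is an assumption.

```python
# Sketch of the guessed constant-linear-quadratic falloff described above.
# Mirrors the "personal guess" formula; not VRAD source code. Assumes distance > 0.
def clq_intensity(distance, brightness, constant, linear, quadratic):
    """Intensity at `distance` for a light using Hammer's constant/linear/quadratic values."""
    total = constant + linear + quadratic
    if total == 0:
        # Assumption: fall back to the Hammer default (0 / 0 / 1, i.e. 100% quadratic).
        constant, linear, quadratic, total = 0.0, 0.0, 1.0, 1.0
    # Only the proportions matter: normalize the three values into percentages.
    c, l, q = constant / total, linear / total, quadratic / total
    boosted = brightness * (c + 100.0 * l + 10_000.0 * q)   # linear x100, quadratic x10,000
    falloff = c + l * distance + q * distance ** 2
    return boosted / falloff

# A brightness-200, 100% quadratic light (the Hammer default) at a few distances:
for d in (16, 64, 128, 256, 512):
    print(d, round(clq_intensity(d, 200, constant=0, linear=0, quadratic=1), 2))
```

Under this guess, the default 100% quadratic setting reduces to 10,000·B/D², which means the brightness value behaves as if it were measured 100 units away from the light.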
    1 point
  4. I played around in it, and it was really fun! Not to mention that the graphics were beautiful!
    1 point
  5. Hello everyone. I hope you are doing great. I wanted to show you my first map: https://steamcommunity.com/sharedfiles/filedetails/?id=2459189885 Of course I have to thank and mention Yanzl for the huge help of the openly released Breach props and textures. I also used a lot of the new Nuke props and textures (very mainstream, I know, haha). I tried to make a map that is fun to play, with two bombsites but not too big, with many KZ jumps and plays (mostly on the A bombsite), and balanced between CTs and Ts. I hope you like it, and I'll be reading your comments and opinions. Thanks for your time! Have a nice day.
    1 point
  6. "Get to the choppa!" A man is being held hostage in his mansion located in the Swedish skerries. Secure him and bring him to one of the TWO rescue zones to call in the chopper. Keep the landing zone clear and extract the hostage! Made for the 2020 Source Engine Discord Wingman Contest TEAM Fnugz | Gameplay MadsenFK | Art Andi | Models SHOUTOUTS Yanzl | Mapping hotline ZooL | Grand vscript wizard Terri | The People of Mapcore. Additional Credits Yanzl | Making and sharing the majority of the assets used Skybex | A few assets WORKSHOP LINK MOOD BOARD
    1 point
  7. Solved! Thanks to everyone on the Discord channel. Soundscapes must now be set up as a soundscapes_mapname.txt file and placed in the scripts folder. Example: soundscapes_de_distillery.txt. Your level will automagically load this file now.
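For anyone finding this later, here is a minimal sketch of what such a file can contain, assuming the standard Source soundscape KeyValues format; the soundscape name and wave paths are placeholders, not taken from the actual map:

```
// scripts/soundscapes_de_distillery.txt (contents are placeholders)
"de_distillery.interior"
{
    "dsp" "1"

    "playlooping"
    {
        "volume"    "0.6"
        "pitch"     "100"
        "wave"      "ambient/indoor_hum.wav"   // placeholder path
    }

    "playrandom"
    {
        "time"      "10,30"
        "volume"    "0.4,0.7"
        "pitch"     "95,105"
        "rndwave"
        {
            "wave"  "ambient/creak1.wav"       // placeholder paths
            "wave"  "ambient/creak2.wav"
        }
    }
}
```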
    1 point
  8. This is the second part of a technical analysis of Source lighting; if you haven't read the first part yet, you can find it here. Last time, we studied lightmaps, how they are baked and how VRAD handles light traveling through space. We ended part 1 with an explanation of the constant-linear-quadratic falloff system, along with a website that lets you play with these variables and see how the lighting falloff reacts to them. We will now continue with basic examples of what you can do with these variables.

Examples of application

Constant falloff
The simplest type of falloff is the 100% constant one. Whatever the distance, the lighting theoretically keeps the same intensity. This is the kind of (non-)falloff used for sun lighting: the sun is so far away from the map area that its rays can be considered parallel and the light keeps its intensity. Constant falloff is also useful for fake lights, lights with a very low brightness that are only there to brighten up an area.

Linear falloff
Another type of falloff is the 100% linear one. With this configuration, light looks a bit artificial: it loses intensity but reaches much further than with a 100% quadratic falloff. It can be very useful on spots, where the lighting is smooth and powerful. Here is an example.

Quadratic falloff
This is the default configuration for any light entity in Hammer, following, as we said before, the classic inverse-square law (100% quadratic falloff). It is considered the most natural and realistic falloff configuration. The biggest issue is that it boosts the brightness so much at short distances that you can easily end up with a big white spot. Here is an example, with a light placed 16 units away from a grey wall. This can also happen with linear falloff, but it is worse with quadratic. Simple solutions exist: the most common is to use not a light entity but a light_spot entity oriented away from the wall or ceiling the light is attached to. You can widen the opening angle of your light_spot with the inner and outer angle parameters (by default the outer one is 45°; increase it to a value of 85° for example). If needed, you can also add a light with low brightness to light the ceiling or wall a bit.
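To put some numbers on the three "pure" configurations described above, here is a hedged sketch that reuses the guessed formula from part one (the brightness of 200 is just an example value):

```python
# Hedged comparison of the three "pure" falloff configurations, reusing the
# guessed formula from part one (brightness boosted x100 for linear and
# x10,000 for quadratic). Not VRAD's actual code; assumes distance > 0.
def intensity(distance, brightness, c, l, q):
    return brightness * (c + 100 * l + 10_000 * q) / (c + l * distance + q * distance ** 2)

print(f"{'distance':>8} {'constant':>10} {'linear':>10} {'quadratic':>10}")
for d in (32, 128, 512, 2048):
    row = [intensity(d, 200, *cfg) for cfg in ((1, 0, 0), (0, 1, 0), (0, 0, 1))]
    print(f"{d:>8} " + " ".join(f"{v:>10.2f}" for v in row))
```

Constant stays flat, linear keeps a usable intensity much further out, and quadratic is very bright up close but fades quickly, which matches the behavior described above.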
50% & 0% FallOff
A second light falloff system exists, overriding the constant-linear-quadratic system when it is used. The concept is much simpler: you only have to configure two distances:

- 50 percent falloff distance: the distance at which the light should fall off to 50% of its original intensity
- 0 percent falloff distance: the distance at which the light should end. Well... almost: it actually falls off to 1/256th of its original intensity, which is negligible.

The good thing with this falloff system is that you can see the two spheres corresponding to the two distances you have configured in Hammer. Just make sure to have the corresponding preview option activated.

Models lighting
A dedicated section on model lighting is needed, because it differs from brush lighting (the falloff, however, stays the same). In any current game engine, lightmaps can be used on models; a specific UV unwrap is even made just for them. But in Source Engine 1 (except for Team Fortress 2) you cannot use lightmaps on models.

The standard lighting method for models is called per-vertex lighting. This time, light is not applied to faces but to vertices, all of the model's vertices. For each one of them, VRAD computes a color and brightness to apply. Then the engine interpolates between the vertices across each triangle (a small sketch of this idea follows at the end of this section). For example, if we take a simple sphere mesh with two different light entities next to it, we can see it working. With this lighting method, models are integrated into the environment with appropriate lighting. The good thing is that if part of the model is in a dark area and another part is in a bright area, the situation is handled properly. The only requirement is that the mesh must have a sufficient level of detail: if there is a big flat area without additional vertices on it, the lighting detail can be insufficient. Here is an example of a simple square mesh with few triangles on the left and a lot of them on the right. With the dense mesh the lighting is better, but more expensive. If you need a dense mesh for your lighting but don't want your model to be too expensive, you have to find a balance. Two VRAD commands are needed to make per-vertex lighting work:

- StaticPropLighting
- StaticPropPolys

You have to add them to VRAD's command line (in Hammer's expert compile configuration). You can find more information here.

Another system exists that is much cheaper and simpler. Instead of lighting all the vertices, the engine only deals with the model's origin: the result computed at the origin's location is applied to the whole model. This can be an issue if the model is big, or if it sits in an area with a lot of lighting contrast. The best example is at the beginning of Half-Life 2, with trains entering and exiting tunnels. We can see the issue: the model is illuminated at first, but when it enters the tunnel it suddenly turns dark, at the exact moment the train's origin passes into the shadow. This cheap lighting method replaces per-vertex lighting for three types of models:

- prop_dynamic, or any kind of dynamic model used in the game (NPCs, weapon models in hand, any animated model...)
- prop_physics
- ANY MODEL USING A NORMAL MAP (vertex lighting apparently causes issues with normal maps), EVEN IF IT IS USED AS A prop_static

The big problem with these models is their integration into the map: they won't cast any shadow and their lighting will be very flat and boring (because the same result is used for the whole model). Fortunately, there are two good things about this cheap lighting method. First, the direction light comes from is taken into account: if blue light comes from one direction, all the faces oriented toward that direction will be tinted blue, and if different lighting colors or intensities come from different sides of your model, they should appear in-game. Here is an example of a train model using a normal map, with two lights, one on each side. If you look closely, you'll see some blue lighting on the left, on faces that are supposed to be in the shadow of the blue light but are oriented toward it. The second good thing is that there is still a kind of dynamic per-vertex lighting, but a much simpler one: it only works with light and light_spot entities (NOT with light_environment), and it only adds light to the prop, it cannot cast any shadow (it only dynamically takes into account the distance between the light and each vertex). If we reuse the high-poly plane mesh from before as a prop_dynamic, parented to a func_rotating that... rotates, the light dynamically lights the vertices of the prop. There is a limit of 3 dynamic lights per prop; it cannot handle more at the same time. And if your model's texture uses a normal map, this cheap dynamic lighting works on it too.
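To illustrate the per-vertex principle mentioned earlier in this section, here is a hedged Python sketch: light each vertex of a triangle individually (same angle-of-incidence and inverse-square terms as before), then blend the three results across the triangle with barycentric weights. It shows the idea only; it is not how VRAD or the engine actually implements it.

```python
import math

# Hedged sketch of per-vertex lighting: compute a value per vertex, then blend
# across the triangle. All geometry and light values are made up.
def vertex_brightness(vertex, normal, light_pos, brightness):
    to_light = [l - v for l, v in zip(light_pos, vertex)]
    dist = math.sqrt(sum(c * c for c in to_light))
    incidence = max(0.0, sum(n * c / dist for n, c in zip(normal, to_light)))
    return brightness * incidence / (dist * dist)

def shade_point(bary, per_vertex):
    """Blend the three per-vertex results with barycentric weights (w0 + w1 + w2 = 1)."""
    return sum(w * v for w, v in zip(bary, per_vertex))

triangle = [(0, 0, 0), (64, 0, 0), (0, 64, 0)]      # a flat triangle on the floor
normal = (0, 0, 1)
light_pos, brightness = (8, 8, 32), 200.0

per_vertex = [vertex_brightness(v, normal, light_pos, brightness) for v in triangle]
print("per-vertex:", [round(b, 3) for b in per_vertex])
# Brightness at the triangle's center is just the average of the three corners:
print("center    :", round(shade_point((1/3, 1/3, 1/3), per_vertex), 3))
```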
Projected texture and Cascaded Shadows
A few words on dynamic lighting to finish this study. Projected textures are a technology that appeared with Half-Life 2: Episode Two in 2007. An env_projectedtexture is a point entity that projects a texture in a chosen direction, with a chosen opening angle (FOV). The texture is projected with emissive properties (it can only increase the brightness, never lower it) and it may or may not generate shadows. The great thing about this technology is that it is fully dynamic: the env_projectedtexture can move and/or aim at moving targets. It is used, for example, for flashlights in Source games. But as usual there is also a drawback: most of the time you can only use 1 projected texture at a time; modders can change this value quite easily, but in Valve games it is always locked at 1.

The cascaded shadows system is only used in CS:GO. The concept is quite similar to a projected texture, but it does not increase the brightness, it only adds finer shadows. It is used for the environment lighting, works with much smaller luxels than the lightmaps and is fully dynamic. It starts from the tools/toolsskybox textures of the map and casts shadows whenever it meets an obstacle. Shadows from the lightmap are usually low resolution, and the transition between a bright and a dark area is blurry and wide; the cascaded shadows can therefore draw a crisp shadow on top of the lightmap one. And when an object is too small to get a shadow in the lightmap, it still gets one thanks to the cascaded shadows. There are 3 levels of detail for cascaded shadows in Counter-Strike; you can configure the maximum distance at which they work with the Max Shadow Distance parameter of the env_cascade_light entity (400 units by default), and the levels of detail are distributed within this range. Since cascaded shadows and projected textures share some technology, you cannot use both at the same time.

Conclusion
I really hope you found this article interesting and learned at least a few things from it. I believe most of this information is not the easiest to find, and it is always good to know how your tools work in order to understand their behavior. Source Engine 1 is old and its technologies might not be used in the future; more powerful and credible technologies are released frequently, but it is always good to know your classics, right? I would like to thank Thrik and 'RZL for supporting me in writing this article, and long live the Core!

// Written by Sylvain "Leplubodeslapin" Menguy

Additional commands for fun

- Mat_luxels 1 // Allows you to see the lightmap grids
- Mat_fullbright 1 // Disables all the lighting (= fullbright). On CS:GO, the cascaded shadows remain and you should remove them as well (cf. next command)
- Ent_fire env_cascade_light kill // KILL WITH FIRE the cascaded shadows entity
- Mat_drawgray 1 // Replaces all the textures with a monochrome grey texture, useful to work on your lighting
- Mat_fullbright 2 // Alternative to Mat_drawgray 1
- Bonus: Mat_showlowresimage 1 // Minecraft mode
    1 point