
Content Count: 979
Days Won: 1
Reputation Activity

laminutederire got a reaction from Ynel in What have you purchased recently?
Just bought the Acer XF270HU, 144Hz of IPS 1440p goodness incoming!
On the fence about buying a 1080 Ti to push them frames and replace my R9 Fury!
Edit: I'm an idiot, it's not 1440p@144Hz ready for every game, but it'll do okay for now, especially since I spend most of my time on CS:GO!
Update: I got the monitor and tested it. Okay, it's amazing!
In action in GTA V:

laminutederire got a reaction from Ynel in What have you purchased recently?
Bought a Galaxy S8 to replace my S4. It is as fast as a flagship phone can be, has every feature you need and most of the ones you don't, and hell does it look beautiful compared to other phones! I paid the price of an S7 for it, and at that price it is a really good phone. Recommended.

laminutederire got a reaction from Ynel in What have you purchased recently?
Congrats! Those CPUs are somewhat hard to get in many places!

laminutederire got a reaction from Klems in What's going on with your life?
Haven't been here in a while. However, I read an amazing paper on path tracing that I wanted to share here, since in my opinion it opens a boulevard to true realtime path tracing for games! Artists will loooooooove path tracing, and so will gamers if we finally manage to make it happen!
Anyway, the paper is more or less about learning the light distribution through a neural network. It works very well and is quite impressive.
(Paper link)
It is an elegant idea: we shouldn't look at light as camera-dependent, but as what it actually is in physics: a vector field (well, in practice it'd be the electric and magnetic fields, so two fields, but same idea). As soon as you have this, all the information you need can be precomputed, you can hope to infer pixel values fast for static scenes, and from there get interactive frame rates once the hardware follows.
Anyhow, hope it got everyone as excited as it did me. Have a good day everyone!

laminutederire reacted to blackdog in What are you playing now?
Oh bravo! I want to do that but I end up playing CS…

laminutederire got a reaction from Zarsky in Seeking industry insight on common pipeline bottlenecks
There are people already doing research on that, but: procedural generation of content could benefit a lot from machine learning (maybe using GAN techniques?). You could also look into deep networks for appearance matching and rendering. Appearance matching is about finding shader parameters to match the look of a material from one or more pictures; rendering is about outputting pixels from some parameters without actually having to render the scene or one type of effect. There have been recent papers on cloud rendering using neural networks and on denoising with neural networks (more useful for path tracers), and maybe that could work for faster AA too (if you become millionaires with that, do quote me on it).
There is also work on animation and physics simulation, notably fast fluid simulation for games and hair simulation (the former may be the better fit, though).
(This is coming from a computer science and engineering student, though.)

laminutederire got a reaction from Mia Winters in CounterStrike: Global Offensive
Depends on your initial DPI; I'm at 1 with 800 DPI, which would be 2 with 400.
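The conversion above is just the usual "effective DPI" rule of thumb: in-game sensitivity times mouse DPI. A quick sketch (the function name `edpi` is mine, not from any game API):

```python
def edpi(sensitivity: float, dpi: int) -> float:
    """Effective sensitivity: in-game sens multiplied by mouse DPI.
    This is why sens 1 at 800 DPI tracks the same as sens 2 at 400 DPI."""
    return sensitivity * dpi

same = edpi(1, 800) == edpi(2, 400)  # both give 800
```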
I have the most popular keyboard, it seems! But that's understandable, since it is one of the cheapest well-built mechanical keyboards!

laminutederire got a reaction from JeanPaul in Black Mesa Source
Huge thanks for answering questions like that!
I never would've figured the boost was due to better buffers! It seems to me, with the little experience I have, that GPU graphics performance relies heavily on how efficiently memory is used. It does make sense, as the cache structure leaves a smaller lower-level cache available to each core, but it never ceases to surprise me how important it is (and it pains me as well).
Is it Chetan Jags? If so, I saw an article from him on how it was initially done, which was informative, so thanks to him for that!
That's nice work you're doing from the initially limited framework!

laminutederire reacted to JeanPaul in Black Mesa Source
@laminutederire
Quote from our resident shader guy Chetan in response to your question:

laminutederire reacted to blackdog in Overkill's The Walking Dead
As there's more material coming out, it makes sense to create a proper topic.
Reveal trailer with Kirkman himself explaining this is a multiplayer coop for teams of four:
First character reveal, thanks to @laminutederire who posted it first
There's apparently a VR experience in development too
Here's a featurette of the team and studio
It is possible to see that UE4 is being used to power this game.
A long interview is also available:
(not sure of the contents, as I haven't watched it)

laminutederire got a reaction from blackdog in Walking Dead
Don't know if I should make a new thread! I was surprised nobody had posted anything about it here, so:
YouTube link (did not find a way to embed the video from my phone)
So apparently Overkill is doing a Walking Dead game, in a Left 4 Dead style (4-player co-op). Since there are guys from that studio here, is the comparison relevant?

laminutederire reacted to [HP] in Cuphead
Played both, loved both. In fact, they're probably my GOTY contenders up there with Zelda BOTW.
If you like old school platformers, more specifically run & gun games like Contra, and enjoy the art style, go for Cuphead.
If you like SP experiences, Hellblade feels like a fully fledged AAA game to me. It's an incredible technical achievement, with beautiful shaders and lighting; shame it lacks a bit in gameplay variety.

laminutederire got a reaction from [HP] in Wolfenstein 2: The New Colossus
Seems like the game did not sell very well and they are trying to drive sales. That's probably why they made a demo available as well. A lot of the mixed reviews come from the overall buggy experience some players had, so making a demo available so you can check whether you can run it properly helps. (I personally wouldn't have bought it otherwise, since I didn't want to get stuck at a measly 30fps.) It can help drive sales of the older games as well. So it probably made sense for them, despite pissing off some people.

laminutederire got a reaction from TheOnlyDoubleF in Session  A new skateboarding game
Too bad it doesn't look as smooth as the Skate 1 to 3 games used to. Another series EA should have kept going, if you ask me.

laminutederire got a reaction from blackdog in The Olympics Committee officially opens to eSport
There was Tiger Woods as well; he could still play anyway. Same for Kobe (I think he wasn't convicted? I'd have to check). That was my point: unethical behaviors aren't punished in regular sports, so they cannot say esports aren't suitable because of ethics.

laminutederire got a reaction from Vaya in Computer graphics, my journey to understanding
I finally finished all my renders. Here they are.
This first one is similar to what ambient occlusion is all about (or so I have heard!). It is a computation of the average occlusion on the model. It is rendered by sending a ray and checking if it intersects geometry. If it does, you send another ray in a random direction, and you return black if the new ray hits something, white otherwise. The new ray has a specified length, which determines how far out occlusion is calculated: with a longer length, more rays intersect geometry and the result gets darker. Then you average it all out to get the render.
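The scheme above can be sketched in a few lines. This is a minimal toy version, not the actual renderer from the course: occluders are spheres, and the "random direction" is a uniform sample on the unit sphere.

```python
import math
import random

def sphere_hit(origin, direction, center, radius, max_dist):
    """True if a unit-length ray hits the sphere within max_dist."""
    ox = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, ox))
    c = sum(o * o for o in ox) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    for t in ((-b - math.sqrt(disc)) / 2.0, (-b + math.sqrt(disc)) / 2.0):
        if 1e-6 < t < max_dist:
            return True
    return False

def ambient_occlusion(point, occluders, ray_length, samples=256):
    """Average over random directions: 0 (black) when the short secondary
    ray hits something, 1 (white) otherwise."""
    total = 0.0
    for _ in range(samples):
        # uniform random direction on the unit sphere
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        d = (r * math.cos(phi), r * math.sin(phi), z)
        blocked = any(sphere_hit(point, d, c, rad, ray_length)
                      for c, rad in occluders)
        total += 0.0 if blocked else 1.0
    return total / samples
```

A point with no nearby geometry averages to 1 (fully lit); a point enclosed by geometry within `ray_length` averages to 0.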
Afterwards, I computed direct lighting with a single point light. The point light class was implemented with power and position arguments, which sufficed to render images with it. The power is distributed uniformly over a sphere, therefore we have:
Power(point) = P_lightpoint / (4 * pi * r^2), where r is the distance between the point and the point light.
I started using the BSDF model here, to determine how light interacts with a given material. This model describes the ratio of energy transmitted and reflected relative to the incident energy.
From there you get a simple diffuse model: send a ray into the scene, look at where it intersects, then check whether that point receives light from emitters. If it does, you calculate the incident power, then compute how much of this power is transmitted toward the camera ray direction.
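For the diffuse case, the BSDF collapses to the Lambertian model: a fraction albedo/pi of the incident power is scattered equally in all directions, weighted by the incidence angle. A sketch under that assumption (not the course's exact code):

```python
import math

def diffuse_radiance(irradiance: float, albedo: float, cos_theta: float) -> float:
    """Lambertian shading: outgoing radiance toward the camera is
    E * (albedo / pi) * cos(theta), with back-facing light clamped to zero."""
    return irradiance * (albedo / math.pi) * max(cos_theta, 0.0)
```

Light arriving at a grazing angle (cos_theta near 0) or from behind the surface contributes nothing, which is exactly why diffuse surfaces darken toward their silhouettes.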
You then end up with such a render:
From there, textures can change the color and the power that gets redistributed, and so can specularities and so on.
The issue with this method is that it requires sending many rays and computing a lot of intersections, so we have to optimize the data structures to reduce the render time. There are three main approaches: send fewer rays, send more generalized rays (which diminishes the number of rays required), or compute fewer intersections; we can of course mix several of these. Sending fewer rays can be done by terminating rays earlier and by using adaptive sampling: the first avoids too many ray bounces, while the second avoids sending rays to pixels where little changes between samples. The most significant optimizations that can be done quickly are the ones related to intersections, which are believed to account for a significant part of the computation time. The complexity of brute force is O(N_pixels * N_objects).
These techniques all rely on using geometric properties to simplify visibility and intersection computations. Some are also related to compressing the scene file size. As you might already know, when mapping for Counter-Strike you create BSP files, which are the result of a discretization of the space (hence the need for clear boundaries of said space; that's where leaks come from). Using uniform grids built over the space, and by rasterizing incrementally, we can greatly reduce the render time. However, while a uniform grid is easy to implement, it isn't suited for nonuniform scenes. We can use hierarchies of grids (grids within grids) to help with that, or different ways of subdividing space (octrees, kd-trees or BSP trees).
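The uniform-grid idea can be sketched with a toy 2D version: bin objects by cell so a query only tests the handful of objects sharing its cell, instead of all N (this illustrates the binning principle only; a real ray tracer would walk the grid cells along the ray):

```python
from collections import defaultdict

class UniformGrid:
    """Toy 2D uniform grid: objects are binned by cell so a lookup only
    returns candidates from the query's own cell."""

    def __init__(self, cell_size: float):
        self.cell_size = cell_size
        self.cells = defaultdict(list)

    def _cell(self, x: float, y: float):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, x: float, y: float, obj):
        self.cells[self._cell(x, y)].append(obj)

    def candidates(self, x: float, y: float):
        # only these objects need a full intersection test
        return self.cells[self._cell(x, y)]
```

With many objects spread across the scene, each cell holds only a few of them, which is where the speedup over the brute-force O(N_pixels * N_objects) comes from; the hierarchical structures (octrees, kd-trees, BSP trees) refine the same idea for nonuniform scenes.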
They can be explained like this:
And the trees are built like this:

laminutederire got a reaction from AlexM in Computer graphics, my journey to understanding
Hello everyone,
I've started a few classes on computer graphics and animation, from a scientific point of view. Since most people on this forum are either hobbyists or artists, I figured some of you might be interested in learning a few things (or rediscovering some, as I'm learning them as well). I can't post most of my learning materials, since I'd get expelled otherwise, but I'll do my best anyway. One class will be about photorealistic rendering and the other about physically based animations and effects.
In the first one we'll program various effects, starting from basic illumination up to subsurface scattering, going through reflections, procedural generation of effects, geometry, etc. Animation will go over rigid body then soft body simulation, with overviews of hair simulation and maybe machine learning techniques for fast soft body simulation.
If those subjects interest anyone, welcome here; I'll update this thread with new posts as I learn new things during my courses.

laminutederire reacted to 'RZL in Push & Pull  The art of guiding players through an environment
Promoted this thread again since it deserves all the exposure it can get, really a great read.

laminutederire got a reaction from FMPONE in Computer graphics, my journey to understanding

laminutederire got a reaction from Terri in Computer graphics, my journey to understanding

laminutederire got a reaction from mryeah in Computer graphics, my journey to understanding

laminutederire got a reaction from Radu in Computer graphics, my journey to understanding

laminutederire got a reaction from borgking in Computer graphics, my journey to understanding

laminutederire got a reaction from Sigma in Computer graphics, my journey to understanding
I called them, then sent an email. One guy was pretty excited by the idea and passed the email on to his colleague, so I'm waiting on their answer.
My first render, which half failed:

laminutederire got a reaction from grapen in Computer graphics, my journey to understanding