Finally, here we are at chapter seven! I have been looking forward to implementing shadows in the Derydoca Engine for quite some time, and this is exactly what chapter seven of OpenGL 4.0 Shading Language Cookbook focuses on. The chapter begins by implementing shadow maps with no filtering, which leaves you with extremely aliased shadows that leave a lot to be desired. Luckily, the book quickly remedies that with examples of different filtering techniques that achieve a shadow effect more pleasing to the eye.
One of the challenges I had when implementing these features is that the book writes all of its shaders with the assumption that there is only a single light source. I implemented support for multiple lights early on in the engine, so I had to modify the shaders to accommodate this. Also, the author tends to bundle all possible shader paths into a single shader and switch rendering paths via subroutines. In my code, I have been separating these into distinct shaders where it makes sense, because the single-shader strategy doesn’t scale nicely. I am also not a fan of it because it obscures which uniforms, variables, and functions are actually needed to get the job at hand done. For that reason, if you check out my code, you will notice the shaders look fairly different from how they appear in the book. As always, you can view all of the code on the project’s GitHub page; the commit hash at the time of this writing is 0aaae1d0a418214a41b6b748081a544f4fd5a2af.
Here is our first taste of shadow mapping. In order to implement this in the engine, I needed to extend the Light class as well as some related classes. This is because the light object needs to render the scene from its point of view and store it in a texture that can be supplied to other shaders that want to display shadows. I won’t go into too much detail here since you can see the changes I made on GitHub, and the code isn’t that different from what is in the book.
The general flow of the code is as follows. First, when a camera begins to render, all of the lights in the scene render to their shadow maps. Each shadow map is a single texture rendered from a perspective matrix defined in the light object. The texture contains only depth information; there is no color buffer associated with it. The fragment shader is literally empty because the fixed-function pipeline automatically writes to the depth buffer for us. From there, the main render target’s buffer is cleared. Once that is done, all game objects in the scene are rendered recursively. When a MeshRenderer component renders, it binds the material for the object, all properties related to lights are bound to the shader, the mesh is drawn, and lastly all of the material’s uniforms are unbound from the shader.
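One detail worth calling out is the "bias" step that sits between the light's projection matrix and the shadow-map lookup: the light renders in clip space, where coordinates land in [-1, 1], but texture lookups want [0, 1]. Here is a minimal sketch of just that step (the function name is hypothetical, not engine code; in practice the full shadow matrix is bias × lightProjection × lightView):

```cpp
#include <array>
#include <cassert>

// Maps a position from the light's normalized device coordinates
// (each component in [-1, 1]) into shadow-map texture space ([0, 1]).
// This is equivalent to multiplying by the standard bias matrix:
//   [0.5 0   0   0.5]
//   [0   0.5 0   0.5]
//   [0   0   0.5 0.5]
//   [0   0   0   1  ]
std::array<float, 3> ndcToShadowTexCoord(float x, float y, float z)
{
    return { x * 0.5f + 0.5f, y * 0.5f + 0.5f, z * 0.5f + 0.5f };
}
```

The lower-left corner of the light's view (-1, -1) maps to texture coordinate (0, 0), and the upper-right (1, 1) maps to (1, 1), so the whole frustum covers the whole depth texture.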
The shader then renders the scene like any of our other shaders, with the exception of calculating the shadow amount and multiplying the diffuse and specular colors by it, which gives us our shadows. In fact, the method of calculating the shadow amount is the meat of what the subsequent shaders modify to achieve a more realistic effect.
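The core of that calculation can be mirrored on the CPU like this (hypothetical helpers for illustration only; in the actual GLSL, a sampler2DShadow performs the depth comparison for you via textureProj):

```cpp
#include <cassert>

// storedDepth: the depth read from the shadow map at the fragment's
// light-space coordinate. fragDepth: the fragment's own depth as seen
// from the light. Returns 1.0 when lit, 0.0 when occluded.
float shadowFactor(float storedDepth, float fragDepth)
{
    return fragDepth <= storedDepth ? 1.0f : 0.0f;
}

// Only the diffuse and specular terms are scaled by the shadow amount;
// ambient is left alone so shadowed areas are not pitch black.
float shadeChannel(float ambient, float diffuse, float specular, float shadow)
{
    return ambient + shadow * (diffuse + specular);
}
```

With a hard 0-or-1 shadow factor like this, every fragment is either fully lit or fully shadowed, which is exactly why the unfiltered result below looks so jagged.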
Below you can see a close-up shot of what these shadow maps look like. As you can tell, they are extremely aliased.
In order to smooth out the edges of the shadows, we need to modify the method of getting a light’s shadow influence. First of all, the shadow-map texture needs to be set to use linear filtering, whereas the previous example used nearest filtering. Linear filtering tells the GPU to return a value linearly interpolated between the neighboring texels when you sample between shadow-map texels. This change alone gives slightly blurred shadow edges, which is nice, but we want to take it one step further.
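The filtering change is just OpenGL texture state on the depth texture. A configuration sketch (assuming `shadowMapTexId` is the handle of the light's depth texture; variable name is mine, not the engine's):

```cpp
// Because GL_TEXTURE_COMPARE_MODE is enabled for use with a
// sampler2DShadow, GL_LINEAR here filters the *comparison results*
// from the neighboring texels, not the raw depth values, giving a
// smoothed 0..1 shadow factor at texel boundaries.
glBindTexture(GL_TEXTURE_2D, shadowMapTexId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
```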
The next step is to average the values of the neighboring texels. In the case of the book, we sample the four shadow-map texels diagonal to the texel we are shading, for a total of four shadow-map texture lookups per shaded fragment. We then take the shadow value we calculated and finish rendering the fragment as we did in the first example.
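This is a small percentage-closer filter (PCF). Mirrored on the CPU, it is just an average of the four diagonal shadow tests (a hypothetical helper; in GLSL these would be four textureProjOffset lookups with offsets (-1,-1), (1,-1), (-1,1), and (1,1)):

```cpp
#include <cassert>
#include <cstddef>

// Averages the four shadow-comparison results taken at the texels
// diagonal to the sampling point. Each tap is 1.0 (lit) or 0.0
// (shadowed), so the result is a shadow factor in {0, 0.25, 0.5,
// 0.75, 1.0} before the hardware's own linear filtering smooths it
// further.
float pcfShadow(const float diagonalTaps[4])
{
    float sum = 0.0f;
    for (std::size_t i = 0; i < 4; ++i)
        sum += diagonalTaps[i];
    return sum / 4.0f;
}
```

A fragment straddling a shadow edge might see two lit and two shadowed taps, producing a half-shadowed value instead of the hard step we had before.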
Here is a close-up of what this type of shading looks like. It is better, but still not ideal.
Random Sampling Shadows
The last method the book explores for improving the look of shadows is random sampling. Again, this ultimately only changes the method of calculating the shadow’s influence on each fragment.
Random sampling starts by sampling the shadow map at a certain distance away from the fragment we are shading. The shader takes a number of samples and averages them together. If the average is 1.0, the shader exits early and assumes the fragment is fully lit. Conversely, if the average is 0.0, it exits early and assumes the fragment is fully in shadow. If the value is neither 0.0 nor 1.0, the shader samples the shadow map again at a smaller radius, averages the values, and repeats this process for the remaining sets of samples.
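The early-exit logic described above can be sketched on the CPU like so (a hypothetical helper, not engine code; in the real shader each "tap" is a jittered shadow-map lookup scaled by the softness value):

```cpp
#include <cassert>
#include <vector>

// Each element of `rings` is one set of shadow-test results
// (1.0 = lit, 0.0 = shadowed), outermost sample ring first. After
// each ring we check the running average: if everything so far
// agrees, we assume the remaining rings would too and stop early.
float randomSampleShadow(const std::vector<std::vector<float>>& rings)
{
    float sum = 0.0f;
    float count = 0.0f;
    for (const auto& ring : rings)
    {
        for (float tap : ring)
        {
            sum += tap;
            count += 1.0f;
        }
        float avg = sum / count;
        if (avg == 0.0f || avg == 1.0f)
            return avg; // fully lit or fully shadowed: early exit
    }
    return sum / count; // penumbra: partial shadow factor
}
```

Fragments deep inside or far outside a shadow pay for only the first ring of samples; only penumbra fragments take the full set, which keeps the average cost down.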
The distance vectors are actually computed on the CPU with a random number generator and then uploaded to the shader in the form of a texture.
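Here is a sketch of how those offsets are typically generated (the function name and layout are my assumptions, not the engine's exact code): a jittered grid is warped onto a disk so the samples stay roughly evenly distributed while still being randomized.

```cpp
#include <cmath>
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

// Builds samplesU * samplesV two-component offsets on the unit disk,
// ready to be packed into a texture for the shader to read. Each
// point starts at the center of a grid cell, is jittered within the
// cell, then warped from the unit square onto the disk.
std::vector<float> buildJitterOffsets(int samplesU, int samplesV, unsigned seed)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> jitter(-0.5f, 0.5f);
    const float pi = 3.14159265358979f;

    std::vector<float> data;
    data.reserve(static_cast<std::size_t>(samplesU) * samplesV * 2);
    for (int v = 0; v < samplesV; ++v)
    {
        for (int u = 0; u < samplesU; ++u)
        {
            float x = (u + 0.5f + jitter(rng)) / samplesU; // angle in [0,1)
            float y = (v + 0.5f + jitter(rng)) / samplesV; // radius^2 in (0,1)
            // sqrt on the radius keeps the point density uniform over
            // the disk's area rather than clustering at the center.
            data.push_back(std::sqrt(y) * std::cos(2.0f * pi * x));
            data.push_back(std::sqrt(y) * std::sin(2.0f * pi * x));
        }
    }
    return data;
}
```

At render time the shader multiplies these unit-disk offsets by the softness value to get actual shadow-map coordinates.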
With this technique, you can control the softness of the shadow by supplying a single float value to the shader. In the case of the Derydoca Engine, this value lives in the serialized level file under the light’s properties, in a variable named shadowSoftness.
In the screenshot below you can see how this looks; I used a shadow softness value of 0.003 here. You can see that this creates a noisy texture. That is simply because the points we are sampling are not uniformly spaced, since they are generated with a random number generator.
Adding Ambient Occlusion
This last example doesn’t really modify the way shadows are shaded; instead, it uses a texture to approximate ambient occlusion (AO). If you don’t know, ambient occlusion is a way of modeling how light behaves in corners and crevices: the more acute the angle of the corner, the less light bounces into it.
In this example, I once again decided to use our pointing squirrel model. I opened the model in Blender and used it to bake the ambient occlusion map, then saved the map as a PNG file and brought it into the project to be treated like any other texture.
In the shader, I sample this texture and multiply the diffuse color by the ambient occlusion value. It is pretty much that simple, but it definitely improves the look of the model. You can see what this looks like in the screenshot below: on the left is the model without AO, and on the right is the model with AO.
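Mirrored outside the shader, the whole technique is one multiply (a hypothetical helper standing in for the one-line GLSL, something like `diffuse *= texture(aoMap, uv).r`):

```cpp
#include <cassert>

// The AO map stores how much ambient light reaches each texel:
// 1.0 = fully open surface, 0.0 = fully occluded crevice. Scaling
// the diffuse color by it darkens corners without touching the
// direct lighting.
float applyAmbientOcclusion(float diffuse, float ao)
{
    return diffuse * ao;
}
```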
Even though I am done with chapter seven, I am not done developing shadows in the engine. For starters, essentially all of these implementations behave more like a spotlight than a point light, because they use a single texture to approximate a frustum of light. Ultimately, this logic would move exclusively to spotlights, and I would use something like a shadow cubemap to allow light to be projected in all directions. I also need to support directional lights, and there are many other lighting features that could be implemented down the road.
What are some of your favorite lighting features in games or other game engines? Let me know in the comments below!