
For the last couple of weeks I have been working on getting a basic diffuse shader working in the game engine I am developing (Derydoca Engine), and only now do I have it all functioning properly. This isn’t my first time implementing a diffuse shader, but it was the first time I implemented one in an engine I built from the ground up. One simple mistake in my matrix calculation, compounded with limited options for debugging GLSL code, could have caused (and probably did cause) me quite the headache. I say “probably” because I am not sure what problems I had in my original implementation, but nevertheless there was something wrong. Ultimately, what straightened all of this out was a process of simplifying and building tools, which is the main reason I wanted to write this post.
If you recall from my previous blog post, the scene I was working in consisted of three squirrels rotating around one another above a grassy terrain. When I started implementing the diffuse shader, I decided to just swap every shader in the scene for the diffuse shader and assume everything would work out. Of course that wasn’t the case, and I was left with a purple-and-green psychedelic effect that looked neat but was nowhere near what I was expecting. I continued to work within that scene, thinking the problem would become obvious and I would solve it soon after. After beating my head against a wall repeatedly (figuratively, though I kind of felt like doing it IRL), I decided to implement a simple scene class so I could encapsulate the existing scene and build a simplified one, letting me hunt down the issues in a more “sterile” environment.
This new scene helped me in two major ways. First, unlike the previous scene, it gave me a primitive object to use as a baseline. It is much easier to understand how a cube or a sphere should look with a diffuse shader than a squirrel model or a terrain. Secondly, it let me put only the essential objects and components in the scene so I could ignore all other noise. What I ended up with was a sphere in a skyboxed area (which realistically didn’t need to be there), a single point light positioned so its light should reflect off of the sphere’s surface, and a camera that I could control with the mouse and WASD keys.
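To make the expected result concrete: a diffuse (Lambertian) shader lights each point in proportion to the cosine of the angle between the surface normal and the direction toward the light, clamped at zero. This is a CPU-side sketch of that math, not the engine’s actual GLSL; the types and names are my own for illustration.

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

float dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

Vec3 normalize(const Vec3& v) {
    // Note: if a mis-set uniform zeroes out the normal, this divides by
    // zero and everything downstream becomes NaN garbage — the kind of
    // silent failure a defaulted matrix uniform can cause.
    float len = std::sqrt(dot(v, v));
    return {v[0] / len, v[1] / len, v[2] / len};
}

// Lambertian diffuse term: cos(angle between normal and light direction),
// clamped so surfaces facing away from the light stay dark instead of
// going negative.
float diffuse(const Vec3& normal, const Vec3& toLight) {
    return std::max(0.0f, dot(normalize(normal), normalize(toLight)));
}
```

With a sphere and one point light, this term alone predicts the smooth bright-to-dark falloff you expect to see, which is exactly what makes a primitive such a useful baseline.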
With a simpler scene it was easier for me to visualize what should be happening in the vertex and fragment shaders. However, despite my best attempts, each iteration of code still took me about 10 seconds to compile and evaluate. I eventually wrote a component that monitors the file system for any modification to my shader source code, recompiles it, and reapplies it to the sphere. This was the real key to solving my issue, because my 10-second iteration time dropped to less than 10 milliseconds. As soon as I saved my shader, I would see the change in real time and could understand exactly what the shader was doing. One of the major problems I ran into was that a matrix uniform variable was not being set, and GLSL was defaulting it to all zeroes, which caused all of my normals to point away from the scene’s origin. It all came down to a simple typo, and I feel silly for having missed it, but that is all it takes to slow your progress to a crawl.
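The actual component lives in the engine’s source, but the core idea is just polling the file’s last-write time each frame and firing a reload callback when it changes. Here is a minimal self-contained sketch using C++17’s std::filesystem; the class and member names are mine, not the engine’s.

```cpp
#include <filesystem>
#include <fstream>
#include <functional>
#include <utility>

namespace fs = std::filesystem;

// Watches a shader source file and invokes a callback when it is saved.
// The callback is where you would recompile the GLSL and reapply it to
// the material on the mesh.
class ShaderFileWatcher {
public:
    ShaderFileWatcher(fs::path path, std::function<void()> onChange)
        : m_path(std::move(path)),
          m_onChange(std::move(onChange)),
          m_lastWrite(fs::last_write_time(m_path)) {}

    // Call once per frame (or on a timer). Returns true if a reload fired.
    bool poll() {
        auto current = fs::last_write_time(m_path);
        if (current != m_lastWrite) {
            m_lastWrite = current;
            m_onChange();
            return true;
        }
        return false;
    }

private:
    fs::path m_path;
    std::function<void()> m_onChange;
    fs::file_time_type m_lastWrite;
};
```

Calling poll() from a component’s per-frame update is the simplest integration; platform notification APIs (inotify, ReadDirectoryChangesW) avoid polling entirely but add complexity that a debugging tool like this doesn’t need.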
All of this code is available on the project’s GitHub page if you want to see how I pulled it all off. It’s not rocket science, but it is still science! So remember: if you are in doubt, simplify your problem and build some tools, and you will crack even the most stubborn problem!