I just finished going through all of the examples in chapter 6 of OpenGL 4.0 Shading Language Cookbook, which introduces the geometry and tessellation shaders and some of the ways you can use them to your advantage. I have always been interested in geometry and tessellation shaders, so naturally I was excited I finally made it this far in the book. This chapter has given me plenty of ideas that I want to explore when I have some more time to play around with shaders. You may see some tutorials using some of these techniques in the near future. This chapter had me implement many new features in Derydoca Engine, and you can see my progress on the GitHub project page. The commit hash at the time of this writing is d9591ef6d9ffeb54ef347446be3be7ebec278139.

Point Sprites

A point sprite is typically a flat image displayed at a point in space. Often, the image is oriented to always face the camera; this type of sprite is called a billboard sprite. I decided to implement point sprites in my engine in the form of a simple particle system that I could expand upon later. The book does not ask you to do this at all, but I like to think a little ahead.

My particle system lets you define a cubic volume in space that you want particles to appear in, as well as the number of sprites to generate. The particle system then creates a list of randomized points within the bounding volume, which is sent to the GPU as a vertex buffer. From there, the vertex shader passes each point to the geometry shader, where two triangles forming a quad are generated. These triangles also carry UVs that allow a texture to be mapped and displayed on the quad.
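To make that concrete, here is a minimal sketch of a geometry shader that expands each point into a camera-facing quad. The uniform names (uHalfSize, uProjection) and the assumption that positions arrive already in eye space are mine, not necessarily how the book or my engine structures it.

```glsl
#version 400

layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform float uHalfSize;   // assumed: half the width of the sprite
uniform mat4 uProjection;  // assumed: projection matrix

out vec2 TexCoord;

void main()
{
    // The input point is assumed to be in eye space, so offsetting
    // in x/y automatically makes the quad face the camera.
    vec4 center = gl_in[0].gl_Position;

    // Emit the four corners of the quad as a triangle strip.
    gl_Position = uProjection * (center + vec4(-uHalfSize, -uHalfSize, 0.0, 0.0));
    TexCoord = vec2(0.0, 0.0);
    EmitVertex();

    gl_Position = uProjection * (center + vec4(uHalfSize, -uHalfSize, 0.0, 0.0));
    TexCoord = vec2(1.0, 0.0);
    EmitVertex();

    gl_Position = uProjection * (center + vec4(-uHalfSize, uHalfSize, 0.0, 0.0));
    TexCoord = vec2(0.0, 1.0);
    EmitVertex();

    gl_Position = uProjection * (center + vec4(uHalfSize, uHalfSize, 0.0, 0.0));
    TexCoord = vec2(1.0, 1.0);
    EmitVertex();

    EndPrimitive();
}
```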

I extended the fragment shader to use the discard keyword for cutout sprites.
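The cutout logic is only a few lines in the fragment shader. This is a minimal sketch; the sampler name and the alpha threshold are assumptions.

```glsl
#version 400

in vec2 TexCoord;
out vec4 FragColor;

uniform sampler2D uSpriteTex; // assumed sampler name

void main()
{
    vec4 color = texture(uSpriteTex, TexCoord);
    // Throw away nearly transparent fragments so the corners of the
    // quad do not write to the color or depth buffers.
    if (color.a < 0.1)
        discard;
    FragColor = color;
}
```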

Wireframe Rendering

When developing a game or game engine, at some point it becomes useful to visualize a model's geometry. Rendering the wireframes of your meshes is an excellent way to debug rendering glitches in your game. In this example, we use the geometry shader to generate extra data that enables the fragment shader to draw the lines bordering each triangle.

If you recall, the vertex shader only has access to a single vertex, and the fragment shader only has access to a single fragment. With only that information, there is no way to determine whether you are rendering a pixel near the border of a triangle. The geometry shader, however, has access to the entire primitive, which in this case is a triangle. In our example, the geometry shader passes along a vec3 named GEdgeDistance that contains the information needed to determine when a fragment is near an edge. Each component stores that vertex's distance to the opposite edge. For instance, the GEdgeDistance of the first vertex of a triangle stores the distance from the first vertex to the edge defined by the second and third vertices. When GEdgeDistance is passed through to the fragment shader, it is interpolated like any other vertex attribute.
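Here is a sketch of what that geometry shader stage can look like. It computes each vertex's distance to its opposite edge as the triangle's altitude in window space; uViewportMatrix (an NDC-to-window transform) and the VNormal/VPosition attribute names are assumptions.

```glsl
#version 400

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 VNormal[];   // eye-space normal and position from the
in vec3 VPosition[]; // vertex shader (assumed names)

out vec3 GNormal;
out vec3 GPosition;
noperspective out vec3 GEdgeDistance;

uniform mat4 uViewportMatrix; // assumed: transforms NDC into window space

void main()
{
    // Project the three vertices into window space.
    vec2 p0 = vec2(uViewportMatrix * (gl_in[0].gl_Position / gl_in[0].gl_Position.w));
    vec2 p1 = vec2(uViewportMatrix * (gl_in[1].gl_Position / gl_in[1].gl_Position.w));
    vec2 p2 = vec2(uViewportMatrix * (gl_in[2].gl_Position / gl_in[2].gl_Position.w));

    // Each vertex's distance to its opposite edge is the triangle's
    // altitude from that vertex, found with the law of cosines.
    float a = length(p1 - p2);
    float b = length(p2 - p0);
    float c = length(p1 - p0);
    float alpha = acos((b * b + c * c - a * a) / (2.0 * b * c));
    float beta  = acos((a * a + c * c - b * b) / (2.0 * a * c));
    float ha = abs(c * sin(beta));
    float hb = abs(c * sin(alpha));
    float hc = abs(b * sin(alpha));

    GEdgeDistance = vec3(ha, 0.0, 0.0);
    GNormal = VNormal[0];
    GPosition = VPosition[0];
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();

    GEdgeDistance = vec3(0.0, hb, 0.0);
    GNormal = VNormal[1];
    GPosition = VPosition[1];
    gl_Position = gl_in[1].gl_Position;
    EmitVertex();

    GEdgeDistance = vec3(0.0, 0.0, hc);
    GNormal = VNormal[2];
    GPosition = VPosition[2];
    gl_Position = gl_in[2].gl_Position;
    EmitVertex();

    EndPrimitive();
}
```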

Then, the fragment shader takes the smallest of the three components of GEdgeDistance and compares it against a uniform float that defines the width of the lines to render. Any fragment whose distance falls below that threshold is given the color of the wireframe; the other fragments are shaded normally. In my shader, I opted to shade the object with simple Phong shading.
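And a sketch of the matching fragment shader. I substituted a single diffuse term for the full Phong path to keep it short, and the uniform names are assumptions.

```glsl
#version 400

noperspective in vec3 GEdgeDistance;
in vec3 GNormal;   // eye-space normal passed through the geometry shader
in vec3 GPosition; // eye-space position

out vec4 FragColor;

uniform float uLineWidth; // assumed uniform names
uniform vec4 uLineColor;
uniform vec3 uLightPos;
uniform vec3 uDiffuse;

void main()
{
    // A simple diffuse term standing in for the full Phong shading path.
    vec3 n = normalize(GNormal);
    vec3 l = normalize(uLightPos - GPosition);
    vec4 shaded = vec4(uDiffuse * max(dot(n, l), 0.0), 1.0);

    // Distance to the nearest of the three triangle edges.
    float d = min(GEdgeDistance.x, min(GEdgeDistance.y, GEdgeDistance.z));

    // Blend smoothly between the wireframe color and the shaded
    // surface around the requested line width.
    float mixVal = smoothstep(uLineWidth - 1.0, uLineWidth + 1.0, d);
    FragColor = mix(uLineColor, shaded, mixVal);
}
```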

You can set whatever wireframe color you want. I have chosen a deep green for this example.

Geometry Outlines

In some cases, rendering a solid line around an object can be desirable. This can be used to highlight an object of interest, or to contribute to a cel-shaded effect. There are many ways to solve this problem, including post-processing. The book implements it by generating long, thin quads aligned to the silhouette of the mesh to produce an outline effect. I think this particular technique has more limitations than benefits, but it does introduce some interesting new techniques that can be used for other effects like god rays.

The biggest hurdle I had with implementing this was getting the model's buffer data into the format this effect requires. Historically, my index buffer was stored in a format where each set of three indices defined a triangle. This technique also requires the indices of the adjacent triangles to be packed in, which ultimately takes a set of six indices to define a triangle. I was slightly disappointed to find out that Assimp does not come with a method to load adjacency data out of the box, especially considering I have seen code from other libraries that looks like it does this. After some digging, I figured out how to compute it myself. You can see my implementation in the MeshAdjacencyCalculator class.

So, now that we have the adjacency data, you might be asking how we use it to generate our outline. In the geometry shader, we can use this data to find edges where one adjoining triangle is front-facing and the other is back-facing. When the neighboring triangles oppose each other like this, we know the edge is part of the silhouette. In that case, we generate a quad oriented along that edge and scaled out to the thickness of our line. Additional information is then attached to the vertices so the fragment shader can determine whether it is rendering an outline quad or the actual model, and shade it accordingly.
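A sketch of that silhouette test in the geometry shader is below, assuming positions are divided by w so the facing test happens in normalized device coordinates; the uniform and helper names are my own.

```glsl
#version 400

// With triangles_adjacency, six vertices arrive per triangle: the central
// triangle is elements 0, 2, 4, and elements 1, 3, 5 are the outer
// vertices of the three neighboring triangles.
layout(triangles_adjacency) in;
layout(triangle_strip, max_vertices = 15) out;

uniform float uEdgeWidth; // assumed: thickness of the silhouette quads
uniform float uOverhang;  // assumed: how far to extend quads past the edge

flat out int GIsEdge; // tells the fragment shader to use the outline color

bool frontFacing(vec3 a, vec3 b, vec3 c)
{
    // Positive signed area in NDC means the triangle faces the camera.
    return ((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y)) > 0.0;
}

void emitEdgeQuad(vec3 e0, vec3 e1)
{
    // Extend slightly past the ends of the edge to hide corner gaps.
    vec2 ext = uOverhang * (e1.xy - e0.xy);
    vec2 dir = normalize(e1.xy - e0.xy);
    vec2 n = vec2(-dir.y, dir.x) * uEdgeWidth;

    GIsEdge = 1;
    gl_Position = vec4(e0.xy - ext, e0.z, 1.0);     EmitVertex();
    gl_Position = vec4(e0.xy - n - ext, e0.z, 1.0); EmitVertex();
    gl_Position = vec4(e1.xy + ext, e1.z, 1.0);     EmitVertex();
    gl_Position = vec4(e1.xy - n + ext, e1.z, 1.0); EmitVertex();
    EndPrimitive();
}

void main()
{
    // Perspective divide so the facing test happens in NDC.
    vec3 p0 = gl_in[0].gl_Position.xyz / gl_in[0].gl_Position.w;
    vec3 p1 = gl_in[1].gl_Position.xyz / gl_in[1].gl_Position.w;
    vec3 p2 = gl_in[2].gl_Position.xyz / gl_in[2].gl_Position.w;
    vec3 p3 = gl_in[3].gl_Position.xyz / gl_in[3].gl_Position.w;
    vec3 p4 = gl_in[4].gl_Position.xyz / gl_in[4].gl_Position.w;
    vec3 p5 = gl_in[5].gl_Position.xyz / gl_in[5].gl_Position.w;

    // If the central triangle faces the camera, any edge whose
    // neighboring triangle faces away is part of the silhouette.
    if (frontFacing(p0, p2, p4)) {
        if (!frontFacing(p0, p1, p2)) emitEdgeQuad(p0, p2);
        if (!frontFacing(p2, p3, p4)) emitEdgeQuad(p2, p4);
        if (!frontFacing(p4, p5, p0)) emitEdgeQuad(p4, p0);
    }

    // Pass the central triangle through to be shaded normally.
    GIsEdge = 0;
    gl_Position = gl_in[0].gl_Position; EmitVertex();
    gl_Position = gl_in[2].gl_Position; EmitVertex();
    gl_Position = gl_in[4].gl_Position; EmitVertex();
    EndPrimitive();
}
```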

One of the downsides of this technique is that when a line is drawn around a convex edge, the outline quads visibly separate from one another, which breaks the illusion. This can be mitigated by extending the quads slightly past the edge of the neighboring triangle, but that has its own downside: extend them too far and they poke out from the model, breaking the illusion again. Lastly, when you rotate around an object, the quads pop in and out quite abruptly. You can still get some good static shots, though.

If you look closely at the ears, you can see the line separation on the extremely convex curves.

Tessellated Curve

GPUs are designed to render triangles and lines very well. Triangles and lines are great because they can approximate many shapes. The one thing they lack is curved sides; clearly, a triangle with a curved side would not be considered a triangle anymore. So how do we render smooth, curved lines and objects? We simply generate more triangles or lines to better approximate the target shape. The more you generate, the smoother the object becomes.

The simplest curved object to render is a curved line. In this section, we rendered a Bezier curve. Bezier curves are defined by position vertices and control points. The position vertices define the start and end of the curve, and the control points influence how the line bends between them. In this example, we have two vertices and two control points that generate an 'S' curve.
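For reference, a cubic Bezier curve with endpoints P0 and P3 and control points P1 and P2 is evaluated at a parameter t in [0, 1] using the Bernstein polynomials:

B(t) = (1-t)^3 * P0 + 3(1-t)^2 * t * P1 + 3(1-t) * t^2 * P2 + t^3 * P3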

This example introduces you to the concept of the tessellation control and tessellation evaluation shaders. These shaders are needed if you are going to tessellate anything on the GPU. The tessellation control shader ultimately determines how many subdivisions the resulting object gets, and the tessellation evaluation shader determines where each intermediate vertex is placed.
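Here is a bare-bones sketch of the pair for the curve, using isoline tessellation; the uniform names are assumptions.

```glsl
// Tessellation control shader: one patch holds the four Bezier points.
#version 400

layout(vertices = 4) out;

uniform int uNumSegments; // assumed: segments to split the curve into

void main()
{
    // Pass each control point through unchanged.
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    // For isolines, outer level 0 is the number of lines and
    // outer level 1 is the number of segments per line.
    gl_TessLevelOuter[0] = 1.0;
    gl_TessLevelOuter[1] = float(uNumSegments);
}
```

```glsl
// Tessellation evaluation shader: place each vertex on the curve.
#version 400

layout(isolines) in;

uniform mat4 uMVP; // assumed combined model-view-projection matrix

void main()
{
    float t = gl_TessCoord.x;
    float omt = 1.0 - t;

    // Cubic Bernstein basis, matching the formula above.
    float b0 = omt * omt * omt;
    float b1 = 3.0 * omt * omt * t;
    float b2 = 3.0 * omt * t * t;
    float b3 = t * t * t;

    gl_Position = uMVP * (b0 * gl_in[0].gl_Position +
                          b1 * gl_in[1].gl_Position +
                          b2 * gl_in[2].gl_Position +
                          b3 * gl_in[3].gl_Position);
}
```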

The curve might be hard to make out, but it is there in all of its curvy glory.

Tessellated Quad

We then took the previous example one step further and tessellated a quad. There isn't much more to say that wasn't covered in the previous example, but here you can see how the triangles are generated at higher tessellation levels.
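For a quad patch, the control shader sets four outer levels and two inner levels. A minimal sketch, with the uniform names assumed:

```glsl
#version 400

layout(vertices = 4) out; // one quad patch

uniform float uInner; // assumed uniform names
uniform float uOuter;

void main()
{
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    // Outer levels control subdivision along each of the four edges;
    // inner levels control subdivision of the interior in each direction.
    gl_TessLevelOuter[0] = uOuter;
    gl_TessLevelOuter[1] = uOuter;
    gl_TessLevelOuter[2] = uOuter;
    gl_TessLevelOuter[3] = uOuter;
    gl_TessLevelInner[0] = uInner;
    gl_TessLevelInner[1] = uInner;
}
```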

The vertical axis increases the outer tessellation level, while the horizontal axis increases the inner tessellation level.

Tessellated Teapot

It seems to be a rite of passage for every game engine to eventually render the Utah Teapot, and the Derydoca Engine is no exception to this rule. The Utah Teapot is defined by a series of Bezier patches, which allows you to tessellate the object to any level you desire.

In order to implement this in the engine, I had to build out several new objects. First, I created the BezierPatchMesh object, which contains all of the patch information for the mesh. I then had to find a file with the patch data for the teapot; unfortunately, the book does not supply one. I eventually found this website, which had the data in a plain-text file. With that, I wrote a simple file loader that takes the file's data and creates a BezierPatchMesh object.

When I finished ironing out all of the kinks with that, I had to create a new game component that could render a Bezier patch mesh to the screen. I came up with the TessellatedMeshRenderer, which does just that. It also lets you define the level you want to tessellate the teapot to. That value gets pushed to the shader, which tessellates the patches in a similar manner to the previous example.
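A sketch of what the evaluation shader for one bicubic patch can look like, assuming each patch carries 16 control points in row-major order and uMVP is an assumed uniform name:

```glsl
#version 400

layout(quads) in; // each patch carries 16 Bezier control points

uniform mat4 uMVP; // assumed combined model-view-projection matrix

// Cubic Bernstein basis at parameter t.
vec4 bernstein(float t)
{
    float omt = 1.0 - t;
    return vec4(omt * omt * omt,
                3.0 * omt * omt * t,
                3.0 * omt * t * t,
                t * t * t);
}

void main()
{
    // Evaluate the tensor product of the two bases over the
    // 4x4 control grid.
    vec4 bu = bernstein(gl_TessCoord.x);
    vec4 bv = bernstein(gl_TessCoord.y);

    vec4 pos = vec4(0.0);
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            pos += bu[i] * bv[j] * gl_in[4 * i + j].gl_Position;

    gl_Position = uMVP * pos;
}
```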

This technique is great for smooth surfaces because you can dynamically render smoother objects based on criteria such as camera distance or hardware limitations.

These are all the same model file with different tessellation levels.

Depth Based Tessellation

This final example takes the previous one one step further. Essentially, each patch evaluates how far away it is from the camera and tessellates to a level relative to that distance. This is a great way to build a dynamic LOD system: you can render high amounts of detail near the camera and less detail in the distance to save on performance.
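A sketch of how the control shader might pick a level from depth, assuming control points arrive in eye space and the uniform names for the level and distance ranges are my own:

```glsl
#version 400

layout(vertices = 16) out; // one bicubic Bezier patch

uniform float uMinTessLevel; // assumed names for the level range
uniform float uMaxTessLevel;
uniform float uMinDepth;     // assumed names for the distance range
uniform float uMaxDepth;

void main()
{
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    // Positions are assumed to be in eye space, so the distance in
    // front of the camera is just -z. Using vertex 0 keeps every
    // invocation's answer consistent for the patch.
    float dist = abs(gl_in[0].gl_Position.z);
    float t = clamp((dist - uMinDepth) / (uMaxDepth - uMinDepth), 0.0, 1.0);

    // Near patches get the maximum level; far patches the minimum.
    float level = mix(uMaxTessLevel, uMinTessLevel, t);

    gl_TessLevelOuter[0] = level;
    gl_TessLevelOuter[1] = level;
    gl_TessLevelOuter[2] = level;
    gl_TessLevelOuter[3] = level;
    gl_TessLevelInner[0] = level;
    gl_TessLevelInner[1] = level;
}
```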

All three objects are identical. Notice how some patches on the same model are more dense than the neighboring patches.


I would love to see what you have done with geometry and tessellation shaders, so let me know in the comments below. The geometry shader opens the door to many new effects in real-time rendering. Go out there and make something amazing with it!