Mastering Tessellation Shaders and Their Many Uses in Unity

NedMakesGames
27 min read · Nov 24, 2021


Video version of this tutorial.

Hi! I’m Ned, and I make games! In this Unity graphics programming tutorial, I’ll introduce tessellation shaders: advanced shaders which can subdivide triangles, adding detail and smoothing out blocky models. Use them for automatic level of detail, procedural models, or height map based terrain. Read on to learn how to add tessellation to any shader yourself!

Thank you Crubidoobidoo and all my patrons for helping make this tutorial possible!

This wireframe shows the actual geometry of the mesh. The left and right are the same model. As the camera gets closer, a tessellation shader adds vertices and smooths the model’s silhouette.

This tutorial was tested in Unity 2020.3, 2021.1 and 2021.2. I will be using Universal Render Pipeline for all examples; however, none of the techniques in this tutorial are URP specific. Furthermore, I will not use the shader graph in this tutorial, as even in HDRP it does not support all required techniques. If you’re unfamiliar with HLSL shaders in URP, I’m writing a tutorial about that, so check back soon.

Procedural terrain rendered with tessellation and a noise-based height map.

This tutorial will explain what tessellation shaders are and how to write them in HLSL. I will demonstrate several methods to optimize tessellation, control the amount of subdivision per triangle, smooth a model’s geometry and silhouette, and add details with height maps and procedural techniques. This tutorial aims to explain the topic of tessellation, not so much create a finished shader. Nevertheless, I have provided an example shader here.

A deformable plane implemented with tessellation.

As mentioned, this tutorial is more advanced and will expect you to know how to write shaders in HLSL. In addition, it will make use of vector math, so brush up on vector dot and cross products, as well as vector projection and reflection.

A flat plane deformed by a height map texture.

The Structure of a Tessellation Shader

In mathematics, tessellation is the process of fitting shapes together to form a larger surface; however, in graphics programming, it usually refers to subdividing a shape into smaller pieces. Tessellation shaders do just that. Put a mesh in, and the GPU subdivides all its faces, adding many more vertices.

Why is this useful? Well, you can take these extra vertices and move them around, smoothing out jagged, low-poly edges and adding fine position details not possible otherwise. This enables a kind of reverse workflow for LoD, where you can create low poly models and use tessellation to add complexity.

Left with no tessellation, right with tessellation. Notice the smoother silhouette on the right.

Tessellation is also great for terrain. Use a flat plane mesh and tessellate it, adjusting heights with a heightmap. This allows you to adjust the shape using only a texture and render high detail only where the camera can see it. Generate a heightmap procedurally to visualize mathematical surfaces, like SDFs, or easily animate a mesh.

Keep in mind that tessellation shaders have a hefty performance cost. The good news is they’re generally cheaper and have better hardware support than their cousins, geometry shaders. Plus they allow you to keep your mesh assets simpler — which can help with batching. If you’re worried about performance, be sure to test things out before committing to tessellation.

Now, how do you add tessellation to a shader? Tessellation shaders have two additional programmable stages, similar to the vertex and fragment stages. These are called the hull and domain stages. They run in between the vertex and fragment stages, and together with an unprogrammable stage called the tessellator, control how the mesh is subdivided and refined.

The hull function receives data in the form of “patches,” which are simply lists of vertices. What relation these vertices have to one another is configurable; however, in this tutorial I will always work with triangles.

So, the patch is an array of three vertices that make up a triangle on the mesh. The array contains output data from the vertex function corresponding to each vertex.

Besides the patch, the hull function also receives an index, specifying which vertex in the patch the hull function must output data for. It runs once per vertex in the patch and can look at all vertices in the patch to produce a new data structure for later on in the chain.

The hull stage is unique in that it also has another function that runs in parallel: the patch constant function. This separate function runs once per patch, so it’s very useful to calculate data that’s shared between vertices in a triangle. It also must output tessellation factors, which determine how many times to divide the patch. We’ll talk about these extensively later on.

So to summarize, the hull stage is made up of two functions: the hull function and the patch constant function. They receive a patch, which is a collection of vertices usually forming a triangle. The hull function runs once per vertex in a patch, while the patch constant function runs once per patch and must output tessellation factors.

Next up, a non-programmable stage called the tessellator runs. This takes patch data and the tessellation factors generated during the hull stage to subdivide each patch. The tessellator generates something called barycentric coordinates for all vertices in this new mesh.

Barycentric coordinates are an easy way to describe a point inside a triangle. Any point can be calculated as a weighted average of the three corner points, and barycentric coordinates are the weights in that formula. They’re usually given as a float3 vector, and the three components always sum to one. Besides position, we can also use barycentric coordinates to calculate normals, UVs, and more for any point in terms of the triangle corners.

The barycentric coordinates for point P in terms of triangle ABC are (0.55, 0.31, 0.14)
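In code, interpolating any per-vertex attribute this way is a single weighted sum. A minimal sketch (the function name is illustrative):

```hlsl
// Interpolate an attribute at a point given barycentric weights.
// For points inside the triangle, weights.x + weights.y + weights.z == 1.
float3 BarycentricInterpolate(float3 a, float3 b, float3 c, float3 weights) {
    return a * weights.x + b * weights.y + c * weights.z;
}
```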

This brings us to the second programmable stage involved in tessellation, the domain stage. It consists of one function, the domain function, which runs for each vertex on the tessellated mesh. Its job is to output the final data for a vertex. To do so, it receives the barycentric coordinates for a vertex and its originating patch, including all data generated by the hull and patch constant functions.

The domain function is where a lot of your logic will go. Much of what you’d usually do in the vertex stage should be calculated here instead, including clip space positions. Crucially, you can reposition vertices in the domain stage, something essential for most of tessellation’s use cases.

If you have a geometry function, it would run after the domain stage. But, usually, the rasterization and fragment stages run next.

To summarize, the vertex stage runs first. The hull stage receives information about triangles on your mesh, called patches, and decides how to subdivide them. The tessellator does the heavy lifting, subdividing the mesh; while the domain stage prepares vertices in the tessellated mesh for the fragment stage, deciding where each appears on screen.

Let’s pivot to writing code! You can register hull and domain functions similarly to the other programmable stages, using a #pragma directive. Note that tessellation shaders require shader target 5.0, so adjust that or Unity will complain.
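For example, the directives might look like this (the function names match whatever you declare in your pass):

```hlsl
// Inside the pass's HLSLPROGRAM block.
#pragma target 5.0 // Tessellation requires shader model 5.0

#pragma vertex Vertex
#pragma hull Hull // The first new tessellation stage
#pragma domain Domain // The second new tessellation stage
#pragma fragment Fragment
```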

The vertex function is now pretty plain. It simply converts positions and normals to world space. The output structure will be fed into the hull stage, and can contain basically any data you’ll need later in the pipeline. Notably, the POSITION semantic is forbidden; use the INTERNALTESSPOS semantic instead.
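Here’s a minimal sketch of the vertex stage, assuming URP’s Core.hlsl is included; the struct and field names are illustrative:

```hlsl
struct Attributes {
    float3 positionOS : POSITION;
    float3 normalOS : NORMAL;
};

struct TessellationControlPoint {
    // POSITION is not allowed on hull stage inputs; use INTERNALTESSPOS instead.
    float3 positionWS : INTERNALTESSPOS;
    float3 normalWS : NORMAL;
};

TessellationControlPoint Vertex(Attributes input) {
    TessellationControlPoint output;
    // URP space transform helpers, included via Core.hlsl.
    output.positionWS = TransformObjectToWorld(input.positionOS);
    output.normalWS = TransformObjectToWorldNormal(input.normalOS);
    return output;
}
```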

The hull shader’s signature looks like below. It has several C#-attribute-like tags. domain determines the input patch type, while outputtopology and outputcontrolpoints determine the output patch type. I’m always going to use triangles in this tutorial, so these will remain fixed. patchconstantfunc registers the patch constant function, and partitioning tells the tessellator which algorithm to use to subdivide triangles. Keep this in mind for later.

The function itself receives the input patch using a special InputPatch construct. The vertex function output structure and number of vertices in the patch go inside the angle brackets. You can access each structure in the patch like you would an array.

The hull function also receives the vertex index with the SV_OutputControlPointID semantic. It signals which vertex in the patch to output data for. Finally, don’t forget to set the return structure type. In this example, it’s the same as the vertex output structure, but it could be unique. There are no required fields in this data, but once again, use INTERNALTESSPOS instead of POSITION.

In this example, the hull function body is extremely simple, simply returning the correct vertex inside the patch.
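Putting that together, a sketch of the whole hull function (the patch constant function named in the attribute is covered next; define it above the hull function when you assemble the file):

```hlsl
[domain("tri")] // Input patches are triangles
[outputtopology("triangle_cw")] // Output clockwise-wound triangles
[outputcontrolpoints(3)] // Output three control points per patch
[patchconstantfunc("PatchConstantFunction")] // Runs once per patch, in parallel
[partitioning("integer")] // Subdivision mode; also try fractional_odd or fractional_even
TessellationControlPoint Hull(
        InputPatch<TessellationControlPoint, 3> patch, // The triangle's three vertices
        uint id : SV_OutputControlPointID) { // Which vertex to output data for
    return patch[id]; // Pass the vertex through unmodified
}
```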

The patch constant function has a much simpler signature. It receives the input patch similarly to the hull function but outputs its own data structure. This structure should contain the tessellation factors specified per edge on the triangle using SV_TessFactor. Edges are arranged opposite the vertex with the same index. So, edge zero lies between vertices one and two.

There’s also a center tessellation factor, tagged with SV_InsideTessFactor. Soon, we’ll visualize how these factors affect the final tessellation pattern, but for now, realize that an edge factor is the number of times an edge subdivides, and the inside factor squared is roughly the number of new triangles created inside the original.

The patch constant function can also output other data, but it must be tagged with a semantic like always. The special BEZIERPOS semantic is useful since it can take the form of a float3 array. Later, we’ll use it to output control points for a Bézier-curve-based smoothing algorithm, but you can use it to store anything you need.
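A sketch of the output structure and patch constant function, with hardcoded factors for now:

```hlsl
struct TessellationFactors {
    float edge[3] : SV_TessFactor; // Per-edge subdivision; edge i is opposite vertex i
    float inside : SV_InsideTessFactor; // Interior subdivision amount
    // Extra per-patch data also needs a semantic, e.g.
    // float3 bezierPoints[7] : BEZIERPOS;
};

TessellationFactors PatchConstantFunction(
        InputPatch<TessellationControlPoint, 3> patch) {
    TessellationFactors f;
    // Constant factors for now; later sections compute these dynamically.
    f.edge[0] = 1; f.edge[1] = 1; f.edge[2] = 1;
    f.inside = 1;
    return f;
}
```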

With that, we’re done with the hull stage. Let’s move to the domain stage.

The domain function also has a domain attribute, which should match the hull function’s output topology — triangles in this case. As arguments, it receives the output of the hull function arranged into a patch, as well as the output from the patch constant function. Finally, it receives the barycentric coordinates of the vertex to work with, tagged with SV_DomainLocation.

The output structure is very similar to what you’d output from a vertex function. It should contain a clip space position (unless you’re using a geometry stage), as well as any fields the fragment function needs for lighting.

Also, notice the BARYCENTRIC_INTERPOLATE macro. It’s really handy to interpolate any property in the patch structure!
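Here’s a sketch of the domain stage, including one possible definition of that macro; note that it relies on the argument names of the surrounding function:

```hlsl
struct Interpolators {
    float4 positionCS : SV_POSITION;
    float3 positionWS : TEXCOORD0;
    float3 normalWS : TEXCOORD1;
};

// A weighted sum of a field across the patch, using the barycentric weights.
#define BARYCENTRIC_INTERPOLATE(fieldName) \
    patch[0].fieldName * barycentricCoordinates.x + \
    patch[1].fieldName * barycentricCoordinates.y + \
    patch[2].fieldName * barycentricCoordinates.z

[domain("tri")] // Must match the hull function's output topology
Interpolators Domain(
        TessellationFactors factors, // The patch constant function output
        OutputPatch<TessellationControlPoint, 3> patch, // The hull function output
        float3 barycentricCoordinates : SV_DomainLocation) {
    Interpolators output;
    output.positionWS = BARYCENTRIC_INTERPOLATE(positionWS);
    output.normalWS = normalize(BARYCENTRIC_INTERPOLATE(normalWS));
    // Compute the clip space position here, not in the vertex function.
    output.positionCS = TransformWorldToHClip(output.positionWS);
    return output;
}
```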

And that’s it for the general structure. Let’s take a closer look at partitioning modes and tessellation factors. I’ve created a simple tessellation shader to test this stuff out. It has a property to assign tessellation factors as well as one to switch between partitioning modes using a keyword.

Create a material for it and add it to a mesh. To visualize the tessellation, be sure to set the scene render mode to shaded wireframe. Then, play around with the factors.

You’ll see the edge factor corresponds to roughly the number of times edges will be subdivided, while the inside factor adds complexity to the center. Also notice that setting any factor to zero or less will cause the mesh to disappear. This becomes important later.

Now try setting a factor differently for each edge. When we try more complicated algorithms, it’s important that edges on adjoining triangles have the same tessellation factor. If not, you can get little holes in the mesh where vertices don’t match up. To ensure this doesn’t happen, try restricting edge factors to depend only on vertices connected to that edge.

You may have noticed some commented-out properties in the shader. Uncomment those and change your patch constant function slightly. Is your mesh flickering, even with positive factors? Why?

There’s an oddity with the way the shader compiler handles tessellation factors. In a bid to speed things up, the compiler sometimes splits the patch constant function and calculates each factor in parallel. This sometimes causes weird issues. If you look in the Frame Debugger, you’ll see the compiler stripped two edge factor properties from your shader, making them always equal zero!

Look under the “Floats” section. No EdgeFactor2 or 3 in sight.

You can fix this by using a vector property with each component specifying one edge’s factor, so the compiler doesn’t strip properties it thinks are unused. In general, if your tessellation factors are acting strange, try rewriting this section of the patch constant function.

The partitioning modes are all interesting. The integer mode divides a number of times equal to the ceiling of the tessellation factor. It has a generally nice pattern! But, if you need tessellation factors to smoothly transition, the fractional_odd and fractional_even modes handle that. They’re so named because they only fully subdivide on odd or even factors respectively, which is easier to understand when you see them in motion. A quirk of fractional_even is that it always subdivides at least once, since two is the lowest factor it can handle.

fractional_odd partitioning with tessellation factors 1.5, 2, 2.5 and 3.

The pow2 mode seems to be identical to the integer mode, at least on my machine. I would have guessed it only subdivided when a factor is a power of two. Let me know how it works for you!

All factors set to 3.5. Top row: integer, pow2. Bottom row: fractional_even, fractional_odd.

Optimizing with Culling

Tessellation can be expensive! But, there are a few ways we can speed it up. Since tessellation happens before the rasterization stage, it cannot take advantage of the automatic frustum and winding culling that happens there. Thankfully, we can implement it ourselves and avoid tessellating triangles that will just be thrown out.

It’s easy to cull a triangle in the patch constant function. Just set the tessellation factors to zero, and the tessellator will ignore that patch.

Let’s tackle frustum culling, where we test each point of the triangle to see if they’re all out of view. To do that, we can use the clip space positions of the triangle corners. Be sure to calculate them in your vertex function and pass them to the hull stage.

Above the patch constant function, write this function to test if a patch should be culled, passing the clip space positions of the triangle. Return false for now.

Above that, write IsOutOfBounds to check if a point is outside the bounds defined by upper and lower vectors, and ShouldFrustumCull to calculate those bounding vectors.

In clip space, the W component contains the outer bounds of the viewing frustum (the camera’s viewable area), so we can use that to create the bounding vectors. The logic differs slightly between graphics APIs, since some anchor the viewing frustum at zero and some at negative W. Luckily, Unity provides a constant with the correct value.

Returning to ShouldClipPatch, call ShouldFrustumCull on each point. If they’re all true, the triangle is entirely outside the viewing area and should be culled.
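Assembled, the three functions might look like this sketch (UNITY_RAW_FAR_CLIP_VALUE comes from the core render pipeline library):

```hlsl
// Returns true if the point is outside the bounds set by lower and higher.
bool IsOutOfBounds(float3 p, float3 lower, float3 higher) {
    return p.x < lower.x || p.x > higher.x ||
        p.y < lower.y || p.y > higher.y ||
        p.z < lower.z || p.z > higher.z;
}

// Returns true if the given vertex is outside the camera frustum.
bool ShouldFrustumCull(float4 positionCS) {
    float w = positionCS.w;
    // UNITY_RAW_FAR_CLIP_VALUE accounts for the differing clip space
    // z ranges between graphics APIs.
    float3 lowerBounds = float3(-w, -w, -w * UNITY_RAW_FAR_CLIP_VALUE);
    float3 higherBounds = float3(w, w, w);
    return IsOutOfBounds(positionCS.xyz, lowerBounds, higherBounds);
}

// Returns true if the patch should be culled, given the triangle's clip space positions.
bool ShouldClipPatch(float4 p0PositionCS, float4 p1PositionCS, float4 p2PositionCS) {
    // Only cull if all three points lie outside the frustum.
    bool allOutside = ShouldFrustumCull(p0PositionCS) &&
        ShouldFrustumCull(p1PositionCS) && ShouldFrustumCull(p2PositionCS);
    return allOutside;
}
```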

Moving on to winding culling, also known as backface culling, we need to determine which side of the triangle is facing the camera, culling if the back side is visible. Do that by calculating a normal vector for the plane containing the triangle and testing whether it points roughly towards the camera.

To find the normal vector, we need two vectors tangent to the plane — vectors pointing between two triangle corners will do nicely. Their cross product is the normal.

Since we’re working in clip space, we need to “normalize” the position and apply perspective by dividing by the W component. This gives rough screen space positions.

Use the dot product of the view direction and triangle normal to find if they’re roughly pointing in the same direction. However, since the camera points along the z axis in clip space, we can simplify everything to a comparison of the normal’s z coordinate.

I said the camera points along the Z axis, but which way? Turns out, this depends on your graphics API. Usually the view direction is in the negative z direction; however, it’s flipped in OpenGL. Use a keyword to apply the correct comparison either way.

Finally, in ShouldClipPatch, call ShouldBackFaceCull. In the patch constant function, if ShouldClipPatch returns true, set all edge factors to zero.
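A sketch of the winding test and the updated culling logic, assuming the clip space positions were passed down from the vertex stage:

```hlsl
// Returns true if the triangle's back face points towards the camera.
bool ShouldBackFaceCull(float4 p0PositionCS, float4 p1PositionCS, float4 p2PositionCS) {
    // Divide by W to apply perspective, giving rough screen space positions.
    float3 point0 = p0PositionCS.xyz / p0PositionCS.w;
    float3 point1 = p1PositionCS.xyz / p1PositionCS.w;
    float3 point2 = p2PositionCS.xyz / p2PositionCS.w;
    // The cross product of two edges gives the triangle normal. In clip space,
    // the camera looks along the z axis, so only the normal's z component matters.
#if UNITY_REVERSED_Z
    return cross(point1 - point0, point2 - point0).z < 0;
#else // In OpenGL, the view direction is flipped
    return cross(point1 - point0, point2 - point0).z > 0;
#endif
}

// ShouldClipPatch now also checks winding:
// return allOutside || ShouldBackFaceCull(p0PositionCS, p1PositionCS, p2PositionCS);

// In the patch constant function:
if (ShouldClipPatch(patch[0].positionCS, patch[1].positionCS, patch[2].positionCS)) {
    f.edge[0] = f.edge[1] = f.edge[2] = f.inside = 0; // Zero factors cull the patch
}
```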

In Unity, you might notice the shader culls some faces of your mesh when it shouldn’t. Even if you don’t see it now, you certainly will when adding vertex displacement later on.

Some triangles culled incorrectly near the bottom of the image.

Add some leeway to these calculations by introducing frustum and winding cull tolerance properties. For frustum culling, add the tolerance to each bound, while for winding culling, compare with the tolerance instead of zero. Adjust these as needed while adding features!
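For instance, with hypothetical _FrustumCullTolerance and _BackFaceCullTolerance properties, the adjusted comparisons might look like:

```hlsl
// Hypothetical tolerance properties, set per material.
float _FrustumCullTolerance;
float _BackFaceCullTolerance;

// Frustum culling: pad the bounds outward by the tolerance.
float3 lowerBounds = float3(-w, -w, -w * UNITY_RAW_FAR_CLIP_VALUE) - _FrustumCullTolerance;
float3 higherBounds = float3(w, w, w) + _FrustumCullTolerance;

// Winding culling: compare against the tolerance instead of zero, e.g.
// cross(point1 - point0, point2 - point0).z < -_BackFaceCullTolerance
```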

Dynamic Tessellation Factors

Another way to optimize tessellation is to lower factors when and where a mesh doesn’t need to be subdivided. There are a few ways to go about this. Say we’re working with a mesh that has some large faces but many smaller ones — we really only need to tessellate the large faces. One way to do this is to calculate tessellation factors proportionate to edge length.

Above the patch constant function, define this function to calculate the tessellation factor for an edge bound by two vertices. Pass the world space positions of each vertex as well as a scale and bias value. Set the factor to the scale plus the bias, making sure the result is never less than one (so it doesn’t get culled). This creates constant edge factors.

A test scene with all factors set to 1.

Now, to add world space edge length, set the factor to the distance between the vertex positions divided by the scale. In this scheme, the edge subdivides while aiming to keep divided edge lengths roughly equal to the scale value.

Add shader properties for the scale and bias values. Back in the patch constant function, call EdgeTessellationFactor for each edge, passing in the new properties and the appropriate vertex positions. Remember, each edge lies across from the vertex sharing its index in the array.

The inside factor should be the average of all edge factors. This code worked fine for me, but if you’re seeing weird or inconsistent inside factors, it’s probably due to the compiler. Just call EdgeTessellationFactor again for each edge instead of reusing the previously calculated values; that version is only needed if averaging doesn’t work well for you.
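Putting the pieces together, a sketch with assumed _TessellationScale and _TessellationBias properties:

```hlsl
// Calculates the tessellation factor for an edge bound by two world space points.
float EdgeTessellationFactor(float scale, float bias, float3 p0PositionWS, float3 p1PositionWS) {
    // Aim to keep subdivided edges roughly "scale" world units long.
    float factor = distance(p0PositionWS, p1PositionWS) / scale + bias;
    return max(1, factor); // Clamp so the patch is never accidentally culled
}

// In the patch constant function. Edge i lies across from vertex i.
f.edge[0] = EdgeTessellationFactor(_TessellationScale, _TessellationBias,
    patch[1].positionWS, patch[2].positionWS);
f.edge[1] = EdgeTessellationFactor(_TessellationScale, _TessellationBias,
    patch[2].positionWS, patch[0].positionWS);
f.edge[2] = EdgeTessellationFactor(_TessellationScale, _TessellationBias,
    patch[0].positionWS, patch[1].positionWS);
f.inside = (f.edge[0] + f.edge[1] + f.edge[2]) / 3.0;
```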
The same test scene with world length tessellation factors. Scale is 0.5.

OK, neat! But maybe we can tessellate based on an edge’s length in screen space? Due to culling, we already have clip space positions, so it won’t be difficult to do.

In EdgeTessellationFactor, add arguments for each vertex’s clip space position. Then, calculate the factor using the clip space positions, making two adjustments. First, apply perspective by dividing the positions by their w component. Next, multiply by _ScreenParams.y, which contains the height of the screen in pixels. Now, we can specify scale in pixels!

In the patch constant function, pass the clip space positions along with the world space positions.
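The screen space variant might look something like this:

```hlsl
float EdgeTessellationFactor(float scale, float bias,
        float3 p0PositionWS, float4 p0PositionCS, float3 p1PositionWS, float4 p1PositionCS) {
    // Perspective divide gives normalized device coordinates; multiplying by the
    // screen height converts to rough pixels, so "scale" is now specified in pixels.
    float screenLength = distance(p0PositionCS.xy / p0PositionCS.w,
        p1PositionCS.xy / p1PositionCS.w) * _ScreenParams.y;
    return max(1, screenLength / scale + bias);
}
```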

The same scene with screen space length tessellation factors. Scale is 100.

This looks good too, but sometimes not quite right. What if we used the distance to the camera?

In EdgeTessellationFactor, find the length, in world space, between the two vertices. Then, calculate the distance from the center of this edge to the camera. The camera position is different in the various render pipelines, but in URP you can get it with GetCameraPositionWS. Divide the length by the scale multiplied by the distance to the camera, so edges close to the camera receive larger factors and subdivide more. I preferred the effect with a quadratic distance curve, but that’s up to you.
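A sketch of the camera distance variant, shaped with the quadratic curve I mentioned:

```hlsl
float EdgeTessellationFactor(float scale, float bias, float3 p0PositionWS, float3 p1PositionWS) {
    float edgeLength = distance(p0PositionWS, p1PositionWS);
    // Distance from the edge's midpoint to the camera, using URP's helper.
    float distanceToCamera = distance(GetCameraPositionWS(), (p0PositionWS + p1PositionWS) * 0.5);
    // The squared distance shrinks the effective scale up close, subdividing more.
    float factor = edgeLength / (scale * distanceToCamera * distanceToCamera) + bias;
    return max(1, factor);
}
```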

The test scene with camera distance tessellation factors. Scale is 0.02.

This final approach usually gives me the best results, but your mileage may vary. Use a keyword to switch between algorithms, if you’d like!

These heuristics try to guess the appropriate tessellation factors, but if you have an idea of how the mesh should tessellate, try storing tessellation factor multipliers in the mesh’s data. This is useful if you have an area with large flat faces where you’ll never need to add detail.

For demonstration purposes, I’ll store these multipliers in the green channel of the mesh’s vertex colors, but texcoords also work well. In Blender or your modeling program of choice, paint the areas you don’t want to tessellate black.

In your shader, pass the vertex colors down to your hull input structure. In the patch constant function, calculate a multiplier for each edge by averaging the green channel of connecting vertices and pass it as a new argument into EdgeTessellationFactor. Multiply it into the final calculation.
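For example (only edge zero shown; the others follow the same pattern):

```hlsl
// The multiplier joins the factor calculation as a new argument.
float EdgeTessellationFactor(float scale, float bias, float multiplier,
        float3 p0PositionWS, float3 p1PositionWS) {
    float factor = distance(p0PositionWS, p1PositionWS) / scale * multiplier + bias;
    return max(1, factor);
}

// In the patch constant function, average the green channel of the
// two vertices bounding the edge (colors passed down from the vertex stage).
f.edge[0] = EdgeTessellationFactor(_TessellationScale, _TessellationBias,
    (patch[1].color.g + patch[2].color.g) * 0.5,
    patch[1].positionWS, patch[2].positionWS);
```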

The same test scene with the middle flat quad untessellated due to having black vertex colors.

Here’s another useful technique when using some type of deforming force field or SDF. In this example, I deform this plane based on proximity to the little spheres. We know that if a vertex is far enough away from all spheres, it shouldn’t deform, so we don’t need to tessellate connected triangles.

For now, ignore the actual deforming logic; we’ll come back to that later. Focus on calculating the tessellation factors.

In your patch constant function, evaluate the deformation amount for each vertex. If it results in deformation (the value is positive in this example), pass a multiplier of one; otherwise, pass zero, and the clamp in EdgeTessellationFactor keeps undeformed triangles at a factor of one.
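A sketch, where EvaluateDeformation stands in for your own force field or SDF logic:

```hlsl
// EvaluateDeformation returns a positive value where a vertex would move.
float d0 = EvaluateDeformation(patch[0].positionWS);
float d1 = EvaluateDeformation(patch[1].positionWS);
float d2 = EvaluateDeformation(patch[2].positionWS);
// An edge only needs subdivision if one of its vertices deforms;
// the clamp in EdgeTessellationFactor keeps undeformed triangles at factor one.
float edgeMultiplier0 = (d1 > 0 || d2 > 0) ? 1 : 0; // Edge 0 is across from vertex 0
float edgeMultiplier1 = (d2 > 0 || d0 > 0) ? 1 : 0;
float edgeMultiplier2 = (d0 > 0 || d1 > 0) ? 1 : 0;
```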

Silhouette Smoothing

An easy way to add detail to a mesh is through high resolution textures. For instance, normal maps vary normal vectors per-pixel which affect the apparent shape of a surface. However, this technique does not really change any geometry. Nowhere is this more apparent than on a mesh’s silhouette. Zoom up close and even 4K textures can’t hide jagged and pointy edges. In this section, I’ll describe a few algorithms to smooth mesh geometry using tessellation!

This sphere has smooth shading, though its jagged silhouette is very apparent.

All of these strategies involve offsetting vertices in the domain function. Through simple barycentric interpolation, all new vertices are limited to the original triangle’s plane. However, what if we used the corners’ normal vectors to construct a curved triangle?

The simplest technique to achieve this is called Phong tessellation. You might have heard of Phong shading, the smooth shading technique that linearly interpolates normal vectors. Phong tessellation tries to recapture that simplicity and efficiency when positioning tessellated points.

It works like this. First, calculate the flat barycentrically interpolated position for a point. We’ll use (1/3, 1/3, 1/3) in this example.

Then, imagine three tangent planes emanating from each triangle corner, normal to the corner’s respective normal vector.

Next, project the flat position onto each of these planes, which is equivalent to finding the nearest point on the plane.

Flat (barycentrically interpolated) position and one corner’s tangent plane.
The position projected onto the tangent plane.
The position projected onto each corner’s tangent plane.

Finally, compute the barycentrically interpolated position again, but using the three projected points.

The math is not too complicated. We already know how to deal with barycentric coordinates. And, to project a point onto a plane, find the difference between the point and any other on the plane — the triangle corners work! Then, project that vector onto the plane’s normal vector and subtract the result from the original point. Here’s the algorithm for Phong tessellation!
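A sketch of that algorithm, with the smoothing factor parameter described just below:

```hlsl
// Projects a point onto the plane defined by planePosition and planeNormal.
float3 ProjectOntoPlane(float3 position, float3 planePosition, float3 planeNormal) {
    return position - dot(position - planePosition, planeNormal) * planeNormal;
}

float3 CalculatePhongPosition(float3 bary, float smoothing,
        float3 p0PositionWS, float3 p0NormalWS, float3 p1PositionWS, float3 p1NormalWS,
        float3 p2PositionWS, float3 p2NormalWS) {
    // The flat, barycentrically interpolated position.
    float3 flatPositionWS = bary.x * p0PositionWS + bary.y * p1PositionWS + bary.z * p2PositionWS;
    // Project the flat position onto each corner's tangent plane,
    // then interpolate the three projections barycentrically.
    float3 smoothedPositionWS =
        bary.x * ProjectOntoPlane(flatPositionWS, p0PositionWS, p0NormalWS) +
        bary.y * ProjectOntoPlane(flatPositionWS, p1PositionWS, p1NormalWS) +
        bary.z * ProjectOntoPlane(flatPositionWS, p2PositionWS, p2NormalWS);
    // The smoothing factor blends between flat (0) and fully smoothed (1).
    return lerp(flatPositionWS, smoothedPositionWS, smoothing);
}
```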

Now update your domain function. Make sure to use this new adjusted position when calculating clip space.

The cat on the right has Phong tessellation enabled.

Try it out on a model! At first, it may look a little too puffed up. We can improve it by adding a smoothing factor property. Interpolate between the flat and Phong position using this factor, which can help quite a bit.

The cat on the right has a smoothing factor of 1/3.

Some models may need a little touching up as well. If your model has sharp edges, try adding edge loops very close to the sharp edge, making long, thin faces. Looking at the Phong tessellation algorithm, if the normal vectors of each vertex are very close to parallel, the Phong position will be very close to the flat position.

This sword model split open due to long, crisp edges and discontinuous normal vectors. The version on the right corrected the issue by adding an extra edge loop near the crisp corner.
Adding the edge loop in Blender.

Another technique you can try is baking smoothing factors into your mesh’s data, for example, in the red channel of its vertex colors. Simply paint the red channel black in areas you don’t want to bend. Pass the vertex colors all the way down to your domain function, calculate the barycentric interpolation of the red vertex color channel, and multiply it with the smoothing factor.

Phong tessellation gives pretty good results and is also cheap, all things considered. However, if you need higher quality smoothing, there’s another option: PN Triangles. This technique constructs curved triangles similarly to Bézier curves! It’s quite a bit more expensive than the Phong method, but let’s try it out.

We can save some time by precomputing Bézier control points for use when positioning tessellated points in the domain function. Control points are constant per triangle, so the patch constant function is perfect. We’ll make use of ten control points: the triangle corners, a pair along each triangle edge, and one in the triangle center.

The ten Bézier control points in their original positions.

Let’s take a look at calculating each control point. The corners can remain as they are. They’ll help ensure the triangle never escapes its original position too much. For the edge pairs, use a similar algorithm to Phong tessellation. Take the peach-colored point in the figure, one third of the way along the edge from corner A to B.

To calculate its position, first project B onto the plane defined by A’s normal.

Then, take the average of this new point and A, weighting A twice.

For the other point on this edge, do the same operation, mirroring A and B.

Continue with the other two edge pairs.

Finally, for the center point, find the average of the edge pair control points, which I’ll call E.

“E” is hovering above the triangle center. It’s the average of the edge control points.

Also find the average of the triangle corners, which I’ll call T. The center control point is E plus half the difference between E and T, which gives a nice rounded center.

The center control point has risen into position. It equals E + (E - T) * 0.5

Using these control points, it’s possible to compute any point on this bendy triangle using barycentric coordinates.

This is the formula, which looks similar to that of a cubic Bézier curve. Notice how the barycentric coordinates appear in terms with their corresponding corners, with the center point an even combination of each. If you’d like to learn more about Bézier curves, I’ve linked some excellent resources in the footnotes.
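For reference, here is the standard cubic Bézier triangle expansion, where (u, v, w) are the barycentric weights, the triple-index points b_{300}, b_{030}, b_{003} are the triangle corners, the mixed-index points are the edge pairs, and b_{111} is the center point:

$$
\begin{aligned}
b(u, v, w) ={} & b_{300}u^3 + b_{030}v^3 + b_{003}w^3 \\
& + 3b_{210}u^2v + 3b_{120}uv^2 + 3b_{021}v^2w \\
& + 3b_{012}vw^2 + 3b_{102}uw^2 + 3b_{201}u^2w \\
& + 6b_{111}uvw
\end{aligned}
$$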

To program this, store the control points in the patch constant output struct using the BEZIERPOS semantic. Tag a seven-element float3 array with it in the patch constant output structure. Why only seven? The patch already contains the triangle corner positions, so there’s no reason to waste the memory.

CalculateBezierControlPoints calculates Bézier control points using the algorithm described earlier. Call it in the patch constant function, but only if the triangle isn’t culled.
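A sketch of those functions, reusing ProjectOntoPlane from the Phong section:

```hlsl
// A control point one third along the edge from A towards B,
// pulled onto A's tangent plane.
float3 CalculateBezierControlPoint(float3 aPositionWS, float3 aNormalWS, float3 bPositionWS) {
    // Project B onto the plane defined by A and its normal, then
    // average with A, weighting A twice.
    float3 projected = ProjectOntoPlane(bPositionWS, aPositionWS, aNormalWS);
    return (2 * aPositionWS + projected) / 3.0;
}

void CalculateBezierControlPoints(inout float3 bezierPoints[7],
        float3 p0PositionWS, float3 p0NormalWS, float3 p1PositionWS, float3 p1NormalWS,
        float3 p2PositionWS, float3 p2NormalWS) {
    // Two control points per edge.
    bezierPoints[0] = CalculateBezierControlPoint(p0PositionWS, p0NormalWS, p1PositionWS);
    bezierPoints[1] = CalculateBezierControlPoint(p1PositionWS, p1NormalWS, p0PositionWS);
    bezierPoints[2] = CalculateBezierControlPoint(p1PositionWS, p1NormalWS, p2PositionWS);
    bezierPoints[3] = CalculateBezierControlPoint(p2PositionWS, p2NormalWS, p1PositionWS);
    bezierPoints[4] = CalculateBezierControlPoint(p2PositionWS, p2NormalWS, p0PositionWS);
    bezierPoints[5] = CalculateBezierControlPoint(p0PositionWS, p0NormalWS, p2PositionWS);
    // The center point is E + (E - T) / 2, where E averages the edge
    // control points and T averages the triangle corners.
    float3 avgBezier = (bezierPoints[0] + bezierPoints[1] + bezierPoints[2] +
        bezierPoints[3] + bezierPoints[4] + bezierPoints[5]) / 6.0;
    float3 avgControl = (p0PositionWS + p1PositionWS + p2PositionWS) / 3.0;
    bezierPoints[6] = avgBezier + (avgBezier - avgControl) / 2.0;
}
```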

Calculate the final point in the domain stage. CalculateBezierPosition implements the Bézier curve calculation using the control points from the hull stage. I’ve also added an interpolation between the curved position and the flat position, like in the Phong tessellation function.

Substitute this function for the Phong tessellation function in your domain function.
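A sketch of what that function can look like:

```hlsl
float3 CalculateBezierPosition(float3 bary, float smoothing, float3 bezierPoints[7],
        float3 p0PositionWS, float3 p1PositionWS, float3 p2PositionWS) {
    float3 flatPositionWS = bary.x * p0PositionWS + bary.y * p1PositionWS + bary.z * p2PositionWS;
    // The cubic Bezier triangle formula. The corners come from the patch itself;
    // the edge and center control points come from the patch constant function.
    float3 smoothedPositionWS =
        p0PositionWS * (bary.x * bary.x * bary.x) +
        p1PositionWS * (bary.y * bary.y * bary.y) +
        p2PositionWS * (bary.z * bary.z * bary.z) +
        bezierPoints[0] * (3 * bary.x * bary.x * bary.y) +
        bezierPoints[1] * (3 * bary.y * bary.y * bary.x) +
        bezierPoints[2] * (3 * bary.y * bary.y * bary.z) +
        bezierPoints[3] * (3 * bary.z * bary.z * bary.y) +
        bezierPoints[4] * (3 * bary.z * bary.z * bary.x) +
        bezierPoints[5] * (3 * bary.x * bary.x * bary.z) +
        bezierPoints[6] * (6 * bary.x * bary.y * bary.z);
    return lerp(flatPositionWS, smoothedPositionWS, smoothing);
}
```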

Left is Phong with 1/3 smoothing, right is PN triangles with 2/3 smoothing.

In Unity, you’ll see it does give nice results, usually slightly better than Phong tessellation, especially at higher smoothing values. It’s up to you if the added complexity is worth it!

Throughout all this, we haven’t touched the normal vectors. Interpolating normal vectors linearly is usually OK, but if your mesh has many divots and inflections, shading might be improved by interpolating normals quadratically!

Use another Bézier-curve-like algorithm for this. It might seem strange to use a Bézier curve for normal vectors, but as long as we normalize the final result, it works just fine. Quadratic Bézier curves only need three control points, so place one in the middle of each triangle edge.

Again, the triangle corners will retain their original normal vectors. To compute a control vector for the point halfway between corners A and B, follow these steps.

First, find the average normal of A and B.

Second, construct a plane perpendicular to the edge connecting A and B.

Finally, reflect the average vector across the plane.

Notice that when normals are similar but slanted relative to the triangle plane, the control normal points in the opposite direction. This creates bumpy shading, as if the surface is warping.

Calculate the control vectors for the remaining edges.

To add this to your shader, first add three more slots in the Bézier control point array. Then, call CalculateBezierNormalPoints in your patch constant function, which implements the formula explained above.
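A sketch of the control normal math, with the position control point array widened to ten entries:

```hlsl
// The control normal for the edge between corners A and B.
float3 CalculateBezierControlNormal(float3 aPositionWS, float3 aNormalWS,
        float3 bPositionWS, float3 bNormalWS) {
    float3 d = bPositionWS - aPositionWS; // The edge vector
    // Reflect the averaged normal across the plane perpendicular to the edge.
    float v = 2 * dot(d, aNormalWS + bNormalWS) / dot(d, d);
    return normalize(aNormalWS + bNormalWS - v * d);
}

void CalculateBezierNormalPoints(inout float3 bezierPoints[10],
        float3 p0PositionWS, float3 p0NormalWS, float3 p1PositionWS, float3 p1NormalWS,
        float3 p2PositionWS, float3 p2NormalWS) {
    // One control normal per edge, stored after the seven position control points.
    bezierPoints[7] = CalculateBezierControlNormal(p0PositionWS, p0NormalWS, p1PositionWS, p1NormalWS);
    bezierPoints[8] = CalculateBezierControlNormal(p1PositionWS, p1NormalWS, p2PositionWS, p2NormalWS);
    bezierPoints[9] = CalculateBezierControlNormal(p2PositionWS, p2NormalWS, p0PositionWS, p0NormalWS);
}
```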

In your domain function, calculate a quadratic Bézier similarly to the position. Apply the smoothing factor to interpolate with the flat normal, and be sure to normalize the final result!

There’s one other thing to consider: the tangent vector. It must always be perpendicular to the normal, but if we change the normal vector, it might not be. To fix this, find the barycentrically interpolated tangent vector and take its cross product with the barycentrically interpolated normal. Then, take that vector’s cross product with the smoothed normal. The resulting tangent vector is once again orthogonal to the normal vector, as well as to the original bitangent. This should preserve tangent space.
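In the domain function, that might look like this sketch (_TessellationSmoothing is an assumed property; flip a cross product order if your bitangent sign convention differs):

```hlsl
float3 CalculateBezierNormal(float3 bary, float3 bezierPoints[10],
        float3 p0NormalWS, float3 p1NormalWS, float3 p2NormalWS) {
    // The quadratic Bezier triangle formula: corner normals weighted by squared
    // barycentrics, edge control normals by the mixed terms.
    return p0NormalWS * (bary.x * bary.x) +
        p1NormalWS * (bary.y * bary.y) +
        p2NormalWS * (bary.z * bary.z) +
        bezierPoints[7] * (2 * bary.x * bary.y) +
        bezierPoints[8] * (2 * bary.y * bary.z) +
        bezierPoints[9] * (2 * bary.z * bary.x);
}

// In the domain function:
float3 flatNormalWS = BARYCENTRIC_INTERPOLATE(normalWS);
float3 smoothedNormalWS = CalculateBezierNormal(barycentricCoordinates, factors.bezierPoints,
    patch[0].normalWS, patch[1].normalWS, patch[2].normalWS);
float3 normalWS = normalize(lerp(flatNormalWS, smoothedNormalWS, _TessellationSmoothing));
// Rebuild a tangent orthogonal to the smoothed normal, preserving the bitangent.
float3 flatTangentWS = BARYCENTRIC_INTERPOLATE(tangentWS);
float3 flatBitangentWS = cross(flatTangentWS, flatNormalWS);
float3 tangentWS = cross(flatBitangentWS, normalWS);
```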

The left has linearly interpolated normals, while the right has quadratically interpolated normals. Notice the details around the eyes and nose?

And that brings us to the end of this section on silhouette smoothing and Bézier triangles! This is the real magic behind tessellation and makes it a powerful tool combined with appropriately designed models. Experiment and see what you can create!

Working with Height Maps

Another of tessellation’s most common uses is adding extra geometric details to a mesh. Say you have a rough surface with many bumps. Traditionally, an artist would use a normal map to approximate the lighting such a bumpy surface would receive. The model itself is not actually bumpy, as you can clearly see if you view the surface’s profile or shadow. With tessellation, it’s possible to modify the mesh to more closely resemble a complex surface.

On the left is a texture applied to a flat plane. On the right, the plane is offset using tessellation and a height map.

The most common way to do this is with a height map. Also known as bump maps, these grayscale textures encode height offsets in their color data. The idea is simple: read a height from the texture and offset vertices along their normal vectors by that height.

To implement this in a shader, add a texture property for a height map, then sample it in the domain function. Remember SAMPLE_TEXTURE2D is only available in the fragment stage, due to partial derivatives; use SAMPLE_TEXTURE2D_LOD here. Accordingly, you can turn off mipmaps for height maps used in this way.

Regardless, add the sampled value to the vertex’s world position by using it to scale the normal vector. You can combine this with the smoothing techniques discussed above, or just use the flat interpolated position and normal. Either way, it’s that simple. Add an altitude property to adjust the height!
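A sketch, assuming a _HeightMap texture and an _Altitude float property:

```hlsl
TEXTURE2D(_HeightMap); SAMPLER(sampler_HeightMap);
float _Altitude;

// In the domain function, after interpolating the position, normal, and UV.
// SAMPLE_TEXTURE2D needs screen space derivatives to pick a mip level,
// so explicitly sample the top mip with SAMPLE_TEXTURE2D_LOD instead.
float height = SAMPLE_TEXTURE2D_LOD(_HeightMap, sampler_HeightMap, uv, 0).r;
positionWS += normalWS * height * _Altitude;
```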

You might notice if you use a height map without a matching normal map that the mesh will look quite flat. This is because the height map does not affect normal vectors. Although I would still recommend using a normal map, it is possible to calculate lower quality tangent space normal vectors from a height map alone.

The left side has no normal map, so normal vectors always point straight up.

In this case, turn mipmaps back on for your height map, since we’ll sample it for normal vectors in the fragment stage. In the shader, add a _MainTexture_TexelSize variable, which holds the size of one texel, or pixel on a texture, in UV units. Make sure the name matches a texture in your shader and Unity will “automagically” calculate and set it for you!

Write GenerateNormalFromHeightMap, which samples the height map in each neighboring pixel around a given UV coordinate. From this, we can calculate a tangent space normal with a little algebra. Divide the change in height of two pixels across from one another with the change in UV space. This gives us the slope in the U and V directions, which correspond to the X and Y components of the tangent space normal.

Multiply the XY components with a _NormalStrength scaling factor to adjust the overall strength of this improvised normal map; then normalize the final result. Convert this to world space like any other tangent space vector.
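A sketch of the function, with an assumed _HeightMap texture and _NormalStrength property:

```hlsl
float4 _HeightMap_TexelSize; // Set by Unity: (1/width, 1/height, width, height)
float _NormalStrength;

float3 GenerateNormalFromHeightMap(float2 uv) {
    // Sample the height map one texel away in each direction.
    float uPlus = SAMPLE_TEXTURE2D(_HeightMap, sampler_HeightMap, uv + float2(_HeightMap_TexelSize.x, 0)).r;
    float uMinus = SAMPLE_TEXTURE2D(_HeightMap, sampler_HeightMap, uv - float2(_HeightMap_TexelSize.x, 0)).r;
    float vPlus = SAMPLE_TEXTURE2D(_HeightMap, sampler_HeightMap, uv + float2(0, _HeightMap_TexelSize.y)).r;
    float vMinus = SAMPLE_TEXTURE2D(_HeightMap, sampler_HeightMap, uv - float2(0, _HeightMap_TexelSize.y)).r;
    // Central differences: change in height over change in UV gives the slope
    // in the U and V directions, the XY of a tangent space normal.
    float2 slope = float2(uPlus - uMinus, vPlus - vMinus) / (2 * _HeightMap_TexelSize.xy);
    float3 normalTS = float3(-slope * _NormalStrength, 1);
    // Convert to world space afterwards, like any other tangent space vector,
    // e.g. with URP's TransformTangentToWorld.
    return normalize(normalTS);
}
```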

You’ll need to adjust the normal strength until you get something that looks good. Even then, you might notice these normals are not quite as detailed as a normal map. It can’t really be helped without taking more texture samples, which really start to add up. For this reason, only use this technique if you’re in a pinch. There are many free tools online which can generate normal maps from height maps anyway.

Height maps can also take the form of a function, like Perlin noise or an SDF. In these cases, evaluate the height function instead of sampling a height map. Here, we’re forced to calculate a normal vector. It can be a little tricky to figure out the math, and it differs per function, but on the plus side, they’re often mathematically exact.

In this example, I created a heightmap from Perlin noise. This has well defined partial derivatives, so I was able to calculate the normal vector like so. If you scale the noise in any way, be sure to also scale the resulting normal vector. Just remember that normals must be scaled inversely to geometry — meaning divide instead of multiply!
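A sketch of the idea, where PerlinNoise2D stands in for your own noise implementation:

```hlsl
// Assume PerlinNoise2D returns the height in x and the analytic
// partial derivatives d/du and d/dv in yz; define it elsewhere.
float3 PerlinNoise2D(float2 p);

float3 NormalFromNoise(float2 pos, float frequency, float amplitude) {
    float3 noise = PerlinNoise2D(pos * frequency);
    // Scaling the height scales the gradient too. For a heightfield h(u, v),
    // the unnormalized normal is (-dh/du, 1, -dh/dv).
    float2 gradient = noise.yz * frequency * amplitude;
    return normalize(float3(-gradient.x, 1, -gradient.y));
}
```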

You could calculate the normal along with the position in the domain function and just use the interpolated value in the fragment function. However, you could also calculate the normal in the fragment function for a nicer, more detailed result.

On the left, normals are calculated per-domain-vertex, on the right, per-fragment.

In this next example, I created a simple SDF, or distance function, from points centered on these three spheres. When the SDF passes a threshold at a point on the mesh, I deform the point backwards. To calculate the normal in this situation, I used another trick.

Create two new points offset slightly from the original point along the tangent and bitangent vectors. Calculate the SDF at all three points and apply the offset. Then, form a triangle with these deformed points and calculate the normal vector of the plane containing it, using the cross product. Use that for lighting! This method is usually quite good for continuous functions determined solely by position!
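A sketch, where ApplyDeformation stands in for your SDF-based offset:

```hlsl
// Assume ApplyDeformation offsets a world space point using the SDF;
// define it elsewhere in your shader.
float3 ApplyDeformation(float3 positionWS);

float3 NormalFromDeformation(float3 positionWS, float3 tangentWS, float3 bitangentWS) {
    const float offset = 0.01; // Sample spacing; tune to your mesh's scale
    // Deform the original point and two nearby points on the surface.
    float3 a = ApplyDeformation(positionWS);
    float3 b = ApplyDeformation(positionWS + tangentWS * offset);
    float3 c = ApplyDeformation(positionWS + bitangentWS * offset);
    // The deformed points form a small triangle; its plane normal approximates
    // the deformed surface normal. Swap the cross arguments if it comes out inverted.
    return normalize(cross(b - a, c - a));
}
```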

Closing Remarks and Special Thanks

As you can see, tessellation is a complicated subject, but it opens up many doors in the world of graphics programming! Although it can be expensive, well-placed tessellation can really polish up your models and give your game that final push over the finish line!

I hope I have shown off interesting uses of tessellation, including early culling, per-triangle math, smooth silhouettes, level of detail, quadratic normal vectors, real height maps, and procedural geometry. Personally, I will be using tessellation for level of detail and wind effects in my upcoming grass system.

Once again, here is a sample tessellation shader which implements many of the algorithms featured here.

If you have any questions, feel free to contact me at any of the links at the bottom of this article.

If you want to see this tutorial from another angle, I created a video version you can watch here.

I want to thank Crubidoobidoo for all their support, as well as all my patrons over the last month: Adam R. Vierra, Alvaro LOGOTOMIA, Ben Luker, Ben Wander, Bohemian Grape, Brooke Waddington, Cameron Horst, Chris, Christopher Ellis, Connor Wendt, Crubidoobidoo, Danny Hayes, darkkittenfire, Electric Brain, Eric Gao, Erica, Evan Malmud, Isobel Shasha, Jack Phelps, Jesse Comb, JP Lee, jpzz kim, Justin Criswell, Kyle Harrison, Leafenzo (Seclusion Tower), Lhong Lhi, Lorg, Lukas Schneider, Luke Hopkins, Mad Science, Microchasm, Nick Young, Oskar Kogut, Patrik Bergsten, phanurak rubpol, rafael ludescher, rookie, Samuel Ang, Sebastian Cai, starbi, Steph, Stephen Sandlin, Steven Grove, Tvoyager, Voids Adrift, and Will Tallent. Thank you all so much!

If you would like to download all the shaders and experiments showcased in this tutorial, consider joining my Patreon. You will also get early access to tutorials, voting power in topic polls, and more. Thank you!

Thanks so much for reading, and make games!

🔗 Tutorial list website ▶️ YouTube 🔴 Twitch 🐦 Twitter 🎮 Discord 📸 Instagram 👽 Reddit 🎶 TikTok 👑 Patreon · Ko-fi 📧 E-mail: nedmakesgames gmail

Credits, References and Special Thanks

All code appearing in GitHub Gist embeds is Copyright 2021 NedMakesGames, licensed under the MIT License.

