Let There Be Light: Writing Unity URP Shaders with Code, Part 2

NedMakesGames
22 min read · Jul 18, 2022

Hi, I’m Ned, and I make games! Have you ever wondered how lighting and shadows work in Unity? Or, do you want to write your own shaders for the Universal Render Pipeline, but without Shader Graph? Either because you need some special feature or just prefer writing code, this tutorial has you covered.

In fact, this is the second part in a series about writing HLSL shaders for URP. In this part, I will show how to add lighting to a shader. This includes a simple explanation of shadow mapping — how objects cast and receive shadows in URP — as well as an introduction to keywords and shader variants — an important concept when writing shaders!

As I publish future sections, I will update this page with links! You can also subscribe here to receive a notification when I finish part three. If you’re starting here, I would recommend following the first part, where we write a basic unlit shader. This tutorial will continue directly from it.

  1. Introduction to shaders: simple unlit shaders with textures.
  2. Simple lighting and shadows: directional lights and cast shadows.
  3. Transparency: blended and cut out transparency.
  4. Physically based rendering: normal maps, metallic and specular workflows, and additional blend modes.
  5. Advanced lighting: spot, point, and baked lights and shadows.
  6. Advanced URP features: depth, depth-normals, screen space ambient occlusion, single pass VR rendering, batching and more.
  7. Custom lighting models: accessing and using light data to create your own lighting algorithms.
  8. Vertex animation: animating meshes in a shader.
  9. Gathering data from C#: additional vertex data, global variables and procedural colors.

If you prefer video tutorials, here’s a link to a video version of this article.

Before I move on, I want to thank all my patrons for helping make this series possible, and give a big shout out to my “next-gen” patron: Crubidoobidoo! Thank you all so much.

With that, let’s get started!

Blinn-Phong Shading

So far, we've learned to write unlit shaders, or shaders not affected by lights. Obviously, lighting is a very important aspect of rendering; programmers devote a lot of shader code to it. Luckily for us, URP provides a helper function which deals with much of it.

In URP's "Lighting.hlsl" file, there's a function called UniversalFragmentBlinnPhong. It computes a standard lighting algorithm called the Blinn-Phong lighting model, which is actually made of two components.

The first calculates diffuse lighting — what illuminates the side of an object facing towards a light.

The second calculates specular lighting — the shine or highlight that brings smooth objects to life.

Open MyLitForwardLitPass.hlsl, and in the Fragment function, call UniversalFragmentBlinnPhong. It returns a color, which we can simply return as well. UniversalFragmentBlinnPhong takes many arguments, but to keep things neat, it bundles them up into two structures. The first, called InputData, holds information about the position and orientation of the mesh at the current fragment. The second, called SurfaceData, holds information about the surface material’s physical properties, like color.

Define a variable for both. These structures have nearly a dozen fields each, but we don't need to set them all yet. Unlike in C#, structure fields are not automatically initialized. To set all fields to zero, cast zero to the structure type. This looks strange, but it's an easy way to initialize a structure without having to list all its fields.

Now, pass inputData and surfaceData to UniversalFragmentBlinnPhong.
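Here's a minimal sketch of how the Fragment function might look at this point, assuming the Interpolators struct and function names from part one:

```hlsl
half4 Fragment(Interpolators input) : SV_TARGET {
    // Holds position and orientation data for this fragment
    InputData inputData = (InputData)0; // Casting zero zeroes all fields
    // Holds the surface material's physical properties, like color
    SurfaceData surfaceData = (SurfaceData)0;
    return UniversalFragmentBlinnPhong(inputData, surfaceData);
}
```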

Version Differences

Back in part one, I mentioned that there are several differences between Unity 2020 and Unity 2021. Well, here's the first that affects our shader. In Unity 2020, URP does not have an overload of UniversalFragmentBlinnPhong that takes a SurfaceData struct. You'll have to pass the fields individually, like below. For now, don't worry about what each field means; we will get to them soon.

Both to keep this tutorial more organized and to help you upgrade projects in the future, I want the same code to run in Unity 2020 and Unity 2021. Thankfully, there’s an easy way to run different code depending on the current Unity version.

You might have seen a #if preprocessor command in C# — usually it omits code that should only run in the editor. #if is also available in ShaderLab and HLSL, where it’s a common sight! If the expression following #if is true, then the code in between #if and #endif will be compiled. Otherwise, the compiler will ignore it.

#if can only depend on values that are known before compiling code, like constants and number literals. Unity provides a constant called UNITY_VERSION which contains the current Unity version as an integer — basically the version with periods omitted.

So, in our fragment function, we want to switch between passing the surface data struct or its individual fields based on the Unity version. If it's greater than or equal to 202120 (Unity 2021.2.0, with the periods omitted), we can pass the structure. To define an else block, which works just like you would expect, use #else. Inside, call the version with individual arguments.

Anyway, now the shader code will dynamically change depending on which Unity version we’re working with. Neat!

For the future, if you need to support another possibility, you can use #elif, which is short for else-if. Here’s an example for a hypothetical Unity 2030 version.

This is just an example. You don’t need to add this code to your shader.

Check out your shader in the scene editor. It’s now just a black sphere!

To get back to where we were before, we need to fill in some fields of the input structs. From the color properties, we can set albedo and alpha, which are fancy names for base color and transparency. But remember that the shader doesn't support transparency just yet, so don't expect alpha to have any visible effect.

Normal Vectors

Next, we need something called a "normal vector." You may know normal vectors from math or Unity's physics system: they're vectors that point directly outward from a surface. Blinn-Phong uses them to determine how directly the mesh faces a light source.

Normal vectors visualized on faces of a cube model.

Normal vectors apply to faces, but they're organized into a mesh vertex stream, like position or UVs. This can complicate things. There's no problem on a sphere, but on sharp-cornered meshes, like a cube, it can look like vertices have multiple normal vectors.

Normal vectors stored on vertices of a sphere.
Normal vectors stored on vertices of a cube. Notice the duplicated vertices!

In reality, Unity duplicates vertices — one for each normal vector. This way, a vertex’s normal always matches the face it applies to.

Regardless, the input assembler will take care of gathering normal data. Add a new field to the Attributes struct tagged with the NORMAL semantic. These normals are also in object space, like position.

When adding a new data source to a shader, it’s useful to plan out its “journey” through your code. Blinn-Phong needs normals in the fragment stage, but they’re only accessible through the input assembler. We need to pass them through the vertex stage and interpolate them with the rasterizer.

In addition, UniversalFragmentBlinnPhong expects normals in world space, so it's necessary to transform them at some point. We could do that in the fragment stage, but we don't need object space normals there at all. It's a bit more efficient to calculate world space normals in the vertex function, since it usually runs fewer times than the fragment function.

Using this plan, go through the code section by section and modify it as needed.

We already added a normalOS field to Attributes. Add a normalWS field to Interpolators. The rasterizer will interpolate any field tagged with a TEXCOORD semantic, so tag it with TEXCOORD1.

Why 1 and not 0? Well, TEXCOORD0 was already taken by UVs, and two fields should not have the same semantic. The rasterizer can handle many TEXCOORD variables — two is no problem.
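The two structs might now look something like this, assuming the field names from part one:

```hlsl
struct Attributes {
    float3 positionOS : POSITION; // Position in object space
    float3 normalOS : NORMAL;     // Normal vector in object space
    float2 uv : TEXCOORD0;        // Material texture UVs
};

struct Interpolators {
    float4 positionCS : SV_POSITION; // Clip space position
    float2 uv : TEXCOORD0;
    float3 normalWS : TEXCOORD1; // World space normal, interpolated by the rasterizer
};
```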

In the vertex function, transform the normal vector from object to world space. URP provides another function to do this, GetVertexNormalInputs, similar to the one we used for positions. Call it, and set the world space normal in the Interpolators struct.
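In the vertex function, assuming it fills an Interpolators variable named output, that might look like:

```hlsl
VertexNormalInputs normalInputs = GetVertexNormalInputs(input.normalOS);
output.normalWS = normalInputs.normalWS;
```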

In the fragment function, set normalWS in the InputData struct.

Before moving on, let's think a little about what happens to the normal vector when it's interpolated. When the rasterizer interpolates vectors, it interpolates each component individually. This can cause a vector's length to change, like in this example.

When interpolating between normals pointing in opposite directions, the middle values will change length.

For lighting to look its best, all normal vectors must have a length of one. This requirement is common when a vector encodes a direction. We can bring any vector to a length of one using the aptly named normalize function.

These normals have been normalized to always have a length of one.

normalize is kind of slow, since it has an expensive square root calculation inside. I think this step is worth it for smoother lighting — it’s especially noticeable on specular highlights — but if you’re pressed for processing power, you can skip it.
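With that, the fragment function assignment becomes:

```hlsl
// Renormalize after interpolation so lighting receives unit-length normals
inputData.normalWS = normalize(input.normalWS);
```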

In the scene editor, we finally have lighting!

Specular Lighting

But, it's a little flat, with only diffuse lighting. For specular highlights, URP needs a little more data, specifically world space position.

Right now, the fragment function does not have access to world space positions, only pixel positions. There's no easy way to transform those back to world space; it's best to pass the world space position as another field in the Interpolators struct. Tag it with another free TEXCOORD variable. (I reorganized them a little here, just for personal preference.)

Set position in the vertex stage using URP’s handy transform function! Then, in the fragment function, set positionWS in InputData. No need to normalize here of course, since position is not a direction and can have any length.
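A sketch of the three changes, with positionWS tagged TEXCOORD2 as an example:

```hlsl
// In Interpolators, add:
float3 positionWS : TEXCOORD2; // World space position

// In the vertex function:
VertexPositionInputs posnInputs = GetVertexPositionInputs(input.positionOS);
output.positionCS = posnInputs.positionCS;
output.positionWS = posnInputs.positionWS;

// In the fragment function:
inputData.positionWS = input.positionWS;
```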

Your shader will not look like this yet.

If you move around an object using the default lit shader, you’ll notice the highlights also move slightly. This is because specular lighting depends on the view direction, or the direction from the fragment to the camera.

We can calculate this in the fragment function from world space position using another handy URP function, GetWorldSpaceNormalizeViewDir. Call it and set viewDirectionWS in InputData.

Highlights can sometimes have different colors than the albedo, and URP allows you to specify this with a specular field in the SurfaceData struct. For now, set it to white.
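Both additions in the fragment function:

```hlsl
inputData.viewDirectionWS = GetWorldSpaceNormalizeViewDir(input.positionWS);
surfaceData.specular = 1; // A white specular tint
```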

If you take a peek at the scene, there are still no highlights! It turns out UniversalFragmentBlinnPhong uses a #if command internally to toggle highlights on and off. It uses a special type of constant called a keyword to do so. Keywords are sort of like boolean constants you enable using a #define command.

Shaders make extensive use of keywords to turn on and off different features. It’s faster to disable specular lighting instead of, for instance, setting specular color to black. Either option has the same visual effect, but not evaluating something is obviously quicker than throwing out the result.

However, I want specular lighting in this shader. For organization, I define keywords in the ShaderLab file for each pass, making it obvious which keywords are enabled at a glance. Add #define _SPECULAR_COLOR to your pass block.
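The forward lit pass's HLSLPROGRAM block in MyLit.shader might now read like this sketch, assuming the pragma setup from part one:

```
HLSLPROGRAM
#define _SPECULAR_COLOR

#pragma vertex Vertex
#pragma fragment Fragment

#include "MyLitForwardLitPass.hlsl"
ENDHLSL
```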

Now — finally — highlights! But, they’re too big! URP provides an easy way to shrink them using a value called smoothness. The higher the smoothness, the smaller the highlight. Visualize a perfectly smooth metal ball; the highlight is quite focused!

For now, let’s define smoothness using a material property. Add a property called _Smoothness of the Float type to your shader.

In "MyLitForwardLitPass.hlsl," declare _Smoothness at the top of the file and set it in the SurfaceData structure.

Using the material inspector, you can control the size of the highlight using the smoothness property. Note that smoothness works differently depending on the Unity version. 2021’s implementation is much more sensitive. This is just a consequence of how URP calculates lighting behind the scenes.

One note before we move on. The shader only supports the main light for now. We should get the basics down before complicating things with additional lights, but I will show how to add support for them in part 5 of this series!

The Shadow Mapping Algorithm

So far, we've worked with just one object. If you create another, you'll notice that objects with our shader neither cast nor receive shadows. These are separate concepts in the world of shaders, and we'll need to implement both.

First, let’s investigate how URP handles shadows using an algorithm called “shadow mapping.”

The goal is to find a cheap way to check whether a fragment is in shadow with respect to a light source. Again, let's only consider the main light.

We want to calculate if several surfaces are in shadow.

A naive approach is to check for an object between the fragment and the light. This is very slow, since the shader needs to execute a raycast, looping through all objects in the scene. There’s got to be a faster way.

The middle surface is in shadow. A raycast from the surface to the light intersects another surface.

First, let's restructure our algorithm so the ray starts at the light and shoots out in a straight line, intersecting our fragment and any other surfaces along the same line.

Second, notice that only one surface on the ray is lit. For every surface but the one closest to the light, there is an object between it and the light. To determine if a fragment is in shadow, simply test if the distance to the light is greater than the minimum distance among all surfaces to the light.

Only the closest surface is not in shadow.

This reduces the problem to finding the distance from the light to the closest surface along each light ray. This kind of sounds familiar… When rendering, we draw the color of the closest surface to the camera along all "view rays." Swap color for distance and the camera for a light, and we're in business!

The left image is a normal render, while the right image draws distance from the camera. Both are from the light’s perspective.

How can we draw distance? Remember that colors are just numbers, so we can store distance inside the red channel of a color.

URP’s shadow mapping system does this behind the scenes. Before rendering color, it switches the camera to match the perspective of the main light. Then, it utilizes another shader pass, the shadow caster pass, to draw depth for each pixel.

We don’t want these depths to draw to the screen though. URP hijacks the presentation stage and directs it to draw to a special texture, called a render target. This specific render target, containing distances from a light, is called a shadow map, hence the algorithm name.

The shadow map texture has UV coordinates like any other texture.

To calculate if a fragment is in shadow, we need our distance from the light and the distance stored in the shadow map. To sample the shadow map, we need to calculate the shadow map UV — also known as a shadow coord — corresponding to this fragment’s position. URP again comes through with a function, TransformWorldToShadowCoord, to convert world space position to a shadow coord.

URP will deal with comparing distances and sampling the shadow map if we set shadowCoord in the InputData struct. In the fragment function of “MyLitForwardLitPass.hlsl,” go ahead and do that.
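That's one line:

```hlsl
// Convert this fragment's world position into shadow map UVs
inputData.shadowCoord = TransformWorldToShadowCoord(input.positionWS);
```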

Shader Variants

Similarly to specular lighting, URP toggles shadows on and off with a keyword called _MAIN_LIGHT_SHADOWS. However, what if I'm making a dark scene with no main light? In that case, I'd like to turn off shadows, but I don't want to create a whole new shader with only this keyword undefined.

Luckily, Unity has the system of shader variants for this use case. Using the #pragma multi_compile command shown below, we can have Unity compile a version of our shader with and without _MAIN_LIGHT_SHADOWS enabled. These two versions are called variants of our shader, or more specifically, variants of the forward lit pass.

Adding variants creates another subdivision below passes with slightly different vertex and fragment functions.

Multi compile can also take a whole list of keywords, in which case it will create multiple variants, one with each individual keyword enabled.

By adding a single underscore, it will also compile a variant with none of the keywords enabled.

Even more conveniently, materials will automatically choose the correct variant for the situation at hand.
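For illustration, toggling the keyword from a C# script might look like this hypothetical sketch:

```csharp
using UnityEngine;

public class ShadowKeywordExample : MonoBehaviour {
    // Hypothetical helper: switch this object's material between variants
    public void SetMainLightShadows(bool shadowsOn) {
        Material material = GetComponent<Renderer>().material;
        if (shadowsOn) {
            material.EnableKeyword("_MAIN_LIGHT_SHADOWS");
        } else {
            material.DisableKeyword("_MAIN_LIGHT_SHADOWS");
        }
    }
}
```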

This is example code. No need to add it to your project.

You see, if you want to use a variant with _MAIN_LIGHT_SHADOWS defined, simply call EnableKeyword("_MAIN_LIGHT_SHADOWS") on the material in C#. DisableKeyword will undefine the keyword. URP does this automatically if it detects a directional light in the scene.

Check it out! Create an object with a default lit material and move it in between your MyLit object and the light.

Shadow Cascades and Soft Shadows

If you don't see shadows, turn off cascades and soft shadows on your URP settings asset. It would be nice to support both of those options for better quality though.

Three shadow cascades, each with more focused view of the scene.

We've been talking as if the main light has a position, but since it models the sun, it is actually infinitely far away from everything in the scene. This makes it difficult to create a shadow map containing the entire scene while keeping enough detail for good quality. Unity balances this with cascades: it renders multiple shadow maps, each containing a larger slice of the scene, and samples the one with the most detail at any given position.

Unity samples the highest detail cascade that contains a specific world position. Each color here represents shadow data taken from a different cascade.

Since the shadow map has square pixels, you sometimes see their jagged edges manifest on surfaces. Soft shadows help eliminate these artifacts by sampling the shadow map a few times around the given shadow coord. It averages these samples, effectively blurring the shadow map a bit.

We don’t need to worry about the details of either system though. We only need to enable two keywords and URP will take care of the rest.

Add more multi compile commands for these new keywords. With multiple multi compile commands, Unity will permute them and create a variant for every possible combination of keywords. We've barely started, but that's already six variants. Each takes time to compile, so it's worth keeping this number low.

With that in mind, Unity 2021.2 tweaked the cascade system a little. In 2020, we must enable keywords for main light shadows and cascades, but in 2021, enabling cascades implies main light shadows are enabled. We can handle both using a #if block and reduce variant count in 2021.

Also, notice the _fragment suffix in the soft shadows pragma below? We can save a little more compile time by signaling that the _SHADOWS_SOFT keyword is only used in the fragment stage. Unity will have the variants created by this multi compile command share a vertex function.

With that, let’s test things out. Be sure to tweak shadow cascades and enable soft shadows to see all your shader variants at work. You can see URP dynamically compiling shader variants when your shader momentarily flickers magenta.

Unity 2022 has additional options for shadow quality: high quality soft shadows and a “conservative enclosing sphere” for shadow cascades. Try enabling them to see what they do — no code changes required.

The Frame Debugger

Now seems like a good time to introduce a powerful debugging tool: the Frame Debugger! Find it in the "Window" menu under "Analysis." Enable it using the button in the top left. Make sure your game view is visible — the debugger will automatically pause the game if it's running.

This useful window tells you all kinds of information about how Unity renders your scene. Objects render in the order they appear in the list. You can see Unity creating the shadow map before rendering lit passes. You can even check out how the shadow map looks.

The shadow map with four cascades.

The frame debugger also tells you which shader variant is currently active for any object. Navigate to the "DrawOpaqueObjects" dropdown and find your sphere. Check the shader name: it should be "MyLit"!

You can see the current subshader and pass, and under that a list of defined keywords determining the shader variant. Try disabling soft shadows, cascades, and the main light game object to see how that influences things.

The MyLit shader with the main light disabled and then with soft shadows disabled.

The window doesn’t do a good job of keeping the same object selected, and you’ll have to find your shader again as you toggle things on and off. There will also be slight differences depending on your Unity version, so keep that in mind.

The Shadow Caster Pass

You may have tried to apply the MyLit shader to your shadow caster sphere and noticed it no longer casts shadows. That's because casting and receiving shadows are completely different processes in 3D rendering, and we haven't dealt with casting yet!

In the previous section, I mentioned URP creates a shadow map texture using another shader pass, called the shadow caster pass. All we have to do to add shadow casting to MyLit is write this pass.

Remember, passes are shader subdivisions with their own vertex and fragment functions. Passes can also have their own multi compile keywords and shader variants. Each shader pass has a specific job, as given by URP. The UniversalForward (forward lit) pass calculates final pixel color, while the ShadowCaster pass calculates data for the shadow map.

Honestly, it’s much simpler in practice than it sounds. URP takes care of calling the correct pass at the correct time and routing the output colors into the correct target. These abstract passes are hard to visualize, so let’s get something written.

Begin by adding another Pass block to the “MyLit.shader” file. Duplicate the ForwardLit pass, changing the name and light mode tag to ShadowCaster. There will be no lighting here; delete the _SPECULAR_COLOR define and the shader variant pragmas.

For organization, I like to write each pass in its own HLSL file. Change the #include to refer to “MyLitShadowCasterPass.hlsl.” Then, create a new HLSL file called “MyLitShadowCasterPass.hlsl.”

Inside, start by defining the data structs. In Attributes, we’ll only need position, while Interpolators only needs clip space position. In the vertex function, call the URP function to convert position to clip space, set it in the output structure, and return it. In the fragment function, simply return zero.
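A sketch of MyLitShadowCasterPass.hlsl at this point, assuming the same include and naming conventions as the forward lit pass:

```hlsl
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl"

struct Attributes {
    float3 positionOS : POSITION; // Object space position
};

struct Interpolators {
    float4 positionCS : SV_POSITION; // Clip space position
};

Interpolators Vertex(Attributes input) {
    Interpolators output;
    output.positionCS = GetVertexPositionInputs(input.positionOS).positionCS;
    return output;
}

float4 Fragment(Interpolators input) : SV_TARGET {
    // Color output is unused; only the depth buffer matters here
    return 0;
}
```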

The Depth Buffer

But wait, isn't the shadow map supposed to encode distance from the light's camera? It does, but the renderer handles this automatically. See, clip space positions encode something called depth, which is related to distance from the camera. When interpolating, the rasterizer stores the depth of each fragment in a data structure called the depth buffer.

The Z-component of clip space position is (related to) a fragment’s depth.

Unity utilizes the depth buffer to help reduce overdraw. Overdraw occurs when two or more fragments with the same pixel position are rendered during a frame. When everything is opaque, as it is now, only the closer fragment is ultimately displayed. Any other fragments are discarded, leading to wasted work. The rasterizer can avoid calling a fragment function if its depth is greater than the stored value in the depth buffer.

The overlapping section of these two spheres would be rendered twice if not for the depth buffer.

URP reuses the depth buffer resulting from the shadow caster pass as the shadow map. But, most other passes have a depth buffer of their own as well.

The depth buffer of the windowed gallery scene.

Shadow Acne

Our MyLit objects should now cast shadows, but you will see some ugly artifacts called shadow acne covering them.

This occurs mostly on the surface of a shadow casting object. It’s another consequence of every programmer’s bane: floating point errors. In this case, the shadow map depth and the mesh depth are nearly equal, so the system sometimes draws the shadow on top of the casting surface.

To fix acne, apply a bias, or offset the shadow caster vertex positions. When calculating clip space positions, there’s no rule that they must exactly match the mesh. We can offset the positions away from the light and also along the mesh’s normals. Both of these biases help prevent shadow acne.

The girl’s vertices offset along their normal vectors.

The shadow caster now needs normals, so add a normal field to the Attributes struct. Then, write a new function, GetShadowCasterPositionCS, to calculate the offset clip space position. It requires world space position and normal.

ApplyShadowBias from URP's library reads and applies the shadow bias settings. It requires the world space position and normal, as well as the rendering light's direction. URP provides this in a global variable called _LightDirection, which we need to define, like a material property. Do that above the function and pass it to ApplyShadowBias. ApplyShadowBias returns a position in world space; transform it to clip space using another URP function, TransformWorldToHClip.

Clip space has depth boundaries, and if we accidentally overstep them when applying biases, the shadow could disappear or flicker. The boundary of depth is defined by something called the "light near clip plane." Clamp the clip space z-coordinate to the near plane value, defined by UNITY_NEAR_CLIP_VALUE.

To make things more complicated, certain graphics APIs reverse the clip space z-axis. Thankfully, URP provides another boolean constant, UNITY_REVERSED_Z, to tell us if the boundary is a minimum or maximum. Use a #if statement to handle both cases and return the final clip space position.

In the vertex function, calculate the world space position and normal using URP’s conversion functions, then call your custom shadow caster clip space function.
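Putting the pieces together, the shadow caster pass might now contain something like this sketch:

```hlsl
float3 _LightDirection; // Set by URP for the light currently rendering shadows

float4 GetShadowCasterPositionCS(float3 positionWS, float3 normalWS) {
    float3 lightDirectionWS = _LightDirection;
    // Offset along the normal and away from the light, per the bias settings
    float4 positionCS = TransformWorldToHClip(
        ApplyShadowBias(positionWS, normalWS, lightDirectionWS));
    // Clamp depth to the light's near clip plane; some APIs reverse the Z-axis
#if UNITY_REVERSED_Z
    positionCS.z = min(positionCS.z, UNITY_NEAR_CLIP_VALUE);
#else
    positionCS.z = max(positionCS.z, UNITY_NEAR_CLIP_VALUE);
#endif
    return positionCS;
}

Interpolators Vertex(Attributes input) {
    Interpolators output;
    float3 positionWS = GetVertexPositionInputs(input.positionOS).positionWS;
    float3 normalWS = GetVertexNormalInputs(input.normalOS).normalWS;
    output.positionCS = GetShadowCasterPositionCS(positionWS, normalWS);
    return output;
}
```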

Back in the scene editor, things might look immediately better. If not, edit the shadow bias settings and light near clip plane on the main light component.

You can set global bias settings on the URP settings asset too.

ColorMask

Before wrapping up, we can optimize the shadow caster a little by adding some metadata to the pass block. Since the shadow caster only uses the depth buffer, we can turn off color output entirely using the ColorMask command.
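In MyLit.shader, the shadow caster pass block might look like this sketch:

```
Pass {
    Name "ShadowCaster"
    Tags { "LightMode" = "ShadowCaster" }

    ColorMask 0 // Write to no color channels; only depth matters here

    HLSLPROGRAM
    #pragma vertex Vertex
    #pragma fragment Fragment
    #include "MyLitShadowCasterPass.hlsl"
    ENDHLSL
}
```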

The ColorMask 0 command does just that, directing the renderer to write no color at all. By default, ColorMask is set to RGBA, meaning all color channels are drawn, which is what we want in the forward lit pass.

Lighting and shadows are some of the most fun aspects of shaders, and we will be revisiting them much more throughout this series. After this tutorial, you have a good groundwork to continue with more complicated features. We’ve got point lights, spot lights, baked lights, emission, screen space shadows and more to get to!

However, in the next tutorial, I will pivot to transparency. We’ll learn how to create transparent shaders and how to handle their idiosyncrasies. I’ll also introduce some of URP’s powerful optimization tools.

For reference, here are the final versions of the shader files.

If you enjoyed this tutorial, consider following me here to receive an email when the next part goes live.

If you want to see this tutorial from another angle, I created a video version you can watch here.

I want to thank Crubidoobidoo for all their support, as well as all my patrons during the development of this tutorial: 42 Monkeys, Adam R. Vierra, Adoiza, Amin, Andrei Hingan, Ben Luker, Ben Wander, bgbg, Bohemian Grape, Brannon Northington, bruce li, Cameron Horst, Charlie Jiao, Christopher Ellis, Connor Wendt, Constantine Miran, Crubidoobidoo, Davide, Derek Arndt, Elmar Moelzer, Eric Bates, etto space, Evan Malmud, far few giants, FELIX, Florian Faller, gamegogojo, gleb lobach, Howard Day, Huynh Tri, Isobel Shasha, Jack Phelps, Jesse Comb, jesse herbert, John Lism Fishman, JP Lee, jpzz kim, Kyle Harrison, Leafenzo (Seclusion Tower), lexie Dostal, Lhong Lhi, Lien Dinh, Lukas Schneider, Luke Hopkins, Luke O Reilly, Mad Science, Marcin Krzeszowiec, martin.wepner, maxo, Minori Freyja, Nate Ryman, Oliver Davies, Oskar Kogut, P W, Patrick, Patrik Bergsten, Paul, Petr Škoda, Petter Henriksson, rafael ludescher, Rhi E., Richard Pieterse, roman, Ryan Smith, Sam CD-ROM, Samuel Ang, Sebastian Cai, Seoul Byun, shaochun, SHELL SHELL, Simon Jackson, starbi, Steph, Stephan Maier, teadrinker, Team 21 Studio, thearperson, Tim Hart, Tomáš Jelínek, Vincent Thémereau, Voids Adrift, Wei Suo, Wojciech Marek, Сергей Каменов, Татьяна Гайдук, 智則 安田, 이종혁.

If you would like to download all the shaders showcased in this tutorial inside a Unity project, consider joining my Patreon. You will also get early access to tutorials, voting power in topic polls, and more. Thank you!

If you have any questions, feel free to leave a comment or contact me at any of my social media links:

🔗 Tutorial list website ▶️ YouTube 🔴 Twitch 🐦 Twitter 🎮 Discord 📸 Instagram 👽 Reddit 🎶 TikTok 👑 Patreon Ko-fi 📧 E-mail: nedmakesgames gmail

Thanks so much for reading, and make games!

Changelog:

  • May 29th 2023: Mention new shadow quality options available in Unity 2022.

©️ Timothy Ned Atton 2022. All rights reserved.

All code appearing in GitHub Gists is distributed under the MIT license.

Timothy Ned Atton is a game developer and graphics engineer with ten years' experience working with Unity. He is currently employed at Golf+ working on the VR golf game, Golf+. This tutorial is not affiliated with nor endorsed by Golf+, Unity Technologies, or any of the people and organizations listed above. Thanks for reading!
