Unity - Custom Skybox Shader

A skybox is the color information that the engine uses to clear the screen. For those who have used something like Processing, it’s like the “background()” call at the top of a “draw()” loop. Its purpose is to overwrite what was drawn on the previous frame, so that the current frame has no remnants of the former. It is drawn first, rather than filled in later, so that transparent objects without opaque geometry behind them will blend properly.

A shader for a skybox is essentially the same as any other screen space shader, and it can be made unlit. Because Unity does not use any geometry for the skybox by default, skyboxes using a non-screen-space approach require custom geometry to which a material shader can be applied.

A fully procedural skybox has the advantage that it can require no texture sampling and therefore can remain pixel perfect at every resolution. A procedural skybox attempting to create Earth-like atmospheric effects can leverage the same game state parameters that other shaders have access to, such as camera look direction and world light (highest priority directional light) direction. By utilizing the inverse of the world light direction, the light can be made to look as though it’s coming from a source on the skybox. Similarly, the dot product of the source light and the apparent path to the zenith can be used to blend between the appropriate atmospheric hues. Also, because each rasterized pixel for a perspective camera is computed using a projected “look angle” from the camera’s transform, glows and other faux-scattering effects can be applied based on each pixel’s relative angular distance from the direction of the “sun”.
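The dot-product math behind that faux-scattering glow can be sketched outside the shader. The following is a minimal Python model (not shader code) of the idea: the apparent sun direction is the inverse of the world light direction, and a pixel's glow falls off with its angular distance from it. The function name and the sharpness parameter are illustrative, not anything from Unity's API.

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sun_glow(view_dir, world_light_dir, sharpness=8.0):
    """Glow intensity for one pixel's look direction.

    The apparent sun direction is the inverse of the world light
    direction; the dot product with the view direction falls off with
    angular distance, and the power term tightens the glow's radius.
    """
    sun_dir = normalize(tuple(-c for c in world_light_dir))
    view_dir = normalize(view_dir)
    alignment = max(0.0, dot(view_dir, sun_dir))  # clamp the back hemisphere to 0
    return alignment ** sharpness

# Looking straight at the sun gives full glow; 90 degrees away gives none.
print(sun_glow((0.0, 0.0, 1.0), (0.0, 0.0, -1.0)))  # 1.0
print(sun_glow((1.0, 0.0, 0.0), (0.0, 0.0, -1.0)))  # 0.0
```

In an actual skybox shader the same expression would run per fragment, with the view direction interpolated from the projected look angle.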

As an alternative to strict realism, a 4-way blend approach (example of the math pictured below) can be used to strictly control the color at sunrise (SR), high noon (HN), sunset (SS), and midnight (MN). In this approach, the dot products of the world space light direction with four orthogonal reference bearings are used to deterministically assign skybox color based on the “sun’s” apparent position in the sky. This approach assumes that the sun travels a roughly equatorial path over the game world, but testing shows that an offset of up to 30 degrees has little visible impact on the output colors. The reason for this setup is that it provides a meaningful 0–1 output for each reference bearing and no contribution beyond 90 degrees to either side, creating a sort of 4-way LERP. Applying a power operator to each output allows the user to shape the blend for values in between the poles.

Applying a custom-shaded skybox to a camera is only slightly different from applying a post-processing-style screen space shader effect to a camera. A material must be made that references the skybox shader. Next, the camera’s Clear Flags should be set to Skybox. By default, Unity’s procedural skybox will be used; however, a Skybox component can be added to the camera’s game object to allow an alternative skybox material to be used in its place.

Though the skybox may seem like a special case within the Unity development environment, understanding its function, mechanisms, affordances, and implementation makes it much more approachable, and it offers a great opportunity to inject a bespoke aesthetic into any project.

[Image: Skybox Math.png — the 4-way blend math described above]
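The 4-way blend can be sketched as follows. This is an illustrative Python model under assumed conventions (x = east, y = up, equatorial sun path); the key colors and function names are hypothetical, and in practice this math would live in the shader.

```python
# Hypothetical key colors (RGB, 0-1) for the four reference bearings.
COLORS = {
    "SR": (0.95, 0.55, 0.30),  # sunrise
    "HN": (0.40, 0.65, 1.00),  # high noon
    "SS": (0.90, 0.40, 0.35),  # sunset
    "MN": (0.02, 0.03, 0.08),  # midnight
}

# Orthogonal reference bearings for an equatorial sun path (x = east, y = up).
BEARINGS = {
    "SR": (1.0, 0.0, 0.0),
    "HN": (0.0, 1.0, 0.0),
    "SS": (-1.0, 0.0, 0.0),
    "MN": (0.0, -1.0, 0.0),
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sky_color(sun_dir, power=1.0):
    """4-way blend: each bearing contributes saturate(dot)^power, so a
    bearing has no influence beyond 90 degrees to either side. The power
    term shapes the blend between the poles."""
    weights = {k: max(0.0, dot(sun_dir, b)) ** power for k, b in BEARINGS.items()}
    total = sum(weights.values()) or 1.0
    return tuple(
        sum(weights[k] * COLORS[k][i] for k in COLORS) / total
        for i in range(3)
    )

print(sky_color((0.0, 1.0, 0.0)))  # sun at zenith -> pure high-noon color
```

With the sun at the zenith, only the HN bearing has a positive dot product, so the output is the pure high-noon color; at intermediate positions, the two adjacent bearings mix in proportion to their clamped dot products.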


Unity's Forward Rendering Path


Unity's Forward Rendering Path is performant and is currently favored for VR.

Forward rendering can be faster than deferred rendering: it works well when a scene does not have an excessive number of dynamic lights, and it supports transparent objects.  It's also worth noting that forward rendering is supported on Windows, Mac, Linux, iOS, Android, and console build targets.

Also, according to Alex Vlachos of Valve (GDC 2015 and 2016), Valve's teams currently favor forward rendering because it works with MSAA (multisample anti-aliasing).  Deferred rendering is not compatible with MSAA, because "lighting decisions are made after the MSAA is 'resolved' to its final image size".


Forward rendering differentiates itself from deferred rendering in how it treats and computes lighting.  Specifically, in forward rendering a light may be rendered on a per-pixel (PP), per-vertex (PV), and/or spherical harmonics (SH) basis, listed here in order of descending computational expense and accuracy.  Because not every light requires its own rendering pass, rendering overhead remains predictable.

Whether a light is applied PP, PV, or as SH depends on its brightness, its type (directional, point, spot), its importance setting, and the "Pixel Light Count" in Quality Settings, as described in the documentation.

It should be noted that lights are categorized as PP, PV, or SH every frame, and, by default, only the directional PP light in the base pass will cast shadows.  Unfamiliarity with how settings impact the categorization and treatment of lights may result in erratic shadow behavior, or a diminishing performance benefit over other rendering paths.


Maximizing the performance benefit of forward rendering relies on staging the scene to take full advantage of the Base Pass.  The Base Pass will render objects with 1 PP light, up to a maximum of 4 PV lights, and any number of SH lights.  Every subsequent PP light is rendered in an additional pass.
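The categorization that feeds the Base Pass budget can be modeled roughly as follows. This is a simplified Python sketch of the priority order described above, not Unity's actual algorithm: the engine's real heuristics are more involved, and the light tuples and function name here are hypothetical.

```python
def categorize(lights, pixel_light_count=4, max_vertex_lights=4):
    """Simplified sketch of forward-path light categorization.

    `lights` is a list of (name, brightness, importance) tuples where
    importance is "Important", "NotImportant", or "Auto".
    """
    # Lights marked Important are always rendered per-pixel.
    pp = [l for l in lights if l[2] == "Important"]
    rest = [l for l in lights if l[2] != "Important"]
    # Auto lights fill the remaining per-pixel budget, brightest first.
    rest.sort(key=lambda l: l[1], reverse=True)
    budget = max(0, pixel_light_count - len(pp))
    auto_pp = [l for l in rest if l[2] == "Auto"][:budget]
    pp += auto_pp
    leftovers = [l for l in rest if l not in auto_pp]
    pv = leftovers[:max_vertex_lights]   # up to 4 per-vertex lights
    sh = leftovers[max_vertex_lights:]   # everything else via spherical harmonics
    return pp, pv, sh

lights = [
    ("sun",   1.0, "Important"),
    ("lamp",  0.8, "Auto"),
    ("torch", 0.5, "Auto"),
    ("glow",  0.2, "NotImportant"),
]
pp, pv, sh = categorize(lights, pixel_light_count=2)
print([l[0] for l in pp])  # ['sun', 'lamp']
print([l[0] for l in pv])  # ['torch', 'glow']
```

Even as a rough model, it shows why tuning importance settings and the Pixel Light Count matters: a light pushed out of the per-pixel budget silently drops to PV or SH, which changes both its appearance and its cost.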

Additionally, given the reliance on PV lights, 3D artists should be deliberate about both the variation and the maximum size of tris in lit models, since per-vertex lighting is interpolated across each triangle, and large or unevenly sized tris can produce visible lighting artifacts.