Single-Pass vs. Multi-Pass Rendering in Unity for VR

Videep
4 min read · Jul 9, 2023

If you use Unity to develop XR applications, you have probably thought about performance and optimization. You may have overlooked the stereo rendering modes, or switched to Single-Pass Instanced rendering after watching a video or reading an article.

But ever wondered WHY?

Here is a very simplified overview of the differences between single-pass and multi-pass rendering. Before that, here is a quick image of the graphics pipeline, so you can see where rendering starts and where it ends:

Here is a quick view of the scene I will be rendering:

URP’s Sample Scene with Assets

Multi-Pass Rendering:

One of the first approaches was to render stereoscopic images with dual cameras, optimising only certain aspects of rendering, such as shadows (as per Unity). Here, the drawing of the objects was done two times, so the pipeline looked like this:

There was some optimisation, but not much, because we still ended up rendering the objects twice. One further optimisation was made in the culling stage, which runs on the CPU: a single camera was used for culling rather than culling the objects twice (once per camera).

After debugging, this is what Unity does:

The image shows that the rendering of objects is done two times.

Single-Pass Rendering:

In this approach, we loop over the rendering pipeline only once, rather than running the whole loop per eye. This is a major saving in terms of iteration, but we have to use a render target that is double the width. We also switch the viewport (which is defined by its origin x, y and its width and height): Unity internally switches between the left and right viewports, based on the unity_StereoEyeIndex currently being rendered, so that each object is drawn into its own half of the texture. Hence, the shared work is done once and the drawing takes place per viewport. In recent versions of Unity, we also get the Single-Pass Instanced Rendering option.
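To make that concrete, here is a small fragment-program sketch (built-in render pipeline style, assuming it sits inside a pass that includes UnityCG.cginc; _MainTex is a hypothetical screen-sized texture shared by both eyes). The UnityStereoTransformScreenSpaceTex helper remaps a full-screen UV into the current eye's half of the double-wide target:

```hlsl
// Sketch: fragment program of an image-effect style pass under single-pass
// (double-wide) stereo. Assumes #include "UnityCG.cginc"; _MainTex is a
// hypothetical screen-sized texture shared by both eyes.
sampler2D _MainTex;

struct v2f
{
    float4 vertex : SV_POSITION;
    float2 uv     : TEXCOORD0; // full-screen 0..1 UV
};

fixed4 frag (v2f i) : SV_Target
{
    // Remap the full-screen UV into the current eye's half of the double-wide
    // texture. Internally this applies unity_StereoScaleOffset[unity_StereoEyeIndex]
    // when single-pass stereo is active, and is a no-op otherwise.
    float2 uv = UnityStereoTransformScreenSpaceTex(i.uv);
    return tex2D(_MainTex, uv);
}
```

Regular per-material textures need no such adjustment; only textures that cover the whole (double-wide) render target do.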

Single-Pass Instanced Rendering:

In this approach, we use the GPU's instancing capabilities. We also leverage the ability to render into render target arrays, which avoids the double-width texture and the switching of viewports; both eyes (now slices of the same array) can share the same viewport. We can then simply iterate through the objects and issue a single draw call per object with an instance count of 2 (for an object that appears once in the scene, because it is rendered once per eye) or more (if the object is itself replicated).
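For this to work, your shaders have to route each instance to the correct eye and array slice, which is what Unity's stereo instancing macros do. Below is a minimal unlit shader sketch in the style of Unity's documentation example for Single-Pass Instanced (built-in render pipeline; the shader name, texture and debug tint are just placeholders):

```hlsl
// Minimal unlit shader showing the macros required for Single-Pass Instanced
// rendering (built-in RP style). Shader name and properties are placeholders.
Shader "Unlit/SinglePassInstancedSketch"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv     : TEXCOORD0;
                UNITY_VERTEX_INPUT_INSTANCE_ID   // vertex-stage access to the instance ID
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 uv     : TEXCOORD0;
                UNITY_VERTEX_OUTPUT_STEREO       // carries the eye index / target slice onward
            };

            v2f vert (appdata v)
            {
                v2f o;
                UNITY_SETUP_INSTANCE_ID(v);                // derive the eye from the instance ID (even -> left, odd -> right)
                UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);  // route output to the matching array slice
                o.vertex = UnityObjectToClipPos(v.vertex); // picks up the per-eye view/projection
                o.uv = v.uv;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i); // makes unity_StereoEyeIndex valid here
                fixed4 col = tex2D(_MainTex, i.uv);
                // Debug tint: left eye (index 0) reddish, right eye (index 1) greenish,
                // to verify that each eye really gets its own slice.
                col.rgb *= (unity_StereoEyeIndex == 0) ? fixed3(1, 0.7, 0.7) : fixed3(0.7, 1, 0.7);
                return col;
            }
            ENDCG
        }
    }
}
```

The per-eye tint is purely a debug aid: put this material on an object and run on a headset (or a mock HMD setup) and the two eyes should be tinted differently, confirming that the single instanced draw lands in both slices.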

Firstly, we cut the number of draw calls in half, since each object is drawn with a single instanced draw command:

Draw Opaques, Draw Transparents, etc. are issued only once.

Secondly, we can see the slices of the render target:

Each object is rendered in each slice with a single instanced draw command.
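A practical consequence of those slices: anything that used to be a single screen-sized texture (for example, a camera texture read by a post effect) is now a texture array, so shaders that sample it need the array-aware macros. A fragment-side sketch, assuming the same kind of pass as above:

```hlsl
// Reading a screen-space texture under Single-Pass Instanced. The macros resolve
// to a Texture2DArray lookup (with unity_StereoEyeIndex as the slice index) when
// stereo instancing is active, and to a plain 2D sample otherwise.
UNITY_DECLARE_SCREENSPACE_TEXTURE(_MainTex); // instead of: sampler2D _MainTex;

fixed4 frag (v2f i) : SV_Target
{
    UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i);
    return UNITY_SAMPLE_SCREENSPACE_TEXTURE(_MainTex, i.uv); // instead of: tex2D(_MainTex, i.uv)
}
```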

Multi-View Rendering?

It is quite similar to Single-Pass Instanced rendering, but is seen mostly on the OpenGL and OpenGL ES APIs, where the graphics drivers themselves have the capability of multiplexing the draw calls.

You can see the performance improvements here:

Taken from Unity video (Unite 2017)

I will be doing an analysis of a demo scene from the Asset Store in my next post. Do stay tuned.

My YouTube channel:

Instagram: @TechWithVideep
