Fitting Unity in the Graphics Pipeline

Videep
5 min read · Mar 28, 2022

In this article, we are going to take a look at the graphics pipeline, which is the foundation of rendering a 2D/3D object onto the screen.

In the early days, when such dynamic game engines were unavailable, graphics programmers wrote their own applications in languages like C++, using graphics libraries such as OpenGL or DirectX. Games like Doom are among the best examples of graphics evolving to the next level and 3D games coming into the picture. Design software such as AutoCAD and 3ds Max uses the same concepts, except that it also offers software renderers, which use the CPU(s) for rendering. GPUs are prominent in real-time rendering because it depends heavily on a parallel-processing architecture: a GPU has many cores that perform calculations simultaneously (though this comes with its own limitations).

These days real-time rendering is used in almost every industry, not only games: archviz, healthcare, sports, film & TV and heavy industry are some of the fields I have contributed to professionally.

Unity, being robust and having a gentler learning curve, is often the preferred choice both for kicking off projects and for creating production-quality results. Not only students but also professionals, from artists to programmers who want to move into real-time rendering, try their hands at such software.

The process is very simple: download a few 3D models and import them into Unity, add materials and textures, set up lights, cameras and a few scripts for a good first/third-person controller. Top it up with some good post-processing and your first application is ready to be exported to desktop or mobile devices! The learning goes deeper when people want to add features and improve quality. Hence, I will be writing a series of articles that map out the fundamentals and show how Unity (and similar engines) are built on them. This will help us understand where the solution to a possible issue might lie.

Understanding the FUNDAMENTALS!

As mentioned earlier, the graphics pipeline, or rendering pipeline, is a conceptual model that describes the steps a graphics system needs to perform to render a 3D scene to a 2D screen.

From a high level, this is how it looks:

Application -> Geometry Processing -> Rasterization -> Pixel Processing

From a closer level, the geometry processing stage breaks down into vertex shading, optional tessellation and geometry shading, projection and clipping, which feed the rasterizer.

Rasterization is then followed by per-pixel operations such as pixel discards, the alpha test, depth test, stencil test and color buffer blend, commonly grouped together as raster operations, and more. The final color is sent to your screen (LED/LCD, etc.) over a cable, and some of you might also know how displays work from there.
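As a small preview of where some of these raster operations surface in Unity, a ShaderLab Pass exposes several of them as render-state commands. The excerpt below is only an illustrative fragment of a pass, with placeholder values, not a recommended setup:

```
// Illustrative render-state commands inside a ShaderLab Pass.
// The values are placeholders for the sake of the example.
Pass
{
    ZTest LEqual                      // depth test
    ZWrite On                         // write to the depth buffer
    Blend SrcAlpha OneMinusSrcAlpha   // color buffer blend
    Stencil                           // stencil test
    {
        Ref 1
        Comp Equal
    }
    // ... vertex/fragment program goes here ...
}
```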

Mapping Unity to the High-Level View

Application stage: This is usually executed on the CPU, and the developer has a lot of control over it. The application stage makes sure that rendering primitives like points, lines and triangles are passed on successfully to the geometry processing stage. In Unity, the process of importing 3D models and setting up a scene with them is where the application stage of the general graphics pipeline overlaps with what you do in the editor. The moment you drag and drop FBX files, prefabs, etc. into the scene, or create 3D primitives (cube, cylinder, capsule, plane, etc.) from the Unity menu, the application collects and organizes the data (in the form of a mesh filter) and passes it on to the next stage.
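As a quick illustration, the same application-stage hand-off can also be seen from script: create a primitive and inspect the mesh data that the MeshFilter holds for the MeshRenderer to submit later. This is just a minimal sketch; the class name is mine, purely for this example.

```csharp
using UnityEngine;

// Minimal sketch of the application stage from script:
// create a primitive and look at the mesh data Unity stores
// in the MeshFilter, which the MeshRenderer later submits to the GPU.
public class AppStageSketch : MonoBehaviour
{
    void Start()
    {
        // Same result as creating a Cube from the GameObject menu.
        GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);

        // The MeshFilter holds the raw rendering primitives:
        // vertex positions and triangle indices.
        Mesh mesh = cube.GetComponent<MeshFilter>().sharedMesh;
        Debug.Log($"Vertices: {mesh.vertexCount}, Triangles: {mesh.triangles.Length / 3}");
    }
}
```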

Geometry processing stage: At a high level, this is where the space conversions happen, from object/local space to world space to view/camera/eye space, and finally a perspective or orthographic projection is applied. If you have heard of the MVP (Model, View, Projection) matrices, this is what I am talking about. Then clipping takes place, where geometry outside the camera's frustum is clipped away; this helps reduce calculations. In Unity, the mesh renderer is responsible for taking things forward from the application stage to the geometry processing stage. If you open up the Mesh Renderer component, it has a few settings and a material input property. The material contains a shader, and the shader contains the vertex and fragment functions which are executed on the GPU. The mesh filter contains the mesh data, and this is passed to the shader via the mesh renderer. Looking further into the detail, the vertex shader/function is where Unity does the object-to-clip-space conversion (UnityObjectToClipPos). There are optional functions/shaders, like tessellation and geometry, which may or may not be written.
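To make the MVP idea concrete, here is a small CPU-side sketch that mimics what the vertex function does. It assumes the script sits on a GameObject with a MeshFilter and that a main camera exists, and it ignores the platform-specific projection tweaks Unity applies before handing the matrix to the GPU.

```csharp
using UnityEngine;

// CPU-side sketch of the MVP transform: take one vertex from
// object space to clip space, mirroring what the vertex shader does.
public class MvpSketch : MonoBehaviour
{
    void Start()
    {
        Camera cam = Camera.main;

        Matrix4x4 model = transform.localToWorldMatrix; // object -> world
        Matrix4x4 view = cam.worldToCameraMatrix;       // world  -> view/eye
        Matrix4x4 projection = cam.projectionMatrix;    // view   -> clip

        Matrix4x4 mvp = projection * view * model;

        Vector3 v = GetComponent<MeshFilter>().sharedMesh.vertices[0];
        Vector4 clipSpace = mvp * new Vector4(v.x, v.y, v.z, 1f);

        Debug.Log($"Clip-space position of vertex 0: {clipSpace}");
    }
}
```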

Rasterization: The output of the geometry stage is vertex positions in clip space, which means we get an array of vertices or positions. If we had to represent them pictorially, we could use the point primitive (from the OpenGL/DirectX APIs) and render/visualize them; a 3D cube, if not rasterized, would just be 8 points. To view the cube as solid geometry, rasterization is performed, where we find all the pixels inside a given primitive. By primitive here, I mean the graphics-library primitives (point, line, triangle): e.g. the rasterizer will find all the pixels for each triangle of the cube. In most applications we neither see this process nor control it. Unity, too, does not expose it to developers and takes care of it at the graphics API level. The bare rasterization equations and program cannot be controlled or replaced (AFAIK), but we do have control over the input parameters based on which rasterization takes place and the respective output is produced. The importance of this stage will be discussed in the upcoming articles.
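Just to make "find all the pixels inside a primitive" concrete, here is a toy sketch of the classic edge-function coverage test. This is not Unity API and not something we write ourselves; the GPU's rasterizer does the equivalent for every triangle, and the coordinates here are assumed to already be in screen space.

```csharp
using UnityEngine;

// Toy illustration of triangle coverage: is a pixel centre inside
// the triangle (v0, v1, v2)? This is what the hardware rasterizer
// effectively answers for every candidate pixel.
public static class RasterSketch
{
    static float Edge(Vector2 a, Vector2 b, Vector2 p)
    {
        // Signed-area test: which side of edge a->b does p lie on?
        return (p.x - a.x) * (b.y - a.y) - (p.y - a.y) * (b.x - a.x);
    }

    public static bool Covers(Vector2 v0, Vector2 v1, Vector2 v2, Vector2 pixelCentre)
    {
        float e0 = Edge(v0, v1, pixelCentre);
        float e1 = Edge(v1, v2, pixelCentre);
        float e2 = Edge(v2, v0, pixelCentre);
        // Inside if the pixel centre is on the same side of all three edges.
        return (e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0);
    }
}
```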

Pixel processing: Finally, we have all the pixels that lie inside a graphics primitive, and it's time to apply color to them. This color can come from textures as well. In Unity, we know this as the fragment function/shader, which is responsible for providing a color for each pixel. One can perform a lot of calculations here, but what is returned is just a color in the form of RGBA. Examples range from simply applying a texture, animating a texture using the UVs or creating a procedural checkerboard pattern, up to effects like Gaussian blur.
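Putting the last two stages together, here is a minimal sketch of an unlit shader for the built-in render pipeline: the vertex function performs the object-to-clip-space conversion and the fragment function returns a procedural checkerboard color. The shader and property names are just for illustration.

```
Shader "Unlit/CheckerSketch"
{
    Properties
    {
        _ColorA ("Color A", Color) = (1,1,1,1)
        _ColorB ("Color B", Color) = (0,0,0,1)
        _Tiling ("Tiling", Float) = 8
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata { float4 vertex : POSITION; float2 uv : TEXCOORD0; };
            struct v2f     { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

            fixed4 _ColorA;
            fixed4 _ColorB;
            float _Tiling;

            // Geometry processing: object space -> clip space (the MVP step).
            v2f vert (appdata v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                return o;
            }

            // Pixel processing: return an RGBA color for each rasterized pixel.
            fixed4 frag (v2f i) : SV_Target
            {
                float2 cell = floor(i.uv * _Tiling);
                float checker = fmod(cell.x + cell.y, 2.0);
                return lerp(_ColorA, _ColorB, checker);
            }
            ENDCG
        }
    }
}
```

Assigning this shader to a material and dropping it on a cube is enough to see both stages in action.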

As an overview, we can clearly see that Unity does stick to the graphics pipeline and also gives its users a good amount of control over it.

What next?

This is just very high-level information on the graphics pipeline, but where do the amazing things happen? The effects, the merging, stencil buffers, the Z-test, etc. How are transparent objects rendered? What are render passes? What control do we have while using Unity? We shall discuss the graphics stages and many more topics in the upcoming articles.
