Unity and the Application Stage in Graphics Pipeline — Part I

The Application

Videep
4 min read · Apr 4, 2022

It all starts with the application, which is responsible for processing a piece of code or executing a program. As "users", we install applications that can be as simple as Notepad or Calculator, or as complex as the God of War game. For us, it's just a few clicks to install and run the application. We click the game icon and voila, we have the game running with some cool graphics!

An application program (application or app for short) is a computer program designed to carry out a specific task other than one relating to the operation of the computer itself, typically to be used by end-users.

Let's talk about Windows. A programmer creates a program with an ".exe" extension. This program might run in the command line or might spawn a window with a graphical user interface. Assuming the program has a window with a GUI, by default Windows draws it through its Graphics Device Interface (GDI or GDI+). That GDI drawing goes through the default display adapter (often the integrated graphics), and that is how you get to see a GUI and interact with the application. A team of programmers could build some really good applications, what we call software. They could even make games, but if a game had to run on the integrated graphics device, it would be bound to the limitations of that device; hence the lag or lower FPS, depending on the device's capability.
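To make that concrete, here is a minimal sketch of a classic Win32 program that lets Windows paint its window through GDI. It is illustrative only, trimmed of error handling, and the window and class names are invented for the example.

```cpp
// Minimal Win32 window whose drawing goes through GDI (sketch only).
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
    switch (msg) {
    case WM_PAINT: {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);            // a GDI device context
        TextOutA(hdc, 20, 20, "Drawn via GDI", 13); // GDI renders the text
        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcA(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nCmdShow) {
    WNDCLASSA wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = "GdiDemo";                   // hypothetical class name
    RegisterClassA(&wc);

    HWND hwnd = CreateWindowA("GdiDemo", "A GDI Window", WS_OVERLAPPEDWINDOW,
                              CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
                              nullptr, nullptr, hInst, nullptr);
    ShowWindow(hwnd, nCmdShow);

    // The standard message loop: the OS hands events to the application.
    MSG msg;
    while (GetMessageA(&msg, nullptr, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageA(&msg);
    }
    return 0;
}
```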

If programmers were to build software like AutoCAD, Maya, 3ds Max or Blender, they would require a powerful graphics device. This is where dedicated GPUs come in. That is exactly the case with Unity: the "Scene" and "Game" windows are just instances of a 3D graphics context.
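As a sketch of what "an instance of a 3D graphics context" means, here is how a standalone program might create one with GLFW and OpenGL. GLFW is my choice for the example, not what Unity actually uses internally; Unity has its own platform layer.

```cpp
// Creating a 3D graphics context in a window (GLFW + OpenGL sketch).
#include <GLFW/glfw3.h>   // also pulls in the basic OpenGL header
#include <cstdio>

int main() {
    if (!glfwInit()) return 1;

    // The window owns an OpenGL context: this is the "graphics context"
    // a viewport like Unity's Scene or Game view renders into.
    GLFWwindow* window = glfwCreateWindow(800, 600, "Viewport", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);

    // Reports which device got the context: integrated or dedicated GPU.
    std::printf("Renderer: %s\n",
                (const char*)glGetString(GL_RENDERER));

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}
```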

I remember using an integrated graphics card with 3ds Max about 10 years ago, and the viewports always used to "glitch". Once a GPU was purchased and installed, it ran smoothly.

Unity as an Application

Now we can relate Unity to an application. As users, we have an executable which spawns an application with many windows, two of which are graphics-context windows: "Scene" and "Game".

In 3D software like Maya or 3ds Max, we create 3D models and view them in the "viewport". This viewport is the graphics context. What we see in the viewport is data, data in an organized form. There are specialized tools for modelling, texturing, lighting, rigging, skinning, animation, particle systems, etc. We see the final results only when we render out a frame; until then, we are looking at real-time rendered 3D objects in the viewport. Hence, software like Maya also offers "Hardware Rendering" as an option in the viewport settings.
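"Data in an organized form" can be pictured with a toy mesh structure like the one below. This is only a sketch; a real engine's mesh carries far more (normals, tangents, skin weights, and so on), and the names here are invented for illustration.

```cpp
// A toy mesh: organized data that a viewport can draw (illustrative only).
#include <cstdint>
#include <cstdio>
#include <vector>

struct Vertex {
    float x, y, z;   // position from modelling
    float u, v;      // texture coordinates from UV mapping
};

struct Mesh {
    std::vector<Vertex>   vertices;  // authored in Maya, Max, Blender, etc.
    std::vector<uint32_t> indices;   // every 3 indices form one triangle
};

int main() {
    // The smallest drawable thing: a single textured triangle.
    Mesh m{
        {{-0.5f, -0.5f, 0.0f, 0.0f, 0.0f},
         { 0.5f, -0.5f, 0.0f, 1.0f, 0.0f},
         { 0.0f,  0.5f, 0.0f, 0.5f, 1.0f}},
        {0, 1, 2},
    };
    std::printf("%zu vertices, %zu triangle(s)\n",
                m.vertices.size(), m.indices.size() / 3);
    return 0;
}
```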

These days, renderers like V-Ray also offer real-time and mixed rendering options apart from traditional CPU-based rendering.

The similarity between game engines like Unity and 3D software is that both have graphics contexts set up in which we can see textured 3D models, with or without lighting. But in game engines, we do not have the extended capabilities of modelling, UV mapping, texturing, CPU-based rendering, and many more.

The Game!

When we run the game application, the OS hands over execution to the application, wherein the application loads all the user preferences and 3D/2D assets, connects to servers to gather data, and so on. Loading and organizing the data is the application's job, as is creating the graphics context that communicates with the GPU through a graphics API (OpenGL, DirectX, Vulkan, Metal, etc.). Whenever a frame is rendered, data is passed from the CPU to the GPU and then the rendering takes place. We have concepts like double buffering and triple buffering, wherein the GPU renders one frame into the back buffer while the front buffer is being displayed on the screen.
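Here is what double buffering looks like in a bare-bones render loop, again sketched with GLFW and OpenGL rather than Unity's actual internals: each iteration the CPU prepares work, the GPU draws into the hidden back buffer, and the swap makes that buffer visible.

```cpp
// Double buffering: draw into the back buffer, then swap (GLFW sketch).
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return 1;
    GLFWwindow* window = glfwCreateWindow(800, 600, "Game", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);
    glfwSwapInterval(1);  // sync swaps to the display refresh (v-sync)

    while (!glfwWindowShouldClose(window)) {
        glfwPollEvents();                      // CPU: input, game state, etc.

        glClearColor(0.1f, 0.1f, 0.2f, 1.0f);  // GPU: render into the back buffer
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw calls would go here ...

        glfwSwapBuffers(window);               // back buffer becomes front buffer
    }
    glfwTerminate();
    return 0;
}
```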

When you run an FPS analyzer, you see how many frames are rendered per second. 60 FPS means that 60 times every second the CPU prepares a frame's worth of work and submits it, along with any changed data, to the GPU! Too much repetitive work here…
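An FPS analyzer boils down to counting completed frames per wall-clock second. Below is a self-contained sketch using std::chrono, with a sleep standing in for the real per-frame work.

```cpp
// Crude FPS counter: count frames finished in each wall-clock second.
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    auto windowStart = clock::now();
    int frames = 0;

    for (int i = 0; i < 300; ++i) {  // stands in for the render loop
        // ~16 ms of "work" per frame, roughly 60 FPS.
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
        ++frames;

        auto now = clock::now();
        if (now - windowStart >= std::chrono::seconds(1)) {
            std::printf("FPS: %d\n", frames);
            frames = 0;
            windowStart = now;
        }
    }
    return 0;
}
```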

Now that we have an idea of what an application is, be it a software package or a game, and the basic principles it uses to deliver a rendered output to the user, we can clearly state that the CPU is connected to the GPU via a bus (typically PCI Express) that is responsible for carrying all the data from the CPU to the GPU for rendering a frame.

This is where major bottleneck issues appear.

There are algorithms and various methods that we can use to reduce the payload per frame.
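One such method, sketched below in OpenGL terms, is to upload static geometry to GPU memory once and only issue draw commands each frame, instead of resending the vertex data. This assumes an OpenGL 3+ context and a function loader such as glad are already set up; the helper names are mine.

```cpp
// Upload once, draw many times: GL_STATIC_DRAW keeps the mesh on the GPU.
#include <glad/glad.h>  // assumption: glad (or another GL loader) is in use
#include <cstddef>

// Called once at load time: copies the vertex data across the bus.
GLuint uploadMeshOnce(const float* vertices, std::size_t bytes) {
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // GL_STATIC_DRAW hints that the data is written once and drawn many
    // times, so the driver can keep it resident in GPU memory.
    glBufferData(GL_ARRAY_BUFFER, bytes, vertices, GL_STATIC_DRAW);
    return vbo;
}

// Called every frame: binds the existing buffer and issues a draw call,
// with no re-transfer of the vertex data from the CPU.
void drawFrame(GLuint vbo, GLsizei vertexCount) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
}
```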

In the next article(s), we shall see how Unity fits into the application stage of the graphics pipeline and look at some methods that help in reducing the data transfer each frame.
