The settings discussed in this article apply to 2D games and GUIs where the camera is orthographic and its distance to the 2D elements is fixed. Note that our aim is to render pixel-perfect sprites, so the source texture should have the same dimensions as its desired size on the game viewport.
Figure 1.1 Mentioned texture import settings applied
Mip mapping is a technique where smaller versions (mips) of your texture are generated automatically. Each mip level is half the width and height of the previous one, i.e. a quarter of its area. (Anisotropic filtering generates additional mipmaps, but more on that later.) At render time, the mip level whose size is closest to the screen-space size of the mapped 3D geometry in a given frame is selected and mapped onto the object.
It is used because when you map a 1024 x 1024 texture onto an object that covers only 20 pixels on screen, no matter what type of filtering you use, the visual quality will be poor when you cram that many texels into such a small screen area. It would also waste bandwidth, which in turn increases our render time, since most of that texture information goes to waste anyway. So you trade video memory (about 33% more VRAM required) for better filtering and better use of bandwidth.
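The quarter-per-level rule and the ~33% figure fall straight out of a geometric series. A minimal sketch, assuming an uncompressed square texture at 4 bytes per pixel (illustrative numbers, not any particular GPU format):

```python
def mip_chain(size):
    """Each mip level halves both dimensions, i.e. a quarter of the pixels."""
    levels = []
    while size >= 1:
        levels.append(size)
        size //= 2
    return levels

def memory_with_mips(size, bytes_per_pixel=4):
    """Total memory for the base level plus the whole mip chain."""
    return sum(s * s * bytes_per_pixel for s in mip_chain(size))

base = 1024 * 1024 * 4                    # base level alone
full = memory_with_mips(1024)             # base + all mips
print(mip_chain(8))                       # [8, 4, 2, 1]
print(round((full - base) / base * 100))  # ~33% extra
```

The overhead converges to 1/3 of the base level (1/4 + 1/16 + … = 1/3), which is where the "33% more VRAM" comes from.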
In most games, GUI textures have no perspective: they face the UI camera directly, and the UI camera itself renders in orthographic mode. Their distance to the camera never changes either. So the first thing we should do is get rid of all the mip maps; we don't need them, and turning them off saves video memory.
NPOT means a texture with non-power-of-two dimensions (e.g. 128x50, 600x500, 512x127).
Older hardware and some mobile phones do not support such textures and will upscale them to the next POT size in the background once they are uploaded to the GPU. So if we're targeting a fairly modern GPU, it's okay to allow NPOT sizes and use the texture as-is. But using a POT texture atlas is the much better option, especially if we are to use PVRTC compression for our iOS build, which requires not just POT textures but square ones!
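The background upscaling older GPUs perform amounts to rounding each dimension up to the next power of two. A quick sketch (function name is mine):

```python
def next_pot(n):
    """Smallest power of two >= n, as legacy NPOT handling would pad to."""
    p = 1
    while p < n:
        p *= 2
    return p

# A 600x500 texture silently occupies a 1024x512 allocation:
print(next_pot(600), next_pot(500))  # 1024 512
print(next_pot(512))                 # 512, already POT
```

Note how much of that padded allocation is pure waste, which is another argument for packing into a POT atlas yourself.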
Figure 2.2 A UI texture atlas used in the game Defender 3D © Accidental Empire Entertainment
In Unity 3D, sprite and UI element texture atlas generation can be automated with its built-in tools; NGUI also features an Atlas Maker tool. The geometry that uses a texture within the atlas has its UV coordinates set so that they correspond to the desired region of the atlas. This way, individual textures can be of any size, POT or NPOT; as far as the GPU is concerned, the atlas itself is the only texture, and it is POT and square if need be.
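The UV remapping the atlas tools do is simple in principle. A hypothetical sketch, assuming 0..1 UV space with the origin at the bottom-left (the function and its parameters are mine, not any engine API):

```python
def sprite_uvs(atlas_w, atlas_h, x, y, w, h):
    """Map a sprite's pixel rectangle inside an atlas to UV coordinates."""
    u0, v0 = x / atlas_w, y / atlas_h
    u1, v1 = (x + w) / atlas_w, (y + h) / atlas_h
    return (u0, v0, u1, v1)

# An NPOT 600x500 sprite packed at (0, 0) inside a 1024x1024 POT atlas:
print(sprite_uvs(1024, 1024, 0, 0, 600, 500))
```

The sprite itself stays NPOT; only the atlas the GPU sees needs to be POT (and square, for PVRTC).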
We should also change the Wrap Mode to Clamp so our texture is clamped to the limits of UV space. This solves the problem of a single repeated row or column of pixels bleeding in at the edges.
As mentioned above, in this scenario we use our textures at 1:1 scale with no mipmaps. So it only makes sense to use point filtering, which renders our elements exactly as they appear in the source image: sharp and unaltered.
Anisotropic filtering generates additional, rectangular mip levels to be used when the mapped geometry is rendered from oblique angles.
As we are not using any mipmapping here, we don't need anisotropic filtering; our geometry always faces the camera, so rendering from oblique angles is not possible. (It also makes me wonder why this option stays active after turning off mipmapping in Unity's import settings. If you know something I don't, please contact me and let me know.)
Figure 2.4 Effects of anisotropic filtering comparison, (source: Wikipedia)
And finally, Max Size should be set equal to or greater than our source texture's dimensions to prevent Unity from downscaling our textures; we've already prepared them at their pixel-perfect size.
What is overdraw? Simply put, overdraw happens when the same pixel is rendered multiple times per frame. With an alpha-blended shader, even fully transparent fragments are still rendered, unless we use an alpha-test/clip shader.
Alpha testing discards pixels below a transparency threshold. It can be costly on some mobile GPUs, especially Apple's PowerVR-based devices, but it can also be a good optimization if all our transparent pixels are fully transparent. Research your target hardware before choosing a shader.
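The difference is easiest to see per fragment. A pure-Python sketch of the two behaviors (the cutoff value is an assumption; real shaders do this on the GPU):

```python
ALPHA_CUTOFF = 0.5  # assumed threshold, analogous to a shader's clip value

def blend(src_rgb, src_a, dst_rgb):
    """Alpha blending always writes the pixel, even when src_a == 0."""
    return tuple(s * src_a + d * (1 - src_a) for s, d in zip(src_rgb, dst_rgb))

def alpha_test(src_rgb, src_a, dst_rgb):
    """Alpha testing discards the fragment entirely below the cutoff."""
    return dst_rgb if src_a < ALPHA_CUTOFF else src_rgb

print(blend((1, 0, 0), 0.0, (0, 0, 1)))       # fully transparent, still written
print(alpha_test((1, 0, 0), 0.0, (0, 0, 1)))  # discarded, background kept
```

Blending does the full read-modify-write even for invisible fragments, which is exactly the overdraw cost; alpha testing skips the write but can defeat the hidden-surface-removal optimizations of tile-based deferred GPUs like PowerVR.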
For example, we render our 3D game world and then render a GUI HUD on top of it. This costs us because we are limited by our GPU's pixel fillrate (the number of pixels a GPU can render to screen and write to video memory per second). That's why we have to keep overdraw to a minimum, even if it's less dramatic in a 2D game.
Overdraw is far worse in 3D games; imagine foliage, lush trees, or complex particle systems like big explosions and fires.
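A back-of-the-envelope way to see why fillrate caps overdraw, with made-up GPU numbers (not any real device spec):

```python
def max_overdraw(fillrate_pixels_per_sec, width, height, fps):
    """Average number of times each screen pixel may be written per frame
    before the fillrate budget is exhausted."""
    pixels_per_frame = width * height
    budget_per_frame = fillrate_pixels_per_sec / fps
    return budget_per_frame / pixels_per_frame

# e.g. a hypothetical 1 Gpixel/s GPU at 1920x1080, targeting 60 fps:
print(round(max_overdraw(1_000_000_000, 1920, 1080, 60), 1))  # ~8.0x
```

Every fullscreen transparent layer (a HUD backdrop, a fade quad, stacked particles) eats one of those multiples, and blending, texture sampling, and shader cost shrink the real budget well below this idealized number.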
One way to do it is to use what's called a "tight sprite". This simply means that the flat mesh carrying our texture traces the outline of the image, minimizing the fully transparent area we have to render. Some Asset Store products can automate this process; Unity's own sprite packer also features it, although its version is not especially well optimized (excessive vertex count).
Figure 2.5 Quad and Tight sprite comparison
NGUI, a UI system I know well and use daily, sadly doesn't feature tight sprites; it generates quads. But there is a logical reason for this: some of NGUI's effects work on vertices, like rendering text elements as gradients. It simply changes the colors of the top two and bottom two vertices and lets vertex color interpolation do the rest. A technique like this could be more costly with a tight mesh per text character.
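The gradient trick is just linear interpolation between the top and bottom vertex colors, done for free by the rasterizer. A pure-Python illustration of the same math (function names are mine):

```python
def lerp(a, b, t):
    """Linearly interpolate two RGB colors, as the rasterizer does per pixel."""
    return tuple(ca + (cb - ca) * t for ca, cb in zip(a, b))

top, bottom = (255, 255, 255), (0, 0, 255)  # white at the top, blue at the bottom
rows = 5
for i in range(rows):
    t = i / (rows - 1)  # 0 at the top row, 1 at the bottom row
    print(lerp(top, bottom, t))
```

On a quad, setting four vertex colors covers the whole glyph; on a tight mesh the same effect needs the gradient evaluated at many more vertices, which is the extra cost mentioned above.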
As much as possible, keep per-vertex calculations on tight sprites to a reasonable amount.