Textures in OpenGL Games Programming

Sections

Texture Creation and Residency

Texture Internal Formats

Palettised Textures

Texture Compression

Texture Blending and Multitexturing

Dynamic Texture Uploads

See the target support section for a given target for information on rendering to a texture.

Texture Creation and Residency

On hardware with onboard texture memory (as opposed to AGP only PC cards such as the Intel 740 or exotic hardware like the 3DLabs Permedia 3 with its "virtual textures"), textures will generally be downloaded (made resident) when they are created using glTexImage2D and glBindTexture (see this example of texture object creation). If more texture objects are created than will fit in available texture memory, they are normally swapped in and out by the driver depending on what is necessary to render the current frame (e.g. which texture objects have had glBindTexture called on their ID). This caching procedure can be tuned by using glPrioritizeTextures to set the "current importance" of a texture object.
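The creation pattern described above can be sketched as follows (the function name and image data here are illustrative, not taken from the linked example):

```c
#include <GL/gl.h>

/* Create a texture object and download its image; on hardware with onboard
 * texture memory the driver will normally make it resident at this point. */
GLuint create_texture(const GLubyte *pixels, GLsizei w, GLsizei h)
{
    GLuint tex;
    glGenTextures(1, &tex);             /* allocate a texture object name */
    glBindTexture(GL_TEXTURE_2D, tex);  /* make it the current 2D texture */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);
    return tex;
}
```

Later frames then simply call glBindTexture with the returned ID, and the driver swaps the texture in if it has been evicted.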

The interface also supports managing texture residency manually: call glPrioritizeTextures with priorities of 1.0f and 0.0f (maximum and minimum importance) and check the current status using glAreTexturesResident.
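A minimal sketch of this manual approach (the texture IDs are hypothetical):

```c
GLuint ids[2] = { wall_tex, sky_tex };
GLclampf priorities[2] = { 1.0f, 0.0f }; /* keep wall_tex resident, evict sky_tex first */
glPrioritizeTextures(2, ids, priorities);

GLboolean resident[2];
if (glAreTexturesResident(2, ids, resident)) {
    /* all queried textures are currently resident */
} else {
    /* resident[i] tells you which ones are not */
}
```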

OpenGL 1.1 provides no way to restrict which mipmap levels of a texture are used, so there is no convenient way to make only part of a mip mapped texture resident at a time. OpenGL 1.2 adds this capability (via the texture base level, maximum level and LOD clamp parameters), and an extension with the same functionality, SGIS_texture_lod, is becoming available on some Win32 consumer drivers such as those from NVidia.
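A sketch of restricting the mipmap range, assuming an OpenGL 1.2 implementation (SGIS_texture_lod provides *_SGIS equivalents of these parameters):

```c
glBindTexture(GL_TEXTURE_2D, tex);
/* Only levels 2..6 will be used, so the driver need not keep the
 * two largest mip levels resident. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 2);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 6);
```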

Note that current OpenGL implementations on Windows 9X will keep a copy of all texture objects in system memory, to allow for the 9X kernel's tendency to erase the contents of video memory without warning. Obviously this can be a problem for very large data sets, even if you do not keep additional copies of the texture in system memory yourself, since you might want to store them in memory compressed, on a CD, etc. Various proposals for extending OpenGL to deal with this problem, from e.g. Tim Sweeney, have been made, but no definite extension has been created as of December 1999.

In general, there are limits to what types of texture can be created. OpenGL 1.1 specifies that the texture width and height must be a power of two. There may be hardware specific limits as well. As of December 1999, 3dfx hardware, for example, is limited to a maximum texture size of 256x256, and textures cannot have an aspect ratio of greater than 8:1.
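These constraints can be checked in application code before attempting a download. The helper below is hypothetical (not part of OpenGL), using 3dfx-style limits of 256x256 and 8:1 as example parameters:

```c
#include <assert.h>

/* Is v a power of two? (OpenGL 1.1 requires this for texture dimensions.) */
static int is_pow2(int v) { return v > 0 && (v & (v - 1)) == 0; }

/* Check a width/height pair against the power-of-two rule plus
 * vendor-specific size and aspect-ratio limits. */
static int texture_dims_ok(int w, int h, int max_size, int max_aspect)
{
    int aspect = (w > h) ? w / h : h / w;
    return is_pow2(w) && is_pow2(h) &&
           w <= max_size && h <= max_size &&
           aspect <= max_aspect;
}
```

For example, with 3dfx-style limits, 256x256 passes, 512x512 fails on size, and 256x16 fails on its 16:1 aspect ratio.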

The maximum texture size on a given implementation can be obtained by calling glGetIntegerv with GL_MAX_TEXTURE_SIZE. Note that this value is not necessarily the largest width or height that can actually be used; instead it is the largest width or height that is guaranteed to work, assuming the worst case for values such as the colour depth of the texture. A texture wider or taller than this value might still be valid at less than the maximum colour depth. Also, this query would not tell you that there was a problem with, for example, a texture with an aspect ratio of 16:1 on Voodoo 1 hardware.
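The query itself is a one-liner:

```c
GLint max_size;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size); /* e.g. 256 on 3dfx hardware */
```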

A better way to check if a texture is valid is therefore to use a proxy texture to check the limits of the hardware before passing in actual textures. Note that in the current OpenGL interface proxy textures can be used only to check whether a texture with a supplied width and height, palette (if any), etc can be created, not whether a specific texture can be made resident given the texture objects already in existence. Here is an example of how to use proxy textures.
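For reference, a minimal proxy-texture check might look like this (the 512x512 GL_RGBA8 request is illustrative):

```c
/* Ask whether a 512x512 GL_RGBA8 texture could be created at all.
 * A proxy download never creates a real texture; it just records
 * whether the request would have succeeded. */
GLint width = 0;
glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
if (width == 0) {
    /* the implementation cannot support a texture with these parameters */
}
```

Note again that a successful proxy check says nothing about whether the texture will fit alongside the texture objects already in existence.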

The normal result of passing an invalid texture is that the default texture object (which is solid white) is used whenever that texture would have been rendered.

Texture Internal Formats

To specify the internal format of a texture (i.e. request what format should be used to store the texture on the hardware), you need to use the internal format parameter of glTexImage2D.
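For example, to request a 16-bit internal format to save texture memory (the implementation is free to substitute a different format):

```c
glTexImage2D(GL_TEXTURE_2D, 0,
             GL_RGB5,          /* requested internal (on-hardware) format */
             256, 256, 0,
             GL_RGB, GL_UNSIGNED_BYTE, /* format of the data you supply */
             pixels);
```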

Palettised Textures

To render textures with an attached palette (on hardware that supports it), you need the EXT_paletted_texture extension. Example code showing how to use paletted textures is also available.
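A sketch of the basic pattern, assuming EXT_paletted_texture is present and glColorTableEXT has been obtained via wglGetProcAddress (the palette and index data here are illustrative):

```c
GLubyte palette[256 * 3];   /* 256 RGB palette entries */
GLubyte indices[128 * 128]; /* 8-bit indices into the palette */

/* Attach the palette to the texture target, then download the
 * texture as 8-bit colour indices. */
glColorTableEXT(GL_TEXTURE_2D, GL_RGB, 256,
                GL_RGB, GL_UNSIGNED_BYTE, palette);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COLOR_INDEX8_EXT, 128, 128, 0,
             GL_COLOR_INDEX, GL_UNSIGNED_BYTE, indices);
```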

Texture Compression

S3's texture compression algorithm (S3TC, adopted by D3D as the standard) is available in OpenGL via the S3TC extension. As of early December 1999, only S3's drivers export that extension in OpenGL. 3dfx have proposed a different and more extensive "open source" standard, FXT1, which will have an OpenGL extension, and the ARB is currently considering an ARB_texture_compression extension, which may be based on FXT1.

Texture Blending and Multitexturing

To do multitexturing, you should use the ARB_multitexture extension. (The specification is in appendix F to the OpenGL 1.2.1 specification on that page.) More information about using ARB_multitexture is also available. Sample multitexture code from Nick Triantos can be downloaded.
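A minimal sketch of the lightmap-style usage, assuming the ARB_multitexture entry points have been obtained via wglGetProcAddress (the texture IDs are hypothetical):

```c
/* Unit 0: base texture; unit 1: lightmap modulated over it. */
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, base_tex);

glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, lightmap_tex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

glBegin(GL_TRIANGLES);
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 0.0f);
  glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 0.0f);
  glVertex3f(-1.0f, -1.0f, 0.0f);
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 0.0f);
  glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 0.0f);
  glVertex3f(1.0f, -1.0f, 0.0f);
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.5f, 1.0f);
  glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.5f, 1.0f);
  glVertex3f(0.0f, 1.0f, 0.0f);
glEnd();
```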

To perform more sophisticated texture blending operations, e.g. overbrightening, you can use the EXT_texture_env_add or EXT_texture_env_combine extensions.
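For example, an overbrightening setup with EXT_texture_env_combine might modulate the texture with the primary colour and then double the result:

```c
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_EXT, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_EXT, GL_PRIMARY_COLOR_EXT);
glTexEnvf(GL_TEXTURE_ENV, GL_RGB_SCALE_EXT, 2.0f); /* 2x overbright */
```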

Note that texture_env_add is basically a lightweight version of texture_env_combine for cards that cannot support all the functionality of texture_env_combine. Note also that some vendors (e.g. NVidia) stack more texture blending functionality on top of texture_env_combine using additional extensions, e.g. NV_texture_env_combine4.

Note that the SGIS_multitexture extension is essentially obsolete and should not be used.

Dynamic Texture Uploads

These are probably best done using the SubImage functions, e.g. glTexSubImage2D, to update an existing texture, rather than repeatedly using glGenTextures and glTexImage2D to create new textures. You may be able to cut down on the conversions done by the driver by using the packed_pixels extension, if available. See this document from Michael Gold for more information on packed_pixels.
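A sketch of a per-frame update using glTexSubImage2D (the texture ID and pixel buffer are illustrative):

```c
/* Update a 64x64 region of an existing texture in place, rather than
 * destroying and recreating the whole texture object. */
glBindTexture(GL_TEXTURE_2D, dynamic_tex);
glTexSubImage2D(GL_TEXTURE_2D, 0,  /* mip level 0 */
                0, 0,              /* x, y offset of the region */
                64, 64,            /* width, height of the region */
                GL_RGB, GL_UNSIGNED_BYTE, new_pixels);
```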

Back to main