Rendering Massive Virtual Worlds

SIGGRAPH 2013

Thursday, July 25, 2013 - 2:00 PM to 5:15 PM
Anaheim Convention Center Room 304 A-D

Course Description

This course is presented in five sections. The first two presentations show how huge data sets can be streamed and displayed in real time for virtual-globe rendering inside a web browser. Topics include pre-processing, storage, and transmission of real-world data, plus cache hierarchies and efficient culling algorithms.

The third section reviews content generation using a combination of procedural and artist-driven techniques. It explores the integration of content-generation applications into production tool chains and their use in the creation of real-world video games. Topics include productivity, data dependencies, and the trade-offs of putting massive procedural content generation into production.

The fourth section covers recent advances in graphics hardware architecture that allow GPUs to virtualize graphics resources (specifically, textures) by leveraging virtual memory. It discusses augmentation of traditional graphics APIs and presents several use cases and examples.

The final presentation shows how support for hardware-assisted virtual texturing was integrated into a game engine. It reviews the challenges of ensuring that the engine continued to operate efficiently on hardware that does not support virtual texturing. It also illustrates the concessions made in the engine to accommodate limitations of existing hardware and proposes some future enhancements that would improve the usability of the solution.

Course Notes from the ACM Digital Library.

Course Organizer

Graham Sellers
Advanced Micro Devices, Inc.


Using Multiple Frustums for Massive Worlds

Patrick Cozzi
Analytical Graphics, Inc. and University of Pennsylvania

In massive worlds, precision problems manifest as rendering artifacts: limited vertex-transform precision causes jittering, and limited depth-buffer precision causes z-fighting. In this talk, we present the approach and implementation used to eliminate z-fighting at Analytical Graphics, Inc. By partitioning the scene into multiple frustums, we obtain enough depth precision for our virtual-globe and space applications, which require massive view distances. However, a careful implementation is required to achieve acceptable performance and minimize new artifacts.
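The partitioning idea can be sketched as follows. This is a minimal, hypothetical scheme (function and parameter names are illustrative, not AGI's actual implementation): the view distance is split so that every sub-frustum keeps its far/near ratio, and therefore its depth-buffer error, bounded.

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Split the view distance [nearPlane, farPlane] into sub-frustums whose
// far/near ratio never exceeds maxRatio, so each sub-frustum gets adequate
// depth precision. Frustums are returned far-to-near, the order in which
// they would be rendered, clearing the depth buffer between passes.
std::vector<std::pair<double, double>> partitionFrustum(double nearPlane,
                                                        double farPlane,
                                                        double maxRatio) {
    // Smallest count with maxRatio^count >= farPlane/nearPlane.
    int count = static_cast<int>(
        std::ceil(std::log(farPlane / nearPlane) / std::log(maxRatio)));
    if (count < 1) count = 1;
    // A uniform per-frustum ratio r tiles [nearPlane, farPlane] exactly.
    const double r = std::pow(farPlane / nearPlane, 1.0 / count);
    std::vector<std::pair<double, double>> frustums;
    for (int i = count - 1; i >= 0; --i) {  // far-to-near
        frustums.emplace_back(nearPlane * std::pow(r, i),
                              nearPlane * std::pow(r, i + 1));
    }
    return frustums;
}
```

Rendering each sub-frustum from farthest to nearest, with a depth clear between passes, gives every pass the full range of the depth buffer.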



Patrick Cozzi is the Principal Graphics Architect at Analytical Graphics, Inc., where he leads the graphics development of OpenGL and WebGL virtual globes. He is coauthor of 3D Engine Design for Virtual Globes and coeditor of OpenGL Insights. Patrick teaches GPU Programming and Architecture at the University of Pennsylvania, where he received a master's degree in computer science.

World-Scale Terrain Rendering

Kevin Ring
Analytical Graphics, Inc.

Terrain datasets for massive worlds - especially those that aim to represent the massive world we call Earth - can easily measure in the terabytes. Add in detailed textures for the surface, such as color maps derived from satellite imagery or aerial photography, and it is not at all uncommon to see datasets measured in the hundreds of terabytes. Such datasets are much too large to fit in memory, and often too large even to store on a local system. Working with these datasets requires effective algorithms for selecting the subset of the world we actually want to render, at an appropriate level of detail. This is particularly challenging when we need to use off-the-shelf data whenever possible. We discuss how we solved these problems in Cesium, an open source virtual globe that runs inside a web browser without the need for a plugin.
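One widely used criterion for selecting the rendered subset can be sketched as follows. This is a generic screen-space-error test of the kind quadtree terrain engines use, not necessarily Cesium's exact formulation; the names and the 2-pixel budget are illustrative.

```cpp
#include <cassert>
#include <cmath>

// Project a tile's worst-case geometric error into pixels: a world-space
// error e at distance d covers roughly
//   e * screenHeight / (2 * d * tan(fovy / 2))
// pixels on screen.
double screenSpaceError(double geometricErrorMeters,  // from the tile data
                        double distanceMeters,        // camera to tile
                        double screenHeightPx,
                        double fovyRadians) {
    return (geometricErrorMeters * screenHeightPx) /
           (2.0 * distanceMeters * std::tan(fovyRadians / 2.0));
}

// Refine (descend to the tile's children) when the projected error
// exceeds a pixel budget; otherwise render the tile at this level.
bool shouldRefine(double geometricErrorMeters, double distanceMeters,
                  double screenHeightPx, double fovyRadians,
                  double maxErrorPx = 2.0) {
    return screenSpaceError(geometricErrorMeters, distanceMeters,
                            screenHeightPx, fovyRadians) > maxErrorPx;
}
```

Walking a tile quadtree with this test naturally selects coarse tiles in the distance and fine tiles near the camera, bounding both the triangle count and the data that must be streamed.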


Slides (.pptx) (15.2 MB)


Kevin Ring is coauthor of 3D Engine Design for Virtual Globes and the lead architect of STK Components at Analytical Graphics, Inc. In recent years, he has immersed himself in the problem of massive terrain rendering and analysis while developing the terrain and imagery rendering engine for Cesium, a WebGL virtual globe. Kevin received a bachelor's degree in Computer Science from Rensselaer Polytechnic Institute.

Populating a Massive Game World

Emil Persson and Joel de Vahl
Avalanche Studios

Creating a big game world is one thing; populating it with meaningful content and bringing it to life is another matter. This presentation covers some of Avalanche Studios' experiences producing content for its large game worlds. We describe our approach to content production for a massive game world, but also discuss problems with our content-production pipeline, such as unreliable compilers, long turnaround times, and broken builds, and, more importantly, the improvements and fixes we have made to the pipeline in the years since the release of Just Cause 2 to solve these issues. The talk also covers the various systems in place to make the world feel more diverse and alive.


Slides (.pptx) (6.9 MB)


Emil Persson is the Head of Research at Avalanche Studios, where he conducts forward-looking research aimed at being relevant and practical for game development, as well as setting the future direction for the Avalanche Engine. Previously, Emil was an ISV Engineer in the Developer Relations team at ATI/AMD. He assisted tier-1 game developers with the latest rendering techniques, identifying performance problems and applying optimizations. He also made major contributions to SDK samples and technical documentation.

Joel de Vahl works as a Senior Engine Programmer at Avalanche Studios, focusing primarily on graphics and engine technology development. Previously, Joel worked as an Engine Programmer at Starbreeze Studios, focusing on lighting and rendering technology.

Hardware Virtual Texturing

Graham Sellers
Advanced Micro Devices, Inc.

Recent advances in graphics hardware allow GPUs to assist in functions such as streaming texture data, managing sparse data sets, and providing reasonable visual results in cases where not all of the data needed to render a scene is available. In this talk, we take a deep dive into AMD's partially resident texture hardware, briefly cover the sparse texture extensions for OpenGL, and then explore some use cases for the hardware and software features, including some live demos.


Slides (.pptx) (1.7 MB)


Graham Sellers is the manager of the OpenGL driver team and a software architect at AMD. He represents AMD at the OpenGL ARB and Khronos Group and is responsible for the design and delivery of new features in AMD's OpenGL implementation, including extensions and new versions of the OpenGL API. He has authored over 20 OpenGL extensions, many of which are now part of the core API specification. He is also co-author of the OpenGL SuperBible and the OpenGL Programming Guide. He holds a master's degree in Engineering from the University of Southampton, UK.

High Quality Software and Hardware Virtual Textures

J.M.P. van Waveren
Id Software, LLC

Modern simulations increasingly require the display of very large, uniquely textured worlds at interactive rates. In large outdoor environments and also high detail indoor environments, like those displayed in the computer game RAGE, the unique texture detail requires significant storage and bandwidth. Virtual textures reduce the cost of unique texture data by providing a sparse representation which does not require all of the data to be present for rendering, while leaving the majority of the texture data in highly compressed form on secondary storage.

A virtual texture is divided into small pages that are loaded into a pool of resident physical pages as required for rendering. In RAGE, these small pages are square blocks of 128 x 128 texels, and the pool of physical pages is a fully resident texture that is logically subdivided into such square blocks of texels. While a virtual texture can be very large (say a million pages) and is never fully resident in video memory, the texture that holds the pool of physical pages is fully resident but much smaller (typically only 4096 x 4096 texels, or 1024 pages). Virtual texture pages are mapped to physical texture pages, and during rendering virtual addresses need to be translated to physical ones.
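The translation step can be sketched on the CPU as follows. This is a hypothetical illustration using the page and pool sizes quoted above; the structure and names are not RAGE's actual implementation, which performs the lookup per fragment via page-table and mapping textures.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Illustrative sizes from the text: 128 x 128-texel pages in a
// 4096 x 4096-texel physical texture, i.e. 32 x 32 = 1024 physical slots.
constexpr int kPageTexels      = 128;
constexpr int kPhysTexels      = 4096;
constexpr int kPhysPagesPerRow = kPhysTexels / kPageTexels;  // 32

struct PageTable {
    int virtPagesPerRow;        // virtual texture width, in pages
    std::vector<int> physSlot;  // virtual page index -> physical slot (-1 = not resident)

    // Translate a virtual texel address to a physical texel address.
    // Assumes the page is resident (physSlot[pageIndex] >= 0).
    std::pair<int, int> translate(int vx, int vy) const {
        int pageIndex = (vy / kPageTexels) * virtPagesPerRow + (vx / kPageTexels);
        int slot = physSlot[pageIndex];
        // Same texel offset within the page; only the page origin moves.
        int px = (slot % kPhysPagesPerRow) * kPageTexels + vx % kPageTexels;
        int py = (slot / kPhysPagesPerRow) * kPageTexels + vy % kPageTexels;
        return {px, py};
    }
};
```

A GPU implementation does the same arithmetic per sampled texel, fetching the slot from a page-table texture rather than an array.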

Virtual textures differ from other forms of virtual memory because first, it is possible to fall back to slightly blurrier data without stalling execution, and second, lossy compression of the data is perfectly acceptable for most uses. Implementations of software virtual textures exploit these key differences between virtual textures and other forms of virtual memory to maintain performance and reduce memory requirements at the cost of quality. Implementing virtual textures without special hardware support is challenging and inevitably comes down to finding the right trade-off between performance, memory requirements, and quality. While the implementation of software virtual textures in RAGE emphasized performance, the visual fidelity of the virtual textures in RAGE can be improved in several ways that trade performance and memory for quality.
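The "fall back to blurrier data" behavior can be sketched as a residency walk up the mip chain. The layout and names here are illustrative, not RAGE's: residency is checked from the desired (finest) mip upward, and the first resident page wins, so a missing page yields a coarser, blurrier sample instead of stalling until the page streams in.

```cpp
#include <cassert>
#include <vector>

// resident[mip][pageY * rowPages + pageX] records which pages are in the
// physical pool; each coarser mip halves the number of pages per row.
// pageX/pageY are given at desiredMip.
int residentMipFor(const std::vector<std::vector<bool>>& resident,
                   int desiredMip, int pageX, int pageY,
                   int pagesPerRowAtMip0) {
    for (int mip = desiredMip; mip < static_cast<int>(resident.size()); ++mip) {
        int shift = mip - desiredMip;   // coarser mips: fewer, larger pages
        int px = pageX >> shift, py = pageY >> shift;
        int rowPages = pagesPerRowAtMip0 >> mip;
        if (resident[mip][py * rowPages + px]) return mip;
    }
    // The coarsest mip is typically kept permanently resident as a last resort.
    return static_cast<int>(resident.size()) - 1;
}
```

Because some answer always exists, the sampler never blocks; quality simply degrades to whatever level of detail is resident.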

When RAGE first shipped, there was no special hardware support for virtual textures. The virtual-to-physical address translation had to be implemented in a fragment program through page-table and mapping textures. The latest AMD graphics hardware, however, supports hardware virtual textures, also known as Partially Resident Textures (PRTs). Instead of using page-table and mapping textures, the hardware can perform the virtual-to-physical translation using the page tables of the underlying virtual memory system. Hardware virtual textures can further improve the quality of virtual textures, but taking advantage of this special hardware comes with its own set of challenges.



J.M.P. van Waveren studied computer science at Delft University of Technology in the Netherlands and is currently the lead technology programmer at id Software. He has been developing technology for computer games for over a decade and has been involved in the research for and development of various triple-A game titles, including Quake III Arena, Return to Castle Wolfenstein, DOOM 3, and RAGE.