A feature soon to be in Cesium is order-independent transparency (OIT) for improved visual quality of transparent geometry. Previously, we used alpha blending for transparency: the color of transparent geometry is blended with the background. When drawing more than one transparent object, the geometry had to be sorted back-to-front so that each object's color blended correctly with the background color. Now we use weighted blended order-independent transparency, a technique for drawing transparent objects without sorting them.

A problem with alpha blending is that it is often impossible to get the sort right, which leads to artifacts. Sorting by comparing the distance from the eye to each object's bounding sphere is incorrect for large intersecting geometry, like sensors and ellipsoids, where the bounding spheres have nearly the same size and location. The sort order can then change with a slight change in the view. We could split the geometries at their intersections for correct sorting, but at a performance cost; in practice we often do the opposite and batch geometry for increased performance. Even if we could always sort correctly, we would be sorting on the CPU, which has more overhead in JavaScript than in C++.
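To see how fragile distance sorting is, here is a small sketch (hypothetical object names and positions, not Cesium code) of back-to-front sorting by distance from the eye to each bounding-sphere center. With two intersecting objects whose spheres nearly coincide, a small move of the eye flips the sort order even though the scene has not changed:

```javascript
// Squared distance between two 3D points.
function distanceSquared(a, b) {
  const dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
  return dx * dx + dy * dy + dz * dz;
}

// Back-to-front: the object whose bounding-sphere center is farthest
// from the eye is drawn first.
function sortBackToFront(eye, objects) {
  return objects.slice().sort(
    (a, b) => distanceSquared(eye, b.center) - distanceSquared(eye, a.center)
  );
}

// Two large intersecting objects with almost identical bounding spheres.
const objects = [
  { name: 'sensor',    center: [0.0, 0.0, 0.0] },
  { name: 'ellipsoid', center: [0.1, 0.0, 0.0] },
];

// A slight change of viewpoint flips which object sorts "behind".
const orderA = sortBackToFront([10, 0, 0], objects).map(o => o.name);
const orderB = sortBackToFront([-10, 0, 0], objects).map(o => o.name);
```

Neither order is correct everywhere on screen for intersecting geometry, which is exactly the artifact OIT avoids.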

We decided to use the algorithm from Weighted Blended Order-Independent Transparency by Morgan McGuire and Louis Bavoil. We chose this method because WebGL has a limited feature set compared to desktop OpenGL. A common way to implement order-independent transparency is per-pixel linked lists, which require atomic image operations not available in WebGL. Weighted blended order-independent transparency only requires support for multiple render targets and floating-point textures. If multiple render targets are not supported, we fall back to the same algorithm with two passes over the translucent objects.
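The core of the technique is a weighted average: each translucent fragment adds its depth-weighted, premultiplied color into an accumulation target, and multiplies its transmittance into a revealage target; a final pass divides out the weights and blends over the background. Since addition and multiplication are commutative, the result does not depend on draw order. The sketch below runs the per-pixel math on the CPU for illustration (in practice this runs in fragment shaders); the weight function is one of the depth-based weights suggested in the paper, and the exact constants vary by scene:

```javascript
// One of the depth-weight functions suggested by McGuire and Bavoil,
// clamped so very near or very far fragments do not dominate.
// z is eye-space depth; alpha is the fragment's opacity.
function weight(z, alpha) {
  const w = 10.0 / (1e-5 + Math.pow(z / 5.0, 2.0) + Math.pow(z / 200.0, 6.0));
  return alpha * Math.min(3e3, Math.max(1e-2, w));
}

// Accumulate translucent fragments into two "render targets":
//   accum     += (r, g, b, 1) * alpha * w   (additive blending)
//   revealage *= (1 - alpha)                (multiplicative blending)
function accumulate(fragments) {
  const accum = [0, 0, 0, 0];
  let revealage = 1.0;
  for (const f of fragments) {
    const w = weight(f.z, f.alpha);
    accum[0] += f.color[0] * f.alpha * w;
    accum[1] += f.color[1] * f.alpha * w;
    accum[2] += f.color[2] * f.alpha * w;
    accum[3] += f.alpha * w;
    revealage *= 1.0 - f.alpha;
  }
  return { accum, revealage };
}

// Full-screen composite: weighted-average color over the opaque background.
function composite(background, { accum, revealage }) {
  const avg = accum.map(c => c / Math.max(accum[3], 1e-5));
  return background.map((bg, i) => avg[i] * (1.0 - revealage) + bg * revealage);
}
```

Because every per-fragment operation is commutative, feeding the fragments in any order produces the same pixel, which is the whole point of the technique.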

For more details, see Morgan McGuire’s blog post.

[Images: alpha blending, alpha blending (different view), and OIT]

We render all opaque objects to a framebuffer and use the same depth texture attachment while rendering transparent objects. We then do a full-screen pass to composite both images. Because we draw to framebuffers without any hardware anti-aliasing support and composite the images manually, there is aliasing, so we apply FXAA as an anti-aliasing post-process.
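The per-pass render state can be summarized as follows. This is a sketch using WebGL blend-factor names as strings (so it runs without a GL context); the pass names and structure are illustrative, not Cesium's actual API, and the revealage setup assumes the shader writes (1 - transmittance) as alpha:

```javascript
// Blend state for the two translucent render targets, as commonly
// configured for weighted blended OIT.
function translucentPassState(target) {
  if (target === 'accumulation') {
    // Additive blending; cleared to (0, 0, 0, 0) each frame.
    return { clearColor: [0, 0, 0, 0], blendFunc: ['ONE', 'ONE'] };
  }
  if (target === 'revealage') {
    // Multiplies in (1 - alpha); cleared to 1 (fully revealed background).
    return { clearColor: [1, 1, 1, 1], blendFunc: ['ZERO', 'ONE_MINUS_SRC_ALPHA'] };
  }
  throw new Error('unknown target: ' + target);
}

// Frame structure: opaque pass first, then the translucent passes
// (sharing the opaque pass's depth texture: depth test on, depth
// writes off), then the full-screen composite, then FXAA.
const framePasses = ['opaque', 'accumulation', 'revealage', 'composite', 'fxaa'];
```

Depth writes stay disabled during the translucent passes so translucent fragments are occluded by opaque geometry but never occlude each other.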

When implementing this algorithm, if your results are saturated like the image on the left, make sure you are using pre-multiplied alpha.
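With pre-multiplied alpha, a color is stored as (r·a, g·a, b·a, a) rather than (r, g, b, a); feeding straight (non-premultiplied) color into blending that expects premultiplied input over-counts bright channels and saturates the result. A minimal sketch of the conversion and the standard "over" operator for premultiplied colors:

```javascript
// Convert a straight-alpha RGBA color to pre-multiplied alpha.
function premultiply([r, g, b, a]) {
  return [r * a, g * a, b * a, a];
}

// "Over" compositing for pre-multiplied colors: out = src + (1 - srcA) * dst.
function over(src, dst) {
  return [
    src[0] + (1 - src[3]) * dst[0],
    src[1] + (1 - src[3]) * dst[1],
    src[2] + (1 - src[3]) * dst[2],
    src[3] + (1 - src[3]) * dst[3],
  ];
}

// Half-transparent red over opaque blue blends to half red, half blue.
const result = over(premultiply([1, 0, 0, 0.5]), premultiply([0, 0, 1, 1]));
```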

Check out the demo.