Modern Planet Rendering

Rendering planets has always been a fascinating corner of graphics programming, partly because it offers an impressive sense of scale to the player, and partly because it pushes developers to find new ways to work around the hardware limitations that come with objects exceeding the range of single-precision floating point. From games such as Spore, Kerbal Space Program, or the more recent No Man’s Sky and Star Citizen, we have learned that, although it won’t guarantee a good game, planet rendering offers many technical and gameplay-related advantages, some more obvious than others.

Advantages of Planet Rendering:

  1. Natural Horizon Culling:

    The shape of a planet makes it convenient to hide tiles that are below the horizon line, without having to resort to artificial distance-based fog.
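As a rough illustration, the test can be reduced to comparing the straight-line camera-to-tile distance against the sum of the two horizon distances on a sphere (a sketch under a spherical-planet assumption; names are mine, not the actual code):

```cpp
#include <cmath>

// Distance from a viewpoint at `height` above a sphere of `radius`
// to its horizon: sqrt(h * (2R + h)).
double horizonDistance(double radius, double height) {
    return std::sqrt(height * (2.0 * radius + height));
}

// Conservative horizon-culling test: a point is guaranteed hidden when the
// straight-line distance between camera and point exceeds the sum of their
// respective horizon distances.
bool belowHorizon(double radius, double camHeight, double pointHeight,
                  double distance) {
    return distance > horizonDistance(radius, camHeight)
                    + horizonDistance(radius, pointHeight);
}
```

With an Earth-sized radius and a camera 1 km up, the horizon sits roughly 113 km away, so any sea-level tile beyond that distance can be skipped outright.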

  2. More Accurate Lighting:

    This one relates more to the atmosphere than to the planet itself, but having access to precomputed transmittance/irradiance/inscatter textures during the lighting stage is very useful for an accurate lighting model, at least for the first bounce.

  3. Large Procedural Scene:

    Using procedural data as the base for a scene frees up a lot of storage space that can be used for more important data. I think it’s fair to say that purely procedural content can get old very fast, but it’s possible to combine procedural data with lower-resolution residual data in order to achieve a scene made by an artist (or generated with other external programs such as World Machine or L3DT) at a very low storage cost.

Framework:

I started this project mainly to build a framework alongside it, the reason being that planet rendering exposes a lot of edge cases that would not be encountered otherwise. The framework is written in C++ and uses both LuaJIT and luabind (deboostified) for scripting. The core idea is to create all the generic components an engine needs on the C++ side and then expose them to Lua, so that all the game logic can be separated into a compressed archive, in the same vein as Love2D. It’s basically a single Lua context on steroids.

Scripting:

The first thing the framework calls is an entry script located at the root of the project; this script then creates ScriptObjects that can optionally be tagged as dynamic. A dynamic ScriptObject is updated by the framework and lets you perform update/rendering calls. For this project, the only part implemented in the framework is the terrain class, since it’s generic and can be reused in other contexts; everything else is implemented in Lua.

Atmosphere/Planet/Ocean Rendering:

After searching around for a while, I found that the most elegant open-source approach is by far Proland. It offers many advantages, such as a Deformation class that can be applied to a specific terrain; this keeps all the planet-rendering code backward compatible with a regular planar terrain approach.

Deferred Rendering:

One of the big challenges in making this approach ready for modern games was switching the rendering path from forward to deferred, which requires a couple of small tricks to achieve acceptable results. The first step is to render depth, color, and normals into a g-buffer and process the lighting in a post-process pass. In order to properly apply the atmosphere lighting, I had to rebuild the world-space position from the depth buffer. Since the actual camera position exceeds floating-point precision, I rebuild the position using a view matrix that is not translated and add the translation afterwards.

The main difficulty I encountered was that, now that positions were rebuilt from the depth buffer, I had major precision artifacts in the far view. To fix this I split the view frustum into two parts: the near view uses deferred rendering and the far view uses forward rendering. This way the objects inside the near view have the lighting applied to them without having to modify their original shaders. I expected a major seam artifact at the split distance, but surprisingly it’s not even visible, meaning that the information rebuilt in the deferred stage is accurate.
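A minimal sketch of the reconstruction idea, assuming a linear eye-space depth and a unit view ray derived from the untranslated view matrix (types and names here are hypothetical, not the framework’s API):

```cpp
// Rebuild a world-space position from a depth sample without ever storing
// planet-scale coordinates in float. The view ray comes from a view matrix
// with its translation zeroed out, so the float math stays near the origin;
// the huge camera position is added back in double precision at the end.
struct Vec3d { double x, y, z; };
struct Vec3f { float x, y, z; };

Vec3d reconstructWorldPos(const Vec3d& cameraPos, // double-precision camera position
                          const Vec3f& viewRay,   // unit ray, rotation-only view matrix
                          float linearDepth)      // eye-space distance from the depth buffer
{
    // Camera-relative offset computed in float: small magnitudes, no precision loss.
    Vec3f rel{viewRay.x * linearDepth,
              viewRay.y * linearDepth,
              viewRay.z * linearDepth};
    // Translation applied last, in double.
    return {cameraPos.x + rel.x, cameraPos.y + rel.y, cameraPos.z + rel.z};
}
```

The same ordering (offset first, translation last) is what keeps the deferred pass consistent with geometry rendered through the untranslated view matrix.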


Cascaded Shadow Mapping:

At first I tried to implement VSM but unfortunately, the performance I got was actually worse than regular PCF. So in the end I went with deferred cascaded PCF shadow mapping. I also finally got the shadows to stop flickering when the camera moves, by rounding the shadow projection matrix in texel space; a very small but effective tweak:

-- Snap the shadow projection origin to the shadow map's texel grid so the
-- shadows don't shimmer as the camera moves.
local rounded_origin = math.round(origin)
local round_offset = (rounded_origin - origin) * (2.0 / self.resolution_)
-- Only the x/y components need snapping; leave depth and w untouched.
round_offset.z = 0.0
round_offset.w = 0.0
sun_proj:set(3, sun_proj:get(3) + round_offset)


Ambient Occlusion:

I implemented HBAO; combined with a bilateral blur, it gives a very decent ambient-occlusion approximation.
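For illustration, the depth-aware weight typically used in such a bilateral blur might look like this (parameter names are illustrative, not taken from the actual shader):

```cpp
#include <cmath>

// Bilateral-blur weight: a spatial Gaussian falloff multiplied by a depth
// "range" term that collapses to zero across depth discontinuities, so the
// AO blur does not bleed across silhouette edges.
float bilateralWeight(float pixelDist, float centerDepth, float sampleDepth,
                      float sigmaSpatial, float sigmaDepth)
{
    float spatial = std::exp(-(pixelDist * pixelDist)
                             / (2.0f * sigmaSpatial * sigmaSpatial));
    float dz = sampleDepth - centerDepth;
    float range = std::exp(-(dz * dz) / (2.0f * sigmaDepth * sigmaDepth));
    return spatial * range;
}
```

A sample at the same depth as the center keeps its full spatial weight, while one across a large depth gap contributes essentially nothing.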


Pseudo Lens Flare:

John Chapman’s framework code is another base I used, mainly for the camera code and the HDR post-processing. This effect, coupled with motion blur, really gives you the feeling of seeing the world through a camera lens, which is actually a good thing in a context where you are seeing the world through a screen. The lens flare is generated and then applied at the very end of post-processing.

Subpixel Morphological Antialiasing:

Not much to say here: you input a render texture and get an antialiased output.

Conclusion:

If you are interested in going further with planet rendering, I strongly recommend taking a look at the Proland publications page, the Outerra blog, and Making Worlds by Steven Wittens. I’ve included the scripting-side code; keep in mind that this code is in no way release-ready nor properly commented, just work in progress.

OpenGL 4.0 Planet Rendering using Tessellation

Description:

This project is based on the article Making Worlds by Steven Wittens; I recommend reading it first if you’re interested in planet rendering. The two main differences in this approach are that it’s data-based rather than procedural, and that it uses tessellation instead of a quadtree for LODing.

This image shows how you can use a normalized cube to make a sphere.
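The mapping the image describes can be sketched in a few lines: take a point on a cube face and normalize it onto the unit sphere (a simplified single-face version for illustration, not the project’s actual code):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Map a point on the unit cube's top face (u, v in [-1, 1]) onto the unit
// sphere by normalizing it; the other five faces work the same way.
Vec3 cubeFaceToSphere(double u, double v)
{
    Vec3 p{u, 1.0, v};  // point on the +Y face of the cube
    double len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    return {p.x / len, p.y / len, p.z / len};  // projected onto the sphere
}
```

The face center maps to the sphere’s pole unchanged, while the corners get pulled in the most, which is why cubemap texel density varies across a face.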

For the tessellation, I simply adapted Florian Boesch’s OpenGL 4 Tessellation demo to work with a cubic heightmap. It worked pretty much out of the box and allowed the culling to be performed on the GPU. The heightmap is generated in World Machine using a 2:1 width-to-height ratio; I then open it in Photoshop to apply some panoramic transformations. The whole process is described here. At first I was using HDRShop 1.0.3, just like in the tutorial, but was only able to output the result as an RGB32 image. This loss of precision resulted in some rather nasty stepping artifacts when mapped onto a terrain. I then moved to Flexify 2 and was able to output a proper 16-bit greyscale image.

This image shows the final result before being split into tiles.

Known Bugs:

  • Visible seams at the edges of the tiles; I’ll definitely fix this at some point.
  • Patches can be culled out when the camera is too near and perpendicular to the terrain; this is related to the view frustum shape.

Tools Used:

Libs Used:

Source:

Builds:

Media:


Streaming Large World Using KdTree

Implementation:

This implementation uses the Composite/Visitor pattern (based on OpenSceneGraph). In order to get decent performance on a large scene, I had to figure out a way to retrieve the nodes closest to the player without doing an entire scene traversal. I found a solution using a kd-tree; it goes like this:

  1. When loading the scene, add the positions of the nodes that sit higher in the scene hierarchy to a kd-tree index.
  2. When the scene is loaded, build the kd-tree index.
  3. Before traversing the scene, perform a nearest-neighbor search based on the player position and build a temporary node holding the result.
  4. Perform the culling stage (CullVisitor, for those familiar with OSG) on that result.
  5. Finally, render the scene.
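The steps above could be sketched like this, with a compact 2D kd-tree and a radius query standing in for the nearest-neighbor search (all types and names are illustrative, not the actual implementation):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Node positions are indexed at load time; each frame, a radius search
// around the player yields only the nodes worth handing to the cull stage.
struct Node2D { double x, z; int id; };

struct KdTree {
    std::vector<Node2D> pts;  // stored in median (kd-tree) order

    void build(std::vector<Node2D> input) {
        pts = std::move(input);
        buildRange(0, pts.size(), 0);
    }

    // Ids of all nodes within `radius` of (px, pz): the "temporary node"
    // that gets fed to the culling visitor.
    std::vector<int> radiusSearch(double px, double pz, double radius) const {
        std::vector<int> out;
        search(0, pts.size(), 0, px, pz, radius * radius, out);
        return out;
    }

private:
    void buildRange(std::size_t lo, std::size_t hi, int axis) {
        if (hi - lo <= 1) return;
        std::size_t mid = (lo + hi) / 2;
        // Partition around the median on the current axis.
        std::nth_element(pts.begin() + lo, pts.begin() + mid, pts.begin() + hi,
            [axis](const Node2D& a, const Node2D& b) {
                return axis == 0 ? a.x < b.x : a.z < b.z;
            });
        buildRange(lo, mid, 1 - axis);
        buildRange(mid + 1, hi, 1 - axis);
    }

    void search(std::size_t lo, std::size_t hi, int axis, double px, double pz,
                double r2, std::vector<int>& out) const {
        if (lo >= hi) return;
        std::size_t mid = (lo + hi) / 2;
        const Node2D& n = pts[mid];
        double dx = n.x - px, dz = n.z - pz;
        if (dx * dx + dz * dz <= r2) out.push_back(n.id);
        double delta = axis == 0 ? px - n.x : pz - n.z;
        // Always descend the near side; the far side only when the splitting
        // plane is closer than the search radius.
        if (delta < 0.0) {
            search(lo, mid, 1 - axis, px, pz, r2, out);
            if (delta * delta <= r2) search(mid + 1, hi, 1 - axis, px, pz, r2, out);
        } else {
            search(mid + 1, hi, 1 - axis, px, pz, r2, out);
            if (delta * delta <= r2) search(lo, mid, 1 - axis, px, pz, r2, out);
        }
    }
};
```

The pruning on `delta * delta <= r2` is what avoids the full traversal: whole subtrees are skipped once the splitting plane lies beyond the search radius.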

I also split the scene into areas, each holding its own kd-tree. This way it’s possible to do a quick distance check before doing the actual nearest-neighbor search on an area.

Result:

When building a scene you can specify the number of areas on the x/z axes and a resolution. To test things out, I made an 8×8 scene with a resolution of 4096; each area holds 8000 “parent” nodes that each hold 4 child nodes, giving us a total of 4,096,000 nodes. The scene runs at an average of ~550 FPS with the actual rendering calls disabled. Right now there’s no frustum culling and I’m rendering the cubes using deprecated OpenGL, so I get around ~260 FPS with rendering; this should improve later on. The approach is mostly CPU-bound, and searching with too large a radius can drastically decrease performance, so I’m thinking of holding the nodes that can be seen from far away (a terrain, for instance) in a different group in order to keep a far view. I’m also currently using alpha blending as a fog to avoid nodes popping onto the screen. Here’s the result:

Media:

OpenGL 4.0 Minecraft Like Rendering

The demo implements a few new techniques, but there’s still a lot more to do. Right now there’s:

  • Voxel Volume Generation
  • Atmospheric Light Scattering / Irradiance Map

And still to implement:

  • Motion Blur / Adaptive HDR
  • Cascaded Shadow Mapping
  • Maybe some AO

Builds:

Media:
