Modern Planet Rendering : Editor, Physics and Clouds

It’s been a few weeks since I last posted about this project, so I thought I’d write a quick update to share some of the progress and difficulties I’ve had so far. One of the major challenges that quickly became apparent was how to edit a scene on a planet with such a large radius. When you think about it, you could scatter millions of objects randomly around a planet and still have a hard time locating even one; the scale is just that big. The usual solution is to place objects procedurally as you generate/move around the planet, but I did not want to rely on procedural content for actual scene editing; that should really be done by an artist. Another restriction was that, in order to avoid precision artifacts, I need to draw the smaller objects using a camera at the center of the world (x, y, z), and then copy this camera and offset it to (x, y + r, z) to draw the planet, kind of like a local coordinate system based on an offset vec3. The final restriction was that you simply can’t iterate through millions of objects in a standard container such as a std::vector.
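
To see why the offset camera matters, here’s a minimal Python sketch of what happens to coordinates at planetary scale once they’re squeezed through 32-bit floats, the way GPU vertex positions are (the radius value is just an Earth-like example):

```python
import struct

def to_f32(x):
    # Round-trip a Python double through a 32-bit float,
    # as a GPU would store a vertex position.
    return struct.unpack('f', struct.pack('f', x))[0]

planet_radius = 6371000.0  # illustrative Earth-like radius, in meters

# Two points 1 cm apart on the surface, expressed in world space:
a = planet_radius + 0.00
b = planet_radius + 0.01
print(to_f32(b) - to_f32(a))  # -> 0.0, the 1 cm difference is lost entirely

# The same 1 cm expressed relative to a nearby camera origin:
print(to_f32(0.01) - to_f32(0.0))  # -> ~0.01, precision preserved
```

Rendering everything relative to a nearby origin keeps the numbers small, which is exactly what the offset camera trick buys.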

Sector Paging System:

The first part of the solution was to use the concept of a sector to represent the root of a scene at an arbitrary offset point on the planet; this way, all the camera offsetting is abstracted away inside this object. For the second part, I had already done some experiments with using a kd-tree to handle large scenes in the past and actually got some very decent results; the drawback was that doing a radius search based on the camera position every frame was brutal on the CPU. To fix this, I can now register a callback that fires whenever a new tile is created during the culling traversal of the planet. When the tile level is equal to (max_level – 2), I use the bounding box of this new tile to get its center and perform a radius search around that point, which reduces the overhead to almost zero, even with millions of sectors indexed.

self.faces_[i] = root.Terrain(radius, max_level, i - 1)
root.connect(self.faces_[i].on_page_in, function(tile)
    if tile.level == max_level - 2 then
        -- Use the deformed tile center as the search point.
        local deformed_center = tile.owner:get_face_matrix() *
            tile.owner:get_deformation():local_to_deformed(tile:get_center())
        self.active_sectors_ = {}
        local matches = self.sectors_point_cloud_:radius_search(
            deformed_center * self.sectors_point_cloud_scale_, 0.01)
        for i = 1, #matches do
            self.active_sectors_[i] = self.loaded_sectors_[matches[i]]
        end
    end
end)


To implement the editor, I simply added a new script that instantiates an editor object when allocated. When starting the framework, you can choose the scripting entry point: by default it searches for the file client.lua, but if you pass a script path in the arguments, it starts with that script instead. It’s then possible to load/reload/unload the client script from the editor to quickly drop in game and test things out. I used GWEN for the GUI and wrote a custom renderer based on a SpriteBatch; I also used fontstash for the text rendering. One of the neat features GWEN offers is the ability to redraw the GUI only when it’s modified, which allows you to draw it into a texture using a framebuffer, drastically improving performance. Since we now have two entry points, some scripts are shared between the sub-applications; those now live in the shared folder.
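
The redraw-only-when-modified idea boils down to a dirty flag in front of an expensive render: draw the GUI into a texture once, then blit the cached texture until something invalidates it. A minimal sketch of the pattern (the class and method names are illustrative, not GWEN’s actual API):

```python
class CachedGuiLayer:
    # Redraw the (expensive) GUI only when something changed, and blit
    # the cached result otherwise -- the pattern behind rendering GWEN
    # into a framebuffer texture.
    def __init__(self, render_fn):
        self.render_fn = render_fn  # expensive: draws the whole GUI
        self.dirty = True
        self.cache = None

    def invalidate(self):
        # Called whenever a widget is modified.
        self.dirty = True

    def draw(self):
        if self.dirty:
            self.cache = self.render_fn()  # render into the texture
            self.dirty = False
        return self.cache  # cheap: just blit the cached texture
```

Most frames only pay for the blit, which is why the technique is such an easy win.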



For now the physics are done on the client side since the network is not implemented yet, but it’ll eventually be cloned into a server sub-application in order to have a fully authoritative server. I’m using Bullet 2.85 built with the BT_USE_DOUBLE_PRECISION preprocessor define. To create a tile collider, I again use the new tile callback described above, but this time I check for tiles at the max level; I then read back the height values from the GPU and use the planet’s deformation class to create a spherical btTriangleMesh. For the player I’m using a regular btKinematicCharacterController, but I’m planning on using a spring system to handle slopes better.
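
The collider construction boils down to mapping the tile’s height grid through the spherical deformation and emitting two triangles per cell. Here’s an illustrative Python sketch of that mapping for a single cube face; the real code feeds the triangles to btTriangleMesh::addTriangle, and all the names and parameters here are made up:

```python
import math

def deformed_vertex(face_u, face_v, height, radius):
    # Map a point on the unit cube face (z = 1) onto the sphere,
    # then push it out by the sampled terrain height.
    x, y, z = face_u, face_v, 1.0
    inv_len = 1.0 / math.sqrt(x * x + y * y + z * z)
    scale = (radius + height) * inv_len
    return (x * scale, y * scale, z * scale)

def tile_to_triangles(heights, n, u0, v0, size, radius):
    # heights: an (n+1) x (n+1) grid read back from the GPU for one tile
    # covering the face-space square [u0, u0+size] x [v0, v0+size].
    verts = [[deformed_vertex(u0 + size * i / n, v0 + size * j / n,
                              heights[j][i], radius)
              for i in range(n + 1)] for j in range(n + 1)]
    tris = []
    for j in range(n):
        for i in range(n):
            # Two triangles per grid cell, just like the collider mesh.
            a, b = verts[j][i], verts[j][i + 1]
            c, d = verts[j + 1][i], verts[j + 1][i + 1]
            tris.append((a, b, c))
            tris.append((b, d, c))
    return tris
```

With a flat height grid, every vertex lands exactly at the planet radius, which is a handy sanity check for the deformation.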


For the clouds, I ported kode80’s volumetric clouds to GLSL and wrote the planet_clouds.h script object to implement it. The clouds are drawn during the post-processing stage and are applied during the sky pass so that they can be properly blended with the atmosphere. I also blend the cloud coverage shadow with the sun shadow for even more realism. It’s a very heavy shader, so I had to downsample and reduce the number of iterations from 128 to 80 in order to keep a stable >120 FPS on my 970; it can look much better if you have the right GPU.



There’s a lot I did not cover in this post. I spent a lot of time trying to get SSLR working, but in the end I decided to drop it entirely, because, let’s be honest, the fact that you can only reflect what’s visible on the screen introduces way too many artifacts, and it does not outweigh the visual gain. I really liked the idea of cone tracing to reduce the artifacts, but while it looks awesome in box-shaped areas, it did not work very well with distance. I know that games such as the latest Doom use SSLR combined with cubemaps (or maybe environment probes? not sure) as a fallback, but I think it only really works in more enclosed spaces where it’s possible to use techniques such as parallax-corrected cubemaps. The next available option is to render the scene flipped on a plane, but the way I render a planet makes this really hard to achieve. I also did some work to allow dynamic hot-swapping of scripts and shaders; this will be very useful because the next step is a proper BRDF.

Modern Planet Rendering

The art of rendering planets has always been a fascinating aspect of graphics programming, mostly because it offers an impressive sense of scale to the player, but also because it pushes developers to find new ways to work around the hardware limitations that come with objects exceeding the limits of floating-point precision. From games such as Spore, Kerbal Space Program, or the more recent No Man’s Sky and Star Citizen, we have learned that, although it won’t guarantee a good game, planet rendering offers many technical and gameplay-related advantages, some more obvious than others.

Advantages of Planet Rendering:

  1. Natural Horizon Culling:

    The shape of a planet makes it convenient to hide tiles that are below the horizon line, without having to resort to artificial distance based fog.

  2. More Accurate Lighting:

    This one is more related to the atmosphere than the planet itself, but having access to precomputed transmittance/irradiance/inscatter textures during the lighting stage is very useful to get an accurate lighting model, at least for the first bounce.

  3. Large Procedural Scene:

    Using procedural data as the base for a scene frees up a lot of storage space that can be used for more important data. I think it’s fair to say that purely procedural content can get old very fast, but it’s possible to combine procedural data with lower-resolution residual data in order to achieve a scene made by an artist (or generated with external programs such as World Machine or L3DT) at a very low cost.
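
To put a number on the first point: an eye at altitude h above a sphere of radius r can see at most sqrt((r + h)² − r²) meters, so the horizon test is cheap. A small sketch of the resulting cull test (the function names are mine, not the engine’s):

```python
import math

def horizon_distance(r, h):
    # Straight-line distance from an eye at altitude h to the horizon
    # of a sphere of radius r.
    return math.sqrt((r + h) ** 2 - r ** 2)

def tile_potentially_visible(r, cam_alt, tile_alt, dist_cam_to_tile):
    # A tile can only be seen if the camera-to-tile distance is shorter
    # than the eye's horizon distance plus that of the tile's highest
    # point; everything beyond can be culled outright.
    return dist_cam_to_tile <= (horizon_distance(r, cam_alt) +
                                horizon_distance(r, tile_alt))
```

At a 2 km altitude over an Earth-sized planet, the horizon sits roughly 160 km away, so everything farther than that (plus the tile’s own horizon reach) disappears for free.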


I started this project mainly to build a framework alongside it, the reason being that planet rendering exposes a lot of edge cases that would not be encountered otherwise. The framework is made in C++ and uses both LuaJIT and luabind (deboostified) for scripting. The core idea is to create all the generic components needed in an engine on the C++ side and then expose them to Lua, so that all the game logic can be separated into a compressed archive, in the same vein as Love2D. It’s basically a single Lua context on steroids.


The first thing called by the framework is an entry script located at the root of the project; this script then creates ScriptObjects that can optionally be tagged as dynamic. A dynamic ScriptObject is updated by the framework and allows you to perform update/rendering calls. For this project, the only aspect implemented in the framework is the terrain class, since it’s generic and can be used in other contexts; everything else is implemented in Lua.

Atmosphere/Planet/Ocean Rendering:

After searching around for a while, I found that the most elegant open-source approach is by far Proland. It offers many advantages, such as a Deformation class that can be applied to a specific terrain; this allows all the code related to planet rendering to remain backward compatible with a regular planar terrain approach.

Deferred Rendering:

One of the big challenges in making this approach ready for modern games was to change the rendering from forward to deferred, which requires a couple of small tricks to achieve acceptable results. The first step is to render the depth, color and normals into a g-buffer, and process the lighting in a post-process pass. In order to properly apply the atmosphere lighting, I had to rebuild the world-space position from the depth buffer. Since the actual camera position exceeds floating-point precision, I rebuild the position using a view matrix that is not translated and add the translation afterwards. The main difficulty I encountered was that, now that positions were rebuilt from the depth buffer, I had major precision artifacts in the far view. To fix this, I split the view frustum into two parts: the near view uses deferred rendering and the far view uses forward rendering. This way, the objects in the near view have lighting applied to them without having to modify their original shaders. I expected a major seam artifact at the split distance, but surprisingly it’s not even visible, meaning that the information rebuilt in the deferred stage is accurate.
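
The reconstruction trick can be summarized as: rotate the view-space position back using the untranslated view matrix, then add the large camera offset afterwards in double precision. A small Python sketch of just that step (in the real renderer the view-space position comes from the depth buffer and inverse projection; the camera values here are made up):

```python
import math

def mat3_mul_vec(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def transpose(m):
    return [[m[c][r] for c in range(3)] for r in range(3)]

# Hypothetical camera: a simple yaw rotation, but positioned millions
# of meters from the origin, as on a planet surface.
angle = 0.7
rot = [[math.cos(angle), 0.0, -math.sin(angle)],
       [0.0, 1.0, 0.0],
       [math.sin(angle), 0.0, math.cos(angle)]]
cam_pos = (6371000.0, 123456.0, -789012.0)

def reconstruct_world(view_pos):
    # The view matrix used for reconstruction carries no translation,
    # so view_pos only holds small, precision-friendly numbers; the
    # large camera offset is added afterwards in double precision.
    local = mat3_mul_vec(transpose(rot), view_pos)
    return tuple(l + c for l, c in zip(local, cam_pos))
```

Because the rotation only ever touches small camera-relative values, the reconstruction round-trips cleanly even though the absolute position is huge.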


Cascaded Shadow Mapping:

At first I tried to implement VSM, but unfortunately the performance I got was actually worse than regular PCF, so in the end I went with deferred cascaded PCF shadow mapping. I also finally got the shadows to stop flickering when the camera moves, by rounding the shadow projection matrix in texel space; a very small but effective tweak.

-- Round the shadow projection's origin to the nearest texel so the
-- cascade no longer shimmers as the camera moves.
local rounded_origin = math.round(origin)
local round_offset = (rounded_origin - origin) * (2.0 / self.resolution_)
round_offset.z = 0.0
round_offset.w = 0.0
sun_proj:set(3, sun_proj:get(3) + round_offset)

Ambient Occlusion:

I implemented HBAO; combined with a bilateral blur, it gives a very decent ambient occlusion approximation.



ScaleBias -> GaussianBlur -> ScaleBias -> Done.

Pseudo Lens Flare:

John Chapman’s framework code is another base I used, mainly for the camera code and the HDR post-processing. This effect, coupled with motion blur, really gives you the feeling that you are seeing the world through a camera lens, which is actually a good thing in a context where you are seeing the world through a screen. The effect is generated and then applied at the very end of post-processing.

Subpixel Morphological Antialiasing:

Not much to say here: you input a render texture and get an antialiased output.


If you are further interested in planet rendering, I would strongly recommend taking a look at the Proland publications page, the Outerra blog, and Making Worlds by Steven Wittens. I included the scripting-side code; keep in mind that this code is in no way release-ready nor properly commented, just work in progress. Also, here’s a video of some artifacts I got.


Here’s a video showing another camera type used to view the planet.

OpenGL 4.0 Planet Rendering using Tessellation


This project is based on the article Making Worlds by Steven Wittens; I recommend reading it first if you’re interested in planet rendering. The two main differences with this approach are that it’s not procedural but rather data-based, and that it uses tessellation instead of a quadtree for LODing.

This image shows how you can use a normalized cube to make a sphere.
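
The mapping in the image is nothing more than normalizing each vertex of a subdivided cube, which can be sketched as:

```python
import math

def cube_to_sphere(p, radius=1.0):
    # Project a point on the unit cube onto the enclosing sphere by
    # normalizing it; applied per-vertex, this turns the six faces of
    # a subdivided cube into a seamless sphere.
    x, y, z = p
    inv_len = 1.0 / math.sqrt(x * x + y * y + z * z)
    return (x * inv_len * radius, y * inv_len * radius, z * inv_len * radius)
```

Every output point lands exactly at the requested radius, no matter where on the cube it started.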

For the tessellation, I simply adapted Florian Boesch’s OpenGL 4 Tessellation to work with a cubic heightmap. It worked pretty much out of the box and allowed the culling to be performed on the GPU. The heightmap is generated in World Machine using a 2:1 width-to-height ratio; I then open it in Photoshop to apply some panoramic transformations. The whole process is described here. At first I was using HDRShop 1.0.3, just like in the tutorial, but was only able to output the result as an RGB32 image. This loss of precision resulted in some rather nasty stepping artifacts when mapped onto a terrain. I then moved to Flexify 2 and was able to output a proper 16-bit greyscale image.
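
The stepping artifacts come straight from quantization: the smallest height difference a heightmap can encode is the elevation range divided by the number of representable levels. A quick back-of-the-envelope check (the 9,000 m range is illustrative):

```python
def height_step(range_m, bits):
    # Smallest height difference a heightmap quantized to `bits` bits
    # per sample can represent over a given elevation range.
    return range_m / ((1 << bits) - 1)

print(height_step(9000.0, 8))   # -> ~35 m per step: visible terracing
print(height_step(9000.0, 16))  # -> ~0.14 m per step: smooth slopes
```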

This image shows the final result before being split into tiles.

Known Bugs:

  • Visible seams at the edges of the tiles; I’ll definitely fix this at some point.
  • Patches can be culled out when the camera is too near and perpendicular to the terrain; this is related to the view frustum shape.






Streaming Large World Using KdTree


This implementation uses the Composite/Visitor pattern (based on OpenSceneGraph). In order to get decent performance on a large scene, I had to figure out a way to retrieve the nodes closest to the player without doing an entire scene traversal. I found a solution using a kd-tree; it goes like this:

  1. When loading the scene, add the position of the nodes that are higher in the scene hierarchy in a kdtree index.
  2. When the scene is loaded, build the kdtree index.
  3. Before traversing the scene, perform a nearest neighbor search based on the player position and build a temporary node holding the result.
  4. Perform the culling stage (CullingVisitor for those familiar with OSG) on the result.
  5. Finally render the scene.

I also split the scene into areas, each holding its own kd-tree. This way it’s possible to do a quick distance check before doing the actual nearest neighbor search on an area.
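
The two-level lookup can be sketched like this; a brute-force scan stands in for the per-area kd-tree, and all the names are illustrative:

```python
import math

class Area:
    # One grid cell of the world; `points` stands in for the per-area
    # kd-tree index of parent-node positions (x/z pairs).
    def __init__(self, center, half_size, points):
        self.center = center
        self.half_size = half_size
        self.points = points

    def may_intersect(self, pos, radius):
        # Cheap reject: compare against the circle enclosing the area.
        dx = pos[0] - self.center[0]
        dz = pos[1] - self.center[1]
        reach = radius + self.half_size * math.sqrt(2.0)
        return dx * dx + dz * dz <= reach * reach

def nearby_nodes(areas, pos, radius):
    # Only areas that survive the coarse check pay for the fine search.
    result = []
    r2 = radius * radius
    for area in areas:
        if not area.may_intersect(pos, radius):
            continue
        for p in area.points:  # a kd-tree radius query in the real thing
            if (p[0] - pos[0]) ** 2 + (p[1] - pos[1]) ** 2 <= r2:
                result.append(p)
    return result
```

With most areas rejected by the first test, the per-frame cost stays proportional to what’s actually near the player rather than to the whole scene.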


When building a scene, you can specify the number of areas on the x/z axes and a resolution. To test things out, I made an 8×8 scene with a resolution of 4096; each area holds 8000 “parent” nodes that each hold 4 child nodes, giving a total of 4096000 nodes. The scene runs at an average of ~550 FPS with the actual rendering calls disabled. Right now there’s no frustum culling and I’m rendering the cubes using deprecated OpenGL, so I’m getting around ~260 FPS with rendering (it should improve later on). This approach is mostly CPU-bound, and searching with too large a radius can drastically decrease performance, so I’m thinking of holding the nodes that can be seen from far away (a terrain, for instance) in a different group in order to keep a far view. Also, I’m currently using alpha blending as a fog to avoid nodes popping onto the screen. Here’s the result:


OpenGL 4.0 Minecraft Like Rendering

The demo implements a few new techniques, but there’s still a lot more to do. Right now there’s:

  • Voxel Volume Generation
  • Atmospheric Light Scattering / Irradiance Map

And still to implement:

  • Motion Blur / Adaptive HDR
  • Cascaded Shadow Mapping
  • Maybe some AO


