The Anti-Feature Dream

Today I’m going to talk about a rather abstract vision of where I think the future of tools for game tech should be going. This is a topic that I have been raving to friends and colleagues about for a long time and I’ve used many different names to describe it: “the buffer-buffer dream”, “one graph to rule them all”, and many others. Today I just call it “The Anti-Feature Dream”, mainly because it sounds funny but also because it more accurately describes the vision. The core concept is simple:

What if we stop focusing on building explicit features (such as terrain, vegetation, vertex painting, particles, VFX, and so on) and instead try to break the features down into their core building blocks and expose those building blocks instead?

In the examples below I have deliberately left out lots of implementation details, as this post is not about how to practically implement the systems in question, but about a less monolithic, more modular way of thinking when designing these types of features.

Terrain

Take, for example, a simple height-map-based terrain system. If we break it apart, we get something like:

  • Rendering of a tessellated quad, typically with some form of seamless LOD scheme.

  • A paint tool writing data into a number of masks sampled by the terrain shader for displacement, material information, etc.

  • Some kind of material library specifying the ground materials.
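
To make that a bit more concrete, here is a minimal sketch of what the terrain “feature” could look like once it is just plain data referencing generic building blocks. All types and handle names below are hypothetical, for illustration only.

```c
// A minimal sketch, assuming hypothetical handle types, of the terrain
// expressed as data referencing generic building blocks instead of a
// dedicated Terrain component.
#include <stdint.h>

typedef struct mask_t {
    uint32_t image;          // handle to a paintable image (height, splat, ...)
    uint32_t width, height;
    uint32_t format;         // e.g. R16_UNORM for height, RGBA8 for splat weights
} mask_t;

typedef struct terrain_blocks_t {
    // Building block 1: a tessellated quad draw with a seamless LOD scheme.
    uint32_t quad_draw;      // handle to a generic draw-call description
    uint32_t lod_levels;

    // Building block 2: paintable masks sampled by the terrain shader.
    mask_t height_mask;
    mask_t material_masks[4];

    // Building block 3: a palette of ground materials.
    uint32_t material_palette;
} terrain_blocks_t;
```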

On top of that you typically also have some kind of undergrowth/debris system that procedurally scatters grass, small bushes, rocks, etc. So let’s break that apart too:

  • Rendering of a few unique meshes with tiny variations (scale, tint) but high instance counts.

  • Something that generates the placement of all instances, typically driven by the painted masks coming from the terrain system.

  • Some way to specify a “mesh library” describing the various meshes we are interested in scattering.
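
As a rough illustration (again with made-up names and layout), the data flowing through these blocks could be a compact per-instance record produced by a placement pass that reads the painted terrain masks:

```c
// Hypothetical per-instance data produced by the undergrowth/debris
// placement pass, plus the inputs that pass would read.
#include <stdint.h>

typedef struct scatter_instance_t {
    float position[3];
    float scale;             // tiny per-instance variation
    uint8_t tint[4];         // tiny per-instance color variation
    uint32_t mesh_index;     // index into the shared mesh library
} scatter_instance_t;

typedef struct scatter_placement_inputs_t {
    uint32_t material_masks;   // the painted terrain masks drive density
    uint32_t mesh_library;     // which meshes we are allowed to place
    uint32_t instance_buffer;  // UAV the placement compute pass appends into
    float density_per_m2;
} scatter_placement_inputs_t;
```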

So rather than exposing a “Terrain” component to our Entity-Component-System as an explicit feature that tries to solve all of our terrain visualization needs, we could instead represent it by implementing and exposing the individual building blocks. Then a technical artist could assemble those building blocks into a terrain system. Let’s take a closer look at what that might look like in practice.

First of all, we need some kind of asset that compiles into a palette (or library) of sub-assets (mesh data, materials). Since we rely on running on top of a modern graphics API, we can compile this into a bindless-friendly data structure.
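
As a sketch of what “bindless-friendly” could mean in practice, the compiled palette might be little more than flat arrays of descriptor indices that shaders can index directly. The layout below is an assumption for illustration, not an actual asset format.

```c
// Possible layout for a compiled mesh/material palette on a bindless-capable
// graphics API: flat arrays of descriptor indices, directly indexable from
// shaders. Hypothetical names and fields.
#include <stdint.h>

typedef struct palette_entry_t {
    uint32_t vertex_buffer;   // bindless index of the vertex buffer
    uint32_t index_buffer;    // bindless index of the index buffer
    uint32_t index_count;
    uint32_t material;        // index into the materials array below
} palette_entry_t;

typedef struct material_entry_t {
    uint32_t base_color_texture;  // bindless descriptor indices
    uint32_t normal_texture;
    uint32_t roughness_texture;
    float tiling;
} material_entry_t;

typedef struct palette_t {
    uint32_t num_entries;
    const palette_entry_t *entries;
    uint32_t num_materials;
    const material_entry_t *materials;
} palette_t;
```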

Then we need some way of describing the painting logic. I imagine this being done using our graph-based visual scripting component, sending paint-brush requests to The Truth (our data model). That way we automatically get collaboration and undo/redo support. This graph would also be responsible for building up the terrain tool’s UI.
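
A hedged sketch of the kind of paint-brush request such a graph might produce before committing it to the data model; the struct and function below are illustrative stand-ins, not The Machinery’s actual API.

```c
// Hypothetical shape of a paint-brush request produced by the scripting
// graph. Committing it through the data model (instead of writing pixels
// directly) is what gives us undo/redo and collaboration for free.
#include <stdint.h>

typedef struct brush_request_t {
    uint64_t target_mask;    // data-model ID of the mask asset being painted
    float center[2];         // brush center in mask UV space
    float radius;
    float strength;
    uint32_t channel;        // which mask channel the brush writes to
    uint32_t blend_mode;     // add, subtract, replace, ...
} brush_request_t;

// Illustrative entry point; a real implementation would record this change
// in the data model so it can be undone and synced between users.
void commit_brush_request(const brush_request_t *req);
```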

Last but not least, we need some way to generate draw calls and compute dispatches. My current take is that this, too, can be handled by the visual scripting component, with draw calls, compute dispatches, and declarations of GPU resources all represented as nodes. During graph compilation we would then output one or more Render Graph Modules that get inserted into the main viewport module for scheduling and execution.
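
As an illustration of what the compiled graph could emit, here is one possible (entirely hypothetical) shape for such a module: resource declarations plus draw/compute passes that the main viewport can schedule.

```c
// Sketch of a compiled render graph module: declared GPU resources and a
// list of passes reading/writing them. All types here are made up.
#include <stdint.h>

typedef enum pass_type_t { PASS_DRAW, PASS_COMPUTE } pass_type_t;

typedef struct resource_decl_t {
    const char *name;               // e.g. "undergrowth_instance_buffer"
    uint32_t size_or_extent[3];
    uint32_t format;                // 0 for raw buffers
} resource_decl_t;

typedef struct pass_decl_t {
    pass_type_t type;
    const char *shader;             // shader entry point to run
    const char *reads[8];           // resources consumed by this pass
    const char *writes[8];          // resources produced by this pass
    uint32_t dispatch_or_vertex_count[3];
} pass_decl_t;

typedef struct render_graph_module_t {
    uint32_t num_resources;
    const resource_decl_t *resources;
    uint32_t num_passes;
    const pass_decl_t *passes;      // scheduled & executed by the main viewport
} render_graph_module_t;
```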

The GPU work we’re looking at issuing for this simple terrain system example is fairly trivial. As sketched in code after this list, we need to:

  • Issue the draw call for the tessellated quad.

  • Issue compute dispatches for filling the instance buffer for placement of debris/undergrowth.

  • Issue the draw call(s) for rendering the debris/undergrowth meshes.

  • Issue the draw call for visualizing the paint brush if we are currently editing the terrain.

  • Issue the draw call(s) that execute any newly added brush strokes, writing into the available masks.
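
Written out linearly for clarity, the work above could look roughly like this. Every function is a hypothetical stand-in for a node in the graph; in practice the render graph would resolve the actual execution order from resource dependencies.

```c
// Linearized sketch of the terrain GPU work listed above. All functions are
// made-up stand-ins for graph nodes.
#include <stdbool.h>

void apply_pending_brush_strokes(void);         // writes into the paintable masks
void draw_brush_cursor(void);                   // editor-only brush visualization
void dispatch_fill_undergrowth_instances(void); // compute: fill instance buffer
void draw_tessellated_quad(void);               // heightmap-displaced terrain
void draw_undergrowth_instances(void);          // instanced debris/undergrowth

void terrain_frame(bool editing)
{
    if (editing) {
        apply_pending_brush_strokes();
        draw_brush_cursor();
    }
    dispatch_fill_undergrowth_instances();
    draw_tessellated_quad();
    draw_undergrowth_instances();
}
```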

And that’s it, really: we’ve created our first completely data-driven tool. And while I’ve definitely glossed over some details (like physics collision/materials), it shouldn’t be hard to represent those in a similar way. But the real benefits of this approach to building tools might not show until we break down some more tools, so let’s move on and look at a more general “Mesh Scattering System”.

Mesh Scattering

A Mesh Scattering System is essentially a tool that allows you to easily place lots of mesh instances on top of already placed geometry using a paint tool. It differs from the terrain undergrowth/debris system in that the placement is not procedurally driven by the terrain material masks; instead, each placement is stored in some compact form inside a large instance buffer. This buffer is later consumed by one or many instanced draw calls scattering meshes at the locations read from the buffer. So what do we need:

  • A mesh library asset describing which meshes to place — we’ve already built this for the terrain undergrowth/debris system, so let’s reuse that.

  • Paint logic — again, very similar to the paint logic for the terrain; it’s only the final shader(s) applying the brush strokes that need to change to something a bit more complex (instead of a simple “blit” into a render target, we need to add/remove instance placements in an Unordered Access View, UAV).

  • A way to issue draw calls for rendering the scattered meshes — again, this is something we already have exposed as a building block when implementing the terrain undergrowth/debris system.
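
To give a feel for what that slightly more complex brush shader would do, here is a CPU reference of the “erase” half of it, operating on a big placement buffer (a UAV in the real thing). Layout and names are assumptions.

```c
// CPU reference of the scatter "erase" brush: compact away all placements
// inside the brush sphere. A GPU version would do the same against a UAV.
#include <stdint.h>
#include <math.h>

typedef struct placement_t {
    float position[3];
    float scale;
    uint32_t mesh_index;   // index into the shared mesh library palette
} placement_t;

typedef struct placement_buffer_t {
    placement_t *items;
    uint32_t count, capacity;
} placement_buffer_t;

static void erase_placements(placement_buffer_t *buf, const float center[3], float radius)
{
    uint32_t kept = 0;
    for (uint32_t i = 0; i < buf->count; ++i) {
        const float dx = buf->items[i].position[0] - center[0];
        const float dy = buf->items[i].position[1] - center[1];
        const float dz = buf->items[i].position[2] - center[2];
        if (sqrtf(dx * dx + dy * dy + dz * dz) > radius)
            buf->items[kept++] = buf->items[i];   // keep placements outside the brush
    }
    buf->count = kept;
}
```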

That’s enough to get a very brute-force mesh scattering system up and running. Typically, though, we probably want some form of view frustum culling. And most likely we also want to maintain a mapping between the underlying mesh we scattered on top of and the scattered meshes, so that the scattered meshes can follow if the underlying/parent mesh moves.

So we would need a thing that allows us to attach arbitrary amounts of auxiliary data, located inside a big buffer, to a parent transform. On top of that, we need some other thing that sorts the auxiliary data based on locality, so that we can build some kind of grid-based culling volumes to use for view frustum culling.

The first thing needs to be an engine system, as we need to get notified when the transform of an entity referenced inside the buffer changes. For now, let’s just call it an EntityGroupAuxiliaryBuffer.
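
A hedged sketch of what such a system could track: ranges of one big GPU buffer attached to parent entities, with dirty flags set when a parent transform changes. The layout is entirely hypothetical.

```c
// Hypothetical bookkeeping for the EntityGroupAuxiliaryBuffer: slices of a
// big GPU buffer, each owned by (and following) a parent entity.
#include <stdint.h>

typedef struct aux_range_t {
    uint64_t parent_entity;   // entity whose transform the data follows
    uint32_t first_element;   // start of this entity's slice in the buffer
    uint32_t num_elements;
    uint32_t dirty;           // set when the parent transform has changed
} aux_range_t;

typedef struct entity_group_aux_buffer_t {
    uint32_t gpu_buffer;      // handle to the big buffer holding all slices
    uint32_t element_stride;  // e.g. sizeof(placement) or a per-vertex color
    uint32_t num_ranges;
    aux_range_t *ranges;
} entity_group_aux_buffer_t;

// Engine-side notification: mark the affected range so a later compute pass
// can re-transform its elements into the parent's new space.
void on_parent_transform_changed(entity_group_aux_buffer_t *b, uint64_t entity);
```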

The other thing, responsible for grouping on locality and building culling volumes, can be seen as a simple data transformation that can happen through one (or multiple) compute shader dispatches. In its simplest form it can be a single compute shader that consumes the buffer(s) produced by the EntityGroupAuxiliaryBuffer, does a simple bucket sort placing each instance into one of a number of predefined culling volumes, and writes the result to a new buffer.
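
Here is a CPU reference of that bucket sort, binning instances into a fixed grid of cells over the XZ plane; each cell later acts as a frustum-culling volume. A GPU version would do the same with atomics. Grid size and layout are arbitrary choices for illustration.

```c
// CPU reference of the single-dispatch bucket sort described above.
#include <stdint.h>
#include <math.h>

enum { GRID_DIM = 16 };  // GRID_DIM x GRID_DIM culling cells over the XZ plane

static uint32_t cell_index(const float pos[3], float world_min, float cell_size)
{
    int cx = (int)floorf((pos[0] - world_min) / cell_size);
    int cz = (int)floorf((pos[2] - world_min) / cell_size);
    if (cx < 0) cx = 0; else if (cx >= GRID_DIM) cx = GRID_DIM - 1;
    if (cz < 0) cz = 0; else if (cz >= GRID_DIM) cz = GRID_DIM - 1;
    return (uint32_t)(cz * GRID_DIM + cx);
}

// Assign each instance (tightly packed xyz positions) to a cell and count how
// many land in each; a prefix sum over `counts` plus a scatter pass would
// then produce the per-cell instance lists used for culling.
static void bucket_instances(const float *positions, uint32_t num_instances,
                             float world_min, float cell_size,
                             uint32_t *cell_of_instance,
                             uint32_t counts[GRID_DIM * GRID_DIM])
{
    for (uint32_t i = 0; i < num_instances; ++i) {
        const uint32_t cell = cell_index(&positions[i * 3], world_min, cell_size);
        cell_of_instance[i] = cell;
        ++counts[cell];
    }
}
```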

Vertex Painting

As a last example, let’s take a look at a vertex paint tool. It is exactly what it sounds like: a tool that allows you to paint new vertex data on top of already placed mesh instances in the scene. So what do we need:

  • Some place to store the vertex colors associated with the mesh we are painting on. Sounds like a job for our EntityGroupAuxiliaryBuffer.

  • Paint logic — again, we can most likely use something very similar to the paint logic for the terrain or mesh scattering. A brush stroke shader could execute as a regular vertex shader, writing the result to a UAV owned by the EntityGroupAuxiliaryBuffer.

In its simplest form, that’s all we need for a vertex paint tool.
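
For a feel of what the brush stroke does, here is a CPU reference that blends a brush color into per-vertex colors stored in that entity’s EntityGroupAuxiliaryBuffer slice; on the GPU this would run as the vertex-shader-like dispatch writing to a UAV. Names and the [0,1] strength range are assumptions.

```c
// CPU reference of the vertex-paint brush stroke. `strength` is assumed to
// be in [0,1]; colors are RGBA8, positions are tightly packed xyz floats.
#include <stdint.h>
#include <math.h>

static void paint_vertex_colors(const float *positions, uint8_t *colors,
                                uint32_t num_vertices,
                                const float brush_center[3], float brush_radius,
                                const uint8_t brush_rgba[4], float strength)
{
    for (uint32_t i = 0; i < num_vertices; ++i) {
        const float dx = positions[i * 3 + 0] - brush_center[0];
        const float dy = positions[i * 3 + 1] - brush_center[1];
        const float dz = positions[i * 3 + 2] - brush_center[2];
        const float d = sqrtf(dx * dx + dy * dy + dz * dz);
        if (d > brush_radius)
            continue;
        // Soft falloff towards the edge of the brush.
        const float w = strength * (1.0f - d / brush_radius);
        for (uint32_t c = 0; c < 4; ++c) {
            const float src = (float)colors[i * 4 + c];
            colors[i * 4 + c] = (uint8_t)(src + ((float)brush_rgba[c] - src) * w);
        }
    }
}
```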

Conclusions

My goal with this post has been to challenge programmers to think differently when designing tools.

The described way of thinking about tools design shares lots of similarities with data-oriented programming, in the sense that it puts data layouts and data transformations front and center. At the end of the day, almost all tools we see in game engine editors are responsible for generating new data for the game world by transforming existing data from one form to another.

I think there is a lot to gain from exposing these types of modular building blocks and piecing them together inside the editor to build up the final tool, rather than building the final tool as a monolithic, predefined package.

Sure, this might all feel a bit abstract and vague at first, but it’s important to remember that our intention is not to force every single user of the editor to build their own tools from scratch. We still intend to ship predefined tools that our users recognize from other engines, but implemented in a way that encourages experimentation, making it easy to change them, improve upon them, and build new tools without the need for programmer involvement.

We’ll see how far we can push this vision, and whether it ends up requiring too many compromises when it comes to usability and accessibility of the tools. My gut feeling says it’s probably not going to be easy to achieve in practice, but if done right it would unlock the feeling that “anything is possible”, which in many ways is our goal with The Machinery.

A Final Note

Throughout this post I’ve talked a lot about meshes, but if you think about it, what is a “mesh” really composed of? It’s just a name describing something that issues one or many draw calls reading its input data from a bunch of buffers and images.

There’s nothing saying that these buffers and images have to be static. They could just as well be procedurally created, populated and updated. While I definitely think it makes sense to expose a traditional “mesh” component in the editor interface, I see very little reason for defining that as a static core concept on the engine side.

by Tobias Persson