Making the move/rotate/scale gizmos work with any component

Our goal with The Machinery has always been to make a system that takes flexibility and extensibility to new levels. This leads to some unique challenges.

At the heart of our system is an entity-component model, where user written plugins can add new components. However, in addition to adding components, we also want the user to be able to extend the system by adding new custom editors to work with these components. For example, we want it to be possible to replace the 3D scene editor with a customized tile-based 2D editor, or do other crazy things that we haven’t even thought of.

This presents a bit of a problem: how can we make editors and components work together when the editors don't know exactly what components there are, and the components don't know exactly what editors there are?

How can editors and components talk with each other?

As an example, let’s look at the implementation of the transform gizmos that are used to move, scale, and rotate objects. We usually think of these as manipulators for the Transform component. I.e., they move, rotate, or scale the entity’s transform (stored in the Transform component), which in turn affects any graphical components owned by the entity. However, we want to make it possible to use these tools to manipulate other components too.

For example, a Spline component might want to use the gizmos to move the control points of the spline. A Wire component for drawing and animating power lines might want to use the gizmos to manipulate the end and middle points of the wire.

Manipulating splines and wires with the transform gizmos.

Note that in these cases, the Move manipulators aren’t moving entities around. They are moving sub-parts of an entity component (such as points on a spline). How can we make the manipulator do this in a way that works with new components that the manipulator doesn’t know anything about?

The Machinery is C-based and, as discussed before, our basic method of abstraction is an interface, which we represent as a struct filled with function pointers. For example, our file I/O interface looks like this:

struct tm_os_file_io_api
{
    tm_file_o (*open_input)(const char *path);
    tm_file_o (*open_output)(const char *path, bool append);
    void (*set_position)(tm_file_o file, uint64_t pos);
    int32_t (*read)(tm_file_o file, void *buffer, uint32_t size);
    bool (*write)(tm_file_o file, const void *buffer, uint32_t size);
    void (*close)(tm_file_o file);
};

Our API registry allows these interfaces to be registered and queried for. This way, an API can be registered in one plugin and queried for and used in a different one.

APIs can have either a single implementation or multiple implementations. The File I/O API only has a single implementation — we only have one way of manipulating files. Components, on the other hand, are an example of an API with multiple implementations. Each component — Transform, Light, Spline, etc. — provides an implementation of the component API. When a plugin is loaded, it registers its components with the API registry, and other parts of The Machinery can query the registry to get a list of all the components:

#define TM_COMPONENT_INTERFACE_NAME "tm_component_i"

reg->add_implementation(TM_COMPONENT_INTERFACE_NAME, tm_transform_component);
reg->add_implementation(TM_COMPONENT_INTERFACE_NAME, tm_location_component);
reg->add_implementation(TM_COMPONENT_INTERFACE_NAME, tm_link_component);

TM_COMPONENT_INTERFACE_NAME is just a unique name to identify the Component API, among all the other APIs in The Machinery. Each API has a unique name and a plugin can extend the system with new APIs by just providing a unique name and a function pointer struct in a header.
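To make the registration/query pattern concrete, here is a minimal toy registry. The real tm_api_registry_api is richer than this (and its exact functions aren't shown in the text), so the names `toy_registry_t`, `add_implementation`, and `num_implementations` below are illustrative stand-ins, not The Machinery's actual API:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

// Toy registry: a single interface name mapped to a list of implementation
// pointers. This only illustrates the "multiple implementations per named
// interface" idea from the article.
#define MAX_IMPLEMENTATIONS 16

typedef struct toy_registry_t
{
    const char *name;
    void *implementations[MAX_IMPLEMENTATIONS];
    uint32_t count;
} toy_registry_t;

static void add_implementation(toy_registry_t *reg, const char *name, void *impl)
{
    // All implementations in this toy share one interface name.
    reg->name = name;
    reg->implementations[reg->count++] = impl;
}

static uint32_t num_implementations(const toy_registry_t *reg, const char *name)
{
    return (reg->name && strcmp(reg->name, name) == 0) ? reg->count : 0;
}
```

A plugin would call `add_implementation()` once per component at load time, and a system such as the editor would later enumerate the registered implementations by name.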

The basic tm_component_i looks like this (we’ll add more stuff to it soon):

typedef struct tm_component_i
{
    uint64_t (*truth_type_name_hash)();
} tm_component_i;

Here truth_type_name_hash() is a function that returns the type name of the component’s data in The Truth. If you recall, The Truth is our data model — it allows arbitrary data to be stored and retrieved. The Truth contains a collection of objects of different types. The types are identified by unique strings, just like our APIs, but for performance we usually use 64-bit hash values of these strings rather than the strings themselves.

These hash values are set up as defines like this:

#define TM_TT_TYPE_HASH__TRANSFORM_COMPONENT \
    TM_STATIC_HASH("tm_transform_component", 0x8c878bd87b046f80ULL)

A short side note about this: TM_STATIC_HASH is a macro that just returns the second value:

#define TM_STATIC_HASH(s, v) v

We have a utility program that automatically searches the source code for TM_STATIC_HASH macros and patches them if the hash value isn’t correct. So when you write the code you can just write TM_STATIC_HASH("tm_transform_component", 0) and then run the program. It will compute the correct hash value and patch the file.
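The article doesn't say which hash function the utility uses, so as a purely illustrative stand-in, here is a 64-bit FNV-1a string hash. The point is only that the utility maps a type name string to a stable 64-bit value and patches that value into the source:

```c
#include <assert.h>
#include <stdint.h>

// Illustrative 64-bit FNV-1a string hash. The actual hash function used by
// The Machinery's patching utility isn't specified in the text; this just
// demonstrates turning a type name into a stable 64-bit constant.
static uint64_t hash_string(const char *s)
{
    uint64_t h = 0xcbf29ce484222325ULL; // FNV-1a 64-bit offset basis
    for (; *s; ++s) {
        h ^= (uint64_t)(unsigned char)*s;
        h *= 0x100000001b3ULL; // FNV-1a 64-bit prime
    }
    return h;
}
```

With the hash computed, the utility can rewrite `TM_STATIC_HASH("tm_transform_component", 0)` in the source to `TM_STATIC_HASH("tm_transform_component", 0x...ULL)`.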

I’ve made the source code of this utility available here. As you can see, we dogfood by writing all our little utilities and tools like this in C, using our standard APIs. This has the added advantage that we have the entire codebase in a single language instead of a hodgepodge mix of C, C++, Perl, Python, Ruby, JavaScript, Lua, Go, Rust, C#, etc.

Anyways, since we know The Truth type of all components, we can go back and forth between a component’s representation in the data model (The Truth) and the API we use to talk to that component.

Let’s see how we can use this to define the interaction between components and the editor.

One thing we could do is just to extend the tm_component_i with the functions that the editor needs to interact with the component. For example, the editor probably needs to know the name of the component so it can display it in the entity tree:

The entity tree shows the name of components.

So we could just add that to the API:

typedef struct tm_component_i
{
    uint64_t (*truth_type_name_hash)();
    const char *(*display_name)();
} tm_component_i;

const char *transform_display_name()
{
    return TM_LOCALIZE("Transform");
}

We could continue like this, and add everything else that our editor needs, but this would create a tightly coupled component interface that is completely tied to how our editor works right now. It doesn’t make it possible for someone else to come up with new editor workflows in a plugin. And it also makes it harder for us to modify the editor workflows in the future.

In addition, the editor isn’t the only system that needs to interact with the components. The renderer needs to interface with renderable components, and there might be other systems in the future that also need their own special component interactions.

To accommodate this, we add another layer of abstraction: instead of putting the editor interactions directly in the component, we put them in a separate interface and make it possible to query the component for whether it supports this interface or not:

typedef struct tm_component_i
{
    uint64_t (*truth_type_name_hash)();
    void *(*get_interface)(uint64_t name_hash);
} tm_component_i;

This works similarly to the API registry. Any part of The Machinery can define a component interface together with a hash value used to query for it. And any component that supports the interface can return it as a response to a get_interface() query:

#define TM_EDITOR_COMPONENT_NAME \
    TM_STATIC_HASH("tm_editor_component_i", 0xcf29a2363dadc28eULL)
    
typedef struct tm_editor_component_i
{
    const char *(*display_name)();
} tm_editor_component_i;

Now if the editor wants to interact with a component, it can call get_interface() with the right constant to get a pointer to the interface and then use that interface to manipulate the object.

tm_editor_component_i *editor = (tm_editor_component_i *)
    c->get_interface(TM_EDITOR_COMPONENT_NAME);
const char *name = editor->display_name();
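On the component side, a get_interface() implementation might look like the sketch below. This is my reading of the protocol, not code from the engine: the Transform component matches the hash and hands back a pointer to a static interface struct, returning NULL for interfaces it doesn't implement (localization via TM_LOCALIZE is skipped here for simplicity):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

// Hash constant from the article.
#define TM_EDITOR_COMPONENT_NAME 0xcf29a2363dadc28eULL

typedef struct tm_editor_component_i
{
    const char *(*display_name)(void);
} tm_editor_component_i;

static const char *transform_display_name(void)
{
    return "Transform"; // real code would use TM_LOCALIZE("Transform")
}

static tm_editor_component_i transform_editor_i = {
    .display_name = transform_display_name,
};

// The component answers get_interface() queries by matching the hash and
// returning a pointer to its interface struct, or NULL if it doesn't
// support the requested interface.
static void *transform_get_interface(uint64_t name_hash)
{
    if (name_hash == TM_EDITOR_COMPONENT_NAME)
        return &transform_editor_i;
    return NULL;
}
```

A NULL return tells the caller the component simply doesn't participate in that kind of interaction, which is how components opt out of editor features they don't need.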

With this approach you can add completely new systems and functionality to the engine and any component can interact with them by implementing and exposing the appropriate interfaces.

With this framework in place we can now go back to the problem of implementing our transform gizmos. Let’s start at the beginning, with the selected object that we want to move. This could be anything — an entity, a particular component in an entity or some feature of that component, such as a point on a spline, a face on a piece of modelled geometry, etc.

If the selection is a component, the next step is easy — we look up that component in the API registry and talk to it. But what if the selection is something else?

In The Machinery, a selected object is represented by an ID that references an object in The Truth. The objects in The Truth form a hierarchy where every object has an owner. This means that if the user has selected some sub feature of a component, such as a node in a spline, we can just walk the owner chain up until we get to that component. Then we can use the component API to handle interactions with it.
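The owner-chain walk can be sketched as follows. The Truth's real object API isn't shown in the text, so this toy models objects as an array where each entry records its owner and type hash; `find_owning_component`, `SPLINE_COMPONENT_TYPE`, and `SPLINE_POINT_TYPE` are all hypothetical names for illustration:

```c
#include <assert.h>
#include <stdint.h>

// Toy model of The Truth's ownership hierarchy: each object stores the
// index of its owner (0 = no owner) and a type hash. Illustrative only.
#define SPLINE_COMPONENT_TYPE 0x1111ULL
#define SPLINE_POINT_TYPE 0x2222ULL

typedef struct toy_object_t
{
    uint64_t owner; // index of the owning object, 0 for none
    uint64_t type;  // hash of the object's Truth type
} toy_object_t;

// Walk the owner chain from a selected sub-object (e.g. a spline point)
// up to the first object of the given component type. Returns 0 if no
// owner along the chain has that type.
static uint64_t find_owning_component(const toy_object_t *objects,
    uint64_t selected, uint64_t component_type)
{
    for (uint64_t o = selected; o; o = objects[o].owner) {
        if (objects[o].type == component_type)
            return o;
    }
    return 0;
}
```

Once the owning component is found, its Truth type hash can be matched against the registered components' truth_type_name_hash() to get the right component API.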

If an entity is selected, we have to choose which of its components we should control the transform of (there may be multiple ones). If the entity has a Transform component, we should probably use that over anything else, since it represents what we think of as the “position” of the entity. But we don’t want to hard code that, rather we want to leave the door open for other “position” concepts, such as 2D positions, or double precision positions to represent astronomical distances. We can achieve this by adding a “gizmo priority” to the editor component interface:

float (*gizmo_priority)();

When an entity is selected, the gizmo will manipulate the component with the highest “gizmo priority”. If the user wants to manipulate another component, she can select that particular component. Components that don’t have transforms to manipulate will have NULL function pointers for the gizmo callback functions.
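The selection logic might look something like this sketch. The names `editor_i` and `pick_gizmo_target` are illustrative; the key behaviors from the text are that components without transforms leave the callback NULL, and the highest priority wins:

```c
#include <assert.h>
#include <stddef.h>

// Minimal stand-in for the editor component interface, reduced to the
// gizmo_priority() callback discussed in the article.
typedef struct editor_i
{
    float (*gizmo_priority)(void);
} editor_i;

static float transform_priority(void) { return 10.0f; }
static float spline_priority(void) { return 5.0f; }

static const editor_i transform_editor = { transform_priority };
static const editor_i spline_editor = { spline_priority };

// Pick the component the gizmo should control: among components that
// expose gizmo_priority(), choose the one with the highest priority.
// Components with a NULL callback (no transform) are skipped.
static const editor_i *pick_gizmo_target(const editor_i **components, int n)
{
    const editor_i *best = NULL;
    float best_priority = 0.0f;
    for (int i = 0; i < n; ++i) {
        if (!components[i] || !components[i]->gizmo_priority)
            continue;
        float p = components[i]->gizmo_priority();
        if (!best || p > best_priority) {
            best = components[i];
            best_priority = p;
        }
    }
    return best;
}
```

Here a Transform component with priority 10 would win over a Spline component with priority 5, matching the “Transform over anything else” default while letting a plugin outbid it.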

Next up is the gizmo interaction itself. We could just give the component move_gizmo(), rotate_gizmo() and scale_gizmo() callbacks and leave it up to the component to draw the gizmos and handle all of the interaction, but I would argue that this is actually *too much* customization. First, it will lead to a lot of code duplication across the different components. Second, it creates the risk of fragmenting the code so that the move tool behaves subtly differently for different components — when we actually want it to always be the same. And third, it makes it impossible for us to improve the move tool in the future without going in and changing all the code in all the plugins (some of which won’t be written by us).

So instead of giving the component full control, we just want it to give us the transform of the selected object, then we will draw the move gizmo, handle the interaction, and in the end, tell the component to update the transform with a new value.

Here’s the interface:

bool (*gizmo_get_transform)(const tm_the_truth_o *tt, tm_entity_context_o *ctx,
    uint64_t object, tm_transform_t *world, tm_transform_t *local);
        
void (*gizmo_set_transform)(tm_the_truth_o *tt, tm_entity_context_o *ctx,
    uint64_t object, const tm_transform_t *local, uint64_t undo_scope);

gizmo_get_transform() gets the transform of the selected object. We pass in references to The Truth and the context where entities live, so the component can use them to obtain the transform. Note that the function returns both the world (global) transform of the object and its local transform (the transform relative to its parent). The reason is that depending on which modifier keys are pressed, the transform gizmos can work in either local or global space.

As the object is being moved, we call gizmo_set_transform() to tell the component to update the transform of the object. undo_scope is a reference to a set of undoable events in The Truth. If it is zero, that means that the user is still actively dragging the object around, so we shouldn’t create any undo event yet. (If we created undo entries while the user was dragging an object around, the undo queue would get spammed and become unusable.) If undo_scope is non-zero it means the user released the mouse button and finished the dragging interaction — so we should create an undo event and add it to the specified scope.
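The drag protocol can be sketched in a heavily simplified form. Here `toy_transform_t` is reduced to a position (the real tm_transform_t also carries rotation and scale), and the Truth/entity-context parameters are dropped; `drag_to` stands in for the gizmo's per-frame update:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

// Simplified stand-in for tm_transform_t: position only.
typedef struct toy_transform_t { float x, y, z; } toy_transform_t;

// Stubbed component state: a single stored transform, plus the last undo
// scope seen so we can observe the drag-vs-commit distinction.
static toy_transform_t stored = { 0 };
static uint64_t last_undo_scope;

static bool toy_get_transform(toy_transform_t *world, toy_transform_t *local)
{
    *world = *local = stored; // no parent in this toy, so world == local
    return true;
}

static void toy_set_transform(const toy_transform_t *local, uint64_t undo_scope)
{
    stored = *local;
    last_undo_scope = undo_scope; // 0 while dragging, non-zero on release
}

// One step of the gizmo interaction: read the current transform, apply the
// drag delta, and write it back. While the mouse is held, undo_scope is 0;
// on release the gizmo passes a real scope so an undo event is created.
static void drag_to(float x, uint64_t undo_scope)
{
    toy_transform_t world, local;
    toy_get_transform(&world, &local);
    local.x = x;
    toy_set_transform(&local, undo_scope);
}
```

The gizmo would call this every frame with undo_scope 0, then once more with the real scope on mouse release, so the component only records a single undo event per drag.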

With this interface we can easily create new components that interact with the move gizmo in natural ways. And we can use the same technique to add other forms of editor interactions, such as components adding their own tools to the editor.