The (Machinery) Network Frontier -- Part 1

One of our goals for 2021 is to let users create multiplayer games with The Machinery. The systems I’m going to describe in this blog post are not production-ready at this point in time (in fact, you can’t publish multiplayer games at the moment), but we felt it was the right time to start talking about how the network layer of The Machinery works under the hood, in the hope that users can start to grasp its internals and begin creating multiplayer experiences as soon as possible.

The post has been divided into three parts to make it more readable:

  • Part 1 will introduce the goals of the systems and explain some basic concepts.
  • Part 2 will deal with higher-level concepts and how the user interacts with the networking layer in the editor.
  • Part 3 will drill down into the low-level implementation details of the network layer, examining the tradeoffs we’ve made and looking at what’s currently missing.

Goals and desired features

  1. Debuggability

    Debugging multiplayer games is intrinsically more difficult as the number of possible “states” your game can be in at any given point doesn’t depend on a single executable but on two, three, potentially more different ones running at the same time. One of the main goals of our network system is to make it easy to debug multiplayer games directly in the editor. We want The Machinery to be able to run many different “types” of network processes (for example a client, a server, and a login-server) at the same time, in the same executable, behaving as close as possible to when those processes are run on separate executables on separate machines. We also want to make it so that you can simulate “real world” network conditions from within the editor itself, making it possible to test corner-case scenarios without having to set up an actual network.

    Another thing we had in mind from day one is the concept of a “network profiler”. Imagine being able to pause the game and inspect what packets were sent from the server to the client 10 frames ago. We want to natively support these kinds of workflows.

  2. Easy to evolve and maintain

    We don’t want to force users to strictly adhere to a specific network protocol like the Server-Client model or other high-level network constructs like component data replication. We want users to be able to create their own network protocol (one of my life goals is to take a stab at a full P2P MMORPG 🙂 ), but we also want to provide a robust protocol out of the box that developers can then customize as they want.

    Being able to easily define new “kinds” of data packets and how they should be handled is another priority for us, as different games have different needs and they will surely need custom packets to function properly. (Think about a fighting game, where you probably want to transmit the entire state of your controller to the other peer every frame and avoid entity replication altogether.)

  3. Minimize friction when moving from single-player to multiplayer games, and vice-versa

    It is generally considered easy to create the single-player version of a multiplayer game: just run a server and a client locally, and everything will work. But this has the downside of not being as “optimized” as a single-player-only game. It would be better to handle any messages between the client and the server internally, instead of sending them over the network interface (even localhost packets come with a cost).

    On the other hand, it is considered pretty hard to create the multiplayer version of a single-player game as an afterthought… and for good reasons! When you implement a single-player game you don’t need to transmit any messages at all as everything runs locally. Moving from single-player to multiplayer will probably always mean a bunch of work, but we want to make it as simple as possible in The Machinery. Fortunately for us, our visual scripting language is event-based, so supporting “remote events” can take us a big part of the way.

  4. Automation “by default”, optimization when needed

    You WILL need to optimize your network protocol before you ship your game, especially if you want to support a large number of players over a small network. But the need for such optimizations to happen eventually shouldn’t stand in the way of experimenting and playing with multiplayer in the prototype phase.

    So our goal is to make both the Entity_Context and Entity_Graphs multiplayer-ready by default, even if the transmission of the data is not optimized. It should be very easy to start prototyping multiplayer games and then, later down the road, make the necessary optimizations.

The goal is not so much to optimize the networking layer of The Machinery ourselves (we will do that too of course, but there are limits to what we can do since we need to be generic and support all kinds of games), but to make it easily optimizable. That way you can prototype your multiplayer game quickly knowing that you’ll be able to optimize it when you need to and make targeted optimizations that are customized for your specific game.
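As an illustration of the “network profiler” idea from goal 1, here is a minimal sketch of a per-frame packet history that could answer a question like “what was sent 10 frames ago?”. All the names and structures here are hypothetical, invented for this sketch, and are not part of The Machinery’s API:

```c
// Hypothetical sketch: a fixed-size ring buffer that records, per frame,
// which packets were sent, so a paused game can inspect recent traffic.
#include <assert.h>
#include <stdint.h>
#include <string.h>

enum { PROFILER_HISTORY_FRAMES = 64, PROFILER_MAX_PACKETS = 16 };

typedef struct recorded_packet_t {
    uint64_t packet_type_hash; // hash identifying the packet type
    uint32_t size;             // payload size in bytes
} recorded_packet_t;

typedef struct frame_record_t {
    uint64_t frame;
    uint32_t num_packets;
    recorded_packet_t packets[PROFILER_MAX_PACKETS];
} frame_record_t;

typedef struct net_profiler_t {
    frame_record_t frames[PROFILER_HISTORY_FRAMES];
    uint64_t current_frame;
} net_profiler_t;

// Call once per simulation tick to claim (and reset) this frame's slot.
static void profiler_begin_frame(net_profiler_t *p, uint64_t frame)
{
    frame_record_t *r = &p->frames[frame % PROFILER_HISTORY_FRAMES];
    memset(r, 0, sizeof(*r));
    r->frame = frame;
    p->current_frame = frame;
}

// Record a sent packet in the current frame's slot.
static void profiler_record(net_profiler_t *p, uint64_t type_hash, uint32_t size)
{
    frame_record_t *r = &p->frames[p->current_frame % PROFILER_HISTORY_FRAMES];
    if (r->num_packets < PROFILER_MAX_PACKETS)
        r->packets[r->num_packets++] = (recorded_packet_t){type_hash, size};
}

// Look up what was sent `frames_ago` frames back (NULL if out of history).
static const frame_record_t *profiler_history(const net_profiler_t *p, uint64_t frames_ago)
{
    if (frames_ago >= PROFILER_HISTORY_FRAMES || frames_ago > p->current_frame)
        return 0;
    return &p->frames[(p->current_frame - frames_ago) % PROFILER_HISTORY_FRAMES];
}
```

The real profiler will of course need to store the packet payloads too, but the ring-buffer-per-frame shape is the essence of being able to rewind the traffic view.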

Basic Concepts

Simulation Instance

In The Machinery, every running simulation has its own Entity Context and is completely separated from all the other simulations. If you have the editor open right now, chances are there are four or five simulation instances running, as any tab that needs to simulate something (such as the Preview Tab, the Scene Tab, and the Simulate Tab) will have its own simulation instance.

Network Node

A Network Node binds a specific simulation instance to an IP and port number. For example, in a Client-Server multiplayer game, there will be one Server Network Node and potentially many different Client Network Nodes running at the same time — each one with its own simulation instance running, potentially on different machines. You can think of the Internet as a giant graph of nodes talking to each other, and we model that abstraction in The Machinery.
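Conceptually, a Network Node is just that binding. A rough sketch in C (the type and field names are invented for illustration and are not The Machinery’s actual types):

```c
// Hypothetical sketch: a network node ties one simulation instance to an
// IP address and port where it can be reached.
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef struct simulation_instance_t {
    uint32_t id; // each simulating tab or process gets its own entity context
} simulation_instance_t;

typedef struct network_address_t {
    char ip[16];   // dotted-quad IPv4 string
    uint16_t port;
} network_address_t;

typedef struct network_node_t {
    simulation_instance_t *instance; // the simulation this node represents
    network_address_t address;       // where other nodes can reach it
} network_node_t;

static network_node_t make_node(simulation_instance_t *inst, const char *ip, uint16_t port)
{
    network_node_t n = {0};
    n.instance = inst;
    strncpy(n.address.ip, ip, sizeof(n.address.ip) - 1);
    n.address.port = port;
    return n;
}
```

Because the binding is per simulation instance rather than per process, a server node and several client nodes can coexist in a single editor executable, which is what makes the in-editor debugging workflow from goal 1 possible.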


Pipe

A Pipe is what allows a specific node n1 to transmit data of any kind to another node n2. Note that the pipe is unidirectional, so if n2 needs to send messages to n1 as well, it will need to open another pipe that goes in the opposite direction.

Given two specific nodes n1 and n2, only one pipe at a time can exist that goes from n1 to n2 (and another from n2 to n1 of course). All the data that needs to be transmitted will go through the same pipe.


Assume n1 wants to send some data to n2. n1 will then request the Network API to open a pipe to n2 via tm_network_api->open_pipe(). The following things will happen:

  1. If a valid Pipe from n1 to n2 already exists, nothing needs to be done.
  2. A 32-bit ever-incrementing integer is used as the Pipe Identifier. (After a pipe is created, the running index is incremented, so no two pipes starting from n1 will ever have the same identifier.)
  3. n1 sends a Pipe Request message to n2 that contains the chosen pipe identifier.
  4. When n2 receives the request, it decides whether or not the pipe request should be accepted.
  5. n2 sends a Pipe Response message to n1 containing the 32-bit pipe identifier, and a bool that tells n1 whether the pipe request was accepted. (If n2 has already accepted the pipe request and is receiving a duplicate request, then true is returned immediately.)
  6. If the request is accepted, n1 now knows that n2 is ready to receive messages, and can start to send any kind of data to n2 via the pipe.
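The six steps above can be sketched as a small, self-contained model. This is an illustration of the handshake logic, not the engine’s real code; it captures the ever-incrementing identifier, the idempotent accept on the remote side, and the reuse of an existing pipe:

```c
// Hypothetical model of the pipe-open handshake between two nodes.
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum { MAX_PIPES = 8 };

typedef struct node_t {
    uint32_t next_pipe_id;                 // running index for outgoing pipes
    uint32_t open_outgoing[MAX_PIPES];     // ids of pipes we opened
    uint32_t num_outgoing;
    uint32_t accepted_incoming[MAX_PIPES]; // ids we already accepted
    uint32_t num_incoming;
} node_t;

// Steps 4-5: the remote node decides on a request; duplicates answer true.
static bool receive_pipe_request(node_t *remote, uint32_t pipe_id)
{
    for (uint32_t i = 0; i < remote->num_incoming; ++i)
        if (remote->accepted_incoming[i] == pipe_id)
            return true; // already accepted: answer true immediately
    if (remote->num_incoming == MAX_PIPES)
        return false; // a policy decision: reject when full
    remote->accepted_incoming[remote->num_incoming++] = pipe_id;
    return true;
}

// Steps 1-3 and 6: returns the pipe id, or 0 if the request was rejected.
// (This toy model only tracks a single remote node per `node_t`.)
static uint32_t open_pipe(node_t *local, node_t *remote)
{
    if (local->num_outgoing > 0)           // step 1: reuse the existing pipe
        return local->open_outgoing[0];
    uint32_t id = ++local->next_pipe_id;   // step 2: ever-incrementing id
    if (!receive_pipe_request(remote, id)) // steps 3-5: request/response
        return 0;
    local->open_outgoing[local->num_outgoing++] = id; // step 6: ready to send
    return id;
}
```

Note how the duplicate check in `receive_pipe_request` makes the handshake safe to retry: if the Pipe Response gets lost, n1 can simply resend the same request.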

Important Note: The way our connection handshake works at the moment is not definitive or production-ready; it’s there only to allow users to start prototyping multiplayer games in the editor. We will improve and add all the necessary features to the connection process in the following releases.

Packet Type

Every piece of data that n1 sends to n2 needs to be identified by a Packet Type. This allows n2 to know how to interpret the data that n1 sent on the pipe, but it also defines things like the retransmission policy and whether ACKs need to be sent back.

A new packet type can be simply defined via the tm_network_api->define_packet() API, so that later on data of that type can be transmitted via tm_network_api->send(). For example:

tm_strhash_t packet_hash = tm_murmur_hash_string("test_packet");
tm_network_api->define_packet(network, packet_hash,
    delivery_options); // hypothetical placeholder: retransmission policy, ACKs, etc.

uint16_t port = 555;
tm_network_node_o* node = tm_network_api->create_node(network, port);
tm_network_pipe_id pipe = tm_network_api->open_pipe(network, node,
  {.ip = "", .port = 666});

while (application_running) {
    char packet_data[64];
    sprintf(packet_data, "hello world");
    tm_network_api->send(network, pipe, packet_hash, packet_data,
        sizeof(packet_data));
}
Note how we didn’t need to specify anything when sending data other than the packet type, a pointer to the data, and the size to be sent.

Guaranteed Delivery

As we are using UDP as the transmission protocol for our packets under the hood, we require the users of the network API to specify what kind of guaranteed delivery they want: maybe they don’t need any guarantee at all (it’s fine if the packets arrive out of order or if they don’t arrive at all), or maybe they want the packets to be received in the exact same order as they were sent. The network API makes sure that the guaranteed delivery requests are fulfilled. (I’ll give you some more details on this in the third part of the blog post).
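To give a feel for what the strongest guarantee involves, here is a simplified sketch of one way ordered delivery can be layered on top of unreliable, UDP-style packets: the sender numbers each packet, and the receiver holds early arrivals in a small window until the missing sequence numbers show up. This is an illustration of the general technique, not The Machinery’s implementation:

```c
// Hypothetical sketch: in-order delivery via sequence numbers and a
// hold-back window on the receiving side.
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum { WINDOW = 32 };

typedef struct ordered_receiver_t {
    uint32_t next_expected; // next sequence number to deliver to the game
    bool held[WINDOW];      // packets that arrived ahead of time
    int payload[WINDOW];    // their payloads (just an int, for the sketch)
} ordered_receiver_t;

// Called for every arriving packet. Returns how many packets became
// deliverable; delivered payloads are appended to `out` in sequence order.
static int on_packet(ordered_receiver_t *r, uint32_t seq, int payload,
                     int *out, int out_cap)
{
    if (seq < r->next_expected || seq >= r->next_expected + WINDOW)
        return 0; // stale duplicate, or too far ahead: drop it
    r->held[seq % WINDOW] = true;
    r->payload[seq % WINDOW] = payload;

    // Release every packet that is now contiguous with the delivered stream.
    int n = 0;
    while (r->held[r->next_expected % WINDOW] && n < out_cap) {
        r->held[r->next_expected % WINDOW] = false;
        out[n++] = r->payload[r->next_expected % WINDOW];
        ++r->next_expected;
    }
    return n;
}
```

A real implementation also needs the sender side (resending packets that were never ACKed), which is where the retransmission policy defined per packet type comes in.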

Network Interface

A Network Interface is a specific piece of code (a callback for all intents and purposes) that is executed when certain things happen in the networking layer, for example:

  • A receiver interface is executed when a packet is received; if the interface returns true, it means that the receiver correctly handled and dispatched the packet.
  • A bootstrapper interface is executed just after the network node is created.
  • An accepter is executed when a pipe request message is received: if the return value is true, the pipe request is accepted.

Network interfaces are basically modular pieces that can be associated with network nodes via Network Node Assets (we will talk about them in the next post). Of course, multiple interfaces of the same type can be associated with a network node.
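A simplified model of how receiver interfaces could be dispatched (all names here are made up for illustration): registered callbacks are tried in order, and the first one that returns true is considered to have handled the packet.

```c
// Hypothetical sketch: a chain of receiver callbacks, tried in order.
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef bool (*receiver_fn)(uint64_t packet_type_hash, const void *data, uint32_t size);

enum { MAX_RECEIVERS = 8 };

typedef struct receiver_list_t {
    receiver_fn receivers[MAX_RECEIVERS];
    uint32_t count;
} receiver_list_t;

static void register_receiver(receiver_list_t *l, receiver_fn f)
{
    if (l->count < MAX_RECEIVERS)
        l->receivers[l->count++] = f;
}

// Returns true if some registered interface handled the packet.
static bool dispatch_packet(const receiver_list_t *l, uint64_t type_hash,
                            const void *data, uint32_t size)
{
    for (uint32_t i = 0; i < l->count; ++i)
        if (l->receivers[i](type_hash, data, size))
            return true;
    return false; // no receiver recognized this packet type
}

// Example receiver that only handles one (made-up) packet type.
static int chat_messages_handled = 0;
static bool chat_receiver(uint64_t type_hash, const void *data, uint32_t size)
{
    (void)data; (void)size;
    if (type_hash != 42) // not ours: let the next receiver try
        return false;
    ++chat_messages_handled;
    return true;
}
```

This is what makes the interfaces modular: a game can add its own receiver for custom packet types without touching the receivers the engine provides.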

Note that network interfaces are registered exactly like any other interface in The Machinery:

static void bootstrap_client(struct tm_network_o *network, void *network_context,
    struct tm_network_node_o *node)
{
    tm_network_pipe_id pipe = tm_network_api->open_pipe(network, node,
        {.ip = "", .port = 666});

    // ...
}

static tm_network_bootstrap_i *bootstrap_client_i = &(tm_network_bootstrap_i){
    .name = "bootstrap_client",
    .bootstrap = bootstrap_client,
};

tm_add_or_remove_implementation(reg, load, tm_network_bootstrap_i, bootstrap_client_i);

In the next part, we’ll take a look at how we combined these basic building blocks to create more high-level constructs like Network Assets, Entity Context and Graph Events Synchronization, and the Network Profiler.

by Leonardo Lucania