Hi, my name is Raphael, and I’m an intern at Our Machinery.
I’m a Linux user and passionate about game development, computer graphics, and low-level programming. In my opinion, developing a game engine is one of the most challenging undertakings in software, as it involves several areas of computer science, mathematics, and physics, all under real-time constraints. So I am very happy with the opportunity to work on this team and with everything I have learned so far. My main task was initially to help with porting The Machinery to Linux, and in this post I will talk about that process, how I incrementally added the needed building blocks, and the main issues I faced along the way. This will not be a fully technical text, but I will touch on some technical details.
I think it’s easier to work when you can interact with the software, so my first goal was to get something showing on the screen and implement a very basic version of tm_os_window_api. There are two options for a window system on Linux: X11 and Wayland. For The Machinery, I decided to go with the former because it is the most widely used and has better support for Nvidia cards. Note that this is not an “X11 is better than Wayland” decision, and we plan to test with Wayland in the future.
To see something on screen we only need a basic window and the ability to pass its surface to the render backend, so that was what I did first. Basically, I implemented the “Hello World” of the Basic Graphics Programming With The XCB Library tutorial. The next step was to make it possible to interact with the application. The X server uses an event-driven approach, so the application needs to register the events it is interested in so that it can receive them in the event loop and feed them to a tm_input_source_i interface. It is worth mentioning that since there is no event for the relative movement of the mouse, it was necessary to emulate it by keeping track of the last mouse position.
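The relative-movement emulation mentioned above boils down to remembering the previous absolute cursor position and computing deltas from it. Here is a minimal sketch of that idea in plain C; the type and function names are illustrative, not The Machinery’s actual internals:

```c
#include <stdbool.h>

// Tracks the last absolute mouse position seen, so that absolute
// XCB_MOTION_NOTIFY coordinates can be turned into relative deltas.
typedef struct mouse_tracker_t {
    int last_x, last_y;
    bool has_last;
} mouse_tracker_t;

// Feed an absolute cursor position and get back the delta since the
// previous one. The very first event yields a zero delta, because there
// is nothing to compare against yet.
static void mouse_track(mouse_tracker_t *t, int x, int y, int *dx, int *dy)
{
    if (t->has_last) {
        *dx = x - t->last_x;
        *dy = y - t->last_y;
    } else {
        *dx = 0;
        *dy = 0;
        t->has_last = true;
    }
    t->last_x = x;
    t->last_y = y;
}
```

In the real event loop this would be called once per motion event, with the resulting deltas forwarded through the input interface.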
Linux users have many options for GUI environments, from basic window managers to fully-featured desktop environments. It’s important to note that window managers and desktop environments are free to decide what to do with windows. For example, an application may request a specific size for a window, but this doesn’t mean that will be the real size of the created window. With so many possibilities, the challenge is to create a graphical application that behaves uniformly across all these environments.
To achieve predictable behavior, we adhere to the ICCCM and EWMH specifications, which provide conventions for the way applications interact with each other and with the window manager. However, it is not always clear how best to implement them, whether because of ambiguity or vaguely defined concepts.
One of the features that I found most challenging to implement was the clipboard, as there is no central clipboard concept in Linux, and we must implement it using the selection mechanism specified in the ICCCM. There are three main selections defined: PRIMARY, which is normally used to paste selected text with the middle mouse button, SECONDARY, and CLIPBOARD, used for the conventional copy and paste with Ctrl-C/Ctrl-V. An application that wants to copy something must acquire ownership of the selection and keep the copied data until another application asks for it or takes over ownership of the selection. That is, it is the responsibility of each application to keep the data available, which is why we normally cannot paste copied text after closing the application containing that text. Drag-and-drop support was achieved by following the XDND specification, which works using the selection mechanism in a way very similar to the clipboard functionality.
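To make the owner side of this protocol concrete, here is a sketch of the decision a CLIPBOARD owner makes when another application sends it a SelectionRequest: for the TARGETS target it advertises the formats it can convert to, for UTF8_STRING it hands over the stored text, and anything else is refused (by setting the reply property to None, per the ICCCM). The atom values are stand-ins; real code would obtain them with `xcb_intern_atom`:

```c
#include <stdint.h>
#include <stddef.h>

// Hypothetical atom ids standing in for atoms interned at startup.
enum { ATOM_TARGETS = 1, ATOM_UTF8_STRING = 2, ATOM_TIMESTAMP = 3 };

typedef enum reply_kind_t {
    REPLY_TARGETS, // answer with the list of supported targets
    REPLY_TEXT,    // answer with the stored clipboard text
    REPLY_REFUSE   // answer with property = None (conversion refused)
} reply_kind_t;

// Decide how to answer a SelectionRequest for the given target when we
// own the CLIPBOARD selection and hold `stored_text` (NULL if nothing
// was copied). A full implementation would also handle TIMESTAMP and
// incremental (INCR) transfers for large data.
static reply_kind_t clipboard_convert(uint32_t target, const char *stored_text)
{
    if (target == ATOM_TARGETS)
        return REPLY_TARGETS;
    if (target == ATOM_UTF8_STRING && stored_text)
        return REPLY_TEXT;
    return REPLY_REFUSE;
}
```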
Usually, the borders and title bars of windows are the responsibility of the window manager, and, guess what, in the X system the title bar is a window too. However, in The Machinery the user can choose to create windows with or without decorations, and there is no standardized way of doing this on the X server. The solution I found was to use _MOTIF_WM_HINTS, which provides a way to ask the window manager to create windows without decorations. Although there is not much documentation on this, apparently a good number of window managers honor it. The downside is that we may end up with an application with a double title bar if the request is not respected. Another issue is that with a custom title bar we lose the ability to drag the window around. To get around this problem I used the _NET_WM_MOVERESIZE message as described in the EWMH specification and, if the window manager does not support this message, I fall back to the xcb_configure_window function. The interesting thing about implementing this feature was that, initially, the window did not move correctly, due to a wrong assumption about the window hierarchy. Normally, the reported coordinates are relative to the parent window, and the parent is not the same for a window without a title bar. To correct this flaw I always translate the coordinates to the root window coordinate system.
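The _MOTIF_WM_HINTS property itself is just five 32-bit values: flags, functions, decorations, input_mode, and status. Asking for an undecorated window means setting the decorations flag and a decorations value of zero, and then attaching the payload to the window with `xcb_change_property` on the _MOTIF_WM_HINTS atom. A minimal sketch of building that payload:

```c
#include <stdint.h>

// Flag bit indicating that the `decorations` field of the hints is valid.
#define MWM_HINTS_DECORATIONS (1u << 1)

// Layout of the _MOTIF_WM_HINTS property: five CARD32 values.
typedef struct motif_wm_hints_t {
    uint32_t flags;
    uint32_t functions;
    uint32_t decorations;
    uint32_t input_mode;
    uint32_t status;
} motif_wm_hints_t;

// Build the hints payload that requests a window with no decorations
// (no border, no title bar). Real code would then send it with
// xcb_change_property() using the interned _MOTIF_WM_HINTS atom as both
// the property and the type, with format 32 and 5 elements.
static motif_wm_hints_t undecorated_hints(void)
{
    motif_wm_hints_t h = {0};
    h.flags = MWM_HINTS_DECORATIONS; // only the decorations field is valid
    h.decorations = 0;               // 0 = undecorated
    return h;
}
```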
The operating system abstraction layer in The Machinery is provided by tm_os_api in the foundation library. Although it may seem out of order to speak of this layer only now, most of the necessary functionality was already implemented when I started working on the port. For example, the thread system uses POSIX Threads and fiber support was achieved using the libcoro library. With the window system in place, it became easier to program and test the missing parts of the OS API.
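As an illustration of what the POSIX Threads part of that layer looks like, here is a minimal sketch of launching a worker with a userdata pointer and waiting for it to finish. The names are illustrative, not tm_os_api’s actual signatures:

```c
#include <pthread.h>

// Illustrative job payload: input in, result out.
typedef struct thread_job_t {
    int input;
    int result;
} thread_job_t;

// pthread entry point: unpack the userdata pointer and do the work.
static void *job_entry(void *userdata)
{
    thread_job_t *job = userdata;
    job->result = job->input * 2; // stand-in for real work
    return NULL;
}

// Create a thread for the job and block until it completes.
// Returns 0 on success, -1 if the thread could not be created.
static int run_job_on_thread(thread_job_t *job)
{
    pthread_t thread;
    if (pthread_create(&thread, NULL, job_entry, job) != 0)
        return -1;
    pthread_join(thread, NULL);
    return 0;
}
```

The real OS layer keeps threads alive and hands out handles rather than joining immediately, but create/join with a userdata pointer is the underlying mechanism.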
The most complicated task in this step was the implementation of the file system watcher, which uses the Inotify API, a simple and efficient way to monitor this kind of event on Linux. A great resource to learn more about its use is the Monitor Linux file system events with inotify tutorial. To support native dialogs, the excellent tiny file dialogs library was used, which has all the necessary features and was easy to integrate.
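The core of such a watcher is small: watch a directory, then read packed `inotify_event` records off the file descriptor. The Linux-only sketch below creates a file inside the watched directory and checks that the corresponding IN_CREATE event comes back; the path handling is just for the example:

```c
#include <sys/inotify.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

// Watch `dir` for file creation, create `filename` inside it, and return
// 1 if the matching IN_CREATE event was read back, 0 otherwise.
static int saw_create_event(const char *dir, const char *filename)
{
    int fd = inotify_init1(IN_NONBLOCK);
    if (fd < 0) return 0;
    int wd = inotify_add_watch(fd, dir, IN_CREATE | IN_MODIFY | IN_DELETE);
    if (wd < 0) { close(fd); return 0; }

    // Creating the file makes the kernel queue an IN_CREATE event.
    char path[512];
    snprintf(path, sizeof(path), "%s/%s", dir, filename);
    FILE *f = fopen(path, "w");
    if (f) fclose(f);

    // Events arrive as packed inotify_event structs followed by the name.
    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
    ssize_t len = read(fd, buf, sizeof(buf));
    int found = 0;
    for (ssize_t i = 0; i < len;) {
        const struct inotify_event *ev = (const struct inotify_event *)&buf[i];
        if ((ev->mask & IN_CREATE) && ev->len > 0 &&
            strcmp(ev->name, filename) == 0)
            found = 1;
        i += (ssize_t)(sizeof(struct inotify_event) + ev->len);
    }
    unlink(path);
    close(fd);
    return found;
}
```

A real watcher keeps the descriptor open and polls it from the main loop instead of doing a one-shot read.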
The implementation of the audio system has been described in a previous post. Since all mixing is done in software, we only need to implement the tm_sound_backend_api. For that, it was necessary to understand the options available for audio programming on Linux. Among those available we have ALSA, JACK, and PulseAudio.
ALSA is the sound system used by the Linux kernel, but it also provides a user-space API. PulseAudio and JACK are sound servers: applications that want to use audio connect themselves to the server, which is responsible for mixing the sound. The PulseAudio server is present on most desktops and is easily configurable, but it is not intended for professional use. By contrast, JACK provides low latency and targets professional users, but its configuration is more complex.
In The Machinery we chose to use the ALSA API, for more control and fewer dependencies. At first sight this has the limitation that one application using audio would block audio from other applications. However, sound servers also behave as audio devices to which ALSA can connect, so multiple applications can operate at the same time. The use of the API is very straightforward, but apparently, when using the device provided by PulseAudio, we do not get correct information about the number of channels: in the tests we carried out we could request any number of channels, even if the actual device doesn’t support that many. So for the moment we chose to support only two output audio channels until a more appropriate solution is found.
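Since all mixing happens in software, what the backend ultimately hands to ALSA (via `snd_pcm_writei` with an interleaved 16-bit stereo format such as SND_PCM_FORMAT_S16_LE) is just interleaved frames. A sketch of the conversion step, with illustrative function names: clamp each mixed float sample to [-1, 1], convert to 16-bit, and interleave the two channels:

```c
#include <stdint.h>
#include <stddef.h>

// Clamp a float sample to [-1, 1] and scale it to signed 16-bit.
static int16_t float_to_s16(float s)
{
    if (s > 1.0f) s = 1.0f;
    if (s < -1.0f) s = -1.0f;
    return (int16_t)(s * 32767.0f);
}

// Interleave separate left/right float buffers into the L,R,L,R frame
// layout that an interleaved two-channel PCM device expects. `out` must
// hold 2 * frames samples.
static void interleave_stereo(const float *left, const float *right,
                              int16_t *out, size_t frames)
{
    for (size_t i = 0; i < frames; ++i) {
        out[2 * i + 0] = float_to_s16(left[i]);
        out[2 * i + 1] = float_to_s16(right[i]);
    }
}
```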
Working on porting The Machinery to Linux was a very rewarding experience. There is still a lot of work ahead (joystick support, as well as support for multiple mice and keyboards, is still missing), but at least we have a usable Linux version of the engine, which will help us improve it through testing on more than one platform. That’s all for now, so if you have any questions, or if you want to know how some specific system was implemented on Linux, please ping me at [email protected].