Something I’ve been wanting to do for a long time is to create a one-button setup and build system. I.e., on a fresh machine, on any supported platform, it should be possible to type a single command that will first fetch, install, and set up everything that is needed to build, and then perform the build and print the result.
Why? Mainly to make the source code more friendly to new users. Complicated, or even worse, incomplete build instructions are not a nice way to get started with a project. A user faced with something like:
- Install CMAKE
- Set these environment variables
- Make sure you have this exact version of Ruby installed and in your `$PATH`
- Download these libraries and put them in these directories
- Install doxygen
- Add these git-hooks to your `.git` repository
- …
may just give up, unless they are already deeply invested in the project.
Building something, running it and playing with it is the first step to understanding how it works. And we want to encourage curious people to experiment with our stuff.
Short and simple build instructions are good, but having a single button that you can just “press to build” is even better.
Programs that do things are always better than instructions that tell you how to do things. Having clang-format format your code is better than having a document with source formatting guidelines. Similarly, a tool that builds your code is better than a document with instructions on how to build your code. Documents can be vague and ambiguous, whereas executable code is always precise. Documents easily get out of date, whereas if the executable gets out of date and no longer does the right thing, it is easily discovered. Executables can be put into pipelines, enabling further automation, and they free people from rote memorization of rules and guidelines, so that their minds can be used solely for programming, the way they were intended to.
In the past, there was always something that prevented me from achieving full automation — some legacy code that was just too complicated to rewrite. But that’s the nice thing about starting from scratch with Our Machinery. We get a chance to revisit old hang-ups and do them right from the beginning.
Picking a language
The first question is what language to use for our one-button build script.
In the past I’ve used Ruby for similar scripting tasks, but Ruby doesn’t come pre-installed on all platforms that we support; if you have to install Ruby before you can run the build script, then it isn’t really a one-button solution. In addition, Ruby easily runs into version problems and gem troubles, and I’m just not as fond of the language as I used to be.
We could use the built-in scripting environments (`bash` on OS X, `.bat` files on Windows), but that would mean having to maintain multiple scripts and also, those languages are kind of horrible to program in.
Something I’ve discovered when working on large projects before is that you really need to watch out for language proliferation, or before you know it your code will be a mix of C++, Lua, C#, JavaScript, CMAKE, Ruby, Python, Go and Rust. This creates all kinds of inefficiencies. It raises the barrier of entry for anyone wanting to understand the source code. The same routines will often have to be implemented multiple times in different languages and data has to be marshalled back and forth between them. So even though some time might be saved initially by using a scripting language instead of C for some administrative task, in the long run we believe it will be a net loss.
At Our Machinery we have taken the drastic decision to use C and (simple) C++ for everything. (As you know from my last blog post, we like drastic decisions.) So tools, helpers, tests and experiments will all be written in C, using our `foundation` APIs, which provide platform-independent utilities for memory allocation, networking, file system access, etc.
The advantage of this is that to understand the code, you only need to know one language and everything can be debugged together in the same debugger. In addition, it is a good litmus test for our `foundation` libraries. If we can’t easily write our various little tools and snippets on top of these libraries, then something is clearly wrong. Eating your own dog food is the best way of making sure that it tastes good.
Following this rule, our build script is just a simple one-file C program built on top of the `foundation` libraries. It compiles to a platform-specific executable which we call `tmbuild`. We distribute these executables together with the source code, and to build the source you just need to run `tmbuild` in the source code directory. Currently, it has the following options for customizing the build:
```
Usage: tmbuild [OPTION]...

  -h, --help                Display this help and exit.
  -t, --build-tool <tool>   Specifies the tool to use for building
                            [gmake, xcode4, vs2017]. If not specified,
                            the platform default is used.
  -c, --config <config>     Specifies the configuration to build
                            [Debug, Release]. Default is Debug.
```
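For example, an optimized build with Visual Studio 2017 could be kicked off like this (the flag spellings are taken from the help text above):

```
tmbuild --build-tool vs2017 --config Release
```

Running plain `tmbuild` uses the platform’s default build tool and the Debug configuration.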
We expect to add more options in the future as we support more kinds of builds.
Operation of tmbuild
To build the project, `tmbuild` goes through these steps (a rough sketch of the flow follows the list):
- Install dependencies
- Install build tool (Xcode, Visual Studio, …)
- Run premake
- Build
- Run unit tests
- Generate documentation
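Here is a minimal, self-contained sketch of what that top-level flow might look like in C. The step function is a stand-in, not the actual `tmbuild` code (which is built on the `foundation` libraries); the sketch only illustrates the fail-fast control flow over the steps listed above.

```c
// Sketch of a tmbuild-style driver. The steps are the ones listed above;
// run_step() is a placeholder for the real work.
#include <stdbool.h>
#include <stdio.h>

static bool run_step(const char *name)
{
    printf("-- %s\n", name);
    // The real implementation would install dependencies, invoke the
    // build tool, run tests, etc., and return false on failure.
    return true;
}

int main(void)
{
    const char *steps[] = {
        "Install dependencies", "Install build tool", "Run premake",
        "Build", "Run unit tests", "Generate documentation",
    };
    const int n = sizeof(steps) / sizeof(steps[0]);
    for (int i = 0; i < n; ++i) {
        if (!run_step(steps[i])) {
            fprintf(stderr, "tmbuild: failed during: %s\n", steps[i]);
            return 1;
        }
    }
    return 0;
}
```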
The dependencies are all the third-party libraries and binaries that we need in order to build the project in the current configuration. For example, `premake5` is a dependency, because we use it to generate solution files for the build tools.
The dependencies are specified in a JSON file called `packages.json`, which looks like this:
```json
{
    "premake-osx": {
        "build-platforms": ["osx"],
        "lib": "premake-5.0.0-alpha11-macosx",
        "role": "premake5"
    },
    ...
}
```
The build tool has the concept of a `build-platform` (the platform we use when we run the build tool) and a `target-platform` (the platform that we are targeting the build for). Only the dependencies that are needed for the current build and target platforms are installed.
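As a sketch of how that filtering might work (not the actual `tmbuild` code, and with a hypothetical Windows entry added for contrast), the idea is simply to skip any package whose build platforms do not include the platform we are running the build on:

```c
// Hypothetical illustration of platform filtering for packages.json
// entries. Real entries can list several build platforms; this sketch
// simplifies that to one string per package.
#include <stdio.h>
#include <string.h>

typedef struct package_t {
    const char *name;            // key in packages.json
    const char *build_platform;  // simplified from the "build-platforms" array
    const char *lib;             // unique library identifier (also the zip name)
} package_t;

int main(void)
{
    const char *build_platform = "osx";  // detected at runtime in practice
    const package_t packages[] = {
        {"premake-osx", "osx", "premake-5.0.0-alpha11-macosx"},
        {"premake-win", "windows", "premake-5.0.0-alpha11-windows"},  // hypothetical entry
    };
    const int n = sizeof(packages) / sizeof(packages[0]);
    for (int i = 0; i < n; ++i) {
        if (strcmp(packages[i].build_platform, build_platform) != 0)
            continue;  // not needed on this build platform -- skip it
        printf("install %s.zip\n", packages[i].lib);
    }
    return 0;
}
```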
Each version of each dependency is identified by a unique string, such as `premake-5.0.0-alpha11-macosx`. To install the dependency we simply download `premake-5.0.0-alpha11-macosx.zip` from the web server where we host the libraries and unzip it into the library directory. The library directory is `./lib` by default, but can be configured with the `TM_LIB_DIR` environment variable.
We use our own HTTP API from the `foundation` for downloading the files. Since we don’t have zip file support in our `foundation` (yet), we rely on platform-specific tools for unzipping. On OS X we use the pre-installed `unzip` program. On Windows we use 7-Zip. Since 7-Zip is not installed by default, we host an uncompressed version of the 7-Zip standalone executable `7za.exe` on the library server. This way, we can bootstrap the install process on Windows by first downloading `7za.exe` and then using that to unzip the other dependencies.
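The platform-specific unzip step could look roughly like the sketch below. It shells out to the pre-installed `unzip` on OS X and to the previously downloaded `7za.exe` on Windows; the exact command lines are my assumption of how these tools are typically invoked, not lifted from `tmbuild`.

```c
// Sketch of the platform-specific unzip dispatch. Assumes 7za.exe has
// already been downloaded into the library directory on Windows.
#include <stdio.h>
#include <stdlib.h>

static int unzip(const char *zip_file, const char *lib_dir)
{
    char cmd[1024];
#if defined(_WIN32)
    // Extract with 7za.exe, answering "yes" to prompts (-y) and
    // unpacking into the library directory (-o<dir>).
    snprintf(cmd, sizeof(cmd), "%s\\7za.exe x \"%s\" -o\"%s\" -y", lib_dir, zip_file, lib_dir);
#else
    // OS X ships with unzip preinstalled: quiet (-q), overwrite (-o).
    snprintf(cmd, sizeof(cmd), "unzip -q -o \"%s\" -d \"%s\"", zip_file, lib_dir);
#endif
    return system(cmd);
}

int main(void)
{
    return unzip("premake-5.0.0-alpha11-macosx.zip", "./lib");
}
```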
Since each dependency has a unique name, it is easy to check if a dependency already exists in the library directory and, in that case, skip installation. To make a new version of a library, you simply create a new zip file with the updated version, upload it to the library server and check in an updated `packages.json` file with the new library name. `packages.json` is committed together with the rest of the source code, so we will always fetch a version of the dependencies that matches the version of the code that we are currently building.
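Assuming each dependency unpacks into a directory named after its unique identifier (an assumption about the layout, not something the post spells out), the “already installed?” check can be as simple as an existence test:

```c
// Sketch of the skip-if-installed check. Assumes each dependency lives
// in <lib_dir>/<identifier>; the real check in tmbuild may differ.
#include <stdio.h>
#include <sys/stat.h>

static int is_installed(const char *lib_dir, const char *lib)
{
    char path[1024];
    struct stat st;
    snprintf(path, sizeof(path), "%s/%s", lib_dir, lib);
    return stat(path, &st) == 0;  // present in the library directory
}

int main(void)
{
    const char *lib = "premake-5.0.0-alpha11-macosx";
    if (is_installed("./lib", lib))
        printf("%s is already installed -- skipping\n", lib);
    else
        printf("downloading %s.zip\n", lib);
    return 0;
}
```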
Setting up the build tool is the only step of the process that is not fully automated. The reason for this is that we can’t know which version of Visual Studio the user wants to install (Community, Professional or Enterprise) and it is kind of rude to run the Visual Studio installer on their machine without asking. Instead, in this step we just detect whether Visual Studio is installed or not. If it isn’t, we print an informational message and open up the web page for downloading and installing Visual Studio.
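That fallback could be as small as the sketch below: print a message and hand the download URL to the platform’s default browser. The URL and the way an existing Visual Studio install is detected are placeholders; the post does not show the actual ones.

```c
// Sketch of the "build tool missing" fallback. Detection of an existing
// Visual Studio / Xcode install is omitted; only the message-and-open
// behaviour described above is shown.
#include <stdio.h>
#include <stdlib.h>

static void open_in_browser(const char *url)
{
    char cmd[512];
#if defined(_WIN32)
    snprintf(cmd, sizeof(cmd), "start %s", url);  // opens the default browser
#else
    snprintf(cmd, sizeof(cmd), "open %s", url);   // OS X
#endif
    system(cmd);
}

int main(void)
{
    printf("No supported build tool found.\n");
    printf("Please install Visual Studio (Community, Professional or Enterprise).\n");
    open_in_browser("https://visualstudio.microsoft.com/downloads/");  // placeholder URL
    return 0;
}
```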
After this, the rest of the steps are easy. We run `premake5` from the library directory where we downloaded it in order to set up the solution files, then we just launch the selected build tool to build the platform binaries. If we run into an error at any point, we abort with an error message.
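Concretely, generating the project files might amount to invoking the downloaded `premake5` binary with the selected action; `gmake`, `xcode4` and `vs2017` from the help text above are all standard premake actions. The paths here are assumptions about how the dependency unpacks, not the actual `tmbuild` code.

```c
// Sketch of the premake step: run the premake5 binary that was installed
// as a dependency, passing the chosen build tool as the premake action.
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *lib_dir = "./lib";
    const char *premake = "premake-5.0.0-alpha11-macosx";  // identifier from packages.json
    const char *action = "xcode4";                         // or "gmake", "vs2017"
    char cmd[1024];
    snprintf(cmd, sizeof(cmd), "%s/%s/premake5 %s", lib_dir, premake, action);
    return system(cmd);  // a non-zero exit code aborts the build with an error
}
```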
After the binaries have been generated we run our unit tests to make sure no errors were introduced and then run our documentation generator to generate API documentation. Following our principle to write everything in C, these tools are themselves simple one-file C programs. If there is enough interest, I will talk more about these tools in a future blog post.
Using tmbuild
I’m currently using `tmbuild` as my main build command. I work mostly in Atom, using Visual Studio and Xcode only for debugging.
I have an `.atom-build.js` configuration file that sets up `tmbuild` as the build tool for atom-build. It captures error output both from the compiler and from our unit tests, so that atom-linter can pick up the errors. Whenever I want to compile the code, I just press `F9` and a second or so later I have the compile and unit test results as well as freshly generated documentation: