Build Swarm
A system for super simple cross-platform build automation.
Supports
- Windows (via MSYS2)
- MacOS (with Homebrew)
- Arch Linux
- Debian
- Red Hat Enterprise Linux
- FreeBSD
Known Issues
- No integrated SSL support; it must be used on a trusted network only
Introduction
Build Swarm has three main components. The first is the central control server: a web server, running on some persistent server or your own development machine, which provides the web UI and orchestrates the activities of the workers. The second component is the workers: small persistent daemons running on a variety of virtual machines which connect to the central control server and wait for instructions. The final component is the build script repository: a Git repository where you keep the build scripts. When you initiate a build the central control server sends the correct build script from the build script repository to each worker, which then executes the build script before uploading the result back to the central control server. You can then download the result of the build from the web UI. In this way Build Swarm allows you to tailor the behavior of the build on each platform without needing to touch the virtual machines after the initial install of the worker.
Setting Up a Build Script Repository
To set up a build script repository you will need to create a Git repository on your favorite Git hosting provider, such as GitLab or GitHub. Into the root directory of this repository you will need to place POSIX shell scripts, named after the platforms they will run on. The platform names are as follows:
- windows.sh: Will run on Windows hosts.
- macos.sh: Will run on MacOS hosts.
- arch.sh: Will run on Arch Linux hosts.
- debian.sh: Will run on Debian hosts.
- rhel.sh: Will run on Red Hat Enterprise Linux hosts.
- freebsd.sh: Will run on FreeBSD hosts.
You can see an example of a build script repository here.
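As a rough sketch, a debian.sh for a hypothetical CMake-based project might look like the following. The project name, URL and build commands are illustrative assumptions and not part of Build Swarm; the BUILD_SWARM_WORKER_ID variable and the file-name file are explained under "Running a Build" below.
#!/bin/sh
# debian.sh - illustrative build script for a hypothetical project, adjust to taste.
set -e
# Fetch and build the project (assumes git and cmake are already installed on the worker).
git clone "https://example.com/example/test-program.git"
cmake -S test-program -B test-program/build -DCMAKE_BUILD_TYPE=Release
cmake --build test-program/build
# Package the result and record its name so the worker can find and upload it.
tar -czf "$BUILD_SWARM_WORKER_ID.tar.gz" -C test-program/build test-program
echo "$BUILD_SWARM_WORKER_ID.tar.gz" > file-name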
Setting Up the Control Server
Now you need to install Build Swarm on the central control server (e.g. your own machine). You will need the Crystal programming language; you can get it here. Once you have it, customize and run the following commands:
export BUILD_SWARM_HOST=MY_IP
export BUILD_SWARM_PORT=MY_PORT
git clone "https://gitlab.com/amini-allight/build-swarm"
cd build-swarm
crystal run ./src/main.cr -- MY_BUILD_SCRIPT_REPOSITORY_URL MY_BUILD_SCRIPT_BRANCH
- You need to replace MY_IP with a stable IP address or hostname associated with your server that is accessible from the workers, as this is the address they will communicate with.
- You can replace MY_PORT with any port you wish to use; I use 8111.
- You must replace MY_BUILD_SCRIPT_REPOSITORY_URL with the URL for your build script repository.
- You must replace MY_BUILD_SCRIPT_BRANCH with the branch in that repository you wish to use, usually master or main.
Here's a filled-in example:
export BUILD_SWARM_HOST=192.168.1.2
export BUILD_SWARM_PORT=8111
git clone "https://gitlab.com/amini-allight/build-swarm"
cd build-swarm
crystal run ./src/main.cr -- "https://gitlab.com/amini-allight/test-program-build-tools" master
After executing the commands you can visit http://MY_IP:MY_PORT/ in your web browser to access the web control UI. In the case of this example that would be http://192.168.1.2:8111/.
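Note: If you prefer not to recompile the server on every launch, you can also build a standalone binary with the Crystal compiler and run it directly. This is only a sketch, under the assumption that the compiled program takes the same arguments as the crystal run invocation above:
crystal build ./src/main.cr -o build-swarm --release
export BUILD_SWARM_HOST=192.168.1.2
export BUILD_SWARM_PORT=8111
./build-swarm "https://gitlab.com/amini-allight/test-program-build-tools" master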
Setting Up Workers
On the web control panel you will find several install commands for various platforms. To use these perform the following steps:
- Create a new virtual machine.
- Install your desired operating system.
- Set up your operating system to automatically log into your user account. This is only required on Windows.
- Make sure the prerequisites are installed.
- Run the appropriate install command for your chosen operating system provided on the web control panel.
  - On Windows you must run the install script in the correct MSYS2 terminal type. This varies depending on your use case but is usually MINGW64, CLANG64 or CLANGARM64, not the default UCRT64 that opens after installation.
- Reboot the virtual machine to start the worker. This is only required on Windows and MacOS.
Note: Linux workers are tailored to a specific package ecosystem but not a specific distribution. It's possible to install the Arch worker on Endeavour, the Debian worker on Ubuntu, the RHEL worker on Fedora, etc. Workers are implemented as pure POSIX shell scripts without any binaries and so should be architecture agnostic.
Note: You can append an optional identifier to the end of the download URLs provided, for example /setup/windows becomes /setup/windows/arm64. This has no effect on the worker installed beyond changing its worker ID from windows to windows-arm64 and so causing it to execute the windows-arm64.sh build script instead of windows.sh. This can be useful for example if you have multiple Windows workers targeting different architectures and you need them to use different build scripts and deliver separate build artifacts.
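For example, a repository serving both a plain Windows worker and one installed via /setup/windows/arm64 would contain a windows-arm64.sh alongside windows.sh. The sketch below shows only the end of such a script; because the artifact is named after the worker ID it will not collide with the one produced by windows.sh:
# End of a hypothetical windows-arm64.sh, run only by workers installed via /setup/windows/arm64.
# Produces windows-arm64.zip, distinct from the windows.zip made by windows.sh.
7z a "$BUILD_SWARM_WORKER_ID.zip" test-program/bin/test-program
echo "$BUILD_SWARM_WORKER_ID.zip" > file-name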
Note: Workers will have the BUILD_SWARM_HOST and BUILD_SWARM_PORT values that were supplied when you launched the central control server baked into their code. If you want to change the network location of the central control server you will need to reinstall by running the install command again.
Running a Build
To run a build just press the "Build" button on the web control panel. This will pass each worker the script that matches its ID from the build script repository you provided when launching the central control server. For example the worker called freebsd will be passed the file freebsd.sh from the root level of the repository, if one exists. The script can do all sorts of different things but the eventual result should be a single file which can be uploaded to the server. This could be a Linux package archive, a ZIP file or a standalone executable. Once the script has created this file it must record the name of that file so the worker can find it. For example your build script might end with these two lines:
7z a "$BUILD_SWARM_WORKER_ID.zip" test-program/bin/test-program
echo "$BUILD_SWARM_WORKER_ID.zip" > file-name
Placing the file name into a special file called file-name in the script's initial working directory enables the worker to find the file and upload it. Once the script exits the file will be uploaded, along with a log of all stdout/stderr output from your script. If the build failed and no file was created only the log will be uploaded.
Note: Any environment variables beginning with BUILD_SWARM_ that are available to the central control server when it starts up will be copied to the environment of the script. This can be useful for example to pass secret access tokens to scripts without needing to manually add them to your virtual machines or store them in your build script repository (which is insecure and to be avoided).
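As a sketch of that workflow (the variable name, token format and Git host are assumptions, not something Build Swarm prescribes), you might export a token before launching the central control server and then read it inside a build script to clone a private repository:
# On the machine running the central control server, before starting it:
export BUILD_SWARM_ACCESS_TOKEN=my-secret-token
# Inside any build script, the token is then available without being stored in the repository:
git clone "https://oauth2:$BUILD_SWARM_ACCESS_TOKEN@gitlab.com/example/private-program.git"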
Downloading a Build
Once one or more build artifacts are available you can press the "Download" button on the web control panel to download all currently available ones. If a given build succeeded you will download both an artifact and a log, if it failed you will only download the log. These files will be available until you start or cancel another build.
Canceling a Build
If you want to cancel an ongoing build you can press the "Cancel" button on the web control panel. This will signal all workers to kill their script processes and return to an idle state.
Credit & License
Developed by Amini Allight. Licensed under the AGPL 3.0.
This project contains files from the Open Sans font under their license.