Continuous integration is a great tool to help with QA of a project, and some of the teams within Linaro want to make use of it.

Although Hudson is not written in Python, it is a lot more compelling than buildbot thanks to its richer ecosystem, a better UI, many plugins and a repository with frequent updates.

We will maintain a single Hudson instance to be shared across all teams within Linaro as that's easier to maintain and allows cross-team chained builds.

We will move the kernel WG's Hudson builds to our Hudson instance. If possible we'd like to accommodate migrating the toolchain WG's builds as well.

To summarize, the Continuous Integration project will aim to:

* Get a Hudson service set up that supports the Working Groups in getting their builds going

* Further help them to automatically trigger tests to verify their builds

Release Note


User stories

  • Bob wants to continuously fetch and build the tip of gcc-linaro's bzr branch
  • Gina wants to cross build the latest linux-linaro whenever its git repo is changed
  • Matt wants to run the daily linaro images on QEMU and run some sanity checks
  • Michael wants to build gcc natively on arm and run benchmarks
  • Peter wants to test Linaro images under QEMU




The long term plan is to get the Hudson source package (once it exists) built and deployed on Canonical's DC just like every other web app we have deployed there.

Since the source package doesn't exist yet and deploying the binary package distributed by the Hudson developers is not acceptable, we need an interim solution. Two alternatives have been suggested so far:

  1. Deploy in a DMZ on Canonical's Data Center
  2. Deploy on the cloud

Deploy in a DMZ on Canonical's Data Center

By having it in a DMZ with very limited access to the external world, we greatly reduce the damage that could be caused if the machine running Hudson were compromised, thus hopefully making it acceptable to deploy the existing binary package.

The limited access to the external world would make it impossible to install plugins using the Web UI, so one would have to install them manually.
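A manual install amounts to dropping the plugin archive into Hudson's plugins directory and restarting. A minimal sketch, assuming the standard $HUDSON_HOME layout; the git plugin name is just an illustrative stand-in, and the download is faked since the DMZ host has no external access:

```shell
#!/bin/sh
# Manual Hudson plugin install for a host without external access.
# HUDSON_HOME and the plugin name are assumptions for illustration.
HUDSON_HOME="${HUDSON_HOME:-$PWD/hudson-home}"
PLUGIN=git.hpi

# On a machine WITH external access you would fetch the .hpi from the
# Hudson update centre, then transfer it into the DMZ. Faked here:
touch "$PLUGIN"

# Hudson scans $HUDSON_HOME/plugins for .hpi files at startup:
mkdir -p "$HUDSON_HOME/plugins"
cp "$PLUGIN" "$HUDSON_HOME/plugins/"
echo "staged: $HUDSON_HOME/plugins/$PLUGIN"
# A restart (e.g. via the init script) makes Hudson load the new plugin.
```

The same copy-and-restart step works for plugin upgrades, since Hudson simply loads whatever version it finds in the directory.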

Deploy on the cloud

External access to canonicloud instances is possible only via an OpenVPN connection, so we can't use that.

If we run it on ec2 we won't block on having it packaged. Other teams (e.g. U1) do that, although Hudson is still an experiment for them. If we do run it on ec2, we must assume that to be an insecure environment, though, which means:

  • No user credentials can be stored in the ec2 instance.
    • I think we'd need to store credentials if we wanted to use the Hudson ec2 plugin for instantiating new slaves as they become necessary.
  • Warn people that the artifacts produced should not be trusted?
  • How would we pay for that? We don't have to pay if we use canonicloud UEC

The deployment itself (be it on EC2 or canonicloud) should be trivial, as we'd be using the binary package provided by the Hudson developers. Once that's done we must install all the necessary plugins and then move the existing jobs from Loïc's instance to the new one by copying the relevant directories under $HUDSON_HOME/jobs.
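The job move is just a directory copy. A sketch with local stand-in paths (a real migration would copy $HUDSON_HOME/jobs between hosts, e.g. with rsync over ssh, and then restart or reload Hudson):

```shell
#!/bin/sh
# Stand-ins for Loïc's instance and the new one; real paths would be
# $HUDSON_HOME/jobs on each host.
OLD_JOBS=old-hudson/jobs
NEW_JOBS=new-hudson/jobs

# Fake a source instance with one job, as the Web UI would have created it:
mkdir -p "$OLD_JOBS/gcc-linaro"
echo '<project/>' > "$OLD_JOBS/gcc-linaro/config.xml"

# Copy the job definitions; per-build workspaces can be left behind.
mkdir -p "$NEW_JOBS"
cp -R "$OLD_JOBS/." "$NEW_JOBS/"
echo "migrated jobs: $(ls "$NEW_JOBS")"
```

Hudson reads each job's config.xml on startup, so the copied jobs appear in the new instance after a restart.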

Once that's done we will assist other teams with adding more builds and tweaking existing ones when necessary.

General configuration

Regardless of where/how Hudson is deployed, it must be publicly accessible on a meaningful host name.

New jobs are added via the Web UI and end up as config files under $HUDSON_HOME/jobs. It'd be nice to put them under version control but that is tricky to do because most users will not have access to the raw config files, so for now we won't put them under version control.

After the migration is completed, we will work with stakeholders to define ACLs. If possible we should use the teams extension of Canonical's SSO for that, but Hudson doesn't seem to have an OpenID plugin, so it may be too much work if we have to write it ourselves.

Test/Demo Plan

Unresolved issues

BoF agenda and discussion

Michael Hope doesn't use Hudson. Used to use buildbot, but it didn't work very well.

Unsure about Hudson for toolchain:

  • Build gcc, use that output to build eglibc, and also use that output to build a bunch of benchmarks in various configurations. Moderately complicated, but Hudson can in theory do these things.
  • Builds are long, so wants to be able to log in and test things.
    • Hudson leaves the trees around for a configurable amount of time.
  • Doesn't want one benchmark failing to cause the others to not build.
  • Wants to test gcc with different options.
    • This could be done with parameters for the first build, or perhaps by injecting flags into the other stages.
  • Also gdb, and perhaps binutils and eglibc in the future.
    • Wants to test them directly, and with the compilers being produced by the gcc jobs.
  • New jobs are not added frequently.
  • Wants to be able to test upstream releases and other branches on demand.
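The "one benchmark failure shouldn't block the others" requirement can be handled at the job-script level by running each benchmark as an independent step and only aggregating status at the end. A sketch with placeholder names; the run_* commands are hypothetical stand-ins for real benchmark invocations (and so all fail here):

```shell
#!/bin/sh
# Run each benchmark independently; record results and keep going on failure.
# The run_* commands are placeholders and will all fail in this sketch.
: > results.txt
overall=0
for bench in coremark denbench eembc; do
  if sh -c "run_$bench" 2>/dev/null; then
    echo "PASS $bench" >> results.txt
  else
    echo "FAIL $bench" >> results.txt
    overall=1        # remember the failure, but continue with the rest
  fi
done
cat results.txt
echo "overall status: $overall"
```

The job's exit status would be taken from $overall, so Hudson still marks the build failed while every benchmark gets a chance to run.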

Needs ARM boards available as slaves.

  • Could do this on Michael's network as an interim solution.

Current implementation details for making Jenkins useful for gcc-linaro

As part of the actual implementation we used ec2 instances to set up the Jenkins/Hudson Continuous Integration service. The details of the global configuration for the CI work are available @

Jenkins/Hudson can be used by the toolchain gcc team to build upstream Linaro gcc, build private branch changes, and run tests to track regressions. All of this can be done with minimal configuration. The builds and tests can be triggered automatically whenever a merge is accepted on a branch, by using the polling mechanism of the Jenkins/Hudson plugins. The details of how to submit a new job to build and execute tests for gcc-linaro can be found @

So far Jenkins/Hudson has built and tested the native gcc for x86. It currently tests gcc only, but can be tweaked to test the C++ library or run the complete testsuite as well. The plan is to support gcc cross builds and testing ASAP.


internal/archive/Platform/Infrastructure/Specs/ContinuousIntegration (last modified 2013-08-23 02:14:56)