Strategy as of Feb 8, 2011

Intro

As the organization grows and Linaro's promise of a "faster time to market" comes to fruition, the Platform Team's mission is transitioning from supporting an upstreaming engineering effort in 10.11 to shipping, in the not-so-far future, fully enabled and validated evaluation builds that ODM and OEM products consume as a baseline.

Focus of Testing and Validation in Linaro

The term "Testing and Validation" can easily be stretched to cover work that no longer reflects Linaro's main mission. Activities like core hardware validation, or validating whether a product's user experience matches end-user expectations, are out of scope for Linaro today and will likely remain so.

To stay focused, the term "Testing and Validation" needs to be defined in the context of Linaro. To reset and align expectations about Linaro's Test and Validation efforts, the Platform Team defined three initial core areas to focus on. While this feels like a tough task, it becomes a lot easier when recalling that Linaro is really improving the Linux ARM ecosystem on two fronts:

  1. Linaro Working Groups: tackle the much-needed refinement of commonly used upstream components for ARM
  2. Linaro Landing Teams: make BSPs fit for production in key products, through upstreaming, consolidation, and continuous integration

Automation vs. Manual Testing

To validate software stacks to a level where they can be used in a professional ODM/OEM environment, we need more or less complete test coverage in the focus areas above. To keep this testing effort scalable, a high priority is placed on getting our automated testing story right. At the same time, parts of testing and validation, especially those that depend on human senses (such as audio, graphics, and video output on screen), will be hard to replicate in a completely automated fashion. Hence, Linaro's Testing and Validation strategy will continue to involve manual testing of areas not covered by the automated setup.

To accommodate the increased demand for manual testing, Linaro has created one job opening for each evaluation build and encourages member companies to assign engineers to cover those needs.

LAVA: Linaro Automated Validation Architecture

The first project the newly formed Validation Platform team is focusing on is bringing up a hardware validation farm that bundles up to 50 remotely controllable boards in a rack. The project that ties all of this together was dubbed LAVA, which stands for Linaro Automated Validation Architecture.

The first iteration of LAVA, to be delivered in time for the 11.05 release, will focus on automated execution of the already integrated open-source benchmarks on top of Linaro's Evaluation Builds. Tests will be executed on clean evaluation build installs that are reinstalled before each test run. Later, this will be extended with more Working Group specific test cases and broader coverage through mechanisms like matrix testing.
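As an illustration of the matrix idea, the sketch below enumerates every combination of a few test axes. The axis values and the run_test helper are made up for illustration; the real matrix would be driven by the boards and builds actually available in the farm.

    import itertools

    # Hypothetical test axes; in practice these would be derived from
    # the boards and evaluation builds available in the LAVA farm.
    BOARDS = ["beagle", "panda", "vexpress"]
    IMAGES = ["linaro-headless", "linaro-alip"]
    TESTS = ["stream", "ltp"]

    def run_test(board, image, test):
        """Placeholder for dispatching one test run to the farm."""
        print("Running %s on %s against %s" % (test, board, image))

    # Matrix testing: run every test against every (board, image)
    # combination instead of a single hand-picked configuration.
    for board, image, test in itertools.product(BOARDS, IMAGES, TESTS):
        run_test(board, image, test)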

Projected Timeline for Linaro Validation Ramp-up

November 2010

During the first Linaro cycle, automated validation and test efforts focused on creating core technology for test execution, integration and result publishing.

A test integration and execution framework called Abrek was created that allows straightforward integration of arbitrary tests and benchmarks to be run on the target. Abrek ensures that any prerequisites for executing a given test or benchmark are met, runs the integrated tests and benchmarks, and normalizes the results into a format consumable by launch-control, the Linaro web framework for result reporting.
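To make this concrete, here is a minimal sketch of what integrating a benchmark (the STREAM memory benchmark) into Abrek can look like. The module, class, and parameter names below are a best-effort reconstruction of the Abrek test definition pattern and may not match the shipped API exactly; treat them as illustrative.

    # Illustrative Abrek test definition for the STREAM memory benchmark.
    # Treat the abrek.testdef names as an approximation of the real API.
    import abrek.testdef

    INSTALLSTEPS = ["gcc -O2 stream.c -o stream"]   # build the benchmark
    RUNSTEPS = ["./stream"]                         # execute it on the target
    # Turn output lines like "Copy:  2409.5 ..." into normalized results
    PATTERN = r"^(?P<test_case_id>\w+):\s+(?P<measurement>\d+\.\d+)"

    installer = abrek.testdef.AbrekTestInstaller(
        INSTALLSTEPS, url="http://www.cs.virginia.edu/stream/FTP/Code/stream.c")
    runner = abrek.testdef.AbrekTestRunner(RUNSTEPS)
    parser = abrek.testdef.AbrekTestParser(
        PATTERN, appendall={"units": "MB/s", "result": "pass"})
    testobj = abrek.testdef.AbrekTest(testname="stream", installer=installer,
                                      runner=runner, parser=parser)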

On the client side, launch-control comes with a simple web API and a Python API that normalize result data before submitting it to the launch-control web service. On the server side, launch-control provides a result-submission web API, a result storage mechanism, and an easily extensible visualization template feature that ships with a growing set of default/common templates covering the majority of existing open-source test and benchmark results.
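The sketch below shows what a client-side submission might look like: a JSON result bundle is assembled and pushed to the dashboard over XML-RPC. The bundle fields follow the Dashboard Bundle Format, but the host name and the exact endpoint and method names are assumptions; check the launch-control documentation for the authoritative schema.

    # Sketch of submitting a result bundle to launch-control.
    # Endpoint, method name, and host are assumptions for illustration.
    import json
    import xmlrpc.client  # xmlrpclib on the Python 2 of the era

    bundle = {
        "format": "Dashboard Bundle Format 1.0",
        "test_runs": [{
            "test_id": "stream",
            "analyzer_assigned_uuid": "00000000-0000-0000-0000-000000000001",
            "analyzer_assigned_date": "2011-02-08T12:00:00Z",
            "time_check_performed": False,
            "test_results": [
                {"test_case_id": "copy", "measurement": "2409.5",
                 "units": "MB/s", "result": "pass"},
            ],
        }],
    }

    server = xmlrpc.client.ServerProxy("http://validation.example.org/xml-rpc/")
    server.put(json.dumps(bundle), "stream-results.json", "/anonymous/daily/")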

May 2011

As the requirements on automated validation increase, Linaro decided to set up a dedicated Validation Team that will focus on getting an automated validation architecture and hardware farm in place.

The Validation Team is currently working on LAVA, which involves deploying a hardware validation farm and developing and setting up the tools and infrastructure to automatically execute smoke and functional tests against continuously integrated Evaluation Builds.

For that, a mechanism to reinstall images on a daily basis will be developed, ensuring that tests are always executed against the latest available Linaro Evaluation Builds. This reinstall mechanism must work on all available boards, so we rely on a known-good "master image" that lives in its own partition on the device. The master image downloads the evaluation build to be tested, collects the test results at the end of testing, and acts as a recovery image if something fails during testing.
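The sketch below illustrates one such cycle driven over the serial console. The conmux console name, the shell prompts, and the helper commands are assumptions about the farm setup, not the actual dispatcher implementation.

    # Sketch of one master-image-driven validation cycle over the serial
    # console. Prompts and helper commands are assumptions for illustration.
    import pexpect

    MASTER_PROMPT = "master#"      # assumed prompt of the known-good master image
    TEST_PROMPT = "root@linaro:"   # assumed prompt of the evaluation build

    def one_cycle(board, image_url):
        con = pexpect.spawn("conmux-console %s" % board, timeout=300)

        # 1. From the master image, fetch the evaluation build and
        #    install it into the test partition.
        con.expect(MASTER_PROMPT)
        con.sendline("lava-deploy %s" % image_url)   # hypothetical helper
        con.expect(MASTER_PROMPT)

        # 2. Reboot into the freshly installed evaluation build and run tests.
        con.sendline("reboot-into-test")             # hypothetical helper
        con.expect(TEST_PROMPT, timeout=600)
        con.sendline("abrek run stream")
        con.expect(TEST_PROMPT, timeout=3600)

        # 3. Reboot back into the master image, which collects the results
        #    and submits them to launch-control.
        con.sendline("reboot-into-master")           # hypothetical helper
        con.expect(MASTER_PROMPT, timeout=600)
        con.sendline("lava-submit-results")          # hypothetical helper
        con.expect(MASTER_PROMPT)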

Although the master image must be highly reliable, no assumptions need to be made about the reliability of the evaluation image under test. For example, if no network is available, or even if the system fails to boot, LAVA will record the output seen on the serial console and submit it as an error log for the validation run.
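Continuing the sketch above, the failure path can be handled by catching the expect timeout: whatever the serial console produced so far is captured and filed as the error log. The submit_error_log helper is a stand-in for the real result-submission step.

    # Failure path: if the evaluation build never reaches a prompt,
    # capture the serial output seen so far and file it as an error log.
    import pexpect

    TEST_PROMPT = "root@linaro:"   # assumed prompt of the evaluation build

    def boot_test_image(con):
        try:
            con.sendline("reboot-into-test")         # hypothetical helper
            con.expect(TEST_PROMPT, timeout=600)
        except pexpect.TIMEOUT:
            serial_log = con.before.decode("utf-8", "replace")
            submit_error_log(serial_log)
            raise

    def submit_error_log(log):
        """Stand-in: attach the captured console log to the validation run."""
        with open("boot-failure.log", "w") as f:
            f.write(log)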

... on to the future

While the goals for May 2011 already sound quite ambitious, the Linaro Validation effort does not end there. Quite the contrary: the team is committed to continued innovation around the wider automated testing and validation topic. As one example, the Validation Team will continue its ongoing effort to integrate more and more open-source benchmarks into Linaro's central test repository. On top of that, the team will look at including more functional tests in the test case repository and will explore more production-like ways to install images (e.g. onboard flash rather than SD card).

To summarize, Linaro aims to do more thorough validation on even more boards, with a wider set of tests and parameterizations. Topics to look into include setting up matrix-based test coverage and automating more and more of the tests that are done manually today; robots for automating user-interface interactions are currently under discussion. Overall, Testing and Validation is an exciting project that is set to produce solid results.
