Nowadays, the pervasiveness of open source software is absolute. From source code snippets and web libraries to platform plug-ins and add-ons, complete applications, and even operating systems, open source is everywhere. The fact that open source software allows anyone to inspect the source code has been claimed to guarantee its quality, and even its safety, at least until Heartbleed and Shellshock happened. These two incidents revealed that a project having many users does not equate to having many people going through the project’s code base.
The recent dissertation of Adina Barham discusses the issue of software quality assurance (SQA) in Free/Libre Open Source Software (FLOSS). We do not yet have access to the full dissertation, but we have been able to read the examiners’ statement, from which we can extract the main findings of her work. Her extensive work analyses the relationship between FLOSS users and SQA: essentially, how FLOSS projects build an SQA community around their users, and whether this user community is tightly linked to the FLOSS developer community.
Adina Barham used the site OHLOH, now Black Duck | Open Hub, to gather statistics on FLOSS projects, especially their contributors and the contents of their mailing lists. From Mozilla’s contributors and mailing lists she found that about two-thirds of the communications are one-way, with little interaction among contributors. It can be inferred that users may be using the mailing lists to report problems in Mozilla software; those problems are later escalated to the project’s bug-tracking software (Bugzilla), where they are managed.
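As a rough illustration of how such a one-way-communication figure could be derived, the sketch below threads messages by their `In-Reply-To` headers and reports the fraction of root messages that never received a reply. The sample messages and the data layout are invented for this example; a real analysis would parse an actual mbox archive and use Barham’s own methodology, which we do not have details of.

```python
# Toy sketch: estimate the share of "one-way" mailing-list messages,
# i.e. root messages that never received a reply.
# The sample data below is invented for illustration only.

def one_way_fraction(messages):
    """messages: list of dicts with an 'id' and an optional 'in_reply_to'."""
    # Collect the ids of all messages that somebody replied to.
    replied_to = {m["in_reply_to"] for m in messages if m.get("in_reply_to")}
    # Root messages are those that start a thread (no In-Reply-To header).
    roots = [m for m in messages if not m.get("in_reply_to")]
    if not roots:
        return 0.0
    one_way = [m for m in roots if m["id"] not in replied_to]
    return len(one_way) / len(roots)

sample = [
    {"id": "a"},                        # root, gets a reply below
    {"id": "b", "in_reply_to": "a"},    # reply to "a"
    {"id": "c"},                        # root, no reply: one-way
    {"id": "d"},                        # root, no reply: one-way
]
print(one_way_fraction(sample))  # 2 of the 3 root messages got no reply
```

With real archives one would read the headers via Python’s standard `mailbox` module instead of hand-built dictionaries, but the counting logic stays the same.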
In her work, she focused on user engagement in SQA, rather than on FLOSS developers and their SQA practices. However, from the analysis of the other FLOSS projects in her study, she noticed that in some projects the disconnection between the developers and the users involved in SQA activities is greater than in others. No specific reason is given, but contributors to projects like Mozilla belong to and participate in more than one mailing list, including the QA one. It is also noted that projects with a clear end-user focus, such as Ubuntu and LibreOffice, have well-defined and rigorous QA practices for users to contribute through. Users are not expected to be tech experts, so user feedback has to be collected in a more systematic way. Ubuntu, for example, apart from its technically centred QA group, has very active forums and a community built around quality.
In summary, Adina’s work is one of the first to analyse the state of QA in FLOSS, and it does so quite thoroughly. From UQASAR’s point of view, we should ask ourselves how to involve users in code quality, and not only in user-facing issues and bugs. Tech-oriented repositories like GitHub already have several add-ons for code quality, such as Landscape.io, which, among other things, reports a code “health” percentage to people checking out the source of a FLOSS project. A great next step would be to make this kind of information available to users, for example on Black Duck’s Open Hub, so that they too can choose FLOSS solutions wisely based on code quality.
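To make the idea of a “health” percentage concrete, here is a minimal sketch of how such a number could be computed: run some check over each file and report the passing share. The single rule used here (no TODO markers, lines under 100 characters) is an invented stand-in, not Landscape.io’s actual metric, and the repository contents are made up for the example.

```python
# Toy sketch of a code-"health" percentage, loosely in the spirit of
# tools like Landscape.io. The check below is an invented stand-in
# for a real linter, used only to show the shape of the computation.

def file_is_healthy(source):
    """A file 'passes' if it has no TODO markers and no over-long lines."""
    return ("TODO" not in source
            and all(len(line) <= 100 for line in source.splitlines()))

def health_percentage(files):
    """files: dict mapping filename -> source text."""
    if not files:
        return 100.0
    healthy = sum(1 for src in files.values() if file_is_healthy(src))
    return 100.0 * healthy / len(files)

repo = {
    "clean.py": "def add(a, b):\n    return a + b\n",
    "messy.py": "def sub(a, b):\n    # TODO: handle overflow\n    return a - b\n",
}
print(f"{health_percentage(repo):.0f}% health")  # 1 of 2 files passes
```

Presenting even a crude summary number like this alongside a project’s Open Hub page would let non-expert users factor code quality into their choice of FLOSS solution.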