Last month, the PROFES 2014 conference took place, focused on “software engineering techniques, methods, and technologies for product-focused software development and process improvement as well as their practical application in an industrial setting”. The conference consisted of tutorials, a doctoral consortium, and academic presentations. The online conference program now includes slides for all the presentations.
Anu Raninen directed the LAPPI tutorial, which introduced participants to the “Light-weight Technique to Practical Process Modelling and Improvement Target Identification”. The LAPPI technique aims to give software companies a gentle introduction to software process improvement. As described in Raninen’s PhD thesis, the technique consists of 13 steps, including two modelling sessions. In those sessions, the “leading team”, which may be an outside organization, helps the “customer organization” build a software process overview and gather improvement suggestions. The process overview includes “the process model and diagrams/tables describing the process phases and participating roles”. The LAPPI technique claims to be simple and flexible, and its main advantage is its participatory approach: employees feel part of the improvement process, and better solutions are generated.
“Measurement in Software Startups” is a PhD plan by Sohaib Shahid Bajwa of Bolzano University. Aware of the lack of software engineering research around startups, he is focusing on collecting data about the measurements they use. In his preliminary findings, based on data from 3 startups, no data is collected from the software development process: the only data collected concerns application usage, and no formal decision-making process is in place. Interestingly, the startups do not follow any specific methodology to develop their software.
Anna-Liisa Mattila from Tampere University of Technology will study in her PhD the use of visualizations for software metrics and data. Visualization is a powerful tool: it helps identify salient problems and, at the same time, convey the overall status of the measured project. From the UQASAR perspective, Anna-Liisa’s idea of using visualizations, which have yet to be designed, to help choose project metrics is very attractive. Visualizations could be automatically created from the existing data sources of the software project (e.g., repositories, bug trackers, etc.).
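As a hypothetical illustration of that idea, activity data could be aggregated straight from a repository log into a form ready for charting. The function and sample dates below are our own minimal sketch, not part of Mattila’s work:

```python
from collections import Counter
from datetime import date

def commits_per_week(commit_dates):
    """Bucket commit dates into ISO (year, week) pairs -- the kind of
    aggregate a project-status bar chart could be drawn from."""
    counts = Counter(tuple(d.isocalendar())[:2] for d in commit_dates)
    return dict(sorted(counts.items()))

# Hypothetical dates, as if parsed from a version-control log
dates = [date(2014, 12, 1), date(2014, 12, 2), date(2014, 12, 10)]
print(commits_per_week(dates))  # → {(2014, 49): 2, (2014, 50): 1}
```

The same aggregation would work for bug-tracker events or test results; only the date source changes.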
The academic papers are only available through Springer, but we will give a short summary of the papers we found most interesting.
A theme running through the conference was Goal-oriented Quantitative Management and Strategies, or GQM+Strategies. Takanobu Kobori and others presented a paper titled “Identifying rationales of strategies by stakeholder relationship analysis to refine and maintain GQM+Strategies models“. GQM+Strategies uses quantitative data to establish and evaluate management strategies and plans. The paper reflects on how business goals need to be aligned with IT requirements. Most top companies, even banks, are effectively software companies: they develop and use software to carry out their business. Thus, ensuring that processes and software quality match the business side of the company is a priority for companies big and small.
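The core of GQM+Strategies is a grid linking business goals to strategies, and strategies to measurable GQM goals. A minimal sketch of that structure, with purely illustrative goal and metric names of our own invention:

```python
from dataclasses import dataclass, field

@dataclass
class Strategy:
    name: str
    metrics: list  # GQM metrics that quantify the strategy's progress

@dataclass
class BusinessGoal:
    name: str
    strategies: list = field(default_factory=list)

# Hypothetical example: a business goal refined into one measurable strategy
goal = BusinessGoal("Reduce time-to-market")
goal.strategies.append(Strategy("Automate regression testing",
                                metrics=["test automation coverage",
                                         "release cycle time"]))
for s in goal.strategies:
    print(goal.name, "->", s.name, "->", s.metrics)
```

In a real GQM+Strategies grid, each metric would in turn be derived through the classic Goal/Question/Metric steps.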
Mushtaq Raza and João Pascoal Faria took the big data approach in their paper “A Benchmark-based Approach for Ranking Root Causes of Performance Problems in Software Development“.
They carried out a deep analysis of 30,000 finished projects to find the root causes of issues that organizations or individuals introduce into software. The issues they considered were those that can delay the software release or reduce the quality of the software. They came up with 21 performance indicators in three main categories: 1) accuracy of estimations, 2) quality of the software, and 3) productivity of the team. Given that the model’s accuracy is comparable to a human’s, its main benefit is that it can replace intensive manual checking of causes in big software projects.
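To give a flavour of the benchmark idea, a ranking could order a project’s indicators by how far they deviate from benchmark values; the indicator names, values, and the simple relative-deviation scoring below are our own assumptions, not the authors’ actual model:

```python
def rank_root_causes(project, benchmark):
    """Rank indicators by relative deviation from benchmark values:
    the larger the deviation, the more likely a root cause."""
    deviations = {
        name: abs(project[name] - benchmark[name]) / benchmark[name]
        for name in project
    }
    return sorted(deviations, key=deviations.get, reverse=True)

# Hypothetical indicator values for one project vs. the benchmark
project = {"estimation_error": 0.40, "defect_density": 0.08, "productivity": 0.95}
benchmark = {"estimation_error": 0.20, "defect_density": 0.10, "productivity": 1.00}
print(rank_root_causes(project, benchmark))
# → ['estimation_error', 'defect_density', 'productivity']
```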
Torgeir Dingsøyr and others (including Juha Itkonen, also part of the UQASAR FP7 project) delved into a hot topic, as Agile and SQA are still finding their common metrics. The authors try to answer a seemingly simple question: “What is Large in Large-Scale? A Taxonomy of Scale for Agile Software Development”. Their theoretical discussion arrives at a simple categorization: projects can be small-scale, large-scale, or very large-scale. The criterion is the number of Scrum teams involved in the project: 1, 2-9, and 10+, respectively. They propose the use of Scrum of Scrums to coordinate the large-scale categories.
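The taxonomy boils down to a threshold check on the team count, which can be sketched directly (the function name is ours):

```python
def classify_scale(num_scrum_teams):
    """Classify a project by Dingsøyr et al.'s taxonomy:
    small-scale (1 team), large-scale (2-9), very large-scale (10+)."""
    if num_scrum_teams < 1:
        raise ValueError("a project needs at least one Scrum team")
    if num_scrum_teams == 1:
        return "small-scale"
    if num_scrum_teams <= 9:
        return "large-scale"
    return "very large-scale"

print(classify_scale(1))   # → small-scale
print(classify_scale(5))   # → large-scale
print(classify_scale(12))  # → very large-scale
```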
Finally, in this world of open and closed ecosystems, Danielle Pichlis and her co-authors presented “The Challenges of Joint Solution Planning: Three Software Ecosystem Cases“. For the authors, a Software Ecosystem (think Facebook and its apps) “include[s] the use of a common technology; collaboration with the keystone player (who is primarily responsible for the common technology), third party developers and users; and shared value proposition”. From the three cases presented, the understanding is that the technology required for ecosystems is already in place and faces no real challenges. The big challenge, however, is the value proposition that the keystone player sees in promoting a new partner in its ecosystem, and how the users will be affected.