Redesigning Maturity Models When Rolling Out Agile Transformations
Agile is conquering the competitive and fast-changing world. Its benefits mandate that a transformation to agile working is itself implemented as agilely as possible. An important element of such a transformation is setting an improvement target: the first step an organization has to take to start working agile. One of the techniques for setting such a target is the use of maturity models (MMs), where maturity levels indicate the smart stops on the way to an ideal situation. This study finds that traditional and external MM designs create a large gap between management targets and work floor reality. Therefore, a new method to create MMs is introduced here in an Agile setting to provide an easy-to-use yet powerful assessment tool for organizations. The resulting model not only reflects the uniqueness of the organization but also modifies this uniqueness to fit the requirements of Agile transformations. The developed model is compared with alternatives from the literature and provides highly competitive performance with enhanced simplicity. The results confirm that the gap between top-down target setting and bottom-up achievement was sharply reduced when the structure of the MM was modified.
Introduction
When organizations transform to work more Agile, there is a set of characteristics such as routines, cultural aspects, formal governance, and unspoken rules that determines how an organization will likely execute that transformation. The capability to transform can be assessed with maturity models (MMs). A correct alignment between the transformation setting and an MM might improve the speed and quality of the transformation. An MM is defined as a descriptive model of the ideal steps through which the Agile transformation progresses. Usually, an MM is constructed of five levels towards maturity, e.g., initial, repeatable, defined, managed, and optimizing (Becker et al., 2009). It measures how capable an organization or system is of transforming itself. MMs assist organizations in understanding their maturity level and how to improve it within an Agile transformation framework by asking questions and developing action plans. At any given point in an Agile transformation, there is a moment to move on from analysis and decide on the improvement target for the next period. An Agile transformation usually affects a large part of the organization, if not all employees. Hence, an integral part of an Agile transformation is polling the views of as many employees as possible: the ‘wisdom of the crowd.’ This has been shown to be a valuable contribution to the decision-making process (Giles, 2005; Surowiecki, 2005).
There exists a wide variety of MMs. Grant and Pennypacker (2006) listed more than 30 available models in a variety of industries, including petrochemical, defense, construction, and engineering organizations. According to Mullaly (2006), most of these models were developed in the early 2000s. The widely accepted opinion is that organizations with higher maturity levels are more successful in project effectiveness and efficiency (Cooke-Davies & Arzymanow, 2003). Yet, there does not seem to be a one-on-one relationship between maturity and project performance. Literature findings are somewhat paradoxical. Yazici (2009) indicated that there is no clear evidence that maturity leads to competitive advantage or organizational success. Similar findings were reported by Ibbs and Kwak (2000), who found no statistically significant correlation between maturity and project success based on cost and schedule performance. Furthermore, Mullaly (2006) highlighted the lack of evidence of MMs’ contribution to transformation success. A study by Grant and Pennypacker (2006) of 126 organizations from various industries indicated that the median maturity level is 2 out of 5. These studies demonstrate the need to develop a new project management maturity model and to examine how it relates to project performance.
It is important to understand what role an MM has in an Agile transformation. A maturity model can not only help to find weaknesses but also lay the foundation for an improvement plan. Another element is to ensure that an Agile transformation assessment has the proper level of executive commitment (Rosenstock et al., 2000). Finally, Jugdev and Thomas (2002) highlighted some potential drawbacks of MMs:
- MMs lack the potential to measure progress over time.
- MMs often lack a theoretical basis.
- MMs mainly focus on the work processes, ignoring transformational aspects.
The focus of this research is to see how MM redesign could overcome some of the drawbacks listed above. Therefore, it aims to answer the following research questions:
- How are MMs structured?
- To what extent do teams manage to achieve maturity levels?
- Are there MM redesign recommendations to make?
- How can MMs be used to create detailed Agile transformation action plans?
Towards this aim, a cluster-based analysis of the survey data is performed to assess whether the maturity stages represent real-life configurations of organizations.
Method
Procedure and Participants
When an organization wants to set a target for its Agile transformation, it needs to objectively measure to what extent the work floor is ready for the transformation. The use of a Likert scale introduces a variety of biases (most notably extreme response styles), influencing the overall scores and distorting management’s overall view.
An alternative technique, designed specifically for objectively tallying transformation readiness by polling employees, is employed in this study (van de Poll, 2018, 2021; van de Poll et al., 2022a). A total of 79 employees in 12 Agile teams in a country organization of a global retailer were asked to answer a questionnaire covering various aspects of an Agile transformation (e.g., planning for Agile, executing Agile principles, measuring impact, and so on).
Three different MM designs were employed, and the gap between an employee’s actual score and a maturity level was calculated using PRAIORITIZE, an automated consultancy platform whose main functionality is to give advice and to propose improvement targets based on maturity levels, which are the ideal intermediary steps toward a perfect score of 10 out of 10.
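For illustration, a minimal sketch of how such a gap could be computed is given below. This is a hypothetical reading of the calculation, not the PRAIORITIZE implementation, and the question identifiers are invented.

```python
# Hypothetical sketch (not the PRAIORITIZE implementation): the gap between an
# employee's answer profile and a target maturity level, counted as the number
# of questions that still fall short of the answer the target level requires.

def level_gap(answers: dict[str, int], required: dict[str, int]) -> int:
    """answers: question id -> achieved answer (1 = least sophisticated).
    required: question id -> minimum answer demanded by the target level."""
    return sum(1 for q, needed in required.items() if answers.get(q, 1) < needed)

# Illustrative target level requiring at least answer 2 on Q1 and Q2.
print(level_gap({"Q1": 1, "Q2": 3, "Q3": 2}, {"Q1": 2, "Q2": 2}))  # -> 1
```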
Measures and Data Analysis
To facilitate an objective assessment of the extent to which employees achieve a certain maturity level, an alternative survey format based on the Gosnell (1951) scale was constructed. This alternative scale is ordinal and multiple-choice: every next answer is better than the one before. Uhlaner (2005) calls these ‘breaking points’. For example:
Q. How do you fit a user story into a sprint?
- We don’t size user stories and adapt them later.
- We never make user stories too big, so they always fit.
- During the sprint planning, we match based on a formal intake.
The respondents’ self-reporting bias was reduced in the survey by adding “proof-words,” e.g., ‘formal,’ ‘measurable,’ ‘described,’ and ‘documented’ (Donaldson & Grant-Vallone, 2002). Those words were deliberately added to diminish the cognitive or emotional meaning that employees associate with the questionnaire’s answers (Frese & Zapf, 1988). In addition, adjectives or adverbs that could not be verified (e.g., “good”) were avoided. This survey format was considered to be adequately objective (Ahrens & Chapman, 2006; Plewis & Mason, 2005) for application in maturity models.
Maturity levels were constructed by assigning individual answers to levels. Employees were compared on three different maturity model designs. The right part of Table I shows these three designs.
Table I. Comparison of the three maturity model designs.

| Maturity model | Level | N = 79 | Share % | Cumul. % | St. dev. | Q1 | Q2 | Q3 | … | Qn |
|---|---|---|---|---|---|---|---|---|---|---|
| Traditional | No level | 38 | 48% | 48% | 0.205 | | | | | |
| | Level 1 | 26 | 33% | 81% | | 1 | 1 | 1 | … | 1 |
| | Level 2 | 15 | 19% | 100% | | 2 | 2 | 2 | … | 2 |
| | Level 3 | 0 | 0% | 100% | | 3 | 3 | 3 | … | 3 |
| External | No level | 43 | 54% | 54% | 0.232 | | | | | |
| | Level 1 | 29 | 37% | 91% | | 2 | | | … | |
| | Level 2 | 5 | 6% | 97% | | 3 | 2 | | … | |
| | Level 3 | 2 | 3% | 100% | | 2 | 3 | | … | 2 |
| | Level 4 | 0 | 0% | 100% | | 3 | 3 | 2 | … | 3 |
| | Level 5 | 0 | 0% | 100% | | 3 | 3 | 3 | … | 3 |
| DNA | No level | 18 | 23% | 23% | 0.101 | | | | | |
| | Level 1 | 25 | 32% | 54% | | 2 | | | … | |
| | Level 2 | 15 | 19% | 73% | | 3 | 2 | | … | |
| | Level 3 | 9 | 11% | 85% | | X | 3 | | … | X |
| | Level 4 | 10 | 13% | 97% | | X | 3 | 2 | … | 2 |
| | Level 5 | 2 | 3% | 100% | | 2 | 3 | 3 | … | 3 |

The columns Q1 to Qn show the sample structure: the answer required per question at each level (X marks a modification in the DNA design; see “Adjusting the ‘DNA’”).
The traditional maturity model (like CMMI in process maturity) has five maturity levels linked one-on-one to five answers for each question. In such a design, the five answers also follow the Guttman (1950) format: every next answer is better than the one before. For example, to achieve Level 1, an employee must score the least sophisticated answer everywhere; for Level 2, at least the second answer everywhere, and so on. In this study, the questions had three answers, and so the maturity model had three levels as well.
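As a minimal sketch, the traditional assignment described above can be expressed as follows. Treating an unanswered question as falling below Level 1 is an assumption made here for illustration, not something the study specifies.

```python
# Sketch of the traditional design: level L is achieved when every question
# scores at least answer L, so the achieved level equals the lowest answer in
# the profile. Counting an unanswered question as 0 ("no level") is an
# assumption of this sketch.

def traditional_level(answers: list[int | None]) -> int:
    """answers: one ordinal answer per question (1 = least sophisticated, 3 = best)."""
    return min(a if a is not None else 0 for a in answers)

print(traditional_level([2, 3, 2]))     # -> 2 (Level 2 achieved)
print(traditional_level([1, None, 3]))  # -> 0 (no level achieved)
```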
As an alternative, an external maturity model design was added where the questions were not considered equally important.
Employees needed to score the best answer on some questions, while others were not even considered for that level. It was postulated that the worst answer of the three was achieved by default: only answer 2 and answer 3 had to be assigned to a level. The level was not achieved if a respondent missed a single required answer. Still, in other studies, such a maturity model design showed a gap between the management target and the actual achievement of the employees (van de Poll et al., 2022b).
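A sketch of this external assignment is given below, under the assumption that levels are cumulative (an employee must satisfy Level 1 before Level 2 counts); the requirement values are illustrative only.

```python
# Sketch of the external design: each level prescribes required answers for a
# subset of questions only; missing a single required answer fails that level.
# Levels are assumed to be cumulative, which is an interpretation, not a quote.

def external_level(answers: dict[str, int], levels: list[dict[str, int]]) -> int:
    """levels[i] holds the requirements for Level i+1 (question id -> answer)."""
    achieved = 0
    for i, requirements in enumerate(levels, start=1):
        if all(answers.get(q, 1) >= needed for q, needed in requirements.items()):
            achieved = i
        else:
            break
    return achieved

# Illustrative requirements loosely mirroring the Table I sample structure.
levels = [{"Q1": 2}, {"Q1": 3, "Q2": 2}]
print(external_level({"Q1": 3, "Q2": 1}, levels))  # -> 1
```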
To counter this gap, a third alternative design (DNA variant) was developed by using the k-means algorithm (randomly initialized with 20,000 iterations). Herein, the employee answer profiles were used to create clusters to identify patterns of agile organizations. Adding some constraints, it was possible to perform a k-means that spread the employees as evenly as possible over the five maturity levels. The DNA variant assumes that an externally applied maturity model does not do justice to the unique character of an organization. In that light, an external maturity model could be regarded as a “one size fits all” solution. The worst-scoring employees cluster together, just like the best-scoring employees. The term ‘DNA’ originates from these constraints: it takes the natural clusters as the basis (reflecting the uniqueness of the organization) but then modifies this uniqueness for transformation purposes. Table I shows these modifications with various X’s.
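A minimal sketch of the clustering step, assuming scikit-learn’s k-means, is shown below. It orders the natural clusters from weakest to strongest and maps them to levels; the paper’s balancing constraints and the subsequent ‘repair’ of the clusters are not reproduced here.

```python
# Sketch of the DNA construction: employee answer profiles are clustered with
# k-means, and clusters are ordered by their centroid's mean answer so the
# weakest cluster maps to "no level" and the strongest to the top level.

import numpy as np
from sklearn.cluster import KMeans

def dna_levels(profiles: np.ndarray, n_levels: int = 5, seed: int = 0) -> np.ndarray:
    """profiles: (n_employees, n_questions) matrix of ordinal answers 1..3.
    Returns one level per employee: 0 = no level, n_levels = highest level."""
    km = KMeans(n_clusters=n_levels + 1, n_init=20, random_state=seed)
    labels = km.fit_predict(profiles)
    order = np.argsort(km.cluster_centers_.mean(axis=1))  # weakest cluster first
    rank = {cluster: level for level, cluster in enumerate(order)}
    return np.array([rank[c] for c in labels])

# Tiny illustrative run: 8 employees, 4 questions, 2 levels plus "no level".
rng = np.random.default_rng(1)
print(dna_levels(rng.integers(1, 4, size=(8, 4)), n_levels=2))
```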
It is postulated that an MM performs better in a transformation when the respondents are divided as equally as possible over the levels. For example, an MM with five levels should have close to 20% of respondents in each level from Level 1 to Level 4, and 20% of the respondents not even achieving Level 1. Then, there is a motivational, reachable target for every respondent. Additionally, the more evenly respondents are divided over the levels, the easier it is to facilitate knowledge sharing. If, on the other hand, respondents are all huddled into one level, there is no differentiation for target setting. The standard deviation of the share of respondents per level readily indicates how equally respondents are divided among the levels. The lower the standard deviation, the better the MM performs for target setting and knowledge sharing.
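As a check, the standard deviations reported in Table I can be reproduced from the level counts; the sketch below assumes they are sample standard deviations (n−1 denominator) of the per-level shares, including the “no level” group.

```python
# Reproducing the spread metric of Table I: the sample standard deviation of
# the share of respondents per level (including the "no level" group).

import statistics

counts = {
    "Traditional": [38, 26, 15, 0],         # no level, Level 1..3
    "External":    [43, 29, 5, 2, 0, 0],    # no level, Level 1..5
    "DNA":         [18, 25, 15, 9, 10, 2],  # no level, Level 1..5
}
for design, n in counts.items():
    shares = [c / 79 for c in n]
    print(design, round(statistics.stdev(shares), 3))
# -> Traditional 0.205, External 0.232, DNA 0.101, matching Table I.
```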
Results
Closing the Gap
Table I shows the comparison of the three MMs. The Traditional design was considered not to be very useful for setting the transformation target. After all, 48% did not even achieve Level 1, and, cumulatively, 81% did not get beyond Level 1. Setting the target to Level 1 is not ambitious enough, while going for Level 2 is too big a stretch for the organization.
Remarkably, the external maturity model design did not fare better. The trade-off between the need to score fewer questions and better answers resulted in a comparable 91% achieving Level 1 or less. This is also in line with the results from previous literature (van de Pollet al., 2022a).
Finally, the DNA model resulted in the best spread of employees over the maturity levels: its standard deviation was by far the lowest of the three MMs. In this particular organization, the “No level achieved” group was the smallest of the three designs (23%, compared to 48% for Traditional and 54% for External). However, cumulatively, 85% of the employees were still at Level 3 or lower in the DNA design.
Adjusting the ‘DNA’
Table I shows (lower right of the table) a sample structure for the DNA design, where the Xs indicate modifications compared to the external design. A k-means clustering will likely deviate from an external maturity model because of the uniqueness of an organization. As with DNA, that uniqueness is not free of flaws. Just as human DNA might be altered to neutralize Alzheimer’s, the organization’s Agile DNA might need some repair, too. For example, if the organization historically hardly ever worked with SMART improvement targets, that is no reason to continue doing so. Conversely, not only may a maturity level contain shortages that need repair; there are also questions that need not be included in certain levels, or in the model at all. Table II shows the (dis)similarities among employees’ scores.
Table II. (Dis)similarities among employees’ scores.

| Top-7 questions where respondents align most in their scores | Top-7 questions where respondents differ most in their scores |
|---|---|
| 1. To what extent does the product owner manage the product backlog? | 1. How do you handle impediments? |
| 2. How are the daily stand-ups organised? | 2. Which metrics do you use? |
| 3. How do you define the definition of done (DoD)? | 3. What do you use the metrics for? |
| 4. How diverse are the competencies in your agile team? | 4. How is the velocity of the agile team? |
| 5. How is the collaboration within your agile team? | 5. What is the quality of the sprint reviews (Demos)? |
| 6. To what extent can the agile team affect the work that is completed in a sprint? | 6. Where does testing fit in your agile cycle? |
| 7. How do you align scrum ceremonies (meetings) and sprint cadence? | 7. How do you assign story points to user stories? |
Further analysis of the questions where respondents align most in their scores showed that these were mostly questions on which most employees already scored well. Adding such questions to a target level might block other employees from reaching that target, while eliminating them from the model does not affect the transformation’s progress. One plausible way to rank questions by agreement is sketched below.
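The sketch assumes ‘alignment’ is measured as the spread of the ordinal answers per question; the paper does not state the exact metric used for Table II.

```python
# Hypothetical ranking of questions by respondent agreement: a lower spread of
# ordinal answers on a question means respondents align more on it.

import numpy as np

def rank_by_agreement(profiles: np.ndarray, question_ids: list[str]) -> list[str]:
    """profiles: (n_employees, n_questions) matrix of answers 1..3.
    Returns question ids from most aligned (lowest spread) to least aligned."""
    spread = profiles.std(axis=0)
    return [question_ids[i] for i in np.argsort(spread)]

profiles = np.array([[3, 1, 2], [3, 2, 1], [3, 3, 3]])
print(rank_by_agreement(profiles, ["Q1", "Q2", "Q3"]))  # Q1 first: all agree on it
```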
Setting up an Action Plan
Table II shows the questions that drive employees to their maturity levels. The questions where employees differed most in their scores are the elements that must be included in the DNA design. Table III translates these questions into the DNA levels. The complete maturity model covers more than just these seven questions. Table III shows which questions pivot on which level to a better answer. For example, Level 3 differs from Level 2 because of a different answer to question 1. As said, Table III only shows the top 7 questions (for better readability).
Table III. Translating the Top-7 differing questions into the DNA maturity levels.

| Level | # | Question | Target answer |
|---|---|---|---|
| Level 1 | 1. | How do you handle impediments? | We haven’t identified impediments |
| | 2. | Which metrics do you use? | We don’t use metrics |
| | 3. | What do you use the metrics for? | We don’t use metrics |
| | 4. | How is the velocity of the agile team? | The velocity is not measured |
| | 5. | What’s the quality of the sprint reviews (Demos)? | They aren’t organized |
| | 6. | Where does testing fit in your agile cycle? | No formal place; we test when we need to |
| | 7. | How do you assign story points to user stories? | We don’t assign story points |
| Level 2: differences with Level 1 | 1. | How do you handle impediments? | We occasionally identify impediments |
| | 2. | Which metrics do you use? | We use our own set of (Agile) metrics |
| | 3. | What do you use the metrics for? | To report to stakeholders |
| | 4. | How is the velocity of the agile team? | The velocity is measured but (usually) not acted upon |
| | 6. | Where does testing fit in your agile cycle? | Testing is a regular routine in the realization of user stories |
| | 7. | How do you assign story points to user stories? | Some user stories are assigned story points |
| Level 3: differences with Level 2 | 1. | How do you handle impediments? | We structurally identify and monitor impediments |
| Level 4: differences with Level 3 | 5. | What’s the quality of the sprint reviews (Demos)? | Not all stakeholders are present and/or feedback is not structurally captured |
| | 6. | Where does testing fit in your agile cycle? | Testing is a formal part of the sprint planning |
| Level 5: differences with Level 4 | 2. | Which metrics do you use? | We use an organization-wide prescribed set of (Agile) metrics |
| | 3. | What do you use the metrics for? | To report to stakeholders AND to improve our team performance |
| | 4. | How is the velocity of the agile team? | The velocity is formally measured and acted upon |
| | 5. | What’s the quality of the sprint reviews (Demos)? | Most/all stakeholders present AND feedback structurally captured |
| | 7. | How do you assign story points to user stories? | All user stories are assigned story points |
The question format further allows us to collect best practices for moving from one answer to the next. After all, each answer describes a verifiable state of the Agile transformation, and best practices describe how to move from one state to the next. An example is shown in Fig. 1. With three answers, there are two improvement possibilities (from Answer 1 to Answer 2, and from Answer 2 to Answer 3). Fig. 1 shows the steps from Answer 2 to Answer 3 for a single question, including an online reference to additional best-practice material.
Fig. 1. Adding improvement best practices to a maturity level.
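A possible data structure for attaching such best practices to answer transitions, in the spirit of Fig. 1, is sketched below; the question text, steps, and URL are illustrative placeholders, not material from the study.

```python
# Sketch of a best-practice catalogue keyed by (question, from-answer, to-answer).
# All entries below are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class BestPractice:
    steps: list[str]   # concrete actions to move one answer up
    reference: str     # link to additional best-practice material

playbook: dict[tuple[str, int, int], BestPractice] = {
    ("How do you handle impediments?", 2, 3): BestPractice(
        steps=[
            "Log every impediment on the team board.",
            "Review open impediments in each retrospective.",
        ],
        reference="https://example.org/impediments",  # placeholder URL
    ),
}

def improvement_plan(question: str, current: int, target: int) -> BestPractice | None:
    """Return the recorded best practice for moving from one answer to the next."""
    return playbook.get((question, current, target))

print(improvement_plan("How do you handle impediments?", 2, 3))
```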
Discussion and Conclusion
This research is based on theoretical work as well as empirical studies. A quantitative survey describing the elements of an agile organization has been carried out in the Netherlands among various teams in one organization. A gap between management perception and work floor reality can devastate an agile transformation. Using an overstretching maturity model will alienate and overburden employees. Consequently, precise target setting is of prime importance. That means that maturity models should be used not only as a rough reference for management to set targets but also to measure the progress on the work floor precisely.
It is possible to answer this paper’s four research questions. The two research questions, “How are MMs structured?” and “To what extent do teams manage to achieve maturity levels?” are addressed first. The Traditional and External MM designs create a large gap between management targets and work floor reality: the vast majority of employees did not make it past Level 1. It seems that a Traditional or External maturity model design is too rigid and too much one-size-fits-all to represent what Agile aims to be: dividing tasks into short phases of work and frequent reassessment and adaptation of plans.
The third research question is, “Are there MM redesign recommendations to make?” A maturity model that reflects, in a way, the organizational DNA is inside-out compared to the other two outside-in maturity model designs. It clusters the employees in a maturity model but ‘repairs’ any unwanted side-effects of the clustering. Finally, “How can MMs be used to create detailed Agile transformation action plans?” can be answered by combining the format of the multiple-choice questions with Agile transformation best practices.
Setting a specific, measurable, achievable, relevant, and timed improvement target is a cornerstone of any agile transformation. This study conducted a methodological comparison to gain deeper insight into maturity models that could best fit the reality of an agile work environment. By connecting the maturity model design with a questionnaire on Agile working, an Agile transformation can be truly conducted in an agile way. New research into these ‘organic’ maturity models combined with research into a question format and Agile best practices might likely lead to a much more realistic, adaptable, and, therefore, more usable target setting.
Limitations and Future Research
In this paper, many cautionary remarks are made about our research. The paper aimed to start methodological research into alternative maturity model designs. The sample of 79 respondents served purely for calculation purposes. More conceptual thinking about which elements of the DNA (the k-means clusters) need what kind of ‘repair’ should precede calculations with much larger respondent samples. Furthermore, different Agile-related questionnaires, and even topics other than Agile, will have to be researched to understand whether the gap between management targets and work floor reality is a more generic phenomenon. Finally, it has to be researched to what extent Agile transformation best practices can be dissected into usable improvement suggestions at the level of individual questions.
References
- Ahrens, T., & Chapman, C. S. (2006). Doing qualitative field research in management accounting: Positioning data to contribute to theory. Accounting, Organizations and Society, 31(8), 819–841. https://doi.org/10.1016/j.aos.2006.03.007
- Becker, J., Knackstedt, R., & Pöppelbuß, J. (2009). Developing maturity models for IT management: A procedure model and its application. Business & Information Systems Engineering, 1(3), 213–222. https://doi.org/10.1007/s12599-009-0044-5
- Cooke-Davies, T. J., & Arzymanow, A. (2003). The maturity of project management in different industries. International Journal of Project Management, 21(6), 471–478. https://doi.org/10.1016/s0263-7863(02)00084-4
- Donaldson, S. I., & Grant-Vallone, E. J. (2002). Understanding self-report bias in organizational behavior research. Journal of Business and Psychology, 17(2), 245–260.
- Frese, M., & Zapf, D. (1988). Methodological issues in the study of work stress: Objective vs subjective measurement of work stress and the question of longitudinal studies. In C. L. Cooper & R. Payne (Eds.), Causes, Coping, and Consequences of Stress at Work (pp. 375–411). Wiley & Sons.
- Giles, J. (2005). Wisdom of the crowd. Nature, 438(7066), 281.
- Gosnell, H. P. (1951). Review of the book Measurement and prediction: Studies in social psychology in World War II, S. A. Stouffer, L. Guttman, E. A. Suchman, P. F. Lazarsfeld, S. A. Star, & J. A. Clausen (Eds.). American Political Science Review, 45(3), 934–935.
- Grant, K. P., & Pennypacker, J. S. (2006). Project management maturity: An assessment of project management capabilities among and between selected industries. IEEE Transactions on Engineering Management, 53(1), 59–68. https://doi.org/10.1109/tem.2005.861802
- Ibbs, C. W., & Kwak, Y. H. (2000). Assessing project management maturity. Project Management Journal, 31(1), 32–43. https://doi.org/10.1177/875697280003100106
- Jugdev, K., & Thomas, J. (2002). Project management maturity models: The silver bullets of competitive advantage? Project Management Journal, 33(4), 4–14. https://doi.org/10.1177/875697280203300402
- Mullaly, M. (2006). Longitudinal analysis of project management maturity. Project Management Journal, 37(3), 62–73. https://doi.org/10.1177/875697280603700307
- Plewis, I., & Mason, P. (2005). What works and why: Combining quantitative and qualitative approaches in large-scale evaluations. International Journal of Social Research Methodology, 8(3), 185–194. https://doi.org/10.1080/13645570500154659
- Rosenstock, C., Johnston, R. S., & Anderson, L. M. (2000). Maturity model implementation and use: A case study. Proceedings of the 31st Annual Project Management Institute 2000 Seminars and Symposium. Project Management Institute.
- Surowiecki, J. (2005). The Wisdom of Crowds. Anchor Books.
- Uhlaner, L. M. (2005). The use of the Guttman scale in development of a family orientation index for small-to-medium-sized firms. Family Business Review, 18(1), 41–56. https://doi.org/10.1111/j.1741-6248.2005.00029.x
- van de Poll, J. M. (2018). Ambition patterns in strategic decision-making [Unpublished doctoral dissertation]. Technical University Eindhoven.
- van de Poll, J. M. (2021). An alternative to the Likert scale when polling employees. The International Journal of Business & Management, 9(5), 239–244. https://doi.org/10.24940/theijbm/2021/v9/i5/bm2105-044
- van de Poll, J. M., Li, C., & Yang, Y. (2022a). Rethinking maturity models for organizational transformation. Journal of Research in Business Studies and Management, 9(2), 19–23.
- van de Poll, J. M., Shamsi, A., Brouwer, A., & Miller, M. (2022b). Operationalizing purpose between the actual situation and ambition. International Journal of Business Management and Economic Review, 5(1), 9–20. https://doi.org/10.35409/ijbmer.2022.3353
- Yazici, H. J. (2009). The role of project management maturity and organizational culture in perceived performance. Project Management Journal, 40(3), 14–33. https://doi.org/10.1002/pmj.20121