Welcome to our brutally honest, totally hip CMMIFAQ.

We're probably going to make as many enemies as friends with this FAQ, but hey, we expect it to be worth it. :-)

We also did a bit of research and found it pretty hard (if not impossible) to find this kind of information anywhere else on the web. So anyone who has a problem with our posting this information is probably the kind of person who wants you to pay to get it out of them before you have enough information to even make a good decision. But we digress... We do that a lot.

This site was designed to provide help with CMMI for people who are researching, trying to get to "the truth" about CMMI, or just looking for answers to basic, frequently asked questions about CMMI and the process of having an appraisal for getting a level rating (or CMMI certification as some people (inaccurately) prefer to say).

The information on this site has also been demonstrated to provide answers and new insights to people who are already (or thought they were) very familiar with CMMI and the appraisal. Feedback has indicated that there is more than a fair amount of incomplete and actually incorrect information being put forth by supposed experts in CMMI.

Your feedback is therefore very important to us. If you have any suggestions for other questions, or especially corrections, please don't hesitate to send them to us.

This is a work-in-progress; not all questions have been answered yet (simply a matter of time to write them, not that we don't know the answers), but we didn't want to keep you waiting, so we're starting with what we have.

Besides, we definitely don't know all the questions! So, please, send us your questions!

For your own self-study and for additional information, the source material for many of the answers on CMMI comes from the CMMI Institute. They're not hiding anything; it's all there.
(Though, you might have to pay for some of it.)

Oh, also—we're really sorry. We know that pictures are so much better than words. However, we're not allowed to use images from the official materials nor to create our own versions of the materials. That would violate the intellectual property rights of the CMMI Institute and ISACA. (Who? Read later.)



The CMMI model and appraisals are transitioning to a new version, "V 2.0". To accommodate the many user perspectives, this page is organized as follows:

PLEASE BE AWARE that this FAQ, like its prior versions, is NOT going to step through every practice and is in no way enough information to use or appraise to the model.
For specific help understanding and using CMMI for your given situation, please contact us or another CMMI Institute Partner.

One last note: This FAQ was originally created when many users were moving from older versions of CMMI to v1.3. Except for reference and to aid the transition to V2.0, materials, definitions, and policies true to v1.3 but no longer accurate in V2.0 have been removed from this page and will no longer be supported. Content that remains the same in V2.0 from prior versions will not be deleted. An archive of the previous FAQ (true through January 2014) can be found here.



If you're reading this, we're assuming that you have experience with versions of CMMI prior to V2.0. If that's not you, don't blame us if you don't follow what we're saying. Go back and pick another section of the FAQ to start in.

V2.0 is a holistic view of the entire CMMI product suite that includes the Model, Adoption & Guidance, Systems & Tools, Training & Certifications, and the Appraisal Method.

This FAQ mostly focuses on the Model and the Appraisal Method. We'll let the CMMI Institute and Partners take up the slack of providing further insight into Adoption & Guidance, and Systems & Tools. We will provide a little information on Training & Certifications, but this will be for beginners, not for experienced users.


What's new > What's new in the model?


What's new > What's new in the model? >
Completely new stuff (unrecognizable)

There's no perfect way to dive into this. But if you consider yourself an expert (like us), you'll most likely find some piece of published V2.0 material and immediately flip to check out the PAs.

If you do, you will immediately notice that many of the PA names and acronyms are different—even if somewhat familiar, e.g., "Risk and Opportunity Management (RSK)" and "Process Quality Assurance (PQR)". You'll then see a number of new PAs that appear to be entire PAs dedicated to what used to be goals or practices from before, e.g., "Peer Reviews (PR)" and "Estimating (EST)", as well as a couple of mashup PAs, e.g., "Verification and Validation (VV)" and "Requirements Development and Management (RDM)", and some PAs that dropped constraining language, e.g., "Planning (PLAN)".

You might notice that some PAs seem to have disappeared entirely, e.g., v1.3's "IPM", and "IWM", or CMMI-SVC "CAM". But with a little sleuthing, you'll see that these are mostly absorbed into other PAs.

Details of which PAs are new and which have been changed and how and why are elsewhere in the FAQ. CTRL-F is your friend.

The one thing—well, two things—that will most likely immediately jump out at you are these complete WTH(!) PAs: "Governance (GOV)" and "Implementation Infrastructure (II)".

To keep things clean, we'll leave some of the other completely new and likely unrecognizable bits in the details for the Architecture, Structure, Levels (name changes, terminology, definitions), Higher Maturity, and Important Details sections, below.

Last, we should point out that the complete model itself is now only available by license, online (or limited download), for a limited time per user, and you can expect it to be updated more frequently. This means that individuals and organizations can purchase access to the complete model directly from the CMMI Institute. There are a variety of purchase options. The online license allows users to print a PDF of the model watermarked to their own personal use.

While we're on this topic, also unrecognizably new is that the online model is accessed through a "viewer" web application. This viewer allows users to see only those pieces of the model that are relevant to a given situation. For example, only SVC PAs at level 3, or only ACQ PAs at level 5. At the risk of getting ahead of ourselves, experts will have noticed that we just wrote "level" without the accompanying "capability" or "maturity" modifiers. That, too, is explained in the next section.

Before we leave this section on "new and unrecognizable" model characteristics, we should point out that there are many other aspects of the model one can characterize as "new and unrecognizable". This section is limited to several highlights and easily observed differences at a casual glance. Important changes are included in more detail below.

BACK TO Top | BACK TO What's new | BACK TO What's new in the model?


What's new > What's new in the model? >
Architecture, Structure, Levels (name changes, terminology, definitions)

The architectural changes actually drive most of the changes above, but describing the changes by starting with the architecture is like describing someone's personality by starting with their DNA. How they behave in the world (both the CMMI and a person) is much easier to describe when we take the perspective of what we can easily observe. But to truly understand what's going on, a deeper dive into the psyche is often useful. In the case of understanding CMMI V2.0, it's critical.


Name changes

Practice Areas, not Process Areas. And, Practices, not Specific Practices.

These may seem trivial details, but they're not. One of the traps prior versions of CMMI led users into is the conflation between an organization's own processes for getting work done and the role CMMI practices were expected to play in those processes.

Together, "Process Area" and "Specific Practice" combined in a very bad way. These terms took common, everyday words and repurposed them in an unintended but nonetheless unhelpful way. The result was that too many organizations (AND consultants AND FAR TOO MANY appraisers) defined (or expected) organizations' processes to mirror each Process Area and defined the processes by the Specific Practices.

In other words, organizations would have a defined process called "Project Monitoring and Control" and in it would be 10 steps where each step was a Specific Practice. That would be an all-too-common situation. On the surface, the language lends itself to this misunderstanding. Why call it a "process area" if it wasn't supposed to be a process? Why call them "specific practices" if we weren't supposed to make our processes with them? This was never the intent, but this was (is?) a common misconception.

Changing from Process Areas to Practice Areas, and from Specific Practices to merely Practices, not only helps deconflict the roles this content plays in an organization's operations but also makes the language internally consistent with the architecture. More importantly, the language + architecture more obviously support the long-time notion that CMMI practices help improve the organization's processes, not define them.

Other name changes

Name changes to Practice Areas as well as the details of practices are just too much to get into here. We can, however, point out that practice labels are no longer prefaced with "SP". Instead they're just the acronym of the PA. We can also point out that the architecture has new terminology that is 100% not simply a swap of old names for new names.

Practice areas and practices now follow a similar outline (architecture). They now include clearly labeled "Required Information" and "Explanatory Information." It's critical to recognize that the "Required Information" is "required" as part of correctly understanding and implementing—and appraising—the content found there.

A valid criticism of prior versions of CMMI was the conflation of what was "normative" (i.e., required of everyone) and what was "descriptive" or "informative". Despite best efforts, this allowed users and appraisers to routinely mix and match bits and pieces of content to suit their needs. The result was inconsistency across appraisals, and improvement benefits that varied widely between strict and lax implementations and expectations.

When it came to appraisals, the phrase "meets the intent" was bandied about. But nowhere was it explicitly spelled out what, exactly, that intent was. The "intent" of a process area or practice was left to be interpreted or interpolated from the provided text. It was never explicitly provided.

Now in V2.0, what was previously loosely expected as "Required Information" very specifically includes "Intent", "Value", and "Additional Required Information". The idea is that these point to the practices as well as to the organization's processes. In use, as in an appraisal, one could ask whether the organization's processes align with the intent of the work and achieve the value of doing the work. This goes for the practices as well. Does the organization's execution of the practices align with the intent of the practices and achieve the value of doing the practices?

This is important! What this strongly implies is that an appraisal team can now go beyond whether or not the organization has evidence. Teams are now returned to a time in history when they were explicitly expected to decide whether the organization's processes meet the intent and achieve the value of doing the processes they're being appraised on.

***It is now more important than ever that whoever is picked to be the lead appraiser and appraisal team has sufficient contextual experience to be able to make judgments on the business system as expressed through work performed by processes.***

The "Additional Required Information" not only includes definitions to be used in interpreting the practice areas, but also often detailed lessons on the purpose of the processes and practices. You'll notice that the "purpose" statement has been eliminated from the PAs. Before, these purpose statements were weakly defined as somewhere between "normative" and "informative". Now, when needed, "Additional Required Information" provides the purpose, and using this definition is required when determining whether or not an organization has achieved the value and intent.

More on this in the sections below on Required, Informative, Ignorative as well as the detailed discussions in other sections.

BACK TO Top | BACK TO What's new | BACK TO What's new in the model? | BACK TO Architecture, Structure, Levels (name changes, terminology, definitions)


WTF did they do to the levels?

Let's face it. Capability Levels in v1.3 and prior were confusing and lacked much needed market credibility. Most organizations were interested in "Maturity Levels" and basing "Capability Levels" on Generic Practices was a weak expression of business capabilities.

V2.0 blows that away and now "Capability Levels" are tied more directly to business performance and less to the depth of Process Area implementation, such as it was.

This change is now also architecturally supported by the removal of the Generic Practices and the creation of the new Practice Areas, "Governance (GOV)" and "Implementation Infrastructure (II)". More on that next.

But first, let's briefly explain how "levels" work in CMMI V2.0.

There are still two types of "levels" in CMMI V2.0. However, instead of being defined by a confusing cocktail of Process Areas + Generic Goals they're defined internally by groups of practices within the Practice Areas.

Within Practice Areas, practices are grouped into levels. (For the Nerds, these are called "Practice Group Levels".) "Capability Levels" are then defined by the practices included at that level of a Practice Area + the same level of practices in the Governance (GOV) and Implementation Infrastructure (II) Practice Areas.

Maturity Levels are now defined by the CMMI Institute, and include a selection of PAs and levels of practices within them. "Maturity Levels" now correspond to "benchmark" model views because they align with the "benchmark" appraisal types discussed later.

As you might guess, in both level types the "level number" is taken from the maximum level of the practices performed from within the PAs.

As stated at the beginning, this would be so much easier with diagrams, but alas.
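In lieu of a diagram, here's a toy sketch in Python (our own illustration, NOT official CMMI Institute material; the PA data and the min() rule are our simplified reading of the text above):

```python
# Toy illustration of CMMI V2.0 "levels" (unofficial, simplified).
# Each Practice Area (PA) groups its practices into Practice Group Levels.
# A PA's capability level is bounded by the highest practice group performed
# in that PA AND by the same-level practices in GOV and II.

# Hypothetical data: PA acronym -> highest practice group level performed
performed = {
    "EST": 3,   # Estimating
    "PLAN": 3,  # Planning
    "GOV": 2,   # Governance
    "II": 3,    # Implementation Infrastructure
}

def capability_level(pa, performed):
    """Our simplified reading: a PA's level is capped by GOV and II."""
    return min(performed[pa], performed["GOV"], performed["II"])

print(capability_level("EST", performed))  # -> 2, capped by GOV at level 2
```

The point of the sketch is only the capping behavior: no matter how deep a single PA's practices go, GOV and II drag the achievable level down if they lag behind.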

Changes in names and practice details aside, the definition of "maturity levels" hasn't changed significantly from prior versions of CMMI—that is, in terms of PAs and their expected details. The biggest change is really the inclusion of Governance (GOV) and Implementation Infrastructure (II), and their role in replacing the GPs.

BACK TO Top | BACK TO What's new | BACK TO What's new in the model? | BACK TO Architecture, Structure, Levels (name changes, terminology, definitions)


Where did the GPs go?!?!

One of the many valid criticisms of (prior versions of) CMMI was the "Generic Practices". Not so much the intent behind them, but how to incorporate them into an organization's processes and how to account for them in appraisals.

The idea behind the Generic Practices was to encourage organizations to have underlying capability for continuous improvement. And, that this capability would be universal. Problems with this idea arise when immature organizations are allowed to believe that artifacts and interview answers for practices are the same as actual capabilities. Further problems arise when appraisal teams look for artifacts and interview answers for practices in lieu of actual capabilities.

A capability for continuous improvement wouldn't be much of a capability if it only applied to some processes and not others. It would be less effective if it only applied to specific practices of processes. And it would be even less effective if it weren't broadly consistent across all types of work and all processes.

Adding to this confusion, generic practices within each process area became self-referential. A form of circular logic. An attempted work-around was to connect process areas to generic practices, but since all process areas had generic practices this merely amplified the circular logic. The metaphor for the model of a maze of mirrors wouldn't be far from the mark.

So there we have it. The placement of this capability for continuous improvement into the construct of "generic practices" and then their misinterpreted use created another unintended handicap on actual improvement.

Many long-time advanced users of CMMI have voiced the notion that "the generic practices should be their own process area." And similarly, that "there ought to be a way to ‘get credit' for an appraisal just on an organization's capability for continuous improvement." And, "what about an organization's processes that aren't covered by CMMI. Shouldn't they benefit from a capability for continuous improvement?" And finally, "every process expert says that leadership accountability is critical to improvement yet there's nothing truly strong enough in CMMI to hold leadership accountable for taking performance improvement seriously."

All of these ideas are addressed in V2.0.

Instead of generic practices we now have Governance (GOV) and Implementation Infrastructure (II) practice areas. And, these are both required in any benchmark appraisal regardless of scope. Furthermore, they can be appraised as stand-alone practice areas in their own Capability Level benchmark appraisal.

Importantly, these two practice areas are not self-referential. They don't myopically apply to Practice Areas, they apply to all of an organization's processes, NOT only to those covered by CMMI practice areas. And in fact, they are entirely about leadership putting its energies and resources into process improvement.

We repeat this important distinction. These two practice areas are not about CMMI practices; they are about the organization's processes and their senior management. In fact, in keeping with the concerted effort to separate CMMI practices and practice areas from an organization's processes, we should note, again, that PAs in CMMI V2.0 (as they were previously) are not the definition of processes, just a means of improving organizational performance: not only those processes associated with PAs in CMMI, but all of their processes. (The scope of an appraisal will limit the processes being evaluated, but the intent and value of the II and GOV PAs are universal and will be evaluated as such.)

BACK TO Top | BACK TO What's new | BACK TO What's new in the model? | BACK TO Architecture, Structure, Levels (name changes, terminology, definitions)


Required, Informative, Ignorative

In prior versions of CMMI there was a perennial ping-pong duel between the minimally required material and the add-on material that provided context and relevance to the required materials. Required materials were aptly referred to as "required" and the context and relevance-making materials were referred to as "informative". The latter received the tongue-in-cheek nickname of "ignorative" since that's what too many organizations (and appraisal teams) did with it.

In fairness, architecturally, how can necessarily and functionally incomplete information be "required"? How can anyone enforce a single view of reality to be everyone's view?

No static text can anticipate the needs of every organization for all time. More words result in more interpretations. There's functionally no way that CMMI materials could be expected to be followed in their entirety, or, for CMMI materials to cover every combination and permutation of an organization's reality.

All of this, of course, only matters in appraisals. No one argues any of this when improving performance.

CMMI V2.0 improves on this by clearly separating required from not-required content with easily identifiable labels: Required and Explanatory.

Further, instead of expending tremendous emotional energy trying to provide explanation and context, CMMI V2.0 now relies on more local context, experience, and judgement by trimming the amount of information provided in the model material and adding simple touchstone content against which to compare the organization's outcomes in context-specific needs.

Now, among the required content, CMMI V2.0 includes "Intent" and "Value" at the practice area level, and "Value" at the practice level of the architecture. When deciding on the effectiveness of improvement efforts (as well as on the "satisfaction" of the practices in an appraisal), users are required to ask themselves questions similar to, "Are we getting the value out of doing this?" and "Are we achieving the intent of doing this?" Ideally, the answers would be looked at together and in practical business terms, not process-centric terms.

BACK TO Top | BACK TO What's new | BACK TO What's new in the model? | BACK TO Architecture, Structure, Levels (name changes, terminology, definitions)


What's new > What's new in the model? >
Higher Maturity

To be honest, with everything that's changed with the CMMI V2.0 product suite, the changes associated with higher maturity are likely to have the most effect on separating the wheat from the chaff, as they say.

But let's start with the pragmatic.

The new architecture that associates "levels" with practice depth and not exclusively with number of PAs is a welcomed shift. To this end, the needless hair-splitting of measurement and improvement concepts among PAs is now irrelevant.

It never truly made sense to have some measurement and improvement ideas in "lower" maturity levels and other measurement and improvement ideas in "higher" maturity levels. This pointless separation saw organizations only performing causal analysis at "maturity level 5" because "CAR" was a "maturity level 5 process area" while simultaneously having one set of measures as artifacts in "Measurement and Analysis" and a completely unrelated set of artifacts in "Organizational Process Performance." Equally dumb was the dreadful lack of any business performance-related metrics at any level, and the limiting of "measures" to schedule tracking and resource utilization, which are already covered in other areas and have little to do with process performance.

We could go on . . .

In CMMI V2.0, "higher maturity" takes a pragmatic view of the practice areas and their contents. One can see that as the architecture of V2.0 was coming together, contributors realized that the idea of depth of practice applied to many processes.

If the purpose of measuring is to improve performance, then measurement and performance make no sense as separate concepts. It therefore makes perfect sense that these ideas be combined and that the density and challenges of higher maturity practices be in one place.

Managing Performance and Measurement (MPM) is now a single Practice Area for all things performance and measurement related. Doing this inextricably links measurement with performance and also establishes a distinct path of increasingly challenging and sophisticated use of measures and understanding of performance. Together with increased sophistication of analysis techniques (CAR), maturity and capability level 5 merely tie to these two PAs. No other PAs have level 5 practices in them.

In addition, what was ML4 is no longer a pointless placeholder (or rest stop) between levels 3 and 5. And, the new level 4 has a relevant value all on its own. This was accomplished once again thanks to the new architecture. Having said that, there are now "level 4" practices in four (4) PAs in addition to the two PAs that go up to 5.

Logically enough, those PAs where it makes direct sense to tie process performance to operational performance now have highly similar practices at level 4. In organizational processes where "statistical" or "quantitative" techniques are commonly found to separate high performing operations from lesser performing operations, CMMI V2.0 has level 4 practices. These will be found in the Process Management (PCM), Supplier Agreement Management (SAM), Planning (PLAN), and Governance (GOV) practice areas.

As usual, nothing precludes an organization from growing the capabilities of any of its processes using "statistical" or "quantitative" techniques. Rather than leaving the concept entirely unstructured, CMMI V2.0 sets explicit expectations with a minimum list of processes in which it expects to see higher maturity practices.

BACK TO Top | BACK TO What's new | BACK TO What's new in the model?


What's new > What's new in the model? >
Important Details

By going "online" with the official model and appraisal method, CMMI V2.0 can be (as has happened already) updated more frequently than any prior version of CMMI. CMMI Partners and certified individuals will be informed of updates. If you're not working with someone who will be informed early, be sure to check regularly to ensure that you have the latest version of the model.

The new model architecture impacts appraisals in important ways. If you're planning to conduct an appraisal be sure to understand these changes and the implication to your appraisal(s). Architecture and terminology aside, appraisal rules have undergone significant changes in addition to new options so be sure to read up on these changes below.

One such change is the notion of a "view". CMMI V2.0 now supports different "views" of the model. These views are loosely based on the prior "Category" concept, which has now evolved into "Capability Areas". Thanks to the online viewer now used to assemble models for specific uses, these views are much more tailorable to an organization's needs and are more than merely different arrangements of PAs. Views are also more relevant because they align with appraisal terminology and scoping options in ways that prior versions of CMMI products did not.

This FAQ isn't everything. Do not rely on this FAQ as your sole source of CMMI V2.0 material or for detailed differences between V2.0 and prior versions.

BACK TO Top | BACK TO What's new | BACK TO What's new in the model?


What's new > What's new in the model? >
Our Opinion of these changes

Overall, the contributors to this FAQ see the model changes as predominantly positive moves. We will not comment on business decisions about how the model is distributed or financial aspects of using the model.

Creating entire PAs out of handfuls of practices is not new to V2.0. Neither is consolidating practices from across PAs into their own PA, nor eliminating practices or entire PAs. These changes are nonetheless worth examining in more depth.

For example, the "Planning" ("PLAN") PA is a nod toward the idea that the concept of "planning" isn't just for the "project". All work must be "planned". Similarly, the idea that "Estimating" (EST) isn't just something done once during "planning", and only for macro project activities. Creating less constrained and more generalized PAs for these activities emphasizes the universal nature of these processes and their effect on performance. It signals that limiting planning and estimating (in these examples) to early parts of the work is a short-sighted attitude and misses the point of doing these activities in the first place. On the other hand, making them more broadly applicable also widens the pool of potential appraisal artifacts to allow for better alignment between the types of work real companies do and the nature of their processes.

Once again, as in prior models, this model will not satisfy people who want to be told exactly what to do. To such people we firmly adhere to our prior admonishments that the mere desire for such prescriptions belies their sincerity for improvement. And to such users, we sincerely hope and expect that they will be buried by II and GOV.

V2.0 is much easier to read and follow. An evaluation of CMMI v1.3 found the readability to be well above the 4-year college degree level. The readability of the appraisal method definition was even more advanced. V2.0 comes in at a level aligned with that of someone just prior to entering college. This is an improvement, clearly, and we feel that attempts to improve it further would not yield high returns at this time.

BACK TO Top | BACK TO What's new | BACK TO What's new in the model?


What's new >
What's new in the appraisal?

This section focuses on the replacement for what was previously known as the "SCAMPI A". "CMMI Appraisal Method" replaces the term "SCAMPI" which renders that term no longer relevant. Therefore the labels "SCAMPI A", "SCAMPI B", and "SCAMPI C" no longer apply. The ideas behind these different appraisals remain (and are enhanced, see below) but the terms are, "Benchmark" appraisal, "Sustainment" appraisal, "Action Plan Reappraisal", and "Evaluation" appraisal.


What's new > What's new in the appraisal? >
Appraisal Types

For practical purposes (that is, ignoring the changes in methods or rules), "SCAMPI A" and "Benchmark" are effectively interchangeable, conceptually. Nerds go away. You have your own area.

The Evaluation appraisal replaces both the SCAMPI B and SCAMPI C. We won't get into the philosophical debate over this change. Suffice it to say that we support the business logic behind it.

The "Action Plan Reappraisal" isn't exactly new, but in the prior version of the appraisal method it was added later. Now it's formally integrated into the CMMI Appraisal Method, and the definitions for when these appraisals are warranted and allowed are formally folded into the overall method, with a more clearly delineated role for the CMMI Institute.

Action Plan Reappraisals are essentially what they sound like. A "re-appraisal" for parts of an immediately prior appraisal that failed to achieve its appraisal goals. Action Plan Reappraisals happen within a few months of a "failed" appraisal and are only valid in specifically defined situations.

The Action Plan Reappraisal qualifications and rules are detailed in the CMMI V2.0 MDD (Method Definition Document) and are not included in this FAQ. If you're in that boat—that is, you've just "failed" an appraisal—you should be working with your lead appraiser already.

Action Plan Reappraisals require that you use the same Appraisal Team Leader as the appraisal being "re-appraised". But if you feel that starting that conversation with a different person or company might be beneficial, then find another, or contact us to help you find someone else to talk to. But hurry: Action Plan Reappraisals must be completed within four months of the end of the "failed" appraisal.

This leaves the "Sustainment" appraisal which is entirely new with V2.0.

The idea of a Sustainment appraisal is to allow a formal, registered appraisal of "Benchmark" integrity to (if successful) extend a prior appraisal rating for two more years. The Sustainment appraisal can be done any time prior to the end of the validity period of a prior Benchmark or Sustainment appraisal. No more than three Sustainment appraisals can be conducted in a row.

Sustainment appraisals are only allowed if the OU meets certain stability criteria detailed in the new MDD. The benefits of a Sustainment appraisal stem from the fact that the scope of practices is limited to as little as 1/3 of the original scope (best case scenario). Therefore the size of the appraisal team and the duration of the appraisal can be significantly lower. On the other hand, the lower effort to prepare for a Sustainment appraisal, in combination with the lower resources, expenses, and time, lends itself to more frequent objective insight into process performance.
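To make the timing rules concrete, here's a Python sketch of our reading of them (an illustration only, not the MDD's actual eligibility criteria; consult the MDD and your lead appraiser):

```python
def sustainment_allowed(consecutive_sustainments, months_until_rating_expires):
    """Toy check of two timing rules as we read them (not the MDD's wording):
    a Sustainment appraisal must happen before the prior rating expires,
    and no more than three may be conducted in a row."""
    return months_until_rating_expires > 0 and consecutive_sustainments < 3

print(sustainment_allowed(0, 18.0))  # True: first Sustainment, rating valid
print(sustainment_allowed(3, 6.0))   # False: a fourth in a row is not allowed
```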

Many organizations are constrained in their ability to bring in outside consulting for general process help. These same organizations often have no trouble allocating funds for certifications and appraisals. Such organizations use certifications and appraisals as their opportunities to pick the brains of the independent experts brought in to perform the official events. Sustainment appraisals create a low-cost, low-barrier mechanism for organizations serious about improvement to justify having more frequent visits by these outside experts they'd otherwise not be authorized to bring in.

NOTE: the details of qualification and scope for Action Plan Reappraisals and Sustainment appraisals are spelled out in the new CMMI V2.0 MDD. Repeating them here is beyond the scope of the FAQ. If you have not yet experienced a V2.0 appraisal, you are too early to be thinking about either an Action Plan Reappraisal or a Sustainment appraisal. You will need to have experienced a V2.0 Benchmark appraisal first and are advised to discuss your organization's particulars with a (if not your) certified lead appraiser.

BACK TO Top | BACK TO What's new | BACK TO What's new in the appraisal?


What's new > What's new in the appraisal? >

The elephant in the room of changes to the appraisal is the wholesale reset on how an organization's work is sampled for the appraisals.

Sampling is now more robust and less easily "gamed" by appraisal stakeholders. Projects can no longer be "pre-ordained" to appear in an appraisal while other projects hide under the rug. An entire organizational unit will no longer be able to lean on the results of a few hand-selected golden children to represent the purported process prowess of the entire operation.

For example, an organizational unit claiming 37 in-scope projects (previously, "basic units") will have to show evidence from a relatively random sample of those 37 projects. While the total number of projects may still be low under the new sampling rules, except where the entire OU is just one project it will be unusual for a single project to represent all PAs. This not-really-random sample is determined by an algorithm behind the scenes at the CMMI Institute.

Appraisal stakeholders (e.g., sponsors, lead appraisers, etc.) can request exceptions, designate focus projects, and request other edits to the sample, but the general idea is that several projects in an Organizational Unit should be prepared to provide objective evidence of practice performance. The selection of projects is provided to the OU 60 days in advance of the appraisal on-site, i.e., the "Conduct Appraisal Phase". Changes to the sample can be requested up to two iterations, and, if the schedule slips more than 30 days, a new sample must be requested.

The sampling algorithm chooses a sample of organizational elements and in-scope projects such that PAs are mostly spread out among the in-scope projects and the organizational elements that support them. Despite the use of a deterministic algorithm, the sample is nonetheless referred to as the "Randomly Generated Sample" (RGS). As far as the OU knows, the sample is random, even though in the strictest sense of the term it's not. And yes, don't worry: projects that don't execute particular processes relevant to a PA can be flagged for exclusion from the sample.

Organizations with 1 – 10 in-scope projects can expect one of these projects per appraised PA. Obviously, with more PAs than projects, some projects will provide objective evidence for more than one PA. Nonetheless, some projects may not appear in the sample at all. Organizational units with 11 – 40 projects will be expected to cover each PA with two projects. And organizational units with 41 – 200 projects will demonstrate each PA with up to three projects. (The MDD accounts for OUs with hundreds of projects, but if your organization is in that situation, you're probably not seeking education from this FAQ.)
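The coverage tiers above boil down to a simple lookup. Here's an illustrative sketch of that tier rule as we read it—this is emphatically NOT the CMMI Institute's actual RGS algorithm, just a rough model of the project-per-PA counts described in this answer:

```python
def projects_per_pa(num_projects: int) -> int:
    """Rough sketch of the sampling coverage tiers described above.
    Not the actual RGS algorithm -- just our reading of the tier rule."""
    if num_projects < 1:
        raise ValueError("An organizational unit needs at least one in-scope project")
    if num_projects <= 10:
        return 1   # each PA covered by one sampled project
    if num_projects <= 40:
        return 2   # each PA covered by two sampled projects
    if num_projects <= 200:
        return 3   # each PA covered by up to three sampled projects
    raise ValueError("OUs with hundreds of projects: consult the MDD directly")

# e.g., the hypothetical 37-project OU from the example above:
print(projects_per_pa(37))  # -> 2
```

Note that this says nothing about *which* projects get picked—that's the part the Institute's algorithm keeps behind the scenes.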

As in earlier versions of the appraisal methods, "Subgroups" are still a thing and the delineation of Subgroups is still determined by the same "sampling factors" as before:

  • Location
  • Customer
  • Size
  • Type of Work, and
  • Organizational Structure

Having said that, although the RGS algorithm attempts to maximize PA coverage across subgroups, the number of PAs required for the sample is not affected by the number or relative size of subgroups.

NOTE: the details of scope and sampling for Sustainment appraisals are spelled out in the new CMMI V2.0 MDD. Repeating them here is beyond the scope of the FAQ. If you have not yet experienced a V2.0 appraisal, you are too early to be thinking about a Sustainment appraisal. You will need to have experienced a V2.0 Benchmark appraisal first and are advised to discuss your organization's particulars with a (if not your) certified lead appraiser.

More details about sampling and the Randomly Generated Sample will be found in the section below, for Experts, Nerds, and Power Users.

BACK TO Top | BACK TO What's new | BACK TO What's new in the appraisal?


What's new > What's new in the appraisal? >

Also new in V2.0 appraisals is the elevation of the concept of a "role" to the same relevance as process activities in what an appraisal evaluates. We mention "roles" here without elaborating on details that are better explained in discussion with a lead appraiser.

"Roles" come into play because the sampled projects and the PAs covered by them must be accounted for in the appraisal planning in terms of the people whose roles are to perform the processes and the artifacts those people create or use.

The concept of a "process role" is introduced (a) to further strengthen the connection between processes and the resources and responsibilities for performing them, and (b) to avoid appraisal results that are based solely on project-level personnel and somehow fail to incorporate organizational staff who work in process management.

In other words, the appraisal expects to see staff whose roles include process activities and not just staff who are users of nameless, faceless process stick figures.

BACK TO Top | BACK TO What's new | BACK TO What's new in the appraisal?


What's new > What's new in the appraisal? >

In addition to technological reasons justifying the demise of SAS, the new appraisal sampling process, timing requirements, and planning tasks, along with several other changes, necessitate a new appraisal system. This new system is simply called the "CMMI Appraisal System", or CAS. Also, whenever the model changes, aspects of the appraisals can change. The new system ties CMMI product suite components together more effectively than before.

BACK TO Top | BACK TO What's new | BACK TO What's new in the appraisal?


What's new > What's new in the appraisal? >
Team requirements

Appraisal team members (ATMs) must still have management and field experience relevant to the scope of the appraisal and the projects within it, and ATMs must still represent more diverse roles than just the people who wrote the processes being appraised. V2.0 clarifies how to count the total or average experience of team members vs. the leader, and limits the number of less experienced team members.

In particular, excluding the appraisal team leader, at least one ATM must have no less than 5 years of management experience, and the team as a whole must have at least 10 years.

And each team member (again, not including the team leader) must have at least 3 years of experience performing the type of work being appraised and—as a team—must have a minimum total of 30 years' field experience performing processes in the domain or discipline in scope, with a team average of 10 years. (The appraisal team leader (ATL) may admit no more than one ATM with less than three years' experience if the rationale is included in the plan.)
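Because these minimums interact (per-person floors, team totals, and a team average), here's a small sketch of how they combine. This is our reading of the rules, not an official qualification tool, and the example team at the bottom is entirely made up:

```python
def team_meets_experience_rules(mgmt_years, field_years):
    """Check the ATM experience minimums as described above.
    mgmt_years / field_years: per-ATM years of experience, EXCLUDING
    the team leader. Returns (ok, reasons). Illustrative only."""
    reasons = []
    if max(mgmt_years, default=0) < 5:
        reasons.append("no ATM has >= 5 years of management experience")
    if sum(mgmt_years) < 10:
        reasons.append("team total management experience is < 10 years")
    # At most one ATM under 3 years of field experience (with rationale in the plan)
    if len([y for y in field_years if y < 3]) > 1:
        reasons.append("more than one ATM has < 3 years of field experience")
    if sum(field_years) < 30:
        reasons.append("team total field experience is < 30 years")
    if field_years and sum(field_years) / len(field_years) < 10:
        reasons.append("team average field experience is < 10 years")
    return (not reasons, reasons)

# Hypothetical four-person team (leader excluded):
ok, why = team_meets_experience_rules(
    mgmt_years=[6, 2, 0, 4],      # total 12; one ATM has >= 5
    field_years=[12, 8, 15, 9],   # total 44; average 11; all >= 3
)
print(ok)  # -> True
```

Swap in your own team's numbers and the `reasons` list will tell you which minimum (if any) you're missing, per our interpretation.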

Training requirements for appraisal team members is covered here.

BACK TO Top | BACK TO What's new | BACK TO What's new in the appraisal?


What's new > What's new in the appraisal? >
High Maturity (HM) appraisals

Changes to the appraisal unique to high maturity are predominantly in the training requirements.

On high maturity appraisals, all team members must have High Maturity related training as specified by CMMI Institute. This is a specific high maturity course that is either provided or approved by the CMMI Institute. And note that it's not just the High Maturity mini-team members who need this training.

As for High Maturity PA mini-teams, only a CMMI Institute Certified High Maturity Lead Appraiser (CHMLA) or an ATM with High Maturity-related training and a minimum of two years of CMMI process and performance improvement-related statistical analysis experience may lead the mini-team.

ATMs participating as a HM mini-team member must have a minimum of one year of direct experience in applying statistical analysis and CMMI process and performance improvement-related High Maturity concepts and techniques. However, *all* ATMs on an appraisal that includes High Maturity practices must have specifically-designated "High Maturity" training.

High Maturity appraisal plans must show that the appraisal team has, as a "Collective", experience implementing high maturity practices. It's unclear to us whether "collective" means that each team member must have this experience or that something less than everyone having high maturity experience is sufficient. We'll report back as we gain clarification on this expectation.

BACK TO Top | BACK TO What's new | BACK TO What's new in the appraisal?


What's new > What's new in the appraisal? >
Performance Reports

One evergreen criticism of CMMI appraisals has been their exclusive focus on CMMI practices and near-total avoidance of whether the practices contribute business value to the organizations using CMMI. CMMI V2.0 appraisals do not have a perfect solution to that—yet.

One step in the right direction is in the model itself. Specifically, the new architectural elements of intent and value, and the much clearer expression of required information. Appraisal teams are now instructed to judge practices, processes, and results by whether the outcomes meet the intent and achieve the value defined in the model.

Another step is clearer definitions of adequacy and sufficiency. We will leave this detail for the section below.

None of the above are completely new. They are improvements to prior aspects of the appraisal. Some improvements are bigger than others and some may have a greater impact on users than others. Some codify how the best users have been operating all along, and some push the guesswork out.

One completely new step in bridging CMMI practices as exposed in appraisals with business benefits is the creation of a "Performance Report," now required as part of Benchmark and Sustainment appraisals.

The Performance Report does not have any effect on the ratings. The report acts as a focusing mechanism for the appraisal team to summarize their view of an organization's approach to, and effectiveness towards, performance.

Too often, organizations use CMMI for the sole purpose of the appraisals and they have no objective connection between CMMI practices, appraisals, and their own positive performance. The Performance Report aims to create this connection, hopefully—if needed—educating users to the benefits of using CMMI for the greater purpose of actually improving. (Imagine that!) A real connection between business performance and process performance would improve their experience and outcomes. CMMI practices are just part of that and a CMMI appraisal is an opportunity to bring to light the organization's connections between processes and performance.

The performance report is not a repeat of appraisal results and DOES NOT tie specifically or exclusively to CMMI practices. However, an appraisal can identify both positive and negative influences in the relationship between CMMI practices and the organization's performance in this report.

The report is currently a templated Excel spreadsheet, the details of which are beyond the scope of this FAQ. It can be started before the appraisal and completed after the ratings are determined. The report is expected to be consistent with the appraisal results, with rationale provided for any inconsistencies.

Suffice it to say that the report has fairly reasonable questions and qualitative and quantitative data fields. It is a step in the right direction. We look forward to when serious users will be able to use a similarly official artifact to differentiate themselves by using the report to record and communicate quantitative "before and after" performance data as resulting from their use of CMMI practices. And then further, to a point when all users will be required to do so.

We'll provide much more details (and examples!) about Performance Reports in the section below, for Experts, Nerds, and Power Users.

BACK TO Top | BACK TO What's new | BACK TO What's new in the appraisal?


What's new > What's new in the appraisal? >
Important Details

The architecture of CMMI V2.0 was created with the idea of more comprehensive appraisals in mind. That is, appraisals that cover different "views", e.g., Development, Services, etc. And the new sampling approach brings greater rationale to the appraisal results of more complex organizations. Adding to this the broader angle taken by the II and GOV PAs, we find ourselves in a situation where explaining the "roll-up" of characterizations into ratings is challenging without visuals.

Were we to explain how practice characterizations roll up (individual characterizations within practice groups or practices across projects), we'd basically be repeating (at worst) or rewriting (at best) the appraisal method definition document. Neither of which serve much purpose. The value we'd be adding is fairly low.

Due to the above, in this FAQ we will merely point out a few observations:

  • The "intent" and "value" aspects of practice and PA definitions improve an appraisal team's ability to debate the merits of an organization's work.
  • The absence of "Goals" in V2.0 certainly makes it easier to normalize ratings for levels.
  • V2.0 of the MDD has removed at least some (we believe, a good bit) of the ambiguity surrounding how to characterize practices that don't fully meet the intent and value of practices.
  • The handling of "Not Yet" ("NY") characterizations is clear. And
  • Characterizing the work of projects whose processes are different is now one of the use-cases distinctly addressed.

If you don't yet have a relationship with a certified lead appraiser, we refer you to the MDD for details. The language is fairly accessible.

Another important detail about appraisals is how "affirmations" are handled. Again, too much to get into in this FAQ. We will point out that the number and variety of affirmations collected are different for "Sustainment" appraisals compared to "Benchmark" appraisals. However, in general, people familiar with v1.3 SCAMPI A appraisal rules will find V2.0 appraisal requirements for affirmations familiar.

Next detail to cover is how II and GOV are handled.

II applies to the organizational unit's implemented, performed, and supported processes—NOT the PA practices.

GOV, meanwhile, applies to the organizational unit's senior management and affected stakeholders as they relate to the set of processes performed within the organizational unit.

In addition to slightly different practice characterization and organizational unit roll-up rules, objective evidence for II and GOV must clearly reflect work at a higher level than was previously explicitly spelled out. Organizations will no longer be able to "get by" without convincing an appraisal team (and its leader) that performance improvement is more than "lip service".

Finally, we reiterate that evidence adequacy and sufficiency are clearly defined in the MDD. Section 1.1.5, to be exact. If, after reading this section, you're still unclear, please contact us with your specific question and we'll do our best to answer you.

BACK TO Top | BACK TO What's new | BACK TO What's new in the appraisal?


What's new > What's new in the appraisal? >
Our Opinion of the above

In general, our opinion falls into two categories of thought. On one hand, we’re very pleased with all the changes and new elements of the appraisal. Much has been done to move towards a serious effort at evaluating the business benefits of improvements in general and improving with CMMI, in particular.

On the other hand, there’s still room for CMMI appraisals to take a harder look at how organizations improve performance. Using process measures and business objectives, organizations using CMMI can show quantitative performance improvements in a way that is not easily "gameable" for the purposes of appraisals.

In the past, attempts to move the notion of "quantitative performance" into "lower" maturity levels were met with vociferous protests of how doing so blurs the lines between "high maturity" and "low" maturity. And, that doing so "forces" "lower maturity" organizations to use quantitative values when they weren’t trying to achieve "high maturity".

This attitude reflects a poison among CMMI users. But worse, it reflects a market value proposition of CMMI divorced from business relevance.

Right now, there’s nothing stopping an organization from taking such a quantitative performance-centric approach—with or without CMMI. However, doing so does not appear in appraisal results and there’s no distinction brought to any organization doing so. In our opinion this is a missed opportunity. It would require a more calibrated appraisal life cycle and a more experienced appraisal team and leader.

The performance report is a step in the right direction and the overall improvements to the model and the method support trajectories in this direction. The appraisal value proposition (and by association, the model’s value proposition) would be enhanced with this next piece. Then, CMMI users would be able to market their own use of this more rigorous appraisal, thus not only demonstrating their ability to operate with CMMI practices, but that they’re truly improving performance—at any level.

Agile methods became wildly popular because they were shown to deliver business results more effectively than many "traditional" development approaches. CMMI must also be able to be shown to deliver better business results. A quantitative performance approach that appears with any appraisal—with or without particular level ratings—would be a badge of honor only available to organizations with the true capability and maturity to set themselves up for such objective scrutiny and set themselves apart.

BACK TO Top | BACK TO What's new | BACK TO What's new in the appraisal?


What's new >
What's new with training and certifications?

As stated at the very beginning of the FAQ, this FAQ is focused on the CMMI model and CMMI Appraisals. There are several other beneficial components to CMMI products and services that we will not address in depth in this FAQ including training and certifications that are not on the critical path for appraisals or appraisal-related activities. The CMMI Institute already explains all of this and we’re not them so we won’t repeat their materials.

We will, however, provide you with some ideas in the subsections that follow.

More details about training and certification will be found in the section below, for Experts, Nerds, and Power Users.



What's new > What's new with training and certifications? >
Training requirements for being on an appraisal

Each ATM shall have completed a CMMI Institute-licensed offering of the introductory course or upgrade training related to the current version of the model in the model scope. — MDD §1.2.3.

For most practical purposes, ATMs will need a minimum of 3 days of training and 4 days to be on a high maturity appraisal.

The introductory course is now in several parts. At a minimum there’s the "Foundations of Capability," a 2-day class that covers the very basics about the model and the practices common throughout the model. There’s a tiny bit about appraisals, and modules about the other training and certifications available. There’s a required test at the end and after successfully passing the test participants are formally "Certified CMMI Associates." To be an ATM, one must be a "Certified CMMI Associate." That is, pass the end-of-training exam.

This class, alone, does not qualify someone to be on an appraisal that includes DEV, SVC, Supplier, People, or other model view PAs, or High Maturity. To be an ATM on an appraisal for a specific view, one needs to also add a 1-day class for each view. (This makes the 3 day minimum.)
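Putting the day-counts together (the extra high maturity day is covered a couple of answers down), the minimum is just a simple sum. An illustrative tally, per our reading of the training rules above—not official guidance:

```python
def minimum_training_days(num_views: int, high_maturity: bool) -> int:
    """Minimum ATM training days, per our reading of the rules above."""
    days = 2             # "Foundations of Capability" (the 2-day class)
    days += num_views    # one 1-day class per model view (DEV, SVC, ...)
    if high_maturity:
        days += 1        # "High Maturity Concepts" (another 1-day class)
    return days

print(minimum_training_days(1, False))  # -> 3 (the 3-day minimum)
print(minimum_training_days(1, True))   # -> 4 (the 4-day minimum for HM)
```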

NOTE: The exam is online and administered by the CMMI Institute. If you received training that did not include an exam on the CMMI Institute web learning portal, you likely did not receive an authorized, certified course and would not qualify to be on an appraisal. After passing the exam, your name will appear in the CMMI Appraisal System as someone who can be selected as an ATM. There are back-office mechanisms that make this connection. If you don’t take the authorized, certified course your name will not be in the database.

In addition to minimum training for model content, ATMs must be trained by the appraisal team leader on the appraisal method. Experienced ATMs do not need to repeat a formal appraisal team training course for each appraisal.

BACK TO Top | BACK TO What's new | BACK TO What's new with training and certifications?


What's new > What's new with training and certifications? >
Training requirements for being on a high maturity appraisal

To be on an appraisal that includes "High Maturity" levels, in addition to the training described above, one also must participate in a "High Maturity Concepts" course. Another one-day class. This is for *all* ATMs on the appraisal, not just those on the high maturity mini-team(s). (This makes the 4 day minimum.)

In addition to minimum training for model content and high maturity, ATMs must be trained by the appraisal team leader on the appraisal method; the team leader must ensure this training has occurred. Experienced ATMs do not need to repeat a formal appraisal team training course for each appraisal.

BACK TO Top | BACK TO What's new | BACK TO What's new with training and certifications?


What's new > What's new with training and certifications? >
Upgrading from earlier versions to V2.0

This is honestly a bit more complex and depends on a number of factors. It’s also changing dynamically due to market and global complexities.

Some people will be able to take an "upgrade" course that includes the exam. For some, this can be accomplished entirely online. Only current ATMs under v1.3 are eligible. There was a deadline for this in 2019, but it has been extended. It’s best to contact the CMMI Institute to discuss your options.

BACK TO Top | BACK TO What's new | BACK TO What's new with training and certifications?


What's new > What's new with training and certifications? >
Certificate Options

There are training and certification pathways that lead to becoming certified lead appraisers and instructors, as well as to become more advanced users of CMMI. Certified lead appraisers and instructors all go through these same steps with additional training specific to leading appraisals and being instructors, respectively. People who don’t need to be instructors or appraisers can benefit from the deeper dive into all versions of model content and can add these training accomplishments to their certification repertoire for their own professional advancement.

Quickly, there's now a defined requirement and pathway for people to be "Certified" language interpreters/translators during appraisals. Take note of it if this is relevant to you.

BACK TO Top | BACK TO What's new | BACK TO What's new with training and certifications?


What's new > What's new with training and certifications? >
Our Opinion of the above

Changes to training for appraisals are necessary for (we hope) obvious reasons. Explicit training for high maturity merely codifies what good lead appraisers do for the teams they lead. Our guess is that there must have been too many high maturity appraisals with poorly qualified team members.

The certifications for people who don't have a target to be lead appraisers or instructors is a personal or business decision. Our focus in this FAQ is on practical implementation and value-added appraisals. Even though certifications don't play a role in that for us, it can for others and we won't stake a position on them because it's—quite literally—none of our business.

Overall, the training changes are consistent with the needs of the product suite and the users. As always, however, we caution that training alone will not answer all questions for all combinations and permutations of any given organization's reality. Only experience can provide that. The best training for CMMI is using CMMI. You don't "need" training to use CMMI.

Getting (good) training is scarcely a bad idea and this FAQ is not a substitute for good training.

Now. Go study.

BACK TO Top | BACK TO What's new | BACK TO What's new with training and certifications?



What's new >
Other new things arriving with V2.0

Our friends over at The Process Group have put together this very handy concise page of changes coming with V2.0.

Among other useful bits and bobs, their page also conveniently lists practices added to and deleted from the model.

Their page also includes some of the marketing talking points for V2.0 which we've left off this site.

Watch this space as the CMMI V2.0 product suite continues to evolve and we'll post updates as they happen.



What's new > Other new things arriving with V2.0 >
What Happened to CMMI for Services? Development? Acquisition?

The distinctions for Services, Development and Acquisition (and soon, others) still exist. However, they are no longer their own separate models.

There's only one CMMI now. The CMMI contains all the PAs for all the disciplines.

Users select PAs from among the 25. Certain selections—when appraised and resulting in a Maturity Level—can be identified as being associated with Services, Development, etc.

As noted earlier, instead of what were previously called "constellations", there are particular "views" of the 25 CMMI PAs for seeing the Services, Development, etc., disciplines as appropriate for the needs of the organization.

The de-emphasis of the different disciplines is consistent with the de-emphasis on compliance and pivot to performance.

BACK TO Top | BACK TO What's new | BACK TO Other new things arriving with V2.0




If you're reading this you might be wondering something along the lines of,
"How much of what we've already done is now trashed?"
That question is the inverse of
"How much of what we've already done can we still use?"

Our brief answer to either of these questions is: Don't Panic!

Enough of the model, appraisal method, and training from v1.3 will be familiar under v2.0. We've said it here, above, we've said it in the older version of the FAQ, and we've said it every.single.time we're asked to speak publicly about it:
The CMMI's true value is as one tool in the pursuit of new capabilities towards improved performance.

The long answer is—

It's sad that in too many organizations CMMI is their only formal improvement activity. And in these organizations their only way to even remotely pretend that they're living as a "continuous improvement" operation is that they have (had) appraisals every three years or so.

The "Generic Practices" in prior versions of CMMI were supposed to provide an outline for the elements of effective processes and continuous improvement. These practices—when done well—would be common among organizations whose leaders were involved in improvement; where having something to learn and improve was not a sign that people are incompetent; where experimentation and unpopular ideas were not shot down; and where helping the operation improve performance was seen as part of everyone's job.

These characteristics showed up as people having the time and resources to think about processes and the authority to make changes. These showed up as companies having comprehensive, regular quality-oriented activities and internal audits. And these showed up as people being assigned to oversee processes and staff being regularly trained in process-related disciplines.

Unfortunately, too many companies were no more invested in organizational continuous improvement than they were in improving the performance of the capabilities core to their work. They were only invested in the least they could possibly do to achieve their target appraisal rating.

While it's easy to bash these companies, it's not like they're not behaving in entirely predictable ways. [Sorry for the double-negative.] Their behavior is entirely predictable. So predictable it's cliché.

It's also not their fault that they exploit whatever they need to minimize disruptions to their revenue models. Looking at it objectively and in a non-zero-sum ecosystem, companies—especially smaller ones—are often pushed into these modes of behavior.

CMMI has long suffered under the legacy of paradigms that only saw the world through lenses whose focal length only allowed larger companies to be seen. Smaller companies live in a different relationship with their customers and have different operating models than most larger companies. In many ways and in as many situations, CMMI assumes the operations of smaller companies require more robust capabilities than their operational models actually need.
Imposing a requirement to run an operation with significantly more process overhead than is needed to keep its customers happy and the business afloat is preposterous.

But this truth was often lost on many parties to the CMMI table.
This baby has many fathers. Though, no one of them can be said to be deliberately making it difficult.

What we're all seeing is a confluence of several unintended consequences.

Much of the CMMI's content came from contributions from large companies. Even companies that were considered "small" weren't particularly small. And by and large, all contributing companies were in the fields of large and complex engineering and IT services operations. Beyond this, these companies were often the beneficiaries of deep industrial roots and extensive process infrastructures.

Both the model and the appraisal method inherited underlying assumptions about how businesses operate and how users would respond to anticipated appraisal characteristics. The materials included embedded cognitive bias against seeing possibilities these contributors could not see themselves experiencing in the future. Dissent from less "established" voices was easily dismissed as "unlikely edge cases".

As a result the appraisal method, with its focus on evidence and de-emphasis on whether CMMI practices were relevant or adding value, became easily gamed by companies' abilities to serve artifacts and answer questions entirely devoid of value-added performance. (In full disclosure, the artifact focus of CMMI v1.x appraisals was itself a response to earlier issues found with inconsistent process evaluations attributed to a stronger reliance on the individual idiosyncrasies of evaluators and evaluation teams.)

Also, the manic emphasis on maturity levels suffered dreadfully from a measurement affliction credited to British economist, Charles Goodhart. Goodhart's law states,

Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.

Later, this was restated more colloquially in its more popular form as,

When a measure becomes a target, it ceases to be a good measure.

In other words: Forced measures get gamed.

Many of the contributors to the prior versions of CMMI came from organizations with sufficient resources if not also intrinsic incentives to have robust improvement infrastructure. As one engineering VP of a small engineering firm once put it to us,
"If I hire you as an engineer you'd better f**king know why you should be doing configuration management!"

As such, these companies saw themselves as "serious" CMMI users and thought all other such "serious" users would emulate their same attitudes and behaviors. In many ways, they were and still are correct. "Serious" CMMI users don't typically have to be fed process and performance improvement practices with a baby spoon.

Having said that, the premise of the assumption is false. In particular, the assumption conflates the idea of being "serious" about CMMI with being "serious" about improving performance.
An organization can be serious about improving performance without invoking CMMI at all!

Unfortunately, the model and appraisal method made it hard for organizations outside the contributors' paradigms to easily follow CMMI's ideals.

Having said that, there were still plenty of companies scarcely putting in enough effort to even "game the system" effectively! So one can forgive the contributors' attitudes about "non-serious" companies. As we all know, it doesn't take many bad apples to spoil the bunch.

In sum, what doesn't change with CMMI V2.0 is the core idea: fostering new capabilities and improved performance. V2.0 hasn't deviated from this. However, to help companies be more honest about their improvement efforts, the changes to the model architecture, contents, and appraisal method described above are designed to be more adaptable to more kinds of companies and to truly weed out the bad apples.

Oh, P.S.: if you're a very small operation that would ordinarily use only one project in your appraisal—often because you only have one big project going on at a time—the sampling rules and "Randomly Generated Sample" won't really change much of the experience for you.





Welcome weary traveller! We've made this site just for you, really.

Let's start you off here.

We've broken up the FAQs into the following sections (there will be much cross-over, as can be expected):


Model FAQs for Newbies


Appraisals/Ratings FAQs


CMMI, Agile, Kanban, Lean, LifeCycles and other Process Concepts FAQs


SEI / CMMI Institute / ISACA


Training FAQs


Specific Model Content FAQs



Model FAQs


A: CMMI stands for "Capability Maturity Model Integration". Why "integration"? It's the integration of several other CMMs (Capability Maturity Models). By integrating these other CMMs, it also becomes an integration of what used to be called "process areas" (now called "practice areas") and the practices within the model in ways that previous incarnations of the model(s) didn't achieve. The CMMI is a framework for business performance improvement. In other words, it is a model or framework for building improvement systems. In the same way that models are used to guide thinking and analysis on how to build other things (algorithms, buildings, molecules), CMMI is used to build improvement systems.

It is NOT an engineering development standard or a development life cycle. Please take a moment to re-read and reflect on that before continuing.

There are many "views" of CMMI. These views are organized around particular areas of interest such as product development, service delivery, supplier management, and people. The most famous view is the CMMI for Development—i.e., "DEV". Counting its CMM ancestors, it has been around (in one version or another) since the 1980s. (Which explains a lot.)

All views share many things, but fundamentally, they are all nothing more than frameworks for assembling improvement systems. Each view has content that targets improvements in particular areas, tuned to organizations whose primary work effort:

  • Develops products and complex services, and/or
  • Sources/Acquires goods and services from others, and/or
  • Provides/delivers services, and/or
  • Effectively works with People and their relationships to the organization and the work, and/or
  • Deals with security, and/or
  • Other disciplines in the future.

CMMI does not actually contain processes in any form. CMMI alone cannot be used to actually develop products, acquire goods, fulfill services, or operate a real business. The assumption with CMMI is that the organization has its own operating models, standards, processes, and procedures by which it actually gets things done for real. CMMI's content exists to improve the performance of all that stuff—not to define it.

Having said that, it should be noted that there will (hopefully) be overlaps between what any given organization already does and the content of CMMI. This overlap should not be misinterpreted as a sign that CMMI content *is*, in fact, process or performance content. It can't be over-emphasized: CMMI, despite being chock-full-o examples and explanations, does not contain "how to" do anything. What CMMI contains are areas where known practices can help deliver value to performance improvement activities.
The overlap between actual work and CMMI content is easy to explain: activities that help improve performance can also be activities to effectively perform a process.

Think of it this way: a thermostat's function is to work with the heating appliance to maintain the temperature of a room. It is merely a sensor on a switch. (Or a switch on a sensor?) It has a specific purpose. This same item can also be part of one's effort to improve environmental impact, control costs, and manage energy. The object hasn't changed, but its use is enhanced to deliver greater value.

CMMI practices are the same in many ways. Things get hairy where an organization doesn't perform even the basic activities necessary to carry out the processes associated with practice areas well. This is much like an organization occupying a room and leaving that room's thermostat at the same temperature all day, every week, all year, despite the surrounding environmental realities or the occupancy of the room indicating that the set temperature is only required 20% of the time.
So, what seems trivial and commonplace to one organization is salvation from despair to another.

Before we get too off-track... CMMI is meant to help organizations improve their performance of and capability to consistently and predictably deliver the products, services, and sourced goods their customers want, when they want them, and at a price they're willing to pay. From a purely inwardly-facing perspective, CMMI helps companies improve operational performance by:

  • lowering the cost,
  • raising the predictability,
  • increasing the productivity

of production, delivery, and sourcing.

Without some insight into and control over their internal business processes, how else can a company know how well they're doing before it's too late to do anything about it? And if/when they wait until the end of a project or work package to see how close/far they were to their promises/expectations, without some idea of what their processes are and how they work, how else could a company ever make whatever changes or improvements they'd want/need to make in order to do better next time?

CMMI provides the framework from which to pursue these sorts of insights and activities for improvement. It's a place to start, not a final destination. CMMI can't tell an organization what is or isn't important to them. CMMI, however, can provide a path for an organization to achieve its performance goals.

Furthermore, CMMI is just a model; it's not reality. Like any other model, CMMI reflects one version of reality, and like most models, it's rather idealistic and unrealistic—at least in some ways. When understood as *just* a model, people implementing CMMI have a much higher chance of implementing something of lasting value. As a model, what CMMI lacks is context: specifically, the context of the organization in which it will be implemented for performance improvement. Together with the organization's context, CMMI can be applied to create a performance improvement solution appropriate to each unique organization.

Putting it all together: CMMI is a model for building improvement systems from which (astute) organizations will abstract and create improvement solutions that fit their unique environment to help them improve their operational performance.

At the risk of seeming self-serving, the following addresses the question of what CMMI is:

Keys to Enabling CMMI.

Back to Model FAQs for Newbies


Is CMMI for us?

A: We should start the answer to this question with a quick sentence about what CMMI itself *is*.

CMMI is about improving performance through improving operational activities. In particular, it's improving work associated with managing how organizations develop or acquire solution-based wares and define and deliver their services. So we should ask you a few questions before we answer yours:

  • Do you feel that you ought to be looking at improving your performance?
  • What business performance improvements would you like to see from your operations?
  • What worries you most about your business performance?
  • Could you handle all the work if you won all the work you're pursuing?
  • What's the first thing that will break as soon as you reach capacity?
  • Is your workstream continually delivering or is there more work piling up in your workstream than what's being delivered?

SO, is CMMI right for you? Obviously this depends on what you're trying to accomplish. And to be perfectly honest, CMMI doesn't always have the fastest or best answers to these questions. But that doesn't mean you can't use CMMI to shed light on what's going on.

Now, for the rest of the response... Sometimes it's best to "divide and conquer". How you view CMMI is frequently influenced by an organization's world view. We'll divide the world into two groups: those who develop wares and provide services for US Federal and other governmental agencies (or contractors) and those who don't.

Those of you in the former group will probably come across CMMI in the form of a pre-qualifier in some RFP. As such, you're probably looking at CMMI as a necessary evil, regardless of whether or not you feel your processes need to be addressed in any way. If you're in this group, there aren't many loopholes.

One strong case for why your company might not need to mess with CMMI would be if you are selling a product of your own specification. Something that might be called "shrink-wrapped" or even COTS (Commercial Off-The-Shelf). While looking at CMMI for improvement wouldn't be a bad idea, the point is that unless you are developing wares from scratch to a government (or a Prime's) specification, you ought to be able to avoid having someone else require or expect you to pursue CMMI practices when you otherwise might not do so.

A couple of exceptions to this "rule of thumb" would be (a) if you are entering the world of custom wares for the Feds, even though you currently aren't in it, and/or (b) if your product might need enough modifications or out-of-spec maintenance for the government to buy/use it. Governments have an all-too-regular habit of buying a product "as is" functionally, and then realizing that what they need only kinda looks like the original product but is really different. Knowing this, some agencies and prime contractors use the CMMI's appraisal method as part of their due diligence before wedding themselves to a product or vendor. (Though this is fading from common practice.)

If you're in the latter group (remember... those who don't sell to the Feds or their Primes), then the question is really this: "what's not working for you with your current way of running your operation?" You'll need to get crystal clear about that. There are certain things CMMI can't really help you with, such as marketing and communications. OK, it could, but if managing your customers and marketing are your biggest challenges, you've got other fish to fry, and frying them with CMMI is a really long way around to get them into the pan. Don't get us wrong, there are aspects of CMMI that can be applied to anything related to *how* you do business. But if you are worrying about where the next meal is coming from, you might be hungry for a while before the ROI from CMMI brings home the bacon.

Having said that... If you're finding that

  • customer acquisition, satisfaction, or retention, and/or
  • project success, profitability, predictability, or timeliness, and/or
  • employee acquisition, satisfaction, or retention, and/or
  • service level accuracy, predictability, cycle or lead time

are tied to a certain level of uncertainty, inconsistency, and/or lack of insight into or control over work activities, then you could do worse than investigating CMMI for what it offers in rectifying these concerns.

Let's be very clear up front: There are many business performance issues. CMMI can help put you into a better position to deal with these issues, but it can't solve them all. CMMI can shed light onto issues, and you can even be applying practices as intended and achieving the value of the practices as desired. If performance is still not improving, then you must ask yourself whether you're learning from your performance issues or just covering them with practices. Performing practices without measuring their effect on the operation misses the opportunities to improve any business results afforded by CMMI. The key is to ask "WHY" the performance is what it is, and to keep asking this question to more deeply understand the contribution of decisions that lead to the results.

Back to Model FAQs for Newbies


Is CMMI Dead?

A: NO.

NOTE: This answer assumes you know a thing or two about CMMI, so we won't be explaining some terms you'll find answered elsewhere in this FAQ.

In the US, the DOD no longer mandates use of CMMI as a "minimum pre-qualification", but it does view CMMI as a benefit in offerors' proposals. In addition, CMMI continues to be among the ubiquitous differentiators in proposals to non-defense agencies. And most (if not all) of the "big integrators", defense, infrastructure, and aerospace firms who use CMMI continue to use it and expect its use by their subcontractors.

In short, CMMI is far from dead, and, with new initiatives (in content and appraisal approaches) under way and planned for at the CMMI Institute, the relevance and applicability of CMMI to the broader market is expected to continue for the foreseeable future.

Back to Model FAQs for Newbies


How many processes are there in CMMI?

A: NONE. Zero. Zip. Nada. Rien. Nil. Bupkis. Big ol' goose-egg. There's not a single process in all of CMMI. They're called Practice Areas (PAs), and we're not being obtuse or overly pedantic about semantics. The distinction between processes and Practice Areas is an important one to understand.

So contrary to anything you've heard before, there are *no* processes in CMMI. No processes, no procedures, no work instructions, nothing. This is often very confusing to CMMI newcomers. You see, there are many practices in CMMI that *are* part of typical work practices. Sometimes they are almost exactly what a given project, work effort, service group, or organization might do, but sometimes the practices in CMMI sound the same as likely typical practices in name only, and the similarity ends there. Despite the similar names used in typical work practices and in CMMI, they are *not* to be assumed to be referring to one and the same activities. That alone is enough to cause endless hours, days, or months of confusion. What CMMI practices are, are practices that improve the performance of existing work but do not *define* what that work must be for any given activity or organization.

The sad reality is that many organizations haven't taken the time to look at and understand the present state of their actual work. As a result, not only do they not know everything they need to know merely to run their operation, they then look to CMMI as a means of defining their own practices! As one might guess, this approach often leads rapidly to failure and disillusionment.

How you run your operation will undoubtedly include practices that may happen at any point in time during the course of the work. Irrespective of where these activities take place in reality, the CMMI PAs are collections of practices to improve those activities. CMMI practices are not to be interpreted as being in a necessary sequence, or as intrinsically distinct from existing activities or from one CMMI practice to another. Simply put, CMMI practices are activities intended to achieve some value. Value, we might add, that ought to be tied to business objectives more substantial than simply achieving a level rating. There's so much more to say here, but it would take a site unto itself to do so. Besides, we never answered the question....

... in the current version of CMMI there are 25 Practice Areas (PAs).

These Practice Areas can be sliced, diced, and viewed in levels, capability areas, practice groups, and combinations and permutations no one's thought of yet, to suit a user's needs.

Each Practice Area (PA) is broken down into:

  • Required Information, and
  • Explanatory Information

The Required information is further broken down into:

  • Intent,
  • Value, and
  • Additional Required Information.

For example, there's a Practice Area called "Decision Analysis and Resolution" abbreviated, "DAR".

The intent of DAR is to "Make and record decisions using a recorded process that analyzes alternatives."

The value of DAR is that it "Increases the objectivity of decision making and the probability of selecting the optimal solution."

When trying to understand your processes, you are encouraged to ask yourselves, "are we getting this value?" and, "are we meeting this intent?"

If you're planning on having an appraisal—and we're guessing you're here because you are—you can bet that the appraisal team will be asking these questions about your processes.

Then, there's another layer inside the PA. As stated above, there are practices. The practices are, by far, the majority of the CMMI model material. Each practice is labeled and broken into required and explanatory information. And the required information is then broken into a practice-specific value tied to its practice statement. The explanatory information provides examples and guidance and other useful tidbits for understanding the practice.

For example, the first practice in DAR is labeled, DAR 1.1.

The Practice Statement is: "Define and record the alternatives."

The Value is that "A clear definition and understanding of the alternatives to be made reduces potential rework." You might already notice that the practice isn't telling you how to do the practice. If you're thinking ahead, you'll realize that a process that merely states "Define and record the alternatives" probably won't work very well in practical terms. Details such as when, why, how, and where aren't explained in "Define and record the alternatives". You can bet that an appraisal team will pick up on that.

On the other hand, if your organization has a process for doing trade-off analyses (for example), it likely already does this practice. And, if doing this practice is part of making decisions objectively and selecting optimal solutions, you're likely all set with that one.
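To make the nesting concrete, here's a toy data model of a PA and its practices, with DAR filled in using the statements quoted above. This is purely our own illustrative sketch, not an official CMMI schema; the class names and fields are our invention.

```python
from dataclasses import dataclass, field

@dataclass
class Practice:
    label: str       # e.g., "DAR 1.1"
    statement: str   # the required Practice Statement
    value: str       # the practice-specific Value

@dataclass
class PracticeArea:
    name: str
    abbrev: str
    intent: str      # Required Information: Intent
    value: str       # Required Information: Value
    practices: list = field(default_factory=list)

# DAR, populated with the text quoted earlier in this answer.
dar = PracticeArea(
    name="Decision Analysis and Resolution",
    abbrev="DAR",
    intent="Make and record decisions using a recorded process "
           "that analyzes alternatives.",
    value="Increases the objectivity of decision making and the "
          "probability of selecting the optimal solution.",
)
dar.practices.append(Practice(
    label="DAR 1.1",
    statement="Define and record the alternatives.",
    value="A clear definition and understanding of the alternatives "
          "to be made reduces potential rework.",
))

print(dar.abbrev, len(dar.practices))  # → DAR 1
```

Again, nothing here is part of the model itself; it's just one way to picture how Intent, Value, and the labeled practices fit together inside a PA.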

The 25 Practice Areas and their intents are provided for you here in alphabetical order by PA. Keep in mind, these are all of the current practice areas; there is no requirement that you use all of them, and we have not yet explained which are required for you. *Some* organizations may use all of them, but that is extremely rare. You are more likely to use a subset targeted at particular capabilities (or requirements) relevant to your organization.

You are advised to obtain a copy of the "Model At-A-Glance" for the next step in obtaining further details.

    Causal Analysis and Resolution (CAR)
    Identify causes of selected outcomes and take action to either prevent recurrence of undesirable outcomes or ensure recurrence of positive outcomes.
    Configuration Management (CM)
    Manage the integrity of work products using configuration identification, version control, change control, and audits.
    Continuity (CONT)
    Plan mitigation activities for significant disruptions to business operations so that work can continue or resume.
    Decision Analysis and Resolution (DAR)
    Make and record decisions using a recorded process that analyzes alternatives.
    Estimating (EST)
    Estimate the size, effort, duration, and cost of the work and resources needed to develop, acquire, or deliver the solution.
    Governance (GOV)
    Provides guidance to senior management on their role in the sponsorship and governance of process activities.
    Implementation Infrastructure (II)
    Ensure that the processes important to an organization are persistently and habitually used and improved.
    Incident Resolution and Prevention (IRP)
    Resolve and prevent disruptions promptly to sustain service delivery levels.
    Managing Performance and Measurement (MPM)
    Manage performance using measurement and analysis to achieve business objectives.
    Monitor and Control (MC)
    Provide an understanding of the project progress so appropriate corrective actions can be taken when performance deviates significantly from plans.
    Organizational Training (OT)
    Develop the skills and knowledge of personnel so they perform their roles efficiently and effectively.
    Peer Reviews (PR)
    Identify and address work product issues through reviews by the producer’s peers or Subject Matter Experts (SMEs).
    Planning (PLAN)
    Develop plans to describe what is needed to accomplish the work within the standards and constraints of the organization, including the:
    » Budget
    » Schedule
    » Resource demand, capacity and availability
    » Quality
    » Functionality requirements
    » Risks and opportunities

    Plans also describe:
    » The work to be performed
    » Applicable organizational set of standard processes, assets, and tailoring guidelines
    » Dependencies
    » Who performs the work
    » Relationships with other plans
    » Stakeholders and their role
    Process Asset Development (PAD)
    Develop and keep updated the process assets necessary to perform the work.
    Process Management (PCM)
    Manages and implements the continuous improvement of processes and infrastructure to:
    » Support accomplishing business objectives
    » Identify and implement the most beneficial process improvements
    » Make the results of process improvement visible, accessible, and sustainable
    Process Quality Assurance (PQA)
    Verify and enable improvement of the quality of the performed processes and resulting work products.
    Product Integration (PI)
    Integrate and deliver the solution that addresses functionality and quality requirements.
    Requirements Development and Management (RDM)
    Elicit requirements, ensure common understanding by stakeholders, and align requirements, plans, and work products.
    Risk and Opportunity Management (RSK)
    Identify, record, analyze, and manage potential risks or opportunities.
    Service Delivery Management (SDM)
    Deliver services and manage the service delivery system.
    Strategic Service Management (STSM)
    Develop and deploy standard services that are compatible with strategic business needs and plans.
    Supplier Agreement Management (SAM)
    Establish an agreement with selected suppliers, ensure that the supplier and the acquirer perform according to the terms over the course of the agreement, and evaluate the supplier’s deliverables.
    Supplier Source Selection (SSS)
    Develop and keep updated a package of materials used to seek proposals from potential suppliers and select one or more suppliers to deliver the solution.
    Technical Solution (TS)
    Design and build solutions that meet customer requirements.
    Verification and Validation (VV)
    Verification and validation includes activities that:
    » Confirm selected solutions and components meet their requirements
    » Demonstrate selected solutions and components fulfill their intended use in their target environment

Back to Model FAQs for Newbies


How are the Practice Areas organized?

A: As they say in American pop culture, this is the $64,000 Question.

The Practice Areas can be viewed in several constructs.

Depending on your needs, you can organize the PAs however suits those needs. If an appraisal is among your needs, there are several ways to view the PAs that align to the scope of the appraisal. In fact, *unless* you need to have an appraisal, the formal organization of the PAs has little impact on how you use CMMI to improve your performance.

To help provide some food for thought not necessarily related to appraisals, CMMI V2.0 organizes the PAs according to ten (10) "Capability Areas". You can think of Capability Areas as groups of 2-4 PAs that work together in somewhat distinct ways. (One Capability Area currently has only one PA in it, but that's likely to change sometime in the near future.) You can further break the Capability Areas down into five (5) broad activities. This structure is best captured in this image taken from the CMMI Institute's "Model At-A-Glance":
CMMI Capability Areas

Because this site is not a replacement for structured, official CMMI training, we won't get into all the different ways of viewing CMMI. And again, because any formalized organization of the PAs only matters—materially—for the purposes of appraisals, we'll leave further discussion to that section of this FAQ.

Back to Model FAQs for Newbies


What is each Practice Area made up of?

A: How'd you miss this? Go back to here and read again.

We will, however, take this opportunity to expound upon the "Explanatory" information found in each of the PAs and practices. This informative material is very useful. It's specific to each PA and its practices. Readers are well-advised to focus on the explanatory information in the context of the Intent and Value of each PA. When it comes time to be appraised, the appraisal team determines whether your work demonstrates that the practices meet the intent and achieve the value. WORD TO THE WISE: DO NOT merely read the practice area names and practice statements and just make up your own meanings. If your objective includes an appraisal to demonstrate that you are effectively using CMMI, not reading the explanatory information is suicide.

For more on what matters in an appraisal go here. If all you want is improvement, and appraisals are not necessarily important, then it doesn't really matter how the model is organized. Use anything in it to make your operation perform better!

Back to Model FAQs for Newbies



How do the Levels relate to one another and how does one progress through them?

A: At the risk of invoking the need to have knowledge of prior versions of CMMI, we'll point out that V2.0 greatly improves on the concept of levels. If you know about prior versions you'll see what we mean. If you aren't familiar with "levels" from prior versions of CMMI, don't worry. You don't really need to know anything about them for our purposes.

This is really a two-part question. The first question is, "how do the levels relate to one another?" Let's take that first.
Levels are tied to practices. Each PA comprises practices, broken out into levels 1-5 with one or more practices at each level. Practices increase in sophistication as the levels increase. For example, a level 1 practice might be something like, "Identify and record [things] and keep them updated." Then the level 2 practices are "Analyze [the things recorded in the level 1 practice]" and "Monitor [the things recorded in the level 1 practice]". And the level 3 practices are something like, "Use the analysis...; Make plans based on...; Establish strategies for..." You can intuitively see that Analyzing and Monitoring are more sophisticated than merely Identifying and Recording, and that basing decisions and plans on the monitoring and analysis is further sophistication still. These levels within the PAs are essentially levels of Capability.

These levels accumulate an operation's capabilities in any particular area of work. This isn't terribly different from an individual's capabilities as they grow with experience and new knowledge.

When an operation is capable in several different areas (much like any one of us) it has greater maturity. For purposes of establishing a consistent reference system (and, one that is somewhat backwards compatible with prior versions of CMMI), the CMMI Institute defines a "maturity level" as a specific set of PAs with each PA in that set being performed at a defined level of capability—regardless of the total levels available within the PA.
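That definition of a maturity level can be sketched as a simple check, assuming a hypothetical mapping of PAs to required capability levels. To be clear: the PA sets that actually make up each maturity level are defined by the CMMI Institute, and the mapping below is made up purely for illustration.

```python
# Illustrative only: the "required" mapping is invented, not real model
# content. The idea: a maturity level is met when every PA in the defined
# set is performed at (or above) its required capability level.

def meets_maturity_level(required: dict, achieved: dict) -> bool:
    """required: PA -> capability level the maturity level demands.
    achieved: PA -> capability level the organization demonstrates."""
    return all(achieved.get(pa, 0) >= level for pa, level in required.items())

# Hypothetical example data:
required = {"PLAN": 2, "CM": 2, "II": 2, "GOV": 2}
achieved = {"PLAN": 3, "CM": 2, "II": 2, "GOV": 2, "EST": 2}
print(meets_maturity_level(required, achieved))  # → True
```

Note that exceeding a required capability level (PLAN at 3 here) doesn't hurt; the maturity level only demands a minimum in each PA of the set.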

While the maximum Maturity Level is 5, Capability Levels top out at Level 3. Since you're new, it probably won't be obvious why this is so. The simplest explanation is that the capabilities required to mature beyond level 3 involve too many dependencies across PAs (i.e., maturity). Another piece of the explanation has to do with the applicability of practices in levels 4 and 5 and where these practices are typically applied. Again—a discussion that's a bit advanced for beginners.

Levels are also covered in a nice page by the CMMI Institute.

We don't want to forget the 2nd part of the question, "how does one progress through them?"

Referencing back to our repeated mantra about performance improvement vs. appraisals, we'll again remind you, dear reader, that when improving performance is the desire, certain aspects of the CMMI only ever matter in the context of appraisals. Yes, there's a logical argument one can make about how the levels provide a rationalized path for that improvement, but under scrutiny that argument feels too much like a shill for the product itself. OK. Stepping off the soapbox... The defined maturity levels can be worked through as you'd expect: start with something lower and work your way to something higher. The contents of the maturity levels are nicely depicted about a third of the way down this page.

Since capability levels only depend on the levels within PAs (plus some other appraisal-related requirements we'll address later), the "progression" through capability levels is merely starting in any PA with level 1 practices and adding level 2 practices, then level 3, etc.

Incidentally, all current PAs have at least 3 levels to them except one, Configuration Management (CM).

And, we might as well be complete with our answer, the following PAs have up to four (4) levels:

  • Governance (GOV),
  • Process Management (PCM),
  • Planning (PLAN), and
  • Supplier Agreement Management (SAM)

And, these are the only PAs with five (5) levels:

  • Causal Analysis and Resolution (CAR) and
  • Managing Performance and Measurement (MPM)


Back to Model FAQs for Newbies



I've heard the term process institutionalization. What does that mean?

A: So, you're wondering what this business about institutionalization is. It means the extent to which your processes have taken root within your organization. It's not just a matter of how widespread the processes are, because institutionalization can take place even in 1-project organizations. So then, it's really about:

- how they're performed,
- how they're managed,
- how they're defined,
- how they're supported,
- the attention they receive from leadership,
- the extent to which people have the time and resources to deal with them,
- who can suggest and make process changes,
- what you measure and control the processes by,
- how you go about continuously improving upon them, and
- how they're used to make decisions about how the business performs.

If we look at what it takes to manage any effort, service, or project, we will find that it takes these same things to manage a process.
In fact, these are just the beginning.
In CMMI V2.0, process institutionalization refers to the organization's processes, NOT the CMMI itself—as was long misunderstood in prior iterations of CMMI. And, in V2.0 (not that you care about V2.0 compared to earlier versions) there are two entire Practice Areas explicitly authored to deal directly with institutionalization:

  • Implementation Infrastructure (II), and
  • Governance (GOV)

Earlier we alluded to a detail we left out about Capability Levels in appraisals. Now is the place to tie off that loose end.

II and GOV are the only PAs that are absolute minimum requirements for any appraisal at any level in any construct of the CMMI. For example, let's say you want just an appraisal for Capability Level 2 in Planning. That appraisal would include the Level 2 practices in Planning PLUS the II and GOV PAs up to and including the Level 2 practices in both II and GOV.
(This paragraph applies to certain types of appraisals. There are several types which, if you've been reading from the top of this section, we haven't yet explained. Nonetheless, if you've heard anything about CMMI, it's probably included some rumors about appraisals and the statements in this paragraph apply to the type you're likely thinking about.)
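To make the scope arithmetic concrete, here's a tiny illustrative sketch in Python. This is purely our own toy—the PA abbreviations are real, but the function and its shape are invented for illustration, not anything official from the CMMI Institute:

```python
# Illustrative sketch only: tallying the minimum scope of a capability-level
# appraisal, per the rule described above. Everything here except the PA
# abbreviations is our own invention.

MANDATORY_PAS = ["II", "GOV"]  # required in every appraisal, at any level

def appraisal_scope(target_pa: str, level: int) -> dict:
    """Return the PAs in scope and the practice levels included for each.

    A Capability Level `level` appraisal of `target_pa` pulls in that PA's
    practices up to and including `level`, PLUS the II and GOV PAs up to
    and including the same level.
    """
    pas = dict.fromkeys([target_pa] + MANDATORY_PAS)
    for pa in pas:
        pas[pa] = list(range(1, level + 1))  # levels 1..level inclusive
    return pas

# Example: a Capability Level 2 appraisal of Planning (PLAN)
scope = appraisal_scope("PLAN", 2)
# → {'PLAN': [1, 2], 'II': [1, 2], 'GOV': [1, 2]}
```

The point the sketch makes: II and GOV ride along no matter which PA you pick, so the smallest possible appraisal is never just one PA.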

This poses an interesting thought: if these two PAs are all about showing that your operation deals effectively with processes—as an abstraction—and these PAs are the minimum scope of even the smallest "Benchmark" appraisal (that's what the most common appraisal is called—when it matters), then they can be appraised as stand-alone practice areas in their own Capability Level appraisal, demonstrating to the world that your operation knows its stuff when it comes to managing processes.

Importantly, keep this in mind: These two practice areas are not self-referential. They don't myopically apply back to Practice Areas. Yes, at some point they do interoperate with other PAs, but the focus of these PAs is your organization's operating processes, not the CMMI practice areas or practices. And, they can apply to all of an organization's processes, NOT only to those covered by CMMI practice areas. And in fact, they are entirely about leadership putting its energies and resources into performance improvement.

We repeat this important distinction. These two practice areas are not about CMMI practices; they are about the organization's processes and its senior management. In fact, in keeping with the concerted effort to separate CMMI practices and practice areas from an organization's processes, we should note, again, that PAs in CMMI V2.0 (as they were previously) are not the definition of processes, just a means of improving organizational processes. Not only those processes associated with PAs in CMMI, but all of an organization's processes. (The scope of an appraisal will limit the processes being evaluated, but the intent and value of the II and GOV PAs are universal and will be evaluated as such.)

This is a BFD.

This puts your organization's leadership of process in the hot seat. And, with the Intent and Value of the PAs up to the appraisal team to judge, CMMI has taken off its gloves when it comes to being serious about true improvement commitment. We dare say no other broad-spectrum performance improvement framework has taken this step. Not even ISO 9000, for those familiar with it. Not the way CMMI has.

Be warned.

Back to Model FAQs for Newbies


What's High Maturity About?

A: "High Maturity" refers to practices at level 4 and level 5. Even though a Maturity Level 4 or Maturity Level 5 appraisal includes a predefined set of PAs, and even though there is no such thing as Capability Level 4 or 5, in casual conversation any practices above level 3 are likely to be referred to as "High Maturity". Interestingly, there's nothing stopping an organization from using level 4 and 5 practices regardless of the scope of their efforts. In fact, regardless of the scope of an operation's improvement needs, we strongly encourage all users to deeply study all 5 practice levels of the Managing Performance and Measurment (MPM) and Causal Analysis and Resolution (CAR) PAs. Organizations that understand what's going on in these PAs—even if they're not going to have an infrastructure for continually operating them—will be in a better frame of mind and will approach other PAs more comprehensively and more efficiently. We are also confident that organizations that internallize the ideas in these PAs will address II and GOV more professionally and effectively.

Collectively, the level 4 and 5 practices are all about making decisions about projects, work, processes, and operational performance based on a data-driven, quantitative understanding of the business. Not opinions, not compliance, not politics, not emotion, not instinct alone, and eventually not on "rearward-looking" deterministic data or empirical data alone; rather, on forward-looking and predictive analysis, heuristics, and probabilities.

They're not just any numbers, but numbers that tie directly into the organization's business and performance goals, and not just macro-level goal numbers but numbers that come from doing the work on projects and services that are fed back into the planning of those projects and services to predict performance and direct activities.

This sort of quantity-centric ability is enabled by a baseline understanding of the organization's current capabilities coupled with predictive analysis of the organization's processes. These provide the organization with an idea of what their processes are really doing and what they can actually do for the bottom line. This sort of quantitative modeling is not typically based on macro-level activities, but on activities that can be contained and controlled and whose systems effects are understood.


How many different ways are there to implement CMMI?

A: Infinite, but 2 are most common.

But before we get into that, let's set the record straight. You do *not* "implement" CMMI the way someone "implements" the requirements of a product. The only thing getting "implemented" are your organization's work flows along with whatever "standard processes" and associated procedures your organization feels are appropriate—not what's in CMMI. CMMI has nothing to do with how you actually make money. CMMI is behind the scenes. It's not part of your value stream. If it is, it's in the way. You're doing it wrong.

CMMI is just a set of practices to help you *improve* the performance of whatever you've got going on. CAUTION: If whatever you've got going on is garbage, CMMI is unlikely to help. The most CMMI will do is brutally and painfully expose that your operation is in flames. AND, if you create your organization's processes only using CMMI's practices as a template you'll not only never get anything of value done but your organization's work flows will be dreadfully lacking all the important and necessary activities to operate the business!

Let's say that again: You need to know what makes your business work. You need to know how to get work done. You need to know what your own work flows are. And you need to know the measures that you control that are important to your success BEFORE you will get anything good from CMMI. CMMI is awful as a process template! The *BEST* way to use any CMMI practice is to read that practice and ask yourself any of the following questions:

  • "Where in our workflow does *that* happen?"
  • "How does *that* show up?"
  • "What do we do that accomplishes *that*?"
  • Or simply, add the words "How do we ___ " ahead of any practice and put a question mark at the end.

Then you compare your answers to these questions to the Intent of the Practices and Intent and Value of the PA and ask yourself, "Are we meeting that intent? Are we getting that value?" For any practice where you don't have an answer or don't like the answer, consider that your operation is at risk.
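If it helps, that question-forming trick is literally mechanical. Here's a toy sketch of it in Python—our own invention, and the practice statement in the example is paraphrased, not verbatim model text:

```python
def practice_to_question(practice_statement: str) -> str:
    """Turn a practice statement into a self-check question by prefixing
    'How do we' and appending a question mark (toy example only)."""
    s = practice_statement.rstrip(".")
    # Lower-case the leading verb so the question reads naturally.
    return f"How do we {s[0].lower()}{s[1:]}?"

# A paraphrased practice statement (not verbatim model text):
print(practice_to_question("Develop and keep updated the approach for managing the work."))
# → How do we develop and keep updated the approach for managing the work?
```

The code is trivial on purpose: the hard part isn't forming the question, it's honestly answering it against your own workflow.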

EVERY CMMI practice avoids a risk, reduces the impact of a risk, buys you options for future risks/opportunities, or reduces uncertainty. EVERY.SINGLE.ONE.

You might need a bit of expert guidance to help you refactor the practice so that it appears more relevant and useful to your particular needs, but there is a value-add or other benefit to every practice. Truly.
(Admittedly, whether or not there's value to *your* business to modify your behavior to realize the benefit of a given practice is an entirely different question. The unfortunate reality is that too many businesses are in the unenviable position of having to demonstrate CMMI ratings despite many practices in the model being dimly relevant to their success or risks.)

Now, as far as the "2 most common approaches". There's what we call the blunt-object (or silo'd or stove-piped or pathological box-checking) approach, which is, unfortunately, what seems to be the most common approach—in our observation—among companies who don't really need CMMI for anything useful as well as among those companies who give marketing lip-service to just about everything they do anyway.

In this approach CMMI is implemented with the grace and finesse of a heavy, blunt object at the end of a long handle—impacting organizations and managers' collective craniums. This is most commonly found among organizations who care not one whit about actual performance improvement and only care about advertising their ratings.

And then, there's the reality-based approach, in which processes are implemented in such a way that work and service personnel may not even know it's happening. And, the way in which they're implemented conveys that they meet the intent of the model practices. Can you guess which one we advocate?

The blunt-object approach resembles what many performance improvement experts call "process silos", "stove pipes", or "layers". Psychologists will often refer to this behavior as pathological box-checking. (Appraisers can be this way as well.) This approach is also often implemented *to* a development team *by* some external process entity with brute force and very extreme prejudice. So, not only does the blunt approach employ some very unsavory techniques, subjecting its royal subjects to cruel and unusual process punishment, it also (in its design) is characterized by a "look and feel" of a process where each process is in its own vacuum, without any connection to other processes (or to reality, for that matter), and where the practices of the processes are somehow expected to be performed serially, from one to the next, in the absence of any other organizational context. Not only that, but the organization's processes somehow magically mirror the names of the CMMI PAs, and the practices within them. In order. Gee, how did that happen?

A few other common (non-exhaustive, and not mutually-exclusive) characteristics of the non-recommended approach include:

  • Heavy emphasis on compliance irrespective of performance.
  • Little or no input from staff on what the processes should be.
  • Using CMMI practices as project or process "requirements".
  • Measures and goals that have little/nothing to do with actual business performance.
  • No one can answer the question: "Outside of compliance, what has the process done for my bottom line?"
  • Complaints about the "cost of compliance" from people who actually watch things like the bottom line.

If so many implementations of CMMI are guided by an (internal or external process) "expert", one might (justifiably) wonder how and why CMMI processes could ever be implemented in such an obviously poorly conceived approach!

There are two (sometimes inter-related) reasons:

  1. Lack of understanding of the model, and
  2. Being an expert process auditor, and not a performance improvement expert.

Unfortunately, being an expert process auditor does not make someone a performance improvement expert. However, one need not prove themselves an expert in performance improvement to train, consult, or appraise in the CMMI. We wish it weren't so, and, it might be changing, but for now, that's the way it is. So, what you have are many people who become "experts" in CMMI, but they're really only experts in the model's text and in appraising an organization's ability to read the text and produce text-book artifacts. They're not necessarily experts in performance improvement or performance excellence, in general, or in implementing CMMI in particular.

We've come across countless examples of organizations' attempts to implement CMMI while being led by someone (or several people) who was at least one of the two types of persons, and too frequently, both at once. Frightening, but true. The jury is still out on whether it's worse to be led by such a non-expert or to attempt "Do-It-Yourself" CMMI implementation. What the jury is definitely in agreement on is that if your focus is on CMMI and not on improving business performance, you're really wasting your time. Again, we digress....

Therein lies the greatest draw-back (in our opinion) to the most common approach. Instead of performance improvement being an integral and transparent characteristic of everyday work, it becomes a non-productive layer of overhead activity superimposed on top of "real" work. And yet, this seems to be the prevalent way of implementing CMMI! Crazy, huh?

Why is it so prevalent?

That's where the two reasons of poor implementation, above, come in. People who don't understand the model as well as people who don't know process performance from business performance (and therefore may have a weak understanding of the model) don't truly "get" that the model is not prescriptive, and so they attempt to make it a prescription. Auditing and appraising to a prescription is far easier and less ambiguous than auditing and appraising to a robust integrated process infrastructure. Frankly, the "common" approach suits the lowest common denominator of companies and appraisers: those who aren't after true improvement, are only after level ratings, and who are willing (companies sometimes unknowingly) to sacrifice the morale and productivity of their projects for the short-term gain of what becomes a meaningless piece of paper—a testament to the lengths some companies will go to achieve the disenfranchisement of staff.

Alright already! So what's the reality-based approach about?!

The reality-based approach starts with a premise that a successful organization is already doing what it needs to be doing to be successful, and, that performance improvement activities can be designed into the organization's existing routines. Furthermore, the reality-based approach also assumes that, as a business, the organization actually *wants* to increase their operational performance. Note the use of "designed into". This is crucial. This means that for reality-based performance improvement (reality-based CMMI implementation), the operational activities must be known, they must be definable, and, they must be at work for the organization. Then, activities that achieve the Intent and Value of CMMI can be designed into those pre-existing activities.

This whole business of designing performance improvement activities into product/project activities illuminates a simple but powerful fact: effective performance improvement (CMMI included) requires processes to be engineered. Sadly, a recent Google search on "process engineering" turned up few instances where the search term was associated with software processes, and most of those positive hits were about software products, not performance improvement. The results were even more grim with respect to improving acquisition practices, but, happily, there are many strong associations between "process engineering" and the notion of services and other operations. There is hope.

Besides the reality of what's already working, other attributes of our preferred implementation approach are that we don't expect the processes to be done by someone else, and we don't expect them to magically apparate into existence. For both of those attributes to be in place, the reality-based approach doesn't rely on process descriptions to make the processes happen. Instead, the practices that achieve the value and meet the intent of the practice areas are built into the very product, service and project activities of the organization's work, and, the process descriptions simply describe where in that work to find the practices happening.

For what it's worth, this approach is what we at Entinex call AgileCMMI.

Back to Model FAQs for Newbies


Do we have to do everything in the book? Also known as: What's actually required to say that someone's following CMMI?

A: You have to have processes that operate your business effectively. Everything else is mostly details.



Let's be frank (as if we haven't been frank thus far). The only time whether or not you're doing what's in CMMI (or not) matters is if/when you're aiming to be officially appraised. Otherwise, you'd just do whatever you want to get the most improvement out of and ignore what you don't need.

Having said that, the context of this answer is then about what's required for people who want bragging rights for "doing" CMMI, and for the most part, this means that they're going to determine this via a benchmark appraisal. In fact, nowhere in the CMMI model literature does it discuss CMMI "requirements" for process or performance improvement. The II and GOV PAs come close, but they don't actually dictate anything black and white. The model (i.e., CMMI) is very careful to use terms that point the practices at the Intent and Value contained within the PAs. The PAs are broad references to an organization's activities. It's up to you which PAs you want to use to improve performance. Appraisals play a role in the choice of PAs, but ultimately the choice to do an appraisal is yours.

"Managing Performance and Measurement" isn't a single process. It's something that either gets done or doesn't get done. Your operation either manages performance and measurement or it lets performance and measurements flag in the wind. Oh, you *do* manage performance and measurement? Well, good then! Do you collect and record measures? Use them to identify performance issues? Yes? Great! Why do you do this? Can you show how you do that? Beautiful! You're a level 1 company and Managing Performance and Measurement! What?! No? Are you an idiot? (Seriously, you're an idiot if you're not doing this.) Then what they he!! do you do? Oh, you collect revenue and monitor expenses. Oh, you have a P&L and a balance sheet. Well... that's a start. Let's dig into that a little... (And so on.)

We went through the parts of the model that are "required" and not so much "required" earlier. Hopefully you've seen that. We won't repeat it here. However, the form factor of your processes, how you improve performance, and the manner in which you convey and demonstrate them are entirely up to you.

Appraisals are based on evidence. You will have to show evidence for the practices in scope of the appraisal. An organization must be performing some kind of activity that accomplishes the intent of the practices in order to objectively convey that the practice is done. How directly, indirectly, automatically, manually, cleanly, or convolutedly it gets done is not a scale against which the evidence is measured.

If an organization is *doing* something, then it must be resulting in some form of identifiable, tangible output. However, not every organization does the same thing; therefore not every organization produces the same outputs or uses the same jargon, and therefore additional explanatory information is provided, often including examples, to help an organization's understanding of the PAs and practices.

What does this mean for an appraisal or the appraiser? It means that in order to demonstrate that an organization's pursuit of a practice area is satisfied, they might not be able to solely rely on what's in the model. This means that not only might it be a good bit of work before an appraisal for the appraiser(s) to get up to speed and elbow-deep into an organization's processes, but it could even drag with it the need to be somewhat competent in the kind of work an organization does or tools they use. DANGER! That kind of in-depth involvement puts appraisers (and consultants) at some risk: they might be exposed for not being competent in the ways and means of modern operations! (Did we just say that?) Well, in for a penny... let's go the whole way... We have a saying around here, the first part most people have heard of [even if we, ourselves, don't like it]:
Those who cannot do, teach. [We added this next corollary:] Those who cannot teach, audit.

It would be much easier on the appraiser if the text in the model were investigated as-is and if some of the explanatory materials were also expected or required in order to fit into their comfort zone—at your business' expense. This is closely tied to our discussion above regarding the implementation approaches. But until now, we didn't have enough background to get into it. The blunt approach to CMMI is replete with verbatim practices (which is often fine—except where they're just floating out there without being tied to everyday work) and verbatim reproductions of examples. You can, we hope, see how this now smells a little fishy. Rote reproduction of model elements is a tell-tale sign of an organization that doesn't really understand CMMI, or of an appraiser/consultant who's just plain lazy (or worse, incompetent)!

Back to Model FAQs for Newbies


Why does it cost so much?

A: Well that's a loaded and ambiguous question! What qualifies as "so much"? We'll just tell you what goes into the costs here and you can determine whether it's reasonable for you or how you can go about minimizing cost or maximizing value.

Here are the variables that go into the factors that affect cost:

  • Where you are *now* with respect to your implementation of performance improvement—whether or not using CMMI? (i.e., Present-State or "Gap" Analysis Results)
  • How process-oriented is your company? Do you understand performance improvement? Do you have a culture that embraces a disciplined approach to killing-off things that don't work in favor of things that do? Are you well-read on the subject? Do you have performance improvement professionals on staff? Are you dedicating explicit resources to managing your performance improvement activities?
  • How much performance improvement implementation work will your company do on its own? vs.
  • How much performance improvement implementation work will your company need outsider help doing?
  • How much progress do you think you'll be able to make? Meaning, how fast can you absorb change? Will implementing performance improvement always be competing for resources from other work? Will all the time for implementing a performance improvement system be outside ordinary billable hours?
  • Does your company tell a really good story about performance improvement or does it tend to buy certifications in a box?
  • Does your company look at process-related work as something to BS their way through on proposals, or do they genuinely see how it matters? And,
  • How quickly do you want to make progress?
Other considerations include your organization's size, the kind of work you do, the kind of products you build, the techniques and tools you employ to build them, the kind of contracts you find yourself in, your relationship with your clients, the way you manage your projects, the skills your people have, and the nature and composition of your organization and management structures. NOT trivial.

Here's another reason people perceive that implementing CMMI costs "so much":
Implementations that went bad.

There are far more bad implementation stories than success stories. By "bad" we simply mean those implementations that, while many of them did achieve a level rating, and all the while they were spending lots of time and money, they were also causing disillusionment, cynicism, and processes that fundamentally didn't work! It's very easy to screw-up performance improvement efforts—with or without CMMI. Because CMMI is a very comprehensive model, it has the side-effect of potentially complicating improvement. Users misconstrue CMMI as all they need. Therefore the easiest way to screw it up is to attempt to implement the CMMI model as a standard and/or as a checklist and/or by buying so-called CMMI-enabling "tools".

While there are also many ways to become a CMMI implementation "success story", what these stories share in common are the following attributes:

  • Treat performance improvement with the same rigor as revenue work.
  • Have and follow an operating model for how real work is done, then find where/how that reality can be improved as a system.
  • Executive management understands and takes an active role in performance matters beyond "top line". They understand what's being done, what's going to change, how *their* jobs will change, and the meaning of commitment.
  • The organization is a learning organization. "Learning" includes experimenting, failing, and trying new things without fear.
  • Create and sustain a culture of improvement.
  • Recognize that improvement takes time and discipline, exactly like a nutrition and exercise program. And,
  • Improvement can't be done *to* the work or the people; it's done *by* the people through the very nature of their work, not by any explicit "CMMI activities".
But, we are not in a position to give numbers. We hope you now understand why.

Back to Model FAQs for Newbies


Why does it take so long?

A: That's a loaded and ambiguous question! What qualifies as "so long"? We'll just tell you what goes into the time frames here and you can determine whether it's reasonable for you or how you can go about minimizing time or maximizing progress. Please see the previous question.

Back to Model FAQs for Newbies


Why would anyone want to do CMMI if they didn't have to do it to get business?

A: Because they perceive that the way they operate now isn't giving them everything they want or need to be confident in their ability to produce the results they want/expect (profit, happy clients, low overhead, etc.) and to do it in a consistent way. If that's not you, move on. Otherwise, give CMMI a shot and check back here for more elaboration on this topic soon.

Back to Model FAQs for Newbies


Isn't CMMI just about software development?

A: Nope. It can be used for any product developed, services delivered, and supplier sourcing work, as well as other uses coming down the road soon.

Back to Model FAQs for Newbies


What's the key limitation for approaching CMMI?

A: This question comes to us from one of our readers. We love our readers!

There really are no size or scope limitations to CMMI as far as an organization is concerned. CMMI can be put to good use in any sized organization doing any kind of solution development. The real driver (or limitation) is whether there's any need for improvement—and what part(s) of the operation need improving. It's a limitation based in business. As we've said here, CMMI is used as a basis from which to create improvement solutions that fit your particular organization in all its particular particularness. If an organization doesn't have an improvement need, there's no need for an improvement solution, we suppose.

The one limiting attribute of the model is that the organization must pick the correct bits for the type of work it does. On the practical, implementation side, however, there are many limitations. CMMI has broad applicability, which necessitates a certain level of expertise about the decisions that go into effecting performance changes. If we were pressed to pick only one, we'd have to say that misplaced expectations are the #1 common cause, or limiter, in approaching CMMI. Especially the misplaced expectations of senior-level organizational management.

Misplaced expectations explain a lot about why CMMI efforts fail to meet their goals, or, if they succeed, they do so only at the expense of disillusionment, cynicism, lost morale, and employee turnover, on top of the high monetary cost of the so-called "achievement" of a level rating.

Misplaced expectations lead to bad decisions in every aspect of life, and it is no different for CMMI. If we look at CMMI implementation like any other project—which it shouldn't be—it becomes very easy to spot what misplaced expectations lead to: insufficient resources, improper allocation of responsibilities, unrealistic time and cost estimates, insufficient training, inappropriate measures of success, burnout, counterproductive incentives, fear, lack of trust, lack of leadership insight or oversight, lack of leadership accountability for owning the effort and instilling the necessary discipline to make the project succeed, and a dearth of enthusiasm or buy-in from internal stakeholders.

Do you get the picture?

Simply setting one's expectations appropriately with respect to CMMI must be the next step once it is determined that there is something going on that CMMI can help with.

So, in the end, the only solution to mitigating misplaced expectations is to get smart about what the model is, what the model requires, what are the collateral implications of implementation, and how the appraisal works.

After all, implementing CMMI must be a strategic decision. Every respect an organization pays to other serious strategic decisions must be afforded to the decision to pursue CMMI.

We hate that this is the answer, because we wish it were more cut and dried. But the fact that it's not cut and dried is the same issue that leads people to ask about implementation cost and time long before any such answers ought to be sought. There are so many factors involved that the only way to really get around these challenges (limitations) is to get educated in performance improvement, CMMI, and in the appraisal process and requirements.

This FAQ can help a lot. When we set out to create it, we thought it would be helpful. Feedback has indicated that it's been more than helpful; it's a bona fide resource. But the FAQ can't address specific questions from specific companies. After all, those types of questions wouldn't be the "F" in "FAQ", would they?

The authors here believe that these answers ought to be provided to a potential CMMI traveller before they set out on the path. Unfortunately, like an inexperienced canyon hiker who doesn't wear the right shoes or take enough water, we've found that not enough CMMI travellers avail themselves of this information. However, much worse than that, an appallingly high number of CMMI consultants do not volunteer this information as part of their own due diligence to evaluate a potential client and to help the prospect (or new client) arrive at decisions most suitable to their circumstances and context. *That* is a true blemish on the industry.

Back to Model FAQs for Newbies


What's the key effort required in CMMI implementation?

A: This question also comes to us from one of our readers. We love our readers!

Be sure to read the above question and answer as it is closely related to this one.

A key effort, in our opinion, in CMMI implementation is the identification of the organization's actual operating model and all its internal life cycles. It starts with taking a systems view of the operation and using systems thinking to understand a business's "performance levers". In other words, what is their reality? What affects their reality? Any organization operating successfully and, to some extent, growing profitably must be doing something that *works* for *them* and delivers on customer expectations. Figuring out what that is is key to implementing CMMI. There's really no magic to it.

Identifying the operating model reflects the reality and describes where and how improvement takes place within that reality. If improvement isn't taking place, the need is exposed to insert improvement activities, and, you have some insight into how to best design the improvement solution for the organization. It is highly recommended that the process of designing the improvement solution receive guidance from someone who knows how to do this, as well as someone who understands the CMMI and how it is appraised. Such a person could be a hired gun or a smart hire depending on the needs and resources of the organization.

The important consideration to note is that it requires the combined knowledge of each organization's reality-based context as well as knowledge of the CMMI model and of the appraisal to come together to implement CMMI well.

And therein lies an assumption we've made in answering this question: That the organization wants to implement CMMI "smartly", and, in a way that results in lasting value that persists beyond the appraisal. There are other ways to just make it through an appraisal, but since we at CMMIFAQ don't advocate those approaches we're not gonna write about them. If you keep reading, you'll probably figure out what those approaches are. Bottom line: design your improvement efforts into your revenue-generating activities and you will not only benefit from the improvements, you will also find yourself with a robust improvement system which requires no evidence production when it comes time to perform appraisals. It becomes a simple matter of just doing the work of the business and allowing the outputs of that work to speak improvement for themselves.

Back to Model FAQs for Newbies


How do we determine which bits of CMMI to use?

A: This question (paraphrased) also comes to us from one of our readers. We love our readers!

For many users, it's not immediately easy to determine whether they're providing services or whether they're developing products. Especially when they are providing development services, such as DevOps or maintenance, or only part of the development process such as Independent Verification and Validation! Furthermore, many operations do everything. So, there are a number of questions to ask when trying to make this decision:

  • What work generates our revenue?
  • Are we paid for output, outcomes, or just time?
  • How do customers receive from us what we do?
      » Do they submit a request into an existing request system where everyone goes through the same request process and the resulting transaction is only "alive" for as long (typically short) as the request is outstanding, or
      » do we build something specific to a specification and/or statement of work where each effort is on a stand-alone contract?
  • How do customers pay for what we do? Do they pay per request or do they pay for effort/results over time?
  • Is there a "Service Level Agreement" in place that we have to meet?
  • Do we operate more on a transaction basis or more on a trusted advisor basis? (Ignore, for now, what your marketing people say.)
  • What are we trying to improve? How we manage and develop products, or how we provide services?

Hopefully, the answers to these questions make the answer to which CMMI PAs to use self-evident. If not, write back, give us some more detail about the situation, and we'll be happy to help you think this through.

Back to Model FAQs for Newbies



Appraisals/Ratings FAQs

How do we get "certified", rated, appraised?

A: OK, let's get something straight here and forever-after:  You do not get "certified" in CMMI. At least not yet. In the US, the concept of a "certification" carries a specific legal expectation and companies who are *rated* (and that *IS* the right term) to a level of the CMMI are not being "certified" to anything.

The official site for explaining appraisals is here. That site, not this one, is official. But, it might not have what you're looking for. That's our charm.

So the correct question is, 'how do you get "rated"?'. And an even more complete question is, 'how do we get rated to level X?'

The short answer for how to get rated still leaves a lot of information on the table. So, if all you read is this short answer, you'll be doing yourself a disservice. The really short answer on getting a level rating is that you get appraised by an appraisal team led by a CMMI-Institute-Certified Lead Appraiser who determines to what extent you are meeting the intent of the practices of the CMMI.

This answer is so loaded with hidden terms it's frightening. So just so you know that you've been warned that this answer is too short, we'll point out each of the terms in our previous answer that has hidden meaning in it:

    get appraised
    appraisal team
    Lead Appraiser
    meeting the intent
Each of these terms carries its own conditions, requirements, or definitions. Don't get annoyed; the SEI and those that owned CMMI after them aren't the first, last, only, or worst organization to create such things. Every non-trivial discipline is loaded with concepts that experts can do in their sleep but that require effort to understand by everyone else. It's true of EVERY profession, so _CHILL_THE_F_OUT_. Need an example? Think of it like getting into shape. The short answer is "diet and exercise". Brilliant. Wonderful. What do you eat? How much? How often? What sort of work-out routine is right for you? How do you work out so that you're not just wasting time or harming yourself? See? Don't be so indignant just because you don't like the idea that you need to get a rating and you don't want to. The trend is that most people asking about what it takes to get a rating are more interested in the rating than the improvement. That's OK... We understand. Sadly, too well.

Keep reading this FAQ. What else did you have to do today anyway?

Before you go, in case you don't read further, we should note that there are four (4) types of appraisals. Unless otherwise noted, the majority of this FAQ assumes readers are most interested in the type of appraisal that results in official ratings on the first try. These are called "benchmark" appraisals. We will briefly explain the other three (3) types later on. If you're impatient, you can see this page on the CMMI Institute's site.

Back to Appraisals/Ratings FAQs for Newbies


How long does it take?

A: Here's another one of those dead give-away questions that a company is more interested in the rating than the improvement.

OK, that's a little unfair. Let's just say that as often as we hear this question, our judgmental attitude holds for ALMOST everyone who asks it. Alright, so maybe you are the exception. The truth is, it's a fair question. For every company.

A rare few companies don't care how long it takes. Lucky them. Applying a generous dose of benefit of the doubt, we can assume that the question is asked not for "how soon can we get this out of the way?" as much as from "are there any rules that dictate a minimum time before performing an appraisal?" How we can tell whether the company is interested in the improvements vs. the rating is simply a linear function of how long into the conversation we are before it gets asked. All-too-often, the source of the question is less ignorance of the process and more ignorance of the point behind going through the process.

Performance improvement purists wish more people were more interested in the journey than in the destination. We are performance improvement pragmatists. We know you're not looking at CMMI because you had nothing better to do with your time and money. That's for Bill Gates and his very worthy charitable endeavors. The company he's famous for founding is still in business for the money. FAST. So, how long it takes is a real question regardless of how you spend your money.

Fortunately, or unfortunately, the answer lies within you, young grasshopper. Really. We can't give you a much better answer than that. What we can do, however, is give you a list of the attributes that you can use to estimate how long it will take you, and give you a few example cases and some very general time-ranges.

Let's start again with our favorite analogy. Say you're carrying around about 40 lbs (18.1 kg) of excess body fat you've lost use for. How long will it take you to lose the fat? A year? Two? 6 months? Can one person do in 6 months what another person needs 2 years to do? We all know the answer to these questions. "IT DEPENDS!"

EXACTLY! How quickly a company can become rated to a pre-determined point in the CMMI's rating scale depends entirely on them and their circumstances. It depends on:

  • their level of commitment,
  • their tolerance for and ability to implement change,
  • how busy they are,
  • what they know about performance improvement in general and CMMI in particular,
  • where they are as a starting point,
  • their willingness to learn and try new things, even getting things wrong, and
  • how much of the organization they want to include in the rating.
Working backwards from the appraisal itself (NOT including changes to incorporate the CMMI practices or goals—only for planning and conducting the appraisal), the absolute minimum calendar time a company should expect between when the starting gun is fired and when they cross the finish line is a simple matter of logistics. Probably about a month if they're lucky. Two months would be more realistic. These 2 months, of course, are just the logistics and prep-work necessary to plan and conduct the appraisal and the activities that lead to an appraisal. Obviously, this time frame would only be realistic if the company was completely ready for the appraisal, had done all their homework, knew exactly what the state of their process implementation was and were literally trying to do nothing more than figure out how much time they had before they could conduct the appraisal. Of course, such a company wouldn't be asking the question. They'd already know.

So then there's almost everyone else. Everyone else needs time to first determine where they are in meeting the intent of CMMI practices. This is like saying, first we need to find out how much excess fat we're carrying around. A trip to the right physician would answer this. For CMMI, it's often called a "Gap Analysis" (a term we, here, don't like because it presumes something's missing, where we prefer to merely look at the "Present State") and can take a week or two. This event can be as formal or informal as the organization desires. The official term for the formal version of a "Present State Analysis" is an "Evaluation Appraisal". Then, depending on those factors bulleted earlier, the gap found by the analysis would need to be filled with practice implementation to meet the intent of the practices. This is the part where a company would need to figure out what its optimum sustainable diet and exercise routine should be, and how long to stick with it to see the desired results.

In CMMI V2.0 there are 25 Practice Areas. The duration of the gap closure activities would clearly be a function of how many Practice Areas (and which ones) will be appraised. Each of the Practice Areas (PAs) could be analogous to some aspect of a healthy lifestyle such as food choices, food quantity, shopping, cooking, meal planning, exercises, frequency, repetitions, technique, equipment, blood work, rest, stress management, work environment, time management, and so on. Obviously, the more of the lifestyle someone wanted to adopt, the longer it would likely take.

Once a gap is filled (i.e., the weight is lost and/or new muscle mass is added), an organization should give itself anywhere from 2-3 months to 12-18 months to incubate and actually use its processes. The time depends on the type of work being done and the amount of work flowing through the operation required to cycle through all its new processes. This would provide enough data to reasonably conduct an appraisal. However, the actual metric isn't the calendar, it's the cycle-time of the operating models. Clearly, work that gets from estimate to delivery (its "life-cycle") quickly is going through the processes and generating artifacts of doing so. This is the better value to index on than the clock.

On the fat-loss analogy, this would be like finding that point where diet and exercise are enough to keep the weight off and one is able to demonstrate to themselves (or others, as needed) that they can, in fact, live and sustain a healthy lifestyle—in the face of temptation and other uncertainties.

Once people internalize how performance improvement works, how long it takes to earn a rating is a question such people stop asking (hint hint). Like fat loss and getting into shape, performance improvement is a discipline backed by many recommended practices. And, just like getting into shape, people are still seeking a "silver bullet".

We, on the other hand, stick to a healthy diet and exercise program. When we're off track we know it. We gain fat and feel like crap. When we're on it, we see the results.

Make sense?

Back to Appraisals/Ratings FAQs for Newbies


How much does it cost?

A: If you've read the answer to the previous question and are still asking this question then you must really only be wondering about fees, attributes of cost or other general costs. Otherwise, go and read the answer to "How long does it take?" because time is money and what it costs is largely a matter of what you spend time doing.

As for fees, attributes of cost and other general costs, here's a break-down of things that can or will cost you money towards being rated to a capability or maturity level of the CMMI:

    Lead Appraiser
    The Lead Appraiser will need time to meet with you to plan the appraisal, prepare a "performance report", perform some preliminary evidence review (called "Readiness Review") and then to perform the appraisal. The range of what Lead Appraisers charge is pretty wide. Most charge about $2000/day +/- $1000 + on site meals and travel.
    Except for a few employees of the Institute, Lead Appraisers DO NOT work for the CMMI Institute. They are either employees of other companies or operate as independent contractors. The CMMI Institute merely certifies that they are allowed to perform official duties as lead appraisers. Lead Appraisers are free to charge whatever they want. Their prices are not regulated.
    Someone will need to provide Appraisal Team Training to the people you plan to have on the Appraisal Team. If they've not had training before, this can take a day or two. The appraisal's Lead Appraiser must confirm that the training was provided and that it met the needs of the appraisal. Often the hired lead appraiser will provide this training.
    So, plan on the Lead Appraiser needing about 1-3 weeks to do the preparatory work for an appraisal, including Appraisal Team Training and at least one Readiness Review, and then 1-3 weeks to perform the appraisal itself (depending on the scope), then another day or two to wrap-up all the paperwork.
    Appraisal Team Members
    Every Appraisal for a rating is done by a team. The minimum number of people is 4 and that can include the Lead Appraiser. Every person on the team must meet certain individual pre-requisites and contribute to certain team-wide qualifications. It is best if the team's constituents include people from your company as well as outsiders. At the appraisal, if you don't have (and can't create) qualified people in your company to be on the team, then you will need to bring in outside team members. (Most Lead Appraisers keep these in their back pockets—kinda.) Outside team members are essentially consultants and charge as such. You're doing VERY well if you can get outside team members for $1000/day. This would be very high-value. And, if you're only charged for a day where 1 day = the date on the calendar, and not 1 day = 8 hours, you're doing VERY well. External consultants as team members are more likely to cost you $2000/day +/- $500 plus on site meals and travel.
    Performance Improvement Consulting
    If your organization needs to get up to speed on CMMI, you'll probably do one of two things:
    (1) Look to hire an employee with the expected expertise, or
    (2) Look to hire a consultant with the expertise.
    Whichever you choose to do depends on your organization's needs. The pros and cons of either approach are a basic matter of business and strategy. Either way, there's a cost. As for consultants, they're a lot like Lead Appraisers. And yes, many Lead Appraisers are also consultants. So, what and how they charge is largely up to them.
    There are no CMMI Institute-mandated fees for improving your processes, using their models, or getting an unofficial appraisal. There is a licensing fee for accessing (not using) the model and for any training, tests, and certifications one receives directly from the Institute. The only fees charged by the CMMI Institute for using CMMI are for courses licensed by them to the providers of such services, and for registering official appraisals. Only certified people can use the material, and when such people do so and the people in class want it to be "official", there's a licensing fee that goes to the CMMI Institute.
    Consulting firms can charge whatever they want and call it whatever they want, but if anyone is implying that there are CMMI Institute-mandated "fees" for consulting, or using evidence tools, they're only implying this. Yes, if they are certified by the CMMI Institute they have to pay licensing fees, and so forth. But how they recover these expenses is their business. The Institute doesn't pass fees to customers for consulting.
Other General Costs
As above, the only other general costs associated with an appraisal are:
  • Official training, and
  • Your employees' time on the clock.
NOTICE what's *NOT* in the list above: TOOLS.

There is NO requirement for the purchase or use of any tool. Anyone saying that in order to "comply" with CMMI (or the appraisal) you must purchase a tool is full of *crap!*

Some consultants do use tools as part of their work and as part of you hiring them you are also buying a license to use the tool. That's OK. Since you will end up using the tool after they're gone, it's reasonable that you should pay for using something that is either the consultant's intellectual property, or something they bought and are bringing to the table. And, it's up to you if you want to hire that company. It's not reasonable for you to hire a consultant who tells you they use a tool and then tell them not to use it so you don't have to pay for their tools. Many consultants work their pricing structure into the productivity and efficiencies they gain by using a tool and asking them to stand by their rates when you've asked them to leave their tools in the shed is not playing nice. On the other hand, anyone telling you that if you don't buy their tool then you are not going to "meet the CMMI's requirements" or "pass" the appraisal is FLAT OUT LYING LYING LYING!!! and should be reported to the CMMI Institute! And, you can do that by taking a number of actions listed here (see bottom).
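To make the arithmetic concrete, here's a back-of-envelope sketch using the rough ranges quoted above. Every rate and duration here is an assumption pulled from this FAQ's ballpark figures, not an official price list:

```python
# Back-of-envelope appraisal cost estimator.
# All rates and durations are ASSUMPTIONS based on the rough ranges
# quoted in this FAQ -- illustrative only, not official figures.

def estimate_appraisal_cost(
    la_day_rate=2000,     # Lead Appraiser $/day (FAQ range: ~$1000-$3000)
    prep_days=10,         # LA prep work: ~1-3 weeks
    onsite_days=10,       # the appraisal itself: ~1-3 weeks
    wrapup_days=2,        # paperwork afterwards
    external_atms=2,      # hired outside Appraisal Team Members
    atm_day_rate=2000,    # external ATM $/day (FAQ: $2000 +/- $500)
    atm_days=12,          # team training + readiness review + on-site
):
    la_cost = la_day_rate * (prep_days + onsite_days + wrapup_days)
    atm_cost = external_atms * atm_day_rate * atm_days
    return {
        "lead_appraiser": la_cost,
        "external_team_members": atm_cost,
        "total": la_cost + atm_cost,
    }

# Excludes travel, meals, official training fees, and your staff's time.
print(estimate_appraisal_cost())
```

Swap in your own rates and durations; the point is only that the Lead Appraiser and any external team members dominate the out-of-pocket fees, while your employees' time on the clock is usually the largest hidden cost.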

Back to Appraisals/Ratings FAQs for Newbies


What's involved (in getting a rating)?

A: Um... that's a little broad, don'chya think? But, we get that question frequently enough so we might as well answer it. At least at a very high altitude.

There are three broad steps towards achieving a level rating:

1. Know where you are now.
This is usually called a "gap analysis" or "present state analysis". An "evaluation appraisal" is the official version of this activity. Though, you do not need this to be official. The right person to do this is someone who really understands the CMMI and how to appraise for the CMMI IN YOUR CONTEXT. Too often we get into companies who thought they were simply "smart enough" to do it themselves—in some cases doing nothing more than downloading the model and reading it. That's enough for very few organizations; it's extremely rare. Even taking licensed introduction courses seldom provides enough of an understanding to determine, without any other direct experience, how closely your company is meeting the intent of the CMMI's practices. This will also not likely be sufficient to determine how your particular implementation of the practices will fare in an appraisal. Also, please don't make the following mistake: assuming you're "golden" just because you've been through an ISO 9000 audit, won the Malcolm Baldrige Award, or even been carved out of an organization previously appraised. We've actually found that prior experience with other process-oriented bodies of work can work against a company's true understanding of what CMMI is about, how to implement it effectively, and how to appraise their practices.

Once you know what and where your gaps are in implementation you're ready for the next broad step.

2. Address your "gaps".
This is usually called, in CMMI circles, "Process Improvement". Although this step implies that your processes aren't up to the task as they stand now, what it really implies is that you will likely be making some changes to your current processes as you work to meet the intent of CMMI's practices. The idea is that you follow a method of process improvement and not simply re-skin your paper trail. The entire purpose behind CMMI is performance improvement via continuous improvement, and simply slapping a layer of CMMI-looking processes over top of what you're currently doing is not process improvement; it's death by process; it's WASTE.

It's come to our attention that CMMI has a reputation as being "death by process" as it is. We firmly believe that it's the latter approach towards CMMI, as described in the previous paragraph, that causes this, not CMMI. To be blunt (you're used to it by now, yes?), slapping CMMI over top of your existing process, those processes that you feel have been working all along, is a STUPID way to implement CMMI.

On the other hand, if you do find value in practices CMMI promotes, then what you want to do is implement them in a way that continues to provide you with the value-proposition of the things you like about your current processes, while using CMMI to replace or strengthen those things that could use it. The smoothest way to take this approach is by following CMMI as a guide to building a systemic continuous performance improvement infrastructure. Again, please be advised that doing this on your own without a CMMI expert (employee or consultant) is not advisable, for the same reasons having an expert is best for performing the present state analysis.

One last comment on this step (and it's a bit of an unsung truism about the CMMI): companies who are honestly thrilled with their current process and really have a handle on the outcome of their efforts are probably doing a lot of what the CMMI would have you doing. Such companies may call their activities by different names, they might get value out of their activities in a less direct way, but ultimately, they are getting the job done and are still in business, so they must be doing things right. (Or at least doing the right things.) If this is you, then your effort towards implementing CMMI is going to be quite painless and enjoyable.

Oh, OK... there really is one other important point: CMMI says precious little about the organizational culture and leadership qualities necessary to make any of this work. While the GOV PA does aim directly at leadership, it doesn't lay out any required characteristics of "good continuous improvement" attributes. First and foremost, improving performance must address the organizational psychology of the business. If/when there are issues with the organizational psychology, they nearly always have a negative effect on improvement. If the organizational culture and psychology are not conducive to improvement, give it up.

3. Get appraised.
Getting appraised with a "Benchmark" appraisal is what most people think about when they are looking at CMMI. The appraisal is what gives an organization their official "Level". Once the appropriate expert can make a sound call on your organization's implementation of the CMMI practice intent and value, you can start planning for an appraisal. Details of the appraisal are answered elsewhere in this FAQ.

Back to Appraisals/Ratings FAQs for Newbies


How does the appraisal work?

A: NOTE: The appraisal process is guided by something called the Method Definition Document (MDD). At this time, copies are only available to certified individuals.

If you are not familiar with the appraisal process already (which you aren't 'cause you're asking the question), you don't want to try to read this anyway. (Unless you're doing some whacky exercise in self-hypnosis.)

Just so you understand: the complete answer to this question is ordinarily delivered in days' worth of training. We're obviously limited in what we can explain here.

We're going to pick up the appraisal with the portion of the event that most people think about: the on-site period. That part when people usually show up and start asking questions. That's when there's a full appraisal team at your company and they're looking at your evidence and conducting interviews (or performing some other accepted form of verbal or written affirmation). It's at the end of this period that a company gets the results of the appraisal and, when all goes well, a rating.

The projects and organizational functions that are evaluated in the appraisal are based on a "randomly generated sample (RGS)" of projects. The total population of projects that go into the algorithm from which the RGS is cranked out are the subset of the organization's projects to which the appraisal results will apply. Except in smaller organizations, this subset of projects is seldom the entire organization. The RGS is, then, a subset of the subset.

The randomly generated sample mitigates the risk that companies coach only specific projects into being "appraisal ready". When the organization can pick and choose which projects are evaluated, it's too easy to stack the deck in their favor while avoiding exposing the true nature of their improvement activities that might, shall we say, be something less than pretty by comparison.

The RGS is generated sufficiently in advance to allow the organization to plan for the logistics of the personnel to be involved as well as for the opportunity to request changes to the sample for operational purposes. Also, some projects can be explicitly excluded from being chosen in the first place for logistical reasons such as bad timing, classification, resource availability, etc.

The RGS report lists the projects and which PAs they will be asked to provide evidence for. One advantage to the RGS is that it raises confidence that continuous improvement is broadly implemented in the organization without having to involve too many projects, even from larger organizations.

So... that's pretty much what happens at the appraisal: a team led by a Lead Appraiser looks at evidence from a subset of projects and makes a judgment regarding the extent to which that evidence conveys that the intent and value of CMMI's practices are being met. There are 2 types of objective evidence: Artifacts and Affirmations.

Artifacts are, as the name implies, *things* an appraisal team can look at, read, hold, touch, etc. Affirmations are verbal data gathered by interacting with people doing the work.

For each practice in the scope of the appraisal, the evidence is looked at collectively for that practice and a determination is made regarding the extent to which projects and the organization meet the intent and achieve the value of the practice. This is called "practice characterization". The characterization scale is: Fully Meets, Largely Meets, Partially Meets, and Does Not Meet. There's also "Not Yet", which gets a bit too complicated for this medium to address.

The characterizations are then looked at in aggregate, according to rules in the MDD, across all projects and the organizational support functions. Basically, after aggregating the characterizations across all projects, no single practice can be characterized as less than Largely Meets (in aggregate) or it will spell disaster. Even then, if certain practices are found universally "Largely Meets", and the appraisal team believes there's a pattern of weaknesses causing these practices to only be found as "Largely Meets", the team may still choose to say that whatever's causing these practices to not be Fully Meeting is worrisome enough to preclude the organization from satisfying the Practice Area. And if any level (correctly called a "practice group", in case anyone asks) in a Practice Area isn't satisfied, then it can't be said that the whole Practice Area is being satisfied, can it? And that, our friends, is how the appraisal works: it's a search for whether the organization is meeting the intent and achieving the value of the practices of those Practice Areas in the scope of the appraisal.
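The basic aggregation rule can be sketched in a few lines. This is a toy illustration under a simplified reading (the worst characterization observed across projects wins); the practice names are hypothetical and the real MDD procedure has more nuance:

```python
# Toy sketch of practice characterization aggregation -- a simplified
# reading of the rule described above, NOT the official MDD procedure.
# Scale ordered from best to worst:
SCALE = ["Fully Meets", "Largely Meets", "Partially Meets", "Does Not Meet"]

def aggregate(per_project):
    """Aggregate one practice's characterizations across sampled projects
    by taking the weakest (worst-ranked) characterization observed."""
    return max(per_project, key=SCALE.index)

def practice_area_satisfied(pa_evidence):
    """pa_evidence maps practice name -> list of per-project
    characterizations. No practice may aggregate below Largely Meets."""
    return all(
        SCALE.index(aggregate(projects)) <= SCALE.index("Largely Meets")
        for projects in pa_evidence.values()
    )

# Hypothetical evidence for two practices across three sampled projects:
pa = {
    "EST 2.1": ["Fully Meets", "Largely Meets", "Fully Meets"],
    "EST 2.2": ["Largely Meets", "Partially Meets", "Fully Meets"],
}
# EST 2.2 aggregates to "Partially Meets", so the PA is not satisfied:
print(practice_area_satisfied(pa))  # prints False
```

The real aggregation also involves team judgment, including the "pattern of weaknesses" call described above, which no formula captures.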

NOTE: Projects are drawn from "Subgroups". Subgroups are distinguished by a set of key factors that differentiate one Subgroup from another.

The minimum list of subgroup Sampling Factors is:

  • Location: if work is performed in more than one location (can be near, far—not limited, what matters is whether or not the processes and other relevant attributes are different).
  • Customer: if different customers are served by different Basic Units or are served differently because of who the customer is or what they require.
  • Size: if work is performed differently based on the size of the Basic Unit, or Support Function, or the size of the effort.
  • Organizational Structure: if work is performed differently in different parts of the organizational structure.
  • Type of work: if there is more than one distinct type of work done in the organization (mobile apps vs. mainframe, hardware vs. software, systems of systems vs. electronic components).

Once you distinguish Subgroups based on these factors (and others, that you and your lead appraiser may determine to be relevant), the entire lot of projects delineated by subgroup are submitted to the CMMI Institute to be sampled by the random sample generator.

The key to the Sampling Factors is to identify the most likely sources of process differences. The important outcome of this sampling process is the analysis of the sample, not to force the organization to split up its work into tiny pieces. If, after the analysis, it is determined that one or more of these factors do not change the processes, then those factors can be eliminated as influential on the processes—thereby eliminating unnecessary and possibly artificial barriers across the organization's work, which would also unnecessarily increase the appraisal's complexity. While the analysis during planning has increased, the idea is to avoid increasing the appraisal burden while also increasing confidence in the results and allowing the organization's processes to be more reflective of the work being evaluated.
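The subgrouping-and-sampling idea can be sketched as follows. This is a hypothetical stand-in: the CMMI Institute's actual sample generator and its selection rules are not public, and every project attribute below is invented for illustration:

```python
# Illustrative sketch: group projects into subgroups by sampling factors,
# then draw a random sample from each subgroup. A hypothetical stand-in
# for the CMMI Institute's sample generator, whose algorithm isn't public.
import random
from collections import defaultdict

def subgroup_key(project):
    # The minimum sampling factors listed above; attribute names invented.
    return (project["location"], project["customer_type"],
            project["size_band"], project["org_unit"], project["work_type"])

def sample_projects(projects, per_subgroup=1, seed=None):
    rng = random.Random(seed)
    subgroups = defaultdict(list)
    for p in projects:
        subgroups[subgroup_key(p)].append(p)
    sample = []
    for members in subgroups.values():
        k = min(per_subgroup, len(members))
        sample.extend(rng.sample(members, k))
    return sample

# Projects A and B share one subgroup; C forms another by itself:
projects = [
    {"name": "A", "location": "NY", "customer_type": "gov",
     "size_band": "large", "org_unit": "div1", "work_type": "software"},
    {"name": "B", "location": "NY", "customer_type": "gov",
     "size_band": "large", "org_unit": "div1", "work_type": "software"},
    {"name": "C", "location": "Pune", "customer_type": "commercial",
     "size_band": "small", "org_unit": "div2", "work_type": "hardware"},
]
print([p["name"] for p in sample_projects(projects, seed=42)])
```

Note that in a real appraisal the sample sizes come out of the Institute's algorithm, not a parameter you get to pick.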

Back to Appraisals/Ratings FAQs for Newbies


Who can do the appraisal?

A: Another quick and easy question, thanks!

A Certified Lead Appraiser. Certified by who? The CMMI Institute.
Lead Appraisers (as of this writing) have to qualify by surviving the following activities, in this order:

    Building Organizational Capability Course Series—
    • Foundations of Capability + one or more of the following:
          » Building Development Excellence
          » Building Service Excellence
          » Building Supplier Management Excellence
          » Building Safety and Security Excellence
          » Building People Management Excellence
          » Building High Maturity Concepts (required for high maturity lead appraisers and appraisal team members)

    Advancing Organizational Capability Course Series—
    • Achieving High Maturity (required for high maturity lead appraisers)

    Mastering Organizational Capability Course Series—
    • CMMI Lead Appraiser Training (after participating in a number of appraisals)

There are exams along the way, as well as having to participate in a certain number of appraisals, plus a Validation of Education and Experience, all wrapped up in being observed performing an appraisal by one of a very few people the CMMI Institute trusts to do that sort of thing.

NOTE: There is a distinction for "High Maturity" appraisals, Lead Appraisers, and appraisal team members. "High Maturity" appraisals are those performed against a target maturity level of 4 or 5. Certified "High Maturity" Lead Appraisers (CHMLA) are required to take more coursework, pass more exams (written and oral), and to qualify with much greater depth of experience and knowledge in the concepts found in the Maturity Level 4 and 5 practices.

The organization being appraised needs to have a contractual relationship with the Partner Organization sponsoring the Lead Appraiser performing the appraisal in order for the appraisal to be valid.

In other words, Lead Appraisers who aren't directly working for a Partner (or who aren't themselves representative of a Partner) can't contract to perform an appraisal without the contractual involvement of a Partner. That's not to say that money needs to be involved, nor that the appraiser needs to negotiate their dealings through Partners; it does mean, however, that the Partner at least knows about the appraisal and the relationship being established with the organization being appraised.

Back to Appraisals/Ratings FAQs for Newbies


Can we have our own people on the appraisal?

A: Yes! Absolutely Yes! In fact, it's encouraged.

The appraisal team must be at least 4 people strong (including the Lead Appraiser), and with your company's employees on the appraisal team you increase the odds of buy-in to the appraisal process, results, and any recommended follow-up and follow-through. There are a number of qualifications potential team members must meet, the most logistically challenging of them being that candidate team members must have had a licensed delivery of the Foundations of Capability + the relevant Building ____ Excellence class (typically 3 days, total in person) before going into the appraisal activities (which begin a month or more before the actual on-site period). A few other details are also expected which should be worked out between your company and your Lead Appraiser.

Back to Appraisals/Ratings FAQs for Newbies


Can we have observers at the appraisal?

A: Let's first start by defining what an observer is. An "observer" is someone who is not qualified to be on the appraisal team, or, despite being qualified is not actually on the appraisal team, but is hanging around with the appraisal team while they do their thing. OK, got that? 

So, the answer is: No.

To preserve the non-attribution and confidentiality of the appraisal, per the Method Definition Document (MDD), observers are not permitted on SCAMPIs. In fact, the MDD includes an explicit table of who is and isn't permitted for various aspects of the appraisal process. Furthermore, the MDD calls out the only exceptions to "observers" as being a certified language translator/interpreter, or an official CMMI Institute observer who is there as part of an appraisal audit (of the appraisal, not the organization) or as part of the qualification process for the lead appraiser.

The primary rationale has to do with the nearly invariable experience that observers (being untrained and/or sometimes not involved with any of the work leading up to the event) tend to be inadvertently disruptive to the discussion and proceedings, and have no accountability. However, the most important consideration is that observers are not bound to maintain the confidentiality or the non-attribution of the proceedings. And there are no provisions in the method, and no recourse through the CMMI Institute, to address issues caused by non-participants breaking confidentiality or non-attribution policies or any other method rules. Another concern is the comfort level of the participants: people may be less open and honest when others who aren't committed to the results are present, and, should there be any unfavorable findings, there may be concern about the influence of observers on the outcomes.

While all of the above rationale might be manageable by a competent appraisal team leader, the probability of problems outweighs the possibility that everything will be fine. If there are unique circumstances whereby the conditions exist for the risks to be fully mitigated, a lead appraiser may request a waiver from the CMMI Institute during appraisal planning. (We wouldn't hold our breath that it would be granted.)

Back to Appraisals/Ratings FAQs for Newbies


What sort of evidence is required by the appraisal?

A: We've covered this here.

We will, however, take the opportunity to point out that evidence does not have to be written in stone or divinely received. Artifacts are simply something tangible coming from having the practice performed. Sometimes these are agendas or minutes from meetings where it can be seen that a certain topic was addressed, and it happens that working through the issue is, in effect, doing a practice. Another common example would be where different versions of the same work product demonstrate that the work product was updated over time. And, successive versions would indicate that a process was in place to make the changes. If the practice says to keep track of changes, these versions could be used to demonstrate that changes were made, and one could infer that there was some way to keep track of them even though the fact that changes were made isn't actually the same as keeping track of changes. Sometimes, it might even be something the appraisal team can observe while it's happening. All of which are tangible.


Back to Appraisals/Ratings FAQs for Newbies


How much of our company can we get appraised?

A: The part of your company that gets the actual rating is called the "Organizational Unit". This can be the entire company or only parts of it as determined by the types of work (and as such, the types of processes) the company wants the appraisal to be performed on, and as a result, the appraisal results to apply towards.

For the appraisal to apply to an entire company, work representing all the sorts of efforts the company does would need to be evaluated in the appraisal. If one instance of a type of work consumes the entire company, and is the only work the company has and does, then the appraisal would be performed on that one effort, and the company could say that its entire company has achieved the level rating awarded by an appraisal on that work.

The actual composition of the organizational unit is something that needs to be defined up-front during appraisal planning. The Lead Appraiser must analyze the selection of work the company desires to be accounted for in the appraisal results to ensure that the work used in the appraisal does, in fact, represent the organizational unit of the appraisal results. The more variety in the kinds of work in the organizational unit, the more types of work will be needed. Also, the broader the application of the appraisal results, the broader the scope of included work. Meaning, for example, if the company has a number of sites, and, the company wants "the entire" company included in the appraisal results, work representing each site must be included. Multi-site work efforts are OK, but using one location's part of the work to represent another will not, uh, work. There are a number of facets that are analyzed called "sampling factors" which we discussed in more detail above. Also there we explained the role of the Randomly Generated Sample (RGS).

Back to Appraisals/Ratings FAQs for Newbies


How many projects need to be appraised?

A: Projects are sampled as a cross-section of Practice Areas and the number of projects in scope.

Projects (also referred to as "instances" when used for sampling purposes) provide evidence for appraisals at the level of Practice Areas (PAs). In many cases, projects will not provide evidence for all the PAs. If there is more than one project in the organizational unit then, barring any valid rationale, different projects will provide evidence for different PAs. Naturally, when the number of projects is fewer than the number of PAs, projects provide evidence for more than one PA. In fact, projects are often chosen to provide artifacts for groups of related PAs (commonly, Capability Areas). The minimum number of projects to provide artifacts for each PA is as follows:

Total Projects      Minimum number of
in the OU           Projects (Instances) per PA
1 to 10             1
11 to 40            2
41 to 200           3
201 to 400          4


If you have more than 400 active projects, it's best you stop reading and ask yourselves whether you should be seeking help from an FAQ.
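For the programmatically inclined, the thresholds in the table above can be sketched as a tiny lookup. This is purely our illustrative helper (the function name and error messages are made up, not anything from the appraisal method itself):

```python
def min_instances_per_pa(total_projects: int) -> int:
    """Minimum number of projects (instances) that must provide
    evidence per Practice Area, given the OU's total project count.
    Thresholds taken from the table above."""
    if total_projects < 1:
        raise ValueError("an organizational unit needs at least one project")
    if total_projects > 400:
        raise ValueError("more than 400 projects: ask a lead appraiser, not an FAQ")
    # Each tuple is (upper bound of the band, minimum instances per PA)
    for upper, minimum in ((10, 1), (40, 2), (200, 3), (400, 4)):
        if total_projects <= upper:
            return minimum
```

So, for example, an OU with 35 projects would need at least 2 projects providing artifacts for each PA.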

Back to Appraisals/Ratings FAQs for Newbies


Can we have more than one appraisal and inch our way towards a rating?

A: No. At least not yet. Well, at least not in the way you're thinking.

You can have as many appraisals as you want, however, at this time, if you want a Maturity Level rating (or even a Capability Level rating), you will only achieve that if the appraisal looks at all the evidence for all the Practice Areas in the scope of the appraisal in a single appraisal. This appraisal could span many weeks—up to 90 days—but it would still be considered a single appraisal. Only one rating would take place. (OK, there are some nuances we're leaving out about handling a large organization that wants to bookend a massive model scope and multiple OUs under the umbrella of one event, but as stated elsewhere, if that's you it's best you stop reading and ask yourselves whether you should be seeking help from an FAQ.)

Back to Appraisals/Ratings FAQs for Newbies


If we go for a "level" now, do we have to go through it again to get to the next "level"?

A: Yes. Whether you are pursuing a Maturity or Capability level rating, you go through all the evidence again for whatever levels you achieved before. One reason is that at this time there are no mechanisms in place to allow for "cumulative" appraisals, which is what would be necessary to make a non-recursive approach work. However, even more fundamentally, the appraisal team and Lead Appraiser can't be expected to assume that there would be evidence from the lower levels to support the higher levels' activities or that nothing would have changed in between the events. Even more basic than that is the fact that the levels support one another and it would be very unlikely that appraising to a higher level could be accomplished without evidence from the earlier levels.

The only exception to this is if an appraisal is spread out over a period of time, and is, in fact, one long appraisal. The time-limit for completing a single appraisal is 90 days.

Back to Appraisals/Ratings FAQs for Newbies


How long does the "certification" rating last?

A: Appraisal results are recognized for three (3) years from the date of the appraisal's acceptance by the CMMI Institute.

Under certain circumstances, this rating validity can be extended by up to two (2) years at a time, up to three times, by performing a "Sustainment Appraisal". As long as each sustainment appraisal is performed before the expiration of the prior validity period, the total validity period, when maximized, is nominally 9 years: 3+2+2+2.
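The 3+2+2+2 arithmetic is simple enough to sketch. This is just our illustration (the names are ours), assuming each sustainment appraisal happens before the prior period expires:

```python
BENCHMARK_YEARS = 3    # initial validity from the acceptance date
SUSTAINMENT_YEARS = 2  # each sustainment appraisal extends validity by this much
MAX_SUSTAINMENTS = 3   # at most three sustainment appraisals between benchmarks

def nominal_validity_years(sustainments: int) -> int:
    """Total nominal validity, assuming every sustainment appraisal is
    completed before the prior validity period expires."""
    if not 0 <= sustainments <= MAX_SUSTAINMENTS:
        raise ValueError("expected 0 to 3 sustainment appraisals")
    return BENCHMARK_YEARS + sustainments * SUSTAINMENT_YEARS
```

With all three sustainments, that's 3 + 2 + 2 + 2 = 9 years before another full Benchmark is needed.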

While qualifying for a sustainment appraisal could be a hurdle for some organizations, the benefits are substantial: sustainment appraisals evaluate about a third of the original PA scope in depth and require half the minimum appraisal team. The reduced scope means reduced effort and shorter durations, and the smaller appraisal team yields further savings. For some organizations, the effort savings alone recover the costs of conducting appraisals slightly more often.

If you're reading this you probably haven't had your first benchmark appraisal yet, so don't get ahead of yourself.

Back to Appraisals/Ratings FAQs for Newbies


What is the difference between the appraisal types?

A: The differences boil down to scope and the level of rigor, and, as a reflection of the level of rigor, to what the outcomes can be.

There are four (4) appraisal types:

    Benchmark: for all the marbles
    Sustainment: also for all the marbles, but only allowed under certain circumstances; shorter validity period, more frequent, smaller scope, allowed between benchmarks
    Action Plan Reappraisal: Whoops! Missed hitting the mark by a bit that can be fixed. Very strict rules.
    Evaluation: Just a look-see. A formal, official peek at how you're doing, but does not result in a rating of any kind.

Since we discuss the benchmark at length above, and we mention the sustainment appraisal here, explaining the others in this FAQ makes no sense. This area is for beginners and the other appraisals are not for beginners.

Back to Appraisals/Ratings FAQs for Newbies


How do we pick a consultant or lead appraiser?

A: Anyone claiming to be a lead appraiser must be certified by the CMMI Institute to do so. The CMMI Institute refers, collectively, to all people certified to perform CMMI-related work using their materials as "certified individuals". Thus, all actual lead appraisers are "certified individuals". You can search/sort a list of such people here, and specifically limit your search to lead appraisers. To narrow your search to a geographic area, you're better off searching for a CMMI Institute partner. The partner search has many more ways to search, including limiting to a certain type of service offered. And then, once you find a partner, you can see the authorized individuals associated with that partner.

However, one need not be a lead appraiser to consult on CMMI. In fact, there are many people very well experienced in implementing CMMI, and with appraisal experience, who are not credentialed to do appraisals or official training—many more experienced people than there are authorized or certified individuals. Frequently, because they don't carry authorizations, they're not able to charge as much as those who do. The trick is finding them. Many work for partners, so once you find a partner, you might ask about the authorizations of their consultants as a gauge of what you can expect to pay. Many people experienced in CMMI work for large organizations who need their services full-time, on site, and moonlight as consultants. Others are just independent consultants and get much of their work by word-of-mouth. Certifications below Lead Appraiser mostly mean you sit in class and take exams well. Lead Appraisers and instructors, however, are vetted by a rigorous process including observations.

Though, the question isn't "how do we find a consultant or lead appraiser" but, "how do we pick from all the ones out there?!?!?".

As you can guess, there's not a simple answer, but we can say two things:

  1. Caveat Emptor (buyer beware), and
  2. Pick one who you feel can understand your business and your needs; your context.
WHY must the buyer beware? Because interpreting models for how a given implementation can be done, and recognizing that a given implementation is a legitimate interpretation of the model, are far from an exact science.

CMMI is a model not a standard, as we've said many times before. It's not something that, when applied, will look the same each time. Furthermore, as we've said, the practices in the model are not processes themselves, they are practices to improve performance. It takes skill to effectively interpret the model and implement it in a given situation, and, it takes contextual relate-ability to appraise whether the model has been implemented or interpreted properly/effectively.

There are (in our opinion) far too many lead appraisers (and consultants) who don't know how to/don't want to/don't appreciate (or are too lazy to) do what it takes to help clients design contextually appropriate process solutions, and/or to allow for contextually-driven interpretations of model practices when performing an appraisal. Symptoms of such an appraiser or consultant are in what they consider valid evidence of the practices, or valid descriptions of the processes, or how they describe their approach towards working with an organization to build up their practices. 

An appraiser or consultant may not be suitable to a given buyer if they only expect to create, see, or will accept specific things as evidence, or if they expect to see, create, or will accept process descriptions with each CMMI practice spelled out, or if they expect organizations starting out to generate artifacts that address model practices but don't add value to the product development process. None of these characteristics are required (or prevented) by CMMI or the appraisal method. Therefore, buyers must be able to select the CMMI service provider whose attitude, knowledge, and experience suits their needs. After all, the model and appraisal process can allow for a wide variety of strategies, tactics, and contexts, but not every consultant or appraiser will (or can) allow for it.

What this means, in practical terms, is that buyers of CMMI services must be able to interview potential consultants and lead appraisers for their attitude towards, and knowledge and experience in practice implementation, evidence of practice implementation, and artifacts of the implementation. Furthermore, buyers must interview providers for the ability of the provider to pick up on, adapt, and appreciate the context in which the model has been or will be implemented.

The easiest example(s) to provide readers with relate to whether the consultant or lead appraiser can communicate with the buyer in terms the buyer understands such as: being a small outfit, or using Agile development methods, or being embedded with the client and using the client's practices as well as their own. Another relevant inquiry is if the buyer can give or ask for some examples of practices as actually carried-out in or envisioned for their organization, and gauge the response from the potential CMMI service provider as to what they think of those practices.

The challenge in conducting such an interview is that the buyer must have enough of an understanding of the model and the appraisal process to be able to determine whether what they're hearing is the provider's opinion/approach or whether what they're hearing is dictated by the model or the appraisal process. Sounds like an impossible task. Fear not, some CMMI service providers will give you this sort of unbiased advice or even a quick education for free. This FAQ and its contributors are aimed at providing this sort of advice because we feel it's to the detriment of the entire CMMI enterprise not to do so. Sadly, rough estimates of the number of such providers puts the figure at about 5-10% of the entire authorized population. As a character in a pretty good movie once said, "choose wisely".

Good luck!

Back to Appraisals/Ratings FAQs for Newbies


Where can we see a list of organizations that have been appraised?

A: Finally! A question with a simple, straight-forward and easy answer!

Simply visit the CMMI Institute's Published Appraisal Results System (PARS) and put in your query. It's fairly uncomplicated.
There are, however, a few points to keep in mind:

  • Not all organizations have asked to be listed in the system, so not appearing does not mean an organization has not been appraised.
  • If an organization has changed its name after being listed, it will *not* be listed with its subsequent name(s); organizations are only listed with the name they had when the appraisal was performed.
  • Pay close attention to the Organizational Unit (OU) (discussed on this FAQ here) of the appraisal. Though you may be interested in validating whether a company has been rated, it's rare that entire companies are rated (especially if the company is not small). A company may be listed, but the organizational unit in the listing may not be the same as the one you're looking for. Or, there may be several organizational units within a single company. Do not take for granted that the organization you are researching is (or its people are) the same as the one appearing in the system.
  • Once all the appraisal data is fully completed and submitted to the CMMI Institute, it can take days before it appears in PARS. The most common causes of longer delays include:
    » appraisal team or sponsor not completing their appraisal experience surveys,
    » appraisal sponsor not signing the appraisal disclosure statement (ADS), or
    » issues with the results that are being investigated by CMMI Institute's QA process.


Back to Appraisals/Ratings FAQs for Newbies


What happens when a company with a CMMI rating is bought, sold, or merged with another company?

A: Appraisals are patently rearward-looking. Furthermore, the sampling factors distinguish important characteristics of the organizations being appraised that may cause the circumstances, and therefore the processes, to change from one part of the operation to another. As such, the only valid statements that can be made about an organization and the appraisals performed on it are statements related to the specific organization named in the appraisal results at the time of the appraisal.

In other words: when two (or more) organizations come together or are split apart due to mergers, sales, or acquisitions, appraisal results do not convey, combine, confer, assume or transfer with, from, or to the new entity/entities in any way whatsoever. The appraisal results remain attributed to the original entity/entities and the circumstances that characterized the original entity/entities at the time of the appraisal. Furthermore, it is not possible to combine appraisal results from two or more entities, or to assume the highest or newest of two or more ratings, when independently appraised organizations combine in any way.

We hope that's clear.


Back to Appraisals/Ratings FAQs for Newbies


What's the official record of the appraisal results?

A: The Appraisal Disclosure Statement (ADS) is the sole and entire official record of the appraisal results, regardless of what does or does not appear in the CMMI Institute's Published Appraisal Results System (PARS). Nothing in any appraisal presentation, and unlikely anything found framed on the wall at a company or printed on a large banner and hung from a footbridge, is an official or complete indication of what exactly was appraised or of the meaning and context of the results. (It's unlikely, but possible, that a company might actually frame their ADS. It's several pages long; but in the spirit of avoiding any absolutes we can't prove, we used the phrase "...and unlikely anything found...".)

In any case, the ADS is generated by the Lead Appraiser after all the other data has been collected and submitted to the appraisal system. It's signed by the appraiser and the sponsor, and contains all the details of the appraisal, its circumstances, the explicit organizational unit to which the results apply, and the results themselves. If someone were serious about determining whether an organization has been appraised, when, to what end, and to what scope, they should request to see the non-confidential parts (if any are even confidential) of the ADS.


Back to Appraisals/Ratings FAQs for Newbies


Can we go directly to Maturity Level 5?

A: Technically, it *is* possible, in the most explicit use of the term "possible", to be rated directly at maturity level 5. All this means is that the organization was appraised performing all the practice groups (levels) of all the predefined in-scope PAs. The fact that it was not level-rated before this makes the organization appear to have achieved ML5 "directly".

However, in reality, it's not likely that any organization would implement all relevant practice areas without ever having performed any appraisals between the start of their CMMI journey and their appraisal for ML5. What is more likely is that at certain points the organization will conduct appraisals to gauge their progress. Whether or not these intermediate appraisals generate a level rating would be up to them. There's no requirement that appraisals generate ratings, so an organization appraising at ML5 and receiving a rating may appear to have gone directly to ML5 when in fact they had several appraisals before then—none of which generated a rating.

Of course, there's another reality to consider: the CMMI Institute reviews all high maturity appraisals very carefully. If an organization has never had a CMMI appraisal prior to their appraisal for ML5, it will be viewed with even more scrutiny, and both the Lead Appraiser and the organization appraised can expect to get a call from the CMMI Institute's QA team. Not to mention that not having any appraisals prior to the one aiming for ML5 is extremely risky.


Back to Appraisals/Ratings FAQs for Newbies


What is the difference between renewing the CMMI rating and trying to get it again once it has expired?

A: Generally, the difference is only in how much preparation it takes the organization. In our collective experience, most 1st-time ratings require some amount of transition from the original "present state" of the organization's practices to a "new" present state that can attain the desired level rating.

Assuming the organization in question didn't change much, and/or that they were successful in maintaining their practices over the years and have kept-up with their institutionalized practices and processes, then they would have little or no "gap" in practice performance. They would merely need to put in some sweat equity towards collecting the evidence for the appraisal beforehand.

The mechanics of an appraisal are no different. (Sustainment appraisals follow the same mechanics, just with a smaller scope of deep-dive PAs.) The lead appraiser (appraisal team leader) must still plan and prepare for the appraisal. The appraisal team must still be qualified and briefed. A pre-appraisal readiness review must be performed by the appraisal team leader. And, the on-site portion of the appraisal must still be performed.

Appraisal team members from prior appraisals can be re-used as long as they have the most up-to-date qualifications required by the scope and method of the appraisal in question. There is no need for prior appraisal team members to go through lengthy training if their prior training still keeps them qualified for the scope and method of the appraisal planned. (These are questions for the lead appraiser. If you don't yet have one, we can probably answer them for you, but we'd need some more information from you, so please contact us.)

If the organization has not changed the state of its practices since their prior rating event (i.e., Benchmark or Sustainment appraisal), there is no compelling reason to perform a new "present state" (a.k.a. "gap") analysis or to invest in any "improvement" consulting. However, if there is some concern that practices may have dropped off or slipped from the tracks since the last appraisal, a current present state analysis might be a good idea so that the required readiness review can be more productive with fewer risks and unknowns going into the next Benchmark (or Sustainment) appraisal.

In our experience, renewal appraisals have cost 25%-40% of the original appraisal costs since the original costs included coaching, training, and consulting to bring the organization up to the point where they are ready for the appraisal.


Back to Appraisals/Ratings FAQs for Newbies


Can my organization go directly to a formal Benchmark without any Evaluation appraisals? Is it mandatory that a formal Evaluation appraisal be completed before a formal Benchmark?

A: We've gotten this question more than a few times, so it's about time we put it onto the CMMI FAQ.

There is no requirement to perform an Evaluation appraisal prior to a Benchmark.
This is true for *any* Benchmark—regardless of whether it is your first, second, tenth, or any other Benchmark.

Under certain conditions, an Evaluation appraisal is recommended, and there are many good reasons to perform one (which is why they exist), but under no circumstances are they required.


Back to Appraisals/Ratings FAQs for Newbies


CMMI, Agile, LifeCycles and other Process Concepts FAQs

What if our development life cycle doesn't match-up with CMMI's?

A: CMMI isn't a development life cycle. The end.
It's a model for building an improvement system to continuously improve very particular areas of what goes on in your work, regardless of the life cycle. This is a central tenet of our approach to CMMI, by the way. Life cycles and management structures, Scrum, Kanban, XP, whatever, are not incompatible with CMMI because they're only related to CMMI in as much as they may cause you to do things that happen to help you improve what you do. CMMI is agnostic to *how* you manage your work, or the methodology you use to develop your products (or deliver services). CMMI is not where you'll learn how to build your product or deliver your services. CMMI will not tell you how to operate your business. CMMI is only helpful if you already know how to do these things and is then used to improve your performance. Lifecycles are how you get things done. You choose them and CMMI can help you improve within them.

Back to Agile and Standards FAQs


Doesn't the CMMI only work if you're following the "Waterfall" model?

A: NO! CMMI is not about development life cycles. While a fair criticism of CMMI is that many of the contributors come from a "Waterfall"-centric or a "Big Plan Up Front", "top-down" way of developing wares, they were at least careful not to box anyone into following a specific development method. Nonetheless, it takes a very deep understanding of CMMI to implement it, regardless of which life cycle you follow. Meanwhile, you can browse over to our blog.

Back to Agile and Standards FAQs


How does CMMI compare with ISO/TL 9000 and ITIL? (or other standards?)

A: While there is considerable overlap between these models, frameworks, and "best" practices, they are different from each other and used for different purposes. People who ask this question come from one (or both) of two camps:

  1. They're just totally unfamiliar with CMMI (and/or the others), and are asking innocently enough, and/or
  2. They just look at CMMI (and the others) as some standard they need to comply with, and not as something that can make a positive difference in the operations of business.

(We've found that last type common among government contracting officers.)

Let's address the question of "standards" first.

The practice areas and the practices within them are not intended to be, or to replace, any technical "standard" for doing anything. Some practice areas share names with other familiar activities that have volumes of "standards" already written for how to perform them. Many of the engineering-oriented practice areas come immediately to mind, such as Configuration Management and Requirements Development and Management. And this brings up a very important, but often neglected, fact about CMMI: it is *not* a standard for technical activities. And, for whatever CMMI *is* supposed to be used for, it does *not* prescribe how to do anything.

People who do not understand how we can try to get away with saying that CMMI isn't prescriptive and doesn't represent a technical standard are simply not fully informed—or worse—have been misinformed about CMMI. We'd really love an opportunity to set the record straight.

CMMI is about improving performance associated with developing products, delivering services, and acquiring goods and services. CMMI is not about the technical processes needed to actually do the developing and delivering. The CMMI "practice areas" are what the authors believe to be important elements that contribute to a systematic ability to effect continuous improvement in and among (the management of) those processes and practices that actually develop and deliver the products and services.

In essence, CMMI's practice areas are many of the things needed for continuous improvement of technical activities, not the activities themselves.
What CMMI is saying is:
In order to improve your performance, you need to manage your requirements, risks, and configurations; you need to plan, monitor, and control your work; you need to measure, analyze, and manage the output of your efforts; and you need to actually pay attention to the performance of your projects, to how well they follow processes, and to whether your processes are working out for you.
CMMI then says: if you really want to get good at these things, you'd have to be making a focused effort on your processes, leadership would give a sh*t about performance via processes, you'd have standardized process assets, an organization-wide training program, and a formality to your technical activities that might otherwise be left to fend for themselves.
For the true process zealot: you'd be able to quantify the performance of your projects and processes, and you'd be able to improve them by focusing on what the numbers tell you to focus on, not just what people gripe about the most. CMMI also says that if you're going to do a process, you should have a policy for doing it, a plan for it, resources, assignments, process-level training, stakeholder involvement, and other activities to make it stick.

If improvement is what you want, it only makes sense, doesn't it?
(The types of activities mentioned here are from the practice areas, in case they weren't familiar to you.)

You see, CMMI has a number of practice areas that sound like technical activities, but they're present in CMMI because the processes they refer to are needed for process design just as much as they are needed for technical engineering.

So, if we disassemble a practice area into its intent and value in light of the above understanding, we will see that the intent and value are not oriented towards technical activities; they're oriented towards continuous improvement. We can hope that in this context the matter of whether CMMI is a technical standard can be laid to rest, and that we've brought a deeper appreciation for how CMMI works.

With that, we can simply explain that ISO/TL 9000 and ITIL have a different focus than CMMI. Just as CMMI has practices that sound similar to technical engineering processes, these other bodies of knowledge also have similar-sounding activities that are needed and relevant for the purpose each represents. Since this isn't a FAQ about ISO/TL 9000 or ITIL, we hope it's enough of an answer for now to say that wherever CMMI has a practice that seems to also appear in another body of knowledge, CMMI does not innately conflict with the others; there are ways of implementing CMMI that can make them all work well together. Of course, an organization can go about implementing any practice under the sun in a way that conflicts with some other practice, CMMI or otherwise, but that would not be because of anything in CMMI.

Back to Agile and Standards FAQs


Aren't CMMI and Agile / Kanban / Lean methods at opposite ends of the spectrum?

A: Not at all. We've got A LOT of content on this subject! Instead of being very redundant by putting something here, please check out the blog on that topic, and the SEI's Technical Note, CMMI or Agile: Why Not Embrace Both!.

We will, however, leave you with this: There is nothing inherently *in* Agile/Kanban/Lean methods that makes them incompatible with CMMI. However, both CMMI and the family of approaches commonly and collectively called "agile" have a very wide (perhaps infinite) range of ways of being interpreted and implemented. For example, nothing in CMMI requires that everything be "documented", though many organizations take this (silly) approach to using CMMI. Similarly, nothing among the agile values or practices insists that a team produce *no* documentation, but that doesn't prevent teams from being just as silly.

Faux agile doesn't help matters any more than rigid waterfall does.

One more point: Most of the incompatibilities we've seen—beyond interpretation and implementation misfires—come from focusing on the practices (either CMMI or agile) and on tools and/or artifacts (again, either CMMI or agile) instead of the values and principles behind them. Focusing on the wrong thing will most often lead you to the wrong place. Believe it or not, both CMMI and Agile come from the same source. They merely take different routes to get to the same desired outcome, but when used incorrectly, neither is likely to save you from failure.

Back to Agile and Standards FAQs


How are CMMI and SOX (SarBox / Sarbanes-Oxley) Related?

A: They're not. Well... at least not in the way that many people think they might be.
See, many people think that because the Sarbanes-Oxley Act of 2002 (which we'll just call SarBox) frequently involves business processes, IT infrastructure, and related systems, it must involve CMMI. But, in actuality, the connection to CMMI is rather weak and is always a function of the organization's intentional effort to connect the two.

SarBox is about public company corporate governance. It is a US law that aims to eliminate the excuse by corporate leaders of public companies that they "didn't know" some bit of information about their company that could result in mistakes (or outright lies) about accounting, work-in-process, inventory, earnings reports, valuations, sales/revenue forecasts, and so on.

Its origin is in the several accounting scandals revealed in the late 1990's and early 2000's.

The intersection of SarBox and CMMI is only in that companies working towards SarBox compliance are very often relying on systems and software to help them achieve their compliance. When a company says to an auditor, "our software (or systems) are capturing the data, and we rely on that data to be SarBox-compliant," then they might get into questions about the system's requirements, design, configuration, etc.

A company in such a position might look towards CMMI for ways of improving the management of their development practices if, in fact, they are relying on those practices to maintain their SarBox compliance.

The simple answer is this: SarBox and CMMI are not related and don't have similar practices, EXCEPT that some companies *make* them related because of the context in which they're using technology to be SarBox-compliant, and, their reliance on technology development disciplines and/or institutionalization of process improvement disciplines to make sure they have a handle on how they run the company.

That said, another quick connection between SarBox and CMMI is that a company already familiar with CMMI might want to use the GPs and perhaps a few PAs to deploy whatever they need to deploy to "establish and maintain" SarBox compliance. (Sorry, couldn't resist!)

There's another angle to mention:

Many executives seem to have the nasty trait of putting anything that looks, smells, sounds, tastes or feels like "compliance" requirements into the same "bucket".

When they make this mistake, they then jump to a conclusion that goes something like this:

"This is compliance, that is compliance, and that over there is compliance too. They must all be related, and therefore redundant. Which is best? Which is easiest? Which is cheapest? Which should I do first?"

This, of course, is utter nonsense, but it happens. The fact that implementation of these process-related concepts must be driven by the context of the business is just ignored. It is a symptom of an organization looking for the marketing benefit of some auditable achievement and not the benefit of the concepts behind the effort.

Though, in fairness, companies that must comply with SarBox don't have a choice, unless they can afford to go private by buying back all their stock.

Back to Agile and Standards FAQs



SEI / CMMI Institute / ISACA FAQs


Why was CMMI Taken Out of the SEI then sold to ISACA?

A: CMMI and its predecessors had been worked on by SEI for over 25 years. Much of it was funded by the US Department of Defense (DOD). The DOD stopped funding CMMI several years prior to CMMI being carved out of the SEI. However, SEI is still an FFRDC (see here) funded by DOD. In part, for SEI to continue research & development (R&D) on CMMI, some of the support for that effort would come from money paid to the SEI by DOD for other R&D. In 2012 the DOD decided that it wanted the SEI to focus all of its resources on evolving other technologies more urgent to DOD than CMMI, and that CMMI was mature enough to support itself. So, instead of dropping CMMI entirely, Carnegie Mellon University (CMU) created the "CMMI Institute" as a wholly-owned spin-out to operate CMMI (and People-CMM and a few other things, eventually). CMMI Institute evolved CMMI in directions independent of the path it was on while within SEI.

A few years into the CMMI Institute's independence from SEI, CMU decided to sell the CMMI Institute to ISACA. The integration of the CMMI Institute into ISACA will complete in the year 2020.

Back to SEI / CMMI Institute / ISACA FAQs


Who Will Operate the CMMI?

A: CMMI will continue to be operated by the CMMI Institute as an enterprise of ISACA.

Back to SEI / CMMI Institute / ISACA FAQs


What Will Happen to CMMI? Will CMMI Continue to be Supported?

A: CMMI will continue to be supported by CMMI Institute. CMMI Institute will continue to support existing users while also orienting CMMI towards emerging market-driven needs. We can expect CMMI and its related products and services (such as appraisals) to be evolved in directions that make sense to meet many market segments and to appeal to audiences more broadly than the market it had been serving.

Back to SEI / CMMI Institute / ISACA FAQs


Will CMMI Change? What's the Future of CMMI?

A: As you've seen, CMMI has just undergone a significant update to V2.0. The current version and architecture of CMMI are a step evolution from prior versions. Under ISACA, CMMI can evolve along many new paths. For example, CMMI can branch so that there are different versions or different products for different markets. It could be sliced and diced and re-packaged for different uses/users. Different types of appraisals can be created to meet demands not yet suitably addressed.

Imagine, for example, versions of CMMI and of appraisals that focus on ongoing fiscal improvement, or versions that meet the specific targeted needs of start-ups and their venture backers.
Imagine appraisers and consultants specifically qualified to work with lean, agile, start-ups, enterprise, operational services, technical debt, Cyber, or DevOps, each with a version of CMMI, training, and appraisals suited specifically to their business and without the ambiguity currently experienced with only one version of everything for everyone. These are the sorts of things possible with new markets and resources.

Back to SEI / CMMI Institute / ISACA FAQs


Will We Still Be Able to Work with Our Current "CMMI Partner"?

A: All current CMMI Partners in good standing will be offered the opportunity to have their licenses continue to operate under ISACA. Other than changes to references to SEI, CMU, or the CMMI Institute, and to online real estate, the change of relationship between the Partners and the CMMI Institute will not change the relationship between you and your Partners.

Back to SEI / CMMI Institute / ISACA FAQs


Do the Lead Appraisers work for the CMMI Institute or ISACA?

A: Almost none of them do. The rest are licensed to appraise and teach through Partners. CMMI Institute does, however, administer and train people to be certified to take leadership roles and responsibilities for leading appraisals and delivering official instruction.

In particular, CMMI Institute controls very closely how and when it allows people to become Lead Appraisers. Even still, while the cadre of people with the authority to observe candidate Lead Appraisers on behalf of the CMMI Institute is small, only a few of them are actually CMMI Institute employees. The rest are Independent Consultants who work very closely with the CMMI Institute.

Back to SEI / CMMI Institute / ISACA FAQs


How do we report concerns about ethics, conflicts of interest, and/or compliance?

A: Waste, fraud, abuse, and noncompliance with policies harm everyone. If you have concerns about the truth behind an organization's rating, or about the ethics, compliance, or conflicts of interest of a consultant or appraiser, we strongly encourage you to report these concerns to the CMMI Institute. You may also want to review their Partner policies, here, to ensure your concern is properly supported. All authorized and licensed individuals and organizations must operate through a Partner, so all investigations will include an inquiry to the Partner.

The Ethics and Compliance information is here, where you can also see other information on expectations and how to report your concerns.
The direct reporting email address for Ethics and Compliance concerns is: ethics-compliance@cmmiinstitute.com.

We sincerely hope you never have to use any of them, but if you do, we're very sorry. And, we hope you are undeterred from your process improvement aspirations.

Back to SEI / CMMI Institute / ISACA FAQs


Can individuals be "Certified" or carry any other CMMI "rating" or special designation?

A: Individuals can be Professionally "Certified" for career enhancement purposes as well as for carrying out official activities within the CMMI Product suite.

There are a number of professional CMMI certifications that demonstrate only that an individual has taken classes and exams. These certifications do not convey any special rights or privileges that the individuals who hold them can exercise on behalf of other parties.

To lead official appraisals or provide official training recognized by the CMMI Institute, a person must be specifically certified to do so, and their license fees must be up-to-date.

There are no designations conferred on individuals specific to CMMI level ratings. In other words, John Q. Public cannot claim that he, as a person, is "CMMI Level X". The organizational unit Mr. Public worked for can make that claim, but Mr. Public as an individual cannot. Any such ridiculous claim (and we've seen this on many resumes) represents a gross misunderstanding by the individual and/or a terrible lack of communication/training by the organization.

Also, to be clear, taking CMMI courses does not create any official designation as a "certified" or "authorized" CMMI consultant. (We've seen that too.)

Please refer to the CMMI Institute's learning site for further information.

Back to SEI / CMMI Institute / ISACA FAQs


Training FAQs

Is there required training to "do" CMMI?

A: That depends on what you want to accomplish.

    To just implement CMMI? None whatsoever. An organization can buy a license to the CMMI viewer, print it out for their own use, and start to implement it. There's no required training to do that.
    To be completely blunt, however, we have not yet found a single company who could take this layman's approach from scratch and make it work for them—whether to get through a Benchmark appraisal or just to realize improvements. There are just some things where a few hours with someone willing and qualified to explain everything—at least as far as using the model effectively and/or getting to/through a Benchmark is concerned—can make a world of difference between success and disillusionment. (Entinex—sponsor of this site—does that in a 4-hour session we call a Crash Course.)

    To be on an appraisal team, a prerequisite is to become a CMMI Associate by taking and passing the Foundations / Fundamentals course and exam, plus a Building Excellence course in the discipline(s) relevant to the appraisal scope. The fundamentals course can be taken online from the CMMI Institute or in person from the Institute or Partners.

    If the appraisal includes Maturity Level 4 or 5, team members also need an additional course on "High Maturity" concepts. Then, in preparation for the appraisal itself, team members receive "Appraisal Team Training" from the Lead Appraiser (or an alternative qualified individual) prior to the appraisal—but this is part of the appraisal process and not training that must be delivered by CMMI Institute or a Partner.

    To be a certified instructor, one needs several additional courses as well as to be observed delivering the course before becoming authorized to deliver it on one's own.

    To be a Lead Appraiser one needs similar courses, to participate as a team member on two appraisals, the Lead Appraiser course and then still to be observed leading an appraisal.

    High Maturity Lead Appraisers (HMLAs) require additional coursework and exams.
    Applicants for all certified roles will undergo a resume review of experience and qualifications in appropriate areas consistent with the designation they are pursuing.

Back to Training FAQs


Who can provide CMMI-related training?

A: The CMMI Institute itself, and people certified by the CMMI Institute *and* working through a Partner, can deliver any training they are authorized to deliver—if the expectation is that there will be some official registration of that training event. If there is no such expectation of a Certificate of Completion, or if there is no intention of using the training as a prerequisite to future activities, the training is not controlled by the CMMI Institute, since they would never know about it. Be sure to be clear with whoever you are receiving the training from about their authority to deliver the expected outcome. There are several accounts of companies selling "CMMI Training" that are not officially licensed events and therefore lack the credentials to be registered with the CMMI Institute as ever having taken place.

At this time, the CMMI Institute is the only source for official CMMI-related model training materials and registration. The CMMI Institute also controls the exams.

Back to Training FAQs


What sort of CMMI-related training is there?

A: Please go here for training information.

Back to Training FAQs


How can we learn about the appraisal process?

A: For that we have some bad news.
There are only four ways to learn about the appraisal process. One of them is not recommended, one requires a lot of commitment, and you wouldn't be here if you were doing the third. This leaves the fourth. See here:

  1. Ask a certified individual to share the CMMI Appraisal Method Definition Document with you and study it.
  2. Go through all the training requirements of becoming a Lead Appraiser.
  3. Go through an appraisal.
  4. Hire someone who has done #2 to explain it to you.

Back to Training FAQs


Specific Model Content FAQs

What are the II and GOV PAs About?

A: It can be confusing. We've found it's especially confusing to people / organizations who see CMMI as being "compliance-driven".

II (Implementation Infrastructure) and GOV (Governance) are about your company and your company's business, your company's leadership, and what your company does about process and continuous improvement.

These PAs are the absolute minimum PAs for Benchmark and Sustainment appraisals, and they investigate relevant processes and tools, systems, methods, etc., that your company uses to facilitate continuous improvement.

These two practice areas are not about CMMI practices; they are about the organization's processes and its senior management. In fact, in keeping with the concerted effort to separate CMMI practices and practice areas from an organization's processes, we should note, again, that PAs are not the definition of processes, just a means of improving organizational performance. That goes not only for those processes associated with PAs in CMMI, but for all of the organization's processes. (The scope of an appraisal will limit the processes being evaluated, but the intent and value of the II and GOV PAs are universal and will be evaluated as such.)

If your leadership is accustomed to being in the hot seat, buckle up. This could get uncomfortable.


Back to Specific Model FAQs




This section is under construction.

As you can imagine, this section is going to go deep. Once it's done we will be getting into the following topics:





Go away and come back when we care about you.

Or, go away and come back when you care about building an operation where people want to work, where you're in it for creating something great, and where you're ready to listen.

Otherwise, start here.




CMM, CMMI, and SCAMPI are ® registered in the U.S. Patent and
Trademark Office by Carnegie Mellon University.
All other content © Entinex, Inc. (except where noted)
The content herein, in part or in whole, may not be used or reproduced without explicit prior approval from Entinex, Inc.
(Just ask, we'll probably let you use it.)
Please read our disclaimer.

Disclaimer: The opinions expressed here are the authors' and contributors' and do not express a position on the subject from the Software Engineering Institute (SEI), CMMI Institute, Clear Model Institute, Carnegie Mellon University, ISACA or any organization or Partner affiliated with the SEI, CMMI Institute, Clear Model Institute, Carnegie Mellon University, or ISACA.

Most recent notable update:
21 April 2020

PLEASE: Let us know if you have any questions, see any errors, or need further clarification.




About these ads: The ads that appear below DO NOT reflect an endorsement, recommendation, or suggestion by the authors, editors or contributors to this CMMIFAQ, the SEI, or CMMI Institute.
















