Welcome to our brutally honest, totally hip CMMIFAQ.

We're probably going to make as many enemies as friends with this FAQ, but hey, we expect it to be worth it. :-)

We also did a bit of research and found it pretty hard (if not impossible) to find this kind of information anywhere else on the web. So anyone who has a problem with our posting this information is probably the kind of person who wants you to pay to get it out of them before you have enough information to even make a good decision. But we digress... We do that a lot.

This site was designed to provide help with CMMI for people who are researching, trying to get to "the truth" about CMMI, or just looking for answers to basic, frequently asked questions about CMMI and the process of having an appraisal for getting a level rating (or CMMI certification as some people (inaccurately) prefer to say).

The information on this site has also been demonstrated to provide answers and new insights to people who are already (or thought they were) very familiar with CMMI and the appraisal. Feedback has indicated that there is more than a fair amount of incomplete and outright incorrect information being put forth by supposed experts in CMMI.

Your feedback is therefore very important to us. If you have any suggestions for other questions, or especially corrections, please don't hesitate to send them to us.

This is a work-in-progress; not all questions have been answered yet -- it's simply a matter of time to write them, not that we don't know the answers -- but we didn't want to keep you waiting, so we're starting with what we have.

For your own self-study, and for additional information, the source material for many of the answers on CMMI comes from the CMMI Institute. They're not hiding anything; it's all there.

We've broken up the FAQs into the following sections (there will be much cross-over, as can be expected):


Model FAQs


Appraisals/Ratings FAQs


CMMI, Agile, Kanban, Lean, LifeCycles and other Process Concepts FAQs


SEI / CMMI Institute & Transition Out of SEI FAQs


Training FAQs


Specific Model Content FAQs



Model FAQs

What is CMMI?

A: CMMI stands for "Capability Maturity Model Integration". It's the integration of several other CMMs (Capability Maturity Models). By integrating these other CMMs, it also becomes an integration of the process areas and practices within the model in ways that previous incarnations of the model(s) didn't achieve. The CMMI is a framework for business process improvement. In other words, it is a model for building process improvement systems. In the same way that models are used to guide thinking and analysis on how to build other things (algorithms, buildings, molecules), CMMI is used to build process improvement systems.

It is NOT an engineering development standard or a development life cycle. Please take a moment to re-read and reflect on that before continuing.

There are currently three "flavors" of CMMI called constellations. The most famous one is the CMMI for Development -- i.e., "DEV". It has been around (in one version or another) for roughly 10 years and has been the subject of much energy for over 20 years when including its CMM ancestors.

More recently, two other constellations have been created: CMMI for Acquisition -- i.e., "ACQ", and CMMI for Services -- i.e., "SVC". All constellations share many things, but fundamentally, they are all nothing more than frameworks for assembling process improvement systems. Each constellation has content that targets improvements in particular areas, tuned to organizations whose primary work effort either:

  • Develops products and complex services, and/or
  • Acquires goods and services from others, and/or
  • Provides/delivers services.

NONE of the constellations actually contain processes themselves. None of them alone can be used to actually develop products, acquire goods or fulfill services. The assumption with all CMMIs is that the organization has its own standards, processes and procedures by which it actually gets things done. The content of CMMIs is meant to improve the performance of those standards, processes and procedures -- not to define them.

Having said that, it should be noted that there will (hopefully) be overlaps between what any given organization already does and the content of CMMIs. This overlap should not be misinterpreted as a sign that CMMI content *is*, in fact, process content. It can't be over-emphasized: CMMIs, while chock-full-o examples and explanations, do not contain the "how to" of anything other than building improvement systems. The overlap is easy to explain: activities that help improve a process can also be activities that effectively perform a process, and not every organization performs even the basic activities necessary to perform a process area well. So what seems trivial and commonplace to one organization is, to another, salvation from despair.

Another way to look at CMMIs is that they focus on the business processes of developing engineered solutions (DEV), acquiring goods and services (ACQ) and delivering services (SVC). To date, CMMI has been most widely applied in software and systems engineering organizations. Now, with the expansion of the constellations, where CMMI is applied is a distinctly separate matter from CMMI being anything even remotely akin to a standard or certification mechanism for the engineering, methods, technologies, or accreditation necessary to build stuff, buy stuff or do stuff. If an organization chose to do so, CMMI could be applied in the construction or even media production industries. (Exactly how would be an *entirely* different discussion!)

Before we get too off-track... CMMI is meant to help organizations improve their performance of and capability to consistently and predictably deliver the products, services, and sourced goods their customers want, when they want them and at a price they're willing to pay. From a purely inwardly-facing perspective, CMMI helps companies improve operational performance by lowering the cost of production, delivery, and sourcing.

Without some insight into and control over their internal business processes, how else can a company know how well they're doing before it's too late to do anything about it? And if/when they wait until the end of a project or work package to see how close or far they were from their promises/expectations, without some idea of what their processes are and how they work, how else could a company ever make the changes or improvements they'd want/need to make in order to do better next time?

CMMI provides the models from which to pursue these sorts of insights and activities for improvement. It's a place to start, not a final destination. CMMI can't tell an organization what is or isn't important to them. CMMI, however, can provide a path for an organization to achieve its performance goals.

Furthermore, CMMI is just a model; it's not reality. Like any other model, CMMI reflects one version of reality, and like most models, it's rather idealistic and unrealistic -- at least in some ways. When understood as *just* a model, people implementing CMMI have a much higher chance of implementing something of lasting value. As a model, what CMMI lacks is context. Specifically, the context of the organization in which it will be implemented for process improvement. Together with the organization's context, CMMI can be applied to create a process improvement solution appropriate to each unique organization.

Putting it all together: CMMI is a model for building process improvement systems from which (astute) organizations will abstract and create process improvement solutions that fit their unique environment to help them improve their operational performance.

At the risk of seeming self-serving, the following addresses the question of what CMMI is:

Keys to Enabling CMMI.

Back to Model FAQs


Is CMMI for us?

A: We should start the answer to this question with a quick sentence about what CMMI itself *is*.

CMMI is about improving performance through improving operational processes. In particular, it's improving processes associated with managing how organizations develop or acquire solution-based wares and define and deliver their services. So we should ask you a question before we answer yours: Do you feel that you ought to be looking at improving your processes? What business performance improvements would you like to see from your operations?

SO, is CMMI right for you? Obviously this depends on what you're trying to accomplish. Sometimes it's best to "divide and conquer". So we'll divide the world into two groups: those who develop wares and provide services for US Federal agencies (or their prime contractors) and those who don't.

Those of you in the former group will probably come across CMMI in the form of a pre-qualifier in some RFP. As such, you're probably looking at CMMI as a necessary evil, regardless of whether or not you feel your processes need to be addressed in any way. If you're in this group, there aren't many loopholes.

One strong case for why your company might not need to mess with CMMI would be if you are selling a product of your own specification -- something that might be called "shrink-wrapped" or even COTS (Commercial Off-The-Shelf). While looking at CMMI for process improvement wouldn't be a bad idea, the point is that unless you are developing wares from scratch to a government (or a Prime's) specification, you ought to be able to avoid having someone else require or expect you to pursue CMMI practices when you otherwise might not do so.

A couple of exceptions to this "rule of thumb" would be (a) if you are entering into the world of custom wares for the Feds, even though you currently aren't in it, and/or (b) if your product might need modifications or out-of-spec maintenance in order to be bought/used by the government. Governments have an all-too-regular habit of buying a product "as is" functionally, and then realizing that what they need only kinda looks like the original product but is really different. Knowing this, many agencies and prime contractors use the CMMI's appraisal method (called "SCAMPI") as part of their due diligence before wedding themselves to a product or vendor.

If you're in the latter group (remember... those who don't sell to the Feds or their Primes), then the question is really this: "What's not working for you with your current way of running your operation?" You'll need to get crystal clear about that. Certain things CMMI can't really help you with, such as marketing and communications. OK, it could, but if managing your customers and marketing are your biggest challenges, you've got other fish to fry, and frying them with CMMI is a really long way around to get them into the pan. Don't get us wrong, there are aspects of CMMI that can be applied to anything related to *how* you do business. But if you are worrying about where the next meal is coming from, you might be hungry for a while before the ROI from CMMI brings home the bacon. It usually takes a number of months.

Having said that... If you're finding that

  • customer acquisition, satisfaction, or retention, and/or
  • project success, profitability, predictability, or timeliness, and/or
  • employee acquisition, satisfaction, or retention, and/or
  • service level accuracy, predictability, cycle or lead time

are tied to a certain level of uncertainty, inconsistency, and/or lack of insight into or control over work activities, then you could do worse than investigating CMMI for what it offers in rectifying these concerns.

Back to Model FAQs


Is CMMI Dead?

A: NO.

NOTE: This answer assumes you know a thing or two about CMMI, so we won't be explaining some terms you'll find answered elsewhere in this FAQ.

As of this writing, after the 2013 conference and workshop held by the newly-formed CMMI Institute, we can unequivocally state the rumors of the CMMI's demise are greatly exaggerated. The Institute hired a firm to conduct an independent market survey, the Partner Advisory Board conducted a survey of Institute Partners and their sponsored individuals, and one of the Partners even took it upon themselves to hire a firm to directly contact at least 50 companies who use CMMI. The interesting finding from these surveys and market data is that use of CMMI for actual improvement (not just ratings) is on the rise. Furthermore, CMMI for Services is picking up more users, and it seems CMMI-SVC is getting some users to convert over from CMMI-DEV (which partially explains a drop in CMMI-DEV use).

In the US, the DOD no longer mandates use of CMMI as a "minimum pre-qualification", but it does view CMMI as a +1 benefit in offerors' proposals. In addition, most (if not all) of the "big integrators", defense, infrastructure and aerospace firms who use CMMI continue to use and expect the use of CMMI by their subcontractors.

In short, CMMI is far from dead, and, with new initiatives (in content and appraisal approaches) under way and planned for at the CMMI Institute, the relevance and applicability of CMMI to the broader market is expected to pick up again over the coming years.

Back to Model FAQs


How many processes are there in CMMI?

A: NONE. Zero. Zip. Nada. Rien. Nil. Bupkis. Big ol' goose-egg. There's not a single process in all of CMMI. They're called Process Areas (PAs) in CMMI, and we're not being obtuse or overly pedantic about semantics. The distinction between processes and Process Areas is important to understand.

So, there are *no* processes in CMMI. No processes, no procedures, no work instructions, nothing. This is often very confusing to CMMI newcomers. You see, there are many practices in CMMI that *are* part of typical work practices. Sometimes they are almost exactly what a given project, work effort, service group or organization might do, but sometimes the practices in CMMI sound the same as likely typical practices in name only, and the similarity ends there. Despite the similar names used in typical work practices and in CMMI, they are *not* to be assumed to refer to one and the same activities. That alone is enough to cause endless hours, days, or months of confusion. CMMI practices are practices that improve existing work practices; they do not *define* what those work practices must be for any given activity or organization.

The sad reality is that so many organizations haven't taken the time to look at and understand the present state of their actual work practices. As a result, not only do they not know everything they would need to know merely to run their operation, they then look to CMMI as a means of defining their own practices! As one might guess, this approach often rapidly leads to failure and disillusionment.

How you run your operation will undoubtedly include practices that may happen at any point in time during the course of doing the work. Irrespective of where these activities take place in reality, the CMMI PAs are collections of practices to improve those activities. CMMI practices are not to be interpreted as necessarily being in a sequence, or as intrinsically distinct from existing activities or from one CMMI practice to another. Simply put, CMMI practices (or alternatives to them) are the activities collectively performed to achieve improvement goals. Goals, we might add, that ought to be tied to business objectives more substantial than simply achieving a rating. There's so much more to say here, but it would take a site unto itself to do so. Besides, we never answered the question....

... in the current version of CMMI for DEVELOPMENT (v1.3, released October 2010) there are 22 Process Areas. (There were 25 in v1.1, and also 22 in v1.2.) CMMI v1.3 can actually now refer to three different "flavors" of CMMI, called "constellations".

CMMI for Development is one "constellation" of PAs. There are two other constellations: one for improving services and one for acquisition. Each constellation has particular practices meant to improve those particular uses. CMMI for Acquisition and CMMI for Services are now also at v1.3. While much of the focus of this list is on CMMI for Development, we're updating it slowly but surely to at least address CMMI for Services, too.

Meanwhile, we'll just point out that the three constellations share 16 "core" process areas; CMMI for Development and for Services additionally share the Supplier Agreement Management (SAM) process area. CMMI for Acquisition has a total of 22 PAs, and Services has a total of 24 PAs. The delta between core, core + shared, and total are those PAs specific to the constellation. More on that later.
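For the arithmetic-minded, the totals fall straight out of the counts of the lists that follow (16 core PAs, SAM shared by DEV and SVC, plus each constellation's unique PAs). Here's a quick, purely illustrative sketch of that tally:

```python
# Tally process areas per constellation from the counts in this FAQ's lists.
CORE = 16            # CMMI Model Foundation (CMF), common to all constellations
SHARED_DEV_SVC = 1   # Supplier Agreement Management (SAM), shared by DEV and SVC
UNIQUE = {"DEV": 5, "ACQ": 6, "SVC": 7}  # PAs unique to each constellation

totals = {
    "DEV": CORE + SHARED_DEV_SVC + UNIQUE["DEV"],
    "ACQ": CORE + UNIQUE["ACQ"],
    "SVC": CORE + SHARED_DEV_SVC + UNIQUE["SVC"],
}
print(totals)  # {'DEV': 22, 'ACQ': 22, 'SVC': 24}
```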

We would like to thank our friend, Saif, for pointing out that our original answer was not nearly doing justice to those in need of help. The update to this answer was a result of his keen observation. Thanks Saif!

The Process Areas of CMMI are listed below. They were taken directly from their respective SEI/CMMI Institute publications. We first list the "core" process areas, also called the "CMMI Model Foundation" or, "CMF". Then we list the process area shared by two of the constellations, DEV and SVC, then we list the process areas unique to each of the three constellations, in order of chronological appearance: DEV, ACQ, then SVC.

All the PAs are listed in alphabetical order by acronym, and for those who are interested in Maturity Levels, we include in brackets '[]' which Maturity Level each PA is part of. We're also listing the purpose statement of each one.

We should also note that in process area names, purpose statements, and throughout the text, in CMMI for Services the notion of a "project" has largely been replaced with the notion (and use of the term) "work". For example, in CMMI for Services, "Project Planning" becomes "Work Planning", and so forth. The rationale for that is the result of months of debate over the relevance of, and subsequent confusion over, the concept of a "project" in the context of service work. While the concept of a "project" *is* appropriate for some types of services, it is quite inappropriate for most, and substituting the notion (and use of the term) "work" for "project" has effectively zero negative consequences in a service context.

This may raise the question of why not simply substitute "work" for "project" in all three constellations. In the attitude of this CMMIFAQ, our flippant answer would be something like, "let's take our victories where we can get them and walk away quietly", but a more accurate/appropriate answer would be that product development and acquisition events are generally more discrete entities than services, and the vast majority of product development and acquisition events are, in fact, uniquely identified by the notion of a "project". Furthermore, there is nothing in the models that prevents users from restricting the interpretation of "project" or "work". It's just that re-framing "project" and "work" in their respective contexts made sense in a broader effort to reduce sources of confusion.

Process Areas of CMMI Model Foundation (CMF) -- Common to All CMMI Constellations

    Causal Analysis & Resolution, [ML 5]
    The purpose of Causal Analysis and Resolution (CAR) is to identify causes of defects and other problems and take action to prevent them from occurring in the future.
    Configuration Management, [ML 2]
    The purpose of Configuration Management (CM) is to establish and maintain the integrity of work products using configuration identification, configuration control, configuration status accounting, and configuration audits.
    Decision Analysis & Resolution, [ML 3]
    The purpose of Decision Analysis and Resolution (DAR) is to analyze possible decisions using a formal evaluation process that evaluates identified alternatives against established criteria.
    Integrated Project Management, [ML 3]
    The purpose of Integrated Project Management (IPM) is to establish and manage the project and the involvement of the relevant stakeholders according to an integrated and defined process that is tailored from the organization's set of standard processes.
    Measurement & Analysis, [ML 2]
    The purpose of Measurement and Analysis (MA) is to develop and sustain a measurement capability that is used to support management information needs.
    Organizational Process Definition, [ML 3]
    The purpose of Organizational Process Definition (OPD) is to establish and maintain a usable set of organizational process assets and work environment standards.
    Organizational Process Focus, [ML 3]
    The purpose of Organizational Process Focus (OPF) is to plan, implement, and deploy organizational process improvements based on a thorough understanding of the current strengths and weaknesses of the organization's processes and process assets.
    Organizational Performance Management, [ML 5]
    The purpose of Organizational Performance Management (OPM) is to proactively manage the organization's performance to meet its business objectives.
    Organizational Process Performance, [ML 4]
    The purpose of Organizational Process Performance (OPP) is to establish and maintain a quantitative understanding of the performance of the organization's set of standard processes in support of quality and process-performance objectives, and to provide the process performance data, baselines, and models to quantitatively manage the organization's projects.
    Organizational Training, [ML 3]
    The purpose of Organizational Training (OT) is to develop the skills and knowledge of people so they can perform their roles effectively and efficiently.
    Project Monitoring and Control, [ML 2]
    The purpose of Project Monitoring and Control (PMC) is to provide an understanding of the ongoing work so that appropriate corrective actions can be taken when performance deviates significantly from the plan.
    Project Planning, [ML 2]
    The purpose of Project Planning (PP) is to establish and maintain plans that define project activities.
    Process and Product Quality Assurance, [ML 2]
    The purpose of Process and Product Quality Assurance (PPQA) is to provide staff and management with objective insight into processes and associated work products.
    Quantitative Project Management, [ML 4]
    The purpose of Quantitative Project Management (QPM) is to quantitatively manage the project's defined process to achieve the project's established quality and process-performance objectives.
    Requirements Management, [ML 2]
    The purpose of Requirements Management (REQM) is to manage requirements of the products and product components and to identify inconsistencies between those requirements and the work plans and work products.
    Risk Management, [ML 3]
    The purpose of Risk Management (RSKM) is to identify potential problems before they occur so that risk-handling activities can be planned and invoked as needed across the life of the product or project to mitigate adverse impacts on achieving objectives.

Shared by CMMI for Development and CMMI for Services

    Supplier Agreement Management, [ML 2]
    The purpose of Supplier Agreement Management (SAM) is to manage the acquisition of products from suppliers.

Process Areas Unique to CMMI for Development

    Product Integration, [ML 3]
    The purpose of Product Integration (PI) is to assemble the product from the product components, ensure that the product, as integrated, functions properly, and deliver the product.
    Requirements Development, [ML 3]
    The purpose of Requirements Development (RD) is to produce and analyze customer, product, and product component requirements.
    Technical Solution, [ML 3]
    The purpose of Technical Solution (TS) is to design, develop, and implement solutions to requirements. Solutions, designs, and implementations encompass products, product components, and product-related lifecycle processes either singly or in combination as appropriate.
    Validation, [ML 3]
    The purpose of Validation (VAL) is to demonstrate that a product or product component fulfills its intended use when placed in its intended environment.
    Verification, [ML 3]
    The purpose of Verification (VER) is to ensure that selected work products meet their specified requirements.

Process Areas Unique to CMMI for Acquisition

    Agreement Management, [ML 2]
    The purpose of Agreement Management (AM) is to ensure that the supplier and the acquirer perform according to the terms of the supplier agreement.
    Acquisition Requirements Development, [ML 2]
    The purpose of Acquisition Requirements Development (ARD) is to develop and analyze customer and contractual requirements.
    Acquisition Technical Management, [ML 3]
    The purpose of Acquisition Technical Management (ATM) is to evaluate the supplier's technical solution and to manage selected interfaces of that solution.
    Acquisition Validation, [ML 3]
    The purpose of Acquisition Validation (AVAL) is to demonstrate that an acquired product or service fulfills its intended use when placed in its intended environment.
    Acquisition Verification, [ML 3]
    The purpose of Acquisition Verification (AVER) is to ensure that selected work products meet their specified requirements.
    Solicitation and Supplier Agreement Development, [ML 2]
    The purpose of Solicitation and Supplier Agreement Development (SSAD) is to prepare a solicitation package, select one or more suppliers to deliver the product or service, and establish and maintain the supplier agreement.

Process Areas Unique to CMMI for Services

    Capacity and Availability Management, [ML 3]
    The purpose of Capacity and Availability Management (CAM) is to ensure effective service system performance and ensure that resources are provided and used effectively to support service requirements.
    Incident Resolution and Prevention, [ML 3]
    The purpose of Incident Resolution and Prevention (IRP) is to ensure timely and effective resolution of service incidents and prevention of service incidents as appropriate.
    Service Continuity, [ML 3]
    The purpose of Service Continuity (SCON) is to establish and maintain plans to ensure continuity of services during and following any significant disruption of normal operations.
    Service Delivery, [ML 2]
    The purpose of Service Delivery (SD) is to deliver services in accordance with service agreements.
    Service System Development*, [ML 3]
    The purpose of Service System Development (SSD) is to analyze, design, develop, integrate, verify, and validate service systems, including service system components, to satisfy existing or anticipated service agreements.
    *SSD is an "Addition." As such, it is at the organization's discretion whether to implement SSD and whether to include SSD in a SCAMPI appraisal.
    Service System Transition, [ML 3]
    The purpose of Service System Transition (SST) is to deploy new or significantly changed service system components while managing their effect on ongoing service delivery.
    Strategic Service Management, [ML 3]
    The purpose of Strategic Service Management (STSM) is to establish and maintain standard services in concert with strategic needs and plans.

Back to Model FAQs


How are the processes organized?

A: This question looks at how Process Areas are organized relative to one another. The next FAQ question addresses the elements of each Process Area. Process Areas are organized in two main ways, called "Representations":

  • Staged, and
  • Continuous

Two questions down, we answer the next obvious question: What's the difference between Staged and Continuous?  For now, just trust us when we say that this really doesn't matter except to a very few people and organizations who really geek out over this idea of "pathways" through an improvement journey. Ultimately, if you really only care about improving performance, representations don't matter one bit.

Back to Model FAQs


What is each process area made up of?

A: Each process area is made up of two kinds of goals, two kinds of practices, and a whole lot of informative material.

The two goal types are Specific Goals and Generic Goals, which makes the two practice types follow suit as Specific Practices and Generic Practices. Astute readers can probably guess that Specific Goals are made up of Specific Practices and Generic Goals are made up of Generic Practices.

Every Process Area (PA) has at least one Specific Goal (SG), made up of at least two Specific Practices (SPs). The SPs in any PA are unique to that PA, whereas, other than the name of the PA in each of the Generic Practices (GPs), the GPs and Generic Goals (GGs) are identical in every PA. Hence, the term "Generic".

PAs all have anywhere from 1 to 3 Generic Goals -- depending on which model representation (see the previous question) the organization chooses to use, and, the path they intend to be on to mature their process improvement capabilities.

The informative material is very useful and varies from PA to PA. Readers are well-advised to focus on the Goals and Practices because they are the required and expected components of CMMI when it comes time to be appraised. Again, if improving performance is important to you and appraisals are not, then these goal-practice relationships and normative/informative philosophies don't really matter at all -- use anything in the model to make your operation perform better! Read more about that here.
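For the structurally minded, the relationships above (every PA has one or more SGs, each with two or more SPs; GGs and GPs repeat identically across PAs) can be sketched as a tiny data model. This is purely illustrative -- the toy instance borrows goal and practice names from the Project Planning PA of CMMI-DEV v1.3:

```python
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    practices: list  # SPs under a Specific Goal, GPs under a Generic Goal

@dataclass
class ProcessArea:
    name: str
    specific_goals: list  # unique to this PA; at least one SG, each with 2+ SPs
    generic_goals: list   # identical across all PAs (1 to 3 GGs, depending on representation)

# A toy instance sketching part of Project Planning (PP):
pp = ProcessArea(
    name="Project Planning",
    specific_goals=[Goal("SG 1: Establish Estimates",
                         ["SP 1.1 Estimate the Scope of the Project",
                          "SP 1.2 Establish Estimates of Work Product and Task Attributes"])],
    generic_goals=[Goal("GG 2: Institutionalize a Managed Process",
                        ["GP 2.1 Establish an Organizational Policy"])],
)
```

The point of the sketch: the specific side is content unique to the PA, while the generic side would be the same objects attached to every PA in the model.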

Back to Model FAQs


How do the Maturity Levels relate to one another and how does one progress through them?

A: We touch on this subject below, but here is some additional thinking about it:

Before we get started, it's important to remind readers that the entire CMMI is about improving performance, not about defining processes or performing practices. If you go back to the challenge posed to the original founders of CMM, they wanted greater predictability and confidence that development projects (originally limited to software, but since expanded to any kind of development or services) would be successful. The practices in CMM and later CMMI were intended to provide that confidence. The rationale being that these practices provided the capability to increase performance and the increased performance would lead to better outcomes. Problems started when people saw the practices themselves as the intended outcome and not the increase in performance the practices were supposed to catalyze.

Part of this problem has to do with the evidence required for CMMI appraisals. Our sponsor company also has a problem with the appraisal in this regard. It is entirely possible for an organization to actually perform the practices in CMMI but for the evidence of having performed them to be very brief and difficult to capture. Unfortunately, the way the appraisal is currently defined, such fleeting artifacts are challenging to demonstrate in an appraisal, and the value-add of attempting to preserve them is often very low. As you can imagine, the benefits of CMMI extend far beyond the appraisal, and in fact exist regardless of whether or not an appraisal is performed.

For the remainder of this explanation, we will assume that we are after the performance benefits of CMMI and we are not too worried about the appraisal expectations. We will leave those for others to worry about. There are plenty of experts throughout the world who can answer specific questions about evidence. (Except for our own contributors, of course, we do not endorse any of them and we do not know whether they will give you correct or even satisfactory answers.)

To explain the different Maturity Levels, we use the idea of hiring particular experts. Clearly, using the practices of each Maturity Level does not require hiring such experts! We are only using the idea of "hiring experts" to help explain the contents of each Maturity Level. For example, you don't need to hire a coach to lose weight. You can read about it in many different books with many different ideas and come up with something that works for you. But if you did hire a health coach, you would gain the benefit of their expertise without having to figure it out for yourself.

Maturity Level 2 is very, very basic. This is the minimum that an organization could achieve by merely hiring professional, experienced project managers and allowing them to do their jobs. These project managers would work directly with the organization's leaders and owners to help projects be successful and the organization be profitable. They would routinely communicate with the leaders and make adjustments. Companies struggling to incorporate or demonstrate use of the practices in CMMI ML2 are likely to be wildly inconsistent in when they deliver and in the quality of what they deliver, and their profits are likely to be highly unpredictable. Such organizations frequently take on more work than they can handle. They then proceed to do a poor job of planning the level of effort and dependencies required to complete the work. When projects don't meet financial or customer expectations, companies who don't perform ML2 practices don't know where to begin to understand why, and they typically turn to specific people to try to figure out what went wrong. Very often people (internal staff or customers) get blamed for the missed expectations rather than anyone realizing that the problems really started with the company's own lack of situational awareness. ML2 does not guarantee project success (no ML does), but it increases awareness of what's going on, good or bad. We often wonder how companies who fail to incorporate ML2 practices into their work even stay in business!

Organizations using ML2 practices mostly use data and metrics to ensure their projects are on budget, on time, and people are doing the work being asked of them. Organizations without the issues we describe above often perform the ML2 practices without realizing they are doing so.

Maturity Level 3 has quite a bit more going on. An organization with people who are experts in:

  1. process management,
  2. the technical/operational work of delivery (development or services, etc.), and
  3. organizational change and development

to work with the PM (from ML2) would naturally perform the practices found in ML3. The reason to add such people is to:

  • facilitate communication and coordination throughout the organization and to learn and share observations from the successes and failures of other projects,
  • establish performance norms for how to do the materially core work of the organization, and
  • put in place the mechanisms for continuous improvement, learning, strategic growth, and decision-making.

Furthermore, you would have the PM involved in ensuring the time and effort required to look across the organization is not used up by the projects. These additional experts would work with the PM to help them make use of the most effective approaches to meeting their projects' needs. While projects often don't provide useful performance data until near the end of the work, these new experts would help the organization's leaders understand how well the organization is performing from the inside, even while projects are in the middle of execution. ML3 organizations use data and metrics to help understand their internal costs and effectiveness. They are also typically better than ML2 organizations at asking themselves whether their processes are good, not just whether their processes are followed.

Companies who don't perform ML3 practices well may have problems coordinating across projects. They are also likely to experience issues with accounting for risks and other critical decisions. They will often have inefficiencies in many places including the technical or operational work but they will be unaware of these inefficiencies or how these inefficiencies become problems both for the projects and for the company's profits. Companies with issues in the ML3 practices will often be playing "catch up" with opportunities due to insufficient consideration of training as a strategic enabler. Similarly, they will often experience slow and tedious communication and decision-making that further slows their ability to be responsive to the market or their customers. Companies without these issues are likely to already handle CMMI ML3 practices in some way.

Maturity Levels 4 and 5 are unique. We consider them together because there's nearly no reason to separate them in actual practice. Together, ML4 and ML5 are called "high maturity" ("HM") in CMMI. And we look at ML4 and ML5 together as the set of practices organizations incorporate to help them become "high performance operations" ("HPO"). The type of expert(s) added to an ML3 organization to make them HPOs are experts in using quantitative techniques to manage the tactical, strategic and operational performance of the organization. HM practices take the data from ML2 and ML3 to help organizations control variation and become more predictable operations. They also use the data to decrease uncertainty and increase confidence in their performance predictions at the project and organizational levels.

Organizations using HM practices are better at forecasting their performance than companies with ML3 capabilities. They are also better at managing their work throughout the organization. HPOs have a greater awareness of how their processes work and whether or not they can rely on their processes to achieve their desired outcomes. Companies who do not operate with HM practices may not be struggling; however, they may have issues with their competitiveness or with their gross margin that they are unable to solve. Many companies who operate at ML3 are unlikely to be as competitive as companies with HM practices. Also, ML3 organizations' primary approaches to improving performance are to work on eliminating muda ("waste"), reducing head-count, frequent organizational structure changes, and linearly increasing marginal growth through increased sales. Decision-making among ML3 organizations is typified by "deterministic" approaches. On the other hand, HPOs improve performance through the use of process performance models and strategic investment in processes and tools, and they use a "probabilistic" decision-making approach.
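To make the "deterministic" versus "probabilistic" contrast less abstract, here is a minimal sketch with entirely made-up numbers and a hypothetical task list of ours. A deterministic estimate adds single-point numbers; a probabilistic one samples from a distribution and attaches a confidence level:

```python
# Hypothetical illustration of "deterministic" vs. "probabilistic"
# decision-making. Task estimates (best, likely, worst) in days are made up.
import random

tasks = [(4, 6, 12), (8, 10, 20), (3, 5, 9)]

# Deterministic: add the single "likely" numbers; no confidence attached.
deterministic = sum(likely for _, likely, _ in tasks)   # 21 days

def simulate(n=10_000, seed=1):
    """Monte Carlo total duration; returns an 80%-confidence figure."""
    random.seed(seed)
    totals = sorted(
        sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(n)
    )
    return totals[int(n * 0.8)]
```

An HPO would commit to something like the 80th-percentile figure (noticeably higher than the single-point sum of 21) and state the confidence behind it, rather than promising the bare sum and hoping.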

Again, it is critical to point out that these explanations are not necessarily sufficient to produce material for an appraisal. We also remind you that our description of "hiring experts" is not actually about hiring anyone. It is merely a metaphor to explain that people with certain skills will do certain things when allowed to do their jobs. If someone were to observe what they do, the observer would likely find the CMMI practices embedded in their work.

Back to Model FAQs


What's the difference between Staged and Continuous?

A: It's just different ways of looking at the same basic objects...

The main difference is simply how the model is organized around the path towards process improvement that an organization can take. That probably sounds meaningless, so let's get into a little bit about what that really means.

The SEI, based on the original idea behind the CMM for Software, promoted the notion that there are more fundamental and more advanced (key)* process areas that organizations should endeavor to get good at on the way to maturing their processes towards higher and higher capabilities. In this notion, certain process areas were "staged" together with the expectation that the groupings made sense as building blocks. Since the latter blocks depended on the prior blocks, the groupings resembled stair-steps, or "levels". The idea then was that the first level didn't include any process areas, and that the first staging of (K)PAs* (the actual "level 2") was a set of very fundamental practices that alone could make a significant difference in performance.

From there, the next staging of PAs, or "level 3", could begin to exploit the foundational PAs and begin to effect process improvement changes from a more detailed technical and managerial perspective. Whereas up through Level 3 the PAs have some degree of autonomy from one another, Levels 4 and 5 add process areas that look across all the other process areas as well as other activities not exclusively limited to process-area-related efforts. While Levels 4 and 5 only add a total of four PAs, they are not in the least trivial. They add the maturity and capability to manage processes by numbers rather than only by subjective feedback, and they add the ability to optimize and continuously improve processes across the board based on a statistically-backed quantitative understanding of effort and process performance.

Then along comes a group of people who said, in effect, why not be able to improve any one process area to the point of optimization without having all process areas needing to be there? In fact, why not be able to focus on process areas with high value to the organization first and then go after other process areas, or maybe even ignore any process areas that we don't really need to improve?

In the staged representation, which is the original Software CMM approach, this ability to mature a capability in any one process area doesn't exist, so in CMMI, the idea of a Continuous representation was taken from a short-lived "Systems Engineering" CMM and implemented -- whereby an organization could choose to get really really good at any number of PAs without having to put forth the effort to implement low-value or unused PAs. This becomes especially meaningful to organizations that want to be able to benchmark themselves (or be formally rated) in only areas that matter to them.

For example, say an organization is expert at performing activities in the Verification (VER) process area. They want to be world-renowned for it. In fact, they already are. But they'd like to not only create turn-key verification activities, they want it down to a science for financial and logistical reasons. They want to be running verification at a continually optimizing pace. Without the continuous representation, there'd be no way in CMMI either to work towards this continually optimizing state or to gain any recognition for it.

So, to understand the Continuous representation of the model, it should be enough to know that this representation allows organizations to pick any number of process areas, and also to choose the depth of capability they want to attain in each of those process areas. The key determinant of such a capability lies in the Generic Goals. As we will cover in the next question, the "level" of capability of an organization using the Continuous representation has to do with the Generic Goal they've institutionalized, not the number or mix of PAs.

*In the original CMM for Software, the process areas were called "Key Process Areas", or KPAs, and there was no distinction between types of levels, therefore there was only one type of level, and when someone said "level 3" everyone understood. In CMMI, there are two level types which correspond to the two model representations (see below). Saying "level" in the context of CMMI is incomplete. However, for anyone reading this FAQ from the beginning, this concept has not yet been introduced, and we didn't want to start adding terms that had not yet been defined.

Back to Model FAQs


What's the difference between Maturity Level and Capability Level?

A: They are different ways of rating your process areas...

Let's start with the basics. A "Maturity Level" is what you can be appraised to and rated as when the organization uses the Staged Representation of the CMMI, and a "Capability Level" is what you can be appraised to and rated as when the organization uses the Continuous Representation of the CMMI. As for the details...

A "Maturity Level" X means that an organization, when appraised, was found to be satisfying the goals required by process areas in that level (X). Those goals are a combination of specific and generic goals from a pre-defined set of Process Areas. Each "Maturity Level" has a particular set of PAs associated with it, and in turn, those PAs have a delineated set of goals.

Maturity Level 2 (ML 2) in CMMI for Development requires the following PAs be performed up to and including Generic Goal 2 within them:

  • Requirements Management (REQM)
  • Project Planning (PP)
  • Project Monitoring and Control (PMC)
  • Supplier Agreement Management (SAM)
  • Measurement and Analysis (MA)
  • Process and Product Quality Assurance (PPQA), and
  • Configuration Management (CM)

Maturity Level 3 (ML 3) in CMMI for Development requires the ML 2 PAs, plus the following PAs be performed up to and including Generic Goal 3 within all of them:

  • Requirements Development (RD)
  • Technical Solution (TS)
  • Product Integration (PI)
  • Verification (VER)
  • Validation (VAL)
  • Organizational Process Focus (OPF)
  • Organizational Process Definition (OPD)
  • Organizational Training (OT)
  • Integrated Project Management (IPM)
  • Risk Management (RSKM), and
  • Decision Analysis and Resolution (DAR)

Maturity Level 4 (ML 4) requires the ML 2 and 3 PAs, plus the following PAs be performed up to and including Generic Goal 3 within all of them:

  • Organizational Process Performance (OPP) and
  • Quantitative Project Management (QPM)

And finally, Maturity Level 5 (ML 5) requires the ML 2-4 PAs, plus the following PAs be performed up to and including Generic Goal 3 within all of them:

  • Organizational Performance Management (OPM) and
  • Causal Analysis and Resolution (CAR)

For CMMI for Services and CMMI for Acquisition, the idea is the same, only some of the process areas are swapped out at both ML 2 and ML 3 for their respective disciplines. You can refer back to this question to fill in the blanks on which PAs to swap in/out for CMMI for Services and CMMI for Acquisition at ML2 and ML3. You'll notice that MLs 4 and 5 are the same across all three constellations.
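The cumulative staging above can be sketched as a simple lookup table. This is purely our own illustration using the CMMI-DEV PA abbreviations already listed, not any official tooling:

```python
# Illustrative sketch of the cumulative CMMI-DEV staging described above.
# Process-area abbreviations are as listed in this FAQ.

ML_PAS = {
    2: ["REQM", "PP", "PMC", "SAM", "MA", "PPQA", "CM"],
    3: ["RD", "TS", "PI", "VER", "VAL", "OPF", "OPD",
        "OT", "IPM", "RSKM", "DAR"],
    4: ["OPP", "QPM"],
    5: ["OPM", "CAR"],
}

def required_pas(target_ml):
    """All process areas whose goals must be satisfied for a target ML.
    Each ML includes everything introduced at the MLs below it."""
    pas = []
    for level in range(2, target_ml + 1):
        pas.extend(ML_PAS[level])
    return pas
```

The point of the sketch is only the cumulative structure: `required_pas(5)` contains all 22 PAs, while `required_pas(2)` contains just the seven ML2 PAs.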

Now, if you recall from the earlier FAQ, the Continuous representation is tied to the Generic Goals, and from above, Capability Levels are attained when using the Continuous representation. So with that, Capability Levels are then tied to the Generic Goals. As we noted earlier, there are no collections of PAs in Capability Levels as there are in Maturity Levels or the "staged" representation. Therefore, it is far simpler to explain that a Capability Level is attained PA by PA. An organization can choose (or perhaps not by choice, but by de facto performance) to be at different Capability Levels (CLs) for different PAs. For this reason, the results of a SCAMPI based on the Continuous Representation determine a "Capability Profile" that conveys each PA and its Capability Level.

Basically, the Capability Level of a PA is the highest Generic Goal at which the organization is capable of operating. Since there are actually three Generic Goals (1-3), an organization can be found to be operating at a Capability Level of ZERO (CL 0), in which they aren't even achieving the first Generic Goal, which is simply to "Achieve Specific Goals".

Thus, the three Capability Levels are (in our own words):

  • Capability Level 1: The organization achieves the specific goals of the respective process area(s).
  • Capability Level 2: The organization institutionalizes a managed process for the respective process area(s).
  • Capability Level 3: The organization institutionalizes a defined process for the respective process area(s).
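Since Capability Levels are attained PA by PA, a Capability Profile is naturally just a per-PA mapping. Here is a hypothetical sketch (the PA selection and levels are invented purely for illustration):

```python
# A hypothetical Capability Profile: each process area carries its own
# Capability Level (0-3), independent of the others.

profile = {"VER": 3, "VAL": 2, "REQM": 3, "SAM": 0}

def meets_target(profile, targets):
    """True if every targeted PA is at or above its target Capability Level.
    PAs absent from the profile are treated as CL 0."""
    return all(profile.get(pa, 0) >= cl for pa, cl in targets.items())
```

This is exactly the sense in which an organization can be "CL3 in VER but CL0 in SAM" under the Continuous representation: the profile, not a single number, is the result.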

Back to Model FAQs


What are the Generic Goals?
    a.k.a. What are the differences among the Capability Levels?
    a.k.a. What do they mean when they say process institutionalization?

A: The Generic Goals *are*, in fact, perfectly parallel with the Capability Levels. In other words, Generic Goal 1 (GG1) aligns with Capability Level 1 (CL1). GG2 with CL2, and GG3 with CL3. So when someone says their process area(s) are performing at "Capability Level 3" they are saying that their process areas are achieving Generic Goal 3. The Generic Goals are cumulative, so saying that a process area is CL3 (or GG3) includes that they are achieving GG1 and GG2 as well.

Before we get into a discussion about the idea of institutionalization, let's list the Generic Goals:

    Generic Goal 1 [GG1]: The process supports and enables achievement of the specific goals of the process area by transforming identifiable input work products to produce identifiable output work products.

    Generic Practice 1.1 [GP 1.1]: Perform the specific practices of the process to develop work products and provide services to achieve the specific goals of the process area.

    Generic Goal 2 [GG2]: The process is institutionalized as a managed process.

    Generic Practice 2.1 [GP 2.1]: Establish and maintain an organizational policy for planning and performing the process.

    Generic Practice 2.2 [GP 2.2]: Establish and maintain the plan for performing the process.

    Generic Practice 2.3 [GP 2.3]: Provide adequate resources for performing the process, developing the work products, and providing the services of the process.

    Generic Practice 2.4 [GP 2.4]: Assign responsibility and authority for performing the process, developing the work products, and providing the services of the process.

    Generic Practice 2.5 [GP 2.5]: Train the people performing or supporting the process as needed.

    Generic Practice 2.6 [GP 2.6]: Place selected work products of the process under appropriate levels of control.

    Generic Practice 2.7 [GP 2.7]: Identify and involve the relevant stakeholders as planned.

    Generic Practice 2.8 [GP 2.8]: Monitor and control the process against the plan for performing the process and take appropriate corrective action.

    Generic Practice 2.9 [GP 2.9]: Objectively evaluate adherence of the process against its process description, standards, and procedures, and address noncompliance.

    Generic Practice 2.10 [GP 2.10]: Review the activities, status, and results of the process with higher level management and resolve issues.

    Generic Goal 3 [GG3]: The process is institutionalized as a defined process.

    Generic Practice 3.1 [GP 3.1]: Establish and maintain the description of a defined process.

    Generic Practice 3.2 [GP 3.2]: Collect work products, measures, measurement results, and improvement information derived from planning and performing the process to support the future use and improvement of the organization's processes and process assets.

So, you're wondering what's this business about institutionalization. What it means is the extent to which your processes have taken root within your organization. It's not just a matter of how widespread the processes are, because institutionalization can take place in even 1-project organizations. So then, it's really about how they're performed, how they're managed, how they're defined, what you measure and control the processes by, and how you go about continuously improving upon them.

If we look at what it takes to manage any effort or project, we will find what it takes to manage a process. Look above at GG2. You'll see that each of those practices is easy to understand if we were discussing projects, but when it comes to processes, people have trouble internalizing these everyday management concepts. All institutionalization reduces to is the ability to conduct process activities with the same rigor that we put forth to effectively execute projects.

The GPs are all about the processes, not necessarily the work of the services or projects performed. GPs make sure the processes are managed and executed and are affected contextually by the work being done but are otherwise agnostic as to the work itself. In other words, you could be executing the processes well/poorly as far as GPs go, but the work itself might still be on/off schedule, budget, etc.

Assume for a moment that the process execution is out of step with the results of the work -- for example, that the processes are crappily done, but the work is still on time and on budget and the customers are happy. That's possible, right? What it tells us is that the processes are actually garbage and wasteful, and that they have no beneficial impact on the work.

GPs ensure the processes are (defined and) managed as though you'd (define and) manage the work. When they're out of sync (like in the example above), it tells you that the processes suck and your people are still the reason your efforts succeed; the efforts aren't really influenced by the processes. If the processes were to disappear, no one would notice; but if key people in the company were to disappear, everything would fall apart, and the processes couldn't be relied upon to carry the load.

It's not that we ever want processes to replace good people, but when even good people aren't supported by good processes, and the people are constantly working around the processes instead of working with them, then you've got risks and issues. Good processes allow good people to think ahead and apply themselves in more value-added ways than reinventing routine work every time they need to perform it. Good processes also allow good people to offload busy work to less experienced people who can follow a process to uphold status-quo performance, while the experienced people go off applying their experience to new ideas and improved performance.

The point, here, being that the GPs ensure the processes are working and operating effectively. They're not about the specific work products of the service or project.

These concepts were not as consistently articulated in pre-CMMI versions of CMM. But if all this is still confusing, please let us know where you're hung-up and we'll be happy to try to answer your specific questions.

Back to Model FAQs


What's High Maturity About?
    a.k.a. What's the fuss about High Maturity Lead Appraisers?
    a.k.a. What's the fuss about the informative materials in the High Maturity process areas?

A: "High Maturity" refers to the four process areas that are added to achieve Maturity Levels 4 and 5:

  • Organizational Process Performance (OPP),
  • Quantitative Project Management (QPM),
  • Organizational Performance Management (OPM), and
  • Causal Analysis and Resolution (CAR)

Collectively, these process areas are all about making decisions about projects, work, and processes based on performance numbers, not opinions, not compliance, and eventually not on "rearward-looking" data, rather, forward-looking and predictive analysis.

It's not just any numbers, but numbers that tie directly into the organization's business and performance goals, and not just macro-level goal numbers but numbers that come from very specific, high-fidelity sub-processes that are used by work and projects and can predict a work effort and/or project's performance as well as the process' outcomes.

What enables this sort of quantitative-centric ability are benchmarks about the organization's processes (called process "performance baselines") and the predictive analysis of the organization's processes (called process "performance models"). Together the process performance baselines and models provide the organization with an idea of what their processes are really up to and what they can really do for the bottom line. This is not usually based on macro-level activities, but are based on activities for which nearly every contributing factor to the process is known, quantifiable, within the organization's control and measured.

Since processes are used by projects and other work, and since, at maturity (and capability) levels beyond level 2, projects draw their processes from a pool of defined practices, the ability to create processes for projects and other work, and to manage the performance of both the work/project activities and the process activities, is facilitated by quantifying both work/project and process performance. This results in process information that simultaneously supports work/project outcomes and provides insight into how well the processes are performing.

In order for the process data to provide value, however, the processes must be stable and in control. Determining whether processes are stable and in control is usually a matter of statistical analysis. Therefore, at the core of all this quantification is a significant presence of statistics, without which process data is not always trustworthy (remember: this is high maturity; it's different at ML2 and 3) and that data's ability to predict outcomes is tenuous, at best.
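To make "stable and in control" less abstract, here is a minimal sketch of one classic stability test, the XmR ("individuals and moving range") chart. The numbers and helper names are our own illustration; CMMI prescribes no particular technique:

```python
# A minimal XmR ("individuals and moving range") stability check -- one
# common way to ask whether a process measure is stable and in control.
# Our own illustrative sketch, not anything prescribed by CMMI.

def xmr_limits(samples):
    """Natural process limits for an individuals (X) chart."""
    mean = sum(samples) / len(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR constant (3 / d2, where d2 = 1.128 for n = 2)
    return mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def unstable_points(baseline, recent):
    """Recent measurements falling outside limits set by a stable baseline."""
    lower, upper = xmr_limits(baseline)
    return [x for x in recent if x < lower or x > upper]
```

With a baseline of, say, peer-review effort hovering around 4 hours per build, a new observation of 9.5 falls outside the natural process limits; until such signals are investigated and removed, that data cannot be trusted for prediction.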

The eventual need and ability to modify the processes similarly exploits the value richness of statistics and higher analytical techniques. By the time an organization is operating at higher levels of maturity, they will expect themselves to rely on hard data and process professionalism to guide them towards developing, testing, and implementing process changes.

The fuss about all this is multi-faceted. To name a few facets, we can begin by categorizing them (though the categories are not unrelated to each other) as:

  • the only required and expected components of the model are the goals and practice statements, respectively, and
  • misunderstanding and/or misinterpretation of the model high maturity practices.

The action to address these facets stems from a flood of findings that, in many high maturity appraisals, the artifacts accepted as evidence did not convey that these higher maturity concepts were properly understood and implemented at the organizations appraised. In fact, the opposite was found to be true: what *was* accepted as evidence clearly indicated that the practices were *NOT* implemented properly. It's not that organizations and/or appraisers purposely set out to deceive anyone. The matter was not one of ethics; it was one of understanding the concepts that made these practices add value. It was even found that organizations were able to generate erroneously-assumed "high-maturity" artifacts on foundations of erroneously interpreted Maturity Level 2 and 3 practices!

The crux of the matter is/was that none of the practices of the model at any level are entirely stand-alone. They (nearly) all have some informative material that provides context and edification for the practice statements. Since CMMI is a model, and since no model is complete -- by definition -- the most any model user can hope for is edification, explanation, and examples of use. The CMMI's informative material provides this.

At maturity levels 2 and 3, many of the practices are not foreign to people with some process discipline experience, such as change control, project management, peer reviews, and process checks. They may yet be a bit arcane for some; therefore, explanations are provided. The practices at the lower-numbered maturity levels enjoy a wide array of people who can understand and perform them to satisfy the goals of process areas without additional explanation. (Think of it this way: more people know algebra than differential equations, but for some people, even algebra is a stretch.) Practical methods and alternative implementations for these practices abound.

This seems not the case with higher maturity practices in maturity levels 4 and 5. The practices of "high maturity" activities require a different set of skills and methods that few individuals applying CMMI ever have the need or opportunity to acquire. These would be the "differential equations" population in our prior analogy. Thus, the informative materials (namely, sub-practices and typical work products) become far more important -- not towards the required artifacts of an appraisal, but for the proper implementation of what's intended by the practices.

An analogy used elsewhere bears repeating here: want to implement ML2 and ML3? Hire a project manager, a really good operations person in your type of work (e.g., development or services), and a process improvement specialist. Want to implement ML 4 and 5? Team up the first three with a process improvement specialist with expertise in statistical process control, operations research, lean thinking (such as design of experiments) and skills in advanced analytical techniques. For people who can claim the latter on their résumés, little, if anything, in ML4 & ML5 is new to them. ML 4 & 5 bear much resemblance to Six Sigma activities.

To address these findings, SEI created a certification for lead appraisers, and a class for anyone looking to implement high maturity concepts. The certification is intended to ensure that lead appraisers performing high maturity appraisals understand what the concepts mean and what they need to look for, and the course provides deeper insight and examples about the practices and concepts.

Although there are many lead appraisers providing consulting to clients about high maturity concepts, only CMMI-Institute-Certified High Maturity Lead Appraisers ("HMLAs") can perform the appraisals to those levels. Although a blanket statement would be highly inappropriate, there have been clear, well-documented cases of non-HMLAs providing very poor advice on high-maturity implementations. This FAQ strongly recommends that any organization pursuing high maturity take advantage of some consulting by a CMMI-Institute-Certified HMLA well before attempting a SCAMPI to those levels. Having said that, organizations should still take steps to ensure they hire a good-fitting lead appraiser.

Back to Model FAQs


What's a Constellation?

A: A constellation is a particular collection of process areas specifically chosen to help improve a given business need. Currently there are three (3) constellations:

  • Development: For improving the development of (product or complex service) solutions.
  • Acquisition: For improving the purchasing of products, services and/or solutions.
  • Services: For improving delivery of services and creation of service systems (say, to operate a solution but not buy it or build it in the first place).
There are 16 process areas common to all three constellations. Basically, every constellation contains all of the process areas listed earlier minus the following, which are specific to CMMI-DEV:

  • RD
  • TS
  • PI
  • VER
  • VAL, and
  • SAM

A quick reminder that the process areas listed here are for the DEV constellation only. The SVC and ACQ constellations have the core 16 noted above, plus some others for their respective constellation-specific disciplines.

Back to Model FAQs


How many different ways are there to implement CMMI?

A: Infinite, but 2 are most common.

But before we get into that, let's set the record straight. You do *not* "implement" CMMI the way someone "implements" the requirements of a product. The only thing getting "implemented" are your organization's work flows along with whatever "standard processes" and associated procedures your organization feels are appropriate--not what's in CMMI. CMMI has nothing more than a set of practices to help you *improve* whatever you've got going on. CAUTION: If whatever you've got going on is garbage, CMMI is unlikely to help. AND, if you create your organization's processes only using CMMI's practices as a template you'll not only never get anything of value done but your organization's work flows will be dreadfully lacking all the important and necessary activities to operate the business!

Let's say that again: You need to know what makes your business work. You need to know how to get work done. You need to know what your own work flows are BEFORE you will get anything good from CMMI. CMMI is awful as a process template! The *BEST* way to use any CMMI practice is to read that practice and ask yourself any of the following questions:

  • "Where in our workflow does *that* happen?"
  • "How does *that* show up?"
  • "What do we do that accomplishes *that*?"
  • Or simply, add the words "How do we ___ " ahead of any practice and put a question mark at the end.

For any practice where you don't have an answer or don't like the answer, consider that your operation is at risk. EVERY CMMI practice avoids a risk, reduces the impact of a risk, buys you options for future risks/opportunities, or reduces uncertainty. EVERY.ONE. You might need a bit of expert guidance to help you refactor the practice so that it appears more relevant and useful to your particular needs, but there is a value-add or other benefit to every practice. Truly.
(Admittedly, whether or not there's value to *your* business to modify your behavior to realize the benefit of a given practice is an entirely different question.)

Now, on to the "2 most common approaches". There's what we call the blunt-object (or silo'd or stove-piped) approach, which is, unfortunately, what seems to be the most common approach in our observation. In this approach CMMI is implemented with the grace and finesse of a heavy, blunt object at the end of a long lever -- impacting development organizations and managers' collective craniums. This is most commonly found among organizations who care not one whit about actual performance improvement and only care about advertising their ratings.

And then, there's the reality-based approach, in which processes are implemented in such a way that work and service personnel may not even know it's happening. Can you guess which one we advocate?

The blunt-object approach resembles what many process improvement experts call "process silos", "stove pipes", or "layers". This approach is also often implemented *to* a development team *by* some external process entity with brute force and very extreme prejudice. So, not only does the blunt approach employ some very unsavory techniques, subjecting its royal subjects to cruel and unusual process punishment, it also (in its design) is characterized by a "look and feel" of a process where each process is in its own vacuum, without any connection to other processes (or to reality, for that matter), and where the practices of the processes are somehow expected to be performed serially, from one to the next, in the absence of any other organizational context.

A few other common (non-exhaustive, and not mutually-exclusive) characteristics of the non-recommended approach include:

  • Heavy emphasis on compliance irrespective of performance.
  • Little or no input from staff on what the processes should be.
  • Using CMMI practices as project or process "requirements".
  • Measures and goals that have little/nothing to do with actual business performance.
  • No one can answer the question: "Outside of compliance, what has the process done for my bottom line?"
  • Complaints about the "cost of compliance" from people who actually watch things like the bottom line.

If so many implementations of CMMI are guided by an (internal or external process) "expert", one might (justifiably) wonder how and why CMMI processes could ever be implemented in such an obviously poorly conceived approach!

There are two (sometimes inter-related) reasons:

  1. Lack of understanding of the model, and
  2. Being an expert process auditor, and not a process improvement expert.

Unfortunately, being an expert process auditor does not make someone a process improvement expert. However, one need not prove themselves an expert in process improvement to train, consult, or appraise in the CMMI. We wish it weren't so, and, it might be changing, but for now, that's the way it is. So, what you have are many people who become "experts" in CMMI, but they're really only experts in the model's text and in appraising an organization's ability to read the text and produce text-book artifacts. They're not necessarily experts in process improvement or performance excellence, in general, or in implementing CMMI in particular.

We've come across countless examples of organizations' attempts to implement CMMI while being led by someone (or plural) who was at least one of the two types of persons, and too frequently, both at once. Frightening, but true. The jury is still out on whether it's worse to be led by such a non-expert or to attempt "Do-It-Yourself" CMMI implementation. What the jury is definitely in agreement on is that if your focus is on CMMI and not on improving business performance, you're really wasting your time. Again, we digress....

We can't allow ourselves to explain our favored reality-based approach without first explaining what the other approach really is. Not so that our approach looks better, and not because we must justify our approach, but because we feel that it's important for people new to CMMI and/or to process/performance improvement to be prepared to recognize the signs of doom and be able to do something about it before it's too late.

All kidding aside, believe it or not, there are organizations for whom the blunt/silo/stove-pipe approach actually works well, and we wouldn't necessarily recommend that they change it. These organizations tend to share certain characteristics, including any number of the following: being larger, being bureaucratic by necessity, managing very large and/or complex projects, and having an actual, justifiable reason for their approach. In fact, in these cases, the effect is actually neither blunt, nor particularly silo'd, because these types of organizations have other mechanisms for "softening" the effect that such an approach would have on smaller projects/organizations. And that is precisely how we can characterize the main difference between the two approaches: we believe that the reality-based approach to implementing CMMI works well in most types of organizations and work/projects of most scope, where the brute-force approach would not.

What does the blunt/brute-force/silo/stove-pipe approach look like?

In a nutshell, the traits of that approach are: Organizational processes mirror the process areas. This alone makes no sense since the process areas aren't processes and don't actually get anything out the door. Process area description documents are prescriptive, and implementation of the processes does not easily account for the inter-relatedness of the process areas to one another, or of the generic practices to the specific practices. Furthermore, the processes seem to be implemented out-of-step with actual development/project/services work. Nowhere in the descriptions or artifacts of the processes is it clear how and when the process gets done. It's not a matter of poorly written processes; quite the opposite: many of these processes are exemplars of process documents. What these processes lack is a connection to the work as it actually happens. Without a process subject-matter expert on hand, it's unlikely that the process would actually get done. In many cases (thanks to the sheer size of the organization) such processes *are*, in fact, done by a process specialist, and not by personnel doing the work.

In other words, with such processes, if an organization doesn't have the luxury of process specialists to do the process work, it would be difficult for someone actually doing the real work who is trying to follow the processes to see how the process activities relate to his or her activities and/or to see when/where/how to implement the process activities on actual tasks at hand. Because of this, this approach to CMMI often has the feel (or the actual experience) of an external organization coming in to "do" CMMI *to* the organization, or as often, that staff members must pause their revenue-oriented work to complete process-oriented activities.

Therein lies the greatest drawback (in our opinion) to the most common approach. Instead of process improvement being an integral and transparent characteristic of everyday work, it becomes a non-productive layer of overhead activity superimposed on top of "real" work. And yet, this seems to be the prevalent way of implementing CMMI! Crazy, huh?

Why is it so prevalent?

That's where the two reasons of poor implementation, above, come in. People who don't understand the model, as well as people who are not process experts (and therefore may have a weak understanding of the model), don't truly "get" that the model is not prescriptive, and so they attempt to make it a prescription. Auditing and appraising to a prescription is far easier and less ambiguous than auditing and appraising to a robust integrated process infrastructure. Frankly, the "common" approach suits the lowest common denominator of companies and appraisers: those who aren't after true improvement, who are only after a level rating, and who are willing -- companies sometimes unknowingly -- to sacrifice the morale and productivity of their projects for the short-term gain of what becomes a meaningless rating statement.

Alright already! So what's the reality-based approach about?!

The reality-based approach starts with a premise that a successful organization is already doing what it needs to be doing to be successful, and, that process improvement activities can be designed into the organization's existing routines. Furthermore, the reality-based approach also assumes that, as a business, the organization actually *wants* to increase their operational performance. Note the use of "designed into". This is crucial. This means that for reality-based process improvement (reality-based CMMI implementation), the operational activities must be known, they must be definable, and, they must be at work for the organization. Then, activities that achieve the goals of CMMI can be designed into those pre-existing activities.

This whole business of designing process improvement activities into product/project activities illuminates a simple but powerful fact: effective process improvement (CMMI included) requires processes to be engineered. Sadly, a recent Google search on "process engineering" turned up few instances where the search term was associated with software processes, and most of those positive hits were about software products, not process improvement. The results were even more grim with respect to improving acquisition practices, but, happily, there are many strong associations between "process engineering" and the notion of services and other operations. There is hope.

Besides building on the reality of what's already working, other attributes of our preferred implementation approach are that we don't expect the processes to be done by someone else, and we don't expect them to magically apparate into existence. For both of those attributes to be in place, the reality-based approach doesn't rely on process descriptions to make the processes happen. Instead, the practices that achieve the goals of the processes are built into the very product, service and project activities of the organization's work, and the process descriptions simply describe where in that work to find the practices happening.

One other attribute of our approach that is in stark contrast with the most common approaches is this: one of the expected practices of every managed process is that it is planned for each project. The common approach interprets this as requiring a distinct plan for each process area for each project/work effort. Our approach categorically rejects this notion in favor of an epiphany we like to share with clients: you can have a plan for performing each process without having to create an entirely new plan for doing so, as long as you've already done all the planning. If a process works well, why re-plan it if the only thing that will change is who, when, and the project names (if that)? Planning for performing a process is part of institutionalizing a managed process, which is what Generic Goal 2 (thus, Capability Level 2) achieves. If not re-inventing the planning piece for each project *is* appropriate, can't the same be said for the remainder of the practices in institutionalizing a managed process? We believe, yes.

In the end we extend this concept to account for the capabilities of having managed and defined processes. We extend it in such a way that any and all processes an organization wants to improve can be managed and defined whether or not those processes come from CMMI. The reality-based process improvement approach (CMMI or not) results in process improvement artifacts that appear where the "real" work gets done, and not as an overhead process, or a process performed by process commandos, or a process that only generates artifacts if developers and project managers have to go searching for proof that the process was performed.

For what it's worth, this approach is what we at Entinex call AgileCMMI.

Back to Model FAQs


Do we have to do everything in the book? Also known as: What's actually required for it to be said that someone's following CMMI?

A: The Goals are required. Everything else is mostly commentary.



Let's be frank (as if we haven't been frank thus far). The only time it matters whether or not you're doing what's in CMMI is if/when you're aiming to be appraised. Otherwise, you'd just do whatever you want to get the most improvement out of and ignore what you don't need.

Having said that, the context of this answer is then about what's required for people who want it said that they are "doing" CMMI, and for the most part, this means that they're going to determine this via an appraisal. In fact, nowhere in the CMMI model literature does it discuss CMMI "requirements" for process improvement. The model is very careful to only use terms that imply that requirements of the model are for the model, not for process improvement. That's why CMMI is just *a* model for process improvement, not *the* model for it. The discussion of CMMI as far as requirements are concerned is in the materials that define the appraisal. This is also an often misunderstood aspect of CMMI.

SO... in the context of performing activities that appear like they came from the model -- especially where an appraisal is concerned -- there are three types of model content components:

  • required,
  • expected, and
  • informative

The goals are required. Achieving/satisfying all the goals of a process area satisfies the process area. Since goals don't get done by themselves (sports analogies work well here), an organization must be performing some kind of practices in order to achieve a goal; therefore, in the absence of any other practices, CMMI provides some practices that an organization might perform to satisfy each goal. That's why the practices are expected, but not required. The organization might have entirely different practices and might have a different number of practices, either of which is entirely OK as far as CMMI goes, but *something* must be happening to achieve a goal.

If an organization is *doing* something, then it must be resulting in some form of identifiable, tangible output. However, not every organization does the same thing, therefore not every organization produces the same outputs, and therefore the sub-practices, most narratives, and sample work products of a process' practices are only informative, and neither expected nor required. Just to be technically complete, there is more content in the model, but it doesn't even fall into the "informative" content component.

The appraisal even has a term for practices that achieve goals but aren't in the model. They're called (logically enough) alternative practices! This logically leads to the reality that an organization's alternative practices include sub-practices and produce work products that aren't in the model.

However, when speaking of goals, they are immutable. The goals support the process area's purpose and each purpose supports improvement. If an organization can claim to satisfactorily perform a process area without achieving some number of goals within it, then the SEI and/or CMMI Institute would really like to hear about that, because it would require a whole re-thinking of a goal's (and possibly a process area's) applicability towards improvement.

What does this mean for an appraisal or the appraiser? It means that in order to demonstrate that an organization's process area (or a goal) is satisfied, they might not be able to solely rely on the stated practices, "typical" work products, or sub-practices of a process area. This means that not only might it be a good bit of work before an appraisal for the appraiser(s) to get up to speed and elbow-deep into an organization's processes, but it could even drag with it the need to be somewhat competent in the kind of work an organization does or tools they use. DANGER! That kind of in-depth involvement puts appraisers (and consultants) at some risk: they might be exposed for not being competent in the ways and means of modern operations! (Did we just say that?) Well, in for a penny... let's go the whole way... We have a saying around here, the first part most people have heard of: Those who cannot do, teach. [We added this next corollary:] Those who cannot teach, audit.

It's much easier on the appraiser if the expected model components are treated as "required" and if some of the informative materials are also promoted to expected or required in order to demonstrate the (now, newly promoted) "required" parts (in their minds). This is closely tied to our discussion above regarding the implementation approaches, but until now, we didn't have enough background to get into it. The blunt approach to CMMI is replete with verbatim practices (which is often fine -- except where they're just floating out there without being tied to everyday work), verbatim sub-practices, which starts to get a little fishy since sub-practices often change with the context of the projects, and verbatim typical work products, which is even fishier since it's rare that any one piece of an organization's work will use/need/produce so many work products. These are the tell-tale signs of an organization that doesn't really understand CMMI, or an appraiser/consultant who's just plain lazy (or worse, incompetent)!

Back to Model FAQs


Why does it cost so much?

A: Well that's a loaded and ambiguous question! What qualifies as "so much"? We'll just tell you what goes into the costs here and you can determine whether it's reasonable for you or how you can go about minimizing cost or maximizing value.

Here are the variables that go into the factors that affect cost:

  • Where are you *now* with respect to your implementation of process improvement using CMMI? (i.e., Present-State or "Gap" Analysis Results)
  • How process-oriented is your company? Do you understand process improvement? Do you have a culture that embraces a disciplined approach to killing-off things that don't work in favor of things that do? Do you have process improvement professionals on staff? Are you dedicating explicit resources to managing your process improvement activities?
  • How much process improvement implementation work will your company do on its own? vs.
  • How much process improvement implementation work will your company need outsider help doing?
  • How much progress do you think you'll be able to make? Meaning, how fast can you absorb change? Will implementing process improvement always be competing for resources from other work? Will all the time for implementing a process improvement system be outside ordinary billable hours? And,
  • How quickly do you want to make progress?

Other considerations include your organization's size, the kind of work you do, the kind of products you build and the techniques and tools you employ to build them, the kind of contracts you find yourself in, your relationship with your clients, the way you manage your projects, the skills your people have, and the nature and composition of your organization and management structures. NOT trivial.

Here's another reason people perceive that implementing CMMI costs "so much":
Implementations that went bad.

There are far more bad implementation stories than success stories. By "bad" we simply mean those implementations that, while achieving maturity level ratings -- and all the while spending lots of time and money -- also caused disillusionment, cynicism, and processes that fundamentally didn't work! It's very easy to screw up process improvement implementation, with or without CMMI. Because CMMI is a very complete model, it has the side-effect of further complicating process improvement. The easiest way to screw it up is to attempt to implement the CMMI model as a development standard and/or as a checklist (making all non-required pieces of CMMI "required"), and/or by buying so-called CMMI-enabling "tools".

While there are also many ways to become a CMMI implementation "success story", what these stories share in common are the following attributes:

  • Treat process improvement with the same rigor as a technical project.
  • Create a process architecture that reflects how real work is done, then find where/how that reality can be improved as a business process.
  • Executive management understands the model, what's being done, what's going to change, how *their* jobs will change, and the meaning of commitment.
  • Create and sustain a culture of process improvement.
  • Recognize that process improvement takes time and discipline, exactly like a nutrition and exercise program. And,
  • Process Improvement can't be done *to* a project, it's done *by* the project by the very nature of their work, not by any explicit "CMMI activities"

But, we are not in a position to give numbers. We hope you now understand why.

Back to Model FAQs


Why does it take so long?

A: That's a loaded and ambiguous question! What qualifies as "so long"? We'll just tell you what goes into the time frames here and you can determine whether it's reasonable for you or how you can go about minimizing time or maximizing progress. Please see the previous question.

Back to Model FAQs


Why would anyone want to do CMMI if they didn't have to do it to get business?

A: Because they must perceive that the way they do technology development or services now isn't giving them everything they want or need to be confident in their ability to produce the results they want/expect (profit, happy clients, low overhead, etc.) and to do it in a consistent way. If that's not you, move on. Otherwise, give CMMI a shot and check back here for more elaboration on this topic soon.

Back to Model FAQs


Isn't CMMI just about software development?

A: Nope. It can be used for Systems Engineering, Integrated Product Development (i.e., large, complex projects), and Supplier Sourcing. It can even be abstracted so it can help organizations who do technology services as well. More on that coming up.

Back to Model FAQs


What's the difference between CMMI v1.1 and v1.2?

A: Since the current version of CMMI is v1.3, we won't get into detailed differences between v1.1 and v1.2, but a summary of major changes to the model (only) are as follows:

  • Both representations (Staged/Continuous) are packaged together.
  • The advanced practices and common feature concepts were eliminated.
  • Hardware amplifications and examples were added.
  • All definitions were consolidated in the glossary.
  • IPPD practices were consolidated and simplified. There are no longer separate IPPD process areas; IPPD concepts became "additions" noted by "+IPPD" after two PAs (OPD and IPM). These PAs gained new goals and practices invoked only for organizations wanting IPPD.
  • Supplier Agreement Management (SAM) and Integrated Supplier Management (ISM) were consolidated, and the original v1.1 Supplier Sourcing addition was eliminated.
  • Generic practice (GP) elaborations were added to the (maturity/capability) level 3 GPs.
  • An explanation of how process areas support the implementation of GPs was added.
  • Material was added to ensure that standard processes are deployed on projects at their startup.

With v1.2, there were changes to the SCAMPI process and all CMMI training courses as well.

Back to Model FAQs


What's the difference between CMMI v1.2 and v1.3?

A: CMMI v1.3 does several things:

  • Aligns all three (3) constellations (DEV, SVC, ACQ) at once.
  • Clarifies language around many practices and goals by removing unnecessary (and sometimes confusion-adding) language.
  • Focuses more on improvement -- without assuming things are necessarily "bad" now and that all users need CMMI just to go from "bad" to "good"; users might just be looking for CMMI to help them go from "good" to "great".
  • Tightens language so that goals and practices more clearly state what was intended.
  • Substantially re-writes "high maturity" process areas (i.e., maturity level 4 and 5) to reflect the original intentions of those areas (that somehow got lost in committee) and to close loop-holes exploited by less scrupulous users. In that effort, the Organizational Innovation and Deployment (OID) process area has been replaced with Organizational Performance Management (OPM)
  • Eliminates Generic Goals 4 and 5, and as a result,
  • Eliminates Capability Levels 4 and 5.
  • Eliminates the IPPD "addition" from the DEV constellation.
  • Removes references to "projects" where there is no reason to limit the use of practices to such specific management constructs.
  • Re-orients the concept of a "project" to that of "work" in CMMI-SVC.
  • Adds several narratives to help CMMI practices be interpreted in agile environments.
  • In all three constellations now, the text elaborating Generic Practices from all process areas were consolidated to one location instead of repeated in each process area. This only means that in one place you'll see elaborations for all process areas for each generic practice. For example, you'll see Generic Practice 2.6 ("Control Work Products") listed once and under it you'll see elaborations for each of the process areas to help understand how it applies to each one.
  • Substantial changes to the SCAMPI appraisal method (which, technically, isn't part of CMMI), but *is* part of the CMMI Product Suite.

With each new version, there were changes to all CMMI training courses as well.

Back to Model FAQs


What's the key limitation for approaching CMMI?

A: This question comes to us from one of our readers. We love our readers!

There really are no size or scope limitations to CMMI as far as an organization is concerned. CMMI can be put to good use in any sized organization doing any kind of solution development. The real driver (or limitation) is whether there's any need for process improvement. It's a limitation based in business. As we've said here, CMMI is used as a basis from which to create process improvement solutions that fit your particular organization in all its particular particularness. If an organization doesn't have a process improvement need, there's no need for a process improvement solution, we suppose.

The one limiting attribute of the model is that the organization pick the correct one for the type of work they do. There are currently three "Constellations" of the CMMI model: one each for Development, Acquisition, and Services. Neither the model nor the appraisal process is specific as to what type of "development" it applies to. That means proposal development is just as "process improvable" as software, hardware, or wetware development. All that is necessary for that constellation is the development of solutions according to some abstract concept of a life cycle of your choosing. Because the model is not specific as to whether those solutions are technology products, proposal products, service solutions, or children's lunches, it requires that whoever is implementing it understand exactly what is going to happen when they do implement it. Although written with technology products in mind, it is not impossible to abstract the model for use in any activity where the input is a need and the output is a solution.

On the practical, implementation side, however, there are many limitations. In the previous paragraph we hinted that the broad applicability of the model necessitates a certain level of expertise for an organization to know what decisions must be made, and how to make them most appropriately. If we were pressed to pick only one, we'd have to say that misplaced expectations are the #1 common cause, or limiter, in approaching CMMI -- especially the misplaced expectations of senior-level organizational management.

Misplaced expectations explain a lot about why CMMI efforts fail to meet their goals, or, if they succeed, why they do so only at the expense of disillusionment, cynicism, lost morale, and employee turn-over, on top of the high monetary cost of the so-called "achievement" of a level rating.

Misplaced expectations lead to bad decisions in every aspect of life, and this is no different for CMMI. If we look at CMMI implementation like any other project, it becomes very easy to spot what misplaced expectations lead to: insufficient resources, improper allocation of responsibilities, unrealistic time and cost estimates, insufficient training, inappropriate measures of success, lack of leadership insight or oversight, lack of leadership accountability for owning the effort and instilling the necessary discipline to make the project succeed, and a dearth of enthusiasm or buy-in from project participants.

Do you get the picture?

Simply setting one's expectations appropriately with respect to CMMI must be the next step once it is determined that there is something going on within an organization that is a "Development" process (and/or a Service process and/or an Acquisition process).

So, in the end, the only solution to mitigating misplaced expectations is to get smart about what the model is, what the model requires, what are the collateral implications of implementation, and how the appraisal works.

After all, implementing CMMI must be a strategic decision. Every respect an organization pays to other serious strategic decisions must be afforded to the decision to pursue CMMI.

We hate that this is the answer, because we wish it were more cut and dried. But the fact that it's not cut and dried is the same issue that leads people to ask about implementation cost and time long before any such answers ought to be mentioned. There are so many factors that the only way to really get around these challenges (limitations) is to get educated in process improvement, CMMI, and the appraisal process and requirements.

This FAQ can help a lot. When we set out to create it, we thought it would be helpful. Feedback has indicated that it's been more than helpful; it's a bona fide resource. But the FAQ can't address specific questions from specific companies. After all, those types of questions wouldn't be the "F" in "FAQ", would they?

The authors here believe that these answers ought to be provided to a potential CMMI traveller before they set out on the path. Unfortunately, like an inexperienced canyon hiker who doesn't wear the right shoes or take enough water, we've found that not enough CMMI travellers avail themselves of this information. However, much worse than that, an appallingly high number of CMMI consultants do not volunteer this information as part of their own due diligence to evaluate a potential client and to help the prospect (or new client) arrive at decisions most suitable to their circumstances and context. *That* is a true blemish on the industry.

Back to Model FAQs


What's the key effort required in CMMI implementation?

A: This question also comes to us from one of our readers. We love our readers!

Be sure to read the above question and answer as it is closely related to this one.

A key effort, in our opinion, in CMMI implementation is the identification of the organization's actual development life cycle. In other words, what is their reality? Any organization successfully operating and, to some extent, profitably growing must be doing something that *works* for *them* and delivers on customer expectations. Figuring out what that is, for each organization, is key to implementing CMMI. There's really no magic to it.

Another key effort is organizing a process architecture that simultaneously reflects the reality just discovered and describes where and how process improvement takes place within that reality. If process improvement isn't taking place, now you know where you need to insert process improvement activities, and, you have some insight into how to best design the process improvement solution for the organization. It is highly recommended that the process of designing the process solution receive guidance from someone who knows how to do this as well as someone who understands the CMMI and how it is appraised. Such a person could be a hired gun, or a smart hire, depending on the needs and resources of the organization.

The important consideration to note is that it requires the combined knowledge of each organization's reality-based context as well as knowledge of the CMMI model and of the appraisal to come together to implement CMMI well.

And therein lies an assumption we've made in answering this question. That the organization wants to implement CMMI "smartly", and, in a way that results in lasting value that persists beyond the appraisal. There are other ways to just make it through an appraisal, but since we at CMMIFAQ don't advocate those approaches we're not gonna write about them. If you keep reading, you'll probably figure out what those approaches are. Bottom line: design your process improvement efforts into your revenue-generating activities and you will not only benefit from the improvements, you will also find yourself with a robust process improvement system which requires no evidence production when it comes time to perform appraisals. It becomes a simple matter of just doing the development work and allowing the work products of that development to speak process improvement for themselves.

Back to Model FAQs


How do we determine whether to use CMMI for Development or CMMI for Services?

A: This question (paraphrased) also comes to us from one of our readers. We love our readers!

For many users, it's not immediately easy to determine whether they're providing services or whether they're doing development. Especially when they are providing development services! Furthermore, many operations do both. So, there are a number of questions to ask when trying to make this decision:

  • How do customers get from us what we do? Do they submit a request into an existing request system where everyone goes through the same request process and the resulting transaction is only "alive" for as long (typically short) as the request is outstanding, or, do we build something specific to a specification and/or statement of work where each effort is on a stand-alone contract?
  • How do customers pay for what we do? Do they pay per request or do they pay for effort/results over time?
  • Is there a "Service Level Agreement" in place that we have to meet?
  • Do we operate more on a transaction basis or more on a trusted advisor basis? (Ignore, for now, what your marketing people say.)
  • What are we trying to improve? How we manage and develop products, or how we provide services?

Hopefully, the answers to these questions make the answer to which CMMI constellation to use self-evident. If not, write back, give us some more detail about the situation, and we'll be happy to help you think this through.

Back to Model FAQs



Appraisals/Ratings FAQs

How do we get "certified"?

A: OK, let's get something straight here and forever-after:  You do not get "certified" in CMMI. At least not yet. In the US, the concept of a "certification" carries a specific legal expectation and companies who are *rated* (and that *IS* the right term) to a level of the CMMI are not being "certified" to anything.

So the correct question is, 'how do you get "rated"?'. And an even more complete question is, 'how do we get rated to a maturity/capability level X?'

We'll get to the difference between Maturity Levels and Capability Levels and what the level numbers mean shortly.

The short answer for how to get rated still leaves a lot of information on the table. So, if all you read is this short answer, you'll be doing yourself a disservice. The really short answer on getting a level rating is that you get appraised by an appraisal team led by a CMMI-Institute-Certified Lead Appraiser who determines whether you are performing the practices of the CMMI.

This answer is so loaded with hidden terms it's frightening. So just so you know that you've been warned that this answer is too short, we'll point out each of the terms in our previous answer that has hidden meaning in it:

  • getting
  • level
  • rating
  • you
  • get appraised
  • appraisal team
  • led
  • CMMI-Institute-Certified
  • Lead Appraiser
  • determine
  • whether
  • performing
  • practices
  • CMMI.
There's a condition, requirement, or definition in and of itself behind each one of these words. Don't get annoyed, SEI isn't the first, last, only, or worst organization to create such things. Every non-trivial discipline is loaded with concepts that experts can do in their sleep but that require effort to understand by everyone else. It's true of EVERY profession so, _CHILL_OUT_. Need an example? Think of it like getting into shape. The short answer is "diet and exercise". Brilliant. Wonderful. What do you eat? How much? How often? What sort of work-out routine is right for you? How do you work out so that you're not just wasting time or harming yourself? See? Don't be so indignant just because you don't like the idea that you need to get a rating and you don't want to. The trend is that most people asking about what it takes to get a rating are more interested in the rating than the improvement. That's OK... We understand. Sadly, too well.

Keep reading this FAQ. What else did you have to do today anyway?

Back to Appraisals/Ratings FAQs


How long does it take?

A: Here's another one of those dead give-away questions that a company is more interested in the rating than the improvement.

OK, that's a little unfair. Let's just say that as often as we hear this question, our judgmental attitude holds for ALMOST everyone who asks it. All right, so maybe you are the exception. The truth is, it's a fair question. For every company.

A rare few companies don't care how long it takes. Lucky them. Applying a generous dose of benefit of the doubt, we can assume that the question is asked not for "how soon can we get this out of the way?" as much as from "are there any rules that dictate a minimum time before performing an appraisal?" How we can tell whether the company is interested in the improvements vs. the rating is simply a linear function of how long into the conversation we are before it gets asked. All-too-often, the source of the question is less ignorance of the process and more ignorance of the point behind going through the process.

Process improvement purists wish more people were more interested in the journey than in the destination. We are process improvement pragmatists. We know you're not looking at CMMI because you had nothing better to do with your time and money. That's for Bill Gates and his very worthy charitable endeavors. The company he's famous for founding is still in business for the money. FAST. So, how long it takes is a real question regardless of how you spend your money.

Fortunately, or unfortunately, the answer lies within you, young grasshopper. Really. We can't give you a much better answer than that. What we can do, however, is give you a list of the attributes that you can use to estimate how long it will take you, and give you a few example cases and some very general time-ranges.

Let's start again with our favorite analogy. Say you're carrying around about 40 lbs (about 18 kg) of excess body fat. How long will it take you to lose the fat? A year? Two? 6 months? Can one person do in 6 months what another person needs 2 years to do? We all know the answer to these questions. "IT DEPENDS!"

EXACTLY! How quickly a company can become rated to a pre-determined point in the CMMI's rating scale depends entirely on them and their circumstances. It depends on:

  • their level of commitment,
  • their tolerance for and ability to implement change,
  • how busy they are,
  • what they know about process improvement in general and CMMI in particular,
  • where they are as a starting point, and
  • how much of the organization they want to include in the rating.
Working backwards from the appraisal itself (NOT including process changes to incorporate the CMMI practices or goals--only for planning and conducting the appraisal), the absolute minimum calendar time a company should expect between when the starting gun is fired and when they cross the finish line is a simple matter of logistics. Probably about a month if they're lucky. Two months would be more realistic. These 2 months, of course, are just the logistics and prep-work necessary to plan and conduct the appraisal and the activities that lead to an appraisal. Obviously, this time frame would only be realistic if the company was completely ready for the appraisal, had done all their homework, knew exactly what the state of their process implementation was and were literally trying to do nothing more than figure out how much time they had before they could conduct the appraisal. Of course, such a company wouldn't be asking the question. They'd already know.

So then there's almost everyone else. Everyone else needs time to first determine where they are in their implementation of CMMI practices. This is like saying, first we need to find out how much excess fat we're carrying around. A trip to the right physician would answer this. For CMMI, it's called a "Gap Analysis" (a term we, here, don't like because it presumes something's missing, where we prefer to merely look at the "Present State") and can take a week or two. Then, depending on those factors bulleted earlier, the gap found by the analysis would need to be filled. This is the part where a company would need to figure out what its optimum sustainable diet and exercise routine should be, and how long to stick with it to see the desired results.

In CMMI v1.1, there were 25 Process Areas; in v1.2 and v1.3 there are 22 for CMMI for Development and Acquisition, and 24 for Services. The duration of the gap closure activities would also be a function of how many (and which) of the Process Areas the organization wanted appraised. Each of the Process Areas could be analogous to some aspect of a healthy lifestyle such as food choices, food quantity, shopping, cooking, meal planning, exercises, frequency, repetitions, technique, equipment, blood work, rest, stress management, work environment, time management, and so on. Obviously, the more of the lifestyle someone wanted to adopt, the longer it would likely take.

Once a gap is filled (i.e., the weight is lost and/or new muscle mass is added), an organization should give itself at least 2-3 months (on the short-project end) to 12-16 months (on the larger project end) to actually use their processes. This would provide them with enough data to actually conduct an appraisal. However, the actual metric isn't the calendar; it's the cycle time of their development processes, often called their development life cycle. Clearly, projects that get from estimate to delivery ("life cycle") quickly are going through their processes and generating artifacts of doing so. This is the value to key off of, more so than the clock.

On the fat-loss analogy, this would be like finding that point where diet and exercise are enough to keep the weight off and one is able to demonstrate to themselves (or others, as needed) that they can, in fact, live and sustain a healthy lifestyle -- in the face of temptation and other uncertainties.

Once people internalize how process improvement works, how long it takes to earn a rating is a question such people stop asking. Like fat loss and getting into shape, process improvement is a discipline backed by many best practices. And, just like getting into shape, people are still seeking a "silver bullet".

We, on the other hand, stick to a healthy diet and exercise program. When we're off track we know it. We gain fat and feel like crap. When we're on it, we see the results.

Make sense?

Back to Appraisals/Ratings FAQs


How much does it cost?

A: If you've read the answer to the previous question and are still asking this question then you must really only be wondering about fees, attributes of cost or other general costs. Otherwise, go and read the answer to "How long does it take?" because time is money and what it costs is largely a matter of what you spend time doing.

As for fees, attributes of cost and other general costs, here's a break-down of things that can or will cost you money towards being rated to a capability or maturity level of the CMMI:

    Lead Appraiser
    The Lead Appraiser will need time to meet with you to plan the appraisal, perform some preliminary evidence review (called "Readiness Review") and then to perform the appraisal. The range of what Lead Appraisers charge is pretty wide. Most charge about $2000/day +/- $1000.
    As a benchmark for your ball-park, the CMMI Institute has a small cadre of Lead Appraisers who can be hired (NOTE: only a few handfuls of Lead Appraisers actually work for the CMMI Institute; most are employees of other companies or operate independently of the CMMI Institute). Prior to transferring to the CMMI Institute, the SEI used to charge at least $1800/day from the moment they leave their home (or their last engagement) to the moment they get back home (or to their next engagement). They also charge for all travel expenses as well as time they spend away from your site to do their preparatory and concluding activities. Also, they will often work by the "book". Meaning, a guidebook exists that assists with planning appraisals. The guide suggests that, based on the scope of the appraisal, appraisals be scheduled for a certain duration and not be condensed into fewer days and longer hours. Lead Appraisers are free to charge whatever they want; not many charge the way the SEI once did. (The CMMI Institute will have its own rates that, as of this update, were not known to us at the site.)
    Someone will also need to provide Appraisal Team Training to the people you plan to have on the Appraisal Team. This takes 2 days and is usually done by a Lead Appraiser, and best if done by the Lead Appraiser you plan to have doing your appraisal.
    So, plan on the Lead Appraiser needing about 1-3 weeks to do the preparatory work for an appraisal, including Appraisal Team Training and at least one Readiness Review, and then 1-3 weeks to perform the appraisal itself (depending on the scope), then another day to wrap-up all the paperwork.
    Appraisal Team Members
    Every Appraisal for a rating is done by a team. The minimum number of people is 4, and that can include the Lead Appraiser. Every person on the team must meet certain individual pre-requisites and contribute to certain team-wide qualifications. (More on that in another answer.) It is best if the team's constituents include people from your company as well as outsiders. At the appraisal, if you don't have (and can't create) qualified people in your company to be on the team, then you will need to bring in outside team members. (Most Lead Appraisers keep these in their back pockets -- kinda.) Outside team members are essentially consultants and charge as such. You're doing well if you can get outside team members for $1000/day. This would be very high-value. And, if you're only charged for a day where 1 day = the date on the calendar, and not 1 day = 8 hours, you're doing VERY well.
    Process Improvement Consulting
    If your organization needs to get up to speed on CMMI, you'll probably do one of two things: (1) Look to hire an employee with the expected expertise, or (2) Look to hire a consultant with the expertise. Which you choose to do depends on your organization's needs. The pros and cons of either approach are a basic matter of business and strategy. Either way, there's a cost. As for consultants, they're a lot like Lead Appraisers. And yes, many Lead Appraisers are also consultants. So, what and how they charge is largely up to them.
    There are no SEI- or CMMI Institute- mandated fees for improving your processes, using their models, or getting an appraisal. The only fees charged by the SEI or CMMI Institute are for courses licensed by them to the providers of such services, and for using their own in-house consultants or Lead Appraisers. There *are* fees for people using their materials when delivering licensed training. First of all, only authorized or certified people can use the material and when such people do so, and the people in class want it to be "official", there's a licensing fee that goes to the SEI and/or CMMI Institute.
    Consulting firms can charge whatever they want and call it whatever they want, but if anyone is implying that there are SEI- or CMMI Institute- mandated "fees" for consulting or appraising, they're only implying this. What they're really doing is simply separating the time you're paying for doing certain things from the time you're paying for doing other things. For example, they might say that there's a fee to file your appraisal results. Not with the SEI or CMMI Institute, there's not, but it does take time and it's reasonable for them to simply charge you some amount for the time it takes them to put in all the paperwork.
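Pulling the day-rate figures above together, a back-of-the-envelope estimate can be sketched. Every number and the formula itself are illustrative (midpoints of the ranges quoted above, plus the 2-day Appraisal Team Training); this is NOT a quote or an official cost model:

```python
# Back-of-the-envelope appraisal cost sketch using midpoint figures from
# the text above. All figures are illustrative assumptions, not quotes.

LEAD_APPRAISER_RATE = 2000   # $/day, midpoint of the $2000 +/- $1000 range
OUTSIDE_ATM_RATE = 1000      # $/day, the "doing well" outside-member rate

def appraisal_cost(prep_days, onsite_days, wrapup_days=1, outside_members=2):
    """Rough total of Lead Appraiser + outside team member fees."""
    la_days = prep_days + onsite_days + wrapup_days
    la_cost = la_days * LEAD_APPRAISER_RATE
    # Outside members typically attend the 2-day team training + the on-site
    atm_cost = outside_members * (2 + onsite_days) * OUTSIDE_ATM_RATE
    return la_cost + atm_cost

# Example: 1 week of prep, 1 week on-site, two outside team members
print(appraisal_cost(prep_days=5, onsite_days=5))  # 36000
```

Travel expenses, internal staff time, and any consulting would come on top of this, which is exactly why the previous answer insists the real cost driver is time.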

Other General Costs
As above, the only other general costs associated with an appraisal are:
  • Official training, and
  • your employees' time on the clock.

NOTICE what's *NOT* in the list above: TOOLS.

There is NO requirement for the purchase or use of any tool. If anyone says that in order to "comply" with CMMI (or the appraisal) you must purchase a tool, they're full of *crap!*

Some consultants do use tools as part of their work and as part of you hiring them you are also buying a license to use the tool. That's OK. Since you will end up using the tool after they're gone, it's reasonable that you should pay for using something that is either the consultant's intellectual property, or something they bought and are bringing to the table. And, it's up to you if you want to hire that company. It's not reasonable for you to hire a consultant who tells you they use a tool and then tell them not to use it so you don't have to pay for their tools. Many consultants work their pricing structure into the productivity and efficiencies they gain by using a tool and asking them to stand by their rates when you've asked them to leave their tools in the shed is not playing nice. On the other hand, anyone telling you that if you don't buy their tool then you are not going to meet the CMMI's "requirements" or "pass" the appraisal is FLAT OUT LYING LYING LYING!!! and should be reported to the SEI/CMMI Institute! And, you can do that by taking a number of actions listed here.

Back to Appraisals/Ratings FAQs


What's involved (in getting a rating)?

A: Um... that's a little broad, don'chya think? But, we get that question frequently enough so we might as well answer it. At least at a very high altitude.

There are three broad steps towards achieving a level rating:

1. Know where you are now.
This is usually called a "gap analysis" or "present state analysis". The right person to do this is someone who really understands the CMMI and how to appraise for the CMMI. Too often we get into companies who thought they were simply "smart enough" to do it themselves -- in some cases doing nothing more than downloading the model and reading it, which is enough for only a very few, extremely rare, organizations. Even taking the SEI's licensed Introduction to CMMI course(s) seldom provides enough of an understanding to determine, without any other direct experience, how closely your company is performing the expected practices of CMMI, or how your particular implementation of the practices will fare in an appraisal. Also, please don't make the following mistake: assuming you're "golden" just because you've been through an ISO 9000 audit, you've won the Malcolm Baldrige Award, or even been in an organization assessed to the intent of SW-CMM. We've actually found that prior experience with other process-oriented bodies of work can work against a company's true understanding of what CMMI is about, how to implement it effectively, and how to appraise their practices.

Once you know what and where your gaps are in implementation you're ready for the next broad step.

2. Address your "gaps".
This is usually called, in CMMI circles, "Process Improvement". Although this step implies that your processes aren't up to the task as they stand now, what it really implies is that you will likely be making some changes to your current processes as you implement CMMI's practices, and the method you should follow is one of process improvement and not simply a re-skinning of your paper trail. The entire purpose behind CMMI is that of performance improvement via process improvement; simply slapping a layer of CMMI processes over top of what you're currently doing is not process improvement, it's death by process; it's WASTE.

It's come to our attention that CMMI has a reputation as being "death by process" as it is. We firmly believe that it's the latter approach towards CMMI implementation, as described in the previous paragraph, that causes this, not CMMI. To be blunt (you're used to it by now, yes?), slapping CMMI over top of your existing process, those processes that you feel have been working all along, is a STUPID way to implement CMMI.

On the other hand, if you do find value in the practices CMMI promotes, then what you want to be doing is implementing them in a way that continues to provide you with the value proposition of the things you like about your current processes, while replacing or strengthening with CMMI those things that could use it. The smoothest way to take this approach is by following CMMI as a guide to building a systemic process improvement infrastructure. Again, please be advised that doing this on your own without a CMMI expert employee or consultant is not advisable, for the same reasons having an expert is best for performing the present state analysis.

One last comment on this step (and it's a bit of an unsung truism about the CMMI): companies who are honestly thrilled with their current process and really have a handle on the outcome of their efforts are probably doing a lot of what the CMMI would have you doing. Such companies may call their activities by different names, they might reach the goals in a less traditional way, but ultimately, they are getting the job done and are still in business, so they must be doing things right. (Or at least doing the right things.) If this is you, then your effort towards implementing CMMI is going to be quite painless and enjoyable.

Oh, OK... there really is one other important point: CMMI says precious little about the organizational culture and leadership necessary to make any of this work. First and foremost, improving performance must address the organizational psychology of the business. If/when there are issues with the organizational psychology, they nearly always have a negative effect on improvement. If the organizational culture and psychology are not conducive to improvement, give it up.

3. Get appraised.
Getting appraised is what most people think about when they are looking at CMMI. The appraisal is what gives an organization their "Level". Once the appropriate expert can make a sound call on your organization's implementation of the CMMI practices, you can start planning for an appraisal. Details of the appraisal are answered elsewhere in this FAQ.

Back to Appraisals/Ratings FAQs


How does the appraisal work?

A: NOTE: This answer is for v1.3 of the appraisal method. Users of prior appraisal methods may not recognize this.

The appraisal process is guided by something called the Method Definition Document (MDD v1.3). If you are not familiar with the appraisal process already (which you aren't 'cause you're asking the question), you don't want to try to read this unless you're doing some whacky exercise in self-hypnosis. The MDD is based on a requirements spec called the Appraisal Requirements for CMMI (ARC). And, reading of that document is strictly forbidden without a prescription or special medical dispensation. The documents are available for download from CMMI Institute's web site and are actually well-written and really useful if you're a Lead Appraiser or studying to be one, but otherwise it's gonna sound like gibberish. Don't say we didn't warn you.

Just so you understand: the complete answer to this question is ordinarily delivered in 2 days' worth of training. We're obviously limited in what we can explain here.

We're going to pick up the appraisal with the portion of the appraisal that most people think about: the on-site period. It's that period of time when there's an appraisal team at your company and they're looking at your evidence and conducting interviews (or performing some other accepted form of verbal affirmation). It's at the end of this period that a company gets the results of the appraisal and, when all goes well, a rating.

So... that's pretty much what happens at the appraisal: A team, led by a Lead Appraiser, looks at evidence and makes a judgment on that evidence regarding the extent to which it demonstrates that CMMI's practices are being implemented. There are 2 types of evidence: Artifacts and Affirmations.

In v1.2 of the appraisal method there were variations of the artifacts, but these have been dropped. In v1.3 of the appraisal method there are only two types of evidence and no variations within them.

While this change has simplified an otherwise routine source of inconsistencies, this simplification has been slightly off-set by a possible increase in planning complexity. In an effort to reduce the demand for redundant data, organizations wishing to spend less time preparing for an appraisal --logistically-- and less time in the appraisal --looking at the same evidence-- can elect to provide either artifacts *or* affirmations -- as a function of the total scope of artifacts in the appraisal.

Artifacts are, as the name implies, *things* an appraisal team can look at, read, hold, touch, etc. Affirmations are verbal data gathered by interacting with people doing the work. There are rules that determine the amount of evidence (artifacts & affirmations) required for a given example of processes, and there are rules that govern the number of examples required to adequately sample the organization's processes.

These rules have been brilliantly summarized by our friends and colleagues over at Process Group in this paper.

For now, rest assured that the rules are there to ensure a proper sample of the organization is used in an appraisal, and, that the evidence required to support the sample ensures coverage of all the practices without being redundant.

For each practice in the scope of the appraisal, the evidence is looked at collectively for that practice and a determination is made regarding the extent to which the practice is being implemented. This is called "practice characterization". The characterization scale is: Fully Implemented, Largely Implemented, Partially Implemented, and Not Implemented. There's also "Not Rated" and "Not Yet", which get a bit too complicated for this medium to address.

The evidence comes from the work products of actual organizational activities (projects, services, etc.). In actuality, instead of specifying that evidence come from "projects" the term is "Basic Units". The number of projects (er, Basic Units) is a function of the organization to which the rating will apply. You need a sample of Basic Units representative of the organization. And, no, you can't pick them, the Lead Appraiser works with you to pick them; and, no, you can't look at only the "best" aspects of the organization and puzzle together all the good-looking evidence from a bunch of different activities.

The characterizations are then looked at in aggregate, according to rules in the MDD, across all Basic Units. Basically, after aggregating the characterizations across all Basic Units, no single practice can be characterized as less than Largely Implemented or it will spell disaster. Even then, if certain practices are found only "Largely Implemented", and the appraisal team believes there's a pattern in what they're seeing that causes those practices to be found only "Largely Implemented", the team may still choose to say that whatever's keeping these practices from being Fully Implemented is worrisome enough to preclude the organization from achieving the goals of the Process Area. And if any goal in a Process Area isn't achieved, then it can't be said that the whole Process Area is being satisfied, can it? And, that, our friends, is how the appraisal works: it's a search for whether the organization is satisfying the goals of those Process Areas in scope of the appraisal.
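The gist of that aggregation can be sketched in a few lines. To be clear, this is a simplification of ours, not the MDD's actual algorithm (the function names are invented, and real teams apply judgment, not just a worst-case rule); the scale and the "nothing below Largely Implemented" idea come straight from the text above:

```python
# Illustrative sketch of practice-characterization aggregation.
# Simplified: not the MDD's actual rules; appraisal teams also apply judgment.

# Characterization scale, worst to best (from the text above)
SCALE = ["Not Implemented", "Partially Implemented",
         "Largely Implemented", "Fully Implemented"]

def aggregate_practice(per_unit_characterizations):
    """Aggregate one practice's characterizations across Basic Units.

    Simplification: take the worst characterization seen in any Basic Unit.
    """
    return min(per_unit_characterizations, key=SCALE.index)

def practice_blocks_goal(aggregated):
    """Anything below Largely Implemented spells disaster for the goal."""
    return SCALE.index(aggregated) < SCALE.index("Largely Implemented")

# Example: one practice, three Basic Units
chars = ["Fully Implemented", "Largely Implemented", "Fully Implemented"]
agg = aggregate_practice(chars)
print(agg, practice_blocks_goal(agg))  # Largely Implemented False
```

Note what the sketch can't capture: even a set of "Largely Implemented" results can sink a goal if the team sees a worrisome pattern behind them, which is exactly the judgment call described above.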

Basic Units are drawn from "Sub-Groups". Sub-Groups are distinguished by a set of key factors that differentiate one Sub-Group from another.

The minimum list of Sampling Factors is:

  • Location: if work is performed in more than one location (can be near, far -- not limited, what matters is whether or not the processes and other relevant attributes are different).
  • Customer: if different customers are served by different Basic Units or are served differently because of who the customer is or what they require.
  • Size: if work is performed differently based on the size of the Basic Unit, or Support Function, or the size of the effort.
  • Organizational Structure: if work is performed differently in different parts of the organizational structure.
  • Type of work: if there is more than one distinct type of work done in the organization (mobile apps vs. mainframe, hardware vs. software, systems of systems vs. electronic components).

Once you distinguish Sub-Groups based on these factors (and others, that you and your lead appraiser may determine to be relevant), there's an equation that is used to ensure that the number of Basic Units chosen from each Sub-Group is representative of the size of the Sub-Group and is representative of the Sub-Group's sizes in relation to the entire organization under consideration.
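We won't reproduce the MDD's equation here (it's part of appraisal planning with your Lead Appraiser), but the proportionality idea can be illustrated. Everything below is our own sketch: the coverage fraction, the rounding rule, and the function name are assumptions, not MDD-defined values:

```python
# Illustrative proportional sampling of Basic Units across Sub-Groups.
# NOT the MDD's actual equation; coverage fraction and rounding are assumed.
import math

def sample_counts(subgroup_sizes, coverage=0.25):
    """Pick Basic Units per Sub-Group in proportion to the Sub-Group's size.

    subgroup_sizes: dict of Sub-Group name -> number of Basic Units in it.
    coverage: illustrative sampling fraction (an assumption of ours).
    Rounds up so every Sub-Group contributes at least one Basic Unit.
    """
    return {name: max(1, math.ceil(size * coverage))
            for name, size in subgroup_sizes.items()}

# Example: an organization whose work falls into three Sub-Groups
org = {"mobile apps": 12, "mainframe": 4, "embedded": 2}
print(sample_counts(org))  # {'mobile apps': 3, 'mainframe': 1, 'embedded': 1}
```

The point of the sketch is simply that each Sub-Group is represented in proportion to its share of the organization; the actual numbers come out of planning with your Lead Appraiser.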

The key to the Sampling Factors is to identify the most likely sources of process differences. The important outcome of this sampling process is the analysis of the sample, not forcing the organization to split up its work into tiny pieces. If, after the analysis, it is determined that one or more of these factors do not change the processes, then those factors can be eliminated as influential on the processes -- thereby eliminating unnecessary and possibly artificial barriers across the organization's work, which would also unnecessarily increase the appraisal's complexity. While the analysis during planning has increased, the idea was not to increase the appraisal burden. Having said that, there have been plenty of organizations using far too few samples of their work and declaring these small sample sizes as "representative" of very large organizations. For such organizations, their appraisal burden may increase... or as we prefer to think about it... they may experience a long overdue "market correction".

However, when the number of Basic Units grows, wherever processes are similar, the amount of evidence required, and whether this evidence is from artifacts and/or affirmations, can be "collapsed". Again, this is all part of the details of appraisal planning and is too intricate to be explained here.

We should point out: for many organizations, the new sampling rules change nothing. For many organizations, the amount and types of work they do result in no differences in the sample of work used and/or in the types or volume of artifacts used. The new sampling rules were actually created to ensure large organizations include an appropriately sized sample, and for smaller organizations to avoid having to unnecessarily include too much in their sample.

Back to Appraisals/Ratings FAQs


What's a SCAMPI?

A: Ah-ha! Finally! A quick and easy question!

SCAMPI stands for Standard CMMI Appraisal Method for Process Improvement.

Back to Appraisals/Ratings FAQs


Who can do the appraisal?

A: Another quick and easy question, thanks!

A Certified Lead Appraiser. Certified by who? The SEI and/or CMMI Institute.
Lead Appraisers (as of this writing) have to qualify by surviving the following activities in this order (sort-a):

  1. Introduction to CMMI
  2. Intermediate Concepts of CMMI, or two distinct CMMI for Practitioners courses
  3. Being a team member on at least 2 SCAMPIs*
  4. SCAMPI Lead Appraiser Training,
  5. various examinations throughout, and
  6. Being observed performing a SCAMPI by one of a very few number of people the CMMI Institute trusts to do that sort of thing
*Participating on 2 SCAMPIs can happen any time after the Introduction to CMMI course but must happen prior to applying for SCAMPI Lead Appraiser Training. The placement of SCAMPI participation is unrelated to when someone takes Intermediate Concepts of CMMI or the two CMMI for Practitioners courses.

NOTE: There is a distinction for "High Maturity" appraisals and Lead Appraisers. "High Maturity" appraisals are those performed to a target maturity level of 4 or 5. "High Maturity" Lead Appraisers (HMLAs) are required to take more coursework, pass more exams (written and oral), and to qualify with much greater depth of experience and knowledge in the concepts found in the Maturity Level 4 and 5 process areas. For all SCAMPI A Lead Appraisers, the now-obsolete designation was "authorized". Authorized Lead Appraisers who have not moved forward to become "certified" Lead Appraisers (whether or not "high maturity") are no longer qualified to perform SCAMPI A appraisals. Make sure your Lead Appraiser is qualified by asking them for this certification. (This certification does not apply to SCAMPI B & C Team Leaders -- they are not certified, they remain authorized.)

IMPORTANT!  ALSO, as of v1.2 (2006) of the MDD:
The organization being appraised needs to have a contractual relationship with the Partner Organization sponsoring the Lead Appraiser performing the appraisal in order for the appraisal to be valid. This rule stayed in place in MDD v1.3.

In other words, Lead Appraisers who aren't directly working for a Partner (or who aren't themselves representatives of a Partner) can't contract to perform a SCAMPI without the contractual involvement of a Partner. That's not to say that money needs to be involved, and it also doesn't mean that the appraiser needs to negotiate their dealings through Partners; however, it does mean that the Partner must at least know about the appraisal and the relationship being established with the organization being appraised. This was done in an effort to improve Partners' knowledge of activities happening in their name. And hopefully, also to improve the quality of those activities.

Back to Appraisals/Ratings FAQs


Can we have our own people on the appraisal?

A: Yes! Yes, in fact, it's encouraged.

The appraisal team must be at least 4 people strong (including the Lead Appraiser), and with your company's employees on the appraisal team you increase the odds of buy-in to the appraisal process as well as follow-up and follow-through on any recommended actions from the appraisal. There are a number of qualifications potential team members must meet, the most logistically challenging of them being that candidate team members must have had a licensed delivery of the Introduction to CMMI before going into the appraisal activities (which begin a month or more before the actual on-site period). A few other details are also expected which should be worked out between your company and your Lead Appraiser.

Back to Appraisals/Ratings FAQs


Can we have observers at the appraisal?

A: Let's first start by defining what an observer is. An "observer" is someone who is not qualified to be on the appraisal team, or, despite being qualified is not actually on the appraisal team, but is hanging around with the appraisal team while they do their thing. OK, got that? 

So, the answer is: No.

Per the Method Definition Document (MDD), observers are not permitted on SCAMPIs. In fact, MDD v1.3 includes an explicit table of who is/isn't permitted for various aspects of the SCAMPI process. Furthermore, the MDD explicitly calls out the only exception to the "no observers" rule as being an official CMMI Institute observer who is there as part of an appraisal audit (of the appraisal, not the organization) or as part of the qualification process for the lead appraiser.

The primary rationale has to do with the nearly invariable experience that observers (being untrained and/or sometimes not involved with any of the work leading up to the event) tend to be inadvertently disruptive to the discussion and proceedings. However, the most important consideration is that observers are not bound to maintain the confidentiality or the non-attributional aspects of the SCAMPI proceedings. And, there are no provisions in the SCAMPI method and no recourse through the CMMI Institute (or SEI, or CMU) to address issues that may be caused by non-participants breaking confidentiality and/or non-attribution requirements. Another concern is the comfort level of the participants to be open and honest when people who aren't committed to the results may be present; and, should there be any unfavorable findings, a concern may arise about the influence of observers on the outcomes.

While all of the above rationale might be manageable by a competent appraisal team leader, the probability of problems outweighs the possibility that everything will be fine. If there are unique circumstances whereby the conditions exist for the risks to be fully mitigated, a lead appraiser may request a waiver from the CMMI Institute during appraisal planning. (We wouldn't hold our breaths that it would be granted.)

Back to Appraisals/Ratings FAQs


What sort of evidence is required by the appraisal?

A: There are 2 types of evidence: Artifacts and Affirmations. For each practice in the scope of an appraisal, the evidence required (in a SCAMPI Class A appraisal -- which we'll get to later) is either both Artifacts and Affirmations, or just one or the other, as a function of the volume of work being appraised and several other factors determined by the evidence sampling rules.

Artifacts: These are the actual product or output of following a procedure, performing a process, or some direct or supporting output or outcome of implementing a practice. It's fairly simple. If, for example, the way you implement a practice says you are to fill out a certain template, then the filled-out template is a Direct artifact of the practice. It doesn't matter whether the artifact is a direct work product of the process or whether the artifact is a clear support to performing the work necessary to produce a work product. Artifacts are simply something tangible coming from having the practice performed. Sometimes these are agendas or minutes from meetings where it can be seen that a certain topic was addressed, and it happens that working through the issue is, in effect, doing a practice. Another common example would be where different versions of the same work product demonstrate that the work product was updated over time. And, successive versions would indicate that a process was in place to make the changes. If the practice says to keep track of changes, these versions could be used to demonstrate that changes were made, and one could infer that there was some way to keep track of them, even though the fact that changes were made isn't actually the same as keeping track of changes. Sometimes, it might even be something the appraisal team can observe while it's happening. All of which are tangible.

Affirmations: Essentially, interviews. While affirmations can also be obtained through other means, such as surveys and demonstrations, most appraisals find it useful to conduct face-to-face interviews with the people who are actually doing the work and, hopefully, performing the practices.

Again, the mix of artifacts and affirmations is an important detail that follows specific rules. The rules themselves are HIGHLY context-dependent. You're best off working with a Certified Lead Appraiser on how to apply the rules to your specific situation. The rules are spelled out in the Method Definition Document (MDD v1.3). Look for the terms "Coverage", "Sampling", or "Data Sufficiency".
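As a purely illustrative sketch of how the artifact/affirmation mix might be reasoned about (the actual coverage and data-sufficiency rules in the MDD are context-dependent and far more nuanced; every name below is hypothetical, not from the MDD):

```python
# Purely illustrative: the real "Data Sufficiency" rules live in the
# SCAMPI MDD v1.3 and are context-dependent. This toy checker only
# shows the general shape of such a rule, not the rule itself.
def evidence_sufficient(artifacts, affirmations, require_both=True):
    """Toy rule: a practice needs at least one artifact AND at least one
    affirmation when the sampling outcome demands both; otherwise
    either kind of evidence alone will do."""
    if require_both:
        return len(artifacts) > 0 and len(affirmations) > 0
    return len(artifacts) > 0 or len(affirmations) > 0

# A filled-out template plus an interview note satisfies the strict case:
print(evidence_sufficient(["filled-out template"], ["interview note"]))
# → True
```

The point of the sketch is only that "sufficiency" is evaluated per practice and depends on the sampling outcome; your Lead Appraiser applies the real rules.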

Back to Appraisals/Ratings FAQs


How much of our company can we get appraised?

A: The part of your company that gets the actual rating is called the "Organizational Unit". This can be the entire company or only parts of it as determined by the types of work (and as such, the types of processes) the company wants the appraisal to be performed on, and as a result, the appraisal results to apply towards.

For the appraisal to apply to an entire company, work that represents all the sorts of efforts the company does would need to be evaluated in the appraisal. In the extreme case, if a company does only one type of work, and that single effort consumes the entire company, then the appraisal could be performed on that one effort and the company could say that its entire company has achieved the level rating awarded.

The actual composition of the organizational unit is something that needs to be defined up-front during appraisal planning. The Lead Appraiser must analyze the selection of work the company desires to be accounted for in the appraisal results to ensure that the work used in the appraisal does, in fact, represent the organizational unit of the appraisal results. The more variety in the kinds of work in the organizational unit, the more types of work will be needed. Also, the broader the application of the appraisal results, the broader the scope of included work. Meaning, for example, if the company has a number of sites, and, the company wants "the entire" company included in the appraisal results, work representing each site must be included. Multi-site work efforts are OK, but using one location's part of the work to represent another will not, uh, work. There are a number of facets that are analyzed called "sampling factors" which we discussed in more detail above.

Back to Appraisals/Ratings FAQs


How many projects (basic units) need to be appraised?

A: NOTE: Since the SCAMPI (appraisal) method applies to more than just CMMI for Development, the notion of what is appraised is no longer limited to "projects". The broader (if, admittedly, more vague) term, "basic unit" is used.

In short: enough such that the sample of work efforts chosen can represent the organization to which the appraisal results and process improvement recommendations will apply.

The exact number is a function of the number and variety of work types your organization performs, and, how much of the organization the sponsor of the appraisal wants to include in the appraisal results.

Since this comes up a lot, we'll reiterate, here, part of what we discussed above. For more detail, please go there.

"Basic Unit" is the name applied by the CMMI appraisal method to the units of work performed by an organization as evaluated in an appraisal. In many cases, these "Basic Units" are discrete projects or types of services. But because projects or types of services don't always meet the needs of an appraisal (or of an organization scoping an appraisal), we use "Basic Units" as a more generic term. Basic units are drawn from "Sub-Groups" of the organization.

Sub-Groups of the organization are distinguished from one another by a set of key sampling factors. Together, the collection of sampling factors differentiates one Sub-Group from another. The list below includes the minimum factors required to be evaluated in the analysis of the organization's work. Lead Appraisers are required to identify and include any other factors they believe to be relevant in order to identify appropriate Sub-Groups and the Basic Units within them.

The important outcome of this analysis, and the underlying need for identifying Sub-Groups and Basic Units, is to ensure that all significant differences in the processes used by the part of the organization in-scope of the appraisal are accounted for. The more variation, the more sub-groups, and therefore the more basic units; the less variation, the fewer sub-groups, and therefore the fewer basic units.

It is not assumed that all sampling factors necessarily change the processes. If/when it can be supported by the Lead Appraiser's analysis, the impact of these sampling factors can be lowered so that sub-groups can be combined or ignored. On the other hand, additional sampling factors may be more relevant, and, it is the Lead Appraiser's responsibility to account for them.

You may have figured out by now that pending the analysis, an entire organization can (theoretically) be appraised on the basis of one basic unit. This is true.

The equation used by the Lead Appraiser determines the number of basic units by calculating a simple fraction of the total number of basic units in a given sub-group relative to the total number of basic units in the organization.

Since most readers are already lost by now (let alone unqualified to figure this out themselves), we'll leave the specifics to your Lead Appraiser.
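For the curious, here's a minimal sketch of a proportional allocation of this sort. It is illustrative only: the actual selection rules are in the MDD v1.3 and involve more than a simple fraction, and every name in the code is hypothetical.

```python
import math

# Illustrative only: the real Basic Unit selection rules are in the
# SCAMPI MDD v1.3 and involve more than this simple proportion.
def basic_units_to_sample(subgroup_sizes, total_sample):
    """Allocate a sample across Sub-Groups in proportion to each
    Sub-Group's share of all Basic Units, rounding up so that every
    Sub-Group contributes at least one Basic Unit."""
    total = sum(subgroup_sizes.values())
    return {
        name: max(1, math.ceil(total_sample * size / total))
        for name, size in subgroup_sizes.items()
    }

# Three Sub-Groups with 12, 6, and 2 Basic Units; desired sample of 5:
print(basic_units_to_sample({"web": 12, "embedded": 6, "mobile": 2}, 5))
# → {'web': 3, 'embedded': 2, 'mobile': 1}
```

Note how rounding up guarantees that even the smallest Sub-Group ("mobile", with 2 of 20 Basic Units) still contributes one Basic Unit to the sample; that's the spirit of the rule -- a small but distinct part of the organization can't simply be skipped.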

The minimum list of Sampling Factors are:

  • Location: if work is performed in more than one location (can be near, far -- not limited, what matters is whether or not the processes and other relevant attributes are different).
  • Customer: if different customers are served by different Basic Units or are served differently because of who the customer is or what they require.
  • Size: if work is performed differently based on the size of the Basic Unit, or Support Function, or the size of the effort.
  • Organizational Structure: if work is performed differently in different parts of the organizational structure.
  • Type of work: if there is more than one distinct type of work done in the organization (mobile apps vs. mainframe, hardware vs. software, systems of systems vs. electronic components).

Back to Appraisals/Ratings FAQs


Can we have more than one appraisal and inch our way towards a rating?

A: No. At least not yet. Well, at least not in the way you're thinking.
You can have as many appraisals as you want; however, at this time, if you want a Maturity Level rating (or even a Capability Level rating -- more on that later), you will only achieve it if the appraisal looks at all the evidence for all the Process Areas in the scope of the appraisal in a single appraisal. There is talk afoot of allowing something like a "cumulative" appraisal, where you could do some subset of an appraisal scope, then come back and do a little more, and so on until you've completed the scope, then put it all together for a rating -- but that's not how it works today. If you do perform several appraisals where none (except, perhaps, the last) covers a complete Maturity Level, it would only serve to provide you a sense of how you're doing; you couldn't use the results of those appraisals to pare down what needs to be done at the appraisal you're conducting for "all the marbles".

Having said that, there is a tiny loop-hole, but it's not really a loop-hole, it's just an aspect of appraisal planning and execution that can be used to make better use of time and resources. The "on-site" portion of an appraisal, that is, that part of the appraisal that most people think of when having an appraisal, the part when a team of people look at evidence and interview the organization, once started, must be completed in 90 days. That 90-day clock starts when the first practice of one instance is characterized by the appraisal team. Once it's characterized (i.e., how well it meets the model's expectations), the team can't go back and change the characterization based on evidence that didn't already exist at the time it was originally characterized. If an appraisal on-site is spread out over the 90-day period, it's sort-of like inching your way forward. However, it's important to reiterate that you're not really getting anywhere until the complete set of practices, goals, and process areas have been characterized and rated; it's not like you're getting a "Maturity Level 1.6" or something like that on your way to a full Maturity Level 2.

Back to Appraisals/Ratings FAQs


If we go for a "level" now, do we have to go through it again to get to the next "level"?

A: Yes. Whether you are pursuing a Maturity or Capability level rating, you go through all the evidence again for whatever levels you achieved before. One reason is that at this time there are no mechanisms in place to allow for "cumulative" appraisals, which is what would be necessary to make this approach work. However, even more fundamentally, the appraisal team and Lead Appraiser can't be expected to assume that there would be evidence from the lower levels to support the higher levels' activities. Even more basic than that is the fact that the levels support one another and it would be very unlikely that appraising to a higher level could be accomplished without evidence from the earlier levels.

The only exception to this is if an appraisal is spread out over a period of time, and is, in fact, one long appraisal. The time-limit for completing a single appraisal is 90 days.

Back to Appraisals/Ratings FAQs


How long does the "certification" last?

A: Setting aside the fact that it's *NOT* a "certification" (See #1), the current answer is that Appraisal Results will be recognized by the CMMI Institute for three (3) years from the date of the appraisal's acceptance by the CMMI Institute (or, if prior to 1 Dec 2012, by the SEI).

Back to Appraisals/Ratings FAQs


What is the difference between SCAMPI Class A, B and C appraisals?

A: The differences boil down to the level of rigor, and, as reflection of the level of rigor, to what the outcomes can be.

A SCAMPI class A is the appraisal most people think of when they think about CMMI appraisals. It is the only class of SCAMPI that can result in a level rating. And, as one might imagine, it requires the most rigor: a certified lead appraiser, a minimum number (4) of people on the appraisal team collectively having explicit qualifications, very specific types of evidence, and a certain minimum number (and type) of projects to characterize the process improvement of the organizational unit.

SCAMPI classes B & C have much wider ranges in terms of team composition and evidence, as well as how they're used and the level of rigor one wants to apply. Although a SCAMPI B can't provide an official (or otherwise) level rating, one can conduct a SCAMPI B with all the rigor and "fanfare" of a SCAMPI A. Many organizations use SCAMPI B as a means of preparing for a SCAMPI A and/or as a means of conducting process reviews or audits. SCAMPI B requires fewer kinds of evidence, a minimum of only 2 team members, and, since no level rating is given, the sample of projects required to be appraised is less specific.

The SCAMPI C is very much the "formal" way to do an otherwise informal process review. It can be performed with only one person as the appraisal team-and-appraiser; it can use direct artifacts, indirect artifacts, or affirmations (e.g., interviews) in no specific quantity or combination; and it is frequently best used for performing what many call a "gap analysis". Of course, it can be used for more than that. Though, due to the low expectation of evidence, SCAMPI C is often used to target specific needs and/or when an organization generally expects that there will be many more findings than conclusive improvements to be found.

Back to Appraisals/Ratings FAQs


How do we pick a consultant or lead appraiser?

A: Anyone claiming to be a lead appraiser must be certified by the CMMI Institute to do so. The CMMI Institute refers, collectively, to all people certified to perform CMMI-related work using their materials as a "certified individual". Thus, all actual lead appraisers are "certified individuals". You can search/sort a list of such people here, and, specifically limit your search to lead appraisers. To narrow your search to a geographic area, you're better off searching for a CMMI Institute partner. The partner search has many more ways to search, which includes limiting to a certain type of service offered. And then, once you find a partner, you can see the authorized individuals associated with that partner.

However, one need not be a lead appraiser to consult on CMMI. In fact, there are many people very well experienced in implementing CMMI, and with appraisal experience, who are not credentialed to do appraisals or official training; there are many more experienced people than there are authorized individuals. Frequently, because they don't carry authorizations, they're not able to charge as much as those who are authorized. The trick is finding them. Many work for partners, so once you find a partner, you might ask about the authorizations of their consultants as a gauge of what you can expect to pay. Many people experienced in CMMI work for large organizations who need their services full-time, on site, and moonlight as consultants. Others are just independent consultants and get much of their work by word-of-mouth.

Though, the question isn't "how do we find a consultant or lead appraiser" but, "how do we pick from all the ones out there?!?!?".

As you can guess, there's not a simple answer, but we can say two things:

  1. Caveat Emptor (buyer beware), and
  2. Pick one who you feel can understand your business and your needs; your context.
WHY must the buyer beware? Because interpreting models for how a given implementation can be done, and recognizing that a given implementation is a legitimate interpretation of the model, are far from an exact science.

CMMI is a model not a standard, as we've said many times before. It's not something that, when applied, will look the same each time. Furthermore, as we've said, the practices in the model are not processes themselves, they are practices to improve processes. It takes skill to effectively interpret the model and implement it in a given situation, and, it takes contextual relate-ability to appraise whether the model has been implemented or interpreted properly/effectively.

There are (in our opinion) far too many lead appraisers (and consultants) who don't know how to/don't want to/don't appreciate (or are too lazy to) do what it takes to help clients design contextually appropriate process solutions, and/or to allow for contextually-driven interpretations of model practices when performing an appraisal. Symptoms of such an appraiser or consultant are in what they consider valid evidence of the practices, or valid descriptions of the processes, or how they describe their approach towards working with an organization to build up their practices. 

An appraiser or consultant may not be suitable to a given buyer if they only expect to create, see, or will accept "typical work products" as evidence, or if they expect to see, create, or will accept process descriptions with each CMMI practice spelled out, or if they expect organizations starting out to generate artifacts that address model practices but don't add value to the product development process. None of these characteristics are required (or prevented) by CMMI or the SCAMPI appraisal method. Therefore, buyers must be able to select the CMMI service provider whose attitude, knowledge, and experience suits their needs. After all, the model and appraisal process can allow for a wide variety of strategies, tactics, and contexts, but not every consultant or appraiser will (or can) allow for it.

What this means, in practical terms, is that buyers of CMMI services must be able to interview potential consultants and lead appraisers for their attitude towards, and knowledge and experience in practice implementation, evidence of practice implementation, and artifacts of the implementation. Furthermore, buyers must interview providers for the ability of the provider to pick up on, adapt, and appreciate the context in which the model has been or will be implemented.

The easiest example(s) to provide relate to whether the consultant or lead appraiser can communicate with the buyer in terms the buyer understands such as: being a small outfit, or using Agile development methods, or being embedded with the client and using the client's practices as well as their own. Another relevant inquiry is if the buyer can give or ask for some examples of practices as actually carried-out in or envisioned for their organization, and gauge the response from the potential CMMI service provider as to what they think of those practices.

The challenge in conducting such an interview is that the buyer must have enough of an understanding of the model and the appraisal process to be able to determine whether what they're hearing is the provider's opinion/approach or whether what they're hearing is dictated by the model or the appraisal process. Sounds like an impossible task. Fear not, some CMMI service providers will give you this sort of unbiased advice or even a quick education for free. This FAQ and its contributors are aimed at providing this sort of advice because we feel it's to the detriment of the entire CMMI enterprise not to do so. Sadly, rough estimates of the number of such providers puts the figure at about 5-10% of the entire authorized population. As a character in a pretty good movie once said, "choose wisely".

Good luck!

Back to Appraisals/Ratings FAQs


Where can we see a list of organizations that have been appraised?

A: Finally! A question with a simple, straight-forward and easy answer!

Simply visit the CMMI Institute's Published Appraisal Results System, (PARS) and put in your query. It's fairly uncomplicated. 
There are, however, a few points to keep in mind:

  • Not all organizations have asked to be listed in the system; not appearing does not mean an organization has not been appraised.
  • If an organization has changed its name after being listed, it will *not* be listed under its subsequent name(s); organizations are only listed under the name they had when the appraisal was performed.
  • Pay close attention to the Organizational Unit (OU) (discussed on this FAQ here) of the appraisal. Though you may be interested in validating whether a company has been rated, it's rare that entire companies are rated (especially if the company is not small). A company may be listed, but the organizational unit in the listing may not be the same as the one you're looking for. Or, there may be several organizational units within a single company. Do not take for granted that the organization you are researching is (or its people are) the same as the one appearing in the system.
  • Once all the appraisal data is fully completed and submitted to the CMMI Institute, it can take 30 days before appearing in the PARS. Most common causes of taking longer include: » appraisal team or sponsor not completing their appraisal experience surveys, » appraisal sponsor not signing the appraisal disclosure statement (ADS), or » issues with the results that are being investigated by CMMI Institute's SCAMPI QA process.


Back to Appraisals/Ratings FAQs


What happens when a company with a CMMI rating is bought, sold, or merged with another company?

A: Current and prior versions of appraisals (through and including v1.3) are patently rearward-looking. Furthermore, in v1.3, explicit sampling factors were put in place to distinguish important characteristics of the organizations being appraised that may cause the circumstances and therefore the processes to change from one part of the operation to another. As such, the only valid statements that can be made about an organization and appraisals performed on the organization are statements related to the specific organization named in the appraisal results at the time of the appraisal. 

In other words: when two (or more) organizations come together or are split apart due to mergers, sales, or acquisitions, appraisal results do not convey, combine, confer, assume, or transfer with, from, or to the new entity/entities in any way whatsoever. The appraisal results remain attributed to the original entity/entities and the circumstances that characterized the original entity/entities at the time of the appraisal. Furthermore, it is not possible to combine appraisal results from two or more entities, or to assume the highest or newest of two or more ratings, when independently appraised organizations combine in any way.

We hope that's clear.


Back to Appraisals/Ratings FAQs


What's the official record of the appraisal results?

A: The Appraisal Disclosure Statement (ADS) is the sole and entire official record of the appraisal's results, regardless of what does or does not appear in the CMMI Institute's Published Appraisal Results System (PARS). Nothing in any appraisal presentation, and unlikely anything to be found framed and on the wall at a company, or printed on a large banner and hung from a footbridge, is an official or complete indication of what exactly was appraised and the meaning and context of the results of an appraisal. (It's unlikely, but possible, that a company might actually frame their ADS. It's several pages long; but in the spirit of avoiding any absolutes we can't prove, above, we used the phrase "...and unlikely anything to be found...".) In any case, the ADS is generated by the Lead Appraiser after all the other data has been collected and submitted to the appraisal system. It's signed by the appraiser and the sponsor, and contains all the details of the appraisal, its circumstances, the explicit organizational unit to which the results apply, and the results themselves. If someone were serious about determining whether an organization has been appraised, when, to what end, and to what scope, they should request to see the non-confidential parts (if any are even confidential) of the ADS.


Back to Appraisals/Ratings FAQs


Can we go directly to Maturity Level 5?

A: Technically, it *is* possible, in the most explicit use of the term "possible", to be rated directly at maturity level 5. All this means (in the case of maturity level 5 for Development, for example) is that the organization was appraised performing the Specific Goals of all 22 process areas, up to and including Generic Goal 3 of each process area. The fact that they were not level-rated before this results in the organization appearing to have achieved ML5 "directly".

However, in reality, it's not likely that any organization would proceed to implement all 22 process areas without ever having performed any appraisals between the start of their process improvement program and their appraisal for ML5. What is more likely is that at certain points the organization will conduct appraisals to gauge their progress. Whether or not these intermediate appraisals were used to generate a level rating would be up to them. There's no requirement that appraisals generate ratings, so an organization appraising at ML5 and receiving a rating may appear to have gone directly to ML5, when in fact they had several appraisals before then -- none of which generated a rating.

Of course, there's another reality to consider: the CMMI Institute reviews all high maturity appraisals very carefully. If an organization has never had a CMMI (SCAMPI) appraisal prior to their appraisal for ML5, it will be viewed with even more scrutiny, and both the Lead Appraiser and the organization appraised can expect to get a call from the CMMI Institute's SCAMPI QA team. Not to mention that not having any appraisals prior to the one aiming for ML5 is extremely risky.


Back to Appraisals/Ratings FAQs


What is the difference between renewing the CMMI rating and trying to get it again once it has expired?

A: Generally, the difference is only in how much preparation it takes the organization. In our collective experience, most 1st-time ratings require some amount of transition from the original "present state" of the organization's practices to a "new" present state of practice such that they can attain the desired level rating.

Assuming the organization in question didn't change much, and/or that they were successful in maintaining their practices over the years and have kept up with their institutionalized practices and processes, then they would have little or no "gap" in practice performance. They would merely need to put in some sweat equity towards collecting the evidence for the appraisal beforehand.

The mechanics of an appraisal are no different. The lead appraiser (appraisal team leader) must still plan and prepare for the appraisal. The appraisal team must still be qualified and briefed. A pre-appraisal readiness review must be performed by the appraisal team leader. And, the on-site portion of the appraisal must still be performed.

Appraisal team members from prior appraisals can be re-used as long as they have the most up-to-date qualifications as required by the scope and method of the appraisal in question. There is no need for going through lengthy training for prior appraisal team members whose prior training still keeps them qualified for the scope and method of the appraisal planned. (These are Q&A for the lead appraiser. If you don't yet have one, we can probably answer them for you but we'd need some more information from you, so please contact us.)

If the organization has not changed the state of its practices since their prior rating event (i.e., SCAMPI A appraisal), there is no compelling reason to perform a new "present state" (a.k.a. "gap") analysis or to invest in any "improvement" consulting. However, if there is some concern that practices may have dropped off or slipped from the tracks since the last appraisal, a current present state analysis might be a good idea so that the required readiness review can be more productive with fewer risks and unknowns going into the SCAMPI A.

In our experience, renewal appraisals have cost 25%-40% of the original appraisal costs since the original costs included coaching, training, and consulting to bring the organization up to the point where they are ready for the appraisal.


Back to Appraisals/Ratings FAQs


Q: Can my organization go directly to a formal SCAMPI A without any SCAMPI B or SCAMPI C? Is it mandatory to complete a formal SCAMPI C and B before a formal SCAMPI A?

A: We've gotten this question more than a few times, so it's about time we put it onto the CMMI FAQ.

There is no requirement to perform a SCAMPI C or B prior to a SCAMPI A.
This is true for *any* SCAMPI A -- regardless of whether it is your first, second, tenth, or any other SCAMPI A.

Under certain conditions, a SCAMPI C or B is recommended, and there are many good reasons to perform a SCAMPI C or B (which is why they exist) but under no circumstances are they required.


Back to Appraisals/Ratings FAQs


CMMI, Agile, Kanban, Lean, LifeCycles and other Process Concepts FAQs

What if our development life cycle doesn't match up with CMMI's?

A: CMMI isn't a development life cycle. It's a model for building an improvement system to continuously improve very particular areas of what goes on during development, regardless of the life cycle. (This is a central tenet of Entinex's approach to CMMI, by the way.) Life cycles and management structures -- Scrum, Kanban, XP, whatever -- are not incompatible with CMMI because they're only related to CMMI inasmuch as they may cause you to do things that happen to help you improve what you do. CMMI is agnostic about *how* you manage your work and about the methodology you use to develop your products (or deliver services). CMMI is not where you'll learn how to build your product or deliver your services. CMMI will not tell you how to operate your business. CMMI is only helpful if you already know how to do these things; it is then used to improve your performance. Life cycles are how you get things done. You choose them, and CMMI can help you improve within them.

Back to Agile and Standards FAQs


Doesn't the CMMI only work if you're following the "Waterfall" model?

A: NO! CMMI is not about development life cycles. While a fair criticism of CMMI is that many of the contributors come from a "Waterfall"-centric, "Big Plan Up Front", "top-down" way of developing wares, they were at least careful not to box anyone into following a specific development method. Nonetheless, it takes a very deep understanding of CMMI to implement it, regardless of which life cycle you follow. We've got more to say on this, so check back in a bit. Meanwhile, you can browse over to our AgileCMMI blog.

Back to Agile and Standards FAQs


How does CMMI compare with ISO/TL 9000 and ITIL? (or other standards?)

A: While there is considerable overlap between these models, frameworks, and "best" practices, they are different from each other and used for different purposes. People who ask this question come from one (or both) of two camps:

  1. They're just totally unfamiliar with CMMI (and/or the others), and are asking innocently enough, and/or
  2. They just look at CMMI (and the others) as some standard they need to comply with, and not as something that can make a positive difference in the operations of the business.

(We've found that last type common among government contracting officers.)

Let's address a question of "standards" first.

The process areas and the practices within them are not intended to be, or to replace, any technical "standard" for doing anything. Some process areas share names with familiar activities for which volumes of "standards" have already been written; many of the engineering-oriented process areas come immediately to mind, such as Configuration Management and Requirements Development. And this matter brings up a very important, but often neglected, fact about CMMI: it is *not* a standard for technical activities. And, for whatever CMMI *is* supposed to be used for, it does *not* prescribe how to do anything.

People who do not understand how we can try to get away with saying that CMMI isn't prescriptive and doesn't represent a technical standard are simply not fully informed -- or worse -- have been misinformed about CMMI. We'd really love an opportunity to set the record straight.

CMMI is about improving management processes associated with developing and delivering technical products and services. CMMI is not about the technical processes needed to actually do the developing and delivering. The CMMI "process areas" are what the authors believe to be important elements that contribute to a systematic ability to effect process improvement in and among (the management of) those technical processes and practices that actually develop and deliver the products and services.

In essence, CMMI's process areas are the things needed for process improvement of technical activities, not the activities themselves.
What CMMI is saying is:
In order to improve your processes, you need to manage your requirements, risks and configurations; you need to plan, monitor and control your projects; you need to measure and analyze the output of your efforts; and you need to actually pay attention to the performance of your projects, to how well they follow processes, and to whether your processes are working out for you.
CMMI then says: if you really want to get good at these things, you'd have to be making a focused effort on your processes; you'd have standardized process assets, an organization-wide training program, and a formality to your technical activities that might otherwise be left to fend for themselves.
For the true process zealot: you'd be able to quantify the performance of your projects and processes, and you'd be able to improve them by focusing on what the numbers tell you to focus on, not just what people gripe about the most. CMMI also says that if you're going to do a process, you should have a policy for doing it, a plan for it, resources, assignments, process-level training, stakeholder involvement, and other activities to make it stick.

If process improvement is what you want, it only makes sense, doesn't it?
(The types of activities mentioned here are from the process areas and generic practices, in case they weren't familiar to you.)

You see, CMMI has a number of process areas that are needed for technical activities, but they are present in CMMI because these processes are needed for process engineering just as much as they are needed for technical engineering.

So, if we disassemble a process area into its purpose and goals in light of the above understanding, we will see that the purpose and goals are not oriented at technical activities; they're oriented towards process improvement activities. We can hope that in this context the matter of whether CMMI is a technical standard can be laid to rest, and that we've brought a deeper appreciation for how CMMI works.

With that, we can simply explain that ISO/TL 9000 and ITIL have a different focus than CMMI, and just like CMMI has process engineering processes that sound similar to technical engineering processes, these other bodies of knowledge also have their own similar-sounding activities that are needed and relevant for the purpose each represents. Since this isn't a FAQ about ISO/TL 9000 or ITIL, we hope it's enough of an answer for now to explain that wherever CMMI has a practice that seems to also appear in another body of knowledge, CMMI does not innately conflict with the others; there are ways of implementing CMMI that can make them all work well together. Of course, an organization can go about implementing any practice under the sun in a way that conflicts with some other practice, CMMI or otherwise, but that would not be because of anything in CMMI.

Back to Agile and Standards FAQs


Aren't CMMI and Agile / Kanban / Lean methods at opposite ends of the spectrum?

A: Not at all. We've got A LOT of content on this subject! Instead of being very redundant by putting something here, please check out the blog on that topic, and the SEI's Technical Note, CMMI or Agile: Why Not Embrace Both!

We will, however, leave you with this: There is nothing inherently *in* Agile/Kanban/Lean methods that makes them incompatible with CMMI. However, both CMMI and the family of approaches commonly and collectively called "agile" have a very wide (perhaps infinite) range of ways of being interpreted and implemented. For example, nothing in CMMI requires that everything be "documented", though many organizations take this (silly) approach to using CMMI. Similarly, nothing among the agile values or practices insists that a team produce *no* documentation, but that doesn't prevent teams from being just as silly.

One more point: Most of the incompatibilities we've seen -- beyond interpretation and implementation misfires -- come from focusing on the practices (either CMMI or agile) and on tools and/or artifacts (again, either CMMI or agile) instead of the values and principles behind them. Focusing on the wrong thing will most often lead you to the wrong place. Believe it or not, both CMMI and Agile come from the same source. They merely take different routes to get to the same desired outcome, but when used incorrectly, neither is likely to save you from failure.

Back to Agile and Standards FAQs


How are CMMI and SOX (SarBox / Sarbanes-Oxley) Related?

A: They're not. Well... at least not in the way that many people think they might be.
See, many people think that because the Sarbanes-Oxley Act of 2002 (which we'll just call SarBox) frequently involves business processes, IT infrastructure, and related systems, it must involve CMMI. But in actuality, the connection to CMMI is rather weak and is always a function of the organization's intentional effort to connect the two.

SarBox is about public company corporate governance. It is a US law that aims to eliminate the excuse by corporate leaders of public companies that they "didn't know" some bit of information about their company that could result in mistakes (or outright lies) about accounting, work-in-process, inventory, earnings reports, valuations, sales/revenue forecasts, and so on.

Its origin is in the several accounting scandals revealed in the late 1990s and early 2000s.

The intersection of SarBox and CMMI is only in that companies working towards SarBox compliance are very often relying on systems and software to help them achieve their compliance. When a company says to an auditor, "our software (or systems) are capturing the data, and we rely on that data to be SarBox-compliant," then they might get into questions about the system's requirements, design, configuration, etc.

A company in such a position might look towards CMMI for ways of improving the management of their development practices if, in fact, they are relying on those practices to maintain their SarBox compliance.

The simple answer is this: SarBox and CMMI are not related and don't have similar practices, EXCEPT that some companies *make* them related because of the context in which they're using technology to be SarBox-compliant, and, their reliance on technology development disciplines and/or institutionalization of process improvement disciplines to make sure they have a handle on how they run the company.

That said, another quick connection between SarBox and CMMI is that a company already familiar with CMMI might want to use the GPs and perhaps a few PAs to deploy whatever they need to deploy to "establish and maintain" SarBox compliance. (Sorry, couldn't resist!)

There's another angle to mention:

Many executives seem to have the nasty trait of putting anything that looks, smells, sounds, tastes or feels like "compliance" requirements into the same "bucket".

When they make this mistake, they then jump to a conclusion that goes something like this:

"This is compliance, that is compliance, and that over there is compliance too. They must all be related, and therefore redundant. Which is best? Which is easiest? Which is cheapest? Which should I do first?"

This, of course, is utter nonsense, but it happens. The fact that implementation of these process-related concepts must be driven by the context of the business is just ignored. It is a symptom of an organization looking for the marketing benefit of some auditable achievement and not the benefit of the concepts behind the effort.

Though, in fairness, companies that must comply with SarBox don't have a choice, unless they can afford to go private by buying back all their stock.

Back to Agile and Standards FAQs



SEI / CMMI Institute FAQs


Why is CMMI Being Taken Out of the SEI?

A: CMMI and its predecessors were worked on by the SEI for over 25 years. Much of that work was funded by the US Department of Defense (DOD). The DOD stopped funding CMMI directly several years ago. However, SEI is still an FFRDC (see here) funded by DOD, so for SEI to continue research & development (R&D) on CMMI, some of the support for that effort would have to come from money paid to the SEI by DOD for other R&D. In 2012 the DOD decided that it wanted the SEI to focus all of its resources on evolving other technologies more urgent to DOD than CMMI, and that CMMI is mature enough to support itself. So, instead of dropping CMMI entirely, Carnegie Mellon University (CMU) is creating the "CMMI Institute" to operate CMMI (and People-CMM and a few other things, eventually). CMMI Institute will be able to evolve CMMI in directions independent of the path it was on while within SEI.

Back to SEI / CMMI Institute FAQs


Who Will Operate the CMMI?

A: CMMI will continue to be owned and operated by Carnegie Mellon University (CMU) through a start-up entity created in 2012 called the "CMMI Institute", which will formally assume operation of CMMI on 1 January 2013. This entity will be able to be more market-focused and industry-driven. Research will continue, but it will be more goal-oriented, and CMMI Institute will operate more like a commercial business than an academic think-tank. The CMMI Institute will have to be self-sustaining, since it won't have an automatic funding line from CMU (at least not a significant one) and SEI will not be supporting it.

Back to SEI / CMMI Institute FAQs


What Will Happen to CMMI? Will CMMI Continue to be Supported?

A: CMMI will continue to be supported by CMMI Institute. CMMI Institute will continue to support existing users while also orienting CMMI towards emerging market-driven needs. We can expect CMMI and its related products and services (such as appraisals) to be evolved in directions that make sense for many market segments and that appeal to audiences more broadly than the SEI's R&D mandate allowed. We can also expect changes (improvements) in the variety of appraisals, the quality/qualifications of instructors and appraisers, and even possible new designations for appraisals, appraisal results, and appraisers.

Back to SEI / CMMI Institute FAQs


Will CMMI Change? What's the Future of CMMI?

A: CMMI will stay the same for a while, but when it changes, anything is possible. While the current version and architecture of CMMI may continue to evolve along its current trajectory, this is only one possibility. When not directed towards DOD R&D, CMMI can evolve along many new paths. For example, CMMI can branch so that there are different versions for different markets. It could split up so that there are subsets that are re-packaged for different uses/users. Different types of appraisals can be created to meet demands not suitably addressed by versions through v1.3 of CMMI and the appraisal methods. Imagine, for example, versions of CMMI and of appraisals that focus on ongoing improvement in bottom-line performance, or versions that meet the specific targeted needs of start-ups and their venture backers. Imagine appraisers and consultants specifically qualified to work with lean, agile, start-ups, enterprise, operational services, technical debt, or DevOps, each with a version of CMMI, training, and appraisals suited specifically to their business, and without the ambiguity currently experienced with only one version of everything for everyone. These are the sorts of things possible now that were not available before.

Back to SEI / CMMI Institute FAQs


Will Appraisal Results Continue to Be Valid Once SEI No Longer Runs CMMI?

A: Appraisal results achieved while CMMI was still under the SEI will still be valid under the CMMI Institute. Appraisal results will expire as per their original expiration dates. Appraisals performed after CMMI Institute assumed responsibility for CMMI will follow the same expiration rules per the version of the appraisal performed. Changes to appraisals, appraisal methods, appraisal results, and expiration will be made and deployed in a manner consistent with the needs of the market and ordinary refresh and release processes. It should be noted that SEI does not own the intellectual property or related assets of CMMI; Carnegie Mellon University (CMU) owns them. Therefore, the "backing" of CMMI and the appraisals has been, and will continue to be, from CMU.

Back to SEI / CMMI Institute FAQs


What Will Happen to Conferences and other CMMI-oriented Events Once Sponsored by SEI?

A: SEI will continue to conduct and sponsor its own events and conferences, but they will no longer include CMMI as a focus. CMMI can't be kept out of public discourse and use, so it's likely that content within an SEI event would still reference CMMI; nonetheless, SEI will not sponsor CMMI-specific events after 1 January 2013. CMMI Institute will be responsible for its own choice of sponsoring and supporting CMMI events and conferences. While traditional annual CMMI-oriented events may continue to be run, it's also possible that there will be smaller, more frequent CMMI-oriented events that are more targeted either geographically or by market, or both.

Back to SEI / CMMI Institute FAQs


Will We Still Be Able to Work with Our Current "SEI Partner"?

A: All current SEI Partners in good standing will be offered the opportunity to have their licenses continue to operate under CMMI Institute. In fact, since the CMMI intellectual property belongs to Carnegie Mellon University (CMU), the licenses are between the Partners and CMU, not SEI. Other than changes to references to SEI and website URLs, the change of relationship between the Partners and CMMI Institute will not change the relationship between you and your Partners.

Back to SEI / CMMI Institute FAQs


Isn't this just a cash cow for the SEI (now CMMI Institute)?

A: Um, well, yeah... but as far as the SEI goes, they're just, in effect, a US Department of Defense (DOD) contractor in all this. You see, the DOD put out an RFP for some university-based research/think-tank to come up with a "solution" to the problem of abysmal performance of software projects. The SEI turned in the winning proposal and was awarded the contract for a Federally-Funded Research and Development Center (FFRDC). FFRDCs are typically established, academic, not-for-profit organizations whose outputs are the intellectual property of the researchers' employers but freely distributed within the government and anyone the government says can use it. And so, Carnegie Mellon University's Software Engineering Institute (SEI) beat out the University of Maryland in the competition to be the FFRDC to "solve" the problem. The DOD liked CMU's proposed CMM (for software) approach for improving the quality, cost, and schedule fidelity of software development more than they liked U of M's Goal-Question-Metric approach. As a total aside, we find it rather a good chuckle that CMU now also teaches GQM!

But, we digress. SEI was mandated to work on and continuously improve the field and body of knowledge for software management and engineering. That's how we now have CMMI v1.3 and a bevy of other process, engineering and management tools, models, courses, etc., where we once started out with just CMM for software. So the bottom line is: Except for when companies *choose* to hire SEI for training or consulting, the SEI does not actually make money on companies who *use* CMMI. The majority of materials are free to use because they were developed with taxpayer money, and those things that aren't free are cost-recovery for administration of everyone using SEI services and licensed products. Let's be clear about something: organizations do not need SEI to improve their processes, and if companies want to avoid what they perceive as high costs, they can invest a relatively small amount to grow their own internal CMMI and SCAMPI wherewithal.

With the transition of CMMI to the CMMI Institute, this entity will need to support itself based on usage of CMMI Product Suite components. Most of this operating revenue will be collected via fees on Partners, not CMMI Product Suite users (except if/when a user goes directly to the CMMI Institute for services).

It is expected that CMMI Institute will be enhancing every aspect of CMMI and its related products and services -- both directly in the materials as well as through its own products and services and those of the Partners who are, effectively, a branding extension of the CMMI Institute. As an entity expecting to be self-sustaining on the merits of a product, CMMI Institute will have to contend with market forces. As such, the future and viability of the CMMI Institute depends on the marketability of CMMI and related products and services. The value proposition will be critical, as with any other business, and therefore a steady roll-out of improvements to the CMMI user experience can be anticipated over the next several years.

Back to SEI / CMMI Institute FAQs


What makes SEI the authority on what are "best practices" in software?

A: Lest you think SEI is entirely made up of ivory-tower academic pinheads, you'd be surprised to learn that SEI is still a university research institute, and as such is as worried as any business or school would be about its credibility and about keeping its knowledge base up-to-date with the latest research, techniques, technology, and tools. Besides that, the vast majority of people who work on CMMI come from industry, not academia. The list of contributors and reviewers is as impressive as it is long. While even we concede that the list is a bit heavy with Federal contractors and large, deep-pocketed organizations with plenty of ability to absorb overhead, to be fair we should note that such companies are not alone, and that they were among the few companies who showed any interest when things got kicked off. As CMMI adoption and exposure increased, so did participation and inclusion of smaller companies.

It's not so much, then, that SEI is the authority; it's the collection of expert software practitioners from across the business spectrum who are the authority. The SEI just makes it possible for these people to get together in one place.

With the transition of CMMI from the SEI to the CMMI Institute, it can be expected that there will be an even stronger input from and relevance to companies of all sizes, markets, industries and phases of business evolution from start-ups to mega-mergers in everything from retail to railroads, automotive to zoology. One can expect a number of new products becoming available including CMMI-based models and appraisal methods for different users as well as smaller, more affordable, and segmented user-oriented events throughout the world. Keeping the products relevant will be critical for the CMMI Institute's success.

The question of whether or not there are actual *best* practices is out of scope for this FAQ. Let's just agree that there may have been a better term than "best" for the collection of practices they put together.

Back to SEI / CMMI Institute FAQs


Do the Lead Appraisers work for the SEI?

A: Not all of them. In fact, only a few do. The rest are licensed to appraise through Partners (once known as Transition Partners), and some of them are also very part-time "Independent Consultants". CMMI Institute does, however, administer and train people to be certified to take leadership roles and responsibilities for leading appraisals and delivering Introduction to CMMI instruction.

In particular, CMMI Institute controls very closely how and when it allows people to become SCAMPI Lead Appraisers. Even still, while the cadre of people with the authority to observe candidate Lead Appraisers on behalf of the CMMI Institute is small, only a few of them are actually CMMI Institute employees. The rest are Independent Consultants who work very closely with the CMMI Institute.

Back to SEI / CMMI Institute FAQs


What's a "Transition Partner"?
What's the "Partner Network"?

A: Transition Partner is the name previously used for companies/organizations in the SEI's Partner Network. In 2006, the name given to this program (and to these organizations) was changed from Transition Partner to Partner Network. These are organizations (companies, individuals) who hold a license from the SEI and/or CMMI Institute to use SEI materials and perform "official" activities which are registered with the SEI and/or CMMI Institute, such as formal, reported SCAMPIs and training. (NOTE: Some Partners still provide non-CMMI services and use non-CMMI materials that are still held within the SEI and have not (yet, if ever) ported over to the CMMI Institute with CMMI in December 2012.) The original term "Transition Partner" comes from the concept of companies who are out in the field as SEI's partners helping other organizations transition to using CMMI. Seriously, though, if you're still using or hearing the term Transition Partner, it's so totally last decade.

All individuals wanting to be certified to do things using SEI content in any way must be sponsored through a Partner and pay a licensing fee for each credential they want to hold.

Back to SEI / CMMI Institute FAQs


How do we report concerns about ethics, conflicts of interest, and/or compliance?

A: Waste, fraud, abuse, and noncompliance with policies harm everyone. If you have concerns about the truth behind an organization's rating, or about the ethics, compliance, or conflicts of interest of a consultant or appraiser, we strongly encourage you to report these concerns to the SEI. You may also want to review the SEI's Partner policies, here, as well, to ensure your concern is properly supported. All authorized and licensed individuals and organizations must operate through a Partner, so all investigations will include an inquiry to the Partner.

The SEI's Conflict of Interest form is here,
The Ethics and Compliance site is here, there you can also see other information on expectations and how to report your concerns.
The US Hotline Phone Number (24/7/365) is: 1-877-217-6316, and
The direct reporting email address for Ethics and Compliance concerns is:

We sincerely hope you never have to use any of them, but if you do, we're very sorry. And, we hope you are undeterred from your process improvement aspirations.

Back to SEI / CMMI Institute FAQs


Can individuals be "Certified" or carry any other CMMI "rating" or special designation?

A: Individuals can become:

  • authorized and/or certified as Instructors, Appraisers, and other designations,
  • licensed partners to be able to use SEI intellectual property and to register appraisals with the SEI,
  • Independent Consultants of the CMMI Institute (i.e., part-time or volunteer employees),
  • eligible to be appraisal team members by taking the licensed Introduction to CMMI and registering in the SEI's appraisal system, and
  • other credentials that are evolving over time, and non-CMMI credentials as well.

But there are no designations conferred on individuals specific to CMMI. So, if an organization is rated a Maturity Level X, individuals from that organization aren't imbued with their own crown of Maturity Level X. Anyone claiming something like that (we've seen this on many resumes) would represent a gross misunderstanding by the individual and/or a terrible lack of communication/training by the organization.

Also, taking Introduction to CMMI, or even the next class, Intermediate Concepts of CMMI, does not designate a person as a "certified" or "authorized" CMMI consultant. (We've seen that too.)

Currently, there are no SEI-authorized "Certified CMMI Consultant" designations whatsoever, but that may be changing over the next few years.

Back to SEI / CMMI Institute FAQs



Training FAQs

Is there required training to "do" CMMI?

A: That depends on what you want to accomplish.

  • To just implement CMMI? None whatsoever. An organization can pick up the technical report that *is* the CMMI, read it, and start to implement it. SEI and CMMI Institute require no training to do that. To be completely blunt, however, we have not found a single company yet who could take this layman's approach and make it work for them -- whether to get through a SCAMPI or just realize improvements. There are just some things for which a few hours with someone willing and qualified to explain everything -- at least as far as using the model effectively and/or getting to/through a SCAMPI is concerned -- can make a world of difference between success and disillusionment. (Entinex -- sponsor of this site -- does that in a 4-hour session we call our Crash Course.)
  • To be on a SCAMPI team, a prerequisite is the Introduction to CMMI course. Then, in preparation for the SCAMPI itself, team members receive "Appraisal Team Training" from the Lead Appraiser (or an alternative qualified individual) prior to the appraisal -- but this is part of the appraisal process and not training that must be delivered by CMMI Institute or a Partner.
  • To be an Introduction to CMMI Instructor, one needs Intermediate Concepts of CMMI or two CMMI for Practitioners courses, plus the CMMI Instructor Training course, and then to be observed delivering the course before becoming authorized to deliver it on one's own.
  • To be a Lead Appraiser, one needs Introduction to CMMI, then Intermediate Concepts of CMMI or two CMMI for Practitioners courses, participation as a team member on two SCAMPI appraisals, the SCAMPI Lead Appraiser course, and, finally, to be observed leading a SCAMPI appraisal.
  • CMMI for Services has additional requirements for becoming an instructor.
  • High Maturity Lead Appraisers (HMLAs) require additional coursework and exams.
    Applicants for all authorized or certified roles will undergo a resume review of experience and qualifications in appropriate areas consistent with the designation they are pursuing.

Back to Training FAQs


Who can provide CMMI-related training?

A: The CMMI Institute itself, and people certified by the CMMI Institute *and* working through a Partner, can deliver any training they are authorized to deliver -- if the expectation is that there will be some official registration of that training event. If there is no such expectation of a Certificate of Completion, or if there is no intention of using the training as a prerequisite to future activities, the training is not controlled by the CMMI Institute, since they would never know about it. Be sure to clarify, with whomever you are receiving the training from, their authority to deliver the expected outcome. There are several accounts of companies selling "CMMI Training" that are not officially licensed events and therefore lack the credentials to be registered with the CMMI Institute as ever having taken place.

At this time, apart from the Introduction to CMMI and CMMI for Practitioners courses, the CMMI Institute itself is the only source for CMMI-related model training.

Back to Training FAQs


What sort of CMMI-related training is there?

A: The following are the basic CMMI courses. The CMMI Institute also adds specialized courses all the time; see the CMMI Institute's website for the current list:

  • Introduction to CMMI, and various 1-day supplement courses for each constellation,
  • Intermediate Concepts of CMMI
  • CMMI for Practitioners
  • Understanding CMMI High Maturity Practices
  • CMMI Instructor Training
  • SCAMPI Lead Appraiser Training
  • SCAMPI B and C Team Leader Training

Back to Training FAQs


How can we learn about the appraisal process?

A: For that we have some bad news.
There are only three ways to learn about the appraisal process; one of them is not recommended, and another requires a lot of commitment:

  1. Download the Appraisal Requirements for CMMI and the Method Definition Document, and study them.
  2. Go through all the training requirements of becoming a Lead Appraiser.
  3. Hire someone who has done #2 to explain it to you.

Back to Training FAQs



Specific Model Content FAQs

What is the exact difference between GP 2.8 and GP 2.9?

A: It can be confusing. We've found it's especially confusing to people and organizations who see CMMI as being "compliance-driven".

Mostly, that's because they don't see the difference between "monitoring and controlling" the process and "objectively evaluating" it. And, part of the confusion is due to the fact that these two phrases are incomplete. Understanding these two generic practices requires reading the complete practice statement, not just the title of the practice (which is good advice for any practice!), as we spell out here.

GP 2.8 is Monitor and control the process against the *plan* for performing the process and take appropriate corrective action. [Emphasis added.]  In other words, GP 2.8 is tied to GP 2.2, Establish and maintain the plan for performing the process.  We see many people and organizations confusing (or equating) the plan for performing the process with the process itself. The plan addresses the resources, timing, tasks, and so forth, for seeing that the process *will* get done at the project level, not necessarily *how* it will get done. The plan is about ensuring the process gets done, not necessarily (and not only) about its getting done right. Sure, it's common to find the process embedded in or referenced by the plan, but that doesn't eliminate the distinction between the plan(s) and the process(es).

A process can be done by the right person, at the right time, using the right resources, and take as long as expected, and still not be done the way the process was supposed to be done. (That's what GP 2.9 is for.) How can a process be executed according to the plan and still not follow the process description?  Simple: the plan might refer to the process description, but whoever performed the process didn't use it; or, the project might have created its own procedure but misinterpreted the process, so the procedure doesn't really get the process done. Make sense?

Effectively, you can monitor and control the process just as you would (and when you would) monitor and control the activities of the project. You could even track it using similar metrics: when did it happen, what happened, how many times did it happen, did it happen on time, did it use the expected resources, and so on. And that's the real distinction between GP 2.8 and GP 2.9. GP 2.9 is Objectively evaluate adherence of the process against its process description, standards, and procedures, and address noncompliance.  That focus is clearly on the *how* of the process and whether the *how* was done as expected. An organization may execute a process according to its plan, but do so in a way entirely contrary to the process description (even in a good way); conversely, the process could be performed exactly as the process description expects, but done entirely late, or taking too long, or not by the right people. Hence the distinct activities of checking that the process was done *both* according to plan *and* as expected to be done.

Thanks to Gino Tudor for asking this question!


Back to Specific Model FAQs


Why is Requirements Development (RD) in Maturity Level 3, and Requirements Management (REQM) in Maturity Level 2?

A: We've received variations on this question often enough that we might as well put the answer on this site.

This question is often given to students by instructors, but we also get it from skeptics and other people who resist improvement, or who otherwise don't understand the model. Of course, not all teachers, instructors, or professors understand the model either.

The answer is very simple, but let's start with this to keep in mind... CMMI is not prescriptive, and its content is not in a specific order. Furthermore, nothing in CMMI spells out the actual activities needed to carry out a complete process or to create the products of that process.

Requirements do need to be developed before they can be managed. But, in a maturity growth model of performance improvement, improving the discipline and results of managing requirements is a more basic need than developing the requirements.

A good analogy is this: what use is making money if you don't know how to manage it? You could make lots of money, but unless you know how to manage it first, making it will only be short-lived, and, can even result in terrible debts and other problems -- worse than if you never made the money in the first place. Knowing how to manage your money, even when you don't have much, is far more useful and important than whether you can generate money. Furthermore, knowing how to manage money is useful regardless of how much money you do or don't actually generate.

Now... as far as product development and CMMI are concerned... Some organizations don't actually develop the requirements, they are given the requirements. As such, the need to develop requirements -- or to improve how they develop requirements -- would be less relevant to them than the ability to improve how they manage the requirements.

Another point is that requirements management is a fancy way to say "manage scope". Managing scope is clearly a more broadly applicable area in which to grow capability and maturity than developing requirements. In fact, all CMMI Maturity Level 2 process areas are more broadly applicable to the immediate needs of projects and products than the process areas in maturity levels 3 and higher. As you climb higher in the maturity levels, the process areas become less and less immediately applicable to everything every project deals with. Some process areas above maturity level 2 don't deal directly with projects or products at all.


Back to Specific Model FAQs


GP 2.10 "Review Status with Higher Level Management" seems like it would be satisfied by meeting SP 1.6 and 1.7 in PMC but that doesn't seem to meet the institutionalization. Would the OPF and OPD SPs also need to be met to meet GP 2.10?

A: PMC covers project-level status reviews with whoever is relevant to a project's statusing. GP 2.10, by contrast, is a process-oriented review of process performance with a level of management that has the authority to effect process changes resulting from that review.

GP 2.10 is reviewing the performance of processes--not projects--and would take place for all operational processes deemed relevant to the organization. So yes, somewhere/somehow in your organization you would be reviewing the performance of OPF and OPD. It may seem redundant, but it's not. You have a process for planning process activities, and those processes would be reviewed (GP 2.10). You have a process for creating and maintaining organizational assets, and those processes would be reviewed (GP 2.10).

While there's nothing stopping an operation from reviewing process performance with management at the same events where project status is reviewed, the practices are, in fact, distinct.


Back to Specific Model FAQs


CMM, CMMI, and SCAMPI are ® registered in the U.S. Patent and
Trademark Office by Carnegie Mellon University.
All other content © Entinex, Inc. (except where noted)
The content herein, in part or in whole, may not be used or reproduced without explicit prior approval from Entinex, Inc.
(Just ask, we'll probably let you use it.)
Please read our disclaimer.

Disclaimer: The opinions expressed here are the authors' and contributors' and do not express a position on the subject from the Software Engineering Institute (SEI), CMMI Institute, Clear Model Institute, Carnegie Mellon University or any organization or Partner affiliated with the SEI, CMMI Institute, Clear Model Institute, or Carnegie Mellon University.

Most recent notable update:
26 January 2014

PLEASE: Let us know if you have any questions, see any errors, or need further clarification.




About these ads: The ads that appear below DO NOT reflect an endorsement, recommendation, or suggestion by the authors, editors or contributors to this CMMIFAQ, the SEI, or CMMI Institute.
















