Neuma White Paper:
CM: THE NEXT GENERATION of Configuration Management Planning - Before You Start
Configuration Management planning should not start as you put together
your CM Plan. By then, you've already predisposed yourself to how your
plan is going to play out.
As with most things, the earlier in
the process you do your planning the better. Don't make the mistake of
waiting until you start to define your CM process to do your CM
planning. Your objectives should reach far beyond process. If you
want to achieve the Next Generation of CM Planning, you need to start
with an aggressive set of objectives and be ready to use Next
Generation technology. Even if you don't meet all of your objectives,
you'll find you're ahead of the game. Your preliminary objectives need
to deal with a dozen or so key areas prior to zooming in on your CM
process. Here's what I might recommend as a set of aggressive
- Automation - Automate what can be automated.
- Multiple Site Development - Seamless addition of sites with no effect from geographic separation (other than time zone effects)
- Administration - Shoot for zero administration of your CM system and processes.
- Zero Cost/Big Payback - You want a zero-cost solution that has big payback.
- Integration of Tools and of the Repository - All tools working from a common repository, using the same user interface
- Next Generation Technology - Benefit from already proven next generation technology that's not yet caught up to the mainstream
- Training - No training is ideal, but perhaps settle for a couple of
hours per role with guidance provided interactively by your technology
- Process Customization and Improvement Capability - Out-of-the-box processes are data driven for easy customization
- Reliability and Availability - 100% and 100% would be nice
- On-demand information - Pre-canned reports and queries, rapid traceability navigation, easy customization of queries
- CM Standards - Meets all recognized CM standards
- ALM Process, not just CM - CM Process specified well beyond current
requirements, and extended from one end of the ALM process to the other
- Security and Data Integrity - Adequate security levels and guaranteed data integrity
- Upgrades and Evolution of your CM System - Easy to service remotely
without downtime, zero-impact upgrades, ability to evolve to handle
wider process areas/requirements
OK, is this a bit
idealistic? Perhaps, but perhaps not as much as you think. If you
don't aim high, you're not going to get what you really want.
There are two ways to approach CM Planning: Push the state of the art or
Follow the leader. The former approach uses your CM experience and
outside resources to identify what are feasible goals that will
eliminate traditional problems and maximize payback. The "Follow the
Leader" approach looks at what most everyone else is doing and tries to
stay in the ball game by emulating what has already been tried. This
is more attractive because there are well-known sets of procedures,
tools, expertise, etc. that can be harvested with more or less a
predictable payback and set of problems. Time constraints often push
a project into the "Follow the Leader" approach - but in a sufficiently
large project, it's really worth the effort to push the state of the
art, or at least to set that as a goal. You may find that there are
advanced tools, processes, and technology that are ready for mainstream
and that will give you a competitive edge.
Many of the big strides we've taken in CM were made in CM groups of large
Telecom companies. In those telecom companies where I headed up the CM
group, I was always aggressive. In the late 1970s, when CM was really
just version control and some build assistance, we took the time to
analyze the situation. Two big results seemed to stare us in the face:
Changes, not file revisions, have to drive the Software CM Process.
That's how everyone always worked - they set about implementing a
change but then saved the change as a bunch of file revisions. The key
information was lost. As a result we were held hostage to the practice
of dumping file revisions into the repository, trying to build and then
fixing the problems until we had a stable base to go to the next round.
After a couple of iterations on a change-based CM theme, we settled on
the fact that it was the change that had to drive the downstream
effort. Changes, not file revisions, were promoted. Baselines and
non-frozen configurations were automatically computed based on change
states. Context views were based on baselines supplemented by
changes. This was a huge success, but not without the other key result.
In the '70s and throughout CM history, including today, many, if not
most (read "all" in the '70s) software shops believed that branching
was best if it were flexible enough so that you could multiply branches
upon branches upon branches. Of course there was a cost for merging
the branches, eventually back into a main branch, but that was simply
an accepted cost. In 1978 we identified that a branch was required
when doing a change only if the previous code needed continued support
apart from the change. We attacked the question of what that meant and
in the end evolved the stream-based persistent branches instead of a
single main trunk. We pushed further to identify what was required in
order to minimize parallel checkouts and addressed those issues, one by
one. In the end, we built a stream-based CM tool that would grow to
support 5000 users on a small number of mainframes (no real "network"
existed back then).
The results were astounding. Simple
two-dimensional branching, with one branch per stream in one of the
most successful telecom projects of all time (at Nortel). There was
very little training required (less than a day) to educate users on a
command-line based tool (GUIs didn't exist yet). There was no need for
complex branching strategies, labelling, and even, for the most part,
parallel checkouts and merging. 95% of the merges were from one
development stream to another, not using parallel branches. It was a
simple, scalable solution still in use to this day (I think there's a
GUI now though). Quite frankly, we didn't know how good a job we'd
done until a decade later (late '80s) when we started looking at the
industry as a whole and where it was.
The point is that some analysis, and a resolve to do things right, resulted in a highly successful solution.
Another goal we set for ourselves was to automate. This started at Nortel in
the late '70s, where our nightly build process would automatically
test-compile and notify developers of problems before they left for the day,
and automatically produce the builds required each day at the various
promotion levels. In the '80s at Mitel, we took this one step further
so that we could even download (over an RS232 link) the executables
onto the test targets and run predefined test suites against them at
virtually a single push of a button. In both cases we would
automatically compute what needed to be compiled based on change status
and "include/uses" dependencies so that we would not have to compile
the world every night. (A 1 MIP computer was a powerful mainframe back
then [VAX 780], and still could support dozens of users, but could not
take a load of having to perform several thousand compiles in just a
few hours.) So we focused on automating and then optimizing the
automation to use as few resources as possible.
The focus on automation was highly successful.
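The selective-compilation step described above - rebuilding only the files touched by changes, plus everything that transitively includes them - can be sketched as a closure over an inverted dependency map. This is a hypothetical illustration, not the actual PLS algorithm; all file names are made up:

```python
# Sketch of selective recompilation: given "includes" dependencies and the
# set of files touched by changes, compute the files that must be rebuilt
# (the changed files plus everything that transitively includes them).
from collections import defaultdict, deque

def files_to_rebuild(includes: dict, changed: set) -> set:
    """includes: file -> set of files it includes."""
    # Invert the map: for each file, which files include it?
    included_by = defaultdict(set)
    for f, deps in includes.items():
        for d in deps:
            included_by[d].add(f)
    # Breadth-first walk outward from the changed files.
    rebuild, queue = set(changed), deque(changed)
    while queue:
        f = queue.popleft()
        for dependent in included_by[f]:
            if dependent not in rebuild:
                rebuild.add(dependent)
                queue.append(dependent)
    return rebuild

includes = {
    "main.c": {"util.h"},
    "util.c": {"util.h"},
    "util.h": set(),
    "log.c":  set(),
}
print(sorted(files_to_rebuild(includes, {"util.h"})))
# ['main.c', 'util.c', 'util.h'] -- log.c is untouched
```

The payoff is exactly the one described: the nightly build compiles a small closure instead of the world.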
When developing CM+ at Neuma, a couple of focus points were "near-zero
administration" and "easy customization", to be able to support
virtually any process. Because these were objectives in place from the
start, they were easy to meet. We simply looked at the effect of every
feature and of the architecture on administration and customization.
Where we might otherwise have cut corners, we simply refused to, and
often noticed that the net effort was the same, apart from some extra
deep, gut-wrenching thought that we had a tendency to
resist. The result was a near-zero-administration, easily customized
solution.
The lesson is: If you want to achieve ideal results, you
have to have ideal objectives from the outset, and then work to them.
And it doesn't seem to cost any extra effort. In fact, the simplicity
and the "this is the way it should be" results gives you plenty of
payback down the road. So if you want to do real CM planning, set high
goals up front and work to them.
If you want to know how to write a CM plan, I'll point you to the CM Crossroads Body of Knowledge. http://www.cmcrossroads.com/component/option,com_wrapper/Itemid,106/
If you want to aggressively attack CM planning, you need to start with the above
objectives and work down through them. So let's take a look at a few
of them in more detail. As we do, you'll start to notice that they are
interrelated.
How big a CM Admin team will you need? We're not talking about CM
functions here, just the administration that goes with your solution.
Some of the traditional chores include:
- Database optimization
- Server (and VOB) administration
- Disk space administration
- Dealing with scalability issues as the project grows
- Maintaining operation as you switch platforms (32- to 64-bit, Linux to/from Windows to/from Unix)
- Multiple-site data synchronization issues
- Upgrades to the database software
- Upgrades to the various ALM tools and the associated glue that integrates them
- Nightly backups and restore capability
- Initial conversion/data loading from your existing solutions
These are big issues. You have to ensure high availability. You have to
maintain good performance. Take a look at the various CM solutions out
there. These tasks will require anywhere from a few hours a week to a
dedicated team of a few people, depending on your project parameters and
the solution chosen. If you want fewer people doing admin and more
working on core business, do your research - don't just play
Follow-the-Leader blindly. This is especially true if your CM planning
team is familiar with solutions they've worked with: the tendency is to
stick with them because they know them, including their inherent risks. CM technology
is progressing. There are leaner and meaner solutions; there are more
advanced solutions that require less effort to maintain.
Have you ever used integrated tools? Were you satisfied with the level of
integration and the speed of cross tool query? You can have the best
of breed for each tool in an ALM solution, and integrate them together,
but that won't give you the best solution. Ideally, you want
management tools that share the same user interface, so you're not
hopping from tool to tool and so that you're not complicating your
training, and that share the same repository, so that data at all
levels is available to all tools. Ideally there's no integration
scripting that you have to maintain, even as you customize each
management function. The second-generation tools that knit together
solutions are starting to give way to broader ALM solutions that are
built on one or two core technologies.
Don't get into the
situation where you have to have a team to do tool integration. Even
worse is to have to outsource the same. If you have different groups
picking their own management tools for the various functions, that's
where you'll be headed. Instead, collect requirements and look around.
You'll find that some do-all tools might not be the best in each
function, but they're generally the best in some functions and have an
added value of an integrated architecture. If traceability is
important, integration is important.
Configuration Management Process
It pays to understand your process. Don't just throw existing tools in
front of your team along with a handbook. Go through each part of your
CM process. Ask: is this the way the user really wants to be doing
this? The myth is that developers don't want tools and processes -
they just get in the way. The fact is that developers don't want tools
and processes that just get in the way. Most processes and tools get
in the way a bit but provide just enough payback to convince developers
to use them. Your processes and tools need to make sense. If they're
not adding value to the users, take another look. It's not good enough
to say: "we're a team, you're helping the downstream process by using
them". A great number of them aren't interested in downstream process
- which may be a separate problem. Good tools and processes should
benefit the users as well as the rest of the team.
Using change-based CM instead of file-based CM cuts down the number of steps
the developer must take, and supports a simpler user interface. But
not if the "change" concept is just glued onto a file-based CM tool.
Change-centric CM tools manage change without having to keep revisions
front-and-center. Similarly, an intelligent context-view mechanism
helps to simplify things.
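The change-centric idea can be made concrete with a small sketch. All names and states here are hypothetical, not any particular tool's: a context view is computed from a frozen baseline plus whole promoted changes, rather than from individually tracked file revisions:

```python
# Sketch of change-driven CM: a configuration is computed by overlaying
# promoted changes, as indivisible units, onto a frozen baseline.
from dataclasses import dataclass, field

# States considered "promoted or beyond"; illustrative names.
PROMOTED = {"promoted", "integrated"}

@dataclass
class Change:
    change_id: str
    state: str                                     # "open", "promoted", ...
    revisions: dict = field(default_factory=dict)  # file -> revision id

def compute_config(baseline: dict, changes: list) -> dict:
    """Overlay every promoted change onto a baseline (file -> revision)
    to yield the current configuration."""
    config = dict(baseline)
    for ch in changes:
        if ch.state in PROMOTED:
            config.update(ch.revisions)  # all of the change's revisions, or none
    return config

baseline = {"main.c": "1.4", "io.c": "2.1"}
changes = [
    Change("c101", "promoted", {"main.c": "1.5"}),
    Change("c102", "open",     {"io.c": "2.2"}),  # not promoted: excluded
]
print(compute_config(baseline, changes))
# {'main.c': '1.5', 'io.c': '2.1'}
```

Note that the half-finished change c102 simply never appears in anyone's view - there is no "dumping file revisions and stabilizing" round.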
Do your CM plans include the
overloading of branches so that, besides parallel development, branches
are used to track promotion states, releases, change packages, etc.? If
so, try to drop the baggage. Make sure your tools and processes deal
with items such as branches, changes, builds/releases, baselines, etc.
as first class objects on their own. A set of labels does not convert
a branch into a build definition. Builds have a state flow all of
their own, very different from branches. The same goes for changes.
And don't confuse a change with a feature. Any number of changes could
be required to implement a feature and any number of features might be
implemented by a change (hopefully not though). Similarly, you should
have a process managing your 100s of features which is distinct from
the process handling your 10s of 1000s of problem reports. They are
very different beasts.
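The first-class-object point can be illustrated with a toy model: each record type carries its own state flow, so a build promotes through build states and a change through change states, with no overloaded branches or labels. The flows shown are illustrative, not any tool's actual lifecycle:

```python
# Sketch: changes, builds, and problem reports as first-class objects,
# each with its own state flow. The flows are made-up examples.
STATE_FLOWS = {
    "change":  ["open", "in_review", "promoted", "integrated"],
    "build":   ["defined", "built", "tested", "released"],
    "problem": ["reported", "assigned", "fixed", "verified", "closed"],
}

class Record:
    def __init__(self, kind: str, name: str):
        self.kind, self.name = kind, name
        self.flow = STATE_FLOWS[kind]
        self.state = self.flow[0]

    def advance(self) -> str:
        """Promote to the next state in this record type's own flow."""
        i = self.flow.index(self.state)
        if i + 1 < len(self.flow):
            self.state = self.flow[i + 1]
        return self.state

b = Record("build", "rel_2.1_nightly")
b.advance()  # defined -> built
b.advance()  # built -> tested
print(b.state)  # tested
```

A set of labels on a branch can't capture this: the build's flow is different data with different transitions than the branch it was produced from.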
Look carefully at the main-trunk vs. trunk-per-stream
scenario. Although a "main" trunk sounds simpler, it
doesn't match reality and so is much more complex than a branch per
stream scenario. Wrestle with this problem ahead of time - don't
commit to one way only to find out too late you made a mistake. There
are many CM issues that have to be addressed, and they have to be
addressed objectively by looking at the way you expect things to happen.
Write your CM requirements down clearly. It's not that you need a way to
compare the features and problems fixed in one build to another. It's
that you need that at a touch of a button. You can't be expected to
make timely decisions if your information takes hours or days to come
by. I might challenge you to look back at some of the past CM
journals to identify better ways of doing things. This is generally
quite different from the way most projects do it. Do you want your CM
costs to be 5% or 15% of your project costs? That's quite a difference
off your bottom line.
A common high-level requirement we found in our initial requirements
gathering for building a CM tool was that the project needs to be able
to customize it to their own processes. Not a problem for some
solutions: they just throw in a compiler, a pencil and an eraser -
voila, it's customizable, NOT!
Your process is going to change
and evolve: as you understand your business better, as you go through
mergers, as you understand your development better. Make sure it's
easy to change. This is a precondition. It doesn't have to have a
voice recognition feature with artificial intelligence that converts
your tools and processes to match your verbal requests. But it does
have to support your processes and allow them to evolve. Data, user
interface, rules, triggers, permissions, roles. Don't settle for:
we'll figure out how to change it later. You want to avoid the mess of
configuring an unconfigurable tool. You don't want to become the tool
supplier, just the process supplier.
If you have to learn some
tools in order to create new user interfaces, etc., so be it. If you
don't, even better. Most tools will allow some level of customization
through an "administrative" GUI, while leaving the rest to your
compiler, pencil and eraser. Find out what capabilities lie on what
side of this fence. And beware of the open-ended questions such as:
Can you do automated approvals? The answer is invariably yes. Find out
instead how much effort it is to do automated approvals or, even better,
have them show you how it's done. If it's a 5-minute thing, great! If
it's a 5-month thing, not so great.
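The kind of data-driven customization described earlier can be sketched as rules stored as data and evaluated by a generic trigger. Here's a hypothetical auto-approval rule; the rule format and field names are my own illustration, not any product's:

```python
# Sketch of data-driven process customization: an approval rule expressed
# as data, evaluated by a generic trigger, rather than hard-coded logic.
RULES = [
    # When a change reaches "in_review" and touches few files, auto-approve.
    {"on_state": "in_review",
     "condition": lambda rec: len(rec["files"]) <= 2,
     "action": "promoted"},
]

def apply_triggers(record: dict) -> dict:
    """Run every matching rule against the record; rules are just data,
    so customizing the process means editing RULES, not the engine."""
    for rule in RULES:
        if record["state"] == rule["on_state"] and rule["condition"](record):
            record["state"] = rule["action"]
    return record

change = {"id": "c7", "state": "in_review", "files": ["io.c"]}
print(apply_triggers(change)["state"])  # promoted
```

The "5-minute thing" test amounts to asking whether adding such a rule is an edit to data like RULES, or a trip back to the compiler, pencil, and eraser.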
Another area of danger is multiple site management. There is often a tendency
to look at a version control tool that can support multiple sites and
say, OK, well if we use that tool our problems are solved. Not so.
Multiple site management means management of file versions, problem
reports, requirements, customer tracking data, documents, project
management tasks, etc. across multiple sites. If you've got multiple
tools doing the separate functions, expect this to be a major issue
unless they all have a common architecture for multiple site
management. A single integrated tool that supports the multiple site
management as a function of the common repository instead of on a
function by function basis will give you fewer worries.
Then there's the question: what is multiple site management? Do I split up
branches by site? What about projects, problems, requirements?
There's the distributed data solution, and then there's the replicated
data solution. There's also a distributed access to central data
solution. Look at these carefully. Make sure that you don't have to
put sensitive data on a contractor's site. Make sure that you can deal
with all of the ITAR Data Segregation requirements you need to.
We've looked at some of the details, and it's obvious that process and
tools are a key component of your CM planning. That means you need to do
your research up front. You don't have to do tool evaluation or
selection. You don't have to decide on CMM or RUP. You just have to
know what's available. What technology is available, how aggressive
can you make your objectives. You might have to decide between a high
capacity truck and a fast car, but you might find that there are next
generation solutions that give you the best of both. IT progress is
too rapid to assume that last decade's solutions are more or less the
same today. Would you look at last year's medical progress if you
needed a cure today? Spend the time to get informed. Visit the CM
Crossroads forums and get feedback on today's thinking. Agile wasn't
here yesterday; now it's all over the map. If you don't have time,
bring in some experts to help you. When you plan for CM, you're not
just doing another feature, you're building corporate backbone.
A Word on Databases
It's a no-brainer to go with a Relational DBMS for your CM/ALM solution,
right? There's no other standard. Well, I know of at least a couple
of other solutions that deserve consideration. The problem with a pure
relational solution is that it does not match the real world. In the
real world of CM and ALM there are hierarchical constructs: WBS, Source
Tree, Org Chart, Requirements Tree, to name a few. There is a need to
have one record point to an arbitrary number of other records:
requirement allocation, reasons for a change, files modified by a
change, include dependencies, product/component dependencies, etc.
Data records need to have a concept of versions. A file has several
branches, each branch has several revisions. The same goes for
documents, requirements, and perhaps other objects. With an RDBMS, the
tendency is to push the revision data out of the database and into the
file versioning tool's meta-data. All of a sudden, a simple database
operation to identify open revisions turns into a complex, inefficient
task of searching through the versioning tool's set of files.
When the database does not match the real world, you need a process to map
the real world objects onto RDBMS data, and you need another process to
map the data back to the real world. This requires effort, resulting
in shortcuts that shouldn't be taken. It also requires expertise,
meaning it's not easy to customize your tools and it's not easy to get
answers to your real world questions. Hopefully they're all pre-canned
in your solution so that you don't have to frequently turn to your
expert. Another effect of the real-world to database mismatch is one of
performance. You can't go to a requirement and ask for its
sub-requirements. Instead, you have to go through all requirements and
check whether each one has the given requirement as its parent. The
performance is orders of magnitude worse. But data indexing comes to
the rescue. So
now you have to (automatically) maintain index files which need
modification whenever the data is modified. You need to specify which
fields need to be indexed, making customization tasks more complex.
You have background processes constantly performing indexing.
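The parent-pointer mismatch is easy to see in miniature. With flat records, finding sub-requirements means scanning every record; an index restores direct access, but it's one more structure to maintain on every insert or move. All names here are illustrative:

```python
# Sketch of the hierarchy-vs-flat-records mismatch for a requirements tree.
reqs = {
    "R1":   {"parent": None},
    "R1.1": {"parent": "R1"},
    "R1.2": {"parent": "R1"},
    "R2":   {"parent": None},
}

def children_by_scan(parent):
    """O(n) per query: examine every record, as a flat table forces."""
    return sorted(r for r, rec in reqs.items() if rec["parent"] == parent)

# The index: parent -> children. Direct access, but it must be kept in
# sync with the data on every insert, delete, or re-parenting.
child_index = {}
for r, rec in reqs.items():
    child_index.setdefault(rec["parent"], []).append(r)

def children_by_index(parent):
    """Near-constant-time lookup, at the cost of index maintenance."""
    return sorted(child_index.get(parent, []))

print(children_by_scan("R1"))   # ['R1.1', 'R1.2']
print(children_by_index("R1"))  # ['R1.1', 'R1.2']
```

A repository whose native data model is hierarchical answers the second query directly, with no scan and no separately maintained index.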
Then there's the handling of "note" fields, such as problem descriptions,
running logs, and of file management. RDBMS solutions don't do a good
job here. You need to augment them. However, when you do, you lose
many of the capabilities that you started with because the augmented
tools don't share the same architecture as the RDBMS.
Don't be afraid of non-RDBMS repositories, provided they have the
necessary functionality - they offer a host of advantages. Just make
sure they are as good as or better in the areas of reporting,
reliability, scalability, data integrity, and performance.
What we've covered is not how to do CM planning, but rather how to prepare
for your CM planning exercise. Do some research and then set your
objectives high. Understand the tools and the processes out there.
Bring in help if you need it, but make sure the objectives are clear.
And pass your results on to the rest of the CM community - your
successes and your problems - as you go. There's a lot of expertise
available for free at CM Crossroads.
Joe Farah is the President and CEO of Neuma Technology. Prior to co-founding Neuma in 1990 and directing the development of CM+, Joe was Director of Software Architecture and Technology at Mitel, and in the 1970s a Development Manager at Nortel (Bell-Northern Research), where he developed the Program Library System (PLS), still heavily in use by Nortel's largest projects. A software developer since the late 1960s, Joe holds a B.A.Sc. degree in Engineering Science from the University of Toronto. You can contact Joe by email at email@example.com