Neuma White Paper:
CM: THE NEXT GENERATION - Agile Configuration Management
What is agile CM? If you think it's doing the minimal amount of CM, think again. Instead, it's minimizing and streamlining the work to do all of the CM tasks that are necessary. It adapts to changing CM requirements fairly easily. Agile CM doesn't just happen - it's a combination of good CM process, good CM tools, and CM automation. If you fall short on any of these, your CM process will not be very agile.
To start, let's visit the obvious. An agile process built on file-based CM, as opposed to change-based CM, may make the up-front tasks easier for some developers, but overall it will do a lot of damage to any attempt to make the entire process agile. Promoting files, rather than changes, is both unnatural and error-prone. It complicates the process, forcing shortcuts. Often it is left to the developer to ensure that everything flows properly.
Changes really need to move through the process as units. This is not to say that tasks, features or issues need to - just the changes themselves. If I go to implement the first part of a feature by changing 4 files, I really don't want someone to try to move any less than those 4 files through the process. Nor do I want to have to specify dependencies on other changes in terms of files. I don't even want to have to know what files are involved in the dependent changes. And as a CM manager, I'm much happier promoting a dozen changes than 47 files.
And change-based CM simplifies peer reviews. A simple change identifier is all that's needed to identify code for a review. Most change-based CM tools will allow you to generate a code delta report from this identifier. Some will even let you track peer review comments against the change so that the comments are not lost in the shuffle.
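To make the idea concrete, here is a minimal sketch in Python of what a change-based record might look like. The `Change` class, its fields, and `delta_report` are hypothetical illustrations, not the API of any particular CM tool: the point is that the change identifier alone is enough to recover the full set of file revisions for review, and review comments live on the change itself.

```python
from dataclasses import dataclass, field

@dataclass
class Change:
    """A change groups all the file revisions that implement one unit of work."""
    change_id: str
    description: str
    file_revisions: dict                      # path -> (old_rev, new_rev)
    review_comments: list = field(default_factory=list)

def delta_report(change: Change) -> str:
    """Generate a reviewable delta summary from nothing but the change record."""
    lines = [f"Change {change.change_id}: {change.description}"]
    for path, (old, new) in sorted(change.file_revisions.items()):
        lines.append(f"  {path}: r{old} -> r{new}")
    return "\n".join(lines)
```

Because the four files of a change travel as one record, promoting "the change" can never promote three of the four files by accident.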
Promotion Support Without the Blockage
You've got a choice: you either adopt a branch-based promotion model or you hold up check-in operations until the build has been verified against the appropriate criteria. At least that's what many will tell you, but don't believe it. Both choices contribute to a non-agile process.
In a branch-based promotion model, merging has to be done all the way up the model, and the question arises of whether that merging is a developer task or a CM manager task. This in turn tends to result in a compromised model in an effort to minimize merging.
If you hold up check-in operations, not only do you get many more parallel check-outs, with the resulting merging (and, hopefully, re-testing), but when merging and testing are done on the main branch, the changes are not always still current in the minds of the developers. This compromises the quality of builds, which delays the rapid iteration goals of your agile process.
So what's the solution? It's two-fold. First, realize that, especially in an agile environment, almost all changes are going to be checked in in an order that is perfectly acceptable for promotion. The process should be optimized for this behavior. The branch promotion model instead optimizes so that the worst case can be dealt with easily. However, the worst case rarely happens, and you end up repeating merge operations almost mindlessly - and heaven help you if the merges between different promotion levels differ.
Secondly, you need process/technology that allows you to promote a change by changing its (promotion) status and nothing more. You also want it to make you aware of any dependency violations when you promote, whether these have been explicitly stated or implicitly inferred. Your tool technology should allow you to view baselines based on change promotion states. It should allow automated baseline definitions in the same manner. Now, this is not a trivial task for a tool. However, it is possible to automate these tasks so that your views and your baseline definitions are provided with little or no human intervention (other than a view selection).
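The mechanics can be sketched in a few lines of Python. The promotion level names, the `depends_on` field, and both functions are illustrative assumptions, not any tool's actual interface; the sketch shows promotion as a pure status change, a dependency check that blocks a violating promotion, and a baseline derived directly from promotion states.

```python
# Promotion levels in ascending order; the names are illustrative only.
LEVELS = ["checked_in", "integration", "system_test", "released"]

class PromotionError(Exception):
    pass

def promote(changes, change_id, target_level):
    """Promote a change by updating its status field - nothing else moves.
    Refuse if any declared dependency still sits below the target level."""
    change = changes[change_id]
    target = LEVELS.index(target_level)
    for dep in change.get("depends_on", []):
        if LEVELS.index(changes[dep]["level"]) < target:
            raise PromotionError(
                f"{change_id} depends on {dep}, still at {changes[dep]['level']}")
    change["level"] = target_level

def baseline(changes, level):
    """A baseline view: every change at or above the given promotion level."""
    floor = LEVELS.index(level)
    return sorted(cid for cid, c in changes.items()
                  if LEVELS.index(c["level"]) >= floor)
```

Note that no merge happens anywhere in `promote` - the view at each level is computed from the statuses, which is exactly what makes promotion cheap.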
Stream-based Architecture and Sharing Revisions Across Streams
Many large CM shops that have not switched to a stream-based main branch model (from a single main branch model) will debate the merits of each. Although the "single main" model looks simpler - just one main branch where everything goes - it is not. Everything does not belong in the same main branch. The natural process is to start working on a release when resources permit and to continue to support it until its retirement. The single main branch model forces you to create branches for work done before the main branch switches to the next release, and then to maintain a new set of branches after the switch to the subsequent release. Each release has three different processes to follow. Again the pressure to simplify causes short-circuiting of the process, such as not checking in next-release changes until the next release is available in the "main".
The main branch per stream model is simpler and more natural. Each stream has its own branch. The stream process always works the same, both across the life of a stream, and from stream to stream. This eliminates any artificial pushes to close off one stream or open up a new one. Instead, a bit of common sense prevails - don't do widespread changes on your next stream before your current one has started to settle down. This will help minimize parallel changes.
To further minimize parallel changes, most modern CM tools allow you to share revisions from one branch to another. If release 2 uses the release 1 versions of some files, then fixes to those files in release 1 can be automatically propagated to release 2. Where this is not the case, the CM tool should allow you to track which fixes have to be applied to which streams.
In an ideal CM tool, you don't have to specify branch/stream history for the tool to use when automatically sharing revisions across branches. This is inferred automatically by the CM tool. The CM tool should be able to use its product history to identify when and from where each branch should be made. It should also allow you to select a product, stream and promotion level and then automatically give you the correct view, without having to write a view specification. When you have a process/toolset with this level of support, your agile CM process becomes more agile, not only within each development stream, but also across development streams, allowing developers to easily switch from one view to another at will.
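One way to picture revision sharing is as revision lookup that falls back through the stream ancestry. The sketch below is a simplified Python model under assumed data structures (a `parent` link and a per-stream `revisions` map), not how any particular tool stores its history; it shows why a release 1 fix to an unmodified file is visible in release 2 with no propagation step at all.

```python
def stream_view(streams, stream, path):
    """Resolve which revision of `path` a given stream sees: the stream's own
    revision if it has one, otherwise the revision inherited from its parent
    stream, walking up the inferred stream history."""
    s = stream
    while s is not None:
        revisions = streams[s]["revisions"]
        if path in revisions:
            return revisions[path]
        s = streams[s]["parent"]
    return None
```

Selecting a view then reduces to naming the stream (and, in a fuller model, a promotion level) - no hand-written view specification is needed.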
We've already discussed how automation could help to define baselines and views. But one of the real potential areas for automation is with builds. It's not sufficient in an agile environment to have to always rely on system (i.e. build manager generated) builds. Even if the build manager rebuilds every hour or two, your development productivity will likely be significantly reduced. I remember when Borland Pascal first hit the market and took a complex or long compile/build cycle and turned it into a few keystrokes and typically a few seconds to a few minutes. Great productivity strides were made. These would not have been made if the build capabilities were not in the hands of the developers.
Developers need to turn their work around rapidly. If not, there will be resistance to adapting requirements mid-stream. It will take significantly longer to run unit tests as each system build iteration reveals only a small portion of the developer bugs. In the hands of the developer, 30 or 40 iterations in a day is not uncommon. This in turn supports iterative feature development and rapid prototyping and exploration.
If your CM tool and process requires builds to be done by system build experts, I recommend some changes. Developer workstations are plenty big enough to run most build operations that are of a concern to the developer. If not, then you likely have some more fundamental architectural design problems. The CM tool needs to support developer based builds. This will involve some level of build automation. Ideally, the same process used to do nightly builds should be accessible for private developer builds. So automating the nightly build cycle is a good first step.
In some cases, the biggest obstacle is where the build process involves coordination of dozens of "make" files. You may want to review your "make" process. There are general guidelines for making this easier, and there are even tools which will automatically produce "make" files as part of the build operation. So if someone has added a new file or a new dependency, just the act of performing a new build will incorporate such changes into the make file.
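The core of such a make-file generator is simple enough to sketch. The Python below is a toy illustration, not a production tool: it scans C sources for local `#include` directives and emits a dependency rule per source file, so re-running it at build time picks up new files and new dependencies with no manual makefile editing.

```python
import re

# Matches local includes of the form: #include "header.h"
INCLUDE_RE = re.compile(r'#include\s+"([^"]+)"')

def make_rules(sources):
    """Given {filename: contents}, emit one make dependency rule per .c file
    by scanning its #include lines."""
    rules = []
    for name in sorted(sources):
        if not name.endswith(".c"):
            continue
        deps = sorted(set(INCLUDE_RE.findall(sources[name])))
        obj = name[:-2] + ".o"
        rules.append((obj + ": " + name + " " + " ".join(deps)).rstrip())
    return "\n".join(rules)
```

Real compilers offer similar dependency output (e.g. gcc's -M family of options), which nightly and developer builds can share so that both use the identical dependency picture.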
The result of developer-based builds - and this is obvious to those who use them - is that your integration efforts have far less risk. This in turn means that your integration cycles are much quicker and hence can be more frequent.
An Integrated Environment
The inner development environment is only part of the story. For an agile process to be most effective, it must allow the product management team to work effectively. Product managers need to see how requirements are being met, what testing success is like, and how developers are coping with changing requirements. Similarly, verification teams need a clear and accurate view of what is ready to test in a particular build and what is not. An accurate picture will be facilitated by an end-to-end integrated environment.
An integrated environment, sharing a common data repository across all management applications, allows the product manager to review bottlenecks and re-assign priorities continuously. For an agile process, project management is generally priority/task driven rather than date/schedule driven. As requirements change, so do priorities. As potential customers change, or as support requirements change, so do priorities. A single environment which allows easy prioritization and assignment of feature tasks and problem reports will result in a more responsive team. As well, data will generally be more up to date and traceable, allowing more accuracy in decision making.
Taking things one step further, if your CM tool allows you to rapidly calculate differences between builds - not in terms of files or delta reports, but in terms of problems fixed and features addressed - it will be much easier to present customers with incremental upgrade documentation. This is especially important in non-traditional applications, such as web-based content and software. For the more traditional side of things, support for a series of rapid beta releases is facilitated.
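Because each build is a set of changes, and each change is traced to problems and features, this kind of build delta is a straightforward set difference. The Python sketch below assumes hypothetical record shapes (a per-build change list and `fixes`/`features` fields on each change); it is the traceability data, not the tool, that makes the report cheap.

```python
def build_delta(builds, changes, old_build, new_build):
    """Describe what's new in `new_build` relative to `old_build` in terms of
    problems fixed and features addressed, rather than files changed."""
    extra = [c for c in builds[new_build] if c not in set(builds[old_build])]
    return {
        "problems_fixed": sorted({p for c in extra for p in changes[c]["fixes"]}),
        "features": sorted({f for c in extra for f in changes[c]["features"]}),
    }
```

The same query, run between the last beta and the current build, yields the release notes for the next beta with no manual bookkeeping.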
Agile Multiple Site CM
One more item needs to be addressed. An agile CM environment across multiple sites should not be placing burdensome administrative or CM tasks on your shoulders. Instead, multiple sites should operate as if they were a single site. There are varying technologies which will support this scenario.
The goal is to keep it simple. If I can move from one site to another without having to export data, re-align site information or perform other admin tasks, I've got an agile multiple site solution. If I can easily add new sites, it's all the more agile.
Agile CM - It's Worth the Effort
So Agile CM may require changes, not only to your processes, but to your way of thinking. Automation will help to eliminate quality bumps that often cost a day or two to resolve, while at the same time helping with continuous process improvement. Improvements in your technology will facilitate process changes. Transition from your current process will always meet resistance, but in the end, all will acknowledge that it was worth the effort, as you begin to see improved team productivity and communication.
Joe Farah is the President and CEO of Neuma Technology. Prior to co-founding Neuma in 1990 and directing the development of CM+, Joe was Director of Software Architecture and Technology at Mitel, and in the 1970s a Development Manager at Nortel (Bell-Northern Research) where he developed the Program Library System (PLS) still heavily in use by Nortel's largest projects. A software developer since the late 1960s, Joe holds a B.A.Sc. degree in Engineering Science from the University of Toronto. You can contact Joe by email at email@example.com