Neuma Technology: CM+ Enterprise Software Configuration Management for Application Lifecycle Management



Neuma White Paper:

CM: THE NEXT GENERATION of Build Essentials

CM allows us to repeatably build a product that can be delivered to the customer. In the hardware world, the "build" process is called "manufacturing" and deployment is known as "shipping and installation". In the software world, our manufacturing is done through a build process, and our deployment is often automated, perhaps over the internet. Unlike hardware teams, however, software teams continually build and re-build the entire software product during development, and can deploy these builds locally for verification. So the build process is not just a manufacturing process, but a development process as well. Deployment, during development, may involve deployment to the workspace area for developer testing, to a central test area and/or to lab equipment. After that, there are still deployment considerations for verification, production certification and, finally, customers.

Build  is central to CM. It's critical to do it right.

A basic build capability is founded on two key fundamentals: the ability to reproduce the build and the ability to automate the build process. Without these two fundamentals, you're fighting an uphill battle. Reproducing the build implies that you have a CM system able to capture the build definition. Automation helps ensure that manual errors cannot creep into production. But this is just a basic build capability.
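To make the first fundamental concrete, here is a minimal sketch in Python of what "capturing the build definition" might look like. The function name, fields, and revision/tool identifiers are all hypothetical; the point is that a build is reproducible only if the source revision, toolchain versions, and options are recorded together, and a stable hash lets two builds be checked for equivalence at a glance.

```python
import hashlib
import json

def capture_build_definition(source_revision, tool_versions, options):
    """Record everything needed to reproduce a build."""
    definition = {
        "source_revision": source_revision,
        "tool_versions": dict(sorted(tool_versions.items())),
        "options": sorted(options),
    }
    # Hash a canonical serialization so the same inputs always
    # yield the same build identity.
    blob = json.dumps(definition, sort_keys=True).encode()
    definition["id"] = hashlib.sha256(blob).hexdigest()[:12]
    return definition

build = capture_build_definition(
    source_revision="rev-1432",                       # hypothetical
    tool_versions={"gcc": "12.2.0", "make": "4.4"},   # hypothetical
    options=["english", "debug-symbols"],
)
```

Because the identity is derived from the definition rather than assigned by hand, two builds produced from identical inputs are provably the same build.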

To move up the ladder, there's plenty more to be  done.  First of all, automation must be made available to all levels of  the team, from production down to developer.  Each member of the team  needs the same assurance that the right stuff is being built: developer,  integration, verification, production.

It's one thing to automate the build, but in a continuous or regular integration cycle, there can be a lot of work in deciding what's going into the build, especially on larger projects. So what's the secret to successful regular builds? Make sure you have quality going into the builds. To get a series of quality builds, it's important to ensure two things: start with a stable build, and ensure the changes going into the build are of high quality.

If you've got a half-baked build to start with, don't open the floodgates for other changes. First stabilize the build through intensive debugging and changes that only move the quality forward. Once you have a stable build, you're in a good position for the second requirement: high-quality changes.

High quality changes
High quality software  changes are a product of many factors:

  • Good people
  • Good  product architecture
  • Effective peer reviews
  • Feature and  integration testing prior to check-in
  • Ensuring the changes that go in  are approved

Good people are hard to come by, but good processes, with a good mentor, can make average developers very effective. A mix of software experience, application experience, objectivity, and strong design and software engineering skills is desirable.

Good product architecture is key. I remember in my very first job, I had one week to get a code generator debugged and working. I looked at the architecture and it just wasn't there. The only way I could make the deadline was to throw out the existing work and rebuild from scratch, though with a good deal of help from the work that had already been done. The tactic was successful. Now, you might not want to throw out your entire flight software or telecom system, but if there are components that are overly complex and lack good architecture, your changes are going to cause problems. Your builds will have bugs that need to be ironed out before restoring a stable build, the first premise for the next build.

Effective peer reviews will eliminate a significant percentage of potential build problems, while at the same time helping to ensure that the architecture remains solid. It's important to have a good peer review process, but also a key developer involved, one who understands the architecture and the wider issues involved. Peer reviews are most effective when they become a teaching tool - it's critical that new developers are aware of this going in.

Your process should not allow code to be checked in unless it has been tested. One of the most effective ways to ensure this is to track the test scripts, whether automated or manual, against the change, along with the test results. These should be reviewed as part of the peer review process. Another key practice is to include a demo as part of the peer review.
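As a sketch of what "tracking tests against the change" might look like, the record below ties a change package to its test scripts and results so a reviewer can see at a glance that testing happened before check-in. All identifiers, file names, and script paths here are invented for illustration.

```python
# Illustrative change record (ids and paths are hypothetical).
change_record = {
    "change_id": "fix-1021",
    "files": ["router/parse.c"],
    "tests": [
        {"script": "tests/parse_smoke.sh", "kind": "automated", "result": "pass"},
        {"script": "manual boundary-input checklist", "kind": "manual", "result": "pass"},
    ],
}

def ready_for_review(record):
    """A change qualifies for peer review only if it has at least one
    tracked test and every tracked test passed."""
    return bool(record["tests"]) and all(
        t["result"] == "pass" for t in record["tests"]
    )
```

A gate like this, enforced by the CM tool rather than by convention, is what keeps untested code from reaching the integration stream.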

Finally, it's important that there is control over what is going to go into a build. High-impact and high-risk changes should be accepted early in the process, before the full team begins working on the release cycle. Tight control must be exercised as release dates approach. It's fine to have a developer who can fix 30 problems a week, but if those are all low-priority problems and we're nearing the release date, the result may not be desirable. Typically, somewhere between 10% and 30% of problem fixes will introduce undesirable side effects. If only 5 of the problems really needed to be fixed for the release, there's a much better chance of avoiding a severe problem. Still, if it's earlier in the release cycle, there's time to absorb such side effects and fix the problems before they go out the door.
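The arithmetic behind that advice is worth spelling out. Taking an illustrative midpoint of the 10%-30% side-effect range:

```python
# Rough expected-regression arithmetic: if 10%-30% of fixes introduce
# side effects, accepting all 30 fixes near a release risks several
# regressions, while accepting only the 5 that really matter risks
# about one.
side_effect_rate = 0.20                    # illustrative midpoint
expected_if_all = 30 * side_effect_rate    # roughly 6 expected regressions
expected_if_few = 5 * side_effect_rate     # roughly 1 expected regression
```

Six likely regressions on the eve of a release versus one is the difference the gatekeeping buys you.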

When you're controlling what goes into a build, it is really ineffective to do so at build time. This will likely create a minor revolt among the developers, who have worked hard to complete their work. Instead, you want to ensure that change control starts prior to assignment of changes to the design team members. Ideally, developers begin work by selecting tasks/problems/features which have been approved and assigned to them. Then there's no rejection of unwanted functionality or problem fixes.

Perhaps an even more critical side effect of having a  change control process up front is that work flows from the developer through  the process and into integration in pretty much the same order.  This  reduces the need for rollbacks, branches and merges, and allows you to adopt a  more optimized promotion scheme, and ideally one where you don't even need to  create separate promotion branches.  This in turn makes it easier to  automate selection of changes, as pre-approved changes can flow through the  system as soon as they are successfully peer reviewed.
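A promotion scheme of this kind can be sketched as a simple ordered state machine. The state names below are invented for illustration; the point is that a pre-approved change only ever moves forward through fixed states, in order, which is what eliminates most rollbacks and promotion branches.

```python
# Ordered promotion states (names are illustrative, not CM+ terminology).
STATES = ["approved", "in-progress", "peer-reviewed", "integrated"]

class Change:
    def __init__(self, change_id):
        self.change_id = change_id
        self.state = "approved"   # work starts from an approved task

    def promote(self):
        """Advance the change one state forward; never backward."""
        i = STATES.index(self.state)
        if i + 1 >= len(STATES):
            raise ValueError(f"{self.change_id} is already integrated")
        self.state = STATES[i + 1]
        return self.state

c = Change("fix-1021")          # hypothetical change id
c.promote()                     # -> "in-progress"
c.promote()                     # -> "peer-reviewed"
c.promote()                     # -> "integrated"
```

Because there is no backward transition, the order work leaves development is the order it reaches integration, exactly the property the paragraph above relies on.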

Manage the Superset Baseline, Build Subsets
Many organizations create and manage a baseline for every build. While this provides for full reproducibility, managing a large number of baselines can be complex. Typically a product has a number of variants. While it is important to try to move variant configuration into run-time, or at least deployment, there are a number of cases where this cannot be done. In this case, the best strategy is to manage a superset from a baseline perspective, while building subsets. It's natural to configure a product from a subset of the baseline. Instead of dozens of baselines, a single baseline is used for management and reference purposes. Each build can be customized, as long as the CM tool tracks the build definition in terms of the baseline.

For example: build the baseline with the English option; build the baseline with European standards; build the baseline with a specific customization for NASA. Tracking a build involves identifying the baseline, identifying the option tags, and identifying any additional change packages that need to be added to it. As emergency fixes are added, new builds can easily be readied just by adding in the change packages for those fixes. There is no need to specifically commission a new baseline.
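The three-part build definition described above (baseline + option tags + change packages) can be sketched in a few lines. The baseline names, option tags, and change ids below are hypothetical:

```python
def define_build(baseline, options=(), extra_changes=()):
    """A build is tracked as a reference to the superset baseline,
    plus option tags selecting a subset, plus any change packages
    applied on top (e.g. emergency fixes)."""
    return {
        "baseline": baseline,
        "options": list(options),
        "changes": list(extra_changes),
    }

# One baseline, many builds:
english = define_build("baseline-5.2", options=["english"])
european = define_build("baseline-5.2", options=["european-standards"])
nasa = define_build("baseline-5.2", options=["nasa-custom"])

# An emergency fix yields a new build without a new baseline,
# just by adding a change package:
patched = define_build("baseline-5.2", options=["english"],
                       extra_changes=["fix-1033"])
```

All four builds reference the same baseline, which is what keeps management and reference work down to a single artifact.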

At Neuma, we create baselines occasionally, and then  create a series of regular builds off of each baseline, not just for customer  delivery, but for "nightly builds".   It's easy to look at the  difference between builds just by looking at the changes that have been added  to each.  It's like building a new house by saying: "I want it just like  that one but with these changes" instead of "Here are the complete plans for  my house".  Both will work, but in the former case we might say:   "Let's bring the team in that built 'that' house".

What's in a Build
This leads us to a fundamental requirement. Although it matters less how you track a build (i.e. baseline + changes + options vs. new baseline), it is crucial that you can look at two builds and ask: What new features are there? What problems were fixed? How does it differ from what the customer currently has? What level of retesting is necessary? The Sikorsky S-92 helicopter was grounded with a requirement to replace the titanium studs in the gear box mounting with steel studs. No new baseline has to be established to correct the hardware, but a new build definition is needed for new craft. It may come from a new baseline definition, or from a revised build process that says: apply the following changes to the existing baseline.

The important thing in any case is that we can ask: what's in this build that's not in that build? I like a CM tool where I can take the customer's current build and then compare it interactively to various candidate delivery builds, just by scrolling through the candidate build list and then zooming in on the differences list for details. If instead I have to commission a team to describe the differences between every two builds, I've got a working, but painfully slow, process.
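When builds are tracked against a common baseline as change lists, that comparison reduces to set arithmetic. A minimal sketch (build records and change ids are hypothetical, and both builds are assumed to share the same baseline):

```python
def compare_builds(build_a, build_b):
    """What's in build_b that's not in build_a, and vice versa,
    expressed as change packages."""
    a, b = set(build_a["changes"]), set(build_b["changes"])
    return {"added": sorted(b - a), "removed": sorted(a - b)}

customer = {"baseline": "baseline-5.2",
            "changes": ["fix-1001", "fix-1005"]}
candidate = {"baseline": "baseline-5.2",
             "changes": ["fix-1001", "fix-1005", "fix-1021", "feat-88"]}

diff = compare_builds(customer, candidate)
# diff["added"] lists the change packages the customer does not yet
# have; zooming in on each change gives the details.
```

This is why tracking "baseline + changes" pays off at delivery time: the what's-new question is answered by the CM data itself, not by a team commissioned to reconstruct it.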

Comparing builds is not just necessary for releases. If a key feature is noticed to have stopped working and I know it was working a month ago, I want to trace the changes build by build to see what functionality potentially impacted the feature. The more easily I can do this, the better. Traceability and zoom-in are critical. Similarly, these features will come in handy if a customer notices that something has stopped working. The ability for an organization to respond the same day, as opposed to days or weeks down the road, will make the difference between a happy customer and one that might be ready to take you to court.

Getting There

Your current build process may be far from ideal. But if you can describe where you'd ideally like it to be, you can get there. There are numerous tools to help. There's plenty of expertise available. The demand for quicker turnaround continues to grow, especially as competitive pressures continue to squeeze profits. Make sure your processes are moving to the next generation, and if they're already there, keep on moving.


Joe Farah is the President and CEO of Neuma Technology. Prior to co-founding Neuma in 1990 and directing the development of CM+, Joe was Director of Software Architecture and Technology at Mitel, and in the 1970s a Development Manager at Nortel (Bell-Northern Research), where he developed the Program Library System (PLS) still heavily in use by Nortel's largest projects. A software developer since the late 1960s, Joe holds a B.A.Sc. degree in Engineering Science from the University of Toronto. You can contact Joe by email at farah@neuma.com.