Neuma White Paper:

CM: THE NEXT GENERATION of ALM Comes From Cutting the Cord

This month I thought I'd write an article from a vendor's perspective.  As vendors, what do we need to do to help the CM/ALM industry move forward?  This in itself is a loaded question, because "forward" assumes a starting point, and the industry is not at a common starting point.  There are mostly second-generation solutions out there, with many vendors trying to build third-generation functionality onto them.

There's a problem there.  Elon Musk may have said it best after the splashdown of his Dragon capsule this month: when you assume existing technologies, you assume their cost structures as well.  Trying to build the next-generation space access vehicle from components of the Space Shuttle (i.e., STS) may save some costs in some parts of the development, but overall, the solution will be relegated to the cost structure of STS.  The Falcon/Dragon technology of SpaceX started with a few key principles that have allowed SpaceX to set itself apart from the others:

  1. Build what we need internally (keep control of our destiny)

  2. Use common technology across the subsystems and products

  3. Perform extensive testing, with heavy automation

  4. Create the ability to react quickly to problems

  5. Keep administration and overhead low

  6. Re-use components where possible - that is, re-use the rocket/capsule for a second launch

Elon, diplomatically, also said that the NASA-commercial partnership works.  They've proven it.  That's true.  It does work - but only with the right commercial partners, as we're sure to see in the future.  NASA, to their credit, recognizes that not only do they have a lot to teach SpaceX, they also have a lot to learn from SpaceX.  Except for a launch escape system (which the shuttle does not have either), the Falcon/Dragon is very close to being able to support manned flight - at a small fraction of the cost of any previous US manned space program.

So how does this apply to ALM?  Well, from a vendor perspective, it's not easy to just throw everything away and start over again.  That doesn't mean we can't offer more and more functionality and capability.  But it does mean that without a fresh start, what we produce is constrained, in cost structure and architecture.

Let's look to the past for a few examples.

In the 1990 time frame, Atria took the experience of the Apollo DSEE version control model and applied it to a new product called ClearCase.  Had Atria attempted to build out from Apollo DSEE, they would have carved out a nice little niche market rather than taking the market by storm.  ClearCase has been dramatically successful, though its 20-year-old architecture is showing signs of constraint.  Still, a little polish here, some new infrastructure there, and a nice product evolves in 2010 (RTC, Rational Team Concert).  Is it a full 3rd generation CM/ALM product?  Yes and no.  Lots of 3G capability in the end-user experience, lots of 2G on the back end.  But that's because it is somewhat constrained.

In the mid-90s, Perforce started from scratch with a new product.  Very successful again.  Why?  Primarily because they were unconstrained in their approach.  They were able to say "we don't want the admin headaches that other tools show" and "we can't have the performance issues characteristic of the leading vendors".  With those goals in mind, they successfully created their CM product.

In the '70s and '80s, yours truly produced some very capable 2G CM/ALM tools for Nortel (then Bell-Northern Research) and Mitel, both large Ottawa telecom companies.  When an attempt in 1989 to acquire the Mitel ALM technology failed, Neuma started to create a new product from scratch.  However, like Elon Musk at SpaceX, Neuma looked at the full industry requirements for ALM and decided not to build a CM tool or an ALM tool, as had been done at BNR and Mitel, but instead to build an architecture that could endure.  As a result, Neuma moved forward, with some mistakes along the way, and at the turn of the century came out with a full 3G CM/ALM tool, building from there to the recently released 4G tool.

Looking back, Neuma was able to do this by following some specific guiding principles:

  1. Build what we need internally

  2. Use common technology across the subsystems and products

  3. Extensive testing, with heavy automation

  4. Ability to react quickly to problems

  5. Low administration

  6. Ability to do extensive, re-usable customization easily

Sound familiar, at least for the most part?  A recent general forum question asked which is better:  best-of-breed tools integrated together, or an integrated ALM solution?  I'll go one step further and break down "integrated ALM solution" into two flavors:  a common-vendor integrated solution vs. a common-core integrated solution.

It's a good time to learn a lesson from Elon Musk and SpaceX.  We're heading into a new year shortly.  If we want to continue to deliver next generation tools to the market, as vendors, we need to focus on a few things going "forward".

  • Letting customers define "best"-of-breed

  • Bringing costs down for ALM tools

  • Focus on common core technology

  • Rapid response to change requests

  • App-accessible ALM functionality

A bit of explanation follows.

Letting customers define "best"-of-breed
The number one requirement of an ALM tool is to support an organization's process the way they want.  Neuma discovered this when it did its market research in 1990.  It's the same today.  A best-of-breed tool is not defined by the tool's capabilities; it is defined by the customer's requirements.

Each customer is going to have different requirements.  The ALM tool must be able to support these.  By all means provide guidelines - don't let them do file-based CM when change-based CM is so obviously superior in all cases. (I'm talking software CM here - this does not necessarily apply as clearly elsewhere.)  But make sure you can support the customer's process.
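
To make that distinction concrete, here's a minimal Python sketch - purely illustrative, not CM+ internals, with all class and field names invented.  In file-based CM each file revision travels alone; in change-based CM the unit of record is the change: a named group of file revisions tied to one reason, promoted or rejected as a whole.

    # Minimal sketch: change-based CM groups file revisions under one
    # "change" record tied to a single reason, so the change is promoted
    # as a unit. (Illustrative names only - not any vendor's internals.)

    from dataclasses import dataclass, field

    @dataclass
    class FileRevision:
        path: str
        revision: int

    @dataclass
    class Change:
        change_id: str                 # e.g., a problem report or feature id
        reason: str                    # why these revisions exist
        revisions: list = field(default_factory=list)
        status: str = "open"           # open -> promoted

        def add(self, rev: FileRevision):
            self.revisions.append(rev)

        def promote(self):
            # The whole change moves together; you can never promote half
            # a bug fix - the classic file-based CM failure mode.
            self.status = "promoted"

    fix = Change("CR-1042", "Null pointer in session teardown")
    fix.add(FileRevision("src/session.c", 17))
    fix.add(FileRevision("src/session.h", 5))
    fix.promote()    # both revisions promoted together, traceable to CR-1042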

No problem - we'll just give them a compiler, or Perl, and they can do anything.  OK, we'll throw in a GUI drawing tool.  And maybe an RDBMS.  That should do it - except for the word processor for documentation changes.

That certainly lets the customer do what they want.  The problem is that it's too costly for the customer to do what they want.  Not a problem - we'll make expertise available at a reasonable rate.  And this works, provided the company does not go broke first and has plenty of time to get the solution in place.  Unfortunately, I've seen first hand a company that went broke doing just that: it spent too much trying to get the solution in place and ran out of time before it did.

If you want to let a customer define the "best"-of-breed tool, they must have very high-level tools that allow them to do so, not in years or months, but in days or hours.  Is that possible?  In 1970, IBM would have said no if asked whether users could build their own computer to meet their needs in days or hours.  Now, they can log into a Dell (or other vendor's) site, select a starting configuration, change options, base software, processors, etc., and in a few days have their machine delivered to their door.

Yeah, but computer components/features are much more clearly defined than those of CM and ALM.  Precisely.  That's the issue.  CM and ALM components/features need to be much more clearly defined so that infrastructures can be built, just as Dell did, to create the product quickly and easily.  One of the reasons Neuma was successful in creating 4G CM/ALM is that it focused on the infrastructure, making it easy to let customers define "best" in their own terms.
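
As a rough illustration of what "clearly defined" buys you - a hypothetical sketch, with every component name and option invented - a Dell-style configurator for ALM turns tool assembly into a validated selection over a catalog rather than a custom build:

    # Hypothetical catalog of well-defined ALM components and options.
    # If the definitions are crisp, assembling a tool configuration is a
    # validated selection, not a months-long customization project.

    CATALOG = {
        "workflow": {"simple", "review-gated", "multi-level-approval"},
        "cm_model": {"change-based", "baseline-only"},
        "tracking": {"problems", "problems+features", "full-traceability"},
        "sites":    {"single", "replicated", "partitioned"},
    }

    def configure(**choices):
        """Validate a customer's selections against the catalog."""
        for component, option in choices.items():
            if option not in CATALOG.get(component, set()):
                raise ValueError(f"{component}={option!r} is not a defined option")
        return dict(choices)

    # A customer-defined "best of breed", in hours rather than months:
    config = configure(workflow="review-gated", cm_model="change-based",
                       tracking="full-traceability", sites="replicated")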

Bringing Costs Down for ALM Tools

OK.  Here's a no-brainer.  Need a low-cost ALM tool?  Open source.  Free.  A no-cost tool.  That fits the budget, right?  Well, in a sense - as long as we restrict our definition of cost to the cost of acquiring the tool.  What about the costs for:

  • Training

  • Data Import (and, eventually, Export)

  • Maintenance

  • Performing Upgrades

  • Customization and Process Support

  • Integration with other tools and data sources

  • Additional tool components not included

  • Administration

  • Multiple site operation

  • Disaster recovery and backups

  • Security

  • Down time costs

  • Productivity for each end-user role, including communication productivity

  • Hardware/network platforms/performance

  • Repository and Process Engine infrastructure

Want to add some to the list?  Be my guest.  Yes, it is important that license acquisition not cost you an arm and a leg.  But training is typically an even bigger cost.  In some cases administration is an equal cost.  And customization costs run from not-as-much to out-of-the-ballpark, not to mention that such customizations must then be supported and must survive upgrades.  OK, then there's the integration of a few tools, which is fine - a one-time cost - as long as the tools never change.  Multiple site - did you mean for the source code or for the problem report database?  Oh yeah, there are a few other cases too.  And consistent backups - which may not be a problem if your multiple-site solution is just replication of everything everywhere.

The picture is clear enough.  If the goals going in are not to reduce all of these costs, we'll inherit the cost structure of the legacy pieces, even if we use Open Source software.

Focus on Common Core Technology
This is certainly an approach that will help cut costs.  If we can use the same process engine, database, multiple-site capability, and customization technology for all of our ALM tools, we'll reduce training costs, administration costs, and a bunch of others.

At the same time, we can focus on the core capabilities - enabling better reporting, traceability navigation, dashboard generation, advanced data management, reliability, etc.  All of the components of the ALM suite can benefit significantly from each significant advance.  And we can afford to spend time making each capability the best, because it is helping all of the tool components, making it cost effective to do so.

Better yet, we won't get different technology variants of the same problems in each of the tools.  One problem, one fix.  And the integration of the separate tools becomes trivial because the common core components are already integrated - so it's more a matter of user interface consistency and good data schema for providing the traceability we need.
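
A minimal sketch of that last point, with invented record kinds and relation names (not a real CM+ schema): when every ALM record type lives in one store with one link structure, traceability is a query, not an integration project.

    # One shared store for every ALM "tool": requirements, changes,
    # builds. Traceability links are plain references within it.
    # (Record kinds and relation names are invented for illustration.)

    records = {}    # id -> record attributes
    links = []      # (from_id, relation, to_id)

    def add(record_id, kind, **attrs):
        records[record_id] = {"kind": kind, **attrs}

    def link(src, relation, dst):
        links.append((src, relation, dst))

    add("REQ-7", "requirement", title="Support offline approvals")
    add("CR-91", "change",      title="Add approval queue cache")
    add("BLD-3", "build",       label="release-2.4")

    link("CR-91", "implements", "REQ-7")
    link("BLD-3", "contains",   "CR-91")

    def trace(record_id, relation):
        """Follow one relation backwards: e.g., which builds contain a change."""
        return [s for (s, r, d) in links if r == relation and d == record_id]

    print(trace("CR-91", "contains"))    # -> ['BLD-3']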

Rapid Response to Change Requests

Customers are going to want changes. They're going to find problems.  We need to be responsive to these.  It's no longer good enough to say "we'll look at that for the next major release".  It's not even good enough if we replace "major" with "minor".  Customers want changes now.

Changes will fall into two main categories:  those that can be done through customization of the existing tool release, and those that need changes to the tool itself.

That's very straightforward.  So how do we move forward here?  Neuma claims that more than 95% of its change requests can be handled in the existing tool release, usually through a simple email exchange or even over the phone.  And most problems can be worked around easily.  Most vendors have some level of this capability, but not at a 95% level.

The goals are two-fold:  move as many of the "need new release" changes into the "can be done in current release" category;  and change the "can be done" to "easy to do".  There is wide variation in the industry.  I've seen changes that take weeks on one system but minutes on another.  Which do you think costs more?  Which customer was happier with the response?

Because it is easy to change your process, the ALM tool architecture must make it just as easy to change the tool to support it.  Whether it's terminology, triggers, state-flow, user interface, or even the customization tools themselves, it should be easy to make changes so that these work better.  Don't think of it as giving away your customization services.  Think of it as having the ability to do more with a given set of customization services.  Because if you don't, someone else will.  This has to be the vendor's attitude.
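
One way to picture this - a hedged sketch, not any vendor's actual mechanism, with hypothetical record types, states, and transitions: if the state-flow lives in data, a process change is an edit to a table, not a new tool release.

    # State-flow as data: the allowed transitions form a table the
    # customer can edit, so a process change needs no code change.
    # (States and transitions below are hypothetical.)

    STATE_FLOW = {
        "problem_report": {
            "open":     ["analyzed", "rejected"],
            "analyzed": ["fixing"],
            "fixing":   ["resolved"],
            "resolved": ["verified"],
            "verified": [],
            "rejected": [],
        }
    }

    def advance(record_type, current, target):
        allowed = STATE_FLOW[record_type].get(current, [])
        if target not in allowed:
            raise ValueError(f"{current} -> {target} is not in the defined flow")
        return target

    # A customer who wants a "deferred" state edits the table, not the tool:
    STATE_FLOW["problem_report"]["analyzed"].append("deferred")
    STATE_FLOW["problem_report"]["deferred"] = ["fixing"]

    state = advance("problem_report", "open", "analyzed")
    state = advance("problem_report", state, "deferred")   # legal after the edit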

Then, when you're planning a new release of your tool, make sure that a very large portion of the proposed features go toward making the tool easier to change.  Add in a higher-level change perspective too, so that instead of dealing with "labels", the user can deal with "product road map" or "baseline creation".  The user doesn't want to be in the weeds.  They just want to know that the weeds work the way they're supposed to.  Don't tell me how to place widgets on my dashboard easily - do it for me.  Don't tell me how to create complex widgets - give me a checkbox or a pick-list that will do it.  Don't hand me a bunch of scripts - show me the organized data which drives the process.  That's what the user wants, and that's what the vendor had better start delivering.
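
To illustrate that higher level - a hypothetical sketch, with the function and fields invented - one user-visible operation hides the label plumbing underneath:

    # One high-level operation the user asked for, with the low-level
    # "label" bookkeeping hidden inside it. (Invented names throughout.)

    def create_baseline(product, change_ids):
        """User-visible action: baseline these changes for a product."""
        return {
            "product": product,
            "changes": sorted(change_ids),
            # The weeds: an internal label the user never has to touch.
            "label": f"{product}-BL-{len(change_ids):04d}",
        }

    baseline = create_baseline("widgetd", ["CR-88", "CR-91"])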

App-Accessible ALM Functionality
OK.  This one came on like a storm over the past year or so.  Mobile devices, tablets, etc. have a new user interface paradigm:  "apps".  They're easy to use with little or no documentation.  Not only do I need the same ease-of-use in my ALM role, I need an app that will let me do a lot of my work remotely:  check on progress, give approvals, create a new baseline, identify the new features for the customer at whose site I'm installing a new release.

So, the information has to be mobile.  There are many ways to achieve this, and I don't have all the answers - the technology is just too new and changing too fast.  Maybe we want a smart client on the tablet or phone, rather than a thin client to the central site, so that when I lose connectivity, I still have the answers.  Or maybe that's the wrong architecture or cost structure.
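
For what the smart-client option might look like in outline - a rough sketch, with the record shapes and server stub invented - the pattern is: cache what you've seen, queue what you can't send, reconcile on reconnect.

    # Smart client for mobile ALM: serve cached answers when offline,
    # queue actions, replay them when connectivity returns.
    # (The server stub and record shapes are invented for the sketch.)

    class StubServer:
        def __init__(self):
            self.records = {"CR-91": {"status": "resolved"}}
        def fetch(self, record_id):
            return self.records[record_id]
        def apply(self, action):
            print("applied", action)

    class SmartClient:
        def __init__(self, server):
            self.server = server
            self.cache = {}      # records seen while online
            self.pending = []    # actions queued while offline
            self.online = True

        def get(self, record_id):
            if self.online:
                self.cache[record_id] = self.server.fetch(record_id)
            return self.cache.get(record_id)   # a stale answer beats none

        def approve(self, record_id):
            action = ("approve", record_id)
            if self.online:
                self.server.apply(action)
            else:
                self.pending.append(action)

        def reconnect(self):
            self.online = True
            for action in self.pending:        # replay queued work
                self.server.apply(action)
            self.pending.clear()

    client = SmartClient(StubServer())
    print(client.get("CR-91"))     # fetched live and cached
    client.online = False          # connectivity lost
    print(client.get("CR-91"))     # served from the cache
    client.approve("CR-91")        # queued while offline
    client.reconnect()             # queued approval replayed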

What I do recommend is that vendors watch this space carefully and move forward with the correct decisions - making the wrong moves too soon can be costly. However, making no moves at all may leave you with a knowledge gap.

Conclusions
You're probably reading this not as a vendor, but as a CM user.  But I'll bet you've heard a lot of things you'd like to hear.  Tell your CM/ALM vendor community that you're ready to move into the next generation.  You don't want to use a mainframe computer when a mobile tablet is now available to you, with more power and a better interface.  You don't want to drive a Model T when a Nissan Leaf or Chevy Volt is more to your budget and liking (don't worry - the price of electrics will come down quickly, and the range will increase dramatically over the next few years).  Tell them you want a 3G or 4G solution now.

Vendors, the difficult part of CM technology is not the technology itself.  It's the requirements:  figuring out the ease-of-use, getting the cost of sales and support down.  Don't be afraid to start a pilot project which has access to your existing technology and requirements, but which can cut the cords of constraint imposed by legacy components.  It's time to create next generation solutions.  Sure, keep polishing the old product, keep the band-aids available, add on a bunch of nice-looking contraptions, etc., until your new technology is ready.  But if you don't cut the cords, you'll be left in the dust.  I hope you'll take this article as a partial blueprint.  Because no matter how much support you throw at that old DOS clone, you won't convince the user that he has a tablet.  And that's why you'll see so many new OS platforms evolving this decade.

The only additional word of advice I can add:  make sure the CM/ALM components and features that you're working toward are well defined.  Not well-defined to fit all legacy tools.  Well-defined to meet the Next Generation.

Elon Musk said he's not in it for the profit - he's in it to get easy, reliable, affordable space access so that we can go beyond a few cameo space accomplishments.  But be certain, the profits will come because of this.  Similarly, vendors, make sure that you're in it for the advance of CM/ALM capabilities - to take it beyond our techies and software projects to the wider world of information.  If you can achieve this with your tools, the profits will come.


Joe Farah is the President and CEO of Neuma Technology. Prior to co-founding Neuma in 1990 and directing the development of CM+, Joe was Director of Software Architecture and Technology at Mitel, and in the 1970s a Development Manager at Nortel (Bell-Northern Research), where he developed the Program Library System (PLS), still heavily in use by Nortel's largest projects. A software developer since the late 1960s, Joe holds a B.A.Sc. degree in Engineering Science from the University of Toronto. You can contact Joe by email at farah@neuma.com