Building the business case for Software Lifecycle Integration

For many 21st century companies and government agencies, software is the not-so-secret sauce for business innovation and competitive advantage. But with 30-70 percent of software projects delayed or failing, delivering successful software projects is still a game of chance. In response, software delivery is in a constant state of flux, with the practices of development, test, requirements and deployment being modified and improved. But that improvement is often wasted or limited without a way of seamlessly connecting the disciplines of software delivery into an integrated, automated business process. That integration requires a new discipline: software lifecycle integration (SLI). SLI is the ALM discipline focused on solving the problems of disconnected, fragmented software delivery. Justifying SLI requires a systematic understanding of the costs and problems associated with software delivery. In this paper, we introduce a five-step process for building the business case:

  1. identify integrations
  2. obtain real financial numbers
  3. create annual costs
  4. factor in soft benefits
  5. measure and learn

By applying a measured approach to SLI, it is possible to build a strong financial motivation, which can then drive direct improvements to the business process of software delivery.

Tools and processes are not integrated

Increasingly, a firm’s competitive differentiation is greatly affected by its ability to deliver innovative, high quality software at a low cost. Accordingly, for many organizations, delivering this software is becoming a key business process. The defining characteristics of many products are determined not by the physical product itself but by the software that surrounds and runs it. Given the sheer costs and potential value, programs to improve an organization’s software delivery capability are easy to justify, with benefit statements associated with reduced time to market, higher quality, improved predictability, and/or reduced cost. However, when examining the efficiency of the business process of delivering software, it is not enough to look at the process alone; it is also important to understand the people and the tools involved in the end-to-end delivery of software. The tools often focus on improving particular disciplines or automating gaps in the process, and the people employ their own unique working practices, using the tools in a certain way. Automated testing, model-driven deployment, build, IDEs, requirements and project management tools have traditionally been easy to justify, but alone they do not deliver the benefits promised. The missing component to success is integration. Integration, the glue that enables complex tool chains to be connected, has emerged as a new tools category.

Integration is hard to justify

In the case of Software Lifecycle Integration, integration is focused on enabling different tools to connect. For many organizations, integration is similar to collaboration: conceptually it is valued, but that value can be difficult to prove. Integration is even more closely tied to collaboration when you realize that integration is all about moving information between disparate stakeholders; integration provides the telephone lines needed for collaboration. Integration and collaboration are inherently linked, which is both good and bad. We have all heard it: ‘improving collaboration adds real business value.’ But for many organizations managing growing business needs with shrinking budgets, investing in collaboration or integrations between groups is hard to justify. It is much easier to invest money in improvements to existing processes that clearly serve one role than in breaking down the barriers between roles. Unfortunately, doing more of something that you’ve proven is inefficient is just a way of producing more of the same inefficiency.

Despite the negative connotation around silos, optimizing for the silo is easier than optimizing for the system as a whole. In the past ten to fifteen years, the pendulum has swung from command-and-control, hierarchical, centralized IT organizations to Agile, Dan Pink’s knowledge-worker freedom, and a decentralized reality. In this time, we’ve had tremendous innovations in the silo (e.g., Agile programming methodologies, virtualized testing, open source tooling), but that has come at the cost of optimization across the silos. With DevOps, test-driven development and social coding, we are seeing the pendulum swing back as organizations again try to find ways of ensuring the system as a whole is successful. Agile methods have in part caused many organizations to rethink their discipline boundaries, but Agile itself can be hard to justify outside the boundary of the development team. Hence, the reality of Agile adoption for many organizations is ‘water-scrum-fall’. With disconnects between planning, operations and testing left to other initiatives to solve, 30 to 70 percent of software projects result in failure or delays despite the advances in the silos.

Despite overwhelming opportunities for improvement and efficiency, for many organizations, justifying collaboration and their integrations is undermined by:

  • Ownership. It is easy to define who is responsible for improving one discipline, but who is responsible for the interaction between two disciplines? For example, who would be responsible for improving the relationship between development and test? Often it requires a third party to get involved, but that in itself can pose a problem, as neither group trusts the input of an outside party.
  • Geographical, organizational and political boundaries. The reality of any large organization is that organizational structures evolve and are supported by both managerial and political boundaries. Breaking down these barriers is often very difficult when you are pursuing approaches that by their very nature bring down barriers between groups. In our example between test and development, most organizational hierarchies have separate testing and development functions and the common manager is often too senior to be close to the challenges that are blocking the improvement. With outsourcing and remote work a reality of software development, it is rare that QA and development are under the same roof, let alone on the same continent.
  • Measurement.  Software delivery has a history of poor measurement, but even that limited measurement often focuses on one discipline such as testing, development or planning. Integration by its very nature changes the flow of work, adding value in unknown ways; thus its value does not necessarily fit nicely into existing measurements.  Additionally, measuring across fiefdoms is always a challenge to get the right information and to get that information to match up so that it is meaningful.
  • Inertia.  Change is hard, and organizations have a certain inertia which slows down or stops change from happening. Integration often means organizations need to both think about the overall business process of software delivery and each discipline and make changes to their process models to integrate more effectively.

Vendor integrations are limited and hard to use

Connecting tools into one integrated business process sounds like the responsibility of each tool vendor. After all, the tool vendors are supposed to offer Application Lifecycle Management (ALM) tools. Tools provide individual value for one discipline but miss the much greater value that comes when they are integrated into a broader process, and many vendors provide out-of-the-box, and sometimes free, integrations. But integration is not the focus for those vendors. Integrations are often created to appease clients, support migration, or make a sale. This leads to integrations being limited and hard to actually deploy in the complex, customized, and unique working environments that are the norm in today’s enterprise. The reasons tool vendors struggle to provide the best integrations include:

  • You get what you pay for. For the majority of commercial software companies, integration is not a profit center.   Integration teams are funded by other revenue generating groups or paid for out of services and support budgets.   That leads to these teams not being able to keep up with the maintenance and support of integrations, and it often means that when new versions of other tools come out, the integrations are not updated or tested.
  • Integration and migration are synonyms. Many ALM vendors have broad and comprehensive stacks of development tools. Integrating with competitor products is hard to justify when management would much rather move those customers to their own product. This leads to vendors closing up APIs and making it harder and harder to get access to the underlying data. In turn, the competitive vendors do the same with their tools, and a vicious cycle continues, with the customer losing out. Even when tool vendors build their own integrations, they are typically set up as one-way integrations, bringing data to their tools from a competitive alternative.
  • Licensing and competitors make it hard. In general, competitors do not like working together.   Licensing is one example of where vendors make integration difficult by explicitly stating that competitors cannot use the product or purchase it.   This leads to lawyers reviewing where and when a competitor’s tool can be used and reduces the likelihood that the vendor will build an integration.

Building your own integrations is complex and a maintenance nightmare

For many organizations, the need to integrate point tools has led them to create internal integration teams who build custom integrations between tools. Even if the building of the initial integrations is outsourced to a systems integrator, the organization often still requires an internal team to maintain the integrations as the point tools update, required fields change, servers are renamed, and so on. These tool integration teams provide custom-made software, sometimes extending vendor-based integration frameworks or using integration platforms such as TIBCO or WebSphere. Though the initial cost of building the custom integration is easy to justify, the ongoing overhead of running these teams is often not accounted for when doing ROI and resource analysis and planning. What starts out as a simple connection between two systems for one team with very specific requirements slowly evolves into a business-critical operational system with a variety of use cases never foreseen when the initial integration was conceived. This is made harder when:

  • You have to build skills in multiple tools. Each tool comes with its own quirks, implementation models and connection patterns. The more tools you integrate, the more skills the integration team needs to manage. For many tools, the integration points are poorly documented and require extensive ‘experimentation’ to implement. This learning is hard to document and often leads to custom built integrations being difficult to maintain by people other than the original creator.
  • You have to update integrations when new versions come out. Traditional tool vendors release new functionality once or twice a year (the pace of new releases has gotten faster in recent history), whereas SaaS vendors release software at a much higher cadence. Keeping bespoke integrations up to date with the latest functionality is a large overhead. Also, many organizations support a number of different versions of a particular tool, requiring integrations to support multiple versions of a particular product.
  • Testing and support are a burden. Ultimately, creating the integration is far less effort than supporting those integrations over time. Over time, integrations become a mission critical back end service requiring rigorous testing prior to release to avoid data corruption, missing records and process breakage.   That level of testing is a huge burden requiring multiple test environments, test plans and associated test infrastructure. As soon as the software is considered mission critical, any operational and release overhead will have to be applied to this software, requiring formal review and sign-off before the software goes live.

The value is obvious; it just needs the numbers

The value of connecting up separate tools, disciplines and processes seems obvious to many software delivery professionals. After all, the business processes for sales, logistics, accounting and operations were integrated and automated in the 1990s, ironically largely through software as the medium for that automation and integration, but there is a big leap between knowing something is of value and being able to prove it. There are numerous examples that demonstrate the value of integration. An often cited research study from IDC states that the cost of not finding information is $3,300 per employee per year. But the IDC study focuses on finding the right information and its effect on productivity. It ignores the powerful added value that collaboration creates when two groups that traditionally did not see each other’s information are presented with it. For example, by connecting the developer and the tester, existing requirements can be re-evaluated, giving the opportunity for a smarter solution. Daniel Moody and Peter Walsh describe the increased value of information in their work ‘Measuring The Value Of Information: An Asset Valuation Approach’ as ‘in general, sharing of information tends to multiply its value – the more people who use it, the more economic benefits can be extracted from it’. By focusing on the value of sharing, connecting and being able to report on information, application development professionals will be able to provide context on the cost associated with integrating this information.

Give the right people the right information at the right time in the right form

Providing instant, correct information to the right people is the dream of any information system, and the business process of software delivery, like any other business process, is an information system. The majority of software delivery processes rely on manual processes, rekeying of information, meetings, or email to connect key disciplines. Processes such as planning, requirements, development, test and deployment are disconnected, working in different tools and using different approaches at different cadences to solve the same problem. Not only does this create massive process disconnects between groups, but it also undermines the information that should flow between them. Key artifacts such as requirements, defects, tasks and builds go through several transitions between inception and implementation. For example, a requirement needs to be transformed into a plan to be managed, which can then be transformed into a set of activities for developers. The requirement also needs to be transformed into a set of tests to ensure that the business requirement is met. Currently, in most companies, each transformation is manual and error-prone, with key context information being lost. Transformations are undermined by:

  • Rekeying information. A common solution as an artifact is transformed across disciplines is to manually enter information into each tool. Although this seems like a simple activity, as work moves from planning to analysis, design, development, test and deployment, each artifact is never quite translated completely. In many cases, the individuals handling the manual data entry are either too junior to understand the business processes involved, or those who do understand the business process are wasted doing work they abhor and that should be automated. Even if the individual doing the data entry is at the right level, manual data entry is fraught with mistakes, and the individual’s perceptions often cloud the transformation. Context, intent and history are often lost as an artifact moves between disciplines. A great example is a requirement: when defined for planning, it means one thing and is described in that way. The plan used to drive requirements work has a simple list of requirements on it but no context, so the business analyst takes each requirement and enters the details they believe to be true, which may be very different from the original intent of the requirement. All the context from planning is lost as it is translated into the requirements discipline. The next transition, to design, can suffer from the same distortions. The process of software delivery starts to resemble the children’s game of Chinese Whispers or Telephone, where everyone converts what they hear to their own perception and changes its intent.
  • Email is a nightmare. For many organizations, the glue that connects the disciplines is email. Artifacts are handed off via email, bugs discussed, and requirements described in often complex and sometimes heated email exchanges. Email, however, is a poor proxy for capturing real collaboration, and the results of interactions are often poorly communicated and documented. Email overload often results in missed communications as different people are included on the ‘to’ and ‘cc’ lists and documents are branched in different email threads.
  • The amazing spreadsheet. Spreadsheets are used to store requirements, defects, priorities, issues and many other artifacts that drive software delivery. But often these very important documents become large and difficult to work with. Also, the challenges of version control and of integrating information from a spreadsheet with other artifacts make spreadsheets a great way to get started but difficult to use for long-term integration. A great example is the defect spreadsheet bemoaned by everyone, as it quickly becomes out of date and difficult to work with. How many times have you been on a call where the first ten minutes is spent working out which version of the spreadsheet everyone should be working from?
  • Status meetings. A normal solution to the challenges of spreadsheets or confusion in emails is a meeting. Meetings are great for driving collaboration and clarification but are often difficult to schedule with disciplines in different time zones, language barriers, and busy schedules. As the cadence of software delivery increases, having meetings in a timely manner becomes that much more difficult. Add to that the reality of team members being involved in multiple projects and initiatives; not only does this mean that an individual will be attending far too many meetings, but they will also have to context switch, which may mean losing information about a particular artifact.

Agility requires reporting from many sources

Today, the term Agile is used as a catch-all when describing any improvement to the practice of software delivery. But at the heart of Agility is responding to the environment, or building in a feedback loop. Feedback requires rapid, real-time data on how the team is operating, the software it is delivering, and what the customer wants. Daily standup meetings and backlog maintenance require an integrated understanding of the project. When you add distributed teams and complex software supply chains, it is clear that some form of integration is required. Some of the challenges that arise when integrating for Agility include:

  • A person with a clipboard is expensive. Manual processes for gathering project status information and customer feedback are often the preferred method for many Agilists, but the overhead associated with capturing this information is large. Add to that the increased cadence of Agile projects, and you have created a huge cost for the project manager, scrum master or team. The scrum master becomes a full-time status checker rather than adding value to the team, spending their time updating the whiteboard and spreadsheet and communicating progress and status information to management.
  • Out of date information means wrong decisions. Agility is about responding to the situation in the most appropriate fashion. Response requires information, and that information needs to be up to date to enable teams to operate in the most effective way. Running a standup without current defect lists or customer feedback reduces the value of that standup and may lead to wrong decisions being made by the team.
  • An individual updating a spreadsheet is error prone. Adding stories, defects and tasks to a ‘super’ spreadsheet is for many teams the answer, but the reality for most software teams is that the spreadsheet is always an afterthought, and the information it holds is never quite up-to-date. The spreadsheet also suffers from version control problems, as many members of the team wish to update it at the same time. Add distribution to the mix, and misunderstandings start to creep in on what people mean by the values on the spreadsheet.

Traceability and compliance are hard to achieve

For many software projects, the record of truth is actually held in the heads of the development team. Agile practices accept that reality and encourage practices to share and communicate that tacit knowledge throughout the team. But tacit knowledge is not an adequate way of ensuring compliance and auditability. Having clear, connected software artifacts is key when external parties assess a project’s adherence to company or regulatory policy. That information is also important when projects have been implemented and problems occur. Software development compliance is often undermined by:

  • Manual steps. Having a clear, documented process is one thing; actually ensuring that people follow it is quite another. Manual process steps are likely to be ignored when time is of the essence and problems with the plan occur. To ensure compliance, manual steps need to be replaced with automation, which enables auditability and manageability.
  • The breadth of information residing in many tools. The sheer complexity of many software projects makes it difficult to provide clear and concise information flows. Multiple tools, coupled with external organizations and reliance on external APIs, make reporting from one tool nearly impossible without integration.
  • No one really thinks bad things will happen. It is true that the likelihood of litigation for most software projects is small, but as software plays a more important role in a business’s ability to operate, then transparency becomes more important to senior managers. Not only is the risk of litigation or audit a key motivator for improved traceability, but the underlying value of the business is reduced when management cannot really see the impact of change X or the ramifications of change Y.

A five step method for valuing integration

The value of integration will vary greatly based on each organization’s unique situation. Thus, it is impossible to say that making defects available to both developers and testers in their respective tools has a value of $X. Instead, the value of the information can only be determined for that particular integration, for that group, with one set of data. For example, the integration between development and test would focus on the bugs and builds being generated by each group. That information would drive work for both groups. It is then possible to evaluate the costs and value of different types of integration (see table 1).

Cost Type: Finding the information
  • Bugs/defects – there is a cost to finding the current list of bugs, even if it is only walking over to a whiteboard.
  • Build – working off the wrong build can waste countless cycles for both development and test.

Cost Type: Aggregating information
  • Bugs/defects – compiling and ordering the bug list costs time and effort.
  • Build – how many times have you been asked ‘what is in the build’? Aggregating the requirements/stories that are in the build costs time and effort.

Cost Type: The ‘Ah-Ha’ value
  • Bugs/defects – looking at patterns in the defects being created provides development with the ability to understand root causes.

Cost Type: Traceable information
  • Bugs/defects – seeing the requirements, test cases and builds involved in a bug provides both development and testing with a much better understanding of its context.
  • Build – seeing the composition of the build provides context for testing activities, and seeing what is not included helps the team make decisions about progress.

table 1 – example cost/value table

Step 1 – Identify integrations

It seems simple, but there are many different integrations possible in the software delivery flow, including requirements to development, development to test, test to requirements, and project management to all disciplines. Each integration has value and risks associated with it. For example, the value of connecting test to development is huge, but when every developer uses a different tool, the integration can be very difficult, which increases both the risk that the integration will not work and its cost. A quick scan of all the possible integrations with a gut-feel assessment of value and risk is a great place to start (see table 2).

  • Requirements to development
  • Development to test
  • Test to requirements
  • Project management to development
  • Release to test
  • Operations to project management

table 2 – example integrations to assess for value and risk

For many organizations, the integrations to focus on have been pre-decided because of existing process problems, audit concerns or serious communication problems. In that case, a holistic identification of all the integration opportunities can seem pointless, and the focus tends to fall on the integration that is immediately required. However, reviewing the application lifecycle as a whole rather than as a series of disconnected disciplines provides insight that is often missed. It is too easy to focus on one particular integration or aspect of the lifecycle without thinking about how that flow connects to other flows. For example, considering test and development is a great start, but reviewing that in the context of operations and requirements will help you identify business-changing epiphanies.
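The quick scan described above can be sketched as a simple ranking. This is a minimal illustration only: the High/Medium/Low scores below are invented gut-feel values, not measurements from any tool.

```python
# Rank candidate integrations by gut-feel value (descending) and
# risk (ascending). Scores are illustrative assumptions.
SCORE = {"low": 1, "medium": 2, "high": 3}

# (integration, value, risk) -- the scores here are invented examples
candidates = [
    ("Requirements to development",       "high",   "medium"),
    ("Development to test",               "high",   "high"),
    ("Test to requirements",              "medium", "low"),
    ("Project management to development", "medium", "medium"),
    ("Release to test",                   "low",    "medium"),
    ("Operations to project management",  "low",    "high"),
]

# Prefer high value first, then low risk as the tie-breaker
ranked = sorted(candidates, key=lambda c: (-SCORE[c[1]], SCORE[c[2]]))

for name, value, risk in ranked:
    print(f"{name:36} value={value:6} risk={risk}")
```

A spreadsheet does the same job; the point is simply to make the gut feel explicit so the team can argue about it.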

Step 2 – Obtain real financial numbers

Determining value in terms of high, medium and low is a great way of deciding which integrations to focus on, but integrations have a cost, so it is important to identify the potential fiscal value of each integration. The costs of working without integration are a great place to start in identifying information value. For example, using a process similar to the one IDC employed to calculate the cost of finding information, ask a team how long it takes them to find the right build or defect list. Look at the number of emails asking questions about requirements and build status. That information should then be converted into real time and effort, which has a real cost for each role (see table 3).

Test to Development
Team = 12 developers and 4 testers
Loaded cost = $100/hour

Activity: Developers finding information on defects, and testers ensuring the developers have the most up-to-date defects and details about those defects.
Financial impact: Developers spend 1 hour per week getting the defect list from test ($1,200 per week); testers spend 1 hour per day dealing with requests for information from development ($2,000 per week).

Activity: The team has to compile a list of defects mapped to work items once a week for a team progress meeting. The cadence of this meeting increases to daily in the last sprint before the release.
Financial impact: It takes a tester and a developer 2 hours each to compile the list weekly ($400 per week). It is not currently possible to produce a daily defect list.

Activity: Developers being able to see a defect and how it maps to test cases helps them debug it; testers want to see the work items/stories the developers are working from to determine which tests to execute.
Financial impact: Numerous interactions between developers and testers reviewing what is in each build and what each defect refers to; on average this equates to 1 hour a day for each team member ($8,000 per week), if it happens at all. When this collaboration and interaction does not happen, the wrong thing is often fixed or the wrong tests are run, resulting in rework and missed deadlines.

table 3 – example value

It is also important to factor in the cost of geographic distribution and organizational separation. For example, when teams are in different time zones, traditional techniques for aggregation such as spreadsheets and emails become quickly out of date and cause friction between groups. Distribution is therefore an amplifier of any cost.
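The arithmetic behind table 3 can be sketched as a small cost model. The team sizes, hours and loaded rate are the figures from table 3; the helper function is an illustrative assumption, not output from any tool:

```python
# Illustrative Step 2 cost model, using the figures from table 3.
LOADED_RATE = 100  # loaded cost, $/hour

def weekly_cost(people, hours_per_week_each, rate=LOADED_RATE):
    """Weekly cost of one manual integration activity."""
    return people * hours_per_week_each * rate

# 12 developers each spend 1 hour/week getting the defect list from test
finding = weekly_cost(12, 1)      # $1,200/week
# 4 testers each spend 1 hour/day (5 hours/week) answering dev requests
answering = weekly_cost(4, 5)     # $2,000/week
# a tester and a developer spend 2 hours each compiling the weekly list
compiling = weekly_cost(2, 2)     # $400/week
# all 16 team members spend 1 hour/day reviewing builds and defects
reviewing = weekly_cost(16, 5)    # $8,000/week

total = finding + answering + compiling + reviewing
print(f"Total weekly cost of manual integration: ${total:,}")  # $11,600
```

Swapping in your own headcounts and loaded rate gives a first-pass weekly cost for any candidate integration.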

Step 3 – Create annual costs and look for overlaps

Often the numbers that surface when looking at the costs and value of locating, aggregating and tracing information get everyone very excited, but this needs to be balanced with reality. Factors such as release frequency, distance, and cost overlap can amplify or reduce the value. Also, annual costing implies some assumptions about the work year, staff utilization and the general flow of work. Rather than spending too much time calculating the perfect model, concentrate on best-case and worst-case scenarios, allowing risk to be factored into the model (see table 4).

Annual Cost (45 weeks)

  • Finding and sharing defect information – worst case: 50% – $70,000; best case: 150% – $214,000
  • Compiling the defect list – best case: 300% – $54,000 (value of daily defect list)
  • Build and defect review interactions – worst case: 50% – $180,000; best case: 300% – $54,000 (value of daily defect list)

table 4 – example costs and benefits
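The annualization in Step 3 can be sketched as a continuation of the Step 2 cost model. The 45-week work year and the percentage realization multipliers mirror table 4; note that the sketch uses exact arithmetic, whereas table 4 reports slightly rounded figures:

```python
# Best/worst-case annualization for Step 3. The 45-week year and the
# multipliers are planning assumptions, not measurements.
WORK_WEEKS = 45

def annual_scenarios(weekly_cost, worst_multiplier, best_multiplier):
    """Annual value recovered under worst- and best-case multipliers."""
    annual = weekly_cost * WORK_WEEKS
    return annual * worst_multiplier, annual * best_multiplier

# Finding/sharing defect information: $3,200/week (from table 3),
# realized at 50% in the worst case and 150% in the best case
worst, best = annual_scenarios(3200, 0.50, 1.50)
print(f"worst case ${worst:,.0f}, best case ${best:,.0f}")
```

The same function covers the other rows, e.g. the $400/week defect-list compilation at a 300% best-case multiplier (the value of moving from a weekly to a daily defect list) yields $54,000 per year, matching table 4.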

Step 4 – Factor in soft benefits

Not all benefits can be measured in monetary terms. Some benefits are not predictable; for example, the ‘ah-ha’ moment created by providing additional information about the test plan to a developer is hard to predict, as it depends on the information and on the developer paying attention to certain data trends. Integration also provides support in areas such as compliance and governance, but measuring the value of governance to a business is difficult at the micro level. Everyone is able to appreciate the value of compliance, as everyone can tell a story of a company that did something non-compliant and dealt with the consequences.

For some organizations, compliance is mandated, and thus the value of integration is easy to define, as it replaces manual compliance reports and processes. For example, the consequences of non-compliance in regulated industries can take the form of fines, limitations placed on a business, and negative branding. Even in non-regulated industries, reserves for potential lawsuits, errors and omissions insurance, the potential for business process optimization, and so on all have costs and benefits that may result in the soft benefits outweighing the hard, easy-to-measure benefits described above. Qualitative soft benefits can be found by asking the team and other stakeholders involved in software delivery what their biggest problems with collaboration and process flow are, and then using integration to solve those problems. Additional intangible benefits include:

  • Improved visibility and transparency. Insight and intelligence are gained from making information across the value chain available to all disciplines, improving each discipline not just individually but as part of a whole.
  • Freeing up the internal development team. By adopting a systematic approach to Software Lifecycle Integration, it is possible to reduce the overhead of running the business process. For example, moving some of your best programmers off internally facing IT/DevOps work and onto far more valuable opportunity creation has an associated value.
  • Long term trending and a data warehouse. By exposing an end-to-end process and applying automation to the solution, it is possible to use the information in different ways. One great way of using the information is within a data warehouse, which can be used to explore trends and do analytics which you could never do without a consistent integrated view of the software delivery process.

Step 5 – Measure and Learn

The initial business case is a snapshot in time, created through spot analysis and time-and-motion studies. Achieving true long-term process improvement requires continuous measurement, with the data feeding into improvement initiatives and Kaizen opportunities. It is therefore important for any software delivery organization to maintain a repository of measures over time. Many metrics add value to software projects; from an integration point of view, the key ones include:

  • Batch size – How many requirements, defects, tasks or test plans are moving through the system. Larger batch sizes make sense for well-understood systems; smaller batches are more appropriate for applications with a large degree of uncertainty. By measuring batch size, it is possible to identify opportunities for improvement based on the relative size of the batches.
  • Flow – How work moves between the different tools, providing insight into bottlenecks and disconnects. Flow is crucial for process success. When an artifact is created in one system and does not move quickly into its next state or system, the disconnect is often indicative of problems with the process and integrations.
  • Artifact change rates – Defects, requirements, test plans and tasks will all change during their lives on a project, but it is important that this happens in a continuous and managed way. Heavy requirements churn or spikes in defect creation can indicate underlying problems with the process.
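To make these metrics concrete, the sketch below shows one way they might be computed from a cross-tool event log. Everything here is illustrative: the artifact IDs, state names, and the idea of a single `events` list are assumptions about how an integration layer might record hand-offs, not a description of any particular product.

```python
from datetime import datetime
from statistics import mean

# Hypothetical event log: (artifact_id, state, timestamp) tuples as an
# integration layer might record them when artifacts cross tool boundaries.
events = [
    ("DEF-1", "created", datetime(2024, 1, 1)),
    ("DEF-1", "in_test", datetime(2024, 1, 3)),
    ("DEF-2", "created", datetime(2024, 1, 1)),
    ("DEF-2", "in_test", datetime(2024, 1, 10)),
]

def batch_size(events, state):
    """Count artifacts whose most recent state is `state` (a batch-size view)."""
    latest = {}
    for artifact, st, ts in sorted(events, key=lambda e: e[2]):
        latest[artifact] = st
    return sum(1 for st in latest.values() if st == state)

def mean_flow_days(events, src, dst):
    """Average days artifacts take to move from state `src` to state `dst`."""
    entered, durations = {}, []
    for artifact, st, ts in sorted(events, key=lambda e: e[2]):
        if st == src:
            entered[artifact] = ts
        elif st == dst and artifact in entered:
            durations.append((ts - entered[artifact]).days)
    return mean(durations) if durations else None

print(batch_size(events, "in_test"))                 # artifacts awaiting test
print(mean_flow_days(events, "created", "in_test"))  # average hand-off delay
```

A long hand-off delay between `created` and `in_test` would flag exactly the kind of disconnect between tools that the Flow metric is meant to surface.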

What SLI means to you

Software Lifecycle Integration (SLI) is an emerging but important discipline that warrants organizational investment. SLI is the glue connecting the disciplines of software delivery, enabling that delivery to be treated like any other business process. In doing so, SLI provides a direct return on investment and amplifies the returns on the tooling investments made in each discipline. Building the business case for SLI is difficult, however, because integration is less tangible than tools such as automated testing; yet the ultimate value is much greater, as information becomes available in multiple places and contexts. Application development professionals should:

  • Treat software delivery as a key business process. For many organizations, software delivery is the last key business process to be evaluated, optimized and automated. By treating it the same way as other key business processes such as sales, operations, accounting and logistics, you not only focus the organization on improving its value but can also apply decades of learning on business process improvement. That kit bag of ideas and focus enables you to apply financial management and build a stronger case for automation and integration.
  • Look at the end-to-end flow. The key to building the business case for integration is to consider the end-to-end flow, from inception through implementation and maintenance. By focusing not on each discipline in detail but on the transitions between disciplines, it is possible to optimize the software delivery process without massive process change. The beauty of SLI is that integration enables improvement without disrupting any one silo: each group continues to work with its own tools and workflows while still gaining the visibility and benefits that come from information being synchronized in real time.
  • Apply the five-step process for building the financial model for integration. By understanding the opportunity, current cost and associated benefit of integrating each discipline, it is possible to build a financial model for integration. Spreadsheets, manual processes and email are easy targets for elimination, replaced with automation via integration.

  • Look to company initiatives to connect to. Agile, Lean, DevOps and enterprise PMO initiatives are strong candidates to associate the cost of change with. These initiatives often have strong leadership looking for tangible benefits that integration can deliver, and many require the improved flow, better collaboration and cross-discipline reporting that software lifecycle integration provides.
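The financial model behind the five-step process can be as simple as annualizing the cost of today's manual hand-offs. The sketch below is a minimal example of that arithmetic; every integration name, hour count, hourly rate and automation percentage is a hypothetical placeholder, not a benchmark.

```python
# A minimal sketch of the five-step financial model.
# All figures below are hypothetical placeholders, not benchmarks.

# Step 1: identify integrations (today's manual hand-offs)
integrations = [
    # (name, manual hours per week, loaded hourly rate in dollars)
    ("defect sync: QA tool -> dev tracker",    6, 75),
    ("status roll-up: tracker -> PMO reports", 4, 90),
    ("requirements copy: PPM -> agile board",  5, 75),
]

WEEKS_PER_YEAR = 48  # working weeks; an assumption

# Steps 2-3: obtain real financial numbers, then annualize them
annual_cost = sum(hours * rate * WEEKS_PER_YEAR
                  for _, hours, rate in integrations)

# Step 4: soft benefits enter the case qualitatively; here we model
# only the hard saving from automating a fraction of the manual effort.
automation_coverage = 0.8  # assumed fraction of manual effort eliminated
annual_saving = annual_cost * automation_coverage

print(f"Annual manual-integration cost: ${annual_cost:,.0f}")
print(f"Estimated annual saving:        ${annual_saving:,.0f}")
```

Step 5 then replaces these one-off estimates with measured data, so the model can be recalibrated as integrations go live.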

Learn more about SLI