ALM Forum Industry Panel

by Dave West, May 28th, 2015


In a sunny Seattle on Tuesday the 19th of May, I was fortunate enough to be invited to run an industry panel at ALM Forum. ALM Forum is an annual event in Seattle where ALM practitioners talk about all things tools, ranging from how to configure your build pipeline to running an Agile project. It is a small event in terms of numbers, but big in terms of passion and energy. The panel, which was also live streamed, included five of the most interesting people in the ALM field: Melinda Ballou, research analyst and program director at IDC; Aaron Bjork, program manager responsible for Visual Studio at Microsoft; Andrew Flick, product leader in the HP ALM group; Thomas Murphy, Gartner research director/analyst; and IBM Distinguished Engineer John Wiegand, one of the driving forces behind Eclipse, OSLC and the Rational tools strategy. With a panel made up of such amazing people it would be hard not to have a great discussion; my only challenges were to not run over (which I failed at) and to ensure that everyone had a chance to talk. And here are some of the highlights:

Open and heterogeneous

The future of ALM is open, heterogeneous, always changing. Those words would not come as a surprise from me, Melinda or Thomas, but IBM, HP and Microsoft all highlighted how the tools landscape is becoming more fractured, varied and ever-changing, and that ALM needs to be inclusive of varied, ever-changing tool chains. All three companies described how their approaches were becoming more and more inclusive of a variety of tools, including open source and competitor tools. Aaron made the point that the role of tools at Microsoft was, yes, to help people develop on Microsoft platforms, but also to help developers build more software. I guess if the whole ocean rises, all the boats rise with it. That was echoed by both Andy and John. John talked about the role of IBM in open source projects and why IBM is so committed to initiatives like OpenStack and Cloud Foundry. The reality is that more software ensures a bigger market for companies like HP, Microsoft and IBM. That market allows them to offer capabilities that rely on information being accessible, open and rich. Tools will continue to cost money, of course, but the focus seems not to be on making profit from traditional capabilities, but instead on capabilities around their use and the use of the applications they build.
That then brings me to…

Analytics and information is the new battleground

The battleground is not activities such as development or requirements, but instead analytics and decision-making around the process of software delivery. And increasingly, smart decision-making powered by the likes of HP Autonomy and IBM Watson will help developers make better decisions. Tom Murphy went one step further, describing the need to flow customer or Lean analytics into development to ensure that teams are building the right software. Everyone agreed that the shortage of data scientists will make it harder to make the right decisions with this information, and that more information does not by itself mean the right decisions are being made; but being able to get access to even simple information will be a great start.
This highlighted that the distinction between operating the ‘real’ business process and developing the software to support it is blurring, with truly business-focused teams driving business change with technology rather than separate teams in business operations, development and IT. No one talked about DevOps, but instead described something more holistic that includes the business… I guess they were describing Business DevOps.
This led the panel to bring up the thorny subject of design…

User Experience must be integrated, but that is hard…

Everyone agreed that User Experience is a crucial property of any modern application, but UX is still a difficult set of skills to marry into modern development. The turtleneck-wearing designers want to focus on perfection rather than incrementally delivering capabilities in an Agile way. Andy introduced the idea that UX is a quality attribute and thus should be tested, and talked of extending testing tools to provide subjective measures of UX. Tom and Melinda added the need to instrument applications so that customer information flows into development to help augment UX decision-making.
So UX, like everything else, needs data to be effective, and analytics could provide insights not only into how software works, but also into how its users perceive the experience.

I believe that ALM Forum will publish a recording of the panel – which I will share when it comes out – but I wanted to publicly thank the participants – Melinda, Aaron, Andy, Tom and John – who shared their insights and ideas.

Meet the Interns: Nicholas Folk, Junior Software Engineer

by Nicholas Folk, May 13th, 2015

When I first applied to Tasktop, I admittedly had not heard much about the company. What drew me in was my onsite interview visit. Almost everyone in the Vancouver office works in one wide open room with floor-to-ceiling windows, giving it a bright and amicable atmosphere where everyone is approachable. My biggest concern, however, was whether Tasktop would help me build a sturdy foundation in software development and design. I had heard horror stories of fellow students’ co-op experiences at other companies being unengaging and unfulfilling. So when I was told that I would be writing production-level code and learning high-quality design practices while doing it, I resolved to work for Tasktop and could not be happier with that decision.

Since most of my experience writing software had been classroom-based, I was initially intimidated by the sheer size of the code base I would be working with over my term. Luckily, I have had a great team — led by full-time Tasktopian and Mylyn contributor Sam Davis — that has ensured a smooth transition from the classroom to the real-world software industry. The engineers here are intelligent problem solvers, friendly, and more than helpful when I’ve needed anything. Everyone works hard, yet still makes time for a cold pale ale during the Friday afternoon socials, where I get to chat with the engineers who I don’t interact with daily. It’s also not uncommon for Mario Kart to make an appearance (or my personal favourite, Rock Band!).

Most importantly, the work I have been doing is engaging, aptly challenging, and useful for the users of the product. While I spend a large portion of my time improving Tasktop Dev, one of Tasktop’s original products, my team is also heavily involved in contributing to the Mylyn open source project. Some of the recent developments I have made involve bringing the Gerrit connector up to speed with the latest API changes and even improving on certain features. The connector now supports cherry picking reviews and navigating the EGit Eclipse editor using parent commit IDs. Here’s a video where I demo a few of these features:

I will be returning to classes at UBC this summer, but I plan to continue contributing to Mylyn, using the skills that I am still developing at Tasktop. It has been a blast working with these folks and I have gained a stronger appreciation for working in a fast-paced Agile environment in the Vancouver tech scene.

Why Integration requires a combination of linking and syncing

by Dave West, May 5th, 2015

Since its inception, Tasktop has been involved with Open Services for Lifecycle Collaboration (OSLC). We co-authored the original specification with IBM and continue to work on the OSLC core technical committee. OSLC provides a set of open standards for connecting lifecycle tools to enable cross-team traceability and collaboration. Based on the W3C linked data specification, OSLC enables links to be established across tools, repositories and projects. Those links include semantic information that allows the consuming tool to access not only the data, but also what that data means.

The objective of OSLC is to provide a set of standards that allow all lifecycle tools to connect information in real-time. Imagine a requirement linked to a test case where, in the requirements tool, the test information was accessed and presented to the user without duplicating the test information in the requirements tool. These external links are not just links to HTML pages, but via the specifications provided by the OSLC standard, they also enable the requirements tool to receive certain information from the testing tool. This enables the requirements management tool to do something with this data, such as provide the data in a rich hover for the user, or programmatically update a status field in the requirements tool.
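To make this concrete, here is a minimal sketch of what an OSLC consumer might do with such a link: fetch the linked test case resource and read a couple of its properties to display, for example, in a rich hover. The URL, credentials and property names are placeholders, and real providers vary in which representations and properties they expose, so treat this as an illustration rather than a recipe.

import requests

# Hypothetical URL of a test case that a requirement links to via OSLC.
TEST_CASE_URL = "https://qm.example.com/oslc/testcases/1234"

def fetch_linked_test_case(url):
    """Fetch an OSLC-linked resource and return its properties as a dict."""
    response = requests.get(
        url,
        headers={
            "Accept": "application/json",      # many providers also serve RDF/XML
            "OSLC-Core-Version": "2.0",
        },
        auth=("reporting_user", "secret"),     # placeholder credentials
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    test_case = fetch_linked_test_case(TEST_CASE_URL)
    # Property names vary by provider; these are illustrative only.
    print("Title: ", test_case.get("dcterms:title"))
    print("Status:", test_case.get("oslc_qm:status"))

In a real integration, the consuming tool would render this information in a hover or use it to update a field, as described above.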

OSLC, however, has been very slow to take off. Vendors are skeptical about the motivation behind OSLC, viewing it as an IBM-driven standard. Without strong vendor support, customers don’t see the value. Without customers driving adoption, vendors will not support its adoption. This creates a textbook chicken-and-egg problem: no motivation for vendors to adopt, and because of limited vendor adoption, no customers asking for it. Complicating the vendor adoption problem further is the fact that customers are not actively looking for a cool architecture for linking data. They simply need to find ways to solve immediate integration problems. They need a lifecycle integration strategy, and OSLC is not an integration strategy. A lifecycle integration strategy is a combination of decisions around workflow, with reviews and discussions about things like how reports are created and what tools do to support traceability with respect to compliance and governance. OSLC standards provide a set of technical capabilities that support aspects of the strategy, but they are not a complete strategy. This fact is also highlighted in the work of the EU Crystal project, a project focused on solving lifecycle integration problems in the area of systems engineering.

By partnering with our customers, Tasktop is able to see OSLC in the context of the integration needs of diverse organizations. This real-world view clarifies the ways that OSLC and linking fit into a broader strategy.

A top down approach

For any strategy to be successful, you need to begin with the needs of your users. Integration is a huge and complicated subject. Each customer organization has its own set of scenarios. Requirements may include:

  • Flowing data: When a ticket’s state in tool X becomes ‘defect’, create a defect in tool Z (see the sketch after this list).
  • Reporting: Create a report that includes data from disciplines A and B for projects Q and R across tools Z and X.
  • Traceability: Link a requirement in tool X to a series of tests in tool Z.
  • Collaboration: When comments raised about artifact B appear in tools X and Z, allow users of those tools to respond and collaborate in the context of that artifact in the tools they use.
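As a sketch of the first of these requirements (flowing data), the Python below polls one tool and creates defects in another. The tool_x and tool_z client objects and all of their method names are hypothetical stand-ins for whatever APIs the two tools actually expose.

# Minimal sketch: when a ticket's state in tool X becomes 'defect',
# create a matching defect in tool Z. All client methods are hypothetical.

def flow_defects(tool_x, tool_z, already_synced):
    for ticket in tool_x.tickets_changed_since_last_poll():
        if ticket.state == "defect" and ticket.id not in already_synced:
            defect_id = tool_z.create_defect(
                title=ticket.title,
                description=ticket.description,
                severity=ticket.severity,
                external_id=ticket.id,          # back-reference for traceability
            )
            tool_x.add_link(ticket.id, tool_z.url_for(defect_id))
            already_synced.add(ticket.id)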

Of course, many integration scenarios are a combination of these requirements. For example, customers may combine flow and collaboration, or reporting and traceability. OSLC can provide support for all of these different types of requirements, but the reality of many tools, coupled with the detailed requirements for each type of need, makes OSLC the natural choice for enabling traceability. Traceability is all about linking artifacts, and OSLC extends that by allowing tools to create semantically rich links. Linking is important to any lifecycle integration strategy to support traceability and to help define and structure the flow of information. In fact, since the release of OSLC, Tasktop has been on a journey to better understand the relationships inherent in any lifecycle. This understanding has been the driver behind adding capabilities to Tasktop Sync.

Tasktop and Linking

In April 2014, Tasktop released artifact relationship management (ARM). The objective for ARM was to enable customers to express relationships across tools, even when the relationship models in each tool are different. For example, RRC and HP ALM describe the relationship between a requirement and a test in very different ways. A customer who has requirements in RRC and tests in HP ALM needs those relationships described in a way that both tools understand and can act on (it is important not to lose the relationship between tools). Another interesting dimension of integrating links is how those links are stored. ARM enables users to store relationships differently, depending on whether the tool supports the artifact type or not. For example, the RRC data model does not support the test plan artifact: DOORS NG is a requirements tool, and would expect a customer to use Rational Quality Manager (RQM) or another testing tool to manage test artifacts. HP ALM, on the other hand, has a model representation for requirements and would expect the requirement to be stored within the tool. ARM enables users to express these different integration rules as internal associations (when the tool has the model representation) and external associations (when the tool does not). This means that sometimes internal links (links to artifacts within the tool) and sometimes external links (links to the artifact in another tool) make sense, depending on the tool and what the customer is trying to achieve. Sometimes it is better to provide an external link to a master artifact in another tool rather than synchronizing that artifact into the tool and using an internal link.
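One way to picture the internal/external association rules described above is as a small piece of configuration, with one rule per tool and relationship. The sketch below is purely illustrative and is not Tasktop's actual ARM syntax; the field names and URL are made up.

# Illustrative only: association rules in the spirit of ARM. Not real syntax.
RELATIONSHIP_RULES = [
    {
        # HP ALM has its own model element for requirements, so the
        # requirement-to-test relationship can be stored as an internal link.
        "tool": "HP ALM",
        "relationship": "requirement -> test",
        "association": "internal",
    },
    {
        # DOORS NG has no test plan artifact, so the relationship is stored
        # as an external link to the master artifact in the testing tool.
        "tool": "DOORS NG",
        "relationship": "requirement -> test plan",
        "association": "external",
        "link_target": "https://qm.example.com/testplans/42",   # placeholder
    },
]

def association_for(tool, relationship):
    """Return 'internal' or 'external' for a given tool and relationship."""
    for rule in RELATIONSHIP_RULES:
        if rule["tool"] == tool and rule["relationship"] == relationship:
            return rule["association"]
    return "external"   # default: link out rather than duplicate the artifact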

So what about OSLC?

Now that we have added context about customer needs for OSLC and how ARM supports linking, let’s circle back to how Tasktop supports OSLC. We introduced OSLC support in 2011 with our 2.0 product release. This functionality was driven by our commitment to supporting the standard, and by helping a customer who was experimenting with linking information. At that time, we introduced functionality that allowed customers to describe their mapping to a particular project and tool and make that mapping accessible via an OSLC provider. Tasktop Sync acts as an OSLC provider, allowing customers to create mappings for their OSLC interfaces without the need to write complex REST APIs. Tasktop also enables non-developers to create OSLC interfaces that can be consumed by any OSLC-compliant tool (when this functionality was first added, only IBM Rational CLM tools were compliant).

Example of OSLC

Enabling OSLC external links

It is no surprise that Tasktop treats OSLC like any external link, enabling the link to be written in an OSLC form. Unlike many implementations of OSLC (where the link is added by the user of one tool), Tasktop Sync creates the links automatically, based on rules. By combining the external OSLC link with the OSLC provider that Tasktop Sync provides for non-OSLC tools, users can create an OSLC link with ARM, which can then be executed programmatically.

One good example of this is DOORS NG to HP ALM. BAs create requirements within DOORS NG. A subset of the requirements information is synced to HP ALM, allowing the testers to create the associated test cases. A traditional HP web link is attached to the HP ALM requirements, allowing the tester to see the requirement in RRC. Once the test cases are created within HP ALM, an OSLC link is provided back to RRC, allowing RRC to run traceability reporting. Also, because there is a model element within HP ALM that represents the requirement, it is possible to take advantage of HP reports and to add metadata to the model element that is synced back to DOORS NG. This metadata can include status or rolled-up test success that can be included in reports in DOORS NG or HP QC.

This combination of OSLC and syncing gives tool admins the flexibility to use the approach that best supports their needs. For example, it might benefit the BAs and testers to include in-context collaboration. By having an artifact in HP ALM, comments can be written and then synchronized to DOORS NG, eliminating the need to use email to discuss a requirement. Synchronization of a requirement between RRC and HP ALM is programmatic: it only happens when key data is entered or the state of the artifact reaches a certain point. This allows process management to be undertaken, with only certain requirements entering the tester’s backlog to be worked on. This process automation helps manage the volume of work and supports process models such as Kanban or Scrum. It also allows organizations to set Work In Progress (WIP) limits, allowing the introduction of Lean approaches to the management of work.
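The gated synchronization described above might look something like the sketch below: only requirements that have reached an agreed state flow into the testers' backlog, and both a web link and an OSLC link are written so that each side can navigate and report on the other. The doors and alm client objects and their methods are hypothetical, and the state and link type would differ per installation.

# Sketch of the DOORS NG -> HP ALM scenario above. All client methods
# are hypothetical; states and link types will differ per installation.

READY_STATES = {"Ready for Test"}

def sync_ready_requirements(doors, alm):
    for req in doors.requirements_changed_since_last_sync():
        if req.state not in READY_STATES:
            continue   # keep in-progress requirements out of the test backlog

        alm_req_id = alm.upsert_requirement(
            external_id=req.id,
            title=req.title,
            description=req.description,
        )

        # Traditional web link so testers can jump back to the source requirement.
        alm.add_web_link(alm_req_id, doors.url_for(req.id))

        # OSLC link back to DOORS NG so traceability reports can follow it.
        doors.add_oslc_link(req.id, link_type="validatedBy",
                            target=alm.oslc_url_for(alm_req_id))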

OSLC diagram

It’s not about linking or synchronization, it’s about flexibility and value

OSLC, like any standard, has its zealots–people who think replication is evil and OSLC is the Holy Grail of integration. But the reality is that integration is a more complex problem than any one protocol or approach can solve. I hope I have demonstrated that our approach combines OSLC with other integration models, allowing for a solution that meets customer needs. Customers have particular needs and look to integration to help them solve process, teamwork, reporting and/or compliance and governance problems. Tasktop is committed to providing the infrastructure that customers need to solve problems and achieve software delivery success. OSLC is a key protocol, and as its adoption grows and usage patterns emerge, we will continue to extend our support for it. It is clear from our support for OSLC that there is a need for infrastructure that connects the protocol and its associated interfaces with the reality of legacy tools and schemas. OSLC will continue to be a fundamental part of our infrastructure solution, and Tasktop is committed to helping drive the standard to be more inclusive of the realities of customer tool situations. I write this as both the Chief Product Officer of Tasktop AND a member of the OSLC steering group.

Dave West, Tasktop CPO and OSLC steering group member

Tasktop Sync 4.2 Released

by Nicole Bryan, April 29th, 2015

I’m thrilled to announce the latest release of Tasktop Sync; as always, we have a terrific mixture of enhancements! With each release we typically add new systems to the Sync Integration Network, as well as new features for systems we already support. And Tasktop Sync 4.2 is no exception.

Support for New Systems

As you know, Tasktop Sync now supports over 30 systems and the list is growing fast! In this release alone we added four new systems. Our focus this release was twofold. First, we wanted to expand our support for test management systems. To that end, we added support for IBM Rational Quality Manager and Microsoft Test Manager.

Think of the possibilities! Imagine finding a defect during testing, and having it automatically mirrored from Rational Quality Manager to your development tool of choice… whether it’s JIRA, Rally, TFS or any of our other supported systems. Even better, imagine the comment streams, attachments and the status seamlessly flowing between RQM and your developer tools.

Or, imagine another scenario, where your organization wants to let different teams use various test case management systems, while still keeping a central source of record for all tests. No problem. Teams that want to use Microsoft Test Manager can synchronize those test cases to your organization’s “testing system of record” (such as HP Quality Center) and, again, have seamless flow of information between the two systems.

Our second focus was to increase our support for the ServiceNow platform as they expand into new market areas such as Project and Portfolio Management and Agile Planning (SDLC). If your organization uses ServiceNow SDLC to manage your Agile projects but your test teams and business analysis teams are using other tools, you’ll want to think about setting up this integration. Use Sync to allow your development teams to use ServiceNow to manage their sprints but sync defects and requirements to your test management and requirements management tools.

Maybe you’re embarking on using SAFe (Scaled Agile Framework)? Do it with confidence knowing that you can synchronize high-level business epics originating in ServiceNow PPM to your Agile planning tool of choice. There, your developers can break those business epics down into features and stories – keeping both the development teams and the PMO happy and productive!

Of course, all our connectors are built to support a wide variety of artifacts, so there are many more interesting scenarios you can create with them. Think of our connectors as a toolbox that allows you to craft integration scenarios that meet your particular business situation.

New Artifacts and Attributes

The second area of enhancements is to extend the support of previously added systems. As we learn more about how our customers use Sync, we add new support for additional artifacts and attributes to existing connectors. To whet your appetite, below is a subset of scenarios that we now support:

VersionOne Tests
It’s all about collaboration. Especially when it comes to effective test case development in an Agile world. With our new support for VersionOne tests, developers and testers can collaborate by having tests synced between VersionOne and their test management tool of choice.

JIRA Cascading Select Fields
Dealing with cascading select fields can be tricky! They dynamically update based on choices made in previous drop-down fields (for example, choosing your country and then choosing your state/province). Now we support these complex fields, making your integration even more robust!
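To see why these fields are tricky to synchronize, consider that the valid options for the child field depend on the value chosen for the parent field. The sketch below shows the kind of validation an integration has to perform before writing a cascading pair into a target tool; the option table is invented purely for illustration.

# Illustrative only: the child options depend on the parent selection.
CASCADING_OPTIONS = {
    "Canada": ["British Columbia", "Ontario", "Quebec"],
    "United States": ["Texas", "Washington"],
}

def validate_cascading_pair(country, region):
    """Raise if the (parent, child) pair would be rejected by the target tool."""
    valid_regions = CASCADING_OPTIONS.get(country)
    if valid_regions is None:
        raise ValueError("Unknown country option: %r" % country)
    if region not in valid_regions:
        raise ValueError("%r is not a valid region for %r" % (region, country))

validate_cascading_pair("Canada", "British Columbia")   # ok
# validate_cascading_pair("Canada", "Texas")            # would be rejected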

JIRA Comments resulting from state transitions
Now whenever JIRA is configured to require a comment as part of a status change, both the status change and the comment are synced to whatever system you are connected with. For example, a development team might want to enforce a policy requiring that a comment is entered when a defect is determined to be a “Won’t Fix,” or a Story is moved to “Not Complete” at the end of a sprint. Now, both the attribute and comment will always be synced together.

CA PPM Incidents
CA PPM can be used for more than just traditional PPM capabilities. We are seeing more customers use CA to manage “Incidents” – so we’ve added support for that artifact type. Imagine, now you can have Incidents automatically create a defect in your defect management system – cool!

CA PPM Dynamic Lookup Fields
Have a lookup field with options that are constantly changing (such as Charge Codes)? Unlike a typical lookup list, which always returns the same set of options, these fields are populated by a continually changing list of options. Now we support these fields to enhance your integrations.

Our mission is to Connect the World of Software Delivery… with this release, we’re connecting more of that world, and making it easier to do so!

Meet the Individuals Responsible for Tasktop’s Customer Journey

by Neelan Choksi, April 17th, 2015

As part of our growth at Tasktop, it has been critical that we add key leaders to sustain and facilitate that growth into the future. I’m thrilled to be talking about the new group of sales leaders we have at Tasktop and feel privileged to be working with this outstanding group. Tasktop now has new sales leaders for Americas West, East and Central, as well as a new leader for pre-sales.

Jamie Wetzel has been promoted from an individual contributor role to lead Americas West. I think I am most proud of this one, as I love when we promote from within. Now in his 5th year with the company, Jamie has proven that he’s management material by embodying our culture and, frankly, being a leader before ever being given the title. I think Jamie has single-handedly attempted to enable every person who comes to Tasktop, and I am glad that we are able to reward Jamie with this promotion. Truth be told, I think the company is getting the larger reward.

We recently hired John Kapral to lead our Americas East and Central regions. I love Kap’s experience, having started his career in inside sales and working his way up into sales management in a diverse enterprise software career that has spanned such stalwarts (and in some cases Tasktop partners) as Splunk, CA, Symantec, BMC and CSC. In just over 2 quarters at Tasktop, I am already finding that I am relying on Kap’s experience and expertise with sales management and can see marked improvements in Tasktop’s Go to Market process and sales execution.

I am also thrilled that this past summer, we hired Maury Cupitt to lead our pre-sales engineers. Maury has been involved with pre-sales engineering for much of his career, which included working for Netscape, AvantGo, Wily, CA, and Blue Stripe. One of my favorite things about Maury is that he has brought back and instituted the ALM Architecture diagrams to be part of our pre-sales and post-sales processes – something that was so critical in our early days, helping us plot and flow our customers’ ALM tools and the information that needed to be passed between them.

Lance Knight has been given the tremendous responsibility of managing our post-sales activities as VP of Customer Success. His team is responsible for getting our customers deployed, delivering product-related services for customers, providing post-sales support, conducting customer health checks, teaching our customers through our extremely well-reviewed training courses, and driving our knowledge management assets. Like everything Lance touches, he has embraced the role of Customer Success. Lance likes to remind us that his job title is a bit of a misnomer since we all are responsible for Customer Success. Regardless, I feel very fortunate to have Lance leading this team as Tasktop continues to grow.

Yes, we have great innovative products. But working for a company is about the people. And I am proud to be working at Tasktop because we have an outstanding staff of intelligent, hardworking, self-motivated team members who love to win. If you read this and are excited by what you are hearing and are interested in exploring career options at Tasktop, check out our careers pages or contact us.

IEEE DevOps Unleashed calls for a holistic approach to DevOps

by Dave West, April 10th, 2015

Austin is famous for many things, but in the tech community South By Southwest (SXSW) has emerged as one of the go-to events for technical innovation. SXSW is considered to be the launch pad for getting the word out on many great ideas. So, when I was asked to speak at IEEE DevOps Unleashed, I was excited about the chance to connect with the Austin technical community and catch a little bit of the South By magic without having to deal with the crowds of the real event.

The IEEE CS DevOps symposium is a popular series of regional events that bring people together to talk about the DevOps movement. At the Austin event, I joined Bernard Golden (@bernardgolden), VP of Strategy at ActiveState, and Bernie Coyne (@berniecoyne), DevOps evangelist at IBM. Both shared very interesting views on the importance, challenges and potential approaches to DevOps.

What was truly interesting about the talks, in my opinion, was what was not being talked about, or only mentioned in passing. For many organizations, DevOps is associated with release management and is squarely focused on the problem of reducing the overhead of software releases. And, although it is true for many organizations that the relationship between development and operations most visibly falls apart during the release process, the actual disconnect is much broader. This holistic definition of DevOps has emerged in part because vendors want to associate their products with a great marketing campaign and industrial movement. But it has also emerged because the breakage between the two groups exists in many places. Each speaker approached this problem from their own perspective.

Bernard described three areas that must change in order to enable DevOps:

  1. Business Agility – the business side of rapid delivery
  2. Technical Innovation – the development practices and approaches that are affected, and
  3. Infrastructure choice – the realities of DevOps on the stack and architecture used.

Only when you consider all of these areas do you create the foundation necessary for DevOps. Release automation was only mentioned as one aspect of Technical Innovation. Important, yes, but not the only focus area.


Bernie provided an overview of the IBM approach to DevOps, mapping the need for DevOps in an organization that is competing with much smaller, nimbler software companies. He described the breadth of IBM offerings and how IBM strives to use a DevOps approach in the context of a company that has both the weight of many years of process and the heft of 400K people. Bernie highlighted the importance of an incremental approach, one that focuses first on the practice areas that are the most broken, slowly adopting the changes that make sense. He described DevOps not as a standard pattern that every organization can just blindly follow, but rather as a series of recipes that will be different for each adoption model. If you reviewed IBM, you would find very different processes and practices being employed, but all of them are striving to break down the barriers between operations and engineering.


Both Bernie and Bernard painted holistic, complete views of the transformation necessary to deliver software faster and reduce waste. I carried on this theme with the forgotten side of DevOps, concentrating on the lifecycle aspects of DevOps, and how we need to manage both the assets (code, infrastructure) and the work in a consistent, automated, and measured way. Only by connecting or automating both the asset and artifact approaches does an organization effectively break down the barriers that disconnect the lifecycle and create waste.


In the panel section of the event, there were many interesting questions, including: how can you apply these principles to operating systems or hardware development, and should you treat infrastructure the same way you treat code? A theme also emerged about objectives and measurement: without clear, consistent, shared goals and an effective way of measuring them, you will never drive change in your organization. Bernard described shared metrics as the cornerstone of cultural change. And it became clear that if you measure development for change and operations for stability, change is not only hard to execute on, but will never become embedded within an organization.

Faster VMs with Vagrant and Chef

by David Green, April 1st, 2015

Developing and testing changes in an environment that is the same as the deployment environment is one of the magic ingredients of the DevOps way. For engineers at Tasktop working on the Integration Factory, provisioning a new VM can occur multiple times in a day, so any inefficiencies in the process are painful. I recently stumbled across vagrant-cachier, which reduces network usage and drastically improves local Vagrant-based VM provisioning times.

Install vagrant-cachier as follows:

$ vagrant plugin install vagrant-cachier

Then add the following to your ~/.vagrant.d/Vagrantfile

Vagrant.configure("2") do |config|
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box
  end
end

That’s all! The next time you provision a VM using Vagrant, you should see a local cache being created under ~/.vagrant.d/cache.

vagrant-cachier creates a local cache of dependencies, including those provided by apt-get and Chef, and of course anything under /var/cache.
This technique is especially helpful for remote developers who are not on 100Gb Ethernet to the servers hosting VM dependencies.

It’s easy to try it yourself and collect before/after results. Here’s what I observed before vagrant-cachier:

$ time vagrant up 
Bringing machine 'default' up with 'virtualbox' provider...
[default] Importing base box 'ubuntu-14.04-x86'...
[default] Matching MAC address for NAT networking...
[default] Setting the name of the VM...
...
real	4m44.142s
user	0m4.895s
sys	0m3.132s
$ 

After enabling vagrant-cachier:

$ time vagrant up 
Bringing machine 'default' up with 'virtualbox' provider...
[default] Importing base box 'ubuntu-14.04-x86'...
[default] Matching MAC address for NAT networking...
[default] Setting the name of the VM...
...
real	3m18.506s
user	0m7.617s
sys	0m5.261s
$ 

In this real-world example (a simple VM running MySQL, downloading dependencies over a VPN), I was able to reduce the provisioning time by 30%! Of course, the effect is even larger for VMs that have more dependencies.

Many thanks to @fgrehm for vagrant-cachier, which helps to eliminate much of the pain of waiting for VMs to come up when using Vagrant and Chef.
Find out more at http://fgrehm.viewdocs.io/vagrant-cachier.

Premium grade fuel for software lifecycle analytics engines

by Mik Kersten, March 24th, 2015

I’ve long taken inspiration from Peter Drucker’s caution that “if you can’t measure it, you can’t manage it.” Technological progress has been punctuated by advances in measurement, ranging from Galileo’s telescope to Intel’s obsession with nanometers. Our industry is starting to go through a profound transformation in measuring how software is built, but only after we work through some big gaps in how we approach capturing, reporting and making decisions on these metrics.


Exactly 40 years have passed since Frederick Brooks cautioned in The Mythical Man-Month that measuring software in terms of man-months was a bad idea. Pretty much everyone I know who has read that book agrees with the premise. But pretty much everyone I know is still measuring software delivery in terms of man-months, FTEs, and equivalent cost metrics that are as misleading as Brooks predicted. Over the past year I’ve had the benefit of meeting face-to-face with IT leaders in over 100 different large organizations and having them take me through how they’re approaching the problem. The consistent theme that has arisen is that to establish meaningful measurement for software delivery, we need the following (where each layer is supported by the one below it):

  • 1) Presentation layer
    • Report generation
    • Interactive visualization
    • Dashboards & wallboards
    • Predictive analytics
  • 2) Metrics layer
    • Business value of software delivery
    • Efficiency of software delivery
  • 3) Data storage layer
    • Historical cross-tool data store
    • Data warehouse or data mart
  • 4) Integration infrastructure layer
    • Normalized stream of data and schema updates from each tool

Nothing overly surprising there, but what’s interesting is why existing approaches have not supported getting the right kind of reporting and analytics in the hands of organizations doing large scale software delivery.

The Presentation layer (1) is not the problem. This is a mature space filled with great solutions such as the latest offerings from Tableau and Geckoboard, as well as the myriad of hardened enterprise Business Intelligence (BI) tools. What these generic tools lack is any domain understanding of software delivery. This is where the need for innovation in the Metrics layer (2) comes in. Efforts to establish software delivery metrics have been around as long as software itself, but given the vendor activity around them and the advances being made in lifecycle automation and DevOps, I predict that we are about to go through an important round of innovation on this front. A neat take on new thinking about software lifecycle metrics can be seen in Insight Ventures’ Periodic Table of Software Development Metrics.

Combining software delivery metrics with business value metrics is an even bigger opportunity, and one where the industry has barely scratched the surface. For example, Tasktop’s most forward thinking customers are already creating their own web applications that correlate business metrics, such as sales and marketing data, with some basic software measures. A lot of innovation is left on this front, and the way that the data is manifested in the Storage layer (3) must support both the business and the software delivery metrics.

The Data Storage layer (3) has a breadth of great commercial and open source options to choose from, thanks to the huge investment that vendors and VCs are making in big data. The most appropriate one depends on the existing data warehouse/mart investment that’s in place, as well as the kind of metrics that the organization is after. For example, efficiency trend metrics may lend themselves best to a time-series oriented store such as MongoDB, while a relational database can suffice for compliance reports.

For organizations that have already attempted to create end-to-end software lifecycle analytics, the biggest impediment to creating meaningful lifecycle metrics is clear: the Integration infrastructure layer (4). In the past, this was achieved by ETL processes, but that approach has fallen apart completely with the modern API-based tool chain. Each vendor has its own massive API set and its own highly customizable schemas and process models, and standards efforts, while important, are years away from sufficiently broad adoption.
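As a rough sketch of what a "normalized stream" from layer 4 could look like, imagine every tool's native change records mapped into one common shape before landing in the data store. The field names below are invented for illustration and are not Tasktop Data's actual schema.

# Illustrative only: one common record shape for changes from any lifecycle tool.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ArtifactChangeEvent:
    source_tool: str        # e.g. "JIRA", "HP ALM", "Rally"
    project: str
    artifact_type: str      # normalized type: "defect", "story", "test", ...
    artifact_id: str
    field: str              # which attribute changed
    old_value: str
    new_value: str
    changed_at: datetime
    changed_by: str

def normalize_change(tool, raw):
    """Map one tool-specific change record (a dict) into the common shape."""
    return ArtifactChangeEvent(
        source_tool=tool,
        project=raw["project"],
        artifact_type=raw["type"],
        artifact_id=raw["id"],
        field=raw["field"],
        old_value=raw.get("from", ""),
        new_value=raw.get("to", ""),
        changed_at=datetime.fromisoformat(raw["timestamp"]),
        changed_by=raw["user"],
    )

With every tool emitting the same shape, the metrics and presentation layers above can be built once rather than once per tool.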

Tasktop has long had a reputation for solving some of the hardest and least glamorous problems in the software lifecycle. Our effort is completely focused on expanding what we did with Tasktop Sync to create an entirely new data integration layer with Tasktop Data. Our goal is to support any of the Data Storage, Metrics or Presentation layers provided by our partner ecosystem. There are some truly innovative activities happening on that front, ranging from HP’s Predictive ALM, to IBM’s Jazz Reporting Service, to the Agile-specific views provided by Rally Insights. We are also working with industry thought leaders such as Israel Gat and Murray Cantor to make sure that the Data integration layer that we’re creating supports the metrics and analytics that they’re innovating.

What is unique about our focus is that Tasktop Data is the only product that provides normalized and unified data across your myriad of lifecycle tools (4). We are the only vendor focused entirely on the 4th layer of software lifecycle analytics, while ensuring that we support the best-of-breed solutions and frameworks in each of the layers above. In doing so, just as we work very closely with the broadest partner ecosystem of Agile/ALM/DevOps lifecycle vendors, we are looking forward to working with the leaders defining this critical and growing space of software lifecycle analytics. If you’re interested in working together on any of these elements by leveraging Tasktop Data, please get in touch!

Tasktop makes it rain in Las Vegas…..No, Really!

by Beth Beese, March 10th, 2015

It really is something else to see a rainstorm with lightning over the Las Vegas strip, but in retrospect I shouldn’t have been surprised after a week of non-stop hits for Tasktop at IBM’s InterConnect 2015 conference. The Tasktop team was a force to be reckoned with throughout the event, making appearances in the IBM seller program and in numerous speaking presentations on the main conference agenda, hosting client meetings and strategizing with our business partners.

Tasktop Booth at InterConnect 2015

The IBM InterConnect 2015 conference was a consolidation of three separate IBM conferences – Rational, Tivoli and WebSphere. Conference activities were split across two properties, with hundreds of presentations and breakout sessions over eight tracks. There was a massive exhibitor hall in the Mandalay Bay, and the dev@conference, with hands-on lab sessions and developer activities alongside the Executive Track and the Inner Circle sessions (sporting coffee bars with baristas and meeting rooms with big cushy chairs), was located in the MGM Grand.

Tuesday’s keynote panel discussion was hosted by guest speakers from the hit show Shark Tank, and an exclusive performance by the legendary band Aerosmith rocked the house on Wednesday night. This conference definitely aimed to please. With four thousand business partners and sixty three hundred customers attending, there was plenty of learning and networking to be done! I attended Tasktop meetings with both client and IBM executives, and I’m thrilled to say our value proposition of integrated SDLC is the golden key to unlocking successful DevOps implementation for clients in the recently restructured IBM.

interconnect3
Tasktop started our conference schedule by attending the IBM Systems Business Unit sales academy on Friday 2/20, with Emmitt Smith as the keynote speaker. Our sales staff participated in a variety of sessions for IBM sellers specific to the newly organized Systems Business Unit, while Tasktop VP of Customer Success Lance Knight attended the IBM sessions for Technical Enablement. Next up were the Inner Circle sessions, designed for customers to get inside product info from IBM and key partners like Tasktop. During Inner Circle sessions, clients also share their successes and challenges working with IBM software solutions, which can result in some pretty enthusiastic discussions. What happened inside is top secret, but I can tell you that Tasktop and our IBM product manager Gary Cernosek (seen in the photo above, along with Wesley Coelho, left, and Neelan Choksi, right) presented the integrated SDLC story to a room of client execs and we saw a lot of heads nodding. The message is loud and clear: integrating the software development process is key to deploying DevOps and Continuous Engineering best practices!

The conference kicked off on Sunday 2/22 with Tasktop’s booth in the exhibitor hall owning the corner between the DevOps-Continuous Engineering theater zone and the, ahem, refreshment station. How do you top four presentations in the DevOps-Continuous Engineering track and Tasktop’s CPO Dave West co-hosting a panel discussion on integrated software delivery tools? With an interactive whiteboarding session around customer software development integration patterns with VP of Product Development Nicole Bryan, of course! The Tasktop C-level team also met with IBM executive leaders and reinforced our OEM partnership throughout the conference. The good news is that Tasktop’s integration solutions are perfectly aligned with the value-driven reorganized structure of IBM’s complete portfolio of solutions.

Tasktop staff did not escape the lure of Las Vegas nightlife. We hosted an intimate cocktail party for the Systems Business Unit software sellers on Saturday night at the Rouge Lounge in the MGM. Guests were entertained with stories of the glory days from Tasktop president and blackjack guru Neelan Choksi. If you don’t know the background, click here. There was the posh Inner Circle reception at the Hakkasan nightclub in the MGM, and a poolside reception at Mandalay Bay, which moved indoors due to rain (in the desert). Tasktop staff also attended the Canadian team reception at the Shark Reef at Mandalay Bay and the System z party at Pulse nightclub. Needless to say, my feet were sore, but it was a great conference!


CA World – Building bridges between development and the PMO by scaling Agile

by Dave West, December 1st, 2014

Two weeks ago I spent three days in sunny yet chilly Las Vegas at CA World (yes, Vegas was suffering from a cold spell). CA World is the premier event for CA customers, a chance to hear what is new with CA Technologies, share customer experiences and connect with product management and CA leadership on the future of CA products and offerings. Tasktop had a booth in the CA Intellicenter. This zone included many tools, including CA PPM, the market-leading PPM tool.

We have a long relationship with CA, delivering OEM versions of Tasktop Sync and Tasktop Dev in support of their Agile planning and requirements tools. But what is interesting today is how customers are using Tasktop Sync to connect to other tools in the stack, and how organizations are marrying traditional PMO practices associated with allocation, planning and financial measurement to development teams following Agile development practices. Many of you have heard me talk about water-scrum-fall, and how, as Agile crosses the chasm to become mainstream, the reality of modern development is a hybrid of traditional approaches and Agile development. The first part of the phrase, ‘water-scrum’, references the marriage between traditional planning and reporting lifecycles and Agile or Scrum-based processes. During CA World I talked to many large companies who are wrestling with that need, and here are some of the observations I took away from the event:

Agile and the PMO can work together in ‘almost’ perfect harmony.

There are some in the Agile community who state that the only way to create an Agile enterprise is to discard all traditional planning models and embrace an approach that marries Agile teams to the business, allowing them to directly deliver business value in response to business need. Planning is done at the business level, allocating development resources by business priority. The role of the PMO is removed, because the business plans the work, thus negating the need for a middle-man. But the reality is that ‘the business’ is much more complex than one group, and the supporting applications are a spider’s web of interconnected, disparate systems. There is a need to manage that spider’s web, allowing Agile teams to serve one set of objectives whilst managing the large number of cross-team dependencies. The PMO, and the practices they provide, enable teams to bridge the gap between Agile teams (focused on clear business problems) and complex portfolio release problems. But it does require some level of structure and discipline.

Time to discard old ways of connecting the PMO and move to a disciplined approach

It is ironic that for many organizations the adoption of Agile at scale requires them to introduce more, rather than less, discipline. Existing ‘traditional’ processes provided only a limited amount of structure. They encouraged certain meetings and artifacts, but because they did not mirror the reality of modern software development and did not support the level of chaos it creates, they often required very skilled project managers or PMO staff to knit the processes together with manual, email-heavy, spreadsheet-centric practices. This knitting happened at least twice: once in the translation of the portfolio plan into development activities, and again when status reporting was required. These heroes buzz around the collection of teams, setting them in the right direction and then helping them report status in a way that makes sense to the original plan.

But with Agile, that process is bound to fail. Not only do Agile teams need to have a clear direction at the start, but work is grouped into much smaller batch sizes, encouraging more frequent feedback into the portfolio. That means the PMO heroes become a friction point in the process and will either greatly reduce the effectiveness of the development teams, or have to increase their workload. The reality is that these roles burn out, or the Agile teams get so frustrated that they ignore the PMO, providing only limited information. This lack of information results in a lack of planning foresight, leading to last-minute decisions and lots of emergencies.

Instead, smart organizations are actually stepping up their game and putting in place an automated, disciplined process that enables Agile teams and the PMO to work together. It requires defining what various artifacts mean and how those artifacts map together. For example, does development actually have a one-to-one mapping of its projects to the PMO projects, or is a project just a backlog fed by many projects? By making real, implementable decisions about how the lifecycles connect and then automating them, teams not only allow for frequent reporting, but also ensure that work flows from planning to development. It also enables, often for the first time, the ability to change a plan based on real feedback and data.

Resource Management is different, but manageable.

Yes, Agile teams do not like to think in terms of people, instead focusing on team capability (known as velocity). But at the end of the day people cost money and need to be accounted for. One of the key debates at CA World was the role of the resource and how timesheets fit into the model. And the answer is ‘it depends, but you have to pick one model and follow it consistently.’ For some organizations, work comes in, is planned by the PMO and allocated to people, and those people happen to be in teams working in an Agile way. Thus, tasks in CA PPM relate to stories in the development tool, work is defined by the story, and time is tracked in that context. But it is different for other organizations. Work is not really planned in detail within the PMO. Instead, it is allocated to a team based on some sizing metric. That work is then planned in the development/Agile tool of choice, stories are broken down into tasks, and those tasks are estimated and worked on by the development teams. All of that information is then passed back to the PMO, allowing a reconciliation of the original plan with the work being done. Of course there are many variations on these two planning themes, but that is ok, as long as the organization, or at least the team, does it consistently. Also, the different levels of planning can be taken advantage of, allowing an iterative, cross-lifecycle planning model.

So what is the bottom line…

CA World reminded me that Agile has grown up. There were many meetings and discussions at our booth where people who traditionally would have been anti-Agile were discussing not only the problems of Agile development, but how they can make it work within the constraints of their existing planning, reporting and accounting processes. Over the next few months we at Tasktop are going to be working on sharing some of the models we are seeing around scaling Agile and connecting the water-scrum part of the lifecycle. CA World not only got me excited again about the role of the PMO in Agile, but also introduced me to many organizations that are working out great solutions… Exciting times ahead, so watch this space.