Premium grade fuel for software lifecycle analytics engines

by Mik Kersten, March 24th, 2015

I’ve long taken inspiration from Peter Drucker’s caution that “if you can’t measure it, you can’t manage it.” Technological progress has been punctuated by advances in measurement, ranging from Galileo’s telescope to Intel’s obsession with nanometers. Our industry is starting to go through a profound transformation in measuring how software is built, but only after we work through some big gaps in how we approach capturing, reporting and making decisions on these metrics.


Exactly 40 years have passed since Frederick Brooks cautioned that measuring software in terms of man-months was a bad idea. Pretty much everyone I know who has read that book agrees with the premise. But pretty much everyone I know is still measuring software delivery in terms of man-months, FTEs, and equivalent cost metrics that are as misleading as Brooks predicted. Over the past year I’ve had the benefit of meeting face-to-face with IT leaders in over 100 different large organizations and having them take me through how they’re approaching the problem. The consistent theme that has arisen is that to establish meaningful measurement for software delivery, we need the following (where each layer is supported by the one below it):

  1. Presentation layer
     • Report generation
     • Interactive visualization
     • Dashboards & wallboards
     • Predictive analytics
  2. Metrics layer
     • Business value of software delivery
     • Efficiency of software delivery
  3. Data storage layer
     • Historical cross-tool data store
     • Data warehouse or data mart
  4. Integration infrastructure layer
     • Normalized stream of data and schema updates from each tool

Nothing overly surprising there, but what’s interesting is why existing approaches have not supported getting the right kind of reporting and analytics in the hands of organizations doing large scale software delivery.

The Presentation layer (1) is not the problem. This is a mature space filled with great solutions such as the latest offerings from Tableau and Geckoboard, as well as the myriad of hardened enterprise Business Intelligence (BI) tools. What these generic tools lack is any domain understanding of software delivery. This is where the need for innovation in the Metrics layer (2) comes in. Efforts at establishing software delivery metrics have been around as long as software itself, but given the vendor activity around them and the advances being made in lifecycle automation and DevOps, I predict that we are about to go through an important round of innovation on this front. A fresh take on software lifecycle metrics can be seen in Insight Ventures’ Periodic Table of Software Development Metrics.

Combining software delivery metrics with business value metrics is an even bigger opportunity, and one where the industry has barely scratched the surface. For example, Tasktop’s most forward thinking customers are already creating their own web applications that correlate business metrics, such as sales and marketing data, with some basic software measures. A lot of innovation is left on this front, and the way that the data is manifested in the Storage layer (3) must support both the business and the software delivery metrics.

The Data Storage layer (3) has a breadth of great commercial and open source options to choose from, thanks to the huge investment that vendors and VCs are making in big data. The most appropriate choice depends on the existing data warehouse/mart investment that’s in place, as well as the kind of metrics that the organization is after. For example, efficiency trend metrics may be best served by a time-series-oriented document store such as MongoDB, while a relational database can suffice for compliance reports.
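As a rough illustration of why trend metrics take a time-series shape (the data below is made up), consider an efficiency metric such as average defect cycle time, bucketed by the week of closure:

```python
# Hypothetical sketch: efficiency trend metrics are naturally
# time-series shaped -- one measurement per artifact per event,
# aggregated over time -- which is why a time-series-oriented store
# fits them, while a flat relational table suffices for
# point-in-time compliance reports.
from datetime import date

closed_defects = [
    {"created": date(2015, 3, 2), "closed": date(2015, 3, 6)},
    {"created": date(2015, 3, 3), "closed": date(2015, 3, 10)},
    {"created": date(2015, 3, 9), "closed": date(2015, 3, 12)},
]

def weekly_cycle_time(defects):
    """Average cycle time (days) grouped by ISO week of closure."""
    buckets = {}
    for d in defects:
        week = d["closed"].isocalendar()[1]
        buckets.setdefault(week, []).append((d["closed"] - d["created"]).days)
    return {week: sum(v) / len(v) for week, v in buckets.items()}

trend = weekly_cycle_time(closed_defects)
# One data point per time bucket -- the shape a trend dashboard consumes.
```

Each week becomes one point on a trend line; the same aggregation applies to throughput, defect arrival rate, or any other efficiency measure.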

For organizations that have already attempted to create end-to-end software lifecycle analytics, the biggest impediment to creating meaningful lifecycle metrics is clear: the Integration infrastructure layer (4). In the past, this was achieved by ETL processes, but that approach has fallen apart completely with the modern API-based tool chain. Each vendor has its own massive API set and its own highly customizable schemas and process models, and standards efforts, while important, are years away from sufficiently broad adoption.
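To make concrete what a “normalized stream” at layer (4) might look like, here is a minimal sketch; the field names and tool payloads are hypothetical, not any vendor’s actual API:

```python
# Hypothetical sketch: normalizing artifacts from two lifecycle tools
# into one common schema, so a downstream data store sees a single
# stream regardless of the source tool. Payload shapes are illustrative.

def normalize_jira(issue):
    """Map a JIRA-style issue to the common artifact schema."""
    return {
        "tool": "jira",
        "artifact_id": issue["key"],
        "type": issue["fields"]["issuetype"].lower(),
        "status": issue["fields"]["status"],
        "created": issue["fields"]["created"],
    }

def normalize_hp_alm(defect):
    """Map an HP ALM-style defect to the same common schema."""
    return {
        "tool": "hp_alm",
        "artifact_id": str(defect["id"]),
        "type": "defect",
        "status": defect["status"],
        "created": defect["creation-time"],
    }

stream = [
    normalize_jira({"key": "PROJ-42", "fields": {
        "issuetype": "Bug", "status": "Open", "created": "2015-03-01"}}),
    normalize_hp_alm({"id": 7, "status": "Open",
                      "creation-time": "2015-03-02"}),
]

# Every record now shares one schema, whatever tool it came from.
assert all(r.keys() == stream[0].keys() for r in stream)
```

The hard part in practice is not this mapping but keeping it correct across every version, customization, and schema change of every tool, which is why layer (4) is where end-to-end analytics efforts stall.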

Tasktop has long had a reputation for solving some of the hardest and least glamorous problems in the software lifecycle. Our effort is completely focused on expanding what we did with Tasktop Sync to create an entirely new data integration layer with Tasktop Data. Our goal is to support any of the Data Storage, Metrics or Presentation layers provided by our partner ecosystem. There are some truly innovative activities happening on that front, ranging from HP’s Predictive ALM, to IBM’s Jazz Reporting Service, to the Agile-specific views provided by Rally Insights. We are also working with industry thought leaders such as Israel Gat and Murray Cantor to make sure that the Data integration layer that we’re creating supports the metrics and analytics that they’re innovating.

What’s unique about our focus is that Tasktop Data is the only product that provides normalized and unified data across your myriad of lifecycle tools (4). We are the only vendor focused entirely on this 4th layer of software lifecycle analytics, while ensuring that we support the best-of-breed solutions and frameworks in each of the layers above. In doing so, just as we work very closely with the broadest partner ecosystem of Agile/ALM/DevOps lifecycle vendors, we look forward to working with the leaders defining this critical and growing space of software lifecycle analytics. If you’re interested in working together on any of these elements by leveraging Tasktop Data, please get in touch!

Tasktop makes it rain in Las Vegas…..No, Really!

by Beth Beese, March 10th, 2015

It really is something else to see a rainstorm with lightning over the Las Vegas strip, but in retrospect I shouldn’t be surprised after a week of non-stop hits for Tasktop at IBM’s InterConnect 2015 conference. The Tasktop team was a force to be reckoned with throughout the event, making appearances in the IBM seller program, delivering numerous presentations on the main conference agenda, hosting client meetings and strategizing with our business partners.

Tasktop Booth at InterConnect 2015

The IBM InterConnect 2015 conference was a consolidation of three separate IBM conferences – Rational, Tivoli and WebSphere. Conference activities were split across two properties with hundreds of presentations and breakout sessions over eight tracks. There was a massive exhibitor hall in the Mandalay Bay, and the dev@conference, with hands-on lab sessions and developer activities alongside the Executive Track and the Inner Circle sessions (sporting coffee bars with baristas and meeting rooms with big cushy chairs), was located in the MGM Grand.

Tuesday’s keynote panel discussion was hosted by guest speakers from the hit show Shark Tank, and an exclusive performance by the legendary band Aerosmith rocked the house on Wednesday night. This conference definitely aimed to please. With four thousand business partners and sixty-three hundred customers attending, there was plenty of learning and networking to be done! I attended Tasktop meetings with both client and IBM executives, and I’m thrilled to say our value proposition of integrated SDLC is the golden key to unlocking successful DevOps implementation for clients in the recently restructured IBM.

Tasktop started our conference schedule by attending the IBM Systems Business Unit sales academy on Friday 2/20 with Emmitt Smith as the keynote speaker. Our sales staff participated in a variety of sessions for IBM sellers specific to the newly organized Systems Business Unit, while Tasktop VP of Customer Success Lance Knight attended the IBM sessions for Technical Enablement. Next up were the Inner Circle sessions, designed for customers to get inside product info from IBM and key partners like Tasktop. During Inner Circle sessions, clients also share their successes and challenges working with IBM software solutions, which can result in some pretty enthusiastic discussions. What happened inside is top secret, but I can tell you that Tasktop and our IBM product manager Gary Cernosek (seen in the photo above, along with Wesley Coelho, left, and Neelan Choksi, right) presented the integrated SDLC story to a room of client execs, and we saw a lot of heads nodding. The message is loud and clear: Integrating the software development process is key to deploying DevOps and Continuous Engineering best practices!

The conference kicked off on Sunday 2/22 with Tasktop’s booth in the exhibitor hall owning the corner between the DevOps-Continuous Engineering theater zone and the, ahem, refreshment station. How do you top four presentations in the DevOps-Continuous Engineering track and Tasktop’s CPO Dave West co-hosting a panel discussion on integrated software delivery tools? With an interactive whiteboarding session on customer software development integration patterns with VP of Product Development Nicole Bryan, of course! The Tasktop C-level team also met with IBM executive leaders and reinforced our OEM partnership throughout the conference. The good news is that Tasktop’s integration solutions are perfectly aligned with the value-driven reorganized structure of IBM’s complete portfolio of solutions.

Tasktop staff did not escape the lure of Las Vegas nightlife. We hosted an intimate cocktail party for the Systems Business Unit software sellers on Saturday night at the Rouge Lounge in the MGM. Guests were entertained with stories of the glory days from Tasktop president and blackjack guru Neelan Choksi. If you don’t know the background, click here. There was the posh Inner Circle reception at the Hakkasan nightclub in the MGM, and a poolside reception at Mandalay Bay, which moved indoors due to rain (in the desert). Tasktop staff also attended the Canadian team reception at the Shark Reef at Mandalay Bay and the System Z party at Pulse nightclub. Needless to say, my feet were sore, but it was a great conference!


CA World – Building bridges between development and the PMO by scaling Agile

by Dave West, December 1st, 2014

Two weeks ago I spent three days in sunny, yet chilly Las Vegas at CA World (yes, Vegas was suffering from a cold spell). CA World is the premier event for CA customers, a chance to hear what is new from CA Technologies, share customer experiences and connect with product management and CA leadership on the future of CA products and offerings. Tasktop had a booth in the CA Intellicenter. This zone showcased many tools, including CA PPM, the market-leading PPM tool.

We have a long relationship with CA, delivering OEM versions of Tasktop Sync and Tasktop Dev in support of their Agile planning and requirements tools. But what is interesting today is how customers are using Tasktop Sync to connect to other tools in the stack, and how organizations are marrying traditional PMO practices associated with allocation, planning and financial measurement to development teams following Agile development practices. Many of you have heard me talk about water-scrum-fall, and how, as Agile crosses the chasm to the mainstream, the reality of modern development is a hybrid of traditional approaches and Agile development. The first part of the phrase, ‘water-scrum’, references the marriage of traditional planning and reporting lifecycles with Agile or Scrum-based processes. During CA World I talked with many large companies wrestling with that need; here are some of the observations I took away from the event:

Agile and the PMO can work together in ‘almost’ perfect harmony.

There are some in the Agile community who state that the only way to create an Agile enterprise is to discard all traditional planning models and embrace an approach that marries Agile teams to the business, allowing them to directly deliver business value in response to business need. Planning is done at the business level, allocating development resources by business priority. The role of the PMO is removed, because the business plans the work, thus negating the need for a middle-man. But the reality is that ‘the business’ is much more complex than one group, and the supporting applications are a spider’s web of interconnected, disparate systems. There is a need to manage that spider’s web, allowing Agile teams to serve one set of objectives whilst managing the large number of cross-team dependencies. The PMO and the practices they provide enable teams to bridge the gap between Agile teams, focused on clear business problems, and complex portfolio release problems. But it does require some level of structure and discipline.

Time to discard old ways of connecting the PMO and move to a disciplined approach

It is ironic that for many organizations the adoption of Agile at scale requires them to introduce more, rather than less, discipline. Existing ‘traditional’ processes provided only a limited amount of structure. They encouraged certain meetings and artifacts, but because they did not mirror the reality of modern software development and did not contain the chaos it creates, they often required very skilled project managers or PMO staff to knit the processes together with manual, email-heavy, spreadsheet-centric practices. This knitting happened at least twice: first in the translation of the portfolio plan into development activities, and second when status reporting was required. These heroes buzz around the collection of teams, setting them in the right direction and then helping them report status in a way that makes sense to the original plan. But with Agile, that process is bound to fail. Not only do Agile teams need a clear direction at the start, but work is grouped into much smaller batch sizes, encouraging more frequent feedback into the portfolio. That means the PMO heroes become a friction point in the process and will either greatly reduce the effectiveness of the development teams or have to increase their workload. The reality is that these roles burn out, or the Agile teams get so frustrated that they ignore the PMO, providing it only limited information. This lack of information results in a lack of planning and foresight, leading to last-minute decisions and lots of emergencies.

Instead, smart organizations are stepping up their game and putting in place an automated, disciplined process that enables Agile teams and the PMO to work together. It requires defining what the various artifacts mean and how those artifacts map together. For example, does development actually have a one-to-one mapping of its projects to the PMO’s projects, or is a project just a backlog fed by many projects? By making real, implementable decisions about how the lifecycles connect and then automating them, teams not only allow for frequent reporting, but also ensure that work flows from planning to development. It also enables, often for the first time, the ability to change a plan based on real feedback and data.
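A minimal sketch of that mapping decision (all names are illustrative, not any particular PPM or Agile tool’s API): either each PMO project maps one-to-one to a development project, or several PMO projects feed one shared team backlog, with each story carrying its funding project so status can be rolled back up.

```python
# Hypothetical sketch of two lifecycle-mapping models. Names are
# made up for illustration, not drawn from any real PPM or Agile tool.

pmo_projects = ["Billing Revamp", "Mobile Login", "GDPR Audit"]

# Model A: one-to-one -- each PMO project is its own dev project.
one_to_one = {p: "dev/" + p.lower().replace(" ", "-") for p in pmo_projects}

# Model B: many-to-one -- PMO projects feed a shared team backlog,
# and each story carries its funding project for reporting back.
backlog = [
    {"story": "OAuth flow", "funded_by": "Mobile Login"},
    {"story": "Invoice PDF", "funded_by": "Billing Revamp"},
    {"story": "Data retention job", "funded_by": "GDPR Audit"},
]

def status_rollup(backlog, done_stories):
    """Report completed stories per funding project -- the PMO's
    unit of planning -- from the team's single backlog."""
    return {p: sum(1 for s in backlog
                   if s["funded_by"] == p and s["story"] in done_stories)
            for p in pmo_projects}

report = status_rollup(backlog, {"OAuth flow"})
```

Either model can work; what breaks organizations is leaving the choice implicit and reconciling by hand instead of automating the rollup.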

Resource Management is different, but manageable.

Yes, Agile teams do not like to think in terms of people, instead focusing on team capability (known as velocity). But at the end of the day people cost money and need to be accounted for. One of the key debates at CA World was the role of the resource and how timesheets fit into the model. And the answer is ‘it depends, but you have to pick one model and follow it consistently.’ For some organizations, work comes in, is planned by the PMO, and is allocated to people, and those people happen to be in teams working in an Agile way. Thus, tasks in CA PPM relate to stories in the development tool; work is defined by the story, and time is tracked in that context. But it’s different for other organizations. Work is not really planned in detail within the PMO. Instead, it is allocated to a team based on some sizing metric. That work is then planned in the development/Agile tool of choice, stories are broken down into tasks, and those tasks are estimated and worked on by the development teams. All of that information is then passed back to the PMO, allowing a reconciliation of the original plan with the work being done. Of course there are many variations on these two planning themes, but that is ok, as long as the organization, or at least the team, does it consistently. Also, the different levels of planning can be taken advantage of, allowing an iterative, cross-lifecycle planning model.

So what is the bottom line…

CA World reminded me that Agile has grown up. There were many meetings, and discussions at our booth where people who traditionally would have been anti-Agile were discussing not only the problems of Agile development, but how they can make it work within the constraints of their existing planning, reporting and accounting processes. Over the next few months we at Tasktop are going to be working on sharing some of the models we are seeing around scaling Agile, and connecting the water-scrum part of the lifecycle. CA World not only got me excited again about the role of the PMO in Agile, but also introduced me to many organizations that are working out great solutions… Exciting times ahead, so watch this space.

You win with people: A look back at Lean Into Agile

by Jeff Downs, November 24th, 2014

As people entered the Fawcett Center on the campus of The Ohio State University for the inaugural Lean Into Agile conference, a wall-sized picture of legendary Buckeye football coach Woody Hayes greeted us. The quote, left, could not have been better foreshadowing for the theme of the day.

Michael Sahota used his opening keynote to break down the Agile Manifesto to a single phrase, “Individuals and interactions over processes and tools,” then further boiled it down to a single word: “PEOPLE”.

As a former tool administrator who loves using tools to solve problems, hearing the phrase “people over process and tools” always gives me a bit of anxiety. Still, I listened intently to the Agile community as they presented their case.

There was plenty of talk and consensus on the challenges of Agile. This always led to collaborative discussions about how certain elements of Lean and Agile can be leveraged to deliver better software. What resonated with me, however, was the candor around two themes:

Being Agile versus Doing Agile

Agile is not the goal of an organization. Reducing time to market, delivering high quality software that meets the customer’s needs–those are goals. Being Agile and Lean is a culture that allows an organization to achieve its goals. Agile and Lean should be used as frameworks, rather than strict methodologies, incorporating only the aspects that make sense for your business.

People over Process and Tools

Many stories were told throughout the day about struggles with mandated tools. From my experience in software testing and tool administration, I could definitely relate. And the message was clear: people shouldn’t have to struggle to succeed despite a tool; rather, put your people first and let them drive the processes and tool selection.

That last message is one where I can see Tasktop playing a key role for the Agile community. Yes, letting people choose tools that best meet their needs may lead to a larger collection of tools and disparate data. But through proper integration, the data can still synchronize and flow just as well as (or better than) if it were all one big tool suite. Tasktop bridges the gap – we connect tools, and more importantly, the people that use them.

I think we can all agree with the great Woody Hayes. You win with people. Let’s empower people to use the best tool for the job through integration.

What do Gene Kim, Agile, DevOps and Continuous Delivery Have in Common? Pretty much Everything.

by joyce.bartlett, November 19th, 2014

We are just back from the Agile, Continuous Delivery and DevOps Transformation Summit and, from what we can tell, it more than lived up to its promotion. For three days, software development and delivery professionals availed themselves of talks, educational sessions, vendor information and networking in the beautiful Bay Area. It’s our understanding that this was the first time Electric Cloud hosted an industry summit, but the attendees seemed very satisfied. At least half of the sessions used case studies to educate attendees on lean principles, the speakers were diverse, and the legendary Gene Kim hosted the event, providing the keynote and bringing a large following of Development and Operations professionals with him.

Kim is obviously a huge draw, and contributed substantially to the success of the summit, but the attendees really made the summit from our point-of-view. This was a crowd seriously focused on making DevOps work, solving DevOps problems and basically learning as much as they could about what kinds of tools and methodologies other organizations were employing to realize success with Agile and continuous delivery. They ate, slept and talked lean and achieving interoperability.

This kind of crowd makes it fun for Tasktop. There were lots of serious questions from people meeting difficult challenges. These are exactly the kind of conversations we like to be part of. We’re always happy to talk about planning, tools, collaboration, testing, process… the whole spectrum of the lifecycle.

After talking to lots of people, we came away with a reinforced observation: DevOps has become more of a state of mind than a series of actions or achievements. It’s a way of thinking about the hard work of software delivery. There were lots of conversations about the need for all of the disciplines involved in the software delivery process, the people and the tools, to work together more easily and closely. They were singing our song.

And it wasn’t 100% work. An interesting and fun trip to The Computer Museum added some perspective to the event. It’s hard to imagine that technology has come so far in a few decades. It seems like a long time for those of us who have been in the industry for a while, but it’s a speck on the time continuum. It’s also amazing (and a good thing) how much computer design has evolved. Those machines were as ugly as they were groundbreaking.


We hear that there were hundreds of people on the waiting list once the event reached capacity, so we feel pretty confident that you’ll be seeing this event again next year. In the meantime, we look forward to continuing the conversations with the people we met at the summit.

Southern Fried Agile: A shift in perspective

by Larry Maccherone, November 5th, 2014

It was a day of firsts. It was the first time where the name and logo for the event indicated the lunch menu (Yes, we had fried chicken for lunch, of course.). It was the first time that I got to see Dave West, Tasktop’s Chief Product Officer, deliver one of his clever, information-rich, and hilarious stand-up comedy… err… umm… keynote speeches. And, it was the first time for me discussing agile metrics insights with a large agile audience since I left Rally Software and became Tasktop’s Data Scientist.

It represented a big shift in perspective for me — from the perspective of one of the leading agile application lifecycle management (ALM) tool vendors, to a more heterogeneous perspective where there is both more than just one tool’s ALM data to analyze AND more than just ALM data from which to glean insight.

The perspective of others about me, I noticed, has also shifted. I was Rally’s quant. Now, I’m more of the agile community’s quant. One of the other speakers at the conference heard me speak for the first time and tweeted something to the effect that he would have normally skipped a talk by a vendor as not of much value, but that he was glad he came to my talk. Before, I was biased, now I am perceived as more neutral.

It’s amazing how a change of perspective can make all the difference.

Besides my personal perspective shifts, Southern Fried Agile was representative of a number of other perspective shifts ongoing in the agile community. The attendance was up by roughly 50% from prior years. They sold out in terms of attendance (over 600) AND sponsorship. I didn’t see a single room that wasn’t at least mostly full. Part of the credit for this goes to the excellent way in which it was run (shout out to Neville Poole and Kelley Horton, just two of the organizers that I had personal contact with). However, I think there is a shift in the industry going on here as well. We’ve been saying that, “agile is going mainstream” for a while now, but this is the first year where I think that’s totally true.

Beyond that though, I think there is a broadening of perspective occurring. It used to be that agile was the domain of developers with some input from QA. Now, DevOps, Architecture, Business Analysts, User Experience folks, and the rest of the business are getting into the act. We’re also using analytics more to make software engineering decisions. This broadening of the scope of agile is a definite change in perspective with significant repercussions.

There were many talks at the conference that represented this shift. The alternative title to Dave West’s keynote was, “Building a strategy that marries Scaled Agile, DevOps and Lean Analytics into a transformational approach that will kill any wicked witch”. Dave’s final advice was to focus on four things:

  1. Flow – How information moves around your organization
  2. Collaboration – How people communicate across teams
  3. Reporting / Analytics – What information we need to see
  4. Traceability – How things are connected and governed

Roughly half of the talks at this conference represented this shift, including:

  1. Eric King & Dr. Victoria Ann Kline: What Does Agile in the Non-IT Space Look Like
  2. Tim Wise: How to Successfully Scale Agile in Your Enterprise
  3. Roy Miller: NoOps: More Dev Less Ops
  4. Linda Butt & Todd Biedrzycki: Scaling Agile in the Real World
  5. Brad Murphy: Moving from Agile Software Dev to Scaled Business Agility & Radical Innovation

However, one talk more than any other represented this shift — Mark Wanish, David Poore, and Richard Thomas: “Optimizing the Whole” Development Business & Architecture Delivering in Harmony. I got the sense, both during their talk and perhaps more so when speaking to them outside of it, that it currently takes a certain kind of leadership and a ton of determination to spread agile outside of the development teams at a larger organization like Bank of America, where they work. The first step was to get the infrastructure, DevOps if you will, to work in a more agile manner. Internal IT shops are competing with (or at least being compared to) hosted services like Amazon Web Services. If you can get a virtual machine there in minutes and a few clicks, agile organizations shouldn’t put up with a 3-6 month lead time for hardware. They seem to have gotten past that hurdle (or wave, as they describe it). The next waves for them are architects, business analysts, and user experience folks.

Be sure to check out some of the presentations that have been posted. Keep coming back as they are still being added. My presentation hasn’t made it up there yet, but you can see the slides here.

Note: I still owe this audience a description of why I left Rally and came to Tasktop. Now that we’ve released our Tasktop Data product, I think some folks can start to guess, but I’m now free to talk about that openly and will come back with a blog post on this topic before too long.

New Tasktop Data product launched with Tasktop 4.0, unlocks Agile, ALM and DevOps

by Mik Kersten, October 27th, 2014

From Galileo’s telescope to the scanning electron microscope, scientific progress has been punctuated by the technology that enabled new forms of measurement. Yet in the discipline of software delivery, robust measurement has been elusive. When I set out on a mission to double developer productivity, I ended up spending a good portion of my PhD first coming up with a new developer productivity metric, and then even more time implementing a tool for measuring it (now a core part of Eclipse Mylyn). Over the past few years, while working with the largest software delivery organizations in the world, I’ve noticed almost every one of them going through a similar struggle. All are looking for the best ways to scale or improve their software delivery via enterprise Agile frameworks and tools, DevOps automation technologies, and end-to-end ALM deployments. The problem is that nobody is able to reliably measure the overall success of those efforts because we are missing the technology infrastructure that allows for measurement across software delivery disciplines, methods and tools.


Register for Sync 4.0 and Data Webinar to learn more

With the launch of Tasktop Data we have a single goal: to unlock the data flowing through the software lifecycle. New measurement ideas have recently arrived on the market, ranging from the metrics backing the Scaled Agile Framework (SAFe) to methods for tracking cycle time through the DevOps pipeline originating from Sam Guckenheimer. There’s also no shortage of tools out there to allow you to visualize such data, ranging from generic Business Intelligence (BI) tools to innovative new DevOps-specific reporting such as the IBM Jazz Reporting Service. The problem that’s plaguing any large-scale software delivery organization is that there’s simply no way to get at the end-to-end data to drive those metrics and reporting tools. Database-driven approaches such as ETL no longer work because databases do not contain the complex business logic of modern Agile/ALM/DevOps tools, and are additionally inaccessible for SaaS solutions. Single-tool approaches, such as Scrum or CI metrics, only work for one stage of the software lifecycle and cannot deliver end-to-end analytics such as cycle time. We need a new measurement technology in order to take the next step in improving how software is built. That new technology is Tasktop Data.

Tasktop has created two key innovations that make Tasktop Data possible. The first is our semantically rich data model of the end-to-end software lifecycle. This is at the core of the Tasktop products and allows us to map and synchronize artifacts across the various tools and levels of granularity that define software delivery. The second is the massive “integration factory” that allows us to test all of the versions of all the leading Agile, ALM and DevOps tools that we support. With Tasktop Data, we are leveraging this common model and all our integrations, allowing organizations to stream the data that defines their software lifecycle to the database & reporting solution of choice. What makes this new technology even more profound is that we are exposing the models within the Tasktop platform, enabling software lifecycle architects to author the models that will drive their reports. The end result is a real-time flow of clean lifecycle data into your reporting tool of choice. Running Enterprise Agile DevOps analytics and metrics that were previously impossible is now easy. Check out the demo above for a start-to-finish setup of Tasktop Data that connects Rally and HP ALM to Tableau in minutes. Then imagine this working for your entire tool chain, with your reporting solution of choice.
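As an illustration of the pattern (not Tasktop Data’s actual API or schema): once artifacts arrive as a normalized stream, loading them into the reporting store of choice is a plain bulk insert, and an end-to-end metric like cycle time becomes a simple query. A sketch using SQLite with made-up records:

```python
# Illustrative sketch only: a normalized artifact stream landing in a
# relational reporting store, then queried for an end-to-end metric.
# Table layout and records are hypothetical.
import sqlite3

records = [
    ("PROJ-1", "defect", "Closed", "2015-10-01", "2015-10-05"),
    ("PROJ-2", "story",  "Closed", "2015-10-02", "2015-10-09"),
]

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE artifact (
    id TEXT, type TEXT, status TEXT, created TEXT, closed TEXT)""")
conn.executemany("INSERT INTO artifact VALUES (?, ?, ?, ?, ?)", records)

# Cross-tool cycle time in days, straight from SQL.
(avg_days,) = conn.execute(
    "SELECT AVG(julianday(closed) - julianday(created)) FROM artifact"
).fetchone()
```

Because the stream is already normalized, the same query works whether an artifact originated in Rally, HP ALM, or any other connected tool.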

Tasktop 4.0 Connectors

Tasktop Data is being released as part of Tasktop 4.0, which includes significant updates across our entire product portfolio. The most notable is the fact that we’re releasing 6 new Sync connectors (BMC Remedy, GitHub, IBM Bluemix, Polarion ALM, Serena Dimensions RM and Tricentis Tosca) in addition to bringing Tasktop Dev up-to-speed with the latest developer tools (e.g., Eclipse/Mylyn Luna, Jenkins, Gerrit as well as commercial tools that leverage Dev such as HP Agile Manager).

We’re thrilled that the past seven years of creating the de facto integration layer for software delivery are now materializing in a whole new way of measuring and improving how software is built. This is just the start of a new journey, as the most interesting aspects of the data will arise from the ways our customers and partners leverage it to create unique and valuable insights into the software delivery process. For more information on how you can become a part of that journey:

Connecting Microsoft TFS, Visual Studio Online and ServiceNow

by Neelan Choksi, September 25th, 2014

I’m very excited to share this video with Tasktop followers, partially because I’ve been promising it for upwards of 3 months (Tom, Sam, and Will: sorry for the delays), and partially because this has been an interesting integration case that customers are asking for more and more.

Conceptually, large companies are having to deal with heterogeneous tool stacks where developers are in one tool, testers are in a second tool, and the help desk is in a third tool. When the tools are a mixture of on-premise and cloud, the problems are exacerbated even if the tool is from a single vendor.

In this example, we highlight an integration via Tasktop Sync among:
– Microsoft’s Team Foundation Server (being used for test management by the QA team),
– Microsoft’s Visual Studio Online (being used for defect management by the development organization), and
– ServiceNow (being used for IT Service Management by the Help Desk / Support Team)


Sync Studio Integration Visualizer showing the integration among ServiceNow, Microsoft TFS and Visual Studio Online

Of course, every customer’s stack is different. Tasktop’s breadth of connectors allows each customer, each line of business within a company, and frankly even a couple of teams to mix and match from all of the integrations we support, ensuring the right information flows at the right time regardless of the heterogeneity of the stack. And we are coming out with new integrations on a regular basis, so if we don’t support your particular tool today, come talk to us and let us know what you are looking for.

The great thing about connecting development to test to service management and the help desk is that it highlights a couple of the SLI/DevOps Integration Patterns we’ve been developing over the past decade. The patterns most relevant in this case are Defect Unification and Help Desk Escalation, and the benefits are:
– making sure that when a problem is found in ServiceNow it gets to the developers who can fix it and testers who can ensure the fix is tested,
– making sure test and dev are in sync and the right actions happen at the right time, and
– closing the loop by ensuring that the help desk knows when a defect they’ve identified has been fixed, so they can let the customer (whether internal or external) know that it has been fixed.
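The Help Desk Escalation flow above can be sketched in a few lines. The payload fields and state names below are hypothetical illustrations, not the actual ServiceNow or Tasktop Sync schemas: the point is the round trip from incident to defect and back.

```python
# Hedged sketch of Help Desk Escalation: an incident is escalated into a
# development defect, and the defect's resolution flows back to the incident.
# All field names and states are invented for illustration.

STATE_MAP = {"Resolved": "Fixed - Pending Verification", "Closed": "Fixed"}

def escalate(incident):
    """Create a dev defect from a help-desk incident (Defect Unification)."""
    return {"title": incident["short_description"],
            "origin": incident["number"], "state": "New"}

def sync_back(defect, incident):
    """Close the loop: tell the help desk when the defect is fixed."""
    if defect["state"] in STATE_MAP:
        incident["state"] = STATE_MAP[defect["state"]]
    return incident

incident = {"number": "INC0001", "short_description": "Login fails", "state": "Open"}
defect = escalate(incident)
defect["state"] = "Resolved"          # developers fix and resolve the defect
print(sync_back(defect, incident)["state"])  # Fixed - Pending Verification
```

In a real deployment the state mapping is configured per tool pair in the mapping editor rather than hard-coded, but the shape of the round trip is the same.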

Additional Screenshots

– vso-from-tfs-sn
– vso-from-tfs-sn-side-by-side
– processing
– mapping editor

Tasktop Teams up with Appfire – A Leading Atlassian Platinum Expert

by Wesley Coelho, September 17th, 2014

We’re delighted to be kicking off a new partnership with Appfire to more effectively deliver DevOps integration to the Atlassian JIRA community. Atlassian has always done things a little differently from other vendors in this space. For example, while most companies’ top leadership wear suits, at Atlassian you can identify the most senior people by their Converse sneakers and hoodies. Another way Atlassian operates differently is by offering an almost entirely self-service model with no salespeople. Yes, Atlassian is dipping its toes into the world of sales with new Enterprise Advocates and Technical Account Managers. But there are currently only 7 of these folks in a 1000+ person company with a massive customer base and they are just scratching the surface.

So let’s say you’re a big bank with 60+ instances of JIRA and you need a little more help than just purchasing the software off the website with a credit card. Where do you get enterprise-level consulting to make this work? This is where the Atlassian Platinum Experts come in. The Experts are companies that partner with Atlassian to provide enterprise sales and services for large customers. And Appfire is among the very top providers for the Atlassian community.




Me and George Lannan from Appfire at Atlassian Summit 2014

Because Atlassian and their customers do things a little differently, it makes a lot of sense to partner with an organization that knows both the products and the community inside and out, including the dress code. Our new partnership with Appfire will make it easier for this community to discover and deploy Tasktop integration technology. More importantly, we’ll work together to help customers benefit from the ability to connect JIRA with the rest of the enterprise development tool stack without ripping and replacing existing investments. Exciting times.

For more information check out the news release or contact us.

Time to put Security into the Software Development Lifecycle

by Dave West, September 4th, 2014

On September 4, 2014, WhiteHat and Tasktop announced their partnership while simultaneously introducing the WhiteHat Integration Server. The WhiteHat Integration Server is an OEM of Tasktop Sync technology that includes a connector for WhiteHat Sentinel along with a selection of additional connectors. The addition of security to the Tasktop ecosystem is important for many reasons.

Security must be deeply integrated into software development and delivery

Information security has been an important topic since the advent of computing, but over the last three years, high-profile security breaches have focused everyone’s attention on ensuring their web applications and sites are not easy pickings for crackers. Yet even in organizations where information security matters, ensuring it remains a separate activity from the normal development process. That disconnect slows down development, since major security decisions are often left to the end. Agile and Continuous Delivery have taught us the value of integrating disciplines, but for many organizations that integration is difficult. The release of the WhiteHat Integration Server and the creation of a Tasktop Sync connector for Sentinel provide automation that connects security vulnerabilities to defects, stories, issues and the rest of the lifecycle artifacts. This will allow organizations that use WhiteHat to embed security into the software development lifecycle earlier – reducing rework, increasing quality and visibility, and ultimately improving time-to-market.

Complete information enables better decisions

Software delivery, like all business processes, is about trade-offs. As software professionals we have to balance the needs of time to market, architecture, features and quality. The iron triangle of software delivery tells you that of quality, features and cost, you can have only two. But the most worrying part of these compromises isn’t the fact that organizations are making them; it is that they are making them without a complete view of all the information. Feature leads are making decisions about their ever-growing list of features; testers are looking at defect lists; and project managers are trying to work out what to do with a project plan that is no longer valid. Security is yet another trade-off to make, and WhiteHat Sentinel provides you with great information on what, why and how security vulnerabilities and issues will undermine your website or web application. But often this information is separated from the other defects, requirements and issues. Without a complete, single view of the truth, software delivery and business leadership are making decisions without all the facts. With the release of the WhiteHat Integration Server, organizations can synchronize security artifacts into the right reporting and planning tools, enabling decisions based on a more complete view of the truth.

It is all about flow, not access

Initial attempts to provide developers with information from security tools have focused on the IDE, allowing security observations to be surfaced within the developer’s IDE. The WhiteHat Integration Server surfaces these observations in a different way. Instead of just enabling security vulnerabilities to be surfaced in the IDE, the integration server synchronizes the information into the tools managing development work – at the server level. By synchronizing security vulnerabilities with tools such as JIRA, Microsoft TFS, IBM RTC, Rally or VersionOne, a developer gets a consistent and integrated view of their work, rather than a separate list of work items from the security tool. This allows them to manage security work in the same manner as other work, which is not only a key objective of development approaches such as Agile, but also fundamental to building high-performance teams. Synchronizing the security information also lets you extend information in both artifacts, allowing the work item in a tool like JIRA to carry additional development-specific information without complicating the security artifact.
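The mapping from a vulnerability to a work item can be sketched as follows. This is an illustration only, not the WhiteHat Sentinel or Tasktop API; every field name and the severity-to-priority mapping are assumptions. The key design point from the text survives: the work item carries a back-reference to the security artifact, so development-specific fields never have to live in the security tool.

```python
# Hedged sketch: turn a security vulnerability into a first-class work item.
# All field names, severities and priorities are hypothetical.

def vulnerability_to_work_item(vuln):
    return {
        "summary": "[Security] {} at {}".format(vuln["class"], vuln["url"]),
        "priority": {"critical": "P1", "high": "P2"}.get(vuln["severity"], "P3"),
        "labels": ["security"],
        "external_id": vuln["id"],  # back-reference to the security artifact
    }

vuln = {"id": "SENT-7", "class": "SQL Injection", "url": "/login", "severity": "critical"}
item = vulnerability_to_work_item(vuln)
print(item["summary"])   # [Security] SQL Injection at /login
print(item["priority"])  # P1
```

Because the work item looks like any other defect or story, it can be planned, assigned and reported on with the rest of the backlog rather than in a separate security queue.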

It’s more important than ever to connect security teams to their colleagues

The bottom line is that security – like the PMO, Agile teams, quality and service management – must be integrated in real-time to allow rapid, agile, and informed software delivery. The release of the WhiteHat Integration Server enables customers of WhiteHat to take the next step – connecting their security professionals to the rest of the software development and delivery lifecycle, in real-time. And from a Tasktop point of view, this is another BIG STEP in our mission of connecting the world of software delivery.

Things continue to get more exciting and more secure at Tasktop.

Dave