Archive for the ‘Developers’ Category

Faster VMs with Vagrant and Chef

Wednesday, April 1st, 2015

Developing and testing changes in an environment that is the same as the deployment environment is one of the magic ingredients of the DevOps way. For engineers at Tasktop working on the Integration Factory, provisioning a new VM can occur multiple times in a day, so any inefficiencies in the process are painful. I recently stumbled across vagrant-cachier which reduces network usage and speeds up local Vagrant-based provisioning, drastically improving VM provisioning times.

Install vagrant-cachier as follows:

$ vagrant plugin install vagrant-cachier

Then add the following to your ~/.vagrant.d/Vagrantfile:

Vagrant.configure("2") do |config|
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box
  end
end

That’s all! The next time you provision a VM using Vagrant, you should see a local cache being created under ~/.vagrant.d/cache.

vagrant-cachier keeps a local cache of dependencies, including packages fetched by apt-get, files downloaded by Chef, and anything else that lands under /var/cache.
This technique is especially helpful for remote developers who don’t have a 100Gb Ethernet link to the servers hosting VM dependencies.
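Beyond the global default above, vagrant-cachier can also be tuned per project. Here is a minimal sketch of a project Vagrantfile that opts in to specific cache buckets rather than relying on auto-detection (the box name is illustrative; the `:apt` and `:chef` bucket names follow the plugin’s documentation):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu-14.04-x86"  # example box name

  # Guarding on has_plugin? keeps this Vagrantfile usable for
  # teammates who haven't installed vagrant-cachier.
  if Vagrant.has_plugin?("vagrant-cachier")
    # Share one cache among all VMs created from the same base box.
    config.cache.scope = :box

    # Enable only the buckets this VM actually uses.
    config.cache.enable :apt   # caches /var/cache/apt/archives
    config.cache.enable :chef  # caches the Chef file cache path
  end
end
```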

It’s easy to try it yourself and collect before/after results. Here’s what I observed before vagrant-cachier:

$ time vagrant up 
Bringing machine 'default' up with 'virtualbox' provider...
[default] Importing base box 'ubuntu-14.04-x86'...
[default] Matching MAC address for NAT networking...
[default] Setting the name of the VM...
real	4m44.142s
user	0m4.895s
sys	0m3.132s

After enabling vagrant-cachier:

$ time vagrant up 
Bringing machine 'default' up with 'virtualbox' provider...
[default] Importing base box 'ubuntu-14.04-x86'...
[default] Matching MAC address for NAT networking...
[default] Setting the name of the VM...
real	3m18.506s
user	0m7.617s
sys	0m5.261s

In this real-world example, a simple VM running MySQL and downloading its dependencies over a VPN, I was able to reduce provisioning time by 30%! The absolute savings are of course larger for VMs that have more dependencies.
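The 30% figure can be checked directly from the two `time` outputs above with a quick back-of-the-envelope calculation (in Ruby, since a Vagrantfile is Ruby anyway):

```ruby
# Wall-clock ("real") provisioning times from the runs above, in seconds.
before = 4 * 60 + 44.142   # 4m44.142s without vagrant-cachier
after  = 3 * 60 + 18.506   # 3m18.506s with vagrant-cachier

reduction_pct = (before - after) / before * 100
puts reduction_pct.round(1)   # ≈ 30.1
```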

Many thanks to @fgrehm for vagrant-cachier, which helps to eliminate much of the pain of waiting for VMs to come up when using Vagrant and Chef.

What do Gene Kim, Agile, DevOps and Continuous Delivery Have in Common? Pretty much Everything.

Wednesday, November 19th, 2014

We are just back from the Agile, Continuous Delivery and DevOps Transformation Summit and, from what we can tell, it more than lived up to its billing. For three days, software development and delivery professionals availed themselves of talks, educational sessions, vendor information and networking in the beautiful Bay Area. It’s our understanding that this was the first time Electric Cloud hosted an industry summit, but the attendees seemed very satisfied. At least half of the sessions used case studies to educate attendees on lean principles, the speakers were diverse, and the legendary Gene Kim hosted the event, providing the keynote and bringing a large following of development and operations professionals with him.

Kim is obviously a huge draw and contributed substantially to the success of the summit, but from our point of view the attendees really made the event. This was a crowd seriously focused on making DevOps work, solving DevOps problems and learning as much as they could about the tools and methodologies other organizations were employing to realize success with Agile and continuous delivery. They ate, slept and talked lean principles and interoperability.

This kind of crowd makes it fun for Tasktop. There were lots of serious questions from people meeting difficult challenges. These are exactly the kind of conversations we like to be part of. We’re always happy to talk about planning, tools, collaboration, testing, process… the whole spectrum of the lifecycle.

Talking to lots of people reinforced our observation that DevOps has become more of a state of mind than a series of actions or achievements. It’s a way of thinking about the hard work of software delivery. There were lots of conversations about the need for all of the disciplines involved in the software delivery process, the people and the tools, to work together more easily and closely. They were singing our song.

And it wasn’t 100% work. An interesting and fun trip to the Computer History Museum added some perspective to the event. It’s hard to imagine that technology has come so far in a few decades. It seems like a long time for those of us who have been in the industry for a while, but it’s a speck on the time continuum. It’s also amazing (and a good thing) how much computer design has evolved. Those machines were as ugly as they were groundbreaking.


We hear that there were hundreds of people on the waiting list once the event reached capacity, so we feel pretty confident that you’ll be seeing this event again next year. In the meantime, we look forward to continuing the conversations with the people we met at the summit.

Time to put Security into the Software Development Lifecycle

Thursday, September 4th, 2014

On September 4, 2014, WhiteHat and Tasktop announced their partnership and simultaneously introduced the WhiteHat Integration Server. The WhiteHat Integration Server is an OEM of Tasktop Sync technology that includes a connector for WhiteHat Sentinel along with a selection of other connectors. The addition of security to the Tasktop ecosystem is important for many reasons.

Security must be deeply integrated into software development and delivery

Information security has been an important topic since the advent of computing, but over the last three years, high-profile security breaches have focused everyone’s attention on ensuring their web applications and sites are not easy pickings for crackers. Yet even in organizations where information security is important, ensuring it remains a separate activity from the normal development process. That disconnect slows down development, since major security decisions are often left to the end. Agile and Continuous Delivery have taught us the value of integrating the disciplines, but for many organizations that integration is difficult. The release of the WhiteHat Integration Server and the creation of a Tasktop Sync connector for Sentinel provide automation that connects security vulnerabilities to defects, stories, issues and the rest of the lifecycle artifacts. This will allow organizations that use WhiteHat to embed security into the software development lifecycle earlier – reducing rework, increasing quality and visibility, and ultimately improving time-to-market.

Complete information enables better decisions

Software delivery, like all business processes, is about trade-offs. As software professionals we have to balance the needs of time to market, architecture, features and quality. The iron triangle of software delivery tells you that when considering quality, features or cost – you can have only two. But the most worrying part of these compromises isn’t the fact that organizations are making them; it is that they are making them without a complete view of all the information. Feature leads are making decisions about their ever-growing list of features; testers are looking at defect lists; and project managers are trying to work out what to do with a project plan that is no longer valid. Security is yet another trade-off to make, and WhiteHat Sentinel provides you with great information on what, why and how security vulnerabilities and issues will undermine your website or web application. But often this information is separated from the other defects, requirements and issues. Without a complete, single view of the truth, software delivery and business leadership are making decisions without all the facts. With the release of the WhiteHat Integration Server, organizations can synchronize security artifacts into the right reporting and planning tools, enabling decisions to be made based on a more complete view of the truth.

It is all about flow, not access

Initial attempts to give developers access to information from security tools have focused on surfacing security observations within the developer’s IDE. The WhiteHat Integration Server surfaces these observations in a different way. Instead of just enabling security vulnerabilities to be surfaced in the IDE, the integration server synchronizes the information into the tools managing the work for development – at a server level. By synchronizing security vulnerabilities with tools such as JIRA, Microsoft TFS, IBM RTC, Rally, or VersionOne, a developer gets a consistent and integrated view of their work, rather than a separate list of work items from the security tool. This allows them to manage security work in the same manner as other work, which is not only a key objective for development approaches such as Agile, but also fundamental to building high-performance teams. Synchronizing the security information also lets you extend information in both artifacts, allowing the work item in a tool like JIRA to carry additional development-specific information without complicating the security artifact.

It’s more important than ever to connect security teams to their colleagues

The bottom line is that security – like the PMO, Agile teams, quality and service management – must be integrated in real-time to allow rapid, agile, and informed software delivery. The release of the WhiteHat Integration Server enables customers of WhiteHat to take the next step – connecting their security professionals to the rest of the software development and delivery lifecycle, in real-time. And from a Tasktop point of view, this is another BIG STEP in our mission of connecting the world of software delivery.

Things continue to get more exciting and more secure at Tasktop.


EclipseCon 10 years old

Thursday, April 17th, 2014

This was the tenth year of EclipseCon and my fourth time attending. I have been using the Eclipse IDE for over 12 years, and so much has changed with technology and developers’ expectations. At the beginning, just having a single Java IDE that was smart enough to edit, compile, run, and debug your code was a major breakthrough. For many years, the trend was toward integration, with the expectation that all tools would be absorbed into the IDE. Now that trend is reversing, and developers are starting to expect IDEs that can seamlessly connect to tools, allowing them to use those tools in the way that makes the most sense for them.

The most interesting theme for me at this conference was trying to figure out how developers will be working 10 years from now.

The future of the IDE

The Orion IDE is a prime example of this reverse-integration trend. Orion is a hosted IDE with a light-weight plugin model. One way that the plugin model allows integration with third party tools is by linking out to them in other browser pages. This allows developers to do programming tasks in the IDE, but still have easy access to external tools for deployment, testing, and monitoring. Ken Walker gave a good talk showing this, cheekily called Browser IDEs and why you don’t like them.

Another example of this is the Mylyn hosted task list, which Gunnar Wagenknecht and I introduced in our talk Unshackling Mylyn from the Desktop. The main idea is that current Mylyn task lists are restricted to working inside of a single Eclipse instance, but developers are looking to interact with tasks outside of the IDE (in the browser, on the desktop, on their phones, etc). The hosted task list provides developers with this capability using different kinds of clients that all operate on a single notion of a task list that is hosted remotely. We provide a simple API so that third parties can provide their own clients. What we presented at EclipseCon was early work and we are looking forward to showing more of this at future conferences.

Lastly, Martin Lippert and Andy Clement, my former colleagues at Pivotal, introduced Flux, a novel approach to a more flexible development environment. The main idea is that developers may want the simplicity of developing in the cloud, but also don’t want to give up the possibility to develop in a more traditional way on the desktop. Flux allows you to connect your local workspace to a cloud host that provides services like editing, compilation, and refactoring. This way, you can work in the cloud while still being able to drop down to the desktop whenever you need to. Watch the screencast.

It is still early work, but it is quite compelling and I’m looking forward to seeing how this progresses.

Java 8

It shouldn’t be surprising that the Java 8 track was the most popular at EclipseCon this year. Java 8 was released in the middle of Alex Buckley’s The Road to Lambda talk. What I thought was most interesting about this talk was that it focused on the design decisions around implementing Java 8 lambdas, why it took so many years to do so, and why Java is now a better language because of its relatively late adoption of lambdas. I found Let’s make some 0xCAFEBABE particularly interesting with its deep dive into byte code. Lastly, API design in Java 8 showed how libraries and APIs can improve by taking advantage of new language features.


This year, I helped organize an Eclipse hackathon. It was a great experience. We had over 50 attendees, split roughly evenly between project committers and new contributors. The goal was for new contributors to work on some real bugs in Eclipse while being helped by project committers, who guided them through the process of setting up a workspace, choosing a bug, and submitting a patch. By the end of the evening we had 7 contributions accepted into the code base. It was encouraging to see so much passion for working with open source.

Everyone's hacking!

There are plenty more photos from the event on the Eclipse Foundation’s Flickr stream.

It was truly a wonderful conference and there was so much more including 3 excellent keynotes and of course lots and lots of beer.

Now that I’m back, I need to catch up on sleep, but I’m already looking forward to next year!

Tasktop Dev 3.5 and Mylyn 3.11 Released

Monday, April 14th, 2014

Tasktop Dev 3.5 and Mylyn 3.11 are now available. These releases include some cool new features that result from the combined efforts of many people in the Mylyn community and at Tasktop.

Speaking as a Tasktop Dev and Mylyn user, I am already loving the new support for finding text in the task editor. It was originally contributed to Mylyn Incubator a few years ago, but unfortunately the implementation suffered from serious performance problems. Recently I had the pleasure of supervising a Co-op student at Tasktop, Lily Guo, as she reimplemented it with dramatically better performance and improved the UI. Thanks to her efforts this very useful feature is now released and the task editor supports searching for text within the comments, description, summary, and private notes.

Screenshot of Find in the Task Editor

Another long-awaited feature in this release is task-based breakpoint management, which extends the concept of task context to include Java breakpoints. This was implemented as part of a Google Summer of Code project by Sebastian Schmidt, a graduate student at the Technical University of Munich. It provides several important benefits for Java developers. First, the debugger will not stop at breakpoints that aren’t related to the active task. Second, only breakpoints created while the task was active will appear in the IDE – when working on a task, the breakpoints view and Java editors are no longer cluttered with dozens of irrelevant breakpoints. Because the breakpoints related to a task are only present while that task is active, there is no need to delete or disable these breakpoints – which often contain valuable information such as which lines of code and which runtime conditions trigger a defect – when a task is complete. Finally, breakpoints can be shared with other developers as part of task context.

Screenshot of the context preview page showing breakpoints in the context

In a single view, the Hudson/Jenkins connector provides quick access to status information about selected builds across multiple build servers, even when you are offline. This includes information about build status, build stability, and test failures. But one thing I realized was missing was a quick summary of how many builds are passing, unstable, and failing. This information is now displayed at the top of the Builds view and, when the view is minimized, in a tooltip, making it really easy to tell when there’s a change in the number of failing or unstable builds.

Screenshot of builds view showing summary of build statuses

This release also includes a number of bug fixes and enhancements to the Gerrit connector. Among them are support for comparing images, buttons for navigating to the next/previous inline comment in the compare editor, and a code review dashboard that lets you see which of your changes have been reviewed and whether they have passing or failing builds. The connector also remembers your previous votes on a patch set, so that posting additional comments doesn’t reset your vote. Thanks to Jacques Bouthillier and Guy Perron from Ericsson for their work on comment navigation and the dashboard.

Screenshot of the Code Review Dashboard

To help you prioritize the tasks you are responsible for, a “Show only my tasks” filter has been added to the task list, and the tooltip uses the gold person icon to accentuate those tasks.

Screenshot of the task list showing the new filter button

Tasktop Dev 3.5 is built on Mylyn 3.11, including all of the above features. This release includes a number of bug fixes as well as support for VersionOne 2013. We have also upgraded Tasktop Dev for Visual Studio to support Visual Studio 2013, bringing the benefits of a unified task list to users of that IDE.

For more information, see Tasktop Dev New & Noteworthy and Mylyn New & Noteworthy, or try out Tasktop Dev and Mylyn for yourself.

When it comes to Software Delivery, The E in Email Stands for Evil

Thursday, March 27th, 2014

Dr Evil - Photo courtesy of New Line Media

Most organizations will experience failed software projects. They won’t necessarily crash and burn, but they will fail to deliver the value the customer wants. In other words, the software projects will not deliver an appropriate level of quality for the time and resources invested.

The extent of failed software projects is calculated every year in the Standish Chaos report, which put software failure at 61% in 2013. There are many reasons for that kind of project failure rate, ranging from poor requirements processes to badly managed deployment and a collection of other issues. Many authors have written about this, from the industry-changing first work by Fred Brooks, The Mythical Man-Month, to more recent works by the Agile and lean communities, but I don’t wish to re-hash these ideas here. I do, however, want to point out something that causes much trouble, pain and confusion in software projects: email, when used as a primary collaboration and process tool.

Imagine the following situation…

A tester discovers a bug. He documents it in his tool of choice (for this example, HP Quality Center), but because it is an important and interesting bug, he also sends an email to three developers who have worked with him on previous bugs. One of those developers replies directly to our tester with notes. The tester replies, adding a business analyst who is no longer involved in this app. Said analyst forwards the email to another business analyst, who replies to the tester and yet another developer. The tester, business analysts and developers all agree on a solution, but the tester does not document this in Quality Center. That night, a CSV comes out of Quality Center and is converted into a spreadsheet by the project manager, who allocates the bugs to the team. The special bug that the tester and colleagues were working on is allocated to a different developer, someone not involved in the email discussion. This developer talks to a whole different set of people to build a resolution.

Sounds far-fetched? It’s not. I have seen much more complex communication networks created on projects. They resulted in confusion and a general disconnect between teams. And this issue is exacerbated across the departmental boundaries of QA, Dev, the PMO and operations, because email is a poor way to collaborate. Why? Because the “TO” and “CC” lists vary depending on whom the sender knows, and the original bug is just a memory by the 2nd or 3rd reply. Email isolates teams rather than uniting them. In fact, I would go one step further: if email has become one of the primary ways your project team collaborates, email has become Evil from a project success point of view.

To quote Wikipedia, ‘Evil, in its most general context, is taken as the absence of that which is ascribed as being good.’ This description accurately describes email when it’s used to drive development, resolve issues or communicate progress. Most email begins with good intentions – adding additional people, replying out of the goodness of your heart – but after the first few replies, it descends into layers of confusion, miscommunication and errors.

Email is evil because it:

  1. Does not automatically go to the right people
  2. Does not always include all the right information
  3. Makes it easy to drop and add people, adding to confusion
  4. Is not stored, indexed or reference-able after the fact (unless you are the NSA, but that is a whole different blog post ;))
  5. Holds no status or meta-data that allows for quick search or filtering

Maybe labeling email evil is a bit dramatic, but you get the idea. Email can easily, if not effectively managed, cause problems in your software projects. For that reason, we should reduce our reliance on using email to drive real work.

If you doubt your reliance on email, answer the following question: Can you stop using email for project related activities for one week and instead use your systems, processes and tools to get the work done?

If your answer is no, what can you do instead?

Ideally, use the tools you are meant to be using to manage the artifacts they are meant to manage. For example, a tester shouldn’t need to email anyone. The defects he identifies should be automatically visible to the right group of developers, project managers and fellow testers. Everyone should be able to collaborate on that defect in real time–with no need to send an email, or update a spreadsheet. In fact, testers and their developer colleagues should not have to leave the comfort of their tools of choice to manage their work. Comments, descriptions and status should be available to everyone in the tool they’re using.

Admittedly, the company I work for develops integration software, so I come from that world. But even if you build integrations yourself or use tools that are tightly integrated, the value of working in your tool of choice – while the lifecycle is automatically managed by notifying the correct people and capturing all collaboration in the context of one set of data – is massive. Miscommunication, misunderstanding and general disconnects among teams and disciplines have a huge and negative impact on projects. How many times have you thought a problem was resolved when it wasn’t, because you were ‘out of the loop’? We would find it unacceptable if our e-retailer or bank didn’t share information across processes, yet we accept that our software delivery lifecycle is disconnected and relies on manual processes like email and spreadsheets. This problem is only made more acute by the adoption of Agile methods, lean thinking and DevOps – movements that encourage smaller batch sizes, faster feedback and more team collaboration.

I encourage you to do two things:

  1. Measure the amount of time/work you are spending/doing in email
  2. Replace that with integration or use of common tools

It is time to use email for what it is intended for: sending out happy hour notifications and deciding what pizza to buy at lunch.

Tasktop 2.8 released, Serena partnership announced, death to timesheets

Tuesday, August 6th, 2013

Filling out time sheets is about as fulfilling as doing taxes. This mind-numbing activity is an interesting symptom of what’s broken with the way we deliver software today. What’s worse than the time wasted filling them out is the fact that the numbers we fill in are largely fictitious, as we have no hope of accurately recalling where time went over the course of a week, given that we’re switching tasks over a dozen times an hour. As Peter Drucker stated:

Even in total darkness, most people retain their sense of space. But even with the lights on, a few hours in a sealed room render most people incapable of estimating how much time has elapsed. They are as likely to underrate grossly the time spent in the room as to overrate it grossly. If we rely on our memory, therefore, we do not know how much time has been spent. (Peter Drucker. The Essential Drucker, ch. 16. Know your Time)

Tracking time is not a problem. When done well it’s a very good thing, given that time is our most scarce resource. Done right, time tracking allows us to have some sense for what the burn downs on our sprints are, and to predict what we will deliver and when. It allows us to get better at what we do by eliminating wasteful activities from our day, such as sitting and watching a VM boot up or an update install.

Effective knowledge workers, in my observation, do not start with their tasks. They start with their time. And they do not start out with planning. They start out by finding where their time actually goes. (Peter Drucker. The Essential Drucker, ch. 16. Know your Time)

Drucker was a big advocate of time tracking systems for individuals. With Agile, we have now learned how effective tracking story points and actuals can be for Scrum teams. Yet all of this goodness feels very distant when the last thing that stands between you and Friday drinks is a time sheet.

What we need is a way to combine the benefits of personal and team-level time tracking with those needed by the Project Management Office (PMO). With the Automatic Time Tracking feature of Tasktop Dev (screenshot below), we validated a way to combine personal time tracking with team estimation and planning. I still use this feature regularly to be a good student of Drucker and figure out where my own time goes, and many Scrum teams use it to remove the tedious process of manually tracking time per task.

While that automation is useful for the individual and the team, it did not help the PMO, which works at the program, enterprise and product level. PMOs use specialized project and portfolio management software such as CA Clarity PPM. So now, in our ongoing effort to create an infrastructure that connects all aspects of software delivery and to keep people coding and planning to their hearts’ content, we have stepped out of the IDE in order to bridge the divide between the PMO and Agile teams.

The Tasktop Sync 2.8 release includes updates to the leading Agile tools, such as support for RTC 4, HP ALM, CA Clarity Agile and Microsoft TFS 2012. It also ships the first Sync support for Rally and the TFS 2013 beta. The other big news is that we are now announcing a partnership with Serena in which both Tasktop Sync and Tasktop Dev will be OEM’d as part of the Serena Business Manager lifecycle suite. This new integration, which further cements Tasktop’s role as the Switzerland of ALM, will be showcased at Serena xChange in September, and ship this fall.

With Tasktop Sync 2.8, we have finally managed to connect the worlds of Agile ALM and PPM both in terms of task flow, and time reporting. While the support currently works for CA Clarity only, integrating these two worlds has been a major feat in terms of understanding the data model and building out the integration architecture for connecting “below the line” and “above the line” planning (Forrester Wave). For the individual, it’s like having your own administrative assistant standing over your shoulder filling out the PPM tool for you, only less annoying and easier to edit after the fact. For the Agilistas, it’s about getting to use the methods that make your teams productive while making the PMO happy. And for the organization, it’s the key enabler for something that Drucker would have been proud of: automating the connection between strategy and execution.

The Case for Integration

Tuesday, March 5th, 2013

Putting the L in ALM – Making the case for Lifecycle Integration

I think everyone agrees that software delivery is not an ancillary business process but is actually a key business process, and the ability to deliver software faster, cheaper, and of a higher quality is a competitive advantage. But delivering software is difficult, and if you believe the Standish Chaos report, anywhere from 24 to 68 percent of software projects end in some form of failure.

Even the criteria for success have been questioned by many, as ‘on time, on budget, delivering the functionality requested’ can still mean software that fulfills requirements but adds no business value. Billions of dollars a year are spent on software development tools and practices in the desire to increase project success and reduce time-to-market. Each year, development, testing, requirements, project management and deployment roll out new practices and tools. Many of these additions bring value, thereby increasing the capability of each individual discipline. But ultimately, the problem is not the individual discipline; the problem is how those disciplines work together in pursuit of common goals and how the lifecycle is integrated across those disciplines.

It has been a year since I joined Tasktop, and during numerous customer visits and partner discussions, two things are very clear: 1. the landscape of software delivery tools and practices is going through a major change, and 2. to be effective in software delivery you need to automate flow and analytics.

The ever-changing face of software tools and practices

Add Agile, Lean Startup and DevOps to a large amount of mobile, cloud and open web, and not only do you have the perfect book title, you have all the ingredients necessary for a major change in the practice of software delivery. Agile and Lean encourage rapid delivery, customer feedback and cross-functional teams focused on delivering customer value. Mobile and cloud are changing the landscape of delivery platforms, architectural models and even partner relationships. Never before have we needed to build flexible development processes that encourage both feedback and automation. Imagine spending three months writing a specification for your next mobile application when your competitors deploy new features on a daily basis. Imagine not connecting your new sales productivity application to LinkedIn, where your sales people have all their contacts. Our development approach needs to not only include partner organizations and services but also deliver software at a much higher cadence.

Automation of Flow and Analytics (reporting) is key.

I have noticed a strange relationship between increased speed, reporting and integration. When you increase the speed of delivery, traditional manual processes for reporting and analytics stop working or become an overhead. For example, one customer spent two days compiling the monthly status report spreadsheet across development, test and requirements. This two-day effort required meetings with numerous people and emailing the spreadsheet around for comment and review. When the organization adopted two-week delivery sprints, this work was an overhead that no one wanted to endure. Now the company had a choice: drop the status report, or look to an automated solution. Because more frequent releases meant the need to collaborate better, they opted for an automated solution that connected the test, development and requirements tools, providing a report that described the flow of artifacts among these three groups.

The automation not only produced the report but also improved the flow between these different disciplines. Suddenly there was clarity about the state of a story, or about when a defect should move into test; the manual approach had left large amounts of ambiguity. The report drove the creation of automated flow, which resulted in a better process, which then fed the report with better data.

That means there is a sixth discipline in software delivery

Lifecycle Integration is emerging as a key discipline for ALM. It provides the glue that connects the otherwise disconnected disciplines of requirements, development, testing, deployment and project management. It unifies the process and data models of the five software delivery disciplines to enable a unified approach to Application Lifecycle Management (ALM).

Without integration, many of the disconnects go unrecognized, and the flow between groups is never optimized. The larger your software delivery value chain, the more pronounced the impact of these disconnects. Factor in external organizations, whether through outsourcing, application integration, service usage or open source, and these impacts can mean the difference between success and failure not just for the project but for the business.

Perhaps we in the software industry are suffering a bit from 'cobbler's children syndrome': integration has been a first-class citizen in the traditional business processes we have integrated for our clients for years, yet we have neglected it in our own. The time is right to apply those lessons and build a discipline around lifecycle integration for the practice of software delivery.

Incremental code coverage as a debugging tool

Monday, March 12th, 2012

(See also Vera Djuraskovic's translation of this article into Serbo-Croatian.)

I joined Tasktop in part because I share the goal of increasing programmer productivity, especially by filtering out unimportant information.  I also liked how Tasktop is committed to being involved with and connected to the broader Eclipse community.

In this spirit, I suggested a feature to the EclEmma project: letting developers create and view incremental code coverage results.  This would let developers see a much smaller, but more relevant, set of classes and methods which they could then investigate.

It had bothered me how difficult it was to find where things happen in code, especially in large, unfamiliar code bases.  Yes, you can step through the code, but sometimes you have to step for a long time.  This is boring and tedious, and frequently you step one step too far and overshoot the place you wanted to see — losing the information about the values of the variables at the point you cared about.  If an asynchronous process gets spawned as the result of a Listener attached to a GUI element, it is nearly impossible to step through.

You can search for text on or around the GUI element to help you find where the actions related to that element are processed, but sometimes this isn’t practical.  Sometimes the GUI element has text that is so common that it is impractical to search for it, like “Finish” or “Next”.  Sometimes the GUI element doesn’t have text associated with it, like a button with a picture on it (and no tooltip).

In practice, I have observed that people usually guess at what words might be included in the class or method names, and then when they think they are close, simulate stepping through code by reading through it and making informed guesses about where the execution flow will go.  Unfortunately, frequently they guess wrong, usually in the choice of a starting point or the value of an if-condition.

Another particularly pernicious mistake is not realizing that you are tracing through a superclass of the class that is actually executed.  If you Command/control-click on a method name, Eclipse will preferentially take you to the implementation of that method in the same class; this means that if you ever trace into a superclass, Eclipse will tend to keep you in the superclass; realizing that you need to go back to the subclass is not always obvious.

I thus suggested to the EclEmma team that they add a feature letting users start and stop coverage collection, so that they can see which code was executed during specific short periods of a program's run.

With the 2.1 release of EclEmma, the EclEmma team has implemented incremental code coverage — a very useful feature!

How to use it

First, install EclEmma.

Next, go to Preferences > Java > Code Coverage and check the "Reset execution data on dump" option.

Open the Coverage view.  In the view's toolbar, there is a "Dump Execution Data" button.

Pressing the “Dump” button will now display coverage and reset the code coverage results.  Thus, if you press the dump button right *before* you do the action you are interested in (e.g. pressing a certain button), and then again right after you do the action, then the code coverage results will show only the exact classes and methods that were executed in response to that action.  Among other things, this means you won’t get misled to look in the superclass instead of the appropriate class.
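The dump-and-reset idea is not specific to Eclipse. As a purely illustrative sketch (not part of EclEmma, and the class and function names here are hypothetical), the same isolate-the-action trick can be mimicked in Python with the standard library's sys.settrace hook: record which functions and lines execute, return the accumulated set on each "dump", and reset.

```python
import sys


class IncrementalCoverage:
    """Toy illustration of EclEmma-style incremental coverage:
    record (function name, line number) pairs executed since the last dump."""

    def __init__(self):
        self.hits = set()

    def _trace(self, frame, event, arg):
        # Skip our own bookkeeping methods so stop()/dump() don't pollute results.
        if frame.f_locals.get("self") is self:
            return None
        if event == "line":
            self.hits.add((frame.f_code.co_name, frame.f_lineno))
        return self._trace

    def start(self):
        sys.settrace(self._trace)

    def stop(self):
        sys.settrace(None)

    def dump(self):
        """Return the lines executed since the last dump, then reset,
        mirroring the "Reset execution data on dump" option."""
        executed = self.hits
        self.hits = set()
        return executed


def action_of_interest():
    total = 0
    for i in range(3):
        total += i
    return total


cov = IncrementalCoverage()
cov.start()                 # analogous to dumping right before the action...
action_of_interest()
cov.stop()
hits = cov.dump()           # ...and inspecting right after it
print(sorted({name for name, _ in hits}))  # → ['action_of_interest']
```

This is only a sketch of the concept; EclEmma achieves the equivalent inside the JVM via JaCoCo's bytecode instrumentation, without any per-line tracing overhead.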

If you need finer-grained information about which code executed, EclEmma colours your lines of code based on whether they were executed in that interval: green if they were fully executed, red if they were not executed at all, and yellow if they were partially executed (if, for example, the line uses a “?” ternary operator, as in the “drightSide” assignment below).

By default, EclEmma only shows coverage information for your code, not for all the libraries you bring in.  To change this, uncheck “Source folders only” in Preferences > Java > Code Coverage.

Note: while for ordinary code coverage you probably want to see results for all classes and methods, when using incremental code coverage to locate places in code you should select "Hide unused elements" from the Coverage toolbar's drop-down menu.

Incremental code coverage is a very powerful technique: instead of wandering through thousands of classes and methods to find the handful of classes and methods that are interesting, you can spend a few minutes to get their names directly.

I do need to give a slight caveat: exceptions interfere with the code coverage instrumentation, interrupting the marking of that branch of code as executed.  This is a known limitation of the way code coverage is implemented.  Thus, if your code uses exceptions heavily, EclEmma might incorrectly report that a branch of code was not executed when it in fact was.  However, if EclEmma tells you that code was executed, it really was executed.

Using EclEmma in conjunction with Tasktop Dev or Mylyn is an exciting prospect.  Mylyn and Tasktop Dev tell you what you (or someone else, if you are looking at their context) had looked at; EclEmma gives you hints on what you should look at. We have only just started thinking about how those two could be combined, but are excited by the possibilities.

For further information, see EclEmma, Tasktop Dev, or Mylyn.

Note: the screenshots used are from the open-source Java GIS tool GpsPrune, isolating the action of showing the scale legend.

Tasktop at JavaOne: Drinkup with GitHub and Continuous Integration Talk and Panel

Tuesday, October 4th, 2011

Meet Tasktop at JavaOne at Booth #5004. Tasktop team members will be happy to show you the latest from Tasktop including Tasktop Sync, Tasktop Dev and Eclipse Mylyn.

Also, if you have some time, Tasktop and GitHub are co-hosting a Drinkup on Tuesday night starting at 8pm at Jasper’s Corner Tap. Jasper’s Corner Tap is located at 401 Taylor St. We hope to see you there.

Monday’s Panel: The Future of Java Build and Continuous Integration

  • Ted Farrell, Chief Architect, Tools & Middleware
  • Mik Kersten, CEO, Tasktop, @mik_kersten
  • Mike Milinkovich, Executive Director, Eclipse Foundation, @mmilinkov
  • Mike Maciag, CEO, Electric Cloud
  • Max Spring, Tech Lead, Cisco Systems

[Image: Mylyn Contribution Workflow, from Mik's JavaOne talk]

We saw a great turnout at the panel, with attendees driving a discussion of how Hudson, and Continuous Integration in general, are becoming a central part of the modern ALM stack. Tomorrow (Tuesday, October 4th), Mik will elaborate on the story in his talk titled “ALM Automation with Mylyn and Hudson”.

Tuesday’s Talk: ALM Automation with Mylyn and Hudson

Date: Tues., Oct. 4, 2011, noon – 1 p.m. Pacific
Location: Parc 55 – Divisadero

With the shift to PaaS and a new breed of open source ALM tools, the deployment loop of enterprise apps is going through its biggest transition since the creation of Java. Kersten will explore connecting the enterprise Java stack to cloud deployment via task-focused continuous integration based on Hudson. Distributed version control systems, code review and Agile planning, based on the Eclipse Mylyn interoperability platform, can be used to create a new level of connectivity and automation between the team and the running application. This talk outlines a roadmap for transforming productivity by connecting developers’ desktops to the release, and automating all the steps in between, from provisioning the IDE to monitoring the running application.