Archive for the ‘Developers’ Category

EclipseCon 10 years old

Thursday, April 17th, 2014

This was the tenth year of EclipseCon and my fourth time attending. I have been using the Eclipse IDE for over 12 years, and so much has changed with technology and developers’ expectations. At the beginning, just having a single Java IDE that was smart enough to edit, compile, run, and debug your code was a major breakthrough. For many years the trend was toward integration, with the expectation that all tools would be absorbed into the IDE. Now that trend is reversing, and developers are starting to expect IDEs that can seamlessly connect to tools, allowing them to use those tools in the way that makes the most sense for them.

The most interesting theme for me at this conference was trying to figure out how developers will be working another 10 years from now.

The future of the IDE

The Orion IDE is a prime example of this reverse-integration trend. Orion is a hosted IDE with a lightweight plugin model. One way the plugin model allows integration with third-party tools is by linking out to them in other browser pages. This lets developers do programming tasks in the IDE while still having easy access to external tools for deployment, testing, and monitoring. Ken Walker gave a good talk showing this, cheekily called Browser IDEs and why you don’t like them.

Another example of this is the Mylyn hosted task list, which Gunnar Wagenknecht and I introduced in our talk Unshackling Mylyn from the Desktop. The main idea is that current Mylyn task lists are restricted to working inside a single Eclipse instance, but developers want to interact with tasks outside of the IDE (in the browser, on the desktop, on their phones, etc.). The hosted task list provides this capability through different kinds of clients that all operate on a single notion of a task list hosted remotely. We provide a simple API so that third parties can build their own clients; a hypothetical sketch of such a client follows below. What we presented at EclipseCon was early work and we are looking forward to showing more of this at future conferences.
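
Since the hosted task list API was early work, the details below are assumptions made purely for illustration: the endpoint path, the bearer-token authentication, and the JSON response format are all hypothetical. The sketch only shows the general shape a minimal third-party client might take.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    /**
     * Hypothetical client for a hosted task list service. The endpoint,
     * authentication scheme, and response format are illustrative
     * assumptions, not the actual Mylyn hosted task list API.
     */
    public class HostedTaskListClient {

        private final String baseUrl;
        private final String apiToken;

        public HostedTaskListClient(String baseUrl, String apiToken) {
            this.baseUrl = baseUrl;
            this.apiToken = apiToken;
        }

        /** Fetches the raw task list (assumed to be JSON) from the hosted service. */
        public String fetchTasks() throws Exception {
            URL url = new URL(baseUrl + "/tasks");
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("GET");
            connection.setRequestProperty("Authorization", "Bearer " + apiToken);
            StringBuilder body = new StringBuilder();
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(connection.getInputStream(), "UTF-8"));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    body.append(line);
                }
            } finally {
                reader.close();
            }
            return body.toString();
        }

        public static void main(String[] args) throws Exception {
            // Example usage with placeholder values.
            HostedTaskListClient client =
                    new HostedTaskListClient("https://tasks.example.com/api", "my-token");
            System.out.println(client.fetchTasks());
        }
    }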

Lastly, Martin Lippert and Andy Clement, my former colleagues at Pivotal, introduced Flux, a novel approach to a more flexible development environment. The main idea is that developers may want the simplicity of developing in the cloud, but don’t want to give up the possibility of developing in a more traditional way on the desktop. Flux allows you to connect your local workspace to a cloud host that provides services like editing, compilation, and refactoring. This way, you can work in the cloud while still being able to drop down to the desktop whenever you need to. Watch the screencast.

It is still early work, but it is quite compelling and I’m looking forward to seeing how this progresses.

Java 8

It shouldn’t be surprising that the Java 8 track was the most popular at EclipseCon this year. Java 8 was released in the middle of Alex Buckley’s The Road to Lambda talk. What I thought was most interesting about this talk was that it focused on the design decisions around implementing Java 8 lambdas, why it took so many years to do so, and why Java is now a better language because of its relatively late adoption of lambdas. I found Let’s make some 0xCAFEBABE particularly interesting with its deep dive into byte code. Lastly, API design in Java 8 showed how libraries and APIs can improve by taking advantage of new language features.
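
To make the API-design point concrete, here is a small, self-contained illustration (my own example, not taken from any of the talks) of how Java 8 lambdas and the Streams API change the shape of everyday code:

    import java.util.Arrays;
    import java.util.List;

    public class LambdaExample {
        public static void main(String[] args) {
            List<String> talks = Arrays.asList(
                    "The Road to Lambda",
                    "Let's make some 0xCAFEBABE",
                    "API design in Java 8");

            // Before Java 8: an anonymous inner class just to pass behaviour around.
            Runnable oldStyle = new Runnable() {
                @Override
                public void run() {
                    System.out.println("Hello from an anonymous class");
                }
            };
            oldStyle.run();

            // Java 8: a lambda expresses the same behaviour far more concisely.
            Runnable newStyle = () -> System.out.println("Hello from a lambda");
            newStyle.run();

            // Library APIs benefit too: the Streams API composes filtering and
            // mapping with lambdas instead of hand-written loops.
            talks.stream()
                    .filter(title -> title.contains("Java 8"))
                    .map(String::toUpperCase)
                    .forEach(System.out::println);
        }
    }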

Hackathon

This year, I helped organize an Eclipse hackathon. It was a great experience. We had over 50 attendees, split roughly evenly between project committers and new contributors. The goal was for new contributors to work on some real bugs in Eclipse, while being helped by project committers who guided them through the process of setting up a workspace, choosing a bug, and submitting a patch. By the end of the evening we had 7 contributions accepted into the code base. It was encouraging to see so much passion for working with open source.

Everyone's hacking!

There are plenty more photos from the event on the Eclipse Foundation’s Flickr stream.

It was truly a wonderful conference and there was so much more, including 3 excellent keynotes and, of course, lots and lots of beer.

Now that I’m back, I need to catch up on sleep, but I’m already looking forward to next year!

Tasktop Dev 3.5 and Mylyn 3.11 Released

Monday, April 14th, 2014

Tasktop Dev 3.5 and Mylyn 3.11 are now available. These releases include some cool new features that result from the combined efforts of many people in the Mylyn community and at Tasktop.

Speaking as a Tasktop Dev and Mylyn user, I am already loving the new support for finding text in the task editor. It was originally contributed to Mylyn Incubator a few years ago, but unfortunately the implementation suffered from serious performance problems. Recently I had the pleasure of supervising a co-op student at Tasktop, Lily Guo, as she reimplemented it with dramatically better performance and an improved UI. Thanks to her efforts, this very useful feature is now released, and the task editor supports searching for text within the comments, description, summary, and private notes.

Screenshot of Find in the Task Editor

Another long-awaited feature in this release is task-based breakpoint management, which extends the concept of task context to include Java breakpoints. This was implemented as part of a Google Summer of Code project by Sebastian Schmidt, a graduate student at the Technical University of Munich. It provides several important benefits for Java developers. First, the debugger will not stop at breakpoints that aren’t related to the active task. Second, only breakpoints created while the task was active will appear in the IDE, so when working on a task, the breakpoints view and Java editors are no longer cluttered with dozens of irrelevant breakpoints. Because the breakpoints related to a task are only present while that task is active, there is no need to delete or disable these breakpoints when a task is complete; they often contain valuable information, such as which lines of code and which runtime conditions trigger a defect. Finally, breakpoints can be shared with other developers as part of task context.

Screenshot of the context preview page showing breakpoints in the context

In a single view, the Hudson/Jenkins connector provides quick access to status information about selected builds across multiple build servers, even when you are offline. This includes information about build status, build stability, and test failures. But one thing I realized was missing was a quick summary of how many builds are passing, unstable, and failing. This information is now displayed at the top of the Builds view and, when the view is minimized, in a tooltip, making it really easy to tell when there’s a change in the number of failing or unstable builds.

Screenshot of builds view showing summary of build statuses

This release also includes a number of bug fixes and enhancements to the Gerrit connector. Among them are support for comparing images, buttons for navigating to the next/previous inline comment in the compare editor, and a code review dashboard that lets you see which of your changes have been reviewed and whether they have passing or failing builds. The connector also remembers your previous votes on a patch set, so that posting additional comments doesn’t reset your vote. Thanks to Jacques Bouthillier and Guy Perron from Ericsson for their work on comment navigation and the dashboard.

Screenshot of the Code Review Dashboard

To help you prioritize the tasks you are responsible for, a “Show only my tasks” filter has been added to the task list, and the tooltip uses the gold person icon to accentuate those tasks.

Screenshot of the task list showing the new filter button

Tasktop Dev 3.5 is built on Mylyn 3.11 and includes all of the above features. It also includes a number of bug fixes as well as support for VersionOne 2013. We have also upgraded Tasktop Dev for Visual Studio to support Visual Studio 2013, bringing the benefits of a unified task list to users of that IDE.

For more information, see Tasktop Dev New & Noteworthy and Mylyn New & Noteworthy, or try out Tasktop Dev and Mylyn for yourself.

When it comes to Software Delivery, The E in Email Stands for Evil

Thursday, March 27th, 2014

Dr Evil - Photo courtesy of New Line Media

Most organizations will experience failed software projects. They won’t necessarily crash and burn, but they will fail to deliver the value the customer wants. In other words, the software projects will not deliver an appropriate level of quality for the time and resources invested.

The extent of failed software projects is calculated every year in the Standish Chaos report, which put software failure at 61% in 2013. There are many reasons for that kind of project failure rate. They range from poor requirements processes, to badly managed deployment, and a collection of other issues. Many authors have written about this, from the industry-changing early work by Fred Brooks, The Mythical Man Month, to more recent works by the Agile and lean communities, but I don’t wish to re-hash these ideas here. I do, however, want to point out something that causes much trouble, pain and confusion in software projects: email, when used as a primary collaboration and process tool.

Imagine the following situation…

A tester discovers a bug. He documents it in his tool of choice (for this example, HP Quality Center), but because it is an important and interesting bug, he also sends an email to three developers who have worked with him on previous bugs. One of those developers replies directly to our tester with notes. The tester replies, adding a business analyst who is no longer involved in this app. Said analyst forwards the email to another business analyst, who replies to the tester and yet another developer. The tester, business analyst and developers all agree on a solution, but the tester does not document this in Quality Center. That night, a CSV comes out of Quality Center and is converted into a spreadsheet by the project manager, who allocates the bugs to the team. The special bug that the tester and colleagues were working on is allocated to a different developer, someone not involved in the email discussion. This developer talks to a whole different set of people to build a resolution.

Sounds far-fetched? It’s not. I have seen much more complex communication networks created on projects. They resulted in confusion and a general disconnect between teams. And this issue is exacerbated across the departmental boundaries of QA, Dev, the PMO and operations because email is a poor way to collaborate. Why? Because the “TO” and “CC” lists vary depending on whom the sender knows, and the original bug is just a memory by the 2nd or 3rd reply. Email isolates teams rather than uniting them. In fact, I would go one step further. If email has become one of the primary ways your project team collaborates and works, I would say email has become Evil, from a project success point of view.

To quote Wikipedia, ‘Evil, in its most general context, is taken as the absence of that which is ascribed as being good.’ This accurately describes email when it’s used to drive development, resolve issues or communicate progress. Most email begins with good intentions: adding additional people, replying out of the goodness of your heart. But after the first few replies, it descends into layers of confusion, miscommunication and errors.

Email is evil because it:

  1. Does not automatically go to the right people
  2. Does not always include all the right information
  3. Makes it easy to drop and add people, adding to confusion
  4. Is not stored, indexed or reference-able after the fact (unless you are the NSA, but that is a whole different blog post ;))
  5. Holds no status or meta-data that allows for quick search or filtering

Maybe labeling email evil is a bit dramatic, but you get the idea. Email can easily, if not effectively managed, cause problems in your software projects. For that reason, we should reduce our reliance on using email to drive real work.

If you doubt your reliance on email, answer the following question: Can you stop using email for project related activities for one week and instead use your systems, processes and tools to get the work done?

If your answer is no, what can you do instead?

Ideally, use the tools you are meant to be using to manage the artifacts they are meant to manage. For example, a tester shouldn’t need to email anyone. The defects he identifies should be automatically visible to the right group of developers, project managers and fellow testers. Everyone should be able to collaborate on that defect in real time–with no need to send an email, or update a spreadsheet. In fact, testers and their developer colleagues should not have to leave the comfort of their tools of choice to manage their work. Comments, descriptions and status should be available to everyone in the tool they’re using.

Admittedly, the company I work for develops integration software, so I come from that world. But even if you build integrations yourself or use tools that are tightly integrated, the value of working in your tool of choice, while the lifecycle is automatically managed by notifying the correct people and capturing all collaboration in the context of one set of data, is massive. Miscommunication, misunderstanding and general disconnects among teams and disciplines have a huge and negative impact on projects. How many times have you thought a problem was resolved when it wasn’t because you were ‘out of the loop?’ We would find it unacceptable if our e-retailer or bank didn’t share information across processes, yet we accept that our software delivery lifecycle is disconnected and relies on manual processes like email and spreadsheets. This problem is only made more acute by the adoption of Agile methods, lean thinking and dev-ops, movements that encourage smaller batch sizes, faster feedback and more team collaboration.

I encourage you to do two things:

  1. Measure the amount of time/work you are spending/doing in email
  2. Replace that with integration or use of common tools

It is time to use email for what it is really intended for: sending out happy hour notifications and deciding what pizza to buy for lunch.

Tasktop 2.8 released, Serena partnership announced, death to timesheets

Tuesday, August 6th, 2013

Filling out time sheets is about as fulfilling as doing taxes. This mind-numbing activity is an interesting symptom of what’s broken with the way we deliver software today. What’s worse than the time wasted filling them out is the fact that the numbers we fill in are largely fictitious, as we have no hope of accurately recalling where time went over the course of a week, given that we’re switching tasks over a dozen times an hour. As Peter Drucker stated:

Even in total darkness, most people retain their sense of space. But even with the lights on, a few hours in a sealed room render most people incapable of estimating how much time has elapsed. They are as likely to underrate grossly the time spent in the room as to overrate it grossly. If we rely on our memory, therefore, we do not know how much time has been spent. (Peter Drucker. The Essential Drucker, ch. 16. Know your Time)

Tracking time is not the problem. When done well, it’s a very good thing, given that time is our scarcest resource. Done right, time tracking allows us to have some sense of what the burn-downs on our sprints are, and to predict what we will deliver and when. It allows us to get better at what we do by eliminating wasteful activities from our day, such as sitting and watching a VM boot up or an update install.

Effective knowledge workers, in my observation, do not start with their tasks. They start with their time. And they do not start out with planning. They start out by finding where their time actually goes. (Peter Drucker. The Essential Drucker, ch. 16. Know your Time)

Drucker was a big advocate of time tracking systems for individuals. With Agile, we have now learned how effective tracking story points and actuals can be for Scrum teams. Yet all of this goodness feels very distant when the last thing that stands between you and Friday drinks is a time sheet.

What we need is a way to combine the benefits of personal and team-level time tracking with those needed by the Project Management Office (PMO). With the Automatic Time Tracking feature of Tasktop Dev (screenshot below), we validated a way to combine personal time tracking with team estimation and planning. I still use this feature regularly to be a good student of Drucker and figure out where my own time goes, and many Scrum teams use it to remove the tedious process of manually tracking time per task.

While that automation is useful for the individual and the team, it did not help the PMO, which works at the program, enterprise and product level. PMOs use specialized project and portfolio management software such as CA Clarity PPM. So now, in our ongoing effort to create an infrastructure that connects all aspects of software delivery and to keep people coding and planning to their hearts’ content, we have stepped out of the IDE in order to bridge the divide between the PMO and Agile teams.

The Tasktop Sync 2.8 release includes updates to the leading Agile tools, such as support for RTC 4, HP ALM, CA Clarity Agile and Microsoft TFS 2012. It also ships the first Sync support for Rally and the TFS 2013 beta. The other big news is that we are now announcing a partnership with Serena in which both Tasktop Sync and Tasktop Dev will be OEM’d as part of the Serena Business Manager lifecycle suite. This new integration, which further cements Tasktop’s role as the Switzerland of ALM, will be showcased at Serena xChange in September and ship this fall.

With Tasktop Sync 2.8, we have finally managed to connect the worlds of Agile ALM and PPM, both in terms of task flow and time reporting. While the support currently works for CA Clarity only, integrating these two worlds has been a major feat in terms of understanding the data model and building out the integration architecture for connecting “below the line” and “above the line” planning (Forrester Wave). For the individual, it’s like having your own administrative assistant standing over your shoulder filling out the PPM tool for you, only less annoying and easier to edit after the fact. For the Agilistas, it’s about getting to use the methods that make your teams productive while making the PMO happy. And for the organization, it’s the key enabler for something that Drucker would have been proud of: automating the connection between strategy and execution.

The Case for Integration

Tuesday, March 5th, 2013

Putting the L in ALM – Making the case for Lifecycle Integration

I think everyone agrees that software delivery is not an ancillary business process but is actually a key business process, and the ability to deliver software faster, cheaper, and of a higher quality is a competitive advantage. But delivering software is difficult, and if you believe the Standish Chaos report, anywhere from 24 to 68 percent of software projects end in some form of failure.

Even the criteria for success have been questioned by many, as ‘on time, on budget, delivering the functionality requested’ can still mean software that fulfills requirements but adds no business value. Billions of dollars a year are spent on software development tools and practices in the desire to increase project success and reduce time-to-market. Each year, development, testing, requirements, project management and deployment roll out new practices and tools. Many of these additions bring value, thereby increasing the capability of each individual discipline. But ultimately, the problem is not the individual discipline; the problem is how those disciplines work together in pursuit of common goals and how the lifecycle is integrated across those disciplines.

It has been a year since I joined Tasktop, and during numerous customer visits and partner discussions, two things are very clear: 1. the landscape of software delivery tools and practices is going through a major change, and 2. to be effective in software delivery you need to automate flow and analytics.

The ever-changing face of software tools and practices

Add Agile, Lean Startup and DevOps to a large amount of mobile, cloud and open web, and not only do you have the perfect book title, you have all the ingredients necessary for a major change in the practice of software delivery. Agile and Lean encourage rapid delivery, customer feedback and cross-functional teams focused on delivering customer value. Mobile and cloud are changing the landscape of delivery platforms, architectural models and even partner relationships. Never before have we needed to build flexible development processes that encourage both feedback and automation. Imagine spending three months writing a specification for your next mobile application when your competitors deploy new features on a daily basis. Imagine not connecting your new sales productivity application to LinkedIn, where your sales people have all their contacts. Our development approach needs to not only include partner organizations and services but also deliver software at a much higher cadence.

Automation of Flow and Analytics (reporting) is key.

I have noticed a strange relationship between increased speed, reporting and integration. When you increase the speed of delivery, traditional manual processes for reporting and analytics stop working or become an overhead. For example, one customer spent two days compiling the monthly status report spreadsheet across development, test and requirements. This two-day effort required meetings with numerous people and emailing the spreadsheet around for comment and review. When the organization adopted two-week delivery sprints, this work was an overhead that no one wanted to endure. Now the company had a choice: drop the status report, or look to an automated solution. Because more frequent releases meant the need to collaborate better, they opted for an automated solution that connected the test, development and requirements tools, providing a report that described the flow of artifacts among these three groups.

The automation not only produced the report but also improved the flow between these different disciplines. Suddenly there was clarity about the state of a story or when a defect should move into test. That clarity was missing from the manual approach, which left large amounts of ambiguity. The report drove the creation of automated flow, which resulted in a better process, which then fed the report with better data.

That means there is a sixth discipline in software delivery

Lifecycle Integration is emerging as a key discipline for ALM. It provides the glue that enables the disconnected disciplines of requirements, development, testing, deployment and project management to connect. It unifies the process and data models of the five software delivery disciplines to enable a unified approach to Application Lifecycle Management (ALM).

Without integration, many of the disconnects go unrecognized, and the flow between groups is never optimized. The larger your software delivery value chain, the more pronounced the impact of these disconnects. Factor in external organizations, either through outsourcing, application integration, service usage or open source, and these impacts can mean the difference between not just project, but business success and failure.

Perhaps we in the software industry are suffering a bit from ‘cobbler’s children syndrome’: integration has been a first-class citizen in the traditional business processes we have integrated for our clients for years, yet not in our own software delivery lifecycle. But the time is right to apply these lessons and build a discipline around lifecycle integration for the practice of software delivery.

Incremental code coverage as a debugging tool

Monday, March 12th, 2012

(See also this article’s Serbo-Croatian translation by Vera Djuraskovic at http://science.webhostinggeeks.com/tasktop-blog)

I joined Tasktop in part because I share the goal of increasing programmer productivity, especially by filtering out unimportant information.  I also liked how Tasktop is committed to being involved with and connected to the broader Eclipse community.

In this spirit, I suggested a feature to the EclEmma project: letting developers create and view incremental code coverage results.  This would let developers see a much smaller, but more relevant, set of classes and methods which they could then investigate.

It had bothered me how difficult it was to find where things happen in code, especially in large, unfamiliar code bases.  Yes, you can step through the code, but sometimes you have to step for a long time.  This is boring and tedious, and frequently you step one step too far and overshoot the place you wanted to see — losing the information about the values of the variables at the point you cared about.  If an asynchronous process gets spawned as the result of a Listener attached to a GUI element, it is nearly impossible to step through.
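
As a contrived example (a made-up Swing sketch, not from any real code base), consider a button whose listener only hands the real work off to a background thread; stepping through the listener never reaches the code you actually care about:

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.swing.JButton;
    import javax.swing.JFrame;
    import javax.swing.SwingUtilities;

    public class AsyncListenerExample {

        private static final ExecutorService POOL = Executors.newSingleThreadExecutor();

        public static void main(String[] args) {
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    JFrame frame = new JFrame("Async demo");
                    JButton button = new JButton("Go");
                    button.addActionListener(new ActionListener() {
                        public void actionPerformed(ActionEvent event) {
                            // Stepping through here only shows the job being
                            // submitted; the interesting work runs later on a
                            // pool thread, out of reach of a simple step-through.
                            POOL.submit(new Runnable() {
                                public void run() {
                                    doInterestingWork();
                                }
                            });
                        }
                    });
                    frame.add(button);
                    frame.pack();
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.setVisible(true);
                }
            });
        }

        private static void doInterestingWork() {
            System.out.println("This is the code you actually wanted to find.");
        }
    }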

You can search for text on or around the GUI element to help you find where the actions related to that element are processed, but sometimes this isn’t practical.  Sometimes the GUI element has text that is so common that it is impractical to search for it, like “Finish” or “Next”.  Sometimes the GUI element doesn’t have text associated with it, like a button with a picture on it (and no tooltip).

In practice, I have observed that people usually guess at what words might be included in the class or method names, and then when they think they are close, simulate stepping through code by reading through it and making informed guesses about where the execution flow will go.  Unfortunately, frequently they guess wrong, usually in the choice of a starting point or the value of an if-condition.

Another particularly pernicious mistake is not realizing that you are tracing through a superclass of the class that is actually executed.  If you Command/control-click on a method name, Eclipse will preferentially take you to the implementation of that method in the same class; this means that if you ever trace into a superclass, Eclipse will tend to keep you in the superclass; realizing that you need to go back to the subclass is not always obvious.
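
Here is a small, made-up illustration of that trap: the call chain reads naturally inside the superclass, even though at runtime a subclass override is what actually executes.

    // Illustrative example only; not from any particular code base.
    class AbstractHandler {
        public void handle() {
            validate();  // Ctrl/Cmd-clicking these calls opens the versions in
            process();   // AbstractHandler, hiding the override that really runs.
        }

        protected void validate() {
            System.out.println("generic validation");
        }

        protected void process() {
            System.out.println("generic processing");
        }
    }

    class SpecialHandler extends AbstractHandler {
        @Override
        protected void process() {
            // This is what actually executes, but a reader tracing through
            // AbstractHandler.handle() may never realize it.
            System.out.println("special processing");
        }
    }

    public class SuperclassTraceExample {
        public static void main(String[] args) {
            AbstractHandler handler = new SpecialHandler();
            handler.handle(); // prints "generic validation" then "special processing"
        }
    }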

I thus suggested to the EclEmma code coverage team that they add a feature to the code coverage tool to let users start and stop coverage so that they can see which code was executed for specific short periods of the execution.

With the 2.1 release of EclEmma, the EclEmma team has implemented incremental code coverage — a very useful feature!

How to use it

First, install EclEmma.

Next, go to Preferences > Java > Code Coverage and check the “Reset execution data on dump” option.

Open a Coverage view.  In the Coverage toolbar, there is a “Dump Execution Data” button.

Pressing the “Dump” button will now display coverage and reset the code coverage results.  Thus, if you press the dump button right *before* you do the action you are interested in (e.g. pressing a certain button), and then again right after you do the action, then the code coverage results will show only the exact classes and methods that were executed in response to that action.  Among other things, this means you won’t get misled to look in the superclass instead of the appropriate class.

If you need finer-grained information about which code executed, EclEmma colours your lines of code based on whether they were executed in that interval: green if they were fully executed, red if they were not executed at all, and yellow if they were partially executed (if, for example, the line uses a “?” ternary operator, as in the “drightSide” assignment below).
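
The original screenshot (the “drightSide” assignment from GpsPrune) is not reproduced here, so the following made-up snippet shows the same idea: a line containing a ternary operator is marked yellow when only one of its branches is ever taken.

    public class CoverageColours {

        static String describe(int value) {
            // If this method is only ever called with non-negative values,
            // EclEmma marks this line yellow: the line executed, but only one
            // branch of the ternary was taken.
            String sign = (value >= 0) ? "non-negative" : "negative";
            return sign;
        }

        public static void main(String[] args) {
            // Fully executed lines show up green; the "negative" branch above
            // stays uncovered.
            System.out.println(describe(42));
        }
    }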

By default, EclEmma only shows coverage information for your code, not for all the libraries you bring in.  To change this, uncheck “Source folders only” in Preferences > Java > Code Coverage.

Note: while for code coverage, you probably want to see the coverage results for all classes and methods, when using incremental code coverage to locate places in code, you should select “Hide unused elements” from the Coverage toolbar’s drop-down menu.

Incremental code coverage is a very powerful technique: instead of wandering through thousands of classes and methods to find the handful of classes and methods that are interesting, you can spend a few minutes to get their names directly.

I do need to give a slight caveat: exceptions interfere with the code coverage instrumentation, interrupting the marking of that branch of code as executed.  That is a known limitation with the way that code coverage is done.  Thus if your code uses exceptions a lot, EclEmma might incorrectly say that a branch of code was not executed when it was in fact executed.   However, if EclEmma tells you that code was executed, it really was executed.

Using EclEmma in conjunction with Tasktop Dev or Mylyn is an exciting prospect.  Mylyn and Tasktop Dev tell you what you (or someone else, if you are looking at their context) had looked at; EclEmma gives you hints on what you should look at. We have only just started thinking about how those two could be combined, but are excited by the possibilities.

For further information, see EclEmma, Tasktop Dev, or Mylyn.

Note: the screenshots used are from the open-source Java GIS tool GpsPrune, isolating the action of showing the scale legend.

Tasktop at JavaOne: Drinkup with GitHub and Continuous Integration Talk and Panel

Tuesday, October 4th, 2011

Meet Tasktop at JavaOne at Booth #5004. Tasktop team members will be happy to show you the latest from Tasktop including Tasktop Sync, Tasktop Dev and Eclipse Mylyn.

Also, if you have some time, Tasktop and GitHub are co-hosting a Drinkup on Tuesday night starting at 8pm at Jasper’s Corner Tap. Jasper’s Corner Tap is located at 401 Taylor St. We hope to see you there.

Monday’s Panel: The Future of Java Build and Continuous Integration

  • Ted Farrell, Chief Architect, Tools & Middleware
  • Mik Kersten, CEO, Tasktop, @mik_kersten
  • Mike Milinkovich, Executive Director, Eclipse Foundation, @mmilinkov
  • Mike Maciag, CEO, Electric Cloud
  • Max Spring, Tech Lead, Cisco Systems

Mylyn Contribution Workflow

from Mik's JavaOne talk

We saw a great turnout at the panel, with attendees driving a discussion of how Hudson and Continuous Integration in general are becoming a central part of the modern ALM stack. Tomorrow (Tuesday, October 4th), Mik will elaborate on the story in his talk titled “ALM Automation with Mylyn and Hudson”.

Tuesday’s Talk: ALM Automation with Mylyn and Hudson

Date: Tues., Oct. 4, 2011, noon – 1 p.m. Pacific
Location: Parc 55 – Divisadero

With the shift to PaaS and a new breed of open source ALM tools, the deployment loop of enterprise apps is going through its biggest transition since the creation of Java. Kersten will explore connecting the enterprise Java stack to cloud deployment via task-focused continuous integration based on Hudson. Distributed version control systems, code review and Agile planning, based on the Eclipse Mylyn interoperability platform, can be used to create a new level of connectivity and automation between the team and the running application. This talk outlines a roadmap for transforming productivity by connecting developers’ desktops to the release, and automating all the steps in between, from provisioning the IDE to monitoring the running application.

Tasktop Dev 2.1 released

Tuesday, August 9th, 2011

Hot on the heels of the Tasktop Sync 1.0 release, we are pleased to announce the availability of Tasktop Dev 2.1. As an indication of our focus on the Agile and ALM needs of the developer, the product line previously known as Tasktop is now called Tasktop Dev. This release builds on the Eclipse Indigo release of Mylyn 3.6 and includes the latest connectors, productivity features and new Agile planning support. It is a significant step forward in connecting developers to both the Agile and the traditional planning process, while ensuring that we get to use the best-of-breed ALM and open source technologies that make us productive.

James Governor (RedMonk founder and Principal Analyst) and I discussed the release and walked through some of the key features:

Here are a few highlights from the Tasktop 2.1 New & Noteworthy:

HP ALM & Quality Center 11 on Mac, Linux and 64-bit Windows
HP ALM Requirements, Defects and Tests can now be retrieved on Mac, Linux and 64-bit Windows machines using the REST connection provided by ALM 11 instead of the native connection. This feature is only supported when connecting to ALM 11 Patch 2 or higher.

HP ALM & Quality Center Tests
You can now bring HP ALM Tests into your Task List alongside your HP ALM Defects and Requirements.

Tasktop for VS: Ability to View Task Associations
The Visual Studio task editor now displays task associations, making it easy to see the parent and child relationships and external dependencies inside Visual Studio. Double-clicking an associated task opens it in the task editor, allowing you to quickly access its content.

Planner Story Board and Kanban
The planning tools now support Kanban for compatible ALM tools and include a story board and WIP limits. The release planner now supports grouping stories and tasks by activity or assignee, allowing you to organize your planning around these high-level concepts.

Focus plan on My Tasks
The task board and story board now include a “Focus on My Tasks” button which shows you only the tasks or stories that are assigned to you.


Download the free trial

Mylyn visiting Skills Matter

Tuesday, May 31st, 2011

Ever wondered what is going on inside the brain of people working at Tasktop?
Last week, I had the honor to speak at Skills Matter, Europe’s largest provider of open-source and agile training, in the London area. It was a great time in London, and Skills Matter was kind enough to provide a recorded version of the talk for people who could not attend.

As part of their branding, the talk was titled “In The Brain of Benjamin Muskalla: Mylyn: Closing the Agile ALM loop with task-focused collaboration”, so I thought: if you already know what’s going on in my brain, I don’t even need to talk about anything. You’ll experience some silence up to 5:20, as we forgot to turn on the microphone. As the whole talk is about an hour and twenty minutes, it’s still worth watching if you want to find out how to do task-focused collaboration with Mylyn and Tasktop. In addition, there is a quick outlook on the upcoming connectors for code review with Gerrit and the Git integration for Mylyn.

You can watch the whole talk at the Skills Matter website.

Meet for Beers, Not for Code Reviews

Monday, May 2nd, 2011

Figure A: Friends enjoying each other’s company over beers in a community hall

Figure B: Developer held hostage during a code review meeting in the 1970s

Code review meetings are notoriously difficult to do well. In Jonathan Lange’s article “Your Code Sucks and I Hate You: The Social Dynamics of Code Review”, he identifies several places where code review meetings commonly get off track. The author of the code can feel attacked (see Figure B), discussions escalate into arguments, and filibustering can occur. Adding to these woes, the code review meeting introduces a communication bottleneck where several reviewers wait to provide feedback to a single author, which wastes reviewers’ time. Given all of the issues associated with code review meetings, it’s a wonder they are ever successful.

Fortunately, there are options for conducting code reviews sans code review meetings. Developers have traditionally used a range of approaches. Some prefer lightweight approaches (e.g., commenting on patches) while others appreciate the more robust support offered by code review servers. Code review servers, such as CodeCollaborator, have recently been enjoying an uptick in popularity, as they handle all of the logistics of code reviews, from collecting relevant files, to moderating comments, to tracking discovered defects and ensuring they are corrected.

In an upcoming webinar with SmartBear, I will be discussing how Tasktop and CodeCollaborator work together to eliminate many of the common pain points of code reviews, including the code review meeting itself. The bulk of this webinar will be a live demo of an author and a reviewer working asynchronously to complete a code review. While this demo will connect to the CodeCollaborator server, we’ll show that developers never have to leave their IDE to create, review, rework, and complete a code review.

Sound interesting? Join us on May 5th so that you too can eliminate code review meetings. If you just can’t wait, visit our product page to learn more about Tasktop’s CodeCollaborator integration.