
When are scheduled code drops not enough?

In the past week and a half I’ve had some great conversations with one of my coworkers about the scheduling of our code drops. As background for those of you who don’t work with me (which is darn near all of you), our current project runs on scheduled two-week iterations. Every two weeks we release the completed code to the testing team, and they begin to test the most recent additions and the fixes to previously found defects. While they are testing, we developers carry on our merry way, writing code for the next set of new functionality and fixing the bugs being reported.

We have two primary goals on the development team. First, we are trying to keep to our release schedule so we can meet our final release date. Second, we are attempting to have a zero-bug count at the time of each two-week release. The first goal is a matter of life and unemployment (or perhaps lower employment). The second is a great one to strive for, but a difficult one to achieve. So far we’ve had two code drops, and we carried over zero and one bugs, respectively.

Where am I going with all this, you ask? Well, the conversations with my coworker center on his urge, and admittedly mine too, to release bug fixes as quickly as we possibly can. In my mind this is a good trait for a developer to have. My coworker wants to get the best code, with the most functionality, to the testers and the business in the most timely fashion. This is perfect: a developer who’s concerned about both quality and punctual delivery. There is one drawback, though, and I’m not sure how to strike the right balance with it.

Because we are performing continuous development cycles, we have to do our bug fixes at the same time as we are working on new code, and in the same code base. This is fine except for the times that we want to perform non-scheduled releases of bug fixes. So far I’ve been managing this by fighting the urge to release more often than scheduled. I am, however, prepared to build when a defect is raised that seriously impedes the testers’ progress, and I’ve been reorganizing the work orders slightly to accommodate that. In my mind that should remain an exceptional case.

One of the reasons I’m fighting both my and my coworker’s urges to release each time bugs are fixed is that I think this can set a very dangerous precedent with the test team. If the testers become comfortable with us jumping to fix and build every time a defect is raised, they will begin to expect, and ultimately demand, that we do this at all times. This ties in with my second reason for not doing on-demand bug fix builds: time, effort and scheduling. Each time you perform a build, uninstall the previous version from the servers, install the new version and perform a smoke test, you have eaten up time. Depending on your project and schedule, time may be a valuable and rare commodity. In my environment, each build requires the following steps to get to a state where testing can resume.

  • Coordinate with the development team to ensure that the code base is in a state the testers will find acceptable
  • Kick-start the build process (forced through CruiseControl.NET)
  • Uninstall the previous version from the developers’ test servers
  • Install the new version to the developers’ test servers (no xcopy here)
  • Smoke test the software in the developer test environment
  • Notify the test team that their environment will be upgraded
  • Wait for the test team to reach “good stopping points”
  • Uninstall the previous version from the test environment
  • Install the new version to the test environment
  • Smoke test the software in the test team’s environment
  • Notify the test team of the upgrade’s completion and which fixes were included with it

As you can see, a simple bug fix can turn into more than an hour’s effort where deployment alone is concerned. Certainly, bundling multiple bug fixes can help reduce the per-bug effort required, but grouping bugs together depends on the timing of the bugs coming in from the test team. If we get three or four bugs in the first day or two of a new iteration, we can easily fix and bundle them for a new release. But what happens if we get no bugs until halfway through the iteration? By that point we most likely have some significant pieces of code opened up for changes, and we’ve probably checked in some full or partial new functionality that may not be ready for release to the test team. This is the situation that is the most difficult to work with while still staying nimble in our releases.
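To put a rough number on the bundling argument above, here is a back-of-the-envelope sketch. The ~75-minute fixed overhead per deployment is my own placeholder figure (the post only says “more than an hour”); the point is just that the overhead amortizes across however many fixes ship in one drop.

```python
# Hypothetical figure: assume each deployment carries a fixed overhead of
# roughly 75 minutes (build, uninstall/install across two environments,
# smoke tests, coordination), no matter how many fixes it contains.
DEPLOY_OVERHEAD_MIN = 75

def per_bug_overhead(bugs_bundled: int) -> float:
    """Deployment overhead attributed to each bug when fixes ship together."""
    if bugs_bundled < 1:
        raise ValueError("a drop must contain at least one fix")
    return DEPLOY_OVERHEAD_MIN / bugs_bundled

# Releasing every fix on demand pays the full overhead per bug...
print(per_bug_overhead(1))   # 75.0 minutes per bug
# ...while bundling four fixes into one drop amortizes it.
print(per_bug_overhead(4))   # 18.75 minutes per bug
```

Of course, this only captures the deployment cost; it says nothing about the cost of making the testers wait on a fix, which is the other side of the trade-off.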

Perhaps releasing every two weeks is as nimble as we should strive to be. If any of you have experienced this situation and are willing to share some of your thoughts with me, please do. I’d really appreciate any ideas you may have.

I’m the Igloo Coder, and every time I take some of my ice floe to melt for water, my situation gets a little more precarious.