For the past nine months I’ve been working on a project dominated by a desire to have heavy documentation created before any development work commences. As a result, there’s a push to lay everything out in a place the developer and tester can access. Even with this style of development (also known as BDUF, or waterfall), a lot of little things don’t get written into the documents, and some larger things get missed completely. Regardless of the tone I take in this post, those things still need to be coded and delivered to the client (assumption: the client really needs them).

What I’ve seen on my project over the last nine months is a trend toward calling everything that is either broken or not included a defect. My problem is with the determination that these are actually problems in the code. Take a scenario where the design document simply states that a screen should have a date picker and the ability for the user to select a date. How can a tester then test that screen and log a defect claiming that the date picker should not allow the user to select dates in the future? I’m not arguing that this isn’t what the client wants or needs, but rather: how is this determined to be a defect?

As a programmer, I can only code within the specifications and environment I’m provided. If your process relies on heavy design documents, I will fall back to stating that “…the document didn’t say it should do that,” which is a very poor place to position myself. The problem is that I can fall back on that with full confidence, because I have a piece of verified and vetted paper to back me up. My other problem is that the testing is obviously being done against a different set of expectations than the one I’m developing with. Where does that second set of expectations come from? Why does it exist?
Why are we fighting over the words on a piece of paper instead of working to get the client what they want? For me, a defect occurs when we have created a piece of functionality that does not meet the needs of the client. There are some gray areas (UI standards, consistency, etc.), but most of the time I find that answering “No” to the question “Is it what the client wants?” is the most effective way to make the determination.

I’ve never worked on a strong agile project before. How do they handle the creation and definition of defects when there can be so much client interaction during the coding phase? I assume that the increased client presence dramatically decreases the opportunity for defects to arise; it’s just how they’re handled that I’m not overly clear on. I’m going to be very glad to work on something non-waterfall in the future. Let’s just hope it’s not two projects away.