Environmental Repeatability

Right now at work I’m fighting with a situation where I have to rebuild a testing environment consisting of six servers, a complete Active Directory installation (one of the servers is a domain controller) and some software that extends Active Directory to provide web-based domain security. To a networking person this might not seem like much, but for me it is a huge amount of work, not to mention a stretch of my skillset (and I have worked as a network admin before), piled on top of the development tasks I already had. So why am I writing this story of wallowing self-pity? Because I see absolutely no reason for this type of work given today’s technologies, and no reason that development practices can’t be applied in the networking and infrastructure realms.

First, the technologies. Today we have a number of technologies that make deploying computers much easier than in the past, the most prominent right now being virtualization. Servers for testing, development and R&D environments are perfect candidates for it. These instances of operating systems are volatile by nature: prone to configuration changes, repeated installation and uninstallation of software, and general abuse by the teams that control them. Because of this, a fresh start is sometimes required to ensure that the environment is in a state that is helpful and workable for the current tasks. On top of the volatility imposed on the software, testing, development and R&D projects come and go much more frequently than production environments. If a development project ramps up to create a small application, or to perform enhancements and maintenance on an existing one, the life of the environment might be measured in months, not years. Being able to set up environments for short-lived or volatile projects quickly and accurately benefits both the project and the group that provides the environments.
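To make “quickly and accurately” concrete, here is a minimal sketch (in Python, my choice rather than anything the tools require) of stamping out test servers from a single pre-built “golden” image using VirtualBox’s command-line tool. The base image name and the clone names are invented for illustration:

    import subprocess

    BASE_VM = "win2003-base"             # hypothetical pre-built, patched golden image
    CLONES = ["dc01", "web01", "app01"]  # hypothetical test servers

    def clone_vm(base, name):
        # Create and register a full copy of the golden image.
        subprocess.run(["VBoxManage", "clonevm", base,
                        "--name", name, "--register"], check=True)

    def start_vm(name):
        # Boot the clone without opening a console window.
        subprocess.run(["VBoxManage", "startvm", name,
                        "--type", "headless"], check=True)

    for clone in CLONES:
        clone_vm(BASE_VM, clone)
        start_vm(clone)

Tearing the environment down and rebuilding it becomes a matter of re-running a script, not a week of manual installs.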

One of the primary themes that flows through many aspects of software development is repeatability. When you are creating software you want to repeat functionality consistently (centralized code), repeat testing in a controlled and explainable manner (automated unit tests, testing harnesses and test scripts) and repeat build processes with the same results (continuous integration). All of these ‘repeat’ ideas exist with the goal of consistently creating a high-quality product. I would like to see this mantra taken into the networking and server department(s) and applied to the creation of environments. Testing, development and R&D environments are created often, each one a repetition of the previous (with a few minor tweaks for things like server names and IP addresses). Why can’t this repetition be controlled in the same way software development repetition is handled? A sketch of one possibility follows.
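Imagine the environment definition itself living in source control next to the code, with the ‘minor tweaks’ (names, addresses) as the only thing that varies between repetitions. Everything here is hypothetical:

    from dataclasses import dataclass

    @dataclass
    class Server:
        name: str
        ip: str
        role: str

    # The whole test environment, checked in and versioned like any other code.
    TEST_ENV = [
        Server("dc01",  "10.0.0.10", "domain-controller"),
        Server("web01", "10.0.0.20", "web"),
        Server("app01", "10.0.0.30", "app"),
    ]

    def provision(server):
        # Placeholder: hand the spec to whatever actually builds the machine.
        print("provisioning %s (%s) at %s" % (server.name, server.role, server.ip))

    for server in TEST_ENV:
        provision(server)

Rebuilding the environment, or standing up a second identical one, is then a repeat of the same run rather than a new manual project.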

There is no way to guarantee with 100% certainty that a piece of software created in a one-off environment will function when it is migrated to production. Manually creating environments means there is a real chance that the environment being built is not identical to the one being mimicked. The simple fact that the environment is one-off dictates that failure is not just possible, but probable. With technologies such as virtualization there are few, if any, valid reasons for this to happen. Environments created today should be identical to those created next month, as well as those created last year.
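And once the environment is described as data rather than rebuilt from memory, “is this environment identical to the one it mimics?” stops being a guess and becomes a comparison. A toy illustration, with invented values:

    expected = {"dc01": "10.0.0.10", "web01": "10.0.0.20"}
    actual   = {"dc01": "10.0.0.10", "web01": "10.0.0.21"}  # a manual tweak crept in

    # Report every server whose actual address differs from the definition.
    drift = {name: (ip, actual.get(name))
             for name, ip in expected.items() if actual.get(name) != ip}
    print(drift)  # {'web01': ('10.0.0.20', '10.0.0.21')}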