Wednesday, November 12, 2008

Justifying the Use of Build Scripts

Have you read about all of the great benefits of automation and continuous integration?  Have you talked about tools like Nant and a build server with your colleagues, bosses, and friends?  Have you pitched the idea, hoping to see it implemented (in part or completely), only to have it dismissed as too costly an initial investment?  If so, then you're probably like many individuals out there who have tried to find the right resource or answer for implementing automation in some fashion, even though the benefits are already clear to you.  In this post, I'm going to discuss some of the components of continuous integration and also discuss how to gather data to justify the (manual) use of build scripts to "automate" a scenario initially.  The scenario is not ideal (since build servers take automation to a whole new level); however, the tiny victories are sometimes the best ones to start with.

 

Some Traditional Pieces of Continuous Integration

While the grand vision of continuous integration sounds great, getting there isn't as simple as everyone lets on or would like.  While implementing a continuous integration strategy can sound simple compared to other implementations, the road there is marked with multiple milestones.  In order to get from a completely manual process to a completely automated one, it's best to identify some of the various pieces that many consider part of continuous integration.  Below is a small list of items that are traditionally part of continuous integration:

  • Build Scripts
    • Traditionally XML files that are used by a build utility like Ant, Nant, Rake, or MSBuild.
    • Have the ability to execute a sequence of (possibly dependent) tasks that goes beyond just building the code.
  • Build Server
    • A server used to read and execute the build scripts
    • Configured to integrate with a version control system
  • Unit Testing
    • Code based tests used for testing the application's code
    • Typically associated with frameworks like MSTest, NUnit, and xUnit
  • UI Testing
    • Similar to Unit Testing; however, these are scripted usability tests of the UI.
    • Frameworks like Selenium provide the ability for this level of automation.
  • Documentation Generation
    • Tools that read comments from code and/or the version control system to generate and update documentation
    • SVN2Wiki (by Neal Ford), yDoc, Sandcastle, and others provide ways to address this item.
  • Automatic Deployment
    • This is ultimately the culmination of all of the items: the ability to check in code, test it, document it, and deploy it, all in an automated fashion.
    • Often considered the truly last stage due to hesitancy about automatically placing code into a production environment.
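To make the first item on that list concrete, here is a minimal sketch of a Nant build file — the project name, paths, and target names are all hypothetical placeholders — showing how targets chain together as a sequence of dependent tasks that goes beyond just building the code:

```xml
<?xml version="1.0"?>
<!-- Hypothetical Nant build file: names and paths are placeholders. -->
<project name="MyProject" default="test">
  <target name="clean">
    <!-- Remove any output from a previous build. -->
    <delete dir="build" failonerror="false" />
  </target>
  <target name="compile" depends="clean">
    <!-- Compile the sources into a single assembly. -->
    <csc target="library" output="build\MyProject.dll">
      <sources>
        <include name="src\**\*.cs" />
      </sources>
    </csc>
  </target>
  <target name="test" depends="compile">
    <!-- Run the unit tests after a successful compile. -->
    <exec program="nunit-console.exe" commandline="build\MyProject.dll" />
  </target>
</project>
```

Running `nant test` walks the dependency chain (clean, then compile, then test) automatically, which is the "sequence of (possibly dependent) tasks" mentioned above.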

 

What to automate first?

Looking at the list above, it is fairly easy to identify that the Build Scripts provide the automation and flexibility that can extend as you progress down the road to continuous integration.  Because of their flexibility, they hold the key to solving the pain points that will fuel your justification. 

 

A Scenario That Could Use Automation

Imagine that you are in an environment where the developer builds and configures the code for multiple environments, copies the code to a staging location on a file share, and then waits while other teams (e.g. QA) take the code and deploy it to their locations.  In many instances, a bug comes back from the QA team and the process repeats itself: a rebuild and a deployment into the staging area for each environment. 

Let's look at just this process in more detail to understand its cost.

  • The Developer may have to do separate builds for each environment
  • The Developer has to navigate out to the file share
  • The Developer has to backup or delete the files currently in the staging location
  • The Developer navigates to each separate build destination
  • The Developer has to copy the files out to the staging area from each build destination
  • The Developer sends notification to various parties that the code is ready.
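As a sketch of what automating these six steps might look like in a Nant build file (the share paths, SMTP host, e-mail addresses, and environment names here are all hypothetical placeholders, not a prescription):

```xml
<?xml version="1.0"?>
<!-- Hypothetical deployment script: all paths, hosts, and addresses are placeholders. -->
<project name="Deploy" default="deploy">
  <!-- Which environment to stage; can be overridden per run. -->
  <property name="environment" value="qa" />
  <property name="build.dir" value="build\${environment}" />
  <property name="staging.dir" value="\\fileshare\staging\${environment}" />
  <property name="backup.dir" value="\\fileshare\staging\backup\${environment}" />

  <!-- Step 1: build the code for the chosen environment. -->
  <target name="build">
    <csc target="library" output="${build.dir}\MyApp.dll">
      <sources>
        <include name="src\**\*.cs" />
      </sources>
    </csc>
  </target>

  <!-- Steps 2-3: reach the share, back up what is currently staged, then clear it. -->
  <target name="backup">
    <copy todir="${backup.dir}">
      <fileset basedir="${staging.dir}">
        <include name="**/*" />
      </fileset>
    </copy>
    <delete dir="${staging.dir}" failonerror="false" />
  </target>

  <!-- Steps 4-5: copy the fresh build from its destination out to the staging area. -->
  <target name="stage" depends="build, backup">
    <copy todir="${staging.dir}">
      <fileset basedir="${build.dir}">
        <include name="**/*" />
      </fileset>
    </copy>
  </target>

  <!-- Step 6: notify the interested parties that the code is ready. -->
  <target name="deploy" depends="stage">
    <mail from="builds@example.com" tolist="qa@example.com"
          subject="${environment} build staged"
          mailhost="smtp.example.com"
          message="New build copied to ${staging.dir}." />
  </target>
</project>
```

A single `nant deploy` then replaces all six manual steps, and `nant -D:environment=uat deploy` repeats them for another environment without touching the script.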

Now, while your environment may be a bit different from what is portrayed here (in the number of different builds and how they are distributed; using .Net's Publish feature or an FTP server instead, for example), these events usually happen in some fashion at many companies, based on my discussions about this with other developers. 

 

Identifying the Cost Savings of Build Scripts

Looking at these 6 steps, how much time does it take you to do them individually?  How long does it take your computer to do each build, for you to navigate to each folder, and to copy or back up files over the network?  How much time do you spend being distracted between these steps?  How many times do you have to repeat the process because of bugs from QA?  These are the questions that will allow you to begin quantitatively justifying automating just these 6 steps using build scripts. 

Depending on your project's size and complexity, the above 6 steps can take anywhere from 5 minutes to 30 minutes (and even longer in some instances).  Multiplying this amount of time by the number of times the process occurs over a release or a few months will help paint a better picture of the amount of time just the build scripts can save. 

Now, while this identifies the cost in time we are hoping to automate away, we also have to look at the time it takes to create the build script.  If you have never researched build scripts before, this may take some time in research as well as implementation.  Build utilities, like those listed above, offer a wide variety of predefined tasks that will make the above steps fairly simple once you learn them.  The documentation in most cases is decent enough to make this a simple transition; however, testing the script until you are comfortable with it may take an hour or two.  The good news is that once you have a script written for a build utility, it is very easy to turn into a template (thanks to variables/properties that can be set in the scripts) for reuse on other projects.
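In practice, that templating usually amounts to hoisting the handful of project-specific values to the top of the file as properties; everything else stays the same from project to project. A hypothetical fragment (the names and path are placeholders):

```xml
<!-- Hypothetical: only these values change between projects. -->
<!-- overwrite="false" keeps a value already set (e.g. via -D:name=value) from being replaced. -->
<property name="project.name" value="MyApp" overwrite="false" />
<property name="staging.dir" value="\\fileshare\staging\${project.name}" overwrite="false" />
```

Reusing the script on the next project is then a matter of changing these few lines rather than rewriting the targets.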

Now that we have identified the length of time the process takes and the amount of time it will take to create the script, we only have to identify how many deployments it will take until the savings are realized. 
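Putting the arithmetic in one place: if the manual process takes $t_m$ minutes per deployment, the scripted run takes $t_s$ minutes, and writing the script costs $C$ minutes up front, the break-even point is

```latex
n = \left\lceil \frac{C}{t_m - t_s} \right\rceil \text{ deployments}
```

For example, using the rough figures above, a 20-minute manual process cut to 2 minutes by a script that took 2 hours (120 minutes) to write pays for itself after $\lceil 120 / 18 \rceil = 7$ deployments; everything after that is pure savings.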

 

Conclusion

While continuous integration is the goal on the horizon, we looked at a scenario where we justified the first step in simple automation.  This post described creating build scripts that the developer (in our scenario) would execute manually, with the script then doing everything automatically in the background.  This isn't the ideal situation; however, it is a step closer to automating the process even more (through a build server).  The main thing to take away here is that even when a build server is not available or cannot be justified at this time, moving forward with build scripts will save time now and in the future. 


