Wednesday, November 19, 2008

Book Review: The Productive Programmer by Neal Ford

I'm going to do something a bit different from my normal posts and review a book I just finished reading.  I read a number of books each month, and this will give me an outlet to point people to for some of my recommendations.

 

 

I don't know how I discovered The Productive Programmer by Neal Ford (O'Reilly, 2008); however, I know that it was a great purchase.  The book covers various practices for becoming a more productive programmer through tips, tricks, and tools.  The book is split into two parts.  The first part focuses on mechanics: how things are done and how they can be done better.  This section offers a number of great recommendations for tools that make you faster, as well as good practices that make your work faster or more automated.  The second part of the book is about the practices of programming.  This section focuses on refactoring tips, code analysis, metrics, and general philosophies.

 

The book is great for anyone who is looking to do their job better or make it easier by finding ways to remove the repetitive ceremony that we all encounter.  While the book is very Java and Ruby focused, Ford does point out a few examples in C# and tools specific to the .Net community as well.  He also draws on the philosophy that to truly know a subject, one should look beyond their current problem domain to see how others address it.  I found very few items in this book for which I couldn't find a counterpart in the .Net world or translate very easily.

 

Pros:

  • Easy to read
  • Very good tips starting on page 1
  • A large number of small side notes that drive key points home
  • Lots of examples

Cons:

  • Would like a larger variety of examples in different languages

 

If you are interested in finding out more about the book, please refer to the following links:

O'Reilly

Amazon.com

ProductiveProgrammer.com

Google Books

Neal Ford's Website




Wednesday, November 12, 2008

Justifying the Use of Build Scripts

Have you read about all of the great benefits of automation and continuous integration?  Have you talked about tools like NAnt and a build server to your colleagues, bosses, and friends?  Have you pitched the idea and the desire to see it implemented (in part or completely), only to have it dismissed as too costly an initial investment?  If so, then you're probably like many individuals out there who have tried to find the right resource or answer for implementing automation in some fashion, even though the benefits are already clear to you.  In this post, I'm going to look at some of the components of continuous integration and discuss how to gather data to justify the (manual) use of build scripts to "automate" a scenario initially.  The scenario is not ideal (since a build server would take the automation to a whole new level); however, the tiny victories are sometimes the best ones to start with.

 

Some Traditional Pieces of Continuous Integration

While the grand vision of continuous integration sounds great, getting there isn't as simple as everyone lets on or would like.  While implementing a continuous integration strategy can sound simple compared to other initiatives, the road there is marked with multiple milestones.  In order to get from a completely manual process to a completely automated one, it's best to identify some of the pieces that are commonly grouped under the continuous integration title.  Below is a small list of items that are traditionally part of continuous integration:

  • Build Scripts
    • Traditionally XML files (or Ruby files, in Rake's case) that are used by a build utility like Ant, NAnt, Rake, or MSBuild.
    • Have the ability to execute a sequence of (possibly dependent) tasks that go beyond just building the code.
  • Build Server
    • A server used to read and execute the build scripts
    • Configured to integrate with a version control system
  • Unit Testing
    • Code based tests used for testing the application's code
    • Typically associated with frameworks like MSTest, NUnit, and xUnit
  • UI Testing
    • Similar to unit testing; however, these are scripted usability tests of the UI.
    • Frameworks like Selenium provide the ability for this level of automation.
  • Documentation Generation
    • Tools used to read comments from code and/or the version control system to generate and update documentation
    • SVN2Wiki (by Neal Ford), yDoc, Sandcastle, and others provide ways to address this item.
  • Automatic Deployment
    • This is ultimately the culmination of all of the items: the ability to check in code, test it, document it, and deploy it, all in an automated fashion.
    • This is often considered the final stage due to hesitancy about automatically placing code into a production environment.

 

What to automate first?

Looking at the list above, it is fairly easy to identify that build scripts provide the automation and flexibility that can grow as you move down the road to continuous integration.  Because of their flexibility, they hold the key to solving the pain points that will fuel your justification.

 

A Scenario That Could Use Automation

Imagine that you are in an environment where the developer builds and configures the code for multiple environments, copies the code to a staging location on a file share, and then waits while other teams (e.g., QA) take the code and deploy it to their own location.  In many instances, a bug comes back from the QA team and the process repeats itself: a rebuild and deployment into the staging area for each environment.

Let's look at just this process in more detail to understand its cost.

  • The Developer may have to do separate builds for each environment
  • The Developer has to navigate out to the file share
  • The Developer has to backup or delete the files currently in the staging location
  • The Developer navigates to each separate build destination
  • The Developer has to copy the files out to the staging area from each build destination
  • The Developer sends notification to various parties that the code is ready.

Now, while your environment may be a bit different from what is portrayed here (in the number of different builds and how they are distributed, such as using .Net's Publish feature or FTP instead, for example), these events usually happen in some fashion in many companies, based on my discussions with other developers.

 

Identifying the Cost Savings of Build Scripts

Looking at these 6 steps, how much time does it take you to do them individually?  How long does it take your computer to do each build, for you to navigate to each folder, and to copy/backup files over the network?  How much time do you spend being distracted between these steps?  How many times do you have to do this because of bugs from QA?  These are the questions that will allow you to begin quantitatively justifying the automation of just these 6 steps using build scripts.

Depending on your project size and complexity, the above 6 steps can take anywhere from 5 to 30 minutes (and even longer in some instances).  Multiplying this amount of time by the number of times the process occurs over a release or a few months will help paint a better picture of the amount of time just the build scripts can save.

Now, while this identifies the cost in time we are hoping to automate, we also have to look at the time it takes to create the build script.  If you have never researched build scripts before, this may take some time for research as well as implementation.  Build utilities, like those listed above, offer a wide variety of predefined tasks that make the steps above fairly simple once you learn them.  The documentation in most cases is decent enough to make this a simple transition; however, testing the script until you are comfortable with it may take an hour or two.  The good news is that once you have a script written for a build utility, it is very easy to turn into a template (thanks to variables/properties that can be set in the script) for reuse on other projects.

Now that we have identified the length of time the process takes and the amount of time it will take to create the script, we only have to identify how many deployments it will take until the savings are seen.  For example, if the manual process takes 20 minutes per deployment and the script takes two hours to write and test, the script pays for itself after the sixth deployment.

 

Conclusion:

While continuous integration is the goal on the horizon, we looked at a scenario where we justified the first step in simple automation.  This post described creating build scripts that would be executed manually by the developer (in our scenario) but that would then carry out everything in the script automatically in the background.  This isn't the ideal situation; however, it is a step closer to automating the process even further (through a build server).  The main thing to recognize here is that even if a build server is not available or cannot be justified at this time, moving forward with build scripts will save time now and in the future.



Thursday, November 6, 2008

Starting with jQuery - Using jQuery with Web Forms

In this sixth and final entry in a series of posts focused on using jQuery with no prior knowledge, I dive into applying jQuery in the traditional ASP.Net Web Forms environment.  The example code covered in this entry focuses on applying the jQuery Validation plug-in against 2 sets of input in the same form.  To make the example more realistic, the form fields are contained inside of a content page to ensure the code works with ASP.Net's client id renaming feature/annoyance.

 

If you would like to review the previous posts in this series, please feel free to use the links below:

 

Code Downloads for this Post:

ASP.Net Example project

 

An Overview of the Example:

The example project provides the following files:

  • Default.master 
    • The master page used in the example
    • Contains the jQuery-1.2.6.js reference
  • CustomerExample.aspx
    • The content page referenced in the example
    • Contains the jQuery.validate.js reference
    • Contains the example's JavaScript reference
  • jQueryExample.js
    • Holds the custom JavaScript code to manage the validations

Figure 1: The example's Solution Explorer

 

When the CustomerExample.aspx file is viewed in a browser, we see that the form is divided into two field sets (see Figure 2 below).  The left field set simulates a customer information form which requires the First Name and Last Name fields to be filled in.  Should the user opt to enter a phone number, the field will be validated against the format specified.  The right field set is simply a text box for a user to enter their email address to subscribe to a fictitious newsletter.  In this case, the email address field is required and is validated using the Validation plug-in's email rule.

Figure 2: The example form(s)

 

Looking at the Code Behind:

While searching the web, I found a number of different ways of linking an ASP.Net control's client id to jQuery; the simplest and cleanest approach I came across is found in the page's Page_Load() method.  In this block, I am calling the ClientScript.RegisterHiddenField() method to create a hidden form field named after the server control and containing the server control's client id, prefixed with a pound (#) symbol.  While I chose this method for the example, I highly encourage everyone to research the alternative methods that are out there if you do not like the idea of adding this much additional code.
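
To make this a bit more concrete, below is a minimal sketch of what the technique looks like from the client's side.  The control and hidden field names are assumptions for illustration only and are not necessarily the exact names used in the example download.

    // Assume the code behind registered a hidden field named "txtFirstName" whose
    // value is the control's rendered client id prefixed with "#", for example:
    //   <input type="hidden" name="txtFirstName" value="#ctl00_ContentPlaceHolder1_txtFirstName" />

    // Reading the hidden field's value hands us a ready-made jQuery selector
    // for the real server control, no matter how ASP.Net renamed it.
    var firstNameSelector = $("input[name='txtFirstName']").val();

    // The selector can now be used anywhere jQuery expects one.
    $(firstNameSelector).focus();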

 

Diving into the JavaScript:

Opening up the CustomerExample.js file, we first see some variable declarations, followed by the code that executes when the document reaches its ready state and by the form buttons' client-side code.  Inside the code that executes once the document is ready, our custom phone number validation rule is added to the form validator.  Afterwards, we wire the page's form to the validation mechanism.  NOTE: ASP.Net Web Forms applications only allow one server-based form on the page.  Lastly, the variables declared at the top of the file are given the string values of the hidden form fields created in the code behind, and the First Name field is given focus.  At the bottom of the file, two functions are provided that dynamically add and remove the validation rules from the appropriate fields, using the variables that hold the hidden field values to pass in the appropriate control ids.
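
As a rough sketch of the pattern described above, the ready handler and one of the rule-switching functions might look something like the following.  The rule, function, and field names here are illustrative assumptions rather than the exact contents of the download.

    var firstNameField, lastNameField, phoneField, emailField;

    $(document).ready(function () {
        // Register a custom phone number rule with the Validation plug-in
        // (the exact format being enforced is an assumption for this sketch).
        $.validator.addMethod("phoneFormat", function (value, element) {
            return this.optional(element) || /^\d{3}-\d{3}-\d{4}$/.test(value);
        }, "Please enter a phone number as 555-555-5555.");

        // Wire the page's single server-side form to the validation mechanism.
        $("form").validate();

        // Pull the client ids out of the hidden fields created in the code behind
        // (see the earlier snippet) and give the First Name field focus.
        firstNameField = $("input[name='txtFirstName']").val();
        lastNameField = $("input[name='txtLastName']").val();
        phoneField = $("input[name='txtPhone']").val();
        emailField = $("input[name='txtEmail']").val();
        $(firstNameField).focus();
    });

    // Called from the customer button's client-side click: only the customer
    // fields should be validated, so the newsletter rule is removed.
    function applyCustomerRules() {
        $(firstNameField).rules("add", { required: true });
        $(lastNameField).rules("add", { required: true });
        $(phoneField).rules("add", { phoneFormat: true });
        $(emailField).rules("remove");
    }

The second function in the download presumably does the reverse, adding the email rule and removing the customer rules when the newsletter's button is used.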

 

Conclusion:

When I started using jQuery a few months ago, I was very impressed with how easy it was to learn and implement, as well as how small it was.  Since then I have been busy using a number of different UI plug-ins for it, as well as integrating it into ASP.Net and Classic ASP applications for my company.  As I explore jQuery more, I will be posting more about it.  I hope that this series has provided you with a basic understanding of how to get started with jQuery, as well as how to locate and use some of the many plug-ins available.  In addition, I hope that this last post has given you enough information to begin integrating it into your ASP.Net applications.

