Wednesday, March 25, 2009

jQuery, JSON, and ASMX 2.0 Services

A few weeks ago, I had a project that was grounded in the .Net Framework v2.0 and Visual Studio 2005.  The requirements were very focused on usability and speed, so AJAX and jQuery were high on my list to use.  Through the project, I learned that there isn't much information in one place that tells you how to set up an ASP.Net solution that uses jQuery and ASMX services to effectively transmit json data back and forth from the client.  Because of this, I'll attempt to fill this void, since there are still many developers and companies out there that have not been able to upgrade to Visual Studio 2008.

In this post, I'll discuss the process of building a plain ASP.Net 2.0 web application project (not a web site project), setting up the necessary entries in the web.config file to utilize the ASP.Net 2.0 AJAX Extensions v1.0, and using jQuery on the client side of the transfers.  In addition, I'll also show how to use the JavaScriptSerializer class and how to write your own custom converter for your objects.

 

A Note About This Post:

This post is focused on Visual Studio 2005 with the ASP.Net 2.0 AJAX Extensions v1.0.  Version 3.5 of the extensions, which came with ASP.Net v3.5 and Visual Studio 2008, has a few changes.  While I will try to point out the differences, the core of this post is focused on Visual Studio 2005 web application projects and v1.0 of the AJAX extensions.  While we all love focusing on the latest and greatest, I'm aware that there are a large number of companies and professionals out there that are locked into using the older versions of the software for a number of reasons.

 

Additions to VS2005 Used in This Post:

This post will be using the following additions to VS2005.  Below are the items and links to their respective installers:

 

Code Downloads for This Post:

 

Creating and Configuring a New WAP:

Now that we have the required additions established, we're ready to create a new ASP.Net 2.0 Web Application Project.  I'm not going to get into the details of how to do this; however, I want to stress that I'm NOT choosing an ASP.Net AJAX Enabled Web Application.  I'm just choosing to create a new, basic ASP.Net Web Application Project.  Now that the project has been created, we need to add a few references and add some information into the Web.Config file.

In the Solution Explorer, we'll need to add a reference to the AJAX Extensions v1.0 library, System.Web.Extensions.dll.  By default, this library is located in the C:\Program Files\Microsoft ASP.Net\ASP.Net 2.0 AJAX Extensions\v1.0.61025\ directory.  After adding this reference, we can update our Web.Config file by registering the ScriptHandlerFactory HttpHandler, using the following snippet inside the <httpHandlers> element of the <system.web> config section:

<httpHandlers>
  <remove verb="*" path="*.asmx"/>
  <add verb="*"
       path="*.asmx"
       validate="false"
       type="System.Web.Script.Services.ScriptHandlerFactory,
             System.Web.Extensions,
             Version=1.0.61025.0,
             Culture=neutral,
             PublicKeyToken=31bf3856ad364e35"/>
</httpHandlers>

By adding the AJAX Extensions reference and updating the web.config file, we're now ready to enable our ASP.Net Web Services (ASMX) to be called from JavaScript.

 

Setting Up an ASMX Web Service:

Now that we have our new WAP set up and configured, it's time to write a simple ASMX web service and configure it so that it will be able to return JSON.  To do this, let's add a web service to our project called JsonService.asmx and use the following code snippet to replace some of the defaults Visual Studio gives you:

   1: using System.Collections.Generic;
   2: using System.Web.Script.Services;
   3: using System.Web.Script.Serialization;
   4:  
   5: [WebService(Namespace = "http://YourNamespaceHere.com")]
   6: [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
   7: [ScriptService()]
   8: public class JsonService : System.Web.Services.WebService
   9: {
  10:     [WebMethod]
  11:     [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
  12:     public string GetCustomer()
  13:     {
  14:         // Method Body
  15:     }
  16: }

On lines 1-3, I imported 3 additional namespaces to use in the code.  System.Collections.Generic will be used in the next section.  System.Web.Script.Services allows us to decorate the service and its methods as script methods for the ScriptHandlerFactory to use when making AJAX calls to and from the client.  System.Web.Script.Serialization contains the JavaScriptSerializer class that we'll use later on.

Line 7 decorates the web service with the [ScriptService()] attribute.

Line 11 decorates the GetCustomer() web method with the [ScriptMethod()] attribute.  This attribute tells the ScriptHandlerFactory that this method is allowed to be called from an AJAX client.  The property inside of the attribute, ResponseFormat = ResponseFormat.Json, tells the ScriptHandlerFactory to send the response stream as a json string and not XML or SOAP.  If a request to the web service is not formatted as json, the response will be returned as XML.

At this point, we can create the body of our web method in any fashion as long as it returns a string.  If you are only passing base types, you can skip down to the Talking to the Server Using jQuery section; however, if you want to pass something a bit more complex, I recommend you continue to the next section.

 

Using the JavaScriptSerializer and a Custom Converter:

While passing base types is easy enough, it can be important to pass objects back and forth from the client.  In order to assist with this, the ASP.Net AJAX Extensions v1.0 come with the JavaScriptSerializer class.  This class has the ability to serialize certain objects into strings representing json objects.  This sounds great and the answer to all of our problems!  Too bad it is very limiting in its natural state.  It CAN convert arrays of base types and (from what I can tell) any framework classes that implement IEnumerable<T>.  I haven't experimented with some of the more obscure generic collections; however, I do know that it handles List<T> and Dictionary<S,V> just fine.

In order to use the JavaScriptSerializer, you simply instantiate it and call its Serialize() method, passing the object that you wish to serialize.  If this is a string array or an object of type Dictionary<string,string>, it will do all of the heavy conversion for you and give you a nice little string to return to the client as shown below:

// Method Body
Dictionary<string, string> customerInfo = new Dictionary<string, string>();
customerInfo.Add("FirstName", "John");
customerInfo.Add("LastName", "Doe");
customerInfo.Add("EmailAddress", "JohnDoe@Domain.Com");
customerInfo.Add("PhoneNumber", "555-555-1212");

return new JavaScriptSerializer().Serialize(customerInfo);

In this code snippet, I have instantiated a new Dictionary<string,string> generic object and populated it with the property information for a customer.  Lastly, I instantiate a new JavaScriptSerializer object and call its Serialize method, passing our customer information into it.  The JavaScriptSerializer will create the following string from our dictionary:

   1: {"FirstName":"John","LastName":"Doe","EmailAddress":"JohnDoe@Domain.Com","PhoneNumber":"555-555-1212"}

This is just a simple string that, technically, we could have concatenated ourselves; however, you can see how it takes the name-value pairs of the Dictionary object and turns them into a json object with properties and string values.
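To see what this string becomes on the client, here is a small sketch in plain JavaScript (outside of any ASP.Net plumbing) that evaluates the serialized string into an object:

```javascript
// The exact string our Serialize() call produced on the server.
var json = '{"FirstName":"John","LastName":"Doe","EmailAddress":"JohnDoe@Domain.Com","PhoneNumber":"555-555-1212"}';

// Wrapping the string in parentheses makes eval() treat it as an
// object literal instead of a block statement.
var customer = eval("(" + json + ")");

// The Dictionary's name-value pairs are now plain properties.
var fullName = customer.FirstName + " " + customer.LastName; // "John Doe"
```

We'll see the same eval() pattern again when we wire up the jQuery call later in this post.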

Seems pretty simple.  Now, let's turn our customer Dictionary into a CustomerInfo object with the same four properties.  Below is the class definition:

public class CustomerInfo
{
    // C# 2.0 (VS2005) doesn't support auto-implemented properties,
    // so we declare the backing fields explicitly.
    private string firstName, lastName, emailAddress, phoneNumber;

    public string FirstName { get { return firstName; } set { firstName = value; } }
    public string LastName { get { return lastName; } set { lastName = value; } }
    public string EmailAddress { get { return emailAddress; } set { emailAddress = value; } }
    public string PhoneNumber { get { return phoneNumber; } set { phoneNumber = value; } }
}

Now, if we replace our Dictionary object with our CustomerInfo object, we get code that looks like the following, easier-to-read snippet:

// Method body
CustomerInfo custInfo = new CustomerInfo();
custInfo.FirstName = "John";
custInfo.LastName = "Doe";
custInfo.EmailAddress = "JohnDoe@Domain.Com";
custInfo.PhoneNumber = "555-555-1212";

return new JavaScriptSerializer().Serialize(custInfo);

Sadly, the above code will give you the following error when you run it:

CircularReferenceError

Since the JavaScriptSerializer doesn't know the definition of our CustomerInfo object, we get this circular reference error.  This leaves us with one of two roads.  We can either manually turn the object back into our Dictionary object, or we can write a custom JavaScriptConverter for our CustomerInfo class.  The nice thing about a custom converter is that once it's registered with our JavaScriptSerializer, we can have it convert any number of CustomerInfo objects (or collections of CustomerInfo objects) we may need.

 

Writing A Custom JavaScriptConverter

Writing our own custom JavaScriptConverter is not as difficult as one may first assume.  Inside the System.Web.Script.Serialization namespace, we are provided with an abstract base class that outlines the definition and gets us started quickly.  In our project, let's add another class called CustomerInfoConverter.  Have the class inherit from the JavaScriptConverter class, then right-click on the class name and select "Implement Abstract Class".  What you get is the following code snippet:

public class CustomerInfoConverter : JavaScriptConverter
{
    public override object Deserialize(IDictionary<string, object> dictionary, Type type, JavaScriptSerializer serializer)
    {
        throw new Exception("The method or operation is not implemented.");
    }

    public override IDictionary<string, object> Serialize(object obj, JavaScriptSerializer serializer)
    {
        throw new Exception("The method or operation is not implemented.");
    }

    public override IEnumerable<Type> SupportedTypes
    {
        get { throw new Exception("The method or operation is not implemented."); }
    }
}

 

The first method the JavaScriptConverter makes us define is the Deserialize method.  This method is used by the JavaScriptSerializer to convert a json object from a client into the type that the web method expects as a parameter.  The JavaScriptSerializer automatically converts the json object into a Dictionary<string, object> collection.  Inside this method, we map the Dictionary keys to a new CustomerInfo object's properties and finally return that new instance of the CustomerInfo object.  Below is the method body:

CustomerInfo cust = new CustomerInfo();
cust.FirstName = dictionary["FirstName"].ToString();
cust.LastName = dictionary["LastName"].ToString();
cust.EmailAddress = dictionary["EmailAddress"].ToString();
cust.PhoneNumber = dictionary["PhoneNumber"].ToString();

return cust;

 

The second method the JavaScriptConverter makes us define is the Serialize method.  This method is used during the actual serialization process.  Here, we're creating a new Dictionary<string, object> collection from our CustomerInfo object.  Since our CustomerInfo object is passed in as a plain object through the method's parameter, we'll need to properly cast it before we can start adding the values to the Dictionary object.  Below is the method body:

// Cast the obj parameter
CustomerInfo cust = obj as CustomerInfo;

if (cust != null)
{
    Dictionary<string, object> result = new Dictionary<string, object>();
    result.Add("FirstName", cust.FirstName);
    result.Add("LastName", cust.LastName);
    result.Add("EmailAddress", cust.EmailAddress);
    result.Add("PhoneNumber", cust.PhoneNumber);

    return result;
}

// If the obj doesn't convert for some reason, return an empty dictionary.
return new Dictionary<string, object>();

 

The last item the JavaScriptConverter makes us define is the SupportedTypes property of type IEnumerable<Type>.  This property returns the collection of types that this converter supports.  Since we are using this converter only for our CustomerInfo class, the property can be simplified to the following line of code:

get { return new Type[] { typeof(CustomerInfo) }; }

 

Now that we have our converter built, we can register it with our serializer to get our json string as shown below:

CustomerInfo cust = new CustomerInfo();
cust.FirstName = "John";
cust.LastName = "Doe";
cust.EmailAddress = "JohnDoe@Domain.com";
cust.PhoneNumber = "555-555-1212";

JavaScriptSerializer jss = new JavaScriptSerializer();
jss.RegisterConverters(new JavaScriptConverter[] { new CustomerInfoConverter() });
return jss.Serialize(cust);

 

Talking to the Server using jQuery:

Now that we have our server-side infrastructure set up, we can begin to write our AJAX client-side code using jQuery.  Compared to all of the code we've written, this is the easy part.  Below is the JavaScript/jQuery code that can be used to call our web service:

function GetCustomerFromServer()
{
    $.ajax({
        type: "POST",
        url: "/JsonService.asmx/GetCustomer",
        dataType: "json",
        data: "{}",
        contentType: "application/json; charset=utf-8",
        success: function(msg){
            var custInfo = eval("(" + msg + ")");
            alert(custInfo.FirstName);
        }
    });
}

In this client-side code, we're making an HTTP POST call to our web service by calling its web method directly.  We are stating that we are sending and receiving data in json format.  Even though our GetCustomer() web method does not take any parameters, we still need to send an empty json object in order to have json returned.

Lastly, we write an anonymous function to handle the message returned to us on success.  The message is evaluated in order to turn it into a json object on the client side.  We then validate the object by echoing the FirstName property in an alert box.

One thing to note: this is for ASP.Net 2.0 web services.  The response from ASP.Net 3.5 web services IS different in that the response message (msg) has its content in a property called "d".  So instead of eval("(" + msg + ")"), it would be eval("(" + msg.d + ")").
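If your client code may have to talk to both versions, one option is a small helper that accepts either shape.  This is only a sketch of mine (parseServiceResponse is a made-up name, not part of the extensions or jQuery):

```javascript
// Hypothetical helper: accepts either the raw json string returned by an
// ASP.Net 2.0 service or the { d: "..." } wrapper returned by ASP.Net 3.5.
function parseServiceResponse(msg) {
    var json = (msg && typeof msg === "object" && "d" in msg) ? msg.d : msg;
    return eval("(" + json + ")");
}

// 2.0-style response: the payload is the json string itself.
var cust20 = parseServiceResponse('{"FirstName":"John"}');

// 3.5-style response: the same payload wrapped in a "d" property.
var cust35 = parseServiceResponse({ d: '{"FirstName":"John"}' });
```

Inside the success callback above, you would call var custInfo = parseServiceResponse(msg); and the rest of the code stays the same against either version of the extensions.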

Summary:

This post has covered a large number of steps to get a json-based web service infrastructure set up using jQuery and ASP.Net 2.0 web services.  After a lot of low-level research and asking people questions, I realized that there was not a single location for this information.  Hopefully, this post will help fill that gap.



Wednesday, March 18, 2009

How I'm Learning F# - Interacting with the .Net Framework

Over the past few months, I've been hearing more and more about the use of functional programming concepts and also languages like Haskell and F#.  While some of the initial musings that I've read revolved around how the concepts have been around for decades and how they make financial and scientific applications easier to read and write, I couldn't find a good reason to start learning it for my typical line-of-business application design and development job or even some of my basic hobby projects.  Nonetheless, I kept getting drawn to the concept and have decided to focus on learning it.

This post marks the third entry in a series that I'll be writing to discuss how I'm going about learning F#.  I'm not saying that my method of learning is ideal or should be followed by others; I'm just reporting how I'm going about it.  Over the course of this series, my goal is to provide other .Net developers a(nother) resource for learning the F# language as well as apply the language to some non-financial, non-scientific scenarios.

 

Series Table of Contents:

  1. Finding Resources
  2. Writing the First Application
  3. Interacting with the .Net Framework

 

Downloadable Resources:

 

Project Overview:

In this post, I'm going to interact with libraries in the .Net Framework to illustrate how you can apply F# to functionality that you might normally write in another language.  The basis of this project will incorporate 2 activities:

  1. Read from a comma delimited text file
  2. Create a new Fixed-Width based text file

Throughout the course of this project, we'll be dealing with a good amount of F# syntax as well as a couple of assemblies from the .Net Framework.  This post is a bit of a step up from the previous post; however, I am hoping that a lot of the concepts will come through in a slightly more comprehensive example.

 

What Does It Take to Read a Text File?:

While I was writing this example, I began to think of how a person traditionally learns a language.  While in our day-to-day jobs we may gloss over some of the granular steps of what it takes to read from a text file, I began to look for that level of detail in this program.  To read a text file in any .Net language, the following basic steps must be done:

  1. Open the System.IO namespace
  2. Call the File.ReadAllLines(string) static function, passing the path to the file as a parameter.

What's nice about F# is that when you think about the detailed steps of a task, you begin to see the lines that you need to write. Here is a function in F# that does the above steps:

open System.IO
let readFile =
    File.ReadAllLines(@"C:\...\myFile.txt")

By writing the above code, we have just opened a .Net namespace (System.IO in this case) and declared a value that returns an array of strings representing each line of the text file.  In the above code, we could have fully qualified the ReadAllLines() function instead of opening the namespace; however, we will be creating a new file here in a moment, so this works out better.

 

How Do I Manipulate This Array of Lines?

So, we have an array of strings representing lines of delimited words.  Now what?  If we want to take these lines and break them into fixed-width lines, we'll need to do the following things:

  1. Identify each word in each line (a.k.a. split the delimited string of words)
  2. Pad each word with spaces until its total length is 25
  3. Concatenate the words into one long string per line

Once again, we can directly map each of these lines to a line or function of code.  Let's see how these steps would look when we translate them into just functions:

let obtainWords (line : string) =
    line.Split(",".toCharArray())

let padWord (word : string) =
    word.PadRight(25, ' ')

let joinWords (words : string []) =
    System.String.Concat(words)

Our 3 steps translate fairly easily into single-line functions thanks to some built-in String methods of the .Net Framework.  Our obtainWords function takes a string and splits it, using a comma as the delimiter.  Next, our padWord function takes a string and pads it to the right to ensure that it's 25 characters in length.  Lastly, we call the System.String.Concat function to take a string array and turn it into a single string.

We have our "what" to do to the lines but we haven't really answered the "how".  In traditional C# or VB.net, we would probably use a for loop against each read line and then call each function to update the variables in those languages.  We would end up with something that looks like the following in C# (using our function names from above):

for(int x = 0; x < readFile.Length; x++)
{
    string[] words = obtainWords(readFile[x]);

    for(int i = 0; i < words.Length; i++)
    {
        words[i] = padWord(words[i]);
    }

    readFile[x] = joinWords(words);
}

Here we have a loop that iterates through each line read by the readFile function.  Then we split each line into another array variable.  Next we iterate over the words array and update the values of the array to the padded versions.  Finally, we update the line with the combined strings of our padded words.

The code is pretty straightforward, but I don't know many people who like inner loops.  Also, some developers may fall into a trap and attempt to use a foreach loop instead of a for loop.  If you are not aware of the difference, the iteration variable generated by a foreach loop is readonly.  I could iterate over the arrays in a foreach loop; however, I would have to add the values into a different variable altogether in order to "update" the values like I did above.  Thankfully, F# has a special function that cleans this up for us.

 

Understanding Array.map()

One of the built-in functions that I have really enjoyed learning is the Array.map() function.  The Array.map() function takes a function value and an array as parameters.  It returns a new array comprised of the values produced by applying the function to each element of the provided array.  An example of this functionality is the C# example above, where we took the words array of strings and updated all values of the array with the padWord() function.  By using the Array.map() function in F#, we get code that does the same thing but looks like the following:

let newWordsArray =
    Array.map (padWord) words

This creates a new array (newWordsArray) where the values are the same as if each string in the words array had been passed to the padWord() function.  One nice thing about the Array.map() function is that it also allows lambda expressions to be used in place of the function value.  Lambdas will be covered at a later time though.  By using Array.map(), we can begin to chain our values together and be able to take our delimited file and turn it into a fixed-width file.  However, before we get into that, let's look at a technique in F# that allows us to chain these together even more easily, called pipelining.

 

Pipelining with |>

Pipelining is a technique where you take the returned value of one item and pass it as the parameter of another function.  This is very similar to a technique in shell scripting using the pipe (|), greater than (>), and double greater than (>>) operators.  For example, in a command window you can type in the dir command and see a list of directories; however, if you wanted to apply paging to the list, you can type in dir | more.  Likewise, if you wanted to send the directory listing from the dir command to a file, you could type dir > file.txt to create/overwrite a file with the redirected output, or dir >> file.txt to append the directory listing to the contents of file.txt if the file already exists.

In F#, we can take a function and send its return value to another function using the |> operator.  From our last examples, we can illustrate this by pipelining the return value of the obtainWords function (which returns a string array) into our applyPadding function.  The resulting function (see below) returns a string array in which every string has already been padded to 25 characters.

let obtainWords (line : string) =
    line.Split(",".ToCharArray())
    |> applyPadding

At first glance, this may seem a bit confusing; however, remember what I said: the output of the first expression (in this case line.Split()) is used as the parameter of the second (applyPadding).  In essence, we are just reordering a chain of events.  This simple example shows how we can remove the need for an additional value to hold the output of obtainWords before passing it to applyPadding.  Below is a more applicable example, where we do multiple chains in order to give our initial readFile function the ability to output an already transformed array of strings.

open System.IO

let obtainWords (line : string) =
    line.Split(",".toCharArray())

let padWord (word : string) =
    word.PadRight(25, ' ')

let applyPadding (words : string []) =
    Array.map (padWord) words

let joinWords (words : string []) =
    System.String.Concat(words)

let readFile =
    File.ReadAllLines(@"C:\...\myFile.txt")
    |> Array.map(obtainWords)
    |> Array.map(applyPadding)
    |> Array.map(joinWords)

Here, we open our System.IO namespace.  Next, we define the functions that will be used to transform the data in the file.  Lastly, we create our value, readFile, that reads the lines into a string array and pipes the array into the Array.map function, which maps the obtainWords function over the array and returns an array of word arrays.  Those words are then padded and subsequently joined.  Through all of those steps, readFile now contains an array of strings that represent the fixed-width version of the comma-delimited file that was read.  The final step is to write those lines to our output file.

File.WriteAllLines(@"C:\...\myOutputFile.txt", readFile)

 

Summary

At this point, another function can be established to do anything with the output file that you wish.  You could use the System.Net namespace to gain access to the mail message object and email the new file to another process, or possibly even FTP/copy it somewhere.  This is just a simple file transformation example to illustrate some advanced functionality and how to interact with the .Net Framework using System.IO and System.  If you know any other .Net language, it works exactly the same in F# as in VB or C#, from what I can tell so far.

This project was one where things began to click for me inside of F#.  I knew the basics and had seen examples through Euler problems and such; however, working through an example like this showed me how easy and straightforward F# can make things.



Thursday, March 12, 2009

Tackling Anxiety Against Automation

A few years ago, the company that I worked for was preparing to upgrade from MS Sql Server 2000 to Sql Server 2005.  While there wasn't too much abnormal concern about upgrading the databases themselves, there was a lot of concern about the large number of DTS packages that the company had created for the majority of their B2B processes.  Until they could develop and complete a test strategy ensuring the DTS packages would run under Sql 2005's DTS runtime, just in case the packages couldn't be converted to SSIS in some fashion, their solution was to stop writing new DTS packages and push the processes into .Net console applications that would be scheduled through a batch processing system.  Personally, I liked this solution since it allowed me to write .Net code in VS and not VBS code inside of the DTS package designer in Enterprise Manager.  Abstracting processes out into a more reusable and testable state was a great benefit over VBS and DTS as a whole.

One of the first console applications written under this initiative was to automate a series of manually run Sql scripts that people were executing against production to accompany a file import process.  It was a pretty simple project and, to my knowledge, it still runs bug free, even when tested by my QA team.

Now fast forward to today.  The need for the process is still in place, and the code hasn't been updated since no changes have been required; however, I found out that the application was only run once in the past few years.  The person who used to manually go through the steps and scripts didn't want to automate them and has been doing the manual steps ever since.  Even today, this person continues to manually spend a few hours every other day going through the same ceremony.  I think my brain gave me a BSOD from what I find to be highly illogical thinking.

So I approached my coworker who has been doing this and started asking the number of "why" questions I had.  Here are some of the responses I was treated to:

  • "I don't trust what I cannot see or watch."
  • "I like being able to watch and correct where necessary."
  • "The process had a bug."
  • "I won't ever know when it breaks or succeeds."
  • "What else would I work on?"

From the discussion around these responses, there are a few design and development practices that can and should be implemented when faced with such an anxious coworker.

 

Establishing Trust through Quality

While I see James Bach's point in his "Quality Is Dead" blog post, I disagree with the blanket nature of the statement.  While there are always constraints when you're talking about software development (time/deadlines usually being the largest), it doesn't mean that every piece of code you churn out will be of poor quality.  Practices such as TDD, or just plain unit testing and reporting, can be implemented in order to demonstrate and achieve a higher level of quality for someone who doubts it.  Now, I believe that any developer can write code with or without unit tests and still achieve the same level of quality; however, it's easier to validate that level of quality through unit tests and TDD practices.  This level of validation is essential to people who do not trust something to be automated.

 

Establishing Awareness through Instrumentation

The bulk of the other themes that I received surrounded the concept of being able to step through the process and watch/correct the data as necessary.  This is a nice thing for debugging; however, if an automated process has to be stepped through, then it's not automated.  One way to handle this issue is to ensure you have a high level of instrumentation coverage in the code/process.  This means making sure the proper types of notification are provided, auditing is turned on to the degree needed, and any errors (be they bugs or bad-data related) are captured and reported appropriately.  I cannot confirm my coworker's claim that there was a bug in the code, since I was unable to find one in the bug database.  If an email capturing the error existed, at the very least, I would begin having my own doubts about the quality of the code; however, even that was not found.  If instrumentation had been implemented to a better degree than the error logging and tracking I put into it, I would be able to verify the claim.  As far as bad-data and catastrophic scenarios go (i.e. the server loses its network connection mid-process), those items can be (and were) handled through logging or some form of process auditing infrastructure.  The more data that can be provided, the more confidence the person will have in the automated process.

 

Establishing Automation Can Assist with Workload

The last theme that came from the conversation dealt with how routine the person's work had become.  This was a very routine-oriented individual who knew exactly what to do when and how long it would take.  This was a person of ceremony, and possibly a bit fearful of being automated out of a job.  After being in the industry for a number of years, I have come to the conclusion that those nightmare stories of people being replaced by programs rarely happen to the developers or analysts that implement the program or automated process.  The automation allows for new, different tasks to be done.  Even if the automated process is kicked off manually, it still frees up time to do other things.  This means you can do more, or in general transition onto the next big thing.  I do not really know of any company out there that would say they have a true shortage of work.  Even if it's wish-list, lowest-priority, internal cost projects, there's always something to do.

 

Summary

Automation is a beautiful thing that I have grown more and more fond of.  I have grown into a developer who looks for things to automate.  I have seen the themes I talked about here at a couple of different companies I've been with and among people that I've talked with.  Each time, I find more and more ways to automate certain elements of a work day and also make the workflow easier.  Hopefully a few things here will help others as well.  Now if only I could find a way to automate my chores at home. :-)


