Wednesday, October 24, 2007

An Introduction to PageMethods

PageMethods offer a simple way to asynchronously communicate with a server using Microsoft's ASP.Net AJAX technologies.  Unlike UpdatePanels, which utilize the full page life cycle to synchronously update a section of the page based on the panel's triggers, PageMethods handle the transmissions manually through JavaScript.  Since everything is manual, PageMethods take a small amount of additional time to develop; however, they provide a level of efficiency that cannot be found in UpdatePanels.

In order to begin utilizing PageMethods in your ASP.Net AJAX enabled webpage, you need to do 3 things:
  • Set the ScriptManager's "EnablePageMethods" property to "true".
  • Write Public Static WebMethods in the Code Behind file of the web page that will return the information required.
  • Write JavaScript that calls the PageMethods and reacts to the return results (information or errors).
Setting up the Script Manager
Setting up the ScriptManager to handle PageMethods is fairly straightforward; however, there's one thing you'll need to be aware of. The "EnablePageMethods" property is only found on the ScriptManager control and not the ScriptManagerProxy control. This means that if you have a Master Page with a ScriptManager control that is used by the Content Page that will contain the PageMethods, you must set the property on the Master Page's ScriptManager control. Doing so enables all pages that use that Master Page to use PageMethods. I haven't investigated the impact this has on pages that do not utilize PageMethods.

With that out of the way, in order to enable PageMethods on a page, simply go to the ScriptManager control associated with the page and set its "EnablePageMethods" property to "True". This is "False" by default, but once you set it to "True", you're all set.
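For reference, the markup might look something like this (the control ID here is just an example):

```aspx
<asp:ScriptManager ID="ScriptManager1" runat="server"
                   EnablePageMethods="true" />
```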
Writing the Server Code
Now that we can write PageMethods, we need to flip to our code behind files (or our <script runat=server> sections) to begin. Below are the steps that I traditionally follow when creating PageMethods:
  1. Create a new method/function that returns a String or a value type (Integer, Double, Date, etc.).
  2. Mark the new method as Public Static (or Shared for the VB coders out there)
  3. Import the System.Web.Services namespace
  4. Add the [WebMethod()] (or <WebMethod()>) Attribute to the Function
In a nutshell, I write the method first. This allows me to get the logic in place just in case I need to modify the implementation while I'm in the middle of writing this code. Next, I ensure the proper scope and modifiers are set up on the method, and finally, I mark the method as a WebMethod like I would with a web service.

[WebMethod()]
public static string MyPageMethod(string someParam)
{
    return "Hello, PageMethods!";
}

When the page loads, the ScriptManager will examine the page's definition for any public static WebMethods. It will then generate JavaScript code that can be used to call these methods and attach it to ASP.Net AJAX's PageMethods object. This interaction allows our new method to be called from JavaScript via the method of the same name on the PageMethods object (PageMethods.MyPageMethod(...) in this case - more on this below).

Making the calls with JavaScript

So far we have enabled PageMethods in our ScriptManager and have made some static web methods in our code that the ScriptManager will create JavaScript from. Now, we need to write our own JavaScript code to call these PageMethods so that we can retrieve the information they provide.

In order to create the JavaScript code to call the PageMethods, you simply need three things:

  1. The script that calls the PageMethod itself
  2. A function to call on each successful PageMethod call
  3. A function to call whenever there was an error in the PageMethod call

The code to call the PageMethod is very simple.

function callerMethod() {
    PageMethods.MyPageMethod(PARAM1,
                             PARAM2,
                             PARAMn,
                             callerMethod_Success,
                             callerMethod_Failure);
}

Let's pick this apart now. What we have above is a simple JavaScript function called "callerMethod". Inside of this function, we make a simple call to our method's JavaScript counterpart that the ScriptManager attached to the PageMethods JavaScript object. Next, we pass any parameters required (signified by "PARAM1", "PARAM2", and "PARAMn" in this case), followed by the name of the function to call on a successful transmission and the same for a failed transmission.

One optional parameter that I've purposefully excluded can appear at the very end and is designated for the current userContext. In a later posting, I will provide more information about this parameter and how to use it to provide more powerful communication models in the code. For right now, we will exclude the parameter because we can.

Now that we've created the call, we need to create the functions that'll be called when the transmission succeeds or fails.

function callerMethod_Success(results, userContext, methodName) {
    alert(results);
}

function callerMethod_Failure(errors, userContext, methodName) {
    alert(errors.get_message());
}

Again, these are very simple functions that do nothing other than alert the user with the resulting value on success or the error message on failure. The names of the functions are naturally completely up to you; however, whatever you name them, each requires three parameters.

The first parameter is for the returned value or error that the call to the PageMethod produced. On success, you can immediately begin using the results as if they were a normal value passed into any regular function (since it's either a String or Number JavaScript type). On failure, the PageMethod and ScriptManager return an object with information about the error. The get_message() method of the error object returns the error message that was thrown during the transmission. Once you have the resulting value (success or failure), you can then write JavaScript logic to do whatever you want with it.

The second parameter is the userContext. Like previously stated, this will be covered in a future posting. For now, just remember that you can send the userContext to the PageMethod and read the resulting context on the returning trip.

The last parameter of the success and failure functions is the methodName. This parameter provides a simple string that represents the name of the PageMethod that was called and caused the success or failure function to be triggered. This is a great addition to the returning functions since it allows you to reuse success and/or failure functions across different PageMethods and produce custom logic based on which PageMethod was called. While I can see some merit in having a single success and a single failure function (or even one function that does both), I wouldn't want to maintain the if statements that would spawn from such logic.
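As a hypothetical sketch of that reuse (the PageMethod names and messages below are made up, not anything the ScriptManager generates), a shared failure function keyed on methodName could build its alert text like this:

```javascript
// Builds the message a shared failure handler would show, based on
// which (hypothetical) PageMethod triggered the failure.
function sharedFailureMessage(errors, userContext, methodName) {
    switch (methodName) {
        case "GetCustomer":
            return "Could not load the customer: " + errors.get_message();
        case "GetOrders":
            return "Could not load the orders: " + errors.get_message();
        default:
            return methodName + " failed: " + errors.get_message();
    }
}

// The actual failure callback stays tiny and just alerts the result.
function sharedFailure(errors, userContext, methodName) {
    alert(sharedFailureMessage(errors, userContext, methodName));
}
```

Passing sharedFailure as the failure callback on each PageMethods call would route every error through one place; as noted above, though, that switch can grow unwieldy as the number of PageMethods grows.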

Summary

While the example code here isn't too useful (like most Hello, World projects), it hopefully will get you started on using PageMethods. In future postings, I'll provide additional information and techniques you can use to get more out of PageMethods: the userContext object, a more in-depth look at the data types that can and cannot be passed, and object serialization through JSON. In addition to these items, I'll also be keeping an eye out for anything that may not be quite so obvious when you're working with PageMethods. Down the road a bit, I'll do a very similar post about using local WebServices instead of PageMethods and why.

Tuesday, October 23, 2007

The Beauties of PageMethods & WebServices

It's funny. My first experiences with AJAX were back in the old XMLHTTP Post days, going to a Java Servlet. I liked the results but hated the implementation. Thankfully, the rest of the development community and the vendors couldn't stand that implementation either, and now you can't go anywhere without hearing about AJAX, it seems.

I didn't care too much for Microsoft's ASP.Net 2.0 AJAX implementation at first. I toyed around with it and liked the simplicity of UpdatePanels; however, I hated the overhead and the synchronous interaction they provided. I was looking for an easy toolkit that I could use without having to write a lot of custom JavaScript or focus on writing my own JavaScript toolbox like I felt I had to with AJAXPro. Luckily, I stumbled upon two different features of ASP.Net AJAX: PageMethods and Web Services.

Most blogs about ASP.Net 2.0 AJAX focus on the uses of UpdatePanels or controls from the toolkit. However, there are a couple that talk about the ability to do truly asynchronous calls using PageMethods or WebServices. Under the hood, these appear to be pretty much the same: a public-scoped, static/shared method that can be called using JavaScript once its class has been registered with the ScriptManager (or ScriptManagerProxy in the case of Master Pages).

Along with the similarities, there are some differences:

  • PageMethods are Public, Static methods located in the page's class file or in its <Script Runat=Server> tag.
  • PageMethods are scoped to just the local page. You cannot call a PageMethod of a different page than the one you are currently on. (I need to test this further.)
  • WebService methods allow for a better separation of presentation and control than what a normal code-behind structure can provide.
  • You cannot reference a WebService method that is in a different domain than the page. (Cross-domain scripting has security issues that many modern browsers just shut down now.)



With two different technologies, which is the better one to use inside of your application?

For the PageMethods model:
  • Best for smaller projects
  • If time is critical, PageMethods are faster to develop than the WebService model

For the WebService model:
  • The same code is needed in multiple locations
  • Separation of the control logic is desired
  • The project is larger and tends to have multiple developers working in each other's code

In the next post, I'll provide some code snippets on how to get started on each of these technologies.

Monday, October 22, 2007

Beginning of a Different Focus

For the majority of the year, I have been researching and helping my coworkers learn a variety of technologies. This has led to a lot of opportunities and has forced me to really take the time to focus on a number of technologies. I've been bouncing all over the board in my research: Regex, CSS, LINQ, AJAX, etc. Going forward, I'm going to focus on writing shorter, more focused blog entries in order to provide some value over the random ramblings that I've posted thus far.

Stay tuned.

Tuesday, August 28, 2007

Old trends aren't necessarily antiquated

I was working on rewriting an old ASP 3 application in ASP.Net 2.0 today and had an interesting thought that caused me to look back at the past 5 years of ASP.Net. Back in ye' old ASP 3.0 days, everything was inline code. Then came .Net and the trend of code-behind files. Everything seemed good aside from the initial runtime compilation upon the site's first request. Now in the ASP.Net 2.0 world, we're blessed with truly pre-compiled/published sites that only hold marker files in place of the traditional files that would house the markup.

Now, looking at the history of the application that I am rewriting, this specific application tends to have a few pieces (usually just presentation layout) that require changing on a frequent basis. This is where the difference of opinion lies. Some people see these changes as cause for a full deployment, and along with that, some QA representatives would deem that a full deployment requires a full application test (I've worked with people who demand such even if the change was only the order of text boxes). There are also others who think that one would only need to deploy the updated files. While this seems logical, it comes with the price of the ASP.Net 1.x-style initial lag once the file is accessed. In addition to the lag issue, there's also the requirement of deploying the solution in an "updatable" mode. As in most cases, this is a trade-off between security and convenience. I'll let you decide when such a trade-off is applicable.

Assuming that the convenience is agreed upon as necessary for the project, there are two ways to accomplish this task. One is to simply replace/update the markup of an ASPX or ASCX file. While this is great, what if the UI needed to change from a drop down list to a series of cascading drop down lists to obtain the "same" value in a more refined manner?

At this point, I would advocate a full deployment in order to maintain consistency of code (or in this case, code-behind); however, curiosity got the best of me on this one. :)

Never having tried it myself, I created 2 simple user controls where the code was inside of a <script runat="server"> tag. I chose to register the controls via Web.Config instead of directly on the page so there would be a single registration location, should I find myself wanting to use the control elsewhere. Upon updating the Src of the registration entry, the user control flips.
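For illustration, the Web.Config registration looked something like this (the tag prefix, tag name, and paths here are mine, nothing standard):

```xml
<system.web>
  <pages>
    <controls>
      <!-- Swap the src value (e.g., to ~/Controls/InfoPanelV2.ascx)
           to flip every use of the control at once. -->
      <add tagPrefix="uc" tagName="InfoPanel"
           src="~/Controls/InfoPanelV1.ascx" />
    </controls>
  </pages>
</system.web>
```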

After doing this, I realized that it would work if I modified the user control logic a bit. Many developers create user controls that are nothing more than a segmented section of the page that houses them. This tightly couples the control to the page and is considered a bad practice in many groups. A better way is to encapsulate the functionality of the control so it doesn't rely on any external piece of information. Yet another way (and the way that arguably works best for this situation) is to create the user control so that any data that needs to be passed to the page is exposed through properties and events. As long as all of the controls a person will be swapping out implement the same interface(s), the page's code-behind should be able to implement the logic without a full deployment being needed.

Based on this information, I'd have to argue that while code-behind with a non-updatable deployment may be the best and most secure form of deployment, there may come a situation where it's not the best option.

Wednesday, August 22, 2007

The 4 Paths of 3.0

Microsoft .Net v3.0 has been out and around for a good couple of years now (including its days under the WinFX name), and it seems like people are just now beginning to use it. Within the past year and a half, more and more advances in development paradigms have created the need for a better interface along with easier ways to do things. .Net 3.0 was to help in the areas of authentication (CardSpace), processing (WF), UI (WPF & Silverlight), and SOA (WCF). The 4 aspects pack a huge impact once one dives into them, and applying them together makes them a true force of development.

So, if one were to start picking one of these up, which of the 4 paths should one choose?

CardSpace:
CardSpace provides a quick and easy way to provide digital authentication. Unlike Passport, CardSpace provides better management, an easier API, and a significantly better cost structure (if you've ever priced the Passport authentication licenses, you know what I mean by this). Many other bloggers view it as probably the most important piece that's getting the least amount of attention. I have a feeling that I'll be diving into CardSpace in the next few months, but not initially, given some of my direct needs.

Workflow Foundation (WF):
I started reading up on WF about a month ago, chosen randomly from the 4 technologies. An embedded runtime specifically for workflows that can take compiled or xaml-based workflows is quite nice. It provides an easy interface for design and was quite easy to implement. Even with all of those benefits, WF is one where I feel my limited knowledge is acceptable for the moment. I'll pick it back up after a bit.

Communication Foundation (WCF):
As I was researching and playing around with WF, I noticed that WCF can be integrated into it quite nicely. This, combined with recent performance statistics posted on MSDN, makes it a very nice option to learn and apply in just about any environment. Again, though, I haven't truly dived into WCF and will probably do it second on my list.

Presentation Foundation (WPF):
When the first WinFX book came out, I thumbed through it, and the first thing I saw was Xaml. I saw it and shrugged it off (since I was still semi-green in the ways of .Net at the time). Xaml was a strange language, and I felt more comfortable honing my Perl skills before touching it. However, now looking over 3.0 and reading more into Xaml, I must say that I wish I hadn't discarded it back then. WPF uses Xaml (or can be generated through code), and the deeper you go into the 3.0 technologies, you find that everything else does too. The power of Xaml is amazing, and seeing how it works along with Silverlight, it truly is a remarkable language.

While probably not the most professionally practical in many current settings, I can see many companies beginning to adopt WPF should their development staff become educated in it. The sheer power is amazing, and becoming well versed in it has zero negatives from what I can tell.

Thursday, August 9, 2007

The Daunting Task of Continuous Education

Looking at the technology landscape that exists today, it's easy for someone to be barraged by all the information. What do you do if you are fresh out of college or even high school and want to get into development? Where do you start? Or, in many cases, where do you pick back up if you were left behind?

I've been pondering these questions. With the technologies of today and tomorrow being out there, having the skill to learn the information and to continuously educate yourself is one of the best "skills" a person can have. In addition to being able to absorb information, I've seen it be extremely beneficial to be able to apply and present what you've learned. Presenting a topic helps validate your knowledge and also easily identifies the holes in it.

Most college graduates are not focused on a single programming language. Colleges for the most part prefer to provide a variety of languages, which is good for a neophyte who doesn't have a preference; however, it also shows the difficulty of using college as a baseline for a developer. I switched colleges a lot when I was in school. Not all of my credits would transfer, and in a few of those cases, I got to experience level 2 classes in many programming languages. After a couple of 2nd level Java and VB classes, as well as the interviews and research I've done, college really only teaches some basic file IO and possibly some DB connectivity. Some colleges DO take development courses further than this; however, many programs only get this far.

So you're a recently graduated college student with knowledge of the basics and a desire to learn more. What do you do? There are plenty of resources out there now for the Internet generation. Google, MSDN, forums, user groups, Safari, Books24x7, ElementK, and more are out there for easy learning. Their effectiveness really comes down to the individual. If a person learns best from reading and applying, that person won't have any trouble, but what about the others who prefer examples or mentorships?

While everyone prefers a specific learning style, that doesn't mean the paths they choose have to differ: same content, but different means. So how does a person who knows the basics through college level 2 VB or Java get to being a skilled .Net 3.0, SOA, or even basic OOP developer? How does one learn topics whose coverage is somewhat scarce because they're assumed to be "common knowledge"?

There are some steps that make sense sequentially, like the following:
  • Basics > OOP > SOA
  • .Net 2.0 > .Net 3.0
  • OOP > Unit Testing
Even with these maps, it can still be a daunting task to get exposure to everything, and even more of one to attempt to master any of it. The only thing you can do is take it one step at a time. No one will ever be an expert at everything since there's just too much out there; however, there are definitely those who are more skilled in a particular field. Luckily, most of those individuals in our field tend to share.

With this goal in mind, I shall hopefully minimize ramblings like these first couple of posts and start putting in more technical content. If even one person finds it useful, then it'll be worth it in my opinion.

Monday, July 30, 2007

Writing the first line....

A few years after the turn of the century, I had the privilege of hearing Ted Waitt, returning CEO, President, and Founder of Gateway Computers, speak about the computer industry and his return as CEO of his company. He spoke of the need for change and the possible ramifications if the company didn't change. He told the audience about arriving at a meeting with his executives while being carried in a coffin. I don't know if this is true or not; however, it was used to illustrate the importance of such a change in that business at the time. Shortly thereafter, Gateway and other computer OEMs began branching out from computers into digital electronics such as MP3 players, DVD players, and plasma and LCD TVs.

Looking back, I'd have to say Ted was right about the need for change. If any person or company continues to offer the same items over a period of time without changing, then they may end up in that coffin (figuratively speaking). Change is constant in this industry as well as in the world around us. I remember the rise of the Internet, XML, and even how Web Services was the ultimate buzzword. I remember interviewing for a developer's job a handful of years back where they talked to me about web services. They seemed to know that it was the "next big thing," but after a few questions of my own, I knew they had absolutely no idea what it was all about. At the time, I'd be lying if I told you that I knew what promise it held either.

Nowadays, we can't get away from, again, the "next big thing": Web 2.0. Mashups and tags and personalization and social networking and AJAX and everything else. Roll the clock back 10 years, and I could say that while college programming taught someone the basics, there was still a lot that it didn't teach. You may have learned VB 5 or Java 1.1; however, I never heard of a college talking about DCOM, or JDBC, or anything of the like. Looking at the playing field now, I'm seeing a lot of the same from future developers. They may know the basics, and the version of the language may have changed, but the overall scope hasn't.

Developers have to constantly learn. This isn't a bad thing, since in today's Internet age, finding answers and information is extremely useful and semi-easy. Ask a VB developer 10 years ago what they used as a resource, and many times the answer was MSDN. Ask any developer now, and most will probably say Google as well as a handful of other sites. Information is at our fingertips, and all we have to do is focus on what we want.

How can we use this large amount of information, though? It's great for reference, and some of it is decent for learning from others' experiences; however, staying one, let alone two, steps ahead of the current technology isn't easy. Everyone's looking for the next big thing. Web 2.0 has already migrated from the Internet into the Enterprise, and things are getting better. We've heard about SaaS and SOA for a few years, and yet just now some people are coming around to what they mean.

Today, I started to read an article over at ZDNet that talked about the Semantic Web and how some people are even beginning to call it Web 3.0 (why not Web 3000!.....the exclamation point is important there too :) ). Truth be told, I didn't read more than a few lines into the article because it was really just 4-6 sentences about a rather large excerpt from a web cast that it provided as a free download. Naturally, I listened to that instead of reading only part of it.

The web cast brought an interesting topic to light, one that I foresee as at least 1 step, if not 2 steps, ahead of where we are today. The Semantic Web is very much the ultimate dream of SOA and SaaS: being able to provide facets of information that can be consumed by just about any endpoint. The ability to consume and offer such information within any type of program, webpage, or device is a nice dream. I have no doubt that it'll be a reality eventually, but after diving into it, there's a lot to chew on there. The whole design philosophy is different from much of what is shown in today's main markets. Throw into the mix that not everyone has fully gotten their head around SOA, let alone that there aren't enough able developers to truly push it forward once they do, and it seems like a far-off dream. Nonetheless, I think it's possible.

Take a good, long look around the web. There was a time I hated to surf because I couldn't find anything that interested me. Looking around now, a person can be barraged with possibilities, and you don't have to stretch your imagination to see what the next thing for those entities is. Take a look at SalesForce.com, Drupal.org, or even attempt to take in the various assets, services, and developments that Google is working on. Look at some of those items and try to come up with what's next for each of them. Some basic OOP design principles would point toward abstraction. How will one be ready to consume pieces of what today's pioneers are creating? Will the Web be ready? What can each of us do?

I've recently taken up catching up on some of the recently released Microsoft technologies (namely the .Net 3.0 framework of Workflow, Presentation, and Communication Foundations). Looking into these technologies and assessing what else is going on inside of MS's roadmaps as well as the industry, some of the transitions taking place are pretty obvious. It's only a matter of time before we'll need to learn them, lest we end up in the proverbial coffin.