Web 3.0 Has Already Begun!

Web 3.0 has begun, and it's not what you think. I don't think many people realized what Web 2.0 was until we were at the height of it, already seeing the capabilities that had been set out years before. But imagine if we had all realized in 2001 what was possible, and all of the major social and technical shifts that would give rise to an explosion of innovation and opportunity by 2006.

First let's start by defining Web 2.0. Depending on whom you ask, you would get different answers. A marketer would tell you it is about User Generated Content (UGC). A technologist would tell you it is all about AJAX and the API mashups that AJAX enabled. A designer would tell you it was the new simplified aesthetic that focused on conversions and pragmatism. They're all correct, but let me distill this into a concise list:

  • Use Case:  Socialized content.
  • Technology:  AJAX (JavaScript) and Mashups
  • User Experience: Fewer page refreshes. Simplified design.

To say Web 3.0 has begun, we'd need parallel impact on all three of these fronts, so let's start there:

From the use case perspective, consider the semantic and meta tagging that are now a part of the HTML5 spec. The semantic web promises to transform the web into an ultimately connected experience in which a machine has as much awareness of the content as a human. Imagine your calendar warning you there is a conflict prior to booking a ticket on Expedia. This is equally significant to, if not more significant than, the social revolution of Web 2.0.

As far as technology goes, HTML5 solves two major problems that will lead to a major revolution in web application architecture, moving in the same direction as Web 2.0. First, no more limitations preventing cross-domain AJAX calls. That has huge implications for mashups! Second, a robust local storage facility that can be used for JSON (serialized JavaScript object) document storage locally on a device, which will substantially help overcome the stateless persistence issues that have plagued web applications since the beginning. This is so significant, in fact, that there is already a movement toward smaller event-driven server-side technologies such as Node.js and NoSQL document databases such as CouchDB and MongoDB, which are perfect for JavaScript object storage, in acknowledgement of the shift of dominance toward the client. Think fat client applications written 100% in JavaScript. The potential is already there and buzzing quietly under the surface. Node.js already has as many followers on GitHub as Rails (for Ruby)!

Finally, user experience. This one can be summed up in two words: interactive media. Finally, with HTML5 and the proliferation of cheap bandwidth, it seems the pieces are in place for the much-anticipated online media revolution. There are technologies already available, built upon HTML5, that enable bi-directional triggering and interaction between the HTML document and the embedded media. Imagine you are a media content producer and you can embed triggers into your video that will instigate social interaction and information widgets at the correct times in the media. Simply amazing when you think about what might be possible here.

Putting it all together, the potential is there for a much larger wave of technological and cultural innovation now than at the beginning of Web 2.0. Not only is this significant enough to be compared to Web 2.0; it's bigger! And if you consider Rogers' Innovation Adoption Curve, NOW is the time for entrepreneurs and technologists to begin creating opportunities around these possibilities. Don't wait until we're already halfway through the innovation cycle and the innovation has become obvious in retrospect. By then, it is too late.

Website Design for Tablets and Mobile

Recently I went through a redesign, and as part of the effort, I reviewed how best to support tablet and mobile visitors. If you haven't been researching this topic recently, no one could blame you for dismissing mobile and tablets as a novelty that won't have much impact on your web presence. But if that's where you're at, consider this:

It was recently estimated (as of early 2012) that 10% of all Internet traffic now comes from mobile devices, and that mobile traffic will be as high as 36% by 2016. Compared to this time last year, mobile traffic is up 131% … in a single year. We are clearly in a secular trend toward personal mobility in computing. Proliferation of mobile devices is accelerating, and sales of internet-enabled mobile and tablet devices now exceed those of traditional desktop devices; tablets alone will exceed desktop sales as early as next year!

So there is a compelling need to support mobile and tablet users in any new web design moving forward. But how do you do that? First let's talk about our objectives, and from there we can look at a few possible strategies:

Objectives:

Consider the use pattern of these devices. Desktop users are typically at the office, either researching work-related information or procrastinating between projects (yes, ahem, at work). Tablet users, meanwhile, are typically sitting on the couch in front of the television during evenings and weekends. Mobile users are trying to squeeze in activity while waiting at the doctor's office or at a red light in their car (sad to say).

So what is going to be a meaningful and satisfying user experience for these three users? The desktop and tablet users are going to be more receptive to alternative suggestions for content (eCommerce upsells, related-post links, etc.). Tablet users will enjoy media and rich images more. And all of this rich user experience is actually an impediment for mobile users, who are fighting against time constraints, a small screen, and slow load times. They just want to get to critical information, such as contact details, as quickly as possible. So sticking with a traditional web design only, which is geared for the desktop user, is not going to represent well the two new classes of users (tablet & mobile) that you need to start thinking about.

To accommodate these new users, there are a few options:

Native Application:

The first thing that comes to most people's minds with mobile content are the native applications for iOS and Android that you download and install on your mobile and tablet devices. These provide very rich user experiences but require that you develop a separate application for each device that you want to support. That can get expensive. And let's be honest, how many websites are users going to take the effort to download an app for? Does all that extra expense justify developing for a use case that users likely won't even engage with? There are exceptions to this rule, of course. If you're a gaming company, or a cloud software application providing a productivity tool that the user will need to access on a regular basis, then a native app is certainly a possibility and probably even a superior user experience. For the majority of corporate and marketing websites, though, this just isn't a reasonable option, since a user is merely looking for some quick information about your site.

Separate Mobile Design:

The next reasonable consideration is creating a separate website to support tablets and mobile phones. This is relatively easy to accomplish. It is straightforward to detect the device type (e.g., via the HTTP User-Agent header) and redirect users to alternative content accordingly. If you use your phone to browse web content frequently, you've likely come across a few sites that will redirect you to a separate subdomain of their website. xyz.com, for example, might redirect you to m.xyz.com, where it serves a simplified version of the website that better suits mobile devices. But this gets complicated when you begin to consider all the various permutations. What about tablets, for example, which actually benefit from a richer user experience?
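
As a minimal client-side sketch of that detection-and-redirect idea (the user-agent patterns and the m. subdomain are illustrative assumptions, not an exhaustive device list):

    // Redirect phones to the simplified mobile site, preserving the path.
    // The UA patterns and m. subdomain here are illustrative assumptions.
    (function () {
      var ua = navigator.userAgent;
      var isPhone = /iPhone|iPod|Android.*Mobile|BlackBerry/i.test(ua);
      if (isPhone && window.location.hostname.indexOf('m.') !== 0) {
        window.location.href = 'http://m.xyz.com' + window.location.pathname;
      }
    })();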

Responsive Web Design:

There is a quiet movement in the design community that began with an article by Ethan Marcotte describing the possibility of Responsive Web Design (RWD). The idea provides a technical framework for implementing a "mobile first" web design, in which you start by designing the constrained mobile experience and layer on a richer user experience based on capability from there. By working backwards, you ensure you've accounted for all the various permutations, such as the different screen sizes (viewports), landscape versus portrait layouts, etc.

Responsive Web Design is possible primarily because of the introduction of media queries in the new CSS3 stylesheet standard. Media queries enable the website author to encapsulate CSS rules within conditional statements that evaluate characteristics of the user's device, such as window size (viewport) and orientation. RWD also addresses the use of flexible design grids and context-aware images that adapt their size and quality based upon device type.
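
Here is a minimal mobile-first sketch of the idea; the breakpoint values and class name are illustrative assumptions:

    /* Base styles serve the smallest screens first; media queries
       then layer on richer layout for more capable devices. */
    .content { width: 100%; }          /* phones: single fluid column */

    @media (min-width: 600px) {        /* tablets and up */
      .content { width: 80%; margin: 0 auto; }
    }

    @media (min-width: 1024px) and (orientation: landscape) {
      .content { width: 960px; }       /* desktop / landscape tablets */
    }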

A Hybrid Approach:

After evaluating these different options, I ultimately concluded a hybrid approach was in order. The philosophy of responsive web design is elegant, I naturally gravitate to it, and it worked wonders for optimizing the desktop version of the website for tablet devices. At the end of the day, though, I did not feel it was appropriate for mobile devices. As mentioned earlier in this post, mobile users have fundamentally different goals and needs, and are simply not looking for a more elegant representation of my website; they want to get the information, and get it quickly, period. When I think about my own mobile experiences, if I look up a business online it's typically because I just want to find their address or contact information. Thus, there is a lot of information on the primary website that does not belong in the mobile version, and it really should be treated separately.

An Example:

If you check out Enlogica.com on your smartphone, you'll see that there are only 5 pages, and the first is the contact link. If you click on this, it is a very simple page with the address and three buttons that allow the user to click to call, click to email, and click to get directions on their phone. That's it. There are a few other pages below that for the sake of completeness (the blog section is accessible at the bottom), but I'm really trying to create a simplified user experience based upon the assumed use pattern of the user. And this is fundamentally a different priority than responsive web design, which is focused on adaptive aesthetics, not user experience.

Because I'm using WordPress as the CMS framework for the site, I was able to easily set up a secondary mobile template that is only delivered to mobile devices, based upon the device type (the HTTP User-Agent again). This provided an elegant solution, since I was able to optimize the content for mobile devices without having to create a separate mobile site to maintain.

In summary, I determined that a hybrid approach to mobile accessibility, marrying the benefits of responsive web design with the practicality of a separate mobile theme, was preferable. Responsive (RWD) techniques were used to adapt to a reasonable tablet experience, but mobile devices receive a separate theme with limited navigation, and the content of those pages is also minimized. The mobile experience is greatly simplified and models the native device interface (using jQuery Mobile). This approach provides an optimal user experience on all three device classes (desktop, tablet, and mobile).

Article originally published at SitePoint:
Website Design for Tablet & Mobile

Goodbye Flash, Hello Edge!

It is no secret that the iPhone does not support Flash. Steve Jobs went as far as to explicitly rule out support of Flash by name in his famous 2010 open letter, "Thoughts on Flash." And now Adobe has responded by announcing they will no longer develop Flash Player for mobile devices. Instead, Adobe is quietly releasing a new product called Edge, which outputs HTML5, CSS, and JavaScript, as the presumed replacement for their popular animation authoring environment.

HTML5 is very powerful and a significant milestone in the evolution of web application interface architecture. It's not just about animation and native support of audio and video. In truth, HTML5 in conjunction with JavaScript can do just about everything that Flash could do. The name HTML5 is perhaps a misnomer in this regard, as HTML5 embodies the standards for a collection of technologies that facilitate audio, video, real-time rendering and animation, local persistence, and so much more. But because it is all inherently a part of the HTML5 document model (not compiled binary code), accessibility and SEO are no longer the issues they have been in the past. Finally, the design and technology prerogatives need not be in contradiction to one another!

So what exactly is Adobe Edge? The basic concept for developers is quite similar to Flash. The differences are minor; for example, the timeline is now based on elapsed time rather than key frames, similar to Adobe's After Effects video software. Also notable is the use of non-destructive editing: rather than overwriting the original HTML and JavaScript files you start with, it will create its own parallel file set to augment what you started with. Available binding events are also a little different and better reflect their HTML5 underpinnings. Overall, though, you'd be surprised just how similar the tools are.

Adobe Edge

With Adobe making this change to embrace HTML5, it seems that just about everyone is on board now with HTML5 and JavaScript as the way forward for development of rich internet applications; Microsoft even announced recently that they are deprecating Silverlight. That's on the desktop, anyway. Mobile is a more complex issue, with so much momentum still behind development of native applications (iOS, Android, etc.). The cost of maintaining separate applications for various devices certainly isn't ideal, however, and truthfully, 80% of those apps could be replicated using HTML5 and JavaScript without much negative impact on user experience, but at a substantially lower cost. It's primarily the sophisticated game and media applications that might not port as well.

Adobe Edge seems well positioned to become the default product for interactive and animation authoring for the Web 3.0 applications of the future. The product has done a good job of playing off the strengths and knowledge of the existing Flash platform and hopefully will not alienate the developer base. They've satisfied UI architecture prerogatives by keeping the output artifacts aligned with HTML5 and the document object model, and they're going to be the first significant tool to provide an easy authoring solution for what will inevitably be a major new wave of web application innovation. I'm sure it was a painful decision for Adobe to kill their golden goose, but this move should be a positive for everyone and may actually help them in the long run.

Claiming Your Online Identity

Defining and managing your identity online can be time consuming, but considering how many of our social and professional relationships begin with a Google query, it probably makes sense for all of us to invest a little time pulling it all together and presenting the image to the world that we want displayed, rather than whatever just happens to show up on the Internet.

First, let me make a distinction. In SEO circles, there is a practice called 'reputation management' that generally involves creating numerous pages of external content that will rank for a given brand's related searches. All those external pages are intended to rank below the corporate website but above whatever negative reviews or remarks exist, hopefully pushing any negativity down to the 3rd or 4th page of results, where no one will notice. In other words, it is a focused spamming effort on behalf of said brand, with the goal of manipulating search results. Sometimes brands have no choice and have to engage in this level of online warfare. Pragmatism aside, this is NOT what I am describing here. Rather, I am talking about a proactive and cooperative effort to help Google identify all of your identities and content online, in return for preferred placement of your content when people search for you.

Google implemented an important new feature, along with the release of Google+, that allows them to recognize social channels and owned content and properly attribute them to the brand or individual. The implementation takes advantage of the rel attribute in the HTML5 spec, which can specify origination and authorship of content. If you create a Google profile and link your content and social identities using these tags, you can essentially let Google know which content is really you. In a recent presentation, Matt Cutts (Google) called this new ability "author rank," and if you've been following their recent talk about quality signals, you know that authenticity of content is a big deal in recent algorithmic updates. In fact, they're not only giving preferred placement to authenticated content for brand and name searches, they're even displaying the photo of the author in many cases, to help set this content aside as authenticated, from reputable authors.

Okay, sounds good so far, right? Now we just need to work on the confusing mess that is our social network and tie it all together in some meaningful way. After a bit of homework and practice on my own online identity, I distilled what I feel is a best-practice approach. First, I created a simple website for myself under my namesake URL and created links to all of my social accounts from this location. NealCabage.com is literally my homepage now. Next, I linked all my Enlogica articles back to my homepage. And of course Enlogica has a couple of its own social accounts to facilitate outreach and easier content socialization, so it's important to keep the social accounts owned by the blog separate from those attributed to my own personal identity. Whereas I link to the blog's social sites in the sidebar of the blog, I keep all my personal social media accounts separate and only link to them from my own personal homepage. Once all of that is clear, I created a Google profile page for my own personal identity and linked all the personal identities together. Iterative.ly should probably also do this for its own identity, separate from my own.

Sync Up Your Online Profile

Now let's get a little more specific about how it's actually implemented, using authorship tagging. On my homepage, each of my social account links contains the attribute rel="me". The homepage has the proper authenticity to make such a claim since you already linked to it from your Google profile account. Next, on the blog that you are reading now (for example), I placed a small author block at the bottom of each article that links back to my personal homepage, with the attribute rel="author". Just as rel="me" helped Google to identify my social profiles, rel="author" will help it to recognize any content that I have created and authenticate it.

Since I know this can be a little confusing, here’s the same information in enumerated form:

1. Create a Google profile (profiles.google.com)
   a. Link to your website and all social accounts
   b. Link to any blogs or online magazines that you contribute to
2. On your blog, provide a link on each article page back to your home base site. Include rel="author"
3. On your homepage, link to all of your social accounts. Include rel="me" (both tags are sketched below)
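
Here is a minimal sketch of what steps 2 and 3 might look like in markup; the social URLs are placeholders standing in for your own profiles:

    <!-- On an article page (step 2): attribute the post to your homepage -->
    <p>Written by <a href="http://nealcabage.com/" rel="author">Neal Cabage</a></p>

    <!-- On your homepage (step 3): claim your social profiles as "me" -->
    <a href="https://plus.google.com/your-profile-id" rel="me">Google+</a>
    <a href="http://twitter.com/your-handle" rel="me">Twitter</a>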

Connecting Accounts to Google Profile

The final step is to validate that everything is set up correctly. Google provides a tool for this. Unfortunately, at the time of this writing, the tool is giving me an error for using the rel="me" attribute on my social account links from my homepage. This appears to be a bug, however; even Matt Cutts's blog is getting the same error for the same reason.

Anyway, despite that hiccup, if everything was done properly, you will soon start to see your homepage, social accounts, and content begin to dominate the search results for terms related to your name or brand. You may even begin to notice your profile picture next to your content.

There you have it. My homepage, my social accounts, and my blog contributions are all accounted for, properly attributed, and validated. Now, hopefully, with a little time, Google will begin to treat my identities with a positive bias for searches related to my personal 'brand'.

The Future Is the Semantic Web

I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The “Intelligent agent” people have touted for ages will finally materialize.
~Tim Berners-Lee

In 1999, Tim Berners-Lee described his vision for the future of the Internet. He described computers being able to parse and understand the same information that we can, becoming personalized virtual assistants on our behalf. Imagine the calendar application on your mobile device observing a schedule conflict when you begin to book a trip on Expedia. Or what if your computer could do all of the research necessary to justify a decision you task it with, compiling a detailed research report in support of the decision, ready for you in the morning? To many, this is the true destination of the Internet, and all of the social sharing that caught everyone's attention in Web 2.0 is a mere preview of what is to come. One blog author I came across described it by saying, "It's a data orgy and your server is invited."

For roughly a decade now, a movement called the Semantic Web (aka HyperMedia) has been seeking to enable this vision. The thought is that by properly annotating an HTML document, you enable a computer to consume and understand the information much like a human would, and thus to make informed decisions and act on our behalf. To accomplish this, there must be structure and relational definitions, and several annotation protocols and descriptive vocabularies to follow. So far, however, adoption has been anemic, for numerous reasons: the effort has been splintered by various factions creating competing technologies, and limited support from browsers and search engines has given early adopters little incentive to explore the technology more deeply.

But recently this has been changing. With the introduction of HTML5, there is now support for semantic tagging in all modern browsers; search engines have begun to reward websites with rich snippets in their search results when semantic data is present; and reports have detailed how BestBuy and others have seen traffic lift as much as 30% as a result of their recent RDFa implementations and the resulting rich snippets in search results.

Others are making use of semantic markup as well. Facebook has been using RDF in their social graph implementation since 2009 and recently began using the hCard and hCalendar markups for user profiles and events, respectively. Google has been making a push to 'authenticate' authors and authored works with the XFN rel="me" annotation, and Yahoo Tech and LinkedIn are both using Microformats in their data. So now we have support, peer adoption, and recent evidence of positive ROI from the effort, and we're starting to see results; RDFa adoption increased 500% in the last 12 months alone! The only thing missing was significant corporate backing.

In response to this confusion and complexity, a consortium of the major search engines, including Google, Yahoo, Microsoft, and Yandex, came together to create a standardized approach called Schema.org. It assumes the use of Microdata rather than RDF/RDFa and provides a single resource for the major semantic vocabularies to be used. The hope is that by simplifying the technology and standardizing it across the industry, adoption barriers will be reduced and the average development team will begin to embrace the technology.

So how does one implement semantic annotation? The two primary annotation frameworks are the Resource Description Framework (RDF/RDFa) and Microformats. RDF can be used to mark up the HTML document with what are called Subject-Predicate-Object triples. It is a robust language that is most commonly used for larger datasets requiring deep data linking. It has been criticized for its complexity, however, so a recent revision called RDFa focused on making it easier to use, primarily through attributes. Here is an example of what it might look like if you were to mark up a simple object in RDFa:

    <div xmlns:v="http://rdf.semantic-vocabulary.org/#" typeof="v:Person">
        <p>Name: <span property="v:name">Neal Cabage</span></p>
        <p>Title: <span property="v:title">Technologist</span></p>
    </div>

Microformats, meanwhile, were developed with simplicity in mind, as a more natural extension of the HTML document. Microdata takes a similar attribute-based approach; it is what Schema.org has settled on, and it even has its own DOM API in the HTML5 spec, so it is the presumed future standard for most websites:

    <div itemscope itemtype="http://semantic-vocabulary.org/Person">
         <p>Name: <span itemprop="name">Neal Cabage</span></p>
         <p>Title: <span itemprop="title">Technologist</span></p>
    </div>

At some level the concepts are quite simple; however, there are numerous ontologies and semantic vocabularies, which have been defined and are referenced in order to give meaning to your RDF or Microformat attributions. Notice the semantic-vocabulary.org reference above? It calls out to an externally defined schema for the Person hCard being defined here. Schema.org has defined many of these directly for Microdata, but there are others, such as Dublin Core, DocBook, and GoodRelations, the last of which is especially popular for eCommerce. There are plugins available for many of the major CMS and eCommerce platforms to assist with semantic markup as well.

The Semantic Vision
In the near term, the Semantic Web (or "Linked Data") is merely an exercise of annotating your content more precisely, with the hope of getting the Google carrot at the end of the proverbial stick (e.g., rich snippets). As a result, any short-term concrete gain may seem a bit hollow. The real promise, however, is in what is possible when semantic annotations in content reach critical mass. The chart below shows what some have been predicting in terms of the direction of Internet technology and proliferation. The real intelligence and interoperation still very much lie ahead, in what could be described as a Web 4.0 world. This assumes that HTML5 marks the precipice of Web 3.0 and that we all begin working on proper annotations now, to lay the foundation for those achievements in the future.

Web 4.0

In the meantime, there are companies already beginning to do cool things, experimenting with the sort of predictive intelligence that one might expect from a "linked data" Web 4.0 world. Hunch.com in particular attempts to help you parse through the excessive data on the Internet by looking at your past interests and those of your friends, to determine what you might like and limit your choices accordingly. Hunch.com's CEO points to research asserting that fewer purchases are made when consumers are presented with too much information or too many choices. It is thus a worthwhile technology for retailers to pursue, in an effort to predictively put fewer, more precise choices in front of each consumer.

And on top of all the data that currently exists in the world, we just continue to collect exponentially more, via social interactions, tags, various user profiles, online transactions, and analytics data. The buzzword in many organizations now is "big data," and they are looking to new tools such as Hadoop to help them address these problems. In this growing mass of data, we will eventually outgrow search as our most useful paradigm for accessing it all. And what will that look like?

I was asking myself this question the other day when I looked down at my iPhone and realized the interface may already be here! What would a computer interface look like that is largely Internet driven, but for which the user experience does not begin with an Internet browser or Google.com? In fact, if the real vision of the semantic web is intelligent consumption of data, with that intelligence applied to specific applications acting as virtual agents, wouldn't that manifest as a lightweight albeit more mature mashup-style app, similar to what we already have on mobile phones and tablets today? It's an interesting thought. I can imagine a progression in that direction, with these applications continuing to grow in sophistication, intelligence, and awareness.

In closing, here is a video I found while researching this topic that provides an excellent introduction. If you're looking to introduce these concepts to your team or stakeholders, this is a great place to start:

Web 3.0 from Kate Ray on Vimeo.

100% JavaScript Web App Architecture

Imagine creating a Rich Internet Application (RIA) built entirely using JavaScript, from client to server. What would it look like? I've already discussed the benefits of HTML5 and the coming Web 3.0 movement in recent posts, so I will not go into those details now. Instead, I want to focus on what an application like that might look like.

First, to review the new resources at our disposal with HTML5. Because of the localStorage facility required by the HTML5 spec, you will now have a much greater ability to maintain persistence across the user experience, and even over the lifetime of the user. The specification calls for a robust key/value store that can hold up to 5MB of data and doesn't necessarily ever expire (ideal for storing JSON objects). Second, security provisions now allow for cross-domain AJAX calls, enabling mashups of disparate data sources directly in the client application. Third, web workers are now a native concept, allowing intensive JavaScript work to run as background processes without locking up the browser. Fourth, native support for media enables bidirectional control of audio/video between the DOM and the media. Additionally, there are libraries for rendering sophisticated SVG graphics and even drawing in real time on the canvas. These resources provide the tools for a much deeper user experience and the opportunity to create a fatter client application, shifting much of your central logic to the client.
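
As a minimal sketch of the persistence piece, here is how a JSON document might be stored and retrieved with localStorage (the key name and object shape are illustrative assumptions):

    // Persist a JavaScript object locally as serialized JSON.
    // 'userPrefs' and the object shape are illustrative assumptions.
    var prefs = { theme: 'dark', lastVisit: new Date().getTime() };
    localStorage.setItem('userPrefs', JSON.stringify(prefs));

    // Later (even after the browser is closed), rehydrate the object.
    var saved = JSON.parse(localStorage.getItem('userPrefs'));
    if (saved) {
      console.log('Welcome back, last visit: ' + saved.lastVisit);
    }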

Meanwhile, on the server side, a substantial amount of momentum is building behind Node.js. In fact, check out Node.js on GitHub and you'll find it now has more followers than Ruby on Rails. The promise of Node.js is an extremely thin, event-driven framework for creating web services entirely in JavaScript. The benefits of this approach are an extremely fast framework that avoids thread locking, and an all-JavaScript developer experience, which provides both faster development and less team bifurcation. Node.js is also ideal for managing long-session connections, which might be used for media or chat sessions, etc.

Node.js Popularity
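
For a feel of how thin the framework is, here is the canonical minimal Node.js web service, using only the built-in http module (the port choice is arbitrary):

    // A complete Node.js web service: one event-driven callback
    // handles every incoming request, with no thread pool involved.
    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ message: 'Hello from Node.js' }));
    }).listen(8080);

    console.log('Listening on http://localhost:8080/');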

There has also been a lot of interest in NoSQL ("not only SQL") databases, which are document oriented rather than table oriented. Instead of a structured, pre-defined schema, you simply store your data in a structure that can be independent of any other document's. The documents are essentially serialized JavaScript (JSON, or its binary equivalent BSON) and are ideal for persisting data objects from the software. In fact, wasn't this largely the goal of bridging ORM technologies such as Hibernate?

JavaScript Architecture

So where is all of this going? If we look at the technology stack we now have available, it is natively JavaScript from top to bottom, with persistence resources on both the client and server, both ideal for storing serialized JavaScript objects. Combine this with the deep integration of media and imaging, support for AJAX web service integration across any domain, and the proclivity of JavaScript developers toward a fat client architecture, and I think you have a recipe for a popular new fat-client architectural paradigm emerging.

Considering mobile, it is ideal not to have more page refreshes than necessary. A web application architecture that is much less dependent upon server-side page refreshes could be a much simpler alternative to building complex native mobile apps in Objective-C, plus still another version in Java for Android. And with all of the media resources implicit in HTML5, it appears we're going to see much deeper RIA user experiences still, which again benefit from a fatter client and thinner server application. In this paradigm, the server application is minimized to a collection of web services in support of the fat client application, and those services aren't even an exclusive provider for the RIA application, since it can also pull directly from other external resources providing JSON web services.

Observing other early indications of movement toward this model, Yahoo is set to release a new platform called Mojito, which seeks to "blend" the server/client paradigms into one cohesive development experience. It is essentially built upon Node.js on the backend and YUI as a rich AJAX library on the client side. There is a whole philosophy they are building around best practices for developing RIA JavaScript-centric web applications that serve mobile and desktop alike. Or you could assemble your own stack using jQuery and Backbone.js on the client; Backbone.js in particular is an MVC framework for JavaScript and may prove particularly useful as you begin building more complex "fat client" applications.
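
As a small sketch of the structure Backbone.js provides (assuming its Underscore and jQuery dependencies are loaded; the Post model and the #post element are illustrative assumptions):

    // A minimal Backbone.js sketch: a model holds state,
    // and a view re-renders whenever that state changes.
    var Post = Backbone.Model.extend({
      defaults: { title: 'Untitled', read: false }
    });

    var PostView = Backbone.View.extend({
      initialize: function () {
        // Re-render automatically when the model changes.
        this.model.on('change', this.render, this);
      },
      render: function () {
        this.$el.text(this.model.get('title'));
        return this;
      }
    });

    var post = new Post({ title: 'Hello, fat client' });
    new PostView({ model: post, el: '#post' }).render();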

This movement toward deeper JavaScripting and richer user experiences online began in 2005 but stopped short due to key limitations around AJAX, media, and persistence, though I think many saw the promise. With those key issues now solved, I very much look forward to seeing a continuation of interface innovation that leads to further refinement in both architecture and user experience.

HTML5 Is Kind of a Big Deal

Have you looked at the new features that are part of HTML5? I must admit that it took me some time to actually look at it, because I dismissed it as just some new markup to deal with. In that way, it's really not correct for this to be labeled HTML at all! In fact, HTML5 is a collection of technologies that have been sorely missing from web browsers, and the upgraded markup is a rather minor footnote.

The introduction of these technologies means that we're probably about to witness a major new architectural paradigm for web applications, toward that of a fat client and away from the traditional static page model. Dare I say Web 3.0? As someone who has always had an affinity for UI-centric applications, I must admit that I find this pretty exciting!

So let's get into it. What is so exciting about HTML5? There are 5 major categories that I would use to describe these upgrades:

1. Dynamic Image Rendering

There are two major technologies to discuss here: SVG and Canvas. SVG is 'retained mode' and used mostly for data modeling, whereas the canvas is 'immediate mode' and can be used for real-time painting and animations. In totality, these vector tools are what will allow the technology to replace what has otherwise been accomplished mostly with Flash up till now.

SVG – essentially a markup language for vector images that live in the document. That in itself is pretty cool when you consider the semantic web potential of the image data. Mozilla has a really great example of SVG with their glow map, which shows a glowing dot on a map for each download of Firefox. The real-time update of the graphic shows what can be done here.
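
Because SVG is just markup, a vector image can sit right in the document like any other element; this tiny example is illustrative only:

    <!-- An inline vector image: a labeled dot, scriptable and
         inspectable like any other DOM node. -->
    <svg width="200" height="100" xmlns="http://www.w3.org/2000/svg">
      <circle cx="50" cy="50" r="20" fill="orange" />
      <text x="80" y="55">One download</text>
    </svg>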

Canvas – more for user interactivity and animations. So whereas SVG might be used in place of traditional Flex modeling, imagine the canvas replacing most uses of Flash. A friend shared an example of an interactive animation in which the user draws a stick figure and then interacts with it as it begins walking across the screen and doing random activities. The painting of the screen is all possible via the HTML5 canvas; the animation and interactivity are the interplay of JavaScript with the canvas.
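
Here is a minimal sketch of that interplay: JavaScript repainting a canvas on a timer to animate a shape (the element id, dimensions, and timing are illustrative assumptions):

    // Animate a square across a <canvas id="stage" width="300" height="100">.
    var ctx = document.getElementById('stage').getContext('2d');
    var x = 0;

    setInterval(function () {
      ctx.clearRect(0, 0, 300, 100);   // erase the previous frame
      ctx.fillStyle = 'steelblue';
      ctx.fillRect(x, 40, 20, 20);     // paint the square at its new position
      x = (x + 2) % 300;               // walk it across the stage
    }, 33);                            // roughly 30 frames per second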

2. Native Media Support
Embedding a video or audio file is now as easy as embedding an image. This is probably the most famous feature of HTML5, as we have all heard Steve Jobs point out that you no longer need Flash to embed videos, right? But that is just a tactical concern. What really makes this cool are the implications of having natively supported video. Imagine what you can do when you have a UI that can directly interact with the video, rather than simply play it. For one thing, you can have multiple interactive hot spots throughout the document that trigger videos based upon user or application events. That's true Flash- or Director-style multimedia, directly in the DOM, and both SEO and mobile friendly.
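
As a sketch of that bi-directional interplay, here the document listens to the video's timeline and fires a widget at a cue point; the element ids, cue time, and showPoll function are illustrative assumptions:

    // Fire a (hypothetical) social widget when the video reaches 30s.
    var video = document.getElementById('feature');
    var fired = false;

    video.addEventListener('timeupdate', function () {
      if (!fired && video.currentTime >= 30) {
        fired = true;
        showPoll();            // hypothetical widget defined elsewhere
      }
    });

    // And the reverse direction: the document can drive the media.
    document.getElementById('replay').onclick = function () {
      video.currentTime = 0;
      video.play();
    };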

There are also interesting experimental technologies such as the Popcorn.js library, which looks at triggering events within the UI or application via embedded footnotes in the video itself. Many people are buzzing that Interactive/Internet TV is finally going to take off in 2012, because the technologies are finally present to do so. Imagine the implications for TV content producers if they're able to conduct interactive events on an Internet TV video via directives embedded in their video content. They could instigate real-time data and social media and coordinate it with their media experience. That is very compelling.

3. Semantic Tagging
This is compelling from a Semantic Web perspective. So far in the evolution of the Internet, everyone has focused on creating websites and web applications that engage a user. But what if a computer needed to interact with your content? The only significant example of this to date is the search engine, and we should all be familiar with SEO and how pages are altered to make a document parseable by search engines.

But imagine taking it a level beyond that. What if your computer acted as a virtual assistant for you, and as you were booking a plane trip on Expedia, it interrupted you to point out that you have a scheduling conflict? These sorts of aware systems are only possible if they are themselves aware of the content we are interacting with. So the idea of the semantic web is that if we all properly tag our content with appropriate tags and metadata, we make it possible for such systems to be aware and to consume our content.

HTML5 takes a big step forward on this, both with semantically appropriate tagging and with formal adoption of meta tagging standards such as RDFa and the rel attribute, which can be used to map together authors and their contributions online, as I discussed in another post.

4. Local Data Storage

Initially the HTML5 specification called for a local implementation of a SQL database. Sadly, this was deprecated last year. Many of the modern browsers have already implemented it, but it may not be there in future browsers.

What is there, however, is a client-side key-value storage solution. Using the localStorage API, you can store up to 5MB of data, and it persists indefinitely, or until the user manually purges it. So this can still be very useful. It is basically cookies on steroids.

This is probably a better solution than a local SQL database anyway. Consider the movement of NoSQL database systems toward unstructured document stores rather than tables and schemas; they essentially store JSON objects, which are native and ideal for persisting the state of a JavaScript application. Given that an HTML5 JavaScript application would be the consumer of this database, this might actually be the perfect solution for maintaining state, compared to a traditional SQL DB.

As for why this is a big deal: it has the potential to completely change the architectural paradigm of web applications! Persistent state is one of the big issues that pushed traditional "fat client" application design toward a thin client/fat server model, since web applications relied on the server to remember everything. If this issue of state is finally resolved, we could see a return to a fat client model, in which we're doing far more development in JavaScript on the client side, and much less on the server. Many, many implications here!

5. Standardized Resources

Several other resources have been standardized as well:

JS Web Workers – this is a subtle yet big one. We've all probably experienced the occasional web application that seemed to load really slowly and kill usability because the JavaScript had a lot of work to do and ran away with the app. With web workers, it's possible to isolate certain JavaScript tasks as background processes, similar to Unix daemon processes. That could be very helpful for those pesky social 2.0 JavaScript includes that delay your document-ready event, or for other data-fetching or calculation-intensive work such as computing prime numbers, etc.
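
A minimal sketch of the pattern; the file name and the prime-counting task are illustrative assumptions:

    // main.js – hand heavy work to a background thread.
    var worker = new Worker('primes.js');
    worker.onmessage = function (e) {
      console.log('Primes below 1000000: ' + e.data);
    };
    worker.postMessage(1000000);   // the UI stays responsive meanwhile

    // primes.js – runs off the main thread.
    onmessage = function (e) {
      var count = 0;
      for (var n = 2; n < e.data; n++) {
        var prime = true;
        for (var d = 2; d * d <= n; d++) {
          if (n % d === 0) { prime = false; break; }
        }
        if (prime) count++;
      }
      postMessage(count);
    };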

Cross Domain – AJAX can finally make calls cross-domain, rather than being limited to the domain of origin. This is again huge in terms of being able to build a fat client app, particularly in the days of mashup APIs.
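
A sketch of a cross-origin call; note the remote server must opt in by sending an Access-Control-Allow-Origin header, and the URL here is a placeholder:

    // Cross-domain AJAX (CORS). Works only if api.example.com responds
    // with an Access-Control-Allow-Origin header permitting this page.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'http://api.example.com/listings.json');
    xhr.onload = function () {
      var data = JSON.parse(xhr.responseText);
      console.log('Fetched ' + data.length + ' listings for the mashup');
    };
    xhr.send();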

Geolocation – each browser now also provides a standardized geolocation API. The implementation details are up to each browser; Firefox, for example, now uses Google's location service. So now we have standardized location approximation for all computers, not just mobile. This has recently popped up in Google Maps, in fact. Clearly this is an attempt to support creation of a single fat client app for mobile and the desktop.
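
The API itself is nearly a one-liner to use; a minimal sketch:

    // Ask the browser for an approximate position; the user must consent.
    navigator.geolocation.getCurrentPosition(
      function (pos) {
        console.log('Lat: ' + pos.coords.latitude +
                    ', Lon: ' + pos.coords.longitude);
      },
      function (err) {
        console.log('Location unavailable: ' + err.message);
      }
    );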

Conclusion

So there you have it: HTML5. Each one of these respective upgrades is a big deal, but in my opinion the really big deal is that together they will trigger a new architectural paradigm in web applications.

Imagine writing a single application that is equally engaging on your mobile device or desktop. It retains its own state and doesn't require page refreshes, so it's an optimal experience even with a slow internet connection or a lack of connectivity, such as being on the road or in airplane mode. Imagine the client being where the majority of the application logic lives, with only minimal server calls required, and even those written in an event-model pattern using JavaScript via Node.js, rather than an entirely different server-side technology. Yep, this indeed has the potential to trigger a really exciting new era of web application development!