
Are We Ready For Web 3.0?

by Gregor Fisher 

Nowhere in the technology world is the noise louder or more bewildering than around the internet, which lies at the nexus of many new technologies. The debate over Web 2.0 is a good case in point. People question whether it is a relevant term and what exactly it refers to. Many think it is just marketing hype, signifying nothing of real merit. Others think it is a useful term, referring to the ability to develop rich, browser-based applications. Regardless of where you stand on this issue, Web 3.0 is already at hand. 

As a means of peeling back the onion, let’s first discuss the Web x.0 nomenclature and what these iterations refer to. Whether or not you think of these terms as more marketing hype than substance, the term Web x.0 does signify clear turning points in web development techniques and in what you can actually do in a web page. For this alone, the ability to clearly mark these turning points, the Web x.0 naming convention is worth its weight in gold. 

The Web x.0 naming convention provides a sort of bread crumb trail that allows us to see at a glance where we have been, as well as eagerly speculate about where we may be going. 

Web 1.0 – Applications are an afterthought 

So what is Web 1.0? Web 1.0 is the original model for developing web pages and applications. To say “applications” is somewhat of a misnomer, because the web was not originally intended to be a platform for application development. The Common Gateway Interface (CGI), the original mechanism for connecting an HTML form to an application running on the server, was grafted onto the early web rather than designed into it. It was developed for the specific need of providing interactivity to web pages and a modicum of connectivity to server-side applications. 

As a result of being somewhat of an afterthought to the original web specifications, the technique for connecting an HTML form (forms are the basis for internet applications) to backend applications was flawed. It was flawed, at least, in terms of being able to build robust, responsive, and dynamic applications, a requirement for being taken seriously as an application delivery platform. 

Early on (the mid 1990s), most web development was taking place on the UNIX platform. CGI applications were written in C, Fortran, Perl, shell scripts, and so on. These programs ran as processes external to the web server. An HTML form submitted its contents via the browser to a CGI script on the web server. The script then parsed the contents of the form and passed the submitted values to your application for processing. The application passed the result of its processing back to the web server; the web server formatted the data and sent the result back to the client browser. 
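
To make that flow concrete, here is a minimal sketch of a CGI-style program. It is written in TypeScript for a Node.js runtime rather than the Perl or C of the era, and the “name” form field is hypothetical, but the mechanics (request data arriving via environment variables, the response written to stdout) are the same.

```typescript
// Minimal CGI-style program. The web server launches this as a brand
// new process for every request, passes request data in via environment
// variables (and stdin for POSTs), and relays whatever it writes to
// stdout back to the browser.

// GET form data arrives URL-encoded in the QUERY_STRING variable.
const query = process.env.QUERY_STRING ?? "";
const params = new URLSearchParams(query);
const name = params.get("name") ?? "stranger"; // hypothetical form field

// The program emits its own HTTP headers, a blank line, then the body.
process.stdout.write("Content-Type: text/html\r\n\r\n");
process.stdout.write(`<html><body><p>Hello, ${name}!</p></body></html>\n`);

// The process then exits; nothing survives between requests, which is
// exactly the per-request overhead discussed below.
```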

On a fundamental level the CGI model works, but as I mentioned, it is flawed in some basic ways. In my experience, it was the clumsiness of the model that gave me the most trouble, although the model had other inherent problems relating to security and performance. Too many of the steps in creating an application under this model had to be done manually. You had to parse the request using Perl (a not altogether friendly language), pass the results to your application, get the result back, format it, and send it back to the client. This required disparate tool and skill sets. You needed to be familiar not only with web servers and the request/response model, but also with Perl; you needed expertise in your programming language, knowledge of HTML, and so on. Indeed, you had to be a “Jack of all trades.” In all fairness, much of this sort of disjointedness still exists in web development today (perhaps even more than ever!); however, many of the fundamental architectural problems have been worked out and the tools have greatly improved. 

At that time, not only were the basic requirements for connecting your application to the web somewhat high, but as I have said, the essential model was flawed. Every request to the server launched a new process, which under heavy use could really bog down the server. The response to every request refreshed the entire page in the browser, which made for a poor user experience (this one is key to understanding the evolution to Web 2.0). CGI applications also had security vulnerabilities. With each request you were launching a process unmanaged by the web server, one that could potentially be exploited because of poor programming on the part of the developer or improper permission settings. This is to say nothing of the lack of any integrated tools for creating and deploying sophisticated, object-oriented applications with a sound architectural model. 

Web applications were largely procedural, oftentimes scripts, and they really did not support building sophisticated applications. What you could do with these tools was nice, but what you could not do became more and more apparent. Eventually, “middleware” tools such as JDBC and ODBC, among others, sprang up to make it easier to connect to and work with databases. Soon, methods for integrating fully object-oriented applications that ran under the auspices of (read: managed by) the web server were on their way. This last item is the other key piece to understanding Web 2.0 and the problems it solves. 
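
As a rough illustration of what that middleware bought you, here is a sketch using node-postgres (pg) as a modern stand-in for an ODBC/JDBC-style driver; the connection string and the customers table are invented for the example.

```typescript
import { Client } from "pg"; // node-postgres, standing in for an ODBC/JDBC-style driver

// The driver hides the wire protocol, the connection handshake, and the
// parsing of result sets that CGI-era code often handled by hand.
async function listActiveCustomers(): Promise<void> {
  const client = new Client({ connectionString: "postgres://localhost/shop" }); // hypothetical DB
  await client.connect();
  try {
    // A parameterized query: escaping and type conversion are the driver's job.
    const result = await client.query(
      "SELECT id, name FROM customers WHERE active = $1",
      [true]
    );
    for (const row of result.rows) {
      console.log(`${row.id}: ${row.name}`);
    }
  } finally {
    await client.end();
  }
}

listActiveCustomers().catch(console.error);
```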

Web 2.0 – A clever workaround leads to the “aha!” moment? 

Web 2.0 represents the attempt to solve some of the basic architectural flaws of that first generation of web applications. In my mind, this is what the term Web 2.0 refers to, and the term is indeed helpful for making this important distinction. It refers to the web applications that came after the CGI era and that solve the “round-trip” problem: the inherent problem of the browser refreshing itself too often, producing an unstable and often unresponsive user interface. It was a real problem. 

The CGI model and the round-trip problem were the two main impediments to the browser becoming a true application delivery platform. Interestingly enough, it was a workaround solution to the round-trip problem that led to the “aha!” moment. 

Well, maybe it wasn’t an “aha!” moment; perhaps it was more like an “aha!” couple of years. Regardless, it was a workaround to the round-trip problem using HTML frames that gave rise to AJAX, the other pillar of the Web 2.0 story (although, as I have alluded to, not the whole story). 

Around the year 2000, developers began using hidden frames to make calls to the server without refreshing the main, visible frame displayed to the user. This allowed you to make calls to the server that did not redraw the web page in the visible frame. This technique for overcoming the “refresh problem” put the “A” in AJAX, which stands for asynchronous. It was, in fact, the beginning of AJAX. While frames and iframes are still used in developing AJAX applications today, the techniques have evolved. 
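
For the curious, here is a sketch of how that hidden-frame trick might have looked, translated into TypeScript; the element IDs and the /lookup endpoint are hypothetical.

```typescript
// The hidden-frame trick, circa 2000: submit a form into an invisible
// iframe so the server round trip never repaints the visible page.
// Assumes markup along these lines (IDs and endpoint are hypothetical):
//   <iframe id="hidden-channel" name="hidden-channel" style="display:none"></iframe>
//   <form id="lookup-form" action="/lookup" method="get" target="hidden-channel">...</form>

const frame = document.getElementById("hidden-channel") as HTMLIFrameElement;

frame.addEventListener("load", () => {
  // When the server's response lands in the hidden frame, copy the piece
  // we care about into the visible page by hand.
  const payload = frame.contentDocument?.body?.textContent ?? "";
  const target = document.getElementById("result");
  if (target) {
    target.textContent = payload;
  }
});

// Submitting the form targets the hidden frame, so the visible page never
// reloads; that is the "asynchronous" seed of what became AJAX.
(document.getElementById("lookup-form") as HTMLFormElement).submit();
```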

Now, browsers expose objects that are accessible from JavaScript on the client. These objects handle making requests to the server asynchronously, directly from JavaScript. You can then load the returned XML data into a data island using the Document Object Model (DOM), without refreshing the page and without resorting to the frames technique. The data in your data island can then be accessed and used in your web page at the developer’s discretion. JavaScript is the common glue that ties these pieces together. Today there are numerous client- and server-side technologies and strategies for managing the relationship between request and response to give web applications a smooth, robust feel. 
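
A minimal sketch of that newer pattern, assuming a hypothetical /quotes.xml endpoint that returns XML:

```typescript
// An asynchronous request straight from JavaScript: no hidden frames,
// no page refresh. The /quotes.xml endpoint and element IDs are
// hypothetical; the server must return an XML content type for
// responseXML to be populated.
const xhr = new XMLHttpRequest();
xhr.open("GET", "/quotes.xml", true); // true = asynchronous

xhr.onreadystatechange = () => {
  if (xhr.readyState === XMLHttpRequest.DONE && xhr.status === 200) {
    // The parsed response becomes an in-memory "data island" that the
    // page can walk with the ordinary DOM API.
    const firstQuote = xhr.responseXML?.getElementsByTagName("quote")[0];
    const target = document.getElementById("quote-panel");
    if (target && firstQuote) {
      target.textContent = firstQuote.textContent;
    }
  }
};

xhr.send();
```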

In fact, there are so many AJAX-related technologies and strategies that it is actually a problem. Sorting out the differences between them can be overwhelming. It would be nice if there were just one or two approaches that were tried and true. Unfortunately, that just ain’t the way it is. 

As promised, however, there is more to Web 2.0 than AJAX. The other significant part of Web 2.0, which I began discussing at the top of this section, has to do with the integration of web servers with application frameworks that make full-blown APIs and the object-oriented programming model available to web applications. Today, web server platforms are integrated with the full .NET and Java APIs. And there are a lot of smarts built into these APIs that tackle very specific tasks in web development, such as parsing XML, among many others. 

This is a fundamental and significant change from the CGI model, in which you passed in arguments upon launching your program, spawning a new heavyweight process each time. This old, clunky way of doing things was fraught with problems. It had the potential to bring the server to its knees. The new model allows you to bind the full weight of mature APIs to a web page. The limitations of server-side scripting with sparse supporting tools are a thing of the past. This is the other, less commonly talked about, part of what I think of as Web 2.0.
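
To contrast with the CGI sketch earlier, here is roughly what the managed model looks like. Express on Node.js is used as a stand-in for the .NET and Java platforms mentioned above; the route and port are invented for the example.

```typescript
import express from "express"; // stand-in for a managed server framework

const app = express();

// The framework, not your code, parses the request, manages the worker
// pool, and keeps the application resident in memory between requests,
// so there is no new heavyweight process per hit as there was with CGI.
app.get("/hello", (req, res) => {
  const name = (req.query.name as string) ?? "stranger";
  res.type("html").send(`<p>Hello, ${name}!</p>`);
});

// One long-lived server process handles every request.
app.listen(8080, () => console.log("listening on 8080"));
```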

Although Web 2.0, specifically AJAX, is still a work in progress, it has paved the way for the browser to become the “miracle client” that everyone hoped it would be. A lot of time, effort, and VC dollars are being spent to make it so. In any case, it is probably a safe bet that the browser will remain the model for thin-client application development for the foreseeable future. The web is now a viable application delivery platform! It is this culmination of a mature server-side programming model along with the solving of the “synchronicity” issue (AJAX) that the Web 2.0 milestone represents. This is what Web 2.0 means to me, anyway. 

Web 3.0 – It’s the application(s), stupid! 

If you scratch around on the web and look up Web 3.0, you will get mostly useless results. Among the more interesting results is a reference to Web 3.0 as the Semantic Web. In fact, here is one of the links I found, if you are interested: http://evolvingtrends.wordpress.com/2006/06/26/wikipedia-30-the-end-of-google/. 

The Semantic Web is an AI-based approach to search engines that allows for natural-language queries of information made available on the web. The notion has been around for a while, yet even after many years there are few tangible results to show for it. I am here to tell you that Web 3.0 is not about search, a single application; it is about sound (VoIP) and the myriad web-based applications that will come to use it. 

As I discussed in a previous article, “The Talkies Have Arrived!,” the addition of VoIP technologies, including the Session Initiation Protocol (SIP) and the IP Multimedia Subsystem (IMS), to web and internet-based applications in general is the next wave. These technologies are the building blocks of a new generation of applications that will include real-time, interactive audio and video. Web 3.0 is about the integration of VoIP into web applications. 

From a purely logical perspective, if you stop a minute and think about it, interactive telephony is what is missing from the web. I am not talking about Skype or Yahoo IM, which include the ability to make calls; I am talking about web applications. How many web applications have you used that allow you to connect with a person via a web page and talk to that person, perhaps even see that person? Not many, I imagine, if any at all. Mash-ups that combine AJAX web development techniques with web services that access a call-control server make these sorts of applications possible, as the sketch below suggests. And a variety of useful business and consumer applications can be built on top of these technologies. 
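
To make the idea tangible, here is a sketch of what such a mash-up might look like: a click-to-call button that asks a call-control web service to bridge the visitor to an agent. The /calls endpoint, its JSON shape, the SIP addresses, and the element IDs are all invented for illustration; a real SIP or IMS gateway would define its own API.

```typescript
// Hypothetical click-to-call mash-up: the page asks a call-control web
// service to bridge the visitor to an agent over SIP, without leaving
// or refreshing the page. Endpoint, JSON shape, and SIP URIs are all
// invented for illustration.
async function clickToCall(visitorSipUri: string): Promise<void> {
  const response = await fetch("/calls", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      from: visitorSipUri,            // e.g. "sip:visitor@example.com"
      to: "sip:support@example.com",  // the agent the page connects you to
    }),
  });
  if (!response.ok) {
    throw new Error(`Call setup failed: ${response.status}`);
  }
  // The page stays put; only a status element updates.
  const status = document.getElementById("call-status");
  if (status) {
    status.textContent = "Ringing the agent...";
  }
}

document.getElementById("call-button")?.addEventListener("click", () => {
  clickToCall("sip:visitor@example.com").catch(console.error);
});
```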

The new era of telecom (IP telephony) is the driving force behind this trend. Sit around the TV during any major sporting event to which advertisers flock. You will see numerous telecom ads touting wireless and WiFi integration and applications. You will see AT&T, Verizon, Sprint…you name it…all hawking the promise of new applications built on IP for mobile devices. Granted, these are not necessarily web (browser-based) applications. It is a good bet, however, that many of them will be, because of the inherent value of a universal client. And whether these applications are accessed on a mobile device or a PC, in a browser or not, doesn’t really matter. The cat is out of the bag when it comes to SIP and IP telephony. A good number, perhaps even the preponderance, of the applications that utilize these technologies will be web based. Are we ready?
