Buckle in, the Mr. Fusion reactor is loaded with 1.21 gigawatts of trash and our first stop on this journey is 1961. Originally known as ARPANET (Advanced Research Projects Agency Network), the Internet itself dates back to 1961, when American engineer Leonard Kleinrock published the first packet-switching paper, “Information Flow in Large Communication Nets”. This predates the web as we know it today by three decades, and if I were to start there, this post would be by no means brief. The Internet is a global system of interconnected computer networks, and the web is one of the services carried over those networks: a collection of HTML (HyperText Markup Language) documents and other resources, linked together by URLs (Uniform Resource Locators) and usually accessed by web browsers from web servers.
Though the Internet had existed in the form of ARPANET since the late 1960s, accessing documents remotely was a complex process using services such as FTP (File Transfer Protocol), NNTP (Network News Transfer Protocol) and the Gopher protocol. These protocols are still in use today, but offer only a simple directory structure that users navigate to choose files to download, without the ability to view them directly.
The real vision and execution for the World Wide Web didn’t come about until the early 1990s, when a British computer scientist at CERN by the name of Tim Berners-Lee was struggling with this complexity. He needed to share information with physicists around the world and found there was no quick and easy way of doing so. Berners-Lee had already experimented with hypertext in an earlier project named ‘Enquire’. This gave him the idea of simplifying access to content by using hypertext to link directly to documents, thus starting the World Wide Web project. On 6 August 1991, Berners-Lee published a short summary of the project on the public newsgroup alt.hypertext, and so began the journey to where we are today.
Although the World Wide Web existed in 1991, it was more or less inaccessible to the world without software to interpret the hypertext markup language Berners-Lee had created. Tim’s original browser went by the same name as his project — WorldWideWeb — but was later renamed Nexus to avoid confusion. Written on a NeXT computer, it was both a browser and an editor, but due to platform restrictions it ran only on NeXT machines.
In February 1992, CERN released the Line Mode Browser, the first browser to support multiple platforms. Between 1992 and 1995 a range of other browsers hit the market, including Viola, Mosaic, Cello, Netscape Navigator, Opera and Internet Explorer 1.0, opening web access up to virtually anyone with a computer and a landline.
On 30 April 1993, CERN issued a public statement relinquishing all intellectual property rights to the World Wide Web, making it free for anyone to use with no fees payable to CERN. Berners-Lee left CERN in October 1994 and went on to form the World Wide Web Consortium (W3C). Made up of several member organisations, the W3C works to create standards for web development and to act as a forum for discussion on web usage. Today it comprises 406 member organisations from around the world.
Back in Doc’s DeLorean, this time setting our time circuits to the year 1995, when the web was starting to become a household name in the western world. With almost 50 million users globally, representing 0.8% of the world’s population and 173% year-on-year growth, there was a surge of entrepreneurs looking to make it big with a new generation of consumers. It was the start of the dot-com bubble, marked by the founding (and, in many cases, spectacular failure) of a wave of new Internet-based companies.
Websites of the time were limited to little more than static text documents linked together by hyperlinks, and because technical knowledge was not yet widespread, there were far fewer authors of content than consumers. Styling was also very limited, with only 18 tags (unique components) available in the original HTML spec. It took the best part of a decade, four iterations of the HTML specification and a stock market crash to build out the foundations for the modern web applications we use today.
With the web reaching thousands of new users each day, there was a desire to do more with this new technology than simply sharing content. Methods were developed for rendering dynamic content on a web server before it was sent to the client, allowing each page served to be customised for its target user depending on their previous interactions with the website. Known as server-side scripting, many languages and interfaces have been developed for this purpose, starting with CGI (Common Gateway Interface) in 1993. Other notable options since then include Perl, Python, PHP, Ruby, ASP, Java Servlets and ColdFusion.
Server-side scripting languages, coupled with a database, gave users the ability to register and log in to a website for the first time. This led to the first wave of web applications, or what developers now refer to as CRUD applications. The acronym denotes the four main abilities of these types of websites: Create, Read, Update and Delete, which map directly to both database queries and HTTP request methods.
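To make that mapping concrete, here is a minimal sketch of my own (written with modern Node.js and the Express framework purely for illustration, not something from the era being described); the /posts routes and the SQL in the comments are hypothetical.

```js
// CRUD operations mapped to HTTP methods and SQL statements (illustrative only).
const express = require('express');
const app = express();
app.use(express.json());

app.post('/posts', (req, res) => {       // Create -> INSERT INTO posts ...
  /* save req.body here */ res.status(201).end();
});
app.get('/posts/:id', (req, res) => {    // Read   -> SELECT * FROM posts WHERE id = ?
  /* load the record here */ res.json({ id: req.params.id });
});
app.put('/posts/:id', (req, res) => {    // Update -> UPDATE posts SET ... WHERE id = ?
  /* update the record here */ res.status(204).end();
});
app.delete('/posts/:id', (req, res) => { // Delete -> DELETE FROM posts WHERE id = ?
  /* remove the record here */ res.status(204).end();
});

app.listen(3000);
```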
One of the first of these that attracted a large user base was the online auction website eBay. eBay was originally founded as AuctionWeb in San Jose, California, on September 3, 1995, by French-born Iranian computer programmer Pierre Omidyar. Omidyar wrote the code for what he called an “experiment.” He wanted to know what would happen if everyone in the world had access to a single global marketplace. To test his idea, he came up with an auction website, where he listed a broken laser pointer that he was going to throw away. In the end, a collector bought it for $14.83.
Today CRUD applications are used all over the web, from posting blog comments to the APIs (application programming interfaces) serving content to the native applications on our phones. It is the basis of any web application whose state represents its users and their actions.
Nowadays web pages are much more than static documents; they can be fully interactive applications that look and feel like native desktop applications. Much of this transition is due to the addition of JavaScript and CSS (Cascading Style Sheets).
One of the first problems discovered with HTML was its limited ability to present content in a desired fashion. For example, plain HTML offered little more than basic alignment options for formatting a paragraph. This led to the addition of style sheets, an extension implemented by browsers that separated the structural language (HTML) from the visual language (CSS). CSS is responsible for roughly 90% of the styles you are seeing on this page, from font size and spacing to positioning and orientation.
JavaScript (originally code-named ‘Mocha’) was developed in just 10 days in May 1995 by Brendan Eich while he was working for Netscape Communications Corporation. First announced to the public on 19 August 1995, it was created as a way to make web pages more interactive by adjusting a page’s content in response to the user’s input. JavaScript allows for instant feedback to users without communicating back to a web server; for example, it can tell users their password doesn’t meet the minimum requirements without submitting a new page request.
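As a rough sketch of that kind of instant feedback, the snippet below checks a password’s length entirely in the browser; the element ids are hypothetical.

```js
// Instant validation with no request to the web server.
// Assumes the page contains <input id="password"> and <span id="password-help">.
const input = document.getElementById('password');
const help = document.getElementById('password-help');

input.addEventListener('input', () => {
  if (input.value.length < 8) {
    help.textContent = 'Password must be at least 8 characters long.';
  } else {
    help.textContent = '';
  }
});
```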
It wasn’t the only choice for dynamic client-side scripting at the time, though; Microsoft and Macromedia were also developing similar technologies of their own. Microsoft built a scripting engine, originally called ActiveX Scripting, which was used in both the Windows operating system and Internet Explorer; JScript and VBScript were Microsoft scripting languages that ran on it. Meanwhile, Macromedia was developing ActionScript as an object-oriented language for its Flash authoring tool, which was later acquired and developed by Adobe Systems as Adobe Flash.
These scripting languages were initially considered unsafe, as they had the ability to execute arbitrary downloaded code on a user’s computer with potentially damaging results. Browsers attempted to contain this code by restricting its reach within the browser, but techniques like ActiveX controls and the exploitation of browser vulnerabilities could be used to sidestep those restrictions.
With client side scripting being both limited in functionality and potentially dangerous, server side scripting was the preferred method for creating dynamic content. Some websites used small amounts of client side scripting alongside server generated content to produce a richer experience for users.
Then, in 1998, the team behind Microsoft’s Outlook Web Access extended the possibilities for client-side rendering on a website. They created the XMLHTTP component, accessible from client script via ActiveX, which for the first time enabled a web browser to request more information from a web server without refreshing the entire web page. This technique was later adopted by Mozilla, Safari, Opera and other browsers as the XMLHttpRequest JavaScript object, and it started a new generation of web applications. Most users probably first noticed this technology when registering for a new service and being told their desired username already existed before they had even submitted the registration page.
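A minimal sketch of that username check using the XMLHttpRequest object might look like the following; the /check-username endpoint and its JSON response are assumptions for illustration.

```js
// Ask the server whether a username is taken, without a full page reload.
function checkUsername(username, callback) {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', '/check-username?u=' + encodeURIComponent(username));
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Hypothetical response shape: {"taken": true}
      callback(JSON.parse(xhr.responseText).taken);
    }
  };
  xhr.send();
}

checkUsername('marty.mcfly', (taken) => {
  if (taken) console.log('That username already exists.');
});
```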
In October 2004, Kayak.com’s public beta release was among the first large-scale e-commerce uses of what its developers at the time called “the xml http thing”. Google then made wide deployments of standards-compliant, cross-browser Ajax with Gmail in 2004 and Google Maps in 2005. Today we almost take Ajax for granted: the number of websites using it greatly outnumbers those that do not, and it is the backbone of the modern web app.
With websites now evolving to become far more dynamic, it was evident that the landscape of the web was completely changing. ‘Where we’re going we don’t need reloads.’ Web 2.0 had reached lift-off. First coined in 1999 by Darcy DiNucci, then popularized by Tim O’Reilly and Dale Dougherty at the O’Reilly Media Web 2.0 Conference in late 2004, Web 2.0 describes websites that emphasize user-generated content, e.g. social networks, wikis, video sharing and web applications. It does not refer to any update to technical specifications.
Proposed updates to the HTML specification came in June 2004, when the Mozilla Foundation and Opera Software presented a position paper at a W3C workshop focusing on technologies that would be backward compatible with existing browsers, including an initial draft specification of Web Forms 2.0. The workshop concluded with a vote on the proposal: 8 in favour, 14 against. Immediately after the workshop, the Web Hypertext Application Technology Working Group (WHATWG) was formed to continue work based on that position paper, and a second draft, ‘Web Applications 1.0’, was also announced. The two specifications were later merged to form HTML5. The HTML5 specification was adopted as the starting point for the W3C’s new HTML working group in 2007, and in January 2008 the first public working draft of the HTML5 specification was published.
One month after WHATWG was formed, in July 2004, David Heinemeier Hansson extracted an application framework from his work on the project management tool Basecamp. He called this framework Ruby on Rails and released it as open source under the MIT License. Rails provided default structures for a database and web service, emphasizing well-known software engineering patterns and paradigms. This allowed developers to rapidly prototype web applications, and it laid the foundations for some of the largest sites on the internet today, e.g. Twitter, GitHub, Yammer, Shopify, Hulu and Basecamp.
As server-side code progressed, so too did client-side code, with the release of the Yahoo! User Interface Library (YUI); jQuery came soon after. These libraries provided abstractions that simplified client-side scripting and smoothed over differences between browsers, making it far easier to build richly interactive web applications. jQuery is still the most popular JavaScript library in use today, installed on 65% of the top 10 million highest-traffic sites on the web.
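As a small illustration of the kind of abstraction these libraries offered, the jQuery sketch below wires up a click handler and an Ajax request through one cross-browser API; the element ids and the /comments endpoint are hypothetical.

```js
// One API for DOM selection, events and Ajax, instead of per-browser code paths.
$(function () {
  $('#load-comments').on('click', function () {
    $.getJSON('/comments', function (comments) {
      comments.forEach(function (c) {
        $('#comments').append($('<li>').text(c.body));
      });
    });
  });
});
```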
Fast forward a few years to 2007, Apple’s launch of the iPhone and the wave of smartphones that followed in its wake. These new devices completely changed the way users accessed the web. Until this point, accessing the web had primarily been done from desktop and laptop computers; since then, almost all mobile phones have shipped with web browsers or run a mobile operating system such as Google’s Android or Windows Phone.
The number of users accessing the web from mobile devices has grown rapidly since 2007, with mobile web traffic recently overtaking desktops and laptops combined. During this time, websites and web apps needed to adapt to cater for reduced screen sizes and processing power. More focus was put on the efficiency of JavaScript libraries, and many design considerations were necessary to serve the new market segment. I will be discussing these design changes further in the next part of this blog series, in an article called The Evolution of Web Design.
There’s no debate that JavaScript is the most widely used client-side language on the web. Regardless of how the back ends of web applications are implemented, the client side uses JavaScript for everything from form validation to full Ajax applications. Now imagine being able to develop web apps using JavaScript on the server too; using the same language on both client and server could greatly simplify development.
This was the goal Ryan Dahl and other developers working at Joyent set out to achieve when they invented Node.js in May 2009. Node is an open source, cross platform runtime environment for developing server side web applications in JavaScript. It was created by combining Google’s V8 JavaScript engine (used in the Chrome / Chromium web browser), an event loop, and a non-blocking I/O API.
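A minimal Node.js web server looks like this: plain JavaScript handling HTTP requests on the server, with the event loop and non-blocking I/O doing the heavy lifting.

```js
// A bare-bones HTTP server using only Node's standard library.
const http = require('http');

const server = http.createServer((req, res) => {
  // The callback runs on the event loop; nothing here blocks
  // while other requests are being handled.
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from JavaScript on the server\n');
});

server.listen(3000, () => console.log('Listening on http://localhost:3000'));
```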
Dahl was inspired to create Node.js after seeing a file upload progress bar on Flickr: when uploading, the browser did not know how much of the file had been uploaded and had to repeatedly query the web server to find out. Dahl wanted a better way.
Shortly after Node’s initial release, a package manager called npm was introduced for the Node.js community. npm allows open-source Node.js libraries to be published and shared, and is designed to simplify installing, updating and uninstalling them. It has dramatically increased the productivity of developers by letting commonly used functions be abstracted into small, reusable modules (packages) which can then be consumed and further developed by other members of the community. By October 2014 over 100,000 open-source modules had been created on npm, and today that number is almost 200,000.
In an effort to give web applications the same fluid user experience as their desktop counterparts, the quantity and complexity of client-side scripting increased dramatically, to the point where it could completely render a web application from well-structured data. This reduced server load and gave users instant feedback on their actions.
These large codebases needed to be well structured in order to be managed effectively, and out of necessity frameworks were born to facilitate this. AngularJS was one of the first of these frameworks and remains one of the most popular today; it was developed in 2009 by Misko Hevery at Brat Tech LLC as part of a startup that was later abandoned. Instead of scrapping the project completely, Hevery open-sourced the framework, and it is now maintained by Google. Ember.js is another popular single-page application framework. Initially released in December 2011, it shares its philosophy and design with its server-side counterpart Ruby on Rails, following Convention over Configuration and the Don’t Repeat Yourself principle.
Single-page applications weren’t a silver bullet for building web apps, though; they came with their own set of tradeoffs. Instant user feedback came at the cost of first-load performance, forcing users to download the entire web app when the page first opened. This often caused long delays before anything rendered at all, with a loading spinner in the best case.
Managing state also became more difficult, with the potential for unintended side effects. Common approaches used two-way data binding, directly linking page content with the data model it was stored in. This often made code hard to follow and caused the client’s data model to drift from the server’s, which led to invalid data being displayed when events occurred out of their expected order.
Developed at Facebook and released under an open-source license in 2013, React was designed to overcome some of the problems with the single-page application frameworks available at the time. This was achieved through an architecture Facebook calls Flux, scrapping traditional two-way data binding in favour of a deterministic one-way data flow, allowing complex user interfaces to be developed in a maintainable fashion.
This change in data flow was no small feat, and it was made possible by a feature of React called the Virtual DOM (Document Object Model). By maintaining a copy of the app’s current state in memory, React is able to compare the desired state of an application to its previous state and render only the difference between the two. Using this diffing approach, React aims to ensure updates never have unintended side effects.
React’s state management properties also led to a more modular approach to developing user interfaces. Interfaces are broken down into their smallest reusable components, each describing the rules for how it should be rendered. Components can contain other components, giving developers a way to group commonly used components together. This way, the next time a user interface is built, developers can write much less code, leading to reduced development time, fewer bugs, and less code for end users to download.
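A small sketch of that component approach (written in JSX, with hypothetical component and prop names) might look like this:

```jsx
// Small components describe how they render, and compose into larger ones.
function Avatar({ user }) {
  return <img src={user.avatarUrl} alt={user.name} />;
}

function Comment({ comment }) {
  return (
    <div>
      <Avatar user={comment.author} />
      <p>{comment.text}</p>
    </div>
  );
}

function CommentList({ comments }) {
  // The same Comment component is reused for every entry.
  return <ul>{comments.map(c => <li key={c.id}><Comment comment={c} /></li>)}</ul>;
}
```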
Another capability unlocked by React’s Virtual DOM is the ability to render components with a JavaScript runtime alone, no longer requiring a web browser to render content. With the nested component approach discussed above, an entire web application can be encapsulated within a single component and rendered on a web server using the Node.js runtime environment. For the first time, the Node.js dream of running the same codebase on both web servers and web browsers came true. This technique has been coined universal (or isomorphic) web applications.
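As a rough sketch of server rendering, the snippet below uses Node.js and react-dom’s renderToString to turn a deliberately tiny, hypothetical component into HTML on the server.

```js
const React = require('react');
const { renderToString } = require('react-dom/server');
const http = require('http');

// A tiny component, defined without JSX so the file runs directly in Node.
function Greeting(props) {
  return React.createElement('h1', null, 'Hello, ' + props.name);
}

http.createServer((req, res) => {
  // Render the component tree to an HTML string on the server.
  const html = renderToString(React.createElement(Greeting, { name: 'web' }));
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<!doctype html><div id="root">' + html + '</div>');
}).listen(3000);
```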
Last but not least, React’s componentised approach to development introduced a revolutionary new way to style web applications. Separating concerns by style, structure and logic doesn’t tend to scale well for large web applications; when an app gets big enough, developers end up splitting styles, structure and logic into smaller, more organised groups of code anyway. React takes a different approach, inlining styles into the same modular components that applications are made from, thereby separating concerns by user interface component instead. This concept has been the topic of much debate, but in my opinion the pros far outweigh the cons: if you’re using React to build large web applications, inline styles make sense.
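A minimal sketch of the inline-style idea: the styles live in the same file as the component they belong to, as a plain JavaScript object passed to React’s style prop (the component and its values are hypothetical).

```jsx
// Style, structure and behaviour packaged together in one component.
const buttonStyle = {
  backgroundColor: '#0074d9',
  color: 'white',
  padding: '8px 16px',
  borderRadius: '4px',
};

function PrimaryButton({ label, onClick }) {
  return <button style={buttonStyle} onClick={onClick}>{label}</button>;
}
```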
Even with the advent of Ajax for server communication, there was still a missing piece of the puzzle before websites could become fully fledged web applications. When a web server had new data for a user’s page, it had to wait for that user’s next request before it could send it.
In the development community there was a desire to push this data without any user interaction, and so Comet was born. Comet, also known as Ajax Push or Reverse Ajax, was named as a play on the Ajax cleaning product (Comet and Ajax both being household cleaning products in the US). The first Comet implementations date back to 2000, using web requests with long timeout periods to deliver data to a user’s browser without the browser explicitly requesting it. This method sufficed for the majority of web applications’ real-time data needs, but it was inefficient, one-directional and didn’t work for all data types.
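A rough sketch of the long-polling technique behind many Comet implementations is shown below; the /updates endpoint is hypothetical, and real implementations also handled errors and back-off.

```js
// The browser keeps a request open; the server responds only when it has new
// data (or the request times out), and the client immediately reconnects.
function poll() {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', '/updates');
  xhr.timeout = 30000; // long timeout: the server may hold the request open
  xhr.onload = function () {
    if (xhr.status === 200) {
      console.log('Update from server:', xhr.responseText);
    }
    poll(); // reconnect and wait for the next update
  };
  xhr.ontimeout = poll; // nothing arrived in time; reconnect anyway
  xhr.send();
}

poll();
```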
In 2009, nine years after Comet first appeared on the web technology scene, WebSockets was proposed to the W3C as an alternative communication mechanism for the web. It overcame the issues faced with Ajax and Comet by providing bi-directional data flow with reduced overhead and support for binary data. This was only a first draft, and that version never shipped in users’ browsers. After much iteration on the specification, the WebSocket protocol became an official IETF standard, RFC 6455, in December 2011 and was soon implemented in all major web browsers. For the first time, browsers fulfilled all the requirements to build highly interactive web apps that could equal, or even outperform, their desktop counterparts.
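Using the browser’s WebSocket API looks something like the sketch below; the URL and message format are hypothetical.

```js
// A single bi-directional connection over which either side can send at any time.
const socket = new WebSocket('wss://example.com/chat');

socket.addEventListener('open', () => {
  socket.send(JSON.stringify({ type: 'join', room: 'general' }));
});

socket.addEventListener('message', (event) => {
  // The server can push data whenever it likes; no polling required.
  console.log('Message from server:', event.data);
});
```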
On 28 October 2014, over 10 years since its inception, the HTML5 specification was released as a stable W3C Recommendation, completing the specification approval process. The specification defines a range of new HTML elements, including video and audio for media and canvas and svg for graphics, plus many more for structure, input and semantics. For a concise reference, the Mozilla Developer Network does a great job of documenting the full HTML5 specification.
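As a small taste of one of those new elements, the sketch below draws onto a canvas element from JavaScript using the 2D context; the canvas id is hypothetical.

```js
// Assumes the page contains <canvas id="scene" width="300" height="150">.
const canvas = document.getElementById('scene');
const ctx = canvas.getContext('2d');

ctx.fillStyle = '#0074d9';
ctx.fillRect(10, 10, 120, 60);         // a filled rectangle
ctx.font = '16px sans-serif';
ctx.fillStyle = '#111';
ctx.fillText('Hello, HTML5', 10, 100); // text drawn directly onto the canvas
```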
With universal React and Node.js, a single JavaScript codebase can now be successfully used for full application development that runs on both the client and server, fulfilling the vision for a single, unified language of the Web that makes apps easier to develop and maintain.
The HTML specifications don’t end here. Revisions to HTML5 are being suggested on a regular basis with completely new APIs being recommended for some tasks. ‘It means your future hasn’t been written yet. No one’s has. Your future is whatever you make it. So make it a good one.’