This post was originally published in December 2016 by Pedro Rolo and updated in April 2020 by André Santos.
Mr. Brendan Eich, an American technologist, was working for Netscape and had to create a programming language that would allow front-end developers to embed small programs into webpages, making them more interactive and dynamic. The final product had to be flexible, easy to learn, have popular appeal, and be easily implementable.
Mr. Eich only had 10 workdays to produce the first implementation - a fact that we confirmed with the man himself - but, against the odds, he succeeded in delivering.
Here are some existing programming languages that influenced the result:
- Scheme - functions as first-class values and dynamic typing;
- Perl - weak-typing;
- Self - simple-to-implement prototype-based object-system;
- Java - mainstream syntax and naming conventions.
An unavoidable question often arises due to the root of its name, and it has a pretty short answer: the two languages differ in everything, except for some syntax elements - such as brackets, dots, and semicolons - that express analogous constructs in both languages. The chosen name was a marketing move by Netscape, meant to showcase its new creation by piggybacking on the hype of the day: Java.
In order to add new features to the language and to become compatible with the work of other standardization organizations, the standard has been changing from time to time.
The deliverables for each version are known as "EcmaScript", followed by the version number.
EcmaScript 1
The first version of the language standard.
EcmaScript 2
This version was produced in 1998, when the language was also standardized by ISO/IEC. It updated the Ecma standard so that it would match the document produced by ISO/IEC.
EcmaScript 3
In 1999, several new features were added to the language. The most conspicuous were:
• Regular expression literals;
• New control statements;
• Exception handling.
EcmaScript 4
This version's draft was abandoned in 2008 due to disagreements about its feature set. The proposal was deemed too radical, and it was agreed to create an intermediate version before such drastic developments would take place.
EcmaScript 5
Created in 2009, this was the version with the broadest browser support in early 2016. Its most noticeable features were:
• Higher-order iteration functions (map, reduce, filter, forEach);
• JSON support;
• Getters and setters;
• Better reflection and control over object properties.
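As a rough sketch, this is what these ES5 additions look like in practice (the values and names here are illustrative):

```javascript
// Higher-order iteration functions introduced by ES5:
var numbers = [1, 2, 3, 4];
var doubled = numbers.map(function (n) { return n * 2; });        // [2, 4, 6, 8]
var evens = numbers.filter(function (n) { return n % 2 === 0; }); // [2, 4]
var sum = numbers.reduce(function (acc, n) { return acc + n; }, 0); // 10

// Native JSON parsing and serialization:
var parsed = JSON.parse('{"name":"Ada"}');
var serialized = JSON.stringify({ name: "Ada" });

// Getters (and setters) declared directly in object literals:
var person = {
  first: "Ada",
  last: "Lovelace",
  get fullName() { return this.first + " " + this.last; }
};
// person.fullName → "Ada Lovelace"
```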
EcmaScript 5.1
Just like EcmaScript 2, this spec was produced to align the Ecma standard with the document that was produced by ISO/IEC for EcmaScript 5.
EcmaScript 6 (AKA ES6, EcmaScript 2015, Harmony)
This version added several big changes in 2015, namely:
• Block-scoped variable declarations (let);
• Arrow functions;
• Template literals;
• Spread operator;
• Destructuring assignment;
• Parameter default values;
• Rest parameters;
• Generator functions.
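A short, illustrative tour of most of these features (all names are made up for the example):

```javascript
let greeting = "Hello";                 // block-scoped declaration

const add = (a, b = 1) => a + b;        // arrow function + parameter default value
const nums = [1, 2, 3];
const copy = [...nums, 4];              // spread operator → [1, 2, 3, 4]
const [first, ...rest] = copy;          // destructuring + rest → 1 and [2, 3, 4]
const message = `${greeting}, ES6!`;    // template literal

// A generator function produces values lazily, one per `yield`:
function* counter() {
  let i = 0;
  while (true) yield i++;
}
const gen = counter();
// gen.next().value → 0, then 1, then 2, ...
```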
EcmaScript 2016 (AKA ES7)
This version only made minor changes to the language: it added an exponentiation operator (**) and specified the Array.prototype.includes method.
Browser support for this version was incomplete at the time, though transpilers such as Babel can convert newer ES code into older ES versions.
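Both additions fit in a few lines:

```javascript
// ES2016's two additions, sketched briefly:
const cube = 2 ** 3;                    // exponentiation operator → 8
const found = [1, 2, 3].includes(2);    // true
const missing = [1, 2, 3].includes(5);  // false
const nan = [NaN].includes(NaN);        // true - unlike indexOf, includes finds NaN
```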
EcmaScript 2017 (AKA ES8)
This version mainly adds support for async functions.
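As a sketch, an async function lets promise-based code read sequentially; `fetchUser` below is a made-up stand-in for any promise-returning API, not a real library call:

```javascript
// fetchUser simulates an asynchronous data source (illustrative only).
function fetchUser(id) {
  return new Promise(resolve =>
    setTimeout(() => resolve({ id: id, name: "Ada" }), 10)
  );
}

// `async` marks the function; `await` suspends it until the promise
// settles, so asynchronous code reads like sequential code.
async function showUser(id) {
  const user = await fetchUser(id);
  return `User #${user.id}: ${user.name}`;
}
```

Calling `showUser(7)` returns a promise that resolves with the formatted string.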
EcmaScript 2018 (AKA ES9)
This version expands the regular expressions toolkit and introduces the rest and spread operators for object literals. It also introduces Promise.prototype.finally, a neat way of streamlining behavior that should always occur regardless of the promise's outcome.
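A brief sketch of object rest/spread and one of the new regular-expression features, named capture groups (all values are illustrative):

```javascript
// Object rest collects leftover properties; object spread copies them:
const config = { host: "localhost", port: 8080, debug: true };
const { debug, ...connection } = config;           // rest → { host, port }
const extended = { ...connection, secure: false }; // spread + new property

// Named capture groups make regex matches self-describing:
const datePattern = /(?<year>\d{4})-(?<month>\d{2})/;
const match = "2018-06".match(datePattern);
// match.groups.year → "2018", match.groups.month → "06"

// Promise.prototype.finally runs regardless of the outcome:
let cleanedUp = false;
Promise.reject(new Error("boom"))
  .catch(() => "recovered")              // handle the failure
  .finally(() => { cleanedUp = true; }); // always executed
```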
EcmaScript 2019 (AKA ES10)
The tenth iteration brought the Array.prototype.flat and Array.prototype.flatMap methods, along with making the catch block's error binding optional - a convenience that gained relevance after ES2017's introduction of async and await.
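These additions can be sketched as follows:

```javascript
// Array.prototype.flat flattens nested arrays to a given depth:
const nested = [1, [2, [3]]];
const flatOnce = nested.flat();   // [1, 2, [3]] (depth defaults to 1)
const flatDeep = nested.flat(2);  // [1, 2, 3]

// flatMap maps and flattens one level in a single pass:
const pairs = [1, 2].flatMap(n => [n, n * 10]); // [1, 10, 2, 20]

// The catch binding may now be omitted when the error is not needed:
let ok;
try {
  JSON.parse("not json");
} catch {            // no `(error)` parameter required anymore
  ok = false;
}
```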
EcmaScript 2020 (AKA ES11)
Although this version was still a draft at the time of writing, we could already count on the Nullish Coalescing and Optional Chaining operators, which will undoubtedly improve code readability.
Promises will also expand their API with allSettled, an alternative to Promise.all that always fulfills with a description of each promise's outcome, whether it resolved or was rejected.
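A sketch of how these operators and Promise.allSettled behave (the objects and values are illustrative):

```javascript
const user = { profile: { name: "Ada" } };

// Optional chaining short-circuits to undefined instead of throwing:
const name = user.profile?.name;  // "Ada"
const city = user.address?.city;  // undefined, no TypeError

// Nullish coalescing only falls back on null/undefined, not on 0 or "":
const port = 0 ?? 3000;           // 0 (|| would have yielded 3000)
const fallback = null ?? 3000;    // 3000

// allSettled never rejects; it reports each promise's outcome:
Promise.allSettled([
  Promise.resolve(42),
  Promise.reject(new Error("boom"))
]).then(results => {
  // results[0] → { status: "fulfilled", value: 42 }
  // results[1] → { status: "rejected", reason: Error: boom }
});
```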
After JavaScript conquered the back-end, the mobile world was an open field.
DOM Manipulation Libraries
The main needs of developers working on AJAX web applications are:
• Performing AJAX calls;
• Manipulating the Document Object Model (DOM) - the data structure kept by the browser representing the webpage's structure;
• Reacting to the user's interactions.
The first libraries to become popular addressed precisely these three tasks, providing a collection of functions and objects that made these operations easy and functional on every browser. Some of these libraries would also provide plenty of other general-purpose functions:
These core extensions were regarded as troublesome by other programming communities, and the provided functions felt clumsier than those of the competing library that followed.
The jQuery framework was released in 2006 and - unlike Prototype - left the language core as it was, focusing instead on providing plenty of functionality related to the DOM, events, asynchronicity, and effects.
One feature to highlight is jQuery's plugin system, which allows library programmers to extend its functionality. jQuery became the most successful library of its kind and an indispensable tool for front-end development, still used nowadays in the majority of websites.
While using DOM-manipulation libraries, it’s common to keep the application data in the DOM itself, so selector functions have to be called whenever this data needs to be accessed or modified.
This kind of code is often filled with selector names and easily becomes a mess as the application's complexity grows. A set of frameworks addresses this problem in an elegant way by leveraging application patterns similar to the famous Model-View-Controller (MVC) pattern.
On the other hand, there is a recent trend of web applications that are fully loaded in the browser on the first page load, performing every subsequent operation via AJAX.
This kind of application is called a Single-Page Application (SPA) and, as its code is often quite elaborate and complex, it’s advisable to pick one of these frameworks when implementing one. Let’s look into some of them.
Backbone is a Model-View-Presenter (MVP) framework published in late 2010. It’s focused on SPAs and enforces the separation between application logic, display logic, and templates.
The communication between these entities is managed in an event-based fashion. Backbone also provides a Router class that allows applications to react to URL changes and to update the URL according to triggered events.
Angular is a framework implemented by Google that used to follow an MVC architecture and has evolved into an MVVM framework. Similar to Backbone, it’s focused on SPAs and enforces the separation between application logic, display logic, and templates.
It’s interesting to notice that Angular allows the programmer to extend HTML by creating new tags that can be used in the templates and that encapsulate parts of the page.
Virtual-DOM based frameworks
As described in the previous sections, the display layer of our web applications moved from a simple computational model - where the displayed interface was purely a function of data calculated by the server-side code - into something much more dynamic and complex, where it depends not only on the retrieved data but also on the user's interactions and the scripts processing them.
This complexity often translates into convoluted code that’s hard to share and reason about. Thus, some people at Facebook started experimenting with a simpler computational model for the front-end: one that would let us once again think of what is displayed on the page as the result of a few state variables, instead of the result of the interactions between a "zillion" Model-View-Controller classes.
The Virtual DOM
In order to make such a change take place, something radical had to happen: if we wanted the display to be only the result of applying a function to some state variables, we would have to stop manipulating parts of the DOM directly and instead render each version of the display as a whole.
As re-rendering the whole DOM on every change would be too slow, it was decided to map the DOM onto a lightweight representation of it that would be easy to regenerate and compare.
Thus, whenever a variable changes, we can generate a new version of this lightweight representation, compare it with the previous one, and calculate which parts of the real DOM need to be updated. This lightweight representation of the DOM is called the Virtual DOM.
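To make the idea concrete, here is a deliberately tiny toy sketch of virtual-DOM diffing - not the algorithm any real framework uses - where virtual nodes are plain objects and a diff between two trees yields the paths that changed:

```javascript
// h() builds a virtual node: a cheap, plain-object stand-in for a DOM element.
function h(tag, children) {
  return { tag: tag, children: children || [] };
}

// diff() walks two virtual trees in parallel and records where they differ.
// A real framework would also compare attributes and batch the DOM updates.
function diff(oldNode, newNode, path, patches) {
  path = path || "root";
  patches = patches || [];
  if (!oldNode || !newNode || oldNode.tag !== newNode.tag) {
    patches.push({ path: path, replaceWith: newNode });
    return patches;
  }
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    diff(oldNode.children[i], newNode.children[i], path + "/" + i, patches);
  }
  return patches;
}
```

For example, diffing `h("div", [h("span")])` against `h("div", [h("b")])` yields a single patch at the child's path, so only that one DOM node would need replacing.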
React was the first framework to use a Virtual DOM. It was created by Facebook, and the architecture that applications using it need to follow is minimal.
Regarding MVC, React does not stick to it; it only concerns the View part of the code. With it, the View is represented as a tree of entities called components, which may themselves be composed of other components.
A component holds some state, can receive properties upon creation, and knows how to render itself for a given set of properties and state variables. Whenever its state changes, a new version of its display's Virtual DOM is rendered and compared with the previous one, and the real DOM is updated accordingly.
If interactions performed on a given component are meant to change the state of a parent component, the parent has to pass the handling code down to the subcomponent in the form of a callback. This approach keeps components loosely coupled, making them quite easy to reuse.
Flux and Redux
The component-based approach of React is effective for smaller applications, but for structuring bigger ones Facebook proposes an architecture called Flux, in which the state of the application is stored away from the components in so-called state containers.
The original implementation of the Flux architecture was rather complex, but a simpler one called Redux has become the most popular - it has even been used to implement applications' back-ends.
Its architecture is functional: instead of directly mutating state, the programmer creates new versions of it by applying reducer functions, which allows easy Undo/Redo and time-traveling debugging.
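The reducer idea can be sketched in plain JavaScript, without Redux itself (`counterReducer` and the action names are made up for the example):

```javascript
// A reducer: (state, action) → new state. It never mutates its input.
function counterReducer(state, action) {
  switch (action.type) {
    case "INCREMENT":
      return { count: state.count + 1 };
    case "DECREMENT":
      return { count: state.count - 1 };
    default:
      return state; // unknown actions leave the state untouched
  }
}

// Because old states are never modified, keeping them around gives
// Undo/Redo and time-traveling debugging almost for free:
const s0 = { count: 0 };
const s1 = counterReducer(s0, { type: "INCREMENT" });
const s2 = counterReducer(s1, { type: "INCREMENT" });
// s0 is still { count: 0 }; "undo" is simply going back to s1.
```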
Vue started as an MVC framework similar to Angular but focused on being simpler and requiring less boilerplate. The latest version already features a component-based approach and a Virtual DOM, in a similar way to React.
The documentation describes it as a progressive framework, in the sense that it is meant to be progressively adopted: a user can start by using only the view part of the framework and add more complexity later if needed. There is a focus on reducing boilerplate and the number of files used whenever possible.
When developing complex web applications, people noticed that fetching data through AJAX calls often led to under-fetching and over-fetching, and that caching was not easy to manage.
In order to sort out this problem, Facebook developed a query language that allows describing the data each part of the interface needs. These queries are meant to be interpreted and aggregated into efficient sets of AJAX requests by a GraphQL client.
There are at least two GraphQL clients available:
• Relay, the original implementation by Facebook, tightly coupled with React;
• Apollo, which strives to be a framework-agnostic project.
On the other hand, both Chrome and V8 were released as open-source software (under a BSD license), allowing anyone to build their work on top of them and to distribute them as part of any software without paying royalties to Google.
The Node Package Manager (NPM) is one such package manager, in the style of Perl's CPAN or Ruby's RubyGems.
Full Development Stack Solutions
There have been some attempts to bundle the tooling and technologies needed to create full-stack applications. Here I'll talk about three of them.
Meteor is a web framework that uses MongoDB as database. It uses a publish-subscribe protocol applied to resources, allowing clients to get updates when the data is updated on the server. It can be used with any client-side framework.
The MEAN stack stands for a package with the following set of technologies:
It also comes with the "mean" command-line tool, which allows generating projects and files and performing other useful command-line tasks.
The MERN Stack is similar to MEAN but focuses on React rather than Angular.
Mobile Development Frameworks
• Asynchronous Module Definition (AMD);
• CommonJS Modules;
• ES6 Modules.
In the ES5 world, Node.js implemented CommonJS modules while front-end programmers preferred AMD, creating chaos in this field. A way to declare and require modules called Universal Module Definition (UMD) was thus specified - rather ugly, but compatible with both module APIs.
Anyway, this mess is being mitigated by the adoption of ES6, which already supports language-level constructs for dealing with modules.
When using modules while developing web applications there is often the need to merge the different module files into a single file to be included by the web application. There are two libraries that allow developers to perform this task:
Browserify is the oldest web module bundler and allows developers to use the CommonJS module syntax when programming web-applications.
Transpilers and related tools
Thus, when compiling to wasm, one should either use a programming language that requires no garbage collector or otherwise compile a garbage-collection runtime together with the rest of the application/module.
Flow is a static type checker developed by Facebook. Like TypeScript, it is a gradual type system, though Flow focuses only on types and adds nothing else to the language beyond the type system.
Unlike many other static type systems, it strives for full type coverage through type inference, so the programmer is not required to write explicit type signatures. The type system also aims for soundness, catching whole classes of errors that would otherwise surface as runtime exceptions.
An older one, called Grunt, keeps the data intermediate to the different transformation steps in temporary files. Gulp, a more recent one, uses memory instead of temporary files.
A Fuzzy Future
It also shows well how the evolution of software sometimes requires patchy solutions from which the essence is later distilled into standards and more mature tools (e.g. the modules system). Another noticeable element is how the application of old ideas has driven breakthroughs in the ecosystem (e.g.: V8).
WebAssembly is one of such projects, and, if successful, it might represent a big turning point in the history of this language.