Behind the Scenes of Disney TRON: How to Build a Digital Book Site with HTML5


By Giorgio Sardo | Sr. Technical Evangelist | HTML5 and Internet Explorer


Thanks to the amazing teamwork of Disney and Vectorform, it took just about a month to build the new Disney TRON: Legacy Digital Book Site, an immersive HTML5 experience built on top of the hardware-accelerated HTML5 support in Internet Explorer 9.

In this post I’d like to share some of the “behind the scenes” stories from the team involved in the project, with a particular focus on lessons learned and implementation best practices. I’d like to thank in particular Ken Disbennett, Creative Director, and Alex Barkan, Lead Developer, from Vectorform for sharing their experiences and thoughts about the project.


From paper to web (Ken Disbennett)

It all started from the printed comic book. The goal was to leverage the power of HTML5 to upgrade that experience without losing the authenticity of the traditional comic. We wanted to ensure that each panel of the comic had life and action of its own, just as it would in the printed version, and that the site provided a sense of pacing through the story. We therefore decided that a linear, timeline-style experience was the most appropriate. Each panel featured a custom reveal that emphasized and reinforced the action of the story. The left-to-right motion of progressing through the comic kept the feel of a traditional reading experience without the interruption of page turning.

The original assets were provided as high-resolution Photoshop source files, organized by book page or chapter.


It took about 5 days to meticulously separate the characters and essential elements from the background of each comic panel. We back-painted the scenery to give the background a seamless appearance. Each panel was then reconstructed in sequence with the linear, organic layout of the site. Finally, each element was isolated and exported in various states to produce the final animated outcome.


Choosing the best underlying technology (Alex Barkan)

It took about 5 days to build a few prototypes that would help us identify the underlying technology that best fit this project and offered the best results across browsers.

The initial thought was to use CSS3 (in particular, CSS3 2D Transforms). We started building a few tests simulating the level of interaction we needed for this project using CSS3; the main challenge was driving the animation programmatically without sacrificing performance. We went through several approaches and experiments using pure JavaScript, jQuery animation, and drawing, DOM, and CSS optimizations; none of these patterns, however, gave us 60 frames per second (FPS) performance in any browser.
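The CSS3 prototypes amounted to writing a transform into an element's style on every animation tick. A minimal sketch of that pattern (the function and element names are illustrative, not the production code):

```javascript
// Position one comic layer by writing a CSS 2D transform each frame.
// This is the approach the CSS3 prototypes tested; `el` is any DOM element.
function applyPan(el, x, y) {
  el.style.transform = 'translate(' + x + 'px, ' + y + 'px)';
  return el.style.transform;
}
```

The cost of this pattern is that the style write happens per element per frame, which is where the sub-60 FPS results came from in our tests.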

Not satisfied with the CSS3 performance, we looked at a solution based on the HTML5 <canvas> element. Starting from our previous project (Foursquare HTML5 Playground), we built a new prototype to stress the browser's performance. The prototype was smooth, clocking in at 60 FPS on low-spec office equipment; we could manage 10,000 buildings and render hundreds of 32-bit RGBA PNGs with basic viewport-clipping functionality. The big "aha" moment came when we added animated sprites using the image-slicing version of the drawImage() canvas method (more on this below). We added hundreds of sprite characters, with depth testing, walking around hundreds of buildings.
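The image-slicing form of drawImage() takes nine arguments: a source rectangle within the sprite sheet and a destination rectangle on the canvas. A hedged sketch of how a sprite frame maps to that source rectangle (the grid layout and names are assumptions, not the actual TRON code):

```javascript
// Compute the source rectangle of frame `i` in a sprite sheet laid out as a
// grid of `cols` columns of frameW x frameH cells (layout is an assumption).
function spriteFrameRect(i, frameW, frameH, cols) {
  return {
    sx: (i % cols) * frameW,
    sy: Math.floor(i / cols) * frameH
  };
}

// In the browser, the 9-argument drawImage() then blits just that slice:
//   var r = spriteFrameRect(frame, 64, 64, 8);
//   ctx.drawImage(sheet, r.sx, r.sy, 64, 64, dx, dy, 64, 64);
```

Because the slice is taken directly from the already-decoded sheet, animating a sprite is just a matter of advancing `frame` each tick, with no extra image objects per frame.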

It was clear: HTML5 <canvas> in Internet Explorer 9 (and to a lesser degree in other browsers as well) changed the game!


Reducing the bandwidth without reducing quality (Alex Barkan)

Bandwidth was an obvious issue from the start. We wanted HD imagery: lots of pixels, high bitrate, and smooth transparency. For a parallax effect to work correctly we needed to overlay multiple layers with more than 1 bit of alpha; otherwise the images look no better than GIFs. PNGs were the obvious, though expensive, choice. But along the way John Einselen (Art Director at Vectorform) brought up a handy tool called pngquant. All browsers nowadays support PNG8, a rarely seen variant of PNG in which each pixel indexes an 8-bit palette whose entries carry full RGBA values, giving us multiple bits of alpha for smoother alpha blending while cutting file sizes in half! We had to experiment to find the places where this sort of quantizing was appropriate and didn't degrade image fidelity too much.

One trick we learned was a split compromise between 8-bit RGBA and 32-bit RGBA. You bake an image as two separate layers, base texture and glows, then compress both as 8-bit RGBA. This gives plenty of bits to the smooth glows (think lamp posts in fog) while cutting out 16 bits' worth of data per RGB triplet. The result is a lower total file size than a single 32-bit RGBA PNG, and higher quality than a single 8-bit RGBA one! Here's the glow of the car on the first page (full image in the 'Navigating the code with the Developer Tools' section).
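Recombining the two baked layers at runtime can be done with additive compositing on the canvas. A sketch under the assumption that the glow layer is simply added on top of the base (function and variable names are ours, not the production code):

```javascript
// Draw a base texture, then add its glow layer on top using the canvas
// 'lighter' (additive) blend mode, approximating the original baked glow.
function drawWithGlow(ctx, base, glow, x, y) {
  ctx.globalCompositeOperation = 'source-over';
  ctx.drawImage(base, x, y);
  ctx.globalCompositeOperation = 'lighter'; // additive blend for the glow
  ctx.drawImage(glow, x, y);
  ctx.globalCompositeOperation = 'source-over'; // restore the default
}
```

Restoring `globalCompositeOperation` afterwards matters: the blend mode is context-wide state, and leaving 'lighter' set would brighten every subsequent draw.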


The assets we had to work with had multiple layers, all using different blending modes, and some needed paint work to fill in missing backgrounds. Our artists, Ken and John, had to convert them to normal blending modes in Photoshop for the rendered images to appear correctly in web browsers. They also filled in backgrounds where necessary and made good use of the available pixels, which we then resampled down to look crisp. One big lesson we learned was that we needed a fixed-size target screen in order to produce animations that revealed the story in a way that made sense and was enjoyable. Throughout the project, this remained a key challenge to solve: smooth easing animations that work regardless of browser size and mouse sensitivity, powered by a background image pre-loader.


Keeping things in sync and fast (Alex Barkan)

A common animation problem that spans all technologies (CSS, HTML, Canvas) is what to do about vsync. The complexity was that every browser had a different timer resolution, with a bigger or smaller margin of error. Because of this, it's possible to have some code drawing into a framebuffer while it's being drawn to screen. In order to prevent visual glitches, we had to fine-tune the setTimeout() draw callback to try to match the 60 Hz redraw. Overall, I wish this were easier across the board. I'm looking forward to seeing the evolution of the conversations about requestAnimationFrame in the W3C.
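The tuning described above amounts to scheduling each setTimeout() so the next draw lands near the following 60 Hz interval, compensating for however long the current frame took. A simplified sketch (the production code was more involved, and requestAnimationFrame now does this job natively):

```javascript
var FRAME_MS = 1000 / 60; // target one frame per 60 Hz refresh interval

// Delay until the next frame boundary, accounting for the time the current
// frame took to draw; never returns a negative delay.
function nextFrameDelay(frameStart, now) {
  var elapsed = now - frameStart;
  return Math.max(0, FRAME_MS - elapsed);
}

// In the browser the draw loop would look like:
//   function tick() {
//     var start = Date.now();
//     draw();
//     setTimeout(tick, nextFrameDelay(start, Date.now()));
//   }
```

Even with this compensation, setTimeout() offers no guarantee of landing on the display's actual refresh, which is exactly the gap requestAnimationFrame was designed to close.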

In the end, we were very impressed with the hardware acceleration support in Internet Explorer 9; the actual image rendering inside the Canvas 2D context proved to be extremely performant. On other browsers it eventually gave us good results too, although with some artifacts on lower-end machines.


Navigating the code with the Developer Tools

In order to keep the application flexible, modular, and easy to maintain, the entire playground has been divided into 13 different "pages" (as in the original book). The pages are pre-cached during the startup of the application and laid out one after the other on the horizontal axis. Each page defines its own display and interaction logic, which is relative to its X coordinate (configured during startup).
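Laying the pages out one after another boils down to assigning each page a cumulative X offset at startup. A hedged sketch of that layout step (the page widths, field names, and function are illustrative, not the actual site code):

```javascript
// Assign each page an X coordinate equal to the sum of the widths of the
// pages before it, so all pages sit side by side on one horizontal strip.
function layoutPages(widths) {
  var x = 0;
  return widths.map(function (w) {
    var page = { x: x, width: w };
    x += w;
    return page;
  });
}
```

With offsets fixed at startup, each page's display and interaction logic only needs to know its own `x`; scrolling the strip is then a single translation of the whole coordinate space.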