Grunticon updated to address IE8 data URI length limit

I submitted a pull request for grunticon earlier this week and it was merged into the master branch today! The update addresses a limitation in IE8 where images embedded as data URIs are restricted to about 32KB. When an image exceeds that limit, grunticon now loads it as an external image file instead, ensuring it is still displayed.

What is grunticon?

grunticon is a tool by the talented folks at Filament Group that makes it easy to manage SVG images in your website. Basically, it reduces the number of HTTP requests needed to load the images, improving performance, and it automatically generates fallback PNG images for older browsers that don’t support SVG. The description from the readme follows:

grunticon is a Grunt.js task that makes it easy to manage icons and background images for all devices, preferring HD (retina) SVG icons but also provides fallback support for standard definition browsers, and old browsers alike. From a CSS perspective, it’s easy to use, as it generates a class referencing each icon, and doesn’t use CSS sprites.

grunticon takes a folder of SVG files (typically, icons that you’ve drawn in an application like Adobe Illustrator), and outputs them to CSS in 3 formats: svg data urls, png data urls, and a third fallback CSS file with references to regular png images, which are also automatically generated and placed in a folder.

grunticon also generates a small bit of JavaScript and CSS to drop into your site, which asynchronously loads the appropriate icon CSS depending on a browser’s capabilities, and a preview HTML file with that loader script in place.

What was the problem?

I came across an issue where some images I had processed with grunticon were not displaying in IE8. There was just a blank space where the images were supposed to show up. IE8 doesn’t support SVG, so grunticon serves it the images as PNG data URIs embedded in a stylesheet. However, IE8 does not support data URIs longer than 32,768 characters. Some of my images, whilst small as SVGs, were larger than 32KB when saved as PNG data URIs, so they were not displaying. For an example, visit http://codepen.io/keeganstreet/full/Ltdxy in IE8, and you will see only one image.

What was the solution?

Now when grunticon generates the stylesheet with embedded PNG data URIs, it measures the length of each data URI, and if it is longer than 32,768 characters, it replaces the data URI with a link to the external PNG image file instead. This results in an extra HTTP request to load the image, but it ensures that the user will see it!
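In code, the check is roughly as follows. This is a simplified sketch of the idea rather than grunticon’s actual source; the function and variable names are illustrative.

// IE8 ignores data URIs longer than 32,768 characters.
var IE8_DATA_URI_LIMIT = 32768;

// Return the CSS background-image value for an icon, falling back
// to the external PNG when the data URI is too long for IE8.
function backgroundImageValue(dataUri, externalPngUrl) {
  if (dataUri.length > IE8_DATA_URI_LIMIT) {
    return 'url("' + externalPngUrl + '")';
  }
  return 'url("' + dataUri + '")';
}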

Element Finder now returns the line numbers of matching elements

I just released an updated version of the Sublime Text package Element Finder. The results screen now includes the line number of each matching element. You can double-click the line number or summary and Sublime will open the file and jump to the correct line.

How it works

To add this feature I needed a way to map DOM elements to line/column numbers in HTML source code. Once a DOM has been generated it has no direct relationship with the HTML: there is no function in the DOM API, jQuery or elsewhere like `$("#header").getLineNumber()`. To solve this problem I wrote a Node module called domtosource, and included it as a submodule in Element Finder. domtosource has two techniques for finding the line and column number of an element.

The first technique is the fastest, but it only works if the outerHTML of the element is a unique string in the HTML. If that is the case, we can just get the character index of the unique string, count the line breaks before that index to get the line number, and then count the characters between the last line break and the start of the string to get the column number.
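That position calculation looks something like this. It’s a simplified sketch; the real module also has to handle details like Windows line endings.

// Convert a character index in the HTML source into 1-based
// line and column numbers.
function indexToPosition(html, index) {
  var lines = html.substring(0, index).split('\n');
  return {
    line: lines.length,
    column: lines[lines.length - 1].length + 1
  };
}

var html = '<div>\n  <p id="intro">Hi</p>\n</div>';
var index = html.indexOf('<p id="intro">');
console.log(indexToPosition(html, index)); // { line: 2, column: 3 }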

If the outerHTML is not unique, domtosource uses a slower method, but one that works in all cases. This method is based on the fact that elements appear in the HTML and the DOM tree in the same order. We get the tag name of the matched element, for example `div`, then walk through the DOM tree to count how many other div elements appear before the matched one. Now that we know this is the nth div, we search through the HTML for the nth occurrence of the string `<div`, which gives us the character index of the element. Once we have that index we can re-use part of the first technique and count the line breaks and characters before the index.
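The nth-occurrence search is a simple loop. Again, this is an illustrative sketch: a real implementation must also rule out false matches such as `<divider` when searching for `<div`.

// Find the character index of the nth occurrence of an opening
// tag in the HTML source, e.g. the 3rd '<div'.
function nthTagIndex(html, tagName, n) {
  var needle = '<' + tagName;
  var index = -1;
  for (var i = 0; i < n; i++) {
    index = html.indexOf(needle, index + 1);
    if (index === -1) {
      return -1; // fewer than n occurrences
    }
  }
  return index;
}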

You can install Element Finder through Package Control.

CSS Specificity Calculator

When you see two CSS selectors, can you always say which one has the higher specificity? For example, which of the following selectors would overrule the other?

div.products li > *[data-name="test"] a
:first-child .selected:hover

I don’t always know with some of the more complicated selectors, so I made a little web app to figure it out for me: specificity.keegan.st

The app uses regular expressions to find the parts of the selector that contribute to specificity. It highlights each part with a background colour to help you learn the specificity rules. The core part of the app is a JavaScript module I wrote, which is available through npm via:

npm install specificity
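A minimal usage sketch follows. It assumes the module exposes a `calculate` function that accepts a comma-separated selector list and returns a result per selector; check the readme for the exact API.

var specificity = require('specificity');

// Compare the two selectors from the example above.
var results = specificity.calculate(
  'div.products li > *[data-name="test"] a, :first-child .selected:hover'
);

// Per the CSS spec these work out to 0,0,2,3 and 0,0,3,0
// respectively, so the second selector wins.
results.forEach(function (result) {
  console.log(result.selector, '->', result.specificity);
});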

The JavaScript module source and the website source are on GitHub.

Easy image data URIs with Compass

I just found a great feature of Sass / Compass for automatically generating data URIs: the inline-image helper.

In my current project we’re using sprites for most of our icon images, but in some cases sprites didn’t work because the element was larger than the icon and other icons were showing through. In these cases we were using standard background images instead of sprites, but this meant an extra HTTP request per image, and the hover images weren’t being preloaded.

Luckily Compass came to the rescue!

We simply changed our background image properties from this:

background-image: url("images/icons/expanded.png");

To this:

background-image: inline-image("icons/expanded.png");

And the result was this:

background-image: url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA0AAAANCAMAAABFNRROAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAAZQTFRF7RcfAAAA1Qe9yAAAAAJ0Uk5T/wDltzBKAAAAHklEQVR42mJgQAWMCICdx0AcD24cifpw244MAAIMACcaAG3Sx6WHAAAAAElFTkSuQmCC');

Data URIs are supported in IE8 and above, although IE8 limits them to 32,768 characters.

Introducing Element Finder for Sublime Text 2

I am pleased to announce that Element Finder is now available as a package for Sublime Text 2. Installation is via Package Control.

Edit: Node.js is required for Element Finder to run. You can install Node.js from http://nodejs.org and then you will be able to install Element Finder from Package Control. Node.js is required because the search engine behind Element Finder is written in JavaScript, a natural fit for parsing and querying HTML documents.

Element Finder is a tool for web developers to find which HTML files contain elements matching a specific CSS selector. You might be new on a project, returning to a project that you haven’t worked on for a while, or maybe a CSS selector is just unclear and you’re not sure exactly which pages it will affect. Just copy a CSS selector into Element Finder and you will quickly see all of the HTML elements in your project which match that selector. For example, if you search for `div.main`, it will show you all of the `div` elements with a class name of `main`.

The Sublime Text package makes a number of improvements over the command line app I released three months ago. The filenames in the results page are clickable links so you can quickly open them. The results page also includes a preview (the first two lines) of each matching element.

To use Element Finder, right-click on a folder in the Side Bar and select `Find Elements in Folder`. Then enter the CSS selector you want to search with and hit Enter. The results will look like this:

Element Finder can only search through HTML files, so if your project doesn’t have HTML templates and its pages are generated by a CMS, I would recommend using `wget` to crawl your site and download all of the pages as HTML. Then you can search through these pages with Element Finder.

The package is currently available for Windows and Mac. Linux should be supported in the future, but I don’t have a Linux machine so I haven’t been able to compile or test it.

The Sublime Text 2 package and command line app are open source and on GitHub.

Testing JavaScript library updates with remote mappings in Charles

Charles Proxy is an incredibly useful but still not widely known tool for web developers. One of its features lets us map URLs from one location to another. With remote mapping enabled, the web browser sends out a request for one file, but receives a different file in return.

One useful application of remote mappings is testing updated versions of JavaScript libraries. If your website is running jQuery 1.5.1, you could set up a mapping to quickly and easily test jQuery 1.7.2. You would simply tell Charles to map http://your-site.com/js/jquery-1.5.1.min.js to http://code.jquery.com/jquery-1.7.2.js.

Then when you visited your website, you could see how it would run with the newer version of jQuery. No server-side changes required!

Setting up a remote mapping for jQuery

This is an extract from my presentation at the Melbourne Web Developers meetup yesterday.

Use wget and elfinder together to search for a CSS selector in all pages of a remote site

Earlier this month I introduced Element Finder, a command line tool which lets you search through multiple HTML files and find matches for a CSS selector. This is fine if there is an HTML template for every page in your site. But what if the pages are generated dynamically, for example with WordPress? Or what if you want to search in a remote site?

In these cases you can use `wget` to crawl a website and download all of the available HTML files. When they have been saved, you can search through them with Element Finder.

If you’re using Linux, `wget` should be pre-installed. If you’re on a Mac, the easiest way to install it is via a package manager like Homebrew: `brew install wget`. There are Windows options available too.

For example, let’s say we want to search through this WordPress website, http://keegan.st, and see which pages contain elements matching the following selector, which comes from the default Twenty Eleven WordPress theme stylesheet:

.entry-content h1, .entry-content h2, .comment-content h1, .comment-content h2

First, we would use `wget` to download all HTML files from the website:

wget --mirror --adjust-extension --convert-links --accept html http://keegan.st

Or, for short:

wget -m -E -k -A html http://keegan.st

Following is an explanation of the wget options used in the command above:
--mirror, -m: Recursively download all available files.
--adjust-extension, -E: Add a `.html` filename extension when saving the files.
--convert-links, -k: Change the links in the downloaded files so that the stylesheets and images will be loaded from the remote server when the files are opened off the local file system.
--accept html, -A html: Only download HTML files. Don’t download images, videos, stylesheets, etc.

Next, we `cd` into the directory where the files were downloaded, and run Element Finder with the CSS selector we want to search for:

cd ~/Downloads/keegan.st
elfinder -s ".entry-content h1, .entry-content h2, .comment-content h1, .comment-content h2"

The following screenshot shows the results:

We can use this information to view the pages that would be affected if we were to modify that CSS rule.

Find in Files with CSS Selectors

Most code editing software has a Find in Files tool, where you can search through all of the files in your project for a string or regular expression. But that isn’t much help if you want to find the files which contain HTML elements matching a CSS selector. That’s why I have built Element Finder.

Element Finder searches through your project for any given CSS selector, for example `.layout-fluid #sidebar`, and provides a list of the files which contain elements matching that selector. In this example that would be any element with an ID of `sidebar` which is a descendant of an element with a class of `layout-fluid`.

This is useful for front-end developers because sometimes when we have to modify a CSS rule, it is not clear whether the change will affect other pages. This is particularly the case on large projects involving multiple front-end developers.

It is easy to use Element Finder from the command line. First you need to `cd` into the directory you would like to search. Then run `elfinder` and pass in the selector you would like to search for as the `-s` argument:

elfinder -s ".layout-fluid #sidebar"

When you press `Enter`, Element Finder will run and display the results:

Searching for ".layout-fluid #sidebar" in 3 files in "/Users/kestreet/Projects/MyProject". Found 2 matches in 2 files.

Element Finder ignores `.git` and `.svn` files by default. You can ignore other files too with the `-i` argument. If you wanted to search all files except for files in the `partials` folder, you would run:

elfinder -s ".layout-fluid #sidebar" -i ".git, .svn, partials"

Element Finder only searches files with an extension of `html` by default. If you would like to search other files, set the `-x` argument. For example:

elfinder -s ".layout-fluid #sidebar" -x "html, shtml, htm"

If you edit in Vim you can call Element Finder directly from your code editor. Just run:

:!elfinder -s ".layout-fluid #sidebar"

How it works

Element Finder is built with Node.js. JavaScript was designed for interacting with web pages, so it is a natural fit for this app. Another benefit is that front-end developers know JavaScript, so it should be easy for other people to fork Element Finder and change it to suit their needs.

Element Finder utilises a number of useful Node.js modules for building command line apps, such as TJ Holowaychuk’s commander.js and node-progress. Elijah Insua’s jsdom module is used to construct a W3C-conforming DOM for each HTML file we need to search. Sizzle is used to find matches for the search input in the DOM. Sizzle is jQuery’s selector engine, which means Element Finder will consistently return the same elements as the jQuery selectors in your own JavaScript.
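The core matching step looks roughly like this. This sketch uses jsdom’s modern API and `querySelectorAll` as a stand-in for the actual jsdom-plus-Sizzle wiring, so treat it as illustrative rather than Element Finder’s real source.

var fs = require('fs');
var JSDOM = require('jsdom').JSDOM;

// Parse one HTML file and count the elements matching a selector.
function countMatches(filePath, selector) {
  var html = fs.readFileSync(filePath, 'utf8');
  var dom = new JSDOM(html);
  return dom.window.document.querySelectorAll(selector).length;
}

console.log(countMatches('index.html', '.layout-fluid #sidebar'));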

Looking forward

It would be useful to have Element Finder implemented as a plugin for Vim, Sublime Text and other code editors. But in the meantime I hope you will find it useful as a command line app. Technically the app should work on Mac, Windows and Linux, but so far I have only tested it on a Mac. For installation instructions, visit Element Finder on GitHub.

Filtering HTML5 Validator Errors

In an ideal world, every web page you work on would have 100% compliance with the HTML5 spec. But in the real world this often isn’t possible. Sometimes we need to add non-compliant markup to support certain browsers. We might also be using server-side frameworks which generate invalid markup.

Unfortunately the major HTML validators, validator.w3.org and validator.nu, don’t care if you would prefer to ignore certain errors. To them, an error is an error is an error. This can make validating HTML markup a real pain. The results page can be full of errors that you already know about and have decided not to fix, making it difficult to identify new or unknown errors.

That’s why I have customised validator.nu to create validator.keegan.st. I have added a JavaScript layer on top of the original validator which filters the results into groups and lets you show or hide each group at the click of a button. If some of the error messages are about “frameborder” and “allowtransparency” attributes which you have added to iframes to support IE8, just hide them! If your server-side framework generates closing <img></img> tags instead of self-closing <img/> tags (I’m looking at you, ATG), hide those errors too! By filtering out the noise you will be able to identify when there is a serious problem in your markup, such as an unclosed element or duplicate IDs.

Below is an example of the validator grouping error messages for the Melbourne newspaper The Age. As you can imagine, a list of 17 groups of error messages is much easier to comprehend than a sprawling list of 299 individual errors.

If you would like to use the validator, head over to validator.keegan.st. If you’d like to learn about how it works, read on.

Under the hood, validator.keegan.st is just validator.nu with a JavaScript layer added on top. In fact, I haven’t written any server-side code. Fortunately, the good people at validator.nu have configured their server to send an HTTP response header of “Access-Control-Allow-Origin: *”. That enables the client-side JavaScript on validator.keegan.st to make Cross Origin Resource Sharing (CORS) requests to the validator.nu server.

The first CORS request is a simple GET request to load in the form. Most browsers support this – the exceptions being IE7 and Opera. I’m not too worried by this because Opera is expected to support CORS in version 12, and how many web developers are using IE7? Not the ones who would care to validate their markup, I suspect!

The form gives the user three main input options: Address (URL), File Upload and Text Field (direct input). Address input is the simplest option and the only one which works in IE8 and IE9. It serialises the form and sends it off as a GET request. When the response is returned, the JavaScript sorts the resulting error messages into groups and generates an interface to toggle the visibility of the error messages in each group.
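A simplified sketch of that request follows. It assumes validator.nu’s JSON output format (`out=json` returning a `messages` array), and the grouping key is illustrative.

// Request validation results for a URL as JSON via CORS,
// then group identical error messages together.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://validator.nu/?out=json&doc=' +
  encodeURIComponent('http://example.com/'));
xhr.onload = function () {
  var messages = JSON.parse(xhr.responseText).messages;
  var groups = {};
  messages.forEach(function (msg) {
    groups[msg.message] = (groups[msg.message] || 0) + 1;
  });
  console.log(groups);
};
xhr.send();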

The File Upload and Text Field options are a little more complex because they have to post “multipart/form-data” in a CORS request. This can be achieved with the FormData object, which is supported in Chrome, Firefox and Safari. IE10 and Opera 12 are expected to support this object too.
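For example, the File Upload option posts something like this. The field names here are assumptions for illustration rather than validator.nu’s documented form fields.

// Post an HTML file for validation as multipart/form-data.
var fileInput = document.querySelector('input[type="file"]');
var formData = new FormData();
formData.append('out', 'json');
formData.append('file', fileInput.files[0]); // field name is an assumption

var xhr = new XMLHttpRequest();
xhr.open('POST', 'http://validator.nu/');
xhr.onload = function () {
  console.log(JSON.parse(xhr.responseText).messages);
};
xhr.send(formData);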

The final feature I added was HTML5 local storage to remember which groups of error messages the user has hidden. When they validate another page or return to the website later, their preferences remain intact.
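Persisting those preferences only takes a few lines of localStorage; the key name here is illustrative.

// Save and restore the user's hidden message groups.
function saveHiddenGroups(hiddenGroups) {
  localStorage.setItem('hiddenGroups', JSON.stringify(hiddenGroups));
}

function loadHiddenGroups() {
  var stored = localStorage.getItem('hiddenGroups');
  return stored ? JSON.parse(stored) : [];
}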

Happy validating!