All good books have colophons. Since a lot of people learn to build web pages by looking at the code of sites they visit, it seems like a good idea here, too. Naturally, that brings us to the pressing point of definition. Just what is a colophon and what would you possibly want with one?


Main Entry: col·o·phon
Function: noun

Etymology: Latin, from Greek kolophōn summit, finishing touch; perhaps akin to Latin culmen top

1 : an inscription placed at the end of a book or manuscript usually with facts relative to its production

2 : an identifying mark, emblem, or device used by a printer or a publisher

—Webster’s 10th Collegiate Dictionary

In a nutshell, the colophon usually tells you what fonts were used to lay out a book, who typeset it, the sort of paper on which it was printed, and the name of the printer. Because there are many layers of sophistication on a website, I wanted to make this page fairly detailed – like an annotated bibliography of the technologies and techniques employed to bring everything to life.


How We Got Here:


Some time late last century, the site began as a hodgepodge of HTML 4.0 and a whole bunch of really ugly table code. At some point, it occurred to me that writing code that would validate might be a good thing, so things slowly shuffled in that direction. CSS was also adopted along the way. In 2006 or so, for reasons lost to time, the conversion to XHTML 1.0 Transitional got under way. The tables went away. So did the spacer GIFs. Neither is particularly missed. By then, the site had started to look amazingly consistent across browsers.

One of the issues with the XHTML 1.0 site was the fixed CSS navigation. In order to make it work, quirksmode was used to keep everything working in Internet Explorer. It also caused some problems that were pretty much ignored. A few conversations with Scott Kimler – troubleshooting his PZ3 code and its incompatibilities with my site weirdness – basically resulted in him sending me some revised code that didn't require quirksmode, and shaming me into redesigning the site to be 100% compliant, hack-free XHTML 1.1. It took a while because A) I'm slow, and B) I wanted to change some things that required technology to progress a bit. Eventually, it did. And here we are.

Since the site exists largely as a placeholder on the web, the design is done as needed, and not by a professional. The site functions as required and evolves at its own pace. It’s uncluttered and easy to navigate, with an emphasis on content (and textual content, at that). For as remedial as the site may look, there’s a tremendous amount going on under the hood. It’s a collection of technology and techniques that can be carried over to other vastly more sophisticated site designs. Most of it is explained below.

Directly and indirectly, more than a few people have helped to make this site what it is. To their skill, experience, and willingness to share, I am eternally grateful.

The site is built by hand using UltraEdit. It is occasionally fiddled with in an ancient copy of HomeSite.

Features of particular interest include...

Page Structure:

It started innocently. But then somehow, it always does. Back when the fixed side navigation menu was instituted, I was taken by Server Side Includes as a flexible way to build up a web page. When Scott Kimler put me onto his myMIME script, I was even more impressed with the ability to include pretty much anything via PHP.

As the 1.1 rewrite got underway, I sat down to analyze what made up my pages. Like most sites, there’s a DOCTYPE declaration and then all manner of stuff crammed into the <head> – things like style sheet imports and the various meta elements. Since all these elements are common across all pages, why duplicate them? I didn’t have a good answer so there was a conscious effort to eliminate, get rid of, and otherwise stamp out redundancy.

The basic goal was simple: I didn’t want to have to change the same thing in more than one file – ever. By the time I was done, every conceivable piece of the site was split out into modules: the CSS was broken into multiple style sheets, and the pages were standardized around a series of imported code blocks which, like the CSS, can be altered with global consequences. This reduced the basic page to little more than an insane amount of includes and a heavily templated content section. That’s the nice thing about templating: if everything’s the same, then all you have to pay attention to are the things that are different.

The include structure breaks down something like this: DOCTYPE declaration (which is not an include in any traditional sense; more on that later), content-type metadata, then all of the various style sheet imports. The navigation menu was split into three different blocks based on a header/body/footer model. The logo and main elements are consistent on all pages, so everything above the horizontal rule makes up the primary block. Menu items below the separator change contextually by section, so they’re assigned to their own block. The validation badges finish off the menu with their own tiny include.
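Laid end to end, the include structure described above reduces a page to something like the following sketch. (The include file names here are invented purely for illustration; the real ones differ.)

```php
<?php include 'includes/doctype.php';      // DTD + opening <html> tag ?>
<head>
<?php include 'includes/contenttype.php';  // content-type metadata ?>
<?php include 'includes/styles.php';       // all of the style sheet imports ?>
<title>Page Title Goes Here</title>
</head>
<body>
<?php include 'includes/menu_primary.php'; // logo + main menu items ?>
<?php include 'includes/menu_section.php'; // contextual menu items ?>
<?php include 'includes/badges.php';       // validation badges ?>

<!-- templated content section goes here -->

</body>
</html>
```

Change any one of those include files, and every page on the site picks up the revision.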

To give the efficiency of this concept some perspective vis-à-vis space conservation and the ability to make site revisions: there are about 30 lines in the <head>, and with the navigation menu (which varies in size depending on the page/section on which it's being implemented), there can be as many as 100 lines of code (roughly 4.5kb per page) to wade through before you ever get to the contents. If you have to change boilerplate code on every page, it quickly becomes tedious to make widespread changes, even on a small site. While there are text editors with "Replace in Files" functionality, those only work if every file is identical in its construction – and if an anomaly were somehow introduced, it would be practically invisible, and the global replace would fail silently on those pages. With the template system, that same preamble is guaranteed to work on all pages and takes up only 12 lines of code.

Standards Compliance:


In addition to validating the XHTML and CSS (see the linked icons at the bottom of the navigation column to the left), the site is tested in both Firefox and IE. This is done more out of curiosity than a burning desire to cater to the dominant browser, but since both are generally present on the dev machine, it's just an issue of being thorough. Every so often, differences in rendering crop up, and usually the fault lies with my code. It's always interesting to see which things break in which browser. Since there's no really exotic coding going on, things should be expected to work in any modern, standards-compliant browser.

The thing I’ve come to like most about XHTML is something that seems to irritate and horrify a lot of people: if your code isn’t well-formed, the page won’t even display. While it doesn’t actually mean that you’re writing valid code, if your page displays but won’t validate, the solution is usually fairly trivial. It’s perversely useful. Once you get past that hurdle, there are a few more considerations and complications along the way to producing a working page.

A MIME is a Terrible Thing to Waste:

One of the more convoluted issues surrounding the conversion of a site to (or building a site in) XHTML is making sure that you get the appropriate MIME type. There's all kinds of heated debate about how/why this should or shouldn't work (or whether or not the world will end if it's not done exactly according to Hoyle). The fact that the W3C recommends that all XHTML documents be served as “application/xhtml+xml” is complicated by the fact that some browsers (e.g. IE, as usual) haven't a clue what to do with that MIME type and render the page as raw code.

Scott Kimler put me onto a PHP solution for sorting all this out. Since XHTML 1.1 code will validate with an XHTML 1.0 Strict DOCTYPE – which the W3C graciously “allows” to be served as “text/html” – the myMIME script dishes up different DTDs and MIME types to different browsers, based on their rendering capabilities. As if that's not cool enough, another interesting and potentially confusing thing about myMIME is that these declarations happen invisibly. If you do a view source, there's no Content-Type, and yet the page still validates. Since it's not unreasonable to think that anyone looking at the page source will be completely baffled by what looks like a glaring syntactical omission, I used this—

<?php echo "<meta http-equiv=\"Content-Type\" content=\"$mime;charset=$charset\" />\r"; ?>

—to display all the appropriate MIME info in <head> (where it “belongs”) – even though it’s not actually “needed”. The functionality of the page isn’t altered. All the code snippet is doing is taking info that already exists and echoing it back into the page source for clarity.

And so, with myMIME, XHTML is served properly to all browsers, quirksmode is a thing of the past, and a whole lot of cross-browser compatibility issues went with it.
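The gist of the negotiation can be sketched in a few lines of PHP. (To be clear, this is not Scott's actual myMIME code – which also swaps the DTD – just the core idea: check what the browser's Accept header claims to understand and choose a MIME type accordingly.)

```php
<?php
// Sketch only, not the real myMIME script: serve application/xhtml+xml
// to browsers whose Accept header claims support for it; everyone else
// (i.e. IE) gets the backwards-compatible text/html.
function negotiate_mime($accept)
{
    if (stripos($accept, 'application/xhtml+xml') !== false) {
        return 'application/xhtml+xml';
    }
    return 'text/html';
}

$accept = isset($_SERVER['HTTP_ACCEPT']) ? $_SERVER['HTTP_ACCEPT'] : '';
$mime = negotiate_mime($accept);
header("Content-Type: $mime; charset=utf-8");
```

The real script is considerably more careful about edge cases, but the principle is the same.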

Unicode Conversion:

One final interesting feature of myMIME is that since it's included in every file, you can use it to globally change the character set used on a site (since charset is a component of the MIME type). To this end, I decided to switch the text encoding from the standard Latin charset (ISO-8859-1) to the much more universal UTF-8. This wound up being an intriguing process, not so much in terms of complexity as in terms of research and ramification.

Unicode web pages are not particularly cryptic, but neither are they particularly well-explained. Characters can be placed into a file in several ways. Like HTML named entities – e.g. &amp; for ampersand – Unicode characters have specific numeric values and can be referenced by codepoint, either as a hexadecimal value (U+ values are always given in hex) or by its decimal equivalent:

&#x[hex value] ... or ... &#[decimal equivalent]

If the application being used to create your file supports UTF-8 encoding, these characters can also be typed in directly.
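For example, the em dash (codepoint U+2014, which is 8212 in decimal) can be written in any of three equivalent ways – hex reference, decimal reference, or the raw character itself:

```
&#x2014;   ...   &#8212;   ...   —
```

All three produce identical output; the raw character just requires that the file actually be saved as UTF-8.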

So what identifies a file as UTF-8? That’s an interesting and convoluted issue for which there is an appallingly terse answer – not much. Since UTF-8 is backwards compatible with ASCII, the only difference between a standard ASCII text file and one that is encoded in UTF-8 is the presence of the Unicode characters. UTF-8 has everything to do with text encoding and nothing to do with file structure. Having said that, this is where things can get complicated. There is a byte sequence known as the Byte Order Mark (BOM) which identifies the endianness of characters encoded in UTF-16 or UTF-32. Text encoded as UTF-8 does not suffer from byte order issues, so a BOM is not required, though it can be used to identify text as UTF-8. Many applications – and especially UTF-8 conversion utilities – will add a BOM to ASCII files by rote, and this in turn leads to all manner of issues and complications.
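If one of those utilities has "helpfully" prepended a BOM, removing it is straightforward: the UTF-8 BOM is the fixed three-byte sequence EF BB BF, so it can simply be tested for and sliced off. A minimal PHP helper (my own sketch, not part of any of the tools mentioned here) might look like this:

```php
<?php
// Strip a UTF-8 byte order mark (EF BB BF) from the start of a string,
// e.g. text read from a file that a conversion utility has marked.
function strip_utf8_bom($text)
{
    if (substr($text, 0, 3) === "\xEF\xBB\xBF") {
        return substr($text, 3);
    }
    return $text;
}
```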

The success or failure of rendering UTF-8 encoded text lies solely with the applications that create, manipulate, move, and display it. Whether they lack support or are simply misconfigured, the end result is horribly mangled output. Culprit applications can include text editors, FTP clients, databases, web servers, scripting languages (such as PHP), and legacy browsers. Fortunately, as things evolve, there is increasing demand that applications support UTF-8. As a consequence, many of the issues faced in the past are simply no longer relevant. However, there's still a good amount of research, experimentation, and testing to be done before converting a complicated site to Unicode.

CSS Validation (and Ninjas):


CSS is a wonderful thing. How could you dislike a technology that allows you to make global changes from a centralized location? Well... easy! Things don’t always work the way they’re supposed to. But with a little ingenuity, there’s a solution to everything.

It’s not unreasonable to say that the single nicest contribution Microsoft has made to the field of web development is the introduction of the conditional comment. (This is canceled out by the fact that they also inflicted Explorer on us.) In a nutshell, it works like this: you’re coding along, checking your progress, and you do something that you either know or discover will not render in some flavor of IE. Instead of littering your style sheet with various fixes, you create an additional style sheet with rules that rewrite the problematic rules from the main sheet to address (and presumably solve) the whatsit. In the <head> section of the page, you create an [if] condition inside of an HTML comment: if some or all levels of IE are discovered, then do something – such as load an additional style sheet.

Not only is this handy, it's downright necessary in some cases, since Microsoft has historically had its own very peculiar agenda when it comes to rendering and standards support. This isn't to say your IE styles shouldn't be valid; it's just that sometimes they need to be different.
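A typical conditional comment looks like this (the style sheet name here is hypothetical):

```html
<!-- Loaded by every version of IE up to and including 6; all other
     browsers see nothing but an ordinary HTML comment. -->
<!--[if lte IE 6]>
<link rel="stylesheet" type="text/css" href="./styles/ie6_fixes.css" />
<![endif]-->
```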

Here’s an interesting bit of trivia about the W3C CSS validator: it doesn’t pick up conditional style sheets (nor should it, because... HEY – they’re inside of a comment). Might this be a way to sneak invalid code past the validator...? Why yes – if you only wanted to target IE.

In breaking the site apart and making it more modular, the CSS got attacked fairly early on. The bulk of the styles are served out of one main sheet. Standards-compliant browsers are most happy with this. For browsers that are not so happy with the standard ways of dealing with things – i.e. IE – there are separate style sheets for IE6 and IE7, as well as a global IE fix sheet. It was easier to split them all apart than to try to keep track of which weird trigger activates which browser.

Printing in Firefox has been horribly broken since its initial release, so an additional style sheet to reflow web pages for dead-tree printing has been put into service. Over the years, various minor Firefox print bugs have been squashed, but there is a major problem that has gone unaddressed, though widely discussed, since it was first reported in March 2002. The issue has to do with the way the Firefox print engine handles overflow:hidden and overflow:auto declarations. When it hits one of those, Firefox refuses to print anything beyond that point, which generally means you get only one page. While remedial, the printer CSS strips irrelevant styles (such as the navigation) and makes things a bit nicer if you're into the ritual sacrifice of trees.

There’s one last style sheet that bears some discussion since it has a direct impact on site validation.

Some styles – while supported – will not validate. The W3C has been working on CSS3 since 1998, and it’s still not done – much less implemented. Certain chunks of the spec, though, while still considered a working draft, have reached the point of maturity and usability that preliminary support has been added to certain browsers. “Preliminary” generally means that some kind of browser- or vendor-specific prefix has been added. (See sidebar.)

When this started happening a few years back, the answer seemed obvious: if you want to have a style sheet that validates, you’ve got to strip out this sort of fringe code and relegate it to its own style sheet. It’s not really a solution, since those styles will still break, but if you’re reading a validator report and everything it’s choked on is contained in something called “invalid_styles.css”, it should be pretty obvious what’s going on. That’s a lot better (or less obtuse) than the Wikipedia approach of adding notes to their style sheet stating that the code “is correct, but the W3C validator doesn’t accept it”.

While this is an acceptable method of containing the problem, it doesn’t change the fact that you actually have a problem. This requires the universal solution to every problem – ninjas!

I had this idea a few years ago, but didn’t (and still don’t) have the coding experience to implement it, so there it languished in the depths of my head: extending the separate valid/invalid style sheets concept, how difficult would it be to hide something from the validator? There’s a solution that creates a series of conditional queries identifying various browsers by user-agent and serving invalid styles (or whatever) to them. Since the validator bypasses the conditions, it doesn’t see the styles and goes on its merry way. It’s rather clunky, since it requires you to specifically define every user-agent you want to target. Scott Kimler pointed out that the more obvious way of attacking the issue is to simply exclude the validator, which is fairly trivial since it can be identified by its own unique user-agent string.

Occam’s razor in the hands of CSS Ninjas – the validator will never see them coming! (You, on the other hand, can spot them here.)
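The ninja technique boils down to a user-agent check. This sketch is my own distillation, not the site's actual code, and the style sheet name is hypothetical; the W3C validators do announce themselves in their User-Agent strings (e.g. “W3C_Validator”), which is what makes the exclusion trivial:

```php
<?php
// Serve the invalid-but-working styles to everyone except the W3C
// validators, which identify themselves by User-Agent. Sketch only.
function serve_fringe_styles($user_agent)
{
    return stripos($user_agent, 'W3C_Validator') === false
        && stripos($user_agent, 'W3C_CSS_Validator') === false;
}

$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
if (serve_fringe_styles($ua)) {
    // style sheet name is hypothetical
    echo '<link rel="stylesheet" type="text/css" href="./styles/invalid_styles.css" />';
}
```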

One final thought about CSS: it seems to be a little-known fact that the W3C CSS validator can test for CSS3 compliance. The default is CSS 2.1, but if you check under “More Options” you'll see that there are all manner of different profiles – including CSS3. All of these options can be passed directly in the validation URL. (If you're embedding validation links in an XHTML page, make sure to escape your ampersands or the page will commit ritual suicide.)

Viewing Width:

Print media has benefited from centuries of refinement. Newspapers, for instance, are laid out in columns for specific reasons: readability and economy of space. Those reasons complement the style in which newspapers are written – short sentences and equally short paragraphs – but it's a case where having control over the final layout is imperative. The web basically destroys all that. As soon as everything goes liquid – and if it's not, you get criticized for that – you're at the mercy of the common sense of your visitors. Or the decided lack of same, because you just know there's gonna be some idiot complaining that your site looks crappy running full screen on their 30″ Cinema Display. The problem is that when everything scales dynamically, paragraphs get shorter the wider your browser is, so you can wind up with what would otherwise be a reasonably-sized paragraph stretched out to only a couple of obnoxiously long lines. And if you have images that are supposed to float next to said paragraph, when it's only a couple lines, the traditional concept of what a paragraph “looks like” kind of goes out the window.

The site is intended to be viewed at somewhere around a 1024 pixel browser width. Running a browser smaller than that reflows the pages fairly well. The max-width style in CSS3 allows liquid flexibility beneath the threshold size and fixed-width behavior above it. Of all the modern browsers in use, only IE6 doesn’t support the property. There are ways of faking it, but since unsupported CSS is simply ignored by browsers that don’t grok those styles, it was more expedient to ignore the issue. If you’re still running IE6, you have bigger problems than the width of this site (and you probably don’t have a 30″ Cinema Display).
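The rule in question amounts to something like this (the selector and exact width are illustrative, not the site's actual values):

```css
/* Liquid below the threshold, capped above it. IE6 ignores max-width
   entirely and simply stays liquid at every window size. */
body {
    width: auto;
    max-width: 1000px;
}
```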


JavaScript:

There's never been a lot of JavaScript on this site. Now there's less. The only appearance is the onclick="return false" code used to kill the ability to click hovered links and images. If visitors have JS disabled in their browsers, the click will simply divert to an explanatory error page.

Frameless Menus:

The navigation menus are supposed to be fixed in place, giving the impression that they are housed in a frame. In fact, the site is frameless and this effect is accomplished with CSS.

In the 1.0 Trans site, these menus (except on the homepage) were placed with Server Side Includes. This made things nicely flexible and planted the seeds of modularizing everything. The only limitation was the index page, since SSI requires the normal file extension to be preceded by an “s” (i.e. .shtml or .shtm), and the index page has to be .html (or .htm). In the 1.1 site, the includes are handled via PHP. Now, all of the menus are included, and only one file extension is used.

The original concept was picked up from articles at The Jotsheet, Tagsoup, Simon Jessey's page, and the CSS-Discuss wiki. Though slightly different, each of these implementations uses quirksmode to sidestep the IE6 box model bug. In the end, though, quirksmode winds up causing more problems than it solves, because every other piece of positioning CSS is impacted by its use. One of the first victims of quirksmode rendering was cross-browser rendering in PZ3. In troubleshooting the problem, Scott Kimler worked up a standards-mode version that works identically in both IE and Firefox with no wonky hacks.



Print-Style Sidebars:

If you Google for CSS sidebars, you'll get almost a million hits (down from almost twice that a couple years ago) for everything but this fantastic bit of code – print-style sidebars for your website! (Available in a variety of flavors for your enjoyment and satisfaction.)

By sheer coincidence, I ran across this excellent tutorial on creating the sorts of inset article sidebars that you see in books and magazines, as opposed to the multi-column page layouts or fixed sidebars (such as the one featured to the left) usually discussed in CSS forums. The article shows how to set up columns of multiple widths in a variety of positions. All code provided.

Expanding Thumbnails:

On the earliest incarnation of the site, thumbnails were scaled in Photoshop and posted separately from the full-sized images. This technique was abandoned in favor of the in-page CSS image zoom. Though the thumbnails don’t look as nice when scaled in the browser vs a dedicated graphics application – detail can blur and thin lines look jagged – the result is a much more elegant user experience.

The code for the in-page CSS image zoom is Scott Kimler’s PZ3 (Photo-Caption Zoom, v3) – a dazzling combination of CSS and XHTML (both valid) that is flexible, compact, and easy to implement. The page layout does not reflow when switching image size, and it behaves identically in both Firefox and Internet Explorer. There’s even a caption option which can be toggled on and off. Information and explanation of how it does what it does can be found at his site.

Since the concept of expanding thumbnails is anything but intuitive, the instruction box is included to offer a gentle hint as to the functionality. Though it appears here in the middle of the page (to accompany the placement of the blurb), in practice, the instruction box is placed over the first thumbnail and is used only once on a page. For consistency throughout the site, all expanding thumbnails are sized to 128px wide.

One thing that I did rework was the physical layout of the XHTML necessary to display a PZ3 image. Scott uses one long line of code, which forces you to actually read the entire block to find the elements you need to change. Since I have a fair number of PZ3 images, I wanted something that was easy to adjust with no more effort than a casual glance. After staring at the code for a while, it became clear that the block could be divided neatly into lines based on the elements they contain (with a couple stray bits of purely functional code). Since they never change, those purely functional lines have been indented to move them off of the left margin and out of your eye-line, and comments have been added to the head of the remaining lines to identify the elements you'll need to adjust whenever you copy the code to add a new image.

By way of comparison, the stock layout for Figure 1 on this page would look like this—

<div class="PZ3zoom PZ3-r Bdr Cap noLnk" style="width:128px; height:22px;"><a href="./errors/error_nojs.html" onclick="return false"><img src="./graphics/colophon/features_firefox.jpg" alt="" title=" Features in Firefox " /><span class="PZ31cap" style="width:692px;"><span class="PZ3inr"><b>Figure 1:</b> The colophon&rsquo;s in-page navigation menu as it can appear in Firefox 1.5 and newer.<br/><b>NOTE:</b> The number of columns will depend on the width of the browser window.<br/><i><span style="color:#ccf; background:inherit;">&mdash;Design by author.</span></i></span></span></a></div>

The reformatted version, as it is featured on the site, appears thus—

<!-- thumb size --><div class="PZ3zoom PZ3-r Bdr Cap noLnk" style="width: 128px; height: 22px;">

<a href="./errors/error_nojs.html" onclick="return false">

<!-- image source --><img src="./graphics/colophon/features_firefox.jpg" alt="" title=" Features in Firefox " />

<!-- width full size --><span class="PZ31cap" style="width: 692px;">

<span class="PZ3inr">

<!-- caption --><b>Figure 1:</b> The colophon&rsquo;s in-page navigation menu as it can appear in Firefox 1.5 and newer.<br/><b>NOTE:</b> The number of columns will depend on the width of the browser window.

<br/><i><span style="color:#ccf; background:inherit;">

<!-- attrib. -->&mdash;Design by author.

</span></i></span></span></a></div>
The revised layout chews up an additional 119 bytes, but that’s a reasonable trade-off for vastly improved readability and ease of maintenance. In either case, using a text editor that supports code formatting makes life still nicer.

Since the expanding thumbnails would otherwise print as thumbnails and not full-size images, a bit of redundancy and CSS is used: the image source and caption lines from the PZ code block are duplicated at the bottom of the web page in div.printercontent. In the main style sheet, printercontent is hidden with display:none, and toggled on with display:inline in the printer style sheet. Because the expanding thumbnails are positioned with (and inside of) div.thumbnail, switching them off is trivial.
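The screen/print switch boils down to a pair of rules split across the two sheets (the comments indicate which sheet each rule lives in; the arrangement shown is a sketch of the approach):

```css
/* main (screen) style sheet: the duplicated copies never show on screen */
div.printercontent { display: none; }

/* printer style sheet: swap the copies in, drop the zooming thumbnails */
div.printercontent { display: inline; }
div.thumbnail      { display: none; }
```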

Contact Form:

The deeper into the redesign things got, the more PHP there wound up being. At some point, it made a certain amount of sense to standardize the number of languages running around the site (if only to reduce the number of requirements to check if the site is ever relocated). So Stephen Ostermiller’s nifty GPL Perl contact form gave way to the more sophisticated PHP-based Green Beast Contact Form by Mike Cherim. The GBCF offers a host of security and anti-spam features. It outputs valid XHTML, is relatively easy to customize, and it’s free. And in the unlikely event that you have problems, Mike is incredibly knowledgeable and quite helpful.

Source Viewer:


The marked-up source is provided by the amazing GeSHi. The Generic Syntax Highlighter is a PHP class that uses CSS formatting and creates valid XHTML from the parsed file. It was rather daunting to get it running (the documentation is geared towards a fairly technical audience), but after working with the developer, it turned out there were a couple of bugs in the code. If you run into issues, check the IRC channel: #geshi (on Freenode).

The sidebar in the section on CSS Validation has a number of examples of GeSHi in action. Each of the style sheets is explicitly linked, so achieving the desired output isn't a problem. What I really wanted to do, though, was to be able to use GeSHi to output the source of any page on the site, and dynamically passing those page locations proved to be tricky.

If you google for PHP and view source, you’ll get a lot of hits for people trying to view PHP source files on web sites. Strangely, I had exactly the opposite problem. Because the Ronin Group pages are pieced together from a number of templated includes, they don’t look like much in their raw form, and aren’t all that interesting. However, that’s all I could get to display.

The usual way of passing a page to GeSHi is by file_get_contents, in a string that looks something like this:

$source = file_get_contents(dirname(__FILE__).'/TRG_colophon.html');

But what PHP does – all it has to do – if the file is local, is read it off the drive. That’s all well and good for something like style sheets, but if your file has (or rather “will have”) any sort of include or other components that might be injected server side, you’ll never see that material.

After thinking about this for awhile, I thought perhaps I could fool PHP into fetching local files with a technique similar to what you’d use to grab the contents of remote ones:


// $url holds the address of the page to fetch; strip everything up to
// and including the last slash to get the bare file name
$file = substr($url, strrpos($url,'/')+1, strlen($url));
$source = file_get_contents($file);

Unfortunately, it didn’t work. The file was treated just as if I’d bypassed the whole URL parsing thing and done a local file_get_contents directly. More head scratching and grumbling and research ensued... My quest for a solution led me to the reference pages of PHP Freaks, and eventually to their IRC channel (##phpfreaks on Freenode).

It turned out that my idea of grabbing pages as if they were remote was sound, just incomplete. The primary issue was that file_get_contents() returns an unprocessed string. By using a magic output buffer (which basically creates an imaginary new page) and assigning said buffer to a variable, the entire page is passed to GeSHi as processed code. Voilà!


$file = substr($url, strrpos($url,'/')+1, strlen($url));

ob_start();                   // start capturing output
include $file;                // run the page, includes and all
$source = ob_get_contents();  // the fully rendered markup
ob_end_clean();


Many thanks to MaxFrag|Zack for taking the time to help me sort this out.

For the very curious (or the eternally bored), there’s a hidden feature on every page: a clear 80×15 block above the XHTML/CSS validation badges that hyperlinks to a GeSHi output of whatever page you’re on. No results will be returned from the home page because the source viewer requires an explicit target page and that one is referenced as / and not as index.html (courtesy of Apache).

You can read more about getting GeSHi to do its thing in this article.

Hover Notes:

After years of user interface development and refinement, the concept of the hovering tooltip is fairly well-established as an element on the desktop. While tooltips on a web page could be fantastically useful (on the live page, hovering over that phrase pops up a demonstration: “This is an example of a hovering tooltip on a webpage – très spiffy, no?”), there's no intrinsic way of implementing such a beast. There are a number of “behave-alike” solutions, but unfortunately, most of those have the potential of breaking either the user experience – by having to load a whole bunch of JS code, and/or having the tooltips display off screen – or the specific semantic usage and definition of certain tags, such as <acronym> or <title>. (Read why titles are not tooltips.)

Determined that this feature would be useful, I worked up a slight modification of Eric Meyer's pure CSS popups experiment. Where Eric used the popup as a caption immediately beneath a sidebar menu, the relationship between caption and menu element was obvious. Here, the only consistency is the placement of the box – it overlays part of the menu space – since the terms being defined can be anywhere on the page. Because the hover tip is not created near the defined element, the danger of the box disappearing off-screen or interfering with other elements on the page is eliminated.

It’s not widely implemented around the site, but it’s bound to pop up from time to time...

Site Map Styling:

The concept of a site map has intrigued me for years, but never so much that I was actually inspired to do anything about it. While it's never too late, from a practical standpoint it's a good idea to start early, lest you wind up with so many pages that you have an insurmountable problem. (There are probably utilities for generating such beasts, but I wasn't quite to the point where I felt compelled to check.) Once the map was put together, I was singularly unimpressed with the layout. It's just a giant list, and there's not a whole lot that can be done about that. By happenstance, at the same time as I decided to build a site map, I found a brief article on turning lists into trees. The cosmetic limitation is that it doesn't “work” on pages with background images, because it needs to be able to overwrite the terminating tree element with a solid color. That's a rather trivial concern – one that I chose to ignore – and the design proved to be exactly what the site map needed. Wrapping it in a multi-column <div> was the final touch.


Favicon:

The favicon – the little icon that appears in the URL bar as well as in the bookmark list – is a detail from Katsushika Hokusai’s famous woodcut, The Great Wave off Kanagawa (see here for a larger version). It was converted from a JPEG by the web-based FavIcon from Pics utility.

Antipixel Badges:

On 22 October 2002, Jeremy Hedley created an internet sensation when he introduced a set of replacement buttons for advertising various under-the-hood technologies used on his Antipixel blog. They were nicely colored, easy to read, and consistently compact: 80 pixels wide by 15 pixels tall. News traveled swiftly through the blogosphere and soon everyone was creating buttons to announce or evangelize one thing or another. Over the years, Taylor McKnight has amassed a very impressive button database which is well worth the time spent browsing.

Quite a few of the 80×15 buttons that appear on the site – on this page, especially – were created by the graphicsguru’s online button generator. It’s very cool: plug in the hex numbers for your borders, field, and font colors, add some text, specify how wide to make the rectangles, hit “generate”. Voilà – your own custom Antipixel badge!

Geo Visitors:

The geo-location plotting of site visitors comes from Shawn Hogan at Digital Point Solutions. It’s basically a hack of Google’s mapping API that shows the origin of visitors during the last 24 hours. It works because the Geo Visitors badge seen on most pages here is hosted on the Digital Point servers: when the image is pulled to your website, the requesting IP address is logged, resolved against a commercial geo-location database, and plotted on the map. The code to add this to your own site/page can be found at the Digital Point link.

Since tracking visitors to all pages requires the Geo Visitors badge to be present on all those pages, this can potentially interfere with your page layout. A simple workaround is to use CSS to include the badge as a DIV background, but to use position:absolute to stake it down in a location where it will never be seen – in this case, 9000 pixels down on the page. By way of comparison, a 30″ Cinema Display is only 1600 pixels high, so this coördinate should be well out of range (even if you’ve tiled a bunch of them into a wall).
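The workaround boils down to a few lines of CSS – the selector, dimensions, and image name here are placeholders, since the real badge URL comes from the Digital Point code:

```css
/* An empty <div id="geo-visitors"></div> sits in every page. The
   badge loads as its background -- so the visit still gets logged --
   while the div itself is staked down far below any real viewport. */

#geo-visitors {
  position: absolute;
  top: 9000px;                  /* out of sight, never out of the logs */
  left: 0;
  width: 100px;                 /* badge dimensions (assumed) */
  height: 30px;
  background: url(geo-visitors-badge.gif) no-repeat;
}
```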

One word of warning: there’s been some kind of ongoing glitch with Digital Point’s MySQL database (which they seem loath to address), and sometimes, it fails to properly identify the incoming website. It’s obvious when this happens, and the solution seems to simply be to refresh the page until the errors go away and the correct site is plotted.

Search Engine Optimization:

This site has in no way been optimized for search-engine page ranking. It’s something that probably needs to happen, but there are so many conflicting opinions about what is good or bad, elegant or fugly, in terms of implementation and best practices that it will require a decent amount of research to sort out.

I have no idea if this will work or not, but if search engine bots will deal with PHP includes, then the perfect solution is to add SEO crumbs in a semi-generic include.
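Purely as a thought experiment – the file name and variable names below are inventions, and no claim is made that this actually helps page rank – such a semi-generic include might look like:

```php
<?php
// seo-crumbs.php -- a hypothetical include pulled into each page's
// <head>. The calling page sets $pageTitle / $pageDesc before the
// include; the fallbacks cover pages that don't bother.
$title = isset($pageTitle) ? $pageTitle : 'Default Site Title';
$desc  = isset($pageDesc)  ? $pageDesc  : 'A default description of the site.';

echo '<title>' . htmlspecialchars($title) . "</title>\n";
echo '<meta name="description" content="'
   . htmlspecialchars($desc) . '" />' . "\n";
?>
```

A page would then carry nothing more than <?php $pageTitle = 'Colophon'; include 'seo-crumbs.php'; ?> in its head.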

In Conclusion...

As the [now late] great Walter Cronkite was wont to say, “That’s the way it is... ” I hope this page was of enough interest not to bore you to tears. Feel free to send me any comments, suggestions, improvements, flames, or whatever occurs to you.


1 The colophon’s in-page navigation menu as it can appear in Firefox 1.5 and newer.
NOTE: The number of columns will depend on the width of the browser window. [Design by author.]

2 The colophon’s in-page navigation menu as it appears in Internet Explorer.
NOTE: The grey links indicate that they have been visited. [Design by author.]

3 Around here, it’s sneaky week all the time! [Photo courtesy of the fabulous T. K. Ryan.]

4 The Great Wave off Kanagawa (1823–29)
Color woodcut, 10×15″. Metropolitan Museum of Art, New York.
From Thirty-six Views of Mount Fuji by Katsushika Hokusai. [Photo courtesy of Wikipedia.]