A while ago, I had to incorporate a map of Flanders into a site, so I had my designer draw the map in Illustrator. The problem was that the different regions needed to be clickable, which meant building an HTML image map. Image maps support polygon regions, so that would work fine; I only had to get the polygon data out of the Illustrator file and into the HTML markup.
The first attempt used a site where you upload an image and draw your polygons manually. Obviously the polygons would be coarse and wouldn't match the outlines of the provinces very well, as you have to click each point of the polygon by hand on a small image. After an accidental page refresh lost the polygons for practically the entire map, this approach to creating the image map was swiftly abandoned.
Luckily, Illustrator can provide us with the polygon data. Simply export the drawing as an SVG file, which is just an XML file. This XML file contains <polygon> and <polyline> tags which we'll need to process. Let's take a look at a snippet of the polygon data.
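It looks something like this (the coordinates and fill color are invented for illustration; a real export contains many more points per shape):

```
<polygon fill="#C6C6C6" points="170.5,110.2 172.1,115.8 169.4,120.0 165.2,114.6" />
<polyline fill="none" points="180.0,130.5 185.3,133.1 188.9,138.4" />
```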
There is a slight manipulation we have to perform. The points attributes above do indeed contain the polygon data, but the formatting differs a little from what an HTML <area> element needs. In the SVG, the x and y of a coordinate pair are separated by a comma and the pairs are separated by spaces, whereas an HTML <area> element expects one flat string of comma-separated values (x,y,x,y,x,y,…).
A small Python script can take care of that quite easily, as it would be too much work to convert all the points arrays by hand. Even if you'd entered the commas manually and extracted the points from the SVG file, another problem would pop up: the coordinates turned out to be offset on both the x and y axis! This is particularly strange, since at the top of the SVG file you can see that the viewBox and the x and y offsets are all zero. You would assume the same zero origin would apply in the HTML <area> element, but apparently there is a difference. Instead of spending hours figuring out where the difference comes from, we can simply add an offset in the Python script when parsing the points.
We'll start the Python script by importing the xml minidom module and defining our offsets.
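A minimal version of that opening, with placeholder offset values (the real values depend on your drawing and are tuned later):

```python
# Parse the SVG with the standard library's minidom.
from xml.dom.minidom import parse

# Offsets to correct the coordinate shift described above.
# These values are placeholders; adjust them for your own map.
offsetX = -5
offsetY = -5
```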
Now we'll parse the SVG file. In this example I only look at the first <g> element, which is the first layer in Illustrator. Make sure you keep the Illustrator file as simple as possible, in a single layer; this will keep the SVG file much cleaner. Next, all the child nodes of the <g> element are processed, and the points attributes of the <polygon> and <polyline> elements are stored away in the inputs array.
inputs = []
dom = parse("chart.svg")
# Only the first <g> element: the first (and only) Illustrator layer.
group = dom.documentElement.getElementsByTagName('g')[0]
for child in group.childNodes:
    if child.nodeName == 'polygon' or child.nodeName == 'polyline':
        inputs.append(child.getAttribute('points'))
Finally, we'll loop over the inputs array containing all the points data. To get the individual coordinate pairs we split the string on every space, and to get the x and y values of a pair we split on the comma. Now that we have the x and y values we can add the offsets. All that is left is printing the correct HTML <area> string, joining all the coordinates with commas.
for input in inputs:
    outputarr = []
    components = input.split(" ")
    for component in components:
        xy = component.split(",")
        if len(xy) == 2:
            x = int(float(xy[0])) + offsetX
            y = int(float(xy[1])) + offsetY
            outputarr.append(str(x) + "," + str(y))
    print '<area shape="poly" coords="' + ",".join(outputarr) + '" href="#" alt="" title="" />'
What about the offset numbers? They seem quite arbitrary, and in fact they are. Determining the exact values can be tricky, because you can't see the polygon areas over your image while testing. There is however a nice jQuery plugin that outlines the polygons: jQuery MapHighlight. With the aid of this plugin you can let Python process the SVG file, look at the resulting HTML, see how far the polygons are shifted, and adjust the offsets accordingly.
The new Safari Validator release is out, bringing HTML5 validation!
The service used to validate the HTML5 pages is validator.nu. This means that, unlike with W3C validation, the contents of your HTML5 pages are transferred to the remote validator.nu server and processed there. Be careful not to submit pages containing sensitive information for validation!
To be safe, it's best to turn off HTML5 validation in the preferences when you're not developing an HTML5 website.
I've also added, upon request, a toolbar button. This quickly toggles the Safari Validator bar so you can reclaim the precious screen real estate when you're not interested in the validation results.
Most of you being designers, you will probably notice that the button image is just about the ugliest ever created, and I can only agree. I would have put the circle with the v-checkmark in there in grayscale, but Apple forces you to use an (alpha) bitmap, and I didn't have the time to create a nice-looking icon. If you're interested in making a decent logo for the toolbar button, let me know. Just make sure to only use black (and transparency).
Three other fixes are also included: the W3C validation results page is shown again, and the border the extension added at the bottom of each website is gone. Also, the embed shouldn't be injected into iframes anymore, so it won't end up in CMS rich text editors.
A strange thing happened when importing a large number of logfiles into AWStats. Apparently only three out of twelve months were being imported correctly. Even over several years, only the same three months would be imported, the rest discarded.
I pipe the Apache logfile into Cronolog, to make sure the logs get put into the following folder structure: YYYY/MM/DD/access.log
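In the Apache configuration, that pipe looks something like this (the paths are illustrative; cronolog expands the %Y/%m/%d template to the date of each log line):

```
CustomLog "|/usr/sbin/cronolog /var/log/apache/%Y/%m/%d/access.log" combined
```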
This makes it easy to find things in the log files. Each night, the giant access.log file gets split up into several files, one for each vhost. This is easily accomplished by appending the vhost to each log line; a script then regexes for the vhost, puts the line in the log folder for that specific vhost, and gzips the logfile. This way I only need two running cronolog processes during the day, instead of one for each of the vhosts.
So when migrating a lot of sites I had to reprocess the AWStats data as well, since I hadn't copied the AWStats databases. This is quite easy, as you can specify in the .conf file which files it needs to parse. I put in the following:
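It was a directive roughly along these lines (the path here is illustrative; the trailing pipe tells AWStats to run the command and read its output):

```
LogFile="find /var/log/apache -name 'access.log.gz' | xargs zcat |"
```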
This will find all the files in the log folder, filter for the access logs, and zcat them as input for AWStats. This is where it went wrong, and most of the months went missing. The cause was actually really simple: AWStats expects logfiles to be fed in chronological order, and running find by itself clearly showed that its output was not chronological (as indicated by the folder names).
This comes down to how find traverses directories: find makes no ordering guarantees, and simply returns entries in whatever order the filesystem lists them. On the old server, running CentOS, the output happened to come out perfectly sorted, so I had never encountered this problem before. On the new server, running a different Linux flavor, the find results come out mixed up.
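Piping the find output through sort fixes it, since the YYYY/MM/DD folder names sort lexicographically into chronological order. A directive along these lines (path illustrative) feeds AWStats the logs in the right sequence:

```
LogFile="find /var/log/apache -name 'access.log.gz' | sort | xargs zcat |"
```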
While migrating a Qmail installation to a new server, I ran into a peculiar problem. To avoid spreading mail over two different servers while the DNS change propagates to all the dns-servers, you can use the smtproutes file to instruct Qmail to deliver mail to a different server. Basically you tell it to accept mail for the domain you are moving, but instantly deliver it remotely to the ip-address or domain of the new server.
The procedure is actually quite simple. Mirror all the accounts on the new server, so that all the mail gets accepted there. Then, on the old server, remove the domain from the /var/qmail/control/virtualdomains file (and from the locals file if it contains the domain as well). Finally, if it doesn't already exist, create /var/qmail/control/smtproutes and add example.com:184.108.40.206 in there.
184.108.40.206 is the ip-address of the new mail server (you could also enter the domain name there), and example.com is the domain you are migrating.
By using this procedure, you ensure that no new mail gets delivered on the old server. Mail servers that haven't seen the updated DNS entry yet will deliver to the old Qmail server, which will forward the message immediately to the new one.
This is where I ran into a problem. When I was testing out this configuration, the mail wouldn't end up on the new server. Instead I would receive a bounce containing the "554 too many hops, this message is looping" error. I could see in the logs that the message was indeed looping: it was retried around twenty times in quick succession, never leaving the old mail server. I watched with tcpdump on both the old and the new server to see which one was misbehaving, but the new server wasn't even being contacted at all.
After researching a bit, I found that several people had encountered the same 'too many hops' issue in combination with smtproutes. The offered solutions were always the same:
Make sure the domain is still in /var/qmail/control/rcpthosts, otherwise Qmail won't accept the mail at all (it won't relay for unknown domains)
Remove the domain from /var/qmail/control/virtualdomains and locals
Add the domain to the /var/qmail/control/smtproutes file in the format domain:newmailserver
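Put together, the relevant control files on the old server end up looking like this, using the example domain and ip-address from above (the comments are just annotations; the real files contain only the bare lines):

```
# /var/qmail/control/rcpthosts: keep the domain, so mail is still accepted
example.com

# /var/qmail/control/virtualdomains: domain removed, so no local delivery

# /var/qmail/control/smtproutes: forward everything to the new server
example.com:184.108.40.206
```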
Everything on that list was as it was supposed to be, and yet mail wasn't being forwarded to the new server. The old server didn't even open a connection to the new one. Even worse, all incoming mail was being bounced.
It took me a while, but I finally understood the problem. On the old server I have a /29 block of ip-addresses available to put servers on, and all of them were assigned to the old server. The new mail server, a physically different machine, was configured with one of those ip-addresses! It didn't really matter that the old server still had that ip-address active as one of its own, because the router knew about the change. Qmail, however, thought that the ip-address I put in smtproutes was on the same old machine, because the network interface for that ip-address was still up.
As soon as I brought down (ifconfig down) the network interface for the obsolete ip-address on the old server, the mail forwarding worked just fine. So if the three checks above don't do the trick when smtproutes isn't working, make sure you're not forwarding to an ip that's still assigned to a network interface on the old machine!
I've rewritten the plugin to make use of the new extension mechanism in Safari 5. This means the reliance on SIMBL is gone and no private APIs are used. It is now just a webplugin/Safari extension combination: the extension handles the user interface, and the webplugin handles the actual validation.
Installation is pretty simple: copy the webplugin to ~/Library/Internet Plug-Ins, and double-click the safariextz file. The latter requires that you have enabled extensions in Safari. To do this, open the preferences and go to the 'Advanced' pane. Check 'Show Develop menu in menu bar'. In the newly visible 'Develop' menu, select 'Enable Extensions'.
Please note that the W3C validation takes time! It will slow your browsing down, especially if the site contains a lot of (i)frames. If that bothers you, just go to the preferences and disable the W3C validation.
HTML5 support isn't there yet, as I'm still investigating the best solution. The validator.nu engine (which the W3C uses) assumes you will be running it as a web service, which is not ideal inside a browser. Opening a port and running a server in the browser is definitely a worst-case solution.