Mon 03 Dec 2007
Tags: blosxom, blosxom plugins, microformats
Building on my initial set of blosxom microformat plugins, the hcard plugin
provides a global hcard variable for inclusion in your blosxom templates.
To use it, you simply define the set of hcard data to use in an 'hcard.yml'
file in your blosxom data directory, and then include $hcard::hcard
somewhere in your blosxom flavours/templates. An example hcard.yml for me
might be:
Name: Gavin Carr
Organisation: Open Fusion
Role: Chief Geek
Email: gavin@openfusion.com.au
URL: http://www.openfusion.net/
Suburb: Wahroonga
State: NSW
Postcode: 2076
Country: Australia
Latitude: -33.717718
Longitude: 151.117158
HCard-Class: nodisplay
HCard-Style: div-span
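The plugin itself is Perl, but the transformation it performs is roughly the following: take the parsed hcard.yml data and emit an hCard fragment using the standard microformat class names (vcard, fn, org, email, url). This Python sketch is illustrative only, not the plugin's actual code, and it handles just a few of the fields above:

```python
# Illustrative sketch (not the plugin's Perl): render parsed hcard.yml
# data as a minimal hCard fragment using div/span-style markup.
def render_hcard(data):
    parts = ['<div class="vcard">']
    parts.append('<span class="fn">%s</span>' % data["Name"])
    if "Organisation" in data:
        parts.append('<span class="org">%s</span>' % data["Organisation"])
    if "Email" in data:
        parts.append('<a class="email" href="mailto:%s">%s</a>'
                     % (data["Email"], data["Email"]))
    if "URL" in data:
        parts.append('<a class="url" href="%s">%s</a>'
                     % (data["URL"], data["URL"]))
    parts.append('</div>')
    return '\n'.join(parts)

hcard = render_hcard({
    "Name": "Gavin Carr",
    "Organisation": "Open Fusion",
    "Email": "gavin@openfusion.com.au",
    "URL": "http://www.openfusion.net/",
})
print(hcard)
```

The real plugin also honours the HCard-Class and HCard-Style settings to control the wrapper class and element types, which this sketch omits.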
I'm using hcard here, so if you have microformat support in your browser
(e.g. via the Operator plugin, if you're using firefox) you should be able
to see my hcard on this page.
As usual, it's available in the blosxom sourceforge CVS repository.
Sat 01 Dec 2007
Tags: blosxom, blosxom plugins, microformats
I've been messing around recently with some ideas for adding
initial microformats support to blosxom.
Microformats are fragments of html marked up with some standardised
html class names, providing a minimalist method of adding simple
structured data to html pages, primarily for machine parsing (try
out the firefox Operator
plugin to see microformats in action). Some examples of currently
defined microformats are contact details
(hcard), events
(hcalendar), links or bookmarks
(xfolk), geolocation
(geo), etc. See the main
microformats website for more.
With blosxom, one simple approach is to allow microformat attributes
to be defined within story metadata, and either autoappend the
microformat to the story itself, or simply define the microformat in
a variable for explicit inclusion in the story. So for example, if
you wanted to geocode a particular story, you could just add:
Latitude: -33.717770
Longitude: 151.115886
or
meta-latitude: -33.717770
meta-longitude: 151.115886
to your story headers (depending on which metadata plugin you're
using).
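For the geo example above, the fragment the plugin appends (or exposes in a variable) would presumably look something like the following. This is a sketch in Python rather than the plugin's actual Perl, but the class names (geo, latitude, longitude) are the standard geo microformat ones:

```python
# Sketch of the kind of geo microformat fragment a plugin like
# uf_geo_meta might generate from the story metadata above.
def geo_fragment(latitude, longitude):
    return ('<span class="geo">'
            '<span class="latitude">%s</span>, '
            '<span class="longitude">%s</span>'
            '</span>' % (latitude, longitude))

print(geo_fragment(-33.717770, 151.115886))
```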
This is the initial approach I've taken, allowing you to attach
microformats to stories with a minimum of fuss. So far, the
following blosxom microformat plugins are available:
uf_adr_meta - adr support
uf_geo_meta - geo support
uf_hcalendar_meta - hcalendar support
uf_hcard_meta - hcard support
uf_xfolk_meta - xfolk support
Note that these are beta quality, and may well contain bugs.
Feedback especially welcome from microformat gurus. There's also
a lot of other ways we might like to handle or integrate
microformats - this is just a useful first step.
All plugins are available in the blosxom sourceforge CVS repository.
Thu 08 Nov 2007
Tags: web, advertising
Great quote from Dave Winer on
Why Google launched OpenSocial:
Advertising is on its way to being obsolete. Facebook is just another
step along the path. Advertising will get more and more targeted until
it disappears, because perfectly targeted advertising is just
information.
I don't see Facebook seriously threatening Google, as Dave does, but that
quote is a classic, and long-term (surely!) spot on the money.
I'm much more in agreement with Tim O'Reilly's
critique of OpenSocial.
Somehow OpenSocial seems all backwards from the company whose maps openness
help make mashups a whole new class of application.
It smells a lot like OpenSocial was hastily conceived just to get
something out the door in advance of the Facebook announcements today,
by Googlers who don't quite grok the power of the open juice.
Thu 08 Nov 2007
Tags: blosxom, blosxom plugins
I've been using tags here right from the beginning, because they
provide a much more powerful and flexible way of categorising
content than simpler, more static categories do. This seems to be
pretty much the consensus in the blogosphere now.
I started off using xtaran's tagging plugin. The one thing I
didn't like about tagging was that it takes a fairly brute-force
approach to tag filtering - it basically just iterates over the
set of candidate files and opens up and checks them all, every time.
So I started messing around with adding some kind of tag cache to
tagging, so that the set of tags on a post could be captured
when a post was created or updated, and thereafter tag filtering
could be done by just referencing the tag cache. That means that
if you've got 100 posts, your tag query only needs to read one file -
the tag cache - instead of all 100 posts.
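The tag cache idea itself is simple enough to sketch. The real plugins are Perl; this Python sketch (with invented file format and names) just shows the core trade: one small index file mapping posts to tags, so a tag query never has to open the posts themselves:

```python
# Minimal model of a tag cache: one index file mapping each post to
# its tags, so a tag query reads one file instead of every post.
# File format and function names are illustrative only.
import json
import os
import tempfile

def save_cache(path, post_tags):
    with open(path, "w") as f:
        json.dump(post_tags, f)

def posts_with_tag(path, tag):
    with open(path) as f:
        post_tags = json.load(f)
    return sorted(p for p, tags in post_tags.items() if tag in tags)

cache = os.path.join(tempfile.mkdtemp(), "tagcache.json")
save_cache(cache, {
    "blog/tags.txt": ["blosxom", "blosxom plugins"],
    "blog/mysql.txt": ["sysadmin", "mysql", "nagios"],
})
print(posts_with_tag(cache, "mysql"))
```

The remaining work, which is what the tags plugin actually does, is keeping the cache up to date as posts are created and modified.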
En route I realised I really wanted a more modular approach to
tagging than the tagging plugin uses as well. For instance, I'm
experimenting with various kinds of
data blogging, like using dedicated
special-purpose blogs for recording bookmarks or books or photos.
And for some of these blogs I wanted to be able to do basic tagging
and querying, but didn't need fancier interface stuff like
tagclouds.
So I've ended up creating a small set of blosxom plugins that
provide most of the functionality of tagging using a tag cache.
The plugins are:
tags - provides base tag functionality, including checking
for new and updated stories, maintaining the tag cache, and
providing tag-based filtering. Requires my metamail plugin.
storytags - provides a story-level $storytags::taglist
variable containing a formatted list of tags, suitable for
inclusion in a story template. Requires tags.
tagcloud - provides a $tagcloud::cloud variable containing
a formatted wikipedia:"tagcloud" of tags and counts, suitable
for inclusion in a template somewhere. Requires a hashref of
tags and counts, which tags provides, but should be able to
work with other plugins.
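The heart of a tagcloud plugin is just scaling each tag's count to a font size. A sketch of that calculation (in Python, with invented URL scheme and size range; the plugin's $tagcloud::cloud presumably holds markup along these lines):

```python
# Sketch of the core tagcloud calculation: scale each tag count
# linearly into a font-size range, then emit weighted links.
def tagcloud(counts, min_px=10, max_px=24):
    lo, hi = min(counts.values()), max(counts.values())
    spread = (hi - lo) or 1  # avoid divide-by-zero when all counts equal
    items = []
    for tag in sorted(counts):
        size = min_px + (counts[tag] - lo) * (max_px - min_px) // spread
        items.append('<a href="/tags/%s" style="font-size:%dpx">%s</a>'
                     % (tag, size, tag))
    return ' '.join(items)

print(tagcloud({"blosxom": 8, "mysql": 1, "web": 4}))
```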
Note that these plugins are typically less featureful than the
tagging plugin, and that tagging includes functionality
(related tag functionality, in particular) not provided by any
of these plugins. So tagging is still probably a good choice
for many people. Nice to have choice, though, ain't it?
All plugins are available in the blosxom sourceforge CVS repository.
Wed 31 Oct 2007
Tags: sysadmin, mysql, nagios
Here's an interesting one: one of my clients has been seeing mysql
db connections from one of their app servers (and only one) being
periodically locked out, with the following error message reported
when attempting to connect:
Host _hostname_ is blocked because of many connection errors.
Unblock with 'mysqladmin flush-hosts'.
There's no indication in any of the database logs of anything
untoward - no connection errors at all, in fact. As a workaround,
we've bumped up the max_connect_errors setting on the mysql
instance, and haven't really had time to dig much further.
Till tonight, when I decided to figure out what was going on.
Turns out there's plenty of other people seeing this too, although
MySQL seems to be in "it's not a bug, it's a feature" mode - see
this bug report.
That thread helped clue me in, however. It turns out that mysql counts
any TCP connection to the server, even ones that never attempt to
log in, as a connection error, but it only logs the ones that do
attempt to log in. So there's a nice class of silent errors - and
in fact, a nice DOS attack against MySQL - if you make bare TCP
connections to mysql without logging in.
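The blocking behaviour described above can be modelled roughly as follows. This is a simplified illustration, not MySQL's actual implementation: a per-host error counter that bare TCP probes silently increment, a block once it reaches max_connect_errors, and a flush-hosts that resets everything:

```python
# Simplified model of MySQL's per-host connection-error blocking
# (illustrative only, not MySQL source).
class HostBlocker:
    def __init__(self, max_connect_errors=10):
        self.max_connect_errors = max_connect_errors
        self.errors = {}

    def record_error(self, host):
        # bare TCP connects count here too, and are never logged
        self.errors[host] = self.errors.get(host, 0) + 1

    def is_blocked(self, host):
        return self.errors.get(host, 0) >= self.max_connect_errors

    def flush_hosts(self):
        # what 'mysqladmin flush-hosts' does to the counters
        self.errors.clear()

mysql = HostBlocker(max_connect_errors=10)
for _ in range(10):                      # e.g. ten monitoring probes
    mysql.record_error("appserver1")
print(mysql.is_blocked("appserver1"))    # host is now locked out
mysql.flush_hosts()
print(mysql.is_blocked("appserver1"))
```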
We, being clever and careful, were doing exactly that with
nagios - making a simple TCP connection to
port 3306 - in order to simply and cheaply check that mysql was
listening on that port. Hmmmm.
Easy enough to remedy, of course, once you figure out what's going
on. I even had a nice nagios plugin lying around to let me do more
sophisticated database checks - check_db_query_rowcount -
so I just had to replace the simple check_tcp check with that, and all
is right with the world.
But it's a plain and simple bug, and MySQL needs to get it fixed.
Personally I think a simple tcp connection should not count as a
connection error at all without a login attempt (assuming it's not
left half-open etc.). Alternatively, if you do want to count that
as a connection error, fine, but at least log some kind of error so
the issue is discoverable and can be handled by someone.
Silent errors are deadly.
Tue 30 Oct 2007
Tags: blosxom, blosxom plugins
I've tried all three of the current blosxom 'entries' plugins on my
blog in the last few months: entries_cache_meta, entries_cache, and
the original entries_index.
entries_cache_meta is pretty nice, but it doesn't work in static mode,
and its method of capturing the modification date as metadata didn't quite
work how I wanted. I had similar problems with the entries_cache metadata
features, and its caching and reindexing didn't seem to work reliably for me.
entries_index is the simplest of the three, and offers no caching features,
but it's pretty dense code, and didn't offer the killer feature I was after:
the ability to easily update and maintain the publication timestamps it was
indexing.
Thus entries_timestamp is born.
entries_timestamp is based on Rael's entries_index, and like it offers
no caching facilities (at least currently). Its main point of difference
from entries_index is that it maintains two sets of creation
timestamps for each post - a machine-friendly one (a gmtime timestamp)
and a human-friendly one (a timestamp string).
In normal use blosxom just uses the machine timestamps and works just like
entries_index, using the timestamps to order posts for presentation.
entries_timestamp also allows modification of the human timestamps,
however, so if you want to tweak the publish date you just modify
the timestamp string in the entries_timestamp.index metadata file, and
then tell blosxom to update its machine timestamps from the human ones by
passing a reindex=<$entries_timestamp::reindex_password> argument to
blosxom i.e.
http://www.domain.com/blosxom.cgi?reindex=mypassword
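The reindex step itself boils down to re-deriving the machine (epoch) timestamp from the human-readable string. A sketch of that conversion in Python (the plugin is Perl, and its actual timestamp format may differ from the one assumed here):

```python
# Sketch of the human-to-machine reindex step: parse the edited
# human-readable timestamp string back into a gmtime epoch value.
# The format string is an assumption, not the plugin's actual format.
import calendar
import time

def reindex(human_ts, fmt="%a %d %b %Y %H:%M:%S"):
    return calendar.timegm(time.strptime(human_ts, fmt))

machine = reindex("Tue 30 Oct 2007 21:30:00")
print(machine)
```

Editing the string and re-running this over the index is all that's needed to move a post's publication date.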
It also supports migration from an entries_index index file, explicit
symlink support (so you don't have to update timestamps on symlinked
posts explicitly), and has been mostly rewritten to be (hopefully)
easier to read and maintain.
It's available in the
blosxom sourceforge CVS
repository.
Mon 22 Oct 2007
Tags: blosxom, tips
The blosxom SourceForge developers
have been foolish enough to give me a commit bit, so I've been doing
some work lately on better separating code and configuration, primarily
with a view to making blosxom easier to package.
One of the consequences of these changes is that it's now reasonably
easy to run multiple blosxom instances on the same host from a single
blosxom.cgi executable.
A typical apache blosxom.conf for cgi use might look something like this:
SetEnv BLOSXOM_CONFIG_DIR /etc/blosxom
Alias /blog /usr/share/blosxom/cgi
<Directory /usr/share/blosxom/cgi>
    DirectoryIndex blosxom.cgi
    RewriteEngine on
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^(.*)$ /blog/blosxom.cgi/$1 [L,QSA]
    <FilesMatch "\.cgi$">
        Options +ExecCGI
    </FilesMatch>
</Directory>
The only slightly tricky thing here is the use of mod_rewrite to allow
the blosxom.cgi part to be omitted, so we can use URLs like:
http://www.example.com/blog/foo/bar
instead of:
http://www.example.com/blog/blosxom.cgi/foo/bar
That's nice, but completely optional.
The SetEnv BLOSXOM_CONFIG_DIR setting is the important bit for running
multiple instances - it allows you to specify the location blosxom should
look in for all its configuration settings. If we can set this multiple
times to different paths we get multiple blosxom instances quite
straightforwardly.
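The pattern on the blosxom side is just "read the config directory from the environment, with a sensible default". blosxom is Perl, but the same logic sketched in Python (default path assumed) looks like:

```python
# Sketch of env-driven config selection: each Apache SetEnv scope
# supplies its own BLOSXOM_CONFIG_DIR, so one executable can serve
# several independent instances. Default path is an assumption.
import os

def config_dir(environ=None, default="/etc/blosxom"):
    env = os.environ if environ is None else environ
    return env.get("BLOSXOM_CONFIG_DIR", default)

print(config_dir({"BLOSXOM_CONFIG_DIR": "/home/gavin/blog/config"}))
print(config_dir({}))
```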
With separate virtual hosts this is easy - just put the SetEnv
BLOSXOM_CONFIG_DIR inside your virtual host declaration and it gets
scoped properly and everything just works e.g.
<VirtualHost *:80>
    ServerName bookmarks.example.com
    DocumentRoot /usr/share/blosxom/cgi
    AddHandler cgi-script .cgi
    SetEnv BLOSXOM_CONFIG_DIR '/home/gavin/bloglets/bookmarks/config'
    <Directory /usr/share/blosxom/cgi>
        DirectoryIndex blosxom.cgi
        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ /blosxom.cgi/$1 [L,QSA]
        <FilesMatch "\.cgi$">
            Options +ExecCGI
        </FilesMatch>
    </Directory>
</VirtualHost>
It's not quite that easy if you want two instances on the same virtual
host e.g. /blog for your blog proper, and /bookmarks for your link blog.
You don't want the SetEnv to be global anymore, and you can't put it
inside the <Directory> section either, since you can't repeat that
section for a single directory.
One solution - the hack - would be to just make another copy of your
blosxom.cgi somewhere else, and use that to give you two separate
directory sections.
The better solution, though, is to use an additional <Location>
section for each of your instances. The only extra wrinkle with this is
if you're using those optional rewrite rules, in which case you have to
duplicate and further qualify them as well, since the rewrite rule itself
is namespaced i.e.
Alias /blog /usr/share/blosxom/cgi
Alias /bookmarks /usr/share/blosxom/cgi
<Directory /usr/share/blosxom/cgi>
    DirectoryIndex blosxom.cgi
    RewriteEngine on
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_URI} ^/blog
    RewriteRule ^(.*)$ /blog/blosxom.cgi/$1 [L,QSA]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_URI} ^/bookmarks
    RewriteRule ^(.*)$ /bookmarks/blosxom.cgi/$1 [L,QSA]
    <FilesMatch "\.cgi$">
        Options +ExecCGI
    </FilesMatch>
</Directory>
<Location /blog>
    SetEnv BLOSXOM_CONFIG_DIR /home/gavin/blog/config
</Location>
<Location /bookmarks>
    SetEnv BLOSXOM_CONFIG_DIR /home/gavin/bloglets/bookmarks/config
</Location>
Because one blosxom just ain't enough ...
Thu 04 Oct 2007
Tags: web, rant, hardware
Today I've been reminded that while the web revolution continues
apace - witness Web 2.0, ajax, mashups, RESTful web services, etc. -
much of the web hasn't yet made it to Web 1.0, let alone Web 2.0.
Take ecommerce.
One of this afternoon's tasks was this: order some graphics cards
for a batch of workstations. We had a pretty good idea of the kind
of cards we wanted - PCIe Nvidia 8600GT-based cards. The unusual
twist today was this: ideally we wanted ones that would only take
up a single PCIe slot, so we could use them okay even if the
neighbouring slot was filled i.e.
select * from graphics_cards
where chipset_vendor = 'nvidia'
and chipset = '8600GT'
order by width asc;
or something. Note that we don't even really care much about price.
We just need some retailer to expose the data on their cards in a
useful sortable fashion, and they would get our order.
In practice, this is Mission Impossible.
Mostly, merchants will just allow me to drill down to their
graphics cards page and browse the gazillion cards they have
available. If I'm lucky, I'll be able to get a view that only
includes Nvidia PCIe cards. If I'm very lucky, I might even be
able to drill down to only 8000-series cards, or even 8600GTs.
Some merchants also allow ordering on certain columns, which
is actually pretty useful when you're buying on price. But none
seem to expose RAM or clockspeeds in list view, let alone card
dimensions.
And even when I manually drill down to the cards themselves,
very few have much useful information there. I did find two
sites that actually quoted the physical dimensions for some
cards, but in both cases the numbers they were quoting
seemed bogus.
Okay, so how about we try and figure it out from the
manufacturer's websites?
This turns out to be Mission Impossible II. The manufacturers'
websites are all controlled by their marketing departments and
largely consist of flash demos and brochureware. Even finding
a particular card is an impressive feat, even if you have the
merchant's approximation of its name. And when you do, they often
have less information than the retailers'. If there is any
significant data available for a card, it's usually in a pdf
datasheet or a manual, rather than available on a webpage.
Arrrghh!
So here are a few free suggestions for all and sundry, born
out of today's frustration.
For manufacturers:
use part numbers - all products need a unique identifier,
like books have an ISBN. That means I don't have to try and
guess whether your 'SoFast HyperFlapdoodle 8600GT' is the
same thing as the random mislabel the merchant put on it.
provide a standard url for getting to a product page given
your part number. I know, that's pretty revolutionary, but
maybe take a few tips from google instead of just listening
to your marketing department e.g.
http://www.supervidio.com.tw/?q=sofast-hf-8600gt-256
keep old product pages around, since people don't just buy
your latest and greatest, and products take a long time to
clear in some parts of the world
include some data on your product pages, rather than
just your brochureware. Put it way down the bottom of the
page so your marketing people don't complain as much. For
bonus points, mark it up with semantic microformat-type
classes to make parsing easier.
alternatively, provide dedicated data product pages, perhaps
in xml, optimised for machine use rather than marketing.
They don't even have to be visible via browse paths, just
available via search urls given product ids.
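To make the "data on your product pages, marked up for parsing" suggestion concrete, here's a hypothetical sketch. The class names and the helper are entirely invented for illustration - there's no standard product-spec microformat assumed here, just the idea of machine-parseable class names on plain spec data:

```python
# Hypothetical sketch: product spec data rendered with invented
# microformat-style class names so aggregators can parse it.
def product_data_html(part_no, specs):
    rows = ['<dl class="product-data" id="%s">' % part_no]
    for key in sorted(specs):
        rows.append('  <dt class="spec-name">%s</dt>'
                    '<dd class="spec-value">%s</dd>' % (key, specs[key]))
    rows.append('</dl>')
    return '\n'.join(rows)

print(product_data_html("sofast-hf-8600gt-256",
                        {"chipset": "8600GT",
                         "ram": "256MB",
                         "width-mm": "111"}))
```

A block like this tucked at the bottom of a product page costs nothing visually, and would have answered today's "how wide is this card" question in one HTTP request.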
For merchants:
include manufacturer's part numbers, even if you want to
use your own as the primary key. It's good to let your
customers get additional information from the manufacturer,
of course.
provide links at least to the manufacturer's home page, and
ideally to individual product pages
invest in your web interface, particularly in terms of
filtering results. If you have 5 items that are going to
meet my requirements, I want to be able to filter down to
exactly and only those five, instead of having to hunt for
them among 50. Price is usually an important determiner of
shopping decisions, of course, but if I have two merchants
with similar pricing, one of whom lets me find exactly the
target set I was interested in, guess who I'm going to buy
from?
do provide as much data as possible as conveniently as
possible for shopping aggregators, particularly product
information and stock levels. People will build useful
interfaces on top of your data if you let them, and will
send traffic your way for free. Pricing is important, but
it's only one piece of the equation.
simple and useful beats pretty and painful - in particular,
don't use frames, since they break lots of standard web
magic like bookmarking and back buttons; don't do things
like magic javascript links that don't work in standard
browser fashion; and don't open content in new windows for
me - I can do that myself
actively solicit feedback from your customers - very few
people will give you feedback unless you make it very clear
you welcome and appreciate it, and when you get it, take it
seriously
End of rant.
So tell me, are there any clueful manufacturers and merchants
out there? I don't like just hurling brickbats ...
Tue 02 Oct 2007
Tags: web, firefox, greasemonkey, top list
I've been meaning to document the set of firefox extensions I'm currently
using, partly to share with others, partly so they're easy to find and install
when I start using a new machine, and partly to track the way my usage changes
over time. Here's the current list:
Obligatory Extensions
Greasemonkey - the
fantastic firefox user script manager, allowing
client-side javascript scripts to totally transform any web page before it
gets to you. For me, this is firefox's "killer feature" (and see below for
the user scripts I recommend).
Flash Block - disable
flash and shockwave content from running automatically, adding placeholders
to allow running manually if desired (plus per-site whitelists, etc.)
AdBlock Plus - block
ad images via a right-click menu option
Chris Pederick's
Web Developer Toolbar - a
fantastic collection of tools for web developers
Joe Hewitt's Firebug -
the premier firefox web debugging tool - its html and css inspection
features are especially cool
Daniel Lindkvist's
Add Bookmark Here
extension, adding a menu item to bookmark toolbar dropdowns to add the
current page directly in the right location
Optional Extensions
Michael Kaply's Operator -
a very nice microformats toolbar, for discovering
the shiny new microformats embedded in web pages, and providing operations you
can perform on them
Zotero - a very
interesting extension to help capture and organise research information,
including webpages, notes, citations, and bibliographic information
Colorful Tabs - tabs +
eye candy - mmmmm!
Chris Pederick's
User Agent Switcher -
for braindead websites that only think they need IE
ForecastFox - nice
weather forecast widgets in your firefox status bar (and not just
US-centric)
Greasemonkey User Scripts
So what am I missing here?
Updates:
Since this post, I've added the following to my must-have list:
Tony Murray's Print Hint -
helps you find print stylesheets and/or printer-friendly versions of pages
the Style Sheet Chooser II
extension, which extends firefox's standard alternate stylesheet selection
functionality
Ron Beck's JSView
extension, allowing you to view external javascript and css styles used
by a page
The It's All Text
extension, allowing textareas to be edited using the external editor of
your choice.
The Live HTTP Headers
plugin - invaluable for times when you need to see exactly what is going on
between your browser and the server
Gareth Hunt's Modify Headers
plugin, for setting arbitrary HTTP headers for web development
Sebastian Tschan's Autofill Forms
extension - amazingly useful for autofilling forms quickly and efficiently
Wed 12 Sep 2007
Tags: web, web2.0, lifebits, microformats, data blogging
Following on from my earlier data blogging post, and along the
lines of Jon Udell's
lifebits scenarios,
here's the first in a series of posts exploring some ideas about how data blogging
might be interesting in today's Web 2.0 world.
Easy one first: Reviews.
When I write a review on my blog of a book I've read or a movie I've seen,
it should be trivial to syndicate this as a review to multiple relevant
websites. My book reviews might go to Amazon (who else does good user
book review aggregation out there?), movies reviews to IMDB, Yahoo Movies,
Netflix, etc.
I'm already writing prose, so I should just be able to mark it up as a
microformats:"hReview", add some tags to control syndication,
and have that content available via one or more RSS or Atom feeds.
I should then just be able to go to my Amazon account, give it the url
for the feed I want it to monitor for reviews, and - voila! - instant
user-driven content syndication.
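A minimal hReview fragment of the kind that would travel in such a feed might look like the output below. The class names (hreview, item, fn, rating, description) are from the hReview microformat; the helper itself is just an illustrative sketch:

```python
# Sketch of a minimal hReview fragment suitable for syndication.
# Class names follow the hReview microformat; helper is illustrative.
def hreview(item_name, rating, text):
    return ('<div class="hreview">'
            '<span class="item"><span class="fn">%s</span></span> '
            '<span class="rating">%d</span> out of 5. '
            '<div class="description">%s</div>'
            '</div>' % (item_name, rating, text))

print(hreview("Orpheus Lost", 5, "A great read."))
```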
This is a win-win, isn't it? Amazon gets to use my review on its website,
but I get to retain a lot more control in the process:
I can author content using my choice of tools instead of filling out a
textarea on the Amazon website
I can easily syndicate content to multiple sites, and/or syndicate
content selectively as well
I can make updates and corrections according to my policies, rather than
Amazon's (Amazon would of course still be able to decide what to do with
such updates)
I should be able to revoke access to my content to specific websites
if they do stupid stuff
I and my readers get the benefit of retaining and aggregating my content
on my blog, and all your standard blogging magic (comments, trackbacks,
tagclouds, etc.) still apply
It would probably also be nice if Amazon included a link back to the
review on my blog which would drive additional traffic my way, and allow
interested Amazon users to follow any further conversations (comments and
trackbacks etc.) that have happened there.
So are there any sites out there already doing this?
Mon 10 Sep 2007
Tags: blosxom, blosxom plugins
I've just released my first blosxom
plugin into the wild. 'mason_blocks' is a blosxom plugin implementing
simple conditional and comment blocks using
HTML::Mason-style syntax, for use in
blosxom flavour and template files.
Examples:
# Mason-style conditionals
% if ($pagetype::pagetype ne 'story') {
<a href="$permalink::story#comments">Comments ($feedback::count)</a>
% } else {
<a href="$permalink::story#leave_comment">Leave a comment</a>
% }
# Mason-style comments
%# Only show a comments section if there are comments
% if ($feedback::count > 0) {
$feedback::comments
% }
# Mason-style block comments
<%doc>
Everything between the doc tags is treated as a comment
and produces no output.
</%doc>
I wrote it when I couldn't get the interpolate_fancy plugin to work properly
with nested tags, and because I wanted proper perl conditions and if-else
support. mason_blocks provides all the conditional functionality of
interpolate_fancy, but not other stuff like 'actions'.
mason_blocks is available from the
blosxom plugins CVS repository.
Thu 06 Sep 2007
Tags: web, web2.0, lifebits, microformats, data blogging, inverted web
I've been spending some time thinking about
a couple of
intriguing posts
by Jon Udell, in which he discusses a hypothetical "lifebits" service
which would host his currently scattered "digital assets" and syndicate
them out to various services.
Jon's partly interested in the storage and persistence guarantees such a
service could offer, but I find myself most intrigued by the way in which
he inverts the current web model, applying the publish-and-subscribe
pull-model of the blogging world to traditional upload/push environments
like Flickr or MySpace, email, and even health records.
The basic idea is that instead of creating your data in some online app,
or uploading your data to some Web 2.0 service, you instead create it in
your own space - blog it, if you like - and then syndicate it to the
service you want to share it with. You retain control and authority over
your content, you get to syndicate it to multiple services instead of
having it tied to just one, and you still get the nice aggregation and
wikipedia:"folksonomy" effects from the social networks you're part of.
I think it's a fascinating idea.
One way to think of this is as a kind of "data blogging", where we blog
not ideas for consumption by human readers, but structured data of
various kinds for consumption by upstream applications and services.
Data blogs act as drivers of applications and transactions, rather than
of conversations.
The syndication piece is presumably pretty well covered via RSS and Atom.
We really just need to define some standard data formats between the
producers - that's us, remember! - and the consumers - which are the
applications and services - and we've got most of the necessary components
ready to go.
Some of the specialised XML vocabularies out there are presumably useful
on the data formats side. But perhaps the most interesting possibility is
the new swag of microformats currently being
put to use in adding structured data to web pages. If we can blog
people and organisations,
events,
bookmarks,
map points,
tags, and
social networks, we've got halfway
decent coverage of a lot of the Web 2.0 landscape.
Anyone else interested in inverting the web?
Thu 30 Aug 2007
Tags: linux, hardware, tips
I was building a shiny new CentOS 5.0 server today with a very nice
3ware 9650SE raid card.
Problem #1: the RedHat anaconda installer kernel doesn't support these cards
yet, so no hard drives were detected.
If you are dealing with a clueful
Linux vendor like 3ware, though, you can just go to their comprehensive
driver download page,
grab the driver you need for your kernel, drop the files onto a
floppy disk, and boot with a 'dd' (for 'driverdisk') kernel parameter
i.e. type 'linux dd' at your boot prompt.
Problem #2: no floppy disks! So the choices were: actually exit the office
and go and buy a floppy disk, or (since this was a kickstart anyway) figure
out how to build and use a network driver image. Hmmm ...
Turns out the dd kernel parameter supports networked images out of the box.
You just specify dd=http://..., dd=ftp://..., or dd=nfs://..., giving it
the path to your driver image. So the only missing piece was putting the
3ware drivers onto a suitable disk image. I ended up doing the following:
# Decide what name you'll give to your image e.g.
DRIVER=3ware-c5-x86_64
mkdir /tmp/$DRIVER
cd /tmp/$DRIVER
# download your driver from wherever and save as $DRIVER.zip (or whatever)
# e.g. wget -O $DRIVER.zip http://www.3ware.com/KB/article.aspx?id=15080
# though this doesn't work with 3ware, as you need to agree to their
# licence agreement
# unpack your archive (assume zip here)
mkdir files
unzip -d files $DRIVER.zip
# download a suitable base image from somewhere
wget -O $DRIVER.img \
http://ftp.usf.edu/pub/freedos/files/distributions/1.0/fdboot.img
# mount your dos image
mkdir mnt
sudo mount $DRIVER.img mnt -o loop,rw
sudo cp files/* mnt
ls mnt
sudo umount mnt
Then you can just copy your $DRIVER.img somewhere web- or ftp- or
nfs-accessible, and give it the appropriate url with your dd kernel
parameter e.g.
dd=http://web/pub/3ware/3ware-c5-x86_64.img
Alternatives: here's an
interesting post
about how to do this with USB keys as well, but I didn't end up going that way.
Mon 27 Aug 2007
Tags: books
Finished Janette Turner Hospital's latest novel, Orpheus Lost, on
Saturday, and am still thinking about it two days later. It's a great read -
an imaginative reworking of the Orpheus myth against a backdrop of current-day
terrorism. It has lovely quirky characters, beautiful but highly readable prose,
and a story that is told from multiple points of view, but manages to stay
coherent and whole.
And like her earlier Due Preparations for the Plague, rather than
slowing down towards the end, Orpheus Lost seems to actually accelerate,
finishing with an emotional punch that left me satisfied but also slightly
shell-shocked. So it's a compelling read, but it's not light material, with
happiness and tragedy portrayed as flipsides of the same love, particularly in a
complicated and neurotic world. Orpheus was a tragedy, after all.
Highly recommended.
Tue 21 Aug 2007
Tags: linux, hardware
We've been chasing a problem recently with trying to use dual
nvidia 8000-series cards with four displays. 7000-series cards
work just fine (we're mostly using 7900GSs), but with 8000-series
cards (mostly 8600GTs) we're seeing an intermittent problem with
one of the displays (and only one) going badly 'fuzzy'. It's not
a hardware problem, because it persists across different displays,
cables, and cards.
Turns out it's an nvidia driver issue, and present on the latest
100.14.11 linux drivers. Lonni from nvidia got back to us saying:
This is a known bug ... it is specific to G8x GPUs ... The
issue is still being investigated, and there is not currently
a resolution timeframe.
So this is a heads-up for anyone trying to run dual 8000-series
cards on linux and seeing this. And props to nvidia for getting
back to us really quickly and acknowledging the problem. Hopefully
there's a fix soonish so we can put these lovely cards to use.
Sun 19 Aug 2007
Tags: blosxom, web
I've been trying out a few of my
blosxom wishlist
ideas over the last few days, and have now got an experimental version of
blosxom I'm calling
blosphemy (Gr. to speak against, to speak evil of).
It supports the following features over current blosxom:
loads the main blosxom config from an external config file
(e.g. blosxom.conf) rather than from inline in blosxom.cgi.
This is similar to what is currently done in the debian blosxom
package.
supports loading the list of plugins to use from an external config
file (e.g. plugins.conf) rather than deriving it by walking the
plugin directory (but falls back to current behaviour for backwards
compatibility).
uses standard perl @INC to load blosxom plugins, instead of hardcoding
the blosxom plugin directory. This allows blosxom to support CPAN
blosxom plugins as well as stock $plugin_dir ones.
uses a multi-value $plugin_path instead of a single value $plugin_dir
to search for plugins. The intention with this is to allow, for
instance, standard plugins to reside in /var/www/blosxom/plugins,
but to allow the user to add their own or modify existing ones by
copying them to (say) $HOME/blosxom/plugins.
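The $plugin_path lookup described above is a first-match-wins search. blosphemy does this in Perl via @INC; the same idea sketched in Python (directory names invented for the demo):

```python
# Sketch of multi-directory plugin lookup: search each directory in
# $plugin_path order, first match wins, so a copy in the user's own
# plugin directory shadows the system-wide one.
import os
import tempfile

def find_plugin(name, plugin_path):
    for directory in plugin_path:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate
    return None

# demo: the same plugin installed system-wide and in a user directory
base = tempfile.mkdtemp()
sys_dir = os.path.join(base, "plugins")
user_dir = os.path.join(base, "home-plugins")
for d in (sys_dir, user_dir):
    os.makedirs(d)
    open(os.path.join(d, "tags"), "w").close()
print(find_plugin("tags", [user_dir, sys_dir]))  # user copy wins
```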
These changes isolate blosxom configuration from the cgi and plugin
directories (configs can live in e.g. $HOME/blosxom/config for tarball/home
directory installs, or /etc/blosxom for package installs), allowing nice
clean upgrades. I've been upgrading using RPMs while developing, and the
RPM upgrades are now working really smoothly.
If anyone would like to try it out, releases are at:
I've tried to keep the changes fairly minimalist and clean, so that
some or all of them can be migrated upstream easily if desired. They
should also be pretty much fully backward compatible with the current
blosxom.
Comments and feedback welcome.
Thu 16 Aug 2007
Tags: blosxom
I'm currently working on packaging blosxom as
an RPM for deployment on a few different RedHat/CentOS servers I administer.
With most small-medium software packages this is pretty straightforward - write
a simple spec file, double-check the INSTALL instructions, and replicate those
in the spec file. It's rather more challenging with blosxom.
blosxom's roots are in supporting extremely minimalist environments. It's
reasonably straightforward
to setup blosxom on a 1990s shared web hosting account
with only the most basic CGI support, and only FTP access to the server for your
files.
Blosxom itself is a single perl CGI script, which you configure by setting a few
variables at the top of the script. Blosxom plugins, which are used to implement
lots of the functionality in blosxom, are likewise little perl modules configured
(if necessary) at the beginning of each plugin. In a shared web hosting
environment you'd configure blosxom itself and your plugins the way you'd like,
and then upload them to your server home directory via FTP.
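For reference, the configurable section at the top of stock blosxom.cgi looks roughly like this - the variable names are from the standard blosxom distribution, but the values here are illustrative:

```perl
# Configurable variables at the top of blosxom.cgi (illustrative values)
$blog_title       = "My Weblog";
$blog_description = "Yet another blosxom blog";
$blog_language    = "en";
$datadir          = "/home/me/blosxom/data";     # where story files live
$url              = "";                          # leave blank to autodetect
$depth            = 0;                           # 0 = infinite dir depth
$num_entries      = 40;                          # entries to show per page
$plugin_dir       = "/home/me/blosxom/plugins";  # where plugins are found
```

Everything after that section is code, which is exactly the mixing-configuration-and-code issue discussed below.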
Fast forward to 2007, where virtual linux servers with full root access are
available for US$15/month, with prices continually dropping. In this kind of
environment the whole mixing-configuration-and-code thing becomes much more of a
liability than a feature.
There's a debian package
available, so the debian guys have made a start at wrestling with some of
these issues - they patch blosxom to allow it to use an external config, for
example. I've done something similar, and am realising I'm going to want to
support the same kind of thing with plugins.
So here's my current wishlist for a blosxom RPM:
- the ability to install one or more blosxom packages and get blosxom itself,
a good set of blosxom plugins, and a good set of blosxom flavours and
themes all ready to go
- a proper separation between config and code, so that I can upgrade any of
my blosxom packages without having to worry about losing config settings
- an easy way of configuring exactly which plugins and themes are used for my
blog
- most standard modern blog features available more-or-less out-of-the-box
(e.g. comments and spam protection, support for sending trackback pings,
support for receiving trackbacks and pingbacks, OpenID support,
support for microformats, etc.)
- multi-user and multi-blog support, so that an installed blosxom can be
used for multiple blogs
- mod_perl support, for scalability
That's my current wishlist anyway. I'm still trying to figure out whether
others in the blosxom development community are interested in any of this
stuff too, or whether they all just still use FTP. ;-)
Thu 09 Aug 2007
Tags: linux, hardware
We've been having a bit of trouble with these motherboards under linux
recently. The two S4/S5 variants are basically identical
except that the S5 has two Gbit ethernet ports where the S4 has only one,
and the S5 has a couple of extra SATA connections - we've been using both
variants. We chose these boards primarily because we wanted AM2 boards
with multiple PCIe 16x slots to use with multiple displays.
We're running on the latest BIOS, and have tested various kernels from 2.6.9
up to about 2.6.19 so far - all exhibit the same problems. Note
that we think these are much more likely to be BIOS bugs than kernel
problems.
The problems we're seeing are:
- kernel panics on boot due to apic problems - we can work around these by
specifying a 'noapic' kernel parameter at boot time
- problems with IRQ 7 - we get the following message in the messages log
soon after boot:
kernel: irq 7: nobody cared (try booting with the "irqpoll" option)
kernel: [<c044aacb>] __report_bad_irq+0x2b/0x69
kernel: [<c044acb8>] note_interrupt+0x1af/0x1e7
kernel: [<c05700ba>] usb_hcd_irq+0x23/0x50
kernel: [<c044a2ff>] handle_IRQ_event+0x23/0x49
kernel: [<c044a3d8>] __do_IRQ+0xb3/0xe8
kernel: [<c04063f4>] do_IRQ+0x93/0xae
kernel: [<c040492e>] common_interrupt+0x1a/0x20
kernel: [<c0402b98>] default_idle+0x0/0x59
kernel: [<c0402bc9>] default_idle+0x31/0x59
kernel: [<c0402c90>] cpu_idle+0x9f/0xb9
kernel: =======================
kernel: handlers:
kernel: [<c0570097>] (usb_hcd_irq+0x0/0x50)
kernel: Disabling IRQ #7
after which IRQ 7 is disabled and whatever device is using IRQ 7 seems to
fail intermittently or just behave strangely (and "irqpoll" would just
cause hangs early in the boot process).
This second problem has been pretty annoying, and hard to diagnose because it
would affect different devices on different machines depending on what bios
settings were on and what slots devices were in. I spent a lot of time chasing
weird nvidia video card hangs which we were blaming on the binary nvidia
kernel module, which turned out to be this interrupt problem.
Similarly, if it was the sound device that happened to get that interrupt,
you'd just get choppy or garbled sound out of your sound device, when other
machines would be working flawlessly.
So after much pain, we've finally come up with a workaround: it turns
out that IRQ 7 is the traditional LPT port interrupt - if you ensure the
parallel port is turned on in the bios (we were religiously turning it off as
unused!) it will grab IRQ 7 for itself and all your IRQ problems just go away.
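One quick way to confirm the workaround has taken effect (my addition, not from the original troubleshooting) is to check who owns IRQ 7 in /proc/interrupts - after the BIOS change, the line should list parport0 rather than the flaky device:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Return the /proc/interrupts line for a given IRQ number, or undef.
sub irq_line {
    my ($irq, @lines) = @_;
    my ($match) = grep { /^\s*\Q$irq\E:\s/ } @lines;
    return $match;
}

# After enabling the parallel port in the BIOS, IRQ 7 should show up
# owned by parport0 instead of being disabled out from under a device.
open my $fh, '<', '/proc/interrupts'
    or die "can't read /proc/interrupts: $!";
my $line = irq_line(7, <$fh>);
print defined $line ? $line : "IRQ 7 not listed\n";
```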
Hope that saves someone else some pain ...
Wed 08 Aug 2007
Tags: blosxom
I'm using blosxom for this blog. I'd played with
it a while ago and really liked its simplicity and ethos, but never got it
working quite the way I wanted. When returning to the blogging world recently
I went and looked at a few of the popular alternatives -
Typo, Wordpress,
Movable Type - and didn't find anything that
really grabbed me.
Yes, all three are slicker, more modern, and have a lot more functionality
out-of-the-box than blosxom, as far as I can tell. So why am I back with
blosxom?
For me, blosxom has two killer features:
- you can write your blog entries offline, using a real editor, and using
nice sane plain-text formats like
Markdown
- it is simple and pluggable, by design, which makes it immensely hackable
In fact, blosxom isn't really full-blown blogging software at all, especially
as it's presently packaged and distributed. Instead it's a lightweight pluggable
toolkit with which to build a blog. If you're after something that Just Works,
it's probably a bad choice; if you're after something you can play with and
bend to your will, it's really nice.
Blosxom's also suffered a bit from not having had much development love over
the last few years. It'd be nice to see blosxom get a bit more support for the
modern blogging world - I'll have to see if I can help stir things up a bit ...
Mon 23 Jul 2007
Tags: general
Ok, so after much resistance I'm finally clambering aboard the
juggernaut and starting a blog.
Beware the gibberish to follow.