Sunday, January 29, 2006

NetCIDR 0.1 Released

networking :: python :: programming

As part of some on-going project work, NetCIDR was written to allow for
a clean and logical approach when analyzing captures. Primarily, its
use is for determining whether a given IP address is in a given
netblock or a collection of netblocks. Here are some quick example
usages from the wiki and doctests:

>>> CIDR('')
>>> CIDR('10.4.1.x')
>>> CIDR('10.*.*.*')
>>> CIDR('')
>>> CIDR('').getHostCount()

Here's how you create a collection of networks:

>>> net_cidr = CIDR('')
>>> corp_cidr = CIDR('')
>>> vpn_cidr = CIDR('')
>>> mynets = Networks([net_cidr, corp_cidr])
>>> mynets.append(vpn_cidr)

And now, you can check for the presence of hosts in various networks
and/or collections of networks:

>>> home_router = CIDR('')
>>> laptop1 = CIDR('')
>>> webserver = CIDR('')
>>> laptop2 = CIDR('')
>>> laptop3 = CIDR('')
>>> google = CIDR('')

>>> home_router in mynets
>>> laptop1 in mynets
>>> webserver in mynets
>>> laptop2 in mynets
>>> laptop3 in mynets
>>> google in mynets
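The CIDR examples above lost their arguments in the archive, but the membership idea maps directly onto python's standard-library ipaddress module (which post-dates NetCIDR). A minimal sketch with made-up netblocks:

```python
import ipaddress

# Hypothetical netblocks standing in for the elided examples above
net_cidr = ipaddress.ip_network('10.0.0.0/8')
corp_cidr = ipaddress.ip_network('192.168.1.0/24')
mynets = [net_cidr, corp_cidr]

def in_networks(addr, networks):
    """True if the address falls inside any of the given netblocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in network for network in networks)

print(in_networks('10.4.1.2', mynets))    # True
print(in_networks('172.16.0.1', mynets))  # False

# Analogous to getHostCount() for a single block
print(ipaddress.ip_network('192.168.1.0/24').num_addresses)  # 256
```

Note that num_addresses counts every address in the block, including the network and broadcast addresses, so it may not match NetCIDR's getHostCount() exactly.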

Saturday, January 28, 2006

CoyoteMonitoring 3

networking :: management :: software

For the past year, I have resisted speculation and development on
CoyMon 3. But in the past few months, many pieces have begun falling
into place -- each removing objections and concerns, and some providing
a clear path of development where before there was none.

CoyoteMonitoring is a free, open-source network management tool.
Specifically, it glues together many open-source and
difficult-to-configure subsystems for obtaining and viewing the
data used in network analysis. It is comparable to operating system
distributions in this regard: it is a NetFlow management distro. The
Department of Veterans Affairs has actually been using CoyMon 2 for
about a year now, with astounding results. We are thrilled with the
success it has been for them, with the tremendous time and money it has
saved them.

The development of CoyMon 2 was sponsored in part by the VA, and due to
the timeline, much of the code was specific to their needs. However,
because of the time it would take to audit the code, CoyMon 2 has never
been publicly released. I have had many email conversations with WAN
managers and network engineers who are very distressed at this. They too
have found a sad lack in the open source world when it comes to useful,
easy-to-use tools for querying, viewing, and analyzing NetFlow data.
Because of this continued interest, input, and moral support from these
hard-working individuals, I have been considering how this need could
best be addressed and met.

To be honest, my biggest technical concern has been with the current
NetFlow tools. The perl code that people have depended upon for this
(since the late 90s!) is crufty, hard to maintain, relies on
nearly-unnecessary and out-of-date perl modules, and, as a group, these
tools were not designed with maximal extensibility in mind. They have been genuine
gems in the field -- none of us could have done what we have done for
the past several years without this wonderful contribution. The
combination of these (and their dependencies!) with all the other
pieces that require configuration and glue is a difficult thing to
provide. It took an ENORMOUS amount of time and energy to get CoyMon 2
into a place where it could be deployed efficiently. CoyMon 2 runs like
a tank, though. It's a real champ and a testament to all that hard work
and organization.

However, it is time to fix the system at its roots. Enter CoyMon 3.

The biggest stumbling block to forward movement on CoyMon 3 has been
the available APIs for the tools upon which we depend, one of the
most important being RRDTool. The python bindings for RRDTool are
archaic and decidedly non-OOP. Attempting to design a system that is
robust, effortless to maintain, and easily adapts to new features, but
that has composite components with difficult APIs, is a recipe for
frustration, delay, and ultimately, non-delivery. The first step
towards addressing this problem came with the recent release of PyRRD,
a fully OOP wrapper for RRDTool in python that removes this pain
from the equation.

The next biggest hurdle was the old perl code called FlowScan. After
much discussion and analysis, a clean and elegant way to provide this
functionality was arrived at, and we will soon have a new product to
show for it, freely available for download. CoyoteMonitoring will
depend heavily on this piece of software and related libraries. We are
making excellent progress on this, but ultimately, the problem to solve
here is one of modularity and configuration. How do you provide an
easy, non-programmatic method of extensibility for any number of
potential rules? We've got some great ideas, but only the natural
selection of actual use to prove the best approach. This is our current
focus.
With the advances that have been made in the past several months, we
are not only comfortable making an announcement about a new version of
CoyoteMonitoring, we are down-right confident :-) For the interested,
here are more detailed points on CoyMon 3:

  • CoyMon 3 will be developed completely independently of any
    third-parties or businesses. This will mean slower development times,
    but cleaner, more easily managed code. With the absence of sensitive
    customer code and/or configurations, you will see regularly available
    releases.
  • Supporting libraries will all have extensive unit tests for each
    piece of functionality.

  • CoyMon 3 will have a 100% true component architecture.

  • CoyMon 3 will continue to use flow-tools and will make full use of
    the python bindings for fast processing of NetFlow captures.

  • CoyMon 3 will make use of Zope 3 technology for through-the-web
    management of such resources as collectors, protocols, queries,
    resulting data and graphics, as well as arbitrary content that is
    important to end users.

  • CoyMon 3 will make use of the Twisted Python framework for all of
    its specialized networking needs, including (but not limited to) CMOB/X
    (a recently developed "object broker" for distributed collectors
    managed at a central location).

  • CoyMon 3 will have completely re-factored supporting libraries,
    written and maintained by the CoyoteMonitoring community. All the old
    Perl code will be replaced with light-weight, easy-to-maintain python
    libraries and scripts. These will include NetCIDR, PyRRD, and others.

  • CoyMon 3 will have consistent configuration across all its
    composite applications and it will make use of the famous, the useful,
    and the ever-concise Apache-style configuration files.

  • And last, but not least, CoyMon 3 will abide by Chapter 4 of Eric
    Raymond's "The Cathedral and the Bazaar": Release Early, Release
    Often. CoyMon 3 development snapshots will be available for download.

Project spaces to keep your eyes on:





Wednesday, January 25, 2006

The Future of Content Management

So this morning I read a great post on Paul Everitt's blog. He gives a quick run-down of a comprehensive paper by Seth Gottlieb, whom I hadn't heard of but have since become very impressed by. Paul provides an excellent quote from the paper, and then makes this comment himself:
Enterprise CMS, and most WCM, is organized like a mainframe. Everything
is in one system and you bring the users to that system. Federated
content management might be a growth opportunity in the market.
I proceeded to post a comment on his blog to the effect of my support and long-standing enthusiasm for this paradigm shift. At the risk of harping on this theme, this is really what's behind the posts Dinosaurs and Mammals and, more recently, The King is dead! Long live the Kinglets!. I also said that I am patiently waiting for the day when the z3 libraries are available as a content management programming framework, when I will be able to easily integrate z3 content management components into my twisted applications.

While checking out Seth's blog, I came across several awesome posts where he discusses much the same thing from different perspectives.
My focus is more general than just content management, but let's face it: most of what we need to do on the network these days revolves around content. Between Paul and Seth, I feel very validated for the past two years of exploration and code I have been developing, and am encouraged to continue along these
lines :-)

PyRRD 0.1

python :: software

Great news! We've released the first version of PyRRD -- you can get it
now.

There are currently examples of RRD-generation with python code up on
the project site. Be sure to check them out and send me your
questions so I can improve the docs and the code. My goal is to make
RRD easy to use for python programmers... but I can only do so much
with just my own mind and perspective ;-)

PyRRD currently has all of the features we need to do the development
we are focused on. TODOs have been stubbed out in the code where some of
the lesser-known and -used features of RRD haven't yet been implemented
in this OOP API. As development on the next version
kicks into gear, I will be adding more of the obscure stuff to PyRRD.


Sunday, January 22, 2006


Keeping alive the whimsy that had me pick webXcreta back up and get it working, I have returned to another old project: PyRRD. I had started work on this a couple years ago while adding functionality to CoyoteMonitoring, but had to put it on hold due to budget constraints.

In a nutshell, PyRRD is an object-oriented interface (wrapper) for the RRDTool python bindings (rrdtool). Where using rrdtool you might see something like this:

rrdtool.graph(path,
'--imgformat', 'PNG',
'--width', '540',
'--height', '100',
'--start', "-%i" % YEAR,
'--end', "-1",
'--vertical-label', 'Downloads/Day',
'--title', 'Annual downloads',
'--lower-limit', '0')

with PyRRD, you have this:

def1 = graph.GraphDefinition(vname='downloads', rrdfile='downloads.rrd',
ds_name='downloads', cdef='AVERAGE')

area1 = graph.GraphArea(value=def1.vname, color='#990033',
legend='Downloads', stack=True)

g = graph.Graph(path, imgformat='PNG', width=540, height=100,
start="-%i" % YEAR, end=-1, vertical_label='Downloads/Day',
title='Annual downloads', lower_limit=0)

Optionally, you can use attributes (in combination with, or to the exclusion of, parameters):

def1 = graph.GraphDefinition()
def1.vname = 'downloads'
def1.rrdfile = 'downloads.rrd'
def1.ds_name = 'downloads'
def1.cdef = 'AVERAGE'

And there are aliases for the classes so that you may use the more familiar names from RRDTool:

def1 = graph.DEF(vname='downloads', rrdfile='downloads.rrd',
ds_name='downloads', cdef='AVERAGE')
area = graph.AREA(value=def1.vname, color='#990033', stack=True)

Not only is this object approach more aesthetically pleasing to me, but the interface is much easier to manipulate programmatically. That's insanely important to me because of how I use RRDTool in other projects. I do a great deal of data manipulation and graph representation, and using the regular RRDTool python bindings is simply a show-stopper.

Another neat thing about this wrapper is that the classes use __repr__() to present the complete value of the object as an RRDTool-ready string. This means that you are not limited to using the python bindings, but can also freely and easily interact with the command line tools, configuration values in files, etc.
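As a toy illustration of that __repr__() approach (a sketch only, not PyRRD's actual class), using RRDTool's DEF:vname=rrdfile:ds-name:CF syntax:

```python
class DEF:
    """Sketch of a graph element whose repr() is an RRDTool-ready
    argument string (DEF:vname=rrdfile:ds-name:CF)."""
    def __init__(self, vname, rrdfile, ds_name, cdef='AVERAGE'):
        self.vname = vname
        self.rrdfile = rrdfile
        self.ds_name = ds_name
        self.cdef = cdef

    def __repr__(self):
        return 'DEF:%s=%s:%s:%s' % (
            self.vname, self.rrdfile, self.ds_name, self.cdef)

d = DEF('downloads', 'downloads.rrd', 'downloads')
print(repr(d))  # DEF:downloads=downloads.rrd:downloads:AVERAGE
```

Because the object stringifies to valid RRDTool syntax, the same instance can feed the python bindings, a shell command line, or a configuration file.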

When I've got a first release ready to go, I'll push it up to CheeseShop and post a notice here on the blog.

Update: PyRRD is now available in Debian and Ubuntu.

ctypes on Mac OS X 10.4 with gcc 4.0

python :: programming :: macosx

I had some trouble installing ctypes on a 10.4 server tonight, and
found a little post that gave a patch for it. Unfortunately, the patch
was against the 2004 version of the file, and I had to manually edit
the current 4000+ line file.

Apparently, this fix was in CVS in 2004 but hasn't made it into a
released distribution yet.
For the search engines, I will post the error message (where xxx are
your line numbers):

source/_ctypes.c:xxx: error: static declaration of 'Pointer_Type'
follows non-static declaration
source/ctypes.h:xxx: error: previous declaration of 'Pointer_Type' was
here
And here's a copy of my diff against _ctypes.c in ctypes 0.9.6:

--- third-party/ctypes-0.9.6/source/_ctypes.c (revision 53)
+++ third-party/ctypes-0.9.6/source/_ctypes.c (working copy)
@@ -2449,7 +2449,7 @@
"sO|O" - function name, dll object (with an integer handle)
"is|O" - vtable index, method name, creates callable calling COM vtbl
-static PyObject *
+PyObject *
CFuncPtr_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
CFuncPtrObject *self;
@@ -3880,7 +3880,7 @@
(inquiry)Pointer_nonzero, /* nb_nonzero */

-static PyTypeObject Pointer_Type = {
+PyTypeObject Pointer_Type = {

Friday, January 20, 2006

The Self-Referencing Cat

software :: python :: webxcreta

I'm thinking of modifying the title- and text-generating code that
webXcreta uses for its posts to
Schrodinger's Box.
Here's what may be cooking:

  • Get all the content from the top 500 blogs like I currently am, and
    keep using the current weighting algorithm

  • But, also get all the content for every post to Schrodinger's Box

  • Have the content from Schrodinger's Box contribute towards a
    significant fraction of the material for each new post

  • And give posts with more comments greater weights

  • And! Include the comments from those posts as part of the source
    material from which new posts are created (but weighted considerably
    less than full posts)

Thus making Schrodinger's Box self-referential... simulating theme and
continuity. I think this would be very interesting, allowing
Schrodinger's Box to "gain momentum" as it were.

Update: Schrödinger's Böx is now occasionally generating posts based solely on the content of previous posts.

Monday, January 16, 2006


natural language :: semantics :: science fiction

I am delighted with the silliness issuing forth from
Schrodinger's Box; it
continues to amuse, and I am always eager to see what the next post will
be. However, as I alluded to at the end of my previous post, there is
more potential here than satisfying the high-minded call of absurdity.

For example, imagine yourself at work, trapped in a project with a
bunch of tired people just wanting to go home. You've been tasked by
the boss with "thinking outside the box" and you're making no progress.
Everyone is grumpy, and creativity is harder and harder to come by.
Simply gather a bunch of text for the topic at hand, feed it into
webXcreta, and voila -- instant brainstorming material. Proceed to discuss the
sentences that webXcreta pops out, easily dig yourself out of the rut,
get off the hook with the boss, and run home to pursue the many other
distractions of a mundane existence.

This could also be used effectively to alleviate writer's block. You're
writing a historical romance in ancient Gaul, with a plot that
stretches over Teutonic tribes in Western Europe, through to Rome and
into Asia Minor? Well, gather some source material, feed it into
webXcreta, and bing-bango! Innumerable ideas and sources of inspiration
to get that ink flowing again.

So, yes -- there is some practicality involved. On a slightly more
radical note, I'm exploring a possible use of the weighting
"algorithm" I used in webXcreta for representation of minorities. The
square of the log could be a very effective means of ensuring that no
voice is completely suppressed, that no majority ever gains absolute
control. I'd like to hear what people with political science
backgrounds have to say about that sort of thing.
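To make that concrete, here is a tiny sketch of log-squared weighting with invented counts; a 10,000-to-1 majority in raw counts ends up with far from all of the voice:

```python
import math

def weight(count):
    # log-squared damping: big sources grow slowly, small ones never vanish
    return math.log(count + 1) ** 2

sources = {'majority': 100000, 'minority': 10}
total = sum(weight(c) for c in sources.values())
for name, count in sources.items():
    print('%s: %.1f%% of the voice' % (name, 100.0 * weight(count) / total))
```

With raw counts the minority would get about 0.01% of the voice; under log-squared weighting it keeps a few percent.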

And then there's the potential role for this to be used in assessing
public opinion, popular trends, and predictive analysis. Now we're
getting to the subject of this blog entry... psychohistory ;-) I'm talking
Isaac Asimov and science-fiction: the psychohistory of Hari Seldon in
the famous Foundation series. Or at least a part of it. webXcreta makes
use of the
Natural Language Processing Toolkit which is a great
tool, but we'd need something more to make this science-fiction a
reality. We'd need a "semantic processing toolkit". I imagine that the
corpora for such a toolkit would not be tagged with parts of speech, etc.,
but rather semantic tags. Perhaps domain-specific tags for contextual
meaning. Then, instead of a grammatical average, you would take a
semantic average. Now *that* would be REALLY interesting...

Now playing:
Yes - And You And I (live version)

Sunday, January 15, 2006

Crazy Truth

software :: python :: webxcreta

I recently emailed a friend about the webXcreta project, and to give
him some background on it, I described Eigenradio:

A few years ago, I came across one of the most bizarre software
projects I had seen on the net: a guy at MIT had created an internet
radio station that "consumed" Top-40 songs being played live on other
internet radio stations, pushed them through massive statistical
computation and custom software, and then spit out "new" music from
these computations. The music generated was a sonic, statistical
average of what was popular and getting air-play. Most people found the
resulting "music" disturbing. I, however, couldn't stop laughing --
literally. You could actually "hear" the statistics of the thing, if
you listened carefully. It was stunning. And hilarious.

That bit about hearing the statistics is key. And it perfectly
describes how it felt to listen to that music. It felt like an
epiphany. One of Eigenradio's taglines was brilliant:

What you hear on Eigenradio is the best of the New Music, distilled and
de-correlated. One song on Eigenradio is worth at least twenty songs on
old radio.

Now, with webXcreta, I find myself in a similar situation: I read the
posts, and I convulse with laughter. It's not the content so much that
makes it so irrepressibly funny to me, but rather what's under the
covers. To give you a quick sense of my humor, I laugh at truth. Truth
is endlessly amusing to me. I remember reading James Gleick's "Chaos"
book in high school, and laughing for about 15 minutes after I read his
description of Sierpinski Gaskets: they have zero area and infinite
points. It wasn't so much like a light going off in my mind, as a
bomb. The truth of it turned my mind upside down, and I had new eyes.
It was an ecstatic experience -- thus the laughter.

There's something similar happening in my mind when reading these
Schrodinger's Box
blog posts. There's something hidden under the covers that is the
true source of my laughter; the quote above from the Eigenradio site
points to the answer. To explore this further, consider this: what if
you absolutely had to read 1000 pages of text in less than a minute?
What would you do? What's the cheapest alternative to a
massive/complete data set? A random sampling! Read a shotgun-spread of
statistically sampled textual data from those 1000 pages.
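The shotgun-spread idea is cheap to sketch in python (the "pages" here are generated filler text):

```python
import random

def shotgun_sample(text, k=5, seed=None):
    """A cheap stand-in for reading everything: a random spread of
    sentences drawn from a large body of text."""
    sentences = [s.strip() for s in text.split('.') if s.strip()]
    rng = random.Random(seed)
    return rng.sample(sentences, min(k, len(sentences)))

# Filler standing in for 1000 pages of real text
pages = '. '.join('This is sentence %d of the 1000 pages' % i
                  for i in range(200)) + '.'
for line in shotgun_sample(pages, k=3, seed=42):
    print(line)
```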

And that's it. That's what's making me laugh. When I read
Schrodinger's Box,
part of my mind is actually aware that it is seeing parts of thousands
of data sources simultaneously, and the truth of that inspires a
quasi-ecstatic hilarity. Crazy truth.

In my experience (and, arguably that of the entire world of science),
Crazy Truth is a gold-mine for discovery. It will be interesting to see
how this code evolves and what strange uses it gets put to...

Now playing:
Yes - Close to the Edge

Saturday, January 14, 2006

Madness Ensues...

weblogs :: programming :: python :: natural language

Almost exactly a year ago, I proposed to do something akin to
eigenradio but with text. It was in an earlier blog entry.
Well, tonight I decided to take a break from this week's regular coding
tasks, and pick webXcreta back up. I started from scratch again, and it took off... I couldn't stop programming :-) This was a LOT of fun. The
Natural Language Toolkit

has been greatly simplified since I used it last year. It's a dream -- everything I'd always hoped it would be.

The first entry in this new "random blog" is forth-coming. I expect it
to be ready sometime Saturday evening. I've scraped data from
bloglines' top-100 and have performed all the grammatical analysis on
it. Titles are analyzed and given weights. In fact, the title code is
ready for the first run and has produced the first title. Wanna see it?
;-) The first blog entry will be titled "A Show Of Writer -
...". All these entries of insanity will be published here:

Schrodinger's Box.

The last bit I have to do is the weighting for sentences and
constructing full entries (it's a little more complicated than
generating the titles).
I may end up enjoying this more than "real" blogging...

Update: The first one is up! Check it out here


Now playing:
Depeche Mode - The Sweetest Condition

Friday, January 13, 2006



I've been catching up on some of my blog reading today, and came across
a hilarious and insightful post by fzZzy. For those of you that don't
know about mods, you are seriously missing out. Some of my favorite
music is in mod format (.mod, .s3m, .xm, etc.). In particular, this bit
from fzZzy's post is a perfect summation of all that I have felt about
mods and the mod scene:

There is a psychic difference between the product that one creates for
love versus money. When someone really, truly loves something and does
their absolute best, it bleeds out between the lines, oozes all over
your hands and sticks to you and won't wash off.

Perfectly said. He goes on to talk about programming as his passion and
the joy of doing what you love. Bingo. Exactly how I feel, up to and
including the programming bit ;-)

For anyone that's curious, the song he mentions first is
But don't stop there ;-) Check out that guy's site and his music.
Phenomenal. In particular, "blastoff"... it reminds me of a mix between
Yes and Rush (both awesome) and then with classic game music thrown in
(even *more* awesome!).

For those interested, here are my mod links. In particular,
The Mod Archive is where I have gotten
almost all my mod music. Many good memories of downloading mods during
lunch while at USi, back in the day...


All Hail the Wayback Machine!

weblogs :: internet

Where backups (or lack thereof), hardware, and all other means had
failed me, the
Wayback Machine
has come through -- in shining colors. I had lost a tremendous amount
of data in a hard drive crash last year, and in the long term, the most
painful personal part of that was the loss of my weblog entries from
the summer until my move to Colorado.

Immediately after the hard drive crash, I tried the Wayback Machine's
archives in a panic. Apparently, not enough time had passed for the
posts to be archived, because nothing was there. Well, they are there
now, and I am *delighted*! I am now adding these lost entries to my
blog republisher on blogspot.

All Hail the Wayback Machine!

Thursday, January 12, 2006


music :: software :: python :: twisted

On 13 Nov 2005, I posted to my blog about a "new project" but haven't
followed up with any further info. Well, I've made some phenomenal
progress lately, and I can now take a break to chat about it. The new
project is
SonicVault, devised and
commissioned by
Neosynapse (a long-time
partner company with mine). As the stub on the wiki says, it is

a networked, encrypted MP3 library that supports fingerprinting
technology (to be used in determining file organization, detecting
duplicates, etc.)

Originally, we were going to do full audio fingerprinting (using
spectral densities, Mel Cepstrum Coefficients, etc.), but found that
there is an open source version of the fingerprinting technology
called TRM, and since TRM is used with
MusicBrainz, we decided to go
with that. I have since come across another open source solution called
fdmf, and am exploring it for
future use.

We encountered some issues with the python bindings provided by
MusicBrainz, so we ended up writing our own client using the wonderful
RDFLib python package. We put up some
sample code on the
RDFLib wiki that you can check
out. (More on RDF later...)

We now have a functional upload server that encrypts all
communications, an XML-RPC API for querying stored files, and a
built-in MusicBrainz client exposed to the XML-RPC API that provides
users with a facility for easily modifying MP3 metadata/ID3 tags.
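As a rough illustration of querying over XML-RPC (the method name and data here are invented, not SonicVault's actual API), python's standard library is enough to sketch both ends:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Hypothetical query method standing in for the real stored-file queries
def query_files(artist):
    library = {'yes': ['close_to_the_edge.mp3']}
    return library.get(artist.lower(), [])

# Serve on an ephemeral local port in a background thread
server = SimpleXMLRPCServer(('127.0.0.1', 0), logRequests=False)
server.register_function(query_files)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy('http://127.0.0.1:%d' % port)
result = client.query_files('Yes')
print(result)  # ['close_to_the_edge.mp3']
server.shutdown()
```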

Monday, January 09, 2006

Silverton, CO


I've just emailed my family about an old, small town in Colorado I have
just fallen in love with: Silverton. I have yet to visit it, but I've
been doing tons of research this week, and Silverton has made it to the
top of a short list.

I've leeched a bunch of photos off the net and put them here, in case
anyone is curious:

I am very excited about the possibility of heading further into the
Heart of the Mountains, higher up in elevation, and deeper into the
snow :-) I love small towns, low population counts, and pristine
forests -- all of which Silverton provides. I've heard a bunch of good
things about Durango (1 hour to the south), so I'm hoping similar
things are true of its neighbors.

I've reached out to a couple of the Silverton community members and
have started hearing back from them. As dreams begin to take shape in
plan form, I will post updates. Be sure to check it out on google maps.
Wish me luck!

Friday, January 06, 2006

Automated Blogging: Advogato & Blogger

blogging :: atom :: python

Well, today I decided I might as well start posting my blogs to blogger in addition to my Advogato posts. A couple friends of mine
are on blogger, so I imagine I'll start posting more comments there,
and I thought it was a shame for the lack of content in my blogger
account (I was using it as a placeholder).

I figured it would take no extra work to update it, really -- my
Advogato posts are all via email. All I needed to do was write a script
to post to blogger too. Well, working with the blogger Atom API was a
little frustrating. It's very sensitive, only accepting ASCII-encoded
strings, needs certain characters escaped, and returns HTTP 500 with no
clue as to what's going on. Anyway, I finally got everything tweaked
just right, and figured I would post the python code for doing it, as
many of the examples on the net are old.

The code below is copied and pasted from a script of mine, with most of
the guts ripped out (I parse from email and do authentication, to make
sure I'm the only one sending blog emails). There might be something
left out, but this should be enough to get you started.

import time
import base64
import urllib2
from xml.sax.saxutils import escape

username = 'your_blogger_name'
password = 'your_pass'
bloggerid = 'some_number'
# NOTE: the endpoint URL was elided in the archive; Blogger's Atom
# endpoint at the time looked something like this:
blogger = "https://www.blogger.com/atom/%s" % bloggerid
title = 'blog entry title'
blog_entry = 'your blog entry'

created = time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime())
body = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<entry xmlns="http://purl.org/atom/ns#">
<title mode="escaped" type="text/plain">%s</title>
<issued>%s</issued>
<generator url="http://your_url/atom">Your Client</generator>
<content type="application/xhtml+xml">
<div xmlns="http://www.w3.org/1999/xhtml">%s</div>
</content>
</entry>""" % (escape(title), created, escape(blog_entry))

req = urllib2.Request(url=blogger, data=body)
base64string = base64.encodestring('%s:%s' % (username, password)).strip()
req.add_header("Authorization", "Basic %s" % base64string)
req.add_header("Content-type", "application/atom+xml")
f = urllib2.urlopen(req)

Now playing:
Yoko Kanno & The Seatbelts - Gotta Knock A Little Harder (Friday,  9:59pm MST)

Test Post via Email

atom :: python :: blogging

This is a test post from email -> postfix -> python -> atom

Let's see how it goes...

Now playing:
Yoko Kanno & The Seatbelts - Call Me Call Me (Friday,  2:57am MST)

Thursday, January 05, 2006

Future Blogging Changes

blogging :: software :: python :: twisted

Advogato has served me well over the past year, and I will continue to use it for blogging. I email a special account on one of my servers with my blog entries, and it gets published within seconds on Advogato. I have nightly cronjobs that run XML-RPC queries against Advogato and save backups (in addition to the originals which are in my "Sent" folder). This makes me very happy, as I suffered a traumatic server crash over a year ago, wherein I lost over a year's worth of world-changing blog posts.

However, I believe I am ready to take blogging back into my own hands. I managed to rescue some of the data on the drive, including early posts from 2003 and part of 2004. These are currently archived here, but I would like to unify all old posts, Advogato posts, and future posts in a single framework, look-and-feel.

As a result, I have finally broken down and started writing a collection of user-land networked applications:

  • ImageDB, a small web micro-app which provides a backend upon which one could write an image gallery application like this one; uses image-tagging;
  • AccountsDB, which will provide distributed user and group management, authentication, as well as fine-grained permissioning with ACLs (access control lists) and ACL management for applications;
  • MessageDB, which will store and index RFC 2822-compliant messages, will allow users to tag them as in ImageDB, and will provide many convenient means for users and parent applications to add messages;
  • WeBlog, which will be composed of the micro-applications MessageDB, AccountsDB, and ImageDB and provide a nice user interface for blog posts, blogs, and groups of blogs (community spaces).
This is just a handful of the full suite we are working on here.

All of these applications are small, networked, light-weight, and run independently of heavy software like Apache, MySQL, PostgreSQL, etc. They are python- and twisted-powered. We have plans of implementing peering (using Q2Q/Vertex) so that, for example, a trusted network of ImageDB installations could provide images for each other (like a private, distributed image service). There are so many possible ways that this could go, it's just insanely exciting. We're talking new paradigm here, folks.

So, stay tuned. This blog-space and my account on Advogato will eventually become re-publishers of ElectricDuncan running on WeBlog ;-) And then I'll have two more backups of these techno-babble ramblings...