Monday, December 10, 2007

How ThoughtMesh distributes and connects

The response to ThoughtMesh has been great so far, so I thought I would flesh out a bit more of the underlying architecture that makes ThoughtMesh tick. It's a model that might be useful for other applications in distributed publication.


As this workflow indicates, authors do not need to archive their essays on the ThoughtMesh server for the mesh to find them. While the ThoughtMesh server, operated by USC's Vectors program, does store the contents of the essays, what matters more is that it stores the metadata associated with them. In this case, the critical metadata are tags for essay excerpts and URLs that point to those excerpts.
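To make that concrete, here is a rough sketch--in TypeScript, with invented names and URLs--of the kind of record the lookup server might keep for each excerpt: just the author's tags plus a pointer back to wherever the essay actually lives.

```typescript
// Hypothetical shape of what the ThoughtMesh server stores per excerpt:
// the tags an author assigned plus the URL of the excerpt itself.
// The essay's full text can live anywhere; only the pointer is indexed.
interface ExcerptRecord {
  tags: string[]; // e.g. ["distributed publication", "tagging"]
  url: string;    // where the excerpt actually lives
}

const sample: ExcerptRecord = {
  tags: ["metadata", "tagging"],
  url: "http://example.edu/~author/essay.html#excerpt-2",
};
```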

This means you can upload your meshed essay to your university account or to a free Web host like GeoCities, or even run it off your hard drive--and ThoughtMesh will still connect it to other essays in the mesh on the fly.

To do this, ThoughtMesh requires a form of cross-domain scripting that the browser's same-origin policy normally denies to Ajax. Fortunately, Still Water Research Fellow John Bell contributed a program called Telamon--the father of the hero Ajax in Greek mythology--that cleverly permits metadata from one site to be available behind the scenes to another.
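I won't pretend to know Telamon's internals, but the classic workaround for the same-origin restriction is JSONP-style script injection, sketched below with a hypothetical lookup URL and callback convention rather than Telamon's actual API: the page injects a script tag pointing at the lookup server, which responds with code that calls back into the page with the metadata.

```typescript
// JSONP-style sketch (browser-side). The endpoint, query parameters, and
// callback convention here are made up for illustration.
function fetchRelatedExcerpts(tag: string, onResult: (records: object[]) => void): void {
  const callbackName = "meshCallback_" + Date.now();
  const script = document.createElement("script");

  // Expose a one-off global function for the remote script to call.
  (window as any)[callbackName] = (records: object[]) => {
    delete (window as any)[callbackName];
    script.remove();
    onResult(records);
  };

  // Script tags are exempt from the same-origin policy, so the tag lookup
  // server can hand metadata back to a page hosted anywhere.
  script.src = "http://thoughtmesh.example/lookup?tag=" +
    encodeURIComponent(tag) + "&callback=" + callbackName;
  document.head.appendChild(script);
}

// Usage: ask the mesh which excerpts elsewhere share the tag "tagging".
fetchRelatedExcerpts("tagging", (records) => console.log(records));
```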

ThoughtMesh's tag lookup server avoids the problem of siloed essay repositories because it is less a database than a metadatabase. I believe this architecture--which is mirrored in version-tracking community registries like The Pool--offers a practical approach to distributed publication that solves many of the problems plaguing the rollout of the "Semantic Web," including the potential for unintended or intentional metadata corruption. With a metadatabase, you don't have to worry about newbies botching handwritten metadata tags, and you can build in trust metrics to thwart Viagra salesmen. (Did I mention that a future release of ThoughtMesh will incorporate John Bell's RePoste trust metric?)
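A minimal sketch of what "metadatabase" means here, again with illustrative names and URLs: the central index maps tags to excerpt URLs and nothing else, so a lookup hands back pointers into other people's sites rather than stored copies, and a suspect entry can be down-ranked or dropped without touching anyone's essay.

```typescript
// The metadatabase idea in miniature: tags map to excerpt URLs, and the
// essays themselves stay wherever their authors put them.
const tagIndex = new Map<string, string[]>([
  ["tagging", [
    "http://example.edu/~author/essay.html#excerpt-2",
    "http://another-host.example/essay.html#excerpt-5",
  ]],
]);

// A lookup returns pointers, not content.
function lookup(tag: string): string[] {
  return tagIndex.get(tag) ?? [];
}

console.log(lookup("tagging"));
```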

jon


Sunday, December 09, 2007

XML-Sitemaps helps Google crawl your Web site

John Bell recommended this utility, which aims to reveal more of a "deeply linked" site to Google and other search engines. You can also use it to create a human-readable HTML sitemap for visitors to your Web site.

Apart from making it easier to map the entire structure of a complex site, I wonder whether the tool could be leveraged to expose the "dark Web" of database-driven pages--e.g., pages of the form index.php?id=234, which Google normally can't find.
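For instance, if those pages really are all of the form index.php?id=N and you can enumerate the ids yourself, generating sitemap entries for them is trivial--the sketch below uses a made-up base URL and id list.

```typescript
// Emit a minimal sitemap.xml listing database-driven pages that a crawler
// would otherwise never discover. Base URL and ids are placeholders.
function sitemapEntries(baseUrl: string, ids: number[]): string {
  const urls = ids
    .map((id) => `  <url><loc>${baseUrl}/index.php?id=${id}</loc></url>`)
    .join("\n");
  return '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    urls + "\n</urlset>";
}

console.log(sitemapEntries("http://example.org", [232, 233, 234]));
```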

jon

