I can see clearly now…

Posted by laura on April 27, 2011 under Semantic web, Standards

I’ve been humming the Johnny Nash song to myself ever since reading Karen Coyle’s blog post on Visualizing Linked Data and Ed Summers’ blog post on DOIs as Linked Data. Thanks to them I think I’ve finally conceptualized the “so-what” factor for Linked Data. It’s the mash-ups, stupid! The key to doing something useful with Linked Data is being able to build a web page that pulls together information via various bits of linked data. Pick and choose your information according to your need!

Likening Linked Data applications to mash-ups is probably a bit of an oversimplification. It’s also the pearl diving, stupid! Pearl diving is my term for how a machine could, in theory, traverse from link to link to link in order to mine information. Ed’s example of taking a citation, linking to journal information, then linking to the Library of Congress shows how a piece of code could crawl and trawl. But how wide a net to cast, and how deep to throw it? A bit of programming is in order to mash up Linked Data streams effectively. I read Berners-Lee on Linked Data over and over and over and couldn’t see what the big deal was about creating chains of metadata. The chains are infrastructure. The value is in what you choose to hang on those chains. Ed’s diagrams and Karen’s explanations finally got this through my thick skull. I invite commenters to correct me if my understanding is flawed.
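To make the pearl diving a bit more concrete, here’s a minimal sketch in Python using the rdflib library. This is my illustration, not Ed’s code: the function name, the depth limit, and the starting DOI are all my own assumptions, and a real crawler would need politeness delays, caching, and a much smarter sense of which links are actually worth following.

```python
import rdflib

def pearl_dive(start_uri, depth=2):
    """Follow Linked Data links hop by hop, collecting triples.

    A toy sketch of the pearl-diving idea: dereference a URI,
    keep every URI that shows up as an object, and dive again.
    """
    graph = rdflib.Graph()
    seen = set()
    frontier = [start_uri]
    for _ in range(depth):          # each pass is one hop deeper
        next_frontier = []
        for uri in frontier:
            if uri in seen:
                continue
            seen.add(uri)
            try:
                # Dereference the URI; servers that speak Linked Data
                # hand back RDF via content negotiation.
                graph.parse(uri)
            except Exception:
                continue            # not every link resolves to RDF
            # Queue every URI this resource points at for the next hop.
            for _, _, obj in graph.triples((rdflib.URIRef(uri), None, None)):
                if isinstance(obj, rdflib.URIRef):
                    next_frontier.append(str(obj))
        frontier = next_frontier
    return graph

# e.g. start from a DOI, as in Ed's example (this DOI is hypothetical):
# g = pearl_dive("http://dx.doi.org/10.1000/example", depth=2)
```

The depth parameter is the crude answer to “how deep to throw the net”: every extra hop multiplies the number of URIs to fetch, so some cutoff or relevance test is unavoidable.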

Building a web page from various streams of data isn’t as simple as it seems on the surface. Also, any Linked Data service one might develop wouldn’t be a single web page. It would be some sort of search and retrieval tool which creates results pages on the fly. One has to know where the data stores are and what they contain. One has to have some code, or bot, or what-not which does the search and retrieval and plugs the information into the resulting display. And one has to have an overall vision of what data can be combined to create something which is larger than the sum of its parts. I think this is a tall order for libraries, archives, and museums with small staffs and variable technical resources. We’re used to dealing with structured data, so it’s a small leap to conceive of meddling with the data a bit to expose it in the Linked Data way. It’s a big old pole vault, however, to move from I-know-some-HTML-and-CSS to programming a retrieval system that pulls information from various quarters and presents it in a meaningful interface. That’s where the programming knowledge comes in.
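As a toy example of what that on-the-fly retrieval might look like, here’s a sketch that queries one public Linked Data store (DBpedia’s SPARQL endpoint) and folds the results into a bare-bones HTML page. Everything here, the query, the endpoint choice, the field names, is my own illustrative assumption rather than anyone’s actual service; a real mash-up would pull from several stores and merge the results.

```python
import json
import urllib.parse
import urllib.request

# One public SPARQL endpoint; a real service would juggle several stores.
SPARQL_ENDPOINT = "https://dbpedia.org/sparql"

def sparql_select(query):
    """Run a SELECT query and return the JSON result bindings."""
    url = SPARQL_ENDPOINT + "?" + urllib.parse.urlencode(
        {"query": query, "format": "application/sparql-results+json"})
    with urllib.request.urlopen(url) as response:
        return json.load(response)["results"]["bindings"]

def results_page(subject_uri):
    """Build a results page on the fly from whatever the store knows."""
    rows = sparql_select(f"""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX dbo:  <http://dbpedia.org/ontology/>
        SELECT ?label ?abstract WHERE {{
          <{subject_uri}> rdfs:label ?label ;
                          dbo:abstract ?abstract .
          FILTER (lang(?label) = "en" && lang(?abstract) = "en")
        }}""")
    body = "".join(
        f"<h1>{r['label']['value']}</h1><p>{r['abstract']['value']}</p>"
        for r in rows)
    return f"<html><body>{body}</body></html>"

# Fittingly, ask DBpedia what it knows about Johnny Nash:
print(results_page("http://dbpedia.org/resource/Johnny_Nash"))
```

Even in this stripped-down form, the division of labor shows: somebody has to know the store exists, somebody has to write the query and the glue, and somebody has to decide that label-plus-abstract is what the end-user actually needs.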

I’ve struggled for a long time trying to figure out where to define role boundaries between metadata librarians and programmers. My nascent understanding of Linked Data services has led me to a rough take: metadata librarians create Linked Data or update legacy metadata to Linked Data, while programmers create or implement the search & retrieval and interface tools. Somewhere in between is the systems analyst role. The analyst figures out which bits of linked data would work well together to make a service that meets customer needs. Librarians and programmers probably share the systems analyst role. Things get fuzzy when the library/archive/museum is a small operation or a one-man band. We’re very fortunate to have awesome programmers here. Together we can take our knowledge of our primary end-user and create a useful (and hopefully well-used) service with the Linked Data we’re creating via our faculty names project.
