Another a-ha moment

Posted by laura on May 2, 2011 under Semantic web, Standards | Be the First to Comment

Another librarian has seen the Linked Data light. Mita Williams, the New Jack Librarian, writes about gaining a new appreciation for LOD at the recent Great Lakes THATCamp. Her take-away seems similar to my understanding: librarians already know how to create Linked Data. We need to see Linked Data applied in new contexts in order to comprehend the utility of exposing the data. The tricky bit, IMHO, is that creating applications to use the data requires a SPARQL end point. These SPARQL end points aren't geared for humans; they are a "machine-friendly interface towards a knowledge base."
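
For the curious, here is roughly what talking to one of those machine-friendly interfaces looks like from code. This is only a sketch, assuming Python and the SPARQLWrapper library, with DBpedia's public endpoint and a made-up query standing in for whatever endpoint and data a real project would use:

```python
# A sketch of querying a SPARQL endpoint programmatically.
# DBpedia's public endpoint and this particular query are illustrative only.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
      <http://dbpedia.org/resource/California_Institute_of_Technology> rdfs:label ?label .
      FILTER (lang(?label) = "en")
    }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# The endpoint hands back structured results, not a web page.
for binding in results["results"]["bindings"]:
    print(binding["label"]["value"])
```

The endpoint answers structured queries with structured results; something else entirely has to turn those results into a page a human would actually want to read.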

I think the machine application layer of Linked Data is where librarians hit a barrier when getting involved with Linked Open Data (LOD). I don't have the first clue how to set up a SPARQL end point. My technical expertise isn't there, and I'm sure there are a lot of people in the same boat (CODE4LIBers notwithstanding). Most of the stuff I've read about getting libraries more involved in LOD has focused on explaining how RDF is expressed in subject-predicate-object syntax and then urging libraries to get their metadata transformed into RDF. I've seen precious little plain-English instruction on building an app with Linked Data. I have seen great demos of nifty things done by people in library-land. I'll give a shout out here to John Mark Ockerbloom and his use of id.loc.gov to enhance the Online Books Page. But he has a PhD in computer science. How do the rest of us get there?
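
For what it's worth, the subject-predicate-object part looks less scary in code than in most of the documentation. A minimal sketch with Python's rdflib, where the URI and the FOAF properties are placeholders for whatever vocabulary a real project would use:

```python
# A couple of made-up triples: subject, predicate, object.
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import FOAF

g = Graph()
person = URIRef("http://example.org/people/jane-smith")          # subject (a placeholder URI)
g.add((person, FOAF.name, Literal("Jane Smith")))                # predicate foaf:name, object "Jane Smith"
g.add((person, FOAF.homepage, URIRef("http://example.org/")))    # another statement about the same subject

for s, p, o in g:     # every statement really is just a three-part tuple
    print(s, p, o)
```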

Personally, I'm working with the fine folks here to get our metadata into a ready-to-use Linked Data format. And I'm plowing through the jargon-laden documentation to teach myself the next steps. Jon Voss, LOD-LAM summit organizer, has posted a reading list to help and is soliciting contributions. The first title I'm delving into is Heath & Bizer's Linked Data: Evolving the Web into a Global Data Space, which has a free HTML version available. They include a groovy little diagram which outlines the steps in the process of "getting there." I'm heartened to see that our first step (getting the data ready) matches the first step in the diagram.

I can see clearly now…

Posted by laura on April 27, 2011 under Semantic web, Standards | Be the First to Comment

I've been humming the Johnny Nash song to myself ever since reading Karen Coyle's blog post on Visualizing Linked Data and Ed Summers' blog post on DOIs as Linked Data. Thanks to them I think I've finally conceptualized the "so-what" factor for Linked Data. It's the mash-ups, stupid! The key to doing something useful with Linked Data is being able to build a web page that pulls together information from various bits of linked data. Pick and choose your information according to your need!

Likening Linked Data applications to mash-ups is probably a bit oversimplified. It's also the pearl diving, stupid! Pearl diving is my term for how a machine could, in theory, traverse from link to link to link in order to mine information. Ed's example of taking a citation, linking to journal information, then linking to the Library of Congress shows how a piece of code could crawl and trawl. But how wide a net to cast, and how deep to throw it? A bit of programming is in order to mash up Linked Data streams effectively. I read Berners-Lee on Linked Data over and over and over and couldn't see what the big deal was about creating chains of metadata. The chains are infrastructure. The value is in what you choose to hang on those chains. Ed's diagrams and Karen's explanations finally got this through my thick skull. I invite commenters to correct me if my understanding is flawed.
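
Here, roughly, is what I mean by pearl diving. It's only a sketch, assuming Python, rdflib, and URIs that actually return RDF when fetched; the starting URI is a placeholder, and the depth cap is the how-wide-a-net decision made explicit:

```python
# "Pearl diving": fetch a linked-data URI, print its triples, then follow a
# few of the links it points to, a level or two deep.
from rdflib import Graph, URIRef

def dive(uri, depth=2, seen=None):
    seen = set() if seen is None else seen
    if depth == 0 or uri in seen:
        return
    seen.add(uri)
    g = Graph()
    g.parse(uri)                     # fetch and parse whatever RDF comes back
    for s, p, o in g:
        print(s, p, o)
    links = [o for _, _, o in g if isinstance(o, URIRef)][:3]   # how wide a net to cast
    for link in links:
        dive(str(link), depth - 1, seen)                        # how deep to throw it

dive("http://example.org/some-authority-uri")   # placeholder; substitute a real name authority or DOI URI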

Building a web page from various streams of data isn't as simple as it seems on the surface. Also, any Linked Data service one might develop wouldn't be a single web page. It would be some sort of search and retrieval tool which creates results pages on the fly. One has to know where the data stores are and what they contain. One has to have some code, or bot, or what-not which does the search and retrieval and plugs the information into the resulting display. And one has to have an overall vision of what data can be combined to create something which is larger than the sum of its parts. I think this is a tall order for libraries, archives, and museums with small staffs and variable technical resources. We're used to dealing with structured data, so it's a small leap to conceive of meddling with the data a bit to expose it in the Linked Data way. It's a big old pole vault, however, to move from I-know-some-HTML-and-CSS to programming a retrieval system that pulls information from various quarters and presents it in a meaningful interface. That's where the programming knowledge comes in.

I’ve struggled for a long time trying to figure out where to define role boundaries between metadata librarians and programmers.  My nascent understanding of Linked Data services has led me to a rough take: Metadata librarians create Linked Data or update legacy metadata to Linked Data.  Programmers create or implement the search & retrieval and interface tools.  Somewhere in-between is the systems analyst role.  The analyst figures out which bits of linked-data would work well together to make a service that meets customer needs.  Librarians and programmers probably share the systems analyst role.   Things get fuzzy when the library/archive/museum is a small operation or one-man band.   We’re very fortunate to have awesome programmers here.  Together we can take our knowledge of our primary end-user and create a useful (and hopefully well used) service with the Linked Data we’re creating via our faculty names project.

 

 

Next steps: faculty names as linked data

Posted by laura on April 26, 2011 under Semantic web, Standards | Be the First to Comment

I'm plugging away at getting a complete set of current Caltech faculty names into the LC NAF/VIAF. I've already described what I've done so far to get our set of faculty names into a spreadsheet. I had to put on my thinking cap to do the next steps. I mentioned that we're going to be creating the required records manually since we can't effectively use a delimited-text-to-MARC translator to automate the process. So how many of our names require original authority work? We had 741 names in our spreadsheet. Some of the names could be eliminated very quickly. The list as-is contains adjunct faculty and staff instructors, and for our project we're interested in tenure-track faculty only. It was a small matter to sort the list and remove the names that didn't meet that criterion. This leaves 402 names for our complete initial working set.
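
The sort-and-remove step is exactly the kind of thing a spreadsheet handles fine, but for the record, a scripted version might look something like this (the file names and the "rank" column are assumptions about how the export is laid out):

```python
# Filtering the exported spreadsheet down to tenure-track names.
# "faculty_names.csv" and the "rank" column are assumptions about the export.
import csv

keep = {"assistant professor", "associate professor", "professor"}

with open("faculty_names.csv", newline="", encoding="utf-8") as src, \
     open("tenure_track_names.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row["rank"].strip().lower() in keep:   # adjuncts and staff instructors drop out
            writer.writerow(row)
```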

The next step was to remove names from the list which already have authority records in the NAF. It would have been efficient enough to check the names one by one while we do the authority work. We would be searching the NAF anyway, in order to avoid conflicts when creating or editing records. If a name on our list happened to be in the NAF already, we would simply move on to the next name.

Yet I chafe whenever there is any tedious one-by-one work to do. The batch-processing nerd in me wondered if there was a quick and dirty way to eliminate the in-the-NAF-already names from our to-do list. Enter OCLC Connexion's batch searching mode. I decided to run my list of names just for giggles. This involved some jiggering of my spreadsheet so I could get a delimited text file that Connexion would take as input. Thank you, Excel concatenation formula! I got results for 205 names, roughly half of the list. There were some false positives (names which matched, but weren't for a Caltech person) and some false negatives (names without matches but for famous people I know should have records already). The false negatives were mostly due to form of name: the heading in the authority record didn't match the form of the name I had in the spreadsheet.
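
The jiggering was essentially string concatenation: one search string per name, one name per line. In Excel that's a formula along the lines of =A2&", "&B2 dragged down the column; a scripted equivalent might look like this (the column names are assumptions, and whatever index labels or qualifiers your batch-search settings expect would still need to be added):

```python
# Building one search string per name for a Connexion batch-search input file.
# Column names are assumptions about the spreadsheet layout.
import csv

with open("tenure_track_names.csv", newline="", encoding="utf-8") as src, \
     open("connexion_searches.txt", "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):
        # the scripted equivalent of concatenating last name, comma, first name
        dst.write(f"{row['last_name']}, {row['first_name']}\n")
```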

I compared my OCLC batch search results against the initial working set of names in my spreadsheet. This wasn't exactly an instant process; I spent two working days reviewing it. I've confirmed that 178 names are in the NAF. I think this has saved a bit of time towards getting the project done. We have four catalogers. Let's say each of them could go through five records per week in addition to their regular work. It would take over 20 weeks to review the full list of 402 names. By reducing the number to 224, it would take roughly 11-12 weeks. Much better! I'd say my work days went to good use. Now I can estimate a reasonable date for completing the set of MARC authority records for our current faculty. Let's add a week or two (or four!) for the "known-unknowns." Production fires always pop up and keep one from working on projects, and summer is a time of absences due to conference attendance and vacation. I think it's do-able to get all of the records done by mid-August. I would be thrilled if our work was done by September. We're not NACO-independent yet, so it will take a bit longer to add the records to the NAF; they need our reviewer's stamp of approval. Then we'll be ready to roll with a Linked Data project.

It would be convenient if there were a way to batch search the NAF/VIAF, since VIAF is already exposed as linked data. I'm not aware of any such functionality, so I've decided we should keep a local set of the MARC authority records. I suspect it will make things simpler in the long run if we serve the data ourselves. I also suspect that having a spreadsheet with the names and their associated identifiers (LCCN, ARN, and, eventually, VIAF number) will be useful. It may seem weird to keep database identifier numbers when one has the full MARC records. I've learned, however, that having data in a delimited format is invaluable for batch processing. It takes a blink of an eye to paste a number into a spreadsheet when you're reviewing or creating a full record anyway. Sure, I could create a spreadsheet of the ARNs and LCCNs on the fly by extracting the 001 and 010 fields from the MARC records. But that's time and energy. If it's painless to gather data, one should gather data.
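
If extracting the 001s and 010s ever does become worth the time and energy, a short pymarc script would do it. A sketch, with made-up file names:

```python
# Pulling the 001 (the record control number / ARN in OCLC-exported records)
# and the 010 $a (LCCN) out of a local file of MARC authority records.
import csv
from pymarc import MARCReader

with open("faculty_authorities.mrc", "rb") as marc_file, \
     open("authority_ids.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["field_001", "lccn_010a"])
    for record in MARCReader(marc_file):
        f001 = record.get_fields("001")
        f010 = record.get_fields("010")
        control_no = f001[0].data if f001 else ""
        lccn = f010[0].get_subfields("a")[0].strip() if f010 and f010[0].get_subfields("a") else ""
        writer.writerow([control_no, lccn])
```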

We'll be able to do something interesting with the records once we have exposed the full set as linked data (or at least know the NAF or VIAF numbers so we can point to the names as linked data). No, I don't know yet what that something interesting will be. I'm getting closer to imagining the possibilities, though. I've mentioned before that I get dazed and confused when faced with implementing a linked data project. Two blog posts crossed my feed yesterday which cleared my foggy head a little. Run, don't walk, to Karen Coyle's piece on Visualizing Linked Data and Ed Summers' description of DOIs as linked data (btw, nice diagram, Ed!). I've got more to say about my mini-epiphany, but this post has gone on far too long already. I'll think aloud about it here another day.

LOD-LAM participants list

Posted by laura on March 29, 2011 under Semantic web | Be the First to Comment

As promised, here is the link to the LOD-LAM participants group: http://lod-lam.net/summit/participants/ . Each of the attendees has a brief biography, yours truly included, which pleases me even more. Congratulations to the organizers for putting the "international" into the International Linked Data summit. The detailed raw data list shows where everybody hails from. It's heavy on English-speaking countries, especially the U.S. It's not surprising that the bulk of attendees are North American, given the cost of flying over the oceans. It's great that so many people representing speakers of other languages will be there.

Off now to review some authority records. I have a rare day sans meetings, which means I get a lot of work-work (vs. work about work) done.

Limp data

Posted by laura on March 28, 2011 under Semantic web | 2 Comments to Read

LOL. Andy Powell over at eFoundations gave me a good chuckle this a.m. by referring to structured data which could become linked data as “limp data.”  Andy, prepare yourself to be quoted more often in linked data presentations.  Speaking of linked data, the LOD-LAM summit participant list will be posted today on the web site.  I’m keen to see who I’ll be meeting and what projects they’re working on.

I have a confession to make. I have a lot to learn about making linked data available. It's slightly embarrassing, given my ardent desire to do a linked data project here. I get how it works in theory (and please teach me if my take on the gist of it is wrong): put your metadata into triple stores with URIs, expose it, and layer a useful interface on top. I get dazed and confused with the application. I'm fuzzy re: the difference between the semantic web writ large and linked data, especially when documentation uses comp-sci jargon like "serializing data." What a way to scare off the normals. (BTW, Wikipedia has a somewhat understandable explanation.) I took a relational database design class in library school, but I wasn't exposed to the interplay between internet communication protocols and the contents of database tables. To be fair, I took the class back in the Internet dark ages (1995, in case you're counting). At that time the web was a place of flat documents, and fewer people were thinking about web protocols as a mechanism for interlinking databases. My dated knowledge means I get a bit flummoxed when I contemplate doing anything more complicated than putting RDFa into my static web pages.
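
Since "serializing data" was the phrase that threw me, here's the demystified version as I understand it: the same set of triples written out as text in different agreed-upon formats. A toy sketch with rdflib and a placeholder URI (corrections welcome):

```python
# "Serializing" a graph just means writing its triples out as text in some
# agreed-upon format. Same graph, three serializations. The URI is a placeholder.
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import FOAF

g = Graph()
author = URIRef("http://example.org/faculty/0001")
g.add((author, FOAF.name, Literal("Example Author")))

for fmt in ("turtle", "xml", "nt"):     # Turtle, RDF/XML, N-Triples
    print(f"--- {fmt} ---")
    print(g.serialize(format=fmt))
```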

What I struggle with is figuring out the level of technical proficiency a metadata librarian needs to attain in order to play in the semantic web sandbox. The line between metadata librarian and coder gets blurry. Libraries, archives, and museums have "limp data." They may or may not have a database guru. They may or may not have funds. So librarians/archivists/museum curators need to DIY if they are to get their data from limp to linked, or at least understand how it all works under the hood so they can delegate or outsource the implementation (and write grant applications to underwrite it). I learn best when I dive in and get my hands dirty, so I applied to the LOD-LAM summit. It's forcing me to figure out a do-able project, bone up on my tech skills, and put some of our data out there. I'm hoping that I can somehow translate the experience into librarian-speak so I can help other institutions expose their unique content. I need to be honest about my ignorance, however. I'm sucking up the slight self-conscious discomfort and starting where I am.

 

Linked Open Data in Libraries, Archives, and Museums (LOD-LAM) Summit

Posted by laura on March 10, 2011 under Metadata, Semantic web, Standards | Be the First to Comment

I'm stoked. I've been accepted to the International Linked Open Data in Libraries, Archives and Museums Summit. From the about page, the summit "will convene leaders in their respective areas of expertise from the humanities and sciences to catalyze practical, actionable approaches to publishing Linked Open Data," specifically:

  • Identify the tools and techniques for publishing and working with Linked Open Data
  • Draft precedents and policy for licensing and copyright considerations regarding the publishing of library, archive, and museum metadata
  • Publish definitions and promote use cases that will give LAM staff the tools they need to advocate for Linked Open Data in their institutions

It's exciting because of its potential to spark real progress for library linked data. I'm keen to be involved with projects where I can get my hands dirty. I'm pretty much done with librarian conferences like ALA. IMHO, ALA is an echo chamber of how-we-done-it-good presentations and yet-another-survey research. I went to an ERM presentation at the Midwinter Meeting and heard a speaker discuss workflows that I've seen implemented in libraries for the past 13 years. Seriously. ALA is good for networking with fellow librarians, to be sure, but it isn't the place to get bleeding-edge information. I'm ready to give my time and effort to breaking new ground. I'm very fortunate that my boss is incredibly supportive of my LOD-LAM participation.

We want to do a linked data project with author identifiers for our faculty. We're a small institution; we've got roughly 300 current faculty members, which is a small enough number for us to create a complete set of records within a reasonable amount of time. Our goal is to contribute our metadata to the commons and to share our experience as a use case. I'm quite honored to be invited. I've been following the work of some members of the organizing committee for years, and I'm very much looking forward to finally meeting them.