I can see clearly now…

Posted by laura on April 27, 2011 under Semantic web, Standards

I’ve been humming the Johnny Nash song to myself ever since reading Karen Coyle’s blog post on Visualizing Linked Data and Ed Summers’ blog post on DOIs as Linked Data. Thanks to them I think I’ve finally conceptualized the “so-what” factor for Linked Data. It’s the mash-ups, stupid! The key to doing something useful with Linked Data is being able to build a web page that pulls together information via various bits of linked data. Pick and choose your information according to your need!

Likening Linked Data applications to mash-ups is probably a bit of an oversimplification. It’s also the pearl diving, stupid! Pearl diving is my term for how a machine could, in theory, traverse from link to link to link in order to mine information. Ed’s example of taking a citation, linking to journal information, then linking to the Library of Congress shows how a piece of code could crawl and trawl. But how wide a net to cast, and how deep to throw it? A bit of programming is in order to mash up Linked Data streams effectively. I read Berners-Lee on Linked Data over and over and over and couldn’t see what the big deal was about creating chains of metadata. The chains are infrastructure. The value is in what you choose to hang on those chains. Ed’s diagrams and Karen’s explanations finally got this through my thick skull. I invite commenters to correct me if my understanding is flawed.
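
To make the pearl diving notion a little more concrete, here is a minimal sketch (Python, using the rdflib library) of a crawler that starts from one Linked Data URI and follows outbound links a couple of hops. The starting URI, the two-hop limit, and the total lack of politeness (no delays, no robots.txt) are placeholders for illustration, not a recipe.

```python
# A minimal sketch of "pearl diving": start from one Linked Data URI and
# follow outbound links a fixed number of hops. The example URI and the
# depth are placeholders; a real crawl needs delays, robots.txt checks,
# and much smarter stopping rules.
from rdflib import Graph, URIRef

def pearl_dive(start_uri, depth=2):
    """Fetch RDF for start_uri, then follow object URIs up to `depth` hops."""
    collected = Graph()
    seen = set()
    frontier = [start_uri]
    for _ in range(depth):
        next_frontier = []
        for uri in frontier:
            if uri in seen:
                continue
            seen.add(uri)
            try:
                collected.parse(uri)  # dereference the URI; grabs RDF if it's offered
            except Exception:
                continue  # not every link resolves to parseable RDF
            # queue every object URI we haven't visited yet
            for _, _, obj in collected.triples((URIRef(uri), None, None)):
                if isinstance(obj, URIRef) and str(obj) not in seen:
                    next_frontier.append(str(obj))
        frontier = next_frontier
    return collected

# e.g. graph = pearl_dive("http://viaf.org/viaf/12345", depth=2)  # placeholder URI
```

Deciding how wide and how deep is exactly the hard part; a naive crawl like this one either starves or drowns.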

Building a web page from various streams of data isn’t as simple as it seems on the surface. Also, any Linked Data service one might develop wouldn’t be a single web page. It would be some sort of search and retrieval tool which creates results pages on-the-fly. One has to know where the data stores are and what they contain. One has to have some code, or bot, or what-not which does the search and retrieval and plugs the information into the resulting display. And one has to have an overall vision of what data can be combined to create something which is larger than the sum of its parts. I think this is a tall order for libraries, archives, and museums with small staffs and variable technical resources. We’re used to dealing with structured data, so it’s a small leap to conceive of meddling with the data a bit to expose it in the Linked Data way. It’s a big old pole vault, however, to move from I-know-some-HTML-and-CSS to programming a retrieval system that pulls information from various quarters and presents it in a meaningful interface. That’s where the programming knowledge comes in.
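
For what it’s worth, here is an equally rough sketch of the mash-up half of the idea: dereference a handful of Linked Data URIs, merge whatever triples come back, and drop the labels into an HTML fragment. The URIs in the comment are placeholders, and a real service would obviously need search, caching, and an actual templating layer on top.

```python
# Rough sketch: merge RDF from a few sources and render a tiny HTML fragment.
# The example URIs are placeholders; rdfs:label is just one of many
# properties a real mash-up would pull out.
from rdflib import Graph
from rdflib.namespace import RDFS

def mashup_page(uris):
    g = Graph()
    for uri in uris:
        try:
            g.parse(uri)  # dereference each URI and merge the triples
        except Exception:
            continue
    rows = [
        f"<li>{label}: <a href='{subject}'>{subject}</a></li>"
        for subject, label in g.subject_objects(RDFS.label)
    ]
    return "<ul>\n" + "\n".join(rows) + "\n</ul>"

# e.g. html = mashup_page(["http://viaf.org/viaf/12345",
#                          "http://id.loc.gov/authorities/names/n12345678"])
```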

I’ve struggled for a long time trying to figure out where to define role boundaries between metadata librarians and programmers. My nascent understanding of Linked Data services has led me to a rough take: metadata librarians create Linked Data or update legacy metadata to Linked Data. Programmers create or implement the search & retrieval and interface tools. Somewhere in between is the systems analyst role. The analyst figures out which bits of linked data would work well together to make a service that meets customer needs. Librarians and programmers probably share the systems analyst role. Things get fuzzy when the library/archive/museum is a small operation or a one-man band. We’re very fortunate to have awesome programmers here. Together we can take our knowledge of our primary end-users and create a useful (and hopefully well-used) service with the Linked Data we’re creating via our faculty names project.



Next steps: faculty names as linked data

Posted by laura on April 26, 2011 under Semantic web, Standards

I’m plugging away at getting a complete set of current Caltech faculty names into the LC NAF/VIAF. I’ve already described what I’ve done so far to get our set of faculty names into a spreadsheet. I had to put on my thinking cap for the next steps. I mentioned that we’re going to be creating the required records manually since we can’t effectively use a delimited-text-to-MARC translator to automate the process. So how many of our names require original authority work? We had 741 names in our spreadsheet. Some of the names could be eliminated very quickly. The list as-is contains adjunct faculty and staff instructors. For our project we’re interested in tenure-track faculty only. It was a small matter to sort the list and remove those who didn’t meet that criterion. This leaves 402 names for our complete initial working set.
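
For the record, the sorting step is also trivially scriptable. A hedged sketch, assuming the list has been saved as CSV with a “Rank” column; the column name and the title values are my guesses, not the actual export.

```python
# Hypothetical sketch: filter a CSV export of the directory down to
# tenure-track faculty. The "Rank" column and the title values are
# assumptions about the export, not the real data.
import csv

KEEP = {"Assistant Professor", "Associate Professor", "Professor"}

with open("faculty.csv", newline="") as src, \
     open("faculty_tenure_track.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row.get("Rank", "") in KEEP:
            writer.writerow(row)
```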

The next step was to remove names from the list which already have authority records in the NAF. It would have been efficient enough to check the names one-by-one while we do the authority work. We would be searching the NAF anyway, in order to avoid conflicts when creating or editing records. If a name on our list happened to be in the NAF, then we would move on to the next name.

Yet I chafe whenever there is any tedious one-by-one work to do. The batch processing nerd in me wondered if there was a quick and dirty way to eliminate the in-the-NAF-already names from our to-do list. Enter OCLC Connexion’s batch searching mode. I decided to run my list of names just for giggles. This involved some jiggering of my spreadsheet so I could get a delimited text file that Connexion would take as input. Thank you, Excel concatenation formula! I got results for 205 names, roughly half of the list. There were some false positives (names which matched, but weren’t for a Caltech person) and some false negatives (names without matches, but for famous people who I know should have records already). The false negatives were mostly due to form of name: the heading in the authority record didn’t match the form of the name I had in the spreadsheet.
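
The jiggering is easy to script outside of Excel, too. Here is a hedged sketch that writes one search per line from the CSV; the column names and the “pn:” index label are placeholders, so check OCLC’s Connexion documentation for the exact search keys it expects.

```python
# Sketch of the "concatenation" step done in Python instead of Excel:
# turn a CSV of names into one search string per line for a batch search.
# The column names and the search-key prefix are placeholders; consult the
# OCLC Connexion documentation for the exact index labels it expects.
import csv

with open("faculty_tenure_track.csv", newline="") as src, \
     open("connexion_batch.txt", "w") as dst:
    for row in csv.DictReader(src):
        surname = row["Last Name"].strip()       # hypothetical column
        forename = row["First Name"].strip()     # hypothetical column
        dst.write(f"pn:{surname}, {forename}\n")  # placeholder index label
```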

I compared my OCLC batch search results against the initial working set of names in my spreadsheet. This wasn’t exactly an instant process; I spent two working days reviewing it. I’ve confirmed that 178 names are in the NAF. I think this has saved a bit of time towards getting the project done. We have four catalogers. Let’s say each of them could go through five records per week in addition to their regular work. It would take over 20 weeks to review the full list of 402 names. By reducing the number to 224, it would take roughly 11-12 weeks. Much better! I’d say my work days went to good use. Now I can estimate a reasonable date for completing the set of MARC authority records for our current faculty. Let’s add a week or two (or four!) for the “known-unknowns.” Production fires always pop up and keep one from working on projects. Plus summer is a time of absences due to conference attendance and vacation. I think it’s do-able to get all of the records done by mid-August. I would be thrilled if our work were done by September. We’re not NACO-independent yet, so it will take a bit longer to add them to the NAF; they need our reviewer’s stamp of approval. Then we’ll be ready to roll with a Linked Data project.
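
If anyone else faces the same review, a little normalization script can at least triage the list before the eyeballing starts. This is a crude sketch with made-up file and column names; differing forms of name (the source of my false negatives) will still slip through, so treat it as a starting point, not an answer.

```python
# Crude sketch: flag working-set names that appear in the batch-search hits.
# Normalization here is deliberately simple; differing forms of name still
# need human review, so the output is a triage list, nothing more.
import csv

def normalize(name):
    return " ".join(name.lower().replace(",", " ").split())

with open("batch_hits.csv", newline="") as f:             # hypothetical file
    hits = {normalize(row["Name"]) for row in csv.DictReader(f)}

with open("faculty_tenure_track.csv", newline="") as f:   # hypothetical file
    for row in csv.DictReader(f):
        name = f'{row["Last Name"]}, {row["First Name"]}'
        status = "likely in NAF" if normalize(name) in hits else "needs original work"
        print(f"{name}\t{status}")
```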

It would be convenient if there were a way to batch search the NAF/VIAF, since the VIAF is already exposed as linked data. I’m not aware of any such functionality, so I’ve decided we should keep a local set of the MARC authority records. I suspect it will make things simpler in the long run if we serve the data ourselves. I also suspect that having a spreadsheet with the names and their associated identifiers (LCCN, ARN, and, eventually, VIAF number) will be useful. It may seem weird to keep database identifier numbers when one has the full MARC records. I’ve learned, however, that having data in a delimited format is invaluable for batch processing. It takes a blink of an eye to paste a number into a spreadsheet when you’re reviewing or creating a full record anyway. Sure, I could create a spreadsheet of the ARNs and LCCNs on-the-fly by extracting the 001 and 010 fields from the MARC records. But that takes time and energy. If it’s painless to gather data, one should gather data.
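
If I ever change my mind, pymarc would make that extraction nearly painless. A hedged sketch, assuming a file of binary MARC authority records; field handling differs a bit across pymarc versions, so this is the shape of the script rather than gospel.

```python
# Sketch: pull the ARN (001) and LCCN (010 $a) out of a file of MARC
# authority records into a CSV, along with the 100 heading for context.
import csv
from pymarc import MARCReader

with open("authorities.mrc", "rb") as marc, \
     open("identifiers.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["heading", "arn_001", "lccn_010a"])
    for record in MARCReader(marc):
        if record is None:   # the reader can yield None for unreadable records
            continue
        f100 = record.get_fields("100")
        f001 = record.get_fields("001")
        f010 = record.get_fields("010")
        heading = f100[0].format_field() if f100 else ""
        arn = f001[0].data if f001 else ""
        lccn_subfields = f010[0].get_subfields("a") if f010 else []
        lccn = lccn_subfields[0].strip() if lccn_subfields else ""
        writer.writerow([heading, arn, lccn])
```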

We’ll be able to do something interesting with the records once we have exposed the full set as linked data (or at least know the NAF or VIAF numbers so we can point to the names as linked data). No, I don’t know yet what that something interesting will be. I’m getting closer to imagining the possibilities though. I’ve mentioned before that I get dazed and confused when faced with implementing a linked data project. Two blog posts crossed my feed yesterday which cleared my foggy head a little. Run, don’t walk, to Karen Coyle’s piece on Visualizing Linked Data and Ed Summers’ description of DOIs as linked data (btw, nice diagram, Ed!). I’ve got more to say about my mini-epiphany, but this post has gone on far too long already. I’ll think aloud about it here another day.

MarcEdit how-to videos

Posted by laura on April 11, 2011 under Metadata

I had the pleasure of attending an advanced MarcEdit workshop this past Friday taught by none other than Mr. MarcEdit himself, Terry Reese. I learned quite a few tips and tricks. Most important for me was learning how the regular expression engine functions and which extensions Terry included (not too many, yay for sticking to the .NET framework!). The portion on programming MarcEdit from the command line was a bit beyond my ken, but it was cool to see it and file it under must-learn-someday.
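
Since the workshop got me thinking in regular expressions, here is the flavor of the thing, sketched in Python against MarcEdit’s mnemonic (.mrk) format rather than in MarcEdit itself. MarcEdit’s find/replace uses the .NET regex engine, so some syntax details differ, and the file name and pattern below are purely illustrative.

```python
# Purely illustrative: run a regex over a MarcEdit mnemonic (.mrk) file.
# Match a 100 field in the usual mnemonic layout ("=100", two spaces, two
# indicator characters, then subfields) and print its $a.
import re

pattern = re.compile(r"^=100  .{2}\$a([^$]+)")

with open("records.mrk", encoding="utf-8") as mrk:   # hypothetical file
    for line in mrk:
        match = pattern.match(line)
        if match:
            print(match.group(1).strip())   # the personal name heading's $a
```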

Terry has quite a few YouTube videos demonstrating how to make the most of the program. They have been available for a couple of years, but they’re worth reminding folks about. And hey, it’s Monday, it’s a movie, and it’s metadata related.

Moderation mea culpa

Posted by laura on April 6, 2011 under Admin

Humble apologies are in order.  I haven’t been moderating comments in a timely manner.  I was attending the CNI spring meeting and I didn’t take my usual laptop.  I confess I don’t have all of my passwords committed to memory (does anybody?).  They’re encrypted in a password manager on my usual machine.  So I wasn’t able to log into this blog to approve the thoughtful commentary on my last post.  Please accept my apologies for any inconvenience.  I’ll be sure to memorize that password before my next bout of business travel.   My sincere thanks to those who commented.

It’s the little things

Posted by laura on April 1, 2011 under Metadata, Standards

I’ve mentioned that we want to get authority records for all current Caltech faculty into the National Authority File and, by extension, into the VIAF. The first step is to ensure that we have a current and comprehensive list of all faculty working here. I’m happy to learn that I can easily obtain the information in a manipulable form. I was expecting that I would need to ask somebody in academic records and plead our case. Lists can be tightly guarded by the powers-that-be. I just figured out that you can convert HTML tables to Excel via Internet Explorer. That’s probably old news to most of you. I’ve done .xls-to-HTML conversion; I’ve just never had the need to go in the opposite direction. Plus I don’t use Internet Explorer.
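
For anyone allergic to Internet Explorer, the same hop can be scripted. A hedged sketch using pandas; it assumes the directory page is ordinary HTML table markup, that an HTML parser like lxml is installed, and that the URL below is a placeholder.

```python
# Alternative to the Internet Explorer trick: pull HTML tables straight into
# a spreadsheet-friendly file with pandas. The URL is a placeholder, and
# read_html assumes the page serves ordinary <table> markup.
import pandas as pd

tables = pd.read_html("https://example.edu/directory?type=faculty")
tables[0].to_csv("faculty.csv", index=False)   # assume the first table is the roster
```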

I was able to create a spreadsheet of the necessary data by doing a directory search limited to faculty and running the conversion. Sweet! Now we can divvy up the work and get cracking. Getting the info is a small thing, but it’s these little victories which make my days brighter. I played around with the delimited-text-to-MARC translator in MarcEdit to auto-generate records from the spreadsheet. It worked like a charm. Unfortunately the name info in the spreadsheet is crammed into a single cell. Also, it’s in first-name surname order, without any normalization of middle initials, middle names, or nicknames in parens. A text-to-MARC transform can only work with the data it is given, and a bunch of records with 100 fields in the wrong order isn’t so helpful. I messed about with the text-to-columns tool in Excel in order to parse the name data more finely, without much luck. It split the data, but it would require so much post-split intervention to ensure the data is correct that we might as well do that work within Connexion.
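
To show why the single-cell, direct-order names are the sticking point, here is a naive inversion sketch. It “works” on simple cases and quietly mangles compound surnames, suffixes, and nicknames, which is exactly the post-split clean-up that makes Connexion the better place to do the work.

```python
# Naive sketch of flipping "First Middle Last" into "Last, First Middle" for
# a 100 field. It illustrates the problem more than it solves it: compound
# surnames, suffixes ("Jr."), and nicknames in parentheses all come out
# wrong without hand review.
def invert_name(direct_order):
    parts = direct_order.split()
    if len(parts) < 2:
        return direct_order
    return f"{parts[-1]}, {' '.join(parts[:-1])}"

print(invert_name("John A. Smith"))        # Smith, John A.      (fine)
print(invert_name("Maria van der Berg"))   # Berg, Maria van der (wrong)
```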

In fact, I’m OK with creating the authority records from scratch since we’re training to be NACO contributors. People need the practice. In my experience, it’s easier to do original cataloging than to edit derived records. Editing requires a finer eye, and original work can be helped along with constant data and/or macros. Regardless, it was fun to play with the transform and teach myself something new. And it’s very exciting to take a step towards meeting our goal of authority/identity information/identifiers for our constituents.