I went to DLF and all I got was…

Posted by laura on November 4, 2011 under Metadata | Be the First to Comment

I’m now home from the DLF 2011 Forum. This was my second year, and it has become my favorite conference. Both times I’ve attended, I met people I wanted to meet. I liked being able to express my impressed-ness and talk shop. This year I made connections from which some collaborations may emerge. Exciting!

I’m back with some very useful information, stuff I can apply to my day-to-day.  The Linked Data hands-on session was awesome and merits its own blog post.  Best thing: it helped me make a bit of headway with the CIT faculty names linked data work.

CURATEcamp also gets its own blog post. The conversation between catalogers and coders made a great leap, and there are concrete next steps; see details at the CURATEcamp wiki. See also the pretty, pretty picture of the distribution of catalogers and coders attending. Catalogers represent!

I laughed so much. This is truly the conference’o’mirth. Dan Chudnov proposed a weekly call-in show where a cataloger and a coder take questions from the field. The names proposed in the Twitter back-stream made it difficult to hold back laughter for fear of disturbing the front-stream speaker. Chuckles aside, this seems to have a strong possibility of actually happening. I’m getting over my instinct that back-chat is rude at conferences. It seems to have a group bonding effect, and I can see the value of nurturing professional relationships through shared discussion and commentary. See the part above about concrete next steps from CURATEcamp.

I got caught up on Data Management Plans and the eXtensible Catalog project. I asked for and received advice about migrating our archival information system. I asked people about their use of descriptive metadata and best practices for image management. I have new software to evaluate and test. I also got to see a preview of an RDA training workshop which focuses on RDA as data elements (not RDA as records!!!).

My brain hurts because it’s so full.  I’ll need to collect my thoughts.  I absorbed so much that it wouldn’t be fair to blog one big brain dump.  I’ll be able to synthesize it better if I break it down in chunks.  Stay tuned.

Wherein I get stuck with library linked open data

Posted by laura on October 28, 2011 under Semantic web | Be the First to Comment

For the past few months I’ve been writing about our pilot Linked Open Data project to expose name identifiers for Caltech’s current faculty.  So far we’ve been working on creating/obtaining the initial data set.  I’m very happy to report that we’ve now got 372/412 faculty names in the LC/NAF and, by extension, the VIAF.   We expect to complete the set within the next month or so, give or take our other production responsibilities.  Meanwhile, I’ve been messing with the metadata so I can figure out what the heck to do next.

We have the data in full MARC21 authority records. From that I’ve made a set of MADS records (thanks, MarcEdit XSLT!). I also created a tab-delimited .txt file of the name heading and LCCN.
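
For the curious, that last tab-delimited step is only a few lines if you script it rather than eyeball it. Here’s a rough sketch in Python with pymarc, not our actual workflow; the file names are placeholders, and I’m assuming the headings sit in the 1XX fields and the LCCN in the 010:

```python
# A rough sketch: pull the 1XX heading and the LCCN (010 $a) out of a file
# of MARC21 authority records and write them to a tab-delimited text file.
# File names are placeholders; assumes the pymarc library is installed.
from pymarc import MARCReader

with open("faculty_authorities.mrc", "rb") as marc_file, \
        open("names_and_lccns.txt", "w", encoding="utf-8") as out:
    for record in MARCReader(marc_file):
        headings = record.get_fields("100", "110", "111")
        lccns = record.get_fields("010")
        if headings and lccns:
            heading = headings[0].format_field()   # e.g. "Doe, Jane, 1960-"
            lccn = (lccns[0]["a"] or "").strip()   # e.g. "n 2011012345"
            out.write(f"{heading}\t{lccn}\n")
```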

According to the basic linked data publishing pattern, this can be as simple as publishing a web page. We can put out the structured data under an open license and in a non-proprietary format and call ourselves done. That is what you would call 3-star linked open data. We’d like to do a bit better than that. In order to achieve 4-star linked open data we need to do things like mint URIs and make the data available in forms that are more readily machine-processable.
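
To make “mint URIs” a bit less abstract, here is a sketch of what one faculty name might look like once we do that, written with rdflib. The caltech.edu URI, the LCCN, and the VIAF number are all invented for illustration, and the vocabulary choices (FOAF, owl:sameAs) are just my guess at a sensible starting point:

```python
# A sketch of what one faculty name might look like as 4-star data.
# The caltech.edu URI, the LCCN, and the VIAF number are invented;
# assumes the rdflib library.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, OWL, RDF

g = Graph()
person = URIRef("http://library.caltech.edu/people/example-faculty-member")

g.add((person, RDF.type, FOAF.Person))
g.add((person, FOAF.name, Literal("Doe, Jane, 1960-")))
# Minting our own URI and serving RDF is the 4-star part; the sameAs links
# to LC and VIAF point onward toward the 5th star.
g.add((person, OWL.sameAs, URIRef("http://id.loc.gov/authorities/names/n2011012345")))
g.add((person, OWL.sameAs, URIRef("http://viaf.org/viaf/123456789")))

print(g.serialize(format="turtle"))
```

The Turtle that comes out is both human-readable and machine-processable, which is exactly the gap between three stars and four.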

This is where I get stuck.

Fortunately, there will be a hands-on linked data workshop at the DLF Forum (#dlfforum) next week. I’m really looking forward to it. I’ve volunteered to give an overview of linked data to Maryland tech services librarians in December, and getting our data out there will provide some necessary street cred.

 

Should academic libraries expose bib data as linked data?

Posted by laura on September 27, 2011 under Semantic web | 2 Comments to Read

There are many calls for libraries to get their act together and expose the bibliographic data in their catalogs as linked data. But should they?

It only makes sense if your catalog holds a lot of unique records describing local materials that aren’t represented in WorldCat. If you’re in that boat, this blog post doesn’t apply. I would bet that most academic libraries do their original cataloging with OCLC tools and then add those records to their local catalog. If your stuff is in WorldCat, let OCLC do the work. They are running a pilot to release 1 million of the top WorldCat records as linked data, and eventually this will be broadened.

Of course, there are problems in simply waiting for OCLC to do it. The biggest is rights and licensing. OCLC could make linked bib data available to its membership. But opening that linked data to the world is another matter entirely. Roy Tennant, commenting on June 2 at JOHO the Blog (aka David Weinberger’s blog), says:

This is a project that is still being formed. The intent of the project is to investigate how to best release bibliographic data as linked data that will provide an opportunity for us to get feedback from other practitioners about how well we’ve done it and how useful it might be. A part of this project will include considering the policy and licensing issues. There’s not yet a firm timetable, but as that is determined it will be shared with the community. We’re encouraged by the enthusiastic response folks have shown in what we’re doing. It will help influence the shape of what we ultimately do.

You may own your records.  But if they’re aggregated in WorldCat, OCLC may or may not release them depending on what licensing/usage model they come up with.

The other problem, of course, is the waiting. Nobody knows how long it will be before WorldCat is a linked data resource. If you’re gunning to get your library catalog out there in the linked data ether, you’re not going to be satisfied with the time lag. If you’re in an organization with limited resources, however, waiting will be to your strategic advantage. Times are tough. Budgets are small. Cataloging departments are understaffed. Why spend resources when bibliographic records as linked data will come anyway?

Focus instead on authority records and on enhancing your repository.  This is the metadata unique to your organization.  This is where you add value for your customers.  The value for others is a nice side-effect.  Faculty, researchers,  and newly hooded PhD graduates will  require identity management tools to assist them in scholarly communication.  Think beyond bibliometrics for their publications.  Grant makers will want to track researchers.  Universities will want to track research impact.   The ground is being sown.  Witness the NSF grant to U. Chicago and Harvard announced today, which will be used to research the impact of ORCID on science policy (i.e. the ability to put it into use for things like FastLane).

When you hear the calls for libraries to expose their ILS as linked data, consider how your users are getting bibliographic information. Most likely from the web. WorldCat feeds into Google. OpenLibrary, LibraryThing, and other sources of bib data abound. Users are probably going to your catalog for holdings, if they’re going there at all. Bibliographic linked data is a good thing. I just don’t think it’s realistic for academic institutions without a lot of unique material that isn’t already in WorldCat to heed the call. I’d rather see the calls for academic libraries to participate in the linked data movement get broader. It’s not about freeing the ILS. It’s about pushing out metadata that exposes the work of the people in your organization, not the metadata that describes the library materials available to them.

 

Return from LOD-LAM

Posted by laura on June 6, 2011 under Semantic web, Standards | Be the First to Comment

I’m back from the International Linked Open Data in Libraries, Archives and Museums Summit, held June 2-3, 2011, in San Francisco. My brain is still digesting all that I learned. I’ve posted my rough notes in case anybody else finds them useful. I thought the event was really well done. I learned about several LOD projects that might provide tools we can use here at Caltech; I still need to dig into them in more detail. The organizers will be posting a summit report with all of the action items for next steps. Of particular interest to me: there will be some work on publishing citations as linked data, and there will be some materials released to help educate the LAM community about LOD. I’ll probably write about each aspect as more information comes to light and as I wrap my head around it.

 

More on Wikipedia as authority

Posted by laura on May 25, 2011 under Semantic web, Standards | Be the First to Comment

Re: my post yesterday saying I was unsure whether the VIAF text-mining approach to incorporating Wikipedia links within their records counts as Linked Data. There’s a good little conversation over at the LOD-LAM blog which elucidated the difference for me. They say it better than I can, so go have a look-see. The money quotation: “Linked Data favors ‘factoids’ like date-and-place-of-birth, while statistical text-mining produces (at least in this case) distributions interpretable as ‘relationship strength.’”

Wikipedia authority template

Posted by laura on May 24, 2011 under Semantic web | Read the First Comment

There’s been some conversation lately about using Wikipedia in authority work. Jonathan Rochkind recently blogged about the potential of using Wikipedia Miner to add subject authority information to catalog records, reasoning that the context and linkages provided in a Wikipedia article could provide better topical relevance. Then somebody on CODE4LIB asked for help using author information in a catalog record to look up said author in Wikipedia on the fly. The various approaches suggested on the list have been interesting, although there hasn’t been an optimal solution. Although I couldn’t necessarily code such an application myself, it’s good to know how a programmer could go about doing such a thing. What I did learn was that Wikipedia has a way of marking up names with author identifiers. The Template:Authority control page gives an example of how to do it.
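
I couldn’t build the on-the-fly lookup myself, but the first half of the job — grabbing an article’s wikitext and spotting the template — looks roughly like this. A rough illustration only: the article title is just an example and the regular expression is deliberately crude.

```python
# A rough illustration, not a working service: fetch an article's raw
# wikitext and look for the authority control template. The article title
# is just an example; assumes the requests library.
import re
import requests

title = "Kip Thorne"   # any author with a Wikipedia article would do
resp = requests.get(
    "https://en.wikipedia.org/w/index.php",
    params={"title": title, "action": "raw"},
)

# Crude check for something like {{Authority control|VIAF=...|LCCN=...}}
match = re.search(r"\{\{Authority control[^}]*\}\}", resp.text, re.IGNORECASE)
print(match.group(0) if match else f"No authority control template on {title}")
```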

I haven’t done much authoring or editing at Wikipedia, so the existence of the “template” is news to me. I think it’s pretty nifty, so I just had to blog it. The template gets me thinking. Perhaps we’ll be able to leverage our faculty names Linked Data pilot into some sort of mash-up with Wikipedia, pushing our author identifiers into that space or pulling Wikipedia info into our work. Our group continues to make progress on getting all our current faculty represented in the National Authority File, with an eye to exposing our set of authority records as Linked Data. We haven’t figured out yet precisely what we’re going to do with the Linked Data once we make it available. Build it and they will come is nice, but we need a demonstrable benefit (i.e. a cool project) to show the value of the Library’s author services.

VIAF already provides external links to Wikipedia and WorldCat Identities with its display of an author name. Ralph Levan explained how OCLC did it, in general fashion, in the CODE4LIB conversation. Near as I understand it, they do a data dump from Wikipedia, do some text mining, run their disambiguation algorithms over it, then add the Wikipedia page if they get a match. I don’t know if this computational approach is a Linked Data type of thing or not; I need to continue working my way through chapters 5 and 6 of Heath & Bizer’s Linked Data book (LOD-LAM prep!). Nonetheless, it’s a good way of showing how connections can be built between an author identity tool and another data source to enrich the final product. I have a hazy vision of morphing the Open Library’s “one web page for every book ever published” into “one web page for every Caltech author.” More likely it will be “one web page tool for every Caltech author to incorporate into their personal web site,” given the extreme individualism and independence cherished within our institutional culture. But I digress. Yes, “one web page for every Caltech author” would at least give us the (metaphorical) space to build a killer app.

Another a-ha moment

Posted by laura on May 2, 2011 under Semantic web, Standards | Be the First to Comment

Another librarian has seen the Linked Data light. Mita Williams, the New Jack Librarian, writes about gaining a new appreciation for LOD at the recent Great Lakes THATCamp. Her take-away seems similar to my understanding: librarians already know how to create Linked Data. We need to see Linked Data applied in new contexts in order to comprehend the utility of exposing it. The tricky bit, IMHO, is that creating applications to use the data requires a SPARQL endpoint. These SPARQL endpoints aren’t geared for humans. They are a “machine-friendly interface towards a knowledge base.”

I think the machine application layer of Linked Data is where librarians hit a barrier when getting involved with Linked Open Data (LOD). I don’t have the first clue how to set up a SPARQL endpoint. My technical expertise isn’t there, and I’m sure there are a lot of people in the same boat (CODE4LIBers notwithstanding). Most of the stuff I’ve read about getting libraries more involved in LOD has focused on explaining how RDF uses subject-predicate-object syntax and then urging libraries to get their metadata transformed into RDF. I’ve seen precious little plain-English instruction on building an app with Linked Data. I have seen great demos of nifty things done by people in library-land. I’ll give a shout out here to John Mark Ockerbloom and his use of id.loc.gov to enhance the Online Books Page. John Mark Ockerbloom has a PhD in computer science. How do the rest of us get there?
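
For the benefit of fellow non-programmers, this is roughly what “talking to a SPARQL endpoint” looks like from code. It queries DBpedia’s public endpoint, not anything of ours, and the query itself is purely illustrative:

```python
# What "talking to a SPARQL endpoint" looks like from code. Queries
# DBpedia's public endpoint; the query is purely illustrative.
# Assumes the SPARQLWrapper library.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?person ?birthDate WHERE {
        ?person a dbo:Scientist ;
                dbo:birthDate ?birthDate .
    }
    LIMIT 5
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["person"]["value"], row["birthDate"]["value"])
```

Nothing there is conceptually harder than an OPAC search; the barrier is that somebody has to stand up and maintain the endpoint behind it.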

Personally, I’m working with the fine folks here to get our metadata into a ready-to-use Linked Data format. And I’m plowing through the jargon-laden documentation to teach myself the next steps. Jon Voss, LOD-LAM summit organizer, has posted a reading list to help and is soliciting contributions. The first title I’m delving into is Heath & Bizer’s Linked Data: Evolving the Web into a Global Data Space, which has a free HTML version available. They include a groovy little diagram which outlines the steps in the process of “getting there.” I’m heartened to see that our first step (getting the data ready) matches the first step in the diagram.

I can see clearly now…

Posted by laura on April 27, 2011 under Semantic web, Standards | Be the First to Comment

I’ve been humming the Johnny Nash song to myself ever since reading Karen Coyle’s blog post on Visualizing Linked Data and Ed Summers’ blog post on DOIs as Linked Data. Thanks to them, I think I’ve finally conceptualized the “so-what” factor for Linked Data. It’s the mash-ups, stupid! The key to doing something useful with Linked Data is being able to build a web page that pulls together information via various bits of linked data. Pick and choose your information according to your need!

Likening Linked Data applications to mash-ups is probably a bit over-simplistic. It’s also the pearl diving, stupid! Pearl diving is my term for how a machine could, in theory, traverse from link to link to link in order to mine information. Ed’s example of taking a citation, linking to journal information, then linking to the Library of Congress shows how a piece of code could crawl and trawl. But how wide a net to cast, and how deep to throw it? A bit of programming is in order to mash up Linked Data streams effectively. I read Berners-Lee on Linked Data over and over and over and couldn’t see what the big deal was about creating chains of metadata. The chains are infrastructure. The value is in what you choose to hang on those chains. Ed’s diagrams and Karen’s explanations finally got this through my thick skull. I invite commenters to correct me if my understanding is flawed.
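
Here’s a toy version of the pearl diving, just to make it concrete. I’m starting from a DBpedia URI because it reliably serves RDF; Ed’s DOI example works the same way via content negotiation:

```python
# A toy version of pearl diving: dereference one linked data URI, then list
# the URIs its triples point at, which a crawler could fetch next. Uses a
# DBpedia resource because it reliably serves RDF; assumes rdflib.
from rdflib import Graph, URIRef

start = "http://dbpedia.org/resource/Tim_Berners-Lee"

g = Graph()
g.parse(start)   # content negotiation brings back an RDF description

# Every URI in object position is a potential next dive; how many to chase,
# and how deep, is the "how wide a net" question.
next_dives = {o for o in g.objects() if isinstance(o, URIRef)}
print(f"{len(next_dives)} links to follow from {start}")
```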

Building a web page from various streams of data isn’t as simple as it seems on the surface. Also, any Linked Data service one might develop wouldn’t be a single web page. It would be some sort of search and retrieval tool which creates results pages on the fly. One has to know where the data stores are and what they contain. One has to have some code, or bot, or what-not which does the search and retrieval and plugs the information into the resulting display. And one has to have an overall vision of what data can be combined to create something larger than the sum of its parts. I think this is a tall order for libraries, archives, and museums with small staffs and variable technical resources. We’re used to dealing with structured data, so it’s a small leap to conceive of meddling with the data a bit to expose it in the Linked Data way. It’s a big old pole vault, however, to move from I-know-some-HTML-and-CSS to programming a retrieval system that pulls information from various quarters and presents it in a meaningful interface. That’s where the programming knowledge comes in.

I’ve struggled for a long time trying to figure out where to define role boundaries between metadata librarians and programmers.  My nascent understanding of Linked Data services has led me to a rough take: Metadata librarians create Linked Data or update legacy metadata to Linked Data.  Programmers create or implement the search & retrieval and interface tools.  Somewhere in-between is the systems analyst role.  The analyst figures out which bits of linked-data would work well together to make a service that meets customer needs.  Librarians and programmers probably share the systems analyst role.   Things get fuzzy when the library/archive/museum is a small operation or one-man band.   We’re very fortunate to have awesome programmers here.  Together we can take our knowledge of our primary end-user and create a useful (and hopefully well used) service with the Linked Data we’re creating via our faculty names project.


Next steps: faculty names as linked data

Posted by laura on April 26, 2011 under Semantic web, Standards | Be the First to Comment

I’m plugging away at getting a complete set of current Caltech faculty names into the LC NAF/VIAF. I’ve already described what I’ve done so far to get our set of faculty names into a spreadsheet. I had to put on my thinking cap to do the next steps. I mentioned that we’re going to be creating the required records manually since we can’t effectively use a delimited-text-to-MARC translator to automate the process. So how many of our names require original authority work? We had 741 names in our spreadsheet. Some of the names could be eliminated very quickly: the list as-is contains adjunct faculty and staff instructors, and for our project we’re interested in tenure-track faculty only. It was a small matter to sort the list and remove those not meeting that parameter. This leaves 402 names for our complete initial working set.

The next step was to remove names from the list which already have authority records in the NAF. It would have been efficient enough to check the names one by one while we do the authority work. We would be searching the NAF anyway, in order to avoid conflicts when creating or editing records. If a name on our list happened to be in the NAF, we would simply move on to the next name.

Yet I chafe whenever there is any tedious one-by-one work to do. The batch processing nerd in me wondered if there was a quick and dirty way to eliminate the in-the-NAF-already names from our to-do list. Enter OCLC Connexion’s batch searching mode. I decided to run my list of names through it just for giggles. This involved some jiggering of my spreadsheet so I could get a delimited text file that Connexion would take as input. Thank you, Excel concatenation formula! I got results for 205 names, roughly half of the list. There were some false positives (names which matched but weren’t for a Caltech person) and some false negatives (names without matches but for famous people I know should have records already). The false negatives were mostly due to form of name: the heading in the authority record didn’t match the form of the name I had in the spreadsheet.
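
I did the jiggering with an Excel concatenation formula, but the same thing sketched in Python would look something like this. The column names, the output file, and the “pn:” index label are assumptions about my own spreadsheet and setup, not anything Connexion dictates in exactly this form:

```python
# The same spreadsheet jiggering, sketched in Python instead of Excel.
# Column names, the output file, and the index prefix are assumptions
# about my local setup, not anything Connexion dictates.
import csv

SEARCH_PREFIX = "pn:"   # placeholder for whatever personal-name index label the batch search expects

with open("faculty_names.csv", newline="", encoding="utf-8") as names, \
        open("batch_searches.txt", "w", encoding="utf-8") as out:
    for row in csv.DictReader(names):
        out.write(f'{SEARCH_PREFIX}{row["last_name"]}, {row["first_name"]}\n')
```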

I compared my OCLC batch search results against the initial working set of names in my spreadsheet. This wasn’t exactly an instant process; I spent two working days reviewing it. I’ve confirmed that 178 names are in the NAF. I think this has saved a good bit of time towards getting the project done. We have four catalogers. Let’s say each of them could go through five records per week in addition to their regular work. It would take over 20 weeks to review the full list of 402 names. By reducing the number to 224, it would take roughly 11-12 weeks. Much better! I’d say my work days went to good use. Now I can estimate a reasonable date for completing the set of MARC authority records for our current faculty. Let’s add a week or two (or four!) for the “known unknowns.” Production fires always pop up and keep one from working on projects, and summer is a time of absences due to conference attendance and vacation. I think it’s doable to get all of the records done by mid-August, and I would be thrilled if our work were done by September. We’re not NACO independent yet, so it will take a bit longer to add the records to the NAF; they need our reviewer’s stamp of approval. Then we’ll be ready to roll with a Linked Data project.

It would be convenient if there were a way to batch search the NAF/VIAF, since the VIAF is already exposed as linked data. I’m not aware of any such functionality, so I’ve decided we should keep a local set of the MARC authority records. I suspect it will make things simpler in the long run if we serve the data ourselves. I also suspect that having a spreadsheet with the names and their associated identifiers (LCCN, ARN, and, eventually, VIAF number) will be useful. It may seem weird to keep database identifier numbers when one has the full MARC records. I’ve learned, however, that having data in a delimited format is invaluable for batch processing. It takes a blink of an eye to paste a number into a spreadsheet when you’re reviewing or creating a full record anyway. Sure, I could create a spreadsheet of the ARN and LCCN on the fly by extracting the 001 and 010 fields from the MARC records. But that’s time and energy. If it’s painless to gather data, one should gather data.
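
In fairness to future me, the on-the-fly extraction really is only a few lines; here is a sketch with pymarc. The file names are placeholders, and the id.loc.gov URI is built from the LCCN because that’s where the name should resolve once we treat these as linked data:

```python
# A sketch of the on-the-fly extraction: ARN (001), LCCN (010 $a), and the
# id.loc.gov URI the LCCN implies, written to a tab-delimited file.
# File names are placeholders; assumes the pymarc library.
from pymarc import MARCReader

with open("local_authority_set.mrc", "rb") as marc_file, \
        open("arn_lccn_uri.txt", "w", encoding="utf-8") as out:
    for record in MARCReader(marc_file):
        arns = record.get_fields("001")
        lccns = record.get_fields("010")
        arn = arns[0].data if arns else ""
        lccn = (lccns[0]["a"] or "").strip() if lccns else ""
        uri = f"http://id.loc.gov/authorities/names/{lccn.replace(' ', '')}" if lccn else ""
        out.write(f"{arn}\t{lccn}\t{uri}\n")
```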

We’ll be able to do something interesting with the records once we have exposed the full set as linked data (or at least know the NAF or VIAF numbers so we can point to the names as linked data). No, I don’t know yet what that something interesting will be, but I’m getting closer to imagining the possibilities. I’ve mentioned before that I get dazed and confused when faced with implementing a linked data project. Two blog posts crossed my feed yesterday which cleared my foggy head a little. Run, don’t walk, to Karen Coyle’s piece on Visualizing Linked Data and Ed Summers’ description of DOIs as linked data (btw, nice diagram, Ed!). I’ve got more to say about my mini-epiphany, but this post has gone on far too long already. I’ll think aloud about it here another day.

MarcEdit how-to videos

Posted by laura on April 11, 2011 under Metadata | Be the First to Comment

I had the pleasure of attending an advanced MarcEdit workshop this past Friday taught by none other than Mr. MarcEdit himself, Terry Reese. I learned quite a few tips and tricks. Most important for me was learning how the regular expression engine functions and which extensions Terry included (not too many, yay for sticking to the .NET framework!). The portion on programming MarcEdit from the command line was a bit beyond my ken, but it was cool to see; I’ll file it under must-learn-someday.
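
A tiny example of the kind of find-and-replace I mean, run over a line of MarcEdit’s mnemonic (.mrk) text. I’ve written it with Python’s re module because that’s what I can poke at from home; MarcEdit’s engine is .NET, but simple patterns like this look much the same:

```python
# A tiny example of a regex find-and-replace over a line of MarcEdit's
# mnemonic (.mrk) text: strip the trailing period from a 650 field.
# Shown with Python's re module; MarcEdit's engine is .NET.
import re

line = "=650  \\0$aBlack holes (Astronomy)$vCongresses."

# Capture the whole field minus any trailing period, and keep just the capture.
cleaned = re.sub(r"^(=650 .*[^.\s])\.$", r"\1", line)
print(cleaned)   # =650  \0$aBlack holes (Astronomy)$vCongresses
```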

Terry has quite a few YouTube videos demonstrating how to make the most of the program. They have been available for a couple of years, but they’re worth reminding folks about. And hey, it’s Monday, it’s a movie, and it’s metadata related.