I’ve mentioned that we want to get authority records for all current Caltech faculty into the National Authority File and, by extension, into VIAF. The first step is to ensure that we have a current and comprehensive list of all faculty working here. I’m happy to learn that I can easily obtain the information in a manipulable form. I was expecting that I would need to ask somebody in academic records and plead our case. Lists can be tightly guarded by the powers-that-be. I just figured out that you can convert HTML tables to Excel via Internet Explorer. That’s probably old news to most of you. I’ve done .xls-to-HTML conversion; I’ve just never had the need to go in the opposite direction. Plus I don’t use Internet Explorer.
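For the IE-averse, the same HTML-table-to-spreadsheet conversion can be scripted. Here’s a minimal sketch using only the Python standard library; the table markup and the CSV output are my own assumptions about what a directory page might look like, not a description of the actual pages:

```python
import csv
from html.parser import HTMLParser

class TableToCSV(HTMLParser):
    """Collect the text of each <td>/<th> cell, one row per <tr>."""
    def __init__(self):
        super().__init__()
        self.rows, self.row, self.cell, self.in_cell = [], [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.row = []
        elif tag in ("td", "th"):
            self.in_cell, self.cell = True, []

    def handle_endtag(self, tag):
        if tag == "tr" and self.row:
            self.rows.append(self.row)
        elif tag in ("td", "th"):
            self.in_cell = False
            self.row.append("".join(self.cell).strip())

    def handle_data(self, data):
        if self.in_cell:
            self.cell.append(data)

def html_table_to_csv(html, out_path):
    """Parse the first HTML table in `html` and write it as CSV."""
    parser = TableToCSV()
    parser.feed(html)
    with open(out_path, "w", newline="") as f:
        csv.writer(f).writerows(parser.rows)
    return parser.rows
```

Excel opens the resulting CSV directly, which gets you the same spreadsheet without a particular browser in the loop.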
I was able to create a spreadsheet of the necessary data by doing a directory search limited to faculty and running the conversion. Sweet! Now we can divvy up the work and get cracking. Getting the info is a small thing, but it’s these little victories that make my days brighter. I played around with the delimited-text-to-MARC translator in MarcEdit to auto-generate records from the spreadsheet. It worked like a charm. Unfortunately, the name info in the spreadsheet is collated within a single cell. It’s also in first name–surname order without any normalization of middle initials, middle names, or nicknames in parens. A text-to-MARC transform can only work with the data it is given, and a bunch of records with 100 fields in the wrong order isn’t so helpful. I messed about with the text-to-columns tool in Excel in order to parse the name data more finely, to no avail. It worked, but it would require so much post-split intervention to ensure the data is correct that we might as well do that work within Connexion.
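To give a flavor of why the split is messy, here’s a rough heuristic for inverting names of the form “First (Nick) Middle Surname” into the surname-first order a 100 field wants. The sample names are made up, and the final-token-is-the-surname assumption is exactly the kind of guesswork that demands human review afterward:

```python
import re

def invert_name(raw):
    """Convert 'First (Nick) Middle Surname' to 'Surname, First Middle'.

    A rough heuristic: strip parenthesized nicknames and assume the
    final token is the surname. Compound surnames and suffixes like
    'Jr.' will come out wrong, so every result still needs review.
    """
    name = re.sub(r"\s*\([^)]*\)", "", raw).strip()  # drop "(Nick)"
    parts = name.split()
    if len(parts) < 2:
        return name
    return "{}, {}".format(parts[-1], " ".join(parts[:-1]))
```

Run over a few hundred rows, this gets you a usable first pass, not finished headings.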
In fact, I’m OK with creating the authority records from scratch since we’re training to be NACO contributors. People need the practice. In my experience, it’s easier to do original cataloging than to edit derived records. Editing requires a finer eye, and original work can be helped along with constant data and/or macros. Regardless, it was fun to play with the transform and teach myself something new. And it’s very exciting to take a step towards meeting our goal of authority/identity information and identifiers for our constituents.
As promised, here is the link to the LOD-LAM participants group: http://lod-lam.net/summit/participants/. Each of the attendees has a brief biography, including yours truly, which pleases me even more. Congratulations to the organizers for putting the “international” into International Linked Data Summit. The detailed raw data list shows where everybody hails from. It’s heavy on English-speaking countries, especially the U.S., which isn’t surprising given the cost of flying over the oceans. It’s great that so many people are there representing speakers of other languages.
Off now to review some authority records. I have a rare day sans meetings which means I get to get a lot of work-work (vs. work about work) done.
LOL. Andy Powell over at eFoundations gave me a good chuckle this a.m. by referring to structured data which could become linked data as “limp data.” Andy, prepare yourself to be quoted more often in linked data presentations. Speaking of linked data, the LOD-LAM summit participant list will be posted today on the web site. I’m keen to see who I’ll be meeting and what projects they’re working on.
I have a confession to make: I have a lot to learn about making linked data available. It’s slightly embarrassing given my ardent desire to do a linked data project here. I get how it works in theory (and please teach me if my take on the gist of it is wrong): put your metadata into triple stores with URIs, expose it, and layer a useful interface over top. It’s the application that leaves me dazed and confused. I’m fuzzy on the difference between the semantic web writ large and linked data, especially when documentation uses comp-sci jargon like “serializing data.” What a way to scare off the normals. (BTW, Wikipedia has a somewhat understandable explanation.) I took a relational database design class in library school, but I wasn’t exposed to the interplay between internet communication protocols and the contents of database tables. To be fair, I took the class back in the Internet dark ages (1995, in case you’re counting). At that time the web was a place of flat documents, and fewer people were thinking about web protocols as a mechanism for interlinking databases. My dated knowledge means I get a bit flummoxed when I contemplate doing anything more complicated than putting RDFa into my static web pages.
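For what it’s worth, the theory above can be made concrete with plain Python, no triple store required. “Serializing data” just means writing an in-memory structure out in an agreed-upon text format; here, triples written in the N-Triples syntax. The URIs below are made up for illustration:

```python
def to_ntriples(triples):
    """Serialize (subject, predicate, object) tuples as N-Triples lines.

    Subjects and predicates are URIs and get angle brackets; an object
    that looks like a URI gets brackets too, and anything else becomes
    a quoted literal. That text format is the 'serialization'.
    """
    lines = []
    for s, p, o in triples:
        obj = "<{}>".format(o) if o.startswith("http") else '"{}"'.format(o)
        lines.append("<{}> <{}> {} .".format(s, p, obj))
    return "\n".join(lines)

# Hypothetical URI for one faculty member, using the real FOAF name property
triples = [
    ("http://example.edu/faculty/42",
     "http://xmlns.com/foaf/0.1/name",
     "Jane Doe"),
]
```

Exposing the output of something like this at a stable URL is, in miniature, the “put it in triples and expose it” step; the dazing part is doing it at scale.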
What I struggle with is figuring out the level of technical proficiency a metadata librarian needs to attain in order to play in the semantic web sandbox. The line between metadata librarian and coder gets blurry. Libraries, archives, and museums have “limp data.” They may or may not have a database guru. They may or may not have funds. So librarians/archivists/museum curators need to DIY if they are to get their data from limp to linked. Or at least understand how it all works under the hood so they can delegate or outsource the implementation (and write grant applications to underwrite it). I learn best when I dive in and get my hands dirty, so I applied to the LOD-LAM summit. It’s forcing me to figure out a do-able project, bone up on my tech skills, and put some of our data out there. I’m hoping that I can somehow translate the experience into librarian-speak so I can help other institutions expose their unique content. I need to be honest about my ignorance, however. I’m sucking up the slight self-conscious discomfort and starting where I am.
When I was a newbie manager I experimented with more regular staff meetings for the people in the Metadata Services Group. I wanted to incorporate shared learning and group discussion into our meetings to make training more fun and relevant. So I added metadata videos to our Monday morning agenda. I would bring homemade vegan muffins to encourage attendance and participation since we met early and it was Monday after all (pix available via Flickr!). We called it the 4M: Monday morning metadata movies & muffins. You can pronounce that Mmmm.
We eventually abandoned that experiment. Folks liked the videos, but wanted to watch them on their own time. Since then, my periodic sharing of links for metadata-related videos with the folks on my team has dwindled. I was reminded of this practice when a dear friend recently asked me for the links to the videos. I was also reminded of this when Mod Librarian started posting a Metadata Monday series on her blog. Great minds and all that. I’ve finally managed to post the link to the YouTube play list for the late-lamented (at least by me) experiment. Drum roll please…for your viewing pleasure:
The 4M: Monday Morning Metadata Movies play list.
Caveat: the movies we watched were not always strictly about metadata, but they were on topics relevant to metadata management within academic libraries. They were intended for an audience of paraprofessionals & professionals. And sometimes they were more fun than educational.
Some past 4M videos which weren’t on the YouTube play list:
I’m inspired now to resume my quest for videos relevant to metadata workers in academic libraries. Perhaps I’ll even post them each Monday. Or at least on some Mondays. But I don’t promise to bake vegan muffins on Sunday nights.
I spent a fun day at the regional OCLC Good Practices, Great Outcomes event yesterday, where I was an invited speaker. It’s always a treat to hear Roy Tennant give a keynote. I was very impressed with the efficiencies Helen Heinrich implemented at CSU Northridge and the big dent Sharon Benamou made in the cataloging backlog at UCLA. Holly Tomren did a fabulous job summing up the major themes which emerged. Video and slides from the event will be made available on the OCLC website soon. I promise to share the links. Meanwhile, I’ve put my slides up on SlideShare.
It was great to give a talk again. I haven’t presented professionally in several years. I used to do it frequently but fell out of the habit when I switched career streams from public to technical services. Partially it was due to lack of time. I was busy learning the intricacies of MARC and volunteering my time on CC:DA during the development of RDA. Partially it was due to major illness. I spent a good chunk of 2009 on medical leave. And partially it was due to self-doubt. As a new metadata maven I wanted to have something useful to discuss before I began speaking about my work.
The best part of doing the talk was figuring out those things we’re doing at Caltech which may be useful for other tech services librarians. Reflecting back on my four years here, I realize we’ve accomplished a great deal.
- We’re adding more bibliographic records to our ILS despite a reduction in staff — on the order of tens of thousands more. That’s the beauty of batch loading and purchasing record sets.
- We’re more efficient at our batch loading because we’ve tapped into regular expressions (shout-out to Terry Reese). MarcEdit has been the major player in that gain.
- We’ve learned how to apply business process analysis techniques to review our workflows and improve them, freeing up time for training and developing next-generation metadata services.
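To illustrate the regex angle: MarcEdit’s find-and-replace accepts regular expressions over its mnemonic MARC format (one field per line, `=TAG indicators$a…`), and the flavor of a typical batch edit can be sketched in Python. The 949 local field and the location codes below are hypothetical:

```python
import re

# MarcEdit-style mnemonic MARC: one field per line.
record = [
    "=100  1\\$aDoe, Jane.",
    "=245  10$aSample title /$cJane Doe.",
    "=949  \\\\$aOLDLOC",  # hypothetical local field carrying a location code
]

# The kind of edit we script instead of doing by hand: rewrite every
# 949 $a location code in one pass, across thousands of records.
fixed = [re.sub(r"^(=949 .*\$a)OLDLOC", r"\1NEWLOC", line) for line in record]
```

One expression like this, applied to a whole file of vendor records, is where the tens-of-thousands-more numbers come from.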
I have to give credit where credit is due. The Metadata Services Group team has really stepped up to the plate and wholeheartedly embraced the changes we’ve made. I’m so proud of them. It was easier for me to stand up and talk to a hundred or so people because I could share their success.
I’m stoked. I’ve been accepted to the International Linked Open Data in Libraries, Archives and Museums Summit. From the about page, the summit “will convene leaders in their respective areas of expertise from the humanities and sciences to catalyze practical, actionable approaches to publishing Linked Open Data,” specifically to:
- Identify the tools and techniques for publishing and working with Linked Open Data
- Draft precedents and policy for licensing and copyright considerations regarding the publishing of library, archive, and museum metadata
- Publish definitions and promote use cases that will give LAM staff the tools they need to advocate for Linked Open Data in their institutions
It’s exciting because of its potential to spark real progress for library linked data. I’m keen to be involved with projects where I can get my hands dirty. I’m pretty much done with librarian conferences like ALA. IMHO, ALA is an echo chamber of how-we-done-it-good presentations and yet-another-survey research. I went to an ERM presentation at the mid-winter meeting and heard a speaker discuss workflows that I’ve seen implemented in libraries for the past 13 years. Seriously. ALA is good for networking with fellow librarians to be sure, but it isn’t the place to get bleeding-edge information. I’m ready to give my time and effort to breaking new ground. I’m very fortunate that my boss is incredibly supportive of my LOD-LAM participation.
We want to do a linked data project with author identifiers for our faculty. We’re a small institution. We’ve got roughly 300 current faculty members which is a small enough number for us to create a complete set of records within a reasonable amount of time. Our goal is to contribute our metadata to the commons and to share our experience as a use case. I’m quite honored to be invited. I’ve been following the work of some members of the organizing committee for years and I’m very much looking forward to finally meeting them.
There are plenty of communities that manage metadata besides libraries. I frequently see job postings for data curators here at MPOW. I’m considering starting a metadata or data curation interest group on campus. I think librarians need to be proactive about the types of metadata services we can provide to our customers. Some of our peer libraries do a great job making metadata services a public service. See how MIT, Cornell, Indiana University, and the University of Wisconsin promote their metadata expertise.
At this point in time, I think our library can manage taking on a consulting type of role. Most of the people managing metadata on campus are specialists with advanced degrees in the discipline. There’s a reason for that. Their “collections” require subject expertise in order to properly create descriptive metadata. The experts don’t necessarily have training in creating metadata or doing digital preservation, however. And they probably run into the typical issues in managing metadata that libraries do. At the very least it would be useful to network with people that share common interests. It can only help us figure out how the library fits in with the emerging paradigms of scholarly communication.
I’m looking at incentives for making our serials holdings MARC-standard compliant. The MARC Format for Holdings Data (MFHD), pronounced “muffed” I’m told, isn’t supported very well within our ILS, where the MFHD data is held within check-in records. That makes sense to a degree; one needs coverage ranges when checking in journals. The data is buried, however, in a place where most people using the ILS, whether customers or staff, will never see it. We would love to get it current, correct, and usable.
The biggest reason for standardizing is to make interlibrary loan work more smoothly. We get requests for “titles-not-owned” when OCLC indicates we own a journal but we don’t have a specific issue. This brings down our fulfillment rate, which makes us naughty players in the shared-resources game. But what are the consequences of that? I’m not quite sure at the moment. Patrons beyond Caltech are important to us, absolutely, yet they fall lower in our priority queue than Caltech faculty, staff, and students. When resources are limited, we focus on projects with the biggest payoffs for our primary user group.
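The fulfillment gap boils down to a range check that title-level holdings can’t support: OCLC can answer “do they own this journal?” but not “do they own this volume?”. A toy sketch, with made-up coverage ranges:

```python
def we_hold(ranges, volume):
    """Return True if the requested volume falls within any held range.

    `ranges` is a list of (first, last) volume pairs, with None as the
    last value for an open-ended current subscription. This volume-level
    answer is what issue-level holdings data would let ILL give.
    """
    return any(lo <= volume and (hi is None or volume <= hi)
               for lo, hi in ranges)

# Hypothetical holdings: v.1-20 held, v.21-29 missing, v.30- current
holdings = [(1, 20), (30, None)]
```

With only a title-level “yes, owned,” a request for the missing v.25 arrives anyway and fails, which is exactly the titles-not-owned problem above.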
There are other good reasons for standardizing. Machines manipulate standardized data better. It’s a metadata truism. Let’s ignore the real-world issues with interoperability that have been demonstrated over the years. Those are really a result of human factors. We all know that standardized data is not truly standardized. See Naomi Dushay and Diane Hillmann’s excellent identification of problems encountered in sharing Dublin Core records. But let’s live in an ideal world for a minute and say that we did get our data nice and clean and in a standardized format. All of a sudden we would have the means to re-use our data outside of our ILS. Theoretically at least. Much depends on the export capacity of our ILS.
It would be lovely if we could better automate maintenance of coverage ranges within our OpenURL resolver, for example. I’m sure there are more rationales for holdings standardization that I haven’t thought about. I’ve begun reviewing the literature. We can’t make a decision to do a large conversion project based on all of these feel-good reasons, however. The business case relies upon multiple factors: the state of our current data, the capacities of our ILS, the interoperability of our ILS and OCLC, and our staffing and budgetary resources. All of these need thorough analysis. So we’re holding on holdings at present while we gather information and ask hard questions. Ultimately it comes down to answering the question, will the payoff be worth the investment? Stay tuned.
OCLC has released more modules for their “cloud-based” integrated library system. This has lit our fire. We’ve known for a while that we need to review how we use our ILS and determine whether there is a business case for retaining it or migrating to another option. The OCLC option represents a sea change in library automation; it’s the only web-services-based ILS on the current market.
It is time for us to stop talking and start doing. We know how in a rough sense (needs assessment, functional requirements, review). It’s the devil-y details which make the process intimidating. The ILS is one of the largest expenses in a library. Almost all daily work processes touch it. It is a capital B big deal. Any mistake will be costly.
So, we’re planning. Something concrete will emerge shortly. Meanwhile, I await the publication of this. Guidance of any sort is welcome at this stage in the game.
I anticipate doing a thorough review of the integrated library system here at MPOW within the next couple of years. A wise friend once told me, “there are two types of librarians: those who have done an ILS migration, and those who will.” I’ve been lucky so far. In 15 years as a librarian I haven’t yet had the pleasure of migrating an ILS. It remains to be seen whether we do a migration here. Periodic review of systems is necessary due diligence. There is always a chance that we will discover our current system continues to meet our needs.
There have been many developments in the field since this library implemented its online catalog. The number of vendors selling ILSs has declined through mergers & acquisitions and economic attrition. The open source ILS movement has grown with the development of Koha, Evergreen, and others. There has been a movement to dis-integrate the integrated library system by separating the search layer from the administrative inventory modules. The options available are somewhat overwhelming. It’s time to evaluate the current state of the marketplace to ensure that we’re providing the best possible system for our customers’ needs.
That’s key. One needs to understand the functional requirements of a system before one can evaluate options. There’s no point going out to test-drive cars if you don’t know why you need a car and how you’re going to use it. So we need to do some user needs assessment. This could, and should, take many forms. I haven’t yet brainstormed all the modes of information gathering available. There is usage information from our current systems. There is human inquiry to find out how our customers use our systems (surveys, interviews, focus groups). There is analysis of current workflows to determine how we utilize our systems from the back end. All of this information can be translated into use-case scenarios for our “ideal” system. Obviously there is no perfect system, but the idealized system provides us with an evaluation template. We can prioritize which components of an ideal system are most critical. Then we have a checklist of features to compare against the functions provided by any given system.
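That checklist comparison can be as simple as a weighted score per candidate system. The features, weights, and vendor scorecards below are invented for illustration; the real list would come out of the needs assessment:

```python
# Hypothetical feature weights from a needs assessment (3 = critical)
weights = {"batch_import": 3, "open_apis": 3, "mfhd_support": 2, "faceted_search": 1}

# Hypothetical scorecards: does each candidate system provide the feature?
systems = {
    "System A": {"batch_import": True, "open_apis": False,
                 "mfhd_support": True, "faceted_search": True},
    "System B": {"batch_import": True, "open_apis": True,
                 "mfhd_support": False, "faceted_search": True},
}

def score(features):
    """Sum the weights of the features a system actually provides."""
    return sum(w for f, w in weights.items() if features.get(f))

# Rank candidates from best to worst fit against the ideal-system template
ranked = sorted(systems, key=lambda s: score(systems[s]), reverse=True)
```

The point isn’t the arithmetic; it’s that prioritizing the ideal system’s components first is what makes the comparison defensible to stakeholders later.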
It sounds simple in writing. In practice, it’s a long and difficult process involving multiple library departments and a variety of stakeholders. I’m glad it’s not an imminent project. It’s on my mind though. I’m monitoring developments in the field and gathering quite the file of documents for my “to-read” pile.