Moving into MODS and beyond

Posted by laura on August 9, 2012 under Integrated Library Systems, Standards

I’m stoked about the work I’ve been doing lately. I finally get to do some hands-on production work with metadata standards beyond AACR2/RDA/MARC, DC, and EAD. I taught myself XML in the late 90s, and I’ve spent about a decade avidly following the progress of METS, MODS, and PREMIS without any practical application. Learning in a vacuum sucks. Getting neck-deep into a project gives me a firmer grasp of the concepts.

We’re currently revamping our Archives systems architecture. This has involved a year of analyzing workflows and current system functions, scoping out the functional requirements of what we need our systems to do, and evaluating the various solutions available. Our integrated archival management system is a bespoke FileMaker Pro database (well, actually several databases). It ties together patron management, financial management, archival description/EAD generation, digital object management, and the web/interface layer. It has worked well for us, but the system is 20 years old and won’t scale to handle more complex digital archiving. Ideally, a new system for Archives would be as “integrated” as our custom-developed system. Unfortunately, no such Integrated Archival System exists.

We decided to go with ArchivesSpace for archival description, Aeon for patron/financial management, and Fedora/Islandora for our digital asset management system (DAMS). A unified interface layer wasn’t a critical component for us in the near term; we figure we can do something there after the other components are implemented. The strategic goal was to create an architecture with what I call the four systems virtues: extensibility, interoperability, portability, and scalability. We wanted to future-proof ourselves as much as possible.

We’re implementing Fedora/Islandora in our first phase. We’re starting by migrating our small collection of 10,000 digitized images from FileMaker Pro to Islandora, with the help of the folks from Discovery Garden. We’re in the metadata mapping stage, making decisions on schema structure and indexing/searching/display functions. We’re considering using a modified MODS schema with some local and VRA Core elements. I’ve needed to climb a relatively steep learning curve quickly. First, I don’t have a detailed knowledge of data content standards for cataloging images. It’s not a medium one typically handles in science & engineering. I’ve been reacquainting myself with Cataloging Cultural Objects (CCO) so I can help our archivists with descriptive data entry. Knowledge of data content should inform choices of data structure and format (i.e., which elements of MODS and VRA Core to include in our schema). Second, I’m learning the Fedora “digital object” model and how it relates to Islandora functionality. A digital object in this context is not a digital object in the librarian’s sense, i.e., a label for born-digital or digitized content. Third, I’m simultaneously working out the crosswalk between our legacy records and MODS while specifying our future image descriptive cataloging needs.
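To make the mapping concrete, here’s a rough sketch of the kind of record we might end up with: a basic MODS description for one digitized image, with a VRA Core element carried inside a MODS extension. The element choices, the title, and the legacy field names in the comments are hypothetical placeholders rather than our actual mapping, and the VRA usage would need to be checked against the VRA Core 4 schema.

<mods xmlns="http://www.loc.gov/mods/v3"
      xmlns:vra="http://www.vraweb.org/vracore4.htm">
  <!-- hypothetical crosswalk: legacy FileMaker "Title" field -->
  <titleInfo>
    <title>Wind tunnel test, Building 12</title>
  </titleInfo>
  <typeOfResource>still image</typeOfResource>
  <!-- hypothetical crosswalk: legacy "Date" field -->
  <originInfo>
    <dateCreated encoding="w3cdtf">1947</dateCreated>
  </originInfo>
  <physicalDescription>
    <form authority="marcform">print</form>
  </physicalDescription>
  <subject>
    <topic>Wind tunnels</topic>
  </subject>
  <identifier type="local">img_000123</identifier>
  <!-- a VRA Core element we might keep locally, wrapped in a MODS extension -->
  <extension>
    <vra:technique>gelatin silver print</vra:technique>
  </extension>
</mods>

Which of those elements actually get indexed, searched, and displayed in Islandora is exactly the kind of decision we’re working through with Discovery Garden right now.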

My biggest conceptual challenge right now is figuring out how to implement best practices for image cataloging within Islandora/Fedora. Per CCO/VRA Core, there should be a clear distinction between the work and the image (analogous to the FRBR work and expression/manifestation). In theory, this can be done by having two records and relating them via wrapper metadata like METS, or by using “related item” elements within the descriptive metadata schema. In practice, I simply don’t know how Islandora can manage it. Fortunately, Fedora was made for this type of thing, so I assume that it’s possible. Obviously I’ll be looking at the work of others to inform our choices of metadata structure.
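For illustration, here’s one way the “related item” approach might look if we stay within a single MODS record per image: the image gets the full record, and the work it depicts is described minimally inside a relatedItem. This is a hypothetical sketch with invented titles and identifiers, not a settled design.

<mods xmlns="http://www.loc.gov/mods/v3">
  <!-- record for the image (the surrogate we manage in Islandora) -->
  <titleInfo>
    <title>Photograph of the campus entrance arch</title>
  </titleInfo>
  <typeOfResource>still image</typeOfResource>
  <!-- minimal description of the work the image depicts -->
  <relatedItem type="original">
    <titleInfo>
      <title>Campus entrance arch</title>
    </titleInfo>
    <originInfo>
      <dateCreated>1920</dateCreated>
    </originInfo>
    <identifier type="local">work_0042</identifier>
  </relatedItem>
</mods>

The other route, separate work and image records tied together with METS or with relationships between Fedora objects, would keep the work description reusable across multiple images; whether Islandora’s ingest and display handle that gracefully is exactly what I need to find out.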

Fortunately, I consider this fun.

Reviewing the ILS

Posted by laura on July 26, 2010 under Integrated Library Systems

OCLC has released more modules for their “cloud-based” integrated library system. This has lit our fire. We’ve known for a while that we need to review how we use our ILS and determine whether there is a business case for retaining it or migrating to another option. The OCLC option represents a sea change in library automation. It’s the only web-services-based ILS on the current market.

It is time for us to stop talking and start doing. We know how, in a rough sense (needs assessment, functional requirements, review). It’s the devilish details that make the process intimidating. The ILS is one of the largest expenses in a library, and almost all daily work processes touch it. It is a capital-B Big Deal. Any mistake will be costly.

So, we’re planning. Something concrete will emerge shortly. Meanwhile, I await the publication of this. Guidance of any sort is welcome at this stage in the game.

NextGen catalog

Posted by laura on February 11, 2010 under Integrated Library Systems

I anticipate doing a thorough review of the integrated library system here at MPOW within the next couple of years. A wise friend once told me, “There are two types of librarians: those who have done an ILS migration, and those who will.” I’ve been lucky so far. In 15 years as a librarian, I haven’t yet had the pleasure of migrating an ILS. It remains to be seen whether we do a migration here. Periodic review of systems is necessary due diligence, and there is always a chance that we will discover our current system continues to meet our needs.

There have been many developments in the field since this library implemented its online catalog. The number of vendors selling ILSes has declined through mergers & acquisitions and economic attrition. The open source ILS movement has grown with the development of Koha, Evergreen, and others. There has been a movement to dis-integrate the integrated library system by separating the search layer from the administrative inventory modules. The options available are somewhat overwhelming. It’s time to evaluate the current state of the marketplace to ensure that we’re providing the best possible system for our customers’ needs.

That’s key. One needs to understand the functional requirements of a system before one can evaluate options. There’s no point going out to test-drive cars if you don’t know why you need a car and how you’re going to use it. So we need to do some user needs assessment. This could, and should, take many forms, and I haven’t yet brainstormed all the modes of information gathering available. There is usage information from our current systems. There is human inquiry to find out how our customers use our systems (surveys, interviews, focus groups). There is analysis of current workflows to determine how we use our systems on the back end. All of this information can be translated into use-case scenarios for our “ideal” system. Obviously there is no perfect system, but the idealized system gives us an evaluation template. We can prioritize which components of an ideal system are most critical. Then we have a checklist of features to compare with the functions provided by any given system.

It sounds simple in writing. In practice, it’s a long and difficult process involving multiple library departments and a variety of stakeholders. I’m glad it’s not an imminent project. It’s on my mind, though. I’m monitoring developments in the field and gathering quite the file of documents for my “to-read” pile.