Marshall Breeding came to Barcelona and, during his visit, gave a seminar entitled "Global Business and Technology Trends in Library Automation". The seminar was organised by the CBUC, the Catalan university library consortium, and held on 2 April. It was open to all of the consortium's members, their collaborators and companies from the sector.
Francesc and I were able to attend and talk to Mr Breeding about the trends that will define the future of library services in academic libraries.
Of the questions that came up, one of the most important was the current transition from integrated library systems to library services platforms, a term coined by Breeding to describe the evolution beyond current systems, which are still designed to manage print collections.
Library services platforms are instead designed for content management regardless of format, integrating into one single platform the content currently dispersed across a range of systems at libraries: ILS + electronic resource management + OpenURL link resolver + digital collections management platform + institutional repository + discovery-layer services for broader access to library collections + …
One of the main features of these platforms is their flexibility: the ability to manage different kinds of library materials, multiple metadata formats and the corresponding workflows.
One of the issues stressed by Breeding was the need for these new platforms to move away from the 'black box' paradigm and become open systems. Openness, he argued, is the backbone of a library's technology strategy. Libraries need to do more with their data, and they have to be able to process those data easily; open APIs and interoperability are therefore key in these systems.
The future of these systems depends on easier access to the data held in their knowledge bases, which are their greatest asset, and on the flexibility to generate and process a growing diversity of data and metadata models.
A few days ago I was asked how we access the ILS database to use the data we store in it. My answer was the one I always give on this subject: direct access to the database is impossible. As far as I know, the great majority of ILS products keep the data locked in their databases.
Commercial ILSs are usually black boxes.
Libraries need the information stored in the ILS. Bibliographic records are needed whenever the library runs a discovery tool or an alternative OPAC interface such as VuFind or a Drupal-based OPAC display. If the discovery tool comes from the same software company there is no problem; if it comes from another company, there is.
Another important subject is the analysis of this information. At a time when budget problems are affecting libraries, analysing collection use and patron behaviour is crucial. We have addressed the need to extract data from the ILS by developing tools that manage this information and load it into other databases or systems for processing. Projects like Cyclops and Toto, which we are currently working on, use Millennium data extractions; we have also worked on extractions from the ILS for loading into Summon.
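These extractions typically reach us as delimited text files. As a rough, hypothetical sketch (the column names and the aggregation chosen here are invented; real Millennium exports depend on the fields selected in the extraction), processing one might look like this:

```python
import csv
import io

# Hypothetical Millennium export. Column names are invented for this sketch;
# a real export carries whatever fields were selected when it was generated.
sample_export = """RECORD #(ITEM),LOCATION,TOT CHKOUT
i1000001,main,12
i1000002,branch1,0
i1000003,main,3
"""

def checkouts_by_location(export_text):
    """Aggregate total checkouts per location from a delimited export."""
    totals = {}
    reader = csv.DictReader(io.StringIO(export_text))
    for row in reader:
        loc = row["LOCATION"]
        totals[loc] = totals.get(loc, 0) + int(row["TOT CHKOUT"])
    return totals

print(checkouts_by_location(sample_export))  # {'main': 15, 'branch1': 0}
```

Once the data is outside the black box, this kind of aggregation is trivial; the hard part is always getting the extraction in the first place.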
Innovative Interfaces, the company behind the Millennium ILS, has announced the Sierra ILS. The aim of this new software is to be more open: it uses PostgreSQL as its database and provides users with multiple APIs to connect to the system. We will have to see what these APIs cost and what type of information they let us access.
In any case, the information is always there, whether in a black box or openly available. The key is finding out what that information can offer us and using our imagination to do something valuable with it.
I have recently been asked several times about my job and what I really do.
I tried to find a good description on the web but, in the end, I decided to write this post with my own description.
I usually call myself a systems librarian, but others call this position an IT librarian or digital initiatives librarian, among other alternatives.
The systems librarian is the person who deals with the library‚Äôs IT.
The need for a systems librarian arose when libraries started automating tasks, converting card catalogues into automated catalogues. That is when a librarian was needed who could handle the data and deal with library software companies.
Originally, the person in this position was in charge of the ILS and its modules (OPAC, cataloguing, acquisitions, circulation, serials, etc.). Since library automation, a huge number of technologies have emerged (the web, electronic resources, databases, etc.) and the systems librarian now has to manage all of these as well.
Currently, then, the role of a systems librarian is to deal with information, technologies and librarians. The position varies widely, with many different responsibilities depending on the library organization.
Another question is whether there is a specific qualification for this job, and the answer is no. Systems librarians are usually self-taught librarians with a knowledge of technology. They need the ability to learn constantly whenever they start a new project.
The internet acts as a place for all these professionals to meet and share knowledge, using tools like blogs and Twitter. A good starting point for finding information about systems librarians is the ALA TechSource Twitter account: it offers news from the ALA, and its followers include many interesting professionals from the field.
So, this is my description of a systems librarian: a strange mix of librarian and IT skills.
At this time of year it is time to think about the future and to sum up everything we have worked on, as in the "Top tech achievements of the year" post. In this case, I want to sum up what I have learned this year.
The next task was loading records into the Millennium ILS, so we had to deal with record load tables. We needed to import order and patron information into Millennium, and we were able to load the data using our own load tables. We decided to create MARC records by mapping Millennium tables; as well as creating these tables, we also needed to transform the data into MARC records and then into ISO2709 for loading into the ILS. Since we had the data in a Java application, we used marc4j to export it to ISO2709.
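Our actual export went through marc4j from Java; purely to illustrate what the ISO2709 wire format involves (a 24-byte leader, a directory of tag/length/offset entries, then the data fields and terminators), here is a minimal pure-Python sketch with invented tags and values:

```python
# Minimal sketch of ISO2709 serialisation, the format marc4j produced for us
# from Java. Tags and values below are invented for the example; a real record
# needs proper leader codes and control fields.
FT, RT, SF = b"\x1e", b"\x1d", b"\x1f"  # field terminator, record terminator, subfield delimiter

def to_iso2709(fields):
    """Serialise (tag, raw_field_bytes) pairs; subfield delimiters already included."""
    directory, data, pos = b"", b"", 0
    for tag, value in fields:
        field = value + FT
        directory += f"{tag}{len(field):04d}{pos:05d}".encode("ascii")
        data += field
        pos += len(field)
    directory += FT
    base = 24 + len(directory)        # leader (24 bytes) + directory
    length = base + len(data) + 1     # + record terminator
    leader = f"{length:05d}nam a22{base:05d}   4500".encode("ascii")
    return leader + directory + data + RT

record = to_iso2709([("245", SF + b"aAn invented title")])
print(int(record[:5]), len(record))  # prints "58 58": length field matches actual length
```

The point of the sketch is only the structure: the record length and base address are written into fixed positions of the leader, which is why hand-editing an ISO2709 file almost always corrupts it.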
Managing and loading patron information represented another challenge.
Thousands of patron records had to be processed, grouped and formatted for loading into Millennium. MySQL was the best option for managing the information beforehand, so I installed a LAMP server on Ubuntu, learned some SQL basics and, with the help of phpMyAdmin, created the filters, formatted the data and grouped the patrons to be loaded. The next step was to create a MARC record for each patron and load it: Perl added the MARC tags and created the records, and yaz-marcdump transformed them into ISO2709.
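The grouping step itself is plain SQL. The real work used MySQL and phpMyAdmin; the sketch below uses Python's built-in sqlite3 only to keep the example self-contained, and the table and column names are invented:

```python
import sqlite3

# Sketch of the patron-grouping step. Table and column names (patrons,
# barcode, ptype, expires) are invented for illustration; the real data
# lived in MySQL and was managed through phpMyAdmin.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE patrons (barcode TEXT, ptype TEXT, expires TEXT)")
con.executemany(
    "INSERT INTO patrons VALUES (?, ?, ?)",
    [("b001", "student", "2025-09-30"),
     ("b002", "student", "2025-09-30"),
     ("b003", "staff",   "2026-01-31")],
)
# Group patrons by type so each batch can be loaded with its own load table.
rows = con.execute(
    "SELECT ptype, COUNT(*) FROM patrons GROUP BY ptype ORDER BY ptype"
).fetchall()
print(rows)  # [('staff', 1), ('student', 2)]
```

Each group then becomes one batch file, which is what makes the subsequent MARC tagging and loading manageable.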
The next project was to work with Symfony, the PHP web development framework, as a more structured way of developing, but it will take some time to get used to it: another challenge.
So, that was a summary of the things that I did this year, but in this job learning is part of the journey. If you want more information on any of the subjects I work with or you have information that could be helpful, please add a comment so we can help each other.
These days, apart from celebrating the first anniversary of libtechnotes, we are working on a new project aimed at creating a loan service management tool.
Each ILS has the option of showing data on library transactions: how many checkouts, renewals, holds, etc. This is standard data easily found in each system's reports function.
However, the UOC Library needs more information: we want to calculate how long it takes each book to travel from one place to another. The Library has a big warehouse where most of the collection is stored; individual items are sent from there to the branches where users have asked to pick them up, or directly to users' homes.
The Millennium ILS does not collect this information, which is essential to improving service quality.
This is just one of the issues we would like to solve by implementing a tool to monitor the loan service. As the project is defined, new requirements will come up.
Right now, we are analysing what data can be exported from Millennium. This information cannot be managed inside Millennium itself, which is why we are working on data extractions to be managed and displayed in a new tool.
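As a hypothetical sketch of the kind of processing the new tool could do, assume an extraction gives us, per item, a timestamp when it leaves the warehouse and another when it arrives at a branch (these fields are invented; Millennium does not record them directly, which is exactly the gap we are trying to fill):

```python
from datetime import datetime

# Hypothetical extraction rows: (item barcode, left warehouse, arrived at branch).
# The timestamps stand in for whatever the extraction can actually provide.
transits = [
    ("b100", "2012-05-02 09:00", "2012-05-03 13:00"),
    ("b101", "2012-05-02 09:00", "2012-05-04 09:00"),
]

FMT = "%Y-%m-%d %H:%M"

def transit_hours(rows):
    """Hours in transit per item, from leaving the warehouse to arrival."""
    return {
        barcode: (datetime.strptime(arrived, FMT)
                  - datetime.strptime(left, FMT)).total_seconds() / 3600
        for barcode, left, arrived in rows
    }

print(transit_hours(transits))  # {'b100': 28.0, 'b101': 48.0}
```

Averaging these figures per branch would give the service-quality indicator the reports function cannot produce.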
On our labs website we will be explaining what we are doing, step by step, and we will try to update the site with each new development in the project. At present it merely states the project's goals, but a work-in-progress section will allow visitors to follow the project's progress.
We think the information that we are dealing with could be useful for other libraries with the same needs. Perhaps other libraries are working on it, too.
If you are working on a similar project or have useful information, please post a comment. We could help each other.
Let's keep working to develop a useful tool that, who knows, may be implemented by other libraries that also use the Millennium ILS.