Simon Whibley is Digital Collections Conservator at the British Library


The digital preservation community has invested a great deal of effort over the past decade into developing ways of evaluating the maturity and trustworthiness of preservation processes and services. The tools and frameworks now available to the community vary from relatively simple maturity models to highly-detailed audit standards designed to support the certification of services and organisations.

Over the past few years, the British Library and its partners have used a range of these assessment frameworks as a means of benchmarking our progress and for identifying gaps. This post will explore some of the lessons that we have learned from undertaking these assessments.

ISO 16363 self-assessment

Back in 2015, the Library conducted a self-assessment based on the ISO 16363:2012 standard. The standard defines 109 metrics covering all aspects of digital preservation services, including organisational infrastructures, digital object management, technical infrastructures and security risk management.

The objectives of the self-assessment were to evaluate the Library’s approach to preserving digital collections, to identify areas for improvement, and, following review with stakeholders, to develop an outline plan of action to address these.

The comprehensive nature of the framework meant that the self-assessment required input from across the whole Library; it could not, for example, focus purely on the systems used for the ingest and storage of digital content.

A key part of the assessment was gathering evidence from colleagues about how the Library was performing against particular metrics. Challenges included the identification of relevant stakeholders and finding time to schedule interviews. There was also a need to compile supporting evidence and documentation, while remaining flexible in determining the best method for recording findings across the many metrics.
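
By way of illustration only, the sketch below shows one way an evidence register of this kind could be structured in Python; the field names, status values and the single example entry are hypothetical and do not come from the Library's actual assessment.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class MetricFinding:
    """One row of a hypothetical evidence register for an ISO 16363 self-assessment."""
    metric_id: str        # metric number from the standard, e.g. "3.1.1"
    summary: str          # short paraphrase of what the metric asks for
    status: str           # e.g. "met", "partially met", "not met"
    evidence: list = field(default_factory=list)      # policies, procedures, interview notes
    stakeholders: list = field(default_factory=list)  # colleagues consulted about this metric

# Entirely invented entry, for illustration only.
register = [
    MetricFinding(
        metric_id="3.1.1",
        summary="The repository has a documented mission statement covering digital preservation",
        status="met",
        evidence=["Digital preservation policy (intranet)", "Interview notes, June 2015"],
        stakeholders=["Digital Preservation Team", "Collection Management"],
    ),
]

# A simple roll-up of findings by status helps to spot gaps across the 109 metrics.
print(Counter(finding.status for finding in register))
```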

We found that the ISO 16363 self-assessment was useful for benchmarking the British Library’s approach to digital preservation and for identifying areas where we could make improvements. It was also a very good way to raise awareness about digital preservation across the organisation as well as a means of documenting policies, procedures and processes in a consistent way.

A poster presented at the 11th International Digital Curation Conference (IDCC16) summarises other lessons learnt from managing a self-assessment based on ISO 16363.

Digital Preservation Coalition assessment of the NPLD digital preservation capability (2017-18)

In 2017 and 2018, the Digital Preservation Coalition produced an assessment of the British Library’s digital preservation capability for content collected under the Non-Print Legal Deposit (NPLD) Regulations.

This was an external assessment based on a customised version of the Core Trust Seal (CTS) certification framework (see below). The report informed the Post Implementation Review of the Legal Deposit Libraries (Non-print Works) Regulations 2013 and is available on the British Library website.

The results of the assessment were broadly positive, while also identifying a number of issues for the Library to investigate further. This led to the development of some of the Digital Preservation Team’s current initiatives, including the Integrated Preservation Suite (IPS) project (you can find out more about this project in Peter May’s blogpost and on our website).

Core Trust Seal self-assessment (2020)

Earlier this year, the Library conducted a further self-assessment of its digital preservation capability, this time based on the full Core Trust Seal (CTS) certification framework. The CTS defines sixteen requirements for trustworthy repositories, and these were used to benchmark the Library’s digital collections and existing repository prior to the implementation of a new repository system and the integration of IPS.

It is intended that the CTS assessment will act as a baseline whilst the new repository system is being implemented, with regular reviews to clearly measure the effects of changes introduced during the implementation process.

One of the challenges we found with using the CTS framework for self-assessment was that it was difficult to apply its scoring system across the whole of the Library’s large and diverse digital collections. While it was relatively straightforward to score some of the higher-level requirements around organisational infrastructure, it was more difficult to capture the range of workflows and processes used at lower levels, which reflect fundamental differences in the way different collections are managed. The Library will need to be flexible in its application of future self-assessment frameworks if it does not want to produce separate assessments for different collection types.
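
As a purely hypothetical illustration of this scoring problem, the sketch below records a compliance level for a single requirement separately for a few collection streams; the stream names and levels are invented, and the point is simply that one aggregate figure can hide the spread between streams.

```python
# Hypothetical compliance levels for one CTS-style requirement, recorded per
# collection stream. Stream names and levels are invented for illustration only.
scores_for_requirement = {
    "web archive": 4,
    "digitised newspapers": 3,
    "born-digital manuscripts": 2,
}

def summarise(scores: dict[str, int]) -> str:
    """Report a single aggregate (the minimum level) alongside the per-stream
    spread, so that the headline figure does not hide weaker areas."""
    lowest = min(scores.values())
    spread = ", ".join(f"{stream}: {level}" for stream, level in scores.items())
    return f"aggregate level {lowest} (per stream: {spread})"

print(summarise(scores_for_requirement))
```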

Some final thoughts

The completion of a self-assessment or certification exercise is never the end of the matter, especially in an organisation such as the British Library, which is constantly evolving.

Self-assessments need to be repeated and reviewed in order to help monitor progress and to ensure that improvements haven’t been made in some areas to the detriment of others. It is also necessary to select assessment methodologies that are appropriate to the scale of an organisation and the nature of its collections.

There are always lessons to be learned or tweaks that can be made to assessments in order to improve future exercises. That said, it must be remembered that monitoring progress through self-assessment is as much about organisational culture as it is about the application of particular frameworks.

Do you want to know more about models and frameworks for the evaluation of services and processes?

There are a variety of models and frameworks now available to support the benchmarking and assessment of digital preservation services and processes. These range from maturity models designed to support self-assessment and gap analysis, to detailed schemes designed to support the certification of trustworthy services by third parties (although as we have demonstrated, they can also be used to support self-assessment).

The models and frameworks currently available include:

  • DPC Rapid Assessment Model (DPC RAM) -- Developed last year as part of a joint project of the Digital Preservation Coalition and the UK Nuclear Decommissioning Authority, DPC RAM was adapted from a maturity model published in Adrian Brown’s book, Practical Digital Preservation. The model is intended to be used to benchmark an organisation’s digital preservation capability and to help monitor and document progress.
  • NDSA Levels of Digital Preservation -- The NDSA Levels were first published by the US National Digital Stewardship Alliance in 2013, with a fully revised Version 2.0 released by the Levels of Preservation Working Group in 2019, together with supporting documentation and resources. The Levels provide a matrix for self-assessment, measuring status under the general headings of Storage, Integrity, Control, Metadata and Content, mapped against four progressive levels of sophistication (a minimal, hypothetical sketch of how such a matrix might be recorded follows this list). The Levels are deliberately designed to be less daunting than more formal approaches to repository certification, while still helping organisations of all sizes to identify specific areas where they can make real progress.
  • Core Trust Seal -- A certification framework and process based on the DSA-WDS Core Trustworthy Data Repositories Requirements catalogue and procedures; an initiative of the World Data System of the International Science Council and the Data Seal of Approval, through the Research Data Alliance (RDA).
  • TRAC (Trustworthy Repositories Audit & Certification) -- The assessment framework that was used as the basis for the development of ISO 16363; TRAC was initially developed by OCLC/RLG and the US National Archives and Records Administration, with certification currently administered by the Center for Research Libraries (CRL).
  • ISO 16363:2012 (CCSDS 652.0-M-1) Space data and information transfer systems -- Audit and certification of trustworthy digital repositories -- Defines comprehensive criteria for the audit and certification of trustworthy digital repositories (TDR); its 109 metrics cover organisational infrastructures, digital object management, and technical infrastructures and security risk management.
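
For readers who have not seen the NDSA matrix, the sketch below shows one hypothetical way a Levels-style self-assessment could be recorded and rolled up; the functional areas follow the headings listed above, but the ratings and the helper function are invented and do not describe any real repository.

```python
# Illustrative only: recording an NDSA Levels style self-assessment as the set of
# levels fully met in each functional area, then reporting the level achieved.
FUNCTIONAL_AREAS = ["Storage", "Integrity", "Control", "Metadata", "Content"]
LEVELS = [1, 2, 3, 4]  # four progressive levels of sophistication

# Which levels have been fully met in each area (ratings invented for illustration).
levels_met = {
    "Storage":   {1, 2, 3},
    "Integrity": {1, 2},
    "Control":   {1},
    "Metadata":  {1, 2},
    "Content":   {1, 2, 3},
}

def level_achieved(met: set) -> int:
    """Highest level reached without skipping the level below it."""
    achieved = 0
    for level in LEVELS:
        if level not in met:
            break
        achieved = level
    return achieved

for area in FUNCTIONAL_AREAS:
    print(f"{area}: level {level_achieved(levels_met[area])}")
```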

Additional information about audit and certification for digital preservation can also be found in the DPC handbook.

Comments   

#1 Robert Gillesse 2020-08-14 07:50
Thanks for this blog! Regarding CTS, and speaking as a recently CTS-certified institute, I must say that I found the way the criteria 'force' you to look at your different archiving workflows (as most institutes have quite a few) and see how one is operating better than another to be one of the strong points of CTS. This gives you a clear incentive on where to do better next time (when renewing the seal after three years). But I do realize that for a really big organisation like the BL this will be a lot of work.
#2 Simon Whibley 2020-08-17 10:00
Hi Robert. Thank you very much for your comment, I'm glad that you liked the blog. I agree with you that one of the strengths of CTS is that it requires an organisation to document and evaluate practical things like content workflows in detail. Our assessment did cover the core workflows that feed into the Library's current repository system, though at a relatively high level. This will inform the process of reviewing all of our workflows as part of the migration project to our new repository.
