DPC

Joint Digital Curation Centre / Digital Preservation Coalition Workshop for OAIS 5 Year Review

OAIS Five-Year Review: Recommendations for Update

This workshop will prepare a joint comment, on behalf of members of the Digital Curation Centre Associates Network and members of the Digital Preservation Coalition, for submission to the OAIS (Open Archival Information Systems Reference Model) 5 year review. The workshop is open to members of both organisations who are familiar with details of the standard and actively implementing or preparing to implement an OAIS.

In compliance with International Organization for Standardization (ISO) and Consultative Committee for Space Data Systems (CCSDS) procedures, the Open Archival Information Systems Reference Model (OAIS): ISO 14721:2003 must be reviewed every five years and a determination made to reaffirm, modify, or withdraw the existing standard. The comment process for this review is now open; recommendations for updates that reduce ambiguities or strengthen missing or weak concepts are sought by 30 October 2006.

9.30 - 10.00

Registration and coffee

10.00 - 10.30

Introductions - Sheila Anderson, AHDS
Summary of OAIS: Lyn Lewis Dafis, National Library of Wales

10.30 - 11.10

Discussion on Chapter 1: facilitated by Chris Rusbridge, Digital Curation Centre

11.10 - 11.50

Discussion on Chapters 2 and 3: facilitated by Michael Day, UKOLN

11.50 - 12.45

Discussion on Chapter 4.1: facilitated by Lyn Lewis Dafis, National Library of Wales

12.45 - 14.15

Buffet lunch: DCC Offices, Appleton Tower

14.15 - 15.00

Discussion on Chapters 4.2 and 4.3: facilitated by Derek Sargeant, University of Leeds

15.00 - 15.30

Discussion on Chapter 5

15.30 - 16.00

Discussion on Chapter 6: facilitated by Steve Bordwell, National Archives of Scotland

16.00 - 16.30

Round-up discussion and close


Policies for digital repositories: models and approaches

The DPC held a briefing day on different implementations of digital repositories. This event was aimed at people who are planning digital repositories and want to find out more about available tools and the benefits of each model. The day highlighted different approaches to and aspects of repository models, and was useful both for those who wish to take a more modular approach to repository building and for those starting to move from strategic planning to actual repository implementation.

Sayeed Choudhury emphasised that institutional repositories should provide both an institutional service and long-term custodianship of digital academic output. He also noted, however, a definite lack of discussion of the digital preservation element within repositories; preservation is very hard indeed, and more case studies are needed. An important theme was that there is no one-size-fits-all solution and institutions should focus on their needs rather than on their systems.
Sayeed Choudhury - Johns Hopkins University: Digital Repository Models (PDF 123KB)

Dave Thompson of the Wellcome Trust outlined why they chose Fedora as a system: the outstanding reasons were that it was cheap and versatile. He described the implementation of the relationship builder and the metadata extraction tool. As a test bed, Dave chose to archive an email spam collection, and the methodologies for doing this provide a good case study. The overall message was that a good approach is to manage the information and metadata, not the technology.
Dave Thompson - Wellcome Trust: Fedora at the Wellcome Library, progress and work to date (PDF 150KB)

Paul Bevan continued the Fedora theme, describing how it has been used at the National Library of Wales for their digital asset management system. The project has mapped OAIS elements onto the archive. Paul emphasised the importance of cross-organisation buy-in, and the technical challenges of moving digital objects into a managed environment.
Paul Bevan - National Library of Wales: Implementing an Integrated Digital Asset Management System: FEDORA and OAIS in Context (PDF 1.1MB)

Steve Hitchcock outlined the history of institutional repositories and how they have gradually developed out of the Open Archives Initiative. He highlighted the results of a survey of repositories; for example, only one of the eighteen surveyed even has a preservation policy for its repository, which is a cause for concern.
Steve Hitchcock - Southampton University: Repository models and policies for preservation (PDF 581KB)

Andrew Wilson's talk outlined the SherpaDP project, which investigates a distributed preservation model. The comprehensive workflow is intended as the basis of a business model rather than a free service: almost an exemplar for outsourced preservation services. The project will also produce a handbook. He emphasised the need for 'object mobility' within the model, and showed how each of the detailed workflow modules maps to the OAIS model.
Andrew Wilson - AHDS: SHERPA-DP: Distributed Repositories/Distributed Preservation (PDF 153KB)


Joint DPC/DCC Forum - Policies for Digital Curation and Preservation

The Digital Preservation Coalition (DPC) and Digital Curation Centre (DCC) delivered a two-day workshop to explore the range of policies required to manage, preserve, and reuse the information held within digital repositories over time. This event was co-sponsored by the Oxford Internet Institute (OII) and held at Wolfson College at the University of Oxford on 3rd and 4th of July, 2006.

Developing and implementing a range of policies is vital for enabling the effective management, discovery, and re-usability of information held within digital repositories. This workshop provided concrete examples of the range and nature of the policies required and shared real-life experiences in implementing these policies through a series of case studies and panel discussions.

Monday July 3rd, 2006

Setting the Scene

Session One: This session explored issues including: roles and responsibilities in developing policies, relationships with other institutional policies, workflow issues, key themes of specific policies, problems encountered during development.

Tuesday July 4th, 2006

Session Two: Implementing Curation and Preservation Policies

Session Three: Evaluating and Reviewing Curation and Preservation Policies


DPC Forum on Web Archiving

The DPC held a one-day web archiving forum at the British Library. The first DPC web archiving forum was held in 2002 to promote the need to archive websites given their increasing importance as information and cultural resources.

Four years on, this event again brought together key web archiving initiatives and provided a chance to review progress in the field. The day provided an in-depth picture of the UK Web Archiving project as well as European initiatives. Technical solutions and legal issues were examined and the presentations encouraged much debate and discussion around different strategies and methodologies. The event made clear that the field has moved on tremendously from four years ago. The debate has broadened and so have the tools and methodologies.

The first presentation was from Philip Beresford, Web Archiving Project Manager at the British Library [BL]. He spoke about the BL's involvement with UKWAC, the tools the project had built, the challenges they have had with the PANDAS software, and the overall constraints of web archiving, especially as it is such a technology-dependent discipline. Philip also outlined the web curator tool developed with the National Library of New Zealand, and the next version of PANDAS. UKWAC - the first two years [PDF 33 KB]

Adrian Brown, Head of Digital Preservation at the National Archives, followed on from Philip's talk, outlining the future of UKWAC and its recent evaluation report. Adrian described the collection methods at the National Archives, as well as database preservation and transactional archiving. He also touched on one rather overlooked aspect: the long-term preservation of the actual content. Collecting and preserving web content [PDF 401KB]

John Tuck spoke in the second session about the BL's legal deposit bill. He touched on issues regarding collection, capture, preservation, and access to non-print collections. Of interest is how the legal deposit bill translates into the e-environment and web archiving: should web archiving extend to UK-related sites, not just UK-domain sites, and are national boundaries less relevant now? He outlined the BL's two strategies - taking a twice-yearly snapshot of the entire UK web, and a more selective approach covering sites deemed to be of national and cultural interest. He also stressed the lengthy permissions process that gathering each website entails. Collecting, selecting and legal deposit [PDF 42KB]

Andrew Charlesworth highlighted the complexity of the UK legal framework regarding web archiving. An emerging theme throughout the day was the debate about whether archives should ask for permission before or after they have collected websites. Andrew stressed the importance of understanding the regulatory framework; the field has moved on in that we know more today about the risks and benefits of web archiving than we did a few years ago. Any web archiving project probably needs to carry out a risk analysis, to have insurance (in particular with regard to defamation law), and to ensure that it does not hold anything in its archive that could be used as legal evidence. Archiving of internet resources: the legal issues revisited [PDF 33KB]

Julien Masanes spoke about the European Web Archive. He presented an interesting approach to web archiving: the information architecture of the web is such that its archiving should follow the natural structure of the web. Julien reminded the audience that web content is already digital and readily processable, and that the web is cross-linkable and cross-compatible - a good foundation for an archive. He also stressed that web archiving requires functional collaboration: what is needed is a pooling of resources that combines competences and skills. Internet preservation: current situation and perspectives for the future [PDF 530KB]

Paul Bevan outlined the UKWAC project to archive the 2005 UK general election sites. He described how three national libraries collaborated on this web archive, touching on selection, the collection remit for each library, and the frequency of snapshots. Did the general election classify as an event or as a known instance? Paul stressed the difficulties involved in obtaining permission to archive electoral websites and in identifying candidates' websites. On a technical level, the slowness of the gathering engine was also highlighted. Archiving the 2005 UK General Election [PDF 129KB]

Catherine Lupovici of the International Internet Preservation Consortium [IIPC] outlined the activities of the IIPC and the life-cycle tools the team is working on, such as ingest and access tools. She stressed the importance of collaboration in web archiving, and it is clear that both UKWAC and the IIPC do this successfully. IIPC activity: standards and tools for domain scale archiving [PDF 149KB]

The panel session was most productive. The panel leaders stressed that we are still in the early days of web archiving. We can never be fully sure that the techniques employed are correct, but we have to make a start. However, more research needs to be carried out into the preservation techniques of the actual content. Access issues are also critical; searching a digital web archive won't employ the same search and retrieval tools as a traditional archive would and crucial access tools need to be developed for successful use of web archives.

On a technical note, we need to be aware of issues of browser compatibility in the future; there was a debate about whether obtaining the source code of browsers, in order to assist rendering pages in the future, was an acceptable solution. It was highlighted that we have to be aware of unknown plugins which could hinder the readability of web pages. The importance of the ingest stage was stressed, along with the transformation of the digital object that should occur at this point to ensure readability. However, there may be legal issues to consider in transforming from one format to another.

Web archiving is not an isolated activity - so many web formats are now available, as well as different content delivery mechanisms such as blogs and chat rooms, and these make archiving even more challenging. There was a recognition that the community needs smarter tools to make web archiving scalable; there is definitely a need to semi-automate quality assurance and selection. The question was raised of whether we still need manual, selective archiving, which is both time-consuming and costly compared to automatic sweeping of the web. The general consensus was that both methodologies should still be employed. The overall conclusion and recurring theme of the day is that collaboration is essential: no single organisation can carry out web archiving on its own. Projects such as UKWAC, the IIPC, and the European Web Archive demonstrate that much can be achieved in terms of solutions and methodologies.


DPC Briefing on OAIS

The DPC held a briefing day on the OAIS model on 4th April 2006 at the York Innovation Centre. The purpose of the day was to examine the model and provide an informal but in-depth look at the practical application of the Open Archival Information System [OAIS] model in various UK institutions. OAIS is a high-level reference model which describes the functions of a digital archive and has been used as the basis for a number of digital archiving repositories. It is now a recognised and highly prominent international standard.

There were four presentations in total, all of which presented interesting case-studies and examples of OAIS implementation in a variety of institutions, giving a valuable overview of how it has been interpreted and applied.

Najla Semple, Digital Preservation Coalition, began the day with an overview of OAIS and her experience of implementing the model at Edinburgh University Library in 2002 (Overview of OAIS PDF 1MB). She gave a summary of the pilot project and how she used the model to digitally archive the online University Calendar. Each of the six OAIS processes was examined and used as part of the archival workflow. She also gave an overview of the detailed OAIS metadata scheme that was implemented.

Jen Mitcham, Curatorial Officer at the Archaeology Data Service [ADS] presented next (Working with OAIS PDF 2.6MB). Her approach to OAIS was different from that of Edinburgh University Library as ADS already have a digital archive up and running. At ADS they have applied the model to their existing digital archive, which is both an interesting and practical way to approach the model. Her talk identified which areas of her organization the model could be applied to, and she clarified this by including photographs of the actual staff involved in each of the OAIS processes. The issue of registration and access to online archives was debated.

Andrew Wilson, Preservation Services and Projects Manager at AHDS, spoke in the afternoon (Sherpa-DP and OAIS PDF 300KB) about the use of OAIS in the Sherpa DP project (http://ahds.ac.uk/about/projects/sherpa-dp/). They are using a disaggregated model, implementing OAIS across the university-based institutional repositories, which he indicated will share an AHDS preservation repository. He then raised the question, 'What does OAIS compliance mean?' - an interesting question with regard to institutions setting up their own archives. He touched on the OAIS audit process developed by RLG and what this will mean for future implementation of the model. A certification process might lead to the assumption that the model has to be implemented in a certain prescriptive way, which perhaps goes against the 'open' spirit of OAIS. Some of the processes are 'deliberately vague', so how one applies them should not be set in stone. This issue initiated much lively debate amongst the delegates.

The final presentation of the day was a joint effort by Hilary Beedham of the UK Data Archive and Matt Palmer of the National Archives (Mapping to the OAIS PDF 500KB). They gave an interesting insight into two archives that are both assessing their existing organizational structure against the OAIS model. Interestingly, they both arrived at similar conclusions and found certain shortcomings with OAIS. A couple of areas that they struggled with were management of the Dissemination Information Package, as well as the metadata model which they thought could perhaps be made more detailed to include access controls and IPR concerns.

Matt also pointed out that it is fairly easy to be compliant with OAIS, as most of the functions and processes are core to any digital archive. Both the TNA and UKDA Designated Communities are wide-ranging, and they suggested that the model may assume a homogeneous user community. However, this point was disputed by the audience: the Designated Community is a very important feature of OAIS, and establishing who you are preserving the information for is crucial. The Representation Information metadata field assumes that you will include an appropriately detailed technical description according to who will read the data in the future.

Hilary Beedham concentrated on their recently published report, 'Assessment of UKDA and TNA Compliance with OAIS and METS Standards' (http://www.jisc.ac.uk/uploaded_documents/oaismets.pdf). The JISC-funded report was written partly to help regional county councils apply the model, and to simplify it.

The discussion at the end of the day proved very fruitful, and overall conclusions were as follows:

  • Real-life examples and case studies proved really useful.
  • OAIS vocabulary and terminology is now recognised as very useful across a range of institutions. OAIS-compliant repositories should also enable a 'seamless transfer' of data between archives.
  • While the model may be vague in its prescription, it certainly indicates what to think about when setting about creating a digital archive.
  • One delegate suggested that the starting point should be to look at your own organization first, analyse the processes involved, and apply OAIS processes accordingly.
  • A practical guide to setting up an OAIS repository would be very useful, especially one which indicated different communities and organization-specific interpretations. This guide could ideally take the form of 'OAIS-lite'.

DPC Meeting on Preservation Metadata

The Digital Preservation Coalition has commissioned a series of Technology Watch reports on themes known to be of key interest to DPC members. The authors of the Technology Watch Report on Preservation Metadata (PDF 209KB) - Brian Lavoie, OCLC, and Richard Gartner, University of Oxford - agreed to lead an informal meeting of DPC members, many of whom have an active interest in this area.

Attendance was open to DPC members only and there was no charge. Numbers were limited to a maximum of thirty, to allow scope for interaction.

An overview of the meeting is provided by Michael Day's presentation (PDF 121KB).

Presentations in the meeting:


Report for the DCC/DPC Workshop on Cost Models for preserving digital assets

The DCC/DPC joint Workshop on Cost Models for preserving digital assets was held at the British Library on 26th July, and was the first joint workshop between the two organisations. Around seventy delegates from the UK, Europe, and the US were treated to a rich and stimulating source of information and discussion on costs and business models with a number of key themes emerging.

Maggie Jones gave the welcome and introduction on behalf of Lynne Brindley, emphasising the need not just to discover how much it costs to preserve X digital objects over time, but also to understand the implications of inaction and the strategic drivers which would motivate institutions to invest in digital preservation and curation. Laurie Hunter provided the keynote address (PDF 16KB) and set the scene by placing digital preservation within the wider context of an organisation's business strategy. The keynote stressed the need to understand not just the costs but also the value of digital preservation, and referred to the model scorecard as one tool which can be adapted for use in the digital preservation environment and which the eSPIDA project is investigating further.

James Currall identified major obstacles to progress, including a very poor understanding of digital preservation issues among senior managers and creators, and discussed some of the tools being developed by eSPIDA (PDF 182KB) to help counteract those obstacles. Once again, the strategic direction of the organisation was noted as being of critical importance. The eSPIDA approach to the model scorecard placed the information asset at the centre, with the other perspectives (customer, internal business process, innovation and development) tending to feed into the financial perspective. Currall noted that, while this was being applied within the University of Glasgow, the same principles can be applied anywhere.

Paul Ayris and James Watson gave a presentation describing the LIFE project (PDF 164KB), which, like eSPIDA, has been funded under the JISC 4/04 programme. The LIFE project is a collaboration between UCL and the British Library. Paul Ayris described the context and drivers for the project, which for UCL are the management of e-journals and the strategic issue of moving from print to e-journals. The BL needed additional information to help it manage multiple digital collections, acquired through voluntary and legal deposit or created in-house, and to maintain them in perpetuity. James Watson described the work to date in developing a generic lifecycle model which can be applied to all digital objects. The project also hoped to identify cost reductions and potential efficiencies. The major findings of this one-year project would be announced at a conference at the BL, in association with LIBER, on 12 December 2005.

The next sessions focussed on practical case studies. Anne Kenney described the work at Cornell (PDF 198KB) on identifying the costs associated with taking on Paul Ginsparg's arXiv. A quote from a Victor Mature movie - "If we had some horses, we'd have a cavalry - if we had some men" - seemed to sum up a common attitude to digital preservation programmes: "we'd have a digital preservation programme, if we had some staff - if we had some content!". Kenney emphasised the importance of getting concrete cost figures, since no senior management will be prepared to write a blank cheque. This reflects the recommendation Hunter made during his keynote address for digital preservation proponents to speak to senior management in concrete, economic terms. The presentation covered cost centres, which were principally staff costs, and also identified costs needed to support the work but which are often hidden. The arXiv.org archive is highly automated and relatively cheap to maintain, with an estimated cost per submission of between US$1 and US$5. Expenses are minimised in this case by having a gatekeeper function at the beginning and having most of the cost of ingest borne by the depositor. Kenney also noted that the costs of the server had reduced significantly each year, but cautioned that it is critical to ensure an ongoing annual budget, as it is not possible to skip a year in digital preservation.

The Cornell case study contrasted with the TNA case study (PDF 1.7MB), presented by Adrian Brown. In this case, a publicly funded body with a mandate to preserve selected digital records must deal with a large number of formats, illustrating the implications of organisational role and mission for potential costs. National libraries and archives will need to make different commitments from organisations which are more able to control the material they ingest. While TNA can influence creators, it cannot mandate that it will only accept certain formats. The TNA experience has shown that there is a good understanding of some elements of costs, and little concrete knowledge of others at this stage. Brown used the OAIS model to illustrate costs. Ingest represents the most substantial portion of costs and has been roughly calculated as £18.76 per file. As automation develops and standards are agreed with creators, these costs may well fall over time. The time and human effort involved in creating metadata records for deposited materials was cited as a potentially high-cost element; current research into automated metadata extraction could prove extremely beneficial in helping to minimise these costs. Data storage is relatively straightforward to cost, but it is very difficult to predict transfer volumes over the next two years, and therefore difficult to plan longer term, so Preservation Planning is a major cost at the moment as it involves much R&D work. TNA also foresees opportunities to reduce costs through collaboration (not everyone needs to reinvent the wheel) and automation.

Erik Oltmans presented a model developed by the KB (PDF 350KB), in collaboration with the Delft Institute of Technology, which compares the costs over time of two key digital preservation strategies, emulation and migration. It is based on the assumption that migration must apply to every single object in a collection, while emulation does not. The emulation approach seems to work best with collections with very few formats - for example, a large digital repository of PDF files. However, it can become much more costly when there is a vast range of formats to be emulated. Oltmans conceded that the model may not be entirely realistic, but it provides a useful starting point. The KB experience indicates that volume matters less to costs than the complexity of submissions.
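The contrast in the KB/Delft comparison can be sketched in a few lines: migration cost scales with the number of objects (and repeats with each migration cycle), while emulation pays a one-off cost per distinct format, independent of collection size. The sketch below illustrates only that assumption - it is not the KB/Delft model itself, and all cost figures are invented for the example.

```python
# Illustrative sketch only - not the actual KB/Delft model. It captures the
# assumption reported above: migration must be applied to every object in a
# collection (once per migration cycle), while emulation pays a one-off cost
# per distinct format. All figures below are hypothetical.

def migration_cost(n_objects: int, cost_per_object: float, n_cycles: int) -> float:
    """Total migration cost: every object is migrated once per cycle."""
    return n_objects * cost_per_object * n_cycles

def emulation_cost(n_formats: int, cost_per_emulator: float) -> float:
    """Total emulation cost: one emulator per format, independent of volume."""
    return n_formats * cost_per_emulator

# A large repository with very few formats (e.g. mostly PDF) favours emulation:
migrate = migration_cost(n_objects=1_000_000, cost_per_object=0.5, n_cycles=3)
emulate_few = emulation_cost(n_formats=2, cost_per_emulator=50_000.0)
assert emulate_few < migrate  # 100,000 vs 1,500,000

# With a vast range of formats to emulate, emulation becomes much more costly:
emulate_many = emulation_cost(n_formats=40, cost_per_emulator=50_000.0)
assert emulate_many > migrate  # 2,000,000 vs 1,500,000
```

Whichever strategy wins in such a sketch depends entirely on where a collection sits on the objects-versus-formats axis, which is consistent with the KB observation that complexity of submissions matters more than volume.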

The afternoon session began with David Giaretta discussing science data characteristics (PDF 323KB) and how these dictate the most appropriate and cost-effective strategy. For example, emulation is almost certainly not enough for science data, which is increasingly processed 'on the fly': the archive keeps the raw data and processes it on demand. Issues such as bandwidth are critical (how do you get data into the system, and then how do you get it out?). Another distinction is between migrating a file (relatively straightforward) and migrating a collection (much more complex). The costs of keeping information usable would be the most difficult to determine.

Matthew Addis and Ant Miller gave a joint presentation on PrestoSpace (PrestoSpace Presentation One (PDF 1.8MB) and PrestoSpace Presentation Two (PDF 2.4MB)), an EU-funded project on audio-visual archives. The project began in February 2004, will last 40 months, and has 35 partner institutions. A key issue for a/v archives is that digital formats are rapidly becoming obsolete, and individual items on a shelf will cause huge logistical problems as they become obsolete. Once mass storage systems are developed, it becomes imperative to have metadata in order to find and keep track of individual objects. The aim at this stage is to establish a framework for medium to large archives. Miller said that there is a need to "scare budget holders into action", but solid numbers are needed to back this up. Addis referred to the urgent need for planning, as "whatever you put your stuff on will be obsolete at some stage." A workflow model was demonstrated which enables decisions to be made on priorities for action; the next stage will be to test how well the model works against existing archives' plans. Some copies of the preliminary report were made available at the workshop for those interested in further information. The DCC and DPC will make the final version of this report available on their websites when it is released later this year.

Andy Rothwell and Richard House provided the final presentation, on costing EDRM programmes (PDF 605KB). Rothwell echoed earlier discussion in indicating that the pre-ingest stage is crucial in driving down costs. It was also necessary to look at the implications of the Government's Modernising Government white paper, which has been a key catalyst in moving from paper to electronic records. Looking at the whole information space, it needs to be understood that only c. 2% of records ultimately end up at TNA, so organisations need to manage the other 98%. The value lies not so much in putting material in as in being able to access it, so search and retrieval capabilities are key. The costs of implementation are not trivial: it can take anywhere from 18 months to 2 years to implement the change in management and to provide the necessary training to staff, and these costs are often not considered but can be significant. Another issue is the volatility of the marketplace: a practical example is when EDRM product A is no longer supported and needs to be migrated to EDRM product B - without tried and tested export facilities, this is not a trivial undertaking. Rothwell also noted that data migration costs are not currently being factored into EDRM programmes. House went on to make the point that the key issue is not replacing paper systems with electronic ones, but rather integrating paper and electronic records systems. In terms of costs, staff costs are substantial, and the effort of classification system design is frequently underestimated.

The workshop concluded with a panel session of all speakers and was chaired by Chris Rusbridge (DCC Director). Questions raised during this session highlighted a range of issues that were explored during the workshop.

For instance, it will be essential to determine what level of fiscal responsibility content creators and end-users share for the long-term preservation of digital assets. End-users potentially stand to benefit most from the preservation of digital assets and, as such, should be made aware that they may have a role to play in bearing the costs of preservation. Related to this were questions regarding the costs of accessing and retrieving digital assets over time.

The issue of metadata and representation information was raised several times during the panel session. Many participants stressed that without quality contextual information being preserved with the digital asset, there is little to no value in preserving the object. For example, even if a statistical digital data set is preserved and accessible 100 years after its creation, unless key items are defined, such as table headings, the data will be unusable. Users could undertake archaeological processes to try and ascertain the meaning of table headings, but ultimately they would at best only be able to guess at their true meanings.

The limit to which digital repositories may dictate acceptable formats for deposit was also discussed during this session. While it is widely acknowledged that most repositories will not have the capabilities to preserve every format, there was also concern about placing too many constraints on content creators and depositors. As noted during the TNA case study, some organisations will not have the luxury of selecting the formats they will accept, due to the very nature of their organisations, though they may be able to influence creators. In other cases, user communities may influence the formats that are deposited within repositories. This was the case with arXiv, which did not originally impose restrictions but found that most depositors used LaTeX. This illustrates that identifying preferred formats for deposit does not always come from the managerial level, but can be user-driven. Ultimately, a compromise is needed between reducing constraints on creators and depositors and facilitating effective preservation activities over time. Where there are equally viable alternatives, it may be acceptable to suggest one choice of format over another.

Very few repositories will have the capacity to care for every format or will have staff with all the skills needed to carry out preservation activities. Many of the participants felt that sharing resources and skills across a wide range of repositories would be the most logical approach to ensuring long-term preservation. PrestoSpace has investigated the creation of a European marketplace in which repositories and service providers can benefit from a shared approach. Several participants thought that the DCC and DPC might be able to facilitate a similar approach in the UK.

Participants felt that determining the value of preservation itself, rather than simply identifying its costs, will be of paramount importance in securing funding for digital preservation activity. This reflects suggestions made by several of the speakers. For instance, Richard House argued that it will be crucial for organisations to identify potential benefits that are appreciated not only by senior management but also by their stakeholders. Several participants acknowledged that a given stakeholder community may change over time and that identifying benefits could therefore be quite a difficult task.

It is highly unlikely that repositories will be able to accept and care for everything that is offered to them. Accordingly, sound appraisal and selection processes must be established within organisations to determine exactly what they will and will not preserve. Again, an organisational mission statement can be very useful in selecting and appraising digital assets for preservation. Selection and appraisal policies may change over time as the organisation changes. As such, periodic review of these documents will be necessary. Indeed, such changes may result in holdings within the repository no longer fitting in with the overall organisational mission. Therefore, some type of de-accessioning or disposal policy must be taken into consideration.

Many of the questions highlighted that, as yet, we have very few concrete answers. Much more work must therefore be done in determining usable cost models, in identifying practical benefits, and in establishing the value of digital preservation. The DCC and the DPC are currently looking into making the spreadsheets for the cost models presented at this event available via our web sites. We will also endeavour to monitor the progress of current projects and to report major findings as they are released.

Report on IS&T Archiving 2005 Conference, Washington, 26 - 29 April 2005

Sarah Middleton

Last updated on 30 September 2016

By Hugh Campbell, PRONI

1. I attended the Imaging Science & Technology (IS&T) Archiving 2005 conference at the Washington Hilton. This is my report on the conference.

2. Washington is quite a long way away – the journey from home to hotel took about 20 hours, and from hotel to home about 18 hours. This needs to be borne in mind when planning travel to such a conference and the return to work - the body needs time to recover.

3. The conference itself started on Tuesday, 26 April with a number of tutorials. I attended the Long-Term Archiving of Digital Images tutorial – see attached summary. The conference proper ran from Wednesday 27 April to Friday 29 April, kicking off at 0830 each morning (and finishing at 1700 on Wednesday and Thursday and 1500 on Friday). Wednesday featured a 40-minute keynote address and fifteen 20-minute sessions; Thursday featured a 40-minute keynote address, ten 20-minute sessions, and approximately twenty 90-second poster previews followed by the opportunity to visit the poster presentations. Friday featured a 40-minute keynote address and ten 20-minute sessions. I felt that there were too many sessions, cramming too much into a short space of time.