This morning kicks off with reports from the four breakout sessions held yesterday afternoon
Workflow
What is a ‘workflow tool’? Everything from Excel to Goobi – the group decided to be inclusive
Things people were using:
Czech national database system ‘Digital Registry’ – locally developed s/w with the possibility it might go open source
Goobi
Zend framework
In-house systems
Those not using workflow tools often saw them as overly complex for small projects
Existing tools tend to be matched to the scale of the project
But projects don’t have to be that large to generate significant workflow – 500 manuscripts, or fewer if lots of people are involved
What would the ideal workflow tool look like?
Reusable across multiple projects of all sizes
Monitors performance
Gives statistical evidence about every step
Tracks books & activities across multiple sites
…
Needs to manage notes, tags and details for every item (fragility, lack of metadata, broken parts etc)
Tool should interoperate with ILS/Repository s/w
Workflow systems should work with each other – e.g. being able to push information to centralised workflow tools that could aggregate a view of what has been done
Should index new digital items in the ILS
Automatically create records in the ILS when a new digital item is available (see the sketch after this list)
Scalable … and free of charge!
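As a rough illustration of those last points about ILS integration and centralised reporting, a step in such a workflow might look something like the sketch below. The endpoint URLs, payload fields and the register_digital_item function are all hypothetical – this shows the kind of integration being described, not any particular product’s API.

```python
# Hypothetical sketch: when a digitised item is finished, push a minimal
# record to the ILS and notify a central workflow aggregator.
# The endpoints, field names and API key handling are illustrative only.
import requests

ILS_API = "https://ils.example.org/api/records"            # hypothetical ILS endpoint
AGGREGATOR_API = "https://workflow.example.org/api/events"  # hypothetical aggregator

def register_digital_item(identifier, title, access_url, api_key):
    """Create an ILS record for a newly digitised item and report the step."""
    record = {
        "identifier": identifier,   # e.g. a local shelfmark or barcode
        "title": title,
        "online_access": access_url,
        "status": "digitised",
    }
    # 1. Create/update the record in the ILS
    ils_response = requests.post(
        ILS_API,
        json=record,
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    ils_response.raise_for_status()

    # 2. Push the same event to a centralised workflow tool so progress
    #    can be aggregated across sites and projects
    requests.post(AGGREGATOR_API, json={"event": "item_digitised", **record}, timeout=30)
    return ils_response.json()

if __name__ == "__main__":
    register_digital_item("MS-500", "Example manuscript",
                          "https://example.org/items/MS-500", "secret-api-key")
```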
Business Models
The group looked at six different business models:
- Publisher model
- ProQuest’s Early European Books: the publisher funds digitisation and offers a subscription for x years to subscribers outside the country of origin; access is free in the country of origin; in year x+1 the resource becomes fully open access
- brightsolid and the BL – making 40 million pages of newspapers available – brightsolid makes the material available via a paid-for website
- lots of potential for more activity in this model
- National Government funding
- e.g. France gave 750 million euros to libraries to digitise materials. However, the government then decided it wanted a financial return, so the National Library has now launched an appeal for private partners
- Seems unlikely to be a viable model in the near future
- Government Research Councils/research funders have mandates for data management plans and data curation – but these are perhaps not always observed by those doing the work. Better enforcement of compliance might give better outcomes
- International funding – specifically EU funding
- LIBER is in discussion with EU bodies to have libraries considered as part of European research infrastructure – which would open new funding streams through a new framework programme
- Philanthropic funding
- The National Endowment for the Humanities and the Library of Congress fund the National Digital Newspaper Program
- Santander, which funds digitisation – e.g. of Cervantes; the motivation for the company is good PR
Two further models that are possibilities going forward:
- Public funding/crowdsource model
- e.g. Wikipedia has raised ‘crowdsourced’ funding
- Can the concept of citizen science be applied to digitisation? e.g. FamilySearch has 400,000 volunteers scanning and transcribing genealogical records
- Social Economy Enterprise Models
- Funders might fund digitisation for ‘public good’ reasons – the people employed will have more digital skills as a result, which progresses an employment agenda; for the funder the digitisation is not the point, it is the other outcomes
- Such a model might draw investors from a range of sectors – e.g. the KB in the Netherlands uses such an approach for the preparation of material for digitisation
User Experience
The group first discussed ‘what do we know about user experience?’ – we have to consider what users want from digitisation
Crowdsourcing – experience and expectations: experience so far suggests there is lots of potential. However, there is a need to engage with communities via social media etc. There is also a question of how sustainable these approaches are – you need a plan for how you preserve the materials being added by volunteers. You also have to have clear goals – volunteers need to feel they have reached an ‘endpoint’ or achieved something concrete
It is a challenge to get input from ‘the crowd’ outside the scholarly community
Metadata and Reuse
The group got a good view of the state of ‘open (meta)data’ across Europe and slightly beyond. There is lots of open data activity across the board – although it is better developed in some areas than others. In some countries there is a clear governmental/political agenda for ‘open’, even if library data is not always top of the list
Some big plans to publish open data – e.g. 22 million records from a library network in Bavaria planned for later this year.
A specific point of interest was a ruling in France that publicly funded archives could not restrict use of the data they made available – that is, they could not disallow commercial exploitation of the content, e.g. by genealogical companies
Another area of legal interest: Finland has a new data management law that emphasises interoperability, open data, open metadata etc. The National Library there is building a ‘metadata reserve’ (what would once have been called a ‘union catalogue’) – bibliographic data, identifiers, authorities.
There was some interesting discussion around the future of metadata – especially the future of MARC in light of the current Library of Congress initiative to look at a new bibliographic framework – but it is not very clear what is going to happen here. However, the discussion suggested that whatever comes, there will be increased use of identifiers throughout the data – e.g. for people, places, subjects etc.
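To make the identifier point concrete, the sketch below shows one simple record expressed first as plain text strings and then with identifiers (e.g. VIAF for people, id.loc.gov for subjects, GeoNames for places) alongside the labels. The URIs are placeholders for illustration, not the actual identifiers for these entities.

```python
# Illustrative sketch only: the same record with plain strings, then with
# identifiers alongside the labels. The URIs below are placeholders.
record_strings = {
    "title": "Don Quixote",
    "author": "Cervantes Saavedra, Miguel de",
    "subject": "Knights and knighthood -- Fiction",
    "place_of_publication": "Madrid",
}

record_with_identifiers = {
    "title": "Don Quixote",
    "author": {
        "label": "Cervantes Saavedra, Miguel de",
        "id": "http://viaf.org/viaf/0000000000",          # placeholder VIAF URI
    },
    "subject": {
        "label": "Knights and knighthood -- Fiction",
        "id": "http://id.loc.gov/authorities/subjects/shXXXXXXXX",  # placeholder LCSH URI
    },
    "place_of_publication": {
        "label": "Madrid",
        "id": "http://sws.geonames.org/0000000/",         # placeholder GeoNames URI
    },
}
```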
It was noted that libraries, archives and museums have very different traditions and attitudes in terms of their metadata, which leads to different views on offering ‘open’ (meta)data; the understanding of ‘metadata’ is very different across the three sectors. The point was made that ‘not all metadata is equal’ – for example an abstract may need to be treated differently from the author/title when ‘opening up’ data. A further example was where table of contents information had been purchased separately from the catalogue records, and so had different rights/responsibilities attached in terms of sharing with others
There was some discussion of the impact of projects which require a specific licence. For example, there was some concern that the Europeana exchange agreements, which will require data to be licensed as CC0, will lead to some data being withdrawn from the aggregation.
The final part of the discussion turned to search engines – they are looking at different metadata formats, e.g. http://schema.org. Our attitudes to data sharing also change when there is a clear benefit: while some organisations enter into specific agreements with search engines – e.g. OCLC – in the main most libraries seemed happy for Google to index their data without the need for agreements or licensing. Those with experience noted the immediate increase in web traffic once their collections were indexed in Google.
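As a rough illustration of the schema.org point, the snippet below builds a minimal JSON-LD description of a digitised book. The values and landing-page URL are made up; a real item page would embed the output in a script element of type application/ld+json so that search engines can pick it up.

```python
# A minimal sketch of schema.org markup for a digitised book, expressed as
# JSON-LD. All values are illustrative placeholders.
import json

book = {
    "@context": "http://schema.org",
    "@type": "Book",
    "name": "Example digitised manuscript",
    "author": {"@type": "Person", "name": "Anonymous scribe"},
    "datePublished": "1542",
    "inLanguage": "la",
    "url": "https://example.org/items/MS-500",   # hypothetical landing page
}

print(json.dumps(book, indent=2))
```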