Assessing the Impact of OCR Quality on Downstream NLP Tasks

Resource type
  • Conference paper (unpublished)
Creator
  • van Strien, Daniel
  • Beelen, Kaspar
  • Coll Ardanuy, Mariona
  • Hosseini, Kasra
  • McGillivray, Barbara
  • Colavizza, Giovanni
Abstract
  • A growing volume of heritage data is being digitized and made available as text via optical character recognition (OCR). Scholars and libraries are increasingly using OCR-generated text for retrieval and analysis. However, the OCR process introduces varying degrees of error into the text, and the impact of these errors on natural language processing (NLP) tasks has only been partially studied. We perform a series of extrinsic assessment tasks (sentence segmentation, named entity recognition, dependency parsing, information retrieval, topic modelling, and neural language model fine-tuning) using popular, out-of-the-box tools in order to quantify the impact of OCR quality on each task. We find that OCR errors consistently degrade performance on our downstream tasks, with some tasks harmed more irredeemably than others. Based on these results, we offer preliminary guidelines for working with text produced through OCR.
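
A minimal sketch of the kind of extrinsic evaluation the abstract describes: run an out-of-the-box NLP tool over an OCR'd passage and its human-corrected counterpart, then compare the outputs. The sketch below uses spaCy's small English named entity recognition model; the example sentences and the simple recall proxy are invented for illustration and are not the paper's actual pipeline or data.

    import spacy

    # Load an off-the-shelf English pipeline (requires:
    #   pip install spacy && python -m spacy download en_core_web_sm).
    nlp = spacy.load("en_core_web_sm")

    # Hypothetical aligned pair: a human-corrected "ground truth" sentence
    # and the same sentence as raw OCR output with character-level errors.
    ground_truth = "Charles Dickens visited Manchester in October 1843."
    ocr_output = "Charle5 Dickcns visitcd Manchestcr in 0ctober 1843."

    def entities(text):
        # Collect (surface form, label) pairs for every entity spaCy finds.
        return {(ent.text, ent.label_) for ent in nlp(text).ents}

    gt_ents = entities(ground_truth)
    ocr_ents = entities(ocr_output)

    # Crude recall proxy: the share of ground-truth entities that are still
    # recovered, verbatim, from the noisy OCR text.
    surviving = gt_ents & ocr_ents
    print("ground truth entities:", gt_ents)
    print("OCR text entities:   ", ocr_ents)
    print(f"recall proxy: {len(surviving)}/{len(gt_ents)}")

Scaling this comparison over a corpus with graded OCR quality, and repeating it for each downstream task, yields the kind of quality-versus-performance measurements the paper reports.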
Date published
  • 2020
Institution
  • British Library
Project name
  • Living with Machines
Funder
  • Name: Arts and Humanities Research Council
  • DOI: http://dx.doi.org/10.13039/501100000267
  • ISNI: 0000 0004 3497 6001
  • ROR ID: https://ror.org/0505m1554
  • Award: AH/S01179X/1

Event title
  • ICAART 2020: 12th International Conference on Agents and Artificial Intelligence
Event location
  • Valletta, Malta