
A Massive Scale Semantic Similarity Dataset of Historical English

Authors
  • Emily Silcock
  • Melissa Dell

Abstract

A wide variety of tasks rely on language models trained on semantic similarity data. Existing semantic similarity datasets, however, are either constructed from modern web data or are relatively small collections created in the past decade by human annotators. This study draws on a novel source, newly digitized articles from off-copyright local U.S. newspapers, to assemble a massive-scale semantic similarity dataset spanning the 70 years from 1920 to 1989 and containing nearly 400M positive semantic similarity pairs. Historically, around half of the articles in U.S. local newspapers came from newswires such as the Associated Press. While local papers reproduced articles from the newswire, they wrote their own headlines, which form abstractive summaries of the associated articles. We associate articles with their headlines by exploiting document layouts and language understanding, and then use deep neural methods to detect which articles come from the same underlying source, in the presence of substantial noise and abridgement. The headlines of reproduced articles form positive semantic similarity pairs. The resulting publicly available HEADLINES dataset is significantly larger than most existing semantic similarity datasets and covers a much longer span of time. It will facilitate the application of contrastively trained semantic similarity models to a variety of tasks, including the study of semantic change across space and time.
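At its core, the pair-construction step described in the abstract reduces to grouping headlines by their deduplicated source article and pairing headlines within each group. The following is a minimal sketch with invented toy data: the records, headlines, and article ids here are illustrative only, whereas the real dataset derives article ids from deep neural deduplication of noisy newswire reprints.

```python
from itertools import combinations

# Toy records of (headline, source_article_id). In the real pipeline, the
# source id identifies the underlying newswire article shared by reprints.
records = [
    ("President Signs Farm Bill", "A1"),
    ("Farm Measure Signed Into Law", "A1"),
    ("Local Team Wins Title", "B7"),
    ("Champions Crowned in Final", "B7"),
    ("Storm Hits Coast", "C3"),  # only one reprint: yields no pair
]

def positive_pairs(records):
    """Group headlines by underlying article, then pair within groups."""
    groups = {}
    for headline, article_id in records:
        groups.setdefault(article_id, []).append(headline)
    pairs = []
    for headlines in groups.values():
        # Every unordered pair of distinct headlines for the same
        # article is a positive semantic similarity pair.
        pairs.extend(combinations(headlines, 2))
    return pairs

pairs = positive_pairs(records)
```

Note that the number of pairs grows quadratically in the number of reprints per article, which is how roughly half of local-newspaper content scales up to nearly 400M positive pairs.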

Suggested Citation

  • Emily Silcock & Melissa Dell, 2023. "A Massive Scale Semantic Similarity Dataset of Historical English," Papers 2306.17810, arXiv.org, revised Aug 2023.
  • Handle: RePEc:arx:papers:2306.17810

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2306.17810
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Abhishek Arora & Xinmei Yang & Shao-Yu Jheng & Melissa Dell, 2023. "Linking Representations with Multimodal Contrastive Learning," Papers 2304.03464, arXiv.org, revised Jun 2024.

    Citations

    Cited by:

    1. Melissa Dell & Jacob Carlson & Tom Bryan & Emily Silcock & Abhishek Arora & Zejiang Shen & Luca D'Amico-Wong & Quan Le & Pablo Querubin & Leander Heldring, 2023. "American Stories: A Large-Scale Structured Text Dataset of Historical U.S. Newspapers," Papers 2308.12477, arXiv.org.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Melissa Dell, 2024. "Deep Learning for Economists," Papers 2407.15339, arXiv.org, revised Sep 2024.
    2. Xinmei Yang & Abhishek Arora & Shao-Yu Jheng & Melissa Dell, 2023. "Quantifying Character Similarity with Vision Transformers," Papers 2305.14672, arXiv.org.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2306.17810. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form .

    If you know of missing items citing this one, you can help us creating those links by adding the relevant references in the same way as above, for each refering item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.