Detecting Edit Failures In Large Language Models: An Improved Specificity Benchmark

Jason Hoelscher-Obermaier, Julia H. Persson, Esben Kran, Ioannis Konstas, Fazl Barez

Research output: Contribution to book or proceeding › Conference article › peer-review


Abstract

Recent model editing techniques promise to mitigate the problem of memorizing false or outdated associations during large language model (LLM) training. However, we show that these techniques can introduce large unwanted side effects which are not detected by existing specificity benchmarks. We extend the existing COUNTERFACT benchmark to include a dynamic component and dub our benchmark COUNTERFACT+. Additionally, we extend the metrics used for measuring specificity with a principled KL-divergence-based metric. We use this improved benchmark to evaluate recent model editing techniques and find that they suffer from low specificity. Our findings highlight the need for improved specificity benchmarks that identify and prevent unwanted side effects.
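A KL-divergence-based specificity metric of the kind described in the abstract can be illustrated with a minimal sketch (not the authors' implementation; the model, tokenizer, and prompt names below are illustrative assumptions): compare the edited model's next-token distribution against the unedited model's on prompts that the edit should not affect, and report the average KL divergence, where larger values indicate unwanted side effects.

```python
# Minimal sketch (assumed setup, not the paper's code): `model_pre` and `model_post`
# are HuggingFace causal LMs before and after an edit, `tokenizer` is their shared
# tokenizer, and `neighborhood_prompts` are prompts unrelated to the edited fact.
import torch
import torch.nn.functional as F


@torch.no_grad()
def specificity_kl(model_pre, model_post, tokenizer, neighborhood_prompts):
    """Average KL(P_pre || P_post) over next-token distributions on prompts
    the edit should leave unchanged; larger values signal side effects."""
    kls = []
    for prompt in neighborhood_prompts:
        ids = tokenizer(prompt, return_tensors="pt").input_ids
        logp_pre = F.log_softmax(model_pre(ids).logits[0, -1], dim=-1)
        logp_post = F.log_softmax(model_post(ids).logits[0, -1], dim=-1)
        # KL(P_pre || P_post) = sum_i p_pre[i] * (log p_pre[i] - log p_post[i])
        kl = torch.sum(logp_pre.exp() * (logp_pre - logp_post))
        kls.append(kl.item())
    return sum(kls) / len(kls)
```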

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics, ACL 2023
Publisher: Association for Computational Linguistics (ACL)
Pages: 11548-11559
Number of pages: 12
ISBN (Electronic): 9781959429623
DOIs
State: Published - 2023
Externally published: Yes
Event: Findings of the Association for Computational Linguistics, ACL 2023 - Toronto, Canada
Duration: Jul 9, 2023 - Jul 14, 2023

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
ISSN (Print): 0736-587X

Conference

Conference: Findings of the Association for Computational Linguistics, ACL 2023
Country/Territory: Canada
City: Toronto
Period: 07/9/23 - 07/14/23

Scopus Subject Areas

  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics

