Self-Consistency of Large Language Models under Ambiguity

Henning Bartsch, Ole Jorgensen, Domenic Rosati, Jason Hoelscher-Obermaier, Jacob Pfau

Research output: Contribution to book or proceeding › Conference article › peer-review


Abstract

Large language models (LLMs) that do not give consistent answers across contexts are problematic when used for tasks with expectations of consistency, e.g. question answering or explanation generation. Our work presents an evaluation benchmark for self-consistency in cases of under-specification where two or more answers can be correct. We conduct a series of behavioral experiments on the OpenAI model suite using an ambiguous integer sequence completion task. We find that average consistency ranges from 67% to 82%, far higher than would be predicted if a model's consistency were random, and increases as model capability improves. Furthermore, we show that models tend to maintain self-consistency across a series of robustness checks, including changes of prompting speaker and changes of sequence length. These results suggest that self-consistency arises as an emergent capability without being specifically trained for. Despite this, we find that models are uncalibrated when judging their own consistency, displaying both over- and under-confidence. We also propose a nonparametric test for determining, from the token output distribution, whether a model assigns non-trivial probability to alternative answers. Using this test, we find that despite increases in self-consistency, models usually place significant weight on alternative, inconsistent answers. This distribution of probability mass provides evidence that even highly self-consistent models internally compute multiple possible responses.
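As an illustration of the kind of measurement the abstract describes, the sketch below estimates self-consistency by repeated sampling and flags alternative answers that receive non-trivial probability mass. This is a minimal sketch under stated assumptions, not the paper's exact procedure: the `sample_completion` stub, the 0.05 cutoff, and the example probabilities are hypothetical, and the threshold check stands in for the paper's nonparametric test.

```python
from collections import Counter

def sample_completion(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; swap in a real client."""
    raise NotImplementedError("plug in a real model call here")

def self_consistency(prompt: str, n_samples: int = 20) -> float:
    """Fraction of sampled answers that agree with the modal answer."""
    answers = Counter(sample_completion(prompt) for _ in range(n_samples))
    modal_count = answers.most_common(1)[0][1]
    return modal_count / n_samples

def nontrivial_alternatives(answer_probs, threshold=0.05):
    """Return alternative answers whose probability exceeds `threshold`.

    `answer_probs` maps candidate answers to probabilities, e.g. derived
    from the model's top-k token log-probabilities. The 0.05 cutoff is an
    assumed illustrative value, not the paper's criterion.
    """
    top = max(answer_probs, key=answer_probs.get)
    return [a for a, p in answer_probs.items() if a != top and p >= threshold]

# "2, 4, 8" is ambiguous: doubling gives 16, while second differences
# (+2, +4, +6) give 14. Probabilities below are made up for illustration.
probs = {"16": 0.70, "14": 0.22, "12": 0.03}
print(nontrivial_alternatives(probs))  # ['14']
```

In the abstract's framing, a behaviorally consistent model would concentrate its mass on one continuation (here 16), yet a check like this can still surface significant weight on alternatives such as 14, which is the phenomenon the paper's test is designed to detect.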

Original language: English
Title of host publication: BlackboxNLP 2023 - Analyzing and Interpreting Neural Networks for NLP, Proceedings of the 6th Workshop
Editors: Yonatan Belinkov, Sophie Hao, Jaap Jumelet, Najoung Kim, Arya McCarthy, Hosein Mohebbi
Publisher: Association for Computational Linguistics (ACL)
Pages: 89-105
Number of pages: 17
ISBN (Electronic): 9798891760523
State: Published - 2023
Externally published: Yes
Event: 6th Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP 2023 - Singapore, Singapore
Duration: Dec 7 2023 → …

Publication series

Name: BlackboxNLP 2023 - Analyzing and Interpreting Neural Networks for NLP, Proceedings of the 6th Workshop

Conference

Conference: 6th Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP 2023
Country/Territory: Singapore
City: Singapore
Period: 12/7/23 → …

Scopus Subject Areas

  • Computational Theory and Mathematics
  • Computer Science Applications
  • Information Systems
