“As an AI language model, I cannot”: Investigating LLM Denials of User Requests

General Information

Publication Type: Conference Paper

Published in: CHI '24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems

Year: 2024

Pages: Article No. 979, pp. 1–14

Place of Publication: New York, NY

Publisher: ACM

DOI: https://doi.org/10.1145/3613904.3642135

ISBN: 979-8-4007-0330-0

Authors

Joel Wester

Tim Schrills

Henning Pohl

Niels van Berkel

Abstract

Users ask large language models (LLMs) to help with their homework, for lifestyle advice, or for support in making challenging decisions. Yet LLMs are often unable to fulfil these requests, either as a result of their technical inabilities or policies restricting their responses. To investigate the effect of LLMs denying user requests, we evaluate participants' perceptions of different denial styles. We compare specific denial styles (baseline, factual, diverting, and opinionated) across two studies, focusing respectively on LLMs' technical limitations and their social policy restrictions. Our results indicate significant differences in users' perceptions across the denial styles. The baseline denial, which provided participants with brief denials without any motivation, was rated significantly higher on frustration and significantly lower on usefulness, appropriateness, and relevance. In contrast, we found that participants generally appreciated the diverting denial style. We provide design recommendations for LLM denials that better meet people's denial expectations.