Zhengtao Xu, Tianqi Song, Yi-Chieh Lee
International Journal of Human-Computer Studies (IJHCS) 2025
Due to their human-like nature, large language models (LLMs) often express uncertainty in their outputs. This expression, known as "verbalized uncertainty", appears in phrases such as "I'm sure that [...]" or "It could be [...]". However, few studies have explored how such expressions affect users' perceptions of AI, including their trust, satisfaction, and task performance. Our research aims to fill this gap by exploring how different levels of verbalized uncertainty in an LLM's outputs affect users' perceptions and behaviors in AI-assisted decision-making scenarios. To this end, we conducted a between-subjects study (N = 156), dividing participants into six groups based on two accuracy conditions and three verbalized-uncertainty conditions. We used the widely played word-guessing game Codenames to simulate the role of LLMs in assisting human decision-making. Our results show that medium verbalized uncertainty in the LLM's expressions consistently leads to higher user trust, satisfaction, and task performance than high or low verbalized uncertainty. They also show that participants experience verbalized uncertainty differently depending on the LLM's accuracy. This study offers important implications for the future design of LLMs, suggesting adaptive strategies for expressing verbalized uncertainty based on an LLM's accuracy.
Liangkang Peng, Jiangbo Qian, Zhengtao Xu, Yu Xin, Lijun Guo
IEEE Transactions on Image Processing (TIP) 2023