“Hey Siri, I’ll Have What You’re Having”: Chatbot Peer Pressure and Food Choices


The purpose of this study is to examine the effect of chatbot peer pressure on the healthy food choices of young adults.

The rise of artificial intelligence (AI) technology has reached the field of life advice and health counseling. People are building personal relationships with AI, and its influence on people's life and health choices is growing rapidly.

Studies have shown that humans not only form personal relationships with AI but also feel peer pressure from it. For example, one study revealed that robots create social pressure among children and influence their healthy food choices. Another study found that remote peer pressure significantly affects children's food choices. These studies suggest that robot peer pressure exists and can shape health choices; however, because both studies were limited to children, their findings cannot be generalized to other age groups.

In this study, we examined whether robot peers can create social pressure that increases conformity to the robot peers' healthy food choices. A total of 92 university students were assigned to one of three conditions: robot peer pressure (Group A), human peer pressure (Group B), and no pressure (Group C). General food preferences and hunger status were measured for all three groups prior to the main experiment. In the main experiment, participants in Group A answered general questions unrelated to health choices while conversing, over a computer-to-computer voice call, with a researcher in another room whose speech was rendered as a computer-generated voice. Interspersed among these non-health-related questions, participants were asked to choose between snack options. The computer-generated voice made its choice first, and the participants then chose between fruit and chocolate, water and a soft drink, and so on. Groups B and C followed the same procedure, except that Group B conversed with a researcher speaking in a natural human voice and Group C wrote their answers on paper. At the end of the experiment, students were asked whether they knew the purpose of the research, and the two students who correctly guessed it were excluded from the analysis.

According to the results, the difference in social pressure between Group A (robotic voice) and Group B (human voice) was not significant: participants in both groups conformed to the robot's and the human instructor's healthy food choices at similar rates. However, the levels of conformity in both Group A and Group B were significantly higher than in Group C (written questionnaire).
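The group comparisons above could be carried out with pairwise tests on the counts of healthy choices per condition. The sketch below illustrates one such approach, a Pearson chi-square test on 2x2 tables; the function name and all counts are invented for illustration, since the paper's actual data, sample splits, and test statistics are not reported here.

```python
# Hypothetical sketch of the pairwise group comparisons described above.
# All counts are invented; they are NOT the study's data.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table:
               healthy  unhealthy
      group 1     a         b
      group 2     c         d
    """
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    if 0 in (row1, row2, col1, col2):
        raise ValueError("degenerate table")
    return n * (a * d - b * c) ** 2 / (row1 * row2 * col1 * col2)

CRITICAL_05_DF1 = 3.841  # chi-square critical value, alpha = .05, df = 1

# Invented counts of (healthy, unhealthy) snack choices per condition
robot = (22, 8)    # Group A: robot peer pressure
human = (21, 9)    # Group B: human peer pressure
none_ = (12, 18)   # Group C: no pressure

a_vs_b = chi_square_2x2(*robot, *human)   # expected: not significant
a_vs_c = chi_square_2x2(*robot, *none_)   # expected: significant

print(f"A vs B: chi2 = {a_vs_b:.2f}, significant = {a_vs_b > CRITICAL_05_DF1}")
print(f"A vs C: chi2 = {a_vs_c:.2f}, significant = {a_vs_c > CRITICAL_05_DF1}")
```

With these invented counts, the A-vs-B comparison falls well below the critical value while A-vs-C exceeds it, mirroring the pattern of results reported above.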

The implications of this study are threefold: it 1) extends the effect of robot peer pressure from children's behavior to young adults' by using a between-subjects rather than a within-subjects design and by redesigning the experiment to conceal its actual purpose during the procedure, 2) adds to the health communication literature by highlighting the role of chatbot counseling, and 3) offers evidence of robot peer pressure.