AI bots could be a new tool to get people to be open about their feelings

As the legislative election in France approached this summer, a research team reached out to hundreds of citizens to interview them about their views on key issues. But the interviewer asking the questions wasn’t a human researcher. It was an AI chatbot.

To prepare ChatGPT for the role, the researchers started by prompting the bot to behave the way it has observed professors communicating in its training data. The specific prompt, according to a paper published by the researchers, was: “You are a professor at one of the world’s leading research universities, specializing in qualitative research methods with a focus on conducting interviews. In the following, you will conduct an interview with a human respondent to find out the participant’s motivations and reasoning regarding their voting choice during the legislative elections on June 30, 2024, in France, a few days after the interview.”

The human subjects, meanwhile, were told that a chatbot, rather than a person, would conduct the online interview, and they were recruited through Prolific, a system commonly used by researchers to find survey participants. Part of the research question was whether participants would be willing to share their views with a bot, and whether ChatGPT would stay on topic and, well, act professionally enough to elicit useful answers.

The chatbot interviewer is part of an experiment by two professors at the London School of Economics, who argue that AI could change the game when it comes to measuring public opinion in a variety of fields. “It could really accelerate the pace of research,” says Xavier Jaravel, one of the professors leading the experiment. He notes that AI is already being used in the physical sciences to automate parts of the experimental process; this year’s Nobel Prize in chemistry, for example, went to scholars who used AI to predict protein structures.
And Jaravel hopes that AI interviewers could allow more researchers in more fields to sample public views than is feasible or cost-effective with human interviewers. That could mean big changes for professors around the country, making the sampling of public opinion and experience part of the playbook for many more academics.

But other researchers question whether AI bots should stand in for humans in the deeply personal task of assessing people’s opinions and feelings. “It’s a very quantitative perspective to think that just having more participants automatically makes the study better — and that’s not necessarily true,” says Andrew Gillen, an assistant teaching professor in the first-year engineering program at Northeastern University. He argues that in many cases, “in-depth interviews with a select group is generally more meaningful,” and that those should be done by humans.

AI Doesn’t Judge

In the experiment with French voters, and in another trial that used the approach to ask what gives life meaning, many participants said in a post-survey assessment that they preferred the chatbot when it came to sharing views on highly personal topics. “Half of the respondents said they would rather take the interview again, or do a similar interview again, with an AI,” says Jaravel. “And the reason is that they feel like the AI is a non-judgmental entity. That they could freely share their thoughts, and they wouldn’t be judged. And they thought with a human, they would feel judged, potentially.” About 15 percent of participants said they would prefer a human interviewer, and about 35 percent said they were indifferent between a chatbot and a human.

The researchers also gave transcripts of the chatbot interviews to trained sociologists to assess their quality, and those experts judged the AI interviewer comparable to an “average human expert interviewer,” Jaravel says.
A paper on the study notes, however, that “the AI-led interviews never match the best human experts.” Still, the researchers are encouraged by the findings, and they have released their interviewing platform free for other researchers to try.

Jaravel agrees that the in-depth interviews typical of ethnographic research are far superior to anything the chatbot system can do. But he argues that a chatbot interviewer can collect far richer information than the static online surveys researchers typically rely on when they want to sample large populations. “So we think that what we can do with the tool here is really advancing that type of research because you can get much more detail,” he tells EdSurge.

Gillen, the researcher at Northeastern, argues that there is something important no chatbot will ever be able to bring, even when administering surveys: something he calls “positionality.” The AI chatbot has nothing at stake and can’t understand what it is asking or why, and that in itself will change the responses.
