May 9, 2019
Investigators at Stanford have shown that a relatively new technology, a chatbot called QuizBot, can be considerably more effective than flashcards in helping students learn and retain information.
In a study of 36 students who learned with either QuizBot or a flashcard app, the team found that students correctly recalled over 25% more answers for content covered by QuizBot and spent more than 2.6 times longer studying with QuizBot than with flashcards. Both QuizBot and the flashcard app taught material ranging from science and personal safety to advanced English vocabulary, and both used the same sequencing algorithm to choose which item to present to students next.
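The article does not detail that sequencing algorithm, but a minimal Leitner-style spaced-repetition scheduler, sketched below in Python, shows one common way such an app might pick the next item; the class, method names, and example items are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a simple Leitner-style scheduler, not the
# sequencing algorithm from the QuizBot study. Correct answers promote an
# item to a higher box (reviewed less often); misses send it back to box 0.
import random
from collections import defaultdict

class LeitnerScheduler:
    def __init__(self, items, num_boxes=3):
        self.num_boxes = num_boxes
        self.boxes = defaultdict(list)
        self.boxes[0] = list(items)  # everything starts in the most-reviewed box

    def next_item(self):
        # Prefer lower boxes, which hold the least-learned items.
        for box in range(self.num_boxes):
            if self.boxes[box]:
                return box, random.choice(self.boxes[box])
        return None, None

    def record_result(self, box, item, correct):
        # Promote on a correct answer, demote to box 0 on a miss.
        self.boxes[box].remove(item)
        new_box = min(box + 1, self.num_boxes - 1) if correct else 0
        self.boxes[new_box].append(item)

scheduler = LeitnerScheduler(["photosynthesis", "tourniquet", "inoculate"])
box, item = scheduler.next_item()
scheduler.record_result(box, item, correct=True)
```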
The longer study time is a major reason why QuizBot works so much better, said Emma Brunskill, an assistant professor of computer science and co-author of the paper, which will be published in the Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. The group presented its findings on May 8 at CHI 2019 in Glasgow, Scotland.
QuizBot is more conversational and more fun. Students felt like they had a true study partner.
Sherry Ruan, Graduate Student, Stanford University
Sherry Ruan led the study, while James Landay, a professor of computer science, was the senior author of the paper.
Chatbots are an application of artificial intelligence in which computer programs communicate with humans through text messages. They are far more common in customer service and e-commerce than in classrooms. Anyone who has chatted with a customer service representative while considering an online purchase may well have interacted with a chatbot.
QuizBot is a new twist on the chatbot formula. It poses factual questions through text, much like a teacher. The student enters answers, asks clarifying questions, and asks for clues, and QuizBot understands and responds conversationally, as if another person were on the other end. Unlike flashcards, chatbots can recognize near-miss answers and offer the student extra guidance and even encouragement.
One obstacle in developing such an educational system is that the computer must be able to recognize correct answers in many different forms. Every student answers differently, with different wording or syntax, and there is always the possibility of typos. This is where artificial intelligence comes in.
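The article does not describe QuizBot's matching model itself, but the idea of recognizing a correct answer in varied forms can be illustrated with a rough sketch: compare the student's response to the expected answer using a string-similarity score and treat close matches as near misses. The function name and thresholds below are assumptions for illustration, not QuizBot's actual method.

```python
# Rough illustration of near-miss answer detection, not the AI model
# behind QuizBot. A similarity score decides whether a response counts
# as correct, a near miss, or incorrect.
from difflib import SequenceMatcher

def grade_answer(student_answer: str, expected: str,
                 accept: float = 0.9, near_miss: float = 0.7) -> str:
    """Return 'correct', 'near_miss', or 'incorrect' based on similarity."""
    a = student_answer.strip().lower()
    b = expected.strip().lower()
    similarity = SequenceMatcher(None, a, b).ratio()
    if similarity >= accept:
        return "correct"
    if similarity >= near_miss:
        return "near_miss"   # e.g. a typo such as "ture" for "true"
    return "incorrect"

print(grade_answer("ture", "true"))                       # near_miss
print(grade_answer("Photosynthesis", "photosynthesis"))   # correct
```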
To check QuizBot's grading accuracy, the group randomly selected 11,000 conversational logs from their studies. They found that QuizBot was correct 96.5% of the time, with just five incorrect judgments out of 144 questions posed. Of those, one involved a typo ("ture" for "true") and three occurred because the algorithm penalized answers that were very short but correct. Only one error was caused by QuizBot misinterpreting the language.
The scientists believe that QuizBot could be the beginning of a post-flashcard world for informal learning.
I think there’s a lot of excitement around chatbots in general, though they aren’t in widespread use in education, just yet. But that should change.
Emma Brunskill, Assistant Professor of Computer Science, Stanford University
Brunskill also directs the Artificial Intelligence for Human Impact Lab.
James Landay is also the Anand Rajaraman and Venky Harinarayan Professor in the School of Engineering and a member of the Wu Tsai Neurosciences Institute and the Stanford Institute for Human-Centered Artificial Intelligence. Other Stanford contributors include postdoctoral scholar Elizabeth L. Murnane, graduate student Bryce Joe-Kun Tham, and research assistant Zhengneng Qiu. Researchers from Colby College and Tsinghua University in China also contributed to this research.
This study was funded by the TAL Education Group.