Researchers at MIT and Columbia University recently published a study measuring whether an AI language model that answers questions using the Socratic method elicits more critical thinking and metacognition than standard language-model interactions.

I tested something similar after OpenAI used a “Socratic tutor” system prompt to demonstrate the “steerability” of GPT-4, and found the experience surprisingly transformative compared to the standard ChatGPT “Q&A” workflow.
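Reproducing that setup takes nothing more than a system message. Here’s a minimal sketch using the openai Python SDK; the prompt wording is my paraphrase of OpenAI’s demo, not the exact text, and the model name is illustrative:

```python
# Minimal sketch: steering a chat model into a Socratic tutor via a system
# message, using the openai Python SDK (pip install openai). The prompt is
# a paraphrase of OpenAI's GPT-4 "Socratic tutor" steerability demo, not
# the demo's exact wording.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOCRATIC_TUTOR = (
    "You are a tutor that always responds in the Socratic style. "
    "Never give the student the answer directly; instead, ask the one "
    "question that best helps them reason their way to it, breaking the "
    "problem into simpler parts tuned to the student's level."
)

messages = [
    {"role": "system", "content": SOCRATIC_TUTOR},
    {"role": "user", "content": "How do I solve the system x + y = 7, 2x - y = 2?"},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
# Instead of the solution, the model should come back with a guiding
# question, e.g. "What happens if you add the two equations together?"
```

The entire difference from stock ChatGPT is that one system message: every answer becomes a question, and you end up doing the reasoning yourself.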

Valdemar Danry, Pat Pataranutaporn, Yaoli Mao, and Pattie Maes:

AI models can be biased, deceptive, or appear more reliable than they are, leading to dangerous decision-making outcomes… This is especially concerning when AI systems are used in conjunction with humans, as people have a tendency to blindly follow the AI decisions and stop using their own cognitive resources to think critically.

[…]

This paper presents the novel idea of AI-framed Questioning inspired by the ancient method of Socratic questioning that uses intelligently formed questions to provoke human reasoning, allowing the user to correctly discern the logical validity of the information for themselves. In contrast to causal AI-explanations that are declarative and have users passively receiving feedback from AI systems, our AI-framed Questioning method provides users with a more neutral scaffolding that leads users to actively think critically about information.

[…]

Our results show that AI-framed Questioning increases the discernment accuracy for flawed statements significantly over both control and causal AI-explanations of an always-correct AI system.
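The contrast the authors draw is easy to sketch with prompting alone: the same claim, framed once as a declarative verdict with an explanation and once as a neutral question. Both system prompts below are my own paraphrase of the two framings, not the authors’ experimental materials:

```python
# Sketch of the paper's two framings as system prompts. These paraphrase
# the described distinction; they are not the study's actual materials.
from openai import OpenAI

client = OpenAI()

CAUSAL_EXPLANATION = (
    "Given a statement, state whether its logic is valid and explain why."
)
AI_FRAMED_QUESTIONING = (
    "Given a statement, do not judge it. Instead, ask one neutral question "
    "that prompts the reader to examine the statement's logic themselves."
)

statement = "Crime rose after the new policy, so the policy caused the rise."

for name, framing in [("causal explanation", CAUSAL_EXPLANATION),
                      ("AI-framed questioning", AI_FRAMED_QUESTIONING)]:
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": framing},
                  {"role": "user", "content": statement}],
    )
    print(f"--- {name} ---")
    print(reply.choices[0].message.content)
```

The first framing hands you a verdict to accept or reject; the second leaves the logical work (here, spotting the post hoc fallacy) to you.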

Assuming the results here are accurate, what are the implications for traditional pedagogy?