AI Can Teach Our Students the Art of Dialogue

The same technology that’s ruining essay writing as a pedagogical tool can help students learn how to reason via conversations.
MIT Press Reader/Source image: Adobe Stock
By: David Weinberger

Recent research from MIT found that students who used AI to help structure and outline essays showed less executive brain activity than unaided students. Even more troubling, the effect persisted when they later wrote an essay on their own.

But instead of taking this as proof that AI fatally weakens our intellectual capabilities, we might also see it as an opportunity to reclaim a powerful and much-needed skill: learning to think through thoughtful, open dialogue. After all, most students will rarely write essays and reports after they graduate. But we hope they will all be able to engage in constructive conversations that broaden and deepen their thinking.

AI can be a powerful, even game-changing tool in this pursuit. As many of us have learned, large language models such as ChatGPT, Gemini, and Claude are not simply vending machines for answers. You can “talk” with them, drill down on a topic, challenge them, or expand a “conversation” in unexpected directions.

As a small example, here’s the beginning of a conversation I had with Claude, based on my prompt asking it to assume I’m a college freshman and to challenge my ideas. At the end, I had Claude compress it for this brief article:

STUDENT: We did the Ship of Theseus in class today. You replace all the planks, and then everyone argues about whether it’s the same ship. I honestly don’t see why that’s supposed to be a big deal. It just depends on how you’re using the word “same.”

LLM: That’s reasonable — maybe it is just semantic. Though, are you the same person you were when you were five? Your cells have mostly been replaced.

STUDENT: I’m the same person in a different body.

LLM: So, what stays the same — the person part? Your memories and personality have changed pretty dramatically, too.

STUDENT: Yeah, but they’re not planks you swap out, like with the ship. My new memories build on my old ones.

LLM: Oh, that’s good. So, it’s about developmental continuity, not just material continuity? That makes me reconsider the ship — planks don’t grow together. But what about a river? New water constantly flows through, yet it’s the same river.

STUDENT: The river’s defined by its shape, not its water.

LLM: Right — so for rivers it’s structure, for persons it’s developmental continuity. What about nations? Is America the same country as it was in 1800?

STUDENT: Probably developmental continuity, but that’s my point: Sameness is just definitional. Who cares?

LLM: But you rejected some definitions as wrong. Maybe we’re not choosing randomly — maybe what we mean by sameness tracks with what actually matters? Like, it matters whether someone who committed a crime decades ago is “the same person” we punish today.

This is not an AI writing an essay for the student. It’s a quick example of a student thinking through an idea in conversation: probing, being challenged, defending a point of view, and perhaps moving toward a reevaluation of an argument. Engaging this way is a fundamentally intellectual activity, and a hard one to teach, because it requires the kind of one-on-one attention that few teachers can afford under today’s pressures. But AI can offer students individualized attention at scale, helping them become critical and constructive thinkers by engaging them in respectful, probing dialogues that aim to discover and test ideas, as the dialogues at the ancient roots of Western thought did.


When I was a philosophy graduate student, we got one full hour of teacher training. Despite the session’s ludicrous brevity, one line has stuck with me for many decades: Don’t teach the course you would have wanted as an undergraduate. (Thank you, Professor Frank Cunningham.) In other words, we in the room were the one percent who went on to study philosophy as graduate students, and as teachers, we needed to serve the needs and interests of the 99 percent who would not.

Similarly, writing essays won’t be a part of most students’ work once they graduate. This isn’t to say that students shouldn’t write any essays while in school. After all, writing essays also teaches students how to read them: how to follow an argument’s flow, listen with sympathy, evaluate evidence, and challenge ideas. Clearly, those skills are crucial for learning how to think, but they are not the only skills needed for post-collegiate success. Real-time conversations require listening, finding points of common interest, flexing and diverging as appropriate, and following a complex set of social rules for respectful interaction. (Rupert Wegerif has written brilliantly about this as a pedagogy.)


In fact, this is precisely why Socrates was so skeptical of the written word. In the “Phaedrus,” he lists three drawbacks of writing: it weakens our memory; it gives the same fixed text to everyone rather than adapting to the individual reader; and you can’t ask a written document a question. You don’t have to deny the importance of literacy to see the merit of his critiques: Certainly, outsourcing memory to documents has diminished our memory. (Remember when the local bards recited thousands of lines from “The Iliad” as we listened, enthralled? Me neither.)

But learning to converse about ideas with AI can overcome Socrates’ second and third criticisms. First, we’re on the verge of language models responding in ways closely tailored to the particularities of each learner’s interests, knowledge level, and conversational style. Second, unlike a static text, you can ask questions of an LLM, and it will answer, often uncannily well (and sometimes wildly incorrectly, which students must be taught how to detect).

There’s also been an epochal change since Socrates’ time. He believed the reasoning exercised in a good conversation could be applied to any topic; that’s an implication of his claim that he knew nothing yet could engage in worthwhile conversations with generals about courage, politicians about virtue, religious experts about piety, and so on. Today, topical knowledge matters more than ever, which can lead to the fragmentation of understanding into separate fields, each requiring deep expertise. But AI enables us to pursue an idea from history to physics to literature to pop culture to feminist ethics, and beyond, all in one sitting. That sort of freedom unleashes curiosity, which is itself a pedagogical good. So is letting students experience the interconnectedness of all knowledge — a reflection of the world’s own callous disregard of disciplinary boundaries.


None of this means AI is a substitute for human teachers or that its answers should be trusted uncritically. On the contrary, AI’s errors and biases make it useful for teaching critical skills: Students must learn how to push back, question sources, and test claims. And teaching them to do this in a conversation with no social stakes can make students more fearless in their critiques.

Human-to-human dialogues carry stakes of their own, among them the building of social and community bonds, so AI dialogues should not be presented as a replacement for human conversation or human mentorship. That distinction should be made explicit in the class itself.

The teacher remains essential to this project. When students turn in transcripts of an AI conversation instead of an essay, the teacher can grade them, commenting on the dialogue’s flow and process as well as its substance. What did the student probe and why? Where were they too trusting? What assumptions went unchallenged? What seems dubious in what the AI asserts so confidently? The teacher can also lead classroom discussions of the process by which we can discover and test ideas conversationally.


Crucially, students should be allowed to discuss one another’s AI dialogues, identify their strengths and weaknesses, and continue the collaborative thought process in fully human spaces. This would turn the exercise into an occasion for human-to-human interaction, which is ultimately its aim.

Used this way, AI is not a plagiarism machine but a tool for developing ideas by learning to explore them in conversation. At a moment when thoughtful dialogue too often gives way to dismissal or disdain, learning to develop and critique ideas via conversation is a crucial skill to cultivate. Now more than ever.


David Weinberger, PhD, is an author, philosopher, and researcher at Harvard’s metaLAB, with a 20-year affiliation with Harvard’s Berkman Klein Center. He is the editor of MIT Press’s open-access book series, “Strong Ideas,” and has published widely in articles and multiple books. His forthcoming book, “Beautiful Particulars,” will be published in October by MIT Press.
