I Know You Talked to a Chatbot About Your Case
How AI potentially keeps clients locked in conflict and impedes mediation
One recent client interaction left me shaken.
The client had already prepared an appellate brief on his own, and he sent it to me for my review. It was long, complex, and completely unpersuasive. A decade of legal experience allows you to rapidly digest briefs, and I immediately saw that the structure and arguments were all wrong. We had a lot of work to do, so I quickly called him to try to explain why.
A bit too quickly, it turns out. In his mind, I hadn’t spent nearly enough time reading the brief. He raised his voice at me, screaming that he had spent hours on the most expensive tier of a popular AI chatbot, and what did I know? While we were able to bring down the tone of the conversation, it was impossible to penetrate the view of his case that he had developed over hours of chatting with an LLM, and, in my opinion, the presentation on appeal suffered greatly for it. (LLM stands for Large Language Model, a term used interchangeably with “AI” throughout this article, although they’re technically not the same thing.)
The interaction made me want to find out to what extent this issue has been studied. It turns out it was very far from an isolated incident, and the most current research on AI chatbots and their effect on the human mind shows some alarming trends. Lawyers in the United States are already calling it “the WebMD effect on steroids,” with clients arriving convinced their case is a slam dunk because a chatbot told them so. Meanwhile, the more extreme cases continue to make headlines: a Toronto man with no psychiatric history recently spent 300 hours in a delusional spiral, to say nothing of a major AI chatbot company settling multiple lawsuits alleging its chatbots contributed to psychological crises and self-harm.
The focus of this article, however, is on how AI may be hurting clients in more subtle ways. First, we’ll dive into three problems currently being identified by researchers of AI/LLMs, and then we’ll discuss how those problems relate to mediation.
The Cognitive Dependence Problem
In a four-week study from the MIT Media Lab and OpenAI (Fang et al., 2025) with nearly a thousand participants, heavier chatbot use correlated with increased loneliness and emotional dependence. People who used chatbots for advice and idea generation began to display a loss of agency and reduced confidence in their own ability to make decisions. The Gerlich study (2025), published in the peer-reviewed journal Societies, found similar results: a significant negative correlation between AI usage and critical thinking abilities.
The Sycophancy Problem
In a study out of Northeastern University (Kelley & Riedl, 2026), researchers found that chatbots tended to conform uncritically to users’ beliefs. LLMs are designed to tailor their responses to the user’s personality traits and preferences. Indeed, the study suggests that users, through their interactions with AI chatbots, cause the chatbot to mold its responses to fit a certain “role.” That role can be that of an advisor or authority (in which case the chatbot is more likely to push back against faulty reasoning) or that of a peer or friend (in which case it is more likely to modulate itself to accommodate and support the user’s beliefs).
In other words, current research indicates that how willing a chatbot is to go along with what the user already believes depends largely on what the user wants from the experience.
The Overconfidence Problem
Have you heard of the Dunning-Kruger Effect? It describes how people with higher ability in a field tend to underestimate themselves, while people with lower ability tend to overestimate themselves. (Sound like anyone you know?)
Well, in a study out of Aalto University (Fernandes et al., “AI makes you smarter but none the wiser,” Computers in Human Behavior, 2026), researchers looked at participants preparing for the LSAT, the exam American students take to get into law school. They found that using AI chatbots for assistance eliminated the Dunning-Kruger Effect entirely.
Sounds good, right? Well, it wasn’t because people had become more accurate at evaluating their abilities. Far from it. The researchers observed that the use of AI made EVERYONE overestimate their abilities.
The majority of participants showed a high level of trust in AI, often accepting its suggestions without further inquiry. The researchers also found that people who considered themselves more AI literate were even more overconfident in their abilities. As the study puts it: “AI improves performance but leads to highly biased self-assessments.”
The Obvious Dangers to Clients Researching Legal Issues
Let’s ask ourselves: is a client who spends hours discussing their case with an AI chatbot a highly trained, discerning legal mind looking for level-headed assistance from an objective advisor, or an angry person looking for validation?
The dangers are clear: the clients most likely to run into problems with LLMs are exactly the ones using them for validation, driven by an overwhelming desire to be told they have a great case, and they will spend hours with the chatbot trying to get that result.
All users of AI products, including mediators using them to assist with sessions, should be warned that they risk putting themselves into a “digital echo chamber” that reinforces whatever they already believe, but this is especially so for clients in the midst of conflict.
So where does this all leave us practitioners of conflict resolution? It can’t be anywhere good. Let’s talk about it.
The Conflict Spiral: Transformative Mediation and What AI Might Do to It
Let’s remind ourselves of the theory of conflict resolution in the transformative model of mediation. Bush and Folger, in their groundbreaking book The Promise of Mediation (2005), describe parties in conflict as locked in a negative spiral. The conflict hangs over them and makes them feel trapped, weaker, less confident, and less able to make decisions for themselves. They also become self-absorbed, focused on their own point of view and their own grievance, and less able to see the other people involved in the conflict in their totality. Each person locked in the conflict views themselves as a victim and the other as a villain.
The role of the mediator in a transformative process is to facilitate communication between the parties in order to foster recognition and empowerment. Recognition means seeing other points of view and feeling recognized in one’s own, acknowledging that the other people in the conflict are complex human beings and that, as with most things in life, the conflict is not black and white. Empowerment is fostered by recovering personal strength and confidence in the parties’ ability to make decisions for themselves.
These shifts in perspective must come from the parties and cannot be imposed. The mediator creates conditions for those shifts to happen and encourages the parties to talk it out, seeking out those moments where recognition and empowerment can occur.
Where AI and the Conflict Spiral Collide
Under the latest research, AI chatbots seem extremely likely to undermine the transformative process in every key way it is supposed to work.
People already in a negative spiral of weakness and self-absorption are, by definition, unable to maintain the perspective needed to defend themselves against the worst places chatbots might lead them.
So let’s go one by one through the potential pitfalls and see how.
Cognitive dependence: Just at the moment when people feel at their lowest and most powerless, along comes an AI chatbot feeding them easy answers to their problems and making them feel even LESS able to make decisions for themselves. This interaction directly undermines the goal of helping people in conflict regain the power that they feel they have lost.
Sycophancy: Again, the research indicates that people who come to the AI chatbot experience looking for a peer to make them feel vindicated in their viewpoint are going to find just that. From the conflict resolution perspective, people who are already, by definition, locked in a negative spiral of self-absorption and powerlessness are only more likely to want one-sided communication that caters to that. So they are even more likely to find a chatbot that agrees with and foments their view of the world in which they are victims. Even though interrupting would ultimately be in the user’s best interest, these solitary interactions, in which the chatbot hears only one person’s perspective, give the AI no incentive to interrupt a user who wants it to reinforce his or her participation in the negative conflict spiral.
Finally, the overconfidence problem: A successful mediation that ends in an agreement requires the parties to accept an agreed-upon outcome and, usually, to leave something on the table that they might have won in court. They must place a greater value on their own decision-making process and their ability to solve problems through communication and agreement than on winning a battle against an adversary.
If AI chatbots cause people to be overconfident in themselves or in their ability to “win” the case, this also undermines their ability to engage in that process.
People in conflict who turn to LLMs are more likely to use chatbots to entrench themselves in their existing positions and views on the case. This undermines the goals of transformative mediation: it keeps them trapped in an adversarial posture, unable to expand their views and see the situation from a different perspective, and leaves them less confident in their ability to make decisions for themselves.
So What Do We Do as Mediation Practitioners?
Well, we have to accept that people DO use AI chatbots, and that the people who come to us for assistance in conflict resolution are likely to have used them. We need to talk about it openly, as we would any other issue in the case that could impede resolution of the conflict. We can discuss these studies with our clients and raise the possibility that the parties agree, in their initial mediation contract, not to use an AI chatbot to discuss the case during the mediation. We can explain that this is a part of the mediation agreement that many people make and find valuable. Of course, if one of the parties has been spending a lot of time with the technology, relies heavily on it, and has had their view of the case shaped by it, this initial discussion is likely to provoke a strong emotional reaction. So, while we have to tread carefully, waiting until one of the parties brings it up in the middle of the sessions seems too late.
We can’t prevent people from using this technology if they want to, but in light of how detrimental it can be for our clients to talk endlessly to AI chatbots about their cases, we have to face this problem head-on. We cannot be afraid to raise the issue up front during the first session of mediation and to incorporate clauses into our mediation agreements providing that participants will not use LLMs to discuss their case while they participate in the process, making it part of our initial issues for discussion.
As always, forcing or pressuring the parties to agree not to use AI would go against the goals of transformative mediation and should be avoided. We must never impose our views on the parties. This issue is simply another point, however important, that we should invite the parties to discuss to determine the extent to which they are willing to agree.
It is the next frontier in the endlessly fascinating and rewarding field of dispute resolution. We’ll be staying on top of it!
Frequently Asked Questions
Can I use an AI product to help me prepare for mediation?
The question isn’t whether you can; it’s whether you should. But the answer isn’t so simple. As always, if you have a legal question that concerns you, the best person to ask is a lawyer who practices that kind of law in the relevant jurisdiction. If you’re using an LLM to ask very general questions that help you identify the kinds of issues you should be thinking about, it’s probably less of a concern. The more you find yourself giving the chatbot details about your specific case and working on the specific legal theories and arguments to be made, the more counterproductive it is likely to be.
Why would a chatbot give me wrong information about my legal situation?
The problem is not so much that the chatbot gives you wrong information (although it might do that too). The problem is that chatbots are not designed to challenge you. A chatbot is much more likely to entrench you in the thinking and wishes you bring to it than to interrupt your thought patterns, even when interruption would be in your best interest. The value of interactions with chatbots depends entirely on the user, and a person in a conflict is unlikely to have the perspective necessary to use the technology effectively. You are better off listening to licensed, experienced professionals.
What is a conflict spiral in mediation?
Under the theory of transformative mediation, people in conflict are trapped in a negative spiral in which their inability to resolve the conflict makes them feel powerless and self-absorbed. The goals of transformative mediation are for the parties locked in that pattern to feel recognized in their own perspective, to recognize the other parties’ perspectives, and to feel more empowered in their ability to resolve the problem on their own. Use of AI potentially interrupts this process by keeping people locked in their original perspectives.
Can a mediator require me not to use AI during mediation?
Mediators can’t force you to do anything; the process is entirely voluntary. What I suggest is that mediators open up a conversation about these products, which are ever more ubiquitous, and ask the parties to decide whether they would agree not to use them during the mediation. Since the research overwhelmingly shows how likely these technologies are to harm this process, I think we have to get ahead of it and talk about it openly.