
Mediation beyond algorithms: What AI misses in mediation - Part 2


By Asal Anarkulova and Alix Povey


This article considers whether artificial intelligence can meaningfully advance, or even replace, the ideal of neutrality in mediation. It situates mediator bias as a cognitive default rather than a moral failing, then surveys evidence from social science and real-world systems (e.g., résumé-screening and healthcare algorithms, facial recognition) to show how AI frequently reproduces historical skews instead of correcting them. Against this record, the article distinguishes what machines do well (consistency, scale, pattern-surfacing) from the core human capacities mediation requires: self-awareness, empathic listening, and context-sensitive ethical judgment. Drawing on correction models from psychology, it argues that neutrality in practice is not bias-free decision-making but the disciplined, reflective management of bias, something current AI cannot perform because it lacks intention, experience, and moral understanding. The conclusion positions AI as an assistive tool for mediators, not a substitute: technology may flag patterns and reduce certain human errors, but the dignity, trust, and repair at the heart of mediation still depend on a human mediator’s capacity to notice, pause, and choose differently.


Download the full article





