Evaluating the Impact of Explainable AI in Automated Legal Decision-Making Systems

A Rangamma, K Bhaskar, G Vidyu Latha

This research evaluates the impact of Explainable AI (XAI) techniques, such as LIME, SHAP, and saliency maps, in automated legal decision-making systems. With the increasing use of AI in legal domains, the opaque nature of traditional AI models has raised concerns about transparency, fairness, and accountability. This study compares AI models with and without XAI integration across key metrics: accuracy, interpretability, user trust, bias detection, and fairness. The results show that while the XAI-enabled model exhibits a slight reduction in accuracy (82% versus 85% for the traditional AI model), it substantially improves interpretability (9/10), bias detection (75%), and fairness (8/10), fostering greater trust among legal professionals. The findings suggest that integrating XAI techniques is essential for ensuring ethical, transparent, and fair AI-driven decisions in high-stakes legal environments, even at the cost of minor trade-offs in accuracy and processing time.
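As a rough illustration of the comparison the abstract describes, the sketch below trains a baseline classifier and then attaches a SHAP explanation layer to expose per-feature contributions for individual decisions. The dataset, features, and model here are hypothetical placeholders, not the study's actual data or pipeline.

    # Minimal sketch of the XAI-vs-baseline comparison described above.
    # All data and model choices are illustrative stand-ins.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-in for case features (e.g., prior offenses, case type).
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hypothetical outcome label
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Baseline "opaque" model: judged on accuracy alone.
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print("accuracy:", model.score(X_test, y_test))

    # XAI layer: SHAP attributes each prediction to its input features,
    # the kind of interpretability signal the study scores out of 10.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)
    # The layout of shap_values varies by shap version (a list per class or
    # a single 3-D array); either way, each entry is a per-feature attribution.
    print("attribution shape:", np.shape(shap_values))

Inspecting these attributions across groups of cases is one plausible route to the bias-detection and fairness checks the study reports, since systematically skewed attributions for a sensitive feature would surface directly in the explanations.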