This research explores the integration of fuzzy logic into traditional machine learning classifiers to
enhance model interpretability without significantly sacrificing predictive accuracy. As machine learning
models increasingly influence critical decisions in fields like healthcare, finance, and autonomous
systems, the need for transparent and understandable decision-making processes has become paramount.
This study compares a traditional machine learning classifier with a fuzzy-enhanced version, evaluating
them on accuracy, fidelity (the degree to which a model's explanations agree with its predictions),
simplicity, and stability. While the fuzzy-enhanced model
shows a slight reduction in accuracy (84.9% compared to 85.7% for the traditional model), it offers
substantial improvements in interpretability and consistency. The fuzzy model achieves high fidelity
(92.5%), uses a simpler decision process with fewer rules and a shallower tree, and
demonstrates greater stability in its explanations. These findings suggest that incorporating fuzzy logic
into machine learning classifiers can create models that are not only effective but also more transparent
and trustworthy, making them better suited for applications where interpretability is critical.
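The interpretability gain described above comes from expressing decisions as human-readable fuzzy rules. The sketch below illustrates the general idea with two hypothetical rules over invented features (`glucose`, `bmi`) and triangular membership functions with min (AND) inference; the paper's actual rule base, features, and fuzzification scheme are not specified here, so every name and threshold is an assumption for illustration only.

```python
def triangular(x, a, b, c):
    """Triangular membership: 0 at a, rises to 1 at peak b, falls to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_classify(glucose, bmi):
    """Evaluate two hypothetical fuzzy rules and return the winning label.

    Rule 1: IF glucose IS high AND bmi IS high THEN risk
    Rule 2: IF glucose IS low  AND bmi IS low  THEN no-risk
    (min implements the fuzzy AND; all breakpoints are illustrative.)
    """
    risk = min(triangular(glucose, 100, 160, 200),
               triangular(bmi, 25, 35, 45))
    no_risk = min(triangular(glucose, 60, 90, 120),
                  triangular(bmi, 15, 22, 30))
    label = "risk" if risk > no_risk else "no-risk"
    return label, risk, no_risk

# A prediction carries its own explanation: the firing strength of each
# rule shows exactly why the label was chosen.
label, r, nr = fuzzy_classify(160, 35)
print(label, r, nr)  # → risk 1.0 0.0
```

Because each rule is a short linguistic statement with a numeric firing strength, stability of explanations can be checked directly: small input perturbations shift membership degrees smoothly rather than flipping a crisp threshold, which is one reason rule-based fuzzy models tend to score well on the stability criterion used in this study.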