Original Article

Improving Transparency in Deep Learning Models using Explainable AI Techniques

Aquela Nawaz Qureshi¹, Dr. P. Vishvapathi²

¹ Assistant Professor, Department of Computer Science and Engineering, Deccan College of Engineering and Technology, Nampally, Hyderabad, Telangana, India. ² Professor, Department of Computer Science and Engineering, Deccan College of Engineering and Technology, Nampally, Hyderabad, Telangana, India.

Published Online: May-June 2026

Pages: 29-35

Abstract

Explainable Artificial Intelligence (XAI) has become an important area of research for overcoming the shortcomings of conventional deep learning models, which tend to act as black boxes. Although such models are highly predictive, they are not interpretable, which raises concerns about their reliability, accountability, and ethical use in sensitive domains such as healthcare and finance. The presented work aims to enhance the transparency of deep learning models with the help of sophisticated XAI methods such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations). These techniques interpret model predictions by quantifying the influence of each input feature and producing human-interpretable explanations. The proposed solution integrates explainability into the machine learning pipeline, making the decision-making process more transparent without considerably impacting model performance. With SHAP used to analyze feature importance both globally and locally, and LIME used to examine model behavior on a case-by-case basis, the system provides insight into how the models behave. Experimental results show that applying XAI methods can enhance user trust, support model debugging, and aid compliance with ethical and regulatory requirements. These findings underline the need for explainability in the deployment of responsible and trustworthy AI systems in the real world.
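The abstract does not include an implementation, but the attribution idea behind SHAP can be illustrated with a minimal, self-contained sketch. The code below computes exact Shapley values for a black-box `predict` function by enumerating all feature coalitions; the function names (`shapley_values`, `predict`, `baseline`) are illustrative assumptions, not the paper's actual code, and in practice the `shap` library uses efficient approximations rather than this exponential enumeration.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for a black-box predict() over n features.

    Features absent from a coalition are replaced by their baseline
    value; phi[i] is the weighted average marginal contribution of
    feature i over all coalitions of the other features.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):  # coalition sizes 0 .. n-1
            for S in combinations(others, k):
                # model value with coalition S only
                z = [x[j] if j in S else baseline[j] for j in range(n)]
                v_S = predict(z)
                # model value with coalition S plus feature i
                z[i] = x[i]
                v_Si = predict(z)
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (v_Si - v_S)
    return phi

# Usage: for an additive model, each feature's Shapley value equals
# its own contribution relative to the baseline.
model = lambda z: 2 * z[0] + 3 * z[1] + z[2]
print(shapley_values(model, [1, 1, 1], [0, 0, 0]))
```

For a linear model the attributions recover the coefficients exactly, which is a useful sanity check; for deep networks the same definition applies, but libraries such as `shap` approximate it by sampling coalitions instead of enumerating all of them.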
