
How to make irresponsible AIs more responsible and explainable?

Explainable AI and Responsible AI are terms that have started to appear everywhere in recent times. They come up frequently in major legislation, in companies building them into their products, and everywhere else AI/ML tools are deployed.

So, what exactly do they mean? 

Let us start with Explainable AI. 

The main objective of Explainable AI (also referred to as XAI) is to provide an explanation or reasoning behind an AI algorithm's decision for a particular input. The users of that AI should be able to understand, in the simplest form, how the algorithm arrived at its decision. They are not required to know every technical detail of the model's inner workings, but they should at least understand how the AI reached its decision, so that they can trust it.

These explanations, although they may sound trivial or unnecessary, are very helpful in building user trust in the AI ecosystem. Beyond that, they can reveal bias and problems within a model and help its makers eliminate them before it is deployed into production.

So, then what is Responsible AI? 

Responsible AI, or RAI, is about making an AI model reliable and fair to everyone while keeping it interpretable and trustworthy. RAI thus helps create an environment where ethics are at the forefront of building an AI product. It is important that qualities such as interpretability, fairness, robustness and trustworthiness are embedded in an AI model, and RAI helps in developing such models.

So why are they relevant in today’s world? 

In the world of machine learning development, there is a term, "black box model": once a machine learning model is trained and validated, we can only trust its decisions/predictions without really understanding what is happening behind the curtain. Sure, we can inspect its architecture and list the weights it learned during training, but how it arrives at one decision rather than another remains unknown.
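To make this concrete, here is a minimal sketch (using scikit-learn on a synthetic dataset; the model and data are illustrative choices, not from any real system) of how a trained network exposes all of its weights while telling us nothing readable about any individual prediction:

```python
# A "black box" up close: every learned parameter is accessible,
# yet none of them explains an individual decision.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data; any real tabular dataset would do.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X, y)

# The full list of weight matrices is right there...
for i, w in enumerate(model.coefs_):
    print(f"layer {i} weight matrix shape: {w.shape}")

# ...but the prediction itself comes with no human-readable reasoning.
print("prediction for first sample:", model.predict(X[:1]))
```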

This is generally okay if your model has simpler tasks at hand, such as recommending a movie or classifying images of dogs and cats, as these predictions do not have serious repercussions on a user's life. In such cases, you can just trust the model's decision; there is no need to dig deeper if the model has already achieved good accuracy in its testing phase.

But in certain socially sensitive contexts, such as criminal justice, medicine and banking, it becomes necessary not to trust the model's decision blindly, as it could have a far more damaging impact on its subjects. A wrong decision can lead to huge losses for a financial institution, increased severity of illness for an already sick patient, or innocent people being apprehended just because the AI said so.

For example, suppose a bank's so-called state-of-the-art machine learning model rejects all loan applications from applicants belonging to a particular ethnicity, and there is no sure way to detect how biased the model is because it has already reported very high accuracy. This sort of bias usually comes from the data the model was trained on: the model noticed that loan applicants from a particular background were usually rejected, learned that pattern, and continued to apply it.
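A first practical step toward surfacing this kind of bias is simply comparing the model's decisions across groups. The sketch below (a pandas-based check with hypothetical column names, `ethnicity` and `approved`) computes per-group approval rates and the "disparate impact" ratio often used as a rough fairness screen; a ratio far below 1.0 is a red flag worth investigating:

```python
# A rough bias screen: compare approval rates per group.
# Column names ("ethnicity", "approved") are hypothetical placeholders.
import pandas as pd

def approval_rates(df, group_col, outcome_col):
    """Fraction of positive outcomes (e.g. approved loans) within each group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact(rates):
    """Lowest group rate divided by the highest; 1.0 means parity."""
    return rates.min() / rates.max()

# One row per applicant in a held-out set, with the model's decision.
df = pd.DataFrame({
    "ethnicity": ["A", "A", "A", "B", "B", "B"],
    "approved":  [1,   1,   0,   0,   0,   1],
})
rates = approval_rates(df, "ethnicity", "approved")
print(rates)
print("disparate impact ratio:", round(disparate_impact(rates), 2))
```

No single number proves or disproves bias, but a check like this turns "the accuracy looks fine" into a question the team has to answer.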

Previously, there was no sure way to learn about these biases or to make an ML model trustworthy and interpretable. Hence, it became necessary to open up these black boxes and replace them with more transparent boxes, where it is clear why the model arrived at a certain decision, so that stakeholders can trust the model and draw further conclusions from it.

XAI helps produce these explanations of how the model arrived at its decisions, while RAI helps ensure that the model is trustworthy, reliable and, most importantly, unbiased in its predictions.

How are these concepts implemented? 

RAI is more of a guiding concept for the right and just development of an AI product, so it is mostly applied during the development phase of a machine learning model. Things kept in mind while implementing RAI include removing biases from the data set during the data collection process, and keeping the ultimate goal of helping society as a whole and creating a positive impact through the AI.
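One concrete, widely cited technique for reducing data set bias before training is reweighing: assigning each sample a weight so that a sensitive attribute and the label look statistically independent. Below is a minimal from-scratch sketch of the idea (the column names `group` and `label` are hypothetical; real projects often reach for dedicated toolkits such as IBM's AIF360 instead):

```python
# Reweighing sketch: weight samples so that group membership and label
# appear independent, i.e. w(g, y) = P(g) * P(y) / P(g, y).
import pandas as pd

def reweighing_weights(df, group_col, label_col):
    n = len(df)
    p_group = df[group_col].value_counts() / n
    p_label = df[label_col].value_counts() / n
    p_joint = df.groupby([group_col, label_col]).size() / n
    return df.apply(
        lambda row: p_group[row[group_col]] * p_label[row[label_col]]
        / p_joint[(row[group_col], row[label_col])],
        axis=1,
    )

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [1,   1,   0,   0,   0,   1],
})
df["sample_weight"] = reweighing_weights(df, "group", "label")
print(df)
# Most scikit-learn estimators accept these via fit(..., sample_weight=...).
```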

For XAI, there are various ways to incorporate it into a product and take advantage of the benefits it brings. Since the primary objective of XAI is to make AI models explainable, several families of techniques have emerged for doing so.

Techniques to make an AI more explainable: 

1. Simply using more interpretable models: 

In the machine learning domain, models range from highly complex to much simpler. The idea here is to prefer models whose results can be easily understood by a human mind. Models such as linear regression and decision trees are often chosen when the trained model needs to be more interpretable and easier to explain.
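As a small illustration of this approach (using scikit-learn and its built-in Iris data set; the specifics are an assumption for this sketch, not from the article), a shallow decision tree can be printed as plain if/then rules that anyone can follow:

```python
# An inherently interpretable model: a shallow decision tree whose
# learned rules can be printed and read directly.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# export_text renders the tree as nested if/then rules over the features.
print(export_text(tree, feature_names=list(iris.feature_names)))
```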

2. Using model-agnostic methods:

Model-agnostic methods take a model-independent approach to generating explanations, which lets creators focus on the model's performance rather than worrying about interpretability. The interpretation method can work with any machine learning model, even complex ones such as random forests and deep neural networks.

It doesn't matter what type the data is, i.e., tabular or images; a user can still obtain explanations and understand the reasoning. Techniques such as LIME (Local Interpretable Model-agnostic Explanations) and Shapley values fall under model-agnostic methods.
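As a brief sketch of what this looks like in practice, the snippet below uses the open-source `shap` library to compute Shapley-value attributions for a random forest on a synthetic dataset (the model and data are illustrative assumptions; the same pattern applies to other model types):

```python
# Model-agnostic explanations: Shapley-value attributions via the shap library.
# Dataset and model are placeholders for this sketch.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# shap.Explainer picks a suitable algorithm for the model behind the scenes.
explainer = shap.Explainer(model, X)
explanation = explainer(X[:5])

# Per-feature contributions to the first prediction: positive values pushed
# the model toward its output, negative values pushed against it.
print(explanation.values[0])
```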

3. Using model-specific methods:

Although some complex models, such as convolutional neural networks (used for images) and language-based models, are highly efficient at extracting patterns from complex data and have consistently delivered higher accuracy, almost all of them are far less interpretable due to their hierarchical internal structure and functioning. Hence, model-specific methods are also being developed to understand these models. They include techniques such as learned features and pixel attribution, and they help generate explanations for such high-level models.
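One of the simplest pixel-attribution techniques is a vanilla gradient saliency map: backpropagate the predicted class score to the input image, and the pixels with the largest gradient magnitude are the ones the network was most sensitive to. A minimal PyTorch sketch (the tiny CNN and random "image" are placeholders for a real trained model and input):

```python
# Pixel attribution via a vanilla gradient saliency map (PyTorch).
# The network and input are stand-ins for a trained CNN and a real image.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder "image"

scores = model(image)
top_class = scores.argmax(dim=1).item()
# Backpropagate the top class score down to the input pixels.
scores[0, top_class].backward()

# Saliency: per-pixel sensitivity, taking the max across color channels.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 32, 32)
print(saliency.shape)
```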

These techniques produce explanations and help us build more reliable machine learning models. On the user's side, the explanations can reveal how a rejected application could be changed to get it accepted. Obviously, common sense should prevail: some things cannot be changed or reversed, such as ethnicity, gender or age. But logical, actionable steps can be derived from such explanations to help consumers get more value out of the product.

Where do we come in? 

The world is becoming more heavily dependent on AI as we move into the future, and AI models are being given more important duties than ever before. Hence, to keep this growth in check and avoid misuse of the technology, it is important to invest in Explainable and Responsible AI to create a more equal, just and, most importantly, smarter society as we move forward.

At Revca, we are building our product with explainability and responsibility at the forefront of our development, and our solutions are currently among the best in the entire domain.

ANAI is Revca's next-generation, XAI-enabled solution that helps you:

  • Explain your ML models with ease 
  • Operationalize with full transparency and trust in the decisions of your ML models 
  • Understand the models already deployed in production 
  • Manage projects across your entire organization with AI/ML assistance

Feel free to connect with us anytime by visiting www.anai.io. 

