By Fayaz Ahmed

Explainable Artificial Intelligence (AI) and the Energy Sector



What if you allow a machine to make a decision on your behalf, and after the decision is executed it cannot explain to you the rationale behind it? How likely are you to trust the outcome of such a decision?

When you run machine learning models, particularly the complex deep learning models often described as non-explainable because they act as a ‘black box’, you end up asking questions like:

· Why did you do that?

· Why not something else?

· When do you succeed?

· When do you fail?

· When can I trust you?

· How do I correct an error?

This points to the need to make AI more explainable: developing machine learning techniques that maintain a high level of learning performance while generating models that human experts can easily understand. Above all, human operators need to trust AI decisions and results for AI to be effective.

The importance of explainable AI is even greater in the energy industry, given the sector's criticality to public welfare and its economic importance. For instance, if an AI system shuts down your power generation plant, resulting in significant economic fallout, there must be a way to justify a decision of that magnitude.
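To make this concrete, below is a minimal sketch of one common explainability technique: training an interpretable ‘surrogate’ decision tree to mimic a black-box model, so that operators can read human-understandable rules behind a shutdown recommendation. The sensor names, the synthetic data, and the shutdown rule are illustrative assumptions, not real plant data or any specific vendor's method.

```python
# A minimal sketch of a global surrogate model: a shallow decision tree
# trained to mimic a black-box classifier. All data here is synthetic and
# the sensor names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(seed=0)
feature_names = ["turbine_temp", "vibration", "load"]

# Synthetic sensor readings; toy rule: high temperature combined with
# high vibration triggers a shutdown recommendation (label 1).
X = rng.normal(size=(1000, 3))
y = ((X[:, 0] + X[:, 1]) > 1.0).astype(int)

# The "black box": an ensemble whose internal logic is hard to inspect.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The surrogate: a shallow tree fitted to the black box's *predictions*
# (not the true labels), yielding readable if/then rules that approximate
# how the black box decides.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=feature_names))
```

The printed tree gives threshold rules of the form "if turbine_temp is above X and vibration is above Y, recommend shutdown", which an operator can sanity-check against engineering judgement. The trade-off is that a shallow surrogate only approximates the black box, so its fidelity should be measured before its explanations are trusted.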


The good news is that there are substantial research efforts under way to produce explainable AI. The US Defense Advanced Research Projects Agency (DARPA) is currently funding a programme to research different methods of constructing explainable AI (DARPA, 2017). IBM launched an open-source toolkit to aid AI explainability in August 2019 (IBM, 2019).



References:

DARPA (2017). Explainable Artificial Intelligence (XAI) Program. US Defense Advanced Research Projects Agency.

IBM (2019). AI Explainability 360: an open-source toolkit. IBM Research.
