Articles

Learning Artificial Intelligence...

by Global E learning Industry




Created on Nov 24th 2021 22:54.

Comments

Global E learning Industry
Explainable AI (XAI) is artificial intelligence (AI) whose results can be understood by humans. It contrasts with the concept of the "black box" in machine learning, where even a system's designers cannot explain why the AI arrived at a specific decision. XAI may be one implementation of the social right to explanation, but it is relevant even where no legal right or regulatory requirement exists; for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. The aim of XAI is thus to explain what has been done, what is being done now, and what will be done next, and to reveal the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.

The algorithms used in AI can be differentiated into white-box and black-box machine learning (ML) algorithms. White-box models produce results that are understandable to experts in the domain. Black-box models, on the other hand, are extremely hard to explain and can hardly be understood even by domain experts. XAI algorithms are considered to follow three principles: transparency, interpretability, and explainability. Transparency is given "if the processes that extract model parameters from training data and generate labels from testing data can be described and motivated by the approach designer". Interpretability describes the possibility of comprehending the ML model and presenting the underlying basis for decision-making in a way that is understandable to humans. Explainability is recognized as important, but a joint definition is not yet available; it has been suggested that explainability in ML can be considered "the collection of features of the interpretable domain, that have contributed for a given example to produce a decision (e.g., classification or regression)". If algorithms meet these requirements, they provide a basis for justifying decisions, tracking and thereby verifying them, improving the algorithms, and exploring new facts.

Sometimes it is also possible to achieve high accuracy with a white-box ML algorithm that is interpretable in itself. This is especially important in domains like medicine, defense, finance, and law, where it is crucial to understand the decisions and build trust in the algorithms.
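To make the white-box idea concrete, here is a minimal sketch (not from the original article): a one-feature decision stump whose entire learned rule can be read off and explained to a domain expert. The data, feature name, and threshold search are all illustrative assumptions.

```python
# Hypothetical sketch: a decision stump is a "white-box" model --
# its whole decision rule is a single human-readable threshold.

def fit_stump(xs, ys):
    """Find the threshold on one feature that best separates two classes."""
    best_threshold, best_acc = None, 0.0
    for t in sorted(set(xs)):
        # Predict True when the feature value is at or above the threshold.
        correct = sum((x >= t) == y for x, y in zip(xs, ys))
        acc = correct / len(ys)
        if acc > best_acc:
            best_threshold, best_acc = t, acc
    return best_threshold, best_acc

# Toy data: tumour size (cm) vs. malignant (True/False) -- illustrative only.
sizes = [0.5, 1.0, 1.2, 2.5, 3.0, 3.5]
labels = [False, False, False, True, True, True]

threshold, accuracy = fit_stump(sizes, labels)
# The learned model IS its explanation: one inspectable rule.
print(f"Rule: predict malignant if size >= {threshold} cm (train acc {accuracy:.0%})")
```

A black-box model might reach the same accuracy here, but only the stump lets a doctor see, and challenge, exactly what the prediction is based on.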

AI systems optimize behavior to satisfy a mathematically specified goal system chosen by the system designers, such as the command "maximize the accuracy of assessing how positive film reviews are in the test dataset". The AI may learn useful general rules from the test set, such as "reviews containing the word 'horrible' are likely to be negative". However, it may also learn inappropriate rules, such as "reviews containing 'Daniel Day-Lewis' are usually positive"; such rules may be undesirable if they are likely to fail to generalize outside the test set, or if people consider them "cheating" or "unfair". A human can audit the rules in an XAI system to get an idea of how likely it is to generalize to future real-world data outside the test set.
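The auditing idea above can be sketched in a few lines (my own illustration, not from the article): a bag-of-words sentiment scorer whose per-word weights form a table a human can inspect, immediately exposing the spurious actor-name rule. The reviews and the count-difference "weight" are assumptions made for the sketch.

```python
# Hypothetical sketch: an auditable bag-of-words sentiment model.
from collections import Counter

# Tiny labelled training set: (review text, 1 = positive, 0 = negative).
reviews = [
    ("a horrible boring film", 0),
    ("horrible acting throughout", 0),
    ("daniel day-lewis is brilliant", 1),
    ("a great daniel day-lewis performance", 1),
]

pos, neg = Counter(), Counter()
for text, label in reviews:
    (pos if label else neg).update(text.split())

# Weight = positive count minus negative count: a crude but fully
# inspectable rule table, unlike a black-box classifier's parameters.
weights = {w: pos[w] - neg[w] for w in set(pos) | set(neg)}

# Auditing the table: "horrible" carrying negative weight is a plausible
# general rule, while the actor's name carrying positive weight is an
# artefact of this sample and unlikely to generalize.
print(weights["horrible"])    # -2
print(weights["day-lewis"])   # 2
```

A reviewer who can read this table can veto the "Daniel Day-Lewis" rule before deployment; with an unexplainable model, the same spurious association would pass unnoticed as long as test accuracy looked good.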

This is especially important for AI tools developed for medical applications because the cost of incorrect predictions is usually high. XAI could increase the robustness of the algorithms as well as boost the confidence of medical doctors.
Nov 24th 2021 23:05   
Apsense.netboard.me Blogs4edu
Artificial intelligence and machine learning are both part of the field of computer science. The two terms are closely related, and many people use them interchangeably. However, AI and machine learning are not the same, and there are some key differences, which I will discuss here.
ezinearticles.com/ezinepublisher/?id=10326088
Nov 25th 2021 08:26   