The National Institute of Standards and Technology (NIST) has issued a set of draft principles for “explainable” artificial intelligence and is accepting comments until October 15, 2020. The authors of the draft outline four ways that developers of AI systems can ensure that consumers understand the decisions those systems reach. The four principles are:

  1. Explanation: Delivering evidence and reasons for a decision, which will vary depending on the consumer and may include (a) explanations of the benefit to the user, (b) those that seek to garner acceptance from society, (c) those that assist with compliance with laws, regulations, and safety standards, or (d) those that explain a benefit to the system operator (for example, recommending a list of movies to watch).
  2. Meaningful: Providing explanations that users can understand, recognizing that what is meaningful will vary by context and by user.
  3. Explanation Accuracy: Ensuring that explanations correctly reflect the system’s process for generating its output, which the authors analogize to a person explaining the mental process used to reach a decision.
  4. Knowledge Limits: Having the AI system operate only under the conditions for which it was designed, thereby avoiding results that are not reliable.

These principles follow similar guidance issued earlier this year by the FTC and the European Parliament. NIST, a non-regulatory federal agency within the US Department of Commerce, aims to promote US commerce by advancing standards such as those set out in these principles. With this draft, NIST indicates that it seeks to improve the level of trust users have in AI so that the systems are more readily adopted and used.

Putting it Into Practice: Companies developing AI systems will find these principles a helpful preview of what may become industry standards, and may want to submit comments (by email to explainable-AI@nist.gov) before the October 15, 2020 deadline. In the meantime, companies should keep in mind the existing guidance from the FTC and the EU, which includes human oversight and transparency about how AI systems reach their decisions.