
The National Institute of Standards and Technology (NIST) has issued a set of draft principles for “explainable” artificial intelligence and is accepting comments until October 15, 2020. The authors of the draft outline four ways that those who develop AI systems can ensure that consumers understand the decisions those systems reach. The four principles are:
Continue Reading NIST Seeking Comments on Draft AI Principles

The FTC recently issued guidance on how companies can use artificial intelligence tools without engaging in deceptive or unfair trade practices or running afoul of the Fair Credit Reporting Act (FCRA). The FTC pointed to enforcement actions it has brought in this area and recommended that companies keep four key principles in mind when using AI tools. While much of its advice draws on requirements for those subject to the FCRA, the lessons may be useful for many companies.
Continue Reading FTC Provides Direction on AI Technology

The European Parliament recently issued a resolution to the European Commission outlining its concerns with automated decision-making processes and artificial intelligence. While the resolution addresses several areas of automated decision-making, its underlying theme is that the Commission should ensure transparency and human oversight of these processes. In particular, the EU Parliament stresses that consumers should be properly informed about how automated decision-making functions, be protected from harm, and, particularly for automated decision-making in professional services, be assured that humans remain responsible for and able to overrule such decisions. The resolution also stresses the need for a risk-based approach to regulating AI and automated decision-making, and for the availability of large amounts of high-quality data, while still protecting any personal data under the GDPR.
Continue Reading European Parliament Weighs in on Automated Decision-Making