The FTC recently provided guidance to companies on how to use artificial intelligence with an aim for “truth, fairness and equity.” The FTC reminded companies of three laws it enforces that carry lessons for those in the AI space: Section 5 of the FTC Act (which prohibits unfair or deceptive practices, including, for example, the use of unfair algorithms); the Fair Credit Reporting Act (which comes into play when an algorithm is used to deny benefits such as housing, for example); and the Equal Credit Opportunity Act (which prohibits the use of algorithms that result in credit discrimination on the basis of race, for example).

These comments come almost a year after the FTC’s recommendations about AI and show that the topic remains a priority for the agency. In these recent comments, the FTC provides more detailed guidance for developers of AI, including:

  • Start with the right foundation. For example, is your data set missing information from particular populations? If so, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups.
  • Watch out for discriminatory outcomes. Test your algorithm before and during use to make sure it doesn’t discriminate on the basis of race, gender, or another protected class.
  • Embrace transparency and independence. Using transparency frameworks, publishing the results of independent audits, and opening your data or source code to outside inspection can all help demonstrate independence and build trust.
  • Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Be careful not to overpromise what your algorithm can deliver; overstated claims risk a deception action under the FTC Act.
  • Tell the truth about how you use data. Pay attention to the statements made about how data is used and the control users will have over that data.
  • Do more good than harm. If your model causes more harm than good – i.e., if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition – the FTC can challenge the use of that model as unfair.
  • Hold yourself accountable – or be ready for the FTC to do it for you. For example, if your algorithm results in credit discrimination against a protected class, your company may face a complaint alleging violations of the FTC Act and the Equal Credit Opportunity Act.

Putting it Into Practice. With these new comments, the FTC provides companies with more concrete examples of ways to meet its transparency, fairness, and accuracy guidance from last year. It also signals the focus the FTC will give to AI under the new administration. The FTC is not the only regulator focusing on AI. For example, federal financial agencies recently requested comments about the use of AI, and the European Commission has just issued a proposed Artificial Intelligence Act (following a white paper and a resolution on the topic issued last year).