Artificial intelligence remains a focus in 2021, as we predicted at the start of the year. From the FTC to the EU and beyond, regulators of all kinds are paying attention to companies’ use of these tools. In the latest development, five US federal agencies are seeking input on how financial institutions are using AI tools. Comments from stakeholders are due by June 1, 2021.
Continue Reading Federal Financial Agencies Seek Comments on Use of Artificial Intelligence

Artificial intelligence continues to be a focus and concern for businesses, regulators, and lawmakers alike. As we recently wrote, there has been much activity around artificial intelligence and its impact on privacy laws. In addition to these legal developments, major multinational technology firms have advanced their AI business technologies, a topic covered in this post on our sister Intellectual Property Law Blog. An arms race is underway among the world’s leading economies to capture the estimated $13 trillion in GDP this field stands to award the winner. In a recent podcast episode, partners Siraj Husain and Michael P.A. Cohen discuss these developments, along with the risks businesses are experiencing and potential solutions.
Continue Reading What to Watch in Artificial Intelligence in 2021

Many have been watching facial recognition law developments closely and saw that Portland became the first US city to regulate the use of such technology by private entities operating “places of public accommodation” within the city. Of particular concern for the Portland city council was the potentially discriminatory use of these technologies and its impact on “children, Black, Indigenous and People of Color, people with disabilities, immigrants, refugees, and other marginalized communities and local businesses.”
Continue Reading Portland’s Facial Recognition Law: Impact on National Companies

The National Institute of Standards and Technology has issued a set of draft principles for “explainable” artificial intelligence and is accepting comments until October 15, 2020. The authors of the draft principles outline four ways that those who develop AI systems can ensure that consumers understand the decisions those systems reach. The four principles are:
Continue Reading NIST Seeking Comments on Draft AI Principles

The FTC recently issued comments on how companies can use artificial intelligence tools without engaging in deceptive or unfair trade practices or running afoul of the Fair Credit Reporting Act (FCRA). The FTC pointed to enforcement actions it has brought in this area and recommended that companies keep four key principles in mind when using AI tools. While much of its advice draws on requirements for those subject to the FCRA, the guidance offers lessons that may be useful for many businesses.
Continue Reading FTC Provides Direction on AI Technology

The European Parliament recently issued a resolution directed at the European Commission on its concerns with automated decision-making processes and artificial intelligence. While the EU Parliament addresses several areas of automated decision-making, the underlying theme of the resolution is that the Commission should ensure transparency and human oversight of these processes. In particular, the EU Parliament stresses that consumers should be properly informed about how automated decision-making functions, be protected from harm, and, particularly with automated decision-making in professional services, be assured that humans are always responsible and able to overrule decisions. Additionally, the resolution stresses the need for a risk-based approach to regulating AI and automated decision-making and for the availability of large amounts of high-quality data, while at the same time protecting any personal data under the GDPR.
Continue Reading European Parliament Weighs in on Automated Decision-Making