The White House recently released its Blueprint for an AI Bill of Rights in an effort to guide the discussion on the design, use and deployment of AI in systems that impact the American public. The Blueprint outlines the following five guiding principles:
Beginning January 1, 2023, New York City will restrict employers from using artificial intelligence to make employment decisions unless they follow certain guidelines. The local law applies to employment decisions made “within the city” regarding job applicants and promotion decisions.…
Artificial Intelligence is here to stay and New York City has enacted legal guidelines for employers who use it. NYC’s Automated Employment Decision Tools (AEDT) law will, effective January 1, 2023, set new standards for employers using AI tools in making employment decisions.
Continue Reading Silver Lining in New York City? New Requirements For Using A.I. in Employment Decisions
The FTC recently provided guidance to companies on how to use artificial intelligence with an aim for “truth, fairness and equity.” The FTC reminded companies of three laws it enforces…
Continue Reading Wondering How To Use AI? The FTC Has Some Thoughts
Artificial intelligence continues to remain a focus in 2021, as we predicted at the start of the year. From the FTC, to the EU, to others, regulators of all kinds are paying attention to companies’ use of these tools. In the latest, five US federal agencies are seeking input on how financial institutions are using AI tools. Comments from stakeholders are due by June 1, 2021.
Continue Reading Federal Financial Agencies Seek Comments on Use of Artificial Intelligence
Artificial intelligence continues to be a focus and concern for businesses, regulators, and lawmakers alike. As we recently wrote, there has been much activity and focus on artificial intelligence and its impact on privacy laws. In addition to legal developments, major multinational technology firms have advanced AI business technologies, a topic covered in a post on our sister Intellectual Property Law Blog. An arms race is underway among the world’s leading economies to win the estimated $13 trillion in GDP this field stands to award the winner. In a recent podcast episode, partners Siraj Husain and Michael P.A. Cohen discuss these developments, along with the risks and solutions that businesses are experiencing.
Continue Reading What to Watch in Artificial Intelligence in 2021
Many have been watching facial recognition law developments closely, and saw that Portland became the first US city to regulate the use of such technology by private entities operating “places of public accommodation” within the city. Of particular concern for the Portland city council was the potentially discriminatory use of these technologies, and their impact on “children, Black, Indigenous and People of Color, people with disabilities, immigrants, refugees, and other marginalized communities and local businesses.”…
Continue Reading Portland’s Facial Recognition Law: Impact on National Companies
The National Institute of Standards and Technology has issued a set of draft principles for “explainable” artificial intelligence and is accepting comments until October 15, 2020. The authors of the draft principles outline four ways that those who develop AI systems can ensure that consumers understand the decisions reached by AI systems. The four principles are:…
Continue Reading NIST Seeking Comments on Draft AI Principles
The FTC recently issued comments on how companies can use artificial intelligence tools without engaging in deceptive or unfair trade practices or running afoul of the Fair Credit Reporting Act. The FTC pointed to enforcement actions it has brought in this area, and recommended that companies keep in mind four key principles when using AI tools. While much of its advice draws on requirements for those subject to the Fair Credit Reporting Act (FCRA), there are lessons that may be useful for many.
Continue Reading FTC Provides Direction on AI Technology
The European Parliament recently issued a resolution directed at the European Commission on its concerns with automated decision-making processes and artificial intelligence. While the EU Parliament addresses several areas of automated decision-making, the underlying theme of the resolution is that the Commission should ensure transparency and human oversight of these processes. In particular, the EU Parliament stresses that consumers should be properly informed about how automated decision-making functions, be protected from harm, and, particularly with automated decision-making in professional services, that humans always remain responsible and able to overrule decisions. Additionally, the resolution stresses the need for a risk-based approach to regulating AI and automated decision-making and for the availability of large amounts of high-quality data, while at the same time protecting any personal data under the GDPR.
Continue Reading European Parliament Weighs in on Automated Decision-Making