The dust is beginning to settle from the raft of AI-related bills Governor Newsom signed last month in California. (See, for example, our post about neural data.) Most of the provisions will not go into effect for another few months. Before they do, it is worth examining the impact they will have on companies’ privacy and data security practices. Most, as we outline below, may not change fundamental practices, but instead serve as a reminder to take privacy and data security considerations into account when assessing and implementing AI tools:
Continue Reading The Privacy and Data Security Impact of California’s Recent AI Bills

The Department of Health & Human Services, through the Office of the National Coordinator for Health Information Technology, recently updated the process for certification of health information technology. Some of the modifications are intended to address the use of artificial intelligence in health IT systems. ONC’s certification is required for certain programs, such as where the health IT will be used for Medicare and Medicaid Incentive programs. It is optional for others. Those who are already certified will need to update their certifications. Those seeking new certifications will be subject to the new process.
Continue Reading Out in the Open: HHS’s New AI Transparency Rule

Earlier this month, accompanying an update to a rule prohibiting the impersonation of businesses and governments, the FTC sought comments on extending the rule to prohibit impersonation of individuals. The agency indicated that it is considering expanding the rule as the result of rising complaints around “impersonation fraud,” especially those generated by AI. Comments are due by April 30, 2024.
Continue Reading FTC Seeks Comments on AI Impersonation Rules

Biden’s sweeping AI Executive Order sought to have artificial intelligence used in accordance with eight underlying principles. The order, while directed to government agencies, will impact businesses as well. In particular, the order has privacy and cybersecurity impacts on companies’ use of artificial intelligence. Among other things, companies should keep in mind the following:
Continue Reading What Is the Privacy Impact of the White House AI Order for Businesses?

The FTC continues its focus on technologies that integrate artificial intelligence, this time turning to potential consumer harm from voice cloning technology. Today the commission announced a challenge seeking solutions to help monitor and prevent malicious voice cloning. In the announcement, the FTC pointed to current scams in which threat actors use cloned voices, created with AI tools, to defraud victims: for example, money requests purportedly from a person’s “relative.” The winner will receive a $25,000 prize, and entries will be accepted in the first weeks of January.
Continue Reading FTC Vocalizes AI Voice Cloning Challenge

X Corp., the company formerly known as Twitter, recently sued Bright Data over its site scraping activities. Bright Data is a data collection company and advertises—among other services—its “website scraping” solutions. Scraping is not new, nor are lawsuits attempting to stop the activity. We may, though, see a rise in these suits as more companies use scraping in conjunction with generative AI tools.
Continue Reading Scraping the Bottom of the Barrel: X Corp. Sues Bright Data Over Site Scraping

Artificial intelligence is here to stay, and New York City has enacted legal guidelines for employers who use it. NYC’s Automated Employment Decision Tools (AEDT) law will, effective January 1, 2023, set new standards for employers using AI tools in making employment decisions.
Continue Reading Silver Lining in New York City? New Requirements For Using A.I. in Employment Decisions

There has been much scrutiny of artificial intelligence tools this year. From NIST to the FTC to the EU Parliament, many have recommendations and requirements for companies that want to use AI tools. Key concerns include being transparent about the use of the tools, ensuring accuracy, not discriminating against individuals when using AI technologies, and not using the technologies in situations where they may not give reliable results (i.e., for things for which the tool was not designed). Additional requirements for use of these tools exist under GDPR as well.
Continue Reading 2020 In Review: An AI Roundup

The European Parliament recently issued a resolution directed at the European Commission on its concerns with automated decision-making processes and artificial intelligence. While the EU Parliament addresses several areas of automated decision-making, the underlying theme of this resolution is that the Commission should ensure that there is transparency and human oversight of these processes. In particular, the EU Parliament stresses that consumers should be properly informed about how automated decision-making functions and be protected from harm, and that, particularly with automated decision-making in professional services, humans are always responsible for and able to overrule decisions. Additionally, this resolution stresses the need for a risk-based approach to regulating AI and automated decision-making and for the availability of large amounts of high-quality data, while at the same time protecting any personal data under GDPR.
Continue Reading European Parliament Weighs in on Automated Decision-Making