
How The EU AI Act Will Impact US Companies

I am excited for 2024 and the changes it will bring for U.S. companies! The EU AI Act, litigation trends, and the mitigation steps discussed below will make a difference for U.S. companies.

I. The EU AI Act is a Significant Development Worthy of Our Attention in the U.S. ASAP.

Every U.S. company needs to pay attention. The EU AI Act will transform AI governance for U.S. companies that use AI in products or services directed to EU residents, subjecting those companies to a sweeping set of new governance obligations. Though most of the Act will not take effect until two years after the final text of the law is published, most likely in 2026, U.S. companies need to work proactively now to build AI tools and governance programs in accordance with the forthcoming expectations.

The Act will apply both to providers (e.g., tech companies licensing AI models or companies creating their own AI models) and to deployers (companies that use or license AI models).

The Act also prescribes a number of governance steps:

●  Step one: Risk rank AI – The EU AI Act divides AI into different categories: prohibited uses (e.g., continuous remote biometric monitoring in public places or social scoring, along with another 13 specific examples), high risk (e.g., financial, health, children, sensitive data, critical infrastructure, and more than 100 other use cases), low/limited risk (e.g., chatbots on retailer websites), and minimal or no risk (e.g., AI-enabled video games or spam filters). To risk rank effectively, companies will need to know the specific examples implicated by the Act and review the laws referenced in the EU AI Act to come up with a complete list of high-risk AI. The bulk of the governance obligations are reserved for high-risk AI, as illustrated in the sketch below.
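To make step one concrete, here is a minimal sketch, in Python, of what a risk-ranked AI inventory could look like. The four tiers mirror the Act's categories, but the use-case register, names, and mapping are hypothetical illustrations, not an authoritative classification under the Act.

```python
from enum import Enum

# The four EU AI Act risk tiers; the use-case inventory and mapping below
# are hypothetical examples for illustration only.
class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical inventory: map each internal AI use case to a tier after
# reviewing the Act's annexes and the laws it references.
USE_CASE_REGISTER = {
    "public-space remote biometric identification": RiskTier.PROHIBITED,
    "credit scoring for consumer loans": RiskTier.HIGH,
    "retail website chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def governance_scope(register: dict[str, RiskTier]) -> list[str]:
    """Return the use cases that carry the bulk of the Act's obligations."""
    return [name for name, tier in register.items() if tier is RiskTier.HIGH]

if __name__ == "__main__":
    print("High-risk use cases requiring full governance:", governance_scope(USE_CASE_REGISTER))
```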

●  Step two: Confirm high-quality data use – Next, U.S. companies will need to confirm that their high-risk AI use cases are trained with “high-quality” data, meaning that accurate and relevant data is fed into the company’s high-risk model. If a company is licensing a commercial large language model and building its own application on top of it for a specific high-risk use case, the licensee will need to understand whether it has the rights (e.g., IP and privacy) to use the data.
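As a rough illustration of step two, the sketch below shows one way a company might gate training data on documented rights and quality checks before it reaches a high-risk model. The record fields and the gating logic are assumptions for illustration; the actual data-quality and rights requirements should be mapped to the Act's text and counsel's advice.

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    """Hypothetical provenance record for a training dataset."""
    name: str
    source: str
    license_permits_training: bool   # IP rights to use the data for this purpose
    privacy_basis_documented: bool   # e.g., consent or another lawful basis
    relevant_to_use_case: bool       # data is relevant to the high-risk task
    accuracy_reviewed: bool          # data quality/accuracy has been reviewed

def cleared_for_training(record: DatasetRecord) -> bool:
    """Gate: only feed data into a high-risk model once rights and quality are confirmed."""
    return all([
        record.license_permits_training,
        record.privacy_basis_documented,
        record.relevant_to_use_case,
        record.accuracy_reviewed,
    ])

claims_data = DatasetRecord(
    name="claims_history_2023",
    source="internal claims system",
    license_permits_training=True,
    privacy_basis_documented=True,
    relevant_to_use_case=True,
    accuracy_reviewed=False,  # fails the gate until the quality review is done
)
print(cleared_for_training(claims_data))  # False
```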

●  Step three: Continuous testing, monitoring, mitigation and auditing – The EU AI Act calls for testing, monitoring, and auditing of high-risk AI, pre- and post-deployment, in the following areas: (1) algorithmic impact, or fairness/bias avoidance; (2) IP; (3) accuracy; (4) product safety; (5) privacy; (6) cybersecurity, separate from privacy; and (7) antitrust. Because the capacity to test involves adding code to the application, this is an area where U.S. companies would do well to adopt protective parameters now, rather than build models without the capacity to test; a minimal example of such instrumentation appears in the sketch below.
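Because step three contemplates building the capacity to test and monitor into the application itself, here is a minimal sketch of what such instrumentation could look like: a wrapper that logs every prediction so fairness, accuracy, and safety can be audited later. The model, field names, and log format are hypothetical; production systems would use a dedicated monitoring stack.

```python
import json
import time
from typing import Any, Callable

def monitored(model_fn: Callable[[dict], Any], log_path: str = "audit_log.jsonl"):
    """Wrap a model call so every prediction is logged for later fairness,
    accuracy, and safety audits (illustrative only)."""
    def wrapper(features: dict) -> Any:
        prediction = model_fn(features)
        with open(log_path, "a") as log:
            log.write(json.dumps({
                "timestamp": time.time(),
                "features": features,
                "prediction": prediction,
            }) + "\n")
        return prediction
    return wrapper

# Hypothetical stand-in for a high-risk model (e.g., a credit decision).
def toy_credit_model(features: dict) -> str:
    return "approve" if features.get("income", 0) > 40_000 else "deny"

score = monitored(toy_credit_model)
print(score({"income": 52_000, "postcode": "75001"}))
```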

●  ​Step four: Risk assessment – The EU AI Act calls for a risk assessment based on the pre-deployment testing, auditing and monitoring, consisting of a continuous and iterative process of:

  • identifying and analyzing the known and foreseeable risks associated with each high-risk AI system;
  • estimating and evaluating the risks that may emerge when the high-risk AI system is used;
  • evaluating other possibly arising risks based on the analysis of data gathered from the post-market monitoring system; and
  • adopting suitable risk-management measures.

This risk assessment, along with all mitigation efforts, should be reflected in both the logging and the metadata of the AI system itself.
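One way to reflect the risk assessment in the logging and metadata of the system itself, as step four contemplates, is to carry a risk register inside the model's metadata. The sketch below is illustrative only; the field names and the example risk entry are assumptions.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class RiskEntry:
    """One identified risk and its mitigation; fields are illustrative."""
    risk: str
    severity: str   # e.g., "low" / "medium" / "high"
    mitigation: str
    status: str     # e.g., "open" / "mitigated"

@dataclass
class ModelMetadata:
    """Hypothetical model metadata carrying the risk assessment with the model."""
    model_name: str
    version: str
    risk_register: list[RiskEntry] = field(default_factory=list)

meta = ModelMetadata(
    model_name="claims-triage",
    version="1.3.0",
    risk_register=[
        RiskEntry(
            risk="Older claimants routed to a slower review queue",
            severity="high",
            mitigation="Reweighted training data; added age-disparity test to release gate",
            status="mitigated",
        ),
    ],
)

# Persist the assessment alongside the model artifact so it travels with the system.
print(json.dumps(asdict(meta), indent=2))
```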

●  Step five: Technical documentation – U.S. companies should ensure they are generating and maintaining all required technical documentation of the tests conducted, the mitigation steps taken, and the continuous monitoring process expected to be present in the AI system itself. This step is especially helpful if implemented proactively, because the expense of generating technical documentation is relatively small when it is built into the AI tool from the beginning.
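Generating technical documentation from records the governance process already produces can keep the cost of step five low. The sketch below assembles a simple model-card-style file; every field and value shown is a hypothetical placeholder.

```python
# Minimal sketch: generate technical documentation from governance records.
# All fields and values are hypothetical placeholders.
doc = {
    "model": "claims-triage v1.3.0",
    "intended_purpose": "Prioritize insurance claims for human review",
    "tests_conducted": [
        "bias/disparity by age and gender",
        "accuracy on holdout set",
        "privacy leakage scan",
    ],
    "mitigations": ["reweighted training data", "added age-disparity release gate"],
    "monitoring": "all predictions logged and reviewed weekly",
}

with open("technical_documentation.md", "w") as f:
    f.write(f"# Technical documentation: {doc['model']}\n\n")
    f.write(f"**Intended purpose:** {doc['intended_purpose']}\n\n")
    f.write("## Tests conducted\n" + "".join(f"- {t}\n" for t in doc["tests_conducted"]))
    f.write("\n## Mitigations\n" + "".join(f"- {m}\n" for m in doc["mitigations"]))
    f.write(f"\n## Continuous monitoring\n{doc['monitoring']}\n")
```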

●  ​Step six: Transparency – Licensors and licensees of high-risk AI will be expected to be transparent with end-users regarding the capabilities and limitations of the AI. In addition, their systems will need to be explainable to a third-party auditor or regulator. This transparency reporting should explain how the model is supposed to work and what the model is and is not good for.

●  ​Step seven: Human oversight – In addition to the technical processes described above, the EU AI Act also calls for human intervention to correct deviations from expectations as close as possible to the time they occur. This human oversight can protect the brand in real time and prevent things like product safety issues from festering for months before an annual audit occurs.
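A simple way to picture step seven is an escalation path that routes decisions falling outside an approved confidence range to a human reviewer in real time. The threshold, review queue, and return values in the sketch below are assumptions for illustration.

```python
# Minimal human-in-the-loop sketch: low-confidence decisions are escalated to
# a person rather than acted on automatically. Threshold and queue are hypothetical.
REVIEW_QUEUE: list[dict] = []

def decide_with_oversight(prediction: str, confidence: float, threshold: float = 0.85) -> str:
    if confidence < threshold:
        REVIEW_QUEUE.append({"prediction": prediction, "confidence": confidence})
        return "escalated_to_human"
    return prediction

print(decide_with_oversight("deny", confidence=0.62))     # escalated_to_human
print(decide_with_oversight("approve", confidence=0.97))  # approve
print(len(REVIEW_QUEUE))  # 1 item awaiting human review
```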

●  Step eight: Failsafe – Trusted AI legal frameworks share a clear intention that, if the AI cannot be restored to the approved parameters set out in the testing phase and remedial mitigation steps cannot be effectuated, a failsafe must be in place to shut the AI use down. A minimal sketch of such a mechanism appears below.
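Step eight can be pictured as a circuit breaker: if monitored metrics drift outside the bounds approved during testing, the AI path is switched off and traffic falls back to a non-AI process. The metric names, bounds, and fallback in the sketch below are hypothetical.

```python
# Minimal failsafe sketch: trip a kill switch when monitored metrics leave the
# bounds approved during testing. Names and bounds are hypothetical.
APPROVED_BOUNDS = {"accuracy": (0.90, 1.00), "disparity_ratio": (0.80, 1.25)}
ai_enabled = True

def check_and_trip(metrics: dict[str, float]) -> bool:
    """Return the new enabled state after comparing metrics to approved bounds."""
    global ai_enabled
    for name, value in metrics.items():
        low, high = APPROVED_BOUNDS[name]
        if not (low <= value <= high):
            ai_enabled = False  # kill switch: stop serving AI decisions
    return ai_enabled

def handle_request(features: dict) -> str:
    return "ai_decision" if ai_enabled else "manual_fallback"

check_and_trip({"accuracy": 0.94, "disparity_ratio": 1.60})  # out of bounds, trips the switch
print(handle_request({"income": 52_000}))  # manual_fallback
```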

Note that fines for violating the EU AI Act will reach up to 7% of global annual revenue, so steps taken now to conform to the Act could dramatically reduce costs and risk for U.S. companies impacted by it.

II. Key challenges U.S. companies may face regarding AI bias and IP violations can be mitigated with the proactive adoption of AI governance.

We have already seen a number of headlines in 2023 concerning AI and IP violations and AI and bias that are harbingers of risk to come. Companies can avoid these issues by proactively testing, monitoring, and auditing their high-risk AI use cases to ensure such problems are not present. With regard to general use cases and IP, this will be a battle for the providers of generative AI models.

With regard to bias, this is a particularly important area for high-risk use cases (e.g., financial, healthcare, children). The draft regulatory and legislative trustworthy AI frameworks identify nearly 140 areas of high risk. In these areas, companies will want to continuously test, monitor, mitigate (if necessary), and audit their AI to ensure positive, not brand-tarnishing, results.

For example, one retail pharmacy was accused of failing to test its AI biometric system, which wrongly flagged customers as shoplifters; the Federal Trade Commission imposed a consent decree on the company, which was already in bankruptcy proceedings. Another company was accused of bias in providing health services.

The primary takeaway for companies is to ensure that (a) continuous testing, monitoring, and auditing are in place to catch bias; and (b) diverse teams are involved in product development. This last piece is critical: diverse perspectives and teams are integral to the company’s use of AI and should play key roles in algorithm development, model training, the selection of training data, governance, privacy, and data protection.

With regard to IP issues, in the U.S. alone there are currently myriad class-action lawsuits pending against AI tech companies involving copyright and other infringement claims. Most of these cases are in their early stages, and it is too soon to tell how far the litigation will go, but their outcomes will certainly shape the future of AI-related IP lawsuits. For companies that are not dependent on these activities for their business models, it may be advisable to take a wait-and-see approach, think through the vendors they are working with, and ensure that commitments to address IP violations are included in their licenses.

III. Is it critical for companies to implement AI governance standards to avoid legal repercussions?

Regulatory enforcement threats, along with current and forthcoming legislation arising around the globe, create an urgent need for companies deploying AI to proactively implement AI governance standards. This need is intensified by developments in recent years: in a post-pandemic world, every company is a data company, and there is now no such thing as a company not driven by data. Officers and directors should be thinking about AI governance proactively; they cannot afford to lose time or take a reactive approach.

Put simply, boards should already be taking steps to understand the legal landscape, review recent proposed guidance and litigation trends, and implement AI governance standards. If they haven’t done so, they should start now. To mitigate AI bias risks specifically, boards should maintain human oversight, which is key to AI success, and ensure that a diverse team works on the development of the algorithms, the training of models, and the decisions about what data should be used to train them. Diverse teams should also be deployed to protect the company’s systems, including technical controls and governance. This will go far in helping insulate the company from claims that it has not considered and embraced diverse perspectives.

With respect to mitigating the risk of copyright and other IP lawsuits, boards and CEOs can ensure that their companies have the rights to use the IP involved in high-risk use cases and that those use cases are continuously tested, monitored, and audited so that IP rights are respected in the output. Note that high-risk use cases are specific, spanning almost 140 areas, and are distinct from the general use cases present in most commercially available AI that is not directed at any particular high-risk purpose.

IV.  Businesses have a substantial role to play in shaping public policy.

It is important for the business community to engage with legislators and regulators to find workable solutions for governance. 

In the U.S., the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law held numerous hearings on AI starting in May 2023. Senator Schumer held separate closed-door sessions with tech leaders, government officials, and community groups to discuss workable solutions for society and for the public and private sectors.

Companies licensing large language models have been largely missing from the dialogue.

They should get involved. Legislators and regulators are looking for substantive feedback. Generalized resistance to AI legislation is not likely to carry the day anywhere in the world and will not stop AI regulation.

To provide constructive feedback, companies should attempt to apply the steps discussed in Section I of this article and isolate any specific difficulties in applying them to particular use cases.

V.  Leaders are Gathering to Find Solutions in AI

The stakes could not be higher for boards, whose obligations extend to remaining current on generative AI. It is critical that CEOs and board members understand the impact of technology and trust on business success and prioritize trust through the responsible and equitable integration of technology into the company.

Shareholders have taken note. Recently, the pension trust for the AFL-CIO introduced shareholder proposals to scrutinize the AI use cases and governance of one of the largest public entertainment companies and of a tech company with a $3 trillion market capitalization. Specifically, “The Proposal requests that the Company prepare a transparency report that explains the Company’s use of artificial intelligence in its business operations and the board’s role in overseeing its usage, and sets forth any ethical guidelines that the Company has adopted regarding its use of artificial intelligence.” The companies argued that the proposals were improper for shareholder consideration because they were part of “ordinary business operations.” The SEC rejected the companies’ position, stating that “In our view, the Proposal transcends ordinary business matters and does not seek to micromanage the Company.” The proposals’ focus on board oversight is significant and suggests that boards may wish to get engaged in oversight of AI.

The good news is that CEOs and Board Members are stepping forward to assume leadership in navigating governance surrounding AI.

For example, the theme of Davos this year was “rebuilding trust,” and there the World Economic Forum announced that AI made the list of top ten risks in its annual Global Risks Report 2024. The reason? “Downstream risks could endanger political systems, economic markets and global security and stability.” Global Risks Report 2024 at 50.

The Committee for Economic Development of The Conference Board has issued CEO Perspective briefings discussing the role of leadership and governance as it relates to AI, and has held CEO briefings on the topic of AI.

Board service companies have begun offering programming. Diligent, serving a community of 700,000 board members, recently launched its AI Ethics & Board training and certification for board members. The National Association of Corporate Directors is hosting a master class on artificial intelligence this spring.

The annual Digital Trust Summit was founded last year.  The primary goal of the summit is to bring together CEOs, board members, and business leaders to proactively address the opportunities and risks posed by emerging technologies such as generative AI, to build a culture of digital trust. This discussion was and continues to be a critical one, because boards have a governance obligation that extends to responsible data leadership. 

These venues and many others like them are focused on how CEOs and Board Members can ensure that they are contributing to achieving trust in technology.

VI.  The Role of Diversity in Artificial Intelligence.

Diversity of backgrounds, points of view, ethnicities, and genders is very important in the development of AI. For example, a diverse group of research scientists will lead to more accurate products. In her book, “Unmasking AI: My Mission to Protect What Is Human in a World of Machines,” Dr. Joy Buolamwini details how, as a research scientist with dark skin, she had to put on a white mask in order to be recognized by the AI-powered computer she was using to conduct research. A diverse talent pool in the creation of artificial intelligence will catch things like AI failing to recognize dark skin tones and correct the model.

Similarly, diversity is important at the leadership level (C-suite and board) to ensure appropriate questions are asked. Organizations like NxtWork are dedicated to diversifying the C-suite and board community. NxtWork coined the term “meaningful engagement” to describe the process of introducing diverse leaders to non-diverse board members to discuss common areas of strategy and business development, thereby building rapport and respect on the key business questions at hand. NxtWork recently placed the Chief Privacy Officer of LinkedIn on the board of a company that was looking for privacy expertise. The connection occurred because the candidate was best in class, period. It so happened that she was also diverse and a woman. But the meaningful engagement occurred with the CEO of Radar First because the company had a need and the NxtWork member was a perfect fit.

VII. The role of non-profits in driving change.

Non-profits are very important to achieving trustworthy artificial intelligence. The AI Governance Center is at the forefront of training and of shaping the future of AI governance and privacy practices globally. An organization need not be a for-profit company to have extraordinary returns!

Dominique Shelton Leipzig
Privacy & Cybersecurity Partner at Mayer Brown

Dominique Shelton Leipzig (https://dominiquesheltonleipzig.com) is the author of Trust: Responsible AI, Innovation, Privacy and Data Leadership, her fourth book. She is currently a privacy and cybersecurity partner at Mayer Brown, where she is the leader of the firm’s global data innovation team, counseling CEOs and board members on smart digital governance. Her policy op-ed calling for an executive order to solve the cross-border privacy EU standoff was recently followed by the Biden Administration. She founded the Digital Trust Summit for CEOs and board members to reimagine data governance. She co-founded NxtWork, dedicated to diversifying the C-Suite and boardroom. She’s a board member of the AI Governance Center and the International Association of Privacy Professionals and is certified in privacy and board governance.
