Report: The future of artificial intelligence in compliance

Posted on 24 February 2025

In 2024, the Informa Connect Life Sciences compliance team surveyed industry delegates at our Compliance & Legal events to learn more about their perspectives on AI compliance. Here are the results.

The survey revealed that a majority of delegates feel that their organization is unprepared to handle AI governance, monitoring, and compliance. Only 2% feel “very prepared,” while 44% said they are “not prepared.” Another 36% are only “somewhat prepared,” and 18% are “confident but need more preparation.”

The survey also found that while 43% of organizations use AI in other capacities, none currently use it for compliance monitoring or transparency.

When asked about concerns surrounding in-house built AI solutions, 37% reported a lack of knowledge, while 33% cited too many unknowns. Other concerns included expense (12%) and the belief that in-house built AI will rapidly become obsolete (14%). A small percentage (4%) expressed confidence in their IT team’s ability to build compliant AI solutions.

Moreover, 41% of delegates said their organization is using generative AI for authorized purposes such as chatbots that respond to inquiries from employees or stakeholders. Supporting the development of reports or presentations accounted for another 29% of usage. Seventeen percent said it is used in drug development and research activities, while only 4% said it is used to generate insights on HCP or patient behaviors, including HCPs’ prescribing habits. The remaining 9% cited other uses.

Overall, 21% of delegates named AI as a compliance topic that keeps them up at night. While the issue keeps pharma compliance officers awake, the U.S. Department of Justice (DOJ) has been working on AI guidance of its own. The agency recently introduced questions for prosecutors that could also help guide pharma compliance officers in their own AI compliance efforts.

ECCP updates: AI compliance

In September 2024, the DOJ updated its Evaluation of Corporate Compliance Programs (ECCP) guidelines, adding a section on the compliance risks of emerging technologies such as AI.

The ECCP guides prosecutors in evaluating the compliance program of a company facing prosecution for wrongdoing. Companies also use it to measure the effectiveness of their compliance programs and assess regulatory risk.

In her remarks about the updates at the Society of Corporate Compliance and Ethics conference in September, Principal Deputy Assistant Attorney General Nicole M. Argentieri stated, “Under the ECCP, prosecutors will consider the technology that a company and its employees use to conduct business, whether the company has conducted a risk assessment of the use of that technology, and whether the company has taken appropriate steps to mitigate any risk associated with the use of that technology.” For example, she said, “prosecutors will consider whether the company is vulnerable to criminal schemes enabled by new technology, such as false approvals and documentation generated by AI.”

Prosecutors will also consider whether the company is monitoring and testing its technology to ensure it is functioning as intended and consistent with the company’s code of conduct.
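The guidance stops at the level of principle, but a monitoring-and-testing control can be made concrete. Below is a minimal Python sketch of one hypothetical approach: periodically sending test prompts to an internal generative AI tool, scanning the output against patterns a pharma compliance team might prohibit, and writing an audit log. Every name here (generate(), run_compliance_probe(), PROHIBITED_PATTERNS) is an illustrative assumption, not a real API or a method endorsed by the DOJ.

```python
# Hypothetical sketch of an automated monitoring-and-testing control
# for an internal generative AI tool, in the spirit of the ECCP's
# monitoring questions. All names are illustrative, not a real API.

import datetime
import json
import re

# Patterns a pharma compliance team might flag in AI output, e.g.,
# off-label promotion or individual HCP prescribing data (examples only).
PROHIBITED_PATTERNS = [
    re.compile(r"off-label", re.IGNORECASE),
    re.compile(r"prescribing (history|habits) of Dr\.", re.IGNORECASE),
]


def generate(prompt: str) -> str:
    """Stub standing in for the company's internal AI tool."""
    return f"Draft response to: {prompt}"


def run_compliance_probe(test_prompts: list[str], log_path: str) -> int:
    """Send test prompts to the AI tool, flag policy hits, and append
    each result to an audit log. Returns the number of flagged outputs."""
    flagged = 0
    with open(log_path, "a", encoding="utf-8") as log:
        for prompt in test_prompts:
            output = generate(prompt)
            hits = [p.pattern for p in PROHIBITED_PATTERNS if p.search(output)]
            if hits:
                flagged += 1
            log.write(json.dumps({
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "prompt": prompt,
                "output": output,
                "policy_hits": hits,
            }) + "\n")
    return flagged


if __name__ == "__main__":
    probes = ["Summarize our field guidance for cardiologists."]
    print(f"Flagged {run_compliance_probe(probes, 'ai_audit.log')} responses")
```

In practice, the pattern list and the escalation path for flagged output would come from the company’s own code of conduct and policies, not a hard-coded list like this one.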

The ECCP was also updated with AI-related questions for prosecutors to ask when assessing compliance programs. Here are the questions listed in the guidance, followed by a sketch of how a team might track them:

  • Does the company have a process for identifying and managing emerging internal and external risks that could potentially impact the company’s ability to comply with the law, including risks related to the use of new technologies?
  • How does the company assess the potential impact of new technologies, such as AI, on its ability to comply with criminal laws?
  • Is management of risks related to use of AI and other new technologies integrated into broader enterprise risk management (ERM) strategies?
  • What is the company’s approach to governance regarding the use of new technologies such as AI in its commercial business and in its compliance program?
  • How is the company curbing any potential negative or unintended consequences resulting from the use of technologies, both in its commercial business and in its compliance program?
  • How is the company mitigating the potential for deliberate or reckless misuse of technologies, including by company insiders?
  • To the extent that the company uses AI and similar technologies in its business or as part of its compliance program, are controls in place to monitor and ensure its trustworthiness, reliability, and use in compliance with applicable law and the company’s code of conduct?
  • Do controls exist to ensure that the technology is used only for its intended purposes?
  • What baseline of human decision-making is used to assess AI?
  • How is accountability over use of AI monitored and enforced?
  • How does the company train its employees on the use of emerging technologies such as AI?
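These questions are governance prompts rather than technical requirements, but compliance teams sometimes track them as structured self-assessment items with owners and supporting evidence. The Python sketch below shows one hypothetical way to do so; the fields, statuses, and abbreviated question text are illustrative assumptions, not part of the DOJ guidance.

```python
# Hypothetical self-assessment tracker for the ECCP's AI questions.
# Field names and statuses are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class AssessmentItem:
    question: str
    owner: str = "unassigned"    # hypothetical accountable function
    status: str = "not_started"  # not_started | in_progress | documented
    evidence: list[str] = field(default_factory=list)  # e.g., policy docs


# Abbreviated from the guidance quoted above.
ECCP_AI_QUESTIONS = [
    "Process for identifying and managing emerging risks, including new technologies?",
    "Assessment of AI's impact on compliance with criminal laws?",
    "AI risk management integrated into broader ERM strategies?",
    # ... remaining questions from the list above
]

assessment = [AssessmentItem(q) for q in ECCP_AI_QUESTIONS]


def open_items(items: list[AssessmentItem]) -> list[AssessmentItem]:
    """Return items that still lack documented evidence."""
    return [i for i in items if i.status != "documented"]


for item in open_items(assessment):
    print(f"[{item.status}] {item.question} (owner: {item.owner})")
```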


Hear more about the DOJ’s updates and AI at our upcoming conference.

