In December 2025, as teams planned for the year ahead, we surveyed our customers to identify the key issues, concerns, threats, and opportunities shaping regulatory intelligence (RI) teams in life sciences for 2026. This article summarizes our findings.

  1. Increased AI adoption in regulatory intelligence

Our survey revealed that AI is viewed as the most significant opportunity for RI teams in 2026, with all respondents planning to increase their AI usage. The primary goal is to maximize resource efficiency, such as time saved and capacity freed to focus on higher value tasks, at both team and organizational levels.

Implications for RI teams:

  • AI will streamline regulatory workflows, including horizon scanning, data summarization, impact assessments, and document reviews.
  • Adoption will shift from ad hoc experimentation to systematic use for specific tasks.

Key emerging use cases include:

  • Surveillance, research, screening, triage, and analysis of regulatory information and data.
  • Automated compliance mapping that links new regulations directly to internal SOPs.
  • Handling routine administrative tasks that are currently time-consuming (e.g., writing emails and producing presentations).
  • Curating and validating AI outputs.
  • Producing first drafts and reviewing documents for changes.
  • Owning or co-owning internal regulatory intelligence copilots.

Despite the enthusiasm, caution remains regarding AI adoption. Respondents emphasized the necessity of a human-in-the-loop approach to ensure quality and accuracy, as generic AI tools lack the specificity and reliability needed in regulatory contexts. Some teams are exploring custom AI solutions, but developing these requires significant time and resources. This build-vs.-buy question will arise more often during the year as RI professionals turn to partners and vendors who specialize in AI features custom-built for RI work.

  2. Regulatory authorities’ AI adoption

The trend of increasing AI adoption is mirrored by regulatory authorities like the EMA and FDA, which are investing in AI and big-data analytics to enhance assessment processes.

Implications for RI teams:

  • Adoption rates will vary among regulators, affecting review cycles and feedback efficiency.
  • As health authorities use AI to support application reviews, expect faster review cycles, more efficient assessments, and closer-to-real-time feedback.
  • RI teams must monitor regulatory technology and governance developments.
  • Increased use of real-world data will provide more insights but also introduce complexity.

  3. AI regulation as a new content stream and regulatory domain

With the emergence of regulations like the EU AI Act, RI teams will need to adapt to new requirements surrounding AI lifecycle controls, transparency, and security.

Implications for RI teams:

  • RI will increasingly include AI-related regulatory compliance.
  • Collaboration with digital and IT security teams will be essential to translate AI risks into regulatory impacts.
  • New specialization opportunities in AI regulatory intelligence will arise.

  4. Skills development for regulatory intelligence

To embrace the AI opportunity, 2026 will be a year that emphasizes skills development, moving RI professionals from an exploratory to an experimental stage of AI use. When asked to assess their current AI-in-RI capability on a maturity scale, our customers benchmarked themselves as being in an early exploration stage: their approach to date has been almost entrepreneurial, ad hoc, and opportunistic. They believe they will develop into a more mature, experimental stage in 2026, with a clearer understanding of the use cases and problems they are trying to solve with AI and the specific situations in which they will apply it, as routine use begins to take hold in teams. All of our survey respondents anticipate becoming ‘leaders’ in AI-in-RI maturity by the end of 2026, but getting there will require substantial skills development during the year ahead.

Key skills to develop:

  • AI prompt design and an understanding of model limitations.
  • Data quality oversight and the ability to explain AI outputs.
  • Shifting the RI role towards strategic advisory functions.

As the landscape evolves, RI professionals will become orchestrators of AI-enabled intelligence ecosystems.

  5. Threats from geopolitical disruption

While AI presents opportunities, our survey showed that geopolitical disruption and regulatory divergence pose significant threats. Respondents see the core risks as the political environments in the US and China, their impact on authorities such as the FDA and on the pharma industry, and the evolving EU pharma legislation.

Challenges include:

  • Balancing global standards with local regulations.
  • Navigating differing AI and data regulations across regions.
  • Enhancing tools for harmonizing global positions while documenting local deviations.

Stronger collaboration with sourcing and supply-chain teams will be necessary to manage these complexities.

Summary for the year ahead

As regulatory professionals in life sciences navigate these trends in 2026, they are on an evolutionary path of developing their personal and team capabilities. This capability development will be a defining feature of the year, and we expect organizations such as RAPS to fully embrace and support the profession's shift towards the AI opportunity. Trusted partners and vendors who offer AI solutions will be another valuable source of support as RI professionals develop their AI capability.

More resources

If you liked this article, you may also find these interesting: