January 27, 2026

Last Updated on January 29, 2026

The Financial Industry Regulatory Authority (FINRA) recently released its 2026 Annual Regulatory Oversight Report for US investment firms. The report includes a major new section on generative AI (GenAI) that underscores agentic AI risks and informs Registered Investment Advisors (RIAs) of their current compliance responsibilities for AI governance. It also enumerates top GenAI use cases within the financial sector.

What should RIAs do today to bring their AI governance and risk management up to speed with regulations? This article shares the critical points from the report for business and technical leaders.

Key takeaways

  • All RIAs must have in place a written policy and procedures governing the use of AI tools. These should cover critical areas like audit trails, data privacy protections, and vendor risk management for third-party AI systems.
  • RIAs need controls to identify and address AI cybersecurity risks associated with deepfake content, polymorphic malware, and other AI-powered attacks that financial firms are increasingly experiencing.
  • RIAs also need controls to address inherent AI risks like bias, discrimination, hallucinations, and rogue behavior.
  • The 2026 report emphasizes that RIAs must hold AI systems to the same compliance standards as their communications, governance/management, and documentation in other business areas.
  • Emerging AI risks of greatest concern to FINRA include AI agents acting autonomously with no “human in the loop,” agent permission/access issues, and AI independently misusing sensitive data in unauthorized ways.
  • The top AI use case among FINRA members is “Summarization and Information Extraction,” where AI processes large text volumes within unstructured documents to extract specific data types or relationships.
  • FINRA notes fifteen AI use cases that its members are leveraging, most of which aim for process efficiency gains.

What is FINRA?

FINRA is a nonprofit, self-regulatory organization that is responsible under US law and the authority of the Securities and Exchange Commission (SEC) to regulate its member brokerages and enforce industry guidelines. It regulates the trading of corporate bonds, equities, securities futures, and options.

Formed in 2007, FINRA’s main purpose is to protect investors and support market integrity. Its BrokerCheck database helps investors check brokers’ and advisors’ backgrounds and choose a broker.

FINRA regulates over 3,400 investment firms and over 600,000 brokers, making it a vital independent securities industry overseer. Its functions include registering companies, licensing brokers, conducting audits, monitoring member activities, mediating disputes, and creating industry regulations. Currently, FINRA has about 4,200 employees and had a budget of $1.46 billion in 2025.

FINRA is empowered to take disciplinary action against its members, such as levying fines, requiring restitution to investors, and barring, expelling, or suspending member firms and individuals. It also refers hundreds of insider trading and fraud cases to the SEC and other government agencies every year.

What does the 2026 FINRA report say about AI?

FINRA’s annual regulatory oversight report gives members essential guidance about its regulatory findings and observations. The 2026 report introduces a new section on GenAI, which emphasizes that RIAs need robust governance frameworks to address AI behavioral and cybersecurity risks.

FINRA advises members to make sure their Written Supervisory Procedures (WSPs) cover AI governance, AI vendor risk management, and monitoring for AI agents. AI should be treated as a core operational function that requires oversight just like typical, human-centric functions. AI’s rapid adoption for efficiency reasons must be tempered with effective risk management to protect the interests of investors and markets.

Specific AI considerations the 2026 report highlights include:

  • Use of autonomous AI agents requires businesses to establish guardrails on behavioral risks, appropriately limit agents’ system access, and monitor agents to block unauthorized or “out of bounds” actions.
  • Members must institute company-wide governance to test, approve, and monitor AI tools, including tracking logged output and AI risk management.
  • Members’ AI governance programs must protect data privacy, reduce potential hallucination impacts, and ensure fair, unbiased AI communication with clients.
  • Members are responsible for the results of using third-party AI systems, which requires them to institute comprehensive vendor due diligence practices.
  • Members must collect and archive all AI-driven communications and decisions to enable transparency and support recordkeeping compliance.
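The recordkeeping expectation above can be supported by an append-only audit trail of AI interactions. Below is a minimal sketch in Python; the record fields and the hash-chaining scheme are illustrative assumptions, not FINRA requirements, and a production archive would also need durable, WORM-compliant storage.

```python
import hashlib
import json
from datetime import datetime, timezone

class AIAuditLog:
    """Append-only log of AI-driven communications for recordkeeping.

    Each record is chained to the previous one by hash, so tampering
    with an archived entry is detectable during an audit.
    """

    def __init__(self):
        self.records = []
        self._last_hash = "0" * 64

    def record(self, model, prompt, output, user):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model": model,
            "prompt": prompt,
            "output": output,
            "user": user,
            "prev_hash": self._last_hash,
        }
        # Hash the canonical JSON form to chain entries together.
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._last_hash = digest
        self.records.append(entry)
        return entry

    def verify(self):
        """Recompute the hash chain; True if no record was altered."""
        prev = "0" * 64
        for e in self.records:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Chaining each record to its predecessor means an auditor can confirm that nothing was silently edited or deleted after the fact, which is the transparency property the report emphasizes.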

Other best practices FINRA recommends for AI users include:

  • Revising WSPs as needed to account for AI-related risks.
  • Revising vendor/partner agreements to cover third-party AI use and risk management.
  • Updating threat models to cover AI-enabled fraud and cyberattacks.
  • Conducting mock audits to test AI oversight.
  • Tracking the evolving regulatory, enforcement, and advisory activity by FINRA, the SEC and other entities pertaining to GenAI use.

What AI risks does FINRA call out for member firms?

Emerging AI risks currently of greatest concern to FINRA and its members include:

  • AI agents acting autonomously without “a human in the loop” for validation/approval of results.
  • AI agents overstepping the bounds of a user’s permissions and authority.
  • AI agents storing, processing, disclosing, or otherwise misusing sensitive data due to cybercriminal activity or software problems.
  • Complex, multi-step agent reasoning tasks that make outcomes hard to explain, trace, or audit.
  • Problematic reward functions that could lead an AI agent to make decisions that negatively impact investors, organizations, and/or markets.
  • Inappropriate use of general-purpose AI agents that lack specific domain training required to execute complex, sector-specific tasks.
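Two of these risks, agents overstepping a user's permissions and agents acting without a human in the loop, can be mitigated at the tool-invocation layer by mediating every action an agent attempts. The sketch below is illustrative only: the tool names, permission model, and "high-impact" classification are assumptions for demonstration, not prescribed controls.

```python
# Actions that should never execute without explicit human approval
# (illustrative classification; each firm would define its own).
HIGH_IMPACT = {"execute_trade", "transfer_funds"}

class AgentGateway:
    """Mediates every tool call an AI agent attempts.

    - Blocks tools outside the invoking user's permission set, so the
      agent never gains more authority than the user it acts for.
    - Routes high-impact actions to a human approver before execution.
    """

    def __init__(self, user_permissions, approver):
        self.user_permissions = user_permissions  # user -> set of tool names
        self.approver = approver  # callable(user, tool, args) -> bool

    def call(self, user, tool, args, tools):
        allowed = self.user_permissions.get(user, set())
        if tool not in allowed:
            return {"status": "denied", "reason": "outside user permissions"}
        if tool in HIGH_IMPACT and not self.approver(user, tool, args):
            return {"status": "denied", "reason": "human approval withheld"}
        return {"status": "ok", "result": tools[tool](**args)}
```

The design point is that the gateway, not the agent, is the trust boundary: even a hallucinating or compromised agent can only request actions, and every request is checked against the human user's actual authority.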

RIAs are accountable for putting governance systems in place to address these growing risks. The 2026 report states, “Using GenAI can implicate rules regarding supervision, communications, recordkeeping, and fair dealing.”

FINRA warns members not to become complacent about AI-driven automation. Human oversight is key to quality control and risk reduction. For example, where AI is part of the supervisory system itself, “Policies and procedures may consider the integrity, reliability, and accuracy of the AI model.”

FINRA also recommends that members’ cybersecurity programs directly target AI-related risks, including both internal and third-party/partner AI usage.

What are the top AI use cases among FINRA members?

A five-fold increase in the number of AI use cases reported—from three in 2025 to fifteen in 2026—illustrates the explosion in AI use among FINRA members.

The top three most common AI use cases remain the same from 2025:

  • Summarization/extraction of information from multiple sources into one document
  • Conducting analysis across diverse data sets or source documents
  • Retrieving relevant sections of policies and procedures for employees

The 2026 report adds twelve more use cases, including:

  • Software coding support
  • Workflow automation and business process optimization
  • Conversational AI and question answering systems (e.g., chatbots and virtual assistants)
  • Linguistic or audio/text translation tools
  • Content generation, including reports, online content, etc.
  • Personalization and recommendation systems
  • Automated data classification and categorization
  • Automated modeling, forecasting, and simulation
  • Data transformation and conversion from unstructured to structured formats
  • Generating synthetic/artificial data for AI modeling and testing
  • Sentiment analysis
  • Natural language querying of structured databases

FINRA notes that many of its member firms have implemented GenAI systems to achieve efficiency gains within internal processes and for information retrieval.

Another sweeping AI usage trend is the proliferation of AI agents, which can autonomously perform tasks and make decisions on behalf of users. AI agents offer huge potential productivity benefits, but come with significant and unique risks to investors, companies, and/or markets. These risks include unintended disclosure of sensitive data, privilege/access violations, and negative decision impacts stemming from issues like bias and hallucinations.

Next steps

The 2026 FINRA report sends a strong warning to members that AI governance needs to catch up with AI adoption in financial services. Deploying AI systems without the controls, monitoring, and recordkeeping that are expected of regulated businesses is a major and immediate compliance concern.

For organizations looking to establish robust parameters governing AI use, CBIZ Pivot Point Security offers comprehensive AI governance and advisory services. We work closely with client teams to evaluate control effectiveness, validate compliance with evolving regulations, and implement systems that empower your organization to realize AI benefits while proactively managing associated risks.

Contact us to discuss your GenAI usage and how we can help you establish clear governance to reduce AI risk.
