Last Updated on May 8, 2026
Shadow AI—the AI-specific version of shadow IT—is a problem for many organizations, not just law firms. It refers to the unauthorized use of AI that is not formally approved or monitored by IT or security roles.
Because of AI’s autonomous capabilities, powerful data processing, and nondeterministic outputs, shadow AI represents a major business risk. In the legal vertical, this risk is compounded by “the equity partner problem,” where senior leadership uses unapproved AI tools to enhance personal productivity while potentially exposing some of the firm’s most confidential, highest-value data.
How can law firms raise AI risk awareness among senior leaders and build “tone at the top” to empower robust AI governance? This article outlines the key points and issues.
Key takeaways
- Shadow AI is a massive, global problem, with up to 90% of organizations facing risk exposure from unknown AI systems in their environments, and up to 40% of employees admitting they share confidential data with AI platforms without authorization.
- Shadow AI is even riskier for law firms because their proprietary client data and intellectual property constitute their primary business asset. Shadow AI puts sensitive data, client confidentiality, and legal ethics under direct, immediate threat.
- Paradoxically, a firm’s most senior leaders, who are meant to drive policy enforcement and exemplify professional best practices, may be the ones creating the greatest reputational and financial risk by using ungoverned AI on highly sensitive data.
- Using unsanctioned AI can also jeopardize fundamental attorney ethical responsibilities, such as the duties of competence and of safeguarding client data from unauthorized disclosure.
- Eliminating shadow AI risk in law firms calls for a sensitive approach that combines education and practical governance with meeting users’ productivity needs.
What is shadow AI?
Shadow AI is the ad hoc use of unsanctioned, ungoverned AI tools, including public AI services like ChatGPT and open-source models like Llama, to streamline workflows, process documents, analyze data, or generate software code, images, and other work products.
The shadow AI problem is massive, with surveys showing that 80% to 90% of organizations have unknown AI agents in their environments. Likewise, up to 40% of employees share confidential data with AI platforms without authorization.
Shadow AI injects major risks into any corporate IT environment, such as:
- Data leakage into the public domain from feeding sensitive data (e.g., customer proprietary information) into AI tools, where it may be used to train public AI models.
- Hallucinations, biases, and other unverified or invalid outputs that skew business decisions, violate personal rights, or create discriminatory outcomes along with potential legal liability.
- Increased data breach costs and risks due to poor visibility into data exposure and incident root cause.
- Compliance risk from failure to meet cybersecurity, governance, and/or reporting requirements.
- Reputational risk associated with data breaches, client data exposure, regulatory penalties, and financial or other damages to stakeholders.
Why is shadow AI especially risky for law firms?
Shadow AI is even more dangerous for law firms and other professional services firms, where confidential and proprietary data constitutes the “crown jewels,” the primary business asset. Shadow AI directly threatens data security, client confidentiality, and attorney ethical responsibilities.
Efficiency gains from using shadow AI tools for legal work are easily offset by the critical vulnerabilities and attack-surface blind spots that unauthorized AI use creates. These include:
- Breach of client confidentiality and attorney-client privilege when lawyers feed case strategy, client data, or personal data into public AI models, where it may be used for training, stored on external servers, shared with unauthorized parties, and/or surfaced in output to other users.
- Risk from unrecognized hallucinations and other inaccurate output, such as bogus case citations and fabricated quotations. Filing false information with courts can lead to sanctions and reputational damage.
- Exposure to AI-specific cybersecurity attacks like prompt injection and “jailbreaking” internal guardrails to manipulate AI behavior.
- Loss of control over eDiscovery, legal holds, and data retention obligations, such as situations where AI prompts and outputs are relevant to a legal matter.
- Regulatory noncompliance exposure (e.g., GDPR, US state-level privacy laws, HIPAA, EU AI Act) and policy violations.
What is the equity partner problem with shadow AI in the legal vertical?
The phrase “equity partner problem” highlights the paradoxical situation where a firm’s senior leadership—who are meant to drive enforcement of AI, cybersecurity, privacy, and client confidentiality policy—are instead circumventing these guidelines. This often stems from a lack of basic AI risk awareness, and/or a misinformed belief that personal productivity benefits outweigh the risks.
For partners, using shadow AI is a high-stakes gamble with minimal upside. For example:
- When a partner uses a public AI model like ChatGPT or Claude in an unapproved manner to analyze a contract or draft a legal document, they may inadvertently expose highly sensitive strategic information, intellectual property (IP), trade secrets, proprietary client data, and more.
- Inputting privileged information into shadow AI tools can breach attorney-client privilege.
- A firm can be liable for breach of contract if a partner runs confidential client data through a personal AI tool.
To navigate legal and ethical responsibilities along with business risk, law firms need effective AI governance. But without senior leadership buy-in, effective governance is unlikely to materialize, limiting AI’s scalability and ROI while hampering risk treatment. This puts the equity partner problem at the heart of AI strategy and implementation.
What are shadow AI ethical concerns for lawyers?
All legal professionals, especially partners and others with access to the most sensitive data, must be mindful of potential AI ethics concerns, notably breaches of client confidentiality and data privacy. Another widely voiced but less tangible concern is the potential for AI output to improperly influence a legal professional’s objective expert judgment.
To raise awareness and clarify issues around violations of ethics and professional obligations from improper AI use, in July 2024 the American Bar Association’s Standing Committee on Ethics & Professional Responsibility issued Formal Opinion 512, Generative Artificial Intelligence Tools.
This opinion offers AI-specific ethics guidance that references the Model Rules of Professional Conduct. It covers confidentiality, responsibility of supervisory lawyers, fee implications, and other topics:
- Competence to use AI—Lawyers must understand AI’s capabilities and limitations and keep up with AI innovations and emerging legal use cases.
- Confidentiality—Lawyers are responsible for knowing how their AI tools use data and for ensuring that robust safeguards are in place to prevent accidental or unauthorized data disclosure. Firms should get explicit, informed client consent before entering sensitive client data into an AI system.
- Communication with clients about AI use—Attorneys are obligated to know when and to what extent they must disclose their AI use to clients.
- Impacts on fees—AI costs should typically be viewed as office overhead. Law firms generally may not charge clients for time spent learning new technology to help with client matters, except when a client requests use of a specific AI tool. Charging clients for AI use should be preceded by a full explanation and informed client consent.
- Meritorious claims—Lawyers are responsible for validating AI output so that hallucinations do not distort or invalidate claims and arguments.
- Candor toward the tribunal—To avoid making false statements in a court or other legal context, attorneys must review AI output, especially citations and analysis, to find and correct any errors or misstatements.
- Supervisory responsibilities—Partners and other lawyers acting in a managerial capacity must establish guidelines around sanctioned AI uses and supervise lawyers and nonlawyers to ensure compliance. This includes ensuring that staff are sufficiently trained in AI use, risks, and ethical concerns.
Using ungoverned shadow AI without adequate guardrails and risk management procedures would make compliance with many of these guidelines extremely difficult. For example, ABA Model Rule 1.6 requires lawyers to make reasonable efforts to prevent unauthorized disclosure of client data. Using ungoverned AI would arguably violate this precept. Similarly, using AI to draft work products or conduct research without verifying its output could violate an attorney’s competence responsibilities.
How can law firms reduce the risk from senior leaders using shadow AI?
Shadow AI usage often starts with users feeling their current tools are too limited compared with generative or agentic AI. This is why AI bans and restrictions often fail in the legal vertical, where AI efficiency gains are becoming a competitive necessity.
A recommended approach is to “sanction the shadow” by meeting users’ productivity needs with secure, governed AI alternatives:
- Identify all the AI tools currently in use at your firm, both approved and unapproved (see the discovery sketch after this list).
- Create comprehensive policies on AI usage along with AI governance processes and professional oversight.
- Make available an approved set of governed AI tools that address key partner use cases while mitigating associated risks.
- Educate partners, other lawyers, and staff on AI risks, especially data leakage and unexpected AI behavior like hallucinations.
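For the discovery step, one practical starting point is scanning web proxy or firewall logs for connections to known AI services. The Python sketch below is a minimal, hypothetical illustration, not a reference implementation: the CSV log format, the column names (user, destination_host), and the AI_DOMAINS list are all assumptions to adapt to your own environment.

```python
"""Minimal sketch: flag potential shadow AI traffic in web proxy logs.

Assumptions (hypothetical): logs are CSV with a header row containing
at least 'user' and 'destination_host' columns; AI_DOMAINS is an
illustrative, intentionally incomplete starting list.
"""
import csv
from collections import Counter

# Illustrative starting list; extend with your own inventory and threat intel.
AI_DOMAINS = {
    "chatgpt.com",
    "api.openai.com",
    "claude.ai",
    "api.anthropic.com",
    "gemini.google.com",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count requests to known AI endpoints, keyed by (user, host)."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            # Match the domain itself or any of its subdomains.
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in find_shadow_ai("proxy_log.csv").most_common():
        print(f"{user} -> {host}: {count} requests")
```

Even this simple matching shows which users and services to prioritize for outreach. In practice, a CASB, DNS filtering, or endpoint telemetry gives broader coverage than proxy logs alone, since much shadow AI use happens in desktop apps and browser extensions.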
Next steps
CBIZ Pivot Point Security helps law firms get the most from AI investments and build a governance foundation that supports innovation while addressing the spectrum of AI risk. We have longstanding experience with AI and cybersecurity in the legal vertical and can act as a central resource for your strategic and operational AI needs.
Contact us to schedule an introductory conversation with an AI expert.