- Executive pressure to accelerate AI adoption pushes employees to deploy agents quickly, increasing unmanaged operational and security risk.
- When non-technical staff build agents, they introduce credential exposure, identity sprawl, shadow automation, and missing logging and audit trails.
- Treating governance as friction delays controls; the first incidents will bring costly regulation, insurance changes, and operational pauses.
- Organizations that build AI guardrails early gain speed and resilience; those that rush risk larger blast radius and cleanup disruption.
Last Updated on March 17, 2026
Jack Dorsey’s decision to cut 4,000 jobs (40% of Block’s workforce) as AI transforms its operations will be remembered less for the magnitude of the layoff than for the signal it sent. The message echoing across boardrooms is clear:
Move faster. Replace more. Show AI-driven efficiency now.
Within a week of the news, I had two clients tell me their senior leadership had mandated that:
- Every employee must be leveraging AI.
- Even non-technical personnel should use advanced tools like Claude CoWork, Cursor, and agent frameworks.
- Individuals should build internal AI tools to drive operational efficiency.
This is how unmanaged risk escalates.
The Reality of What “Move Faster” Looks Like
On a recent call, the CISO at a client shared the story of a non-technical marketing employee who proudly demonstrated an AI agent he built. It:
- Contained hardcoded Salesforce API keys
- Contained hardcoded Gmail API keys
- Had no logging or monitoring
- Reflected no consideration of risk
- Ran under an unclear identity: possibly his credentials, possibly a service account
He was following leadership’s directive.
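For illustration, here is a minimal sketch of what an agent like that tends to look like in practice. Every key, name, and URL below is hypothetical; this is the anti-pattern, not a recommendation:

```python
# anti_pattern_agent.py: illustrative only. This is the problem, not the fix.
import base64

import requests

# Hardcoded secrets: anyone who can read the repo, the Slack thread,
# or the prompt history now holds working credentials.
SALESFORCE_TOKEN = "00Dxx-FAKE-EXAMPLE-TOKEN"  # hypothetical value
GMAIL_TOKEN = "ya29.FAKE-EXAMPLE-TOKEN"        # hypothetical value

def encode_message(payload) -> str:
    # Gmail's send endpoint expects a base64url-encoded RFC 2822 message.
    raw = f"To: team@example.com\r\nSubject: Leads\r\n\r\n{payload}"
    return base64.urlsafe_b64encode(raw.encode()).decode()

def sync_and_notify():
    # Pulls CRM records and mails them out with no logging, no scoping,
    # no approval step, and no record of what data left the organization.
    leads = requests.get(
        "https://example.my.salesforce.com/services/data/v60.0/query",
        params={"q": "SELECT Name, Email FROM Lead"},
        headers={"Authorization": f"Bearer {SALESFORCE_TOKEN}"},
        timeout=30,
    ).json()
    requests.post(
        "https://gmail.googleapis.com/gmail/v1/users/me/messages/send",
        headers={"Authorization": f"Bearer {GMAIL_TOKEN}"},
        json={"raw": encode_message(leads)},  # sensitive data, zero audit trail
        timeout=30,
    )
```

Nothing here is exotic. It is roughly what a capable generative tool will hand a non-technical user who asks for “an agent that emails our leads.”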
But from a security standpoint, this created:
- Credential exposure risk from hardcoded keys in code repos or prompts
- Privilege escalation risk if running under a user account with broad access
- Data exfiltration pathways, such as LLM toolchains sending sensitive data to external APIs
- Non-repudiation gaps (no clear attribution of agent actions)
- Incident response blind spots (no audit trail of AI-generated actions)
Multiply this unmanaged risk across the hundreds of well-meaning employees building similar tools.
That is the real impact of the extensive press coverage of Jack Dorsey’s decision: AI acceleration without guardrails.
Governance Has Historically Been Framed as Friction
I recently presented to a private equity–backed board on implementing AI governance ahead of—or at a minimum alongside—their major AI initiatives.
The response was polite. But the subtext was clear: I was a speed bump.
We are in a dangerous phase with AI:
- Boards demand and reward speed.
- CXOs fear being perceived as slow.
- AI metrics become performance indicators.
- Security review is seen as a drag.
In short, governance takes a back seat until the first major incident.
The Technical Risk Is Not Theoretical
AI agents are not static SaaS tools. They are:
- Programmatic actors
- Connected to APIs
- Operating across identity boundaries
- Capable of chaining actions autonomously (sketched below)
- Informing or making decisions
- Acting on a human’s behalf
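To make “chaining actions autonomously” concrete, here is the skeleton of a typical tool-calling loop, the core of most agent frameworks. Every name is a hypothetical stand-in (the stubbed call_llm especially), but notice what the loop does not contain: no approval gate, no scope check, no record of why a tool was invoked.

```python
# agent_loop.py: skeletal tool-calling loop; names and stubs are hypothetical.

def call_llm(history: list) -> dict:
    # Stub standing in for any real LLM SDK call. A real model would choose
    # the next tool and its arguments based on the conversation so far.
    return {"type": "final_answer", "content": "done"}

TOOLS = {
    "query_crm": lambda args: f"queried CRM with {args}",   # hits Salesforce
    "send_email": lambda args: f"sent mail with {args}",    # mails as the agent
    "write_file": lambda args: f"wrote file with {args}",   # touches shared drives
}

def run_agent(goal: str, max_steps: int = 10):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        step = call_llm(history)
        if step["type"] == "final_answer":
            return step["content"]
        # The model, not a human, decides which tool runs and with what input,
        # and the result feeds straight back in as context for the next decision.
        result = TOOLS[step["tool"]](step["arguments"])
        history.append({"role": "tool", "content": result})
```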
Without AI governance and guardrails, you get:
- Identity Sprawl at Machine Speed – Agents authenticated via user OAuth tokens or improperly scoped service accounts inherit excessive privileges. Least-privilege controls collapse.
- Secrets Proliferation – API keys end up embedded in prompts, local scripts, GitHub repos, Slack threads, and AI tool configs. Secrets management is bypassed in the name of speed.
- Shadow Automation – Employees deploy agents that send emails, modify CRM records, generate contracts, access financial systems, etc. All without security review, change control, or logging.
- Data Boundary Erosion – Sensitive HR, financial, customer, or healthcare data gets piped into LLM contexts without a clear understanding of data retention, model-training implications, cross-border transfer exposure, or regulatory classification.
- Attribution & Forensics Failure – When an AI agent executes an action: Who approved it? What data did it access? What decision logic did it use? Can you reproduce the output? Most organizations cannot answer those questions today. (A sketch of controls that could answer them follows this list.)
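Answering them does not take exotic engineering. Here is a minimal sketch, hypothetical names throughout, of the three properties that matter: secrets injected at runtime instead of hardcoded, a dedicated machine identity instead of a borrowed human one, and a structured audit record written before every action:

```python
# governed_agent.py: an illustrative sketch of guardrails, not a framework.
import json
import logging
import os
import time
import uuid

import requests

# Secrets come from the environment (populated by a secrets manager at
# deploy time), never from source code or prompts.
SALESFORCE_TOKEN = os.environ["SALESFORCE_AGENT_TOKEN"]  # scoped, rotatable

# The agent runs under its own identity, not a human's OAuth session, so
# actions are attributable and least privilege can actually be enforced.
AGENT_IDENTITY = "svc-marketing-agent-01"

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent.audit")

def audited_action(action: str, resource: str, detail: dict) -> str:
    """Write a structured audit record before acting; ship to a SIEM in production."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "actor": AGENT_IDENTITY,
        "action": action,
        "resource": resource,
        "detail": detail,
    }
    audit.info(json.dumps(record))
    return record["event_id"]

def fetch_leads() -> list:
    event_id = audited_action(
        action="crm.query",
        resource="salesforce:Lead",
        detail={"fields": ["Name", "Email"], "approved_by": "change-ticket-1234"},
    )
    resp = requests.get(
        "https://example.my.salesforce.com/services/data/v60.0/query",
        params={"q": "SELECT Name, Email FROM Lead"},
        headers={
            "Authorization": f"Bearer {SALESFORCE_TOKEN}",
            "X-Request-Id": event_id,  # ties the API call back to the audit record
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("records", [])
```

Who approved it, what it touched, and under which identity it acted: those answers now exist before the first byte moves.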
AI Incidents Yield Regulations and Insurance Provisions
The next wave of AI-related incidents will drive:
- Additional regulatory guidance
- Increased enforcement scrutiny
- Cyber liability underwriting changes
Clients, regulators, and insurers will ask:
- Do you have an AI acceptable use policy?
- Are AI agents governed under identity & access management (IAM) controls?
- Have logging and auditability been implemented?
- Are third-party LLM providers risk-assessed?
- Do you prohibit hardcoded credentials?
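Most of those questions take real program work to answer; the last one does not. Purpose-built scanners such as gitleaks and truffleHog do this properly, but even a crude CI gate along these lines (illustrative patterns only, and only Python files for brevity) raises the bar:

```python
# scan_secrets.py: a deliberately simple pre-merge check for hardcoded keys.
# Illustrative patterns only; real scanners cover far more credential formats.
import pathlib
import re
import sys

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Google API key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    "generic assignment": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*=\s*['\"][^'\"]{12,}['\"]"
    ),
}

def scan(root: str = ".") -> int:
    hits = 0
    for path in pathlib.Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                print(f"{path}: possible {name}: {match.group(0)[:12]}...")
                hits += 1
    return hits

if __name__ == "__main__":
    sys.exit(1 if scan() else 0)  # non-zero exit blocks the merge in CI
```

Wired into CI, that one exit code would have stopped the marketing employee’s hardcoded keys from ever reaching a repo.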
Organizations that rushed ahead may discover their cyber liability coverage excludes AI-driven losses. That will be the inflection point.
Déjà vu?
We’ve seen this before:
- Cloud workloads before cloud security
- SaaS before identity governance
- DevOps before secure SDLC
- APIs before API security
The pattern:
- Adoption outpaces risk understanding
- Incidents force awareness
- Regulation follows damage
Only this time it will be faster, and the blast radius will be larger, because now the automation layer can think, decide, and act.
Media Hype
The media attention around high-profile AI-driven workforce reductions will intensify pressure on CXOs.
In response to that pressure, many organizations will:
- Skip control design
- Allow shadow AI ecosystems
- Ignore identity sprawl
- Underestimate third-party LLM risk
- Treat AI governance as a “phase two” problem
The companies that implement governance early will not only be safer but also faster, because they won’t have to pause operations to clean up preventable damage.
The press cycle will push companies to move quickly. The market will eventually force them to move responsibly. The only question is whether they will survive the damage that happens in between.

