August 9, 2023

Last Updated on January 13, 2024

Originally offered only to researchers who requested access and accepted a noncommercial license, Meta’s LLaMA, a highly capable large language model (LLM), was quickly leaked online. Anyone, anywhere can now modify and redistribute LLaMA for their own purposes without oversight. The AI community has already extended the original release significantly, especially by creating versions that run on consumer hardware and can be fine-tuned in a few hours. Meta has since released LLaMA 2, expanding the model’s open availability to the Amazon Web Services (AWS) and Microsoft Azure cloud platforms.


Giving hackers the advantage

This democratized access to such a sophisticated AI model, while enabling rapid innovation, inevitably gives cybercriminals a big leg up on the security community. For one thing, there is currently no way to establish the provenance and trustworthiness of AI models, including the data and algorithms used to train them. Consumers in this proliferating LLaMA-based AI supply chain have no way to know about cyber risks like poisoned datasets, hidden backdoors, or modifications designed to disseminate disinformation.

Another concern, voiced by US Senators Hawley and Blumenthal, is that LLaMA was released with “seemingly minimal” protections to prevent the model from being misused for cybercrime, privacy violations, fraud, spamming, harassment, spreading misinformation, and “other wrongdoings and harms” we haven’t even thought of yet. In contrast, two other leading LLMs, OpenAI’s GPT-4 and Google’s Bard, are “closed source” and accessible to developers only through APIs.

GPT-4 and Bard also have built-in ethical guardrails, whereas LLaMA apparently does not. This may be why OpenAI CEO Sam Altman, speaking at a US Senate subcommittee hearing, agreed with calls for a new AI regulatory agency within the US government.

AI-driven malware is coming

Concerns about AI-enabled malware are growing as AI innovation explodes and more and more LLMs become available for repurposing. A recent report from CyberArk highlights how AI is expanding the identity security threat landscape: 93% of the security professionals surveyed expect AI-powered attacks to impact their organizations in 2023.

Concern is rife that AI-driven malware vectors will increase the number of attacks while making threats more complex and harder to detect. Projected AI use cases and tactics include identifying new vulnerabilities for zero-day attacks, combining attack vectors within a single exploit, and hiding malware on victims’ networks. For example, hackers could leverage AI and machine learning (ML) tools to learn what security solutions are deployed on a targeted network and automatically steer attacks around them, effectively enabling attackers to “do more with less” and increase their profitability.

It’s highly likely that sophisticated cybercriminals will adapt LLaMA to increase the effectiveness and financial return of existing business models, such as by creating novel and compelling ransomware attack vectors. “Get ready for loads of personalized spam and phishing attempts,” tweeted one security researcher. “Open sourcing these models was a terrible idea.”


What’s next?

The best defense against AI-enabled malware, or any other form of cyber threat, is a robust and comprehensive information security program that aligns with a trusted framework like ISO 27001 or NIST SP 800-171 to reduce information-related risk across all attack vectors.

Contact Pivot Point Security to speak with an expert about how best to reduce your company’s cyber risk from AI and conventional threats.

Interested in a checklist to see how ready you are for an ISO 27001 certification audit?

It's a little more complicated than just checking off a few boxes.
To learn more, download our ISO 27001 Un-Checklist now!