CrowdSrike is partnering with AI stalwart Nvidia, an initiative that gives enterprises a tool for protecting large language models (LLMs) from cyberthreats and opens a new service possibility for MSSPs and MSPs.
The Austin, Texas-based cybersecurity vendor announced this week that it is integrating its Falcon Cloud Security platform with Nvidia’s universal LLM NIM – prebuilt inferencing microservices to accelerate the deployment of AI models – and NeMo, a platform for building custom generative AI tools, like LLMs, retrieval models, and vision language models (VLMs).
The integration of Falcon Cloud Security with Nvidia expands the protection CrowdStrike already offers for Nvidia Enterprise AI Factories – validated designs to help organizations deploy AI infrastructure on-premises – and ensure that enterprises run and scale LLMs applications across hybrid and multicloud environments at every stage, from building them to runtime to posture management.
It offers full lifecycle protection for AI and more than 100,000 LLMs, according to CrowdStrike. It’s an important step in the rapidly evolving AI world, according to CrowdStrike Chief Business Officer Daniel Bernard.
“LLMs running in the cloud face new risks, like prompt injection, data leakage, and API abuse, that can occur even without a traditional breach,” Bernard told MSSP Alert, adding that the expanded partner extends CrowdStrike’s runtime protection to Nvidia’ AI infrastructure “so organizations can secure LLMs where they run – in the cloud, in production – with the same unified platform already protecting workloads, identities, and endpoints.”
LLMs Under Threat
LLMs are foundational to modern AI operations and are under a rising number of cyberattacks. Securing them “is an incredibly
complex and dynamic challenge,” according to the experts team at Wiz, the cloud cybersecurity company that Google is buying for $32 billion.
“Unlike traditional systems, LLMs belong to a rapidly evolving field where attackers and defenders are in a constant race,” the group wrote. “Given that LLMs process vast amounts of data – often from unknown sources – and are designed to interact with the world in flexible, unpredictable ways, the attack surface is incredibly wide, and the attack vectors are extremely diverse.”
Adding to the challenge is that “AI and machine learning security require deep expertise that’s still emerging, which requires constantly updating SecOps defenses given the fast pace of innovation,” they wrote.
The OWAP (Open Worldwide Application Security Project) Foundation last year published its list of the
top 10 risks facing LLMs, highlighting issues such as prompt injection attacks that bypass safeguards, training data poisoning that undermines accuracy, potential disclosure of sensitive or proprietary data, and organizational failures to vet maliciously crafted outputs that could disrupt downstream systems.
AI vs. AI
There also is the ongoing challenge of using AI to protect against bad actors that also are using the emerging technology.
“AI is now powering both sides of the fight,” CrowdStrike’s Bernard said. “Adversaries are moving faster and targeting new surfaces, especially in the cloud. We have conviction that AI-based threats will outpace legacy tools.”
A New Service to Offer
MSSPs for years have been working with organizations to deliver outsourced cybersecurity protections and now have another service to offer, he said.
“MSSPs are on the front lines helping customers implement AI in their businesses and secure AI in the cloud, especially as internal teams struggle to keep up,” Bernard said. ”MSSPs already are in the business of protection across endpoints, cloud workloads, and identities. With CrowdStrike, MSSPs can now extend that coverage to AI, on the same platform, with the same workflows. No bolt-ons. No retraining. Just a faster, more efficient way to turn AI security into a foundational MSSP service.”
Falcon Cloud Security offers a range of pre-deployment protections, such as AI-SPM (security program management), and scanning AI models for malware, trojanized models, and backdoors, and detecting shadow AI, both of which were introduced into the platform in April. Such capabilities also deliver threat intelligence integrated into Nvidia NeMo Safety workflows to help secure foundations models as they bring on new AI applications.