Protect AI, a startup building tools to harden the security around AI systems, today announced that it raised $35 million in a Series A round led by Evolution Equity Partners with participation from Salesforce Ventures, Acrew Capital, boldstart ventures, Knollwood Capital and Pelion Ventures.
The tranche is more than double the size of Protect AI’s seed round, which closed last December, and it brings the startup’s total raised to $48.5 million. Co-founder and CEO Ian Swanson says the proceeds will be put toward enhanced capabilities in Protect AI’s platform, an expanded research effort and launching new open source projects.
“Now, we have plenty of capital to weather the storm for years to come,” Swanson told TechCrunch in an email interview, adding that Protect plans to grow its workforce from 25 people to 40 by the end of the year.
Swanson co-launched Protect AI with Daryan Dehghanpisheh in 2022. Before Protect, Swanson and Dehghanpisheh did stints at AWS and Oracle and helped to launch DataScience.com, a AI development platform that was later acquired by Oracle, their old employer, for an undisclosed sum.
“We founded Protect AI 18 months ago based on our experience being involved in some of the biggest machine learning and AI deployments in the world,” Swanson said. “We saw the value that AI can deliver, but also the risks that are inherent in these systems. Our mission is to help customers build a safer AI-powered world.”
As I noted in my toptechtrends.com/2022/12/15/protect-ai-lands-a-13-5m-investment-to-harden-ai-projects-from-attack/”>previous coverage of Protect AI, there’s no evidence to suggest that AI models — and the apps powering them, for that matter — are being attacked on a mass scale. (Perhaps the one exception is OpenAI’s toptechtrends.com/2023/07/06/openai-makes-gpt-4-generally-available/”>GPT-4, which has become a target for pirates selling exposed API keys.) But Swanson makes the case that, as AI becomes more broadly adopted in sensitive industries, such as finance and healthcare, it’s only a matter of time before that changes.
Regardless of whether that prediction comes true, Protect provides a range of services designed to address what Swanson describes as AI security “weak points.” Its flagship tool, AI Radar, delivers visibility into the various components used to build an AI model — including the data used for training, testing datasets and code — and then generates a “machine learning bill of materials,” or MLBOM for short.
“We’re creating a new category of machine learning security that focuses on practical threats — threats in the AI and machine learning supply chain, and in how these models are being built,” Swanson said. “What AI Radar is able to do is take a look at that supply chain and find practical threats and risks that we can provide visibility and remediation for with our customers … It can scan all the MLBOMs … for every machine learning model within an enterprise and find which pipelines are using [vulnerable software].”
To Swanson’s point, a number of popular AI open source projects have been found to contain exploitable code. A recent survey from Endor Labs identified vulnerabilities in 52% of the top 100 AI open source projects. And, broadly speaking, the volume of supply chain cyberattacks is increasing. Sonatype reported late last year that attacks involving malicious third-party software increased by 633% from 2021 to 2022.
Privacy and security concerns have also deterred some companies, including Samsung, Apple and Verizon, from allowing their companies to leverage generative AI tools like toptechtrends.com/2023/07/24/chatgpt-everything-you-need-to-know-about-the-open-ai-powered-chatbot/”>ChatGPT in the course of their work. The fear is that confidential information entered into those tools could somehow leak into the public domain, intentionally or not.
In addition to AI Radar, Protect offers tools to mitigate certain types of AI attacks, such as prompt injection attacks. toptechtrends.com/2023/02/24/can-language-models-really-be-protected-from-text-based-attacks/”>Prompt injection is when an AI that works from text-based instructions — prompts — to accomplish tasks is tricked by malicious, adversarial prompts to perform tasks that weren’t a part of its original objective.
Protect can also scan documents from Jupyter Notebook, one of the more popular platforms used to create AI models and run data science experiments, for common issues. (Jupyter “notebooks,” as they’re called, contain all the code necessary to run AI development tasks like model training and fine-tuning.) Improperly secured Jupyter Notebook files can become vulnerable to Python-based ransomware and cryptocurrency mining attacks, research firms have found.
Among other potentially problematic lines in code, Protect evaluates Jupyter notebooks for personally identifiable information (e.g. names and phone numbers), internal-use authentication tokens and credentials and open source code with a “nonpermissive” license that might prohibit it from being used in commercial systems.
“We have to transition from machine learning operations, or toptechtrends.com/tag/mlops/”>MLOps, which is a tried-and-true practice at this point that companies have been doing for over a decade, and inject security,” Swanson said. “We need to get to the point where we are truly performing ML security operations — ‘MLSecOps’ — at scale within large enterprises.”
Protect has a few competitors in the nascent space for AI-defending security tools. There’s Resistant AI, which is developing systems to protect algorithms from automated attacks. And there’s toptechtrends.com/2022/07/19/hiddenlayer-emerges-from-stealth-to-protect-ai-models-from-attacks/”>HiddenLayer, which claims that its technology can defend models from attacks without the need to access any raw data or a vendor’s models.
toptechtrends.com/2021/12/09/robust-intelligence-raises-30m-series-b-to-stress-test-ai-models/”>Robust Intelligence, CalypsoAI and Troj.ai could be counted among Protect’s rivals, as well. But Protect claims to have high-profile private- and public sector-customers in the financial services, healthcare, life sciences and energy industries, signaling that it’s managed to carve out something of a niche for itself.
“The general slowdown in tech is not happening in AI, or security,” Swanson said. (Not to aggressively fact-check Swanson, but it’s worth noting that there’s been a downturn in cybersecurity funding, actually, with Q1 2023 marking the lowest venture capital financing for security in a decade. How might that impact Protect? Tough to say at present.) “Protect AI is at that intersection. The moment is now for AI in terms of deployment — the value it’s delivering. We help to answer questions like ‘how do we de-risk AI?’”