Timed to coincide with SIGGRAPH, the annual computer graphics conference, Nvidia this morning announced a new platform designed to let users create, test and customize generative AI models on a PC or workstation before scaling them to a data center or public cloud.
Dubbed AI Workbench, the service can be accessed through a basic interface running on a local workstation. Using it, developers can fine-tune and test models from popular repositories like Hugging Face and GitHub using proprietary data, and access cloud computing resources when the need to scale arises.
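Nvidia hasn't published the details of AI Workbench's interface, but for a sense of what "fine-tuning a model on proprietary data" looks like in practice today, here's a minimal sketch using the open-source Hugging Face Transformers and Datasets libraries. It's a generic illustration, not AI Workbench itself; the checkpoint name and CSV path are placeholders.

```python
# Generic local fine-tuning sketch with Hugging Face Transformers -- an
# illustration of the workflow, not AI Workbench's actual interface.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # any Hub checkpoint; placeholder choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Stand-in for "proprietary data": a local CSV with `text` and `label` columns.
dataset = load_dataset("csv", data_files="proprietary_data.csv")["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3),
    train_dataset=dataset,
)
trainer.train()  # runs on the local GPU if one is available
```

Everything here executes on the developer's own machine; part of Nvidia's pitch is that the same project can later be pushed to a data center or cloud when more compute is needed.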
Manuvir Das, VP of enterprise computing at Nvidia, says the impetus for AI Workbench was the challenging, time-consuming nature of customizing large AI models. Enterprise-scale AI projects can require hunting through multiple repositories for the right frameworks and tools, a process further complicated when projects have to be moved from one infrastructure to another.
Certainly, the success rate for launching enterprise models into production is low. According to a poll from KDnuggets, the data science and business analytics platform, the majority of data scientists who responded said that 80% or more of their projects stall before a machine learning model is deployed. A separate estimate from Gartner suggests that close to 85% of big data projects fail, due in part to infrastructural roadblocks.
“Enterprises around the world are racing to find the right infrastructure and build generative AI models and applications,” Das said in a canned statement. “Nvidia AI Workbench provides a simplified path for cross-organizational teams to create the AI-based applications that are increasingly becoming essential in modern business.”
The jury’s out on just how “simplified” the path is. But to Das’ point, AI Workbench allows developers to pull together models, frameworks, SDKs and libraries, including libraries for data prep and data visualization, from open source resources into a unified workspace.
As demand for AI, particularly generative AI, grows, there's been an influx of tools focused on fine-tuning large, general-purpose models for specific use cases. Startups like Fixie, Reka and Together aim to make it easier for companies and individual developers to customize models to their needs without having to shell out for costly cloud compute.
With AI Workbench, Nvidia's pitching a more decentralized approach to fine-tuning, one that happens on a local machine rather than in a cloud service. That makes sense, given that Nvidia and its portfolio of AI-accelerating GPUs stand to benefit; the company makes not-so-subtle mentions of its RTX lineup in the announcement. But Nvidia's commercial motivations aside, the pitch might appeal to developers who don't wish to be beholden to a single cloud or service for AI model experimentation.
AI-driven demand for GPUs has propelled Nvidia's earnings to new heights. In May, the company's market cap briefly reached $1 trillion after Nvidia reported $7.19 billion in revenue, up 19% from the previous fiscal quarter.