Others · June 14, 2023

OctoML launches OctoAI, a self-optimizing compute service for AI

When OctoML launched in 2019, its primary focus was optimizing machine learning (ML) models. Since then, the company has added features that make it easier to deploy ML models (and raised $132 million). Today, the company is launching the latest iteration of its service — and while it’s not quite a pivot, it does shift the company’s emphasis from optimizing models to helping businesses use existing open-source models and fine-tune them with their own data, or use the service to host their own custom models. The new OctoML platform — dubbed OctoAI — is a self-optimizing compute service for AI, with a special emphasis on generative AI, that helps businesses build ML-based applications and put them into production without having to worry about the underlying infrastructure.

“The previous platform was focused on ML engineers and optimizing and packaging the models into containers that could be deployed across different sets of hardware,” OctoML co-founder and CEO Luis Ceze explained. “We learned a ton from that, but the next natural evolution is to have a fully managed compute service that abstracts all of that [ML infrastructure] away.”


Image Credits: OctoML

With OctoAI, users simply decide what they want to prioritize (think latency vs. cost) and OctoAI automatically chooses the right hardware for them. The service also automatically optimizes these models (leading to additional cost savings and performance gains) and decides whether it’s best to run them on Nvidia GPUs or AWS’s Inferentia machines. This takes away a lot of the complexity of putting models into production, which is still often a roadblock for ML projects. Users who want full control over how their models run can, of course, also set their own parameters and decide which hardware to run on. Ceze, however, believes that most users will opt to let OctoAI manage all of this for them.


Image Credits: OctoML

It also helps that OctoML offers accelerated versions of popular foundation models like Dolly 2, Whisper, FILM, FLAN-UL2 and Stable Diffusion out of the box, with more models on the way. OctoML managed to make Stable Diffusion run three times faster and reduce its cost by 5x compared to running the vanilla model.

It’s worth noting that while OctoML will continue to work with existing customers who only want to use the service for optimizing their models, the company’s focus going forward will be on this new compute platform.


“OctoML launches OctoAI, a self-optimizing compute service for AI” by Frederic Lardinois, originally published on TechCrunch.
