AI vendor finds opportunity amid AI computing problem

With the growth of generative AI, computing power has become a major concern for enterprises and vendors alike.

Generative AI systems such as ChatGPT consume large amounts of compute to train and run, making them costly.

One AI vendor trying to address the massive need for compute is Lambda.

The GPU cloud vendor, which provides cloud services including GPU compute as well as hardware systems, revealed it had achieved a valuation of more than $1.5 billion after raising $320 million in a Series C funding round.

The vendor was founded in 2012 and has focused on building AI infrastructure at scale.

As a provider of cloud services based on H100 Tensor Core GPUs from its partner Nvidia, Lambda gives AI developers access to infrastructure for training, fine-tuning, and running inference on generative AI systems and large language models (LLMs).

One of the early investors, and a participant in the latest funding round, is Gradient Ventures.

Gradient Ventures first invested in the AI vendor in 2018 and then did so again in 2022.

The investment fund became interested in Lambda at a time when the vendor faced the challenge of trying to build AI models without the workstations and infrastructure it needed. That challenge led Lambda to start building AI hardware that researchers could use.

"That's why we were excited is that we saw this sort of challenge to the development," said Zachary Bratun-Glennon, general partner at Gradient Ventures. "Since then, we've been excited as the product has developed."

Lambda grew from building workstations to hosting servers for customers with bigger compute needs and budgets, and then to offering a cloud service that users can reach with a point and click from their own desktops, without needing to buy a specialized workstation.

"Our excitement is just seeing them meet the developer and the researcher where they are with what they need," Bratun-Glennon said.

Lambda's current fundraising success comes as the vendor continues to take advantage of the demand for computing in the age of generative AI, Futurum Group research director Mark Beccue said.

"I really think the fundraise ... has got to do with that opportunistic idea that AI compute is in high demand, and they're going to jump on it," he said.

As a vendor with experience building on-premises GPU hardware for data centers, Lambda appeals to investors because of the options it brings to enterprises, he added.

Lambda also enables enterprises to get up and running quickly with their generative AI projects, Constellation Research founder R "Ray" Wang said.

"GenAI on demand is the best way to look at it," Wang said. "Lambda labs basically says, 'Hey, we've got the fastest, the best, and not necessarily the cheapest but a reasonably priced ability to actually get LLMs on demand.'"

"What people are rushing to deliver is the ability to give you your compute power when you need it," he continued.

However, as generative AI evolves, the compute problem could ease somewhat.

Over the past year and a half, generative AI systems have evolved from large models with up to 40 billion parameters to smaller models with as few as 2 billion parameters, Beccue said.

"The smaller the language models are, the less compute you have to use," he said.

Moreover, while Nvidia is known for providing powerful AI accelerators like GPUs, competitors including Intel and AMD have also released similar offerings in the last few months, Beccue added.

For example, Intel's Gaudi2 is a deep-learning processor comparable to Nvidia's H100.

In December, AMD introduced its MI300X accelerators, chips designed for generative AI workloads that rival Nvidia's H100 in performance.

"The models are getting better, and the chips are getting better and we're getting more of them," Beccue said. "It's a short-term issue."

For Lambda, the challenge will be how to extend beyond solving the current AI computing challenge.

"They're not necessarily going to be competing head-to-head with the cloud compute people," Beccue said. He noted that the major cloud computing vendors -- the tech giants -- are deep-pocketed and have vast financial resources. "I'm sure what they're thinking about is, 'Okay, right now, there's kind of a capacity issue that we can fill. How do we extend over time?'"

As an investor in AI companies, Bratun-Glennon said he thinks generative AI will produce thousands of language models, each requiring a different amount of compute.

"Even if there are models that have lower compute requirements, the more use cases people will find to apply them to, the lower the cost that creates so the more ubiquitous that becomes," he said. "Even as models get more efficient, and more companies can use them that expands the amount of compute that is required."

AI compute is also a big market, helping Lambda serve developers -- a different audience than the one other cloud providers target, he added. Hyperscale cloud providers focus on selling to large enterprises and winning large workloads.

"Lambda is the AI training and inference cloud," Bratun-Glennon said. "The thing that has carried through the six years I've been working with them is the AI developer mindset."

Lambda is not the only vendor working to meet the demand for AI compute.

On February 20, AI inference vendor Recogni revealed it had raised $102 million in Series C funding co-led by Celesta Capital and GreatPoint Ventures. Recogni develops AI inference systems to address the compute demands of AI.

The latest Lambda round was led by Thomas Tull's U.S. Innovative Technology fund, with participation from Gradient Ventures as well as SK Telecom, Crescent Cove and Bloomberg Beta.

Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems.
