Platform now serves 500K developers, demonstrating soaring demand for flexible AI infrastructure that reduces compute costs by as much as 90%
MT. LAUREL, N.J., Jan. 20, 2026 /PRNewswire/ — Runpod, the platform that empowers developers to build and run custom AI systems at scale, today announced it has surpassed $120 million in annual recurring revenue (ARR), marking a significant milestone in its mission to make flexible GPU infrastructure accessible and affordable for AI builders.
Runpod now serves more than 500,000 developers building custom AI systems – ranging from individual innovators to Fortune 500 enterprise teams spending millions annually. The company’s rapid growth stems from delivering speed to value: developers can spin up training, fine-tuning, testing, and inference environments in seconds, scale instantly with demand, and iterate faster than traditional cloud infrastructure allows. This agility, combined with low-cost, flexible, burstable compute – as much as 90% lower than traditional providers – is enabling teams of any size to build production-grade AI systems. Moreover, revenue grew 90% year over year, showcasing operational efficiency as the platform generates more value from existing compute resources.
“Runpod is building the launchpad for the next generation of Fortune 100 companies,” said Zhen Lu, co-founder and CEO of Runpod. “The breakthrough AI companies of the next decade won’t need hundreds of millions in funding or massive teams, like today’s titans. They’ll start with a few developers, a transformative idea and infrastructure that lets them compete from day one. That’s the future we’re creating at Runpod.”
Runpod’s core offerings – GPU Instances, Serverless GPUs, and Instant Clusters – enable users to develop, train, and scale custom, full-stack AI systems in one cloud within seconds. The platform’s serverless endpoints auto-scale from zero to thousands of concurrent GPUs with sub-500 millisecond cold start times, while instant clusters enable developers to spin up multi-node environments of up to 64 H100s on demand – capabilities that typically require lengthy enterprise contracts and onerous negotiations.
Runpod’s $120M ARR milestone comes alongside additional compelling growth metrics:
- Signups surged 155% YoY, reflecting accelerating adoption as more developers choose Runpod to build custom AI applications.
- Net dollar retention reached 120%, well above the 110% threshold that analysts consider world-class, and a crucial indicator that existing customers are significantly expanding their usage.
- Runpod delivers over 8 exabytes of global network traffic annually, equivalent to streaming over 1.1 billion hours of 4K video.
- For training large AI models, Runpod supports over 20 terabits per second of internal InfiniBand and Ethernet network capacity at multiple global data centers.
What’s Next
The momentum Runpod has seen to date reflects deep developer trust in the company’s ability to power the training of new AI models and deliver production AI inference at scale. As Runpod enters its next phase of growth, the company will focus on enabling developers to use Runpod alongside their hyperscaler workloads and on-premises infrastructure in one unified interface. The goal is to give developers the freedom to run compute wherever it makes sense by eliminating the need to manage multiple dashboards or choose one platform over another.
“Runpod’s journey epitomizes product-market fit – Zhen and Pardeep launched Runpod via a Reddit post, got immediate validation from developers, and scaled from there without losing Runpod’s developer-first DNA,” said Radhika Malik, Partner at Dell Technologies Capital. “Reaching $120M ARR while maintaining that ethos is remarkable. They’ve proven that flexible infrastructure built by developers, for developers, can compete at enterprise scale. We’re proud to back founders who understand that listening to the community and solving real problems is what drives sustainable growth.”
“Glam is built for creators, which means our infrastructure has to keep up with trends, not slow them down,” said Olek Rybalko, CTO, Glam Labs. “Runpod lets us spin GPU workloads up on demand, handle sudden spikes, and scale to zero, all at a fraction of the cost of traditional cloud providers.”
“We train over 800,000 LoRAs monthly on Runpod using 500+ concurrent GPUs,” said Justin Maier, Founder & CEO at Civitai. “Our workload spikes unpredictably when trends go viral, but Runpod’s infrastructure easily scales to help us meet evolving demands. For an open-source platform serving creators globally, the flexibility and cost-efficiency of Runpod has been essential – we’re able to support our community’s creativity without the overhead of traditional cloud platforms.”
“We’re building infrastructure that integrates with hyperscalers, enabling diverse compute stacks optimized for the job,” said Justin Mongroo, COO, Runpod. “Developers should have freedom to use any model, any dev tool, and connect their compute wherever it lives. That’s the vision we’re building toward.”
To learn more about Runpod and get started on the platform, visit https://www.runpod.io/
About Runpod
Runpod is a globally distributed GPU cloud platform that empowers developers at any organization to deploy custom full-stack AI applications – simply, globally, and at scale. With Runpod’s key offerings – Cloud GPUs and Serverless GPUs – developers can develop, train, and scale AI applications in one cloud within seconds. Runpod is making cloud computing accessible and affordable without compromising control, customization, or cost-efficiency. To learn more, visit https://www.runpod.io/
SOURCE Runpod

