Banana

Serverless GPUs for Inference

Host your machine learning models on serverless GPUs with ease.

When should you use Serverless GPUs?

Ready to Scale

When you need to scale up and down with demand while maintaining a great customer experience.

Cost Savings

When “always-on” GPUs are costing too much and you need to improve cost efficiency.

Speed to Market

When you need a reliable hosting solution quickly and/or prefer moving fast over building in-house.

Why use Serverless GPUs?

Cost Savings


Only pay for the GPU resources you use (utilization time) rather than “always-on” GPU costs.

In other words, if your product only needs 1/4 of a GPU’s compute, you only pay for 1/4 of a GPU.

Contrast this with an “always-on” GPU, where you pay for the entire GPU to run 24/7, regardless of what percentage of its compute power you actually use.

We are seeing companies experience upwards of 90% cost savings on cloud compute by going serverless.
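As a rough illustration of the billing difference (the hourly GPU price below is a made-up assumption for the sketch, not an actual quote):

```python
import math

# Rough cost sketch: serverless billing by utilization vs. an always-on GPU.
# The $1.00/hour price is an assumption for illustration only.
HOURLY_GPU_PRICE = 1.00
HOURS_PER_MONTH = 730

always_on_cost = HOURLY_GPU_PRICE * HOURS_PER_MONTH  # pay for 100% of the time
utilization = 0.25                                   # product needs 1/4 of a GPU
serverless_cost = always_on_cost * utilization       # pay only for time actually used

savings = 1 - serverless_cost / always_on_cost
print(f"Always-on: ${always_on_cost:.2f}/mo, "
      f"serverless: ${serverless_cost:.2f}/mo, savings: {savings:.0%}")
```

At 25% utilization the saving is 75%; workloads with spikier, lower average utilization are where the upwards-of-90% figures come from.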


Speed to Market


Say goodbye to multiple months of building hosting infrastructure. With just two lines of code, you get scalable inference hosting for your models.

Instead of hiring or diverting engineering resources to build hosting infrastructure, you get a fast track to cutting-edge production infrastructure. Your engineering team’s time is most valuable when spent on your product, not infrastructure.

Going serverless with Banana generally takes less than 7 days of implementation time with very little engineering investment. Compare this to multiple months of dedicated in-house engineering time.

Autoscaling


We automatically allocate additional GPU compute in real time to handle increased traffic load as you scale. The reverse is also true: if usage slows down, your allocated GPU compute decreases to save you costs.

Latency doesn’t spike and queueing is much less likely, maintaining a quality experience for your customers.

This eliminates the guessing game of how many “always-on” GPUs you need to run when planning for traffic spikes or downtime.
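A toy model of the idea: allocation tracks traffic in both directions, including scaling to zero when idle. The per-GPU capacity figure and the function itself are invented for illustration; Banana’s actual scheduler is internal.

```python
import math

def gpus_needed(requests_per_second: float, capacity_per_gpu: float = 10.0) -> int:
    """Toy autoscaling rule: allocate enough GPU replicas to cover current
    traffic, and release them as traffic falls (illustrative only)."""
    if requests_per_second <= 0:
        return 0  # scale to zero when idle: no "always-on" cost
    return math.ceil(requests_per_second / capacity_per_gpu)

print(gpus_needed(5))   # light traffic -> 1 GPU
print(gpus_needed(42))  # spike -> 5 GPUs
print(gpus_needed(0))   # idle -> 0 GPUs
```

With always-on capacity you must pick a fixed replica count up front and either over-provision for the spike or queue requests; a demand-driven rule like this removes that guess.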


By the numbers

Serverless GPUs: easy to implement, fast, and cost-effective.

10x — Cost savings
99.4% — Reduction in model cold-start time
2 — Lines of code to implement

Serverless GPU Inference Hosting