Serverless GPUs, for AI.

Scale from zero to the moon (and back) in seconds. Only pay for what you use.


Deploy AI models with ease.

Banana is built for custom model deployment.

Build your Application

Use our simple Python framework to build your API handlers.

You can run inference, connect to data stores, call third-party APIs, whatever you need to get the job done.
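As an illustrative sketch only (the decorator names and request/response shapes of the actual Banana SDK are not shown here; everything below is hypothetical), a handler is just a Python function that takes a request payload and returns a JSON-serializable result:

```python
def handler(request: dict) -> dict:
    """A minimal inference handler: read the input, do the work, return JSON.

    In a real app this is where you'd run model inference, query a data
    store, or call a third-party API. Here we just uppercase the prompt
    as a stand-in for the model call.
    """
    prompt = request.get("prompt", "")
    result = prompt.upper()  # placeholder for real inference
    return {"output": result}
```

Because the handler is plain Python, you can unit-test it locally before pushing.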

Push to GitHub

Banana has built-in CI/CD: on every push, it builds your app into a Docker image and deploys it to our serverless GPU infrastructure.

Scale. A lot.

Banana autoscales your app from zero, with minimal cold boot times.

Sleep soundly knowing any traffic patterns will be handled quickly and cost-effectively.

Browse 70+ Model Templates

If you want it, chances are we've got it. 🍌

Serverless Pricing

Only pay for the resources you use. That's the power of Banana.

|                           | Usage Pricing                     | Team Pricing                                                              |
|---------------------------|-----------------------------------|---------------------------------------------------------------------------|
|                           | Get started in seconds            | Dedicated support, increased scale & discounts based on volume + commitment length |
| A100 40 GB                | $0.00207968/second                | Flexible                                                                  |
| Egress Cost               | none                              | none                                                                      |
| Number of Seats           | 5                                 | Unlimited                                                                 |
| ML Model Size             | 16 GB                             | 16+ GB                                                                    |
| Instant Deploys (Templates) | 100                             | Unlimited                                                                 |
| Custom Images             | 50                                | Unlimited                                                                 |
| Secrets (build args)      | Unlimited                         | Unlimited                                                                 |
| Log Retention             | 7 days                            | 7+ days                                                                   |
| RAM (A100 40GB)           | 256 GB                            | 256 GB                                                                    |
| GPU RAM (A100 40GB)       | 40 GB                             | 40 GB                                                                     |
| vCPUs (A100 40GB)         | 16                                | 16                                                                        |
| GPU Concurrency           | 5                                 | 25+                                                                       |
| CPU Concurrency           | 5                                 | 25+                                                                       |
| Spike Tolerance           | up to 5 replicas, more on request | 25+ replicas                                                              |
| p99 Cold Boot*            | ~50 sec                           | ~10 sec                                                                   |
| p50 Cold Boot             | 14.18 sec                         | 14.18 sec                                                                 |
| p99 System Latency*       | ~13 sec                           | ~13 sec                                                                   |
| p50 System Latency        | 0.63 sec                          | 0.63 sec                                                                  |
| Network Payload           | up to 50 MB                       | up to 50 MB                                                               |
| Support                   | Community Discord + Support Inbox | Private Slack channel w/ ~12hr SLA                                        |
|                           | Deploy Now                        | Contact Us                                                                |

\* p99 is the 99th percentile: 99% of requests will see a cold boot (or latency) faster than the given number.
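Per-second billing makes cost estimates simple. A quick back-of-the-envelope calculation using the listed A100 usage rate (the helper name below is illustrative, not part of any SDK):

```python
A100_RATE_PER_SEC = 0.00207968  # usage price from the pricing table, $/second

def gpu_cost(seconds: float) -> float:
    """Dollar cost of running one A100 40 GB replica for `seconds` seconds."""
    return seconds * A100_RATE_PER_SEC

# One hour of continuous A100 time:
print(f"${gpu_cost(3600):.2f}")  # ≈ $7.49
```

Because scale-to-zero means idle replicas cost nothing, total spend is just the sum of seconds actually served.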

You'll be in good company

Banana users are builders and shippers, just like you.

Banana for scale.

Enjoy 1 hour of free hosting on us 🍌