In this tutorial you'll learn the easiest method to deploy EleutherAI's GPT-J model from the HuggingFace Hub to production on serverless GPUs.
We will take you step by step, from setting up your dev environment all the way to running inference against your production GPT-J model.
This tutorial is broken up into four videos and will take around 45 minutes in total. Let's begin!
Refer to our GPT-J Serverless Repository (referenced in video).
Did you know Banana offers a 100% free API for GPT-J with unlimited usage? It's true! If you are a hacker or hobbyist who wants to try GPT-J, head over to the free APIs on our website and follow the usage instructions.
Part 1 of this tutorial series is focused on setting up your dev environment, cloning the Banana serverless template repo from GitHub, and verifying the hello world BERT model is working before we swap it with the GPT-J model.
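Once the template's dev server is running, you can verify the hello-world BERT model with a quick request from Python. This is a hedged sketch: it assumes the server listens on http://localhost:8000 and accepts a JSON body with a "prompt" field — check your repo's server code for the actual route and schema.

```python
# Smoke test for the template's local dev server (stdlib only).
# The URL and payload shape are assumptions; adjust to your repo.
import json
import urllib.request

def build_payload(prompt: str) -> dict:
    # BERT's fill-mask hello world expects a [MASK] token in the prompt.
    return {"prompt": prompt}

def smoke_test(url: str = "http://localhost:8000") -> dict:
    body = json.dumps(build_payload("Hello, I am a [MASK] model.")).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())
```

If the server responds with a filled-in mask prediction, your environment is set up correctly and you're ready to swap in GPT-J.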
Part 2 of this tutorial series is focused on loading the GPT-J model from HuggingFace into your template repo and making the code changes needed to confirm the model works properly when run locally.
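The core of that swap looks roughly like the sketch below. "EleutherAI/gpt-j-6B" is the model's actual HuggingFace Hub id; the init()/inference() function names mirror the handler convention used in serverless templates like Banana's, but treat them as illustrative. Note that GPT-J is large (roughly 24 GB of fp32 weights), so the first download is slow and loading in fp16 on GPU roughly halves memory use.

```python
# Hedged sketch of replacing the hello-world BERT handler with GPT-J.
# Function names and response shapes are assumptions, not the exact
# template code.

def init():
    """Load GPT-J once at startup so every request reuses the warm model."""
    global model, tokenizer
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
    model = AutoModelForCausalLM.from_pretrained(
        "EleutherAI/gpt-j-6B",
        # fp16 on GPU to cut memory; fall back to fp32 on CPU.
        torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    ).to(device)

def inference(model_inputs: dict) -> dict:
    """Generate a completion for the prompt in the request body."""
    prompt = model_inputs.get("prompt")
    if prompt is None:
        return {"message": "No prompt provided"}
    tokens = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**tokens, max_new_tokens=64, do_sample=True)
    return {"output": tokenizer.decode(output[0], skip_special_tokens=True)}
```

A quick local check is to call init() and then inference({"prompt": "Hello, my name is"}) and confirm you get a coherent completion back before moving on to dockerizing.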
Part 3 of this tutorial series is focused on dockerizing your GPT-J model repo, testing the docker build, and pushing your GPT-J repo to production.
Part 4 of this tutorial series is the wrap-up, where we run your GPT-J model in production on Banana's serverless GPUs!

Wrap Up:
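Once your model is live, calling it from your own code is only a few lines with Banana's Python client. This is a hedged sketch: the banana.run(api_key, model_key, model_inputs) signature reflects the banana_dev client at the time of writing (install with pip install banana-dev), and the keys are placeholders — check Banana's docs for the current API.

```python
# Hedged sketch of calling a deployed GPT-J model via the banana_dev
# client. Call signature assumed from the client at time of writing;
# api_key and model_key come from your Banana dashboard.
def call_gptj(api_key: str, model_key: str, prompt: str) -> dict:
    # Imported inside the function so this file imports cleanly even
    # before banana-dev is installed.
    import banana_dev as banana
    return banana.run(api_key, model_key, {"prompt": prompt})
```
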
Hopefully you enjoyed this tutorial. If you have any questions, send us a message on our Discord. Let us know on Twitter what machine learning model you'd like to see a deployment tutorial on next and we'll make it happen!