How to Deploy & Run StableLM

Let's walk through how to deploy and run the StableLM language model on serverless GPUs. You need minimal AI knowledge or coding experience to follow along and deploy your own StableLM model.

This tutorial is easy to follow because we are using a Community Template. Templates are user-submitted model repositories that are optimized and set up to run on Banana's serverless GPU infrastructure. All you have to do is click deploy, and in a few minutes your model is ready to use on Banana. Pretty sweet! Here is the direct link to the StableLM model template on Banana.

What is StableLM?

StableLM is the first open-source language model developed by Stability AI. It is available for commercial and research use, and it marks their initial plunge into the language model world after they developed and released the popular image model Stable Diffusion back in 2022.

StableLM Deployment Tutorial

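First, deploy the StableLM Community Template to your Banana account from the template page linked above. Once the deployment finishes building, grab your API key and model key from your Banana dashboard; the script below needs both.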
StableLM Code Sample

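You'll also need the Banana Python SDK installed. At the time of writing it is published on PyPI as banana-dev, so pip install banana-dev should give you the banana_dev module the script imports.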
Copy and paste the code below into a Python file, making sure your indentation matches the snippet exactly:

import banana_dev as banana

# Your Banana credentials, copied from the dashboard
api_key = "INSERT API KEY"
model_key = "INSERT MODEL KEY"

def call_model(model_inputs):
    # Forward the inputs to your deployed StableLM model on Banana
    out = banana.run(api_key, model_key, model_inputs)
    return out

print("This is your StableLM chatbot! What do you want to talk about?")

while True:
    print("type your message:")
    user_prompt = input('')

    # Type "stop" to end the chat session
    if user_prompt == "stop":
        exit()

    model_inputs = {
        # Wrap the message in StableLM's special user/assistant turn tokens
        "prompt": "<|USER|>" + user_prompt + "<|ASSISTANT|>",
        "max_new_tokens": 64,   # cap the length of each reply
        "temperature": 0.7      # sampling randomness; lower = more deterministic
    }

    model_response = call_model(model_inputs)

    # Pull the generated text out of Banana's response and strip the echoed prompt
    result = model_response["modelOutputs"][0]["output"]
    print(result.replace(user_prompt, ""))
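
If you want more control over the bot's persona, StableLM's tuned chat models also recognize a <|SYSTEM|> token alongside <|USER|> and <|ASSISTANT|>. Here's a minimal sketch of how you could prepend a system prompt, assuming your deployed template forwards the prompt string to the model unchanged:

    # Hypothetical tweak: prepend a system prompt to steer the bot's behavior
    # (assumes the Banana template passes the raw prompt string through to StableLM)
    system_prompt = "<|SYSTEM|>You are a helpful, concise assistant."

    model_inputs = {
        "prompt": system_prompt + "<|USER|>" + user_prompt + "<|ASSISTANT|>",
        "max_new_tokens": 64,
        "temperature": 0.7
    }

Everything else in the loop stays the same.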


Wrap Up

Reach out to us if you have any questions or want to talk about StableLM. We're around on our Discord, or you can tweet at us on Twitter. What other machine learning models would you like to see a deployment tutorial for? Let us know!