How to Deploy & Run InstructPix2Pix Model (edit images w/ TEXT!)

February 23, 2023

Deprecated: This blog article is deprecated. We strive to rapidly improve our product and some of the information contained in this post may no longer be accurate or applicable. For the most current instructions on deploying a model like InstructPix2Pix to Banana, please check our updated documentation.

Have you ever wanted to edit or alter an image by talking to it? Maybe not...that is a pretty odd circumstance to be talking to images. BUT you can now do this, and it's a lot more useful than you may think.

By using the machine learning model InstructPix2Pix, you can provide a written instruction to change or alter portions of an image. That is freaking amazing!

In this tutorial we demo how to deploy InstructPix2Pix to production on serverless GPUs. The tutorial is 5 minutes long and welcoming to all levels of AI and coding experience. We use a Community Template that is optimized and ready for Banana's serverless deployment framework.

What is InstructPix2Pix?

InstructPix2Pix is a model that enables you to edit images with human instructions. As mentioned earlier, you give the model an image and a written instruction for what to edit within the image and InstructPix2Pix will follow your instructions to make the edit.

You'll be very impressed with the level of sophistication of the editing when you see it. We highly recommend you take a look at the original paper to see editing examples and learn more about InstructPix2Pix.

VIDEO - InstructPix2Pix Deployment Tutorial

Video Notes & Resources:

We mentioned a few resources and links in the tutorial; here they are.

In the tutorial we used a virtual environment on our machine to run our demo model. If you want to create your own virtual environment, use these commands (Mac):
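The following is a minimal sketch of the standard macOS commands: create a virtual environment, activate it, and install the packages used in the code below. The pip install line is an assumption based on the imports in the snippet (the banana-dev SDK and Pillow).

python3 -m venv venv
source venv/bin/activate
pip install banana-dev Pillow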

Add this code to your Python file.

PART 1 (imports):

import banana_dev as banana
import base64
from io import BytesIO
from PIL import Image

PART 2 (decode the model output and save it as an image; `out` is the response returned by the model call, sketched below):

image_byte_string = out["modelOutputs"][0]["image_base64"]
image_encoded = image_byte_string.encode('utf-8')
image_bytes = BytesIO(base64.b64decode(image_encoded))
image = Image.open(image_bytes)
image.save("output.jpg")
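Between PART 1 and PART 2 you need the call that actually runs your deployed model and returns `out`. Below is a minimal sketch using the banana_dev SDK's run() call; it relies on the imports from PART 1, the placeholder credentials are hypothetical, and the "prompt"/"image" input keys are assumptions, so check the Community Template's README for the exact inputs it expects.

# Hypothetical credentials: replace with your Banana API key and the
# model key from your InstructPix2Pix deployment.
api_key = "YOUR_API_KEY"
model_key = "YOUR_MODEL_KEY"

# Base64-encode the image you want to edit.
with open("input.jpg", "rb") as f:
    image_base64 = base64.b64encode(f.read()).decode("utf-8")

# Input keys are assumptions; the template's README lists the real ones.
model_inputs = {
    "prompt": "turn the sky into a sunset",
    "image": image_base64,
}

# Run the deployed model; `out` is the dict used in PART 2.
out = banana.run(api_key, model_key, model_inputs)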

Wrap Up

Reach out to us if you have any questions or want to talk about InstructPix2Pix. We're around on our Discord, or you can tweet at us on Twitter. What other machine learning models would you like to see a deployment tutorial for? Let us know!