Deploying and using Stable Diffusion XL 1.0
TL;DR
Stable Diffusion XL 1.0 is a highly capable text-to-image model by Stability AI that was released on July 26, 2023 under their CreativeML Open RAIL++-M license.
Deploy Stable Diffusion XL 1.0
You can deploy Stable Diffusion XL 1.0 in 2 clicks from Baseten’s model library. It’s also available packaged as a Truss on GitHub.
Hardware requirements
Stable Diffusion XL 1.0 requires an A100 GPU for inference. In our testing, generating an image takes 8-12 seconds.
Manual deployment
Sign up or sign in to your Baseten account and create an API key. Then run:
pip install --upgrade truss
git clone https://github.com/basetenlabs/truss-examples
cd truss-examples/stable-diffusion/stable-diffusion-xl-1.0
truss push
Paste your API key when prompted.
Use Stable Diffusion XL 1.0
This model is capable of generating stunningly detailed and accurate images from simple prompts.
To invoke the model, run:
truss predict -d '{"prompt": "A tree in a field under the night sky"}' | python show.py
The output is a dictionary with a data key mapping to a base64-encoded image. The command above pipes that output through the following Python script (show.py), which decodes, saves, and opens the generated image:
import base64
import json
import os
import sys

# Read the JSON response piped in from `truss predict`
resp = sys.stdin.read()

# Extract the base64-encoded image and decode it to raw bytes
image = json.loads(resp)["data"]
img = base64.b64decode(image)

# Name the file after the last few characters of the encoded string
file_name = f'{image[-10:].replace("/", "")}.jpeg'

# Write the image to disk and open it (the `open` command is macOS-specific)
with open(file_name, "wb") as img_file:
    img_file.write(img)
os.system(f"open {file_name}")
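If you'd rather call the deployed model from application code than from the Truss CLI, a minimal sketch using Python and the requests library might look like the following. The model URL shown is a placeholder, not a real endpoint; substitute the invocation URL and API key from your Baseten dashboard. The parsing also assumes the model's output dictionary is returned as-is.

import base64

import requests

# Placeholders (assumptions): copy the real invocation URL and API key
# from your Baseten model dashboard.
MODEL_URL = "https://app.baseten.co/models/YOUR_MODEL_ID/predict"
API_KEY = "YOUR_API_KEY"

response = requests.post(
    MODEL_URL,
    headers={"Authorization": f"Api-Key {API_KEY}"},
    json={"prompt": "A tree in a field under the night sky"},
)
response.raise_for_status()

# Assumes the model output dictionary is returned directly; adjust the
# parsing if your endpoint wraps it in another key.
image_b64 = response.json()["data"]
with open("output.jpeg", "wb") as f:
    f.write(base64.b64decode(image_b64))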
The Stable Diffusion Refiner model
The Stable Diffusion Refiner model adds accuracy to difficult-to-generate details like facial features and hands. You can choose whether or not to use the refiner model in an invocation with the use_refiner parameter.
truss predict -d '{"prompt": "A tree in a field under the night sky", "use_refiner": true}' | python show.py
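If you're invoking the model from Python as in the sketch above, the same parameter simply goes into the JSON payload:

payload = {
    "prompt": "A tree in a field under the night sky",
    "use_refiner": True,  # run the refiner pass on the generated image
}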
Example outputs
Reach out to us at support@baseten.co with any questions!