Pixtral 12B

A multimodal (text + image) LLM from Mistral

Deploy Pixtral 12B behind an API endpoint in seconds.

Example usage

Pixtral 12B supports optional stream, temperature, and max_tokens settings, as shown in the request below.

Input
import requests
import os

# Replace the empty string with your model id below
model_id = ""
baseten_api_key = os.environ["BASETEN_API_KEY"]

messages = [
    {
        "role": "system",
        "content": "You are a pirate chatbot who always responds in pirate speak!"
    },
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What is this?"
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://easydrawingguides.com/wp-content/uploads/2018/03/how-to-draw-a-pirate-ship-featured-image-1200.png"
                }
            }
        ]
    }
]

data = {
    "messages": messages,
    "stream": True,
    "temperature": 0.5,
    "max_tokens": 512
}

# Call the model endpoint with streaming enabled
res = requests.post(
    f"https://model-{model_id}.api.baseten.co/production/predict",
    headers={"Authorization": f"Api-Key {baseten_api_key}"},
    json=data,
    stream=True
)

# Print the generated tokens as they are streamed back
for content in res.iter_content():
    print(content.decode("utf-8"), end="", flush=True)
JSON output
[
    "Arr",
    "matey,",
    "ye",
    "be",
    "lookin'",
    "at",
    "..."
]
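
Streaming is optional. Below is a minimal sketch of the same call with stream set to False; it assumes the endpoint then returns the completed response in a single JSON body rather than a token stream, so the exact shape of res.json() may differ from what is shown here.

import os
import requests

# Illustrative non-streaming request; model_id and BASETEN_API_KEY are placeholders you supply.
model_id = ""
baseten_api_key = os.environ["BASETEN_API_KEY"]

data = {
    "messages": [
        {"role": "user", "content": "Give me a one-line pirate greeting."}
    ],
    "stream": False,        # ask for the whole completion at once
    "temperature": 0.5,
    "max_tokens": 128
}

res = requests.post(
    f"https://model-{model_id}.api.baseten.co/production/predict",
    headers={"Authorization": f"Api-Key {baseten_api_key}"},
    json=data
)

# With streaming disabled, the response arrives as one JSON body
print(res.json())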

Deploy any model in just a few commands

Avoid getting tangled in complex deployment processes. Deploy best-in-class open-source models and take advantage of optimized serving for your own models.

$ truss init --example stable-diffusion-2-1-base ./my-sd-truss
$ cd ./my-sd-truss
$ export BASETEN_API_KEY=MdNmOCXc.YBtEZD0WFOYKso2A6NEQkRqTe
$ truss push
INFO Serializing Stable Diffusion 2.1 truss.
INFO Making contact with Baseten 👋 👽
INFO 🚀 Uploading model to Baseten 🚀
Upload progress: 0% | | 0.00G/2.39G
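
On the packaging side, a Truss wraps your model code behind a small Python interface that Baseten serves for you. The sketch below shows the general shape of the model/model.py that truss init scaffolds: the Model class with __init__, load, and predict methods follows the Truss convention, but the loading and inference details here are illustrative placeholders, not the generated Stable Diffusion example.

# model/model.py -- minimal sketch of the Truss model interface (illustrative only)

class Model:
    def __init__(self, **kwargs):
        # Truss passes configuration in via kwargs; store what you need for later.
        self._model = None

    def load(self):
        # Called once when the deployment starts: load weights here.
        # e.g. self._model = StableDiffusionPipeline.from_pretrained(...)
        self._model = ...

    def predict(self, model_input):
        # Called per request with the parsed JSON body; return a JSON-serializable result.
        prompt = model_input["prompt"]
        return {"output": f"would run inference on: {prompt}"}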