New in April 2023
Open Source is here to stay for large language models
The best general-purpose, publicly available LLM at the time of writing, GPT-4, is closed-source. But ever since LLaMA was released in March, open-source large language models have been quickly gaining ground.
Camel is a new LLM by Writer built specifically for following instructions: it processes complex instructions and generates specific, relevant, contextually accurate responses.
Deploy a 5 billion parameter version of Camel on Baseten with this GitHub repo.
Another new LLM, StableLM, was recently released by Stability AI, the company behind Stable Diffusion. StableLM, in its tuned form, behaves as a chatbot (like GPT-3.5) rather than as an instruction-based model like Camel or FLAN-T5. StableLM is also capable of generating fiction, poetry, and code.
Try StableLM on your Baseten account with this blog post by our CEO.
These new models aren’t the only news in LLMs. To create a new LLM, you need high-quality training data, and lots of it. For example, StableLM was trained on a dataset that builds on The Pile and has over 1.5 trillion tokens. RedPajama is a new effort to create an open-source LLM, and they’ve started by releasing a 1.2 trillion token dataset, along with data processing and analysis tools, that anyone can use to build their own LLM.
What’s next for open-source LLMs? Be on the lookout for ChatOSS.
AI community thrives in SF, NYC, hackathons
After meeting so many inspiring developers at our SF meetup in March, we hosted another set of AI meetups in April at our SF and NYC offices.
The energy in the AI space is amazing. Anyone can run an AI meetup, whether you’re a VC inviting a thousand people to a swanky venue or an independent developer crowding a dozen people into your apartment.
If you want to host your own AI meetup, here’s how in ten steps.
We’ll be hosting and attending more meetups of various sizes and levels of formality over the next couple of months. We’re also sponsoring virtual and in-person AI hackathons. Follow us on Twitter for all the invites!
Usage-based pricing with $30 in free credits
Baseten has transitioned to purely usage-based pricing for all workspaces not on our enterprise plan. There is no monthly or annual platform fee for workspaces on the default startup plan.
With usage-based pricing, you only pay for the time your model spends deploying, active, or scaling down. Usage-based pricing also applies to fine-tuning runs.
Pricing depends on the instance type your model is running on:
Instances with an NVIDIA T4 GPU start at $0.01753 per minute
Instances with an NVIDIA A10 GPU start at $0.03353 per minute
CPU-only instances start at $0.00096 per minute
For details, see our usage-based pricing calculator.
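To get a rough sense of what a deployment costs, you can multiply the per-minute rate by the minutes your model spends deploying, active, or scaling down. Here's a minimal sketch using the rates above; the 8-hours-a-day workload is an illustrative assumption, not a recommendation, and the pricing calculator remains the authoritative source.

```python
# Rough cost estimator using the per-minute rates from this newsletter.
RATES_PER_MINUTE = {
    "t4": 0.01753,   # NVIDIA T4 GPU instance
    "a10": 0.03353,  # NVIDIA A10 GPU instance
    "cpu": 0.00096,  # CPU-only instance
}

def estimated_cost(instance: str, billable_minutes: float) -> float:
    """Estimate spend for the minutes a model is deploying, active, or scaling down."""
    return round(RATES_PER_MINUTE[instance] * billable_minutes, 2)

# Example: a T4-backed model active 8 hours a day for 30 days (14,400 minutes)
print(estimated_cost("t4", 8 * 60 * 30))  # 252.43
```

The same arithmetic works for any instance type; swap in `"a10"` or `"cpu"` and your own minute count to compare options.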
Inside look: get to know the Baseten team
A new series on our YouTube channel, Inside Look, introduces members of the Baseten team in short, casual interviews. Recent interviews:
Next week will feature Julien, the mastermind behind Baseten’s AI meetups
See you next month!
— The team at Baseten
Subscribe to our newsletter
Stay up to date on model performance, GPUs, and more.