Manage models with the Baseten REST API
See our latest feature releases, product improvements and bug fixes
Mar 20, 2024
We’re excited to share that we’ve created a REST API for managing Baseten models! Unlock powerful use cases outside of the (albeit amazing) Baseten UI - interact with your models programmatically,...
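To illustrate the kind of programmatic workflow this unlocks, here is a minimal Python sketch that lists the models in a workspace. The base URL, the /v1/models path, the Api-Key authorization header format, and the response shape are assumptions for illustration; check the API reference for the exact endpoints.

```python
# Minimal sketch: list models via the Baseten management REST API.
# The endpoint path, auth header format, and response fields below are
# assumptions for illustration, not a definitive API reference.
import os

import requests

API_KEY = os.environ["BASETEN_API_KEY"]  # hypothetical env var name

resp = requests.get(
    "https://api.baseten.co/v1/models",  # assumed endpoint for listing models
    headers={"Authorization": f"Api-Key {API_KEY}"},
)
resp.raise_for_status()

# Assumed response shape: {"models": [{"id": ..., "name": ...}, ...]}
for model in resp.json().get("models", []):
    print(model.get("id"), model.get("name"))
```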
Mar 7, 2024
Every deployment of an ML model requires certain hardware resources — usually a GPU plus CPU cores and RAM — to run inference. We’ve made it easier to navigate the wide variety of hardware options...
Feb 23, 2024
You can now view a daily breakdown of your model usage and billing information to get more insight into usage and costs. Here are the key changes: A new graph displays daily costs, requests, and...
Feb 6, 2024
Baseten is now offering model inference on H100 GPUs starting at $9.984/hour. Switching to H100s offers an 18 to 45 percent improvement in price-to-performance vs. equivalent A100 workloads using...
Jan 19, 2024
We’ve totally refreshed our model library to make it easier for you to find, evaluate, deploy, and build on state-of-the-art open source ML models. You can try the new model library for yourself...
Jan 11, 2024
You can now deploy models to instances powered by the L4 GPU on Baseten. NVIDIA’s L4 GPU is an Ada Lovelace series GPU with: 121 teraFLOPS of float16 compute, 24 GB of VRAM at a 300 GB/s memory...
Jan 8, 2024
When deploying with Truss via truss push, you can now assign meaningful names to your deployments using the --deployment-name argument, making them easier to identify and manage. Here's an example:...
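For instance, running the command from inside a Truss directory might look like the following; the deployment name here is just an illustrative placeholder:

```sh
# Deploy the Truss in the current directory and give the new deployment
# a human-readable name (flag taken from the release note above;
# the name itself is a placeholder).
truss push --deployment-name "gentle-snowfall-demo"
```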
Dec 15, 2023
Autoscaling lets your deployed models handle variable traffic while making efficient use of model resources. We’ve updated some language and default settings to make using autoscaling more intuitive....
Nov 10, 2023
You can now retry failed model builds and deploys directly from the model dashboard in your Baseten workspace. Model builds and deploys can fail due to temporary issues, like a network error while...
Oct 31, 2023
We've made some big changes to the model management experience to clarify the model lifecycle and better follow concepts you're already familiar with as a developer. These changes aren't breaking...