Baseten Webinars
How to deploy low-latency compound AI systems at scale with Baseten Chains
Learn how to deploy ultra-low-latency compound AI systems with seamless model orchestration, custom autoscaling, and optimized hardware.
How to run DeepSeek-R1 in production
Learn what sets DeepSeek-R1 apart from other LLMs, why running it in production is challenging, and how to get a dedicated and secure DeepSeek-R1 deployment.
How to build function calling and JSON mode for LLMs
In this webinar, we'll dive deep into how to implement function calling and JSON mode for LLMs: defining schemas and tools, building a state machine, and more.
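For a taste of the material before watching, here is a minimal, hypothetical sketch of the JSON-mode half of the problem: defining a tool's argument schema and validating a model's raw output against it with the jsonschema library. The get_weather schema and the call_llm stub are illustrative assumptions, not Baseten's implementation, and this post-hoc validation stands in for the token-level state machine the webinar covers.

```python
# Illustrative sketch of JSON mode: validate LLM output against a tool schema.
# The schema and call_llm() are hypothetical placeholders.
import json
from jsonschema import validate, ValidationError

# A "tool" definition: the model is asked to emit arguments matching this schema.
GET_WEATHER_SCHEMA = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
    },
    "required": ["city"],
    "additionalProperties": False,
}

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns a raw string completion."""
    return '{"city": "Berlin", "unit": "celsius"}'

def parse_tool_call(raw: str, schema: dict) -> dict:
    """Parse the model output as JSON and validate it against the tool schema."""
    args = json.loads(raw)                  # raises JSONDecodeError on malformed JSON
    validate(instance=args, schema=schema)  # raises ValidationError on schema mismatch
    return args

if __name__ == "__main__":
    raw = call_llm("What's the weather in Berlin? Reply with get_weather arguments as JSON.")
    try:
        print(parse_tool_call(raw, GET_WEATHER_SCHEMA))
    except (json.JSONDecodeError, ValidationError) as err:
        print(f"Model output did not match the schema: {err}")
```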
Why you need async inference in production
Join our live webinar to learn how to leverage asynchronous inference on Baseten!
How to run multi-model inference in production with Baseten Chains
On-demand webinar: learn how to orchestrate inference across multiple models and machines using Baseten Chains.
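For a flavor of what multi-model orchestration looks like in code, here is a minimal sketch in the style of the public truss_chains SDK. The Chainlet names and the two-step pipeline are invented for illustration, the model calls are stubbed out, and the exact API may differ from what the webinar demonstrates, so treat this as an assumption-laden sketch rather than a reference implementation.

```python
# Sketch of multi-model orchestration with Baseten Chains (truss_chains).
# Chainlet names and logic are illustrative; see the Chains docs for the current API.
import truss_chains as chains


class Summarize(chains.ChainletBase):
    """Chainlet that would wrap a summarization model; stubbed here."""

    def run_remote(self, text: str) -> str:
        return text[:100]  # placeholder for a real model call


class Classify(chains.ChainletBase):
    """Chainlet that would wrap a classification model; stubbed here."""

    def run_remote(self, text: str) -> str:
        return "positive" if "good" in text.lower() else "neutral"


@chains.mark_entrypoint
class Pipeline(chains.ChainletBase):
    """Entrypoint Chainlet that fans a request out to the two model Chainlets."""

    def __init__(
        self,
        summarize=chains.depends(Summarize),
        classify=chains.depends(Classify),
    ):
        self._summarize = summarize
        self._classify = classify

    def run_remote(self, text: str) -> dict:
        summary = self._summarize.run_remote(text)
        label = self._classify.run_remote(text)
        return {"summary": summary, "label": label}
```

In this pattern, each Chainlet can be deployed as its own service with its own hardware and autoscaling settings, which is the kind of per-component control the Chains webinars walk through.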