I just ran batch inference on a 30B parameter LLM across 4 GPUs with a single Python command! The secret? Modern AI infrastructure where everyone handles their specialty:

📦 UV (by Astral) handles dependencies via uv scripts
🖥️ Hugging Face Jobs handles GPU orchestration
🧠 Qwen AI team handles the model (Qwen3-30B-A3B-Instruct-2507)
⚡ vLLM handles efficient batched inference

I'm very excited about uv scripts as a way of packaging fairly simple but useful ML tasks reproducibly. Combined with Jobs, this opens up some nice opportunities for building pipelines that require different types of compute.

Technical deep dive and code examples: https://lnkd.in/e5BEBU95
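
For a sense of what this looks like in practice, here is a minimal sketch of such a uv script: inline PEP 723 metadata declares the dependencies so `uv run` can resolve them on the fly, and vLLM's offline LLM API shards the model across the 4 GPUs. The prompts, sampling settings, and the Hub id `Qwen/Qwen3-30B-A3B-Instruct-2507` are my assumptions for illustration, not the exact script behind the post.

# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "vllm",
# ]
# ///
"""Sketch: batched inference with vLLM, runnable via `uv run <script>.py`."""
from vllm import LLM, SamplingParams

# Hypothetical prompts; in a real pipeline these would come from a dataset.
prompts = [
    "Summarize the benefits of batched inference.",
    "Explain tensor parallelism in one sentence.",
]

# tensor_parallel_size=4 shards the model weights across 4 GPUs,
# matching the 4-GPU setup described above.
llm = LLM(
    model="Qwen/Qwen3-30B-A3B-Instruct-2507",
    tensor_parallel_size=4,
)
sampling_params = SamplingParams(temperature=0.7, max_tokens=256)

# vLLM batches all prompts internally for efficient generation.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)

With uv installed, running the file locally resolves the inline dependencies in an isolated environment; the same self-contained script can then be handed to Hugging Face Jobs to execute on rented GPUs, which is what makes the single-command workflow possible.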