
Exla
Winter 2025
An SDK to run transformer models anywhere
Edge Computing · Semiconductors · Computer Vision · AI
About Company
Exla aggressively quantizes AI models to minimize memory usage and maximize inference speed. Whether you're deploying LLMs, VLMs, VLAs, or custom models, Exla reduces memory footprint by up to 80% and accelerates inference by 3–20x, all with just a few lines of code.

Schedule a call: https://cal.com/exla-ai/schedule

