Europe, we’re here. Groq’s newest data center is live in Helsinki, Finland. This expansion strengthens our global footprint to meet the growing infrastructure demands of European AI builders. It brings local capacity, lower latency, and cost-efficient inference directly to the region. Built for builders. Ready for production. Link in comments.
Groq
Semiconductor Manufacturing
Mountain View, California · 153,824 followers
Groq builds the world’s fastest AI inference technology.
About us
Groq is the AI inference platform delivering low cost, high performance without compromise. Its custom LPU and cloud infrastructure run today’s most powerful open AI models instantly and reliably. Over 1 million developers use Groq to build fast and scale with confidence.
- Website
- https://groq.com/
- Industry
- Semiconductor Manufacturing
- Company size
- 201-500 employees
- Headquarters
- Mountain View, California
- Type
- Privately Held
- Founded
- 2016
- Specialties
- ai, ml, artificial intelligence, machine learning, engineering, hiring, compute, innovation, semiconductor, llm, large language model, gen ai, systems solution, generative ai, inference, LPU, and Language Processing Unit
Locations
- Primary: 400 Castro St, Mountain View, California 94041, US
- Portland, OR 97201, US
Employees at Groq
- Peter Bordes – CEO Collective Audience CAUD: OTC, Founder, Board Member, Investor, Managing Partner Trajectory Ventures & Trajectory Capital
- Michael Mitgang – Growth Company Investor / Advisor / Consultant / Banker / Coach
- Ofer SHOSHAN – Entrepreneur, Tech investor
- John Barrus – Product Manager, Entrepreneur - ML, Robotics, Cloud, IoT
Updates
- Igor Arsovski, Head of Hardware at Groq, joined Bill Dally (NVIDIA), Ramine Roane (AMD), Chuan Li (Lambda), and Jared Quincy Davis (Foundry) on stage at the Berkeley Agentic AI Summit to talk about what it takes to build infrastructure for the agent era. Igor shared how Groq is approaching inference differently, with a stack designed for real-time performance, scalability, and efficiency. Thanks to the organizers and fellow panelists for a great conversation.
- Ever wondered how Groq runs massive models in production so fast? Andrew Ling, Head of ML Compilers at Groq, breaks it down. Link in comments.
- Yesterday was National Intern Day — and we couldn’t let it pass without a big shoutout to our amazing interns! Over the past few years, our intern program has grown into a key part of how we nurture emerging talent — and this summer marked a new milestone: we welcomed 31 interns, our largest cohort yet! These interns have brought fresh ideas, energy, and real impact across teams — from engineering to operations and beyond. We're proud of everything they've accomplished and grateful for the curiosity and collaboration they bring to the table every day. As we look ahead, we're excited to continue growing and evolving our program — opening doors for more future innovators, builders, and leaders to join us. To all of our interns: thank you. You’re an important part of our story — and we can’t wait to see where you go next.
- Groq reposted this: Looking forward to a great panel at the Agentic AI Summit at UC Berkeley tomorrow. Excited to share how Groq HW supercharges Agentic AI. https://lnkd.in/etUBDvWd
- Groq reposted this: Christopher Stephens is VP and Field CTO at Groq, an AI chip maker that develops technology enabling customers to run AI models flexibly. Groq is building global infrastructure to deliver AI applications with minimal latency. We hope to see you next year, Jonathan Ross and Sunny Madra! #KPMGTechandInnovation25 #Innovation #Groq #KPMGEnterpriseInnovation #EmergingTechnology
- Groq just open-sourced the eval framework the AI community has been missing for years. If you’ve ever struggled to compare benchmarks or reproduce results, OpenBench is built for you.
Introducing OpenBench 0.1: Open, Reproducible Evals
Evaluating large language models today is messy—every eval framework has its own way of prompting, parsing responses, and measuring accuracy. This makes comparisons impossible. How do you know Anthropic and OpenAI evaluate MMLU the same way? Even perfect documentation won't save you from small implementation quirks. Benchmark results end up practically irreproducible.
We hit this wall at Groq repeatedly, so we built our own solution: OpenBench. It worked wonders internally, and now we're releasing it publicly, with an MIT license. Here's how OpenBench helps:
➡️ One official implementation per benchmark
➡️ Easy integration with any model/API using Inspect
➡️ Reliable, reproducible results every time
If you'd like to try it out, contribute, or just follow along, check it out on GitHub and give the project a star! We would love feedback, feature requests, and more. Link in comments!
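The post notes that OpenBench integrates with any model or API "using Inspect," the open-source Inspect AI eval framework. As a rough sketch of that pattern (not OpenBench's actual code; the toy benchmark and the Groq model identifier below are assumptions for illustration), an Inspect task pins the dataset, prompting, and scoring into one canonical definition, which is what makes results reproducible across providers:

```python
# Sketch of an Inspect AI task: one canonical dataset, solver, and scorer.
# Illustrative only; this is not OpenBench's own implementation.
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def tiny_arithmetic():
    # A toy two-question benchmark standing in for something like MMLU.
    dataset = [
        Sample(input="What is 17 + 25? Answer with the number only.", target="42"),
        Sample(input="What is 9 * 8? Answer with the number only.", target="72"),
    ]
    return Task(dataset=dataset, solver=generate(), scorer=match())

if __name__ == "__main__":
    # Any Inspect-supported provider/model string can go here; this Groq
    # model identifier is an assumed example.
    eval(tiny_arithmetic(), model="groq/llama-3.3-70b-versatile")
```

Because the task definition is the single source of truth, rerunning the same benchmark against a different provider is just a change to the model string, which is the reproducibility property the post describes.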
- This solo founder built an open-source competitor to Perplexity with no team, no funding, no permission. What started as a weekend project now powers over 1M searches, with 60K+ monthly users. This is the story of Scira and how Zaid Mukaddam built what others wouldn’t.
Like many great dev stories, it started with frustration. Zaid wanted a better way to search across the web using AI: something fast, clean, and customizable. Perplexity was powerful, but felt closed and out of reach. So he started building. He called it “MiniPerplx” at first:
✅ A lightweight, open-source alternative
✅ No login
✅ No paywall
Then he added Research Groups, which allowed search across YouTube, Reddit, and X, all at once, grouped by source. It actually worked, and that’s when it took off. In December, Scira’s daily traffic jumped from 500 to 16,000 overnight. His GitHub repo surged from 1K to 6K stars in just a few weeks. People weren’t just curious; they were using it. He renamed it Scira, and just like that, it had momentum.
Scira grew fast, but it wasn’t built to scale affordably. Running large models like Anthropic’s was burning over $1,000/month, and every search chipped away at Zaid’s savings. He nearly pulled the plug. Then Groq stepped in with compute that could finally keep up. With Groq’s efficient inference and Qwen3‑32B’s pinpoint citations, Scira transformed: it wasn’t just fast, it was sustainable. Add in the Vercel AI SDK, Next.js, and a crisp Shadcn UI, and you had a stack users loved. They even preferred it to GPT‑4o.
Today, Scira is a full-stack AI search engine used by 60,000+ people each month, with more than 1 million searches served. The GitHub repo has earned 8,000+ stars and continues to climb. Built in public. Maintained solo. Proven by use, not hype.
Zaid didn’t set out to build a company. He just saw a problem, started building, and refused to quit. That’s the part devs forget: you don’t need a launchpad. You don’t need a roadmap. You just need to Build Fast.
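For anyone curious about the Groq + Qwen3‑32B piece of that stack: Scira itself is built on the Vercel AI SDK and Next.js, but a minimal Python sketch of the same idea, calling a Qwen3‑32B-class model through Groq's chat completions API, might look roughly like this (the model identifier, system prompt, and query are assumptions for illustration):

```python
# Minimal sketch of a search-style completion on Groq. Illustrative only;
# Scira's real stack is the Vercel AI SDK + Next.js in TypeScript.
# Assumes GROQ_API_KEY is set in the environment.
from groq import Groq

client = Groq()  # picks up GROQ_API_KEY automatically

completion = client.chat.completions.create(
    model="qwen/qwen3-32b",  # assumed example model identifier
    messages=[
        {
            "role": "system",
            "content": "You are a search assistant. Cite sources inline as [1], [2].",
        },
        {
            "role": "user",
            "content": "Summarize recent developments in open-source AI search engines.",
        },
    ],
    temperature=0.3,
)

print(completion.choices[0].message.content)
```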