Groq

Semiconductor Manufacturing

Mountain View, California · 153,824 followers

Groq builds the world’s fastest AI inference technology.

About us

Groq is the AI inference platform delivering low cost, high performance without compromise. Its custom LPU and cloud infrastructure run today’s most powerful open AI models instantly and reliably. Over 1 million developers use Groq to build fast and scale with confidence.

Website
https://groq.com/
Industry
Semiconductor Manufacturing
Company size
201-500 employees
Headquarters
Mountain View, California
Type
Privately Held
Founded
2016
Specialties
ai, ml, artificial intelligence, machine learning, engineering, hiring, compute, innovation, semiconductor, llm, large language model, gen ai, systems solution, generative ai, inference, LPU, and Language Processing Unit


Updates

  • Groq

    Europe, we’re here. Groq’s newest data center is live in Helsinki, Finland. This expansion strengthens our global footprint to meet the growing infrastructure demands of European AI builders. It brings local capacity, lower latency, and cost-efficient inference directly to the region. Built for builders. Ready for production. Link in comments.

  • Groq

    Igor Arsovski, Head of Hardware at Groq, joined Bill Dally (NVIDIA), Ramine Roane (AMD), Chuan Li (Lambda), and Jared Quincy Davis (Foundry) on stage at the Berkeley Agentic AI Summit to talk about what it takes to build infrastructure for the agent era. Igor shared how Groq is approaching inference differently, with a stack designed for real-time performance, scalability, and efficiency. Thanks to the organizers and fellow panelists for a great conversation.

  • Groq

    Yesterday was National Intern Day, and we couldn’t let it pass without a big shoutout to our amazing interns! Over the past few years, our intern program has grown into a key part of how we nurture emerging talent, and this summer marked a new milestone: we welcomed 31 interns, our largest cohort yet!

    These interns have brought fresh ideas, energy, and real impact across teams, from engineering to operations and beyond. We're proud of everything they've accomplished and grateful for the curiosity and collaboration they bring to the table every day. As we look ahead, we're excited to keep growing and evolving our program, opening doors for more future innovators, builders, and leaders to join us.

    To all of our interns: thank you. You’re an important part of our story, and we can’t wait to see where you go next.

  • Groq reposted this

    Richard Entrup

    Global Technology & Innovation Leader | Managing Director, Emerging Solutions, Enterprise Innovation @ KPMG US | Startup & VC Advisor | CIO | CTO | CDO

    Christopher Stephens is VP and Field CTO at Groq, an AI chip maker that develops technology enabling customers to run AI models flexibly. Groq is building global infrastructure to deliver AI applications with minimal latency. We hope to see you next year, Jonathan Ross and Sunny Madra! #KPMGTechandInnovation25 #Innovation #Groq #KPMGEnterpriseInnovation #EmergingTechnology

  • Groq

    Groq just open-sourced the eval framework the AI community has been missing for years. If you’ve ever struggled to compare benchmarks or reproduce results, OpenBench is built for you.

    Aarush Sah

    Head of Evals @ Groq

    Introducing OpenBench 0.1: Open, Reproducible Evals

    Evaluating large language models today is messy: every eval framework has its own way of prompting, parsing responses, and measuring accuracy. This makes comparisons impossible. How do you know Anthropic and OpenAI evaluate MMLU the same way? Even perfect documentation won't save you from small implementation quirks. Benchmark results end up practically irreproducible.

    We hit this wall at Groq repeatedly, so we built our own solution: OpenBench. It worked wonders internally, and now we're releasing it publicly under an MIT license.

    Here's how OpenBench helps:
    ➡️ One official implementation per benchmark
    ➡️ Easy integration with any model/API using Inspect
    ➡️ Reliable, reproducible results every time

    If you'd like to try it out, contribute, or just follow along, check it out on GitHub and give the project a star! We would love feedback, feature requests, and more. Link in comments!
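    The "one official implementation per benchmark" idea above can be sketched in a few lines. This is a hypothetical illustration, not OpenBench's actual API: the `Benchmark` dataclass, `mmlu_prompt`, and `evaluate` names are invented here to show how freezing the prompt template and answer parser makes accuracy numbers comparable across models.

```python
# Hypothetical sketch (NOT OpenBench's real API): freeze prompting and
# parsing per benchmark so every model is measured the same way.
import re
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Benchmark:
    name: str
    format_prompt: Callable[[dict], str]  # one canonical prompt template
    parse_answer: Callable[[str], str]    # one canonical response parser

def mmlu_prompt(item: dict) -> str:
    choices = "\n".join(f"{k}. {v}" for k, v in zip("ABCD", item["choices"]))
    return f"{item['question']}\n{choices}\nAnswer with a single letter."

def mmlu_parse(response: str) -> str:
    # Extract the first standalone A-D letter from the model's reply.
    m = re.search(r"\b([ABCD])\b", response.upper())
    return m.group(1) if m else ""

MMLU = Benchmark("mmlu", mmlu_prompt, mmlu_parse)

def evaluate(bench: Benchmark, model: Callable[[str], str], items: list[dict]) -> float:
    # Accuracy under the benchmark's single official implementation:
    # swap in any model; the prompting and scoring never change.
    correct = sum(
        bench.parse_answer(model(bench.format_prompt(item))) == item["answer"]
        for item in items
    )
    return correct / len(items)
```

    Because the template and parser live with the benchmark rather than with each eval harness, two labs running `evaluate` on the same items cannot diverge through prompting or parsing quirks.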

  • Groq

    This solo founder built an open-source competitor to Perplexity with no team, no funding, and no permission. What started as a weekend project now powers over 1M searches, with 60K+ monthly users. This is the story of Scira and how Zaid Mukaddam built what others wouldn’t.

    Like many great dev stories, it started with frustration. Zaid wanted a better way to search across the web using AI: something fast, clean, and customizable. Perplexity was powerful, but felt closed and out of reach. So he started building. He called it “MiniPerplx” at first:
    ✅ A lightweight, open-source alternative
    ✅ No login
    ✅ No paywall

    Then he added Research Groups, which allowed search across YouTube, Reddit, and X all at once, grouped by source. It actually worked, and that’s when it took off. In December, Scira’s daily traffic jumped from 500 to 16,000 overnight. His GitHub repo surged from 1K to 6K stars in just a few weeks. People weren’t just curious; they were using it. He renamed it Scira, and just like that, it had momentum.

    Scira grew fast, but it wasn’t built to scale affordably. Running large models like Anthropic’s was burning over $1,000/month. Every search chipped away at Zaid’s savings. He nearly pulled the plug. Then Groq stepped in with compute that could finally keep up. With Groq’s efficient inference and Qwen3‑32B’s pinpoint citations, Scira transformed. It wasn’t just fast; it was sustainable. Add in the Vercel AI SDK, Next.js, and a crisp shadcn/ui interface, and you had a stack users loved. They even preferred it to GPT‑4o.

    Today, Scira is a full-stack AI search engine used by 60,000+ people each month, with more than 1 million searches served. The GitHub repo has earned 8,000+ stars and continues to climb. Built in public. Maintained solo. Proven by use, not hype.

    Zaid didn’t set out to build a company. He just saw a problem, started building, and refused to quit. That’s the part devs forget: you don’t need a launchpad. You don’t need a roadmap. You just need to Build Fast.
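    The pattern behind a Scira-style search engine, answering from retrieved sources with citations, can be sketched against Groq's OpenAI-compatible chat completions endpoint. This is a minimal sketch, not Scira's code: `build_messages` and `answer` are hypothetical helpers, and the `qwen/qwen3-32b` model ID is an assumption to verify against Groq's current model list.

```python
# Hedged sketch of search-grounded answering via Groq's
# OpenAI-compatible REST API. Helper names are hypothetical.
import json
import os
import urllib.request

def build_messages(query: str, sources: list[str]) -> list[dict]:
    # Number each retrieved source so the model can cite it like [1].
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return [
        {"role": "system",
         "content": "Answer using only the numbered sources and cite them like [1]."},
        {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {query}"},
    ]

def answer(query: str, sources: list[str]) -> str:
    payload = {
        "model": "qwen/qwen3-32b",  # assumed model ID; check Groq's model list
        "messages": build_messages(query, sources),
    }
    req = urllib.request.Request(
        "https://api.groq.com/openai/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

    The retrieval step (web/YouTube/Reddit search) is out of scope here; the point is that grounding plus fast inference is a thin layer over a standard chat completions call.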

  • Groq

    1 week ago: Kimi K2 launches.
    72 hours later: We launch it on Groq.
    Now: Thousands of devs are building with it.

    Kimi K2, now on Groq:
    ✅ 1T parameters
    ✅ Full context
    ✅ Built for agents
    ✅ Unmatched price-performance

    Build Fast. Link in comments.

