AI Dev Tools

llama.cpp

Open Source · Updated 14 days ago

Port of Meta's LLaMA model in C/C++

git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make

llama.cpp enables local LLM inference in plain C/C++ with minimal setup. It supports quantization, runs on CPU and GPU, and provides bindings for Python, Go, Node.js, and more.
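Beyond the bindings, llama.cpp also ships an HTTP server (`llama-server`) with an OpenAI-compatible chat endpoint. As a minimal sketch, the helper below builds a request payload for that endpoint; it assumes a server started locally with `llama-server -m model.gguf` on the default port 8080, and the parameter defaults here are illustrative.

```python
import json

# Default local endpoint for llama-server's OpenAI-compatible API
# (assumption: server running with default host/port settings).
CHAT_URL = "http://127.0.0.1:8080/v1/chat/completions"

def chat_request(prompt: str, max_tokens: int = 128, temperature: float = 0.7) -> dict:
    """Build a JSON-serializable payload for the /v1/chat/completions endpoint."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

# Serialize the payload; send it with any HTTP client, e.g.
# urllib.request or curl -d, against CHAT_URL.
payload = json.dumps(chat_request("Why is the sky blue?"))
```

Because the endpoint mirrors the OpenAI chat schema, the same payload works with most OpenAI-compatible client libraries pointed at the local base URL.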

Tags: llm · local-ai · inference · quantization · cpu · gpu
View on GitHub · ★ 68,000 · C++