POST(AI)                    netgod.dev manual                    POST(AI)
NAME

$ RAG vs Fine-Tuning: When to Use Which (and When to Use Both)

DESCRIPTION

Every AI project hits this fork. The honest answer is more nuanced than the Twitter takes suggest.

DATE
2025-04-08
DURATION
1 min read
CONTENT

You have a domain-specific use case and a base LLM that mostly-but-not-quite knows your domain. Do you build RAG, fine-tune, or both?

Use RAG when…

  • The knowledge changes (docs, prices, inventory, news)
  • You need citations
  • You're under 10M tokens of source material
  • You need to ship this week
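The RAG path can be sketched in a few lines. This is a toy keyword-overlap retriever standing in for a real embedding index; the names (`DOCS`, `retrieve`, `build_prompt`) are illustrative, not any particular library's API:

```python
# Minimal RAG sketch: retrieve the most relevant chunks, then stuff them
# into the prompt with citation markers. Word overlap stands in for
# embedding similarity here so the example runs with no dependencies.

DOCS = [
    "Refunds are processed within 5 business days.",
    "The Pro plan costs $49/month and includes SSO.",
    "Support hours are 9am-6pm ET, Monday through Friday.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by word overlap with the query (stand-in for a vector search)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str) -> str:
    """Put the facts in the context window, numbered so the model can cite them."""
    chunks = retrieve(query, DOCS)
    context = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return f"Answer using only these sources:\n{context}\n\nQ: {query}\nA:"

print(build_prompt("How much is the Pro plan?"))
```

Swap the retriever for a real vector store and the shape stays the same: fresh facts go in at query time, so updating knowledge is just updating `DOCS`.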

Fine-tune when…

  • You need a specific style or format the base model can't reliably produce
  • You're doing classification, extraction, or routing — not free-form generation
  • You want to shrink to a smaller, cheaper model
  • The behavior, not the knowledge, is what's wrong
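For the classification/routing case, the training data is just input-output pairs in a chat-style JSONL file. A sketch, with hypothetical labels and a schema you'd adapt to your provider's fine-tuning API:

```python
# Sketch of fine-tuning data for a ticket router. The model learns the
# mapping from query to label -- a behavior, not a fact. Labels and the
# record schema are illustrative.
import json

examples = [
    {"query": "cancel my subscription", "label": "billing"},
    {"query": "the app crashes on login", "label": "bug_report"},
    {"query": "do you support SAML?", "label": "sales"},
]

def to_jsonl(examples: list[dict]) -> str:
    """One chat transcript per line, ending with the label as the assistant turn."""
    lines = []
    for ex in examples:
        record = {
            "messages": [
                {"role": "system", "content": "Route the ticket. Reply with one label."},
                {"role": "user", "content": ex["query"]},
                {"role": "assistant", "content": ex["label"]},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

print(to_jsonl(examples))
```

A few hundred examples like this can make a small model match a frontier model on a narrow routing task, at a fraction of the per-token cost.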

Use both when…

You fine-tune a small model to follow your tool-calling format perfectly, then feed it RAG context for facts. This is how most production agents are actually built.
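The division of labor looks like this in the prompt: the fine-tuned model owns the output contract (a strict tool-call schema it was trained on), retrieval owns the facts. Everything here is a hypothetical sketch, not a specific agent framework:

```python
# Hybrid sketch: RAG supplies the facts at call time; fine-tuning has
# already baked in the habit of answering with exactly one JSON tool call.
import json

def build_agent_prompt(query: str, retrieved_chunks: list[str]) -> str:
    """Fresh context in, fixed format out."""
    context = "\n".join(f"- {c}" for c in retrieved_chunks)
    schema = json.dumps({"tool": "<tool_name>", "args": {}})
    return (
        f"Context (retrieved just now, may change between calls):\n{context}\n\n"
        f"User: {query}\n"
        f"Respond with exactly one JSON tool call: {schema}"
    )

prompt = build_agent_prompt(
    "What does the Pro plan cost?",
    ["The Pro plan costs $49/month."],
)
print(prompt)
```

Note what each half buys you: you can update the pricing chunk tomorrow without retraining, and the small model never free-forms its way out of the tool-call format.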

Don't fine-tune to teach facts

It mostly doesn't work. Fine-tuning reliably shifts behavior and style, but it's a lossy way to store knowledge: the model will hallucinate plausible variations of what you taught it. Put facts in the context window instead.
