# Optimizing Documents for RAG Indexing

**Retrieval Augmented Generation (RAG)** combines the power of **Natural Language Processing (NLP)** with the capabilities of **Large Language Models (LLMs)** to let applications understand natural language and learn from ingested knowledge.

Even with the best LLM, the model is unlikely to be trained or optimized for your specific use case. To get the best results, you need to refine your inputs before documents are processed. You will still benefit from NLP and document chunking, but some upfront preparation is required.
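As an illustration of this kind of upfront preparation, the sketch below splits a Markdown document on its headings so that each chunk covers one self-contained topic before ingestion. This is a generic example of heading-based pre-chunking, not the Aisera platform's actual chunking pipeline; the function name and splitting strategy are assumptions for illustration only.

```python
def chunk_by_heading(markdown_text: str) -> list[str]:
    """Split Markdown text into chunks, starting a new chunk at each
    top- or second-level heading (a generic pre-chunking illustration)."""
    chunks: list[str] = []
    current: list[str] = []
    for line in markdown_text.splitlines():
        # Flush the running chunk whenever a new section heading begins.
        if line.startswith(("# ", "## ")) and current:
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return chunks

doc = """# Password Reset
Steps to reset a password.

## Via Email
Click the reset link in your inbox.

## Via Admin
Ask your administrator to reset it.
"""
for chunk in chunk_by_heading(doc):
    print(chunk.split("\n", 1)[0])  # print each chunk's heading line
```

Pre-chunking along headings like this keeps each retrieved passage focused on a single topic, which is the general goal of the optimization guidance linked below.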

This section covers everything you need to know about document optimization:

* [When to Optimize Documents](https://docs.aisera.com/aisera-platform/adding-data-to-your-tenant/data-ingestion/optimizing-documents-for-rag-indexing/when-to-optimize-documents): How to identify when poor RAG results are a content problem and when to act on it.
* [How to Optimize Documents](https://docs.aisera.com/aisera-platform/adding-data-to-your-tenant/data-ingestion/optimizing-documents-for-rag-indexing/how-to-optimize-documents): Recommendations for structuring and preparing your documents before ingestion.
* [Re-indexing and Testing After Optimization](https://docs.aisera.com/aisera-platform/adding-data-to-your-tenant/data-ingestion/optimizing-documents-for-rag-indexing/re-indexing-and-testing-after-optimization): How to re-ingest your optimized documents and validate the results.
* [Document Optimization Examples](https://docs.aisera.com/aisera-platform/adding-data-to-your-tenant/data-ingestion/optimizing-documents-for-rag-indexing/examples-of-document-optimization): Real-world examples showing how optimization strategies apply across different content types.
