# LLM Cache

The **Settings > Configuration > LLM Cache** window lets you set LLM cache parameters for your tenant. These settings apply to every bot you create in your Aisera tenant.

<figure><img src="https://3281977978-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvBFXjH9S1CAy9f5hzg5Q%2Fuploads%2F4lQF4IMlwo0spVOO2Xeq%2Fconfig_llm_cache.png?alt=media&#x26;token=9e3e01fd-b4d9-48da-95e9-a972341eddf9" alt=""><figcaption><p>Tenant Configuration Options if Using a Large Language Model (LLM)</p></figcaption></figure>

You can change the following settings when configuring your tenant/platform instance.

<table><thead><tr><th width="176">Label</th><th width="109">Type</th><th width="110">Default</th><th>Description</th></tr></thead><tbody><tr><td>Cache Length</td><td>Integer</td><td>1000</td><td></td></tr><tr><td>Cache Temperature</td><td>Decimal</td><td>0.1</td><td></td></tr><tr><td>Semantic Similarity Threshold</td><td>Decimal</td><td>0.95</td><td></td></tr><tr><td><p>Lower Semantic</p><p>Similarity Threshold</p></td><td>Decimal</td><td>0.75</td><td></td></tr><tr><td>Keyword Similarity Threshold</td><td>Decimal</td><td>0.999</td><td></td></tr></tbody></table>

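These settings follow the usual semantic-cache pattern: a query whose embedding matches a cached entry at or above the Semantic Similarity Threshold can be answered from the cache, a match between the Lower Semantic Similarity Threshold and the upper threshold is only a weaker candidate, and anything below falls through to the LLM. The sketch below illustrates that interaction, plus eviction at Cache Length. It is a generic illustration with hypothetical names, not Aisera's implementation; Cache Temperature and the Keyword Similarity Threshold are omitted for brevity.

```python
from collections import OrderedDict


class SemanticCache:
    """Toy semantic cache: illustrative only, not Aisera's internal code."""

    def __init__(self, cache_length=1000, semantic_threshold=0.95,
                 lower_threshold=0.75):
        self.cache_length = cache_length          # max cached entries (Cache Length)
        self.semantic_threshold = semantic_threshold
        self.lower_threshold = lower_threshold
        self.entries = OrderedDict()              # embedding tuple -> cached response

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    def lookup(self, embedding):
        """Return (response, status): 'hit', 'candidate', or 'miss'."""
        best_sim, best_resp = 0.0, None
        for key, resp in self.entries.items():
            sim = self._cosine(embedding, key)
            if sim > best_sim:
                best_sim, best_resp = sim, resp
        if best_sim >= self.semantic_threshold:
            return best_resp, "hit"        # confident reuse of the cached answer
        if best_sim >= self.lower_threshold:
            return best_resp, "candidate"  # close enough to consider, not to serve blindly
        return None, "miss"                # call the LLM, then store() the result

    def store(self, embedding, response):
        if len(self.entries) >= self.cache_length:
            self.entries.popitem(last=False)      # evict the oldest entry
        self.entries[tuple(embedding)] = response
```

With the defaults above, a query whose embedding has cosine similarity 0.96 to a cached entry is a hit, 0.80 is a candidate, and 0.50 is a miss.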