# Request Analyzer - Conversation Status

The Aisera Gen AI platform includes a service that analyzes all conversations, including unresolved ones, and applies auto-tags to every Request so that the models can sort them into categories related to Intents, Workflows, Knowledge Documents, or Internal Server/Flow Execution Errors.

This conversation service is known as the **Request Analyzer**.

The resulting **Conversation Status** can be:

* Resolved
* Unresolved
* Casual
* Assisted
* Abandoned
* Unhandled

Automatically and accurately analyzing all conversations, especially unresolved ones, and applying tags to every conversation request improves the representation and analysis of conversations when reviewing a tenant on a daily or weekly cadence.

The **Request Analyzer** supports conversations with or without Fulfillments and also covers Requests from non-intent-based fulfillment sources such as RAG and public Knowledge Base articles.

The conversation tags and their definitions are as follows:

<table data-header-hidden><thead><tr><th width="215"></th><th></th></tr></thead><tbody><tr><td>Tag Name</td><td>Tag Description</td></tr><tr><td>Correct Answer</td><td><p>The request was correctly answered by the Fulfillment.</p><p>What constitutes “correctly answered”?</p><p>Positive Feedback from the user</p><p>OR</p><p>LLMs confirming that the KB is relevant to the request</p><p>OR</p><p>Action Flow finished without Negative Feedback</p></td></tr><tr><td>Incorrect Answer</td><td><p>The KB served from Neural/Search/Neural+ was irrelevant (as decided by the LLM)</p><p>OR</p><p>Received Negative Feedback on a KB (from Neural/Search/Neural+), even if the LLM states it is the correct answer</p><p>OR</p><p>Incorrect Intent (by ICM) and irrelevant KB (as decided by the LLM)</p><p>OR</p><p>Incorrect Intent (by ICM) and abandoned Flow</p><p>OR</p><p>Abandoned Flow from any intentless (RAG) Fulfillment source</p></td></tr><tr><td>Incorrect Answer - Update Annotation</td><td><p>This tag satisfies the conditions for Incorrect Answer.</p><p>Additionally, a correct Intent exists but was not identified by ICM.</p></td></tr><tr><td>Incorrect Answer - Update Ontology</td><td><p>This tag satisfies the conditions for Incorrect Answer.</p><p>Additionally, the request contains some entities (as determined by LLMs) that were not identified by KGNER.</p></td></tr><tr><td>Incorrect Answer - Update Annotation and Ontology</td><td><p>This tag satisfies the conditions for Incorrect Answer.</p><p>Additionally, a correct Intent exists but was not identified by ICM</p><p>AND</p><p>the request contains some entities (as determined by LLMs) that were not identified by KGNER.</p></td></tr><tr><td>KB Gap</td><td><p>The KB served (from the Intent or the Fallback) was irrelevant (as decided by the LLM) but the Intent was correct</p><p>OR</p><p>Received Negative Feedback for a KB (from the Intent or the Fallback) but the Intent was correct</p></td></tr><tr><td>Flow Gap - Terminal Node</td><td><p>Negative Feedback at the end of the Flow</p><p>OR</p><p>The KB served at the end of the KB Flow (from ICM or Flow Search) is irrelevant (as decided by the LLM)</p></td></tr><tr><td>Flow Gap - Non-Terminal Node</td><td>Abandonment of the Flow (but the Intent is correct, as decided by the LLM).</td></tr><tr><td>Flow Execution Error</td><td><p>Flow execution failure: all errors with code ERR-007.</p><p>Examples:</p><p>ERR-007 : Flow_Execution_Failure : Fail to execute FlowNode:649512065</p><p>ERR-007 : Flow_Execution_Failure : java.lang.NullPointerException</p></td></tr><tr><td>Something Else</td><td><p>Internal error message from the Conversation Server: any error code other than ERR-007.</p><p>Examples: ERR-002, ERR-006</p></td></tr><tr><td>Not Understood</td><td>No Fulfillment was served.</td></tr></tbody></table>
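The tagging rules in the table can be sketched as a priority-ordered decision function. This is a minimal illustration only: the signal names (`fulfillment_served`, `llm_kb_relevant`, and so on) are hypothetical stand-ins for the platform's internal signals, and the rule ordering is a simplification of the actual Request Analyzer logic.

```python
# Illustrative sketch of the Request Analyzer tagging rules.
# All field names on RequestSignals are hypothetical assumptions; the
# platform's real internal signals are not documented here.
from dataclasses import dataclass
from typing import Optional


@dataclass
class RequestSignals:
    fulfillment_served: bool                # was any Fulfillment returned?
    error_code: Optional[str] = None        # e.g. "ERR-007", "ERR-002"
    positive_feedback: bool = False
    negative_feedback: bool = False
    llm_kb_relevant: bool = False           # LLM judged the served KB relevant
    intent_correct: bool = True             # ICM picked the correct Intent
    flow_abandoned: bool = False
    flow_finished: bool = False


def classify(s: RequestSignals) -> str:
    """Apply the documented tag rules in priority order (a simplification)."""
    if not s.fulfillment_served:
        return "Not Understood"
    if s.error_code == "ERR-007":
        return "Flow Execution Error"
    if s.error_code is not None:            # any other internal error code
        return "Something Else"
    # Correct Answer: positive feedback, OR a relevant KB, OR a finished
    # Action Flow, in each case without negative feedback overriding it.
    if s.positive_feedback or (s.llm_kb_relevant and not s.negative_feedback) \
            or (s.flow_finished and not s.negative_feedback):
        return "Correct Answer"
    # KB Gap: Intent correct, but the KB was irrelevant or negatively rated.
    if s.intent_correct and (not s.llm_kb_relevant or s.negative_feedback) \
            and not s.flow_abandoned:
        return "KB Gap"
    # Flow Gap - Non-Terminal Node: Intent correct but the Flow was abandoned.
    if s.intent_correct and s.flow_abandoned:
        return "Flow Gap - Non-Terminal Node"
    # Remaining cases (wrong Intent with irrelevant KB or abandoned Flow).
    return "Incorrect Answer"
```

The "Update Annotation"/"Update Ontology" refinements would layer on top of `Incorrect Answer` once the ICM and KGNER checks are available; they are omitted here to keep the sketch short.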

<figure><img src="https://3281977978-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvBFXjH9S1CAy9f5hzg5Q%2Fuploads%2Ft1xEPrMmuxtfBfZcXzGY%2FScreenshot%202024-06-04%20at%204.34.08%E2%80%AFPM.png?alt=media&#x26;token=156379dc-1729-4e30-9c0d-3d6bc299845b" alt=""><figcaption></figcaption></figure>

<figure><img src="https://3281977978-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvBFXjH9S1CAy9f5hzg5Q%2Fuploads%2FeQBYsYKeB1OS2fgW40Ft%2FScreenshot%202024-06-04%20at%204.34.40%E2%80%AFPM.png?alt=media&#x26;token=db509eba-9d02-482e-9688-52c5810c0bfe" alt=""><figcaption></figcaption></figure>

The **Request Analyzer** performs the auto-categorization (application of auto-tags) once or twice per week depending on the volume of requests for a given tenant instance. Each tenant has its own schedule.

These tenant jobs are staggered automatically and run on the Aisera Gen AI server every few days, depending on the average time it takes to run the job. For example, tenants with a high volume of requests have the job auto-triggered once a week, while tenants with lower volumes run it every few days.
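The volume-based scheduling described above can be sketched as follows. The threshold and interval values, and the hash-based staggering, are illustrative assumptions for this sketch, not documented platform defaults.

```python
# Illustrative sketch of volume-based Request Analyzer scheduling:
# high-volume tenants run weekly, lower-volume tenants every few days.
# The threshold (10,000) and the 7-day/3-day intervals are assumptions.
import zlib


def analyzer_interval_days(weekly_request_volume: int,
                           high_volume_threshold: int = 10_000) -> int:
    """Return the number of days between Request Analyzer runs for a tenant."""
    if weekly_request_volume >= high_volume_threshold:
        return 7   # high volume: auto-triggered once a week
    return 3       # lower volume: runs every few days


def staggered_start_offsets(tenant_ids: list[str]) -> dict[str, int]:
    """Stagger tenant jobs across the week with a stable hash of the tenant id,
    so jobs do not all start on the same day."""
    return {t: zlib.crc32(t.encode("utf-8")) % 7 for t in tenant_ids}
```

Using a stable checksum such as `zlib.crc32` (rather than Python's salted `hash`) keeps each tenant's offset the same across server restarts.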
