Test Suite
The Aisera Test Suite gives you a way to validate that an application (bot) still handles conversations the same way after events such as a product release or upgrade, a hotfix patch, changes to your app configuration, or other milestone or audited events. A self-service interface at the admin level lets you run tests on your app, leading to improved Key Performance Indicators (KPIs) and increased overall customer satisfaction.
The Aisera Test Suite replays entire audited Conversations (Requests) and Phrases generated from ingested KB Articles, not just a single user utterance and its associated response, and compares the replayed conversation to the historical conversation (from the time it originally occurred). If there are any differences in the conversation, the test case fails and is displayed as Unmatched.
Target Persona(s)
Aisera Admins, such as CE, CSM, and AISE
QA / Test Admins
Customer Admins
The Aisera Test Suite Resolves the Following Bot Testing Difficulties:
Previously, after any product release or hotfix, teams manually and sporadically tested all tenants to ensure nothing was broken and no regression was caused during the event.
There was no easy way to test whether Admin users introduced regressions when making edits in the application.
AI Lens works on single requests and doesn't allow testing on historical conversation data.
There was no ability to test daily updates to a tenant, only complex use cases.
Admins did not have the ability to test an onboarding account's webchat behavior when there were no requests in the system to validate against.
There was no straightforward way to verify if the web chat behavior remains consistent after a release.
Use Cases
I need the ability to validate that all the conversations and phrases generated from KB Articles are working as expected after any major product upgrade or release.
I need the ability to test how any configuration changes I make to the application will impact conversational results.
I want the ability to run automated tests in the app to ensure no regressions were introduced after a release or hotfix.
I want the ability to run automated tests in the product to ensure high results before going live.
I want to ensure consistency in webchat responses after a release or any update to an app.
Components of a Test Suite
The Aisera Test Suite is a collection of test cases that are conversations between users and the chatbot.
Test Cases -
Request → Test Case - You can generate a test case from a conversation request between a user and a chatbot. This is a sequence of Requests and Responses with a Resolution (Fulfillment) provided by the chatbot, carrying a status of Resolved, Unresolved, or any other Resolution Status.
Phrase generated from a Knowledge Base Article → Test Case - You can generate a test case (phrase) from a Knowledge Base Article (KBA). If the bot does not have audited Request(s) present in the system, then you can generate phrases from already ingested Knowledge Base Article(s) in an application for testing. This is helpful for onboarding accounts where Requests do not exist because the app has not gone live yet.
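For illustration, the sketch below shows one plausible shape for these two kinds of test cases. The class and field names are hypothetical, not Aisera's actual schema:

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical shapes for illustration only; not Aisera's actual schema.

@dataclass
class Turn:
    """One conversation step: a user Request and the bot's Response."""
    request: str
    response: str

@dataclass
class TestCase:
    """A replayable test case, created either from an audited Request
    (a full conversation) or from a phrase generated from a KB Article."""
    source: str                              # "request" or "kb_phrase"
    turns: List[Turn]                        # baseline conversation to replay
    resolution_status: Optional[str] = None  # e.g. "Resolved" or "Unresolved"
    kb_article_id: Optional[str] = None      # set when source == "kb_phrase"

# A Request-based test case: a sequence of Requests and Responses
# ending in a Resolution provided by the chatbot.
conversation_case = TestCase(
    source="request",
    turns=[Turn("How do I reset my password?",
                "Here are the steps to reset your password...")],
    resolution_status="Resolved",
)

# A KB-phrase test case: a generated phrase whose expected Response
# is a section/content chunk of the original KB Article.
kb_case = TestCase(
    source="kb_phrase",
    turns=[Turn("reset a forgotten password", "<KBA section/content chunk>")],
    kb_article_id="KB-1234",
)
```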
Matched Test Case -
For a Request or Conversation, a Matched Test Case means the conversation replayed during the Test Suite execution matches (contextually or syntactically) the original historical conversation.
For a phrase generated from a Knowledge Document, a Matched Test Case means that, upon replaying the Test Suite, the returned KBA section/content chunk (the chatbot's Response) matches one or more portions of the original KB Article.
Unmatched Test Case - This means there was a difference in the conversation when it was replayed during the test compared to the historical conversation (this difference can be in the number of steps in the conversation or in the content of the bot response, such as a different workflow, KB Article, or custom message).
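Conceptually, the match/unmatch decision can be pictured as in the sketch below. This is an illustration only: the similarity function and the 0.8 threshold are assumptions, and Aisera's actual comparison is contextual as well as syntactic:

```python
from difflib import SequenceMatcher

# Illustrative stand-in for Aisera's internal matching model. SequenceMatcher
# is purely syntactic, and the 0.8 threshold is an arbitrary assumption.
MATCH_THRESHOLD = 0.8

def matching_score(baseline: str, replayed: str) -> float:
    """Score similarity between the baseline and replayed responses."""
    return SequenceMatcher(None, baseline, replayed).ratio()

def classify(baseline_responses: list, replayed_responses: list) -> str:
    """A test case is Unmatched if the step count differs or any replayed
    response diverges from its baseline response."""
    if len(baseline_responses) != len(replayed_responses):
        return "Unmatched"  # different number of steps in the conversation
    for base, replay in zip(baseline_responses, replayed_responses):
        if matching_score(base, replay) < MATCH_THRESHOLD:
            return "Unmatched"  # different workflow, KB Article, or message
    return "Matched"
```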
NOTE: Whenever a Test Suite execution exceeds 10 minutes, the test case times out and an error message is displayed in the UI (as shown in the screenshot below).

When you choose a Test Case and open it, you see the Test Case Execution Details window.

Match Rate
The Match Rate is calculated as (Matched Test Cases / Total Test Cases) * 100.
Please refer to the screenshot below. In this example, 14 Matched Test Cases out of 15 Total Test Cases gives a 93.3% Match Rate.
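The same arithmetic, as a minimal sketch:

```python
def match_rate(matched: int, total: int) -> float:
    """Match Rate = (Matched Test Cases / Total Test Cases) * 100."""
    if total == 0:
        return 0.0
    return matched / total * 100

print(round(match_rate(14, 15), 1))  # 93.3, as in the example above
```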

Definition
The Definition is the list of test cases present in a Test Suite.

Executions
Last Executed Time - Records the timestamp of the most recent Test Suite run (see screenshot below). Note that you can track the status of Test Suite executions in the Aisera Admin UI by navigating to Settings > Jobs as shown below:

Execution Results Over Time - Displays the Match Rates of all Test Suite executions over time, visualized in a timeline plot (see screenshot below). Execution time varies based on the number of test cases in a suite. After execution, each Test Case's status updates automatically, indicating whether it matched the historical conversation.

Baseline Response
This is the Response from the original conversation.
Set Up Your Test Suites
Test Case Creation - You can create a Test Suite using either Requests or Knowledge Base Articles (Knowledge Documents).
In the Aisera Admin UI, navigate to AI Workbench > Test Suites.

Select the + New Test Suite button.

Create a Name for your Test Suite.

Click OK. This will take you to the Test Suite Details window.

Select the Definition tab.
Choose + Add Test Cases.

Select Requests as the Test Case Type.

Requests Test Case Type
When you choose Requests, you'll see the Test Suite selection screen.

Choose Requests that you want to include in your Test Suite.

Select the Actions button and choose Add Selected to Test Suite.
Choose the Test Suite that you want to add the selected Requests to.

Click OK. You will see a success message that your Requests have been added to the Test Suite.

Now you can see the Requests you selected within the Definition (list) for your Test Suite.

To Run the Test Suite, switch to the Results tab.
Choose the Run Test Suite button.

A window asks you to acknowledge that you want to run the Test Suite.

After you acknowledge that you want to run the Test Suite, a Success window will appear to let you know that your Test Suite job has been started.
Click the View Jobs twirling icon to see the status of your Test Suite. In the screenshot below, you can see that the Test Suite is in the Scheduled status. You can also reach this page via Settings > Jobs in the Aisera Admin UI.

Knowledge Base Article Test Case Type
Make sure you have ingested Knowledge Base Article data from a tenant data source associated with your Aisera Gen AI tenant instance.
Select Knowledge Documents from the + Add Test Cases button, described in Step 8 from the beginning of the Set Up Your Test Suite section.

In the Knowledge Documents screen, you first generate phrases, then manually review the generated phrases, and are then redirected to the Test Suite selection screen.

Select the Knowledge Documents that you wish to automatically generate phrases from, then validate the phrases.

Select the Generated Phrases you want to use and choose Add Selected to Test Suite from the Actions menu.

Add the Selected Phrases to a Test Suite.

You can add the selected Test Cases to an existing Test Suite or create a new Test Suite. When adding to a new Test Suite, you are required to enter a unique Test Suite Name:

To view all Test Suites, navigate to AI Workbench > Test Suites.

After you’ve created a Test Suite, you can run it from the Suites List View page as shown below:

Or from the Test Suite Details page as shown below:

Execution Chat Simulation
For Matched and Unmatched Test Cases, you can view more details on the Execution Chat Simulation page to get visibility into what the historical (original) conversation looked like and whether it was a match.
For test cases that are Matched, the Execution Chat Simulation will indicate that the conversation replayed during the Test Suite execution matches (contextually or syntactically) the historical conversation, along with a Matching Score.

For test cases that are Unmatched, the Execution Chat Simulation will indicate that there was a difference in the bot’s response when it was replayed during the test compared to the historical conversation, along with a corresponding low matching score:

Update the Baseline if the Response is not your expected response.
When replaying a Test Case, the chatbot might technically respond correctly, but the system still marks it as Unmatched because the new response doesn’t exactly match the response that was served when the original conversation happened. In these cases, the expected response is shown separately below for reference.
You have the ability to set this newer, correct response as the new baseline. This means future test runs will recognize it as a valid match, and subsequent executions will result in a Match. This reduces false negatives and saves you from unnecessary debugging, keeping your Test Suite clean and accurate.
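A minimal sketch of the idea, assuming a per-test-case baseline store (the class and method names are illustrative, not Aisera's API):

```python
# Illustrative sketch of the baseline-update idea; names are hypothetical.

class BaselineStore:
    """Keeps the accepted (baseline) response for each test case."""

    def __init__(self):
        self._baselines = {}  # test_case_id -> baseline response text

    def set_baseline(self, test_case_id: str, response: str) -> None:
        # Accepting the newer, correct response as the new baseline means
        # future runs compare against it, removing the false negative.
        self._baselines[test_case_id] = response

    def baseline(self, test_case_id: str) -> str:
        return self._baselines[test_case_id]

store = BaselineStore()
store.set_baseline("tc-42", "Updated, correct bot response")
# Subsequent executions now compare against the updated baseline,
# so a correct replay yields a Match instead of an Unmatch.
```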



Run As Option
The Definitions page includes a Run As option that lets you choose which user to run each test case as. For instance, you can run a Test Case as the Original User, as a Custom User, or as a Selected User.
Test Case-Level Configuration:
Use the Run as column on the Definitions page to select the user you would like to run each test case as.
Modify the Run as column to switch between Run as - Original User and Run as - Custom User; the latter lets you enter the email address of the custom user you want to run the test case as.
Error handling:
Invalid email address entries default to [email protected].
Valid email entries are used to execute the test case as the specified user.
Default setting if no changes are made: Run as Original User. Please note that the Original User varies for each Test Case, depending on how it is created.
If the Test Case is created from an existing Conversation Request, then the Original User is the User who originally initiated that Conversation Request.
If the Test Case is created from a KB Article Phrase, then the Original User is the Test User, such as [email protected].
Test Suite-Level Configuration:
You have the option to configure and execute the Test Suite without modifying individual Test Cases.
Options during execution:
Run as Original User: Overrides all test cases to use the original user setting.
Run as Custom User: Overrides all test cases to use a custom user.
Run as Selected: Retains the Run as settings defined for each test case by the users.
Default setting if no changes are made: Run as Selected.
Suite vs Case Configuration
You can configure and save execution settings at the Test Suite level.
A Test Suite’s Configuration takes precedence over the Test Case(s) Configuration.
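The precedence and fallback rules above can be summarized as in the sketch below. The function and setting names are illustrative; only the behavior (suite-level precedence, and the [email protected] fallback for invalid emails) comes from this document:

```python
import re

# Illustrative sketch of the Run As precedence rules described above.
DEFAULT_TEST_USER = "[email protected]"
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def resolve_run_as(suite_setting: str, case_setting: str,
                   original_user: str, custom_email: str = "") -> str:
    """Return the user a test case runs as.

    suite_setting: "original" | "custom" | "selected" (suite-level override)
    case_setting:  "original" | "custom" (per-test-case Run as column)
    """
    # Suite-level configuration takes precedence over test-case configuration;
    # "selected" retains each test case's own Run as setting.
    effective = case_setting if suite_setting == "selected" else suite_setting
    if effective == "original":
        return original_user  # varies per test case (requester or test user)
    # Invalid email entries fall back to the default test user.
    return custom_email if EMAIL_RE.match(custom_email) else DEFAULT_TEST_USER

# Suite set to "Run as Selected": each case keeps its own Run as setting.
print(resolve_run_as("selected", "custom", "[email protected]", "[email protected]"))
# Suite set to "Run as Original User": overrides every case.
print(resolve_run_as("original", "custom", "[email protected]", "[email protected]"))
```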
Steps for Run As Option
Step 1: View Test Cases and make necessary changes:
Navigate to the Test Suite's Definitions page, where all test cases are listed.
A new Run as column is visible, displaying default values (Run as Original User).
Modify the Run as column for specific Test Cases (to Run as Original User or Run as Custom User).
When you're done making changes, choose Save.
Step 2: Configure Test Suite Execution Settings
Navigate to the Results page and click the newly added Configuration button in the top right corner, next to Run Test Suite (similar to the buttons on the Knowledge and Ticket Learning pages).
Once you click Configuration, the following options are available:
Run as Original User: Ignores individual test case settings and runs all test cases as the original user.
Run as Custom User: Ignores individual test case settings and runs all test cases as a custom user.
Run as Selected: Executes test cases based on their individual "Run as" configuration.
Select an option (such as Run as Selected) and click Save.
Step 3: Execute Test Suite
Click the Run Test Suite Job button.
The Aisera Gen AI platform fetches your latest saved configuration and runs the Test Cases accordingly.
Summary of workflow:
Create a Test Suite:
Go to AI Workbench > Test Suites.
Choose historical conversations from the Requests page and add them to a new Test Suite. These conversations become your Test Cases.
Choose Knowledge Documents that you wish to generate phrases from for testing and add those phrases as Test Cases for the Test Suite.
Execute a Test Suite:
With a single click, replay all the selected conversations.
The system evaluates the bot’s current responses against historical responses.
Analyze Results:
The Test Suite categorizes each Test Case as:
Matched: The bot’s response is consistent with historical data.
Unmatched: The bot’s response has deviated from historical behavior, flagging a potential regression.
Use the Execution Chat Simulation interface to inspect unmatched cases. It highlights specific differences, making troubleshooting easy and efficient.