Troubleshooting Data Ingestion
Use the following sections as a guide if your Data Source ingestion does not complete as expected. If you're still having trouble after reviewing these sections, contact Aisera support at support.aisera.com or reach out to your Aisera contact.
What should I do if the Data Source Ingestion Run fails?
Check your Integration credentials (see the sketch below).
View the data source logs for obvious errors.
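To rule out the credentials quickly, you can issue a single standalone request against the external system's API. This is a minimal sketch assuming a REST API with basic auth; the URL, credentials, and parameters are placeholders modeled on a ServiceNow-style endpoint, not the actual connector internals:

```python
import requests

# Minimal smoke test for integration credentials (all values are
# placeholders; adapt the URL and auth to your external system).
BASE_URL = "https://example.service-now.com/api/now/table/incident"

response = requests.get(
    BASE_URL,
    auth=("integration_user", "integration_password"),
    params={"sysparm_limit": 1},  # fetch a single record only
    timeout=30,
)

if response.status_code in (401, 403):
    print("Credentials rejected:", response.status_code)
else:
    response.raise_for_status()
    print("Credentials OK")
```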
The Ingestion Run was successful, but some tickets are missing from the UI
Check the Date Range in the Data Source configuration (see the sketch after this list).
Check the first mapped record in the logs to get a glimpse of the content that was ingested.
Check the job (in Settings > Jobs) and make sure the pipeline has completed without critical errors. Even if the ingestion status is complete, the pipeline may still be running.
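As an illustration of the Date Range check, the following sketch (with hypothetical field names) shows why a ticket whose updated timestamp falls outside the configured range would be skipped:

```python
from datetime import datetime, timezone

# Illustrative check: a ticket is only picked up if its updated timestamp
# falls inside the Data Source's configured Date Range.
range_start = datetime(2024, 1, 1, tzinfo=timezone.utc)
range_end = datetime(2024, 6, 30, tzinfo=timezone.utc)

tickets = [
    {"id": "INC001", "updated_at": datetime(2024, 3, 15, tzinfo=timezone.utc)},
    {"id": "INC002", "updated_at": datetime(2023, 11, 2, tzinfo=timezone.utc)},
]

for ticket in tickets:
    in_range = range_start <= ticket["updated_at"] <= range_end
    status = "ingested" if in_range else "skipped (outside Date Range)"
    print(ticket["id"], "->", status)
```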
Ticket ingestion completed in Incremental mode, but some fields I need are missing from the ticket content I see in the UI. What can I do?
Add field mappings for the missing fields (see the sketch after these steps).
Set a Date Range in the Data Source Configuration.
Re-run the Data Source Ingestion.
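Conceptually, a field mapping tells the pipeline which external field feeds each Aisera field; the exact configuration happens in the Data Source UI, and the names below are illustrative only:

```python
# Conceptual sketch of a field mapping: each entry maps a field in the
# external system to a field on the Aisera side.
field_mappings = {
    "number": "externalId",          # stable external identifier
    "short_description": "title",
    "description": "body",
    "priority": "priority",          # the missing field you add a mapping for
}

raw_ticket = {
    "number": "INC0010042",
    "short_description": "VPN not connecting",
    "description": "User cannot reach internal resources over VPN.",
    "priority": "2 - High",
}

mapped = {target: raw_ticket.get(source) for source, target in field_mappings.items()}
print(mapped)
```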
How do I connect to on-prem systems that aren't accessible outside the customer's environment?
Use the Remote Executor (RE).
Is there a list of all supported integrations (with fields mapped to the Aisera platform) that I can use to connect my Data Source to Aisera?
Yes. If the integration you're using is not working, or you want to change it, go to Settings > Integrations and choose +New Integration to view the list of available integrations.
Integration fails immediately (Test Connection failure)
Check that the Integration credentials are valid and free of typos. Type the password in manually instead of copy-pasting it; pasted values can carry hidden whitespace characters.
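One common culprit is invisible whitespace picked up during copy-paste. A quick check like this (plain Python, independent of Aisera) flags it:

```python
# Copy-pasted secrets often pick up invisible characters (trailing
# newlines, tabs, non-breaking spaces). This check flags them before
# you blame the credentials themselves.
def check_secret(name: str, value: str) -> None:
    if value != value.strip():
        print(f"{name}: has leading/trailing whitespace")
    if any(ch in value for ch in ("\u00a0", "\t", "\n", "\r")):
        print(f"{name}: contains hidden whitespace characters")

check_secret("client_secret", "s3cr3t\n")  # example of a pasted secret
```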
Integration succeeds but ingests no entities
First, verify that Aisera Entities (objects) exist in the External System for the selected criteria (such as the Date Range and Query). If they do, this is probably an ACL issue: the authenticated user does not have the permissions required to access the resource. Most of the time such requests fail with 401 or 403 errors, but in some cases they succeed and return an empty response.
Finally, check that you have applied the correct field mappings. At a minimum, you should have mappings for externalId or sourceURL; if both are missing, no articles will be ingested.
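To tell a permission (ACL) failure apart from a query that legitimately matches nothing, a standalone request can help. This sketch assumes a Confluence-style REST endpoint; the URL, auth, and response shape are placeholders for your external system:

```python
import requests

# Hypothetical diagnostic: distinguish an ACL failure from a query that
# matches no entities.
response = requests.get(
    "https://example.atlassian.net/wiki/rest/api/content",
    auth=("user@example.com", "api_token"),
    params={"limit": 5},
    timeout=30,
)

if response.status_code in (401, 403):
    print("ACL problem: the authenticated user lacks access.")
else:
    response.raise_for_status()
    results = response.json().get("results", [])
    if not results:
        # A 200 with an empty body can still be an ACL issue: some systems
        # silently filter out records the user cannot see.
        print("Request succeeded but returned no entities; check permissions and query criteria.")
    else:
        print(f"Found {len(results)} entities.")
```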
Knowledge Base documents are deleted after execution
A common issue in a KB crawl is that KB documents are unintentionally marked for deletion.
This can happen for the following reasons:
Documents were deleted or retired from the External System, so they are correctly marked for deletion.
The user switches from Incremental to Date Range mode, or vice versa. If, for example, you switch from Incremental to Date Range, the next execution of the connector keeps only the documents updated within that Date Range; documents ingested in previous crawls are marked for deletion.
The user changes the configuration of the data source, such as the query. Documents ingested in previous crawls that no longer match the query are deleted.
The user changes the mapping for externalId. Documents receive a new unique ID, so they are considered New and the existing documents are deleted.
Incorrect field mapping for externalId combined with versioning enabled on the external system. In external systems like Salesforce and ServiceNow, versioning may be enabled; it is crucial to select a stable field for externalId that does not change when the document's version changes (see the sketch after this list). We have revised most of the default mappings to prevent this scenario.
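To make the last failure mode concrete, here is a minimal sketch (field names like sys_id are illustrative, not the actual connector internals) of why a version-dependent externalId causes a delete-and-recreate cycle:

```python
# If the mapped id changes whenever a new version is published, the
# crawler treats the document as new and marks the old id for deletion.
doc_v1 = {"sys_id": "abc123", "version": 3}
doc_v2 = {"sys_id": "abc123", "version": 4}  # same article, newer version

def unstable_id(doc):
    # BAD: embeds the version, so the id changes on every publish
    return f"{doc['sys_id']}-v{doc['version']}"

def stable_id(doc):
    # GOOD: survives version changes
    return doc["sys_id"]

print(unstable_id(doc_v1) == unstable_id(doc_v2))  # False -> delete + re-create
print(stable_id(doc_v1) == stable_id(doc_v2))      # True  -> updated in place
```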
If you see a large number of deleted documents in the Knowledge Review UI after a crawl and you believe this is incorrect, do not commit the changes. Instead, select Reject uncommitted changes (rollback), which clears all uncommitted changes and restores the connector to its previous state. When you run the ingestion again after a rollback, crawling resumes from the last successful execution date before the rollback.
Cannot complete OAuth2 code flow
Check that credentials are valid (client id, client secret).
Make sure that the redirect URI is correctly set in the External System's app.
If you're using SSO (such as Okta), then you should create the integration from the standard cluster login URL.
Verify that all URLs use the https protocol.
In some systems, you must include the scopes you want to authorize in the authorization URL. For most systems, the scopes are defined during app creation in the external system. In some cases (such as Confluence), scopes must be declared both in the app and in the authorization URL.
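For reference, here is a hedged sketch of how the authorization URL in an OAuth2 code flow is typically assembled. Every value below is a placeholder; the authorize endpoint, scope names, and redirect URI must match what is registered in the external system's app:

```python
from urllib.parse import urlencode

# Sketch of the first leg of an OAuth2 authorization-code flow.
params = {
    "client_id": "your-client-id",
    "response_type": "code",
    "redirect_uri": "https://your-cluster.example.com/oauth/callback",
    "scope": "read:content offline_access",  # some systems require scopes here too
    "state": "random-anti-csrf-token",       # protects against CSRF on the callback
}
auth_url = "https://auth.example.com/oauth/authorize?" + urlencode(params)
print(auth_url)  # every URL involved should use https
```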
Documents are ingested but I can’t see them in Knowledge UI
If documents show as ingested in the Data Source UI but you can't see them in Knowledge, make sure you have done the following:
Go to Knowledge Review and commit any uncommitted documents. This must be done even for New documents.
Verify the field mapping for the article Body. If the body is empty due to an incorrect mapping, the metrics may show that a number of documents were ingested, but those documents were not persisted to the database.
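If you suspect the Body mapping, a quick sanity check like the following (with hypothetical field names) shows how documents with an empty mapped body can be counted yet never saved:

```python
# Illustrative check: documents with an empty mapped body may count as
# "ingested" in the metrics yet never be persisted.
mapped_docs = [
    {"externalId": "KB001", "title": "Reset your password", "body": "Step 1: ..."},
    {"externalId": "KB002", "title": "VPN setup", "body": ""},  # bad Body mapping
]

for doc in mapped_docs:
    if not (doc.get("body") or "").strip():
        print(f"{doc['externalId']}: empty body -- check the Body field mapping")
```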
Mappings not copied over when a Data Source is cloned
If you clone a data source with custom field mappings, you must update those field mappings manually in the cloned data source; field mappings are not currently copied during cloning.
See the note above about giving custom fields unique names so you can easily identify and re-map them after an upgrade or clone.
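As a quick way to spot what was lost, you can compare the two mapping sets by hand; the sketch below represents them as plain dicts with illustrative field names:

```python
# Compare the field mappings of an original and a cloned data source
# (represented here as plain dicts) to see what must be re-added manually.
original = {"number": "externalId", "description": "body", "u_region": "region"}
cloned = {"number": "externalId", "description": "body"}  # custom mapping lost

missing = {src: dst for src, dst in original.items() if src not in cloned}
print("Re-add these mappings in the clone:", missing)
```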