Slack as a Data Source for Conversational and User Learning
This topic describes how to create and integrate your Slack application for Conversational Learning and User Learning when integrated with your Aisera bot. If you are looking for information for adding Slack as a channel for your bot without these Learning features, see Setting Up a Slack Channel.
See Conversational Learning and Gen AI Learning for more information about running these jobs.
Corporate Planning (Decision Making) Information
The Aisera Gen AI platform needs a Service Account user for each system that it integrates with. This user must be authorized to log in and pull data from your Slack system. Gather the following information before you begin.
Can you create and share a Service Account for Slack that the Aisera bot can use to retrieve data?
Can you create and share service accounts for each of your Ticket and Knowledge systems? - For Ticketing systems, the Service Account should have the ability to view, create, update and resolve tickets. - For Knowledge systems, the service account should have the ability to view and publish the KB articles into the space where end-user KB articles are published.
What authentication mechanism(s) (such as Basic, OAuth, OAuth2) should be used to connect to your systems, mainly Ticket and Knowledge?
Can you share the Endpoint and Credentials for the Aisera Service Account user to connect to your systems, mainly Ticket and Knowledge? - For Basic - User Name and Password - For OAuth - Authorization URL, Client Id, Client Secret and Access Token URL
There are objects and fields on your Slack system that need to be accessed by the Aisera application or bot so it can return answers based on your corporate (tenant) data.
Create an Aisera Service Account
Use the Aisera Admin UI to create an Aisera Service Account User that can log into your Slack system. This user only needs Read permissions (with Export ability) to transfer data to the Aisera platform DB. If you plan to use Ticket Concierge, Knowledge Generation, or other features that write back to your Slack system, this user will need Read/Write permission (with Import/Export ability). This user does not need Execute or Delete permissions because all Aisera operations will be performed, tracked, and logged in the Aisera cloud.
Slack Object Data
The connection on the Slack side is implemented as a Generic connector that supports the following content types:
CONTENT_TYPE_USER
CONTENT_TYPE_USER_PROFILE
CONTENT_TYPE_CONVERSATION
CONTENT_TYPE_KNOWLEDGE_ARTICLE
Crawl Logic
The crawl logic for each supported content type is described below.
Crawl Conversations
The connector fetches conversations in the following way.
First, the Data Source Configuration wizard in the Admin UI makes a request to an internal connector-apiserver endpoint to discover the available Conversations (Slack channels).
This endpoint utilizes the selected Integration to make requests to the Fetch all channels endpoint:
/conversations.list
Slack Documentation: https://api.slack.com/methods/conversations.list.
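The cursor-based pagination that conversations.list uses can be sketched as follows. This is a minimal illustration with a stubbed fetch function standing in for the real HTTP call; the helper names are ours, not part of the Aisera connector:

```python
def paginate(fetch):
    """Collect items from a Slack-style cursor-paginated endpoint.

    `fetch(cursor)` must return a dict shaped like a Slack API response:
    {"channels": [...], "response_metadata": {"next_cursor": "..."}}.
    An empty next_cursor means there are no more pages.
    """
    items, cursor = [], None
    while True:
        page = fetch(cursor)
        items.extend(page.get("channels", []))
        cursor = page.get("response_metadata", {}).get("next_cursor", "")
        if not cursor:
            return items

# Stubbed two-page response, standing in for GET /conversations.list
def fake_fetch(cursor):
    if cursor is None:
        return {"channels": [{"id": "C1"}, {"id": "C2"}],
                "response_metadata": {"next_cursor": "p2"}}
    return {"channels": [{"id": "C3"}],
            "response_metadata": {"next_cursor": ""}}

channels = paginate(fake_fetch)
# channels holds C1, C2 and C3, gathered across both pages.
```

The real call would simply pass the cursor as the `cursor` query parameter on each request, as documented for conversations.list.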
When executing the Learn Conversations function, the connector makes requests to these endpoints:
Fetch all messages for each selected channel using the /conversations.history endpoint. Documentation: https://api.slack.com/methods/conversations.history
Fetch all replies for each message using the /conversations.replies endpoint. Documentation: https://api.slack.com/methods/conversations.replies
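The two-step crawl (history, then replies per threaded message) can be sketched like this. The client here is a stub standing in for the real Slack Web API calls; only the control flow is meant to mirror the description above:

```python
def crawl_channel(client, channel_id):
    """Fetch top-level messages, then replies for each threaded one.

    `client` is a stand-in for the real Slack Web API:
    client.history(channel) -> list of top-level messages,
    client.replies(channel, ts) -> full thread (parent first, then replies).
    """
    messages = []
    for msg in client.history(channel_id):
        messages.append(msg)
        # Slack marks a thread parent by giving it a thread_ts equal to its ts.
        if msg.get("thread_ts") == msg.get("ts"):
            # conversations.replies returns the parent first; skip it.
            messages.extend(client.replies(channel_id, msg["ts"])[1:])
    return messages

class FakeClient:
    def history(self, channel):
        return [{"ts": "1.0", "thread_ts": "1.0", "text": "parent"},
                {"ts": "2.0", "text": "plain"}]
    def replies(self, channel, ts):
        return [{"ts": "1.0", "text": "parent"},
                {"ts": "1.1", "thread_ts": "1.0", "text": "reply"}]

msgs = crawl_channel(FakeClient(), "C1")
# msgs: parent, its reply, then the plain (unthreaded) message.
```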
The logic is the same as the KB Learning section. The only difference between KB Learning and Conversations Learning is how we process and store the ingested data from the Aisera side. For KB Learning all the messages of a Slack Conversation (channel) are gathered into one Aisera KB. For Conversations Learning, every ingested message of a channel is stored separately using the new Model for Messages & Channels.
Important Notice: the UNIQUE KEY constraint in the msngr_messages table ensures that the ingested entries of every DS are isolated in the database: uk_message_id_channel_id_data_source_id (message_external_id, channel_id, data_source_id).
This is different from other data models, such as the tickets data model. Ingested entries should only be updated by the same DS that ingested them, and pointing multiple DSs at the same external system and channels can lead to duplicate data in our db.
Field Mappings are important for Conversations Learning: when a message is ingested (msngrMessage), the proto message carries information about its channel as well. The Sender Id can also be included.
Channels Association: when a conversation message is ingested, it must carry channel information (the channel external id is required; the name and workspace are optional). Our analytics service associates the message with the existing channel in our db, creating the channel entry first if it does not exist. Channel names are updated by default: if an ingested message points to a channel whose name differs from the one in our db, the channel name from the ingested message is persisted.
The channel.name and channel.workspace fields are updated only if the message has a non-blank value for at least one of them. If a message points to a valid channel (existing channel id) but carries a null or empty string for both the channel name and the channel workspace, the name and workspace values of the corresponding channel entry in the database are left untouched. If at least one of the message’s values for workspace or name is non-blank, both are written to the database. This logic protects channel entries in the database from being overwritten by ingested messages that (probably due to some error) failed to map the channel name and/or workspace properly.
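The update rule for channel name and workspace can be expressed as a small pure function. This is a sketch of the described behavior, not the actual Aisera implementation, and the function and field names are illustrative:

```python
def merge_channel_fields(existing, incoming):
    """Update a stored channel's name/workspace from an ingested message.

    Rule described above: if BOTH incoming values are blank, keep the
    stored values untouched; if at least one is non-blank, write both.
    """
    name = (incoming.get("name") or "").strip()
    workspace = (incoming.get("workspace") or "").strip()
    if not name and not workspace:
        return dict(existing)  # protect stored entry from blank overwrites
    return {**existing, "name": name, "workspace": workspace}

stored = {"id": "C1", "name": "support", "workspace": "acme"}
# Blank mapping (e.g. a mapping error): stored values survive.
unchanged = merge_channel_fields(stored, {"name": "", "workspace": None})
# Non-blank name: both fields are overwritten, including the blank workspace.
renamed = merge_channel_fields(stored, {"name": "helpdesk", "workspace": ""})
```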
Sender Association: if the sender id is mapped, analytics attempts to look up the ingested user in our database. The sender id is searched for in the user_identities table, using the external system id of the Integration behind the DS. If the user is found, the internal user id of that user is stored in the sender_id column of the msngr_messages table. This means that User Learning must happen first, from a DS that uses the same Integration, before running the Conversations Learning job.
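The lookup boils down to a simple external-to-internal id resolution. A minimal sketch, with a plain dict standing in for the user_identities table:

```python
def resolve_sender(sender_external_id, user_identities):
    """Map an external Slack user id to an internal user id, if known.

    `user_identities` stands in for the user_identities table, keyed by
    the external id learned during User Learning. Returns None (i.e. a
    NULL sender_id) when the user has not been ingested yet.
    """
    return user_identities.get(sender_external_id)

identities = {"U123": 42}  # populated by a prior User Learning run
known = resolve_sender("U123", identities)      # internal id 42
unknown = resolve_sender("U999", identities)    # None: learning not run yet
```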
Parent Message Association: a Message in a Conversation (channel) can either be a top-level message or a reply (a message inside a thread). Top-level messages have a NULL value in the parent_id column of msngr_messages, and thread messages (replies) hold the internal id of the top-level message they belong to. The analytics service resolves this association by processing the mapped message. A mapped message can:
contain a non-empty value for the ParentId field → this instructs analytics to look for the conversation message that already exists in our db with the mapped external id, and use the internal id of that message entry in the parent_id column. If the parent message cannot be found, the parent_id column remains NULL.
contain a non-empty list for the replies field → messages under the replies field are extracted and stored as unique messages by analytics, and the parent_id association is created by default.
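The replies-extraction path can be sketched as follows. A dict stands in for the msngr_messages table's external-to-internal id mapping; names are illustrative, not the real implementation:

```python
def flatten_message(mapped, db_ids):
    """Turn one mapped top-level message plus its replies into rows.

    `db_ids` maps external message ids to internal ids (standing in for
    the msngr_messages table). Replies get parent_id pointing at the
    top-level message; the top-level row keeps parent_id = None (NULL).
    """
    rows = [{"external_id": mapped["id"], "parent_id": None}]
    for reply in mapped.get("replies", []):
        rows.append({"external_id": reply["id"],
                     "parent_id": db_ids.get(mapped["id"])})
    return rows

rows = flatten_message(
    {"id": "m1", "replies": [{"id": "m2"}, {"id": "m3"}]},
    db_ids={"m1": 7})
# Top-level row has parent_id None; both replies point at internal id 7.
```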
Crawl Users & User Profiles
The connector fetches users using the /api/users.list endpoint in pages. Documentation: https://api.slack.com/methods/users.list
The logic is the same for User & User Profile Learning. User Profile Learning is only needed when you want to ingest entries into the user_profile_attributes table; usually User Learning is enough to populate the users table.
Crawl Knowledge Bases
Whether you are using API Token authorization or the newer OAuth option of the Slack Integration, make sure that the user behind the token (or the application behind the OAuth credentials) is a member of the channels that you wish to ingest as KBs.
To add an App to a channel, simply mention it in a message in Slack:

Note: if you forget to do this, then you may see this error in the connector logs:
Error in slack response: not_in_channel
For Knowledge Base Learning, the Connector will ingest Channels as KBs. Every channel will be ingested as a unique document. During the DS set up in the Admin UI, the user will be prompted to select the Channels they wish to ingest from a dropdown list:

For KB Article Learning, we assume that the API Token is a workspace-specific token (not an org-wide token). The Admin UI makes an internal call to a connector-apiserver endpoint to fetch the available channels using the DS’s Integration and renders them in the drop-down list for the user to choose from.
For each channel, the content of each message (along with any replies) is converted to HTML content; all the HTML pieces are connected together to create the final KB. Images are downloaded and stored in Aisera cloud datastore, and other types of attachments are included in the Knowledge Base Article Body as a simple hyperlink.
API Endpoints:
Conversation History (Messages):
https://slack.com/api/conversations.history?channel=<ChannelId>&limit=500&include_all_metadata=true
Conversation Replies (Thread Messages) :
https://slack.com/api/conversations.replies?limit=999&channel=<ChannelId>&ts=<TopMessageTS>
Connector-ApiServer endpoint for Channel Discovery:
https://slack.com/api/conversations.list
Conversation Mapping
The Slack Connector ingests every top-level message in a Slack Conversation separately. Before consuming it and applying the field mappings, it enhances the record by injecting a JSON array (under the key processed_replies) into the JSON record. It also creates a processed_ts field inside the record, a unix timestamp that can be easily mapped. The following field mappings are loaded by default when creating a new Conversation Learning DS; they make use of both the original JSON fields from the Slack REST API and the Slack Connector’s generated fields.
The htmlBody field is also generated by the Slack Connector: the connector attempts to parse the blocks data of every message and create a single HTML Body. A transformation script can be designed if you wish to change the generation of the HTML Body for a specific use case.
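The record enhancement step can be sketched like this. The helper name is ours; only the two generated fields (processed_replies and processed_ts) come from the description above:

```python
def enhance_record(raw_message, thread_replies):
    """Mimic the connector's record enhancement before field mapping.

    Injects `processed_replies` (the thread's replies as a JSON array)
    and `processed_ts` (the Slack `ts` string converted to an integer
    unix timestamp, which is easy to map).
    """
    record = dict(raw_message)
    record["processed_replies"] = thread_replies
    # Slack timestamps look like "1700000000.123456": seconds.microseconds.
    record["processed_ts"] = int(float(raw_message["ts"]))
    return record

rec = enhance_record(
    {"ts": "1700000000.123456", "text": "hello"},
    thread_replies=[{"ts": "1700000100.000200", "text": "hi"}])
# rec now carries processed_ts == 1700000000 and one processed reply.
```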

Generic Config and Filtering Channels
In the Aisera internal DB, we have the following override datasource configuration for the Generic connector:
{"xml2json":false,"externalSystemTypeEnum":"SlackConnector","supportectContentTypes":["User","UserProfile","Conversation","KnowledgeArticle"],"contentTypeConfiguration":{"User":{"appliesToChildren":true,"httpContentType":"application/json","acceptContentType":"application/json","pagination":{"type":"cursor-param","cursorPath":"response_metadata.next_cursor","discardDot":false,"cursorParameter":"cursor","hasMoreField":false,"limitParam":"limit","defaultLimit":200,"defaultOffset":0,"dateFormat":"yyyy-MM-dd'T'HH:mm:ss'Z'","parameters":[],"tableEntriesArrayPath":"members.","tableEntryPath":"requester","isTableEntryJSONArray":false,"appendTopElement":false,"overrideWithDefaultLimit":false},"pathMappings":[{"path":"/api/users.list","pathByKey":"/api/users.list"}],"supportedOperations":["LIST"],"respectRequestPaginationInCount":false,"respectEndDate":false,"respectStartDate":false,"discardEntriesOutOfStartOrEndDate":false,"handleNullDatesConfiguration":false,"handleDefaultDatesConfiguration":false,"nullDateStartHoursBefore":0,"nullDateEndHoursAfter":0,"addElementAsArrayUnderTop":false,"downloadFiles":false,"timezone":"UTC","doNotFailOnNestedCall":false,"isAttachment":false,"isFileDownloadRequest":false,"prefetchConfigurations":[]},"UserProfile":{"appliesToChildren":true,"httpContentType":"application/json","acceptContentType":"application/json","pagination":{"type":"cursor-param","cursorPath":"response_metadata.next_cursor","discardDot":false,"cursorParameter":"cursor","hasMoreField":false,"limitParam":"limit","defaultLimit":200,"defaultOffset":0,"dateFormat":"yyyy-MM-dd'T'HH:mm:ss'Z'","parameters":[],"tableEntriesArrayPath":"members.","tableEntryPath":"requester","isTableEntryJSONArray":false,"appendTopElement":false,"overrideWithDefaultLimit":false},"pathMappings":[{"path":"/api/users.list","pathByKey":"/api/users.list"}],"supportedOperations":["LIST"],"respectRequestPaginationInCount":false,"respectEndDate":false,"respectStartDate":false,"discardEntriesOutOfStartOrEndDa
te":false,"handleNullDatesConfiguration":false,"handleDefaultDatesConfiguration":false,"nullDateStartHoursBefore":0,"nullDateEndHoursAfter":0,"addElementAsArrayUnderTop":false,"downloadFiles":false,"timezone":"UTC","doNotFailOnNestedCall":false,"isAttachment":false,"isFileDownloadRequest":false,"prefetchConfigurations":[]},"Conversation":{"appliesToChildren":true,"httpContentType":"application/json","acceptContentType":"application/json","pathMappings":[{"path":""}],"supportedOperations":["LIST"],"respectRequestPaginationInCount":false,"respectEndDate":false,"respectStartDate":false,"discardEntriesOutOfStartOrEndDate":false,"handleNullDatesConfiguration":true,"handleDefaultDatesConfiguration":true,"nullDateStartHoursBefore":0,"nullDateEndHoursAfter":0,"addElementAsArrayUnderTop":false,"innerAdapterClass":"com.aisera.externalsystems.slack.SlackAdapter","downloadFiles":false,"timezone":"UTC","doNotFailOnNestedCall":false,"isAttachment":false,"isFileDownloadRequest":false,"prefetchConfigurations":[]},"KnowledgeArticle":{"appliesToChildren":true,"httpContentType":"application/json","acceptContentType":"application/json","pathMappings":[{"path":""}],"supportedOperations":["LIST"],"respectRequestPaginationInCount":false,"respectEndDate":false,"respectStartDate":false,"discardEntriesOutOfStartOrEndDate":false,"handleNullDatesConfiguration":true,"handleDefaultDatesConfiguration":true,"nullDateStartHoursBefore":0,"nullDateEndHoursAfter":0,"addElementAsArrayUnderTop":false,"innerAdapterClass":"com.aisera.externalsystems.slack.SlackAdapter","downloadFiles":false,"timezone":"UTC","doNotFailOnNestedCall":false,"isAttachment":false,"isFileDownloadRequest":false,"prefetchConfigurations":[]}},"contextParameters":{},"basicTokenAuth":false,"startDate":true,"endDate":true,"sleepOnErrors":1200000,"retries":50,"rateLimitHeader":"x-ratelimit-remaining","rateLimitRespectNo":1500,"rateLimitEntriesSize":500,"sleepOnRateLimit":600000,"record":false,"useInterval":false,"interval":0,"disableFe
tchingServiceCatalogCategories":false,"multipleEntryPointIdsMap":{},"ingestGithubPullRequestCommits":false,"ingestGithubPullRequestReviewComments":false,"ingestGithubPullRequestComments":false,"externalSystemClient":{"configuration":{"host":"tenant-server","port":8088,"apiBasePath":"/tenant-server/v1"}},"secondReading":false,"hasImageProfile":true,"bypassNestedCallsForTestConnection":false,"useRawToken":false,"useRemoteExecutor":false,"useUserCredentials":false}
Data Source Set Up for KB Learning
For KB Article Learning (where channels are ingested as KBs), the user needs to select the channels they wish to ingest from the drop down list:

Important Notice: The date range settings are used by the Slack Connector to filter the top-level messages of each channel. You can use a start date far in the past to include all messages, or set a desired start date to make sure that only recent messages are included.
Using the “Incremental” setting for KB Learning is not suggested: KB Learning brings new KBs to the Knowledge Review, and since there is one KB per channel, recent crawls with this setting will bring up empty KBs.
The following field mapping should be loaded for DSs with the KB Article Learning function:

For each channel, the connector will process each unique message, and attempt to create an HTML Body out of it. The structure for each message will be:
Message: User: <userId> - Timestamp: <messageTimeStamp> - <HTML Content>
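The assembly of the per-channel KB body can be sketched as below. The exact wrapper tags are an illustrative choice, not the connector's verbatim output:

```python
def build_kb_body(messages):
    """Concatenate per-message HTML pieces into one KB body,
    following the Message: User/Timestamp/Content structure above."""
    pieces = []
    for m in messages:
        pieces.append(
            f"<p>Message: User: {m['user']} - Timestamp: {m['ts']} - "
            f"{m['html']}</p>")
    return "\n".join(pieces)

body = build_kb_body([
    {"user": "U1", "ts": "1.0", "html": "<b>hello</b>"},
    {"user": "U2", "ts": "2.0", "html": "world"},
])
# body holds one <p> block per message, in channel order.
```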
Data Source Set Up for Conversational Learning
For Conversations Learning (where every message of each channel is ingested separately using the new Conversation Data Model - MsngrMessage & MsngrChannel), the user needs to select the channels they wish to ingest from the drop down list:

Date Range Settings are used by the Slack Connector. The “Incremental” mode can be used along with a scheduler to create a DS that runs periodically to fetch only the updates from the external system.
There is a limitation in the Slack REST API: edited messages keep their original timestamp. An edited.ts field is present, and we map it to learn the last-updated date, but the filtering logic of the API does not check against this field. This means that a DS set to “Incremental” logic will miss edited messages, as they are filtered out as “old” messages by the REST API.
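The pitfall can be demonstrated with a small filter that mimics the API's behavior (filtering by the original ts only, never by edited.ts):

```python
def incremental_filter(messages, since_ts):
    """Illustrate why incremental crawls miss edited messages.

    The API filters history by the original `ts` only; `edited.ts` is
    never consulted, so an old message edited after the last crawl is
    still excluded from the results.
    """
    return [m for m in messages if float(m["ts"]) >= since_ts]

history = [
    {"ts": "100.0", "text": "old, recently edited",
     "edited": {"ts": "900.0"}},   # edited AFTER since_ts, yet dropped
    {"ts": "800.0", "text": "new message"},
]
kept = incremental_filter(history, since_ts=500.0)
# kept contains only the new message; the edited one is filtered out.
```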
The full Slack API reference is documented here: Slack APIs
The Slack application documentation is here: Slack Product Documentation
Create a Slack application for your company (Tenant)
Before you can integrate with the Aisera GenAI platform, you need to create a Slack application for your company.
Log in to the Slack workspace where your Aisera bot will be managed.
Go to https://api.slack.com/apps.
Click Create New App and select From scratch


Set:
a. The Application Name
b. The Slack Development Workspace

Setup Display information:
App name
Short description
App icon → Needs to be 512x512 px PNG file
Background color
Once done, Save Changes

OAuth Authorization:
From the menu on the left, navigate to OAuth and Permissions.
In the Scopes section, add (at least) the following scopes to your user or bot token:
User learning:
users:read
Conversation learning:
channels:read
groups:read
im:read
mpim:read
channels:history
groups:history
im:history
mpim:history
Install the tokens for the app (you must do this after adding the scopes above). Go to OAuth Tokens and click Install to <APP_NAME>.
Consent on the next screen by clicking Allow. After this step is complete, you will see the User OAuth Token under OAuth Tokens:

Basic Authentication:
Go to Basic Information in the left navigation menu of your Slack app. Copy:
Client ID
Client Secret
Verification Token
NOTE: You will need to enter these values into the Aisera Admin UI when you set up the Slack Integration on the Aisera application side.
Enable Events
From the left navigation menu, click Event subscriptions:
a. Enable Events
b. Expand Subscribe to bot events and click Add Bot User Events
c. Add the following events and Save changes
d. From the left side navigation, go to Interactivity & Shortcuts and enable it.

Set the Chatbot Endpoint
Set the chatbot endpoint as the Request URL in the Events window (above) and also in the Interactivity & Select Menus window.
The syntax is:
<chatbot-server-uri>/slack/receive.

The chatbot endpoint is located at the bottom of the Details page for your Aisera application or bot. Add this URL, followed by “/slack/receive”, as the Slack Request URL. It is verified instantly.
Integrating Slack into your Aisera Platform
In the Admin UI, navigate to Settings → Integrations and click + New Integration. From the list, pick Slack. Give it a name, and in the next step provide the access_token.

Add Slack as a Data Source for your Aisera Platform
In the Aisera Admin UI, navigate to Settings → Data Sources and click + New Data Source.
From the list, pick Slack.

Give the data source a Name and select the Integration.

Choose the learning functions that you want to run and the schedule of the data source.

Keep clicking Next until you reach the end of the Configuration wizard.
User Object - Default Mapping
The Slack integration has the following default mappings.
Aisera Field   System   Slack Field
FirstName      Aisera   $.profile.first_name
Identity       Aisera   id
LastName       Aisera   $.profile.last_name
UserEmail      Aisera   $.profile.email
UserStatus     Aisera   deleted (Field Type = Boolean; Value Mappings: Active = false, InActive = true)
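The UserStatus value mapping (deleted = false → Active, deleted = true → InActive) can be expressed as a one-line rule. A sketch of the mapping only; the function name is illustrative:

```python
def map_user_status(slack_user):
    """Apply the UserStatus value mapping from the table above:
    deleted == False -> Active, deleted == True -> InActive."""
    return "InActive" if slack_user.get("deleted") else "Active"

active = map_user_status({"id": "U1", "deleted": False})    # "Active"
inactive = map_user_status({"id": "U2", "deleted": True})   # "InActive"
```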
Upload a JSON file to import these values to a Data Source.
Install the Aisera application/bot into your Slack Workspace
The following steps must be done by a Slack Workspace member who has permission to install the Channel for your Aisera application or bot using the Aisera Admin UI.
Open a browser and log in to your Slack workspace.
On a different browser tab, login to the Aisera Admin UI.
Navigate to Settings -> Channels.
Click + Add New Channel.

Select Slack
Set values for (see the Basic Authentication values above):
Channel Name (Name your channel as it appears in Admin Console)
Client ID (will be provided by Aisera)
Client Secret (will be provided by Aisera)
Verification Token (will be provided by Aisera)
Make sure you have installed the app on the correct Slack workspace and click Allow.

You will be redirected back to Aisera’s admin dashboard and see a success message.
Navigate to Settings -> AiseraGPT.
Select the App to which you would like to add the newly created channel.
Scroll down to the Channels sections of your application's Details page.
Click Add Channel.
Select your new Slack channel.
Click OK.
The Aisera Virtual Assistant for Slack channel is now installed.