Call center agents often struggle to respond to challenging customer inquiries, especially during remote troubleshooting calls. To assist agents in real time and improve customer satisfaction, call centers can implement AI-assisted troubleshooting. One of the first AI call center solutions of this kind improved agent performance by 34%, and newer technology can offer even greater gains.

In this tutorial, you will build an AI call center assistant using Symbl’s intelligence APIs. The solution will stream real-time audio from Amazon Connect to Symbl via Amazon Kinesis and use Trackers, Nebula LLM, and retrieval augmented generation (RAG) to provide agents with real-time troubleshooting tips during phone conversations with customers.

Prerequisites

To follow along, you will need:

  • An AWS account with access to Amazon Connect and Amazon Kinesis
  • A Symbl account with an App ID and App Secret, plus a Nebula API key
  • A MongoDB Atlas cluster (or another vector DB) for storing embeddings
  • Python 3 with the boto3, requests, pymongo, and symbl packages installed

With these prerequisites in mind, let’s build your AI assistant for call center agents!

Set up streaming for phone conversations

In this initial step, you will stream audio data from Amazon Connect to Symbl using Amazon Kinesis. This will allow you to capture real-time conversation data for tracking and analysis.

1.1 Create a Kinesis data stream

Log in to your AWS Management Console, navigate to Amazon Kinesis, and create a new Kinesis Data Stream with an appropriate name (e.g. symblai-kinesis-data-stream).
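Alternatively, you can create the stream from code. Here is a minimal boto3 sketch; the stream name and shard count are illustrative.

import boto3

kinesis_client = boto3.client('kinesis')

# Create a data stream with a single shard (increase the count for higher throughput)
kinesis_client.create_stream(
    StreamName='symblai-kinesis-data-stream',
    ShardCount=1
)

# Wait until the stream becomes ACTIVE before writing to it
kinesis_client.get_waiter('stream_exists').wait(StreamName='symblai-kinesis-data-stream')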

1.2 Configure Amazon Connect to stream audio to Kinesis

In the AWS Management Console, go to Amazon Connect and follow the setup wizard to create an Amazon Connect instance if you don’t already have one. Then select your instance, go to Data streaming, enable data streaming, and select the Kinesis data stream you created.

1.3 Set up a Python script to consume Kinesis stream

Create a new Python file called audio_receiver.py with the following code.

import boto3
from botocore.config import Config

# Configure boto3 client
config = Config(
    retries = dict(
        max_attempts = 10
    )
)

kinesis_client = boto3.client('kinesis', config=config)

# Stream details
stream_name = 'symblai-kinesis-data-stream' # Replace with your stream name
consumer_name = 'my-local-consumer'

def process_record(data):
    # TODO: This will be completed later in this tutorial.
    pass

def register_consumer():
    try:
        response = kinesis_client.register_stream_consumer(
            StreamARN=get_stream_arn(stream_name),
            ConsumerName=consumer_name
        )
        print(f"Consumer {consumer_name} registered successfully.")
        return response['Consumer']['ConsumerARN']
    except kinesis_client.exceptions.ResourceInUseException:
        print(f"Consumer {consumer_name} already exists.")
        return get_consumer_arn()

def get_stream_arn(stream_name):
    response = kinesis_client.describe_stream(StreamName=stream_name)
    return response['StreamDescription']['StreamARN']

def get_consumer_arn():
    response = kinesis_client.describe_stream_consumer(
        StreamARN=get_stream_arn(stream_name),
        ConsumerName=consumer_name
    )
    return response['ConsumerDescription']['ConsumerARN']

def main():
    consumer_arn = register_consumer()
    shard_id = "" # Populate from kinesis
    starting_sequence_number = ""  # Populate from kinesis

    shard_iterator = kinesis_client.get_shard_iterator(
        StreamName=stream_name,
        ShardId=shard_id,
        ShardIteratorType='AT_SEQUENCE_NUMBER',
        StartingSequenceNumber=starting_sequence_number
    )['ShardIterator']

    response = kinesis_client.get_records(ShardIterator=shard_iterator, Limit=2000)

    for record in response['Records']:
        process_record(record['Data'])

if __name__ == "__main__":
    main()


This code registers a consumer for the Kinesis stream, then fetches audio records from the stream into your server code. In the next step, you will forward this audio to Symbl using the Streaming API.
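Note that main() leaves shard_id and starting_sequence_number empty; you will fill them in during testing. If you would rather look them up programmatically, a minimal boto3 sketch (assuming a single-shard stream) looks like this:

def get_shard_details(stream_name):
    # List the shards in the stream; a newly created stream typically has a single shard
    response = kinesis_client.list_shards(StreamName=stream_name)
    shard = response['Shards'][0]

    # Return the shard ID and the first sequence number in the shard's range
    return shard['ShardId'], shard['SequenceNumberRange']['StartingSequenceNumber']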

1.4 Push data to Symbl using websocket

Now you will initiate a websocket connection to Symbl using the Streaming API. You can use the Python SDK to initiate the connection and send audio data to Symbl with the following code.

...
import symbl

connection_object = None # Module-level so process_record can access the Symbl connection

def process_record(data):
    connection_object.send_audio(data)

def main():
    global connection_object

    ...

    response = kinesis_client.get_records(ShardIterator=shard_iterator, Limit=2000)

    connection_object = symbl.Streaming.start_connection() # Trackers will be added in Step 2

    for record in response['Records']:
        process_record(record['Data'])


This ensures that the audio data is being sent to Symbl for processing. However, you need to be able to capture the response when Symbl detects a special event. For this, you need to set up Trackers which will act as a trigger for the AI assistant. You also need to set up a retrieval augmented generation (RAG) system to help with fetching contextually relevant text as the final response.

Determine when call center agents receive AI assistance

In this step, you will set up Symbl Trackers that automatically identify when assistance is needed by listening to conversations between call center agents and customers in real time.

2.1 Set up a custom tracker with Symbl

Trackers are part of Symbl’s intent detection engine. They identify relevant events in any live conversation; in this case, moments where a call center agent might need AI-assisted troubleshooting help during a call.


To set up a tracker, log in to your Symbl account and create a custom tracker by navigating to Trackers Management > Create Custom Tracker.

  • Tracker Name: Troubleshooting Tracker
  • Description: This tracker identifies when a customer shares a problem they are facing.
  • Categories: Contact Center
  • Language: en-US (or any preferred language)
  • Vocabulary: unable to do, facing a problem, trouble, cannot do it, issue

On saving, you can locate this tracker under Trackers Management > Your Trackers.
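If you prefer to create the tracker programmatically instead of using the console, a minimal sketch against Symbl’s Trackers Management API is shown below. The endpoint and payload fields mirror the GET call used later in this tutorial and the console form, so treat them as assumptions and check them against the current Symbl docs.

import requests
import json

def create_troubleshooting_tracker():
    # Assumed single-tracker creation endpoint of the Trackers Management API
    url = "https://api.symbl.ai/v1/manage/tracker"

    payload = {
        "name": "Troubleshooting Tracker",
        "vocabulary": ["unable to do", "facing a problem", "trouble", "cannot do it", "issue"]
    }
    headers = {
        'Authorization': f'Bearer {generate_token()}', # generate_token is defined in Step 2.2
        'Content-Type': 'application/json'
    }

    response = requests.post(url, headers=headers, data=json.dumps(payload))
    return response.json()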

2.2 Fetch custom tracker details

When your custom tracker is detected in a streaming conversation, a tracker_response event is sent in the websocket connection. This event is created whenever the conversation contains any keyword or phrase in your custom tracker’s vocabulary.

To ensure this event is triggered, you will need to send your tracker metadata when initiating a Streaming API connection. You can get your custom tracker details using the following code.

...
def get_troubleshooting_tracker():
    troubleshooting_tracker_url = f"https://api.symbl.ai/v1/manage/trackers?name={requests.utils.quote('Troubleshooting Tracker')}"

    headers = {
        'Authorization': f'Bearer {generate_token()}'
    }
    response = requests.request("GET", troubleshooting_tracker_url, headers=headers)
    troubleshooting_tracker = json.loads(response.text)['trackers']
    return troubleshooting_tracker

def main():

    ...
    response = kinesis_client.get_records(ShardIterator=shard_iterator, Limit=2000)

    trackers = get_troubleshooting_tracker()
    ...


Now generate a token using the following code.

import requests

def generate_token():
    APP_ID = "SYMBL_APP_ID" # Replace with your Symbl AppId
    APP_SECRET = "SYMBL_APP_SECRET" # Replace with your Symbl AppSecret
    url = "https://api.symbl.ai/oauth2/token:generate"

    payload = {
        "type": "application",
        "appId": APP_ID,
        "appSecret": APP_SECRET
    }
    headers = {
        "accept": "application/json",
        "content-type": "application/json"
    }

    response = requests.post(url, json=payload, headers=headers)
    # The endpoint returns JSON; extract the access token for use in Authorization headers
    return response.json()['accessToken']


2.3 Subscribe to tracker_response event with the Streaming API

When starting the websocket connection using the Streaming API, you will have to subscribe to the special Symbl events and handle them when triggered.

In this case, you will capture and handle a single event: the tracker_response event (view list of other supported events). This event is triggered when a customer mentions any of the words you added in the tracker vocabulary.

def handle_tracker_response(tracker):
    # TODO: You will populate this in the next section.
    pass
...
def main():

    ...
    response = kinesis_client.get_records(ShardIterator=shard_iterator, Limit=2000)

    trackers = get_troubleshooting_tracker()

    connection_object = symbl.Streaming.start_connection(trackers=trackers)

    events = {
        'tracker_response': lambda tracker: handle_tracker_response(tracker)
    }
    connection_object.subscribe(events)

    for record in response['Records']:
        process_record(record['Data'])


You have now established a connection, and the live audio chunks are being sent over the websocket. When your custom tracker is detected, Symbl will send the tracker_response event.
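For reference, the payload of a tracker_response event looks roughly like the following; the structure is inferred from how the handler in Step 3.6 reads it, and any fields beyond those are illustrative.

# Approximate shape of a tracker_response event, inferred from the handler in Step 3.6
# (tracker['trackers'][0]['matches'][0]['value']); extra fields are illustrative.
sample_tracker_response = {
    "trackers": [
        {
            "name": "Troubleshooting Tracker",
            "matches": [
                {
                    "value": "facing a problem", # The vocabulary phrase that was detected
                    "messageRefs": [] # References to the transcript messages containing the match
                }
            ]
        }
    ]
}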

Provide agents with real-time AI assistance

In this step, you will leverage Symbl’s proprietary Nebula LLM to display AI-assisted troubleshooting guidance.

Before you can use the Nebula LLM to fetch relevant contextual responses, you need to preprocess and store your knowledge data for efficient querying.

3.1 Create vector embeddings from knowledge data 

Knowledge data is internal data specific to your organization. You can create embeddings from it using the Embedding API, which uses the Nebula embedding model to create vector embeddings from conversations, documents, or text data. A vector embedding is a numerical representation of text that is used to compare and identify text with similar characteristics.

import requests
import json

NEBULA_API_KEY = "" # Replace with your Nebula API Key

def get_vector_embeddings(data):
    url = "https://api-nebula.symbl.ai/v1/model/embed"

    payload = json.dumps({
        "text": data # You’ll replace this in the next step
    })
    headers = {
        'ApiKey': NEBULA_API_KEY, # Replace with your Nebula API Key
        'Content-Type': 'application/json'
    }

    response = requests.request("POST", url, headers=headers, data=payload)

    return response.text


You can use the sample knowledge corpus defined in the Appendix at the end of this tutorial. It contains metadata and troubleshooting steps for a couple of issues across three imaginary products. It is structured so that lines starting with "----" (four hyphens) act as keys that align with the intent in our situation, and the text after each key, up to the next "----", acts as the corresponding knowledge value.
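For example, one entry from the corpus (reproduced from the Appendix) looks like this:

---- SmartHome Hub X1 Device Not Responding:

Check if the power cable is securely connected.
Verify that your Wi-Fi network is functioning properly.
Restart the hub by unplugging it for 30 seconds, then plugging it back in.
If issues persist, perform a factory reset using the pinhole button.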

3.2 Store vector embeddings in a vector DB for efficient search

You can now create vector embeddings for the “key” and store them along with the corresponding data in any vector DB. These vector embeddings will allow you to query the vector DB efficiently and fetch associated data.

You can use any vector DB, but for this implementation we’ll use MongoDB Atlas.

After setting up MongoDB Atlas and creating a database (mydb) and a collection (mycollection), you can establish the connection using the MongoDB connection URI. Store that value in MONGODB_URI for use when storing to and retrieving from the DB.
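A typical Atlas connection string looks like the following; the username, password, and cluster host are placeholders for your own values.

# Placeholder values; copy the real connection string from the Atlas "Connect" dialog
MONGODB_URI = "mongodb+srv://<username>:<password>@<cluster-host>.mongodb.net/?retryWrites=true&w=majority"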

Using the code below, you can insert documents containing these embeddings and associated data into the MongoDB collection created above.

import pymongo
import json

def get_vector_embeddings(data):
    ... # Same as previously defined in Step 3.1


def parse_text(text):
    data = []
    current_key = None
    current_value = []

    for line in text.splitlines():
        line = line.strip()

        # Start of a new section
        if line.startswith("----"):
            if current_key and current_value:
                data.append({
                    "key": current_key,
                    "value": "\n".join(current_value)
                })
                current_value = []

            # Extract key from the same line as ----
            current_key = line[4:].strip() 
        
        # Handle lines within a section (not empty and not a key line)
        elif current_key and line and not line.startswith("----"):
            current_value.append(line)

    # Capture the last section if it exists
    if current_key and current_value:
        data.append({
            "key": current_key,
            "value": "\n".join(current_value)
        })

    return data

def open_mongo_db_connection():
    mongoclient = pymongo.MongoClient(MONGODB_URI) # Replace with your MongoDB URI
    db = mongoclient['mydb'] # Replace with your database name
    collection = db['mycollection'] # Replace with your collection name
    return collection

def populate_knowledge_data():

    parsed_data = []
    with open('knowledge_data.txt') as f:
        parsed_data = parse_text(f.read())

    collection = open_mongo_db_connection()

    for data in parsed_data:
        key_embedding = get_vector_embeddings(data['key'])

        document = {
            'data': data['value'],
            'embedding': json.loads(key_embedding)['embedding']
        }
        collection.insert_one(document)


The parse_text function is specific to the sample knowledge corpus defined in the Appendix. For your own knowledge corpus, modify the function to suit your needs.

3.3 Configure a vector DB search index

Follow the instructions in Create an Atlas Vector Search Index to create a search index. You will use this index to run efficient similarity searches over the stored embeddings. Provide a name for the index (my_index) and specify the following fields:

  • numDimensions: length of the vector embeddings; 1024 for the Nebula Embedding Model 
  • path: field over which the vector embedding similarity search is carried out 
  • similarity: metric used for calculating similarity
{
    "fields": [
        {
            "numDimensions": 1024,
            "path": "embedding",
            "similarity": "cosine",
            "type": "vector"
        }
    ]
}
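If you would rather create the index from code than the Atlas UI, a minimal sketch using pymongo’s search index helpers is shown below. This assumes a recent pymongo version that supports vector search indexes and reuses the open_mongo_db_connection helper defined earlier.

from pymongo.operations import SearchIndexModel

def create_vector_search_index():
    collection = open_mongo_db_connection()

    index_model = SearchIndexModel(
        definition={
            "fields": [
                {
                    "numDimensions": 1024,
                    "path": "embedding",
                    "similarity": "cosine",
                    "type": "vector"
                }
            ]
        },
        name="my_index",
        type="vectorSearch" # Requires a pymongo version that supports vector search indexes
    )
    collection.create_search_index(model=index_model)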


3.4 Retrieve knowledge context with vector search on tracker detection

Whenever your custom tracker is detected, you want to fetch data relevant to the tracker. To do this, you need to run a vector index search on your vector DB using the detected tracker. This will fetch only the data related to your tracker. 

This data, along with other details, will be added as context for the Nebula LLM chat and will help you get a useful response from it. This is where RAG comes into play.

For the detected tracker, you need to create vector embeddings and do a vector search on the DB to retrieve the contextual knowledge. 

Use the following code to retrieve the knowledge context from the vector DB.

def vector_index_search(tracker):
    collection = open_mongo_db_connection()
    tracker_embedding = get_vector_embeddings(tracker)
    tracker_embedding_vector = json.loads(tracker_embedding)['embedding']
    retrieved_context = collection.aggregate([
        {
            "$vectorSearch": {
                "queryVector": tracker_embedding_vector,
                "path": "embedding",
                "numCandidates": 10, # Total number of embeddings considered from the database
                "limit": 1, # Number of closest embeddings returned
                "index": "my_index"
            }
        }
    ])
    return next(retrieved_context, None)


3.5 Build prompt for Nebula using retrieved knowledge context and transcript

Now that you have access to the contextual knowledge, you can pass this to Nebula along with the transcript of the chat to get a response. You can set up a system prompt to specify its response behavior and give it all the relevant context.

To chat with Nebula, use the Nebula Chat API as described below. This will return a response from the Nebula LLM which can be shared with the customer support agent. 

NEBULA_CHAT_URI = "https://api-nebula.symbl.ai/v1/model/chat" # Nebula Chat API endpoint; verify against current Symbl docs

def get_nebula_response(conversation, relevant_info):
    payload = json.dumps({
        "max_new_tokens": 1024,
        "system_prompt": f"You are a customer support agent assistant. You help the agents perform their job better by providing them relevant answers for their inputs. You are respectful, professional and you always respond politely. You also respond in clear and concise terms. The agent is currently on a call with a customer. Relevant information: {relevant_info} . Recent conversation transcript: {conversation}",
        "messages": [
            {
                "role": "human",
                "text": "Hello, I am a customer support agent. I would like to help my customers based on the given context."
            },
            {
                "role": "assistant",
                "text": "Hello. I'm here to assist you."
            },
            {
                "role": "human",
                "text": "Given the customer issue, provide me with the most helpful details that will help me resolve the customer’s issue quickly."
            }
        ]
    })

    headers = {
        'ApiKey': NEBULA_API_KEY, # Replace with your value
        'Content-Type': 'application/json'
    }

    response = requests.request("POST", NEBULA_CHAT_URI, headers=headers, data=payload)
    print(json.loads(response.text)['messages'][-1]['text'])


3.6 Handle tracker_response and transcript

Now that you have all the other functionalities in place, you can update your tracker_response event handler to extract the conversation transcript and the tracker value, query the knowledge database using the tracker value, and get relevant knowledge data. 

This is the final step of the integration.

def handle_tracker_response(tracker):

    tracker_value = tracker['trackers'][0]['matches'][0]['value']

    conversation_message = '\n'.join([x.text for x in connection_object.conversation.get_messages().messages])

    relevant_info = vector_index_search(tracker_value)['data']

    get_nebula_response(conversation_message, relevant_info)

def vector_index_search(tracker):
    ... # Already defined in Step 3.4

def get_nebula_response(conversation, relevant_info):
    ... # Already defined in Step 3.5

Testing

To test your AI call center solution using the sample input defined in the GitHub repository, run the following commands.

% source venv/bin/activate # Activate virtual environment

% python store_vector.py # To store sample knowledge data into vector DB

% python audio_streamer.py # To store sample audio data in kinesis

% python audio_receiver.py # To fetch data from kinesis and call Nebula for coherent response

Note the shardId and sequence number from the first response printed by python audio_streamer.py, then provide them when prompted while running python audio_receiver.py.
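For reference, these values come from the response of the Kinesis put_record call that audio_streamer.py presumably makes; a generic boto3 sketch of that call (the stream name and partition key are illustrative) looks like this:

# Generic boto3 example; audio_streamer.py in the repository handles this for you.
response = kinesis_client.put_record(
    StreamName='symblai-kinesis-data-stream',
    Data=audio_chunk, # Bytes of the audio chunk being streamed
    PartitionKey='call-audio' # Illustrative partition key
)
print(response['ShardId'], response['SequenceNumber'])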

You should see an output similar to this:

Sure, here are some helpful details to assist you in resolving the customer's issue:

1. Ask the customer to open the HomeSync app and go to the 'Automations' tab.
2. Have them review each automation rule to ensure all devices are online and connected to the hub.
3. Check that the trigger conditions and actions are correctly set for each automation rule.
4. If there are any problematic rules, suggest deleting them and recreating them.
5. If the issue persists, advise the customer to restart the hub by unplugging it for 10 seconds.
6. If the issue still isn't resolved, ask the customer to check if their Wi-Fi router is working properly and if there are any other devices experiencing connectivity issues.
7. If necessary, suggest contacting the router manufacturer or internet service provider for further assistance.

Conclusion

In this tutorial, you’ve learned how to integrate Amazon Connect with Symbl to create an AI-assisted call center solution. With an AI assistant for agents that leverages Symbl’s real-time speech analysis, custom trackers, and Nebula LLM, call center personnel can provide faster, more efficient support to customers.

You can take this solution further by integrating it with a customer relationship management (CRM) system to provide personalized assistance based on customer history. You can then use data gathered on calls to generate reports and insights with Symbl that can improve the overall performance of your call center.

Appendix

This is the sample knowledge corpus defined in the tutorial. You can store this in your knowledge_data.txt file.

---- SmartHome Hub X1 METADATA

Manufacturer: SmartHome Solutions

Category: Smart Home Hub

Wi-Fi Compatibility: 2.4GHz only

Key Features:

Voice control integration
Mobile app control
Compatible with 100+ smart devices
Energy monitoring

---- SmartHome Hub X1 Wi-Fi Connection Issues:

Ensure your smartphone is connected to a 2.4GHz Wi-Fi network.
Locate the reset pinhole on the back of the device.
Use a paperclip to press and hold the reset button for 10 seconds until the LED flashes blue.
Open the SmartHome app and select 'Add New Device'.
Choose 'SmartHome Hub X1' from the list.
Enter your Wi-Fi password when prompted.
Wait for the connection process to complete (LED will turn solid green when successful).

---- SmartHome Hub X1 Device Not Responding:

Check if the power cable is securely connected.
Verify that your Wi-Fi network is functioning properly.
Restart the hub by unplugging it for 30 seconds, then plugging it back in.
If issues persist, perform a factory reset using the pinhole button.

---- ConnectTech METADATA

Category: Smart Home Hub

Wi-Fi Compatibility: Dual-band (2.4GHz and 5GHz)

Key Features:

Alexa and Google Assistant integration
Z-Wave and Zigbee compatible
IFTTT support
Advanced automation rules

---- ConnectTech Wi-Fi Connection Issues:

Ensure your smartphone is connected to your home Wi-Fi network.
Press and hold the 'Connect' button on top of the device for 5 seconds until the LED blinks white.
Open the ConnectHome app and tap 'Add Device'.
Select 'ConnectHome Central 2000' from the list.
Choose your preferred Wi-Fi network (2.4GHz or 5GHz) and enter the password.
Wait for the connection process to complete (LED will turn solid blue when successful).

---- ConnectTech Device Pairing Issues:

Put your smart device into pairing mode (refer to device manual).
In the ConnectHome app, select 'Add Device' and choose the device type.
Follow the in-app instructions for your specific device.
If pairing fails, move the device closer to the hub and try again.
For stubborn devices, try resetting them to factory settings before pairing.

---- SyncTech Solutions METADATA

Category: Smart Home Hub

Wi-Fi Compatibility: 2.4GHz only

Key Features:

Local processing for faster response
Customizable automation rules
Open API for developers
Energy usage insights

---- SyncTech Solutions Wi-Fi Connection Issues:

Connect your smartphone to your 2.4GHz Wi-Fi network.
Power on the HomeSync Controller 500.
Wait for the LED to blink green, indicating it's ready to connect.
Open the HomeSync app and tap the '+' icon.
Select 'Add Hub' and choose 'HomeSync Controller 500'.
Follow the on-screen instructions to enter your Wi-Fi credentials.
The LED will turn solid green once successfully connected.

---- SyncTech Solutions Automation Issues:

Open the HomeSync app and go to the 'Automations' tab.
Review each automation rule to ensure all devices are online.
Check that trigger conditions and actions are correctly set.
Try deleting problematic rules and recreating them.
If issues persist, restart the hub by unplugging it for 10 seconds.