Use Async Audio API in your React App

We recently launched a new Async Audio API that you can use to process audio files and generate transcription and insights such as topics, action items, follow-ups, and questions. In this blog, we will show you how to use the Async Audio API in your React application.

Requirements

Before we get started, you will need to make sure to have:

Setup React Project

To create a project with the Material-UI CSS library, run:
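If you haven't created the project yet, the commands look roughly like this (the project name is a placeholder, and `@material-ui/core` was the Material-UI package name at the time of writing):

```shell
npx create-react-app symbl-async-audio
cd symbl-async-audio
npm install @material-ui/core
npm start
```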

Go to the ./src folder and delete all files except index.js and App.js. Clean up the import statements in both files and return “Hello World” in App.js to test the application, as shown below:

./src/index.js


./src/App.js

If you go to http://localhost:3000 in the browser, you should be able to see “Hello World”.

Getting Started

We will be creating a React app that uploads an audio file and returns topics from that audio. We will be using the Async Audio API to process an audio file, the Job API to retrieve the status of an ongoing async audio request, and the Conversation API to return the topics.

We will stick with a good React folder structure, so create a components folder inside ./src with the following folders and files.


Add the following inside the index.js files:

.../Audio/index.js


.../Topics/index.js

In this tutorial, we will not be making the API request for authentication from a server – we will be hardcoding the access token inside auth.json, which can be used temporarily for the app. Replace [ACCESS_TOKEN] with your temporary access token generated from the POST Authentication request using your appId and appSecret from the Symbl Platform. You can use the Postman app to generate this access token as shown in this YouTube video.

Building The App

Initialize Context Store

We are creating store.js to hold our global state using the Context API. The reason we use the Context API is that we want the Topics component to make an API request only when there’s a new id state from the Audio component, and we cannot pass that state between sibling components with props.

If we were not using context for global state management, we would have to place the Topics component under the Audio component and pass down the state using props. While this would also work, it would create an additional nested folder, which is not good practice.

Create idContext, loadingContext, and successContext like the following in our store.js file.

./src/store.js

Wrap <App/> with <Store/> inside index.js file.

./src/index.js

Creating UI for uploading Audio

Let’s create an input button in the Audio component to upload the audio file by writing the following code in the Audio.js file.

.../Audio/Audio.js


In the above code snippet, we are creating a <Button/> with an onClick event listener that triggers an <input/> form through React’s useRef so that we can upload our audio file. The <input/> form will only accept audio MIME type files such as “audio/mpeg”, “audio/wav”, and “audio/wave”.

The <input/> form has an onChange event listener that will update the file state with the new audio file path. It is important to note that we are using useRef here in order to get the reference object of the uploaded audio file instead of a shallow copy of the file path, which would appear as the string “C://fakepath/audiofilename.wav”.

Upon uploading, while the loading state is true, it will render a <CircularProgress/> until the API call in the custom hook useAudioAsyncAPI completes.

Fetch Async Audio API and Job API in useEffect

Let’s code the fetch request using fetch in React’s useEffect inside the hooks.js file.

.../Audio/hooks.js
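Stripped of its React specifics, the hook’s job is: POST the file to the Async Audio API, then poll the Job API until the job finishes. Here is a framework-free sketch of that logic; the helper names and injected `fetchImpl` are our own conventions, and the auth header should be verified against the current Symbl docs:

```javascript
// Framework-free sketch of the logic inside useAudioAsyncAPI.
// `fetchImpl` is injected so the flow is easy to test; in the hook
// you would pass the browser's fetch.
const API_BASE = 'https://api.symbl.ai/v1';

// Submit the raw audio file to the Async Audio API.
async function submitAudio(file, token, fetchImpl) {
  const res = await fetchImpl(`${API_BASE}/process/audio`, {
    method: 'POST',
    // Header name is an assumption -- check the current Symbl docs.
    headers: { 'x-api-key': token, 'Content-Type': file.type },
    body: file,
  });
  // The response carries a jobId (for polling) and a conversationId.
  return res.json();
}

// Poll the Job API until the async request is no longer in progress.
async function pollJob(jobId, token, fetchImpl, intervalMs = 1000) {
  for (;;) {
    const res = await fetchImpl(`${API_BASE}/job/${jobId}`, {
      headers: { 'x-api-key': token },
    });
    const { status } = await res.json();
    if (status !== 'in_progress') return status;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```

In the hook, once `pollJob` resolves with a completed status, you would set the global id state to the returned conversationId and flip the loading flag.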

Creating UI for Topics Generated

Of course, we will need to create a UI to display the topics generated from the Conversation API. We will use <Chip/> from Material-UI to display each topic.

.../Topics/Topics.js

Fetch Conversation API in useEffect

Once the id state is updated or changed, we will make an API request to Symbl’s Conversation API using the Conversation ID returned from the previous useEffect hook. Let’s code the fetch request that generates the Topics like the following:

.../Topics/hooks.js


In the code above, when the id state is truthy, we make an API call using fetch to Symbl’s Conversation API (specifically, GET Topics) by specifying the Conversation ID in the API URL, which returns the topics generated from the uploaded audio file.
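As a rough, framework-free sketch of that request (the helper name and auth header are illustrative assumptions):

```javascript
// Sketch of the fetch inside the Topics hook, using Symbl's GET Topics
// route. `fetchImpl` is injected so it is easy to test.
async function getTopics(conversationId, token, fetchImpl) {
  const res = await fetchImpl(
    `https://api.symbl.ai/v1/conversations/${conversationId}/topics`,
    { headers: { 'x-api-key': token } }
  );
  const { topics = [] } = await res.json();
  // Keep just the text of each topic for rendering as <Chip/> labels.
  return topics.map((t) => t.text);
}
```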

We can delete .../Topics/style.js since we don’t need it.

Rendering Audio and Topics components

We only want to render the Topics component after the audio has been uploaded and topics have been generated, and only then populate the UI with the topics. For this, we can again use the global id state and write a conditional rendering statement using &&. We can remove the previous “Hello World” statement.

./src/App.js


Let’s add some responsive layout styling with padding using Material-UI’s <Grid/> and <Box/>.

./src/App.js

Run The App

And that’s it! Now you have a fully functional React app that can take an audio file (in mono audio format), process it, show loading progress while it is being processed, and display all the topic keywords from the conversation in the audio file. You can go to http://localhost:3000 to view the running app and test it.

You can find the full open source code for this app on our Github.

Check out our API docs if you want to customize this integration further using the APIs that Symbl provides.

Build your own Salesforce Conversational AI with Twilio Flex

Did you know you can capture real-time action items from conversations with customers, automatically push these items to a customer database, and never have to worry about missing another important task or feverishly scribble customer notes?

We will show you how to connect the Symbl voice API to Twilio Media Streams to access real-time raw audio and easily generate actionable insights and tasks from these audio streams. Take this workflow automation one step further—connect the Salesforce Opportunities dashboard to Twilio Flex to automatically send these valuable insights and tasks to the Salesforce CRM platform.

This blog post will guide you step-by-step through the workflow.

Requirements

Before we can get started, you’ll need to make sure to have:

To start, you’ll need to configure Twilio Media Streams and a Symbl Websocket server. If you haven’t already done so, refer to this blog post first to set up your environment.

Getting Started

At this point, you should be able to stream audio through your Websocket server into Twilio Flex and generate insights.

In order to take those generated insights and push them to your Salesforce dashboard in real-time, we will start by updating the `index.js` file in your Websocket server.

In the code sample above, we are modifying the connection start block with a POST request that creates a new opportunity in your Salesforce dashboard with the provided json payload. You can configure this payload as needed.

  1. Once the request is successful, we want to save the opportunity id in a variable so that we can use it to push action items next. (Note: To get your Salesforce Authentication Token, refer to this guide.)
  2. Next, when action items are detected, we want to capture those insights and add them to our Salesforce dashboard under the opportunity we created.
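A minimal sketch of step 1, assuming the standard Salesforce REST `sobjects/Opportunity` endpoint and an injected `fetchImpl`; the instance URL, API version, and field values are placeholders to adapt:

```javascript
// Sketch of the Opportunity POST added to the connection-start block.
function buildOpportunity(callerName) {
  return {
    Name: `Call with ${callerName}`,
    StageName: 'Prospecting',                          // required by Salesforce
    CloseDate: new Date().toISOString().slice(0, 10),  // required, YYYY-MM-DD
  };
}

async function createOpportunity(instanceUrl, authToken, payload, fetchImpl) {
  const res = await fetchImpl(
    `${instanceUrl}/services/data/v52.0/sobjects/Opportunity`,
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${authToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(payload),
    }
  );
  // Save this id -- it is used to attach action-item tasks later.
  const { id } = await res.json();
  return id;
}
```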

To do this, we will modify the client_connection.on('message') handler.
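As a sketch of what that handler change can look like, here is the insight-to-task mapping on its own; the insight shape and helper names are assumptions, while `Subject`, `WhatId`, and `Status` are standard Salesforce Task fields:

```javascript
// Map a Symbl action-item insight to a Salesforce Task attached to the
// saved opportunity (WhatId links the task to the opportunity).
function insightToTask(insight, opportunityId) {
  return {
    Subject: insight.payload.content, // the action item text
    WhatId: opportunityId,
    Status: 'Not Started',
  };
}

// Inside client_connection.on('message'), only action_item insights
// get pushed to Salesforce:
function tasksFromInsights(insights, opportunityId) {
  return insights
    .filter((i) => i.type === 'action_item')
    .map((i) => insightToTask(i, opportunityId));
}
```

Each returned task would then be POSTed to the `sobjects/Task` endpoint the same way the opportunity was created.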

And that’s it! With this integration, your Salesforce dashboard should now have opportunities dynamically created from your calls, and all action items generated will be logged as tasks within the opportunity.

If we head over to Salesforce, we can see an opportunity was created: Perspective Meeting with Magpie

If we dive into that opportunity, we can see that the action items that were generated by Symbl have been logged as tasks in the opportunity, automatically.

 

You can use other Salesforce APIs with Symbl to customize how your action items and topics are displayed on your opportunity dashboard. For example, you can add call logs like in the image above and show the Topics that were generated from your sales call directly in the Description field.

Read about the different Salesforce APIs.

Test out the integration

To test out the integration, navigate to the Twilio Flex tab and click on Launch Flex:

On your flex dashboard, locate your Twilio phone number and call that number from your cellular device.

When you accept the call from Flex, the audio will be streamed through Symbl’s WebSocket API and based on how you’ve configured your API calls for Salesforce, those insights will be logged in your dashboard. Open up your Salesforce dashboard and you’ll see the opportunity being created and insights logging in the opportunity in real-time.

Check out our API docs if you want to customize this integration further using the APIs that Symbl provides.

Wrapping up

Congratulations! You can now harness the power of Symbl to empower your sales team to focus on having a great conversation experience with customers and be free of any distracting activities while on the call.

Sign up to start building!

Need additional help? You can refer to our API Docs for more information and view our sample projects on Github.

How to Use Symbl’s Voice SDK to Generate Insights in Your Own Applications

Telephony services make the modern workplace, well, work. Enhance your existing telephony system capabilities by integrating Symbl’s Voice SDK.

How: Our SDK analyzes voice conversations on SIP or PSTN networks and generates actionable outcomes through contextual conversation intelligence. Products like call center applications, audio conferencing, PBX systems, and other communication applications that support a telephony interface can use this SDK to provide real-time or post-conversation intelligence to their customers.

What to expect: You’ll receive a post-conversation summary link in the final response.

How to get the analyzed data: Refer to the Conversation API to get the JSON response of the analyzed conversation in the form of transcripts, action items, topics, follow-ups, questions, and more.

For this guide, we will be using a phone number (PSTN) to make the connection with Symbl. It can be easily interchanged with a SIP URI.

Requirements

Before we can get started, you’ll just need to make sure to have:

  • Node.js installed on your system. Get the latest version here.
  • A Symbl account. Sign up here.

Getting Started

First, create an index.js file where we will be writing all our code.

To work with the SDK, you must have a valid app id and app secret.

| If you don’t already have your app id or app secret, log in to the platform to get your credentials.

For any invalid appId and appSecret combination, the SDK will throw Unauthorized errors.

Initialize the Client SDK
1. Use the command below to install the SDK and add it to your npm project’s package.json.
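At the time of this post the SDK was published as `symbl-node`; check Symbl’s docs for the current package name before installing:

```shell
npm install symbl-node --save
```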

2. Reference the SDK

| Finally, initialize the SDK

Connect to Endpoints

Now that we have successfully initialized the SDK, we can begin connecting to endpoints.

This SDK supports dialing through a simple phone number – PSTN endpoint.

What is PSTN?

The Public Switched Telephone Network (PSTN) is the network that carries your calls when you dial in from a landline or cell phone. It refers to the worldwide network of voice-carrying telephone infrastructure, including privately owned and government-owned infrastructure.

For this guide, we will be using PSTN to make the connection. Refer to our blog post [here](https://symbl.ai/blogs/ai) to see how to connect using a SIP URI instead.

| The code snippet below dials in using PSTN and hangs up after 60 seconds.
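A sketch of that flow, written against an injected `sdk` object (the one obtained from sdk.init); `startEndpoint` and `stopEndpoint` are the SDK calls described below, but the exact option names should be verified against the SDK docs:

```javascript
// Sketch of dialing in over PSTN and hanging up after a fixed duration.
// `sdk` is passed in so the flow is easy to follow (and to test with a stub).
async function dialAndHangUp(sdk, phoneNumber, durationMs = 60000) {
  const connection = await sdk.startEndpoint({
    endpoint: { type: 'pstn', phoneNumber },
  });
  // Stream voice data for `durationMs`, then end the connection.
  await new Promise((resolve) => setTimeout(resolve, durationMs));
  await sdk.stopEndpoint({ connectionId: connection.connectionId });
  return connection.connectionId;
}
```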

The above code snippet initializes the SDK, uses sdk.startEndpoint to connect to the PSTN endpoint, streams voice data for 60 seconds, and then uses sdk.stopEndpoint to end the connection.

Push Events

Events can be pushed to an ongoing connection to have them processed.

Every event must have a type to define the purpose of the event at a more granular level, usually to indicate different activities associated with the event resource. For example, a “speaker” event can have the type started_speaking. An event may have additional fields specific to the event.

Currently, Symbl only supports the speaker event which is described below.

Speaker Event

The speaker event is associated with different individual attendees in the meeting or session. An example of a speaker event is shown below.

Speaker Event has the following types:

started_speaking

This event contains the details of the user who started speaking, along with a timestamp in ISO 8601 format for when they started speaking.

stopped_speaking

This event contains the details of the user who stopped speaking, along with a timestamp in ISO 8601 format for when they stopped speaking.

A startedSpeaking event is pushed on the ongoing connection. You can use the pushEventOnConnection() method from the SDK to push the events.
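A sketch of what building and pushing such an event can look like; the event shape follows the description above, and the helper names are our own:

```javascript
// Build a speaker event with a type ('started_speaking' or
// 'stopped_speaking'), the user's details, and an ISO 8601 timestamp.
function speakerEvent(type, user, timestamp = new Date().toISOString()) {
  return { type, user, timestamp };
}

// Push the event onto the ongoing connection via the SDK.
function pushSpeakerEvent(sdk, connectionId, event) {
  sdk.pushEventOnConnection(connectionId, event, (err) => {
    if (err) console.error('Error while pushing the event', err);
  });
}
```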

Complete Example

Above is a quick simulated speaker event example that:

1. Initializes the SDK
2. Initiates a connection using PSTN
3. Sends a speaker event of type `startedSpeaking` for user John
4. Sends a speaker event of type `stoppedSpeaking` for user John
5. Ends the connection with the endpoint

Strictly for illustration and understanding purposes, this example pushes events by simply calling the setTimeout() method periodically, but in real usage, you should detect these events and push them as they occur.

Send Summary Email

An action, sendSummaryEmail, can be passed at the time of making the startEndpoint() call to send the summary email to the addresses listed in the parameters.emails array. The email will be sent as soon as all pending processing is finished after stopEndpoint() is executed. The code snippet below shows the use of actions to send a summary email on stop.

Optionally, you can send the title of the meeting and the list of participants, both of which will also be present in the summary email.

To send the title of the meeting populate the data.session.name field with the meeting title.

To send the list of meeting attendees, populate the list of attendees in the user objects in the `data.session.users` field as shown in the example. To indicate the organizer or host of the meeting, set the `role` field in the corresponding user object.

| Setting the timestamp for speakerEvent is optional, but it is recommended to provide accurate timestamps in the events as they occur, to get more precision.

Output

This is an example of the summary page you can expect to receive at the end of your call.

Tuning your Summary Page

You can choose to tune your summary page with the help of query parameters to play with different configurations and see how the results look.

Query Parameters

You can configure the summary page by passing in the configuration through query parameters in the summary page URL that gets generated at the end of your meeting. See the end of the URL in this example:

`https://meetinginsights.symbl.ai/meeting/#/eyJ1…I0Nz?insights.minScore=0.95&topics.orderBy=position`

| Query Parameter | Default Value | Supported Values | Description |
|---|---|---|---|
| insights.minScore | 0.8 | 0.5 to 1.0 | Minimum score that the summary page should use to render the insights |
| insights.enableAssignee | false | [true, false] | Enable or disable rendering of the assignee and due date of the insight |
| insights.enableAddToCalendarSuggestion | true | [true, false] | Enable or disable the add-to-calendar suggestion when applicable on insights |
| insights.enableInsightTitle | true | [true, false] | Enable or disable the title of an insight. The title indicates the originating person of the insight and, if applicable, its assignee. |
| topics.enabled | true | [true, false] | Enable or disable the summary topics in the summary page |
| topics.orderBy | 'score' | ['score', 'position'] | Ordering of the topics. score: order topics by the topic importance score. position: order the topics by the position in the transcript where they first surfaced. |

Test Your Integration

Now that you’ve seen how the SDK works end to end, let’s test the integration.

If you’ve dialed in with your phone number, try speaking the following sentences to see the generated output:

* “Hey, it was nice meeting you yesterday. Let’s catch up again next week over coffee sometime in the evening. I would love to discuss the next steps in our strategic roadmap with you.”

* “I will set up a meeting with Roger to discuss our budget plan for the next quarter and then plan out how much we can set aside for marketing efforts. I also need to sit down with Jess to talk about the status of the current project. I’ll set up a meeting with her probably tomorrow before our standup.”

If you’ve dialed into a meeting, try running any of the following videos with your meeting platform open and view the summary email that gets generated:

At the end, you should receive an email in your inbox (if you’ve configured your email address correctly) with a link that will take you to your meeting summary page. There you should be able to see the transcript as well as all the insights that were generated.

Wrapping Up

With this output, you can push the data to several downstream channels like RPA, business intelligence platforms, task management systems, and others using the [Conversation API](https://docs.symbl.ai/#conversation-api).

Congratulations! You now know how to use Symbl’s Voice SDK to generate your own insights. To recap, in this guide we talked about:

  • installing and initializing the SDK
  • connecting to a phone number through PSTN
  • pushing speaker events
  • configuring the summary page with generated insights
  • tweaking the summary page with query parameters

Sign up to start building!

Need additional help? You can refer to our API Docs for more information and view our sample projects on Github.

Integrating Symbl Insights with Twilio Media Streams

Capturing audio and deriving real-time insights is not as hard as you may think. Twilio Media Streams provide real-time raw audio and give developers the flexibility to integrate this audio in the voice stack of choice. Couple that with the power of Symbl, and you can surface actionable insights with customer interactions through the Symbl WebSocket API.

What can you expect upon successful installation? A post-conversation email with the topics generated, the action items detected, and a link to view the full summary output.

This blog post will guide you step-by-step through integrating the Symbl WebSocket API into Twilio Media Streams.

Requirements

Before we can get started, you’ll need

Setting up the Local Server

Twilio Media Streams use the WebSocket API to live stream the audio from the phone call to your application. Let’s get started by setting up a server that can handle WebSocket connections.

Open your terminal, create a new project folder, and create an index.js file.

To handle HTTP requests we will use Node’s built-in http module and Express. For WebSocket connections we will be using ws, a lightweight WebSocket client for Node.

In the terminal run these commands to install ws, websocket and Express:
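For example:

```shell
npm install ws websocket express --save
```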

To set up the server, open your index.js file and add the following code.

Save and run index.js with

Open your browser and navigate to

Your browser should show Hello World

Setting up the Symbl WebSocket API

Let’s connect our Twilio number to our WebSocket server.

First, we need to modify our server to handle the WebSocket messages that will be sent from Twilio when our phone call starts streaming. There are four main message events we want to listen for: connected, start, media and stop.

Modify your index.js file to log messages when each of these messages arrives at the server.
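A sketch of that handler, with the dispatch factored into a testable function; the message fields follow Twilio’s Media Streams message format:

```javascript
// Sketch of the WebSocket message handler. Twilio sends JSON text frames
// whose `event` field is one of: connected, start, media, stop.
function handleTwilioMessage(raw) {
  const msg = JSON.parse(raw);
  switch (msg.event) {
    case 'connected':
      console.log('A new call has connected');
      break;
    case 'start':
      console.log(`Starting media stream ${msg.start.streamSid}`);
      break;
    case 'media':
      // msg.media.payload is base64-encoded 8 kHz mu-law audio --
      // this is what gets forwarded to Symbl's WebSocket API.
      break;
    case 'stop':
      console.log('Call has ended');
      break;
  }
  return msg.event;
}
```

In the server, you would wire this up as `ws.on('message', handleTwilioMessage)` on each incoming connection.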

Now we need to set up a Twilio number to start streaming audio to our server. We can control what happens when we call our Twilio number using TwiML. We’ll create an HTTP route that will return TwiML instructing Twilio to stream audio from the call to our server.

Add the following POST route to your index.js file.
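The route’s job is to return TwiML that starts a stream to our WebSocket server. A sketch of building that TwiML (the `<Say>` text and `<Pause>` length are illustrative):

```javascript
// Build the TwiML returned by the POST route. The wss URL must point at
// your ngrok tunnel; <Pause> keeps the call (and the stream) open.
function twimlForStream(wssUrl) {
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<Response>',
    `  <Start><Stream url="${wssUrl}" /></Start>`,
    '  <Say>The stream has started.</Say>',
    '  <Pause length="60" />',
    '</Response>',
  ].join('\n');
}

// In the Express app it would be used like:
// app.post('/', (req, res) => {
//   res.set('Content-Type', 'text/xml');
//   res.send(twimlForStream('wss://your-ngrok-subdomain.ngrok.io/'));
// });
```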

For Twilio to connect to your local server, we need to expose the port to the internet. We will use ngrok to create a tunnel to our localhost port. In a new terminal window, run the following command:
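For example (the port here is an assumption; match whatever port your index.js listens on):

```shell
ngrok http 3000
```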

You should get an output with a forwarding address like this. Copy the HTTPS URL to your clipboard.

Open a new terminal window and run your index.js file.

Setting up your Twilio Studio

Now that our WebSocket server is ready, the remaining configuration needed to join Symbl to your customer and agent conversations will be done through your Twilio Studio Dashboard.

Navigate to Studio and create a new flow.

Twilio offers three different triggers that you can use to build out this integration. Depending on your use case, you can choose to begin the flow from either the message, call, or REST API triggers.

In our example, we want Symbl to join a voice conversation when a customer calls our agent, so we will be using the incoming call trigger to build out our flow.

First, use the Fork Stream widget and connect it to the Incoming Call trigger. In the configuration, the URL should match your ngrok domain.

NOTE: Use the WebSocket protocol wss instead of http for the ngrok url.

Next connect this widget to the `Flex Agent` widget which will connect the call to the Flex Agent:

Finally, we need to end the stream once the call is complete. To do so, use the same `Fork Stream` widget, but set the `stream action` configuration to `Stop`.

Test the integration

To test the integration, navigate to the Flex tab and click on Launch Flex:

On your Flex dashboard, locate your Twilio phone number and call that number from your mobile device.

When the agent accepts the call, the audio will stream through the WebSocket API. And at the end of the call, you will get an email with the transcript and insights generated from the conversation.

Wrapping up

What else can you do with the data? You can fetch the data out of the conversation, and with this output you can push it to downstream channels such as Trello, Slack, and Jira.

Congratulations! You can now harness the power of Symbl and Media Streams to extend your application capabilities.

Need additional help? You can refer to our API Docs for more information and view our sample projects on Github.

Integrating Symbl Conversation Intelligence with Twilio Flex for Real-Time Insights and Actions

Twilio Flex is a new cloud-based contact center platform revolutionizing the call center industry through fine-tuned, customizable customer experiences. With the power of Symbl, Twilio Flex users can do even more to surface actionable outcomes from customer interactions through the Symbl API, all while using the same familiar Twilio Flex interface. It’s a win-win.

This blog post will guide you step-by-step through integrating Symbl APIs into the Twilio Flex dashboard.

Requirements

Before we can get started, you’ll need to make sure to have:

Setting up your Twilio Studio

All the configuration needed to have Symbl join your customer and agent conversations will be done through your Twilio Studio Dashboard. Navigate to Studio and create a new flow.

Twilio offers three different triggers that you can use to build out this integration. Depending on your use case, you can choose to begin the flow from either the message, call or REST API triggers.

In our example, we want Symbl to join a voice conversation when a customer calls our agent, so we will be using the incoming call and REST API triggers to build out our flow.

To get started, we will first use the ‘Split Based On’ widget to differentiate between customer calls and Symbl’s PSTN call. This will help direct your customer calls to the Twilio Agent versus Symbl’s call which should simply connect to the existing conference.

Here we are choosing the trigger.call.from as the value to split the calls on.

Now that we have identified which caller is a customer and which is Symbl, we can send the customer to a Flex agent using the ‘Send to Flex’ widget.

 

You can configure this widget based on how you want your agents to handle incoming calls.

Next, we will use the ‘Run Function’ widget to create a function that will make a POST request to invoke the REST API trigger.

 

Click on Create which will take you to https://www.twilio.com/console/functions/manage to create a new function. Select a Blank template.

Replace the starter code with the following:

 

 

Note: The URL should be your REST API trigger URL, which you can find on your Twilio Studio page.

Next, we will create another function to call Symbl APIs and have it call the Twilio agent. Like before, create a new function with a blank template and replace the starter code with the following: 

Click Save and head back to your Studio. In the function widget we created, set the function URL to the name of your trigger function. That’s all we have to do for the Incoming Call trigger; now we will configure the REST API trigger flow.

Create another ‘Run Function’ widget and set the function URL to that of your start Symbl function like below:

Finally, we will use the ‘Connect Call To’ widget to connect Symbl’s call to the ongoing conference between the Twilio Agent and the customer like so:

 

And that’s all it takes to integrate Symbl into Twilio Flex! The full architecture flow should look like this:

 

Test out the integration

To test out the integration, navigate to the Flex tab and click on Launch Flex:

On your Flex dashboard, locate your Twilio phone number and call that number from your cellular device.

When the agent accepts the call, Symbl will join it and stream the conversation.

Wrapping up

At the end of the call, you will get an email with the transcript and insights generated from the conversation. You can also call the Conversation API with the conversation ID to fetch the data and push it back to the Twilio Flex interface or to downstream channels like Trello, Slack, Jira, and other platforms. Oh, the possibilities!

And with that, you can now harness the power of Symbl to extend your Twilio Flex applications! To learn more about all you can do with Symbl, check out some of our other blog posts:

Sign up with Symbl to get started.

Need additional support? You can refer to our API Docs for more information and view our sample projects on Github.