As we move closer to the hackathon, we thought we would share some approaches that might save you time figuring out how to mash up APIs and build your project using Symbl’s platform for conversational intelligence.
Why we are so excited about TADHack 2020!
- We released our developer platform in March 2020, and this is the first hackathon we are sponsoring. We want to inspire and equip you with the tools we have built with early customers and developers to analyze conversations at scale.
- To turn developers into evangelists and champions by getting you excited to use Symbl and build unique conversational intelligence experiences. We are always growing our family.
- To receive product feedback and see the different use cases your creative juices will yield. Tell us how we can do better – we are always improving!
- Yes, to generate more sign-ups and fuel growth – we are a fast-growing startup and want to keep it that way 😀
Recapping – What is Symbl?
Symbl’s APIs unlock machine learning algorithms that can ingest any form of conversational data to identify actionable insights across domains and channels (voice, email, chat, social), without the need for upfront training data, wake words, or custom classifiers.
See what all you can do with us 🙂
| If you are using… | You can integrate using… | To build experiences around |
| --- | --- | --- |
| 3rd party Video Application or SDK | | |
| Telephony/SIP, PBX Interface | | |
| Speech to Text | | |
We are also sharing below a list of key concepts that will help you understand the capabilities of the platform – like, what are non-definitive action items?!
Real-time Transcription
Symbl generates real-time speech-to-text that can be used for live captioning or as a searchable transcript, with single or multiple audio streams. The transcription comes with word-level timestamps and speaker information, and maps to all the other AI capabilities via message IDs. Transcription is available in multiple languages, and you can use custom vocabulary to improve speech recognition accuracy. Read more here.
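To make this concrete, here is a minimal sketch of turning transcript messages into caption lines keyed by message ID. The payload shape below (`id`, `text`, `from`, `words` with `startTime`) is an assumption modeled on typical conversation API responses, not the exact Symbl schema – check the docs for the real fields.

```python
# Illustrative sketch only: the field names below are assumptions,
# not the verified Symbl response schema.
sample_messages = [
    {
        "id": "msg-001",
        "text": "Let's review the launch plan.",
        "from": {"name": "Riya"},
        "words": [  # word-level timestamps, as described above
            {"word": "Let's", "startTime": "2020-08-01T10:00:01.100Z"},
            {"word": "review", "startTime": "2020-08-01T10:00:01.600Z"},
        ],
    },
    {
        "id": "msg-002",
        "text": "I can share the deck by Friday.",
        "from": {"name": "Sam"},
        "words": [],
    },
]

def to_captions(messages):
    """Render '<speaker>: <text>' caption lines, keyed by message ID."""
    return {m["id"]: f'{m["from"]["name"]}: {m["text"]}' for m in messages}

captions = to_captions(sample_messages)
print(captions["msg-002"])  # Sam: I can share the deck by Friday.
```

Keying captions by message ID is what lets you later join the transcript with topics, action items, and other insights that reference the same IDs.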
Contextual Summary Topics
These are the most relevant topics of discussion in the conversation, generated from a combination of the overall scope of the discussion on a topic and that topic’s relevance. Summary topics are not detected based on how frequently they occur in the conversation; instead, they are detected based on context, and hence map to a group of message IDs – a defined scope in the transcription that they refer to. Read more here.
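Because each topic maps to a group of message IDs, you can link a topic back to the exact transcript lines it covers. A small sketch, assuming illustrative field names (`text`, `score`, `messageIds`) rather than the exact Symbl response schema:

```python
# Hypothetical topic objects: field names are assumptions for illustration.
topics = [
    {"text": "launch plan", "score": 0.91, "messageIds": ["msg-001", "msg-003"]},
    {"text": "budget review", "score": 0.77, "messageIds": ["msg-002"]},
]
messages = {
    "msg-001": "Let's review the launch plan.",
    "msg-002": "Budget review is next week.",
    "msg-003": "The launch plan needs a new timeline.",
}

def topic_scopes(topics, messages):
    """For each topic, collect the transcript lines it refers to."""
    return {
        t["text"]: [messages[mid] for mid in t["messageIds"] if mid in messages]
        for t in topics
    }

scopes = topic_scopes(topics, messages)
print(len(scopes["launch plan"]))  # 2
```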
Action Items
An action item is a specific outcome recognized in the conversation that requires one or more participants to act in the future. These actions can be definitive in nature – owned with a commitment, such as working on a presentation, sharing a file, or completing a task. Or they can be non-definitive, like an idea, suggestion, or opinion that could be acted on. Action items are not biased toward specific keywords or conversation types, and are identified to fit most use cases. All action items are generated with action phrases, assignees, and due dates so that you can build workflow automation with your own tools.
Follow-ups
Follow-ups are a category of action items with a connotation of following up on a request or task, like sending an email, making a phone call, booking an appointment, or setting up a meeting. Like all action items, they are generated with action phrases, assignees, and due dates so that you can build workflow automation with your own tools, such as calendar, project management, or CRM software. Read more here about action items and follow-ups.
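As a sketch of that workflow automation, here is how you might map action items to tasks for a hypothetical project-management tool. The insight fields used here (`text`, `assignee`, `dueBy`, `definitive`) are assumptions for illustration, not Symbl's verified schema:

```python
# Hypothetical action-item payloads; field names are illustrative assumptions.
action_items = [
    {"text": "Sam to share the deck", "assignee": {"name": "Sam"},
     "dueBy": "2020-10-02", "definitive": True},
    {"text": "Maybe explore a dark-mode UI", "assignee": None,
     "dueBy": None, "definitive": False},  # a non-definitive idea/suggestion
]

def to_tasks(items, include_non_definitive=False):
    """Map definitive action items (and optionally ideas) to task dicts."""
    tasks = []
    for item in items:
        if not item["definitive"] and not include_non_definitive:
            continue
        tasks.append({
            "title": item["text"],
            "owner": item["assignee"]["name"] if item["assignee"] else "unassigned",
            "due": item["dueBy"] or "no due date",
        })
    return tasks

print(to_tasks(action_items))  # only the definitive item becomes a task
```

Flipping `include_non_definitive=True` would pull ideas and suggestions into a backlog instead of dropping them.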
Action Phrases
The action_phrase type represents the actionable part of an insight. Read more here.
Questions
Any explicit question or request for information that comes up during the conversation, whether answered or not, is recognized as a question. Read more here.
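Since action items, follow-ups, and questions are all insights, a common pattern is to filter a single insights response by type. A minimal sketch, where the `type` values mirror the names used in this post but should be confirmed against the API reference:

```python
# Hypothetical insights list; "type" values are assumptions based on the
# concept names in this post, not the verified API enum.
insights = [
    {"type": "question", "text": "When is the launch date?"},
    {"type": "action_item", "text": "Sam to share the deck"},
    {"type": "question", "text": "Who owns the budget review?"},
    {"type": "follow_up", "text": "Book a follow-up meeting"},
]

def by_type(insights, wanted):
    """Return the text of every insight matching the wanted type."""
    return [i["text"] for i in insights if i["type"] == wanted]

questions = by_type(insights, "question")
print(len(questions))  # 2
```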
Speaker Separation
Detect and separate unique speakers in a single stream of audio/video without the need for separate speaker events. Read more here.
External speaker events or independent audio stream integrations:
Speaker events can be pushed to an ongoing connection so they are processed while a single audio stream carries multiple speakers. Read more here.
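A rough sketch of what such a speaker event could look like before you push it on the live connection. The event names and fields here are illustrative assumptions, not Symbl's exact wire format – see the speaker events docs for the real schema:

```python
import json

def speaker_event(event_type, user_id, name, offset_ms):
    """Build a started/stopped speaking event, offset from stream start.
    Field names are assumptions for illustration only."""
    assert event_type in ("started_speaking", "stopped_speaking")
    return {
        "type": event_type,
        "user": {"userId": user_id, "name": name},
        "offset": {"milliseconds": offset_ms},
    }

event = speaker_event("started_speaking", "user-42", "Riya", 12_500)
payload = json.dumps(event)  # what you would send over the live connection
print(event["user"]["name"])  # Riya
```

Pairing a `started_speaking` and `stopped_speaking` event per speaker turn is what lets the platform attribute a single mixed stream to the right person.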
This might come in handy!
- Visit Documentation
- Sign-up on the platform
- Getting Started with Postman
- YouTube Videos
- Tutorials and How to Guides
- GitHub Repo
We are excited to see the experiences and use cases you will build, and we are here to support you before, during, and after the hackathon!
During the hack →
- Have fun! This is the new normal of hacking – so have fun at home! We hope to get together in person once things settle down. Build something you want to build – and we will support you through the process.
- Try something new – new language / use case / solve a problem for you! We have most of our team available for you – to ask questions or learn.
- Drink lots of fluids 🙂 We want to make sure you stay hydrated.
- Demo it – tell us the why and the what! Even if it’s not working, show the “art of the possible” 😉 Getting NLP and NLU right is hard; don’t worry about scaling, focus on one specific scope and we can take it from there.
What can you do after the Hackathon?
If you want to keep digging into this space even after the hackathon, we have some cool things coming that you might want to try! If you are interested in the early versions of these APIs, request early access by sending an email to firstname.lastname@example.org with the subject “Count me in!”, and we will follow up with you!
- Sentiments by Contextual Topic
- Hierarchical Topics – parent and child topic outlines with a conversation timeline
- Conversational Analytics like Talk time, Pace, Overlap
- Symbl JS Elements
Happy TAD-Hacking 2020!
Missing something you would like us to add? Write to us at email@example.com or join our Slack channel. We will also reach out after the hackathon to get your mailing address so we can send some swag and love your way!! See you all virtually. 🙂