One Model for All Interactions’s animating purpose is to organize the world’s communication data into a foundation model that understands every human interaction. To that end, we conduct research in the following areas – each of which addresses a significant and challenging technical shortcoming in today’s generative AI models.

Active Learning

Foundation models encounter the ultimate “rubber meets the road” challenge – they have to be extensively customized and fine-tuned in order to produce robust, high-quality, task-specific output. To enable businesses and customers to close this loop faster, we are exploring the frontiers of active learning, so that models adapt effortlessly from use and feedback. Our models are lifelong learners that can also maintain very large amounts of contextual information, effectively virtualizing endless context.
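
A common building block of active learning is uncertainty sampling: spend human feedback on the examples the model is least sure about. The sketch below is illustrative only – the scoring function, field names, and batch structure are assumptions, not our production pipeline.

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution (higher = less certain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_feedback(predictions, k=2):
    """Pick the k examples the model is least certain about, so human
    feedback is spent where it is most likely to improve the model."""
    ranked = sorted(predictions, key=lambda ex: entropy(ex["probs"]), reverse=True)
    return [ex["id"] for ex in ranked[:k]]

# Hypothetical model outputs for a batch of four examples.
batch = [
    {"id": "a", "probs": [0.98, 0.02]},  # confident
    {"id": "b", "probs": [0.55, 0.45]},  # uncertain
    {"id": "c", "probs": [0.70, 0.30]},
    {"id": "d", "probs": [0.51, 0.49]},  # most uncertain
]
print(select_for_feedback(batch))  # → ['d', 'b']
```

In a feedback loop, the selected examples would be routed to reviewers (or inferred from user corrections) and folded back into fine-tuning.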

Language Inclusivity

We strive to customize our foundational insights to the locale and language of the customers we serve. A key part of that journey is delivering those insights in as many languages as possible. We take the view that for a language to be rendered accurately and faithfully, it must first be represented equitably. Our research focuses on the underlying lack of diversity in today’s NLP techniques and datasets, and addresses these issues by building on our track record of delivering a language-inclusive conversational intelligence platform.

Multimodality

A key aspect of natural interaction between an AI model and humans is the proactive nature of the model’s assistance and the fluidity of the interaction. These can only be enabled by a model that is truly multimodal and takes into account all modalities of interaction data. Multimodal signals also help the model maintain synchronicity within the interaction, by picking up cues that a unimodal representation like text might overlook. The model both understands and generates multimodal content at a native level – every modality is a first-class citizen in the model’s representation and architecture. Our research in this area builds on our many years of experience with multimodal conversation data in the form of voice, video, and text.
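
One simple way to picture “every modality is a first-class citizen” is early fusion: each modality is projected into a shared embedding space before the model attends over the combined sequence. The dimensions, feature values, and projection matrices below are toy placeholders, not our actual architecture.

```python
def project(features, weight):
    """Toy linear projection of a feature vector into the shared space.
    `weight` is a list of columns of the shared space."""
    return [sum(f * w for f, w in zip(features, col)) for col in weight]

# Hypothetical per-modality features for one moment in a conversation.
text_feats  = [1.0, 0.0]         # e.g. a token embedding
audio_feats = [0.5, 0.5, 0.0]    # e.g. prosody / tone cues
video_feats = [0.0, 1.0]         # e.g. facial-expression cues

# Placeholder projections mapping each modality into a 2-d shared space.
W_text  = [[1.0, 0.0], [0.0, 1.0]]
W_audio = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
W_video = [[0.0, 1.0], [1.0, 0.0]]

# All three modalities land in the same space, so the downstream model
# treats them uniformly rather than privileging text.
fused = [
    project(text_feats, W_text),
    project(audio_feats, W_audio),
    project(video_feats, W_video),
]
print(fused)  # → [[1.0, 0.0], [0.5, 0.5], [1.0, 0.0]]
```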

Learning to Reason

Today’s LLMs have no inherent reasoning ability – and, worse still, no ability to accurately diagnose this failing. Even models that claim to display reasoning-like behavior merely exploit the first-past-the-post nature of generative model evaluations, passing off hyper-efficient large-scale retrieval as evidence of intelligent learned behavior. Our models are built on novel learning-to-reason (L2R) architectures and representations that combine the best of both System 1 and System 2 thinking – fast and efficient reaction on the one hand, and deliberative and intentional action on the other.
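
The System 1 / System 2 split can be pictured as a routing policy: a cheap fast path answers when it is confident, and a slower deliberative path takes over otherwise. This is a minimal sketch under assumed names and thresholds, not the L2R architecture itself.

```python
def fast_answer(query):
    """System 1: a cheap, reactive response with a confidence score.
    (A toy lookup stands in for a learned fast path.)"""
    cache = {"2 + 2": ("4", 0.99)}
    return cache.get(query, (None, 0.0))

def deliberate(query):
    """System 2: placeholder for multi-step reasoning, e.g. decomposition,
    search, and verification before answering."""
    return f"[deliberated answer to: {query}]"

def answer(query, threshold=0.9):
    """Route each query: react fast when confident, deliberate otherwise."""
    response, confidence = fast_answer(query)
    if confidence >= threshold:
        return response          # fast, efficient reaction
    return deliberate(query)     # deliberative, intentional action

print(answer("2 + 2"))                       # → 4
print(answer("plan a three-step negotiation"))
```

The design choice worth noting is that the router, not the caller, decides how much computation a query deserves.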

Conversation Understanding & Task Evaluation

The biggest challenge in selecting the right generative model to deploy is the incomplete and misleading nature of current evaluation metrics and benchmarks. Although numerous metrics, benchmarks, and frameworks claim to represent the “accuracy” of LLMs, there is very little correlation between performance on these benchmarks and actual usefulness in real-world deployments. Our novel and comprehensive Conversation Understanding and Task Evaluation (CUTE) methodology addresses this problem by evaluating models not on the basis of syntactic overlap with narrowly defined ground-truth data, but on the likelihood of the model generating the right output for a given task. This internal methodology iteratively guides our model refinement strategy, and enables us to build more accurate and deployment-ready models faster.
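
The contrast between syntactic overlap and likelihood-based evaluation can be made concrete. The sketch below is in the spirit of that distinction, not an implementation of CUTE; the token probabilities are invented for illustration.

```python
import math

def exact_match(prediction, reference):
    """Brittle syntactic overlap: a semantically correct answer phrased
    differently scores zero."""
    return float(prediction.strip().lower() == reference.strip().lower())

def log_likelihood(token_probs):
    """Log-probability a model assigns to the tokens of a reference answer:
    a graded signal of whether the model would generate the right output."""
    return sum(math.log(p) for p in token_probs)

reference  = "The meeting moved to Tuesday"
prediction = "Meeting rescheduled for Tuesday"   # correct in meaning

print(exact_match(prediction, reference))        # → 0.0, despite being right

# Hypothetical per-token probabilities two models assign to the reference.
probs_model_a = [0.9, 0.8, 0.85, 0.9, 0.95]
probs_model_b = [0.4, 0.3, 0.5, 0.4, 0.6]
print(log_likelihood(probs_model_a) > log_likelihood(probs_model_b))  # → True
```

Ranking models by the likelihood they assign to correct task outputs separates “knows the answer” from “happened to phrase it like the reference.”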

Data Gathering & Engineering

Our data is one of our most valuable assets: highly curated, and in a constant state of evolution to remain relevant. We place a strong emphasis on diversity and breadth of data, particularly within the realms of business and enterprise interactions. Our focus is precise: solving real-world interaction challenges. We prioritize data that represents the specific communication and interaction issues faced by businesses, and deliver practically deployable models that produce more specific and actionable output than generic generative models.

Research Community

We are constantly looking for ways to work with the best and the brightest. We have incubated a vibrant AI research community that includes students, faculty members, academic institutions, industrial research labs, and partner organizations. If you are actively working on any of the research areas mentioned above and are interested in collaborating with us, please reach out at [email protected].
