Use Cases

The Challenges of Effectively Capturing Human Sentiments

Sentiment analysis is a powerful technology with applications ranging from customer support to market research. To be successful, sentiment analysis systems must handle nuanced levels of meaning, including complexities like sarcasm and bias.

What is sentiment analysis?

Sentiment analysis is the use of natural language processing, text analysis, and biometrics to analyze voice, video, and text conversations and detect the underlying affective states. In a business context, sentiment analysis is frequently applied to understand customer attitudes in customer reviews, social media posts, survey responses, and other online media.

In its most basic form, sentiment analysis detects whether the underlying affect is positive, negative, or neutral. In “I love my new headphones,” the speaker is expressing a positive attitude toward a specific object, their headphones.
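This basic positive/negative/neutral check can be sketched with a tiny hand-built lexicon. The word lists below are illustrative placeholders, not a real sentiment lexicon, and production systems use far richer models:

```python
# Minimal lexicon-based polarity sketch. The word lists are toy examples,
# not a production sentiment lexicon.
POSITIVE = {"love", "great", "stellar", "excellent"}
NEGATIVE = {"hate", "dismal", "terrible", "awful"}

def basic_sentiment(text: str) -> str:
    """Label text positive, negative, or neutral by counting opinion words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(basic_sentiment("I love my new headphones"))  # positive
```

Even this toy version labels the headphones example correctly, but as the sections below show, real conversations quickly defeat simple word counting.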

However, most applications of sentiment analysis are more complex. In “Steve loves his MacBook Pro except for one thing — the keyboard,” the speaker is focusing on a specific opinion holder (Steve) and contrasting his positive attitude toward an object (his MacBook Pro) with his strongly negative attitude toward a specific aspect of the object (the keyboard).

Examples like these show that sentiment analysis can rapidly become highly complex. Let’s take a closer look at some of the top challenges for building an effective sentiment analysis system.

Aspect-based sentiment analysis

In everyday conversation, speakers frequently express nuanced levels of meaning within a single sentence. For example, in “Ben Affleck was a dismal failure in an otherwise stellar ensemble,” the speaker expresses a negative attitude toward Ben Affleck, but a positive attitude toward the rest of the ensemble.

Aspect-based analysis depends on two primary tasks. First, a system identifies the attitude targets mentioned in a given sentence. This process is known as aspect extraction. In the example above, aspect extraction would single out Ben Affleck and ensemble. Once these aspects are identified, a system determines the attitude associated with each target in a process known as aspect-level sentiment analysis.
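The two tasks can be sketched with a toy pipeline. Here the aspect terms and opinion words are hand-picked for the example sentence, whereas a real system would learn them; the window-voting heuristic is only an illustration of aspect-level sentiment assignment:

```python
# Toy aspect-based pipeline: aspect extraction via a known-term lookup,
# then aspect-level sentiment by voting over nearby opinion words.
# Both word lists are hand-picked for this example, not learned.
ASPECTS = {"ben affleck", "ensemble"}
OPINIONS = {"dismal": "negative", "failure": "negative", "stellar": "positive"}

def aspect_sentiment(text: str, window: int = 3) -> dict:
    tokens = text.lower().replace(",", "").split()
    found = {}
    for i, tok in enumerate(tokens):
        # Aspect extraction: match one- or two-word aspect terms.
        bigram = " ".join(tokens[i:i + 2])
        aspect = bigram if bigram in ASPECTS else (tok if tok in ASPECTS else None)
        if aspect:
            # Aspect-level sentiment: vote over opinion words within a window.
            lo, hi = max(0, i - window), i + window + 2
            votes = [OPINIONS[t] for t in tokens[lo:hi] if t in OPINIONS]
            found[aspect] = max(set(votes), key=votes.count) if votes else "neutral"
    return found

print(aspect_sentiment("Ben Affleck was a dismal failure in an otherwise stellar ensemble"))
```

Running this on the example sentence assigns "negative" to Ben Affleck and "positive" to ensemble, mirroring the two-step process described above.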

Rule-based strategies that leverage predefined lexicons and patterns are a prevalent technique for aspect extraction. A variety of approaches have been developed to understand the relationship between attitude targets and their context.

Two promising approaches utilize convolutional neural networks (CNNs) and dyadic memory networks (DyMemNNs). Convolutional neural networks are deep learning models that take an input, learn to assign importance to various aspects of that input, and differentiate them from one another. Dyadic memory networks are end-to-end neural architectures that model interactions between aspect and word embeddings, leading to strong performance on sentiment analysis tasks.
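The core CNN operation over text can be illustrated in a few lines: slide a filter across concatenated word embeddings to produce local features, then max-pool to keep the strongest activation. The embeddings and filter below are random stand-ins, where a trained system would learn both:

```python
import numpy as np

# Sketch of a text CNN building block: a filter slides over adjacent word
# embeddings, and max-pooling keeps the strongest local feature. Embeddings
# and the filter are random here; a real model learns them from data.
rng = np.random.default_rng(0)
words = "the keyboard is awful".split()
embed = {w: rng.normal(size=4) for w in words}  # toy 4-dim embeddings

def conv_feature(tokens, filt, width=2):
    # Each window concatenates `width` adjacent embeddings (8 dims here).
    windows = [np.concatenate([embed[w] for w in tokens[i:i + width]])
               for i in range(len(tokens) - width + 1)]
    # One activation per window; max-pooling over the sentence.
    return max(float(np.tanh(win @ filt)) for win in windows)

filt = rng.normal(size=8)  # one learned filter in a real CNN; random here
feature = conv_feature(words, filt)
```

A real text CNN stacks many such filters and feeds the pooled features into a classifier; this sketch shows only the convolution-and-pool step.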

Contextual sentiment analysis

One of the trickiest things about sentiment analysis is the way words change their meaning with context. The same word or phrase can be positive, neutral, or negative, depending on other words in the sentence:

Positive: “We reserved a big house in the Outer Banks for our vacation.”

Neutral: “Buying a big house has advantages and disadvantages.”

Negative: “Dan’s doing time in the big house on a possession charge.”

As we see in these examples, context can not only cause the attitude attached to a word or phrase to shift — it can also change the meaning of the words themselves. In the first two sentences, big house refers to a large house. In the third, the meaning of big house has changed due to context. Here, the speaker is using it as a slang term for prison.
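One simple way to model this sense shift is with context cues: surrounding words pick the sense, and the sense constrains the sentiment. The cue lists below are illustrative, not a real word-sense lexicon:

```python
# Toy word-sense sketch for the "big house" examples above: surrounding cue
# words select a sense, and the sense constrains the sentiment. Cue lists
# are hand-picked illustrations, not a real lexicon.
SENSE_CUES = {
    "residence": {"vacation", "reserved", "buying", "advantages"},
    "prison": {"doing", "time", "possession", "charge"},
}
SENSE_SENTIMENT = {"residence": "neutral-or-positive", "prison": "negative"}

def big_house_sense(sentence: str):
    tokens = {w.strip(".,!?") for w in sentence.lower().split()}
    # Pick the sense whose cue words overlap the sentence most.
    scores = {sense: len(tokens & cues) for sense, cues in SENSE_CUES.items()}
    best = max(scores, key=scores.get)
    return best, SENSE_SENTIMENT[best]

print(big_house_sense("Dan's doing time in the big house on a possession charge."))
```

On the three example sentences, the first two resolve to the residence sense while the third resolves to prison and carries negative sentiment.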

The topic of an utterance has a strong influence on sentiment. As we saw above, when the topic is going on a relaxing vacation, the sentiment is positive, while when the topic is going to prison, the sentiment is negative.

Sentiment is also strongly influenced by background knowledge. People do not state commonsense knowledge that they expect everyone to share, so understanding this implicit knowledge is vital. To recognize that “I haven’t left the house in three weeks” is negative, a system needs to know that staying in one’s house for an extended period of time is generally viewed as undesirable.

Sarcasm analysis

Sarcasm and irony are highly prevalent in everyday conversation, which makes sarcasm analysis a critical area of focus for successful sentiment analysis systems.

Research into sarcasm analysis has historically focused on sentence-level understanding. A variety of approaches rely on sentence-level features, such as detecting incongruity between the sentiments expressed by different words within a sentence.

Recently, researchers have found greater success looking for cues in the conversational context. Two types of evidence have proven especially promising.

Authorial context looks at speakers’ tendencies to express themselves sarcastically by reviewing historical data. For example, historical analysis would reveal that characters like Chandler Bing tend to express themselves in a sarcastic manner.

Conversational context is also essential for detecting sarcasm. Often, the only way to tell if a sentence is sarcastic is in the context of facts revealed earlier in the conversation. Taken on its own, “Ryan Fitzpatrick sure crushed it on Sunday” initially sounds positive. But if analysis reveals that an earlier statement in the conversation was, “Fitzpatrick threw six interceptions against the Chiefs,” we can see that it is in fact negative.
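The incongruity idea behind this example can be sketched as a simple check: flag possible sarcasm when a polarized utterance contradicts the polarity of an earlier turn. The word lists are toy stand-ins for a real polarity model:

```python
# Toy conversational-incongruity check for sarcasm: flag an utterance whose
# surface polarity clashes with the polarity of an earlier turn. The word
# lists are illustrative, not a real polarity model.
POS = {"crushed", "great", "stellar"}
NEG = {"interceptions", "terrible", "awful"}

def polarity(text: str) -> int:
    words = {w.strip(".,!?").lower() for w in text.split()}
    pos, neg = len(words & POS), len(words & NEG)
    return (pos > neg) - (neg > pos)  # 1, -1, or 0

def maybe_sarcastic(utterance: str, context: list) -> bool:
    surface = polarity(utterance)
    # Sarcasm cue: a polarized utterance that contradicts an earlier turn.
    return surface != 0 and any(polarity(turn) == -surface for turn in context)

print(maybe_sarcastic(
    "Ryan Fitzpatrick sure crushed it on Sunday",
    ["Fitzpatrick threw six interceptions against the Chiefs"],
))  # True
```

Without the earlier turn in the context list, the same utterance is not flagged, which is exactly why conversational context matters here.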

Bias in sentiment analysis

Whether it’s used in customer care, market research, or reputation management, sentiment analysis typically handles data from a wide variety of demographic backgrounds. With that in mind, it’s critical to remove bias that can introduce error into sentiment analysis.

Bias is frequently introduced into sentiment analysis systems through word embeddings, the underlying representation that results when words and phrases are mapped to a vector space for use by sentiment analysis systems.

In a 2018 study, researchers at Cornell University examined whether biases in training data can lead to bias within sentiment analysis systems. They used 219 sentiment analysis systems to evaluate templated sentences describing a subject’s emotional state, like “This situation makes [person’s name] feel [emotion word].” They randomly substituted male and female names, emotion words, and gendered pronouns into the sentence templates.

The results were surprising. Over 75% of sentiment analysis systems consistently evaluated the emotional intensity of the sentences differently when only the gender of the subject was changed. Systems tended to assign higher emotional intensity to sentences reflecting anger or joy when the subject was a woman, while sentences reflecting fear tended to receive higher emotional intensity scores when the subject was a man. These evaluations likely reflect common stereotypes (for example, that women are more emotional and that men are more likely to be in dangerous situations).

There are many ways bias in training data can cause bias within sentiment analysis systems. Bias can be introduced if one gender is overrepresented (or underrepresented) in the training samples. If a particular demographic category is consistently described in positive or negative terms, the resulting model could also be biased.
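A first step toward catching these imbalances is a simple audit of the training data: count how often each demographic term group co-occurs with each sentiment label. The term lists and samples below are illustrative only:

```python
from collections import Counter

# Toy training-data audit for the imbalances described above: count how often
# each demographic term group co-occurs with each sentiment label. Term lists
# and samples are illustrative placeholders.
FEMALE_TERMS = {"she", "her", "woman"}
MALE_TERMS = {"he", "him", "man"}

def label_balance(samples):
    counts = Counter()
    for text, label in samples:
        words = set(text.lower().split())
        if words & FEMALE_TERMS:
            counts["female", label] += 1
        if words & MALE_TERMS:
            counts["male", label] += 1
    return counts

samples = [
    ("she gave a brilliant talk", "positive"),
    ("he gave a brilliant talk", "positive"),
    ("she was dismissed as difficult", "negative"),
]
print(label_balance(samples))
```

Skewed counts in either direction (representation or label association) are a warning sign that the resulting model may inherit the bias.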

Since bias can be easily introduced into sentiment analysis systems, identifying effective de-biasing methods is an emerging area of study and an important focus for the future development of sentiment analysis.

Additional reading

You can get started with sentiment analysis today with Symbl’s Conversation API, which gives you the power to convert speech to text, surface discussion topics, and conduct sentiment analysis on any conversation.
