Understanding Sentiment Analysis in Social Media Monitoring
I just looooooove brussels sprouts.
Can you tell if I’m being sarcastic or if I really do love these….things?
The point is, you’ll never know. Neither would a computer if this was a post on social media.
That’s what makes sentiment analysis such an expansive and interesting field. Sentiment analysis, also called opinion mining, is the process of identifying and categorizing the opinions in a given piece of text as positive, negative, or neutral.
With the post above, sarcasm is a form of irony that sentiment analysis just can’t detect. Heck, it’s hard enough to do if you’re human and trying to read someone’s online post.
With technology’s increasing capabilities, sentiment analysis is becoming a more utilized tool for businesses. Social media monitoring tools use it to give their users insights about how the public feels in regard to their business, products, or topics of interest.
It’s widely used by email services to keep spam out of your inbox and by review websites to recommend new content like films or TV shows.
However, it has been used in more murky circumstances. Facebook, for example, came under fire when it was discovered they were using sentiment analysis to see if they could manipulate people’s emotions by altering their algorithms to inject negative or positive posts more frequently into their users’ news feeds.
By using this process of “emotional contagion,” they found that they could decisively influence their users’ emotional output by flooding their news feeds with positive or negative posts. The big problem is that Facebook never informed its users that they were part of an experiment, one that in some cases may have caused them emotional distress.
Clearly, this use of sentiment analysis raises serious ethical problems.
But our main interest today lies in how sentiment analysis is used with social media monitoring tools.
There are three machine learning classification algorithms that are predominantly used for sentiment analysis in social media listening:
- Naive-Bayes
- Support Vector Machines (SVMs)
- Decision Trees
Each has its own advantages and drawbacks; however, several studies have concluded that the Naive-Bayes classifier is the most accurate of the three.
There are also two main algorithms used within a lexicon-based approach.
The most accurate approach is a combination of the two. However, today we’ll go into one of the more widely used machine learning algorithms: the Naive-Bayes algorithm.
So what is a Naive-Bayes classifier? It’s a machine learning classification algorithm that treats each feature or datum within a dataset as independent of the others. In other words, each element is valued individually, and those individual probabilities are combined to determine whether the whole constitutes a pre-defined label or outcome.
Here is a watered-down example from Analytics Vidhya:
“For example, a fruit may be considered to be an apple if it is red, round, and about 3 inches in diameter. Even if these features depend on each other or upon the existence of the other features, all of these properties independently contribute to the probability that this fruit is an apple and that is why it is known as ‘Naive’.”
There’s a ton of math involved in quantifying just how the outcomes are processed. I’ll try to provide you with a quick summary of how it’s done. In terms of sentiment analysis for social media monitoring, we’ll use a Naive-Bayes classifier to determine if a mention is positive, negative, or neutral in sentiment.
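To make that less abstract, here’s a minimal sketch of a Naive-Bayes sentiment classifier, written from scratch. The training mentions and their labels are invented for illustration; real tools would train on thousands of pre-labeled posts and add more preprocessing:

```python
from collections import Counter
import math

# Toy training mentions, pre-labeled by sentiment (invented data for illustration).
TRAIN = [
    ("love this product amazing quality", "positive"),
    ("great service fast shipping happy", "positive"),
    ("terrible experience broken and late", "negative"),
    ("awful support never buying again", "negative"),
]

def train(data):
    """Count word frequencies per class, plus how many mentions each class has."""
    word_counts = {"positive": Counter(), "negative": Counter()}
    class_counts = Counter()
    for text, label in data:
        class_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, class_counts

def classify(text, word_counts, class_counts):
    """Pick the class with the highest log-probability, treating every word
    as independent of the others (the 'naive' assumption), with add-one
    smoothing so unseen words don't zero out the whole product."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_mentions = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / total_mentions)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, class_counts = train(TRAIN)
print(classify("love the great quality", word_counts, class_counts))  # positive
print(classify("broken and awful", word_counts, class_counts))        # negative
```

Each word contributes its own probability independently, which is exactly the apple example above: every feature votes on its own, and the votes are multiplied together (added, in log space).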
With Naive-Bayes you first have to have a dataset. With textual sentiment analysis, this usually comes in the form of a training set bag-of-words already sorted into positive or negative categories.
A positive word may have a +1 score while a negative word will have a -1 score. You can also assign larger values to certain words that are more strongly negative in degree. Regardless, if the final score of a mention is positive, then the mention is positive, and vice versa for negatives.
Let’s take a look at a mention and see how a computer would score it if we pre-assigned sentiment to our bag-of-words.
Let’s pretend we’ve already assigned sentiment to a group of words within our mixed-bag as they appear below.
Each word appears only once, so for time’s sake we don’t need a frequency table. If we give each positive and negative word a value of 1, then we can simply divide the number of positive and negative words by the total number of words (19) in the entire mention.
Positive words: 3/19 = 0.16
Negative words: 2/19 = 0.11
(P)0.16 - (N)0.11 = +0.05
Since the total for our mention comes out positive, we can say the sentiment of the mention above is positive. This is a pretty clear-cut case, as we didn’t encounter polarizing words that might skew the result if a computer can’t tell which category a word belongs to.
Let’s take a look at how two different social media monitoring tools might score the same mention.
The post details a woman’s overall joy at receiving a pair of new shoes. This is easily scored as a positive mention to the human eye. So let’s look at it from a mathematical perspective.
Leaving out the negative words, let’s create a table with all the words or phrases a person might sort into negative or positive without contextualization. Keep in mind that not every word database will categorize all of the words I have chosen here.
There are 91 unique words in the mention, not counting the emoticons or hashtags.
Positive words: 10/91 = 0.11
Negative words: 1/91 = 0.01
(P)0.11 - (N)0.01 = +0.10
We can see that the sentiment scores as positive here. For the sentiment to come out neutral, either some of the listed words would have to carry heavier negative scoring (such as “spoiled”), or none of the words in our table would be categorized as negative or positive at all, leaving the entire mention unscored and resulting in a neutral categorization.
As stated previously, no sentiment analysis algorithm is perfect; humans don’t get it right 100% of the time, and you can’t expect machines to either. Even if you try to make sure that your bag-of-words is categorized as correctly as possible, without context it’s impossible to justify any category 100% of the time.
A good philosophical example of how we classify and define our relationship to words can be seen in the following example taken from David Foster Wallace’s The Broom of the System:
Which is the more important part of the broom: the handle or the bristles?
We might normally reduce the context of the situation to what a broom is most often used for, i.e., sweeping. But without context, the answer isn’t so obvious. If you’re sweeping, the answer is the bristles. If you need to break a window, it’s the handle.
Sentiment analysis used with film reviews can more easily parse words into negative and positive categories based on their contextual relevance to watching films. Here’s an example of a bag-of-words for film review sentiment analysis that has already been categorized, from Ataspiner.
These words are categorized by their relationship to sentiment toward a film. With social media monitoring, sentiment analysis is much harder because there isn’t a defined contextualization process. People talk about anything and everything under the sun, and their feelings and opinions toward certain topics are almost impossible to contextualize for a computer.
However, when wanting to uncover a sentiment about your brand, it’s important to use sentiment analysis to get to a somewhat definitive answer. I use the word “somewhat” to underscore that assigning sentiment to a mention is a difficult undertaking for humans and computers alike. Even with the ability to detect irony or sarcasm, humans score sentiment correctly about 80% of the time.
For computers it’s even harder, as we’ve outlined above. However, the more they score, the easier it is for you to discern a public opinion about your topic. With most tools, like Unamo Social Media, you can apply your own sentiment score if you feel that the algorithm has scored it incorrectly.
With that being said, the more a computer scores correctly, the less work you have to do in the long-run and can begin using the data to mine significant feelings about your brand over time.
Understanding the sentiment around a specific campaign or time period can underscore the public’s feelings about it and where to go from there. For example, a study out of the University of Jordan wanted to uncover the public’s sentiment about car manufacturers in the automobile industry. They decided to use sentiment analysis of Twitter to get their results.
Using this kind of data, consumers and other businesses can discern that Audi has the highest rate of customer satisfaction on Twitter. This is just one applicable use of sentiment analysis within social media monitoring, and the information is valuable not only to the manufacturers, but to public consumers as well.
Additionally, we should always pay attention to high levels of sarcasm and irony within social media when we’re analyzing a particular topic. Not to pick on United Airlines, as everyone else already has, but last April they experienced one of the biggest PR disasters social media has ever seen after forcibly dragging a passenger off an overbooked flight.
You would expect the majority of sentiment to be labeled as negative, but the whole phenomenon saw a huge influx of memes, sarcasm, and other jokes poking fun at what a disaster it was.
Here we come back to contextualization. It’s hard for a computer to recognize many of these kinds of posts as negative PR for United Airlines, so posts that use sarcasm or irony are often scored incorrectly: they use positive and negative words in a sarcastic tone, or to set up a joke that’s finished within a meme, image, or video.
Ultimately, sentiment analysis isn’t perfect, but neither are we when trying to decipher what someone means. Within social media monitoring, we need sentiment analysis as a starting point to understand general public sentiment in aggregate. From there, we can use the public’s general feelings to initiate campaigns based on their feedback.
Social media is perhaps the largest pool from which we can mine for public opinion and begin to gather informative data on the success or failure of our brand, products, or marketing campaigns in the eyes of the public.