At Zencity, we put a premium on bringing you the highest quality data takeaways. That means making sure our sentiment analysis algorithm—the engine behind so much of the data that appears in your Zencity Organic dashboards—is as precise as possible, as well as continually training it to adapt to all the ways human beings express emotions and share opinions online.
In this article, we’ll go through how our sentiment model works to accurately process and classify the data items—from posts to sub-comments—that make up your community’s conversation, plus share some examples that show the model at work.
What does sentiment mean?
When we talk about sentiment, we’re referring to a measure of satisfaction. Rather than being about whether certain words are associated with being “happy” or “sad,” our model is interested in discerning how satisfied the author is with the issue at hand.
In other words, the model asks: “Would the author of this post or comment vote for or against a measure about this issue?” The answer to that question, which often involves incorporating a lot of context about what’s being discussed and previous remarks in a discussion, will determine which label a post or comment receives: positive, negative, or neutral.
What is eligible for sentiment analysis?
As mentioned above, context can be crucial for assigning an accurate sentiment label. Let’s break down the anatomy of online discussions to understand when and how context comes into play, and how it affects the ultimate sentiment decision. The main building blocks of online discussions—from social media to the comment sections of local news sites—are posts, comments, and sub-comments.
- Posts: Acting as “the source,” posts can be processed and assigned a sentiment label on their own. This label is then factored into the overall sentiment calculation for your community, as well as into the specific sentiment count for whatever topic the post is about.
- Comments: Branching out from the original post, comments can be highly contextual and respond directly to the point raised in the post—and sometimes, as most of us have probably noticed in our own time on social media, they can head off in an entirely new direction of their own. Since our model needs to be able to attribute sentiment to specific topics, this differentiation is important. As a result, the model evaluates both the sentiment of every individual comment and whether it’s related to the original post’s point. That means:
- If there’s a direct connection, the model asks the same question about whether the author is satisfied with the topic being discussed.
- If there’s no connection, the model treats the comment the way it treats a post and independently evaluates the sentiment and the new topic that appear in its content.
- Sub-comments: As with comments, the model determines the relevancy of the sub-comment to the original post itself. Depending on that finding, it either assigns a sentiment label in the context of the original discussion or treats the sub-comment as a standalone item (see the sketch below for how these decisions fit together).
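To make that decision flow more concrete, here is a simplified, hypothetical sketch in Python. The names and placeholder functions in it (Item, is_related, classify_satisfaction) are illustrative only; the actual model is a trained classifier, not a set of hand-written rules.

```python
from dataclasses import dataclass
from typing import Optional

POSITIVE, NEGATIVE, NEUTRAL = "positive", "negative", "neutral"


@dataclass
class Item:
    """A post, comment, or sub-comment. parent_post is None for top-level posts."""
    text: str
    parent_post: Optional["Item"] = None


def is_related(comment: Item, post: Item) -> bool:
    # Placeholder relevance check: does the comment share any words with the post?
    # The real decision weighs far more context than simple word overlap.
    return bool(set(comment.text.lower().split()) & set(post.text.lower().split()))


def classify_satisfaction(text: str, context: Optional[str] = None) -> str:
    # Placeholder for the core question: "Would the author vote for or against
    # a measure about this issue?" A real model weighs the text and any context.
    return NEUTRAL


def assign_sentiment(item: Item) -> str:
    if item.parent_post is None:
        # Posts act as "the source" and are labeled on their own.
        return classify_satisfaction(item.text)
    if is_related(item, item.parent_post):
        # Related comments and sub-comments are labeled in the context of the post.
        return classify_satisfaction(item.text, context=item.parent_post.text)
    # Unrelated comments are treated like posts: new topic, independent label.
    return classify_satisfaction(item.text)


# Example: a comment that responds directly to the original post.
post = Item("The city is expanding bus service on Route 9 starting next month.")
comment = Item("About time the bus service got expanded!", parent_post=post)
print(assign_sentiment(comment))  # labeled in the context of the original post
```

The placeholders above stand in for much richer relevance and satisfaction models, but the branching mirrors the treatment of posts, comments, and sub-comments described above.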
Common examples of context-dependent sentiment classification
There are some common instances where context plays a critical role in assigning an accurate sentiment label, so we’ve trained our sentiment model to spot those moments and apply the right one. Let’s take a look at some of those examples:
- Sarcasm! When reading a discussion online, our brains generally know how to pick up on key clues—from the context of what came before in a thread to word choice and emojis—and tell whether a commenter is being sarcastic or serious. We’ve trained our model to do this, too. If it spots a post or comment that reads something like, “Way to go, Sunnyville, you really got this one right 🙄,” it understands that the poster is not satisfied and gives the item a negative sentiment label.
- Finally! Certain phrases, like “finally” or “it’s about time,” often sit at the intersection of sarcasm and positive sentiment. Take this example: a resident shares a link to a news story about expanded bus service with the caption “It’s about time.” While the tone is somewhat sarcastic, the author is ultimately expressing that they are now satisfied with the decision, even if they have been frustrated with the process of getting there.
- Wow! Just as we might hear them in conversation, expressions of surprise (“Wow! Did you know?”) often have a “trivia”-like quality to them. Rather than expressing satisfaction or dissatisfaction, they serve more as general fun facts to be shared, so our model assigns these types of posts and comments a neutral sentiment label. If there’s additional content that offers important context about the author’s point of view and satisfaction level, the model will take that into account and assign the item a positive or negative label.
- Funerals and other moments of loss. News stories and social media posts about any loss of life tend to generate an outpouring of comments filled with empathy and condolences. While these comments certainly reflect the difficult emotions of the moment, such as sadness or hopelessness, they are not indicators of negative sentiment (unless they are coupled with explicit additional context, such as a call for stricter gun regulation, in which case the relevant sentiment label is assigned).
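To tie these cases together, here is a toy summary pairing example texts with the labels described above. The snippet is purely illustrative: the examples and labels are drawn from this article, and the actual model is a trained classifier rather than a lookup of hand-picked phrases.

```python
# Toy illustration only: example texts paired with the sentiment label this
# article describes for each case. This is not how the model itself decides.

POSITIVE, NEGATIVE, NEUTRAL = "positive", "negative", "neutral"

EXPECTED_LABELS = {
    # Sarcasm: praise-like wording plus an eye-roll signals dissatisfaction.
    "Way to go, Sunnyville, you really got this one right 🙄": NEGATIVE,
    # "Finally!" phrasing: frustrated tone, but satisfied with the decision itself.
    "It’s about time.": POSITIVE,
    # Surprise / trivia: sharing a fun fact, not satisfaction or dissatisfaction.
    "Wow! Did you know?": NEUTRAL,
    # Condolences: sadness and empathy, but not negative sentiment on their own.
    "Sending our condolences to the family. So heartbreaking.": NEUTRAL,
}

for text, label in EXPECTED_LABELS.items():
    print(f"{label:>8}: {text}")
```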
These are just a few of the many examples we’ve trained our model to recognize in order to understand the conversation happening in your community as accurately as each of us might if we were reading a comment thread ourselves. You can be a part of this training, too! If you ever spot an item in your dashboard whose sentiment label you disagree with, simply click on the dot in the bottom right corner of the post or news story and assign it the label you believe is a better fit based on your own reading and context. Your input is a vital part of how we continually improve our model.