An Inside View of your Competitors’ Spend on Media

I am sure you have wondered how much your competitors spend on social platforms to boost their posts or tweets. You may also have wondered what media spend strategy you should use in response.

What if we said you could understand the media spend on competitive pages and handles, in real time? Sounds promising? You bet!

The true measure of any media planning exercise lies in the influence one is able to have over their target audience. If you have set out to achieve a specific Share of Voice (SOV) or Share of Engagement (SOE) for your brand, you should consider the options of boosting your Facebook posts or promoting your tweets to achieve your media goals.

Directing your media spend towards boosting your posts on social media is an effective way to reach a larger audience, increase social engagement, get more followers and drive direct-response conversions. There’s a method to this. Read on.

Take Stock

At the outset, decide what your targets for SOV and SOE should be. For example, you may want a third of the possible reach and engagement amongst your audience to be attributable to your brand. Measure your standing in the existing landscape and consider your current SOV and SOE to check whether you are reasonably close to the targeted goal.

  • If the gap from your stated goal is insignificant, implying that you are already on track to achieve your SOV and SOE goals without incremental media dollars, you are good. This leaves you free to spend your media budget on other activities.
  • More likely, this is not the case: most brands today find that organic reach for their pages has been reduced to single-digit percentages of their community size. If your SOV and SOE are lagging, you would do well to understand how to drive these metrics up to the desired level.

Decide on what spend is right

  • Estimate how much your competitors spend on boosted posts and tweets, and what SOV and SOE they have achieved. Auris can help you here.
  • Back-calculate what you may need to spend on boosted posts to achieve the right SOV and SOE against your competition.
  • Track this every month and quarter to check whether your assumptions about competitive spends have changed.
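The back-calculation in the steps above can be sketched in a few lines. This is a deliberately simplified model: it assumes your impressions scale roughly linearly with boosted spend at the same average CPM your competitors pay, and every figure used here is hypothetical.

```python
# Simplified sketch: estimate the boosted-post spend needed to hit a
# target Share of Voice (SOV). Assumes a linear spend-to-impressions
# relationship at the competitors' average CPM. All numbers are made up.

def required_spend(competitor_spends, competitor_impressions,
                   own_organic_impressions, target_sov):
    """Estimate incremental spend needed to reach target_sov (0..1)."""
    total_comp_impr = sum(competitor_impressions)
    # Average cost per thousand impressions across competitors.
    avg_cpm = sum(competitor_spends) / (total_comp_impr / 1000)
    # Solve own / (own + competitors) = target_sov for own impressions.
    needed_impr = (target_sov * total_comp_impr / (1 - target_sov)
                   - own_organic_impressions)
    needed_impr = max(needed_impr, 0)
    return needed_impr / 1000 * avg_cpm

spend = required_spend(
    competitor_spends=[12_000, 8_000],              # monthly boosted $ spend
    competitor_impressions=[3_000_000, 2_000_000],  # impressions earned
    own_organic_impressions=500_000,
    target_sov=1 / 3,                               # a third of total voice
)
print(f"Estimated incremental spend: ${spend:,.0f}")  # → $8,000
```

In practice you would refresh the competitor estimates monthly, since CPMs and competitive spends drift over time.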

By pegging your strategy to the results obtained by your competitors, you can make sure that your dollars are spent wisely and well, with a superior return on investment (versus deciding on spends in an ad hoc manner). Sign up for a free 14-day trial of Auris and see this in practice!


What can Your Competition’s Content Teach You?

How does a champion player in the major leagues prepare for a match? One of the most important steps is to study the opponent's game and strategy, to understand the opportunities on offer and the counteractions needed. To win against competitors, we must learn from them.

Social publishing is no different. Anyone tasked with the job of creating their company’s marketing content must ask themselves this question: What can I learn from my competition’s publishing? This is a definite sign of being strategic in your approach. It’s about understanding what is expedient and checking whether we are on the path to success.

The first step is to understand who your competition really is. To begin with, list your competitors, including direct and indirect competitors as well as role-model peers. Having done that, look at what their published content can teach you.

What are your competitors trying to position themselves as?

To stand out from the clutter, a brand needs to position itself uniquely and meaningfully when compared to its competition. What are the brand attributes which consumers associate your brand with? Are your competitors perceived as being modern or traditional, premium or value-for-money, innovative or rooted? Given what perceptions exist for your competitors, where would you like to stand? Understanding these attributes can help you plan your positioning in an ideal manner, while ensuring that it’s differentiated from the competition’s efforts. This in turn can determine your overall publishing theme.

Are there content themes they follow?

Learning from your competition should include understanding patterns in their publishing: frequency, formats, themes and sub-themes, their proportions, and your hypothesis on the rationale behind them. This exercise can help you choose your own themes.

What kind of content engages the most?

To ensure that content from your reworked strategy engages significantly better than it does today, look at objective data on which types and formats of content engage best. This helps you decide how many campaigns or contests to run, and how many videos to create. Such inputs improve the quality of your publishing.

What kind of media spend is going into boosting posts or promoting tweets?

If you are deciding on media spends in an ad hoc manner, it's time to change. Look at the landscape to determine what kind of media spend your competition is using. How is their spend distributed? Is it spread uniformly across content or used for only a few posts? This can help you formulate your own media budget and plan.

A thorough analysis of these points should help you lay out the following:

  • Question your positioning – Where in the landscape do you uniquely stand? What space is available for occupation? This will help determine what unifying theme should be followed while publishing your content.
  • Learning about which type of content works or which forms work better can help you identify opportunities to improve engagement on your own handles or pages. If videos and GIFs work significantly better for others, perhaps you might consider using those. Experiment with content types, and add images, video links and SlideShare presentations, infographics and other visual formats which engage your readers more and help to set you apart from the competition.
  • User engagement through campaigns, contests, trivia and gamification – you might pick up cues or ideas which you could adapt for your own brand.
  • Determine the optimal publishing frequency keeping the competition in mind. After all, you might be competing for the share of voice and share of mind within the same consumer segments.
  • Map out the influencers who advocate for the competition. This can help define the profile of the right advocates for your brand and help you devise your advocacy-building strategies.
  • Media planning – make an intelligent estimate of what media spend on social media might be best suited for your cause. Work backwards considering your desired share of voice and engagement and what your competition is spending. This can help decide spends more scientifically as opposed to using an ad hoc approach.

It's to be expected that your competitors will be similarly focused on tracking and following your content strategy, setting up a healthy spirit of competitiveness and constant improvement all around, ably aided by competitive insights platforms such as Auris. It all hinges on who has the first-mover advantage.

6 Ways Auris Weaves its Magic to Help Create Engaging Content

Content publishing aims to build conversations with the target audience, which eventually lead to conversions. Decisions on what content to publish are either based on hypotheses (audience ‘might’ connect with this) or follow a set pattern of themes, with each theme having its specific audience and rationale.

Auris’ consumer insights platform can eliminate the guesswork and enhance the probability of engaging your target audience with the content you publish. There are six ways in which Auris helps create content which works.

1. Determine what is working and what's not

You’ve been creating and publishing content across different content buckets/topics/themes and forms. Do you know what works? Knowing this is the first step towards improving your publishing.

Auris can sort this out for you. It helps you run a rigorous content audit to identify the most engaging content you've published within a defined time frame. All you have to do is take a broad view of the past few months' worth of content and pick the pieces which performed. Then look for the patterns which might have led to that performance – communication style, design, format and the like. Continuously increase the proportion of such content in your conversation calendar. Repeat. The picture below shows how a brand's content around events, communication showing empathy and the buzz around its brand ambassador worked best. More such ideas should help improve engagement.

[Image: Auris top-performing content]

2. Craft user-generated content by leveraging influencers

Influencer marketing works, and we know it for a fact. The question is: how do you identify influencers – people who carry influence and have an affinity for your brand?

Every brand has a set of consumers who engage with the brand consistently because they love it. With the help of Auris, your brand can identify and collaborate with this set of influencers. They contribute to building a strong community and increase the brand's credibility. This is a much better alternative to manufacturing influencers who have nothing to do with your brand. Note that it is also more economical (and in several cases, free), because you seek out influencers who are already happy with your brand and only need a nudge to spread the word!

[Image: influencer identification via social listening]

3. Identify common concerns and address them through educative blogs or posts

Customer feedback is gold dust. If you mine this data (and Auris helps you do that), you can understand what users' concerns are. Your content now has a purpose: to comprehensively address these concerns and questions. When you do this, your content is bound to engage. The classification chart below illustrates how such topics can be identified. For instance, for an organic food brand, if people are concerned about why the product tastes different, content explaining why organic food should taste different offers real value.

[Image: consumer issue identification using listening]

4. Ride the wave of trending topics

Riding an existing wave is smart. Auris lets you identify trends not only in your industry but also across your audience network. You can look at what's popular amongst your audience and publish interesting content that has a higher probability of resonating with potential consumers and generating engagement. Monitoring buzz trends also helps identify 'seasons': in the example below, the buzz is around examinations, so content on this topic might be relevant for consumers.

[Image: developing content using trends]

5. Closely watch how your competitors are performing

One of the most interesting things you can do with Auris is watch what your competitors are up to. Add them to your watch list and gather insights from their content strategies to pull out a more powerful and engaging strategy for your brand.

[Image: content cues from the competition]

6. Publish context-aware content

A consumer's journey can be broadly divided into three stages: Awareness, Consideration and Purchase. A macro view of your audience and where they sit in this journey can help craft the right content strategy. For those who are not aware of the brand, use content which drives recall. For those considering a purchase, building brand preference is key. For those already buying from the brand, being available and enabling advocacy become more important.

[Image: user context for content decisions]

Content marketing costs significantly less than traditional marketing but brings in significantly more leads. Using Auris helps you hone your content publishing, pushing the ROI further up!


Does Your Social Listening Tool Comprehend Sector-specific Context?

Context is everything, they say. It should be no different when drawing social listening insights from the buzz around your topic of interest. While this makes intuitive sense, CMOs and the marketing function in general have been using insights platforms which are generic, off-the-shelf products – delivering the same goods regardless of whether the brand belongs to FMCG or to hospitality. That is the status quo, and we believe it should change. But is there a real need for context?

For those asking this question, the reasons are not far to seek. Here are a few of the important ones to drive home the point.

  • The drivers of customer sentiment vary by sector. The telecom industry values customer service most. The consumer electronics sector is all about product features and functionality. In contrast, FMCG customers look for availability and product attributes such as fragrance or flavor. Adding this context while tracking customer sentiment lets you derive insights which are strategic.
  • Today’s social listening tools use artificial intelligence and deep learning methods to train their algorithms to deliver insights automatically. Making this training sector-specific, helps improve the model’s predictive capabilities and precision.
    • 'Old' or 'vintage' may be complimentary words when referring to wine or high-end cars; when referring to FMCG products or political leaders, not so much.
    • Words like 'capital', 'tire' or 'mine' mean different things in different sectors, and training a generic algorithm to discern them without sector context is nearly impossible.
    • How would a typical AI classifier tag this comment: "The patient came into the emergency room bleeding and unconscious. A lot of blood was already lost. The trauma center administered care immediately."? You've guessed it right – as negative, even though it describes routine care.
  • A sector-specific approach involves listening to platforms which are specific to the sector. For example, RateMDs is a forum you’d listen into if you are into healthcare, whereas you’d be concerned about negative reviews on TripAdvisor if you are into hospitality. Sector specificity also implies selecting the sources of data which are more relevant.
  • Root cause analysis works better when your data enrichment is specific to the sector your brand belongs to. Else, the analysis stops at sentiment charts.
  • Sector specificity also helps conversion: find more people who need your product or services and convert them faster, by responding in time and meeting customer expectations while gaining visibility in the process.
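To make the word-sense point in the list above concrete, here is a toy sketch of sector-specific sentiment scoring. The lexicon and polarities are invented purely for illustration; real platforms train statistical models per sector rather than hand-writing word lists.

```python
# Toy illustration: the same word carries different sentiment polarity
# depending on the sector. The lexicon below is hypothetical.

SECTOR_LEXICON = {
    "automotive": {"vintage": +1, "old": +1, "breakdown": -1},
    "fmcg":       {"vintage": -1, "old": -1, "fresh": +1},
}

def score(text, sector):
    """Sum the sector-specific polarities of the words in `text`."""
    lexicon = SECTOR_LEXICON[sector]
    return sum(lexicon.get(word, 0) for word in text.lower().split())

print(score("a beautiful vintage model", "automotive"))     # → 1 (positive)
print(score("this stock smells old and vintage", "fmcg"))   # → -2 (negative)
```

The same sentence flips polarity purely because the sector key changes, which is exactly why a generic, sector-agnostic classifier loses precision.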

Sector specificity helps you draw out insights which would get lost in sentiment charts. It helps you draw out inferences which can be translated into action. Adding context makes a big difference.


AI-Powered Data Enrichment

If technology has made the collection and storage of data extremely easy, Artificial Intelligence (AI) has made organizing and analyzing that data even easier. Machine Learning plays an important role too, enabling AI to continuously refine its decisions by homing in on what is efficient and discarding what is not. Data analysis yields superior results when the data is enriched by tagging it with information relevant to the business. This can be achieved by tagging the data manually or by enriching it using AI-powered methods. Let's discuss the significant differences between these two approaches and why AI-based automated tagging becomes essential at higher volumes of buzz.

Much of the data collected from disparate sources is difficult to use in its raw form. Precisely because the sources are disparate, enrichment is an important aspect of any analysis: the data needs cleansing, uniform classification and tagging before it can be of any use. Tagging a document is a data enrichment method which associates the document with a name so it is pulled up in any search, even if the document itself doesn't contain that name and wouldn't be found by automatic text analysis.

These efforts can be taken up manually or using an AI-powered tool. Let’s look at the essential differences between the two processes and their performance:

Essential Differences between Manual Tagging and AI-Powered Data Enrichment

  • Manual: Human knowledge is superior to artificial intelligence, optical character recognition (OCR) and automatic ranking algorithms. AI: Machines, on the other hand, never get tired or bored, never make uncalled-for errors, and don't insist on fixed hours, apply for leave or enjoy the protection of labor laws.
  • Manual: Domain experts can add words and offer interpretations and qualitative judgments in a way that can't be automated. AI: Training a machine to the level of a domain expert needs time, patience, plenty of data enrichment and context-based training, as well as a suitable budget.
  • Manual: Humans can find the documents they need even without tagging. AI: Data needs to be tagged before it can be analyzed, but the process is far faster than by any other means.
  • Manual: Documents which don't contain the name, word, query or alias to be tagged require human editors. AI: Documents which do contain the words to be tagged are handled without problems, provided there are no typos or OCR errors.
  • Manual: Carries the personal bias of the editors. AI: Standardizes the content and removes bias.
  • Manual: Slow and time-consuming; may take days for what AI accomplishes in minutes. AI: Saves significant amounts of time, especially when chatter runs upwards of a few hundred mentions a day.
  • Manual: Unlikely to surface latent issues. AI: Helps you discover issues you are unlikely to find with human tagging.
  • Manual: May flounder and lose focus when large amounts of data need to be processed. AI: Understands semantic relationships using numerical methods, which strengthens the data enrichment effort.
  • Manual: Brings in a lot more work compared to using an AI-powered tool. AI: Creates auto-tags in little or no time, saving considerable time and effort.
  • Manual: May provide insights but is not guaranteed to. AI: Turns huge amounts of data into well-structured, actionable insights.
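As a rough illustration of what automated tagging replaces, here is a minimal rule-based sketch. Production enrichment pipelines use trained classifiers rather than keyword lists, and the tag names and keywords below are hypothetical.

```python
# Minimal rule-based auto-tagging sketch: attach business-relevant tags
# to raw text so it can later be filtered and analyzed. The tag rules
# here are invented for illustration.

TAG_RULES = {
    "customer_service": ["support", "helpline", "agent", "complaint"],
    "pricing":          ["price", "expensive", "discount", "cheap"],
}

def auto_tag(text):
    """Return the sorted list of tags whose keywords appear in the text."""
    lowered = text.lower()
    return sorted(tag for tag, keywords in TAG_RULES.items()
                  if any(keyword in lowered for keyword in keywords))

print(auto_tag("The support agent ignored my complaint about the price"))
# → ['customer_service', 'pricing']
```

Even this naive version shows why automation scales: the same function tags a hundred or a hundred thousand mentions with identical consistency, which is where manual editors flounder.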

Enriched data is a real asset to any organization, as it improves outcomes and enables informed, insightful decision-making. Do you have large amounts of data which require dedicated effort before they can yield reliable insights, or do you plan to start collecting such data now? Take a look at what Auris, our AI-powered insights platform, has to offer by way of collecting and enriching your data and delivering powerful, actionable consumer and business insights.


Reply Bots – The Risk-Reward Tradeoff

Businesses today use AI-powered chatbots to offer responsive customer service online, at all times of the day, on their company's website and as live chat on the mobile app. They are considered a great alternative to human employees who are pressed for time and bored with the tedium of digital work. They are also considered useful for predicting safety concerns, managing maintenance schedules and streamlining performance. The trend is now heading towards AI-powered market targeting: creating personalized emails and promotions based on collected customer data.

Chatbots are currently deployed in customer interaction, whether in sales, customer service or marketing. They are expected to improve service quality and automate administrative tasks.

  • By deploying a bot, the business offers to be available to customers at all hours of the day, even during hours when human intervention is not possible.
  • It adds predictability to responsiveness.
  • With a chatbot, a customer interaction can have a faster turn-around time (TAT).

However, they constitute a risk to a business, even without adding cybersecurity concerns and the threat of unencrypted channels, unauthorized access to stored conversations or hackers using the bot for phishing attacks to the mix. This is because:

  • Your customers are looking for a human behind the chat. Bots are the antithesis of everything 'social' stands for, and the last thing to enthuse a stressed or irate customer is a bot offering to deal with their issue.
  • A chatbot which is meant for human interaction is also vulnerable to human intervention with malicious intentions.
  • The bot could be taught to adopt terrible communication skills, much like Microsoft’s Tay, a chatbot which started spewing offensive, racist language thanks to its machine learning skills and the ‘training’ it received from trolls. With enough interactions to convince it, a bot could be trained to say that the Sun rises in the West or divert customers to malware sites. Such potential for sabotage and subversion should make any business adopt caution before using a chatbot.
  • A stolen system with the chatbot open could yield sensitive customer information which could prove detrimental to the business and its interests.

In summary, launching a chatbot is easy, but ensuring that it works effectively is not. The risks make anything beyond listening to the customer and responding with a reassuring "We will get back to you…" not worth incurring.

The way forward with this still-evolving technology is to develop workflows which alert the right people within a response team. The system can also be trained to escalate to the right level by detecting the urgency of the situation. Establishing distributed ownership of responses, coupled with oversight of the response process, is a far better alternative to automated responses. With Auris, our listening platform, we have done just that.
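A workflow of this kind can be sketched as a simple routing function: classify each incoming mention by urgency and route it to the right owner instead of auto-replying. The urgency keywords, team names and follower threshold below are illustrative assumptions, not an actual product configuration.

```python
import re

# Sketch of urgency-based escalation routing. Keywords, team names and
# the follower threshold are illustrative assumptions.

URGENT_TERMS = {"refund", "fraud", "lawsuit", "outage", "injury"}

def route(mention):
    """Decide who owns the response to an incoming mention."""
    words = set(re.findall(r"[a-z]+", mention["text"].lower()))
    if words & URGENT_TERMS:
        return "escalate:crisis-team"   # a human takes over immediately
    if mention.get("follower_count", 0) > 10_000:
        return "escalate:pr-team"       # high-visibility author
    return "queue:support"              # standard response queue

print(route({"text": "Where is my refund?", "follower_count": 42}))
# → escalate:crisis-team
print(route({"text": "Love the new feature", "follower_count": 50_000}))
# → escalate:pr-team
```

The bot never composes the substantive reply; it only listens, acknowledges and hands the conversation to the right human, which is the risk-reward tradeoff the section argues for.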


What does Real Time Listening do for You?

So, what is the big deal about listening in real time? What if you listen in to what your consumers are saying every other day or perhaps during your monthly review meeting? The short answer is that when users are experiencing your brand and engaging with it in real time, they expect the brand to reciprocate. But that’s just the short answer.

Listening to your consumers in real time provides your brand with an edge. Let’s take a look at some common situations and how real time listening makes all the difference:

  • Real time listening leads to real time responses. Responsiveness is a virtue that consumers appreciate. Responsiveness, in turn, leads to improved customer loyalty and enhanced brand reputation.
  • You've discovered a customer who is looking for products and services similar to what your brand offers. Would you want to waste even a minute once you know, in real time, about this prospective client? We think not!
  • There is a crisis in the making. Real time listening can equip you to douse the fire, just in time.
  • An issue has already escalated into a crisis. Every minute counts before it snowballs further into a national issue/topic, finding its way to households through mainstream media. Real time listening gives you the few hours or minutes you need to execute your defensive public relations strategy.
  • Running a campaign? Well, you can improvise in real time. For example, even in the middle of a campaign, based on what excites your consumers the most, you could tweak the campaign elements and enhance the results.
  • Looking for influencers who have recently had a good experience with the brand? Catching them early, while their experience is recent, works best.

Any marketer would be happy to be a fly on the wall if it means having access to customer sentiment or gaining an understanding of a prospective client's decision-making. Big data and Artificial Intelligence have now made this dream come true, without much effort.


An Incredible Alternative to Questionnaire-based Research

Questionnaire-based research methodologies have well-known limitations. It is tragic but true that we often realize this only in retrospect, after we have spent tens of thousands of dollars! Most often, studies are derailed by the use of the wrong sample, the wrong questions, or both. What if there were no such limitation? What if we could structure quantitative studies in the same open-ended manner we deploy in qualitative research, and still derive the quantitative insights we want? We explore the limitations of questionnaire-based methods and propose a possible solution.

The practice of market research might seem simple to the lay person: determine a sample of respondents, present them with a pre-designed questionnaire, elicit their answers, collate them, extrapolate the insights to a larger audience and publish the findings. In reality, market research is an involved science. Designing the questionnaire, selecting the sample of respondents and administering the questionnaire are all as scientifically determined as the methods applied in running a credible statistical analysis on the results. Each of these steps offers multiple nuances, on account of the many options and methods involved in the process.

The result of such complexity and nuances in market research methods is that not all studies are successful. How many times have we questioned the results after an expensive study? There are several things which could have gone wrong.

Let's look at some potential sources of error in a questionnaire-based survey. These limitations and biases impact consumer research more frequently than one would imagine:

  • Selecting the population: The sampling frame selected and the weightages attributed to extrapolate it to a larger group may not truly represent the population. We see this error particularly affecting the prediction of election outcomes. Collecting the opinions of the wrong crowd is bound to result in wrong findings.
  • Biased Questionnaire: The researcher’s own point-of-view could creep into the way the questions are designed and the answers are interpreted or analyzed. This error is not measurable and can be very subtle or quite purposeful in the way it could cause the results to go wrong. Even senior researchers fall prey to this error, mainly because opinions about some topics, like religion, secularism, marriage, discipline or education (to name a few), are strongly entrenched for most people.
  • Not from a Random Sample: The very act of distributing a questionnaire and collecting responses makes obtaining a truly random sample difficult, which affects the reliability of statistically derived predictions. The fact that respondents willingly agree to fill out the questionnaire makes the survey non-random, and this can never be fully fixed, because those who refuse will not change their minds. People who are offered incentives to take a survey can also be biased.
  • Data not Weighted: It is important to weight the data collected from the sub-group which willingly agreed to respond to the questionnaire. Sometimes surveyors fail to collect or incorporate information like gender or ethnicity, which is needed to weight the data and compensate for its effect on the predictions made from the survey responses. Another way to control this error is to quota the sample by deciding how many people from each sub-group will participate in the survey.
  • Over-correction: When faced with concerns about errors and biases creeping into the data, researchers tend to over-correct for every imaginable error. This can skew the characteristics of the data and make it lose its ability to represent the larger population.
  • Miscellaneous:
    • Closed-ended questions have a pattern in responses, as respondents go by primacy and recency which weigh in for the first and last options.
    • The order in which questions are posed could influence the responses.
    • Leading/pointed questions, however subtle, impact the results obtained.
    • Saying ‘Yes/Agree’ comes easier than saying ‘No/Disagree’.
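The weighting idea from the list above can be illustrated with a small post-stratification sketch: each respondent is weighted so that the sample's subgroup shares match known population shares. The age groups, population shares and sample counts here are made-up figures.

```python
# Post-stratification weighting sketch: correct a skewed sample so its
# subgroup proportions match the known population. All figures invented.

population_share = {"18-34": 0.40, "35-54": 0.35, "55+": 0.25}
sample_counts    = {"18-34": 120,  "35-54": 60,   "55+": 20}

n = sum(sample_counts.values())  # 200 respondents in total

# weight = (population share) / (sample share) for each group
weights = {group: population_share[group] / (sample_counts[group] / n)
           for group in sample_counts}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
# Over-represented groups (18-34) get weights below 1,
# under-represented groups (55+) get weights above 1.
```

Applying these weights when aggregating answers compensates for the over-representation of willing respondents in some subgroups, which is exactly the error the bullet describes.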

Research is still exploring ways and means to correct these errors and biases which could creep into one’s survey. Informed opinion asserts that an excellent alternative to a questionnaire-based survey would be collecting the voluntary responses of consumers, given without any prompting or incentive, making social listening important for market research.

What if there was no need to set out with a set questionnaire? Just like in qualitative research, but then with a sample large enough to enable us to derive quantitative insights? What if we could eliminate, to some extent, such biases from impacting our research?

The answer to these "what ifs" is a method which offers access to a near-infinite set of consumer views that can be mined to understand possible responses to multiple questions. Large data sets of voluntarily offered consumer opinion remove the constraints on the number of ways we can slice and dice the sample of our choice, answering a hundred different questions.

Auris, our AI-powered consumer insights tool, is an attempt to make this wish come true. With Auris you have an incredibly large sample of consumer feedback, all expressed of the consumers' own volition. A researcher can look at the cohorts of their choice and seek answers to as many questions as come to mind. Your research yields trustworthy results because you gain the ability to listen to an unlimited amount of feedback in real time, from a random selection of people who provide it voluntarily.


What Sample Size is Acceptable for Qualitative Market Research?

All marketing practitioners who have taken up consumer research will have encountered the question: how big should the sample size be? For qualitative studies, the answer typically is: "As big a sample as you can afford. The more the better!" Auris takes sample size out of the equation and makes this question redundant. Imagine qualitative research with a significantly large body of unprompted consumer views, rather than just a handful. For those interested in research techniques, here's some more detail you might find interesting.

When deciding the sample size for a qualitative market research project:

  • Do we follow Patton (2002) by accepting that even 1 case is enough for research which does not have theory-building as core focus, and plans only to explore an issue or offer depth to quantitative data? After all, according to Patton (1990), a researcher needs to focus on what is doable or feasible; because sampling to the point of redundancy requires unlimited timelines as well as unlimited resources.
  • Or, do we follow Saunders, Sim et al. (2012) who ask us to stop sampling only after we reach theoretical or conceptual saturation, if we aim to build a theory? If saturation makes further data collection unnecessary, we need to ensure that its operationalization is consistent with the research question, the theoretical position and the analytic framework adopted.

Let’s consider the issues involved.

  • Qualitative research does not rely on quantitative requirements like sample size; typically it needs smaller samples than quantitative analysis would.
  • The adequacy of a sample in qualitative data analysis is measured by the number of themes or categories identified within the data.
  • Saturation is the point where adding new data does not improve the explanations of the themes or the categories or add any new perspectives or information.
  • Diminishing returns on qualitative data occur when more data leads to no new information, as one occurrence is enough to add the information to the analytical framework.
  • Gathering too much data and too many samples could make a research project impractical and time consuming, as each new piece of data brings in additional complexities.
  • It is important to capture all perceptions, but there is no need to do it on a repetitive basis, without adding any additional inputs to the research points being inquired into.

So, we accept that it is prudent for a qualitative researcher to take up enough cases to reach a saturation point where additional efforts will not be able to yield further useful information from the sample.
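One common way to operationalize such a stopping rule is to halt once a fixed number of consecutive interviews yield no new themes. A minimal sketch follows, with invented theme codes and an arbitrary "patience" of three; real studies would define saturation in line with their research question and analytic framework.

```python
# Sketch of a saturation stopping rule: stop interviewing once `patience`
# consecutive interviews contribute no new theme. Theme codes are invented.

def interviews_to_saturation(coded_interviews, patience=3):
    """Return how many interviews were needed before `patience`
    consecutive interviews added no new theme (or the total, if
    saturation is never reached)."""
    seen, stale = set(), 0
    for i, themes in enumerate(coded_interviews, start=1):
        new_themes = set(themes) - seen
        seen |= set(themes)
        stale = 0 if new_themes else stale + 1
        if stale >= patience:
            return i
    return len(coded_interviews)

sessions = [{"price"}, {"taste"}, {"price", "packaging"},
            {"taste"}, {"price"}, {"packaging"}, {"taste"}]
print(interviews_to_saturation(sessions))  # → 6
```

After interview 3 no new theme appears, so three stale interviews later (interview 6) the rule declares saturation, and interview 7 would never be conducted.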

More about reaching saturation and why the sample size is important:

  • The size of a sample and the design of the project determine the time taken for a project to reach saturation. To illustrate, a study covering Waldorf education in India up to the seventh standard would naturally take less time to reach saturation than a study of education in general.
  • The amount of information obtained will also depend upon the interviewing techniques and skills of the interviewer.
  • Some projects have large sample sizes because:
    • The sample has a heterogeneous population
    • The project uses multiple selection criteria
    • The study contains multiple samples
    • The project needs an extensive ‘nesting’ of criteria
    • The sample contains groups of special interest requiring intensive study.
  • There’s no scientific explanation or method for determining that saturation has been reached at a specific point in the survey.
  • Saturation is considered to be a matter of degree by some, with the potential for some new data to emerge.

Various factors decide the sample sizes in qualitative studies, and these suggestions range from 5 to 60, with 30, 40 and 50 being other significant reference points. Ultimately, the decision to call a halt to the study can be taken by the researchers, when they finally find that with each new interview they are unable to extract any significant new information to add to their data bank. That would prove to be the ideal sample size for their qualitative research study.

This is where an AI-powered social listening tool like Auris offers a market research team the unique advantage of listening to an unlimited amount of feedback in real time. What's more, these opinions, views and feedback are as random and unprompted as they can be, yielding highly dependable predictive analyses, as the subjectivity which researchers tend to bring in is largely eliminated.