Insurance company case study: how we researched the relationship between spam detection and low cold call success rates
Cold caller: “Hi Ben! I’m Alex, a sales representative from the X insurance agency. I’ve heard that you’re looking for a car insurance renewal, and I’d like to share with you a great offer we have. Do you have a couple of minutes to discuss this?”
What would you answer if you were Ben? Most likely “no,” if we trust the study conducted by the Keller Research Center at Baylor University, which reports a success rate of about 1% for cold calling.
A common reason for all those nos is that many cold callers sound like spammers. However, an insurance company can improve its chances of making successful cold calls with the help of machine learning (ML). As in the case study we describe below, ML can give the company solid insights into the relationship between spam calls and underperformance, so the company can use this information to refine its cold calling strategy.
“Customers don’t answer my phone calls” — chances are you’re hearing this complaint more and more often from your sales reps.
The 2021 annual survey of business communication reports that more businesses struggle to connect with their leads and customers by phone. Half of businesses say they reach only the answering machine, a problem that has intensified by 133% since 2020 (the researchers link this change in consumer behavior to the Covid pandemic).
This isn’t really surprising, though, as the same study reports that 97% of consumers admit to not answering calls from businesses and unknown numbers. Customers explain that they are just trying to protect themselves from spam, especially with the increase in robocalls. The problem is that your cold calls then risk being ignored or sent directly to voicemail.
Spam detection can be messy. Since it’s difficult to draw a clear line between spammy and non-spammy business phone calls, customers or phone carriers can mark your number as spam by mistake. Eventually, this could affect your cold call effectiveness.
Here are the most common factors that can cause a business phone number to be flagged as spam:
- The number of calls per day or hour made from the same number: Overly active numbers, such as those making over 100 calls a day, raise suspicion and get blocked more often.
- Frequent spam reports from customers: If enough recipients flag the number as “Spam Risk” or “Spam Likely,” phone carriers will take measures and mark the number as spam to protect their customers.
- Poor conversation scripts: Some agents open with worn-out or overly promotional phrases, which encourages leads to end the conversation early and flag the number as spam. Good scripts or, even better, strong talking points instead of scripts can be more effective, leading to fewer hang-ups, longer calls, and more engaged leads.
Thus, if an insurance company habitually calls its leads on Monday morning and makes more than a hundred cold calls by midday, it runs a high risk of having its numbers flagged as spam.
Spam labeling is a tricky issue, by the way. At first glance, mislabeling business phone numbers as spam can only hurt the company: it becomes unable to reach potential customers, which eventually results in lost leads and lower revenue.
Our own insurance company case study shows that many variables affect the success rates of cold calls, and we found no direct correlation between spam detection and number effectiveness. Today, ML algorithms are challenging conventional ideas about cold calls and can give you enough insight into your cold calling strategy to make it more effective.
Our Customer is a midsize insurance business specializing in property and auto insurance. Cold calls comprise a considerable part of the Customer’s sales strategy. Sales representatives call leads regularly in order to offer insurance to them.
The company makes calls from several phone numbers, and its base is regularly replenished with new leads. The idea is to call and talk to as many people as possible and close the maximum number of deals. Cold calling is one of the key channels through which our Customer wins new policyholders, which is why it matters so much to the company.
Here are a few other particularities of how this insurance company works that mattered for this case study:
- Different agents work with various types of leads, depending on what sources these leads have come from.
- These leads are processed through different pipelines.
- Different agents and carriers could use the same phone number for cold calling.
- Since agents use multiple pipelines while working with leads, the same phone number could be used differently depending on which agent is calling from it.
The company contacted Intelliarts AI to research spam detection and the risk of its numbers being marked as spam. The Customer worried that its numbers being flagged as spam was hurting the company’s performance. So the insurer wanted us to use a machine learning approach to:
- Understand whether we can track if the number was detected as spam
- Discover a pattern of how the Customer could make cold calls more effectively
In other words, this was largely a knowledge discovery task: we had to research the effectiveness of the Customer’s numbers and determine whether there was any correlation between a number being marked as spam and its success rate. Ideally, we would also identify the factors that favor the insurer’s cold call success so the company could work on its weaknesses and improve its cold call success rate.
To complete this task, the Customer provided us with some information about leads that the company collected via different data pipelines. This involved important information like who the caller was, whether or not the deal was closed, the data about leads, and so on. We also received the set of phone numbers used by the agents. The Customer pointed out that they had a pool of numbers that they were using and that got replenished regularly with new numbers.
Since cold calls are truly important to the Customer for winning new policyholders, the insurer had tried to solve this business challenge before. In particular, the company had asked a third-party vendor to manually check its phone numbers for spam.
That’s how we received one more type of data: the labels provided by this vendor marking which of the Customer’s phone numbers were flagged as spam. This data was very expensive, and after analyzing it, our data scientists noticed that almost all the checked numbers were marked as spam: of the 420 numbers the insurer submitted for spam detection, only 2% were not flagged.
In our analysis, we used only 2021 data as the most relevant. At a high level, we tried to use this historical data to understand the difference between the data before and after numbers were flagged as spam.
1. General phone call analysis
We began with the general phone call analysis to see the bigger picture in this case. Our team wanted to know how the amount of successful cold calls grew over time and whether there was any correlation between successful and all calls generally.
We noticed that the data on cold calling effectiveness differed markedly before and after October, so we decided to analyze the calls for these two periods separately. The data showed that, before October, the number of successful calls (0.23) and the number of calls overall (2.98) were both growing, and the correlation between them was high (0.82).
After October 2021, the situation changed:
- The number of successful calls was falling (-0.55)
- The number of calls overall was growing (4.04)
- The correlation was lower (0.66)
You can review this data in the graph:
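A trend-and-correlation comparison like this one can be reproduced in a few lines of Python. The sketch below is illustrative only: the weekly counts are made-up numbers, not the Customer's data, and the "trend" is simply the slope of a least-squares line fitted to each series.

```python
import numpy as np

# Illustrative weekly counts only -- made-up numbers,
# not the Customer's actual call data.
weeks = np.arange(10)
all_calls = 200 + 3.0 * weeks + np.array([5, -3, 2, -1, 4, -2, 1, 0, -4, 3])
successful = 20 + 0.25 * weeks + np.array(
    [0.5, -0.2, 0.1, 0.3, -0.4, 0.2, 0.0, -0.1, 0.3, -0.2]
)

# "Trend" = slope of a degree-1 least-squares fit to each series
slope_all = np.polyfit(weeks, all_calls, 1)[0]
slope_success = np.polyfit(weeks, successful, 1)[0]

# Pearson correlation between the two series
corr = np.corrcoef(all_calls, successful)[0, 1]

print(f"all-calls trend: {slope_all:.2f} calls/week")
print(f"successful-calls trend: {slope_success:.2f} calls/week")
print(f"correlation: {corr:.2f}")
```

With both series trending upward, the fitted slopes are positive and the correlation is high, mirroring the pre-October picture described above.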
We assumed this downturn was connected to numbers being marked as spam more often, so we decided to dive deeper. Our initial hypothesis was that cold call performance falls after a number is flagged as spam. To check this, we conducted a visual analysis of the numbers that the third-party vendor had marked as spam.
However, the results proved us wrong: we saw no trend of the number of successful calls falling after a number was flagged as spam. The two graphs below demonstrate that even after a number was marked as spam, its performance continued to fluctuate and did not stay low.
One more interesting observation was that 96% of the phone numbers we analyzed were used for less than 10 days (not necessarily consecutive days). This complicated any time-series analysis. We could, of course, have analyzed calls by the hour, but that is too short a period for finding a pattern.
2. Number migration analysis
As mentioned earlier, phone numbers migrated between carriers, types of leads, and so on, and our team assumed this could affect performance too. For example, we tracked the success rate of calls from a single phone number over time and how it changed once the number was flagged as spam. We expected performance to drop after the number was marked as spam, and it did. What we didn’t expect was that performance started to grow again after a few more calls were made from the same number. What’s more, the analyzed number could have had different carriers over time, and this also affected its success rate.
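Tracking a number's performance through such migrations boils down to a rolling success rate. The sketch below uses a hypothetical call log for one phone number (column names and values are assumptions, not the Customer's data); the number switches carriers mid-way, and the rolling rate dips and then recovers, as described above.

```python
import pandas as pd

# Hypothetical call log for a single phone number (illustrative data only);
# the number migrates from carrier c1 to c2 and back
log = pd.DataFrame({
    "carrier": ["c1"] * 4 + ["c2"] * 4 + ["c1"] * 4,
    "success": [1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1],
})

# Rolling success rate over the last 4 calls: it dips to zero during
# the c2 stretch and recovers after a few more calls are made
log["rolling_rate"] = log["success"].rolling(4, min_periods=1).mean()
print(log)
```

A rolling window keeps the metric local in time, so a temporary dip (for example, right after a spam flag or a carrier switch) is visible instead of being averaged away.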
3. Underperformance vs spam analysis
Since time-series analysis was not feasible, our data scientists then focused on analysis in terms of the number of calls. The idea was to understand how the underperformance of a particular number correlated with spam.
Our approach was as follows:
- We chose a group of calls made from the same number and analyzed it in terms of different call characteristics, such as whom the company called from this number or which carriers were involved.
- Next, we looked for groups of calls with similar characteristics and compared their performance with that of the analyzed group, to see whether and how the success rates differed statistically.
- We also compared success rates before and after spam detection across the different groups of calls.
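The group-comparison step can be sketched roughly as follows. All column names and counts here are assumptions for illustration, not the Customer's real schema or data; the statistical check is a chi-square test on a 2×2 contingency table of successes vs. failures.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical call log; column names and counts are assumptions
calls = pd.DataFrame({
    "number": ["A"] * 60 + ["B"] * 60,
    "carrier": ["c1"] * 120,        # same characteristics across both groups
    "lead_source": ["web"] * 120,
    "success": [1] * 2 + [0] * 58 + [1] * 6 + [0] * 54,
})

# Step 1: success counts per number within the peer group
rates = calls.groupby("number")["success"].agg(successes="sum", total="count")
rates["rate"] = rates["successes"] / rates["total"]

# Step 2: chi-square test of the analyzed number ("A") against its peers
target = rates.loc["A"]
peers = rates.drop("A").sum()
table = [
    [target["successes"], target["total"] - target["successes"]],
    [peers["successes"], peers["total"] - peers["successes"]],
]
chi2, p_value, _, _ = chi2_contingency(table)
print(f"A: {target['rate']:.2%} vs peers: {peers['successes'] / peers['total']:.2%}, p={p_value:.3f}")
```

A low p-value would indicate that the analyzed number's success rate is statistically different from calls with otherwise similar characteristics.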
Our results are visible in the next graph. The success rate of the groups of calls we analyzed was relatively low (2.14%). Most groups of calls with similar characteristics performed better, which indicated that the analyzed groups underperformed relative to similar calls.
Below, you can also look through an example of our solution’s output. As mentioned, the idea was to choose a group of calls and check whether any spam detection had occurred.
Considering the success rates of cold calls before and after spam detection, we found no serious underperformance among the calls we analyzed. Altogether, we checked the success rate of 1000 calls (five groups of 20 calls each) before and after spam detection and found no statistically significant difference between them.
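A before/after comparison of success rates like this one typically comes down to a two-proportion z-test. The sketch below uses made-up counts, not the Customer's figures, to show the shape of such a check.

```python
from math import sqrt

from scipy.stats import norm


def two_proportion_ztest(s1, n1, s2, n2):
    """Two-sided z-test for equality of two success rates."""
    p1, p2 = s1 / n1, s2 / n2
    pooled = (s1 + s2) / (n1 + n2)           # pooled success rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))            # two-sided p-value


# Made-up counts (not the Customer's figures):
# 12 successes in 500 calls before the spam flag, 10 in 500 after
z, p = two_proportion_ztest(12, 500, 10, 500)
print(f"z={z:.2f}, p={p:.3f}")
# a p-value above 0.05 means no statistically significant difference
```

With rates this close and samples this size, the p-value lands well above the usual 0.05 threshold, which is the same kind of "no significant difference" conclusion described above.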
Overall, the analysis proved what we suspected earlier: the underperformance of the Customer’s cold calls wasn’t caused by spam, which disproved all the insurer’s earlier assumptions. However, the success rate was greatly impacted by who was calling, whom the company was calling, and how the cold calls were managed.
The research project ended with many market insights and surprising results for the Customer. Machine learning proved an invaluable source of information for insurance companies. More specifically, the project outcomes include the following:
- We provided the Customer with the insight that, in this case, the data showed no strong correlation between spam and low cold calling success rates. We also recommended that the insurer detect underperformance and try to reduce it.
- Our second important insight from the Customer’s data was that many factors influence the performance of a phone number. Our recommendation here was to build a number management system to contribute to the cold call success rate.
- We also built an MVP that can detect the underperformance of phone numbers for the Customer. In the long run, we can help the insurer detect which factors specifically cause the low performance of cold calls.
Since our collaboration with this insurance company continues, we may return to these recommendations later and improve the company’s cold calling effectiveness even further.
As noted, data science and machine learning can help insurers understand specifically what causes a low cold call success rate. Our team can build a model with data-driven recommendations on how to improve performance: for example, when is the best time to call leads, how to better structure the conversation, and how to leave the most effective voicemails. Insurers can also use these insights in employee training as fresh cold calling tips.