A cold shiver went up my spine when I reviewed the call notes that an account manager had taken during a recent client onboarding meeting. Buried deep in the notes was a quote from a well-respected head of sales operations at a large global software company.

“At the start of last year, we instituted a strict policy of managing to clearly defined activity metrics. As a result, we experienced a 15% sales increase.”

Otherwise stated:

I took an action. The team had success. Therefore, this specific action caused success.

As a proud data geek, I was horrified, because the quote implies a lack of understanding of correlation and causation by someone who should know better. (For a quick primer on the difference between the two, @KirkDBorne recently tweeted a brilliantly simple explanation: twitter.com/KirkDBorne.)

More importantly, I have had a front row seat to this same movie more times than I can count. It always has a sad ending.


It’s not uncommon for business leaders to point to their innovations as “the” drivers for team success. A little self-promotion is needed in some organizations to navigate around sticky political situations or climb the corporate ladder.

In this case, however, the SVP of Sales had hired our company to help his team achieve greater success through the application of advanced analytics. This was not self-promotion. He believed his story and encouraged us to believe that this change was the only factor in the year-over-year improvement. Nothing else had changed, he claimed. Further, he pointed to an incredibly powerful statistic: His own data analysis showed that tracked activities (phone calls, opened opportunities, meetings, emails) in aggregate did increase substantially. Fifteen percent to be precise – the exact level that sales increased by.

This statistic is powerful, but ultimately misleading.

It took a data scientist on our team less than an hour to poke holes in the sales leader’s theory that the company’s new management style caused the increased sales levels. We have seen many companies fall into this trap of over-reliance on activity metrics before, so we knew how to test the hypothesis that the 15% increase in activities directly led to the 15% uptick in sales.

As a first step, we divided the sales reps into quartiles based on gross sales. We also divided the reps into quartiles based on win rates. Not surprisingly, most reps landed in the same quartile under either ranking (reps with the highest gross sales tended to be the reps with the highest win rates).

When we looked at activities and results in these more granular buckets, things quickly became interesting: there was only a negligible increase in activities for the top two quartiles (less than 2%). The bulk of the activity increase came from the bottom half of sales performers; the bottom quartile alone saw a whopping 34% increase in activity levels. When we looked at actual results (gross sales and win rates), however, the picture flipped upside down: the top half of performers, whose activity levels barely changed, saw the greatest increase in win rates and gross sales.
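The quartile comparison described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration with toy data; the field names (`sales_prev`, `act_curr`, etc.) and the numbers are invented for the example and are not the client’s actual data:

```python
# Hypothetical sketch of the quartile analysis described above.
# Each record holds a rep's activity counts and gross sales for two
# consecutive years; all field names and values are illustrative.

def quartiles(reps, key):
    """Rank reps by `key` (descending) and split into four equal buckets."""
    ranked = sorted(reps, key=key, reverse=True)
    n = len(ranked) // 4
    return [ranked[i * n:(i + 1) * n] for i in range(4)]

def pct_change(bucket, before, after):
    """Aggregate percent change in a metric across a bucket of reps."""
    prev = sum(r[before] for r in bucket)
    curr = sum(r[after] for r in bucket)
    return 100.0 * (curr - prev) / prev

# Toy data: 8 reps, 2 per quartile.
reps = [
    {"rep": "A", "sales_prev": 900, "sales_curr": 1080, "act_prev": 500, "act_curr": 505},
    {"rep": "B", "sales_prev": 850, "sales_curr": 1000, "act_prev": 480, "act_curr": 490},
    {"rep": "C", "sales_prev": 700, "sales_curr": 790,  "act_prev": 450, "act_curr": 455},
    {"rep": "D", "sales_prev": 650, "sales_curr": 720,  "act_prev": 440, "act_curr": 450},
    {"rep": "E", "sales_prev": 500, "sales_curr": 520,  "act_prev": 400, "act_curr": 470},
    {"rep": "F", "sales_prev": 450, "sales_curr": 460,  "act_prev": 390, "act_curr": 460},
    {"rep": "G", "sales_prev": 300, "sales_curr": 300,  "act_prev": 350, "act_curr": 470},
    {"rep": "H", "sales_prev": 250, "sales_curr": 245,  "act_prev": 340, "act_curr": 455},
]

# Compare activity growth with sales growth, quartile by quartile.
for i, bucket in enumerate(quartiles(reps, key=lambda r: r["sales_curr"]), start=1):
    act = pct_change(bucket, "act_prev", "act_curr")
    sls = pct_change(bucket, "sales_prev", "sales_curr")
    print(f"Q{i}: activity {act:+.1f}%, sales {sls:+.1f}%")
```

With this toy data the top quartile shows a small activity bump but the largest sales gain, while the bottom quartile shows a large activity bump and flat-to-negative sales, mirroring the pattern we found. The aggregate numbers alone would hide exactly this split, which is the point of cutting the data into quartiles.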

Clearly, something other than the new activity-based management approach was driving the sales increase.

In this fairly standard case, we see evidence that the bottom half of the sales organization is pumping the pipeline full of questionable opportunities and logging more calls and emails for these questionable opportunities in order to avoid negative consequences. The reps have done as instructed, but there is not a corresponding focus on the quality or validity of the opportunity or the interaction. Moreover, managers who are focusing on the activity tally for their reps (because they are being evaluated on their implementation of the strategy) are less likely to focus on the viability of the opportunity or the quality of the interaction.

This sales leader made a common mistake in interpreting data. More importantly, when companies focus their sales strategies exclusively on metrics associated with quantity of activities they are bound to be disappointed. Here’s why:

• We expect to see lower performing reps entering more unqualified opportunities and closing at lower win rates. More time is wasted on fudging bad data. Less time is focused on expanding skill sets.

• Higher performing sales reps tend to have a laser focus on achievement: hitting their sales targets, getting the next sale, achieving the next rung in the commission ladder. They understand how to sell and loathe being micromanaged. Focusing on activities fixes a problem that doesn’t exist.

• Even the best reps have weaknesses. If sales management has made the investment in coaching reps, that investment should include efforts to identify and coach to those weaknesses. These weaknesses might be visible only through some combination of other predictive variables, such as account type, prospect size, deal size, meeting type, funnel stage, the involvement of a certain competitor, the prospect’s industry, or the product(s) being sold.

• The data might show that a rep’s aggregate sales activities are high, but it may be the frequency, kind, or quality of those interactions that is predictive of the final outcome. Some combination of these individual weaknesses may be a blind spot for both the rep and the manager. Coaching to activity metrics typically means coaching to metrics designed for the whole team instead of coaching to the individual rep’s most impactful activities or behaviors.

• Coaching to average metrics takes significant management focus and prevents a search for all the other predictors of success to which managers can coach.

None of this should suggest that we don’t value activity metrics as an important part of the sales process. However, activities, like anything else, need to be considered in the context of all the other sales data when implementing sales strategies and interpreting results. They may sometimes correlate with success, but do not confuse that correlation with causation. Organizations that measure the quality of interactions between reps and prospects, as opposed to just the quantity of those interactions, are much better positioned to understand the drivers of sales success and failure.

Postscript: After this article was first published in an industry blog, several readers wanted to know what led to the 15% growth rate that the sales leader quoted. Despite the sales leader’s initial claims that nothing else had changed, there were more subtle moves that appear to have had some impact, involving changes to sales territories and comp plans. This was also the company’s fourth consecutive year of double-digit growth.

Jim Dries is the sales rep in chief and head data geek for piLYTIX.