Sales enablement is suddenly very hot!
Just a few years ago, very few companies had a sales enablement leader. Almost no one had a full team. Now, most large B2B sales teams are investing heavily in sales enablement departments.
I talk to sales enablement professionals every day of the week, but I still struggle to define exactly what it is. I can’t define sales enablement because I am not a practitioner myself, and there is an awful lot of debate within the community of practitioners about what the role should entail. Like any nascent business discipline, the role continues to evolve rapidly.
I think that the word itself – enablement – is driving some of the confusion. The word is open to interpretation. And has it ever been interpreted differently by different people! When we google “sales enablement” we find scores of varying and complex definitions.
I debated this topic over breakfast with a founding member of the Sales Enablement Society a few weeks back. He has been an influential thought leader in this space for a long time. He opined that much of this debate could have been avoided if we had used a simpler term than enablement from the beginning. He suggested a term that gets to the heart of what everyone in this field is ultimately striving to achieve for their companies: productivity.
Now, when I go back and review all those complex definitions of the function, productivity seems like the perfect word.
But even if everyone agrees that productivity is the desired goal of the sales enablement function, basic structural questions still lack consensus, and these outstanding questions threaten to prevent the function from achieving its full potential. They revolve around things like reporting structures and the level of interaction with other functions: To whom should the enablement function report? What is the “correct” level of interaction with departments like marketing and HR? The question that my company most often discusses with enablement professionals is the role that data science and analytics should play.
Some sales enablement leaders see data science and analytics at the core of their roles. Others, not so much.
If sales enablement is inherently about driving sales productivity, shouldn’t the sales enablement function be tasked with a very detailed understanding of the driving forces of success and failure for their companies? Why do certain opportunities win and others lose? Why are some salespeople successful and others decidedly not? Why are certain goals achieved and others missed? What value are sales technologies bringing to the company? What content is needed to drive one sale or another?
The answers to these questions are never black and white. They can only be found through a scientific examination of a confluence of several different data points: buyer attributes, the salesperson’s strengths or weaknesses, content that was used, activities, technologies that were used, stage velocity and on and on.
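To make the idea of examining a confluence of data points concrete, here is a minimal, hypothetical sketch. All of the deal records and field names below are invented for illustration; this is not piLYTIX’s actual method, just a toy demonstration that no single attribute explains wins and losses on its own.

```python
from collections import defaultdict

# Hypothetical historical deals: each record combines several of the
# data points mentioned above (rep, buyer attributes, content used, cycle time).
deals = [
    {"rep": "Ann", "industry": "retail",  "content": "demo",  "cycle_days": 30, "won": True},
    {"rep": "Ann", "industry": "finance", "content": "paper", "cycle_days": 95, "won": False},
    {"rep": "Bob", "industry": "retail",  "content": "paper", "cycle_days": 40, "won": True},
    {"rep": "Bob", "industry": "finance", "content": "demo",  "cycle_days": 80, "won": False},
    {"rep": "Ann", "industry": "retail",  "content": "demo",  "cycle_days": 25, "won": True},
    {"rep": "Bob", "industry": "finance", "content": "paper", "cycle_days": 90, "won": False},
]

def win_rate_by(field):
    """Win rate for each value of a single deal attribute."""
    wins, totals = defaultdict(int), defaultdict(int)
    for d in deals:
        totals[d[field]] += 1
        wins[d[field]] += d["won"]
    return {k: wins[k] / totals[k] for k in totals}

# Slicing by any one dimension tells a partial story; the real signal
# only emerges when several dimensions are examined together.
for field in ("rep", "industry", "content"):
    print(field, win_rate_by(field))
```

In this toy data, industry looks like a perfect predictor and rep skill looks like a weak one; with a different slice of deals, the ranking could reverse. That instability is exactly why a systematic, multi-factor examination beats any single metric.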
To drive true productivity, the first order of business should be to have a thorough understanding of a company’s drivers of sales success and failure. As sales enablement practitioners continue to debate their roles and increase their influence within their organizations, data science must become a driving force.
Jim Dries is the CEO of piLYTIX, a sales enablement technology company.
Here’s a fun story. There is a case study currently on the website of a sales technology company. Other than having a mutual client, our company has no affiliation with them, so we’ll just call them AwsumTek. Had we not systematically dissected all of the claims in AwsumTek’s case study and compared them to our mutual client’s raw sales data, we would have dismissed the study as totally unbelievable. But our examination was thorough. Every single claim that this company makes is 100% factually accurate. It’s really a fantastic story.
The case study focused on a pilot launch of AwsumTek’s sales efficiency product. AwsumTek’s website claims to make every sales rep more effective from the moment they first start using the product. They promise that the technology will: Increase sales! Reduce Cycle Times! Increase Deal Sizes! Remove All Predictability Challenges!
The case study goes like this: AwsumTek’s product was piloted with 1,000 sales opportunities. At the end of the pilot, here’s what they found:
The sales team won 898 of those 1,000 opportunities. The remaining 102 deals were lost or still open at the time of publishing. Allow us to take the liberty of doing the math for you: that equates to an 89.8% win rate. This is especially staggering considering the company’s 5.7% average win rate over the same period.
The average cycle time on these 898 winning deals was only 13 days, which compares to the company’s average cycle time of 79 days!
The average price discount on these deals was less than 1%! WOW! The company’s average price discount was 12.5% over the same period.
Shortly after AwsumTek published the case study, our mutual client asked us to review their sales data and make sure that the data supported the claims made in the case study.
We reviewed the case study word by word.
At the end of our exhaustive review, we can definitively say that there was not a single factually inaccurate claim on the website or in any of the case studies! Every data point that the company provided is provably correct. We assume that the same can be said of the dozen or so other case studies on AwsumTek’s website. There’s a nice case study library right in the middle of their homepage.
Sign me up, you say? But wait! There’s more! These guys must really believe in their product because they are willing to give it away for FREE for thirty days! F-R-E-E! Zero cost whatsoever and they assume all the risk if you don’t like it!
We sell a mathematics-based sales technology, so we should know a good technology when we see it. We had been searching for the Holy Grail and were sure that we had found it!!
Except there’s one tiny detail that we uncovered (unraveled, really). And – ouch – this was a tough one.
It’s all a steaming pile of BS.
AwsumTek’s case study didn’t directly make any specific factually inaccurate claims. It’s what they didn’t say that is causing the gut-wrenching stench that you are probably starting to get a whiff of.
They didn’t lie. They were just expecting the readers to connect a few creatively placed dots and lie to themselves. Shockingly, it turns out that this happens all the time. Are marketing classes suddenly all teaching from the same textbook? Maybe we just weren’t paying close enough attention.
This creative-storytelling school of marketing has an unfailing willingness to cleverly use data to convince the audience of the powers of whatever it is selling. In this case study and their supplementary messaging, all the textbook’s dirty tricks are on display.
Comparing Apples and Antelopes:
AwsumTek paints a picture of legitimacy by giving us a few details that one might expect to find in a scientific study. They tell us that they are looking at distinct populations – one of which used the technology (indicative of a scientific test group) and one that didn’t (control group). They never tell us that there are major differences between the test group and the control group (which of course there are); that would obliterate the narrative they were trying to sell us. Instead, they slyly mention lots of other data points that we would see in a scientific study. The message is: it’s an apples-to-apples, scientific study! Now stop asking questions so we can get to the good stuff!
Of course, they skip over a few details on their way to the good stuff. Comparing the 89.8% pilot win rate to the company’s 5.7% average win rate was totally irrelevant. The company piloted the technology with the renewal managers for a product line that has almost no viable competition. For several years running, the client had successfully won approximately 90 percent of these renewal opportunities…exactly what they produced during this pilot with the new technology. Likewise, the cycle time and price discounting levels hadn’t budged when we looked at this product line’s renewal history over several years.
My, What Big Numbers You Have:
Big numbers tend to strike a psychological nerve. 1,000 sales opportunities and the 898 wins feel like big numbers. AwsumTek’s clever marketers hit us with these stats in the first paragraph to ensure that our collective subconscious would be screaming “This is legit!” For most companies in their target market, 1,000 sales opportunities is a healthy subset of the total opportunity pool that they would see in a year. However, this case study was done on a multinational company with dozens of business units and products. 1,000 opportunities represented a drop in the bucket for this company, which had tens of thousands of opportunities open at the time of the “experiment.”
Trust us! But if you don’t, it’s free anyway!
By combining these case studies with “free trials,” marketers are encouraging prospective buyers to think that they have nothing to lose. Anyone would be crazy not to try it! But once again, we have to look at what isn’t being said. They’re not accepting your money today, but they certainly aren’t going to take on any expenses either. The free trial business model is a do-it-yourself model. In giving away their products for free, they know that a prospective buyer’s IT resources will be taxed over the trial period. This comes at the cost of doing something more valuable for the company over that time. This opportunity cost is still a cost, and they know that many people will be hell-bent on making something work once they have made an investment – regardless of how little value has been gained by the end of the trial period.
In the end, AwsumTek’s marketing tactics should draw suspicion. The tactics don’t necessarily prove that their product is useless. That’s for their clients and prospective buyers to decide. However, until buyers start asking tough questions about what hasn’t been stated in the case study and how much a free trial really costs, the tactics are sure to continue.
I am constantly energized by the lightning-fast pace at which the predictive analytics industry is evolving. I am proud to be a part of a company that is making a difference for our users. However, my job is so much more difficult than it needs to be because of the wounds that the predictive analytics industry continues to inflict upon itself in the form of marketing messages that only serve to confuse audiences thirsty for knowledge.
Good companies in the predictive analytics space are all fundamentally founded on the premise of helping business leaders and/or consumers efficiently make better decisions. When successful, these better decisions will result in finding previously unseen value. By reducing the guesswork involved in decision making, the value for the end user should be savings of cash or time, waste reduction or revenue increases. All of these are the intended consequences for consumers of certain predictive analytics tools. That’s a pretty simple premise.
And yet as an industry, we do everything to complicate the premise by refusing to engage in a basic dialogue about how our tools work. Instead, we rely on buzzwords to describe our products. Sometimes these buzzwords have actual definitions (AI, proprietary machine learning algorithms, et al.). Other times we use clichés (black box, secret sauce, et al.).
More often than not, use of these buzzwords seems intended to avoid a reasonable discussion about what happens to a dataset to arrive at the insights generated by the predictive tools. The message is basically, “trust us, we’re smart.”
As money continues to pour into companies throughout the nascent analytics industry from investors and consumers alike, there is not a natural impetus to change this messaging. Yet.
“Trust us, we’re smart” won’t be a message that sells forever. Even analytics companies that are successful today with good products will be burned if our markets develop a strong enough distaste for our industry. That distaste will be sharpened by buyers of ineffective products when they learn that the sauce isn’t so secret, the black box is empty, and that the A.I. is more artificial than intelligent.
Some of the analytics companies that continue to rely on the buzzwords actually have products that are powered by truly ingenious mathematical approaches. Other companies use the buzzwords to avoid discussions about mathematical engines that a decent high school math student could produce.
Without a high-level description of how the “black boxes” work, the buzzwords subtly connote magic. People love the entertainment value of magic acts because they appreciate that they are being tricked. However, when people face a decision of any financial consequence and instinctively feel that magic is involved in a decision-making tool, they quickly discount – or totally ignore – its output. The mystery that my industry has shrouded itself in through its continued overreliance on these terms has caused suspicion. That suspicion has led to underutilization of analytics tools by the people and companies that need the most help.
Those of us who believe in the decision making power that our industry can bring – producers and consumers of analytics alike – are responsible for changing this environment.
Good analytical products are built on a foundation of solid mathematics, but we insult the intellect of our purchasing populations when we over-rely on buzzwords and too-good-to-be-true ROI claims. Sellers of these products need to ensure that their users understand how the products work. This doesn’t mean that I encourage anyone to publish their algorithms for the world to see, but we can all discuss the inner workings of our mathematical engines in a way that is accessible to broad audiences. Allowing users to understand the basic inputs that drive our models is a step in the right direction. Deeper discussions of how changes to these inputs impact the outputs will bring us a lot closer to credibility.
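One accessible way to discuss “how changes to these inputs impact the outputs” is a simple sensitivity check: bump one input, hold the rest fixed, and report how the prediction moves. The model below is a deliberately simple, invented illustration (a logistic function over a few named deal inputs), not any vendor’s actual engine; the field names and weights are assumptions for the sketch.

```python
import math

# Illustrative-only scoring model: weights and field names are invented.
WEIGHTS = {"deal_size_k": -0.02, "buyer_meetings": 0.4, "days_open": -0.03}
BIAS = 0.5

def win_probability(deal):
    """Logistic score over the deal's numeric inputs."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in deal.items())
    return 1 / (1 + math.exp(-z))

def sensitivity(deal, field, delta=1.0):
    """Change in predicted win probability when one input moves by
    `delta`, all other inputs held fixed."""
    bumped = dict(deal, **{field: deal[field] + delta})
    return win_probability(bumped) - win_probability(deal)

deal = {"deal_size_k": 50, "buyer_meetings": 3, "days_open": 40}
for field in deal:
    print(field, round(sensitivity(deal, field), 4))
```

A readout like this never exposes the algorithm itself, but it lets a buyer see which inputs push a prediction up or down and by roughly how much – the kind of basic dialogue the paragraph above calls for.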
Until competition or a tightening market compels changes in analytics providers’ messaging, it will ultimately require the buyers of these products to demand answers to basic questions. “How does it work” seems like a good place to start.
Jim Dries is the CEO of piLYTIX, a provider of predictive analytics solutions for unique sales organizations.
“I know my salespeople.”
On the surface, this simple four-word sentence sounds pretty innocuous, doesn’t it? It certainly doesn’t sound like something that should cause alarm.
But that’s exactly what it does for us. When our account leaders hear this from a sales manager that they are tasked with serving, we know that we are in for a bumpy start to our relationship. When we hear this during a sales call with prospective clients, we instantly know that our own sales job just got a lot harder.
We recognize that the nature of our company likely causes knee-jerk reactions for some managers. Our company systematically dissects companies’ sales data to find the most impactful drivers of success and failure in their sales organizations. Individual rep skills, tendencies and biases always factor into our calculus. Absent a deeper understanding of our mathematical approach and our mission to truly complement their skills as managers, natural defense mechanisms kick in for a few managers. For these managers, “I know my sales reps” is just a more dignified way of saying “Bugger off eggheads! I have worked hard to get my team this far and your team of math geeks will never know more about my reps than I do.”
We take no offense to the characterization and some managers quickly drop their defenses. Others dig their heels in.
Our technical output allows sales managers to see very clearly all of the factors that make individual opportunities more likely or less likely to close. The factors might include things like the price of the deal, a buyer’s history, the product or products being sold, age of the opportunity, velocity at which the deal progressed through various stages, activities, or engagement data that has been captured by other technologies. For tenured reps, the output is heavily influenced by the individual rep’s historical performance in each of the areas. It also includes a recorded history of how the rep feels about a deal’s likelihood to close.
We have come to learn that this last point – the rep’s feelings – are usually what sales managers are attaching to when they say “I know my salespeople.” Tenured managers develop this sense over time on their periodic pipeline reviews or forecast calls. After these calls, the manager will speak with a Sales V.P. and say things like:
“Bob is perpetually optimistic, so I am highly doubtful that he is going to land this whale.”
“Diane is a sandbagger. All her deals are showing that they are in early stages, but she always comes through in the end, so I will just put her at quota.”
“Ed’s committed deals never seem to be on time, but they always come in, so I will put half of them this month and half next month.”
As it turns out, many of these managers are directionally correct. So what’s the big deal then?
These generalizations, based on solid observations over time, in all likelihood will help achieve marginally better sales forecasts. Sales forecasting is an important component of many sales managers’ jobs. However, it is never the primary role for which managers were hired. Sales managers’ first responsibility must be to help drive sales for their organizations. We find that most managers who continue to express some form of “I know my salespeople” inhibit their own ability to drive sales.
Let’s use the above example of “Bob the Optimist” to illustrate this point:
For argument’s sake, let’s say that we can quantifiably prove that Bob aggressively counts his chickens before they hatch. That doesn’t make him a bad sales rep; it makes him a bad forecaster. Presumably, he still wins contracts, or we would be talking about an ex-employee named Bob instead of his forecast. So all we really know about Bob is that he is good enough not to get fired and that he regularly has disappointing surprise losses.
Bob doesn’t know why he is losing these deals. When he embarrassingly commits deals that he doesn’t win, Bob is doing the sales equivalent of spiking the football on the 1-yard line. His manager, who presumably was hired to help ensure Bob’s success, doesn’t know why these deals are losing either, or he would have saved Bob some embarrassment and made sure that Bob understood the obstacles reducing each deal’s likelihood of closing.
Instead of patting himself on the back for his grasp of the obvious trend that Bob overcommits deals, the sales manager would better serve his company by striving to understand where Bob’s disconnect lies. Bob does win some of his committed deals. Both Bob and his manager need to understand the fundamental differences between the deals that win and the deals that end in surprise losses. Is there a price point where Bob begins to struggle? Are there certain products that Bob is less effective selling? Are there certain types of prospects that Bob can’t seem to close? Are there certain activity metrics that might foreshadow success or failure? Are there seasonal buying habits that Bob is oblivious to? Is Bob using the wrong content or the wrong communications tools for certain types of deals? What other stories are hidden in the sales data?
Bob’s manager – who says he knows his reps – rarely knows any of these answers. And he certainly can’t conceptualize that all of these elements (and more) come into play on every deal. Nor does he understand that the signals may have different levels of strength on each deal, since no two deals are identical and no two reps capture sales data identically.
In many cases, while Bob was celebrating the win that never happened, his manager could have helped him shore up weaknesses in the deal. Maybe this deal had three or four specific and addressable weaknesses. Perhaps there was data that would have shown that our prospect wasn’t as engaged as Bob might have thought. Bob’s manager could have asked pointed questions to truly assess the prospect’s position. Maybe Bob struggles selling to a certain industry. The manager could have paired Bob with a colleague to serve as an industry expert or ensure that proven industry-specific content was made available to Bob. Maybe Bob is our best sales rep for small deals, but simply can’t close a deal over a certain price threshold. His manager should insert himself into negotiations so Bob can learn how to ask for more money. Maybe Bob is not seeing as much benefit in a new sales technology as his peers. The company spent a lot of money on the technology and has every interest in ensuring reps understand best practices for extracting maximum value.
In other cases, poor Bob will spend months chasing an opportunity as his top priority that was virtually doomed from the beginning. He will work day and night to close a deal that has hidden roadblocks. He will give the manager enthusiastic reasons for exerting so much energy on these deals. Those reasons may be totally valid. Bob may even be leaning on mathematically provable deal predictors that have been present in other deals that he won. But Bob’s inability to understand all the predictors of win and loss prevented him from understanding the deal’s inherent weaknesses. The time that Bob wastes on these deals comes at the direct expense of other deals in his pipeline that were mathematically more likely to win.
Unfortunately for Bob and his manager, it’s almost never just one data point that predicts whether a deal will win or lose. Bob happens to be a human being who makes decisions partly based on emotion or gut feelings. We all do. We all have blinders and biases that prevent us from making the right decision. When this happens, we need someone to get us back on track. In sales organizations, that job belongs to sales managers.
When sales managers – who also happen to be human beings – can’t acknowledge their own blind spots, they are rendered incapable of helping their teams achieve the best possible results. By failing to understand what drives some of Bob’s deals to succeed and others to fail, Bob’s manager can’t give tailored coaching. The end result is that Bob will never achieve his full potential and our company will lose deals that it should have won. But Bob is only one rep. If a company has 10 or 100 or 1,000 salespeople, the aggregate impact of arrogantly “knowing our salespeople” is catastrophic.
Jim Dries is the CEO of piLYTIX.
There was a time not too long ago when sales ops positions were viewed by executive leadership teams as a needed expense to keep the trains running on time. Too often, though, sales ops professionals lacked the organizational support to think in bigger terms of the wider strategic impact they could have on the rest of the organization – marketing, finance, and corporate strategy. The big data revolution, accelerated by advances in technology and business schools’ focus on “data-driven decision making,” has rapidly changed the profile of sales ops leaders. More often than not, today’s sales ops leaders have a sharper quantitative focus, and they are seen as having a more integral role in the C-suite.
Exactly as expected with this newly refined profile of sales ops leaders, a focus on better sales data has taken center stage. However, at company after company, we see a common mistake repeating itself: too many sales ops leaders and their marketing counterparts are simply equating “more data” with “better data.” And, in fairness, more data very often is better. Since most CRM systems are so easy to modify, we see incredibly elaborate customizations in which sales reps can enter sales data into several dozen fields.
However, in seeking so much feedback from field reps, too many practical realities of 21st-century B2B sales are being ignored. Ask yourself this: have you ever heard a good sales rep ask the following question? “What do you want me to do, close deals or enter data all day?” Whether the question is fair or not is irrelevant. Of course we want reps to close business AND comply with company CRM standards. But before we brush off the combative rep, perhaps it is worth examining our CRM expectations.
At piLYTIX, we closely monitor CRM usage stats. Remarkably, we have found an inverse relationship between the number of added custom fields and the level of rep CRM usage. Hidden deals, surprise short term closes, and clear “sandbagging” indications tend to be highest at the organizations that have asked reps to enter the most fields of data.
While our clients benefit from specific recommendations for CRM adaptation, we encourage all senior sales ops leaders to keep the following in mind when crafting their data policies:
Take the time to educate sales reps on how they will directly benefit from complying with your CRM standards. Hint: if you can’t convince sales reps of what’s in it for them, you will never solve your data collection problems.
Focus on those fields that directly speak to the most important priorities of stakeholders throughout the organization.
Learn which fields correlate with deal success (or failure) and ensure that there is focus on those fields. Don’t assume that these fields will be identical from one company to the next.
Recognize that while sales reps are tremendous sources of market intel, they are not professional market researchers. What information is better collected via full time professional market researchers?
If you use third-party prospect engagement technologies, ensure that the quantitative outputs of those systems are integrated into your sales data. If your sales rep is doing the hard work of selling and you have paid a vendor to track open rates, phone meeting time, or web conferencing data, then it seems reasonable that you would want that technology seamlessly integrated.
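The point about learning which fields correlate with deal success can be sketched very simply: compare the win rate of deals where a field was filled in against deals where it wasn’t. The CRM export and field names below are invented for illustration, and note that this is a naive correlational check, not proof that filling a field causes wins.

```python
# Hypothetical CRM export: which optional fields were filled in on each
# deal, and whether the deal won. Field names are illustrative only.
deals = [
    {"filled": {"competitor", "budget"}, "won": True},
    {"filled": {"budget"},               "won": True},
    {"filled": {"competitor"},           "won": False},
    {"filled": set(),                    "won": False},
    {"filled": {"budget", "champion"},   "won": True},
    {"filled": {"champion"},             "won": False},
]

def win_rate(subset):
    return sum(d["won"] for d in subset) / len(subset)

def field_lift(field):
    """Win rate when `field` was filled minus win rate when it wasn't."""
    with_field = [d for d in deals if field in d["filled"]]
    without = [d for d in deals if field not in d["filled"]]
    return win_rate(with_field) - win_rate(without)

for field in ("competitor", "budget", "champion"):
    print(field, round(field_lift(field), 2))
```

In this toy data, only the “budget” field shows any lift, which would argue for making that field the focus of rep compliance rather than demanding all of them equally. A real analysis would, of course, need far more deals and controls for confounders; the point is that the answer differs from one company to the next and has to be measured, not assumed.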
They hated our website. We launched it anyway.
In late October, we sent a preview link of our new website to a trusted group of twenty advisors. They were friends and family of our leadership team, investors and clients. Smart people. Accomplished business leaders. All of them.
The note was sent by the CEO and had a decidedly friendly tone. We simply referenced our relationship and mentioned that we would value any feedback they could give us on our soon-to-launch website.
Within a week, we received 11 responses.
While we weren’t fishing for compliments, we expected some level of warmth. After all, we had a personal relationship with each person that received the note. The responses we received were akin to that brutal honesty that only a best friend can deliver. We’ve all been on the receiving end of this kind of unfiltered, but ultimately loving advice (e.g. “Yes, that shirt does make you look fat” or “No, I don’t think that she’s ‘the one’ for you.”) Hearing this kind of feedback stings, but you need to hear it and ultimately you are humbled that someone would care enough to shoot you straight.
The closest thing we received to a positive response was, “It’s not terrible.” Aww. I’m blushing.
No one was impolite, per se, but there were three recurring themes.
You forgot to devote a section to your products! A website is your best sales tool. I wouldn’t buy a car that I couldn’t see a picture of first. You have a great product. Show it on every page.
You took most of the other information about your company out of the site too. You have a cool story. Tell it!
What the hell are you doing with your FAQ page?!? Nobody will read it, but you need to tone it down anyway. You can’t be rude to a sales prospect and you are going to alienate future employees.
We probably expected some of that.
We are so appreciative that we have loyal advisors who genuinely care about our success. We know that they don’t have time to give responses that are going to be ignored. They need to know that we seriously considered every word that they said.
In the end, however, we ran the other way. Not only did we buy that unflattering shirt, we bought the whole rack. We didn’t just keep seeing that wrong person, we got married. Prenup? Forget about it.
We launched without making a single edit. Here’s why:
Not only is piLYTIX a developer of sales technology, but we are also voracious consumers of other sales technologies. As such, we are bombarded with sales pitches on a daily basis. While we can empathize with the difficulty of attracting an audience, we simply don’t have time to listen to every pitch.
In holding the mirror to ourselves and examining our own purchase habits, we recognized that we buy from companies that focus more intently on our challenges than their solutions. We have typically done some level of homework on vendors before we ever interact with their sales teams. The companies whose products we purchased always understood our challenges. Their sales people, marketing collateral and websites all spoke directly to these challenges. Conversely, we found that we have wasted entirely too much time listening to sales pitches from other companies whose products were ultimately viewed as “nice to haves” but not “need to haves.”
Just like other companies wasted a lot of time selling mismatched product to us, we have also wasted too much of our own time selling to the wrong audience.
So instead of talking about ourselves, we devoted most of our relatively minimalist website to talking about our clients. We talk about their challenges. We talk about their industries. We talk about other unique traits that make their challenges hard to solve. Not in our words, but in theirs. We also proactively answer questions that come up on nearly every sales call. Instead of tiptoeing around the answers with typical marketing speak, we simply answer the questions in the bluntly honest language that we use amongst our own colleagues when clients and sales prospects are not in the room. Our basic belief is that if we have to avoid the truth with a sales prospect, we will probably have a painfully difficult time serving them as clients.
We don’t agree with the advisors who believe that our website is our best sales tool. With sincerest respect to the advisor who told us that she wouldn’t buy a car without seeing a visual, we are not selling a commoditized consumer product to a mass audience. We are selling complex solutions to very specific buyers. We typically speak to no fewer than five people representing different stakeholder groups in deals that we win. We believe it to be a fool’s errand to build a website that attempts to speak to all of their individual concerns. That is why we pay our talented sales reps, whom we believe to be our most valuable sales assets.
Quite simply, we want our website (www.pilytix.com) to start a discussion with the right audience.
A cold shiver went up my spine when I reviewed the call notes that an account manager had taken during a recent client onboarding meeting. Buried deep in the notes was a quote made by a well-respected head of sales operations at a large global software company.
“At the start of last year, we instituted a strict policy of managing to clearly defined activity metrics. As a result, we experienced a 15% sales increase.”
I took an action. The team had success. Therefore, this specific action caused success.
As a proud data geek, I am horrified by this because it implies a lack of understanding of correlation versus causation by someone who should know better. (For a quick primer on the difference between the two, @KirkDBorne recently tweeted a brilliantly simple explanation: twitter.com/KirkDBorne.)
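The trap is easy to reproduce. In the minimal, deterministic sketch below (all numbers invented), both activity and sales are driven by a third factor – rep skill – so the two series correlate almost perfectly even though neither one causes the other:

```python
# Both series are functions of a hidden third factor ("skill"), so they
# correlate strongly without any causal link between them.
skill    = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
activity = [2 * s + (s % 2) for s in skill]   # driven by skill
sales    = [5 * s for s in skill]             # also driven by skill

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson(activity, sales), 3))  # very close to 1.0
```

A manager who saw only the activity and sales columns could easily conclude that more activity produced more sales, when in this construction it provably did not.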
More importantly, I have had a front row seat to this same movie more times than I can count. It always has a sad ending.
It’s not uncommon for business leaders to point to their innovations as “the” drivers for team success. A little self-promotion is needed in some organizations to navigate around sticky political situations or climb the corporate ladder.
In this case, however, the SVP of Sales had hired our company to help his team achieve greater success through the application of advanced analytics. This was not self-promotion. He believed his story and encouraged us to believe that this change was the only factor in the year-over-year improvement. Nothing else had changed, he claimed. Further, he pointed to an incredibly powerful statistic: His own data analysis showed that tracked activities (phone calls, opened opportunities, meetings, emails) in aggregate did increase substantially. Fifteen percent to be precise – the exact level that sales increased by.
This statistic is powerful, but ultimately misleading.
It took a data scientist on our team less than an hour to poke holes in the sales leader’s theory that the company’s new management style caused the increased sales levels. We have seen many companies fall into this trap of over-reliance on activity metrics before so we knew how to test the hypothesis that the 15% increase in activities directly led to the 15% uptick in sales.
As a first step, we divided the sales reps into quartiles based on gross sales. We also divided the sales reps into quartiles based on win rates. Not surprisingly, most reps landed in the same quartile in both rankings (reps with the highest gross sales tended to be the reps with the highest win rates).
When we looked at activities and results in these more granular buckets, things quickly became interesting: there was only a negligible increase in activities for the top two quartiles (less than 2%). The bulk of the activity increase came from the bottom half of sales performers; the bottom quartile saw a whopping 34% increase in activity levels. However, when we looked at actual results (gross sales and win rates), the picture flipped upside down: the top half of performers, whose activity levels barely changed, saw the greatest increase in win rates and gross sales.
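The quartile check above takes only a few lines. The rep figures below are invented for illustration (the real analysis used client CRM data we cannot share): each rep is a pair of gross sales and year-over-year change in logged activities.

```python
# Sketch of the quartile analysis: rank reps by gross sales, split into
# four equal buckets, and compare average activity change per bucket.
# All rep numbers are hypothetical, chosen to mirror the pattern described.
reps = [
    (900, 0.01), (850, 0.02), (820, 0.00), (800, 0.03),   # top performers
    (700, 0.02), (680, 0.01), (650, 0.04), (600, 0.02),
    (500, 0.10), (480, 0.12), (450, 0.15), (420, 0.18),
    (300, 0.30), (280, 0.35), (250, 0.34), (200, 0.38),   # bottom performers
]

# Rank reps by gross sales, then split into four equal buckets (quartiles).
ranked = sorted(reps, key=lambda r: r[0], reverse=True)
size = len(ranked) // 4
quartiles = [ranked[i * size:(i + 1) * size] for i in range(4)]

for i, quartile in enumerate(quartiles, start=1):
    avg_change = sum(chg for _, chg in quartile) / len(quartile)
    print(f"Quartile {i}: average activity change {avg_change:.1%}")
```

With data shaped like the client's, the printout shows nearly flat activity in the top quartiles and a large jump in the bottom one, which is exactly the mismatch that undermines the "activities caused the sales increase" story.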
Clearly, something other than the new activity based management approach was driving the sales increase.
In this fairly standard case, we see evidence that the bottom half of the sales organization is pumping the pipeline full of questionable opportunities and logging more calls and emails for these questionable opportunities in order to avoid negative consequences. The reps have done as instructed, but there is not a corresponding focus on the quality or validity of the opportunity or the interaction. Moreover, managers who are focusing on the activity tally for their reps (because they are being evaluated on their implementation of the strategy) are less likely to focus on the viability of the opportunity or the quality of the interaction.
This sales leader made a common mistake in interpreting data. More importantly, when companies focus their sales strategies exclusively on metrics associated with quantity of activities they are bound to be disappointed. Here’s why:
• We expect to see lower-performing reps entering more unqualified opportunities and closing at lower win rates. More time is wasted on fudging bad data. Less time is focused on expanding skill sets.
• Higher performing sales reps tend to have a laser focus on achievement: hitting their sales targets, getting the next sale, achieving the next rung in the commission ladder. They understand how to sell and loathe being micromanaged. Focusing on activities fixes a problem that doesn’t exist.
• Even the best reps have weaknesses. If sales management has made the investment in coaching reps, that effort should include identifying and coaching to those weaknesses. These weaknesses might only be microscopically visible in some combination of other predictive variables like: a certain type of account, size of the prospect, deal size, type of meeting, funnel stage, deals in which a certain competitor is involved, industry of the prospect, or product(s) being sold.
• The data might show that a rep’s aggregate sales activities are high, but the frequency, kind, or quality of those interactions may be what actually predicts the final outcome. Some combination of several individual weaknesses may be a blind spot for both the rep and the manager. Coaching to activity metrics typically means coaching to metrics designed for the whole team instead of coaching to the individual rep’s most impactful activities or behaviors.
• Coaching to average metrics takes significant management focus and prevents a search for all the other predictors of success to which managers can coach.
None of this should suggest that we don’t value activity metrics as an important part of the sales process. However, activities, like anything else, need to be considered in the context of all the other sales data when implementing sales strategies and interpreting results. Activities may sometimes correlate with success, but do not confuse that correlation with causation. Organizations that measure the quality of interactions between reps and prospects, as opposed to just the quantity of those interactions, are much better positioned to understand the drivers of sales success and failure.
Postscript: After this article was first published in an industry blog several readers wanted to know what led to the 15% growth rate that the sales leader quoted. Despite the sales leader’s initial claims that nothing else had changed, there were more subtle moves that appear to have had some impact. These moves involved changes to sales territories and comp plans. This was also the company’s 4th consecutive year with a double digit growth rate.
Jim Dries is the sales rep in chief and head data geek for piLYTIX.
This intervention is long overdue.
You know you have a data problem but haven’t fixed it. You know that data holds power and that there are tremendous benefits to getting this problem under control. You know that you are losing sales because of your inability to derive insights from your data, and that your company needs to learn from its own sales data in order to stay competitive. But your data challenges run deep, and it would take serious effort to get your data cleaned up. Given the perceived time commitments, you feel that you can’t make this a priority.
However, if we acknowledge the root causes of your data difficulties, the most efficient path forward will become easier to find:
You Inherited It
Perhaps you arrived at a company that is younger and hadn’t yet established rigorous data collection standards – or had made frequent changes to how it collected data. Maybe it’s an older company that fell into bad habits. Maybe your predecessor failed to see the benefit of basic CRM hygiene – or wasn’t able to convince a diverse sales team of its benefits. Maybe there have been changes because of new reps or expanding products or markets and data quality standards were put on the back burner.
Organizational Support Lacking / Unrealistic Expectations
You were hired to help grow sales immediately. Not next month, next quarter or next year. Any data project will take time and won’t be reflected by a revenue increase on this quarter’s income statement.
You have a few people or teams who don’t enter sales information until immediately before deals close. Or others who downplay the likelihood of deals closing.
Inferior Old CRM
You are planning a move to a new CRM system and will focus on quality then.
These root causes, however, are becoming excuses. You need to take ownership of the problem. Your revenue team has the most direct line of sight to the market. Ensuring your company’s maximum growth requires constant interpretation of market feedback.
If you are going to finally fix your data problem, you need an airtight plan. As you build that plan, consider the following:
Consider all stakeholders’ needs – most importantly your sales reps and managers. Most sales professionals are competitive by nature and financially motivated. When they are sloppy with their data habits it’s because they don’t see the upside in doing it any other way. Be ready to answer the question of “what’s in it for me” early and often. Tap into the individual, selfish motivations that define each rep and each manager. Your data quality plan has to connect the dots for the reps in such a way that shows them a path to more sales. (For more thoughts on sales teams’ motivation, see: http://pilytix.com/blog/3-Reasons-Your-Sales-Team-Shuns-Sales-Technology)
Blaming sandbagging reps in your plan will get you nowhere. Sandbaggers exist for two reasons: 1) The failure to answer the aforementioned question “what’s in it for me” and 2) selective outrage depending on results. Reps who hit their targets are usually given a free pass while those who don’t hit their numbers are reminded of their data habits as an item on a laundry list of behaviors that need to be improved. Ask yourself, has inconsistent messaging ever resulted in universal adoption of…anything?
“Perfect data” should never be a goal. Many organizations that see the value in data collection have ignored the practical realities of selling in the 21st century. They have built and modified their CRMs in ways that require sales reps to fill out too much data. This quest for perfect data ultimately prevents companies from getting any insightful data. (For more thoughts on this common misstep, see: http://pilytix.com/blog/more-vs-better-data).
Before you make any changes, start by learning which of the fields of data actually have the potential to provide useful insights about your reps or your sales opportunities. And don’t trick yourself into believing that all sales data is equally valuable.
If a major technical overhaul is required, assume that it will take longer than expected. Behavioral changes cannot wait for the completion of the new technical infrastructure. The idea that good habits can wait only ensures that the timeline to ultimate success will be indefinitely extended. When the plan is complete, its benefits need to be communicated throughout the organization immediately.
Measuring, Monitoring and Correcting:
As your data improves, sales leadership needs to stay vigilant or bad habits will creep back in. Part of this vigilance is measuring adoption of the plan. Every other major corporate expense is typically scrutinized and assessed for effectiveness, yet we rarely see sales leaders measuring the usage of their single largest technical expense – the CRM system. If your messaging has been effective and your expectations are realistic, sales reps and their managers should want to ensure quality data. There will always be exceptions, though. Sales leaders who measure individual rep CRM usage can take much more effective corrective actions with the outlier individuals and prevent individual habits from becoming systemic challenges.
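Measuring individual rep CRM usage can be as simple as tracking the share of key opportunity fields each rep actually populates. A minimal sketch follows; the field names and records are hypothetical, not taken from any particular CRM schema.

```python
# Hedged sketch: per-rep CRM "completion rate" — the fraction of key
# opportunity fields a rep actually fills in. Fields and records are
# hypothetical examples, not a real CRM export.
KEY_FIELDS = ["close_date", "deal_size", "next_step", "competitor"]

opportunities = [
    {"rep": "Ana", "close_date": "2019-06-01", "deal_size": 50000,
     "next_step": "demo scheduled", "competitor": "Acme"},
    {"rep": "Ana", "close_date": "2019-07-15", "deal_size": 20000,
     "next_step": None, "competitor": "Acme"},
    {"rep": "Ben", "close_date": None, "deal_size": None,
     "next_step": None, "competitor": None},
]

def completion_rate(records):
    """Fraction of key fields populated across a rep's records."""
    filled = sum(1 for r in records for f in KEY_FIELDS if r.get(f))
    return filled / (len(records) * len(KEY_FIELDS))

# Group opportunities by rep, then report each rep's completion rate.
by_rep = {}
for opp in opportunities:
    by_rep.setdefault(opp["rep"], []).append(opp)

for rep, records in sorted(by_rep.items()):
    print(rep, f"{completion_rate(records):.0%}")
```

A simple per-rep score like this is usually enough to spot the outlier individuals before their habits become systemic.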
Jim Dries is the CEO for piLYTIX. He is immediately suspicious of sales managers who claim to have excellent sales data.
piLYTIX regularly receives calls from business leaders in desperate need of forecasting help. The calls typically spike right after corporate earnings are announced. Panic has already set in at companies that recently posted disappointing results. Our original business offering – simple as it was – provided detailed sales forecasts for large companies who needed forecasting help because of missed targets or disconnects between sales, marketing, operations and finance.
In analyzing client data to assist with forecasting, we quickly learned that missed sales targets are largely symptoms of crippling diseases. And like patients whose doctors treat only their symptoms, companies that did not address their underlying challenges would always be at risk of missing forecasts.
These challenges broadly align to four core competencies. Companies that understand these four competencies are better positioned to hit their forecasts. More importantly, they are better positioned to close more business and save time, effort and money in the process.
Opportunity Management: Most organizations struggle to understand the driving forces that make deals more or less likely to close or they lack the mechanisms to enable their sales talent to recognize these forces. Missed sales forecasts are usually accompanied by pipelines that yield many disappointing surprises. Companies that can analytically interpret the data about their opportunities without relying on rep intuition will have fewer surprises. Likewise, when reps’ blinders are removed and they can see their deals’ underlying weaknesses, they are much better positioned to take corrective measures. In some cases, they may choose to focus their energies on opportunities that are statistically more likely to close and spend less time on deals that have little chance to close.
Talent Management: Coaching and training programs tend to be based on average profiles or a couple of traits of past top performers rather than the quantifiable strengths and weaknesses of each individual rep. As a result, too many organizations rely on a “star system” in which tenured reps receive better opportunities or more territory. Simply stated, too many tenured reps are successful simply because they are tenured. This often comes at the expense of more recent hires who might be more capable of closing specific types of opportunities than the tenured reps. The outcome: lost revenue or, at a minimum, significant inefficiencies in the sales organization.
Data Quality: Too many companies blindly accept as fact that more data equals better data. They mistake quantity for quality. These organizations often attempt to implement major data capture policies where enforcement causes friction; reps want to be selling, not acting as data entry clerks. Other companies are paralyzed by the “garbage in, garbage out” mantra. They assume that if it’s not great, then it must be garbage. If it’s garbage it will take time and money to fix it. Time and budget tend to be in short supply, so they kick the can down the road. Presumably, they are waiting for the day when they won’t have short term sales emergencies or when their reps magically begin uniformly entering data. Neither of these situations is ideal but managers often don’t have the time, resources or know-how to properly address data collection and quality. Consequently, bad habits persist making long term predictability (and sales success!) challenging.
Pipeline Health: Traditional metrics of pipeline health are directionally helpful, but too many organizations over-rely on these metrics and struggle when conditions change. For example, some managers will assume that if they hit their target last year and this year’s target increases by a certain percentage, they just need to ensure that the number of opportunities in their pipeline this year increases by the same percentage. Without a firm understanding of the quality of the opportunities in the pipeline and the capabilities of the individual reps that make specific deals more or less likely to close, relying on this type of oversimplified metric tees these organizations up for disappointments.
Poor opportunity management, mismanaged and misaligned talent, poor data quality and an unhealthy pipeline all are underlying diseases that should be cured to heal forecasting accuracy. The solution to all of these challenges begins with a dedicated organizational focus to capture, interpret and act on the stories in your sales data.
Jim Dries is the sales rep in chief and head data geek for piLYTIX.
Your company has just made a big investment in a new sales technology. Maybe you were the new product’s champion or you gave final approval for the expenditure. You know that this should have a major impact on sales. And now, no one seems to be using it. Your frustration and embarrassment grows with each passing day.
You’re not alone.
The sales world descended upon San Francisco last week for Dreamforce, Salesforce’s annual product celebration for its throngs of loyal users, developers, employees and consultants.
Hundreds of companies that live in the Salesforce ecosystem were there too, and some of these companies were pitching truly innovative solutions to the myriad challenges that sales leaders face. And yet in discussion after discussion we ran into sales leaders who repeatedly lamented the fact that they can’t seem to coax their sales reps and managers to effectively use the technology that they already have.
We’ve been doing this for a while. If you are one of these leaders, here are a few of the reasons you’re struggling:
You haven’t addressed the only question that matters for most sales reps.
Let’s all stop with the nonsense that “reps should do what I tell them to do if they want to keep their jobs.” The best case scenario when you take this approach is that your reps will nod and smile and pretend to play along or do the bare minimum required to not get fired. They will not be inspired to extract value from your new sales tool. They won’t be inspired period.
Let’s try a different angle, no? Let’s acknowledge who we have hired. The best sales people tend to be smart, competitive, financially-driven and self-motivated. Whether you like it or not, the first question that they will ask is “What’s in it for me?” So tell them. Show them. Just be sure that the answer includes an obvious nod to the things that they care about: closing more business, making more money and climbing the leaderboard (or retaining their position at the top). If you can’t make these arguments to yourself, there is no chance you can get buy-in from your end users.
You haven’t proven that there is something in it for them.
Telling reps and managers how they will benefit is a good start, but it isn’t enough. You need to offer some proof and unfortunately, you alone are not the best positioned to convince the rank and file members of your sales team. Just like your sales prospects are more likely to buy based on the recommendation of a trusted confidant, your reps are more likely to follow the guidance of their colleagues who are in the field selling.
The smoothest technology implementations result when senior leadership enlists a handful of successful reps and mid-level managers to serve as internal “beta testers.” Ensure that they understand the cachet associated with being selected for this group. Take extra time with these reps to ensure that they understand their personal upside. Deputize them to help you sell it to their colleagues. They’re good sales people, after all. Prepare them to buy credibility with their colleagues by airing and addressing contrarian positions before the wider team launch. When you do roll it out to the wider team, avoid a monologue and instead guide a discussion amongst the beta testing group.
You actually bought a clunker.
Unlikely. You’re too smart for that. However, if the rep who sold you the useless technology did a disservice to the noble profession of sales and snookered you into a bad deal, circle back with someone higher in that rep’s organization. You would fix this if it happened in your own sales organization; most companies will do the same.
Jim Dries is the sales rep in chief and head data geek for piLYTIX.
FOR IMMEDIATE RELEASE
Contact: Anne Broeker [firstname.lastname@example.org] 414-217-2173
piLYTIX adds four executives
Early-stage predictive analytics company growing to meet client demand
Austin, Texas (June 8, 2016) – Early-stage predictive sales analytics company piLYTIX has boosted its team with the addition of four vice presidents.
“Businesses are finding untapped opportunity and hidden risk in their sales data via predictive analytics,” said Jim Dries, CEO of piLYTIX. “The addition of these talented leaders will strengthen our ability to serve sales leaders across many industries who struggle to understand their sales opportunities, their sales reps, and their sales forecasts.”
Hendrik Kits van Heyningen – Vice President, Data Science
In his role as vice president of data science, Hendrik Kits van Heyningen oversees all aspects of the predictive modeling and analytics that power the piLYTIX software product. His career began as an engineer at KVH Industries, Inc. (Nasdaq: KVHI), where he worked on research and development for inertial navigation systems, inventing and testing a novel approach to magnetometer calibration. Recently, he was employed at Analytics Operations Engineering, Inc., where he worked on scheduling and pricing optimization projects before ultimately taking leadership of the data science team that had developed the original piLYTIX models.
An accomplished musician, Kits van Heyningen has performed as a pianist at Carnegie Hall, and he served as Music Director for the Yale Davenport Pops Orchestra.
Kits van Heyningen graduated summa cum laude with degrees in mathematics and physics from Yale University.
Julia Jacobs – Vice President, Web Engineering
Julia Jacobs is an industry veteran with over 20 years of Web Application Engineering experience working for Fortune 500 companies. Julia co-founded and served as CTO for digital media agency Currant Media working with clients like Turner Construction, Marriott Vacation Club International and other major hospitality industry companies. More recently, she has held senior development positions at diverse multinational companies including Home Depot Supply, Disney, AT&T Labs and Rackspace.
Outside the office, Jacobs serves as an organizer of a weekly pair programming meetup. She lives in Austin, TX with her husband, two sons and dog Rudy. Jacobs attended New York University.
Marcelo Labardini – Vice President, Development Operations
Labardini brings more than 15 years of experience in building highly scalable technology platforms from early stage startups to enterprise level corporate environments. Prior to joining piLYTIX, he held engineering roles at IBM, Spredfast, BMC, Boundary and most recently Blackboard.
Labardini is an enthusiastic lifelong learner who remains hands-on writing code, attending conferences and engaging with the Austin tech community. Labardini holds degrees in biochemistry and computer science from University of Texas - Austin.
Blake Glatstein – Vice President, Online Solutions
In his promotion to vice president, Blake Glatstein oversees client online solutions from first impressions to user success for piLYTIX. The third employee of piLYTIX, Glatstein has held roles in product development, sales and marketing and general management. He began his career at CEB in Washington, D.C. and quickly moved into sales leadership roles at Ajilon and Dallas Medical Supply.
Glatstein has a bachelor of science from Washington University in St. Louis. A serial entrepreneur, he founded, grew and sold his first company while still a student at Washington University.
piLYTIX was founded in 2014 to help companies accurately understand where sales opportunities exist. Absent sophisticated data analytics tools, sales leaders have historically viewed questions about their sales opportunities, their sales reps, and their sales forecasts as a series of artistic judgement calls. The piLYTIX team of data scientists see these issues as a series of complex, but eminently addressable, mathematical challenges. Through our work with dozens of leading companies in their fields, piLYTIX designed a series of advanced algorithms to provide sales leadership with phenomenally precise sales forecasts, and deep insights into the opportunities and the sales reps contributing to the forecast. Using piLYTIX models to dissect each opportunity and eliminate the human biases injected into the CRM system by sales reps, we are able to identify opportunity-specific levers that can be pulled to maximize a company’s revenue capture.
A couple of weeks ago, the Oklahoma City Thunder roared to a shocking three games to one lead in their best-of-seven NBA Playoff series against the Golden State Warriors. While a few loyalists held out hope that their team could come back and sweep the final three games, the rest of the sports world wrote the Warriors off. Media pundits cited the highly unlikely statistical probability that the Warriors would be able to come back.
It happened again last week. The financial world held its collective breath in the minutes leading up to Apple’s release of its quarterly earnings statement. Shortly after 4:00 EDT on Tuesday afternoon, the announcement came. Five minutes and a couple disappointing numbers later, Apple’s stock dropped 9% in after-hours trading. It only took these five minutes for more than 52 billion U.S. dollars of market cap to disappear.
Several months ago, I was approached by a young startup CEO who was offering to pair his company’s analytical tools with ours. The firm’s senior leadership team told me fantastic stories about the power of their company’s proprietary algorithms. I would never need to hire a new data scientist or statistician, they assured me. They confidently threw around adjectives that suggested their models were powerful and unique and sophisticated – though I admit having to consult a dictionary after every discussion.
The world of predictive analytics has become a crowded field. To get an edge, I see players in this field making some fantastic promises about the “proven ROI” that their offerings provide. The claims often sound fairly compelling. An X% increase in sales! A Y% decrease in cycle times! Z% higher price points!
There was a time not too long ago when sales ops positions were viewed by executive leadership teams as a necessary expense to keep the trains running on time. Too often, though, sales ops professionals lacked the organizational support to think bigger about the wider strategic impact they could have on the rest of the organization – marketing, finance, and corporate strategy.